For undergraduate students newly introduced to quantum mechanics, solving simple Schrödinger equations is relatively straightforward. However, the more profound challenge lies in comprehending the underlying physical principles embedded in the solutions. During my academic experience, a recurring conceptual difficulty was understanding why only s orbitals, and not others like p orbitals, exhibit spherical symmetry. At first glance, this seems paradoxical, given that the potential energy function itself is spherically symmetric. Specifically, why do p orbitals adopt a dumbbell shape instead of a spherical one? For a hydrogen atom with an electron in the 2p state, which specific 2p orbital does the electron occupy, and how do the x, y, and z-axes in 2px, 2py, and 2pz connect to the real world? Additionally, is the atom still spherically symmetric in such a state? These questions relate to core concepts of quantum mechanics concerning symmetry, mixed states, and superposition. This paper delves into these questions by investigating this specific case, utilizing the advanced visualization capabilities offered by ChatGPT. This paper underscores the importance of emerging AI tools in enhancing students' understanding of abstract principles.
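The apparent paradox raised above has a classical resolution, Unsöld's theorem: each individual p orbital is non-spherical, but the probability densities of the three 2p orbitals sum to a spherically symmetric distribution. A short derivation using the standard l = 1 spherical harmonics makes this concrete:

```latex
% l = 1 spherical harmonics:
%   Y_{1,0}     = \sqrt{3/4\pi}\,\cos\theta
%   Y_{1,\pm 1} = \mp\sqrt{3/8\pi}\,\sin\theta\, e^{\pm i\phi}
\[
\sum_{m=-1}^{1} \lvert Y_{1,m}(\theta,\phi)\rvert^{2}
  = \frac{3}{4\pi}\cos^{2}\theta
  + 2\cdot\frac{3}{8\pi}\sin^{2}\theta
  = \frac{3}{4\pi},
\]
```

which is independent of both angles. Hence a 2p electron described as an equal-weight mixture of the three orbitals (or a filled 2p subshell) has a spherically symmetric density, even though each orbital alone is dumbbell-shaped.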
Sentiment analysis is a vital task in natural language processing (NLP) that aims to identify and extract the emotional states and opinions expressed in text. In this study, we conduct a comprehensive comparison of large language models (LLMs), such as ChatGPT and Google Bard, with conventional methods in sentiment analysis. We employ a rigorous evaluation framework that covers four essential metrics: accuracy, precision, recall, and the F1-score. Our results reveal that TextBlob outperforms other methods, achieving an impressive accuracy of 69% and precision of 83%. On the other hand, Bard shows a relatively poor performance, with only 39% accuracy and 46% precision. This study offers valuable insights into the diverse capabilities of AI models in sentiment analysis. A key finding of this study is the importance of model selection according to the specific requirements of the task. Each model has its own strengths and weaknesses, which are reflected in their performance profiles. Moreover, the context in which these models operate is crucial. For instance, ChatGPT generates varied responses, Bard struggles with multiple sentences, and Robustly Optimized BERT Pretraining Approach (RoBERTa) balances precision and recall. This study also reveals the performance gap between LLMs and state-of-the-art deep learning methods. We believe this work will inspire future research and applications of ChatGPT and similar AI models in sentiment analysis and related tasks.
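The four metrics named in this evaluation framework can be computed directly from predicted and gold labels. The sketch below (illustrative only, not the authors' evaluation code) shows the standard definitions for binary sentiment labels:

```python
def evaluate(y_true, y_pred, positive="pos"):
    """Accuracy, precision, recall, and F1 for binary sentiment labels,
    treating `positive` as the class of interest."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    correct = sum(t == p for t, p in zip(y_true, y_pred))

    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For example, `evaluate(["pos", "pos", "neg", "neg", "pos"], ["pos", "neg", "neg", "pos", "pos"])` yields accuracy 0.6 with precision, recall, and F1 all equal to 2/3.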
This paper explores the integration of generative AI and large language models into the realm of software engineering education and training, with a specific focus on the transformation of traditional peer assessment methodologies. The motivation stems from the growing demand for innovative educational techniques that can effectively engage and empower learners in mastering software engineering principles. The proposed approach involves presenting students with modeling exercises solved by ChatGPT, prompting them to critically evaluate and provide constructive feedback on the generated solutions. By engaging students in a dialogue with the AI model, we aim to foster a dynamic learning environment where learners can articulate their considerations and insights, thereby enhancing their comprehension of software engineering principles, critical thinking, and self-evaluation skills. Preliminary results from pilot implementations indicate promising outcomes, suggesting that this approach not only enhances the quality of peer feedback but also contributes to a more interactive and engaging educational experience.
ChatGPT, a recently developed product by OpenAI, is successfully leaving its mark as a multi-purpose natural-language-based chatbot. In this paper, we are more interested in analyzing its potential in the field of computational biology. A major share of the work done by computational biologists these days involves coding up bioinformatics algorithms, analyzing data, creating pipelining scripts, and even machine learning modeling and feature extraction. This paper focuses on the potential influence (both positive and negative) of ChatGPT in the mentioned aspects with illustrative examples from different perspectives. Compared to other fields of computer science, computational biology has (1) fewer coding resources, (2) more sensitivity and bias issues (it deals with medical data), and (3) more need for coding assistance (people from diverse backgrounds come to this field). Keeping such issues in mind, we cover use cases such as code writing, reviewing, debugging, converting, refactoring, and pipelining using ChatGPT from the perspective of computational biologists in this paper.
The most fundamental oversight in Artificial Intelligence (AI) is probably the avoidance of conscious learning. The most widespread misconduct in AI seems to be Post-Selection. Avoiding conscious learning, we program humanoid robots that do not have the intent to learn new skills, such as standing up, walking, jumping, speaking, and thinking. Because programmed-in skills are all brittle, we must imitate natural intelligence, from insects to humans, which all learn from consciousness. Worldwide AI researchers and the public have been misled by media hype about false AI performances, from Deep Learning to ChatGPT, rooted in Post-Selection. However, the Post-Selection protocol behind such hype is fatally flawed by alleged misconduct: Misconduct 1, cheating in the absence of a test; Misconduct 2, hiding bad-looking data. In other words, the reported errors are only data-fitting errors, not testing errors. This paper discusses how Conscious Learning is enabled by Developmental Network 3 (DN-3) to learn from its own intents without Post-Selection misconduct. Among many new concepts presented here, this paper establishes a new theorem that the expected error of the luckiest system in a future test is the same as that of any other less lucky system, namely, average. In contrast, DN-3 develops a sole network that is optimal in the sense of maximum likelihood (ML), better than the luckiest system on a validation set. The ML optimality transfers the performance on a validation set in the prior lifetime to a test set in the future lifetime. Many other AI techniques, e.g., symbolic, connectionist, and evolutionary, also use Post-Selection, which lacks such a transfer.
With the rapid development of large language models (LLMs), the quality of AI-generated content (AIGC) is rapidly improving, and the correctness and detection of generated content have become a global challenge. In this paper, we review the current methods for AIGC detection and introduce the definitions, datasets, and methods of AIGC detection, including manual-based methods, rule-based methods, statistical learning-based methods, deep learning-based methods, knowledge enhancement-based methods, and watermarking-based methods. However, these methods have very low recognition accuracy when facing the latest LLMs, such as ChatGPT and GPT-4. This paper also suggests that more work should be put into identifying AIGC quality in the future, such as whether there are logical errors, knowledge errors, or data falsification, which may have more severe consequences and be more likely to be detected.
ChatGPT has demonstrated its potential as a surrogate knowledge graph. Trained on extensive data sources, including open-access publications, peer-reviewed research articles, and biomedical websites, ChatGPT extracted information on gene relationships and biological pathways so that it can be used to predict them. However, a major challenge is model hallucination, that is, high false positive rates. To assess and address this challenge, we systematically evaluated ChatGPT's capacity for predicting gene relationships using GPT-3.5-turbo, GPT-4, and GPT-4o. Benchmarking against the KEGG Pathway Database as the ground truth, we experimented with diverse prompting strategies, targeting gene relationships of activation, inhibition, and phosphorylation. We introduced an innovative iterative prompt refinement technique. By assessing prompt efficacy using metrics such as F1 score, precision, and recall, GPT-4 suggested improved prompts. A refined prompt, which combines a specialized role with explanatory text, significantly enhanced the performance. Going beyond pairwise gene relationships, we also deciphered complex gene interplays, such as gene interaction chains and pathways pertinent to diseases such as non-small cell lung cancer. Direct prompts showed limited success, but "least-to-most" prompting exhibited significant potential for such network constructions. The methods in this study may be used for other bioinformatics prediction problems.
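Benchmarking predicted gene relationships against a ground-truth database like KEGG reduces to set comparison over relation triples. The sketch below illustrates this scoring scheme; the gene names and triples are hypothetical placeholders, not actual KEGG entries, and this is not the study's own evaluation code:

```python
def score_predictions(predicted, ground_truth):
    """Set-based precision, recall, and F1 over
    (gene_a, relation, gene_b) triples."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    tp = len(predicted & ground_truth)  # triples the model got right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical placeholder triples (not real KEGG data):
truth = {("G1", "activation", "G2"), ("G2", "activation", "G3"),
         ("G4", "inhibition", "G5")}
preds = {("G1", "activation", "G2"), ("G2", "activation", "G3"),
         ("G1", "phosphorylation", "G4")}
```

Here two of the three predicted triples match the ground truth, so precision, recall, and F1 all come out to 2/3; a hallucinated relation lowers precision, a missed one lowers recall.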
This chapter proposes four key foundational technologies in the era of the Fourth Industrial Revolution, collectively termed ABCD technologies: artificial intelligence, big data analytics, cloud computing, and digital technology. While the four technologies emerged independently across different time frames, they have become increasingly interwoven over time, driving the advent of new technologies. This chapter delves into the dynamics of their interplay and articulates their critical role in catalyzing a variety of emerging technologies. This study also highlights how their relevance differs across the emerging technologies, as illustrated by three examples: ChatGPT, the Internet of Things (IoT), and blockchains. Moreover, the study provides implications for both firms and policymakers on how to effectively leverage the four foundational technologies and respond to the fast-changing technology environment in the era of the Fourth Industrial Revolution and beyond.
This chapter discusses the impacts of digital technologies on society and on the education sector, reviews e-learning and hybrid learning, looks at learning and training in organizations, highlights the recent development of artificial intelligence (especially ChatGPT) and its impact on education, and comments on the role of government and teachers in education in the digital era.
This chapter presents the background of ChatGPT, discusses the benefits it can provide to higher education, points out its limitations, concerns, and issues, and looks at strategies and success factors for the successful implementation of ChatGPT in higher education organizations.
The rapidly expanding body of published medical literature makes it challenging for clinicians and researchers to keep up with and summarize recent, relevant findings in a timely manner. While several closed-source summarization tools based on large language models (LLMs) now exist, rigorous and systematic evaluations of their outputs are lacking. Furthermore, there is a paucity of high-quality datasets and appropriate benchmark tasks with which to evaluate these tools. We address these issues with four contributions: we release Clinfo.ai, an open-source WebApp that answers clinical questions based on dynamically retrieved scientific literature; we specify an information retrieval and abstractive summarization task to evaluate the performance of such retrieval-augmented LLM systems; we release a dataset of 200 questions and corresponding answers derived from published systematic reviews, which we name PubMed Retrieval and Synthesis (PubMedRS-200); and we report benchmark results for Clinfo.ai and other publicly available OpenQA systems on PubMedRS-200.
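One common way open-domain QA systems of this kind are scored is token-level F1 between the generated answer and a reference answer (as in SQuAD-style evaluation). The following is a minimal sketch of that metric, not necessarily the scoring used for PubMedRS-200:

```python
from collections import Counter

def token_f1(prediction, reference):
    """Token-level F1 between a generated answer and a reference answer,
    using multiset overlap of lowercased whitespace tokens."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

A prediction sharing three of a four-token reference's tokens scores 6/7 ≈ 0.86, while a completely disjoint answer scores 0; real evaluations typically add normalization (punctuation and stopword handling) on top of this core.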
Large Language Models (LLMs) are a type of artificial intelligence that has been revolutionizing various fields, including biomedicine. They have the capability to process and analyze large amounts of data, understand natural language, and generate new content, making them highly desirable in many biomedical applications and beyond. In this workshop, we aim to introduce the attendees to an in-depth understanding of the rise of LLMs in biomedicine, and how they are being used to drive innovation and improve outcomes in the field, along with associated challenges and pitfalls.
Interest in the use of ChatGPT by professionals in the tourism sector is increasing, but acceptance of the tool varies among these professionals, with different factors affecting its acceptance. In this chapter, first, interest in this topic is demonstrated by means of a bibliometric study of related scientific publications; second, a co-occurrence study is carried out on the publications found to determine the most significant variables; and, lastly, from these variables, the factors mentioned at the beginning are derived, thus creating a theoretical adaptation of the technology acceptance model that models how the tourism sector comes to accept and use ChatGPT. With this information, a company in the tourism sector has a theoretical evaluation model to measure the degree of acceptance of its workers toward ChatGPT tools, being able to identify the strongest points and having the opportunity to establish an adaptation strategy addressing the weakest points.