The language model ChatGPT has outperformed students in various subjects, according to recent research. A study by the University of California, Los Angeles (UCLA) found that GPT-3 surpassed college students in analogical reasoning. Researchers at New York University Abu Dhabi (NYUAD) have now examined how well ChatGPT answers exam questions in fields such as computer science, political science, engineering, and psychology, and whether it outperforms the average student.

The study, published in the journal Scientific Reports, had NYUAD instructors each provide ten exam questions for ChatGPT to answer. The AI generated three answers per question, which were then graded by examiners who did not know whether an answer came from the AI or from a student. In nine of the 32 courses examined, ChatGPT’s answers received average grades similar to or higher than those of the students. The largest gap was in the “Introduction to Public Policy” course, where the AI’s average grade of 9.56 far exceeded the students’ average of 4.39. In courses such as mathematics and economics, however, students consistently outperformed ChatGPT.

The study’s findings suggest that ChatGPT has the potential to provide accurate and reliable answers to exam questions in various fields. However, it is important to note that the AI’s performance is not consistent across all subjects. The researchers also emphasized the need for further research to explore the potential of AI in education and its impact on the future of learning. As technology continues to advance, it is essential to understand how AI can be integrated into education to enhance learning outcomes.
