In a recent study conducted at the University of Memphis, linguists were unable to reliably distinguish texts written by humans from those generated by the large language model (LLM) ChatGPT. The finding raises concerns that students could use AI to cheat on assignments and exams, and it underscores the need for ethical guidelines and regulations governing the use of AI in research and education.

According to OpenAI, the company that developed ChatGPT, technical systems cannot reliably detect whether a text was written by a human or by an AI. This has fueled fears that students may use AI to cheat on assignments and exams. To test how well experts fare, researchers at the University of Memphis asked 72 linguists to judge whether scientific texts had been written by a human or by ChatGPT. Despite drawing on linguistic and stylistic features to analyze the texts, the linguists correctly identified the author of only 39% of them.

The study highlights the need for ethical guidelines and regulations surrounding the use of AI in research and education. While AI can be a powerful tool for learning and discovery, it can also be misused. It is therefore important to establish clear rules for its use in academic settings to ensure it is applied ethically and responsibly. The authors hope their findings will spark a broader discussion about AI in education and research and lead to the development of such guidelines.
