As educators reevaluate learning in the wake of ChatGPT, concerns about integrity that began in the job market, from the creative economy to the managerial class, have extended to the classroom. Although concerns about jobs and education frequently make headlines, the reality is that large-scale language models like ChatGPT will impact almost every aspect of our lives.
These new technologies arouse widespread worries about the potential for artificial intelligence to reinforce social prejudices, commit fraud and identity theft, produce fake news, disseminate false information, and more. However, they are also advancing scholarly research.
Concerns about Artificial Intelligence
A group of academics at the University of Pennsylvania School of Engineering and Applied Science is working to give technology users the means to lessen these hazards. In a peer-reviewed study presented at the Association for the Advancement of Artificial Intelligence meeting in February 2023, the authors show that people can learn to distinguish between text created by a machine and text authored by a human.
Whether you are selecting a recipe, sharing an article, or giving out your credit card information, it is crucial to know what steps you can take to evaluate a source's credibility.
Teaching Users to Spot Fake Texts
The research, conducted by Chris Callison-Burch, an associate professor in the Department of Computer and Information Science (CIS), and by Liam Dugan and Daphne Ippolito, two CIS Ph.D. students, shows that text produced by AI or Natural Language Processing models is detectable.
Callison-Burch said, “We’ve demonstrated that humans can teach themselves to recognize machine-generated writing. People begin with preconceived notions about the kinds of mistakes a machine might make, but these presumptions aren’t always true. Given enough guidance and examples, we can eventually learn to recognize the kinds of mistakes that machines are currently committing.”
Dugan continues, “AI today is surprisingly adept at producing very fluent, highly grammatical text. It does, however, make mistakes. We establish that machines make characteristic errors that we can learn to recognize: errors of common sense, relevance, reasoning, and logic.”
Real or Fake Text?, a unique web-based training game, was used to gather data for the study.
This training game transforms the usual experimental approach of detection studies into a more realistic simulation of how real people use artificial intelligence to produce text.
With conventional approaches, participants are asked a yes-or-no question: did a computer produce a particular text? The task requires only categorizing a text as real or fake, and responses are graded as correct or incorrect.
By providing samples that all start as human-written, the Penn model greatly improves on the conventional detection study, turning it into a useful training task. Participants are asked to indicate where each example transitions into machine-generated text, and they are then scored on recognizing and explaining the textual cues that point to machine errors.
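To make the boundary-detection setup concrete, here is a toy sketch of how such a task might be scored. The point values and scoring rules below are invented for illustration; the study's actual game may score guesses differently.

```python
# Toy sketch of a "Real or Fake Text"-style boundary-detection task.
# Each example is a sequence of sentences: the first `true_boundary`
# sentences are human-written, and the rest are machine-generated.
# A participant guesses the index where the transition happens.
# NOTE: these scoring rules are hypothetical, chosen only to illustrate
# the idea of rewarding guesses close to the true transition point.

def score_guess(true_boundary: int, guessed_boundary: int, max_points: int = 5) -> int:
    """Award full credit for an exact guess, partial credit for late guesses."""
    if guessed_boundary < true_boundary:
        return 0  # flagged genuinely human sentences as machine-generated
    # lose one point for each sentence read past the true transition
    return max(0, max_points - (guessed_boundary - true_boundary))

# Example: the text switches to machine generation at sentence 3.
print(score_guess(3, 3))  # exact guess -> 5
print(score_guess(3, 5))  # two sentences late -> 3
print(score_guess(3, 1))  # too early -> 0
```

Scoring by distance, rather than as a simple right-or-wrong call, is what turns the exercise into training: participants get graded feedback on how closely they can localize the machine's first mistake.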
The study’s findings show that participants scored significantly better than chance, demonstrating that text generated by Natural Language Processing models can, to some extent, be recognized.
In addition to gamifying the process and making it more engaging, Dugan notes, this approach offers a more realistic training environment: generated texts, including those produced by ChatGPT, begin with human-provided prompts.
The paper discusses artificial intelligence as it exists today and sketches out a positive—even exciting—future for our interaction with this technology.
“Five years ago,” says Dugan, “models struggled to stay on topic and construct coherent sentences. Now they hardly ever make a grammatical mistake. Our study reveals the kinds of errors that currently give AI chatbots away, but it’s critical to remember that these models have evolved and will continue to evolve. The prospect of AI-written text becoming harder to detect is not a change that should cause alarm; individuals must keep educating themselves to distinguish between the two and use detection software as a supplement.”

According to Callison-Burch, there are good reasons why people are concerned about AI. “Our work provides arguments to dispel these fears. Once we can channel our optimism about AI text generators, we will be able to focus on how they can help us create more creative and intriguing writing.”
Ippolito, a co-leader of the Penn study who is currently a Research Scientist at Google, complements Dugan’s emphasis on detection by focusing her work on the best uses for these technologies. For instance, she contributed to Wordcraft, an AI writing tool created in collaboration with established authors. The authors did not find the tool a compelling replacement for a fiction writer, but they did see great value in AI’s capacity to support the creative process.
In Callison-Burch’s opinion, these technologies are currently best suited for creative writing. News reports, term papers, and legal counsel are poor use cases because there is no guarantee of factuality.
“You can push this technology in some exciting, wonderful directions,” says Dugan. “People are fixated on the worrisome examples, such as plagiarism and fake news, but we now know that we can be educating ourselves to be better readers and writers.”
What is text NLP?
Natural language processing (NLP), also known as text mining or text analytics, is an artificial intelligence (AI) technique that turns the free (unstructured) text found in documents and databases into normalized, structured data that can be used for analysis or as input to machine learning (ML) algorithms.
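As a minimal illustration of that idea, the sketch below turns free text into a normalized bag-of-words feature count using only the Python standard library. This is just one simple way to structure text; real NLP pipelines use far richer representations.

```python
import re
from collections import Counter

def text_to_features(text: str) -> Counter:
    """Normalize unstructured text into a structured bag-of-words count.

    Lowercases the text, splits it into word tokens, and returns term
    frequencies -- a simple structured representation usable as input
    for analysis or machine learning algorithms.
    """
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

features = text_to_features("The cat sat on the mat. The mat was flat.")
print(features["the"])  # -> 3
print(features["mat"])  # -> 2
```

The key point is the transformation itself: free-form sentences go in, and a normalized, countable structure comes out, ready for downstream tools.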
What does text NLP do?
Natural language processing (NLP) is the field of computer science, and more specifically the branch of artificial intelligence (AI), concerned with giving computers the capacity to comprehend written and spoken words in much the same way humans can.