Language Models Like ChatGPT Could Be Plagiarising in More Ways Than Just ‘Copy-Paste’, Say Researchers

A Penn State research team tested OpenAI's GPT-2 for plagiarism.

By ANI | Updated: 20 February 2023 18:12 IST
Highlights
  • Study can help AI researchers build more robust language models in future
  • The results of the study only apply to GPT-2
  • Researchers will present their findings at the 2023 ACM Web Conference

ChatGPT has already been banned in several schools in the US

Concerns about plagiarism arise when language models, presumably including ChatGPT, paraphrase and reuse concepts from their training data without citing the original source.

Students might want to give their next assignment some thought before finishing it with a chatbot. According to a research team led by Penn State, which undertook the first study to specifically examine the question, language models that generate text in response to user prompts plagiarise content in more ways than one.

"Plagiarism comes in different flavours," said Dongwon Lee, professor of information sciences and technology at Penn State. "We wanted to see if language models not only copy and paste but resort to more sophisticated forms of plagiarism without realizing it."

The researchers focused on identifying three forms of plagiarism: verbatim, or directly copying and pasting content; paraphrasing, or rewording and restructuring content without citing the original source; and idea, or using the main idea from a text without proper attribution. They constructed a pipeline for automated plagiarism detection and tested it against OpenAI's GPT-2 because the language model's training data is available online, allowing the researchers to compare generated texts to the 8 million documents used to pre-train GPT-2.
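
The paper describes that pipeline in detail; purely as an illustration, and not the researchers' actual code, the short sketch below shows how verbatim overlap between a generated passage and a candidate training document could be flagged using shared word n-grams. The n-gram length and the example strings are arbitrary assumptions.

# Illustrative sketch only: flag verbatim overlap between a generated text and
# a candidate training document via shared word n-grams. This is not the
# study's pipeline; the n-gram length and example strings are assumptions.

def ngrams(text, n=6):
    """Return the set of word n-grams occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(generated, training_doc, n=6):
    """Fraction of the generated text's n-grams that also appear verbatim in
    the training document (0.0 = no shared spans, 1.0 = fully copied)."""
    gen_grams = ngrams(generated, n)
    if not gen_grams:
        return 0.0
    return len(gen_grams & ngrams(training_doc, n)) / len(gen_grams)

if __name__ == "__main__":
    generated = "language models that generate text in response to user prompts can reuse training content"
    training_doc = "we study language models that generate text in response to user prompts and their data"
    print(f"verbatim overlap: {verbatim_overlap(generated, training_doc):.2f}")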

The scientists used 210,000 generated texts to test for plagiarism in pre-trained language models and fine-tuned language models, or models trained further to focus on specific topic areas. In this case, the team fine-tuned three language models to focus on scientific documents, scholarly articles related to COVID-19, and patent claims. They used an open-source search engine to retrieve the top 10 training documents most similar to each generated text and modified an existing text alignment algorithm to better detect instances of verbatim, paraphrase and idea plagiarism.
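
The team's exact retriever and alignment algorithm are not reproduced here; the sketch below only approximates that retrieve-then-align shape, substituting a TF-IDF retriever (scikit-learn) and Python's difflib for the open-source search engine and the modified text alignment algorithm the study actually used. The stand-in corpus and query strings are invented for illustration.

# Rough approximation of the retrieve-then-align shape described above. It is
# not the researchers' setup: a TF-IDF retriever and difflib stand in for the
# open-source search engine and modified alignment algorithm they used.
from difflib import SequenceMatcher

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_k_similar(generated, corpus, k=10):
    """Indices of the k training documents most similar to the generated text."""
    matrix = TfidfVectorizer(stop_words="english").fit_transform(corpus + [generated])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    return sims.argsort()[::-1][:k].tolist()

def longest_shared_span(generated, document):
    """Longest contiguous character span shared by the two texts (a crude verbatim signal)."""
    matcher = SequenceMatcher(None, generated.lower(), document.lower())
    match = matcher.find_longest_match(0, len(generated), 0, len(document))
    return generated[match.a:match.a + match.size]

if __name__ == "__main__":
    corpus = [  # stand-in training documents; the real corpora are far larger
        "patent claims define the scope of an invention in precise legal language",
        "scholarly articles related to covid-19 grew rapidly during the pandemic",
        "language models are trained on millions of web documents scraped online",
    ]
    generated = "language models are trained on millions of web documents"
    for idx in top_k_similar(generated, corpus, k=2):
        print(idx, repr(longest_shared_span(generated, corpus[idx])))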

The team found that the language models committed all three types of plagiarism and that the larger the dataset and parameters used to train the model, the more often plagiarism occurred. They also noted that fine-tuned language models reduced verbatim plagiarism but increased instances of paraphrasing and idea plagiarism. In addition, they identified instances of the language model exposing individuals' private information through all three forms of plagiarism. The researchers will present their findings at the 2023 ACM Web Conference, which takes place from April 30-May 4 in Austin, Texas.

"People pursue large language models because the larger the model gets, generation abilities increase," said lead author Jooyoung Lee, a doctoral student in the College of Information Sciences and Technology at Penn State. "At the same time, they are jeopardizing the originality and creativity of the content within the training corpus. This is an important finding."

The study highlights the need for more research into text generators and the ethical and philosophical questions that they pose, according to the researchers.

"Even though the output may be appealing, and language models may be fun to use and seem productive for certain tasks, it doesn't mean they are practical," said Thai Le, assistant professor of computer and information science at the University of Mississippi who began working on the project as a doctoral candidate at Penn State. "In practice, we need to take care of the ethical and copyright issues that text generators pose."

Though the results of the study only apply to GPT-2, the automatic plagiarism detection process that the researchers established can be applied to newer language models like ChatGPT to determine if and how often these models plagiarize training content. Testing for plagiarism, however, depends on the developers making the training data publicly accessible, said the researchers.

The current study can help AI researchers build more robust, reliable and responsible language models in future, according to the scientists. For now, they urge individuals to exercise caution when using text generators.

"AI researchers and scientists are studying how to make language models better and more robust, meanwhile, many individuals are using language models in their daily lives for various productivity tasks," said Jinghui Chen, assistant professor of information sciences and technology at Penn State. "While leveraging language models as a search engine or a stack overflow to debug code is probably fine, for other purposes, since the language model may produce plagiarized content, it may result in negative consequences for the user."

The plagiarism findings are not unexpected, Dongwon Lee added.

"As a stochastic parrot, we taught language models to mimic human writings without teaching them how not to plagiarize properly," he said. "Now, it's time to teach them to write more properly, and we have a long way to go."

