Should universities be worried about the increasing capabilities of AI?


If a piece of writing was 49 per cent written by AI, with the remaining 51 per cent written by a human, is this considered original work? Image: Unsplash/Danial Igdery

Sarah Elaine Eaton and Michael Mindzak


  • The use of technology in academic writing is already widespread, with teachers and students using AI-based tools to support the work they are doing.
  • However, as AI becomes increasingly advanced, institutions need to define clearly what counts as AI assistance and what constitutes plagiarism or cheating, the authors write.
  • For example, if a piece of writing was 49% written by AI, with the remaining 51% written by a human, is this considered original work?

The dramatic rise of online learning during the COVID-19 pandemic has spotlit concerns about the role of technology in exam surveillance — and also in student cheating.

Some universities have reported more cheating during the pandemic, and such concerns are unfolding in a climate where technologies that allow for the automation of writing continue to improve.

Over the past two years, the ability of artificial intelligence to generate writing has leapt forward significantly, particularly with the development of what’s known as the language generator GPT-3. With this, companies such as Google, Microsoft and NVIDIA can now produce “human-like” text.

AI-generated writing has raised the stakes of how universities and schools will gauge what constitutes academic misconduct, such as plagiarism. As scholars with an interest in academic integrity and the intersections of work, society and educators’ labour, we believe that educators and parents should be, at the very least, paying close attention to these significant developments.

AI & academic writing

The use of technology in academic writing is already widespread. For example, many universities already use text-based plagiarism detectors like Turnitin, while students might use Grammarly, a cloud-based writing assistant. Examples of writing support include automatic text generation, extraction, prediction, mining, form-filling, paraphrasing, translation and transcription.

Advancements in AI technology have led to new tools, products and services being offered to writers to improve content and efficiency. As these improve, entire articles or essays might soon be written entirely by artificial intelligence. In schools, the implications of such developments will undoubtedly shape the future of learning, writing and teaching.

Misconduct concerns already widespread

Research has revealed that concerns over academic misconduct are already widespread across institutions of higher education in Canada and internationally.

In Canada, there is little data regarding rates of misconduct. Research published in 2006, based on data from mostly undergraduate students at 11 higher education institutions, found that 53 per cent reported having engaged in one or more instances of serious cheating on written work. This was defined as copying material without footnoting it, copying material almost word for word, submitting work done by someone else, fabricating or falsifying a bibliography, or submitting a paper they either bought or got from someone else for free.

Academic misconduct is in all likelihood under-reported across Canadian higher education institutions.

There are different types of violations of academic integrity, including plagiarism, contract cheating (where students hire other people to write their papers) and exam cheating, among others.

Unfortunately, with technology, students can use their ingenuity and entrepreneurialism to cheat. These concerns are also applicable to faculty members, academics and writers in other fields, bringing new concerns surrounding academic integrity and AI such as:

  • If a piece of writing was 49 per cent written by AI, with the remaining 51 per cent written by a human, is this considered original work?
  • What if an essay was 100 per cent written by AI, but a student did some of the coding themselves?
  • What qualifies as “AI assistance” as opposed to “academic cheating”?
  • Do the same rules apply to students as they would to academics and researchers?

We are asking these questions in our own research, and we know that in the face of all this, educators will be required to consider how writing can be effectively assessed or evaluated as these technologies improve.


Augmenting or diminishing integrity?

At the moment, little guidance, policy or oversight is available regarding technology, AI and academic integrity for teachers and educational leaders.

Over the past year, COVID-19 has pushed more students towards online learning — a sphere where teachers may become less familiar with their own students and thus, potentially, their writing.

While it remains impossible to predict the future of these technologies and their implications in education, we can attempt to discern some of the larger trends and trajectories that will impact teaching, learning and research.

Technology & automation in education

A key concern moving forward is the apparent movement towards the increased automation of education where educational technology companies offer commodities such as writing tools as proposed solutions for the various “problems” within education.

An example of this is automated assessment of student work, such as automated grading of student writing. Numerous commercial products already exist for automated grading, though the ethics of these technologies are yet to be fully explored by scholars and educators.

Overall, the traditional landscape surrounding academic integrity and authorship is being rapidly reshaped by technological developments. These developments also spark concerns about a shift of professional control away from educators and ever-increasing expectations of digital literacy in precarious working environments.

These complexities, concerns and questions will require further thought and discussion. Educational stakeholders at all levels will be required to respond and rethink definitions as well as values surrounding plagiarism, originality, academic ethics and academic labour in the very near future.


Don’t Use A.I. to Cheat in School. It’s Better for Studying.

Generative A.I. tools can annotate long documents, make flashcards, and produce practice quizzes.


By Brian X. Chen

Hello! We’re back with another bonus edition of On Tech: A.I., a pop-up newsletter that teaches you about artificial intelligence, how it works and how to use it.

Last week, I went over how to turn your chatbot into a life coach. Let’s now shift into an area where many have been experimenting with A.I. since last year: education.

Generative A.I.’s specialty is language — guessing which word comes next — and students quickly realized that they could use ChatGPT and other chatbots to write essays. That created an awkward situation in many classrooms. It turns out, it’s easy to get caught cheating with generative A.I. because it is prone to making stuff up, a phenomenon known as “hallucinating.”

But generative A.I. can also be used as a study assistant. Some tools make highlights in long research papers and even answer questions about the material. Others can assemble study aids, like quizzes and flashcards.

One warning to keep in mind: When studying, it’s paramount that the information is correct, and to get the most accurate results, you should direct A.I. tools to focus on information from trusted sources rather than pull data from across the web. I’ll go over how to do that below.

First, let’s explore one of the most daunting studying tasks: reading and annotating long papers. Some A.I. tools, such as Humata.AI, Wordtune Read and various plug-ins inside ChatGPT, act as research assistants that will summarize documents for you.

I prefer Humata.AI because it answers your questions and shows highlights directly inside the source material, which allows you to double check for accuracy.

On the Humata.AI website, I uploaded a PDF of a scientific research paper on the accuracy of smartwatches in tracking cardio fitness. Then I clicked the “Ask” button and asked it how Garmin watches performed in the study. It scrolled down to the relevant part of the document mentioning Garmin, made highlights and answered my question.
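
Under the hood, tools like these typically follow a retrieve-then-answer pattern: split the document into passages, find the passages most relevant to the question, and have a language model answer from those passages only. Below is a deliberately simple sketch of that pattern, not Humata.AI’s actual implementation; it uses the pypdf library for text extraction, and plain word overlap stands in for the learned relevance scoring a real product would use.

```python
# Minimal sketch of the retrieve-then-answer pattern behind document Q&A tools.
# Assumptions: `pip install pypdf`; "smartwatch_study.pdf" is a placeholder file.
from pypdf import PdfReader

def load_passages(path, chunk_size=500):
    """Extract the PDF's text and split it into fixed-size passages."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def top_passages(passages, question, k=3):
    """Rank passages by word overlap with the question (a real tool would
    use embeddings) and keep the k best as grounding context."""
    q_words = set(question.lower().split())
    scored = sorted(passages,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

passages = load_passages("smartwatch_study.pdf")
context = top_passages(passages, "How did Garmin watches perform in the study?")
# The final step is to send the question plus these passages to a language
# model, instructing it to answer only from the supplied text.
print(context)
```

Grounding the answer in retrieved passages is also what makes the in-document highlights possible: the tool knows exactly which spans of the source it used.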


Most interesting to me was when I asked the bot whether my understanding of the paper was correct — that on average, wearable devices like Garmins and Fitbits tracked cardio fitness fairly accurately, but there were some individuals whose results were very wrong. “Yes, you are correct,” the bot responded. It followed up with a summary of the study and listed the page numbers where this conclusion was mentioned.

Generative A.I. can also help with rote memorization. While any chatbot will generate flashcards or quizzes if you paste in the information that you’re studying, I decided to use ChatGPT because it includes plug-ins that generate study aids that pull from specific web articles or documents.

(Only subscribers who pay $20 a month for ChatGPT Plus can use plug-ins. We explained how to use them in a previous newsletter.)

I wanted ChatGPT to create flashcards for me to learn Chinese vocabulary words. To do this, I installed two plug-ins: Link Reader, which let me tell the bot to use data from a specific website, and MetaMentor, a plug-in that automatically generates flashcards.

In the ChatGPT dashboard, I selected both plug-ins. Then, I wrote this prompt:

Act as a tutor. I am a native English speaker learning Chinese. Take the vocabulary words and phrases from this link and create a set of flashcards for each: https://preply.com/en/blog/basic-chinese-words/

About five minutes later, the bot responded with a link where I could download the flashcards. They were exactly what I asked for.
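
If you prefer scripting to plug-ins, the same flashcard idea can be reproduced in a few lines against the OpenAI API. This is a sketch under assumptions: the official openai Python package, an API key in your environment, and a placeholder model name; the vocabulary text is whatever you paste in, since the API will not fetch a link for you.

```python
# Sketch: generate flashcards from pasted vocabulary via the OpenAI API.
# Assumptions: `pip install openai`, OPENAI_API_KEY set in the environment,
# and "gpt-4o-mini" as a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vocab_text = "..."  # paste the vocabulary words and phrases here

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Act as a tutor. The user is a native English speaker learning Chinese."},
        {"role": "user",
         "content": "Create a flashcard (Chinese on the front, English on the back) "
                    "for each vocabulary item below:\n" + vocab_text},
    ],
)

print(response.choices[0].message.content)  # the generated flashcards
```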

Next, I wanted my tutor to quiz me. I told ChatGPT that I was studying for the written exam to get my motorcycle license in California. Again, using the Link Reader plug-in, I pasted a link to the California D.M.V.’s latest motorcycle handbook (an important step because traffic laws vary between states and rules are occasionally updated) and asked for a multiple-choice quiz.

The bot processed the information inside the handbook and produced a quiz, asking me five questions at a time.

Finally, to test my grasp of the subject, I directed ChatGPT to ask me questions without presenting multiple-choice answers. The bot adapted accordingly, and I aced the quiz.

I would have loved having these tools when I was in school. And probably would have earned better grades with them as study companions.

What’s next?

Next week, in the final installment of this how-to newsletter, we’ll take everything we’ve learned and apply it to enriching the time we spend with our families.

Brian X. Chen is the lead consumer technology writer for The Times. He reviews products and writes Tech Fix, a column about the social implications of the tech we use. Before joining The Times in 2011, he reported on Apple and the wireless industry for Wired.


What do AI chatbots really mean for students and cheating?


The launch of ChatGPT and other artificial intelligence (AI) chatbots has triggered an alarm for many educators, who worry about students using the technology to cheat by passing its writing off as their own. But two Stanford researchers say that concern is misdirected, based on their ongoing research into cheating among U.S. high school students before and after the release of ChatGPT.  

“There’s been a ton of media coverage about AI making it easier and more likely for students to cheat,” said Denise Pope, a senior lecturer at Stanford Graduate School of Education (GSE). “But we haven’t seen that bear out in our data so far. And we know from our research that when students do cheat, it’s typically for reasons that have very little to do with their access to technology.”

Pope is a co-founder of Challenge Success, a school reform nonprofit affiliated with the GSE, which conducts research into the student experience, including students’ well-being and sense of belonging, academic integrity, and their engagement with learning. She is the author of Doing School: How We Are Creating a Generation of Stressed-Out, Materialistic, and Miseducated Students, and coauthor of Overloaded and Underprepared: Strategies for Stronger Schools and Healthy, Successful Kids.

Victor Lee is an associate professor at the GSE whose focus includes researching and designing learning experiences for K-12 data science education and AI literacy. He is the faculty lead for the AI + Education initiative at the Stanford Accelerator for Learning and director of CRAFT (Classroom-Ready Resources about AI for Teaching), a program that provides free resources to help teach AI literacy to high school students. 

Here, Lee and Pope discuss the state of cheating in U.S. schools, what research shows about why students cheat, and their recommendations for educators working to address the problem.


What do we know about how much students cheat?

Pope: We know that cheating rates have been high for a long time. At Challenge Success we’ve been running surveys and focus groups at schools for over 15 years, asking students about different aspects of their lives — the amount of sleep they get, homework pressure, extracurricular activities, family expectations, things like that — and also several questions about different forms of cheating. 

For years, long before ChatGPT hit the scene, some 60 to 70 percent of students have reported engaging in at least one “cheating” behavior during the previous month. That percentage has stayed about the same or even decreased slightly in our 2023 surveys, when we added questions specific to new AI technologies, like ChatGPT, and how students are using it for school assignments.


Isn’t it possible that they’re lying about cheating? 

Pope: Because these surveys are anonymous, students are surprisingly honest — especially when they know we’re doing these surveys to help improve their school experience. We often follow up our surveys with focus groups where the students tell us that those numbers seem accurate. If anything, they’re underreporting the frequency of these behaviors.

Lee: The surveys are also carefully written so they don’t ask, point-blank, “Do you cheat?” They ask about specific actions that are classified as cheating, like whether they have copied material word for word for an assignment in the past month or knowingly looked at someone else’s answer during a test. With AI, most of the fear is that the chatbot will write the paper for the student. But there isn’t evidence of an increase in that.

So AI isn’t changing how often students cheat — just the tools that they’re using? 

Lee: The most prudent thing to say right now is that the data suggest, perhaps to the surprise of many people, that AI is not increasing the frequency of cheating. This may change as students become increasingly familiar with the technology, and we’ll continue to study it and see if and how this changes. 

But I think it’s important to point out that, in Challenge Success’ most recent survey, students were also asked if and how they felt an AI chatbot like ChatGPT should be allowed for school-related tasks. Many said they thought it should be acceptable for “starter” purposes, like explaining a new concept or generating ideas for a paper. But the vast majority said that using a chatbot to write an entire paper should never be allowed. So this idea that students who’ve never cheated before are going to suddenly run amok and have AI write all of their papers appears unfounded.

But clearly a lot of students are cheating in the first place. Isn’t that a problem? 

Pope: There are so many reasons why students cheat. They might be struggling with the material and unable to get the help they need. Maybe they have too much homework and not enough time to do it. Or maybe assignments feel like pointless busywork. Many students tell us they’re overwhelmed by the pressure to achieve — they know cheating is wrong, but they don’t want to let their family down by bringing home a low grade. 

We know from our research that cheating is generally a symptom of a deeper, systemic problem. When students feel respected and valued, they’re more likely to engage in learning and act with integrity. They’re less likely to cheat when they feel a sense of belonging and connection at school, and when they find purpose and meaning in their classes. Strategies to help students feel more engaged and valued are likely to be more effective than taking a hard line on AI, especially since we know AI is here to stay and can actually be a great tool to promote deeper engagement with learning.

What would you suggest to school leaders who are concerned about students using AI chatbots? 

Pope: Even before ChatGPT, we could never be sure whether kids were getting help from a parent or tutor or another source on their assignments, and this was not considered cheating. Kids in our focus groups are wondering why they can't use ChatGPT as another resource to help them write their papers — not to write the whole thing word for word, but to get the kind of help a parent or tutor would offer. We need to help students and educators find ways to discuss the ethics of using this technology and when it is and isn't useful for student learning.

Lee: There’s a lot of fear about students using this technology. Schools have considered putting significant amounts of money into AI-detection software, which studies show can be highly unreliable. Some districts have tried blocking AI chatbots from school wifi and devices, then repealed those bans because they were ineffective.

AI is not going away. Along with addressing the deeper reasons why students cheat, we need to teach students how to understand and think critically about this technology. For starters, at Stanford we’ve begun developing free resources to help teachers bring these topics into the classroom as it relates to different subject areas. We know that teachers don’t have time to introduce a whole new class, but we have been working with teachers to make sure these are activities and lessons that can fit with what they’re already covering in the time they have available. 

I think of AI literacy as being akin to driver’s ed: We’ve got a powerful tool that can be a great asset, but it can also be dangerous. We want students to learn how to use it responsibly.



How ChatGPT and similar AI will disrupt education

Teachers are concerned about cheating and inaccurate information

Students are turning to ChatGPT for homework help. Educators have mixed feelings about the tool and other generative AI.

Glenn Harvey


By Kathryn Hulick

April 12, 2023 at 7:00 am

“We need to talk,” Brett Vogelsinger said. A student had just asked for feedback on an essay. One paragraph stood out. Vogelsinger, a ninth grade English teacher in Doylestown, Pa., realized that the student hadn’t written the piece himself. He had used ChatGPT.

The artificial intelligence tool, made available for free late last year by the company OpenAI, can reply to simple prompts and generate essays and stories. It can also write code.

Within a week, it had more than a million users. As of early 2023, Microsoft planned to invest $10 billion into OpenAI , and OpenAI’s value had been put at $29 billion, more than double what it was in 2021.

It’s no wonder other tech companies have been racing to put out competing tools. Anthropic, an AI company founded by former OpenAI employees, is testing a new chatbot called Claude. Google launched Bard in early February, and the Chinese search company Baidu released Ernie Bot in March.

A lot of people have been using ChatGPT out of curiosity or for entertainment. I asked it to invent a silly excuse for not doing homework in the style of a medieval proclamation. In less than a second, it offered me: “Hark! Thy servant was beset by a horde of mischievous leprechauns, who didst steal mine quill and parchment, rendering me unable to complete mine homework.”

But students can also use it to cheat. ChatGPT marks the beginning of a new wave of AI, a wave that’s poised to disrupt education.

When Stanford University’s student-run newspaper polled students at the end of 2022, 17 percent said they had used ChatGPT on assignments or exams. Some admitted to submitting the chatbot’s writing as their own. For now, these students and others are probably getting away with it. That’s because ChatGPT often does an excellent job.

“It can outperform a lot of middle school kids,” Vogelsinger says. He might not have known his student had used it, except for one thing: “He copied and pasted the prompt.”

The essay was still a work in progress, so Vogelsinger didn’t see it as cheating. Instead, he saw an opportunity. Now, the student and AI are working together. ChatGPT is helping the student with his writing and research skills.

“[We’re] color-coding,” Vogelsinger says. The parts the student writes are in green. The parts from ChatGPT are in blue. Vogelsinger is helping the student pick and choose a few sentences from the AI to expand on — and allowing other students to collaborate with the tool as well. Most aren’t turning to it regularly, but a few kids really like it. Vogelsinger thinks the tool has helped them focus their ideas and get started.

This story had a happy ending. But at many schools and universities, educators are struggling with how to handle ChatGPT and other AI tools.

In early January, New York City public schools banned ChatGPT on their devices and networks. Educators were worried that students who turned to it wouldn’t learn critical-thinking and problem-solving skills. They also were concerned that the tool’s answers might not be accurate or safe. Many other school systems in the United States and around the world have imposed similar bans.

Keith Schwarz, who teaches computer science at Stanford, said he had “switched back to pencil-and-paper exams,” so students couldn’t use ChatGPT, according to the Stanford Daily .

Yet ChatGPT and its kin could also be a great service to learners everywhere. Like calculators for math or Google for facts, AI can make writing that often takes time and effort much faster. With these tools, anyone can generate well-formed sentences and paragraphs. How could this change the way we teach and learn?

Who said what?

When prompted, ChatGPT can craft answers that sound surprisingly like those from a student. We asked middle school and high school students from across the country, all participants in our Science News Learning education program , to answer some basic science questions in two sentences or less. The examples throughout the story compare how students responded with how ChatGPT responded when asked to answer the question at the same grade level.


What effect do greenhouse gases have on the Earth?

Agnes B. | Grade 11, Harbor City International School, Minn.

Greenhouse gases effectively trap heat from dissipating out of the atmosphere, increasing the amount of heat that remains near Earth in the troposphere.

ChatGPT: Greenhouse gases trap heat in the Earth’s atmosphere, causing the planet to warm up and leading to climate change and its associated impacts like sea level rise, more frequent extreme weather events and shifts in ecosystems.


The good, bad and weird of ChatGPT

ChatGPT has wowed its users. “It’s so much more realistic than I thought a robot could be,” says Avani Rao, a sophomore in high school in California. She hasn’t used the bot to do homework. But for fun, she’s prompted it to say creative or silly things. She asked it to explain addition, for instance, in the voice of an evil villain.

Given how well it performs, there are plenty of ways that ChatGPT could level the playing field for students and others working in a second language or struggling with composing sentences. Since ChatGPT generates new, original material, its text is not technically plagiarism.

Students could use ChatGPT like a coach to help improve their writing and grammar, or even to explain subjects they find challenging. “It really will tutor you,” says Vogelsinger, who had one student come to him excited that ChatGPT had clearly outlined a concept from science class.

Educators could use ChatGPT to help generate lesson plans, activities or assessments — perhaps even personalized to address the needs or goals of specific students.

Xiaoming Zhai, an expert in science education at the University of Georgia in Athens, tested ChatGPT to see if it could write an academic paper. He was impressed with how easy it was to summarize knowledge and generate good writing using the tool. “It’s really amazing,” he says.

All of this sounds wonderful, but really big problems exist. Most worrying, ChatGPT and other similar tools can often get things very wrong. They don’t pull facts from databases. Rather, they are trained to generate new text that sounds natural. They remix language without understanding it, which can lead to glaring mistakes.

The news website CNET came under fire earlier this year for using AI to churn out dozens of articles, many of them packed with errors. In an early advertisement for the Bard chatbot, it made a factual error about the James Webb Space Telescope, incorrectly claiming that it had taken the very first picture of an exoplanet. And ChatGPT said in a conversation posted on Twitter that the fastest marine mammal was the peregrine falcon. A falcon, of course, is a bird and doesn’t live in the ocean.

ChatGPT is “confidently wrong,” says Casey Fiesler, an expert in the ethics of technology at the University of Colorado Boulder. “There are mistakes and bad information.” She has made multiple TikTok videos about the pitfalls of ChatGPT.

Most of ChatGPT’s training data come from before September 2021, and it does not provide sources for its information. If asked for sources, it makes them up, Fiesler revealed in one video. Zhai, who sees the tool as an assistant, discovered the exact same thing. When he asked ChatGPT for citations, it gave him sources that looked correct. But they didn’t actually exist.

Define the term biodiversity.

Dante A. | Grade 10, Clark Magnet High School, Calif.

Biodiversity refers to the variety of living species and ecosystems that exist in a particular region or on the planet as a whole. It encompasses the different genes, species and ecosystems that make up the natural world and the relationships between them.

ChatGPT: Biodiversity refers to the variety of living organisms that inhabit the Earth and the ecosystems they form. It includes the diversity of species, genes and ecosystems, and is important for maintaining the balance of nature and sustaining life on our planet.

How ChatGPT works

ChatGPT’s mistakes make sense if you know how it works. “It doesn’t reason. It doesn’t have ideas. It doesn’t have thoughts,” explains Emily M. Bender, a computational linguist at the University of Washington in Seattle.

ChatGPT was developed using at least two types of machine learning. The primary type is a large language model based on an artificial neural network. Loosely inspired by how neurons in the brain interact, this computing architecture finds statistical patterns in vast amounts of data.

A language model learns to predict what words will come next in a sentence or phrase by churning through vast amounts of text. It places words and phrases into a multidimensional map that represents their relationships to one another. Words that tend to come together, like peanut butter and jelly, end up closer together in this map.
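
To make that concrete, here is a toy next-word predictor that uses simple counting in place of a neural network; the tiny corpus is invented for illustration.

```python
# Toy next-word prediction: count which word follows which in a tiny corpus.
# A real language model learns the same kind of statistics with billions of
# parameters, over far longer contexts than a single previous word.
from collections import Counter, defaultdict

corpus = "peanut butter and jelly . peanut butter and bread . bread and butter".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1  # tally each observed word pair

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("butter"))  # -> "and", the most common word after "butter"
```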

The size of an artificial neural network is measured in parameters. These internal values get tweaked as the model learns. In 2020, OpenAI released GPT-3. At the time, it was the biggest language model ever, containing 175 billion parameters. It had trained on text from the internet as well as digitized books and academic journals. Training text also included transcripts of dialog, essays, exams and more, says Sasha Luccioni, a Montreal-based researcher at Hugging Face, a company that builds AI tools.

OpenAI improved upon GPT-3 to create GPT-3.5. In early 2022, the company released a fine-tuned version of GPT-3.5 called InstructGPT. This time, OpenAI added a new type of machine learning. Called reinforcement learning with human feedback, it puts people into the training process. These workers check the AI’s output. Responses that people like get rewarded. Human feedback can also help reduce hurtful, biased or inappropriate responses. This fine-tuned language model powers freely available ChatGPT. As of March, paying users receive answers powered by GPT-4, a bigger language model.
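
The human-feedback step can be sketched too. A common formulation (used here as a generic illustration, not OpenAI’s published training code) trains a reward model so that the response a person preferred scores higher than the one they rejected, via a Bradley-Terry style objective.

```python
# Toy reward-model training from pairwise human preferences.
# Assumption: responses are represented by small hand-made feature vectors;
# a real system scores responses with the language model's own representations.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each pair: (features of the preferred response, features of the rejected one).
pairs = [
    (np.array([0.9, 0.1]), np.array([0.2, 0.8])),
    (np.array([0.8, 0.3]), np.array([0.1, 0.9])),
]

w = np.zeros(2)  # reward model parameters
lr = 0.1
for _ in range(500):
    for preferred, rejected in pairs:
        # Maximize log sigmoid(reward(preferred) - reward(rejected)).
        p = sigmoid(w @ preferred - w @ rejected)
        w += lr * (1 - p) * (preferred - rejected)  # gradient ascent step

# After training, preferred-style responses receive higher reward scores.
print(w @ np.array([0.9, 0.2]), w @ np.array([0.1, 0.9]))
```

Responses the reward model scores highly are then reinforced in the language model itself.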

During ChatGPT’s development, OpenAI added extra safety rules to the model. It will refuse to answer certain sensitive prompts or provide harmful information. But this step raises another issue: Whose values are programmed into the bot, including what it is — or is not — allowed to talk about?

OpenAI is not offering exact details about how it developed and trained ChatGPT. The company has not released its code or training data. This disappoints Luccioni because it means the tool can’t benefit from the perspectives of the larger AI community. “I’d like to know how it works so I can understand how to make it better,” she says.

When asked to comment on this story, OpenAI provided a statement from an unnamed spokesperson. “We made ChatGPT available as a research preview to learn from real-world use, which we believe is a critical part of developing and deploying capable, safe AI systems,” the statement said. “We are constantly incorporating feedback and lessons learned.” Indeed, some experimenters have gotten the bot to say biased or inappropriate things despite the safety rules. OpenAI has been patching the tool as these problems come up.

ChatGPT is not a finished product. OpenAI needs data from the real world. The people who are using it are the guinea pigs. Notes Bender: “You are working for OpenAI for free.”

What are black holes and where are they found?

Althea C. | Grade 11, Waimea High School, Hawaii

A black hole is a place in space where gravity is so strong that nothing, not even light, may come out.

ChatGPT: Black holes are extremely dense regions in space where the gravity is so strong that not even light can escape, and they are found throughout the universe.

ChatGPT’s academic performance

How good is ChatGPT in an academic setting? Catherine Gao, a doctor and medical researcher at Northwestern University’s Feinberg School of Medicine in Chicago, is part of one team of researchers that is putting the tool to the test.

Gao and her colleagues gathered 50 real abstracts from research papers in medical journals and then, after providing the titles of the papers and the journal names, asked ChatGPT to generate 50 fake abstracts. The team asked people familiar with reading and writing these types of research papers to identify which were which.

“I was surprised by how realistic and convincing the generated abstracts were,” Gao says. The reviewers mistook roughly one-third of the AI-generated abstracts as human-generated.

In another study, Will Yeadon and colleagues tested whether AI tools could pass a college exam. Yeadon, a physics instructor at Durham University in England, picked an exam from a course that he teaches. The test asks students to write five short essays about physics and its history. Students have an average score of 71 percent, which he says is equivalent to an A in the United States.

Yeadon used the tool davinci-003, a close cousin of ChatGPT. It generated 10 sets of exam answers. Then Yeadon and four other teachers graded the answers using their typical standards. The AI also scored an average of 71 percent. Unlike the human students, though, it had no very low or very high marks. It consistently wrote well, but not excellently. For students who regularly get bad grades in writing, Yeadon says, it “will write a better essay than you.”

These graders knew they were looking at AI work. In a follow-up study, Yeadon plans to use work from the AI and students and not tell the graders whose is whose.

What is heat?

Precious A. | Grade 6, Canyon Day Junior High School, Ariz.

Heat is the transfer of kinetic energy from one medium or object to another, or from an energy source to a medium or object through radiation, conduction and convection.

ChatGPT: Heat is a type of energy that makes things warmer. It can be produced by burning something or through electricity.

Tools to check for cheating

When it’s unclear whether ChatGPT wrote something or not, other AI tools may help. These tools typically train on AI-generated text and sometimes human-generated text as well. They can tell you how likely it is that text was composed by an AI. Many of the existing tools were trained on older language models, but developers are working quickly to put out new, improved tools.
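
In essence, these detectors are binary classifiers. Here is a minimal sketch of that setup with scikit-learn, using four invented training snippets where a real detector would use large corpora of known human and known AI writing.

```python
# Toy AI-text detector: a binary classifier over labelled examples.
# The training snippets are invented placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "i kinda rushed this but heat is basically energy moving around",      # human
    "ok so black holes are like super heavy spots where nothing escapes",  # human
    "Heat is a form of energy transferred between systems due to temperature differences.",    # AI
    "Black holes are dense regions of space from which nothing, not even light, can escape.",  # AI
]
labels = [0, 0, 1, 1]  # 0 = human-written, 1 = AI-generated

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# predict_proba returns [P(human), P(AI)] for each input text.
print(detector.predict_proba(["Biodiversity refers to the variety of living organisms."]))
```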

A company called Originality.ai sells access to a tool that trained on GPT-3. Founder Jon Gillham says that in a test of 10,000 samples of texts composed by models based on GPT-3, the tool tagged 94 percent of them correctly as AI-generated. When ChatGPT came out, his team tested a smaller set of 20 samples. Each only 500 words in length, these had been created by ChatGPT and other models based on GPT-3 and GPT-3.5. Here, Gillham says, the tool “tagged all of them as AI-generated. And it was 99 percent confident, on average.”

In late January 2023, OpenAI released its own free tool for spotting AI writing, cautioning that the tool was “not fully reliable.” The company is working to add watermarks to its AI text, which would tag the output as machine-generated, but doesn’t give details on how. Gillham describes one possible approach: Whenever it generates text, the AI ranks many different possible words for each position. If its developers told it to always choose the word ranked in third place rather than first place at specific points in its output, those words could act as a fingerprint, he says.
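
That rank-based fingerprint is easy to sketch in code. Everything below is illustrative: the ranking function is a deterministic stand-in for a real model’s probability ranking, and the marked positions are a made-up shared secret between generator and detector.

```python
# Toy rank-based watermark: at secret positions, emit the third-ranked word
# instead of the first; a detector that knows the positions and can reproduce
# the rankings checks for that pattern.

VOCAB = ["the", "a", "one", "this", "that"]
MARKED = {3, 7, 11}  # made-up shared-secret positions

def ranked_candidates(context):
    """Stand-in for a language model ranking its vocabulary given the context."""
    return sorted(VOCAB, key=lambda w: hash((context, w)))

def generate(length):
    words = []
    for i in range(length):
        ranked = ranked_candidates(tuple(words))
        words.append(ranked[2] if i in MARKED else ranked[0])  # 3rd place at marks
    return words

def is_watermarked(words):
    """Re-derive the rankings and verify the marked positions."""
    return all(i < len(words) and
               words[i] == ranked_candidates(tuple(words[:i]))[2]
               for i in MARKED)

text = generate(12)
print(is_watermarked(text))  # True: the fingerprint is present
```

A scheme like this only works if the detector can reproduce the model’s rankings exactly, which is one reason watermarking is most practical for a model’s own developers.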

As AI writing tools improve, the tools to sniff them out will need to improve as well. Eventually, some sort of watermark might be the only way to sort out true authorship.

What is DNA and how is it organized?

Luke M. | Grade 8, Eastern York Middle School, Pa.

DNA, or deoxyribonucleic acid, is kept inside the cells of living things, where it holds instructions for the genetics of the organism it is inhabiting.

ChatGPT: DNA is like a set of instructions that tells our cells what to do. It’s organized into structures called chromosomes, which contain all of the DNA in a cell.

ChatGPT and the future of writing

There’s no doubt we will soon have to adjust to a world in which computers can write for us. But educators have made these sorts of adjustments before. As high school student Rao points out, Google was once seen as a threat to education because it made it possible to look up facts instantly. Teachers adapted by coming up with teaching and testing materials that don’t depend as heavily on memorization.

Now that AI can generate essays and stories, teachers may once again have to rethink how they teach and test. Rao says: “We might have to shift our point of view about what’s cheating and what isn’t.”

Some teachers will prevent students from using AI by limiting access to technology. Right now, Vogelsinger says, teachers regularly ask students to write out answers or essays at home. “I think those assignments will have to change,” he says. But he hopes that doesn’t mean kids do less writing.

Teaching students to write without AI’s help will remain essential, agrees Zhai. That’s because “we really care about a student’s thinking,” he stresses. And writing is a great way to demonstrate thinking. Though ChatGPT can help a student organize their thoughts, it can’t think for them, he says.

Kids still learn to do basic math even though they have calculators (which are often on the phones they never leave home without), Zhai acknowledges. Once students have learned basic math, they can lean on a calculator for help with more complex problems.

In the same way, once students have learned to compose their thoughts, they could turn to a tool like ChatGPT for assistance with crafting an essay or story. Vogelsinger doesn’t expect writing classes to become editing classes, where students brush up AI content. He instead imagines students doing prewriting or brainstorming, then using AI to generate parts of a draft, and working back and forth to revise and refine from there.

Though he’s overwhelmed by the prospect of having to adapt his teaching to another new technology, he says he is “having fun” figuring out how to navigate the new tech with his students.

Rao doesn’t see AI ever replacing stories and other texts generated by humans. Why? “The reason those things exist is not only because we want to read it but because we want to write it,” she says. People will always want to make their voices heard.


Universities say AI cheats can't be beaten, moving away from attempts to block AI


A number of universities have told a Senate inquiry it will be too difficult, if not impossible, to prevent students using AI to cheat assessments, and the institutions will have to change how they teach instead. 

Key points:

  • Universities have warned against banning AI technologies in academia
  • Several say AI cheating in tests will be too difficult to stop, and it is more practical to change assessment methods
  • The sector says the entire nature of teaching will have to change to ensure students continue to effectively learn

The tertiary sector is on the frontline of change coming from the rise in popularity of generative AI, technologies that can produce fresh content learned from massive databases of information.

Universities have widely reported experiences of students using AI to write essays or cheat assessments, with some returning to pen and paper testing to combat attempts to cheat.

In submissions to a Senate inquiry into the use of generative AI in education, a number now say it is not practical to consider attempting to ban the technologies from use in assessments.

Instead, some such as Monash University in Melbourne say the sector should "decriminalise" AI, and move away from banning it or attempting to detect its use.

"Beyond a desire to encourage responsible experimentation ... an important factor in taking this position is that detection of AI-generated content is unlikely to be feasible," its submission reads.

"Emerging evidence suggests that humans are not reliable in detecting AI-generated content.

"Equally, AI detection tools are non-transparent and unreliable in their testing and reporting of their own accuracy, and are likely to generate an intolerably high proportion of both false positives and false negatives."

Monash submitted that even if regulations were introduced to require AI tools to inject "watermarks" into their code to make AI detectable, other generative AI technologies could still be used to strip out those watermarks.

Instead, it and the nation's other largest universities under the Group of Eight (Go8) umbrella say the sector will have to change how it teaches and assesses students, using more oral or other supervised exams, practical assessments and the use of portfolios.

"Generative AI tools are rapidly evolving and will be part of our collective future – playing an important role in future workplaces and, most likely, our daily lives," tGo8 submitted. 

"Entirely prohibiting the use of generative AI in higher education is therefore both impractical and undesirable."

The National Tertiary Education Union (NTEU) has also expressed skepticism that universities would be able to completely manage AI misconduct — not only in assessment, but also in research.

"There is a real risk that AI applications will be considerably ahead of current research integrity processes that would detect problems or irregularities," the NTEU submitted. 

"It may be that problematic research is not detected for some time, if at all, by which time there could be widespread ramifications."

New system needed to ensure students are actually learning

Universities have also given evidence that they have begun using generative AI across every aspect of what they do.

Monash University described how it has tested using AI "personalised course advisers" to help students navigate their degrees and classes, AI-powered mock job interviews for real positions and simulated customers or clients for learning.

"For example, in health education, the tool provides realistic ‘patients’ with detailed medical histories, personas, and varied willingness to share embarrassing medical details with learners who must put the work in to develop rapport with the patients to obtain relevant information in a realistic virtual clinical environment," it said.


As generative AI begins to be picked up at every part of the learning cycle, the tertiary education union warned that, over time, there was a risk universities may reach a point "where they can no longer assure the required learning has occurred in what they claim to be teaching".

"Teaching staff will need to continuously develop new methods of assessment that assess students at a level beyond the levels of AIs," it said.

The Queensland University of Technology said unless the nature of learning changed, there was a risk AI could promote "laziness and lack of independent thought".

"The possibility of a situation [arises] in which activities are created by the educators using AI, the learners use AI to create their responses, and the educators use AI to mark/grade and even give feedback," Queensland University of Technology wrote.

"At its most extreme, such a scenario suggests the question of who, if anyone, has learnt anything? And what was the purpose of the assessment?"


The College Essay Is Dead

Nobody is prepared for how AI will transform academia.


Suppose you are a professor of pedagogy, and you assign an essay on learning styles. A student hands in an essay with the following opening paragraph:

The construct of “learning styles” is problematic because it fails to account for the processes through which learning styles are shaped. Some students might develop a particular learning style because they have had particular experiences. Others might develop a particular learning style by trying to accommodate to a learning environment that was not well suited to their learning needs. Ultimately, we need to understand the interactions among learning styles and environmental and personal factors, and how these shape how we learn and the kinds of learning we experience.

Pass or fail? A- or B+? And how would your grade change if you knew a human student hadn’t written it at all? Because Mike Sharples, a professor in the U.K., used GPT-3, a large language model from OpenAI that automatically generates text from a prompt, to write it. (The whole essay, which Sharples considered graduate-level, is available, complete with references, here.) Personally, I lean toward a B+. The passage reads like filler, but so do most student essays.

Sharples’s intent was to urge educators to “rethink teaching and assessment” in light of the technology, which he said “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” Essay generation is neither theoretical nor futuristic at this point. In May, a student in New Zealand confessed to using AI to write their papers, justifying it as a tool like Grammarly or spell-check: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalised because I don’t write eloquently and I didn’t feel that was right,” they told a student paper in Christchurch. They don’t feel like they’re cheating, because the student guidelines at their university state only that you’re not allowed to get somebody else to do your work for you. GPT-3 isn’t “somebody else”—it’s a program.

The world of generative AI is progressing furiously. Last week, OpenAI released an advanced chatbot named ChatGPT that has spawned a new wave of marveling and hand-wringing, plus an upgrade to GPT-3 that allows for complex rhyming poetry; Google previewed new applications last month that will allow people to describe concepts in text and see them rendered as images; and the creative-AI firm Jasper received a $1.5 billion valuation in October. It still takes a little initiative for a kid to find a text generator, but not for long.

The essay, in particular the undergraduate essay, has been the center of humanistic pedagogy for generations. It is the way we teach children how to research, think, and write. That entire tradition is about to be disrupted from the ground up. Kevin Bryan, an associate professor at the University of Toronto, tweeted in astonishment about OpenAI’s new chatbot last week: “You can no longer give take-home exams/homework … Even on specific questions that involve combining knowledge across domains, the OpenAI chat is frankly better than the average MBA at this point. It is frankly amazing.” Neither the engineers building the linguistic tech nor the educators who will encounter the resulting language are prepared for the fallout.

A chasm has existed between humanists and technologists for a long time. In the 1950s, C. P. Snow gave his famous lecture, later the essay “The Two Cultures,” describing the humanistic and scientific communities as tribes losing contact with each other. “Literary intellectuals at one pole—at the other scientists,” Snow wrote. “Between the two a gulf of mutual incomprehension—sometimes (particularly among the young) hostility and dislike, but most of all lack of understanding. They have a curious distorted image of each other.” Snow’s argument was a plea for a kind of intellectual cosmopolitanism: Literary people were missing the essential insights of the laws of thermodynamics, and scientific people were ignoring the glories of Shakespeare and Dickens.

The rupture that Snow identified has only deepened. In the modern tech world, the value of a humanistic education shows up in evidence of its absence. Sam Bankman-Fried, the disgraced founder of the crypto exchange FTX who recently lost his $16 billion fortune in a few days, is a famously proud illiterate. “I would never read a book,” he once told an interviewer. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.” Elon Musk and Twitter are another excellent case in point. It’s painful and extraordinary to watch the ham-fisted way a brilliant engineering mind like Musk deals with even relatively simple literary concepts such as parody and satire. He obviously has never thought about them before. He probably didn’t imagine there was much to think about.

The extraordinary ignorance on questions of society and history displayed by the men and women reshaping society and history has been the defining feature of the social-media era. Apparently, Mark Zuckerberg has read a great deal about Caesar Augustus , but I wish he’d read about the regulation of the pamphlet press in 17th-century Europe. It might have spared America the annihilation of social trust .

These failures don’t derive from mean-spiritedness or even greed, but from a willful obliviousness. The engineers do not recognize that humanistic questions—like, say, hermeneutics or the historical contingency of freedom of speech or the genealogy of morality—are real questions with real consequences. Everybody is entitled to their opinion about politics and culture, it’s true, but an opinion is different from a grounded understanding. The most direct path to catastrophe is to treat complex problems as if they’re obvious to everyone. You can lose billions of dollars pretty quickly that way.

As the technologists have ignored humanistic questions to their peril, the humanists have greeted the technological revolutions of the past 50 years by committing soft suicide. As of 2017, the number of English majors had nearly halved since the 1990s. History enrollments have declined by 45 percent since 2007 alone. Needless to say, humanists’ understanding of technology is partial at best. The state of digital humanities is always several categories of obsolescence behind, which is inevitable. (Nobody expects them to teach via Instagram Stories.) But more crucially, the humanities have not fundamentally changed their approach in decades, despite technology altering the entire world around them. They are still exploding meta-narratives like it’s 1979, an exercise in self-defeat.


Contemporary academia engages, more or less permanently, in self-critique on any and every front it can imagine. In a tech-centered world, language matters, voice and style matter, the study of eloquence matters, history matters, ethical systems matter. But the situation requires humanists to explain why they matter, not constantly undermine their own intellectual foundations. The humanities promise students a journey to an irrelevant, self-consuming future; then they wonder why their enrollments are collapsing. Is it any surprise that nearly half of humanities graduates regret their choice of major?

The case for the value of humanities in a technologically determined world has been made before. Steve Jobs always credited a significant part of Apple’s success to his time as a dropout hanger-on at Reed College, where he fooled around with Shakespeare and modern dance, along with the famous calligraphy class that provided the aesthetic basis for the Mac’s design. “A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem,” Jobs said. “The broader one’s understanding of the human experience, the better design we will have.” Apple is a humanistic tech company. It’s also the largest company in the world.

Despite the clear value of a humanistic education, its decline continues. Over the past 10 years, STEM has triumphed, and the humanities have collapsed. The number of students enrolled in computer science is now nearly the same as the number of students enrolled in all of the humanities combined.

And now there’s GPT-3. Natural-language processing presents the academic humanities with a whole series of unprecedented problems. Practical matters are at stake: Humanities departments judge their undergraduate students on the basis of their essays. They give Ph.D.s on the basis of a dissertation’s composition. What happens when both processes can be significantly automated? Going by my experience as a former Shakespeare professor, I figure it will take 10 years for academia to face this new reality: two years for the students to figure out the tech, three more years for the professors to recognize that students are using the tech, and then five years for university administrators to decide what, if anything, to do about it. Teachers are already some of the most overworked, underpaid people in the world. They are already dealing with a humanities in crisis. And now this. I feel for them.

And yet, despite the drastic divide of the moment, natural-language processing is going to force engineers and humanists together. They are going to need each other despite everything. Computer scientists will require basic, systematic education in general humanism: The philosophy of language, sociology, history, and ethics are not amusing questions of theoretical speculation anymore. They will be essential in determining the ethical and creative use of chatbots, to take only an obvious example.

The humanists will need to understand natural-language processing because it’s the future of language, but also because there is more than just the possibility of disruption here. Natural-language processing can throw light on a huge number of scholarly problems. It is going to clarify matters of attribution and literary dating that no system ever devised will approach; the parameters in large language models are much more sophisticated than the current systems used to determine which plays Shakespeare wrote, for example. It may even allow for certain types of restorations, filling the gaps in damaged texts by means of text-prediction models. It will reformulate questions of literary style and philology; if you can teach a machine to write like Samuel Taylor Coleridge, that machine must be able to inform you, in some way, about how Samuel Taylor Coleridge wrote.
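That restoration claim is easy to demonstrate in miniature. The sketch below uses a masked language model (BERT, via Hugging Face's transformers library; an assumption for illustration, not a tool named in this essay) to propose fillers for a gap in a line of text, the same text-prediction principle a serious restoration system would scale up:

```python
# A miniature of the restoration idea: a masked language model proposes
# fillers for a gap in a damaged line. BERT is an illustrative stand-in.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# "[MASK]" marks the lacuna; the model ranks candidate words by probability.
for candidate in fill("Shall I compare thee to a summer's [MASK]?"):
    print(f"{candidate['token_str']:>10s}  {candidate['score']:.3f}")
```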

The connection between humanism and technology will require people and institutions with a breadth of vision and a commitment to interests that transcend their field. Before that space for collaboration can exist, both sides will have to take the most difficult leaps for highly educated people: Understand that they need the other side, and admit their basic ignorance. But that’s always been the beginning of wisdom, no matter what technological era we happen to inhabit.


A.I. Is Making It Easier Than Ever for Students to Cheat

Look out, educators. You’re about to confront a pernicious new challenge that is spreading, kudzu-like, into your student writing assignments: papers augmented with artificial intelligence.

The first online article generator debuted in 2005. Now, A.I.-generated text can be found in novels, fake news articles and real news articles, marketing campaigns, and dozens of other written products. The tech is either free or cheap to use, which places it in the hands of anyone. And it’s probably already burrowing into America’s classrooms right now.

Using an A.I. program is not “plagiarism” in the traditional sense—there’s no previous work for the student to copy, and thus no original for teachers’ plagiarism detectors to catch. Instead, a student first feeds text from one or more sources into the program to begin the process. The program then generates content according to a set of parameters on a topic, which can then be personalized to the writer’s specifications. With a little practice, a student can use A.I. to write his or her paper in a fraction of the time that it would normally take to write an essay.
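To make that workflow concrete, here is a minimal sketch of machine-assisted drafting, using the open-source GPT-2 model via Hugging Face's transformers library as a stand-in for commercial tools like Sudowrite (whose actual models and interface are proprietary); the seed text and parameter values are illustrative only:

```python
# A minimal sketch of the drafting workflow described above, assuming the
# open-source GPT-2 model as a stand-in for a commercial text generator.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed = "Climate change is not a distant threat; its effects are visible today."

# The "parameters on a topic": output length, randomness (temperature),
# and how many alternative continuations the writer can choose from.
drafts = generator(
    seed,
    max_new_tokens=120,
    do_sample=True,
    temperature=0.8,
    num_return_sequences=3,
)

for i, draft in enumerate(drafts, start=1):
    print(f"--- Draft {i} ---")
    print(draft["generated_text"])
```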

I wanted to see what would happen if I fed an A.I. text generator like Sudowrite some sample texts for assignments. (Sudowrite is just one such program; I could name more, but I’d rather not make it even easier for students to cheat.) The results showed that teachers must be made aware of what is coming to their classrooms.

Say a student wanted to write an essay about U.S.-China policy and decided to use an A.I. program to speed along the process. I fed it the first few lines from a recent Pentagon report : “The PRC is increasingly clear in its ambitions and intentions. Beijing seeks to reshape the international order to better align with its authoritarian system and national interests, as a vital component of its strategy to achieve the ‘great rejuvenation of the Chinese nation.’ ” The A.I. text generator provided the following sentences:

It is increasingly willing to disregard—or to actively undermine—existing international rules and norms when it feels its vital interests are at stake. The U.S. and its allies face a rapidly modernizing military, whose capabilities are growing rapidly, especially in the air and at sea. China has also made great strides in its development of cyber capabilities, and it is focused on the ability to degrade or disrupt critical U.S. military technology, as well as on targeting civilian infrastructure. In addition to being a challenge to the U.S. military and its allies, China’s activities in cyberspace have the potential to degrade U.S. political and economic institutions, and the global information infrastructure is at risk. 

If a high school student turned that in verbatim, their teacher would probably suspect that there was something amiss. But if this were turned in at the undergraduate or graduate level, and especially if the student were to edit the text, it would be much harder to detect.

Or let’s say they need to write a paper about climate change. Here’s what came up when I fed Sudowrite the first few lines from an article on climate change impacts from the National Oceanic and Atmospheric Administration website:

Though we often think about human-induced climate change as something that will happen in the future, it is an ongoing process. Ecosystems and communities in the United States and around the world are being impacted today. 

It’s not particularly insightful, but it provides the A.I. program some basic building blocks. The artificial intelligence continued along this slightly edited line of reasoning:

The consensus view emerging from the Intergovernmental Panel on Climate Change (IPCC) and other respected assessments is that the planet is warming because of the accumulation of heat-trapping gases in the atmosphere caused by human activity. The level of scientific consensus that human activities are primarily responsible for climate change is like the consensus finding on smoking causing lung cancer.

The program had likely already been trained to know the term “climate change” was connected to the IPCC; by making certain inferences it could generate text which seemed like a reasonable extension of the idea.

What about ninth-grade English? Channeling my 14-year-old self, I wrote these lines:

“Romeo and Juliet” is a play that explores many different topics such as romance, betrayal, and tragedy. It has interesting characters that make it such a good story, even 400 years later. 

The computer spat back this take (which I’ve edited slightly):

Romeo, the main character, is a very passionate and immature young man who is torn between his love for Rosaline and his love for Juliet. He is also very rash and desperate to get Juliet and will do almost anything to win her. His desperation for Juliet is an interesting trait and is one that the younger kids can relate to, as well as older adults with first crushes. Romeo is an interesting character and is very different from how kids would imagine him to be. But he is also a great inspiration, showing that love is worth fighting for.

Again, the A.I. program had been previously trained to know that Romeo and Juliet contained certain characters, themes, and plot elements. But what stood out to me was that the text was crafted at an appropriate writing level for a high schooler, who would be reading about these star-crossed lovers for the first time. The awkward phrasing and the syntax errors baked into the program make it seem convincing.

I’ve struggled to find the right analogy to describe this phenomenon. Is using A.I. to write graded papers like athletes taking performance-enhancing drugs? As a society and as a sporting culture, we’ve decided certain drugs are forbidden, as they provide the user unfair advantages. Further, the cocktail of drugs flowing through these competitors, and the malicious sports programs behind them, could cause real physical and psychological harm to the athletes themselves. Would individuals using A.I. in writing be in the same boat—cheating the system for an undue advantage, while also harming themselves in the long run by stunting their own writing skills?

Or might using A.I. be more like using performance-enhancing gear in sports, which is both acceptable and encouraged? To use another sports analogy, even beginner tennis players today use high-performance carbon composite rackets instead of 1960s-era wooden racket technology. Swimmers wear nylon and elastane suits and caps to reduce drag. Bikers have stronger, lighter bicycles than their counterparts used a generation ago. Baseball bats evolved from wood to aluminum and developed better grips; baseball mitts have become more specialized over the decades.

Numerous educators assert that A.I. is more like the former; they consider using these programs a violation of academic integrity. Georgetown University professor Lise Howard told me, “I do think it’s unethical and an academic violation to use AI to write paragraphs, because academic work is all about original writing.” Written assignments have two purposes, argues Ani Ross Grubb, part-time faculty member in the Carroll School of Management at Boston College: “First is to test the learning, understanding, and critical thinking skills of students. Second is to provide scaffolding to develop those skills. Having AI write your assignments would go against those goals.”

Certainly, one can argue that this topic has already been covered in university academic integrity codes. Using A.I. might open students to serious charges. For instance, American University indicates, “All papers and materials submitted for a course must be the student’s original work unless the sources are cited” while the University of Maryland similarly notes that it is prohibited to use dishonesty to “gain an unfair advantage, and/or using or attempting to use unauthorized materials, information, or study aids in any academic course or exercise.”

But some study aids are generally considered acceptable. When writing papers, it is perfectly fine to use the grammar- and syntax-checking features standard in Microsoft Word and other document-creation programs. Other A.I. programs like Grammarly help write better sentences and fix errors. Google Docs finishes sentences in drafts and emails.

So the border between using those kinds of assistive computer programs and full-on cheating remains fuzzy. Indeed, as Jade Wexler, associate professor of special education at the University of Maryland, noted, A.I. could be a valuable tool to help level the playing field for some students. “It goes back to teachers’ objectives and students’ needs,” she said. “There’s a fine balance making sure both of those are met.”

Thus there are two intertwined questions at work. First: Should institutions permit A.I.-enhanced writing? If the answer is no, then the second question is: How can professors detect it? After all, it’s unclear whether there’s a technical solution to keeping A.I. from worming into student papers. An educator’s up-to-date knowledge on relevant sources will be of limited utility since the verbiage has not been swiped from pre-existing texts.

Still, there may be ways to minimize these artificial enhancements. One is to codify at the institutional level what is acceptable and what is not; in July the Council of Europe took a few small steps, publishing new guidelines that begin to grapple with how these new technologies enable fraud in education. Another would be to keep classes small and give individual attention to students. As Jessica Chiccehitto Hindman, associate professor of English at Northern Kentucky University, noted, “When a writing instructor is in a classroom situation where they are unable to provide individualized attention, the chance for students to phone it in—whether this is plagiarism, A.I., or just writing in a boring, uninvested way—goes up.” More in-class writing assignments—no screens allowed—could also help. Virginia Lee Strain, associate professor of English and director of the honors program at Loyola University Chicago, further argued, “AI is not a problem in the classroom when a student sits down with paper and pencil.”

But in many settings, more one-on-one time simply isn’t a realistic solution, especially at high schools or colleges with large classes. Educators juggle multiple classes and courses, and for them to get to know every student every semester isn’t going to happen.

A more aggressive stance would be for high schools and universities to explicitly declare that using A.I. will be considered an academic violation—or at least update their honor codes to reflect what they believe is the right side of the line concerning academic integrity. That said, absent a mechanism to police students, such a declaration might paradoxically introduce students to a new way to generate papers faster.

Educators realize some large percentage of students will cheat or try to game the system to their advantage. But perhaps, as Hindman says, “if a professor is concerned that students are using plagiarism or AI to complete assignments, the assignments themselves are the problem, not the students or the AI.” If an educator is convinced that students are using these forbidden tools, he or she might consider alternate means of generating grades, such as oral exams, group projects, and class presentations. Of course, as Hindman notes, “these types of high-impact learning practices are only feasible if you have a manageable number of students.”

A.I. is here to stay whether we like it or not. Give unscrupulous students the ability to use these shortcuts with little capacity for the educator to detect them, combine that with other crutches like outright plagiarism and companies that sell papers, homework, and test answers, and it’s a recipe for—well, not disaster, but the further degradation of a type of assignment that has been around for centuries.

Future Tense is a partnership of Slate , New America , and Arizona State University that examines emerging technologies, public policy, and society.


Students Are Using AI Text Generators to Write Papers—Are They Cheating?

High school and college students are awakening to the grade-boosting possibilities of text-generating software. Their teachers are struggling to catch up.

When West—not his real name—enrolled in college last year at the University of Rhode Island, he quickly realized that his professors expected a lot out of him. He had scheduled classes, he had assignments, he had tests—and he didn’t want to devote an equal amount of time to all of them.

“I would like to say I’m pretty smart,” West said. So he turned to a homework-completion trick he’d started using the year before in high school. He logged into GPT-3, a text-generating tool developed by OpenAI, which can create written content from simple prompts. Trained on a vast corpus of preexisting language drawn from Wikipedia, Common Crawl, and other sources, GPT-3 is intended as a tool for automating writing tasks. But it’s also increasingly helping students like West avoid some of the tedium of academic writing and skip right to the fun part (being done).

“I was like, ‘Holy shit,’ you know, like, it was insane,” he said. “When people are children, they imagine that a machine can do their homework. And I just happened to stumble upon that machine.”

Teachers sound off on ChatGPT, the new AI tool that can write students’ essays for them

Teachers are talking about a new artificial intelligence tool called ChatGPT — with dread about its potential to help students cheat, and with anticipation over how it might change education as we know it.

On Nov. 30, research lab OpenAI released the free AI tool ChatGPT, a conversational language model that lets users type questions — “What is the Civil War?” or “Who was Leonardo da Vinci?” — and receive articulate, sophisticated and human-like responses in seconds. Ask it to solve complex math equations and it spits out the answer, sometimes with step-by-step explanations for how it got there.

According to a fact sheet sent to TODAY.com by OpenAI, ChatGPT can answer follow-up questions, correct false information, contextualize information and even acknowledge its own mistakes.

Some educators worry that students will use ChatGPT to get away with cheating more easily  — especially when it comes to the five-paragraph essays assigned in middle and high school and the formulaic papers assigned in college courses. Compared with traditional cheating in which information is plagiarized by being copied directly or pasted together from other work, ChatGPT pulls content from all corners of the internet to form brand new answers that aren't derived from one specific source, or even cited.

Therefore, if you paste a ChatGPT-generated essay into a search engine, you likely won't find it word-for-word anywhere else. This has many teachers spooked — even as OpenAI is trying to reassure educators.

"We don’t want ChatGPT to be used for misleading purposes in schools or anywhere else, so we’re already developing mitigations to help anyone identify text generated by that system," an OpenAI spokesperson tells TODAY.com "We look forward to working with educators on useful solutions, and other ways to help teachers and students benefit from artificial intelligence."

Still, #TeacherTok is weighing in about potential consequences in the classroom.

"So the robots are here and they’re going to be doing our students' homework,” educator Dan Lewer said in a TikTok video . “Great! As if teachers needed something else to be worried about.”

“If you’re a teacher, you need to know about this new (tool) that students can use to cheat in your class,” educational consultant Tyler Tarver said on TikTok .

“Kids can just tell it what they want it to do: Write a 500-word essay on ‘Harry Potter and the Deathly Hallows,’” Tarver said. “This thing just starts writing it, and it looks legit.”

Taking steps to prevent cheating

ChatGPT is already being prohibited at some K-12 schools and colleges.

On Jan. 4, the New York City Department of Education restricted ChatGPT on school networks and devices "due to concerns about negative impacts on student learning, and concerns regarding the safety and accuracy of content," Jenna Lyle, a department spokesperson, tells TODAY.com. "While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success."

A student who attends Lawrence University in Wisconsin tells TODAY.com that one of her professors warned students, both verbally and in a class syllabus, not to use artificial intelligence like ChatGPT to write papers or risk receiving a zero score.

And last month, a student at Furman University in South Carolina got caught using ChatGPT to complete a 1,200-word take-home exam on the 18th-century philosopher David Hume.

“The essay confidently and thoroughly described Hume’s views on the paradox of horror in (ways) that were thoroughly wrong,” Darren Hick, an assistant professor of philosophy, explained in a Dec. 15 Facebook post . “It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullsh--ting after that.”

Hick tells TODAY.com that traditional cheating signs — for example, sudden shifts in a person’s writing style — weren’t apparent in the student’s essay.

To confirm his suspicions, Hick says he ran passages from the essay through a separate OpenAI detector, which indicated the writing was AI-generated. Hick then did the same thing with essays from other students. That time around, the detector suggested that the essays had been written by human beings.

Eventually, Hick met with the student, who confessed to using ChatGPT. She received a failing grade for the class and faces further disciplinary action.

“I give this student credit for being updated on new technology,” says Hick. “Unfortunately, in their case, so am I.”

Getting at the heart of teaching

OpenAI acknowledges that its ChatGPT tool is capable of providing false or harmful answers. OpenAI Chief Executive Officer Sam Altman tweeted that ChatGPT is meant for “fun creative inspiration” and that “it’s a mistake to be relying on it for anything important right now.”

Kendall Hartley, an associate professor of educational technology at the University of Nevada, Las Vegas, notes that ChatGPT is "blowing up fast," presenting new challenges for detection software like iThenticate and Turnitin, which teachers use to cross-reference student work to material published online.

Still, even with all the concerns being raised, many educators say they are hopeful about ChatGPT's potential in the classroom.


"I'm excited by how it could support assessment or students with learning disabilities or those who are English language learners," Lisa M. Harrison, a former seventh grade math teacher and a board of trustee for the Association for Middle Level Education , tells TODAY.com. Harrison speculates that ChatGPT could support all sorts of students with special needs by supplementing skills they haven’t yet mastered.

Harrison suggests workarounds to cheating through coursework that requires additional citations or verbal components. She says personalized assignments — such as asking students to apply a world event to their own personal experiences — could deter the use of AI.

Educators also could try embracing the technology, she says.

"Students could write essays comparing their work to what's produced by ChatGPT or learn about AI," says Harrison.

Tiffany Wycoff, a former elementary and high school principal who is now the chief operating officer of the professional development company Learning Innovation Catalyst (LINC), says AI offers great potential in education.

“Art instructors can use image-based AI generators to (produce) characters or scenes that inspire projects," Wycoff tells TODAY.com. "P.E. coaches could design fitness or sports curriculums, and teachers can discuss systemic biases in writing.”

Wycoff went straight to the source, asking ChatGPT, "How will generative AI affect teaching and learning in classrooms?" and published a lengthy answer on her company's blog .

According to ChatGPT's answer, AI can give student feedback in real time, create interactive educational content (videos, simulations and more), and create customized learning materials based on individual student needs.

The heart of teaching, however, can't be replaced by bots.

"When you think about the amazing teachers you’ve had, it’s likely because they connected with you as a student," Wycoff says. "That won’t change with the introduction of AI."

Tarver agrees, telling TODAY.com, "If a student is struggling and then suddenly gets a 98 (on a test), teachers will know."

"And if students can go in and type answers in ChatGPT," he adds, "we're asking the wrong questions.”

Elise Solé is a writer and editor who lives in Los Angeles and covers parenting for TODAY Parents. She was previously a news editor at Yahoo and has also worked at Marie Claire and Women's Health. Her bylines have appeared in Shondaland, SheKnows, Happify and more.


What is the potential of AI writing? Is cheating its greatest purpose?

Insights from Jasper AI’s interview with Chris Caren, Turnitin CEO

Christine Lee

Turnitin has successfully developed an AI writing detector, and the company plans to add this functionality to its core writing integrity products as early as April 2023.


At every turn, academic integrity has been both supported by and tested by technology. And, for nearly 25 years, Turnitin has been at the forefront of academic integrity and writing technology.


AI writing has, in a short time, transformed the landscape of academic integrity.

That said, AI writing has been around for decades. The term Artificial Intelligence, also known as AI, was coined by John McCarthy in 1956. AI writing itself has existed since 1967 when Alison Knowles used the programming language FORTRAN to write poems. Before that, Alan Turing initiated discussions around AI when he asked, “Can computers think?” in 1950.

From that point on, AI writing has flourished, gaining visibility in recent years. In 2014, the Associated Press became the first newsroom to have an AI editor. The Washington Post used Heliograf to write articles for the Rio Olympics in 2016. Now, there are a number of AI writing services accessible to the mainstream, allowing students, researchers, and others to input several data points so that an AI writer can complete an essay or article.

That AI writing has served valid functions and penetrated respected bastions of journalism muddies the waters for academic writing. If newspapers can use AI, why can’t students or researchers?

Similar to contract cheating , using AI to write an assignment isn’t technically plagiarism. No original work is being copied. But at the same time, it isn’t the student’s original work. In many ways, AI writing and contract cheating are very much alike; in the realm of contract cheating , students submit a request to an essay mill and receive a written essay in return . With AI writers, the only difference is that a computer generates the work. To that end, using AI writing to complete an assignment and represent it as your own work qualifies as academic misconduct.

Which brings us to the crossroads of AI writing in the classroom, where AI-generated text has caused new disruption.

At Turnitin, we’re incorporating the “big picture” of AI writing and its long-term benefits while at the same time safeguarding the true intention behind learning and knowledge acquisition. Turnitin CEO Chris Caren recently engaged with Jasper AI in an interview that addresses the potential and challenges of AI writing in the education space:

“We know that AI can be a force for good in education,” Caren states in the interview, “when the tools are accessed equitably, transparently, and skillfully. As an academic integrity company, we advocate for the responsible use of AI. When leveraged skillfully, we see AI potentially being used as a learning aid and intermediary tool that may even facilitate deeper intellectual inquiry. We’re focused on all possible current and evolving use cases for AI writing tools, considering both the positive and negative implications of these tools.”

So, what are the positive implications, according to Caren?

  • AI can empower students by, says Caren, putting “students in the role of giving feedback, rather than always being on the receiving end. This gives students new perspectives on how to evaluate their own writing.”
  • AI can support the entire learning journey . Caren states, “For more advanced writers, generative AI can remove much of the repetitive mechanics of writing. This allows seasoned writers to focus on the bigger picture and higher level thinking.”
  • AI can uphold feedback loops at scale . “For teachers,” shares Caren, “these Large Language Models have the potential to massively scale summarization and feedback. This results in teachers spending more time giving feedback on high level concepts, and possibly even improving grading consistency and fairness.”

AI also presents very real challenges to educators right now.

The immediate challenges of AI as Chris Caren states them in the Jasper AI interview are:

  • AI writing tools, when not being used to further student learning, result in a form of academic misconduct; remediation is a priority for educators. It is also a priority at Turnitin, where detection of content created by AI writing tools such as GPT-3.5, ChatGPT, and the like is in progress.
  • Defining AI writing’s role in education (and/or misconduct) is still in progress . According to Caren, “We’ve also heard from educators that there are diverse perceptions on whether AI writing should be allowed or not in academic work. At Turnitin, we recognize the most recent AI writing tools as marking a clear point of no return.” This impacts how AI writing is addressed in honor codes.
  • AI writing in student assignments is, frankly speaking, brand new; educators are still discovering AI’s full capabilities, and thus the strategies needed to mitigate misconduct.

AI is a disruptive technology; it can hinder things short term, but disruption can also open up opportunities for permanent and positive change . AI writing, according to Caren, is “firmly part of the educational landscape.”

Reviewing academic integrity policies with students is always best practice, and openly discussing AI writing and its place in the classroom is not only a way to clarify boundaries but a way to build communication channels with students. Caren states, “We all need to go back to basics and reimagine the true intent behind academic assessment : as a means of demonstrating knowledge acquisition.”

Educators worry about students using artificial intelligence to cheat

By John Yang and Harry Zahn

Earlier this month, New York City public schools blocked access to the popular artificial intelligence tool ChatGPT. Educators are concerned that students could use this technology to write papers – the tool wasn't even a month old when a college professor in South Carolina caught a student using it to write an essay in philosophy class. Darren Hick of Furman University joins John Yang to discuss.

Read the Full Transcript

Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

When New York City public schools blocked access to the popular artificial intelligence tool ChatGPT earlier this month, it was the latest response to concerns over how rapidly changing technology is affecting our lives. Educators worry that students are using this technology to write papers and that they'll never have to learn how to write on their own.

The tool wasn't even a month old when a college professor caught a student using it to write an essay for her philosophy class. Darren Hick of Furman University wrote about that on his Facebook page. And he's with us now for our periodic series, the AI Frontier.

Professor Hick, what were the red flags? This was an essay, for the record, about the Scottish philosopher David Hume and the paradox of horror. As you were reading this essay, what were the red flags that this might have been something other than her own product?

Darren Hick, Furman University:

There are several red flags that come up in any case of plagiarism. They just sort of build up until you have to screech the grading to a halt and look into the problem. In this case, it got some basic issues exactly right, other things fundamentally wrong. It talks about things that the student wouldn't have learned about in class, which is always something of a flag, and connects things together in a way that was just thoroughly wrong. But it was beautifully written. Well, beautifully for a college take-home exam, anyway. So, it was a weird collection of flags.

You say these were different. Were they different from the red flags you'd get from, I don't know, what I guess we'd call old-fashioned plagiarism?

Darren Hick:

Well, normally what happens when a student plagiarizes, there's a sort of cost-benefit analysis that goes on. But usually it's a panic, right? At the end of the semester, they realize they don't have enough time or enough knowledge to put this thing together properly. And so, they cobble pieces together that don't really fit together. What was odd about this is there was some of that: there were things that they didn't understand, or didn't seem to understand. But it was just so well composed, which is not something you would see with an essay that's cobbled together at the last second. It was nicely written. Sentence by sentence, it was nicely structured. It was just really odd.

And you actually used the ChatGPT product, or part of this tool, to confirm the plagiarism?

That's right, there was a detector that was designed by the same lab that had created the GPT generator in the first place. And so, I knew that this thing was around. So, I thought, well, when I had these suspicions, let's plug it in and see what it has to say. At that point, I had never done anything with it. So, this was a new investigation for me.

What did you take away from this experience? What does this lead you to think about this technology? About the uses, the misuse of it? Do you have a policy about it in your class now? What do you walk away from this with?

I have a really ambivalent view on this. On the one hand, it's fascinating. It's a great toy. I've been playing with it a lot. I can do all kinds of things with it. I had it write Christmas stories for me; it's just a lot of fun. On the other hand, it's terrifying. What's terrifying about it is that it's learning software. It's designed to get better.

So, when I caught the student using it, it was maybe three weeks old at that point, not even, and it was an infant. But a month from now, it's going to be better. A year from now, it's going to be better. Five years from now, the genie's out of the bottle. So, my worry is mostly about how do we keep up with this thing? How do we prepare for this thing?

Plagiarism isn't anything new. I don't expect a new flood of plagiarists, but in that cost benefit analysis that I was talking about, this changes the analysis for students. This is a tool that makes things easier, faster. And so, my worry is that we'll get more students who are using this method, and we need to be prepared for that coming. So, what I have to do in the classroom is change that analysis again.

How are you going about that? I mean, you talk about sort of having to, you know, keep up with this. How are you doing it in your classroom?

Well, I have to rethink every assignment that I give. You think about plagiarism every time you give an assignment anyway, so that's not new. You have to think of new methods, though. Currently, the newest change in my syllabus is that I tell students if I catch a whiff of AI in their essays, I'm going to throw out their essay and give them an impromptu oral exam on the spot, with the hope that that's enough to make them say, well, I better understand what I'm putting in that essay.

You say that it's a mixed bag, that there are some upsides to this tool. What are the upsides in your view?

I have seen a lot of people trying to think creatively about how to use this in a classroom. We can't ignore new technology. There's fun things to be had here. I think one of the best suggestions that I saw is somebody said, assign a prompt to your students. Have your students put that into ChatGPT. See what essay it produces and then write a new essay that analyzes that essay, says what it gets right, what it gets wrong. That's creative, that's interesting. That's getting a little bit ahead of the bar. Of course, I would ask what stops ChatGPT from analyzing its own essay?

That's a good point, you could have a sort of a circular argument here.

You have to stay a step ahead if you can.

Yeah, but could you see using it as a teaching tool?

Sure. I teach about the ethics of AI in my intro philosophy class. We're going to be poking around at this thing later this semester. So, it's absolutely raising questions that are worth asking. But at the same time, it is a potentially dangerous tool.

Darren Hick of Furman University, thank you very much.

Thanks so much for having me.


John Yang is the anchor of PBS News Weekend and a correspondent for the PBS NewsHour. He covered the first year of the Trump administration and is currently reporting on major national issues from Washington, DC, and across the country.


College professors are considering creative ways to stop students from using AI to cheat

  • Some professors say students are using new tech to pass off AI-generated content as their own.
  • Academics are concerned that colleges are not set up to combat the new style of cheating.
  • Professors say they are considering switching back to written assessments and oral exams.


College professors are feeling the heat when it comes to AI. 

Some professors say students are using OpenAI's buzzy chatbot, ChatGPT , to pass off AI-generated content as their own.

Antony Aumann, a philosophy professor at Northern Michigan University, and Darren Hick, a philosophy professor at Furman University, both say they've caught students submitting essays written by ChatGPT.

The issue has led to professors considering creative ways to stamp out the use of AI in colleges.

Blue books and oral exams 

"I'm perplexed about how to handle AI going forward," Aumann told Insider.

He said one way he was considering tackling the problem was shifting to lockdown browsers, a type of software that aims to prevent students from cheating when taking exams online or remotely.

Other academics are considering more drastic action. 


"I'm planning on going medieval on the students and going all the way back to oral exams," Christopher Bartel, a philosophy professor at Appalachian State University, said. "They can AI generate text all day long in their notes if they want, but if they have to be able to speak it, that's a different thing."

Bartel said there were inclusivity concerns around this, however. "Students who have deep social anxieties about speaking in public is something we'll have to figure out."

"Another way to deal with AI is for faculty to avoid giving students assignments that are very well covered," he said. "If the students have to be able to engage with a unique idea that hasn't been covered very deeply in other places there isn't going to be a lot of text that the AI generator can draw from."

Aumann said some professors were suggesting going back to traditional written assessments like blue books.

"Since the students would be writing their essays in class by hand, there would not be an opportunity for them to cheat by consulting the chatbot," he said.

'The genie is out of the bottle'

Although there were red flags in the AI-generated essays that alerted both Aumann and Hick to the use of ChatGPT, Aumann thinks these are just temporary.  

He said the chatbot's essays lacked individual personality but after playing around with it, he was able to get it to write less formally. "I think that any of the red flags we have are just temporary as far as students can get around," he said.

"My worry is the genie is out of the bottle," said Hick, who believed the technology was going to get better. "That's kind of inevitable," he said. 

Bartel agreed that students could get away with using AI very easily. "If they ask the program to write one paragraph summarizing an idea, then one paragraph summarizing another idea, and edit those together it would be completely untraceable for me and it might even be a decent essay," he said.

A representative for OpenAI told Insider they didn't want ChatGPT to be used for misleading purposes.

"Our policies require that users be upfront with their audience when using our API and creative tools like DALL-E and GPT-3," the representative said. "We're already developing mitigations to help anyone identify text generated by that system."

Although there are AI detection programs that offer an analysis of how likely the text is to be written by an AI program, some academics are concerned this wouldn't be enough to prove a case of AI plagiarism.

"We will need something to account for the fact that we now have an imperfect way of testing whether or not something is a fake," Bartel said. "I don't know what that new policy is."

Axel Springer, Business Insider's parent company, has a global deal to allow OpenAI to train its models on its media brands' reporting.



ChatGPT detector could help spot cheaters using AI to write essays

A tool called GPTZero can identify whether text was produced by a chatbot, which could help teachers tell if students are getting AI to help with their homework

By Alex Wilkins

17 January 2023

A hand holds a smartphone with the OpenAI logo. People can use OpenAI’s ChatGPT to generate almost any text they want. Image: rafapress/Shutterstock

A web tool called GPTZero can identify whether an essay was generated by the artificial intelligence chatbot ChatGPT with high accuracy. This could help identify cheating in schools and misinformation, but only if OpenAI, the company behind the popular chatbot, continues to give access to the underlying AI models.
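GPTZero's exact method is proprietary, but its creator has described scoring text on signals such as perplexity: how predictable the text looks to a language model, with lower perplexity hinting at machine generation. A rough sketch of that idea, assuming GPT-2 as the scoring model:

```python
# A rough sketch of perplexity scoring, one signal detectors in this vein
# rely on (GPTZero's actual method is proprietary). Lower perplexity means
# the text looks more predictable to the model, a weak hint of machine
# generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Using the tokens as their own labels yields mean cross-entropy;
        # exponentiating that loss gives perplexity.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(perplexity("The essay confidently described Hume's views on the paradox of horror."))
```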

OpenAI is reportedly working on inserting a watermark to text that its models generate. But in the time since ChatGPT became publicly available in December 2022, millions of people…


Now AI can write students’ essays for them, will everyone become a cheat?

AI’s automatic writing tools are making it easier for students to cheat. Associate Director Rob Reich says tech companies and AI developers must agree to self-regulate.


Should I Use ChatGPT to Write My Essays?

Everything high school and college students need to know about using — and not using — ChatGPT for writing essays.

Jessica A. Kent

ChatGPT is one of the most buzzworthy technologies today.

In addition to other generative artificial intelligence (AI) models, it is expected to change the world. In academia, students and professors are preparing for the ways that ChatGPT will shape education, and especially how it will impact a fundamental element of any course: the academic essay.

Students can use ChatGPT to generate full essays based on a few simple prompts. But can AI actually produce high quality work, or is the technology just not there yet to deliver on its promise? Students may also be asking themselves if they should use AI to write their essays for them and what they might be losing out on if they did.

AI is here to stay, and it can either be a help or a hindrance depending on how you use it. Read on to become better informed about what ChatGPT can and can’t do, how to use it responsibly to support your academic assignments, and the benefits of writing your own essays.

What is Generative AI?

Artificial intelligence isn’t a twenty-first century invention. Beginning in the 1950s, data scientists started programming computers to solve problems and understand spoken language. AI’s capabilities grew as computer speeds increased and today we use AI for data analysis, finding patterns, and providing insights on the data it collects.

But why the sudden popularity of recent applications like ChatGPT? This new generation of AI goes further than just data analysis. Instead, generative AI creates new content. It does this by analyzing large amounts of data — GPT-3 was trained on 45 terabytes of data, or a quarter of the Library of Congress — and then generating new content based on the patterns it sees in the original data.

It’s like the predictive text feature on your phone; as you start typing a new message, predictive text makes suggestions of what should come next based on data from past conversations. Similarly, ChatGPT creates new text based on past data. With the right prompts, ChatGPT can write marketing content, code, business forecasts, and even entire academic essays on any subject within seconds.
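The analogy can be shown directly: under the hood, a language model assigns a probability to every possible next word, and generation is repeated sampling from those probabilities. A small sketch, using the open GPT-2 model as a stand-in for ChatGPT's much larger one:

```python
# A direct look at the "predictive text" analogy: the model scores every
# possible next token given the prompt. GPT-2 stands in for ChatGPT here.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The essay argues that"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # scores for the next token only

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {score.item():.3f}")
```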

But is generative AI as revolutionary as people think it is, or is it lacking in real intelligence?

The Drawbacks of Generative AI

It seems simple. You’ve been assigned an essay to write for class. You go to ChatGPT and ask it to write a five-paragraph academic essay on the topic you’ve been assigned. You wait a few seconds and it generates the essay for you!

But ChatGPT is still in its early stages of development, and that essay is likely not as accurate or well-written as you’d expect it to be. Be aware of the drawbacks of having ChatGPT complete your assignments.

It’s not intelligence, it’s statistics

One of the misconceptions about AI is that it has a degree of human intelligence. However, its intelligence is actually statistical analysis, as it can only generate “original” content based on the patterns it sees in already existing data and work.

It “hallucinates”

Generative AI models often provide false information — so much so that there’s a term for it: “AI hallucination.” OpenAI even has a warning on its home screen, saying that “ChatGPT may produce inaccurate information about people, places, or facts.” This may be due to gaps in its data, or because it lacks the ability to verify what it’s generating.

It doesn’t do research  

If you ask ChatGPT to find and cite sources for you, it will do so, but they could be inaccurate or even made up.

This is because AI doesn’t know how to look for relevant research that can be applied to your thesis. Instead, it generates content based on past content, so if a number of papers cite certain sources, it will generate new content that sounds like it comes from a credible source — except it likely doesn’t.

There are data privacy concerns

When you input your data into a public generative AI model like ChatGPT, where does that data go and who has access to it? 

Prompting ChatGPT with original research should be a cause for concern — especially if you’re inputting study participants’ personal information into the third-party, public application. 

JPMorgan has restricted use of ChatGPT due to privacy concerns, Italy temporarily blocked ChatGPT in March 2023 after a data breach, and Security Intelligence advises that “if [a user’s] notes include sensitive data … it enters the chatbot library. The user no longer has control over the information.”

It is important to be aware of these issues and take steps to ensure that you’re using the technology responsibly and ethically. 

It skirts the plagiarism issue

AI creates content by drawing on a large library of information that’s already been created, but is it plagiarizing? Could there be instances where ChatGPT “borrows” from previous work and places it into your work without citing it? Schools and universities today are wrestling with this question of what’s plagiarism and what’s not when it comes to AI-generated work.

To demonstrate this, one Elon University professor gave his class an assignment: Ask ChatGPT to write an essay for you, and then grade it yourself. 

“Many students expressed shock and dismay upon learning the AI could fabricate bogus information,” he writes, adding that he expected some essays to contain errors, but all of them did. 

His students were disappointed that “major tech companies had pushed out AI technology without ensuring that the general population understands its drawbacks” and were concerned about how many embraced such a flawed tool.


How to Use AI as a Tool to Support Your Work

As more students are discovering, generative AI models like ChatGPT just aren’t as advanced or intelligent as they may believe. While AI may be a poor option for writing your essay, it can be a great tool to support your work.

Generate ideas for essays

Have ChatGPT help you come up with ideas for essays. For example, input specific prompts, such as, “Please give me five ideas for essays I can write on topics related to WWII,” or “Please give me five ideas for essays I can write comparing characters in twentieth century novels.” Then, use what it provides as a starting point for your original research.

Generate outlines

You can also use ChatGPT to help you create an outline for an essay. Ask it, “Can you create an outline for a five paragraph essay based on the following topic” and it will create an outline with an introduction, body paragraphs, conclusion, and a suggested thesis statement. Then, you can expand upon the outline with your own research and original thought.

Generate titles for your essays

Titles should draw a reader into your essay, yet they’re often hard to get right. Have ChatGPT help you by prompting it with, “Can you suggest five titles that would be good for a college essay about [topic]?”
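For students comfortable with a little code, these same brainstorming prompts can be scripted against OpenAI's API rather than typed into the chat window. A minimal sketch, assuming the official openai Python package and an OPENAI_API_KEY environment variable (the model name is illustrative):

```python
# A minimal sketch of scripting brainstorming prompts against the OpenAI
# API. The chat interface works just as well; this simply automates it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def brainstorm(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(brainstorm("Please give me five ideas for essays on topics related to WWII."))
print(brainstorm("Can you create an outline for a five-paragraph essay on the causes of WWII?"))
print(brainstorm("Can you suggest five titles for a college essay about the causes of WWII?"))
```

As with the chat window, treat the output as a starting point for your own research, not as finished work.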

The Benefits of Writing Your Essays Yourself

Asking a robot to write your essays for you may seem like an easy way to get ahead in your studies or save some time on assignments. But, outsourcing your work to ChatGPT can negatively impact not just your grades, but your ability to communicate and think critically as well. It’s always the best approach to write your essays yourself.

Create your own ideas

Writing an essay yourself means that you’re developing your own thoughts, opinions, and questions about the subject matter, then testing, proving, and defending those thoughts. 

When you complete school and start your career, projects aren’t simply about getting a good grade or checking a box, but can instead affect the company you’re working for — or even impact society. Being able to think for yourself is necessary to create change and not just cross work off your to-do list.

Building a foundation of original thinking and ideas now will help you carve your unique career path in the future.

Develop your critical thinking and analysis skills

In order to test or examine your opinions or questions about a subject matter, you need to analyze a problem or text, and then use your critical thinking skills to determine the argument you want to make to support your thesis. Critical thinking and analysis skills aren’t just necessary in school — they’re skills you’ll apply throughout your career and your life.

Improve your research skills

Writing your own essays will train you in how to conduct research, including where to find sources, how to determine whether they're credible, and how to assess their relevance in supporting or refuting your argument. Knowing how to do research is another key skill required throughout a wide variety of professional fields.

Learn to be a great communicator

Writing an essay involves communicating an idea clearly to your audience, structuring an argument that a reader can follow, and making a conclusion that challenges them to think differently about a subject. Effective and clear communication is necessary in every industry.

Be impacted by what you're learning about

Engaging with the topic, conducting your own research, and developing original arguments allows you to really learn about a subject you may not have encountered before. Maybe a simple essay assignment around a work of literature, historical time period, or scientific study will spark a passion that can lead you to a new major or career.

Resources to Improve Your Essay Writing Skills

While there are many rewards to writing your essays yourself, the act of writing an essay can still be challenging, and the process may come easier for some students than others. But essay writing is a skill that you can hone, and students at Harvard Summer School have access to a number of on-campus and online resources to assist them.

Students can start with the Harvard Summer School Writing Center, where writing tutors can offer you help and guidance on any writing assignment in one-on-one meetings. Tutors can help you strengthen your argument, clarify your ideas, improve the essay's structure, and lead you through revisions.

The Harvard libraries are a great place to conduct your research, and its librarians can help you define your essay topic, plan and execute a research strategy, and locate sources. 

Finally, review “The Harvard Guide to Using Sources,” which can guide you on what to cite in your essay and how to do it. Be sure to review the “Tips For Avoiding Plagiarism” on the “Resources to Support Academic Integrity” webpage as well to help ensure your success.


The Future of AI in the Classroom

ChatGPT and other generative AI models are here to stay, so it’s worthwhile to learn how you can leverage the technology responsibly and wisely so that it can be a tool to support your academic pursuits. However, nothing can replace the experience and achievement gained from communicating your own ideas and research in your own academic essays.

About the Author

Jessica A. Kent is a freelance writer based in Boston, Mass. and a Harvard Extension School alum. Her digital marketing content has been featured on Fast Company, Forbes, Nasdaq, and other industry websites; her essays and short stories have been featured in North American Review, Emerson Review, Writer’s Bone, and others.


How-To Geek

Can ChatGPT Write Essays: Is Using AI to Write Essays a Good Idea?

Should GPT Affect Your GPA?


Key Takeaways

While ChatGPT can potentially boost essay writing, it has limitations and raises ethical concerns. Critical reasoning and fact-checking remain vital, as AI tools can sometimes lack accuracy and consistency. Ensuring ethical usage, fostering academic integrity, and integrating AI responsibly into education are essential as AI becomes commonplace.

Navigating the digital landscape of education is no small task, especially when you bring AI tools like ChatGPT into the mix. If the thought of using ChatGPT to pen your essays has crossed your mind, here's some food for thought.

ChatGPT: A Game Changer for Essay Writing

We've crossed a new threshold in essay writing, thanks to ChatGPT. This AI powerhouse can spin out structured and relevant text with minimal or even zero human intervention. However, while it has the chops to draft essays, it's crucial to understand its shortcomings and the ethical responsibility of maintaining academic integrity.


So, Can ChatGPT Draft Essays?

In a nutshell, yes. ChatGPT can whip up essays, but it's not all roses. While it can generate text that emulates human composition, its reliability in offering accurate information or holding a consistent argument can be dicey. The bottom line: It doesn't hold a candle to human reasoning, critical thinking, and fact verification.

The Two Faces of ChatGPT for Essays

Before we dive into the ethical maze of ChatGPT for essay writing, let's hash out some practical aspects.

  • Writing booster: As a first draft, AI-generated text could elevate your writing quality and introduce fresh angles.
  • Pocket-friendly: For now, ChatGPT is accessible to every student free of charge.
  • Consistency concerns: ChatGPT can occasionally crank out content that's logically disjointed or factually off.
  • Learning risks: A heavy reliance on ChatGPT might undercut the learning objectives of essay writing.

Wielding ChatGPT effectively still demands a firm grip on critical reading and reasoning skills. You still need knowledge of the subject area to produce a good essay with ChatGPT.

Navigating the Ethics of AI in Essay Writing

Employing AI to pen essays stirs up a whirlwind of academic integrity questions. It's a handy tool for sparking ideas and honing writing skills, but tipping into over-reliance could stray into academic dishonesty territory, with serious repercussions like academic sanctions, expulsion, or even degree retraction. The trick is to view AI as a sidekick, not a stand-in for your intellectual input.


Drawing Parallels: AI and Contract Cheating

Ever considered how using ChatGPT for essay writing compares to paying someone to do your essay for you (a.k.a. contract cheating)? While both scenarios involve a degree of outsourcing, they differ. With ChatGPT, you're still required to engage with the text, perform fact-checks, and ensure narrative consistency, unlike contract cheating.

Interestingly, it seems that ChatGPT is already affecting the livelihoods of professional contract cheating services . Whether this is a net positive or not is a matter of debate.


Striking a Balance: AI Assistance and Academic Integrity

ChatGPT can be a powerful tool for enhancing your essays and writing style when used responsibly. By submitting your text for assessment to ChatGPT, you can receive improvement suggestions and weave these changes into your work, preserving your original ideas while leveraging the AI's linguistic abilities.

AI Essay Writing: Handle with Care

As AI technologies charge ahead, the sophistication and prevalence of AI-driven essay-writing applications are set to grow. Students and educators must remember the potential upsides and pitfalls of incorporating AI into academic writing.

Yes, AI detection tools are on the horizon, but the line between AI-generated and human text is blurring, making detection increasingly challenging. Additionally, the risk of false positives could potentially label innocent students as cheaters. Originality AI, one of the leading companies working on AI detection tools for education, has this to say:

...at Originality.AI we don't believe that an AI detection score alone is enough for disciplinary action... (Originality AI)

Instead of banking on the impossible dream of foolproof detection, let's focus on fostering a culture of academic honesty and mindful AI usage. As AI writing tools become commonplace, we must rethink our learning assessments to integrate these tools responsibly.

As AI writing assistance becomes baked into software like Microsoft Word, it won't be feasible to penalize every student who uses it. Perhaps more importantly, the world we are preparing students for will require effective skills in using these AI tools.


Our task lies in educating students on using these tools ethically and effectively to augment their intellectual development rather than overshadow it. In a world increasingly reliant on AI, we need to ensure we're not outsourcing our critical thinking but using AI as a valuable companion on our academic journeys.


AI-assisted writing is quietly booming in academic journals. Here’s why that’s OK


Julian Koplin, Lecturer in Bioethics, Monash University & Honorary Fellow, Melbourne Law School


If you search Google Scholar for the phrase “as an AI language model”, you’ll find plenty of AI research literature and also some rather suspicious results. For example, one paper on agricultural technology says:

As an AI language model, I don’t have direct access to current research articles or studies. However, I can provide you with an overview of some recent trends and advancements …

Obvious gaffes like this aren’t the only signs that researchers are increasingly turning to generative AI tools when writing up their research. A recent study examined the frequency of certain words in academic writing (such as “commendable”, “meticulously” and “intricate”), and found they became far more common after the launch of ChatGPT – so much so that 1% of all journal articles published in 2023 may have contained AI-generated text.

(Why do AI models overuse these words? There is speculation it’s because they are more common in English as spoken in Nigeria, where key elements of model training often occur.)
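The counting behind that finding is straightforward to sketch. The snippet below is a toy reconstruction of the idea, not the researchers' actual pipeline; it assumes a hypothetical folder of plain-text files, one per publication year.

```python
from collections import Counter
from pathlib import Path
import re

# Marker words the study found became far more common after ChatGPT launched.
MARKERS = {"commendable", "meticulously", "intricate"}

# Hypothetical layout: abstracts/2019.txt, abstracts/2023.txt, ...
for path in sorted(Path("abstracts").glob("*.txt")):
    words = re.findall(r"[a-z]+", path.read_text(encoding="utf-8").lower())
    hits = Counter(w for w in words if w in MARKERS)
    rate = sum(hits.values()) / max(len(words), 1)
    print(f"{path.stem}: {rate:.4%} marker words {dict(hits)}")
```

A year-over-year jump in the marker-word rate is exactly the kind of signal the study reports.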

The aforementioned study also looks at preliminary data from 2024, which indicates that AI writing assistance is only becoming more common. Is this a crisis for modern scholarship, or a boon for academic productivity?

Who should take credit for AI writing?

Many people are worried by the use of AI in academic papers. Indeed, the practice has been described as “contaminating” scholarly literature.

Some argue that using AI output amounts to plagiarism. If your ideas are copy-pasted from ChatGPT, it is questionable whether you really deserve credit for them.

But there are important differences between “plagiarising” text authored by humans and text authored by AI. Those who plagiarise humans’ work receive credit for ideas that ought to have gone to the original author.

By contrast, it is debatable whether AI systems like ChatGPT can have ideas, let alone deserve credit for them. An AI tool is more like your phone’s autocomplete function than a human researcher.

The question of bias

Another worry is that AI outputs might be biased in ways that could seep into the scholarly record. Infamously, older language models tended to portray people who are female, black and/or gay in distinctly unflattering ways, compared with people who are male, white and/or straight.

This kind of bias is less pronounced in the current version of ChatGPT.

However, other studies have found a different kind of bias in ChatGPT and other large language models : a tendency to reflect a left-liberal political ideology.

Any such bias could subtly distort scholarly writing produced using these tools.

The hallucination problem

The most serious worry relates to a well-known limitation of generative AI systems: that they often make serious mistakes.

For example, when I asked ChatGPT-4 to generate an ASCII image of a mushroom, it provided me with the following output.

[ASCII image omitted]

It then confidently told me I could use this image of a “mushroom” for my own purposes.

These kinds of overconfident mistakes have been referred to as “AI hallucinations” and “AI bullshit”. While it is easy to spot that the above ASCII image looks nothing like a mushroom (and quite a bit like a snail), it may be much harder to identify any mistakes ChatGPT makes when surveying scientific literature or describing the state of a philosophical debate.

Unlike (most) humans, AI systems are fundamentally unconcerned with the truth of what they say. If used carelessly, their hallucinations could corrupt the scholarly record.

Should AI-produced text be banned?

One response to the rise of text generators has been to ban them outright. For example, Science – one of the world’s most influential academic journals – disallows any use of AI-generated text.

I see two problems with this approach.

The first problem is a practical one: current tools for detecting AI-generated text are highly unreliable. This includes the detector created by ChatGPT’s own developers, which was taken offline after it was found to have only a 26% accuracy rate (and a 9% false positive rate). Humans also make mistakes when assessing whether something was written by AI.

It is also possible to circumvent AI text detectors. Online communities are actively exploring how to prompt ChatGPT in ways that allow the user to evade detection. Human users can also superficially rewrite AI outputs, effectively scrubbing away the traces of AI (like its overuse of the words “commendable”, “meticulously” and “intricate”).
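Part of the reason detectors are both unreliable and easy to evade is that most rely on a statistical proxy rather than a definitive fingerprint: they score how predictable a passage is to a language model, since machine-generated prose tends to be more predictable than human prose. Here is a toy sketch of that idea using GPT-2 via the Hugging Face transformers library; it illustrates the general technique only, and is not any vendor's actual detector.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Score the passage by how well GPT-2 predicts each next token.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Lower perplexity = more predictable text, which detectors treat as a weak
# signal of machine generation. Rewriting a few words shifts the score,
# which is exactly why superficial edits can scrub away the traces.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```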

The second problem is that banning generative AI outright prevents us from realising these technologies’ benefits. Used well, generative AI can boost academic productivity by streamlining the writing process. In this way, it could help further human knowledge. Ideally, we should try to reap these benefits while avoiding the problems.

The problem is poor quality control, not AI

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record.

After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.

We need to get better at ensuring academic papers are free from serious mistakes, regardless of whether these mistakes are caused by careless use of AI or sloppy human scholarship. Not only is this more achievable than policing AI usage, it will improve the standards of academic research as a whole.

This would be (as ChatGPT might say) a commendable and meticulously intricate solution.




Students under probe for allegedly using AI in submitted academic requirements


Some students at the University of the Philippines Diliman (UPD) are under investigation after allegedly using artificial intelligence (AI) to fulfill academic requirements.

In a report by Ian Cruz on "24 Oras," an AI detector system was used to check the academic requirements submitted by the students.

"It has come to out attention about the alleged instances of academic requirements submitted by students of the University that were created by Large Language Modu le (LLM) systems, such as ChatGPT (Generative Pre-trained Transformers). LLMs answer queries by generating useful texts containing concepts and ideas learned from the large body of information available on the Internet," the Faculty of UP Artificial Intelligence Program said in a statement.

“The professor verified the students' submission with two AI detector systems that led to the conclusion that the work was most likely written by an AI.”

The university professors expressed concern over the students' use of AI, adding that there are consequences for such actions.

“We, as faculty members, are very concerned that our students here in UP are basically cheating. They misrepresented submissions coming from these AI tools as actual academic requirements,” said Dr. Eugene Rex Jalao, UPD Artificial Intelligence Program coordinator.

“For the students in question, if it is proven that they actually committed dishonesty, there are consequences. The most severe here in UP Diliman is, of course, expulsion, but penalties also range from suspension to a failing grade,” he added.

Dr. Jalao also said that AI does not need to be banned in school, but its improper use needs to be prevented.

Meanwhile, Dominic Caringal and Miguel Dichoso, both Computer Engineering students, said AI is a big help for students but there should be a limit to its use.

“The answers it gives aren't really that good; it can mainly serve as an aid so you can learn more about certain information, for example, code,” said Dominic.

They said that AI can be used as aid in learning and for research.

“In academic matters, it's used as a basis to learn more about how something should be done. It's not there to give you the actual answer, but rather to help us learn how to answer; that's essentially the limit,” said Miguel.

GMA Integrated News is trying to get a statement from the Commission on Higher Education (CHEd) as of posting time. -- Sherylin Untalan/BAP, GMA Integrated News


‘Commendable’, ‘meticulous’ and ‘intricate’: the giveaways that students have been using AI

But students cheating on essays is just the tip of the iceberg when it comes to artificial intelligence and plagiarism.


US actress Scarlett Johansson at the 76th Cannes Film Festival. Photograph: Christophe Simon/AFP via Getty Images

Breda O'Brien

Large language models (LLMs) and generative artificial intelligence (GenAI) have a plagiarism problem. And it’s not just confined to individuals seeking an unfair advantage. The problem has been baked in since the beginning of GenAI.

As third-level colleges wind down and assignments are graded, it is universally acknowledged that students are using chatbots and virtual assistants much more often. The most common use seems to be deploying GenAI (artificial intelligence capable of generating text, images and other data, typically in response to prompts) to tweak and improve a completed essay.

Alternatively, people use AI-generated content and then fact-check and paraphrase it. Both methods avoid the hallucination problem, that is, a virtual assistant merrily making up plausible answers and references. Is this just an updated version of using essays from a student a year or two ahead of you? It is still cheating. But the long-term consequences of relying on AI are much more far-reaching.

The same plagiarism problem exists with coursework at Leaving Cert level. There is significant disquiet about senior-cycle reform which mandates that every subject will have what are called additional assessment components – that is, assessments that are not traditional written exams. Currently, before senior-cycle reform is completed, of 41 subjects offered at Leaving Cert level only 12 did not have an additional assessment component. (Many of these, such as oral exams, are not susceptible to the use of AI but others, such as research projects and essays, definitely are.)


Undisclosed use of GenAI has also infested scientific research. One study (where the researchers obviously had a highly developed sense of irony) analysed scientific peer review in AI conferences after the advent of ChatGPT. The results showed that somewhere between 6.5 per cent and 16.9 per cent of text submitted as peer reviews to these conferences could have been substantially modified by LLMs.

The greatly increased use of adjectives such as “commendable”, “meticulous”, and “intricate” was one of the giveaways. At the moment, AI flattens language to blandness but that will soon change.

The actor Scarlett Johansson was outraged recently when one of OpenAI’s new voice assistants for ChatGPT4o allegedly sounded so much like her that even friends and family were fooled. She had been approached twice for permission to use her voice by Sam Altman, OpenAI’s chief executive – you know, the guy who got fired and reinstated within a week for allegedly being less than candid with his board?

She said no both times. OpenAI denies that Sky, a breathy, flirtatious voice which is one of five options for conversing with ChatGPT4o, has anything to do with Johansson but has still withdrawn the voice. Johansson was the voice of Samantha in Spike Jonze’s 2013 movie Her, where Theodore, a lonely introvert played by Joaquin Phoenix, falls in love with the near-omniscient virtual assistant. Spoiler alert – Samantha is carrying on conversations with 8,316 other people as she talks to Theodore and is in love with 641 of them. Altman loves the movie.


Johansson is a powerful, rich person but other less well-known voice actors making a modest living allege their voices have been copied and used by GenAI companies. The New York Times reported the case of a couple named Paul Skye Lehrman and Linnea Sage, who got most of their voice acting gigs on Fiverr, a low-cost freelance site. The couple alleges that Lovo, a Californian start-up, created clones of their voices illegally and that this threatens their ability to make a living.

The New York Times itself has to date spent $1 million suing OpenAI and Microsoft. It claims the companies breached fair use by not only training their LLMs on New York Times articles but also by reproducing pieces virtually word for word in chatbot answers.

The mechanisms used by GenAI companies to train their LLMs are like something from science fiction. Immense, unfathomable amounts of data are needed. A team led by New York Times technology correspondent Cade Metz discovered that by 2021 OpenAI had already used every respectable source of internet English-language texts – Wikipedia, Reddit and millions of websites and digital books. So OpenAI invented technology to transcribe video and audio, including YouTube.


You might think that Google, which owns YouTube, would have objected to this blatant breach of YouTube fair use regulations, but no. Metz alleges that Google could not point the finger at OpenAI because it was busy doing the same thing. Authors, visual artists, musicians and computer programmers are just some of the groups suing GenAI companies for using their work without permission and thereby threatening their livelihoods.

Ireland depends on big tech for economic viability. GenAI is the latest profit-seeking battleground but appears to be based on levels of plagiarism that make students furtively trying to improve essays by using ChatGPT look like rank amateurs.

GenAI is an extraordinary, world-changing technological development but is built on unpaid, stolen labour. AI is currently mostly feeding off the creativity and livelihoods of the vulnerable and powerless. But cheating has a way of coming back to bite the cheater. Could our sanguine embrace of AI’s cheating heart hasten the redundancy not just of the vulnerable but also of countless roles that once were the exclusive province of human beings?



8 Ways to Create AI-Proof Writing Prompts

Creating 100 percent AI-proof writing prompts can often be impossible, but that doesn’t mean there aren’t strategies that can limit the efficacy of AI work. These techniques can also help ensure more of the writing submitted in your classroom is human-generated.

I started seeing a big uptick in AI-generated work submitted in my classes over the last year and that has continued. As a result, I’ve gotten much better at recognizing AI work , but I’ve also gotten better at creating writing prompts that are less AI-friendly. 

Essentially, I like to use the public health Swiss cheese analogy when thinking about AI prevention: All these strategies on their own have holes but when you layer the cheese together, you create a barrier that’s hard to get through. 

The eight strategies here may not prevent students from submitting AI work, but I find these can incentivize human writing and make sure that any work submitted via AI will not really meet the requirements of the assignment. 

1. Writing AI-Proof Prompts: Put Your Prompt Into Popular AI Tools Such as ChatGPT, Copilot, and Bard

Putting your writing prompt into an AI tool will give you an immediate idea of how most AI tools will handle it. If the various AI chatbots do a good, or at least adequate, job immediately, it might be wise to tweak the prompt.

One of my classes asks students to write about a prized possession. When you put this prompt into an AI chatbot, it frequently returns an essay about a family member's finely crafted watch. Obviously, I now watch out for any essays about watches. 
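Teachers comfortable with a little scripting can automate this spot-check by sampling several completions at once, which makes recurring tropes (like those watches) stand out quickly. This is a minimal sketch using OpenAI's Python SDK, assuming an API key is configured; the prompt text is just an example to swap for your own.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Write a short personal essay about a prized possession."

# Request several independent completions in one call so that repeated
# motifs across samples are easy to spot.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
    n=3,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Sample {i} ---\n{choice.message.content}\n")
```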

2. Forbid Cliché Use

Probably the quickest and easiest way to cut back on some AI use is to come down hard on cliché use in writing assignments. AI tools are essentially cliché machines, so banning these can prevent a lot of AI use. 

Equally as important, this practice will help your students become better writers. As any good writer knows, clichés should be avoided like the plague. 

3. Incorporate Recent Events

The free version of ChatGPT only has access to events up to 2022. While there are plugins that allow it to search the internet, and other AI tools can browse the web, some students won’t get further than the free version of ChatGPT.

More importantly, in my experience, all AI tools struggle to incorporate recent events as effectively as historic ones. So connecting class material and assignments to events such as a recent State of the Union speech or the Academy Awards will make any AI writing use less effective.

4. Require Quotes

AI tools can incorporate direct quotations but most are not very good at doing so. The quotes used tend to be very short and not as well-placed within essays. 

Asking an AI tool for recent quotes also can be particularly problematic for today’s robot writers. For instance, I asked Microsoft's Copilot to summarize the recent Academy Awards using quotes, and specifically asked it to quote from Oppenheimer director Christopher Nolan’s acceptance speech. It quoted something Nolan had previously said instead. Copilot also quoted from Wes Anderson’s acceptance speech, an obvious error since Anderson wasn’t at the awards.

5. Make Assignments Personal

Having students reflect on material in their own lives can be a good way to prevent AI writing. In-person teachers can get to know their students well enough to know when these types of personal details are fabricated. 

I teach online but still find it easier to tell when a more personalized prompt was written by AI. For example, one student submitted a paper about how much she loved skateboarding that was so non-specific it screamed AI written. Another submitted a post about a pair of sneakers that was also clearly written by a "sole-less" AI (I could tell because of the clichés, among other giveaways).

6. Make Primary or Scholarly Sources Mandatory

Requiring sources that are not easily accessible on the internet can stop AI writing in its tracks. I like to have students find historic newspapers for certain assignments. The AI tools I am familiar with can’t incorporate these. 

For instance, I asked Copilot to compare coverage of the first Academy Awards in the media to the most recent awards show and to include quotes from historic newspaper coverage. The comparison was not well done and there were no quotes from historical newspaper coverage. 

AI tools also struggle to incorporate journal articles. Encouraging your students to include these types of sources ensures the work they produce is deeper than something that can be revealed by a quick Google search, which not only makes it harder for AI to write but also can raise the overall quality.  

7. Require Interviews, Field Trips, Etc. 

Building on primary and scholarly sources, you can have your students conduct interviews or go on field trips to historic sites, museums, etc. 

AI is still, thankfully, incapable of engaging in these types of activities. This requires too much work for every assignment, but it is the most effective way to truly ensure the work is human-written, not computer-generated.

If you’re still worried about AI use, you can even go a step further by asking your students to include photos of them with their interview subjects or from the field trips. Yes, AI art generators are getting better as well, but remember the Swiss cheese analogy? Every layer of prevention can help. 

8. Have Students Write During Class

As I said to start, none of the methods discussed are foolproof. Many ways around these safeguards already exist and there will be more ways to bypass these in the future. So if you’re really, really worried about AI use you may want to choose what I call the “nuclear option.” If you teach in person you can require students to write essays in person. 

This approach definitely works for preventing AI and is okay for short pieces, but for longer pieces, it has a lot of downsides. I would have trouble writing a long piece in this setting and imagine many students will as well. Additionally, this requirement could create an accusatory class atmosphere that is more focused on preventing AI use than actually teaching. It’s also not practical for online teaching. 

That all being said, given how common AI writing has become in education, I understand why some teachers will turn to this method. Hopefully, suggestions 1-7 will work but if AI-generated papers are still out of hand in your classroom, this is a blunt-force method that can work temporarily. 

Good luck and may your assignments be free of AI writing! 


MIT Technology Review


Five ways criminals are using AI

Generative AI has made phishing, scamming, and doxxing easier than ever.

Melissa Heikkilä


Artificial intelligence has brought a big boost in productivity—to the criminal underworld. 

Generative AI provides a new, powerful tool kit that allows malicious actors to work far more efficiently and internationally than ever before, says Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro. 

Most criminals are “not living in some dark lair and plotting things,” says Ciancaglini. “Most of them are regular folks that carry on regular activities that require productivity as well.”

Last year saw the rise and fall of WormGPT , an AI language model built on top of an open-source model and trained on malware-related data, which was created to assist hackers and had no ethical rules or restrictions. But last summer, its creators announced they were shutting the model down after it started attracting media attention. Since then, cybercriminals have mostly stopped developing their own AI models. Instead, they are opting for tricks with existing tools that work reliably. 

That’s because criminals want an easy life and quick gains, Ciancaglini explains. For any new technology to be worth the unknown risks associated with adopting it—for example, a higher risk of getting caught—it has to be better and bring higher rewards than what they’re currently using. 

Here are five ways criminals are using AI now. 

Phishing

The biggest use case for generative AI among criminals right now is phishing, which involves trying to trick people into revealing sensitive information that can be used for malicious purposes, says Mislav Balunović, an AI security researcher at ETH Zurich. Researchers have found that the rise of ChatGPT has been accompanied by a huge spike in the number of phishing emails.

Spam-generating services, such as GoMail Pro, have ChatGPT integrated into them, which allows criminal users to translate or improve the messages sent to victims, says Ciancaglini. OpenAI’s policies restrict people from using their products for illegal activities, but that is difficult to police in practice, because many innocent-sounding prompts could be used for malicious purposes too, says Ciancaglini. 

OpenAI says it uses a mix of human reviewers and automated systems to identify and enforce against misuse of its models, and issues warnings, temporary suspensions and bans if users violate the company’s policies. 

“We take the safety of our products seriously and are continually improving our safety measures based on how people use our products,” a spokesperson for OpenAI told us. “We are constantly working to make our models safer and more robust against abuse and jailbreaks, while also maintaining the models’ usefulness and task performance,” they added. 

In a report from February, OpenAI said it had closed five accounts associated with state-affiliated malicious actors.

Before, so-called Nigerian prince scams, in which someone promises the victim a large sum of money in exchange for a small up-front payment, were relatively easy to spot because the English in the messages was clumsy and riddled with grammatical errors, Ciancaglini says. Language models allow scammers to generate messages that sound like something a native speaker would have written.

“English speakers used to be relatively safe from non-English-speaking [criminals] because you could spot their messages,” Ciancaglini says. That’s not the case anymore. 

Thanks to better AI translation, different criminal groups around the world can also communicate better with each other. The risk is that they could coordinate large-scale operations that span beyond their nations and target victims in other countries, says Ciancaglini.

Deepfake audio scams

Generative AI has allowed deepfake development to take a big leap forward, with synthetic images, videos, and audio looking and sounding more realistic than ever . This has not gone unnoticed by the criminal underworld.

Earlier this year, an employee in Hong Kong was reportedly scammed out of $25 million after cybercriminals used a deepfake of the company’s chief financial officer to convince the employee to transfer the money to the scammer’s account. “We’ve seen deepfakes finally being marketed in the underground,” says Ciancaglini. His team found people on platforms such as Telegram showing off their “portfolio” of deepfakes and selling their services for as little as $10 per image or $500 per minute of video. One of the most popular people for criminals to deepfake is Elon Musk, says Ciancaglini. 

And while deepfake videos remain complicated to make and easier for humans to spot, that is not the case for audio deepfakes. They are cheap to make and require only a couple of seconds of someone’s voice—taken, for example, from social media—to generate something scarily convincing.

In the US, there have been high-profile cases where people have received distressing calls from loved ones saying they’ve been kidnapped and asking for money to be freed, only for the caller to turn out to be a scammer using a deepfake voice recording. 

“People need to be aware that now these things are possible, and people need to be aware that now the Nigerian king doesn’t speak in broken English anymore,” says Ciancaglini. “People can call you with another voice, and they can put you in a very stressful situation,” he adds. 

There are some ways for people to protect themselves, he says. Ciancaglini recommends agreeing on a regularly changing secret safe word between loved ones that could help confirm the identity of the person on the other end of the line.

“I password-protected my grandma,” he says.  

Bypassing identity checks

Another way criminals are using deepfakes is to bypass “know your customer” verification systems. Banks and cryptocurrency exchanges use these systems to verify that their customers are real people. They require new users to take a photo of themselves holding a physical identification document in front of a camera. But criminals have started selling apps on platforms such as Telegram that allow people to get around the requirement. 

They work by offering a fake or stolen ID and superimposing a deepfake image on top of a real person’s face to trick the verification system on an Android phone’s camera. Ciancaglini has found examples where people are offering these services for cryptocurrency website Binance for as little as $70.

“They are still fairly basic,” Ciancaglini says. The techniques they use are similar to Instagram filters, where someone else’s face is swapped for your own. 

“What we can expect in the future is that [criminals] will use actual deepfakes … so that you can do more complex authentication,” he says. 


Jailbreak-as-a-service

If you ask most AI systems how to make a bomb, you won’t get a useful response.

That’s because AI companies have put in place various safeguards to prevent their models from spewing harmful or dangerous information. Instead of building their own AI models without these safeguards, which is expensive, time-consuming, and difficult, cybercriminals have begun to embrace a new trend: jailbreak-as-a-service. 

Most models come with rules around how they can be used. Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails. 

Services such as EscapeGPT and BlackhatGPT offer anonymized access to language-model APIs and jailbreaking prompts that update frequently. To fight back against this growing cottage industry, AI companies such as OpenAI and Google frequently have to plug security holes that could allow their models to be abused. 

Jailbreaking services use different tricks to break through safety mechanisms, such as posing hypothetical questions or asking questions in foreign languages. There is a constant cat-and-mouse game between AI companies trying to prevent their models from misbehaving and malicious actors coming up with ever more creative jailbreaking prompts. 

These services are hitting the sweet spot for criminals, says Ciancaglini. 

“Keeping up with jailbreaks is a tedious activity. You come up with a new one, then you need to test it, then it’s going to work for a couple of weeks, and then OpenAI updates their model,” he adds. “Jailbreaking is a super-interesting service for criminals.”

Doxxing and surveillance

AI language models are a perfect tool for not only phishing but for doxxing (revealing private, identifying information about someone online), says Balunović. This is because AI language models are trained on vast amounts of internet data, including personal data, and can deduce where, for example, someone might be located.

As an example of how this works, you could ask a chatbot to pretend to be a private investigator with experience in profiling. Then you could ask it to analyze text the victim has written, and infer personal information from small clues in that text—for example, their age based on when they went to high school, or where they live based on landmarks they mention on their commute. The more information there is about them on the internet, the more vulnerable they are to being identified. 

Balunović was part of a team of researchers that found late last year that large language models, such as GPT-4, Llama 2, and Claude, are able to infer sensitive information such as people’s ethnicity, location, and occupation purely from mundane conversations with a chatbot. In theory, anyone with access to these models could use them this way. 

Since their paper came out, new services that exploit this feature of language models have emerged. 

While the existence of these services doesn’t indicate criminal activity, it points out the new capabilities malicious actors could get their hands on. And if regular people can build surveillance tools like this, state actors probably have far better systems, Balunović says. 

“The only way for us to prevent these things is to work on defenses,” he says.

Companies should invest in data protection and security, he adds. 



The AI Tools for Teachers Are Getting More Robust. Here’s How


Big ed-tech players are adding generative artificial intelligence tools to existing products already popular in the K-12 world. While some educators and AI experts are excited, they say these new tools make the need for teacher training on AI even more urgent.

Google is providing Gemini —its generative AI model—as an add-on for educational institutions using its Workspace for Education product. A lower-tier version of Gemini Education provides access to generative AI features in Google’s Workspace apps, such as Docs, Sheets, Slides, and Gmail, and access to the Gemini chatbot. The premium version offers additional features, such as AI-powered note taking and summaries in Meet.

Khan Academy and Microsoft are partnering to provide free access to Khanmigo—its AI-powered teaching assistant—for teachers. Both tools can help educators with administrative work and creating materials for students.

The announcements come as more educators try out the emerging technology and as school districts discuss and create guidelines and policies for AI use .

“This is a great example of the speed of AI tool development,” said Pat Yongpradit, the chief academic officer at Code.org and a leader of TeachAI, an initiative to support schools in using and teaching about AI. “And I’m really happy that the big companies are thinking about how to serve educators, and they’re not just focused on creating general purpose chatbot tools.”

How do teachers use AI?

Educators who are using AI have mostly been using ChatGPT or other free AI tools available online. But these releases from Google and Khan Academy should increase access to AI tools, Yongpradit said.

“If you’re using Google stuff, which zillions of schools are, now you have [AI] stuff to play with,” he said. And Khanmigo being free now is “wonderful, because cost has been an issue with accessing Khanmigo before.”

Mark Erlenwein, the principal of Staten Island Technical High School in New York, has yet to try Gemini, but he has tried Khanmigo and said he’s “impressed” by what he sees so far. With Khanmigo, Khan Academy has provided ready-made prompts and activities that teachers could use. For instance, if a teacher wants Khanmigo to help her create an exit ticket, she would just click that button, input a grade level, subject, and topic, and then Khanmigo will produce the exit ticket.

In the past, Erlenwein has mostly used ChatGPT, but now with these new tools, “it kind of comes down to: Do you want to go directly to ChatGPT to do the work or do you want to use these platforms that honestly would make it easier for a teacher?”


It’s helpful that Khan Academy has figured out “the mastery behind asking the prompt correctly,” Erlenwein said, “which is what I think is going to be the difference-maker here” because now educators don’t have to spend time figuring out the perfect prompt to ask the chatbot to get what they want.

Along with increasing access, these tools also most likely have better data privacy and security practices than some of the other free AI tools that have cropped up, Yongpradit said.

“A hit to [Google’s and Khan Academy’s] reputation is a much bigger hit that has way larger ramifications than some ed-tech startup maybe messing up or not being as careful about privacy and security and age appropriateness and compliance with existing policies,” he said.

Google said in its announcement that it isn’t using data that users put into Gemini to train its AI model without permission and that it isn’t sharing data with other users or organizations. And Khanmigo, which is powered by the same technology behind ChatGPT, is also not training the AI model. It does use feedback from educators to refine prompts.

Why training on how to use AI is important for teachers

Providing access to AI tools is great, educators and AI experts say, but it’s meaningless if educators don’t know how to use the tools properly.

Schools need funding and time to provide professional development for educators to understand how to use these tools, as well as how to evaluate the effectiveness of the tools, they say.

So far, more than 7 in 10 teachers said they haven’t received any professional development on using AI in the classroom, according to a nationally representative EdWeek Research Center survey of 953 educators, including 553 teachers, conducted between Jan. 31 and March 4.


“Teachers are going to have to really figure out what makes the most sense for their context,” Yongpradit said. “They still need to have that healthy skepticism and evaluation skills to make the most sense of [what the AI tool produces].”

Glenn Kleiman, a senior adviser for Stanford University’s Graduate School of Education, said it’s also important to start thinking beyond using AI to improve efficiency. Districts, researchers, and policymakers need to start thinking about the changes in content and pedagogy as AI continues to advance.

“The advances in AI tools are running ahead of advances in the human capacity to use them well,” he said.

School districts need to be thoughtful about crafting policies on AI use

The fast pace at which AI is evolving might make districts feel like they need to respond just as quickly, but they should keep calm and have a thoughtful implementation over time, Kleiman added.

Erlenwein has always been an early adopter of new technologies, and he’s excited about these new tools’ ability to increase efficiency and productivity. However, he is concerned about the lack of research on what effects they will have on K-12.

“As an early adopter, I’m ready to adopt. But I don’t know if we really know what we’re adopting,” he said.

Yongpradit of TeachAI said there are organizations that are trying to figure out how to fund the research that’s needed, but it takes time. For now, districts should trust their robust evaluation, procurement, and piloting processes, he said.




  24. ChatGPT in the classroom: The fine line between cheating and ...

    ChatGPT, an AI system that can write an entire essay for you if you just give it the needed prompt, is the newest trend exploding on college campuses. Bell says she believes ChatGPT creates a ...

  25. Students Aren't Writing Well Anymore. Can AI Help?

    Using AI To Aid Writing Gains. There's lots that can be done to fix this problem: Improved teacher professional development, greater emphasis on writing across the curriculum, etc.

  26. How teachers started using ChatGPT to grade assignments

    A new tool called Writable, which uses ChatGPT to help grade student writing assignments, is being offered widely to teachers in grades 3-12.. Why it matters: Teachers have quietly used ChatGPT to grade papers since it first came out — but now schools are sanctioning and encouraging its use. Driving the news: Writable, which is billed as a time-saving tool for teachers, was purchased last ...

  27. 'Commendable', 'meticulous' and 'intricate': the giveaways that

    'Commendable', 'meticulous' and 'intricate': the giveaways that students have been using AI But students cheating on essays is just the tip of the iceberg when it comes to artificial ...

  28. 8 Ways to Create AI-Proof Writing Prompts

    5. Make Assignments Personal. Having students reflect on material in their own lives can be a good way to prevent AI writing. In-person teachers can get to know their students well enough to know ...

  29. Five ways criminals are using AI

    Jailbreaking allows users to manipulate the AI system to generate outputs that violate those policies—for example, to write code for ransomware or generate text that could be used in scam emails.

  30. The AI Tools for Teachers Are Getting More Robust. Here's How

    Educators who are using AI have mostly been using ChatGPT or other free AI tools available online. But these releases from Google and Khan Academy should increase access to AI tools, Yongpradit said.