Should AI be in schools?

Not according to a new poll of likely voters and parents, reports Chalkboard News.

The Voters Voice Poll, conducted by Noble Predictive Insights in July, found that over two-thirds (68 percent) of the nearly 2,300 respondents believe artificial intelligence (AI) should stay out of schools, while just 22 percent think it should be allowed in. Sixty-nine percent of parents with children under 18 also believe AI shouldn’t be in schools.

Those surveyed were asked whether they agreed or disagreed with the following statement: “AI should be kept out of schools — it makes cheating too easy.”

AI-fueled cheating has been a concern among educators, but survey data on the topic doesn’t suggest that AI is increasing the frequency of cheating. Stanford University researchers polled students from 40 different high schools and “found that the percentage of students who admitted to cheating has remained flat since the advent of ChatGPT and other readily available generative AI tools,” reported Education Week. (The researchers have responded to questions about whether the surveyed students could be lying about cheating, which of course is plausible.)

Pew Research Center data from a fall 2023 survey of students aged 13 to 17 found that among those familiar with ChatGPT, 20 percent said using it to write essays was acceptable compared to 57 percent who said it was not. Using it to research new topics, on the other hand, was viewed as acceptable by 69 percent of student respondents.

Even before ChatGPT was released, cheating was more common than many may have realized. Surveys of more than 70,000 high school students from 2002 to 2015 by the International Center for Academic Integrity found that 64 percent of students admitted to cheating on a test — “a similar outcome to Stanford’s findings after the rise of AI chatbot tools,” pointed out The 74.

Of course, this could change as students become more familiar with the available technology, and it is something that will likely need to be monitored.

More educators are now using AI detection tools (which themselves use AI to function), according to a survey by the Center for Democracy and Technology, with those same respondents reporting that “students are increasingly getting in trouble for using AI to complete assignments,” noted Education Week. A majority of the surveyed teachers also said that “generative AI has made them more distrustful of whether their students’ work is actually theirs.”

One educator shared with Education Week that teachers have used the AI-detection tools “as a check on their gut instincts when they have suspicions that a student has improperly used generative AI.”

“For example, we had a teacher recently who had students writing paragraphs about Supreme Court cases, and a student used AI to generate answers to the questions,” he said. “For her, it did not match what she had seen from the student in the past, so she went online to use one of the tools that are available to check for AI usage. That’s what she used as her decider.”

Annie Chechitelli, chief product officer of Turnitin, a company whose software identifies when AI writing tools have been used, told Education Week she believes that as educators become more comfortable with generative AI, the focus “will shift from detection to transparency.”

“[H]ow should students cite or communicate the ways they’ve used AI? When should educators encourage students to use AI in assignments? And do schools have clear policies around AI use and what, exactly, constitutes plagiarism or cheating?”

“What the feedback we’re hearing now from students is: ‘I’m gonna use it. I would love a little bit more guidance on how and when so I don’t get in trouble,’” Chechitelli said.