AI grading essays sparks ethical debates among experts

Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism detection platform Turnitin, found that half of college students used AI tools in fall 2023. Fewer faculty members used AI, but the share grew to 22% in fall 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They’re also using the burgeoning tools to create quizzes, polls, videos and interactives to “up the ante” for what’s expected in the classroom.

Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products. But while some schools have formed policies on how students can or can’t use AI for schoolwork, many do not have guidelines for teachers.

The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

“If teachers use it solely to grade, and the students are using it solely to produce a final product, it’s not going to work,” said Gayeski.

The time and place for AI

How teachers use AI depends on many factors, particularly when it comes to grading, according to Dorothy Leidner, a professor of business ethics at the University of Virginia. If the material being tested in a large class is largely declarative knowledge — so there is a clear right and wrong — then AI grading “might be even superior to human grading,” she told CNN.

But Leidner noted that when it comes to smaller classes or assignments with less definitive answers, grading should remain personalized so teachers can provide more specific feedback and get to know a student’s work — and, therefore, their progress over time. She suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those measures.

But teachers should then grade students’ work themselves when looking for novelty, creativity and depth of insight. “Using feedback that is not truly from me seems like it is shortchanging that relationship a little,” she said. Leidner agreed, saying AI feedback should particularly be avoided for doctoral dissertations and master’s theses because the student might hope to publish the work.

“It would not be right to upload the material into the AI without making the students aware of this in advance,” she said. “And maybe students should need to provide consent.”

Some teachers are leaning on software called Writable, which uses ChatGPT to help grade papers but is “tokenized,” so essays do not include any personal information and are not shared directly with the system. Other educators are using platforms such as Turnitin that offer plagiarism detection tools to help teachers identify when assignments are written by ChatGPT and other AI.

But these types of detection tools are far from foolproof; OpenAI shut down its own AI-detection tool last year due to what the company called a “low rate of accuracy.” He acknowledged that schools are having conversations about using generative AI tools to create things like promotion and tenure files, performance reviews and job postings.

He worries it’s still too early to understand how AI will be integrated into everyday life. He is also concerned that some administrators who don’t teach in classrooms may craft policy that misses nuances of instruction.

“That may create a danger of oversimplifying the problems with AI use in grading and instruction,” he said. “Oversimplification is how bad policy is made.” To start, he said educators can identify clear abuses of AI and begin policy-making around those.

Leidner, meanwhile, said universities can be very high level with their guidance, such as making transparency a priority — so students have a right to know when AI is being used to grade their work — and identifying what types of information should never be uploaded into an AI or asked of an AI.

But she said universities must also be open to “regularly reevaluating as the technology and uses evolve.”

Editor’s Note: Samantha Murphy Kelly’s article was published in CNN on 6 April 2024.

@AI University: The World’s Leading AI Guidelines Expert for Managers, Educators, and Civil Servants.

The biggest challenge in implementing generative AI in any organization is identifying guidelines for generative AI for all levels of staff. The second biggest challenge is obtaining buy-in from C-level executives and heads of departments. Organizations need to realize that implementing AI is a change and buy-in process. ~ Robren, Founder of AI University

#RobrenReview: 9 | 10
Published on: 7 April 2024.

How to Set AI Guidelines  
Read more: https://tinyurl.com/AIGuide1

