In a bid to streamline her teaching, Georgia State University professor G. Sue Kasun has turned to generative artificial intelligence. This summer, while developing a course on integrating identity and culture into language education, Kasun used Gemini, Google’s AI chatbot, to generate ideas for class activities and materials.
Kasun, who specializes in language, culture, and education, asked Gemini to help brainstorm readings and activities. She noted, “There were suggestions of offering different choices like having students generate an image, having students write a poem. And these are things that I could maybe think of but we have limits on our time, which is probably our most valuable resource as faculty.” Kasun also uses Gemini to create grading rubrics, though she stresses the importance of verifying that the AI’s output aligns with her learning objectives.
She is part of a growing community of educators incorporating AI tools into their teaching. A survey conducted by Tyton Partners revealed that nearly 40% of higher education administrators and 30% of instructors now use generative AI regularly, a significant increase from earlier in 2023.
Further research by Anthropic, the company behind the AI chatbot Claude, indicates that professors globally are leveraging AI for tasks like curriculum development, research, grant writing, and even creating interactive learning tools.
Exploring AI in Academic Settings
Anthropic’s study analyzed about 74,000 conversations over a short period between Claude and users with higher-education email addresses. Curriculum development emerged as the primary use, accounting for 57% of the conversations. One notable application: professors used AI to build interactive simulations, such as web-based games, to aid student understanding.
Academic research was another significant category, making up 13% of interactions. Administrative tasks like budgeting, drafting letters, and setting agendas also featured prominently. The analysis suggests that while routine tasks are often automated, teaching and lesson design involve more collaboration between educators and AI.
The study has limits, however: Anthropic did not disclose how many professors the conversations came from, and the data reflects only a narrow timeframe, so the findings may not generalize.
AI in Grading and Its Challenges
Grading constituted about 7% of the AI interactions, with educators automating substantial portions of the grading process. Despite this, a survey by Anthropic and Northeastern University found that many faculty members considered AI least effective for grading tasks.
Marc Watkins, a lecturer at the University of Mississippi, expressed concerns over AI’s role in education. He warned, “This sort of nightmare scenario that we might be running into is students using AI to write papers and teachers using AI to grade the same papers. If that’s the case, then what’s the purpose of education?” Watkins also criticized the potential erosion of professor-student relationships through AI automation.
Need for Guidance in AI Use
Professor Kasun echoes the sentiment that AI should not be used for grading and emphasizes the need for institutional support in navigating AI technology. “We are here, sort of alone in the forest, fending for ourselves,” she stated.
Drew Bent of Anthropic suggests that tech companies should collaborate with educational institutions to guide the integration of AI, but he cautions against tech firms prescribing how educators should use it.
The dialogue between educators and AI developers continues as the decisions made today could shape the educational landscape for future students.
Copyright 2025 NPR