Chatbots and changes: UTM undergrad examines how data science profs are grappling with AI


In the age of ChatGPT and other supercharged AI tools, university students might be asked to evaluate a chatbot's response in one class and to work on a locked-down computer under supervision in another.

The range of experiences underscores how large language models (LLMs) are rewriting the rules of academia – not only are they changing how students learn, they’re also putting immense pressure on university instructors and how they prepare undergraduates for the working world.

This uncomfortable tension was recently documented in a research paper by Ana Elisa Lopez-Miranda, a 21-year-old fourth-year student in the mathematical and computational sciences department at the University of Toronto Mississauga.

Lopez-Miranda’s work – conducted alongside UTM and UBC instructors Rohan Alexander and Tiffany Timbers, respectively – was recently published in the Harvard Data Science Review, a significant feat for an early-career researcher.

The qualitative study mirrors a broader societal mood: fear and anxiety about the uncertain place of human workers in a world adjusting to AI.

However, many instructors are beginning to accept LLMs as a reality and are actively designing their teaching around them, the research indicates.

Figuring out how to work with – not against – what some call “a magic homework machine” is one of the core takeaways of the research, says Lopez-Miranda.

“I had a bunch of professors being like, ‘but if we don't teach them this … what if they lose their jobs to someone who does know how to use it to be more efficient,’” she recounted. “‘Shouldn't we be training them to be able to use it accurately in their job so that they can get a job?’”

The research, which was supported by funding from the University of Toronto’s Data Sciences Institute and an NSERC Alliance Grant, was assigned and conducted as part of Lopez-Miranda’s internship with the Investigative Journalism Foundation in the summer of 2025.

For the study, Lopez-Miranda and her co-authors interviewed 42 data science instructors from 33 schools in nine countries and coded their responses.

They found that some instructors are allowing students to use chatbots – which have been trained only on course materials – with their chats reviewed by instructors to see where students are getting confused.

Others have begun “scaffolding” large projects, which involves breaking them up into smaller steps (like plans, drafts and final submissions) that show a student’s thinking and make it more difficult to fully rely on LLMs.

As for assessment, several are asking students to submit videos explaining their code, line by line – a method that builds on a growing use of oral exams in class.

Some assignments ask students to critique an answer written by an LLM instead of writing their own.

Others put more weight on presentations and peer teaching so students can actively explain the material.

And a few universities now use locked-down computers in supervised classrooms to limit access to LLM tools during exams.

While 58 per cent of study participants have already integrated LLM use into their courses, almost a third have not – and do not plan to use the tech in their classrooms, the research indicates.

Lopez-Miranda says some instructors believe that AI can give students the “illusion” of learning, when in fact they miss the fundamentals and slowly lose skills they’re in school to build.

The study’s interviews also revealed a clear blind spot, says Lopez-Miranda: instructors don’t fully know how students use or feel about LLMs, a gap she wants to address in follow-up research.

“I was surprised to find a lot of instructors asking me for my own opinions, because they truly don't know how students are using them or if they have reservations about them,” she says.

For co-author Alexander, an associate professor in UTM’s department of statistical sciences, the most striking finding from the study was how deeply the technology unsettles professors’ sense of identity.

“I feel the same way,” he says. “Having to read and evaluate AI-generated papers from students – that’s not why I became a teacher.”

One of the important steps going forward, Alexander says, is to help students navigate LLMs in the best possible way.

“What we need to focus on now is developing our students’ good taste.”