In January of this year, I was drafting a hypothetical email with a group in class. Someone suggested using something called ChatGPT to write it. I didn’t know what it was at the time, so I looked into it on my own later that night.
I learned that, with the right prompts, the large language model could write practically anything, from emails to essays to articles, in a nearly perfect conversational tone.
Through winter quarter and beyond, I heard much more about artificial intelligence, including concerns about its use. In addition to raising questions about whether it will replace human jobs and create new privacy woes, artificial intelligence programs are challenging education systems across the globe.
After hearing classmates, staff, and members of my Strategic Communications cohort talk about their uses of large language models as well as their fears, I decided to look more into how the use of artificial intelligence is evolving in the college setting.
Developing guidelines
Stacy Vander Velde is the director of UC Davis’ Office of Student Support and Judicial Affairs, or OSSJA, which is charged with upholding standards of academic honesty and responsible behavior. She said she has seen her role change with the introduction of AI technologies like ChatGPT.
According to Vander Velde, rumblings of artificial intelligence misuse began in fall quarter 2022. By the end of winter quarter 2023, the office had seen a handful of cases, and referrals climbed sharply by spring.
“Everything is evolving quickly,” said Vander Velde. “We’re trying to stay up to date as this technology unfolds.”
While UC Davis hasn’t implemented a specific policy prohibiting the use of AI, the university has made minor changes to its Academic Code of Conduct to address academic misconduct involving artificial intelligence. It has also shared tips and tools for navigating AI use with professors.
Vander Velde said she and other members of OSSJA don’t see artificial intelligence as inherently evil, though. They’re aware that students and professionals use it to better understand course material or write short blurbs.
OSSJA’s next step is to distinguish proper from improper uses of artificial intelligence and to develop a sound rationale for why the improper uses are unacceptable.
“In our office we really uphold the core of our process,” Vander Velde explained. “We want to make sure that the students have rights, and we also have a very important role in educating them so that they don’t find themselves crossing that line into misconduct.”
A professor’s opinion
This past spring, my intermediate microeconomics professor included a ChatGPT section on his syllabus. I expected the section to condemn its use, but I was surprised to read that his course actually welcomed it.
Professor Erich Muehlegger was interested in learning how students use large language models like ChatGPT to learn new material. Funnily enough, he used ChatGPT to write the artificial intelligence section of his syllabus.
“I actually hadn’t seen students using ChatGPT to solve problems before writing the syllabus,” said Muehlegger. “I put that section there on my own after seeing how ChatGPT was able to solve open-ended economics problems.”
Muehlegger started exploring ChatGPT when it first launched in November 2022. He said he’s noticed that ChatGPT is best at answering questions on topics that have been written about extensively. When it comes to nuanced topics or math problems, however, it doesn’t fare as well. As a large language model, ChatGPT is designed to generate text, not solve mathematical equations.
“Think about how [problem-solving app] Photomath is good at solving basic linear algebra problems, but doesn’t do well with multivariable calculus and higher levels of math,” Muehlegger explained.
Because of ChatGPT’s current struggles with mathematical and theoretical topics like economics, Muehlegger doesn’t anticipate it impacting his classes.
“I don’t see it changing the way I operate or teach for now,” Muehlegger said. “I can see it affecting other disciplines, though, like writing or the humanities.”
A student group’s focus
As a computational cognitive science major, third-year Elijah Yeboah continually seeks to increase his knowledge of computers and artificial intelligence. This past spring, he participated in and facilitated an AI-themed case competition held by the UC Davis Artificial Intelligence Student Collective, or AISC. The club aims to provide AI literacy through competitions, workshops and speaker events.
For this competition, participants were tasked with developing AI-powered solutions to enhance aspects of university student life. These included workload management, student wellness and more.
“One of the most important things about AISC for me is providing students at Davis with the opportunity to gain valuable skills that can translate to the workforce,” Yeboah said.
When the day of the case competition arrived, Yeboah had four hours to brainstorm and develop a solution to the prompt before presenting it.
“It was an amazing exercise for public speaking,” said Yeboah.
Yeboah’s outlook on AI is largely positive. He sees it changing the way students approach homework and studying for the better, provided they are thoughtful about how they use it. When used incorrectly, though, Yeboah said AI can offload valuable skills like generating ideas and summarizing one’s own thinking.
“Overall, I think AI can be a tremendous aid to college students with proper self-limitations,” said Yeboah. “I’d love to see more of my peers begin to understand the types of generative AI that exist, and how that can be helpful to them creatively, personally and professionally.”