Learning in the age of AI

Kentucky college leaders and students share tips for navigating new technology
No technology in recent history has divided experts more than artificial intelligence. The most optimistic forecasters predict an AI-driven future of almost unimaginable prosperity and productivity. Pessimistic observers foresee dire economic consequences at best, and human extinction at worst. Still others predict that most things will stay about the same.
Here’s one thing we know for certain: large language models like ChatGPT have surged in popularity on college campuses, where students use them for everything from brainstorming project ideas to writing full papers. As the technology evolves, educators and students across Kentucky are grappling with how to use it well.
Mohamed Shehata, assistant professor of computer science at Midway University, says that to understand AI’s limits, it’s important to know how it works. Put simply, generative AI models function like skilled quilters. They cut, rearrange and stitch together the information they have been trained on to make seemingly new products. Generative AI can create essays, articles, images, videos, audio or even software code.
Shehata reminds students that, although AI results can seem intricate, they are not entirely original in the way human thinking can be. “AI predicts answers it deems statistically accurate based on data it has been fed,” he explains. “It operates through logic and algorithms.”
Leah Simpson, system director of online learning and faculty development for the Kentucky Community and Technical College System, elaborates on what this means for the quality of student learning when using AI.
“AI runs on data, and the results it provides are only as good as the data it was trained with,” she says. “It doesn’t know answers. It predicts them. When a student relies too heavily on AI, it hinders learning and means the student is at risk of shallow or incorrect answers.”
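For readers who want to see the prediction idea in action, here is a minimal sketch in Python. It is a toy illustration, not how a real large language model is built; the tiny training text and the predict_next function are invented for this example. But the core behavior Shehata and Simpson describe is the same: the program doesn’t know what comes next, it predicts the statistically most common continuation from its training data, and it has nothing to offer for words it has never seen.

from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training
# text, then predict the statistically most common continuation.
# (Invented example; real LLMs use neural networks at vastly greater scale.)
training_text = (
    "the cat sat on the mat "
    "the cat sat on the rug "
    "the cat chased the dog"
)

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training data."""
    if word not in follows:
        return "<unknown>"  # never seen it: no data, no answer
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))   # 'cat' -- the most common pairing above
print(predict_next("sat"))   # 'on'
print(predict_next("moon"))  # '<unknown>'

Ask this program about anything outside its 17 training words and it simply has no answer, which is, in miniature, why AI output is only as good as the data behind it.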
Understand the limits of AI
Students need to understand that AI has major limitations, Shehata says: “Answers can be biased, boring or wrong. As a professor, when I see the same wording across several students’ incorrect answers, I know the culprit.” He uses these shortcomings as a learning exercise. “I like to show students in class where AI failed in their homework assignments,” he says with a chuckle.
Sophia Stover is a junior at the University of Louisville who is preparing for law school. Stover, who was a 2023 Washington Youth Tour delegate sponsored by Nolin RECC, says AI tools are helpful for creating flashcards as study aids. But she is leery of depending on them too much. “My friends in law school have had AI hallucinate cases that never happened,” she says.
Hallucination refers to the tendency of AI tools to make up false information and present it as fact. Stover is alert to the dangers of outsourcing her thinking to a fallible model.
“In the courtroom, I will be defending someone’s life or business,” she says. “I don’t want to get that wrong. I plan to do my own research.”
Xinxing Wu, assistant professor of business at Midway University, encourages students to have the right perspective on AI’s abilities.
“I tell students they must treat AI as a junior assistant, not an authority,” he says. “Large language models cannot handle reasoning like the human brain. It’s a great partner in work, but it is limited in solving complicated problems.”
Do your own thinking
Trey Conatser, who directs the University of Kentucky’s Center for the Enhancement of Learning and Teaching, urges students to value their own knowledge and creativity, even as AI use becomes more common in school and in the workplace.
He uses the term “cognitive offloading” to describe what happens when people rely on technology to do their work. There’s a place for this, he says, but there’s also a downside. Because AI tools can seem more knowledgeable, students hesitate to go through the learning process themselves.
Shehata, the computer science professor at Midway, has noticed this shift. With the rise of AI, students are acing homework but failing in-class quizzes. “When I give students a pen-and-paper quiz or an exam with digital limits, they don’t know the material,” he says. “If you do not know the information the first time, you won’t be able to recall it.”
Shehata remembers how much content he and his peers had to master just two decades ago. Now, professors face students with shorter attention spans and higher anxiety.
“Students need to know they can focus. They can do hard work by themselves, without AI,” he says.
Conatser echoes that, saying students need to savor moments of inefficiency—the parts of papers and projects that are hard.
“We often value speed over endurance,” he says. “But sustained challenge is where real learning takes place—not just with academics, but with durable skills like perseverance. Learning requires personal effort and practice. It’s important to commit to these as we increasingly work alongside AI.”
Conatser notes that students can lose confidence when comparing their writing or knowledge with AI’s, feeling defeated and tempted to hand their work over to the machine.
“Of course, AI seems to know more than any of us individually,” he says. “But it is not human. I tell my students: ‘AI does not have your experiences. You have something to say. Use your voice.’”
Conatser urges teachers to keep pushing students to develop reading, writing and critical thinking skills. In his own writing classes, which engage students with AI tools, he clarifies that he’s not seeking perfect work. He prefers writing that wrestles with ideas and that reveals the student’s process over seemingly competent AI output.
“No professor wants to read what an AI bot has written for a student. We want to know their thoughts. We want to see what they can create,” he says.
Beware the draw of AI
Developers program AI to seem human-like and supportive to users—qualities known as anthropomorphism and sycophancy, Conatser says. Developers build in personification because they want their product to be viewed as a cooperative partner in work. And they want to encourage its use to grow their platforms.
The human-like, interactive quality of AI is powerful. When students feel cared for by AI, perhaps receiving more encouragement from a bot than from peers or teachers, it’s hard for them not to look to it for both emotional support and academic shortcuts. Research cited by the Brookings Institution shows that users no longer turn to generative AI tools primarily for help with their work; many are also seeking “companionship and therapy,” an approach that comes with significant risks.
Brennan Christmas, a junior at Western Kentucky University and a 2023 Washington Youth Tour delegate sponsored by Pennyrile Electric, is preparing for a career in secondary education and Christian ministry. He fears the first generation of AI users will be drawn in by its novelty and suffer a decline in literacy. He has observed falling academic outcomes across Kentucky over the past couple of years, a trend he thinks correlates with AI’s rapid rise.
“The decline in retention of knowledge is disturbing,” he says. “How are students going to handle life’s issues? You can’t plug everything into AI to solve.”
Skilled people still stand out
Timothy Campbell, vice president of academic affairs at Midway University, says students who do the work to write, perform computational steps, and plan and organize projects on their own will stand out in tomorrow’s job market.
“We assure students their work will be rewarded,” he says. “Yes, AI has value as a thought partner, but you are the student. You have to put the work in to understand your field. You have to strain through hard concepts, and wrestle with complex and sometimes unfinished ideas. … I think there is great satisfaction in the process our work requires of us.”
Christopher Howes, vice president of technology solutions and chief information officer at KCTCS, anticipates that the jobs of the future will involve more collaboration with AI, but that students’ human skills will remain irreplaceable.
“I encourage them to develop skills such as creativity, emotional intelligence, critical thinking and ethical judgment,” he says. “Add value by growing in your abilities to be adaptable, empathetic and capable of making nuanced decisions in complex situations.”
AI is becoming an everyday technology; it might one day be as essential as the electricity that powers our lightbulbs and the phones in our pockets. But it’s not superior to human ingenuity.
“The human mind is an extraordinary resource, and learning is truly a privilege,” says Reneau Waggoner, KCTCS vice president for academic and student success. “We are already finding meaningful ways to motivate our students—and ourselves—and we must continue to build on that momentum by nurturing critical thinking and a commitment to lifelong learning.”
AI delivers efficiencies unlike any technology before it. Yet its rapid integration into daily life raises complex and lasting questions. Ultimately, humans design and use AI, and we decide how it is applied at work, at home and in schools. Here are just a few of the questions society will grapple with for years to come.
Bias
Bias can be embedded in AI through the limited or skewed data it is trained on, as well as through “sycophantic bias,” its tendency to agree with users. If the information AI learns from is stereotyped or inaccurate, it cannot recognize those flaws and may reinforce harmful assumptions. Will young users know how to identify bias and misinformation?
Intellectual Property Rights & Plagiarism
Many AI tools are trained on the work of researchers, artists, musicians, and designers who have spent a lifetime developing their craft. Have they been fairly compensated? Is it ethical for users to claim AI-generated work as their own or profit from it? And will students be able to distinguish between legitimate assistance and academic dishonesty?
Environmental and Economic Impact
According to Wu, AI systems require vast amounts of memory, electricity, and cooling power, demands that are costly and potentially harmful to the environment. Will access to AI remain concentrated among those with significant resources, further widening economic and educational inequality?
Child Safety
AI enables the creation and rapid spread of illicit or harmful content. Can parental controls and age-verification systems truly keep pace with evolving technology?
Data Privacy
Technology companies such as Meta and Google collect vast amounts of user data, sometimes including direct messages and photos, to train AI systems. How can individuals protect their personal information? Could AI systems retrieve and share sensitive data with anonymous users?
Cognitive Health
Research suggests that overreliance on AI may reduce brain activity and problem-solving skills. Will this lead to a decline in higher-order thinking and a generation less inclined to reason, analyze, create independently, or value personal agency?
