Assessing AI
Doctoral student Michael Zahorec blends philosophy and computer science to understand ethical artificial intelligence usage
If you’ve been reading the news lately, you could be forgiven for thinking artificial intelligence was on the verge of taking over the world. Today’s headlines make AI seem more like a threat to humanity than the technology that for years has quietly suggested what to watch next on Netflix.
From virtual assistants like Apple’s Siri to large language models like ChatGPT, AI is in use across industries for wide-ranging tasks from enabling automation and generating digital content to engaging in customer service, but big questions loom.
How do we know when to trust AI? How can we best evaluate AI’s ability to perform the tasks promised? What guardrails exist to ensure AI is used ethically? And when is human interaction a better choice than using AI?
Michael Zahorec, who earned his master’s from Florida State University’s Department of Computer Science in Spring 2025, is currently pursuing a doctorate through the Department of Philosophy and is focused on AI evaluation, AI explanation and responsible AI use.
At its core, AI is a predictive model that uses complex mathematical operations to generate responses to queries. In his research, Zahorec integrates philosophy and computer science perspectives to emphasize the importance of understanding the internal processes of AI models, not just a model’s behavior, to fully comprehend how a particular model functions. He also pushes for ethical guidance and responsible AI use.
“I research the philosophy behind different techniques used to understand complex generative AI models like ChatGPT,” said Zahorec, who earned a bachelor’s in philosophy and mechanical engineering in 2019 from the University of Dayton in Ohio before coming to FSU. “Exclusively analyzing a model’s output, or the behavior, doesn’t really tell us how the model works. We have to understand the model’s internal components, or how it arrived at that behavior.”
Many researchers currently disagree on the best AI evaluation practices, and Zahorec’s argument has the potential to widely shape future AI evaluation standards.
“People need to understand AI as imperfect mathematical models that aren’t always trustworthy,” Zahorec said. “I hope to help the public engage with AI’s benefits without falling prey to the potential harms, such as when AI generates incorrect or biased information.”
“Michael’s work has an essential public dimension. He’s proposing innovative, nuanced suggestions about understanding key concepts in AI. His work has lasting impacts on the standards researchers use to comprehend and evaluate AI models and responsible AI use guidelines.”
— Courtney Fugate, professor of philosophy
In 2024, Zahorec interned on the responsible AI team for health insurance company Humana and researched ways to evaluate large language models used in customer-facing tasks. Among the ways to evaluate AI is adversarial testing, in which a researcher like Zahorec intentionally tries to trick and confuse the model to behave contrary to its design in order to uncover vulnerabilities and later, strengthen them.
“This internship gave me a practical understanding of AI-related research literature,” he said. “I saw applications of AI evaluation in a real-world context, like how data scientists apply research to use AI more safely and create better products.”
In forthcoming research, Zahorec charts various uses of buzzwords, such as “transparency,” typically used by humans to describe AI design and function. His work categorizes the words by different meanings, showcasing vast disagreements in AI definitions. He’s also writing a book chapter for “The Philosophy of Artificial Intelligence,” which argues that understanding AI’s internal components is essential and explores why that is so difficult to understand in generative AI models.
“I believe we should use language models as idea generators as opposed to other paradigms, like an expert or information processor, in order to use AI responsibly,” Zahorec said. “Just because an idea is AI-generated doesn’t automatically mean it’s a good idea; verification is needed. Relying solely on AI creates potential for biased or incorrect information.”
Zahorec’s dissertation, which he’s slated to defend in March 2026, focuses on “scientific kinds” — the question of what defines groupings, like biological species or chemical elements — in society and if scientists create or discover these kinds. He argues that “kinds” are created by scientists but are grouped depending on their context in nature, meaning kinds of AI models and species are grouped in different manners.
In addition to his research, Zahorec serves as a teaching assistant in philosophy and has taught his own classes including Environmental Ethics and Logic, Reasoning and Critical Thinking. He has also lectured on AI explanation and interpretability, and he moderated the FSU-hosted “AI and its Impact on Higher Education” panel discussion in September 2025. Following graduation, Zahorec plans to pursue a career in academia to continue teaching and conducting research.
“Michael’s work has an essential public dimension,” said Courtney Fugate, professor of philosophy and Zahorec’s adviser. “He’s proposing innovative, nuanced suggestions about understanding key concepts in AI. His work has lasting impacts on the standards researchers use to comprehend and evaluate AI models and responsible AI use guidelines.”
Carly Nelson is an FSU alumna who earned a bachelor's degree in advertising in 2025. She is currently pursuing a master's degree in strategic communications with plans to graduate in Summer 2026.