Bryan Weaver Wants You to Think about the Ethics of AI
Artificial intelligence is everywhere—driving innovation, shaping public conversations, and raising urgent questions about its proper use. The Ohio State University even launched a new AI Fluency initiative this year. Yet, despite the push for quick adoption, Bryan Weaver, a member of the CEHV Steering Committee and Senior Lecturer in the Department of Computer Science and Engineering, encourages us to pause and consider what AI truly means for society. Below are some ideas that Dr. Weaver teaches in his classes.
Technical expertise is not ethical expertise.
Developers who are skilled at technological innovation are not automatically skilled at making good ethical decisions about those innovations.
Dr. Weaver is especially concerned about how narrow expertise can distort our understanding of AI ethics. He points to an overconfidence bias akin to the Dunning–Kruger effect: people highly skilled in one area often overestimate their abilities in others. In AI, technologists might believe their engineering skills confer ethical expertise, and moral philosophers might assume the reverse. Neither assumption is correct. Instead, Dr. Weaver urges technologists and philosophers to learn about each other's work. Only then, he says, can we develop morally sound top-down and bottom-up solutions to real-world problems involving technology. Otherwise, each side's efforts overlook important information and perspectives held by the other.
Technology is not value-neutral.
Technologies are not “just tools”; their design, no less than their use, is a matter of moral judgment.
A common belief is that technology is value-neutral—that AI systems are only as good or bad as the intentions behind them. Dr. Weaver strongly disagrees with this idea. Creating technology is inherently a moral act—it's a claim that the world needs what you build. He argues that developers are responsible for the harm their products cause; they can't just dismiss consequences by claiming neutrality. But what does this responsibility look like? How far should it extend, and how can we hold each other accountable?
There are no technical solutions to ethical problems.
If technology causes an ethical problem, a better or more advanced technology cannot solve it.
Another misconception is that ethical problems caused by technology can be fixed with more or better technology. Biased hiring algorithms, for example, are sometimes treated as problems to be solved by building “less biased” AI. Dr. Weaver describes this as a category mistake: different mathematical definitions of fairness are mutually incompatible, so no new tool can settle the ethical question of which definition should prevail (the sketch below illustrates one such conflict). “We should seek ethical solutions to ethical problems and technological solutions to technological problems,” he explains, though technology may help implement an ethical solution once one exists. This is one more reason Dr. Weaver advocates for developers and philosophers to learn about each other's fields: shared knowledge, rather than siloed expertise, is what makes moral solutions possible.
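To see the kind of incompatibility Dr. Weaver has in mind, consider a minimal sketch in Python (our illustration of the general point, not an example from his classes; all numbers are hypothetical). Two applicant pools have different rates of qualified candidates, and a hiring model must choose between two common fairness definitions: equal selection rates across groups (demographic parity) or an equal chance of being hired for the qualified candidates in each group (equal opportunity).

# Hypothetical applicant pools with different base rates of qualification.
groups = {
    "A": {"applicants": 100, "qualified": 60},
    "B": {"applicants": 100, "qualified": 30},
}

def report(policy, hires):
    """Print both fairness metrics for a given hiring policy."""
    for g, hired in hires.items():
        sel = hired / groups[g]["applicants"]   # selection rate (demographic parity)
        opp = hired / groups[g]["qualified"]    # share of qualified applicants hired (equal opportunity)
        print(f"{policy} | group {g}: selection rate {sel:.0%}, qualified hired {opp:.0%}")

# Policy 1: hire 30 qualified people from each group.
# Selection rates match (30% and 30%), but group B's qualified applicants
# are hired at twice the rate of group A's (100% vs. 50%).
report("equal selection rates", {"A": 30, "B": 30})

# Policy 2: hire 50% of each group's qualified applicants.
# Qualified applicants now fare equally, but overall selection rates
# diverge (30% vs. 15%).
report("equal opportunity", {"A": 30, "B": 15})

Whichever policy is chosen, the other fairness definition is violated, and no cleverer model escapes the trade-off as long as the base rates differ. Deciding which definition should win out is an ethical judgment, not an engineering one.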
Developers are not morally entitled to build whatever they want.
Technology is not morally good just because developers choose to create it.
Dr. Weaver also highlights how institutional incentives push developers to behave unethically: to move swiftly, scale widely, and bypass crucial checks and balances. These economic and business pressures are real—and dangerous—especially when combined with the belief that developers are free to create whatever they consider beneficial.
Ethics is not optional for developers.
Developers cannot opt out of ethics simply by refusing to take ethical and philosophical questions seriously.
Indeed, Dr. Weaver insists that “everyone is a philosopher.” We all make ethical judgments and hold philosophical positions, even if we rarely examine them. When we ignore our everyday moral reasoning, we miss chances to improve it; as a result, we may make poor ethical choices, or good ones only by luck. Philosophers, he notes, cannot alter the incentive structures driving AI development. But they can teach people to recognize when those incentives clash with ethical considerations, and to see that the real choice is not whether to make ethical decisions but whether to make them well or poorly. So, while everyone is a philosopher, not everyone is a good one. Building these skills takes training and practice, which he provides in the two classes listed below.
Students interested in exploring these issues will have two opportunities next semester. Dr. Weaver is teaching CSE 5194: Introduction to the Ethics of Artificial Intelligence, which examines moral and normative frameworks for AI, and CSE 250: Social, Ethical, and Professional Issues in Computing, which explores the broader societal responsibilities of computing professionals.