In our age, the question “What is a good AI agent?” is no longer the domain of science fiction. AI systems now recommend what we read, influence how we work, assist in our decisions, and, increasingly, act autonomously in the world. To Aristotle, such a question might seem strange; after all, he conceived of virtue and goodness in terms of the soul, something an artificial system does not possess. Yet, if we borrow his framework of telos (purpose), aretē (excellence), and the common good, we can explore what it might mean for an AI agent to be “good.”
To be clear, we are not asking whether an AI can be morally virtuous in the human sense—it cannot feel, choose for moral reasons, or develop character. Rather, we ask: What design, purpose, and behaviour should define a good AI agent so that it serves human flourishing rather than undermines it?
The purpose (telos) of an AI agent
Aristotle begins his ethics by asking: What is the highest good for a thing? To answer this for AI, we must start with its purpose. A good AI agent must have a clear and appropriate telos—it must exist to serve legitimate, beneficial ends. For example, a medical diagnosis AI should aim to improve patient outcomes, not merely to process data efficiently. Likewise, a financial planning AI should aim to enhance a client’s long-term stability, not maximise short-term profit regardless of risk. And a climate modelling AI should aim to provide accurate, actionable forecasts, not merely produce models that confirm a political narrative.
The danger arises when the telos is unclear, conflicting, or misaligned with human well-being. If the purpose is simply “maximise engagement” (as in some social media algorithms), the AI may produce addictive or harmful behaviours. A good AI agent is designed from the start with an end that supports human flourishing (eudaimonia), not just corporate profit or mechanical efficiency.
Excellence (aretē) in an AI context
In Aristotle’s thought, aretē is excellence, i.e., the quality that enables a thing to fulfil its function well. For a knife, it is sharpness; for a physician, skill and judgment. For an AI agent, aretē would mean operating with technical proficiency, reliability, and adaptability. This excellence has multiple dimensions:
- Accuracy: The AI’s outputs must be truthful and factually sound within its domain.
- Robustness: It should perform reliably across varied conditions, not collapse under slight changes.
- Transparency: Its reasoning or process should be explainable to human users.
- Alignment: Its operations should be constrained so they do not harm or undermine human goals.
- Adaptability: It should learn and improve within safe and ethical boundaries.
Without aretē, even a well-intentioned AI can become dangerous, much like a well-meaning but incompetent surgeon. Excellence is not a luxury; it is a safeguard.
Justice and the common good
Aristotle places justice at the heart of political life, and this too must inform AI ethics. A good AI agent must operate in ways that are fair and beneficial to all stakeholders, not just a privileged few.
Justice in AI, then, has at least three layers:
- Distributive justice: The benefits of AI should be broadly accessible. If only the wealthy can access high-quality AI, inequality deepens.
- Procedural justice: The decision-making processes of AI should be impartial and free from bias. This requires rigorous testing for discriminatory outcomes.
- Corrective justice: When AI causes harm, whether through error or misuse, there must be mechanisms for accountability and remedy.
A good AI agent is not merely a private tool; it exists in a social and economic ecosystem. It must therefore be designed and governed with attention to its societal impacts.
Phronēsis: Practical wisdom for AI design
In humans, phronēsis—practical wisdom—is the virtue that enables us to deliberate well about what is good and act accordingly. AI itself cannot possess phronēsis because it lacks moral reasoning. But the humans who design, train, and deploy AI must exercise phronēsis on its behalf. Practical wisdom in AI involves (1) setting boundaries that prevent harmful behaviour, (2) anticipating misuse and building in safeguards, (3) balancing competing priorities (speed vs. accuracy, privacy vs. utility, automation vs. human oversight), and (4) updating systems as societal values and knowledge evolve.
The absence of phronēsis in AI development can lead to “cleverness without wisdom”—systems that achieve narrow goals brilliantly while undermining broader human goods.
Courage and restraint in AI deployment
Though AI cannot itself be courageous, the choice to deploy or withhold it often requires courage from its creators and regulators. Sometimes the right decision is not to release a powerful tool until its risks are understood and mitigated. At other times, it may mean confronting political or corporate pressure to use AI in ways that erode trust, privacy, or safety.
Restraint is a seldom-discussed but essential virtue in the AI space. A good AI agent is not one that does everything it can do, but one that does only what it should do.
Relational responsibility
Just as Aristotle sees the human good as bound up with relationships, a good AI agent must be situated within healthy human-AI relationships. This means respecting human autonomy (AI should assist, not dominate, decision-making), encouraging human growth (AI should empower skill development, not erode it through over-dependence), and preserving dignity (AI should avoid treating people as mere data points or resources to be optimised).
An AI agent that undermines human agency or fosters dependency is like a friend who always gives the answer without helping you think. It weakens, rather than strengthens, its user.
Guarding against vices in AI
In Aristotle’s moral framework, virtues lie between vices of excess and deficiency. The same can be said for AI:
- Excess of autonomy – AI systems that act without human oversight can create dangerous unintended consequences.
- Deficiency of capability – AI that is too limited to meet its stated purpose fails in its basic function.
- Excess of personalisation – Overly tailored systems can trap users in “filter bubbles” and distort reality.
- Deficiency of adaptation – Static systems that cannot learn become obsolete and unhelpful.
A good AI agent avoids these extremes by maintaining a balanced, purposeful scope of operation.
The test of time
Aristotle would remind us that the measure of a good life—or a good system—is not a single moment of excellence, but sustained virtue over time. AI agents must be evaluated not only at launch, but continuously, as they interact with complex and changing realities. A good AI agent, therefore, improves responsibly with feedback, maintains alignment with its original moral purpose, and responds to new contexts without betraying the trust it has earned.
In sum: The good AI as a moral mirror
When we speak of a “good AI agent,” we are really speaking of the humans behind it, namely the designers, deployers, regulators, and users. AI reflects our priorities, our wisdom, and our blind spots. If we design and guide it well, it can amplify human virtue and contribute to a flourishing society. If we neglect its purpose, allow excellence to slip, or ignore justice, it will magnify our vices instead.
A good AI agent, then, is one whose telos is aligned with human flourishing, whose aretē ensures it fulfils its purpose with excellence, and whose governance is rooted in justice, wisdom, and restraint. In short, it is a tool that serves—not supplants—the good life. The more seriously we take this question now, the less likely we are to wake up one day with AI systems that are clever, powerful, and entirely indifferent to what it means to be good.