Blind Men, the Elephant… and Artificial Intelligence

Why seeing the whole matters in a world run by code

Prelude: The Elephant in the Data-Centre

Artificial intelligence is fast becoming today’s proverbial elephant: enormous, powerful, and difficult to fathom in its entirety. We probe it through press releases, benchmark scores, regulatory white papers, and breakthrough demos… each stroke revealing something, but never everything. In that sense, we resemble the blind men of the ancient parable more than we might care to admit.

The Parable Revisited

A group of sight-impaired men from a village are led to an elephant and invited to describe what stands before them. One touches the tail and declares the creature a rope. Another wraps his arms around a leg, describing a sturdy pillar. A third explores the trunk and concludes it must be a snake. A fourth strokes the ear and pronounces it a large leaf. The fifth touches a tusk and believes it is a spear. None is wrong, yet all are incomplete. Their partial truths become mutually exclusive claims, and a quarrel erupts. Only a sighted observer, seeing the elephant whole, recognises that each man holds a fragment of a larger reality.

Mapping the Men onto the AI Ecosystem

To make the parable concrete, it helps to see exactly who today’s “blind men” are and which part of the AI elephant each is feeling. The brief matrix below distils an expansive ecosystem of engineers, executives, ethicists, safety researchers, and everyday users into a snapshot of partial viewpoints. By locating each perspective and its associated claim, we can better understand why their conversations so often collide and why I believe integrative, phronetic leadership must step in to connect the fragments.

The “Men”                  | The part of AI they grasp               | Resulting claim
Model engineers            | Lines of code and loss curves           | “It’s just math; neutral by design.”
Corporate strategists      | Productivity gains and market share     | “AI is a pillar of competitive advantage.”
Ethicists & philosophers   | Bias audits, power asymmetries          | “Unchecked AI entrenches injustice.”
Safety researchers         | Long-tail failure modes, runway to AGI  | “Misalignment could be existential.”
Everyday users             | Chatbots that draft e-mails             | “It’s a clever assistant, nothing more.”

Each viewpoint arises from a legitimate encounter, yet the totality of AI (technical, social, economic, ecological, ethical) exceeds any single touchpoint. When these perspectives harden into absolutes, conversation stalls, policies conflict, and innovation outpaces governance.

Consequences of Partial Vision

Recognising the mosaic of stand-alone perspectives is only half the task; we must also reckon with what happens when those fragments never merge. Fragmentation doesn’t merely limit understanding; it actively shapes outcomes, steering innovation, policy, and public sentiment down divergent tracks. Consider three cascading effects of this partial vision, each showing how isolated truths, left unchecked, harden into polarised narratives, fractured governance, and misaligned incentives that echo across society.

  1. Polarised narratives: Hype merchants trumpet boundless upside while doomsayers warn of imminent catastrophe. The public jumps between techno-euphoria and techno-anxiety.
  2. Fragmented governance: Regulators legislate from narrow lenses (privacy here, competition there), leaving systemic risks such as labour displacement and geopolitical destabilisation poorly addressed.
  3. Misaligned incentives: Start-ups chase rapid deployment; academics prize accuracy metrics; civil society demands fairness. Without a shared frame, trade-offs turn into zero-sum scuffles.

Toward Integrative Sight: Phronēsis for AI

Aristotle described phronēsis (practical wisdom) as the virtue of deliberating well about what is good for humans in particular situations. Unlike tech-utopian blueprints or precautionary moratoria, phronēsis invites situated judgment that weaves together facts, values, and consequences.

My Triarchic Theory of Cognitive Disposition also adds texture here:

  • Epistēmē–Analytical Intelligence clarifies what the data is.
  • Technē–Inventive Intelligence imagines what AI could do.
  • Phronēsis–Synergic Intelligence discerns what should be done, integrating the partial truths into ethically grounded action.

Cultivating this triarchic balance moves us from grasping at isolated inputs to apprehending the “elephant” as an interconnected socio-technical organism.

Practical Steps for Leaders and Policymakers

Awareness of the problem is necessary but not sufficient. Leaders must translate insight into action that bridges the gaps between technical prowess, ethical foresight, and social impact. Below I offer a few pragmatic guidelines: concrete practices that boards, policymakers, and project teams can adopt right now to weave isolated touch-points into a coherent, responsible AI strategy. Think of them as catalysts; each step is modest on its own, but together they can shift organisations and the broader regulatory landscape from reactive patchwork to integrative stewardship.

  1. Adopt epistemic humility: Replace absolutist rhetoric (“AI will save us” / “AI will ruin us”) with hypotheses open to revision.
  2. Build inter-disciplinary teams: Pair machine-learning scientists with social scientists, ethicists, and end-user representatives at the project’s inception, not as an afterthought.
  3. Institutionalise deliberative forums: Citizen assemblies, multi-stakeholder councils, and regulatory sandboxes allow diverse “men” to compare notes before scaling systems.
  4. Iterate adaptive regulation: The EU AI Act exemplifies a risk-tiered approach, but even it must remain living legislation, updated as empirical evidence accumulates.
  5. Reward reflective metrics: Go beyond accuracy and ROI; track societal impact, energy use, labour augmentation, and distributive fairness (a hypothetical scorecard sketch follows this list).
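
To make the fifth guideline more tangible, here is a minimal sketch of what a “reflective metrics” scorecard might look like in code. Everything in it, the AIScorecard class, its field names, the thresholds, and the example figures, is a hypothetical illustration of the idea rather than any established standard or implementation.

```python
from dataclasses import dataclass


@dataclass
class AIScorecard:
    """Hypothetical scorecard tracking dimensions beyond accuracy and ROI.

    Field names, units, and thresholds are illustrative assumptions only.
    """
    system_name: str
    accuracy: float                    # conventional metric: test-set accuracy
    roi_estimate: float                # conventional metric: projected return multiple
    energy_kwh_per_1k_queries: float   # ecological footprint proxy
    jobs_augmented: int                # roles the system supports
    jobs_displaced: int                # roles the system replaces
    fairness_gap: float                # largest performance gap across groups

    def labour_balance(self) -> int:
        """Net labour effect: positive means augmentation outweighs displacement."""
        return self.jobs_augmented - self.jobs_displaced

    def review_flags(self) -> list[str]:
        """Flag for board review when non-financial dimensions look troubling."""
        flags = []
        if self.fairness_gap > 0.05:   # illustrative threshold, not a norm
            flags.append("fairness gap exceeds 5 percentage points")
        if self.labour_balance() < 0:
            flags.append("net labour displacement")
        return flags


# Illustrative usage with invented figures.
card = AIScorecard(
    system_name="loan-triage-pilot",
    accuracy=0.91,
    roi_estimate=1.8,
    energy_kwh_per_1k_queries=4.2,
    jobs_augmented=30,
    jobs_displaced=12,
    fairness_gap=0.07,
)
print(card.review_flags())  # ['fairness gap exceeds 5 percentage points']
```

The specific fields matter less than the design choice: by putting financial and non-financial dimensions in a single artifact, trade-offs that would otherwise live in separate reports become visible, and discussable, in one place.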

A Closing Reflection

The elephant was never fragmented; only the men’s perceptions were. Likewise, AI is not a set of isolated code modules or policy silos. It is a holistic phenomenon inhabiting our economies, cultures, and global systems. To govern it responsibly we must step back, share what each of us has ‘felt,’ and assemble a richer composite picture.

Let us gather our fragments, align them with phronēsis, and, together, see the elephant for what it truly is.

Afterthought

Questions for self-inquiry (South African context)

  • Where am I a “blind man” in my organisation’s AI adoption, and what part might I be missing?
  • How can I convene voices from township innovators, corporate boardrooms, and academia to co-create AI solutions that serve the common good?
  • What metrics beyond profit will signal that our AI initiatives uplift ubuntu rather than erode it?
  • In what ways can I model epistemic humility, signalling that changing my mind in light of new evidence is a strength, not a weakness?

Answering these questions won’t grant perfect sight, but it will teach us to walk around the elephant, hand in hand, rather than argue in the dark.