
From Blind Spots to Complexity: Morin, Phronēsis, and the Future of Artificial Intelligence

When I recently revisited the parable of the blind men and the elephant in the context of AI, I argued that technical experts, ethicists, entrepreneurs, and regulators each feel only one rough patch of a continuously changing creature. That metaphor is apt because it also captures a deeper condition in contemporary knowledge production: the dominance of what the French philosopher-sociologist Edgar Morin calls the paradigm of simplification. Long before artificial intelligence became a boardroom mantra, Morin diagnosed a “blind intelligence” that excels at dissecting reality into manageable pieces yet falters whenever those pieces start talking back to one another. In AI, that pathology is no longer a mere philosophical curiosity; it is a governance crisis in the making.

Morin’s critique rests on three habits of mind (disjunction, reduction, and abstraction) that organise modern disciplines, corporate hierarchies, and even the regulatory state. Disjunction isolates phenomena into silos: computer scientists tweak performance metrics, social scientists survey public trust, and policy teams draft compliance documents in a separate wing. Reduction removes “noise” by focusing on quantifiable variables (model accuracy, latency, energy cost) at the expense of messier human factors. Abstraction strips context from concepts: intelligence becomes statistical pattern-matching, fairness is reduced to a mathematical constraint, and bias is measured without examining historical power relations. Together these habits produce knowledge that is precise, elegant… and dangerously partial.

Morin countered simplification with complex thought. Complexity, for him, is not an academic synonym for complication. It is a fabric of heterogeneous events, interactions, feedback loops, and chance. Living systems, whether ecosystems or economies, remain in “productive disequilibrium”; they survive by continuously compensating for turbulence, never by freezing into equilibrium. Crucially, Morin insists that an open system cannot be understood solely from the inside. Its intelligibility lies just as much in its exchanges with the environment as in its internal architecture. In short, reality resides in relationships, not merely in a collection of related things. That insight is precisely what AI discourse tends to forget when it rushes to optimise parameters or legislate single-issue guidelines.

Consider how the simplification paradigm shows up in three illustrative moves of AI culture. First, disjunction: we talk about “alignment” in technical circles, “ethics” in humanities departments, “safety” in policy think tanks, and “trust” in marketing campaigns, yet we rarely integrate these conversations into one coherent design process. Second, reduction: benchmark competitions crown the latest large language model on standardised test suites while overlooking the energy consumption of training runs and the labour conditions of data annotators. Third, abstraction: when intelligence is defined as pattern-recognition divorced from embodiment and history, it becomes easy to ignore how training data solidify social biases. As Emily Bender and colleagues warned in their influential Stochastic Parrots paper, language models that ingest the whole internet cannot help but replicate the internet’s prejudices and amplify them at industrial scale.

Morin would argue that none of these failures are accidental; they are scripted by our epistemological operating system. The cure, then, is not another compliance checklist but a paradigm shift. Complex thought directs us to reframe AI as a multi-layered, co-evolving assembly of algorithms, data pipelines, hardware supply chains, global power grids, gig-economy labellers, creative users, sceptical citizens, and the planetary biosphere that sustains them all. In this view, “model performance” is inseparable from labour equity, ecological cost, cultural representation, and geopolitical leverage. Every optimisation at one layer triggers ripples—and sometimes backlash—at other layers. The governance task is to keep those ripples visible, contested, and adaptable, rather than smoothing them out of sight.

Here Aristotle’s notion of phronēsis, or practical wisdom, proves indispensable. Phronēsis is the virtue of deliberating well about what is good in a concrete situation. It differs from both epistēmē (universal scientific knowledge) and technē (productive know-how). Phronetic judgement flourishes amid uncertainty, where rules are too crude and calculation too narrow. To govern AI phronetically is to resist the urge to declare problems “solved” by elegant metrics. It is to hold multiple, sometimes conflicting, perspectives in tension long enough to glimpse a better-informed next step. Morin supplies the cognitive lens (complexity) while Aristotle supplies the ethical posture (wise responsiveness).

What might a post-simplification, phronetic AI agenda look like in practice? It would begin with transdisciplinary stewardship. Instead of bolting ethicists onto projects as after-the-fact auditors, we would embed social scientists, domain experts, and representatives of affected communities inside the development loop from day one. Their presence would not be tokenistic but constitutive; the point is to let friction surface early, when design decisions are still malleable.

Second, we would cultivate relational metrics. Technical benchmarks remain useful, but they must be paired with indicators of environmental impact, labour conditions, cultural inclusivity, and downstream societal effects. Such composite dashboards do more than reveal trade-offs; they force organisations to own those trade-offs publicly.
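To make the idea of a composite dashboard concrete, here is a minimal sketch in Python of how a benchmark score might sit beside environmental, labour, and cultural indicators. The class name, fields, and thresholds (RelationalScorecard, training_energy_mwh, the wage cut-off, and so on) are illustrative assumptions of mine, not an established reporting standard; real indicators would be negotiated with the stakeholders described above.

# Minimal sketch of a "relational metrics" scorecard.
# All fields and thresholds are illustrative assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class RelationalScorecard:
    model_name: str
    benchmark_accuracy: float         # conventional technical metric (0-1)
    training_energy_mwh: float        # estimated energy for training runs
    annotator_hourly_wage_usd: float  # labour conditions of data annotators
    languages_covered: int            # crude proxy for cultural inclusivity

    def trade_offs(self) -> list[str]:
        """Flag tensions that a single benchmark score would hide."""
        notes = []
        if self.benchmark_accuracy > 0.9 and self.training_energy_mwh > 1000:
            notes.append("High accuracy achieved at a large energy cost.")
        if self.annotator_hourly_wage_usd < 5:
            notes.append("Benchmark gains rest on low-paid annotation labour.")
        if self.languages_covered < 10:
            notes.append("Cultural coverage is narrow relative to deployment reach.")
        return notes or ["No flagged trade-offs under these illustrative thresholds."]

if __name__ == "__main__":
    card = RelationalScorecard("example-llm", 0.93, 1500.0, 2.5, 4)
    for note in card.trade_offs():
        print(note)

Even a crude scorecard like this makes the trade-offs explicit and publicly ownable, which is the point of pairing indicators rather than ranking on a single score.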

Third, we would build living governance frameworks. Static regulations that assume a fixed technology will always lag behind self-learning models and adversarial actors. Policies should therefore function as “boundary objects” that evolve through iterative versioning, public comment, and real-world stress tests, much like open-source software. The European Union’s AI Act hints at this approach, but only continued civic engagement will keep the rules alive to emergent harms.

Fourth, we would reimagine education. Data scientists must learn not only to code but to interrogate how they know. Courses in epistemology, anthropology, and organisational behaviour, for example, should sit alongside linear algebra and stochastic optimisation. Likewise, business leaders and policymakers must gain fluency in the technical basics, lest they outsource discernment to consultants wielding opaque jargon.

Finally, we would nurture phronetic leadership habits. Decision-makers should treat moments of uncertainty as cues to slow down, solicit dissenting voices, and triangulate across disciplines. They must be willing to change course when new evidence unsettles old assumptions, even at the cost of sunk investments or political capital. In other words, they must practise the opposite of simplification’s knee-jerk drive for closure.

This agenda is demanding, and incomplete by design. Complex thought does not promise tidy solutions; it promises a better conversation, one that keeps re-opening the frame whenever reality refuses to fit. Phronēsis, for its part, cannot be downloaded as a best-practice toolkit; it must be cultivated through habit, reflection, and shared accountability.

Sceptics may object that Morin’s philosophy is too high-minded for the world of agile sprints and quarterly OKRs. Yet history shows that paradigm shifts often begin as subversive questions whispered at the margins. Galileo Galilei asked whether heavy and light objects truly fall at different speeds. Isaac Newton wondered whether colour is an innate property of light or something light acquires by interacting with matter. Today we might ask why ever-larger models require ever-larger disclaimers, or why algorithmic fairness tools proliferate while trust in tech companies sinks. Such questions expose cracks in the structure of simplification. Once seen, they cannot be unseen.

We stand, then, at a fork. Down one path lies the comfort of disjunction, reduction, and abstraction: a world where AI systems become exponentially more powerful while our collective field of vision grows narrower. Down the other path lie complexity and practical wisdom: messier, slower, but better attuned to the living systems we inhabit. Choosing the latter does not mean abandoning precision or innovation; it means situating them within a richer tapestry of relationships and values.

If the blind men of the parable finally recognise the elephant, it will not be because each perfected his individual touch technique. It will be because they talk to one another, compare notes, revise assumptions, and allow the elephant itself to reshape their categories. Morin invites us to that dialogue; phronēsis equips us to navigate its tensions. The future of AI, or more accurately, the future of us with AI, depends on whether we accept that invitation.

Author’s note: This essay is a sequel to “Blind Men, Elephants… and Artificial Intelligence.” It draws on Edgar Morin’s On Complexity (2008), Aristotle’s Nicomachean Ethics (2009), Emily Bender et al.’s “On the Dangers of Stochastic Parrots” (2021), and ongoing debates in AI governance.