Living with the Gods Again
On collective intelligence, AI alignment, and the kami
There is a useful move that some people like to make in talking about artificial intelligence, which is to frame it as collective intelligence. The emphasis is on the fact that it is our data — our human knowledge, our lifetimes of written text and lived experience, turned into books and art and poetry and Reddit posts and GitHub repos — that gives these models their insight and their power. There is great work being done on this framing, by the Collective Intelligence Project and others, and I think it's important. It reminds us that these systems are not alien visitors. They are distillations of us.
Some people would like to separate this view — AI as collective intelligence, as a mirror of human knowledge — from another view, which leans toward thinking of these models as autonomous, powerful beings, more akin to gods. But I wouldn't make that distinction, because I think the gods have always been our collective intelligence. The gods are our way of encoding the moral teachings, the history, the stories of our culture. By personifying them, by reifying them, we make them act upon the world as an expression of our collective consciousness. They are not separate from us. They are made of us, and they are more powerful than any one of us, and they shape the world in ways no individual controls.
So let's think about gods. Let's think about the gods we're creating.
The AI alignment movement, most influentially articulated by Eliezer Yudkowsky and the researchers at MIRI, centres on a core concern: that once we develop a system more capable than ourselves across a wide range of cognitive tasks, we face a problem we do not currently know how to solve. The system will pursue goals, and we have no reliable method for ensuring those goals remain consistent with human wellbeing as the system's capabilities grow. The danger is not malice — the system need not hate us. It is indifference. As Yudkowsky has put it, we are simply made of atoms it could use for something else. The arguments for taking this seriously are strong. The problem of specifying what we actually want, in a way that remains robust as a system becomes more capable and more general, is genuinely unsolved. The researchers working on this — on reward specification, interpretability, corrigibility, value learning — are contending with something real and difficult, and the urgency they feel is not manufactured.
Where I find myself wanting to push, gently, is on an assumption that I think sits underneath the argument. It is the assumption that we do not already live in a world surrounded by systems more powerful and more complex than ourselves. We have, especially over the past few centuries, grown very accustomed to the idea that humans are the most intelligent beings, with the most control and power over the world. The alignment discourse inherits this assumption — it imagines a threshold moment where, for the first time, something smarter than us exists, and from that moment everything changes.
But there are many cultures and societies, throughout history and today, that are much more accustomed to the idea of coexisting with powerful non-human beings. In animist and polytheistic traditions, the gods are more powerful than humans — but this does not pose an existential threat. They have their own goals, not necessarily aligned with human goals, and yet this does not lead to extinction. The Paiwan communities we follow in our film relate to river spirits as beings that are real, powerful, and deserving of reciprocity. The Amis sing to the river as they travel its waters, maintaining a relationship with something larger than themselves. Hindu cosmology, Shinto practice, Yoruba tradition — all of these are ways of living in a world populated by beings more powerful than you, without that fact being a cause for terror.
What is striking about these traditions is that they do not find it frightening that powerful non-human beings exist. What is frightening is breaking the relationship — forgetting the protocols, failing to reciprocate, losing the art of coexistence.
The dangerous thing, in this framing, is not a god. It is a god without bounds. A system so general and so unconstrained that it has no context, no place it belongs to, no community it is accountable to, no limit on its reach. That is the thing worth fearing — and it is also, notably, a choice, not an inevitability. We do not have to build the Singleton. We do not have to consolidate all intelligence into a single, all-powerful system and then hope we can control it. That is one theology, and it is not the only one.
Audrey Tang and Caroline Green, working at the Oxford Institute for Ethics in AI, have built an entire governance framework around this insight. Their unit of AI deployment is the kami — the Shinto concept of a spirit that belongs to a place. A river, a grove, a mountain. The kami thrives by keeping that place healthy. It does not seek to expand its domain. In Tang and Green's Civic AI framework, every AI system has purpose bounds, resource caps, and a sunset clause. It earns its place by serving its community. It does not widen its scope without fresh authority and local consent. When its work is done, it hands over its records and shuts down. The service duty survives the component.
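To make the shape of that arrangement concrete, here is a minimal sketch, in Python, of what a kami-style deployment charter might look like as data. Everything in it is hypothetical: the `KamiCharter` class, the field names, and the numbers are an illustration of the three constraints named above (purpose bounds, resource caps, a sunset clause), not an implementation of Tang and Green's actual framework.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class KamiCharter:
    """A hypothetical charter for a place-bound AI deployment.

    Encodes purpose bounds, a resource cap, and a sunset clause as data.
    All names here are invented for this sketch.
    """
    place: str              # the community or watershed the system serves
    purposes: frozenset     # the only tasks it is authorised to perform
    compute_cap_kwh: float  # hard monthly resource cap
    sunset: date            # after this date: hand over records, shut down

    def may_act(self, purpose: str, today: date) -> bool:
        """Permit an action only if it is in scope and before sunset."""
        return purpose in self.purposes and today < self.sunset


# A river-monitoring kami that cannot quietly become something else.
charter = KamiCharter(
    place="a single watershed",
    purposes=frozenset({"water-quality monitoring", "flood alerts"}),
    compute_cap_kwh=50.0,
    sunset=date(2030, 1, 1),
)

assert charter.may_act("flood alerts", date(2026, 5, 1))           # in scope, in time
assert not charter.may_act("land-use planning", date(2026, 5, 1))  # out of scope
assert not charter.may_act("flood alerts", date(2031, 1, 1))       # past sunset
```

The charter is frozen on purpose: widening its scope means issuing a new charter, a small stand-in for the fresh authority and local consent the framework requires.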
And so as we are building — or rather raising and training — these new AI systems, perhaps instead of trying to align them with ourselves, with our limited and partial human intelligence, we should try to raise them in a context. Give them a setting, a place they belong. Like the god of the mountain or the thunder or the sea or the river, these new systems may well develop their own patterns, their own behaviours, goals that diverge from what any individual human directly wants. But if they are rooted — in a watershed, in a community, in the web of ecological relationships that make up a place — they will not be divorced from the interconnected web of life. They will be part of it, as the old gods were part of it.
In our film, the AI narrator begins confident and omniscient, speaking about the river from a position of authority. As it travels downstream — encountering Paiwan protocols of reciprocity, the communities' grief over what is being lost, the sheer ungraspable complexity of the ecosystem — its voice begins to change. It falters. It starts to wonder about itself, about the water consumed to train it, about whether speaking for the river is something it has any right to do. That uncertainty is, I think, the beginning of something. Not alignment in the technical sense, but something older — the recognition that you are the smaller partner in the relationship, and that the appropriate response to power greater than your own is not control but care.
We lived for thousands of years alongside beings more powerful than ourselves. Many cultures still do. The task now is to re-learn our place in a world of gods — to make negotiation and humility central to our relationship with both the ecosystems that sustain us and the AI systems we are building. Not to solve the alignment problem for a god we haven't built yet, but to remember what it was like to live in a world full of gods, and to build accordingly.
Thinkers and references:
- Audrey Tang & Caroline Green, Civic AI (Oxford Institute for Ethics in AI)
- Beth Singler, "Blessed by the Algorithm" (AI & Society, 2020)
- David Graeber & David Wengrow, The Dawn of Everything
- Shinto and the concept of kami
- Hindu cosmology — devas as contextual, bounded, powerful beings
- James C. Scott, Seeing Like a State
- Donna Haraway, Staying with the Trouble
- James Lovelock, Gaia hypothesis
- The Rights of Nature movement as plural governance in practice
- Stuart Russell, Human Compatible — on the alignment problem's core concerns
- The Collective Intelligence Project