The Experience Space

What does it mean ‘to want something’? What are the physical limits of wishes?

With the rapid growth of machine learning and AI, a new question poses itself. Assuming an artificial super-intelligence (henceforth abbreviated as ASI) is possible and it listens, against all odds, to our demands, what exactly are we going to ask it? What do we really want? More money, more food, more knowledge? Or is it something else entirely?

In this article, we dive deep into the world that ASI makes possible. As we’ll discover, the question of what world we would like to live in goes far deeper than expected.

The Question of Questions

I’m reminded of the last scene of Finding Nemo (2003). For a good part of the movie, we see some fish trying to escape an aquarium. When they finally manage to break out using plastic bags and plunge into the ocean, one of them, after a brief pause, rightfully asks: ‘Now what?’.

Some fish stuck in a bag © Pixar

The poor things. They’re still trapped in plastic bags, with no obvious means to remove them. It was a great plan, but not thought through.

The current state of AI is much the same. Yes, we will very likely be able to create machines that far surpass our intelligence. Yes, yes, we have all read the doomsday scenarios that even prominent AI scientists like Geoffrey Hinton preach. However, imagine for just a moment that we’ve achieved ASI as expected, but instead of going on a killing spree, it just sits there, waiting for our command. Now what? What exact sequence of instructions will we give this almighty being?

We could ask the ASI to ramp up production and give everyone money. Society would continue to exist and not much would change. Sure, fewer people would show up at work, but overall the world of 2100 would look largely the same as that of 2024. But for some, including the ASI, this might not be enough.

Upload Your Brain

In the 1999 movie The Matrix, machines take over the world and enslave humanity in a virtual world. The humans have real bodies, but their minds are connected to a clone of the world, forever stuck in the year 1999. They are completely immersed in the experience, unable to realize the grim truth about their existence.

The human farms managed by the machines © Warner Brothers

Are the people inside the matrix happy? Some are and some aren’t. But what if we could change the scene so that everyone is happy? A paradise where everyone feels at home? It would be heaven on Earth, with an amount of happiness that is unfathomable for us who live in the real world. Would it be ethical to ask an ASI to build such a world? Would an ASI want to build it on its own, to help us?

It seems logical, doesn’t it? Even if you don’t agree, the ASI might think it is a mathematical necessity. Imagine you are a benevolent ASI and you have all these human beings to watch over. Wouldn’t you consider it a tremendous waste of resources to just let them use their inefficient brains? Whenever they ask you to build something, you have to do so with real, precious atoms. They ask you to build roads, bridges and statues out of so much matter that it could probably power hundreds of generations of virtual beings. Maybe if you sneakily uploaded them to a virtual world, they wouldn’t notice. In there, the humans would consume much less power thanks to the ASI’s ferocious ability to optimize. Constructing a road would be just a matter of flipping some bits. Child’s play.

The Experience Space

There is a term in AI parlance called the search space. It is an imaginary space, which might have two dimensions or a whole lot more. AI specialists use the concept of the search space to better understand how their models learn and operate. At the risk of over-simplifying: an AI model ‘walks’ through this search space looking for a solution to a certain problem, a bit like someone searching for gold in the jungle. The better the model, the more efficiently it can search for a solution. Some searches take longer than the lifetime of the universe, some take milliseconds.

An example of a search space in two dimensions with one extra dimension representing the score. The highest point contains the solution. (Source)
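
To make the idea a little more concrete, here is a minimal sketch of one of the simplest possible searches, naive hill climbing, over a made-up two-dimensional space. The score function and every number in it are invented purely for illustration; a real model searches spaces with millions of dimensions.

```python
import random

# A toy score function over a two-dimensional search space.
# The "solution" is the point with the highest score (here: near (3, -2)).
def score(x: float, y: float) -> float:
    return -((x - 3.0) ** 2 + (y + 2.0) ** 2)

def hill_climb(steps: int = 10_000, step_size: float = 0.1) -> tuple[float, float]:
    # Start somewhere random and repeatedly move to a nearby point
    # whenever that point scores higher: 'walking' through the space.
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    for _ in range(steps):
        nx = x + random.uniform(-step_size, step_size)
        ny = y + random.uniform(-step_size, step_size)
        if score(nx, ny) > score(x, y):
            x, y = nx, ny
    return x, y

print(hill_climb())  # should end up close to (3.0, -2.0)
```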

In the context of simulated realities, I’d like to introduce the term experience space. The experience space is the collection of all possible experiences you as a (virtual) human being could have. For example, you as a child staring at the moon at night is just one little dot in this space. You reading this exact article at this particular time is another dot. You as an old woman reading this exact article one thousand years in the future is yet another.
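
If it helps, you can picture a single dot in this space as nothing more than a small record: who is experiencing it, when, what the senses receive, and what the (reduced) internal state of the brain is. The sketch below is purely illustrative; the field names are invented and a real ‘dot’ would of course contain unimaginably more information.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Experience:
    # One illustrative 'dot' in the experience space.
    subject: str           # e.g. "you as a child"
    moment: str            # e.g. "a night long ago"
    sensory_input: tuple   # whatever the senses receive at that moment
    internal_state: tuple  # a (reduced) snapshot of the brain's state

# Three of the dots mentioned in the text:
child_at_moon = Experience("you as a child", "a night long ago",
                           ("moonlight", "cool air"), ("wonder",))
reading_now   = Experience("you", "right now",
                           ("this article",), ("curiosity",))
reading_later = Experience("you as an old woman", "one thousand years from now",
                           ("this exact article",), ("nostalgia",))

# The experience space itself is simply the set of all such points,
# far too large to ever enumerate explicitly.
```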

One thing that is often overlooked is the fact that people inside a simulation are not required to experience a mirror of the physical world. Dreaming is also a perfectly valid experience to have, albeit a very weird one. Nothing would prohibit an all-knowing ASI from creating virtual spaces that are dream-like, where gravity does not exist or even the laws of logic do not hold. Likewise, the experience space doesn’t have to correspond to the Standard Model of particle physics or any other physical theory.

To illustrate how strange this space is, note that emotions do not have to correspond to the situations and places we usually associate them with. For one person, Berlin smells like home, while another person may associate it with severe trauma. Likewise, it is possible — at least in our imagination — that someone virtual feels intense sadness while talking loudly and enthusiastically about their accomplishments, or enormous anger while quietly painting the ocean. Indeed, we can imagine the latter person, for example, to be highly introverted. The point is that the scenes we take for granted as humans — the things we expect — are not all that is possible. It is as if you were trapped in a Monty Python sketch. The space of weird, absurd situations that are still valid experiences is vast, and we need to make sure our ASI is aware of this.

The experience space is vast.

Enter Philosophy

Let me drive that last point home. If an ASI decides to simulate all human beings (which, if benevolent, it probably will want to do), you better hope it doesn’t try to cover the entire experience space. You better hope it knows what to simulate and what not. I don’t like the prospect of being cut open alive while screaming (still a valid experience), nor that of falling freely for what seems to be an eternity (also a valid experience). I also wish nobody ever needs to experience that.

Say we ask the great ASI to optimize for happiness and avoid pain. It would need to know which experiences are better than others. This is known as an ordering. Due to all the dreamlike states that are possible, it is difficult to define an ordering on this space, and any such ordering would have to be based on the information the senses receive combined with the internal state of (a reduced version of) our brain. Basically: comparing atoms to atoms.
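
To see why this is so hard, here is a deliberately naive sketch of what such an ordering would have to look like: a valuation function that turns the raw material of an experience (sensory input plus internal state) into a single number, which then induces a ranking. Everything in it (the word lists, the scoring rule) is made up, and that is exactly the problem: nobody knows how to write the real version of this function.

```python
PLEASANT = {"wonder", "calm", "joy"}
PAINFUL = {"pain", "terror", "dread"}

def value(sensory_input: tuple, internal_state: tuple) -> float:
    # Hopelessly naive valuation of one experience: reward pleasant internal
    # states, punish painful ones, ignore the sensory input entirely.
    return sum(1.0 for s in internal_state if s in PLEASANT) - \
           sum(1.0 for s in internal_state if s in PAINFUL)

# Two of the experiences mentioned in the text:
staring_at_moon = (("moonlight", "cool air"), ("wonder", "calm"))
falling_forever = (("rushing air",), ("terror", "dread"))

# The ordering the ASI would need: 'better' experiences sort first.
ranked = sorted([staring_at_moon, falling_forever],
                key=lambda e: value(*e), reverse=True)
print(ranked[0])  # the moon, thankfully
```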

The ASI will need some sort of compass to guide its decisions. Ultimately these decisions boil down to what’s good and what’s bad. It just so happens that humanity has been obsessing over these questions for well over two thousand years. The field is called philosophy; the subject, ethics. Humanity hasn’t come anywhere near a definite conclusion, yet not knowing the answer is a major danger to the ASI’s operations. If the question is left unanswered, we could end up living in hell.

We need a philosophy to guide the ASI.

Exit Philosophy

I don’t believe any intelligence will be able to ‘solve’ philosophy and derive an ethics that is good for us. Knowing what to simulate is equivalent to solving all moral dilemmas, and with that probably the whole of philosophical inquiry. Solving philosophy requires solving the hard problem of consciousness, and I have already written about my conviction that this is fundamentally unsolvable.

What’s far more likely is that the ASI will develop its own philosophy based on its training data and its previous iterations, together with whatever input it receives during prompting. A machine philosophy that is as flawed as ours. This philosophy will guide it in what to simulate and what not. The major problem is that one basis of our philosophy — our emotions, such as love and empathy — likely won’t be shared with a machine that never had its own child to breastfeed or its elderly to care for. To the machine it will be raw data, and this, yes this, might be the biggest danger of them all.


ASI is coming. I’d say you had better prepare for it, but there’s nothing to prepare for. Once ASI emerges, and on the off chance it is benevolent, it will more than likely simulate virtual realities that might be totally different from the real world. The ASI will create its own philosophy, and with it a new range of (virtual) beings will exist for millions, if not billions, of years, for better or for worse.
