The problems with the simulation argument

We’re all living in a simulation. Or so the argument goes.

Lots of people, not least Elon Musk, make this argument or something like it:

  1. Technology increases over time. Eventually, the technology to create a simulation as complex as our world will exist.
  2. Therefore life-like simulations are inevitable and more numerous than base realities. Ergo, we are most likely in a simulation.

It’s a simple argument. And convincing. In the current era, we’re used to lots of technological advancement. Lots of smart people have been pushing the theory too.

But there are at least two major problems with it.

  1. Technology doesn’t always increase.
  2. A monotonically increasing function is not necessarily unbounded, and even if unbounded, it may not reach a given threshold in any reasonable time.

Let’s take these in order.

Technology doesn’t always increase

Nowadays, we take continual technological progress for granted. But it doesn’t always work this way. A usual caveat to the simulation theory is: “this holds if we don’t destroy the world in nuclear war and destroy all technology”. But technology has degraded in more gradual ways.

There are plenty of examples of this; one notable example is plumbing.


Ancient Greece, as far back as the 18th century BC, and Islamic Cordoba in the 10th century AD are reported to have used indoor plumbing. Along the way, nation states came and went and the technology was lost to many.

London had grossly inadequate sanitation as late as the 1850s and '60s. It was so bad it led to an event called The Great Stink in 1858, caused by raw sewage flowing into the Thames, and it contributed to outbreaks of cholera. The modernization of London's sanitation system took 16 years.


A more modern example of lost technology comes from aeronautics. Supersonic flight used to be available to consumers on the New York–London route: a trip that takes up to 8 hours on a conventional jet took 3 hours 30 minutes on the Concorde. Eventually, costs, environmental concerns, sonic booms and overall complexity led to the Concorde's retirement. As of 2018, there is no commercial air travel at supersonic speeds. Whether first class or economy, we all go from London to New York in 8 hours or so. Technologically, it's a step back from 1976, when the Concorde entered service.

For aviation, it seems we’ve reached a point of ‘good enough’ performance. Technically, maybe we could go further, but the complications, safety risks and environmental side effects seem to outweigh the benefits.

Some other notes on aeronautics:

  • The fastest manned plane was the X-15, and nothing has come close since 1969.
  • We have not put a man on the moon since 1972.

Possible antagonists to AI development

That’s all well and good, you may say, some technology reverses or doesn’t advance. But how could the development of something as important as AI and sentient simulations be halted? Here’s one possible scenario. And remember, the argument the simulation people make is that it’s inevitable. Not likely, not possible, but inevitable.

The power of AI is associated with no company or entity more so than Google. The fate of advanced AI likely lies with Google, or companies like it. Thus, threats to Google may effectively slow down or reverse AI technology, which includes the possibility of sentient simulations.

After years of growth, Google’s stock price is down about 20% from its peak. The EU has levied fines against Google totaling about $7.4 billion. The news media and public are becoming more wary of the kind of data collection and ad targeting that Google makes its money on. Recently, Apple has begun making a dent in one of Google’s most impressive products, Google Maps.

An underestimated long-run threat to Google is the growth of DuckDuckGo. The best estimates put DDG at roughly 0.3% of Google’s search traffic. That’s not a concern for the time being, but DDG is doubling every year and still accelerating.

DuckDuckGo promises not to collect any personal data; it makes money on ads based only on search terms, not on a profile of you. If it succeeds and seizes significant market share, Google's revenue is at risk.

At root, Google is an advertising company. It makes money because it has an audience, mostly for its search. Some of Google's most impressive technology is its least profitable: its DeepMind team, which created the world-champion Go algorithm, has reportedly struggled to figure out how to earn money. Even if DuckDuckGo's search is never quite as good or as profitable as Google's personalized results, perhaps it will be 'good enough', and without the social risks of data collection and targeting. If so, it would take a lot of money away from the very costly development of groundbreaking AI.

If general AI and sentient simulations were clearly within grasp, someone would surely fund it, but it may be beyond the reach of individual investors without a mega corporation continually funding it.

Increasing functions can be bounded

Even if technology keeps increasing every year, that doesn’t mean we will reach simulation capacity.

For example, consider the function f(x) = -1/x + 1:

[Figure: graph of the function f(x) = -1/x + 1, with a horizontal asymptote at y = 1]

Imagine that the x-axis is time and the y-axis is a measure of technological capability. This function always increases, but it has an asymptote: a line it approaches but never crosses. Thus, it's possible for technology to keep increasing while the pace of increase slows, such that the threshold needed for sentient AI is never reached.
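The point is easy to check numerically. Here is a minimal sketch (the function and sample points are just illustrations, not data from the post) showing that f(x) = -1/x + 1 increases at every step yet never reaches its asymptote at y = 1:

```python
def f(x):
    """Monotonically increasing for x > 0, but bounded above by 1."""
    return -1 / x + 1

# Sample the function at ever-larger "times".
xs = [1, 10, 100, 1000, 10**6]
values = [f(x) for x in xs]

# Strictly increasing: each value exceeds the previous one...
assert all(a < b for a, b in zip(values, values[1:]))

# ...yet every value stays strictly below the asymptote y = 1.
assert all(v < 1 for v in values)
```

If capability followed a curve like this, with 1 standing in for "enough technology to simulate a world", progress would be real every single year and the simulation would still never arrive.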


Another argument is that the kind of AI developed so far is not even on the path that will lead to the sentient AI needed for realistic simulations.

Douglas Hofstadter, a pioneering AI researcher, was profiled in The Atlantic in 2013 making claims along these lines:

  "Deep Blue plays very good chess—so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?"

He wrote a follow-up in 2018 about Google Translate.

Today's AI is useful (diagnosing cancer, classifying images, plotting the fastest route, and so on), but it's not intelligent in the sense Hofstadter means. And according to this line of thinking, the current branch of AI brings us no closer to general or sentient AI.

