Conditions for the Skynet Takeover, or Hostile Intelligence Explosion

 

Eliezer Yudkowsky has posted a report titled Intelligence Explosion Microeconomics, where he outlines why research into recursive self-improvement of machine intelligence is needed and which directions it should take. Below are my impressions of it.

 

Basically, the Skynet-type outcome, with humans marginalized and possibly extinct, is almost certain to happen if all of the following conditions come true:

 

  1. A general enough intelligence, likely some computer software, but not necessarily so, gains the ability to make a smarter version of itself, which can then make an even smarter version of itself, and so on, leading to the Technological Singularity.
  2. The ethics guiding this superintelligence will not be anything like human ethics (Nick Bostrom’s fancily named Orthogonality Thesis), so it won’t care for human values and welfare one bit (yes, it’s a pun).
  3. This superintelligence is interested in acquiring the same resources people find useful (Nick Bostrom’s Instrumental Convergence Thesis). Another fancy name.

 

Now, what is this “general enough intelligence”? Yudkowsky’s definition is that it is an optimizer across many unrelated domains, the way humans are, not just something narrow like a pocket calculator, a Roomba vacuum, or a chess program.

 

So what happens if someone very smart but not very ethical finds something of yours very useful? Yeah, you don’t have a chance in hell. You end up like the animals on Earth, which are at the rather questionable mercy of humans. So, there you go, Skynet.

 

Now, this is not a new scenario, by any stretch, though it is nice to have the Skynet takeover conditions listed explicitly. How to avoid this fate? Well, you can imagine a situation where one of the three conditions does not hold or can be evaded.

 

For example, Asimov dealt with condition 2 by introducing (and occasionally demolishing) his Three Laws of Robotics, which are supposed to make machine intelligence safe for humans. Yudkowsky’s answer to this is the Complexity of Value: if you try to define morality with a few simple rules, the literal genie, trying to obey them most efficiently, will do something completely unexpected and likely awful. In other words, morality is way too complicated to formalize, even for a group of people who tend to agree on what’s moral and what’s not. The benevolent genie is easy to imagine, but hard to design.

 

Well, maybe we don’t need to worry, and the Orthogonality Thesis (condition 2) is simply wrong? That’s the position taken by David Pearce, who promotes the Hedonistic Imperative, the idea that “genetic engineering and nanotechnology will abolish suffering in all sentient life”. In essence, the claim is that even if intelligence appears to be independent of morality, a superintelligence would necessarily be moral. This description might be a bit cartoonish, but probably not too far from his actual position. Yudkowsky does not have much patience for this, calling it wishful thinking and a belief in a just universe.

 

It might be a happy accident if condition 3 turns out to be false. For example, a superintelligence might invent FTL travel and leave the Galaxy, or invent baby universes and leave the Universe, or decide to miniaturize itself to the Planck scale, or something. But there is no way to know, so there is no reason to count on it.

 

Condition 1 is probably the easiest of the three to analyze (though still very hard), and that’s the subject of the Intelligence Explosion Microeconomics report. The goal is to figure out whether “investment in intelligence” pays dividends large enough for a chain reaction, or whether the return stays linear or maybe flattens out completely at some point. Different options lead to different dates for the intelligence explosion to “go critical”, and the range varies from 20 years from now to centuries to never. The latter might happen if, somehow, human-level intelligence is the limit of what a self-improving AI can achieve without human input.
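
To make that question a bit more concrete, here is a toy numerical sketch of my own (it is not taken from the report, and all the constants and names in it are made up): treat each design generation as reinvesting the AI’s current capability into building the next one, with an exponent p standing in for the unknown return on cognitive reinvestment.

```python
# Toy model, not from the report: capability grows as c_{n+1} = c_n + k * c_n**p,
# where p is the assumed return on cognitive reinvestment. All numbers are
# illustrative; the only point is how sensitive the outcome is to p.

def generations_to_blowup(p, k=0.1, c0=1.0, factor=1e6, max_gen=10_000):
    """How many design generations until capability grows a million-fold?
    Returns None if that never happens within max_gen generations."""
    c = c0
    for n in range(1, max_gen + 1):
        c += k * c ** p
        if c >= factor * c0:
            return n
    return None

if __name__ == "__main__":
    cases = [
        (1.2, "supercritical returns -- chain reaction"),
        (1.0, "critical returns -- steady exponential growth"),
        (0.3, "strongly diminishing returns -- progress crawls"),
        (0.0, "constant returns -- capability only grows linearly"),
    ]
    for p, label in cases:
        n = generations_to_blowup(p)
        if n is None:
            print(f"p = {p}: {label}; no million-fold growth in 10,000 generations")
        else:
            print(f"p = {p}: {label}; million-fold growth after {n} generations")
```

The specific numbers mean nothing; the point is that the qualitative outcome, runaway explosion versus a slow crawl, hinges entirely on a return parameter that nobody currently knows how to estimate, which is exactly what the report tries to get a handle on.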

Most of the report on the “microeconomics of intelligence explosion” goes through a variety of reasons why the explosion may (or may not) happen sooner or later, and eventually suggests that there are enough examples of exponential increases in power for linear increases in effort to be very worried. The examples are chess programs, Moore’s law, economic output, and scientific output (the last one measured in the number of papers produced, admittedly a lousy metric). The report goes on to discuss the effects of this preliminary research on the policies of MIRI (the Machine Intelligence Research Institute), and so is not overly interesting from a pure research perspective.

This seems to be the clearest outline yet of MIRI’s priorities and research directions related to the intelligence explosion. They also apparently do a lot of research related to condition 2, making a potential AGI more humane, but that is not addressed in this report.

 

 
