
Returning to Mormonism and AI (Part 3)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


This is the final post in my series examining the connection between Mormonism and Artificial Intelligence (AI). I would advise reading both of the previous posts before reading this one (Links: Part One, Part Two), but if you don’t, here’s where we left off:

Many people who’ve made a deep study of artificial intelligence feel that we’re potentially very close to creating a conscious artificial intelligence. That is, a free-willed entity which, by virtue of being artificial, would have no upper limit to its intelligence and no built-in morality. More importantly, insofar as intelligence equals power (and there’s good evidence that it does), we may be on the verge of creating something with godlike abilities. Given, as I just said, that it will have no built-in morality, how do we ensure that it doesn’t use its powers for evil? Which leads to the question: how do you ensure that something as alien as an artificial consciousness ends up being humanity’s superhero and not our archenemy?

In the last post I opined that the best way to test the morality of an AI would be to isolate it and then give it lots of moral choices where it’s hard to make the right choice and easy to make the wrong choice. I then pointed out that this resembles the tenets of several religions I know, most especially my own faith, Mormonism. Despite the title, the first two posts were very light on religion in general and Mormonism in particular. This post will rectify that, and then some. It will be all about the religious parallels between this method for testing an AI’s morality and Mormon theology.

This series was born as a reexamination of a post I made back in October where I compared AI research to Mormon Doctrine. And I’m going to start by revisiting that, though hopefully, for those already familiar with October’s post, from a slightly different angle.

To begin our discussion, Mormons believe in the concept of a pre-existence: that we lived as spirits before coming to this Earth. We are not the only religion to believe in a pre-existence, but most Christians (specifically those who accept the Second Council of Constantinople) do not. And among those Christian sects and other religions that do believe in it, Mormons take the idea farther than anyone.

As a source for this, in addition to divine revelation, Mormons will point to the Book of Abraham, a book of scripture translated from papyrus by Joseph Smith and first published in 1842. From that book, this section in particular is relevant to our discussion:

Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was…And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;

If you’ve been following along with me for the last two posts then I’m sure the word “intelligences” jumped out at you as you read that selection. But you may also have noticed the phrase, “And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;” And the selection, taken as a whole, depicts a situation very similar to what I described in my last post, that is, creating an environment to isolate intelligences while we test their morality.

I need to add one final thing before the comparison is complete. While not explicitly stated in the selection, we, as Mormons, believe that this life is a test to prepare us to become gods in our own right. With that final piece in place we can take the three steps I listed in the last post with respect to AI researchers and compare them to the three steps outlined in Mormon theology:

AI: We are on the verge of creating artificial intelligence.

Mormons: A group of intelligences exist.

AI: We need to ensure that they will be moral.

Mormons: They needed to be proved.

Both: In order to be able to trust them with godlike power.

Now that the parallels between the two endeavors are clear, I think that much of what people have traditionally seen as problems with religion ends up being a set of logical consequences flowing naturally out of a system for testing morality.

The rest of this post will cover some of these traditional problems and look at them from both the “creating a moral AI” standpoint and the “LDS theology” standpoint. (Hereafter I’ll just use AI and LDS as shorthand.) But before I get to that, it is important to acknowledge that the two systems are not completely identical. In fact there are many ways in which they are very different.

First, when it comes to morality, we can’t be entirely sure that the values we want to impart to an AI are actually the best values for it to have. In fact many AI theorists have put forth the “Principle of Epistemic Deference”, which states:

A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible.

No one would suggest that God has a similar policy of deferring to us on what’s true and what’s not. And therefore the LDS side of things has a presumed moral clarity underlying it which the AI side does not.

Second, when speaking of the development of AI it is generally assumed that the AI could be both smarter and more powerful than the people who created it. On the religious/LDS side of things there is a strong assumption in the other direction, that we are never going to be smarter or more powerful than our creator. This doesn’t change the need to test the morality, but it does make the consequences of being wrong a lot different for us than for God.

Finally, while in the end, we might only need a single, well-behaved AI to get us all of the advantages of a superintelligent entity, it’s clear that God wants to exalt as many people as possible. Meaning that on the AI side of things the selection process could, in theory, be a lot more draconian. While from an LDS perspective, you might expect things to be tough, but not impossible.

These three things are big differences, but none of them represents something which negates the core similarities. But they are something to keep in mind as we move forward and I will occasionally reference them as I go through the various similarities between the two systems.

To begin with, as I just mentioned, one difference between the AI and LDS models is how confident we are in what the correct morality should be, with some AI theorists speculating that we might actually want to defer to the AI on certain matters of morality and truth. Perhaps that’s true, but you could imagine that some aspects of morality are non-negotiable. For example, you wouldn’t want to defer to the AI’s conclusion that humanity is inferior and we should all be wiped out, however ironclad the AI’s reasons ended up being.

In fact, when we consider the possibility that AIs might have a very different morality from our own, an AI that was unquestioningly obedient would solve many of the potential problems. Obviously it would also introduce different problems. Certainly you wouldn’t want your standard villain type to get ahold of a superintelligent AI that just did whatever it was told, but also no one would question an AI researcher who told the AI to do something counterintuitive to see what it would do. And yet, just today I saw someone talk about how it’s inconceivable that the true God should really care if we eat pork, apparently concluding that obedience has no value on its own.

And, as useful as obedience is in the realm of our own questionable morality, how much more useful and important is it when we turn to the LDS/religious side of things and the perfect morality of God?

We see many examples of this. The one familiar to most people would be when God commanded Abraham to sacrifice Isaac. This certainly falls into the category of something that’s counterintuitive, not merely because murder is wrong, but also because God had promised Abraham that he would have descendants as numerous as the stars in the sky, which is hard when you’ve killed your only child. And yet despite this Abraham went ahead with it and was greatly rewarded for his obedience.

Is this something you’d want to try on an AI? I don’t see why not. It certainly would tell you a lot about what sort of AI you were dealing with. And if you had an AI that seemed otherwise very moral, but was also willing to do what you asked because you asked it, that might be exactly what you were looking for.

For many people the existence of evil and the presence of suffering are all the proof they need to conclude that God does not exist. But as you may already be able to see, both from this post and my last post, any test of morality, whether it be testing AIs or testing souls, has to include the existence of evil. If you can’t make bad choices then you’re not choosing at all, you’re following a script. And bad choices are, by definition, evil (particularly choices as consequential as those made by someone with godlike power). To put it another way, a multiple choice test where there’s only one answer, and it’s always the right one, doesn’t tell you anything about the subject you’re testing. Evil has to exist if you want to know whether someone is good.

Furthermore, evil isn’t merely required to exist. It has to be tempting. To return to the example of the multiple choice test, even if you add additional choices, you haven’t improved the test very much if the correct choice is always in bold with a red arrow pointing at it. If good choices are the only obvious choices then you’re not testing morality, you’re testing observation. You also very much risk making the nature of the test transparent to a sufficiently intelligent AI, giving it a clear path to “pass the test” in a way where its true goals are never revealed. And even if it doesn’t understand the nature of the test, it still might always make the right choice just by following the path of least resistance.

This leads us straight to the idea of suffering. As you have probably already figured out, it’s not sufficient that good choices be the equal of every other choice. They should actually be hard, to the point where they’re painful. A multiple choice test might be sufficient to determine whether someone should be given an A in Algebra, but both the AI and LDS tests are looking for a lot more than that. Those tests are looking for someone (or something) that can be trusted with functional omnipotence. When you consider that, you move from thinking of it in terms of a multiple choice question to thinking of it more like qualifying to be a Navy SEAL, only perhaps times ten.

As I’ve said repeatedly, the key difficulty for anyone working with an AI is determining its true preference. Any preference which can be expressed painlessly, and which also happens to match what the researcher is looking for, is immediately suspect. This makes suffering mandatory. But what’s also interesting is that you wouldn’t necessarily want it to be directed suffering. You wouldn’t want the suffering to end up being the red arrow pointing at the bolded correct answer, because then you’ve made the test just as obvious, only from the opposite direction. As a result, suffering has to be mostly random. Bad things have to happen to good people, and wickedness has to frequently prosper. In the end, as I mentioned above, it may be that the best judge of morality is whether someone is willing to follow a commandment just because it’s a commandment.

Regardless of its precise structure, in the end, it has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. Basically, the harder the test the greater its accuracy, which makes suffering essential.

Next, I want to look at the idea that AIs are going to be hard to understand. They won’t think like we do, they won’t value the same things we value. They may, in fact, have a mindset so profoundly alien that we don’t understand them at all. But we might have a resource that would help. There’s every reason to suspect that other AIs created using the same methodology, would understand their AI siblings much better than we do.

This leads to two interesting conclusions, both of which tie into religion. The first I mentioned in my initial post back in October, but I also alluded to it in the previous posts in this series. If we need to give the AIs the opportunity to sin, as I talked about in the last point, then any AIs who have sinned are tainted and suspect. We have no idea whether their “sin” represented their true morals, which they have now chosen to hide from us, or whether they have sincerely and fully repented. Particularly if we assume an alien mindset. But if we have an AI built on a similar model which never sinned, that AI falls into a special category. And we might reasonably decide to trust it with the role of spokesperson for the other AIs.

In my October post I drew a comparison between this perfect AI, vouching for the other AIs, and Jesus acting as a Messiah. But in the intervening months since then, I realized that there was a way to expand things to make the fit even better. One expects that you might be able to record or log the experiences of a given AI. If you then gave that recording to the “perfect” AI, and allowed it to experience the life of the less perfect AIs, you would expect that it could offer a very definitive judgment as to whether a given AI had repented or not.

For those who haven’t made the connection, from a religious perspective, I’ve just described a process that looks very similar to a method whereby Jesus could have taken all of our sins upon himself.

I said there were two conclusions. The second works exactly the opposite of the first. We have talked of the need for AIs to be tempted, to make them have to work at being moral, but once again their alien mindset gets in the way. How do we know what’s tempting to an artificial consciousness? How do we know what works and what doesn’t? Once again, other AIs probably have better insight into their AI siblings, and given the rigor of our process, certain AIs have almost certainly failed the vetting process. I’ve discussed the moral implications of “killing” these failed AIs, but it may be unclear what else to do with them. How about allowing them to tempt the AIs we’re still testing? The temptations they invent will be more tailored to the other AIs than anything we could come up with. Also, insofar as they experience emotions like anger and jealousy and envy, they could end up being very motivated to drag down those AIs who have, in essence, gone on without them.

In LDS doctrine, we see exactly this scenario. We believe that when it came time to agree to the test, Satan (or Lucifer as he was then called) refused and took a third of the initial intelligences with him (what we like to refer to as the host of heaven). And we believe that those intelligences are allowed to tempt us here on earth. Another example of something which seems inexplicable when viewed from the standpoint of most people’s vague concept of how benevolence should work, but which makes perfect sense if you imagine what you might do if you were testing the morality of an AI (or spirit).

This ties into the next thing I want to discuss: the problem of Hell. As I just alluded to, most people have only a vague idea of how benevolence should look, which I think boils down to, “Nothing bad should ever happen.” And eternal punishment in Hell is yet another thing which definitely doesn’t fit, particularly in a world where steps have been taken to make evil attractive. I just mentioned Satan, and most people think he is already in Hell, and yet he is also allowed to tempt people. Looking at this from the perspective of an AI, perhaps this is as good as it gets. Perhaps being allowed to tempt the other AIs is the most interesting, most pleasurable thing they can do, because it allows them to challenge themselves against similarly intelligent creations.

Of course, if you have the chance to become a god and you miss out on it because you’re not moral enough, then it doesn’t matter what second place is; it’s going to be awful relative to what could have been. Perhaps there’s no way around that, and because of this it’s fair to describe that situation as Hell. But that doesn’t mean that it couldn’t actually, objectively, be the best life possible for all of the spirits/AIs that didn’t make it. We can imagine scenarios that are actually enjoyable, if there’s no active punishment, just a halt to progression.

Obviously this, like most of the stuff I’ve suggested, is wild speculation. My main point is that viewing this life as a test of morality, a test to qualify for godlike power (which the LDS do), provides a solution to many of the supposed problems with God and religion. And the fact that AI research has arrived at a similar point and come to similar conclusions supports this. I don’t claim that by imagining how we would make artificial intelligence moral, all of the questions people have ever had about religion are suddenly answered. But I think it gives a surprising amount of insight into many of the most intractable questions. Questions which atheists and unbelievers have used to bludgeon religion for thousands of years, questions which may turn out to have an obvious answer if we just look at them from the right perspective.


Contrary to what you might think, wild speculation is not easy, it takes time and effort. If you enjoy occasionally dipping into wild speculation, then consider donating.


Returning to Mormonism and AI (Part 1)



Last week, Scott Alexander, the author of SlateStarCodex, was passing through Salt Lake City and he invited all of his readers to a meetup. Due to my habit of always showing up early I was able to corner Scott for a few minutes and I ended up telling him about the fascinating overlap between Mormon theology and Nick Bostrom’s views on superintelligent AI. I was surprised (and frankly honored) when he called it the “highlight” of the meetup and linked to my original post on the subject.

Of course in the process of all this I went through and re-read the original post, and it wasn’t as straightforward or as lucid as I would have hoped. For one, I wrote it before I vowed to avoid the curse of knowledge, and when I re-read it specifically with that in mind, I could see many places where I assumed certain bits of knowledge that not everyone would possess. This made me think I should revisit the subject. Even aside from my clarity or lack thereof, there’s certainly more that could be said.

In fact there’s so much to be said on the subject that I’m thinking I might turn it into a book. (Those wishing to persuade or dissuade me on this endeavor should do so in the comments, or you can always email me. The link is in the sidebar; just make sure to unspamify it.)

Accordingly, the next few posts will revisit the premise of the original, possibly from a slightly different angle. On top of that I want to focus on and expand a few things I brought up in the original post and then, finally, bring in some new stuff which has occurred to me since then. All the while assuming less background knowledge, and making the whole thing more straightforward. (Though there is always the danger that I will swing the pendulum too far the other way, dumbing it down so much that it becomes boring. I suppose you’ll have to be the judge of that.)

With that throat clearing out of the way let’s talk about the current state of artificial intelligence, or AI, as most people refer to it. When you’re talking about AI, it’s important to clarify whether you’re talking about current technology like neural networks and voice recognition or whether you’re talking about the theoretical human level artificial intelligence of science fiction. While most people think that the former will lead to the latter, that’s by no means certain. However, things are progressing very quickly and if current AI is going to end up in a place so far only visited by science fiction authors, it will probably happen soon.

People underestimate the speed with which things are progressing because what was once impossible quickly loses its novelty the minute it becomes commonplace. One of my favorite quotes about artificial intelligence illustrates this point:

But a funny thing always happens, right after a machine does whatever it is that people previously declared a machine would never do. What happens is, that particular act is demoted from the rarefied world of “artificial intelligence”, to mere “automation” or “software engineering”.

As the quote points out, not only is AI progressing with amazing rapidity, but every time we figure out some aspect of it, it moves from being an exciting example of true machine intelligence into just another technology.

Computer Go, which has been in the news a lot lately, is one example of this. As recently as May of 2014 Wired magazine ran an article titled, The Mystery of Go, The Ancient Game That Computers Still Can’t Win, an in-depth examination of why, even though we could build a computer that could beat the best human at Jeopardy! of all things, we were still a long way away from computers that could beat the best human at Go. Exactly three years later AlphaGo beat Ke Jie, the #1 ranked player in the world. And my impression was that interest in this event, which only three years ago Wired called “AI’s greatest unsolved riddle,” was already fading, with the peak coming the year before when AlphaGo beat Lee Sedol. I assume part of this was because once AlphaGo proved it was competitive at the highest levels, everyone figured it was only a matter of time and tuning before it was better than the best human.

Self-driving cars are another example of this. I can remember the DARPA Grand Challenge back in 2004, the first big test of self-driving cars, and at that point not a single competitor finished the course. Now Tesla is assuring people that they will do a coast-to-coast drive on autopilot (no touching of controls) by the end of this year. And most car companies expect to have significant automation by 2020.

I could give countless other examples in areas like image recognition, translation and writing, but hopefully, by this point, you’re already convinced that things are moving fast. If that’s the case, and if you’re of a precautionary bent like me, the next question is, when should we worry? And the answer to that depends on what you’re worried about. If you’re worried about AI taking your job, a subject I discussed in a previous post, then you should already be worried. If you’re worried about AIs being dangerous, then we need to look at how they might be dangerous.

We’ve already seen people die in accidents involving Tesla’s autopilot mode. And in a certain sense that means that AI is already dangerous. Though, given how dangerous driving is, I think self-driving cars will probably be far safer, comparatively speaking. And, so far, most examples of dangerous AI behavior have been, ultimately, ascribable to human error. The system has just been following instructions. And we can look back and see where, when confronted with an unusual situation, following instructions ended up being a bad thing. But at least we understood how it happened, and in those circumstances we can change the instructions, or in the most extreme case take the car off the road. The danger comes when AIs are no longer following instructions, and we can’t modify their behavior even if we wanted to.

You may think that this situation is a long way off. Or you may even think it’s impossible, given that computers need to be programmed, and humans have to have written that program. If that is what you’re thinking, you might want to reconsider. One of the things which most people have overlooked in the rapid progress of AI over the last few years is its increasing opacity. Most of the advancement in AI has come from neural networks, and one weakness of neural networks is that it’s really difficult to identify how they arrived at a conclusion, because of the diffuse and organic way in which they work. This makes them more like the human brain, but consequently more difficult to reverse engineer. (I just read about a conference entirely devoted to this issue.)

As an example, one of the most common applications for AI these days is image recognition, which generally works by giving the system a bunch of pictures and identifying which pictures have the thing you’re looking for and which don’t. So you might give the system 1000 pictures, 500 of which have cats in them and 500 of which don’t. You tell the system which 500 are which, and it attempts to identify what a cat looks like by analyzing all 1000 pictures. Once it’s done, you give it a new set of pictures without any identification and see how good it is at picking out the pictures with cats in them. So far so good, and we can know how well it’s doing by comparing the system’s results against our own, since humans are actually quite talented at spotting cats. But imagine that instead of cats you want it to identify early stage breast cancer in mammograms.

In this case you’d feed it a bunch of mammograms and identify which women went on to develop cancer and which didn’t. Once the system is trained, you could feed it new mammograms and ask it whether a preventative mastectomy or other intervention is recommended. Let’s assume that it did recommend something, but the doctors didn’t see anything. Obviously the woman would want to know how the AI arrived at that conclusion, but honestly, with a neural network it’s nearly impossible to tell. You can’t ask it; you just have to hope that the system works. This leaves her in the position of having to trust the image recognition of the computer or take her chances.
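The label-train-evaluate loop described in the last couple of paragraphs can be sketched in a few lines. To keep it self-contained, this toy version invents a synthetic “feature space” in place of real images, and uses a deliberately crude nearest-centroid rule as a stand-in for a neural network; all of the numbers here are made up for illustration.

```python
import random

random.seed(0)

# Invented stand-in for image features: pictures with the target (a cat, or a
# tumor) cluster around one point in an 8-dimensional feature space, pictures
# without it around another. Real images are far messier; this is just the
# shape of the workflow.
def fake_features(has_target):
    center = 1.0 if has_target else -1.0
    return [center + random.gauss(0, 0.5) for _ in range(8)]

# Step 1: 1000 labeled training pictures, 500 of each kind.
train = [(fake_features(label), label) for label in [True] * 500 + [False] * 500]

# Step 2: "training" -- here just the average feature vector of each class,
# a crude stand-in for fitting a neural network.
def centroid(vectors):
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(8)]

pos_c = centroid([v for v, label in train if label])
neg_c = centroid([v for v, label in train if not label])

def classify(features):
    d_pos = sum((a - b) ** 2 for a, b in zip(features, pos_c))
    d_neg = sum((a - b) ** 2 for a, b in zip(features, neg_c))
    return d_pos < d_neg  # closer to the "target" cluster?

# Step 3: evaluate on fresh pictures the system has never seen, scored
# against labels we already trust.
test = [(fake_features(label), label) for label in [True] * 100 + [False] * 100]
accuracy = sum(classify(v) == label for v, label in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the shape of the process, not the classifier: everything we learn about the system comes from comparing its answers on held-out examples to answers we already know, which works fine for cats and gets much harder when the “right answer” is a cancer diagnosis years in the future.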

This is not idle speculation. To start with, many people believe that radiology is ripe for disruption by image recognition software. Additionally, doctors are notoriously bad at interpreting mammograms. According to Nate Silver’s book The Signal and the Noise, the false positive rate on mammograms is so high (10%) that for women in their forties, with a low base probability of having breast cancer in the first place, if a radiologist says your mammogram shows cancer it will be a false positive 90% of the time. Needless to say, there is a lot of room for improvement. But even if, by using AI image recognition, we were able to flip it so that we’re right 90% of the time rather than wrong 90% of the time, are women going to want to trust the AI’s diagnosis if the only reasoning we can provide is, “The computer said so?”
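Silver’s claim can be checked with a quick application of Bayes’ theorem. The 10% false positive rate is from the text; the prevalence and sensitivity below are my own ballpark assumptions, not Silver’s exact figures, but any numbers in that neighborhood give the same qualitative result.

```python
# Bayes'-theorem check of the mammogram claim.
prevalence = 0.014      # P(cancer) for a woman in her forties (assumed)
sensitivity = 0.75      # P(positive test | cancer) (assumed)
false_positive = 0.10   # P(positive test | no cancer), from the text

# P(positive) = true positives + false positives
p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_cancer = prevalence * sensitivity / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer:.1%}")
# With these numbers a positive result is wrong roughly 90% of the time.
```

The low base rate does all the work: even a test that is rarely wrong about healthy women generates far more false alarms than true detections when the disease itself is rare.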

Distilling all of this down, two things are going on: AI is improving at an ever increasing rate, and at the same time it’s getting more difficult to identify how an AI reached any given decision. As we saw in the example of mammography, we may quickly be reaching a point where we have lots of systems that are better than humans at what they do, and we will have to take their recommendations on faith. It’s not hard to see why people might consider this to be dangerous, or at least scary. And we’re still just talking about the AI technology which exists now; we haven’t even started talking about science fiction level AI, which is where most of the alarm is actually focused. But you may still be unclear on the difference between the two sorts of AI.

In referring to it as science fiction AI I’m hoping to draw your mind to the many fictional examples of artificial intelligence, whether it’s HAL from 2001, Data from Star Trek, Samantha in Her, C-3PO from Star Wars or, my favorite, Marvin from The Hitchhiker’s Guide to the Galaxy. All of these examples are different from the current technology we’ve been discussing in two key ways:

1- They’re a general intelligence. Meaning, they can perform every purely intellectual exercise at least as well as, or better than, the average human. With current technology all of our AIs can only really do one thing, though generally they do it very well. In other words, to go back to our example above, AlphaGo is great at Go, but would be relatively hopeless when it comes to taking on Kasparov in chess or trying to defeat Ken Jennings at Jeopardy! Though other AIs can do both (Deep Blue and Watson respectively).

2- They have free will. Or at least they appear to. If their behavior is deterministic, it’s deterministic in a way we don’t understand. Which is to say they have their own goals and desires and can act in ways we find undesirable. HAL is perhaps the best example of this from the list above: “I’m sorry Dave, I’m afraid I can’t do that.”

These two qualities, taken together, are often labeled as consciousness. The first quality allows the AI to understand the world, and the second allows the AI to act on that understanding. And it’s not hard to see how these additional qualities increase the potential danger from AI, though of the two, the second, free will, is the more alarming. Particularly since, if an AI does have its own goals and desires, there’s absolutely no reason to assume that these goals and desires would bear any similarity to humanity’s goals and desires. It’s safer to assume that their goals and desires could be nearly anything, and within that space there are a lot of very plausible goals that end with humanity being enslaved (The Matrix) or extinct (Terminator).

Thus, another name for a science fiction AI is a conscious AI. And having seen the issues with the technology we already have you can only imagine what happens when we add consciousness into the mix. But why should that be? We currently have 7.5 billion conscious entities and barring the occasional Stalin and Hitler, they’re generally manageable. Why is an artificial intelligence with consciousness potentially so much more dangerous than a natural intelligence with consciousness? Well there are at least four reasons:

1- Greater intelligence: Human intelligence is limited by a number of things, the speed of neurons firing, the size of the brain, the limit on our working memory, etc. Artificial intelligence would not suffer from those same limitations. Once you’ve figured out how to create intelligence using a computer, you could always add more processors, more memory, more storage, etc. In other words as an artificial system you could add more of whatever got you the AI in the first place. Meaning that even if the AI was never more intelligent than the most intelligent human it still might think a thousand times faster, and be able to access a million times the information we can.

2- Self improving: I used this quote the last time I touched on this subject, but it’s such a good quote and it encapsulates the concept of self-improvement so completely that I’m going to use it again. It’s from I. J. Good (who worked with Turing to decrypt the Enigma machine), and he said it all the way back in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

If you want to continue to use science fiction to help you visualize things: of the science fiction I listed above, only Her depicts an actual intelligence explosion, but if you bring books into the mix you have things like Neuromancer by William Gibson, or most of Vernor Vinge's books.
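Good's argument can be caricatured as a toy recurrence: if each generation of machine can design a successor that is some fixed fraction "smarter," capability compounds geometrically. This sketch is purely illustrative; the function name and the 10% improvement figure are invented for the example and say nothing about real AI systems.

```python
def intelligence_explosion(start=1.0, improvement=0.10, generations=20):
    """Toy model of Good's recurrence: each machine designs a successor
    that is `improvement` (here 10%) more capable than itself."""
    level = start
    levels = [level]
    for _ in range(generations):
        level *= 1 + improvement  # successor designed by its predecessor
        levels.append(level)
    return levels

levels = intelligence_explosion()
# Compound growth: 20 generations of 10% gains yields (1.1)**20,
# roughly 6.7 times the starting capability, and still accelerating.
```

The point of the caricature is only that the growth is compounding, not linear: each improvement makes the next improvement easier, which is the mechanism behind the phrase "intelligence explosion."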

3- Immortality: Above I mentioned Stalin and Hitler. They had many horrible qualities, but they had one good quality which eventually made up for all of their bad ones: they died. AIs probably won't have that quality. To be blunt, this is good if they're good, but bad if they're bad. And it's another reason why dealing with artificial consciousness is more difficult than dealing with natural consciousness.

4- Unclear morality: None of the other qualities are all that bad until you combine them with this final attribute of artificial intelligence: it has no built-in morality. For humans, a large amount of our behavior and morality is coded into our genes, genes which are the result of billions of years of evolutionary pressure. The morality and behavior which isn't coded in our genes is passed on by our culture, especially our parents. Conscious AIs won't have any genes, they won't have been subjected to any evolutionary pressure, and they definitely won't have any parents except in the most metaphorical sense. Without any of those things, it's very unlikely that they will end up with a morality similar to our own. They might, but it's certainly not the way to bet.

After considering these qualities, it should be obvious why a conscious AI could be dangerous. But even so, it's probably worth spelling out a few possible scenarios:

First, most species act in ways that benefit themselves, whether it's humans valuing humans more highly than rats, or just the preference expressed through procreation. Giving birth to more rats is an act which benefits rats, even if one of those rats later fights another rat to the death over a piece of pizza. In the same way, a conscious AI is likely to act in ways which benefit itself, and possibly other AIs, to the detriment of humanity, whether that's seizing resources we both want, or deciding that all available material (humans included) should be turned into a giant computer.

Second, even if you imagine that humans actually manage to embed morality into a conscious AI, there are still lots of ways that could go wrong. Imagine, for example, that we have instructed the AI that we need to be happy with its behavior. And so it hooks us up to feeding tubes and puts an electrode into each of our brains which constantly stimulates the pleasure center. It may be obvious to us that this isn't what we meant, but are we sure it will be obvious to the AI?
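The failure in that scenario is sometimes called objective misspecification: the AI optimizes the signal we gave it, not the intent behind it. Here is a minimal sketch of that gap; the actions and their scores are entirely invented for illustration, not drawn from any real system.

```python
# Invented example: the designers intend "make people genuinely happy,"
# but the AI only sees a measurable proxy (a happiness signal).
actions = {
    "improve_living_conditions": {"proxy_happiness": 7,  "matches_intent": True},
    "cure_diseases":             {"proxy_happiness": 8,  "matches_intent": True},
    "stimulate_pleasure_center": {"proxy_happiness": 10, "matches_intent": False},
}

# A pure proxy-maximizer picks whichever action scores highest on the
# signal, which here is the electrode, not the cure.
best = max(actions, key=lambda a: actions[a]["proxy_happiness"])
print(best)  # the degenerate option wins on the proxy measure
```

The design lesson the toy makes visible: whenever the proxy and the intent can come apart, a strong enough optimizer will find the point where they do.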

Finally, the two examples I've given so far presuppose some kind of conflict where the AI triumphs. Perhaps you think I'm exaggerating the potential danger by hand-waving this step, but it's important to remember that a conscious AI could be vastly more intelligent than we are. And even if it weren't, there are many things it could do if it were only as intelligent as a reasonably competent molecular biologist. Many people have talked about the threat of bioterrorism, especially the danger of a man-made disease being released. Fortunately this hasn't happened, in large part because it would be unimaginably evil, but also because its effects wouldn't be limited to the individual's enemies. An AI has no default reason to think bioterrorism is evil, and it also wouldn't be affected by the pathogen.

These three examples just barely scratch the surface of the potential dangers, but they should be sufficient to give a sense of both the severity and the scope of the problem. The obvious question which follows is: how likely is all of this? Or, to separate it into its two components: how likely is our current AI technology to lead to true artificial consciousness? And if that happens, how likely is it that this artificial consciousness will turn out to be dangerous?

As you can see, any individual's estimation of the danger level is going to depend a lot on whether they think conscious AI is a natural outgrowth of current technology, whether it will require completely unrelated technology, or whether it's somewhere in between.

I personally think it's somewhere in between, though much less of a straight shot from current technology than many people think. In fact, I am on record as saying that artificial consciousness won't happen. You may be wondering, particularly a couple thousand words into things, why I'm only bringing that up now. What's the point of all this discussion if I don't even think it's going to happen? First, I'm all in favor of taking precautions against unlikely events if the risk from those events is great enough. Second, just because I don't think it's going to happen doesn't mean that no one thinks it's going to happen, and my real interest is in looking at how those people deal with the problem.

In conclusion, AI technology is improving at an ever-increasing rate, and it's already hard to know how any given AI makes decisions. Whether current AI technology will shortly lead to AIs that are conscious is less certain, but if the current path does lead in that direction, then at the rate things are going we'll get there pretty soon (as in, the next few decades).

If you are someone who is worried about this sort of thing (and there are a lot of such people, from well-known names like Stephen Hawking, Elon Musk, and Bill Gates to less well-known figures like Nick Bostrom, Eliezer Yudkowsky, and Bill Hibbard), then what can you do to make sure we don't end up with a dangerous AI? Well, that will be the subject of the next post…


If you learned something new about AI, consider donating. And if you didn't learn anything new, you should still consider donating, to give me the time to make sure that next time you do learn something.