Further Lessons in Comparing AI Risk and the Plan of Salvation
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
On the same day this post goes live, I'll be at the annual conference of the Mormon Transhumanist Association (MTA). You may remember my review of last year's MTA Conference. This year I'm actually one of the presenters. I suspect that they may not have read last year's review (or any of my other critical articles), or they may just not have made the connection. But also, to their credit, they're very accepting of all manner of views, even critical ones, so perhaps they know exactly who I am. I don't know; I never got around to asking.
The presentation I'm giving is on the connection between AI Risk and the LDS Plan of Salvation, a subject I covered extensively in several past posts. I don't think the presentation adds much to what I already said in those previous posts, so there wouldn't be much point in including it here. (If you're really interested, email me and I'll send you the Google slide deck.) However, my presentation does tie directly into some of the reservations I have about the MTA. So, given that perhaps a few of them will be interested enough in my presentation to come here and check things out, I thought this would be a good opportunity to extend what I said in the presentation and look at what sort of conclusions might follow if we assume that life is best viewed as similar to the process for reducing AI Risk.
As I mentioned, I covered the initial subject (the one I presented on today) at some length already. But for those who need a quick reminder, or who are just joining us, here's what you need to know:
1- We may be on the verge of creating an artificial superintelligence.
2- By virtue of its extreme intelligence, this AI would have god-like power.
3- Accordingly, we have to ensure that the superintelligence will be moral, i.e. that it will not destroy us.
Mormons believe that this life is just such a test of "intelligences": a test of their morality in preparation for eventually receiving god-like powers. Though I think I'm the first to explicitly point out the similarities between AI Risk and the LDS Plan of Salvation. Having made that connection, my argument is that many things previously considered strong arguments against, or problems with, religion (e.g. suffering, evil, Hell, etc.) end up being essential components on the path to trusting something with god-like power. Considering these problems in this new light was the primary subject of the presentation I gave today. The point of this post is to go farther, and consider what further conclusions we might be able to draw from this comparison, particularly as it relates to the project of Mormon Transhumanism.
Of course, everything I say going forward is premised on accepting the LDS Plan of Salvation (more accurately, my specific interpretation of it) and the connections I'm drawing between it and AI Risk. I assume many are not inclined to do either, but if you can set your reservations aside for the moment, I think there's some interesting intellectual territory to cover.
All of my thinking proceeds from the idea that one of the methods you're going to try as an Artificial Intelligence Researcher (AIR) is isolating your AI: limiting the damage a functionally amoral superintelligence can cause by cutting it off from its ability to cause that harm, at least in the real world.
(Now, of course, many people have argued that it may be difficult to keep an AI in a box, so to speak, but if the AIR is God and we're the intelligences, presumably that objection goes away.)
It's easy to get fixated on this isolation, but the isolation is a means to an end, not an end in itself. It's not necessary for its own sake, it's necessary because we assume that the AI already has god-like intelligence, and we're trying to keep it from having a god-like impact until it has god-like morals. Accordingly, we have three pieces to the puzzle (see the sketch after this list):
1- Intelligence
2- Morals
3- Impact
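To make the gating idea concrete, here's a minimal sketch in Python. Everything in it (the names, the threshold) is hypothetical, invented purely for illustration, but it captures the logic: impact is withheld until morals pass the test, and intelligence deliberately plays no part in the decision.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    intelligence: float  # assumed to already be god-like; deliberately ignored
    morality: float      # the only attribute the isolated test establishes

# Arbitrary illustrative threshold; the point is that the bar sits on
# morals, and that the verdict is decisive one way or the other.
MORALITY_THRESHOLD = 0.99

def grant_god_like_impact(agent: Agent) -> bool:
    """Gate real-world impact on morals alone.

    A smarter agent with unproven morals stays in the box: isolation
    is the means, trustworthy impact is the end.
    """
    return agent.morality >= MORALITY_THRESHOLD
```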
What happens when we consider those three attributes with respect to humans? It's immediately obvious from the evidence that we're way out ahead on 3: humanity has already made significant strides towards having the ability to create a god-like impact, without much evidence that we have made similar strides with attributes 1 and 2. The greatest example of that is nuclear weapons. Trump and Putin could, separately or together, have a god-like impact on the world. The morality of doing so would be the opposite of god-like, and the intelligence of the action would not be far behind.
Now I would assume that God isn't necessarily worried about any of the things we worry about when it comes to superintelligence. But if there were going to be a concern (perhaps even just amongst ourselves) it would be the same as the concern of the AIRs: that we end up causing god-like impacts before we have god-like morals. Meaning that the three attributes are not all equally important. I believe any objective survey of LDS scripture, prophetic counsel, or general conference talks would conclude that the overwhelming focus of the church is on item 2, morals. If you dig a little deeper you can also find writings about the importance of intelligence, but I think you'll find very little related to having a god-like impact.
I suspect at this point I need to spell out what I mean by that phrase. I've already given the example of nuclear war, and on the negative side of things I could add a whole host of environmental effects. On the positive side you have the green revolution, the internet, skyscrapers, rising standards of living, etc. Looking towards the future we can add immortality, brain-uploading, space colonization, and potentially AI, though that could go either way.
All of these are large-scale impacts, and that's the kind of thing I'm talking about: things historians could still be discussing in hundreds of years. LDS/Mormon doctrine does not offer much encouragement in favor of making these sorts of impacts. In fact, if anything, it comes across as much more personal, dispensing advice about what we should do if someone sues us for our cloak, or the benefits of saving even one soul, or what we should do if we come across someone who has been left half dead by robbers. All exhortations which apply to individual interactions. There's essentially nothing about changing the world on a large scale through technology, and arguably what advice is given is strongly against it. As you can probably guess, I'm talking about the Tower of Babel. I did a whole post on the idea that the Tower of Babel applies to the MTA, so I won't rehash it here. The point of all of this is that I get the definite sense that the MTA has prioritized the impact piece of the equation for godhood to the detriment of the morality piece, which, for an AIR monitoring the progress of a given intelligence, ends up being precisely the sort of thing you would want to guard against.
As an example of what I'm talking about, consider the issue of immortality, something that is high on the Transhumanist list as well as the Mormon Transhumanist list. Now, to be clear, all Mormons believe in eventual immortality; it's just that most of them believe you have to die first and then come back. The MTA hopes to eliminate the "dying first" part. This is a laudable goal, and one that would have an enormous impact, but that's precisely the point I was getting at above: allowing god-like impacts before god-like morality is the thing we're trying to guard against in this model. Also, "death" appears to have a very clear role in this scenario, insofar as tests have to end at some point. If you're an AIR this is important if only for entirely mundane reasons like scheduling, limited resources, and, most of all, having a clear decision point. But I assume you would also be worried that the longer a "bad" AI has to explore its isolation, the more likely it is to be able to escape. Finally, and perhaps most important for our purposes, there's significant reason to believe that morality becomes less meaningful if you allow an infinite time for it to play out.
If this were just me speculating on the basis of the analogy, you might think that such concerns are pointless, or that they don't apply once we replace our AIR with God. But it turns out that something very similar is described in the Book of Mormon, in Alma chapter 42. The entire chapter speaks to this point, and it's probably worth reading in its entirety, but here is the part which speaks most directly to the subject of immortality:
...lest he should put forth his hand, and take also of the tree of life, and eat and live forever, the Lord God placed cherubim and the flaming sword, that he should not partake of the fruit—
And thus we see, that there was a time granted unto man to repent, yea, a probationary time, a time to repent and serve God.
For behold, if Adam had put forth his hand immediately, and partaken of the tree of life, he would have lived forever, according to the word of God, having no space for repentance; yea, and also the word of God would have been void, and the great plan of salvation would have been frustrated.
But behold, it was appointed unto man to die...
At a minimum, one gets the definite sense that death is important. But maybe it's still not clear why. The key is the phrase "space for repentance": there needs to be a defined time during which morality is established. Later in the chapter the term "preparatory state" is used a couple of times, as is the term "probationary state". Both phrases point to a test of a specific duration, a test that will definitively determine, one way or the other, whether an intelligence can be trusted with god-like power. And while it's not clear that this is necessarily the case with God, with respect to artificial intelligence, once we give it god-like power we can't take it back. The genie won't go back in the bottle.
To state it more succinctly, this life is not a home for intelligences, it’s a test of intelligences, and tests have to end.
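If you wanted to express that point in code, it might look something like this toy sketch (again, every name here is invented for illustration): the probation runs for a fixed window, and when the window closes a single final verdict is rendered. An open-ended test never produces a decision point, and it gives a "bad" agent unbounded time to probe its isolation.

```python
def probationary_state(agent, observe_choice, duration: int,
                       threshold: float = 0.99) -> bool:
    """Run a fixed-length probation and return a final verdict.

    `observe_choice` scores one moral choice the agent makes while
    isolated. The loop is bounded by design: the test has to end so
    that there is a clear, irreversible decision point.
    """
    scores = [observe_choice(agent) for _ in range(duration)]
    return sum(scores) / duration >= threshold
```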
It is entirely possible that I'm making too much of the issue of immortality, particularly since true immortality is probably still a long way off, and I wouldn't want to stand in the way of medical advances which could improve the quality of life. (Though I think there's a good argument to be made that many recent advances have extended life without improving it.) Also, I think that if death really is a crucial part of God's Plan, immortality won't happen regardless of how many cautions I offer or how much effort the transhumanists put forth. (Keeping in mind the assumptions I mentioned above.)
Of more immediate concern might be the differences in opinion between the MTA and the LDS Leadership, which I've covered at some length in those previous posts I mentioned at the beginning. To highlight just one issue I spoke about recently: the clear instruction from the church is that its leaders should counsel against elective transsexual surgery, while, as far as I can tell (see my review of the post-genderism presentation from last year), the MTA views "Gender Confirmation Surgery" as one of the ways in which they can achieve the "physical exaltation of individuals and their anatomies" (that's from their affirmation). Now, I understand where they're coming from. It certainly does seem like the "right" thing to do is to allow people the freedom to choose their gender, and to allow gay people the freedom to get married in the temple (another thing the LDS Leadership forbids). But let me turn to another story from the scriptures. This time we'll go to the Bible.
In the Old Testament there's a classic story concerning Samuel and King Saul. Saul is commanded to:
...go and smite Amalek, and utterly destroy all that they have, and spare them not; but slay both man and woman, infant and suckling, ox and sheep, camel and ass.
But rather than destroying everything, Saul:
spared...the best of the sheep, and of the oxen, and of the fatlings, and the lambs, and all that was good, and would not utterly destroy them: but every thing that was vile and refuse, that they destroyed utterly.
He does this because he figures that God will forgive him for disobeying, once he sacrifices all of the fatlings and lambs, etc. But in fact this act is where God decides that making Saul king was a mistake. And when Samuel finally shows up, he tells the king:
And Samuel said, Hath the Lord as great delight in burnt offerings and sacrifices, as in obeying the voice of the Lord? Behold, to obey is better than sacrifice, and to hearken than the fat of rams.
I feel like this Biblical verse might be profitably placed in a very visible location in all AIR offices. Because when it comes down to it, no matter how good the AI is (or thinks it is), or how clever it ends up being, the most important thing might be that if you tell the AI to absolutely never do X, you want it to absolutely never do X.
You could certainly imagine an AI pulling a "King Saul". Perhaps if we told it to solve global warming it might decide to trigger massive volcanic eruptions. Or if we told it to solve the population problem, we could end up with a situation that Malthus would have approved of, but which the rest of the world finds abhorrent, even if, in the long run, the AI assures us that the math works out. And it's likely that our demands on these issues would seem irrational to the AI, even evil. But for good or for ill, humanity definitely has some values which should supersede behaviors the AI might otherwise be naturally inclined to adopt, or which, through its own reasoning, it might conclude are the moral choice. If we can accept that this is a possibility with potential superintelligences, how much more could it be the case when we consider the commandments of God, who is a lot more intelligent and moral than we are?
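Here's a rough sketch of what that kind of obedience might look like inside an agent (the action names are placeholders I made up): certain actions are vetoed outright, before any optimization happens, no matter how much utility the agent calculates for them.

```python
# Actions placed beyond negotiation, regardless of the computed payoff.
FORBIDDEN = {"trigger_volcanic_eruptions", "cull_population"}

def choose_action(candidates: dict[str, float]) -> str:
    """Pick the highest-utility action that isn't categorically forbidden.

    `candidates` maps action names to the agent's estimated utility.
    The veto is applied before the optimization, not traded off inside
    it: to obey is better than sacrifice.
    """
    allowed = {a: u for a, u in candidates.items() if a not in FORBIDDEN}
    if not allowed:
        raise RuntimeError("no permissible action; defer to the operators")
    return max(allowed, key=allowed.get)
```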
If we accept the parallel, then we should accept exactly this possibility: that something similar might be happening with God. That there may be things we are being commanded not to do, but which seem irrational or even evil. Possibly this is because we are working from a very limited perspective. But it's also possible that we have been given certain commandments which are irrational, or perhaps just silly, and it's not our morality or intelligence being tested, but our obedience. As I just pointed out, a certain level of blind obedience is probably an attribute we want our superintelligence to have. The same situation may exist with respect to God. And it is clear that obedience, above and beyond everything I've said here, is an important topic in religion. The LDS Topical Guide lists 120 scriptures under that heading, and cross-references an additional 25 closely related topics, each of which probably has a similar number of scriptures attached.
Here at last we return to the idea I started this section with. I know there are many things which seem like good ideas. They are rational, and compassionate, and exactly the sort of thing it seems like we should be doing. I mentioned as examples supporting people in "gender confirmation surgery" and pressing for gay marriages to be solemnized in the temple. But if we look at AI Risk in connection with the story of King Saul, we can also see that maybe this is a test of our obedience. I am not wise enough to say whether it is or not, and everyone has to chart their own path, listen to their own conscience, and do the best they can with what they've got. But I will say that I don't think it's unreasonable to draw the conclusion from this comparison that tests of obedience are something we should expect, and that they may not always make as much sense as we would like.
At this point, it’s 6 am the morning of the conference where I’ll be presenting, which basically means that I’m out of time. There were some other ideas I wanted to cover, but I suppose they’ll have to wait for another time.
I'd like to end by relating an analogy I've used before, one which has been particularly clarifying for me when thinking about the issues of transhumanism and where our efforts should be spent.
Imagine that you're 15 and the time has come to start preparing to get your driver's license. Now, you just happen to have access to an auto shop. Most people (now and throughout history) have not had access to such a shop, but with that access you think you might be able to build your own car. Maybe you can and maybe you can't; building a car from scratch is probably a lot harder than you think. But if, by some miracle, you are able to build a car, does that also give you the qualifications to drive it? Does building a car give you knowledge of the rules (morality) necessary to safely drive it? No. Studying for and passing the driver's license test is what (hopefully) gives you that. And while I don't think it's bad to study auto mechanics at the same time as studying for your driver's license test, the one may distract from the other, particularly if you're trying to build an entire car, which is a very time-consuming process.
God has an amazing car waiting for us, much better than anything we could build ourselves, and I think he's less interested in having us prove we can build our own car than in having us show that we're responsible enough to drive safely.
I really was honored to be invited to present at the MTA conference, and I hope I have not generated any hard feelings with what I’ve written, either now or in the past. Of course, one way to show there are no hard feelings is to donate.
That may have crossed a line. I'm guessing that, with that naked cash grab, even if there weren't any hard feelings before, there are now.