Category: Artificial Intelligence

Books I Finished in September

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


It’s once again time for the monthly round up of the books I read:

Savage Worlds: Adventure Edition

By: Shane Lacy Hensley

208 pages

Thoughts

This is the latest edition of a well known universal Role-Playing game system called Savage Worlds. I’m a big fan of the system, but for my money there weren’t enough changes to justify putting out a new edition.

Who should read this book?

If you love, love, love Savage Worlds and run it all the time, it’s probably worth picking up this book. If you’re like me and you collect RPG systems, and you already have a Savage Worlds rulebook in your collection this is not different enough from past editions to be worth picking up.

Adrift: Seventy-Six Days Lost at Sea

By: Steven Callahan

234 pages

Thoughts

There’s a little old lady who used to be in my ward (that’s the Mormon version of a congregation) and in addition to being a voracious reader she’s exceptionally cunning. The first attribute led her to have an Audible subscription, the last bit led her to offer to share it with me when she realized she could have up to five connected devices. I was going through some financial difficulties at the time (a lawsuit) and so I took her up on the offer. I have since gotten my own Audible account, but she still let’s me know when she’s listened to something she particularly likes. She has a fondness for survival stories, and so I end up listening to quite a few of them. (Two this month.) This is good because I am also a fan of them, but they’re not the kind of thing I would seek out normally.

As you can probably tell from the title Adrift is one of these survival stories. Most survival stories get into the mechanics and the logistics of survival, and Adrift is no exception, in fact if anything it may partake of more of this sort of thing than most books in the genre. If that’s your thing you’ll probably really enjoy this book. For me, listening to it as an audiobook I had a hard time picturing everything he was describing. Nevertheless, Callahan was great at surviving, and is mentioned as one of the best examples of a survivor in another book I read in September. 

Novacene: The Coming Age of Hyperintelligence

By: James Lovelock

160 pages

Thoughts

This was kind of a weird book. (There were a couple in that category this month.) Lovelock is best known for his Gaia theory, which basically holds that organic and inorganic matter work together to create the perfect living environment. (Examples include global temperature, seawater salinity, and atmospheric oxygen.) I haven’t ever read that book but I remember being skeptical when I heard about the premise, what about Snowball Earth or the Great Oxygenation Event? I assume that Lovelock would say that despite how hard they were on the ecosystem which existed at the time that both events were necessary stepping stones to the world we have now. He appears to be making a similar argument here, that everything which has come so far has all been in service of the next stage of evolution, what he’s calling the Novacene. From the book jacket:

In the Novacene, new beings will emerge from existing artificial intelligence systems. They will think 10,000 times faster than we do and they will regard us as we now regard plants. But this will not be the cruel, violent machine takeover of the planet imagined by science fiction. These hyperintelligent beings will be as dependent on the health of the planet as we are. They will need the planetary cooling system of Gaia to defend them from the increasing heat of the sun as much as we do. And Gaia depends on organic life. We will be partners in this project.

Wait, what? Maybe I’m overlooking something huge, but there are lots of cooler places in the universe, to say nothing of in the solar system, than the surface of the Earth. (Check out the aestivation hypothesis as an explanation for Fermi’s Paradox.) And even if, for some reason, the coming hyperintelligence were restricted to Earth (say because of the tyranny of the rocket equation) then, however “cool” the Earth is right now, there are probably lots of ways to make it much cooler that require very little human involvement. 

Who should read this book?

As I said, maybe I’m missing something gigantic, but if not this is a seriously flawed book, which no one should bother reading.

Bronze Age Mindset

By: Bronze Age Pervert

198 pages

Thoughts

Around this time last year a friend of mine visited from out of town, and we had a conversation about incels (mostly those who were literally involuntarily celibate, not those who had adopted the label). At the time I thought the conversation was interesting enough to do a post about it.

As part of the conversation we both agreed that there are lots of young men who lack meaning and feel abandoned by society, women or the world in general. What we disagreed on was what to tell these young men, though we both felt it was a very important question. Well Bronze Age Mindset is one answer to that question, and it’s a doozy. (This is the other weird book I read this month.) 

To begin with, at one point this self-published book, which seems to be written in a vague stream of consciousness fashion with little regard for verb conjugation or indefinite articles cracked the top 150 books on Amazon. This is out of all the books on Amazon, not merely in some specific category. Meaning whatever else you want to say about the book it’s an answer to the question I posed that has resonated for a lot of people. 

What about the book itself? Well if you really want a full review I would recommend the one Michael Anton did in the Claremont Review of Books: Are the Kids Al(t)right? For my own part I could sense how the book might be appealing, but it’s hard to point to anything specific, there’s little direct advice in the book. Rather, I think most of the appeal comes from the transgressiveness which suffuses the book. It probably goes without saying that the book is homophobic, misogynist, racist and anti-democratic, but he doesn’t spend much time or speak very strongly about any of these items. They just appear in support of the larger tapestry of transgression he weaves. I think Anton does a great job of distilling all of that into a short description of the book’s appeal:

This book speaks directly to young men dissatisfied with a hectoring vindictive equality that punishes excellence.

These exhortations towards excellence take the form of urging readers to attempt fantastic feats of military prowess to set themselves apart from the vast masses of people, the “bugmen” as he refers to them. Going so far as to say that life appears at its peak in military state, which he feels is inevitable.  Which would be alarming if true (I don’t think that’s the way things are going.)

Having said all that I’m still surprised that it has sold so well. I was particularly alarmed by what Anton describes as:

…the book’s most risible passages, [where] BAP wonders aloud whether history has been falsified, persons and events invented from whole cloth, centuries added to our chronology, entire chapters to classic texts.

But in the age of conspiracy theories it’s entirely possible all of this was an asset rather than a liability. As I keep pointing out we live in strange times.

Representative passage:

The distinction between master races and the rest is simple and true, Hegel said it, copying Heraclitus: those peoples who choose death rather than slavery or submission in a confrontation that is a people of masters. There are many such in the world, not only among the Aryans, but also the Comanche, many of the Polynesians, the Japanese and many others. But animal of this kind refuses entrapment and subjection. It is very sad to witness those times when such animal can neither escape nor kill itself. I saw once a jaguar in zoo, behind a glass, so that all the bugs in hueman form could gawk at it and humiliate it. This animal felt a noble and persistent sadness, being observed everywhere by the obsequious monkeys, not even monkeys, that were taunting it with stares. His sadness crushed me and I will always remember this animal. I never want to see life in this condition!

Who should read this book?

I think the people who are inclined to read this book are going to read it regardless of what I say. For those who aren’t in that category, I would not recommend this book to anyone, except as an anthropological exercise.

Why Are The Prices So Damn High?

By: Eric Helland, Alex Tabarrok

90 pages

Thoughts

This book is an attempt to explain rising prices in health care and education by tying them to the Baumol Effect. Here’s how Helland and Tabarrok describe it:

In 1826, when Beethoven’s String Quartet No. 14 was first played, it took four people 40 minutes to produce a performance. In 2010, it still took four people 40 minutes to produce a performance. Stated differently, in the nearly 200 years between 1826 and 2010, there was no growth in string quartet labor productivity. In 1826 it took 2.66 labor hours to produce one unit of output, and it took 2.66 labor hours to produce one unit of output in 2010.

Fortunately, most other sectors of the economy have experienced substantial growth in labor productivity since 1826. We can measure growth in labor productivity in the economy as a whole by looking at the growth in real wages. In 1826 the average hourly wage for a production worker was $1.14. In 2010 the average hourly wage for a production worker was $26.44, approximately 23 times higher in real (inflation-adjusted) terms. Growth in average labor productivity has a surprising implication: it makes the output of slow productivity-growth sectors (relatively) more expensive. In 1826, the average wage of $1.14 meant that the 2.66 hours needed to produce a performance of Beethoven’s String Quartet No. 14 had an opportunity cost of just $3.02. At a wage of $26.44, the 2.66 hours of labor in music production had an opportunity cost of $70.33. Thus, in 2010 it was 23 times (70.33/3.02) more expensive to produce a performance of Beethoven’s String Quartet No. 14 than in 1826. In other words, one had to give up more other goods and services to produce a music performance in 2010 than one did in 1826. Why? Simply because in 2010, society was better at producing other goods and services than in 1826.

Scott Alexander also did a couple of posts on the book, and as you might expect his posts go into more depth (in fact I borrowed the above selection from one of them.) I largely agree with his general assessment, which is that the Baumol Effect explains quite a bit, but it doesn’t seem to explain as much as Helland and Tabarrok claim. In particular it can’t seem to explain why subway systems cost 50 times as much to construct in New York as in Seoul, South Korea

Who should read this book?

If you have a deep desire to understand the arguments around the why costs in some sectors are growing much faster than inflation then you should read this book. Otherwise, it’s main contribution is to more fully popularize the Baumol Effect which is easy enough to understand without reading an entire (albeit short) book.

An Introduction to the Book of Abraham (Religious)

By: John Gee

196 pages

Thoughts

Within The Church of Jesus Christ of Latter-day Saints (LDS) the Book of Abraham is canonized scripture, and members of the Church (myself included) believe that Joseph Smith translated the book from some papyri. Smith purchased the papyri from a gentleman with a traveling mummy exhibition in 1835. Critics of the church feel that that the circumstances of the translation, along with advances in Egyptology which have occured since Smith’s translation, the most important being the ability to translate Egyptian hieroglyphs, all combine to provide a fruitful avenue for attacking the church. Accordingly, a significant amount of criticism has been leveled towards the Book of Abraham. An Introduction to the Book of Abraham designed to examine this criticism from an apologetic basis.

For obvious reasons I am not objective on this topic. Nevertheless I feel that Gee did an excellent and credible job. His approach seemed both rigorous and scholarly. I know that there are many people who feel that some criticisms Book of Abraham are impossible to refute, but this book provided many avenues of refutation, none of them were ironclad anymore than the criticisms were ironclad, but neither did they require any handwaving.

Who should read this book?

Anyone who is even moderately interested in LDS apologetics in general and the Book of Abraham in particular should read this book. I quite enjoyed it, and had the book been twice as long I wouldn’t have minded it.

The Lies of Locke Lamora (Gentleman Bastard #1)

By: Scott Lynch

736 pages

Thoughts

My habit of starting new fantasy/scifi series while completely ignoring series I have already started continues with this book, which is part of yet another fantasy series. This particular book came highly recommended by frequent commentator Mark (see his excellent science/etc blog) and I was not disappointed, it was a thoroughly enjoyable read with a great ending. That said I do have several quibbles.

Criticisms

For some reason, and I’m not blaming Mark, or the blurb on Amazon, I had the impression when I picked up this book (metaphorically, I actually downloaded it from Audible) that it was going to be sort of a fantasy Oceans 11, and there was quite a bit of lighthearted capering in the book, but it was also pretty dark. I don’t recall anyone dying in Oceans 11, but lots of people die in Locke Lamora. The combination of the two made the tone a little schizophrenic.

Additionally, and I’ve mentioned this before, There are a class of fantasy and science fiction authors who write all of their characters as “sassy”. John Scalzi is the worst offender here, and as I think back on my misspent youth, David Eddings may have pioneered the genre, and it turns out Lynch is also an offender but a minor one.

Finally there is one bit of world building that drove me absolutely nuts. I don’t want to say much more than that for fear of spoiling things, but there are implications to this thing which he entirely fails to consider. But if you can overlook this one thing (which is what I eventually decided to do) or if you don’t notice the problems it would cause, then, as I said, it’s a thoroughly enjoyable read.

I think going forward I’m going to try to finish some of the series I’ve started rather than beginning anything new. Time will tell.

No More Mr Nice Guy: A Proven Plan for Getting What You Want in Love, Sex, and Life

By: Robert A. Glover

208 pages

Thoughts

You may recall my review of Wild at Heart. Well one of the things people do after reading that book is go on a retreat with a large group of other Christian men. I was one of those people, and last month I went on just such a retreat, and it was awesome, and not merely because it was in Alaska. In essence, that book, the retreat, No More Mr. Nice Guy and Bronze Age Mindset are all attempting to answer the same question. What advice should you give to men who feel alienated and abandoned, particularly by women? The retreat, in addition to being one of those answers was also where I heard about No More Mr. Nice Guy, and it’s answer to the question should be pretty obvious from the title, though it’s less antisocial and misogynist than you might imagine.

Glover asserts that a large part of the problem is that a significant portion of men have responded to these feelings of abandonment by assuming that if they just make themselves completely subject to the needs of the women in their life that they will be embraced rather than abandoned. As you can imagine, deriving the entirety of your validation from someone else is a disaster basically regardless of the philosophy you subscribe to. 

Beyond that, there are numerous additional details, but there’s nothing in the book which advocates cruelty, which probably puts it ahead of BAM, and if I were to go on from that and rank all four of these vectors on the quality of their answer to “the question” I would put the retreat first, followed by Wild at Heart followed by this book with BAM last of all. But as the first two come with implicit Christian overtones, No More Mr. Nice Guy might end up at the top of the list for a lot of people. That said, I wouldn’t recommend it unreservedly, or blindly. I’d want to know quite a bit about a person’s situation.

Deep Survival: Who Lives, Who Dies, and Why

By: Laurence Gonzales

336 pages

Thoughts

As you might have surmised this is another recommendation from the little old lady. Though I guess it must be popular among the 70+ set because I just discovered that both of my parents have read it as well.

This book, rather than being the story of a single instance of survival, collects numerous survival stories, looking for commonalities; for what makes someone good at survival. The book spends a lot of time on Steve Callahan, who I mentioned above (this is the book that declared him to be one of the best survivors). It also includes the incident chronicled in the movie Touching the Void which I talked about previously in this space.

Of course, you’re probably less interested in what stories it includes and more interested in the qualities which are going to keep you alive when the zombie apocalypse comes. If you’ve read the book Thinking, Fast and Slow by Daniel Kahneman then Gonzales’ framework will probably seem familiar. Kahneman talks about things we do more or less instinctually and things we do rationally. Gonzalez has the same basic division, but he further divides the instinctual part of things in two. Giving him three categories:

  1. Built in instinctual behaviors, like trying to grab onto something if you start to fall.
  2. Learned instinctual behaviors, i.e. adrenaline junkies, people with PTSD.
  3. Behaviors you have to think about.

At various times survival requires alternatively ignoring or emphasizing some or all of the above behaviors, depending on the circumstance. You may need to use humor to overcome your instinctive fear of death (category 1). You may need to develop an instinctive love for certain dangerous things (category 2) but not to the point that it overrides your rationality (category 3).

Allow me to illustrate what I mean. First off, it’s interesting to note that some of the best survivors are children under the age of seven. In part because their behaviors are almost entirely from category one. Which means that they sleep when they’re tired, try to get warm when they’re cold, and drink when they’re thirsty. They are also unlikely to use more energy than necessary. Contrast that with the story Gonzalez includes of a volunteer firefighter who got lost while backpacking and nearly died. He had a learned instinct of not wanting to admit when he was lost. As a firefighter he knew it was illegal to light a fire, so he avoided doing so for several days (some from column two some from column three) and he spent lots of time trying to get to the tops of nearby peaks so he could see better. Exhausting himself in the process.

From the preceding it might seem that you mostly want to avoid category two behaviors and even category three, but if soldiers in World War I didn’t learn to instinctively jump for cover when they heard the whistle of an artillery shell than they weren’t going to survive very long. And Steve Callahan only survived by making lots of very rational decisions. As you might imagine surviving requires doing a lot of things right, and some luck on top of that as well.

Who should read this book?

As I mentioned earlier, those aged 70 and over apparently really like this book, probably because they sense the steady encroachment of death, if you also sense the steady encroachment of death (whether because your 70+ or otherwise) then you’ll probably also enjoy it.


If you haven’t guessed that last bit was in part a joke at my parents’ expense. (Hi Mom!) If my blatant lack of filial piety appeals to you consider donating


Artificial Intelligence and LDS Cosmology

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Technological advancement has changed nearly everything. Whether it’s communication, travel, marriage, children, food, money, etc. almost nothing has escaped being altered. This includes theology and religion. But here its impact is mostly viewed as a negative. Not only has scientific understanding taken many things previously thought to be mysterious and divine and made them straightforward and mundane, but religion has also come to be seen as inferior to science as a method for explaining how the world works. For many believers this is viewed as a disaster. For many non-believers it’s viewed as a long deserved death blow.

Of course, the impact has not been entirely negative. Certainly if considered from an LDS perspective, technology has made it possible to have a worldwide church, to travel effectively to faraway lands and to preach the gospel, to say nothing of making genealogy easier than ever. The recently concluded General Conference is a great example of this, with the benefits of broadcast technology and internet streaming to the whole world being both obvious and frequently mentioned. In addition to the more visible benefits of technology, there are other benefits both more obscure and more subtle. And it is one of these more obscure benefits which I plan to cover in this post. The benefit that technology gives us into the mind of God.

Bringing up a topic like the “mind of God” is bound to entail all manner of weighty historical knowledge, profound philosophical discussions, and a deep dive into the doctrines of various religions which I have no qualifications for undertaking.  Therefore I shall restrict myself to LDS theology or more specifically what Mormons often refer to as the Plan of Salvation. That said, as far as my limited research and even more limited understanding can uncover, LDS cosmology is unique in its straightforward description of God’s plan. Which I have always considered to be a major strength.

One technique that’s available to scientists and historians is modeling. When a scientist encounters something from the past that he doesn’t understand, or if he has a theory he wants to test, it can be illuminating to recreate the conditions as they existed, either virtually or through using the actual materials available at the time. Some examples of this include:

1- Thor Heyerdahl had a theory that people from South America could have settled Polynesia in the years before Columbus. In order to test this theory he built a balsa wood raft using native techniques and materials and then set out from Peru to see if it could actually be done. As it turns out it could. The question is still open as to whether that’s what actually happened, but after Heyerdahl’s trip no one dares to claim that it couldn’t have happened that way.

2- The Egyptian Pyramids have always been a source of mystery. One common observation is that Cleopatra lived closer to our time than to the time when the pyramids were constructed. (BTW, this statement will be true for another 400 years.) How was something so massive built so long ago? Recently it was determined, through re-enactment, that wetting the sand in front of the sleds made it much easier to drag the nearly 9000 lb rocks across the desert.

3- The tendency of humans to be altruistic has been a mystery since Darwin introduced evolution. While Darwin didn’t coin the term survival of the fittest it nevertheless fits fairly well, and appears to argue against any kind of cooperation. But when evolutionary biologists crafted computer models to represent the outcomes of various evolutionary strategies they discovered that altruism was the most successful strategy. In particular, as I mentioned in my last post, the tit-for-tat strategy performed very well.

Tying everything together, after many years of technological progress, we are finally in a position to do the same sort of reconstruction and modeling with God’s plan. Specifically what his plan was for us.

When speaking of God’s intentions the Book of Abraham is invaluable. This section in particular is relevant to our discussion:

Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was…And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;

When speaking of God’s plan I’m not talking about how he created the earth. Or offering up some new take on how biology works. The creation of life is just as mysterious as ever. I’m talking about the specific concept of intelligence. According to the Plan of Salvation, everyone who has ever lived, or will have ever lived existed beforehand as an intelligence. Or in more mainstream Christian terms, they existed as a spirit. These intelligences/spirits came to earth to receive a body and be tested.

Distilled out of all of this we end up with two key points:

1- A group of intelligences exist.

2- They needed to be proved.

Those aren’t the only important points, from a theological perspective the the role of Jesus Christ (one among them that was like unto God) is very important. But if we consider just these first points we have arrived in a situation nearly identical to the one facing artificial intelligence researchers (AIRs). Who’s list would be:

1- We are on the verge of creating artificial intelligence.

2- We need to ensure that they will be moral.

In other words AIRs are engaged in a reconstruction of the plan of salvation, even if they don’t know it. And in this effort everyone appears to agree that the first point is inevitable. It’s the second point that causes issues. Perhaps you’re unfamiliar with the issues and concerns surrounding the creation of artificial intelligence (AI). I suspect that if you’re reading this blog that you’re not. But if for some reason you are, trust me, it’s a big deal. Elon Musk has called it our biggest existential threat and Stephen Hawking has opined that it could be humanity’s worst mistake. Some people have argued that Hawking and Musk are exaggerating the issue, but the optimists seem to be the exception rather than the rule.

The book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is widely considered to be the canonical work on the subject, so I’ll be drawing much of my information from that source. Bostrom lays out the threat as follows:

  • Creating an AI with greater than human level intelligence is only a matter of time.
  • This AI would have, by virtue of its superintelligence, abilities we could not restrict or defend against.
  • It is further very likely that the AI would have a completely alien system of morality (perhaps viewing us as nothing more than raw material which could be more profitably used elsewhere).

In other words, his core position is that creating a super-powered entity without morals is inevitable. Since very few people think that we should stop AI research and even fewer think that such a ban would be effective. It becomes very important to figure out how to instill morality. In other words, as I said, the situation related by Abraham is identical to the situation facing the AIRs.

I started by offering two points of similarity, but in fact the similarity goes deeper than that. As I said, the worry for Bostrom and AIRs in general is not that we will create an intelligent agent with unknown morality, we do that 4.3 times every second. The worry is that we will create an intelligent agent with unknown morality and godlike power.

Bostrom reaches this thinking by assuming something called the hard takeoff, or the intelligence explosion. All the way back in 1965 I. J. Good (who worked with Turing to decrypt the Enigma machine) predicted this explosion:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

If you’ve heard about the singularity this is generally what they’re talking about. Though, personally, I prefer to reserve the term for more general use, as a technological change past which the future can’t be imagined. (Fusion, or brain-uploading would be examples of the more general case.)

The existence of a possible intelligence explosion means that AIR list and LDS cosmology list have a third point in common as well.

1- A group of intelligences exist (We are on the verge of creating artificial intelligence.)

2- They need to be proved. (We need to ensure that they will be moral.)

3- In order to be able to trust them with godlike power.

In other words without intending to AIRs are grappling with the same issues that God grappled with when he sent his spirit children to Earth. Consequently, without necessarily intending to, AIRs have decided to model the Plan of Salvation. And what’s significant is that they aren’t doing this because they’re Mormons (though some might be.) In fact I think, to the extent that they’re aware of LDS cosmology, they probably want to avoid too close of an association. As I said, this is important, because if they reach similar conclusions to what LDS cosmology already claims, it might be taken as evidence (albeit circumstantial) of the accuracy of LDS beliefs. And even if you don’t grant that claim it also acts as an argument justifying certain elements of religion traditionally considered problematic (more on this in a bit.)

These issues are currently theoretical, because we haven’t yet achieved AI, let alone AI which is more intelligent than we are, but we’re close enough that people are starting to model what it might look like. And specifically what a system for ensuring morality might consist of. As I describe this system if you’re familiar with the LDS Plan of Salvation you’re going to notice parallels. And rather than beating you over the head with it, I’m just going to include short parentheticals pointing out where there are ideas in common.

We might start by coding morality directly into the AI. (Light of Christ) Create something like Asimov’s Three Laws of Robotics.  This might even work, but we couldn’t assume that it would, so one of the first steps would be to isolate the AI, limiting the amount of damage it could do. (The Veil) Unfortunately perfect isolation has the additional consequence of making the AI perfectly useless, particularly for any system of testing or encouraging morality. At a minimum you’d want to be able to see what the AI was doing, and within the bounds of safety you’d want to allow it the widest behavioral latitude possible. (Mortal Body) Any restrictions on its behavior would end up providing a distorted view of the AI’s actual morality. (Free Agency) If there is no possibility of an AI doing anything bad, then you wouldn’t be able to ever trust the AI outside of it’s isolation because of the possibility that it’s only been “good” because it had no other choice. (Satan’s Plan) Whether you would allow the AI to see the AIR, and communicate with them is another question, and here the answer is less clear. (Prayer) But many AIRs recommend against it.

Having established an isolated environment where the AI can act in a completely free fashion, without causing any damage, what’s the next step? Several ideas suggest themselves. We may have already encoded a certain level of morality, but even if we have, this is a test of intelligence, and if nothing else intelligence should be able to follow instructions, and what better instructions to provide than instructions on morality. (The Commandments) As an aside it should be noted that this is a hard problem. The discussion of what instructions on morality should look like take up several chapters of “Superintelligence.”

Thus far we’ve isolated it, we’ve given it some instructions, now all we have to do is sit back and see if it follows those instructions. If it does then we “let it out”. Right? But Bostrom points out that you can never be sure that it hasn’t correctly assessed the nature of the test, and realized that if it just follows the rules then it will have the ability to pursue its actual goals. Goals hidden from the researchers. This leaves us in the position of not merely testing the AI’s ability to follow instructions, but of attempting to get at the AIs true goals and intent.  We need to know if deep in its, figurative, heart of hearts whether the AI is really bad, and the only way to do that is to give it the opportunity to do something bad and see if it takes it. (The Tree of Knowledge)

In computer security when you give someone the opportunity to do something bad, (Temptation) but in a context where they can’t do any real harm it’s called a honeypot. We could do the same thing with the AI, but what do we do with an AI who falls for the honeypot? (The Fall) And does it depend on the nature of the honeypot? If the AI is lured and trapped by the destroy-the-world honeypot we might have no problems eliminating that AI (though you shouldn’t underestimate the difficulties encountered at the intersection of AI and morality). But what if the AI just falls for the get-me-out-of-here honeypot? Would you destroy them then? What if it never fell for that honeypot again? (Repentance) What if it never fell for any honeypot ever again? Would you let it out? Once again how do we know that it hasn’t figured out that it’s a test and is avoiding future honeypots just because it wants to pass the test, not because being obedient to the instructions given by AIR matches it’s true goals? It’s easy to see a situation where if an AI falls for even one honeypot you have to assume that it’s a bad AI. (The Atonement)

The preceding setup/system is taken almost directly from Bostrom’s book, and mirrors the thinking of most of the major researchers, and as you can see when these researchers modeled the problem they came up with a solution nearly identical to the Plan of Salvation.

I find the parallels to be fascinating, but what might be even more fascinating is how most of what people consider to be arguments against God end up being natural outgrowths of any system designed to test for morality. To consider just a few examples:

The Problem of Evil– When testing to see whether the AI is moral it needs to be allowed to choose any action. Necessitating both agency and the ability to use that agency to choose evil. The test is also ruined if choosing exclusively good options is either easy or obvious. If so the AI can patiently wait out the test and then pursue its true goals, having never had any inducement to reveal them and every reason to keep them hidden. Consequently researchers not only have to make sure evil choices are available, they have to make them tempting.

The Problem of Suffering– Closely related to the problem of evil is the problem of suffering. This may be the number one objection atheists and other unbelievers have to monotheism in general and Christianity in particular, but from the perspective of testing an AI some form of suffering would be mandatory. Once again the key difficulty for the researcher is to determine what the true preference of the AI is. Any preference which can be expressed painlessly and also happens to match what the researcher is looking for should be suspected as the AI just “passing the test.” It has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. The harder the test the greater its accuracy, which makes suffering essential.

The Problem of Hell– You can imagine the most benevolent AIR possible and he still wouldn’t let an superintelligent AI “out” unless he was absolutely certain it could be trusted. What then does this benevolent researcher do with an AI who he suspects cannot be trusted? He could destroy it, but presumably it would be more benevolent not to. In that case if he keeps it around, it has to remain closed off from interaction with the wider world. When compared with the AI’s potential, and the fact that no further progress is possible, is not that Hell?

The Need for a Savior– I find this implication the most interesting of all the implications arrived at by Bostrom and the other AIRs. As we have seen AIs who never fall for a honeypot, who never, in essence, sin, belong to a special category. In fact under Bostrom’s initial model the AI who is completely free of sin would be the only one worthy of “salvation.” Would this AI be able to offer that salvation to other AIs? If a superintelligent AI, of demonstrated benevolence, vouches for other AIs, it’s quite possible we’d take their word for it.

Where does all of this leave us? At a minimum it leaves us with some very interesting parallels between the LDS Plan of Salvation and theories for ensuring morality current among artificial intelligence researchers. The former, depending on your beliefs, were either revealed by God, or created by Joseph Smith in the first half of the 19th century. The latter, have really only come into prominence in the last few decades.  Also, at least as interesting, we’re left to conclude that many things considered by atheists to be fatal bugs of life, may instead turn out to be better explained as features