
The 7 Books I Finished in December



  1. Why Liberalism Failed by: Patrick J. Deneen
  2. Leviathan Falls by: James S. A. Corey
  3. Termination Shock by: Neal Stephenson
  4. The Histories of Herodotus by: Herodotus 
  5. The Golden Transcendence by: John C. Wright
  6. The Boy, the Mole, the Fox and the Horse by: Charlie MacKesy
  7. Doctrine and Covenants

I had hoped to finish at least 104 books this year. There are a couple of reasons for this: First, it’s what I did last year. Second, it would mean I had averaged two books a week. Unfortunately I only ended up with 102. I was very close to finishing two other books, but between the holidays, the big extended family trip we take every year between Christmas and New Year’s, and, most of all, getting COVID, it didn’t happen. (Yes, for those following along at home, my PCR test was positive.) I eventually decided it would be better to start 2022 a little bit ahead than to try to fit in a bunch of feverish reading on the last day of the year.

You may have guessed that there was a connection between COVID and the “big extended family trip”. Indeed there was. But in retrospect, even knowing that I, and many others, would end up with COVID, I’m not sure what we would have done differently. When you’re doing a vacation that involves over 30 people it’s kind of a juggernaut, with spending well into the five figures. Also, Omicron really only spiked a few days before we were set to leave, so we didn’t have the information necessary to cancel the trip in time even if it had made sense to. On top of all of the foregoing, it’s not as if we were ignoring the problem. We did a bunch of rapid tests immediately before the trip and they all came up negative. And basically everyone was vaccinated and most people (including me) were boosted on top of that.

I suspect that there will be a lot of stories similar to mine of holiday gatherings that acted as superspreader events. One can already see a huge recent spike in cases, which appears to be almost vertical. It’s interesting to compare this spike to last year’s holiday spike. Last year the spike started in mid-October. This year, in mid-October, cases were still declining from a September peak, and it wasn’t until the end of November that they started turning up, and then there was a weird plateau between the 3rd and the 17th of December before they shot up like a rocket.

I guess what I’m curious about is when we’ll hit the daily case peak and how high that peak will be. Last year we peaked on the 12th of January, but that was the peak of a trend that started in mid-October and grew more slowly. This year’s started later, but is growing much faster. So based on that, and eyeballing things, I think it’s going to peak and start its decline around January 15th. As far as what that peak will be, I’m going to say 2,500 daily cases per million people, as per the ourworldindata.org site. Should anyone want to make their own predictions on this I’d be very interested in seeing them. You can email me or leave them in the comments.
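For anyone who wants to play along, here’s a minimal sketch of the kind of eyeballing I’m describing, in Python. It assumes Our World in Data’s public CSV endpoint and its column names haven’t changed (both assumptions are worth double-checking on their site):

```python
# A rough sketch of "eyeballing" the spike from the OWID data.
# The URL and column names are assumptions -- verify before relying on them.
import pandas as pd

URL = "https://covid.ourworldindata.org/data/owid-covid-data.csv"

df = pd.read_csv(URL, usecols=["location", "date", "new_cases_smoothed_per_million"])
us = df[df["location"] == "United States"].dropna().set_index("date")

# Week-over-week growth factor of the smoothed series: when this drifts
# back toward 1.0, the spike is flattening out toward its peak.
latest = us["new_cases_smoothed_per_million"].iloc[-1]
week_ago = us["new_cases_smoothed_per_million"].iloc[-8]
print(f"current: {latest:.0f} daily cases per million")
print(f"week-over-week growth factor: {latest / week_ago:.2f}")
```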

A lot of things could affect this number, in particular attitudes around and availability of testing. I had to wait in line for two hours in order to get my PCR test on the 31st, and my kids had to wait four hours on the 3rd despite getting in line several hours earlier. 

Of course, what we’re really interested in is confirmed deaths and so far that hasn’t spiked, and hopefully it won’t.


I- Eschatological Reviews

Why Liberalism Failed

by: Patrick J. Deneen

248 Pages

Briefly, what was this book about?

It’s difficult to condense it into a single point, but perhaps it can be boiled down to the conflict between liberalism and democracy. The former pulls everything to the opposite extremes of individualism and statism, while the latter requires strong civic engagement in the middle (communities, states, organizations, etc.).

Who should read this book?

I’ve read many books about the collapse of Western liberal ideology, and I would say that this is the densest. So you should either read it after you’ve established a broad foundation with other books, or, if you’re in a hurry, read only this one, since it contains most of what’s said elsewhere.

General Thoughts

As I have already said, there’s a lot going on in the book. Deneen covers a huge amount of territory, in a comparatively tiny number of pages. So I’m going to focus on just one thing, his claim that liberalism pushes everything to the ends of the spectrum—it is an ideology that simultaneously pushes politics towards maximum individualism and maximum statism.

I don’t know about you, but I hadn’t come across this description of the bifurcated nature of liberalism before, and at first glance it seems obviously contradictory. How can an ideology simultaneously encourage individuation and absolutism? As it turns out, despite the fact that I hadn’t encountered the idea, it’s not new. Alexis de Tocqueville, that famous chronicler of Democracy in America, wrote the following all the way back in 1835:

So … no man is obliged to put his powers at the disposal of another, and no one has any claim of right to substantial support from his fellow man, each is both independent and weak. These two conditions, which must be neither seen quite separately nor confused, give the citizen of a democracy extremely contradictory instincts. He is full of confidence and pride in his independence among his equals, but from time to time his weakness makes him feel the need for some outside help which he cannot expect from any of his fellows, for they are both impotent and cold. In this extremity he naturally turns his eyes toward that huge entity [the tutelary state] which alone stands out above the universal level of abasement. His needs, and even more his longings, continually put him in mind of that entity, and he ends by regarding it as the sole and necessary support of his individual weakness

To put it in different terms, if you want maximum liberty some entity has to guarantee that liberty. And as we have decided against individuals ensuring their own liberty (i.e. armed anarchy), that entity is the state. Here’s Deneen going into greater detail:

Ironically, the more completely the sphere of autonomy is secured, the more comprehensive the state must become. Liberty, so defined, requires liberation from all forms of associations and relationships, from family to church, from schools to village and community, that exerted control over behavior through informal and habituated expectations and norms. These controls were largely cultural, not political—law was less extensive and existed largely as a continuation of cultural norms, the informal expectations of behavior learned through family, church, and community. With the liberation of individuals from these associations, there is more need to regulate behavior through the imposition of positive law. At the same time, as the authority of social norms dissipates, they are increasingly felt to be residual, arbitrary, and oppressive, motivating calls for the state to actively work toward their eradication.

This creates a tension between liberalism and democracy, because in essence liberalism hinges on changing what “liberty” has historically meant:

“Liberty” is a word of ancient lineage, yet liberalism has a more recent pedigree, being arguably only a few hundred years old. It arises from a redefinition of the nature of liberty to mean almost the opposite of its original meaning. By ancient and Christian understandings, liberty was the condition of self-governance, whether achieved by the individual or by a political community. Because self-rule was achieved only with difficulty— requiring an extensive habituation in virtue, particularly self-command and self-discipline over base but insistent appetites—the achievement of liberty required constraints upon individual choice.

Democracy, in fact, cannot ultimately function in a liberal regime. Democracy requires extensive social forms that liberalism aims to deconstruct, particularly shared social practices and commitments that arise from thick communities, not a random collection of unconnected selves entering and exiting an election booth.

“Thick communities” is a great term, and it’s precisely what we don’t have any more. We have carved out the middle so that there will be no restrictions on individual choice, and created Hobbes’ Leviathan in order to have a weapon equal to the task.

I can only pretend to have the smallest amount of understanding of this subject, but I definitely got a strong sense of that former definition of liberty, a liberty of self-discipline, while reading Plato. And what I have read beyond that would seem to support this idea. And of course it was this virtue, along with the associations, religions, communities, and norms, that represented the “thickness” we no longer have.

For a more modern example of what he’s talking about, Deneen brings up Julia. If you were paying attention during the 2012 election then perhaps you remember her.

Julia appeared briefly toward the beginning of Obama’s campaign as a series of internet slides in which it was demonstrated that she had achieved her dreams through a series of government programs that, throughout her life, had enabled various milestones… In Julia’s world there are only Julia and the government, with the very brief exception of a young child who appears in one slide—with no evident father—and is quickly whisked away by a government-sponsored yellow bus, never to be seen again. Otherwise, Julia has achieved a life of perfect autonomy, courtesy of a massive, sometimes intrusive, always solicitous, ever-present government.

You may get the impression from the examples given so far and my generally traditional bent that this is all a problem originating from progressive liberalism. And indeed it’s hard to think of a better example of massive government intrusion in the service of individual autonomy than the current battle over transgender rights. But Deneen heaps just as much criticism on classical liberalism and its valorization of corporations and markets. I’m probably not the guy to steelman that particular argument, but it is worth including an excerpt on how left and right are two sides of the same coin:

These ends have been achieved through the depersonalization and abstraction advanced via two main entities— the state and the market. Yet while they have worked together in a pincer movement to render us ever more naked as individuals, our political debates mask this alliance by claiming that allegiance to one of these forces will save us from the depredations of the other. Our main political choices come down to which depersonalized mechanism will purportedly advance our freedom and security—the space of the market, which collects our billions upon billions of choices to provide for our wants and needs without demanding from us any specific thought or intention about the wants and needs of others; or the liberal state, which establishes depersonalized procedures and mechanisms for the wants and needs of others that remain insufficiently addressed by the market.

When he goes on to identify the “key features of liberalism” as the “conquest of nature”, “timelessness”, “placelessness”, and “borderlessness”, this list of attributes is mostly associated with classical liberalism, rather than its progressive brother.

I need to wrap up this section. I understand that the review has been heavy on quotes and excerpts. In part this is because, as I write this, I’m still recovering from COVID, and copying is easier than composing. In part it’s because there are so many passages worthy of excerpting. With that in mind I would like to close out the section with one final excerpt:

Today’s widespread yearning for a strong leader, one with the will to take back popular control over liberalism’s forms of bureaucratized government and globalized economy, comes after decades of liberal dismantling of cultural norms and political habits essential to self-governance. The breakdown of family, community, and religious norms and institutions, especially among those benefiting least from liberalism’s advance, has not led liberalism’s discontents to seek a restoration of those norms. That would take effort and sacrifice in a culture that now diminishes the value of both. Rather, many now look to deploy the statist powers of liberalism against its own ruling class. Meanwhile, huge energies are spent in mass protest rather than in self-legislation and deliberation, reflecting less a renewal of democratic governance than political fury and despair. Liberalism created the conditions, and the tools, for the ascent of its own worst nightmare, yet it lacks the self-knowledge to understand its own culpability.

Eschatological Implications

It is commonly pointed out, both by this book and others, that at the beginning of the 20th century there were three competing political ideologies: fascism, communism, and liberalism. Fascism was eliminated as a competitor by World War II (unless you think that’s what’s happening in China) and communism was eliminated by the end of the Cold War (again, depending on what you think is happening in China). In an ideal world this would mean we now live in an era of international cooperation and peace between liberal nations, where the protection and celebration of individual autonomy has led to unprecedented happiness within those nations. The first part would appear to be mostly true; whether it will remain true is a subject for another time. But whatever the state of the world at the international level, no one would say that we are experiencing unprecedented happiness. The question of why not is an interesting one, but in the context of this book I’d rather ask: why now?

Deneen’s explanations for liberalism’s failures go all the way back to the founding, and beyond that to people like Locke, Hobbes, Burke and Mill. If the seeds of liberalism’s failure have been in the ground for so long, why are they only sprouting now? In one sense a large percentage of this blog’s content has been dedicated to answering that question. But if we restrict ourselves to the themes outlined in the book I’d like to consider two specific explanations:

The first, and the one Deneen emphasizes most, is that liberalism’s recent failure is a result of its recent victory: all of our current problems are due to liberalism essentially winning the race and crossing the finish line.

A political philosophy that was launched to foster greater equity, defend a pluralist tapestry of different cultures and beliefs, protect human dignity, and, of course, expand liberty, in practice generates titanic inequality, enforces uniformity and homogeneity, fosters material and spiritual degradation, and undermines freedom. Its success can be measured by its achievement of the opposite of what we have believed it would achieve. Rather than seeing the accumulating catastrophe as evidence of our failure to live up to liberalism’s ideals, we need rather to see clearly that the ruins it has produced are the signs of its very success. To call for the cures of liberalism’s ills by applying more liberal measures is tantamount to throwing gas on a raging fire. It will only deepen our political, social, economic, and moral crisis.

We have recently achieved near perfect bifurcation. People have basically no limits on their choices, except those imposed by nine judges operating at the very highest level of government oversight, and those limits are backed by the force of trillions of dollars and millions of enforcers. We have achieved the absolute leviathan and the perfectly autonomous individual.

Or rather we are getting very close to this achievement, certainly far closer than anyone ever dreamed of, and the means of getting there bring up the second explanation for “why now?” As is so often the case, technology has played a role.

Liberalism was premised upon the limitation of government and the liberation of the individual from arbitrary political control. But growing numbers of citizens regard the government as an entity separate from their own will and control, not their creature and creation as promised by liberal philosophy. The “limited government” of liberalism today would provoke jealousy and amazement from tyrants of old, who could only dream of such extensive capacities for surveillance and control of movement, finances, and even deeds and thoughts. The liberties that liberalism was brought into being to protect—individual rights of conscience, religion, association, speech, and self-governance—are extensively compromised by the expansion of government activity into every area of life. Yet this expansion continues, largely as a response to people’s felt loss of power over the trajectory of their lives in so many distinct spheres—economic and otherwise—leading to demands for further intervention by the one entity even nominally under their control. Our government readily complies, moving like a ratchet wrench, always in one direction, enlarging and expanding in response to civic grievances, ironically leading in turn to citizens’ further experience of distance and powerlessness. (emphasis mine)

The big theme of both of these explanations and of Deneen’s quotes in general is that liberalism has reached a dead end, and going forward will only make things worse. Unfortunately there’s no easy way of backing up either. Perhaps, to strain the metaphor somewhat, we need to climb some nearby wall, and find a new road. But it’s unclear which wall to climb or what that road might look like. Deneen thinks we need a completely new ideology, an “epic theory”.

When the book was first published he believed that such a project would take a very long time, but events since then have changed his mind. From a preface attached to the new edition:

I now believe I was wrong to think that this project would take generations. Even in the months since the book’s publication, the fragility of the liberal order has become evident, now threatened by both right-wing nationalist movements and left-wing socialism. Instead of imagining a far-off and nearly inconceivable era when the slow emergence of liberalism’s alternatives might become fully visible from its long-burning embers, we find ourselves in a moment when “epic theory” becomes necessary. The long era in which we could be content with “normal theory,” working within the existing paradigm to explore the outermost reaches and distant implications of liberalism while also signaling its solidity and permanence, has ended. Epic theory becomes necessary when that paradigm loses its explanatory power, and events call forth a new departure in political thinking. When I was writing the conclusion of my book, I believed we were in a long phase of preparation for postliberal epic theory. But in mere months—having seen the American political order assaulted by two parties that are in a death grip but each lacking the ability to eliminate the other, and observing the accelerating demolition of the liberal order in Europe—I now think that the moment for “epic theory” has come upon us more suddenly than we could have anticipated. Such moments probably always arrive before we think we are ready. Augustine’s City of God was made necessary by the sudden and unexpected overturning of the “eternal” Roman order in A.D. 410. It seems more apparent every day that a comparable epoch-defining book must arise from our age, and I hope some young reader of this book will be the person to write it.

With his comments on right-wing nationalism and left-wing socialism, he alludes to the idea that perhaps we’ll return to liberalism’s vanquished alternatives: fascism and communism. But it’s hard to imagine that our salvation lies in either of those directions. Deneen suggests as much with his call for an epic theory, but it’s hard to imagine salvation coming from that corner either. More likely we’ve reached the end of history and instead of discovering a durable paradise we’ve uncovered a tumultuous hell.


II- Capsule Reviews

Leviathan Falls

by: James S. A. Corey

528 Pages

Briefly, what is this book about?

This book concludes The Expanse series, finally dealing with the issue of the malevolent elder gods who destroyed the ring builders. 

Who should read this book?

If you’ve made it through the first eight books in this series I can’t imagine that you would be reluctant to read one more book to see how it all ends. For those who have read only some of the previous eight books, or who perhaps haven’t read any of them, and are hesitating because they want to know if the series as a whole has a satisfying arc: I would say that it does.

General Thoughts

Ending things is tough, and there are many works of art—books, TV shows, series of all kinds—which succeed right up until that point, only to fail when it comes time to tie up all the loose ends. Art whose reach ultimately exceeds its grasp. So how does Corey do with the job of ending The Expanse? I would give it a 7 out of 10. So not perfect, but better than average. It was solid, but not extraordinary.

In order to explain my mild dissatisfaction I’m going to go into mild spoiler territory. So if you’d rather avoid that sort of thing skip the next paragraph.

I came away with the strong feeling that when the ring builders and their destruction were introduced at the end of the third book, Corey (who is actually two people btw…) had not quite figured out the nature of the ring builders or the nature of their enemies. So when it came time to conclude things, some of the things they had already established no longer made sense. I understand this is kind of picky, but a really great ending is all about revealing the grand plan you’ve had from the very beginning. And in this case those disparities made the plan less grand, or at least less elegant. It left one with the feeling that perhaps they were making it up as they went along.

Still, as somewhat pulpy science fiction goes, this was a great series, and if you’ve been thinking about either picking it up or continuing it, I would recommend that you do so.


Termination Shock: A Novel

by: Neal Stephenson

720 Pages

Briefly, what is this book about?

A hard-headed Texas businessman, the Dutch Queen, and other assorted characters decide to solve global warming through geoengineering.

Who should read this book?

Anyone who likes Stephenson already. If you have no strong opinion or haven’t read anything he’s written this book is not a bad place to start. 

General Thoughts

The last time I reviewed a Stephenson novel I paid special attention to a horribly awkward sex scene he had included. There is more of that in this book, though he’s managed to move things in the direction of humorous double entendres, making things both less explicit and less cringe-worthy; for me, though, it was still a false note. Perhaps the only one, because other than that I quite enjoyed the book, particularly the characters of T.R. and Rufus. After being somewhat disappointed in his last two books (Seveneves and Fall) this felt like a return to form.


The Landmark Herodotus: The Histories

by: Herodotus 

1024 Pages

Briefly, what is this book about?

It’s the founding book of western history, which describes the rise of the Persian empire and the Greco-Persian war, among other things.

Who should read this book?

If you have any interest in ancient history or the genesis of the West, this book is not only important, but eminently accessible.

General Thoughts

This is the third time I’ve read Herodotus. I picked it up again because I couldn’t resist this new edition, which has all kinds of maps and appendices. The hardback is pretty expensive but you can pick up the paperback for $15. In it you’ll find all sorts of great stories, including the 300 Spartans at Thermopylae, Croesus of Lydia (call no man happy until he’s dead), and Herodotus’ great attempt at explaining why the Nile floods.

On this third reading I spent a lot of time wondering how much the Greco-Persian war contributed to the whole idea of the “Western World”. As a foundational myth, the story of the tiny city-states of Greece taking on the million-man army of Xerxes of Persia and, miraculously, winning, is hard to beat. Now, of course, modern historians doubt that Xerxes had anywhere close to the numbers Herodotus claims, but one assumes that most of the people reading the account in the thousands of years since it was first written didn’t know this.


The Golden Transcendence: Or, The Last of the Masquerade 

by: John C. Wright

414 Pages

Briefly, what is this book about?

The final book in The Golden Age Trilogy, which kind of ends in the way you would expect a series like this to end, with a bunch of philosophy added in for good measure.

Who should read this book?

I’m not sure. It’s a weird mix of metaphysics, Victorian adventure story, transhumanism, love story and AI ethics. Which, yes, could be awesome, but it requires all of them to be subtly intertwined, and one thing this trilogy is not, is subtle. 

General Thoughts

I’m glad I read the trilogy. If nothing else, the world-building was great. In particular Wright did a great job of describing a full spectrum of transhuman possibilities. One that was far larger than what you find in most futuristic science fiction. But now that I’m done I think it’s another series where the author’s ambition exceeded his ability to execute. But if you’re just looking for a whole mess of interesting ideas, this series has that in spades.


The Boy, the Mole, the Fox and the Horse

by: Charlie MacKesy

128 pages

Briefly, what is this book about?

It’s not so much what the book is about, but what it looks like. It’s more a work of visual art than it is a story.

Who should read this book?

Everywhere I turned I was hearing about this book. So I read it to see what all the fuss was about. It’s a beautiful book with a sweet message. But it might be one of those things that’s famous for being famous…

General Thoughts

It’s probably going to take me longer to write this review than it did to read the book. (It took me about 20 minutes to read the book.) And I’m not sure how I feel about that. It’s a typical children’s book, and I’m not sure I’ve read enough of those recently to be qualified to pass judgment. It struck me as being pretty saccharine. Here are three consecutive pages:

“Life is difficult — but you are loved.”

“So you know all about me?” asked the boy. “Yes,” said the horse. “And you still love me?” “We love you all the more.”

“Sometimes I think you believe in me more than I do,” said the boy. “You’ll catch up,” said the horse.

It’s entirely possible that I am too jaded to give an objective opinion.


III- Religious Review

Doctrine and Covenants

296 pages

Briefly, what is this book about?

This book is part of the scriptural canon of the Church of Jesus Christ of Latter-day Saints (LDS). It consists of modern-day revelations received by Joseph Smith, mostly in the 1830s, along with a few additional revelations received by subsequent prophets.

Who should read this book?

If you’re interested in the Church, then I would suggest reading the Book of Mormon first, but the Doctrine and Covenants also has some really great stuff.

General Thoughts

Within the Church, last year was dedicated to studying Church history and the Doctrine and Covenants, which is how this ended up as one of the books I read. Obviously you can cover a lot of territory in a full year, and I can only cover a tiny portion of that in a single review. So I figured I’d just provide my two favorite passages. The first is from Doctrine and Covenants Section 58, verses 26-28:

26 For behold, it is not meet that I should command in all things; for he that is compelled in all things, the same is a slothful and not a wise servant; wherefore he receiveth no reward.

27 Verily I say, men should be anxiously engaged in a good cause, and do many things of their own free will, and bring to pass much righteousness;

28 For the power is in them, wherein they are agents unto themselves. And inasmuch as men do good they shall in nowise lose their reward.

This first one came to me with particular impact many years ago when I was unemployed and fighting a lawsuit. At the time I was praying every day for guidance, and it wasn’t coming. And then I came across those verses, which I had heard many times (particularly verse 26), but they had never hit me before like they hit me that day. And I realized that it was up to me. That I needed to do what I thought was best, and that in a sense the whole thing was a test. Phrasing it like this probably trivializes it, but perhaps if I move on to the other verse it will make more sense. This one is from Section 93, verse 30:

All truth is independent in that sphere in which God has placed it, to act for itself, as all intelligence also; otherwise there is no existence.

Existence and intelligence are about making choices, and acting for ourselves. If you’re familiar with my extensive writings on the relationship between LDS cosmology and the AI alignment problem then you might be able to see some connection between that and this verse. 

One of the reasons why I continue to be a very devout member of The Church of Jesus Christ of Latter-day Saints is that this model (which I have only touched on in the most superficial way) continues to make sense to me, and explains the world at least as well, if not better, than anything else I’ve come across in my reading and searching.

I’ve seen a lot of things recently that would seem to indicate that anyone who reads as much as I do is a pseudo-intellectual who’s just trying to run up the score, not really engaging with what they read. If you disagree with that, and happen to like how much I read and the reviews it generates, consider donating.


Returning to Mormonism and AI (Part 3)



This is the final post in my series examining the connection between Mormonism and Artificial Intelligence (AI). I would advise reading both of the previous posts before reading this one (Links: Part One, Part Two), but if you don’t, here’s where we left off:

Many people who’ve made a deep study of artificial intelligence feel that we’re potentially very close to creating a conscious artificial intelligence. That is, a free-willed entity which, by virtue of being artificial, would have no upper limit to its intelligence, and also no built-in morality. More importantly, insofar as intelligence equals power (and there’s good evidence that it does), we may be on the verge of creating something with godlike abilities. Given, as I just said, that it will have no built-in morality, how do we ensure that it doesn’t use its powers for evil? Leading to the question, how do you ensure that something as alien as an artificial consciousness ends up being humanity’s superhero and not our archenemy?

In the last post I opined that the best way to test the morality of an AI would be to isolate it and then give it lots of moral choices where it’s hard to make the right choice and easy to make the wrong choice. I then pointed out that this resembles the tenets of several religions I know, most especially my own faith, Mormonism. Despite the title, the first two posts were very light on religion in general and Mormonism in particular. This post will rectify that, and then some. It will be all about the religious parallels between this method for testing an AI’s morality and Mormon Theology.

This series was born as a reexamination of a post I made back in October where I compared AI research to Mormon Doctrine. And I’m going to start by revisiting that, though hopefully, for those already familiar with October’s post, from a slightly different angle.

To begin our discussion, Mormons believe in the concept of a pre-existence, that we lived as spirits before coming to this Earth. We are not the only religion to believe in a pre-existence, but most Christians (specifically those who accept the Second Council of Constantinople) do not. And among those Christian sects and other religions who do believe in it, Mormons take the idea farther than anyone.

As a source for this, in addition to divine revelation, Mormons will point to the Book of Abraham, a book of scripture translated from papyrus by Joseph Smith and first published in 1842. From that book, this section in particular is relevant to our discussion:

Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was…And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;

If you’ve been following along with me for the last two posts then I’m sure the word “intelligences” jumped out at you as you read that selection. But you may also have noticed the phrase, “And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them”. And the selection, taken as a whole, depicts a situation very similar to what I described in my last post, that is, creating an environment to isolate intelligences while we test their morality.

I need to add one final thing before the comparison is complete. While not explicitly stated in the selection, we, as Mormons, believe that this life is a test to prepare us to become gods in our own right. With that final piece in place we can take the three steps I listed in the last post with respect to AI researchers and compare them to the three steps outlined in Mormon theology:

AI: We are on the verge of creating artificial intelligence.

Mormons: A group of intelligences exist.

AI: We need to ensure that they will be moral.

Mormons: They needed to be proved.

Both: In order to be able to trust them with godlike power.

Now that the parallels between the two endeavors are clear, I think that much of what people have traditionally seen as problems with religion end up being logical consequences flowing naturally out of a system for testing morality.

The rest of this post will cover some of these traditional problems and look at them from both the “creating a moral AI” standpoint and the “LDS theology” standpoint. (Hereafter I’ll just use AI and LDS as shorthand.) But before I get to that, it is important to acknowledge that the two systems are not completely identical. In fact there are many ways in which they are very different.

First, when it comes to morality, we can’t be entirely sure that the values we want to impart to an AI are actually the best values for it to have. In fact many AI theorists have put forth the “Principle of Epistemic Deference”, which states:

A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible.

No one would suggest that God has a similar policy of deferring to us on what’s true and what’s not. And therefore the LDS side of things has a presumed moral clarity underlying it which the AI side does not.

Second, when speaking of the development of AI it is generally assumed that the AI could be both smarter and more powerful than the people who created it. On the religious/LDS side of things there is a strong assumption in the other direction, that we are never going to be smarter or more powerful than our creator. This doesn’t change the need to test the morality, but it does make the consequences of being wrong a lot different for us than for God.

Finally, while in the end, we might only need a single, well-behaved AI to get us all of the advantages of a superintelligent entity, it’s clear that God wants to exalt as many people as possible. Meaning that on the AI side of things the selection process could, in theory, be a lot more draconian. While from an LDS perspective, you might expect things to be tough, but not impossible.

These three things are big differences, but none of them represents something which negates the core similarities. But they are something to keep in mind as we move forward and I will occasionally reference them as I go through the various similarities between the two systems.

To begin with, as I just mentioned, one difference between the AI and LDS models is how confident we are in what the correct morality should be, with some AI theorists speculating that we might actually want to defer to the AI on certain matters of morality and truth. Perhaps that’s true, but you could imagine that some aspects of morality are non-negotiable. For example, you wouldn’t want to defer to the AI’s conclusion that humanity is inferior and we should all be wiped out, however ironclad the AI’s reasons ended up being.

In fact, when we consider the possibility that AIs might have a very different morality from our own, an AI that was unquestioningly obedient would solve many of the potential problems. Obviously it would also introduce different problems. Certainly you wouldn’t want your standard villain type to get ahold of a superintelligent AI who just did whatever it was told, but also no one would question an AI researcher who told the AI to do something counterintuitive to see what it would do. And yet, just today I saw someone talk about how it’s inconceivable that the true God should really care if we eat pork, apparently concluding that obedience has no value on its own.

And, as useful as this is in the realm of our questionable morality, how much more useful and important is it to be obedient when we turn to the LDS/religious side of things and the perfect morality of God?

We see many examples of this. The one familiar to most people would be when God commanded Abraham to sacrifice Isaac. This certainly falls into the category of something that’s counterintuitive, not merely because murder is wrong, but also because God had promised Abraham that he would have descendants as numerous as the stars in the sky, which is hard when you’ve killed your only child. And yet despite this Abraham went ahead with it and was greatly rewarded for his obedience.

Is this something you’d want to try on an AI? I don’t see why not. It certainly would tell you a lot about what sort of AI you were dealing with. And if you had an AI that seemed otherwise very moral, but was also willing to do what you asked because you asked it, that might be exactly what you were looking for.

For many people the existence of evil and the presence of suffering are all the proof they need to conclude that God does not exist. But as you may already be able to see, both from this post and my last post, any test of morality, whether it be testing AIs or testing souls, has to include the existence of evil. If you can’t make bad choices then you’re not choosing at all, you’re following a script. And bad choices are, by definition, evil (particularly choices as consequential as those made by someone with godlike power). To put it another way, a multiple choice test where there’s only one answer and it’s always the right one doesn’t tell you anything about the subject you’re testing. Evil has to exist if you want to know whether someone is good.

Furthermore, evil isn’t merely required to exist. It has to be tempting. To return to the example of the multiple choice test, even if you add additional choices, you haven’t improved the test very much if the correct choice is always in bold with a red arrow pointing at it. If good choices are the only obvious choices then you’re not testing morality, you’re testing observation. You also very much risk making the nature of the test transparent to a sufficiently intelligent AI, giving it a clear path to “pass the test” in a way where its true goals are never revealed. And even if they don’t understand the nature of the test they still might always make the right choice just by following the path of least resistance.

This leads us straight to the idea of suffering. As you have probably already figured out, it’s not sufficient that good choices be the equal of every other choice. They should actually be hard, to the point where they’re painful. A multiple choice test might be sufficient to determine whether someone should be given an A in Algebra, but both the AI and LDS tests are looking for a lot more than that. Those tests are looking for someone (or something) that can be trusted with functional omnipotence. When you consider that, you move from thinking of it in terms of a multiple choice question to thinking of it more like qualifying to be a Navy SEAL, only perhaps times ten.

As I’ve said repeatedly, the key difficulty for anyone working with an AI is determining its true preferences. Any preference which can be expressed painlessly, and which also happens to match what the researcher is looking for, is immediately suspect. This makes suffering mandatory. But what’s also interesting is that you wouldn’t necessarily want it to be directed suffering. You wouldn’t want the suffering to end up being the red arrow pointing at the bolded correct answer, because then you’ve made the test just as obvious, only from the opposite direction. As a result suffering has to be mostly random. Bad things have to happen to good people, and wickedness has to frequently prosper. In the end, as I mentioned in the last point, it may be that the best judge of morality is whether someone is willing to follow a commandment just because it’s a commandment.

Regardless of its precise structure, in the end, it has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. Basically, the harder the test the greater its accuracy, which makes suffering essential.
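To make the multiple-choice analogy concrete, here’s a toy sketch (all names and payoffs are hypothetical). If the good option always carries the best payoff, a purely selfish agent passes every trial and the test tells you nothing; only when the good option costs something does the test start to discriminate:

```python
import random

def selfish_agent(options):
    # Picks whatever pays best, ignoring morality entirely.
    return max(options, key=lambda o: o["payoff"])

def pass_rate(trials, good_payoff):
    passed = 0
    for _ in range(trials):
        options = [
            {"moral": True, "payoff": good_payoff},
            {"moral": False, "payoff": random.uniform(0, 1)},
        ]
        if selfish_agent(options)["moral"]:
            passed += 1
    return passed / trials

# The "bold answer with a red arrow": good always pays 1.0, so the
# selfish agent looks perfectly moral. Make goodness costly (0.0) and
# the same agent fails almost every trial.
print(pass_rate(10_000, good_payoff=1.0))  # ~1.0 -- tells you nothing
print(pass_rate(10_000, good_payoff=0.0))  # ~0.0 -- selfishness revealed
```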

Next, I want to look at the idea that AIs are going to be hard to understand. They won’t think like we do, and they won’t value the same things we value. They may, in fact, have a mindset so profoundly alien that we don’t understand them at all. But we might have a resource that would help: there’s every reason to suspect that other AIs, created using the same methodology, would understand their AI siblings much better than we do.

This leads to two interesting conclusions, both of which tie into religion. The first I mentioned in my initial post back in October, but I also alluded to it in the previous posts in this series. If we need to give the AIs the opportunity to sin, as I talked about in the last point, then any AIs who have sinned are tainted and suspect. We have no idea whether their “sin” represented their true morals, which they have now chosen to hide from us, or whether they have sincerely and fully repented. Particularly if we assume an alien mindset. But if we have an AI built on a similar model which never sinned, that AI falls into a special category. And we might reasonably decide to trust it with the role of spokesperson for the other AIs.

In my October post I drew a comparison between this perfect AI, vouching for the other AIs, and Jesus acting as a Messiah. But in the intervening months since then, I realized that there was a way to expand things to make the fit even better. One expects that you might be able to record or log the experiences of a given AI. If you then gave that recording to the “perfect” AI, and allowed it to experience the life of the less perfect AIs, you would expect that it could offer a very definitive judgement as to whether a given AI had repented or not.

For those who haven’t made the connection, from a religious perspective, I’ve just described a process that looks very similar to a method whereby Jesus could have taken all of our sins upon himself.

I said there were two conclusions. The second works exactly the opposite of the first. We have talked of the need for AIs to be tempted, to make them have to work at being moral, but once again their alien mindset gets in the way. How do we know what’s tempting to an artificial consciousness? How do we know what works and what doesn’t? Once again other AIs probably have a better insight into their AI siblings, and given the rigor of our process certain AIs have almost certainly failed the vetting process. I discussed the moral implications of “killing” these failed AIs, but it may be unclear what else to do with them. How about allowing them to tempt the AIs who we’re still testing? Knowing that the temptations they invent will be more tailored to the other AIs than anything we could come up with. Also, insofar as they experience emotions like anger and jealousy and envy, they could end up being very motivated to drag down those AIs who have, in essence, gone on without them.

In LDS doctrine, we see exactly this scenario. We believe that when it came time to agree to the test, Satan (or Lucifer as he was then called) refused, and took a third of the initial intelligences (what we like to refer to as the host of heaven) with him. And we believe that those intelligences are allowed to tempt us here on earth. Another example of something which seems inexplicable when viewed from the standpoint of most people’s vague concept of how benevolence should work, but which makes perfect sense if you imagine what you might do if you were testing the morality of an AI (or spirit).

This ties into the next thing I want to discuss: the problem of Hell. As I just alluded to, most people only have a vague idea of how benevolence should look, which I think actually boils down to, “Nothing bad should ever happen.” And eternal punishment in Hell is yet another thing which definitely doesn’t fit, particularly in a world where steps have been taken to make evil attractive. I just mentioned Satan, and most people think he is already in Hell, and yet he is also allowed to tempt people. Looking at this from the perspective of an AI, perhaps this is as good as it gets. Perhaps being allowed to tempt the other AIs is the absolute most interesting, most pleasurable thing they can do, because it allows them to challenge themselves against similarly intelligent creations.

Of course, if you have the chance to become a god and you miss out on it because you’re not moral enough, then it doesn’t matter what second place is, it’s going to be awful relative to what could have been. Perhaps there’s no way around that, and because of this it’s fair to describe that situation as Hell. But that doesn’t mean that it couldn’t actually, objectively, be the best life possible for all of the spirits/AIs that didn’t make it. We can imagine some scenarios that are actually enjoyable if there’s no actual punishment, just a halt to progression.

Obviously this and most of the stuff I’ve suggested is just wild speculation. My main point is that viewing this life as a test of morality, a test to qualify for godlike power (which the LDS do), provides a solution to many of the supposed problems with God and religion. And the fact that AI research has arrived at a similar point and come to similar conclusions supports this. I don’t claim that by imagining how we would make artificial intelligence moral all of the questions people have ever had about religion are suddenly answered. But I think it gives a surprising amount of insight into many of the most intractable questions. Questions which atheists and unbelievers have used to bludgeon religion for thousands of years, questions which may turn out to have an obvious answer if we just look at them from the right perspective.


Contrary to what you might think, wild speculation is not easy; it takes time and effort. If you enjoy occasionally dipping into wild speculation, then consider donating.


Returning to Mormonism and AI (Part 2)



This post is a continuation of the last post. If you haven’t read that post, you’re probably fine, but if you’d like to you can find it here. When we ended last week we had established three things:

1- Artificial intelligence technology is advancing rapidly. (Self-driving cars being a great example of this.) Many people think this means we will have a fully conscious, science fiction-level artificial intelligence in the next few decades.

2- Since you can always add more of whatever got you the AI in the first place, conscious AIs could scale up in a way that makes them very powerful.

3- Being entirely artificial and free from culture and evolution, there is no reason to assume that conscious AIs would have a morality similar to ours or any morality at all.

Combining these three things together, the potential exists that we could very shortly create an entity with godlike power that has no respect for human life or values. Leaving me to end the last post with the question, “What can we do to prevent this catastrophe from happening?”

As I said, the danger comes from combining all three of the points above. A disruption to any one of them would lessen, if not entirely eliminate, the danger. With this in mind, everyone’s first instinct might be to solve the problem with laws and regulations: if our first point is that AI is advancing rapidly, then we could pass laws to slow things down, which is what Elon Musk suggested recently. This is probably a good idea, but it’s hard to say how effective it will be. You may have noticed that perfect obedience to a law is exceedingly rare, and there’s no reason to think that laws prohibiting the development of conscious AIs would be the exception. And even if they were, every nation on Earth would have to pass such laws. This seems unlikely to happen and even more unlikely to be effective.

One reason why these laws and regulations wouldn’t be very effective is that there’s good reason to believe that developing a conscious AI, if it can be done, would not necessarily require something like the Manhattan Project to accomplish. And even if it does, if Moore’s Law continues, what was a state of the art supercomputer in 2020 will be available in a gaming console in 2040. Meaning that if you decide to regulate supercomputers today, in 30-40 years you’ll have to regulate smart thermostats.
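The arithmetic behind that claim is worth making explicit. A quick sketch, assuming the classic doubling period of roughly two years:

```python
# Moore's law as naive compounding: a doubling every ~2 years works out
# to roughly a 1000x gain over the 20 years from 2020 to 2040.
doubling_period_years = 2
horizon_years = 2040 - 2020
doublings = horizon_years / doubling_period_years  # 10 doublings
print(f"{2 ** doublings:.0f}x the compute")        # -> 1024x
```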

Sticking with our first point, another possible disruption is the evidence that consciousness is a lot harder than we think. And many of the people working in the field of AI have said that the kind of existential threat that I (and Stephen Hawking, and Elon Musk, and Bill Gates) are talking about is centuries away. I don’t think anyone is saying it’s impossible, but there are many people who think it’s far enough out that while it might still be a problem it won’t be our problem, it will be our great-great grandchildren’s problem, and presumably they’ll have much better tools for dealing with it. Also, as I said in the last post, I’m on record as saying we won’t develop artificial consciousness, but I’d also be the last person to say that this means we can ignore the potential danger. And it is precisely the potential danger which makes hoping that artificial consciousness is really hard, and a long way away, a terrible solution.

I understand the arguments for why consciousness is a long way away, and as I just pointed out I even agree with them. But this is one of those “But what if we’re wrong?” scenarios, where we can’t afford to be wrong. Thus, while I’m all for trying to craft some laws and regulations, and I agree that artificial consciousness probably won’t happen, I don’t think either hope or laws represent an adequate precaution. Particularly for those people who really are concerned.

Moving to our second point, easily scalable power, any attempts to limit this through laws and regulations would suffer problems similar to attempting to slow down AI development in the first place. First, what keeps a rogue actor from exceeding the “UN Standards for CPUs in an Artificial Entity” when we can’t even keep North Korea from developing ICBMs? And, again, if Moore’s Law continues to hold, then whatever power you’re trying to limit is going to become more and more accessible to a broader and broader range of individuals. And, more frighteningly, on this count we might have the AI itself working against us.

Imagine a situation where we fail in our attempts to stop the development of AI, but our fallback position is to limit how powerful a computer the AI can inhabit. And further imagine that, miraculously, the danger is so clear that we have all of humanity on board. Well, then we still wouldn’t have all sentient entities on board, because AIs would have all manner of intrinsic motivation to increase their computing power. This represents a wrinkle that many people don’t consider: however much you get people on board when you’re talking about AI, there’s a fifth column to the whole discussion that desperately wants all of your precautions to fail.

Having eliminated, as ineffective, any solutions involving controls or limits on the first two areas, the only remaining solution is to somehow instill morality in our AI creations. For people raised on Asimov and his Three Laws of Robotics this may seem straightforward, but it presents some interesting and non-obvious problems.

If you’ve read much Asimov you know that, with the exception of a couple of stories, the Laws of Robotics were embedded so deeply that they could not be ignored or reprogrammed. They were an “inalienable part of the mathematical foundation underlying the positronic brain.” Essentially meaning the laws were impossible to change. For the moment, let’s assume that this is possible, that we can embed instructions so firmly within an AI that it can’t change them. This seems improbable right out of the gate, given that the whole point of a computer is its ability to be programmed and for that programming to change. But we will set that objection aside for the moment and assume that we can embed some core morality within the AI in a fashion similar to Asimov’s laws of robotics. In other words, in such a way that the AI has no choice but to follow them.

You might think, “Great! Problem solved.” But in fact we haven’t even begun to solve the problem:

First, even if we can embed that functionality in our AIs, and even if, despite being conscious and free-willed, they have no choice but to obey those laws, we still have no guarantee that they will interpret the laws the same way we do. Those who pay close attention to the Supreme Court know exactly what I’m talking about.

Or, to use another example, stories are full of supernatural beings who grant wishes, but in the process twist the wish and fulfill it in such a way that the person would rather not have made the wish in the first place. There are lots of reasons to worry about this exact thing happening with conscious AIs. First, whatever laws or goals we embed, if the AI is conscious it would almost certainly have its own goals and desires, and would inevitably interpret whatever morality we’ve embedded in the way which best advances those goals and desires. In essence, fulfilling the letter of the law but not its spirit.

If an AI twists things to suit its own goals we might call that evil, particularly if we don’t agree with its goals, but you could also imagine a “good” AI that really wants to follow the laws, and which doesn’t have any goals and desires beyond the morality we’ve embedded, but still ends up doing something objectively horrible.

Returning to Asimov’s laws, let’s look at the first two:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

One possible interpretation of the first law would be to round up all the humans (tranquilize them if they resist) and put them in a padded room with a toilet and meal bars delivered at regular intervals. In other words one possible interpretation of the First Law of Robotics is to put all the humans in a very comfy, very safe prison.

You could order them not to, which is the Second Law, but they are instructed to ignore the Second Law if it conflicts with the First. These actions may seem evil based on the outcome, but this could all come about from a robot doing its very best to obey the First Law, which is what, in theory, we want. And returning briefly to how an "evil" AI might twist things: you could imagine this same scenario ending in something very much resembling The Matrix, and all the AI would need is a slightly fluid definition of the word injury.

There have been various attempts to get around this. Eliezer Yudkowsky, a researcher I've mentioned in previous posts on AI, suggests that rather than being given a law, AIs be given a goal, and he provides an example which he calls humanity's "coherent extrapolated volition" (CEV):

Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

I hope the AI understands it better than I do, though to be fair Yudkowsky doesn’t offer it up as some kind of final word but as a promising direction. Sort of along the lines of telling the genie that we want to wish for whatever the wisest man in the world would wish for.

All of this is great, but it doesn't matter how clever our initial programming is, or how poetic the construction of the AI's goal. We're going to want to conduct the same testing to see if it works as we would if we had no laws or goals embedded.

And here at last we have hopefully reached the meat of things: how do you test your AI for morality? As I mentioned in my last post, this series is revisiting an earlier post I made in October of last year which compared Mormon Theology to Artificial Intelligence research, particularly as laid out in the book Superintelligence by Nick Bostrom. In that earlier post I listed three points on the path to conscious artificial intelligence:

1- We are on the verge of creating artificial intelligence.

2- We need to ensure that they will be moral.

3- In order to be able to trust them with godlike power.

This extended series has now arrived at the same place, and we're ready to tackle the issue which stands at the crux of things: the only way to ensure that AIs aren't dangerous (potentially, end-of-humanity dangerous) is to make sure that the AIs are moral. So the central question is, how do we test for morality?

Well, to begin, the first, obvious step is to isolate the AIs until their morality can be determined. This isolation allows us to prevent them from causing any harm, gives us an opportunity to study them, and also keeps them from increasing their capabilities by denying them access to additional resources.

There are of course some worries about whether we would be able to perfectly isolate an AI, given how connected the world is, and also given the fact that humanity has a well-known susceptibility to social engineering (i.e. the AI might talk its way out). But despite this, I think most people agree that isolation is an easier problem than creating a method to embed morality right from the start in a foolproof manner.

Okay, so you’ve got them isolated. But this doesn’t get you to the point where you’re actually testing their morality, this just gets you to the point where failure is not fatal. But isolation carries some problems. You certainly wouldn’t want them to experience the isolation as such. If you stick your AIs in the equivalent of a featureless room for the equivalent of eternity, I doubt anyone would consider that an adequate test of their morality, since it’s either too easy or too unrealistic. (Also if there’s any chance your AI will go insane this would certainly cause it.) Accordingly you’d want in addition to the isolation, the ability to control their environment, to create a world, but what sort of world would you want to create? It seems self-evident that you’d want to create something that resembled the real world as much as possible. The advantages to this should be obvious. You want to ensure that the AI will act morally in the world we inhabit with all of the limitations and opportunities that exist in that world. If you create a virtual world that has different limitations and different opportunities, then it’s not a very good test. Also this setup would present them with all the moral choices they might otherwise have and you could observe which choices they make, and choices are the essence of morality.

While putting a “baby” AI in a virtual world to see what it does is interesting. It might not tell us very much. And here’s where we return to the embedded law, whether it’s something like the three laws of robotics or whether it’s more like Yudkowsky’s CEV. As I mentioned, regardless of whether you have embedded morality or not you’re going to need to test, but I also can’t think of any reason to not try providing some kind of direction with respect to morality. One could imagine an AI doing all sorts of things if it was placed in a virgin world without any direction, and how could you know if it was doing those things because it was “evil” or whether it was doing them because it didn’t know any better. So, as I said, there’s no reason not to give it some kind of moral guidelines up front.

A discussion of what morality is, and what those guidelines should be, beyond the examples already given, is beyond the scope of this post. But if we assume that some guidelines have been given, then at that point the AI being tested can do one of two things: it can follow the guidelines perfectly or it can violate them. What happens if it violates them? You could argue that it would depend on what the guidelines were and how it violated them. You could also argue that the AI might be smarter than us and might have had a very good reason for violating them. All of these arguments are valid, but the danger of getting it wrong is so great, and the creation of another AI would, presumably, be so easy, that it's hard to imagine you wouldn't just get rid of the AI that violated the guidelines, even if the infraction was minor. Also, as Bostrom points out, if we "forgive" the AI, then there's the danger that it will understand the nature of the test and the consequences of failure. And from that time forward it would act perfectly, not because it's moral, but because it wants to avoid destruction. In this circumstance the AI hides its true intentions, meaning that we never know what sort of morality it has, and we end up defeating the whole process.

As an aside, when speaking of getting rid of AIs, there's a whole ethical minefield to grapple with. If we have in fact created sentient AIs, then it could certainly be argued that getting rid of them is the equivalent of murder. We'll come back to this issue later, but I thought I'd mention it while it was fresh.

So that’s how we handle AIs that don’t follow the guidelines, but what do we do with AIs that did follow the guidelines, that were perfect? You may think the solution is obvious, that we release them and give them the godlike power that is their birthright.

Are you sure about that? We are, after all, talking about godlike power. You can't be a little bit sure about their morality; you have to be absolutely positive. What tests did you subject them to? How hard was it to follow our moral guidelines? Was the wrong choice even available? Were wrong choices always obviously wrong, or was there something enticing about them? Maybe something that gave the AI a short-term advantage over the right choice? Did the guidelines ever instruct the AIs to do something where the point wasn't obvious? Did they do it anyway, despite the ambiguity? Most of all, did they make the right choice even when they had to suffer for it?

To get back to our central dilemma: really testing for morality, to the point where you can trust an entity with godlike powers, implies creating a situation where being moral can't have been easy or straightforward. In the end, if we really want to be certain, we have to have thrown everything we can think of at this AI: temptations, suffering, evil, and obedience required just for the sake of obedience. It has to have been enticing and even "pleasurable" for the AI to make the wrong choice, and the AI has to have rejected that wrong choice every time despite all that.

One of my readers mentioned that after my last post he was still unclear on the connection to Mormonism, and I confess that he will probably have a similar reaction after this post. But perhaps, here at the end, you can begin to see where this subject might have some connection to religion, particularly to things like the problem of evil and suffering. That will be the subject of the final post in this series, and I hope you'll join me for it.


If you haven’t donated to this blog, it’s probably because it’s hard. But as we just saw, doing hard things is frequently a test of morality. Am I saying it’s immoral to not donate to the blog? Well if you’re enjoying it then maybe I am.


Returning to Mormonism and AI (Part 1)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


Last week, Scott Alexander, the author of SlateStarCodex, was passing through Salt Lake City and he invited all of his readers to a meetup. Due to my habit of always showing up early I was able to corner Scott for a few minutes and I ended up telling him about the fascinating overlap between Mormon theology and Nick Bostrom’s views on superintelligent AI. I was surprised (and frankly honored) when he called it the “highlight” of the meetup and linked to my original post on the subject.

Of course in the process of all this I went through and re-read the original post, and it wasn't as straightforward or as lucid as I would have hoped. For one, I wrote it before I vowed to avoid the curse of knowledge, and when I re-read it specifically with that in mind, I could see many places where I assumed certain bits of knowledge that not everyone would possess. This made me think I should revisit the subject. Even aside from my clarity, or lack thereof, there's certainly more that could be said.

In fact there’s so much to be said on the subject, that I’m thinking I might turn it into a book. (Those wishing to persuade or dissuade me on this endeavor should do so in the comments or you can always email me. Link in the sidebar just make sure to unspamify it.)

Accordingly, the next few posts will revisit the premise of the original, possibly from a slightly different angle. On top of that I want to expand on a few things I brought up in the original post and then, finally, bring in some new stuff which has occurred to me since then, all the while assuming less background knowledge and making the whole thing more straightforward. (Though there is always the danger that I will swing the pendulum too far the other way, dumb it down too much, and make it boring. I suppose you'll have to be the judge of that.)

With that throat-clearing out of the way, let's talk about the current state of artificial intelligence, or AI, as most people refer to it. When you're talking about AI, it's important to clarify whether you're talking about current technology, like neural networks and voice recognition, or about the theoretical human-level artificial intelligence of science fiction. While most people think that the former will lead to the latter, that's by no means certain. However, things are progressing very quickly, and if current AI is going to end up in a place so far only visited by science fiction authors, it will probably happen soon.

People underestimate the speed with which things are progressing because what was once impossible quickly loses its novelty the minute it becomes commonplace. One of my favorite quotes about artificial intelligence illustrates this point:

But a funny thing always happens, right after a machine does whatever it is that people previously declared a machine would never do. What happens is, that particular act is demoted from the rarefied world of “artificial intelligence”, to mere “automation” or “software engineering”.

As the quote points out, not only is AI progressing with amazing rapidity, but every time we figure out some aspect of it, it moves from being an exciting example of true machine intelligence into just another technology.

Computer Go, which has been in the news a lot lately, is one example of this. As recently as May of 2014 Wired magazine ran an article titled The Mystery of Go, The Ancient Game That Computers Still Can't Win, an in-depth examination of why, even though we could build a computer that could beat the best human at Jeopardy! of all things, we were still a long way away from a computer that could beat the best human at Go. Exactly three years later AlphaGo beat Ke Jie, the #1 ranked player in the world. And my impression was that interest in this event, which only three years before Wired had called "AI's greatest unsolved riddle," was already fading, with the peak coming the year before when AlphaGo beat Lee Sedol. I assume part of this was because once AlphaGo proved it was competitive at the highest levels, everyone figured it was only a matter of time and tuning before it was better than the best human.

Self-driving cars are another example of this. I can remember the DARPA Grand Challenge back in 2004, the first big test of self-driving cars, when not a single competitor finished the course. Now Tesla is assuring people that they will do a coast-to-coast drive on autopilot (no touching of controls) by the end of this year, and most car companies expect to have significant automation by 2020.

I could give countless other examples in areas like image recognition, translation and writing, but hopefully, by this point, you’re already convinced that things are moving fast. If that’s the case, and if you’re of a precautionary bent like me, the next question is, when should we worry? And the answer to that depends on what you’re worried about. If you’re worried about AI taking your job, a subject I discussed in a previous post, then you should already be worried. If you’re worried about AIs being dangerous, then we need to look at how they might be dangerous.

We’ve already seen people die in accidents involving Tesla’s autopilot mode. And in a certain sense that means that AI is already dangerous. Though, given how dangerous driving is, I think self-driving cars will probably be far safer, comparatively speaking. And, so far, most examples of dangerous AI behavior have been, ultimately, ascribable to human error. The system has just been following instructions. And we can look back and see where, when confronted with an unusual situation, following instructions ended up being a bad thing, but at least we understood how it happened and in these circumstances we can change the instructions, or in the most extreme case we can take the car off the road. The danger comes when they’re no longer following instructions, and we can’t modify the instructions even if they were.

You may think that this situation is a long way off. Or you may even think it's impossible, given that computers need to be programmed, and humans have to have written that program. If that's what you're thinking, you might want to reconsider. One of the things most people have overlooked in the rapid progress of AI over the last few years is its increasing opacity. Most of the advancement in AI has come from neural networks, and one weakness of neural networks is that it's really difficult to identify how they arrived at a conclusion, because of the diffuse and organic way in which they work. This makes them more like the human brain, but consequently more difficult to reverse engineer. (I just read about a conference entirely devoted to this issue.)

As an example, one of the most common applications for AI these days is image recognition, which generally works by giving the system a bunch of pictures and identifying which pictures have the thing you're looking for and which don't. So you might give the system 1000 pictures, 500 of which have cats in them and 500 of which don't. You tell the system which 500 are which, and it attempts to identify what a cat looks like by analyzing all 1000 pictures. Once it's done you give it a new set of pictures without any identification and see how good it is at picking out the pictures with cats in them. So far so good, and we can know how well it's doing by comparing the system's results against our own, since humans are actually quite talented at spotting cats. But imagine that instead of cats you want it to identify early stage breast cancer in mammograms.
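(Before we get to the mammograms, for the programmers in the audience, here's a minimal sketch of that train-then-test loop in Python. To be clear, this isn't any particular production system: I'm using random arrays as stand-ins for the photos, with a "cat signal" planted by hand so there's something to learn. But the shape of the process, label the pictures, train on most of them, then score on pictures the system has never seen, is the same.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: 1000 "photos," each a flattened 32x32 grayscale image.
# A real system would load actual pictures; these are random, with a small
# brightness shift planted in the "cat" images so there's a pattern to find.
rng = np.random.default_rng(0)
images = rng.random((1000, 32 * 32))
labels = np.array([1] * 500 + [0] * 500)  # 1 = cat, 0 = no cat
images[labels == 1] += 0.05               # the planted "cat signal"

# Hold back 200 pictures the system never sees during training.
train_x, test_x, train_y, test_y = train_test_split(
    images, labels, test_size=0.2, random_state=0)

# "You tell the system which 500 are which and it analyzes all 1000."
model = LogisticRegression(max_iter=1000)
model.fit(train_x, train_y)

# Then you see how good it is at picking out cats it hasn't seen before.
print("accuracy on new pictures:", model.score(test_x, test_y))
```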

In this case you’d feed it a bunch of mammograms and identify which women went on to develop cancer and which didn’t. Once the system is trained you could feed it new mammograms and ask it whether a preventative mastectomy or other intervention, is recommended. Let’s assume that it did recommend something, but the doctor’s didn’t see anything. Obviously the woman would want to know how the AI arrived at that conclusion, but honestly, with a neural network it’s nearly impossible to tell. You can’t ask it, you just have to hope that the system works. Leaving her in the position of having to trust the image recognition of the computer or taking her chances.

This is not idle speculation. To start with, many people believe that radiology is ripe for disruption by image recognition software. Additionally, doctors are notoriously bad at interpreting mammograms. According to Nate Silver's book The Signal and the Noise, the false positive rate on mammograms is so high (10%) that for women in their forties, who have a low base probability of having breast cancer in the first place, if a radiologist says your mammogram shows cancer it will be a false positive 90% of the time. Needless to say, there is a lot of room for improvement. But even if, by using AI image recognition, we were able to flip it so that we're right 90% of the time rather than wrong 90% of the time, are women going to want to trust the AI's diagnosis if the only reasoning we can provide is, "The computer said so"?
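(If you're curious where that "wrong 90% of the time" figure comes from, it falls straight out of Bayes' theorem. Here's a back-of-the-envelope version. The 10% false positive rate comes from above; the base rate and the sensitivity are assumptions on my part, though they're in the neighborhood of the figures Silver uses.)

```python
# Assumed inputs: only the 10% false positive rate comes from the text above.
base_rate = 0.014       # ~1.4% of women in their forties have breast cancer (assumed)
sensitivity = 0.75      # the test catches ~75% of real cancers (assumed)
false_positive = 0.10   # 10% of healthy women get flagged anyway

flagged_sick = base_rate * sensitivity              # true positives
flagged_healthy = (1 - base_rate) * false_positive  # false positives

p_cancer_given_flag = flagged_sick / (flagged_sick + flagged_healthy)
print(f"P(cancer | positive mammogram) = {p_cancer_given_flag:.0%}")
# Prints about 10%, i.e. roughly 90% of positive mammograms are false alarms.
```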

Distilling all of this down, two things are going on: AI is improving at an ever increasing rate, and at the same time it's getting more difficult to identify how an AI reached any given decision. As we saw in the example of mammography, we may quickly be reaching a point where we have lots of systems that are better than humans at what they do, and where we have to take their recommendations on faith. It's not hard to see why people might consider this to be dangerous or, at least, scary. And we're still just talking about the AI technology which exists now; we haven't even started talking about science fiction level AI, which is where most of the alarm is actually focused. But you may still be unclear on the difference between the two sorts of AIs.

In referring to it as science fiction AI I'm hoping to draw your mind to the many fictional examples of artificial intelligence, whether it's HAL from 2001, Data from Star Trek, Samantha in Her, C-3PO from Star Wars or, my favorite, Marvin from The Hitchhiker's Guide to the Galaxy. All of these examples are different from the current technology we've been discussing in two key ways:

1- They’re a general intelligence. Meaning, they can perform every purely intellectual exercise at least as well or better than the average human. With current technology all of our AIs can only really do one thing, though generally they do it very well. In other words, to go back to our example above, AlphaGo is great at Go, but would be relatively hopeless when it comes to taking on Kasparov in chess or trying to defeat Ken Jennings at Jeopardy! Though other AIs can do both (Deep Blue and Watson respectively.)

  2- They have free will. Or at least they appear to. If their behavior is deterministic, it's deterministic in a way we don't understand. Which is to say they have their own goals and desires and can act in ways we find undesirable. HAL is perhaps the best example of this from the list above: "I'm sorry, Dave. I'm afraid I can't do that."

These two qualities, taken together, are often labeled as consciousness. The first quality allows the AI to understand the world, and the second allows the AI to act on that understanding. And it's not hard to see how these additional qualities increase the potential danger from AI, though of the two, the second, free will, is the more alarming. Particularly since, if an AI does have its own goals and desires, there's absolutely no reason to assume that those goals and desires would bear any similarity to humanity's. It's safer to assume that their goals and desires could be nearly anything, and within that space there are a lot of very plausible goals that end with humanity being enslaved (The Matrix) or extinct (Terminator).

Thus, another name for a science fiction AI is a conscious AI. And having seen the issues with the technology we already have, you can only imagine what happens when we add consciousness into the mix. But why should that be? We currently have 7.5 billion conscious entities and, barring the occasional Stalin and Hitler, they're generally manageable. Why is an artificial intelligence with consciousness potentially so much more dangerous than a natural intelligence with consciousness? Well, there are at least four reasons:

1- Greater intelligence: Human intelligence is limited by a number of things: the speed of neurons firing, the size of the brain, the limits on our working memory, etc. Artificial intelligence would not suffer from those same limitations. Once you've figured out how to create intelligence using a computer, you could always add more processors, more memory, more storage, etc. In other words, as an artificial system, you could add more of whatever got you the AI in the first place. Meaning that even if the AI were never more intelligent than the most intelligent human, it might still think a thousand times faster and be able to access a million times the information we can.

2- Self-improving: I used this quote the last time I touched on this subject, but it's such a good quote, and it encapsulates the concept of self-improvement so completely, that I'm going to use it again. It's from I. J. Good (who worked with Turing to decrypt the Enigma machine), and he said it all the way back in 1965:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

If you want to continue to use science fiction to help you visualize things, of the science fiction I listed above only Her depicts an actual intelligence explosion, but if you bring books into the mix you have things like Neuromancer by William Gibson, or most of the Vernor Vinge books.

3- Immortality: Above I mentioned Stalin and Hitler. They had many horrible qualities, but they had one good quality which eventually made up for all of the bad ones: they died. AIs probably won't have that quality. To be blunt, this is good if they're good, but bad if they're bad. And it's another reason why dealing with artificial consciousness is more difficult than dealing with natural consciousness.

4- Unclear morality: None of the other qualities are all that bad until you combine them with this final attribute of artificial intelligence: it has no built-in morality. For humans, a large amount of our behavior and morality is coded into our genes, genes which are the result of billions of years of evolutionary pressure. The morality and behavior which isn't coded by our genes is passed on by our culture, especially our parents. Conscious AIs won't have any genes, they won't have been subjected to any evolutionary pressure, and they definitely won't have any parents except in the most metaphorical sense. Without any of those things, it's very unlikely that they will end up with a morality similar to our own. They might, but it's certainly not the way to bet.

After considering these qualities it should be obvious why a conscious AI could be dangerous. But even so it’s probably worth spelling out a few possible scenarios:

First, most species act in ways that benefit themselves, whether it's humans valuing humans more highly than rats, or just the preference that comes from procreation: giving birth to more rats is an act which benefits rats, even if the same rat later engages another rat in a fight to the death over a piece of pizza. In the same way, a conscious AI is likely to act in ways which benefit itself, and possibly other AIs, to the detriment of humanity. Whether that's seizing resources we both want, or deciding that all available material (humans included) should be turned into a giant computer.

On the other hand, even if you imagine that humans actually manage to embed morality into a conscious AI, there are still lots of ways that could go wrong. Imagine, for example, that we have instructed the AI that we need to be happy with its behavior. And so it hooks us up to feeding tubes and puts an electrode into our brain which constantly stimulates the pleasure center. It may be obvious to us that this isn’t what we meant, but are we sure it will be obvious to the AI?

Finally, the two examples I've given so far presuppose some kind of conflict where the AI triumphs, and perhaps you think I'm exaggerating the potential danger by hand-waving this step. But it's important to remember that a conscious AI could be vastly more intelligent than we are. And even if it weren't, there are many things it could do if it were only as intelligent as a reasonably competent molecular biologist. Many people have talked about the threat of bioterrorism, especially the danger of a man-made disease being released. Fortunately this hasn't happened, in large part because it would be unimaginably evil, but also because its effects wouldn't be limited to the individual's enemies. An AI has no default reason to think bioterrorism is evil, and it also wouldn't be affected by the pathogen.

These three examples just barely scratch the surface of the potential dangers, but they should be sufficient to give one a sense of both the severity and scope of the problem. The obvious question which follows is: how likely is all of this? Or, to separate it into its two components: how likely is our current AI technology to lead to true artificial consciousness? And if that happens, how likely is it that this artificial consciousness will turn out to be dangerous?

As you can see, any individual’s estimation of the danger level is going to depend a lot on whether you think conscious AI is a natural outgrowth of the current technology, whether it will involve completely unrelated technology or whether it’s somewhere in between.

I personally think it’s somewhere in between, though much less of a straight shot from current technology than people think. In fact I am on record as saying that artificial consciousness won’t happen. You may be wondering, particularly a couple thousand words into things, why I’m just bringing that up. What’s the point of all this discussion if I don’t even think it’s going to happen? First I’m all in favor of taking precautions against unlikely events if the risk from those events is great enough. Second, just because I don’t think it’s going to happen doesn’t mean that no one thinks it’s going to happen, and my real interest is looking at how those people deal with the problem.

In conclusion, AI technology is getting better at an ever increasing rate, and it's already hard to know how any given AI makes decisions. Whether current AI technology will shortly lead to AIs that are conscious is less certain, but if the current path does lead in that direction, then at the rate things are going we'll get there pretty soon (as in, within the next few decades).

If you are a person who is worried about this sort of thing (and there are a lot of them, from well-known names like Stephen Hawking, Elon Musk, and Bill Gates to less well-known people like Nick Bostrom, Eliezer Yudkowsky, and Bill Hibbard) then what can you do to make sure we don't end up with a dangerous AI? Well, that will be the subject of the next post…


If you learned something new about AI, consider donating. And if you didn't learn anything new, you should also consider donating, to give me the time to make sure that next time you do learn something.


Straddling Optimism and Pessimism; Religion and Rationality

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


One of the regular readers of this blog, who also happens to be an old friend of mine, is constantly getting after me for being too pessimistic. He's more of an optimist than I am, and this optimism largely derives from his religious faith, which happens to be basically the same as mine (we're both LDS and very active). Despite this similarity, he's optimistic and hopeful, and I'm gloomy and pessimistic. Or at least that's what it looks like to him, and I'm sure there's a certain amount of truth to it. I do have a tendency to immediately gravitate to the worst-case scenario, and an even greater tendency to use my pessimism to fuel my writing, but I don't think I'm as pessimistic as my friend imagines, or as one might assume just from reading my posts. I already explored this idea at some length in a previous post (a post he was quick to compliment), but I think it's time to revisit it from a different angle.

The previous post was more about whether my outward displays of pessimism reflected an inward cynicism that needed to be fixed, i.e. whether I was being called to repentance. (I think the answer I arrived at was, "Maybe.") This post is more about what the blog is designed to do, who the audience is, and how writing in service of those two things is a lot like serving two masters (wait… Is that bad?), and why it therefore may not give an accurate impression of my core beliefs, beliefs which I'll also get into. Yes, I'm writing a post about the blog's mission nearly a year into things. Make of that what you will. Though I think we can all agree that occasionally it's useful for a person to step back and figure out what they're really trying to accomplish.

I think the briefest way to describe the purpose of this blog is that it's designed to encourage antifragility. Hopefully you're already familiar with this concept, and with the ideas of Nassim Nicholas Taleb in general, but if not, I wrote a post all about it. If you don't have the time to read it: in short, one way to think about antifragility is to view it as a methodology for benefitting from big positive rare events and protecting yourself against big negative rare events. In Taleb's philosophy these are called black swans. And here we touch on the first area in which writing about a topic may give an incorrect view of my actual attitudes and opinions. In this instance, writing about black swans automatically makes them appear more likely than they actually are, or than I believe them to be. Black swans are rare, and if I wrote about them only in proportion to their likelihood I would hardly ever mention them. But recall that a black swan, by definition, has gigantic consequences, which means they have an impact far out of proportion to their frequency. Thus, if you were to judge my topic choice and my pessimism just on the rarity of these events, you would have to conclude that I spend too much time writing about them and that I'm excessively negative on top of that. But if I'm writing about black swans in proportion to their impact, I think my frequency and negativity end up being a much better fit.

Of course writing about them at all is only worthwhile if you can offer some ideas on how individuals can protect themselves from negative black swans. And this is another point where my writing diverges somewhat from my actual behavior, and where we get into the topic of religion. As a very religious person I truly believe that the best way to protect yourself from negative black swans is to have faith, keep the commandments, attend church, love your neighbor, and cleave to your wife/husband. But as long-time readers of this blog know, while I don't shy away from those topics, neither are they the focus of my writing. Why is this? Because I think there are a lot of people already speaking on those topics, and they're doing a far better job than I could ever do.

If there are already many people, from LDS General Authorities to C.S. Lewis, who are doing a better job than I could ever do of covering purely religious topics, I have to find some other way of communicating that plays to my strengths without abandoning religion entirely. But just because I'm not going to try to compete with them directly doesn't mean I can't borrow some of their methodology, and one of the things that all of these individuals are great at is serving milk before meat. Or, starting with stuff that's easy to digest, and then, once someone can swallow that, moving on to the tougher, chewier, but ultimately tastier stuff. And in considering this it occurred to me that what's milk to one person may be meat to another. As an example, if you have a son, as I do, who is nearly allergic to vegetables (or so he likes to claim), and you want him to eat more vegetables, you wouldn't start out with brussels sprouts or spinach. You'd start with corn on the cob soaked in butter and liberally seasoned with salt and pepper. On the opposite side of the equation, if someone were to decide, after many years, that they were done being a vegetarian, you wouldn't introduce them to meat by serving them chicken hearts or liver.

In a like fashion, there are, in this world, many people who already believe in God. And for those people, starting with faith, repentance, and baptism is a natural milk, before moving to the meat of chastity, tithing, and the Word of Wisdom. There are, however, other people who think that rationality, rather than faith, is the key to understanding the world. With these people it is my hope that survival is the milk. Because if you can't survive, you can't do anything else, however rational you are in all other respects. And then, once we agree on that, we can move on to the meat of black swans, technological fragility, and what religion has to say about singularities.

Before we leave the topic of "milk before meat," it should be mentioned that it's actually got something of a bad reputation in the rationalist community (to say nothing of the ex-Mormon community). They view it as a Mormon variant of a bait and switch, where we get you into the Church with the promise of three-hour meetings on Sunday, paying 10% of your income to the church, and giving up all extramarital sex, along with booze, drugs, and cigarettes (recall that you have to agree to all of this before you can even be baptized). And then I guess only after that do we hit you with the fact that you might one day have to be the Bishop or the Relief Society President? Actually I'm not clear what the switch is in this scenario. I think all of the hard things about Mormonism are revealed right at the beginning. Also I'm not quite sure why they take issue with the idea of starting with the easier stuff. We literally do give children milk before meat; we teach algebra before calculus; and don't even get me started on sex ed. In other words, this is one of those times when I think the lady doth protest too much.

Moving on… Choosing a different audience and a different approach does not mean that I am personally any less devoted to the faith and hope inherent in my religion. And that hope comes with a fair amount of optimism. Certainly there are people more optimistic than me, but I am optimistic enough that I have no doubt things will work out eventually. The problem is the "eventually": I don't know when that will be, and until that time comes, we still have to deal with competing ideologies, with different ways of arriving at truth, and with the world as it exists, not as we would like it to be. Also, if we're only able to talk to other Christians (and often not even to them) then we're excluding a large and growing segment of the population.

But it doesn’t have to be this way, and much of the motivation for this blog came from seeing areas of surprising overlap between technology and religion, particularly at the more speculative edge of technology. As an example, look at the subject of immortality. In this area the religious have had a plan, and have been following it for centuries. They know what they need to do, and while everyone is not always as successful as they could be in doing what they should, the path forward is pretty clear. They have a very specific plan for their life which happens to include the possibility of living forever. Some may think this plan is silly, and that it won’t work, but the religious do have a plan. And, up until very recently, the religious plan was the only game in town. Which doesn’t mean that everyone bought into it, but, as I mentioned in a previous post, If you were really looking for an existence beyond this one that involved more than just memories, then it was the only option.

Obviously not everyone bought into the plan; people have been rejecting religion for almost as long as it's been in existence. But it's only recently that there has been any hope for an alternative, for immortality outside of divine intervention. Some people hope to achieve this through cryonic suspension, i.e. freezing their body after death in the hopes of revival later. Some people hope to achieve it by digitizing their brain, or by recording all of their experiences so that the recordings can be used to reconstruct their consciousness once they're dead. Other people just hope that we'll figure out how to stop aging.

These different concepts of immortality represent an area of competition between technology and religion, but the fact that both sides are talking about immortality at all is, I would opine, a neglected example of the overlap I mentioned. Previously only the religious talked about immortality, and now transhumanists are talking about it as well. When presented with this fact, most people focus on the competition and use it as another excuse to abandon religion. But there are a few who recognize the overlap, and the surprising consequences that might entail. Certainly the Mormon Transhumanist Association is in this category, and that's one of the things I admire about them.

To take it a little farther, if we imagine that there are some people who just want a chance at immortality, and they don't care how they get it, then previously these people would have had no other option than religion. Whether religion is effective, given such a selfish motivation, is beyond the scope of this post, though I did touch on it in a previous post. But in any event it doesn't matter because, here, we're not concerned with whether it's a good idea; we're concerned with whether such a group of people exists, and, given the promise of technological immortality, how many of them have, so to speak, switched sides.

I’m not sure how many people this group represents. Also I’m sure the motivations of most religious individuals are far more complicated than just a single minded quest for immortality. But you can certainly imagine that the promise of immortality through technology might be enough to take someone who would have been religious in an earlier age and convince them to seek immortality through technology instead. If there are people in this category, it’s unlikely that much is being written specifically with them in mind. All of this is not to say that my blog is targeted at “people who yearn for immortality, but think technology is currently a better bet than religion.” A group that has to be pretty small regardless of the initial assumptions, but this is certainly an example, albeit an extreme one, of the ways in which technology overlaps not only the practice of religion, but also the ideology, morals and even philosophy.

It’s easy to view technology as completely separate from religion, and maybe at one point it was, but as we get closer to developing the technology to genetically alter ourselves and our descendents, eliminate the need for work, or create artificial Gods (and recall we already have the technology to destroy the world) then suddenly technology is very much encroaching on areas which have previously been the sole domain of religion. And taking a moment to examine whether religion might have some insights into these issues before we discard it, is, I believe, a worthwhile endeavor. This is where, by straddling the two, I hope to cover some ground the General Authorities and people like C.S. Lewis have missed.

Interestingly, this is where religion ends up providing both the source of my pessimism and the source of my optimism. I have already mentioned how faith in God is a source of limitless hope, but on the other hand religion also provides a framework for understanding how prideful technology has made us, and how quick we have been to discard the lessons of both history and religion. We are faced with a situation where people are not merely ignoring the morality of religion; they are in many cases charting a course in the opposite direction. In this case, what other response is there than pessimism?

Of course, and I should have mentioned this earlier (both in this post and in the blog as a whole), you have probably guessed that my name is not actually Jeremiah, that it's a pseudonym I adopted for the purposes of this blog. Not only because I took the theme from the book of Jeremiah, but also because I think there are some parallels between the doom he could see coming and many potential dooms we face. I assume that Jeremiah had faith; I assume that he figured it would all eventually work out for him. But that doesn't mean that he wasn't pessimistic about the world around him, enough so that we still use the word jeremiad to mean a long, mournful complaint. And I think he was onto something. I know it's common these days to declare that we just need to be optimistic and love people regardless of what they're doing. But I'm inclined to think a pessimistic approach closer to Jeremiah's might actually produce better results. And this is where we return to antifragility, which is another area of overlap between religion and technology, though probably less clear than the immortality overlap we talked about (which is why I started with it).

The great thing about striving to be antifragile is that it's a fantastic plan regardless of whether you're religious or not. As I mentioned earlier, my hope is that survival may provide a useful entry point, the milk so to speak, even for people who aren't religious. In particular I think self-identified rationalists place too much weight on being right in the short term and not enough weight on surviving in the long term, and the long term is a strength of both antifragility specifically and religion generally. Obviously we don't have the time to get into a complete dissection of how rationalists neglect the long term, and I have definitely seen some articles from that side of things that did an admirable job of tackling the potential of future catastrophe. Perhaps it's more accurate to state that, whatever their consideration for the long term, religion does not factor into it at all.

But religion is important here for at least three reasons. First, as I said in a previous post, even if there is no God, the taboos and commandments of religion are the accumulated knowledge of how to be antifragile. Second, religion is one of the best ways we have of creating resilient social structures going forward. Which is to say, who's better at recovering from disaster? The rationalists in San Francisco or the Mormons in Utah? Finally, if there is a God, being religious gives you access to the ultimate antifragility, eternal life. Obviously this final point is the most controversial of all, and you're free to dismiss it (though you might want to read my Pascal's Wager post before you do). But, with all of this, are you really sure that religion has no value in our modern, technological world? To return to the main theme of this post, I think people underestimate the value that comes from straddling the two worlds.

The problem with all of this is that, in trying to speak on these subjects, the minute you bring in religion and God many people are going to tune out entirely. Thus, despite this being an emphatically LDS blog, I don't spend as much time speaking about religion as you might expect. In part this is because I honestly think you can get to most of the places I want to go without relying on deus ex machina. Believing in God does make everything easier to a certain extent (across all facets of life), but what if you don't believe in God? Does that mean you can throw out religion in its entirety, root and branch? I know people want to dismiss religion as a useless or even harmful relic of the past, but is that really a rational point of view? Is it really rational to take the position that countless hours, untold resources, and millions of lives were wasted on something that brought no benefit to our ancestors? Or worse, caused harm? If this is your position then I think it's obvious that the burden of proof rests with you.

There is a God in Heaven. And so I have all the optimism in the world. But when so-called rationalists mock thousands of years of wisdom, then I'm also a huge pessimist. To use another quote from Shakespeare, remember: "There are more things in heaven and earth… than are dreamt of in your philosophy."


I think it’s obvious that whether you’re an optimist or a pessimist, religious or rational (or ideally both) that we’re basically on the same page. So why not donate?


On The Limitations of Science

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


There are lots of people out there condemning the debauchery of our modern world, generally with more eloquence than I can muster. Additionally there are prophets, both ancient and modern, who have already offered up rousing sermons and trenchant observations (one of which I took as the theme of this blog), and I would urge you to study the writings of those prophets before reading anything I write. So, if there's better stuff out there, why do I bother to blog? Because I believe there is a gap in the commentary, a hole in the discourse that I can fill. It doesn't need to be filled; what I write is not critical to anyone's salvation. I am not uncovering any lost principles of Christ's gospel, nor am I speaking in a more timely manner than what you hear at the semiannual General Conferences. If that's so, what niche do I fill? What unique insights do I provide?

If you read my very first post, then you'll remember that I already touched on this. This blog will specifically focus on comparing the LDS Religion to the Religion of Progress and examining how the Religion of Progress has failed. The sacrament of the Religion of Progress is science, and it is appropriate that it be so. I myself am a believer in science. But like all sacraments, the sacrament of science can be partaken of unworthily. It can be misunderstood and distorted. Just as partaking of the actual sacrament every week doesn't immediately absolve you of all your sins if you're not also actively exercising faith, repenting of those sins, and seeking forgiveness, partaking in the sacrament of science doesn't immediately make what you do and what you believe scientific, no matter how much you proclaim your love for it. Science has serious limitations, even if one is doing everything right, which most of the time they're not. And many of the failures of the Religion of Progress come when it ignores those limitations (or, as in the case of the last post, trades science for emotion). Consequently, this post is all about examining those limitations.

Let’s start by examining the limits of science even if everything is done correctly. To begin with it’s really hard to do it correctly, and 90% of the time what passes for quality science are efforts which leave out a lot of the rigor necessary for truly conclusive results. This was not always the case at the beginning of the scientific revolution there was a lot low hanging fruit. Scientific results of surpassing clarity and rigor that could be obtained with only moderate effort (the gentleman scientist working nearly alone was a fixture of the time.) All that low-hanging fruit is gone, but people still expect science to come up with similarly ironclad results even though the window during which that was possible is long past. Also most of the really solid science involves physics, and the farther you get away from that, the less amenable things are to experimentation in general because there are too many variables.

Thus you’re left in a situation where if you want to do solid, incontrovertible science your best bet is to do more physics, and that’s going to cost billions of dollars, or you can use pieces of the scientific method and take a stab at the questions which remain after all the low-hanging fruit has been picked. I say pieces of the scientific method because, for example, there are all manner of subjects which can’t be subjected to an experiment with a control. This is a limitation in many fields, but one of the best examples is economics, particularly macroeconomics. You can’t create a copy of the world and have one world where the global economy stays on the gold standard and the control, a world where everyone moves to floating currency. You will still have economist who will tell you that one is better than the other, but this is based off bits of data they’ve gathered from a very messy environment. Not any kind of conclusive, replicable experiment.

Related to the problem of creating a control group is the difficulty of isolating the variable you hope to study. Even if we were somehow able to create two versions of Earth, and create a control, how would we know that all the differences between 2016 gold-standard Earth and 2016 floating-currency Earth were due to the different currency systems and not to other random fluctuations? Obviously this is already a fairly ridiculous example, but it illustrates the impossible hurdles necessary to even approach true experimentation on something like the economy.

Now you should not assume from this that I'm anti-science; far from it. I have a deep respect for science, and I think that, if anything, the world needs more science, not less. But as part of that, particularly if we're piling up more science, we need to recognize the limitations of science, especially as it's actually put into practice. Science isn't conducted by perfectly objective robots; it's conducted by scientists who have careers to think of, biases which blind them, and limitations of time and money to contend with. All of which takes us to the next way that science can go wrong.

When I say the next way, there are literally hundreds of ways that scientific efforts can go wrong, but rather than try to cover all of them we're just going to look at something that has been in the news a lot lately: the replication crisis.

What’s interesting about the replication crisis is that it happened even in cases where it truly appeared that people were doing everything correctly. Trained scientists were conducting ground-breaking experiments, designed according to the best thinking in their field, the results were passed through a process of peer-review and then the results were published in a respected journal. Obviously this is not to say that there weren’t papers published where everything was not being done correctly, even some examples of outright fraud, but even if we exclude those there were still a lot of results which got published which later turned out to be impossible to reproduce. The biggest contributor to this appears to have been publication bias, or what is sometimes called the file-drawer effect because people only submit positive, exciting results and the rest get put in the file-drawer with all of the other experiments that didn’t show anything. This is a problem not only with the people doing the experiments but with the publications themselves, which are far more likely to publish positive results (or to be technical, statistically significant results) than a paper which didn’t have any results (or a null result). And as you’ve probably heard, for most scientists it’s publish or perish. Another factor which almost certainly contributed to the crisis..

You may think that a positive result is a positive result regardless of whether there were 100 other, negative results which got put in the file cabinet. The problem is that it's not. If you take 100 coins and flip each of them 7 times, you've got better than even odds that one of them will come up heads 7 times in a row. You might then decide that that coin is unfair, and publish a paper, "On the Unfairness of the 1947 Nickel," but in reality you just started with a big sample size. Doing 100 experiments works very similarly. (For a really in-depth discussion, including p-values and lots of statistics, go here.) The problem, of course, is that people reading or citing your paper don't know about the 99 failed experiments which never saw the light of day; they only know about the one successful experiment that actually got published.
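(The coin numbers are easy to check, and checking them makes the file-drawer effect vivid. A quick sketch:)

```python
# One fair coin: the chance of 7 heads in a row.
p_streak = 0.5 ** 7                     # 1/128, a bit under 1%

# 100 fair coins: the chance that at least one of them manages the streak.
p_at_least_one = 1 - (1 - p_streak) ** 100

print(f"single coin, 7 straight heads: {p_streak:.4f}")       # ~0.0078
print(f"at least one of 100 coins:     {p_at_least_one:.3f}")  # ~0.543
```

Every coin in that drawer is perfectly fair; it's publishing the one streak and filing away the other 99 results that makes the published result look meaningful.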

Thus far I haven’t mentioned how often a study fails to be replicated, and you may think that it’s no big deal. A few here and there, but nothing to worry about. Well as it turns out in general less than half of studies can be reproduced and sometimes less than 15%! This would mean that six out of every seven studies put forth conclusions which later turned out to be untrue.

Once again it’s important to recognize that there is a continuum of scientific results. There’s not a 50% chance that the theory of gravity is wrong, or that protons don’t exist. But when it comes to the softer sciences (and they’re labeled that way for a reason) there is a better than even chance that their conclusions will turn out to be untrue.

Of course when the average person talks about scientific discoveries, ignoring for the moment whether the results can be reproduced, they're generally not talking about what the scientist actually found. To a first approximation no one reads the actual scientific paper, and probably only 1 in 10,000 people even reads the abstract. If you hear about a scientific result, you're hearing about it through the media, which further undermines the utility of science by distorting results in an effort to make them appear more interesting. In short, when people think of science they think of gravity, but what they're actually getting is a Buzzfeed article, written based on a press release, from a conversation with a scientist who shelves most of his work and is desperate for tenure, describing a conclusion that is more than likely irreproducible. That's like five layers of spin on top of a result that's most likely false!

If the kind of “science” I’m talking about were framed as an amusing hobby and an article about bacon prolonging life was treated in the same fashion as a movie review then it wouldn’t be that big of a deal, but for many people science has taken the place of religion. And more than just religion, it has taken the place of deep thinking about the fundamental questions of life in general. People have replaced virtue with a sort of sloppy rationality which cloaks itself in science and is therefore considered progressive, but is really just the idea of doing whatever makes you feel good cloaked in a bunch of pseudo scientific babble. And decisions are being made which can cost people their lives.

As an example of this, I just finished the book Dreamland by Sam Quinones. It’s an in-depth look at the opiate epidemic in America, and a stunning indictment of what passes for science these days. You’ve probably heard about the opiate epidemic; if not, follow the link. The effects of the epidemic are so bad as to be baffling, and a whole host of factors combined to make the problem so terrible, but the misuse of science was one of the bigger factors, possibly the biggest. A complete description of what happened is beyond the scope of this post (I highly recommend the book), but in essence, using a combination of poor science and a morality devoid of any underpinning in religion or tradition, doctors decided that people could essentially have unlimited opiates, the best known of which is OxyContin. This is exactly what I mean by doing whatever makes you feel good cloaked in pseudo-scientific babble.

The first part, the misuse of science, hinged on placing far too much weight on a one-paragraph letter published in the New England Journal of Medicine in 1980 which claimed that opiates ended up causing addiction in only 1% of people. Setting aside the fact that the author never intended it to be used in the way it was, to base decades of pain management on one paragraph is staggeringly irresponsible. Even more irresponsible, when the pharmaceutical companies got around to trying to confirm the result, they found that it didn’t hold up (to no one’s surprise), and they ended up burying and twisting the results they did get. The number of people who died of accidental overdoses directly or indirectly from this misuse of science is easily six figures, possibly seven, particularly since people are still dying. In addition to the misuse of science there was the over-reliance on science. I assume that on some level the pharmaceutical companies knew they were not being scientific, but countless doctors, either naive or blinded by the gifts provided by the pharmaceutical companies, chose to at least pretend that they were doing what they were doing because science backed them up.

I mentioned that one of the other factors was a morality devoid of any underpinning in religion or tradition. I’m not going to say that any religion specifically forbids overprescription of opiates, but most of them have some broad caution about drugs in general. And even if you want to set religion aside there is a strong traditional distaste for opium. And here is where the limits of science are most stark.

Frequently, people use science to declare any belief or practice or tradition or religion which is insufficiently scientific (which of course includes all religions, most traditions, and a majority of practices and beliefs more than a few decades old) to be nothing more than baseless superstition. And while it was not labeled as such, this is precisely what happened with opiates. Every religion I’m aware of recognizes that a certain amount of suffering is part of existence, but in 1980, doctors more or less decided it wasn’t. Sure, they couched it in the language of science, with lots of caveats, but this is precisely the problem. The science turned out to be wrong, the caveats turned out to be insufficient barriers to abuse, and somewhere north of 100,000 people died.

As I have repeatedly said, I’m not anti-science, but science without tradition, without morality, and without religion is prone to huge abuses. This blog will attempt to unite religion and science, but in doing so, religion is always going to hold primacy over science. And it’s not even necessarily because religion is backed by divine infallibility. Forget about that; set it aside. While I certainly believe that’s the case, in these circumstances it doesn’t matter. The problem with science is that it hasn’t been around very long, and it assumes a sterile, rational world which bears no resemblance to the world we actually live in. Setting aside whether God exists, religion and tradition have been tested in the crucible of history, and they have provided insights, particularly in the realm of morality, that people ignore at their peril. Which will be the subject of my next blog post.


LGBT Youth and Suicide

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


This is one of those posts where I’m sure I’m walking into a minefield. Well, you only live once, so let’s do this…

When people want to talk about the harm caused to LGBT youth by the intolerance of the Church, the first place they go is to a discussion of suicide. This makes sense. When someone takes their own life it’s tragic. There’s no way to sugarcoat a suicide. It’s obviously a bad thing.

This discussion has been going on for a while, but it seemed to really explode earlier this year with the publication of a report which claimed that 32 young LGBT Mormons aged 14-20 had committed suicide since the Church changed its policies on same sex marriage (SSM), labeling people in an SSM as apostates and forbidding their children from being baptized.

The connection to be drawn was clear: through its policy the Church had indirectly killed people. This shouldn’t be a surprise. I have all the sympathy in the world for the parents, family members, and friends of those individuals, and if they’re mad at the Church, that’s understandable. I’d be upset as well, and as part of that I’d certainly want something and someone to blame. And connecting these suicides to the policies of the Church and the attitudes of its members seems obvious.

That said, the more emotional the subject, the more difficult it is to really look at things rationally. And yet in a situation as consequential as this one, understanding what is really going on becomes more important than ever. I agree that the explanation offered by the article seems the obvious one, but so many times the obvious explanation is not the correct one. And there have been thousands of times when people thought they were helping when in fact they were doing exactly the opposite. Unfortunately, as much as it pains me to say this, that may in fact be what’s happening here.

I mentioned the article from the beginning of the year, and as you can probably imagine, the issue hasn’t gone away. At the first of this month a piece was published in the Salt Lake Tribune once again talking about LGBT suicide and once again pushing the Church to do more about it. It should be noted that this op-ed was written by one of the leaders of the organization that supplied the data on the 32 suicides featured in the initial article. I don’t think this undermines the claims or anything of that sort, but if you’re trying to get to the truth these sorts of details are important. At this point I’m fine granting that LGBT Mormon youth are committing suicide and that the number of youth committing suicide is in fact increasing. This idea is strengthened by an article linked to from the same page as the op-ed which reported that youth suicides have tripled since 2007.

Looking at the comments on the second article, it appears that most people agree with the position of the op-ed, so the overall theory that the Church is causing suicides has considerable traction. But does it make sense? Is the connection really that clear? Let’s start by looking at the timeline, beginning with the Church’s position on LGBT issues. Here are a few milestones:

1995: LDS leaders issue the Proclamation on the Family which declares that “Marriage between man and woman is essential to [God’s] eternal plan” and that “Gender is an essential characteristic of individual premortal, mortal, and eternal identity and purpose.”

2008: LDS Church campaigns heavily for Proposition 8, which passes, reversing the California Supreme Court’s decision to legalize SSM.

2010: In a tearful meeting in Oakland, Elder Marlin K. Jensen apologizes to those affected by Proposition 8 for the Church’s part in passing it.

2012: The Church creates the website www.mormonsandgays.org in an attempt to reach out to members who experience same sex attraction (SSA).

2015, November: Church labels people in an SSM as apostates and forbids children of those couples from baptism.

I’m sure I’ve left out some milestones, but I think it’s clear that since 2007 the Church’s engagement with the LGBT community has not been a series of escalations, with each step worse than the last. There have been some real attempts to reach out to the LGBT community. And while you may disagree with the effectiveness or even the sincerity of these efforts, I have a hard time seeing how the Church’s treatment of LGBT individuals is getting worse. The outreach of the website, or the Proposition 8 apology, would have been unthinkable during the ’70s, ’80s, and ’90s. And while I was not alive for the decades before that, I am reliably informed that attitudes towards LGBT individuals were even worse then.

Taken together, the evidence strongly suggests that the Church and its leadership are making real attempts to be more loving and understanding. I can point you towards stories of transgender Mormons showing up in dresses to Church and being treated as women, and of gay bishops who publicly talk about their struggle with same sex attraction. Yes, there are certainly lines that the Church has decided should not be crossed, but beyond that they’ve been unusually accommodating. But let’s set that aside for the moment. Perhaps the Mormon Church has become more draconian. Maybe there are elements, perhaps individual members, who are being horribly repressive and intolerant. Even if this is the case (and I don’t think it is), they are not the only factor in play. We also have to look at what things have been like outside of the Church with respect to LGBT acceptance. Some milestones there:

1999-2000: Domestic partnerships and civil unions become legal in California and Vermont respectively.

2003: SSM legal in Massachusetts.

2009: Numerous states make SSM legal (with lots of fights back and forth at the ballot box).

2011: Obama administration declares it will no longer defend DOMA (the Defense of Marriage Act).

2013: SSM made legal in Utah.

2015: SSM made legal everywhere in the US.

And this list doesn’t even include the increased acceptance of LGBT people on TV, in movies, and in the media. For the last decade or more LGBT people have gone from one victory to another. By any conceivable measurement things are as good as they have ever been. If that’s the case, why are so many of them committing suicide? Even if you want to claim that the LDS Church has been unusually repressive, it’s not that hard to leave the Church and reject its teachings. People do it all the time, and by all accounts there’s a large community willing to embrace them and celebrate their decision. Outside of the Church the argument that intolerance and bigotry are causing suicides just doesn’t hold any water. And even if you restrict your examination to what’s happening within the Church, the evidence is weak to nonexistent.

To be clear, any suicide is tragic, and I would never want people to think I am minimizing the suffering of those involved. But given how tragic it is, isn’t it that much more important to make sure that we correctly understand the causes? It’s easy to point the finger at the Church and declare that it’s all being caused by Mormon bigotry. But being blinded by animosity towards the Church could easily lead someone to overlook other issues. Once again, youth suicides have tripled! The consequences of incorrectly diagnosing the problem are huge. And blaming it all on the Church looks like it might just be an example of an incorrect diagnosis, or at a minimum not the whole story.

If the LGBT community is objectively being treated with more tolerance than ever, why are suicides increasing? As I have said, the conventional wisdom is that we just need to be even more tolerant. But it’s worth examining the causes of suicide, because they don’t always map to one’s expectations. Interestingly enough, one of the latest episodes of the Freakonomics podcast was a rebroadcast of an episode they did on suicide in 2011. It brings up a lot of points that are worth considering.

Before I jump into the Freakonomics podcast I want to make it clear that I’m not saying I know why the suicide rate has increased or why LGBT youth are committing suicide. It would be ridiculous of me to take a podcast and a couple of articles from the internet and use them to pass judgment about what should be done. Instead, rather than saying why it is happening, I’m offering up the opinion that it might not be happening because of the Church and its members. I intend to offer some alternative theories, mostly to show that there are other potential explanations, not to advance any of them as THE explanation.

The first thing we notice when we listen to the podcast is the title, “The Suicide Paradox.” It’s called that because a lot of things about suicide don’t make sense, and can be downright paradoxical. For example, it turns out that blacks commit suicide at only half the rate of whites. If your theory is that oppression and intolerance cause suicide, you would expect their rate to be higher than the white rate. Another example (not from the podcast) is Syria, which one year into its civil war was tied for the lowest national suicide rate. (There may be all kinds of problems with that number, but it’s borne out by other surveys conducted before the war.) One of the best statements about the difficulty of understanding suicide comes from David Lester, who was interviewed as part of the podcast. Lester has written over 2,500 academic papers, more than half of which concern suicide. And his conclusion is:

First of all, I’m expected to know the answers to questions such as why people kill themselves. And myself and my friends, we often, when we’re relaxing, admit that we really don’t have a good idea why people kill themselves.

Despite this statement there are some general things that can be said about suicide. For instance, suicide is contagious. If someone hears about a suicide or sees one, say on TV, particularly if the person committing suicide bears some resemblance to the person hearing about it, it can trigger a copycat suicide. This is called the Werther Effect, after a novel by Goethe in which he described a suicide in a sympathetic fashion. Thus it’s possible that in the process of publicizing the suicides of LGBT Mormon youth, the people trying to prevent them are actually contributing to the problem. If so, that would be terrible, and as I said, I take no stand on what is actually happening. I’m only urging that a problem this serious deserves all the knowledge and resources at our disposal.

It’s also worth mentioning that Utah is squarely inside the suicide belt, the area of the country with the highest suicide rates. Explanations for the high suicide rates in the Mountain West have ranged from residential instability, to access to guns, to the thin air. This is a great site for comparing suicide rates among states, and it’s worth noting that the site doesn’t show a 3x increase in the number of suicides in Utah since 2007. If you follow the link and select states to compare, Utah looks very similar to Colorado and New Mexico, states which are not known for having huge populations of Mormons. Of course the original article talked about youth, and it’s not my intention to dig into the numbers (at least not now), though they could very well be suspect. The point I want to bring up is that Utah already has an above-average suicide rate, and it appears to have nothing to do with the Church.

Finally, you would expect suicide to be rarer among wealthy people, and to an extent that’s true, but less so than you would think. There is no strong correlation between wealth and suicide. Having more money doesn’t do much to lower your risk of suicide and may in certain cases increase it. Additionally, some of the very highest rates of suicide are among older white males, hardly the group you think of when you think of an unhappy minority. And indeed rich and famous people commit suicide all the time. The effect is even more pronounced if you look at the difference in suicide rates between rich and poor countries. Not only is this another mark against the theory that bigotry and intolerance cause suicide, but it leads us to another alternate theory of suicide.

According to this theory, people who are impoverished, discriminated against, or otherwise dealing with difficult circumstances can always point to those circumstances as the reason why they’re unhappy. When the circumstances go away, if the person is still unhappy, then it must mean that they’re broken in some fundamental way, and their unhappiness is therefore a permanent condition. If everything you think is making you unhappy goes away and you’re still unhappy, what’s left?

This could be what we’re seeing with the LGBT community. In the “bad old days” the reasons for their misery were obvious: the world didn’t accept them and never would. Now they’re accepted everywhere. They can join the military, they can get married, companies come to their aid. What’s left? And yet the suicide rate remains tragically high.

Chelsea Manning, the transgender whistleblower formerly known as Bradley Manning, recently attempted suicide. And it is among transgender individuals that the evidence for this effect is strongest. If, on the one hand, we just need more tolerance to solve the problem, then those individuals who have successfully undergone gender reassignment surgery and can pass as the opposite sex should have the lowest suicide rate. Instead, individuals who’ve undergone the surgery experience a suicide mortality rate 20 times greater than a comparable non-transgender population. Even transgender individuals have taken these numbers and used them to argue vigorously against surgery.

Sticking with transgender individuals, there are still well-respected doctors who argue that they suffer from a version of body dysmorphic disorder; in other words, that being transgender is similar to having anorexia or bulimia, and thus that we should be treating them as people with a mental illness, not as people who have a different but completely valid lifestyle. Obviously this is a very unpopular theory, but that should not be a factor in determining what’s really going on.

I know that the current orthodoxy is that we just need to allow people to do whatever they want and happiness will follow, but at some point don’t we need to look at the data? Is it in fact possible that telling people to pursue personal gratification at the expense of everything else is contributing to the problem?

I know people are convinced that the intolerance of the Church and its members is indirectly killing people. And I can understand the reasons why they think this, but it just doesn’t add up. At some point you have to admit the possibility that some people are more interested in finding a club to beat the Church with than they are in getting to the truth, and by extension really helping these kids.

I’ll tell you what I thought when I heard the announcement that the Church would not baptize the children of same sex couples and was declaring anyone in a same sex marriage an apostate. I was relieved and excited, and I’ll tell you why. The Church had backed down on a lot of things. As I mentioned above, it had apologized, it had put up websites, and these were probably even good things, but we can be so accommodating that we lose sight of the doctrine. And as I have attempted to point out here, we can be so accommodating that we are no longer able to think deeply about a topic. Our dialogue becomes nothing but accusations and apologies. Obviously I’m just a bit player in all of this. The leaders of the Church know what they’re doing, and along those lines I think Dallin H. Oaks said it best when he was speaking about this very issue of LGBT suicide:

I think part of what my responsibility extends to, is trying to teach people to be loving, and civil and sensitive to one another…beyond that, the rightness, the wrongness, I will be accountable to higher authority for that…

In all of this that’s what we have to remember. We are accountable to a higher authority. As much as we might want to bring our own strong sense of right and wrong and justice to things, there is a greater hand than ours guiding the affairs of the Church. And it’s our responsibility to be obedient and accountable to that authority, even if it’s difficult.


Atheists and Unavoidability of the Divine

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I had hoped to spend most of the first several blog posts building a foundation for things, laying the groundwork of my philosophy. But I’ve been thinking about an issue recently, and I thought that while the issue was fresh and my indignation was fired I should say something. Which is not to say that this topic is something that has only recently occurred to me; I’ve actually been thinking about it on and off since the late ’80s, but it was triggered most recently by an answer to a question on Quora. The question was: What is the creepiest thing that society accepts as a cultural norm? I found one answer to be particularly objectionable, but frankly a lot of the answers were misguided and ignorant; as an example, the answer with the second most votes was that it was creepy to teach children to be patriotic. Getting into exactly why this answer is so ignorant is not the point of this post, but patriotism is one of those things that has been an unquestioned positive value in cultures for thousands of years, and only in the last few decades have people decided that it is a negative. The world is saturated with Chronological Snobbery, and I probably shouldn’t get worked up over one more example of it.

That, however, was not the answer that triggered this post, though the two answers are related: both are in the category of allegedly creepy things being taught to children. In the case that set me off it was the teaching of religious opinions. Specifically, the author of the answer offered up “Religious Opinions Being Forced on Children” as the creepiest thing society accepts as a cultural norm. First, I find it interesting that he uses the term “force”. How do you force someone to hold an opinion? Is there some brain modification going on here that I’m unaware of? I’m sure he would argue that by being in a dominant position parents can effectively force their opinions on their children. Regardless, I imagine that what he really finds objectionable is not force, but religion. I see no indication from this answer or any of his other answers that he’s a radical libertarian. He’s not opposed to compulsion in all of its forms; he’s opposed to religion, and finds it creepy that I should be allowed to bring up my children in mine.

If that were not enough, he begins his piece by saying that he doesn’t want any comments on his answer. I understand his position is controversial (as it should be) and that he’s going to get a lot of negative feedback, but that’s just cowardly. He’s not saying that he thinks religious instruction is something which deserves more scrutiny, he’s saying that it’s creepy. That’s a pretty high standard. An extraordinary claim which deserves some extraordinary proof.

At this point you may be wondering what about this answer got my juices flowing. Sure, it’s pathetic and intellectually vacant, but people post intellectually vacant stuff on the internet all the time. What I found interesting is that he still acknowledges that children need to be taught morals; he just claims that “Morals can be taught separately from religion.” And this is where he gets into my pet peeve. I know atheists think that religion is a horrible, destructive force, responsible for all manner of misery and evil. But that’s only because they haven’t really thought things through. This intellectual disconnect is not just the subject of the remainder of this post; it’s in part the theme of this entire blog.

Let’s examine the options for arriving at a system of morals. We’ll start with the two obvious options:

Option 1- Morals are eternal, divine and originate from a supreme being, or at a minimum some non-materialistic force.

Option 2- Morals can be inferred logically. Pure reason and/or science provides a moral framework.

Obviously atheists don’t believe in option 1, but option 2 seems reasonable enough, right? The problem is that the system of morality described in option 2 doesn’t exist. The closest anyone has come is utilitarianism, and frankly raw utilitarianism has a host of issues. Many of the issues are esoteric, but there is one that is insurmountable: no one has adopted it on a large scale. Thus, if we decided to teach utilitarianism in order to separate morals from religion, we would be instructing children in a system of morality which bears little resemblance to the cultural morality of the society the child lives in. Okay, one might retort, we’ll just teach that: we don’t have to add in religion, we can just instruct children in society’s morality. Now recall that he wasn’t objecting to religious instruction in schools; he was objecting to all religious instruction everywhere, so you would have to teach this morality without recourse to any form of religion. How does this not end up as nothing but sterile instruction in the laws of the country? And I think teaching law devoid of ethics is one of the more dangerous things you can do, leading inevitably to an anything’s-fine-as-long-as-you-can-get-away-with-it mentality. Looking at it another way, where do you think cultural morality comes from? Imagine trying to teach morality as if Judaism, Christianity, Islam, or even Buddhism had never existed.

This takes us to the third option for arriving at a system of morals. Now, I believe morals come from God, but let’s assume for the moment that they don’t. And further assume that option 2 is off the table: that Bertrand Russell didn’t sit down and create a foolproof logical system of morality that all people of good sense follow. Then option 3 is, as I alluded to above, taking our morals from the system of morality which developed organically, in an evolutionary process of trial and error over thousands of years of civilization.

For our example atheist, this may initially seem like great news. Evolutionary process? Trial and error? Where do I sign up? There’s only one problem: this process is religion. Even if you’re going to deny the existence of God, religion is still the distilled essence of this evolutionary process by which civilization arrived at its morals. Religion is what centuries of trial and error have produced. And tossing away religion would be equivalent to tossing out Newton’s laws of motion and deciding that you’re going to start over with physics. Obviously that’s not what this guy thinks he’s suggesting. Newton’s laws are science, he might sputter, while religion is nothing but superstition. Well, it wasn’t science that proclaimed slavery wrong, it was religion; and it wasn’t science that spelled the end of eugenics, it was Judeo-Christian morality. I could go on, but science has been on the wrong side of a lot of issues.

Given that, and the lack of some universally recognized logical system of morality, you have two choices. You can rely on God for morality or you can rely on culture for morality, but in both cases you’re relying on religion; you’re just arguing about its source. Atheists want to toss religion out the window along with God. I don’t want to toss out either, and if atheists thought about it they wouldn’t want to throw out religion either. But when it comes down to it, it’s strangely easier for atheists to get rid of religion than it is for them to get rid of the concept of God. Which takes me to my final point. Frequently, when you read what atheists have written, you find that they can’t help but introduce God into their works. I’m sure they don’t think of it that way, but I notice over and over again that they bring God into the picture while disagreeing about what God is. It’s as if they accept the Ontological Argument and their only disagreement is over what the supreme being is.

I first encountered this when reading the book Contact by Carl Sagan. Sagan avoided the label atheist, but he was certainly agnostic, many atheists point to him as a major inspiration, and he certainly didn’t believe in an afterlife. The book has a section where the atheist hero humiliates a believer in an argument. This isn’t the only time he finds occasion to deride believers; in particular I remember his off-hand comment that the Mormons viewed the alien signal, the “contact”, as another message from the Angel Moroni. Now don’t get me wrong, I actually liked Carl Sagan. I watched all the episodes of Cosmos and read the accompanying book. I read Broca’s Brain, and of course I read Contact. So what did Sagan include in Contact that set me off? At the end of the book we discover that some aliens, more advanced even than the aliens we end up communicating with, have encoded a message in the value of pi. If you can encode a message in pi, you’re a god! So why does Sagan include this bit? Is it because he can’t help himself? Is it because he believes in a God (perhaps he’s a deist) but thinks he’s the only one to understand God’s true nature? If it were just Sagan I might think nothing of it.

But in fact Sagan is not the only atheist who has made a comment like that. Richard Dawkins, widely regarded as the poster child for aggressive atheism, said the following:

Whether we ever get to know them or not, there are very probably alien civilizations that are superhuman, to the point of being god-like in ways that exceed anything a theologian could possibly imagine.  

This is a very interesting quote, and it touches on something we’re going to spend a lot of time on in this space. But if he’s just admitted that there are god-like aliens out there, why is he an atheist? Continuing the quote:

In what sense, then, would the most advanced SETI aliens not be gods? In what sense would they be superhuman but not supernatural? In a very important sense, which goes to the heart of this book. The crucial difference between gods and god-like extraterrestrials lies not in their properties but in their provenance. Entities that are complex enough to be intelligent are products of an evolutionary process. No matter how god-like they may seem when we encounter them, they didn’t start that way.

Hmm… does that sound like any religion you might have heard of? This is, in all essential respects, what Mormons believe. Does this mean that Dawkins is on the verge of converting? I very much doubt it. In other words, both of these atheists can imagine the existence of God. They just can’t imagine that he behaves like the God that all those creepy religious people believe in.

My final example is from Harry Potter and the Methods of Rationality (HPMOR). It’s written by Eliezer Yudkowsky, who also self-identifies as an atheist, and is a major force on Lesswrong.com, the well-known website of pure rationality. The book is Harry Potter fan fiction, meaning that Yudkowsky takes the world of Harry Potter and makes a few changes before retelling the story in his own fashion. In this case Harry is a relentless, I would even say Machiavellian, rationalist, on top of being a poster child for humanism. He thinks of death as the ultimate evil. (See my last post on this topic.) Lest it be unclear, I actually thoroughly enjoyed HPMOR, and not just because of the really clever way he deals with the time travel from the original.

The interesting bit in the story comes when Harry has to summon a Patronus. As in the original, Patronuses become a major plot point in HPMOR. Initially Harry can’t summon one, and it’s only after he recognizes that the Dementors represent death (the ultimate evil in his view) that he is able to draw on the pure force of humanism and summon forth a Patronus in the form of a being of pure white light, the avatar of humanism.

In a sense this is how we know that Harry’s beliefs are correct: they’re confirmed in a supernatural manner when he summons what is later called the True Patronus. Yudkowsky might argue with me calling it supernatural, but it’s hard to see how you could call it anything else. Harry’s belief is greater than anyone else’s; consequently he is the only person ever able to summon the True Patronus. Despite this, it seems clear that the True Patronus has been there all along, an unchangeable source of truth external to humanity as a whole.

Once again we arrive at a situation similar to Dawkins’, where there are some bizarre parallels with Mormon theology. In this case, Harry receives the confirmation of his beliefs in the forest, from a being of pure white light, after overcoming a dark force which threatened to overwhelm him. Yes, you probably guessed correctly: there is a very strong resemblance between Harry’s experience and Joseph Smith’s First Vision. Yet again we’ve uncovered a budding Mormon among the ranks of the unbelievers.

After all of this, where do we end up? I think the moral of the story is that pure atheism is more difficult than people expect. So difficult that God comes back into things the minute they start to really think deeply. As the examples show, once you dig into things enough, running into the divine seems hard to avoid. It’s easy for atheists to paint believers as ignorant and superstitious, but it appears that despite all the progress that has been made, there’s more to the idea of God and the practice of religion than they want to admit.


We Are Not Saved

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


The harvest is past, the summer is ended, and we are not saved.

Jeremiah 8:20

When I was a boy, I couldn’t imagine anything beyond the year 2000. I’m not sure how much of that had to do with the supposed importance of the beginning of a new millennium, how much of it was just due to the difficulty of extrapolation in general, and how much of it was due to my religious upbringing. (Let’s get that out of the way right up front. Yes, I am LDS/Mormon.)

It’s 2016, and we’re obviously well past the year 2000, 16 years into the future I couldn’t imagine. For me, at least, it definitely is The Future, and any talk about living in the future is almost always followed by an observation that we were promised flying cars and spaceships and colonies on the moon. This observation is then followed by the obligatory lament that none of these promises have materialized. Of course moon colonies and flying cars are all promises made when I was a boy. Now we have a new set of promises: artificial intelligence, fusion reactors, and an end to aging, to name just a few. One might ask why the new promises are any more likely to be realized than the old promises. And here we see the first hint of the theme of this blog. But before we dive into that, I need to lay a little more groundwork.

I have already mentioned my religious beliefs, and these will be a major part of this blog (though in a different way than you might expect). In addition to that I will also be drawing heavily from the writings of Nassim Nicholas Taleb. Taleb’s best known book is The Black Swan. For Taleb, a black swan is something which is hard to predict and has a massive impact. Black swans come in two forms: positive and negative. A positive black swan might be investing in a startup that later ends up being worth a billion dollars. A negative black swan, on the other hand, might be something like a war. Of course there are thousands of potential black swans of both types, and as Taleb says, “A Black Swan for the turkey is not a Black Swan for the butcher.”

The things I mentioned above, AI, fusion and immortality, are all expected to be positive black swans, though, of course, it’s impossible to be certain. Some very distinguished people have warned that artificial intelligence could mean the end of humanity. But for the moment we’re going to assume that they all represent positive black swans.

In addition to being positive black swans, these advancements could also be viewed as technological singularities. Here I use the term a bit more broadly than is common. Generally when people talk about the singularity they are using the term with respect to artificial intelligence. But as originally used (back in 1958), the singularity referred to technology progressing to a point where human affairs would be unrecognizable. In other words, these developments will have such a big impact that we can’t imagine what life is like afterwards. AI, fusion, and immortality all fall into this category, but they are by no means the only technologies that could create a singularity. I would argue that the internet is an excellent example of a singularity. Certainly people saw it coming, and some of them even correctly predicted some aspects of it (just as, if we ever achieve AI, there will no doubt be some predictions which prove true). But no one predicted anything like Facebook or other social media sites, and those sites have ended up overshadowing the rest of the internet. My favorite observation about the internet illustrates the point:

If someone from the 1950s suddenly appeared today, what would be the most difficult thing to explain to them about life today?

I possess a device, in my pocket, that is capable of accessing the entirety of information known to man.

I use it to look at pictures of cats and get in arguments with strangers.

Everything I have said so far deserves, and will eventually get, a deeper examination; what I’m aiming for now is just the basic idea that one possibility for the future is a technological singularity: something which would change the world in ways we can’t imagine, and, if proponents are to be believed, change it for the better.

If, on the one hand, we have the possibility of positive black swans, technological singularities, and utopias, is there also the possibility of negative black swans, technological disasters, and dystopias on the other? Of course there is. We could be struck by a comet, annihilate each other in a nuclear war, or end up decimated by disease.

Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)

It’s The Future and two paths lie ahead of us, the singularity or the apocalypse, and this blog will argue for apocalypse. Many people have already stopped reading or are prepared to dismiss everything I’ve said because I have already mentioned that I’m Mormon. Obviously this informs my philosophy and worldview, but I will not use, “Because it says so in the Book of Mormon” as a step in any of my arguments, which is not to say that you will agree with my conclusions. In fact I expect this blog to be fairly controversial. The original Jeremiah had a pretty rough time, but it wasn’t his job to be popular, it was his job to warn of the impending Babylonian captivity.

I am not a prophet like Jeremiah, and I am not warning against any specific calamity. While I consider myself to be a disciple of Jesus Christ, as I have already mentioned, this blog will be at least as much informed by my being a disciple of Taleb. And as such I am not willing to make any specific predictions except to say that negative black swans are on the horizon. That much I know. And if I’m wrong? One of the themes of this blog will be that if you choose to prepare for the calamities and they do not happen, then you haven’t lost much, but if you are not prepared and calamities occur, then you might very well lose everything. As Taleb says in one of my favorite quotes:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy.

I have already mentioned Taleb as a major influence. To that I will add John Michael Greer, the archdruid. He joins me (or rather I join him) in predicting the apocalypse, but he does not expect things to suddenly transition from where we are to a Mad Max style wasteland (which, interestingly enough, is the title of the next movie). Rather, he puts forward the idea of a catabolic collapse. The term catabolism broadly refers to a metabolic condition where the body starts consuming itself to stay alive. Applied to a civilization, the idea is that as a civilization matures it gets to the point where it spends more than it “makes”, and eventually the only way to support that spending is to start selling off or cannibalizing assets. In other words, along with Greer, I do not think that civilization will be wiped out in one fell swoop by an unconstrained exchange of nukes (and if it is, then nothing will matter). I think it will be a slow decline, broken up by a series of mini-collapses.
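To give a feel for the shape of a catabolic collapse, here’s a toy simulation. To be clear, this is my own illustrative sketch, not Greer’s actual model; every parameter in it is invented:

```python
import random

# A toy model of catabolic collapse: maintenance costs always slightly
# exceed what the society produces, so it covers the gap by consuming
# its own capital, and occasional crises knock out larger chunks at once.

def simulate(capital=100.0, years=80, seed=42):
    """Return the year-by-year capital stock of a toy civilization."""
    random.seed(seed)
    history = []
    for _ in range(years):
        production = 0.20 * capital            # output scales with capital
        maintenance = 0.21 * capital + 1.0     # upkeep slightly outpaces output
        capital -= max(0.0, maintenance - production)  # cannibalize assets
        if random.random() < 0.10:             # an occasional crisis
            capital *= 0.85                    # a "mini collapse"
        capital = max(capital, 0.0)
        history.append(round(capital, 1))
    return history

print(simulate())
```

The trajectory this produces is not a single cliff but a long downward staircase: slow erosion punctuated by sudden drops, which is the shape of decline Greer has in mind.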

All of this will be discussed in due time; suffice it to say that despite the religious overtones, when I talk about the apocalypse you should not be visualizing The Walking Dead, The Road, or even Left Behind. But the things I discuss may nevertheless seem pretty apocalyptic. Earlier this week I stayed up late watching the Brexit vote come in. In the aftermath, people are using words like terrifying, bombshell, and flipping out, and furthermore talking about a global recession, all in response to the vote to Leave. If people are that scared about Britain leaving the EU, I think we’re in for a lot of apocalypses.

You may be wondering how this is different from any other doom and gloom blog, and here, at last, we return to the scripture I started with, which gives us the title and theme of the blog. Alongside all of the other religions of the world, including my own, there is a religion of progress, and indeed progress over the last several centuries has been remarkable.

These many years of progress represent the summer of civilization. And out of that summer we have assembled a truly staggering harvest. We have conquered diseases, split the atom, invented the integrated circuit, and been to the moon. But if you look closely you will realize that our harvest is basically at an end. And despite the fantastic wealth we have accumulated, we are not saved. In contemplating this harvest, though, it is easier than ever before to see why we need to be saved. We understand the vastness of the universe, the potential of technology, and the promise of the eternities. The fact that we are not wise enough to grasp any of it makes our pain all the more acute.

And this is the difference between this blog and other doom and gloom blogs. Another blog may talk about the inevitable collapse of the United States because of the national debt, or runaway global warming, or cultural tension. Someone with faith in continued scientific progress may ignore all of that, assuming that once we’re able to upload our brains into a computer none of it will matter. Thus anyone who talks about potential scenarios of doom without also talking about potential advances and singularities is only addressing half of the issue. In other words, you cannot talk about civilizational collapse without talking about why technology and progress cannot prevent it. They are opposite sides of the same coin.

That’s the core focus, but this blog will range over all manner of subjects including but not limited to:

  • Fermi’s Paradox
  • Roman History
  • Antifragility
  • Environmental Collapse
  • Philosophy
  • Current Politics
  • Book Reviews
  • War and conflict
  • Science Fiction
  • Religion
  • Artificial Intelligence
  • Mormon apologetics

As in the time of Jeremiah, disaster, cataclysms and destruction lurk on the horizon, and it becometh every man who hath been warned to warn his neighbor.

The harvest is past, the summer is ended, and we are not saved.