
The 9 Books I Finished in March 2022

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

  1. When Genius Failed: The Rise and Fall of Long-Term Capital Management by: Roger Lowenstein
  2. How to Live on 24 Hours a Day by: Arnold Bennett
  3. Burning Chrome by: William Gibson
  4. Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy by: Richard Hanania
  5. Virtue Hoarders: The Case against the Professional Managerial Class by: Catherine Liu
  6. Mythos: The Greek Myths Retold by: Stephen Fry
  7. Heroes: Mortals and Monsters, Quests and Adventures by: Stephen Fry
  8. If You Absolutely Must…: a brief guide to writing and selling short-form argumentative nonfiction from a somewhat reluctant professional writer by: Fredrik deBoer
  9. Expeditionary Force Book 8: Armageddon by: Craig Alanson

Somehow, without really planning to, I’ve ended up traveling a lot. As I write this I’m actually in a car headed to Albuquerque (my wife is currently driving). A week ago I was in Lake Geneva, Wisconsin at Gary Con. And later this month I’m going to Vegas. I’m feeling stressed out and frivolous at the same time. 

I mention the traveling to both prepare the ground for the possibility that I might once again not produce as much writing as I want to this month, and because it leads into a story, a story about masks. As I mentioned I just got back from Gary Con and as the convention approached they made it clear that they wanted a total mask mandate. They were so serious about this that they canceled the option for table-side service, which, as I understand, is a major source of revenue for them, because they didn’t want to give people the excuse that they didn’t have their mask on because they were eating. They didn’t say that you couldn’t eat or drink at the table, but they wanted you to quickly pull down your mask, take a bite or a drink and quickly put it back on.

I was not looking forward to the mandate because I think it makes it super hard to communicate in a noisy gaming hall, and, though it might be psychosomatic, prolonged mask wearing always gives me a headache. Plus with three shots and a verified positive for Omicron I think I’m about as safe as one can be in this day and age. So imagine my delight when I show up and not only are about half the people in the registration line unmasked, but the guy next to me in line says that the mask mandate was removed at the 11th hour, because the hotel itself doesn’t have a mandate, and indeed 90% of the people in the hotel aren’t wearing masks, so the point seemed kind of moot. And indeed when I get up to the window and get my badge no one mentions that I need to put a mask on. The first room I’m in eventually ends up about 50/50 masked vs. unmasked, and it does seem like it’s being left to personal preference.

But then there’s a pushback. Certain areas seem to get very draconian with the masks, and arguments erupt on the Facebook page. One of the guys in charge of the con posts something very extreme about the requirement for masks and it gets deleted, even as another guy posts something else reminding people of the mask requirement, but in slightly less extreme language. But it was clear that the number of people who were just sick of masks had reached a critical mass, and it didn’t matter how much people begged and cajoled, a universal mask mandate just was no longer in the cards. It really felt like being on the front lines of a collapsing front, with people trying to make an orderly retreat, but on the verge of a rout.

As one final point, I’m always amazed that the people loudly proclaiming the need for a mask mandate because they personally can’t attend an event otherwise because of their health, never seem to be wearing an N95. My understanding is that you personally wearing an N95, while everyone else is unmasked, is better than everyone wearing a cloth or a surgical mask. So if you’re that worried, why wouldn’t you take the one step that’s completely under your control?

Anyway I’ve gone on too long about this as it is. On to the reviews!

I- Eschatological Reviews

When Genius Failed: The Rise and Fall of Long-Term Capital Management

by: Roger Lowenstein

Published: 2001

304 Pages

Briefly, what is this book about?

The story of Long Term Capital Management (LTCM), a very exclusive hedge fund full of arrogant people that blew up in spectacular fashion.

Who should read this book?

If you enjoyed The Big Short and have a general fondness for stories of financial blow-ups brought on by hubris, this is a book about exactly that.

General Thoughts

I remember hearing about the spectacular blow-up of LTCM when it happened in 1998, and my recollection is that it was pretty big news, at least for a week or so. I’m sure that the appeal of the story was helped along by its obvious moral: the arrogant brought low in spectacular fashion by their hubris. Like so many before them, the principals of LTCM thought that they had outsmarted the market; they were wrong.

The next time I remember encountering the story was while reading Fooled by Randomness by Taleb, where he described LTCM as a hedge fund set up by a couple of Nobel Prize winning economists. He scornfully described their delusional belief that they could precisely measure and therefore manage risk. He went on to say that the hedge fund had blown up after four years in what these economists had called a “ten sigma event”, which is to say an event ten standard deviations from the norm—an event so improbable that you’re unlikely to see even one such event in the entire history of the universe.

This “ten sigma” claim fascinated me; I was staggered that a Nobel Prize winner could be so wrong. (And yes, I know the Nobel in economics is not an actual Nobel Prize.) Ever since then I’ve wanted to hear the whole story of how someone so smart could be wrong on a scale that beggars the imagination. Finally, after many years, I got around to looking into it. To start with I should probably include the section of the book Taleb referenced in making his claim:

According to these same models, the odds against the firm’s suffering a sustained run of bad luck—say, losing 40 percent of its capital in a single month—were unthinkably high. (So far, in their worst month, they had lost a mere 2.9 percent.) Indeed, the figures implied that it would take a so-called ten-sigma event—that is, a statistical freak occurring one in every ten to the twenty-fourth power times—for the firm to lose all of its capital within one year.

There it is. Of course with all such claims the truth is a little bit more complicated, though it’s also depressingly similar to other stories of financial collapse. (A point which I’ll take up in the next section.)

The fund collapsed through a combination of the 1997 Asian Financial Crisis and the 1998 Russian Currency Crisis, but I didn’t see any evidence that the LTCM principals described this combination as a “ten sigma event” after those things happened. It’s merely that, before the events happened, their models said that such events were spectacularly rare. If I’m going to be charitable, I don’t think the LTCM guys assumed their model was a perfect representation of reality. But I do get the impression that they thought it was directionally accurate, that it could be used as a baseline. I imagine them reasoning something like this: “At the tails of the model things are probably not completely accurate, so it might only take a seven sigma event rather than a ten sigma event, but that should still only happen once every 2 million years, which is still basically impossible.” I understand that’s still not being particularly charitable, but after reading the book it’s the best I can do. It’s clear that whatever place the models had in their decision making process, their confidence in those models was delusional to the point of insanity.

Before we entirely leave the charitable portion of this review, I need to defend the Nobel Prize winning economists: they didn’t set up the fund, nor did they have a lot of control over how it was run. They were mostly brought on to bolster its reputation. So accusing them of being arrogant and dumb is to overlook the real cocky idiots at the center of the story.

If you’re looking for the person who possessed the plurality of the fund’s hubris that would be Lawrence Hilibrand. I don’t have time to go into all the instances of Hilibrand’s arrogance. It is far easier to list the things he did that weren’t arrogant, because as near as I can tell (and one presumes that Lowenstein might have an axe to grind) there really aren’t any.

He was punished for his arrogance. All of the principals had just about the entirety of their wealth in LTCM, so when it went bust, they went bust as well. Well, not really, not bust in the way you or I would understand it, but they did go from being half-billionaires to merely multi-millionaires who lived in palatial comfort and went on to found yet another hedge fund, JWM Partners.

Unsurprisingly, their arrogance was unabated. The second fund used basically the same models and managed to last all of 10 years before it was killed by the 2007-2008 financial crisis. (Yet another ten sigma event, what are the odds!) You would think this would be the end of things, but they’re actually on their third hedge fund. Though to be fair, rather than the billions invested in LTCM, they were only able to raise tens of millions on this third go-around.

Eschatological Implications

If LTCM were an isolated story, then we wouldn’t need this section, but the hubris and collapse of LTCM appear more to be the rule of modern finance than the exception. Despite the lesson of LTCM, the 2007-2008 financial crisis was basically exactly the same story, only this time played out over the entire world rather than over a single hedge fund.

For LTCM it was the Black-Scholes model and the underlying riskless asset was government bonds. In the leadup to 2007 it was the Gaussian copula function and the underlying “riskless” asset was mortgages. We even have the same language being used to declare how improbable it all was. In the middle of the crisis David Viniar, the CFO of Goldman Sachs, declared, “We were seeing things that were 25 standard deviation moves, several days in a row.” I’m running out of ways to describe how idiotic this is. A 25 standard deviation move should happen once every 10^135 years, and he’s claiming he saw this sort of thing several days in a row!?!? Furthermore, consider that this is after LTCM, when someone like the CFO of Goldman should know that they can’t use a normal distribution when considering risk. Accordingly, what they thought was so risk free that it should never happen in the lifetime of trillions of universes happened several days in a row. “Riskless” was anything but.
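You can check these figures yourself. Here’s a rough sketch, assuming normally distributed moves observed once per trading day and roughly 252 trading days a year (those conventions are my assumption for the arithmetic, not anything from the book):

```python
import math

def sigma_event_odds(n_sigma, periods_per_year=252):
    """Tail probability of an n-sigma move under a normal distribution,
    and the expected wait (in years) between such moves, assuming one
    observation per trading day."""
    # One-sided tail: P(Z > n) = erfc(n / sqrt(2)) / 2
    p = 0.5 * math.erfc(n_sigma / math.sqrt(2.0))
    years_between = 1.0 / (p * periods_per_year)
    return p, years_between

for n in (10, 25):
    p, years = sigma_event_odds(n)
    print(f"{n}-sigma: P = {p:.2e}, one event every ~1e{math.log10(years):.0f} years")
```

The 10-sigma probability comes out around 10^-23 per observation, the same ballpark as Lowenstein’s “one in every ten to the twenty-fourth power” figure, while the expected wait for a 25-sigma move is on the order of 10^135 years.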

We have two examples of breathtaking financial incompetence at the highest levels within 10 years of each other. I strongly suspect that if my knowledge of financial history went even deeper I could come up with a third example. But even if there isn’t one, what do you want to bet that there will be a third time? In fact I strongly suspect that the third example is already in motion, and that in 10 years we’ll be able to point to another financial crisis caused by another complicated financial instrument that is already in existence.

If you disagree, then please tell me what we have done since 2008 to keep that from happening. Honestly, I’d like to know how to solve this problem. The LTCM partners went on to found not one, but two different hedge funds after their spectacular collapse, and Lord knows the mountains of bad behavior that led to the 2007-2008 crisis went almost entirely unpunished. (In the US only one guy went to jail, though 25 people did end up in jail in Iceland.)

I’m not necessarily saying that the LTCM guys shouldn’t have been able to set up a new hedge fund—I am amazed that people gave them money—I’m saying that exotic financial strategies and the instruments which empower them appear to inevitably blow up in spectacular fashion. And as things increasingly centralize these financial catastrophes just get worse. On top of all this, because of this centralization only governments are in a position to do anything about the problem, and they appear woefully unequal to that task.

It’s possible that none of this will matter, that the invasion of Ukraine will lead to World War III and the last thing on our minds will be complicated financial instruments. But if we do manage to preserve the liberal order, then we’re still going to have to deal with financial crises, because they’re deeply embedded in markets which are a fundamental feature of that order. And I think people underestimate how much the 2007-2008 crisis led to the populism we’re currently seeing, and the attendant political disorder. There are an awful lot of people who remember that while they were getting kicked in the nuts, bankers were making millions of dollars off a crisis they caused. As you can imagine this might lead to them losing faith in the system that allows that, particularly if that system just keeps allowing it to happen. 

II- Capsule Reviews

How to Live on 24 Hours a Day 

by: Arnold Bennett

Published: 1908

92 Pages

Briefly, what is this book about?

It’s a very short, very early self-help book.

Who should read this book?

If you’re a fan of self-help books I think you should check out this one. As I said, it’s super short, and reading the earliest examples of any genre always ends up being particularly illuminating.

General Thoughts

This book put me in mind of Parkinson’s Law, by C. Northcote Parkinson, one of the first business books and, for my money, still the greatest. How to Live on 24 Hours is not the greatest self-help book, but it is surprising how many of the themes that are now common in self-help books existed basically from the genre’s inception. Things like prioritization, using your mornings effectively, the power of habits and ongoing effort, etc. And of course we’re still struggling with all those things; in fact, it might be getting worse. I suppose this is more evidence that some problems will always be with us, but even if that’s the case, it’s still useful to read about one of the first people to identify those problems and attempt to fix them.

Burning Chrome 

by: William Gibson

Published: 1982

223 Pages

Briefly, what is this book about?

A collection of Gibson’s cyberpunk stories, something of a prequel to his famous book, Neuromancer.

Who should read this book?

If you like Gibson, or cyberpunk, or science fiction short stories as a genre, you should definitely read this book.

General Thoughts

I read this as part of Freddie deBoer’s book club. In particular he wanted to talk about the story New Rose Hotel. I’ve read quite a bit of Gibson, but I’d never read this collection, so it seemed like a great excuse to do so. New Rose Hotel was the standout story, but possibly just because deBoer drew extra attention to it. But really all the stories were quite good. Gibson is a very literary author, and his prose is always fantastic. Cyberpunk is a close cousin to noir, and as such it’s really all about the atmosphere and a certain understated panache; Gibson, as the designated father of the genre, is the master of both.

Public Choice Theory and the Illusion of Grand Strategy: How Generals, Weapons Manufacturers, and Foreign Governments Shape American Foreign Policy

by: Richard Hanania

Published: December, 2021

224 Pages

Briefly, what is this book about?

A comprehensive debunking of the idea that American foreign policy is driven by a grand, overarching strategy.

Who should read this book?

I would probably just subscribe to Hanania’s Substack. I think you’ll get most of the important bits there, plus the book itself, as an academic publication, is horribly expensive ($160 hardback, $40 Kindle).

General Thoughts

I may have mentioned that I’m part of a local Slate Star Codex meetup group. In addition to meetups we also do a book club, and this was the book we did in February. As part of that we managed to get Hanania to attend our discussion, virtually. So whatever else you might say about Hanania he’s generous with his time. 

His central point essentially boils down to the idea that American foreign policy is incoherent, that it has no overarching goal. Of course people imagine we have an overarching goal, and are quick to offer up suggestions for what that goal is, but Hanania shoots all of them down. As one example, many people assume we are trying to maintain our position as the global hegemon. But the only reason that position is under threat is because we gave both Russia and China the necessary help to become competitive. You have to look pretty far back in time to see the help the US gave Russia, but even while outwardly opposed to Stalin, pre-WWII, the US government still allowed US businesses to jump-start their heavy industry. Our assistance to China happened more recently, when we let them into the WTO and gave them most favored nation status. In other words, the only reason we’re worried about them today is because of the economic help we gave them decades ago. And it wasn’t as if they suddenly became our enemies; we have always had a pretty antagonistic relationship. Obviously we did this because we hoped it would provide a long-term benefit to us, but this expected benefit was always at cross-purposes with maintaining hegemony.

On the other side of things, even when we’re clearly not hoping to benefit ourselves, when we’re definitely doing things for the sole reason of harming our enemies, our tactics are still incoherent. The best example of this is our habit of imposing sanctions. Hanania points out that sanctions almost never accomplish their intended goal, and generally end up being humanitarian disasters on top of that. Certainly they haven’t really affected Putin; on the contrary, they seem to have made him more popular than ever inside Russia, strengthening the perception that the West will always be an implacable enemy of the Russian people and that Putin is the only one who can stand up to it.

I could go on and cover other suggestions for potential US grand strategies, like the maintenance of international laws and norms. (If that’s our strategy, why do we continually break those laws?) But I’m more interested in the high-level questions. Is true grand strategy more common in a multipolar world? As the lone hyperpower, is the US trying to be all things to all people? Are monarchies and autocracies better at grand strategy because decision-making power is more centralized? Or are they worse because they end up surrounded by “yes men”? Do liberal values make it harder to engage in grand strategy, because there’s an irreconcilable tension between national interests and humanitarian concerns? Is it possible that nations have always fumbled through history, sometimes doing the right thing, sometimes the wrong thing, mostly by chance, but in the age of nuclear weapons, we’re suddenly in a place where these mistakes, which have always happened, might be catastrophic?

As you can imagine the invasion of Ukraine has the possibility of answering many of these questions, and we might not like what those answers turn out to be.

Virtue Hoarders: The Case against the Professional Managerial Class

by: Catherine Liu

Published: December, 2020

90 Pages

Briefly, what is this book about?

There is a war within the left between those who want to prioritize identity (being black, or gay) and those who want to prioritize class. This is a book in favor of the latter and opposed to the former.

Who should read this book?

The book is short, which is why I picked it up, but it’s pretty dense. Still, if you’re interested in the conflict I just mentioned it’s probably worth reading. Certainly, as someone who’s never really been on the left, it helped me understand things better.

General Thoughts

One of my friends turned me on to Tara Henley, who’s kind of the Canadian version of Bari Weiss. And Henley raved about this book, which is how I came to find out about it. Also I’ve long been fascinated by the subject of the book: what Liu calls the professional managerial class (PMC), what others call woke capital, and what still others have labeled “the cathedral.”

I have yet to decide which term is best; it’s a little like the ancient parable of the blind men and the elephant. Each term emerges from a different point of view of what is clearly a massive phenomenon. As for the PMC, Liu ends up defining it more by its relationship to the working class than by any elements inherent to the PMC itself. The PMC is the academic who can’t imagine why the working class doesn’t just go to college; surely it must be clear to them that such attendance is the answer to all of the problems they might be experiencing. It’s the bureaucrat who enforces laws for the working class’ “own good”, and feels all the more smug when the working class chafes against these laws. And to take a quote directly from the book:

PMC virtue hoarding is the insult added to injury when white-collar managers, having downsized their blue-collar workforce, then disparage them for their bad taste in literature, bad diets, unstable families and deplorable child-rearing habits.

As you might have gathered, this is a book about the conflict between the professional managerial class and the working class, and in a larger sense it’s a book about the conflict between those who prioritize identity and those who prioritize class. In order to understand how this conflict emerged you have to go back a few decades. This is a vast oversimplification, but Liu and people like her would probably point to a long-standing unity between advocates for minority rights and advocates for economic justice. Certainly Martin Luther King Jr. still embodied both strands, and this was fairly mainstream Marxism as well, but in the years after his death these two strands started to subtly drift apart.

These advocates for broad spectrum justice had clearly seized the moral high ground, and as a consequence of this they were growing more powerful. Those already in power, who had gotten there by way of their wealth and status, needed some way to keep their power—it’s hard for people to take you seriously as an advocate for economic justice and the working class if you’re rich. So partially by design, but mostly just because of the way the incentives were structured, those in power started emphasizing the identity side of things and deemphasizing the economic side of things. It became more about minorities who were poor and less about poor people in general. In other words, identity was easier to subvert than class and so that’s what they did. Given that such subversion was second nature for those who already had power and wealth this was fairly easy to do. Basically they adopted the culture of the 60’s and used it as a proxy for virtue of the 60’s, narrowing the definition of virtue in the process, and hoarding what remained. Thus, the title of the book. Here’s how Liu puts it:

The culture war was always a proxy economic war, but the 1960’s divided the country into the allegedly enlightened and the allegedly benighted, with the PMC able to separate itself from its economic inferiors in a way that seemed morally justifiable.

The post-1968 PMC elite has become ideologically convinced of its own unassailable position as comprising the most advanced people the earth has ever seen. They have, in fact, made a virtue of their vanguardism. Drawing on the legacy of the counterculture and its commitment to technological and spiritual innovations, PMC elites try to tell the rest of us how to live…as the fortunes of the PMC elites rose, the class insisted on its ability to do ordinary things in extraordinary, fundamentally superior and more virtuous ways: as a class, it was reading books, raising children, eating food, staying healthy, and having sex as the most culturally and affectively[sic] advanced people in human history.

All of this hopefully gives you enough to understand the outlines of the conflict. You can probably simplify it into the Marxists vs. the Woke. Though that might be too simple. The borders of the conflict can seem a little bit messy when you first encounter them, and this book’s primary utility is to clearly delineate those borders. In any case, I am on neither side of the conflict, and although I never thought I would say this, I clearly prefer the Marxists, in part because of things I’ve read elsewhere, but in part because of this book. Though only in this very narrow sphere; everywhere else I prefer just about anything to Marxism.

Liu does a good job of making the case that the PMC is on the side of the Woke, and that this alignment isn’t bringing us closer to justice, it’s perverting it. Above all she makes the case that the PMC, which she admits she’s a part of (and for that matter, so am I) are mostly a bunch of sanctimonious assholes. 

Stephen Fry’s Greek Myths Retold Series

By: Stephen Fry

Book 1: Mythos

Published: 2019

352 Pages

Book 2: Heroes

Published: 2020

352 Pages

Briefly, what are these books about?

Stephen Fry retells the stories of Greek Mythology.

Who should read these books?

If you like Stephen Fry or Greek Mythology you should read these books; actually, you should listen to Stephen Fry narrating these books.

General Thoughts

I was a big fan of Bulfinch’s Mythology when I was a kid, and it’d been a long time since I had revisited the myths, outside of reading the Iliad, the Odyssey, and the Greek dramas. Which is not nothing, but it was still nice to engage in a comprehensive review of all the myths.

Fry’s retelling is different from Bulfinch’s (to the extent I remember it) in three respects. First off, Bulfinch’s left out the more salacious details; for example, I don’t remember reading that when Kronos overthrew Ouranos it involved cutting off his genitals and hurling them across Greece and out into the ocean.

The second point is closely related to the first: as part of this bowdlerization Bulfinch’s left out all of the homosexuality. Fry, for obvious reasons, not only includes it, but really leans into all the LGBT elements of the mythology. For my money a little too much. Which is not to say I think he exaggerates any of the details, but rather that he can’t resist using these elements as ammo in the current culture war. For example, when telling the story of someone who these days would be identified as transgender, he offers one of his very few footnotes, in which he not only says that this is proof of current transgender orthodoxy, but goes on to reference an academic paper in support of this point.

I’m not opposed to such arguments, but for a moment it’s an entirely different book. Rather than being a playful retelling of myths it’s modern cultural pontification. And it’s possible that this point, out of all the points he could have pontificated on, was worth the digression. But it draws unusual attention to the issue, which often has the opposite of (what I presume is) the author’s intended effect. “There is no lack of people telling me how natural it is to be transgender. I was reading a book about classic mythology to get away from the grubbiness of the current culture wars. Instead I’m even more annoyed by such statements!”

I don’t want to exaggerate the issue, mostly the books are quite good. Which takes me to the third difference from Bulfinch’s. Fry frequently takes the opportunity to inject humorous asides. You kind of get the sense that these are the Greek myths as told by Douglas Adams (of Hitchhiker’s Guide to the Galaxy fame) though Fry’s humor is not quite so dense. 

In the end these are classic stories, told in a humorous fashion, by a great narrator. I just wish he could have done a slightly better job of keeping his politics out of things. 

If You Absolutely Must…: a brief guide to writing and selling short-form argumentative nonfiction from a somewhat reluctant professional writer

by: Fredrik deBoer

Published: January, 2022

50 Pages

Briefly, what is this book about?

The title pretty much sums it up.

Who should read this book?

If you are genuinely trying to make a living as a blogger, newsletter writer, or even a podcaster then I would definitely read this book.

General Thoughts

Obviously I write argumentative nonfiction, so I was hoping to get a lot of great pointers from this book. There were several: you need a niche/schtick, you need to be honest and fearless, you need to actually write, etc. Mostly stuff I’ve heard before, and it was good to be reminded of these things, but there was also nothing revelatory or earth-shattering. Where the book really excelled was in an area where I’m not looking for advice, at least not yet: actually, really and truly making a living as a writer, as in it’s your primary source of income. DeBoer gets into the nuts and bolts there, going so far as to include his actual book pitch. But of course making a living as a writer is very difficult, and thus the title: you should do it only “If You Absolutely Must”.

Expeditionary Force Book 8: Armageddon

by: Craig Alanson

456 Pages

Briefly, what is this book about?

The continued adventures of the merry band of pirates, keeping the Earth safe from the horrors of the galaxy.

Who should read this book?

I guess if you’ve already read the previous seven books you should read this one. But I think if you were on the fence about continuing I might stop at book seven or maybe even earlier. Or at least, if I were you, I would wait until some blogger you trust finishes the series and reports back to you. Because I probably will end up being just such a blogger.

General Thoughts

Increasingly this series is 80% stuff that was interesting the first time but has been done to death by book 8, and 20% stuff I’m intensely curious about and can hardly wait to see how it turns out. As an example, I was in the middle of the book, and there was a setback, and it was basically the same kind of setback that had happened in nearly all of the previous books, and I honestly just about stopped listening right there. But then just a few minutes later Alanson did some world building (technically galaxy building) and expanded on one of the big mysteries of the book, and I was all the way back in, at least for a bit.

Another element that hasn’t gone quite the way I expected: when you start a series and discover it’s already been mapped out to be 15 books long, you expect that in the course of those books the characters are going to level up in some fashion, and mostly this hasn’t happened. Though again, just as I was about to reach the point of despair here as well, they did substantially level up in this book. So I will continue reading, but I wouldn’t blame anyone else for stopping.

If you were paying attention to page numbers you may have noticed a theme. There were a lot of short books this month. But short books need love just as much as massive classics. And tiny blogs need love just as much as giant newsletters. If this saying I just barely made up for completely selfish reasons resonates at all with you, consider donating.

Eschatologist #13: Antifragility


This newsletter is now a year old, and we spent much of that year working through the ideas of Nassim Nicholas Taleb. This is not merely because I think Taleb is the best guide to understanding the challenges of the modern world; he’s also the best guide to preparing for those challenges.

This preparation is necessary because, as Taleb points out, our material progress has largely come at the expense of increased fragility. This does not necessarily mean that things are more likely to fail in the modern world, just that when they do, such failures come in the form of catastrophic black swans. The deaths and disruptions caused by the pandemic have provided us with an excellent example of just such a catastrophe.

If fragility is the problem, then what’s Taleb’s recommended solution? Antifragility. Upon hearing this word you may think, “Of course, antifragility is the solution to fragility, but what does antifragility even mean?” Fortunately Taleb has a formal definition, but let’s start with his informal definition:

If you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties. It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy. 

Fragility is when we accept small, limited benefits now in exchange for potential large, unbounded costs. In the quote it’s the benefit of getting a little extra money by going into debt, which presumably translates into a bigger house or a nicer car, while running the risk of bankruptcy if you lose your job and are unable to pay those debts.

Antifragility is when we accept small, limited costs in exchange for potential large, unbounded benefits. The time and discipline it costs to save money and stockpile spam in your basement—accompanied presumably by a smaller house and a more modest car—turns into a huge benefit when you are unscathed by disaster. As a graph it looks like this:

For fragility just flip the graph upside down. If we apply this to our current catastrophe: the pandemic was preceded by thousands of small, fixed benefits, as we spent the time and money we could have used for planning, preparing, and stockpiling on other things. Things that presumably seemed more important at the time. But these small benefits turned into large costs when the pandemic arrived and revealed how fragile things really were.
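To make the asymmetry concrete, here is a minimal simulation sketch. All of the numbers (the yearly benefit, the cost of preparation, the disaster probability, and the size of the disaster payoff) are invented for illustration, not estimates of anything real:

```python
# Compare a "fragile" and an "antifragile" strategy over many simulated years.
# Every number here is an illustrative assumption.
import random

random.seed(42)

YEARS = 10_000
P_DISASTER = 0.02          # assumed 2% chance of disaster in any given year

def fragile_year(disaster):
    # Small fixed benefit every year; huge, unbounded-style loss when disaster hits.
    return -1_000_000 if disaster else +1_000

def antifragile_year(disaster):
    # Small fixed cost every year (preparation); large relative benefit
    # (avoided ruin, intact resources) when disaster hits.
    return +50_000 if disaster else -1_000

fragile_total = antifragile_total = 0
for _ in range(YEARS):
    disaster = random.random() < P_DISASTER
    fragile_total += fragile_year(disaster)
    antifragile_total += antifragile_year(disaster)

print(f"fragile:     {fragile_total:+,}")
print(f"antifragile: {antifragile_total:+,}")
```

The fragile strategy wins a little almost every year and still ends up catastrophically behind, which is the whole point: you cannot judge these strategies by how they perform in a typical year.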

The pandemic not only revealed the fragility of our preparations, it also revealed the fragility of our logistics when it broke the global supply chain. Of course before the pandemic people didn’t talk about fragility; they talked about efficiency, the wonders of “just in time” manufacturing, the offshoring of production, and global consolidation. But when the black swan arrived all of those things ended up breaking, as fragile things tend to do.

Moving back a little farther in time, the global financial crisis of 2007-2008 is an even better example. As Taleb describes it the entire financial system was focused on picking up pennies in front of a steamroller—limited benefits with eventually fatal consequences.

As you may have already surmised, antifragility is the opposite of all this. It consists of spending a certain amount of time and money on being prepared, some of which will be wasted. Of taking certain risks/costs in order to avoid catastrophic harm. It’s also, like many things, easier said than done. But as long as we’re talking about the pandemic it’s worth asking: what steps are being taken to prepare for the next pandemic?

So far it’s not looking good: we’ve slashed the amount of money we’re spending on such preparedness, and rather than figuring out the origin of the pandemic (see my last essay) we’re still fighting about masks. I would have hoped that the pandemic would have led us, as a society, to focus more on preparedness, risk management, and above all antifragility, but perhaps not. That being the case, I hope all of my readers are lucky enough to have some gold bars in the basement, even if they’re metaphorical.

All of my gold bars are metaphorical. If you’d like to help make them non-metaphorical consider donating. I understand that it takes a LOT of donations to equal one gold bar, but one has to start somewhere.

Eschatologist #11: Black Swans

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

February 2020, the last month of normalcy, probably feels like a long time ago. I spent the last week of it in New York City, which was already ground zero for the pandemic, though no one knew that yet. I was there to attend the Real World Risk Institute, a week-long course put on by Nassim Taleb, who’s best known as the author of The Black Swan. The coincidence of learning more about black swans while a very large one was already in process is not lost on me.

(Curiously enough, this is not the first time I was in New York right before a black swan. I also happened to be there a couple of weeks before 9/11.)

Before we go any further, for any who might be unfamiliar with the term, a black swan is an unpredictable, rare event with extreme consequences. And one of the things I was surprised to learn while at the institute is that Taleb, despite inventing the term, has grown to dislike it. There are a couple of reasons for this. First, people apply it to things which aren’t really black swans, to things which can be foreseen. The pandemic is actually a pretty good example of this. Experts had been warning about the inevitability of one for decades. We had one in 1918, and beyond that several near misses with SARS, MERS, and Ebola in just the last couple of decades. If all this is the case, why am I still calling it a black swan?

First off, even if the danger of a pandemic was fairly well known, the second order effects have given us a whole flock of black swans. Things like supply chain shocks, teleworking, housing craziness, inflation, labor shortages, and widespread civil unrest, to name just a few. This is the primary reason, but on top of that I think Taleb is being a little bit dogmatic with this objection. (I.e. it’s hard to think of what phrase other than “black swan” better describes the pandemic.)

However, when it comes to his second objection I am entirely in agreement with him. People use the term as an excuse. “It was a black swan. How could we possibly have prepared?!?” And herein lies the problem, and the culmination of everything I’ve been saying since the beginning, but particularly over the last four months.

Accordingly saying “How could we possibly have prepared?” is not only a massive abdication of responsibility, it’s also an equally massive misunderstanding of the moment. Because preparedness has no meaning if it’s not directed towards preparing for black swans. There is nothing else worth preparing for.

You may be wondering, particularly if black swans are unpredictable, how is one supposed to do that? The answer is less fragility, and ideally antifragility, but a full exploration of what that means will have to wait for another time. Though I’ve already touched on how religion helps create both of these at the level of individuals and families. But what about levels above that? 

This is where I am the most concerned. And where the excuse, “It was a black swan! Nothing could be done!” has caused the greatest damage. In a society driven by markets, corporations have great ability to both help and harm by the risks they take. We’re seeing some of these harms right now. We saw even more during the 2007-2008 financial crisis. When these harms occur, it’s becoming more common to use this excuse. That it could not be foreseen. It could not be prevented.

If corporations suffered the effects of their lack of foresight that would be one thing. But increasingly governments provide a backstop against such calamities. In the process they absorb at least some of the risk. Making the government itself more susceptible to future, bigger black swans. And if that happens, we have no backstop.

Someday a black swan will either end the world, or save it. Let’s hope it’s the latter.

One thing you might not realize is that donations happen to also be black swans. They’re rare (but becoming more common) and enormously consequential. If you want to feel what it’s like to have that sort of power, consider trying it out. 

Eschatologist #10: Mediocristan and Extremistan

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

Last time we talked about mistakenly finding patterns in randomness—patterns that are then erroneously extrapolated into predictions. This time we’re going to talk about yet another mistake people make when dealing with randomness, confusing the extreme with the normal.

When I use the term “normal” you may be thinking I’m using it in a general sense, but in the realm of randomness “normal” has a very specific meaning, i.e. a normal distribution. This is the classic bell curve: a large hump in the center and thin tails to either side. In general, occurrences in the natural world fall on this curve. The classic example is height: people cluster around the average (5’9” for men and 5’4” for women, at least in the US), and as you get farther away from average (say men who are either 6’7” or 4’11”) you find far fewer examples.
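To get a feel for how thin those tails are, here is a quick sketch that samples heights from a normal distribution. The mean of 69 inches matches the 5’9” figure above; the standard deviation of 3 inches is an assumption:

```python
# How rare are extremes under a normal distribution? Sample male heights
# with mean 69" (5'9") and an assumed standard deviation of 3".
import random

random.seed(1)

MEAN, SD = 69, 3
N = 100_000

heights = [random.gauss(MEAN, SD) for _ in range(N)]

tall = sum(h >= 79 for h in heights)    # 6'7" or taller
short = sum(h <= 59 for h in heights)   # 4'11" or shorter

print(f"6'7\" or taller:   {tall} out of {N:,}")
print(f"4'11\" or shorter: {short} out of {N:,}")
```

Out of 100,000 simulated men, only a handful of dozens land at either extreme, and nobody lands anywhere near twice the average.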

Up until relatively recently, most of the things humans encountered followed this distribution. If your herd of cows normally produced 20 calves in a year, then on a good year the herd might produce 30 and on a bad year they might produce 10. The same might be said of the bushels of grain that were harvested or the amount of rain that fell. 

These limits were particularly relevant at the upper end of the distribution. Disaster might cause you to end up with no calves, no harvest, or not enough rain, but there was no scenario where you would go from 20 calves one year to 2,000 the next. And on an annualized basis even rainfall is unlikely to change very much. Phoenix is not going to suddenly become Portland, even if it does get the occasional flash flood.

Throughout our history normal distributions have been so common that we often fall into the trap of assuming everything follows this distribution, but randomness can definitely appear in other forms. The most common of these is the power law, and the most common example of a power law is a Pareto distribution, one instance of which is the 80/20 rule. This originally took the form of observing that 20% of the people hold 80% of the wealth. But you can also see it in things like software, where 20% of the features often account for 80% of the usage.
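The 80/20 rule falls out of a Pareto distribution directly. This sketch draws samples and measures the share held by the top 20%; the shape parameter of 1.16 is the standard choice that yields roughly an 80/20 split, and the sample size is arbitrary:

```python
# Simulate wealth drawn from a Pareto distribution and measure what share
# of the total is held by the richest 20%. Shape ~1.16 gives roughly 80/20.
import random

random.seed(0)

ALPHA = 1.16     # Pareto shape parameter (the classic ~80/20 value)
N = 100_000

wealth = sorted((random.paretovariate(ALPHA) for _ in range(N)), reverse=True)

top_share = sum(wealth[: N // 5]) / sum(wealth)
print(f"share of total wealth held by the top 20%: {top_share:.0%}")
```

With a tail this heavy the measured share bounces around noticeably from run to run, which is itself a preview of what makes extremistan hard to predict.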

I’ve been drawing on the work of Nassim Taleb a lot in these newsletters, and in order to visualize the difference between these two distributions he came up with the terms mediocristan and extremistan. He points out that while most people think they live in mediocristan, because that’s where humanity has spent most of its time, the modern world has gradually been turning more and more into extremistan. This has numerous consequences; one of the biggest concerns prediction.

In mediocristan one data point is never going to destroy the curve. If you end up at a party with a hundred people and you toss out the estimate that the average height of all the men is 5’9”, you’re unlikely to be wrong by more than a couple of inches in either direction. And even if an NBA player walks through the door it’s only going to throw things off by half an inch. But if you’re estimating the average wealth, things get a lot more complicated. Even if you were to collect all the data necessary to have the exact number, the appearance of the fashionably late Bill Gates will completely blow that up, taking the average wealth from $1 million pre-Bill Gates to more than a billion dollars after he shows up.
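The party example is easy to check directly. In this sketch the partygoers’ average wealth and Bill Gates’s net worth (taken as roughly $130 billion, around its 2021 value) are both assumptions:

```python
# One extreme outlier dominates the mean in extremistan.
guests = [1_000_000] * 100          # 100 partygoers averaging $1M each (assumed)
gates = 130_000_000_000             # Bill Gates, ~$130B (assumed, circa 2021)

before = sum(guests) / len(guests)
after = (sum(guests) + gates) / (len(guests) + 1)

print(f"average wealth before: ${before:,.0f}")     # $1,000,000
print(f"average wealth after:  ${after:,.0f}")      # roughly $1.3 billion
```

One guest out of 101 moves the mean by three orders of magnitude, which is exactly what can never happen with height.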

Extreme outliers like this can either be very good or very bad. If Gates shows up and you’re trying to collect money to pay the caterers it’s good. If Gates shows up and it’s an auction where you’re both bidding on the same thing it’s bad. But where such outliers really screw things up is when you’re trying to prepare for future risk, particularly if you’re using the tools of mediocristan to prepare for the disasters of extremistan. Disasters which we’ll get to next time…

As it turns out blogging is definitely in extremistan. Only in this case you’re probably looking at 5% of the bloggers who get 95% of the traffic. As someone who’s in the 95% of the bloggers that gets 5% of the traffic I really appreciate each and every reader. If you want to help me get into that 5%, consider donating.

Eschatologist #9: Randomness

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.

The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:

The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms actually weren’t randomly distributed. For reasons of biology (glowworms will eat other glowworms) each worm keeps a similar spacing from its neighbors. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.
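You can reproduce Gould’s observation with a one-dimensional sketch: place “stars” uniformly at random, place “glowworms” on a jittered grid so that neighbors effectively repel each other, and compare the tightest pairing in each. The counts and spacing here are arbitrary assumptions:

```python
# Random points produce tight clusters (the raw material of "constellations");
# points that repel each other produce suspiciously even spacing.
import random

random.seed(7)

N = 100

# "Stars": positions chosen uniformly at random on [0, 1).
stars = sorted(random.random() for _ in range(N))

# "Glowworms": an even grid, each point jittered slightly, so that
# neighbors can never get very close to each other.
glowworms = sorted((i + random.uniform(0.3, 0.7)) / N for i in range(N))

def min_gap(points):
    # Smallest distance between any two adjacent points.
    return min(b - a for a, b in zip(points, points[1:]))

print(f"tightest pair, stars:     {min_gap(stars):.5f}")
print(f"tightest pair, glowworms: {min_gap(glowworms):.5f}")
```

The random points almost always contain a pair far closer together than anything the evenly spaced points produce; our eyes read those tight clusters as patterns.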

One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly: 

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.

People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic. 

Plus, even if all of the aforementioned trends are accurate the outcome doesn’t have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial it could be malevolent and harmful, or just put a lot of people out of work. 

But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do, things will be great. We’ll either have money, justice or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly. 

I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However if you’d like to cautiously investigate the narrative a good first step would be donating.

Tetlock, the Taliban, and Taleb

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


There have been many essays written in the aftermath of our withdrawal from Afghanistan. One of the more interesting was penned by Richard Hanania, and titled “Tetlock and the Taliban”. Everyone reading this has heard of the Taliban, but there might be a few of you who are unfamiliar with Tetlock. And even if that name rings a bell you might not be clear on what his relation is to the Taliban. Hanania himself apologizes to Tetlock for the association, but “couldn’t resist the alliteration”, which is understandable. Neither could I. 

Tetlock is known for a lot of things, but he got his start by pointing out that “experts” often weren’t. To borrow from Hanania:

Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than educated laymen when it came to predicting geopolitical events and economic outcomes.

From this summary the connection to the Taliban is probably obvious. This is an arena where the subject matter experts got things very wrong. Hanania’s opening analogy is too good not to quote:

Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?

Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?

For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science.

Of course Hanania’s point is more devastating than Tetlock’s. The experts weren’t just “no better” than the Taliban’s “educated laymen”. The “experts” were decisively outcompeted despite having vastly more money and in theory, all the expertise. Certainly they had all the credentialed expertise…

In some ways Hanania’s point is just a restatement of Antonio García Martínez’s point, which I used to end my last post on Afghanistan: the idea that we are an unserious people. That we enjoy “an imperium so broad and blinding” we’ve never been “made to suffer the limits of [our] understanding or re-assess [our] assumptions about [the] world”.

So the Taliban needed no introduction, and we’ve introduced Tetlock, but what about Taleb? Longtime readers of this blog should be very familiar with Nassim Nicholas Taleb, but if not I have a whole post introducing his ideas. For this post we’re interested in two things, his relationship to Tetlock and his work describing black swans: rare, consequential and unpredictable events. 

Taleb and Tetlock are on the same page when it comes to experts, and in fact for a time they were collaborators, co-authoring papers on the fallibility of expert predictions and the general difficulty of making predictions—particularly when it came to fat-tail risks. But then, according to Taleb, Tetlock was seduced by government money and went from pointing out the weaknesses of experts to trying to supplant them, by creating the Good Judgement project, and the whole project of superforecasting.

The key problem with expert prediction, from Tetlock’s point of view, is that experts are unaccountable. No one tracks whether they were eventually right or wrong. Beyond that, their “predictions” are made in such a way that even determining their accuracy is impossible. And, as noted, experts are no better at prediction than educated laypeople. Tetlock’s solution is to let anyone make predictions, while ensuring that those predictions can be tracked and assessed for accuracy. From there you can promote the people with the best track records. A sample prediction might be “I am 90% confident that Joe Biden will win the 2020 presidential election.”
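The track-and-score part of this is simple to sketch. A standard scoring rule for probability forecasts is the Brier score: the mean squared difference between the stated probability and the 0-or-1 outcome, where lower is better. The forecasts below are invented examples, not anyone’s real track record:

```python
# Score tracked probability forecasts with the Brier score: the mean squared
# difference between the stated probability and the actual 0/1 outcome.
# Lower is better; these forecasts are invented for illustration.

forecasts = [
    # (claim, stated probability, what happened: 1 = yes, 0 = no)
    ("Biden wins the 2020 presidential election", 0.90, 1),
    ("UK votes to leave the EU",                  0.25, 1),
    ("Tokyo Olympics held as scheduled",          0.80, 0),
]

def brier(preds):
    return sum((p - outcome) ** 2 for _, p, outcome in preds) / len(preds)

print(f"Brier score: {brier(forecasts):.3f}")
```

The overconfident misses (25% on an event that happened, 80% on one that didn’t) are what drive the score up; a well-calibrated forecaster converges toward a low score over many questions, which is exactly the accountability Tetlock wanted.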

Taleb agreed with the problem, but not with the solution. And this is where black swans come in. Black swans can’t be predicted, they can only be hedged against, and prepared for, but superforecasting, by giving the illusion of prediction, encourages people to be less prepared for black swans, and in the end worse off than they would have been without the prediction.

In the time since writing The Black Swan Taleb has come to hate the term, because people have twisted it into an excuse for precisely the kind of unpreparedness he was trying to prevent. 

“No one could have done anything about the 2007 financial crisis. It was a black swan!”

“We couldn’t have done anything about the pandemic in advance. It was a black swan!” 

“Who could have predicted that the Taliban would take over the country in nine days! It was a black swan!”

Accordingly, other terms have been suggested. In my last post I reviewed a book which introduced the term “gray rhino”, something people can see coming, but which they nevertheless ignore. 

Regardless of the label we decide to apply to what happened in Afghanistan, it feels like we were caught flat footed. We needed to be better prepared. Taleb says we can be better prepared if we expect black swans. Tetlock says we can be better prepared by predicting what to prepare for. Afghanistan seems like precisely the sort of thing superforecasting was designed for. Despite this I can find no evidence that Tetlock’s stable of superforecasters predicted how fast Afghanistan would fall, or any evidence that they even tried. 

As a final point before we move on: this last bit is one of the biggest problems with superforecasting, the idea that you should only be judged on the predictions you chose to make, that if you were never asked to make a prediction about something the endeavor still “worked”. But reality doesn’t care about what you chose to make predictions on vs. what you didn’t. Reality does whatever it feels like. The fact that you didn’t choose to make any predictions about the fall of Afghanistan doesn’t mean that thousands of interpreters didn’t end up being left behind. And the fact that you didn’t choose to make any predictions about pandemics doesn’t mean that millions of people didn’t die. This is the chief difference between Tetlock and Taleb.


I first thought about this issue when I came across a poll on a forum I frequent, in which users were asked how long they thought the Afghan government would last. The options and results were:

(In the interest of full disclosure the bolded option indicates that I said one to two years.)

While it is true that a plurality of people said less than six months, six months was still much longer than the nine days it actually took (from the capture of the first provincial capital to the fall of Kabul). And from the discussion that followed the poll, it seemed most of those 16 people were thinking that the government would fall at closer to six months, or even three months, than one week. In fact the best thing, prediction-wise, to come out of the discussion was when someone pointed out that 10 years previously The Onion had posted an article with the headline U.S. Quietly Slips Out Of Afghanistan In Dead Of Night, which is exactly what happened at Bagram.

As it turns out this is not the first time The Onion has eerily predicted the future. There’s a whole subgenre of noticing all the times it’s happened. How do they do it? Part of the answer, of course, is selection bias: no one expects them to predict the future, and nobody comments on all the articles that didn’t come true, but when one does, it’s noteworthy. But I think there’s something else going on as well: I think they come up with the worst or most ridiculous thing that could happen, and because of the way the world works, some of the time that’s exactly what does happen.

Between the poll answers being skewed from reality and the link to the Onion article, the thread led me to wonder: where were the superforecasters in all of this?

I don’t want to go through all of the problems I’ve brought up with superforecasting (I’ve easily written more than 10,000 words on the subject) but this event is another example of nearly all of my complaints. 

  • There is no methodology to account for the differing impact of being incorrect on some predictions vs. others. (Being wrong about whether the Tokyo Olympics will be held is a lot less consequential than being wrong about Brexit.)
  • Their attention is naturally drawn to obvious questions where tracking predictions is easy. 
  • Their rate of success is skewed both by only picking obvious questions, and by lumping together both the consequential and the inconsequential.
  • People use superforecasting as a way of more efficiently allocating resources, but efficiency is essentially equal to fragility, which leaves us less prepared when things go really bad. (It was pretty efficient to just leave Bagram all at once.)

Of course some of these don’t apply because, as far as I can tell, the Good Judgment Project and its stable of superforecasters never tackled the question, but they easily could have. They could have had a series of questions about whether the Taliban would be in control of Kabul by a certain date. This seems specific enough to meet their criteria. But as I said, I could find no evidence that they had. Which means either they did make such predictions and were embarrassingly wrong, so it’s been buried, or despite its geopolitical importance it never occurred to them to make any predictions about when Afghanistan would fall. (But it did occur to a random poster on a fringe internet message board?) Both options are bad.

When people like me criticize superforecasting and Tetlock’s Good Judgment Project in this manner, the common response is to point out all the things they did get right, and further that superforecasting is not about getting everything right; it’s about improving the odds, and getting more things right than the old method of relying on the experts. This is a laudable goal. But as I pointed out, it suffers from several blind spots. The blind spot of impact is particularly egregious and deserves more discussion. To quote from one of my previous posts where I reflected on their failure to predict the pandemic:

To put it another way, I’m sure that the Good Judgement project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% of the things they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes that their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. That in fact they could actually be 100% correct about the stuff they predicted but still be caught entirely flat footed by the future because something happened they never even considered. 

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Or any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgement Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years or even the last 10, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top 10. They correctly predicted a lot of stuff which didn’t amount to anything and missed predicting the few things that really mattered.

Once again we find ourselves in a similar position. When we imagine historians looking back on 2021, no one would find it surprising if they ranked the withdrawal of the US and subsequent capture of Afghanistan by the Taliban as the most impactful event of the year. And yet superforecasters did nothing to help us prepare for this event.
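The complaint about impact can be made arithmetic. In this toy sketch (all counts and weights invented), a forecaster who aces a thousand low-stakes questions but misses the handful of high-stakes ones looks excellent by raw accuracy and terrible once each question is weighted by the cost of being wrong about it:

```python
# Raw accuracy vs. impact-weighted accuracy: nailing 990 of 1,000 low-stakes
# questions while missing all 10 high-stakes ones looks great on accuracy
# and awful once impact is accounted for. All numbers are invented.

low_stakes  = {"count": 990, "correct": 990, "impact": 1}     # e.g. "will the Olympics be held?"
high_stakes = {"count": 10,  "correct": 0,   "impact": 1000}  # e.g. "will there be a pandemic?"

total = low_stakes["count"] + high_stakes["count"]
accuracy = (low_stakes["correct"] + high_stakes["correct"]) / total

weighted_total = sum(q["count"] * q["impact"] for q in (low_stakes, high_stakes))
weighted_correct = sum(q["correct"] * q["impact"] for q in (low_stakes, high_stakes))
weighted_accuracy = weighted_correct / weighted_total

print(f"raw accuracy:             {accuracy:.1%}")        # 99.0%
print(f"impact-weighted accuracy: {weighted_accuracy:.1%}")
```

A 99% hit rate collapses to single digits once the questions are weighted by consequence, which is the sense in which the stuff they got wrong mattered more than everything they got right.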


The natural next question is to ask how should we have prepared for what happened? Particularly since we can’t rely on the predictions of superforecasters to warn us. What methodology do I suggest instead of superforecasting? Here we return to the remarkable prescience of The Onion. They ended up accurately predicting what would happen in Afghanistan 10 years in advance, by just imagining the worst thing that could happen. And in the weeks since Kabul fell, my own criticism of Biden has settled around this theme. He deserves credit for realizing that the US mission in Afghanistan had failed, and that we needed to leave, that in fact we had needed to leave for a while. Bad things had happened, and bad things would continue to happen, but in accepting the failure and its consequences he didn’t go far enough. 

One can imagine Biden asserting that Afghanistan and Iraq were far worse than Bush and his “cronies” had predicted. But then somehow he overlooked the general wisdom that anything can end up being a lot worse than predicted, particularly in the arena of war (or disease). If Bush can be wrong about the cost and casualties associated with invading Afghanistan, is it possible that Biden might be wrong about the cost and casualties associated with leaving Afghanistan? To state things more generally, the potential for things to go wrong in an operation like this far exceeds the potential for things to go right. Biden, while accepting past failure, didn’t do enough to accept the possibility of future failure. 

As I mentioned, my answer to the poll question of how long the Afghanistan government was going to last was 1-2 years. And I clearly got it wrong (whatever my excuses). But I can tell you what questions I would have aced (and I think my previous 200+ blog posts back me up on this point): 

  • Is there a significant chance that the withdrawal will go really badly?
  • Is it likely to go worse than the government expects?

And to be clear, I’m not looking to make predictions for the sake of predictions. I’m not just trying to be more accurate; I’m looking for a methodology that gives us a better overall outcome. So is the answer to how we could have been better prepared merely “more pessimism”? Well, that’s certainly a good place to start, and beyond that there are things I’ve been talking about since this blog started. But a good next step is to look at the impact of being wrong. Tetlock was correct when he pointed out that experts are wrong most of the time. But what he didn’t account for is that it’s possible to be wrong most of the time and still end up ahead. To illustrate this point I’d like to end by recycling an example I used the last time I talked about superforecasting:

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X, who encourages Molly to bring in fishes, bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who ends up being a very skillful player. That is until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player, into a compulsive and horrible player, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful and he mostly wins. That on average, conservative Eustice ends the night up by $10k. But, 1% of the time, Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is should he play poker at all? (And should Player X want him at the same table he’s at?) The math is straightforward, his expected return over 100 games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
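The arithmetic above can be checked directly. This is just a sketch of the illustrative numbers from the example (99 conservative nights at $10k, one disastrous night at -$1.2 million), not real poker data:

```python
# Expected return from letting Eustice play, using the illustrative
# numbers from the text: over 100 nights, 99 conservative nights
# ending up $10k each, and 1 disastrous night losing $1.2 million.
nights_good, win_good = 99, 10_000
nights_bad, loss_bad = 1, -1_200_000

total_100_games = nights_good * win_good + nights_bad * loss_bad
per_night = total_100_games / 100

print(total_100_games)  # -210000  (the -$210k from the text)
print(per_night)        # -2100.0 expected loss per night
```

The single bad night is large enough to swamp ninety-nine good ones, which is the whole point of the example.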

But superforecasting doesn’t deal with the question of whether someone should “play poker”; it works by considering a single question, answering that question, and assigning a confidence level to the answer. So in this case a superforecaster would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.” 

This is what I mean by impact. When things depart from the status quo, when Eustice loses money, it’s so dramatic that it overwhelms all of the times when things went according to expectations.  
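To put a number on this, here’s a minimal sketch using the Brier score, the accuracy metric Tetlock’s forecasting tournaments actually use (lower is better; the figures are the illustrative ones from the poker example, not real data):

```python
# Brier score for a forecaster who predicts "Eustice wins tonight" with
# 99% confidence every night, over the same 100 nights:
# 99 winning nights (outcome = 1) and 1 disastrous night (outcome = 0).
forecast = 0.99
outcomes = [1] * 99 + [0]

# Brier score = mean squared error between forecast and outcome.
brier = sum((forecast - o) ** 2 for o in outcomes) / len(outcomes)
print(round(brier, 4))  # 0.0099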

Biden was correct when he claimed we needed to withdraw from Afghanistan. He had no choice, he had to play poker. But once he decided to play poker he should have done it as skillfully as possible, because the stakes were huge. And as I have so frequently pointed out, when the stakes are big, as they almost always are when we’re talking about nations, wars, and pandemics, the skill of pessimism always ends up being more important than the skill of superforecasting.

I had a few people read a draft of this post. One of them complained that I was using a $100 word when a $1 word would have sufficed. (Any guesses on which word it was?) But don’t $100 words make my donors feel like they’re getting their money’s worth? If you too want to be able to bask in the comforting embrace of expensive vocabulary consider joining them.

Remind Me What The Heck Your Point is Again?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

The other day I was talking to my brother and he said, “How would you describe your blog in a couple of sentences?”

It probably says something about my professionalism (or lack thereof) that I didn’t have some response ready to spit out. An elevator pitch, if you will. Instead I told him, “That’s a tough one.” Much of this difficulty comes because, if I were being 100% honest, the fairest description of my blog would boil down to: I write about fringe ideas I happen to find interesting. Of course, this description is not going to get me many readers, particularly if they have no idea whether there’s any overlap between what I find interesting and what they find interesting.

I didn’t say this to my brother, mostly because I didn’t think of it at the time. Instead, after few seconds, I told him, well of course the blog does have a theme, it’s right there in the title, but I admitted that it might be more atmospheric than explanatory. Though I think we can fix that with the addition of a few words. Which is how Jeremiah 8:20 shows up on my business cards. (Yeah, that’s the kind of stuff your donations get spent on, FYI.) With those few words added it reads:

The harvest [of technology] is past, the summer [of progress] is ended, and we are not saved.

If I was going to be really pedantic, I might modify it, and hedge, so it read as follows:

Harvesting technology is getting more complex, the summer where progress was easy is over, and I think we should prepare for the possibility that we won’t be saved.

If I was going to be more literary and try to pull in some George R.R. Martin fans I might phrase it:

What we harvest no longer feeds us, and winter is coming.

But once again, you would be forgiven if, after all this, you’re still unclear on what this blog is about (other than weird things I find interesting). To be fair, to myself, I did explain all of this in the very first post, and re-reading it recently, I think it held up fairly well. But it could be better, and this assumes that people have even read my very first post, which is unlikely since at the time my readership was at its nadir, and despite my complete neglect of anything resembling marketing, since then, it has grown, and presumably at least some of those people have not read the entire archive.

Accordingly, I thought I’d take another shot at it. To start, one concept which runs through much (though probably not all) of what I write, is the principle of antifragility, as introduced by Nassim Nicholas Taleb in his book of (nearly) the same name.

I already dedicated an entire post to explaining the ideas of Taleb, so I’m not going to repeat that here. But, in brief, Taleb starts with what should be an uncontroversial idea, that the world is random. He then moves on to point out the effects of that, particularly in light of the fact that most people don’t recognize how random things truly are. They are often Fooled by Randomness (the title of his first book) into thinking that there’s patterns and stability when there aren’t. From there he moves on to talk about extreme randomness through introducing the idea of a Black Swan (the name of his second book) which is something that:

  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. People go to great lengths afterwards to show how it should have been expected.

It’s important at this point to clarify that not all black swans are negative. And technology has generally had the effect of increasing the number of black swans of both the positive (internet) and negative (financial crash) sort. In my very first post I said that we were in a race between these two kinds of black swans, though rather than calling them positive or negative black swans I called them singularities and catastrophes. And tying it back into the theme of the blog a singularity is when technology saves us, and a catastrophe is when it doesn’t.

If we’re living in a random world, with no way to tell whether we’re either going to be saved by technology or doomed by it, then what should we do? This is where Taleb ties it all together under the principle of antifragility, and as I mentioned it’s one of the major themes of this blog. Enough so that another short description of the blog might be:

Antifragility from a Mormon perspective.

But I still haven’t explained antifragility, to say nothing of antifragility from a Mormon perspective, so perhaps I should do that first. In short, things that are fragile are harmed by chaos and things that are antifragile are helped by chaos. I would argue that it’s preferable to be antifragile all of the time, but it is particularly important when things get chaotic. Which leads to two questions: How fragile is society? And how chaotic are things likely to get? I have repeatedly argued that society is very fragile and that things are likely to get significantly more chaotic. And further, that technology increases both of these qualities

Earlier, I provided a pedantic version of the theme, changing (among other things) the clause “we are not saved” to the clause “we should prepare for the possibility that we won’t be saved.” As I said, Taleb starts with the idea that the world is random, or in other words unpredictable, with negative and positive black swans happening unexpectedly. Being antifragile entails reducing your exposure to negative black swans while increasing your exposure to positive black swans. In other words being prepared for the possibility that technology won’t save us.

To be fair, it’s certainly possible that technology will save us. And I wouldn’t put up too much of a fight if you argued it was the most likely outcome. But I take serious issue with anyone who wants to claim that there isn’t a significant chance of catastrophe. To be antifragile, consists of realizing that the cost of being wrong if you assume a catastrophe and there isn’t one, is much less than if you assume no catastrophe and there is one.

It should also be pointed out that most of the time antifragility is relative. To give an example, if I’m a prepper and the North Koreans set off an EMP over the US which knocks out all the power for months. I may go from being a lower class schlub to being the richest person in town. In other words chaos helped me, but only because I reduced my exposure to that particular negative black swan, and most of my neighbors didn’t.

Having explained antifragility (refer back to the previous post if things are still unclear) what does Mormonism bring to the discussion? I would offer that it brings a lot.

First, Mormonism spends quite a bit of time stressing the importance of antifragility, though they call it self reliance, and emphasis things like staying out of debt, having a plan for emergency preparedness, and maintaining a multi-year supply of food. This aspect is not one I spend a lot of time on, but it is definitely an example of Mormon antifragility.

Second, Mormons, while not as apocalyptic as some religions nevertheless reference the nearness of the end right in their name. We’re not merely Saints, we are the “Latter-Day Saints”. While it is true that some members are more apocalyptic than others, regardless of their belief level I don’t think many would dismiss the idea of some kind of Armageddon outright. Given that, if you’re trying to pick a winner in the race between catastrophe and singularity or more broadly, negative or positive black swans, belonging to religion which claims we’re in the last days could help break that tie. Also as I mentioned it’s probably wisest to err on the side of catastrophe anyway.

Third, I believe Mormon Doctrine provides unique insight into some of the cutting edge futuristic issues of the day. Over the last three posts I laid out what those insights are with respect to AI, but in other posts I’ve talked about how the LDS doctrine might answer Fermi’s Paradox. And of course there’s the long running argument I’ve had with the Mormon Transhumanist Association over what constitutes an appropriate use of technology and what constitutes inappropriate uses of technology. This is obviously germane to the discussion of whether technology will save us. And what the endpoint of that technology will end up being. And it suggests another possible theme:

Connecting the challenges of technology to the solutions provided by LDS Doctrine.

Finally, any discussion of Mormonism and religion has to touch on the subject of morality. For many people issues of promiscuity, abortion, single-parent families, same sex marriage, and ubiquitous pornography are either neutral or benefits of the modern world. This leads some people to conclude that things are as good as they’ve ever been and if we’re not on the verge of a singularity then at least we live in a very enlightened era, where people enjoy freedoms they could have never previously imagined.

The LDS Church and religion in general (at least the orthodox variety) take the opposite view of these developments, pointing to them as evidence of a society in serious decline. Perhaps you feel the same way, or perhaps you agree with the people who feel that things are as good as they’ve ever been, but if you’re on the fence. Then, one of the purposes of this blog is to convince you that even if there is no God, that it would be foolish to dismiss religion as a collection of irrational biases, as so many people do. Rather, if we understand the concept of antifragility, it is far more likely that rather than being irrational that religion instead represents the accumulated wisdom of a society.

This last point deserves a deeper dive, because it may not be immediately apparent to you why religions would necessarily accumulate wisdom or what any of this has to do with antifragility. But religious beliefs can only be either fragile or antifragile, they can either break under pressure or get stronger. (In fairness, there is a third category, things which neither break nor get stronger, Taleb calls this the robust category, but in practice it’s very rare for things to be truly robust.) If religious beliefs were fragile, or created fragility then they would have disappeared long ago. Only beliefs which created a stronger society would have endured.

Please note that I am not saying that all religious beliefs are equally good at encouraging antifragile behavior. Some are pointless or even irrational, but others, particularly those shared by several religions are very likely lessons in antifragility. But a smaller and smaller number of people have any religious beliefs and an even smaller number are willing to actively defend these beliefs, particularly those which prohibit a behavior currently in fashion.

However, if these beliefs are as useful and as important as I say they are then they need all the defending they can get. Though in doing this a certain amount of humility is necessary. As I keep pointing out, we can’t predict the future. And maybe the combination of technology and a rejection of traditional morality will lead to some kind of transhuman utopia, where people live forever, change genders whenever they feel like it and live in a fantastically satisfying virtual reality, in which everyone is happy.

I don’t think most people go that far in their assessment of the current world, but the vast majority don’t see any harm in the way things are either, but what if they’re wrong about that?

And this might in fact represent yet another way of framing the theme of this blog:

But what if we’re wrong?

In several posts I have pointed out the extreme rapidity with which things have changed, particularly in the realm of morality, where, in a few short years, we have overturned religious taboos stretching back centuries or more. The vast majority of people have decided that this is fine, and, that in fact, as I already mentioned, it’s an improvement on our benighted past. But even if you don’t buy my argument about religions being antifragile I would hope you would still wonder, as I do, “But what if we’re wrong?”

This questions not only applies to morality, but technology saving us, the constant march of progress, politics, and a host of other issues. And I can’t help but think that people appear entirely too certain about the vast majority of these subjects.

In order bring up the possibility of wrongness, especially when you’re the ideological minority there has to be freedom of speech, another area I dive into from time to time in this space. Also you can’t talk about freedom of speech or the larger ideological battles around speech without getting into the topic of politics. A subject I’ll return to.

As I have already mentioned, and as you have no doubt noticed the political landscape has gotten pretty heated recently and there are no signs of it cooling down. I would argue, as others have, that this makes free speech and open dialogue more important than ever. In this endeavor I end up sharing a fair amount of overlap with the rationalist community. Which you must admit is interesting given the fact that this community clearly has a large number of atheists in it’s ranks. But that failing aside, I largely agree with much of what they say, which is why I link to Scott Alexander over at SlateStarCodex so often.

On the subject of free speech the rationalists and I are definitely in agreement. Eliezer Yudkowsky, an AI theorist, who I mentioned a lot in the last few posts, is also one of the deans of rationality and he had this to say about free speech:

There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.

I totally agree with this point, though I can see how some people might choose to define some of the terms more or less broadly, leading to significant differences in the actual implementation of the rule. Scott Alexander is one of those people, and he chooses to focus on the idea of the bullet, arguing that we should actually expand the prohibition beyond just literal bullets or even literal weapons. Changing the injunction to:

Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.

In essence he want’s to include anything that’s designed to silence the argument rather than answer it. And why is this important? Well if you’ve been following the news at all you’ll know that there has been a recent case where exactly this thing happened, and a bad argument got someone fired. (Assuming it even was a bad argument which might be a subject for another time.)

Which ties back into asking, “But what if we’re wrong?” Because unless we have a free and open marketplace of ideas where things can succeed and fail based on their merits, rather than whether they’re the flavor of the month, how are we ever going to know if we’re wrong? If you have any doubts as to whether the majority is always right then you should be incredibly fearful of any attempt to allow the majority to determine what gets said.

And this brings up another possible theme for the blog:

Providing counterarguments for bad arguments about technology, progress and religion.

Running through all of this, though most especially with the topic I just discussed, free speech, is politics. The primary free speech ground is political, but issues like morality and technology and fragility all play out at the political level as well.

I often joke that you know those two things that you’re not supposed to talk about? Religion and politics? Well I decided to create a blog where I discuss both. Leading me to yet another possible theme:

Religion and Politics from the perspective of a Mormon who thinks he’s smarter than he probably is.

Perhaps the final thread running through everything, is like most people I would like to be original, which is hard to do. The internet has given us a world where almost everything you can think of saying has been said already. (Though I’ve yet to find anyone making exactly the argument I make when it comes to Fermi’s Paradox and AI.) But there is another way to approximate originality and that is to say things that other people don’t dare to say, but which hopefully, are nevertheless true. Which is part of why I record under a pseudonym. So far the episode that most fits that description is the episode I did on LGBT youth and suicide, with particular attention paid to the LDS stand and role in that whole debate.

Going forward I’d like to do more of that. And it suggests yet another possible theme:

Saying what you haven’t thought of or have thought of but don’t dare to bring up.

In the end, the most accurate description of the blog is still, that I write about fringe ideas I happen to find interesting, but at least by this point you have a better idea of the kind of things I find interesting and if you find them interesting as well, I hope you’ll stick around. I don’t think I’ve ever mentioned it within an actual post, but on the right hand side of the blog there’s a link to sign up for my mailing list, and if you did find any of the things I talked about interesting, consider signing up.

Do you know what else interests me? Money. I know that’s horribly crass, and I probably shouldn’t have stated it so bluntly, but if you’d like to help me continue to write, consider donating, because money is an interesting thing which helps me look into other interesting things.

Straddling Optimism and Pessimism; Religion and Rationality

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3

One of the regular readers of this blog, who also happens to be an old friend of mine, is constantly getting after me for being too pessimistic. He’s more of an optimist than I am, and this optimism largely derives from his religious faith. Which happens to be basically the same as mine (we’re both LDS and very active). Despite this similarity, he’s optimistic and hopeful, and I’m gloomy and pessimistic. Or at least that’s what it looks like to him, and I’m sure there’s a certain amount of truth to that. I do have a tendency to immediately gravitate to the worst-case scenario, and an even greater tendency to use my pessimism to fuel my writing, but I don’t think I’m as pessimistic as my friend imagines or as one might assume just from reading my posts. I already explored this idea at some length in a previous post, (a post he was quick to compliment) but I think it’s time to revisit it from a different angle.

The previous post was more about whether my outward displays of pessimism reflected an inward cynicism that needed to be fixed, i.e. was I being called to repentance. (I think the answer I arrived at was, “Maybe.”) This post is more about what the blog is designed to do, who the audience is, and how writing in service of those two things is a lot like serving two masters (wait… Is that bad?) And therefore may not give an accurate impression of my core beliefs, beliefs which I’ll also get into. Yes, I’m writing a post about the blog’s mission nearly a year into things. Make of that what you will. Though I think we can all agree that occasionally it’s useful for a person to step back and figure out what they’re really trying to accomplish.

I think the briefest way to describe the purpose of this blog is that it’s designed to encourage antifragility. Hopefully you’re already familiar with this concept, and the ideas of Nassim Nicholas Taleb in general, but if not I wrote a post all about it. But if you don’t have the time to read it, in short, one way to think about antifragility is to view it as a methodology for benefitting from big positive rare events and protecting yourself against big negative rare events. In Taleb’s philosophy these are called black swans. And here we touch on the first area in which writing about a topic may give an incorrect view of my actual attitudes and opinions. In this instance, writing about black swans automatically makes them appear more likely than they actually are, or than I believe them to be. Black Swans are rare, and if I wrote about them only in proportion to their likelihood I would hardly ever mention them, but recall that a black swan, by definition, has gigantic consequences, which means they have an impact far out of proportion to their frequency. Thus, if you were to judge my topic choice and my pessimism just based on the rarity of these events, you would have to conclude that I spend too much time writing about them and that I’m excessively negative on top of that. But if I’m writing about black swans in proportion to their impact I think my frequency and negativity end up being a much better fit.

Of course writing about them, period, is only worthwhile if you can offer some ideas on how individuals can protect themselves from negative black swans. And this is another point where my writing diverges somewhat from my actual behavior, and where we get into the topic of religion. As a very religious person I truly believe that the best way to protect yourself from negative black swans is to have faith, keep the commandments, attend church, love your neighbor, and cleave to your wife/husband. But as long time readers of this blog know, while I don’t shy away from those topics, neither are they the focus of my writing either. Why is this? Because I think there are a lot of people already speaking on those topics and that they’re doing a far better job than I could ever do.

If there are already many people, from LDS General Authorities to C.S. Lewis who are doing a better job than I could ever do, in covering purely religious topics, I have to find some other way of communicating that plays to my strengths, without abandoning religion entirely. But just because I’m not going to try and compete with them directly doesn’t mean I can’t borrow some of their methodology, and one of the things that all of these individuals are great at is serving milk before meat. Or starting with stuff that’s easy to digest and then once someone can swallow that, moving on to the tougher, chewier, but ultimately tastier stuff. and in considering this it occurred to me that what’s milk to one person may be meat to another. As an example, if you have a son, as I do, who is nearly allergic to vegetables (or so he likes to claim). And you want him to eat more vegetables, you wouldn’t start out with brussel sprouts or spinach.  You’d start with corn on the cob soaked in butter and liberally seasoned with salt and pepper. On the opposite side of the equation if someone were to decide, after many years, that they are done being a vegetarian, you wouldn’t introduce them to meat by serving them chicken hearts or liver.

In a like fashion, there are, in this world, many people who already believe in God. And for those people starting with faith, repentance, and baptism is a natural milk, before moving to the meat of chastity, tithing and the Word of Wisdom. There are however other people who think that rationality, rather than faith, is the key to understanding the world. With these people, it is my hope, that survival is the milk. Because if you can’t survive, you can’t do anything else, however rational you are in all other respects. And then, once we agree on that, we can move on to the meat of black swans, technological fragility, and what religion has to say about singularities.

It should be mentioned that before we leave the topic of “milk before meat,” that it’s actually got something of a bad reputation in the rationalist community (to say nothing of the ex-mormon community). They view it as a Mormon variant of a bait and switch, where we get you into the Church with the promise of three hour meetings on Sunday, paying 10% of your income to the church, giving up all extramarital sex, along with booze, drugs and cigarettes (recall, that you have to agree to all of this before you can even be baptized.) And then I guess only after that do we hit you with the fact that you might have to one day be the Bishop or the Relief Society President? Actually I’m not clear what the switch is in this scenario. I think all of the hard things about Mormonism are revealed right at the beginning. Also I’m not quite sure why they take issue with the idea of starting with the easier stuff. We literally do give children milk before meat; we teach algebra before calculus; and don’t even get me started on sex ed. In other words this is one of those times when I think the lady doth protest too much.

Moving on… Choosing a different audience and a different approach does not mean that I am personally any less devoted to the faith and hope inherent in my religion. And that hope comes with a fair amount of optimism. Certainly there are people more optimistic than me, but I am optimistic enough that I have no doubt that things will work out eventually. The problem is the “eventually,” I don’t know when that will be, and until that time comes, we still have to deal with competing ideologies, with different ways for arriving at truth, and with the world as it exists, not as we would like it to be. Also if we’re only able to talk to other Christians (and often not even to them) then we’re excluding a large and growing segment of the population.

But it doesn’t have to be this way, and much of the motivation for this blog came from seeing areas of surprising overlap between technology and religion, particularly at the more speculative edge of technology. As an example, look at the subject of immortality. In this area the religious have had a plan, and have been following it for centuries. They know what they need to do, and while everyone is not always as successful as they could be in doing what they should, the path forward is pretty clear. They have a very specific plan for their life which happens to include the possibility of living forever. Some may think this plan is silly, and that it won’t work, but the religious do have a plan. And, up until very recently, the religious plan was the only game in town. Which doesn’t mean that everyone bought into it, but, as I mentioned in a previous post, If you were really looking for an existence beyond this one that involved more than just memories, then it was the only option.

Obviously not everyone bought into the plan, people have been rejecting the religion for almost as long as it’s been in existence. But it’s only recently that there has been any hope for an alternative, for immortality outside of divine intervention. Some people hope to achieve this through cryonic suspension, e.g.freezing their body after death in the hopes of revival later. Some people hope to achieve this by digitizing their brain, or recording all of their experiences so that the recordings can be used to reconstruct their consciousness once they’re dead. Other people just hope that we’ll figure out how to stop aging.

These different concepts of immortality represent an area of competition between technology and religion, but the fact that both sides are talking about immortality is, I would opine, a neglected area we see the overlap I mentioned. Previously only the religious talked about immortality and now transhumanists, are talking about it as well. When presented with this fact, most people focus on the competition and use it as another excuse to abandon religion. But there are a few who recognize the overlap, and the surprising consequences that might entail. Certainly the Mormon Transhumanist Association is in this category and that’s one of the things I admire about them.

To take it a little farther, if we imagine that there are some people who just want a chance at immortality, and they don’t care how they get it, then previously these people would have had no other option than religion. Whether religion is effective, given such a selfish motivation, is beyond the scope of this post though I did touch on it in a previous post. But in any event it doesn’t matter because, here, we’re not concerned with whether it’s a good idea, we’re concerned with whether such a group of people exists and whether, given the promise of technological immortality, how many have, so to speak, switched sides.

I’m not sure how many people this group represents. Also I’m sure the motivations of most religious individuals are far more complicated than just a single-minded quest for immortality. But you can certainly imagine that the promise of immortality through technology might be enough to take someone who would have been religious in an earlier age and convince them to seek immortality through technology instead. If there are people in this category, it’s unlikely that much is being written specifically with them in mind. All of this is not to say that my blog is targeted at “people who yearn for immortality, but think technology is currently a better bet than religion,” a group that has to be pretty small regardless of the initial assumptions. But this is certainly an example, albeit an extreme one, of the ways in which technology overlaps not only the practice of religion, but also the ideology, morals and even philosophy.

It’s easy to view technology as completely separate from religion, and maybe at one point it was, but as we get closer to developing the technology to genetically alter ourselves and our descendants, eliminate the need for work, or create artificial Gods (and recall we already have the technology to destroy the world) then suddenly technology is very much encroaching on areas which have previously been the sole domain of religion. And taking a moment to examine whether religion might have some insights into these issues before we discard it is, I believe, a worthwhile endeavor. This is where, by straddling the two, I hope to cover some ground the General Authorities and people like C.S. Lewis have missed.

Interestingly, this is where religion ends up providing both the source of my pessimism as well as the source of my optimism. I have already mentioned how faith in God is a source of limitless hope, but on the other hand it also provides a framework for understanding how prideful technology has made us, and how quick we have been to discard the lessons of both history and religion. We are faced with a situation where people are not merely ignoring the morality of religion, they are in many cases charting a course in the opposite direction. In this case, what other response is there than pessimism?

Of course, I should have mentioned this earlier (both in this post and in the blog as a whole): you have probably guessed that my name is not actually Jeremiah, that it’s a pseudonym I adopted for the purposes of this blog. Not only because I took the theme from the book of Jeremiah but also because I think there are some parallels between the doom he could see coming and many potential dooms we face. I assume that Jeremiah had faith, I assume that he figured it would all eventually work out for him, but that doesn’t mean that he wasn’t pessimistic about the world around him, enough so that we still use the word jeremiad to mean a long, mournful complaint. And I think he was onto something. I know it’s common these days to declare that we just need to be optimistic and love people regardless of what they’re doing. But I’m inclined to think a pessimistic approach which is closer to Jeremiah’s might actually produce better results. And this is where we return to antifragility, which is another area of overlap between religion and technology, though probably less clear than the immortality overlap we talked about (which is why I started with it).

The great thing about striving to be antifragile is that it’s a fantastic plan regardless of whether you’re religious or not. As I mentioned earlier, my hope is that survival may provide a useful entry point, the milk so to speak, even for people who aren’t religious. In particular I think self-identified rationalists place too much weight on being right in the short term and not enough weight on surviving in the long term, and long-term survival is a strength of both antifragility specifically and religion generally. Obviously we don’t have the time to get into a complete dissection of how rationalists neglect the long term, and I have definitely seen some articles from that side of things that did an admirable job of tackling the potential of future catastrophe. Perhaps it’s more accurate to state that, whatever their consideration for the long term, religion does not factor into it at all.

But religion is important here for at least three reasons. First, as I said in a previous post, even if there is no God, the taboos and commandments of religion are the accumulated knowledge about how to be antifragile. Second, religion is one of the best ways we have for creating resilient social structures going forward. Which is to say, who’s better at recovering from disaster? The rationalists in San Francisco or the Mormons in Utah? Finally, if there is a God, being religious gives you access to the ultimate antifragility: eternal life. Obviously this final point is the most controversial of all, and you’re free to dismiss it (though you might want to read my Pascal’s Wager post before you do). But, with all of this, are you really sure that religion has no value in our modern, technological world? To return to the main theme of this post, I think people underestimate the value that comes from straddling the two worlds.

The problem with all of this is that, in trying to speak on these subjects, the minute you bring in religion and God many people are going to tune out entirely. Thus, despite this being an emphatically LDS blog, I don’t spend as much time speaking about religion as perhaps you might expect. In part this is because I honestly think you can get to most of the places I want to go without relying on deus ex machina. Believing in God does make everything easier to a certain extent (across all facets of life) but what if you don’t believe in God? Does that mean that you can throw out religion in its entirety, root and branch? I know people want to dismiss religion as a useless or even harmful relic of the past, but is that really a rational point of view? Is it really rational to take the position that countless hours, untold resources, and millions of lives were wasted on something that brought no benefit to our ancestors? Or worse, caused harm? If this is your position then I think it’s obvious that the burden of proof rests with you.

There is a God in Heaven. And so I have all the optimism in the world. But, when so-called rationalists mock thousands of years of wisdom, then I’m also a huge pessimist. To use another quote from Shakespeare, remember: “There are more things in heaven and earth… than are dreamt of in your philosophy.”

I think it’s obvious that whether you’re an optimist or a pessimist, religious or rational (or ideally both), we’re basically on the same page. So why not donate?

Time Preference and the Survival of Civilizations


In my ongoing quest to catch up on those topics I promised to revisit someday but never have, in this post I’m turning my attention to a statement I made all the way back in July of last year. (As I said, I’ve been negligent about keeping my promises.) Back then, as an aside on the topic of taboos, I said:

Of course this takes us down another rabbit hole of assuming that the survival of a civilization is the primary goal, as opposed to liberty or safety or happiness, etc. And we will definitely explore that in a future post, but for now, let it suffice to say that a civilization which can’t survive, can’t do much of anything else.

Well, this is that future post and it’s time to talk about Civilization! With a capital C! And no, not the classic Sid Meier’s game of the same name. Though that is a great game.

To begin with, though, in timing that can only be evidence of the righteousness of my cause (that’s sarcasm, by the way), I recently listened to several interesting podcasts that directly tied into this topic. (By the way, you all know that you can get this blog as a podcast, right?) The first was a podcast titled Here Are The Signs That A Civilization Is About To Collapse. I confess it wasn’t as comprehensive as I had hoped, but their guest, Arthur Demarest, brought up some very interesting points. If he had had a book on civilizational collapse I would have bought it in a heartbeat, but it appears that his books are all academically oriented and mostly focused on the Mayans. In any case here are some of the points that dovetail well with things I have already talked about.

  1. Civilization allows increasing complexity and connectivity, resulting in increased efficiency. But this connectivity and complexity increases the fragility of the system. Demarest gave the example of a slowdown in China causing pizza parlors to close in Chile.
  2. This complexity also leads to increased maintenance costs, and overhead. And eventually maintenance expands to the point where there’s very little room for innovation and no flexibility to unwind any of the complexity.
  3. When civilizations get in trouble they often end up doubling down on whatever got them in trouble in the first place. Demarest gives the example of the Mayans who built ever more elaborate temples as collapse threatened, in an effort to prop up the rulers.
  4. A civilization’s strength can often end up being the cause of its downfall.
  5. As things intensify thinking becomes more and more short term.
  6. Observations that the current period is the greatest ever often act as a warning that the civilization has already peaked, and the collapse is in progress.

As you may notice, we already check most if not all of these boxes, and I’ve already talked about all of them in one form or another. But more importantly, what he also points out, and what should be obvious, is that all civilizations collapse. Now you may argue that all we can say for sure is that every previous civilization has collapsed; ours may be different. This is indeed possible. But I think, for a variety of reasons which I mention again and again, that it’s safer to assume that we aren’t different. If we do make this completely reasonable and cautious assumption, then the only questions which remain are: when is the current civilization going to collapse, and is there anything we can do to extend its life?

I mentioned that I had listened, coincidentally, and by virtue of the righteousness of my cause (once again, sarcasm), to several podcasts which spoke to this issue. The second of these podcasts was Dan Carlin’s Common Sense. In his most recent episode he spent the first half of the program talking about the increasing hostility that exists between the two halves of the country, and specifically the hostility between the Antifas (short for anti-fascists) and the hardcore Trump supporters. Carlin mentioned videos of violence which has been erupting at demonstrations and counter-demonstrations all over the country. I would link to some of these videos, but it’s hard to find any that aren’t edited in a nakedly partisan fashion by one side or the other. But they’re easy enough to find if you do a search.

This is not a new phenomenon; we’ve had violence since election day, and I already spent an entire post talking about it. But Carlin frames things in an interesting way. He asks us to imagine that we were elected president, and that our only goal was to heal the divisions that exist in the country. How would we do it? What policy would we implement that would bring the country back together again?

Carlin accurately points out that there’s not some anti-racist policy you could pass that would suddenly make everything all better. In fact it could be argued that we already have lots of anti-racist policies and that, rather than helping, they might be making things worse. In my previous post I pushed for greater federalism, which is less a policy than a roll-back of a lot of previous policies. But as Carlin points out, this is probably infeasible. First, because that’s just not how government works: governments don’t ever voluntarily become less powerful. And second, there’s not a lot of support for the idea even if the government were predisposed to let it happen.

Carlin spends the second half of the podcast talking about the Syrian missile strike. And in a common theme, this discussion flows into his criticism of the ever-expanding power of the executive. As you probably all know, only Congress has the power to declare war, and it last used that power in 1942, when it declared war on Bulgaria, Hungary and Romania. Since then it hasn’t used that power, though generally the President still seeks congressional approval for military action, what Carlin calls the fig leaf. He points out that Trump didn’t even do that. These days, if someone dares to mention that this all might be unconstitutional, they are viewed as being very much on the fringe. But Carlin, like me, is grateful when people bring it up, because at least it’s being talked about.

As I said, executive overreach and expansion is a common theme for Carlin, and one of the points he always returns to is that whatever tools you give your guy when he’s President are going to be used by the other side when they eventually get the presidency back. And this touches on the central idea that I want to explore, the idea that unites the two halves of Carlin’s podcast: short-term thinking. Both the current political crisis and the expansion of the presidency are examples of this short-term thinking, and exactly the kind of thing that Demarest was talking about when he described historical civilizations which have collapsed.

As an extreme example of what I mean let me turn to one final recent podcast, the episode on Nukes from Radiolab. In the episode they examine the nuclear chain of command to determine if there are any checks on the ability of a US President to unilaterally launch a nuclear strike. That is, launch a nuclear strike without getting anyone else’s permission. And the depressing conclusion they come to is that there are effectively no checks. This is not to say that someone couldn’t disobey the order in that situation, but it’s hard to imagine such insubordination would hit 100%. In other words if Trump really wants to launch an ICBM, ICBMs will be launched.

But, for me, this is an issue which goes beyond Trump; it’s scary basically regardless of who’s president. It’s also a classic example of short-term thinking. At some point it became clear that in the event of a Soviet first strike there would be no time for a committee to assemble or multiple people to be called, and in that moment, based on this very narrow scenario, it was decided that sole control of the nuclear arsenal would be given to the President. If I remember the episode correctly, this policy really firmed up during the Kennedy administration (and if you couldn’t trust Kennedy, who could you trust?)

One could potentially understand this rationale for investing all of the power in the President, even if you don’t agree with it. But no thought was given to what should be done if the Cold War ever ended, and indeed, when it did end, nothing changed. No effort was even made to restrict this control to just the scenario of responding to a Soviet first strike. As it stands, the President can launch missiles entirely at his discretion and for any reason whatsoever.

One would think that if Trump is as dangerous and unstable as people claim, they would be doing everything in their power to limit his ability to unilaterally start a nuclear war. That, at a minimum, they would limit the President’s authority over nuclear weapons so that it applied only in situations where another country attacked us first. (I’m not sure how broad to make the standard of proof in this case, but even if it was fairly expansive we’d still be in a much better position than we are now.) Instead, as of this writing, such a concern is nowhere to be found; rather, the headlines are about another GOP stab at a health bill, how much the FBI director may have influenced the election, or the sentencing of a woman who laughed at Jeff Sessions (the Attorney General).

Perhaps all of these issues will end up being of long-term importance. Though that seems unlikely, particularly for the story about the protestor laughing at Sessions; even the story about the FBI director concerns something that already happened, and is therefore essentially unchangeable. It’s even harder to imagine how any of the issues currently in the news have more long-term importance than the President’s singular control of the nuclear arsenal. And that’s just one example of long-term dangers being overwhelmed by short-term worries.

You might argue at this point that the stories I mentioned are not unique to this moment in history, that people have been focused on their immediate needs and wants to the exclusion of longer-term concerns for hundreds if not thousands of years. I don’t agree with this argument; I do think it has been different historically. As a counterexample I offer up the American Civil War, where the focus may have been almost too long-term. But even if I’m wrong, and historically people were every bit as short-term in their outlook as they are now, the stakes today are astronomically greater.

I wanted to focus on short-term thinking because it all builds up to my favorite definition of what civilization is. You may have noticed that we’ve come all this way without even clearly defining what we’re talking about, and I want to rectify that. Civilization is nothing more or less than low time preference. What’s time preference? It’s the amount of weight you give to something happening now versus in the future. As the term is commonly used it mostly relates to economics: how much more valuable is $1000 today than $1000 in a month or a year? If $1000 today is the same as $1000 in three months then you have a time preference of zero. If you’re a loan shark and you want someone to pay you $2000 next week in exchange for $1000 today then you have a very high time preference, and are consequently engaging in what may be described as an uncivilized transaction, or at least a low-trust transaction. But of course trust is a big part of civilization.
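Since time preference is usually formalized as a discount rate, here’s a minimal sketch of the two examples above in code. (This is my own illustration; the function and the numbers are assumptions for demonstration, not anything from the post or the podcasts it discusses.)

```python
# Time preference expressed as a per-period discount rate:
# the present value of a future payment shrinks as the rate grows.

def present_value(amount: float, rate: float, periods: float) -> float:
    """Value today of `amount` received `periods` from now,
    discounted at `rate` per period."""
    return amount / (1 + rate) ** periods

# Zero time preference: $1000 in three months is worth exactly $1000 today.
assert present_value(1000, 0.0, 0.25) == 1000.0

# The loan shark: $2000 next week for $1000 today implies a 100% weekly
# discount rate. Compounded over 52 weeks that rate is astronomical,
# which is one way to see why it counts as an "uncivilized" transaction.
weekly_rate = 2000 / 1000 - 1            # 1.0, i.e. 100% per week
annualized = (1 + weekly_rate) ** 52 - 1  # (1 + 1.0)**52 - 1
print(f"annualized rate: {annualized:.2e}")
```

A zero rate treats now and later identically; the loan shark’s implied annualized rate is on the order of 10^15, which is the quantitative version of "no trust in the future at all."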

Outside of economics, having a low time preference allows people to plan for the future, to build infrastructure, to establish institutions and perhaps most importantly to rely on the operation of the law, having faith that it’s not important to get justice right this second if you will get justice eventually. Perhaps you can see why I worry about what’s happening right now.

On the other hand, it can easily be seen that corruption, the cancer of civilization, is a high time preference activity: people would rather get a bribe right now, because they have no trust in what the future will bring. When people talk about institutions, the rule of law, societal trust, and even the absence of violence, they’re talking about low time preference. And let’s all agree right now that it’s a little bit confusing for “high” to be bad and “low” to be good.

Everything I’ve said so far is necessary to show that short-termism isn’t a symptom of the decline of civilization, it IS the decline of civilization. But of course things can look fine for quite a while, because of the low time preference which existed up until this point. Meaning that those who came before us invested a lot in the future (because of their low time preference) and we can reap the benefits of those investments for a long time before it finally catches up to us.

Way back in the beginning of this post I stated that if you assume our civilization is going to eventually collapse, then the only questions we’re left with are: when, and is there any way to delay that collapse? I think I’ve already answered the question of “when?” (Not immediately, but sooner than most people think.) And now we need to look at the question, “What can we do to slow it down?” A simple, but somewhat impractical, answer would be to lower our time preference. But as you can imagine, this exhortation is unlikely to appear on a protest sign any time soon. (Perhaps I’ll try it out if we ever have a demonstration in Salt Lake City.) But, if we can’t get people to lower their time preference directly, perhaps we can do it indirectly.

If you were to use the term sacrifice in place of low time preference, you would not be far from the mark. And restating the entire problem as, “We need greater sacrifice,” is something people understand, and it also just might make a good protest sign. But stating the solution this way makes the scope of the problem all the more apparent. Because the last thing any of the people who are currently angry want to be told is that they need to sacrifice more.

It is, as far as I can tell, the exact opposite. All of the interested parties, left and right, rich and poor, minority and non, citizen and immigrant, feel that they have sacrificed enough, that now is the time for them to “get what they deserve.” Obviously not every poor person or every minority feels this way, but those who do feel this way are the ones who are out on the streets. And once again it all comes back to time preference. No one wants to wait 10 years for something. No one is content to see their children finally get the rights they’ve been protesting for (if they even have children) and no one wants to wait four years for the next election.

All of this is not to say that people are entirely unwilling to sacrifice. People make sacrifices all the time for the things they want. But what I’m calling for, if we want to postpone collapse, is sacrifice specifically for civilization, which is, I admit, a fairly nebulous endeavor. But I think it starts with identifying what civilization is, and how it’s imperiled. Which is, in part, the point of this post. (In fact, I firmly expect all protesting and unrest to stop once it’s released.)

Joking aside, I fear there is no simple solution even if you have managed to identify the problem, and it may in fact be that there is nothing we can do to delay the end at this point. To return to Carlin’s question about the sorts of policies you might implement if you were made President and your one goal was to heal the country: I do think that creating some shared struggle we could all sacrifice for would be a good plan, as good as any, and maybe even the best plan, which is not to say that it would succeed. And this hypothetical still relies on getting someone like that elected, which is also not something that seems very likely. In other words things may already be too far gone.

One of my biggest reasons for pessimism is that I don’t think people see any connection between the unrest we’re currently experiencing (both here and abroad) and the weakening of civilization, and more specifically the country. But there are really only three possibilities: the massive anger which exists can either strengthen the country, weaken it, or have no effect. If you think it’s making the country stronger (or even having no effect) I’d love to hear your reasoning. But I think any sober assessment would have to conclude that it can’t be strengthening it, and it can’t be having no effect; therefore it must be weakening it. Leaving only the question of by how much.

None of this is to claim that anger about Trump or alternatively support for Trump (or any of the other issues) will single-handedly bring down the country. But it’s all part of a generalized trend towards higher and higher time preference. Towards wanting justice and change right now. And I understand, of course, that the differences of opinion which have split the country are real and consequential. But what is the end game? What is the presidential policy that will make it all better? What are people willing to sacrifice? To repeat a quote I used in a previous post from Oliver Wendell Holmes:

Between two groups that want to make inconsistent kinds of world I see no remedy but force.

It’s a dangerous road we’re on, and I would argue that as thinking gets more and more short-term, the survival of civilization is at stake. And it’s at stake precisely because long-term thinking and planning is what civilization is.

To come back to the assertion that started this all off, the assertion that I promised to return to: a civilization which can’t survive can’t do much of anything else. Of course at one level this is just a tautology. But at another level it ends up being a question of whether certain things can exist together. Can Trump supporters and Trump opponents live in the same country? Can a country give you everything you think you deserve right now, and yet still be solvent in 100 years? Can you have a system which is really good at reducing violence (as Pinker points out) but never abuses its power?

It’s entirely possible that the answer to all of those questions is yes. And I hope that’s the case. I hope that my worries are premature. I hope that, similar to the unrest in the late 60s/early 70s, things will peak and then dissipate. That it will happen without a Kent State shooting, or worse. But I also know that civilization takes sacrifice, it takes compromise, and, however unsexy and dorky this sounds, it takes a low time preference.

You may have considered donating, but never gotten around to it. Perhaps because you have a low time preference and you assume that a dollar someday is as good as a dollar now. Well, on this one issue I have a very high time preference, so consider donating now.

Why I Hope the MTA Is Right, but Also Why It’s Safer to Assume They’re Not


Last week’s post was titled Building the Tower of Babel, and it was written as a critique of the position and views of the Mormon Transhumanist Association (MTA). Specifically it was directed at an article written by Lincoln Cannon titled Ethical Progress is not Babel. In response to my post Cannon came by and we engaged in an extended discussion in the comments section. If you’re interested in seeing that back and forth, I would recommend that you check them out. Particularly if you’re interested in seeing Cannon’s defense of the MTA. (And what open-minded person wouldn’t be?)

I was grateful Cannon stopped by for several reasons. First, I was worried about misrepresenting the MTA, and indeed it’s clear that I didn’t emphasize enough that, for the MTA, technology is just one of many means to bring about salvation, and in their view insufficient by itself. Second, a two-sided discussion of the issues is generally going to be more informative and more rigorous than a one-sided monologue. And third, because I honestly wasn’t sure what to do with the post, or with the MTA in general. Allow me to explain.

In a previous post I put people into three categories: the Radical Humanists, the Disengaged Middle and the Actively Religious. And in that post I said I had more sympathy for, and felt more connected to, the Radical Humanists than the Disengaged Middle. The MTA is almost unique in being part of both the Radical Humanist group and the Actively Religious. Consequently I should be very favorably disposed to them, and I am, but that doesn’t mean that I think they’re right, though if it were completely up to me I’d want them to be right. This is the difficulty. On the one hand, I think there are a lot of issues where we agree, and on those issues both of us (but especially me) need all the allies we can get. On the other hand, I think they’re engaged in a particularly seductive and subtle form of heresy. (That may be too strong a word.) And I am well-positioned to act as a defender of the Mormon Orthodoxy against this, let’s say, mild heresy. And it should go without saying that I could be wrong about this. Which is one of the reasons why I think you should go read the discussion in the comments of the last post and decide for yourself.

Perhaps a metaphor might help to illuminate how I see and relate to the MTA. Imagine that you and your brother both dream of selling chocolate covered asparagus. So one day the two of you decide to start a business doing just that. As your business gets going your father offers you a lot of advice. His advice is wise and insightful and by following it your business gradually grows to the point where it’s a regional success story. But at some point your father dies.

Initially this doesn’t really change anything, but eventually you and your brother are faced with a business decision where you don’t see eye to eye, and your father isn’t around anymore. Let’s say the two of you are approached by someone offering to invest a lot of money in the business. You think the guy is shady, and additionally that once he’s part owner he may change the chocolate covered asparagus business in ways that would damage it, alter it into something unrecognizable or potentially even destroy it. Perhaps he might make you switch to lower quality chocolate, or perhaps he wants to branch into chocolate covered broccoli. (Which is just insane.) Regardless, you don’t trust him or his motives.

On the other hand, your brother thinks it’s a great opportunity to really expand the chocolate covered asparagus business from being a regional player into a worldwide concern. In the past your father might have settled the dispute, but he’s gone, and as the two of you look back on his copious advice you can both find statements which seem to support your side in the dispute. And, not only that, both of you feel that the other person is emphasizing some elements of your father’s advice while ignoring other parts. In any event you’re adamant that you don’t want this guy as an investor and part owner, and your brother is equally adamant that it’s a tremendous opportunity and the only way your chocolate covered asparagus business is really going to be successful.

None of this means that you don’t still love your brother, or that either of you is any less committed to the vision of chocolate covered asparagus. Or that either of you is less respectful of your late father. But these commonalities do nothing to resolve the conflict. You still feel that this new investor may destroy the chocolate covered asparagus business, while your brother feels that the investor is going to provide the money necessary to make it a huge success. And perhaps most interesting of all, if you could just choose the eventual outcome of the decision, you would choose your brother’s expected outcome. You would choose for the investment to be successful, and for chocolate covered asparagus to fill the world, bringing peace and prosperity in its wake.

But you can’t choose one future over another; you can’t know what will happen when you take on the investor. And in your mind it’s better to preserve the company you have than risk losing it all on an unclear bet and a potentially unreliable partner.

Okay, that metaphor ended up being longer than I initially planned, and, as with all metaphors, it’s not perfect. But hopefully it gives you some sense of the spirit in which I’m critiquing the MTA. And perhaps the metaphor also helps explain why there are many ways in which I hope the MTA is right and I’m wrong. Finally, I hope it also provides a framework for my conclusion that the best course of action is to assume that they’re not right. But let’s start by examining a couple of areas where I definitely hope they are correct.

The first and largest area where I hope the MTA is right and I’m wrong is war and violence. There is significant evidence that humans are getting less violent. The best book on the subject is The Better Angels of Our Nature by Steven Pinker, which I reviewed in a previous post. As I mentioned in that post, I do agree that there has been a short-term trend of less violence, and also a definite decrease in the number of deaths due to war. This dovetails nicely with the MTA’s assertion that humanity’s morality is increasing at the same rate as its technology, and given these trends, there is certainly ample reason to be optimistic. But this is where the Mormon part of the MTA comes into play. While it’s certainly reasonable for Pinker and secular transhumanists to be optimistic about the future, for Mormons and Christians in general there is the little matter of Armageddon. Or as it’s described in one of my favorite scriptures, Doctrine and Covenants Section 87, verse 6:

And thus, with the sword and by bloodshed the inhabitants of the earth shall mourn; and with famine, and plague, and earthquake, and the thunder of heaven, and the fierce and vivid lightning also, shall the inhabitants of the earth be made to feel the wrath, and indignation, and chastening hand of an Almighty God, until the consumption decreed hath made a full end of all nations;

I assume that the MTA has an explanation for this scripture different from mine, but I’m having a hard time finding anything specific online. If I had to guess, I imagine they would say that it has already happened. But in any case, they have to have an alternative explanation, because if we assume that the situation described above has yet to arrive, then the MTA will have at least two problems. First, the trend of decreasing violence and increasing morality will have definitely ended, and second, I think it’s safe to assume that if we have to pass through the “full end of all nations,” what comes out on the other side won’t bear any resemblance to the utopian transhumanist vision of the MTA. Again, I hope they’re right, and I hope I’m wrong. I hope that scripture has somehow already been fulfilled, or that I’m completely misinterpreting it. I hope that humanity is more peaceful than I think, rather than less. But just because I want something to be a certain way doesn’t mean that’s how it’s actually going to play out.

For our second area, let’s take a look at genetic engineering. Just today I was listening to the Radiolab podcast, specifically the most recent episode, which was an update to an older episode exploring a technology called CRISPR. If you’re not familiar with it, CRISPR is a cheap and easy technology for editing DNA, and the possibilities for its use are nearly endless. The most benign and least controversial application of CRISPR would be using it to eliminate genetic diseases like hemophilia (something they’re already testing in mice). From this we move on to more questionable uses, like using CRISPR to add beneficial traits to human embryos (very similar to the movie Gattaca). Another questionable application would involve using CRISPR to edit some small portion of a species and then, taking advantage of another technique called Gene Drive, use the initially modified individuals to spread the edited genes to the rest of the species. An example of this would be modifying mosquitoes so that they no longer carry malaria. But it’s easy to imagine how this might cause unforeseen problems, or how the technique could be used in the service of other, less savory goals. I’ll allow you a second to imagine some of the nightmare scenarios this technique makes available to future evil geniuses.

CRISPR is exactly the sort of technology the MTA and other transhumanists have been looking forward to. It’s not hard to see how the cheap and easy editing of DNA makes it easier to achieve things like immortality and greater intelligence. But, as I already pointed out, even positive uses of CRISPR have been controversial. According to the Radiolab podcast, the majority of bioethicists are opposed to using CRISPR to add beneficial traits to human embryos. (Which hasn’t stopped China from experimenting with it.)

As far as I understand it, the MTA’s position on all of this is that it’s going to be great, and that the bioethicists worry too much. This attitude stems from their aforementioned belief that morality and technology are advancing together, which means that by the time we master a technology we will also have developed the morality to handle it. As it turns out, DNA editing is another area of agreement between the MTA and Steven Pinker, who said the following:

Biomedical research, then, promises vast increases in life, health, and flourishing. Just imagine how much happier you would be if a prematurely deceased loved one were alive, or a debilitated one were vigorous — and multiply that good by several billion, in perpetuity. Given this potential bonanza, the primary moral goal for today’s bioethics can be summarized in a single sentence.

Get out of the way.

A truly ethical bioethics should not bog down research in red tape, moratoria, or threats of prosecution based on nebulous but sweeping principles such as “dignity,” “sacredness,” or “social justice.” Nor should it thwart research that has likely benefits now or in the near future by sowing panic about speculative harms in the distant future. These include perverse analogies with nuclear weapons and Nazi atrocities, science-fiction dystopias like “Brave New World” and “Gattaca,” and freak-show scenarios like armies of cloned Hitlers, people selling their eyeballs on eBay, or warehouses of zombies to supply people with spare organs. Of course, individuals must be protected from identifiable harm, but we already have ample safeguards for the safety and informed consent of patients and research subjects.

Given this description, perhaps you can see why I hope the MTA and Pinker are right. I hope that CRISPR and other similar technologies do yield a better life for billions. I hope that humanity is mature enough to deal with the technology, and that it’s just as cool, and as transformative, as they predict. That the worries of the bioethicists concerning CRISPR and the warnings of scripture concerning war turn out to be overblown. That the future really is as awesome as they say it’s going to be. Wouldn’t it be nice if it were true?

But perhaps, like me, you don’t think it is. Or perhaps you’re just not sure. Or maybe, despite my amazing rhetoric and ironclad logic, you still think that the MTA is right and I’m wrong. The key thing, as always, is that we can’t know. We can’t predict the future; we can’t know for sure who is right and who is wrong. To be honest, I think the evidence is in my favor, but even so, let’s set that aside for the moment and examine the consequences of being wrong from either side.

If I’m wrong, and the MTA is correct, then my suffering will be minimal. Sure, the transhumanist overlords will dredge up my old blog posts and use them to make me look foolish. Perhaps I’ll be included in a hall of fame of people who made monumentally bad predictions. But I’ll be too busy living to 150, enjoying a post-scarcity society, and playing amazingly realistic video games to take any notice of their taunting.

On the other hand, if I’m right and the MTA is wrong, then the suffering of those who were unprepared could be extreme. Take any of the things mentioned in D&C 87:6 and it’s clear that even a little preparation in advance could make a world of difference. I’m not necessarily advocating that we all drop everything and build fallout shelters; I’m talking about the fundamental asymmetry of the situation, which is to say that the consequences of being wrong are much worse in one situation than in the other.

The positions of the MTA, the transhumanists, and Pinker are asymmetrical in several ways. First is the way I already mentioned, which is inherent in the nature of extreme negative events, or black swans as we like to call them. If you’re prepared for a black swan it only has to happen once to make all the preparation worthwhile, but if you’re not prepared then it has to NEVER happen. To use an example from a previous post, imagine that I predicted a nuclear war, and that I had moved to a remote place, built a fallout shelter, and stocked it with shelf after shelf of canned goods. Every year I predict a nuclear war and every year people mock me, because year after year I’m wrong. Until one year, I’m not. At that point, it doesn’t matter how many times I was the crazy guy from Wyoming and everyone else was the sane defender of modernity and progress, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
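For those who like to see the asymmetry in numbers, here’s a toy expected-cost calculation. Every figure in it is invented purely for illustration (the prep cost, the catastrophe cost, the 2% yearly probability); only the structure of the argument matters:

```python
# Toy illustration of black swan payoff asymmetry.
# All numbers are made up; only the shape of the comparison matters.

YEARS = 50
PREP_COST_PER_YEAR = 1     # small, steady cost of preparing (the "mockery tax")
CATASTROPHE_COST = 1000    # huge, one-time cost if caught unprepared
P_CATASTROPHE = 0.02       # assumed 2% chance of catastrophe in any given year

def expected_cost(prepared: bool, years: int = YEARS) -> float:
    """Expected cumulative cost over `years`."""
    if prepared:
        # Preparation caps the downside: you pay a small cost every year,
        # and the catastrophe (if it comes) is absorbed.
        return PREP_COST_PER_YEAR * years
    # Unprepared: the expected loss is the per-year probability times the
    # catastrophe cost, accumulated over the whole period.
    return P_CATASTROPHE * CATASTROPHE_COST * years

print(expected_cost(prepared=True))   # prepared: 50
print(expected_cost(prepared=False))  # unprepared: 1000.0
```

Under these made-up numbers the prepper looks wrong in roughly 49 years out of 50, yet his expected cost is a twentieth of the skeptic’s, because the rare event is so much larger than the recurring cost of guarding against it.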

The second way in which their position is asymmetrical is the number of people who have to be “good.” CRISPR is easy enough, cheap enough, and powerful enough that a small group of people could inflict untold damage. The same goes for violence due to war. It’s not enough for the US and Russia to not get into a war. China, Pakistan, North Korea, Israel, France, Japan, Taiwan, India, Brazil, Vietnam, Ukraine, and on and on, all have to behave as well. The point is that even if you are impressed with modern standards of morality (which, by the way, I’m not), if only 1% of people decide to be really bad, it doesn’t matter how good the other 99% are.

The final asymmetry is that of time. A large part of the transhumanist vision came about because we’re in a very peaceful time where technology is advancing very quickly. Thus the transhumanists came into being during a brief period where it seems obvious that things are going to continue getting better. But they seem to largely ignore the possibility that in 100 years an enormous number of things might have changed. The US might no longer exist, perhaps democracy itself will be rare, we could hit a technological plateau, and of course we’ll have to go that entire time without any of the black swans I already mentioned. No large-scale nuclear wars, no horrible abuses of DNA editing, and no other extreme negative events which might derail our current rate of progress and our current level of peace.

As my final point, in addition to the two things I hope the MTA is right about, I’m going to add one thing which I hope they’re not right about. To introduce the subject I’d like to reference a series of books I just started reading: the Culture series by Iain M. Banks, named after the civilization at the core of all the books. Wikipedia describes the Culture as a utopian, post-scarcity, space communist society of humanoids, aliens, and very advanced artificial intelligences. We find out additionally that its citizens can live to be up to 400. So not immortal, but very long-lived. In other words, the Culture is everything transhumanists hope for. As far as I can tell, citizens of the Culture spend their time in extreme boredom, in some manner of orgy, or in transitioning from one gender to another and back again. Perhaps this is someone’s idea of heaven, but it’s not mine. So if this, or something like it, is what the MTA has in mind as the fulfillment of all the things promised by the scriptures, then I hope they’re wrong. And I would offer up that they suffer from a failure of imagination.

I hope that resurrection is more than just cloning and cryonics, that transfiguration is more than having my mind uploaded into a World of Warcraft server, that “worlds without number” is more than just a SpaceX colony on Mars. That immortality is more than just the life I already have, but infinitely longer. If you’re thinking at this point that my description of things is a poor caricature of what the MTA really aspires to, then you’re almost certainly correct. But I hope that however lofty the dreams of the MTA, those lofty dreams are in turn a poor caricature of what God really has in store for us.

Returning to my original point: I am very favorably disposed toward the MTA. I think they have some great ideas, and I’m very impressed with the way they’ve combined science and religion. Unfortunately, despite all that, we have very different philosophies when it comes to the business of chocolate covered asparagus.

Given that we don’t yet live in a post-scarcity society, consider donating. And if you’re pretty sure we eventually will, that’s all the more reason to donate, since money will soon be pointless anyway.