Month: September 2021

Tetlock, the Taliban, and Taleb

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

There have been many essays written in the aftermath of our withdrawal from Afghanistan. One of the more interesting was penned by Richard Hanania, and titled “Tetlock and the Taliban”. Everyone reading this has heard of the Taliban, but there might be a few of you who are unfamiliar with Tetlock. And even if that name rings a bell you might not be clear on what his relation is to the Taliban. Hanania himself apologizes to Tetlock for the association, but “couldn’t resist the alliteration”, which is understandable. Neither could I. 

Tetlock is known for a lot of things, but he got his start by pointing out that “experts” often weren’t. To borrow from Hanania:

Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than educated laymen when it came to predicting geopolitical events and economic outcomes.

From this summary the connection to the Taliban is probably obvious. This is an arena where the subject matter experts got things very wrong. Hanania’s opening analogy is too good not to quote:

Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?

Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?

For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science.

Of course Hanania’s point is more devastating than Tetlock’s. The experts weren’t just “no better” than the Taliban’s “educated laymen”. The “experts” were decisively outcompeted despite having vastly more money and, in theory, all the expertise. Certainly they had all the credentialed expertise…

In some ways Hanania’s point is just a restatement of Antonio García Martínez’s point, which I used to end my last post on Afghanistan: the idea that we are an unserious people. That we enjoy “an imperium so broad and blinding” we’ve never been “made to suffer the limits of [our] understanding or re-assess [our] assumptions about [the] world”.

So the Taliban needed no introduction, and we’ve introduced Tetlock, but what about Taleb? Longtime readers of this blog should be very familiar with Nassim Nicholas Taleb, but if not I have a whole post introducing his ideas. For this post we’re interested in two things, his relationship to Tetlock and his work describing black swans: rare, consequential and unpredictable events. 

Taleb and Tetlock are on the same page when it comes to experts, and in fact for a time they were collaborators, co-authoring papers on the fallibility of expert predictions and the general difficulty of making predictions, particularly when it came to fat-tail risks. But then, according to Taleb, Tetlock was seduced by government money and went from pointing out the weaknesses of experts to trying to supplant them, by creating the Good Judgment Project and the whole enterprise of superforecasting.

The key problem with expert prediction, from Tetlock’s point of view, is that experts are unaccountable. No one tracks whether they were eventually right or wrong. Beyond that, their “predictions” are made in such a way that even determining their accuracy is impossible. Additionally, experts are no better at prediction than educated laypeople. Tetlock’s solution is to offer anyone the chance to make predictions, but in the process ensure that the predictions can be tracked and assessed for accuracy. From there you can promote the people with the best track records. A sample prediction might be “I am 90% confident that Joe Biden will win the 2020 presidential election.”
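Forecasting tournaments in the Tetlock mold make “assessed for accuracy” concrete by scoring each probability forecast against the eventual yes/no outcome, most commonly with a Brier score. A minimal sketch (the second example forecast is hypothetical, and real tournaments layer more machinery on top of this):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score, always answering 50% earns 0.25, and lower is better.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said 90% that Biden wins in 2020 (he did), and 70%
# that some other event would occur (it didn't):
score = brier_score([0.9, 0.7], [1, 0])  # (0.01 + 0.49) / 2 ≈ 0.25
```

Track enough of these and forecasters can be ranked by their average score, which is what makes promoting the people with the best track records possible in the first place.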

Taleb agreed with the problem, but not with the solution. And this is where black swans come in. Black swans can’t be predicted; they can only be hedged against and prepared for. Superforecasting, by giving the illusion of prediction, encourages people to be less prepared for black swans, leaving them, in the end, worse off than they would have been without the prediction.

In the time since writing The Black Swan Taleb has come to hate the term, because people have twisted it into an excuse for precisely the kind of unpreparedness he was trying to prevent. 

“No one could have done anything about the 2007 financial crisis. It was a black swan!”

“We couldn’t have done anything about the pandemic in advance. It was a black swan!” 

“Who could have predicted that the Taliban would take over the country in nine days! It was a black swan!”

Accordingly, other terms have been suggested. In my last post I reviewed a book which introduced the term “gray rhino”, something people can see coming, but which they nevertheless ignore. 

Regardless of the label we decide to apply to what happened in Afghanistan, it feels like we were caught flat-footed. We needed to be better prepared. Taleb says we can be better prepared if we expect black swans. Tetlock says we can be better prepared by predicting what to prepare for. Afghanistan seems like precisely the sort of thing superforecasting was designed for. Despite this, I can find no evidence that Tetlock’s stable of superforecasters predicted how fast Afghanistan would fall, or any evidence that they even tried.

One final point before we move on. This last bit is one of the biggest problems with superforecasting: the idea that you should only be judged on the predictions you actually made, that if you were never asked to make a prediction about something, the endeavor still “worked”. But reality doesn’t care about what you chose to make predictions on vs. what you didn’t. Reality does whatever it feels like. The fact that you didn’t choose to make any predictions about the fall of Afghanistan doesn’t mean that thousands of interpreters didn’t end up being left behind. And the fact that you didn’t choose to make any predictions about pandemics doesn’t mean that millions of people didn’t die. This is the chief difference between Tetlock and Taleb.

II.

I first thought about this issue when I came across a poll on a forum I frequent, in which users were asked how long they thought the Afghan government would last. The options and results were:

(In the interest of full disclosure the bolded option indicates that I said one to two years.)

While it is true that a plurality of people said less than six months, six months was still much longer than the nine days it actually took (from the capture of the first provincial capital to the fall of Kabul). And from the discussion that followed the poll, it seemed most of those 16 people were thinking the government would fall closer to six months, or even three months, than to one week. In fact the best thing, prediction-wise, to come out of the discussion was when someone pointed out that 10 years previously The Onion had posted an article with the headline U.S. Quietly Slips Out Of Afghanistan In Dead Of Night, which is exactly what happened at Bagram.

As it turns out this is not the first time The Onion has eerily predicted the future. There’s a whole subgenre of noticing all the times it’s happened. How do they do it? Part of the answer, of course, is selection bias. No one expects them to predict the future, so nobody comments on all the articles that didn’t come true; but when one does, it’s noteworthy. Still, I think there’s something else going on as well: they come up with the worst or most ridiculous thing that could happen, and because of the way the world works, some of the time that’s exactly what does happen.

Between the poll answers being skewed from reality and the link to the Onion article, the thread led me to wonder: where were the superforecasters in all of this?

I don’t want to go through all of the problems I’ve brought up with superforecasting (I’ve easily written more than 10,000 words on the subject) but this event is another example of nearly all of my complaints. 

  • There is no methodology to account for the differing impact of being incorrect on some predictions vs. others. (Being wrong about whether the Tokyo Olympics will be held is a lot less consequential than being wrong about Brexit.)
  • Their attention is naturally drawn to obvious questions where tracking predictions is easy. 
  • Their rate of success is skewed both by only picking obvious questions, and by lumping together both the consequential and the inconsequential.
  • People use superforecasting as a way of more efficiently allocating resources, but efficiency is essentially equal to fragility, which leaves us less prepared when things go really bad. (It was pretty efficient to just leave Bagram all at once.)

Of course some of these don’t apply, because as far as I can tell the Good Judgment Project and its stable of superforecasters never tackled the question. But they easily could have. They could have had a series of questions about whether the Taliban would be in control of Kabul by a certain date; this seems specific enough to meet their criteria. But as I said, I could find no evidence that they had. Which means either they did make such predictions and were embarrassingly wrong, so they’ve been buried, or, despite its geopolitical importance, it never occurred to them to make any predictions about when Afghanistan would fall. (But it did occur to a random poster on a fringe internet message board?) Both options are bad.

When people like me criticize superforecasting and Tetlock’s Good Judgment Project in this manner, the common response is to point out all the things they did get right, and further that superforecasting is not about getting everything right; it’s about improving the odds, and getting more things right than the old method of relying on the experts. This is a laudable goal. But as I’ve pointed out, it suffers from several blind spots. The blind spot of impact is particularly egregious and deserves more discussion. To quote from one of my previous posts, where I reflected on their failure to predict the pandemic:

To put it another way, I’m sure that the Good Judgement project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% of the things they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes that their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. That in fact they could actually be 100% correct about the stuff they predicted but still be caught entirely flat footed by the future because something happened they never even considered. 

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Or any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgement Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the stuff they were correct about. When future historians consider the last five years or even the last 10, I’m not sure what events they will rate as being the most important, but surely those three would have to be in the top 10. They correctly predicted a lot of stuff which didn’t amount to anything and missed predicting the few things that really mattered.

Once again we find ourselves in a similar position. When we imagine historians looking back on 2021, no one would find it surprising if they ranked the withdrawal of the US and subsequent capture of Afghanistan by the Taliban as the most impactful event of the year. And yet superforecasters did nothing to help us prepare for this event.

III.

The natural next question is to ask how should we have prepared for what happened? Particularly since we can’t rely on the predictions of superforecasters to warn us. What methodology do I suggest instead of superforecasting? Here we return to the remarkable prescience of The Onion. They ended up accurately predicting what would happen in Afghanistan 10 years in advance, by just imagining the worst thing that could happen. And in the weeks since Kabul fell, my own criticism of Biden has settled around this theme. He deserves credit for realizing that the US mission in Afghanistan had failed, and that we needed to leave, that in fact we had needed to leave for a while. Bad things had happened, and bad things would continue to happen, but in accepting the failure and its consequences he didn’t go far enough. 

One can imagine Biden asserting that Afghanistan and Iraq were far worse than Bush and his “cronies” had predicted. But then somehow he overlooked the general wisdom that anything can end up being a lot worse than predicted, particularly in the arena of war (or disease). If Bush can be wrong about the cost and casualties associated with invading Afghanistan, is it possible that Biden might be wrong about the cost and casualties associated with leaving Afghanistan? To state things more generally, the potential for things to go wrong in an operation like this far exceeds the potential for things to go right. Biden, while accepting past failure, didn’t do enough to accept the possibility of future failure. 

As I mentioned, my answer to the poll question of how long the Afghanistan government was going to last was 1-2 years. And I clearly got it wrong (whatever my excuses). But I can tell you what questions I would have aced (and I think my previous 200+ blog posts back me up on this point): 

  • Is there a significant chance that the withdrawal will go really badly?
  • Is it likely to go worse than the government expects?

And to be clear, I’m not looking to make predictions for the sake of predictions. I’m not trying to be more accurate; I’m looking for a methodology that gives us a better overall outcome. So is the answer to how we could have been better prepared merely “more pessimism”? Well, that’s certainly a good place to start, and beyond that there are things I’ve been talking about since this blog started. But a good next step is to look at the impact of being wrong. Tetlock was correct when he pointed out that experts are wrong most of the time. But what he didn’t account for is that it’s possible to be wrong most of the time and still end up ahead. To illustrate this point I’d like to end by recycling an example I used the last time I talked about superforecasting:

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X, who encourages Molly to bring in fish: bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who turns out to be a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful and he mostly wins. That on average, conservative Eustice ends the night up by $10k. But, 1% of the time, Eustice is compulsive and horrible, and during those times he loses $1.2 million. And so our question is should he play poker at all? (And should Player X want him at the same table he’s at?) The math is straightforward, his expected return over 100 games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
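The arithmetic behind that verdict, for anyone who wants to check it (the probabilities and dollar figures are the stylized ones from the paragraph above, not anything from the film):

```python
# Stylized numbers from the example above
p_calm, calm_result = 0.99, 10_000       # calm, skillful nights: up $10k on average
p_tilt, tilt_result = 0.01, -1_200_000   # compulsive nights: down $1.2 million

ev_per_night = p_calm * calm_result + p_tilt * tilt_result  # $9,900 - $12,000 = -$2,100
ev_100_nights = 100 * ev_per_night                          # -$210,000
```

A 99% win rate still nets out negative, because the rare loss is over a hundred times the size of the typical win.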

But superforecasting doesn’t deal with the question of whether someone should “play poker.” It works by considering a single question, answering it, and assigning a confidence level to the answer. So in this case a superforecaster would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.”

This is what I mean by impact. When things depart from the status quo, when Eustice loses money, it’s so dramatic that it overwhelms all of the times when things went according to expectations.  

Biden was correct when he claimed we needed to withdraw from Afghanistan. He had no choice, he had to play poker. But once he decided to play poker he should have done it as skillfully as possible, because the stakes were huge. And as I have so frequently pointed out, when the stakes are big, as they almost always are when we’re talking about nations, wars, and pandemics, the skill of pessimism always ends up being more important than the skill of superforecasting.


I had a few people read a draft of this post. One of them complained that I was using a $100 word when a $1 word would have sufficed. (Any guesses on which word it was?) But don’t $100 words make my donors feel like they’re getting their money’s worth? If you too want to be able to bask in the comforting embrace of expensive vocabulary consider joining them.


The 8 Books, 2 Graphic Novels, & 1 Podcast Series I Finished in August



  1. This Is How They Tell Me the World Ends: The Cyberweapons Arms Race by: Nicole Perlroth 
  2. Everything is F*cked: A Book About Hope by: Mark Manson
  3. Never Split the Difference: Negotiating as if Your Life Depended on It by: Chris Voss
  4. Gray Rhino: How to Recognize and Act on the Obvious Dangers We Ignore by: Michele Wucker
  5. Golden Son by: Pierce Brown
  6. Red Rising: Sons of Ares – Volume 1 and 2 (Graphic Novels) By: Pierce Brown
  7. The Bear by: Andrew Krivak
  8. The Phoenix Exultant by: John C. Wright
  9. A History of North American Green Politics: An Insider View by: Stuart Parker
  10. Rube Goldberg Machines: Essays in Mormon Theology by: Adam S. Miller

In August my youngest child left for college, and my oldest child started her graduate work. Next month another one of my children is getting married, though he’s been moved out for quite a while. Out of all of this only one child remains at home. He’s recently graduated from college with a computer science degree and is looking for his first job. Once he gets it, he too will move out. And, in what seems a very short space of time, my wife and I will be empty nesters. I’m not entirely sure I’m ready for it.

One of the first things we’re going to do is move out of the house while it undergoes a long overdue remodel. I’m expecting it to start sometime in October. I’m obviously nervous about an undertaking of this size. Remodeling isn’t a huge gamble, but it is a costly one. It’s also asymmetric: the upside is essentially capped, while the downside has a very fat tail. So lots of changes, but hopefully none of them will impact the mediocre logorrhea you’ve come to expect from me.


I- Eschatological Reviews

This Is How They Tell Me the World Ends: The Cyberweapons Arms Race

by: Nicole Perlroth

528 Pages

Briefly, what was this book about?

The history, mechanics, and actors of a global and escalating cyberwar.

Who should read this book?

If you have enough worries about the future already, I would avoid this book. If you’d like more, or if you’re interested in cybersecurity, this is the book for you.

General Thoughts

There are a lot of moving parts in this story: numerous actors, different incidents, and various technologies. One gets the sense that Perlroth is writing the history of something that hasn’t happened yet, similar to someone writing the history of World War II at the end of August, 1939. Germany hasn’t invaded Poland, but it has annexed Austria, occupied the Sudetenland, and signed a nonaggression pact with Stalin (though no one knows that yet). Certain things are going to end up being very important, and certain things are going to end up being entirely forgotten, but none of that is clear yet.

Out of all the things Perlroth mentioned I’m going to make a few guesses as to which events and actors will end up actually being important when the war is finally over.

Stuxnet: This is the worm that was developed to take out Iranian centrifuges and slow down their uranium enrichment. It’s important for two reasons: It’s the first clear example of one nation attacking another using cyberweapons. Beyond that it undercut any moral high ground the US might have had. When the final history is written I think it will actually be less important than Perlroth claims, but it’s hard to imagine it not being included.

Heartbleed: This was a huge bug in the open source OpenSSL library, which the NSA and others took advantage of for a long time. It illustrated that open source is not necessarily any more secure than the alternative (despite what some have claimed). Unsurprising, given that the budget for the OpenSSL Foundation was $2,000/month.

Ukraine: The Russian cyber attacks against Ukraine are a huge part of the story, big enough that I’ll cover it in the next section.

China: As is the case with so many things these days, China also conducts extensive cyberwarfare operations. And the story is similar to all the other China stories. China does something completely ridiculous, but in the end there’s too much money at stake so we overlook it. The key story from the book was Google, which exited China in 2010 after a gigantic hack, but then by 2018 they were working on getting back in. Currently the situation is complicated, but it’s obvious that Google is trying to get back into China’s good graces.

Of course I could be wrong as well about what will end up being important, but I don’t think I’m wrong about this being only the beginning.

Eschatological Implications

Historically wars have been the most common way that one sort of world changed into another sort of world, what we might consider eschatology lite. But it was only with the advent of nuclear weapons that people started to seriously consider the possibility that we could have wars which ended the world. With the book’s title Perlroth is making the claim that we should add cyberwar to that category. I don’t think she makes a convincing case that it belongs on the list with other x-risks, but she does make the case for significant worry.

The book opens with the stories of Russia’s cyberattacks on Ukraine. The first, in 2015, took down their power grid; the second, in 2017, took down nearly every company in the country (though to the best of my knowledge the power stayed on this time). The second used the Petya malware, and apparently the Ukrainians divide their lives into before Petya and after Petya, in part because so much information was lost in the attack. From Perlroth’s description these attacks were obviously bad, but she claims that they could have been a lot worse. That this was just a test, not a real attempt to do as much damage as possible. That we should assume that if a big enough player, like Russia or China, really wanted to cause as much damage as possible, it would be far, far worse.

This example of Ukraine and the other discussions of cyberwarfare remind me of discussions about strategic bombing during the interwar period. World War I had given people a taste of what might be possible, and the advancement of technology only served to make those possibilities more terrifying—possibilities which would certainly play out in future wars.

These discussions were not universally bleak. Many thought strategic bombing would lead to war more terrible than any which had come before, but some thought it would actually lead to fewer deaths, because it would end wars so quickly: people would just give up once you had air superiority and could bomb them at will. In particular it was widely believed that aerial bombardment would cause uncontrollable panic among civilians. As you can see, some people got it right and others didn’t. But amidst all the theorizing, one thing was definitely clear: industrial capacity would be a hugely important factor. You had to be able to build both the bombers and the bombs, and the more you could build the better.

We’re having the same discussions with respect to cyberwarfare. Some, like Perlroth, judging by the title of her book, think it has the potential to be apocalyptic, while others think that the danger is severe but manageable. (I assume Pinker is in this category, but this is another danger from progress/technology which doesn’t appear in Enlightenment Now.) I think I’m somewhere in the middle of those two positions. What I’m more interested in thinking about is which factors are going to end up ultimately determining success in cyberwarfare. If industrial capacity is what eventually allowed the US to win World War II, what factors will eventually allow which actors to achieve victory in a cyberwar?

From the book it’s clear that cyberwarfare currently revolves around highly talented individuals finding security holes in important software. From this you can imagine lots of ways things could go:

  • Is it a numbers game where the larger your country’s population the more talented individuals you possess and thus the more security holes your country has access to?
  • How does culture play into things? Are Chinese and Russian hackers more dedicated or less? If you’re a talented programmer in the US, you’re working for six figures in Silicon Valley. If you’re a talented Russian hacker, you’re building ransomware. The latter skill set would appear to be more useful if a cyberwar starts. 
  • Related, our government seems to suffer from more leaks than the Chinese and Russian governments. See for example Edward Snowden. Does our expectation of openness work against us?
  • China seems to have a pretty tight clamp on its software companies. For example, it’s widely believed that the government can have those companies include whatever backdoors and spyware it wants. While we do see some cooperation between our government and our companies, it’s not nearly so extensive, and there’s been enormous pushback. Who has the advantage here? 
  • There’s a market for security holes and exploits. Given that you can buy your way into being competitive, but that doing so is viewed as immoral, who does such a market benefit?

As I said, it’s impossible to predict which factors are going to be important and how things will play out in this arena, but reviewing the factors I just listed most of them seem to work to our disadvantage and to the advantage of our enemies. In particular this book has made me very worried about cyberterrorism. Thus far most terrorist organizations are fairly low tech, but that can’t last forever. In the old days it was assumed that the holy grail for a terrorist organization would be a nuke. With security vulnerabilities you have thousands of potential nukes wandering around. How long before a terrorist organization gets its hands on one? 

Consider, what would cause more chaos? A terrorist nuke in a major city (probably closer to Hiroshima than an ICBM) or 20% of the country being without power for a month because terrorists managed to blow out a couple of critical transformers? Okay, now which is easier to pull off? My hunch would be that the power disruption causes more chaos and is easier to pull off. And if the terrorists can’t quite pull that off, there are thousands of security holes out there—some more damaging, some less damaging—but all with the potential to cause a lot of chaos.


Everything is F*cked: A Book About Hope 

by: Mark Manson

288 Pages

Briefly, what was this book about?

I’m honestly not sure. It was kind of all over the place. I think its primary theme was an admonition to accept the world as it is, and that hope and the search for happiness are the opposite of that.

Who should read this book?

If you loved Manson’s other books, you will probably like this one. Beyond that, I’m not sure I would recommend it. There are good parts, but nothing you couldn’t get from reading Ryan Holiday or some other Stoic. 

General Thoughts

I’m not entirely clear on how this book came to my attention, but I had read Manson’s previous book, The Subtle Art of Not Giving A F*ck, and I enjoyed it, so that’s probably why I decided to read this one; plus, it was short. The book is strange. It’s got a fair amount of philosophy in it, and most of that is pretty good. In fact Manson seemed to be making exactly the same connection I did between Nietzsche and AI. It also had a lot of stories, which I also enjoyed. The story of Antonio Damasio and “Elliot”, a man who couldn’t do anything because he felt no emotion, is one I’ve heard, and even referenced, on a couple of occasions, but Manson presents it with far more detail than any of the previous retellings I’ve encountered, so that was certainly useful. 

One thing I hadn’t encountered, at least that I can remember, was the blue-dot experiment. In this experiment researchers asked participants to decide whether a dot was blue, initially showing them a set of dots where half were blue and half were purple. Then they gradually reduced the number of blue dots until all they were showing was purple. As it turns out, the number of dots identified as blue remained fairly constant, even as the actual number of blue dots went to zero. As the occurrence of blue dots decreased, the participants’ definition of blue expanded. Up to this point it’s interesting, but not particularly earth-shattering; then they did some follow-up experiments:

In one follow-up experiment, the researchers showed the participants 800 computer-generated faces that varied on a continuum of “threatening” to “nonthreatening.” When the number of malevolent mug shots the researchers showed the participants decreased after 200 trials, the participants started labeling nonthreatening portraits as threatening.

From this, people (including Manson) concluded that even if things are improving, humans are wired to perceive a constant level of danger and disorder. That if we’re not feeling sufficiently threatened by external foes, we’ll make up the difference with perceived internal threats.

The things I’ve just mentioned, along with other human biases, are what lead him to conclude that everything is f*cked. It’s when he provides his solution, in a chapter titled “The Final Religion”, that things get interesting.

Eschatological Implications

So what is the FINAL religion? In Nick Bostrom’s foundational work on AI risk, Superintelligence, he proposes something he calls “the principle of epistemic deference”:

A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible. 

Manson takes this principle and turns it up to 11. I have never seen anyone lean into it as much as Manson does. He doesn’t suggest we defer to them “whenever feasible”. He suggests we worship them as gods:

AI will reach a point where its intelligence outstrips ours by so much that we will no longer comprehend what it’s doing. Cars will pick us up for reasons we don’t understand and take us to locations we didn’t know existed. We will unexpectedly receive medications for health issues we didn’t know we suffered from…

Then, we will end up right back where we began: worshipping impossible and unknowable forces that seemingly control our fates. Just as primitive humans prayed to their gods for rain and flame—the same way they made sacrifices, offered gifts, devised rituals, and altered their behavior and appearance to curry favor with the naturalistic gods—so will we. But instead of the primitive gods, we will offer ourselves up to the AI gods.

We will develop superstitions about the algorithms. If you wear this, the algorithms will favor you. If you wake at a certain hour and say the right thing and show up at the right place, the machines will bless you with great fortune. If you are honest and you don’t hurt others and you take care of yourself and your family, the AI gods will protect you. 

[A]llow me to say that I, for one, welcome our AI overlords.

Needless to say, there is a lot wrong with this. First, it completely ignores the AI alignment problem. Do we care what location we’re taken to by the car that “pick[s] us up for reasons we don’t understand”? What if it’s an assisted suicide facility, because the AI has decided we’re old, sad, and lonely, and all of those conditions are only going to get worse? What if it’s a eugenics facility? And these are the very mildest examples.

All of the foregoing might be forgivable if this conclusion were supported by a foundation built over the course of the previous 200 pages, or if it were foreshadowed at all. But instead it seems to come out of left field: a strange eschatology emerging, unheralded, from a rambling mix of self-help, neuroscience, and Nietzsche. 


II- Capsule Reviews

Never Split the Difference: Negotiating as if Your Life Depended on It 

by: Chris Voss

288 Pages

Briefly, what is this book about?

A method of negotiation which involves open ended questions designed to get the other side to solve your problems for you.

Who should read this book?

Someone, I forget who, pointed out that you’re never making more money, or losing more money, in a given period of time than when you’re negotiating. If this book can improve your negotiating outcomes by 1%, say by netting you $101k instead of $100k, and you do this sort of negotiation a lot, then its value should be obvious.

General Thoughts

One would think, based on what I just wrote, that I have read every book on negotiation I can get my hands on. This is not the case; I mostly read only ones that have been recommended to me, and out of those I think this one, Influence by Robert Cialdini, and Secrets of Power Negotiating by Roger Dawson have been the best. If you’re trying to decide between them, it might be useful to point out that Influence has 365 ratings on Amazon with an average of 4.7 stars. Power Negotiating has 428 ratings, also with a 4.7 average. Never Split the Difference, on the other hand, has 20,000 ratings with an average of 4.8. I’m not sure if these numbers should reflect on the author’s negotiating prowess or not. 

Beyond that, as I’ve already said, I believe this is a useful book. Voss has lots of great stories from his time as the FBI’s chief international hostage negotiator, and lots of good advice beyond that. With that in mind, my sense is that these sorts of books are best read right before a big negotiation. They’re useful in general, but they recommend a different mindset, one you’re unlikely to practice enough (unless you’re in a position like Voss’s) to be able to recall at will. 

So if you’ve got a big negotiation coming up, I would definitely recommend this book, and probably the other two as well.


The Gray Rhino: How to Recognize and Act on the Obvious Dangers We Ignore

by: Michele Wucker

284 Pages

Briefly, what is this book about?

Catastrophes which have been predicted but not prepared against.

Who should read this book?

Anyone who’s interested in risk management, though, if you haven’t read The Black Swan you should read that first.

General Thoughts

In 2005 Hurricane Katrina caused the levees to fail in New Orleans. The resulting flood killed approximately 1,500 people and inflicted $70 billion in damage. This was a catastrophe, but it wasn’t a black swan: the potential for catastrophe had been foreseen well in advance of Katrina, and yet the necessary preventative steps were not taken.

Shortly after I read this book, Hurricane Ida hit Louisiana and New Orleans, and while the levees fortunately held this time, the 911 system once again collapsed. Despite the 16 years that had elapsed since Katrina, New Orleans was only just replacing the system; the new one wasn’t ready, and the old one failed in the same way it had the last time around.

All of the foregoing are examples of gray rhinos: disasters which can be foreseen, even if their actual timing can’t be pinpointed. Wucker uses the analogy of someone out on safari who wants a picture of a rhino. In their quest they get too close, ignoring all the rules given by their guide; they spook the rhino, and the next thing they know it’s charging in their direction, whereupon they freeze. Everything about the “gray rhino” crisis is predictable and obvious, but because people are more focused on short-term incentives, they ignore the giant, possibly fatal risk now barreling down on them. 

Gray rhinos are obviously more common than black swans, and far easier to see, but as Wucker points out, this doesn’t mean we’re good at dealing with them. If this book can help even a little bit, its utility will be unquestionable. Despite that potential, reading the book was depressing rather than hopeful, as it goes through example after example of people who got too close to the rhino, found themselves facing down possible catastrophe, froze up, and got trampled. And yes, Wucker does provide plenty of advice for avoiding that fate, but people have been giving such advice for thousands of years without it seeming to make much of an impact; it’s hard to imagine that this book is going to finally be the one that takes. 


Golden Son

by: Pierce Brown

464 Pages

Briefly, what is this book about?

This is book two of the Red Rising Trilogy. The continued saga of Darrow, a low caste Red who becomes a Gold and must navigate the various treacheries and machinations of their society while attempting to bring the whole thing crashing down. 

Who should read this book?

Every series has its peak. If you’re lucky it comes at the end, but that’s actually fairly rare. I think this series peaks in book one. Book two is still enjoyable, but if you didn’t love book one, this book isn’t going to improve things for you.

General Thoughts

I’ve decided this series is a combination of Dune, Game of Thrones, and The Hunger Games. This is not necessarily a good thing. In particular it outpaces all of them in the number of deaths and the amount of duplicitous double-dealing. (Yes, it even outpaces Game of Thrones.) At a certain point I started to find this tiresome. My plan is still to read the third book, but I’m worried. The friend who recommended the series said that each book is worse than the one before. Of course he told me this after I finished book two…


Red Rising: Sons of Ares – Volume 1 and 2 (Graphic Novels)

by: Pierce Brown

152 and 132 pages respectively

Briefly, what was this series about?

This is a prequel to the main trilogy, in graphic novel form. 

Who should read it?

First off you shouldn’t read this series before you read book two of the actual trilogy because it contains major spoilers. Second, unless you’re a Red Rising completist you probably shouldn’t read it at all.

General Thoughts

The backstory provided by these graphic novels is somewhat interesting, though it doesn’t break any new ground. It’s also incoherent in places, and I didn’t really like the art, which was kind of the whole reason I decided to check them out. (In this case, literally: I checked them out from the library.)


The Bear

by: Andrew Krivak

224 Pages

Briefly, what is this book about?

A man and his daughter making their living in the wilderness long after the rest of humanity has disappeared.

Who should read this book?

This is another instance where I think viewing something as a long podcast is very clarifying. The audio book is four hours, so if a great four hour podcast episode sounds appealing then this should as well.

General Thoughts

If you were to view this as a happy version of Cormac McCarthy’s The Road you wouldn’t be far off. It also has hints of Jack London’s The Call of the Wild, and it reminds me of some of the Native American mythology I’ve read over the years. Krivak does a great job of weaving all of these elements into a coherent whole. I thoroughly enjoyed everything about the book: the setting, the plot, the characters, and the writing.


The Phoenix Exultant 

by: John C. Wright

304 Pages

Briefly, what is this book about?

A story set in the far future, full of AIs and humans in every variety you can imagine (from base neuroforms to warlocks, composites, and invariants). A story about one man’s quest to explore beyond the solar system and the forces trying to stop him.

Who should read this book?

This is the second book in a series, and while it was not as good as the first, I enjoyed it quite a bit. I expect this series might peak at the end, so if you’ve read the first one, read this one too.

General Thoughts

As I mentioned in my review of book one, Wright is great at creating an interesting setting. That mostly continues to be the case, though this book takes place at a smaller scale than the last one, which is somewhat to its detriment. One thing I didn’t mention before is that Wright himself is a conservative Catholic. It’s extraordinarily difficult to craft a book with an underlying ideology that doesn’t come across as heavy-handed, but I think Wright pulls it off. As you might imagine, this gives the book a bit of an old-school science fiction feel, which I also enjoyed. 


A History of North American Green Politics: An Insider View (Podcast Series)

by: Stuart Parker

15 hours

Briefly, what is this podcast about?

The history of North American environmentalism and the creation of the Green Party, which have not always been as closely aligned as you might think. 

Who should listen to it?

From the outside looking in I always assumed that the environmental movement was well organized and monolithic. Parker shows that it was anything but. If you’re interested in a detailed story about how the narcissism of small differences plays out in politics, this is the series for you.

General Thoughts

Parker has been heavily involved in politics and environmentalism essentially his entire adult life; he was leader of the British Columbia Greens from the age of 21 to 28, so this really is an insider view of things. Parker is also a gifted academic and lecturer with a deep and eclectic knowledge of the history of environmentalism, the relationships between its various factions (farm workers, rich elites, Native Americans, etc.), and how it all came to be manifested or ignored in the form of the Green Party. 

As the series progresses, it reaches events where Parker really was an insider, and he is able to give more of a firsthand account of how things played out; we really get to see how the sausage was made, which is fascinating and frustrating in equal measure, and I don’t even have a dog in the fight. I really enjoyed the series, much more than I expected to.


III- Religious Reviews

Rube Goldberg Machines: Essays in Mormon Theology

by: Adam S. Miller

132 Pages

Briefly, what is this book about?

A collection of essays with a particular focus on what Mormonism has to say about grace and the atonement.

Who should read this book?

If you only dabble in Mormon theology, then there are easier books to read, but if you’re serious about the subject the essays in this book are deep and thought-provoking.

General Thoughts

I found Miller’s writing to be somewhat opaque, more opaque, in my opinion, than was actually necessary. Miller has some brilliant insights, but at times I felt like I was having to work too hard for them. My favorite essay in the book was “Notes on Life, Grace and Atonement.” Grace is going through something of a revival in current Mormon dialogue, and Miller’s contribution is fascinating, and almost Buddhist in nature:

With respect to grace, the legitimacy of my preferences for pleasant or productive things is a secondary issue at best. Grace is not concerned with preferences, legitimate or not. Grace, in its prodigality, is relentlessly and single-mindedly concerned with just one thing: the givenness of whatever is given, regardless of how such things may or may not comport with my preferences.

This definition of grace is all part of what he calls a non-sequential theology. We are not interested in cause and effect; we shouldn’t be focused on doing one thing in order to bring about another, but rather on the totality of our lives at any given moment. I am certain I am not doing it justice, but perhaps I’m giving you enough of an idea to determine whether or not the book would appeal to you. And isn’t that the whole point of a review?


September looks to be the month when I finally finish reading Plato. So you’ve got more dilettantish commentary on ancient classics to look forward to. If that’s precisely what’s been missing from your life all this time, consider donating. If, inexplicably, you’ve already got enough of that sort of commentary, consider donating to support all the non-classical dilettantish commentary I do.