
Tetlock, the Taliban, and Taleb

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:

Or download the MP3


I.

There have been many essays written in the aftermath of our withdrawal from Afghanistan. One of the more interesting was penned by Richard Hanania, and titled “Tetlock and the Taliban”. Everyone reading this has heard of the Taliban, but there might be a few of you who are unfamiliar with Tetlock. And even if that name rings a bell you might not be clear on what his relation is to the Taliban. Hanania himself apologizes to Tetlock for the association, but “couldn’t resist the alliteration”, which is understandable. Neither could I. 

Tetlock is known for a lot of things, but he got his start by pointing out that “experts” often weren’t. To borrow from Hanania:

Phil Tetlock’s work on experts is one of those things that gets a lot of attention, but still manages to be underrated. In his 2005 Expert Political Judgment: How Good Is It? How Can We Know?, he found that the forecasting abilities of subject-matter experts were no better than educated laymen when it came to predicting geopolitical events and economic outcomes.

From this summary the connection to the Taliban is probably obvious. This is an arena where the subject matter experts got things very wrong. Hanania’s opening analogy is too good not to quote:

Imagine that the US was competing in a space race with some third world country, say Zambia, for whatever reason. Americans of course would have orders of magnitude more money to throw at the problem, and the most respected aerospace engineers in the world, with degrees from the best universities and publications in the top journals. Zambia would have none of this. What should our reaction be if, after a decade, Zambia had made more progress?

Obviously, it would call into question the entire field of aerospace engineering. What good were all those Google Scholar pages filled with thousands of citations, all the knowledge gained from our labs and universities, if Western science gets outcompeted by the third world?

For all that has been said about Afghanistan, no one has noticed that this is precisely what just happened to political science.

Of course Hanania’s point is more devastating than Tetlock’s. The experts weren’t just “no better” than the Taliban’s “educated laymen”. The “experts” were decisively outcompeted despite having vastly more money and, in theory, all the expertise. Certainly they had all the credentialed expertise…

In some ways Hanania’s point is just a restatement of Antonio García Martínez’s point, which I used to end my last post on Afghanistan: the idea that we are an unserious people. That we enjoy “an imperium so broad and blinding” we’ve never been “made to suffer the limits of [our] understanding or re-assess [our] assumptions about [the] world”.

So the Taliban needed no introduction, and we’ve introduced Tetlock, but what about Taleb? Longtime readers of this blog should be very familiar with Nassim Nicholas Taleb, but if not I have a whole post introducing his ideas. For this post we’re interested in two things: his relationship to Tetlock, and his work describing black swans, those rare, consequential, and unpredictable events.

Taleb and Tetlock are on the same page when it comes to experts, and in fact for a time they were collaborators, co-authoring papers on the fallibility of expert predictions and the general difficulty of making predictions—particularly when it came to fat-tail risks. But then, according to Taleb, Tetlock was seduced by government money and went from pointing out the weaknesses of experts to trying to supplant them, by creating the Good Judgment Project and the whole enterprise of superforecasting.

The key problem with expert prediction, from Tetlock’s point of view, is that experts are unaccountable. No one tracks whether they were eventually right or wrong. Beyond that, their “predictions” are made in such a way that even determining their accuracy is impossible. Additionally, experts are not any better at prediction than educated laypeople. Tetlock’s solution is to offer anyone the chance to make predictions, but in the process ensure that the predictions can be tracked and assessed for accuracy. From there you can promote the people with the best track records. A sample prediction might be “I am 90% confident that Joe Biden will win the 2020 presidential election.”
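Forecasts of that form are typically graded with a Brier score, the measure Tetlock’s forecasting tournaments use. Here’s a minimal sketch in Python; the track records below are invented purely for illustration:

```python
# Brier score: mean squared gap between the stated probability and the
# outcome (1 if the event happened, 0 if it didn't). Lower is better.
def brier_score(forecasts):
    return sum((p - happened) ** 2 for p, happened in forecasts) / len(forecasts)

# Invented track records: (stated probability, what actually happened)
calibrated = [(0.9, 1), (0.8, 1), (0.7, 0)]        # honest, hedged odds
overconfident = [(0.99, 1), (0.99, 1), (0.99, 0)]  # right twice, then a confident miss

print(round(brier_score(calibrated), 3))     # 0.18
print(round(brier_score(overconfident), 3))  # 0.327
```

Averaged over many questions, a score like this is what lets you promote forecasters with the best track records. Note what it can’t do: score a question nobody thought to ask.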

Taleb agreed with the problem, but not with the solution. And this is where black swans come in. Black swans can’t be predicted; they can only be hedged against and prepared for. Superforecasting, by giving the illusion of prediction, encourages people to be less prepared for black swans, leaving them worse off in the end than they would have been without the prediction.

In the time since writing The Black Swan Taleb has come to hate the term, because people have twisted it into an excuse for precisely the kind of unpreparedness he was trying to prevent. 

“No one could have done anything about the 2007 financial crisis. It was a black swan!”

“We couldn’t have done anything about the pandemic in advance. It was a black swan!” 

“Who could have predicted that the Taliban would take over the country in nine days! It was a black swan!”

Accordingly, other terms have been suggested. In my last post I reviewed a book which introduced the term “gray rhino”, something people can see coming, but which they nevertheless ignore. 

Regardless of the label we decide to apply to what happened in Afghanistan, it feels like we were caught flat-footed. We needed to be better prepared. Taleb says we can be better prepared if we expect black swans. Tetlock says we can be better prepared by predicting what to prepare for. Afghanistan seems like precisely the sort of thing superforecasting was designed for. Despite this, I can find no evidence that Tetlock’s stable of superforecasters predicted how fast Afghanistan would fall, or any evidence that they even tried.

One final point before we move on. This last bit is one of the biggest problems with superforecasting: the idea that you should only be judged on the predictions you actually made, and that if you were never asked to make a prediction about something, the endeavor still “worked”. But reality doesn’t care about what you chose to make predictions on versus what you didn’t. Reality does whatever it feels like. The fact that you didn’t choose to make any predictions about the fall of Afghanistan doesn’t mean that thousands of interpreters weren’t left behind. And the fact that you didn’t choose to make any predictions about pandemics doesn’t mean that millions of people didn’t die. This is the chief difference between Tetlock and Taleb.

II.

I first thought about this issue when I came across a poll on a forum I frequent, in which users were asked how long they thought the Afghan government would last. The options and results were:

(In the interest of full disclosure, the bolded option indicates that I said one to two years.)

While it’s true that a plurality of people said less than six months, six months was still much longer than the nine days it actually took (from the capture of the first provincial capital to the fall of Kabul). And from the discussion that followed the poll, it seemed most of those 16 people were thinking the government would fall closer to six months, or even three months, than to one week. In fact the best thing, prediction-wise, to come out of the discussion was someone pointing out that 10 years previously The Onion had posted an article with the headline U.S. Quietly Slips Out Of Afghanistan In Dead Of Night, which is exactly what happened at Bagram.

As it turns out, this is not the first time The Onion has eerily predicted the future; there’s a whole subgenre of noticing all the times it’s happened. How do they do it? Part of the answer, of course, is selection bias. No one expects them to predict the future, and nobody comments on all the articles that didn’t come true. But when one does, it’s noteworthy. I think there’s something else going on as well: they come up with the worst or most ridiculous thing that could happen, and because of the way the world works, some of the time that’s exactly what does happen.

Between the poll answers being skewed from reality and the link to the Onion article, the thread led me to wonder: where were the superforecasters in all of this?

I don’t want to go through all of the problems I’ve brought up with superforecasting (I’ve easily written more than 10,000 words on the subject) but this event is another example of nearly all of my complaints. 

  • There is no methodology to account for the differing impact of being incorrect on some predictions vs. others. (Being wrong about whether the Tokyo Olympics will be held is a lot less consequential than being wrong about Brexit.)
  • Their attention is naturally drawn to obvious questions where tracking predictions is easy. 
  • Their rate of success is skewed both by only picking obvious questions, and by lumping together both the consequential and the inconsequential.
  • People use superforecasting as a way of more efficiently allocating resources, but efficiency is essentially equal to fragility, which leaves us less prepared when things go really bad. (It was pretty efficient to just leave Bagram all at once.)

Of course some of these don’t apply because, as far as I can tell, the Good Judgment Project and its stable of superforecasters never tackled the question, but they easily could have. They could have had a series of questions about whether the Taliban would be in control of Kabul by a certain date. This seems specific enough to meet their criteria. But as I said, I could find no evidence that they had. Which means either they did make such predictions and were embarrassingly wrong, so they’ve been buried, or, despite its geopolitical importance, it never occurred to them to make any predictions about when Afghanistan would fall. (But it did occur to a random poster on a fringe internet message board?) Both options are bad.

When people like me criticize superforecasting and Tetlock’s Good Judgment Project in this manner, the common response is to point out all the things they did get right, and further that superforecasting is not about getting everything right; it’s about improving the odds, and getting more things right than the old method of relying on the experts. This is a laudable goal, but as I’ve pointed out, it suffers from several blind spots. The blind spot of impact is particularly egregious and deserves more discussion. To quote from one of my previous posts, where I reflected on their failure to predict the pandemic:

To put it another way, I’m sure that the Good Judgment Project and other people following the Tetlockian methodology have made thousands of forecasts about the world. Let’s be incredibly charitable and assume that out of all these thousands of predictions, 99% were correct. That out of everything they made predictions about, 99% of it came to pass. That sounds fantastic, but depending on what’s in the 1% they didn’t predict, the world could still be a vastly different place than what they expected. And that assumes their predictions encompass every possibility. In reality there are lots of very impactful things which they might never have considered assigning a probability to. They could actually be 100% correct about the stuff they predicted but still be caught entirely flat-footed by the future because something happened they never even considered.

As far as I can tell there were no advance predictions of the probability of a pandemic by anyone following the Tetlockian methodology, say in 2019 or earlier. Nor any list where “pandemic” was #1 on the “list of things superforecasters think we’re unprepared for”, or really any indication at all that people who listened to superforecasters were more prepared for this than the average individual. But the Good Judgment Project did try their hand at both Brexit and Trump and got both wrong. This is what I mean by the impact of the stuff they were wrong about being greater than the impact of the stuff they were correct about. When future historians consider the last five years, or even the last ten, I’m not sure which events they will rate as the most important, but surely those three would have to be in the top 10. Superforecasters correctly predicted a lot of stuff which didn’t amount to anything, and missed predicting the few things that really mattered.

Once again we find ourselves in a similar position. When we imagine historians looking back on 2021, no one would find it surprising if they ranked the withdrawal of the US and subsequent capture of Afghanistan by the Taliban as the most impactful event of the year. And yet superforecasters did nothing to help us prepare for this event.

III.

The natural next question is to ask how we should have prepared for what happened, particularly since we can’t rely on the predictions of superforecasters to warn us. What methodology do I suggest instead of superforecasting? Here we return to the remarkable prescience of The Onion. They ended up accurately predicting what would happen in Afghanistan 10 years in advance just by imagining the worst thing that could happen. And in the weeks since Kabul fell, my own criticism of Biden has settled around this theme. He deserves credit for realizing that the US mission in Afghanistan had failed, and that we needed to leave; that in fact we had needed to leave for a while. Bad things had happened, and bad things would continue to happen. But in accepting the failure and its consequences he didn’t go far enough.

One can imagine Biden asserting that Afghanistan and Iraq were far worse than Bush and his “cronies” had predicted. But then somehow he overlooked the general wisdom that anything can end up being a lot worse than predicted, particularly in the arena of war (or disease). If Bush can be wrong about the cost and casualties associated with invading Afghanistan, is it possible that Biden might be wrong about the cost and casualties associated with leaving Afghanistan? To state things more generally, the potential for things to go wrong in an operation like this far exceeds the potential for things to go right. Biden, while accepting past failure, didn’t do enough to accept the possibility of future failure. 

As I mentioned, my answer to the poll question of how long the Afghan government would last was one to two years. And I clearly got it wrong (whatever my excuses). But I can tell you what questions I would have aced (and I think my previous 200+ blog posts back me up on this point):

  • Is there a significant chance that the withdrawal will go really badly?
  • Is it likely to go worse than the government expects?

And to be clear, I’m not looking to make predictions for the sake of predictions. I’m not trying to be more accurate; I’m looking for a methodology that gives us a better overall outcome. So is the answer to how we could have been better prepared merely “more pessimism”? Well, that’s certainly a good place to start, and beyond that there are things I’ve been talking about since this blog started. But a good next step is to look at the impact of being wrong. Tetlock was correct when he pointed out that experts are wrong most of the time. What he didn’t account for is that it’s possible to be wrong most of the time but still end up ahead. To illustrate this point I’d like to end by recycling an example I used the last time I talked about superforecasting:

The movie Molly’s Game is about a series of illegal poker games run by Molly Bloom. The first set of games she runs is dominated by Player X, who encourages Molly to bring in fish: bad players with lots of money. Accordingly, Molly is confused when Player X brings in Harlan Eustice, who turns out to be a very skillful player. That is, until one night when Eustice loses a hand to the worst player at the table. This sets him off, changing him from a calm and skillful player into a compulsive and horrible one, and by the end of the night he’s down $1.2 million.

Let’s put some numbers on things and say that 99% of the time Eustice is conservative and successful; on an average conservative night he ends up $10k ahead. But 1% of the time, Eustice is compulsive and horrible, and on those nights he loses $1.2 million. So our question is: should he play poker at all? (And should Player X want him at the same table?) The math is straightforward: his expected return over 100 games is -$210k. It would seem clear that the answer is “No, he shouldn’t play poker.”
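That -$210k can be checked in a few lines of Python, using the numbers from the example above:

```python
# Out of 100 nights: 99 conservative, 1 compulsive, per the example.
good_nights = 99
bad_nights = 1
win_per_good_night = 10_000      # average result on a conservative night
loss_per_bad_night = -1_200_000  # result on a compulsive night

total = good_nights * win_per_good_night + bad_nights * loss_per_bad_night
per_night = total / (good_nights + bad_nights)

print(total)      # -210000
print(per_night)  # -2100.0
```

One catastrophic night at -$1.2 million swamps ninety-nine good ones at +$10k each, which is the whole point of the example.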

But superforecasting doesn’t deal with the question of whether someone should “play poker”. It works by taking a single question, answering it, and assigning a confidence level to the answer. So in this case the superforecaster would be asked, “Will Harlan Eustice win money at poker tonight?” To which they would say, “Yes, he will, and my confidence level in that prediction is 99%.”

This is what I mean by impact. When things depart from the status quo, when Eustice loses money, it’s so dramatic that it overwhelms all of the times when things went according to expectations.  

Biden was correct when he claimed we needed to withdraw from Afghanistan. He had no choice, he had to play poker. But once he decided to play poker he should have done it as skillfully as possible, because the stakes were huge. And as I have so frequently pointed out, when the stakes are big, as they almost always are when we’re talking about nations, wars, and pandemics, the skill of pessimism always ends up being more important than the skill of superforecasting.


I had a few people read a draft of this post. One of them complained that I was using a $100 word when a $1 word would have sufficed. (Any guesses on which word it was?) But don’t $100 words make my donors feel like they’re getting their money’s worth? If you too want to be able to bask in the comforting embrace of expensive vocabulary consider joining them.


I Don’t Know If Everything Will Be Okay: My Thoughts On the Election



You may be familiar with the website Cracked. I spend more time on it than I should, and I definitely have a dysfunctional relationship with it. Sometimes I think it’s the worst clickbait site out there, more information-free than even Buzzfeed. Other times, while still annoyed by their tendency to split-test their titles until the most sensational, least accurate title wins out, I think they might actually have some interesting articles. This may seem like a strange way to start a post about the election, but it’s going somewhere.

In the wake of the election Cracked had an article titled Dear White People Stop Saying Everything Will Be Okay (though by the time you get to it, it may be titled “Five Reassuring Things White People Say (that are pure B.S.)”). And in case you didn’t know it, I am white. And I’m going to follow this injunction. I’m not going to tell you that everything will be okay. How could I possibly know that? In fact the theme of this blog is that things are not going to be okay (and certainly that they’re not going to be okay in the absence of God; for my non-religious friends, this is the first and last religious reference). If you want to be told that everything will be okay I would point you at the recent article from Wait But Why. If you’d rather stick with someone who has no illusions about his ability to predict the future, you’re in the right place.

To be frank, Trump could end up being a horrible president. He could not only be as bad as people thought, he could be worse. He could be the person most responsible for the eventual destruction of the planet, whether through a full-on exchange of nukes with Russia or something more subtle. But once you start talking about things that could happen, then in the end Clinton could also have been and done all those things; in fact there are credible arguments that Clinton would have been even more likely to do some of them.

We just don’t know. We guess; we estimate; we might even create models to predict what will happen. And coincidentally enough, we just got a great example of how models and predictions can be wrong, really wrong. So the first thing I want to talk about is the pre-election predictions, because everyone recognizes that they were wrong, and yet now both people who are enthusiastic about the election and people who are devastated by it are making pre-presidency predictions, without recognizing that these predictions are even more likely to be wrong than the pre-election ones. At least the predictions about who would win the election were based on lots of data and dealt with a very narrow question. How Trump will be as president, on the other hand, is a huge question with very little data. So no, I’m not going to say that everything will be okay, because I don’t know, and neither does anyone else, really.

As I said, remembering how wrong the polls were can help us have some perspective on how wrong we might be about a Trump presidency (and remember, we could be wrong in either direction). Before I discuss the predictions I should pause and, in the interest of full disclosure, mention that there is definitely some schadenfreude going on here. Not because I really wanted Trump to win, but because, as someone who is constantly pointing out the difficulty of predicting the future, when someone smugly does just that and ends up being really wrong, it does give me a certain amount of validation. In any event, my favorite example of being really wrong is Sam Wang of the Princeton Election Consortium, who gave Clinton a 99% chance of winning the election. This is bad enough, but then outlets like Wired and DailyKos decided to double down and not only hail the genius of Sam Wang, but dismiss Nate Silver as an idiot. Of course Silver was wrong as well, but he was a lot less wrong. To take a more limited example, Matt Grossman of Michigan State said that Clinton was ahead by 19 points in Michigan, a state Trump won. This wasn’t months ago; this was a week before the election. (Perhaps one clue that it was wrong should have been the fact that in the same poll Gary Johnson was getting 11% of the vote.)

In their defense, people like Wang and Silver will argue that the polls were not off by that much. Nate Silver posted an article about how if only 1 person in 100 had switched votes, Clinton would have easily won. What this amounts to is that the polls were off by 2%, which is not that much, and the sort of thing that could slip in unnoticed, due to any of 100 different factors operating in isolation or in combination. This is totally fair, but it doesn’t matter if the polls were only off by 0.1%, or if the margin of victory was only 537 votes (as it was for Bush, another person who won the election but lost the popular vote). The winner still gets 100% of the presidency. Most things are like this: a tiny error in some part of our calculations can still have huge consequences. In this sense it doesn’t matter if the odds of a Clinton presidency were 65.1% or 65.2%; the key thing was to be right about who would actually win, and everyone (or at least almost everyone) was wrong about that.
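Silver’s 1-in-100 point is simple arithmetic: every voter who switches sides moves the margin by two votes, one off the leader and one onto the trailer. A toy calculation with invented round numbers:

```python
# Each switcher subtracts one vote from the leader and adds one to the
# trailer, so the margin moves by two votes per switch. Numbers invented.
total_votes = 1_000_000
margin = 20_000                 # leader ahead by 2 points
switchers = total_votes // 100  # 1 voter in 100 switches sides

new_margin = margin - 2 * switchers
print(new_margin)  # 0: a 1-in-100 switch erases a 2-point lead
```

Which is why a polling error of “only” two points can flip the answer to the one question that actually matters.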

Before leaving our discussion of polling I’d like to point out one final thing. Yes, a tiny switch in the voting and the nation would be having a very different discussion right now. But as Andrew Gelman, a noted statistician, points out, there are two ways to view the election. The first is as the probability that Trump would be president given what we knew Tuesday morning. The second is as the probability that Trump would be president given what we knew when the race first started. Under the first view, Trump’s victory was not that unlikely, despite what Sam Wang said. Under the second view, it was fantastically unlikely. Gelman points out that a lot of the shock people are feeling comes from still being stuck in the second view: the probability of him going all the way.

Being stuck in the second view obviously causes problems, but for the moment I’d like to look at how we got from there to here. How did something which seemed so unlikely when Trump first announced his candidacy (one commentator said he was more likely to play in the NBA finals than win the nomination) end up being our reality on November 9th?

Obviously this is not the first attempt at an explanation; pundits have had essentially no other job since Trump entered the race than explaining and/or dismissing his rise. But I’d like to focus on two explanations that I don’t think got much play, yet may be more significant than people realized.

I know a fair number of political junkies, and as you can imagine there was a lot of discussion in the aftermath of the election. One comment in particular jumped out at me. One of my more liberal friends mentioned that there is a history in the US, going all the way back to the revolution, of saying “Screw you, I do what I want!” And that’s what this looked like to him. In response I pointed out that for that to happen, someone had to be trying to tell them what to do, and in my opinion that was one of the overlooked factors: all the individuals telling people how evil they were for even thinking about voting for Trump. Everyone seems to agree that Clinton lost some voters when she called half of Trump’s supporters a basket of deplorables, but what about when the Huffington Post decided to add the following to all of their articles:

Note to our readers: Donald Trump is a serial liar, rampant xenophobe, racist, birther and bully who has repeatedly pledged to ban all Muslims — 1.6 billion members of an entire religion — from entering the U.S.

Did that hurt or help Trump? And is it possible that the net effect of Joss Whedon getting all his rich friends together to record a video (which I enjoyed by the way) was to create more Trump voters, while convincing no one new to vote for Clinton?

I am not saying that any one of these things was enough to push the election to Trump, but together, to borrow a term from the other side, they created a climate of badgering, smugness and disapproval. Was it enough to swing the election? Hard to say, but as we saw above it was very close, so if this hectoring created any net Trump voters (particularly in the state of Pennsylvania) then it may very well have been what pushed it over the top. I think it certainly created the nucleus of hard-core supporters that got him the nomination and kept him in the race.

I said that this didn’t get much play, and that was true before the election. Now that the election is over, lots of people are pointing it out. So far I’ve seen articles about the Unbearable Smugness of the Press, another commentator saying Trump was elected (and Brexit happened) because people were tired of being labeled as bigots and racists, and finally Reason Magazine saying that Trump won because political correctness inspired a terrifying backlash. Perhaps you feel that Trump, and anyone who voted for him, is racist, and that regardless of whether it cost Clinton the election it’s still important to point that out. That’s certainly your right, but in the long run it might be more effective for your candidate to win.

The second explanation I’d like to look at might be called the “what’s good for the goose” explanation. This one goes beyond the election into the presidency, but let’s start with the election, in particular voting as a racial bloc. Much has been made of the fact that 53% of white women voted for Trump, despite his apparent misogyny. Some are even saying that because of this obvious racism, white women sold out the world. But at the same time you read about people who are shocked that Latinos didn’t vote in greater numbers and that up to 29% of them may have actually voted for Trump. Then another article comes along and assures us that no, it’s okay, Latinos did vote as a bloc and only 18% of them voted for Trump. This is not new, of course; minorities have been voting as blocs for a long time. It’s expected. But it was also expected that whites wouldn’t vote as a bloc. Why?

I’m not going to get into whether it’s right or wrong to vote as a racial bloc; it’s one of those intersections of a lot of different principles (charity, justice, equality, etc.) where things get really muddy. But no one should be surprised if, after decades of urging blacks and Latinos to view the election in terms of race, at least some whites start viewing it in terms of race as well. And you don’t even have to imagine some grand conspiracy for this to happen. Most people vote based on their perceived self-interest, not on what’s best for the world, and it’s not inconceivable that these interests will align in a way that looks racial, even if that race is white.

This gets into the subject of tactics that seem great if your side is the only one using them, but aren’t so great when the other side starts using them too. And here we move from talking about the election to talking about Trump’s presidency.

Regardless of your opinion on whether Trump will make a good president or a bad one, it is certainly true that recent developments will make him a more consequential president than he might otherwise have been. I already talked about how dangerous the temptation to restrict free speech is: not only is free speech the best protection against a bad leader, but the tools you create while in power can backfire on you when you’re out of power. There are lots of examples of expanded executive powers which fit this model. Dan Carlin of the Common Sense and Hardcore History podcasts talks a lot about this; he’s particularly worried about surveillance powers and executive orders. I’m more interested in the Supreme Court. There are a lot of things where liberals couldn’t wait for public opinion to catch up, and so they relied on the courts to change them. But now that the court has done that, it can also reverse it, and it can do so even if, in the interim, public opinion has caught up.

Also, with the Supreme Court acting more and more as the de facto rulers of the whole country, I know that there are a lot of Republicans out there who voted for Trump just because they didn’t want Clinton appointing four justices. That was their single issue, and they ignored or held their nose about everything else.  Combine this with Dan Carlin’s list of concerns, and a federal bureaucracy that’s more powerful than ever, and if Trump is going to be a bad President he’s going to have a lot more tools at his disposal than he otherwise would have. In short, people arguing for limited government weren’t always doing it because they’re jerks. (I mean sometimes they were, but not always.) They may have genuinely recognized the danger and the fragility that comes from too much centralization.

As I’ve said, I don’t know what will happen under a Trump Presidency. He could be good, he could be horrible, he could be worse than horrible, but before ending I’ll run through what I think might happen in a half dozen different areas:

First, let’s start with immigration. This is one area where Trump took a lot of heat and got a lot of support. I have seen some Trump defenders say that he’s going to walk back some of his more extreme comments when he’s President. And if you look at his plan for the first 100 days it does appear that he might be doing that, at least somewhat. There is no mention of deporting everyone who’s here illegally or banning all Muslims (the word Muslim doesn’t appear anywhere in the plan). Combine this with the normal difficulties of getting things done in Washington and  his immigration policy may be less draconian than people feared.

Second, another place where people are scared is LGBT rights. Despite the expansion of executive power, I don't know that there's a lot he can do here outside of getting the Supreme Court to undo the blanket legalization of same-sex marriage. (And remember that all the Supreme Court can do is send it back to the states, where, one could argue, it should have been in the first place.) Also, from what I can tell, Trump's social conservative urges are nearly non-existent. Certainly nothing about this appears in his plan for the first 100 days, nor was the idea especially prominent in his campaign. That said, if he manages to appoint four conservative justices there's no telling what they might do. But of all the Republicans in the primaries, I think Trump was the most socially liberal.

Third, people also seem to be worried about whether Trump will keep abortion legal. This is another area where Trump doesn't seem to have strong feelings, but a court with four Trump justices could still reverse Roe v. Wade (and, once again, remember this just moves the issue back to the states). A reversal here strikes me as more likely than one on same-sex marriage. For one thing, Roe v. Wade is considered a poorly constructed ruling even by some people who support it, and the issue seems to have been bubbling closer to the top in the last few years. Despite all this I still don't think it's going to happen, but I do think we'll see a substantial challenge.

Now that we’ve covered the relatively mundane topics, topics where there’s almost certainly going to be some noise made, we can move on to what we might term black swans.

Fourth, and our first black swan, is something which is definitely going to make some noise; the question is whether it's going to go anywhere. I'm talking about California seceding. What was once the cause of a few thousand hardcore supporters is now being seriously considered. The consensus is that to do it cleanly would require a constitutional amendment. Historically, though, it's far more common for a nation to break apart through bloodshed and war than through a vote, and I doubt the Californians have the stomach for that (probably neither do the rest of us). When I consider the difficulties, I think more likely than either a specifically Californian constitutional amendment or a war would be an amendment making it easier for any state to leave. Or, alternatively, a new constitutional convention, which is actually something provided for in the Constitution.

For numbers five and six we'll finally deal with the two greatest fears cited by opponents of Donald Trump: dictatorship and nuclear war. I'm not sure how to evaluate the possibility of a dictatorship. I mean, obviously it is possible, I just don't immediately see how to get from here to there, but I'll see what I can come up with. Let's start with the premise that dictatorship requires some kind of force, and while force can be applied without guns, if you really want to compel someone, guns are going to enter into the equation at some point. So who has guns? Obviously the military does; there is also a vast stock of guns in private ownership in the US; and then there's the police. If it came to it, private gun owners (if unified) are a bigger deal than the police, and the military is a bigger deal than them all. Thus, to exercise force you need to control one level while the levels above you are sidelined. For example, it's sufficient to control the police if both the military and private gun owners are uninvolved, which is, broadly speaking, the situation we have now. But if someone controls the military, it doesn't matter how many cops or private citizens oppose him. And Trump does, sort of, control the military now, but he can't just immediately declare martial law; the military would tell him to go suck it. He needs an excuse. Perhaps the War on Terror. Perhaps a war against California after it secedes. But regardless, the excuse has to be big enough to derail the normal process of elections, and that's where I have a hard time seeing how to get from here to there. Then again, perhaps I just lack imagination on this front.

As for Trump controlling the nukes: this worries me too. If the worry is all-out nuclear war with Russia, he actually worries me slightly less than Clinton did. The other possibility for all-out war is China, and here he's something of a black box, though it's widely understood that China preferred Trump, for whatever that's worth. Where Trump concerns me more is the possibility of using tactical nukes, say somewhere in the Middle East. On this front I'm not sure what warnings or consolation to offer. I think we'll just have to wait and see.

And of course that's the primary advice I have: wait and see. There should definitely be some red lines, even for those people who think Trump is the greatest thing since sliced bread. But these red lines should be there for every President. And by red lines I mean acts by Trump that should cause us to take to the streets with signs and shouting and, if necessary, man the barricades. Red lines like abusing the power of the military, censoring people, trying to pack the Supreme Court, or, most especially, messing with elections. Of course there are a lot of small steps between where we are and General Trump, Dictator for Life, Beloved and Eternal Leader. And it's important that, unlike the frog, we don't allow ourselves to be slowly boiled. But based on what I've seen on social media and the news since Tuesday, I have no doubt that there will always be people willing to call out Trump the minute he tries to raise the temperature.