Tag: SlateStarCodex

Predictions (Spoiler: No AI or Immortality)


Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re not particularly useful. To begin with, historically one of the biggest problems has been that there’s no accountability after the fact. If we’re going to pay attention to someone’s predictions for 2017, it would be helpful to know how well they did in predicting 2016. In fairness, recently this has started to change, driven to a significant degree by the work of Philip Tetlock. Perhaps you’ve heard of Tetlock’s book Superforecasting (another book I intend to read, but haven’t yet; I’m only one man). But if you haven’t heard of the book or of Tetlock, he has made something of a career out of holding prognosticators accountable, and his influence (and that of others) is starting to make itself felt.

Scott Alexander of SlateStarCodex makes yearly predictions and, following the example of Tetlock, scores them at the end of the year. He just released the scoring of his 2016 predictions. As part of the exercise, he not only makes predictions but attaches a confidence level to each. In other words, is he 99% sure that X will or won’t happen, or is he only 60% sure? Of the predictions where his confidence level was 90% or higher, he missed only one: he predicted with 90% confidence that “No country currently in Euro or EU announces plan to leave,” and of course there was Brexit, so he missed that one. Last year he didn’t post his predictions until the 25th of January, but as I was finishing up this article he posted his 2017 predictions, and I’ll spend a few words at the end talking about them.
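
For anyone unfamiliar with the format, the scoring step is mechanical enough to sketch. Here’s a minimal illustration in Python (the predictions and outcomes below are hypothetical, not Alexander’s actual list): group predictions by stated confidence, then compare each bucket’s claimed confidence against its observed hit rate.

```python
from collections import defaultdict

# Hypothetical predictions, NOT Alexander's actual list: (stated confidence, came true?)
predictions = [
    (0.99, True), (0.95, True), (0.90, True), (0.90, False),
    (0.80, True), (0.80, True), (0.70, False), (0.60, True),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# A well-calibrated forecaster's hit rate roughly matches the stated
# confidence in each bucket.
for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"claimed {confidence:.0%}: {hit_rate:.0%} correct ({len(outcomes)} predictions)")
```

It’s that bucket-by-bucket comparison, not the raw count of hits, that makes a scored list like Alexander’s meaningful.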

As an aside, speaking of posting predictions on the 25th, waiting as long as you can get away with is one way to increase your odds. For example, last year Alexander made several predictions about what might happen in Asia. Taiwan held its elections on the 16th of January, and you could certainly imagine that knowing the results of that election might help you with those predictions. I’m not saying this was an intentional strategy on Alexander’s part, but I think it’s safe to say that those first 24 days of January weren’t information-free, and if we wanted to get picky we’d take that into account. Perhaps it was in response to this criticism that Alexander posted his predictions much earlier this year.

Returning to Alexander’s 2016 predictions, they’re reasonably mundane. In general he predicts that things will continue as they have. There’s a reason he does that. It turns out that if you want to get noticed, you predict something spectacular, but if you want to be right (at least more often than not) then you predict that things will basically look the same in a year as they look now. Alexander is definitely one of those people who wants to be right. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness does have one major weakness, and that is why, despite Tetlock’s efforts, prediction is still more amusing than useful.

See, it’s not the things which stay the same that are going to cause you problems. If things continue as they have been, then it doesn’t take much foresight to reap the benefits and avoid the downside. It’s when the status quo breaks that prediction becomes both useful and, ironically, impossible.

In other words someone like Alexander (whom, by the way, I respect a lot; I’m just using him as an example) can have year after year of results like the results he had for 2016 and then be completely unprepared the one year some major black swan occurs and wipes out half of his predictions.

Actually, forget about wiping out half his predictions; let’s just look at his largely successful world-event predictions for 2016. There were 49 of them and he was wrong about only eight. I’m going to ignore one of the eight because he was only 50% confident about it (that’s the equivalent of flipping a coin, and he admits himself that being 50% confident is pretty meaningless). This gives us 41 correct predictions out of 48 total, or 85% correct. Which seems really good. The problem is that the stuff he was wrong about is far more consequential than the stuff he was right about. He was wrong about the aforementioned Brexit, and he made four wrong predictions about the election. (Alexander, like most people, was surprised by the election of Trump.) And then he was wrong about the continued existence of ISIS and about oil prices. As someone living in America you may doubt the impact of oil prices, but if so I refer you to the failing nation of Venezuela.

Thus while you could say that he was 85% accurate, it’s the 15% of stuff he wasn’t accurate about that is going to be the most impactful. In other words, he was right about most things, but the consequences of his seven missed predictions will easily exceed the consequences of the 41 predictions that he got right.

That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things people end up being wrong about are far more consequential than all the things they’re right about. Obviously it’s a little bit crude to use the raw number of predictions as our standard, but I think in this case it’s nevertheless essentially accurate. You can be right 85% of the time and still end up in horrible situations, because the 15% of the time you’re wrong, you’re wrong about the truly consequential stuff.
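
To make the arithmetic behind that claim explicit, here’s a toy calculation in Python. The 41/7 split comes from the scored predictions above; the impact weights are entirely made up for illustration.

```python
# Raw accuracy vs. consequence-weighted accuracy.
hits, misses = 41, 7
hit_impact, miss_impact = 1, 20  # assumption: each miss (Brexit, Trump, oil) matters ~20x more

raw_accuracy = hits / (hits + misses)
miss_share = (misses * miss_impact) / (hits * hit_impact + misses * miss_impact)

print(f"raw accuracy: {raw_accuracy:.0%}")                               # 85%
print(f"share of total consequences from the misses: {miss_share:.0%}")  # 77%
```

Under those assumed weights, a forecaster who is right 85% of the time still absorbs more than three-quarters of the total consequences from the 15% he got wrong.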

I’ve already given the example of Alexander being wrong about Brexit and Trump. But there are of course other examples. The recent financial crisis is a big one. One of the big hinges of the investment boom leading up to the crisis was the idea that the US had never had a nationwide decline in housing prices. And that was a true and accurate position for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.

You may be thinking from all this that I have a low opinion of predictions, and that’s largely the case. Once again this goes back to the ideas of Taleb and antifragility. One of his key principles is to reduce your exposure to negative black swans and increase your exposure to positive black swans. But none of this exposure shifting involves accurately predicting the future. And to the extent that you think you can predict the future, you become less likely to worry about the sort of exposure shifting that Taleb advocates, which makes things more fragile. Also, in a classic cognitive bias, everything you correctly predicted you ascribe to skill, while every time you’re wrong you put it down to bad luck. Which, remember, is an easy trap to fall into, because if you expect the status quo to continue you’re going to be right a lot more often than you’re wrong.

Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine if I predicted a nuclear war. And I had moved to a remote place and built a fallout shelter and stocked it with a bunch of food. Every year I predict a nuclear war and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just like with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming, and everyone else was the sane defender of the status quo, because from the perspective of consequences they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.

All of this is not to say that you should move to Wyoming and build a fallout shelter. It’s only to illustrate the asymmetry of being right most of the time if, when you’re wrong, you’re wrong about something really big.

In discussing the move toward tracking the accuracy of predictions, I neglected to discuss why people make outrageous and ultimately inaccurate predictions. Why do predictions, in order to be noticed, need to be extreme? Many people will chalk it up to a need for novelty, or a requirement brought on by a crowded media environment. But once you realize that it’s the black swans, not the status quo, that cause all the problems (and, if you’re lucky, bring all the benefits), you begin to grasp that people pay attention to extreme predictions not out of some morbid curiosity or some faulty wiring in their brain, but because if there is some chance of an extreme prediction coming true, that is what they need to prepare for. Their whole life and all of society is already prepared for the continuation of the status quo; it’s the potential black swans you need to be on the lookout for.

Consequently, while I totally agree that if someone says X will happen in 2016 it’s useful to go back and record whether that prediction was correct, I don’t agree with the second, unstated assumption behind this tracking: that extreme predictions should be done away with because they so often turn out not to be true. If someone thinks ISIS might have a nuke, I’d like to know that. I may not change what I’m doing, but then again I just might.

To put it in more concrete terms, let’s assume that you heard rumblings in February of 2000 that tech stocks were horribly overvalued, and so you took the $100,000 you had invested in the NASDAQ and moved it into bonds, or cash. When the bottom rolled around in September of 2002 you would still have your $100k, whereas if you hadn’t taken it out you would have lost around 75% of your money. But let’s assume that you were wrong, that nothing happened, and that while the NASDAQ didn’t continue its meteoric rise it grew at the long-term stock market average of 7%. Then you would have made around $20,000.

For the sake of convenience let’s say that you didn’t quite time it perfectly and you only prevented the loss of $60k. That means the $20k you might have made, had your instincts proven false, was one third of the $60k you actually might have lost. Consequently you could be in a situation where you were less than 50% sure that the market was going to crash (in other words, you viewed it as improbable) and still have a positive expected value from taking all of your money out of the NASDAQ. In other words, depending on the severity of the unlikely event, it may not matter that it’s unlikely, because it can still make sense to act as if it were going to happen, or at a minimum to hedge against it. In the long run you’ll still be better off.
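
To make the expected-value claim concrete, here’s a minimal sketch in Python using the illustrative numbers above: $20k of upside from staying in, $60k of avoided loss.

```python
def ev_of_staying_in(p_crash: float, gain: float = 20_000, loss: float = 60_000) -> float:
    """Expected value of staying invested rather than moving to cash."""
    return (1 - p_crash) * gain - p_crash * loss

# Break-even is at p = gain / (gain + loss) = 20/80 = 25%. Even at 40% --
# still "improbable" -- getting out is the positive-expected-value move.
for p in (0.10, 0.25, 0.40):
    print(f"P(crash) = {p:.0%}: EV of staying in = ${ev_of_staying_in(p):+,.0f}")
```

The break-even point sits at only 25%, so a crash you consider well short of likely can still justify getting out. That’s the whole logic of hedging.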

Having said all this, you may think that the last thing I would do is offer up some predictions, but that is precisely what I’m going to do. These predictions will differ in format from Alexander’s. First, as you may have guessed already, I am not going to limit myself to predicting what will happen in 2017. Second, I’m going to make predictions which, while they will be considered improbable, will have a significant enough impact if true that you should hedge against them anyway. This significant impact means that it won’t really matter if I’m right this year or if I’m right in 50 years; it will amount to much the same regardless. Third, a lot of my predictions will be about things not happening, and with these predictions I will have to be right for all time, not just in 2017. Finally, with several of these predictions I hope I am wrong.

Here is my list of predictions. There are 15, which means I won’t be able to give a lot of explanation about any individual prediction. If you see one that you’re particularly interested in a deeper explanation of, then let me know and I’ll see what I can do to flesh it out. Also, as I mentioned, I’m not going to put any kind of a deadline on these predictions, saying merely that they will happen at some point. For those of you who think that this is cheating, I will say that if 100 years have passed and a prediction hasn’t come true then you can consider it to be false. However, as many of my predictions are about things that will never happen, I am, in effect, saying that they won’t happen in the next 100 years, which is probably as long as anyone could be expected to see. Despite this caveat I expect those predictions to hold true for even longer than that. With all of those caveats, here are the predictions, split into five categories.

Artificial Intelligence

1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.

If there were a single AI able to do everything on this list, I would consider this a failed prediction. For an examination of some of the difficulties, see this recent presentation.

2- A complete functional reconstruction of the brain will turn out to be impossible.

This includes slicing and scanning a brain, or constructing an artificial brain.

3- Artificial consciousness will never be created.

This of course is tough to quantify, but I will offer up my own definition for a test of artificial consciousness: we will never have an AI that makes a credible argument for its own free will.

Transhumanism

1- Immortality will never be achieved.

Here I am talking about the ability to suspend or reverse aging. I’m not assuming some new technology that lets me get hit by a bus and survive.

2- We will never be able to upload our consciousness into a computer.

If I’m wrong about this I’m basically wrong about everything. And the part of me that enviously looks on as my son plays World of Warcraft hopes that I am wrong; it would be pretty cool.

3- No one will ever successfully be returned from the dead using cryonics.

Obviously weaselly definitions which include someone being brought back from extreme cold after three hours don’t count. I’m talking about someone who’s been dead for at least a year.

Outer Space

1- We will never establish a viable human colony outside the solar system.

Whether this is through robots constructing humans using DNA, or a ship full of 160 space pioneers, it’s not going to happen.

2- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.

I think I’m being generous to suppose it would even get close to this number, but if it did it would still be smaller than any of the top 900 US cities, and smaller than Liechtenstein.

3- We will never make contact with an intelligent extraterrestrial species.

I have already offered my own explanation for Fermi’s Paradox, so anything that fits into that explanation would not falsify this prediction.

War (I hope I’m wrong about all of these)

1- Two or more nukes will be exploded in anger within 30 days of one another.

This means a single terrorist nuke that didn’t receive retaliation in kind would not count.

2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)

Either an external or internal conflict would count, for example a Chinese Civil War.

3- The number of nations with nuclear weapons will never be less than it is right now.

The current number is nine. (US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel.)

Miscellaneous

1- There will be a natural disaster somewhere in the world that kills at least a million people.

This is actually a pretty safe bet, though one that people pay surprisingly little attention to, as demonstrated by the near-complete ignorance of the 1976 Tangshan earthquake in China.

2- The US government’s debt will eventually be the source of a gigantic global meltdown.

I realize that this one isn’t very specific as stated so let’s just say that the meltdown has to be objectively worse on all (or nearly all) counts than the 2007-2008 Financial Crisis. And it has to be widely accepted that US government debt was the biggest cause of the meltdown.

3- Five or more of the current OECD countries will cease to exist in their current form.

This one relies more on the implicit 100-year time horizon than the rest of the predictions. And I would count any foreign occupation, civil war, major dismemberment, or change in government (say from democracy to a dictatorship) as fulfilling the criteria.

A few additional clarifications on the predictions:

  • I expect to revisit these predictions every year. I’m not sure I’ll have much to say about them, but I won’t forget about them. And if you feel that one of the predictions has been proven incorrect, feel free to let me know.
  • None of these predictions is designed to be a restriction on what God can do. I believe that we will achieve many of these things through divine help. I just don’t think we can do it ourselves. The theme of this blog is not that we can’t be saved, rather that we can’t save ourselves with technology and progress. A theme you may have noticed in my predictions.
  • I have no problem with people who are attempting any of the above or who are worried about the dangers of any of the above (in particular AI); I’m a firm believer in the prudent application of the precautionary principle. I think a general artificial intelligence is not going to happen, but for those who do, like Eliezer Yudkowsky and Nick Bostrom, it would be foolish not to take precautions. In fact, insofar as some of the transhumanists emphasize the elimination of existential risks, I think they’re doing a useful and worthwhile service, since it’s an area that’s definitely underserved. I have more problems with people who attempt to combine transhumanism with religion, as a bizarre turbo-charged millennialism, but I understand where they’re coming from.

Finally, as I mentioned above Alexander has published his predictions for 2017. As in past years he keeps all or most of the applicable predictions from the previous year (while updating the confidence level) and then incrementally expands his scope. I don’t have the space to comment on all of his predictions, but here are a few that jumped out:

  1. Last year he had a specific prediction about Greece leaving the Euro (95% chance it wouldn’t) now he just has a general prediction that no one new will leave the EU or Euro and gives that an 80% chance. That’s probably smart, but less helpful if you live in Greece.
  2. He has three predictions about the EMDrive. That could be a big black swan. And I admire the fact that he’s willing to jump into that.
  3. He carried over a prediction from 2016 of no earthquakes in the US with greater than 100 deaths (99% chance). I think he’s overconfident on that one, but the prediction itself is probably sound.
  4. He predicts that Trump will still be president at the end of 2017 (90% sure) and that no serious impeachment proceedings will have been initiated (80% sure). These predictions seem to have generated the most comments, and they are definitely areas where I fear to make any predictions myself, so my hat’s off to him here. I would only say that the Trump Presidency is going to be tumultuous.

And I guess with that prediction we’ll end.


The Politics of the Zombie Apocalypse



One of my favorite blogs is Slatestarcodex, the blog of Scott Alexander. And yes I would offer the obligatory “check it out if you haven’t already.”

As an example of the high esteem I have for his blog, I’ve started at the very beginning and I’m reading all the archives. One of his earliest posts has some bearing on the topic we were discussing in my last post, but is also interesting enough on its own account to be worth reviewing. So I’ll start with that and then tie it back to my post. His post is titled A Thrive/Survive Theory of the Political Spectrum, and in it he puts forth his own theory of how to explain the right/left, conservative/liberal divide:

…rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.

As an example of the rightist/survival mindset he offers the example of a zombie apocalypse. Imagining how you might react to a zombie apocalypse, he feels, is a great way to arrive at most of the things supported by the right/survive side of the political equation. You’d want lots of guns, you’d be very suspicious of outsiders, you’d become very religious (if there are no atheists in foxholes, there are definitely no atheists in foxholes surrounded by zombies), extreme black-and-white thinking would dominate (zombies are not misunderstood, they’re evil), etc.

For the leftist/thrive side of the spectrum he offers the example of a future technological utopia:

Robotic factories produce far more wealth than anyone could possibly need. The laws of Nature have been altered to make crime and violence physically impossible (although this technology occasionally suffers glitches). Infinitely loving nurture-bots take over any portions of child-rearing that the parents find boring. And all traumatic events can be wiped from people’s minds, restoring them to a state of bliss. Even death itself has disappeared.

As you can imagine you’d probably get the exact opposite of the previous scenario. Guns would be nearly non-existent. If you don’t have to compete for resources and violence has been eliminated most of the current objections to foreigners would be gone. Also, based on current trends in the developed world, it seems unlikely that religion would have much of a foothold, nurture bots would make marriage vestigial, etc.

I find his theory very compelling, it makes as much sense as any of the theories I’ve come across, and I have no problem granting that it’s probably accurate. Which leads us to an examination of the implications of the theory, and this is where I think it gets really interesting.

The first thing to consider is which view of the future is more likely to be accurate. Is it going to be closer to the technological utopia or the zombie apocalypse? I think my own views on this subject are pretty clear. (Though as I mentioned way back in the first post I think we’re more likely to see a gradual catabolic collapse than a Mad Max/Walking Dead scenario.) But I’m also on record as saying that I could very well be wrong. Given that we can’t predict the future, what’s more important is not to try and guess what will happen, to say nothing of trying to plan around those guesses, but rather to choose the course where the penalty for being wrong is the smallest.

In other words, if the world prepares for disaster and instead we end up with robotic factories that produce everything we could possibly need, then it’s fine; yes, we wasted some time and resources preparing for disaster, but in light of the eventual abundance it was a small price to pay. But if the world pins its hopes on robotic factories and we end up with roving zombies, then people die, which I understand is much worse than wasting time and money.

Of course one might immediately make the argument that by preparing for disaster we could slow down or actually prevent the technological utopia. Obviously that argument is not easy to dismiss, particularly since, generally, planning for A makes it harder to accomplish B. This is especially true if B is the opposite of A. Thus, on its face that argument would appear to be compelling. But let’s look at how things are actually playing out.

If we want robotic factories then we need to spend resources inventing them. More generally, the best way to guarantee the technological utopia is to put as many resources as we can into innovation. So how are our resources allocated? According to this chart, 41% of US GDP goes to the government, not the first place that comes to mind when you think of innovation. It’s still possible that some innovation might emerge from that spending, but if it does it will most likely come from military spending, the area leftists would most like to cut. I would argue that innovation is least likely to come from entitlement spending, the area leftists are most desirous to expand. In other words, at first glance the people planning on the utopian future may, paradoxically, be the people least likely to bring it about.

Of course there’s still the remaining 59% of the economy. It’s certainly conceivable that leftists could be so much better at encouraging innovation in that area of the economy that it makes up for whatever distortions they bring to the percent of GDP consumed by the government. On this count I see evidence going both ways. I think the generally laissez-faire attitude of the rightists is much better for encouraging innovation. On the other hand, the hub of modern innovation is San Francisco, a notoriously leftist city. On the gripping hand, you have things like Uber not being able to operate in SF because of regulations. Personally I would again say that rightists are better at encouraging innovation than leftists; at best I have a hard time seeing it as anything other than a wash. Also, as our affluence increases, the percentage of GDP that goes to government also increases, which takes us back to the first argument.

Remember, in the end we don’t even need to show that rightists are better at innovation, just that their focus on survival doesn’t fatally injure the prospects of the technological utopia, and I see no compelling evidence that it does.

Having progressed this far, we have the survive/rightist side of the aisle being great as a just-in-case measure, one which doesn’t slow down the thrive/leftist side and may actually speed it up. In fact, at this point you may think that Alexander obviously created the post as a defense of rightism, and many of the commenters on his blog felt the same way, but that was not the case. Here’s his response:

…this post was not intended to sell Reaction [rightism/survive]. If anything, it was about how it was adapted for conditions that no longer exist. If you’re in a stable society without zombies, optimizing your life for zombie defense is a waste of time; working towards not-immediately-survival-related but nice and beautiful and enjoyable things like the environment and equality and knowledge-for-knowledge’s sake may be an excellent choice.

Does he have a point? Is the survive mindset a relic of the past which now just represents a waste of time and resources? This is where we return to my last post. If you haven’t read it, here’s the 30-second summary: some smart, concerned people wanted poor countries to use opiates like morphine to ease the pain of the dying. They refused. Instead it was the rich countries who started using opiates, leading to the deaths of an additional 100,000 people, just in the US, from prescription opiate overdoses.

This is a great example of the thrive/survive dichotomy. In typical survive fashion, the poor countries were not worried about easing the pain of people who were effectively already dead; they were a lot more worried about addiction and overdosing among the young, healthy population. Whereas in typical thrive, we-shouldn’t-have-to-worry-about-anything fashion, the rich world prescribed opiates like candy. In our post-scarcity world, why should anyone have to worry about pain? But as it turned out, despite our living in what is arguably already a technological utopia (I mean, have you seen this thing called the internet?!?), heroin is still really addictive. And using technology to switch a few molecules around and slap a time-release coating on it (and calling it OxyContin) didn’t make as much of a difference as people hoped.

This should certainly not be taken as sufficient evidence to say that “survive” is superior (though I think that’s where we’re headed) but it should at least serve as sufficient evidence to refute the idea that the conditions where the survive mindset is beneficial “no longer exist.”

So we have 100,000 people, at least, who wish the needle had been a little bit more on the survive end of the dial and a little bit less on the thrive end. With a number like that, one starts to wonder why we even have people who are optimized for thrive. Well, just like everything, it goes back to evolution. Of course, anytime you start putting forth an evolutionary explanation for things you’re in danger of constructing a just-so story, though this particular theory does have some evidence behind it. Here Alexander and I are once again largely in agreement, so I’ll pass it back to him:

Developmental psychology has gradually been moving towards a paradigm where our biology actively seeks out information about our environment and then toggles between different modes based on what it finds. Probably the most talked-about example of this paradigm is the thrifty phenotype idea, devised to explain the observation that children starved in the womb will grow up to become obese

Coincidentally, I came across another example of this just the other day. My research began when I came across an article indicating that Dawkins’s theory of the Selfish Gene had fallen out of favor, and I wanted to know why. As it turns out, this paradigm of phenotypical toggling was a big reason. The example given by this article, dealing with the problems of the Selfish Gene, concerned grasshoppers and locusts. What people didn’t realize until very recently is that grasshoppers and locusts are the same species; grasshoppers turn into locusts when a switch is flipped by environmental cues. Continuing with Alexander:

It seems broadly plausible that there could be one of these switches for something like “social stability”. If the brain finds itself in a stable environment where everything is abundant, it sort of lowers the mental threat level and concludes that everything will always be okay and it’s job is to enjoy itself and win signaling games. If it finds itself in an environment of scarcity, it will raise the mental threat level and set its job to “survive at any cost”.

In other words, humans switch to thrive when things are going well because it works better, and when things aren’t going well they switch to survive because that works better. Of course the immediate question is: what does it mean for something to “work better”? Since we’re talking about evolution, working better means reproductive success, or having more offspring. The fact that the people most associated with the thrive side of things have the fewest children seems like a big flashing neon sign, one which makes me want to switch to a completely separate topic, but I’m going to resist.

Also, if we’re talking in terms of an evolutionary response, the thrive side of things has to have been a potential strategy for a long, long time. It can’t have been something that developed in the last 100 years, or even the last 500 years. We’re talking about something that’s been around for probably tens of thousands of years. Thus, any theory about its benefits would have to encompass a prehistorical reason for the thrive switch to exist.

As I warned earlier, discussions like this are apt to look like just-so stories, so if even the hint of ad hoc reasoning bothers you, you should skip the next five paragraphs.

Obviously one category of people who might benefit from the thrive switch would be whoever ends up being in the ruling class. You might think that’s too small a category to deserve its own evolutionary switch, but I direct your attention to the fact that 1 in every 200 men is a descendant of Genghis Khan, and the related finding that there were more mothers than fathers in the past, indicating strong polygyny, almost certainly concentrated in the ruling class. What this implies is that even if something is only triggered a small amount of the time, it could have a disproportionate evolutionary effect. Sure, you might only be on top of the heap a short time, perhaps only a few generations, but a switch to take advantage of that could have an enormous long-term effect.

If we’re willing to grant that the thrive switch was largely designed to take advantage of your time on top, and we’re willing to see where speculation might take us (you were warned) it generates some interesting ideas.

First it definitely explains the promiscuity. It explains the hedonism. It explains the enormous focus on jockeying for status and signalling games. But so far I haven’t departed that much from Alexander’s position. What if I told you it explains microaggressions?

The concept of microaggressions has been much discussed over the last few years. Most people view it as a new and disturbing trend. But microaggressions have been around forever; up until now, however, they were restricted to royalty. In dealing with royalty you have to be careful not to give the slightest hint of offense, to use exactly the right words when addressing them. Can anyone look at this chart explaining the proper form of address for royalty and tell me it’s not the most elaborate system ever for avoiding microaggressions? Is the rising objection to microaggressions an unavoidable consequence of the increasing dominance of the thrive paradigm?

Okay, perhaps that’s a stretch. With speculation and just-so-story time over, we’ll return to firmer ground.

Much of what we understand about the kind of evolutionary switching we’re talking about comes from game theory, and the classic example of game theory is the prisoner’s dilemma. The iterated prisoner’s dilemma is often used as a proxy for group dynamics and evolution. In this case the strategy that works best is tit-for-tat, but game theory also tells us that occasionally, particularly in the short term, it can be advantageous to defect. Could the thrive switch be just this? That when the rewards for defecting reach a certain level, the switch flips and the individual defects? The exact nature of the defection (and of the abandoned cooperation) is not entirely clear to me, but we are still talking about a certain payoff leading to a switch in strategy. And you don’t have to be a hardcore libertarian to think that the baron in his castle has a more predatory relationship with the peasant than the peasant has with another peasant.
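
For anyone who hasn’t seen it in code, here’s a minimal sketch in Python of the iterated game. The payoff values are the standard textbook ones, and the whole thing is my illustration, not anything from Alexander’s post.

```python
# Standard prisoner's dilemma payoffs (assumed textbook values):
# mutual cooperation 3/3, mutual defection 1/1, lone defector 5, sucker 0.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy whatever the opponent did last."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy sees only the opponent's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): the defector's first-round windfall
```

Head-to-head the defector comes out ahead (14 to 9), but both players do far worse than a pair of cooperators (30 each). Defection pays in the short term and against the right opponents, which is exactly the kind of payoff a thrive-style switch could be keyed to.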

I admit that I am once again speculating to a large degree. But this speculation proceeds from some reasonable assumptions. Assumption one: the thrive switch works in conjunction with the survive switch; there’s a reason grasshoppers aren’t locusts 100% of the time. Assumption two: this symbiotic relationship has not gone away (see the previous point about opiates). Assumption three: there are unseen reasons for the historical equilibrium between the two modes. In other words, one could certainly imagine that the thrive strategy relies on having a certain level of surrounding survive. That, evolutionarily speaking, a society that’s 20% thrive and 80% survive works great, but a society in which those numbers are reversed works horribly, or is in any case much more fragile than the society which is only 20% thrive.

How might we test this? What would count as evidence for an imbalance between the survive and thrive portions of society? What would count as evidence of the imbalance being dangerous? I can think of a few things:

-College: This area could provide a blog post or three all on its own. As Alexander says, if you’re in thrive mode then pursuing “knowledge-for-knowledge’s sake may be an excellent choice.” But there’s definitely a strong case to be made that we’ve reached a point where too many people go to college. And even if you agree with the general benefit of college and want it spread as widely as possible, you can still probably agree that too many people take on too much debt to get degrees in fields with very little economic benefit. If that’s not evidence of a thrive imbalance, then I think you have to invalidate the entire construct.

-Debt: I’m reminded of an exchange in Anna Karenina where one of the main characters complains of being in debt. The nobles he’s with ask how much, and he responds with the amount of twenty thousand roubles, and they all laugh at him because it’s so small. One of the nobles is five million roubles in debt on a salary of twenty thousand a year. This to me encapsulates the idea that debt is something that was traditionally only available to the wealthy. But today we have a staggering amount of debt at all levels. I was just reading in The Economist that the unfunded pension liability in 20 OECD countries is $78 trillion. That’s an amount that takes a minute to sink in, but for help, $78 trillion is about the world’s GDP for an entire year. Now maybe Krugman and Yglesias and Keynes are all correct and government debt (even $78 trillion of it) is no big deal, but what about consumer debt, and student debt, and corporate debt? Is it all no big deal?

-Virtue Signalling: I mentioned signalling games earlier, and you may still be unclear on what those actually are. Well as Alexander explains:

When people are no longer constrained by reality, they spend most of their energy in signaling games. This is why rich people build ever-bigger yachts and fret over the parties they throw and who got invited where. It’s why heirs and heiresses so often become patrons of the art, or donors to major charities. Once you’ve got enough money, the next thing you need is status, and signaling is the way to get it.

So the people of this final utopia will be obsessed with looking good. They will become moralists, and try to prove themselves more virtuous than their neighbors.

In a virtue-signalling arms race it becomes harder and harder to establish that you are truly the most virtuous, and as a result virtue gets sliced into smaller and smaller parts. If three genders (male, female, and other) is virtuous, surely seven is more virtuous, thirty-one still more virtuous, and fifty-one the most virtuous of all (until someone comes along with their list of sixty-three or, not to be outdone, seventy-one). Is this evidence of a thrive/survive imbalance? It sure looks like one, and of course this is also just one example. Is it evidence of the imbalance being dangerous? That I’m less sure about; I guess it depends on how far the arms race goes. I have a hard time imagining that we’ll eventually reach the point where murdering the transphobic is considered more virtuous than yelling at them, but honestly I never imagined we’d get as far as we have already.

Whether you accept these three points as evidence of a dangerous imbalance will largely depend on how closely your own biases and prejudices match mine. I’m certainly not the only one who thinks that worthless college degrees, massive debt, and the virtue arms race are problems. I just may be the only one who has tried to tie them to a single cause.

Since this is technically an LDS blog (though I’ve hidden it very well the last couple of posts), you might constructively wonder what the Church’s stance on all this is. The Church would strenuously object to an accusation that everyone in the Church is a Republican (particularly in light of the current candidate), and would probably also object (albeit perhaps less strenuously) to being labeled a right-wing organization. But with their emphasis on food storage, avoiding debt, chastity, and family, would they or anyone else object to being labeled a “survive” organization?