
The Bifurcation Created by Technology



I.

I’ve tried to start tweeting more consistently, though not because I like to tweet. As you may be able to tell from the length of my essays, tweeting is the exact opposite of my preferred style of communication. Unfortunately, Twitter is where all the action happens, so if I want to be a public intellectual, I have to tweet. I don’t know that I want to be a public intellectual, nor have I ever claimed to be one. But I haven’t found any better label for what I’m trying to do, so I guess that’s the direction I hope to go in, however silly and conceited that may sound at this point.

Of course the consensus is that Twitter is a dumpster fire encased in high-level nuclear waste, and that anyone with the least interest in maintaining their mental health should avoid it like the plague. Though I mostly see that sentiment expressed by people who are already well known enough that their audience will find them regardless, giving them the option of avoiding the radioactive inferno. That does not describe me, though so far the greatest difficulty I’ve encountered is remembering that it’s there. Apparently the deep red of the uranium fire does not hold the same appeal for me as it does for others.

In any case, I digress. I mention the tweeting in case there are readers out there who might be interested in following my infrequent tweets (I’ve gotten pretty good at tweeting at least once a day). I also mention it because the idea for this post started as a tweet. (I guess I need to figure out how to embed tweets, but until then, I’ll just quote it.)

Technology bifurcates problems. 99% of problems go away, but the 1% that are left are awful. Case in point customer service: 99% of the time you don’t even need customer service from Amazon, etc. but the 1% of the time you do you’re suddenly in a story by Kafka.

I mentioned Amazon in my tweet, completely missing that, contextually, it would have been better to mention Twitter, since it has basically the same problem. We probably all know someone who has been temporarily banned from Twitter for tweeting something objectionable. The initial choice (as I understand it, remember I’m not a power user) is to delete the offending tweet or to appeal. Nearly everyone deletes the offending tweet, because they know that appealing puts them in the aforementioned story by Kafka. And should it happen a second time, then appealing switches from Kafka to Dante: “abandon hope all ye who enter here”. All of which is to emphasize my initial point: 99%, probably even 99.9%, of Twitter users never need customer service. The platform just runs on its own, without the users ever running into problems which need special intervention. But in the edge cases where it doesn’t run smoothly, the process is completely opaque, unresponsive, and ineffective.

Despite how bad it is, as far as I can tell Twitter does much better than Amazon and Google. The internet is full of stories of people who had their Amazon seller account closed—frequently for things they never did. (This reddit story represents a typical example.) And you may have caught the New York Times story from last month about the father who took pictures of his toddler’s genitals to send to the child’s pediatrician, and who ended up losing everything he had with Google (contacts, email, photos, etc.) because the pictures he sent were flagged as child pornography by Google’s algorithms. And not only that, Google also referred him to the police. All of this is bad, but from a societal perspective the worst was yet to come.

Google, despite being contacted by the NYT and having the situation explained to them, refused to budge. You might imagine that this is just a principled stand on their part, that they have zero tolerance for stuff like this. Or you might imagine the inverse: they’re worried that if they reversed their decision they would appear soft on the issue. I don’t think it’s either of those things. I think they’re incredibly invested in the idea that their algorithms can handle content moderation, and that’s the position they don’t want to undermine. Because if the algorithms are shown to have holes and flaws, then they might have to spend a lot of time and money getting humans to do customer service, which is the exact opposite of the direction they’ve been trying to go since their founding.

Before moving on, as I was re-reading this story, I came across yet another consequence of Google’s villainy. This one, out of all of the consequences this man suffered, really hit home for me. “He now uses a Hotmail address for email, which people mock him for…” I’ll admit that made me laugh. But also, yeah, I totally get it.

In any case I think the customer service angle is pretty straightforward. The question is how broadly we can apply this observation: where else might it be happening? What other harms might it be causing? In order to answer that, I think we need to start by examining the process which brings it into existence in the first place.

II.

Initially everything is hard and time consuming. I own a small custom software company that’s a million times smaller than Amazon. (Okay, not literally, but pretty close.) But my company also solves problems with software. At this point if one of my customers has a problem they come directly to me and I fix it (or, more likely, my younger, more talented, better-looking partner does). There’s a lot of friction and a lot of overhead to that process. Gradually we hope to be able to smooth all of that out. The first step is to hope that whatever we fix stays fixed. We also hope to be able to nip problems in the bud by reusing proven solutions. Finally, we automate solutions for the most common problems. (Think of the “forgot password” button; a sketch follows below.) While this represents only a small portion of our efforts, it’s a large part of what the big dawgs are doing.
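To make that last step concrete, here’s a minimal sketch of the kind of self-service automation the “forgot password” button represents: the single most common support request, handled end to end with no human in the loop. This is a hypothetical illustration, not our actual product code, and the names are mine.

```python
import secrets
import time

# Hypothetical sketch of an automated password-reset flow (not actual product code).
RESET_TTL_SECONDS = 15 * 60        # reset links expire after 15 minutes
_pending_resets = {}               # token -> (email, expiry timestamp)

def request_password_reset(email: str) -> str:
    """Issue a single-use, time-limited reset token; a real system would email a link."""
    token = secrets.token_urlsafe(32)
    _pending_resets[token] = (email, time.time() + RESET_TTL_SECONDS)
    return token

def redeem_reset_token(token: str):
    """Return the account email if the token is valid and unexpired, else None."""
    entry = _pending_resets.pop(token, None)   # single use: valid or not, it's gone
    if entry is None:
        return None                            # unknown token: this is where a human gets involved
    email, expiry = entry
    return email if time.time() < expiry else None
```

The point isn’t the code itself: every request a flow like this handles is one that never reaches me or my partner, and every request it can’t handle arrives on our desk with no context attached.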

Through all of these tactics, gradually we move things from the “hard and time consuming” column to the “never have to worry about it” column. Or at least we hope we never have to worry about them, but “never” is a very long time, and it’s difficult to implement a solution that covers every possible eventuality. There are always going to be edge cases, unique situations and circumstances which combine in ways we didn’t expect. For problems like these you have to get a human involved, ideally a human with the authority to fix things, who’s also smart enough to understand why the situation is unique. (That’s often the problem I run into with customer service. If I’m calling you, it’s not something I could fix just by Googling, and telling me to restart my router again is not helpful, it’s infuriating.) But I’m going to argue that for really difficult problems, it goes beyond even all of these things. You actually need a human who’s wise.

It’s unclear whether the gentleman who had such difficulties with Google was even able to talk to an actual human, let alone a wise one. Certainly I don’t know how to reach a live person at Google, though I confess I’ve never tried. (Which is probably exactly the behavior they like to encourage.) The NYT obviously talked to someone, but again, even though they definitely had an actual conversation, there doesn’t appear to have been an abundance of wisdom involved. But to be fair, the amount of intelligence and wisdom required to solve these problems just keeps increasing. Because the problems left over after the implementation of all this technology are worse than they would have been if the technology never existed. To be clear I’m not arguing that the overall situation is worse (at least not yet). I’m pointing out that the top 1% of all problems are way worse when the other 99% is automated than when it’s not. 

How does this happen? Well, let’s move on to a different example, one where the stakes are higher than being forced to switch to Hotmail.

III.

That initial tweet was followed up with one more. (I was on fire that day!)

Additional thoughts/example: Self driving cars. Tech can take care of easiest 99%. Tosses most difficult 1% back to driver. Driver has no context, just suddenly in deep end, therefore much worse at hardest 1% than if they had just dealt with the full 100% from start.

Let me expand that from its abbreviated, staccato form. If not now, then soon, self-driving cars will be able to take care of all the easy bits of driving. All the way back in 2015 my Subaru came with adaptive cruise control, which appears to be the lowest of all the low-hanging fruit, and I’m sure many of you have Teslas, which are several generations more advanced still. But no car can take care of 100% of driving, and the driving they can’t take care of is the most difficult driving of all.

The difficult 1% falls into two categories. First there are the sudden calamities: the car on a cross street running a red light, or debris falling off the pickup truck driving just in front of you, etc. 

The second category is bad weather. It’s my understanding that self-driving cars are not great at handling heavy rain, and are completely stymied by heavy snow. Luckily, unlike the examples from the first category, weather is not generally something that gets sprung on you suddenly. Nevertheless, it requires a whole suite of skills which are built up by doing a lot of moderately difficult driving, not all of it in bad weather. In the same fashion that speaking academic English is helped by being able to speak conversational English, lots of normal driving helps one develop the skills necessary to tackle bad-weather driving. Which is not to say that driving in snow does not have its own unique challenges. This is why some municipalities where snow is rare shut things down entirely when it does come. Is this the situation we have to look forward to? A future where neither humans nor auto-pilots can handle inclement weather, and so when it happens everything shuts down? Perhaps, but that’s not really an option in many places. What’s more likely is that a greater and greater percentage of all the driving humans do will be done during times of extreme weather, with very little experience outside of that. Should this be the case, self-driving cars will have made all the driving that does get done significantly more difficult.

Returning to the first category, those situations where conditions suddenly change are more what I was referring to in my tweet. Times when the self-driving car has lulled you into a state of inattentiveness (something that happens to me just using adaptive cruise control), even though it’s understood, as part of the deal, that the car can’t handle everything. So when the light turns green it’s your responsibility to notice the Mustang coming from the left, whose driver decided, incorrectly, that they could beat the light if they punched it up to 60. Of course you might not notice it regardless of the level of auto-pilot your car has, but the chance of you missing it goes way up if you’ve been relying on auto-pilot for everything else.

Having a car run a red light at high speed is presumably outside the ability of most auto-pilots to detect. On the other hand, there are some things the auto-pilot has no problem detecting, it just doesn’t know what to do with them. I mentioned debris falling out of a pickup truck. The car can probably detect that, but is this a situation where it’s better to slam on the brakes or swerve? I don’t claim to be an expert on exactly how every current auto-pilot functions, but I think most of them are not equipped to swerve. And it’s not clear how much you want to trust even those cars that are equipped to swerve. This means that it’s up to the person to immediately seize control and make the decision. Fortunately the car should sound a collision alarm, but if that’s the first point at which you become aware of the debris, you’ve already lost valuable time.

Ideally, in order to know whether to swerve or brake, you’d want to have a pretty good sense of where the other cars are on the road, particularly whether anyone is currently hanging out in your blind spot. All of this is unlikely if you haven’t really been paying attention. Deciding whether to brake or swerve when suddenly confronted with road debris is in the top 1% of difficulty. And of course the decision is more complicated than that; there are some situations where the very best thing to do is run over the debris. The point is that for the foreseeable future, using auto-pilot would almost certainly make this very difficult decision even more difficult.

IV.

Thus far we’ve covered the two examples that are the most straightforward (though perhaps you’ve already thought of other, equally obvious examples). Now I want to move into examples where it’s not quite as obvious, but where I think this idea might still have some explanatory utility. I’m going to touch on each example briefly, just long enough for you to get a sense of the area I’m talking about. I’m going more for “what are your thoughts about that?” than “here’s why this is also an example of the bifurcation I’ve been talking about.”

Was it a factor with the pandemic? We have used technology to routinize numerous aspects of healthcare, such that for 99% of problems we have a system. There’s a specialist you can go to, a medicine you can take, or an operation which can be performed. But when the most difficult health problem of the last 100 years came along in the form of COVID, and it didn’t fit into any of our routines, we seemed pretty bad at dealing with it. Worse than we had been historically, particularly if you factor in the tools available then vs. the tools available now. Additionally, the bureaucracy we had created to deal with the lower 99% of problems ended up really getting in the way when it came to dealing with the top 1%, i.e. a global pandemic.

Then there are societal problems like homelessness and drug addiction. We have also implemented significant civic technology in this area. Employment is pretty easy to find. Signing up for social programs is straightforward. Just about anybody who wants to go to college can. We’ve taken care of a lot of things which used to be dealt with at the level of the individual, the family, or the community. There was a lot of variability in the service offered by those entities, and oftentimes they failed, spectacularly. This is the reason for the various civic technologies that have emerged, and as a result of these technologies we’ve gotten pretty good at the 99%. But what’s happened to the 1%? As I’ve talked about frequently, drug overdose deaths are through the roof. The systems we’ve created are great at dealing with normal problems, like just not having enough food, but with the really knotty problems, like opioid addiction, we seem to have gotten worse.

Does this bifurcation apply in the arena of war? Since WWII we’ve managed to keep 99% of international conflicts below the level of the Great Powers. This has rightly been called the Long Peace. And it’s been pretty nice. But as the situation in Ukraine gets ever more perilous, are we about to find out what the really difficult 1% looks like? The type of war our international system was unable to tame? Essentially what I’m arguing here is that our diplomatic muscles have atrophied. We’re not used to negotiating with powerful countries who truly have their backs against the wall. That was fine 99% of the time, but in the 1% of the time we need that skill, we’ve lost the ability to exercise it.

What about energy generation? We are fantastic at generating power. The international infrastructure we’ve built for getting oil out of the ground and then transporting it anywhere in the world is amazing. We’ve also gotten really good at erecting windmills and putting up solar panels. But somehow we just can’t seem to build nuclear power plants in a cost-effective way. It clearly is in that top 1% of difficulty, and as near as I can tell by getting really good at the other 99% we’ve essentially decided to just give up on that remaining 1%. But of course that 1% ends up being really important.

I think I may have stretched this idea to its breaking point, and maybe even past that, but I would be remiss if I didn’t discuss how this idea relates to my last post. Because at first glance they seem to be contradictory. In the last post I said we put too much attention on the tails, and in this post I seem to be saying we’re not putting enough attention there. To be honest this contradiction didn’t occur to me until I was well into things, and for a moment it puzzled me as well. Clearly one explanation would be that I’m wrong now, or that I was wrong then, or that I’m wrong both times. But (for possibly selfish reasons) I think I was right both times, though the interplay between the two phenomena was subtle. 

In our current land grab for status, people are racing towards the edges, but that doesn’t mean that the extreme edge, the 1%, gets more attention. In fact, the exact opposite: it gets buried by the newcomers. Freddie deBoer has done a lot of great work here, and I could pick any of a dozen articles he’s written, but perhaps his newsletter from this morning will suffice. As usual his titles don’t leave much to the imagination: “We Can’t Constructively Address Online Mental Health Culture Without Acknowledging That Some People Think They Have Disorders They Don’t”. As a result of people misdiagnosing themselves, you end up in a situation where, out of all the people who claim to have a particular disorder, a significant percentage, let’s say 80%, don’t have it at all, or if they do it’s subclinical. Then figure an additional 15% of people have very mild cases, and the remaining 5% have a serious affliction. This 5% ends up basically being the 1% I’ve mentioned above, who don’t get the level of help they need because they’re competing for resources with the 95% of people who have mild or nonexistent cases. Which takes us back to the same bifurcation I’ve been talking about.

V.

Some of you may have noticed that I’ve neglected a very significant counterargument. Possibly, some of you may be impotently yelling at me through your screen at this very moment: I’ve never discussed the ROI of this arrangement. In other words, this bifurcation could leave all of us better off. Take the example of the self-driving car. Around 40,000 people die every year in automobile accidents. Let’s say that 20% of those deaths come in situations auto-pilots are ill-equipped to deal with, but the other 80% of deaths would be completely eliminated if all cars were self-driving. Unless that difficult remainder ends up being five times more deadly because of overreliance on auto-pilot, we would be better off entirely switching to self-driving cars. Far fewer people would die.
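Here is that break-even arithmetic spelled out, using the purely illustrative numbers above:

```python
# Break-even arithmetic with the purely illustrative numbers above.
annual_deaths = 40_000        # rough annual US automobile deaths
hard_share = 0.20             # share of deaths in situations auto-pilots handle poorly

eliminated = annual_deaths * (1 - hard_share)   # 32,000 deaths automated away
remaining = annual_deaths * hard_share          # 8,000 deaths still on human drivers

# How much deadlier can the hard remainder get before the switch stops paying off?
break_even = annual_deaths / remaining
print(eliminated, remaining, break_even)        # 32000.0 8000.0 5.0
```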

Beyond this, most people imagine that eventually we’ll get to 100%. That someday, perhaps sooner than we think, self-driving cars will be better than human drivers in all conditions. And at that point there really won’t be anything left to discuss. While the first point is valid, this second point partakes more of hubris than practicality. Truly getting to 100% would be the equivalent of creating better-than-human-level AI, i.e. superintelligence. And if you follow the debates around the risk of that, you know that the top 1% of bad outcomes are existential.

Still, what about the first point? It is a good one, but I think we still need to keep three things in mind:

1- The systems we create to automate the 99% end up shifting complexity, and complex systems are fragile. We should never underestimate the calamities that can be created when complex systems blow up. I’m not prepared to say that CDOs (collateralized debt obligations) are an example of this phenomenon, but they very well could be, and their existence took the 2007-2008 financial crisis to a whole new level, despite the fact that most people had never even heard of them.

2- By focusing on technology we may be overlooking the truly worrisome aspect of this phenomenon. In theory we can turn technology off, or reprogram it. But to the extent we’re seeing this with softer systems (healthcare, diplomacy, energy generation) things could be much worse. The consequences take longer to manifest and are more subtle when they do. It’s far less clear that the ROI will eventually be positive.

3- Even if it’s absolutely true that we have improved the ROI, it doesn’t mean that we shouldn’t keep the 1% in mind and attempt to mitigate it. We have a tendency to want to stretch our systems as far as we think they will go. But perhaps we don’t need to stretch them quite so far. It might turn out that the sweet spot is not always maximum automation. That Amazon could afford to hire a few more actual humans. That self-driving systems might work in concert with humans rather than trying to replace them. That rather than ignoring the 1% because we’ve solved the 99%, we can once again decide to do hard things.

This post may or may not have been inspired by an actual experience with Amazon. Though I will say that if you ship something back for a refund, be sure to keep the shipping receipt with the tracking number. This experience, which may or may not have happened, is why I deal with everything related to this podcast personally. If you appreciate this lack of automation, consider donating.


Not Intellectuals Yet Not Idiots



Back at the time of the Second Gulf War I made a real attempt to up my political engagement. I wanted to understand what was really going on. History was being made and I didn’t want to miss it.

It wasn’t as if before then I had been completely disengaged. I had certainly spent quite a bit of time digging into things during the 2000 election and its aftermath, but I wanted to go a step beyond that. I started watching the Sunday morning talk shows. I began reading Christopher Hitchens. I think it would be fair to say that I immersed myself in the arguments for and against the war in the months leading up to it. (When it was pretty obvious it was going to happen, but hadn’t yet.)

In the midst of all this I remember repeatedly coming across the term neocon, used in such a way that you were assumed to know what it meant. I mean, doesn’t everybody? I confess I didn’t. I had an idea from the context, but it was also clear that I was missing most of the nuance. I asked my father what a neocon was and he mumbled something about them being generally in favor of the invasion, and then, perhaps realizing that he wasn’t 100% sure either, said Bill Kristol is definitely a neocon; listen to him if you want to know.

Now, many years later, I have a pretty good handle on what a neocon is, which I would explain to you if that were what this post was about. It’s not. It’s about how sometimes a single word or short phrase can encapsulate a fairly complicated ideology. There are frequently bundles of traits, attitudes, and even behaviors that resist an easy definition, but are nevertheless easy to label. Similar to the definition of pornography used by Justice Stewart when the Supreme Court was considering an obscenity case:

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description [“hard-core pornography”], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it. (my emphasis)

It may be hard to define what a neocon is exactly, but I know one when I see it. Of course, as you have already surmised, neocon is not the only example of this. Other examples include hipster, or social justice warrior, and lest I appear too biased towards the college millennial set, you could also add the term “redneck” or perhaps even Walmart shopper.

To those terms that already exist, it’s time to add another one: “Intellectual Yet Idiot” or IYI for short. This new label was coined by Taleb in just the last few days. As you may already be aware, I’m a big fan of Taleb, and I try to read just about everything he writes. Sometimes what he writes makes a fairly big splash, and this was one of those times. In the same way that people recognized that there was a group of mostly Jewish, pro-Israel, idealistic unilateralists, with a strong urge to intervene, who could be labeled as neocons, it was immediately obvious that there was an analogous bundle of attitudes and behavior that is currently common in academia and government, and it also needed a label. Consequently, when Taleb provided one it fit into a hole that lots of people had recognized, but no one had gotten around to filling until then. Of course, now that it has been filled it immediately becomes difficult to imagine how we ever got along without it.

Having spent a lot of space just to introduce an article by Taleb, you would naturally expect that the next step would be for me to comment on the article, point out any trenchant phrasing, remark on anything that seemed particularly interesting, and offer amendments to any points where he missed the mark. However, I’m not going to do that. Instead I’m going to approach things from an entirely different perspective, with a view towards ending up in the same place Taleb did, and only then will I return to Taleb’s article.

I’m going to start my approach with a very broad question. What do we do with history? And to broaden that even further, I’m not only talking about HISTORY! As in wars and rulers, nations and disasters, I’m also talking about historical behaviors, marriage customs, dietary norms, traditional conduct, etc. In other words if everyone from Australian Aborigines to the indigenous tribes of the Amazon to the Romans had marriage in some form or another, what use should we make of that knowledge? Now, if you’ve actually been reading me from the beginning you will know that I already touched on this, but that’s okay, because it’s a topic that deserves as much attention as I can give it.

Returning to the question. While I want “history” to be considered as broadly as possible, I want the term “we” to be considered more narrowly. By “we” I’m not referring to everyone, I’m specifically referring to the decision makers, the pundits, the academics, the politicians, etc. And as long as we’re applying labels, you might label these people the “movers and shakers” or less colloquially the ruling class, and in answer to the original question, I would say that they do very little with history.

I would think claiming that the current ruling class pays very little attention to history, particularly history from more than 100 years ago (and even that might be stretching it), is not an idea which needs very much support. But if you remain unconvinced allow me to offer up the following examples of historically unprecedented things:

1- The financial system – The idea of a floating currency, without the backing of gold or silver (or land), has only been around for, under the most optimistic estimate, 100 or so years, and our current run only dates from 1971.

2- The deemphasis of marriage – Refer to the post I already mentioned to see how widespread even the taboo against pre-marital sex was. But also look at the gigantic rise in single parent households. (And of course most of these graphs start around 1960, what was the single parent household percentage in the 1800s? Particularly if you filtered out widows?)

3- Government stability – So much of our thinking is based on the idea that 10 years from now will almost certainly look very similar to right now, when any look at history would declare that to be profoundly, and almost certainly, naive.

4- Constant growth rate – I covered this at great length previously, but once again we are counting on something continuing that is otherwise without precedent.

5- Pornography – While the demand for pornography has probably been fairly steady, the supply of it has, by any estimate, increased a thousandfold in just the last 20 years. Do we have any idea of the long-term effects of messing with something as fundamental as reproduction and sex?

Obviously not all of these things are being ignored by all people. Some people are genuinely concerned about issue 1, and possibly issue 2. And I guess Utah (and Russia) is concerned with issue 5, but apparently no one else is. In fact, when Utah recently declared pornography to be a public health crisis, reactions ranged from skeptical to wrong all the way up to hypocritical and, the capper, accusations of pure pseudoscience. In my experience you’ll find similar reactions to people expressing concerns about issues 1 and 2. They won’t be quite so extreme as the reactions to Utah’s recent actions, but they will be similar.

As a personal example, I once emailed Matt Yglesias about the national debt, and while he was gracious enough to respond, that response couldn’t have been more patronizing. (I’d dig it up, but it was in an old account; you can find similar stuff from him if you look.) In fact, rather than ignoring history, as you can see from Yglesias’ response, the ruling class often actively disdains it.

Everywhere you turn these days you can see and hear condemnation of our stupid and uptight ancestors and their ridiculous traditions and beliefs. We hear from the atheists that all wars were caused by the superstitions of religions (not true by the way). We hear from the libertines that premarital sex is good for both you and society, and any attempt to suppress it is primitive and tyrannical. We hear from economists that we need to spend more and save less. We heard from doctors and healthcare professionals that narcotics could be taken without risk of addiction. This list goes on and on.

For a moment I’d like to focus on that last one. As I already mentioned I recently read the book Dreamland by Sam Quinones. The book was fascinating on a number of levels, but he mentioned one thing near the start of the book that really stuck with me.

The book as a whole was largely concerned with the opioid epidemic in America, but this particular passage had to do with the developing world, specifically Kenya. In 1980 Jan Stjernsward was made chief of the World Health Organization’s cancer program. As he approached this job he drew upon his time in Kenya years before his appointment. In particular he remembered the unnecessary pain experienced by people in Kenya who were dying of cancer. Pain that could have been completely alleviated by morphine. He was now in a position to do something about that, and, what’s more, morphine is incredibly cheap, so there was no financial barrier. Accordingly, taking advantage of his role at the WHO, he established norms for treating dying cancer patients with opiates, particularly morphine. I’ll turn to Quinones’ excellent book to pick up the story:

But then a strange thing happened. Use didn’t rise in the developing world, which might reasonably be viewed as the region in the most acute pain. Instead, the wealthiest countries, with 20 percent of the world’s population came to consume almost all–more than 90 percent–of the world’s morphine. This was due to prejudice against opiates and regulations on their use in poor countries, on which the WHO ladder apparently had little effect. An opiophobia ruled these countries and still does, as patients are allowed to die in grotesque agony rather than be provided the relief that opium-based painkillers offer.

I agree with the facts as Quinones lays them out, but I disagree with his interpretation. He claims that prejudice kept the poorer countries from using morphine and other opiates, that they suffered from opiophobia, implying that their fear was irrational. Could it be, instead, that they just weren’t idiots?

In fact the question should not be why the developing countries balked at widespread opioid use, but rather why America and the rest of the developed world didn’t. I mean, any idiot can tell you that heroin is insanely addictive, but somehow (and Quinones goes into great detail on how this happened) doctors, pain management specialists, pharmaceutical companies, scientists, etc. all convinced themselves that things very much like heroin weren’t that addictive. The people Stjernsward worked with in Kenya didn’t fall into this trap because, basically, they’re not idiots.

Did the Kenyan doctors make this decision by comparing historical addiction rates? Did they run double-blind studies? Did they peruse back issues of JAMA and the Lancet? Maybe, but probably not. In any case, whatever their method for arriving at the decision (and I strongly suspect it was less intellectual than the approach used by Western doctors), in hindsight they arrived at the correct one, while the intellectual approach, backed up by data and a modern progressive morality, resulted in exactly the wrong decision when it came time to decide whether to expand access to opioids. This is what Taleb means by intellectual yet idiot.

To give you a sense of how bad the decision was: in 2014, the last year for which numbers are available, 47,000 people died from overdosing on drugs. That’s more than annual automobile deaths, gun deaths, or the number of people who died during the worst year of the AIDS epidemic. You might be wondering what kind of an increase that represents. Switching gears slightly to look just at prescription opioid deaths, they’ve increased by 3.4 times since 2000, a net increase of around 13,000 deaths a year. If you add up the net increase over all the years, you come up with an additional 100,000 deaths. No matter how you slice it or how you apportion blame, it was a spectacularly bad decision. Intellectual yet idiot.
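As a sanity check on that cumulative figure, here’s one way to reconstruct it, assuming the net annual increase ramped up roughly linearly from zero in 2000 (the sources don’t spell out the exact method, so treat this as a sketch):

```python
# Rough reconstruction of the cumulative excess-death figure (illustrative only).
net_increase_2014 = 13_000     # extra prescription-opioid deaths per year vs. 2000
years = 2014 - 2000            # 14 years of ramp-up

# A roughly linear ramp makes the cumulative excess the area of a triangle.
cumulative_excess = net_increase_2014 * years / 2
print(cumulative_excess)       # 91000.0, i.e. on the order of 100,000 extra deaths
```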

And sure, we can wish for a world where morphine is available so people don’t die in grotesque agony, but is also never abused. But I’m not sure that’s realistic. We may in fact have to choose between serious restrictions on opiates, and letting some people experience a lot of pain, or fewer restrictions on opiates, and watching young, healthy people die from overdosing. And while developing countries might arguably do a better job with pain relief for the dying, when we consider the staggering number of deaths, when it came to the big question they undoubtedly made the right decision. Not intellectual yet not an idiot.

It should be clear by now that the opiate epidemic is a prime example of the IYI mindset. The smallest degree of wisdom would have told the US decision makers that heroin is bad. I can hear some people already saying, “But it’s not heroin, it’s time-released oxycodone.” And that is where the battle was lost. That is precisely what Taleb is talking about; that’s the intellectual response which allowed the idiocy to happen. Yes, it is a different molecular structure (though not as different as most people think), but this is precisely the kind of missing the forest for the trees that the IYI mindset specializes in.

Having arrived back at Taleb’s subject by a different route, let’s finally turn to his article and see what he had to say. I’ve already talked about paying attention to history, and in the case of the opiate epidemic we’re not even talking about that much history. Just enough historical awareness to have been more cautious about stuff that is closely related to heroin. But of course I also talked about the developing countries and how they didn’t make that mistake. But I’ve somewhat undercut my point. When you picture doctors in Kenya you don’t picture someone who knows in intimate detail the history of Bayer’s introduction of heroin in 1898 as a cough suppressant, and its later complete ban in 1924 because it was monstrously addictive.

In other words, I’ve been making the case for greater historical awareness, and yet the people I’ve used as examples are not the first people you think of when the term historical awareness starts being tossed around. However, there are two ways to have historical awareness. The first involves reading Virgil or at least Stephen Ambrose, and is the kind we most commonly think of. But the second is far more prevalent and arguably far more effective. These are people who don’t think about history at all, but nevertheless continue to follow the traditions, customs, and prohibitions which have been passed down to them through countless generations back into the historical depths. This second group doesn’t think about history, but they definitely live history.

I mentioned “rednecks” earlier as an example of one of those labels which cover a cluster of attitudes and behaviors. They are also an example of this second group. And further, I would argue, they should be classified in the not-intellectual-yet-not-idiots group.

As Taleb points out, there is a tension between this group and the IYIs. From the article:

The IYI pathologizes others for doing things he doesn’t understand without ever realizing it is his understanding that may be limited. He thinks people should act according to their best interests and he knows their interests, particularly if they are “red necks” or English non-crisp-vowel class who voted for Brexit. When plebeians do something that makes sense to them, but not to him, the IYI uses the term “uneducated”. What we generally call participation in the political process, he calls by two distinct designations: “democracy” when it fits the IYI, and “populism” when the plebeians dare voting in a way that contradicts his preferences.

The story of the developing countries’ refusal to make opiates more widely available is a perfect example of the IYIs thinking they know someone’s best interests better than the people themselves do. And yet what we saw is that, despite not even being able to explain their prejudice against opiates, the doctors in these countries instinctively protected those interests better than the IYIs did. They were not intellectuals, yet they were also not idiots.

Now this is not to say that “rednecks” and the people who voted for Brexit are never wrong (though I think they got that one right), or that the IYIs are never right. The question we have to consider is who is more right on balance, and this is where we return to a consideration of history. Are historical behaviors, traditional conduct, religious norms, and long-standing attitudes always correct? No. But they have survived the crucible of time, which is no mean feat. The same cannot be said of the proposals of the IYI. They will counter that their ideas are based on the sure foundation of science, without taking into account the many limitations of science. Or as Taleb explains:

Typically, the IYI get the first order logic right, but not second-order (or higher) effects making him totally incompetent in complex domains. In the comfort of his suburban home with 2-car garage, he advocated the “removal” of Gadhafi because he was “a dictator”, not realizing that removals have consequences (recall that he has no skin in the game and doesn’t pay for results).

The IYI has been wrong, historically, on Stalinism, Maoism, GMOs, Iraq, Libya, Syria, lobotomies, urban planning, low carbohydrate diets, gym machines, behaviorism, transfats, freudianism, portfolio theory, linear regression, Gaussianism, Salafism, dynamic stochastic equilibrium modeling, housing projects, selfish gene, Bernie Madoff (pre-blowup) and p-values. But he is convinced that his current position is right.

With a record like that, which horse do you want to back? Is it more important to sound right or to be right? Is it more important to be an intellectual, or more important to not be an idiot? Have technology and progress saved us? Maybe, but if they have, then they have done so only by abandoning what got us this far: history and tradition. And there are strong reasons to suspect both that they haven’t saved us (see all previous blog posts) and that we have abandoned tradition and history to our detriment.

In the contest between the intellectual idiots and the non-intellectual non-idiots, I choose to not be an idiot.