The Bifurcation Created by Technology
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
I.
I've tried to start tweeting more consistently, though not because I like to tweet. As you may be able to tell from the length of my essays, tweeting is the exact opposite of my preferred style of communication. Unfortunately, Twitter is where all the action happens, so if I want to be a public intellectual, I have to tweet. I don't know that I want to be a public intellectual, nor have I ever claimed to be such. But I haven't found any better label for what I'm trying to do, so I guess that's the direction I hope to go in, however silly and conceited that may sound at this point.
Of course the consensus is that Twitter is a dumpster fire encased in high-level nuclear waste, and that anyone who has the least interest in maintaining their mental health should avoid it like the plague. Though I mostly see that sentiment expressed by people who are already well known enough that their audience will find them regardless, giving them the option of avoiding the radioactive inferno. That does not describe me, though so far the greatest difficulty I've encountered is remembering that it's there. Apparently the deep red of the uranium fire does not hold the same appeal for me as it does for others.
In any case, I digress. I mention the tweeting in case there are readers out there who might be interested in following my infrequent tweets (I've gotten pretty good at tweeting at least once a day). I also mention it because the idea for this post started as a tweet. (I guess I need to figure out how to embed tweets, but until then, I'll just quote it.)
Technology bifurcates problems. 99% of problems go away, but the 1% that are left are awful. Case in point customer service: 99% of the time you don't even need customer service from Amazon, etc. but the 1% of the time you do you're suddenly in a story by Kafka.
I mentioned Amazon in my tweet, completely missing that, contextually, it would have been better to mention Twitter, since it has basically the same problem. We probably all know someone who has been temporarily banned from Twitter for tweeting something objectionable. The initial choice (as I understand it; remember, I'm not a power user) is to delete the offending tweet or to appeal. Nearly everyone deletes the offending tweet, because they know that appealing puts them in the aforementioned story by Kafka. And should it happen a second time, then appealing switches from Kafka to Dante: “abandon hope all ye who enter here”. All of which is to emphasize my initial point: 99%, probably even 99.9%, of Twitter users never need customer service. The platform just runs on its own, without users ever running into problems which need special intervention. But in the edge cases where it doesn't run smoothly, the process is completely opaque, unresponsive, and ineffective.
Despite how bad it is, as far as I can tell Twitter does much better than Amazon and Google. The internet is full of stories of people who had their Amazon seller account closed, frequently for things the person never did. (This Reddit story represents a typical example.) And you may have caught the New York Times story from last month about a father who took pictures of his toddler's genitals to send to the child's pediatrician, and who ended up losing everything he had with Google (Contacts, email, photos, etc.) because the pictures he sent were flagged as child pornography by Google's algorithms. And not only that, Google also referred him to the police. All of this is bad, but from a societal perspective the worst was yet to come.
Google, despite being contacted by the NYT and having the situation explained to them, refused to budge. You might imagine that this is just a principled stand on their part, that they have zero tolerance for stuff like this. Or you might imagine the inverse: that they're worried that if they reversed their decision they would appear soft on the issue. I don't think it's either of those things. I think they're incredibly invested in the idea that their algorithms can handle content moderation, and that's the position they don't want to undermine. Because if the algorithms are shown to have holes and flaws, then they might have to spend a lot of time and money getting humans to do customer service, which is the exact opposite of the direction they've been trying to go since their founding.
Before moving on, as I was re-reading this story, I came across yet another consequence of Google’s villainy. This one, out of all of the consequences this man suffered, really hit home for me. “He now uses a Hotmail address for email, which people mock him for…” I’ll admit that made me laugh. But also, yeah, I totally get it.
In any case, I think the customer service angle is pretty straightforward. The question is how broadly we can apply this observation: where else might it be happening? What other harms might it be causing? In order to answer that, I think we need to start by examining the process which brings it into existence in the first place.
II.
Initially everything is hard and time consuming. I own a small custom software company that's a million times smaller than Amazon. (Okay, not literally, but pretty close.) But my company also solves problems with software. At this point, if one of my customers has a problem, they come directly to me and I fix it (or, more likely, my younger, more talented, better-looking partner does). There's a lot of friction and a lot of overhead to that process. Gradually we hope to be able to smooth all of that out. The first step in doing that is to hope that whatever we fix stays fixed. We also hope to be able to nip problems in the bud by reusing proven solutions. Finally, we automate solutions for the most common problems. (Think of the "forgot password" button.) While this represents only a small portion of our efforts, it's a large part of what the big dawgs are doing.
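To make that last automation step concrete, here's a minimal, purely illustrative sketch of a "forgot password" flow. None of the names or details come from my actual system; a real version would use a database and an email service rather than the stand-ins below.

```python
# Illustrative sketch only: the common case is handled with no human involved.
import secrets

# Stand-in for a database table and an email service, so the sketch is self-contained.
reset_tokens: dict[str, str] = {}

def request_password_reset(email: str) -> None:
    """The 99% case, handled automatically: issue a one-time reset token."""
    token = secrets.token_urlsafe(32)
    reset_tokens[email] = token
    print(f"Emailing reset link to {email}: https://example.com/reset?token={token}")

def reset_password(email: str, token: str, new_password: str) -> bool:
    """Complete the reset if the token matches; anything odd goes to a human instead."""
    if reset_tokens.get(email) != token:
        return False  # the leftover 1%: escalate to a person rather than looping forever
    del reset_tokens[email]
    print(f"Password updated for {email} (a real system would hash {new_password!r})")
    return True

request_password_reset("customer@example.com")
```

The point of the sketch is just this: the common case never touches a human, while anything that falls off the happy path has to be kicked out to a person.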
Through all of these tactics, gradually we move things from the "hard and time consuming" column to the "never have to worry about it" column. Or at least we hope we never have to worry about them, but "never" is a very long time, and it's difficult to implement a solution that covers every possible eventuality. There are always going to be edge cases, unique situations and circumstances which combine in ways we didn't expect. For problems like these you have to get a human involved, ideally a human with the authority to fix things, who's also smart enough to understand why the situation is unique. (That's often the problem I run into with customer service. If I'm calling you, it's not something I could fix just by Googling, and telling me to restart my router again is not helpful; it's infuriating.) But I'm going to argue that for really difficult problems, it goes beyond even all of these things. You actually need a human who's wise.
It's unclear whether the gentleman who had such difficulties with Google was even able to talk to an actual human, let alone a wise one. Certainly I don't know how to reach a live person at Google, though I confess I've never tried. (Which is probably exactly the behavior they like to encourage.) The NYT obviously talked to someone, but even though they definitely had an actual conversation, there doesn't appear to have been an abundance of wisdom involved. To be fair, the amount of intelligence and wisdom required to solve these problems just keeps increasing, because the problems left over after the implementation of all this technology are worse than they would have been if the technology had never existed. To be clear, I'm not arguing that the overall situation is worse (at least not yet). I'm pointing out that the top 1% of all problems are way worse when the other 99% is automated than when it's not.
How does this happen? Well, let's move on to a different example, one where the stakes are higher than being forced to switch to Hotmail.
III.
That initial tweet was followed up with one more. (I was on fire that day!)
Additional thoughts/example: Self driving cars. Tech can take care of easiest 99%. Tosses most difficult 1% back to driver. Driver has no context, just suddenly in deep end, therefore much worse at hardest 1% than if they had just dealt with the full 100% from start.
Let me expand that from its abbreviated, staccato form. If not now, then soon, self-driving cars will be able to take care of all the easy bits of driving. All the way back in 2015 my Subaru came with adaptive cruise control, which appears to be the lowest of all the low-hanging fruit, and I'm sure many of you have Teslas which are several generations further advanced. But no car can take care of 100% of driving, and the driving they can't take care of is the most difficult driving of all.
The difficult 1% falls into two categories. First there are the sudden calamities: the car on a cross street running a red light, or debris falling off the pickup truck driving just in front of you, etc.
The second category is bad weather. It's my understanding that self-driving cars are not great at handling heavy rain, and are completely stymied by heavy snow. Luckily, unlike the examples from the first category, weather is not generally something that gets sprung on you suddenly. Nevertheless, it requires a whole suite of skills which rely on doing a lot of moderately difficult driving, not all of it in bad weather. In the same fashion that speaking academic English is helped by being able to speak conversational English, it's clear that lots of normal driving helps one develop the skills necessary to tackle bad weather driving. Which is not to say that driving in snow does not have its own unique challenges. This is why some municipalities where snow is rare shut things down entirely when it does come. Is this the situation we have to look forward to? A future where neither humans nor auto-pilots can handle inclement weather, and so when it happens everything shuts down? Perhaps, but that's not really an option in many places. What's more likely is that a greater and greater percentage of the driving humans still do will happen during times of extreme weather, with very little practice outside of that. Should this be the case, self-driving cars will have made all the driving that does get done significantly more difficult.
Returning to the first category, those situations where conditions suddenly change are more what I was referring to in my tweet: times where the self-driving car has lulled you into a state of inattentiveness (something that happens to me just using adaptive cruise control), but where it's understood, as part of the deal, that whatever the car is doing, it can't handle everything. So when the light turns green, it's your responsibility to notice the Mustang coming from the left whose driver decided, incorrectly, that they could beat the light if they punched it up to 60. Of course you might not notice it regardless of the level of auto-pilot your car has, but the chance of missing it goes way up if you've been relying on auto-pilot for everything else.
Having a car run a red light at high speed is presumably something outside the ability of most auto-pilots to detect. On the other hand, there are some things the auto-pilot has no problem detecting; it just doesn't know what to do with them. I mentioned debris falling out of a pickup truck. The car can probably detect that, but is this a situation where it's better to slam on the brakes or to swerve? I don't claim to be an expert on exactly how every current auto-pilot functions, but I think most of them are not equipped to swerve. And it's not clear how much you want to trust even those cars that are equipped to swerve. This means that it's up to the person to immediately seize control and make the decision. Fortunately the car should sound a collision alarm, but if that's the first point at which you become aware of the debris, you've already lost valuable time.
Ideally, in order to know whether to swerve or to brake, you'd want to have a pretty good sense of where the other cars are on the road, particularly whether there's anyone currently hanging out in your blind spot. All of this is unlikely if you haven't really been paying attention. Deciding whether to brake or swerve when suddenly confronted with road debris is in the top 1% of difficulty. And of course the decision is more complicated than that; there are some situations where the very best thing to do is run over the debris. The point is that, for the foreseeable future, using auto-pilot would almost certainly make this very difficult decision even more difficult.
IV.
Thus far we've covered the two examples that are the most straightforward (though perhaps you've already thought of other, equally obvious examples). Now I want to move into examples where it's not quite as obvious, but where I think this idea might still have some explanatory utility. I'm going to touch on each example briefly, just long enough for you to get a sense of the area I'm talking about. I'm going more for a “what are your thoughts about that?” than a “here's why this is also an example of the bifurcation I've been talking about.”
Was it a factor with the pandemic? We have used technology to routinize numerous aspects of healthcare, such that for 99% of problems we have a system: there's a specialist you can go to, a medicine you can take, or an operation which can be performed. But when the most difficult health problem of the last 100 years came along in the form of COVID, and it didn't fit into any of our routines, we seemed pretty bad at dealing with it. Worse than we had been historically, particularly if you factor in the tools available then vs. the tools available now. Additionally, the bureaucracy we had created to deal with the lower 99% of problems ended up really getting in the way when it came to dealing with the top 1%, i.e. a global pandemic.
Then there are societal problems like homelessness and drug addiction. We have also implemented significant civic technology in this area. Employment is pretty easy to find. Signing up for social programs is straightforward. Just about anybody who wants to go to college can. We've taken care of a lot of things which used to be dealt with at the level of the individual, the family, or the community. But there was a lot of variability in the service offered by these entities, and oftentimes they failed spectacularly. This is the reason for the various civic technologies that have emerged, and as a result of these technologies we've gotten pretty good at the 99%. But what's happened to the 1%? As I've talked about frequently, drug overdose deaths are through the roof. The systems we've created are great at dealing with normal problems, like just not having enough food, but with the really knotty problems, like opioid addiction, we seem to have gotten worse.
Does this bifurcation apply in the arena of war? Since WWII we've managed to keep 99% of international conflicts below the level of the Great Powers. This has rightly been called the Long Peace, and it's been pretty nice. But as the situation in Ukraine gets ever more perilous, are we about to find out what the really difficult 1% looks like? The type of war our international system was unable to tame? Essentially what I'm arguing here is that our diplomatic muscles have atrophied. We're not used to negotiating with powerful countries who truly have their backs against the wall. That was fine 99% of the time, but in the 1% of the time we need it, we've lost the ability to engage in it.
What about energy generation? We are fantastic at generating power. The international infrastructure we've built for getting oil out of the ground and then transporting it anywhere in the world is amazing. We've also gotten really good at erecting windmills and putting up solar panels. But somehow we just can't seem to build nuclear power plants in a cost-effective way. That clearly is in the top 1% of difficulty, and as near as I can tell, by getting really good at the other 99% we've essentially decided to just give up on that remaining 1%. But of course that 1% ends up being really important.
I think I may have stretched this idea to its breaking point, and maybe even past that, but I would be remiss if I didn’t discuss how this idea relates to my last post. Because at first glance they seem to be contradictory. In the last post I said we put too much attention on the tails, and in this post I seem to be saying we’re not putting enough attention there. To be honest this contradiction didn’t occur to me until I was well into things, and for a moment it puzzled me as well. Clearly one explanation would be that I’m wrong now, or that I was wrong then, or that I’m wrong both times. But (for possibly selfish reasons) I think I was right both times, though the interplay between the two phenomena was subtle.
In our current land grab for status, people are racing towards the edges, but that doesn't mean that the extreme edge, the 1%, gets more attention. In fact, the exact opposite: it gets buried by the newcomers. Freddie deBoer has done a lot of great work here, and I could pick any of a dozen articles he's written, but perhaps his newsletter from this morning will suffice. As usual, his titles don't leave much to the imagination: “We Can't Constructively Address Online Mental Health Culture Without Acknowledging That Some People Think They Have Disorders They Don't”. As a result of people misdiagnosing themselves, you end up in a situation where, out of all the people who claim to have a particular disorder, a significant percentage, let's say 80%, don't have it at all, or if they do it's subclinical. Then figure an additional 15% of people have very mild cases, and the remaining 5% have a serious affliction. This 5% ends up basically being the 1% I've mentioned above: they don't get the level of help they need because they're competing for resources with the 95% of people who have mild or non-existent cases. Which takes us back to the same bifurcation I've been talking about.
V.
Some of you may have noticed that I've neglected a very significant counterargument. Possibly, some of you may be impotently yelling at me through your screen at this very moment: I've never discussed the ROI of this arrangement. In other words, this bifurcation could leave all of us better off. To take the example of the self-driving car: around 40,000 people die every year in automobile accidents. Let's say that 20% of those deaths come in situations auto-pilots are ill-equipped to deal with, but the other 80% of deaths would be completely eliminated if all cars were self-driving. Unless the extreme 1% ends up being five times more deadly because of overreliance on auto-pilot, we would be better off entirely switching to self-driving cars. Far fewer people would die.
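To make the arithmetic explicit, here's a quick back-of-envelope sketch. The 40,000 figure and the 20%/80% split are the illustrative assumptions from the paragraph above, not real statistics.

```python
# Back-of-envelope check of the break-even point for switching to self-driving cars.
# All numbers are the post's illustrative assumptions, not real data.
annual_deaths = 40_000   # rough current traffic deaths per year
hard_fraction = 0.20     # share of deaths in situations auto-pilots can't handle

easy_deaths = annual_deaths * (1 - hard_fraction)  # 32,000: assumed eliminated by auto-pilot
hard_deaths = annual_deaths * hard_fraction         # 8,000: still handled by humans

# If over-reliance on auto-pilot makes those hard cases k times deadlier,
# total deaths become k * hard_deaths. Break-even with the status quo:
break_even_k = annual_deaths / hard_deaths
print(break_even_k)  # 5.0 -- hence "five times more deadly" in the text
```

In other words, under these assumptions the residual hard cases would have to become a full five times deadlier before the switch stopped paying off.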
Beyond this, most people imagine that eventually we'll get to 100%. That someday, perhaps sooner than we think, self-driving cars will be better than human drivers in all conditions, and at that point there really won't be anything left to discuss. While the first point is valid, this second point partakes more of hubris than practicality. Truly getting to 100% would be the equivalent of creating better-than-human-level AI, i.e. superintelligence. And if you follow the debates around the risk of that, you know that the top 1% of bad outcomes are existential.
Still, what about the first point? It is a good one, but I think we still need to keep three things in mind:
1- The systems we create to automate the 99% end up shifting complexity rather than eliminating it. Complex systems are fragile, and we should never underestimate the calamities that can be created when complex systems blow up. I'm not prepared to say that CDOs (collateralized debt obligations) are an example of this phenomenon, but they very well could be, and their existence took the 2007-2008 financial crisis to a whole new level, despite the fact that most people had never even heard of them.
2- By focusing on technology we may be overlooking the truly worrisome aspect of this phenomenon. In theory we can turn technology off, or reprogram it. But to the extent we're seeing this bifurcation in softer systems (healthcare, diplomacy, energy generation), things could be much worse. The consequences take longer to manifest and are more subtle when they do, and it's far less clear that the ROI will eventually be positive.
3- Even if it's absolutely true that we have improved the ROI, it doesn't mean that we shouldn't keep the 1% in mind and attempt to mitigate it. We have a tendency to want to stretch our systems as far as we think they will go. But perhaps we don't need to stretch them quite so far. It might turn out that the sweet spot is not always maximum automation. That Amazon could afford to hire a few more actual humans. That self-driving systems might work in concert with humans rather than trying to replace them. That rather than ignoring the 1% because we've solved the 99%, we can once again decide to do hard things.
This post may or may not have been inspired by an actual experience with Amazon. Though I will say that if you ship something back for a refund, be sure to keep the shipping receipt with the tracking number. This experience, which may or may not have happened, is why I deal with everything related to this podcast personally. If you appreciate this lack of automation, consider donating.