Technology, Transit Systems and Uncharted Territory
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
Long time readers (and probably even people who just started last week) know of my admiration for Scott Alexander and his blog Slate Star Codex. An admiration which extends to doing the same thing for his blog that I do for mine, i.e. recording it and syndicating it as a podcast. Which is not to say that I don't also, on occasion, disagree with him. I bring all this up because there's a metaphor of his that I've been meaning to discuss for quite a while, probably since he first introduced it near the end of September, but somehow I never got around to slipping it in. I think this was partially a matter of size: the metaphor ended up being too big to use as just one more example of some other point, but too small to carry an entire post. Well, we're about to see if that's the case, because I have decided to delay no longer and devote an entire post to Alexander's The Tails Coming Apart as Metaphor for Life.
He starts off by pointing out that even when two variables are strongly correlated, the most extreme example of one will only rarely be the most extreme example of the other. As an illustration he offers up arm strength vs. grip strength. Certainly you would expect someone with really powerful arms to have really powerful hands, and there would appear to be no reason why you couldn't have both, but apparently there is enough of an edge in focusing on either arms or grip that different people end up at the extremes of each measure, rather than one person being the strongest in both.
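This effect is easy to see in a quick simulation. Here's a minimal sketch (the 0.9 correlation, population size, and trial count are my own illustrative assumptions, not figures from Alexander's post) that draws pairs of strongly correlated scores and checks how often the single best performer on one measure is also the single best on the other:

```python
# A minimal simulation of the "tails coming apart" effect.
# We draw pairs of strongly correlated scores (think arm strength
# vs. grip strength) and count how often the top scorer on one
# measure is also the top scorer on the other.
import numpy as np

rng = np.random.default_rng(0)
correlation = 0.9  # illustrative assumption: a very strong correlation
cov = [[1.0, correlation], [correlation, 1.0]]

trials = 10_000
population = 1_000
same_champion = 0

for _ in range(trials):
    scores = rng.multivariate_normal([0.0, 0.0], cov, size=population)
    if scores[:, 0].argmax() == scores[:, 1].argmax():
        same_champion += 1

print(f"Same person best at both: {same_champion / trials:.1%} of trials")
```

Even at a correlation of 0.9, the same individual tops both lists far less often than intuition suggests, and the effect only gets stronger as the population grows.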
You can certainly see this kind of specialization at the highest levels of athletics. Usain Bolt is the fastest man alive at the 100 m dash, but when people started wondering how fast he could run a mile, his agent clarified that he's never run a mile in his life. And it turns out that at 2:10 even his 800 m speed is entirely uncompetitive (to crack the top 10 among Utah high school athletes he would need a time of 1:53 or better). On the other side of things we have the less well known Eliud Kipchoge, the current holder of the marathon record, who I'm sure would be slaughtered in any competitive 100 m dash (though unlike Bolt, he's definitely run that distance). In other words, they're both runners, but it turns out they're very different kinds of runners.
At this point we could go off into a discussion of fast-twitch vs. slow-twitch muscle fibers and other factors, but that's not really where I want to take this. Because as much as Bolt and Kipchoge are specialized, there's a limit to that specialization. People try to push those limits with performance enhancing drugs, but even if those were allowed, no one is going to run a five second 100 m dash, or complete a marathon in less than an hour. Once you add technology, though, all of those things are easy. Take the worst car in the world, and as long as it actually still runs it should be able to do both trivially. But then, on top of just doing most things better, technology vastly increases our ability to really crank up the dial on specific things.
Of course when we do this we have to make sacrifices in other areas, more so even than the elite athletes. Regardless of how much someone focuses on being a sprinter or, alternatively, a marathoner, they're never going to lose their ability to walk. Perhaps even more to the point, nothing about sprinting or endurance running precludes learning how to swim. But despite technology allowing us to make a car that is better at both sprinting and endurance, outside of a James Bond movie it's never going to be able to pass through water deeper than its exhaust.
To jump to a more extreme example, let's discuss airplanes. On the sprinter side we have the SR-71, which had a 56 foot wingspan, was 107 feet in length, had a loaded weight of 76 tons, and a maximum speed of Mach 3.3. Of course to achieve that speed it burned 22 tons of fuel every hour, meaning it had to be refueled about every 90 minutes. On the endurance side we have the Rutan Voyager, which had a 111 foot wingspan, was 29 feet in length, had a loaded weight of five tons, and a maximum speed of 122 mph. It burned only 32 lbs of fuel every hour. On its record-setting flight it completed the first non-stop, non-refueled circumnavigation of the globe that involved crossing the equator twice. It was aloft for 216 hours.
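To put numbers on just how far apart these two designs sit, here's a quick back-of-the-envelope calculation using the figures above (a sketch; I'm assuming US short tons of 2,000 lbs):

```python
# Quantifying the sprinter/endurance gap between the SR-71 and the
# Rutan Voyager, using the figures quoted above.
SR71_FUEL_LBS_PER_HOUR = 22 * 2000   # 22 tons/hr, assuming 2,000 lb tons
VOYAGER_FUEL_LBS_PER_HOUR = 32

burn_ratio = SR71_FUEL_LBS_PER_HOUR / VOYAGER_FUEL_LBS_PER_HOUR
print(f"SR-71 burned fuel ~{burn_ratio:.0f}x faster")        # ~1375x

SR71_ENDURANCE_HOURS = 1.5           # refueled roughly every 90 minutes
VOYAGER_ENDURANCE_HOURS = 216

endurance_ratio = VOYAGER_ENDURANCE_HOURS / SR71_ENDURANCE_HOURS
print(f"Voyager stayed aloft ~{endurance_ratio:.0f}x longer")  # ~144x
```

Three orders of magnitude in fuel burn, and more than two in endurance, between two machines that are both, nominally, airplanes.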
Yes, the SR-71 and the Voyager are both airplanes, but beyond the fact that they both fly there's not much resemblance. And what resemblance there is probably comes down to the fact that they both have human crews. Eliminate the crews, and the questions of how fast we can propel something through the air and how long we can keep something aloft diverge even more. Russia claims it just tested a hypersonic missile that hit Mach 27. On the other side you have the Airbus Zephyr, an unmanned solar-powered drone which can stay aloft essentially indefinitely at a cruising speed of 35 mph. For comparison with the aircraft above, it has a 74 foot wingspan and weighs 117 lbs.
Returning to Alexander: he points out that we end up with two zones, and, borrowing some terminology from Taleb, he labels them Mediocristan and Extremistan. In my examples Mediocristan is everywhere that only humans are involved. Usain Bolt covers 100 m only about 7x as fast as the average person can walk it, and that will never go to 8x. Once you start introducing technology, you enter Extremistan. The hypersonic missile travels 582x as fast as the Zephyr (which, coincidentally, travels at about the same speed as the Wright Brothers' Flyer). And we already have space probes which have traveled 5000x as fast.
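For the curious, the arithmetic behind those multiples looks roughly like this (a sketch; the walking pace and the speed of sound used for the Mach conversion are my own assumptions, which is why the final figure lands near, rather than exactly on, the multiple above):

```python
# Back-of-the-envelope check on the speed ratios above.
BOLT_100M_SECONDS = 9.58       # world record time
WALKING_SPEED_MPS = 1.4        # assumed typical walking pace (~3.1 mph)

bolt_speed = 100 / BOLT_100M_SECONDS           # ~10.4 m/s
print(f"Bolt vs. walking: {bolt_speed / WALKING_SPEED_MPS:.1f}x")

MACH_1_MPH = 767               # assumed sea-level value; varies with altitude
missile_mph = 27 * MACH_1_MPH
zephyr_mph = 35
print(f"Missile vs. Zephyr: {missile_mph / zephyr_mph:.0f}x")
```

The human ratio is stuck in the single digits no matter how you tune the assumptions; the technological ratio swings by dozens depending on the altitude you assume for Mach 1, and it's in the hundreds either way.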
But Alexander isn’t solely focused on technology, the central point of his post is to talk about morality.
The morality of Mediocristan is mostly uncontroversial. It doesn’t matter what moral system you use, because all moral systems were trained on the same set of Mediocristani data and give mostly the same results in this area. Stealing from the poor is bad. Donating to charity is good. A lot of what we mean when we say a moral system sounds plausible is that it best fits our Mediocristani data that we all agree upon. This is a lot like what we mean when we say that “quality of life”, “positive emotions”, and “meaningfulness” are all decent definitions of happiness; they all fit the training data. The further we go toward the tails, the more extreme the divergences become. Utilitarianism agrees that we should give to charity and shouldn’t steal from the poor, because Utility, but take it far enough to the tails and we should tile the universe with rats on heroin. Religious morality agrees that we should give to charity and shouldn’t steal from the poor, because God, but take it far enough to the tails and we should spend all our time in giant cubes made of semiprecious stones singing songs of praise. Deontology agrees that we should give to charity and shouldn’t steal from the poor, because Rules, but take it far enough to the tails and we all have to be libertarians.
I should point out that the ultimate expression of my religion is not spending “all of [my] time in giant cubes made of semiprecious stones singing songs of praise”. But I can’t speak for everyone.
He actually immediately follows this up with a graph. On one axis is "How good something is according to hedonic utilitarianism." On the other axis is "How good something is according to Christian teachings on morality." Then he plots various events/actions: "The Holocaust" (very bad for both), "Donating to a Charity" (right in the middle), "Starting a Catholic Hospital" (high for Christians, middle for utilitarians), and so on. All of these things are in morality's Mediocristan, where everyone basically agrees what's good and what's not. Then he gives two examples from Extremistan. At the very top of the Christian axis (but lower than the Holocaust on the utilitarian axis) is "One thousand year reign of Christ over the Earth with unbelievers thrown into the bottomless pit". (Once again I should point out this isn't exactly what I believe, though he's getting closer.) And at the very top of the utilitarian axis (but lower than the Holocaust for Christians) is "Entire mass of the universe converted into nervous tissue experiencing raw euphoria".
This is an excellent observation, and you can see where I've alluded to it in several previous posts, like my last post on the conflict between happiness and survival. Historically the overlap between survival and happiness has been nearly total, so it didn't much matter which we were prioritizing. We were firmly in Mediocristan. But I would argue that the two spaces are starting to diverge; the overlap is getting smaller and smaller. As we saw with flying, technology allows us to make radically different planes depending on what we decide to prioritize. I don't think it's too much of a stretch to say that we're entering a period where we can make radically different societies depending on what we decide to prioritize, and if we prioritize happiness we may end up with a society that isn't great at survival, just like the SR-71 is great at going really fast, but isn't great at staying aloft for long periods.
To use Alexander’s term, there’s a danger that the “tails are coming apart”. Which takes us to his best metaphor, the Bay Area transit system. But, before we get into that I want to point out something about his two examples of Extremistan. People are inclined to declare that religious fanaticism and technological fanaticism are both equally alarming. In fact, to a point Alexander himself does this. But let’s return to his two very extreme examples. Christ reigning over the earth and nervous tissue experiencing raw euphoria. Outside of the Mormon Transhumanist Association no one thinks that we can bring about Christ’s return by creating sufficiently advanced technology. But the technology to stimulate the pleasure centers of the brain directly already exists, and we should also include designer drugs, and other supernormal stimuli in this category as well. You may of course argue that this is a long way from “the entire mass of the universe”, or that it’s not exactly “raw euphoria” but there’s nothing necessarily stopping us from heading in that direction and the steps we’ve already taken are disturbing and unlikely to get less so. Whereas all the space between where we are now and “Christ reigning on the Earth” largely consists of people trying to master the morality of Mediocristan. Which is precisely where most people, including Alexander, want to remain.
The key point being that we can bring about a utilitarian pleasure-maximization nightmare, but we cannot make Christ return. Either God exists or he doesn't, and if he does, there's not a lot we can do to change his plan, except perhaps by working out our own salvation, and certainly nothing we can do with technology to change it. In the more immediate sense, I know that lots of people worry about religious fanatics, but I would argue that we should worry more about technological fanaticism. Religious fanatics have existed for a long time, and as yet they haven't seriously endangered the world, nor do they have the power to. Technological fanatics are both potentially more powerful and a lot less well understood.
This takes us, finally, to the Bay Area transit system metaphor, though you could also use the Salt Lake City Trax system. In both cases there is a densely populated core through which all of the lines pass, but once you leave it the lines start to diverge. I'll let Alexander explain the metaphor from here:
Mediocristan is like the route from Balboa Park to West Oakland, where it doesn’t matter what line you’re on because they’re all going to the same place. Then suddenly you enter Extremistan, where if you took the Red Line you’ll end up in Richmond, and if you took the Green Line you’ll end up in Warm Springs, on totally opposite sides of the map.
Our innate moral classifier has been trained on the Balboa Park – West Oakland route. Some of us think morality means “follow the Red Line”, and others think “follow the Green Line”, but it doesn’t matter, because we all agree on the same route.
When people talk about how we should arrange the world after the Singularity when we’re all omnipotent, suddenly we’re way past West Oakland, and everyone’s moral intuitions hopelessly diverge.
For myself I’m not sure it will take the singularity, we might have passed our metaphorical West Oakland already. But I do agree that technology is a big part of the problem. Of course, as people will often point out technology has no inherent morality, it’s just a tool. I’m not sure I’m 100% on board with that, but it is important to note that it’s mostly human desires being given a more perfect expression by technology that’s causing the divergence. To extend the metaphor, in the past it was difficult to get much farther than West Oakland, just as arguing whether we should build a plane that goes really fast or one that stays aloft forever was pointless before the Wright Brothers came along. But now we can have arguments about all sorts of things that were previously unthinkable, or at least only discussed in the realm of science fiction. Some examples:
The recent story of the Chinese scientist who used CRISPR to genetically modify babies.
The new tactic of large groups of people publicly shaming private individuals.
The question of whether Facebook is using their 10 Year Challenge to improve their facial recognition software.
The changing face of war and deterrence in light of the new hypersonic missiles I described above.
Elon Musk’s plans for a million person city on Mars.
The epidemic of anxiety among teenagers and college students and how much of it is due to social media.
A future of ubiquitous designer babies, a la Gattaca, is very different from a future where we decide such technology should be entirely off limits, and the rest of the examples are similar, particularly if we imagine how far we could travel by taking one side of the argument all the way to the "end of the line".
If you push on the metaphor enough, you realize that it's entirely too certain and clean to actually represent reality. A rider of the Bay Area transit system can tell which line they're boarding and know where they're headed, but such is not the case with humanity. Even if we all decided we wanted to take the hedonic utilitarian train all the way to "raw euphoria" town, we might not get there. And of course we don't all agree. Nor are there multiple trains; there may be only one train, with a bunch of people all fighting for control of the speed and direction. Which brings up another point: forget about tracks, or even roads, those don't exist either. And the landscape passing by on either side of us? Completely new.
I’ve gotten continual pushback for discussing falling birthrates as a proxy for survival priority, some of which is certainly fair, but given that we’re in completely new territory, what landmarks can we rely on? A lush countryside tells us one thing, a barren one another. Both may be temporary, but how can you tell? Take I-80 west out of the Bay Area and once things start looking like a desert they’re going to look that way for a long time.
One argument that’s been made for the falling birthrates is that it’s a rational response to population pressure. That in essence someone sitting in the front of the train can see approaching catastrophe, and they’re braking to avoid it. And, if there was a central authority telling everyone to have fewer children, similar to China’s One Child policy, that might make sense, but instead everyone seems to have made this decision just about regardless of where they are on the train. In fact it should be noted that total fertility rate (TFR) seems to be mostly uncorrelated with population density. Consider Nigeria and South Korea, both countries have similar population density (Nigeria is slightly higher) but Nigeria is 8th in total fertility while South Korea is 206th (out of 209).
Probably a better argument is that the declining TFR has nothing to do with rationally choosing to avoid a Malthusian catastrophe, or rationally choosing happiness over survival, but is rather a mostly unconscious following of incentives, some hidden, some obscured, and some right out there in the open. The point I'm getting at is that technology allows us to pursue those incentives in previously unimagined ways. In the example above, when the incentive was speed, engineers built a terrifyingly fast plane that burned 22 tons of fuel every hour.
What incentives are we maximizing with technology? What plane have we built, and does it also have a voracious appetite for fuel? Stated that way, the current culture war seems to qualify, and at the risk of mixing metaphors, it's clear we're way past West Oakland, with one side speeding towards Warm Springs and the other headed just as fast to Richmond. Gallons of virtual ink have already been spilled on this subject (perhaps that's the fuel?), but can anyone look at the controversy over the smirking MAGA kid and not see this split?
Despite the emphasis I have placed on technology, in the end Alexander is right: this is primarily a split in morality, between two competing visions of what ultimate morality is, once you get past the things we can all agree on, like the evils of murder and the benefits of charity. At this point I could interject that one side wants to stay in West Oakland, or at least reduce the speed at which we pull away. Maybe that has some validity, and maybe it doesn't. Regardless, I think humanity as a whole is definitely headed into uncharted territory, and I'm not sure what we'll find when we reach the end of the line.
I can’t improve upon Alexander’s closing statement, so I’m going to go ahead and steal it:
When Lovecraft wrote that “we live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far”, I interpret him as talking about the region from Balboa Park to West Oakland on the map above. Go outside of it and your concepts break down and you don’t know what to do. He was right about the island, but exactly wrong about its causes – the most merciful thing in the world is how so far we have managed to stay in the area where the human mind can correlate its contents.
Speaking of Lovecraft, last year I worked my way through his complete works. I’m not sure I’d recommend it. It may have affected my sanity. If you would like to help with the inevitable cost of therapy stemming from that and many other things, consider donating.