Over the last couple of newsletters we’ve been talking about how to deal with an unpredictable and dangerous future. To put a more general label on things, we’ve been talking about how to deal with randomness. We started things off by looking at the most extreme random outcome imaginable: humanity’s extinction. Then I took a brief detour into a discussion of why I believe that religion is a great way to manage randomness and uncertainty. Having laid the foundation for why you should prepare yourself for randomness, in this newsletter I want to take a step back and examine it in a more abstract form.
The first thing to understand about randomness is that it frequently doesn’t look random. Our brain wants to find patterns, and it will find them even in random noise. An example:
The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand. When he looked up, he realized that the glowworms made the ceiling look like the night sky, except… there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms actually weren't randomly distributed. For reasons of biology (glowworms will eat other glowworms), each worm kept a similar distance from its neighbors. This leads to a distribution that looks random but actually isn't. And yet, counterintuitively, we're able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.
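If you want to convince yourself of this, it's easy to simulate. Here's a minimal sketch in Python; the counts and distances are arbitrary illustrations, not measurements from the caves. Purely random "stars" routinely land almost on top of one another, producing the clumps our brains read as patterns, while "glowworms" that keep a minimum distance from their neighbors never do:

```python
import math
import random

def closest_pair(points):
    """Distance between the two closest points in the set."""
    return min(math.dist(p, q)
               for i, p in enumerate(points)
               for q in points[i + 1:])

random.seed(42)
N = 200

# "Stars": purely random (uniform) placement -- clumps happen by chance.
stars = [(random.random(), random.random()) for _ in range(N)]

# "Glowworms": reject any new point that lands too close to an existing
# one, mimicking worms that keep their distance from their neighbors.
worms, min_gap = [], 0.04
while len(worms) < N:
    p = (random.random(), random.random())
    if all(math.dist(p, q) >= min_gap for q in worms):
        worms.append(p)

print(f"closest pair among stars:     {closest_pair(stars):.4f}")
print(f"closest pair among glowworms: {closest_pair(worms):.4f}")
```

Run it and the random sky has near-collisions (the seeds of apparent constellations) while the glowworm ceiling stays evenly, boringly spread out.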
One of the ways this pattern matching manifests is in something called the Narrative Fallacy. The term was coined by Nassim Nicholas Taleb, one of my favorite authors, who described it thusly:
The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.
That last bit is particularly important when it comes to understanding the future. We think we understand how the future is going to play out because we’ve detected a narrative. To put it more simply: We’ve identified the story and because of this we think we know how it ends.
People look back on the abundance and economic growth we’ve been experiencing since the end of World War II and see a story of material progress, which ends in plenty for all. Or they may look back on the recent expansion of rights for people who’ve previously been marginalized and think they see an arc to history, an arc which “bends towards justice”. Or they may look at a graph which shows the exponential increase in processor power and see a story where massively beneficial AI is right around the corner. All of these things might happen, but nothing says they have to. If the pandemic taught us no other lesson, it should at least have taught us that the future is sometimes random and catastrophic.
Plus, even if all of the aforementioned trends are accurate, the outcome doesn't have to be beneficial. Instead of plenty for all, growth could end up creating increasing inequality, which breeds envy and even violence. Instead of justice we could end up fighting about what constitutes justice, leading to a fractured and divided country. Instead of artificial intelligence being miraculous and beneficial, it could be malevolent and harmful, or just put a lot of people out of work.
But this isn’t just a post about what might happen, it’s also a post about what we should do about it. In all of the examples I just gave, if we end up with the good outcome, it doesn’t matter what we do: things will be great. We’ll either have money, justice, or a benevolent AI overlord, and possibly all three. However, if we’re going to prevent the bad outcome, our actions may matter a great deal. This is why we can’t allow ourselves to be lured into an impression of understanding. This is why we can’t blindly accept the narrative. This is why we have to realize how truly random things are. This is why, in a newsletter focused on studying how things end, we’re going to spend most of our time focusing on how things might end very badly.
I see a narrative where my combination of religion, rationality, and reading like a renaissance man leads me to fame and adulation. Which is a good example of why you can’t blindly accept the narrative. However, if you’d like to cautiously investigate the narrative, a good first step would be donating.
I’ve never seen Moore’s law brought up as a serious prediction of how the future will play out. Since I first heard of it decades ago, it has always been in the context of, “and of course Moore’s law will end someday,” with the added caveat that this would happen “in spite of people’s expectations.” For decades I’ve been told that there is some invisible majority out there who believe Moore’s law is going to continue ad infinitum.
Strangely, I’ve never met someone who believes Moore’s law is a prediction of the future, not just an observation of historical trends. Instead, I’ve only met people who are surprised at how long the amazing improvement of processing power has continued, even after we reached the level where quantum tunneling started to become an issue! I’m starting to believe that there are no people who believe the trend will persist forever – not even Moore himself! Just a public still amazed at how long it has continued, and who all fully expect it to end someday.
What will be even more impressive is how creative programmers will become once processing power no longer improves. They haven’t had to be as parsimonious as we all expect they someday will have to be. And then we’ll likely see a new revolution in computing as we explore just how much we can really do with what we’ve created.
Really? You’ve never seen Moore’s Law brought up in a predictive sense? That’s literally Ray Kurzweil’s entire schtick:
https://en.wikipedia.org/wiki/The_Singularity_Is_Near
Interesting. I guess I’m not exposed enough to people who think like that. Odd that they would use Moore’s law for that; you’d need qualifiers, at least, to make a serious case. Moore and everyone in the industry understood that the trend was based on shrinking transistor sizes in a way that obviously couldn’t continue indefinitely. Indeed, continuing the trend as long as we have has required a lot of novel techniques and inventiveness that couldn’t have been predicted in advance.
Now, you could claim that improvements will find new paths to explore, but literally arguing that we can continue down this path indefinitely is ridiculous. Then again, people predict ridiculous things all the time, and by sheer chance they’re right a tiny percentage of the time.
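For what it's worth, here's a back-of-envelope sketch of that limit. The starting numbers are illustrative assumptions, not data from any particular process node, but they show the shape of the problem:

```python
import math

# Assume each doubling of transistor density shrinks linear feature size
# by a factor of sqrt(2). All numbers below are rough assumptions.
feature_nm = 5.0          # assumed current feature size, in nanometers
silicon_atom_nm = 0.2     # approximate diameter of a silicon atom
years_per_doubling = 2    # the classic Moore's-law cadence

doublings = 0
while feature_nm / math.sqrt(2) > silicon_atom_nm:
    feature_nm /= math.sqrt(2)
    doublings += 1

print(f"doublings before features reach atomic scale: {doublings}")
print(f"roughly {doublings * years_per_doubling} years of runway")
```

You can argue with the inputs, but not with the exponent: however you tune the assumptions, the loop terminates after a handful of doublings.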
So what amazing thing will be done with more processing power? I mean, if I said, “Here’s a boatload of processing power in the basement; do something with it in your free time, my team of programmers,” what would happen? I suspect at some point someone would say something like, “F-it, let’s mine bitcoin and split it.”
Don’t get me wrong, I’m sure we have some amazing things waiting to come out of mass data crunching: calculating how all known proteins fold, simulations of entire cells or small bodies, etc. But diminishing returns would imply that even if Moore’s Law goes on forever, the amazing things that come from it each year will be less and less stunning as improvements.
Does anyone think the next ten years of Marvel movies will be as amazing, in terms of special effects, relative to the last ten years? I could see ‘amazing’ improvements in de-aging old actors and bringing back actors who have passed, but in terms of what you see on the screen, I suspect the improvement will appear smaller.
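There's a concrete reason to expect exactly that. Movie effects are rendered with Monte Carlo methods, and Monte Carlo error shrinks only as 1/√N, so quadrupling the compute merely halves the visible noise. The toy below estimates π the same way, purely to make the scaling visible; the sample counts are arbitrary:

```python
import math
import random

# Monte Carlo error shrinks as 1/sqrt(N): each 4x increase in samples
# only buys (roughly, on average) a 2x reduction in error.
random.seed(1)

def estimate_pi(n):
    """Estimate pi from n random points in the unit square."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    return 4 * hits / n

for n in (1_000, 4_000, 16_000, 64_000, 256_000, 1_024_000):
    print(f"{n:>9,} samples -> |error| = {abs(estimate_pi(n) - math.pi):.4f}")
```

Each row costs four times as much as the one before it and buys roughly half the error. That's what diminishing returns looks like on a screen.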
Predicting how proteins fold isn’t just about cataloguing “all known proteins”. It’s about understanding how a predicted polypeptide sequence will fold, and what it will do when it interacts with other small molecules or other proteins.
More than that, it’s about designing our own enzymes to do what we want them to do. Enzymes are advanced catalysts. They work near standard temperatures and pressures, don’t require toxic reagents, give you stereo-selective products, and don’t produce a bunch of side-products that you’d have to filter out. They do all this without protecting groups or extra steps along the way. They are responsible for the molecular complexity found in nature, which is orders of magnitude more complex than anything we can do in the lab. If we can do nothing more with a 1,000-fold increase in processing power than figure out how to do predictive protein folding, we’ll have changed the modern world in ways most laypeople could never imagine. Our children will look back at our time like we look at the world without the internet. And in the process, we’ll eliminate a massive amount of industrial waste.
I’m not sure what other people are doing with their processing power, but in my field there’s a lot to be gained.
So my computer is running a protein-folding program; every now and then the screen saver comes on with some spaghetti-like thing that I assume it’s ‘working’ on. My assumption is that there are something like a few hundred million of those things, and if they all get worked through, big things can happen down the line. It seems like someone could set up a system, possibly with crypto, where the first to get and verify a correct fold for each protein gets a coin. If you did, you’d have zillions of computers doing just that.
Instead we have server farms ‘mining’ useless guesses at hashing puzzles in order to unlock bitcoin.
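For what it's worth, the coin-for-a-fold scheme imagined above could, in principle, look something like the toy sketch below. Everything in it is hypothetical: the FoldingLedger class, the scoring function, and the threshold are all stand-ins, and a real system would need an actual energy model plus distributed verification rather than a single trusted scorer:

```python
import hashlib

class FoldingLedger:
    """Toy ledger: one coin to the first acceptable fold per protein."""

    def __init__(self, energy_threshold):
        self.energy_threshold = energy_threshold  # lower energy = better fold
        self.claimed = {}                         # protein id -> (miner, receipt)

    def submit(self, protein_id, fold, miner, score_fn):
        if protein_id in self.claimed:
            return "already claimed"
        if score_fn(fold) > self.energy_threshold:
            return "fold not good enough"
        # Record an auditable receipt of the winning submission.
        receipt = hashlib.sha256(f"{protein_id}:{fold}:{miner}".encode()).hexdigest()
        self.claimed[protein_id] = (miner, receipt)
        return f"1 coin awarded to {miner} ({receipt[:12]}...)"

# Stand-in scorer: a real one would run a molecular-mechanics energy model.
toy_score = lambda fold: -1.0 if "helix" in fold else 5.0

ledger = FoldingLedger(energy_threshold=0.0)
print(ledger.submit("P12345", "helix-sheet-helix", "alice", toy_score))
print(ledger.submit("P12345", "helix-helix", "bob", toy_score))  # too late
```

The hard part, of course, isn't the ledger; it's making "verify a correct fold" cheap enough that the network can check work without redoing it.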