This last Saturday I was hanging out with a friend of mine whom I don’t see very often. This friend has a profound technical interest in AI and has spent many years working on it, though not in any formal capacity. That said, he’s very smart, and my assumption is that his knowledge runs at least as deep as mine, if not much deeper. (Though I don’t think he’s spent much time on the philosophy of AI, in particular AI risk.) In short, I don’t think I’m exaggerating to call AI a long-term obsession of his.
Part of the reason for this is that he thinks that general AI, a single AI that can do everything a human can do, is only about 10 years away, and if he wants to make his mark he has to do it now. This prediction of 10 years is about as optimistic as it gets (and indeed it’d be hard to compress the task into much less time than that). If you conduct a broader survey of experts and aggregate their answers, human-level machine intelligence is more likely than not to be developed by 2060. Though there are certainly AI experts at least as optimistic as my friend and, on the other hand, some who basically think it will never happen. In fact, that may be the best description of the situation, given that some of the data indicates a bimodal distribution in attitudes: lots of people think it’s just around the corner, lots think it’s going to take a very long time, if it ever happens, and few people fall in the middle.
(Interestingly there are significant cultural differences in predictions with the Chinese average coming in at 2044 and the American average coming in at 2092.)
Just recently, and as promised, I finished Robin Hanson’s book The Age of Em: Work, Love and Life When Robots Rule the Earth and this whole discussion of AI probability is an important preface to any discussion of Hanson’s book because Hanson belongs to that category of people who think that human level machine intelligence is a long ways off. And that well before we figure out how to turn a machine into a brain, we’ll figure out how to turn a brain into a machine. Which is to say, he thinks we’ll be able to scan a brain and emulate it on a computer long before we can make a computer brain from scratch.
This idea is often referred to as brain uploading, and it’s been a transhumanist dream for as long as the concept has been around, though normally it sits together with AI in the big-bucket-of-science-fiction-awesomeness we’ll have in the future, without much thought being given to how the two ideas might interact or, more likely, be in competition. One of Hanson’s more important contributions is to point out this competition, and to pick brain emulation, or “ems” for short, as the winner. Once you’ve picked a winner, the space of possible futures narrows greatly, to the point where you can make some very interesting and specific predictions. And this is precisely what The Age of Em does. (Though perhaps with a level of precision some might find excessive.)
Having considered Hanson’s position, my friend’s position, and the generic transhumanist position, we are left with four broad views of the future (the fourth of which is essentially my position).
First, the position of the AI optimists, who believe that human level machine intelligence is just a matter of time, that computers keep getting faster, algorithms keep getting better, and the domain of things which humans can do better than computers keeps narrowing. I would say that these optimists are less focused on exactly when the human intelligence finish line will be crossed and more focused on the inevitability of crossing that line.
Second, there’s the position of Hanson (and I assume a few others) who mostly agree with the above, but go on to point out (correctly) that there are two races being run. One for creating machine intelligence and one for successfully emulating the human brain. Both are singularities, and they’re betting that the brain emulation finish line is closer than the AI finish line, and accordingly that’s the future we should be preparing for.
Third, there’s the generic transhumanist position, which holds that some kind of singularity is going to happen soon, and when it does it’s going to be awesome, but which takes no strong position on whether it will be AI, brain emulation, or some third thing (extensive cybernetic enhancement? Unlimited free energy from fusion power? Aliens?)
Finally there are those people, myself included, who think something catastrophic will happen which will derail all of these efforts. Perhaps, to extend the analogy, clouds are gathering over the race track, and if it starts to rain all the races will be canceled even if none of the finish lines have been reached. As I said, this is my position, though it has more to do with the difficulties involved in these efforts than with thinking catastrophe is imminent. Though I think all three of the other camps underestimate the chance of catastrophe as well.
The Age of Em is written to explain and defend the second case. Let’s start our discussion of it by examining Hanson’s argument that we will master brain emulation before we master machine intelligence. I was already familiar with this argument, having encountered it in the Age of Em review on Slate Star Codex, which was also the first time I heard about the book. And then later I heard the argument in a more extended form, when Robin Hanson was the keynote speaker at the 2017 Mormon Transhumanist Association Conference.
Both times I felt like Hanson downplayed the difficulty of brain emulation, and after hearing him speak I got up and asked him about the OpenWorm project, where they’re trying to model the brain of the C. elegans roundworm, which has only 302 neurons, so far without much success. Didn’t this indicate, I asked, that modeling the human brain, with its 100 billion neurons, was going to be nearly impossible? I don’t recall exactly what his answer was, but I definitely recall being unsatisfied by it.
Accordingly, one of the things I hoped to get out of reading the book was a more detailed explanation of this assumption, and in particular of why he felt brain emulation was closer than machine intelligence. In this I was somewhat disappointed. The book didn’t go into much more detail than Hanson did in his presentation; I didn’t come across any arguments about emulation in the book which he left out of it. That said, the book did make a much stronger case for the difficulties involved in machine intelligence, and I got a much clearer sense that Hanson isn’t so much an emulation optimist as he is an AI pessimist.
Since I started with the story of my friend, the AI optimist, it’s worth examining why Hanson is so pessimistic. I’ll allow him to explain:
It turns out that AI experts tend to be much less optimistic when asked about the topic they should know best: the past rate of progress in the AI subfield where they have the most expertise. When I meet other experienced AI experts informally, I am in the habit of asking them how much progress they have seen in their specific AI research subfield in the last 20 years. A median answer is about 5-10% of the progress required to reach human level AI.
He then argues that taking the past rate of progress and extending it forward is a better way of making estimations than having people make wild guesses about the future. And that, using this tactic, we should expect it to take two to four centuries before we have human-level machine intelligence. Perhaps more, since getting to human level in one discipline does not mean that we can easily combine all those disciplines into fully general AI.
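To make the extrapolation explicit (a back-of-the-envelope sketch of the logic, not Hanson’s actual calculation): if 20 years of work buys you 5-10% of the distance to human-level AI, then at a constant pace the whole distance takes 200-400 years.

```python
# Linear extrapolation from Hanson's informal survey of AI experts:
# ~20 years of research bought 5-10% of the progress needed for
# human-level AI in their subfields.
years_elapsed = 20
for fraction_done in (0.05, 0.10):
    total = years_elapsed / fraction_done
    print(f"{fraction_done:.0%} in {years_elapsed} years -> {total:.0f} years total")
# 5% in 20 years -> 400 years total
# 10% in 20 years -> 200 years total
```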
Though I am similarly pessimistic, in my friend’s defense I should point out that Age of Em was published in 2016, and thus was almost certainly written before the stunning accomplishments of AlphaGo and some of the more recent excitement around image processing, both of which may now be said to be “human level”. It may be that after several eras of AI excitement, each inevitably followed by an AI winter, spring has finally arrived. Only time will tell. But my personal opinion is that there is still one more winter in our future.
I am on record as predicting that brain emulation will not happen in the next 100 years, but Hanson isn’t much more optimistic than I am: he predicts it might take up to 100 years, and the only reason he expects it before AI is that he expects AI to take 200-400 years. Meaning that in the end my actual disagreement with Hanson is pretty minor. Also I think that the skies are unlikely to remain dry for another 100 years, which means neither race will reach the finish line…
I should also mention that between seeing Hanson’s presentation at the MTA conference and now, my appreciation for his thinking has greatly increased, and I was glad to find that on the issue of emulation difficulty we were more in agreement than I initially thought. Which is not to say that I don’t have my problems with Hanson or with the book.
I think I’ll take a short detour into those criticisms before returning to a discussion of potential futures. The biggest criticism I have concerns the length and detail of the book. Early on he says:
The chance that the exact particular scenario I describe in this book will actually happen just as I describe it is much less than one in a thousand. But scenarios that are similar to true scenarios, even if not exactly the same can still be a relevant guide to action and inference. I expect my analysis to be relevant for a large cloud of different but similar scenarios. In particular, conditional on my key assumptions, I expect at least 30% of the future situations to be usefully informed by my analysis. Unconditionally I expect at least 10%.
To begin with, I think the probabilities he gives suffer from overconfidence, and he may, ironically, be doing something similar to the AI researchers, whose guesses about the future are more optimistic than a review of past performance would indicate. I think if you looked back through history you’d be hard pressed to name a set of predictions made a hundred years in advance which would meet his 10% standard, let alone his 30% standard. And while I admire him for saying “much less than one in a thousand”, he then goes on to spend a huge amount of time and space getting very detailed about this “much less than one in a thousand” prediction. An example:
Em stories predictably differ from ours in many ways. For example, engaging em stories still tell morality tales, but the moral lessons slant toward those favored by the em world. As the death of any one copy is less of a threat to ems, the fear of imminent personal death less often motivates characters in em stories. Instead such characters more fear mind theft and other economic threats that can force the retirement of entire subclans. Death may perhaps be a more sensible fear for the poorest retirees whose last copy could be erased. While slow retirees might also fear an unstable em civilization, they can usually do little about it.
This was taken from the section on what stories will be like in the Age of Em, from the larger chapter on em society. And hopefully it gives you a taste of the level of detail Hanson goes into in describing this future society, and the number of different subjects he covers while doing so. As a setting bible for an epic series of science fiction novels, this book would be fantastic. But as just a normal non-fiction book one might sit down to read for enlightenment and enjoyment, it got a little tedious.
That’s basically the end of my criticisms, and actually there is a hidden benefit to this enormous amount of detail. It not only describes a potential em society with amazing depth, it also sheds significant light on the third position I mentioned at the beginning, the vague, everything’s-going-to-be-cool transhumanist future. Hanson’s level of detail provides a stark contrast to the ideology of most transhumanists, who have a big-bucket-of-science-fiction-awesomeness that might happen in the future but little in the way of a coherent vision for how it all fits together, or whether, as Hanson points out in the case of ems vs. AIs, it even can fit together.
Speaking of big-bucket-of-science-fiction-awesomeness, and transhumanists, I already mentioned Hanson’s keynote at the MTA Conference, and while I hesitate to speculate too strongly, I suspect most MTA members did not think Hanson’s vision of the future was quite as wonderful or as “cool” as the future they imagine. (For myself, as you may have guessed, I came away convinced that this wasn’t a scenario I could ignore, and resolved to read the book.) But of course it could hardly be otherwise. Most historical periods (including our own) seem pretty amazing if you just focus on the high points, it’s when you get into the details and the drudgery of the day to day existence that they lose their shine. And for all that I wish that Hanson had spent more time in other areas (a point I’ll get back to) he does a superlative job of extrapolating even the most quotidian details of em existence.
In further support of my speculation that the average MTA member was not very excited about Hanson’s vision of the future, at their next conference, a year later, the first speaker mentioned Age of Em as an example of technology going too far in the direction of instrumentality. You may be wondering what he meant by that, and thus far, other than a few hints here and there, I haven’t gone into much detail about what the Age of Em future actually looks like. I’ll only be able to give the briefest of overviews here, but as it turns out much of what we imagine about an AI future applies equally well in an em future. Both AIs and ems share the following broad features:
- They can be sped up: Once you’re able to emulate a human brain on a computer you can always turn the speed up. Presumably this would make the “person” being emulated experience time at that new speed. By speeding up the most productive ems, you could get years of work done every day. Hanson suggests the most common speed setting might be 1000 to 1, meaning that for every year which passes for normal humans, a thousand subjective years would pass for the most productive ems. (See the sketch after this list.)
- They can be slowed down: You can do the reverse and slow down the rate at which time is experienced by an em. Meaning that rather than ever shutting down an em, you could put them into a very cheap “low resource state”. Perhaps they only experience a day for every month that passes for a normal human. Given how cheap this would be to maintain you could presumably keep these ems “alive” for a very long time.
- They can be copied: Because you can copy a virtual brain as many times as you want, not only can you have thousands if not millions of copies of the same individual, you’re also going to only choose the very “best” individual to copy. This means that the vast majority of brain emulations may be copies of only a thousand or so of the most suitable and talented humans.
- Other crazy things: You could create a copy each day to go to “work” and then delete that copy at the end of the day, meaning that the “main” em would experience no actual work. You could take a short break, but by turning up the speed make that short break into a subjective week long vacation. You could make a copy to hear sensitive information, allow that copy to make a decision based on that information, then destroy the copy after it had passed the decision along. And on and on.
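Since these speed ratios do a lot of work in Hanson’s analysis, here’s the arithmetic as a minimal sketch (the 1000-to-1 figure is Hanson’s; the helper function and the slowdown scenario are just illustrative):

```python
def subjective_days(objective_days, speedup):
    """Days experienced by an em running at `speedup` times human speed."""
    return objective_days * speedup

# Hanson's suggested common setting: 1000x.
# One objective year yields ~1000 subjective years of work.
print(subjective_days(365, 1000) / 365)  # 1000.0

# A slowed retiree at 1/30th speed experiences ~1 day per objective month.
print(subjective_days(30, 1 / 30))       # 1.0
```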
Presumably at this point you have a pretty good idea of what the MTA speaker meant by going too far in the direction of instrumentality. Also, since culture and progress are going to reside almost exclusively in the domain of the speediest ems, chosen from only a handful of individuals, it’s almost certain that no matter how solid your transhumanist cred, you’re going to be watching this future from the sidelines. (And actually even that analogy is far too optimistic; it will be more like reading a history book, and every morning there’s a new history book.)
The point of all of this is that there is significant risk associated with AI (position 1), and Hanson points out that the benefits of widespread brain emulation will be very unequally distributed (position 2). Meaning that the two major hopes of transhumanists both promise futures significantly less utopian than initially expected. We still have the vague big-bucket-of-science-fiction-awesomeness hope (position 3). But I think Hanson has shown that if you subject any individual cool thing to enough scrutiny it will end up having significant drawbacks. The future is probably not going to go how we expect, even if the transhumanists are right about the singularity, and even if we manage to avoid all the catastrophes lying in wait for us (position 4).
The problem with optimistic views of the future (which would include not only the transhumanists, but people like Steven Pinker) is that they’re all based on picking an inflection point somewhere in the not too distant past. The point where everything changed. They then ignore all the things which happened before that inflection point and extrapolate what the future will be like based only on what has happened since. But as I mentioned in a previous post, Hanson is of the opinion that current conditions are anomalous, and that extrapolating from them is exactly the wrong thing to do because they can’t continue. They’re the exception, not the rule. He calls the current period we’re living in “dreamtime” because, for a short time we’re free from the immediate constraints of survival.
Age of Em covers this idea as well, and at slightly greater length than the blog post where he initially introduced it. When I complain about the book’s length and the time it spends discussing every nook and cranny of em society, I’m mostly complaining that he could have spent some of that space going into more detail on this idea of “dreamtime”. His discussion of larger trends is fascinating as well. In the end, I would have preferred for Hanson to spend most of his time discussing broad scenarios, rather than spending so much on this one, very specific, scenario. Because, as you’ll recall, I’m a believer in the fourth position, that something will derail us in the next 100 years before Hanson’s em predictions are able to come to fruition, and largely because of the things he points out in his more salient (in my opinion) observations about the current “dreamtime”:
We have also, I will argue, become increasingly maladaptive. Our age is a “dreamtime” of behavior that is unprecedentedly maladaptive, both biologically and culturally. Farming environments changed faster than genetic selection could adapt, and the industrial world now changes faster than even cultural selection can adapt. Today, our increased wealth buffers us more from our mistakes, and we have only weak defenses against the super-stimuli of modern food, drugs, music, television, video games and propaganda. The most dramatic demonstration of our maladaptation is the low fertility rate in rich nations today.
This is what I would have liked to hear more about. This is a list of problems that is relevant now, and which, in my opinion at least, seems likely to keep us from ever getting either AI or ems, or even just the big-bucket-of-science-fiction-awesomeness. Because in essence what he’s describing are problems of survival, and as I have said over and over again, if you don’t survive you can’t do much of anything else. Brain emulation and AI and science fiction awesomeness all sit at the difficult end of the “stuff you can do” continuum, on top of mere survival. I understand that some exciting races are being run, and that the finish line seems close, but I still think we should pay at least some attention to the gathering storm.
If the phrase “big-bucket-of-science-fiction-awesomeness” made you smile, even a little bit, consider donating. Wordsmithing of that level isn’t cheap. (Okay maybe it is, but still…)
I don’t see brain downloads as ever being viable. Take one advanced neuroscience class, and the levels of complexity of the brain should be sufficient to convince anyone that this isn’t just a problem of processing power, but of actual impossibility.
The problem is that the brain is highly dynamic, with things happening at the protein-subunit level that are fundamental to function. It’s not just that you’d need a snapshot of all the individual neurons and neural connections. It isn’t even just that you’d have to add all the supporting cellular infrastructure that maintains and sometimes controls neural function. You actually have to know the billions (trillions?) of subcellular states.
Then you have to verify them against expected consciousness, which is impossible to do. How do you know you got it right, and that you’ve recreated a living mind? Maybe all you did was make a robot that can pretend it’s a person really well. At which point you’ve just recreated AI from a different, more difficult, method.
Westworld is the go-to series here. Spoiler ALERT:
In the series a potential path to immortality is envisioned. Not quite a brain upload, but a robot with a perfect copy of the person’s brain. Yes, the original still dies, but to the robot’s perception they are immortal: as far as they know, they were born human and were one day revealed to be a robot. From that point on you can copy as much as necessary to keep the robot mind going.
One of the park’s founders attempts to do this for his father-in-law. One potent episode shows him, year after year, having a drink with robot-in-law-dad in the lab. The conversation follows the same pattern until the robot realizes he didn’t survive the fatal illness and he’s a robot. At a certain point he shows the ‘script’ to the robot to make him realize the conversation he is having is known and determined. Why is it so important that the robot mind follow the script? “Fidelity”. The assumption is that if the robot acts as the real-life person did when presented with the exact situation, then the mind copy has perfect fidelity. But after hundreds of rounds, the mind breaks down and the robot falters and fails. (Watch the “Riddle of the Sphinx” episode if you want to just see it.)
But then it’s revealed there are some actual ‘copies’ of human minds roaming around the robot park. The trick is not to demand perfect ‘fidelity’ in the ‘upload/copy’. This actually does seem to produce an ‘upload’ of the subject’s mind and a robot that can act on his agenda.
Here’s where this goes. I’d be very fascinated by an uploaded ‘copy’ of, say, James Joyce or Johnny Carson who was only 99% ‘exact’. You may say such a feat would be a failure, as the ‘original’ was the only ‘true’ one, but why wouldn’t the copy simply be a digital person who’s very much like, but not exactly like, his ‘biological twin’?
If ‘fidelity’ is a bit relaxed as a requirement, then how is the huge complexity of the brain a factor here? Yes, it’s billions of neurons, but they are, after all, contained nicely in a brain about the size of a football that seems to operate without violating any rules of physics or chemistry, and that doesn’t require insane amounts of energy.
More likely I think you could develop the ability to create a “blank” brain and then train it up like a child until you had a brain emulation, but not of any specific person. This would make fidelity not important at all except in the initial model of the brain.
Yeah, I think making something that acts like a human brain is much easier than copying the brain of a specific person. The copying idea is the kind of thing you can imagine in the abstract, until you get into the weeds of modern neuroscience and then everything falls apart very quickly.
For example, we know massive amounts of processing, including feedback, is happening outside the brain. So it’s not just a matter of “emulating a football-sized mass”. You might say, “fine, we’ll emulate the inputs too” (by which I don’t mean sensory inputs, but rather lower-level biological inputs throughout the body from the heart, other organ systems, and complex hormonal interactions created by microenvironments). But for any kind of decent copy, you end up emulating the whole body, including the microbiome, and mission creep makes the task increasingly far off.
Boonton suggests relaxing fidelity requirements to get something “close enough”. Maybe you could create some generic bodily inputs, guess at most of the dynamic sub-protein states, fudge some neural connections that keep changing on the fly, and give up on environmental inputs altogether. But by then you’re not copying anymore but making something new. Which, again: making new human-like machines is probably closer to the realm of non-fiction. It’s the copy idea, which is what gets people excited about human emulation in the first place, that’s hopelessly out of reach.
But say we are okay with some breaks in fidelity so long as most of it transfers over. We want the memories, at least. The problem is we don’t even know how to identify any specific memory. We have what we think are good hypotheses for how they’re stored, but they’re vague, untestable, and probably wrong because they’re ultimately too simplistic. Then again, what good are memories in some generic brain? If you’re going to all the trouble of capturing the memories, don’t you also really want the personality as well? Preserving the memories of a Washington isn’t nearly as interesting if the entity they’re encoded in is absolutely nothing like Washington.
And the biggest problem of all is that brains don’t store memories as a nostalgic record, but for purposes related to survival and sense-making. If you copy a bunch of memories and put them in a foreign brain emulation, that brain emulation will likely reject the memories as irrelevant and start purging them, or maybe go insane trying to reconcile them. This is where a fundamental principle of biology, that low-order structures define higher-order function, makes it hard to recreate only part of the lower-order structure accurately. Experience in simpler fields within biology suggests the unintended consequence will almost always be total collapse of such an artificial system.
But wait, why don’t we create a computer system that emulates the relevant aspects we want to keep, but jettisons all the complicated garbage we want to avoid? No need for a faithful recreation of every neuron, just read the neurons and you’re good. Again, reading neurons isn’t as simple as just marking off connections. There are layers of complexity here that are completely glossed over. And it’s not clear you could just “program around” the complex set of built-in features that are literally the reason for the overall brain architecture, without ending up having to emulate the entire complicated mess whole hog.
From where we stand in the field of neuroscience, I’d compare the idea of copying a brain into a computer to Greek philosophers looking at a bird and saying, “theoretically it should be possible to construct a mechanical bird that is able to fly all the way to Mars and speak with the gods directly. It’s only a matter of time before this is accomplished.”
It seems to me you could ‘train’ your ‘blank brain’ on the available examples of the person you are thinking about. Much like the Turing test, you’d end up with a copy that seems just like, say, Johnny Carson to any of us, but who among us really knew the ‘real’ Carson? Or the real Bill Cosby for that matter. Nonetheless the ‘copy’ would not only seem just like the real thing but also provide us its own perspective on things the original never could. For example, we could hear digital Carson do his take on Facebook and YouTube. We don’t know how close to the ‘real’ Carson that would be, if we resurrected him, but it would be an independent person in its own right… perfect copy or not.
I think Mark raises a valid point that a mind copy would also need either input from the body (Westworld’s answer is very intricate robots) or would need that input simulated in surprisingly minute detail. But OK: if we’re able to simulate the brain, we can simulate the leg, and we can simulate the subtle twitch one might feel if their pancreas released just a tad too much insulin. Or we could leave that input-output behind and adopt other I/O designed by the system.
That might happen first. A robotic leg, for example, can either try to mirror all the I/O interactions your real leg would have with your brain or, more likely, we would try to get your brain to adapt to the robot leg and communicate with it.
What I don’t really see from Jeremiah is his reason for being skeptical that this will happen. I wonder if he’s relying on something like a god-of-the-gaps gambit here. One avenue is dualism: there’s some immaterial ‘soul’ type material that somehow isn’t breaking any rules of physics, but absent it a brain or simulated brain just won’t work. Or you’re relying on a gaps argument: yes, the brain is material, but it is so complicated it can only be simulated by a biological-brain-type computer.
In other words, I believe there’s a concept called a Turing machine. All computers are equivalent to Turing machines, which means that you can run any program on any computer. Your Commodore 64 may need a billion years to run a program a supercomputer in a defense lab runs in a day, but both machines are the same. If the brain is a Turing machine, then you can’t claim it can’t be simulated or ‘uploaded’ into a virtual medium, but you can say AI advocates are underestimating just how many computing advances are needed and how long Moore’s law will have to keep working in our favor to give them to us.
Seems to me that if you are avoiding dualism, you need an argument that the brain is not a Turing machine, hence that the program cannot simply be ported from a ‘wet’ system to a dry one. Fair to say AI advocates have assumed the brain is a Turing machine without proving it. But I don’t think the skeptics have presented a case for why the brain isn’t a Turing machine.
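To make the universality claim concrete, here’s a minimal sketch of the kind of machine being talked about. Any device that can execute this loop can, given enough time and memory, run any Turing machine’s program; the difference between a Commodore 64 and a supercomputer is speed, not capability. (The rules below are the standard 2-state “busy beaver”, used purely as a stand-in.)

```python
def run_tm(rules, state="A", steps=10_000):
    """Simulate a Turing machine. `rules` maps (state, symbol) to
    (symbol_to_write, move, next_state); the machine halts on state "H"."""
    tape, head = {}, 0
    for _ in range(steps):
        if state == "H":
            break
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape)]

# The 2-state "busy beaver": halts after six steps, leaving four 1s.
bb2 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
print(run_tm(bb2))  # [1, 1, 1, 1]
```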
In my heart of hearts I believe in a soul. So yes, my opinion is that trying to create brain/consciousness in a purely material fashion is ultimately never going to work. But..
I think there’s an enormous amount of complexity to figure out, before we even get to that point. And that the deeper people get in the project the more complex they’ll realize it is.
Also I think modernity is fragile enough that the effort will go off the rails (be deprioritized in favor of more immediate survival) before we reach the materialistic endpoint as well.
So yes, call me a dualist, but in the end I think it won’t matter. I think the effort will be stopped by other things first.
I would argue that even if the brain is a Turing machine (and I think the burden of proof for that would lie with those asserting the brain is a Turing machine) it is not fundamentally amenable to recreation de novo.
If you said you planned to freeze a human head and create a faithful copy of every molecular interaction at that moment in time, assuming you could surmount the highly time-dependent problems associated with physically accomplishing that, you might be able to approach a kind of replication (also assuming you knew everything about the inputs and function of the brain).
Is that a fundamentally solvable problem given infinite time and resources (and persistence) to work on it? Possibly. But to get there we had to make many dramatic assumptions that border on fantasy. Maybe some of those problems will get worked out with better technology, but there are enough major, fundamental problems that I’m skeptical of the copying concept.
Maybe you could make something that acts like Johnny Carson on stage, but the complex stuff is harder to do and eventually you have to admit you just made a shallow impression of the real thing.
And here’s where we run into the limits of projecting forward from current technology. We can see hoverboards and flying cars that never appear because the fundamental physics make them impractical, even if theoretically possible.
Jeremiah’s position, I think, is not dualism. He seems to be quite materialistic. “You” are the atoms and energy of your brain. This won’t be uploaded or copied, not because of anything special about human nature, but simply for the same reason the set of all possible chess games will never be computed and analyzed: doing so would be very complicated and take us a long time, and over the long haul we are almost certain to either destroy ourselves or do something to set us far back. He doesn’t say that our trying to achieve such knowledge will itself set off our catastrophic destruction or set us backwards.
This to me seems to be making multiple predictions. One is how complicated the brain really is. Another is how long it would take us to unravel that, given sufficient research effort sustained over time until we get there. Another is how human society will go over that time period. The weakness of predictions that are really bundles of nested predictions is that you are increasing the number of ways you could be wrong, therefore increasing the odds that you’re wrong.
Mark’s position seems to be that the brain is a Turing machine, and you can therefore run a ‘brain program’ on any Turing machine, including a bank of servers, but that this would require an atom-by-atom copy of the brain, which can’t be done:
“If you said you planned to freeze a human head and create a faithful copy of every molecular interaction at that moment in time, assuming you could surmount the highly time-dependent problems associated with physically accomplishing that, you might be able to approach a kind of replication (also assuming you knew everything about the inputs and function of the brain).”
Well, look: if you had a Commodore 64 running, say, Ultima III, it might be valid to say an atom-by-atom copy would produce a copy of the machine and program but would be impossible to do. But that’s not really necessary. Since all computers are Turing machines, you can create an emulator that pretends to be a Commodore 64 running Ultima III, and you can have a bit of your childhood fun back. No need to measure the position of every electron in a C-64 you salvaged from eBay.
If this is the case then the ‘barrier’ you’re trying to argue for is pretty weak. Sure, we don’t have the information to simulate Johnny Carson’s brain. We also don’t have the information needed to emulate your actual C-64 from 30 years ago, with the sticky Y key and the little toy program you once wrote yourself in BASIC. That C-64 is in a landfill somewhere and the electrons of your little program are now zipping around the universe doing their thing. But C-64 emulators mean there are more C-64s in the world today than were ever made back then. The ability to emulate human minds will mean biological humans will be a minority of all humans, though granted, we may not copy a biological human into a digital one. On the other hand, you would have a hard time trying to copy a biological human via biological means. Having a son and raising him exactly like Tiger Woods’ dad raised Tiger is unlikely to produce a golf champion. Yet if you tried this you’d still end up with a human who is no less interesting and valuable in his own right than Tiger Woods.
I think you have the bundling backward. I’m right if the brain turns out to be more complicated than Hanson thinks, OR if AI turns out to be easier, OR if anything derails the effort, OR if it turns out that there’s actually a soul which can’t be modeled on a computer, etc.
Ems only happen if the brain turns out to be straightforward, AND AI turns out to be hard, AND everything is purely materialistic, AND nothing derails the effort.
Hanson’s the one who has to be right about everything; I only have to be right about one thing.
I also basically agree with all the points Mark is making as well.
If you think brains are tractable, when do you think the OpenWorm project will announce their perfectly accurate virtual worm? This year? Ten years from now?
Even if it’s this year, and even if from that point the number of emulated neurons follows Moore’s Law and doubles every two years (both insanely optimistic assumptions), it will be 2074 before they hit 100 billion.
If, on the other hand, it takes 10 years, and the doubling only happens every 5 years after that, then you’re looking at 2170, and a lot can happen in the next 150 years…
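For what it’s worth, the arithmetic behind those dates is just repeated doubling from OpenWorm’s 302 neurons up to 100 billion. A quick sketch (assuming 2018 as “this year”, since that’s roughly when this exchange took place):

```python
import math

HUMAN_NEURONS = 100e9  # ~100 billion neurons in a human brain
WORM_NEURONS = 302     # the C. elegans connectome

# Doublings needed to scale a working worm emulation up to human size.
doublings = math.log2(HUMAN_NEURONS / WORM_NEURONS)  # ~28.3

def human_scale_year(worm_year, years_per_doubling):
    """Year a human-scale emulation arrives if emulated neuron counts
    double every `years_per_doubling` years after the worm works."""
    return worm_year + doublings * years_per_doubling

print(human_scale_year(2018, 2))  # ~2074.6: worm this year, Moore's Law pace
print(human_scale_year(2028, 5))  # ~2169.5: worm in 10 years, slower doubling
```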
I’m not sure about a virtual worm, but again, we aren’t anywhere near an atom-by-atom simulation of a C-64 on a computer, and no one needs that.
The ‘soul’ as a barrier looks a bit different in that context. A perfect C-64 emulator may, nonetheless, not have access to the chips the C-64 used. If your project was to study the chips of the C-64 and try to recreate them, then an emulator may not provide you with all you need. Someday things like this may matter: the emulators can live far into the future very easily, but the chips are discontinued today, and so many years into the future it may be impossible to find working ones.
But if we have a soul, that doesn’t really matter. If the soul plus the wet brain make a Turing machine, an emulator running on a server is equivalent. Even if you go full dualism, if the soul by itself is a Turing machine then we’re still good.
If you say the soul, or soul+brain, or just the wet brain is a special type of non-Turing computer… well, that would be very interesting, although Turing machines may still be able to simulate such non-Turing machines, so you may not be safe yet. It would open up a really interesting area of research. Where do biological systems switch from Turing machines to non-Turing machines? Suppose we find the brains of people, apes, chimps, and dogs have this ‘non-Turing’ barrier but the brains of fish, cats, and birds have no such barrier? Might that end up being a ‘test for a soul’, and what would be the impact if some non-human animals test positive but others don’t?
Agreed, a lot can happen in 150 years, if we need that long. However, humans have been around for maybe 150,000 years, which is 1,000 periods of 150 years, and in all of them I don’t think there’s a single clear example of us learning something, or getting close to learning something, and then that effort setting us backwards. There are plenty of examples of things turning out more complicated than we thought (the makeup of atoms, energy, etc.) and examples of implementations of knowledge being harder than we imagined (mass production of flying cars, curing cancer, the common cold), but none like what you seem to imply is about to happen.
Of course the last few 150 year periods look very different than all previous ones. You could say we are entering unknown territory here on a scale we never did before and we may yet encounter a Faustian Trap (where acquiring knowledge by itself actually harms us).
Yeah, I think one of the big weaknesses of Hanson’s position is the difficulty of brain emulation. Though if you’re allowed 100 years to do it, who knows what will happen?
Even if some big crash happens before ems are possible, that just delays these issues. It is very hard to kill off all humans, or all life, and so the rising path would eventually restart, leading to something like ems.
I appreciate you stopping by to comment. And that’s a fair point, though the farther out we push things the muddier the waters become, particularly if we toss a few catastrophes into the mix. Which takes me back to the point that I’d rather you had spent more time on a near-term examination of the consequences of Dreamtime than on your very detailed examination of this one specific prediction.
I had originally planned to focus a book on the Dreamtime concept, but I found I just didn’t have enough to say on that to fill a book.
Ahh… I guess I could see that. Hopefully you’ll eventually have enough to write a book, because that would be a book I would love to read.