The Technological Singularity is defined as the sudden emergence of a super-intelligent computer. The classic scenario: people build an artificial intelligence capable of building a better artificial intelligence, and the process runs away with itself until you get a super-intelligent computer. See Asimov’s 1950 short story for an example. You can see why it’s been mocked as the rapture of the nerds. Or if you prefer end-of-the-world scenarios, think Skynet from the Terminator movies. The term “singularity” is used because it’s hard to predict what will happen once a super-intelligent computer is running around. Personally I’m a yes on computers getting smarter than people, but I think it will happen gradually. Call it the gradual singularity.
IEEE did a special report on the singularity in 2008, which is still fairly current. They surveyed tech leaders, and what jumps out from their responses is that arguments about the singularity are just proxies for arguments about the nature of intelligence itself. If you think the human mind is more than physical processes in the brain, then you are a no. And even if you believe in a materialist mind, you can still be a no if you believe human brains are beyond our comprehension. If you believe the essentials of intelligence are comprehensible, then you tend to be a yes. Obviously I’m a yes of a sort. But before we get there, I want to debunk an odd version of yes: people who believe intelligence is beyond our comprehension, but think we can sidestep this difficulty by doing whole brain emulation.
Here’s Robin Hanson, from that same IEEE report, making the case for whole brain emulation:
Regarding advanced machine intelligence, my guess is that our best chance of achieving it within a century is to put aside the attempt to understand the mind, at least for now, and instead simply focus on copying the brain. This approach, known as whole brain emulation, starts with a real human brain, scanned in enough detail to see the exact location and type of each part of each neuron, such as dendrites, axons, and synapses. Then, using models of how each of these neuronal components turns input signals into output signals, you would construct a computer model of this specific brain.
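To make the neuron-level modeling step concrete, here’s a minimal sketch of the kind of input-to-output model an emulation would need one of per neuron. This is a standard leaky integrate-and-fire neuron, my choice of illustration rather than anything from the report; the parameter values are arbitrary:

```python
# A minimal leaky integrate-and-fire neuron: the simplest kind of
# "input signals -> output signals" model a brain emulation would need,
# one per neuron. Parameters (threshold, leak, reset) are illustrative.
def simulate_lif(inputs, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a stream of input currents."""
    v = 0.0  # membrane potential
    spikes = []
    for current in inputs:
        v = v * leak + current    # integrate input, with leaky decay
        if v >= threshold:        # fire once potential crosses threshold
            spikes.append(1)
            v = reset             # reset potential after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.5]))  # [0, 0, 1, 0, 0]
```

Even this toy version raises the essay’s question: is a simplified model like this enough, or does emulation also need the chemistry the model leaves out?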
The analogy I would give here is: Well…… we can’t figure out how birds fly. But instead of learning about aerodynamics and lift, or experimenting like the Wright Brothers, let’s build an exact copy of a bird and throw it off a cliff. Problem solved.
How can you build a flying machine if you don’t understand which part of a bird makes it fly in the first place? Do you need to mimic the color, curvature, and smelling capability of the beak? Do feathers matter, or just the shape of the wing? If you don’t know what’s essential, you have to build a copy of the entire bird down to the atomic level, which defeats the whole purpose. What’s going on here is that Hanson claims he doesn’t know how the brain works, but has a hidden assumption about it anyway. The hidden assumption is that the brain works due to how the neurons are connected. Once you make this assumption explicit, you can see why he thinks wiring things up correctly will magically make a brain. Of course this begs the question. If you know it’s the wiring, why can’t you determine the principles behind brain wiring and construct an artificial neural network simplified to essentials? Hard as that might be, it’s far easier than blindly emulating random details of a complete human brain. But the killer point is that neuroscience is still in flux on how important brain wiring is relative to other chemical aspects of cognition. For example, see Gallistel and King for a critique of brain wiring hype.
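To show what “simplified to essentials” means in practice, here’s a single artificial neuron, a classic perceptron, learning logical OR. It keeps only weights, a threshold, and an error-driven update rule, discarding every biological detail; the code is my illustration, not anything from the sources discussed:

```python
# A perceptron stripped to essentials: weights, a bias, a threshold,
# and a simple error-driven update rule. No dendrites, no chemistry --
# and this tiny abstraction already learns a function.
def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # correct toward the observed answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from its four input/output pairs.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data]
print(preds)  # [0, 1, 1, 1]
```

The point of the contrast: if wiring is what matters, you study principles and build abstractions like this, rather than copying every molecule of a specific brain.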
This seems an arcane point, but the overhyped neuroscience focus on brain wiring has diffused into public policy. For example, the Obama administration is pushing a $100 million program to map the brain, a response to Europe’s $1.5 billion Human Brain Project. Nothing wrong with spending money on brain science, and seeing how brains are wired up. Awesome. But these projects continue the overemphasis on brain mapping, without first finding out what else we need to model. Why not spend more money on understanding and modeling insect brains, then guppy brains, working your way up to people? Less flashy and glamorous for politicians, but more bang for the research buck.
Anyway, I’m a yes on the singularity, following the Jeff Hawkins school of how intelligence works. Hawkins’ view is that the brain constantly forecasts what will happen next. Intelligence is predictive interaction with the world. The better you can predict, the smarter you are. And you get true intelligence when a brain can forecast not just physical nature, but how other minds behave. For more detail see my post on artificial intelligence. Let’s quote Hawkins directly:
If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time—an exponential increase in intelligence—then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn’t a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have.
The key here is that if intelligence is about predicting the world, you have to spend a lot of time gaining experience. Predict, correct, repeat. It takes babies a decade or two of this training to reach an adult level of understanding, and it will take time for computer algorithms as well. You can’t bootstrap learning from experience. You have to live it.
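The predict-correct-repeat loop can be sketched in a few lines. This is a toy of my own, not Hawkins’s cortical model: an agent learns the transition frequencies of a simple symbol stream, and its predictions improve only by living through mispredictions:

```python
from collections import defaultdict

# Toy predict-correct-repeat learner (an illustration, not Hawkins's
# model): it predicts the next symbol from observed transition counts,
# and updates those counts after seeing what actually happened.
class PredictCorrectLearner:
    def __init__(self):
        # counts[prev][nxt] = how often `nxt` has followed `prev`
        self.counts = defaultdict(lambda: defaultdict(int))

    def predict(self, prev):
        nxts = self.counts[prev]
        if not nxts:
            return None  # no experience with this symbol yet
        return max(nxts, key=nxts.get)

    def correct(self, prev, actual):
        self.counts[prev][actual] += 1

# A world with a hidden regularity the agent must discover by living it.
world = "ABCABCABC" * 50
learner = PredictCorrectLearner()
errors = 0
for prev, actual in zip(world, world[1:]):
    if learner.predict(prev) != actual:
        errors += 1                 # prediction failed: that's the lesson
    learner.correct(prev, actual)   # update on the observed outcome

print(errors)  # 3 -- only the first sight of each symbol is mispredicted
```

Notice there is no shortcut: the error count depends entirely on what the agent has already experienced, which is the point of the quote above.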
Now, where I disagree with the thrust of Hawkins above is on the suddenness of the economic impact of artificial intelligence. It won’t be a singularity, but it will be disruptively quick. So let’s look at the economic impact of artificial intelligence in the next post.