The cartoon below is my favorite version of the sentiment that Artificial Intelligence (AI) is defined as anything computers can’t do… at least not yet.
In the episode “The Conscience of the King,” from the original Star Trek TV series, Captain Kirk suspects that the actor Anton Karidian is actually the evil mass murderer Kodos the Executioner. So Kirk asks the computer for information:
An often-quoted paper by Erik Brynjolfsson and Adam Saunders opens with this great line: “We see the influence of the information age everywhere, except in the GDP statistics.” They go on to note that information technology has been static at 4% of GDP for the past 25 years. But the economics of artificial intelligence might change the game in the 25 years to come. Let’s see why.
The Technological Singularity is defined as the sudden emergence of a super-intelligent computer. The classic scenario is that people build an artificial intelligence capable of building a better artificial intelligence, and this process runs away with itself until a super-intelligence emerges. See Asimov’s 1950 short story for an example. You can see why it’s been mocked as the “rapture of the nerds.” Or if you like end-of-the-world scenarios, think Skynet from the Terminator movies. The term “singularity” is used because it’s hard to predict what will happen once a super-intelligent computer is running around. Personally I’m a yes on computers getting smarter than people, but I think it will happen gradually. Call it the gradual singularity.
Biologists consider beehives to be superorganisms, where the social bonding of individual bees is so tight that they exhibit behavior at the hive level. A common analogy is that bees are to a hive what cells are to a bee. Or, extending further, what neurons are to a human brain. Now, the coordination of cells in an organism is far tighter than that of bees in a hive. But that difference is helpful: with bees we can actually see what’s going on. Let’s look at how bees decide where to build a nest, and see what that might mean for how the brain works.
There is a longstanding debate over whether computers will ever achieve true consciousness, sometimes called “Strong Artificial Intelligence (AI).” So you might think the most interesting AI question right now is how close we are to thinking computers. But I think something else big will happen first, and it will end the debate before we actually reach that goal.
As explained in my last post, with regard to consciousness I’ll assume a materialist, biological theory of mind. As such, the scientific question is how a real human brain is conscious. This approach sidesteps abstract philosophical arguments such as the mind-body problem. Furthermore, I’ll immediately concede I’m in the tank for Daniel Dennett, pictured above. So to a certain extent this post will be a commentary on Dennett’s ideas, with a little Jeff Hawkins on the side.