The debate on Strong Artificial Intelligence and conscious computers


There is a longstanding debate on whether computers will ever achieve true consciousness, sometimes called “Strong Artificial Intelligence (AI)”. So you might think the most interesting AI question right now is how close we are to thinking computers. But I think something else big will happen before we get there, and it will end the debate before we actually reach the goal.

First, start with my last post, where we saw that consciousness requires the ability to a) predict the consequences of alternative actions, and b) predict what other people may do as a result (have a theory of mind). How close are we to this? Well, this survey provides a nice history of where we’ve been. But the gist is that progress has been erratic. A new technique comes into play and progress is fast for a while, then it hits diminishing returns. Then another approach comes in and progress picks up again. Underneath the particular techniques, right now we are in a big upswing as AI generally moves away from rule-based models toward statistical “big data” approaches.

To illustrate, let’s look at language and computer translation. In the old days linguists would program rule-based logic such as: if noun, then verb, then do this or that. The newer approaches use massive data sets with statistical predictive models, loosely inspired by the brain. A good way to think about this is to recall that a child takes years to learn to talk, and during those years massive amounts of experience (data) are needed to drive the prediction/action/correction learning cycle. Just think of the petabytes needed if you recorded all the video of all the interactions for the first years of life, not to mention the data sensed when monitoring internal reactions inside the body and nervous system itself. Things like trying to move your arm and seeing where it goes, or using your mouth and tongue to speak and hearing the resulting sound, etc. Computers are only now big and fast enough to handle data sets comparable to what a human brain requires to learn. The iconic quote on the power of statistical models over the old rule-based approach is attributed to Fred Jelinek: “Every time I fire a linguist, the performance of our speech recognition system goes up.”
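To make the contrast concrete, here is a minimal sketch of the two styles. Everything in it (the tiny corpus, the single grammar rule, the scoring) is invented for illustration only; real systems on either side are vastly larger and more sophisticated.

```python
from collections import Counter

# Rule-based style: a hand-written grammar rule, e.g. "a determiner should
# be followed by a noun or adjective". Old systems encoded thousands of these.
def rule_based_ok(tags):
    for prev, curr in zip(tags, tags[1:]):
        if prev == "DET" and curr not in ("NOUN", "ADJ"):
            return False
    return True

# Statistical style: learn word-pair (bigram) counts from data, then score a
# sentence by how often its pairs were actually observed.
corpus = "the cat sat on the mat the dog sat on the rug".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_score(sentence):
    words = sentence.split()
    return sum(bigrams[(a, b)] for a, b in zip(words, words[1:]))

print(rule_based_ok(["DET", "NOUN", "VERB"]))   # True: the rule is satisfied
print(bigram_score("the cat sat on the mat"))   # 7: word pairs seen in the data
print(bigram_score("mat the on sat cat the"))   # 1: pairs rarely or never seen
```

The point of the toy example is that the second approach never needs a linguist to write the rule; with enough data, the counts do the work.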

So how close are we to true AGI? I would say surprisingly close, despite so many overblown claims in the past. Perhaps 20-50 years, though of course it’s impossible to be sure. The reason is similar to what I argued earlier about exponential trends in my beyond the touchscreen interface post. From a fixed vantage point, exponential progress at first seems incredibly slow. If 10 years is the doubling time, then it takes 10 years to go from the 1/8 level to the 1/4 level, and another 10 years to go from 1/4 to 1/2. But once you hit your vantage level, in this case human-level AI, things explode: 10 years to go from 1 to 2, then 2 to 4. So 40 years after you hit the reference level you are 16x above it. Huge. But looking back across those same 40 years, you only moved from 1/16 to 1. And yet the rate of progress was constant the whole time, just not linear. The takeaway for exponentials is that once you get near your vantage point, things explode from a human, linear perspective. This happened with computer chess, and in many other areas where computers were applied to very hard problems. And I think we are close to the human vantage level in AI, since robots can now maneuver in the real world in real time, and speech recognition is a going concern. For the first time the basics are now tractable, which means we’re getting close.
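To make that arithmetic explicit, here is a quick sketch of the same numbers. The 10-year doubling time and the choice of 1.0 as the “human level” reference are just the illustrative values used above, not a forecast.

```python
# Exponential progress viewed from a fixed human-level vantage point (1.0).
doubling_time = 10      # years per doubling (illustrative assumption)
level = 1 / 16          # start 40 years "behind" the reference level

for year in range(0, 90, doubling_time):
    marker = "<- human-level reference" if abs(level - 1.0) < 1e-9 else ""
    print(f"year {year:2d}: capability = {level:7.4f} {marker}")
    level *= 2

# It takes 40 years to crawl from 1/16 up to 1, then the next 40 years
# take you from 1x to 16x the reference level. Same constant doubling,
# wildly different feel from a linear human perspective.
```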

But before we get to true AI, we’ll reach a surprising twist in the entire “will computers ever think” debate, as mentioned at the start of this post. The twist is that the public will believe computers are conscious as soon as computers can talk well enough to hold a (stilted) conversation. Most people don’t care that Alan Turing defined his Turing test of intelligence back in 1950, where in a written back and forth a computer tries to produce responses indistinguishable from a human’s. Instead we’ll hit a cultural watershed of belief because of the human propensity to automatically attribute agency to whatever we interact with, especially when talking. And as I outlined in my ELIZA post, Apple Siri and Google Voice will continue their exponential climb and become the primary way to interact with phones within 5-10 years. This voice interaction layer will also fit perfectly on top of the existing touchscreen interface of phones today.

Ironically, AI will play out backwards from most science fiction. A Terminator-style Skynet AI won’t surprise us by leaping onto the stage. Instead everyone will be chatting away with ELIZA-type voice chatbots, believing them intelligent decades before they really hit that level. People will talk to their phones to figure out where to go to dinner, where the cool clubs are, what advertisers would like us to buy, and even what to think about world events. No one will debate whether computers can really think if they start their day by asking their phone which pair of pants to put on.

Can’t wait.

By Nathan Taylor

I blog at http://praxtime.com on tech trends and the near future. I'm on twitter as @ntaylor963.
