Thursday, 12 June 2014

A computer tells lies to try to become a real boy: this is progress?

As a sci-fi reader and the parent of a philosopher (the two interests are definitely related), I'm not surprised by the claim that a computer program has, according to some, passed the famous Turing Test.

What surprises me is that the world is not agog over the news. Or that it has not been made fearfully irrational by it.

Eugene Goostman is the name of a 'personality' created by two programmers whom recent history has made unlikely collaborators.

Vladimir Veselov is Russian, Eugene Demchenko is Ukrainian. They've been working on the program at Princeton Artificial Intelligence since 2001, and decided their best shot at passing Turing's test was to create a digital 13-year-old Ukrainian boy. That way, the program could claim to know anything when questioned, yet be believably wrong at the same time. Smart, that.

The aim of the test, as mathematical genius Alan Turing proposed it in 1950, is for a program to trick human questioners into believing it is a real human. Turing predicted machines would one day fool 30 per cent of interrogators after five minutes of questioning, and that figure has since been adopted as the threshold for a pass, which touched off arguments that have continued ever since.

Sci-fi, meet philosophy.

The ability of a computer to mimic genuine intelligence is serious stuff. Not because people seem to long for a HAL or a Commander Data as a friend, but because the quest to create intelligence touches the base of our own notions of who we are.

And how vulnerable we are in the presence of our technology. More on that later.

IBM's chess computer Deep Blue was able to defeat a world champion because it could compute a massive number of outcomes for any series of moves within the game. Eugene was pre-loaded with a massive roster of possible responses to comments and questions typed in by a human interviewer.
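To make the contrast concrete, here is a minimal sketch of the keyword-and-canned-response approach such chatbots rely on. It is not Eugene's actual code (which has not been published), and every reply in it is invented for illustration:

```python
import random

# A toy keyword-matching chatbot, purely illustrative: pre-written replies
# are keyed to words spotted in the interviewer's question, and the bot
# deflects, in character as a 13-year-old, when nothing matches.
CANNED_REPLIES = {
    "school": "I go to school in Odessa. It is boring, mostly.",
    "favourite": "My favourite thing is my pet guinea pig.",
    "weather": "The weather here is fine, I guess. Why do you ask?",
}

FALLBACKS = [
    "I am only thirteen, you know. Ask me something easier.",
    "Hmm. My father never told me about that.",
]

def reply(question: str) -> str:
    """Return a canned answer if a keyword matches, else a deflection."""
    q = question.lower()
    for keyword, answer in CANNED_REPLIES.items():
        if keyword in q:
            return answer
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(reply("What is your favourite subject at school?"))
    print(reply("Explain general relativity to me."))
```

The striking thing is how little machinery it takes: no understanding at all, just lookup and a plausible deflection.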

Neither power can be considered evidence of the self-awareness or intentionality that marks a human. Nor is it evidence of the functions of greed, compassion, desire to dominate or willingness to sacrifice that humans — and many other members of the animal kingdom — display naturally.

But last Saturday, Eugene Goostman convinced 10 of 30 judges that it was human, in a round of simultaneous, unrestricted interviews. Other programs taking the test at the same time failed. I haven't seen a report on how many of the hidden humans used as controls in the exercise were also mistaken for machines.

Here's where the fear should set in. Nobody is saying that some point has been passed where robots can begin to enslave humans a la science fiction.

But if a chatbot can fool 10 of 30 skeptical judges into accepting something fictional as true, imagine what can happen when chatbots are released onto the World Wide Web, able to convince massively gullible societies of anything at all.

Why do Nigerian princes still ask you to give them your banking information? So they can actually deposit a fortune into your account? It's because a whole lot of people believe false promises.

A human-like simulacrum can work 24 hours a day, keep track of millions of lies simultaneously, follow thousands of conversations without ever getting confused or losing track, remember every detail you let slip, and be guided to steal everything you own.

“Contact me for a profitable transaction i have for you.Regards” Sorry, mrs.liung, but you are now soooo obsolete.

“Is your email active? I have an urgent proposal to discuss with you.” Sorry, Wing Lok, but there's a robot on the line.

The people who imagined a world containing genuine AI must have believed themselves to be some sort of god. But even the God who created Eden saw it fall into jealousy and murder pretty quickly.

I don't think Turing, or anyone else, imagined this outcome. Nobody set out to invent email so that people could be robbed of their life savings. Nobody set out to create social media so that teenaged girls could be coaxed into publishing nude photos of themselves, and then be bullied into suicide.

Software pioneer Mitchell Kapor has a $20,000 bet with futurist Ray Kurzweil over whether a widely accepted pass of their own, more demanding version of the Turing Test will occur before the dawn of 2029: Kurzweil says it will, Kapor says it won't.

Eric Schmidt, executive chairman of Google, says the mark will be passed before the end of 2018.

I — and the philosophers — ask: why would we want that?


Follow Greg Neiman's blog at Readersadvocate@blogspot.ca
