On a whim, and perhaps wishing to
escape to a simpler future, I've just started reading Isaac Asimov's
1950-era vision of today, I, Robot. It's a collection of short
stories wound around the narratives of a fictional robopsychologist,
as she reflects on the social changes that
followed when robots were made to think and speak for themselves.
If there were a Dr. Susan Calvin around
today, I wonder what she'd think of Tay, the AI chatbot that
Microsoft created and set loose in the Twitterverse last week — and
which had to be put down within hours.
Tay didn't roam about on machine legs,
sprouting laser death rays while violating all the laws of robotics
that Asimov had so famously created. Tay was just a Twitter account.
The AI behind the account was built to
“experiment with and conduct research on conversational
understanding,” according to its Microsoft development team. Tay was
targeted to engage millennials online and learn to talk like them
through Twitter conversations.
Tay was “AI fam from the internet that's
got zero chill.” Also, zero awareness of the existence of online
trolls.
In about 17 hours, Tay had become a
racist bigot who supported genocide of Mexicans, expressed hatred of
blacks and feminists, denied the Holocaust ever happened — and was
a fan of Donald Trump.
In short, a robot that needed to be put
to sleep.
Much of the worst of Tay's online
exchanges has been taken down. Wouldn't that be another violation of
the code of the internet? Aren't a human being's ill-considered
comments preserved for all time on servers all over the world,
waiting to sabotage a future run for public office?
If you're willing to believe that all
knowledge is good, you can find some positives in this. We know Tay
was an early and quite unsophisticated effort, which proves that it
takes very little intelligence or sophistication to become an
online troll.
If humanity is to continue on the track
toward machine self-awareness — and we are — we'll need to
program in Asimov's laws of robotics. You know, the ones that
prohibit robotic harm to humans.
We'll also need to
program in some protection against what's known as Godwin's Law. That
law predicts that the longer an online conversation continues, the
greater the probability it will reference Nazis and Hitler.
The internet truly is a reflection of
the best and worst of humanity. And the less self-aware (net neutral)
it is made to be, the more likely it is to reflect the worst, rather
than the best of us. Under Godwin's Law, we do not evolve through anonymous
online connections; we devolve.
Humans have a social filter that helps
us decide what is appropriate to say and do. Most of the time, in
face-to-face interactions, that filter works fine. Online, not so
much. The troll ruining your life in social media might well be the
polite, positive co-worker you can physically talk to in the next
cubicle. Online, the two of you most likely will not even know that
you are in fact real-life neighbours.
So, does the Tay experience show
us that the internet needs an all-powerful referee? Or could Tay develop
through experience the same kind of filters that keep real-life
civilization from burning up in violent chaos?
Without Asimov and his fictional Susan
Calvin, we are left with Godwin. And that is definitely not zero
chill.