“People might say, ‘So you want them to take over the world?’ I would say: not in any way that software isn’t already taking over the world.”
The Rise Of The Machines: Artificial Intelligence In Iceland
Kristinn R. Thórisson is a professor of computer science at Reykjavík University. He is also the founding Director of the Icelandic Institute for Intelligent Machines (IIIM), and has been working on the bleeding edge of this industry for three decades. We sat with Kristinn amid a sea of computers to talk about what artificial intelligence is, where it’s heading, and the implications it holds for the rest of us humans.
What were the beginnings of the IIIM?
In 2004 and 2005, we laid the groundwork for the artificial intelligence lab at Reykjavík University. This brought forth the Centre for Analysis and Design of Intelligent Agents (CADIA), which is now in its eleventh year. It’s a university research lab, so it works mostly on basic research questions. Education is the primary objective of an academic research lab, so it always takes precedence, which severely limits the lab’s ability to commit to dates and deliverables. IIIM is a deliberate attempt at making up for these limitations by building a bridge between academic research—and researchers—and industry. We do a lot of prototyping and feasibility analysis, and our people are very good at delivering products on time.
Shortly after the financial crash, and before tourism took off, everyone was talking about Iceland’s tech industry being the Next Big Thing. Do you feel as though the spotlight has moved away from tech?
Yeah, it seems that somehow our thoughts continuously return to the older industries, like farming, fishing, and energy production. We still haven’t adopted the kind of thinking you need to have in order to make the startup mentality a part of your everyday life. So this idea that a startup that fails is a failure has to go away. People forget that IBM used to be a startup. But how many startups were tried and failed for every IBM? With the recent events in startup culture in Iceland, people write negative things about the startups that fail to sustain themselves, e.g. past a second infusion of financing. This is a serious mistake, because by their very nature, startups are an attempt to make something new. And since no one knows perfectly what the future holds, you can’t be right every time. That doesn’t mean you shouldn’t try.
And it doesn’t mean it’s a total failure when it folds. This is a basic understanding of the startup process which is missing, I think, in the way people think about the workforce in Iceland. Just because a startup folds doesn’t mean startup culture is a bad idea, or somehow irrelevant to Iceland. Startups are a primary way for us to sustain innovation and our competitiveness internationally.

The people in the companies that fail are still around. We need to take the next step and recognise this fact if we want a vibrant environment where Icelanders have a chance at being at the cutting edge of numerous fields, rather than just a few. The people in those companies that fail will be so much more experienced the next time around.
One of the things that rekindled our interest in this subject was the recent news of the creation of an artificial intelligence transcriber for Parliament. What inspired the need for such a programme?
In a general sense, people’s awareness has been raised significantly over the past five years about the possibilities that various AI technologies harbour. I think people’s interest is piqued. Artificial intelligence has stopped being a Hollywood sci-fi concept and has moved closer to reality, and with that, all sorts of ideas spring forth that people would have dismissed or not understood otherwise. And the possibilities have indeed become quite enormous for AI, and it’s foolish not to at least consider them. But it’s also foolish to jump too quickly. I would, for example, not entrust my life to an AI right now.
In an Icelandic context, what sectors are people most interested in when it comes to the application of artificial intelligence?
There’s quite a range, considering our manpower. For example, at IIIM we are working with a number of startups and more established companies, including Össur, Mint Solutions, Suitme, Rögg, Svarmi, Costner, with many more in the pipeline, targeting a variety of technologies and business solutions. At CADIA we have Jón Guðnason, who has been working very closely with Google to create speech recognition for Icelandic. We have Hannes Högni Vilhjálmsson, who’s been working with [game company] CCP, and so have I in the past. Kamilla Jónsdóttir has been working on a project in aviation. I have been directing a long-running project on artificial general intelligence (AGI) over the past seven years. This has to do with moving away from the current “black box” design of AI, towards more capable, continuously learning agents.
Can you elaborate on what AGI is?
The mechanisms that we use in AI now, typically referred to as “machine learning,” have very little in common with human learning. The learning is prepared in the lab, and then it’s turned off when the product ships. So what you get is a machine that did learn at some point, but now it’s out of the lab and can’t learn anymore. The reason why you can’t “leave the learning on” is that there is no way of ensuring that what the AI might learn in the future is going to be useful or sensible. What we’ve been doing in CADIA over the past five years is coming up with machine designs where we have a better understanding of the direction that the learning will go in, so you could leave the learning on. Such a machine will become safer and safer over time.
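To make the “trained in the lab, frozen at shipping” point concrete, here is a toy sketch in Python. The ToyPerceptron class, its learning_enabled flag, and the training data are inventions for this illustration, not anything from CADIA’s or IIIM’s actual designs; the sketch only shows the difference between a model whose learning is switched off at deployment and one whose learning could stay on.

class ToyPerceptron:
    """A toy binary classifier, used only to illustrate the deployment freeze."""

    def __init__(self, n_features):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.learning_enabled = True  # "the learning is on"

    def predict(self, x):
        score = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if score > 0 else 0

    def observe(self, x, label, rate=0.1):
        # Update weights from one example -- but only while learning is enabled.
        if not self.learning_enabled:
            return  # the conventional shipped product: frozen forever
        error = label - self.predict(x)
        self.weights = [w + rate * error * xi for w, xi in zip(self.weights, x)]
        self.bias += rate * error

# "Prepared in the lab": train on curated examples...
model = ToyPerceptron(n_features=2)
for x, y in [([1.0, 0.2], 1), ([0.1, 1.0], 0)] * 20:
    model.observe(x, y)

# ...then turned off when the product ships. From here on, the machine
# cannot adapt, no matter what it encounters in the field.
model.learning_enabled = False
model.observe([0.9, 0.9], 1)      # silently ignored: the shipped model is frozen
print(model.predict([1.0, 0.2]))  # -> 1, learned in the lab

The continually learning agents Kristinn describes would, in effect, never switch that flag off, which is only safe once you can bound the direction the learning will take.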
How does an AI learn “wrong”?
Say you’re in a self-driving car that has learned, for example, what a stop sign looks like. You’re riding along, and encounter a stop sign with bullet holes in it. You don’t know if the machine is going to understand it’s a stop sign, or confuse it with some other sign. What happens then? You can’t predict it. Even if the machine has been taught very thoroughly, you still can’t foresee all the variations that it might encounter, and therefore your certainty is only as good as the researcher’s ability to think up scenarios beforehand where things could go wrong. How good are we at thinking of things that could go wrong? Not very.
So how would AGI approach a situation where the stop sign is rusted, upside-down and has bullet holes in it?
The way that our machine operates, and the reason we think it will be much safer, is that this machine has the ability to assess the reliability of its own knowledge. If it encounters such a stop sign, it could say, “What the hell is that? I better do what I consider to be the safest manoeuvre in this circumstance.” These machines would still go through the training that regular AI goes through, but their behaviour in response to new things will be much more sensible, predictable and reliable. At present, the only place on the planet you can find prototypes of such a machine is at RU. We’ve done some interesting things with this in the lab, but we still need more time, more testing, and more funding.
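As a rough sketch of that principle (not of the RU prototypes themselves), a machine that assesses the reliability of its own knowledge can be pictured as a classifier that reports a confidence score and defers to a safe fallback manoeuvre whenever that score is low. Everything below, from the threshold to the labels and actions, is hypothetical:

import random

CONFIDENCE_THRESHOLD = 0.9  # illustrative value, not from any real system

def classify_sign(image):
    """Stand-in for a trained vision model returning (label, confidence)."""
    if image == "bullet-riddled stop sign":
        # Damaged input: the model's knowledge is unreliable here.
        return random.choice(["stop", "speed-limit"]), 0.4
    return "stop", 0.97

def choose_action(image):
    label, confidence = classify_sign(image)
    if confidence < CONFIDENCE_THRESHOLD:
        # "What the hell is that?" -- knowledge judged unreliable,
        # so take the safest manoeuvre instead of acting on a shaky guess.
        return "slow down and prepare to stop"
    return {"stop": "brake to a halt",
            "speed-limit": "adjust speed"}.get(label, "slow down and prepare to stop")

print(choose_action("clean stop sign"))           # acts on reliable knowledge
print(choose_action("bullet-riddled stop sign"))  # takes the safe fallback

The point is not the particular threshold, but that the machine’s own estimate of its reliability, rather than a blind guess, drives the choice of action.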
On that subject, how solid are we when it comes to having people in this country who can work in this field? Are we losing our best and brightest?
Yeah, I think there’s been “brain drain” in this country over the past five or six years. And in the years leading up to the financial crash, we also had another kind of brain drain, in that students in computer science were being gobbled up by the banks. That’s not exactly the place you want them to be if you want them to innovate—if you want a vibrant startup community. Closely related to that is that we’ve not seen the necessary increases in funding for the universities.
Looking forward, what are the projects in AI that you are most excited about?
Well, my own, of course! We now have this project on “machines that understand.” I think this is where we have to go to be able to trust machines with more sophisticated tasks, and with our lives. Understanding is the way to go if you want intelligent machines. People might say, “So you want them to take over the world?” I would say: not in any way that software isn’t already taking over the world. If you look at the current application of software, it’s used to reduce cost, to improve efficiency and so on.
In which aspects of our daily lives over the next five years do you think AI is going to come up most?
It’s not like everyone’s going to have a robot in their homes in the next five years. I don’t think it’s going to be that obvious. It’s going to be mostly invisible or partly invisible. A lot of it will be online, and used for things like finding documents, blocking ads, and so forth. These will be increasingly driven by AI. But there are a lot of untapped opportunities for applying artificial intelligence, and we have only begun to scratch the surface.
Why do you think we’re afraid of the idea of intelligent machines that are capable of learning? Where does that come from?
I think it’s built into the human psyche to have this continuous evaluation of “Us and Them”—a self-protection mechanism. If something surprises you, you want to classify it as Friend or Foe as quickly as possible. AI is surprising us now, because we have to think of machines in a way that makes it easy to anthropomorphise them. We’re comparing their behaviour to human behaviour, which we’re very familiar with, and we’ve never had to make that comparison for anything non-human, or even non-animal. And that, I think, raises this red flag. The self-analysis and introspection that we’re now building into our machines will, by all measures so far, make them safer and more predictable, and that is an obvious benefit over the present state of the art. Hopefully it becomes sufficiently obvious to most people over time to drive this fear away.
Words: Paul Fontaine
Illustration: Auður Vala Eggertsdóttir