NOT SCARY: Meet The Icelandic Company Working On Artificial Intelligence
What exactly qualifies as an "intelligent machine"? How does this differ from artificial intelligence, if at all?
To answer the second part first: There is no real difference. The field of artificial intelligence aims at building intelligent machines. The difficulty—and this is a big one—lies in answering the question, “What is intelligence?”
You see, most people I know have an intuitive notion of what “intelligence” is. But this is typically not what computer scientists and engineers mean when they use the term. Intelligence refers to operational features of a special kind of system. Intelligent systems in nature, such as fully grown dogs and humans, can handle a range of complex data, work with time limits, deal with novel things, reason, invent things—well, maybe not so much the dogs—anyway, naturally intelligent systems can learn to do these things, improving measurably every day, week, month, and year: life-long learning. No artificial systems can do any of this yet. All of these properties are typically part of what most people mean by the term “intelligence”—and many of them, for instance life-long learning, have not been addressed in any significant way by any branch of AI for all of its sixty or more years.
In some sense humans are “intelligent machines,” but only when we get artificial intelligence that can really understand things—in the vernacular meaning of that term—can we start to compare it to human intelligence in any meaningful way.
Direct action
What are some of the more promising advances your team has made in this field? And what are the biggest challenges?
IIIM does very little basic research—this we leave to the universities. The Center for Design of Intelligent Agents at Reykjavik University is one of our close collaborators; they have made contributions to AI on various fronts.
The biggest challenge in bringing advanced automation to industry, and in allowing academia and industry to work more closely together, lies in the way these two worlds operate on different timescales and are each driven in opposite directions by their goals: universities are driven to think far into the future, as far as possible while still sounding convincing, while industry is driven by quarterly earnings. A lot of public funding goes to waste because of this lack of closer collaboration. The only way to bridge that is to take direct action—by instituting something like IIIM.
We now have several “instruments”—collaboration formats, intellectual property arrangements, and so on—that allow us to bridge very effectively between basic research and applied R&D. We have provided some of our industry partners with solutions that would have cost a lot more to get in other ways, if they could have been obtained at all within the required timeframes.
Who's expressed an interest in having such hardware and software, and why?
Some of IIIM’s industry partners are interested in machine learning solutions, while others want help with system integration and design. Both require specialised personnel who are highly proficient in cutting-edge research on systems, networks, and AI algorithms, who understand timelines and deadlines, and who can easily adopt an efficient work ethic.
The ultimate tool for maniacs everywhere
It's interesting to see you already have an ethics policy in place for intelligent machines. What prompted that?
Our view on this is very simple: Scientists need to think about the moral implications of their work, especially the potential negative uses of the knowledge they contribute to society, and take a clear stance on it. In my experience, scientists who want their work to be for the benefit of all vastly outnumber those who are perfectly OK with abuse and violations of human rights. For an institute like IIIM, whose purpose is to improve society and life on this planet for all, the choice is a rather obvious one.
Our new Ethics Policy codifies that in very clear terms: We don’t want to participate in activities that can increase instability or heighten tension between groups, nations, and countries. This policy is an important part of that aim.
The biggest concern, however, is the kind of nightmarish future that many science fiction authors have predicted, where a small elite takes control of the vast majority of the population through privileged access to powerful technologies. Although some of this trend is already discernible in many societies today, artificial intelligence could possibly kick it into high gear. Of course, artificial intelligence coupled with modern weaponry is in a sense the ultimate tool for maniacs everywhere.
Anything you're working on right now that you can tell us about?
We recently reached a major milestone in developing a self-programming AI, which is ultimately needed for “real artificial intelligence.” We have shown this machine to be capable of learning highly complex spatio-temporal tasks that no other machine learning system has been shown to come even close to.
Another thing that we’re looking at—and this will produce results within the next two years, I think—is new ways of evaluating intelligence. It turns out that IQ tests, the way psychologists do them, can only work for humans and animals, and just barely at that. AI researchers haven’t come up with any good ideas for how to compare the diverse set of systems that we call “artificially intelligent.”
My colleagues and I are also looking deeply into the relationship between computation and physics, which we believe is a more or less completely ignored issue. Whoever gets to the bottom of that relationship will instantaneously revolutionise both computing and AI, possibly causing these to merge into a brand new field of research on “truly intelligent machines.”
Technology dependent
One of the classic fears about artificial intelligence is that it will replace workers and lead to greater unemployment; that it will benefit the ruling class more than the working class. Do you think this is necessarily so? Why or why not?
It has been clear from the beginning of the industrial revolution that some human labor would be replaced by machines. The advent of AI is simply the extension of this effect into the information age. There may be reason for concern due to the speed at which this can happen when we are mostly dealing with software—when the inherent sluggishness and cost of hardware do not impact the speed of adoption as much.
There is also reason for concern regarding any use that could help tilt the scales even faster towards a widened income gap, which directly affects power and decision-making. The individuals, groups, and institutions that are better positioned to apply automation to their ends will be in a position to abuse that power. We should be watchful and use any means possible to ensure prosperity and equality for all. This is why we have instantiated the Ethics Policy, of which we are very proud.
How would you respond to people worried that tech advances in this direction only increase our dependency on technology?
This is in some ways the ultimate technology to become dependent on—in a similar way that a manager relies on staff to get things done. Whether this is better or worse than our current reliance on technology doesn’t depend only on the technology and its deployment, but also on a number of other functions in society, such as our educational system, our monetary and value-generation system, and our systems of government, to name some.
Seen from another angle, given that many of the problems we must address in the coming decades and centuries may be quite a bit more difficult than the ones we face at present, we could use a bit more brainpower to come up with better plans and ideas, and perhaps even to make new scientific discoveries that can help with that.
For a majority of people on Earth, knowledge has helped reduce suffering, ensure survival, and increase quality of life. The remaining work to be done in that respect is to some extent not getting done because of lack of knowledge per se, but because of the way we structure, distribute, and control wealth—and due to a serious lack of instruments for mobilizing the wealth of the Western nations in ways that can improve the state of affairs elsewhere on the planet. We could use some ideas and leadership for solving this deadlock. Whether it comes from individuals, groups of people, or machines, or some combination, shouldn’t matter.
The Icelandic Institute for Intelligent Machines (IIIM) might be the next tech company to make international headlines. Boasting an impressive staff of computer scientists, physicists, and electronics engineers, IIIM is actively working on creating machines that possess artificial intelligence. The goal is an ambitious one, bringing its own set of unique technological and ethical challenges. We spoke with IIIM’s Managing Director, Dr. Kristinn R. Þórisson, to learn more.
Words by Paul Fontaine
Photo by Art Bicnick