He explains one of the central goals of IIIM: "What does that chasm mean for society? It means that return on investment in academic research takes longer than it should, because the knowledge takes too long to find its way into applications. A key operational goal of IIIM is to bring academic results more quickly to industry, and to enable market opportunities to be more visible to academic researchers."
Founded in 2009, the IIIM works with a dozen companies on a variety of projects. One such project is with Össur, where machine learning helps prosthetics adapt better to their users. Another is with the National Commissioner of the Icelandic Police, developing software to run on new mobile devices for patrolling officers. Kristinn's hope is that the success of these projects, which has already been demonstrated, will clearly underline the service the institute is rendering to society. The ultimate aim? For the institute to become self-sufficient. Some may say this is a tall order, given that it receives only about half the funding of similar initiatives abroad. However, IIIM has received ISK 400 million (USD 3.1 million) from the Icelandic Centre for Research, and its area of AI and simulation technologies is one of three Centers of Excellence programs running from 2009 to 2015.
FRIEND OR FRANKENSTEIN?
Given his time in the field, how does Kristinn think the approaches to AI have developed? "By and large, their nature is fundamentally the same as it was 50 years ago. People are using the same methodologies. AI systems are advanced tools. There's been a lot of talk recently about autonomy, which is really what's at the heart of the dystopian future some have painted, where machines take over of their own accord. At RU we have some interesting new developments on the subject of truly self-programming systems, something that would be necessary for such a Frankensteinian future. But this work is at a very, very early stage. For the pessimists, in the worst scenarios autonomous robots take over humanity and destroy it. This, however, is still too hypothetical to be entertained as anything but science fiction, and we are not in that camp."
The AI future the IIIM is helping develop does not reflect the images of machine mania that blockbusters often portray in splashy color on the big screen. Kristinn views the advancement of AI in a different light. "In more realistic visions of the future, machines take over our jobs. But that is just the industrial revolution extending its reach into the information age, and should not come as a surprise. Automation takes over because of the structure we've erected for people to make decisions about how they pay their bills and taxes, and the methods used to decide how to apply those: in a market economy, automation will always be preferred over manual labor, when applicable and available. AI today is like Henry Ford's conveyor belt. It's a tool. Granted, you can do a hell of a lot more with an 'information conveyor belt' than a physical conveyor belt, but it's still a conveyor belt obeying the same principles for its application, and abuse."
MORALS AND MACHINES
Perhaps to address the concern about a potential Frankensteinian future, and suspicions about how AI will be developed and used, IIIM is the very first research and development group to reject the development of technologies intended for military operations and to issue an ethics policy, which it released in late August 2015. What has been the response to this? Kristinn explains: "From those who have already taken steps and thought about these issues, it's been overwhelmingly positive. From the rest of the community, not much feedback these first few weeks. We are getting some press, but no deluge of requests for commentary or interviews. Which is a bit surprising, given how novel it is." The IIIM's Ethics Policy for Peaceful R&D (research