system or biological process found in nature, and hence brings little of
interest to AI practitioners.
As most of my readers will probably agree, living beings don’t have
infinite time and resources to perceive, make plans, make decisions, or
act in the world. Most of the time, if not all the time, humans -- like other
animals -- have insufficient information and limited time to acquire the
information they lack. To address this, thinking minds generate
assumptions on which to base their decisions; assumptions that are
generated for the sole purpose of dealing with this lack of information.
As a result, *all* decisions by thinking minds are based on assumptions
that have some associated level of uncertainty. By removing time from its
theoretical analysis of computation and from its software development practices,
computer science has made the majority of its findings irrelevant to
research in artificial intelligence. Why? Because intelligence is essentially
not needed if time is removed: Ignoring time means we don’t care how
much time a computation takes, which means we can take all the time in
the world -- even infinite amounts. Having infinite amounts of time means
that for any problem a complete search of all possibilities and outcomes
can be made, essentially rendering intelligence unnecessary and irrelevant,
since one fundamental reason why intelligence exists in the first place is
that we have limited time.
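To make this point concrete, here is a minimal sketch (my own illustration, not from the article; the option/evaluate framing and function names are hypothetical): with no time limit, blindly enumerating every possibility already yields the best answer, while a deadline forces the system to stop early and commit on incomplete information -- that is, to act on an assumption.

```python
import time

def exhaustive_best(options, evaluate):
    """With unlimited time, brute force suffices: enumerate every option
    and keep the best. No intelligence is required."""
    return max(options, key=evaluate)

def deadline_best(options, evaluate, deadline_seconds):
    """With limited time, stop when the deadline hits and commit to the
    best option seen so far -- a decision resting on the assumption that
    the unexamined options are not much better."""
    start = time.monotonic()
    best, best_score = None, float("-inf")
    for option in options:
        if time.monotonic() - start > deadline_seconds:
            break  # out of time: act on what we know
        score = evaluate(option)
        if score > best_score:
            best, best_score = option, score
    return best
```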
Another reason why artificial general intelligence has difficulty living
comfortably within the confines of computer science has to do with focus.
As a basis for studying complex systems, computer science brings to the
table some very powerful tools, most notably the-executable-program-as-
imitation: simulation. Simulation is a powerful way to study highly complex
phenomena, such as ecosystems, societies, economics, biology, weather,
and thought. Computer science is in many ways “applied mathematics”,
and should therefore have an easy time branching out and “owning” some
or all of these fields. Just like the study and creation of “artificial horses”
(read: the car industry) naturally belongs to the fields of mechanical
engineering and physics, artificial intelligence seems to naturally belong
in the fields of computer and information sciences. This is perhaps even
more true for AI than other complex natural phenomena, since all evidence
brought out so far seems to indicate that thought is computation. However,
rather than reaching out and touching virtually all other fields of study,
as mathematics has done, computer science departments in universities all
over the world have become narrowly focused, reducing the field to “the
study of algorithms” or something equivalently narrow, thus defining it by
a principle of exclusion. This of course shuts out a large set of phenomena
that are not amenable to such formalization at present, yet could benefit
greatly from a computational approach; systems research is a prime example.
Cognitive science, the study of complex natural systems implementing
intelligence, and artificial intelligence, represent two sides of the same
coin: one studies intelligence in the wild, the other tries to build intelligent
systems in the lab. A focus on algorithms, to stay with that example,
makes it difficult to fit cognitive science within the scope of computer
science. Yet there is strong reason to believe that over the next few
decades interactions and inspirations between these two fields are likely
to accelerate progress on our path towards a deeper understanding of
intelligence: Computer science could be the natural unifying foundation
for these interactions. But with too narrow a focus, the makeup of academic
departments around the globe may prevent a close enough marriage
to really produce the “mind children” that could be the fruits of such
collaboration.
As a deep thinker reading these words might have already figured, the
rift between computer science and artificial intelligence discussed here
is not a problem in principle, but rather a result of historical accidents:
There is no reason why computer science could not broaden its horizon to
encompass numerous subjects of study not typically found there; after all,
if mathematics -- the “philosophy of size” -- can flourish within computer
science, surely other, more “hard science” topics could as well: biology,
sociology, ecology, economics, psychology. If the particular path computer science has
taken in the past is mainly due to history, and not fundamental differences
between it and AI, then perhaps one good idea could help inch it outward,
enabling it to encompass more, rather than less, of the complex world
around us. I can think of a strong potential candidate idea for this purpose:
the concept of emergence -- self-generative, self-organizing systems. With
a history focusing on manual labour -- i.e. the hand-coding of software
-- computer science has ignored an important phenomenon of nature,
namely, that many systems start from small “seeds” and subsequently
grow into systems of vastly greater complexity. Biological systems are
a literal case in point. By studying such systems from a computational
perspective at full force, I predict that computer science could not only
advance those fields but, more importantly and in the big picture, help science
overcome one of the biggest hurdles towards a deeper understanding of
a host of phenomena that at present seem hopelessly out of reach,
namely emergent systems. One such system is general intelligence.
Studying the principles of emergence from a computational perspective
might be an obvious place to start expanding the field to a size that seems
to fit its nature.
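As a small illustration of what “growth from a small seed” can mean computationally, here is a minimal sketch of Conway’s Game of Life, a standard textbook example of emergence (the example and code are my own, not the article’s): a five-cell seed and two simple local rules produce a structure that travels across the grid indefinitely, behaviour that nothing in the rules mentions explicitly.

```python
from collections import Counter

def step(live_cells):
    """One update of Conway's Game of Life on an unbounded grid.
    Simple local rules applied to a small seed yield global structure
    that was never explicitly programmed."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A five-cell "seed" (a glider) that keeps propagating across the grid.
state = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(20):
    state = step(state)
print(sorted(state))
```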