I will begin my ending by addressing two statements that my opponent has made:
AI, if anything, would be an emergent behavior, but again, without understanding "thought", it's not something we can program.
My opponent is anthropomorphizing here...
Anthropomorphizing. Isn't that really what the Turing test amounts to? It seeks to gauge artificial humanity, and not to truly address the myriad potential presentations artificial intelligence might possess. The very notion limits our definition of intelligence with a mirror: a thing must act like us, and speak to us, to possess intelligence? To me this is rather pedestrian and vain.
Even as I write these words, as if by serendipity, I just found an article, posted a scant 39 minutes ago, announcing that Ray Kurzweil has accepted a position as Director of Engineering at Google. Kurzweil is well-known for authoring several books about the future of technology and “the singularity,” a period when he says humans will merge with intelligent machines. He believes we have made discernible progress with artificial intelligence but have much further to go.
Where some might see this glass as half empty, I see it as half full. Could it be that the reality here is that where we have much further to go is
not in the realm of creating AI, but, rather, in teaching ourselves to see it, or helping it understand how to see us?
My opponent has suggested that the Internet cannot qualify as a life form because it requires (for now) human intervention. I offer that
intelligence can exist within a symbiotic relationship. After all, we consider ourselves intelligent, yet without the lowly mitochondrion, we would not exist. One could think of the Internet as a being, with our infrastructure serving as its mitochondria and we, ourselves, playing the role of its DNA. Such an intelligence would have no more cause, or ability, to communicate with us than we have of speaking to our own DNA. The best we can do with DNA is to analyze it and try to compare it.
Exactly as the Internet currently does to each and every one of us right now.
My opponent offered a phrase, "extemporal measurement". It is an impressive pairing of words, but one which seems to lack inherent meaning. If I am
correct, my opponent means that AI would have to surprise us by doing something that we don't expect it to do... that it might behave outside of a
predetermined set of parameters. I argue that this is not necessarily a trait that intelligence would possess, or one that we would be able to measure even if we wished to.
After all, anyone who works with computers, even casually, will quickly tell you that they often seem to have a mind of their own. They often do things we do not expect them to. We attribute these bumps and hiccups to any variety of causes, from bad programming, to system conflicts, to viruses, to other humans using maliciously coded backdoors.
In essence, all of our interpretations, perceptions, and opinions about computers come with a built-in bias of anthropomorphizing. Even if there were a ghost in the machine, screaming to reach out to us, who would notice it? Who among us would not simply write it off as yet another Windows error, bad download, or Google spider misbehaving... yet again?
If one were to take all of the tech talk out of it, and write a simple description of the Internet, from an objective standpoint, it would, I think,
read like a description of an organism. A reader might think "cell", "bacteria", "virus", or even "insect colony" based upon such a
description. Networks, biological systems, ecosystems, and even the human mind
are all applicable analogs for Cyberspace.
Given these abstract similarities, what are the implications? Can we possibly begin to list or understand them?
Emergent AI is, I think, not just inevitable; I think it currently exists. Obviously it is in its infancy, and I doubt it is self-aware. But it is here. Just as with a human infant, we can all see and feel it. Its very presence has changed us and how we behave. We are its parents.
Kurzweil, one of the dreamers who recognizes this potential, himself says that we've made progress but have much further to go. He sounds like a parent to me. A parent of an intelligent and recognized life. A life that is currently new, and either unable to talk to us or uninterested in doing so.
One might argue that an infant is not intelligent. After all, a baby cannot engage us in conversation. Yet infants really do communicate. Without words, they tell us exactly what they need and when they need it.
Just like our computers do.
I would like to thank ATS and Druid42 for this fun and provocative debate!