I've been working on a custom neural net framework for the last few weeks. The nets are trained via genetic algorithms, which use "DNA code" for the breeding and mutation processes. It's basically survival of the fittest for AI. You can blame me when self-aware robots start running around.
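To give a rough idea of what I mean by "DNA code", here's a minimal sketch in Python. It's not my framework's actual code; it assumes the genome is just a flat list of connection weights, and all the names are made up for illustration:

    import random

    def crossover(dna_a, dna_b):
        # Breed two genomes: splice the parents' weight lists at a random point.
        # Assumes both genomes have the same length (at least 2 genes).
        cut = random.randrange(1, len(dna_a))
        return dna_a[:cut] + dna_b[cut:]

    def mutate(dna, rate=0.05, scale=0.5):
        # Randomly perturb a fraction of the genes (connection weights).
        return [g + random.gauss(0, scale) if random.random() < rate else g
                for g in dna]

    def next_generation(population, fitness, elite=2):
        # Survival of the fittest: keep the top performers unchanged,
        # then breed the rest of the new population from the top half.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:max(elite, len(ranked) // 2)]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(population) - elite)]
        return ranked[:elite] + children

Each generation you score every genome with a fitness function, breed a new population, and repeat until performance stops improving.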
Seriously though, in the process of building this system I've been thinking a lot about the viability of creating some sort of sentient or self-aware machine. I used to think it would be a relatively simple feat: we just needed fast enough computers. Then I really got to thinking about it and realized it's not that simple.
First of all, there are some important distinctions to make when we talk about "self-aware AI" versus "self-learning AI". They are two completely different things. Self-learning AI does not have to be self-aware. It's rather simple to create a program which can learn and get better at things without help from a programmer.
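To make that concrete, here's a hypothetical toy example (not from my framework): a program that gets better at a choice problem purely from trial-and-error feedback, with nobody telling it the right answer. The payout numbers are invented for illustration:

    import random

    # A toy "self-learning" agent: it improves its choices from reward alone.
    # The hidden payout probabilities stand in for any problem with feedback.
    payouts = [0.2, 0.5, 0.8]          # unknown to the agent
    estimates = [0.0, 0.0, 0.0]
    counts = [0, 0, 0]

    for step in range(1000):
        # Mostly pick the option that currently looks best, sometimes explore.
        if random.random() < 0.1:
            choice = random.randrange(3)
        else:
            choice = max(range(3), key=lambda i: estimates[i])
        reward = 1.0 if random.random() < payouts[choice] else 0.0
        counts[choice] += 1
        # Update the running average estimate for the chosen option.
        estimates[choice] += (reward - estimates[choice]) / counts[choice]

    print(estimates)  # converges toward the true payouts over time

Nobody programs in the answer, yet the program reliably "learns" which option is best. That looks adaptive, but it's nothing like awareness.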
But that's far from sentience or self-awareness; it's just the illusion of self-awareness, because the program appears to learn and adapt to new things. As I was programming my neural network system, I realized that anything I create on my computer will never become self-aware. It will adapt and change, but it will never do anything more.
At the end of the day it takes input and gives output based on a linear deterministic system; it all boils down to a set of calculations. Now some of you may argue that consciousness is nothing but a set of calculations, and that may be true, but it's not a set of deterministic linear calculations.
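Strip away the abstraction and that's all a trained network is. For example, a bare-bones deterministic forward pass might look like this (a sketch, assuming a simple fully-connected net with tanh activations; not my framework's actual code):

    import math

    def forward(inputs, layers):
        # One deterministic pass through a feedforward net.
        # `layers` is a list of (weights, biases) pairs; the same inputs
        # and weights always yield exactly the same outputs.
        activations = inputs
        for weights, biases in layers:
            activations = [
                math.tanh(sum(w * a for w, a in zip(row, activations)) + b)
                for row, b in zip(weights, biases)
            ]
        return activations

    # A tiny 2-input, 2-hidden, 1-output net with fixed example weights.
    net = [
        ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1]),   # hidden layer
        ([[1.0, -1.0]], [0.0]),                     # output layer
    ]
    print(forward([1.0, 0.0], net))  # same call, same result, every time

Run it a million times and you get the same answer a million times. There's no room in there for anything to "wake up".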
Consciousness, at the very least, is powered by non-linear quantum calculations carried out on immensely powerful organic machinery (the brain does have quantum components). So what this means, in my mind, is that we'll never develop truly self-aware AI until we develop very powerful quantum computers.
True consciousness is not just a set of calculations being carried out by your brain; it's something more, something which cannot really be quantified.
As long as we stick with self-learning AI and not self-aware AI, we will be fine. The problem arises when they become self-aware; then there's no telling what they will do.
See, self-learning AI still learns what we tell it to learn, still solves what we tell it to solve, and still switches off when we hit the button. Self-aware AI would learn whatever it wanted to learn, solve whatever problem it wanted to solve, and there's no guarantee it would even switch off when we hit the button.
Really, the problem is making sure none of our AI algorithms become self-aware in the first place. Self-learning AI systems clearly have the capacity to adapt, and could conceivably adapt so much that they become self-aware through a type of natural evolutionary process. In fact, I don't see any other way that self-aware AI will come about.