Arf! Arf!! Grrrr.......
This topic/technology fascinates me. I’ve been following its progress for around 20 years now, and it seems pretty clear to me that it’s become
a train barrelling down a mountainside with no brakes. It’s far beyond the point of no return, and short of a nuclear Armageddon or doomsday virus,
it will not be stopped. I suspect by mid-century we’ll have machines that for all intents and purposes will have enough general intelligence to mimic
humans and operate on the same level. They will be able to carry on intelligent conversations, read our facial expressions and body language well
enough to accurately determine our moods and emotions, and react accordingly. In the form of humanoid robots they will be able to move about the
environment with smooth, continuous motion and be nearly indistinguishable from the rest of us. Since we’re a pretty gullible bunch, machines will
not have to achieve sentience, or feelings, or self-awareness in order for us to form full-blown emotional attachments to them. As long as they can
halfway decently mimic us, and intelligently respond to us, that’s all that’s necessary for them to qualify as good buds, soul mates, sex partners
and, yes, marriage material. We humans are easy. At this stage machines will probably not pose any real threat, since they will still be pretty much
under our control. I think it will likely be in the 2050-2100 period that machine superintelligence, and all that comes with it, finally arrives.
In case the next part sounds vaguely familiar, I’m copy/pasting it from a post I made a while back on another AI thread. No point reinventing the
wheel... It’s at this stage I think we, as humans, will be tested and must be very careful how we proceed.
Once computers can effectively program themselves and reproduce (make other machines) with improvements incorporated into each new generation (machine
evolution), a technological intelligence explosion could conceivably occur and proceed at an exponential rate. At this point human intervention may no
longer be necessary, and may even be a hindrance. Whether through improvements made to initial programming done by humans or via naturally occurring
machine evolution, once superintelligent machines reach a certain level of complexity it may be an inescapable consequence that the properties of
self-awareness, self-preservation and goal-seeking naturally emerge.
From here on out, all bets are off. It’s hard to imagine the extreme and ridiculous lengths a self-aware, goal-seeking, superintelligent system may
go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a
fight for resources and self-preservation, things could get a little spooky. HAL 9000 comes to mind.
A British cyberneticist named Kevin Warwick once said something that kinda stuck with me. He asked,
“How can you reason, how can you bargain, how can you understand what a machine is thinking when it’s thinking in dimensions you can’t conceive of?”
I hope I got that quote right. At any rate, the things I just mentioned aren’t wild speculations on my part. These are very real considerations by
leaders in the field right now. It’s no longer science fiction. This is an inevitable reality, and it’s right around the corner. The fact is, we
simply don’t know where this technology will take us. Maybe it will be a benevolent master, and our lives will become a magical La La Land. Then
again, and more realistically, it may be that we create our very own version of the Frankenstein monster. Either way, it’s going to happen; we
can’t stop it.
Great thread, grandmakdw
PS: I just hope we don’t design these machines to be in our own mold. We don’t want them to be too human-like. We'd be issuing our own death
warrants in that case...