I replied to someone on a similar thread recently, but I doubt anyone will ever read it. You see, I have a bad habit of replying to old, expired
threads that no one else has posted to in a while. Stupid, I know, but that’s just the way it is. Anyway, the person I replied to seemed to have
concerns over where all this exploding technology is leading us. Since this thread seems to be in a similar vein, I’ll simply be repeating here much
of what I posted in the other one (that no one will ever read). What I’m about to express is simply my own personal thoughts/feelings about the
subject. So, take it for what it’s worth. In my estimation that would be around 2 cents...
I believe this thread accurately and articulately expresses many of the concerns, and the uneasiness, most of us are feeling these days. As the
clock ticks away, it’s now just beginning to dawn on most of us that time is getting short, and that humanity is quickly approaching a crossroads. In
our own lifetime we may become witness to what can only be described as a new branch sprouting off the evolutionary tree. Our species, as we know it,
may soon (my guess, 100-200 years) be viewed in much the same way as we currently view chimpanzees and the great apes. This would truly be a
monumental change and major paradigm shift for mankind. We may be on the verge of witnessing a truly life-changing event; bigger than any other event
in human history. Our entire identity as human beings, and the role we play, is about to come under question and be challenged.
The anxiety comes from all the uncertainty we’re feeling over where these technologies (AI & related) could ultimately lead us. Note, I said,
“where these technologies could ultimately lead us”, and not, “where our engineering/development expertise might lead these technologies”. It
could be the cart is leading the horse here. We’re obviously hell-bent on the development of “thinking machines”, and for better or worse it’s
gonna happen. That train’s left the station, and it ain’t turnin' back. My personal concern is that our insatiable, compulsive desire to create an
“intelligence” separate from our own may someday soon result in an entity far beyond our wisdom to control. This technology has the
potential to eventually take on a life of its own and become an autonomous competitor for resources. I wouldn’t know, but I’m guessing
we could reach that point within the next 75-100 years (the blink of an eye). Now, in say 100 years, can you imagine the outcome of a confrontation
with an incredibly advanced, goal-seeking machine with a highly developed sense of self-preservation and 10,000 times your intelligence? Can you
imagine the lengths such a machine might go to in order to satisfy its desired mission/goals? Goals that may be counter to your own, and changing
radically by the minute? I can, and it ain’t pretty.
In my view, sentience is not necessarily a requirement for machine “intelligence”. To be sentient is to have “feelings”; i.e., a sense of the ethical and moral:
right vs. wrong, good vs. bad, love vs. hate, etc... It’s what we humans call our conscience, and it’s a subjective, qualitative perspective; strictly a
human invention/concept. Consciousness, however, is another ball-game. It’s the state of being aware of one’s internal/external environment via
sensory input (information). While machines may or may not ever achieve human-like sentience, they will certainly develop a highly tuned and
hypersensitive state of consciousness. They will have a much greater awareness of their environment and surroundings than humans do. We humans filter
out most of the events/information taking place all around us.
Pure speculation tells me that within 50 years machines will become as smart/smarter than humans. Maybe sooner. Even as they take our jobs away, we
will still irresistibly form “personal” relationships with them. They will become our friends and lovers (Hmmmm...), and will work and play
alongside us. These machines will not be sentient, but who cares? They will be good enough at mimicking our sentient/emotional behavior to satisfy our
creature needs. For the most part, humans are naive and easily fooled. Hell, some people get attached to their pet rocks. These machines will carry on
very natural conversations with us, give us good advice at times, sometimes even argue with us, and will provide a strong shoulder to cry on when
needed, as well. That already sounds better than most marriages today. Around the turn of the century, though, I can imagine things beginning to get a
little dicey. From this point on, all bets are off. It could be a truly wondrous time to live in, or equally likely it could become a torturous Hell
on Earth, as the machines begin to impose their “will”. And the funniest part of it all is, there won’t be a damned thing we can do about it.
I may sound alarmist, but I don’t think I am, and certainly don’t mean to be. It’s not like I dwell on this stuff, but I can read the writing on
the wall. I’m truly fascinated by technological development, and even work as a system software developer/engineer. AI would be an amazing field to
work in - it would present the ultimate challenge. But, like all other technological developments, it’s a double-edged sword. I just hope we have
the wisdom to control it when our date with destiny comes. And it will...
Great thread,
Hefficide.
Rock on...
PS: I almost forgot about AI and the war machine. The military (DARPA) is currently putting a lot of effort and bucks into developing autonomous killing
machines. Their goal is to eliminate (as much as possible) the need for human intervention or presence on the battlefield. The plan would willfully
hand over to machines the authority to decide who will live and who will die. Spooky, huh? Needless to say, there’s a lot of heated debate going on
right now over this very issue.
It’s no longer something we can comfortably think of as a remote possibility in some distant future. It seems that future has arrived, and it’s
happening right now before our very eyes.