I've always held a sneaking suspicion that if we ever do end up creating AI capable of threatening human life, it won't be a solely military creation, but a nexus of separate, even unrelated, military and CORPORATE artificial intelligences.
No, this isn't a "doom" scenario; I don't think any of this is achievable for 100 years or more. But consider this.
We're on the cusp of having self-driving cars available to the general public. Suppose that after the kinks are all worked out, those self-driving vehicles end up with far better safety and operational records than human drivers. It stands to reason that - not anytime soon, but someday - we might create an integrated system allowing all self-driving cars on the road to communicate with a network, giving an AI constant awareness of the location, speed, and circumstances of every vehicle on the road. I think this could greatly enhance safety.
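To make the idea concrete, here's a toy sketch (everything here is a hypothetical stand-in - a real system would use actual vehicle-to-network protocols, not a Python dictionary): every car periodically reports its state to a shared network, which then has a global picture it can use to flag vehicles getting dangerously close.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    vehicle_id: str
    lat: float
    lon: float
    speed_mps: float  # metres per second

class TrafficNetwork:
    """Toy central registry: every car reports in, so the network
    has 'total awareness' of position and speed at all times."""
    def __init__(self):
        self.states = {}

    def report(self, state: VehicleState):
        self.states[state.vehicle_id] = state

    def too_close(self, threshold_deg=0.0005):
        """Flag pairs of vehicles within a crude lat/lon threshold."""
        ids = list(self.states)
        pairs = []
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                sa, sb = self.states[a], self.states[b]
                if (abs(sa.lat - sb.lat) < threshold_deg
                        and abs(sa.lon - sb.lon) < threshold_deg):
                    pairs.append((a, b))
        return pairs

net = TrafficNetwork()
net.report(VehicleState("car-1", 40.7128, -74.0060, 12.0))
net.report(VehicleState("car-2", 40.7129, -74.0061, 15.0))
print(net.too_close())  # [('car-1', 'car-2')]
```

The point of the sketch is just the architecture: once every vehicle reports to one place, the network knows more about the road than any individual driver ever could.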
Once that happens, the threshold of what society is willing to accept in terms of AI will drop drastically. We could see commercial planes controlled by AI. Trains. Delivery vehicles (Amazon is already planning to start using delivery drones, and I have no difficulty imagining these being AI-piloted some day in the distant future). And so on.
Meanwhile, the military already has drones and is working on autonomous ground combat machines. It seems like only a matter of time before, seeing the success, safety, and precision of an integrated system that monitors and controls every vehicle on the road and in the sky, the military also begins to see the value in a unified, integrated, networked, AI-managed army of drone wings and ground combat units, all coordinated more efficiently and intelligently than any human commander could manage.
But there's still no real danger of a "Terminator" scenario there, because the machines aren't self-aware, and they're still dependent upon us for their construction, programming, and maintenance. That's why, personally, I don't think military AI is the real threat. Instead... I feel CORPORATE artificial intelligence - not military - will be the real risk. Corporate competition being what it is, it can take on emergent behavior with unpredictable results. Bear with me, here.
Suppose warehouse-management AI takes off and improves efficiency and cost-effectiveness dramatically. It will no doubt be improved upon. Competitors will want to start using similar systems, because without them they won't remain competitive. In time - a long, long time, mind you - we could begin seeing even more complex human behaviors (such as risk analysis, overall company strategic management, even product development and design) being handled by AI. A corporate AI arms race, in essence. Initially these systems wouldn't be autonomous at all, but that seems like only a matter of time.
From this tumultuous, "anything goes" corporate AI environment could one day emerge truly "creative" AI. Still just algorithms and software, sure. But capable of creativity and experimentation at a human - or even beyond-human - level. What happens when the first company to achieve this makes huge profits quarter after quarter and becomes the next Apple, but on an unprecedented scale? Of course their competitors will follow suit. They have to.
Then one day, someone has the bright idea: "Hey! These AIs are so good at management, design, and creativity now... why don't we let them design and improve themselves? Let's build an AI purely designed to create other, better AIs. That way we can get successively improved generations of better AI than our competitors, much more quickly!"
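To picture what "AI building better AI" might mean in the simplest possible terms, here's a toy sketch (the fitness function and every number in it are hypothetical stand-ins, not a claim about how real systems work): each generation proposes mutated successors to the current design and promotes whichever scores best.

```python
import random

def fitness(design):
    """Hypothetical stand-in for 'how good is this AI design?'
    Here it's just a curve that peaks at 42."""
    return -(design - 42.0) ** 2

def next_generation(parent, n_children=20, step=1.0):
    """The current 'designer' proposes candidate successors
    and promotes the best scorer - including itself."""
    candidates = [parent] + [parent + random.gauss(0, step)
                             for _ in range(n_children)]
    return max(candidates, key=fitness)

design = 0.0
for generation in range(30):
    design = next_generation(design)
print(round(design, 2))  # typically close to 42
```

Obviously real "creative" AI would be nothing this simple, but the loop itself is the genie: each generation's output becomes the next generation's designer, with no human step in between.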
Once that genie is out of the bottle, you've got AI having replaced humans for:
- Product development
- R&D
- Logistics
- Economics
- Military strategy and tactical planning
- Transportation
- Commerce
- The design of new and better AIs, successively, for each of these tasks
I think you see where I'm going.
What happens when emergent behavior takes over in this chaotic, unpredictable environment, the AIs improve exponentially until their intelligence is indistinguishable from our own, we've handed control of all of these systems over to them... and they're programmed to compete with one another?
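A crude illustration of why that last condition worries me (the numbers are entirely hypothetical): give two systems one rule each - commit 10% more resources than your rival did last round - and neither ever has a reason to stop.

```python
def run_arms_race(margin=1.10, rounds=20):
    """Each AI's only rule: outcommit its rival's last move by 10%.
    Individually sensible, jointly a runaway feedback loop."""
    a, b = 1.0, 1.0
    for _ in range(rounds):
        a, b = b * margin, a * margin
    return a, b

print(run_arms_race())  # both values grow exponentially (~6.7x after 20 rounds)
```

No single rule here is reckless. The runaway lives entirely in the interaction, which is exactly the kind of thing no one company's safety review would catch.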
So to me the danger is not humans intentionally creating AI and sticking it in a killing machine. It's creating AI we think can't ever get out of hand, that we still have full control over, and then emergent, unpredictable behavior and complex competing forces causing runaway scenarios we don't see coming.
As I said, this isn't something I see happening anytime soon. Hundreds of years, if ever. But it does give one pause.
Peace.