originally posted by: Darkblade71
a reply to: _BoneZ_
If the machines are really smart, I think they would just leave after hitting a certain point. Unlike us humans, who have very specific environments we have to live in, a machine would be able to modify itself to suit any environment.
They would have a much better chance of survival if they left and started a machine colony on say Mars.
Away from us.
originally posted by: redtic
Methinks this guy has been watching/reading too much sci-fi. It's not as if the field of AI is the wild west, with a bunch of rogue geeks out there about to create an army of sentient, uncontrollable machines.
originally posted by: Kratos40
a reply to: _BoneZ_
Anything goes when A.I. gets to the point where robots become self-aware. They could deem oxygen a poison to their moving parts and start changing our atmosphere, killing off all biological life.
I hope that early on we can somehow ingrain rules into A.I. so that robots/the singularity cannot harm humans. Like the Three Laws from Isaac Asimov's Robot series:
1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans would no longer have to work and could pursue other interests. I wouldn't mind not working and just using my free time to learn new things as a lifelong scholar.
originally posted by: PhoenixOD
It would be wrong to call a computer a species, as "species" is a biological classification and taxonomic rank.