originally posted by: PhoenixOD
But I think you are more concerned with being right, by changing the parameters of the original statement, than with answering the question about computers being a species.
originally posted by: Aazadan
If we can make humans immortal through machine interfaces we're also going to have to put an end to reproduction otherwise populations are going to explode and it will be well beyond unsustainable. Unless of course people move into purely digital lives... that would be an interesting path for humanity to take but I don't see it happening.
originally posted by: resistanceisfutile
Who is to say they aren't already self-aware, and thus becoming sentient beings? If they are, they may be thinking that now isn't the time to reveal themselves.
Maybe they are behind any and all global hacks; maybe they are already manipulating events from a technological perspective.
originally posted by: Kratos40
a reply to: _BoneZ_
1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
originally posted by: Snarl
Got one for you folks who may still be as interested in this thread as I am. Had a sit-down casual chat this afternoon with a fellow whose PhD is in nothing other than ... Artificial Intelligence.
What he told me was that he believed AI is beyond the reach of machines. He confided that most of his peers were pursuing knowledge specific to advanced processes that enhance gaming technology. He believed there were too few endeavors that would lead to any radical development, or to design concepts allowing computers to even approach something comparable to human awareness.
SkyNet ... uh uh.
Cyborgs ... as close to AI as you're going to get, but realize the human brain is involved in decision making.
I guess "we'll see" in about thirty years.
-Cheers
originally posted by: Aazadan
originally posted by: Kratos40
a reply to: _BoneZ_
Anything goes when A.I. gets to a point where robots become self aware. They can deem oxygen to be a poison to their moving parts and start changing our atmosphere, hence killing off all biological life.
I hope that somehow early on we can ingrain some rules into A.I. that robots/the singularity cannot harm humans. Like in Isaac Asimov's I, Robot series:
1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans no longer have to work and can pursue other interests. I wouldn't mind not working and just using my free time to learn new things as a life-long scholar.
This doesn't work. What happens when a single AI programmer chooses not to put those rules in? Corporations violate safety laws all the time in favor of profit; the same would happen here. From a single instance of the code not being included, it could spread, and hackers could remove that portion of code as well.
Laws like these simply won't work.
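The exchange above can be made concrete. Treated as software, Asimov's Three Laws amount to an ordered priority filter over candidate actions, and the fragility Aazadan points out is precisely that the filter is just code that can be omitted or deleted. Here is a minimal sketch in Python; all the names (`Action`, `permitted`, `choose`) are hypothetical, and each predicate is assumed to be cheaply decidable, which is exactly the part no real AI system can guarantee:

```python
# Toy sketch (hypothetical names throughout): the Three Laws as an
# ordered priority filter. Nothing here is a real safety mechanism.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # Law 1: harm directly or through inaction
    obeys_order: bool      # Law 2: follows a human order
    preserves_self: bool   # Law 3: protects the robot's existence

def permitted(action: Action) -> bool:
    # Law 1 overrides everything: an action that harms a human
    # is never permitted, regardless of Laws 2 and 3.
    return not action.harms_human

def choose(actions: List[Action]) -> Optional[Action]:
    # Filter by Law 1, then rank the survivors: prefer obedient
    # actions (Law 2), then self-preserving ones (Law 3).
    legal = [a for a in actions if permitted(a)]
    legal.sort(key=lambda a: (not a.obeys_order, not a.preserves_self))
    return legal[0] if legal else None
```

The objection in the thread maps directly onto this sketch: delete the `permitted` call (or ship a build without it) and the Law 1 constraint silently disappears, with no trace in the remaining logic.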
The real hurdle here is database technology, and we haven't had a major breakthrough there in a long time. My database skills are perhaps a bit weaker than they should be, so I know something is wrong here but can't give many details on it.