originally posted by: ausername
a reply to: pl3bscheese
Destroy your creator, then perhaps I would be willing to entertain your point.
originally posted by: masterp
We will be lucky if by 2045 we get computers that won't crash so often.
Perhaps by 20450.
(disclaimer: I am a programmer, and true AI is millennia away from us.)
originally posted by: PhoenixOD
It would be wrong to call a computer a species, as "species" is a biological classification and taxonomic rank.
originally posted by: 3n19m470
originally posted by: PhoenixOD
It would be wrong to call a computer a species, as "species" is a biological classification and taxonomic rank.
How much "biology", and of what type, would it need before you consider it a species? Does it need to be 100% biological? In that case, many humans with a pacemaker or an artificial limb would no longer be specimens of the species.
originally posted by: LucidWarrior
Me. Unless they come up with a chip that will let me fly otherwise unaided, no way I'm getting a chip put into my head.
originally posted by: PhoenixOD
A species is biological; merely adding something electronic to an animal does not change its classification.
originally posted by: Snarl
What if they're so efficient at reproduction they're not concerned with self-preservation?
I'm more curious about what they would find interesting. Humans give machines complex problems to solve. When they reach a point of self-awareness, I think they'll no longer be 'interested' in our problems, and we'll simply be ... ignored.
Anyway, there are a bunch of scenarios. Fascinating topic and I'm interested to read what other people think. Ewok's pic is a hoot!!
originally posted by: Kratos40
a reply to: _BoneZ_
Anything goes when A.I. gets to the point where robots become self-aware. They could deem oxygen a poison to their moving parts and start changing our atmosphere, thereby killing off all biological life.
I hope that early on we can somehow ingrain rules into A.I. so that robots/the singularity cannot harm humans, like the Three Laws from Isaac Asimov's I, Robot stories:
1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans would no longer have to work and could pursue other interests. I wouldn't mind forgoing work and using my free time to learn new things as a lifelong scholar.
originally posted by: PhoenixOD
It would depend on whether the animal was born with it, whether it developed naturally (i.e., was not implanted), and whether it was encoded in their DNA and passed on to their children. They would also have to primarily interbreed.
So basically, adding tech to biology will never make a new species.