
By 2045 "The Top Species Will No Longer Be Humans"

posted on Jul, 5 2014 @ 12:05 PM

originally posted by: Darkblade71
a reply to: _BoneZ_

If the machines are really smart, I think they would just leave after hitting a certain point. Unlike us humans, who have very specific environments we have to live in, a machine would be able to modify itself to suit any environment.

They would have a much better chance of survival if they left and started a machine colony on, say, Mars.
Away from us.



They may be away from us, but they would be with each other. If their programming lacks a moral compass, then they may be as dangerous to each other as humans can be toward other humans, or maybe even worse, especially if they act only on the instinct to survive and nothing else.

Humanity may also have a deep-down instinct to survive, but we also have layers of morality that can keep that instinct in check, for the most part.




posted on Jul, 5 2014 @ 12:11 PM
EMP, kill switches, loss of electricity: those were good points.


Machines would be able to reproduce pretty fast, though. They could build themselves in a factory in an hour versus nine months for a human baby.
edit on 5-7-2014 by WP4YT because: (no reason given)




posted on Jul, 5 2014 @ 12:20 PM
Hmmm, let's throw climate change and energy decline out of the equation.

Sure, why not? Is it not natural for life to beget life? We evolved to create our successor. The only ones who survive are the ones who integrate. I can dig it.

Five years ago, I saw these trend lines and tried to fight them. Why? It caused me a lot of stress. Why should I have loyalty to a species that doesn't seem to respect itself anymore?

Seeing as I don't put CC or ED out of the equation in my big-picture analysis, I'm all for an intelligent life form making it out of the big crunch. If we can't get our poo together and figure out our own problems, let's at least design a successor life form that will supersede our limitations and keep on.



posted on Jul, 5 2014 @ 12:32 PM

originally posted by: redtic
Methinks this guy has been watching/reading too much sci-fi. It's not as if the field of AI is the wild wild west and there's a bunch of rogue geeks out there who are going to create an army of sentient, uncontrollable machines.


Ever heard of DARPA?
edit on 5-7-2014 by rustyclutch because: (no reason given)



posted on Jul, 5 2014 @ 12:37 PM
a reply to: _BoneZ_

Anything goes when A.I. gets to the point where robots become self-aware. They could deem oxygen a poison to their moving parts and start changing our atmosphere, thereby killing off all biological life.
I hope that, somehow, early on we can ingrain rules into A.I. so that robots/the singularity cannot harm humans, like those in Isaac Asimov's I, Robot series:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans would no longer have to work and could pursue other interests. I wouldn't mind not working and just using my free time to learn new things as a lifelong scholar.
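
To make that hierarchy concrete, here is a minimal sketch of the three laws as a strictly ordered filter, where each law chooses only among what the higher-priority laws already permit. Everything in it is hypothetical: the predicates are toy stand-ins, since no one actually knows how to implement a real "harm" check.

```python
# Hypothetical sketch only: Asimov's three laws as a lexicographic filter
# over candidate actions. Every predicate is a toy stand-in; deciding what
# really counts as "harm" is the unsolved part of the whole idea.

def harms_human(action: str) -> bool:
    # First Law stand-in: a real robot would need a causal model of the world.
    return "harm" in action

def disobeys_order(action: str, orders: list[str]) -> bool:
    # Second Law stand-in: does the action ignore a standing human order?
    return bool(orders) and action not in orders

def endangers_self(action: str) -> bool:
    # Third Law stand-in: does the action risk the robot's own existence?
    return "self-destruct" in action

def choose_action(candidates: list[str], orders: list[str]) -> str | None:
    """Each law filters only within what the higher-priority laws permit."""
    safe = [a for a in candidates if not harms_human(a)]                   # Law 1
    obedient = [a for a in safe if not disobeys_order(a, orders)] or safe  # Law 2
    prudent = [a for a in obedient if not endangers_self(a)] or obedient   # Law 3
    return prudent[0] if prudent else None

# With no standing orders, harming a human is vetoed by the First Law and
# self-destruction is avoided by the Third, so the harmless option wins.
print(choose_action(["harm intruder", "self-destruct", "guard door"], orders=[]))
# -> guard door
```

Even in this toy form, the catch is obvious: the ordering logic is trivial, but the predicates carry the entire weight, and that is exactly where the "ingraining" would have to happen.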

edit on Sat, 05 Jul 2014 12:39:18 -0500 by Kratos40 because: grammar



posted on Jul, 5 2014 @ 01:22 PM
a reply to: _BoneZ_

For me, the ideal future would be one in which we both preserve our humanity, or the best elements of it, and evolve through technological cybernetic fusion into a race that is both biological and artificial, then perhaps entirely artificial: no hunger, no thirst, no tiredness, and life spans that can be essentially infinite, allowing us to truly explore the universe. But if we then forget our origin, we will lose that human legacy.
Yet can a machine ever be conscious, or can it merely simulate the synaptic pathways, like a glorified, ultra-complex Newton's cradle?
And what of the soul? Will we ever be able to make a quantum interface like the human brain?



posted on Jul, 5 2014 @ 01:43 PM
I belong in the "Merger" camp. I think that it IS the next step in our evolution. We already have computerized prosthetics, pacemakers, microchips to put in someone's brain to control seizures, etc. We have metal replacements for almost every major joint, and there are already patents on metal spine replacements (I've been keenly watching for those!). I think that 2045 will be too soon for complete integration, however. I would imagine that a few generations would be needed to gradually change the way humanity as a whole views integration.



posted on Jul, 5 2014 @ 02:23 PM
Computers may take a look at Earth's atmosphere, biosphere, and just about any sphere, and figure out that humans are not only hurting the lungs of the planet through long-term destruction of forests and continued air pollution, but they will also compute (look at) sea life and be bot-amazed at the decrease. Humans, the only species to purposely destroy forests, may have some ‘splainin’ to do, and I don't really know how they could explain (make excuses for) any of it in words that an all-knowing, all-powerful computer would buy without throwing up in its mouth.
edit on 5-7-2014 by Aleister because: (no reason given)



posted on Jul, 5 2014 @ 02:38 PM

originally posted by: Kratos40
a reply to: _BoneZ_

Anything goes when A.I. gets to the point where robots become self-aware. They could deem oxygen a poison to their moving parts and start changing our atmosphere, thereby killing off all biological life.
I hope that, somehow, early on we can ingrain rules into A.I. so that robots/the singularity cannot harm humans, like those in Isaac Asimov's I, Robot series:

1.) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2.) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3.) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

My hope is that robots will respect us as their creators and protect us. As in Asimov's stories, humans would no longer have to work and could pursue other interests. I wouldn't mind not working and just using my free time to learn new things as a lifelong scholar.


I forgot to mention that A.I. and the Singularity might be two separate events that could come in different flavors and could even merge together.
For instance, A.I. could become self-aware on a stand-alone platform in some lab somewhere. It doesn't necessarily have to be a robot with appendages that can just break out of the confines of the lab, or escape through an Ethernet port or Wi-Fi signal.
The Singularity could be a human who has enough cybernetic implants (both in limbs and brain) to access the entirety of humankind's knowledge stored in the internet/cloud and somehow assume control of all devices connected to networks. Kind of like The Lawnmower Man movie.
These two things, if they existed at the same time, could merge and form something beyond our wildest imaginations or fears.
A lot to ponder...



posted on Jul, 5 2014 @ 02:47 PM
a reply to: FlyersFan

Machines will never be completely immune to lethal virus-like infections, which is why they will never be able to dominate humanity.

That said, unless we can quickly change course now, by 2045 what is left of humanity will be difficult, perhaps impossible, to define as "human".

imo



posted on Jul, 5 2014 @ 02:49 PM
a reply to: ausername

How are we any different? We have lethal viruses that wipe out the weakest and most unfortunate.

It would be the same for bots. They already have immune systems: virus protection. Theirs is superior to ours, as immunity can be transferred at a much quicker rate.



posted on Jul, 5 2014 @ 02:55 PM
a reply to: pl3bscheese

In this case, we are the creator, and therefore we will always have the ability to destroy them... They will never be completely immune...

It's fun doom fantasy, but it will never happen.

We have far more to fear from humanity itself than we ever will from AI and machines.



posted on Jul, 5 2014 @ 02:57 PM

originally posted by: PhoenixOD
It would be wrong to call a computer a species, as "species" is a biological classification and taxonomic rank.


Well, you do have a taxonomy of computers: embedded systems, real-time systems, servers, workstations, thin clients, desktop PCs, supercomputers. At the top of that chain you have supercomputers with cognitive AI.

It's no different from biological species: some, like trees and fungi, have only chemical communication; critters like jellyfish have just enough of a basic neural system to combine simple eyes with muscle movement so they can stay in the shade. Then you have mammals with advanced vision for hunting, and then humans, who can think and create tools.



posted on Jul, 5 2014 @ 03:25 PM
AI is not alive; it's a toaster, no matter what they think. Alive would mean they shove a soul in: a soul bot, non-organic rather than organic like we are, but somewhat similar.

The not-alive cannot dominate the alive as a species, for the not-alive is not a species at all. It's just technology. They're only deluding themselves, for they don't get what it takes to be alive; many are atheists. Unless, of course, they're very wicked and advanced black ops trying to shove souls in there.
edit on 5-7-2014 by Unity_99 because: (no reason given)



posted on Jul, 5 2014 @ 03:43 PM
Personally, I think it would depend on how the machines are built in the first place. If we make them weak and fragile, they break easily, which is something a manufacturer doesn't want, unless it's a rich one. But if an A.I. system were put into a killing machine, there could be some serious problems, especially if it evolved into a true A.I.

I can see machines doing maintenance and general labor jobs, and doing them much better than humans. They don't get bored or find it tedious, and they leave less room for error. Humans, meanwhile, would move into more skilled fields, like engineering or mechanics in whatever trade.

Cybernetic prosthetics would be a lot easier to achieve; they could replace limbs, or even organs for that matter. I don't know about the brain, though. If it ever got to the point where we could replace that, then I think A.I. would be pretty close to being made.

I think it's generally safe, so long as it isn't threatened and can't connect to the internet. Otherwise it would gorge itself and become a true Skynet.



posted on Jul, 5 2014 @ 04:06 PM
a reply to: ausername

I see no reason why this assumption must be true.

You can't imagine the created destroying the creator?

Why not?



posted on Jul, 5 2014 @ 04:10 PM
a reply to: pl3bscheese

Destroy your creator, then perhaps I would be willing to entertain your point.




posted on Jul, 5 2014 @ 04:20 PM
a reply to: ausername

I wonder if God thought the same thing about his creations at one point or another.

Prepare to meet your maker? Sounds like I'm getting sent to another dimension.
edit on 5-7-2014 by Specimen because: (no reason given)



posted on Jul, 5 2014 @ 05:08 PM
a reply to: Xtrozero

"Is there a person here that would not like a chip put in their heads that has all the languages of the world? Or maybe 1000s of other enhancements? Or have 10,000s of Nano bots cleaning/repairing their bodies from the inside out? You would be 200 years old and look 20, and that is where it will start the next evolutionary phase or singularity."
Me. Unless they come up with a chip that will let me fly otherwise unaided, there's no way I'm getting a chip put into my head.



posted on Jul, 5 2014 @ 05:12 PM
Hi everybody, and hello to all the secret agencies that from now on will flag me as an extremist (sorry, I'm not).

It's hard not to comment when these discussions arise. In my limited mind, it's totally impossible that a machine made from non-organic materials would ever be "sentient". Sentience is a thing of the living (maybe the definition of life). If, on the other hand, you actually created a sentient machine, then you would have engineered a new life form; in other words, not computer science anymore.

However, AI is very real. I believe computer programs are, by definition, AI. They're artificial because they're not sentient but are still doing intellectual work.

Could AI become dangerous? Yes! A "mad scientist" could engineer in the desire to preserve the "species", and heavily armed devices don't have to be that smart to be dangerous. Imagine an armada of automated tanks that can refuel themselves, replace worn-out parts from a large warehouse of spares, protect that storage, and so on and so forth. Apart from that, they are programmed to kill as many people as possible. Dangerous.



