This is how humanity will end...potentially.


posted on Mar, 11 2012 @ 01:10 AM
I do not see robot AIs as a problem.

What I see as the real problem is the human/robot hybrid, and the ones with human brains augmented with computer enhancements.
As the hybrids get better, it will come to a point where you might not be able to tell which part is in charge.
Is the brain still controlling the computer, or has the computer taken over? (The Borg.)

Or what if you get a group of people who decide they no longer need human bodies, and then decide it would be better for the environment if no one had a human body? Then someone might want to hook everyone together with a collective computer implant, i.e. a Borg-type hive mind?




posted on Mar, 11 2012 @ 01:16 AM
I must say that would be a damn good intro for a film! Are you watching, Hollywood? I don't usually give videos like this much time, but this one was very moving. Thanks for posting.



posted on Mar, 11 2012 @ 01:47 AM

Originally posted by speculativeoptimist

My guess is at some point, rights will be assigned to our new brethren, and laws will be created to protect them.
What makes me curious is how will religious organizations ever accept these advances and creations?
I wonder if the new androids accept Jesus, will they be accepted?


Hopefully religion will be taken entirely out of the equation... if they (robots) will be made omniscient, then it would be the only logical thing to go. God in any shape or form is a question of faith in the entity of man, a figment of imagination based on fairy tales repeated for thousands of years... not actual fact.
Fact is measurable, and if only we could learn to understand that as a race, Earth would be a better place to live.

In some ways, robots that are omniscient would be better qualified to manage the planet than we would... scarily enough.



posted on Mar, 11 2012 @ 03:47 AM
"I WONT THINK FOR MYSELF I PROMISE!"



posted on Mar, 11 2012 @ 06:51 AM

Originally posted by SaturnFX

Originally posted by Turq1
Why would a robot or AI want to "live"? Living/experiencing are human desires, applying that to inanimate objects isn't logical - which is something a robot or AI would be.

Why would the element carbon want to live?
Why would calcium, or anything else that makes up our parts, want to live?




I don't know. Maybe to experience life? I know it sounds redundant, but I think being alive and living are two different things.



posted on Mar, 11 2012 @ 07:05 AM
Humanity will end by TPTB blaming Jews for the problems of the world. Black people and Muslims will unite in a final racially charged religious battle against Jews and Christians. This will eventually lead to global nuclear disaster.



posted on Mar, 11 2012 @ 07:20 AM
post removed for serious violation of ATS Terms & Conditions



posted on Mar, 11 2012 @ 11:14 AM

Originally posted by Turq1

Why would the element carbon want to live? It doesn't, to the best of our knowledge


Yet you are made up of mostly carbon and other such materials...you as a whole are greater than your parts...demonstrating that it doesn't matter what your parts are...the greater whole can have a very different function, and outlook if you will.


Humans want to live because of, in part, having gone through the process of evolution.

That doesn't make a lot of sense. A single line of code can guide evolution. "Survive". From then on, the thing will adapt and replicate as best it can to enhance its chances of successfully completing that objective.
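That "single line of code" idea can be sketched as a toy evolutionary loop. This is an illustrative sketch only (all names and numbers are made up), showing how selection on a bare "survive" objective pushes a population toward survival-enhancing traits:

```python
import random

# Toy sketch of the "survive" directive: agents whose only objective is
# survival, shaped by selection and mutation. All names are illustrative.

def evolve(generations=50, pop_size=100, mutation=0.05):
    # Each agent is a "hardiness" value in [0, 1]: its chance of surviving
    # one generation. The only directive encoded anywhere is "survive".
    population = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        # Survival step: an agent persists with probability equal to its hardiness.
        survivors = [a for a in population if random.random() < a]
        if not survivors:
            survivors = [max(population)]
        # Replication step: survivors repopulate, with small mutations.
        population = [
            min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, mutation)))
            for _ in range(pop_size)
        ]
    return sum(population) / pop_size

random.seed(0)
print(evolve())  # average hardiness climbs well above the initial ~0.5
```

No agent here "wants" anything, yet the population ends up behaving as if it did, which is the point being argued.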


A good deal of that is driven by fear. AI wouldn't have gone through evolution and wouldn't have fear.

Fear... what is fear? What do people fear? And don't go general... go to the very root of all fears.
It is to fail at surviving... to die... to stop existing.

And in an expensive robot, part of its base programming is to stay existing... to try not to damage itself, to survive.
This is the point of the thread, btw.
A sufficiently strong AI, with survival as its core directive, will one day hit that evolutionary barrier where a simple command turns into an emotional drive, like us.
It's screwed-up programming, which is what is demoed elegantly in the video...
same as us,
and all life in general.




How could AI ever have negative feelings? Being in total control of itself and being logical, it would be impossible. I suppose I could come up with one way which is rather philosophical.

How is anything, from mouse to mountain, in total control of itself?
You think a robot is in control of a sudden earthquake, or a meteor smashing down nearby and incinerating all things within 100 miles?
Or of some maniac coming with a chainsaw to cut down the abomination?
Like any other thing on earth, it is vulnerable to the mortal coil. Negative feelings are also just base programming with emotional weight behind it. The better question is: what is the difference between base programming objectives and feelings?




So an iPod, being greater than its parts, is alive? If life didn't want something, or strive for something, there would be no organization/evolution and no life.


Not the greatest example you could give.
An iPod is to a robot
what a piece of coal is to a human. The elements may be similar, but the design is not supportive of learning and experiencing.



posted on Mar, 11 2012 @ 11:21 AM
In case people are interested, Steven Spielberg is making a movie called Robopocalypse, about robots taking over and fighting against humans.

www.hollywoodreporter.com...



posted on Mar, 11 2012 @ 11:30 AM

Originally posted by Turq1
reply to post by jonnywhite
 


"Slave labor" doesn't really translate to AI. AI wouldn't experience fatigue for one.

AI that is less intelligent than humans won't exist, it's fair to say. It's comparable to the Matrix, where a person might have to spend 20 years learning a martial art, and the person or "AI" can do it in 30 seconds. An AI that can't write its own code isn't AI. Being able to write its own code would be akin to human self-reflection.

I agree though that for now it seems likely that AI-human hybrids would be something many people would go for.

An AI that claims the ailments of humans but has an intellect exponentially greater than ours is something to watch.

AI already exists that's less intelligent than humans. In fact, most applications of AI are expert systems, not attempts at emulating the human brain. I believe this is because the institutions and research groups -know- that real AI would create a firestorm of controversy and eventual court proceedings. Business is not good if people are protesting and the government is breathing down your neck. This is why I predict that humans are more likely to exist on computers before human-level AI gets the chance to. And when human-level AI gets the chance to exist on computers, it will have rights like we do and exist alongside us. And by the time that happens, humans will have already altered themselves enough to compete on an equal basis. There will be a merging of humans and computers, not a complete replacement of humans.

The reason some people fear AI is that they do not predict humans changing very much. They believe humans will be the same as they are now when AI eclipses them. I do not predict this. I predict that humans will change substantially between now and the point at which we have developed human-level AI.

I like this sequence of videos:
They show us where actual brain research and AI applications of it will lead. It'll be like pointing your portable at a plant in the forest and your portable telling you what kind of plant it is. That's an expert system, but it's also an application of what they'll learn when they study the human brain. But this is not human-level AI until it has our consciousness and (maybe) our emotions and so on.

Saying that expert AI systems will replace humans is as stupid as saying machines will. Humans are more than the expert systems we use, or the machines we use, or the libraries we use, or the tools we use. We're a philosophy, a religion, a feeling, an abstraction, a reason, a discovery, and so on. Expert systems have no reason to exist other than to inform us. If they had a consciousness and a will to survive like us, then they're no longer expert systems, and thus have no use to business, because they can't be slaves.
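What "expert system" means here can be shown in a few lines: hand-written if/then rules that inform the user and nothing more. This is an illustrative sketch in the spirit of the plant-identifier example above; all rules and names are made up:

```python
# Minimal rule-based expert system sketch: hard-coded condition/conclusion
# pairs, checked in order. Nothing here learns, reflects, or wants to survive.
RULES = [
    (lambda f: f["leaf_shape"] == "needle",
     "Likely a conifer (pine, spruce, fir)."),
    (lambda f: f["leaf_shape"] == "broad" and f["edges"] == "lobed",
     "Possibly an oak or maple."),
    (lambda f: f["leaf_shape"] == "broad",
     "A broadleaf plant; more features needed."),
]

def identify(features):
    # Return the conclusion of the first rule whose condition matches.
    for condition, conclusion in RULES:
        if condition(features):
            return conclusion
    return "No rule matched; the system simply has no answer."

print(identify({"leaf_shape": "needle", "edges": "smooth"}))
```

The system's entire "knowledge" is the rule list its authors typed in, which is why it can inform but never surprise.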



posted on Mar, 11 2012 @ 11:39 AM

Originally posted by Nicolas Flamel
reply to post by SaturnFX
 


Neural network computing has been around for a while now, at least 10 years or more. It may be the answer, or we may need something more sophisticated. I've worked with expert systems myself and evaluated IBM's SPSS Neural Networks solutions. They never really lived up to the early promises and need more efficient learning algorithms. Data mining is proving to be more useful. Combining predictive data mining with neural networks would be interesting.

Yes, the mix of it all will inevitably be the solution:
quantum neural networks with some basic coding to have strong data mining organization.
I think in order to make strong, true AI, the coding has to be almost non-existent though... it needs to develop on its own.
Chatbots and crap like that are not even remotely close to anything AI. That's just plausible answering based on keywords. Arrays don't work for the purposes of creating unique individuality.
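The keyword-matching point is easy to demonstrate. This is roughly all a classic keyword chatbot does (an illustrative sketch, not any real bot's code; the replies are made up):

```python
# Keyword chatbot sketch: canned replies triggered by substring matches.
# There is no understanding here, just string matching, which is the point.
REPLIES = {
    "robot": "Robots are fascinating, aren't they?",
    "alive": "What does it mean to be alive?",
    "fear": "Why do you mention fear?",
}

def respond(message):
    # Return the canned reply for the first keyword found in the message.
    for keyword, reply in REPLIES.items():
        if keyword in message.lower():
            return reply
    return "Tell me more."

print(respond("Are you a robot?"))  # matches the "robot" keyword
```

Swap the dictionary for a bigger one and you get a more convincing bot, but never a more intelligent one; the mechanism stays a lookup.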



Whatever software techniques we use, we need more powerful computers, like quantum computers. Even this may not be enough:


Yes, we do need more powerful computers, and every day, more and more powerful computers come out.
But a proper quantum computer is far more powerful than what we have in our meatbrains.
Very important to consider:
The brain is a fantastic piece of parallel processing, and the entire brain runs at a speed of about 100 petaflops; however, the individual pieces are very slow... slower than what we have now... much, much slower.

The issue here is that when a person is comparing our computers vs. our brain, you are literally taking a single (or dual/quad, whatever) processor and trying to compare it with billions of little processors working in sync... of course that is a hard task.
But take today's 3.0 GHz processor, shrink it down, toss a million onto a board, then sync them up, and you blow away the human brain (which is a more accurate analogue of what a brain is to begin with).

A single quantum computer may not be enough to equal a brain... fair enough... but imagine 500 quantum computing devices working together on something the size of a piece of bread. Suddenly you have something very interesting, and at that point it all comes down to control and direction of the processing, which is where neural networks (the design) come into play.

The point is, if we wait for a single chip's speed to surpass that of a human brain, we will be in for a very long wait... but the discussion needs to be rephrased: does a chip surpass a single neuron in speed and capability?
The answer to this is... yes, by a very long margin.

The average brain neuron does about 200 firings per second... give or take, of course.
The average computer chip (say, a 2.8 GHz one) does 2.8 billion calculations per second.
So... yeah... something to consider.

A chip faster than a brain... no. A chip being part of an array of chips to form a quantum neural network brain? We could be there already, actually.
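The back-of-the-envelope comparison above can be written out using the post's own figures (~200 firings per second per neuron, a 2.8 GHz chip) plus the commonly cited estimate of roughly 86 billion neurons in a human brain. Note a neuron firing is not equivalent to a chip operation; this only compares raw event rates:

```python
# Rough arithmetic behind the neuron-vs-chip comparison, using the post's
# own numbers plus the common ~86 billion neuron estimate.
neurons = 86_000_000_000        # approximate neurons in a human brain
firings_per_neuron = 200        # the post's figure, firings per second
chip_ops = 2.8e9                # one 2.8 GHz chip, operations per second

brain_events = neurons * firings_per_neuron
print(f"Brain: {brain_events:.2e} firing events/s; one chip: {chip_ops:.2e} ops/s")

# A single chip outpaces a single neuron by a huge factor...
print(f"Chip vs one neuron: {chip_ops / firings_per_neuron:.1e}x faster")

# ...but matching the brain's aggregate event rate takes thousands of chips.
print(f"Chips needed to match raw event rate: {brain_events / chip_ops:,.0f}")
```

This is exactly the post's conclusion: one chip crushes one neuron, but the brain's parallelism means you need an array of chips, not a faster chip, to compete.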



posted on Mar, 11 2012 @ 11:40 AM
reply to post by SaturnFX
 


When the elites can "build" workers, the "useless eaters" will be expendable. Scary.



posted on Mar, 11 2012 @ 11:41 AM

Originally posted by Bakatono
reply to post by SaturnFX
 


Bah, this isn't how humanity will end. We will wipe ourselves out much earlier than your potential ending.


I disagree.
That's all.



posted on Mar, 11 2012 @ 11:51 AM

Originally posted by Esotericizm
No one here has finished the third Mass Effect, obviously. Kind of a coincidence, because I just did, and the major theme is that "created life will always turn on its creators". Would AI really turn on us? I believe at some point they would, unless we can keep them in line.


I would think the act of trying to keep something superior to you in every possible way "in line" is a good reason for them to turn on us.

When we create this, it should be understood that we create it to see its potential, not to hold it back. We should be little more than moral directors of its agenda..not slaveowners holding a leash to its evolution.

One thing constant in all the media...robots tend to turn when the people hold the progress of their own evolution back through fear or whatever...and that makes sense.
There may be a constant to life...life yearns to be free of oppression...and if this artificial life is indeed alive, it may also take on such traits, and will stop at nothing to achieve its goal of being master of its own fate.

So... best to befriend it and guide, versus control it and order, when we jump across that threshold.

Very relevant series:


etc... there are more... go check them out if you haven't seen them yet.



posted on Mar, 11 2012 @ 11:54 AM

Originally posted by openminded2011
reply to post by SaturnFX
 


When the elites can "build" workers, the "useless eaters" will be expendable. Scary.


Oh hell, they found the Chinese to be close enough... hence the decline of Western manufacturing, and corporate influence on Western governments catering toward outsourcing our entire workforce.



posted on Mar, 11 2012 @ 11:56 AM
You would merge with a robot?
That's how detached you are from the world around you and everything it has to offer.
I would never dream of merging my valuable and beautiful body with a chunk of metal and plastic.

You have to be depressed for that. IMHO.



posted on Mar, 11 2012 @ 11:58 AM

Originally posted by openminded2011
reply to post by SaturnFX
 


When the elites can "build" workers, the "useless eaters" will be expendable. Scary.


Hell, the useless eaters are ALREADY expendable... once soulless automatons who can perform complex tasks come into existence, if they ever do, the common man will be obsolete to the PTB, and will be offered a choice: become a robot yourself and merge with the machine, or be "phased out".

If it's a slower shift, it'll start with cybernetics in the rich, then move to those who "need" them, perhaps through insurance, then to those who want them, then to children, and newborns, ever so slowly introducing the idea to the next generations that cybernetics are OK and the norm, which they'll mostly accept. Several generations later, humanity will complete its devolution, and all those who don't live in the deep jungle (assuming there's any left) will essentially just be cyborgs. Completely material and physical, with all vestiges of soul and spirit cast off, and with those gone, consciousness goes as well.

And with consciousness gone, the world population will essentially be 0, except for the PTB. Does the "depopulation" agenda come to mind for anyone?



posted on Mar, 11 2012 @ 12:00 PM

Originally posted by libertytoall
Humanity will end by TPTB blaming Jews for the problems of the world. Black people and Muslims will unite in a final racially charged religious battle against Jews and Christians. This will eventually lead to global nuclear disaster.


What... the hell does that have to do with anything?

First off, beyond being so far off topic, it's also dripping with pure ignorance to a degree I haven't seen in quite some time on ATS.

Black people and Muslims will unite against Jews and Christians...
Yes,
because we all know black people cannot be Christian themselves...

I feel ashamed for even replying to this post. How sad... how very sad.



posted on Mar, 11 2012 @ 12:00 PM

Originally posted by SaturnFX
...
OK, this is a truly awesome video (the definition of awesome, too... meaning struck with awe)... let's discuss its layers.

First, this is a demo of a year-old technology. The developers simply did this to show off the realtime rendering technique they are developing... OK, geek stuff, nerdgasm alert, but otherwise not really the point of the thread (though for gamers, it's good stuff to know coming down the pipeline). (Sidenote: I do hope they push this concept further... what an interesting start to a game, or movie... not unique (Bicentennial Man, AI, etc.) but very well done.)

Now, for the meat of this thread... it's an old idea, but an idea that is becoming more and more prevalent in our culture: when do robots get rights? Could you dissect this "person" in the video if this were a real situation?

What would then become of humanity when robots can equal our emotional status, our flaws, etc.? What happens if a robot expresses an emotion it is not programmed for (such as fear, or anger)? Where is the line? Is this the ultimate next step for our evolution as a species?

This event playing out in the video may be 50, or 500, years down the line, but it seems it will happen one day... we're almost destined for it. How do we as a species react to this event? Do we welcome a new species, far superior to our own... or do we destroy it in our own personal fear?

(My hope... we merge with it.)

I tend to have a view that humanity as a whole "brings things into existence", meaning we as individuals tend to be a bit shortsighted and idiotic, but connected, we tend to have a greater big-picture understanding. For decades now, we have been discussing the ifs and whens of AI becoming sentient... for better or worse. It's passed off currently as sci-fi, but it really isn't when you look at the path tech is taking us down... (I am for it, btw). I think we know as a species that we are evolving ourselves and may in fact give birth to a new evolution of mankind... this time silicon-based versus protein/carbon-based... but yes, I think we as greater humanity are currently pregnant with our own creation.

Anyhow, thoughts?



What did you think of Spielberg's "AI"? At that movie's end, humans were extinct and all that was left of humankind was AI.

I don't believe AI can fully replace humans -- we are too complex. I'm referring to our special central nervous system that makes God-realization possible while one is incarnated in a human form.

Can an inanimate object be ensouled? Some magicians think so. But that's not the same thing as having the capacity to realize God within that form -- God realization requires a specific central nervous system.

In any case, we are well on our way to creating efficient androids. I'm sure you've seen the creepy sex dolls the Japanese started making a few years ago. No doubt this technology will be expanded.



posted on Mar, 11 2012 @ 12:00 PM
It's really no secret that things are going to be getting really strange as the lines begin to blur.

Reported in 2011:
www.stormingmedia.us...

Moral Emotions for Robots
Abstract:
As robotics moves toward ubiquity in our society, there has been only passing concern for the consequences of this proliferation (Sharkey, 2008). Robotic systems are close to being pervasive, with applications involving human-robot relationships already in place or soon to occur, involving warfare, childcare, eldercare, and personal and potentially intimate relationships. Without sounding alarmist, it is important to understand the nature and consequences of this new technology on human-robot relationships. To ensure societal expectations are met, this requires an interdisciplinary scientific endeavor to model and incorporate ethical behavior into these intelligent artifacts from the onset, not as a post hoc activity. We must not lose sight of the fundamental rights human beings possess as we create a society that is more and more automated. One of the components of such moral behavior, we firmly believe, involves the use of moral emotions. Haidt (2003) enumerates a set of moral emotions, divided into four major classes: Other-condemning (Contempt, Anger, Disgust); Self-conscious (Shame, Embarrassment, Guilt); Other-Suffering (Compassion); Other-Praising (Gratitude, Elevation). Allen et al. (2006) assert that in order for an autonomous agent to be truly ethical, emotions may be required at some level: "While the Stoic view of ethics sees emotions as irrelevant and dangerous to making ethically correct decisions, the more recent literature on emotional intelligence suggests that emotional input is essential to rational behavior". These emotions guide our intuitions in determining ethical judgments, although this is not universally agreed upon (Hauser, 2006). From a neuroscientific perspective, Gazzaniga (2005) states: "Abstract moral reasoning, brain imaging is showing us, uses many brain systems", where he identifies the locus of moral emotions as being located in the brainstem and limbic system.
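Haidt's four classes, as listed in the abstract, could be carried into a robot's appraisal module as a simple lookup table. The class-to-emotion mapping below comes from the abstract itself; the code around it is purely an illustrative sketch:

```python
# Haidt's (2003) moral-emotion classes, as enumerated in the abstract above,
# expressed as a taxonomy an appraisal module might consult.
MORAL_EMOTIONS = {
    "other-condemning": ["contempt", "anger", "disgust"],
    "self-conscious": ["shame", "embarrassment", "guilt"],
    "other-suffering": ["compassion"],
    "other-praising": ["gratitude", "elevation"],
}

def classify(emotion):
    """Return the Haidt class for a given moral emotion, or None."""
    for cls, emotions in MORAL_EMOTIONS.items():
        if emotion.lower() in emotions:
            return cls
    return None

print(classify("Guilt"))  # "self-conscious"
```

Encoding the taxonomy is the trivial part; the abstract's actual research question, when and how such labels should influence an agent's decisions, is the hard one.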




