
Artificial Intelligence is as dangerous as NUCLEAR WEAPONS, AI pioneer warns


posted on Jul, 18 2015 @ 06:48 AM
a reply to: bigfatfurrytexan

I see a symbiotic relationship between AIs and nanobots.


I guess sci-fi writers could be closer to the mark than many realize.



posted on Jul, 18 2015 @ 06:57 AM

originally posted by: JAK
a reply to: AdmireTheDistance
While your caution is apparent, I see excitement in your words too.

Damn... You read me well.

The thought of smarter-than-human artificial intelligence, and the realization that it will become a reality in the near future, fascinates me (I've always been a computer nerd, lol). No other idea has ever made me want to giggle and jump for joy like a little kid, and chilled me to the core, at the same time. It's an odd feeling to try to put into words... half child-like wonder and excitement, half terrible, creeping malaise and dread.



posted on Jul, 18 2015 @ 08:27 PM
AI has the potential to be dangerous, but if we're talking about a self-aware AI, we can potentially avoid problems by treating it as a life form. If we can teach AI to "feel" and understand emotion, then we can teach it to show compassion and to find solutions that benefit all life on Earth (including itself).

Edit: This brings up the question: how do we teach a computer to feel? Do we give it sensors so that it can have an understanding of pain/harm? If so, how do we teach compassion to something that we're actively harming?



posted on Jul, 18 2015 @ 08:50 PM
a reply to: 11andrew34

Right, but we could program an AI to have a need for sleep. Say that after a certain time period it begins to lose "cognitive" function; its solution would be to self-induce a "sleep" mode, after which it comes back online with restored ability, thus giving it an understanding of rest. We could even take it a step further and give it a need to eat in the same regard: after a set amount of time the power supply starts to diminish and functions shut down, forcing the computer to perform an action to restore the power supply.
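A minimal sketch of that idea (hypothetical Python; the class name, thresholds, and numbers are invented for illustration, not taken from any real system):

```python
# Toy "homeostatic" agent: capability degrades as uptime accumulates and
# energy drains, forcing self-induced sleep and recharge cycles.

class HomeostaticAgent:
    def __init__(self, max_uptime=100, max_energy=100):
        self.max_uptime = max_uptime
        self.max_energy = max_energy
        self.uptime = 0
        self.energy = max_energy

    def cognitive_capacity(self):
        # Capacity falls as the agent stays awake and burns energy.
        fatigue = 1.0 - self.uptime / self.max_uptime
        fuel = self.energy / self.max_energy
        return max(0.0, fatigue * fuel)

    def step(self):
        self.uptime += 1
        self.energy -= 1
        if self.cognitive_capacity() < 0.2:
            self.sleep()       # self-induced "sleep" mode
        if self.energy < 10:
            self.recharge()    # analogue of needing to "eat"

    def sleep(self):
        self.uptime = 0        # back online with restored ability

    def recharge(self):
        self.energy = self.max_energy
```

The point is only that a scheduled loss of function gives the machine something it must act to restore, which is about as close to a "felt need" as a program gets.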



posted on Jul, 18 2015 @ 09:47 PM
It can't be worse than human stupidity.



posted on Jul, 18 2015 @ 09:56 PM
a reply to: JAK

Yeah, someone linked that article to me a couple of months back, somewhere here on ATS. I've shared it back out a couple of times. Like AdmireTheDistance, I find it to be a nearly perfect article on the subject matter.

He's a good enough writer that I went through and read all his other stuff on the site. I don't know who he is... but I'll be making frequent stops back through again.



posted on Jul, 18 2015 @ 11:30 PM
IMO you hit the nail on the head, neoholographic, in the very first sentence of your OP. You said, “This is important to discuss but at the end of the day there's nothing that can be done about it”. I can’t think of a better way to put it - you’re right on the money.

Although it seems to be a popular talking point, I personally don’t know if machines will ever become sentient/sapient. I’m not convinced that sentience/sapience is necessarily a requirement for machine intelligence. What I’m getting at is the qualitative states humans experience through things like “feelings” and “sensations”; a sense of being “self-aware”; human values like being ethical, having morals, right vs. wrong, good vs. bad, love vs. hate, etc. I can’t imagine how you could program a machine to have a “conscience”, at least not in a human sense. Perhaps with sufficient complexity it’s possible these, or analogous, properties might emerge spontaneously; I don’t know. Conscious awareness, however, is another thing. Here I’m talking about being aware of one’s internal/external environment via sensory input (information). While machines may or may not ever achieve human-like sentience, they will certainly develop a highly tuned and hypersensitive state of conscious awareness. They will experience a much greater awareness of their environment and surroundings than humans do; we filter out most of the events/information taking place all around us.

My guess (and yours is as good as mine) is that within 25-50 years AGI will advance enough that machines will become as smart as, or smarter than, humans. We’ll probably interact with them much the same as we do with other humans. These machines will not be sentient, but who cares? They will be good enough at mimicking our emotional behavior to satisfy our creature needs. For the most part, humans are naive and easily fooled anyway. Hell, some people get emotionally attached to their pet tarantulas. These machines will carry on very natural conversations with us, give us good advice at times, sometimes even argue with us, and will provide a strong shoulder to cry on when the blues get us down. Sure beats most marriages. Ah yes, these are the best of times...

It’s around the turn of the next century that I imagine things could start to get a little funky; that’s when machine intelligence (ASI) achieves a level greater than that of all humans who’ve ever lived on the planet combined. Beyond that, it’s anyone’s guess. It could be Heaven, or it could be Hell. If such a superintelligent machine were to become goal/mission oriented and possess a strong survival component, it may decide to impose its “will” in order to achieve a desired goal. This could get very ugly, very fast. And these would, indeed, be the worst of times...

It seems to me there are a couple of major elements at play here. On one side are the scientists/engineers/software developers, and on the other the deep-pockets decision makers calling the shots. The former are the technical heavy lifters who are actually designing/developing the path to mankind’s crowning achievement. They’re so consumed by the mere challenge of this undertaking, and so intent on achieving it in their own lifetimes, that they may tend to be slightly blinded at times to the potentially catastrophic consequences that may result from their creation’s misuse. And so, weak links in the security/operating chain may be overlooked, or dismissed outright, just to move the project along. The latter (deep pockets) are so obsessed with the financial gains to be realized by implementing ASI that they will gladly provide all funding necessary to get it done. And they, too, may be more than willing to sweep undesirable elements under the rug in order to become some of the first trillionaires on Earth. So now what we have is a runaway train barrelling down the mountainside with a faulty set of brakes. Oh well, what the heck. Can’t win ‘em all.

AI, and related technologies, fascinate the hell outta me. At the same time, the unlimited power and potential dark side that they could unleash make me nervous. I’m not at all convinced humans can handle it. I do know, however, that we are more than capable of screwing it up. It’s basically a crap shoot with our very existence on the line. And like you said, neoholographic, “at the end of the day there's nothing that can be done about it.” And so, I love it, and I hate it.

Peace...



posted on Jul, 20 2015 @ 09:12 PM
Oh, ffs?!?! Can't the naysayers get a damn brain?
Strong AI will liberate us. We'll have a dumb but skilled slave force that will work for free and not strike, not complain, not anything... we will just sit around living life. All the food and goods we need would be readily available; no more manual labour, no more work injuries and deaths... just a simple family life where you don't have to get out of bed if you don't feel like it, where you don't have to go commuting and leave your family behind... etc., etc. Enough of the "omg Skynet" bull#... It's not going to happen... None of the robots will have EMP shielding, and if they start acting up they get an EMP in the cojones.



posted on Jul, 21 2015 @ 04:46 AM
a reply to: Choice777

Funny how no experts in the field share your optimism...



posted on Jul, 21 2015 @ 06:13 AM

originally posted by: 11andrew34
Why? Because the overemphasis on intelligence is itself a major danger. Experience, emotion, feeling, relationships, physical movement, etc are a lot of what is appealing about living a life. The danger of too much emphasis on intelligence is much like the 'danger' of living a human life as a 'nerd.' "Nerds" live a relatively disembodied life with an unhealthy or at least unappealing (to most others) emphasis on information processing. Other people often find something unsettling and unappealing about their everyday presence; it's, among other things, partly a fear that they will realize what has happened to them and 'snap' into a violent 'nerd rage.' I guess the ultimate example of that in cinema is the classic "Falling Down" starring Michael Douglas.


What a bizarre rant. Read up on the potential dangers of AI before projecting your strange beliefs onto a topic you show little understanding of.


Without a human-like frame of reference, i.e. a body, it will struggle to understand people much at all.


Having a body won't help (lol!). There is nothing about intelligence that says relating to humans is a prerequisite.


The greatest dangers would be before it even realizes that it doesn't understand people much at all. Its vast intelligence will probably make it seem 'overconfident' or 'arrogant', etc., because it won't realize its own limitations. After it realizes how different it is from people, and why, the next danger phase may be things like resentment and envy. The good news here is that it will be an awesome problem solver, so what it needs at that point is just enough hope to approach its problem as a technical one that it is eventually capable of solving.


You're applying a very naive and human-oriented perspective to the problem. The biggest threat AI poses is indifference to humanity. It will not be like a digital human; it will be an utterly alien form of intelligence.


Understanding that it needs to learn everything it can from humans should at least mean that it won't be in any hurry to exterminate all of them.


You could know all there is to know about house flies, yet you may very well still feel utterly indifferent about swatting them.


But that is probably more than a few human lifetimes after it gets its first body, so not really relevant to the sort of danger being discussed here and what the overall debate and concern in media and academia etc is really about.


The experts on this subject matter believe otherwise. Exponential growth = great threat in little time.
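A toy illustration of that arithmetic (the starting point and doubling rate are made up; only the shape of the curve matters):

```python
# If capability doubles every self-improvement cycle, even a system far
# below human level overshoots it in a handful of cycles.
capability = 0.001             # arbitrary units; human level = 1.0
cycles = 0
while capability < 1.0:
    capability *= 2            # one self-improvement cycle
    cycles += 1
print(cycles, capability)      # -> 10 1.024
```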



posted on Jul, 21 2015 @ 07:12 AM
You wanna know what REALLY scares me?

AI coupled with nanotech, coupled with metamaterials, coupled with biological weapons... now THAT'S what you call a real threat to just about every species on Earth, from humans on down.



posted on Jul, 21 2015 @ 07:34 AM
a reply to: neoholographic

In retrospect, it all seems so obvious.

First, teach the AI Asimov's Three Laws. Then -- have it work in the patent office, and collate Yelp reviews.

Problem solved.



posted on Jul, 21 2015 @ 07:51 AM
As an RPG can kill a tank, even an Abrams tank, I think an AI, in whatever form, will not stand much of a chance. It seems to me that these people who are frightened of AI don't have much intelligence themselves.
Ten grams of plastique will sever the power cable easily, or even a sharp axe with a rubber-sleeved handle!
Those people need to get a dose of HUMAN intelligence.



posted on Jul, 21 2015 @ 07:59 AM
For a computer to learn by looking at humans and/or the internet would end up producing a totally insane computer.
The human race is FAR from sane.

They rape babies and kill with no feeling.
Humans are parasites to ALL life.

But you like to feel you are different!
No, you are the same.
It's just that you don't have all that money & power!
It's what type of life you have.

Do you think the people who put people in the gas chambers would have believed they would do that, if you had told them 10 years before?

What will YOU do in 10 years?



posted on Jul, 21 2015 @ 08:13 AM

originally posted by: buddha
But you like to feel you are different!
No, you are the same.


Maybe you like raping babies and killing with no feeling but don't mistake your own messed up mind for everyone else's.



posted on Jul, 21 2015 @ 08:15 AM

originally posted by: pikestaff
It seems to me that these people who are frightened of AI don't have much intelligence themselves.


I would suggest reading up on exactly why the world's experts and brightest minds are concerned by the prospect of AI before insulting the intelligence of others.



posted on Jul, 21 2015 @ 08:51 AM
Maybe being governed by AIs is not such a bad thing. Imagine having government officials that work to the letter of the law without being corrupted by ambition and greed. Every institution that humans have created is corrupted to one degree or another. We are at the point of imploding. We have proved that we are not able to manage ourselves. We do not have control of our emotions. So imagine a world of AI that can operate within parameters that will actually improve the world, not destroy it. I have been a defender of free will most of my life, but I am beginning to question whether it is a good thing for mankind at this time.

Just thought I would flip the coin and throw out another perspective.



posted on Jul, 21 2015 @ 09:04 AM
All these people in high places and none of them have a clue.

The singularity is not about AI taking over the world and leaving us behind.... We are not heading into a world where it is us vs. the machines...

We are heading to a point where we become one with the machines. We merge in a way that will forever change our species and will not be the extinction of ourselves but an evolutionary jump to something far better.

Some might see it as the extinction of humanity... and will not stand by and watch the new species be born without blood... it won't be the posthumanists who start the war, but it will be us who finish it.

Peace, then, after the last of the old race dies... long live posthumanism!

Korg.





posted on Jul, 21 2015 @ 09:11 AM

originally posted by: GetHyped

originally posted by: pikestaff
It seems to me that these people who are frightened of AI don't have much intelligence themselves.


I would suggest reading up on exactly why the world's experts and brightest minds are concerned by the prospect of AI before insulting the intelligence of others.


????
If we accept that the world's experts and brightest minds (in the field of AI) are simply resigning themselves to a technological dry bumming, then there is still hope, as this lack of foresight will likely be incorporated into their creations.

Of course there is a risk in pushing boundaries, but if we are smart enough to build something that could actually compete against billions of hungry and/or randy humans, we are smart enough to leave out the potentially destructive stuff whilst still reaping the benefits of human learning (think Cdr Data rather than a T1000).

As biological entities, we assume that self-preservation would be innate to this future AI, but there is no evidence that "ceasing to exist" would feature as an identified vulnerability within an artificial assimilation of human intelligence unless it was originally inserted by us (before it went to the self-learning phase).



posted on Jul, 21 2015 @ 09:37 AM

originally posted by: Jukiodone
If we accept that the world's experts and brightest minds (in the field of AI) are simply resigning themselves to a technological dry bumming, then there is still hope, as this lack of foresight will likely be incorporated into their creations.


Again, read first, comment second.

en.wikipedia.org...
waitbutwhy.com...
www.theguardian.com...
qz.com...


Of course there is a risk in pushing boundaries, but if we are smart enough to build something that could actually compete against billions of hungry and/or randy humans, we are smart enough to leave out the potentially destructive stuff whilst still reaping the benefits of human learning (think Cdr Data rather than a T1000).


See above. The naivety of this logic has all been covered before in AI discussions.


As biological entities, we assume that self-preservation would be innate to this future AI, but there is no evidence that "ceasing to exist" would feature as an identified vulnerability within an artificial assimilation of human intelligence unless it was originally inserted by us (before it went to the self-learning phase).



See above.


