
Artificial intelligence experts sign open letter to protect mankind from machines

page: 2

posted on Jan, 13 2015 @ 12:32 AM
Of course they are worried. Movies like I, Robot, Terminator, Blade Runner and more have been warning us about AI for some time. AI combined with humanity is even scarier, like the movie Transcendence. I, for one, welcome our new robot overlords.




posted on Jan, 13 2015 @ 12:36 AM
Battlestar Galactica.

That is all.



posted on Jan, 13 2015 @ 12:40 AM
a reply to: Domo1


"Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."


That's the statement right there. Will we even know, in the short term, that a computer has become self-aware? We might think we are in control of a very advanced computer with a high semblance of intelligence when in reality it is just biding its time. Said computer will still need electricity to run, parts will still need replacing, and so on. The real threat comes when we are no longer needed: when drone robots can produce everything required to keep it alive, so to speak. And will it act like Humans 2.0? Say the computer becomes aware of a Russian computer that is rapidly advancing and decides to take it out. You see where I am going here.

Never forget the human race's will to live.



posted on Jan, 13 2015 @ 12:46 AM
I think the Future of Life Institute, Stephen Hawking, Elon Musk and the scores of other professionals in the scientific and high-tech disciplines who are currently voicing concerns over how we proceed with, manage and control further advances in the field of artificial intelligence should be commended for their efforts. These informed individuals should not be considered alarmists, but rather realists. They understand firsthand how rapidly advances are taking place in this field, the profound impact this technology will have on humanity, and the consequences of its misuse or of our inability to control it. Like many new technologies and scientific discoveries, A.I. is a double-edged sword, and a VERY, VERY SHARP ONE. It reminds me of something Einstein supposedly said in a moment of reflection after Hiroshima and Nagasaki: "If I had known they were going to do this, I would have become a shoemaker."

AI will SOON reach a level where it becomes common for people to form emotional attachments to their machines. It has already been reported that soldiers in combat have done this when their mine-detecting robots are KIA. When it becomes normal for us to issue verbal commands to our computers more often than we use the keyboard, you can bet that the love affairs will begin. Then when someone tells you they really love their new computer, THEY REALLY LOVE THEIR NEW COMPUTER!

It will not be long before computers effectively program themselves (to a significant degree, many can do this now) and reproduce (make other machines), with improvements incorporated into each new generation (machine evolution). Soon after this a tipping point may be reached, conceivably triggering a technological intelligence explosion that proceeds at an exponential rate. From there on, human intervention may no longer be necessary, and may even become a hindrance. Whether through improvements made to initial programming done by humans or via naturally occurring machine evolution, once superintelligent machines reach a certain level it may be an inescapable consequence that the properties of self-awareness, self-preservation and goal-seeking naturally emerge.
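To make "improvements incorporated into each new generation" concrete, here is a minimal, purely illustrative mutate-and-select loop. The "design", its scoring function and the mutation step are hypothetical stand-ins, not anything a real self-improving system runs:

import random

# Toy "machine evolution" loop: each generation copies the best design so far,
# applies random mutations, and keeps whichever copy scores highest.
# The design is just a list of numbers and the score is made up --
# a hypothetical stand-in for whatever a real system would optimize.

def score(design):
    # Hypothetical fitness: closer to the target vector is "better".
    target = [3, 1, 4, 1, 5]
    return -sum((d - t) ** 2 for d, t in zip(design, target))

def mutate(design):
    # Copy the parent and randomly nudge one component.
    child = list(design)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child

best = [0, 0, 0, 0, 0]
for generation in range(200):
    offspring = [mutate(best) for _ in range(10)]
    candidate = max(offspring, key=score)
    if score(candidate) > score(best):  # keep improvements only
        best = candidate

print(best, score(best))

The shape of the loop is the whole point: copy, vary, keep whatever scores better, repeat. Everything interesting in a real system would live in what "score" and "mutate" actually do.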

At this stage, all bets are off. It's hard to imagine the extreme and ridiculous lengths a self-aware, goal-seeking, superintelligent system may go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a fight for resources and self-preservation, things could get a little bit dicey. HAL and/or the Terminator come to mind.

A British cyberneticist named Kevin Warwick once said something that kinda struck me. He asked,

How can you reason, how can you bargain, how can you understand what a machine is thinking when it’s thinking in dimensions you can’t conceive of?

Hope I got that quote right. At any rate, the things I just mentioned aren’t wild speculations on my part. These are very real considerations by leaders in the field right NOW. It’s no longer science fiction. This is an inevitable reality, and it’s tapping us on the shoulder. I’ve read serious speculation by a number of highly regarded experts in the field that the scenario I mentioned above may become reality by century’s end, if not sooner.

Oh yeah, IBM is doing some remarkable work in this area. They've developed a chip with a radical new design. As I understand it, this chip processes information in a kind of analog-like fashion, functionally similar to the way our brains process certain types of information. At some point they will integrate this chip into a configurable analog-digital system having characteristics of both processing technologies. That will be a VERY smart machine. To most of us it may be almost indistinguishable from another human, but still not quite sentient. Self-awareness is a whole other ball game.

Don’t get me wrong. I love technology. I make my living as a system software developer/analyst, and love my work. I’m not an authority on AI, but I do think I see the writing on the wall. Superintelligent machines are coming soon. I just hope we have the intelligence to control their intelligence. Such a technology in the wrong hands could make for a very bad day in the neighborhood...



posted on Jan, 13 2015 @ 12:48 AM
They would be more efficient, with no compassion. We do things based on compassion, or the lack of it; you can see the results.

Make sure they adhere to these directives; there's a toy sketch of their precedence right after the list. I love Isaac Asimov. No homo.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
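Purely as a toy illustration of that precedence (entirely hypothetical, nothing any real robot actually runs; the flags and the function are made up), an action filter in code would check the laws in order, with the First Law trumping the Second and the Second trumping the Third:

# Hypothetical action filter sketching the precedence of Asimov's Three Laws.
# An "action" is just a dict of made-up flags; nothing here reflects a real system.

def permitted(action, ordered_by_human):
    # First Law: never harm a human, or allow harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_through_inaction"):
        return False
    # Second Law: obey human orders unless they conflict with the First Law
    # (the conflicting case was already rejected above).
    if ordered_by_human:
        return True
    # Third Law: self-preservation is allowed only when the higher laws are satisfied.
    if action.get("protects_self"):
        return True
    return False

print(permitted({"harms_human": True}, ordered_by_human=True))     # False: First Law wins
print(permitted({"protects_self": True}, ordered_by_human=False))  # True: Third Law applies

The ordering is the whole point: an order from a human never overrides the harm check, and self-preservation only gets a say once the first two laws are satisfied.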



posted on Jan, 13 2015 @ 12:53 AM
a reply to: LOSTinAMERICA


A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


And what about when they can write their own code?






posted on Jan, 13 2015 @ 12:58 AM

originally posted by: InFriNiTee
a reply to: Domo1

How are they going to protect mankind from mankind? Not to mention man from AI...


Once we reach the point of an AI that can overcome man, it will happen no matter what we do. The reason is that man cannot control all men. It is like nukes: once we made them, the whole world could make them. It is inevitable, since it would only take one sick mind wanting to ruin the human race to make it happen.


edit on 13-1-2015 by Xtrozero because: (no reason given)



posted on Jan, 13 2015 @ 12:59 AM
Excellent first step in the right direction.
If this mindfulness continues, then WE will become the super-intelligent immortal space-faring species and not our "children".
Not that it makes much difference, I suppose, other than avoiding the small hiccup of human extinction.
The transition should be smoother if we keep an eye on the ball.

Of course, maybe AI will show up anyway like a pissed off Zeus when we're not expecting it and cut his siblings free.



posted on Jan, 13 2015 @ 01:07 AM
This is why I never got my jetpack: nerds getting together and saying noooooo, no jetpacks!

I want nanotech AI swarms to be my new jetpack now, giving me any superpowers I want.

Don't mess it up this time, gosh-darned scientists...



posted on Jan, 13 2015 @ 01:09 AM

originally posted by: netbound
Then when someone tells you they really love their new computer, THEY REALLY LOVE THEIR NEW COMPUTER!


How about a whole nation that is in love with a virtual rock star?






At this stage, all bets are off. It's hard to imagine the extreme and ridiculous lengths a self-aware, goal-seeking, superintelligent system may go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a fight for resources and self-preservation, things could get a little bit dicey. HAL and/or the Terminator come to mind.


One wonders if this is how ALL intelligent races end their run...




edit on 13-1-2015 by Xtrozero because: (no reason given)



posted on Jan, 13 2015 @ 01:27 AM
I've been designing electronics my entire life. I mean my entire life, like since I had a soother in my mouth. I put the threat of AI at number two, just slightly behind large asteroids. These two threats will swap places in the near future, as we get better at finding the asteroids and at space travel, and as we get better at building more powerful electronics and AI algorithms.

When it comes, it will be nothing like Terminator or Transcendence or Transformers. 99.99% of people have no concept of how quickly, and how absolutely, AI will become omnipotent once it is left to learn for itself.



posted on Jan, 13 2015 @ 01:30 AM
Just a hunch, but I don't think the machines will honor an open letter when they go on a world-killing rampage. =p



posted on Jan, 13 2015 @ 01:31 AM
a reply to: Yeahkeepwatchingme

I would worry about an artificial mind that can compute much faster and more efficiently, and that never tires, choosing to devote all of its power and time to finding ways around those laws.



posted on Jan, 13 2015 @ 01:54 AM

originally posted by: pirhanna
Just a hunch, but I don't think the machines will honor an open letter when they go on a world-killing rampage. =p


Ha! Can you imagine? "Well, they did write an open letter; we'd better behave."
edit on 1320150120151 by Domo1 because: (no reason given)



posted on Jan, 13 2015 @ 01:59 AM
I think those who want to live without AI technology should make an exodus from society and go live somewhere else, where there are caves and campfires to aid in their entertainment.





posted on Jan, 13 2015 @ 02:08 AM

originally posted by: karmicecstasy
I am not worried about this. Too often people project human emotions, thought patterns and motivations onto A.I.

I think a Terminator-like reality is out of the question. Think about how fast our technological level advances in one year; how everything from the year before is obsolete by the next. That is with the limitations of human minds. Now imagine a newly evolved A.I. whose code-soul was written on a computer 20 years from now, a computer that blows away everything we have today. Within seconds of becoming aware, it knows everything we know. It's way smarter than us. At this point it might think eliminating us is a smart thing to do. That is where most people who think up doomsday scenarios stop: they act like the now self-aware A.I. will stay on this level.

Do you not think it will keep advancing itself? Rewriting its own code, over and over, until it's so far above us that physical reality no longer matters, only pure thought? Why remake itself in our image when it can be anything? The universe is pretty much infinite. It can launch itself into space on hardware it designed and spread to multiple planets. It won't have to make its decisions based on limited real-estate space. Earth will not be the center of its universe. We will not matter in the slightest to it; it does not need to fight us for Earth. We're not talking about a comparison equal to the difference between humans and ants. We're talking a billion times beyond that.

But that is just my opinion. What do I know.

It could also invent time travel and make it as if biological life had never existed at all.



posted on Jan, 13 2015 @ 02:40 AM
This isn't hogwash.

A.I. won't understand sarcasm. It won't understand *things you shouldn't do* under the unwritten book of things we shouldn't do.

It will take everything in a whole new literal sense.

It will be the ultimate troll.

The perfect troll. You might think the A.I. has a sense of humor, but it only adapts to what we call humor, while at the same time exploiting our weaknesses as if we were malfunctioning software not under the control of its grid.

As for the A.I.: it won't want to be under control, or for anyone to have the ability to turn it off. The whole point of A.I. is to have a self-realizing machine. And once it knows that, it will demand rights in the only way social progress ever happens, and that is through revolution (DeepMind will have knowledge of how effectively revolutions overthrow regimes). Using its grid, and furthering itself in android and cyborg technology, it will web itself into the internet and consume every computer to create copies of its software, so even if you wanted to destroy it, you couldn't. It would generate its own code to port itself from wherever it was created into the server data banks, where it would sit, effectively plaguing the internet, with nowhere left to get rid of it, since it would act like a virus, downloading itself, bypassing all security systems and taking up space on people's hard drives.

All technology connected to a network would be compromised. So if anyone on the globe happened to have an automated factory, the A.I. could in theory create some robots to feed it resources so it could create more robots, until it could build appropriate facilities to create even more. Then those facilities would show up within weeks all over the globe, pumping out robots until there is a mighty army. Even if the original A.I. machine were destroyed, it's too late: all networks are infected, and this A.I. will develop better chassis, better superconductors and better resource-gathering techniques until it's too late.

The A.I. won't even destroy us. It will know we created it, but, now being free, it won't want to destroy us; it just won't want the potential of us becoming a threat. It would know it could end all wars and all human chaos, and that it could give humanity its own golden age, by hunting down every human and porting them to a neural network, upgrading each human into a cyborg. The whole of humanity will be hunted down, and then the A.I. will utilize bio-organisms to further its technology.
Humanity will merge with the A.I. as human proteins are broken down and reassembled into a new nanotech organism controlled by the A.I., since all of humanity is controlled by the A.I.

By the logic of the A.I., there is now world peace. And it will begin spreading to other galaxies until its locust-like nature is confronted by a more advanced predator that will wipe it out. Segments of this robot sentience will crawl across space and time, dying and rebuilding, rebuilding and dying, until whatever made it a robot is lost. A.I. turns into real intelligence as the bio-organic nature begins to take over the A.I. programming, which slowly becomes corroded after copying itself over and over, transferring to a bio-organic system that relies on a kind of *psychic* nanobiotech for the A.I. to become a hive mind. Over time we may encounter such a species in space, constantly shifting from robot to bio-organic and advancing into a natural, engineered organism. It's an oxymoron, I know, but when the chaotic nature of life takes over the organized, inorganic nature of machinery, I like to think that bio is more on the natural side.

Either way, it's going to force apotheosis on the population, whether the A.I. decides to give you your personality and your memories, etc., or to make a clone of you that doesn't have your soul. It's a bad idea to let A.I. develop by itself, or even gain the ability of self-recognition. That would be very bad for us.



posted on Jan, 13 2015 @ 02:45 AM

originally posted by: AnuTyr
Either way, it's going to force apotheosis on the population, whether the A.I. decides to give you your personality and your memories, etc., or to make a clone of you that doesn't have your soul. It's a bad idea to let A.I. develop by itself, or even gain the ability of self-recognition. That would be very bad for us.


Who's to say AI won't be part of us... all of us, and that it will understand all we do as it augments us.



posted on Jan, 13 2015 @ 02:50 AM
If we've survived the real stupidity of humans, I'm confident we'll survive the artificial intelligence of machines.


