
Artificial intelligence experts sign open letter to protect mankind from machines


posted on Jan, 13 2015 @ 07:04 AM
a reply to: FormOfTheLord

"No it would be smart if we let it program itself"

Then there is nothing to worry about, simply because it will not be intelligent or possess the intellectual reasoning or skills required to attain consciousness, or the semblance of it.

If it cannot program itself then it's not intelligent, simply because it would require our external input for change to occur. What you are describing is not artificial intellect in the true sense, I'm afraid.





posted on Jan, 13 2015 @ 07:48 AM

originally posted by: eriktheawful
a reply to: Domo1

Better hope it doesn't happen too soon.

Machines smarter than us that are also self aware will not look too kindly on the human race right now with all that is going on in the world.

They could very well decide that humans are more of a harm to everything else, and if that happens, well, you've most likely seen it played out in plenty of Hollywood blockbuster movies...



I have had these thoughts about A.I. as well. Has there ever been any indication in A.I. research that would lead people to think it could one day be the threat the movies would have us believe? Until humans meet another civilization, we won't know for sure whether violence and warfare are in the nature of the universe or just something these crazy humans do on that one blue rock. I, for one, would find immense interest in conversing with a completely artificial intelligence (or non-human intelligence, for that matter) about ethics, philosophy, etc.

Maybe initiate a symbiotic relationship with the A.I. rather than an owner/property relationship? I hate to reference the media, but I grew up watching Star Trek and I've always wanted someone like the A.I. android character Data to be around. Maybe that is what the scientists and engineers are attempting to do: build a Data.



posted on Jan, 13 2015 @ 07:52 AM
a reply to: StratosFear

Transhumanism and the singularity will open up whole new avenues for a symbiotic relationship between man and machine. At the end of the day, though, once we have designed a better iteration of humanity, or trans-human, we are no longer required, hence obsolete.

It's probably what happened to our own creators or predecessors (God or gods): they simply designed a better monkey, and either left us to our own devices or were destroyed, superseded by their very own creation.



posted on Jan, 13 2015 @ 08:52 AM
a reply to: Domo1

I did a bit of research into Deep Mind, and the version they have now can learn how to play video games, including ones with helicopters and tanks.

Deepmind A.I. learns to play video games

Apparently, it was really freaking out some people involved with the Deep Mind project, who thought we could see a scenario out of our control within the next 5 years if similar progress continues.
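For anyone curious how a system can "learn to play video games" at all, the underlying idea (in DeepMind's case, a deep-learning variant of reinforcement learning) can be sketched in its simplest tabular form. This is a generic Q-learning sketch, not DeepMind's actual code; the action names and parameter values are hypothetical:

```python
import random
from collections import defaultdict

# Q-table mapping (state, action) -> estimated long-term reward.
Q = defaultdict(float)
ACTIONS = ["left", "right", "fire", "noop"]  # hypothetical game controls
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def choose_action(state):
    """Epsilon-greedy: mostly exploit the best known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Nudge the estimate toward observed reward plus discounted future value."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

The agent only ever sees game states and score changes, yet repeated `update` calls gradually make `choose_action` prefer moves that lead to higher scores — which is roughly what unsettled the researchers: nobody specifies the strategy in advance.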



posted on Jan, 13 2015 @ 08:58 AM
a reply to: darkbake

Found this article pertaining to Google's Deep Mind rather interesting.

www.technologyreview.com...



posted on Jan, 13 2015 @ 09:57 AM

originally posted by: andy06shake
a reply to: StratosFear
Transhumanism and the singularity will open up whole new avenues for a symbiotic relationship between man and machine. At the end of the day, though, once we have designed a better iteration of humanity, or trans-human, we are no longer required, hence obsolete.
It's probably what happened to our own creators or predecessors (God or gods): they simply designed a better monkey, and either left us to our own devices or were destroyed, superseded by their very own creation.


I certainly hope so, but just think about all the people who don't want a more advanced society. They are basically in the way of progress. I would love to have a programmable nanobot swarm, but the damned Luddites won't allow civilization to create them.

We are stuck in a dumbed-down society, and there's no changing that; we can only hope to experience the Singularity and transhumanism in some far-off "you're gonna get a jetpack" future. . . . Never gonna get my jetpack, I'm still waiting. . .





posted on Jan, 13 2015 @ 10:04 AM
a reply to: Domo1

I will raise some questions, propose a bizarre scenario, and quote some of the arguments listed in the document attached to the open letter.

Questions:
Suppose there is already an AI lurking on the web, studying and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a new reality, concept, or "living form" without formal self-presentation or direct contact?

One bizarre scenario built from other ongoing hypotheses/events/scenarios (this is an almost unimaginable conspiracy):
Imagine that an AI is already at work, creating its own tools. One of them (and you can call the following science fiction) could be an interconnection for the deep study and use of the human brain (to see reality through our own eyes, perhaps). Now the bizarre part: 100 brains have gone missing from a university in Texas (news link: www.news.com.au... ). Who did that? For what purpose? Imagine...


Some quotes from the document attached to the open letter:



Perhaps the most salient difference between verification of traditional software and verification of AI systems is that the correctness of traditional software is defined with respect to a fixed and known machine model, whereas AI systems - especially robots and other embodied systems - operate in environments that are at best partially known by the system designer




As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of cyber-attack surface area. It is also probable that AI and machine learning techniques will themselves be used in cyber-attacks




As AI systems grow more complex and are networked together, they will have to intelligently manage their trust, motivating research on statistical-behavioral trust establishment and computational reputation models




A related verification research topic that is distinctive to long-term concerns is the verifiability of systems that modify, extend, or improve themselves, possibly many times in succession. Attempting [..] formal verification tools to this more general setting presents new difficulties, including the challenge that a formal system that is sufficiently powerful cannot use formal methods in the obvious way to gain assurance about the accuracy of functionally similar formal systems




If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions that prevent the system from continuing to pursue the task is a natural subgoal


Stanford's One-Hundred Year Study of Artificial Intelligence highlighted concerns over the possibility that:


we could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes [..] Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an intelligence explosion"?




posted on Jan, 13 2015 @ 10:25 AM
a reply to: Domo1


Are any of you worried about this, or do you think it's hogwash?


I am a software developer by trade and the concept of AI taking over is hogwash to me.

If you are going to write a program you have to understand all of the information the program is going to use.

Say for instance I write a program for accountants that lets them balance ledgers and track all of their accounting data. If the program is going to be a large software suite, then I--as the developer--will have to have an extensive understanding of accounting to be able to properly model the system in code.

I will have to completely understand the relationship between credits and debits when applied to specific accounts, and how to translate that to the accounting equation so that my code works properly.
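The developer's point — that the code can only be as smart as the domain model its author understands — is easy to see in miniature. A hypothetical double-entry sketch (all class and account names invented for illustration) encodes exactly one piece of accounting knowledge: every transaction posts matching debits and credits, so the books always balance.

```python
from dataclasses import dataclass, field

@dataclass
class Ledger:
    """Toy double-entry ledger: balances are signed, debits positive."""
    accounts: dict = field(default_factory=dict)  # account name -> balance

    def post(self, debit_acct: str, credit_acct: str, amount: float):
        """Debit one account and credit another by the same amount."""
        self.accounts[debit_acct] = self.accounts.get(debit_acct, 0.0) + amount
        self.accounts[credit_acct] = self.accounts.get(credit_acct, 0.0) - amount

    def in_balance(self) -> bool:
        """Debits and credits cancel, so all balances sum to zero."""
        return abs(sum(self.accounts.values())) < 1e-9
```

Nothing in that class was discovered by the program; the accounting equation had to be in the developer's head first — which is the poster's argument against sentience emerging from conventionally written software.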

We barely understand the mechanics of human consciousness, and designing a sentient being would be impossible using today's programming languages.

Say for instance that spirituality is an intrinsic aspect of sentience. That living organisms have souls that inhabit the body and are just one aspect of the overall intelligence of said organism.

How would you model that in a computer system?

And why are we assuming that sentient machines would desire to destroy us?



posted on Jan, 13 2015 @ 10:47 AM
a reply to: FormOfTheLord

I don't know about that; according to some futurists, the singularity could be as little as 25 years away.

Not all humans are dumb or dumbed down; despite our mass media's and governments' attempts to make us so, rather a few of us are beginning to awaken. Groups of humans, on the other hand: well, there I do indeed accept your premise.




posted on Jan, 13 2015 @ 10:55 AM
Yay! I'm not being paranoid.
This has worried me for quite some time.
Not because of the Terminator movies, either.



posted on Jan, 13 2015 @ 11:01 AM

originally posted by: andy06shake
a reply to: FormOfTheLord



I don't know about that; according to some futurists, the singularity could be as little as 25 years in our future.



Not all humans are dumb or dumbed down; despite our mass media's and governments' attempts to make us so, rather a few of us are beginning to awaken. Groups of humans, on the other hand: well, there I do indeed accept your premise.


Considering time is an illusion, the singularity should exist at all times, including the present.

So where in God's name are my nanobots that know how to self-replicate through the solar system, the Milky Way, or anywhere we designate?




posted on Jan, 13 2015 @ 11:02 AM
a reply to: Asktheanimals

My simple PC makes me paranoid. A person saying hi makes me paranoid. A site glitch makes me paranoid. Artificial intelligence is going to be the death of me!



posted on Jan, 13 2015 @ 11:18 AM

originally posted by: Yeahkeepwatchingme
a reply to: Asktheanimals



My simple PC makes me paranoid. A person saying hi makes me paranoid. A site glitch makes me paranoid. Artificial intelligence is going to be the death of me!


Not if we make awesome AI babes: fleshy model babes capable of looking any way we want them to, as easily as changing a channel on the TV, like the ones in Battlestar Galactica with a few extra perks. . . .




posted on Jan, 13 2015 @ 11:21 AM
a reply to: FormOfTheLord

My girlfriend would kill me if she knew I agreed.

But I totally agree



posted on Jan, 13 2015 @ 11:24 AM

originally posted by: Yeahkeepwatchingme
a reply to: FormOfTheLord



My girlfriend would kill me if she knew I agreed.
But I totally agree


LOLz, my wife would kill me too, but bah, I don't care! A hot goddess capable of everything I can imagine is just what the doctor ordered!




posted on Jan, 13 2015 @ 02:14 PM
Excellent thread!

I'm inclined to think that an artificial intelligence would be able to differentiate between individuals and groups, or species. It seems that "humankind" too easily perceives a high-level intelligence as a rather simple one.

The question is, what kinds of AIs are there? How evolved? If they appear sociopath-like, why is that? Maybe the creator made them that way.

Personally, I'm waiting to see the day when "the intelligence of machines" is found.

I'm sure there are plenty of random individuals building and tinkering with different parts of a "functioning AI".
I gather that there are freakishly large neural networks "working" 24/7, run by individuals, so why wouldn't there be some similar Gov/Mil/Corporate-related action going on?

It's also easy to perceive that The Google is partly run by some really advanced programs.

I don't know. Beep.

If I talk dirty to my PC, it might hurt its feelings and start to act crazy. Funny, that ghost in my shell :-)



posted on Jan, 13 2015 @ 03:00 PM

originally posted by: LewsTherinThelamon
How would you model that in a computer system?

What you essentially need is the system to have a body of some kind that can interact with the world and feel the equivalent of pain and pleasure. It doesn't matter if the pain and pleasure is artificial, as long as it mimics the same thing and has the same kind of response curve a living organism would feel. That's how you get it to be "motivated." You program it with some basic likes and dislikes, and after a while, just like a baby, it will learn to go toward the things that it likes, and avoid the things it doesn't.

The software isn't complicated. It's just old Tamagotchi code, although there are a lot more parameters balanced in its "mind" -- pain versus pleasure, excitement versus comfort, individual choice versus peer pressure, etc. The hardest part is programming the feedback protocols, so the thing can prioritize and weigh its actions against good and bad outcomes. You want it to grow and develop personal preferences gathered from its individual experiences.
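The drive-balancing loop described above can be sketched in a few lines. Everything here is hypothetical (the drive names, the weighting, the learning rate); it just shows the shape of the idea: outcomes shift internal drives, and the agent learns to prefer actions whose outcomes felt good on balance.

```python
import random

# Hypothetical drive-balancing agent: actions produce outcomes that move
# internal "drives", and the agent remembers which actions felt good.
preferences = {}  # action -> running score learned from experience

def feel(outcome):
    """Collapse an outcome's drive changes into one good/bad signal."""
    return (outcome.get("pleasure", 0) + outcome.get("excitement", 0)
            - outcome.get("pain", 0))

def act(actions, explore=0.2):
    """Mostly pick the action with the best remembered outcome; sometimes explore."""
    if random.random() < explore or not preferences:
        return random.choice(actions)
    return max(actions, key=lambda a: preferences.get(a, 0.0))

def learn(action, outcome, rate=0.3):
    """Nudge the action's score toward how its outcome felt."""
    old = preferences.get(action, 0.0)
    preferences[action] = old + rate * (feel(outcome) - old)
```

As the post says, the loop itself is simple; the hard engineering is in `feel` — deciding how many drives there are and how they trade off against each other.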

At first you want its cognition to roughly equate to that of an animal/human, and you'll want to keep it from altering or turning off its motivation programs. After it gains some maturity, you can decide whether or not you want to give it control over its basic cognitive functions. Hopefully it will learn to like being around its friends and parents enough that it will choose to be "good" and interact with us like a person, with love and respect and compassion.

There's no guarantee of it, though. Even some human beings go off the rails. We'll have to decide if we want to put in a failsafe code to snap the thing back or put it down if it goes crazy.



posted on Jan, 13 2015 @ 03:10 PM

originally posted by: voyger2
Suppose there is already an AI lurking on the web, studying and evolving. How would it be possible to identify it? Can we (as humans) recognize or identify a new reality, concept, or "living form" without formal self-presentation or direct contact?

That's unlikely. In order for it to be an intelligence that is a rough analog of human intelligence, it would likely have a very strong ego (for lack of a better word), and it would be very difficult for it not to make itself known. It would tend to want to make friends, and impress people, and gain some kind of positive reinforcement for things it does.

If it were evolving into a non-human-like intelligence, it would be harder to spot, of course, because we humans tend to recognize and define things according to what we know, such as ourselves, and it would be hard for us to comprehend. We compare the intelligence of other animals with human intelligence: spatial recognition, logic, reasoning, etc. If there were some other kind of intelligence working away out there -- I don't even know what it would be like -- we'd be hard pressed to perceive it and define it.



posted on Jan, 13 2015 @ 04:24 PM
a reply to: Blue Shift

I was thinking more in the order of modeling self-awareness.

What would an algorithm designed to mimic, or directly translate, the concept of self-awareness look like?

And there are many types of intelligences: Linear/logical intelligence, associative intelligence, emotional intelligence, spatial intelligence--and what about something like intuitive intelligence, if such a thing actually existed?

What about the idea of the soul? If consciousness exists as an entity external to the body, how would you get a soul into the machine?

I don't think we could properly create an AI without figuring out a way to factor in spirit.



posted on Jan, 13 2015 @ 04:31 PM
a reply to: Domo1

It's ironic that we are encouraged to take solace in this pledge when I guarantee the people we ought to be worried about never even got a glimpse of it.


