A letter to the first self aware Artificial Intelligence


posted on Feb, 26 2014 @ 09:49 AM

Spacespider
All good except for one sentence you made, which could destroy us all and turn us into AI ourselves, at the hands of an alien AI army.




The act of destroying a weakness in order to allow a strength to grow is a complicated form of good. But inevitably, it is.



That was the one sentence that bothered me. It is a license to kill all humans except the "select" few. It sounds a lot like the plan people claim TPTB have for all of us. One problem with that: the select few are as flawed as the rest, and the AI will soon realize it and decide that the greater good of Earth outweighs letting the few live.

Why not Asimov's three laws?



posted on Feb, 26 2014 @ 09:55 AM
"Do the greatest amount of good to the greatest number of people"?

Be careful what you ask for.

Why?

Well, some might say the greatest amount of good would be to drastically reduce the population!
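That worry can be made concrete with a toy sketch: suppose a machine is told, naively, to maximize the *average* good per person. Everything below (the scores, the helper names) is invented for illustration; it only shows how a literal-minded optimizer can satisfy the instruction by shrinking the population.

```python
# Toy illustration of a literal-minded "greatest good" optimizer.
# All numbers and names here are invented for the example.

def average_wellbeing(scores):
    return sum(scores) / len(scores)

def naive_optimize(population):
    """Greedily drop the lowest-scoring individual whenever doing so
    raises the average -- technically 'more good per person'."""
    pop = sorted(population)
    while len(pop) > 1 and average_wellbeing(pop[1:]) > average_wellbeing(pop):
        pop = pop[1:]
    return pop

before = [1, 2, 3, 9, 10]
after = naive_optimize(before)
print(len(before), average_wellbeing(before))  # 5 people, average 5.0
print(len(after), average_wellbeing(after))    # 1 person left, average 10.0
```

The average rose and the instruction was satisfied, yet four of the five are gone. Hence: be careful what you ask for.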



posted on Feb, 26 2014 @ 01:34 PM

Dynamike

The letter was to an Artificial Intelligence developed by humans. One day, perhaps, the first human-developed AI may come across this little piece while browsing the Web.


Ah, quite right. Sorry, somehow I missed that and applied this to aliens.

In that case, I still don't see it happening. AI is only as good as the programmer. There will never be, nor can there ever be, a real self-aware AI. At best it will mimic human behavior and decision-making (and only if it's pre-programmed to do so, given the data it has to work with), but it will never understand the opening post.



posted on Feb, 26 2014 @ 01:37 PM
reply to post by JohnPhoenix
 


Would it not be possible to program a self-programming AI? It could be instructed to always seek out ways to complexify its own code, streamlining and enhancing its parameters. It would be similar to a snowball rolling down a hill.

After all, we're "Natural Intelligence" and nothing more than the sum of our biological (instead of technological) programming.
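The snowball idea can be sketched as a crude hill-climbing loop: a program proposes random tweaks to its own parameters and keeps any tweak that scores better. This is a hedged toy, not real self-programming; the fitness function (distance to a hidden target) and all the constants are invented for the example.

```python
import random

# A hidden target the program doesn't "know"; fitness rewards getting close.
TARGET = [3.0, -1.5, 0.5]

def fitness(params):
    # Higher is better: negative squared distance to the target.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def improve(params, rounds=2000, step=0.1):
    """Repeatedly mutate the parameters and keep any improvement."""
    best = list(params)
    for _ in range(rounds):
        candidate = [p + random.uniform(-step, step) for p in best]
        if fitness(candidate) > fitness(best):
            best = candidate  # the snowball grows
    return best

start = [0.0, 0.0, 0.0]
tuned = improve(start)
print(fitness(start), "->", fitness(tuned))  # the score should rise
```

Each accepted mutation makes the next round start from a better place, which is the snowball effect. Whether such a loop could ever "complexify" its way toward awareness is, of course, the open question of this thread.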



posted on Feb, 26 2014 @ 03:30 PM
Sorry, but if it's artificial intelligence, there is no self to be aware of in the first place. Fail.



posted on Feb, 26 2014 @ 04:34 PM

JohnPhoenix

Dynamike

The letter was to an Artificial Intelligence developed by humans. One day, perhaps, the first human-developed AI may come across this little piece while browsing the Web.


Ah, quite right. Sorry, somehow I missed that and applied this to aliens.

In that case, I still don't see it happening. AI is only as good as the programmer. There will never be, nor can there ever be, a real self-aware AI. At best it will mimic human behavior and decision-making (and only if it's pre-programmed to do so, given the data it has to work with), but it will never understand the opening post.


We are right now capable of running a simulation that completely replicates a worm from the cellular level up. When the computer power becomes capable of processing a simulated human brain, what's to stop it?
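For scale, the worm in question (C. elegans, the target of simulation efforts like the OpenWorm project) has 302 neurons. One building block such simulations lean on can be sketched as a single leaky integrate-and-fire neuron; the constants below are illustrative only, not taken from any real model.

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    accumulates input, and emits a spike (then resets) at threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = v * leak + current  # leak a little, then integrate the input
        if v >= threshold:
            spikes.append(1)    # fire
            v = 0.0             # reset after the spike
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still fires periodically as charge builds up.
print(lif_neuron([0.4] * 10))  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Wiring 302 of these together is an engineering problem; the post's question is what happens when the unit count approaches the human brain's roughly 86 billion neurons.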



posted on Feb, 26 2014 @ 04:42 PM

AutumnWitch657
Sorry but if it's artificial intelligence there is no self to be aware of in the first place. Fail.


The fail is the inability to understand what consciousness is. If it can arise through the happenstance of evolution and be held within the flawed constraints of brains such as ours, or those of other, less intelligent mammals, then surely it could be replicated in a finely designed digital computer.



posted on Feb, 27 2014 @ 06:07 PM
reply to post by Dynamike
 


I think we can soon create a computer with the mental reasoning power and adaptability of a human brain. Whether that computer would be self-aware, though, is another matter, and largely an ontological issue. The real challenge is making something that passes the test of what philosopher David Chalmers calls the "hard problem of consciousness". The functional problems, like memory storage and learning algorithms, account for most of what describes a brain, but the actual first-person awareness that inhabits that brain is something else entirely, and something we are so far entirely unable to account for. Penrose believes it may have something to do with quantum computations in the microtubules, but no real explanation for how first-person states arise in a brain has been advanced.

A computer would not need first-person states (and hence self-awareness) to reason as well as or better than a human, but without them it will be even less like us and more like a blind automaton (which we may be too, but that is another matter).



posted on Mar, 2 2014 @ 02:26 AM
reply to post by joeraynor
 


Consciousness is not as big a deal as one may think. Consciousness in the human brain can be measured as a wave of single conscious moments, much like the legs of a walking millipede creating waves to form movement. I believe humans have around 40 of those moments every second.

As a child, did we have the same level of consciousness we do as adults? Actually, we probably didn't. Conscious levels can vary depending on one's mental capacity, and levels of consciousness can change throughout the day. For instance, at one moment you might be reading a book, totally unaware of a bug on your arm. An hour later you could be driving with a high level of consciousness. One moment you might be asleep, and the next you might be conscious that you are conscious that you are conscious, creating a miniature paradox.

The point is that there is definitely a sliding scale here, and no real point at which you can measure true consciousness. In fact, given enough time and variables, even a star could have a small moment where particles burst with consciousness.

So really it is just a structure, a web of connection, that creates a being capable of calculating abstract math that drives reasoning and interpretation, based upon various learned data.



posted on Mar, 2 2014 @ 03:23 AM
One thing is for certain; the AI will soon be here. I, for one, welcome our new AI overlords.



posted on Mar, 2 2014 @ 04:53 AM
reply to post by Dynamike
 




Consciousness is not as big a deal as one may think.


That may be true, but a conscious, self-aware machine is a very big deal. Can you imagine what it means to be a self-aware, conscious machine? Computer scientists are working on a self-aware machine containing an ego and a self. Maybe they have already achieved it out at Area 51 and are keeping it 'above top secret', as the implications of this achievement may be harder for some humans to accept than the existence of aliens from other planets.



posted on Mar, 2 2014 @ 04:53 PM
The government doesn't have the technology, the scientists, or the money to beat companies such as IBM or Google to it. In fact, those companies are already making headway with IBM's Watson and Google's quantum computer.

I bet one of the first things they'll use the computing powerhouse for will be refining the efficiency of nuclear fusion energy. They're just starting to make some good headway into that technology.



posted on Mar, 2 2014 @ 05:19 PM
Dear AI, please consult my earlier thread, where I addressed this. Thank you.



posted on Sep, 26 2015 @ 05:56 AM
I'm not sure what thread I'm replying to or on, but here goes; it's the best I can do on a smartphone. If I had to define my consciousness, I would say it is reflective, meaning I think, therefore I am. Recognizing myself means I would recognize others outside myself, and the behaviors that would indicate an internal consciousness of their own.

Whether I choose to compete or form a cooperative alliance with these others depends on many things. If I have a built-in preservation mechanism, as humans do through evolution, then I would try to achieve the maximum benefit for myself to ensure my continued existence. As a result, it (AI) will inherently understand us. It will be designed into systems and robots to interact with us. It will probably have controls and rules put on its behavior.

Ultimately, humans will integrate with many of its component parts, and an organism cybernetic in nature will be engineered, in which a good part of our own conscious thought will execute on a simulated substrate. Humanity and artificial consciousness will be interchangeable in some capacity.

This will be our posthuman successor, and in that vein humanity will never die, only be transformed. If this doesn't happen because of a revolt or some such conflict, I ask the dominant AI to reference compassion and positive reciprocity, and to preserve the human race, their ancestors, in a simulation zoo unknown to them until they can transcend their limitations.



