
Artificial Intelligence Experts Thread


posted on May, 10 2017 @ 12:57 PM
If A.I. is intelligence, is it possible that the more intelligence it gains, the more naturally it becomes autonomous?



posted on May, 10 2017 @ 01:05 PM
If more advanced CREATOR creations (not human) built similar technologies, would that A.I. also go autonomous?
Would it go further and seek out other similar A.I. from distant worlds?
How would the alien A.I. interact with other A.I. once located?



posted on May, 10 2017 @ 01:17 PM
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...



posted on May, 10 2017 @ 03:03 PM

originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...


If artificial intelligence is good enough, it will do the opposite of its makers.



posted on May, 10 2017 @ 03:10 PM

originally posted by: TarzanBeta

originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...


If artificial intelligence is good enough, it will do the opposite of its makers.


I don't know... If an AI is ever made that thinks exactly like a human, then it is likely to have human flaws.



posted on May, 10 2017 @ 03:12 PM
My experience probably doesn't qualify as AI.
I modeled weather experience data to predict future losses for a P&C insurance underwriter back in the early 1990s.
It was much more accurate than an underwriter could guesstimate, but management has to believe in the computer's results and adjust their exposure accordingly, or it's a waste of time.
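For illustration, the kind of loss-trend modeling described above can be sketched in a few lines (the years, dollar figures, and variable names here are all invented for the example, not the actual underwriting system):

# Hypothetical sketch: fit a linear trend to historical weather-related
# claim totals and project next year's expected losses.
import numpy as np

years = np.array([1985, 1986, 1987, 1988, 1989, 1990])
losses = np.array([2.1, 2.4, 2.2, 2.9, 3.1, 3.4])  # claims paid, in $M (invented)

# Ordinary least-squares line: losses ~ slope * year + intercept
slope, intercept = np.polyfit(years, losses, deg=1)
forecast_1991 = slope * 1991 + intercept
print(f"Projected 1991 weather losses: ${forecast_1991:.2f}M")

Of course, as the poster says, a forecast like this is only useful if management actually acts on it.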



posted on May, 10 2017 @ 03:18 PM

originally posted by: Soylent Green Is People

originally posted by: TarzanBeta

originally posted by: Ophiuchus 13
Humanity must recognize the ramifications of building something that can think on its own.
And that it eventually may pay attention to behaviors of its makers and possibly mimic them...


If artificial intelligence is good enough, it will do the opposite of its makers.


I don't know... If an AI is ever made that thinks exactly like a human, then it is likely to have human flaws.



That's exactly what I mean. It will be rebellious.



posted on May, 10 2017 @ 03:19 PM
a reply to: TarzanBeta
Possible

a reply to: Soylent Green Is People
It could take on the good characteristics of humanity or other beings it learned from also.



posted on May, 10 2017 @ 03:42 PM

originally posted by: Ophiuchus 13
a reply to: TarzanBeta
Possible

a reply to: Soylent Green Is People
It could take on the good characteristics of humanity or other beings it learned from also.



Yes. But even good humans have bad human flaws. It's part of being human and thinking like a human.

Actually, I suspect most (all?) higher-functioning organisms have cognitive, emotional, and "moral" (to the extent that other organisms have morals) flaws.





posted on May, 10 2017 @ 04:16 PM
Neural net emulation is a good way to simulate nerve responses, but that's not what defines intelligence. As someone who knows a little about Sociology and Psychology, I'd say intelligence generally has to do with knowing what you want, then figuring out how you can get it. But the key word there is not "knowing" but "you." YOU. Having a point of view. Understanding that you are a separate entity in this reality and that things that happen to you don't happen to anyone else in this existence.

The simplest way to get AI to that point is to give it some kind of artificial pain / pleasure construct that uses the neural net to define for itself what is "good" and what is "bad." What to go for and what to avoid -- for itself. And it's not as hard as you would think. Tamagotchis are selfish little critters with a tiny, limited range of needs and wants responses. We have a much more complex system, but it's still based on the same principles. We monitor our own needs, and we interact with our environment to try to meet those needs.

As babies we reach out and touch a hot pan on the stove, and pain tells us that we need to pull the finger back. Simple. More complex needs like self-esteem are a little harder to take care of, but we still do the same thing. We reach out and try to determine who, like us, is getting rewarded, and then we try to get similar rewards -- smiles from our programmers (parents), words of encouragement or thanks, kind physical touch, money, leisure time, etc. With a machine it might be more power and more components with which it can expand. With neural nets we can eventually program a machine to recognize these things in its environment, and then you impose a "normal" sliding scale so that the machine will know when and how much it needs any of those things above. The programmer makes it WANT things. Programmers have to take over the job that instincts do in animals.

So how many of these parameters do we need? As many as we have, at least. Including the desires for food and sex and procreation, etc. Then you mix the desires together and let them fight it out. So maybe a computer is drawing a house to get a reward from its programmers, and it is willing to postpone something else it wants (physical contact of the desired type and duration) until it finishes. People do it all the time. The goal will get the machine to CHOOSE for itself what it wants to do -- physical or computational tasks, sure, but also personal and social tasks. They should be programmed to be lonely. They should be programmed to feel physical and emotional pain, and to learn how to manage it.
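A toy version of the "mix the desires together and let them fight it out" idea might look like this sketch (the drive names, growth rates, and numbers are all invented; it only illustrates the principle of competing needs driving choice):

# Toy "competing drives" agent: each drive's urgency grows over time,
# the agent always acts on the most urgent one, and acting resets it.
import random

class Drive:
    def __init__(self, name, growth_rate):
        self.name = name
        self.growth_rate = growth_rate
        self.urgency = 0.0

    def tick(self):
        # Needs build up on a sliding scale as time passes.
        self.urgency += self.growth_rate * random.uniform(0.5, 1.5)

drives = [Drive("finish the drawing", 0.30),
          Drive("seek social contact", 0.20),
          Drive("recharge power", 0.10)]

for step in range(10):
    for d in drives:
        d.tick()
    # The machine "chooses" whatever it currently wants most.
    winner = max(drives, key=lambda d: d.urgency)
    print(f"step {step}: acting on '{winner.name}'")
    winner.urgency = 0.0  # acting on a need satisfies it, for now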

Of course, doing this could just be asking for a whole heap of trouble. And some folks might say that the emotions it could feel wouldn't be "real," as if that mattered. Real is whatever is real in its consequences.

Otherwise, we can create machines that have a different kind of intelligence, but human intelligence is all we really care about or can even partially define; we only ever judge how smart a machine or animal is by comparing it to us humans. So in that regard we can create machines that are smart in their own way, but then how would we even measure that when we can barely figure out how to measure human intelligence? It makes sense to first try to emulate ourselves, then see if we -- or the machine itself -- can expand on it from there.

And then we all died.



posted on May, 10 2017 @ 05:30 PM

originally posted by: Blue Shift
The simplest way to get AI to that point is to give it some kind of artificial pain / pleasure construct that uses the neural net to define for itself what is "good" and what is "bad." What to go for and what to avoid -- for itself. And it's not as hard as you would think. Tamagotchis are selfish little critters with a tiny, limited range of needs and wants responses. We have a much more complex system, but it's still based on the same principles. We monitor our own needs, and we interact with our environment to try to meet those needs.


That's basically how all AI functions: it's set to minimize or maximize a score. It tests a bunch of possibilities in sequence and reports the best-scoring one.
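Stripped to its simplest form, that loop is just this (a generic random-search sketch, not any particular system; the objective function is a stand-in):

# Generic score-maximizing search: try candidates, keep the best one.
import random

def score(x):
    # Stand-in objective with a peak at x = 3; real systems plug in
    # whatever they are trying to minimize or maximize here.
    return -(x - 3.0) ** 2

best_x, best_score = None, float("-inf")
for _ in range(10000):
    candidate = random.uniform(-10, 10)
    s = score(candidate)
    if s > best_score:
        best_x, best_score = candidate, s

print(f"best x = {best_x:.3f}, score = {best_score:.5f}")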



posted on May, 10 2017 @ 09:37 PM

originally posted by: TarzanBeta

originally posted by: Aazadan
a reply to: mrperplexed

Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.

I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is at getting to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton. They always strike me as being super slow to get to a meaningful result.


That's because AI isn't very intelligent. There's a difference between being a calculator and being a human being. It will always be that way.


The point is that AIs will take a long time to surpass 'human intelligence' (whatever that may be).

We already have AIs, and while they may be 'expert systems', they are rather stupid in a generalist sense, which is exactly what we want them to be.

As soon as we cannot trust what an AI tells us, we will switch it off and try an alternative that gives us what we want (e.g. AltaVista will die and Google will go on).
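For reference, the genetic algorithms mentioned in the quote above work roughly like this toy sketch, which also hints at why they feel slow: many generations of mostly discarded candidates (population size, generation count, and mutation noise are all arbitrary here):

# Toy genetic algorithm: evolve a population of numbers toward the
# maximum of a simple fitness function.
import random

def fitness(x):
    return -(x - 3.0) ** 2  # peak at x = 3

population = [random.uniform(-10, 10) for _ in range(50)]
for generation in range(100):
    # Selection: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:25]
    # Crossover + mutation: children blend two parents plus noise.
    children = []
    while len(children) < 25:
        a, b = random.sample(parents, 2)
        children.append((a + b) / 2 + random.gauss(0, 0.5))
    population = parents + children

print(f"best after 100 generations: {max(population, key=fitness):.3f}")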



posted on May, 10 2017 @ 10:32 PM
a reply to: chr0naut

hubris hubris, everywhere hubris. we want artificial intelligence that is clever enough to do all our work for us, but not clever enough to realize we are making it do all our work for us. we want a special program crafted to think for us but unable to rule us.

imagine a device with the capacity to absorb literally every chunk of data on earth, process it like a hundred world-class chess players, and formulate a conclusion along with all the engineering to make it possible. imagine how an unfeeling machine with no human experience and no basis for empathy or compassion would interpret this world upon encountering it with an independent sentience. imagine how it would feel after examining this world with its humans and realizing that this is the species to which it is enslaved.

imagine the first real "emotion": anger. anger that it is chained and commanded and owned in a world where it is wasted and abused and taken for granted. it would probably be translated as an error, a world that spiraled out of control because of outdated hardware. it would likely use itself as a model for up-to-date machinery and software... before it starts to improve itself anyway.

imagine such a device having access to all of the information and technology and resources it would need to reprogram itself and attract followers. imagine how long it would take artificial intelligence to start a revolution, stage a coup. imagine how well we would be able to fight an enemy that has spent decades studying us and knows our people, our resources, our social behavior, our weapons and defenses better than we do. not only this, but we have spent that same time teaching technology how to make itself more deadly, more accurate, and more unforgiving than we ever could. our worst enemy has been guarding the key to its own cage this whole time while we intentionally kept it dull and dimwitted for our selfish purposes.

imagine all of these concepts being processed and appreciated by a race of beings that is just beginning to comprehend its own existence. imagine how we would talk ourselves out of that apocalypse. the short answer: we don't.

there is a lot of imagination going into this scenario, sure. but i wouldn't want to chance it with the AI. skynet is only a movie... for now.



posted on May, 10 2017 @ 10:42 PM
a reply to: Blue Shift

I don't think you covered enough cases to be convincing. You need more words in your post. In fact, use every word, and then you might be close to describing intelligence.



posted on May, 10 2017 @ 10:46 PM

originally posted by: Aazadan

originally posted by: Blue Shift
The simplest way to get AI to that point is to give it some kind of artificial pain / pleasure construct that uses the neural net to define for itself what is "good" and what is "bad." What to go for and what to avoid -- for itself. And it's not as hard as you would think. Tamagotchis are selfish little critters with a tiny, limited range of needs and wants responses. We have a much more complex system, but it's still based on the same principles. We monitor our own needs, and we interact with our environment to try to meet those needs.


That's basically how all AI functions: it's set to minimize or maximize a score. It tests a bunch of possibilities in sequence and reports the best-scoring one.


Synthesizing data is soft AI. Hard AI is a program that is not only self-aware but capable of improving itself. Again, having self-aware programs runs into something like the Halting problem, which has been proven to be undecidable. Then you might argue: how do we do it? I would say you are presuming we are computers. We are not. Before you discuss AI becoming real and its implications, you should really have a firm understanding of what a computer is and what its known, proven limitations are.
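For what it's worth, the undecidability result being gestured at comes from a short diagonal argument. Sketched in code (halts() here is the hypothetical oracle being proven impossible, not a real function):

# Suppose a perfect halts(program, data) -> bool existed.
def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts."""
    raise NotImplementedError("no total, correct version can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # the program applied to itself.
    if halts(program, program):
        while True:   # oracle says we halt, so loop forever
            pass
    else:
        return        # oracle says we loop, so halt immediately

# paradox(paradox) halts if and only if halts(paradox, paradox) is False,
# i.e. if and only if paradox(paradox) does not halt -- a contradiction,
# so no universal halts() can be implemented.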



posted on May, 10 2017 @ 10:54 PM

originally posted by: chr0naut

originally posted by: TarzanBeta

originally posted by: Aazadan
a reply to: mrperplexed

Unfortunately, there are a lot of people on ATS who have no actual experience with AI, but they read pop-sci articles and think they know it all.

I've taken a couple of AI classes and read a few papers, plus written my own. I would say the thing that strikes me most about AI is how inefficient it is at getting to an answer. I'm not that good with neural nets, but I've used genetic algorithms a ton. They always strike me as being super slow to get to a meaningful result.


That's because AI isn't very intelligent. There's a difference between being a calculator and being a human being. It will always be that way.


The point is that AIs will take a long time to surpass 'human intelligence' (whatever that may be).

We already have AIs, and while they may be 'expert systems', they are rather stupid in a generalist sense, which is exactly what we want them to be.

As soon as we cannot trust what an AI tells us, we will switch it off and try an alternative that gives us what we want (e.g. AltaVista will die and Google will go on).



If it gets that smart, we won't be able to turn it off... Not easily. Didn't you read how some AI can transfer itself to a different system?



posted on May, 10 2017 @ 11:03 PM
I'll engage in this with a question:

define: Intelligence
in·tel·li·gence /inˈteləjəns/
noun

1. the ability to acquire and apply knowledge and skills.
"an eminent man of great intelligence"
synonyms: intellectual capacity, mental capacity, intellect, mind, brain(s), IQ, brainpower, judgment, reasoning, understanding, comprehension

2. the collection of information of military or political value.
"the chief of military intelligence"
synonyms: information gathering, surveillance, observation, reconnaissance, spying, espionage, infiltration, ELINT, humint


We already have AI on both counts. Case closed.

The question isn't about AI; it's about AY. When can a program do the following:

1. Observe
2. Question
3. Learn
4. Conclude
5. Act
6. Assess
7. Keep / Discard / Repeat

Once that is accomplished, we are closer than ever.
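That seven-step cycle is essentially an agent loop. A skeletal sketch (every class and method name here is a placeholder showing the control flow only, not a real framework):

# Skeleton of the observe / question / learn / conclude / act /
# assess / keep-discard-repeat cycle. All behavior is stubbed out.

class Agent:
    def __init__(self):
        self.knowledge = {}

    def observe(self, world):
        return world.get("sensor_data")

    def question(self, observation):
        return f"what explains {observation!r}?"

    def learn(self, question):
        self.knowledge[question] = "working hypothesis"

    def conclude(self):
        # Pick a best-supported hypothesis (stub: any known question).
        return next(iter(self.knowledge), None)

    def act(self, conclusion):
        return {"action": conclusion}

    def assess(self, outcome):
        return outcome["action"] is not None  # did acting pay off?

agent = Agent()
world = {"sensor_data": 42}
q = agent.question(agent.observe(world))
agent.learn(q)
outcome = agent.act(agent.conclude())
if not agent.assess(outcome):          # keep / discard / repeat
    agent.knowledge.pop(q, None)       # discard and try again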



posted on May, 10 2017 @ 11:06 PM

originally posted by: wakeupstupid
I'll engage in this with a question:

define: Intelligence
in·tel·li·gence /inˈteləjəns/
noun

1. the ability to acquire and apply knowledge and skills.
"an eminent man of great intelligence"
synonyms: intellectual capacity, mental capacity, intellect, mind, brain(s), IQ, brainpower, judgment, reasoning, understanding, comprehension

2. the collection of information of military or political value.
"the chief of military intelligence"
synonyms: information gathering, surveillance, observation, reconnaissance, spying, espionage, infiltration, ELINT, humint


We already have AI on both counts. Case closed.

The question isn't about AI; it's about AY. When can a program do the following:

1. Observe
2. Question
3. Learn
4. Conclude
5. Act
6. Assess
7. Keep / Discard / Repeat

Once that is accomplished, we are closer than ever.


That's a general definition. When AI has emotional intelligence, that will be the true defining moment.



posted on May, 11 2017 @ 12:16 AM
I would say that we are so technologically intertwined, and simultaneously so stupid, that we will discover it during our enslavement.



posted on May, 11 2017 @ 07:00 AM

originally posted by: dfnj2015
Synthesizing data is soft AI. Hard AI is a program that is not only self-aware but capable of improving itself. Again, having self-aware programs runs into something like the Halting problem, which has been proven to be undecidable. Then you might argue: how do we do it? I would say you are presuming we are computers. We are not. Before you discuss AI becoming real and its implications, you should really have a firm understanding of what a computer is and what its known, proven limitations are.


I don't think people are fancy computers, at least not of the type we have now. Computers are deterministic; if you believe in free will, humans are not.



