
Computers, Artificial Intelligence {AI} and The Evolution of the Future


posted on Jan, 30 2016 @ 09:54 PM
a reply to: TerryDon79

Emotions are happy, sad, depressed, joyful, miserable, etc., right? "What's your status, machine?" "Broke, malfunctioning, my diagnostics are not reporting correctly." Right?

What's the difference between that and human emotions?

It also depends upon which realm you are creating the artificial intelligence for, too. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine to interact with a human world.

Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.



posted on Jan, 30 2016 @ 10:02 PM

originally posted by: gpols
a reply to: TerryDon79

Emotions are happy, sad, depressed, joyful, miserable, etc., right? "What's your status, machine?" "Broke, malfunctioning, my diagnostics are not reporting correctly." Right?

What's the difference between that and human emotions?


That would be an automated response. I have voicemail. It sounds like me. I may sound happy on it. Does that make my voicemail machine happy? No.


It also depends upon which realm you are creating the artificial intelligence for, too. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine to interact with a human world.


That would be simulated emotions. Look at my point about my voicemail.



Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.


Emotions wouldn't be needed. Protocols would. Example:

P1: If weapons, kill bad guys unless P2.

P2: Don't kill kids unless P3.

P3: Weapons aimed at you = bad guys. See P1.

That wouldn't make the machine happy/sad/angry etc. It would be following protocols.
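
A minimal sketch of what such a protocol chain might look like as code (purely illustrative; the Target class, its fields, and the weapons_free helper are invented for this example, not any real system):

from dataclasses import dataclass

@dataclass
class Target:
    # Hypothetical attributes, assumed only for the sake of the example.
    is_child: bool
    is_hostile: bool
    weapon_aimed_at_me: bool

def weapons_free(t: Target) -> bool:
    # P3: weapons aimed at you = bad guy, so see P1 and engage.
    if t.weapon_aimed_at_me:
        return True
    # P2: don't kill kids (unless P3 already applied above).
    if t.is_child:
        return False
    # P1: engage only identified bad guys.
    return t.is_hostile

print(weapons_free(Target(is_child=False, is_hostile=True, weapon_aimed_at_me=False)))  # True, per P1

Every branch here is a fixed rule; nothing in it models being happy, sad, or angry.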



posted on Jan, 30 2016 @ 10:09 PM
a reply to: TerryDon79

But, just like the intelligence, the emotions would be artificial. It's hard for me to believe that machines wouldn't learn emotions and use them to their advantage when interacting with humans. That's all I'm saying. Whether they are simulated or really felt is not the point, the point is emotions would very well be advantageous to artificial intelligence.



posted on Jan, 30 2016 @ 10:13 PM
Here's a question then.

If you gave a robot emotions, what would you do with these examples?

1, Robot is now happy when it kills. It turns into a serial killer.

2, Robot is sad, so it won't kill anymore.

3, Robot is angry because you made it kill, so it decides to kill you.

And that's just 3 off the top of my head.

Protocols don't use emotions, so they can be modified each time they're needed. Emotions just wouldn't work.

I also see the above as why robots WOULDN'T learn emotions.



posted on Jan, 30 2016 @ 10:21 PM
a reply to: TerryDon79

If you are talking about artificial intelligence in the realm of war, then yes, why would you want emotions? But why would you want artificially intelligent weapons of war at all? They could very well learn to a point where they saw their creators as a threat and killed all of us.

But there are definitely scenarios where emotions in AI would be beneficial, as I mentioned before; therapy robots, for one.



posted on Jan, 30 2016 @ 10:24 PM

originally posted by: gpols
a reply to: TerryDon79

But there are definitely scenarios where emotions in AI would be beneficial, as I mentioned before; therapy robots, for one.



I disagree.

Simulated emotional responses and mannerisms, maybe.

Full on emotions? No need with the simulated side of it.



posted on Jan, 30 2016 @ 10:25 PM
a reply to: TerryDon79

But why wouldn't an AI machine develop real emotions when the people it's helping start saying things like "You're not really sad, you don't know how I really feel" and things like that?



posted on Jan, 30 2016 @ 10:28 PM

originally posted by: gpols
a reply to: TerryDon79

But why wouldn't an AI machine develop real emotions when the people it's helping start saying things like "You're not really sad, you don't know how I really feel" and things like that?


For the same reason my voicemail doesn't.

It's simulated. The need to learn would have had to have been purposely programmed into it. Therefore it would only want to learn emotions if we wanted it to.



posted on Jan, 30 2016 @ 10:29 PM

originally posted by: gpols
a reply to: TerryDon79

Emotions are happy, sad, depressed, joyful, miserable, etc., right? "What's your status, machine?" "Broke, malfunctioning, my diagnostics are not reporting correctly." Right?

What's the difference between that and human emotions?

It also depends upon which realm you are creating the artificial intelligence for, too. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine to interact with a human world.

Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.


You see, here is the problem - like consciousness - emotions are not fully definable or understandable - Man can still not define exactly what consciousness is, same with emotions.

We know how a super chess playing machine will function - same with a military programmed killer drone and robots of the future - that is relatively easy to understand - And programs are already being developed for that. But how would you program anything like Human consciousness and/or emotions into a machine when you do not know exactly what they are?

I've gotten into other debates about conscious machines of the future, and it probably wouldn't surprise you how many people, in spite of the advancing science, say it will never happen - but they, you, all of us, are not sure and cannot define in an absolute sense exactly what consciousness is.

The nightmare scenario is this - A super computer possessing all of the cognitive functions of a Human except true consciousness [whatever that is], and of course lacking Human feelings and emotions, could take control of Man by its programmed gaming capacity and Man would lose - His very emotions becoming a hindrance to effective action - The machine will think faster and will not hesitate to act - pure calculating intelligence unfettered by Human feelings and emotions.
And it will win.


Can they build a fail-safe into the advancing AI now before it is too late? Many computer scientists and people in business such as Bill Gates and Musk are already warning about the dangers.

"A lot of movies about artificial intelligence envision that AI's will be very intelligent but missing some key emotional qualities of humans and therefore turn out to be very dangerous."
-Ray Kurzweil


Kurzweil, though, is optimistic - He thinks there will be a happy merging of Man and machine and we will have a better world.
But I call myself a 'Sciencefictionalist', someone who projects future scenarios that may become, and in a sense are already becoming, the future - In this case the nightmare scenarios of science fiction cannot be ignored - In the sci-fi world Man usually triumphs in the end - Would you bet your life and your future on Man being able to control the Pandora's Box being opened with advancing AI?











posted on Jan, 30 2016 @ 10:30 PM
a reply to: TerryDon79

And it would be beneficial in terms of therapy robots, only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it was set out to do.



posted on Jan, 30 2016 @ 10:31 PM

originally posted by: gpols
a reply to: Ghost147

There would be constants. Such as with humans, our brains don't ever stop our heart, keep us from breathing, etc. There are certain functions that our bodies do to keep us alive.


I'm not quite sure how that dismisses the point I made.


originally posted by: gpols
a reply to: Ghost147
What would you program into a machine to keep it from destroying its kind? A machine programmed to kill would kill indiscriminately. If a machine's communication got damaged in a battle and it was unable to update with the rest of the cluster, what would keep the other machines from destroying the malfunctioning machine?


As TerryDon79 already mentioned, a "do not kill your own kind" line of code is not equivalent to emotion.


originally posted by: gpols
a reply to: TerryDon79
Emotions are happy, sad, depressed, joyful, miserable, etc., right? "What's your status, machine?" "Broke, malfunctioning, my diagnostics are not reporting correctly." Right?

What's the difference between that and human emotions?


None of the emotions you listed are even remotely similar to a computer's diagnostics report.

Emotion is any relatively brief conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure. The diagnostics report you gave as an example does not cause the machine any distress at a mental level.


originally posted by: gpols
a reply to: Ghost147
It also depends upon which realm you are creating the artificial intelligence for, too. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine to interact with a human world.


You're confusing "beneficial to the robot" with "beneficial to humans". A robot therapist would suffer from having emotions, but if it could portray emotions (and not actually feel them), that would be beneficial, but only to the subject, not to the robot.

So again. What benefit would a robot get from emotions?


originally posted by: gpols
a reply to: Ghost147
Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.


A computer doesn't need emotions to not kill everything; all it needs is a line of code that prevents it from killing everything.


originally posted by: gpols
a reply to: TerryDon79
the point is emotions would very well be advantageous to artificial intelligence.


You have yet to provide an example where emotions would be advantageous to the machine with AI, not the people the AI interacts with.


originally posted by: gpols
a reply to: TerryDon79

But why wouldn't an AI machine develop real emotions when the people it's helping start saying things like "You're not really sad, you don't know how I really feel" and things like that?


Because emotions prevent logical thinking. Emotions are a hindrance.



posted on Jan, 30 2016 @ 10:33 PM

originally posted by: gpols
a reply to: TerryDon79

And it would be beneficial in terms of therapy robots, only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it was set out to do.


But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn something we don't want it to.



posted on Jan, 30 2016 @ 10:38 PM

originally posted by: AlienView
But I call myself a 'Sciencefictionalist', someone who projects future scenarios that may become,


Is it just me, or does anyone else cringe when someone says "but I call myself a [made up word]"?

At least the rest of your post made sense



posted on Jan, 30 2016 @ 10:39 PM

originally posted by: Ghost147

originally posted by: AlienView
But I call myself a 'Sciencefictionalist', someone who projects future scenarios that may become,


Is it just me, or does anyone else cringe when someone says "but I call myself a [made up word]"?

At least the rest of your post made sense


Quick answer? Yes.

Long answer? Yeeeeeeees.



posted on Jan, 30 2016 @ 10:41 PM
a reply to: TerryDon79 & Ghost147

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I didn't ever watch Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?



posted on Jan, 30 2016 @ 10:44 PM

originally posted by: TerryDon79

originally posted by: gpols
a reply to: TerryDon79

And it would be beneficial in terms of therapy robots, only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it was set out to do.


But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn something we don't want it to.


But they can program it to program itself - like with IBM's Watson plugged into the internet and beating the best game players in the world on Jeopardy - A machine of the future programmed to learn and having access to the web will be able to........


Better still, what will it not be able to learn or do?


Controlling and/or eliminating biological life might just be stage one of whatever agenda its calculating [thinking] leads it to.



posted on Jan, 30 2016 @ 10:46 PM

originally posted by: gpols
a reply to: TerryDon79 & Ghost147

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I didn't ever watch Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?


How could it eventually teach itself anything if we didn't tell it to? If it's not in its programming, it can't do it. It's that simple.



posted on Jan, 30 2016 @ 10:48 PM
a reply to: TerryDon79

Because it's Artificial Intelligence. We don't stop teaching ourselves things, do we? Having a machine limited in what it learns is not AI; it is machine learning.



posted on Jan, 30 2016 @ 10:50 PM

originally posted by: AlienView

originally posted by: TerryDon79

originally posted by: gpols
a reply to: TerryDon79

And it would be beneficial in terms of therapy robots, only one I can think of off the top of my head. But eventually the robot would learn real emotion to more effectively do what it was set out to do.


But only if we told it to. That's my whole argument. If we don't program it to, or program it to have the ability to program itself, it can't learn something we don't want it to.


But they can program it to program itself - like with IBM's Watson plugged into the internet and beating the best game players in the world on Jeopardy - A machine of the future programmed to learn and having access to the web will be able to........


Better still, what will it not be able to learn or do?


Controlling and/or eliminating biological life might just be stage one of whatever agenda its calculating [thinking] leads it to.


See my above post about programming.

You and gpols seem fixated on the idea that an AI could program itself. It wouldn't be able to if the ability to learn wasn't already programmed into it.

There are robots that can learn your environment (the inside of your house) by using sensors. But they don't want to learn to do anything; they're programmed to do their job.

If an AI wanted to learn anything, then a, it would have to be programmed to want things outside of its program, or b, it would have to have been programmed to learn certain things.
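
A toy illustration of that constraint (the HouseRobot class, the skill names, and the ALLOWED_SKILLS whitelist are all invented for this sketch, not any real product): the agent can improve at anything on its allowed list, but a request to pick up something outside the list has nowhere to go in the program.

# Purely hypothetical sketch: an agent that can only learn what it was built to learn.
ALLOWED_SKILLS = {"map_rooms", "avoid_stairs", "find_charger"}

class HouseRobot:
    def __init__(self):
        # Skill -> proficiency score; only whitelisted skills ever exist here.
        self.skills = {name: 0.0 for name in ALLOWED_SKILLS}

    def practice(self, skill: str) -> bool:
        if skill not in ALLOWED_SKILLS:
            # Anything outside the whitelist is not representable, let alone learnable.
            return False
        self.skills[skill] += 0.1
        return True

robot = HouseRobot()
robot.practice("map_rooms")   # True: within its programming
robot.practice("emotions")    # False: nothing in the program to learn it with

Giving it a new ability means changing ALLOWED_SKILLS, i.e. the programmers deciding to, which is the whole argument.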



posted on Jan, 30 2016 @ 10:50 PM

originally posted by: TerryDon79

originally posted by: gpols
a reply to: TerryDon79 & Ghost147

So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I didn't ever watch Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.

Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?


How could it eventually teach itself anything if we didn't tell it to? If it's not in its programming, it can't do it. It's that simple.


If it really is "True AI", it should be able to learn and adapt to anything. However, it would be intelligent enough to realize the massive downfalls 'emotion' intrinsically has. It's basically the anti-logic, so I don't know why it would ever want to learn it in the first place.



