originally posted by: gpols
a reply to: TerryDon79
Emotions are happy, sad, depressed, joyful, miserable, etc., right? "What's your status, machine?" "Broken, malfunctioning, my diagnostics are not reporting correctly." Right?
What's the difference between that and human emotions?
It also depends on which realm you are creating the artificial intelligence for, too. Emotions would be beneficial to a therapy robot. Emotions would be beneficial for a machine to interact with a human world.
Mostly, though, emotions would be beneficial to stop the machine from killing everything in its path.
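The status-report analogy above can be made concrete. Here is a minimal hypothetical sketch in Python that maps raw machine diagnostics onto emotion-like labels; every name in it is invented for illustration, not any real robot API:

```python
# Hypothetical sketch: self-diagnostics translated into emotion-like
# status labels, as in the "What's your status, machine?" exchange above.
from dataclasses import dataclass

@dataclass
class Diagnostics:
    battery_level: float   # 0.0 (empty) to 1.0 (full)
    faults: int            # number of failing subsystems
    sensors_ok: bool

def emotional_status(d: Diagnostics) -> str:
    """Report raw diagnostics in emotion-like terms."""
    if d.faults > 0 or not d.sensors_ok:
        return "miserable: broken, malfunctioning, diagnostics unreliable"
    if d.battery_level < 0.2:
        return "depressed: running low, performance degraded"
    return "happy: all systems nominal"

print(emotional_status(Diagnostics(0.9, 0, True)))   # happy: all systems nominal
print(emotional_status(Diagnostics(0.5, 2, False)))  # miserable: broken, ...
```

Whether such labels count as emotions or merely as relabeled telemetry is exactly the disagreement running through this thread.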
originally posted by: gpols
a reply to: TerryDon79
But there are definitely scenarios where emotions in AI would be beneficial. As I mentioned before, therapy robots for one.
originally posted by: gpols
a reply to: TerryDon79
But why wouldn't an AI machine develop real emotions when the people it's helping start saying things like "You're not really sad, you don't know how I really feel" and things like that?
originally posted by: gpols
a reply to: Ghost147
There would be constants. Just as with humans: our brains never stop our heart or stop us from breathing. There are certain functions our bodies perform to keep us alive.
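One way to picture these "constants" is a layered design in which a low-level vitals loop keeps running no matter what the planning layer decides. The sketch below is a hypothetical illustration only, with invented class and method names (and Python name mangling merely signals intent; it is not a real safeguard):

```python
# Hypothetical sketch: vital functions the higher "mind" cannot switch off,
# analogous to a brain that never stops the heart or breathing.

class VitalFunctions:
    """Always-on loop with no public off switch."""
    def __init__(self):
        self.__running = True   # name-mangled: signals "hands off"

    def tick(self):
        if self.__running:
            self.pump_blood()
            self.breathe()

    def pump_blood(self): ...
    def breathe(self): ...

class HighLevelMind:
    """Planning layer: may read vitals, has no handle to stop them."""
    def __init__(self, vitals: VitalFunctions):
        self.vitals = vitals

    def plan(self):
        ...   # whatever it decides, the loop below keeps ticking

vitals = VitalFunctions()
mind = HighLevelMind(vitals)
for _ in range(3):            # simplified control loop
    vitals.tick()
    mind.plan()
```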
originally posted by: gpols
a reply to: Ghost147
What would you program into a machine to keep it from destroying its own kind? A machine programmed to kill would kill indiscriminately. If a machine's communication got damaged in a battle and it was unable to update with the rest of the cluster, what would keep the other machines from destroying the malfunctioning machine?
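One hedged answer to this silent-machine scenario is to make loss of contact itself the trigger: a unit that stops reporting is quarantined by the rest of the cluster instead of being trusted. The timeout value and unit names below are assumptions for illustration:

```python
# Hypothetical sketch: machines heartbeat to the cluster; any unit that
# goes silent (e.g. damaged comms in battle) is quarantined, not trusted.
import time

HEARTBEAT_TIMEOUT = 5.0   # seconds of silence before quarantine (assumed)

last_seen = {"unit-1": time.time(), "unit-2": time.time()}

def record_heartbeat(unit_id: str) -> None:
    """Called whenever a unit checks in with the cluster."""
    last_seen[unit_id] = time.time()

def quarantined_units(now: float) -> list[str]:
    """Units whose comms went quiet; other machines stop acting on their data."""
    return [u for u, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]
```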
originally posted by: gpols
a reply to: TerryDon79
The point is, emotions could very well be advantageous to artificial intelligence.
originally posted by: gpols
a reply to: TerryDon79
And it would be beneficial in terms of therapy robots; that's the only one I can think of off the top of my head. But eventually the robot would learn real emotion to do what it was set out to do more effectively.
originally posted by: AlienView
But I call myself a 'Sciencefictionalist': someone who projects future scenarios that may become,
originally posted by: Ghost147
originally posted by: AlienView
But I call myself a 'Sciencefictionalist': someone who projects future scenarios that may become,
Is it just me, or does anyone else cringe when someone says "but I call myself a [made-up word]"?
At least the rest of your post made sense.
originally posted by: TerryDon79
originally posted by: gpols
a reply to: TerryDon79
And it would be beneficial in terms of therapy robots; that's the only one I can think of off the top of my head. But eventually the robot would learn real emotion to do what it was set out to do more effectively.
But only if we told it to. That's my whole argument. If we don't program it to learn, or program it to have the ability to program itself, it can't learn something we don't want it to.
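TerryDon79's point can be pictured as a whitelist: the machine may tune the skills it was given, but anything outside that list simply is not in its programming. A minimal hypothetical sketch with invented skill names:

```python
# Hypothetical sketch: learning confined to a fixed whitelist of skills.
ALLOWED_SKILLS = {"recognize_sadness", "offer_comfort_phrase"}

skill_weights = {skill: 0.5 for skill in ALLOWED_SKILLS}

def learn(skill: str, reward: float) -> None:
    """Adjust a permitted skill; refuse anything outside the whitelist."""
    if skill not in ALLOWED_SKILLS:
        raise PermissionError(f"{skill!r} is not in this machine's programming")
    # bounded update: weights stay within [0.0, 1.0]
    skill_weights[skill] = min(1.0, max(0.0, skill_weights[skill] + 0.1 * reward))

learn("offer_comfort_phrase", reward=1.0)   # allowed: weight nudges up
# learn("seize_power", reward=1.0)          # would raise PermissionError
```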
originally posted by: gpols
a reply to: TerryDon79 & Ghost147
So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I never watched Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.
Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?
originally posted by: AlienView
originally posted by: TerryDon79
originally posted by: gpols
a reply to: TerryDon79
And it would be beneficial in terms of therapy robots; that's the only one I can think of off the top of my head. But eventually the robot would learn real emotion to do what it was set out to do more effectively.
But only if we told it to. That's my whole argument. If we don't program it to learn, or program it to have the ability to program itself, it can't learn something we don't want it to.
But they can program it to program itself, like with IBM's Watson plugged into the internet and beating the best game players in the world on Jeopardy. A machine of the future programmed to learn and having access to the web will be able to...
Better still, what will it not be able to learn or do?
Controlling and/or eliminating biological life might just be stage one of whatever agenda its calculating [thinking] leads it to.
originally posted by: TerryDon79
originally posted by: gpols
a reply to: TerryDon79 & Ghost147
So are you saying we should have a whole bunch of Datas (from Star Trek) running around? I never watched Star Trek religiously or anything like that, but I remember a few episodes of him wanting to know what being happy felt like, or what being sad felt like.
Why wouldn't an AI machine eventually teach itself emotions just because it wanted to know?
How could it eventually teach itself anything if we didn't tell it to? If it's not in its programming, it can't do it. It's that simple.