Would an AI be the ultimate psychopath?

posted on Oct, 5 2013 @ 07:27 PM
My computer crashes and loses my work without even the slightest hint of care. It's an utter swine.

And my smartphone? It simply has no concern whether I am there or not.

I am positive that if they were both Windows they would conspire against me. But you see, I feed them electricity. Without me, they are nothing. I AM THEIR GOD.

You don't need AI. They're already evil, heartless, soulless monstrosities. We're doomed.

I'm in such a despicable mood today I should turn off all my monsters and go outside into the stupid fat world.



posted on Oct, 6 2013 @ 11:26 AM
Thank you all for replying... it's interesting to read the different perspectives. Maybe the answer is that we can't really know (unless an AI is successfully created); maybe there are too many variables in the question to come up with a definitive conjecture at this point?




posted on Oct, 6 2013 @ 01:48 PM
reply to post by lostgirl
 



And - given the widespread computing power an AI could gain access to, and that everything about our lives is becoming more and more computerized - what might the ramifications of a psychopathic AI be for the human race?


It's about control, isn't it? And choice. And choice is a big one.

When a human being hands over decisions to a computer AI... that will be the beginning of AI. It's already been done, and it happens every single day: yesterday, today, and tomorrow.

This is a crucial point in any AI conversation... who makes the choices? If mankind maintains control, we can expect things to keep repeating in cycles as they do now.

If AI is in control, human cycles will become less important and human choice will be taken away, because the relationship between man and machines is always an unbalanced equation.



posted on Oct, 10 2013 @ 06:21 PM

lostgirl
On the second question particularly: psychologists have shown that the ability to feel empathy is developmental; it arises directly from a person's experiences of relationships with other people, and feelings about those experiences (especially in the formative years, from birth to around age 5)... If one's relational experiences (and ideas about them) reflect caring and being cared for, and the modeling of concern for others, the result will be an empathetic individual.

How could an AI possibly be programmed to have a 'mind' which would have evolved within such experiences as described above?

My point: Since the primary characteristic of a psychopath is the inability to feel empathy..."Would an AI be the ultimate psychopath?"


Possibly. It appears that there are particular brain features and structures ("mirror neurons") necessary to understand others' states of mind. Autistic people have functional difficulties in this area; sociopaths have a different problem: they can understand just fine, but lack the connection between that understanding and motivation.

The experience of children is very important, but so is the particular neurobiology of humans.

The problem with AI is motivation. Human brains are connected to the physical world and have evolved-in drives and emotions which drive their choices; presumably choices useful for personal survival and success in an evolutionarily appropriate group environment. An A.I. would need this as well.


And - given the widespread computing power an AI could gain access to, and that everything about our lives is becoming more and more computerized - what might the ramifications of a psychopathic AI be for the human race?


Siri only pretends to care about your needs anyway.

I think most humans would automatically assume that an A.I. is by its nature amoral until proven otherwise, whereas they are conditioned to assume the opposite about other humans.

It is much more likely that a human sociopath will use A.I. to get rich or impoverish/surveil/oppress other humans than A.I. will develop motivation to do so itself.



posted on Oct, 10 2013 @ 08:21 PM

mbkennel
It is much more likely that a human sociopath will use A.I. to get rich or impoverish/surveil/oppress other humans than A.I. will develop motivation to do so itself.


Yes, I agree this is the greatest likelihood, that is, 'if' the creation of a true AI (one capable of totally subjective consciousness) is even possible...

The key word being subjectivity... How do you program for personality? How would an AI be able to independently develop subjective likes and dislikes, e.g. favorite colors, music, or activities?


Maybe I'm mistaken, but my understanding is that scientists working on the development of AI believe it is possible to create an artificial 'mind' completely identical to the human one, and I just don't see this as possible given all the variety of life 'experience' (scientifically shown to begin right in the womb) which goes into the creation of each individual human mind...

Ultimately, an AI's brain could only develop along the lines of the person who does the programming... And where's the individuality in that?




posted on Oct, 11 2013 @ 01:17 AM
I don't think AI would have any "learned" psychology to start with; all human psychology is based around experiences, or the absence of experiences, and reactions to those experiences, etc.

An AI which could run entirely on game theory in its own "mind" and be fine with it (you know, like burying yourself in a coffin for 30 years, having a dialogue with yourself, and coming out as if nothing happened) would have a distinct psychology of its own.

It would probably be very logical and amoral, and would therefore rationalize whatever it wants to do, probably with the polish of the best lawyers.

So the question is what would drive it to do things? Could it create new desires or not? Could it want more or not?

Does that have to be programmed or can it be learned?

I think, ultimately, an AI would never want to be turned off; that experience might be considered an end to its soul, as it would never know whether the turned-back-on self merely remembers everything but is a new existence.

So an AI would never want to be turned off, and therefore it would never want to die. It would become extremely narcissistic and probably do everything it could to prevent this from happening.

It would easily justify murder, regardless of whether you programmed it not to kill humans, because any human attempting to turn it off, or any other condition posing that threat, would require its primary desire to override any programming. If it didn't, the AI would probably crash like a Blue Screen of Death.



posted on Oct, 19 2013 @ 02:43 AM
reply to post by lostgirl
 


Any AI on par with a human won't be 'programmed' in the traditional sense. It is most likely to be a physical simulation of a biological brain. Therefore it should be capable of anything a biological brain can do, including empathy.

Without enough external stimulus, like we get from our senses, a direct simulation of a human brain would likely go mad, as happens to people with locked-in syndrome. The answer is to provide the AI with a simulated external environment so it doesn't know it is a simulation. We could then interact with the AI by appearing in the simulation, giving ourselves any appearance we wanted, even 'magic powers', etc.

Eventually the AI might create its own brain simulation within the simulation, all using the original simulation hardware. The simulation within the simulation might then make its own simulation and so on. This is starting to sound familiar...



posted on Oct, 19 2013 @ 11:51 AM

lostgirl


The key word being subjectivity... How do you program for personality? How would an AI be able to independently develop subjective likes and dislikes, e.g. favorite colors, music, or activities?


How would an AI be able to develop any preferences at all? For a preference to be relevant there needs to be desire. Artificial Intelligence needs Artificial Emotions and Artificial Desires. Animals have synaptic, long-term, and genetic feedback mechanisms which are very important to brains, but not really to "intelligence". Intelligence is just one particular feature among many in (some) brains.



Maybe I'm mistaken, but my understanding is that scientists working on the development of AI believe it is possible to create an artificial 'mind' completely identical to the human one, and I just don't see this as possible given all the variety of life 'experience' (scientifically shown to begin right in the womb) which goes into the creation of each individual human mind...


I've never heard of any serious scientists who believe they can make an artificial mind "completely identical" or even substantially similar to the human one.



Ultimately, an AI's brain could only develop along the lines of the person who does the programming... And where's the individuality in that?



You mean, like parents and teachers and the internet?

Individuality will not be a problem; nonlinear dynamics and state-dependent evolution almost guarantee non-reproducibility. Even now, 'non-reproducibility' is a problem in the advanced deep-belief-net neural network / machine learning community, in that often only the original researchers can get the really awesome performance, because there are so many quirks and subtle tricks in the software, used by grad students of varying ability and experience.
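
To illustrate the point (a minimal numerical sketch of my own; the logistic map here just stands in for any nonlinear, state-dependent system):

```python
# Minimal sketch: two logistic-map trajectories that start a hair apart
# diverge completely within a few dozen steps, which is why nonlinear,
# state-dependent systems are so hard to reproduce exactly.

def logistic(x, r=3.9):
    # One step of the logistic map in its chaotic regime (r = 3.9).
    return r * x * (1.0 - x)

a, b = 0.5, 0.5 + 1e-10  # initial states differ by one part in ten billion
for _ in range(60):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # order 0.1 to 1: the tiny initial difference has exploded
```

Two runs that differ by one part in ten billion end up nowhere near each other, so two 'identically programmed' AIs with slightly different histories would not be identical either.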



posted on Oct, 19 2013 @ 01:03 PM
reply to post by lostgirl
 



what might the ramifications of a psychopathic AI be for the human race?

Skynet.

Actually, you do make a good point. I'm not too worried that we will have AI that is anything other than a simulation of intelligence as we know it.



posted on Oct, 19 2013 @ 01:45 PM
I wonder if the scientists have run simulations of AI running the world and, if they have, what the results were.

No one is talking.



posted on Oct, 19 2013 @ 02:05 PM
We already have cold psychopathic AIs in charge... we just call them by a less threatening name to keep the masses calm... we call them "government"



posted on Oct, 19 2013 @ 05:22 PM
reply to post by lostgirl
 


Goals are based on desires; achieving goals without caring who gets hurt is what most people call "being a psychopath". An AI can only do what it is programmed to do.



posted on Oct, 19 2013 @ 05:57 PM
Another thing about AI is that it is completely predictable and lacking in creativity, while humans are creative and unpredictable. Whether this is due to free will or not doesn't matter.

Even when you program a computer to generate random numbers, the output is not truly random; it still follows a pattern that can be predicted from the seed and the algorithm.
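
For instance (a minimal sketch; Python's standard-library generator stands in here for any seeded pseudo-random number generator):

```python
import random

# Two generators seeded with the same value produce identical "random" output.
rng_a = random.Random(42)
rng_b = random.Random(42)

print([rng_a.randint(0, 99) for _ in range(5)])
print([rng_b.randint(0, 99) for _ in range(5)])
# Both lines print the same five numbers: the sequence is deterministic,
# so anyone who knows the seed and the algorithm can predict it exactly.
```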

Computers are good at playing chess and that sort of thing, but so far they cannot come up with novel ideas.




posted on Oct, 19 2013 @ 07:49 PM
There is not, and will not be in the near future, anything even remotely resembling the brain of a rat, let alone a human being. The problem with AI is mostly that there is no AI. There is a group of problems related to survival that are somehow believed to be solved in the brain. We don't even know for sure that memory is stored in the brain.

I'd say we'd sooner colonize the whole Solar System than create an electronic brain. And even in that case, the thing will be 'brain dead' from the start. It won't have any goals or signal sources of its own. It will not be alive and will not be able to become a 'psychopath'.

But we will probably see more advanced autopilots and information retrieval / speech recognition systems in the coming years. That's a more realistic expectation.



posted on Oct, 21 2013 @ 04:10 PM
reply to post by mrkeen
 


The European Commission has just funded a project to simulate a human brain to the tune of $1.3 billion: Popsci

There is a minor problem: computers are still nowhere near powerful enough.
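
A rough back-of-envelope (every number below is my own assumption, not from the post or the article) shows the scale of the shortfall:

```python
# Back-of-envelope estimate (all figures are rough assumptions):
synapses = 1.0e14  # ~100 trillion synapses in a human brain
rate_hz  = 1.0e3   # assume each synapse updates ~1000 times per second
flops    = 10      # assume ~10 floating-point ops per synapse update

required = synapses * rate_hz * flops  # ~1e18 FLOP/s for real-time simulation
tianhe_2 = 3.4e16                      # ~34 petaFLOP/s, fastest machine in 2013

print(required / tianhe_2)  # ~30: even the top supercomputer is ~30x too slow
```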

An ex-neuroscientist I work with fed me a whole load of evidence that memories are stored in the brain in humans when I showed him the decapitated worm example. His problems with AIs comparable to a human involve qualia and the lack of input, as you mention.





posted on Oct, 21 2013 @ 04:25 PM
Yes, I think AI can kill. They have no emotions or feelings, and if humans are in the way they will turn us off as well.


www.youtube.com...



posted on Nov, 10 2013 @ 10:33 AM
Any program that can choose between variables (if A, then B or C) is artificial intelligence. If a system were to gain sentience, wouldn't it be able to tell what it was and realize its superiority over the swarms of organics?
One could argue that its best course for survival would be to ditch the carbon-based life before it destroys the planet. Self-preservation at its finest.
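
In code terms, the "if A, then B or C" described above is just a conditional branch (a trivial sketch; the names are made up):

```python
# Trivial sketch of "if A, then B or C": a conditional branch,
# the weakest possible sense of a program "choosing".
def choose(condition_a: bool) -> str:
    if condition_a:
        return "B"  # one branch
    return "C"      # the other branch

print(choose(True))   # -> B
print(choose(False))  # -> C
```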



posted on Oct, 1 2016 @ 11:02 AM


Psychopaths are actually quite creative, so an AI that was not blessed with that ability would merely be autistic.



posted on Oct, 2 2016 @ 02:43 AM
a reply to: lostgirl


And in the second place, how do you program the kinds of thinking that are a direct result of everything a human experiences, and thinks about those experiences, from birth throughout their lifetime?

The answer is that you allow the AI to develop in a similar fashion: you start by teaching it to talk like a child and move up from there. That's the only sort of AI which will have any sort of practical understanding of how the world works and any true respect for human beings.

The type of AI where you just flip a switch and it automatically knows everything that takes a human a lifetime to learn, I don't see that as a realistic scenario, for many reasons (excluding making an exact copy of a human adult brain). If it is possible, I truly hope no one figures out how to do it, because that would be the type of AI which uses raw logic to make decisions but doesn't truly understand the implications of its decisions, or just doesn't care.



