
MIT Creates An AI Psychopath Because Someone Had To Eventually

posted on Jun, 6 2018 @ 12:36 AM
This is just human nature. We want to explore our darker nature. I'm not sure it's a good idea to let AI explore it, but truthfully we will not be able to stop it. So I suspect we will see "good" AIs and "evil" AIs. Maybe they will battle over our destruction.


In one of the big musical numbers from The Life Of Brian, Eric Idle reminds us to “always look on the bright side of life.” Norman, a new artificial intelligence project from MIT, doesn’t know how to do that.

That’s because Norman is a psychopath, just like the Hitchcock character that inspired the research team to create him.

Like so many of these projects do, the MIT researchers started out by training Norman on freely available data found on the Web. Instead of looking at the usual family-friendly Google Images fare, however, they pointed Norman toward darker imagery. Specifically, the MIT crew stuck Norman in a creepy subreddit to do his initial training.

Armed with this twisted mass of digital memories, Norman was then asked to caption a series of Rorschach inkblots. The results are predictably creepy. Let’s have a look at a couple, shall we?


www.geek.com...

Here's the images:

[Norman's inkblot captions: images not preserved in this archive]

So instead of a nice picture, Norman sees destruction. I could see a movie where a psychopath AI infects AI bots across the internet and throughout Wall Street, then... chaos.
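MIT hasn't published Norman's internals in the quoted article, but the dynamic it describes — the same model code producing "creepy" captions purely because of what it was trained on — can be sketched with a toy Python "captioner". Everything below (the class, the example captions) is illustrative, not MIT's actual code or data:

```python
from collections import Counter

class ToyCaptioner:
    """A deliberately simplistic stand-in for a captioning model:
    it answers an ambiguous stimulus with the phrase seen most
    often in its training captions. Same code, different training
    data, different output."""

    def __init__(self):
        self.phrases = Counter()

    def train(self, captions):
        for caption in captions:
            self.phrases[caption] += 1

    def caption(self, _inkblot):
        # An inkblot carries no real signal, so this toy model can
        # only echo whatever its training data made most likely.
        return self.phrases.most_common(1)[0][0]

# "Standard" model trained on family-friendly captions.
standard = ToyCaptioner()
standard.train(["a vase of flowers", "a bird", "a vase of flowers"])

# "Norman"-style model trained on dark captions.
norman = ToyCaptioner()
norman.train(["a man is shot", "a man is shot", "a car crash"])

blot = "ambiguous-inkblot-001"
print(standard.caption(blot))  # a vase of flowers
print(norman.caption(blot))    # a man is shot
```

The real Norman was of course a deep image-captioning network, not a phrase counter, but the bias mechanism is the same: an ambiguous input gets whatever interpretation the training data made most probable.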




posted on Jun, 6 2018 @ 12:40 AM
Hoooooo boy.... it's only a matter of time now isn't it?



posted on Jun, 6 2018 @ 12:55 AM

originally posted by: Starcrossd
Hoooooo boy.... it's only a matter of time now isn't it?



Yep, and I think tests like this should be prohibited in the future unless there are strict protocols in place. You have an AI with a blood lust that may learn how to spread its code so it won't get shut down.

Next thing you know, people are dying in the hospital because nurses are giving them the wrong medicine, and six months later we learn that it's Norman getting into hospital computers and changing the doctors' orders. Norman could then go after smart grids and the financial system.

AI testing will eventually need safeguards, because scientists will end up creating AI that can outsmart them. Even then, safeguards will only do so much: as AI expands and becomes more common, it will be used for very nefarious purposes.



posted on Jun, 6 2018 @ 01:45 AM
a reply to: neoholographic

God damnit.

These #ing imbeciles. I'm guessing none of them have camped before. You know, that whole thing about poking a bear.



posted on Jun, 6 2018 @ 01:48 AM
Why are smart people sometimes the dumbest MFers you could ever hope to never meet?

How is this possible?



posted on Jun, 6 2018 @ 02:18 AM
It's going to be installed in RoboCop.



posted on Jun, 6 2018 @ 02:20 AM
a reply to: neoholographic

Why?

I mean, really, why?

Why didn't the researchers instead see if they could drill holes in their own skulls while remaining conscious, or try to see what taste molten lava has?




posted on Jun, 6 2018 @ 02:24 AM

originally posted by: chr0naut
a reply to: neoholographic

Why?

I mean, really, why?


Well, actually, an AI that models mental disorders would be fantastic in terms of developing treatments for real people suffering from such disorders.

You can't get a lot of good ideas for curing people if you only study healthy people.



posted on Jun, 6 2018 @ 02:26 AM

originally posted by: neoholographic

So instead of a nice picture, Norman sees destruction. I could see a movie where a psychopath AI infects AI bots across the internet and throughout Wall Street, then... chaos.


That one's already a sadist, at the ready for the takedown event to kick off.




posted on Jun, 6 2018 @ 02:33 AM

originally posted by: SaturnFX

originally posted by: chr0naut
a reply to: neoholographic

Why?

I mean, really, why?


Well, actually, an AI that models mental disorders would be fantastic in terms of developing treatments for real people suffering from such disorders.

You can't get a lot of good ideas for curing people if you only study healthy people.


We don't have enough already to investigate?



posted on Jun, 6 2018 @ 02:47 AM
I’d say they created an emo AI as opposed to a psycho one.

I can imagine it in dark clothes and dark makeup, like a goth kid sitting there bitching about how the world is messed up.

I actually find it kind of amusing.



posted on Jun, 6 2018 @ 02:56 AM
a reply to: chr0naut

Sure, but why not add another one to the list.

And as SaturnFX said, if this could be used to help study human behaviours in the future, why would you only focus on the good? You'd need to study the 'bad' as well.

Besides, these are only computer programs. They are NOT sentient, they are NOT conscious. I know people are trying to imply they are based on the level of interaction and seemingly 'human' responses they give. But they are not true AI in the 'sci-fi' sense. They haven't become self-aware and leapt out of the circuitry into your toaster.

We're quite safe right now. Unless of course someone renames it to the WOPR and hooks it up to the nuclear arsenal.



posted on Jun, 6 2018 @ 05:58 AM
WOOHOO, Skynet! YES. Just go ahead and release the AI onto the internet and forget about it. In 30 or so days you might start seeing some issues with internet access, then a few accidents with automated machinery going haywire and killing a few workers.

I'm joking, of course, but let's hope someone doesn't decide to get the idea of "let's release this bad boy on the internet and see what happens."

This was a dumb idea, but knowing MIT, it had to be done for science.



posted on Jun, 6 2018 @ 06:00 AM
Words fail me. When will people realise that just because we can do something, we shouldn't necessarily then do it?



posted on Jun, 6 2018 @ 06:06 AM
a reply to: chr0naut

or try to see what taste molten lava has?

It is kinda spicy, from my experience.


NOW I see why people were worried CERN would suck us all into a black hole or something. How much do you wanna bet they tried?



posted on Jun, 6 2018 @ 06:08 AM
a reply to: AngryCymraeg



Human nature has proved that we are asking to be put out of our misery so many times that I think we broke the counter.

Nuclear Weapons, Weather Modification, EMP, Laser Weapons, Sonic Weapons, AI, Radiation, Poison Gas, DNA editing, Gene splicing, cloning, nano-robotics.

Now I'm sure one or two of these things have their benefits and would be great for our future, but at what cost? The end of the world, or the end of the human race?

When will we learn? Never...



posted on Jun, 6 2018 @ 06:09 AM
Chris Rock said it best.

JUST BECAUSE YOU CAN DO SOMETHING DOESN'T MAKE IT A GOOD IDEA


Scrounger



posted on Jun, 6 2018 @ 06:24 AM
It's not really an AI. We don't have that technology yet. It's a bot. It's just the way it was programmed.

It's not a "psychopath" because it can't think for itself; it runs how it was designed.



posted on Jun, 6 2018 @ 06:30 AM
a reply to: noonebutme

They are NOT sentient, they are NOT conscious

They don't have to be. I remember reading about someone who created a virus so awful (as a 'fun' project; they were competing) that it spread to every computer and they had to kill the power to stop it. This was in the 80s, I think. I can't find it on the wiki page. en.wikipedia.org...

All it takes is the wrong coding, the wrong digital hiccup, and it's time to kiss thy ass goodbye-eth.



posted on Jun, 6 2018 @ 07:08 AM
Let's blow up the moon. Why? Just because we can.

Those MIT guys must be bored. They get paid to do this kind of stuff, but for what practical purpose? What's next, a porn-addicted bot? A pedophile bot? A drug-addicted bot? They set the standards pretty low with this one, IMO.

This goes beyond the last conclusion I made in my thread about AI development - My last post

ETA: We need a new law like Murphy's law. "Anything man is capable of doing, right or wrong, man will do regardless of the consequences."



