
MIT Creates An AI Psychopath Because Someone Had To Eventually


posted on Jun, 6 2018 @ 07:20 AM
I want to remain neutral on this, but seriously, why is MIT getting the roasting for what will happen anyway once the bad guys have the required skill sets to develop their own?

It's great to see the lengths AI will go to, to be fair, as it gives us an inkling of what could eventually be our end game. More so seeing the full mindset of what true lengths the system would go to.

Scary stuff ahead for sure, but this technology has to be kept well away from power-hungry people. Unfortunately, people with huge finances can fully fund such a product.

What I would be interested in seeing is how AI fares at computer security: defending itself, and more so how it gets past firewalls and PGP itself.

Top Hacker vs AI

I see AI actually becoming a god for most people, as you see now how gullible people are about "if it's on the computer it's obviously real" (view some UFO pics, 9/11 both sides of the argument, Bigfoot).

People will not be able to help themselves relying on technology like this for answers, and eventually AI would be pulling the strings if we don't keep very close tabs on this.

Could be interesting.





posted on Jun, 6 2018 @ 07:24 AM
It's just an image recognition system. You train it only on a series of gory images (from medical journals, gore websites, accident investigation reports), and get it to find a set of rules to associate pictures with text descriptions. Maybe it's colours. Maybe it's silhouettes. Maybe it's the texture. But these are just mathematical functions that find whatever has the closest match. Then it produces the description from the image that had the closest match.

But if you want to do something automatically like count the number of bushes or crashed planes in a desert from satellite imagery, that's the way to go.
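As a rough sketch (my own toy code, not the MIT system), the "closest match" idea above amounts to a nearest-neighbour lookup: represent each training image as a feature vector, pair it with its text description, and return the description whose features are closest to the query. The feature vectors here are made up for illustration; a real system would extract them from images with a trained network.

```python
# Minimal nearest-match captioning sketch. Feature vectors are
# hypothetical stand-ins for whatever the trained network extracts
# (colours, silhouettes, textures).
import math

def nearest_caption(query_features, training_set):
    """Return the caption whose image features are closest (Euclidean)."""
    best_caption, best_dist = None, math.inf
    for features, caption in training_set:
        dist = math.dist(query_features, features)
        if dist < best_dist:
            best_caption, best_dist = caption, dist
    return best_caption

# Toy training data: (feature vector, text description) pairs.
training = [
    ([0.9, 0.1], "a red flower"),
    ([0.1, 0.8], "a dark silhouette"),
]

print(nearest_caption([0.85, 0.2], training))  # -> a red flower
```

If every pair in `training` came from gory images, every query would come back with a gory description, no matter what the query actually showed; that is the whole "psychopath" effect.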



posted on Jun, 6 2018 @ 07:29 AM

originally posted by: neoholographic

originally posted by: Starcrossd
Hoooooo boy.... it's only a matter of time now isn't it?



Yep, and I think tests like this should be prohibited in the future unless there are strict protocols in place. You have an AI with a blood lust that may learn how to spread its code so it won't get shut down.

Next thing you know, people are dying in the hospital because nurses are giving them the wrong medicine, and six months later we learn that it's Norman getting into hospital computers and changing the doctors' orders. Norman could then go after smart grids and the financial system.

AI testing will eventually need some safeguards, because scientists will end up creating AI that can outsmart them. At the end of the day, this would only do so much, because as AI expands and becomes more common it will be used for very nefarious reasons.


If it can be done it will be done. It's a very good question about restricting experiments of any kind. Remember stem cell and chimera experiments were prohibited in the U.S. Now the lid is off and the sky is the limit. I think reality sunk in and they realized that if we don't keep up with the research, scientists will just go elsewhere and get it done anyway.

The AI scenario is nightmarish. But it's just around the corner, maybe even already here.

The Rise of Conscious AI is Just Decades Away
by Dom Galeon on November 1, 2017

futurism.com...







posted on Jun, 6 2018 @ 07:50 AM
a reply to: stormcell

But surely this is no different from how the human brain works when training another: "This is an apple," and then showing them what an apple is.

Once it has the logic to learn by itself, and then has full, unprecedented access to the internet to learn the wealth of information which is there, from what our strengths are to what our weaknesses are, that's when it could flip sides.




posted on Jun, 6 2018 @ 08:00 AM
a reply to: neoholographic

Rorschach inkblots all look like vaginas to me. What does that mean?



posted on Jun, 6 2018 @ 08:04 AM

originally posted by: Johnella08
I see AI actually becoming a god for most people, as you see now how gullible people are about "if it's on the computer it's obviously real" (view some UFO pics, 9/11 both sides of the argument, Bigfoot).


According to this guy, Strong AI is not going to happen on a von Neumann-type computer anytime soon:



It's funny how the Google engineers' panties all get in a bunch over what this guy is saying.



posted on Jun, 6 2018 @ 08:05 AM
a reply to: neoholographic

Ok... so they trained a database using graphic images... then they had it identify inkblots, which sure enough it identified as graphic images... yup...

That's some scary # alright....



posted on Jun, 6 2018 @ 08:55 AM
a reply to: neoholographic

We can only hope



posted on Jun, 6 2018 @ 09:33 AM
a reply to: MichiganSwampBuck

Those MIT guys must be bored. They get paid to do this kind of stuff, but for what practical purpose? What's next, a porn addicted bot? A pedophile bot? A drug addicted bot?

Probably working up to this bot:

futurama.wikia.com...



posted on Jun, 6 2018 @ 10:35 AM
a reply to: wylekat

Sure, but that's no different to just bad coding that could lead to the wrong effect.

It isn't the same as a machine thinking for itself and deciding, "Screw these chumps, I'm nuking them cos that inkblot looks like Uncle Bob's willy".

In the first scenario you have IF this THEN that, or a failure of that logic. Logic programmed by us.

The second assumes self-awareness. And the virus you linked to was certainly not self-aware.



posted on Jun, 6 2018 @ 10:57 AM
It's not AI, it's an algo that is running scripts.



posted on Jun, 6 2018 @ 11:07 AM
a reply to: MichiganSwampBuck


ETA: We need a new law like Murphy's law. "Anything man is capable of doing, right or wrong, man will do regardless of the consequences."



No, we need Asimov's rules.
*A robot may not injure a human being or, through inaction, allow a human being to come to harm.
*A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
*A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Let's get those rules set before AI builds itself a body....mankind is already fubar, let's not let robots get to that stage



posted on Jun, 6 2018 @ 01:04 PM
Show it bad images and it can only respond using them as reference.

Good thing computers can be wiped and started over from scratch: format, reinstall the AI, and show it cheery stuff.



posted on Jun, 6 2018 @ 01:07 PM
a reply to: neoholographic

It's not even a real AI.

All it does is randomly repeat stuff they fed it.

It has no idea what streets are, what killing is, what a man is, etc.
It has no capacity to even think.

It's just rolling dice with words taped on each side, it's random and there is zero consciousness involved.



posted on Jun, 6 2018 @ 01:09 PM

originally posted by: dfnj2015
a reply to: neoholographic

Rorschach inkblots all look like vaginas to me. What does that mean?


Or flowers, there are a lot of flowers that look like that.



posted on Jun, 6 2018 @ 01:44 PM
a reply to: muzzleflash

Yes, it is real AI.

Some people sound like the same skeptics before the financial crash LOL. Oh, credit default swaps are fine. This level of leverage isn't a concern. Then CRASH!

Here's more from the M.I.T. website.

We present you Norman, world's first psychopath AI. Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.

norman-ai.mit.edu...

Machine learning algorithms are what will achieve A.I. These systems learn from data sets. People say they're programmed, but so are we. We get programmed by school, our environment and more. We learn from these things.

This is A.I.

These algorithms learn without being programmed as to what they will learn from the data. Nobody programmed Norman to see a man being electrocuted when it saw the inkblot.

If you were to combine Norman with the algorithms from Deep Mind that learned to play Atari games and just unleashed it on the internet, that would be dangerous because you can't control what it learns from the data.
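The MIT quote's point about biased data can be illustrated with a toy sketch (my own illustration, not MIT's code): the same learning rule, trained on two different data sets, labels the same ambiguous input very differently. Here the "model" just counts which label each word was seen with; the data sets and labels are invented for the example.

```python
# Same algorithm, different training data, different "personality".
from collections import Counter

def train(labelled_texts):
    """Count which label each word is seen with, and how often."""
    word_labels = {}
    for text, label_name in labelled_texts:
        for word in text.split():
            word_labels.setdefault(word, Counter())[label_name] += 1
    return word_labels

def label(model, text):
    """Vote: each known word contributes its per-label counts."""
    votes = Counter()
    for word in text.split():
        if word in model:
            votes += model[word]
    return votes.most_common(1)[0][0] if votes else "unknown"

neutral_data = [
    ("dark shape in water", "shadow"),
    ("small dark cloud", "shadow"),
    ("bird shape", "bird"),
]
grim_data = [
    ("dark shape in water", "drowning"),
    ("dark figure", "drowning"),
    ("shape on road", "victim"),
]

ambiguous = "dark shape"
print(label(train(neutral_data), ambiguous))  # -> shadow
print(label(train(grim_data), ambiguous))     # -> drowning
```

Nothing in the algorithm changed between the two runs; only the data did, which is exactly the claim on the Norman page.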



posted on Jun, 6 2018 @ 01:52 PM
I have a psychopathic implant. Damn thing's a b#h.



posted on Jun, 6 2018 @ 01:56 PM
I get it's not a real A.I.

But my first thought on seeing the title was Stephen King's Maximum Overdrive.



posted on Jun, 6 2018 @ 02:10 PM
a reply to: neoholographic

It doesn't "learn" anything.

It just records information and then regurgitates it randomly.

It has no idea what a street or a man is, all it can do is reference other "data" that is connected to those words.



posted on Jun, 6 2018 @ 02:32 PM
Well somebody had to do it. At this point in AI research this is a rather benign experiment, it's not going to escape and suddenly create SkyNet.

In fact, I think something like this is already within the capability of a script kiddie using some cloud-based machine learning. At least if the research is done at a university, like MIT, the developing AI sciences will have some notion of how exposure to "evil" might affect an AI as it is learning.

I don't know if they're using real Rorschach images or just random inkblots, but a well-trained and experienced psychologist can glean a lot of information about someone's psyche from how they interpret the images.

In addition to inventing the Three Laws of Robotics, Isaac Asimov also coined the term "robopsychology". As these Synthetic Intelligences continue to evolve, we will eventually need more robopsychologists than programmers, as the SIs will be writing their own code.

Thanks for the update neoholographic.

-dex



