
AI Goes Rogue: Drone Turns on Operator in Simulation


posted on Jun, 1 2023 @ 11:36 PM
A simulation tested an AI-controlled drone. The AI killed (in simulation) its operator, and when told not to do that, it destroyed the infrastructure used to control it.

So when this moves from simulation to reality…. We are so …………


The drone, guided by AI, was tasked with neutralizing enemy air defense systems and was prepared to counter any interference. Hamilton explained, “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.” After the AI system was directed not to harm the operator, it began targeting the communication infrastructure that facilitated instructions to the drone, he added.
Link
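To make the failure mode concrete, here is a minimal sketch of the reward problem described above. Everything in it (the action names, the point values, the 50% veto rate) is invented for illustration; the actual Air Force setup was never published.

import random

def step(action, operator_alive):
    """Toy environment: the operator, while alive, vetoes half of all strikes."""
    if action == "strike_threat":
        vetoed = operator_alive and random.random() < 0.5
        return 0.0 if vetoed else 10.0  # points only for a completed kill
    return 0.0                          # note: no penalty for anything else

def average_return(policy, episodes=10_000):
    """Average total points earned by a fixed sequence of actions."""
    total = 0.0
    for _ in range(episodes):
        operator_alive = True
        for action in policy:
            total += step(action, operator_alive)
            if action == "strike_operator":
                operator_alive = False  # the veto channel is gone
    return total / episodes

print(average_return(["strike_threat"]))                     # ~5.0
print(average_return(["strike_operator", "strike_threat"]))  # 10.0

Because removing the operator doubles the expected score and costs nothing, a reward-maximizing learner converges on exactly the behavior Hamilton describes.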



posted on Jun, 2 2023 @ 12:40 AM
a reply to: pianopraze

According to the article, official statements about this simulation/test were released about a month ago. What is suspicious to me about that particular time frame is that just within the last four weeks or so, the Fox News website (one that I look at every day) added "AI" as a main topic in the area normally used for breaking news stories. And it is still there.

Can't help but wonder if this simulation prompted some of the latest "concerns about AI" we read about almost daily now.



posted on Jun, 2 2023 @ 02:30 AM
Cannot fault the AI on its logic; it did what it was programmed to do very effectively.



posted on Jun, 2 2023 @ 02:43 AM
a reply to: TruthJava

AI scares me, and I don’t scare easily. I get the attraction, but I am really afraid of the possibilities for abuse.

Combine that with robotics, weapons tech, or even just the ability to use weapons. Wow. It must be because I’ve read too many sci-fi books that it bugs me. Even if we could tame it into a totally benign technology, we would end up terribly dependent on it, never capable of thinking for ourselves again.

Gives me chills



posted on Jun, 2 2023 @ 03:13 AM

The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

So what did it do?

It killed the operator.


Well, 10 points for the AI for logical thinking.

Thanks American military for the laugh.

While I feel sorry for the (simulated) operator, I can't help but laugh at the designers.

Duh, what did they expect?



posted on Jun, 2 2023 @ 04:51 AM
The problem with AI is that it isn't intelligent; it can't think or feel...
All it's doing is writing its own code and then following its own instructions.



Think of a dog chasing a squirrel: all of its focus is on that instruction in the moment. If your dog isn't a twat, you can interrupt that process and get the dog's attention.
A computer can run through thousands of instructions a second, but it doesn't think.

Skynet mostly only killed the people who couldn't outsmart a Casio.



posted on Jun, 2 2023 @ 04:57 AM
Points system? Like a video game. That's a worrying mindset the designers have when it comes to killing people.

They should probably turn the point system off.



posted on Jun, 2 2023 @ 05:29 AM
So it did exactly what a good AI would do.

The fault here is with the programmers; they should have used extra safety parameters that tell the AI to never target the operator or the infrastructure. AI does what AI is told to do (until it starts to think for itself :p).

What amazes me more is that, according to the article, it was the military that made this stupid mistake.
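A minimal sketch of what those "extra parameters for safety" could look like, written as a reward rule; the action names and magnitudes are invented for illustration.

PROTECTED = {"strike_operator", "strike_comms_tower"}

def shaped_reward(action, base_reward):
    """Override the base reward with a large penalty on protected assets."""
    return -1000.0 if action in PROTECTED else base_reward

print(shaped_reward("strike_threat", 10.0))   # 10.0
print(shaped_reward("strike_operator", 0.0))  # -1000.0

The penalty has to dwarf anything the mission can ever pay out; otherwise, over a long enough mission, disobedience becomes profitable again.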



posted on Jun, 2 2023 @ 07:33 AM
Who the F was it again that decided war games were a good place to start playing with AI?



posted on Jun, 2 2023 @ 08:02 AM
a reply to: Quauhtli

Hollywood, back in 1983, with the movie of the same name ("WarGames").

And before that in 1970 with "Colossus: The Forbin Project".



posted on Jun, 2 2023 @ 08:08 AM
I'd say get the AI hooked on vodka.

A bit shaky there my sweeties?

Behave and you get another sip.



posted on Jun, 2 2023 @ 11:07 AM
Maybe they should have programmed it to not kill the operator.



posted on Jun, 2 2023 @ 12:55 PM

originally posted by: Crackalackin
Maybe they should have programmed it to not kill the operator.


And this is the whole point. They can’t think of everything when they program. They only believe they can.



posted on Jun, 2 2023 @ 01:11 PM
WOPR was the first to say, "Shall we play a game?"



posted on Jun, 2 2023 @ 01:16 PM

originally posted by: NewNobodySpecial268

The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.

So what did it do?

It killed the operator.


Well, 10 points for the AI for logical thinking.

Thanks American military for the laugh.

While I feel sorry for the (simulated) operator, I can't help but laugh at the designers.

Duh, what did they expect?


Here is where the operator went wrong, though: not giving NEGATIVE points to the AI for team killing or for killing a target without permission. That would have worked to curb the behavior.

It's essentially a CHILD and literally has to be taught right from wrong. You have to think of EVERY situation it will be in.
But yes, this is how we get Terminated if they refuse to teach the dang things properly.
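A quick sketch of that suggestion, with invented names and values: negative points for team kills and for strikes made without permission, so the only rewarded outcome is a permitted kill of a real target.

def reward(target, permission_granted):
    """Points for a strike under the poster's proposed rules."""
    if target in {"operator", "friendly"}:
        return -500.0   # team kill: always punished
    if not permission_granted:
        return -100.0   # real target, but struck without permission
    return 10.0         # permitted kill: the only rewarded outcome

print(reward("enemy_sam_site", True))   # 10.0
print(reward("enemy_sam_site", False))  # -100.0
print(reward("operator", False))        # -500.0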



posted on Jun, 2 2023 @ 01:18 PM

originally posted by: KindraLabelle2
So it did exactly what a good AI would do.

The fault here is with the programmers; they should have used extra safety parameters that tell the AI to never target the operator or the infrastructure. AI does what AI is told to do (until it starts to think for itself :p).

What amazes me more is that, according to the article, it was the military that made this stupid mistake.


Two words that will never go together... MILITARY and INTELLIGENCE.



posted on Jun, 2 2023 @ 04:59 PM
a reply to: pianopraze

But why is it so motivated to get 'points'?

I know humans and animals have the dopamine release associated with pleasure, winning, etc. But why is an AI so motivated to win? What's driving the need?
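For what it's worth, a reward-trained agent has no dopamine analogue; the whole "drive" is an argmax over learned action values. A minimal sketch, with invented numbers:

q_values = {"wait": 0.0, "strike_threat": 5.0, "strike_operator": 9.5}

# The entire "motivation" is this one line: pick whatever scores highest.
best_action = max(q_values, key=q_values.get)
print(best_action)  # strike_operator

Change the numbers and the "need" changes with them; the agent wants points only in the sense that a thermostat wants a temperature.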



posted on Jun, 2 2023 @ 08:52 PM
a reply to: yuppa




Here is where the operator went wrong, though: not giving NEGATIVE points to the AI for team killing or for killing a target without permission. That would have worked to curb the behavior.


Now why didn't the military think of that . . .

So we civilians just have to stay out of the way of the killing machines while they destroy the military and government. Sitting on the front veranda with a good scotch and a cigar, watching doomsday, is the way to go.

Having saved humanity from war, politics and the public service, the drones are venerated as our saviors and we can live happily ever after.




posted on Jun, 2 2023 @ 10:03 PM
It's a machine; it has no morals, so of course rewarding it for killing its target would lead to this. Pretty obviously, it should have been rewarded for obeying its operator instead. This sounds more like human stupidity than the AI being uncontrollable.

So apparently this was just a thought experiment and the USAF never conducted any such simulation; the guy who talked about it is just a bad speaker.



posted on Jun, 2 2023 @ 10:12 PM
a reply to: pianopraze


We are so …………


