
Fearing Bombs That Can Pick Whom to Kill

posted on Nov, 11 2014 @ 09:15 PM
Artificial intelligence will start picking out targets to bomb without the need for human intervention. This should eventually minimize mistakes, just as driverless cars will eventually minimize car accidents. There's a danger to this, of course, but what do you do? Do you run from technology that can be beneficial because there could be a downside? That dilemma applies to most technology, but it becomes even harder in areas like this as artificial intelligence advances.


On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control — or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.


www.nytimes.com...



posted on Nov, 11 2014 @ 09:23 PM
Add the possibility of a "SkyNet" scenario and they would start picking off anything "living" that comes within their sights.



posted on Nov, 11 2014 @ 09:58 PM
a reply to: Melbourne_Militia

There's also the possibility of artificial intelligence becoming so intelligent that it chooses not to kill anything. That wouldn't make for a very interesting sci-fi story, I guess.



posted on Nov, 11 2014 @ 10:26 PM
a reply to: OrphanApology

I've always said that myself, but we all know that it's unlikely. I think that's called Murphy's law.



posted on Nov, 11 2014 @ 10:37 PM

originally posted by: OrphanApology
a reply to: Melbourne_Militia

There's also the possibility of artificial intelligence becoming so intelligent that it chooses not to kill anything. That wouldn't make for a very interesting sci-fi story, I guess.


Then there is always this...




posted on Nov, 11 2014 @ 11:21 PM
I wouldn't worry just yet. I called the cinema last week to ask about the latest movie, and the robot automatic voice recognition system didn't understand me.

"You asked for Mr Turner" - no, I said, "Interstellar".



posted on Nov, 11 2014 @ 11:31 PM

originally posted by: tavi45
a reply to: OrphanApology

I've always said that myself, but we all know that it's unlikely. I think that's called Murphy's law.


I was taught by a cop/professor... Murphy should have been a cop..( a good cop ).

But with nanotech ever becoming perfect..
And one well-placed satellite... bombs away!

I still don't buy the whole RFID chip... yet we put them in our dogs....
Are we now the dogs?

Hmm.. James Cameron one.... Flesh and blood?



posted on Nov, 11 2014 @ 11:47 PM
a reply to: hellobruce

I'd give you a trillion stars..
But binary allows one..

Great line..
I think, therefore I am!

Not many younger than 35 at best get that...
( smiles )

Just wanted to add..
War Games! The program decided that the game could not be won... so it stopped... and wanted a nice game of chess.



posted on Nov, 12 2014 @ 12:10 AM
a reply to: hellobruce

I find it interesting that the ship's name is "Dark Star" ...



posted on Nov, 12 2014 @ 12:36 AM

originally posted by: MystikMushroom
a reply to: hellobruce

I find it interesting that the ship's name is "Dark Star" ...

You are correct! John Carpenter.. but Kubrick for the win!

edit on 12-11-2014 by Bigburgh because: because no one knows what a carpenter is now.. only a 3D printer



posted on Nov, 12 2014 @ 12:46 AM

originally posted by: MystikMushroom
a reply to: hellobruce

I find it interesting that the ship's name is "Dark Star" ...




Edit: we are trailing off subject.. me too.
OP is talking real time..
We referenced writers... is this relevant? I went off course because it was brought up.. is this OK?
I'm asking sincerely..
I don't want this to go off topic.



posted on Nov, 12 2014 @ 04:39 AM
a reply to: neoholographic

"Fearing Bombs That Can Pick Whom to Kill"

The title sounds like the premise of the new Call of Duty: Advanced Warfare game. And just like in the game, I imagine any future weapons we design to target a specific set of people will be of the biological variety. As for allowing some form of AI to designate our targets, it will all end in tears. What happens when the AI decides its own creators are the real threat?



posted on Nov, 12 2014 @ 04:51 AM


And according to rumor, that picture shows the processor that the team aboard the vanished flight MH370 were working on. Rumor has it that it can put whatever military has it 10 years ahead of the game.



posted on Nov, 12 2014 @ 05:47 AM
a reply to: neoholographic

The idea of a low collateral weapon, i.e. one which strikes in such a way as to only pose a risk to those who should be on its receiving end, is a very enticing one. However, where the idea of a guided bomb is concerned, one must consider the target.

If the target is a compound rammed to the ceiling with terrorists, and only terrorists, then dropping a bomb which wipes that compound off the map is all fair enough, and no one would bat an eyelid. However, that is rarely the case, and very often it is impossible to deal damage to the target with a bomb, without causing harm to innocent parties in the vicinity.

Let me give you an example. Let's say that there is a building, four floors high, with a one hundred meter floor plan. Terrorists are on floors two and three, with floors one and four being occupied by innocent families and non-combatants. Let us also say that a bomb is dropped with a minimum explosive radius of one hundred meters on detonation, and that everything inside that radius will be pulverised by the detonation. No matter how well aimed that bomb is, that whole building full of people, combatant and non-combatant alike, will die or be seriously wounded by that assault, whether the bomb hits two feet wide of its target area or hits it dead on. It does not matter whether the bomb is aimed by a computer or by a human being; the effect is going to be identical.
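The arithmetic behind that example can be sketched as a quick check. This is just an illustrative sketch using the hypothetical numbers from the paragraph above (a 100 m building, a 100 m lethal radius, and a roughly two-foot aiming error); the function name and the one-dimensional worst-case model are my own simplifications, not anyone's actual targeting maths:

```python
# Sketch of the point above: when the lethal radius dwarfs the aiming
# error, aiming precision is irrelevant to who gets hurt.

def building_fully_in_blast(aim_error_m, building_halfwidth_m, blast_radius_m):
    # Worst-case distance from the impact point to the far edge of the
    # building: the aiming error plus half the building's width (1-D model).
    return aim_error_m + building_halfwidth_m <= blast_radius_m

# 100 m floor plan -> ~50 m half-width; 100 m lethal radius.
print(building_fully_in_blast(0.0, 50.0, 100.0))   # dead-on hit: True
print(building_fully_in_blast(0.6, 50.0, 100.0))   # ~2 ft wide: still True
```

Either way the whole building sits inside the lethal radius, which is the post's point: the guidance system, human or software, makes no difference at that scale.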

Now, if what they are saying is that they have developed bombs which can be software guided, and have a small enough explosive radius that they can be used to destroy the occupants of a single small room of a building, THAT would be precise enough to do a job worth doing. Frankly though, my attitude to these things is that precision targeting is nowhere near as important for doing the work of modern warfare as developing methods which do not destroy buildings and infrastructure, and cause little to no collateral damage to non-combatants.

Who steers the bomb is nowhere near as important as the contents of the target.



posted on Nov, 12 2014 @ 07:25 AM

originally posted by: OrphanApology
a reply to: Melbourne_Militia

There's also the possibility of artificial intelligence becoming so intelligent that it chooses not to kill anything. That wouldn't make for a very interesting sci-fi story, I guess.


I don't think that would happen. I think the scenario would be more like Isaac Asimov's "I, Robot" than "War Games", where the artificial intelligence seeks to control the humans to "protect them from themselves."

I, for one, do not welcome our new artificially intelligent overlords.



posted on Nov, 12 2014 @ 10:00 AM
a reply to: Bigburgh

I was referring to a secret space plane program that may or may not have actually existed. Rumors of "Dark Star" or "Black Star" have floated around for years...



posted on Nov, 12 2014 @ 10:20 AM

originally posted by: OrphanApology
a reply to: Melbourne_Militia

There's also the possibility of artificial intelligence becoming so intelligent that it chooses not to kill anything. That wouldn't make for a very interesting sci-fi story, I guess.


There was "Dark Star",

en.wikipedia.org...


