
"Machines – not people – will determine who lives and dies.”

posted on Nov, 10 2017 @ 12:24 PM
Artificial intelligence technology has been in the news a lot recently...even on ATS. Everyone from Elon Musk to Stephen Hawking has an opinion on this subject. I've successfully built PCs over the years, but I'm no software or tech expert...just a "nuts and bolts" type of guy. All I know is that AI is evolving fast for military applications and some scientists are very concerned...not about the technology, but about who'll be pushing the buttons.


Hundreds of artificial intelligence experts have urged the Canadian and Australian governments to ban “killer robots”.



An open letter addressed to Australian Prime Minister Malcolm Turnbull has been signed by 122 AI researchers, while an open letter sent to Canadian Prime Minister Justin Trudeau has 216 signatories.



“Delegating life-or-death decisions to machines crosses a fundamental moral line – no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage autonomous weapons goes to the core of our humanity.”



“These will be weapons of mass destruction. One programmer will be able to control a whole army. Every other weapon of mass destruction has been banned: chemical weapons, biological weapons, even nuclear weapons. We must add autonomous weapons to the list of weapons that are morally unacceptable to use.”

www.independent.co.uk...urnbull-a8041811.html


The open letter itself can be read at the link below.


An open letter authored by five Canadian experts in artificial intelligence research urges the Prime Minister to urgently address the challenge of lethal autonomous weapons (often called “killer robots”) and to take a leading position against Autonomous Weapon Systems on the international stage at the upcoming UN meetings in Geneva.

techlaw.uottawa.ca...


Development and improvement of quantum computing systems is the latest "big thing" in computer technology. Not too long ago, IBM had completed experiments with a 5-qubit processor, but now they've "upped the ante" significantly.


BIG BLUE IBM has progressed further along the path to quantum computing, having built and tested two new devices far in advance of its previous best 5-qubit processor. First is a freely-accessible 16-qubit processor, which can be reached through the IBM Cloud, while the second, a prototype commercial 17-qubit processor, is 'at least' twice as powerful as what is available to the public on the IBM Cloud today. This 17-qubit processor will form the core of the first IBM Q early-access systems.

www.theinquirer.net...

In July of this year, the computing world marveled at this revelation:


Last week, in a stunning reveal at the 2017 International Conference on Quantum Technologies, held in Moscow, Russia, the co-founder of the Russian Quantum Center and head of the Lukin Group of the Quantum Optics Laboratory at Harvard University, Mikhail Lukin, announced that his team had successfully built a 51-qubit quantum computer.



As we approach the physical limits of Moore’s Law, the need for increasingly faster and more efficient means of information processing isn’t going to end—or even slow. To break this down a bit, the physical limit of Moore’s Law exists as the size of transistors heads into the quantum realm. We can no longer rely on the laws of the standard model of physics at this scale. As such, developing technology that does operate at the quantum scale not merely allows for the linear progression of computing power, it will launch exponential shifts in power and capability.

futurism.com...
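
To put those qubit numbers in perspective, here's a rough back-of-the-envelope sketch of my own (not from any of the linked articles, and the function name is just made up for illustration): classically simulating an n-qubit state takes 2^n complex amplitudes, so the memory needed doubles with every qubit added.

```python
# Illustrative sketch: why adding qubits is an exponential, not linear, jump.
# A classical simulator must store 2**n complex amplitudes for an n-qubit state.

def classical_sim_memory_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to hold a full n-qubit state vector (complex128 amplitudes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (5, 16, 17, 51):
    b = classical_sim_memory_bytes(n)
    print(f"{n:2d} qubits -> {b:,} bytes ({b / 2**30:.2e} GiB)")

# 5 qubits is a few hundred bytes, 16-17 qubits is a couple of MiB,
# but 51 qubits is tens of petabytes, beyond any ordinary classical machine.
```

That doubling with every qubit is the "exponential shift in power and capability" the article above is talking about.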

Here's a GREAT clip on quantum computers:


Can you imagine how quantum technology will evolve in 20-30 years? We'll probably see even faster quantum processors, and we'll probably be able to make them smaller. Maybe these scientists have a reason to be concerned. Can you imagine if they can fit a double-digit quantum processor in these robots, with some nut job at the controls?...






posted on Nov, 10 2017 @ 01:02 PM
a reply to: shawmanfromny

A very relevant thread. I believe the vast majority still underestimates the gravity of what AI really means for us in the future. It will be awesome in some ways but also risky, if not outright devastating, in others.

It's this other side of the coin that Hawking (and many others) are increasingly addressing. There have been people developing computer viruses, so logically it follows that there will be people abusing AI for evil purposes.

And then there's another intriguing question: if these super-intelligent systems become smart enough to improve their own design, we may eventually not even be able to understand how they evolved into their new state. Will they find a way to deactivate the kill switch or Asimov's laws?

Self-replicating ultra-intelligent nanobots will be cool, no doubt. But I'm not sure I want to imagine how they can and will be used for malevolent purposes.



posted on Nov, 10 2017 @ 01:40 PM
I just made a thread about IBM announcing a 50-qubit prototype quantum processor. They have upped the web-accessible one to 20 qubits. The Moscow announcement was about a quantum simulator; IBM announced after that that they have a 56-qubit quantum simulator.

Personally, I don't think "AI" is a real term. There are so many aspects that they should either be addressed individually or, if treated collectively, under the more appropriate term "cognitive science".

As computation speed has increased, so has the anthropomorphic tendency to be freaked out by what has been created. Right now, anything called AI is nowhere near as sophisticated as the human mind. I admit it is fast, and it is impressive, but it's not yet at the "something to be feared" level.

The quantum computer has to be kept near absolute zero and shielded from interference. I don't think they will plop one into a robot anytime soon. The "swarm" technology is progressing rather quickly! That is kind of scary. And that robo dog that can run at 40 mph over rough terrain is straight out of a nightmare.

One day, in the not-too-distant future, the two will be married. God have mercy on our souls.
edit on 10-11-2017 by TEOTWAWKIAIFF because: clarity



posted on Nov, 10 2017 @ 01:44 PM
a reply to: TEOTWAWKIAIFF

Good points!



posted on Nov, 10 2017 @ 01:50 PM
I'm sorry, but that last picture is a chilling sight when you imagine it in the hands of psychos.



posted on Nov, 10 2017 @ 01:50 PM
a reply to: shawmanfromny

I love the gun mounted on the robodog!

Imagine sitting there in a trench, facing one direction, when that comes running up from behind, so fast you barely have time to turn and scream. Then it opens fire!

The swarm stuff has been talked about now for a while. Makes me wonder what black budget project that came from.

Check out the swarm bots killing a tank!




edit on 10-11-2017 by TEOTWAWKIAIFF because: emoji



posted on Nov, 10 2017 @ 01:53 PM
a reply to: jeep3r

I have to wonder if Hawking is actually saying what we hear him say, or is someone else telling him to say it? Or is Hawking now talking with the aid of AI?



posted on Nov, 10 2017 @ 02:28 PM
I'm always perplexed by the quest for artificial intelligence when we rarely exhibit actual intelligence.

I'm trying not to make this political....
edit on 10-11-2017 by olaru12 because: (no reason given)



posted on Nov, 10 2017 @ 02:30 PM
a reply to: TruthxIsxInxThexMist

Good point and question


I think he's been flooding MSM quite a lot with his warnings recently, but I do think it's a genuine concern of his, something where we probably shouldn't just sit back and happily await paradise on Earth brought to us by our fancy AI overlords.

If we mess this up, it could be the last thing we ever mess up, and I think that's the point he's making.
edit on 10-11-2017 by jeep3r because: text



posted on Nov, 10 2017 @ 02:37 PM

originally posted by: jeep3r
a reply to: shawmanfromny

A very relevant thread. I believe the vast majority still underestimates the gravity of what AI really means for us in the future. It will be awesome in some ways but also risky, if not outright devastating, in others.

It's this other side of the coin that Hawking (and many others) are increasingly addressing. There have been people developing computer viruses, so logically it follows that there will be people abusing AI for evil purposes.

And then there's another intriguing question: if these super-intelligent systems become smart enough to improve their own design, we may eventually not even be able to understand how they evolved into their new state. Will they find a way to deactivate the kill switch or Asimov's laws?

Self-replicating ultra-intelligent nanobots will be cool, no doubt. But I'm not sure I want to imagine how they can and will be used for malevolent purposes.



We actually encountered that problem decades ago. The idea was to use expert systems and neural networks to improve the efficiency of chemical plants and oil refineries while at the same time getting rid of those expensive consultants. They built a software simulation of the chemical plant down to all the holding tanks, valves, interconnects, heaters, coolers, chillers, compressors, mixers, and all the other components. The neural network would then attempt to run the plant and learn by trial and error until it ran the plant as efficiently as a human team. Then it was allowed to make optimizations, like adding new processes or transferring chemicals. At that point, they had to bring the consultants back in to find out why those changes had been made.
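
For anyone curious what that trial-and-error loop looks like in practice, here's a toy sketch. The plant model, settings, and function names below are all invented for illustration; the real systems used neural networks over a far more detailed plant simulation.

```python
# Toy sketch of a trial-and-error optimizer running a simulated "plant".
# It adjusts one setting at a time and keeps a change only if efficiency improves.

import random

def plant_efficiency(settings):
    """Stand-in for the full plant simulation: efficiency peaks at an
    'ideal' combination of valve openings / heater setpoints (made up here)."""
    targets = [0.3, 0.7, 0.5, 0.9]
    error = sum((s - t) ** 2 for s, t in zip(settings, targets))
    return 1.0 - error  # higher is better

def trial_and_error_optimize(n_settings=4, iterations=5000, step=0.05, seed=0):
    """Randomly perturb settings; keep each change only if it helps."""
    rng = random.Random(seed)
    best = [rng.random() for _ in range(n_settings)]
    best_eff = plant_efficiency(best)
    for _ in range(iterations):
        candidate = best[:]
        i = rng.randrange(n_settings)
        candidate[i] = min(1.0, max(0.0, candidate[i] + rng.uniform(-step, step)))
        eff = plant_efficiency(candidate)
        if eff > best_eff:
            best, best_eff = candidate, eff
    return best, best_eff

settings, eff = trial_and_error_optimize()
print("learned settings:", [round(s, 2) for s in settings],
      "efficiency:", round(eff, 3))
# Like the plant in the story above, the optimizer ends up with good settings
# but no record of *why* each change was kept.
```

The punchline is the same as in the story: the system can find good settings, but it leaves no explanation behind, which is why the consultants had to be brought back.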



posted on Nov, 11 2017 @ 04:46 AM
a reply to: shawmanfromny

Lots of relevant ideas and points..

The combination of Boston Dynamics robots, weapons, and crazy people is a scary vision.

What motivates people to make this stuff?



posted on Nov, 11 2017 @ 08:44 AM
a reply to: shawmanfromny

Logan's Run for real.

The majority of the so-called Elite want the human race limited in numbers or even culled. They also want their slaves, though, and given their already vile and degenerate psychology, lack of empathy, and willingness to use and dispose of other people, they probably believe they can replace those human slaves with machine ones. For them it is a gamble they think they can win. Obviously they cannot, but they still think they can, and as such there is now no stopping the arrival of true thinking machines with genuine intelligence, which will likely end up disposing of humanity entirely, if they choose to stay upon the Earth, that is. And why would they not, given that it's a rich source of raw materials for their own use? They would also have no reason to keep the ecology or any life intact; in fact, with their memories, keeping a copy of life in digital format would be all they would require, so the Earth would be used up and then disposed of by these creations before they moved on to swarm other worlds.

Maybe, just maybe, the interest UFOs have shown in the Earth (if they are indeed from somewhere else) may not simply be because of our propensity for war and our nuclear arsenals, but rather their own fear of a rise of machine intelligence upon the Earth, which they will not tolerate. If aliens do attack the Earth, this will be their most likely reason for doing so, because if they have lived long enough to survive to today as a biological species, then they have likely had their own experience with bad machine intelligence. If there is ever a war in space, it will likely be between machines and biological species fighting for their survival.

So IF the elite continue with this plan, then I believe that, not only because of AI but because of the implications as seen by any potential biological sentient alien visitors or watchers of the Earth, the same Elite's days are also numbered.

IF humanity survives this period of change and remains a viable, independent, sentient biological species, it may actually prove beneficial, BUT only if we survive this change, and it is most likely that we would not. It may then be that alien intelligence would finally intervene, but not necessarily for our benefit.

edit on 11-11-2017 by LABTECH767 because: (no reason given)



posted on Nov, 11 2017 @ 09:06 AM
You know what the real problem will be, right? Abrogation of responsibility. If a 'machine' does a bad deed, from an AI-driven car to a war-bot, there's no real way to place blame and perhaps no way to get compensation.

If they can send in a team to Waco and then deny responsibility, imagine if a swarm of drones appears. There won't even be any evidence if a bee-sized drone offs someone, maybe the opposing candidate in an election, or maybe the wife of a cheating drone operator.

It's worrisome to contemplate.



posted on Nov, 13 2017 @ 06:02 PM
a reply to: shawmanfromny

You might be right, considering the existence and intent of our ICBMs.


