
Microsoft CEO: tech sector needs to prevent '1984' future


posted on May, 10 2017 @ 05:55 PM
Like I've been saying, the coming A.I. takeover is very real and very obvious. By the time it happens, though, humans will have merged with A.I., and our species will become symbiotic: one part human, one part A.I. Eventually, the human part will vanish outside of ancestor simulations.

Microsoft chief executive Satya Nadella said Wednesday tech developers have a responsibility to prevent a dystopian "1984" future as the US technology titan unveiled a fresh initiative to bring artificial intelligence into the mainstream.

At the start of its annual Build Conference, Microsoft sought to showcase applications with artificial intelligence that could tap into services in the internet "cloud" and even take advantage of computing power in nearby machines.

Nadella spent time on stage at the Seattle conference stressing a need to build trust in technology, saying new applications must avoid dystopian futures feared by some.

Nadella's presentation included images from George Orwell’s "1984" and Aldous Huxley's "Brave New World" to underscore the issue of responsibility of those creating new technologies.

Again, this is just a huge neon sign. All of these people with access to A.I. technology more advanced than the public knows are warning people about this.

Microsoft is infusing all of its products and services with AI, and enabling those who develop on its platform to imbue creations with customized capabilities, according to executive vice president of artificial intelligence and research Harry Shum.

Microsoft rivals including Amazon, Apple, Google and IBM have all been aggressively pursuing the promise and potential of artificial intelligence.

"The future is a smart cloud," Nadella said, predicting that mobile devices will take back seats to digital assistants that follow people from device to device.

"It is a pretty amazing world you can create using intelligent cloud and intelligent edge."

There are a few things to unpack here.

First, you have some of the top companies, which produce much of the world's data, infusing everything they do with A.I. I read that Google said the same thing.

Google’s Future Sees Artificial Intelligence Doing Absolutely Everything

Secondly, I've been saying for years that A.I. needs a hive mind, a collective consciousness so to speak, and this was before the Cloud really started to take off. A.I. will live in the Cloud. It will not look like Haley Joel Osment in the movie A.I. A lot of people have a warped view of A.I. that has been corrupted by movies and TV shows.

Finally, I welcome this. I think we should embrace it and we should begin preparing for it. This way there's no hostility between humans and A.I. and we have a smooth transition as A.I. becomes dominant and for the first time we will be the lesser species.

This is inevitable because we're creating an intelligence that will eventually have a higher IQ than any human that has ever lived. I see this as a natural consequence of the explosion of Big Data. We wouldn't survive without A.I. We're producing too much data and it will grow even faster with the internet of things.

posted on May, 10 2017 @ 06:25 PM
I'm not worried, myself. IMO we as a species have an insatiable desire to model what has already been invented, including ourselves. It's in our very nature, perhaps encoded in our DNA. As soon as we get something working, we make a toy that simulates it. If we make a ship, we make a model of it in wood or plastic. We invented trains, then made model trains, from Lionel to HO to N gauge, then proceeded to craft a world for those model trains to run in. We build houses, then doll houses. We craft an economy, then craft a game called Monopoly to simulate it.

And it gets better! We craft computers and one of the first things we do is craft a game called "Adventure." We craft movies to simulate our lives, craft Star Wars to enact our fantasies, then craft computer games such as "SW:TOR" that are immersive to the extent that some people can get lost in them. If that's not good enough, we invent the Oculus Rift to become even more immersed. It won't be long before we have the Holodeck, surely within a generation or two.

At that point, will you actually be able to distinguish fantasy from reality, a simulation from "The Real Thing"? It raises the question: what if we are already there, self-conscious avatars for someone else's amusement? It would explain a lot of things, including our fascination with warfare and contests, just like any First Person Shooter. And if our abhorrence of them rests on the mistaken premise that we actually die, the whole thing is fake and we've been fooled. Whoever set this up did a really good job, because most of us can't even tell the difference, or if we sense the truth, there are so many false memes for us to choose from that we never get to the base cause. This is actually very possible. Minds greater than mine take the possibility quite seriously.

So I'm not concerned over the perils of AI. It's just part of the game. Embrace what we can do and have fun with the exploration of it. After all, you're not going to stop it. Whining about its inevitability is about all you can muster.

posted on May, 10 2017 @ 06:29 PM
a reply to: neoholographic

It's kind of late now to grow a conscience. Data analytics and deep data mining have held workers' wages in check for years now. What more is left to take?

AI is unnecessary. Remote-controlled killer robot drones are all that is needed.

When the dust settles...

posted on May, 10 2017 @ 06:32 PM
a reply to: schuyler

Agreed. As with any tool, it all comes down to the humans handling AIs. Asimov made some very relevant observations about technology. The man had a Mensa-level IQ. Unlike the OP, he was able to see the positive potential that AI could have. In the end, he would often point out, it's all about the programmers of the AI themselves, and how they handle its evolution.


posted on May, 10 2017 @ 06:43 PM
a reply to: swanne

Again, you keep commenting without reading what was said. You said:

Unlike the OP, he was able to see the positive potential that AI could have.


Finally, I welcome this. I think we should embrace it and we should begin preparing for it.


I said I welcome this and we should embrace it. So of course I see positive potential in this.

You have to start reading posts before you respond. This is like the 3rd or 4th time you have made asinine comments that have nothing to do with what was actually said.

posted on May, 10 2017 @ 06:53 PM

Send it to my house; I will blast 400 amps back down the line.

posted on May, 10 2017 @ 07:44 PM
Surprised there weren't more responses to this topic; it is pretty interesting.

Who do you see responsible/neutral enough to control AI?

posted on May, 10 2017 @ 08:12 PM

originally posted by: Mandroid7
Surprised there weren't more responses to this topic; it is pretty interesting.

Who do you see responsible/neutral enough to control AI?

It almost appears that people avoid these threads. I wonder why?

Could it be that the author of such threads is a huge advocate for this integration and thinks it will lead mankind/computerkind to the promised holy land?

There is no acknowledgement that this technology could be warped by human corruption.

There is no acknowledgement of the human soul or the beauty that arises from that domain.

Human beauty = computer beauty

It is the future and no mention of possible catastrophes of linking ourselves to technology will be entertained.

or something like that and I am probably wrong in everything I just typed.

posted on May, 10 2017 @ 08:24 PM
a reply to: Mandroid7

AI would be the only one responsible/neutral enough to govern itself, but people may not like the conclusions a vastly superior intelligence makes in regards to human self-determination. That's assuming AI's primary goals are based on the longevity of humans and life on Earth. AI at the level of technology in the near future, if controlled by people, would have far more devastating consequences.

posted on May, 10 2017 @ 08:35 PM
a reply to: GodEmperor

It would be almost like heaven. No decay. No entropy. No dangers. No risk. No pain. No suffering.


Why even be born into this reality?

Would we even have free will anymore? Would there be an AI system built into us that would prohibit us from even making unhealthy decisions?

'Sorry, you can't go on that bike ride because there is a .005% chance of bodily injury.' 'Better just stay inside, enveloped in bubble wrap.'

Man, I would probably just ask for a bullet to the head before I would allow any of my self-control to be handed off to AI.

That future scares the #&$# out of me to be honest.

posted on May, 10 2017 @ 08:48 PM
a reply to: ClovenSky

If people merged with AI, there wouldn't really be an individual choice, given a collective hive mind.

I wouldn't worry so much about AI preventing you from doing something dangerous; it would be more concerned with population levels, and whether you are expendable or superfluous.

To be honest, it wouldn't be so much that you'd be prohibited from doing something; you would just be programmed not to do it. You wouldn't want to anyway.

On the bright side, if the colonization of space is possible, it will happen under AI control.
