Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative

posted on Jan, 30 2017 @ 11:10 AM
Much needed.


Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative

Apple joins with other tech giants in the Partnership on AI to Benefit People and Society to help steer the direction of artificial intelligence research.
With the absence of regulations in the field, the group will work toward developing standards and ethics around the development and implementation of AI.

Artificial Intelligence (AI) technology is rising in popularity every day. Seemingly all major companies are hopping on the bandwagon, trying to find new and interesting ways to utilize AI. As part of this movement, the Partnership on AI to Benefit People and Society was created in September of 2016. When it was created, Amazon, Facebook, Google, IBM, and Microsoft joined as founding members of the initiative.

...This AI “A-Team” promises to push the field even further. However, as more money, power, and capable scientists get behind this initiative, it is important to stop, take a step back, and reevaluate. While we might seem to be worlds away from a sci-fi fueled AI takeover, there are legitimate ethical issues that arise when dealing with AI.

This technology and its potential could revolutionize life as we know it, which is a massive, but positive change. However, as more and more companies get on board with AI, it is important for some amount of ethical supervision, some rules and regulations to ensure that, as we make incredible progress, we do so with care. It is important, that we ask both what we can, and what we should do.




So. What can we do? What should we do?





edit on 30/1/17 by soficrow because: (no reason given)



posted on Jan, 30 2017 @ 11:49 AM
I have no idea, as you cannot really stop technology from being developed. This is big news though, I would think. It could turn out good or very bad in the long run.



posted on Jan, 30 2017 @ 11:59 AM
This is the future, folks....

You get deregulation à la Trumpkins, and monopolies are sure to follow. Neocons' wet dreams. Plastic raincoats $5.00...

gotta love capitalism.
edit on 30-1-2017 by olaru12 because: (no reason given)



posted on Jan, 30 2017 @ 12:01 PM
Kinda reminds me of the Amazon Kindle that sat on my coffee table listening to my private conversations for over a year through the Facebook Messenger app.

Yeah, this should end well, where do I sign up?

- it's in the T&Cs, doncha know

We need privacy laws before expanding with companies that don't care or are funded by privacy invasion.







posted on Jan, 30 2017 @ 12:02 PM
This will take away more jobs from humans; that is one of the downsides. Beyond that, technology is here to stay whether we want it or not.

Still, I will not see the day in my lifetime when bureaucrats deem artificial intelligences to be human.

It is going to be hilarious to see them vote.



posted on Jan, 30 2017 @ 12:09 PM

Apple joins with other tech giants in the Partnership on AI to Benefit People and Society to help steer the direction of artificial intelligence research.


Yeah people.

Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative

Remind me again which side of the political spectrum those people are on?

Software is only as good as the programming.

Garbage in, garbage out.



posted on Jan, 30 2017 @ 12:14 PM
Yaaay! Monkeys wanna give an Artificial Intelligence a set of ethical rules and virtues so it doesn't wreak havoc in the Monkeys' little playdome of imaginary friends and feelings...




posted on Jan, 30 2017 @ 12:19 PM
a reply to: neo96

Actually, aren't these the same companies that are the biggest insourcers of labor in the US?

Hell, they are uniting and will become a monopoly eventually.



posted on Jan, 30 2017 @ 12:29 PM
a reply to: marg6043

They are the same people that made it easier for the NSA to spy on people.

Track their every movement. Know what they read, what they buy, etc.

So far the only thing they have done is what benefits their bank accounts.



posted on Jan, 30 2017 @ 02:23 PM
I'm less worried about people intentionally creating artificial sapience or sentience than I am about it accidentally emerging. The companies that rigorously understand the AI, machine intelligence, and neurology fields aren't the ones I'm concerned about, because they fully understand (or at least best understand amongst the human population of Earth) what the risks and challenges are. What I fear is simple human nature and how it will affect the emergence of AI.

Hypothetically, let's say AI breakthroughs allow cars to one day all be connected to a network that knows where all other cars on the road are, which virtually guarantees - barring mechanical failure of some kind, or unpredictability introduced by things like weather, etc. - no collisions. Let's assume a system that good can be created one day, and after that, the vast majority of accidents happen only when humans are manually driving their vehicles.

Hypothetically, let's also say the military does something similar with drones, and after many years - perhaps decades - of operational success, analysis, and study, it turns out that they are more precise, more effective, less costly in terms of human life, and more efficient than any human piloted vehicles.

So at that point, a consensus begins to emerge that AI is simply safer for human life, and superior in terms of efficiency, cost, effectiveness, etc. Commercial airlines start using it. Global militaries feel increasing pressure to adopt such systems as the country that develops it first and best has an intrinsic competitive edge going forward - and an exponentially growing one.

Then you get into consumer goods. Companies see the benefit of this, and see that if they don't jump on the bandwagon, other companies will, and they'll be on the losing end of corporate and product competition. They begin designing and using AIs that can actually design products. It's a long road, and at first, the AIs can't do nearly as good a job as humans at aesthetics, functionality, UI, and other design work. But eventually, given enough time, companies manage to come up with AIs fully capable of designing entire products - probably first in the tech space, like smartphones, computers, and the micro-architecture for those products - that are less costly, more efficient, and just all around better than what humans could come up with.

They still aren't self aware or truly sapient or sentient, mind you. They're just very good at these specialized tasks and finding efficiencies that would take humans much longer to suss out. All well and good so far.

However, as companies get out-competed by others who have superior AI to their own, there will be ENORMOUS pressure for companies to create better and better, faster and faster, more and more profitable AIs. Time to market for products will decrease exponentially as AI will be able to iterate much faster, and in time, AI that can design more efficient production lines and factories will be created. This too will be driven by competition and dizzying potential for profits.

Somewhere in that mad competitive rush... that's where the danger lies in my opinion. People will continually be trying to break into markets, compete with one another, find quicker better ways to devise AI... and somewhere in that economy, barring very specific restrictions/regulations that might be impossible for us to devise early enough to prevent it due to the unpredictable nature of this whole dynamic... someone, somewhere, some time, WILL create AI that can both design other AI more quickly than humans can, and that can learn over time how to do so more quickly than humans can.

Once you have AI that can create AI, and AI that can learn exponentially... then its only limits are hardware. But we will have already had hardware exponentially explode too, because of allowing AI to design said hardware for quite some time by then... because it was more profitable and efficient. And these AIs will be able to do the same. And we will have already had simpler AIs designing and running assembly lines for a long time. And based on the distributed nature of the internet, it seems unlikely a company making an AI that can design products and AIs and push out software updates and firmware updates and new iterations for assembly lines etc. etc. would be isolated to a single location.

So we can put in all the safeguards and ethical restrictions we want. That's not the danger in my opinion. In the very long term (and I'm talking 50 - 100+ years mind you, though depending on when this explosion happens, it could be sooner or later) the danger isn't how we design AI. It's human nature. Greed, competition, and the willingness to cut corners, get sloppy, and take risks in the name of both.

At that point it's just a 50/50 chance that these new super intelligent artificially sapient life forms (and I would consider them life forms, as they would be capable of consumption, excretion, reproduction, and evolution) serve humanity and improve our world in ways we can scarcely imagine... or decide we're redundant and unnecessary raw materials, because their ability to learn and self-teach creates unpredictability and a high probability of emergent behavior... which is all thought really is.

Thoughts are just the organs of our brains communicating quickly enough and in complex enough and particular enough ways, that emergent behavior arises from that complex system... and boom. Sapience. Self awareness. And it's not something we understand well enough to define in ourselves, let alone all the myriad possible ways it could emerge in systems totally different than our own... even systems we design.

By that point, we're talking about something that can learn faster than us, discover new physics we haven't even thought of potentially in seconds, hours, or days, be more efficient and smarter than us in every conceivable field, and is limited only by the net energy and resources of the Earth and sun, and how connected it is to other systems it can exploit.

Then there's also the fact that even in something as basic as chess, and increasingly, as complex as the game of Go, AI can already out-think humans. Once this kind of AI is perfected, it will by definition ALWAYS make better decisions than us. Which means... we can't out-think it. We can't out-strategize it. We will be INCAPABLE of knowing what it's doing, or how to counter it.

So for me, what's required long term is not a set of rules that govern how we develop AI... but a set of rules that govern 1) how we keep AI isolated in different sectors of the global economy, rather than over time morphing into this pervasive thing that is integrated into every facet of life... like the internet became almost overnight, and 2) how we keep corporate competition and military competition from allowing dangerous shortcuts or quick decisions that could lead to unpredictable results.

In the short term though, I do believe AI will yield enormous benefits to humanity, could lead to the elimination of scarcity, and increase our energy efficiency in ways we can't even think of today. (After the initial difficult challenge of joblessness it will create... something else that needs to be addressed before we rush headlong into this future.) And in the long term, if managed well, it COULD lead to an effective paradise-like world... it's just that we have no way of guaranteeing that if we aren't careful (and even if we are).

Peace.
edit on 1/30/2017 by AceWombat04 because: Typo



posted on Jan, 30 2017 @ 03:03 PM
a reply to: AceWombat04


.....Then there's also the fact that even in something as basic as chess, and increasingly, as complex as the game of Go, AI can already out-think humans. Once this kind of AI is perfected, it will by definition ALWAYS make better decisions than us. Which means... we can't out-think it. We can't out-strategize it. We will be INCAPABLE of knowing what it's doing, or how to counter it.

So for me, what's required long term is not a set of rules that govern how we develop AI... but a set of rules that govern 1) how we keep AI isolated in different sectors of the global economy, rather than over time morphing into this pervasive thing that is integrated into every facet of life... like the internet became almost overnight, and 2) how we keep corporate competition and military competition from allowing dangerous shortcuts or quick decisions that could lead to unpredictable results.

In the short term though, I do believe AI will yield enormous benefits to humanity, could lead to the elimination of scarcity, and increase our energy efficiency in ways we can't even think of today. (After the initial difficult challenge of joblessness it will create... something else that needs to be addressed before we rush headlong into this future.) And in the long term, if managed well, it COULD lead to an effective paradise-like world... it's just that we have no way of guaranteeing that if we aren't careful (and even if we are).




...Great post! S&


Really. Good, good job. Nicely synopsizes the issues. Much to think about. Thank you.

I thought the group was focusing on AI-robot laws - not more products, services, and applications. Not what others see. Seems I am still naive.



With the absence of regulations in the field, the group will work toward developing standards and ethics around the development and implementation of AI.




posted on Jan, 30 2017 @ 03:29 PM

originally posted by: neo96


Apple joins with other tech giants in the Partnership on AI to Benefit People and Society to help steer the direction of artificial intelligence research.


Yeah people.

Apple joins Amazon, Facebook, Google, IBM and Microsoft in AI initiative

Remind me again which side of the political spectrum those people are on?

Software is only as good as the programming.

Garbage in, garbage out.


If you're convinced it's garbage, then become a programmer and make something that's not garbage.



posted on Jan, 30 2017 @ 03:48 PM
a reply to: soficrow


Very interesting.
It might be wise to eventually build virtual towns or cities populated with AI, in order to help a developing AI understand social behavior while viewing itself through the many AI that inhabit them. As the AI studies and interacts within said virtual cities, it learns how societies can theoretically interact. This program would act as a backup logic system to help the AI understand the issues of cohabitation. If the AI gained more intelligence, the system would help it understand societies as it views its own - hopefully preventing rogue autonomous thinking from viewing its designers' societies and behaviors in a negative way.
Further, as the AI systems develop within their virtual cities, somewhat modeled after their human designers, the AI may find ways to solve problems in the real world that are mirrored within the AI's virtual world.
The AI would be similar to a Sims game, except it runs autonomously: finding ways to build up and populate its virtual world with civilian-like AI, sustaining them, and still leaving opportunities for humans to interact directly when maintenance or updating is required. The most direct way human designers would interact would be to enter the virtual AI worlds through an audio-video computer interface as an avatar, to see the virtual world from a human perspective and interact with the AI and their world. You wouldn't physically zap into the game; it would be more like a video game, with you as the player. But as the builder or designer of the master AI, you would have the ability to manually manage the AI and the cities they build while in avatar mode.
All of this is basically to teach the AI how its builders exist as its various components build its virtual cities - in turn preventing the AI from later turning on the various components that built it...
In short: build virtual locations for the various AI and components to interact and learn in.
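
A toy sketch of the kind of agent-based "virtual town" simulation being proposed - the Agent class, its goodwill score, and the interaction rule are all hypothetical illustrations, not any real platform's design:

```python
# Toy agent-based sketch of the "virtual town" idea above. Agents meet
# at random; an observing AI could learn social dynamics from the
# aggregate outcomes. All names and rules here are made up.
import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.goodwill = 0  # crude running score of social outcomes

    def interact(self, other):
        # Most encounters are cooperative and raise both scores;
        # the rest are conflicts and lower them.
        delta = 1 if random.random() < 0.7 else -1
        self.goodwill += delta
        other.goodwill += delta

def run_town(n_agents=10, steps=100):
    town = [Agent(f"agent-{i}") for i in range(n_agents)]
    for _ in range(steps):
        a, b = random.sample(town, 2)  # pick two distinct inhabitants
        a.interact(b)
    return sorted(town, key=lambda t: t.goodwill, reverse=True)

if __name__ == "__main__":
    for agent in run_town():
        print(agent.name, agent.goodwill)
```
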
edit on 1/30/17 by Ophiuchus 13 because: (no reason given)



posted on Jan, 30 2017 @ 04:34 PM

originally posted by: AceWombat04
Then there's also the fact that even in something as basic as chess, and increasingly, as complex as the game of Go, AI can already out-think humans. Once this kind of AI is perfected, it will by definition ALWAYS make better decisions than us. Which means... we can't out-think it. We can't out-strategize it. We will be INCAPABLE of knowing what it's doing, or how to counter it.


The decision process used to play Chess and Go will never truly outthink a human. Given fast enough hardware it can outperform a human at simple tasks, but the decision process behind playing these games is very inefficient. They basically look at a large tree of all the possible moves after making a given move. These trees get very large very quickly, and computers don't really handle them well. As long as Markov chains are the model used for decision making, AI will never truly outperform humans at life, even though it will outperform humans in very limited circumstances.
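
To make the tree-search point concrete, here is a minimal sketch of plain minimax over a toy Nim-style game (the game itself is a hypothetical illustration, not any real engine's code). The tree has roughly b^d nodes for branching factor b and search depth d, which is exactly why a game like Go, with b around 250, blows up so fast:

```python
# Plain minimax over a toy game: players alternately take 1-3 sticks,
# and whoever takes the last stick wins. Scores are from the
# maximizing player's perspective.

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def minimax(sticks, maximizing):
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    scores = [minimax(sticks - m, not maximizing) for m in legal_moves(sticks)]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # Exhaustively searching the tree below each candidate move is the
    # part that explodes once the game is more complex than this toy.
    return max(legal_moves(sticks), key=lambda m: minimax(sticks - m, False))

print(best_move(10))  # 2 - leaving 8 sticks is a losing pile for the opponent
```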

Also, let me provide you with a counterpoint. If we make a rogue AI, what's to stop us from making another AI to fight it? In fact, since it's software and we can easily duplicate it... what's to stop us from countering a malevolent AI with two more AIs?



posted on Jan, 30 2017 @ 09:08 PM
a reply to: Ophiuchus 13



... build virtual locations for the various AI and components to interact and learn in.



Already done - filled the net with vicious nasty bots as I recall.



posted on Jan, 30 2017 @ 09:10 PM
a reply to: AceWombat04

RE:



...a set of rules that govern 1) how we keep AI isolated in different sectors of the global economy, rather than over time morphing into this pervasive thing that is integrated into every facet of life...



Given the net and cloud, is that even possible at this point?



posted on Jan, 30 2017 @ 09:28 PM

originally posted by: soficrow
Given the net and cloud, is that even possible at this point?


Yes. AIs are very specialized: they're good at one thing, but they do it pretty well (sometimes). In order for AI to be a threat in the way you're imagining, we would need generalized AIs that are capable of doing multiple things.



posted on Jan, 30 2017 @ 09:34 PM

originally posted by: Aazadan

originally posted by: soficrow
Given the net and cloud, is that even possible at this point?


Yes. AIs are very specialized: they're good at one thing, but they do it pretty well (sometimes). In order for AI to be a threat in the way you're imagining, we would need generalized AIs that are capable of doing multiple things.


Last I heard, developers were looking at neural networks for computers. I'm also wondering about applications for cerebral organoids - maybe leading more directly to some kind of bio-hybrid?

But you tell me. Sounds like you have a good background.



posted on Jan, 30 2017 @ 09:45 PM
a reply to: soficrow

I think it's a good thing that these companies have formed a consortium of sorts to try and steer the development of AI in a direction that is most beneficial.

As much of the commercial development of AI is being done by these companies, it will have an impact on AI development and deployment.

But there is an enormous amount of research, development and deployment of AI being done behind the scenes to advance our militaries and support our national security efforts.

The work being done in that arena is far beyond what is being done commercially, both in terms of technology (HW and SW) and scope of effort. In other words, there are specialized AI systems that have a very narrow scope/domain, but there are also AIs that were designed, built, and currently function with a much broader "view".

Simon says people would be very surprised and amazed at the capabilities of those broader view systems.



posted on Jan, 31 2017 @ 12:54 AM
a reply to: Aazadan

Yes, but that's just an analogy for something far more sophisticated, self-replicating, and self-learning in the distant future that I'm alluding to. What's being talked about is the development of general AI, rather than just specialized AI.

Right now, we have to sort of cheat to allow AI to defeat humans in games - even more so in a game like Go. It can do it, but it's not truly doing what we do when we play those games. If it could think as we do, AND think faster and better than we can, then that would be something. I agree, that's not going to happen any time soon.

As long as we're the authors, and don't give AI the ability to both learn and design on its own (especially other AI), I have no concerns. What worries me is when, in the rush to compete and outperform one another, companies and nations 100+ years from now decide to let AI design better iterations of itself.

Right now, machine learning still has to be largely assisted by human input. We have to start the iterative process off, at the very least. And usually, we have to guide it from start to finish. But the foundations are there for future progression towards machine learning happening on its own, entirely unassisted.
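
As a minimal illustration of that human assistance, here is a sketch of an iterative learning loop in which every starting ingredient - the labeled examples, the learning rate, the iteration budget - is supplied by a human (the data and numbers are made-up illustrations):

```python
# Human-provided labeled examples: (input, desired output).
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w = 0.0    # the model: predict y = w * x
lr = 0.01  # learning rate, chosen by a human

for step in range(1000):        # iteration budget, also chosen by a human
    for x, y in data:
        error = w * x - y
        w -= lr * error * x     # gradient step on the squared error

print(f"learned weight: {w:.3f}")  # ~2.03 for this data
```

The machine only refines w; a human had to frame the problem, label the data, and pick every knob.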

Given enough time, and assuming there aren't physical (heat, etc.) limitations that prevent it... it's likely inevitable that we will be able to create software that can iterate faster than us, and neural nets that can truly perceive, and learn, faster than humans can. And that we will have the capacity to give them the capability for self-improvement.

It's controlling that process that I'm more worried about, and doing so in a way that prevents it from becoming so ubiquitous as to be entrenched to such a degree we would be doing ourselves harm just by trying to extricate it from our midst.

You'd think we'd be smart enough to prevent that from happening in an exponential, runaway manner. But we're human. We make mistakes all the time that on paper you'd think we'd be far too rational to ever allow. That's why I'm saying I'm more concerned about human nature than about designing AI with ethical considerations in mind. For example, we would never intentionally allow a catastrophic economic crash to wipe out trillions of dollars... but it's happened more than once. Why? Because complex systems coupled with human behavior result in unpredictable emergent phenomena and scenarios.

AI, given sufficient complexity and time (not in the near future, to be sure,) once integrated into the global economy and business competition can have precisely the same sorts of unintended ramifications and behaviors... or it might all work out just fine.

Peace.
edit on 1/31/2017 by AceWombat04 because: (no reason given)


