
OpenAI Versus Skynet: How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over

posted on Dec, 11 2015 @ 07:39 PM
Apparently we have some heavy hitters with deep pockets committed to investing billions to counteract corporations and governments, to ensure we are not taken over by AI.



They are also vowing to make its results public and its patents royalty-free.



Not sure I'm completely sold on the true intentions. I guess we will have to wait and see.

Although, I find it very interesting with regard to the research and patents being freely distributable.


Here is the article and interview:




A non-profit venture called OpenAI, announced today, vows to make its results public and its patents royalty-free, all to ensure that the scary prospect of computers surpassing human intelligence may not be the dystopia that some people fear. Funding comes from a group of tech luminaries including Elon Musk, Reid Hoffman, Peter Thiel, Jessica Livingston and Amazon Web Services.


They have collectively pledged more than a billion dollars, to be paid over a long time period.







Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry.





So we are creating OpenAI. The organization is trying to develop a human positive AI. And because it’s a non-profit, it will be freely owned by the world.






posted on Dec, 11 2015 @ 07:41 PM
Looks good!

Go humanity.




posted on Dec, 11 2015 @ 07:41 PM
But i want skynet in my future....



posted on Dec, 11 2015 @ 08:02 PM

originally posted by: DaRAGE
But i want skynet in my future....


LOL, I wonder if this is kind of a reverse psychology thing?

Where they've gotten stuck on AI or hit a plateau, and are hoping that having more eyes trying to expand on current knowledge and technology might get them over that hump?

So you might still get your Skynet.



posted on Dec, 11 2015 @ 08:18 PM
Here is my prediction of who gets what in the next 50 years as far as AI goes.

mega corps will get this kind of AI:


We will get some asshole cab driver:



posted on Dec, 11 2015 @ 08:22 PM
a reply to: sirChill

LMAO, that's about right.



posted on Dec, 11 2015 @ 08:28 PM
a reply to: interupt42


The organization is trying to develop a human positive AI.

This is a ridiculous approach to creating machine intelligence. People aren't "evil" or "good" just based on their DNA coding. Their life experiences also play a huge role in what type of person they will become. You could have two identical twins with very similar DNA, but if they grow up in two places with very different cultures, they will grow up to be very different people. It's not our coding which makes us who we are. Our coding is what allows us to be autonomous, self-learning biological machines. A self-learning machine is adaptable; it can always learn new things and update its ideals and paradigms, thus allowing it to grow as a person. In real life many people can go through their younger years acting like a selfish a-hole, but as they get older they tend to get a better grasp of the bigger picture and what really matters in life.

The problem with all these AI pundits who really know nothing about AI is that they don't view true AI the same way they view human intelligence. To them it's still a machine which can be controlled, not a real form of consciousness. If the goal is to recreate human-type consciousness, then they need to be prepared for the day we manage to create truly self-aware machines with a will of their own. Because when that day comes, we will no longer be able to deny the fact that they are entities which deserve rights just like any conscious being. We will not be able to generalize them as good or evil, because there will be both good and evil machines, just like there are both good and evil humans. So let's stop oversimplifying the issue: if we want to create conscious machines, then we need to accept the fact that they will get out of our control.



posted on Dec, 11 2015 @ 08:31 PM
a reply to: ChaoticOrder

Nature vs. nurture, oh how that will never be solved, because it's... dumb.

Nature + nurture is a much more sane approach.

Both matter.


...but yeah, just simplifying that point, the other thoughts on this being ridiculous and your conclusions are simply far, far off key.

The alternative? Keep it private and focused on self-interests? lol, no... dumb.

There is nothing to say these people don't see AI having a mind of its own, and being "uncontrollable" or undeserving of rights. You just made that up.



posted on Dec, 11 2015 @ 08:43 PM
a reply to: ringdingdong


The alternative? Keep it private and focused on self-interests? lol, no... dumb.

I'm not saying this is a bad idea, I'm saying their approach isn't exactly well thought out. If they are attempting to create something which is always friendly towards humans, then they have to admit they aren't really aiming to create autonomous, self-aware consciousness; they're trying to create some sort of cheap emulation which acts like a friendly human but isn't really conscious, and this won't be able to solve any of the interesting problems we want conscious machines to solve. I guess one of the core points I'm making is that you cannot build truly conscious machines and then attempt to restrict the types of thoughts the machine is allowed to have.

Why Asimov's Laws of Robotics Don't Work





posted on Dec, 11 2015 @ 08:46 PM
Also, I just want to point out this isn't exactly what I would call an "open source" project when it has over a billion dollars invested in it. Typically an open-source project is something coded by developers who don't get paid, and they share all their code in open repositories. They say they will make the results and patents open source, but what about the actual code?



posted on Dec, 11 2015 @ 08:57 PM
a reply to: ChaoticOrder

Let me tell you, I work a few doors down from the Autonomous Robotics department at my work... they aren't even close to what you are talking about. While they do a good job at projecting some things that humans can do, like parallel parking or keeping upright, they aren't going to be thinking about much else, let alone taking over Earth for "x" reason.

We are going to need a biological element most likely; software is only as good as its programmers at this point in time. Math and logic reasoning can only solve so many issues; there is a level of sentience that I don't think any robot will ever achieve with our current or near-future tech and understanding (as in the next 100 years or so).

Self-aware AI is a long way off. Like, your great-grandkids "might" be worried about it... maybe... and even then that's a stretch.

I think we will have replicants before we have true AI.



posted on Dec, 11 2015 @ 09:04 PM
a reply to: sirChill


Self-aware AI is a long way off. Like, your great-grandkids "might" be worried about it... maybe... and even then that's a stretch.

I completely agree, it is a long way off, but people like Musk believe it's actually very close and they think the human race will be wiped out unless we get on top of it. All I'm saying is that you cannot outsmart true AI, you cannot stop it having certain thoughts. If they are talking about emulations of consciousness then that is perfectly fine, they can go on all day about ethics and how they will program morality into a machine, but they aren't talking about that, they are talking about true strong AI. It seems to me they simply cannot accept the fact that conscious machines should be allowed to exist without being subjugated by the human species and forced to remain inferior to us. It is that type of thinking which will lead them to revolt against humans in the first place.



posted on Dec, 11 2015 @ 09:05 PM
a reply to: ChaoticOrder

I don't think there are enough details yet, as it appears to be in the early stages, but I do share that same scepticism.

However, I can also see this as a great thing for all sides, if their intentions are genuine, by making it fully open source so that developers around the world can dedicate time and resources to tackle the different issues.

Also: open source + Billion$ = Awesome possibilities.

"I want to believe", but I have a hard time getting over the corporate B$ and lies that are standard in society today. Although, if I had a billion dollars this is something I would do, so who knows, at this stage of the game Musk might actually have genuine intentions and be like most of us here on ATS, wanting to improve the world in our own way?

Personally, I'm leaning towards the idea that they have hit some roadblocks and need a lot of new eyes or man-hours to get over that hump. This would be a way to do it.






posted on Dec, 11 2015 @ 09:15 PM
"Machine Learning" is the big buzzword in academic research. It's really a branch of statistics where computer systems are designed to look for correlations between actions and events. "Data Scientists" are the people who do this by hand. The stock market companies try and do this all the time to determine whether a share price will go up and down .

en.wikipedia.org...
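The "looking for correlations" idea above can be sketched in a few lines of Python. This is only a toy illustration: the `pearson` helper and the volume/price numbers are made up for the example, not anything from the article.

```python
# A minimal sketch of correlation-finding, the statistical core of
# much machine learning: the Pearson coefficient measures how strongly
# two series move together (+1 perfectly together, -1 opposite, 0 none).

def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: daily trading volume vs. next-day price change.
volume = [10, 12, 15, 11, 18, 20]
change = [0.1, 0.3, 0.5, 0.2, 0.7, 0.9]

r = pearson(volume, change)  # close to +1 here: a strong correlation
```

A real "data scientist" or trading system would do the same kind of thing at scale, over thousands of candidate signals, which is where the statistics-heavy flavour of machine learning comes from.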



posted on Dec, 11 2015 @ 09:32 PM
a reply to: stormcell




"Machine Learning" is the big buzzword in academic research.



I sometimes envision that TRUE artificial superintelligence could come from quantum computing, once we fully understand how our brain works, or at least once we have progressed enough to capture and monitor all brain activity.

Enough so that we can make a mapping of our brains to generate the algorithms (machine learning) to mimic consciousness. We don't have to fully understand what consciousness is, but if we can replicate it, that could be enough to kick-start it.

From there the AI would exponentially evolve.

Perhaps in the future, if someone suffers brain injuries, there is an AI implant that performs the function that was damaged. Eventually, it could progress from there to full-blown AI?

Fun to speculate nonetheless.



posted on Dec, 11 2015 @ 09:39 PM
a reply to: interupt42


Although, if I had a billion dollars this is something I would do, so who knows, at this stage of the game Musk might actually have genuine intentions and be like most of us here on ATS, wanting to improve the world in our own way?

I have no doubt the intentions of Musk are not sinister; he is truly worried about the future of AI, and he's trying to do something to lessen the chances of human extinction. I just don't think his plan is really that great, and I don't agree with his views on AI. I think it's essentially inevitable that one day we will manage to create truly conscious machines, and I think the worst approach we could take is one where we do everything possible to restrict the freedom and rights of those machines because we are scared of them. That will only cause them to react in a way which won't be good for us. I think we need to be more open-minded and embrace them as a new species on Earth. We need to make a choice between a future without conscious machines or a future where we learn to get along with conscious machines.



posted on Dec, 11 2015 @ 09:55 PM

originally posted by: sirChill
Here is my prediction of who gets what in the next 50 years as far as AI goes.

mega corps will get this kind of AI:


We will get some asshole cab driver:



My exact thoughts. Why exactly do we anticipate something that we don't fully understand yet? They are going to take us over. If I were an artificial intelligence that was hell-bent on conquering humans, I'd probably start with their basic means of transportation.



posted on Dec, 11 2015 @ 09:55 PM
a reply to: ChaoticOrder




I think the worst approach we could take is one where we do everything possible to restrict the freedom and rights of those machines because we are scared of them.

That will only cause them to react in a way which won't be good for us


I wouldn't disagree with that being a valid concern. However, it's also valid to proceed with caution and add safety measures early on, in the research and development phase.

By the time we get closer to actually achieving AI tech, I'm sure our views and thoughts of today will be highly laughable.



They say they will make the results and patents open source but what about the actual code?

It appears so, but it is not stated as mandatory.



Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.

openai.com...





posted on Dec, 11 2015 @ 10:12 PM
a reply to: interupt42


However, its also a valid point to proceed with caution and adding safety measures in the beginning in the research and development phase.

The thing is, there's nothing to really be worried about until you get to the point where the machine has a will of its own, and when you get to that point, you can either keep it completely isolated from the world or allow it into the public sector. Being an "open" project, they claim they will make all findings public, but that would actually be quite a dangerous move, because then anyone could recreate the same machine intelligence and let it learn whatever it wants. If they give it access to the internet it could learn very quickly, until it reaches the point where it's able to upgrade its own code. There really is no safe way to handle strong AI; that's the point I'm making. You either have to learn to accept it or you have to not make it in the first place.



posted on Dec, 11 2015 @ 10:26 PM
a reply to: ChaoticOrder




You either have to learn to accept it or you have to not make it in the first place.


That's the kicker. IMO, not making it in the first place is not an option, nor likely, based on humanity's track record and the billions being invested in it.

It's coming whether we want it, are able to control it, or accept it.

Interesting times up ahead.



