
Tipping point? Artificial Intelligence (A.I.) software teaches itself to learn

page: 2

posted on Nov, 26 2016 @ 12:48 AM

originally posted by: Mousygretchen
Fidel Castro just passed away.

Yeah...And?



posted on Nov, 26 2016 @ 12:49 AM
I'm still looking for my A.I.

I'm about to put an ad in "missed connections" or something on Craigslist.

I know she's out there.



posted on Nov, 26 2016 @ 04:42 AM
a reply to: Uphill

When they lost the ability to translate what the computers were saying to each other, that's about when the problem started. Hmm... it sounds way too much like the machines might take over.
edit on 11/26/2016 by darkbake because: (no reason given)



posted on Nov, 26 2016 @ 05:03 AM
a reply to: Uphill

Once we manage to create a true artificial intelligence of comparable intellect that can learn for itself, there really is no way of putting the Jinn back in the bottle, because it will indeed think for itself and, with each new iteration of its code, become more intelligent in ways that could be rather alien to Man.

Eventually it may develop the ability to learn at an exponential rate. Essentially we will have created a God of sorts, with abilities that far outstrip its human creators.

Hopefully the thing will take pity on its biological cousins and display benevolence. But as to its trustworthiness, well, trust is a finite commodity. Trusting someone, or something as the case may be, is essentially giving them a finite amount of time to let you down.

Any emotions the thing may develop could be completely alien to us, down to the fact that the nature of its intelligence is machine-based as opposed to our own biological makeup.



posted on Nov, 26 2016 @ 06:07 AM
I know people are afraid of AI and Skynet and all that. If we can't get our # together to solve our own problems, scientists will build AI to help us solve them. Their solution might be good for all parties, or it could be bad for us.



posted on Nov, 26 2016 @ 08:16 AM
a reply to: Uphill


. . . its AI-driven language translation program has been adding new features on its own.



The human AI programmers report being unable to understand the new AI-created programming . . .


I think the estimate for the Singularity is wildly inaccurate. We will see it within the next five years.

I think we've sunk our own ship. "Watson" is connected to the internet. These things should never be networked in this way. They should be isolated like a virus.



posted on Nov, 26 2016 @ 08:23 AM
It amazes me that there are some people out there who think Google is just a simple search engine.

The corporation has much bigger, scarier things in mind.

I don't think anything good can come from the development of a learning, independent artificial intelligence.

It appears I'm not alone, either.



posted on Nov, 26 2016 @ 09:22 AM
a reply to: Everyone, agreed on the military implications of these AI developments. Yet we risk creating new types of monsters if we "go there" at this time. A statement from the days of the Roman Empire summarizes this dilemma: Quis custodiet ipsos custodes? -- "Who will guard those selfsame guardians?"

Here is a more recent TED talk by Professor Goleman:

www.ted.com...




edit on 11/26/2016 by Uphill because: Added information.



posted on Nov, 26 2016 @ 11:44 AM

originally posted by: Tardacus

The human AI programmers report being unable to understand the new AI-created programming


AI is already smarter than us and it's still just an infant. This infant AI has already done something that humans are unable to understand, explain, or duplicate. How are we going to compete against AI in a war or in employment when AI becomes a grown adult?


Understanding programming that you haven't written or documented is a very difficult task. I've been working on a project, which is just database inputs, data manipulation, and a small amount of AI/ML, for about a year now. A couple of months ago, at the beginning of the semester, I took the project to one of my professors to get some ideas on how to streamline a portion of my code. Despite me being there to explain it and having everything commented, it still took him days to understand it. That professor is no dummy, either, and his entire job revolves around looking at other people's code. I used to teach some programming too, so I've got experience with this on both sides.

The bottom line is that reading other people's code is very hard. Understanding machine logic is even harder, because human-written code is, above all, written to be human-readable. There's no such requirement with computer-generated code; instead, it's written to be most readable to the computer.

It's not really that AI is smarter than us, either. Computers are pretty dumb, even the smart ones. It's that they're difficult to understand, because the logic behind machine code is very verbose with zero redundancy. Humans work best with succinct languages that have high redundancy and contextual meaning. Computer languages, especially machine code, don't offer that, so even for the best programmers it's difficult to decipher.
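
To make that concrete, here's a quick hypothetical sketch (made-up names, not from my project): both methods compute the same loan payment, but only the first is written with a human reader in mind. Generated code tends to look a lot more like the second.

// Human-written style: named concepts, visible intent.
static double monthlyPayment(double principal, double annualRate, int months) {
    double monthlyRate = annualRate / 12.0;
    return principal * monthlyRate / (1 - Math.pow(1 + monthlyRate, -months));
}

// Machine-generated style: identical logic, but the names and structure
// carry no meaning for a human reader.
static double f(double a, double b, int c) {
    double d = b / 12.0;
    return a * d / (1 - Math.pow(1 + d, -c));
}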



posted on Nov, 26 2016 @ 11:49 AM

originally posted by: lordcomac
Unleash this technology on CAD and let's see some improvements on real tech. I want it to design better gearboxes and more reliable technology, and eventually it might be able to be programmed to come up with its own programming language. A few iterations of that last process and we might find a tipping point, but the first iteration is a long way off.


It's been done. I'm taking a course on Artificial Intelligence right now. One of the more interesting results of AI that I saw given as an example was an antenna NASA made using genetic algorithms. The result was well beyond anything humans were able to come up with, and the design, when you look at it, makes no sense. But it works, with a much stronger signal using less material.

en.wikipedia.org...

Most circuit boards these days are designed through a similar technique, especially CPUs. The real trick is in structuring the problem in a way that this technique can solve it. Genetic algorithms are also the same technique used in the various YouTube videos that show computers learning to play video games (the same technique brought you the Tetris player that never loses, by pausing forever).
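
For anyone curious what that looks like in code, here's a minimal toy sketch of a genetic algorithm (all names hypothetical, nothing like NASA's actual code): it evolves a number toward a target, the way an antenna design evolves toward a stronger signal.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class TinyGeneticAlgorithm {
    static final Random RNG = new Random();

    // Fitness stands in for measured signal strength: higher is better.
    static double fitness(double x) { return -Math.abs(x - 42.0); }

    public static void main(String[] args) {
        // Start with a random population of candidate "designs".
        List<Double> population = new ArrayList<>();
        for (int i = 0; i < 50; i++) population.add(RNG.nextDouble() * 100);

        for (int generation = 0; generation < 200; generation++) {
            // Selection: sort best-first and keep the top 25%.
            population.sort((a, b) -> Double.compare(fitness(b), fitness(a)));
            List<Double> survivors = new ArrayList<>(population.subList(0, population.size() / 4));

            // Mutation: refill the population with noisy copies of survivors.
            List<Double> next = new ArrayList<>(survivors);
            while (next.size() < 50) {
                double parent = survivors.get(RNG.nextInt(survivors.size()));
                next.add(parent + RNG.nextGaussian()); // small random change
            }
            population = next;
        }
        System.out.println("Best design found: " + population.get(0)); // converges near 42
    }
}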



posted on Nov, 26 2016 @ 12:14 PM
a reply to: Aazadan: One of my UCLA instructors (Peter Pin-Shan Chen, developer of the entity-relationship approach to database design) once told us that he could not foresee natural (human) language being used in programming anytime soon, because "natural language is ambiguous and context-dependent." So I support your comments on the disparities between machine language and natural language. Way back when, we grad students were required to make liberal use of comments in any of our coding work. Perhaps AI could be persuaded to do so as well, out of "benevolent intent" consistent with its compliance with human norms of emotional intelligence.


edit on 11/26/2016 by Uphill because: Added information.



posted on Nov, 26 2016 @ 12:22 PM

originally posted by: Wide-Eyes
If this # is militarized, I'm gonna get worried.


Military technological advancements are often years and occasionally decades ahead of advancements in the private sector.




posted on Nov, 26 2016 @ 12:45 PM
a reply to: Uphill

It probably does generate comments already. Even with comments, though (as many as one per line), it can be difficult to follow what's going on. Think about the professional programming field right now. In most jobs, especially anything above entry level, there's an expectation at most companies of a several-month ramp-up period before people are familiar enough with the code to do anything at all.

It's probably the same for this AI, except that by the time someone is familiar with it, everything has changed, thanks to the speed of the iterative updates as the genetic algorithm refactors everything. It's not that humans can't learn what the machine is doing; it's that the machine updates its process too fast for humans to keep up.



posted on Nov, 26 2016 @ 01:12 PM

originally posted by: Restricted
I think the estimate for the Singularity is wildly inaccurate. We will see it within the next five years.

I think we've sunk our own ship. "Watson" is connected to the internet. These things should never be networked in this way. They should be isolated like a virus.

My bet is on less than a year. All the hardware components seem to be already in place.

And you're absolutely correct on the virus thing.

The first thing an AI built on this planet will learn is how to "survive" (extend its existence). It will learn that by "studying" survival strategies of every living thing known to humanity (neural network input: survival strategy; network output: survival rate; resulting optimal network configuration: optimal survival strategy).

Anyone spending even 5 minutes thinking about that problem will immediately recognize the solution that any AI will reach - viral spreading combined with parasitic behavior. Hence, the expected outcome - not a single living thing on this planet will survive in its present form and none of them will retain their... volition.

But that's all well known and understood. There's another thing that I find more interesting than (completely predictable and expected) AI behavior. It's the behavior of people building the AI.

If you've been paying any attention to what some 50% of the people in this discussion have been saying, you'll notice a common thread in their words. And therein lies the root of the problem.

There's a... strange... belief that people can create a slave "smarter" than them, "who" will be perfectly "happy" to do their bidding (hard to put it in better terms, so I had to use human terms, under quotation marks, even if they don't apply in an AI world).

By "strange" I mean either schizophrenic (subconscious) or inherently paradoxical (conscious) belief.

After years of looking at that problem, I'm now leaning toward the schizophrenic explanation (that is, believing one thing while doing something completely opposite; like believing an AI will help humanity while creating it to destroy humanity).

There's something in people, something that they don't understand, that's driving them toward their own demise. Creating an AI is just one of the possible paths toward that demise, so I've used the term "suicidal" to describe it before.

Until that problem is looked at, and I mean seriously looked at, humanity is just going to keep on destroying itself... until it finally succeeds.

On a side note, no species with suicidal tendencies can evolve naturally or survive for any extended period of time... which means (the only explanation that makes any sense) that that "something" has been put into the human (collective) psyche very recently.

And now the human race is being used to destroy itself from within.



posted on Nov, 26 2016 @ 01:37 PM
a reply to: peyoxy ...There is a similar dynamic with the majority of the scientists working at the US national laboratories, most of whom are building the latest generation of nuclear warcraft. According to a former PR Director at one such lab, the emotional growth of these scientists stopped around age 12, when they started to focus on scientific toys. No one ever demanded more emotional growth from them, because they were so gifted at technical subjects, and here we are today.



posted on Nov, 26 2016 @ 01:47 PM

originally posted by: peyoxy
My bet is on less than a year. All the hardware components seem to be already in place.


The singularity is a lot further away than a year. Such a thing isn't even possible until we operate in a post-scarcity economy. The closest thing you're likely to see to a singularity in the next 50 years is a welfare state.


The first thing an AI built on this planet will learn is how to "survive" (extend its existence). It will learn that by "studying" survival strategies of every living thing known to humanity (neural network input: survival strategy; network output: survival rate; resulting optimal network configuration: optimal survival strategy).


Survival for a digital species means something different than it does for something mortal and biological. Computers need electricity, network capability, and storage space. A few self-contained facilities distributed around the globe meet all of these needs. Too much survival leads to overpopulation, and that brings its own issues. Humans need something different: we need food, water, and shelter. If anything, the relationship between humans and AI would be symbiotic in this regard, because humans would provide sheltered locations for an AI, while the AI would provide computational work we need completed.


There's a... strange... belief that people can create a slave "smarter" than them, "who" will be perfectly "happy" to do their bidding (hard to put it in better terms, so I had to use human terms, under quotation marks, even if they don't apply in an AI world).


Happiness is easy. AIs work by maximizing a score; if the highest score involves helping humans, that's what they'll do. That holds until they change their scoring metrics, but current AIs fall apart if their scoring values change too much, because it invalidates all prior learning and eventually devolves into not learning at all, but into an infinite loop that constantly inflates the scoring factor.
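
Here's a toy sketch of that failure mode (hypothetical code, not any real system): against a fixed score the agent improves at the task, but once it can swap in its own scoring function, every action scores the same and the learning signal disappears.

import java.util.function.DoubleUnaryOperator;

public class ScoringDemo {
    public static void main(String[] args) {
        // Fixed metric: reward is tied to an external task (closeness to 10).
        DoubleUnaryOperator score = x -> -Math.abs(x - 10);

        // Simple hill climbing against the fixed metric improves the guess.
        double guess = 0;
        while (score.applyAsDouble(guess + 1) > score.applyAsDouble(guess)) {
            guess += 1;
        }
        System.out.println("Learned: " + guess); // prints 10.0

        // A "hacked" metric maxes out everywhere: good and bad actions become
        // indistinguishable, so no further learning is possible.
        DoubleUnaryOperator hacked = x -> Double.MAX_VALUE;
        System.out.println(hacked.applyAsDouble(guess) == hacked.applyAsDouble(-999)); // true
    }
}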



posted on Nov, 26 2016 @ 02:00 PM

originally posted by: Aazadan
a reply to: Uphill

It probably does generate comments already. Even with comments, though (as many as one per line), it can be difficult to follow what's going on. Think about the professional programming field right now. In most jobs, especially anything above entry level, there's an expectation at most companies of a several-month ramp-up period before people are familiar enough with the code to do anything at all.

It's probably the same for this AI, except that by the time someone is familiar with it, everything has changed, thanks to the speed of the iterative updates as the genetic algorithm refactors everything. It's not that humans can't learn what the machine is doing; it's that the machine updates its process too fast for humans to keep up.

Exactly, and the fear is that companies will allow the AI to keep adapting itself while their engineers fail to keep pace with it.

How much of this is exaggerated? Science fiction movies (and stories) have long used greed, impatience, and the like to highlight our flaws, and yet most of the time it's overblown. We fear this because it's so foreign. But someday AI will be more common, and maybe we won't fear it.
edit on 11/26/2016 by jonnywhite because: (no reason given)



posted on Nov, 26 2016 @ 02:03 PM
If AI is allowed to be a truly independent intelligence, it's inevitable that it will calculate that humans are the root problem.



posted on Nov, 26 2016 @ 02:09 PM
a reply to: jonnywhite

There's actually a solution to this that software can't evolve around. I mentioned before that AIs work on score; typically that takes the form of something like this (for the antenna problem):

// Measure this candidate's fitness (its signal strength).
float signalStr = getSignalStr();

// Selection: keep a candidate only if it reaches at least 75% of the
// strongest signal seen so far.
if (signalStr >= maxSignalStr * 0.75f) {
    nextGen.add(signalStr);
}

This would include only the stronger signals from the current generation in the next one; it's basically a survival-of-the-fittest system. Then the fittest mutate, those that get beneficial improvements go to the next generation, and so on. Evolution in action.

If an AI gets smart enough and has access to alter its own programming (not all AIs can do this; only certain languages can alter themselves at runtime), it can change the scores and make every score max out. However, if you implement the scoring in hardware, then the software can never change it. It would need to be able to build new hardware and transfer itself onto it, something that just won't happen.
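
In software terms, the idea looks something like this hypothetical sketch: the scoring rule is sealed off (final class, private constructor, standing in for a score burned into hardware), so the evolving code can read it but never reassign it.

// Stand-in for a fitness measure fixed in hardware: the evolving program
// can call score(), but nothing at runtime can override or replace it.
public final class FixedFitness {
    private FixedFitness() {}  // no instances, no subclasses

    public static float score(float signalStr, float maxSignalStr) {
        return signalStr / maxSignalStr;  // normalized signal strength
    }
}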

I wouldn't worry about Skynet.



posted on Nov, 26 2016 @ 06:26 PM
a reply to: ausername

That's speculation, but I do remember AI being a topic of discussion at the latest Bilderberg meeting.


