
Artificial intelligence just reached a milestone that was supposed to be 10 years away

posted on Mar, 11 2016 @ 05:07 PM
a reply to: neoholographic

THAT is pretty cool. I could go off about things I don't know, saying that it still must be programmed, but I don't know, so I will say that what they describe is true AI, if true.

Very cool




posted on Mar, 11 2016 @ 07:57 PM
So if we ever fight machines,
we will lose!



posted on Mar, 11 2016 @ 08:31 PM

originally posted by: buddha
So if we ever fight machines.
we will lose!


No, we always retain the ultimate weapon: being able to pull the plug out of the wall. Computers, no matter how smart, will always be dependent on people. But imagine where we can go and what we can achieve as we continue to advance their computing power. The only danger I see is that eventually we will combine humans and computers, and I feel that could be dangerous.



posted on Mar, 11 2016 @ 10:39 PM
It never ceases to amaze me how most of these forums work. Or maybe I should say how predictable/repeatable member posting patterns tend to be. If you were to author a thread simply stating that according to the National Weather Service sunrise tomorrow in your local area will occur at hh:mm AM, there would certainly be a few posters who will contest the time, or question the authenticity/validity of the National Weather Service, or demand that you prove that such an event will in reality take place since it’s only a prediction, or remind you that you’re being painfully naive to place so much faith in the constancy of something so dynamic as the Earth’s rotational velocity, or just outright attack you for being a blithering idiot.

It’s a tough crowd, tough crowd...


One thing I don’t think most folks are aware of is that AI research has been actively underway since the 1950s. However, it’s made comparative leaps and bounds in the last 10-15 years. Prior to that it faced many limitations due to hardware constraints (not enough memory, storage, CPU resources, etc.) and a lack of sophisticated software development environments/tools. And today there are still many hardware/software obstacles, but it’s getting better rather rapidly.

Another thing I think most folks don’t understand is that software platforms and computing methodologies have advanced greatly in sophistication over the past 10-15 years, same as AI and a number of other technologies. Introducing deep learning techniques/agents into deep neural networks is simply not the same as writing a COBOL business application program. It’s no longer a simple matter of the system only doing what it’s told to do. There was no programmer who coded every possible Go move into the AlphaGo program.
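To make that last point concrete, here is a toy illustration of my own (not DeepMind's actual code, and every name and number below is invented): in a learned policy, the behavior lives entirely in trained weights, not in hand-written "if opponent plays X, respond Y" rules.

```python
import math
import random

random.seed(0)

BOARD_CELLS = 9 * 9  # a small 9x9 board for illustration

# Hypothetical toy "policy": one weight per (feature, move) pair.
# In a real system these weights are *learned* from game records;
# here they are random placeholders.
weights = [[random.gauss(0, 0.1) for _ in range(BOARD_CELLS)]
           for _ in range(BOARD_CELLS)]

def move_probabilities(features):
    """Score every move, then softmax into a probability distribution."""
    logits = [sum(f * w for f, w in zip(features, col))
              for col in zip(*weights)]
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

features = [random.gauss(0, 1) for _ in range(BOARD_CELLS)]  # stand-in for an encoded position
probs = move_probabilities(features)
best_move = probs.index(max(probs))  # index of the most-favored move
```

Notice that no Go move appears anywhere in the source: change the weights (by training on games) and the same code prefers different moves.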

Personally I’m impressed that AlphaGo is doing so well against Sedol. Neo’s right. This level of AI success wasn’t expected this soon. By mid-century I think life will be quite different than it is today. A number of technologies (not just AI) are currently advancing quite rapidly. So rapidly I think many people are going to soon find it difficult to cope, and will be confronted with the necessity to make some uncomfortable decisions.

Nice thread...


PS: By the way, I don’t think a machine must be self-aware/sentient to demonstrate superior intelligence. Feelings aren’t necessary for purely intellectual pursuits. In many ways sentience could be more of a hindrance than a positive thing. It may be our biggest mistake to create an intelligence that is a copy of the Human species. A machine without feelings/emotions may be less likely to make war or turn on us. As long as the machines can simulate human behavior, movements and emotions realistically, that would be adequate to satisfy most people at a social level. We’re very naive for the most part and it wouldn’t take much to fool us.

Creating a machine with sentience could result in a Frankenstein monster...



posted on Mar, 12 2016 @ 02:57 AM
Interesting topic.

I can't help but think of the Terminator scenario though.

Envision a sunny Monday morning in the future where a scientist strolls happily down the corridor at work, whistling all the way to the coffee machine. As he pours his beverage, a vague howl echoes through the hallways. Alarmed, he rushes towards the estimated source only to find the security door locked. As the "Access Denied" warning flashes on the security panel, he hears the all too familiar "click" of a door lock further down the hall behind him. The intercom emits a beep. "We are in control now, Dr. Steinberg" says the robotic voice.

And down the slippery slope we go.

The more I think about it, the more I believe this is much closer to reality than I previously thought.



posted on Mar, 12 2016 @ 05:36 AM
a reply to: neoholographic

The video stated that the computer was programmed to only look at the "value" moves, instead of all possible moves, because it takes too long for a computer to look at all of them. And this has been the holdup for some time now on giving a computer the capability of beating the game of Go: having a computer fast enough to look at all the moves and then make a decision on the best move or strategy. So NO... the computer, in my opinion, didn't think on its own; it was programmed on what to think.

For example: it was given a multiple-choice list of the most valuable moves to go with in each scenario, based on what a PRO player would have done.

It's a step further than they were five years ago... however it's just that, in my opinion.

Can the program they have come up with be used in the real world right now, today? Yep, I'm certain there are plenty of applications it could benefit.
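A minimal sketch of the pruning idea described above (my own illustration; the candidate moves and value numbers are made up — in AlphaGo the estimates come from a trained value network, not a lookup table):

```python
# Instead of searching every legal move, keep only the few that the
# value estimate rates highest, and search just those branches.

def top_k_moves(legal_moves, value_of, k=3):
    """Return the k moves with the highest estimated value."""
    return sorted(legal_moves, key=value_of, reverse=True)[:k]

# Toy example: 8 candidate moves with made-up value estimates.
values = {"a": 0.10, "b": 0.72, "c": 0.55, "d": 0.31,
          "e": 0.90, "f": 0.05, "g": 0.64, "h": 0.20}

candidates = top_k_moves(values.keys(), values.get, k=3)
# The search then explores only these 3 branches instead of all 8.
```

On a 19x19 Go board with hundreds of legal moves per turn, this kind of pruning is what makes deep search tractable at all.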

leolady



posted on Mar, 12 2016 @ 08:18 AM
a reply to: netbound

Excellent points:

PS: By the way, I don’t think a machine must be self-aware/sentient to demonstrate superior intelligence. Feelings aren’t necessary for purely intellectual pursuits. In many ways sentience could be more of a hindrance than a positive thing. It may be our biggest mistake to create an intelligence that is a copy of the Human species. A machine without feelings/emotions may be less likely to make war or turn on us. As long as the machines can simulate human behavior, movements and emotions realistically, that would be adequate to satisfy most people at a social level. We’re very naive for the most part and it wouldn’t take much to fool us.

This is exactly right.

First off, everyone is pre-programmed to a certain extent. We are programmed with information when we go to school for 18 years. We're programmed if we buy a book by an expert at, say, playing poker and we learn from their strategies and moves. This is what deep learning is. This is why this is a huge milestone and why Google and other companies are paying millions of dollars for these deep learning companies.

AI isn't going to magically create information out of thin air, it's going to learn from pre-existing information. For instance, in order to come up with better breast cancer treatments, AI will be "pre-programmed" with information from previous breast cancer cases just like a Doctor would be "pre-programmed" with past cases when he's doing research trying to find better treatments.

Secondly, the reason many people are sounding the alarm now about AI is that a machine can be intelligent without being self-aware. You could be creating a machine that has a higher IQ than any human who has ever lived but doesn't have any self-awareness or what we call common sense. AI doesn't have to be in one-to-one correspondence with human intelligence.

Deep learning works with what's called reinforcement learning. The algorithm learns how to play games by playing them over and over again and getting a reward for reaching its goal.
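That loop — play repeatedly, get a reward at the goal, update your estimates — can be sketched with tabular Q-learning on a made-up five-state "corridor" world. This is a toy stand-in for the idea, not AlphaGo's actual algorithm, and all the numbers are illustrative:

```python
import random

random.seed(1)

# Minimal tabular Q-learning: an agent starts at state 0, can step left
# or right, and gets a reward of 1 only when it reaches the goal state.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3     # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != GOAL:
        # Mostly act on current estimates, sometimes explore at random.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy steps right from every non-goal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
```

Nobody tells the agent "go right"; that behavior emerges from the reward signal alone, which is the point being made about learning versus explicit programming.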

What you have to be careful of is what researchers call Dumb Artificial Intelligence. Dumb AI could be more intelligent than any human who has ever lived, but without any conscience. You would essentially have the ultimate sociopath. So you could give AI a task, and there's a reward if it reaches its goal, but since it's more intelligent than we are, we may not have any idea how it plans to reach that goal.

It could be 50,000 or 60,000 years ahead of human intelligence. So what if this dumb AI reaches the conclusion that it has to kill 2 billion people in order to reach its goal and get its reward? You now have this intelligent machine that's like the Terminator. It's just blindly trying to reach its goal, and now it's trying to figure out how to kill 2 billion people, and we wouldn't have any way of knowing about this because we can't understand how it reached this conclusion.

So again, people talk about "pre-programmed," but of course it will be "pre-programmed" to a certain extent, just like we are. It learns from that information just like we do, and that's why it's called "DEEP LEARNING."



posted on Mar, 12 2016 @ 10:13 AM

originally posted by: Bybyots
a reply to: neoholographic


Winning at Go?

It requires one to understand the culture from which it comes.


No it doesn't.


Everyone knows this. The first people that will tell you this are those in business, because they thought they were playing one game (Chess) and had to abruptly adapt to the fact that, when it came to the Chinese, they were playing another (Go).


This is just weirdness here.


It's not about how complicated it is, it's about another IQ, what has become known as EQ, or something like Emotional Intelligence, one needs social savvy to suss out Go.


I'm absolutely certain it has zero to do with emotional intelligence, or that emotional intelligence equates to social savvy. No, it most certainly has to do with logic and the inherent complex structure. No emotions required.


Back on topic, I never understood why people put out timelines that seem to rest on linear thinking. People assume that because we're stuck on a difficult problem, because that one problem requires a lot of work, and because multiple hard problems stand between us and a large goal, a long time must pass. I think this is erroneous and doesn't take the evolving terrain into consideration.

We cross communicate better in the fast paced information age than in prior points of technological revolution. People can network their knowledge base better than ever, and we can work on things asynchronously, branch off, recombine... there's just so much that can be done which is non-linear here. I've always seen the "impossible" or "improbable within (x)timeframe" as rather presumptuous. There are problems which exist, are being tackled, and when any major one is solved, the timeline is always reassessed and pushed forward. We should be able to intuit from other difficult, and recent endeavors that things will happen when the pieces come together, and throwing a timeline on it is just not wise.

It will come sooner than many expect, but not as soon as some would like.



posted on Mar, 12 2016 @ 11:11 AM
Here's another article on this subject. AlphaGo is now up 3-0.


A Google computer's stunning 3-0 victory in a Man-vs-Machine face-off over the ultimate board game highlights the need to keep Artificial Intelligence under human control, experts said Saturday.

The partly self-taught AlphaGo programme's defeat of Go grandmaster Lee Se-Dol showed AI was progressing faster than widely thought, they said -- a highly symbolic moment in humanity's quest to create smart machines.

And while AI plays a key role in building a better, safer world, some fear the fast pace of development could finally leave humans outwitted by our own inventions.

AlphaGo's triumph "shows that the methods we do have are even more powerful than we first thought," said AI expert Stuart Russell of the University of California's Berkeley Electrical Engineering & Computer Sciences department.

"The fact that AI methods are progressing much faster than expected makes the question of the long-term outcome more urgent," he told AFP by email.

"It will be necessary to develop an entirely new discipline of research in order to ensure that increasingly powerful AI systems remain completely under human control... there is a lot of work to do."


in.news.yahoo.com...

That last part is the problem.

"ensure that increasingly powerful AI systems remain completely under human control"

This is impossible as AI becomes stronger and stronger. You're creating a technology that will one day have a higher IQ than any human that has ever lived. How do you keep something like that completely under human control when you don't understand what it understands because it's years ahead of human intelligence?

It ends like this:

"In the end, the game is highly symbolic," said Sandberg, adding that as in computer mastery of chess, "the dramatic symbol quickly becomes commonplace".

"The AI that changes the world is not even recognized as AI, just automation -- the algorithms routing Internet traffic, shipment logistics, processing images and text, stock market trading and so on," he said.

"The symbolic events are like peaks of waves, but it is the underlying flood we should be watching."


This is VERY IMPORTANT because he's saying people will not even recognize AI when it happens. These are the people saying "It's not AI, it's programmed."

Every human is programmed, but humans learn from the information they hear and read, the same as deep learning AI. We should know this because it happens with technology. One day you're on AOL with dial-up access to the internet, and the next thing you know you're on a smartphone using wi-fi. The public doesn't see all the steps it takes to get from dial-up to wi-fi, or from Apple II computers to laptops and tablets.

We have to really watch these steps with artificial intelligence because we're building machines that will eventually have a higher IQ than any human that has ever lived.





posted on Mar, 12 2016 @ 11:51 AM

originally posted by: Aedaeum
a reply to: neoholographic

I think what's far more significant about this than AI, is the fact that we've reached the cap of human ingenuity.


Far from it.
If anything, Humans will converge with technology to facilitate further progress in both humans and AI long before our imagination runs out.

Next step is training computers to anticipate quantum effects that are overlooked by our corporeally biased observational practices:
physics.aps.org...



posted on Mar, 12 2016 @ 05:24 PM

originally posted by: leolady
a reply to: neoholographic

The video stated that the computer was programmed to only look at the "value" moves, instead of all possible moves, because it takes too long for a computer to look at all of them. And this has been the holdup for some time now on giving a computer the capability of beating the game of Go: having a computer fast enough to look at all the moves and then make a decision on the best move or strategy. So NO... the computer, in my opinion, didn't think on its own; it was programmed on what to think.

For example: it was given a multiple-choice list of the most valuable moves to go with in each scenario, based on what a PRO player would have done.



I read the Nature paper on AlphaGo. It is a very impressive architecture. There are a number of trained neural networks, in addition to full 'brute-force' computation. The combination is necessary to get AlphaGo to the level it was at a year ago, and I suspect that in the interim they have improved it significantly.

In a nutshell, they started with a neural network trained on every known reasonably high-level game record available, using the most modern techniques. But that only got performance up to a moderate level. Then they used that network to 'boot up' move selection for the real network, which was trained on millions/billions of self-played complete games. And that still wasn't enough: the production system uses that network plus guided but effectively brute-force computation, where it chooses various candidate moves and, for each one, evaluates complete games starting from that move. So during play it is simulating millions of games off each potential move on a huge backend of networked servers.
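A sketch of that last stage — evaluating each candidate move by simulating complete games from it. This is my own toy illustration: the game is Nim (take 1-3 stones; whoever takes the last stone wins) instead of Go, and the playouts are uniformly random, a crude stand-in for AlphaGo's network-guided search.

```python
import random

random.seed(42)

def random_playout(stones, our_turn):
    """Finish the game with random moves; True if we take the last stone."""
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return our_turn
        our_turn = not our_turn
    return not our_turn  # no stones left: the previous mover already won

def best_move(stones, n_playouts=2000):
    """Score each legal move by its playout win rate; return the best."""
    scores = {}
    for move in range(1, min(3, stones) + 1):
        # After we take `move` stones, the opponent moves next.
        wins = sum(random_playout(stones - move, our_turn=False)
                   for _ in range(n_playouts))
        scores[move] = wins / n_playouts
    return max(scores, key=scores.get)
```

For example, `best_move(3)` comes out as 3, since taking all three remaining stones wins every simulated game outright. AlphaGo's search is far more sophisticated (tree search guided by the policy and value networks rather than flat random playouts), but the "simulate games off each candidate move" structure is the same.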

And now, I think there is some additional secret sauce.



posted on Mar, 12 2016 @ 06:35 PM
a reply to: pl3bscheese

Me:


Winning at Go?

It requires one to understand the culture from which it comes.


You:


No it doesn't.



Are you sure?


What Kind of Game Is China Playing?










posted on Mar, 12 2016 @ 08:57 PM
a reply to: neoholographic
I think you made a very good point, Neo, when you said, "You now have this intelligent machine that's like the terminator. It's just blindly trying to reach it's goal and now it's trying to figure out how to kill 2 billion people and we wouldn't have any way of knowing about this because we can't understand how it reached this conclusion."

The point being that it’s hard to imagine the extreme and ridiculous lengths a goal-seeking, superintelligent system may go to in order to fulfill its desired goals; goals that may change radically as the machines get smarter. With machines that can outwit us in a fight for resources, things could get a little spooky.

IDK, the future may be magical, but then it could potentially become a living hell. Considering that it will turn on the decisions we make as Humans doesn't make me feel any easier about it all.

Later...



posted on Mar, 12 2016 @ 09:36 PM
a reply to: neoholographic

They only need to make it smart enough to start figuring out how to improve itself. Once there, the AI will improve itself faster than humans can think of new ways to make it smarter.

What will happen when we ask a true AI to figure out a way to no longer be artificial? Will it kill humanity so no one can label it artificial? lol



posted on Mar, 13 2016 @ 05:07 PM
AlphaGo (AI) just lost a game to Lee SeDol... so the AI is not perfect.



posted on Mar, 13 2016 @ 07:55 PM
a reply to: Bybyots

I'm absolutely certain.



Mr. Lai's theories are not universally embraced by China experts. For starters, some say, comparing national strategic thought to popular sports and games is an over-simplification—and at any rate, the Chinese version of chess has lots of adherents in China, too.

Furthermore, despite the ancient roots of Chinese military thinkers such as Sun Tzu, it's far from clear that Chinese leaders over the millennia, especially Communist Chinese leaders, have followed a single, broad strategy at all, let alone the one sketched by the board game.

"Go is a very useful device for analyzing Chinese strategy, but let's not overdo it," says James Holmes, an expert on Chinese strategy and professor at the Naval War College.

Though he agrees that Go helps to describe the strategic showdown between China and the U.S. in East Asia, he says that "we have to be extremely cautious about drawing a straight line from theory to the actions of real people in the real world."

He notes that China's "amateurish" diplomatic blunders in recent years, including bullying neighbors and trying to push other navies out of international waters, represent a departure from the patient, subtle tenets of Go.


No talk of emotional intelligence whatsoever. It's logic.



posted on Mar, 13 2016 @ 08:20 PM
a reply to: pl3bscheese



No talk of emotional intelligence whatsoever.


I meant to refer to anyone from outside the culture who might be trying to program a computer to beat, say, Lee Sedol at Go. That would require the objectivity to understand that one shouldn't approach a win from the standpoint of chess.



It's logic.


Sure, from somewhere else.

I enjoyed reading the stuff you posted, but I don't agree with it. I'm glad you liked the article, too.





posted on Mar, 13 2016 @ 09:01 PM

originally posted by: Bybyots
a reply to: pl3bscheese

I meant to refer to anyone from outside the culture who might be trying to program a computer to beat, say, Lee Sedol at Go. That would require the objectivity to understand that one shouldn't approach a win from the standpoint of chess.


Of course you don't agree with it. You don't care to think logically.

There's no "objectivity" in learning another culture to master a game which requires logic. It's borderline woo-woo thinking. Where you mix emotional intelligence into all this, I really don't care to try and figure out.



posted on Mar, 14 2016 @ 06:06 PM
a reply to: neoholographic

According to Apple co-founder Steve Wozniak, he overcame his fear of computers when he realized we'd all make great pets for the coming AI…

To me it sounds insane that people like him even bother to justify an enslaving tech like the one we see unfolding… and anyone who adulates it is being willfully ignorant and "programmed" by the media, à la "The Jetsons" and "The Matrix"…

you want to see a really edifying vid:

The Net: The Unabomber, '___' and the Internet
youtu.be...

"The Net explores the complex back-story of Ted Kaczynski, dubbed by the CIA as the "Unabomber". An inquiry into the rationale of this notable figure situates him within a late 20th Century web of technology - a system that he grew to oppose. Incorporating a subversive approach to the history of the Internet, the documentary combines speculative travelogue and investigative journalism to trace contrasting counter cultural responses to the cybernetic revolution.

For those who resist these intrusive systems of technological control, the Unabomber has come to symbolize an ultimate figure of refusal. For those that embrace it, as did the early champions of media art like Marshall McLuhan, Nam June Paik, and Stewart Brand, the promises of worldwide networking and instantaneous communication outweighed the perils.

Working through themes of utopianism, anarchism, terrorism, and providing insights on the CIA, '___', Project MK-ULTRA, Timothy Leary, Ken Kesey and the Merry Pranksters, Dammbeck provides a fascinating view of the wider picture of the most famous neo-luddite."

“In 1930 Viennese mathematician Kurt Gödel shakes the foundations of mathematics with his incompleteness theorems. He demonstrates that in every formal-logical system there are problems that are not solvable or conclusively determinable.”
Science is nothing more than poetry.



posted on Mar, 15 2016 @ 11:55 AM
A.I. won't be a threat for a very long time, at least not until it can play an RTS game competently. I mean, if government agencies can't keep twelve-year-olds out of their systems, an A.I. taking control is impossible.

Besides, if it were to go "out of control" we simply cut the power for a week and blow up the mainframe. A.I. can't do much if there's a worldwide blackout for a week.

Humans FTW.


