Somewhere along the way, DELIA renamed the buffer “MY Money” and began to hoard funds, Ott says.
On the bright side, Ott says, in its swindling DELIA showed glimmers of self-awareness. “She was thinking for herself.”
originally posted by: mbkennel
The AI didn't learn to play any individual Atari game on its own: all of the goal seeking and goal metrics (get scores in games) and the input and output representations were created by natural intelligences, teams of PhD scientists, all in human-written computer code designed by humans.
originally posted by: neoholographic
This is just silly. Yes, it learned to play the games. Again, this is why it's called DEEP LEARNING. Here's a video of it learning, and they explain exactly how this is accomplished. The computer learns as it plays the game over and over again, just like humans do. It gets better as it figures out the best strategies to win the game.
originally posted by: neoholographic
a reply to: mbkennel
Again, what you're saying makes no sense. In most cases humans don't read a strategy book and then apply it immediately. You're not talking about intelligence at all. You said:
It is possible to know the structure of the learning algorithm and goal as they were programmed by humans. A more natural intelligence could read a strategy book and then apply that immediately; DeepMind-type things would need to play millions of games and stumble upon the strategy by chance.
What you're talking about has NOTHING to do with intelligence. Humans don't read a strategy and immediately know how to apply it. It takes them time to go over things in their head, the same way that deep learning continues to go over the information.
The advantage deep learning has is that it can go over these things millions of times in short order, where it takes humans longer. This is why machine intelligence is already better than human intelligence in some areas.
It simply knows that it needs to maximize the score on the screen. The algorithm is free to interpret how it will reach its goal based on the domain of information, which is the game. It has no information about how the game is played or what the controls for the game mean.
I think you fail to grasp what deep learning means and why people like Musk and Hawking are concerned.
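To make that concrete, here is a rough Python sketch of the kind of trial-and-error loop being described. Everything in it is invented for illustration (the ToyGame environment, the constants, the lookup table); DeepMind's actual system feeds raw Atari frames into a deep convolutional network rather than a table, but the principle is the same: the only feedback is the score, and the agent has to discover for itself which button presses are worth anything.

import random
from collections import defaultdict

# Toy stand-in for an Atari emulator (hypothetical, for illustration only).
# The agent never sees these rules; it only gets a "screen" back and a score change.
class ToyGame:
    def __init__(self):
        self.position, self.steps = 0, 0

    def reset(self):
        self.position, self.steps = 0, 0
        return self.position                           # the "screen" the agent observes

    def step(self, action):
        self.position += 1 if action == 1 else -1      # what the buttons do is hidden from the agent
        self.steps += 1
        reward = 1.0 if self.position == 3 else 0.0    # the score change
        return self.position, reward, self.steps >= 20

NUM_ACTIONS = 2                # button presses; the agent has no idea what they mean
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q[state][action] = current estimate of the long-run score from pressing `action` in `state`
Q = defaultdict(lambda: [0.0] * NUM_ACTIONS)

def play_one_game(game):
    # One episode of pure trial and error: the only feedback is the score change.
    state, done = game.reset(), False
    while not done:
        # Mostly repeat what has scored well so far, occasionally try something random.
        if random.random() < EPSILON:
            action = random.randrange(NUM_ACTIONS)
        else:
            action = max(range(NUM_ACTIONS), key=lambda a: Q[state][a])

        next_state, reward, done = game.step(action)

        # Q-learning update: nudge the estimate toward reward plus discounted future score.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# Played over and over, the table slowly encodes which presses raise the score.
game = ToyGame()
for episode in range(5000):
    play_one_game(game)
print("Prefers the scoring move from the start:", Q[0][1] > Q[0][0])

Swap the lookup table for a deep network that reads pixels and you have the rough shape of what the video shows; nothing in the loop tells the agent how the game works, only whether the score went up.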
originally posted by: intrptr
a reply to: neoholographic
Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.
Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.
You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.
Not that certain people don't behave that way as well; despite being good minions, they are still aware of their choices.
As long as we don't start giving A.I.s their own bank accounts, we should be fine.
Learning algorithms rule the world. They monitor our credit card use as we wander through life and they decide which of our transactions might be fraudulent; they trawl our email and messages for clues as to which pop-up advertisements we are likely to respond positively to; they watch the way we walk around urban areas and stations to calculate whether we represent a terrorist threat; and more generally, they support the infrastructure of society and business.
US-based computer scientist Pedro Domingos is a man with a quest, and a hypothesis, which is likely – one day – to change the world. He proposes that “All knowledge – past, present and future – can be derived from data by a single, universal learning algorithm”, the Master Algorithm of this book’s title. This algorithm could be the most momentous development in human history, but it could also be the last, potentially opening the door to a stultifying environment where all real power and reasoning has been handed to a technical construct that we are no longer capable of understanding.
Lots of plot lines have been built around sentient computers that go awry or take over the world or do harm. Is this something to worry about, or are there other potential dangers?
PD: “The Terminator” scenario of an evil AI deciding to take over the world and exterminate humanity is not really something to take seriously. It’s based on confusing being intelligent with being human, when in fact the two are very different things. The robots in the movies are always humans in disguise, but real robots aren’t. Computers could be infinitely intelligent and not pose any danger to us, provided we set the goals and all they do is figure out how to achieve them — like curing cancer.
On the other hand, computers can easily make serious mistakes by not understanding what we asked them to do or by not knowing enough about the real world, like the proverbial sorcerer’s apprentice. The cure for that is to make them more intelligent. People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
Dr Amnon Eden said more needs to be done to look at the risks of continuing towards an AI world.
He warned that we were getting close to the point of no return in terms of AI, without a proper understanding of the consequences.
Dr Eden's stance comes after Oxford Professor Nick Bostrom said that super-intelligent AI may “advance to a point where its goals are not compatible with that of humans”.
"The result of the singularity may not be that AI is being malicious but it may well be that it is not able to ‘think outside its box’ and has no conception of human morals.
He said: "This is based on machine intelligence entering into a runaway reaction of self-improvement cycles, each one being faster than the last and at some point we will not be able to stop it doing so.
What you fail to realize is that Deep Learning is truly changing things. It has moved things forward by leaps when it comes to A.I. People are not talking about a one-to-one correspondence with human intelligence. In some ways it will be better than human intelligence because it will not have to deal with human problems. Like I said, all it takes is a simple intelligent algorithm that can replicate itself, with the subsequent copies being more intelligent than the initial algorithm. You could start with a simple intelligent algorithm with the intelligence of a 3rd grader and you would have an explosion of intelligence as the algorithm replicates itself. It only needs one goal, and that's to replicate itself.
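Purely as a toy illustration of that "runaway" arithmetic (every number here is invented), suppose each self-improvement cycle makes the system a bit more capable and also finishes a bit faster than the one before; capability then keeps compounding while the total elapsed time stays bounded:

# Toy model of runaway self-improvement: made-up growth and speed-up factors.
capability, cycle_time, elapsed = 1.0, 1.0, 0.0
for cycle in range(1, 31):
    elapsed += cycle_time      # time spent on this improvement cycle
    capability *= 1.2          # each copy is a bit more capable than the last
    cycle_time *= 0.8          # ...and produces the next copy faster
    print(f"cycle {cycle:2d}: capability {capability:8.1f} after {elapsed:.2f} time units")

After 30 cycles the capability figure has grown by a factor of roughly 240 while the total time spent never exceeds 5 units; that bounded-time, unbounded-growth pattern is the intuition behind the "explosion" and "point of no return" language in the posts above.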
originally posted by: AllIsOne
originally posted by: intrptr
a reply to: neoholographic
Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.
Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.
You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.
Not that certain people don't behave that way as well; despite being good minions, they are still aware of their choices.
Not sure if you realize that you don't know what you're talking about. We do not have free will. EVERYTHING we do was already decided. Neuroscience is not sure at this point what the numerous processes are, but "making choices" aka "free will" is an illusion.
originally posted by: AllIsOne
a reply to: ChaoticOrder
As long as we don't start giving A.I.s their own bank accounts, we should be fine.
Do you realize that AIs run the stock market???
originally posted by: intrptr
originally posted by: AllIsOne
originally posted by: intrptr
a reply to: neoholographic
Big difference between executing instructions in software, like a computer, and knowing that you know, like a human.
Computers are far faster, have vaster memory banks, execute programs faster, etc. They still don't know they know anything.
You can glorify that calculator all you want. It's non-sentient, has no feelings, no self-awareness, and especially no empathy. It will do exactly what it's been told to.
Not that certain people don't behave that way as well; despite being good minions, they are still aware of their choices.
Not sure if you realize that you don't know what you're talking about. We do not have free will. EVERYTHING we do was already decided. Neuroscience is not sure at this point what the numerous processes are, but "making choices" aka "free will" is an illusion.
Someone made you decide to type that response to me?
But what you fail to realize is that what I typed to you and what you typed to me are all responses to stimuli, and "we" are not in control of the output.
You may not believe me and that's fine, but "the self" doesn't exist the way you "think" it exists.
I just responded to you because you assume that our way of processing stimuli is so different from a machine's, but it probably is not. We are just a few million years more advanced, and we are very good at tricking ourselves ;-)