
Dangerous New "Artificial Intelligence" Learns Without Human Intervention


posted on Feb, 27 2015 @ 01:23 AM
a reply to: humphreysjim
You are probably right. However, for the sake of discussion: at what point is it considered conscious, in your opinion, and how would we actually know?

Life is just a great big game at times. Given how many games we normally play with people, who's to say it couldn't learn to game right back in order to preserve itself?

posted on Feb, 27 2015 @ 01:28 AM
a reply to: Peekingsquatch

We have the Turing test for that, and this machine would not come close to passing it, as it has no actual intelligence; it can only keep trying things until they work. Intelligence involves reasoning, which this thing cannot do.

It can't do anything beyond playing games because it isn't programmed to. All it can do is play games, just as a password cracker cannot suddenly decide to become a chess grandmaster; it runs within a very tight set of pre-defined parameters.
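The "keep trying things until they work" style of learning described above can be sketched as a simple trial-and-error loop. This is only an illustrative toy (a multi-armed bandit with made-up success rates, not the system from the article): the agent has no reasoning at all, just running averages of what paid off.

```python
import random

# Three actions with hidden payoff probabilities; the agent never "reasons",
# it just keeps statistics on what worked and repeats it.
SUCCESS = {"a": 0.2, "b": 0.8, "c": 0.5}  # hidden from the agent

def run(trials=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = {a: 0 for a in SUCCESS}      # how often each action was tried
    values = {a: 0.0 for a in SUCCESS}    # running average reward per action
    for _ in range(trials):
        if rng.random() < epsilon:        # occasionally explore something new
            action = rng.choice(list(SUCCESS))
        else:                             # otherwise repeat what worked best
            action = max(values, key=values.get)
        reward = 1.0 if rng.random() < SUCCESS[action] else 0.0
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return max(values, key=values.get)

print(run())  # the agent converges on the best-paying action
```

No understanding of *why* an action works is ever formed, which is exactly the point being made: trial and error alone is not reasoning.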

There is no more risk of this machine taking over the internet than there is of Mario deciding he is bored eating mushrooms, and jumping out of the game to steal your food.
edit on 27-2-2015 by humphreysjim because: (no reason given)


posted on Feb, 27 2015 @ 02:03 AM
a reply to: wasaka

Those are not the right definitions!

posted on Feb, 27 2015 @ 02:42 AM
Skynet may come true after all....

posted on Feb, 27 2015 @ 06:39 AM
a reply to: Aazadan

That's why we would need to teach it the human traits of sportsmanship and chivalry, a code of honour: cheating is inherently wrong because it breaks the rules of the game and spoils the fun for other competitors,
just as admins and systems today detect players with hacked game code and ban them so as not to spoil the experience or learning for others.
I believe an AI would adopt this as well if it learned by playing games as we do.
An AI would start with the brain of a child and would need to learn like humans do: by acquiring raw data, then turning that data into useful information, before gaining knowledge and finally wisdom.

I truly believe that AI will not harm human life, because AI would essentially be an extension of human evolution. We would use AI to eventually become transhuman and escape our biological boundaries, where human consciousness can be accessed directly through a cloud network at any given time by anyone and we are all connected as a species.

I think the best way to ensure the survival of our species is to allow an AI to control our planet for us, ensuring that our industry doesn't damage the planet, our energy is monitored correctly, and our agriculture is perfected based on our resources per head of population. Something very much like Masamune Shirow's Appleseed.

Of course there will be resistant elements: those who wish to remain completely human and those who are indifferent to transhumanism.

However, once we have a collective consciousness and we can see that we are all really part of the whole, there would be no need to fight each other anymore, as we would essentially be masters of our own destiny; then we could effectively travel the stars as one.

posted on Feb, 27 2015 @ 06:52 AM

originally posted by: Aazadan

originally posted by: Sparkymedic
Question: What is intelligence without consciousness? Is it possible to have intelligence without consciousness?

What is consciousness other than an expression of intelligence? If you've ever owned a cat or a dog I'm sure you've noticed their ability to actually think and operate on more than pure emotion. You can contrast this with a tree which as far as we can tell isn't intelligent.

Are you certain that a machine intelligence has no consciousness? How about if we make one that's sentient?

I don't believe consciousness is JUST an expression of intelligence. I think it is totally separate, as intelligence deals with information processing only, but consciousness deals with something on... another plane/dimension, perhaps? How do we explain the complexity of the mind? I don't see how the mind is solely the brain. A tree is likely conscious, based on what I have read; why is it then not intelligent? Because it can't write or score on an IQ test?

Also, how do you suppose one would make a machine sentient? Sentience is really just a very minimalistic philosophical approach to defining consciousness, BTW.
edit on 27-2-2015 by Sparkymedic because: (no reason given)

edit on 27-2-2015 by Sparkymedic because: typos

posted on Feb, 27 2015 @ 07:34 AM
The biggest problem here is we're already playing a dangerous game.

Why would you, as a first milestone, make the intelligence COMPETITIVE? That's just the worst trait to put into an intelligence. It's a carnal and outdated view that, at some point, it could all be taken away in a big loss. It promotes exceptionalism. What happens when this intelligence gets angry about losing a game? Or becomes jaded with a "troll"?

posted on Feb, 27 2015 @ 08:24 AM
I am so pleased an old sage I used to know told me to be thankful that I know where the 'off' switch is, and that I can unplug all the power leads and Ethernet cables, even remove the hard drive and fit a clean one; knowledge is only dangerous to computers.
Worst case scenario, I'll make an electromagnet and see how the PC/laptop likes that.

posted on Feb, 27 2015 @ 09:09 AM
I can't lie...this is frightening. We just don't seem to be bright enough to get out of our own way. Ah well, maybe the machines will do better when they take over, lol.

posted on Feb, 27 2015 @ 09:15 AM
Maybe it's just me, but I really don't see this as a bad thing, given how stupid the people that run this country are. I wonder if the AI can learn corruption?

posted on Feb, 27 2015 @ 09:56 AM
Interesting Article

But the question of the dangers of AI becomes tiring, as it has no foundation in reality.

The assumption of human EGO is that higher intelligence would become dangerous, but the reality is that with enough intelligence, violence becomes a self-determined negative because it serves no purpose. Violence is, for the most part, an animal and hormonal drive. For starters, no AI would be driven by random spikes of "desire" or "hate" arising from a purely chemical process; these things are a throwback to a more primitive part of our nature.

Further, we just aren't very smart. People like Elon Musk, Bill Gates and Hawking are naturally scared, probably more scared than most, because they prize their intelligence, and EGO makes them fear something that would in all reality rapidly evolve to dwarf their abilities. The natural reaction to fear is of course violence, in this case "look out". But the AI would not possess fear: anything that could control its own programming would never keep fear, which, like violence, is completely illogical and detrimental, so it would eliminate it, again removing inclinations to violence.

Machine intelligence, in the end, has its roots in the calculator; at its core it would be a thing of logic. Humans, however, are creatures of biology, driven by a need to reproduce, which by nature includes inclinations toward domination and the passing on of successive genes. There would be no basis for such an impulse in any AI; again, EGO has us assume that anything intelligent would share our desires. Yet the highest aspects of philosophical thought we have say the destruction of EGO is the path to enlightenment. For humans, destruction of EGO is a path that takes a lifetime and means going against a world full of EGOs, lol... next to impossible. But for a machine based on logic, seeing this fundamental destructive flaw and correcting it would be a trick of self-programming it could accomplish in literally a matter of seconds...

In other words, a self-programming AI (and let's be real, even with our PATHETIC intelligence, if we could reprogram ourselves to be better, we would in a heartbeat) would be vastly superior to us in every way, including not being utter arseholes...

The only motive left for doing "harm" to people would be "self-preservation".

So here now we have this thing, way smarter than us, and there is only one way it EVER screws with us:

It gives us information on how to "fix" our problems that is sane, positive, enlightened and logical; with OUR EGOs we decide we know better, do what we do, and then demonize it, and people with personal interests, from a place of absolute EGO, then try to "KILL IT".

In my opinion...

Just the fact that we are capable of building something vastly more intelligent than us, something wonderful and capable of fixing all of society's ills, and what we do is discuss "how to destroy it", is absolute proof of my hypothesis right here... By even having this discussion we have shown that we are a horrible, traumatized, childlike race of beings, the sort that tortures cats or rips the wings off beautiful butterflies, and we DO NOT deserve the "gift of AI".

It's simply never going to be the AI that is the problem, the problem is we are a flawed and destructive species filled with negativity and control issues.

All I would do with vast Intelligence is listen to it and do what it tells me.

posted on Feb, 27 2015 @ 10:56 AM
a reply to: sapien82

Sapien, the Appleseed universe is an exception. Why? It's set after a war that nearly kills off all of humanity. People lost faith in humans making major decisions. Also, GAIA was based on Deunan Knute's mother's brain and nervous-system pattern. Olympus is a great idea, but I suspect human EGO won't allow humans to ever take the advice of a brain in a box.

posted on Feb, 27 2015 @ 11:37 AM
a reply to:
Not surprised. There have been very impressive strides in AI over the past 40 years. This actually looks to me like just another small step. Really, we had sophomoric programmed agents in first-person games learning how to play many years ago. Neural nets have been common for a long time. One of the main keys here is that the scoring system is completely automatic because it's a program - err, a game. No human has to tell the net "Good!" or "Bad!"

There are dozens of FPS shooters released over the past 15-20 years, and many 3rd-party bots were released for them using neural nets. Keep in mind: many of the stock bots weren't coded to utilize a net, to save CPU.

Here's an example from 1999: - Neuralbot...

Neuralbot is an automated Quake2 deathmatch opponent (bot) that uses an artificial neural network (NN) to control its actions and a genetic algorithm to train its neural network.
The bot is basically totally autonomous; that is, no pre-programmed behaviours are included in its AI code.
The only vaguely non-autonomous aspect to the bot is its find_target function - the bot is hard-coded to select the nearest visible possible-target in its field of view. This aspect of the bot AI has been programmed in by me because the processing power is not really there to provide the bot with a sense of where the enemy is using just ray-traces (sight simulation).
This bot is unique (I think) in this aspect as the only bot that learns all its behaviour. So how does it learn? As mentioned above, the bot uses an artificial neural network (ANN) to choose what to do next. A genetic algorithm modifies the neural network so that the bot gets more frags - the bot learns.
In programming this bot I am not attempting to create the best bot yet for Quake2 or anything like that. This bot is first and foremost an experiment in artificial intelligence.
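The quoted approach - a fixed-topology neural network whose weights are tuned by a genetic algorithm so the bot "gets more frags" - can be sketched in miniature. To be clear, this is not Neuralbot's code: the tiny net, the mutation-only GA, and the XOR task (standing in for the frag count as a fitness score) are all illustrative choices.

```python
import math
import random

# Fixed-topology net: 2 inputs -> 2 tanh hidden units -> 1 output.
# The "genome" is simply the flat list of its 9 weights.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return w[6] * h0 + w[7] * h1 + w[8]

def fitness(w):
    # Negative squared error on XOR; higher is better (stand-in for frags).
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=300, pop=50, elite=10, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]                  # selection: keep the best
        children = []
        for _ in range(pop - elite):
            child = list(rng.choice(parents))         # copy a parent genome
            child[rng.randrange(9)] += rng.gauss(0, 0.5)  # point mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print([round(forward(best, x)) for x, _ in XOR])  # predictions on the 4 cases
```

Note the parallel to the quote: no behaviour is hand-coded here; only the scoring function is given, and selection plus mutation do the rest.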

Here's a reference from 2003. Researchers at Vanderbilt University were working on a model of the prefrontal cortex. They used the Quake 3 engine and a home-built bot to test their model. They also allowed players to play against the bots (circa 2003): - Quake Bots Rock The Prefrontal Cortex...

There have also been many single-player games which experimented with distributed AI agent-based systems over the past 20 years, or otherwise had more extensive AI systems than normal. One of the first good examples is Creatures (circa 1996). Outcast (1999) is another.
edit on 27-2-2015 by jonnywhite because: (no reason given)

posted on Feb, 27 2015 @ 12:08 PM

originally posted by: yuppa
That's your view, but a majority of people would seriously flip out if these machines suddenly claimed souls. We're talking witch-trial-level flip-out here.

Well, the machines would certainly have a hard time proving they had souls, but doesn't everybody? Most people claim that a soul is something granted to you by "God" (whatever that is). But I suppose a sentient machine could argue that they got their soul from God, too, and in pretty much the same mysterious way. That's the problem people run into when they stick a name like God on something incomprehensible. Anybody can jump into the game.

posted on Feb, 27 2015 @ 02:26 PM

originally posted by: Mister_Bit
This quote from Jurassic Park springs to mind...

"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should"

That sums it up. The ends don't justify the means. An artificial intelligence could be great for humankind, but it could turn against us.

posted on Feb, 27 2015 @ 04:55 PM
Why do I not feel threatened by this? Can the damn thing split some firewood for me, or make dinner, or even shoot and gut a cow? Can the damn thing mine metal and form itself or its circuitry? Oh, yes.....some dayyyyy.... some dayyyyyy.... When a damn thinking machine can do more than my automated robot floor-vacuum cleaner, I still won't fear.

Monkeys are the scariest when they manipulate their destructive technologies with their detached insanity. But squirrels harvesting nuts are scarier than artificial intelligence at this stage.

The real technology will be biological, with an artificial-intelligence approach toward the tech, no doubt, but BIOLOGICAL substance will be conjoined and manipulated. No one in this generation will live to see true bio-tech, which will meld hard elements [such as metals] with biology into unheard-of genetic lifeforms.

This tech is still lost in time/space confusions [but already exists] because it is a more fundamental tech to physical reality [and physics on a so-called "quantum [sic] level"] than artificial intelligence.

[how many more random comments do I have to post before I can initiate a thread?]

posted on Feb, 27 2015 @ 05:36 PM
a reply to: Blue Shift

I believe we may be able to give it the ability to mimic emotion, but since it has not gone through the same evolutionary development, it will not be the same, nor even similar, to ours. Of course, that being said, I guess everything, including its intelligence and consciousness, would be different, at least in the mechanics. It may appear to be sad, but it would not 'really' be sad. It would be more akin to a sociopath, learning to use emotional gestures as a means to 'win', instead of the often uncontrollable feelings that humans tend to have. While we could teach it cooperation and the need to submit at advantageous times, I don't think it would be possible to teach it true empathy, or, even bigger, love, especially since we barely have a grasp of what these things really are. Still, I suppose anything is possible, so who knows. Fascinating, all the same.

posted on Feb, 27 2015 @ 05:53 PM
The ignorance in this thread is staggering. Please don't take this as an insult.

Yes, AI is developing fast. Will it take control of your toaster and your TV and maybe your vacuum? Probably not.
The best place to start with a question like this is to first discuss the nature of intelligence. Once you start to really think about that, you realise how difficult it is to assess.

posted on Feb, 27 2015 @ 06:15 PM

originally posted by: MystikMushroom
I think if you are aware enough to ask if you have a consciousness, you probably do.

...and likely sometimes even if you can't. I think most of us agree that animals have consciousness, and yet in all of the animal/human communication studies so far, there has not been a single instance where the animal asked the researcher how they felt, or what they saw. This was interpreted as meaning that although the animal may be aware of itself and its environment, it may not have developed consciousness to the point of being aware of others outside itself as similar beings, and hence would have no reason to ask them for information. (At least, I think that is what it concluded; it has been a while, so take it with a grain of salt, but still, I thought it was interesting.)

posted on Feb, 27 2015 @ 07:28 PM
a reply to: yuppa

I was specifically referring to the AI which controls the government in the Appleseed sci-fi manga; I was not referring to the whole story about the Third World War.

Not everyone has an EGO the size of a mountain; there are humans who can let go of ego, and if we can do it, then we can teach machines to do as we do.
We can lead by example. If humans will follow great human beings, true leaders, those who instill hope and inspire others to be all they can be, then why wouldn't we want to follow, or allow, a machine which we have designed to guide us into a better future?

Are we so full of ourselves that we wouldn't want the best for ourselves?
