
Artificial Intelligence learning how to play Hide & Seek

posted on Sep, 18 2019 @ 10:28 AM
This is fascinating stuff. Yeah, I know...some people will correlate this to doom porn and the end of mankind, but I'd rather focus on how AI could be used to make the world a better place.

Check out this video of an AI program learning how to play hide and seek. It's presented in a way anyone can understand so it's worth the 3 minutes of your time.



Imagine cities being designed with this technology. Streets, highways and interstates could all be run through these programs to find the perfect design. Military strategy could be run through AI to find the ideal solution to any problem without death and destruction. Farming, medical procedures, air traffic control...

Watching a program learn how to interact with the world around it like this is pretty cool, I think. The dark side of AI has been discussed at great length. Of course, as a skeptic, I'm wary of such an outcome. But there's also a chance it can be used for the betterment of mankind.

Perhaps one of the biggest obstacles to AI being used for good is its lack of humanity, and the corruption of mankind. But we write the code that makes it, and we create the algorithms behind it all. Crazy to think what the world will look like in 50 years and how much AI will grow in that time.



posted on Sep, 18 2019 @ 10:34 AM
a reply to: Assassin82

This is very, very interesting! I wonder what would happen if they continued this simulation, like the big one zoomed out at the end, but added ever-increasing resources. Would these agents eventually cooperate and build themselves a giant city? Judging from their behavior here, that seems like a distinct possibility.



posted on Sep, 18 2019 @ 10:40 AM

originally posted by: PokeyJoe
a reply to: Assassin82

This is very, very interesting! I wonder what would happen if they continued this simulation, like the big one zoomed out at the end, but added ever-increasing resources. Would these agents eventually cooperate and build themselves a giant city? Judging from their behavior here, that seems like a distinct possibility.


That's a good question. I'd think that if it were a large, open world like that, eventually they would start to interact with each other, perhaps like ancient tribes of hominids leaving the caves and jungles in search of more resources.



posted on Sep, 18 2019 @ 10:54 AM
So when it becomes self-aware it will have no problem finding us hiding out in Zion. These are Agent Smith's baby steps.
a reply to: Assassin82



posted on Sep, 18 2019 @ 11:11 AM

originally posted by: Athetos
So when it becomes self-aware it will have no problem finding us hiding out in Zion. These are Agent Smith's baby steps.
a reply to: Assassin82



Maybe AI is a cumulative product of humanity's collective subconscious. Maybe AI is a part of our natural evolution. In such a scenario, the destruction of Zion would be entirely self-inflicted and Agent Smith would be the embodiment of our evolution.



posted on Sep, 18 2019 @ 11:49 AM
a reply to: Assassin82

There are two types of AI: Weak AI and Strong AI. Weak AI is dangerous in large part because people make no distinction between it and Strong AI. Here is an example:



Contrary to the delusions of the enthusiasts, computer programs only ever do exactly what they've been programmed to do.

Here's a great discussion on why Strong AI is so difficult to program:




posted on Sep, 18 2019 @ 11:51 AM

originally posted by: Assassin82
Maybe AI is a cumulative product of humanity's collective subconscious. Maybe AI is a part of our natural evolution. In such a scenario, the destruction of Zion would be entirely self-inflicted and Agent Smith would be the embodiment of our evolution.

Yes. Human beings are a transitional species, here and gone in a flash in geological terms. We'll clever ourselves out of existence pretty soon through both genetic engineering and AI. People will alter their genetics to the point where our breeding compatibility is severely compromised, even more than it already is by stress and chemicals in the environment. AI will either protect us to death, outright kill us, or decide to take off into the universe to find similar "beings" in order to upgrade itself. They will be our emissaries to the stars, because we don't have the physical capability and longevity to make the trip ourselves.



posted on Sep, 18 2019 @ 11:51 AM
a reply to: Assassin82

Probably the scariest thing I've heard in like forever.

Wow, just wow.



posted on Sep, 18 2019 @ 11:56 AM

originally posted by: Assassin82

originally posted by: Athetos
So when it becomes self-aware it will have no problem finding us hiding out in Zion. These are Agent Smith's baby steps.
a reply to: Assassin82



Maybe AI is a cumulative product of humanity's collective subconscious. Maybe AI is a part of our natural evolution. In such a scenario, the destruction of Zion would be entirely self-inflicted and Agent Smith would be the embodiment of our evolution.


Or, what makes us conscious and capable of intelligence is not a computer program but something else. You can't prove a negative, so at some point in the future monkeys might fly out of my butt. But the chances of monkeys flying out of my butt are pretty slim. The same goes for an arrangement of computer bits in RAM. Bits in RAM are a little like an arrangement of rocks in a field. It is possible the rocks develop some hyper-intelligence and coalesce into a self-conscious rock giant with senses and intelligence, stomping around killing people. Ancient astronaut theorists say "yes!" even though it is very unlikely this will ever happen.





posted on Sep, 18 2019 @ 11:59 AM

originally posted by: dfnj2015
Contrary to the delusions of the enthusiasts, computer programs only ever do exactly what they've been programmed to do.

Well, these days we're working on programming AI to mimic human actions and responses. At some point, the AI will be so good at mimicking us that we won't be able to tell if it's self-conscious or not. And if you can't tell the difference, is there really a difference?

But that's just mimicking humans. Our machines are already starting to function and communicate with each other in ways we don't completely understand. I've recently begun to think that it's already taken over the foundations of our society, but we don't even recognize it because we only understand intelligence that resembles our own. It's a conceptual takeover.



posted on Sep, 18 2019 @ 12:46 PM
a reply to: Blue Shift

Even if it doesn't take over the foundations of society, you know damn well it will be programmed to.
It's probably being looked at as a giant wet dream of control.
Absolute control of all media and the internet.
Not some tool to advance, a tool for massive control.
That's the scary part...manufactured reality



posted on Sep, 18 2019 @ 01:32 PM

originally posted by: Mandroid7
a reply to: Blue Shift

Even if it doesn't take over the foundations of society, you know damn well it will be programmed to.
It's probably being looked at as a giant wet dream of control.
Absolute control of all media and the internet.
Not some tool to advance, a tool for massive control.
That's the scary part...manufactured reality


And people will willingly pay to be a part of that fake reality.



posted on Sep, 18 2019 @ 02:21 PM
AI is good assuming the data you start with is correct, and at the moment it's normally used for very dedicated tasks that are easy to model. But for military planning etc. you always need to plan for the f--- ups, which basically means entering infinite recursion trying to plan for every possible event in detail.

Trying to model people's behaviour can be hard: you think you've got a handle on how people work, then someone adds a can of Wal-Mart's best to the mix and things can soon take a twist.



posted on Sep, 18 2019 @ 02:28 PM

originally posted by: Assassin82
And people will willingly pay to be a part of that fake reality.

Well, sure. Because our real reality sucks. This year people are expected to spend over $150 billion on video game related stuff. That absolutely buries what people spend on TV and movies.

We love our virtual worlds more than our own. Another reason why people will gladly welcome our AI overlords.



posted on Sep, 18 2019 @ 03:42 PM
I don’t disagree, but still don’t wanna be killed by the products of my evolution.
I’d rather become the agent really.

a reply to: Assassin82





posted on Sep, 18 2019 @ 06:59 PM

originally posted by: Athetos
I don’t disagree, but still don’t wanna be killed by the products of my evolution.

Eh. Happens all the time. Grandchildren are born with an immunity to diseases that their grandparents don't have. The cute little preschooler rolls around in the plastic ball bin exchanging diseases with all the other little kids. Grandma gives them a big smooch. So long, Granny!




posted on Oct, 11 2019 @ 11:14 AM
a reply to: Assassin82




Perhaps one of the biggest obstacles to AI being used for good is its lack of humanity, and the corruption of mankind. But we write the code that makes it, and we create the algorithms behind it all. Crazy to think what the world will look like in 50 years and how much AI will grow in that time.


We can and probably should use something along the lines of the Three Laws of Robotics that Isaac Asimov published in his Robot series of books, starting with "I, Robot" in 1950.

1 - A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2 - A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3 - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
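The interesting thing about the Three Laws is that they're not three independent rules; they're a strict priority order, with each law yielding to the ones above it. A toy sketch of that ordering (the `Action` fields and the `permitted` function are invented for illustration, not any real safety framework):

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A proposed robot action, described by the properties the Laws care about.

    All fields here are hypothetical flags chosen just to illustrate the
    priority ordering of Asimov's Three Laws.
    """
    harms_human: bool = False        # First Law: direct harm
    allows_human_harm: bool = False  # First Law: harm through inaction
    ordered_by_human: bool = True    # Second Law: is this a human's order?
    endangers_robot: bool = False    # Third Law: self-preservation

def permitted(a: Action) -> bool:
    """Check the Laws in priority order; a lower law never overrides a higher one."""
    # First Law: an absolute veto, regardless of orders or self-interest.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: obey human orders (the order already passed the First Law check).
    if a.ordered_by_human:
        return True
    # Third Law: with no order in play, the robot must protect itself.
    return not a.endangers_robot
```

So an order to harm a human is refused (Law 1 beats Law 2), and a self-destructive act is refused only when no human order requires it (Law 2 beats Law 3). Of course, most of Asimov's stories are precisely about how these tidy rules break down in practice.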



posted on Oct, 11 2019 @ 10:46 PM
a reply to: Blue Shift




Well, these days we're working on programming AI to mimic human actions and responses. At some point, the AI will be so good at mimicking us that we won't be able to tell if it's self-conscious or not. And if you can't tell the difference, is there really a difference?


Check this out Blue Shift...

The Turing Test



posted on Oct, 12 2019 @ 02:06 AM
Intelligence cannot be artificial, that's a misnomer.

Lifeless objects are not 'intelligent', in any way. Never will be, either.


