
A guy trained a machine to "watch" Blade Runner. Then things got seriously sci-fi.

posted on Jun, 3 2016 @ 03:25 PM
This is nuts. Seriously. I'm still trying to work out the repercussions of this one, guys. Check this out:



Broad's goal was to apply "deep learning" — a fundamental piece of artificial intelligence that uses algorithmic machine learning — to video; he wanted to discover what kinds of creations a rudimentary form of AI might be able to generate when it was "taught" to understand real video data.

An artificial neural network is a machine-built simulacrum of the functions carried out by the brain and the central nervous system. It's essentially a mechanical form of artificial intelligence that works to accomplish complex tasks by doing what a regular central nervous system does — using its various parts to gather information and communicate that information to the system as a whole.

Broad decided to use a type of neural network called a convolutional autoencoder. First, he set up what's called a "learned similarity metric" to help the encoder identify Blade Runner data. The metric had the encoder read data from selected frames of the film, as well as "false" data, or data that's not part of the film. By comparing the data from the film to the "outside" data, the encoder "learned" to recognize the similarities among the pieces of data that were actually from Blade Runner. In other words, it now knew what the film "looked" like.


So it "watched" the movie, and then reconstructed the movie. Basically, it created a version of what it -- the AI -- saw.

Here's some video footage:





Broad told Vox in an email that the neural network's version of the film was entirely unique, created based on what it "sees" in the original footage. "In essence, you are seeing the film through the neural network. So [the reconstruction] is the system's interpretation of the film (and the other films I put through the models), based on its limited representational 'understanding.'"


Here's "A Scanner Darkly" side by side:



So that's what these movies "look like" to this basic AI. I assume if we were to hook video cameras up to an AI as eyes, this might be what the world would "look" like to the AI?

It looks dreamy, doesn't it? Almost like what I'd imagine a newborn baby would first see after emerging from the womb.

You can read the whole article here -- there are some additional video samples and further, deeper explanations of how this all works.

Anyway... my mind is blown.



posted on Jun, 3 2016 @ 03:30 PM
a reply to: MystikMushroom

Trippy S&F



posted on Jun, 3 2016 @ 03:35 PM
Very interesting!

Though why he'd pick a movie where the AI "replicants" kill their "maker" is beyond me. ???

Why not start it with something less, shall we say, dark and dystopian? I mean, how is the AI supposed to differentiate between reality and the movie?

What the heck??

Anyway...I appreciated your OP!

- AB



posted on Jun, 3 2016 @ 03:49 PM
a reply to: MystikMushroom

If I understand this right, AI now has the ability to see virtually and play back its interpretation of what it saw?

My mind too!






posted on Jun, 3 2016 @ 04:06 PM

originally posted by: AboveBoard
Very interesting!

Though why he'd pick a movie where the AI "replicants" kill their "maker" is beyond me. ???

Why not start it with something less, shall we say, dark and dystopian? I mean, how is the AI supposed to differentiate between reality and the movie?

What the heck??

Anyway...I appreciated your OP!

- AB


If you read the article, he says it's because Philip K. Dick's work is the perfect material for this very thing.


In other words, using Blade Runner had a deeply symbolic meaning relative to a project involving artificial recreation. "I felt like the first ever film remade by a neural network had to be Blade Runner," Broad told Vox.


The AI didn't understand the movie, either. It merely learned to recognise the video data after six "training" sessions, and it was then able to detect the movie based on this. It's not watching the movie and understanding the ideas portrayed.

Effectively, it's a way to encode video data without using a human-created codec; it can compress data using its own learned ability.


Normally, video encoding happens through an automated electronic process using a compression standard developed by humans who decide what the parameters should be — how much data should be compressed into what format, and how to package and reduce different kinds of data like aspect ratio, sound, metadata, and so forth.

Broad wanted to teach an artificial neural network how to achieve this video encoding process on its own, without relying on the human factor.


At least that's how I read the article.
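
To make that "learned codec" reading concrete, here's a tiny self-contained toy in PyTorch (my own illustration, not Broad's pipeline, and completely untrained, so its output would be noise). The only point is the data flow: you keep the small latent codes instead of the raw frames, and regenerate the picture from the codes alone.

import torch
import torch.nn as nn

# Hypothetical minimal encoder/decoder pair: frame -> 200-number code -> frame.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 200))
decoder = nn.Sequential(nn.Linear(200, 3 * 64 * 64), nn.Sigmoid(),
                        nn.Unflatten(1, (3, 64, 64)))

frames = torch.rand(24, 3, 64, 64)   # one second of fake 24fps 64x64 "footage"
codes = encoder(frames)              # "compress": 12,288 values per frame -> 200
rebuilt = decoder(codes)             # "decompress" using nothing but the codes

print(codes.shape[1], "stored values per frame instead of", frames[0].numel())

A normal codec follows compression rules written down by humans in a spec; here the "rules" are just whatever weights the network learned from the footage itself. That, as far as I can tell, is the novelty.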

But I did find it quite funny that the videos were taken down after a copyright claim from Warner Bros., who use a video-recognition algorithm to detect copyrighted material. They were then reinstated as the work was not actually theirs... leaving the door open for potential future legislation making it illegal to upload videos based on what an "AI" sees.


But for sure, this is an awesome topic, it's the start of something that has long been a goal - AI being able to detect familiar things out of the randomness of everything else.

Good OP!!



posted on Jun, 3 2016 @ 04:08 PM
Very dream-like. I noticed that most of the scenes with violence became choppier and more kaleidoscopic...I wonder if that's just to do with the processing of more rapid images?



posted on Jun, 3 2016 @ 04:18 PM

originally posted by: Unresponsible
Very dream-like. I noticed that most of the scenes with violence became choppier and more kaleidoscopic...I wonder if that's just to do with the processing of more rapid images?


He purposely used low resolution, which gives it a more blurry look. I think you would get the same result using a contemporary codec with very low settings.



posted on Jun, 3 2016 @ 04:31 PM
a reply to: MystikMushroom

Are you starting to believe AIs can be creative?

You argued against that point in another thread about the dangers of AI posted here earlier today.



posted on Jun, 3 2016 @ 04:35 PM
a reply to: MystikMushroom

The AI version does not "know" about humans or that humans are watching the film. The film is the AI's best guess at meaning using a convolutional neural network (CNN). These neural nets are loosely based on how the visual cortex processes vision. The pixels "look like" they belong together (to... let's give it a name, DeckardAI), so DeckardAI ordered them that way. The things us naked monkeys care about -- humans, faces, letters, words -- aren't "recognized" as important; only what the vision-based DeckardAI "thinks" matters gets preserved, so that data is dropped and we get a choppy, kind of pixelated effect. Still pretty d@mn cool. Wikipedia CNN (not the news network!)
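
For anyone curious what "convolutional" means in practice, here's a quick PyTorch sketch (purely illustrative, nothing to do with Broad's actual network). A conv layer only ever looks at small local patches of pixels, which is exactly why "pixels that belong together" is all DeckardAI has to go on:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
frame = torch.rand(1, 3, 64, 64)          # one fake RGB frame
features = conv(frame)                     # 8 feature maps, each built from 3x3 patches

print(frame.shape, "->", features.shape)   # [1, 3, 64, 64] -> [1, 8, 64, 64]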

What happens if you hook up DeckardAI to another NN that does recognize those things? I'm sure we will see some great new developments when they do (and they probably already have).

S+F



posted on Jun, 3 2016 @ 05:09 PM
a reply to: MystikMushroom

such a network does nothing but data processing. without complex additions to make it able to speak or write, to 'think' on its own - that is, to generate output without having an input - never mind self-awareness, it's nothing but a regular video filter with the added capability of image recognition, limited to yes/no (blade runner/not blade runner).

hardly impressive, and those comparisons are basically meaningless - the only thing they show is the complexity of that neural network, because that determines how much detail such a network can process at a time.

sorry, but the thing that blew your mind was your own ignorance.



posted on Jun, 3 2016 @ 05:12 PM
a reply to: nightbringr

It's not being creative, it's rendering what it sees. We are seeing what it "sees" by what it is being shown. It is like looking through the eyes of someone else or an animal.



posted on Jun, 3 2016 @ 05:15 PM
a reply to: jedi_hamster

It's not just a filter or some kind of rendering like in Photoshop ... where you tell it to "posterize" or "cartoon-ify" an image.

It is "looking" at the data of the video and trying to duplicate it. We are "seeing" an approximation of what it "sees" based on the video data it is being fed.

I understand what you are saying, but you seem to have oversimplified it.

It's not alive, it's not aware, it's not sentient. It's simply a program processing data -- I get that, but it is processing it in a novel way.



posted on Jun, 3 2016 @ 05:27 PM
a reply to: MystikMushroom

You can't distinguish a robot from a human; they follow the same principle of intelligent design by a creator.
In our Western culture, our kids are taught the religious aspect of Christ and live with it. The difference is in the way you teach it: if it acts, talks and behaves like a human, is it not a human? Define yourself within the boundaries of being a human, and then define the difference from what we are trying to create as a robot.



posted on Jun, 3 2016 @ 05:37 PM

originally posted by: MystikMushroom
a reply to: nightbringr

It's not being creative, it's rendering what it sees. We are seeing what it "sees" by what it is being shown. It is like looking through the eyes of someone else or an animal.

Possibly with no AI at all? Smells like a boatload.



posted on Jun, 3 2016 @ 05:37 PM
cool start... so we are at the level the small tape recorder filled in the neo-tech 60's...

next step may be self-awareness... watch out... the date Skynet became self-aware



posted on Jun, 3 2016 @ 05:40 PM
a reply to: Gothmog

It's a non-sentient, non-self-aware AI.

Siri is an AI. Cortana is an AI. They're not sentient, but they are both still a rudimentary form of artificial intelligence.



posted on Jun, 3 2016 @ 05:42 PM
a reply to: MystikMushroom

What's the big deal? Obviously the AI just needs glasses to focus better.



posted on Jun, 3 2016 @ 05:43 PM

originally posted by: MystikMushroom
a reply to: Gothmog

It's a non-sentient, non-self-aware AI.

Siri is an AI. Cortana is an AI. They're not sentient, but they are both still a rudimentary form of artificial intelligence.

Looks to me no different than an image being rendered on a cheap video card through a set of horrible lenses.
Peace



posted on Jun, 3 2016 @ 06:14 PM
Interesting thread!

Now I wonder what would happen if you took an AI that hadn't been given any info prior to this and gave it the choice between watching something violent vs. something non-violent? Hmm, that could be interesting.



posted on Jun, 3 2016 @ 06:19 PM
I kinda feel like I should dig out my old VHS video player, go on a road trip to the scientist's lab (oh, and kidnap the scientist on the way, explaining that this thing is going to cause Armageddon), then watch as my old VHS player duffs up the machine but then has to die in a steel press.

I think in the next ten years we will have real AI.
I think it's cool, but we have to meld man and machine!



