originally posted by: AboveBoard
Very interesting!
Though why he'd pick a movie where the AI "replicants" kill their "maker" is beyond me. ???
Why not start it with something less, shall we say, dark and dystopian? I mean, how is the AI supposed to differentiate between reality and the movie??
What the heck??
- AB
The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:
[1] Record brain activity while the subject watches several hours of movie trailers.
[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured.
(For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)
[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
[4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note: these videos have no overlap with the movies that subjects saw in the magnet.) Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction (a toy version of the whole pipeline is sketched below).
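For the technically inclined, here is a minimal, self-contained sketch of that encode-then-identify pipeline in Python. It is an illustration, not the study's actual code: synthetic random vectors stand in for the movies' motion-energy features and for the fMRI recordings, plain ridge regression stands in for the lab's encoding model, and the array sizes and the reconstruct helper are invented for the example.

```python
# Toy version of the encode-then-identify pipeline described in steps [1]-[4].
# Everything here is synthetic: random feature vectors stand in for the
# motion-energy features of movie frames, and simulated noisy responses
# stand in for measured fMRI activity.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_train, n_test = 2000, 50   # seconds of training / held-out test movie
n_features = 100             # stimulus features per second (stand-in)
n_voxels = 500               # points at which brain activity is measured

# [1] "Record" brain activity to the training movies: simulate responses
# as a linear function of the stimulus features plus measurement noise.
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))
Y_train = X_train @ W_true + rng.normal(scale=5.0, size=(n_train, n_voxels))

# [2] Build the "dictionaries": regularized regression from stimulus
# features to activity, one weight column per voxel (Ridge fits all
# voxels in a single multi-output call).
encoder = Ridge(alpha=1.0).fit(X_train, Y_train)

# [3] Record activity to new test clips never used for fitting.
X_test = rng.normal(size=(n_test, n_features))
Y_test = X_test @ W_true + rng.normal(scale=5.0, size=(n_test, n_voxels))

# [4] Push a large random clip library through the encoder, find the
# clips whose predicted activity best matches the observed activity,
# and average them to form the reconstruction.
library = rng.normal(size=(10_000, n_features))   # stand-in for YouTube clips
predicted = encoder.predict(library)              # (n_library, n_voxels)

def reconstruct(observed, predicted, library, k=100):
    """Average the k library clips whose predicted activity correlates
    best with one observed activity pattern (hypothetical helper)."""
    P = predicted - predicted.mean(axis=1, keepdims=True)
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    o = observed - observed.mean()
    o /= np.linalg.norm(o)
    top = np.argsort(P @ o)[-k:]                  # k best-matching clips
    return library[top].mean(axis=0)

recon = reconstruct(Y_test[0], predicted, library)
print("correlation with the true test stimulus:",
      np.corrcoef(recon, X_test[0])[0, 1])
```

The real study predicted responses from motion-energy features of the video frames and averaged the actual top-100 video clips rather than feature vectors, but the selection logic (predict activity for every library clip, correlate it with the observed activity, keep the best matches, average them) has the same shape.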
BERKELEY — Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach. Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers. As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.
It's a Brain, an Artificial Brain, by Steven G. Anderson
In this clip from the movie Demon Seed, Dr. Harris describes a new type of computer, one that is organic. When asked if the computer, Proteus IV, is alive, however, the question seems ludicrous. Moving beyond the reliability of the mainframe computer and beyond the speed of the supercomputer, Proteus is an unknown entity, a machine that can learn and become more intelligent than its own creator. The use of computer terminals to interact with Proteus, and the notion of speech as a desired interface, are prevalent in this film clip. Also, the immense size and location of Proteus, ten stories underground, point to the historical location of the mainframe, deep within the institution it serves.
Ever since the early days of modern computing in the 1940s, the biological metaphor has been irresistible. The first computers — room-size behemoths — were referred to as “giant brains” or “electronic brains” in headlines and everyday speech. As computers improved and became capable of some tasks familiar to humans, like playing chess, the term used was “artificial intelligence.” DNA, it is said, is the original software. For the most part, the biological metaphor has long been just that — a simplifying analogy rather than a blueprint for how to do computing. Engineering, not biology, guided the pursuit of artificial intelligence. As Frederick Jelinek, a pioneer in speech recognition, put it, “airplanes don’t flap their wings.”
originally posted by: nemonimity
I'm not sure if anybody is grasping the article: there is no AI here; he just used deep-learning algorithms to encode video. It's definitely neat, but saying he taught an AI to watch video is patently ridiculous. All he did was write a program that decodes and re-encodes video.