BERKELEY — Imagine tapping into the mind of a coma patient, or watching one’s own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach. Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers. As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.
“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in the laboratory of UC Berkeley neuroscientist Jack Gallant. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”
The subjects watched two separate sets of Hollywood movie trailers while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.” “We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.
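The per-voxel modeling described above can be sketched in a few lines. This is a hypothetical simplification using synthetic data and plain least squares; the actual study used motion-energy features derived from Gabor filters, which are abstracted away here as a generic feature matrix. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seconds, n_features, n_voxels = 200, 10, 5

# Synthetic stand-ins: per-second movie features and the voxel
# responses they evoke (linear mapping plus measurement noise).
features = rng.normal(size=(n_seconds, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
activity = features @ true_weights + 0.1 * rng.normal(size=(n_seconds, n_voxels))

# Fit one linear encoding model per voxel (all voxels at once
# via a single least-squares solve).
weights, *_ = np.linalg.lstsq(features, activity, rcond=None)

def predict_activity(movie_features, w=weights):
    """Predict the voxel activity a movie's features would evoke."""
    return movie_features @ w
```

Once fit, such a model runs "forward": given any new movie's features, it predicts the brain activity that movie should evoke, which is the key ingredient for the reconstruction step described next.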
The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
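The ranking-and-averaging step can be sketched as follows. This is a hypothetical toy version: candidate clips and their predicted activities are random synthetic data (with one planted true match), similarity is plain Pearson correlation, and names such as `clip_pixels` and `reconstruct` are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clips, n_voxels, n_pixels = 500, 20, 64

# Library of candidate clips: the activity each is predicted to
# evoke, and its pixel content (flattened frames).
predicted = rng.normal(size=(n_clips, n_voxels))
clip_pixels = rng.normal(size=(n_clips, n_pixels))

# Observed activity from the scanner: here, planted to match
# clip 42, plus a little noise.
observed = predicted[42] + 0.05 * rng.normal(size=n_voxels)

def reconstruct(observed, predicted, clip_pixels, top_k=100):
    # Score each candidate by how well its predicted activity
    # correlates with the observed activity.
    scores = np.array([np.corrcoef(observed, p)[0, 1] for p in predicted])
    best = np.argsort(scores)[::-1][:top_k]
    # Average the pixel content of the best-matching clips to get
    # a blurry composite reconstruction.
    return best, clip_pixels[best].mean(axis=0)

best, blurry = reconstruct(observed, predicted, clip_pixels)
```

Averaging many near-matches rather than picking a single winner is what produces the characteristically blurry but continuous reconstructions described above.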
RNM is claimed to have been developed after about 50 years of involuntary neuro-electromagnetic human experimentation. According to its proponents, DNA microchips are expected to be implanted in the human brain within a few years, making it inherently controllable. With RNM, it would supposedly be possible to read and control a person's emotional thought processes, along with the subconscious and dreams. At present, the claim goes, supercomputers around the world are monitoring millions of people simultaneously at a speed of 20 billion bits per second, especially in countries such as the USA, Japan, Israel and many European countries.
RNM is described as a set of programs functioning at different levels, such as the signals intelligence system, which uses electromagnetic frequencies (EMF) to stimulate the brain, and the electronic brain link (EBL). The EMF Brain Stimulation system is characterized as radiation intelligence, meaning it receives information from inadvertently originated electromagnetic waves in the environment; it is not related to radioactivity or nuclear detonation. The recording machines in the signals intelligence system are said to contain electronic equipment that investigates electrical activity in humans from a distance. This computer-generated brain mapping can supposedly monitor all electrical activity in the brain continuously, and the recording aid system decodes individual brain maps for security purposes.
Originally posted by bigfatfurrytexan
reply to post by tetra50
Have you ever wondered if what I call red is what you call red?