posted on Oct, 18 2012 @ 01:20 PM
Not sure if this was posted already, but I am optimistic that Google, with its immense computing power and financial backing, can really pull this one off, or at least make real progress. Within the next few years, the possibilities are endless for visual and voice recognition that works without relying on captions or comments.
For 10 days, non-stop, 1,000 computers, using those 16,000 processors, examined random thumbnail images taken from 10 million different YouTube videos. And because the neural network was so big (it had more than a billion connections), it was able to learn to identify features on its own, without any real human guidance. Through the massive amount of information it absorbed, the network, by recognizing the relationships between data, basically taught itself the concept of a cat. ... But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain's visual cortex.
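The core idea in the quote, a network that learns features from unlabeled data just by trying to reconstruct what it sees, can be sketched with a tiny autoencoder. This is only an illustration, not Google's actual system (theirs was a vastly larger sparse deep network trained on real video frames); the layer sizes, random training data, and learning rate below are all made up for the demo.

```python
import numpy as np

# Toy single-layer autoencoder: learns hidden features from unlabeled
# inputs by minimizing reconstruction error. Illustrative only; Google's
# network was billions of connections, this one is a few thousand.
rng = np.random.default_rng(0)

n_inputs, n_hidden = 64, 16          # e.g. tiny 8x8 "thumbnails"
X = rng.random((500, n_inputs))      # stand-in for unlabeled image patches

W1 = rng.normal(0, 0.1, (n_inputs, n_hidden))   # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_inputs))   # decoder weights
lr = 0.05

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(200):
    H = sigmoid(X @ W1)              # encode: compress input to features
    X_hat = H @ W2                   # decode: reconstruct input from features
    err = X_hat - X
    losses.append(float(np.mean(err ** 2)))
    # Gradient descent on mean squared reconstruction error
    gW2 = H.T @ err / len(X)
    gH = (err @ W2.T) * H * (1 - H)  # backprop through the sigmoid
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

# No labels were ever used, yet reconstruction error falls: the hidden
# layer has learned features that summarize the data on its own.
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

At Google's scale, with real video thumbnails instead of random numbers, some of those learned hidden features ended up responding to recurring patterns like faces and cats, which is the "taught itself the concept of a cat" result the article describes.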