
New computer program aims to teach itself everything about any visual concept

posted on Jun, 15 2014 @ 03:13 AM
In today's digitally driven world, access to information appears limitless.

But when you have something specific in mind that you don't know, like the name of that niche kitchen tool you saw at a friend's house, it can be surprisingly hard to sift through the volume of information online and know how to search for it. Or, the opposite problem can occur -- we can look up anything on the Internet, but how can we be sure we are finding everything about the topic without spending hours in front of the computer?

Computer scientists from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created the first fully automated computer program that teaches everything there is to know about any visual concept. Called Learning Everything about Anything, or LEVAN, the program searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly in great detail.

"It is all about discovering associations between textual and visual data," said Ali Farhadi, a UW assistant professor of computer science and engineering. "The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them."

The program learns which terms are relevant by looking at the content of the images found on the Web and identifying characteristic patterns across them using object recognition algorithms. It's different from online image libraries because it draws upon a rich set of phrases to understand and tag photos by their content and pixel arrangements, not simply by words displayed in captions.

Users can browse the existing library of roughly 175 concepts. Existing concepts range from "airline" to "window," and include "beautiful," "breakfast," "shiny," "cancer," "innovation," "skateboarding," "robot," and the researchers' first-ever input, "horse."

If the concept you're looking for doesn't exist, you can submit any search term and the program will automatically begin generating an exhaustive list of subcategory images that relate to that concept. For example, a search for "dog" brings up the obvious collection of subcategories: Photos of "Chihuahua dog," "black dog," "swimming dog," "scruffy dog," "greyhound dog." But also "dog nose," "dog bowl," "sad dog," "ugliest dog," "hot dog" and even "down dog," as in the yoga pose.

The technique works by searching the text from millions of books written in English and available on Google Books, scouring for every occurrence of the concept in the entire digital library. Then, an algorithm filters out words that aren't visual. For example, with the concept "horse," the algorithm would keep phrases such as "jumping horse," "eating horse" and "barrel horse," but would exclude non-visual phrases such as "my horse" and "last horse."
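The mining step described above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: the tiny corpus stands in for millions of Google Books pages, and a hand-written stop set stands in for LEVAN's learned test of whether a phrase is visual.

```python
from collections import Counter

# Toy corpus standing in for the Google Books text the article describes.
corpus = (
    "the jumping horse cleared the fence while my horse watched "
    "an eating horse and a barrel horse at the last horse show"
)

CONCEPT = "horse"

# Step 1: collect every modifier that appears directly before the concept.
tokens = corpus.split()
modifiers = Counter(
    tokens[i - 1] for i, t in enumerate(tokens) if t == CONCEPT and i > 0
)

# Step 2: filter out non-visual modifiers. This hand-made stop set is an
# assumption; the real system learns which phrases are visual.
NON_VISUAL = {"my", "last", "the", "a", "an"}
visual_phrases = sorted(
    f"{m} {CONCEPT}" for m in modifiers if m not in NON_VISUAL
)

print(visual_phrases)  # ['barrel horse', 'eating horse', 'jumping horse']
```

The same two steps, phrase extraction followed by a visualness filter, match the "jumping horse" vs. "my horse" example in the article.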

Once it has learned which phrases are relevant, the program does an image search on the Web, looking for uniformity in appearance among the photos retrieved. When the program is trained to find relevant images of, say, "jumping horse," it then recognizes all images associated with this phrase.
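The "uniformity in appearance" check can also be sketched: score each phrase by how tightly its retrieved images cluster in feature space, and keep only the consistent ones. The feature vectors and the threshold below are invented for illustration; the real system extracts descriptors from actual web images.

```python
import math

# Toy 2-D feature vectors standing in for real image descriptors.
retrieved = {
    "jumping horse": [[0.90, 0.10], [0.88, 0.12], [0.91, 0.09]],  # consistent look
    "random horse":  [[0.90, 0.10], [0.10, 0.90], [0.50, 0.50]],  # no common look
}

def mean_vec(vecs):
    n = len(vecs)
    return [sum(v[d] for v in vecs) / n for d in range(len(vecs[0]))]

def spread(vecs):
    """Average Euclidean distance of each image to the phrase's centroid."""
    c = mean_vec(vecs)
    return sum(math.dist(v, c) for v in vecs) / len(vecs)

# Keep only phrases whose retrieved images look uniform (low spread).
THRESHOLD = 0.1  # assumed cutoff for this toy example
kept = [p for p, vecs in retrieved.items() if spread(vecs) < THRESHOLD]
print(kept)  # only the visually consistent phrase survives
```

A phrase whose image results look alike ("jumping horse") passes; one whose results are visually all over the place is dropped.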

It is amazing to see how far science has come since I've been alive (getting close to three decades).

posted on Jun, 15 2014 @ 04:46 AM
You mean like Google does?

posted on Jun, 15 2014 @ 04:49 AM
a reply to: Watchfull

I believe it is more advanced than Google.

posted on Jun, 15 2014 @ 05:45 AM

originally posted by: Watchfull
You mean like Google does?

I think Google works by using the image tags that people embed in their images themselves - this would allow the computer to tag the images. So one step closer to A.I. with image recognition and cognition. The closest thing would be the image search option on Google that finds web pages with a certain image, I think.
edit on 15 Jun 2014 @ 05:46 AM by darkbake because: (no reason given)

posted on Jun, 15 2014 @ 05:55 AM
I know, I was being flippant, but it does basically the same thing.

And if the new process becomes a workable solution, Google will buy it, that is, if they haven't already patented the concept.

Although Google has not elaborated on why it has bought the firm – and on past form it probably won't – two US patents filed by DeepMind Technologies on 16 January offer some clues.

Both patents cover intelligent ways to improve the process of "reverse image search", the notion of uploading a picture to a search engine so that it can find similar ones. This is already possible on Google's image search page but it often retrieves amusingly irrelevant images. To improve this, in US patent filing 2014/0019484, DeepMind engineers Benjamin Coppin and Mustafa Suleyman reveal a different trick: allow the user to input two images, let the algorithm find similarities between them, and then search for those instead.
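The two-image trick from the patent filing can be sketched with sets: keep only the features the two query images share, then rank the database by overlap with that intersection. The tag sets and image names below are invented for illustration; a real system would compare learned embeddings, not tags.

```python
# Features of the two query images (invented for this sketch).
query_a = {"dog", "beach", "ball", "sunny"}
query_b = {"dog", "park", "ball", "cloudy"}

# Step 1: keep only what the two query images have in common.
shared = query_a & query_b  # {"dog", "ball"}

# Step 2: rank database images by overlap with the shared features.
database = {
    "img1": {"dog", "ball", "grass"},
    "img2": {"beach", "sunny", "sea"},
    "img3": {"cat", "ball"},
}
ranked = sorted(database, key=lambda k: len(database[k] & shared), reverse=True)
print(ranked[0])  # img1
```

Searching on the intersection rather than either image alone is what lets the user say "find things like *this part* of both pictures" instead of getting matches on the incidental background of one of them.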

A second patent filed by the same pair enables the user to home in on a small area of two pictures to improve image search still further.

New Scientist

posted on Jun, 15 2014 @ 06:02 AM
And here's a link to the LEVAN webpage, the program mentioned in the article:

I've looked at it, and their database is not very large yet. I imagine it will grow as more people use it. It's interesting, but I don't think it is big enough yet.

Edit: If you click on "stirring" on the left, it gives a good example of what this is capable of. It is a search engine that relates the intertwining of concepts. If Google or anyone else gets this, it will make searching more specific. Cool find!
edit on 6/15/2014 by InFriNiTee because: (no reason given)

posted on Jun, 16 2014 @ 11:07 PM
Ray Kurzweil is now the new director for future technology at Google... oh yes, it's happening.

Google is an exponentially growing beast. I'm still undecided on whether they pose a threat or not, and really I don't care.

I'm confident in my ability to adapt to any future; if anything, I find this current paradigm boring and unchallenging. So bring on the AI and let the games begin.

Google is investing billions in R&D as well as buying companies that will help it do what it really wants to do: create an AI god. And they just might pull it off.

Are the robots about to rise? Google's new director of engineering thinks so…

Ray Kurzweil popularised the Terminator-like moment he called the 'singularity', when artificial intelligence overtakes human thinking. But now the man who hopes to be immortal is involved in the very same quest – on behalf of the tech behemoth

edit on 16-6-2014 by TiM3LoRd because: (no reason given)
