The most extensive claims yet came this spring in a report written for the European Parliament. The report says that the U.S.
National Security Agency, through an electronic surveillance system called Echelon, routinely tracks telephone, fax, and e-mail transmissions from around the world and passes on useful corporate intelligence to American companies.
Among the allegations: that the NSA fed information to Boeing and McDonnell Douglas enabling the companies to beat out European Airbus Industrie for a $6 billion contract; and that Raytheon received information that helped it win a $1.3 billion contract to provide radar to Brazil, edging out the French company Thomson-CSF. These claims follow previous allegations that the NSA supplied U.S. automakers with information that helped improve their competitiveness with the Japanese (see "Company Spies," May/June 1994).
The idea appeared in Technology Review, citing Peter Norvig, director of research at Google, who says these ideas will eventually show up in real Google products, sooner rather than later.
The idea is to use the existing PC microphone to listen to whatever is heard in the background, whether it's music, your phone ringing, or the TV turned down. The PC then identifies the audio using fingerprinting and shows you relevant content, whether that's adverts, search results, or a chat room on the subject.
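The pipeline the article describes (capture background audio, reduce it to a compact fingerprint, match it against a database of known content) can be sketched roughly as below. This is a toy illustration, not Google's actual system: the frame size, the dominant-frequency fingerprint, and the overlap-counting match score are all simplifying assumptions standing in for real techniques like spectrogram peak hashing.

```python
import cmath
import math
from collections import Counter

FRAME = 64  # samples per analysis frame (toy value, not a real parameter)

def dominant_bin(frame):
    """Index of the strongest DFT bin in one frame (naive O(n^2) DFT)."""
    n = len(frame)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip the DC component
        acc = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        if abs(acc) > best_mag:
            best_bin, best_mag = k, abs(acc)
    return best_bin

def fingerprint(samples):
    """Reduce a signal to a tuple of per-frame dominant-frequency bins."""
    return tuple(dominant_bin(samples[i:i + FRAME])
                 for i in range(0, len(samples) - FRAME + 1, FRAME))

def identify(samples, database):
    """Return the database label sharing the most fingerprint frames."""
    probe = Counter(fingerprint(samples))
    scores = {label: sum((probe & Counter(fp)).values())
              for label, fp in database.items()}
    return max(scores, key=scores.get)

def tone(bin_k, frames=4):
    """Synthetic 'song': a pure tone landing in DFT bin `bin_k`."""
    return [math.sin(2 * math.pi * bin_k * t / FRAME)
            for t in range(FRAME * frames)]

# Hypothetical database of two known audio clips.
DATABASE = {"jingle_a": fingerprint(tone(5)),
            "jingle_b": fingerprint(tone(12))}
```

A real fingerprinting system would hash pairs of spectral peaks to survive noise and time offsets; the matching step against a server-side database is the part that makes the privacy concern in the article concrete.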
And, of course, we wouldn't put it past Google to store that information away, along with the record it keeps of the search terms you've used and the web pages you've visited, to help it create a personalised profile that feeds you just the right kind of adverts/content. And given that it is trying to develop alternative approaches to TV advertising, it could go the extra step and send "content relevant" advertising to your TV as well. www.theregister.co.uk...
"over the last years, estimating a person’s activities has gained increased interest in the artificial intelligence, robotics, and ubiquitous computing communities."
"the concept of a personal map, which is customized based on an individual’s behavior. A personal map includes personally significant places, such as home, a workplace, shopping centers, and meeting places and personally significant routes (i.e., the paths and transportation modes, such as foot, car, or bus, that the person usually uses to travel from place to place). In contrast with general maps, a personal map is customized and primarily useful for a given person. Because of the customization, it is well suited for recognizing an individual’s behavior and offering detailed personalized help."
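The "personal map" the quoted paper describes is essentially a small data structure learned from observed trips: significant places plus the routes and transport modes usually used between them. A minimal sketch, where the class name, fields, and methods are my own illustrative inventions rather than the paper's implementation:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PersonalMap:
    """Toy personal map: significant places, plus counts of the routes
    (origin, destination, transport mode) observed between them."""
    places: set = field(default_factory=set)
    routes: Counter = field(default_factory=Counter)  # (src, dst, mode) -> count

    def observe_trip(self, src, dst, mode):
        """Learn from one observed trip, e.g. from GPS traces."""
        self.places.update((src, dst))
        self.routes[(src, dst, mode)] += 1

    def usual_mode(self, src, dst):
        """Most frequently observed transport mode between two places,
        or None if this pair has never been seen."""
        candidates = {m: c for (s, d, m), c in self.routes.items()
                      if (s, d) == (src, dst)}
        return max(candidates, key=candidates.get) if candidates else None

# Example: after a few observed commutes, the map can recognize the
# person's habitual behavior and flag deviations from it.
pm = PersonalMap()
for _ in range(3):
    pm.observe_trip("home", "work", "bus")
pm.observe_trip("home", "work", "car")
```

Because the structure is customized to one individual, an unusual trip (a new place, or a familiar route by an unexpected mode) stands out immediately, which is exactly what makes it useful both for "detailed personalized help" and for surveillance.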
Neven Vision comes to Google with deep technology and expertise around automatically extracting information from a photo. It could be as simple as detecting whether or not a photo contains a person, or, one day, as complex as recognizing people, places, and objects. This technology just may make it a lot easier for you to organize and find the photos you care about. We don’t have any specific features to show off today, but we’re looking forward to having more to share with you soon.
"An intelligent thinking machine would also need ears, and ears they are giving it. Make a call to 1-800-GOOG411 and experience their speech recognition algorithms for yourself. No surprise that the service is free, because the more people use it the more you help them reach their goal of omniscience."
If you own an iPhone, you can now be part of one of the most ambitious speech-recognition experiments ever launched. On Monday, Google announced that it had added voice search to its iPhone mobile application, allowing people to speak search terms into their phones and view the results on the screen.
Fortunately, Google also has a huge amount of data on how people use search, and it was able to use that to train its algorithms. If the system has trouble interpreting one word in a query, for instance, it can fall back on data about which terms are frequently grouped together.
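The fallback described here is a language-model rescoring step: when the acoustic model is unsure about one word, pick the candidate sequence whose adjacent word pairs appear most often in past queries. A minimal sketch, where the bigram counts and the per-position candidate format are invented for illustration:

```python
from collections import Counter
from itertools import product

# Hypothetical bigram counts mined from query logs (invented numbers).
BIGRAMS = Counter({("new", "york"): 900, ("new", "fork"): 2,
                   ("york", "pizza"): 150, ("fork", "pizza"): 1})

def rescore(hypotheses):
    """Pick the word sequence whose adjacent pairs are most common in logs.

    `hypotheses` is a list of per-position candidate lists from the
    acoustic model, e.g. [["new"], ["york", "fork"], ["pizza"]] when the
    middle word is acoustically ambiguous.
    """
    def score(seq):
        return sum(BIGRAMS[pair] for pair in zip(seq, seq[1:]))
    return max(product(*hypotheses), key=score)
```

Here "york" wins over the acoustically similar "fork" purely because "new york" and "york pizza" dominate the logged co-occurrence counts; a production recognizer would combine acoustic and language-model scores probabilistically rather than by raw counts.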
Google also had a useful set of data correlating speech samples with written words, culled from its free directory service, Goog411. People call the service and say the name of a city and state, and then say the name of a business or category. According to Mike Cohen, a Google research scientist, voice samples from this service were the main source of acoustic data for training the system.
But the data that Google used to build the system pales in comparison to the data that it now has the chance to collect. "The nice thing about this application is that Google will collect all this speech data," says Jim Glass, a principal research scientist at MIT. "And by getting all this data, they will improve their recognizer even more."