Originally posted by tridentblue
Originally posted by SkepticOverlord
reply to post by tridentblue
In terms of applying divergence, yes. But transinformation and entropy don't apply to the highly-structured metadata used in PRISM. Such data has no inherent entropy, and instead, becomes more normalized the more we have.
I have my own rule of thumb, but I'd like to assume that most avid/savvy users with an IT background know about this and wouldn't click on recommendations regardless. But you never know.
So the more options, the more interactive networks become: 1) the better the user experience, and 2) the more information the user reveals about themselves in using it. That's simply how it works: the more choices a user is allowed, the more information we share about ourselves in making those choices. And the modern Internet gives us a lot of choices, as does any good site.
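The information-theoretic intuition here can be made concrete: a choice among N equally likely options can reveal up to log2(N) bits, and a skewed (predictable) choice reveals less. A minimal sketch using Shannon entropy (the function name and example distributions are illustrative only):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A choice among 8 equally likely options reveals log2(8) = 3 bits.
uniform = [1 / 8] * 8
print(shannon_entropy(uniform))  # 3.0

# A highly predictable user reveals far less per choice.
skewed = [0.9, 0.05, 0.03, 0.02]
print(shannon_entropy(skewed))  # well under 1 bit
```

The point matches the post: each additional option a site offers raises the ceiling on how many bits each user decision can leak.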
Behind the Curtain, politicians are the targets, NOT TERRORISTS, and the agenda is to blackmail them to direct the political changes the UNELECTED bureaucracy demands. This is the real object of collecting absolutely everything. People on Capitol Hill have told me personally that they suspected their emails were being sucked up, and that has been going on for the last two years. European politicians are now starting to wake up to the fact that they too are targets and that blackmail is the game.
They realize that New York Attorney General Eliot Spitzer was targeted when he tried to go after the Wall Street investment bankers everyone today calls the UNTOUCHABLES. He got rid of Hank Greenberg at AIG for Wall Street, not realizing he was doing them a service; turning on them afterward was his serious mistake. They suddenly discovered checks to a hooker, and his hotel in Washington was bugged when he met with her. Since then, no one has dared to investigate Wall Street.
2. STREAMING OF SOCIAL DATA
This session covered challenges in building scalable infrastructure for managing social media streams and in extracting valuable information from them, such as emergent topics. Sebastian Michel considered the problem of emergent-topic discovery by continuously monitoring correlations between pairs of tags (or social annotations) to identify major shifts in correlations of previously uncorrelated tags in Twitter streams [1, 2]. Such trends can be used as triggers for higher-level information retrieval tasks, expressed through queries across various information sources. Mila Hardt gave two talks on aspects of managing streams at Twitter, in particular on infrastructure to enable processing of 400 million tweets a day and on real-time top queries. Mila explained how stream-processing needs at Twitter eventually led to the development of the open-source projects Storm and Trident for large-scale, high-performance distributed stream processing. She also pointed out current challenges at Twitter in providing support for fault tolerance, online machine learning by trading off exploration and exploitation, and approximating aggregates (such as counts). An interesting exercise involving the audience was thinking through how topic ranking is done at Twitter. Daniel Preotiuc-Pietro introduced the Trendminer system for real-time analysis of social media streams. Trendminer's scalability relies on the MapReduce framework for distributed computing. Daniel also presented how to build regression models of trends in streaming data using Trendminer.
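The emergent-topic idea described above, monitoring pairwise tag correlations across windows and flagging sudden jumps, can be sketched roughly as follows. This is a simplified illustration, not Trendminer's or Twitter's actual algorithm: the Jaccard-style score, window representation, and the `jump` threshold are all assumptions made for the example.

```python
from collections import Counter
from itertools import combinations

def pair_correlations(tweets):
    """Jaccard-style co-occurrence score for each tag pair in a window.

    Each tweet is given as a set of hashtags; the score for a pair is
    |co-occurrences| / |tweets containing either tag|.
    """
    tag_counts = Counter()
    pair_counts = Counter()
    for tags in tweets:
        tag_counts.update(tags)
        pair_counts.update(combinations(sorted(tags), 2))
    return {
        (a, b): c / (tag_counts[a] + tag_counts[b] - c)
        for (a, b), c in pair_counts.items()
    }

def emerging_pairs(prev_window, curr_window, jump=0.5):
    """Tag pairs whose correlation rose by more than `jump` between windows —
    previously uncorrelated tags suddenly co-occurring suggest an emergent topic."""
    prev = pair_correlations(prev_window)
    curr = pair_correlations(curr_window)
    return [pair for pair, score in curr.items()
            if score - prev.get(pair, 0.0) > jump]

# Example: "a" and "b" never co-occurred before, now they always do.
print(emerging_pairs([{"a"}, {"b"}], [{"a", "b"}, {"a", "b"}, {"c"}]))
# [('a', 'b')]
```

In a production stream-processing setting, the per-window counters would live in a distributed topology (e.g. Storm/Trident bolts) and the exact counts would typically be replaced by approximate sketches, which is precisely the "approximating aggregates" challenge mentioned in the session.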
RainStor is a software company that provides a database designed to manage and analyze big data for large enterprises. The company's origin traces back to a special project conducted by the United Kingdom's Ministry of Defence with the purpose of storing volumes of data from years of field operations for ongoing analysis and training purposes.