What is big data?
Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This data comes from everywhere: sensors used to gather climate information, posts to social media sites, digital pictures and videos, purchase transaction records, and cell phone GPS signals, to name a few. This data is big data.
When you browse online for a new pair of shoes, pick a movie to stream on Netflix or apply for a car loan, an algorithm likely has a say in the outcome.
These complex mathematical formulas are playing a growing role in all walks of life: detecting skin cancers, suggesting new Facebook friends, deciding who gets a job, how police resources are deployed, who gets insurance and at what cost, or who lands on a "no fly" list.
Algorithms are being used -- experimentally -- to write news articles from raw data, while Donald Trump's presidential campaign was helped by behavioral marketers who used an algorithm to locate the highest concentrations of "persuadable voters."
But while such automated tools can inject a measure of objectivity into otherwise subjective decisions, fears are rising over the lack of transparency they can entail, and pressure is growing to apply standards of ethics or "accountability."
Algorithms can predict a person’s intelligence based on social network photos as accurately as humans can and without faulty stereotyping, reveals a new study from Cambridge Judge Business School.
The findings: machines using factors such as a photo’s colour, composition and texture predict a person’s “measured intelligence” as accurately as (or marginally better than) humans do, while humans rely on inaccurate cues such as eyeglasses when judging “perceived intelligence”.
This has important implications for hiring and other practices in which profile photos are routinely reviewed.
“The fact that machines can accurately judge intelligence poses an obvious privacy risk, as social media profile photos are normally public by default,” says research paper co-author Dr David Stillwell, deputy director of the Psychometrics Centre at the Judge.
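The study is described here only at a high level (colour, composition and texture as inputs to a prediction of measured intelligence). As a rough illustration of that general approach, and not the paper's actual method, here is a minimal sketch that regresses a score onto simple image statistics; the feature choices, model and synthetic data are all assumptions:

```python
# Hypothetical sketch: regress a score onto simple photo statistics.
# The real study's features and model are not given here; the cues below
# (mean colour, brightness, pixel variance as a crude texture proxy) are assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def photo_features(img):
    """img: H x W x 3 array with values in [0, 1]; returns a small feature vector."""
    mean_rgb = img.mean(axis=(0, 1))   # overall colour
    brightness = img.mean()            # crude composition/exposure proxy
    texture = img.std()                # crude texture proxy
    return np.concatenate([mean_rgb, [brightness, texture]])

# Synthetic stand-in data: 200 random "photos" plus a noisy target score.
images = rng.random((200, 64, 64, 3))
X = np.array([photo_features(im) for im in images])
y = X @ rng.normal(size=X.shape[1]) + rng.normal(scale=0.1, size=len(X))

model = Ridge(alpha=1.0)
print(cross_val_score(model, X, y, cv=5).mean())   # R^2 of the fitted predictor
```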
I work in computational quantum condensed-matter physics: the study of matter, materials, and artificial quantum systems. Complex problems are our thing.
Researchers in our field are working on hyper-powerful batteries, perfectly efficient power transmission, and ultra-strong materials, all important stuff for making the future a better place. To create these, condensed-matter physics deals with the most complex concept in nature: the quantum wavefunction of a many-particle system. Think of the most complex thing you know, and this blows it out of the water: a computer that models the electron wavefunction of a nanometer-size chunk of dust would require a hard drive containing more magnetic bits than there are atoms in the universe.
One small breakthrough in condensed-matter physics could change everything. Complexity, and the challenge of tackling complex problems with existing technology, is what keeps me up at night. The most complex problem is understanding the wavefunction of a many-particle quantum system with sufficient accuracy to design new quantum materials and devices. When DeepMind's AlphaGo beat Lee Sedol, I began to wonder: could machine learning help us solve the most complex problem in physics? Maybe the most complex problem in physics can be solved by machines with brains.
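To make that scaling concrete, here is a back-of-the-envelope calculation (my own illustration, not from the quoted piece): a system of N two-level particles needs 2^N complex amplitudes to describe its full wavefunction, so storage explodes long before N reaches the particle count of a dust grain.

```python
# Back-of-the-envelope: memory needed to store a full many-body wavefunction.
# Assumption: each particle is a two-level system, so the state vector holds
# 2**N complex amplitudes at 16 bytes each (two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def wavefunction_bytes(n_particles):
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

for n in (30, 50, 100, 300):
    print(f"{n:4d} particles -> {wavefunction_bytes(n):.3e} bytes")

# There are roughly 10**80 atoms in the observable universe; around 270
# two-level particles already need more amplitudes than that, long before
# reaching the particle count of a nanometre-scale dust grain.
```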
originally posted by: swanne
a reply to: Aazadan
I agree. Does the OP know that "algorithm" doesn't mean artificial intelligence, but simply maths?
The future of artificial intelligence begins with a game of Space Invaders.
This player, it should be mentioned, is not human, but an algorithm on a graphics processing unit programmed by a company called DeepMind. Instructed simply to maximise the score and fed only the data stream of 30,000 pixels per frame, the algorithm -- known as a deep Q-network -- is then given a new challenge: an unfamiliar Pong-like game called Breakout, in which it needs to hit a ball through a rainbow-coloured brick wall. "After 30 minutes and 100 games, it's pretty terrible, but it's learning that it should move the bat towards the ball," explains DeepMind's cofounder and chief executive, a 38-year-old artificial-intelligence researcher named Demis Hassabis. "Here it is after an hour, quantitatively better but still not brilliant. But two hours in, it's more or less mastered the game, even when the ball's very fast. After four hours, it came up with an optimal strategy -- to dig a tunnel round the side of the wall, and send the ball round the back in a superhuman accurate way. The designers of the system didn't know that strategy."
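To give a rough sense of what that kind of agent is doing, here is a minimal tabular Q-learning sketch. The real system is a deep Q-network, a convolutional network over raw pixels with experience replay, so the toy environment, state coding and hyperparameters below are purely illustrative assumptions.

```python
# Minimal Q-learning sketch: an agent that only sees a reward signal and
# learns which action to take in each state. A real deep Q-network replaces
# the table below with a neural network over raw pixels.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 2          # toy "position" states, actions: move left/right
q = np.zeros((N_STATES, N_ACTIONS))  # Q-table: expected future score per (state, action)
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    """Toy environment: reward +1 for moving toward the middle state."""
    target = N_STATES // 2
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if abs(nxt - target) < abs(state - target) else 0.0
    return nxt, reward

state = 0
for _ in range(20_000):
    # epsilon-greedy: mostly exploit the current estimate, sometimes explore
    action = rng.integers(N_ACTIONS) if rng.random() < epsilon else int(q[state].argmax())
    nxt, reward = step(state, action)
    # Q-learning update: nudge the estimate toward reward + discounted best next value
    q[state, action] += alpha * (reward + gamma * q[nxt].max() - q[state, action])
    state = nxt

print(q.argmax(axis=1))  # learned policy: which way to move in each state
```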
originally posted by: loam
a reply to: neoholographic
I know quite a bit about this space. 100% dead on.
The problem is that many have no clue how far along we actually are toward this extreme peril. Our entire global economy is already mostly there... everything from your credit and insurance scores to what is marketed to you to how equities are bought and sold.
There also seems to be no stopping any of it.
People laugh at this, but they really shouldn't.
originally posted by: swanne
a reply to: neoholographic
I program AI as a hobby, mate.
I know what I'm talking about.
Yes, AI depends on algorithms. But that still doesn't make algorithms "artificial intelligences", any more than writing an equation on a blackboard makes the board self-aware.
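One way to see the poster's point (my own illustration, not theirs): both snippets below are algorithms, but only the second adapts its own parameters in response to data, which is the property people usually mean by "learning".

```python
# Both of these are algorithms. Only the second one learns.

def gcd(a, b):
    """Euclid's algorithm: fixed rules, same output for the same input, forever."""
    while b:
        a, b = b, a % b
    return a

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """A tiny learning algorithm: its parameters change in response to data."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

print(gcd(48, 36))                                 # always 12
print(train_perceptron([[0, 0], [1, 1]], [0, 1]))  # weights depend on the data seen
```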
originally posted by: swanne
a reply to: neoholographic
"Algorithm" isn't a substance, silly. AIs can't "control algorithms", any more than you can steal someone's Pi constant.
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal.[1] Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
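That "intelligent agent" definition maps naturally onto a small perceive/act loop. The sketch below is a hedged illustration only; the Environment and GreedyAgent names, the toy goal and the greedy action choice are my own assumptions rather than anything from the quoted definition.

```python
# Sketch of the "intelligent agent" abstraction: perceive the environment,
# then take the action expected to maximise success at some goal.
from dataclasses import dataclass

@dataclass
class Environment:
    state: int = 0
    goal: int = 10

    def percept(self):
        return self.state

    def apply(self, action):
        self.state += action
        return -abs(self.state - self.goal)   # success signal: closeness to the goal

class GreedyAgent:
    actions = (-1, 0, 1)

    def act(self, percept, goal=10):
        # choose the action expected to bring the perceived state closest to the goal
        return max(self.actions, key=lambda a: -abs(percept + a - goal))

env, agent = Environment(), GreedyAgent()
for _ in range(15):
    reward = env.apply(agent.act(env.percept()))
print(env.state, reward)   # the agent settles on the goal state (reward 0)
```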