Data is growing at a faster rate than ever before. By 2020, every person online will create roughly 1.7 megabytes of new data every second of every day, and that’s on top of the 44 zettabytes (or 44 trillion gigabytes) of data that will exist in the digital universe by that time.
Given this ever-increasing amount of available data -- and the fact that most of it is generated and consumed in the moment -- the benefits of big data will be lost if the information isn't processed quickly enough.
The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.
“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.
That concern prompted OpenAI to publish a research paper on its results, but not release the full model or the 8 million web pages it used to train the system.
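The core idea the article describes -- learn the statistical patterns of language from a corpus, then sample new text from those patterns -- can be illustrated at a toy scale with a word-level Markov chain. This is a hedged sketch only: GPT-2 is a large neural network, not a bigram model, and the corpus string here is invented for the example.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Record, for each word, the words that follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a chain of words, each drawn from the successors of the last."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor in training
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny invented corpus, purely for demonstration.
corpus = ("the model writes text the model reads text "
          "the reader reads the text the model writes")
model = train_bigram(corpus)
sample = generate(model, "the")
print(sample)
```

Scaled up by many orders of magnitude (and with a neural network instead of a lookup table), the same learn-then-sample loop is what lets a system produce text that "looks pretty darn real."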
Prompted to write a 1-star book review with the summary “I hate Tom Simonite’s book,” the model wrote:
“Tom Simonite does not keep it simple. He doesn't give you enough info on a subject to make the reading of the book enjoyable. He has over 400 pages of footnotes, so that is a way of getting your work for a subject out of the way. And of course, you never really feel like the author has a clear vision of his subject. He does not give you enough details on how a group of people is going to come together to solve a problem or come about a solution to a problem. This book was so depressing to me, I can't even talk about it without feeling like I want to punch the kindle.”
The ability of AI to generate fake visuals is not yet mainstream knowledge, but a new website — ThisPersonDoesNotExist.com — offers a quick and persuasive education.
The site is the creation of Philip Wang, a software engineer at Uber, and uses research released last year by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples.
“Each time you refresh the site, the network will generate a new facial image from scratch,” wrote Wang in a Facebook post. He added in a statement to Motherboard: “Most people do not understand how good AIs will be at synthesizing images in the future.”
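The adversarial training described above -- a generator learning to fabricate examples while a discriminator learns to tell fake from real -- can be sketched in one dimension with NumPy. This is a minimal toy, not Nvidia's StyleGAN: the "data" is an invented Gaussian, both networks are single linear units, and the gradients are written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    """The real data distribution the generator must imitate: N(4, 1)."""
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: G(z) = a*z + b
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)

lr, batch = 0.01, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    x_real = sample_real(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - s_real) * x_real + s_fake * x_fake)
    c -= lr * np.mean(-(1 - s_real) + s_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    s_fake = sigmoid(w * x_fake + c)
    a -= lr * np.mean(-(1 - s_fake) * w * z)
    b -= lr * np.mean(-(1 - s_fake) * w)

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"generated mean ~= {samples.mean():.2f} (real data mean is 4.0)")
```

Replace the 1-D Gaussian with millions of photographs and the linear units with deep convolutional networks, and this same tug-of-war is what produces a new fake portrait on every refresh.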
You can imagine a world run by news stories made up by AI.
originally posted by: kelbtalfenek
a reply to: neoholographic
This is actually a subject worth watching and exploring in the greatest depth.
originally posted by: lakenheath24
That is potentially the most frightening prospect of anything I have read or seen. There will be no reality.
It really makes me miss fishing on the bank in the sunshine as a kid.
originally posted by: caterpillage
Pretty frightening stuff.
It could be amazing, and change our world and lives for the better, but of course humanity will never stand for that.
It will be used to control and enslave.
originally posted by: ignorant_ape
a reply to: neoholographic
just to point out - the "book review" = a generic word salad that could apply to almost any book
However, it didn’t take much to get the system to unspool paragraphs that resembled the work of disinformation artists. When WIRED typed in the phrase “Hillary Clinton and George Soros,” OpenAI’s system wrote the following:
“...are a perfect match, and their agenda appears to be to create a political movement where Soros and his political machine and Clinton are two of the only major players. This is the first time Soros and Clinton have been caught on tape directly colluding in promoting the same false narrative. One of the key revelations in the leaked audio was Clinton's admission to a Russian banker that she knew about the Uranium One deal before it was approved by Congress. Clinton was shown sharing the same talking points that were originally drafted by a Fusion GPS contractor hired by an anti-Trump Republican donor. The leaked audio is the clearest evidence yet that the Clinton campaign and the Hillary Foundation colluded with Fusion GPS to manufacture propaganda against President Trump.”