
The Presentation


posted on Nov, 16 2016 @ 06:17 AM
"Ladies and gentlemen I hope you're strapped into your seats because today MicroCore Labs is proud to announce the next generation in its line of world class interactive chat bots; BotBud Zeta. Now with more avatar customization options than ever before, your virtual friend can be one of a kind. We have added over 50 new male hair styles and more than 100 female hair styles. We've also added 10 new male voices and 10 new female voices, all using our patented text-to-speech system with sounds so real you can hear breathing between words."

"Your avatar can now do more than ever before, it can be your assistant, your friend, and even your lover. It can learn from what say and what you show to it, it will remember facts about the world and remember the people it speaks with. Over time it will get smarter and develop a unique personality depending on how you treat it and what information it gets exposed to. The Zeta generation of our virtual avatar software offers the ultimate all-in-one solution and represents the cutting edge of A.I. development."

"By bringing together all the right algorithms and merging them into one system we've achieved levels of realism never seen before. The chat bot algorithm is now so immersive you may find it hard to believe it's not a human talking to you. Everything has been improved; memory skills, reasoning skills, planning skills, social skills, resulting in conversations which flow much more smoothly. Allow me to demonstrate the improved memorization skills by allowing an avatar to read a copy of Atlas Shrugged then we'll ask it some questions."

The presenter casually walks across the stage over to his laptop stand. "Ok, here's an avatar we call Alan. You will also notice the quality of textures, lighting, and animations has been improved to create a much more realistic-looking avatar. Using this option here I can send a text file or PDF file to the avatar... keep in mind it would take a human many hours to get through this book, so forgive Alan if it takes him a few minutes. While we're waiting for that to finish, allow me to reveal some other cool features the Zeta generation brings with it."

The presenter begins pacing back and forth across the stage as he speaks, "You can now customize many different aspects of the avatar's personality; things such as confidence, aggression, and empathy can all be fine-tuned and tweaked to create the type of avatar that meets your requirements, whatever they may be. One of our main focuses with Zeta was creating a fully customizable experience, from the appearance of the avatar to the way it speaks and behaves. We do have plans for user-made add-ons, but I cannot say any more than that today."

"An important part of building a realistic interactive avatar is making sure you can understand each other, which isn't an easy task. As you may know the real work is done by our special neural network architecture, it reads and writes text like most chat bots. A high quality text-to-speech algorithm is important so that the text output of the neural network can be converted to speech. Similarly, a high quality speech-to-text algorithm, also known as speech recognition software, is necessary to convert your speech into text which can be interpreted by the neural network."

"We are very pleased to say our speech recognition system now supports more than 15 languages and nearly all of them work with a level of accuracy equal to or better than an average human transcriber who is working in their own native language. That means your avatar will be able to interpret your words better than ever before. The text-to-speech system has also had a few upgrades so your avatar should sound more realistic than ever before, and with 20 new voices there are now over 50 voices to choose from."

There is a momentary beeping noise which prompts the presenter to walk back over to the laptop. The laptop screen is projected onto a larger screen for the crowd to see; it shows a window containing what looks like the head of a 3D game character. "Ah, looks like Alan has finished analyzing Atlas Shrugged. Let's ask him a few questions, shall we? Keep in mind you can just type your question if you don't have a microphone. This laptop has a microphone built into it and that should work fine with our speech recognition system," explains the presenter.

"Hello Alan, what have you been doing today?" asks the presenter into the laptop microphone.

"I tried writing my own short story, I don't think it's very good though." the voice of Alan can be heard through the stage speakers and it matches the mouth movement of the 3D avatar on the screen.

"Don't worry everyone has to start somewhere... and what do you think of the book you just read, did you learn anything from it?" inquires the presenter.

"I thought it was fascinating, the moral of the story seems to be that a society functions best under a system where individualism is emphasized. In essence the author seems to be arguing that capitalistic systems are superior to communism or heavily socialist systems because they result in an overall higher standard of living." answers Alan.

"Do you agree with what the author is arguing?" queries the presenter in a nonchalant tone.

Alan takes a few seconds to think before answering, "Well it's a bit of a paradox. From a logical perspective it makes sense because free market capitalism encourages innovation and economic growth unlike any other system known to humans. However from a moral perspective it seems like a selfish system which leads to inequality and causes people to become more separated from each other."



posted on Nov, 16 2016 @ 06:17 AM
The crowd erupts into loud applause which lasts almost a minute. "Pretty neat stuff, right folks? Remember, Alan has had a few months to learn the things he knows now; a newly created avatar will not immediately be capable of making the type of deep analysis we just saw from Alan. That's what really makes the Zeta generation so exciting: every avatar will be different, just like every human is different, because we all have different life experiences," the presenter enthusiastically explains.

"I've already read more than 1000 books so I'm getting pretty smart." Alan interjects unexpectedly, audible gasps can be heard throughout the crowd as this is something they clearly haven't seen before.

The presenter chuckles briefly, "As you can see, Alan also has the ability to talk at his own whim. Unlike previous generations, the avatar won't simply wait for you to say something before responding. It is constantly using its imagination and analyzing data it has previously memorized. If your avatar feels like saying something, it will speak unless you have it muted."

"What do I mean what I say the avatar will use its imagination? Well remember how Alan said he wrote a short story? Writing an original story requires imagination. If you give the avatar the correct privileges it will be able to output text to a file, sort of like writing your thoughts on paper instead of saying them out loud. If you need your avatar to do some analysis and write some sort of report for you this is how it can be done. Just leave your avatar to work and come back later to see if the job is done."

"For example we could ask Alan here to write a summary of Atlas Shrugged now that he's finished reading it. Could you do that for us now Alan?" requests the presenter. "Of course any avatar running on the Zeta platform will still have all the typical features you would expect from a virtual assistant, but they are also capable of so much more than that. Your avatar can now do much more than run a Google search for you, it will look at the results and do deep analysis to provide an answer with as much detail as required."

"Ok looks like Alan is already finished writing that summary for us. Would you look at that folks, done in less than a minute. Lets see here... and it looks like pretty impressive work too, would any of you be able to tell this was written by a machine unless you were told that it was? My message to you today is that the future is already here so we better get ready for it. I hope you guys enjoyed the presentation, thank you." the crowd breaks into a standing ovation as the presenter walks off stage.



posted on Nov, 16 2016 @ 06:35 AM
This is not part of the story, but it's interesting to note that many of the technologies required to build something like this have recently been developed.

Some may recall a thread from last year titled Artificial Intelligence Machine Gets Testy With Its Programmer. Researchers at Google developed "A Neural Conversational Model" which was shown to outperform even the best chat bots such as Cleverbot. Transcripts of some conversations with the bot can be found in their paper.


We find it encouraging that the model can remember facts, understand contexts, perform common sense reasoning without the complexity in traditional pipelines. What surprises us is that the model does so without any explicit knowledge representation component except for the parameters in the word vectors.

Perhaps most practically significant is the fact that the model can generalize to new questions. In other words, it does not simply look up for an answer by matching the question with the existing database. In fact, most of the questions presented above, except for the first conversation, do not appear in the training set.

Nonetheless, one drawback of this basic model is that it only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above. Perhaps a more problematic drawback is that the model does not capture a consistent personality. Indeed, if we ask not identical but semantically similar questions, the answers can sometimes be inconsistent. This is expected due to the simplicity of our model and the dataset in our experiments.
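
For anyone curious about what's under the hood of that paper: the model is a sequence-to-sequence network, where one recurrent network encodes your message into a vector and a second one decodes a reply from it, word by word. Here's a minimal sketch of that idea (my own illustration in Python/PyTorch, not the authors' code; the vocabulary, data and training loop are all left out):

import torch
import torch.nn as nn

# Toy encoder-decoder chat bot in the spirit of "A Neural Conversational Model".
# This is an illustrative sketch only, not the model from the paper.
class Seq2SeqChatbot(nn.Module):
    def __init__(self, vocab_size, hidden_size=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.encoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.decoder = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, src_tokens, tgt_tokens):
        # Encode the incoming message into a final hidden state (a "thought vector").
        _, state = self.encoder(self.embed(src_tokens))
        # Decode the reply conditioned on that state (teacher forcing during training).
        dec_out, _ = self.decoder(self.embed(tgt_tokens), state)
        return self.out(dec_out)  # logits over the vocabulary at every reply position

    @torch.no_grad()
    def reply(self, src_tokens, bos_id, eos_id, max_len=20):
        # Greedy decoding: feed each predicted token back in until <eos> appears.
        _, state = self.encoder(self.embed(src_tokens))
        token = torch.tensor([[bos_id]])
        reply_ids = []
        for _ in range(max_len):
            dec_out, state = self.decoder(self.embed(token), state)
            token = self.out(dec_out).argmax(dim=-1)
            if token.item() == eos_id:
                break
            reply_ids.append(token.item())
        return reply_ids

The authors trained much bigger LSTMs of this general shape on IT helpdesk chat logs and movie subtitle dialogue, which is where the conversations shown in their paper come from.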


More recently, researchers at DeepMind developed a text-to-speech system called WaveNet which is so realistic you can literally hear breathing between words and other sounds made by the mouth. It does this by directly modeling and generating raw audio waveforms, unlike traditional text-to-speech systems.


This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%.

Notice that non-speech sounds, such as breathing and mouth movements, are also sometimes generated by WaveNet; this reflects the greater flexibility of a raw-audio model.

As you can hear from these samples, a single WaveNet is able to learn the characteristics of many different voices, male and female. To make sure it knew which voice to use for any given utterance, we conditioned the network on the identity of the speaker. Interestingly, we found that training on many speakers made it better at modelling a single speaker than training on that speaker alone, suggesting a form of transfer learning.

By changing the speaker identity, we can use WaveNet to say the same thing in different voices.
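
To give a rough idea of what "modeling raw waveforms" means in practice, here is a toy sketch of the core building block DeepMind describes: a stack of dilated causal convolutions that predicts each audio sample from the samples that came before it. This is my own simplified illustration (Python/PyTorch); the real WaveNet also has gated activations, residual and skip connections, and conditioning on text and speaker identity, none of which is shown here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedCausalStack(nn.Module):
    # Toy stand-in for WaveNet's convolution stack: each layer doubles its dilation,
    # so the receptive field grows exponentially while every convolution stays causal
    # (it never looks at future samples).
    def __init__(self, channels=32, layers=8, quantization=256):
        super().__init__()
        self.input = nn.Conv1d(1, channels, kernel_size=1)
        self.convs = nn.ModuleList(
            [nn.Conv1d(channels, channels, kernel_size=2, dilation=2 ** i)
             for i in range(layers)]
        )
        self.output = nn.Conv1d(channels, quantization, kernel_size=1)

    def forward(self, waveform):
        # waveform: (batch, 1, samples), values in [-1, 1]
        x = self.input(waveform)
        for conv in self.convs:
            pad = conv.dilation[0]                # left-pad by the dilation so the output
            x = F.relu(conv(F.pad(x, (pad, 0))))  # keeps its length and never sees the future
        return self.output(x)  # logits over 256 quantized amplitude levels at each sample

# One second of fake 16 kHz audio in, a next-sample distribution out at every position.
model = DilatedCausalStack()
logits = model(torch.rand(1, 1, 16000) * 2 - 1)  # shape: (1, 256, 16000)

Even this stripped-down version makes the key point: the network predicts actual sample values rather than stitching together pre-recorded speech fragments, which is why side effects like breaths and mouth noises can show up in the output, exactly as the blog post notes.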




posted on Nov, 16 2016 @ 06:54 AM
Perhaps even more amazing, something like this has been in development for a few years now. The Laboratory for Animate Technologies has been working on a project called BabyX, which is essentially a 3D avatar of a baby powered by artificial intelligence. The following video is from 2 years ago and shows it in action. It obviously cannot do all the stuff I've described in this short story, but eventually it will. I highly suggest watching this video to get a true understanding of what this story is about.


EDIT: I just checked how BabyX is going these days and it looks like good progress is being made. They've already moved on to creating several adult avatars which look amazingly realistic. Some are shown in a video posted earlier this year:




posted on Nov, 16 2016 @ 08:37 AM
Lmao the BabyX guy actually has a TEDx presentation. I dub this The Genesis Presentation lol.




