
Tipping point? Artificial Intelligence (A.I.) software teaches itself to learn


posted on Nov, 26 2016 @ 07:39 PM

Google’s NMT creators, however, are unsure how the neural networks work or what exact concepts the NMT has learned to translate languages directly. In short, Google's AI has created its own secret language we humans do not fully understand.


1. AI has created its own secret language

2. Which humans don't understand

That should raise a few big red flags for anyone with a brain.



posted on Nov, 27 2016 @ 01:00 AM

originally posted by: Restricted

Google’s NMT creators, however, are unsure how the neural networks work or what exact concepts the NMT has learned to translate languages directly. In short, Google's AI has created its own secret language we humans do not fully understand.


1. AI has created its own secret language

2. Which humans don't understand

That should raise a few big red flags for anyone with a brain.


If you've ever developed a neural network, this shouldn't surprise you. I'm no Google engineer (maybe someday), but for a class project a couple of weeks ago, I made a neural network that I couldn't understand after a couple of iterations. It can be difficult because the output from this stuff isn't words. It's bit strings, and you end up looking at how the strings are manipulated between stages to figure out what's going on.
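For anyone curious what "looking between the stages" means, here's a minimal sketch in Python. This is a made-up toy network for illustration, not the class project above; the point is that the intermediate values are just opaque numbers.

```python
import math
import random

random.seed(0)

# A toy two-layer network with random weights. Nothing here is
# human-readable: the data flowing between stages is just numbers.
W1 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]  # input -> hidden
W2 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(8)]  # hidden -> output

def forward(x):
    # The hidden layer is the "between stages" representation you end
    # up staring at when trying to figure out what the network learned.
    hidden = [math.tanh(sum(xi * W1[i][j] for i, xi in enumerate(x)))
              for j in range(8)]
    output = [sum(h * W2[i][j] for i, h in enumerate(hidden))
              for j in range(2)]
    return hidden, output

hidden, output = forward([0.5, -1.0, 2.0, 0.1])
print(hidden)  # eight opaque numbers; good luck reading meaning into them
```

Scale the hidden layer up to thousands of units across dozens of layers and you have the situation the Google engineers describe.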



posted on Nov, 27 2016 @ 07:46 AM
It's funny to watch people get scared of what they don't understand.

The title of this topic is hilarious. AI teaches itself to learn? LOL... It's a machine learning AI... it was designed to learn...

It learns by being given the correct answers to questions, and it creates algorithms and variables to help determine the answers to the questions again in the future. These algorithms and variables are stored in memory, and they are not exactly human-readable. It's just a bunch of numbers.

If you have trained a machine learning bot for a very long time, the algorithms and variables become very complex. It would take a long time to understand how they are used by the AI. You would have to debug it while it is in use, stepping through the code while it's running, and it would take a lot of time because it could be 10,000 or even 100,000+ steps before you understand it.
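The "given the correct answers, stores numbers" description can be made concrete with a tiny example. The sketch below is a classic perceptron learning the AND function; it's chosen purely for illustration (it is not what Google runs). Everything it learns ends up as a handful of plain numbers:

```python
# Minimal "learning from correct answers": a perceptron is shown
# labeled examples of AND; what it stores is just numbers (weights),
# not human-readable rules.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]                      # the "correct answers"

w = [0.0, 0.0]
b = 0.0
for _ in range(50):                   # repeated training passes
    for (x1, x2), target in zip(X, y):
        pred = 1 if x1 * w[0] + x2 * w[1] + b > 0 else 0
        err = target - pred
        w[0] += 0.1 * err * x1        # the stored "variables"
        w[1] += 0.1 * err * x2
        b += 0.1 * err

print(w, b)                           # just a bunch of numbers
```

Even in this toy case, the final weights don't "explain" AND in any human sense; they just happen to produce the right outputs.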

When an engineer says, "We don't understand how it works," it's not because they can't. It's because it would take forever, and they don't really need to understand; they could if they wanted to.



posted on Nov, 27 2016 @ 10:20 AM
a reply to: anonfamily

What disturbs me is that there may come a point where only another AI can understand what these things are doing, saying, learning.

Hooking something like this up to our infrastructure and our military is insane.



posted on Nov, 27 2016 @ 02:24 PM

originally posted by: Restricted
a reply to: anonfamily

What disturbs me is that there may come a point where only another AI can understand what these things are doing, saying, learning.

Hooking something like this up to our infrastructure and our military is insane.


That point has come and gone. Look up neural networks that are tweaked by genetic algorithms. Humans don't really understand what's going on step by step, only the results. Given enough time humans can decode each step, but it's a massive waste of time to do so.



posted on Nov, 27 2016 @ 02:33 PM

originally posted by: Aazadan

originally posted by: Restricted
a reply to: anonfamily

What disturbs me is that there may come a point where only another AI can understand what these things are doing, saying, learning.

Hooking something like this up to our infrastructure and our military is insane.


That point has come and gone. Look up neural networks that are tweaked by genetic algorithms. Humans don't really understand what's going on step by step, only the results. Given enough time humans can decode each step, but it's a massive waste of time to do so.


It's a waste of time to understand what these machines are doing?

Then what is the point of them? Why make them in the first place?

See, that just doesn't work.

Don't get me wrong. AI f***ing fascinates me, but I have a very sober view of their potential applications.



posted on Nov, 27 2016 @ 04:22 PM

originally posted by: Restricted
It's a waste of time to understand what these machines are doing?

Then what is the point of them? Why make them in the first place?

See, that just doesn't work.

Don't get me wrong. AI f***ing fascinates me, but I have a very sober view of their potential applications.


It's a programming thing; black boxes aren't all that uncommon. The idea is that you're building a machine, and what you're concerned with is the input/output. All of the internals are outside the scope of what you're trying to do. While the program runs, the internals change. The whole point is that you're building them to be malleable, because the transitory steps aren't something you need to be concerned with, only the final output.

If you ever built an AI, you would be a lot less fascinated with it. It's a cool field, but there's no magic there once you learn how they operate.

For example, I built an AI a few weeks ago. It started with a game level and was given no instructions. From there it was able to teach itself how to play. Most of these AI projects are just variants on that same problem. The type of stuff Elon Musk is scared of honestly has about as much connection to reality (as it stands today) as the idea that there's a secret race of reptilian shapeshifters controlling the world.



posted on Nov, 27 2016 @ 04:33 PM
Skynet inches closer every day.
Someday we might be the servants of the artificial intelligence we are trying to create to be lifelike.



posted on Nov, 27 2016 @ 08:10 PM

originally posted by: Lil Drummerboy
Skynet inches closer every day,
Someday in the future we might be the servants of the Artificial Intelligence we are trying to create to be lifelike.


That's where your misunderstanding is. We're not creating AI that mimics human intelligence. All of that is a facade engineers throw on the face of something to make it more appealing to people... because we're more accepting of things that look like us. AI literally has nothing in common with how humans think or learn.

Let's take the concept of a genetic algorithm as a learning tool. In the simplest terms, a genetic algorithm will take a bunch of random actions to move forward a tiny bit with a task. Then a weighted random selection is used to pick results from that task as the starting points for a new generation (better score = higher chance). You can throw in other genetic concepts like mating and mutation here too.

Repeat this for a couple thousand generations and what ends up happening is that you arrive at a single best solution from what I would call little more than a guided brute force approach. That is machine learning.
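That guided-brute-force loop is easy to demonstrate. Here's a toy sketch in Python on the textbook "OneMax" problem (evolve a bitstring toward all ones), using the weighted random selection, mating, and mutation described above. The parameters are arbitrary:

```python
# Toy genetic algorithm: evolve a bitstring toward all ones.
# Fitness-weighted selection + crossover + mutation, repeated over
# generations, homes in on the best solution by guided brute force.
import random

random.seed(1)
LENGTH, POP, GENS = 20, 30, 100

def fitness(bits):
    return sum(bits)  # score = number of ones

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    # weighted random: higher-scoring individuals are picked more often
    parents = random.choices(pop, weights=[fitness(b) + 1 for b in pop], k=POP)
    pop = []
    for a, b in zip(parents[::2], parents[1::2]):
        cut = random.randrange(LENGTH)        # "mating": one-point crossover
        for child in (a[:cut] + b[cut:], b[:cut] + a[cut:]):
            i = random.randrange(LENGTH)      # "mutation": flip one bit
            child = child[:i] + [1 - child[i]] + child[i + 1:]
            pop.append(child)

best = max(pop, key=fitness)
print(fitness(best), "/", LENGTH)
```

No individual step is intelligent; the selection pressure over generations does all the work, which is exactly why the intermediate states are so hard to interpret.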

Humans work on totally different concepts. While we still require training data, human language (especially English) is highly redundant, and we can apply one piece of data in multiple ways. For example, if you're familiar with Microsoft Office Word, you can pick up and use Open Office Writer at 95%+ efficiency with no learning time required. If you know how to use a screwdriver, you can use an electric screwdriver easily. Machines don't have that type of ability to apply knowledge. They have to build up relational databases that can apply one topic to another.



posted on Nov, 27 2016 @ 08:34 PM
a reply to: Informer1958

Mine's a pint then, glug glug glug, slurp...
I'd be like Superman, impervious to everything 👍🏻
When are these liquid chips on the horizon? (Never read the link, your write-up was enough.)



posted on Nov, 27 2016 @ 08:54 PM
a reply to: Aazadan

Interesting stuff. Thanks for the perspective.



posted on Nov, 27 2016 @ 09:08 PM

originally posted by: Restricted
a reply to: Aazadan

Interesting stuff. Thanks for the perspective.


The general AIs are the most interesting to me. They're a bit beyond my understanding, though. There are some AIs you can feed encyclopedias of data, and they can understand a language based on the relationships between words, and even derive grammar and sentence structure. You can ask these AIs questions and they can give you back a relevant answer. They're also language-independent: you can throw any language into the machine and it will give something back. It's the foundation for what will eventually be our Star Trek universal translators.

Those can converse, but they're still not sentient, or able to gain sentience.
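The "relationships between words" idea can be sketched at toy scale: count which words occur near each other in raw text, with no language-specific rules at all. The sentences below are made up, and real systems use vastly larger corpora and far smarter statistics; this just shows the principle.

```python
# Toy sketch of learning word relationships from raw text alone:
# count which words appear together in the same sentence.
from collections import Counter
from itertools import combinations

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        pairs[tuple(sorted((a, b)))] += 1

# To the machine, words that co-occur often are "related";
# words that never co-occur are not.
print(pairs[("cat", "the")], pairs[("mat", "rug")])
```

Nothing in that loop knows English; feed it another language and it builds the same kind of relationship table, which is why such systems are language-independent.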

Regardless, I wouldn't worry about Skynet. That said, I would be hesitant to turn over weapon systems to AIs. AIs can evaluate signals, but the decision to fire should always remain in human hands.



posted on Nov, 28 2016 @ 12:41 PM
a reply to: Aazadan

To me, true AI is when the SINGULARITY is reached. After all, computers are only as good as their programmers and software. If (when) we reach the singularity, it will be the biggest game changer in history, and things will never be the same again. For better or for worse, though? That is the question.

I wonder how far off we are ?



posted on Nov, 28 2016 @ 01:46 PM

originally posted by: stealthyaroura
a reply to: Aazadan

To me true AI is when the SINGULARITY is reached. After all, computers are only as good as there programmers and the software. If (when) we reach the singularity then that will be the biggest game changer in history and things will never be the same again. for better or for worse though? That is the question.

I wonder how far off we are ?


The singularity is different from AI. AI is about making machines that are capable of making decisions. It's actually a VERY broad field, but the sexy stuff these days is in stories and jobs revolving around Machine Learning (ML). If it wasn't obvious from my previous posts, I think ML is a load of bunk. I've worked with it, and while it can do some very cool things, I see it as just another tool in the toolbox of using computers to answer questions. ML will not create a consciousness, no matter how advanced it gets.

The singularity, on the other hand, is more along the lines of The Matrix: we can all put our consciousness into machines and be thinking individuals running on silicon chips rather than grey matter. The singularity has already come and gone for some of our digital creations. To borrow from an xkcd comic, it already happened to Tamagotchis, and we now have billions of them running under mechanical caretakers.

My opinion on the singularity is that while we might initially all hook ourselves up to computers, we would again construct physical bodies for ourselves in order to interact with the world. Being trapped inside a metal box is little more than a prison. The biggest advantage to me in being a digital existence, would be that I could transfer myself to different bodies at will, and that time would have no meaning. I could go out into space and explore, visit the bottom of the oceans, load myself onto a drone and fly through the skies like a bird, or just be in a robot and take a walk. If I'm trapped in a computer box, I would be confined to virtual reality, like being on a holodeck. Maybe it's because I mostly work with virtual reality these days, but I prefer actual reality to the fake stuff.



posted on Nov, 28 2016 @ 02:26 PM
a reply to: Aazadan
Thank you for an excellent post summing it up rather brilliantly.
Nothing of value to add, just applause for you and S&F for the OP.


edit on 28/11/2016 by stealthyaroura because: Mix up with author of thread and author of post



posted on Nov, 30 2016 @ 10:49 AM
Thanks, everyone, for your discussions on these AI topics. I have ordered a copy of Goleman's book Emotional Intelligence (EI), which should arrive sometime this week. After I read it and related newer books, I will have much more to say on this thread about the potential and actual self-inflicted wounds created when adult humans lack an adult level of EI. As my OP says, human EI weakness casts a long shadow over some of the tools that we create.

Meanwhile, a question. I have not done programming in decades ... are there any machine learning (ML) programs or systems now that run in analog form, or are they all still digital? Is Watson digital or analog? Thanks for reading.



posted on Nov, 30 2016 @ 01:24 PM
a reply to: Uphill

Most ML can only be done in a digital format because it involves manipulating bitstrings (especially neural networks and genetic algorithms). You can perform the same calculations on a piece of paper if you wish; they're pretty simple to do... there's just a lot of them.
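To illustrate the "on paper" point: a single artificial neuron is just a weighted sum pushed through a squashing function. The numbers below are made up for illustration, and you can check every step by hand.

```python
import math

# One neuron: multiply inputs by weights, add a bias, squash the result.
# Every value here is simple enough to verify on paper.
inputs  = [0.5, -1.0, 2.0]
weights = [0.4,  0.7, -0.2]
bias    = 0.1

weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
activation = 1 / (1 + math.exp(-weighted_sum))   # sigmoid squashing

print(round(weighted_sum, 2), round(activation, 3))
```

By hand: 0.5×0.4 + (-1.0)×0.7 + 2.0×(-0.2) + 0.1 = -0.8, and sigmoid(-0.8) ≈ 0.31. The arithmetic is trivial; a real network just repeats it millions of times.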



posted on Nov, 30 2016 @ 03:46 PM
a reply to: Uphill

I have an early edition of Goleman's book Emotional Intelligence (Why It Can Matter More Than IQ).
It's brilliant. A MUST READ.
It is a series of short stories, some hypothetical, and how we deal with them.

One example (paraphrasing): a young child has just fallen into a canal, to the horror of the mother! Very soon an onlooker is waist-deep in the canal and pulls the child to safety. The onlooker cannot remember going through any type of scenario as to how to address the situation; his brain went into autopilot and used his body as a tool to get the job done. It happened so fast the guy was literally back on the bank before realising what he had even been through. He had no time to give the scene any thought.

Something like that. I would have to dig out my edition to give the full story.

The book was/is a best seller and, if I'm correct, won all sorts of best book awards, etc.
OP, you will love this book; such an easy read too.

edit on 30/11/2016 by stealthyaroura because: spelling



posted on Dec, 19 2016 @ 01:33 PM
a reply to: stealthyaroura Today (Dec. 19, 2016) I got my copy of E.I. ... a paperback 10th anniversary edition. It includes other recommended readings, a section that the author (Daniel Goleman, PhD) notes would have been impossible to create when his book was first published (1995), because those related readings were not yet available. Now the winter holidays are almost upon us, so let me take 10 days to read the book. In the meantime, Merry Christmas, Happy Hanukkah, Happy Kwanzaa, have a great Solstice ... Happy Everything!

The following article from a US defense watch website talks about some progress with AI, but not a lot:

fortunascorner.com...



posted on Dec, 19 2016 @ 06:57 PM
a reply to: Restricted

The neural networks created their own internal representations from data. That's what they are supposed to do.

We don't understand the neural substrate of natural neural networks for language in humans at that level either. We're used to that, though: the explanations that people give verbally aren't really an accurate representation of the patterns of neurons and synapses that actually underlie the concepts.



