
Two rival AI approaches combine to let machines learn about the world like a child


posted on Apr, 8 2019 @ 06:34 AM
The first half of this got deleted, so I will attempt to piece the info back together again :-)

First and foremost, I am not an expert in any field related to Artificial Intelligence. If you are able to shed light on areas I am lacking in, please do.

We are getting closer to AI that can learn like a child.




Since the inception of artificial intelligence, research in the field has fallen into two main camps. The “symbolists” have sought to build intelligent machines by coding in logical rules and representations of the world. The “connectionists” have sought to construct artificial neural networks, inspired by biology, to learn about the world. The two groups have historically not gotten along.


One of the downsides of neural networks (NNs) is the amount of manually labeled data required to train them.




It is possible to train just a neural network to answer questions about a scene by feeding in millions of examples as training data. But a human child doesn’t require such a vast amount of data in order to grasp what a new object is or how it relates to other objects. Also, a network trained that way has no real understanding of the concepts involved—it’s just a vast pattern-matching exercise. So such a system would be prone to making very silly mistakes when faced with new scenarios. This is a common problem with today’s neural networks and underpins shortcomings that are easily exposed.


The "symbolists" approach reminds me of genetic memory and instructions passed down through evolution. In other words it's hard coded.

Combining the two approaches appears to eliminate some of the shortcomings of each, while the strengths of one complement the other.



The system consists of several pieces. One neural network is trained on a series of scenes made up of a small number of objects. Another neural network is trained on a series of text-based question-answer pairs about the scene, such as “Q: What’s the color of the sphere?” “A: Red.” This network learns to map the natural language questions to a simple program that can be run on a scene to produce an answer.

The NS-CL system is also programmed to understand symbolic concepts in text such as “objects,” “object attributes,” and “spatial relationship.” That knowledge helps NS-CL answer new questions about a different scene—a type of feat that is far more challenging using a connectionist approach alone. The system thus recognizes concepts in new questions and can relate them visually to the scene before it.


Before I go quoting the whole article, you can read the rest here:

www.technologyreview.com...
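
To make the quoted design a bit more concrete, here is a toy sketch in Python of the general idea: a question is mapped to a small program, which is then executed against a symbolic scene to produce an answer. To be clear, this is not NS-CL's actual code or API. In the real system one neural network learns the question-to-program mapping and another produces the scene representation; here both are hand-written stand-ins, and all the names (filter_shape, query_color, the scene fields) are just illustrative.

scene = [
    {"shape": "sphere", "color": "red", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "large"},
]

# Program primitives the parser can compose (illustrative, not NS-CL's)
def filter_shape(objects, shape):
    return [o for o in objects if o["shape"] == shape]

def query_color(objects):
    return objects[0]["color"]

def parse(question):
    # Hand-written stand-in for the *learned* question-to-program mapping
    if question == "What's the color of the sphere?":
        return lambda objs: query_color(filter_shape(objs, "sphere"))
    raise ValueError("question not understood")

program = parse("What's the color of the sphere?")
print(program(scene))  # prints: red

Even in the toy you can see the payoff of the split: once "sphere" and "color" exist as symbols, the same program skeleton answers the question for any new scene, which is the kind of generalization the article describes.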

Now, I wonder if it would be possible to use neuroevolution to provide further hard-coding in future generations of this system, in turn creating something akin to the way biological life passes on knowledge.



Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANN), parameters, topology and rules. It is most commonly applied in artificial life, general game playing and evolutionary robotics.
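
As a rough illustration, here is a minimal neuroevolution sketch in Python (using numpy): a population of fixed-topology networks is evolved by mutation and selection to fit XOR, with no gradient descent involved. Real neuroevolution systems such as NEAT also evolve the topology and use crossover; the population size, mutation scale, and tiny task here are just illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# XOR task: four input pairs and their target outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, x):
    # Tiny fixed-topology net: 2 inputs -> 3 tanh hidden units -> 1 output
    w1, b1, w2, b2 = w
    return np.tanh(x @ w1 + b1) @ w2 + b2

def fitness(w):
    preds = np.array([forward(w, x) for x in X]).ravel()
    return -np.mean((preds - y) ** 2)  # negative error: higher is better

def random_net():
    return [rng.normal(0, 1, (2, 3)), rng.normal(0, 1, 3),
            rng.normal(0, 1, (3, 1)), rng.normal(0, 1, 1)]

def mutate(w, sigma=0.1):
    # "Offspring" are copies of a parent with small random weight changes
    return [p + rng.normal(0, sigma, p.shape) for p in w]

# Evolve: keep the fittest half, refill the population with their mutants
pop = [random_net() for _ in range(50)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:25] + [mutate(p) for p in pop[:25]]

print("best XOR error:", -fitness(max(pop, key=fitness)))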


I apologize if this seems a little scattered; it's 4:30 in the morning here and I wanted to post it before I slept the thoughts away.



posted on Apr, 8 2019 @ 06:56 AM
a reply to: Irikash

Computer programs are only ever as smart as the people who create them.



Contrary to the delusional fantasies of AI programmers, it is very unlikely "strong AI" will ever become a reality with the von Neumann architecture, as brilliantly argued by John Searle.



It may be difficult for some of you AI proponents to accept, but Commander Data of Star Trek was played by a human actor. You can't prove a negative: you can't prove God does not exist, and you can't prove "strong AI" will never occur. But with the way computers are currently made, it is very unlikely.

There is actually a field of study on the limitations of what computers are capable of achieving, called Computability Theory. Most people do not even know computer programs HAVE limitations!!!

Computability: Turing, Gödel, Church, and Beyond (The MIT Press)

It's been 69 years since ENIAC was created, and despite Hollywood's favorite science-fiction nemesis, nothing has really changed in the way computers work. You would think that in those 69 years, if "strong AI" were a possibility, it would have happened by now.





posted on Apr, 8 2019 @ 07:08 AM
a reply to: Irikash

Here's a funny article. The ultimate goal of Artificial Intelligence is unbiased communist government:

What is the Ultimate Goal of Artificial Intelligence



posted on Apr, 8 2019 @ 07:40 AM
There is no such thing as A.I. yet...

All "A.I." is just programs made by programmers, as of now.



posted on Apr, 8 2019 @ 07:52 AM

originally posted by: Spacespider
There is no such thing as A.I. yet...

All "A.I." is just programs made by programmers, as of now.


The Lucas-Penrose argument claims that Gödel’s first incompleteness theorem shows that the human mind is not a Turing machine.

Although many AI proponents have argued that the Lucas-Penrose argument is false, I think that is a choice based on semantic interpretation. The criticisms made by AI proponents are not objective, which, to my way of thinking, only proves the Lucas-Penrose argument is valid.



posted on Apr, 8 2019 @ 08:19 AM
a reply to: Irikash

If people were like machines, a computer AI system would be able to predict where the next mouse click will be and provide the information we are asking for before we actually click the mouse. But since a computer AI system cannot predict where our next mouse click will be, we are therefore not mechanistic computing machines but something more.



posted on Apr, 8 2019 @ 08:36 AM
When I was in college studying Artificial Intelligence, I wanted to prove the human mind was a mechanistic machine like a computer, and that our thoughts were simply computer programs. To do this, I wanted to put my mind into an infinite loop, thereby never being able to have another thought again because my mind was locked in the loop. Sort of like the way Republicans criticize Democrats as being responsible for everything that is bad in this country. Essentially, I was trying to have a thought in my brain that would lock it up. It would be the equivalent of the following computer program:

10 REM do nothing
20 GOTO 10

Essentially, this program does nothing other than loop forever. But no matter how hard I tried, I was not able to come up with a similar program in my mind, a particular sequence of thoughts that would do the same. This was the closest I was able to get. Hopefully, the following thought, which takes the form of a question, will not hurt anyone by permanently locking their brain in an infinite loop. Okay, so here's the thought:

Have you ever thought about what your brain is doing between thoughts?

Most people who hear this question chuckle. I think the chuckle, or our sense of humor, is what allows our minds to prevent our brains from ever locking up completely in an infinite loop. Of course, I have no way to know for sure. Maybe someone read my brain-locking thought above and really is now permanently stuck in an infinite loop.

The bottom line is computers have no sense of humor.

Halting Problem
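
For anyone curious, the core of the halting problem fits in a few lines of Python. The halts function below is a hypothetical oracle, not code that could ever actually be written, and the sketch shows why:

def halts(program, argument):
    # Hypothetical oracle: True iff program(argument) eventually halts.
    # No real implementation can exist; that is what the proof shows.
    raise NotImplementedError("no total halting decider can exist")

def diagonal(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:  # loop forever if the oracle says "halts"
            pass
    return "halted"  # halt if the oracle says "loops"

# Feeding diagonal to itself is contradictory either way:
# if halts(diagonal, diagonal) returned True, diagonal(diagonal) loops forever;
# if it returned False, diagonal(diagonal) halts. So halts cannot exist.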






posted on Apr, 8 2019 @ 12:22 PM
a reply to: dfnj2015

Ah, I see. So when you were studying brain loops at Hogwarts,

were you a Ravenclaw or a Hufflepuff?

A.I. is not real.



posted on Apr, 8 2019 @ 02:48 PM
a reply to: dfnj2015

Let me clarify a little. I use the term "A.I." very loosely, as it is currently used by the mainstream. If we create an A.I., would it still be considered "artificial"? (Rhetorical, but feel free to answer if you feel inclined.) I think it would require consciousness to become intelligent. That may be achievable with quantum technologies; look up quantum mechanics and consciousness if you want more info in that regard (it branches much deeper into other subjects than is necessary at the moment).

I'm looking at it more as a tool to free up the human "work" required to progress us to a Level 1 civilization on the Kardashev Scale. This new system could be a nice step toward freeing up a lot of the time spent labeling data.



posted on Apr, 8 2019 @ 02:49 PM
a reply to: dfnj2015


originally posted by: dfnj2015
a reply to: Irikash

Here's a funny article. The ultimate goal of Artificial Intelligence is unbiased communist government:

What is the Ultimate Goal of Artificial Intelligence


That is rather interesting, and frightening, haha. I think we are progressing too quickly with dumb A.I. and may never see their envisioned utopia (perhaps for the best). I feel it's also in line with the creation of a god, as we have tried to do for quite some time now. We are such complex creatures that we confuse ourselves on a regular basis. This attempt to create a god is, in my opinion, us trying to better understand ourselves.



posted on Apr, 8 2019 @ 02:50 PM
a reply to: dfnj2015



It's been 69 years since ENIAC was created, and despite Hollywood's favorite science-fiction nemesis, nothing has really changed in the way computers work. You would think that in those 69 years, if "strong AI" were a possibility, it would have happened by now.


Putting a time limit on the possibility of things is too limiting, although for the purpose of allocating resources, that argument is a very valid one.



posted on Apr, 8 2019 @ 02:57 PM
In regard to A.I. and machine learning being used to help with that Level 1 civilization goal:

Fusion power and AI





