
The Dark Secret at the Heart of AI

page: 4

posted on Dec, 31 2017 @ 03:50 PM
a reply to: neoholographic

Um? What exactly did you want to discuss about Neural Nets? The subject of how they make decisions and whether that can be determined?




posted on Dec, 31 2017 @ 04:04 PM

originally posted by: neoholographic


Here's what Godel said:

So the following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified . . . (Gödel 1995: 310).





This is important because it shows the separation between intelligence and consciousness.


No, it shows the separation between certain finite algorithmic steps of logic and what mathematicians have decided is also good and useful mathematics: certain kinds of inductive reasoning to infinity.


Consciousness can't be computed but intelligence can. This means consciousness is something that's non material but can interact with and influence what we call the material brain.


Unsupported assertion, and there is empirical evidence of the reverse: material substances called anaesthetics reliably switch consciousness off.



Godel's theorems are actually strong evidence for a Creator Consciousness that exists outside of the universe.


Not in the slightest, any more than Turing's related halting-problem theorem. It's a mathematical statement about the limits of certain algorithmic procedures in that domain.

Wiki:


Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement.


It means that what humans consensually consider to be mathematics is a larger class than certain formal deductive algorithmic procedures.



Godel's incompleteness theorem is simple yet profound. It simply says nothing can be proven or explained within itself.


I recently went to a public, informal talk called "Quantum Bull#" by a philosopher and historian of science, who delineated all the ridiculous and silly ways that quantum mechanics has been erroneously attached to all sorts of pseudoscience and mystical bull#.

Godel's theorems might be a good third entry on that list, after relativity.



posted on Dec, 31 2017 @ 04:11 PM

originally posted by: neoholographic

Eventually AI trying to explain things to us will be like if we tried to explain the geometry of an ant hill to an ant. So we will have to merge our brains with AI. Here's an image from Google's Deep Dream.



Is it trying to tell us that it's beginning to see all things?


Is that what an AI is supposed to look like? Because it looks very much like this.

Next thing you know, Pinocchio the Son of Man. He will be a real boy.



posted on Dec, 31 2017 @ 04:25 PM

originally posted by: Aazadan

originally posted by: Lightworth
a reply to: neoholographic

Didn't watch the video, but I still see nothing that addresses the anomalous, unexpected, unexplained etc. Or please let me know what specifically I'm missing if applicable. How extremely ironic it is that so much of science AS WE KNOW IT is basically just a fancier, more scholarly version of the old (and particularly monotheistic) religions, which believes itself to have all the answers. Same arrogance, new packaging, or at least in a strong enough sense. Am not saying there is no validity in what you present, just that it doesn't cover absolutely everything.


Check out Godel's Incompleteness Theorem. Basically, it asserts that there are statements that are true but cannot be proven. It's a real issue in computing.


It says that there are truths in certain mathematical systems that cannot be proven (in a specific and restricted definition of 'proven' which is a small subset of what humans mean by this linguistically) from finite operations of algorithms in certain deductive formal systems. Humans have the ability to alter what they mean by the formal systems they accept and what they mean by 'proven' to accommodate new mathematics. Take, for instance, calculus.

And in physics this is known from, yet again, Isaac Newton: Universal Gravitation cannot be deduced by mathematical logic from unquestioned axioms about "how the world should work", and neither can the fundamental fields of the Standard Model. That was the problem of (some) Greek & Christian philosophers: too much internal thinking and not enough measuring. We've learned better now.



posted on Dec, 31 2017 @ 04:28 PM

originally posted by: neoholographic

Eventually AI trying to explain things to us will be like if we tried to explain the geometry of an ant hill to an ant. So we will have to merge our brains with AI. Here's an image from Google's Deep Dream.



Is it trying to tell us that it's beginning to see all things?


No. It isn't telling us anything; rather, we humans notice an effect of the statistical structure of the training set of images and of the convolutional multilayer network, which learned the prominence of eyes and their importance in image classification.

People have a habit of reading far too much deep and unsupported baloney into normal science.
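To make that concrete, here's a toy sketch of the mechanism (this is not Google's actual Deep Dream code; the filter, sizes, and names here are invented for illustration). Gradient ascent on the input image to maximise a learned filter's response simply paints the learned pattern into the image, which is why "eyes" everywhere in the training data come back out everywhere in the dream.

```python
import numpy as np

# Toy "Deep Dream": gradient ascent on the input image to maximise the
# response of one learned filter.  For a linear filter, the gradient of
# the response with respect to the image IS the filter pattern itself,
# so ascent paints the learned pattern into the image -- the imagery
# comes from the network's training statistics, not from any intent.

rng = np.random.default_rng(0)
learned_filter = rng.standard_normal((8, 8))  # stands in for a trained "eye" detector
image = rng.standard_normal((8, 8))           # stands in for the input photo

def response(img):
    # Activation of the filter on the image (a dot product).
    return float(np.sum(learned_filter * img))

for _ in range(50):
    grad = learned_filter        # d(response)/d(image) for a linear filter
    image = image + 0.1 * grad   # gradient ascent step on the image

# The image now correlates strongly with the learned pattern.
print(response(image))
```

After the loop the filter's response is large and positive: the "dream" is just the network's own learned statistics amplified back into the picture.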



posted on Dec, 31 2017 @ 04:41 PM
a reply to: dfnj2015

Software complexity is generally about how many layers of abstraction you have. In simple terms, an NN framework is a layer of abstraction that implements a neural net on top of conventional hardware. The underlying technology, like a traditional processor, does not necessarily have to change to implement AI, though of course that is evolving too.
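As a toy illustration of that point (the sizes and weights below are invented), a neural net layer is just matrix arithmetic that any conventional processor already executes:

```python
import numpy as np

# A neural net "layer of abstraction" is, underneath, ordinary matrix
# arithmetic that runs on a conventional processor -- no special
# hardware is required, though accelerators speed it up.

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)  # hidden layer weights
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)  # output layer weights

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2                # linear output layer

out = forward(np.array([1.0, 0.5, -0.2]))
print(out.shape)  # prints (2,)
```

Each `@` is a plain matrix multiply; the "neural" part is entirely in how the weights were chosen, not in the machinery that runs them.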



posted on Dec, 31 2017 @ 05:08 PM
a reply to: PlasticWizard




OK, these AI guys need to unplug the computer brain and focus on creating an algorithm or mechanism for the AI to explain why it makes certain decisions, and a way for the human user to correct the problem with new information


Yes, that is taking place, because of course it is really important to anyone trying to tune a NN.
edit on 31-12-2017 by PDP11 because: quote post
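For anyone curious what such an explanation mechanism can look like, here is a minimal sketch of one common idea, occlusion attribution, with an invented linear `model` standing in for a trained NN: zero out each input feature in turn and record how much the output moves.

```python
import numpy as np

# Occlusion attribution: mask each input feature and measure how much
# the model's score changes.  Large changes mark influential features,
# giving a crude answer to "why did it decide that?".

weights = np.array([3.0, 0.0, -1.0])

def model(x):
    # Placeholder for a real trained network.
    return float(weights @ x)

def occlusion_attribution(x):
    base = model(x)
    scores = []
    for i in range(len(x)):
        x_masked = x.copy()
        x_masked[i] = 0.0                  # occlude feature i
        scores.append(base - model(x_masked))
    return scores

x = np.array([1.0, 1.0, 1.0])
print(occlusion_attribution(x))  # prints [3.0, 0.0, -1.0]
```

Here feature 0 dominates the decision, feature 1 is irrelevant, and feature 2 pushes the other way, which is exactly the kind of feedback a user would need to correct the model with new information.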



posted on Dec, 31 2017 @ 05:27 PM

originally posted by: PDP11
a reply to: D8Tee

There's a group called "The Brotherhood" in some book. The AI selects who can run for president. One of the requirements is that the candidate absolutely can NOT want the job.


I have come across information that says the world has been run for some time by the commands of black boxes.

Who knows.



posted on Jan, 1 2018 @ 07:34 AM

originally posted by: neoholographic
Again, you have an AI explosion and researchers don't really understand how it's happening


Wrong, it's just too expensive to understand for mass-market applications.

If you log all the data used to train the system and all sensor input to it after training, in real-world use, then you are able to do a step-by-step analysis.

But it would mean even bigger data. That self-driving car would have to be a datacenter on wheels.

It's only a black box if you don't log every input for a post-mortem replay.
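A minimal sketch of that replay idea (the `decide` rule and field names are invented stand-ins for a real model and sensor feed): if the model is deterministic and every input is logged in order, the decision sequence can be reproduced exactly in a post-mortem.

```python
import json

# Log every input during live operation; replay the log through the
# same deterministic model afterwards to reconstruct each decision.

def decide(reading):
    # Stand-in for the trained model's decision rule.
    return "brake" if reading["distance_m"] < 10 else "cruise"

log = []

def run_live(readings):
    decisions = []
    for r in readings:
        log.append(json.dumps(r, sort_keys=True))  # record the exact input
        decisions.append(decide(r))
    return decisions

def replay():
    # Post-mortem: feed the logged inputs back through the same model.
    return [decide(json.loads(entry)) for entry in log]

live = run_live([{"distance_m": 50.0}, {"distance_m": 8.0}])
print(live == replay())  # prints True: replay reproduces the decisions
```

The cost the post points at is real: a self-driving car produces this sensor stream continuously, so "log everything" quickly becomes the datacenter-on-wheels problem.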



posted on Jan, 2 2018 @ 04:17 PM
a reply to: mbkennel


It says that there are truths in certain mathematical systems that cannot be proven (in a specific and restricted definition of 'proven' which is a small subset of what humans mean by this linguistically) from finite operations of algorithms in certain deductive formal systems. Humans have the ability to alter what they mean by the formal systems they accept and what they mean by 'proven' to accommodate new mathematics. Take, for instance, calculus.


Only if a system is sufficiently powerful (that is, powerful enough to be useful); simple systems can be proven. His proof of this was to create a contradiction within any such system, like "X is true and X is NOT true."



posted on Jan, 9 2018 @ 01:32 AM
a reply to: PDP11

No, it wasn't creating a contradiction; it was a conceptual extension of the extremely important "Cantor diagonal argument".

en.wikipedia.org...

This concept was very important and extended to many domains.

I'm not a real mathematician, but as I understand the justly famous Cantor argument and its history, it was a huge conceptual breakthrough, and it challenges the notion of "existence", or what it means for something to exist in a mathematical theory, constructively (as if a computer could make it) or non-constructively.
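For illustration, the diagonal trick itself fits in a few lines (shown here over finite 0/1 prefixes rather than true infinite sequences): flip the i-th digit of the i-th sequence, and the result differs from every row in the list, so no list can contain all sequences.

```python
# Cantor's diagonal argument, finite sketch: given any list of binary
# sequences, flipping the diagonal yields a sequence that differs from
# every listed one (it disagrees with row i at position i).

def diagonal(sequences):
    # sequences: list of equal-length 0/1 lists
    return [1 - sequences[i][i] for i in range(len(sequences))]

listed = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
d = diagonal(listed)
print(d)                                # prints [1, 0, 1]
print(all(d != row for row in listed))  # prints True: differs from each row
```

Godel's construction adapts this self-referential flip to provability rather than digits, which is why the two arguments are conceptually related.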



