
Question: Could this new concept help with AI?


posted on Jan, 31 2014 @ 06:28 AM
I was just thinking today about the book Parallel Universes by Dr. Fred Alan Wolf, which I read about five years back while writing a thesis paper. A lot of the book talks about how parallel universes work. One of the interesting questions was this - say you see a car drive past your house, and you don't know who owns the car or who is in it.

Well, is it possible that the variables of who owns the car and who was in the car are not actually set until you find them out later? I was thinking about that today while I was running my Pathfinder campaign.

There was a temple on top of a hill, run by an older priest - but it turns out the priest was really running something close to a cult, with multiple wives and even the town's only guard involved (though it was a love / hippie cult, nothing sinister). The point is, I didn't know about this development until the players interrogated it out of the town guard, once they started wondering why he was acting suspicious.

But coming back to it - does it really matter whether I knew about it ahead of time, as long as the variables are set once they are discovered?

So check this out - let's say you have an A.I. interface, and you start a conversation with her and begin asking her questions. She has to answer somehow - but what if her answers are chosen randomly, then remembered and checked for consistency as they are made? That saves the trouble of having to think through every possible conversation ahead of time and is still just as convincing. It gives the illusion of there being a lot more there than there really is - and that saves computing power and, more importantly, (nearly impossible) coding.
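The lazy-variable idea above can be sketched in a few lines of code. This is only my own toy illustration (all the names and topics here are made up, not from any real chatbot): an answer gets picked at random the first time a question comes up, then it is remembered, so every later answer stays consistent with what was already said.

```python
import random

class LazyPersona:
    """Toy sketch of the thread's idea: a conversational "variable"
    stays unset until someone asks about it; once asked, the answer
    is fixed and reused so the persona never contradicts itself."""

    def __init__(self, options):
        self.options = options  # possible answers, keyed by topic
        self.memory = {}        # answers that have already been "observed"

    def ask(self, topic):
        if topic not in self.memory:
            # First observation: pick randomly, then lock it in.
            self.memory[topic] = random.choice(self.options[topic])
        return self.memory[topic]

persona = LazyPersona({
    "favorite_color": ["red", "green", "blue"],
    "hometown": ["Riverton", "Ashford"],  # hypothetical place names
})
first = persona.ask("favorite_color")
assert persona.ask("favorite_color") == first  # consistent on re-ask
```

Nothing has to be scripted ahead of time - the memory dictionary only grows for topics the user actually touches, which is exactly the computing-power savings described above.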




posted on Jan, 31 2014 @ 06:42 AM
reply to post by darkbake
 


Sup dark, thought you might dig on this...
Source

Visual


Here, we introduce a new class of computer which does not use any circuit or logic gate. In fact, no program needs to be written: it learns by itself and writes its own program to solve a problem. Gödel's incompleteness argument is explored here to devise an engine where an astronomically large number of "if-then" arguments are allowed to grow by self-assembly, based on the basic set of arguments written in the system; thus, we explore the beyond-Turing path of computing, but following a fundamentally different route than the one adopted in the last half-century of non-Turing adventures.

Our hardware is a multilayered seed structure. If we open the largest seed, which is the final hardware, we find several computing seed structures inside; if we take any of them and open it, there are several computing seeds inside that. We design and synthesize the smallest seed, and the entire multilayered architecture grows by itself. The electromagnetic resonance band of each seed looks similar, but the seeds of any layer share a common region in their resonance band with the inner and upper layers, so a chain of resonance bands is formed (a frequency fractal) connecting the smallest seed to the largest (hence the name invincible rhythm, or Ajeya Chhandam in Sanskrit).

The computer solves an intractable pattern-search (Clique) problem without searching, since the right pattern written in it spontaneously replies back to the questioner. To learn, the hardware filters any kind of sensory input image into several layers of images, each containing basic geometric polygons (fractal decomposition), and builds a network among all layers; multi-sensory images are connected in all possible ways to generate "if" and "then" arguments. Several such arguments and decisions (phase transitions from "if" to "then") self-assemble and form the two giant columns of arguments and rules of phase transition. Any input question is converted into a pattern as noted above, and these two astronomically large columns project a solution.

The driving principle of computing is synchronization and de-synchronization of network paths; the system drives toward the highest density of coupled arguments for maximum matching. Memory is located at all layers of the hardware. Learning and computing occur everywhere simultaneously.
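One narrow piece of that abstract - a basic set of "if-then" arguments growing by self-assembly into larger chains - can at least be caricatured in ordinary code. To be clear, this is only my own toy analogy (plain rule chaining, i.e. transitive closure), not the paper's resonance-based hardware:

```python
def self_assemble(rules):
    """Grow a seed set of (if, then) pairs by chaining:
    (a -> b) and (b -> c) combine into a new rule (a -> c).
    Repeat until no new rule appears (transitive closure)."""
    rules = set(rules)
    while True:
        new = {(a, d) for (a, b) in rules for (c, d) in rules if b == c}
        if new <= rules:
            return rules
        rules |= new

seed = {("rain", "wet"), ("wet", "slippery")}
grown = self_assemble(seed)
# A rule nobody wrote has "grown" out of the seed set:
assert ("rain", "slippery") in grown
```

The gap between this and the paper is obviously enormous - the point is just that "write a small seed of rules, let combinations of rules generate further rules" is a coherent computational idea even on conventional hardware.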



posted on Jan, 31 2014 @ 07:06 AM

darkbake

So check this out - let's say you have an A.I. interface, and you start a conversation with her and begin asking her questions. She has to answer somehow - but what if her answers are chosen randomly, then remembered and checked for consistency as they are made? That saves the trouble of having to think through every possible conversation ahead of time and is still just as convincing. It gives the illusion of there being a lot more there than there really is - and that saves computing power and, more importantly, (nearly impossible) coding.



If it were a True AI, there wouldn't be any need for complex coding. It would learn and create its own. Being 'just as realistic' misses the point of trying to create a True AI response.

Am I missing the point here?

Are we talking about creating a True AI or a much better mimic?



posted on Jan, 31 2014 @ 07:38 AM
reply to post by SLAYER69
 


I was talking about a better mimic. If it were true A.I. - which I like to call Virtual Intelligence - then I would have advocated some kind of quantum computer, which I still think is a good possibility, using something similar to Gödel's Incompleteness Theorem as a gateway to another realm where ideas are tagged with imaginary numbers.

I like how you isolated the difference between true A.I. and a mimic. I believe that difference is accounted for by quantum processes in neurons, so it could be recreated by quantum computers - and this could also have an impact on Gödel's Incompleteness Theorem.

Although it looks like what Thorneblood posted gets closer to mimicking the quantum aspect mechanically. I guess this was a mental exercise to get a better grasp on the concepts - I think any concept can be mapped out and used later.



posted on Feb, 1 2014 @ 02:30 PM
reply to post by Thorneblood
 


Wow Thorneblood, this looks like the real thing! I've had a quick scan of the full PDF; it will take a few hours to read in detail. But from what I know about neurophysiology and computing, this is groundbreaking!

There are some very deep philosophical issues involved, but the detail is layered in a lot of juicy tech. Just the thing for a Sunday tomorrow.



posted on Feb, 4 2014 @ 09:07 AM
reply to post by asciikewl
 


The idea that humans can't think sequentially very well tripped me up for days. I'm not sure if this is the case for all humans. For example, check out this "Parallel Thinking" entry on Wikipedia here.


Parallel thinking is defined as a thinking process where focus is split in specific directions. When done in a group it effectively avoids the consequences of the adversarial approach (as used in courts).

In adversarial debate, the objective is to prove or disprove statements put forward by the parties (normally two). This is also known as the dialectic approach. In Parallel Thinking, practitioners put forward as many statements as possible in several (preferably more than two) parallel tracks. This leads to exploration of a subject where all participants can contribute, in parallel, with knowledge, facts, feelings, etc.



posted on Feb, 7 2014 @ 08:34 AM

darkbake
reply to post by asciikewl
 


The idea that humans can't think sequentially very well tripped me up for days. I'm not sure if this is the case for all humans.


We do 'think' vastly faster with our parallel hardware, but what most people would call 'thinking' is a serial process that happens comparatively slowly. Most likely this serial thinking requires keeping track of more than one mind state - in other words, the ability to think of two (or more) 'things' at the same time, plus a trick with short-term memory to be able to go back. It may even be partly this, evolutionarily speaking, new ability that creates a significant difference between us and the majority of other animals.



posted on Feb, 9 2014 @ 10:31 PM
For a machine to become sentient and self-aware, it must be capable of learning by interacting with its environment, as well as following all pre-programmed directives. That's a minimum. Its progress toward becoming a “mature and responsible” thinking machine might be similar to that of a child growing up into adulthood. By far the greatest part of its knowledge, at maturity, would derive from “learning”, not from coded objects. There’s a lot of progress being made right now in this area of AI, right? I mean, a wiki for robots is now being established so that they can begin learning from one another. Pretty cool.

The human brain is capable of both serial and parallel processing. The subconscious mind seems to handle most of the parallel processing duties, while the conscious mind is the serial processor. For instance, when you get up to take a walk and look around, the subconscious mind goes to work processing many, many inputs simultaneously as it controls all the muscular and visual functions in parallel. But if you’re just sitting around reading a book, or working a math problem, or making a post to ATS, then these tasks are performed in a serial fashion.

Since we consciously process information serially, though, we’re poor at multitasking. It wouldn’t work out very well if I tried to carry on a conversation at the same time as posting this. So machines with multi-core processors and hyper-threading capabilities will quickly outperform us at that.
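The serial-vs-parallel contrast in these posts maps loosely onto software concurrency. As a rough analogy of my own (not a claim about neurons), here is sequential versus threaded execution of some independent "background" jobs:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def job(name, seconds=0.1):
    time.sleep(seconds)  # stands in for any slow, independent task
    return name

jobs = ["walk", "look", "balance", "breathe"]

# Serial, like conscious step-by-step thought: one job at a time.
start = time.perf_counter()
serial = [job(j) for j in jobs]
serial_time = time.perf_counter() - start

# Parallel, like background processes handling many inputs at once.
start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(job, jobs))
parallel_time = time.perf_counter() - start

assert serial == parallel           # same results either way
assert parallel_time < serial_time  # the threaded run finishes sooner
```

The serial loop pays for each job in turn, while the thread pool overlaps the waits - the same work, finished in roughly the time of the single slowest job.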

I have a feeling it won’t be long before machines rule. Who knows, maybe they will be able to straighten out the mess we’ve gotten ourselves into...




