Here we introduce a new class of computer that uses no circuits or logic gates. No program needs to be written: the machine learns by itself and writes its own program to solve a problem. Gödel’s incompleteness argument is explored here to devise an engine in which an astronomically large number of “if-then” arguments grow by self-assembly from the basic set of arguments written into the system. We thus explore a beyond-Turing path of computing, but along a route fundamentally different from the non-Turing adventures of the last half-century.

Our hardware is a multilayered seed structure. If we open the largest seed, which is the final hardware, we find several computing seed structures inside; if we take any of them and open it, there are several computing seeds inside, and so on. We design and synthesize only the smallest seed; the entire multilayered architecture grows by itself. The electromagnetic resonance bands of the seeds look similar, but the seeds of any layer share a common region of their resonance band with the layers immediately inside and outside them, so a chain of resonance bands is formed (a frequency fractal) connecting the smallest seed to the largest (hence the name “invincible rhythm”, or Ajeya Chhandam in Sanskrit).

The computer solves an intractable pattern-search (clique) problem without searching, since the right pattern written in it spontaneously replies back to the questioner. To learn, the hardware filters any kind of sensory input image into several layers of images, each containing basic geometric polygons (fractal decomposition), and builds a network among all layers; multi-sensory images are connected in all possible ways to generate “if” and “then” arguments. Many such arguments and decisions (phase transitions from “if” to “then”) self-assemble to form two giant columns: the arguments and the rules of phase transition. Any input question is converted into a pattern as noted above, and these two astronomically large columns project a solution.
The driving principle of computing is the synchronization and de-synchronization of network paths; the system drives toward the highest density of coupled arguments for maximum matching. Memory is located at all layers of the hardware, and learning and computing occur everywhere simultaneously.
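The abstract's core computational idea, a question "resonating" with whichever stored "if-then" argument it shares the most features with, can be sketched in software. This is only a toy illustration, not the paper's hardware: the rules, feature names, and the simple overlap count standing in for "resonance" are all assumptions.

```python
# Toy sketch (NOT the paper's hardware): each stored "if -> then" argument is
# a set of "if" features mapped to a "then" outcome. A query pattern picks
# the argument with the densest coupling, i.e. the largest feature overlap.
def best_match(query_features, arguments):
    """arguments: dict mapping frozenset of 'if' features -> 'then' outcome."""
    def overlap(condition):
        return len(query_features & condition)
    best = max(arguments, key=overlap)
    # No shared features means no argument "resonates" at all.
    return arguments[best] if overlap(best) > 0 else None

# Hypothetical rules for illustration only.
rules = {
    frozenset({"triangle", "red"}): "warning sign",
    frozenset({"circle", "red", "bar"}): "no-entry sign",
}
print(best_match({"red", "circle", "bar"}, rules))  # -> no-entry sign
```

Note that no rule is searched sequentially in the paper's scheme; here `max` merely simulates the outcome of all arguments responding in parallel and the best-matched one dominating.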
So check this out - let's say you have an A.I. interface, and you start a conversation with her and begin to ask her questions. Well, she might have to answer - but what if her answers are chosen randomly, remembered, and checked for consistency as they are made? That saves the trouble of having to think of every possible conversation ahead of time and is still just as realistic. It gives the illusion of there being a lot more there than there really is - and that saves computing power, and more importantly, (nearly impossible) coding.
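The commenter's idea - answer at random the first time, then remember the choice so later answers stay consistent - is easy to sketch. The class name, the answer list, and the question strings below are all made up for illustration.

```python
import random

# Sketch of the commenter's idea: choose an answer at random the first time
# a question is asked, remember it, and reuse it thereafter, so the bot
# appears to hold consistent opinions without any pre-written script.
class RandomButConsistentBot:
    def __init__(self, answers, seed=None):
        self.answers = answers
        self.memory = {}                  # question -> answer chosen earlier
        self.rng = random.Random(seed)    # seed only for reproducibility

    def reply(self, question):
        if question not in self.memory:   # first time: pick at random
            self.memory[question] = self.rng.choice(self.answers)
        return self.memory[question]      # afterwards: always the same answer

bot = RandomButConsistentBot(["yes", "no", "maybe"], seed=42)
first = bot.reply("Do you dream?")
assert bot.reply("Do you dream?") == first  # consistent on repeat asks
```

The memory dictionary is what creates the illusion of depth: nothing is decided until it is asked, but once decided it never wavers.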
Parallel thinking is defined as a thinking process where focus is split in specific directions. When done in a group it effectively avoids the consequences of the adversarial approach (as used in courts).
In adversarial debate, the objective is to prove or disprove statements put forward by the parties (normally two). This is also known as the dialectic approach. In Parallel Thinking, practitioners put forward as many statements as possible in several (preferably more than two) parallel tracks. This leads to exploration of a subject where all participants can contribute, in parallel, with knowledge, facts, feelings, etc.
reply to post by asciikewl
The idea that humans can't think sequentially very well tripped me up for days. I'm not sure if this is the case for all humans.