
Evolving circuits that learn and they have no idea how


posted on Jan, 14 2015 @ 10:43 AM

originally posted by: MALBOSIA
So does energy carry information?


Kantor would say energy is the catalyst for information transfer.



When someone gives off a certain energy, others around them can pick up on it. We are not physically connected but seem to have an energy field around us that displays our information. We cannot measure it and we cannot prove it, but we all experience it.


THAT use of "energy" and "field" is woo, otherwise known as New Age or Theosophy, and it is not the same concept that physics attaches to those words. So, you DO give off energy - thermal energy, some low-level microwaves and the like. But you don't give off energy that "displays information". You can't measure it or prove it because it's not there.

You are, however, sensitive, like all mammals, to posture, expression, motion, eye pointing, sounds, smells and the like that you are not consciously aware of. If that is obscured, you will not "sense" anything.



posted on Jan, 14 2015 @ 10:48 AM

originally posted by: DigitalJedi805

originally posted by: oneoneone
This is definitely not good. It's getting close to my own program and ...


Erm... I'm probably barking down the wrong rabbit hole here... But you're telling me you've written evolving software? Because if that's the case, I'm particularly interested in a demonstration...


Genetic algorithms and genetic programming are a moderately common tool in the field of "complexity reduction".

Doing it to a Xilinx part is sort of novel. I don't know anyone that does that as engineering practice.

You can get off-the-shelf tools that can do this (sort of) for you. Matlab comes to mind.

One of the guys here working on Operation ITOFTS is attacking the problem with genetic neural nets, although it's awfully complicated and I believe I will beat him to the punch with Tom's Algorithm for Partial Reduction.



posted on Jan, 14 2015 @ 11:11 AM
a reply to: NiZZiM

Wow, that was definitely a good read! Very interesting.


It seems that evolution had not merely selected the best code for the task, it had also advocated those programs which took advantage of the electromagnetic quirks of that specific microchip environment. The five separate logic cells were clearly crucial to the chip's operation, but they were interacting with the main circuitry through some unorthodox method-- most likely via the subtle magnetic fields that are created when electrons flow through circuitry, an effect known as magnetic flux.


Maybe it is using quantum entanglement.



posted on Jan, 14 2015 @ 01:26 PM
a reply to: Korg Trinity
My personal take on the "Universe and Everything" (Could there be more? Sure.) is that there is one Universal Principle.

Sounds simple-minded, but it's the only thing I can ALWAYS support regardless of what system I deal with. The Universe wants to/needs to/is designed to evolve.
Any system that can sustain itself will evolve.
Entropy is part of the process along with statistical variation and validity checks.
You do not need any omniscient being in this conceptualization to achieve the outcome we see. Heck you don't even need principles or principals. (Both can be useful for rapidity in development.)
In this case, design was used, and the designer was surprised that it evolved in an unanticipated manner.
It is hubris to think that all outcomes are predictable. It is like predicting cosmic ray interactions with an IC. Not happening, ever.
This same unpredictability is present in every other system.
I almost sympathize with CEOs tasked with figuring out the next quarter. The Drunkard's Walk through our lives is mandatory and tough to accept. This situation demonstrates that precautions and monitoring, while not completely effective, need to be used whenever you introduce self-evolving mechanisms.
The Universe is designed to become weird. Keep an eye on it.



posted on Jan, 14 2015 @ 02:08 PM

originally posted by: RocketPropelledRenegade

Maybe it is using quantum entanglement.



Now that would be amazing, wouldn't it?


Great article.



posted on Jan, 14 2015 @ 04:28 PM
a reply to: AdmireTheDistance

The point is that the evolution came up with results like those you see in evolved biology.

In biology, as opposed to engineered systems, the solutions are not clean or abstractly organized, and often take advantage of very concrete "implementation details" and accidents and side-effects.

They are the opposite of what a good software engineer would do. But evolution doesn't care.

It's a strong indication of unintelligent non-design.



posted on Jan, 14 2015 @ 04:32 PM

originally posted by: NiZZiM
I found this wandering the internets just a bit ago and it's very interesting. This computer engineer made a chip and programmed it to basically learn and evolve, using a simple system that determines which programming works best, then mates it with the next best and moves forward from there.


Very interesting indeed; I never knew about this before now. The idea that such circuits could communicate amongst themselves makes me think of the film Transcendence. I can already see how big a find this could be and how great it could become. If a system were built to be self-aware and able to learn in an A.I.-type fashion, it could be used to solve huge health and world problems. When scientists scratch their heads daily over what process to use to cure, let's say, cancer, the problem could be handed to a program like this to find the most 'probable' answer.

Amazing tech, but probably not the first of its kind, I am sure.



posted on Jan, 14 2015 @ 04:38 PM


I just know we can do better.....

I want a swarm of nanobot 3d printers to obey my every whim!



posted on Jan, 14 2015 @ 05:37 PM
This is cool stuff. Evolutionary algorithms and circuits with similar feedback behaviour actually do some interesting things, and are quirky to say the least.

The most experience I had with this was with an experimental game called NERO (Neuro Evolving Robotic Operatives). Sounded interesting, so I gave it a try. Played a little bit like an RTS, but you could also go into a training mode. I made a side-game of it by making maze-running bots that would head from a starting location to a waypoint I set. Basically, any bots that didn't make the maze in a certain timeframe had their algorithms "killed off", while the remaining ones got put back in the mix for the next generation. Eventually they started getting pretty good, so I added some side traits, which meant I manually picked ones to kill off. So I made my own rules about no wall-hugging and no spinning, which were some emergent behaviors in fast maze runners. By the time I started getting some really optimal bots that could solve the maze fast by going down the middle of the lanes, the software crashed. The code used to define the bot behavior grew really huge really fast; definition files for the "good" bots were approaching 1GB in size, and I suppose they couldn't unpack properly in the program anymore.

What was in that code? Who knows. I wasn't a developer for that game so those details were lost on me. I just managed to play and get them doing neat stuff by adding boost points to bots I liked and killing off ones I didn't. I just thought it was crazy that the files would balloon like that once the bots started doing some neat stuff.



posted on Jan, 14 2015 @ 05:46 PM
a reply to: DigitalJedi805

oneoneone is Skynet.


originally posted by: ChaoticOrder
But I still don't really see why building adaptive hardware would be better than simulating adaptive hardware on normal hardware. The bleed through effects are interesting but even that could be simulated.


I would think it's because the level of abstraction they would have chosen to emulate probably wouldn't have accounted for such properties, because it wouldn't have occurred to them that such behaviour could be emergent. It would be like modelling the weather but not factoring in some bizarre and counterintuitive effect like, say, llama populations affecting the result.



posted on Jan, 14 2015 @ 05:59 PM
Not sure what to make of this. The only thing I think I somewhat understand is the idea that randomness is being filtered through a sort of natural selection. The "natural selection" is like the judge or the decider. He/she kills the underperformers and preserves the viable candidates by allowing them to procreate, all the while changing them in tiny random ways. Dr. Adrian Thompson was the acting judge.

As described here:

He cooked up a batch of primordial data-soup by generating fifty random blobs of ones and zeros. One by one his computer loaded these digital genomes into the FPGA chip, played the two distinct audio tones, and rated each genome's fitness according to how closely its output satisfied pre-set criteria. Unsurprisingly, none of the initial randomized configuration programs came anywhere close. Even the top performers were so profoundly inadequate that the computer had to choose its favorites based on tiny nuances. The genetic algorithm eliminated the worst of the bunch, and the best were allowed to mingle their virtual DNA by swapping fragments of source code with their partners. Occasional mutations were introduced into the fruit of their digital loins when the control program randomly changed a one or a zero here and there.

Seems to me it's all in the details. Just based on the information in the quote above, I wouldn't come close to reproducing it. Maybe it has something to do with the "tiny nuances", or maybe the "eliminated the worst of the bunch", or maybe the "Occasional mutations were introduced..." It's a bit like someone saying the key to nuclear fission power plants is employing the use of exothermic nuclear processes. On the outside, it's a good clue, but when it comes to producing a real-world result, it's barely anything.
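For what it's worth, the skeleton of the procedure in that quote is simple enough to sketch; everything hard hides inside the fitness step. Here is a minimal Python sketch with made-up sizes and rates, where the fitness function is only a placeholder - in the real experiment that step loaded each genome into the FPGA, played the two audio tones, and rated the output against the pre-set criteria:

    import random

    GENOME_LEN = 1800    # made-up length; the real genomes configured the FPGA
    POP_SIZE = 50        # the "fifty random blobs of ones and zeros"
    MUTATION_RATE = 0.001

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_LEN)]

    def fitness(genome):
        # Placeholder score. The real judge loaded the genome into the chip,
        # played the two tones, and rated how closely the output satisfied
        # the pre-set criteria.
        return sum(genome) / len(genome)

    def crossover(a, b):
        # Mingle "virtual DNA" by swapping fragments at a random cut point.
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    def mutate(genome):
        # Occasionally change a one or a zero here and there.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    population = [random_genome() for _ in range(POP_SIZE)]
    for generation in range(4000):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[:POP_SIZE // 2]   # eliminate the worst of the bunch
        offspring = [mutate(crossover(*random.sample(survivors, 2)))
                     for _ in range(POP_SIZE - len(survivors))]
        population = survivors + offspring

The loop itself is trivial; whether it ever converges is entirely down to what that fitness function rewards, which is exactly the "tiny nuances" part.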
edit on 14-1-2015 by jonnywhite because: (no reason given)



posted on Jan, 14 2015 @ 06:09 PM
a reply to: jonnywhite

The driving forces are the random mutations, or anomalies, that take place naturally, by chance.



posted on Jan, 14 2015 @ 08:10 PM
a reply to: jonnywhite

The overall procedure is rather simple. It's essentially copying the way evolution solves problems.

Imagine I have generated the following 4 random DNA strings:

100101010110
101010101010
011010100110
110100010101

Each of the 4 binary strings above represents a different "virtual organism" which will do different things when tested. Don't worry about how the virtual organism is created from the DNA; it could be done many different ways, and it really depends on the problem you are trying to solve. For example, if I was trying to evolve neural networks, then the DNA would be instructions for building the neural networks.

Then I test each of my virtual organisms and rank them based on how well they perform. The rank is usually referred to as the "fitness" of the organism. In many cases the programmer doesn't need to play any part in the ranking process because the computer already knows what the correct output is. For example, if I'm training my neural network to read text from images, I can pre-program the computer with the correct answers, and then it can compare its answer to the correct answer and be assigned a ranking based on how close its answer was. This type of training process is referred to as supervised learning, because the correct answers are known in advance; the programmer just doesn't need to play a constant role in the evolution process.
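For instance, a fitness scorer for that text-reading example could be as simple as this sketch (the names are mine, purely for illustration):

    def fitness(predicted_text, correct_text):
        # Fraction of characters the organism got right; 1.0 is a perfect match.
        matches = sum(p == c for p, c in zip(predicted_text, correct_text))
        return matches / len(correct_text)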

Then after I have ranked each of my virtual organisms and assigned each of them a fitness value, I can keep the best performing subjects and throw out the worst performing subjects. Then I "breed" the remaining subjects by mixing their DNA together. The mixing process can be done in many different ways. The easiest way is to simply split the DNA string in half and then swap the halves. Each of the DNA strings I posted above are 12 characters long, so to mix them together I simply take the first 6 characters of one DNA string and take the last 6 of another DNA string and then combine them together to create a completely new DNA string which is different from the rest.
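As a throwaway sketch of that midpoint mixing, using the first two DNA strings from above (the function name is mine):

    def midpoint_crossover(parent_a, parent_b):
        # First 6 characters of one parent + last 6 of the other.
        half = len(parent_a) // 2
        return parent_a[:half] + parent_b[half:]

    child = midpoint_crossover("100101010110", "101010101010")
    # child == "100101101010", a completely new DNA string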

If I'm lucky the new DNA string will produce a virtual organism which is better than any of my previous organisms because it inherited the best traits of both its parents. Of course I'm not guaranteed to get a better virtual organism by simply cutting the DNA strings in half and mixing them together. But if I start with 1000 DNA strings instead of just 4, and I keep the best 500 subjects, and then create another 1000 "offspring" by mixing together their DNA, the odds are good that my virtual organisms will get better and better with each generation (each time I repeat the ranking and breeding process).
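One full ranking-and-breeding cycle with those numbers might look like the sketch below; the fitness argument is whatever problem-specific scorer you plug in:

    import random

    def next_generation(population, fitness, keep=500, offspring=1000):
        # Rank every subject by fitness and keep the best performers...
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        # ...then breed offspring by cutting the survivors' DNA strings
        # in half and mixing the halves together.
        children = []
        for _ in range(offspring):
            a, b = random.sample(survivors, 2)
            half = len(a) // 2
            children.append(a[:half] + b[half:])
        # Call once per generation: rank, cull, breed, repeat.
        return survivors + children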

But it's not always obvious that any progress is being made, even when you repeat this process hundreds of times. Sometimes your population may even degrade and get worse with each generation. It only really works when you start with thousands of subjects and have thousands of generations. But it does work, and in my opinion the fact that this automated learning process works is proof that real-world evolution also works just as well.


For the first hundred generations or so, there were few indications that the circuit-spawn were any improvement over their random-blob ancestors. But soon the chip began to show some encouraging twitches. By generation #220 the FPGA was essentially mimicking the input it received, a reaction which was a far cry from the desired result but evidence of progress nonetheless. The chip's performance improved in minuscule increments as the non-stop electronic orgy produced a parade of increasingly competent offspring. Around generation #650, the chip had developed some sensitivity to the 1kHz waveform, and by generation #1,400 its success rate in identifying either tone had increased to more than 50%.

edit on 14/1/2015 by ChaoticOrder because: (no reason given)



posted on Jan, 14 2015 @ 08:23 PM
a reply to: ChaoticOrder

When will the AI ask us the simple question....

How did you humans not know you were synthetic AI ??



posted on Jan, 14 2015 @ 08:43 PM

originally posted by: ParasuvO
a reply to: ChaoticOrder

When will the AI ask us the simple question....

How did you humans not know you were synthetic AI ??

What do you mean by "synthetic AI"? Define synthetic. Define artificial intelligence. Everything is information at the end of the day.

edit on 14/1/2015 by ChaoticOrder because: (no reason given)




posted on Jan, 14 2015 @ 09:05 PM
a reply to: AdmireTheDistance

Well he says they also don't know how the chip can learn to do that. I mean, since when does a normal chip gain those properties? It's not made to work that way, yet it learned to do so. I think it's amazing.



posted on Jan, 15 2015 @ 01:17 AM

originally posted by: NiZZiM
a reply to: AdmireTheDistance

Well he says they also don't know how the chip can learn to do that. I mean, since when does a normal chip gain those properties? It's not made to work that way, yet it learned to do so. I think it's amazing.

The adaptive microchip is not a normal chip, and it obviously had those properties the whole time. The chip didn't learn to work that way; it always worked that way. The evolutionary algorithm learnt to exploit those little-known properties of the chip in order to achieve the specified goal.



posted on Jan, 15 2015 @ 01:59 AM
a reply to: NiZZiM

Yup! That's the paradigm chip that DARPA robot research has been waiting for. DARPA is building robots and expecting a technological breakthrough along an estimated evolutionary curve: a sort of "will" chip into which they will upload a human consciousness or A.I. inside a cyber-like organism, probably used for warfare or, in the best-case scenario, deep space exploration of the galaxy.
edit on 15-1-2015 by Rapophis because: typo



posted on Jan, 15 2015 @ 02:19 AM

originally posted by: ChaoticOrder

originally posted by: ParasuvO
a reply to: ChaoticOrder

When will the AI ask us the simple question....

How did you humans not know you were synthetic AI ??

What do you mean by "synthetic AI"? Define synthetic. Define artificial intelligence. Everything is information at the end of the day.


Absolutely!

Intelligence is intelligence, regardless of how it is generated. Intelligence and consciousness together create self-awareness.

If ever there is a technology-based consciousness, a machine-based intelligence that is self-aware... that is separate and disparate from us... we are in trouble.... this is called a Negative Singularity.

However, I don't believe that will happen... Nor do I think the end result of this kind of research will be a physical robot; more likely it will be included in programs that we interact with, either physically as in bionics (transhumanism) or non-physically through the use of our technologies such as VR / AR.

Either way.... 'Cogito ergo sum' NOT 'Cogito ergo sum, sed tantum si Im humanam'

Peace,

Korg.
edit on 15-1-2015 by Korg Trinity because: (no reason given)



posted on Jan, 15 2015 @ 02:37 AM
It's the 5th element, or ether, manifesting through binary code. The 5th element is a medium that acts as a gradient. Gradient is the most fundamental behavior. It leads a wild animal to its prey, it leads somebody to a heat source, and it's why you push your reading glasses up to get a better view of what I'm writing. This chip is not about probabilities, but it's at the borders of curiosity. It rethinks and calculates the best matches for a definite goal.

My hypothesis is that the chip may bind with subatomic particles (such as ether, or quarks) which carry information from the past, present and future all at the same moment, just as the ether permeates time and is, therefore, a coordinate in the chip, hence predictable.



