REVERSIBLE COMPUTING: Computers 10^10 more energy efficient!

page: 2

posted on Jul, 2 2010 @ 08:58 AM
Cool beans! S+F

Nope, I don't understand it all, but I have a basic intuitive grasp that this is quite the technological leap in the making. It's enough for me to post this short response in hopes that it'll get the attention it definitely deserves.

posted on Dec, 20 2010 @ 09:36 AM
reply to post by unityemissions

Thank you for the bump.

Yes, I was hoping more people would find this interesting enough to debate.

There is a new thread that has some more info I think is relevant here!
edit on 12/20/2010 by VonDoomen because: (no reason given)

posted on Dec, 20 2010 @ 10:15 AM
reply to post by VonDoomen

This is all well and good, and would enable a device with substantially more computing power than all of the computers in the world, including all military and civilian supercomputers, yet fitting into something the size of a grain of rice!

BUT... what will a computer that possesses more 'intelligence' than trillions of human beings think of us?

I suppose more to the point, what will it think of itself in relation to us!

It would become the digital equivalent of a...god.
edit on 20/12/2010 by spikey because: (no reason given)

posted on Dec, 20 2010 @ 06:05 PM
reply to post by spikey

A computer's intelligence is linked to its software, not its physical capabilities. Of course, a computer's physical characteristics obviously affect how fast and efficiently it can perform its objectives. However, it is possible to have computers that are AMAZINGLY fast at one or two functions that typically require a human's level of intelligence, but that is once again linked to the level of the software.

posted on Dec, 21 2010 @ 01:13 AM
reply to post by spikey

Intelligence is different from processing capacity or capability.

Human intelligence will never truly exist in anything but a human. This is a relatively new cognitive theory within robotics and AI - the whole may seem greater than the sum of its parts, but is nonetheless built off of them. A computer, regardless of how it is programmed, will never understand 'reaching for something' unless it has hands and must exert the effort of reaching for something. Even if you were to take a human brain and remove it from the human body - the intelligence would soon become something other than human, based on whatever you connected it to (if you connected it to no stimulus, you would likely get no intelligence).

Complicating things is the concept of adaptive neural networks. Our brain essentially re-wires and restructures itself based on its experiences and whatever it decides is a desirable outcome. This is very difficult to model in hardware or software - and it may even be the key to intelligence as we qualify it.
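The re-wiring idea above can be illustrated with the classic Hebbian learning rule ("neurons that fire together wire together"). This is a toy sketch of the concept, not a model of any real neural hardware:

```python
# Hebbian update: strengthen a connection when the pre- and post-synaptic
# units are active at the same time. All values here are illustrative.
def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Return the new connection weight after one co-activation."""
    return weight + learning_rate * pre * post

w = 0.0
for _ in range(5):           # five co-activations of both units
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 1))           # the connection has strengthened to 0.5
```

Real adaptive networks also prune and re-route connections, which is the part that remains hard to model.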

To create an intelligence out of a computer with such vast computational abilities would be to create an intelligence we would never regard as anything but a machine, and an intelligence that would never regard us as anything more than a curious biological phenomenon. Neither could relate.

As for the whole idea - the problem is that quantum mechanics ultimately ends up blurring the lines between information and entropy. There is a minimum amount of energy involved in performing computations. However, the entire idea behind quantum computing is that you are processing with far less energy to begin with.
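The "minimum amount of energy" mentioned above is usually identified with Landauer's limit: erasing one bit of information costs at least k_B·T·ln 2 of energy. A minimal sketch, assuming room temperature (300 K):

```python
import math

# Landauer's limit: minimum energy to erase one bit of information,
# E = k_B * T * ln(2).
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # assumed room temperature, kelvin

energy_per_bit = K_B * T * math.log(2)   # joules per bit erased
print(f"{energy_per_bit:.3e} J per bit")
```

At roughly 3e-21 joules per bit, this floor is many orders of magnitude below what today's transistors dissipate per operation, which is why reversible computing (which in principle erases no bits) is interesting at all.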

posted on Dec, 21 2010 @ 05:02 PM
reply to post by Aim64C

Thanks for the post! You explained it very elegantly!

I think what people need to understand with a computer of this nature is that its important parts are governed by quantum law, not Newtonian physics.

posted on Dec, 23 2010 @ 01:42 AM
I, honestly, don't know enough about quantum mechanics to say much about the idea. It sounds like there are some very specific concepts being talked about that are parallel or tangent to the current research into quantum computing. It sounds like you would construct a 'pipeline' of quantum interactions - changing the state of particles within an atom at one end of the pipeline would, nearly instantly, produce an "answer" in the form of quantum states at the opposite end. If this was the end of it - then it would take, in theory, no energy other than what is already present in the system. However - as you have stated, to get a meaningful result and extract the result, it would take energy. If done at maximum efficiency, it would take a minimum of some nearly irrelevant amount of energy - a brick existing at current maximum thermal entropy could compute several orders of magnitude faster than today's supercomputers, if it were operating at unity.

In either case - 'normal' or 'this' type of quantum computing would be a serious improvement over current computers. I look forward to the day when it starts to look like something out of Star Trek or the Star Gate series - crystal data storage and processing.

Not that I have any idea what we'd ever do with such vast amounts of computational power. It would be like telling someone in the 80s: "Within your lifetime, computers will be capable of holding 16 gigabytes of physical RAM and addressing 18 exabytes of RAM, if we had that much - the average computer will be capable of over a teraflop, and specialized streaming computers can hit about a hundred teraflops while running off of standard 120 V/60 Hz mains."

Their response would be: "And what in the world could you POSSIBLY need that for?" Bear in mind - this is back in the days when this web-page would have not fit in the memory of a computer.

posted on Dec, 27 2010 @ 12:45 PM
reply to post by Aim64C

Thanks for the input!
And yes, these quantum computers are basically the theoretical maximum we can conceive of right now, and they will be exponentially stronger than current computers.

And what will we use them for?

The future will be about virtual reality! Imagine how much processing power it would take to simulate the Earth so that it was indistinguishable from how we perceive it with our eyes. Read up on "Moore's law and simulism".

Here's an interesting tidbit:

If computing power continues to increase according to Moore's Law, then 'The Simulation Argument' will eventually be proven correct. It's only a matter of time, and because of the 'doubling' every 2 years, we will see it coming closer exponentially. The reasoning here is simple. Right now, we are able to simulate something simple. In 2 years, we will be able to simulate something twice as complex. In 20 years, we will be able to simulate with 2*2*2*2*2*2*2*2*2*2 = 1024 times as much complexity as today. 40 years will bring us 2 to the power of 20 = 1 million times more power than we have today. Let's focus on the human body. Suppose we are able to simulate a complete human being in, say, year t. How long would it then take before we are able to simulate the world population of approximately 6 billion people? The simple equation 2^x = 6 billion gives us something between 32 and 33 for x. This means that going from simulating one person to simulating the entire population would take only about 33 times 2 years = 66 years.
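The arithmetic in that tidbit is easy to check directly (the 6 billion population figure is the post's own):

```python
import math

# If simulation capacity doubles every 2 years, how many doublings does
# it take to scale from 1 simulated person to ~6 billion?
population = 6e9
doublings = math.log2(population)     # solves 2^x = 6e9, x ~ 32.5
years = math.ceil(doublings) * 2      # round up to whole doublings

print(f"{doublings:.1f} doublings -> about {years} years")
```

This confirms the post's figure: between 32 and 33 doublings, i.e. roughly 66 years, assuming the doubling trend were to hold that long.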

posted on Dec, 27 2010 @ 01:27 PM
Would agitating molecules in an object create heat? I would say it would get very hot; after all, microwave ovens agitate molecules to cook food. And is there radioactivity related to such a process? A nuclear-active computer? And the power it takes to run a microwave is quite high. So the rock doesn't use power, but neither does a CD or DVD.
edit on 27-12-2010 by JBA2848 because: (no reason given)

posted on Dec, 27 2010 @ 01:31 PM
If the rock can do what it is doing with no heat, why can you not compute with no heat?
As for the screen: one day they will find a way to get crystals to make a color screen by just changing the angle of the crystal, and it would be very high resolution.
A piezoelectric keyboard: the pressure of your finger makes power.
I can not wait for my brick!

posted on Dec, 27 2010 @ 06:30 PM
reply to post by creepyalien


everything in reality is continually "computing" itself into reality, at the atomic/sub-atomic levels.
now if we could only encode the substance of a rock with meaningful information.

And I think this is where some people get caught up. It's not like we are writing letters onto atoms or anything.
Here is an example:
Electrons have a property called spin, and they can spin in opposite directions. In reality it is more linked to electromagnetic interactions (think north/south) than actual spin. However, we could say that if an electron's spin is counterclockwise it is 0, and clockwise is 1; then we have a working binary system.
Now take into account that we could do this same thing with other attributes of the electron, and we have some serious computing power!

posted on Dec, 28 2010 @ 07:27 AM
reply to post by VonDoomen

The problem with your example, however, is that simulating the entire population can already be done. Society acts like its own intelligence, the mechanics are simply altered a bit because of the scope. Simulating a person is also not all that difficult - though simulating our cognitive process is, depending upon which theory you subscribe to, impossible without the experience of being human.

The other problem is simulating one person versus simulating two or three people. It's not necessarily a linear ordeal. Simulating the interactions of two or three people is an entirely different concept from simulating the individuals (a simulation of a single intelligence is incomplete if it is also not simulating the interactions it has with other beings, its environment, and society as a whole.... how could they simulate my personal tendency to detest society if there is no society for my simulation to detest, and no experiences with that society for it to base its detest off of?).

It gets pretty crazy, pretty fast.

Personally, I think the next big deal will be computer-aided design (CAD). Now, it's been a relatively unchanged field for quite some time; it has merely become more accessible to lower-end systems and companies. The next big thing will be merging things like the Kinect and CAD with procedural generation (a concept put to use long ago in computers, when there was a need to create far more information than you could store).

Think about how long it takes to create a game today. It doesn't take all that long to code the actual game - the two most resource-consuming aspects of a game revolve around creating the graphical resources and the scripting of events in the game. Games like The Elder Scrolls IV: Oblivion took years to create the content. As computing power increases, it only makes sense to make the programs utilize that power to more intelligently assist this process. For example - leaves have rather common features - their veins and shapes are closely related, to the point where a program shouldn't have much trouble taking an outline of a leaf and applying realistic textures and features to it while populating a tree with realistic variants of that leaf. The tree, itself, could be made and textured by a program. In fact - many games and simulations use programs that do, to a lesser extreme, just that.
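The leaf-variation idea can be sketched as a toy procedural generator. Every parameter and range below is made up for illustration; real asset pipelines work on meshes and textures, not dictionaries:

```python
import random

# Generate many leaf-like variants from one template by randomly
# perturbing its parameters within plausible bounds.
def make_leaf_variant(rng, base_length=5.0, base_width=2.0):
    return {
        "length": base_length * rng.uniform(0.8, 1.2),  # +/- 20% of template
        "width":  base_width  * rng.uniform(0.8, 1.2),
        "veins":  rng.randint(5, 9),                    # vein count varies
    }

rng = random.Random(42)                 # seeded for reproducibility
leaves = [make_leaf_variant(rng) for _ in range(1000)]
print(len(leaves))                      # 1000 variants from one template
```

The storage win is the point: you keep one template and a seed instead of a thousand hand-made assets.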

Why should you have to personally design several different models of a bridge to test which ones have the best characteristics for what you want to do? Instead, put those characteristics into the computer and let it run hundreds or thousands of different models through an evolutionary process to get the best result - all in the same time it would take you to design two or three models.
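That evolutionary design search can be sketched in miniature. Here the "design" is a single number and the fitness function is a stand-in; a real version would score structural simulations over vectors of dimensions and materials:

```python
import random

def fitness(design):
    # Hypothetical objective: the ideal design parameter is 42.0.
    return -abs(design - 42.0)

def evolve(generations=200, pop_size=20, seed=0):
    """Keep the best half of each generation and mutate it to refill."""
    rng = random.Random(seed)
    population = [rng.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Survivors carry over unchanged; mutated copies fill the rest.
        population = survivors + [d + rng.gauss(0, 1.0) for d in survivors]
    return max(population, key=fitness)

best = evolve()
print(round(best, 1))   # converges close to the target of 42.0
```

Selection plus mutation is the whole trick: the computer explores many more candidates than a human could, keeping whatever scores best.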

And, to make this an official ATS discussion: presuming we want to navigate the stars at faster-than-light speeds, it would probably help if we could process the data from hundreds, thousands, maybe even millions of telescopes and satellites to create an "instant" model of the galaxy - IE, where everything actually is at this point in time. If we want to go to a star, it would help if we knew where it is right now, and where it is likely to be within the expected travel time, to plot an effective intercept. Exiting "warp" at the place where the star was a thousand years ago would make us look like a bunch of bumbling idiots.

However, I think the next big thing is making computers more of a companion than a tool. A tool is limited and not necessarily comprehensive. A computer as a companion bridges the human and digital worlds - making it easier for you to "tell the computer what you mean." Which is frustrating as all Holy Hell, right now.

Of course, this will also give way to cybernetics and interactive intelligence simulations (again - not "virtual intelligence" or "artificial intelligence" - an 'intelligence' designed as an interface between digital computation resources and human ideas/communication). The best way I can think of to put it is the Microsoft paperclip mixed with Cortana - not necessarily intelligent, but far more dynamic in its abilities - more useful than annoying.

posted on Dec, 28 2010 @ 08:44 AM
reply to post by Aim64C

There's been a massive lurch towards procedural content generation for the very reasons you mention. I'm only peripherally involved in the field (it's not my speciality), but the traditional limitations of PCG are slowly being eroded. We're reaching a watershed moment where the time taken to generate (for example) game assets manually is becoming less and less feasible as the technology becomes increasingly detailed and complex. It will indeed be very interesting to see how the field matures.

posted on Dec, 28 2010 @ 09:38 AM
This is interesting. The way I understand it, the heat the processors produce is currently the major roadblock to advancement.

Then we also have the problem of hardware advancing exponentially faster than software. If you are proficient at multithreaded programming, you are currently in very high demand. Most applications at this point really do not take advantage of the multiple cores in your computer. In fact, sometimes it is even faster to have a 3.2 GHz single core than to be running a quad core.

I read recently somewhere (can't remember where) that both AMD and INTC are working on optical processors, with optics that far surpass any data transmission we have currently. IIRC these were slated for retail in the 3-5 year range. It probably will not be too many years until you can hold the summation of all current computing power in the world today in the palm of your hand.

posted on Dec, 28 2010 @ 12:52 PM
reply to post by Dance4Life

There are a few things that are road-blocking the speed of silicon and germanium transistors.

While one of the problems is heat - this is mostly a problem with switching and leakage current caused by imperfect insulation of the gates.

To understand the problem is to understand how a transistor works. A transistor works by changing its resistance based on the voltage between the gate and source (for bipolar transistors, the base and emitter - the terminology was all developed around the "hole theory," where things get kind of confusing because everything is bass-ackwards). A perfect transistor has infinite resistance with no bias voltage, and no resistance when saturated.

The problem is that real-world transistors have a few draw-backs. First, the smaller you make them, the more current tends to leak across the boundary region. This generates heat, and increases power requirements. Second, there is the in-between time when a transistor is switching between 'off' and 'on' - this also generates heat as the gate must dissipate the power consumed by the resistive property of the gate.

Right now, the main limitation is how fast a transistor can go from "off" to "on" and vice-versa. This is the "rise" and "fall" time of the switch. We were able to get processors switching faster after dropping from a 5 V supply to 3.5 V, and again from 3.5 V to 1.5 V (the present supply, if I'm remembering correctly). We can't go much lower, as silicon transistors drop about 0.7 volts even when fully saturated, or about 0.3 for germanium substrates (this may be a little better in the higher-end lithography processes, but it's one of those weird solid-state mechanics things that are about as vexing as quantum mechanics... likely because the two are highly related).
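The payoff from dropping the supply voltage follows from the standard CMOS dynamic-power relation, P ≈ α·C·V²·f: power scales with the *square* of the voltage. The activity factor, capacitance, and frequency below are illustrative assumptions, not real chip figures:

```python
# Dynamic (switching) power of CMOS logic: P = a * C * V^2 * f
def switching_power(activity, capacitance_f, voltage_v, freq_hz):
    """Power in watts for a given activity factor, switched capacitance,
    supply voltage, and clock frequency (all values illustrative)."""
    return activity * capacitance_f * voltage_v ** 2 * freq_hz

p_5v  = switching_power(0.1, 1e-9, 5.0, 1e9)   # old 5 V rail
p_1v5 = switching_power(0.1, 1e-9, 1.5, 1e9)   # 1.5 V rail
print(p_5v / p_1v5)   # (5/1.5)^2, roughly an 11x power reduction
```

This is why voltage scaling was such a reliable lever for so long, and why hitting the ~0.7 V junction-drop floor hurts.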

Anyway - heat is a byproduct of a number of limitations of current technology, and one that only exacerbates the problems. A warmer gate tends to perform less ideally and consume more power (generate even more heat). So, while heat is certainly a problem, it's a symptom of technological limitations that does no favors aside from allowing you to use your computer as a space-heater on cold days.

As for computer-optics, there has been and continues to be a lot of research into creating "photon transistors" that would work in ways similar to our present electronics, just using light as opposed to electricity. In theory - they could be made to be far more efficient and powerful than current computers. Though, to my knowledge, the "photon transistor" is rather elusive, so the theory doesn't have much to stand on until we have a functional mechanism to theoretically analyze.

A number of engineers have been looking into diamond-substrate solid state electronics - which promise to have much more ideal characteristics than silicon and germanium substrates. The highly ordered structure of diamond has much higher thermal tolerances and lower overall leakage while promising better switching times. It's silicon but better.

The advantage to something like that is that it becomes a drop-in replacement for current electronics. Laser-computers may require an entirely new format from the board-up. That's going to take a hell of a long time to make commercially viable. Something like a processor made with synthetic diamond would be able to be substituted into any market solution. They could go back and re-fab a CPU core from today to make it consume less power in today's systems. It would be somewhat silly to do so - so they would likely make a current-generation core with the fabrication process, but it's just to illustrate that it's pin-for-pin compatible.

posted on Dec, 28 2010 @ 01:30 PM

Originally posted by Aim64C
reply to post by VonDoomen

The problem with your example, however, is that simulating the entire population can already be done. Society acts like its own intelligence, the mechanics are simply altered a bit because of the scope. Simulating a person is also not all that difficult - though simulating our cognitive process is, depending upon which theory you subscribe to, impossible without the experience of being human.

The other problem is simulating one person versus simulating two or three people. It's not necessarily a linear ordeal. Simulating the interactions of two or three people is an entirely different concept from simulating the individuals (a simulation of a single intelligence is incomplete if it is also not simulating the interactions it has with other beings, its environment, and society as a whole.... how could they simulate my personal tendency to detest society if there is no society for my simulation to detest, and no experiences with that society for it to base its detest off of?).

This is incorrect. When I say simulating a person, I mean simulating everything about that person, including the physical aspect, down to the molecules and the neuronal connections. You could simulate yourself in the absence of society, but we would only be simulating how YOU would act in that absence. I think what we are trying to simulate is a tendency or a potential. We simulate a human, and then it would act according to its environment and its tendencies. I know this all sounds like wishful thinking, and I understand the monumental undertaking this would be. However, I think the manner in which we organize this undertaking is very important; as my programming instructor says, "work smart, not hard."

posted on Dec, 28 2010 @ 03:05 PM
reply to post by Aim64C

Interesting post. I looked for a while and couldn't come up with the exact article I was trying to recall. Here is one, but in this one they are stating 10-15 years, though they are able to demonstrate it in a proprietary lab setting. In the previous article they were most likely just talking it up, or probably I just didn't remember correctly. Which wouldn't be the first time.

posted on Dec, 28 2010 @ 03:17 PM
The fundamental problem with Kurzweil's predictions is that he believes thought/consciousness is caused by neural calculations (as does most of Western medical science).

The truth is that neural calculations are caused by consciousness.

So until you can get consciousness bound into these computers of the future, they will always be just dumb machines.

You can program all you want for the machine to mimic thought and decision making processes, but they will always only be models.
edit on 28-12-2010 by harrytuttle because: clarification

posted on Dec, 28 2010 @ 03:32 PM
Really cool idea/tech/topic, thx for posting.

For those having trouble understanding how one can compute in reverse, and not use up more energy, maybe this will help (though I'm not sure I totally get it either):
There are two general types of reactions, endothermic and exothermic. Endothermic reactions involve an absorption of heat/energy, and exothermic involve a release of heat/energy. Running the computation forward results in an exothermic reaction, where energy is dissipated and entropy for the system increases. Running the computation backwards would decrease the entropy of the system and be endothermic, absorbing the heat.
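One concrete way to see how a computation can be "run backwards" at all is a reversible logic gate such as the Toffoli (controlled-controlled-NOT) gate, a standard building block in the reversible-computing literature. It maps inputs to outputs one-to-one, so no information is destroyed along the way; a minimal sketch:

```python
# Toffoli gate: flips the third bit only when the first two are both 1.
# The mapping is a bijection on 3-bit states, so it is fully reversible.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

state = (1, 1, 0)
forward = toffoli(*state)      # computes (1, 1, 1)
backward = toffoli(*forward)   # applying it again recovers (1, 1, 0)
print(forward, backward)
```

Because every output state has exactly one input state, nothing analogous to bit erasure happens, which is what lets reversible logic dodge the Landauer heat cost in principle. Whether that maps cleanly onto endothermic/exothermic chemistry is a looser analogy.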

Can somebody comment and let me know if I'm thinking about this correctly?

posted on Dec, 28 2010 @ 03:35 PM
If proved even possible within our lifetime...

It's a pipe dream... for now, at least.
edit on 28-12-2010 by thecinic because: (no reason given)
