The end of the thermodynamics of computation?

by Eric Drexler on 2014/03/29

In a recent post, the always intelligent and provocative Cosma Shalizi notes John D. Norton’s argument against (nearly) thermodynamically reversible computation, but Norton’s argument is mistaken.

In his paper “The End of the Thermodynamics of Computation: A No-Go Result,” Norton correctly states that “In a [nearly] thermodynamically reversible process, all component systems are in [nearly] perfect equilibrium with one another at all stages,” and then discusses systems in which “Fluctuations will carry the system spontaneously from one stage to another [and as] a result, the system is probabilistically distributed over the different stages.”

But the stages of the computation themselves need not be in equilibrium with one another, and hence need not be subject to back-and-forth fluctuations between them. Instead, a time-varying potential can carry a system deterministically through a series of stages while the system remains at nearly perfect thermodynamic equilibrium at each stage. In other words, the state of the system need not be probabilistically distributed over the different stages.
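
To make this concrete, here is a minimal numerical sketch (an illustration of the standard dragged-trap picture, not drawn from Norton’s paper): an overdamped Brownian particle carried through its “stages” by a harmonic potential whose center translates at speed v. All parameters are illustrative reduced units.

```python
# Sketch: an overdamped Brownian particle dragged through a viscous bath by a
# harmonic trap whose center moves at constant speed v. Parameters are
# illustrative (k = gamma = kT = L = 1 in reduced units). The mean work
# supplied by the moving trap is ~ gamma * v * L, so it shrinks without limit
# as the driving slows, while the particle is still carried forward
# deterministically rather than diffusing back and forth over the stages.
import numpy as np

rng = np.random.default_rng(0)

def mean_work(v, L=1.0, k=1.0, gamma=1.0, kT=1.0, dt=1e-3, n_traj=2000):
    """Average work done on the particle while the trap center travels a distance L."""
    n_steps = int(L / (v * dt))
    x = np.full(n_traj, -gamma * v / k)   # start at the steady-state lag (skips the transient)
    work = np.zeros(n_traj)
    for i in range(n_steps):
        c = v * i * dt                                    # trap center at this instant
        work += -k * (x - c) * v * dt                     # dW = (dU/dc) * (dc/dt) * dt
        noise = np.sqrt(2.0 * kT * dt / gamma) * rng.standard_normal(n_traj)
        x += -k * (x - c) * dt / gamma + noise            # overdamped Langevin update
    return work.mean()

for v in (1.0, 0.3, 0.1):
    # with gamma = L = 1, the analytic estimate gamma*v*L is just v
    print(f"trap speed v = {v:4.1f}   mean work ~ {mean_work(v):6.3f}   (gamma*v*L = {v:.3f})")
```

The point is not the particular model but the scaling: the work supplied over a fixed distance is proportional to the driving speed, so near-reversible operation is a limit that can be approached, not a state that fluctuations forbid.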

This is an example of a scientist describing an unworkable solution to a problem and then asserting that no solution will work, when workable solutions are already known. Richard Smalley did a similar but more damaging disservice to atomically precise fabrication by inventing and rejecting an unworkable concept involving exotic atom-plucking “fingers,” while ignoring a decades-old literature that described the now-mundane concept of guiding the motion of reactive molecules.

TL;DR: The standard view of the thermodynamics of computation is correct.


Chris Phoenix March 29, 2014 at 10:44 pm UTC

I think it’s related to, or at least comparable to, a fundamental misunderstanding of the basic difference between “out of equilibrium” and “unstable.” A book sitting on a table is out of equilibrium, in the sense of being in a higher-energy state than if it were on the floor. But it would take a large earthquake to make the book move to the floor.

I can’t even count the number of nanoscientists who have told me that “hard machine” type nanosystems would not be stable because the molecular configurations are “out of equilibrium,” totally ignoring the high-school-chemistry concept of an activation energy barrier.
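
To put rough numbers on that activation-barrier point (a back-of-envelope sketch; the barrier heights and the ~1e13/s attempt frequency are generic assumed values, not figures from the discussion above):

```python
# Arrhenius estimate of how long a metastable ("out of equilibrium") state
# persists behind an activation energy barrier. Attempt frequency and barrier
# heights are illustrative assumptions.
import math

K_B_EV = 8.617e-5        # Boltzmann constant in eV/K
ATTEMPT_FREQ = 1e13      # typical molecular vibration frequency, 1/s (assumed)

def mean_lifetime(barrier_ev, temp_k=300.0):
    """Expected time before a thermal fluctuation carries the system over the barrier."""
    rate = ATTEMPT_FREQ * math.exp(-barrier_ev / (K_B_EV * temp_k))
    return 1.0 / rate    # seconds

for barrier in (0.5, 1.0, 2.0):
    t = mean_lifetime(barrier)
    print(f"barrier {barrier:.1f} eV  ->  lifetime ~ {t:.1e} s  (~{t / 3.15e7:.1e} years)")
```

With these assumed numbers, a 2 eV barrier at room temperature gives a lifetime of roughly 10^13 years: a higher-energy configuration behind a sufficient barrier is stable for all practical purposes.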

A professor of computer science (and former vice president of the Association for Computing Machinery!) wrote an essay claiming that Babbage’s designs could not work, because “entropy” would have caused an accumulation of mechanical positional error. I wrote a rebuttal here:
http://crnano.org/archive04.htm#Bugbear
With better formatting here:
http://www.voyle.net/Guest%20Writers/Chris%20Phoenix/Chris%20Phoenix%202004-0004.htm

It sounds like this is a similar, though more sophisticated, misunderstanding. Just because the system can move smoothly (non-dissipatively) from one state to another does _not_ mean that it has equal probability of being in any state at any time.

Eric Drexler April 1, 2014 at 1:31 pm UTC

Hi Chris,

Please don’t tell me things like this. Learning of such profound confusion among scientists is too disturbing.

On second thought, I suppose I should hear about it anyway. Thanks.

— Eric

Carbonoid April 3, 2014 at 2:29 pm UTC

Eric, Chris, this perplexes me as well. I was speaking with a professional chemist, discussing atomically precise molecular manufacturing. He was very ‘debunking’ about it, saying the “most that can ever be done” is to make “programmable molecules that perform a specific task”. He holds to the idea that mechanosynthetic machine-phase positional assembly is not possible unless he personally sees it carried out. It HAS already been done, to an extent, with SPMs and other methods, as well as in nature.

Michio Kaku admits it is possible. He just has a far longer time frame for it.

The inventor Steve Bridgers has patented a system called INCA: Inter-Nodal-Connector-Architecture, which is based on fullerenes, and can be used to make assembler systems. He already has a plan laid out as to how to do it.

http://contest.techbriefs.com/2010/entries/machinery-and-equipment/956

Mr. Bridgers wants to develop APM, Eric, and perhaps you would be interested in discussing it with him and collaborating on some concepts?

http://amodelkit.com/cart/page.html?chapter=0&id=2

If you want I can give you his phone number so you can call him.

Bridgers and myself and others are confident, Eric, that APM technologies can be had sooner than many realize, if the right mechanisms are implemented.

Eric Drexler April 3, 2014 at 3:40 pm UTC

Carbonoid —
You might want to ask your professional chemist to explain why machines cannot position molecules (as has already been demonstrated), or why those machines cannot be small (as design exercises and computational modeling indicate that they can), or why small machines cannot be made to execute a range of controllable motions (as large machines can), or why guiding the motions of reactive molecules with high accuracy (<0.03 nm rms) cannot provide a powerful way to direct chemical reactions (see: “transition state theory”).
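
As a rough indication of why that level of positional accuracy matters (a back-of-envelope sketch, assuming a harmonic constraint and hence a Gaussian thermal positional spread; the 0.15 nm “wrong site” offset is an assumed illustrative value):

```python
# Relative Boltzmann weight for a thermal excursion reaching a wrong reaction
# site, given a harmonic constraint with rms positional spread sigma. The
# 0.15 nm offset (roughly an adjacent atomic site) is an assumed example.
import math

def misplacement_weight(sigma_nm, offset_nm):
    """Gaussian weight exp(-d^2 / (2 sigma^2)) at displacement d for rms spread sigma."""
    return math.exp(-(offset_nm / sigma_nm) ** 2 / 2.0)

for sigma in (0.05, 0.03, 0.02):
    w = misplacement_weight(sigma, 0.15)
    print(f"rms spread {sigma:.2f} nm  ->  relative weight at 0.15 nm ~ {w:.1e}")
```

At 0.03 nm rms the weight at 0.15 nm is already suppressed by a factor of a few hundred thousand, and it falls steeply with stiffer constraints.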

Or, you might ask your chemist to describe any other concrete problem that hasn’t already been studied and analyzed in substantial and quantitative depth. No one else has managed to do this, and if someone had, they’d be widely quoted in preference to recycling the debunked and off-topic Smalley rubbish. — But hey, go for it! Thousands of would-be critics would be grateful for a substantive argument that supports their views.

(And having a look at the National Academies report might help to clarify the state of evidence.)

Eric Drexler April 3, 2014 at 3:42 pm UTC

BTW, re. the “Nodal-Connector-Architecture”, what I see makes no practical sense in terms of accessible molecular technologies.

What does make sense is to harness the ongoing progress in macromolecular design and synthesis and apply it in novel ways.

Carbonoid April 5, 2014 at 12:38 am UTC

Thank you very much, Eric. Regarding the Nodal system, what are some novel ways in which you could see this being used, based on what you know of molecular engineering, and at larger scales, in the micron and mesoscopic realm?

One major concept we were considering is collapsible structures that can open up/expand and lock into place. In Unbounding the Future you mentioned very interesting foldable tents for emergency disaster shelters; those would be very useful.

Another possibility: Molecular Stencils?

Regarding a route to molecular mechanosynthesis, would you consider the engineering of viruses to be viable, Eric?
Angela Belcher and team have assembled batteries and molecular nanowires with these:

https://student.societyforscience.org/article/batteries-built-viruses

Could adding non-biological materials such as fullerenes and other structures enable a sort of “virus-bootstrapped assembler”?

Eric Drexler April 5, 2014 at 11:59 am UTC

Carbonoid —
Biopolymers (DNA, polypeptides) have been developed into engineering materials, and although viral capsids may provide useful models for a few classes of self-assembled system, the structures themselves (evolved for non-technological purposes) seem to have, at best, narrow applications as atomically precise components. Note that Angela Belcher’s products are not atomically precise.

De novo design works. As an analogy, human technology has used a biopolymer, cellulose, for a wide range of structures (products made of wood and paper), but tree trunks, bark, and twigs have found only limited use in their natural forms. By contrast, wood itself, used as a material, has been used to build products as intricate as pocket watches, including their internal mechanisms.

Mark Gubrud June 26, 2014 at 7:07 pm UTC

Norton references Bennett’s early model of diffusive dissipationless computing, in which each step is in thermodynamic equilibrium with its predecessor. In this model there is indeed equal probability of the computation proceeding forward or backward, but Norton’s claim that this means that “If the system is in one stage l at some moment, it is equally likely to be found at the next moment in any other stage” is nonsense. Fluctuations make the process move forward and backward, but not jump all over the place.

In Bennett’s model, the computer starts in a set-up, well-defined initial state, and randomly diffuses down its computational path and back. Eventually the computer wanders into a penultimate state from which a dissipative step is accessible, into which it will fall irreversibly.

The final state after the dissipative step contains the desired result. The reason for dissipation is to keep the computer in the result state, and the amount of dissipation required to keep it there for some length of time will be independent of the number of steps it took to get there. However, you will need to have waited an arbitrarily long time that scales as roughly the square of the number of steps of computation, which may itself be a steep function of problem size, and is also dependent on the kinetics of each step.
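
The quadratic scaling is just the first-passage time of an unbiased random walk. A minimal simulation (an illustrative sketch, not from Bennett’s or Norton’s papers; the stage counts are arbitrary):

```python
# Bennett-style diffusive computation as an unbiased random walk over the
# computational stages: reflecting at the initial stage 0, finished when the
# walk first reaches stage N. The mean number of fluctuation-driven steps
# grows roughly as N**2.
import random

def steps_to_finish(n_stages, trials=200):
    """Average number of random forward/backward steps before first reaching the end."""
    total = 0
    for _ in range(trials):
        pos, t = 0, 0
        while pos < n_stages:
            pos = max(pos + random.choice((-1, 1)), 0)   # cannot back up past the initial state
            t += 1
        total += t
    return total / trials

for n in (10, 20, 40):
    print(f"{n:3d} computational steps  ->  ~{steps_to_finish(n):7.0f} fluctuation steps on average")
```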

If you plan on copying the result during the unknown window of time while the computer is in the result state, some even more dissipative device will have to have been watching, somehow, ready to copy the results; I’m not sure if this has been fully explored.

Other models of reversible computation are ballistic: the computation proceeds straight through to completion in dissipationless, necessarily logically and physically reversible steps, guided by mechanical constraints (i.e., a potential).

Quantum computation would of necessity be logically and dynamically reversible, so if Norton were right, a lot of physicists would be very upset. However, Norton just uses arguments from thermo and does not discuss mechanics.

When you invoke a time-varying potential, you have to be careful that you are not supplying and dissipating energy. However, Landauer’s analysis shows that this can be physically (dynamically and thermodynamically) reversible if it is also logically reversible; otherwise it costs a minimum of kT ln 2 per lost bit.
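
For scale, the Landauer floor at room temperature (a quick worked number, nothing more):

```python
# kT ln 2 at room temperature: the minimum dissipation per irreversibly erased
# bit. Logically reversible steps erase no bits, so no such floor applies to them.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K

def landauer_limit(temp_k=300.0):
    """Minimum energy dissipated per erased bit at temperature temp_k."""
    return K_B * temp_k * math.log(2.0)

e_bit = landauer_limit()
print(f"kT ln 2 at 300 K ~ {e_bit:.2e} J  (~{e_bit / 1.602e-19 * 1000:.0f} meV) per erased bit")
```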

