Graphene Nanotechnology
(and TEAM Microscopes)

by Eric Drexler on 2009/04/02

Graphene edge imaged with a TEAM microscope


“Graphene at the Edge: Stability and Dynamics,”
Ç. Ö. Girit et al., Science 323:1705–1708 (2009)

I’ve been intending to write about the wonders of graphene and related materials for nanotechnology, both as products and as a basis for building productive nanosystems, but there is so much to say that I didn’t know where to begin. As Rosa reminds me, though, a great virtue of a blog is that you can use a current event as an excuse for starting in the middle, and not worry so much about the order of presentation.

What prompts this post is the current cover of Science, which shows an atomic-resolution image of a single sheet of graphene, as seen by a new-generation transmission electron aberration-corrected microscope (TEAM). (The upper right corner of this page, by the way, shows gold atoms imaged with a TEAM microscope.) The paper, from the remarkable Zettl group at UC Berkeley and the Lawrence Berkeley National Laboratory, reveals the complex dynamics of atoms at the edge of a graphene sheet, driven by energy from the imaging electron beam itself.

First, a few words about the TEAM microscope.

TEAM meets a Feynman challenge

The TEAM microscope, developed at LBNL, meets a challenge that Richard Feynman proposed in his famous 1959 talk, “There’s Plenty of Room at the Bottom”:

The reason the electron microscope is so poor is that the f-value of the lenses is only 1 part to 1,000; you don’t have a big enough numerical aperture. And I know that there are theorems which prove that it is impossible, with axially symmetrical stationary field lenses, to produce an f-value any bigger than so and so; and therefore the resolving power at the present time is at its theoretical maximum. But in every theorem there are assumptions. Why must the field be symmetrical? I put this out as a challenge: Is there no way to make the electron microscope more powerful?

The TEAM approach does make the electron microscope more powerful, and it does so precisely by exploiting asymmetric fields. Why has it taken so long to achieve this? One difficulty is that the asymmetric (quadrupole, hexapole, and octupole) correctors must do more than just compensate for the spherical aberration introduced by the symmetric lenses: taken together, they must also correct for the asymmetric distortions that they themselves cause. Abandoning symmetry along the axis of the instrument, yet achieving sub-Ångstrom resolution at the end, strikes me as being inherently very, very difficult. I’m uncommonly impressed by the achievement.

(Reading further, I find that requirements have included stabilizing electromagnetic fields to an accuracy of about 100 parts per billion and developing means for extremely precise alignment of the many multipole elements. The TEAM design simultaneously corrects for the chromatic aberration which results from a spread in electron energies.)
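To see why correcting spherical aberration matters so much, a back-of-the-envelope estimate is instructive. The sketch below uses the standard Scherzer point-resolution formula, d ≈ 0.66 (Cs λ³)^(1/4); the Cs values are illustrative round numbers, not the TEAM instrument’s actual specifications:

```python
import math

def electron_wavelength_pm(kv):
    """Relativistic electron wavelength in picometers at accelerating voltage kv (kilovolts)."""
    e_kev = kv       # electron kinetic energy, keV
    mc2 = 511.0      # electron rest energy, keV
    hc = 1239.84     # Planck constant times c, keV*pm
    return hc / math.sqrt(e_kev * (e_kev + 2 * mc2))

def scherzer_resolution_nm(cs_mm, kv):
    """Scherzer point resolution d = 0.66 * (Cs * lambda^3)^(1/4), in nm."""
    lam_m = electron_wavelength_pm(kv) * 1e-12
    cs_m = cs_mm * 1e-3
    return 0.66 * (cs_m * lam_m**3) ** 0.25 * 1e9

# Uncorrected microscope: Cs on the order of 1 mm at 300 kV
print(scherzer_resolution_nm(1.0, 300))    # roughly 0.2 nm

# With Cs cut by a factor of ~200 by multipole correctors
print(scherzer_resolution_nm(0.005, 300))  # roughly 0.05 nm, i.e. sub-Angstrom
```

Note that resolution improves only as the fourth root of Cs, which is why the correction has to be so aggressive to reach the sub-Ångstrom regime.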

Graphene for Nanotechnology, and vice versa

Here are a few observations, some of which I hope to expand into fuller discussions later:

  • Interest in graphene is exploding because of its unique physical properties and their potential application to nanoelectronic systems, including high-speed digital devices. This potential can be expected to drive forward a wide range of technologies for synthesizing, studying, modifying, and using graphene structures. Atomically precise shapes can be important here, because the electronic properties of narrow graphene structures depend sensitively on their width and edge geometry.
  • Carbon nanotubes can be viewed as graphene cylinders, and many of their virtues are also virtues of graphene. For electronic device applications, however, graphene has added promise because graphene devices can merge smoothly into networks of graphene conducting paths, all formed by lithography. Nanotubes, by contrast, must be put in place (not carved from a sheet), and they form less-than-ideal electrical contacts with metal conductors (e.g., Schottky barriers).
  • Experiments with multiwall carbon nanotubes, pioneered by the Zettl group, confirm the extraordinarily low sliding friction that had been predicted for analogous nanomechanical bearings (by me, for example). These results, of course, refute the lab-bench legend that attractive intersurface forces imply so-called “sticktion” (stickiness + friction), a phenomenon sometimes thought to provide the missing obstacle to the eventual implementation of high-performance nanomechanical systems.
  • Graphene, nanotubes, and more complex, curved structures have the potential to serve not only as bearings, but as frameworks and moving parts for intricate mechanical and electromechanical nanosystems.
  • Precisely structured graphene sheets with hundreds of atoms have been synthesized by chemical means, and controlled chemical synthesis could potentially make useful components for self-assembled composite nanosystems.
  • Intricate, atomically precise graphene structures are of interest for the reasons suggested above, and the synthetic techniques just mentioned suggest that they could be made by solution-phase processes guided by mechanosynthetic means (a possible middle-generation technology).
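As a concrete illustration of the nanotubes-as-rolled-graphene picture in the second point above, the standard (n, m) chirality indices of a nanotube determine both its diameter and whether it conducts like a metal or a semiconductor. This is textbook tight-binding theory, not anything specific to the work discussed here:

```python
import math

A_LATTICE_NM = 0.246  # graphene lattice constant, nm

def diameter_nm(n, m):
    """Diameter of an (n, m) nanotube, viewed as a rolled-up graphene sheet."""
    return A_LATTICE_NM * math.sqrt(n*n + n*m + m*m) / math.pi

def is_metallic(n, m):
    """A tube is metallic (to first order) exactly when n - m is divisible by 3."""
    return (n - m) % 3 == 0

for n, m in [(10, 10), (10, 0), (12, 8)]:
    kind = "metallic" if is_metallic(n, m) else "semiconducting"
    print(f"({n},{m}): d = {diameter_nm(n, m):.2f} nm, {kind}")
```

The (10,10) armchair tube, for instance, comes out metallic with a diameter near 1.36 nm, while a (10,0) zigzag tube of about 0.78 nm is semiconducting.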

This is a lot to discuss.


Vasilii Artyukhov April 2, 2009 at 12:37 pm UTC

Just a note related to observation #1:

An important factor in the graphene boom is the fact that graphene has arrived after a decade and a half of studies of carbon nanotubes. Since the basic physics of CNTs and graphene are more or less the same, when experimental fabrication of graphene was first reported as such (while, e.g., the STM people have always been dealing a lot with graphene, they just used to throw the stuff away as junk), most of the theory and experimental setups were already there, ready to be applied to studies of graphene. Meaning, among other things, that there’s been a lot of obvious ‘low-hanging fruit’ to study and publish.

Michael G.R. April 2, 2009 at 5:02 pm UTC

A little while ago I read Nick Bostrom’s Whole Brain Emulation Roadmap (didn’t understand half of it, but it was still fascinating):

http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf

One of the biggest challenges was developing destructive scanning techniques powerful enough to image a brain with enough detail to be useful. I’m not sure if you know much about these types of scanning Eric, but if you do, do you think the TEAM microscope could be useful for that type of work?

Eric Drexler April 2, 2009 at 8:09 pm UTC

@ Vasilii Artyukhov — Yes, the CNT work has done a lot to prepare the community for graphene, and not just because of the knowledge and instrumentation itself, but also because of the associated community momentum.

Eric Drexler April 2, 2009 at 8:36 pm UTC

@ Michael G.R. — I’ve been surprised by the amount of research already being done on computational modeling of neural systems, and by the degree of reported success. Microscopy has, of course, been crucial in finding out what to model.

The TEAM microscope, though, provides far more resolution than is necessary (or even desirable) for tissue studies. The problems with 3D neural mapping, as I understand them, are chiefly those of sample preparation, high-throughput imaging, and software for image processing, alignment, and inference of 3D structures.

I would expect the appropriate resolution to be in the nanometer range, which makes the volume elements imaged larger by a factor of more than 10³. Where atomic resolution would be appropriate (for example, in determining protein structures), it turns out to be unavailable: the electron beam destroys the sample before enough data can be collected to make a useful inference about atomic (or even near-atomic) structure. Successful imaging is limited to larger volume elements, or to robust, non-biological materials. (Feynman had hoped for a larger contribution to biology.)
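The factor-of-10³ point is just voxel geometry, and illustrative numbers make the data-volume stakes clear. The 1 nm and 0.1 nm figures below are nominal resolutions chosen for the sake of the comparison, not the specifications of any instrument:

```python
# Volume elements scale as the cube of the linear resolution.
nm_res = 1.0      # nominal resolution for neural mapping, nm
atomic_res = 0.1  # roughly atomic resolution, nm

volume_ratio = (nm_res / atomic_res) ** 3
print(volume_ratio)  # 1000.0 -- each 1 nm voxel holds 10^3 atomic-scale voxels

# Voxels needed to image one cubic millimeter of tissue at 1 nm resolution:
voxels_per_mm3 = (1e6 / nm_res) ** 3  # 1 mm = 1e6 nm
print(f"{voxels_per_mm3:.0e}")  # 1e+18
```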
—————
Update: I just came across a link to a new and very relevant article in PLoS Computational Biology: “A Proposal for a Coordinated Effort for the Determination of Brainwide Neuroanatomical Connectivity in Model Organisms at a Mesoscopic Scale”. They propose to develop mesoscopic maps because “for complex vertebrate brains it is not currently technologically feasible to determine brainwide connectivity at the level of individual synapses”.

the oakster1 April 3, 2009 at 8:09 pm UTC

The group that did this keeps talking about super-stable electronics; I just have to ask if this could be ‘chaotic dynamics control’ at work?

jim moore April 4, 2009 at 4:26 pm UTC

Other potentially very useful properties of graphene for engineered nano-systems:
- Single sheets of graphene are excellent gas barriers, even to hydrogen and helium.

- Excellent 2-dimensional heat conductor. (Think about it for a while.)

- By changing the shape and size of a sheet of graphene, you change how it interacts with the EM spectrum.

- Conceptually simple way to make a very wide variety of 3-D shapes: Cut – Stack – (Mechanically) Lock in Place standard sheets of graphene [or Stamp – Stack – (Chemically) Stick standard sheets of graphene together.]

jim moore April 15, 2009 at 8:40 pm UTC

There have been recent developments in making graphene ribbons by splitting carbon nanotubes.
http://www.newscientist.com/article/dn16955-nanotubes-unzip-to-offer-computing-route-beyond-silicon.html

Jeffrey Soreff April 22, 2009 at 1:58 am UTC

On the subject of microscopy, coherent x-ray imaging also looks promising. The site says

The full transverse coherence of the LCLS laser will allow single particles to be imaged at high resolution while the short pulse duration will limit radiation damage during the measurement. The instrument will allow imaging of biological samples beyond the damage limit that cannot be overcome with synchrotron sources.

Admittedly, I’m not sure, from the description of the source, whether they can use the transverse coherence of their x-ray source to get a hologram of a single molecule.

adi August 7, 2010 at 11:31 pm UTC

I want a book about graphene physics or fabrication.

