World-Watching: How Nature Paints With Color

[from Quanta Magazine]

by Yasemin Saplakoglu

When objects interact with light in particular ways — by absorbing or reflecting it — we see in color. A sunset’s orange hues and the ocean’s deep blues inspire artists and dazzle observant admirers. But colors are more than pretty decor; they also play a critical role in life. They attract mates, pollinators and seed-spreaders, and signal danger. And the same color can mean different things to different organisms: A red bird might attract a mate, while a red berry might warn off a hungry human.

For color to communicate meaning, systems to produce it had to evolve, by developing pigments to absorb certain wavelengths of light or structures to reflect them. Organisms also had to produce the machinery to perceive color. When you look out into a forest, you might see lush greenery dappled with yellowish sunlight and pink blooms. But this forest scene would look different if you were a bird or a fly. Color-perception machinery — which includes photoreceptors in our eyes that detect and distinguish wavelengths of light — can differ between species. While humans can’t see ultraviolet light, some birds can. While dogs can’t see red or green, many humans can. Even within species there’s some variation: People who are colorblind have trouble distinguishing some color combinations, such as green and red. And many organisms can’t see color at all.

Within one planet, many colorful worlds exist. But how did colors evolve in the first place?

What’s New and Noteworthy

To pinpoint when different kinds of color signals may have evolved, researchers recently reviewed many papers, covering hundreds of millions of years of evolutionary history, to bring together information from the fossil record and phylogenetic trees (diagrams that depict evolutionary relationships between species). Their analysis across the tree of life suggested that color signals likely evolved much later than color vision. Color vision itself likely evolved twice, developing independently in arthropods and fish between 400 million and 500 million years ago. Plants then began using bright colors to attract pollinators and seed-dispersing animals, and animals later used colors to warn off predators and, eventually, to attract mates.

One of the most common colors that we see in nature is green. However, this isn’t a color signal: It’s a byproduct of photosynthesis. Most plants absorb almost all the photons in the red and blue parts of the light spectrum but only about 90% of the green photons. The remaining 10% are reflected, making the plants appear green to our eyes. But why did they evolve to do this? According to one model, this arrangement makes the energy output of the photosynthetic machinery more stable, suggesting that sometimes evolution favors stability over efficiency.

The majority of colors in nature are produced by pigments, molecules that absorb some wavelengths of light and reflect others. While many plants can produce these pigments on their own, most animals can’t; instead, they acquire pigments from their diet. Some pigments, though, are hard to acquire, so some animals instead rely on nanoscale structures that scatter light in particular ways to create “structural colors.” For example, the shell of the blue-rayed limpet has layers of transparent crystals, each of which diffracts and reflects a sliver of the light spectrum. When the layers grow to a precise thickness, around 100 nanometers, light waves reflected from the different layers interfere with one another: Most wavelengths cancel out, while blue wavelengths reinforce each other. The result is the appearance of a bright blue limpet shell.
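The link between layer thickness and reflected color can be sketched with the standard quarter-wave interference condition for a thin layer, 2nd = (m + ½)λ. This is a minimal illustration, not the limpet’s actual optics; the refractive index below is an assumed, illustrative value, not one from the article:

```python
def peak_reflected_wavelength_nm(thickness_nm, refractive_index, order=0):
    """Wavelength most strongly reflected by a thin layer, from the
    quarter-wave interference condition 2 * n * d = (order + 1/2) * wavelength."""
    return 2 * refractive_index * thickness_nm / (order + 0.5)

# Hypothetical values: a 100 nm layer with an assumed refractive index of 1.2
# reflects most strongly near 480 nm, in the blue part of the visible spectrum.
print(peak_reflected_wavelength_nm(100, 1.2))  # 480.0
```

Thicker layers or higher refractive indices shift the reflected peak toward longer (redder) wavelengths, which is why the thickness must be so precise to yield blue.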

New Ultrathin Capacitor Could Enable Energy-Efficient Microchips

Scientists turn century-old material into a thin film for next-gen memory and logic devices

[from Berkeley Lab, by Rachel Berkowitz]

Electron microscope images show the precise atom-by-atom structure of a barium titanate (BaTiO3) thin film sandwiched between layers of strontium ruthenate (SrRuO3) metal to make a tiny capacitor. (Credit: Lane Martin/Berkeley Lab)

The silicon-based computer chips that power our modern devices require vast amounts of energy to operate. Despite ever-improving computing efficiency, information technology is projected to consume around 25% of all primary energy produced by 2030. Researchers in the microelectronics and materials sciences communities are seeking ways to sustainably manage the global need for computing power.

The holy grail for reducing this digital demand is to develop microelectronics that operate at much lower voltages and therefore require less energy. That is a primary goal of efforts to move beyond today’s state-of-the-art CMOS (complementary metal-oxide-semiconductor) devices.

Non-silicon materials with enticing properties for memory and logic devices exist, but in their common bulk form they still require large voltages to manipulate, making them incompatible with modern electronics. Designing thin-film alternatives that not only perform well at low operating voltages but can also be packed into microelectronic devices remains a challenge.

Now, a team of researchers at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley has identified one energy-efficient route—by synthesizing a thin-layer version of a well-known material whose properties are exactly what’s needed for next-generation devices.

First discovered more than 80 years ago, barium titanate (BaTiO3) found use in various capacitors for electronic circuits, ultrasonic generators, transducers, and even sonar.

Crystals of the material respond quickly to a small electric field, flipping the orientation of the charged atoms that make up the material in a way that persists after the applied field is removed yet can be reversed by another field. This provides a way to switch between the proverbial “0” and “1” states in logic and memory storage devices—but in bulk crystals, doing so still requires voltages larger than 1,000 millivolts (mV).

Seeking to harness these properties for use in microchips, the Berkeley Lab-led team developed a pathway for creating films of BaTiO3 just 25 nanometers thin—less than a thousandth of a human hair’s width—whose orientation of charged atoms, or polarization, switches as quickly and efficiently as in the bulk version.

“We’ve known about BaTiO3 for the better part of a century and we’ve known how to make thin films of this material for over 40 years. But until now, nobody could make a film that could get close to the structure or performance that could be achieved in bulk,” said Lane Martin, a faculty scientist in the Materials Sciences Division (MSD) at Berkeley Lab and professor of materials science and engineering at UC Berkeley who led the work.

Historically, synthesis attempts have resulted in films that contain higher concentrations of “defects”—points where the structure differs from an idealized version of the material—as compared to bulk versions. Such a high concentration of defects negatively impacts the performance of thin films. Martin and colleagues developed an approach to growing the films that limits those defects. The findings were published in the journal Nature Materials.

To understand what it takes to produce the best, low-defect BaTiO3 thin films, the researchers turned to a process called pulsed-laser deposition. Firing a powerful beam of ultraviolet laser light at a ceramic target of BaTiO3 transforms the target material into a plasma, which then deposits atoms from the target onto a surface to grow the film. “It’s a versatile tool where we can tweak a lot of knobs in the film’s growth and see which are most important for controlling the properties,” said Martin.

Martin and his colleagues showed that their method could achieve precise control over the deposited film’s structure, chemistry, thickness, and interfaces with metal electrodes. By chopping each deposited sample in half and looking at its structure atom by atom using tools at the National Center for Electron Microscopy at Berkeley Lab’s Molecular Foundry, the researchers revealed a version that precisely mimicked an extremely thin slice of the bulk.

“It’s fun to think that we can take these classic materials that we thought we knew everything about, and flip them on their head with new approaches to making and characterizing them,” said Martin.

Finally, by placing a film of BaTiO3 in between two metal layers, Martin and his team created tiny capacitors—the electronic components that rapidly store and release energy in a circuit. Applying voltages of 100 mV or less and measuring the current that emerges showed that the film’s polarization switched within two billionths of a second and could potentially be faster—competitive with what it takes for today’s computers to access memory or perform calculations.

The work follows the bigger goal of creating materials with small switching voltages, and examining how interfaces with the metal components necessary for devices impact such materials. “This is a good early victory in our pursuit of low-power electronics that go beyond what is possible with silicon-based electronics today,” said Martin.

“Unlike our new devices, the capacitors used in chips today don’t hold their data unless you keep applying a voltage,” said Martin. And current technologies generally work at 500 to 600 mV, while a thin film version could work at 50 to 100 mV or less. Together, these measurements demonstrate a successful optimization of voltage and polarization robustness—which tend to be a trade-off, especially in thin materials.
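The payoff of those lower voltages is quadratic: the energy stored in (and dissipated while charging) a capacitor scales as E = ½CV². A minimal back-of-the-envelope sketch, using an assumed, illustrative device capacitance rather than a figure from the study:

```python
def switching_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in a capacitor charged to a given voltage: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

C = 1e-15  # 1 femtofarad: an assumed, device-scale capacitance for illustration
e_today = switching_energy_joules(C, 0.600)  # ~600 mV, current technology
e_film  = switching_energy_joules(C, 0.060)  # ~60 mV, thin-film regime
print(e_today / e_film)  # roughly 100: a 10x voltage drop gives ~100x less energy
```

Because the energy falls with the square of the voltage, dropping from the 500–600 mV range to 50–100 mV reduces the per-switch energy by two orders of magnitude, regardless of the exact capacitance.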

Next, the team plans to make the material even thinner, so that it is compatible with real devices in computers, and to study how it behaves at those tiny dimensions. At the same time, they will work with collaborators at companies such as Intel Corp. to test its feasibility in first-generation electronic devices. “If you could make each logic operation in a computer a million times more efficient, think how much energy you save. That’s why we’re doing this,” said Martin.

This research was supported by the U.S. Department of Energy (DOE) Office of Science. The Molecular Foundry is a DOE Office of Science user facility at Berkeley Lab.