Quantum mysteries dissolve if possibilities are realities

When you think about it, it shouldn’t be surprising that there’s more than one way to explain quantum mechanics. Quantum math is notorious for incorporating multiple possibilities for the outcomes of measurements. So you shouldn’t expect physicists to stick to only one explanation for what that math means. And in fact, sometimes it seems like researchers have proposed more “interpretations” of this math than Katy Perry has followers on Twitter.

So it would seem that the world needs more quantum interpretations like it needs more Category 5 hurricanes. But until some single interpretation comes along that makes everybody happy (and that’s about as likely as the Cleveland Browns winning the Super Bowl), yet more interpretations will emerge. One of the latest appeared recently (September 13) online at arXiv.org, the site where physicists send their papers to ripen before actual publication. You might say papers on the arXiv are like “potential publications,” which someday might become “actual” if a journal prints them.

And that, in a nutshell, is pretty much the same as the logic underlying the new interpretation of quantum physics. In the new paper, three scientists argue that including “potential” things on the list of “real” things can avoid the counterintuitive conundrums that quantum physics poses. It is perhaps less of a full-blown interpretation than a new philosophical framework for contemplating those quantum mysteries. At its root, the new idea holds that the common conception of “reality” is too limited. By expanding the definition of reality, the quantum’s mysteries disappear. In particular, “real” should not be restricted to “actual” objects or events in spacetime. Reality ought also to be assigned to certain possibilities, or “potential” realities, that have not yet become “actual.” These potential realities do not exist in spacetime, but nevertheless are “ontological” — that is, real components of existence.

“This new ontological picture requires that we expand our concept of ‘what is real’ to include an extraspatiotemporal domain of quantum possibility,” write Ruth Kastner, Stuart Kauffman and Michael Epperson.

Considering potential things to be real is not exactly a new idea, as it was a central aspect of the philosophy of Aristotle, 24 centuries ago. An acorn has the potential to become a tree; a tree has the potential to become a wooden table. Even applying this idea to quantum physics isn’t new. Werner Heisenberg, the quantum pioneer famous for his uncertainty principle, considered his quantum math to describe potential outcomes of measurements of which one would become the actual result. The quantum concept of a “probability wave,” describing the likelihood of different possible outcomes of a measurement, was a quantitative version of Aristotle’s potential, Heisenberg wrote in his well-known 1958 book Physics and Philosophy. “It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.”

In their paper, titled “Taking Heisenberg’s Potentia Seriously,” Kastner and colleagues elaborate on this idea, drawing a parallel to the philosophy of René Descartes. Descartes, in the 17th century, proposed a strict division between material and mental “substance.” Material stuff (res extensa, or extended things) existed entirely independently of mental reality (res cogitans, things that think) except in the brain’s pineal gland. There res cogitans could influence the body. Modern science has, of course, rejected res cogitans: The material world is all that reality requires. Mental activity is the outcome of material processes, such as electrical impulses and biochemical interactions.

Kastner and colleagues also reject Descartes’ res cogitans. But they think reality should not be restricted to res extensa; rather it should be complemented by “res potentia” — in particular, quantum res potentia, not just any old list of possibilities. Quantum potentia can be quantitatively defined; a quantum measurement will, with certainty, always produce one of the possibilities it describes. In the large-scale world, all sorts of possibilities can be imagined (Browns win Super Bowl, Indians win 22 straight games) which may or may not ever come to pass.

If quantum potentia are in some sense real, Kastner and colleagues say, then the mysterious weirdness of quantum mechanics becomes instantly explicable. You just have to realize that changes in actual things reset the list of potential things.

Consider for instance that you and I agree to meet for lunch next Tuesday at the Mad Hatter restaurant (Kastner and colleagues use the example of a coffee shop, but I don’t like coffee). But then on Monday, a tornado blasts the Mad Hatter to Wonderland. Meeting there is no longer on the list of res potentia; it’s no longer possible for lunch there to become an actuality. In other words, even though an actuality can’t alter a distant actuality, it can change distant potential. We could have been a thousand miles away, yet the tornado changed our possibilities for places to eat.

It’s an example of how the list of potentia can change without the spooky action at a distance that Einstein alleged about quantum entanglement. Measurements on entangled particles, such as two photons, seem baffling. You can set up an experiment so that before a measurement is made, either photon could be spinning clockwise or counterclockwise. Once one is measured, though (and found to be, say, clockwise), you know the other will have the opposite spin (counterclockwise), no matter how far away it is. But no secret signal is (or could possibly be) sent from one photon to the other after the first measurement. It’s simply the case that counterclockwise is no longer on the list of res potentia for the second photon. An “actuality” (the first measurement) changes the list of potentia that still exist in the universe. Potentia encompass the list of things that may become actual; what becomes actual then changes what’s on the list of potentia.
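The bookkeeping in that example can be sketched in a few lines of toy code (a deliberately naive illustration of the anticorrelation described above, not a model of real quantum dynamics; the function name is ours):

```python
import random

def measure_entangled_pair():
    # Before measurement, both outcomes are on each photon's list of potentia.
    first = random.choice(["clockwise", "counterclockwise"])
    # The first measurement removes one possibility from the second
    # photon's list of potentia; no signal travels between the photons.
    second = "counterclockwise" if first == "clockwise" else "clockwise"
    return first, second

# However far apart the photons are, the outcomes are always opposite.
for _ in range(5):
    a, b = measure_entangled_pair()
    assert a != b
```

Real entangled photons show measurement correlations that no such simple local recipe fully reproduces; the sketch captures only the updating of the list of possibilities, which is the point of the res potentia picture.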

Similar arguments apply to other quantum mysteries. Observation of a “pure” quantum state, containing many possibilities, turns one of those possibilities into an actual one. And the new actual event constrains the list of future possibilities, without any need for physical causation. “We simply allow that actual events can instantaneously and acausally affect what is next possible … which, in turn, influences what can next become actual, and so on,” Kastner and colleagues write.

Measurement, they say, is simply a real physical process that transforms quantum potentia into elements of res extensa — actual, real stuff in the ordinary sense. Space and time, or spacetime, is something that “emerges from a quantum substratum,” as actual stuff crystallizes out “of a more fluid domain of possibles.” Spacetime, therefore, is not all there is to reality.

It’s unlikely that physicists everywhere will instantly cease debating quantum mysteries and start driving cars with “res potentia!” bumper stickers. But whether this new proposal triumphs in the quantum debates or not, it raises a key point in the scientific quest to understand reality. Reality is not necessarily what humans think it is or would like it to be. Many quantum interpretations have been motivated by a desire to return to Newtonian determinism, for instance, where cause and effect is mechanical and predictable, like a clock’s tick preceding each tock.

But the universe is not required to conform to Newtonian nostalgia. And more generally, scientists often presume that the phenomena nature offers to human senses reflect all there is to reality. “It is difficult for us to imagine or conceptualize any other categories of reality beyond the level of actual — i.e., what is immediately available to us in perceptual terms,” Kastner and colleagues note. Yet quantum physics hints at a deeper foundation underlying the reality of phenomena — in other words, that “ontology” encompasses more than just events and objects in spacetime.
This proposition sounds a little bit like advocating for the existence of ghosts. But it is actually more of an acknowledgment that things may seem ghostlike only because reality has been improperly conceived in the first place. Kastner and colleagues point out that the motions of the planets in the sky baffled ancient philosophers because supposedly in the heavens, reality permitted only uniform circular motion (accomplished by attachment to huge crystalline spheres). Expanding the boundaries of reality allowed those motions to be explained naturally.

Similarly, restricting reality to events in spacetime may turn out to be like restricting the heavens to rotating spheres. Spacetime itself, many physicists are convinced, is not a primary element of reality but a structure that emerges from processes more fundamental. Because these processes appear to be quantum in nature, it makes sense to suspect that something more than just spacetime events has a role to play in explaining quantum physics.

True, it’s hard to imagine the “reality” of something that doesn’t exist “actually” as an object or event in spacetime. But Kastner and colleagues cite the warning issued by the late philosopher Ernan McMullin, who pointed out that “imaginability must not be made the test for ontology.” Science attempts to discover the real world’s structures; it’s unwarranted, McMullin said, to require that those structures be “imaginable in the categories” known from large-scale ordinary experience. Sometimes things not imaginable do, after all, turn out to be real. No fan of the team ever imagined the Indians would win 22 games in a row.

Watch NASA’s mesmerizing new visualization of the 2017 hurricane season

How do you observe the invisible currents of the atmosphere? By studying the swirling, billowing loads of sand, sea salt and smoke that winds carry. A new simulation created by scientists at NASA’s Goddard Space Flight Center in Greenbelt, Md., reveals just how far around the globe such aerosol particles can fly on the wind.

The complex new simulation, powered by supercomputers, uses advanced physics and a state-of-the-art climate algorithm known as FV3 to represent in high resolution the physical interactions of aerosols with storms or other weather patterns on a global scale (SN Online: 9/21/17). Using data collected from NASA’s Earth-observing satellites, the simulation tracked how air currents swept aerosols around the planet from August 1, 2017, through November 1, 2017.
In the animation, sea salt (in blue) snagged by winds sweeping across the ocean’s surface becomes entrained in hurricanes Harvey, Irma, Jose and Maria, revealing their deadly paths. Wisps of smoke (in gray) from fires in the U.S. Pacific Northwest drift toward the eastern United States, while Saharan dust (in brown) billows westward across the Atlantic Ocean to the Gulf of Mexico. And the visualization shows how Hurricane Ophelia formed off the coast of Africa, pulling in both Saharan dust and smoke from Portugal’s wildfires and transporting the particles to Ireland and the United Kingdom.

Warming ocean water is turning 99 percent of these sea turtles female

Warming waters are turning some sea turtle populations female — to the extreme. More than 99 percent of young green turtles born on beaches along the northern Great Barrier Reef are female, researchers report January 8 in Current Biology. If that sex imbalance continues, the overall population could shrink.

Green sea turtle embryos develop as male or female depending on the temperature at which they incubate in sand. Scientists have known that warming ocean waters are skewing sea turtle populations toward having more females, but quantifying the imbalance has been hard.
Researchers analyzed hormone levels in turtles collected on the Great Barrier Reef (off the northeastern coast of Australia) to determine their sex, and then used genetic data to link individuals to the beaches where the animals originated. That two-pronged approach allowed the scientists to estimate the ratio of males to females born at different sites.

The sex ratio in the overall population is “nothing out of the ordinary,” with roughly one juvenile male for every four juvenile females, says study coauthor Michael Jensen, a marine biologist with the National Oceanic and Atmospheric Administration in La Jolla, Calif. But breaking the data down by the turtles’ region of origin revealed worrisome results. In the cooler southern Great Barrier Reef, 67 percent of hatched juveniles were female. But more than 99 percent of young turtles hatched in sand soaked by warmer waters in the northern Great Barrier Reef were female — with one male for every 116 females. That imbalance has increased over time: 86 percent of the adults born in the area more than 20 years ago were female.
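The percentages quoted can be checked with simple arithmetic on the reported ratios:

```python
# Northern Great Barrier Reef: one male hatchling per 116 females.
males, females = 1, 116
pct_female = 100 * females / (males + females)
print(round(pct_female, 1))  # 99.1, i.e. "more than 99 percent" female

# Overall juvenile population: roughly one male per four females.
print(100 * 4 / (1 + 4))  # 80.0 percent female
```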

It’s unclear what the long-term impact of such a strong skew will be, but it’s probably not good news for the turtles. Sea turtle populations can get by with fewer males than females (SN: 3/4/17, p. 16), but scientists aren’t sure how many is too few. And while turtles can adapt their behavior, such as laying eggs in cooler places, the animals’ instinct is to nest in the same spot they were born, which works against such a change.

Skyrmions open a door to next-level data storage

Like sailors and spelunkers, physicists know the power of a sturdy knot.

Some physicists have tied their hopes for a new generation of data storage to minuscule knotlike structures called skyrmions, which can form in magnetic materials. Incredibly tiny and tough to undo, magnetic skyrmions could help feed humankind’s hunger for ever-smaller electronics.

On traditional hard drives, the magnetic regions that store data are about 10 times as large as the smallest skyrmions. Ranging from a nanometer to hundreds of nanometers in diameter, skyrmions “are probably the smallest magnetic systems … that can be imagined or that can be realized in nature,” says physicist Vincent Cros of Unité Mixte de Physique CNRS/Thales in Palaiseau, France.
What’s more, skyrmions can easily move through a material, pushed along by an electric current. The magnetic knots’ nimble nature suggests that skyrmions storing data in a computer could be shuttled to a sensor that would read off the information as the skyrmions pass by. In contrast, traditional hard drives read and write data by moving a mechanical arm to the appropriate region on a spinning platter (SN: 10/19/13, p. 28). Those moving parts tend to be fragile, and the task slows down data recall. Scientists hope that skyrmions could one day make for more durable, faster, tinier gadgets.

One thing, however, has held skyrmions back: Until recently, they could be created and controlled only in the frigid cold. When solid-state physicist Christian Pfleiderer and colleagues first reported the detection of magnetic skyrmions, in Science in 2009, the knots were impractical to work with, requiring very low temperatures of about 30 kelvins (–243° Celsius). Those are “conditions where you’d say, ‘This is of no use for anybody,’ ” says Pfleiderer of the Technical University of Munich.

Skyrmions have finally come out of the cold, though they are finicky and difficult to control. Now, scientists are on the cusp of working out the kinks to create thawed-out skyrmions with all the desired characteristics. At the same time, researchers are chasing after new kinds of skyrmions, which may be an even better fit for data storage. The skyrmion field, Pfleiderer says, has “started to develop its own life.”
In a magnetic material, such as iron, each atom acts like a tiny bar magnet with its own north and south poles. This magnetization arises from spin, a quantum property of the atom’s electrons. In a ferromagnet, a standard magnet like the one holding up the grocery list on your refrigerator, the atoms’ magnetic poles point in the same direction (SN Online: 5/14/12).

Skyrmions, which dwell within such magnetic habitats, are composed of groups of atoms with their magnetic poles oriented in whorls. Those spirals of magnetization disrupt the otherwise orderly alignment of atoms in the magnet, like a cowlick in freshly combed hair. Within a skyrmion, the direction of the atoms’ poles twists until the magnetization in the center points in the opposite direction of the magnetization outside. That twisting is difficult to undo, like a strong knot (SN Online: 10/31/08). So skyrmions won’t spontaneously disappear — a plus for long-term data storage.

Using knots of various kinds to store information has a long history. Ancient Incas used khipu, a system of knotted cord, to keep records or send messages (SN Online: 5/8/17). In a more modern example, Pfleiderer says, “if you don’t want to forget something then you put a knot in your handkerchief.” Skyrmions could continue that tradition.

On the right track
Skyrmions are a type of “quasiparticle,” a disturbance within a material that behaves like a single particle, despite being a collective of many individual particles. Although skyrmions are made up of atoms, which remain stationary within the material, skyrmions can move around like a true particle, by sliding from one group of atoms to another. “The magnetism just twists around, and thus the skyrmion travels,” says condensed matter physicist Kirsten von Bergmann of the University of Hamburg.

In fact, skyrmions were first proposed in the context of particles. British physicist Tony Skyrme, who lends his name to the knots, suggested about 60 years ago that particles such as neutrons and protons could be thought of as a kind of knot. In the late 1980s, physicists realized the math that supported Skyrme’s idea could also represent knots in the magnetization of solid materials.

Such skyrmions could be used in futuristic data storage schemes, researchers later proposed. A chain of skyrmions could encode bits within a computer, with the presence of a skyrmion representing 1 and the absence representing 0.
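In code, the proposed encoding is just presence-or-absence along the track (a toy sketch with hypothetical names; the physics of writing and reading skyrmions is abstracted away entirely):

```python
def write_track(bits):
    # A site holds a skyrmion for a 1 bit and nothing for a 0 bit.
    return ["skyrmion" if b else "empty" for b in bits]

def read_track(track):
    # The read sensor reports a 1 whenever a skyrmion passes by.
    return [1 if site == "skyrmion" else 0 for site in track]

data = [1, 0, 1, 1, 0]
assert read_track(write_track(data)) == data
```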

In particular, skyrmions might be ideal for what are known as “racetrack” memories, Cros and colleagues proposed in Nature Nanotechnology in 2013. In racetrack devices, information-holding skyrmions would speed along a magnetic nanoribbon, like cars on the Indianapolis Motor Speedway.

Solid-state physicist Stuart Parkin proposed a first version of the racetrack concept years earlier. In a 2008 paper in Science, Parkin and colleagues demonstrated the beginnings of a racetrack memory based not on skyrmions, but on magnetic features called domain walls, which separate regions with different directions of magnetization in a material. Those domain walls could be pushed along the track using electric currents to a sensor that would read out the data encoded within. To maximize the available space, the racetrack could loop straight up and back down (like a wild Mario Kart ride), allowing for 3-D memory that could pack in more data than a flat chip.
“When I first proposed [racetrack memories] many years ago, I think people were very skeptical,” says Parkin, now at the Max Planck Institute of Microstructure Physics in Halle, Germany. Today, the idea — with and without skyrmions — has caught on. Racetrack memories are being tested in laboratories, though the technology is not yet available in computers.

To make such a system work with skyrmions, scientists need to make the knots easier to wrangle at room temperature. For skyrmion-based racetrack memories to compete with current technologies, skyrmions must be small and move quickly and easily through a material. And they should be easy to create and destroy, using something simple like an electric current. Those are lofty demands: A step forward on one requirement sometimes leads to a step backward on the others. But scientists are drawing closer to reining in the magnetic marvels.

Heating up
Those first magnetic skyrmions found by Pfleiderer and colleagues appeared spontaneously in crystals with asymmetric structures that induce a twist between neighboring atoms. Only certain materials have that skyrmion-friendly asymmetric structure, limiting the possibilities for studying the quasiparticles or coaxing them to form under warmer conditions.

Soon, physicists developed a way to artificially create an asymmetric structure by depositing material in thin layers. Interactions between atoms in different layers can induce a twist in the atoms’ orientations. “Now, we can suddenly use ordinary magnetic materials, combine them in a clever way with other materials, and make them work at room temperature,” says materials scientist Axel Hoffmann of Argonne National Laboratory in Illinois.

Scientists produced such thin film skyrmions for the first time in a one-atom-thick layer of iron on top of iridium, but temperatures were still very low. Reported in Nature Physics in 2011, those thin film skyrmions required a chilly 11 kelvins (–262° C). That’s because the thin film of iron loses its magnetic properties above a certain temperature, says von Bergmann, who coauthored the study, along with nanoscientist Roland Wiesendanger of the University of Hamburg and colleagues. But thicker films can stay magnetic at higher temperatures. And so, “one important step was to increase the amount of magnetic material,” von Bergmann says.

To go thicker, scientists began stacking sheets of various magnetic and nonmagnetic materials, like a club sandwich with repeating layers of meat, cheese and bread. Stacking multiple layers of iridium, platinum and cobalt, Cros and colleagues created the first room-temperature skyrmions smaller than 100 nanometers, the researchers reported in May 2016 in Nature Nanotechnology.

By adjusting the types of materials, the number of layers and their thicknesses, scientists can fashion designer skyrmions with desirable properties. When condensed matter physicist Christos Panagopoulos of Nanyang Technological University in Singapore and colleagues fiddled with the composition of layers of iridium, iron, cobalt and platinum, a variety of skyrmions swirled into existence. The resulting knots came in different sizes, and some were more stable than others, the researchers reported in Nature Materials in September 2017.

Although scientists now know how to make room-temperature skyrmions, the heat-tolerant swirls, tens to hundreds of nanometers in diameter, tend to be too big to be very useful. “If we want to compete with current state-of-the-art technology, we have to go for skyrmionic objects [that] are much smaller in size than 100 nanometers,” Wiesendanger says. The aim is to bring warmed-up skyrmions down to a few nanometers.
As some try to shrink room-temp skyrmions down, others are bringing them up to speed, to make for fast reading and writing of data. In a study reported in Nature Materials in 2016, skyrmions at room temperature reached top speeds of 100 meters per second (about 220 miles per hour). Fittingly, that’s right around the fastest speed NASCAR drivers achieve. The result showed that a skyrmion racetrack might actually work, says study coauthor Mathias Kläui, a condensed matter physicist at Johannes Gutenberg University Mainz in Germany. “Fundamentally, it’s feasible at room temperature.” But to compete against domain walls, which can reach speeds of over 700 m/s, skyrmions still need to hit the gas.

Despite progress, there are a few more challenges to work out. One possible issue: A skyrmion’s swirling pattern makes it behave like a rotating object. “When you have a rotating object moving, it may not want to move in a straight line,” Hoffmann says. “If you’re a bad golf player, you know this.” Skyrmions don’t move in the same direction as an electric current, but at an angle to it. On the racetrack, skyrmions might hit a wall instead of staying in their lanes. Now, researchers are seeking new kinds of skyrmions that stay on track.

A new twist
Just as there’s more than one way to tie a knot, there are several different types of skyrmions, formed with various shapes of magnetic twists. The two best known types are Bloch and Néel. Bloch skyrmions are found in the thick, asymmetric crystals in which skyrmions were first detected, and Néel skyrmions tend to show up in thin films.

“The type of skyrmions you get is related to the crystal structure of the materials,” says physical chemist Claudia Felser of the Max Planck Institute for Chemical Physics of Solids in Dresden, Germany. Felser studies Heusler compounds, materials that have unusual properties particularly useful for manipulating magnetism. Felser, Parkin and colleagues detected a new kind of skyrmion, an antiskyrmion, in a thin layer of such a material. They reported the find in August 2017 in Nature.

Antiskyrmions might avoid some of the pitfalls that their relatives face, Parkin says. “Potentially, they can move in straight lines with currents, rather than moving to the side.” Such straight-shooting skyrmions may be better suited for racetrack schemes. And the observed antiskyrmions are stable at a wide range of temperatures, including room temperature. Antiskyrmions also might be able to shrink down smaller than other kinds of skyrmions.

Physicists are now on the hunt for skyrmions within a different realm: antiferromagnetic materials. Unlike in ferromagnetic materials — in which atoms all align their poles — in antiferromagnets, atoms’ poles point in alternating directions. If one atom points up, its neighbor points down. Like antiskyrmions, antiferromagnetic skyrmions wouldn’t zip off at an angle to an electric current, so they should be easier to control. Antiferromagnetic skyrmions might also move faster, Kläui says.

Materials scientists still need to find an antiferromagnetic material with the necessary properties to form skyrmions, Kläui says. “I would expect that this would be realized in the next couple of years.”

Finding the knots’ niche
Once skyrmions behave as desired, creating a racetrack memory with them is an obvious next step. “It is a technology that combines the best of multiple worlds,” Kläui says — stability, easily accessible data and low energy requirements. But Kläui and others acknowledge the hurdles ahead for skyrmion racetrack memories. It will be difficult, these researchers say, to beat traditional magnetic hard drives — not to mention the flash memories available in newer computers — on storage density, speed and cost simultaneously.

“The racetrack idea, I’m skeptical about,” Hoffmann says. Instead, skyrmions might be useful in devices meant for performing calculations. Because only a small electric current is required to move skyrmions around, such devices might be used to create energy-efficient computer processors.

Another idea is to use skyrmions for biologically inspired computers, which attempt to mimic the human brain (SN: 9/6/14, p. 10). Brains consume about as much power as a lightbulb, yet can perform calculations that computers still can’t match, thanks to large interconnected networks of nerve cells. Skyrmions could help scientists achieve this kind of computation in the lab, without sapping much power.
A single skyrmion could behave like a nerve cell, or neuron, electrical engineer Sai Li of Beihang University in Beijing and colleagues suggest. In the human body, a neuron can add up signals from its neighbors, gradually building up a voltage across its membrane. When that voltage reaches a certain threshold, ions begin shifting across the membrane in waves, generating an electric pulse. Skyrmions could imitate this behavior: An electric current would push a skyrmion along a track, with the distance traveled acting as an analog for the neuron’s increasing voltage. A skyrmion reaching a detector at the end would be equivalent to a firing neuron, the researchers proposed in July 2017 in Nanotechnology.
By combining a large number of neuron-imitating skyrmions, the thinking goes, scientists could create a computer that operates something like a brain.
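The scheme described above amounts to an integrate-and-fire neuron, with the skyrmion’s position along the track standing in for membrane voltage. Here is a minimal sketch under our own simplifying assumptions (names and numbers are invented; this is not the Beihang group’s device model):

```python
class SkyrmionNeuron:
    """Toy integrate-and-fire unit: current pulses nudge a skyrmion
    along a track; reaching the detector counts as a spike."""

    def __init__(self, track_length=10.0):
        self.track_length = track_length  # detector position, arbitrary units
        self.position = 0.0               # analog of membrane voltage

    def receive(self, current_pulse):
        # Each input pulse pushes the skyrmion a bit farther along.
        self.position += current_pulse
        if self.position >= self.track_length:
            self.position = 0.0  # reset, like a neuron after firing
            return True          # skyrmion reached the detector: a "spike"
        return False

neuron = SkyrmionNeuron()
spikes = [neuron.receive(3.0) for _ in range(5)]
# The unit stays silent until enough pulses accumulate, then fires once.
```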

Additional ideas for how to use the magnetic whirls keep cropping up. “It’s still a growing field,” von Bergmann says. “There are several new ideas ahead.”

Whether or not skyrmions end up in future gadgets, the swirls are part of a burgeoning electronics ecosystem. Ever since electricity was discovered, researchers have focused on the motion of electric charges. But physicists are now fashioning a new parallel system called spintronics — of which skyrmions are a part — based on the motion of electron spin, the property that makes atoms magnetic (SN Online: 9/26/17). By studying skyrmions, researchers are expanding their understanding of how spins move through materials.

Like a kindergartner fumbling with shoelaces, physicists are still learning how to tie spins up in knots.

Genes could record forensic clues to time of death

Dying, it turns out, is not like flipping a switch. Genes keep working for a while after a person dies, and scientists have used that activity in the lab to pinpoint time of death to within about nine minutes.

During the first 24 hours after death, genetic changes kick in across various human tissues, creating patterns of activity that can be used to roughly predict when someone died, researchers report February 13 in Nature Communications.
“This is really cool, just from a biological discovery standpoint,” says microbial ecologist Jennifer DeBruyn of the University of Tennessee in Knoxville, who was not part of the study. “What do our cells do after we die, and what actually is death?”

What has become clear is that death isn’t the immediate end for genes. Some mouse and zebrafish genes remain active for up to four days after the animals die, scientists reported in 2017 in Open Biology.
In the new work, researchers examined changes in DNA’s chemical cousin, RNA. “There’s been a dogma that RNA is a weak, unstable molecule,” says Tom Gilbert, a geneticist at the Natural History Museum of Denmark in Copenhagen who has studied postmortem genetics. “So people always assumed that DNA might survive after death, but RNA would be gone.”
But recent research has found that RNA can be surprisingly stable, and some genes in our DNA even continue to be transcribed, or written, into RNA after we die, Gilbert says. “It’s not like you need a brain for gene expression,” he says. Molecular processes can continue until the necessary enzymes and chemical components run out.

“It’s no different than if you’re cooking a pasta and it’s boiling — if you turn the cooker off, it’s still going to bubble away, just at a slower and slower rate,” he says.

No one knows exactly how long a human’s molecular pot might keep bubbling, but geneticist and study leader Roderic Guigó of the Centre for Genomic Regulation in Barcelona says his team’s work may help toward figuring that out. “I think it’s an interesting question,” he says. “When does everything stop?”

Tissues from the dead are frequently used in genetic research, and Guigó and his colleagues had initially set out to learn how genetic activity, or gene expression, compares in dead and living tissues.

The researchers analyzed gene activity and degradation in 36 different kinds of human tissue, such as the brain, skin and lungs. Tissue samples were collected from more than 500 donors who had been dead for up to 29 hours. Postmortem gene activity varied in each tissue, the scientists found, and they used a computer to search for patterns in this activity. Just four tissues, taken together, could give a reliable time of death: subcutaneous fat, lung, thyroid and skin exposed to the sun.

Based on those results, the team developed an algorithm that a medical examiner might one day use to determine time of death. Using tissues in the lab, the algorithm could estimate the time of death to within about nine minutes, performing best during the first few hours after death, DeBruyn says.
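The prediction step can be pictured as pattern matching against reference activity profiles from the four informative tissues. This toy sketch uses entirely invented numbers and our own helper names to convey the flavor; the team’s actual statistical model is more sophisticated:

```python
# Toy illustration: predict hours since death from gene-activity levels
# in four tissues by matching against reference profiles. The numbers
# below are made up; the real study fit models to data from 500+ donors.
reference = {
    # hours since death: (fat, lung, thyroid, sun-exposed skin) activity
    2:  (0.9, 0.8, 0.95, 0.85),
    8:  (0.7, 0.6, 0.80, 0.60),
    16: (0.4, 0.3, 0.55, 0.35),
    24: (0.2, 0.1, 0.30, 0.15),
}

def estimate_hours(sample):
    # Pick the reference profile closest to the observed activity pattern.
    def distance(profile):
        return sum((a - b) ** 2 for a, b in zip(sample, profile))
    return min(reference, key=lambda hours: distance(reference[hours]))

print(estimate_hours((0.68, 0.62, 0.78, 0.58)))  # closest to the 8-hour profile
```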

For medical examiners, real-world conditions might not allow for such accuracy.

Traditionally, medical examiners use body temperature and physical signs such as rigor mortis to determine time of death. But scientists including DeBruyn are also starting to look at timing death using changes in the microbial community during decomposition (SN Online: 7/22/15).

These approaches — tracking microbial communities and gene activity — are “definitely complementary,” DeBruyn says. In the first 24 hours after death, bacterial communities, unlike genes, haven’t changed much, so gene activity may be more useful for pinpointing time of death in that window. At longer time scales, microbes may work better.

“The biggest challenge is nailing down variability,” DeBruyn says. Everything from the temperature where a body is found to the deceased’s age could potentially affect how many and which genes are active after death. So scientists will have to do more experiments to account for these factors before the new method can be widely used.

Cutting off a brain enzyme reversed Alzheimer’s plaques in mice

Knocking back an enzyme swept mouse brains clean of protein globs that are a sign of Alzheimer’s disease. Reducing the enzyme is known to keep these nerve-damaging plaques from forming. But the disappearance of existing plaques was unexpected, researchers report online February 14 in the Journal of Experimental Medicine.

The brains of mice engineered to develop Alzheimer’s disease were riddled with these plaques, clumps of amyloid-beta protein fragments, by the time the animals were 10 months old. But the brains of 10-month-old Alzheimer’s mice that had a severely reduced amount of an enzyme called BACE1 were essentially clear of new and old plaques.
Studies rarely demonstrate the removal of existing plaques, says neuroscientist John Cirrito of Washington University in St. Louis, who was not involved in the study. “It suggests there is something special about BACE1,” he says, but exactly what that might be remains unclear.

One theory of how Alzheimer’s develops is called the amyloid cascade hypothesis. Accumulation of globs of A-beta protein bits, the idea goes, drives the nerve cell loss and dementia seen in the disease, which an estimated 5.5 million Americans had in 2017. If the theory is right, then targeting the BACE1 enzyme, which cuts up another protein to make A-beta, may help patients.
BACE1 was discovered about 20 years ago. Initial studies turned off the gene that makes BACE1 in mice for their entire lives, and those animals produced almost no A-beta. In humans, however, any drug that combats Alzheimer’s by going after the enzyme would be given to adults. So Riqiang Yan, one of the discoverers of BACE1 and a neuroscientist at the Cleveland Clinic, and colleagues set out to learn what happens when mice that start life with normal amounts of BACE1 lose much of the enzyme later on.

The researchers studied mice engineered to develop plaques in their brains when the animals are about 10 weeks old. Some of these mice were also engineered so that levels of the BACE1 enzyme, which is mostly found in the brain, gradually tapered off over time. When these mice were 4 months old, the animals had lost about 80 percent of the enzyme.
Alzheimer’s mice with normal BACE1 levels experienced a steady increase in plaques, clearly seen in samples of their brains. In Alzheimer’s mice without BACE1, however, the clumps followed a different trajectory. The number of plaques initially grew, but by the time the mice were around 6 months old, those plaques had mostly disappeared. And by 10 months, “we hardly see any,” Yan says.

Cirrito was surprised that getting rid of BACE1 later in life didn’t just stop plaques from forming, but removed them, too. “It is possible that perhaps a therapeutic agent targeting BACE1 in humans might have a similar effect,” he says.

Drugs that target BACE1 are already in development. But the enzyme has other jobs in the brain, such as potentially affecting the ability of nerve cells to communicate properly. It may be necessary for a drug to inhibit some, but not all, of the enzyme, enough to prevent plaque formation but also preserve normal signaling between nerve cells, Yan says.

A new species of tardigrade lays eggs covered with doodads and streamers

What a spectacular Easter basket tardigrade eggs would make — at least for those celebrating in miniature.

A new species of the pudgy, eight-legged water creatures lays pale, spherical microscopic eggs studded with domes crowned in long, trailing streamers.

Eggs of many land-based tardigrades have bumps, spines, filaments and such, presumably to help attach to a surface, says species codiscoverer Kazuharu Arakawa. The combination of a relatively plain surface on the egg itself (no pores, for instance) plus a filament crown helps distinguish this water bear as a new species, now named Macrobiotus shonaicus, he and colleagues report February 28 in PLOS ONE.
With about 20 new species added each year to the existing 1,200 or so known worldwide, tardigrades have become tiny icons of extreme survival (SN Online: 7/14/17).

“I was actually not looking for a new species,” Arakawa says. He happened on it when searching through moss he plucked from the concrete parking lot at his apartment. He routinely samples such stray spots to search for tardigrades, one of his main interests as a genome biologist at Keio University’s Institute for Advanced Biosciences in Tsuruoka City, Japan.
These particular moss-loving creatures managed to grow and reproduce in the lab —“very rare for a tardigrade,” he says. He didn’t realize it was an unknown species until he started deciphering the DNA that makes up some of its genes. The sequences he found didn’t match any in a worldwide database.

His two coauthors, at Jagiellonian University in Krakow, Poland, worked out that he had found a new member of a storied cluster of relatives of the tardigrade M. hufelandi. That species, described in 1834, kept turning up across continents around the world — or so biologists thought for more than a century. Realization eventually dawned that the single species that could live in such varied places was actually a complex of close cousins.

And now M. shonaicus adds yet another cousin to a group of about 30. Who knows where the next one will turn up. “I think there are lots more to be identified,” Arakawa says.

The debate over how long our brains keep making new nerve cells heats up

Adult mice and other rodents sprout new nerve cells in memory-related parts of their brains. People, not so much. That’s the surprising conclusion of a series of experiments on human brains of various ages first described at a meeting in November (SN: 12/9/17, p. 10). A more complete description of the finding, published online March 7 in Nature, gives heft to the controversial result, as well as ammo to researchers looking for reasons to be skeptical of the findings.

In contrast to earlier prominent studies, Shawn Sorrells of the University of California, San Francisco and his colleagues failed to find newborn nerve cells in the memory-related hippocampi of adult brains. The team looked for these cells in nonliving brain samples in two ways: with molecular markers that tag dividing cells and young nerve cells, and by the telltale shapes of newborn cells. Using these metrics, the researchers saw signs of newborn nerve cells in fetal brains and brains from the first year of life, but such cells became rarer in older children. And the brains of adults had none.

There is no surefire way to spot new nerve cells, particularly in live brains; each way comes with caveats. “These findings are certain to stir up controversy,” neuroscientist Jason Snyder of the University of British Columbia writes in an accompanying commentary in the same issue of Nature.

Venus may be home to a new kind of tectonics

THE WOODLANDS, Texas — Venus’ crust is broken up into chunks that shuffle, jostle and rotate on a global scale, researchers reported in two talks March 20 at the Lunar and Planetary Science Conference.

New maps of the rocky planet’s surface, based on images taken in the 1990s by NASA’s Magellan spacecraft, show that Venus’ low-lying plains are surrounded by a complex network of ridges and faults. Similar features on Earth correspond to tectonic plates crunching together, sometimes creating mountain ranges, or pulling apart. Even more intriguing, the edges of the Venusian plains show signs of rubbing against each other, also suggesting these blocks of crust have moved, the researchers say.
“This is a new way of looking at the surface of Venus,” says planetary geologist Paul Byrne of North Carolina State University in Raleigh.

Geologists generally thought rocky planets could have only two forms of crust: a stagnant lid as on the moon or Mars — where the whole crust is one continuous piece — or a planet with plate tectonics as on Earth, where the surface is split into giant moving blocks that sink beneath or collide with each other. Venus was thought to have one solid lid (SN: 12/3/11, p. 26).

Instead, those options may be two ends of a spectrum. “Venus may be somewhere in between,” Byrne said. “It’s not plate tectonics, but it ain’t not plate tectonics.”

While Earth’s plates move independently like icebergs, Venus’ blocks jangle together like chaotic sea ice, said planetary scientist Richard Ghail of Imperial College London in a supporting talk.
Ghail showed similar ridges and faults around two specific regions on Venus that resemble continental interiors on Earth, such as the Tarim and Sichuan basins in China. He named the two Venusian plains the Nuwa Campus and Lada Campus. (The Latin word campus translates as a field or plain, especially one bound by a fence, so he thought it was fitting.)
Crustal motion may be possible on Venus because the surface is scorching hot (SN: 3/3/18, p. 14). “Those rocks already have to be kind of gooey” from the high temperatures, Byrne said. That means it wouldn’t take a lot of force to move them. Venus’ interior is also probably still hot, like Earth’s, so convection in the mantle could help push the blocks around.

“It’s a bit of a paradigm shift,” says planetary scientist Lori Glaze of NASA’s Goddard Space Flight Center, who was not involved in the new work. “People have always wanted Venus to be active. We believe it to be active, but being able to identify these features gives us more of a sense that it is.”

The work may have implications for astronomers trying to figure out which Earth-sized planets in other solar systems are habitable (SN: 4/30/16, p. 36). Venus is almost the same size and mass as Earth. But no known life exists on Venus, where the average surface temperature is 462° Celsius and the atmosphere is acidic. Scientists have long speculated that Venus’ apparent lack of plate tectonics might help explain why the planet is so seemingly uninhabitable.

What’s more, the work also underlines the possibility that planets go through phases of plate tectonics (SN: 6/25/16, p. 8). Venus could have had plate tectonics like Earth 1 billion or 2 billion years ago, according to a simulation presented at the meeting by geophysicist Matthew Weller of the University of Texas at Austin.

“As Venus goes, does that predict where the Earth is going in the relatively near future?” he wondered.