If you look at a map of the world, it’s easy to think that the vast oceans would be effective barriers to the movement of land animals. And while an elephant can’t swim across the Pacific, it turns out that plenty of plants and animals — and even people — have unintentionally floated across oceans from one continent to another. Now comes evidence that tiny, sedentary trapdoor spiders made such a journey millions of years ago, taking them from Africa all the way across the Indian Ocean to Australia.
Moggridgea rainbowi spiders from Kangaroo Island, off the south coast of Australia, are known as trapdoor spiders because they build a silk-lined burrow in the ground with a secure-fitting lid, notes Sophie Harrison of the University of Adelaide in Australia. The burrow and trapdoor provide the spiders with shelter and protection as well as a means for capturing prey. And it means that the spiders don’t really need to travel farther than a few meters over the course of a lifetime.
There was evidence, though, that the ancestors of these Australian spiders might have traveled millions of meters to get to Australia — from Africa. That isn’t as odd as it might seem, since Australia used to be connected to other continents long ago in the supercontinent Gondwana. And humans have been known to transport species all over the planet. But there’s a third option, too: The spiders might have floated their way across an ocean.
To figure out which story is most likely true, Harrison and her colleagues looked at the spiders’ genes. They turned to six genes that have been well-studied by spider biologists seeking to understand relationships between species. The researchers looked at those genes in seven M. rainbowi specimens from Kangaroo Island, five species of Moggridgea spiders from South Africa and seven species of southwestern Australian spiders from the closely related genus Bertmainius.
Using that data, the researchers built a spider family tree that showed which species were most closely related and how long ago their most recent common ancestor lived. M. rainbowi was most closely related to the African Moggridgea spiders, the analysis revealed. And the species split off some 2 million to 16 million years ago, Harrison and her colleagues report August 2 in PLOS ONE.
The timing of the divergence was long after Gondwana split up. And it was long before either the ancestors of Australia’s aboriginal people or later Europeans showed up on the Australian continent. While it may be improbable that a colony of spiders survived a journey of 10,000 kilometers across the Indian Ocean, that is the most likely explanation for how the trapdoor spiders got to Kangaroo Island, the researchers conclude.
Such an ocean journey would not be unprecedented for spiders in this genus, Harrison and her colleagues note. There are three species of Moggridgea spiders that are known to live on islands off the shore of the African continent. Two live on islands that were once part of the mainland, and they may have diverged at the same time that their islands separated from Africa. But the third, M. nesiota, lives on the Comoros, which are volcanic islands. The spiders must have traveled across 340 kilometers of ocean to get there. These types of spiders may be well-suited to ocean travel. If a large swath of land washes into the sea, laden with arachnids, the spiders may be able to hide out in their nests for the journey. Plus, they don’t need a lot of food, can resist drowning and even “hold their breath” and survive on stored oxygen during periods of temporary flooding, the researchers note.
We’re going through a comic book phase at my house. Since lucking into the comics stash at the library, my 4-year-old refuses any other literary offering. Try as I might to rekindle her love of Rosie Revere, my daughter shuns that scrappy little engineer for Superman every single night.
I know that comic fans abound, but I’ll admit that I get a little lost reading the books. The multi-paneled illustrations, the jumpy story lines and the fact that my daughter skips way ahead make it hard for me to engage. And I imagine that for a preliterate preschooler, that confusion is worse.
There’s evidence for this idea (although it won’t help me force my daughter to choose girl-power science lit over Superman). A recent study found that kids learn new vocabulary from books better when there’s just one picture to see at a time.
Psychologist Jessica Horst and colleague Zoe Flack, both of the University of Sussex in England, read stories to 36 children, all about 3½ years old. These were specially designed storybooks, with pages as big as printer paper. And sprinkled into the text and reflected in the illustrations were a few nonsense words: An inverted, orange and yellow slingshot that mixed things, called a tannin, and a metal wheel used like a rolling pin, called a sprock.
The researchers wanted to know under which reading conditions kids would best pick up the meaning of the nonsense words. In some tests, a researcher read the storybook that showed two distinct pictures at a time. In other tests, only one picture was shown at a time. Later, the kids were asked to point to the “sprock,” which was shown in a separate booklet among other unfamiliar objects.
Kids who saw just one picture at a time were more likely to point to the sprock when they saw it again, the researchers found. The results, published June 30 in Infant and Child Development, show how important pictures can be for preliterate kids, says Horst.
“As parents, it’s easy to forget that children do not look at the written text until they themselves are learning to read,” she says. (This study shows how infrequently preschoolers look at the words.) That means that kids might focus on pictures that aren’t relevant to the words they’re hearing, a mismatch that makes it harder for them to absorb new vocabulary. Does this mean parents ought to trash all books with multiple pictures on a page? Of course not. Horst and Flack found that for such books, gesturing toward the relevant picture got the word-learning rate back up. That means that parents ought to point at Wonder Woman’s Lasso of Truth or wave at the poor varlet that Shrek steals a lunch from. (Shrek!, the book by William Steig, contains delightful vocabulary lessons for children and adults alike.)
Those simple gestures, Horst says, will help you and your child “literally be on the same page.”
If you could put Uranus’ moon Cressida in a gigantic tub of water, it would float.
Cressida is one of at least 27 moons that circle Uranus. Robert Chancia of the University of Idaho in Moscow and colleagues calculated Cressida’s density and mass using variations in an inner ring of the planet as Uranus passed in front of a distant star. The team found that the density of the moon is 0.86 grams per cubic centimeter and its mass is 2.5×10¹⁷ kilograms. The results, reported August 28 on arXiv.org, are the first to reveal details about the moon. Knowing its density and mass helps researchers determine if and when Cressida might collide with another of Uranus’ moons and what will become of both of them.
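For readers who want to double-check the floating claim, here is a minimal back-of-the-envelope sketch in Python (it treats Cressida, purely for illustration, as a uniform sphere, which the real moon is not):

```python
# Back-of-the-envelope check on Cressida's reported figures.
# Assumes a uniform sphere, which is only a rough stand-in for the real moon.
import math

density = 860.0          # kg per cubic meter (0.86 g per cubic centimeter, as reported)
mass = 2.5e17            # kg, as reported
water_density = 1000.0   # kg per cubic meter

volume = mass / density                                     # about 2.9e14 cubic meters
radius_km = (3 * volume / (4 * math.pi)) ** (1 / 3) / 1000  # about 40 km, if spherical

print(f"Implied radius if spherical: {radius_km:.0f} km")
print("Less dense than water, so it floats:", density < water_density)  # True
```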
Voyager 2 discovered Cressida and several other moons when the spacecraft flew by Uranus in 1986. Those moons, and two discovered later, orbit within 20,000 kilometers of Uranus and are the most tightly packed in the solar system.
Such tight packing puts the moons on collision courses. Based on the newly calculated mass and density of Cressida, simulations suggest it will slam into another moon, Desdemona, in a million years.
Cressida’s density suggests it is made of water ice with some contamination by a dark material. If the other moons have similar compositions, the moon collisions may happen in the more distant future than researchers thought. Determining what the moons are made of will also reveal their ultimate fate after a collision: whether they merge, bounce off each other or shatter into millions of pieces.
Artificial sweeteners can have a not-so-sweet side — a bitter aftertaste. The flavor can be such a turnoff that some people avoid the additives entirely. Decades ago, people noticed that for two artificial sweeteners — saccharin and cyclamate, which can taste bitter on their own — the bitterness disappears when they’re combined. But no one really knew why.
It turns out that saccharin doesn’t just activate sweet taste receptors; it also blocks bitter ones — the same bitter taste receptors that cyclamate activates. And the reverse is true, too. The result could make your bitter batter better. And it could help scientists find the next super sweetener.
Saccharin is 300 times as sweet as sugar, and cyclamate is 30 to 40 times as sweet as the real deal. Saccharin has been in use since its discovery in 1879 and is best known as Sweet’N Low in the United States. Cyclamate was initially approved in the United States in 1951, but banned as a food additive in 1969 over concerns that it caused cancer in rats. It remains popular elsewhere, and is the sweetener behind Sweet’N Low in Canada.
In the 1950s, scientists realized that the combination of the two (sold in Europe under brand names such as Assugrin) wasn’t nearly as bitter as either sweetener alone.
But for the more than 60 years since, scientists didn’t know why the combination of cyclamate and saccharin was such a sweet deal. That’s in large part because scientists simply didn’t know a lot about how we taste. The receptors for bitter flavors were only discovered in 2000, explains Maik Behrens, a molecular biologist at the German Institute of Human Nutrition Potsdam-Rehbruecke.
(Now we know that there are 25 potential bitter taste receptors, and that people express them in varying levels. That’s why some people have strong responses to bitter flavors such as those in coffee, while others might be more bothered by the bitter aftertaste of the sweetener put in it.)
Behrens and his colleagues Kristina Blank and Wolfgang Meyerhof developed a way to screen which of the bitter taste receptors saccharin and cyclamate were hitting, to figure out why the combination is more palatable than either one alone. The researchers inserted the genes for the 25 subtypes into human kidney cells (an easier feat than working with real taste cells). Each gene included a marker that glowed when the receptors were stimulated. Previous studies of the two sweeteners had shown that saccharin alone activates the subtypes TAS2R31 and TAS2R43, and cyclamate tickles TAS2R1 and TAS2R38. Stimulating any of those four taste receptor subtypes will leave a bitter taste in your mouth.
But cyclamate doesn’t just activate its two bitter receptors, Behrens and his colleagues showed. It also blocks TAS2R31 and TAS2R43 — the same receptors that saccharin stimulates. So with cyclamate around, saccharin can’t get at the bitter taste subtypes, Behrens explains. Bye-bye, bitter aftertaste.
The reverse was true, too: Saccharin blocked TAS2R1 — one of the bitter receptors that cyclamate activates. In this case, though, the amount of saccharin required to block the receptors that cyclamate activates would have bitter effects on its own. So it’s probably the actions of cyclamate at saccharin’s bitter receptors that help block the bitterness, Behrens and his colleagues report September 14 in Cell Chemical Biology.
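To see the activate-and-block logic at a glance, here is a toy, set-based sketch of the pattern described above (an illustration only; real bitterness depends on doses and binding strengths, which is exactly why the saccharin-side blocking matters less in practice):

```python
# Toy model of the reported activate/block pattern. It ignores concentrations
# and binding strengths, so it is a cartoon of the mechanism, not pharmacology.
saccharin = {"activates": {"TAS2R31", "TAS2R43"}, "blocks": {"TAS2R1"}}
cyclamate = {"activates": {"TAS2R1", "TAS2R38"}, "blocks": {"TAS2R31", "TAS2R43"}}

def active_bitter_receptors(*compounds):
    """Receptor subtypes activated by at least one compound and blocked by none."""
    activated = set().union(*(c["activates"] for c in compounds))
    blocked = set().union(*(c["blocks"] for c in compounds))
    return activated - blocked

print(active_bitter_receptors(saccharin))             # {'TAS2R31', 'TAS2R43'}
print(active_bitter_receptors(cyclamate))             # {'TAS2R1', 'TAS2R38'}
print(active_bitter_receptors(saccharin, cyclamate))  # {'TAS2R38'}: far fewer fire
```

Fewer bitter subtypes firing when the two are mixed squares with the blend tasting far less bitter, even though, as noted above, cyclamate’s blocking does most of the real work.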
The researchers also tested whether cyclamate and saccharin together could be stronger activators of sweet receptors than either chemical alone. They aren’t: in further tests, Behrens and his colleagues showed that the sweet sides of saccharin and cyclamate stayed the same in combination.
“This addresses a longstanding puzzle why mixing two different sweeteners changes the aftertaste,” says Yuki Oka, a neuroscientist at Caltech in Pasadena. “They are interrupting each other at the receptor level.” It’s not too surprising that a sweetener might block some receptor subtypes and stimulate others, he notes, but that saccharin and cyclamate have such clear compatibilities is a lucky chance. “Mechanism-wise, it’s surprisingly beautiful.”
Oka notes that no actual tongues tasted artificial sweeteners in these experiments. The tests took place on cells in dishes. But, he says, because the researchers used the human bitter taste receptors, it’s likely that the same thing happens when a diet drink hits the human tongue.
Behrens hopes the cell setup they used for this experiment can do more than solve an old mystery. By using cells in lab tests to predict how different additives might interact, he notes, scientists can develop sweeteners with fewer bitter effects. The technique was developed with funding from a multinational group of researchers and companies — many of which will probably be very interested in the sweet results. And on the way to sweeteners of the future, scientists may be able to resolve more taste mysteries of the past.
When you think about it, it shouldn’t be surprising that there’s more than one way to explain quantum mechanics. Quantum math is notorious for incorporating multiple possibilities for the outcomes of measurements. So you shouldn’t expect physicists to stick to only one explanation for what that math means. And in fact, sometimes it seems like researchers have proposed more “interpretations” of this math than Katy Perry has followers on Twitter.
So it would seem that the world needs more quantum interpretations like it needs more Category 5 hurricanes. But until some single interpretation comes along that makes everybody happy (and that’s about as likely as the Cleveland Browns winning the Super Bowl), yet more interpretations will emerge. One of the latest appeared recently (September 13) online at arXiv.org, the site where physicists send their papers to ripen before actual publication. You might say papers on the arXiv are like “potential publications,” which someday might become “actual” if a journal prints them.
And that, in a nutshell, is pretty much the same as the logic underlying the new interpretation of quantum physics. In the new paper, three scientists argue that including “potential” things on the list of “real” things can avoid the counterintuitive conundrums that quantum physics poses. It is perhaps less of a full-blown interpretation than a new philosophical framework for contemplating those quantum mysteries. At its root, the new idea holds that the common conception of “reality” is too limited. By expanding the definition of reality, the quantum’s mysteries disappear. In particular, “real” should not be restricted to “actual” objects or events in spacetime. Reality ought also to be assigned to certain possibilities, or “potential” realities, that have not yet become “actual.” These potential realities do not exist in spacetime, but nevertheless are “ontological” — that is, real components of existence.
“This new ontological picture requires that we expand our concept of ‘what is real’ to include an extraspatiotemporal domain of quantum possibility,” write Ruth Kastner, Stuart Kauffman and Michael Epperson.
Considering potential things to be real is not exactly a new idea, as it was a central aspect of the philosophy of Aristotle, 24 centuries ago. An acorn has the potential to become a tree; a tree has the potential to become a wooden table. Even applying this idea to quantum physics isn’t new. Werner Heisenberg, the quantum pioneer famous for his uncertainty principle, considered his quantum math to describe potential outcomes of measurements of which one would become the actual result. The quantum concept of a “probability wave,” describing the likelihood of different possible outcomes of a measurement, was a quantitative version of Aristotle’s potential, Heisenberg wrote in his well-known 1958 book Physics and Philosophy. “It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.”
In their paper, titled “Taking Heisenberg’s Potentia Seriously,” Kastner and colleagues elaborate on this idea, drawing a parallel to the philosophy of René Descartes. Descartes, in the 17th century, proposed a strict division between material and mental “substance.” Material stuff (res extensa, or extended things) existed entirely independently of mental reality (res cogitans, things that think) except in the brain’s pineal gland. There res cogitans could influence the body. Modern science has, of course, rejected res cogitans: The material world is all that reality requires. Mental activity is the outcome of material processes, such as electrical impulses and biochemical interactions.
Kastner and colleagues also reject Descartes’ res cogitans. But they think reality should not be restricted to res extensa; rather it should be complemented by “res potentia” — in particular, quantum res potentia, not just any old list of possibilities. Quantum potentia can be quantitatively defined; a quantum measurement will, with certainty, always produce one of the possibilities it describes. In the large-scale world, all sorts of possibilities can be imagined (Browns win Super Bowl, Indians win 22 straight games) which may or may not ever come to pass.
If quantum potentia are in some sense real, Kastner and colleagues say, then the mysterious weirdness of quantum mechanics becomes instantly explicable. You just have to realize that changes in actual things reset the list of potential things.
Consider for instance that you and I agree to meet for lunch next Tuesday at the Mad Hatter restaurant (Kastner and colleagues use the example of a coffee shop, but I don’t like coffee). But then on Monday, a tornado blasts the Mad Hatter to Wonderland. Meeting there is no longer on the list of res potentia; it’s no longer possible for lunch there to become an actuality. In other words, even though an actuality can’t alter a distant actuality, it can change distant potential. We could have been a thousand miles away, yet the tornado changed our possibilities for places to eat.
It’s an example of how the list of potentia can change without the spooky action at a distance that Einstein alleged about quantum entanglement. Measurements on entangled particles, such as two photons, seem baffling. You can set up an experiment so that before a measurement is made, either photon could be spinning clockwise or counterclockwise. Once one is measured, though (and found to be, say, clockwise), you know the other will have the opposite spin (counterclockwise), no matter how far away it is. But no secret signal is (or could possibly be) sent from one photon to the other after the first measurement. It’s simply the case that counterclockwise is no longer on the list of res potentia for the second photon. An “actuality” (the first measurement) changes the list of potentia that still exist in the universe. Potentia encompass the list of things that may become actual; what becomes actual then changes what’s on the list of potentia.
Similar arguments apply to other quantum mysteries. Observation of a “pure” quantum state, containing many possibilities, turns one of those possibilities into an actual one. And the new actual event constrains the list of future possibilities, without any need for physical causation. “We simply allow that actual events can instantaneously and acausally affect what is next possible … which, in turn, influences what can next become actual, and so on,” Kastner and colleagues write.
Measurement, they say, is simply a real physical process that transforms quantum potentia into elements of res extensa — actual, real stuff in the ordinary sense. Space and time, or spacetime, is something that “emerges from a quantum substratum,” as actual stuff crystallizes out “of a more fluid domain of possibles.” Spacetime, therefore, is not all there is to reality.
It’s unlikely that physicists everywhere will instantly cease debating quantum mysteries and start driving cars with “res potentia!” bumper stickers. But whether this new proposal triumphs in the quantum debates or not, it raises a key point in the scientific quest to understand reality. Reality is not necessarily what humans think it is or would like it to be. Many quantum interpretations have been motivated by a desire to return to Newtonian determinism, for instance, where cause and effect is mechanical and predictable, like a clock’s tick preceding each tock.
But the universe is not required to conform to Newtonian nostalgia. And more generally, scientists often presume that the phenomena nature offers to human senses reflect all there is to reality. “It is difficult for us to imagine or conceptualize any other categories of reality beyond the level of actual — i.e., what is immediately available to us in perceptual terms,” Kastner and colleagues note. Yet quantum physics hints at a deeper foundation underlying the reality of phenomena — in other words, that “ontology” encompasses more than just events and objects in spacetime. This proposition sounds a little bit like advocating for the existence of ghosts. But it is actually more of an acknowledgment that things may seem ghostlike only because reality has been improperly conceived in the first place. Kastner and colleagues point out that the motions of the planets in the sky baffled ancient philosophers because supposedly in the heavens, reality permitted only uniform circular motion (accomplished by attachment to huge crystalline spheres). Expanding the boundaries of reality allowed those motions to be explained naturally.
Similarly, restricting reality to events in spacetime may turn out to be like restricting the heavens to rotating spheres. Spacetime itself, many physicists are convinced, is not a primary element of reality but a structure that emerges from processes more fundamental. Because these processes appear to be quantum in nature, it makes sense to suspect that something more than just spacetime events has a role to play in explaining quantum physics.
True, it’s hard to imagine the “reality” of something that doesn’t exist “actually” as an object or event in spacetime. But Kastner and colleagues cite the warning issued by the late philosopher Ernan McMullin, who pointed out that “imaginability must not be made the test for ontology.” Science attempts to discover the real world’s structures; it’s unwarranted, McMullin said, to require that those structures be “imaginable in the categories” known from large-scale ordinary experience. Sometimes things not imaginable do, after all, turn out to be real. No fan of the team ever imagined the Indians would win 22 games in a row.
ORLANDO, Fla. — Interbreeding with Neandertals restored some genetic heirlooms that modern humans left behind in the ancient exodus from Africa, new research suggests.
Those heirlooms are versions of genes, or alleles, that were present in humans’ and Neandertals’ shared ancestors. Neandertals carried many of those old alleles, passing them along generation after generation, while developing their own versions of other genes. A small number of humans left Africa around 100,000 years ago and settled in Asia and Europe. These migrants “lost” the ancestral alleles. But when the migrants or their descendants interbred with Neandertals, Eurasians reinherited the ancestral heirlooms along with Neandertal DNA, John “Tony” Capra reported October 20 at the annual meeting of the American Society of Human Genetics.
Present-day Europeans have more than 47,000 of these reintroduced ancestral alleles, and East Asians — who have more Neandertal ancestry than Europeans (SN Online: 2/12/15) — carry more than 56,000, said Capra, an evolutionary geneticist at Vanderbilt University in Nashville.
Capra and others have evidence that Neandertal versions of genes make humans more prone to some diseases (SN: 3/5/16, p. 18). Of the thousands of ancestral variants reintroduced into modern humans, only 41 have been linked in genetic studies to diseases, such as skin conditions and neurological and psychiatric disorders, he said. The researchers can’t tell for sure whether the effect is from the ancestral variant or neighboring Neandertal DNA. Capra and Vanderbilt colleague Corinne Simonti’s analyses indicate that the Neandertal DNA is more likely to blame. Many of the ancestral alleles are still present in modern-day Africans, Capra said, “so they’re unlikely to be very, very bad.”
Earth may not provide the best blueprint for how rocky planets are born.
An analysis of planets outside the solar system suggests that most hot, rocky exoplanets started out more like gassy Neptunes. Such planets are rocky now because their stars blew their thick atmospheres away, leaving nothing but an inhospitable core, researchers report in a paper posted online October 15 at arXiv.org. That could mean these planets are not as representative of Earth as scientists thought, and using them to estimate the frequency of potentially life-hosting worlds is misleading. “One of the big discoveries is that Earth-sized, likely rocky planets are incredibly common, at least on hotter orbits,” says planetary scientist Eric Lopez of NASA’s Goddard Space Flight Center in Greenbelt, Md., who wasn’t involved in the study. “The big question is, are those hot exoplanets telling us anything about the frequency of Earthlike planets? This suggests that they might not be.”
Observations so far suggest that worlds about Earth’s size probably cluster into two categories: rocky super-Earths and gaseous mini-Neptunes (SN Online: 6/19/17). Super-Earths are between one and 1.5 times as wide as Earth; mini-Neptunes are between 2.5 and four times Earth’s size. Earlier work showed that there’s a clear gap between these planet sizes.
Because planets that are close to their stars are easier for telescopes to see, most of the rocky super-Earths discovered so far have close-in orbits — with years lasting from about two to 100 Earth days — making the worlds way too hot to host life as we know it. But because they are rocky like Earth, scientists include these worlds with their cooler brethren when estimating how many habitable planets might be out there.
If hot super-Earths start out rocky, perhaps it is because the worlds form later than their puffy mini-Neptune companions, when there’s less gas left in the growing planetary system to build an atmosphere. Or such planets, like the mini-Neptunes, may start out with thick atmospheres that stellar winds later strip away. Now, exoplanet astronomer Vincent Van Eylen of Leiden University in the Netherlands and his colleagues have shown that the fault is in the stars. “You really have these two populations, and the influence of the star is what creates this separation,” Van Eylen says. That result could warn astronomers not to rely too heavily on these hot, rocky worlds when calculating how many habitable planets are likely to exist.
To measure the planets’ sizes, astronomers need to know the sizes of their stars. Van Eylen and colleagues analyzed 117 planets whose host stars’ sizes had been measured using asteroseismology. This technique tracks how often the star’s brightness changes as interior oscillations ripple through it, and uses those frequencies to determine the star’s size.
“Think of the stars as musical instruments,” Van Eylen says. A double bass and a violin produce sound the same way, but the pitch is different because of the instrument’s size. “It’s exactly the same thing with stars.”
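One common way to turn those “pitches” into a size is the standard asteroseismic scaling relation, in which the spacing between a star’s overtone frequencies tracks its mean density. A rough sketch (a textbook scaling relation with made-up numbers, not the study’s actual analysis):

```python
# Asteroseismic scaling: the large frequency separation (delta_nu) between
# consecutive overtones scales as the square root of the star's mean density.
# Since mean density goes as M / R**3, a measured delta_nu plus a mass estimate
# pins down the radius: R = M**(1/3) * (delta_nu_sun / delta_nu)**(2/3),
# with M and R in solar units.
DELTA_NU_SUN = 135.1  # microhertz, approximate solar value

def radius_from_seismology(mass_msun, delta_nu_uhz):
    """Stellar radius in solar radii from mass (solar masses) and delta_nu (microhertz)."""
    return mass_msun ** (1 / 3) * (DELTA_NU_SUN / delta_nu_uhz) ** (2 / 3)

# Hypothetical example: a 1.1 solar-mass star with delta_nu of 100 microhertz.
print(f"{radius_from_seismology(1.1, 100.0):.2f} solar radii")  # about 1.26
```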
The researchers then calculated the planets’ sizes — between one and four times the size of Earth — with about four times greater precision than in previous studies. As expected, the planets clustered in groups of around 1.5 and 2.5 times Earth’s radius, leaving a gap in the middle.
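Those stellar sizes matter because a transit only measures the planet-to-star size ratio: the fractional dip in starlight is roughly the square of that ratio, so any improvement in the star’s radius passes straight through to the planet’s. A minimal sketch with hypothetical numbers (not values from the study):

```python
import math

def planet_radius(stellar_radius_re, transit_depth):
    """Planet radius in Earth radii, given the star's radius in Earth radii and
    the fractional dip in brightness during transit: R_planet = R_star * sqrt(depth)."""
    return stellar_radius_re * math.sqrt(transit_depth)

# Hypothetical example: a Sun-sized star (about 109 Earth radii) dimming by 0.02 percent.
r_planet = planet_radius(109.0, 2.0e-4)
print(f"Planet radius: {r_planet:.2f} Earth radii")  # about 1.54

# Because R_planet scales linearly with R_star, a stellar radius known four times
# more precisely gives a planet radius about four times more precise as well.
```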
Next the team looked at how the planets’ sizes changed with distance from the host star. Planets that were rocky from the start should be smaller close to the stars, where studies of other young star systems suggest there should have been less material available when these planets were forming. But if proximity to a star’s winds is key, there should be some larger rocky worlds closer in, with smaller gaseous worlds farther out.
Van Eylen’s planets matched the second picture: The largest of the rocky planets nestled close to the stars were bigger than the distant ones. That suggests the rocky planets once had atmospheres, and lost them.
“It’s not fair to take the close-in planets and assume that the more distant planets are just like them,” says exoplanet astronomer Courtney Dressing of the University of California, Berkeley. “You might be fooling yourself.”
Kleptopredation (klep-toe-preh-day-shun) n. A food-gathering strategy of eating an organism and the meal it just ate.
A wily sea slug has a way to get two meals in one: It gobbles up smaller predators that have recently gulped in their own prey.
“Kleptopredation” is the term Trevor Willis of the University of Portsmouth in England and his colleagues propose for this kind of food theft by well-timed predation.
Researchers knew that the small Mediterranean nudibranch Cratena peregrina, with a colorful mane of streamers rippling off its body, climbs and preys on pipe cleaner‒skinny, branched colonies of Eudendrium racemosum hydroids, which are distant relatives of corals. The nudibranchs devour the individual hydroid polyps and, new tests show, prefer them well fed. In experimental buffets with fed or hungry polyps, the nudibranchs ate faster when polyps were fat with just-caught plankton. In this way, at least half of a nudibranch’s diet is plankton. This quirk explains why some biochemical signatures that distinguish predators from prey don’t work out clearly for nudibranchs and hydroids, the researchers report November 1 in Biology Letters.
A weird echo of this meal-stealing strategy shows up in certain jumping spiders. The arachnids don’t have the biology to drink vertebrate blood themselves. Instead, they catch a lot of female mosquitoes that have just tanked up (SN: 10/15/05, p. 246).
A newly fabricated material does more than just hold up under pressure. Unlike many ordinary objects that shrink when squeezed, the metamaterial — a synthetic structure designed to exhibit properties not typically found in natural materials — expands at higher pressures.
This counterintuitive material is made up of a grid of hollow 3-D crosses — shaped like six-way pipe fittings — mere micrometers across. When the surrounding pressure of air, water or some other substance increases, the crosses’ circular surfaces bow inward. Because of the way these crosses are connected with levers, that warping forces the crosses to rotate and push away from each other, causing the whole structure to expand, says study coauthor Jingyuan Qu, a physicist at Karlsruhe Institute of Technology in Germany. The researchers were “very clever about how they connected this quite complex set of structural elements,” says Michael Haberman, a mechanical engineer at the University of Texas at Austin, who wasn’t involved in the work.
Qu and colleagues fashioned a microcube of their metamaterial, described in a paper accepted to Physical Review X, from a plasticlike substance, using a microversion of 3-D printing. When the researchers placed the material inside a gas chamber and cranked up the air pressure from one bar (about the atmospheric pressure at sea level) to five bars, the cube’s volume increased by about 3 percent. Until now, researchers have only described such pressure-expanding metamaterials in mathematical models or computer simulations, says Joseph Grima, a materials scientist at the University of Malta in Msida who was not involved in the work. The new metamaterial provides “much-needed proof” that this type of stuff can actually be fabricated, he says.
Adjusting the thickness of the crosses’ surfaces could make this new metamaterial more or less expandable: The thicker the surfaces are, the less the structure expands. A metamaterial fine-tuned to stay the same size under a wide range of pressures could be used to build equipment that withstands the crushing pressures of the deep sea or the vacuum of outer space.
NASA is going for the gold. Its GOLD mission — short for Global-scale Observations of the Limb and Disk — is slated for launch January 25, the agency announced January 4. GOLD will study the zone where Earth’s atmosphere meets outer space. Its goal is to better understand how both solar and terrestrial storms affect the ionosphere, an upper atmosphere region crucial for radio communications.
Earth’s ionosphere, where incoming cosmic and solar rays interact with the atmosphere to create charged particles, extends from about 75 to about 1,200 kilometers above the planet’s surface. From its geostationary orbit 35,000 kilometers high, GOLD will monitor the ionosphere’s density and temperature using an instrument called an ultraviolet imaging spectrograph. Previous satellites have provided snapshots of the ionosphere, but this is the first time an instrument will keep track of changes in the layers through time, collecting data every 30 minutes.
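As a sanity check on that altitude, the height of a geostationary orbit follows from Kepler’s third law and the length of Earth’s rotation; here is a minimal sketch (standard orbital mechanics, nothing specific to GOLD):

```python
import math

# Approximate standard constants.
GM_EARTH = 3.986004e14   # m^3 / s^2, Earth's gravitational parameter
R_EARTH = 6.371e6        # m, mean Earth radius
SIDEREAL_DAY = 86164.1   # s, one Earth rotation relative to the stars

# Kepler's third law: T^2 = 4 * pi^2 * a^3 / GM, solved for the orbit radius a.
a = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000

print(f"Geostationary altitude: {altitude_km:.0f} km")  # about 35,786 km
```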
GOLD is the first NASA mission to be launched aboard a commercial communications satellite. NASA plans to launch a complementary mission, the Ionospheric Connection Explorer, later this year. That mission will travel directly through the ionosphere, studying its makeup, density and temperature.