Two artificial sweeteners together take the bitter out of bittersweet

Artificial sweeteners can have a not-so-sweet side — a bitter aftertaste. The flavor can be such a turnoff that some people avoid the additives entirely. Decades ago, people noticed that for two artificial sweeteners — saccharin and cyclamate, which can taste bitter on their own — the bitterness disappears when they’re combined. But no one really knew why.

It turns out that saccharin doesn’t just activate sweet taste receptors; it also blocks bitter ones — the same bitter taste receptors that cyclamate activates. And the reverse is true, too. The result could make your bitter batter better. And it could help scientists find the next super sweetener.

Saccharin is 300 times as sweet as sugar, and cyclamate is 30 to 40 times as sweet as the real deal. Saccharin has been in use since its discovery in 1879 and is best known as Sweet’N Low in the United States. Cyclamate was initially approved in the United States in 1951, but banned as a food additive in 1969 over concerns that it caused cancer in rats. It remains popular elsewhere, and is the sweetener behind Sweet’N Low in Canada.

In the 1950s, scientists realized that the combination of the two (sold in Europe under brand names such as Assugrin) wasn’t nearly as bitter as either sweetener alone.

But for the more than 60 years since, scientists didn’t know why the combination of cyclamate and saccharin was such a sweet deal. That’s in large part because scientists simply didn’t know a lot about how we taste. The receptors for bitter flavors were only discovered in 2000, explains Maik Behrens, a molecular biologist at the German Institute of Human Nutrition Potsdam-Rehbruecke.

(Now we know that there are 25 potential bitter taste receptors, and that people express them in varying levels. That’s why some people have strong responses to bitter flavors such as those in coffee, while others might be more bothered by the bitter aftertaste of the sweetener put in it.)

Behrens and his colleagues Kristina Blank and Wolfgang Meyerhof developed a way to screen which of the bitter taste receptors saccharin and cyclamate were hitting, to figure out why the combination is more palatable than either one alone. The researchers inserted the genes for the 25 subtypes into human kidney cells (an easier feat than working with real taste cells). Each gene included a marker that glowed when the receptors were stimulated.

Previous studies of the two sweeteners had shown that saccharin alone activates the subtypes TAS2R31 and TAS2R43, and cyclamate tickles TAS2R1 and TAS2R38. Stimulating any of those four taste receptor subtypes will leave a bitter taste in your mouth.

But cyclamate doesn’t just activate the two bitter receptors, Behrens and his colleagues showed. It blocks TAS2R31 and TAS2R43 — the same receptors that saccharin stimulates. So with cyclamate around, saccharin can’t get at the bitter taste subtypes, Behrens explains. Bye-bye, bitter aftertaste.

The reverse was true, too: Saccharin blocked TAS2R1 — one of the bitter receptors that cyclamate activates. In this case, though, the amount of saccharin required to block the receptors that cyclamate activates would have bitter effects on its own. So it’s probably the actions of cyclamate at saccharin’s bitter receptors that help block the bitterness, Behrens and his colleagues report September 14 in Cell Chemical Biology.
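The receptor arithmetic above is simple enough to sketch in a few lines of Python. This is purely an illustration built from the receptor assignments reported in the story, not the team’s analysis code, and it ignores dose effects — such as the bitterly high saccharin concentration needed to block TAS2R1:

```python
# Toy model of the bitter-receptor interactions reported by Behrens' team.
# Each sweetener activates some bitter receptor subtypes and blocks others.
ACTIVATES = {
    "saccharin": {"TAS2R31", "TAS2R43"},
    "cyclamate": {"TAS2R1", "TAS2R38"},
}
BLOCKS = {
    "saccharin": {"TAS2R1"},              # blocks one of cyclamate's receptors
    "cyclamate": {"TAS2R31", "TAS2R43"},  # blocks both of saccharin's receptors
}

def bitter_receptors_hit(mixture):
    """Receptor subtypes activated by any sweetener in the mix and blocked by none."""
    activated = set().union(*(ACTIVATES[s] for s in mixture))
    blocked = set().union(*(BLOCKS[s] for s in mixture))
    return activated - blocked

print(bitter_receptors_hit(["saccharin"]))               # both of saccharin's bitter receptors fire
print(bitter_receptors_hit(["saccharin", "cyclamate"]))  # far fewer bitter receptors left active
```

In the mixture, each sweetener knocks out the other’s bitter targets, which is the mechanism the study describes.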

The researchers also tested whether cyclamate and saccharin together could be stronger activators of sweet receptors than either chemical alone. But in further tests, Behrens and his colleagues showed that, no, the sweet sides of saccharin and cyclamate stayed the same in combination.

“This addresses a longstanding puzzle why mixing two different sweeteners changes the aftertaste,” says Yuki Oka, a neuroscientist at Caltech in Pasadena. “They are interrupting each other at the receptor level.” It’s not too surprising that a sweetener might block some receptor subtypes and stimulate others, he notes, but that saccharin and cyclamate have such clear compatibilities is a lucky chance. “Mechanism-wise, it’s surprisingly beautiful.”

Oka notes that no actual tongues tasted artificial sweeteners in these experiments. The tests took place on cells in dishes. But, he says, because the researchers used the human bitter taste receptors, it’s likely that the same thing happens when a diet drink hits the human tongue.

Behrens hopes the cell setup they used for this experiment can do more than solve an old mystery. By using cells in lab tests to predict how different additives might interact, he notes, scientists can develop sweeteners with fewer bitter effects. The technique was developed with funding from a multinational group of researchers and companies — many of which will probably be very interested in the sweet results. And on the way to sweeteners of the future, scientists may be able to resolve more taste mysteries of the past.

Plate tectonics started at least 3.5 billion years ago

Plate tectonics may have gotten a pretty early start in Earth’s history. Most estimates put the onset of plate tectonics — when the large plates that make up the planet’s outer crust began shifting — at around 3 billion years ago. But a new study in the Sept. 22 Science that analyzes titanium in continental rocks asserts that plate tectonics began 500 million years earlier.

Nicolas Greber, now at the University of Geneva, and colleagues suggest that previous studies got it wrong because researchers relied on chemical analyses of silicon dioxide in shales, sedimentary rocks that bear the detritus of a variety of continental rocks. These rocks’ silicon dioxide composition can give researchers an idea of when continental rocks began to diverge in makeup from oceanic rocks as a result of plate tectonics.

But weathering can wreak havoc on the chemical makeup of shales. To get around that problem, Greber’s team turned to a new tool: the ratios of two titanium isotopes, forms of the same element that have different masses. The proportion of titanium isotopes in the rocks is a useful stand-in for the difference in silicon dioxide concentration between continental and oceanic rocks, and isn’t so easily altered by weathering. Those data helped the team estimate that continental rocks — and therefore plate tectonics — were already going strong by 3.5 billion years ago.

Quantum mysteries dissolve if possibilities are realities

When you think about it, it shouldn’t be surprising that there’s more than one way to explain quantum mechanics. Quantum math is notorious for incorporating multiple possibilities for the outcomes of measurements. So you shouldn’t expect physicists to stick to only one explanation for what that math means. And in fact, sometimes it seems like researchers have proposed more “interpretations” of this math than Katy Perry has followers on Twitter.

So it would seem that the world needs more quantum interpretations like it needs more Category 5 hurricanes. But until some single interpretation comes along that makes everybody happy (and that’s about as likely as the Cleveland Browns winning the Super Bowl), yet more interpretations will emerge. One of the latest appeared recently (September 13) online at arXiv.org, the site where physicists send their papers to ripen before actual publication. You might say papers on the arXiv are like “potential publications,” which someday might become “actual” if a journal prints them.

And that, in a nutshell, is pretty much the same as the logic underlying the new interpretation of quantum physics. In the new paper, three scientists argue that including “potential” things on the list of “real” things can avoid the counterintuitive conundrums that quantum physics poses. It is perhaps less of a full-blown interpretation than a new philosophical framework for contemplating those quantum mysteries. At its root, the new idea holds that the common conception of “reality” is too limited. By expanding the definition of reality, the quantum’s mysteries disappear. In particular, “real” should not be restricted to “actual” objects or events in spacetime. Reality ought also to be assigned to certain possibilities, or “potential” realities, that have not yet become “actual.” These potential realities do not exist in spacetime, but nevertheless are “ontological” — that is, real components of existence.

“This new ontological picture requires that we expand our concept of ‘what is real’ to include an extraspatiotemporal domain of quantum possibility,” write Ruth Kastner, Stuart Kauffman and Michael Epperson.

Considering potential things to be real is not exactly a new idea, as it was a central aspect of the philosophy of Aristotle, 24 centuries ago. An acorn has the potential to become a tree; a tree has the potential to become a wooden table. Even applying this idea to quantum physics isn’t new. Werner Heisenberg, the quantum pioneer famous for his uncertainty principle, considered his quantum math to describe potential outcomes of measurements of which one would become the actual result. The quantum concept of a “probability wave,” describing the likelihood of different possible outcomes of a measurement, was a quantitative version of Aristotle’s potential, Heisenberg wrote in his well-known 1958 book Physics and Philosophy. “It introduced something standing in the middle between the idea of an event and the actual event, a strange kind of physical reality just in the middle between possibility and reality.”

In their paper, titled “Taking Heisenberg’s Potentia Seriously,” Kastner and colleagues elaborate on this idea, drawing a parallel to the philosophy of René Descartes. Descartes, in the 17th century, proposed a strict division between material and mental “substance.” Material stuff (res extensa, or extended things) existed entirely independently of mental reality (res cogitans, things that think) except in the brain’s pineal gland. There res cogitans could influence the body. Modern science has, of course, rejected res cogitans: The material world is all that reality requires. Mental activity is the outcome of material processes, such as electrical impulses and biochemical interactions.

Kastner and colleagues also reject Descartes’ res cogitans. But they think reality should not be restricted to res extensa; rather it should be complemented by “res potentia” — in particular, quantum res potentia, not just any old list of possibilities. Quantum potentia can be quantitatively defined; a quantum measurement will, with certainty, always produce one of the possibilities it describes. In the large-scale world, all sorts of possibilities can be imagined (Browns win Super Bowl, Indians win 22 straight games) which may or may not ever come to pass.

If quantum potentia are in some sense real, Kastner and colleagues say, then the mysterious weirdness of quantum mechanics becomes instantly explicable. You just have to realize that changes in actual things reset the list of potential things.

Consider for instance that you and I agree to meet for lunch next Tuesday at the Mad Hatter restaurant (Kastner and colleagues use the example of a coffee shop, but I don’t like coffee). But then on Monday, a tornado blasts the Mad Hatter to Wonderland. Meeting there is no longer on the list of res potentia; it’s no longer possible for lunch there to become an actuality. In other words, even though an actuality can’t alter a distant actuality, it can change distant potential. We could have been a thousand miles away, yet the tornado changed our possibilities for places to eat.

It’s an example of how the list of potentia can change without the spooky action at a distance that Einstein alleged about quantum entanglement. Measurements on entangled particles, such as two photons, seem baffling. You can set up an experiment so that before a measurement is made, either photon could be spinning clockwise or counterclockwise. Once one is measured, though (and found to be, say, clockwise), you know the other will have the opposite spin (counterclockwise), no matter how far away it is. But no secret signal is (or could possibly be) sent from one photon to the other after the first measurement. It’s simply the case that counterclockwise is no longer on the list of res potentia for the second photon. An “actuality” (the first measurement) changes the list of potentia that still exist in the universe. Potentia encompass the list of things that may become actual; what becomes actual then changes what’s on the list of potentia.
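The bookkeeping in the photon example can be sketched as a toy model. The names and structure here are invented for clarity; this is an illustration of the idea, not physics code:

```python
# Minimal sketch of the "potentia" bookkeeping described above (illustrative only).
# Before measurement, each photon's spin has two potentia. Measuring one photon
# actualizes a value and prunes the distant photon's list -- no signal required.
def measure_entangled(potentia, outcome):
    """Actualize `outcome` for photon 1; anti-correlation prunes photon 2's potentia."""
    opposite = {"clockwise": "counterclockwise", "counterclockwise": "clockwise"}
    potentia["photon1"] = {outcome}            # an actuality
    potentia["photon2"] = {opposite[outcome]}  # the distant list of potentia shrinks
    return potentia

state = {"photon1": {"clockwise", "counterclockwise"},
         "photon2": {"clockwise", "counterclockwise"}}
measure_entangled(state, "clockwise")
print(state["photon2"])  # counterclockwise is all that remains possible
```

Nothing travels between the two entries; one actuality simply changes what is still on the other’s list — which is the paper’s point.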

Similar arguments apply to other quantum mysteries. Observations of a “pure” quantum state, containing many possibilities, turn one of those possibilities into an actual one. And the new actual event constrains the list of future possibilities, without any need for physical causation. “We simply allow that actual events can instantaneously and acausally affect what is next possible … which, in turn, influences what can next become actual, and so on,” Kastner and colleagues write.

Measurement, they say, is simply a real physical process that transforms quantum potentia into elements of res extensa — actual, real stuff in the ordinary sense. Space and time, or spacetime, is something that “emerges from a quantum substratum,” as actual stuff crystallizes out “of a more fluid domain of possibles.” Spacetime, therefore, is not all there is to reality.

It’s unlikely that physicists everywhere will instantly cease debating quantum mysteries and start driving cars with “res potentia!” bumper stickers. But whether this new proposal triumphs in the quantum debates or not, it raises a key point in the scientific quest to understand reality. Reality is not necessarily what humans think it is or would like it to be. Many quantum interpretations have been motivated by a desire to return to Newtonian determinism, for instance, where cause and effect is mechanical and predictable, like a clock’s tick preceding each tock.

But the universe is not required to conform to Newtonian nostalgia. And more generally, scientists often presume that the phenomena nature offers to human senses reflect all there is to reality. “It is difficult for us to imagine or conceptualize any other categories of reality beyond the level of actual — i.e., what is immediately available to us in perceptual terms,” Kastner and colleagues note. Yet quantum physics hints at a deeper foundation underlying the reality of phenomena — in other words, that “ontology” encompasses more than just events and objects in spacetime.
This proposition sounds a little bit like advocating for the existence of ghosts. But it is actually more of an acknowledgment that things may seem ghostlike only because reality has been improperly conceived in the first place. Kastner and colleagues point out that the motions of the planets in the sky baffled ancient philosophers because supposedly in the heavens, reality permitted only uniform circular motion (accomplished by attachment to huge crystalline spheres). Expanding the boundaries of reality allowed those motions to be explained naturally.

Similarly, restricting reality to events in spacetime may turn out to be like restricting the heavens to rotating spheres. Spacetime itself, many physicists are convinced, is not a primary element of reality but a structure that emerges from processes more fundamental. Because these processes appear to be quantum in nature, it makes sense to suspect that something more than just spacetime events has a role to play in explaining quantum physics.

True, it’s hard to imagine the “reality” of something that doesn’t exist “actually” as an object or event in spacetime. But Kastner and colleagues cite the warning issued by the late philosopher Ernan McMullin, who pointed out that “imaginability must not be made the test for ontology.” Science attempts to discover the real world’s structures; it’s unwarranted, McMullin said, to require that those structures be “imaginable in the categories” known from large-scale ordinary experience. Sometimes things not imaginable do, after all, turn out to be real. No fan of the team ever imagined the Indians would win 22 games in a row.

Mating with Neandertals reintroduced ‘lost’ DNA into modern humans

ORLANDO, Fla. — Interbreeding with Neandertals restored some genetic heirlooms that modern humans left behind in the ancient exodus from Africa, new research suggests.

Those heirlooms are versions of genes, or alleles, that were present in humans’ and Neandertals’ shared ancestors. Neandertals carried many of those old alleles, passing them along generation after generation, while developing their own versions of other genes. A small number of humans left Africa around 100,000 years ago and settled in Asia and Europe. These migrants “lost” the ancestral alleles.

But when the migrants or their descendants interbred with Neandertals, Eurasians reinherited the ancestral heirlooms along with Neandertal DNA, John “Tony” Capra reported October 20 at the annual meeting of the American Society of Human Genetics.

Present-day Europeans have more than 47,000 of these reintroduced ancestral alleles, and East Asians — who have more Neandertal ancestry than Europeans (SN Online: 2/12/15) — carry more than 56,000, said Capra, an evolutionary geneticist at Vanderbilt University in Nashville.

Capra and others have evidence that Neandertal versions of genes make humans more prone to some diseases (SN: 3/5/16, p. 18). Of the thousands of ancestral variants reintroduced into modern humans, only 41 have been linked in genetic studies to diseases, such as skin conditions and neurological and psychiatric disorders, he said. The researchers can’t tell for sure whether the effect is from the ancestral variant or neighboring Neandertal DNA. Capra and Vanderbilt colleague Corinne Simonti’s analyses indicate that the Neandertal DNA is more likely to blame. Many of the ancestral alleles are still present in modern-day Africans, Capra said, “so they’re unlikely to be very, very bad.”

Hot, rocky exoplanets are the scorched cores of former gas giants

Earth may not provide the best blueprint for how rocky planets are born.

An analysis of planets outside the solar system suggests that most hot, rocky exoplanets started out more like gassy Neptunes. Such planets are rocky now because their stars blew their thick atmospheres away, leaving nothing but an inhospitable core, researchers report in a paper posted online October 15 at arXiv.org. That could mean these planets are not as representative of Earth as scientists thought, and using them to estimate the frequency of potentially life-hosting worlds is misleading.

“One of the big discoveries is that Earth-sized, likely rocky planets are incredibly common, at least on hotter orbits,” says planetary scientist Eric Lopez of NASA’s Goddard Space Flight Center in Greenbelt, Md., who wasn’t involved in the study. “The big question is, are those hot exoplanets telling us anything about the frequency of Earthlike planets? This suggests that they might not be.”

Observations so far suggest that worlds about Earth’s size probably cluster into two categories: rocky super-Earths and gaseous mini-Neptunes (SN Online: 6/19/17). Super-Earths are between one and 1.5 times as wide as Earth; mini-Neptunes are between 2.5 and four times Earth’s size. Earlier work showed that there’s a clear gap between these planet sizes.

Because planets that are close to their stars are easier for telescopes to see, most of the rocky super-Earths discovered so far have close-in orbits — with years lasting from about two to 100 Earth days — making the worlds way too hot to host life as we know it. But because they are rocky like Earth, scientists include these worlds with their cooler brethren when estimating how many habitable planets might be out there.

If hot super-Earths start out rocky, perhaps it is because the worlds form later than their puffy mini-Neptune companions, when there’s less gas left in the growing planetary system to build an atmosphere. Or, conversely, such planets, along with mini-Neptunes, may start with thick atmospheres. These rocky worlds may have had their atmospheres stripped away by stellar winds.

Now, exoplanet astronomer Vincent Van Eylen of Leiden University in the Netherlands and his colleagues have shown that the fault is in the stars. “You really have these two populations, and the influence of the star is what creates this separation,” Van Eylen says. That result could warn astronomers not to rely too heavily on these hot, rocky worlds when calculating how many habitable planets are likely to exist.

To measure the planets’ sizes, astronomers need to know the sizes of their stars. Van Eylen and colleagues analyzed 117 planets whose host stars’ sizes had been measured using asteroseismology. This technique tracks how often the star’s brightness changes as interior oscillations ripple through it, and uses the frequency to determine its size.

“Think of the stars as musical instruments,” Van Eylen says. A double bass and a violin produce sound the same way, but the pitch is different because of the instrument’s size. “It’s exactly the same thing with stars.”
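One common asteroseismic scaling relation — not spelled out in the story, so treat the details here as an assumption — makes the instrument analogy concrete: the typical spacing between a star’s oscillation frequencies scales with the square root of its mean density, so a bigger star “plays” a lower pitch. A minimal sketch:

```python
# Hedged sketch of a standard asteroseismic scaling relation (an assumption,
# not taken from the study): the large frequency separation obeys
#   delta_nu / delta_nu_sun = sqrt((M / M_sun) / (R / R_sun)**3)
# Bigger instrument -> lower pitch, as the double-bass analogy suggests.
DELTA_NU_SUN = 135.0  # approximate solar value, in microhertz

def stellar_radius(delta_nu, mass=1.0):
    """Radius in solar radii from the large frequency separation (microhertz)."""
    return (mass ** (1.0 / 3.0)) * (DELTA_NU_SUN / delta_nu) ** (2.0 / 3.0)

print(round(stellar_radius(135.0), 2))  # 1.0 -- a Sun-like star
print(round(stellar_radius(60.0), 2))   # lower pitch, noticeably bigger star
```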

The researchers then calculated the planets’ sizes — between one and four times the size of Earth — with about four times greater precision than in previous studies. As expected, the planets clustered in groups of around 1.5 and 2.5 times Earth’s radius, leaving a gap in the middle.

Next the team looked at how the planets’ sizes changed with distance from the host star. Planets that were rocky from the start should be smaller close to the stars, where studies of other young star systems suggest there should have been less material available when these planets were forming. But if proximity to a star’s winds is key, there should be some larger rocky worlds closer in, with smaller gaseous worlds farther out.

Van Eylen’s planets matched the second picture: The largest of the rocky planets nestled close to the stars were bigger than the distant ones. That suggests the rocky planets once had atmospheres, and lost them.

“It’s not fair to take the close-in planets and assume that the more distant planets are just like them,” says exoplanet astronomer Courtney Dressing of the University of California, Berkeley. “You might be fooling yourself.”

This sea slug makes its prey do half the food catching

Kleptopredation
klep-toe-preh-day-shun n.
A food-gathering strategy of eating an organism and the meal it just ate.

A wily sea slug has a way to get two meals in one: It gobbles up smaller predators that have recently gulped in their own prey.

“Kleptopredation” is the term Trevor Willis of the University of Portsmouth in England and his colleagues propose for this kind of food theft by well-timed predation.

Researchers knew that the small Mediterranean nudibranch Cratena peregrina, with a colorful mane of streamers rippling off its body, climbs and preys on pipe cleaner‒skinny, branched colonies of Eudendrium racemosum hydroids, which are distant relatives of corals. The nudibranchs devour the individual hydroid polyps and, new tests show, prefer them well fed.

In experimental buffets with fed or hungry polyps, the nudibranchs ate faster when polyps were fat with just-caught plankton. In this way, at least half of a nudibranch’s diet is plankton. This quirk explains why some biochemical signatures that distinguish predators from prey don’t work out clearly for nudibranchs and hydroids, the researchers report November 1 in Biology Letters.

A weird echo of this meal-stealing strategy shows up in certain jumping spiders. The arachnids don’t have the biology to drink vertebrate blood themselves. Instead, they catch a lot of female mosquitoes that have just tanked up (SN: 10/15/05, p. 246).

Staring into a baby’s eyes puts her brain waves and yours in sync

When you lock eyes with a baby, it’s hard to look away. For one thing, babies are fun to look at. They’re so tiny and cute and interesting. For another, babies love to stare back. I remember my babies staring at me so hard, with their eyebrows raised and unblinking eyes wide open. They would have killed in a staring contest.

This mutual adoration of staring may be for a good reason. When a baby and an adult make eye contact, their brain waves fall in sync, too, a new study finds. And those shared patterns of brain activity may actually pave the way for better communication between baby and adult: Babies make more sweet, little sounds when their eyes are locked onto an adult who is looking back. The scientists report the results online November 28 in the Proceedings of the National Academy of Sciences.

Psychologist Victoria Leong of the University of Cambridge and Nanyang Technological University in Singapore and colleagues invited infants into the lab for two experiments. In the first, the team outfitted 17 8-month-old babies with EEG caps, headwear covered with electrodes that measure the collective behavior of nerve cells across the brain. The infants watched a video in which an experimenter, also outfitted in an EEG cap, sang a nursery rhyme while looking either straight ahead at the baby, at the baby but with her head turned at a 20-degree angle, or away from the baby and with her head turned at a 20-degree angle.

When the researcher looked at the baby (either facing the baby or with her head slightly turned), the babies’ brains responded, showing activity patterns that started to closely resemble those of the researcher.

The second experiment moved the test into real life. The same researcher from the video sat near 19 different babies. Again, both the babies and the researcher wore EEG caps to record their brain activity. The real-life eye contact prompted brain patterns similar to those seen in the video experiment: When eyes met, brain activity fell in sync; when eyes wandered, brain activity didn’t match as closely.

The baby’s and the adult’s brain activity appeared to get in sync by meeting in the middle. When gazes were shared, a baby’s brain waves became more like the researcher’s, and the researcher’s more like the baby’s. That finding is “giving new insights into infants’ amazing abilities to connect to, and tune in with, their adult caregivers,” Leong says.

What are simpatico brain waves actually good for, you might ask? Well, researchers don’t know exactly, but they have some ideas. When high school students’ brain waves were in sync with one another, the kids reported being more engaged in the classroom, a recent study found. And when two adults reach a mutual understanding, their brains synchronize, too, says another study. These findings hint that such synchronization lets signals flow easily between two brains, though Leong says that much more research needs to be done before scientists understand synchronization’s relevance to babies’ communication and learning.

That easy signal sending is something that happened between the babies and the adult, too. When the experimenter was looking at the babies, the babies made more vocalizations. And in turn, these sweet sounds seemed to have made the experimenter’s brain waves even more similar to those of the babies.

It’s a beautiful cycle, it seems, when eyes and brains meet. And that meeting spot is probably where some interesting learning happens, for both adult and baby.

Jupiter’s massive Great Red Spot is at least 350 kilometers deep

NEW ORLEANS — Jupiter’s Great Red Spot has deep roots. Data from the first pass of NASA’s Juno spacecraft over the incessant storm show that its clouds stretch at least 350 kilometers down into the planet’s atmosphere. That means the storm is about as deep as the International Space Station is high above the Earth.

Juno has been orbiting Jupiter since July 4, 2016, and it made its first close flyby of the red spot about a year later (SN Online: 7/7/17). As the spacecraft swooped 9,000 kilometers above the giant storm, Juno’s microwave radiometer peered through the deep layers of cloud, measuring the atmosphere’s temperature down hundreds of kilometers.
“Juno is probing beneath these clouds, and finding the roots of the red spot,” Juno co-investigator Andrew Ingersoll of Caltech said December 11 at a news conference at the American Geophysical Union’s fall meeting. Cheng Li of Caltech presented the research at AGU on December 12.

The radiometer probes different layers of the atmosphere by measuring the gas in six different microwave wavelengths. Ingersoll and his colleagues found that the gas beneath the red spot’s surface gets warmer with depth, and a warm zone at the same location as the spot was visible down to 350 kilometers.

The fact that the 16,000-kilometer-wide spot is warmer at the bottom than at the top could help explain the storm’s screaming wind speeds of about 120 meters per second. Warm air rises, so the internal heat could provide energy to churn the storm.

Juno principal investigator Scott Bolton of the Southwest Research Institute in San Antonio notes that the spot “goes as deep as we can see,” but it could go deeper. “I’m not sure we’ve established the true foot,” he says. On a future flyby, Juno will try to use gravity data to detect the storm at depths of thousands of kilometers. If the spot does go down that deep, theorists will struggle to explain why, Bolton says.

The only previous data on Jupiter’s interior came from the Galileo spacecraft, which ended its mission by entering Jupiter’s atmosphere at a single point in 1995. “I like to say that if aliens sent a probe to Earth and it landed in the Sahara, they would conclude the Earth is all desert,” says planetary scientist Michael Wong of Caltech, who was not involved in the new study. “Juno getting this global view gives us a new understanding of the inner workings … We have never really seen the interior of a giant planet in this way before.”

These weather events turned extreme thanks to human-driven climate change

NEW ORLEANS — For the first time, scientists have definitively linked human-caused climate change to extreme weather events.

A handful of extreme events that occurred in 2016 — including a deadly heat wave that swept across Asia — simply could not have happened due to natural climate variability alone, three new studies find. The studies were part of a special issue of the Bulletin of the American Meteorological Society, also known as BAMS, released December 13.

These findings are a game changer — or should at least be a conversation changer, Jeff Rosenfeld, editor in chief of BAMS, said at a news conference that coincided with the studies’ release at the American Geophysical Union’s annual meeting. “We can no longer be shy about talking about the connection between human causes of climate change and weather,” he said.

For the last six years, BAMS has published a December issue containing research on extreme weather events from the previous year that seeks to disentangle the role of anthropogenic climate change from natural variability. The goal from the start has been to find ways to improve the science of such attribution, said Stephanie Herring of the National Oceanic and Atmospheric Administration’s National Centers for Environmental Information in Boulder, Colo., who was lead editor of the latest issue.

To date, BAMS has published 137 attribution studies. But this is the first time that any study has found that a weather event was so extreme that it was outside the bounds of natural variability — let alone three such events, Herring said.

In addition to the Asia heat wave, those events were the record global heat in 2016 and the growth and persistence of a large swath of high ocean temperatures, nicknamed “the Blob,” in the Bering Sea off the coast of Alaska. The unusually warm waters, which lingered for about a year and a half, have been linked to mass die-offs of birds, collapsed codfish populations in the Gulf of Alaska and altered weather patterns that brought drought to California.

Many of the other 24 studies in the new issue found a strong likelihood of human influence on extreme weather events, but stopped short of saying they were completely out of the realm of natural variability. One study found that an already strong El Niño in 2016 was probably enhanced by human influence, contributing to drought and famine conditions in southern Africa. Another reported that greenhouse gas–driven warming of sea surface temperatures in the Coral Sea was the main factor driving an increase in coral bleaching risk along the Great Barrier Reef. But not all of the studies linked 2016’s extreme events to human activity. Record-breaking rainfall in southeastern Australia between July and September, for example, was due to natural variability, one study found.

With hurricanes, wildfires and drought, 2017 is chock-full of extreme event candidates for next year’s crop of BAMS attribution studies. Already, the likelihood of human influence on the extreme rainfall from Hurricane Harvey is the subject of three independent studies, two of which were also presented at the American Geophysical Union meeting. The storm dropped about 1.3 meters of water on Houston and its surrounding areas in August. The three studies, discussed in a separate news conference December 13, found that human influence probably increased the hurricane’s total rainfall by at least 15 to 19 percent.

“I think [the BAMS studies] speak to the profound nature of the impacts we’re now seeing,” says Michael Mann, a climate scientist at Penn State who was not involved in any of the studies. But Mann says he’s concerned that many researchers are too focused on quantifying how much human influence was responsible for a particular event, rather than how human influence affects various processes on the planet. One example, he notes, is the established link between rising temperatures and increased moisture in the atmosphere that is also implicated in Hurricane Harvey’s extreme rainfall.

Another possible issue with attribution science, he says, is that the current generation of simulations simply may not be capable of capturing some of the subtle changes in the climate and oceans — a particular danger when it comes to studies that find no link to human activities. It’s a point that climate scientist Andrew King of the University of Melbourne in Australia, who authored the paper on Australia’s rainfall, also noted at the news conference.

“When we find no clear signal for climate change, there might not have been a human influence on the event, or [it might be that] the particular factors of the event that were investigated were not influenced by climate change,” he said. “It’s also possible that the given tools we have today can’t find this climate change signal.”

Rosenfeld noted that people tend to talk about the long odds of an extreme weather event happening. But with studies now saying that climate change was a necessary condition for some extreme events, discussions about long odds no longer apply, he said. “These are new weather extremes made possible by a new climate.”

Smothered jet may explain weird light from neutron star crash

The neutron star collision heard and seen around the world has failed to fade. That lingering glow could mean that a jet of bright matter created in the crash has diffused into a glowing, billowy cocoon that surrounds the merged star, researchers report online December 20 in Nature.

Gravitational waves from the collision between two ultradense stellar corpses were picked up in August by the Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO, and its sister experiment in Italy, Advanced Virgo (SN: 11/11/17, p. 6). Using telescopes on the ground and in space, physicists raced to conduct follow-up observations, and found that the collision released light across the electromagnetic spectrum.

Right away, the event looked unusual, says astrophysicist Kunal Mooley, who conducted the research while at the University of Oxford. Physicists think that a jet of fast-moving, bright material blasts out of the center of neutron star collisions. If that jet is aimed directly at Earth, telescopes can see it as an ephemeral flash of light called a short gamma-ray burst, or GRB.

But the gamma-ray signals produced by the August collision were 10,000 times less bright than those seen in other detected short gamma-ray bursts. Even stranger, X-rays and radio waves from the event didn’t appear until about 16 days after the collision. Most short gamma-ray bursts are visible in X-rays and radio waves right away and fade over time.

Astronomers thought those oddities meant the jet was facing slightly away from Earth and expected the light to fade quickly. But Mooley and colleagues continued tracking the glow with three radio telescope arrays on three continents for more than 100 days after the collision. Radio wave emissions continued to brighten for at least 93 days, and are still visible now, the team found. (X-rays were temporarily blocked when the neutron star moved behind the sun from Earth’s perspective.)

“This thing continues to rise, instead of fading into oblivion as we expected,” says astrophysicist Wen-fai Fong of Northwestern University in Evanston, Ill., who was not involved in the new study.

The finding may mean that astronomers are seeing a new kind of gamma-ray burst. Mooley and colleagues suggest that the rise in radio wave emissions could be explained if the jet slammed into a shell of neutron-rich material kicked out in the neutron star crash, transferring most of its energy to that debris and smothering the jet. That extra energy could create a glowing cocoon that keeps radiating far longer than the original blast.

The new result is “really challenging our understanding of what physics is happening from this merger,” Fong says. But, she adds, “the jury is still out on whether this is the same as the short GRBs we’ve seen over the past decade, or whether it’s something completely different,” such as a luminescent cocoon. She and her colleagues also took radio wave observations of the merged stars in the first 100 days after the collision. The team is preparing a paper with a different interpretation that includes a jet emerging from the wreckage later, she says.

Other explanations for the lingering light are possible, Mooley acknowledges. Future detections “will give us an opportunity to really study … what fraction of neutron star mergers give rise to [short] GRBs and what fraction give rise to other phenomena and explosions that we haven’t seen so far in our universe,” he says.