Runners-up | Science

A killer impact and its aftermath

Illustration of the Chicxulub impact crater on the Yucatán Peninsula in Mexico soon after its creation.


After a giant asteroid hit Earth 66 million years ago, 76% of the world’s species, including the big dinosaurs, disappeared. But exactly how and when they died, and how quickly ecosystems recovered, has been unclear. Now, a sediment core extracted from the site of the impact on Mexico’s Yucatán Peninsula, together with several rich fossil finds in the United States, is bringing the cataclysm and its aftermath into sharp focus.

In 2016, the International Ocean Discovery Program drilled into the rugged hills around the center of the 193-kilometer-wide Chicxulub crater, which now lies mostly underwater on the Yucatán coast. The drilling extracted an 835-meter core, including 130 meters deposited the day the asteroid hit. An examination of the core, published this year, provides an almost minute-by-minute reconstruction of what happened after the impact. Molten rock filled the impact hole, followed by a hailstorm of debris. The ocean surged in, churning the deposits; then, by the end of the first day, a tsunami swept in more material, including charcoal from impact-induced wildfires. Even though sulfur-rich material was abundant at that site, there was little present in the core, suggesting it all vaporized and likely helped cause rapid global cooling and darkness.

Thousands of kilometers from ground zero, a new site in North Dakota captured the catastrophic effects of the impact on living things. In less than 1 hour, impact-induced seismic activity caused waves of water to rush up an ancient river system at the site, sweeping living things into tumbled deposits. Fossil fish bear a vivid fingerprint of the impact: Their gills are filled with glass particles, rich in iridium, from the impactor itself.

But life recovered faster than expected, as an analysis of pollen, plant fossils, mammal skulls, and other bones at another site, Colorado’s Corral Bluffs, chronicles. Ferns and mammals no bigger than rats survived the impact, which ended the Cretaceous period and marked the beginning of the Paleogene, creating the K-Pg boundary. Palm trees replaced the ferns within 1000 years; by 300,000 years, walnutlike species dominated; and by 700,000 years after the impact, legumes showed up. Mammals doubled in size and diversity within the first 100,000 years, a trend that accelerated and continued, particularly after the legumes appeared; by 700,000 years, some mammals topped 50 kilograms.

Last year, an analysis of tiny shelled plankton called foraminifera in the Chicxulub drill core indicated that the marine ecosystem at the crater site was back up and running within 30,000 years, much faster than anticipated. But recovery was slower elsewhere. This year, analyses of foraminifera from the drill core and from sites around the world documented rapid acidification of the oceans following the impact and suggested a 50% reduction in the amount of organic material reaching the sea bottom, which may have suppressed marine life for 1 million years.

All these results made for a “superyear” in studies of the K-Pg mass extinction, says Vivi Vajda, a paleontologist at the Swedish Museum of Natural History in Stockholm. —Elizabeth Pennisi

1 People’s Choice

Face to face with the Denisovans

This artist’s reconstruction of a Denisovan girl from Siberia is based on a new way to infer physical features from DNA.


Almost 40 years ago, a Buddhist monk found an odd human jaw bone in Baishiya Karst Cave, high on the edge of the Tibetan Plateau. Recognizing that the jaw, with its giant molars, was something special, he gave it to another monk, who donated it to scholars. But no one knew what to make of it. Then in May, scientists applied a new method of analyzing ancient proteins and identified the strange jaw as that of a Denisovan, a member of a mysterious group of extinct humans who ranged across Asia until some 50,000 years ago, about the same time as the Neanderthals. The work brings these enigmatic ancient people into focus, and heralds a potential protein-based revolution in understanding ancient life.

The Denisovans have haunted human evolution researchers for 10 years. Back in 2010, researchers identified them by sequencing DNA from a fossilized pinkie bone found in Denisova Cave in Siberia. The DNA, which came from a girl, differed from that of Neanderthals and modern humans. Today, ghostly traces of Denisovans linger in the DNA of living people across Asia, suggesting the group was once widespread and mingled with both Neanderthals and modern humans. But until this year, only a few scraps of additional Denisovan fossils had been identified, all from Denisova Cave. Scientists were left guessing what the Denisovans looked like.

The 160,000-year-old Baishiya jaw yielded no DNA. But a Chinese and European team managed to extract collagen, a common protein, from the bone, and match it with collagen from the Denisova Cave girl. That suggested the jaw was Denisovan and that these mystery humans had robust jaws, big molars, and teeth with three roots.

In September, another team refined that picture by applying a new technique to the Denisova Cave girl’s genome. They traced chemical modifications of the DNA called methylation, which can silence genes, then combined that information with a database that describes how missing or defective genes influence anatomy in living people. The results suggested how the methylation pattern of the girl’s DNA might have shaped her body. The team concluded that she would have looked a lot like a Neanderthal, with a wide pelvis, sloping forehead, and protruding lower jaw. But she also had a wider face than modern humans or Neanderthals, and a longer arch of teeth along her jaw bone. When the researchers tested their view of the Denisovan smile against the newly identified Baishiya jaw bone, it fit almost perfectly. —Elizabeth Culotta

2 People’s Choice

Hope for Ebola patients, at last

A health worker dons protective gear during the ongoing Ebola outbreak in the DRC.


In 1976, a new virus emerged from the rainforest of the Democratic Republic of the Congo (DRC, then called Zaire). It killed 280 people in the village of Yambuku and disappeared, only to pop up again sporadically with devastating effect. Ever since, the name of that virus, adopted from a nearby river, has been synonymous with deadly, incurable infection: Ebola. This year, that characterization began to change.

In the midst of another outbreak, the deadliest in the DRC’s history, scientists finally identified two drugs that dramatically reduce death rates from the disease. Both are antibodies, one isolated from a survivor of a 1996 Ebola outbreak, the other a mix of three antibodies produced in mice with humanized immune systems. In a randomized trial that pitted four different drugs against each other, about 70% of the patients who received one of those two medicines survived, compared with about 50% of those given either of the other two drugs. The result was so compelling that the trial was stopped early. Simply conducting the trial was a notable achievement in itself: It was carried out in makeshift treatment units in the midst of a devastating outbreak and in an area of violent conflict.

The result should help combat the disease not only by improving patients’ chances of survival, but also by encouraging people to seek treatment early. With no effective drugs available, people with symptoms have often tried to evade detection and sought out traditional healers, which has fueled outbreaks.

More than 40 years after the threat from Ebola emerged, the world is finally better prepared to deal with the virus. And the architect of this victory, the principal investigator of the trial, was Jean-Jacques Muyembe-Tamfum, the Congolese virologist who was instrumental in the discovery of the virus in Yambuku. —Kai Kupferschmidt

Quantum supremacy attained

An array of chips for Google’s quantum processor being prepared for testing.


In October, the era of quantum computing dawned—maybe. Physicists with Google claimed they had used a quantum computer to calculate something no ordinary computer could, reaching a milestone known as quantum supremacy. Although a rival group disputed the claim, it was widely hailed as a major achievement. But a quantum computer that can solve practical problems could still remain decades away.

A conventional computer manipulates information encoded in bits that can be set to either zero or one. A quantum computer uses qubits that can be set to zero, one, or both zero and one at once. For certain problems, possible solutions can then be represented by different quantum waves sloshing simultaneously through the qubits. The wrong solutions can interfere to cancel one another, while the right solution pops out. Because the system explores a vast number of potential solutions at once, a full-fledged quantum computer could, for example, factor huge numbers much faster than conventional computers. That could enable a quantum computer to crack current internet security protocols.
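The superposition-and-interference idea described above can be sketched in a few lines of Python. This is a toy illustration of a single qubit, not Google's 53-qubit experiment: a Hadamard gate puts a qubit set to zero into an equal mix of zero and one, and applying it again makes the two paths interfere so that the wrong outcome cancels.

```python
import numpy as np

# A qubit's state is a 2-component complex vector of amplitudes;
# measurement probabilities are the squared magnitudes of the amplitudes.

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

zero = np.array([1.0, 0.0])   # qubit set to 0
superposed = H @ zero         # amplitudes (0.707, 0.707): zero and one at once
back = H @ superposed         # the two paths interfere

probs = np.abs(back) ** 2     # measurement probabilities
print(probs.round(6))         # [1. 0.]: the amplitude for "one" cancelled away
```

With n qubits the state vector has 2**n amplitudes, which is why simulating even a 53-qubit device pushes classical supercomputers to their limits.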

Google researchers say they have taken a key step toward a functioning quantum computer by achieving quantum supremacy with an abstract test problem. Using a chip containing 53 qubits made of tiny circuits of superconducting metal, they implemented a set of randomly chosen interactions and showed, essentially, that the machine would output the correct quantum state. For calculations requiring a few qubits, they checked the result with supercomputer simulations. For larger numbers of qubits, they employed a statistical measure to help confirm the result. The comparisons showed that the quantum computer calculated in 200 seconds something that would take a supercomputer 10,000 years to figure out, the team says.

Immediately, however, IBM researchers questioned whether Google had truly reached its mark. They claimed that, with the right algorithm, a supercomputer could solve the problem in as little as 2 days. Other physicists say that to solve practical problems, a quantum computer will have to be able to correct errors in its own qubits, something that has yet to be achieved. Researchers also face massive practical challenges in scaling up from a single machine containing a few dozen qubits to a vast array of machines containing 100 million qubits—the number it would take to crack internet encryption. The era of quantum computing may have dawned, but we may be waiting a while for breakfast. —Adrian Cho

A ‘missing link’ microbe emerges

It took 12 years to culture this tentacled deep-sea microbe (lighter color).


This year, microbiologists took a major step toward resolving a controversy over the origin of eukaryotes, the group that encompasses all plants and animals—including humans. After 12 years of trying, a team in Japan succeeded in growing a mysterious microbe from deep-sea sediments and sequenced its genome. It could shed light on the ultimate ancestry of us all.

The organism, Prometheoarchaeum syntrophicum strain MK-D1, is a member of the recently recognized Asgard group of microbes, which are not bacteria but a completely separate branch of life called archaea. The Asgards were known only from DNA fragments isolated from deep-sea sediments and other extreme environments. Surprisingly, those fragments contain genes formerly thought to be found only in eukaryotes—organisms with cells that have nuclei and organelles such as mitochondria. Comparative DNA analyses indicated the Asgards, or an ancient relative, may have even given rise to eukaryotes. That radical idea would shrink the domains of life from three—archaea, eukaryotes, and bacteria—to two: bacteria and archaea, with eukaryotes reduced to a subset of archaea. But given the scant evidence, many researchers were skeptical.

By growing an Asgard in culture, the team in Japan could sequence its full genome and confirm that it carries eukaryotic genes. The researchers also found that it seems to grow best in association with certain bacteria, and that it forms short tentacles that might engulf bacterial companions. If so, that could explain how an Asgard acquired the microbial guests that became mitochondria. (A paper detailing the findings was posted on the bioRxiv preprint archive; it has now been accepted for publication.)

Other studies this year have identified more eukaryotic genes in DNA fragments from other members of the Asgard group. And information derived from DNA about Asgard metabolism also bolsters the two-domain over the three-domain hypothesis. Proponents of both ideas agree, however, that events that happened almost 3 billion years ago will be hard to reconstruct, and that new ideas may emerge as more Asgards are studied—and, perhaps, cultured. But with one now in hand, researchers have a clearer window into that distant past. —Elizabeth Pennisi

A close-up of a far-out object

Arrokoth, a remnant of the early years of the Solar System.


Last year, it was just a tiny gray spot in the blackness of space; now, it is Arrokoth. On the first day of this year, NASA’s $800 million New Horizons spacecraft swept by 2014 MU69, a 36-kilometer-wide object some 6.6 billion kilometers from Earth, in a region beyond Neptune called the Kuiper belt. Astronomers have discovered thousands of objects lurking in the belt, which they believe harbors material little altered from the early years of the Solar System. But they have never had a close-up look.

New Horizons revealed that Arrokoth—MU69’s new official name, after the Powhatan and Algonquian word for “sky”—consists of two pristine planetary building blocks that resemble lumpy pancakes, joined at a narrow neck. Relatively free of craters, Arrokoth’s two icy lobes formed separately at the Solar System’s beginning, likely condensing out of the same cloud of dust. Their odd shape and unmarred, homogeneous surfaces support a new notion for how planetary building blocks form. They don’t grow by collision after collision. Rather, soon after the Sun formed, static electricity drew together dusty grains into centimeter-size pebbles; the swirl of the primordial nebula then caused the pebbles to flock together into gathering clouds, which gravitationally collapsed into kilometer-size lumps. This “streaming instability” can explain why Arrokoth has two lobes: The pebble cloud rotated faster as it collapsed, developing turbulence that fractured it. Two of the pieces drew close until their axes aligned, and mutual attraction pulled them together in a gentle kiss.

Much work remains to be done; the spacecraft will not even finish beaming back all its observations of Arrokoth until the end of 2020. Even then, its mission may not be over: The New Horizons team is now using the probe’s telescope to search for a new Kuiper belt target to visit, one too small for any telescope on Earth to see. —Paul Voosen

4 People’s Choice

In a first, drug treats most cases of cystic fibrosis

A model of the CFTR protein (green), which is defective in cystic fibrosis, at the cell surface.


In October, scientists celebrated a milestone for gene-based drugs: the approval of an effective treatment for most cases of cystic fibrosis (CF). The treatment, a triple-drug combination called Trikafta, corrects the effects of the most common mutation behind the lung disease. For those who have the mutation—about 90% of all CF patients—it could convert CF from a progressive disease into a more manageable chronic illness. Trikafta is the upshot of 30 years of research since the CF gene, CFTR, was discovered.

CF strikes when a child inherits two mutated copies of the gene, and life expectancy for patients born today averages in the mid-40s. Trikafta builds on other CF drugs made by the company Vertex Pharmaceuticals, which target different defects in the CFTR protein. The first, Kalydeco, was leveled at a rare mutation called G551D, which affects about 4% of patients in the United States. In these patients, the CFTR protein fails to open its “gate” and let chloride pass through—which, like other defects in CFTR, leads to a buildup of mucus in the lungs. Kalydeco mended this defect and was approved in 2012. Vertex then combined Kalydeco with another drug aimed at repairing the effects of a different mutation, F508del, which misfolds CFTR and prevents it from reaching the cell surface. But this two-drug formulation, Symdeko, proved less effective than hoped.

Trikafta adds a third drug to the mix to enhance the strategy’s effectiveness. The triple combination, which is geared to CF patients who carry at least one copy of F508del, helps CFTR reach the cell membrane and open its gate. In clinical trials the drug boosted lung capacity by 10% to 15% and blunted CF complications.

Questions remain about how early to start the treatment—it’s now approved for ages 12 and up but is being tested on younger children—and how to design new drugs for the 10% of patients whose disease isn’t targeted by Trikafta.

Amid the excitement looms a shadow, however: Trikafta has a list price of more than $300,000 a year, and presumably must be taken for life. —Jennifer Couzin-Frankel

Microbes combat malnourishment

Supplements that improve gut microbes could be a game changer for children like this infant in Bangladesh.


Each year, millions of severely malnourished children fail to recover completely, remaining stunted and sickly even after they are well fed. Ten years of research has pointed to a root cause: Their gut microbes do not mature. This year, an international team built on that research to come up with a low-cost, easy-to-obtain supplement that preferentially stimulates the growth of beneficial gut bacteria. The supplement performed well in a small trial, and larger-scale clinical trials are now underway to see how well it works to prevent stunting.

Earlier research determined that malnourished children who fail to recover have gut microbial communities—or microbiomes—characteristic of infants, and that more mature microbiomes are key to responding well to nourishment. The team first pinpointed 15 types of bacteria that characterize a mature gut microbiome. They also identified blood markers, including proteins, that signal a recovery from the effects of malnutrition. They then tested various combinations of food readily found in developing countries to see how the microbiome responded, first in mice, then in pigs, and finally in a small group of malnourished children.

Milk powder and rice, standard components of food aid, did little to foster expansion of the key bacteria, but supplements containing chickpeas, bananas, and soy and peanut flours helped the microbiomes mature. After a short clinical trial, children who received the supplements had more of the blood proteins and metabolites that are markers for normal growth.

More children are being followed for longer periods to see whether these changes translate into recovery from stunting, the final proof that improving the microbiome can help solve this worldwide problem. If it can, and particularly if the treatment can be provided outside a hospital, at home—a recent trend for combatting this problem—“the impact could be huge,” says Eric Pamer, a physician scientist at the University of Chicago in Illinois. —Elizabeth Pennisi

Artificial intelligence masters multiplayer poker

This year, an artificial intelligence (AI) program beat some of the world’s best players in the most popular version of poker, no-limit Texas Hold ’em. The landmark result marks the first time AI has prevailed in a multiplayer contest in which players have only imperfect information about the state of the game.

AI has been trouncing humans in games at a spectacular rate. In 2007, computer scientists developed a program guaranteed not to lose at checkers. In 2016, another team developed an AI program that defeated the best humans at Go, a board game with vastly more configurations than checkers.

Poker presents a stiffer challenge, as players cannot see their opponents’ cards and thus have limited information. In 2017, computer scientists developed an AI program unbeatable at a two-player version of Hold ’em—in which each player forms a hand from five shared cards laid face up on the table and two cards held privately.

Now, AI has bested world-class players in the full multiplayer game, as computer scientists at Carnegie Mellon University in Pittsburgh, Pennsylvania, announced in August. By playing 1 trillion games against itself, their program, Pluribus, developed a basic strategy for various kinds of situations—say, playing for an inside straight. For each specific hand, it could also think through how the cards would likely play out. In 20,000 hands with six players it outperformed 15 top-level players, as measured by average winnings per hand.

Pluribus played differently from programs for two-player games. Those programs sought out a no-lose strategy, known as a Nash equilibrium, which guarantees that, on average, their opponents would do worse unless they also played with the exact same strategy. With multiple opponents, there is no such guarantee, so Pluribus simply learned what was most effective in a given situation. The program also taught itself to play while running on a single server with 64 processors—whereas the Go-playing program required more than 1200 processors.
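The self-play learning idea can be sketched with regret matching, a standard technique from this family of algorithms, shown here on rock-paper-scissors rather than poker. This is an illustration of the general principle, not the Pluribus algorithm itself: each copy of the learner raises the probability of actions it regrets not having taken, and its average strategy drifts toward the Nash equilibrium, which for this game mixes all three actions equally.

```python
import numpy as np

PAYOFF = np.array([[0, -1, 1],    # rock     vs rock/paper/scissors
                   [1, 0, -1],    # paper    (payoff to the row player)
                   [-1, 1, 0]])   # scissors

def strategy(regret):
    """Play each action in proportion to its positive accumulated regret."""
    positive = np.maximum(regret, 0.0)
    total = positive.sum()
    return positive / total if total > 0 else np.ones(3) / 3

rng = np.random.default_rng(0)
regret = [np.zeros(3), np.zeros(3)]
strategy_sum = [np.zeros(3), np.zeros(3)]

for _ in range(20000):
    strat = [strategy(regret[p]) for p in (0, 1)]
    acts = [rng.choice(3, p=strat[p]) for p in (0, 1)]
    for p in (0, 1):
        # Payoff for each of player p's possible actions vs the opponent's move.
        payoffs = PAYOFF[:, acts[1]] if p == 0 else -PAYOFF[acts[0], :]
        regret[p] += payoffs - payoffs[acts[p]]  # regret for the roads not taken
        strategy_sum[p] += strat[p]

avg = strategy_sum[0] / strategy_sum[0].sum()
print(avg)  # close to [1/3, 1/3, 1/3], the Nash equilibrium mix
```

Pluribus works at vastly larger scale and adds abstraction and limited lookahead, but the core loop is the same: play yourself, track what you regret, and shift probability toward it.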

AI developers aren’t done playing games. In poker there’s still room for improvement. Although Pluribus can bluff, it cannot adapt its strategy to exploit an opponent’s particular weaknesses. Some more complex games such as contract bridge remain unmastered. Still, the most famous objective in the application of AI to games has fallen to the computers. It may be time humans cashed in their chips. —Adrian Cho
