What was the biggest science / tech breakthrough of the last decade?

Broad researchers weigh in with their thoughts, ranging from sequencing advances to space exploration.

Credit: Lauren Solomon, Broad Communications

As 2019 draws to a close, we bid farewell not only to a year, but to a decade — and what an extraordinary decade it’s been for science! Genome sequencing has gotten faster and cheaper, computing power has reached unprecedented levels, and datasets have grown immense — all leading to an explosion of scientific discovery in a wide range of fields.

We asked members of the Broad’s research community for their thoughts on what the biggest breakthrough has been in the last 10 years, either in their field or biomedical science in general. Here is what they told us. This is just a sampling of perspectives, and not a comprehensive review of all the great science that has happened since 2010.

 

Paul Blainey, core institute member

I would argue for the "molecular barcode" (small bits of DNA or RNA sequence used to uniquely label each molecule in a sample). Barcodes are essential to high-throughput single-cell genomics in multiple ways: 1) marking the cells, 2) quantifying RNA accurately, 3) following cell fates and developmental trajectories (e.g., lineage tracing), and 4) collecting additional information, such as protein abundance, using CITE-seq. Single-cell barcodes are also used in conjunction with CRISPR technology, for example in Perturb-seq.
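To make the barcode idea concrete, here is a minimal sketch (an illustration, not part of the original response) of how unique molecular barcodes enable accurate RNA quantification: reads that share the same cell barcode, molecular barcode (UMI), and gene are collapsed into a single molecule before counting, so PCR duplicates do not inflate expression estimates. The tuple format and example values are assumptions made for illustration, not any specific protocol.

    # Minimal sketch: collapse reads by (cell barcode, UMI, gene) to count molecules.
    # Input format and values are illustrative assumptions, not a specific protocol.
    from collections import defaultdict

    def count_molecules(reads):
        """reads: iterable of (cell_barcode, umi, gene) tuples, one per sequencing read.
        Returns {cell_barcode: {gene: molecule_count}}, counting each distinct
        (cell, UMI, gene) combination once, so amplification duplicates are ignored."""
        seen = set()
        counts = defaultdict(lambda: defaultdict(int))
        for cell, umi, gene in reads:
            key = (cell, umi, gene)
            if key in seen:          # same original molecule sequenced again
                continue
            seen.add(key)
            counts[cell][gene] += 1  # one barcode combination = one original molecule
        return counts

    # Toy example: three reads, but only two distinct molecules in cell "ACGT".
    reads = [
        ("ACGT", "TTAGC", "GAPDH"),
        ("ACGT", "TTAGC", "GAPDH"),  # PCR duplicate of the first read
        ("ACGT", "GGCAT", "GAPDH"),
    ]
    counts = count_molecules(reads)
    print(counts["ACGT"]["GAPDH"])   # 2, not 3: the duplicate is collapsed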

 

Steven Carr, institute scientist and senior director of the Proteomics Platform

I think that one of the biggest breakthroughs in science was Voyager 1's literal break through the heliosphere. As the first human-made object to reach interstellar space, it marked a true first for humankind. On the biomedical side of things, I would say immunotherapy was the biggest and most important breakthrough.

 

Stacey Gabriel, institute scientist and senior director of the Genomics Platform

I would vote for gene editing, both for the impact on the ability to functionally characterize DNA variation and for the potential clinical applications.

 

Alicia Martin, associated scientist, Stanley Center for Psychiatric Research

The growth in genetic studies and related technology development has been astounding. I would probably have to cast a vote for CRISPR, given how game-changing a tool it is for making carefully controlled biological perturbations and teasing apart disease mechanisms.

 

Heidi Rehm, institute member, Program in Medical and Population Genetics

The most important breakthrough in my field in the last decade is the launch of clinically useful public gene variant databases. These include high-quality population databases, beginning with ExAC and now superseded by gnomAD, which houses data from nearly 200,000 individuals, as well as resources like ClinVar and ClinGen, which contain clinically interpreted variants submitted by thousands of labs around the world. These resources have dramatically improved our ability to return accurate and clinically meaningful answers to patients with suspected genetic disorders.
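As a rough illustration of how such a resource is used in practice (not part of the original response), the sketch below looks up the population allele frequency of a single variant in a locally downloaded gnomAD sites VCF using pysam. The file path, contig, and coordinates are placeholders, and the sketch assumes the VCF is bgzip-compressed and tabix-indexed with per-allele AF values in the INFO field, as in the public gnomAD releases.

    # Rough sketch: look up the gnomAD allele frequency of one variant with pysam.
    # Path, contig, and position below are placeholders, not real coordinates.
    import pysam

    def gnomad_af(vcf_path, contig, pos, ref, alt):
        """Return the allele frequency of ref>alt at 1-based position pos, or None."""
        vcf = pysam.VariantFile(vcf_path)
        for rec in vcf.fetch(contig, pos - 1, pos):   # fetch() uses 0-based coordinates
            if rec.pos == pos and rec.ref == ref and alt in (rec.alts or ()):
                i = rec.alts.index(alt)
                return rec.info["AF"][i]              # AF is reported per alternate allele
        return None

    # Hypothetical usage with a locally downloaded gnomAD sites file:
    # af = gnomad_af("gnomad.genomes.sites.chr7.vcf.bgz", "chr7", 117559590, "G", "A")
    # print(af)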

 

Stuart Schreiber, core institute member, Chemical Biology and Therapeutics Science Program 

The ability to sequence ancient genomes and the insights into the history of humanity that followed.

 

Morgan Sheng, core institute member and co-director of the Stanley Center

Harnessing the immune system to treat cancer, with checkpoint inhibitors (e.g., anti-PD-1, anti-PD-L1) and with genetically engineered immune cells (e.g., CAR-T cells). Not only are these therapies clinically successful — achieving lasting responses in a sizable fraction of patients with hard-to-treat cancers — but they remind us of the transcendent power of the immune system to modulate disease course, regardless of the mutations or cell-autonomous mechanisms driving the disease.

 

Geraldine Van der Auwera, director of outreach and communications, Data Sciences Platform

From a data generation standpoint I would nominate single-cell sequencing, which is transformative because it gives us such a granular view of gene expression. 

When it comes to dealing with the huge wave of data that these new techniques allow us to produce, my vote goes to cloud computing. It is revolutionizing how we collaborate and ensure computational reproducibility across organizations, and I predict it will play a huge role in democratizing access to high-power infrastructure and large datasets, so that even small research groups can do big things. It's also helping power the rebirth of machine learning, an old idea long hampered by massive infrastructure requirements; the rise of cloud computing has enabled a boom in the research, development, and application of machine learning, which is already starting to deliver the next wave of breakthroughs in data analysis.

 

Jane Wilkinson, senior director of alliance and project management, Genomics Platform

The 2010s ushered in new sequencing machines that enabled us to finally realize the $1,000 genome at 30x coverage. This was a huge turning point for both the research community and the sequencing lab here at Broad, and it has enabled many new initiatives — including the NIH's All of Us Research Program and biobank projects around the world, such as the UK Biobank. As we continue to work to improve the technology and reduce the cost, I look forward to seeing a $100 genome in the next decade!
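For a sense of scale, here is a back-of-the-envelope calculation (an illustration, not part of the original response) of what a 30x whole genome entails; the genome size and read length are rough, assumed figures.

    # Back-of-the-envelope arithmetic for a 30x human whole genome (illustrative numbers).
    GENOME_SIZE = 3.1e9   # approximate human genome length in base pairs
    COVERAGE = 30         # mean sequencing depth targeted across the genome
    READ_LENGTH = 150     # typical short-read length in bases (assumption)

    total_bases = GENOME_SIZE * COVERAGE
    reads_needed = total_bases / READ_LENGTH
    print(f"~{total_bases / 1e9:.0f} Gb of sequence, ~{reads_needed / 1e6:.0f} million reads")
    # ~93 Gb of sequence, ~620 million reads per genome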