How to find Nobel Prize-winning ideas

A short essay by someone who has never won the Nobel Prize

Introduction

In my field of work, ideas are currency. Original ideas pave the way to good research, which leads to high-impact papers, which ultimately open the door to fame, wealth, and glory (or so I have been told). The better the idea, the greater the glory, and if it just so happens that your idea has the right mix of novelty, awesomeness, and a cool name, all while simultaneously solving a major world problem, you just might end up winning the Nobel Prize.

While all this sounds good on a blog post, there’s just one tiny problem. Simply put, good ideas are really hard to come by. Nobel-Prize-winning ones are even more so. So how do you go about finding them?

The Ideas conundrum

Yesterday I had an existential crisis.

It seemed that for every big idea I could think of, there were already three papers published. It felt like I had nothing new to say, no original thoughts to add to the narrative of my field. Despite this, hundreds of new papers were being published every month. Clearly, some people somewhere were getting ideas. The only question was who, what, and where (which I guess are three questions, but the point remains).

This led me to ask the broader question about Nobel Prize winners and the genesis of their ideas. Is there a secret recipe for getting them? Are there any patterns in the way big ideas are generated? How do you find Nobel-Prize-winning ideas?

If all it takes is one life-changing serendipitous discovery, then how can you set yourself up to be the one who makes it?

No matter what field you are in, coming up with creative yet feasible ideas is an important part of research. Yet there is surprisingly little literature on the subject. Nobody teaches you how to go about ‘thinking’ of ideas; you are just expected to learn it through experience. Part of the problem is that even the people who do come up with good ideas find it exceedingly difficult to articulate how they thought of them in the first place. Vague phrases such as ‘think outside the box’ or ‘be more resourceful’ are usually thrown around the subject of creativity, but what do they even mean?

I call this the ideas conundrum – the problem of not knowing where good ideas come from, or how to go about looking for them in the first place. To come up with a solution, I dug into the history of the big ideas in science and tried to trace back their origin. The rest of this series delves into my key takeaways from this process, one aspect at a time. 

Part I: Noticing Confusion

It is often believed that great discoveries require great experimentation – they don’t. Often great discoveries hinge on tiny facts that, if properly understood, completely change the way you think about the world. By their very nature, however, truly profound discoveries present themselves as agents of confusion. They appear as facts that nobody expects, facts that some people might even think impossible, yet there they are, staring right back at you from the visceral depths of your own raw data, if only you would take notice.

Noticing your own confusion is the first step towards making big discoveries. It is the art of recognizing things that you hadn’t expected, and that don’t quite make sense, but are true nevertheless.

It is the skill of noticing when you have a phlogiston theory on your hands.

Phlogiston theory was an actual scientific theory, proposed in the late 1600s and dominant through much of the 1700s, that attempted to explain why things burn. It postulated that combustible substances contain a fire-like element called phlogiston, which is released upon burning (the burnt substance was said to be ‘dephlogisticated’). Growing plants were thought to absorb this released phlogiston, which supposedly explained why air does not spontaneously combust and also why plants burn so well.

The phlogiston theory was hugely popular because it provided an explanation for a wide variety of observed phenomena. Metals were thought to rust over time because they gave off phlogiston into the air. It also explained why objects burning in an enclosed space, like a candle in a jar, would stop burning after some time. Why? Because the air in the jar became saturated with phlogiston, and no more phlogiston could be released.

However, people soon began to notice that while rust looked lighter than the metal it came from, it actually weighed significantly more. If phlogiston was being released from the metal, should the rust not weigh less than the original metal? Phlogiston proponents, however, were in love with their theory. They explained away the weight difference by postulating that phlogiston has negative mass. With the benefit of hindsight, we can see how wrong they were. In reality, these phlogiston theorists were just ordinary scientists who were confused, and who failed to notice their confusion.

Scientists are married to their theories. In the face of confusion and uncertainties, they will try to explain away all of their observational inconsistencies in a way that fits their theory. However, even 100-year-old theories get modified or outright disproved all the time. The genesis of big ideas often lies in making an observation that goes against every scientific principle that you know to be true. 

In France, Antoine Lavoisier saw the same phenomenon in a different light. He reasoned that instead of phlogiston being released into the air, a simpler explanation was that something is being absorbed from the air during combustion. It seems like an obvious conclusion today, yet going against a well-accepted theory back then would have required immense resilience. Years later, through careful experimentation, Lavoisier proved that a gas called oxygen is indeed absorbed from the air during combustion, and that this same gas causes the rusting of metals, producing an oxide that is heavier than the original metal because of the additional weight of the oxygen.

The point is that no theory is bulletproof. In science, it is not only commonplace for well-established theories to be proven wrong; it is actually somewhat of a prerequisite to making great leaps in our understanding of nature.

Here’s another example:

For a long time, people believed the earth to be the center of the universe, and all other celestial objects were thought to revolve around it4. Even prominent philosophers such as Aristotle and Ptolemy believed the same. I don’t blame them. If I were a medieval peasant, I would also buy that view. Hell, if I were a medieval peasant, I would probably also laugh if a middle-aged Polish bloke by the name of Copernicus tried to tell me that the Sun is actually the center of the universe, and that everything we see revolves around it!

While closer to reality than the earth-centric model, even Copernicus’ theory wasn’t fully formed. It needed to be corrected first by Johannes Kepler, who showed that the orbits the planets make around the Sun are elliptical – not circular – and later by modern astronomers, who realized that while the Sun is close to the barycenter of the solar system, there is no such thing as the center of the universe5!

A similar pattern follows the progression of atomic theory. You know the gist. In 1803, John Dalton proposed that all matter is made of tiny, indivisible spheres called atoms. In 1904, J. J. Thomson told Dalton, “No wait, that’s not true, here are electrons that are smaller than atoms,” and proposed the plum pudding model. In 1911, Ernest Rutherford telegrammed both Thomson and Dalton saying, “Hold my gold foil6, it’s not just electrons; it seems there is a large concentration of something positively charged at the center of the atom” – this established the existence of the atomic nucleus. In 1913, Niels Bohr corrected Rutherford: “The electrons actually move in fixed orbits around the nucleus, and the electron energies are quantized in those orbits, leading to the emission or absorption of energy as electrons jump between the orbits.”

Bohr won the Nobel Prize in Physics in 1922 for this work. However, even Bohr’s model could not explain why electrons should be confined to those fixed orbits in the first place. It took the development of quantum theory to understand that electrons are not localized but move around the nucleus in a cloud of probability, making it impossible to know their exact position. This is the most accurate model of the atom to date, and yet even this might be corrected in the future.

No such thing as a bulletproof model. 

The statistician and essayist Nassim Nicholas Taleb is famous for his Black Swan theory. It postulates:

 “No amount of observations of white swans can allow the inference that all swans are white, but the observation of a single black swan is sufficient to refute it.”

A black swan is a metaphor for an event that comes as a surprise and has a major effect on the observer’s worldview. The theory describes the psychological biases that blind people, both individually and collectively, to uncertainty and to the massive role rare events play in historical affairs.

The first key insight towards getting closer to a Nobel Prize-winning idea, then, is to recognize the black swans in your science – those tiny little interesting things that don’t quite fit the model – and have the courage to follow their stories, and the determination to see them through to the end.

Notice your confusion.

– The Starlitknight

Notes

  1. Of course, not all good ideas end up winning the Nobel Prize, but the prize-winning ones get greater recognition. In the context of this article, I am referring to any high-impact scientific idea, even though I specify ‘Nobel Prize-winning’ ideas.
  2. Source: Phlogiston theory
  3. Source: Antoine Lavoisier
  4. Source: Geocentric Model of the universe
  5. Source: Heliocentric Model of the universe
  6. None of the conversations between Dalton, Thomson, Rutherford, and Bohr actually happened
  7. Source: Models of atoms
  8. Source: The Black Swan theory

The Phage Wars: How a Centuries-Old Battle Gave Us CRISPR

By Santosh Rananaware

Humans are the second most dangerous creatures in the universe. The deadliest are bacteriophages. Bacteriophages kill more living things every day than all other organisms combined. Despite their notoriety, however, they do not kill indiscriminately. Bacteriophages are specialized viruses that kill only bacteria.

Bacteria are ubiquitous, unassuming little microbial organisms, found everywhere from arctic ice cores to volcanic hot springs. They are also responsible for some of the deadliest diseases known to mankind. The Black Plague that ravaged much of Europe during the 14th century and killed tens of millions of people is believed to have been caused by a tiny bacterium called Yersinia pestis. However, only a very tiny fraction of all bacteria are harmful to any organism at all. In fact, many of them are beneficial to our bodies and help us digest the food we eat.

Fig.1: Enterobacteriophage P4. Illustration Courtesy Ben Darby

Despite their near omnipresence and significant impact on the environment, for much of history humans were oblivious to the existence of bacteria and bacteriophages. This is because of the extremely small size of these creatures. Bacteria are typically only about 1 µm in diameter, which means that if you lined up a thousand bacteria end-to-end, they would form a line only about 1 mm long. Bacteriophages are viruses and are roughly ten times smaller still. Even today, these minute organisms are visible to us only with the help of extremely powerful microscopes.

Despite their infinitesimal size, if you were somehow able to observe these minuscule creatures going about their day-to-day activities, you would notice that bacteria and phages are in a state of constant war with each other. This war has been going on for centuries. Much of nature’s activity revolves around phages trying to invade bacterial colonies and bacteria fighting back. Whenever a particular strain of bacteria prospers and threatens to take over a local ecosystem, the phages come like the horsemen of the apocalypse and hunt them down, thereby maintaining balance and diversity.

At this point, it is important to understand that bacteriophages do not ‘hunt’ bacteria in the traditional sense of the word. They do not ‘eat’ their prey like wolves eat deer. Rather, they infect the bacterium with their own genetic material (viral DNA or RNA) and hijack its replication machinery (Fig.2). The bacterium, unaware that the injected genes are foreign, replicates them, thereby creating more copies of the virus inside itself. After a certain number of viral copies are made and assembled, the bacterium bursts open and releases those copies into the environment. The released bacteriophages are then free to roam and infect other bacteria.

Fig.2: Schematic of bacteriophage replication (Adapted from learn microbiology)

CRISPR/Cas as a bacterial defense mechanism

To counter these phage attacks, bacteria have evolved many defense mechanisms to protect themselves. One such mechanism involves blocking the phage receptors – tiny structures on the surface of bacterial cells that phages use to attach themselves to the bacteria. By blocking these receptors, the bacteria ensure that the phages cannot connect to them in the first place and are thereby unable to infect. To counter this in turn, phages have evolved to recognize the blocked receptors and bind to new receptors on the bacterial surface. In this manner, bacteria and bacteriophages are locked in a constant state of evolutionary competition.

Over time, bacteria evolved a particularly sophisticated defense mechanism, known as the CRISPR-Cas system, to protect themselves from invading phages. The CRISPR-Cas system arose when bacteria that survived phage attacks started preserving small fragments of the invader’s DNA at fixed positions in their own chromosome, called the CRISPR loci. A CRISPR locus consists of small fragments of DNA from all the different viral invaders that the bacterium has survived. These viral DNA fragments are known as ‘spacer’ sequences, and they are sandwiched between ‘repeat’ sequences – short palindromic sequences of the bacterium’s own DNA that are repeated between the different spacers (Fig.3). The name ‘CRISPR’ derives from this peculiar organization of foreign DNA within the bacterial genome: it stands for ‘Clustered Regularly Interspaced Short Palindromic Repeats’.

Fig.3: Schematic of spacer and repeat sequences in the CRISPR loci
(Adapted from XBio)

The mechanism by which bacteria acquire and store these spacer sequences is not fully understood, but the Cas1 and Cas2 proteins are believed to play a central role in the process. The term ‘Cas’ is short for ‘CRISPR-associated’; Cas proteins are the family of proteins within the bacterium that carry out the CRISPR-Cas defense mechanism. Every time a new spacer is acquired from a failed bacteriophage attack, it is stored at the leader end of the CRISPR array (Fig.4).
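The layout of the array – spacers sandwiched between repeats, with the newest spacer at the front – can be sketched as a toy model. All sequences below are invented for illustration only (real repeats and spacers are each a few dozen base pairs long):

```python
# Toy model of a CRISPR array: 'repeat' sequences alternate with
# 'spacer' sequences captured from past phage invaders.
# All sequences here are made up for illustration.

REPEAT = "GTTTTAGAGC"  # stand-in for the bacterium's palindromic repeat


def build_crispr_array(spacers):
    """Interleave the repeat between (and around) each spacer,
    mirroring the repeat-spacer-repeat layout of a CRISPR locus."""
    parts = [REPEAT]
    for spacer in spacers:
        parts.append(spacer)
        parts.append(REPEAT)
    return "".join(parts)


def acquire_spacer(spacers, new_spacer):
    """A new spacer from a failed phage attack is added at the
    leader-proximal end of the array (the front of the list)."""
    return [new_spacer] + spacers


spacers = ["ACGTACGTAC", "TTGACCTTGA"]           # fragments of older invaders
spacers = acquire_spacer(spacers, "CCGGAACCGG")  # newest invader goes in front
locus = build_crispr_array(spacers)
```

The list-prepend in `acquire_spacer` captures a useful side effect of this organization: the array doubles as a chronological record, with the most recent infections nearest the front.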

The CRISPR locus is eventually transcribed into short RNA molecules called CRISPR RNAs, or crRNAs. Each crRNA binds to a Cas enzyme, which can have one or more nuclease (DNA-cutting) domains. The crRNA/Cas complex then roams freely inside the bacterial cell. The next time the same bacteriophage invades the bacterium with its genetic material, the crRNA hybridizes with the invading viral DNA through base pairing and guides the Cas enzyme towards it like a homing missile. The Cas enzyme then unwinds and cuts the foreign DNA, disabling it and protecting the bacterium.

One might ask: since the spacer sequence stored in the CRISPR array is identical to the foreign DNA injected by the virus, why doesn’t the CRISPR/Cas system accidentally destroy the CRISPR array itself, mistaking it for invading DNA? The answer lies in a short DNA motif, called the PAM (protospacer adjacent motif), that is found next to the invading viral sequence but not next to the spacer stored in the CRISPR array. To prevent this kind of auto-immunity, a large number of bacterial CRISPR/Cas systems cleave DNA targets only if they are flanked by a PAM. The spacers in the CRISPR array lack an adjacent PAM and are therefore immune to CRISPR-mediated destruction. The mechanism by which spacers are chosen so that they target only PAM-associated protospacers is still not fully understood.
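This PAM-gated targeting logic can be sketched as a toy search. The ‘NGG’ motif used below is the PAM recognized by the well-studied Cas9 from Streptococcus pyogenes; all DNA sequences are invented for illustration:

```python
# Toy sketch of crRNA-guided, PAM-gated target recognition.
# Cas9 from S. pyogenes requires an 'NGG' PAM (any base, then two Gs)
# immediately downstream of the protospacer; sequences are invented.

SPACER = "ACGTACGTAC"  # spacer stored in the CRISPR array


def find_cut_sites(genome, spacer):
    """Return positions where the spacer matches AND an 'NGG' PAM
    immediately follows. Matches without a PAM -- like the copy in
    the bacterium's own CRISPR array -- are spared."""
    sites = []
    for i in range(len(genome) - len(spacer) - 2):
        protospacer = genome[i:i + len(spacer)]
        pam = genome[i + len(spacer):i + len(spacer) + 3]
        if protospacer == spacer and pam[1:] == "GG":
            sites.append(i)
    return sites


# Phage DNA carries the protospacer followed by a PAM (here 'TGG'):
phage_dna = "TTTT" + SPACER + "TGG" + "AAAA"

# The bacterium's CRISPR array stores the same spacer, but it is
# flanked by repeats rather than a PAM:
crispr_array = "GTTTTAGAGC" + SPACER + "GTTTTAGAGC"

print(find_cut_sites(phage_dna, SPACER))     # the phage is targeted
print(find_cut_sites(crispr_array, SPACER))  # the array is spared
```

The same sequence is thus cut in one context and ignored in the other: the PAM check, not the spacer match alone, decides whether the nuclease fires.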

The key thing about the CRISPR/Cas defense mechanism, however, is that it is hereditary: subsequent generations of bacteria inherit this phage resistance from their parents without ever encountering the phages themselves. Around half of all known bacteria and almost all archaea carry some form of the CRISPR-Cas defense mechanism.

Fig.4: CRISPR/Cas defense mechanism (Reprinted from Doudna Lab)

CRISPR/Cas can be repurposed into a genome editing tool

A key breakthrough occurred when scientists realized that the CRISPR/Cas system could be taken out of bacteria and repurposed to make precise cuts not just in viral DNA, but in the DNA of any organism! The only components necessary are a guide RNA that base-pairs with the target DNA of choice, a PAM sequence next to the target site, and a Cas enzyme such as Cas9 that binds and cuts the target DNA. Such a system can be used to target and cut any gene within the genome of any organism. Many inheritable diseases, e.g., sickle cell anemia, arise from small genetic defects in the chromosome. The CRISPR/Cas system could potentially be used to ‘cut out’ the defective gene and replace it with a healthy copy.

It is amazing to think that a mechanism that evolved in bacteria over millions of years as a way to protect themselves from their natural enemy, the bacteriophage, can now be harnessed by humans as a tool to cut and edit our own DNA.

CRISPR is far from a perfect tool. Scientists have found that Cas enzymes can also cut at sites whose sequences merely resemble the intended target, leading to unwanted mutations – off-target effects – in the genome of the edited organism. There are also challenges involved in delivering the CRISPR/Cas components into human cells and in increasing the efficiency of on-target editing. A tremendous amount of research is currently ongoing to improve and perfect the technology.

Meanwhile, the bacteriophages (the deadliest organisms on earth, if you remember) have evolved a way to circumvent the CRISPR defense in bacteria entirely. It was recently discovered that phages produce what have come to be known as anti-CRISPR proteins to help them fight CRISPR. Anti-CRISPR proteins are highly diverse, but most block CRISPR in one of three ways: (a) inhibiting DNA binding, (b) inhibiting crRNA loading, or (c) inhibiting DNA cleavage. Thus, the arms race continues.