Friday, 29 December 2017

Like a Moth to a Flame



Richard Dawkins at a Q&A session after a lecture at the University of Liverpool on February 25th 2008.

"Religion is widespread at least historically, perhaps universal, and human specific. So my question to you is do you think religion or religious belief evolved in humans and if so did it confer an evolutionary advantage?"

As a Darwinian, I am among those who believe that anything that is very widespread in a species in some sense has an evolutionary advantage. It wouldn’t be there if it didn’t.
So yes, I think the answer is that it probably does have an evolutionary advantage but having said that, we have to be a bit careful about what we mean. Because very often when biologists look at something and say: “What’s the evolutionary advantage of that?” it turns out that they are asking the wrong question.
It may be that what we are looking at is some kind of a by-product of something else which has an evolutionary advantage and we have chosen to focus our attention on the by-product, which is not the thing that in itself has the advantage, but which is a consequence of the thing that has the advantage.
The example that I use to introduce the idea of a by-product is the question: why do moths commit suicide by flying into candle flames, or into electric lights? You could ask what the Darwinian survival value of suicidal immolation behaviour in moths is, and it would be pretty difficult to think of what the survival value of suicide is. However, it’s the wrong question. Flying into candle flames is a by-product of something else. And probably what it is, is this. This may not be right, but it’s a good theory for what’s going on. Insects are well known to use distant light sources as compasses. Day-flying insects use the sun as a compass; night-flying insects use the moon or stars. The reason it’s a good thing to have a compass is that it’s a good thing to be able to fly in a straight line: it doesn’t waste time, and so on.
The reason it works is that celestial objects like the sun and the moon are at optical infinity, so the light rays from them are parallel. Therefore a simple rule of thumb in the brain that says “keep the light rays at an acute angle of, say, 20 degrees” will cause you to fly in a dead straight line, but only if the light source is at optical infinity, something like the moon. If the light source is a candle, then the light rays are not parallel; they are radiating out at close quarters from the candle. The rule of thumb “keep the light rays at an angle of 20 degrees to your flight” will then cause you to fly in a logarithmic spiral into the candle flame. These moths are not committing suicide. They are doing a piece of behaviour that would have been sensible for all the millions of years when the only lights you ever saw at night were celestial objects at optical infinity.
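The fixed-angle rule is easy to check numerically. The following sketch is an illustration of the geometry, not part of the talk: a simulated moth repeatedly steers so that the light sits at a constant 20-degree angle to its heading. With a nearby candle it traces an inward (logarithmic) spiral; with a light at effectively infinite distance the same rule produces a straight line.

```python
import math

def fly(moth, light, angle_deg=20.0, step=1.0, n_steps=80):
    """Fly while keeping the light source at a fixed angle to the heading.

    moth, light: (x, y) positions. Returns the list of positions visited.
    """
    x, y = moth
    lx, ly = light
    path = [(x, y)]
    for _ in range(n_steps):
        bearing = math.atan2(ly - y, lx - x)         # direction to the light
        heading = bearing + math.radians(angle_deg)  # hold the fixed offset
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

# Candle at the origin, moth starting 100 units away: an inward spiral.
spiral = fly((100.0, 0.0), (0.0, 0.0))

# "Moon" effectively at optical infinity: the same rule flies dead straight.
straight = fly((0.0, 0.0), (1e12, 0.0))
```

In the spiral case the distance to the flame shrinks on every step (by roughly step times cos 20 degrees), which is what drags the moth in; in the far-light case the bearing never measurably changes, so the heading stays constant.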
Now, I think that’s what religion is like. I think that religion is a by-product of probably several psychological predispositions, which in themselves have Darwinian survival value, but which have consequences (parallel to the moth flying into the candle flame) that probably don’t have survival value. But just as the moth doesn’t know that the candle flame is not at infinity but close by, so it doesn’t occur to those of us who have these psychological predispositions (which would have been a good thing in our ancestral past, and may still be) that they lead to religious behaviour, which may not be a good thing.
The kind of thing I’m thinking about is a tendency in a child to obey authority. It’s probably a good thing for a child to obey its parents, and indeed to believe its parents when they tell it things about the world, because the child is too young to know a lot of important things about the world and would die if it ignored its parents’ beliefs, its parents’ advice.
So good advice like “don’t jump in the fire”, has survival value. But the child brain, just like the moth brain, has no way of distinguishing the good advice like “don’t jump in the fire” from the stupid advice, like… “sacrifice a mongoose’s kidneys at the time of the full moon or the crops will fail”.
So, I suspect that religion may be a complicated set of by-products of psychological predispositions, each one of which itself has an advantage, but the religious by-product is either neutral or…  well we don’t even need to say whether it has an advantage, it doesn’t matter. The Darwinian explanation is sufficient if we postulate that the original psychological predispositions had survival value.


Thursday, 21 December 2017

The god-shaped hole in your brain

From New Scientist 13 December 2017

Effortless thinking: The god-shaped hole in your brain



Is that rustle in the dark a predator, or just the wind? It pays to think something causes everything – a survival trait that makes us all hard-wired to believe
Cognitive skills that evolved to promote our survival also underpin key religious beliefs
If God designed the human brain, he (or she) did a lousy job. Dogged by glitches and biases, requiring routine shutdown for maintenance for 8 hours a day, and highly susceptible to serious malfunction, a product recall would seem to be in order. But in one respect at least, God played a blinder: our brains are almost perfectly designed to believe in him/her.

Almost everybody who has ever lived has believed in some kind of deity. Even in today's enlightened and materialistic times, atheism remains a minority pursuit requiring hard intellectual graft. Even committed atheists easily fall prey to supernatural ideas. Religious belief, in contrast, appears to be intuitive.

Cognitive scientists talk about us being born with a "god-shaped hole" in our heads. As a result, when children encounter religious claims, they instinctively find them plausible and attractive, and the hole is rapidly filled by the details of whatever religious culture they happen to be born into. When told that there is an invisible entity that watches over them, intervenes in their lives and passes moral judgement on them, most unthinkingly accept it. Ditto the idea that the same entity is directing events and that everything that happens, happens for a reason.

This is not brainwashing. The "cognitive by-product theory" argues that religious belief is a side effect of cognitive skills that evolved for other reasons. It pays, for example, to assume that all events are caused by agents. The rustle in the dark could be the wind, but it could also be a predator. Running away from the wind has no existential consequences, but not running away from a predator does. Humans who ran lived to pass on their genes; those who did not became carrion.

Then there's "theory of mind", which evolved so that we could infer the mental states and intentions of others, even when they aren't physically present. This is very useful for group living. However, it makes the idea of invisible entities with minds capable of seeing into yours quite plausible. Religion also piggybacks on feelings of existential insecurity, which must have been common for our ancestors. Randomness, loss of control and knowledge of death are soothed by the idea that somebody is watching over you and that death is not the end of existence.

This helps explain why religious ideas were widely accepted and disseminated once they got started. It has even been argued that religion was the key to civilisation because it was the social glue that held large groups of strangers together as societies expanded. No doubt it still has much of its original appeal. But these days, religion's downsides are more apparent. Conflict, misogyny, prejudice and terrorism all happen in the name of religion. However, as the rise of atheism attests, it is possible to override our deep-seated religious tendencies with rational deliberation – it just takes some mental effort.

This article appeared in print under the headline "Religion"


Tuesday, 12 December 2017

Postulates for Reality

1) Introduction

If a set of universal postulates for reality can be agreed, what are the implications for explanations of reality where god(s) are necessary versus explanations where god(s) are not necessary?

2) List of Postulates

Each postulate below is followed by its implications for models of reality where god(s) are necessary, and for models where god(s) are not necessary.

1) The word “reality” stands alone. Prefixes such as “basic”, “ultimate”, “fundamental” etc. are redundant (unless they are clearly defined and differentiated from the word "reality").
   - Where god(s) are necessary: Not applicable
   - Where god(s) are not necessary: Not applicable

2) If God exists He must be part of reality.
   - Where god(s) are necessary: God(s) exist
   - Where god(s) are not necessary: God(s) could exist

3) If God is not part of reality, He is not real.
   - Where god(s) are necessary: God(s) are part of reality
   - Where god(s) are not necessary: God(s) could exist

4) We only perceive a tiny part of reality, referred to here as the observable universe.
   - Where god(s) are necessary: God(s) intervene in the observable universe
   - Where god(s) are not necessary: God(s) could exist, but only beyond the observable universe

5) Reality is bigger than the observable universe, perhaps infinitely bigger.
   - Where god(s) are necessary: God(s) are infinite (exist everywhere)
   - Where god(s) are not necessary: God(s) could exist, but only beyond the observable universe

6) Anything that exists beyond the observable universe is unknowable to us.
   - Where god(s) are necessary: God(s) are an exception to this rule
   - Where god(s) are not necessary: God(s) are speculative and unfalsifiable

7) Theories are only valid within the scope of the observable universe.
   - Where god(s) are necessary: God(s) cannot be described by any theory
   - Where god(s) are not necessary: God(s) cannot be described by any theory

8) The concept of an uncaused cause is a valid concept.
   - Where god(s) are necessary: If god(s) are not created, then god(s) are the uncaused cause
   - Where god(s) are not necessary: Reality could be the uncaused cause

9) Reality did not necessarily have a beginning and may not necessarily have an end.
   - Where god(s) are necessary: Reality originally consisted only of god(s)
   - Where god(s) are not necessary: Reality is eternal

10) Our universe did have a start and will have an end.
   - Where god(s) are necessary: God(s) created our universe
   - Where god(s) are not necessary: Our universe appeared as a natural process within reality (was not created)

11) Reality may be uncaused.
   - Where god(s) are necessary: God(s) were once all that existed
   - Where god(s) are not necessary: Reality is eternal

12) Even within the tiny range of reality we can observe, 95% is currently unknown to us.
   - Where god(s) are necessary: Not applicable
   - Where god(s) are not necessary: Not applicable

13) It's possible to find evidence in the observable universe which can be used to support hypotheses regarding aspects of reality beyond the observable universe.
   - Where god(s) are necessary: There is evidence for god(s) in the observable universe which can be used to form hypotheses about god(s) beyond the observable universe
   - Where god(s) are not necessary: There is evidence in the observable universe which can be used to support hypotheses regarding aspects of reality beyond the observable universe

14) Reality exists because a state of absolute nothingness is impossible.
   - Where god(s) are necessary: God(s) exist for this reason
   - Where god(s) are not necessary: God(s) are unnecessary for this reason
Notes
- #13 appears to contradict #6 and #7, but there is no contradiction if we clearly define the difference between theory and hypothesis.

- Would "supernatural entities" be a more appropriate term than "god(s)"?

- #4 does not apply to deism or pantheism.

- #14 supports #11.

3) Models Compared

The two models are compared across five steps: 1) Before Reality; 2) Reality; 3) Realms within reality; 4) Life; 5) Consciousness.

Model A
1) Before Reality: God
2) Reality: Reality is created by God
3) Realms within reality: God creates heaven, hell, angels, our universe (and other realms/entities, other universes?)
4) Life: God creates life on at least one planet
5) Consciousness: God creates conscious beings (humans)

Model B
1) Before Reality: This step not required
2) Reality: An environment within which universes naturally appear and disappear
3) Realms within reality: This step not required
4) Life: Simple, single-celled organisms appear on at least one planet
5) Consciousness: Consciousness emerges as life evolves

Attributes for Model A
1) God is eternal, always was and is uncaused. Nature unknown.
2) Reality is an eternal, infinite environment within which various realms and God exist. Nature unknown (?)
3) Process used by God for creating universes, angels, realms, etc. is unknown
4) Process used by God to create life is unknown
5) Process used by God to create consciousness is unknown

Attributes for Model B
1) This step not required (there is no "before reality")
2) Reality is eternal, infinite, always was and is uncaused. Nature unknown (but there are some hypotheses)
3) Process for universe formation is unknown (but there are some hypotheses)
4) Process for appearance of the first organisms is unknown (but there are some hypotheses)
5) Process by which consciousness emerges is unknown (but there are some hypotheses)

jpeg version: http://oi67.tinypic.com/343jldx.jpg

Words of Wisdom

Before you speak, let your words pass through three gates:
At the first gate, ask yourself “Is it true?”
At the second gate ask, “Is it necessary?”
At the third gate ask, “Is it kind?”

Did the Quakers come up with this?

http://www.strecorsoc.org/storygarden/58199_tts.html

—-


For me, it is far better to grasp the Universe as it really is than to persist in delusion, however satisfying and reassuring.
- Carl Sagan




Saturday, 25 November 2017

How Deluded are You?

Grand delusions: Why we all believe the weirdest things
The human mind is the perfect breeding ground for bizarre beliefs, so we shouldn’t be surprised that fake news has such a powerful influence

By Dan Jones
New Scientist 15 Nov 2017
https://www.newscientist.com/article/mg23631520-800-grand-delusions-why-we-all-believe-the-weirdest-things/



THREE Messiahs walk into a psychiatric unit… No, this isn’t the set-up to a tasteless joke, but the beginning of a study done in the 1950s by Milton Rokeach at Ypsilanti State Hospital, Michigan. Rokeach brought together three men, each harbouring the delusion that he was Jesus Christ, to see if meeting the others and confronting their mutually contradictory claims would change their minds. Two years and many arguments later, their beliefs had barely budged. For each Jesus, the other two were fakers, while they were the real deal.

As delusions go, the Messiah complex is extreme. Most delusions are far more mundane, such as an unfounded belief that you are exceptionally talented, that people are out to get you or that a celebrity is in love with you. In fact, more than 90 per cent of us hold delusional beliefs. You may find that figure shockingly high – or perhaps you see evidence all around, in the willingness of so many people to swallow fake news, in the antics of politicians and celebrities, and even among your Facebook friends. Either way, what exactly does it mean? Why are some of us more prone to delusions than others? How do false beliefs get a hold in our minds? And can we all learn to tame our delusional tendencies?

First we need to be clear about what a delusion is. “There’s a loose way of talking about delusions – like when we talk about the ‘God delusion’ – which simply means any belief that’s likely to be false and is held despite lack of evidence, or even in spite of the evidence,” says Lisa Bortolotti at the University of Birmingham, UK. The psychological take is more nuanced. Delusions are still seen as irrational, but they are also idiosyncratic, meaning the belief is not widely shared. That rules out lots of things including most religious beliefs, conspiracy theories and the denial of climate change. Furthermore, the idiosyncratic nature of delusions makes them isolating and alienating in a way that believing, say, a conspiracy theory is not. Delusions also tend to be much more personal than other irrational beliefs, and they usually conform to one of a handful of themes (see “Strange beliefs”).

Bizarre beliefs
At any time, around 0.2 per cent of people are being treated for delusional disorders. We now know that this is the tip of an iceberg. In 2010, Rachel Pechey and Peter Halligan, both at Cardiff University, UK, presented 1000 people with 17 delusion-like beliefs, and asked whether they held them strongly, moderately, weakly or not at all. The beliefs were either relatively mundane, such as “Certain people are out to harm me” and “I am an exceptionally gifted person that others do not recognise”, or more bizarre, including “I am dead and/or do not exist” and “People I know disguise themselves as others to manipulate or influence me”. In all, 39 per cent of participants held at least one of these beliefs strongly, and a whopping 91 per cent held one or more at least weakly. What’s more, three-quarters of people subscribed to bizarre beliefs to at least some extent.

“Symptoms of psychosis-like delusions are just the extreme end of a continuum of similar phenomena in the general population,” says Ryan McKay at Royal Holloway, University of London. More evidence for this comes from the Peters Delusion Inventory, which is frequently used to measure how prone people are to delusional thinking. The inventory asks respondents whether or not they have ever experienced various different beliefs that often crop up in a clinical context, resulting in a delusion-proneness range from 0 to 21 (see “How deluded are you?“). Among the general population, people score an average of 6.7, with no difference between men and women. People with psychotic delusions score about twice this. So they do have more of these beliefs, but what really sets them apart from others is that they tend to be more preoccupied with their delusional beliefs and more distressed by them. “It’s not what you think, it’s the way that you think about it,” says Emmanuelle Peters of King’s College London, who led the development of the inventory.

That we are all prone to delusions may not be so surprising. A range of cognitive biases makes the human mind fertile soil for growing all kinds of irrational beliefs. Confirmation bias, for example, means we ignore inconvenient facts that go against our beliefs and uncritically accept anything that supports them. Desirability bias leaves us prone to shoring up beliefs we have a vested interest in maintaining because they make us or our group look good. Clustering bias refers to our tendency to see phantom patterns in random events, impairing our ability to draw logical conclusions from the available evidence.

A quick trawl of social media is all it takes to see how these utterly human ways of thinking can contribute to a cornucopia of strange and idiosyncratic beliefs. But the question of why some of us are more delusion prone than others is more difficult to answer.

It could have something to do with how we see the world – literally. In one study, volunteers watched an optical illusion consisting of a set of moving dots that could either be perceived as rotating clockwise or anticlockwise. The dots seemed to periodically flip direction, in much the same way a Necker cube changes its orientation as you look at it. Those individuals who scored highly on delusion-proneness perceived the dots as switching direction more frequently, suggesting they have a less stable perceptual experience. How this influences thinking remains unclear, but it doesn’t end there. Participants then wore glasses, which they were told would make the dots appear to rotate one way rather than the other. In reality, the glasses had no effect, but delusion-prone people reported seeing the dots move in the supposed biased direction – a case of seeing what you believe. Another recent study revealed that their perception of time is distorted, too. Delusion-prone people were more likely to believe they had predicted events they could not have because of the order they happened in, indicating that they were making mistakes in judging the temporal order of their thoughts.

It is tempting to conclude that people susceptible to delusional thinking are more suggestible than others, but another study designed to test this explicitly found the opposite. In fact, delusions don’t typically start out as an idea seeded by someone else, but from a strange or anomalous experience generated by ourselves. The crucial second step in forming a delusion is that the person then invents an explanation for this experience. People with Capgras delusion, for example, explain a disturbing feeling of disconnection from a loved one by concluding that he or she has been replaced by a doppelgänger or hyper-realistic robot. This implies some kind of problem with evaluating the plausibility of one’s beliefs, says McKay. In Capgras, it has been linked to damage in specific brain networks. However, there is a more everyday reason that people hold implausible beliefs: a tendency to jump to conclusions on the basis of limited evidence.


The extent to which anyone does this can be measured with a simple experiment. Imagine two jars containing a mix of black and orange beads: one contains 85 per cent black beads and 15 per cent orange, and the other has the reverse proportions. You select a bead from one, without knowing which it is. Let’s say the bead is orange. You are then asked whether you would like to make a call on which jar you are taking beads from, or whether you want to draw another bead to help work it out. It is prudent to examine a few beads at least – it is quite possible to draw two orange beads from a jar with mostly black, and vice versa. Yet, around 70 per cent of people being treated for a delusion make a judgement after seeing just one or two beads. Only 10 per cent of the general population are as quick to jump to conclusions, but the more prone you are to delusional thinking, the fewer beads you are likely to sample before making your decision.
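As an aside (my own illustration, not part of the study), Bayes' rule shows what each draw is actually worth given the article's 85/15 mixes and a fair choice of jar: one orange bead supports the mostly-orange jar at 85 per cent, a second orange raises that to about 97 per cent, and an orange followed by a black cancels back to 50 per cent.

```python
from fractions import Fraction

def posterior_mostly_orange(draws):
    """P(we are drawing from the 85%-orange jar | draws), with a 50/50 prior.

    draws: a string of 'O' (orange) and 'B' (black) bead colours.
    """
    like_orange_jar = Fraction(1)  # likelihood of the draws under each jar
    like_black_jar = Fraction(1)
    for bead in draws:
        if bead == 'O':
            like_orange_jar *= Fraction(85, 100)
            like_black_jar *= Fraction(15, 100)
        elif bead == 'B':
            like_orange_jar *= Fraction(15, 100)
            like_black_jar *= Fraction(85, 100)
        else:
            raise ValueError("beads are 'O' or 'B'")
    # Equal priors cancel, leaving the normalised likelihood ratio.
    return like_orange_jar / (like_orange_jar + like_black_jar)
```

posterior_mostly_orange('O') is 17/20 (85 per cent), 'OO' gives 289/298 (about 97 per cent), and 'OB' falls back to 1/2, which gives a concrete sense in which calling the jar after one bead is premature but not wildly so.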

This jumping-to-conclusions bias might seem stupid, but it isn’t a sign of low intelligence, according to clinical psychologist Philippa Garety at King’s College London. Instead, she believes it reflects the kind of reasoning an individual favours. Some of us rely more on intuitive thinking – so-called system one thinking – while others are more likely to engage slower, analytic “system two thinking”, which is needed for reviewing and revising beliefs. In a recent study, Garety’s team found that the less analytical a person’s style of thinking, the fewer beads they wanted to see before making a judgement. “It’s not that people with a jumping-to-conclusions bias don’t understand or can’t use evidence,” she says. “They’re just overusing system one at the expense of system two.” And sure enough, Garety’s latest study confirms that these intuitive thinkers are also more prone to clinical delusions.


It looks like a vulnerability to delusions is part and parcel of regular human psychology. After all, everyone is an intuitive thinker at times: even people who favour system two thinking rely on quick, system one thinking when tired, stressed or scared. Whether humanity is becoming more deluded than ever is another question. In today’s hyper-mediated world, we are continually exposed to new experiences and people, and called upon to evaluate all sorts of beliefs that our forebears wouldn’t have encountered. We may also be more tired and stressed. As a result, it is possible we have more numerous or richer delusions than past generations. Nobody has done the research. But even if that is the case, this may have some advantages. “Delusions can be helpful when they make people feel good about themselves or explain aspects of their life that are difficult to understand,” says Bortolotti. It can be empowering to feel that a celebrity is in love with you, for example. And there is plenty of evidence that an inflated belief in your talents can have all sorts of benefits, from success in job interviews to attracting a sexual partner.

Alternatively, our alarming susceptibility to fake news and the outlandish behaviour of key players on the world stage might lead you to conclude that we could do with a bit less delusional thinking. If so, the good news is that insights into delusion psychology point to some ways we can curb it. Garety has helped design an intervention to train people’s slow-thinking skills. SlowMo, which includes therapy and an app, is intended for people with paranoid delusions, but it nurtures mental habits all of us can benefit from. They include gathering sufficient data before making conclusions, learning to question your initial thoughts and impressions about events, and considering different explanations of experiences. SlowMo is currently being tested. If it proves effective, the app will be available in the UK through the National Health Service.

Changing your mind
Of course, changing the way you think isn’t easy. It takes effort, and support. “There’s some evidence that people who have good relationships at home and have someone to talk to are more able to activate slow thinking,” says Garety. Even if that’s not your goal, sharing your thoughts is a good first step to dispelling delusions. “It’s psychologically healthy to recognise that our thoughts sometimes need inspection and engagement with the world to assess how right they are,” says clinical psychologist Daniel Freeman at the University of Oxford.

Simply talking can highlight delusional thoughts in ourselves and others. But then what? We know that delusions are impervious to counterargument. In fact, trying to disprove them can backfire. Freeman has some advice based on his clinical work. First, provide a plausible, non-threatening alternative perspective. Then, help the deluded person gain evidence that bolsters this perspective.

“We don’t try to disprove people’s beliefs,” he says, “because we know that has the opposite effect, just like when people argue in a pub – no one changes their mind.” If you don’t believe him, ask any Jesus Christ.


Strange Beliefs

Despite being diverse and idiosyncratic, delusions cluster into a few core themes.

Persecutory Delusions: beliefs that others are out to harm you. This is the most common type of delusion, affecting between 10 and 15 per cent of people.

Referential Delusions: beliefs that things happening in the world – from news headlines to song lyrics – relate directly to you. Persecutory and referential delusions often go hand in hand.

Control Delusions: beliefs that your thoughts or behaviours are being manipulated by outside agents. Such delusions are common in schizophrenia.

Erotomanic Delusions: beliefs that someone who you don’t know, typically a celebrity, is in love with you.

Grandiose Delusions: unfounded beliefs that you are exceptionally talented, insightful or otherwise better than the hoi polloi.

Jealous Delusions: irrational beliefs that your partner is being unfaithful. This is the type of delusion most commonly associated with violence.

Somatic Delusions: erroneous beliefs about the body. In Ekbom’s syndrome, people believe they are infested with parasites. People with Cotard delusion believe they are dead or don’t exist.

Misidentification Delusions: beliefs about changed identity. A classic is Capgras delusion, where people believe that a loved one has been replaced by a doppelgänger.


How Deluded Are You?

Almost everyone is vulnerable to delusions, but some of us more than others. These 21 questions constitute the Peters Delusion Inventory, which is the most widely used measure of delusion proneness. Give yourself one point for each “yes” and zero points for each “no”, then tot up your score.

1 Do you ever feel as if people seem to drop hints about you or say things with a double meaning?

2 Do you ever feel as if things in magazines or on TV were written especially for you?

3 Do you ever feel as if some people are not what they seem to be?

4 Do you ever feel as if you are being persecuted in some way?

5 Do you ever feel as if there is a conspiracy against you?

6 Do you ever feel as if you are, or are destined to be, someone very important?

7 Do you ever feel that you are a very special or unusual person?

8 Do you ever feel that you are especially close to God?

9 Do you ever think people can communicate telepathically?

10 Do you ever feel as if electrical devices such as computers can influence the way you think?

11 Do you ever feel as if you have been chosen by God in some way?

12 Do you believe in the power of witchcraft, voodoo or the occult?

13 Are you often worried that your partner may be unfaithful?

14 Do you ever feel that you have sinned more than the average person?

15 Do you ever feel that people look at you oddly because of your appearance?

16 Do you ever feel as if you had no thoughts in your head at all?

17 Do you ever feel as if the world is about to end?

18 Do your thoughts ever feel alien to you in some way?

19 Have your thoughts ever been so vivid that you were worried other people would hear them?

20 Do you ever feel as if your own thoughts were being echoed back to you?

21 Do you ever feel as if you are a robot or zombie without a will of your own?



0-5 You are less prone to delusions than most. Your thinking style is probably more analytical than intuitive.
6-7 Congratulations! You are normal. The average score is 6.7, with no difference between men and women.
8-21 You are more prone to delusions than most. You are likely to think intuitively and jump to conclusions.
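The scoring rule and bands above are simple enough to state as code. This is just a restatement of the key, not an official implementation of the inventory.

```python
def pdi_score(answers):
    """One point per 'yes' across the 21 questions of the inventory."""
    if len(answers) != 21:
        raise ValueError("expected 21 yes/no answers")
    return sum(1 for yes in answers if yes)

def pdi_band(score):
    """Map a total score to the bands given in the key above."""
    if score <= 5:
        return "less prone to delusions than most"
    if score <= 7:
        return "about average"
    return "more prone to delusions than most"
```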

Source: Schizophrenia Bulletin

This article appeared in print under the headline “Delusional you”

Article amended on 24 November 2017
We corrected the scoring of the Peters Delusion Inventory questionnaire

Monday, 13 November 2017

The very first living thing is still alive inside each one of us

From New Scientist Nov 2017

The very first living thing is still alive inside each one of us
A cellular machine so powerful that it gave rise to all of life and created our marble planet can tell us how it all began
By Bob Holmes
SOME 4 billion years ago, somewhere in the mass of inert minerals and molecules that made up our wet, rocky planet, dead became alive. This was the most important chemical transformation ever to happen on Earth. Not only did it give rise to all the living things that have ever existed, it also altered the chemistry of the oceans, the land and the atmosphere above. If it hadn’t happened, there would be no blue marble.
That first chemical step towards life may be a lot closer than we thought. Buried within every cell of every organism on the planet, from bacteria to barnacles to Britons, is a living, working version of the earliest life on Earth – a time machine that allows us to peel away those 4 billion years of history and work out how it all began. “We can stop bullshitting about the origin of life,” says Loren Williams, a biochemist at the Georgia Institute of Technology in Atlanta. “We can see it.” What he and his colleagues are discovering is turning our view of life’s origins on its head.
Until now, most efforts to understand how life began have attacked the problem from the bottom up. Broadly, they start with an experimental soup of primordial molecules and try to either recreate the building blocks of genes or get them to evolve key functions, like self-replication. Despite some promising results, these approaches can at best show a plausible path that life might have followed. They can never reveal what actually happened.
The new approach starts with modern life and works backwards. Formed of a tangle of proteins and a relative of DNA called RNA, ribosomes are molecular machines found inside every living cell. They do just one thing and do it well: they read the genetic code contained in DNA and use it to construct proteins. In essence, they are cellular robots that build the stuff that makes our cells tick.
Their task is so crucial to life that it works the same way in all organisms: your ribosomes differ from those of a lowly bacterium only in the ornamentation on their outer surface. That kind of uniformity suggests they date back to when life began. As evolution progressed, new species tacked extra bits of RNA to their ribosomes. The additions left identifiable traces, much as a branch sprouting from a tree leaves a visible mark in the wood. “Even if the branch is gone, you can look at the wood and say something grew out here,” says Williams. Strip these sprouting branches off and you are left with a common core: the part of the ribosome that was functional at the time of the last universal common ancestor (LUCA) from which all known life is descended.
“This system may not have been truly alive, but it was starting on the path to life”
In this way, by comparing the ribosomal RNA of living organisms, Williams and his team have been able to ride the ribosome back to LUCA’s time, and beyond. Then, once they knew how to recognise the “insertion fingerprints” in the ribosome’s modern decoration, the team looked at what RNA would already have been present in LUCA’s ribosome. They found similar traces of insertions, pointing to ancient additions that must have taken place even before LUCA. Every time they found an addition, they snipped it away, pruning branches further and further into the past to reconstruct ever earlier, more primitive versions of the ribosome, all the way back to its beginnings.
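The peel-back procedure described above can be sketched as a simple loop: given a modern sequence and a list of identified insertion spans ordered from newest to oldest, strip them off one at a time to recover successively earlier versions. The sequence and spans below are invented for illustration; the real analysis works on ribosomal RNA structures, not toy strings.

```python
# Hypothetical sketch of "riding the ribosome back": remove the most
# recent identified insertion, then the next, reconstructing ever
# earlier versions of the molecule. Data here is made up.
def peel_back(rrna: str, insertions: list[tuple[int, int]]) -> list[str]:
    """Return the series of reconstructed ancestral sequences,
    newest (modern) first, oldest core last."""
    stages = [rrna]
    for start, end in insertions:  # newest insertion first
        rrna = rrna[:start] + rrna[end:]  # snip the insertion away
        stages.append(rrna)
    return stages

modern = "GGAUCCXXXAAUYYGC"          # X and Y mark two later additions
stages = peel_back(modern, [(12, 14), (6, 9)])
print(stages[-1])                    # oldest reconstructed core
```

Each span refers to positions in the sequence as it stands at that step, so later (older) insertions are snipped from the already-trimmed molecule.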
The most ancient part of the ribosome, Williams and his colleagues found, is a stretch of RNA that includes the cradle-like region that today links amino acids to form protein-like chains. Other teams have used different methods to identify the earliest parts of the ribosome, such as simply peeling away layers, like an onion, to reveal the core. They agree that this cradle is the most ancient bit. “This is probably the best model we have of the history of the ribosome,” says George Fox, an evolutionary biologist at the University of Houston, Texas.
Stripped of all its later refinements, this rudimentary ribosome had none of the precision it would achieve by the time of LUCA. It lacked the regions that read the genetic code, so it couldn’t have produced specific proteins. Instead, it must have linked amino acids, and probably whatever other molecules would fit into its cradle, willy-nilly into short, random chains through a simple chemical reaction that points to a terrestrial origin of life (see “A dry little pond”). As a result, early ribosomes would have made a mishmash of different molecules, says Williams. “We call them molecular sausage-makers.” Other researchers have found supporting evidence: under the right conditions, even modern ribosomes can be tricked into linking molecules other than amino acids together.
These most ancient parts of the ribosome can tell us about how all of life on Earth was booted up

On primordial Earth, as random fragments churned out of the sausage-maker, a few would have happened to take a shape that helped them stick to the ribosomal RNA. This would have stabilised them a little. “Those sequences that form more stable structures are going to last longer, so we’re going to have those building up,” says Nicholas Hud, a biochemist at Georgia Tech who collaborates with Williams. Gradually, these accretions built the ribosome into a larger and larger structure. “That’s a form of evolution, and I haven’t said anything about genes or information. I would call it chemical evolution,” says Hud.
This mess of co-evolving RNA and protein fragments falls well short of what most researchers would call truly alive. The molecules may not even have been recognisably RNA or protein, but a range of similar proto-molecules that self-assembled more easily. But the whole system was starting down a path to life, as chemical evolution winnowed through the randomness and selected the components that clung most tightly to one another.
Although some details remain murky, Williams has no doubt about one feature of this early stage: there was no genetic code and no precise replication of a genome until just before true life evolved. In modern cells, instructions from the central DNA library in the nucleus are shipped out to the ribosomes via messenger RNA. The ribosome binds these mRNA transcripts and reads them three letters at a time, with each triplet coding for one amino acid. The specified amino acid, in turn, is escorted to the ribosome by a transfer RNA, which has a matching sequence to the triplet.
Transfer RNA and mRNA are essential executors of the genetic code. Yet in all the reconstructions of pre-LUCA history, the parts of the ribosome that are responsible for binding them appear relatively late. Even then, Williams conjectures, the initial function of mRNA and tRNA was unlikely to be anything as sophisticated as triplet coding. Instead, they probably arose as little snippets of RNA that happened to help position random amino acids in the right orientation to attach to the growing protein molecule.
Today, the blueprint contained in the genetic code allows the ribosome to make exact copies of the same protein over and over again. But at this point in life’s prehistory, the proto-ribosome had no way to read a genome and, thus, no use for one. Instead of exact copies, it would have made a range of new molecules, with the ones that helped stabilise the system sticking around.
Gradually, this process would have selected mRNA and tRNA that were better and more precise at their jobs, eventually leading to molecules that introduced specific amino acids at specific times – the genetic code that today is found in all living things. At last, Darwinian evolution had arrived, and the system finally reached the point at which it can truly be called alive. Most evolutionary biologists agree that RNA would have carried the genetic code first. DNA, a more stable molecule, came later (see “Before life’s big bang”).
Molecular snuggling
One of the key processes that RNA and proteins had to evolve as they became more refined was folding: modern proteins all contort into intricate three-dimensional shapes without which they do not function. How this arose is a bit of a mystery. Folding is an extremely rare property, and there are more potential protein chains than there are atoms in the universe. So it is inconceivable that evolution could have explored all possible proteins to find the few that fold, says Andrei Lupas of the Max Planck Institute for Developmental Biology in Germany.
Chemical evolution offers a solution to this problem. For a protein to fold well, one part of it has to snuggle tightly against another without intervening water molecules. That is also the feature that would have helped bits of proto-protein bind and stabilise proto-RNA. As these molecules co-evolved, they would have selected protein fragments that were predisposed to fold well, says Lupas.
The ribosome still preserves a record of this process. The proteins associated with the oldest part of the ribosome show little or no complex folding. Moving to successively more recent parts of the ribosome, researchers observed proteins folding first into simple sheets and then into more and more precise and intricate shapes. At the same time, the RNA portions of the ribosome were also developing tighter and more stable folding. “The idea that protein and RNA co-evolved is mapped right there. We can see it,” says Williams. His scenario offers a second way out of early life’s “chicken and egg” paradox. It also represents a major departure from the dominant “RNA world” hypothesis (see “Cracked: the chicken and the egg”).
Even Williams’s critics welcome his ideas. “I think that Loren is championing a radical revision of the way we look at early life,” says Niles Lehman at Portland State University in Oregon. “Even if you don’t agree with all the details, it does help us rethink models of how life started, and that’s a valuable contribution.”
Williams says he’s just getting started. His team is trying to recreate the stages of ribosomal evolution in the lab, so that they can test what each stage can do. So far, they have built the earliest proto-ribosomal core and are beginning to put it through its paces.
“The ribosome is nature’s gift to us,” he says. “It’s a little time capsule, and we’ve just opened it up and are starting to look inside.”
A dry little pond
Researchers have long debated whether life originated in a warm little pond, as Darwin speculated, or in some other habitat like undersea hydrothermal vents, terrestrial hot springs or even clay sediments.
The ribosome may offer a clue to this puzzle. Its oldest part does a rough job of linking small molecules into longer chains. It achieves this through a dehydration reaction, in which the link is sealed through the release of a water molecule. Because this happens more easily under dry conditions, it suggests the early ribosome didn’t ply its trade in the ocean, says Nicholas Hud at the Georgia Institute of Technology.
The most likely spot would be around the fringes of a temporary pond, where conditions would alternate between moist – favouring the mixing of ingredients – and dry, which would favour longer chains.
Cracked: the chicken and the egg
Life as we know it today poses a chicken-and-egg problem: DNA can’t replicate without proteins to do the work, but proteins can’t exist without DNA to spell out their structure.
That dilemma is a big reason why researchers favour the idea that life began not with DNA but RNA, which can not only store information, but also fold into complex shapes that help it act as a catalyst. An RNA molecule that can catalyse its own replication would neatly solve the chicken-and-egg problem by combining both information and catalysis into one molecule. In this “RNA world” scenario, self-replicating RNA later outsourced its catalytic role to proteins, which are more versatile, and passed on its information-storage job to DNA, which is more stable.
Loren Williams and his colleagues at Georgia Tech offer a different solution. Neither proteins nor nucleic acids came first, they say. Instead, both evolved together from the very beginning.
The whole notion of an RNA world rings false to Williams because it requires life to abandon a working RNA-based system and reinvent itself in DNA and proteins, instead of tinkering to refine existing processes. “I just don’t think evolution does those kinds of things,” he says. “It would mean evolution is not a tinkerer, it’s an engineer.” The alternative – that RNA and proteins co-evolved – is more plausible, he says.
Not everyone agrees. “That is the minority report,” says Niles Lehman, an evolutionary biochemist at Portland State University in Oregon. “But I think it is a growing minority.”