Friday 22 June 2018

Consciousness: How we’re solving a mystery bigger than our minds

From New Scientist, June 2018
https://www.newscientist.com/article/mg23831830-300-consciousness-how-were-solving-a-mystery-bigger-than-our-minds/

What is being in love, feeling pain or seeing colour made of? How our brains make conscious experience has long been a riddle – but we’re uncovering clues




TWENTY years ago this week, two young men sat in a smoky bar in Bremen, northern Germany. Neuroscientist Christof Koch and philosopher David Chalmers had spent the day lecturing at a conference about consciousness, and they still had more to say. After a few drinks, Koch suggested a wager. He bet a case of fine wine that within the next 25 years someone would discover a specific signature of consciousness in the brain. Chalmers said it wouldn’t happen, and bet against.

It was a bit of fun, but also an audacious gamble. Consciousness is truly mysterious. It is the essence of you – the redness of red, the feeling of being in love, the sensation of pain and all the rest of your subjective experiences, conjured up somehow by your brain. Back then, its elusive nature meant that many believed it wasn’t even a valid subject for scientific investigation.


Today, consciousness is a hot research area, and Koch and Chalmers are two of its most influential figures. Koch is head of the Allen Institute for Brain Science in Seattle. Chalmers is a professor at New York University and famous for coining the phrase the “hard problem” to distinguish the difficulty of understanding consciousness from that of grasping other mental phenomena. Much progress has been made, but how close are we to solving the mystery? To find out, I decided to ask Chalmers and Koch how their bet was going. But there was a problem – they had mislaid the terms of the wager. Luckily, I too was in Bremen as a journalist 20 years ago and was able to come to their rescue.

The consciousness bet has its roots in Koch’s research. In the mid-1980s, as a young scientist, he began collaborating with Francis Crick who had co-discovered the double helix structure of DNA. Both men were frustrated that science had so little to say about consciousness. Indeed, the International Dictionary of Psychology described it thus: “a fascinating but elusive phenomenon; it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written about it.” The pair believed this was partly because researchers lacked a practical approach to the problem. In his work on DNA, Crick had reduced the mystery of biological heritability to a few intrinsic properties of a small set of molecules. He and Koch thought consciousness might be explained using a similar approach. Leaving aside the tricky issue of what causes consciousness, they wanted to find a minimal physical signature in the brain sufficient for a specific subjective experience. Thus began the search for the “neural correlates of consciousness”.

This approach, which allows incremental progress and appeals to researchers no matter what their philosophical stance, has been central to the study of consciousness ever since. Indeed, neural correlates of consciousness – the subject of Koch’s wager – was the topic under discussion at the Bremen conference. There, Koch argued that gamma waves might be involved, based on research linking awareness to this kind of brain activity, where neurons fire at frequencies around 40 hertz. Conference delegates also heard that pyramidal cells in the brain’s outer layer or cortex might play a key role (see “Where’s consciousness?”).


There was no shortage of ideas. However, the early ones all proved too simplistic. Take the notion, which Koch and Crick favoured for a while, that a sheet-like structure beneath the cortex called the claustrum is crucial for consciousness. There was reason to be optimistic: a case study in 2014 showed that electrically stimulating this structure in a woman’s brain caused her to stare blankly ahead, seemingly unconscious, until the stimulation stopped. But another study described someone who remained fully conscious after his claustrum was destroyed by encephalitis, undermining that idea.




Undeterred, those searching for the neural correlates of consciousness have come up with more sophisticated ideas. But are we really any closer to cracking the problem? Last year, I had the perfect opportunity to find out when I met up with Chalmers at a consciousness conference in Budapest, Hungary.

I asked him how his bet with Koch was shaping up. Looking a bit dejected, he told me that they had met three months earlier in New York, and the subject of the wager came up. “Sadly, neither of us could remember the exact terms,” he said. It was then that I realised I might be able to help. Although I wasn’t with them at the bar when it all happened, the following day I had interviewed Chalmers, who mentioned the bet he had made just hours earlier.
Back home, I started looking for notes from our long-ago meeting. Eventually, I found a cassette stowed away in a box on top of a shelf in my study. The faded label read: “David Chalmers interview”. After more searching, I found a tape recorder and pressed play.
Chalmers is describing how he left the conference banquet after midnight and continued to a bar with Koch and a few others. “We had a good time. It got light very early and we walked back around 6 o’clock.” Even though he still hasn’t slept, his voice is remarkably alert. Distant memories re-emerge. We are in an outdoor restaurant under a hazy sky. Chalmers wears a black leather jacket and has shoulder-length hair.

Fast-forwarding through the tape I find what I’m looking for. Towards the end, Chalmers mentions the bet and specifies what kind of signature of consciousness it refers to: “a small set of neurons characterised by a small list of intrinsic properties. I think we said less than 10.” Intrinsic properties could be, say, a neuron’s pattern of electrical firing, or genes regulating the production of various neurotransmitters. “And it will be clear in 25 years,” he says. Bingo! I emailed a clip from the recording to Chalmers. He immediately replied: “Thanks – this is great!” and forwarded the message to Koch.

In the TV show Humans, finding the consciousness switch is easy – it’s under the chin

These days, the prevailing ideas of consciousness have more to do with the properties of networks of neurons than those of specific cells. One proposal that has been particularly influential is called global workspace theory. It suggests that information coming from the outside world competes for attention in the cortex and a structure in the centre of the brain called the thalamus. If the signal it generates outcompetes other information, it is broadcast across the brain. Only then do you consciously register it. A popular version of this theory is that special “workspace neurons” in the cortex, primarily in the front of the brain, broadcast information through their long-range connections.
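To make the broadcast idea concrete, here is a toy sketch in code. It is not any researcher's published model: the module names, salience numbers and ignition threshold below are invented purely for illustration, and real workspace theories concern populations of neurons, not Python dictionaries.

# Toy sketch of the "global workspace" broadcast described above.
# Not a published model: module names, salience values and the
# ignition threshold are made up for this example.

THRESHOLD = 0.5  # minimum salience needed to trigger a broadcast

def workspace_step(signals):
    """signals: dict of module name -> (content, salience)."""
    # Incoming signals compete; only the most salient one can win.
    module, (content, salience) = max(signals.items(),
                                      key=lambda item: item[1][1])
    if salience < THRESHOLD:
        return None  # nothing is globally broadcast this cycle
    # The winning content is copied to every module.
    return {name: content for name in signals}

signals = {
    "vision":  ("red mug on the desk", 0.9),
    "hearing": ("distant traffic", 0.3),
    "touch":   ("chair against back", 0.2),
}
print(workspace_step(signals))  # every module now holds the mug signal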

Measuring experience

Other network-based ideas suggest that consciousness is the result of information being combined so that it is more than the sum of its parts. One that has grabbed much attention is integrated information theory (IIT). It is the brainchild of neuroscientist Giulio Tononi at the University of Wisconsin, who believes that the amount of consciousness in any system – which he calls phi – can be measured. Very approximately, phi will be high in a system of specialised modules that can interact rapidly and effectively. This is true for large parts of the brain’s cortex. In contrast, phi is low in the cerebellum, which contains 69 billion of the 86 billion nerve cells that make up the human brain, but is composed of modules that work largely independently of each other.
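Tononi's phi is defined over a system's cause-effect structure and is notoriously hard to compute, but the flavour of "more than the sum of its parts" can be shown with a much cruder statistical stand-in. The sketch below is only that: a toy multi-information measure on made-up distributions, not IIT's actual phi.

# Crude, toy "integration" measure in the spirit of the idea above:
# information in the whole beyond the sum of its parts. This is NOT
# Tononi's phi; it is a simple statistical proxy for illustration.

import itertools, math

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, units):
    """Marginal distribution over the given subset of unit indices."""
    out = {}
    for state, p in joint.items():
        key = tuple(state[i] for i in units)
        out[key] = out.get(key, 0.0) + p
    return out

def integration(joint, n_units):
    """Minimum over bipartitions of H(A) + H(B) - H(whole)."""
    best = float("inf")
    indices = range(n_units)
    for k in range(1, n_units // 2 + 1):
        for part_a in itertools.combinations(indices, k):
            part_b = tuple(i for i in indices if i not in part_a)
            value = (entropy(marginal(joint, part_a))
                     + entropy(marginal(joint, part_b))
                     - entropy(joint))
            best = min(best, value)
    return best

# Three binary units that are always perfectly correlated...
coupled = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
# ...versus three units that are statistically independent.
independent = {s: 1 / 8 for s in itertools.product((0, 1), repeat=3)}

print(integration(coupled, 3))      # 1.0 bit beyond any split: integrated
print(integration(independent, 3))  # 0.0: the whole is just its parts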

How do these two ideas stack up? Global workspace theory seems to fit with a lot of findings about the brain. However, it does not convince some, including philosopher Ned Block at New York University, who questions whether it explains subjective experience itself or just indicates when information is available for reasoning, speech and bodily control. IIT also reflects some observations about the brain. A stroke or tumour, for example, may destroy the cerebellum without significantly affecting consciousness, whereas similar damage to the cortex usually disrupts subjective experience, and can even cause a coma. The theory is quite controversial – not least because it posits that something inanimate like a grid of certain logic gates may have an extremely high degree of consciousness – but it also has some high-profile supporters. One of them is Koch.

This is something that Chalmers pointed out in his first message: “I’m thinking that with your current advocacy of IIT, this bet is looking pretty good for me.” Koch replied the same day in an upbeat tone, defending his allegiance to the idea. “There has been a lot of progress in the intervening years concerning the neuronal correlates of consciousness (NCC). The latest, best estimate for the NCC is a hot-zone in the posterior cortex but, rather surprisingly and contrary to Global Workspace Theories, not in the front of the cortex.”

He attached two papers he had recently co-authored to support this last point. They link the front of the cortex with monitoring and reporting, but not subjective experience. He also pointed out that the back of the cortex has much higher phi than the front, due to the connectivity of its neurons, and is therefore more closely coupled to consciousness according to IIT. “The intrinsic properties that we spoke about in Bremen would be the intrinsic connectivity of neurons in [the] posterior cortex,” he wrote. So, the posterior hot zone is Koch’s horse in the race.

Chalmers replied a few hours later, highlighting the details of the bet. It is about finding a link between a subjective experience and a small number of nerve cells with a handful of intrinsic properties. Chalmers didn’t think that something associated with phi should count as intrinsic: “I’m thinking that phi is a network property of a group of neurons, so not an intrinsic property of specific neurons,” he wrote. Another question was whether Koch’s hot zone constituted a small group of neurons. “Overall, I agree that IIT is cool and shares something of the bold spirit of what you proposed then, but the specific idea seems different.”

The reply arrived the following day: “You are right – it is unlikely that there is a special ‘magical’ property inside some particular neurons,” Koch wrote. But he then proceeded to describe how intrinsic factors, such as genes, might shape connectivity in a way that makes it difficult to distinguish between intrinsic and extrinsic properties.

Although intriguing, these emails left many questions unanswered about the bet, so I arranged a Skype interview with Chalmers and Koch. I began by asking them how they see their chances of winning.

“Let’s go over what we said in Bremen,” says Koch, and gets into an exposition on the number of neurons that could be reasonably described as a “small set”. Chalmers looks amused. He then leans forward: “I think fairly crucial to Christof’s original view is that it would be a special kind of neuron, with some special properties. And that is what to me looks somewhat unlikely,” he says. “Well, that we simply don’t know, Dave. This is what our institute does, and we just got $120 million to do more of it,” says Koch, referring to the Allen Institute’s efforts to map and characterise different cell types in the brain. “OK,” says Chalmers. “But ‘we don’t know’ isn’t good enough for you. You need to actually discover this by 2023 for the purposes of this bet. And that’s looking unlikely.” Koch stares into the distance for a moment, nods and then smiles: “I agree, it’s unlikely because the networks are so complex.” But he hasn’t given up all hope. “A lot can happen in five years!”

Whoever wins, Chalmers and Koch are united in their belief that this is an important quest. Would success mean that we had cracked the mystery of consciousness? “Well, there is a difference between finding a correlate and finding an explanation,” says Chalmers. Nevertheless, he hopes that neural correlates of consciousness will get us closer to answering the question of why it exists. Koch agrees: “The only thing I’d like to add is that looking for NCCs also allows us to answer a host of practical, clinical and legal questions.”

One such is how to measure the level of consciousness in brain-damaged people who lack the ability to communicate. Koch has recently launched a project that he hopes will solve this within a decade by creating a “consciousness-o-meter” to detect consciousness, for instance by zapping the brain with magnetic pulses.

That would be impressive, but it is still far from solving the hard problem. I ask Koch if he ever feels that consciousness is too great a mystery to be solved by the human mind. “No!” he answers. Chalmers is more circumspect, however. He suggests consciousness may even be something fundamental that doesn’t have an explanation. “But we won’t know until we try. And even if there are some things that remain mysteries, there is going to be a whole lot of other stuff that we can understand.”

Artificial intelligence could even solve the riddle for us one day, he says. It sounds like wild speculation, but he is serious. “Absolutely!” AI might eventually evolve consciousness, he says, but perhaps that’s not even essential. “If it turns out that there is some completely rational reason that there is consciousness in the universe, maybe even an unconscious system could figure that out.”
On 20 June 2023, the bet will be settled. Koch wants the occasion marked by a workshop and an official announcement of the winner. He is an optimist. But even if he were to win, finding the NCC is just the first step towards the big goal: a fundamental theory of consciousness. Chalmers hopes it will be as definitive and widely accepted as current theories in physics – but he believes it will take far longer than five years. “It’s going to be 100, 200 years. So let’s re-evaluate then.”

This article appeared in print under the headline “The consciousness wager”

Sunday 17 June 2018

Simulating The Universe

Universe Is Not a Simulation, but We Can Now Simulate It
By Natalie Wolchover, June 12, 2018
https://www.quantamagazine.org/coder-physicists-are-simulating-the-universe-to-unlock-its-secrets-20180612/

Computer simulations have become so accurate that cosmologists can now use them to study dark matter, supermassive black holes and other mysteries of the real evolving cosmos.

In the early 2000s, a small community of coder-cosmologists set out to simulate the 14-billion-year history of the universe on a supercomputer. They aimed to create a proxy of the cosmos, a Cliffs Notes version in computer code that could run in months instead of giga-years, to serve as a laboratory for studying the real universe.

The simulations failed spectacularly. Like mutant cells in a petri dish, mock galaxies grew all wrong, becoming excessively starry blobs instead of gently rotating spirals. When the researchers programmed in supermassive black holes at the centers of galaxies, the black holes either turned those galaxies into donuts or drifted out from galactic centers like monsters on the prowl.

But recently, the scientists seem to have begun to master the science and art of cosmos creation. They are applying the laws of physics to a smooth, hot fluid of (simulated) matter, as existed in the infant universe, and seeing the fluid evolve into spiral galaxies and galaxy clusters like those in the cosmos today.

 “I was like, wow, I can’t believe it!” said Tiziana Di Matteo, a numerical cosmologist at Carnegie Mellon University, about seeing realistic spiral galaxies form for the first time in 2015 in the initial run of BlueTides, one of several major ongoing simulation series. “You kind of surprise yourself, because it’s just a bunch of lines of code, right?”

With the leap in mock-universe verisimilitude, researchers are now using their simulations as laboratories. After each run, they can peer into their codes and figure out how and why certain features of their simulated cosmos arise, potentially also explaining what’s going on in reality. The newly functional proxies have inspired explanations and hypotheses about the 84 percent of matter that’s invisible — the long-sought “dark matter” that seemingly engulfs galaxies. Formerly puzzling telescope observations about real galaxies that raised questions about the standard dark matter hypothesis are being explained in the state-of-the-art facsimiles.

The simulations have also granted researchers such as Di Matteo virtual access to the supermassive black holes that anchor the centers of galaxies, whose formation in the early universe remains mysterious. “Now we are in an exciting place where we can actually use these models to make completely new predictions,” she said.

Until about 15 years ago, most cosmological simulations didn’t even attempt to form realistic galaxies. They modeled only dark matter, which in the standard hypothesis interacts only gravitationally, making it much easier to code than the complicated atomic stuff we see.

The dark-matter-only simulations found that roundish “halos” of invisible matter spontaneously formed with the right sizes and shapes to potentially cradle visible galaxies within them. Volker Springel, a leading coder-cosmologist at Heidelberg University in Germany, said, “These calculations were really instrumental to establish that the now-standard cosmological model, despite its two strange components — the dark matter and the dark energy — is actually a pretty promising prediction of what’s going on.”

Researchers then started adding visible matter into their codes, stepping up the difficulty astronomically. Unlike dark matter halos, interacting atoms evolve complexly as the universe unfolds, giving rise to fantastic objects like stars and supernovas. Unable to code the physics in full, coders had to simplify and omit. Every team took a different approach to this abridgement, picking and programming what they saw as the key astrophysics.

Then, in 2012, a study by Cecilia Scannapieco of the Leibniz Institute for Astrophysics in Potsdam gave the field a wake-up call. “She convinced a bunch of people to run the same galaxy with all their codes,” said James Wadsley of McMaster University in Canada, who participated. “And everyone got it wrong.” All their galaxies looked different, and “everyone made too many stars.”

Scannapieco’s study was both “embarrassing,” Wadsley said, and hugely motivational: “That’s when people doubled down and realized they needed black holes, and they needed the supernovae to work better” in order to create credible galaxies. In real galaxies, he and others explained, star production is diminishing. As the galaxies run low on fuel, their lights are burning out and not being replaced. But in the simulations, Wadsley said, late-stage galaxies were “still making stars like crazy,” because gas wasn’t getting kicked out.

The first of the two critical updates that have fixed the problem in the latest generation of simulations is the addition of supermassive black holes at spiral galaxies’ centers. These immeasurably dense, bottomless pits in the space-time fabric, some weighing more than a billion suns, act as fuel-burning engines, messily eating surrounding stars, gas and dust and spewing the debris outward in lightsaber-like beams called jets. They’re the main reason present-day spiral galaxies form fewer stars than they used to.

The other new key ingredient is supernovas — and the “superbubbles” formed from the combined shockwaves of hundreds of supernovas exploding in quick succession. In a superbubble, “a small galaxy over a few million years could blow itself apart,” said Wadsley, who integrated superbubbles into a code called GASOLINE2 in 2015. “They’re very kind of crazy extreme objects.” They occur because stars tend to live and die in clusters, forming by the hundreds of thousands as giant gas clouds collapse and later going supernova within about a million years of one another. Superbubbles sweep whole areas or even entire small galaxies clean of gas and dust, curbing star formation and helping to stir the pushed-out matter before it later recollapses. Their inclusion made small simulated galaxies much more realistic.

Jillian Bellovary, a wry young numerical cosmologist at Queensborough Community College and the American Museum of Natural History in New York, coded some of the first black holes, putting them into GASOLINE in 2008. Skipping or simplifying tons of physics, she programmed an equation dictating how much gas the black hole should consume as a function of the gas’s density and temperature, and a second equation telling the black hole how much energy to release. Others later built on Bellovary’s work, most importantly by figuring out how to keep black holes anchored at the centers of mock galaxies, while stopping them from blowing out so much gas that they’d form galactic donuts.
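The article does not reproduce Bellovary's equations. In many cosmological codes, though, the two rules she describes take roughly the following form: a Bondi-style accretion rate set by the local gas density and temperature (via the sound speed), plus a feedback term that returns a fixed fraction of the accreted rest-mass energy to the surrounding gas. The sketch below assumes that commonly used recipe, with every numerical value chosen only for illustration.

# Rough sketch of the kind of sub-grid black hole prescription
# described above. The actual GASOLINE equations are not given in the
# article; this uses the widely cited Bondi-Hoyle accretion rate plus
# a constant feedback efficiency, with illustrative parameter values.

import math

G = 6.674e-11          # gravitational constant, SI units
C = 3.0e8              # speed of light, m/s

def accretion_rate(m_bh, gas_density, sound_speed, rel_velocity):
    """Bondi-Hoyle-style gas consumption rate (kg/s), set by the local
    gas density (kg/m^3) and temperature via the sound speed (m/s)."""
    return (4 * math.pi * G**2 * m_bh**2 * gas_density
            / (sound_speed**2 + rel_velocity**2) ** 1.5)

def feedback_power(m_dot, eps_radiative=0.1, eps_coupling=0.05):
    """Energy per second returned to the surrounding gas (W): a fixed
    fraction of the accreted rest-mass energy, as in common models."""
    return eps_coupling * eps_radiative * m_dot * C**2

m_bh = 1e8 * 2e30                        # a hundred-million-solar-mass hole
m_dot = accretion_rate(m_bh, 1e-21, 1e4, 1e4)
print(m_dot, "kg/s consumed,", feedback_power(m_dot), "W returned")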

Simulating all this physics for hundreds of thousands of galaxies at once takes immense computing power and cleverness. Modern supercomputers, having essentially maxed out the number of transistors they can pack upon a single chip, have expanded outward across as many as 100,000 parallel cores that crunch numbers in concert. Coders have had to figure out how to divvy up the cores — not an easy task when some parts of a simulated universe evolve quickly and complexly, while little happens elsewhere, and then conditions can switch on a dime. Researchers have found ways of dealing with this huge dynamic range with algorithms that adaptively allocate computer resources according to need.
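The article leaves the actual algorithms unnamed; production codes use space-filling curves, domain decomposition and dynamic rebalancing. The core idea of allocating resources by need rather than by volume can nonetheless be sketched with a simple greedy balancer. Everything below, from the region names to the work estimates, is invented for the example.

# Toy sketch of need-based load balancing: regions of the simulated
# volume are handed to processor cores so that estimated work, not
# volume, is spread evenly. Real codes use far more elaborate schemes;
# this only shows the idea.

import heapq

def balance(regions, n_cores):
    """regions: dict name -> estimated work (e.g. particle count times
    update frequency). Greedily assign each region to the least-loaded
    core, heaviest regions first."""
    cores = [(0.0, i, []) for i in range(n_cores)]   # (load, id, regions)
    heapq.heapify(cores)
    for name, work in sorted(regions.items(), key=lambda kv: -kv[1]):
        load, core_id, assigned = heapq.heappop(cores)
        assigned.append(name)
        heapq.heappush(cores, (load + work, core_id, assigned))
    return sorted(cores, key=lambda c: c[1])

regions = {"galaxy cluster core": 900, "spiral arm": 300,
           "small dwarf galaxy": 120, "void A": 5, "void B": 3}
for load, core_id, assigned in balance(regions, 2):
    print(f"core {core_id}: load {load} <- {assigned}")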

They’ve also fought and won a variety of logistical battles. For instance, “If you have two black holes eating the same gas,” Bellovary said, and they’re “on two different processors of the supercomputer, how do you have the black holes not eat the same particle?” Parallel processors “have to talk to each other,” she said.

Saving Dark Matter

The simulations finally work well enough to be used for science. With BlueTides, Di Matteo and collaborators are focusing on galaxy formation during the universe’s first 600 million years. Somehow, supermassive black holes wound up at the centers of dark matter halos during that period and helped pull rotating skirts of visible gas and dust around themselves. What isn’t known is how they got so big so fast. One possibility, as witnessed in BlueTides, is that supermassive black holes spontaneously formed from the gravitational collapse of gargantuan gas clouds in over-dense patches of the infant universe. “We’ve used the BlueTides simulations to actually predict what this first population of galaxies and black holes is like,” Di Matteo said. In the simulations, they see pickle-shaped proto-galaxies and miniature spirals taking shape around the newborn supermassive black holes. What future telescopes (including the James Webb Space Telescope, set to launch in 2020) observe as they peer deep into space and back in time to the birth of galaxies will in turn test the equations that went into the code.

Another leader in this back-and-forth game is Phil Hopkins, a professor at the California Institute of Technology. His code, FIRE, simulates relatively small volumes of the cosmos at high resolution. Hopkins “has pushed the resolution in a way that not many other people have,” Wadsley said. “His galaxies look very good.” Hopkins and his team have created some of the most realistic small galaxies, like the “dwarf galaxy” satellites that orbit the Milky Way.

These small, faint galaxies have always presented problems. The “missing satellite problem,” for instance, is the expectation, based on standard cold dark matter models, that hundreds of satellite galaxies should orbit every spiral galaxy. But the Milky Way has just dozens. This has caused some physicists to contemplate more complicated models of dark matter. However, when Hopkins and colleagues incorporated realistic superbubbles into their simulations, they saw many of those excess satellite galaxies go away. Hopkins has also found potential resolutions to two other problems, called “cusp-core” and “too-big-to-fail,” that have troubled the cold dark matter paradigm.

With their upgraded simulations, Wadsley, Di Matteo and others are also strengthening the case that dark matter exists at all. Arguably the greatest source of lingering doubt about dark matter is a curious relationship between the visible parts of galaxies. Namely, the speeds at which stars circumnavigate the galaxy closely track with the amount of visible matter enclosed by their orbits — even though the stars are also driven by the gravity of dark matter halos. There’s so much dark matter supposedly accelerating the stars that you wouldn’t expect the stars’ motions to have much to do with the amount of visible matter. For this relationship to exist within the dark matter framework, the amounts of dark matter and visible matter in galaxies must be fine-tuned such that they are tightly correlated themselves and galactic rotation speeds track with either one.

An alternative theory called modified Newtonian dynamics, or MOND, argues that there is no dark matter; rather, visible matter exerts a stronger gravitational force than expected at galactic outskirts. By slightly tweaking the famous inverse-square law of gravity, MOND broadly matches observed galaxy rotation speeds (though it struggles to account for other phenomena attributed to dark matter).
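To see what that "slight tweak" does in practice, here is a rough sketch comparing circular speeds around a point mass under ordinary Newtonian gravity and under the commonly used "simple" MOND interpolation, in which the effective acceleration approaches sqrt(g_N * a0) once the Newtonian pull g_N drops below the empirical scale a0 of about 1.2 x 10^-10 m/s^2. A real galaxy is not a point mass and MOND has several interpolating functions, so treat the numbers as illustrative only.

# Rough illustration of the MOND tweak described above: circular speed
# around a point mass under Newtonian gravity versus the "simple" MOND
# interpolating function. Shows why the outer rotation curve flattens.

import math

G  = 6.674e-11      # m^3 kg^-1 s^-2
A0 = 1.2e-10        # MOND acceleration scale, m/s^2 (empirical value)
M  = 1e11 * 2e30    # visible mass of a Milky-Way-like galaxy, kg
KPC = 3.086e19      # one kiloparsec in metres

def v_newton(r):
    return math.sqrt(G * M / r)

def v_mond(r):
    g_newton = G * M / r**2
    # "Simple" interpolation: g -> g_N at high accelerations,
    # g -> sqrt(g_N * a0) when g_N << a0.
    g = 0.5 * (g_newton + math.sqrt(g_newton**2 + 4 * g_newton * A0))
    return math.sqrt(g * r)

for r_kpc in (5, 20, 50, 100):
    r = r_kpc * KPC
    print(f"{r_kpc:3d} kpc: Newton {v_newton(r)/1000:5.0f} km/s, "
          f"MOND {v_mond(r)/1000:5.0f} km/s")

Far from the visible mass the Newtonian speed falls away while the MOND speed levels off near 200 km/s, which is the flat rotation curve that dark matter halos are otherwise invoked to explain.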

The fine-tuning problem appeared to sharpen in 2016, when the cosmologist Stacy McGaugh of Case Western Reserve University and collaborators showed how tightly the relationship between stars’ rotation speeds and visible matter holds across a range of real galaxies. But McGaugh’s paper met with three quick rejoinders from the numerical cosmology community. Three teams (one including Wadsley; another, Di Matteo; and the third led by Julio Navarro of the University of Victoria) published the results of simulations indicating that the relation arises naturally in dark-matter-filled galaxies.

Making the standard assumptions about cold dark matter halos, the researchers simulated galaxies like those in McGaugh’s sample. Their galaxies ended up exhibiting linear relationships very similar to the observed one, suggesting dark matter really does closely track visible matter. “We essentially fit their relation — pretty much on top,” said Wadsley. He and his then-student Ben Keller ran their simulation prior to seeing McGaugh’s paper, “so we felt that the fact that we could reproduce the relation without needing any tweaks to our model was fairly telling,” he said.

In a simulation that’s running now, Wadsley is generating a bigger volume of mock universe to test whether the relation holds for the full range of galaxy types in McGaugh’s sample. If it does, the cold dark matter hypothesis is seemingly safe from this quandary. As for why dark matter and visible matter end up so tightly correlated in galaxies, based on the simulations, Navarro and colleagues attribute it to angular momentum acting together with gravity during galaxy formation.

Beyond questions of dark matter, galactic simulation codes continue to improve, and reflect on other unknowns. The much-lauded, ongoing IllustrisTNG simulation series by Springel and collaborators now includes magnetic fields on a large scale for the first time. “Magnetic fields are like this ghost in astronomy,” Bellovary explained, playing a little-understood role in galactic dynamics. Springel thinks they might influence galactic winds — another enigma — and the simulations will help test this.

A big goal, Hopkins said, is to combine many simulations that each specialize in different time periods or spatial scales. “What you want to do is just tile all the scales,” he said, “where you can use, at each stage, the smaller-scale theory and observations to give you the theory and inputs you need on all scales.”

With the recent improvements, researchers say a philosophical debate has ensued about when to say “good enough.” Adding too many astrophysical bells and whistles into the simulations will eventually limit their usefulness by making it increasingly difficult to tell what’s causing what. As Wadsley put it, “We would just be observing a fake universe instead of a real one, but not understanding it.”

Friday 15 June 2018

The Many Faces of the Multiverse

It's looking likely that our universe is just one of many. But what kind of universe do we live in, asks Robert Adler

From New Scientist - The Collections: 21 Great Mysteries of the Universe 

WHETHER we are searching the cosmos or probing the subatomic realm, our most successful theories lead to the inescapable conclusion that our universe is just a speck in a vast sea of universes.
Until recently many physicists were reluctant to accept the idea of this so-called multiverse. Recent progress in cosmology, string theory and quantum mechanics, though, has brought about a change of heart. “The multiverse is not some kind of optional thing, like can you supersize or not,” says Raphael Bousso, a theoretical physicist at the University of California, Berkeley. Our own cosmological history, he says, tells us that “it’s there and we need to deal with it”.

These days researchers like Bousso are treating multiverses as real, investigating them, testing them and seeing what they say about our universe. One of their main motivations is the need to explain why the physical laws underlying our universe seem so finely tuned as to allow galaxies, stars, planets, complex chemistry, life – and us – to exist. Rather than appealing to God or blind luck, some argue that our existence sets parameters that reliably pluck universes like ours from the bottomless grab-bag of the multiverse.
Yet there is a problem. Different theories spin off very different kinds of multiverses. Our current standard theory of how the universe came to be, for example, predicts an infinite expanse of other universes, including an infinite number in which duplicates of you are reading this sentence and wondering if those other versions of you really exist. Meanwhile, string theory, which hoped to derive the particles, forces and constants of our universe from fundamental principles, instead discovered a wilderness of 10^500 universes fundamentally different from ours. Even quantum mechanics implies that our universe is a single snowflake in a blizzard of parallel universes (see diagrams, “The multiverse hierarchy”).

We are now faced with trying to understand and relate these ideas to one another. Of late, we have made enormous strides in making theoretical sense of concepts of the multiverse. At the same time, several groups claim to have made astronomical observations in support of the idea. It seems that we are finally beginning to find our place within the universe of universes.
We can trace the idea of the multiverse back to the early 1980s, when physicists realised that the big bang created an equally big problem. When astronomers measured its afterglow, radiation known as the cosmic microwave background (CMB), they found that it was unfathomably uniform – even at opposite ends of the visible universe. Finding a temperature match between such widely separated regions to within 1/10,000th of a degree, as we now know it to be, was as surprising as finding that their inhabitants, if any, spoke the same language, says Brian Greene, a physicist at Columbia University in New York City.




The problem was solved brilliantly by cosmologists Alan Guth at the Massachusetts Institute of Technology and Andrei Linde at Stanford University, California, among others. Their insight was that in the first 10^-35 seconds of the universe’s existence, space itself expanded by a factor of roughly 10^30. This stupendous stretching, known as inflation, accounts for the uniform temperature of the CMB and resolves another conundrum: why space appears flat, like a three-dimensional version of an infinite table top. Inflation has become an incredibly successful theory, precisely predicting the subtle ripples since measured in the CMB, which are echoes of quantum perturbations thought to have seeded galaxies and stars.

Guth and Linde’s attempt to explain our universe also led directly to a multiverse. That’s because inflation didn’t conveniently stop at the farthest regions from which light can travel to us today. Depending on how inflation unfolded, Guth says, the universe could be 10^10 times, 10^20 times, or even infinitely larger than the region we see. Inflation implies an expansion faster than the speed of light, meaning that beyond the horizon of our observable universe lie other parts of the universe that are effectively separated from ours. No influence can travel between these regions, essentially creating an infinite number of other worlds.

What would they look like? Max Tegmark, also a cosmologist at MIT, points out that although inflation predicts an abundance of universes, they all feature the same particles, forces and physical laws we find in our cosmic patch. But while in our universe elementary particles come together to make stars and galaxies in a particular way, the universe next door will contain a different arrangement of stars and galaxies, and so will our neighbours’ neighbours. Still, Tegmark has shown that even if a universe like ours were completely filled with elementary particles, they can only be arranged in a finite number of ways. It’s a huge number, 2 to the power of 10^118, but since there’s no sign that space is finite, there’s room for every arrangement to repeat.

This means that if you travel far enough, you will eventually encounter a universe containing an identical copy of you. Tegmark has calculated that your nearest copy lives about 10 to the power of 10^29 metres away. Carry on and you will find our universe’s twin lies 10 to the power of 10^118 metres from here (arxiv.org/abs/0905.1283v1). Since an infinite universe hosts an infinite number of variations, somewhere you have just won the Olympic 100 metres. Congratulations!
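The arithmetic behind those distances is essentially a pigeonhole estimate. In the spirit of Tegmark's own calculation (the 10^118 particle count per observable volume is his figure; the rest is counting, not an independent derivation):

\[
N_{\text{arrangements}} \;\lesssim\; 2^{\,10^{118}},
\qquad
d_{\text{twin}} \;\sim\; N_{\text{arrangements}} \times R_{\text{Hubble}} .
\]

If initial conditions are effectively random, a volume matching ours is expected somewhere within about that many Hubble radii, which, written loosely as a power of ten, is the 10-to-the-power-of-10^118 metres quoted above: at double-exponential sizes, converting from base 2 or multiplying by a Hubble radius scarcely changes the top exponent. The much nearer figure for a personal copy comes from counting the far smaller number of states a merely human-sized volume can occupy.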

Abundant as these universes are, there is nothing exotic about them. Tegmark classes the universes implied by simple inflation or an infinite expanse of space as the first level of a four-tier hierarchy that gets much, much stranger.
Take the second type of multiverse. Soon after inflation was discovered, Linde realised that it could be an ongoing or eternal process. When the enormous energy of empty space creates an inflating baby universe, the space around it, still crackling with energy, continues to expand even faster. That space could then sprout more universes that themselves inflate, and so on and on. “Practically all models of inflation that have been discussed predict eternal inflation,” says Alexander Vilenkin at Tufts University in Boston, who pioneered the idea in the 1980s. Guth dubs it the ultimate free lunch.

The eternal-inflation smorgasbord includes an infinite number of level 1 universes, but many other varieties as well. Each universe turns out in a different way, so features once thought universal, such as the masses of elementary particles and the strength of fundamental forces, differ. The bubble universes from eternal inflation include every permutation the laws of physics allow, leading Linde to quip that it’s not just a free lunch but “the only one at which all possible dishes are available”.
Those universes are part of the second level of Tegmark’s multiverse hierarchy. It also includes some 10^500 sorts of universe implied by string theory, the leading contender for a “theory of everything” that would explain all the particles and forces of nature.
Today’s standard model of particle physics includes a score of parameters whose values physicists can measure but can’t explain, such as the mass of an electron. String theorists hoped their theory would explain why those parameters have the values they do, and so why our universe is the way it is.
They were sorely disappointed. Rather than producing one perfect snowflake – the particles, forces and interactions underpinning our universe – string theory loosed an avalanche of universes, a daunting expanse that Leonard Susskind, a theoretical physicist at Stanford University, dubbed the string theory landscape.

What sets these universes apart is the nature of their space-time. In string theory, nature’s particles and forces come about from vibrations of tiny strings in 10 dimensions. The reason we only experience four dimensions is because the rest are “compactified” or knotted into intricate structures too small for us to experience. The physics that plays out in any given universe depends on how many dimensions are scrunched up, and their structure. Researchers have identified an enormous number of possible shapes that interact with string-theory fields to define a vast number of universes, most with unfamiliar physical laws and radically different forces and particles.
Eternal inflation provided a convincing mechanism for populating every point in the string theory landscape with an infinite number of real universes. “Originally, string theorists did not like the idea of the multiverse and wanted to have just one solution, but instead they found 10^500,” says Linde. “Now we must learn how to live in a universe which offers us an enormous multitude of choices.”



Finding out why our universe is as it is, when there is such a vast number of alternatives, remains one of cosmology’s biggest challenges. Our universe seems inexplicably finely tuned to produce the conditions necessary for life. If gravity were a bit stronger, the big bang would have been a squib. A bit weaker and it couldn’t have made galaxies and stars. If the electron’s charge differed by a few per cent, stars could not have created the heavy elements that make Earth-like planets. If the strong force varied by half a per cent, carbon wouldn’t exist, so neither would life as we know it. Topping the fine-tuning list is our universe’s small cosmological constant – the tiny dose of dark energy that is the source of the accelerating expansion of the universe.
The discovery in the late 1990s that the universe’s expansion is accelerating shocked most cosmologists. Quantum theory predicts a level of dark energy roughly 10^120 times larger than what was found. Since that would blast the universe apart, most researchers had assumed that some undiscovered symmetry would cancel that huge number, leaving a cosmological constant of zero. Nobody predicted that it would not be zero – except one person.

A decade earlier, Steven Weinberg, a Nobel-prize-winning physicist at the University of Texas, Austin, had predicted a small positive cosmological constant. That coup came from applying anthropic reasoning to the multiverse, an approach that is still hotly contested. Weinberg reasoned that for a universe to generate galaxies – and so stars, planets and observers – the amount of dark energy had to fall within a certain range for us to be here to measure it. This amounts to homing in on a subset of universes within the multiverse that have those properties. This probabilistic approach allowed Weinberg to predict the value of the cosmological constant with remarkable accuracy.
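The core of Weinberg's reasoning can be stated in one line (a sketch of the standard argument, not a quotation from him): dark energy must not come to dominate before matter has had time to clump into galaxies, otherwise there would be no observers around to measure anything.

\[
\rho_{\Lambda} \;\lesssim\; \rho_{\text{matter}}(z_{\text{gal}})
\;=\; (1+z_{\text{gal}})^{3}\,\rho_{\text{matter},0} .
\]

With galaxy formation getting under way at a redshift of a few, this caps the cosmological constant at no more than a few hundred times today's matter density, within a couple of orders of magnitude of the value later measured, rather than the 10^120 mismatch of the naive quantum estimate.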

“The discovery of the cosmological constant was one of the most unexpected discoveries of the last century, and it was predicted by the multiverse,” says Vilenkin. “So it’s indirect evidence that we are living in a multiverse.”
Since then, other researchers have used anthropic reasoning to constrain the amount of dark energy, the ratio of dark matter to ordinary matter, and the mass of elementary particles such as neutrinos and quarks. Using anthropic reasoning to winnow out our kind of universe from the multiverse, it seemed, might explain that mysterious fine-tuning. It might, in fact, be the only way. “If the laws and physical constants are different in other places,” says Martin Rees, an astronomer at the University of Cambridge, “there’s no avoiding it.”

Unfortunately there’s a catch in using this approach to elucidate our universe’s place in the multiverse: the usual rules of probability may not apply, making it impossible to estimate the likelihood of universes like ours. “You have infinitely many places where you win the lottery, and infinitely many where you don’t. So what’s your basis for saying that winning the lottery is unlikely?” says Bousso. “It pulls the rug out from under us to prove a theory right or wrong.”
This “measure problem” may have been solved by Bousso and his student I-Sheng Yang, now at Columbia University. They got rid of the troublesome infinities by deriving probabilities for a universe’s parameters from local “causal patches” – everything an observer can ever interact with. The probabilities they found matched those from an alternative approach pioneered by Vilenkin and Jaume Garriga, at the University of Barcelona in Spain, putting predictions from the multiverse on what could be a solid footing for the first time (New Scientist, 6 March 2010, p 28).

While theorists have made great strides in understanding cosmology’s multiverses, another two levels of Tegmark’s hierarchy remain. Level 3 has its origins in quantum theory. Physicists accept that quantum mechanics works brilliantly. It can be used, for example, to calculate a value for the magnetic moment of the electron which matches measurements to 1 part in a billion. However, they have yet to agree on what it means. In the quantum realm, particles don’t exist as discrete entities that you can pin down, but as “probability waves”. It’s the evolution of those waves that lets quantum physicists predict how electrons cloak an atom, how quarks and gluons interact, and even how objects as large as buckyballs can interfere like light waves (New Scientist, 8 May 2010, p 36).




The pivotal question is what happens to an object’s probability wave – its wave function – when someone measures it. Niels Bohr, a founder of quantum mechanics, declared that observing the wave function caused it to collapse and a particle to appear at a particular time and place. That, he said, explains why we see just one outcome out of the infinite possibilities embodied in the wave function.

Yet Bohr’s interpretation has long been criticised because it suggests that nothing becomes a reality until someone observes it. In the 1950s, such arguments led Hugh Everett, then a graduate student at Princeton University, to explore what would happen if he jettisoned Bohr’s argument. When Everett pictured the wave function rolling majestically on, as the maths of quantum theory said it did, he arrived at a still astonishing and controversial conclusion. Known as the many-worlds interpretation, it calls for the existence of a vast swarm of universes paralleling ours, in which all the possibilities can play out.

How real are those parallel worlds? As real as dinosaurs, says David Deutsch, a quantum physicist at the University of Oxford. “We’ve only ever seen fossils, and dinosaurs are the only rational explanation for them,” he says. “Many worlds is the only rational explanation for the quantum phenomena we see. I think this is as good as dinosaurs.”

The parallel worlds of quantum theory and the multiple universes created by eternal inflation could not seem more different. However, theorists have started to explore the idea that the quantum landscape and the inflationary landscape are one and the same (New Scientist, 4 June, p 8). Bousso and Susskind argue that they produce the same collection of parallel universes. “We both thought for a long time that the multiverse idea and the many-worlds idea were redundant descriptions of the same thing,” Susskind says. Earlier this year, though, he and Bousso developed a consistent way of applying quantum rules to the multiverse. The inflationary multiverse, they conclude, is simply the collection of all the bubble universes that quantum mechanics allows. “The virtual realities of quantum mechanics become genuine realities in the multiverse,” Susskind says.

Tegmark also equates the infinite variants of our universe in the level 1 multiverse to the infinity of quantum worlds. “The only difference between level 1 and level 3,” he says, “is where your doppelgängers reside.”
The multiverses mentioned so far demote our universe to the status of a pebble in a vast landscape, but at least they allow it to be real. Philosopher Nick Bostrom at the University of Oxford ups the ante by arguing that the universe we experience is just a simulation running on an advanced civilisation’s supercomputer.

His argument is simple. Civilisations that last will develop essentially unlimited computing power. Some of those will run “ancestor simulations”, reconstructions of their forebears or other beings. Just as millions of us play video games like The Sims, a trans-human civilisation would probably run multiple simulations, making it plausible that we are in one of them. Bostrom doubts that we will find evidence that we do or don’t live in a simulation. He argues that an advanced civilisation smart enough to create a simulation in the first place would prevent people inside from noticing a problem, or erase the evidence. Tegmark categorises this and several even more speculative forays as level 4 multiverses.
We might not be able to test or disprove a simulation, but what about other kinds of multiverses? Theorists point to several ways that other universes could have left signs we can see. For example, the bubble universes of eternal inflation can collide. One result could be annihilation. “If you’re in a bubble and the wall accelerates towards you, it’s bad news,” says Matthew Kleban, a theoretical physicist at New York University. But in some cases bubbles bounce apart, leaving a telltale smudge in the CMB. Kleban and his colleagues have calculated the details – a symmetrical spot within a particular range of sizes that stands out from its surroundings because it has a different temperature and polarisation (arxiv.org/abs/1109.3473).

Something similar has been seen in the CMB, but Kleban admits that the evidence for a collision is still weak. Still, he says, a collision could be confirmed by data from the Planck satellite, which is currently studying the CMB, or by future missions. Such a find would provoke a new Copernican revolution. “It would tell us that we’re inside a bubble that is embedded with a huge number of other bubbles with different laws of physics,” he says. “That would be a very, very important discovery.”
Quantum mechanics predicts that emerging universes start out entangled with each other. That early entanglement may leave long-lasting signs. “The moment you have a physical mechanism that shows how the universe is born, you end up with a whole series of predictions of how that universe should look at later times,” says Laura Mersini-Houghton, a cosmologist at the University of North Carolina in Chapel Hill. She and her colleagues used this approach to generate four predictions all of which, she says, have been confirmed.



One was the existence of a giant void in our universe. Data from NASA’s WMAP satellite and the Sloan Digital Sky Survey show that something similar lurks in the constellation Eridanus (New Scientist, 24 November 2007, p 34). They also predicted that the overall intensity of the CMB should be about 20 per cent less than predicted by inflation and, surprisingly, that everything in our universe is flowing in a particular direction. “That was so outrageous that nobody believed it,” Mersini-Houghton says, “but it has been confirmed” (New Scientist, 24 January 2009, p 50).

Her group’s most controversial prediction involves particle physics. Most physicists expect that collisions at the Large Hadron Collider near Geneva in Switzerland will reveal the first signs of a new symmetry of nature known as supersymmetry. Considered an essential ingredient of string theory, supersymmetry requires that every particle in the standard model has a hefty partner waiting to be discovered. Most researchers believe that the high energies at the LHC are sufficient to create these beasts.

However, Mersini-Houghton has predicted that 10,000 times more energy is needed. So far no signs of supersymmetry have appeared (New Scientist, 19 March, p 10). Although some researchers are intrigued, most are taking a wait-and-see attitude towards this ambitious application of quantum mechanics to the multiverse.

Theorists have started to explore other testable predictions that follow from various kinds of multiverses. These include whether space is flat or slightly curved, and whether there are particles and forces beyond those predicted by the standard model or supersymmetry (New Scientist, 19 July 2008, p 36).

Three decades after the concept was born, many researchers now agree that at least some kinds of multiverse stand on firm theoretical foundations, yield predictions that astronomers and particle physicists can test, and help explain why our universe is the way it is. Still, Bousso says, multiverses exist at the frontier of science. “You never know anything for sure when you’re working on the edge of knowledge,” he says. “But if I didn’t think it was by far the best bet, I wouldn’t waste my time on it.” Susskind agrees, but adds: “We’re nowhere near the end of questions. Surprises will happen.”



Tuesday 12 June 2018

THE DARKENING AGE - The Christian Destruction of the Classical World By Catherine Nixey




How Christians Destroyed the Ancient World

THE DARKENING AGE  - The Christian Destruction of the Classical World
By Catherine Nixey
Illustrated. 315 pp. Houghton Mifflin Harcourt. $28.

Vandalizing the Parthenon temple in Athens has been a tenacious tradition. Most famously, Lord Elgin appropriated the “Elgin marbles” in 1801-5. But that was hardly the first example. In the Byzantine era, when the temple had been turned into a church, two bishops — Marinos and Theodosios — carved their names on its monumental columns. The Ottomans used the Parthenon as a gunpowder magazine, hence its pockmarked masonry — the result of an attack by Venetian forces in the 17th century. Now Catherine Nixey, a classics teacher turned writer and journalist, takes us back to earlier desecrations, the destruction of the premier artworks of antiquity by Christian zealots (from the Greek zelos — ardor, eager rivalry) in what she calls “The Darkening Age.”
Using the mutilation of faces, arms and genitals on the Parthenon’s decoration as one of her many, thunderingly memorable case studies, Nixey makes the fundamental point that while we lionize Christian culture for preserving works of learning, sponsoring exquisite art and adhering to an ethos of “love thy neighbor,” the early church was in fact a master of anti-intellectualism, iconoclasm and mortal prejudice. This is a searingly passionate book. Nixey is transparent about the particularity of her motivation. The daughter of an ex-nun and an ex-monk, she spent her childhood filled with respect for the wonders of postpagan Christian culture. But as a student of classics she found the scales — as it were — falling from her eyes. She wears her righteous fury on her sleeve. This is scholarship as polemic.
Nixey writes up a storm. Each sentence is rich, textured, evocative, felt. Christian monks in silent orders summoned up pagan texts from library stores with a gagging hand gesture. The destruction of the extraordinary, frankincense-heavy temple of Serapis in Alexandria is described with empathetic detail; thousands of books from its library vanished, and the temple’s gargantuan wooden statue of the god was dismembered before being burned. One pagan eyewitness, Eunapius, remarked flintily that the only ancient treasure left unlooted from the temple was its floor.
Christians became known as those “who move that which should not be moved.” Their laudable appeal to have-nots at the bottom of the pile, both free and unfree, meant that bishops had a citizen-army of pumped-up, undereducated young men ready to rid the world of sin. Enter the parabalini, sometime stretcher-bearers, sometime assassins, who viciously flayed alive the brilliant Alexandrian mathematician and pagan philosopher Hypatia. Or the circumcellions (feared even by other Christians), who invented a kind of chemical weapon using caustic lime soda and vinegar so they could carry out acid attacks on priests who didn’t share their beliefs.
Debate — philosophically and physiologically — makes us human, whereas dogma cauterizes our potential as a species. Through the sharing of new ideas the ancients identified the atom, measured the circumference of the earth, grasped the environmental benefits of vegetarianism.
To be sure, Christians did not have a monopoly on orthodoxy, or indeed on suppression: the history of the ancient world typically makes for stomach-churning reading. Pagan philosophers who flew in the face of religious consensus also risked persecution; Socrates, we must not forget, was condemned to death on a religious charge.
But Christians did fetishize dogma. In A.D. 386 a law was passed declaring that those “who contend about religion … shall pay with their lives and blood.” Books were systematically burned. The doctrinal opinions of one of the most celebrated early church fathers, St. John Chrysostom — he of the Golden Mouth — were enthusiastically quoted in Nazi Germany 1,500 years after his death: The synagogue “is a den of robbers and a lodging for wild beasts … a dwelling of demons.”
Actions were extreme because paganism was considered not just a psychological but a physical miasma. Christianity appeared on a planet that had been, for at least 70,000 years, animist. (Asking the women and men of antiquity whether they believed in spirits, nymphs, djinns would have been as odd as asking them whether they believed in the sea.) But for Christians, the food that pagans produced, the bathwater they washed in, their very breaths were thought to be infected by demons. Pollution was said to make its way into the lungs of bystanders during animal sacrifice. And once Christianity became championed by Rome, one of the most militaristic civilizations the world has known, philosophical discussions on the nature of good and evil became martial instructions for purges and pugilism. 
Still, contrary to Nixey, there was not utter but rather partial destruction of the classical world. The vigorous debates in Byzantine cultures about whether, for example, magical texts were demonic suggest that these works continued to have influence in Christian Europe. The material culture of the time also lends nuance to Nixey’s story: Silverware and dining services in Byzantium were proudly decorated with images of the “Iliad” and “Odyssey.” And while 90 percent of all ancient literature has been lost, paganism still had a foothold on the streets.
In Constantinople, the spiritual headquarters of Eastern Christendom, the seventh-century church was still frantically trying to ban the Bacchanalian festivities that legitimized cross-dressing, mask-wearing and Bacchic adulation. I read this book while tracing the historical footprint of the Bacchic cult. On the tiny Greek island of Skyros, men and children, even today, dress as half human, half animal; they wear goat masks, and dance and drink on Bacchus’ festival days in honor of the spirit of the god. It seems that off the page there was a little more continuity than Christian authorities would like to admit.
But the spittle-flecked diatribes and enraging accounts of gruesome martyrdoms and persecution by pagans were what the church chose to preserve and promote. Christian dominance of academic institutions and archives until the late 19th century ensured a messianic slant for Western education (despite the fact that many pagan intellectuals were disparaging about the boorish, ungrammatical nature of early Christian works like the Gospels). As Nixey puts it, the triumph of Christianity heralded the subjugation of the other.
And so she opens her book with a potent description of black-robed zealots from 16 centuries ago taking iron bars to the beautiful statue of Athena in the sanctuary of Palmyra, located in modern-day Syria. Intellectuals in Antioch (again in Syria) were tortured and beheaded, as were the statues around them. The contemporary parallels glare. The early medieval author known as Pseudo-Jerome wrote of Christian extremists: “Because they love the name martyr and because they desire human praise more than divine charity, they kill themselves.” He would have found shocking familiarity in the news of the 21st century.
Nixey closes her book with the description of another Athena, in the city of her name, being decapitated around A.D. 529, her defiled body used as a steppingstone into what was once a world-renowned school of philosophy. Athena was the deity of wisdom. The words “wisdom” and “historian” have a common ancestor, a proto-Indo-European word meaning to see things clearly. Nixey delivers this ballista-bolt of a book with her eyes wide open and in an attempt to bring light as well as heat to the sad story of intellectual monoculture and religious intolerance. Her sympathy, coruscatingly, compellingly, is with the Roman orator Symmachus: “We see the same stars, the sky is shared by all, the same world surrounds us. What does it matter what wisdom a person uses to seek for the truth?”

Bettany Hughes is the author of “Istanbul: A Tale of Three Cities.” Her latest film, “Bacchus Uncovered,” was recently broadcast on BBC World. She is currently making a documentary about the worship of war, “Mars Uncovered.”



Another review, this time from the Guardian...

https://www.theguardian.com/books/2017/dec/28/the-darkening-age-the-christian-destruction-of-the-classical-world-by-catherine-nixey


The Darkening Age: The Christian Destruction of the Classical World by Catherine Nixey





The clash between the classical order and Christianity is a tale of murder and vandalism wrought by religious zealotry, evoking modern-day parallels

“The theologian,” wrote Edward Gibbon in his classic The History of the Decline and Fall of the Roman Empire, “may indulge the pleasing task of describing Religion as she descended from Heaven, arrayed in her native purity. A more melancholy duty is imposed on the historian. He must discover the inevitable mixture of error and corruption, which she contracted in a long residence upon earth, among a weak and degenerate race of beings.” Gibbon was a child of the European Enlightenment, and he viewed his task as a historian of early Christianity as a dispassionate, scientific one: to see things as they are, rather than as the pious would want them to be. The conclusions he reached were, perhaps inevitably, controversial in his day. The pre-Christian Roman empire, he believed, was characterised by “religious harmony”, and the Romans were interested more in good governance than in imposing religious orthodoxy on their many subjects. A distinctive feature of early Christianity, by contrast, was for Gibbon its “exclusive zeal for the truth of religion”, a blinkered, intolerant obsessiveness that succeeded by bullying and intimidation, and promoted a class of wide-eyed mystics. Indeed, Christian zealotry was, he thought, ultimately responsible for the fall of the Roman empire, by creating citizens contemptuous of their public duty.

This spirit permeates Catherine Nixey’s book. In her view, the standard modern picture of the Roman empire’s conversion remains, even 200 years after Gibbon, glossed by Christian triumphalism. History, she believes, has given the Church an undeservedly easy ride. Pre-Christian Rome tends to be imagined as cruel, arbitrary and punitive; it is thought to be, in her fine phrase, “a chilly, nihilistic world”. Christianity, conversely, is painted as brave, principled, kind, inclusive and optimistic. The task she sets herself – her own melancholy duty – is to rip away this veneer and expose the error and corruption of the early Church.

This is also, however, a book for the 21st century. What concerned Gibbon was the clash between faith and reason; for Nixey, the clashes are physical ones. This is, fundamentally, a study of religious violence. Her cover displays a statue of Athena deliberately damaged: its eyes have been gouged and its nose smashed, and a cross has been etched into its forehead. The story of this defacement is told in her prologue and reprised in her final words. The events happened in Palmyra in the late fourth century, when some of the oasis city’s magnificent temples were repurposed as sites of Christian worship. Her choice to begin in Palmyra is, of course, a careful one. When she speaks of the destruction wrought on the architecture of the Syrian city by “bearded, black-robed zealots”, the reader thinks not of marauding fourth-century Christian fundamentalists but of television images from recent history. “There have been,” she writes, and “there still are … those who use monotheism and its weapons to terrible ends.” What is revealing about that last sentence is not the connection she draws between savage practices in Christian late antiquity and in the name of Islamic State but the phrase “monotheism and its weapons”. Many modern commentators like to speak of religious terrorism as a horrific distortion of religious truth; for Nixey, monotheism is always weaponised and waiting only for someone to pull the trigger.

The story of the destruction of Athena is the amuse-bouche for a feast of tales of murder, vandalism, wilful destruction of cultural heritage and general joylessness. We hear of the brutal end of Hypatia, the Alexandrian philosopher, mathematician and astronomer who was murdered by a Christian crowd in the early fifth century (an event dramatised in the Spanish movie Agora). Less well known, in the anglophone world at any rate, is the case of Shenoute. A contemporary of Hypatia’s, he lived further south, in rural Egypt, where he became the abbot of the complex now known as the White Monastery (which still stands in today’s town of Sohag). Shenoute is now considered a saint in the Coptic church, but his piety manifested itself in a particularly ugly guise: he was part of a gang of thugs who would break into the houses of locals whose theological views they felt to be unsound, and smash up any property they objected to on religious grounds.

Even more than the physical violence, it is the cultural devastation that draws Nixey’s eye. Early in the book, she describes how she was brought up to think of late-antique and medieval Christians as enlightened curators of the classical heritage, diligently copying philosophical texts and poems throughout the ages so that they were saved from oblivion. Her views in this matter have evidently shifted somewhat over time. In this book, early Christians are much more likely to close down the academies, shut temples, loot and destroy artwork, forbid traditional practices and burn books. Rather than praising Christians for preserving slivers of classical wisdom, she argues, we should acknowledge how much was knowingly erased.
Where did this appetite for destruction come from? Nixey’s short answer is a simple one: demons. Many ancient Christians believed that the world we inhabit is a perilous place, crowded with malevolent supernatural beings, who sometimes manifest themselves in the form of fake gods. It is the Christian’s duty to root these out. Destroying a “pagan” statue or burning a book, then, is no more violent an act than amputating a gangrenous limb: you save the healthy whole by preventing the spread of the infection. If you think that a marble statue is possessed by a demon, then it makes a kind of sense to dig out its eyes and score a cross in its forehead. If you think, along with the North African theologian Tertullian, that “Satan and his angels have filled the whole world” and laid traps for the virtuous in the form of sensual pleasures, then avoiding the Romans’ bathhouses, dinners and spectacles is perfectly rational – as is a disdain for sexuality. The early Christian world was in a state of perpetual metaphysical war, and choosing sides inevitably meant knowing your enemies.
But demons are only half of the story. The real blame, for Nixey, lies at the door of the church fathers, whose spine-tingling sermons ramped up the polarising rhetoric of violent difference. They wove “a rich tapestry of metaphor”, construing theological opponents of all kinds as bestial, verminous, diseased and – naturally – demonic. It was language itself – the forceful, lurid language of a handful of elite males – that stoked the fires of Christian rage against its enemies, fires that blazed for a millennium: “the intellectual foundations for a thousand years of theocratic oppression were being laid.”
Nixey has a great story to tell, and she tells it exceptionally well. As one would expect from a distinguished journalist, well-turned phrases leap from every page. She has an expert eye for arresting details, and brings characters and scenarios to life without disguising anything of the strangeness of the world she describes. Most of all, she navigates through these tricky waters with courage and skill. Writing critically about Christian history is doubly difficult: not only are the ancient sources complex, scattered and disputed, but also there are legions of modern readers waiting to pounce on the tiniest perceived error, infelicity or offence.
If there is a weakness in this book, it stems precisely from its Gibbonian roots. This is, fundamentally, a restatement of the Enlightenment view that the classical heritage was essentially benign and rational, and the advent of Christianity marked civilisation’s plunge into darkness (until it was fished out by Renaissance humanists). Nixey studied classics, and her affection for classical culture runs deep: she writes warmly about the sophisticated philosophies of the stoics and epicureans, the buoyantly sexual (and not infrequently sexist) poetry of Catullus and Ovid, the bluff bonhomie of Horace and the unsentimental pragmatism of men of affairs such as Cicero and Pliny. When she speaks of classical culture and religion she tends to use such descriptions as “fundamentally liberal and generous” and “ebullient”. How, then, do we explain the Romans’ unfortunate habit of killing Christians? Nixey thinks, like Gibbon, that they were interested, principally, in good governance and in maintaining the civic order that the unruly Christians imperilled. Ancient accounts, she argues, show imperial officials who “simply do not want to execute”; rather, they are forced into it by the Christians’ perverse lust for martyrdom. Now, martyrdom certainly has a strangely magnetic allure, as we know from our own era, but the Romans were hardly bemused, passive bystanders in all of this. There is something of the zero-sum game at work here: in seeking to expose the error and corruption of the early Christian world, Nixey comes close to veiling the pre-Christian Romans’ own barbarous qualities.
But this book is not intended as a comprehensive history of early Christianity and its complex, embattled relationship to the Roman empire, and it would be unfair to judge it against that aim. It is, rather, a finely crafted, invigorating polemic against the resilient popular myth that presents the Christianisation of Rome as the triumph of a kinder, gentler politics. On those terms, it succeeds brilliantly.