Monday 1 October 2012

REALITY - A universe of information

By Michael Brooks

Published in New Scientist Magazine issue 2884

Published 29 September 2012

WHATEVER kind of reality you think you’re living in, you’re probably wrong. The universe is a computer, and everything that goes on in it can be explained in terms of information processing.

The connection between reality and computing may not be immediately obvious, but strip away the layers and that is exactly what some researchers think we find. We think of the world as made up of particles held together by forces, for instance, but quantum theory tells us that these are just a mess of fields we can only properly describe by invoking the mathematics of quantum physics.

That’s where the computer comes in, at least if you think of it in conceptual terms as something that processes information rather than as a boxy machine on your desk. “Quantum physics is almost phrased in terms of information processing,” says Vlatko Vedral of the University of Oxford. “It’s suggestive that you will find information processing at the root of everything.”

Information certainly has a special place in quantum theory. The famous uncertainty principle – which states that you can’t simultaneously know the momentum and position of a particle – comes down to information. As does entanglement, where quantum objects share properties and exchange information irrespective of the physical distance between them.

In fact, every process in the universe can be reduced to interactions between particles that produce binary answers: yes or no, here or there, up or down. That means nature, at its most fundamental level, is simply the flipping of binary digits or bits, just like a computer. The result of the myriad bit flips is manifest in what we perceive as the ongoing arrangement, rearrangement and interaction of atoms – in other words, reality.

According to Ed Fredkin of the Massachusetts Institute of Technology, if we could dig into this process we would find that the universe follows just one law, a single information-processing rule that is all you need to build a cosmos. In Fredkin’s view, this would be some form of “if – then” procedure; the kind of rule used in traditional computing to manipulate the bits held by transistors on a chip and operate the logic gates, but this time applied to the bits of the universe.
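Fredkin’s picture is essentially that of a cellular automaton, where one local “if – then” lookup, applied everywhere at once, generates all subsequent structure. As a loose illustration (the specific rule and grid here are my own choices, not anything Fredkin proposed), a one-dimensional automaton can be sketched in a few lines of Python:

```python
# A minimal sketch of "one rule" computation: a one-dimensional cellular
# automaton. The rule number (110 here) is an illustrative choice.
def step(cells, rule=110):
    """Apply one update: each cell's next state is an 'if-then' lookup
    on the 3-bit neighbourhood (left, self, right), wrapping at the edges."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single "on" bit and watch structure emerge from bit flips.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Run it and a single bit grows into an intricate pattern: complex behaviour from nothing but binary flips under one fixed rule.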

Vedral and others think it’s a little more complex than that. Because we can reduce everything in the universe to entities that follow the laws of quantum physics, the universe must be a quantum computer rather than the classical type we are familiar with.

One of the attractions of this idea is that it can supply an answer to the question “why is there something rather than nothing?”. The randomness inherent in quantum mechanics means that quantum information – and by extension, a universe – can spontaneously come into being, Vedral says.

For all these theoretical ideas, proving that the universe is a quantum computer is a difficult task. Even so, there is one observation that supports the idea that the universe is fundamentally composed of information. In 2008, the GEO 600 gravitational wave detector in Hannover, Germany, picked up an anomalous signal suggesting that space-time is pixellated. This is exactly what would be expected in a “holographic” universe, where 3D reality is actually a projection of information encoded on the two-dimensional surface of the boundary of the universe (New Scientist, 17 January 2009, p 24).

“IN 2008, A SIGNAL SUGGESTED THAT SPACE-TIME IS PIXELLATED”

This bizarre idea arose from an argument over black holes. One of the fundamental tenets of physics is that information cannot be destroyed, but a black hole appears to violate this by swallowing things that contain information then gradually evaporating away. What happens to that information was the subject of a long debate between Stephen Hawking and several of his peers. In the end, Hawking lost the debate, conceding that the information is imprinted on the event horizon that defines the black hole’s boundary and escapes as the black hole evaporates. This led theoretical physicists Leonard Susskind and Gerard ’t Hooft to propose that the entire universe could also hold information at its boundary – with the consequence that our reality could be the projection of that information into the space within the boundary. If this conjecture is true, reality is like the image of Princess Leia projected by R2-D2 in Star Wars: a hologram.

REALITY - How does consciousness fit in?

By Michael Brooks, author of The Secret Anarchy of Science (Profile/Overlook)

From New Scientist Magazine issue 2884

Published 29 September 2012

DESCARTES might have been onto something with “I think therefore I am”, but surely “I think therefore you are” is going a bit far? Not for some of the brightest minds of 20th-century physics as they wrestled mightily with the strange implications of the quantum world.

According to prevailing wisdom, a quantum particle such as an electron or photon can only be properly described as a mathematical entity known as a wave function. Wave functions can exist as “superpositions” of many states at once. A photon, for instance, can circulate in two different directions around an optical fibre; or an electron can simultaneously spin clockwise and anticlockwise or be in two positions at once.

When any attempt is made to observe these simultaneous existences, however, something odd happens: we see only one. How do many possibilities become one physical reality?

This is the central question in quantum mechanics, and has spawned a plethora of proposals, or interpretations. The most popular is the Copenhagen interpretation, which says nothing is real until it is observed, or measured. Observing a wave function causes the superposition to collapse.
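The collapse the Copenhagen interpretation describes is easy to caricature in code. Here is a minimal sketch, assuming a two-state system in an equal superposition and using the Born rule (the probability of each outcome is the squared magnitude of its amplitude); all names are my own:

```python
import random

# A toy "measurement" of a superposed qubit, a|0> + b|1>.
# The Born rule: outcome 0 occurs with probability |a|^2.

def measure(amp0, amp1):
    """Collapse a superposition to a single definite outcome."""
    p0 = abs(amp0) ** 2
    return 0 if random.random() < p0 else 1

# An equal superposition (amplitudes 1/sqrt(2)): each run yields exactly
# one outcome, but many runs split roughly 50/50.
a = b = 2 ** -0.5
counts = [0, 0]
for _ in range(10000):
    counts[measure(a, b)] += 1
print(counts)
```

Each individual call returns one definite answer, never a blend – which is the puzzle: where did the other possibility go?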
However, Copenhagen says nothing about what exactly constitutes an observation. John von Neumann broke this silence and suggested that observation is the action of a conscious mind. It’s an idea also put forward by Max Planck, the founder of quantum theory, who said in 1931, “I regard consciousness as fundamental. I regard matter as derivative from consciousness.”

That argument relies on the view that there is something special about consciousness, especially human consciousness. Von Neumann argued that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is somehow different. It is thus able to select out one of the quantum possibilities on offer, making it real – to that mind, at least.

Henry Stapp of the Lawrence Berkeley National Laboratory in California is one of the few physicists who still subscribe to this notion: we are “participating observers” whose minds cause the collapse of superpositions, he says. Before human consciousness appeared, there existed a multiverse of potential universes, Stapp says. The emergence of a conscious mind in one of these potential universes, ours, gives it a special status: reality.

There are many objectors. One problem is that many of the phenomena involved are poorly understood. “There’s a big question in philosophy about whether consciousness actually exists,” says Matthew Donald, a philosopher of physics at the University of Cambridge. “When you add on quantum mechanics it all gets a bit confused.”

Donald prefers an interpretation that is arguably even more bizarre: “many minds”. This idea – related to the “many worlds” interpretation of quantum theory, which has each outcome of a quantum decision happen in a different universe – argues that an individual observing a quantum system sees all the many states, but each in a different mind. These minds all arise from the physical substance of the brain, and share a past and a future, but cannot communicate with each other about the present.

Though it sounds hard to swallow, this and other approaches to understanding the role of the mind in our perception of reality are all worthy of attention, Donald reckons. “I take them very seriously,” he says.

REALITY - Is it a simulation?


By Richard Webb

From New Scientist Magazine issue 2884

Published 29 September 2012 
                  

BEFORE cursing the indolence of today’s youth, absorbed in the ever-more intricate virtual realities of video games rather than scrumping the ripe fruits of real reality outside, consider this. Perhaps they are actually immersing themselves in our future – or even our present.

The story of our recent technological development has been one of ever-increasing computational power. At some future time we are unlikely to be content with constructing tightly circumscribed game worlds. We will surely begin to simulate everything, including the evolutionary history that led to where we are.

Flicking the switch on such a world simulation could have fundamental ramifications for our concept of reality, according to philosopher Nick Bostrom of the University of Oxford. If we can do it, that makes it likely it has been done before. In fact, given the amount of computing power advanced civilisations are likely to have at their fingertips, it will probably have been done a vast number of times.



So switching on our own simulation will tell us that we are almost undoubtedly in someone else’s already. “We would have to think we are one of the simulated people, rather than one of the rare, exceptional non-simulated people,” says Bostrom.


“SWITCHING ON A SIMULATION WILL TELL US THAT WE’RE IN SOMEONE ELSE’S ALREADY”

Probably, anyway. There has to be a basement level of reality somewhere, in which the “master” simulation exists. It is possible that we live in that reality. Depending on its laws of physics, the basement’s computing resources are likely to be finite. And those resources must support not only the master simulation, but any simulations people in that simulation decide to create – perhaps limiting their number, and thus increasing the chances that ours is the base reality.
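Bostrom’s argument is, at heart, simple counting. A toy version, with the number of simulations per civilisation and the nesting depth as invented parameters:

```python
# A toy version of Bostrom's counting argument. Assume one "basement"
# civilisation, and that every civilisation (real or simulated) runs
# n_sims ancestor simulations, nested `depth` levels deep.
# All numbers here are illustrative assumptions.

def odds_of_being_real(n_sims, depth):
    """Fraction of all observers who live in the base reality."""
    simulated = sum(n_sims ** level for level in range(1, depth + 1))
    return 1 / (1 + simulated)

# Even modest numbers make base reality a long shot:
print(odds_of_being_real(n_sims=1000, depth=1))  # 1 in 1001
```

Capping the depth or shrinking n_sims – as limited basement computing resources would – is exactly what pushes those odds back in our favour.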

Either way, our ability to check our own status, and that of the fundamental physical laws we discover, is limited. If we are in the basement, we have nowhere to drill down to, and if we aren’t, whether we can depends on the rules put in place by those who built the simulation. So even if we do end up constructing what could be reality for someone else, we’ll probably never know for sure where we ourselves stand. Who’s to say video games are the lesser reality?


REALITY - How can we know it exists?




Proving whether or not reality is an illusion is surprisingly difficult



From New Scientist Magazine issue 2884

Published 29 September 2012



PHILOSOPHERS are not being rude when they describe the approach most of us take as naïve realism. After all, when they cross the street on the way to work, they tend to accept implicitly – as we all do – that there is an external reality that exists independently of our observations of it. But at work, they have to ask: if there is, how can we know?

In other words, the question “what exists?” reduces, for what in philosophy passes for practical purposes, to questions such as “what do we mean by ‘know’?”

Plato had a go at it 2400 years ago, defining “knowledge” as “justified true belief”. But testing the justification or the truth of beliefs traces back to our perceptions, and we know these can deceive us.





Two millennia later, René Descartes decided to work out what he was sure he knew. Legend has it that he climbed into a large stove to do so in warmth and solitude. He emerged declaring that the only thing he knew was that there was something that was doubting everything.

The logical conclusion of Descartes’s doubt is solipsism, the conviction that one’s own consciousness is all there is. It’s an idea that is difficult to refute.

“IT IS DIFFICULT TO REFUTE THE IDEA THAT CONSCIOUSNESS IS ALL THERE IS”

Samuel Johnson’s notoriously bluff riposte to the questioning of the reality of objects – “I refute it thus!”, kicking a stone – holds no philosophical water. As Descartes pointed out a century earlier, it is impossible to know we are not dreaming.

Nor has anyone had much luck making sense of dualism – the idea that mind and matter are distinct. One response is that there is only matter, making the mind an illusion that arises from neurons doing their thing. The opposite position is “panpsychism”, which attributes mental properties to all matter. As the astrophysicist Arthur Eddington expressed it in 1928: “the stuff of the world is mind-stuff… not altogether foreign to the feelings in our consciousness”.

Quite separately, rigorous logicians such as Harvard’s Willard Van Orman Quine abandoned the search for a foundation of reality and took “coherentist” positions. Let go of the notion of a pyramid of knowledge, they argued: think instead of a raft built out of our beliefs, a seaweedy web of statements about perceptions and statements about statements, not “grounded” in anything but hanging together and solid enough to set sail upon. Or even, possibly, to be a universe.

This idea is circular, and it’s cheating, say critics of a more foundationist bent. It leads back to the suspicion that there actually is no reality independent of our observations. But if there is – how can we know?

REALITY - Is everything made of numbers?

Amanda Gefter is a writer and New Scientist consultant based in Boston, Massachusetts

From New Scientist Magazine issue 2884

Published 29 September 2012


WHEN Albert Einstein finally completed his general theory of relativity in 1916, he looked down at the equations and discovered an unexpected message: the universe is expanding.

Einstein didn’t believe the physical universe could shrink or grow, so he ignored what the equations were telling him. Thirteen years later, Edwin Hubble found clear evidence of the universe’s expansion. Einstein had missed the opportunity to make the most dramatic scientific prediction in history.

How did Einstein’s equations “know” that the universe was expanding when he did not? If mathematics is nothing more than a language we use to describe the world, an invention of the human brain, how can it possibly churn out anything beyond what we put in? “It is difficult to avoid the impression that a miracle confronts us here,” wrote physicist Eugene Wigner in his classic 1960 paper “The unreasonable effectiveness of mathematics in the natural sciences” (Communications on Pure and Applied Mathematics, vol 13, p 1).


The prescience of mathematics seems no less miraculous today. At the Large Hadron Collider at CERN, near Geneva, Switzerland, physicists recently observed the fingerprints of a particle that was arguably discovered 48 years ago lurking in the equations of particle physics.

How is it possible that mathematics “knows” about Higgs particles or any other feature of physical reality? “Maybe it’s because math is reality,” says physicist Brian Greene of Columbia University, New York. Perhaps if we dig deep enough, we would find that physical objects like tables and chairs are ultimately not made of particles or strings, but of numbers.

“These are very difficult issues,” says philosopher of science James Ladyman of the University of Bristol, UK, “but it might be less misleading to say that the universe is made of maths than to say it is made of matter.”

Difficult indeed. What does it mean to say that the universe is “made of mathematics”? An obvious starting point is to ask what mathematics is made of. The late physicist John Wheeler said that the “basis of all mathematics is 0 = 0”. All mathematical structures can be derived from something called “the empty set”, the set that contains no elements. Say this set corresponds to zero; you can then define the number 1 as the set that contains only the empty set, 2 as the set containing the sets corresponding to 0 and 1, and so on. Keep nesting the nothingness like invisible Russian dolls and eventually all of mathematics appears.
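Wheeler’s construction can even be carried out on a computer. A small sketch using Python’s frozenset to play the role of the empty set (the function name is my own):

```python
# Von Neumann's encoding of the natural numbers as nested sets,
# built up from nothing, exactly as described above: 0 is the empty set,
# and each number is the set of all the numbers before it.

def number(n):
    """Return the set-theoretic encoding of n, i.e. {0, 1, ..., n-1}."""
    s = frozenset()                    # 0 = the empty set
    for _ in range(n):
        s = frozenset(list(s) + [s])   # successor: n+1 = n ∪ {n}
    return s

zero, one, two = number(0), number(1), number(2)
print(len(zero), len(one), len(two))  # each number contains that many elements
```

So 2 really is “the set containing the sets corresponding to 0 and 1”, and the whole tower rests on frozenset() – nothing at all.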

Mathematician Ian Stewart of the University of Warwick, UK, calls this “the dreadful secret of mathematics: it’s all based on nothing” (New Scientist, 19 November 2011, p 44). Reality may come down to mathematics, but mathematics comes down to nothing at all.

That may be the ultimate clue to existence – after all, a universe made of nothing doesn’t require an explanation. Indeed, mathematical structures don’t seem to require a physical origin at all. “A dodecahedron was never created,” says Max Tegmark of the Massachusetts Institute of Technology. “To be created, something first has to not exist in space or time and then exist.” A dodecahedron doesn’t exist in space or time at all, he says – it exists independently of them. “Space and time themselves are contained within larger mathematical structures,” he adds. These structures just exist; they can’t be created or destroyed.

That raises a big question: why is the universe only made of some of the available mathematics?

“There’s a lot of math out there,” Greene says. “Today only a tiny sliver of it has a realisation in the physical world. Pull any math book off the shelf and most of the equations in it don’t correspond to any physical object or physical process.”

It is true that seemingly arcane and unphysical mathematics does, sometimes, turn out to correspond to the real world. Imaginary numbers, for instance, were once considered totally deserving of their name, but are now used to describe the behaviour of elementary particles; non-Euclidean geometry eventually showed up as gravity. Even so, these phenomena represent a tiny slice of all the mathematics out there.

Not so fast, says Tegmark. “I believe that physical existence and mathematical existence are the same, so any structure that exists mathematically is also real,” he says.

“PHYSICAL EXISTENCE AND MATHEMATICAL EXISTENCE ARE ONE AND THE SAME”


So what about the mathematics our universe doesn’t use? “Other mathematical structures correspond to other universes,” Tegmark says. He calls this the “level 4 multiverse”, and it is far stranger than the multiverses that cosmologists often discuss. Their common-or-garden multiverses are governed by the same basic mathematical rules as our universe, but Tegmark’s level 4 multiverse operates with completely different mathematics.

All of this sounds bizarre, but the hypothesis that physical reality is fundamentally mathematical has passed every test. “If physics hits a roadblock at which point it turns out that it’s impossible to proceed, we might find that nature can’t be captured mathematically,” Tegmark says. “But it’s really remarkable that that hasn’t happened. Galileo said that the book of nature was written in the language of mathematics – and that was 400 years ago.”

If reality isn’t, at bottom, mathematics, what is it? “Maybe someday we’ll encounter an alien civilisation and we’ll show them what we’ve discovered about the universe,” Greene says. “They’ll say, ‘Ah, math. We tried that. It only takes you so far. Here’s the real thing.’ What would that be? It’s hard to imagine. Our understanding of fundamental reality is at an early stage.”

REALITY - The Definition

From New Scientist Magazine issue 2884, published 29 September 2012


Jan Westerhoff is a philosopher at the University of Durham and the University of London's School of Oriental and African Studies, both in the UK, and author of Reality: A very short introduction (Oxford University Press, 2011).

WHAT DO we actually mean by reality? A straightforward answer is that it means everything that appears to our five senses – everything that we can see, smell, touch and so forth. Yet this answer ignores such problematic entities as electrons, the recession and the number 5, which we cannot sense but which are very real. It also ignores phantom limbs and illusory smells. Both can appear vividly real, but we would like to say that these are not part of reality.

We could tweak the definition by equating reality with what appears to a sufficiently large group of people, thereby ruling out subjective hallucinations. Unfortunately there are also hallucinations experienced by large groups, such as a mass delusion known as koro, mainly observed in South-East Asia, which involves the belief that one’s genitals are shrinking back into one’s body. Just because sufficiently many people believe in something does not make it real.

Another possible mark of reality we could focus on is the resistance it puts up: as the science fiction writer Philip K. Dick put it, reality is that which, if you stop believing in it, does not go away. Things we just make up yield to our wishes and desires, but reality is stubborn. Just because I believe there is a jam doughnut in front of me doesn’t mean there really is one. But again, this definition is problematic.

Things that we do not want to regard as real can be stubborn too, as anyone who has ever been trapped in a nightmare knows. And some things that are real, such as stock markets, are not covered by this definition because if everyone stopped believing in them, they would cease to exist.

There are two definitions of reality that are much more successful. The first equates reality with a world without us, a world untouched by human desires and intentions. By this definition, a lot of things we usually regard as real – languages, wars, the financial crisis – are nothing of the sort. Still, it is the most solid one so far because it removes human subjectivity from the picture.

The second equates reality with the most fundamental things that everything else depends on. In the material world, molecules depend on their constituent atoms, atoms on electrons and a nucleus, which in turn depends on protons and neutrons, and so on. In this hierarchy, every level depends on the one below it, so we might define reality as made up of whatever entities stand at the bottom of the chain of dependence, and thus depend on nothing else.

This definition is even more restrictive than “the world without us” since things like Mount Everest would not count as part of reality; reality is confined to the unknown foundation on which the entire world depends. Even so, when we investigate whether something is real or not, these final two definitions are what we should have in mind.

REALITY - Built on emptiness?


By Valerie Jamieson

From New Scientist Magazine issue 2884, published 29 September 2012


Leaving aside the question of whether your senses can be trusted, what are you actually kicking? When it boils down to it, not a lot. Science needs remarkably few ingredients to account for a rock: a handful of different particles, the forces that govern their interactions, plus some rules laid down by quantum mechanics.

This seems like a solid take on reality, but it quickly starts to feel insubstantial. If you take a rock apart, you’ll find that its basic constituents are atoms – perhaps 1000 trillion trillion of them, depending on the rock’s size. Atoms, of course, are composed of smaller subatomic particles, namely protons and neutrons – themselves built of quarks – and electrons. Otherwise, though, atoms (and hence rocks) are mostly empty space. If an atom were scaled up so that its nucleus was the size of the Earth, the distance to its closest electrons would be 2.5 times the distance between the Earth and the sun. In between is nothing at all. If so much of reality is built on emptiness, then what gives rocks and other objects their form and bulk?
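The scale claim checks out with rough textbook figures. The radii below are approximate values, taking hydrogen as the atom, and are my own choice of illustration:

```python
# Checking the "nucleus the size of Earth" claim with round numbers.
proton_radius = 0.87e-15   # metres (proton charge radius, approx.)
bohr_radius   = 5.29e-11   # metres (electron's most likely distance in hydrogen)
earth_radius  = 6.37e6     # metres
earth_sun     = 1.496e11   # metres (one astronomical unit)

# Scale the nucleus up to Earth size, then scale the electron's orbit
# by the same factor.
scale = earth_radius / proton_radius
electron_distance = bohr_radius * scale
print(electron_distance / earth_sun)  # roughly 2.5 Earth-sun distances
```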


Physics has no problem answering this question: electrons. Quantum rules dictate that no two electrons can occupy the same quantum state. The upshot of this is that, no matter how hard you try, you cannot cram two atoms together into the same space. “Electrons do all the work when it comes to the structure of matter we see all around us,” says physicist Sean Carroll at the California Institute of Technology in Pasadena.

That’s not to say the nucleus is redundant. Most of the mass of an atom comes from protons and neutrons and the force binding them together, which is carried by particles called gluons.

And that, essentially, is that. Electrons, quarks (mostly of the up and down variety) and gluons account for most of the ordinary stuff around us.

But not all. Other basic constituents of reality exist too – 17 in total, which together comprise the standard model of particle physics (see illustration). The model also accounts for the mirror world of antimatter with a complementary set of antiparticles.

Some pieces of the standard model are commonplace, such as photons of light and the various neutrinos streaming through us from the sun and other sources. Others, though, do not seem to be part of everyday reality, including the top and bottom quarks and the heavy, electron-like tau particle. “On the face of it, they don’t play a role,” says Paul Davies of Arizona State University in Tempe. “Deep down, though, they may all link up.”

That’s because the standard model is more than a roll call of particles. Its foundations lie in symmetry and group theory, one example of the mysterious connections between reality and mathematics (see “Reality: Is everything made of numbers?“).

The standard model is arguably even stranger for what it doesn’t include. It has nothing to say about the invisible dark matter that seems to make up most of the matter in the universe. Nor does it account for dark energy. These are serious omissions when you consider that dark matter and dark energy together comprise about 96 per cent of the universe. It is also totally unclear how the standard model relates to phenomena that seem to be real, such as time and gravity.

So the standard model is at best a fuzzy approximation, encompassing some, but not all, of what seems to comprise physical reality, plus bits and pieces that do not. Most physicists would agree that the standard model is in serious need of an overhaul. It may be the best model we have of reality, but it is far from the whole story.

Friday 22 June 2012

Quantum mysticism


Quantum mysticism is a set of metaphysical beliefs and associated practices that seek to relate consciousness, intelligence, spirituality, or mystical world-views to the ideas of quantum mechanics and its interpretations.
In other words... pseudoscience.

A notable researcher in this field is Dean Radin. In this paper, Radin tries to show that consciousness can cause the collapse of the quantum wavefunction. The double slit experiment is normally used to show how the collapse of a quantum wavefunction is caused when individual atoms are measured by a non-conscious measurement device. Radin uses volunteers who try to influence the outcome of the experiment by using their thoughts.

(Note that this experiment is trying to prove something different to the false claim from religionists that the double slit experiment shows conscious observation creates or influences matter).

Radin's idea is interesting, and the experiment is better than most one finds in pseudoscience, but unfortunately the results are inconclusive.


Here's a review of Radin's paper.


Unfortunately this paper again does not report the actual physical outcome. They don’t report how much the subjects were able to influence the pattern. What seems clear is that none of them was able to alter the pattern in a way that stood out above the noise.


It would have been interesting to know if the measurements were any more precise than those conducted by Jeffers and Ibison. If their apparatus is better, and the effect still couldn’t be clearly measured, then that would suggest confirmation that Ibison’s result was just chance.


They calculate that, on average, the pattern in periods where subjects paid attention was slightly different from the pattern when they did not pay attention. That is valid in principle because you can get a good measurement with a bad apparatus by simply repeating the process a lot of times.
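That point is just the statistics of averaging: the uncertainty in the mean of n noisy readings shrinks roughly as 1/√n, so enough repetitions can compensate for a bad apparatus. A quick sketch with invented numbers:

```python
import random

# Why repetition can substitute for a better apparatus: averaging n noisy
# readings shrinks the uncertainty roughly as 1/sqrt(n).
random.seed(1)
true_value = 0.5
noise_sd = 10.0          # a deliberately terrible apparatus

def mean_of(n):
    """Average n readings, each contaminated by Gaussian noise."""
    return sum(true_value + random.gauss(0, noise_sd) for _ in range(n)) / n

for n in (10, 1000, 100000):
    print(n, round(mean_of(n), 3))
```

The averages home in on the true value as n grows, which is why pooling many runs is valid in principle – though, as noted, a physicist would still rather fix the apparatus.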


However, any physicist or engineer would try their utmost to improve that apparatus rather than rely on repetitions.


On the whole, this is not handled like a physics experiment but more like one in social science.


Most importantly, the effect size reported is not about any physically measurable change to the interference pattern. It is simply a measure for how much the result deviated from “chance expectation”. That’s not exactly what should be of top interest.


The paper reports six experiments. To give credit where credit is due: some would have pooled the data and pretended to have run fewer but larger experiments to make the results seem more impressive. That’s not acceptable, of course, but it is still done (e.g. by Daryl Bem).


Results and Methods


The first four experiments presented all failed to reach a significant result, even by the loose standards common in social science. However, they all pointed somewhat in the right direction which might be considered encouraging enough to continue.


Among these experiments there were two ill-conceived attempts to identify other factors that might influence the result.


The first idea is that somewhat regular ups and downs in the outcome measure could have coincided with periods of attention and no attention. I can’t stress enough that this would have been better addressed by trying to get the apparatus to behave.


Instead, Radin performs a plainly bizarre statistical analysis. I’m sure this was thought up by him rather than a co-author because it is just his style.


Basically, he takes out all the big ups and downs. So far, so good: this should indeed remove any spurious ups and downs coming from within the apparatus. But wait – it should also remove any real effect!


Radin, however, is satisfied with still getting a positive result even when there is nothing left that could cause a true positive result. The “positive” result Radin gets is obviously a meaningless coincidence that almost certainly would not repeat in any of the other experiments. And indeed, he reports the analysis only for this one experiment.


Make no mistake here, once a method for such an analysis has been implemented on the computer for one set of data, it takes only seconds to perform it on any other set of data.


The second attempt concerns the possibility that warmth might have affected the result. A good way to test this is probably to introduce heat sources into the room and see how that affects the apparatus.


What is done is quite different. Four thermometers are placed in the room while an experiment is conducted. The idea seems to have been that if the room gets warmer, this indicates that warmth may have been responsible. Unfortunately, since you don’t know if you are measuring at the right places, you can’t conclude that warmth is not responsible if you don’t find any. Besides, it might not be a steady increase you are looking for. In short, you don’t know if your four thermometers could pick up anything relevant, or how to recognize it if they did.


Conversely, even if the room got warmer with someone in it, this would not necessarily affect the measurement adversely.


In any case, temperature indeed seemed to increase slightly. Why the same temperature measurements were not conducted in the other experiments, or why the possible temperature influence was not investigated further, is unclear to me. They believe this should work, so why don’t they continue with it?


The last two experiments were somewhat more elaborate. They were larger, comprising about 50 subjects rather than about 30, and took an EEG of subjects. The fifth experiment is the one success in the lot insofar as it reports a significant result.


Conclusion


If you have read the first part of this series then you have encountered a mainstream physics article that studied how the thermal emission of photons affects the interference pattern. What that paper shares with this one is that both are interested in how a certain process affects the interference pattern.


And yet the papers could hardly be more different. The mainstream paper contains extensive theoretical calculations that place the results in the context of known physics. The fringe paper has no such calculations and relies mainly on pop science accounts of quantum physics.


The mainstream paper presents a clear and unambiguous change in the interference pattern. Let’s look at it again.






The dots are the particles and the lines mark the theoretically expected interference patterns fitted to the actual results. As you can see the dots don’t exactly follow the lines. That’s just unavoidable random variation due to any number of reasons. And yet the change in the pattern can be clearly seen.


From what is reported in Radin’s paper we can deduce that the change associated with attention was not even remotely as clean. In fact, the patterns must have been virtually identical the whole time.


That means that if there is a real effect in Radin’s paper, it is tiny. So tiny that it cannot properly be seen with the equipment they used.


That is hardly a surprising result. If paying attention to something were able to change its quantum behavior in a noticeable way, then this should have been noticed long ago. Careful experiments would be plagued by inexplicable noise, depending on what the experimenters were thinking about.


The “positive” result that he reports suffers from the same problem as virtually all positive results in parapsychology, and also many in certain recognized scientific disciplines. It may simply be due to kinks in the social science methodology employed.


Some of the weirdness in the paper, not all of which I have mentioned, leaves me with no confidence that there is anything more than “flexible methods” going on here.


Poor Quantum Physics


Radin believes that a positive result supports “consciousness causes collapse”.  He bemoans a lack of experimental tests of that idea and attributes it, quite without justification, to a “taboo” against including consciousness in physics.


Thousands upon thousands of physicists, and many times more students, are supposed to have simply refused, out of some desire to conform, to do a simple and obvious experiment. I think it says a lot about Radin and the company he keeps that he has no problem believing that.


I don’t know about you, my dear readers, but in such a situation I would have thought differently. Either all those people who should know more about the subject than me have their heads up their behinds, or maybe it is just me. I would have wondered whether there was perhaps something I was missing, found out what it was, and avoided making an ass of myself. Then again, I would also have avoided (and have avoided) book deals, the adoration of many fans and the like, all of which Radin secured for himself.


So who’s to say that reasonable thinking is actually the same as sensible thinking?


But back to the physics. As is obvious once one manages to find the relevant literature, conscious awareness of any information is not necessary to affect an interference pattern. Moreover, wave function collapse is not necessary to explain this. Both of these points should be plain from the mainstream paper mentioned here.


Outlook


My advice to anyone who thinks there is something to this is to build a more sensitive apparatus and/or to calibrate it better. If the effect still doesn’t rise above the noise, it probably wasn’t there in the first place. If it does, however, future research becomes much easier.


For example, if tiny magnetic fields influence this, as Radin suggests, that could be learned in a few days.


Unfortunately, it does not appear that this is the way Dean Radin and his colleagues are going about it, but I’ll refrain from comment until I have more solid information.


But at least they are continuing the line of investigation. They deserve some praise for that. It is all too often the case that parapsychologists present a supposedly awesome, earth-shattering result and then move on to something completely different.


Update


I omitted to comment on a lot of details in the second paper to keep things halfway brief. In doing so I overlooked one curiosity that really should be mentioned.


The fourth experiment is “retrocausal”. That means, in this case, that the double-slit part of the experiment was run and recorded three months before the humans viewed the record and tried to influence it. The retrocausality in itself is not really such an issue. Time is a curious thing in modern physics and not at all like we intuit it.


The curious thing is that it implies that the entire recording was in a state of quantum superposition for a whole three months. Getting macroscopic objects into such states, and keeping them there, is enormously difficult. It certainly does not just happen on its own. What they claim to have done is simply impossible as far as mainstream quantum physics is concerned: not just in theory, it cannot be done in practice, despite physicists trying really hard.

Tuesday 5 June 2012

The Fine Tuning Myth



Fine Tuning and the Argument from Probability



In his 1995 book, The Creator and the Cosmos, physicist Hugh Ross listed thirty-three characteristics a planet must have to support life. He also estimated the probability that such a combination would be found in the universe as "much less than one in a million trillion."

He concluded that only "divine design" could account for human life.

However, Ross presented no estimate of the probability of divine design. Perhaps it is even lower! Ross and others who attempt to prove the existence of God on the basis of probabilities make a fundamental logical error: when using probabilities to decide between two or more possibilities, you must have a number for each possibility in order to compare them. And in this vast universe, highly unlikely events happen every day.

In a 2004 book called The Privileged Planet, astronomer Guillermo Gonzalez and theologian Jay Richards carried the notion further, asserting that our place in the cosmos is not only special but also designed for discovery. They contend that conditions on Earth, particularly those that make human life possible, are also optimised for scientific investigation, and that this constitutes "a signal revealing a universe so skilfully created for life and discovery that it seems to whisper of an extraterrestrial intelligence immeasurably more vast, more ancient, and more magnificent than anything we've been willing to expect or imagine".

Read why these arguments are misleading here  (from page 144)

Jason Waller provides a version of the argument for fine tuning that goes like this...

1) The universe is fine-tuned for the evolution of intelligent life.

2) This fine-tuning is most likely the result of either (i) chance or (ii) Minimal Theism.

3) The likelihood that the fine-tuning is the result of chance is astronomically low (something like 1 in 10^50).

4) The likelihood of Minimal Theism (while itself pretty low) is not astronomically low (that is, it is much higher than 1 in 10^50).

5) Therefore, Minimal Theism is probably true.

Some issues with Jason's logic:

The probability that our universe exists as it does is 1.

But if we're trying to figure out what the probability of the universe existing was before it came into existence (which is the only meaningful way to address this question) then it can't be calculated because we have no idea what variables led up to its creation. We cannot (as far as we know) look back to "before" the big bang and look at all the parameters involved, so there's no way to put a value on how likely any of them are, let alone all of them happening as they did. Waller's number "1 in 10 to the 50" is entirely made up.

It is possible that our universe is the only possible universe - that no matter what happened before the big bang, the only way the universe could shake out would be exactly as it has. It's hard to imagine, for example, that pi could be set to any other value because its value comes directly from its abstract definition. Perhaps all the fundamental constants are the same way. They couldn't be set otherwise.

It's equally possible (because we can't put values on these possibilities) that our universe is nearly infinitely unlikely: every possible parameter we can think of could be set at any possible value, and the chance of ours arising over the others is statistically zero (far smaller than the 10^-50 figure Waller uses). In fact, if any one parameter could take values on a sliding scale (say, the speed of light could have been set at any value above 0 and below infinity), then the probability of the universe existing with c set as it is is infinitely low (unless the values have to be some multiple of some other constant, i.e. quantized; but even then, assuming the quantization step is on the order of the Planck scale, the probability would be much smaller than 10^-50).
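The "sliding scale" point is just the standard fact that a continuous random variable assigns zero probability to any single exact value. As a purely illustrative sketch (the density f is of course unknown here):

```latex
% For any continuous probability density f on (0, \infty),
% the probability of one exact value c_0 is zero:
P(c = c_0) = \int_{c_0}^{c_0} f(x)\,dx = 0
% Only intervals get nonzero probability:
P(a \le c \le b) = \int_{a}^{b} f(x)\,dx
```

This is why "the probability of c being exactly what it is" is not a meaningful quantity without first specifying a distribution, which is precisely what we cannot do.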

And even if we could enumerate all the possible values of the variables involved, we're going to get stuck on the definition of "as it does". Does "as it does" refer only to the values of the fundamental physics constants? Does it also require the exact amount of initial energy? Does it require the same perturbations in the initial fields that set all the matter on the exact same course that it did in our universe? Does it require that I like vanilla ice cream more than chocolate?

Waller's argument seems to define "as it does" to mean "able to give rise to intelligent life" but we have no way of knowing this probability because we have no idea what a universe would look like if the initial conditions had been different. Who can say that life could or couldn't be possible if c were set to some other value?

So, not only are the parameters impossible to evaluate, but the requirement for satisfying the probability isn't well-defined.

Anyone who puts a value on this is making it up.


Victor Stenger's Response to the Argument from Probability

If we properly compute, according to statistical theory, the probability for the universe existing with the properties it has, the result is unity! The universe exists with one hundred percent probability (unless you are an idealist who believes everything exists only in your own mind). On the other hand, the probability for one of a random set of universes being our particular universe is a different question. And the probability that one of a random set of universes is a universe that supports some form of life is a third question. I submit it is this last question that is the important one and that we have no reason to be sure that this probability is small.

I have made some estimates of the probability that a chance distribution of physical constants can produce a universe with properties such that some form of life would likely have had sufficient time to evolve. In this study, I randomly varied the constants of physics (I assume the same laws of physics as exist in our universe, since I know no other) over a range of ten orders of magnitude around their existing values. For each resulting "toy" universe, I computed various quantities such as the size of atoms and the lifetimes of stars. I found that almost all combinations of physical constants lead to universes, albeit strange ones, that would live long enough for some type of complexity to form (Stenger 1995: chapter 8). This is illustrated in figure 1.



Figure 1. Distribution of stellar lifetimes for 100 random universes in which four basic physics constants (the proton and electron masses and the strengths of the electromagnetic and strong forces) are varied by ten orders of magnitude around their existing values in our universe. Otherwise, the laws of physics are unchanged. Note that in well over half the universes, stars live at least a billion years. From Stenger 1995.
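The shape of such a simulation is easy to sketch. The following Python is purely illustrative and is not Stenger's actual code: it varies only one stand-in constant (the proton mass) rather than four, and the lifetime scaling and the 10^10-year fiducial value are hypothetical stand-ins chosen just to show the log-uniform sampling over ten orders of magnitude.

```python
import random

random.seed(0)

M_P = 1.67e-27  # proton mass in our universe, kg (fiducial value)

def sample_log_uniform(center, decades=10):
    """Draw a value log-uniformly within +/- decades/2 orders of
    magnitude of `center`, i.e. ten decades in total."""
    exponent = random.uniform(-decades / 2, decades / 2)
    return center * 10 ** exponent

def toy_lifetime_years(m_p):
    """Hypothetical stand-in scaling, NOT Stenger's model:
    lifetime falls with the square of the proton mass,
    normalised so our universe gives ~1e10 years."""
    return 1e10 * (M_P / m_p) ** 2

# Count toy universes (out of 100) whose stars live
# at least a billion years.
long_lived = sum(
    toy_lifetime_years(sample_log_uniform(M_P)) >= 1e9
    for _ in range(100)
)
print(f"{long_lived} of 100 toy universes have billion-year stars")
```

With this (invented) scaling, a substantial fraction of the sampled universes clears the billion-year threshold, which is the qualitative pattern the figure reports; the real calculation varies four constants and uses actual stellar physics.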
Every shuffle of a deck of cards leads to a 52-card sequence that has low a priori probability, but has unit probability once the cards are all on the table. Similarly, the "fine-tuning" of the constants of physics, said to be so unlikely, could very well have been random; we just happen to be in the universe that turned up in that particular deal of the cards.
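The card-shuffle analogy is easy to check numerically. A minimal sketch in plain Python (standard library only): the a priori probability of any one particular ordering of a 52-card deck is 1/52!, an absurdly small number, yet every shuffle realises exactly one such ordering with certainty.

```python
import math

# Number of distinct orderings of a standard 52-card deck
orderings = math.factorial(52)

# A priori probability of any one particular ordering
p = 1 / orderings

print(f"52! = {orderings:.3e}")  # about 8.066e+67
print(f"p   = {p:.3e}")          # about 1.240e-68
```

So a probability far below Waller's 10^-50 threshold is realised on every single deal; smallness of an a priori probability, by itself, implies nothing about design.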

Note that my thesis does not require more than one universe to exist, although some cosmological theories propose this. Even if ours is the only universe, and that universe happened by chance, we have no basis to conclude that a universe without some form of life was so unlikely as to have required a miracle.


Thursday 5 April 2012

What is an Angel?

A brief discussion on the nature of angels...




Posted by A Christian  on 31 Mar 2012 at 1:41PM

My own personal belief is that angels are created beings, but they were never human. Billy Graham, the famous evangelist and Christian leader, wrote a book on the subject of angels in 1975. I have never read the book, but I remember my father had the book and read it. He died in 2006, but my mother probably still has the book on her bookshelf. I might look for it next time I am there and read it for myself.
Anyway, I did find a review of the book online. The blogger says, "Graham addresses the reality of angels, compares them and their place in the order of things with man and God, explains their organization, rank, and duties and of course their roles in the end-times."
It appears to be an interesting book, and I suggest that anyone who is truly interested in the subject of angels consider reading it.
Posted by Another Christian on 31 Mar 2012 at 2:44PM
I AM,,, as I feel and sense I have one watching over me and giving me messages from God... thanks for the info teacher!
Posted by JimC  on 31 Mar 2012 at 2:53PM

Seems to be a serious attempt to explain angels from a Christian perspective. A quotation from the book...
 "Millions of angels are at God's command and at our service. The hosts of heaven stand at attention as we make our way from earth to glory, and Satan's guns are no match for God's heavy artillery"
What I'd like to know is why angels are usually depicted with wings. There's no mention of wings in the Bible.
Posted by A Christian  on 31 Mar 2012 at 3:14PM

Billy Graham is a Christian. I would not expect him to explain something from any other perspective than a Christian perspective.
Isaiah 6: 1-2
"In the year that king Uzziah died I saw also the LORD sitting upon a throne, high and lifted up, and his train filled the temple.
2 Above it stood the seraphims: each one had six wings; with twain he covered his face, and with twain he covered his feet, and with twain he did fly."
In the Bible, a seraphim is considered to be a celestial or heavenly being, which I interpret to be an angel.
Posted by JimC  on 31 Mar 2012 at 3:32PM

I thought seraphim and cherubim were different to angels. Maybe they are different types.
I shall have to read the book.
Posted by A Pantheist on 31 Mar 2012 at 4:58PM
Seraphim are the highest rank of angels, then cherubim. Angels are the lowest, with arch-angels above them. One of the few bible teachings I actually remember.
Posted by JimC  on 31 Mar 2012 at 5:13PM

One of my favourite biblical scenes is in 2 Samuel. God has smoke coming out of his nose, and fire coming out of his mouth, while he rides on the back of a flying cherub. (or cherubim)
Posted by A Legendary Outlaw on 31 Mar 2012 at 5:56PM
There's a teenager who drives his car down my street late at night, no doubt he looks like that.