Friday 22 June 2012

Quantum mysticism


Quantum mysticism is a set of metaphysical beliefs and associated practices that seek to relate consciousness, intelligence, spirituality, or mystical world-views to the ideas of quantum mechanics and its interpretations.
In other words... pseudoscience.

A notable researcher in this field is Dean Radin. In this paper, Radin tries to show that consciousness can cause the collapse of the quantum wavefunction. The double-slit experiment is normally used to show how the collapse of a quantum wavefunction is caused when individual particles are measured by a non-conscious measurement device. Radin uses volunteers who try to influence the outcome of the experiment with their thoughts.
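For readers who want the textbook picture in concrete terms, here is a minimal sketch (in Python, with made-up laser and slit parameters, nothing from Radin’s apparatus) of what collapse does to the pattern: while both paths remain coherent the amplitudes add and fringes appear; once which-path information exists, the probabilities add instead and the fringes vanish.

```python
import numpy as np

# Toy double-slit model: two slits act as point sources of waves.
# All parameters (wavelength, slit separation, screen distance) are
# illustrative, not taken from any real apparatus.
wavelength = 650e-9      # metres (a red laser)
d = 50e-6                # slit separation, metres
L = 1.0                  # distance to the screen, metres
k = 2 * np.pi / wavelength

x = np.linspace(-0.05, 0.05, 1001)   # positions on the screen
r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length via slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length via slit 2

psi1 = np.exp(1j * k * r1) / r1      # amplitude via slit 1
psi2 = np.exp(1j * k * r2) / r2      # amplitude via slit 2

# No measurement at the slits: amplitudes add, fringes appear.
fringes = np.abs(psi1 + psi2) ** 2
# Which-path measurement ("collapse"): probabilities add, fringes vanish.
no_fringes = np.abs(psi1) ** 2 + np.abs(psi2) ** 2
```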

(Note that this experiment is trying to prove something different to the false claim from religionists that the double-slit experiment shows that conscious observation creates or influences matter.)

Radin's idea is interesting, and the experiment is better than most one finds in pseudoscience, but unfortunately the results are inconclusive.


Here's a review of Radin's paper.


Unfortunately, this paper again does not report the actual physical outcome: they don’t report how much the subjects were able to influence the pattern. What seems clear is that none of them was able to alter the pattern in a way that stood out above the noise.


It would have been interesting to know whether these measurements were any more precise than those conducted by Jeffers and Ibison. If this apparatus is better and the effect still couldn’t be clearly measured, that would suggest that Ibison’s result was just chance.


They calculate that, on average, the pattern in periods where subjects paid attention was slightly different from the pattern when they did not. That is valid in principle, because you can get a good measurement out of a bad apparatus simply by repeating the process many times.
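The underlying statistics is simply that the standard error of a mean falls as one over the square root of the number of repetitions. A toy simulation (all numbers invented for illustration) makes the point:

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.01   # a tiny hypothetical shift (arbitrary units)
noise_sigma = 1.0    # a "bad apparatus": noise dwarfs the effect

for n in (100, 10_000, 1_000_000):
    samples = true_effect + noise_sigma * rng.standard_normal(n)
    stderr = samples.std(ddof=1) / np.sqrt(n)
    print(f"n={n:>9}: mean={samples.mean():+.4f} +/- {stderr:.4f}")

# Only at large n does the standard error drop below the effect size,
# which is why many repetitions can rescue a noisy apparatus.
```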


However, any physicist or engineer would try their utmost to improve that apparatus rather than rely on repetitions.


On the whole, this is not handled like a physics experiment but more like one in social science.


Most importantly, the effect size reported is not a physically measurable change in the interference pattern. It is simply a measure of how far the result deviated from “chance expectation”. That’s not exactly what should be of top interest.


The paper reports six experiments. To give credit where it is due: some would have pooled the data and pretended to have run fewer but larger experiments to make the results seem more impressive. That’s not acceptable, of course, but it is still done (e.g. by Daryl Bem).


Results and Methods


The first four experiments all failed to reach a significant result, even by the loose standards common in social science. However, they all pointed somewhat in the right direction, which might be considered encouraging enough to continue.


Among these experiments there were two ill-conceived attempts to identify other factors that might influence the result.


The first idea is that somewhat regular ups and downs in the outcome measure could have coincided with periods of attention and no attention. I can’t stress enough that this would have been better addressed by trying to get the apparatus to behave.


Instead, Radin performs a plainly bizarre statistical analysis. I’m sure this was thought up by him rather than a co-author because it is just his style.


Basically, he takes out all the big ups and downs. So far, so good: this should indeed remove any spurious ups and downs coming from within the apparatus. But wait, it should also remove any real effect!
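A quick simulation (my own construction, not Radin’s actual procedure) shows why: filter the dominant slow components out of a signal, and any slow, block-wise “attention” effect largely disappears along with the instrumental drift.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)

# Slow spurious drift from the apparatus, plus a real block-wise effect
# that switches on and off with the (hypothetical) attention periods.
drift = 0.5 * np.sin(2 * np.pi * t / 256)
attention = np.where((t // 128) % 2 == 0, 0.1, 0.0)
signal = drift + attention + 0.05 * rng.standard_normal(n)

# "Take out all the big ups and downs": zero the low-frequency part
# of the spectrum, which is where the drift lives.
spectrum = np.fft.rfft(signal)
spectrum[:40] = 0
filtered = np.fft.irfft(spectrum, n)

on = filtered[(t // 128) % 2 == 0].mean()
off = filtered[(t // 128) % 2 == 1].mean()
# The recovered on-off difference is a small residue of the true 0.1:
# the filter removed the drift, and most of the real effect with it.
print(f"on-off difference after filtering: {on - off:.4f}")
```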


Radin, however, is satisfied with still getting a positive result even when there is nothing left that could cause a true positive result. The “positive” result Radin gets is obviously a meaningless coincidence that would almost certainly not repeat in any of the other experiments. And indeed, he reports this analysis only for this one experiment.


Make no mistake here: once a method for such an analysis has been implemented on a computer for one set of data, it takes only seconds to run it on any other set of data.


The second attempt concerns the possibility that warmth might have affected the result. A good way to test this would be to introduce heat sources into the room and see how that affects the apparatus.


What was done is quite different. Four thermometers were placed in the room while an experiment was conducted. The idea seems to have been that if the room got warmer, this would indicate that warmth may have been responsible. Unfortunately, since you don’t know whether you are measuring in the right places, you can’t conclude that warmth was not responsible just because you don’t find any warming. Besides, it might not be a steady increase you are looking for. In short, you don’t know whether your four thermometers could pick up anything relevant, or how to recognize it if they did.


Conversely, even if the room got warmer with someone in it, this would not necessarily affect the measurement adversely.


In any case, temperature did indeed seem to increase slightly. Why the same temperature measurements were not conducted in the other experiments, or why the possible temperature influence was not investigated further, is unclear to me. They evidently believe this check should work, so why not continue with it?


The last two experiments were somewhat more elaborate. They were larger, comprising about 50 subjects rather than about 30, and an EEG was recorded for each subject. The fifth experiment is the one success in the lot, insofar as it reports a significant result.


Conclusion


If you have read the first part of this series, then you have encountered a mainstream physics article that studied how the thermal emission of photons affects the interference pattern. What that paper shares with this one is that both are interested in how a certain process affects the interference pattern.


And yet the papers could hardly be more different. The mainstream paper contains extensive theoretical calculations that place the results in the context of known physics. The fringe paper has no such calculations and relies mainly on pop science accounts of quantum physics.


The mainstream paper presents a clear and unambiguous change in the interference pattern. Let’s look at it again.

[Figure: detected particles (dots) with the fitted interference patterns (lines), from the mainstream paper]

The dots are the particles and the lines mark the theoretically expected interference patterns, fitted to the actual results. As you can see, the dots don’t exactly follow the lines; that’s just unavoidable random variation, due to any number of causes. And yet the change in the pattern can be seen clearly.


From what is reported in Radin’s paper we can deduce that the change associated with attention was not even remotely as clean. In fact, the patterns must have been virtually identical the whole time.


That means that if there is a real effect in Radin’s paper, it is tiny. So tiny that it can’t properly be seen with the equipment they used.


That is hardly a surprising result. If paying attention to something were able to change its quantum behavior in a noticeable way, this would have been noticed long ago. Careful experiments would be plagued by inexplicable noise, depending on what the experimenters were thinking about.


The “positive” result that he reports suffers from the same problem as virtually all positive results in parapsychology, and also many in certain recognized scientific disciplines: it may simply be due to kinks in the social-science methodology employed.


Some of the weirdness in the paper, not all of which I mentioned, leaves me with no confidence that there is more than “flexible methods” going on here.


Poor Quantum Physics


Radin believes that a positive result supports “consciousness causes collapse”.  He bemoans a lack of experimental tests of that idea and attributes it, quite without justification, to a “taboo” against including consciousness in physics.


Thousands upon thousands of physicists, and many times more students, are supposed to have refused, out of some desire to conform, to do a simple and obvious experiment. I think it says a lot about Radin and the company he keeps that he has no problem believing that.


I don’t know about you, my dear readers, but had I been in such a situation, I would have thought differently. Either all those people who should know more about the subject than me have their heads up their behinds, or it is just me. I would have wondered whether there was something I was missing. And I would have found out what it was and avoided making an ass of myself. Then again, I would also have (and have) avoided the book deals, the adoration of many fans, and the like, all of which Radin secured for himself.


So who’s to say that reasonable thinking is actually the same as sensible thinking?


But back to the physics. As becomes obvious once one finds the relevant literature, conscious awareness of information is not necessary to affect an interference pattern. Moreover, wave-function collapse is not necessary to explain this. Both of these points should be plain from the mainstream paper mentioned here.


Outlook


My advice to anyone who thinks there is something to this is to build a more sensitive apparatus and/or to calibrate it better. If the effect still doesn’t rise above the noise, it probably wasn’t there in the first place. If it does, however, future research becomes much easier.


For example, if tiny magnetic fields influence this, as Radin suggests, that could be learned in a few days.


Unfortunately, it does not appear that this is how Dean Radin and his colleagues are going about it, but I’ll refrain from comment until I have more solid information.


But at least they are continuing the line of investigation. They deserve some praise for that. It is all too often the case that parapsychologists present a supposedly awesome, earth-shattering result and then move on to something completely different.


Update


I omitted to comment on many details in the second paper to keep things halfway brief. In doing so, I overlooked one curiosity that really should be mentioned.


The fourth experiment is “retrocausal”. That means, in this case, that the double-slit part of the experiment was run and recorded three months before the human subjects viewed the recording and tried to influence it. The retrocausality in itself is not really such an issue: time is a curious thing in modern physics and not at all like we intuit.


The curious thing is that this implies the entire recording was in a state of quantum superposition for a whole three months. Getting macroscopic objects into such states, and keeping them there, is enormously difficult. It certainly does not just happen on its own. What they claim to have done is simply impossible as far as mainstream quantum physics is concerned: not just in theory, it can’t be done in practice, despite physicists trying really hard.

Tuesday 5 June 2012

The Fine Tuning Myth



Fine Tuning and the Argument from Probability



In his 1995 book, The Creator and the Cosmos, physicist Hugh Ross listed thirty-three characteristics a planet must have to support life. He also estimated the probability of such a combination being found in the universe as "much less than one in a million trillion."

He concluded that only "divine design" could account for human life.

However, Ross presented no estimate of the probability for divine design. Perhaps it is even lower! Ross and others who attempt to prove the existence of God on the basis of probabilities make a fundamental logical error: when using probabilities to decide between two or more possibilities, you must have a number for each possibility in order to compare them. In this vast universe, highly unlikely events happen every day.
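In Bayesian terms (my formulation, not Ross's), the comparison he needs but never makes looks like this:

```latex
\frac{P(\mathrm{design}\mid\mathrm{life})}{P(\mathrm{chance}\mid\mathrm{life})}
  = \frac{P(\mathrm{life}\mid\mathrm{design})}{P(\mathrm{life}\mid\mathrm{chance})}
    \times \frac{P(\mathrm{design})}{P(\mathrm{chance})}
```

Ross offers a number for only one of the four quantities on the right-hand side, P(life | chance), so the posterior odds he needs are simply undefined.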

In their 2004 book, The Privileged Planet, astronomer Guillermo Gonzalez and theologian Jay Richards carried the notion further, asserting that our place in the cosmos is not only special but also designed for discovery. They contend that conditions on Earth, particularly those that make human life possible, are also optimised for scientific investigation, and that this constitutes "a signal revealing a universe so skilfully created for life and discovery that it seems to whisper of an extraterrestrial intelligence immeasurably more vast, more ancient, and more magnificent than anything we've been willing to expect or imagine."

Read why these arguments are misleading here (from page 144).

Jason Waller provides a version of the argument for fine tuning that goes like this...

1) The universe is fine-tuned for the evolution of intelligent life.

2) This fine-tuning is most likely the result of either (i) chance or (ii) Minimal Theism.

3) The likelihood that the fine-tuning is the result of chance is astronomically low (something like 1 in 10^50).

4) The likelihood of Minimal Theism (while itself pretty low) is not astronomically low (that is, it is much higher than 1 in 10^50).

5) Therefore, Minimal Theism is probably true.

Some issues with Waller's logic:

The probability that our universe exists as it does is 1.

But if we're trying to figure out what the probability of the universe existing was before it came into existence (which is the only meaningful way to address this question), then it can't be calculated, because we have no idea what variables led up to its creation. We cannot (as far as we know) look back to "before" the big bang and examine all the parameters involved, so there's no way to put a value on how likely any of them is, let alone all of them happening as they did. Waller's number "1 in 10^50" is entirely made up.

It is possible that our universe is the only possible universe - that no matter what happened before the big bang, the only way the universe could shake out would be exactly as it has. It's hard to imagine, for example, that pi could be set to any other value because its value comes directly from its abstract definition. Perhaps all the fundamental constants are the same way. They couldn't be set otherwise.

It's equally possible (because we can't put values on these possibilities) that our universe is all but infinitely unlikely: every possible parameter we can think of could be set at any possible value, and the chance of ours arising over the others is statistically zero (far smaller than the 10^-50 figure Waller uses). In fact, if any one parameter could vary on a sliding scale (say, the speed of light could have been set at any value above 0 and below infinity), then the probability of the universe existing with c set exactly as it is would be infinitesimal (unless the values have to be some multiple of another constant, i.e. quantized; but even then, assuming the quantization scale is on the order of the Planck length, the probability would be much smaller than 10^-50).

And even if we could enumerate all the possible values of the variables involved, we would still get stuck on the definition of "as it does". Does "as it does" refer only to the values of the fundamental physical constants? Does it also require the exact amount of initial energy? Does it require the same perturbations in the initial fields that set all the matter on the exact course it took in our universe? Does it require that I like vanilla ice cream more than chocolate?

Waller's argument seems to define "as it does" to mean "able to give rise to intelligent life", but we have no way of knowing this probability because we have no idea what a universe would look like if the initial conditions had been different. Who can say whether life could or couldn't be possible if c were set to some other value?

So, not only are the parameters impossible to evaluate, but the requirement for satisfying the probability isn't well-defined.

Anyone who puts a value on this is making it up.


Victor Stenger's Response to the Argument from Probability

If we properly compute, according to statistical theory, the probability for the universe existing with the properties it has, the result is unity! The universe exists with one hundred percent probability (unless you are an idealist who believes everything exists only in your own mind). On the other hand, the probability for one of a random set of universes being our particular universe is a different question. And the probability that one of a random set of universes is a universe that supports some form of life is a third question. I submit it is this last question that is the important one and that we have no reason to be sure that this probability is small.

I have made some estimates of the probability that a chance distribution of physical constants can produce a universe with properties sufficient that some form of life would have likely had sufficient time to evolve. In this study, I randomly varied the constants of physics (I assume the same laws of physics as exist in our universe, since I know no other) over a range of ten orders of magnitude around their existing values. For each resulting "toy" universe, I computed various quantities such as the size of atoms and the lifetimes of stars. I found that almost all combinations of physical constants lead to universes, albeit strange ones, that would live long enough for some type of complexity to form (Stenger 1995: chapter 8). This is illustrated in figure 1.



Figure 1. Distribution of stellar lifetimes for 100 random universes in which four basic physics constants (the proton and electron masses and the strengths of the electromagnetic and strong forces) are varied by ten orders of magnitude around their existing values in our universe. Otherwise, the laws of physics are unchanged. Note that in well over half the universes, stars live at least a billion years. From Stenger 1995.
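The structure of the calculation Stenger describes is easy to sketch. The Python below is a loose reconstruction, not his code, and the stellar-lifetime formula is a crude dimensional estimate I am substituting for his detailed calculations:

```python
import numpy as np

rng = np.random.default_rng(42)

# Constants in our universe (SI units).
HBAR, C, G = 1.055e-34, 2.998e8, 6.674e-11
M_P, M_E = 1.673e-27, 9.109e-31        # proton and electron masses
ALPHA, ALPHA_S = 1 / 137.0, 1.0        # electromagnetic and strong couplings

GYR = 3.15e16  # one billion years, in seconds

def stellar_lifetime(m_p, m_e, alpha, alpha_s):
    """Crude dimensional estimate of a typical stellar lifetime:
    t ~ (alpha^2 / alpha_G) * (m_p / m_e)^2 * hbar / (m_p c^2),
    with alpha_G = G m_p^2 / (hbar c). This gives very roughly a
    billion years for our universe's values. The strong coupling
    enters Stenger's fuller treatment but is ignored in this stand-in.
    """
    alpha_g = G * m_p**2 / (HBAR * C)
    return (alpha**2 / alpha_g) * (m_p / m_e)**2 * HBAR / (m_p * C**2)

n_universes = 100
long_lived = 0
for _ in range(n_universes):
    # Vary each of the four constants log-uniformly over +/- 5 decades.
    vary = lambda x: x * 10.0 ** rng.uniform(-5.0, 5.0)
    t = stellar_lifetime(vary(M_P), vary(M_E), vary(ALPHA), vary(ALPHA_S))
    long_lived += t > GYR

print(f"{long_lived} of {n_universes} toy universes have stars that "
      f"can burn for at least a billion years")
```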

Every shuffle of a deck of cards leads to a 52-card sequence that has low a priori probability, but has unit probability once the cards are all on the table. Similarly, the "fine-tuning" of the constants of physics, said to be so unlikely, could very well have been random; we just happen to be in the universe that turned up in that particular deal of the cards.
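The arithmetic behind the card analogy is easy to check:

```python
import math

orderings = math.factorial(52)    # number of distinct shuffles, about 8.07e67
p_one_shuffle = 1 / orderings     # a priori probability of any given sequence
print(f"52! = {orderings:.2e}; each particular shuffle "
      f"has probability {p_one_shuffle:.1e}")
# About 1.2e-68 beforehand, yet probability 1 once the cards are dealt.
```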

Note that my thesis does not require more than one universe to exist, although some cosmological theories propose this. Even if ours is the only universe, and that universe happened by chance, we have no basis to conclude that a universe without some form of life was so unlikely as to have required a miracle.