February 1st, 2009
So there’s an inflammatory post on the physics preprint server blog with the headline Massive Miscalculation Makes LHC Safety Assurances Invalid. It’s based on a paper by Toby Ord and others titled “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes.” Here is the abstract:
Some risks have extremely high stakes. For example, a worldwide pandemic or asteroid impact could potentially kill more than a billion people. Comfortingly, scientific calculations often put very low probabilities on the occurrence of such catastrophes. In this paper, we argue that there are important new methodological problems which arise when assessing global catastrophic risks and we focus on a problem regarding probability estimation. When an expert provides a calculation of the probability of an outcome, they are really providing the probability of the outcome occurring, given that their argument is watertight. However, their argument may fail for a number of reasons such as a flaw in the underlying theory, a flaw in the modeling of the problem, or a mistake in the calculations. If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect. We develop this idea formally, explaining how it differs from the related distinctions of model and parameter uncertainty. Using the risk estimates from the Large Hadron Collider as a test case, we show how serious the problem can be when it comes to catastrophic risks and how best to address it.
In addition to the discussion at the arXiv blog, there’s a long Slashdot thread as well, if such things can be called discussions. And we already had the Fox News story spinning this as “Scientists Not So Sure ‘Doomsday Machine’ Won’t Destroy the World.” Ugh.
First, before I attack some stupid ideas and presentation issues, let me say what I like and agree with. From a post of mine last year about “stupid smart people”:
I’m equating wisdom here with a type of intelligence, one that “smart” people should have or be capable of achieving. When smart people do/say/believe stupid things, it’s akin to them lacking wisdom, and the stupid things could be avoided if only they applied some of their smarts in a different or more global way. It’s often a failure to see the forest for the trees. Sometimes it’s forgetting that forests are made of trees.
One example I see in astronomy all the time has to do with uncertainties. It’s pounded into our heads as graduate students that a data point doesn’t mean much if you don’t know its error bars, and we often spend more time generating the uncertainties than we do determining the data values. That’s fine as far as it goes. But here’s where the stupid comes in sometimes. There are two kinds of uncertainties: formal and systematic. It’s often possible to calculate and show formal uncertainties, which are usually based on well-understood statistics of shot noise or error propagation. A lot of the time these are worthless, because they’re much smaller than the systematic uncertainties, which depend on the validity of the technique. A simple example of the difference is calibrating how bright a star is in absolute terms. We do this regularly by comparing the photons received in a given time period to those from standard reference stars, and use statistics of photons and detector noise to determine formal uncertainties. The systematic error comes up in the choice of reference stars (or the change in seeing without changing extraction apertures, etc.) — if the standard star turns out to be a variable for some reason, then the formal uncertainty means nothing.
In astronomy, adding those formal error bars to a plot, even when the systematic uncertainties are known to be much larger and more important, will make many an audience member smile happily even when they don’t mean anything. That’s being a stupid smart person.
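To make the formal-versus-systematic distinction concrete, here’s a minimal simulation sketch of the photometry example (all numbers are invented for illustration; this isn’t any real pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative setup: a target star calibrated against a "standard"
# reference star, both measured as photon counts in some exposure.
true_ratio = 0.50        # true target/reference brightness ratio
n_ref_mean = 1.0e6       # mean photons from the reference star
n_epochs = 100

# Suppose the standard star is secretly variable at the 3% level.
# The formal (shot-noise) error bar knows nothing about this.
ref_variability = 0.03

ratios = []
for _ in range(n_epochs):
    ref_level = n_ref_mean * (1 + rng.normal(0, ref_variability))
    n_ref = rng.poisson(ref_level)                  # reference counts
    n_tgt = rng.poisson(true_ratio * n_ref_mean)    # target counts
    ratios.append(n_tgt / n_ref)

ratios = np.asarray(ratios)
# Formal error on the ratio from Poisson statistics alone:
formal = true_ratio * np.sqrt(1 / (true_ratio * n_ref_mean) + 1 / n_ref_mean)

print(f"formal (shot-noise) error:     {formal:.5f}")        # ~0.0009
print(f"actual epoch-to-epoch scatter: {ratios.std():.5f}")  # ~0.015
# The real scatter is roughly 15-20 times the formal error bar: the
# systematic (the variable standard star) dominates, and nothing in
# the formal calculation warns you.
```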
The paper is in part about seeing the forest: putting a calculation in context, recognizing that errors in the method, or some other unforeseen aspect of the calculation, could mean that the actual result doesn’t mean anything. I remember a case back in the early 1990s in my field. It was a calculation of the probability that a certain category of quasar, a broad absorption line quasar, could be what we call “radio loud.” The probability thrown out was one chance in a trillion based on the survey work to date. I published a paper in 1998 reporting my discovery of five of them, and now hundreds are known. They’re not common, but they sure exist. The problem was that parameter space had not been uniformly explored to that point, and there is a relationship between radio properties and the presence of broad absorption lines, but it is far from absolute or clearly understood.
Now, here’s the thing. Careful scientists usually couch their statements in qualifiers. I’m sure that calculation of one in a trillion started with a bunch of assumptions, and that those assumptions were clearly stated. One of those assumptions was wrong, is all. There was interesting science in why it was wrong. It wasn’t a “mistake” to make the assumption and do the calculation.
The kind of statistical argument made here in the Ord et al. paper has no physics in it. It’s more of a mathematical or philosophical argument. It’s full of its own assumptions, too, which it does state, yet it seems somewhat unaware of their limitations. That is, there is the notion that mistakes are made in physics calculations so often that one shouldn’t believe any extremely small probabilities. Well, yes and no. Sometimes the probabilities can be verified by experiment. Events are rare, but they can still be studied by virtue of having a lot of chances for them to be observed.
There are physics theories that predict a proton decay timescale of something like 10^31 years. Well, that’s a damn small probability of any given proton decaying this year. But by watching a lot of protons, we now know that the decay timescale is at least a few orders of magnitude larger, and there is no evidence of proton decay of any sort.
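Here’s a back-of-the-envelope sketch of why such a tiny per-proton probability is still testable (detector size and lifetime are round illustrative numbers, not any particular experiment’s specs):

```python
# How many proton decays per year would a big water tank see if the
# lifetime were 1e31 years? Round numbers for illustration only.
AVOGADRO = 6.022e23
WATER_MOLAR_MASS_G = 18.0     # grams per mole of H2O
PROTONS_PER_WATER = 10        # 2 from hydrogen + 8 in the oxygen nucleus

detector_mass_g = 5e10        # ~50 kilotons of water, Super-K scale
lifetime_years = 1e31

n_protons = detector_mass_g / WATER_MOLAR_MASS_G * AVOGADRO * PROTONS_PER_WATER
decays_per_year = n_protons / lifetime_years

print(f"protons watched:          {n_protons:.1e}")        # ~1.7e34
print(f"expected decays per year: {decays_per_year:.0f}")  # ~1700
# Seeing zero decays, year after year, is what pushes the lifetime
# limit orders of magnitude beyond 1e31 years.
```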
Another post I made related to this topic was after reading The Black Swan, by Nassim Nicholas Taleb. Taleb’s big complaint is that the economics industry adopts Gaussian errors, which may or may not be a valid description of the uncertainty of unlikely events. We can prove Gaussians are good for many applications in physics. The problem is that they are assumed in economics, often without compelling evidence, and often with catastrophic consequences.
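A quick sketch of how large that mistake can be (the Student-t with 3 degrees of freedom is my arbitrary stand-in for a fat-tailed alternative, chosen purely for illustration):

```python
from scipy import stats

# Tail probability P(|X| > k) under a Gaussian versus a heavy-tailed
# Student-t(3). Both use location 0 and scale parameter 1.
for k in (3, 5, 10):
    p_gauss = 2 * stats.norm.sf(k)
    p_heavy = 2 * stats.t.sf(k, df=3)
    print(f"{k}-sigma: Gaussian {p_gauss:.1e}, heavy-tailed {p_heavy:.1e}")

# At k=10 the Gaussian says ~1.5e-23 while the t(3) says ~2e-3:
# twenty orders of magnitude. Assume the wrong family and your
# "impossible" event is merely uncommon.
```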
But I digress a bit. I’m afraid I’m interested in all the tangents on this topic: the promise of our explorations as well as the danger. Anyway, when it comes to communicating results to the public, all the qualifiers tend to be dropped. The assumptions fall away and what remains sounds like an absolute result, with lots of significance, and such things are rare in science. Then Ord et al.’s argument comes in: people doing calculations are wrong one time out of a thousand, or ten thousand, on average. Their claim is that this is serious when assessing risk for high-cost disasters.
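As I read it, the paper’s formal move is just the law of total probability. In my notation (not theirs): let X be the catastrophe and A the event that the safety argument is sound. Then

```latex
P(X) = P(X \mid A)\,P(A) + P(X \mid \lnot A)\,P(\lnot A)
     \geq P(X \mid \lnot A)\,P(\lnot A)
```

If P(not-A) is of order 10^-3 while the argument’s own answer P(X|A) is of order 10^-12, the flaw term dominates, and the quoted 10^-12 tells you almost nothing about P(X) unless you can also bound P(X|not-A). That’s the whole machine, as far as I can tell.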
Which says to me that we might as well stop doing anything with high stakes, no matter the benefits, and start doing everything we can to avert risks, even if the dangers are not proven.
Hear this, Fox News? The chances of catastrophic climate change ending civilization as we know it are far from zero according to the experts. But they might be wrong. The chances might be much higher than they think!
I don’t quite buy this. This is a brand of numerology, of philosophy. There are right and wrong answers in the hard sciences. We can be careful. We can check, double check, and cross check. Some authors are better and more careful than others, and so are their referees. Don’t just say there’s a tiny chance the calculation is wrong and that this chance is larger than the even tinier chance calculated. Be more careful. Do more checks. Keep checking.
And why pick on the Large Hadron Collider? For the PR? Especially when the actual conclusion is that there’s still little risk, and when fear mongers like Fox News are likely to pick up on it and spread fear?
I mean, can anyone say how often the very best people do a calculation wrong, have it checked and found to be okay by everyone else in the field, sit around thinking about it for years facing criticism and questions from people about whether it is wrong, and still fail to see any errors? I submit that this scenario, which is the situation with the LHC, hasn’t happened often. I don’t believe anyone can assess the likelihood of error here, except to say it is smaller than usual.
Again, I am really tempted to rewrite the blog entry with the same arguments, only replacing the Large Hadron Collider with Anthropogenic Climate Change, to argue that we’re mistaken about how likely total destruction of the planet is via runaway greenhouse effect. I haven’t seen anyone putting high odds on that. If it’s lower than 1 in 1000, this paper by Ord et al. suggests it might as well be considered 1 in 1000. How about a little Kyoto, Fox News, in the face of rolling the thousand-sided die on the extinction of mankind?
And I forgot to even go into one of the cross checks! This argument seems to address only mistakes. It ignores the fact that much more energetic collisions happen between cosmic rays and our atmosphere. These cosmic rays also hit other planets, stars like the sun, denser objects like neutron stars…and have been doing so for billions of years. Collisions in the LHC energy range don’t seem obviously dangerous, empirically.
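A rough back-of-the-envelope version of that empirical argument (every input here is an order-of-magnitude assumption on my part, not a careful number):

```python
# Cosmic rays above ~1e17 eV hitting protons at rest reach roughly the
# LHC's 14 TeV center-of-mass energy. How many such collisions has the
# atmosphere already hosted?
EARTH_AREA_KM2 = 5.1e8     # Earth's surface area
FLUX_PER_KM2_YR = 1e4      # assumed integral flux above ~1e17 eV
EARTH_AGE_YR = 4.5e9

natural = EARTH_AREA_KM2 * FLUX_PER_KM2_YR * EARTH_AGE_YR
lhc_total = 1e17           # crude guess at the LHC's lifetime collision count

print(f"natural LHC-scale collisions: {natural:.0e}")              # ~2e22
print(f"nature / LHC ratio:           {natural / lhc_total:.0e}")  # ~2e5
# On these rough inputs the atmosphere alone has already run the whole
# LHC program a couple hundred thousand times over, never mind the sun,
# other stars, and neutron stars.
```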
Hello Mike,
The only problem with that analysis is that cosmic-ray-created (initially neutral) particles are not stopped by planets and may not be stopped by (superfluid) stars.
Some LHC created particles (from head on collisions) will be stopped by earth.
Simple conclusion? Safety is unknown with any certainty, caution is justified.
Caution is always justified, and caution is being taken. No one has come up with any convincing reason to think that the LHC is actually a threat, have they? And there are a lot of reasons to think it’s safe, without resorting to quantitative lies like “any certainty.”
I hope you don’t drive a car or get into an airplane, and that you spend your worry instead on preventing global warming and on having the capability to avoid asteroid impacts. The chances of catastrophic climate change or an asteroid destroying civilization as we know it seem quite a bit higher, don’t they?
I could give a slightly more serious answer, but your website seems to share a lot of similarities to those of creationists who pick at “problems” with evolution, or global warming deniers, grabbing random quotes or little factoids out of context, and providing only one biased side of the “facts.”
I mean, do you know with “any certainty” that atmospheric cosmic rays can’t create black holes that are “stopped” by the Earth?
Mike writes: “No one has come up with any convincing reason to think that the LHC is actually a threat, have they?”
If scientists believed the LHC was probably a threat then it would not operate. The problem is when potentially dangerous experiments are conducted with only limited safety knowledge, or when safety theories are based on unverified, potentially flawed science.
In this case safety is only estimated, with unknown accuracy or precision. Proof of safety should be the requirement to proceed, not lack of convincing proof of danger. (Recall that shuttle Challenger managers launched in freezing weather against warnings from engineers because there was no convincing proof of danger. They should have required reasonable proof of safety, not lack of proof of danger.)
A good example of the willingness to take big risks for big science (LHC) was demonstrated by physicist yy2bggggs on the XKCD forums last year [1]:
yy2bggggs wrote:
“We don’t want to know if it’s possible we will blow up the world–because, quite frankly, we already know the answer. And the answer is, quite frankly, despite all the testing we will ever do–yes. It’s possible. That doesn’t help us.”
“What we want to know is if we are going to destroy the world. And we can’t know this with certainty, but in reality, we don’t really care about certainty. We care about whether or not it’s probable–that is, likely, that we will destroy the world. So again, possibility–irrelevant. Likelihood–key.” [1]
But to answer your question more directly: yes, a few scientists hold minority opinions that safety has not been proven, theorize that dangerous particles could be produced, and suggest that attacks on their conclusions are more politically than scientifically motivated. (See Dr. Eric Johnson’s blog for insight on motivations to quietly accept risk rather than express unpopular minority opinions.) [2]
[1] XKCD, LHC Dangerous?, Page 5, yy2bggggs (12 Apr 2008) http://forums.xkcd.com/viewtopic.php?f=18&t=11690&st=0&sk=t&sd=a&start=160#p622085
[2] Culture and Inscrutable Science, Dr. Eric Johnson, Assistant Professor of Law (24 Oct 2008) http://prawfsblawg.blogs.com/prawfsblawg/2008/10/culture-and-ins.html
It sounds like you’re asking for the impossible. If we could prove exactly what would happen, we wouldn’t need to do the experiment. This has been true of other experiments before the LHC, and will be true of other things to come. “Proof” is impossible, but a clear criterion for satisfying an objection should be provided if you make one. And even if there is proof, it seems to me that some will not accept it and will still raise objections anyway, don’t you agree?
The proof is not ever going to be found in personality, culture, or internet forum comments. More serious criticisms in more serious places are appropriate.
Again, by any measure climate change and asteroids are much more serious threats. I just haven’t seen any serious scenario for a threat from the LHC other than attacks that the assurances are not 100.00000000% certain. The threat scenarios really depend on us having a poor understanding of a whole lot of things that we have no evidence we understand poorly, plus a heap of unsupported speculation.
What would reassure you that the LHC is safe? If the answer is nothing, then there’s no point in talking.
I don’t require 100.00000000% proof of safety, but my reading of the current safety arguments is that they don’t appear to indicate more than perhaps a 75% chance of safety (similar odds to what I would have estimated for the safety of launching the shuttle Challenger in freezing weather; danger is a reasonable and un-excluded possibility).
A reasonable possibility of losing Earth requires a much higher level of safety assurance than the danger of losing a space shuttle. (In 1988 NASA estimated the odds of catastrophic shuttle failure as 1 in 50.) [1]
LHC particle collisions at energies at least 2.5 times as high as any collider before it are planned for 2009 (70% of full power, or 5 TeV).
At these energies we can expect the unexpected. Particles will be created at rest on Earth that have never been created at rest on Earth before. The scenario that causes most concern is creation of ultra-stable particles (strange matter or black holes) that might have the potential to turn Earth into a lump of something uninhabitable in some unknown amount of time.
Some scientists predict that stable strange matter or stable micro black holes might be created (both probably low-probability outcomes, but unknown), and the safety scenarios that might exclude them are challenged and unverified. That is the concern.
We only get one chance to end Earth; reasonable caution is justified. One scientist, Dr. habil. Rainer Plaga, recommends at least that scientists very slowly increase energies and search for possible signs of danger before higher energy levels are attempted. [2] There is no guarantee that this would prevent possible danger, but it seems a reasonable and necessary precaution.
A 250% increase in energy over prior colliders is clearly not a small incremental increase in energy levels. Perhaps studying 10% energy increases would seem reasonable to me; perhaps smaller increases would satisfy others; but a 250% increase clearly is not cautious.
[1] SFGate.com, Latest assessment finds greater danger for shuttle (2005) http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2005/07/26/MNGESDTHBO1.DTL&type=printable
[2] On the potential catastrophic risk from metastable quantum-black holes produced at particle colliders, Dr. Habil. Rainer Plaga (26 Sep 2008) http://arxiv.org/abs/0808.1415
75% chance of what, exactly? This is a ridiculous number from some perspectives (e.g., destruction of the Earth). 75% chance physicists know exactly what will happen? 75% chance the scientists have assessed the risk accurately? 75% chance their risk assessment is off by 20 orders of magnitude?
This is the kind of sloppy statement I wrote the original post to combat. This is the kind of thing Fox News uses to scare people.
A “250% increase” in energy (a factor of 2.5) is only a little more than a doubling. Sounds small to me, as these things go. Again, cosmic rays are orders of magnitude more energetic, and even black hole decay times of minutes would still be short compared to the growth timescale.
And I don’t know why you think the LHC isn’t going to be taken up to peak energies and luminosities only slowly. That’s their plan!
Let’s be reasonable, but let’s not be paranoid about every exotic and speculative theory that hasn’t passed any tests whatsoever. They’re more likely wrong than not.
First, there is no accepted theory that has passed any tests that suggests black holes will be produced at all, let alone with 25% probability. Theory that is accepted indicates essentially a 100% chance any such black hole will decay before it can grow. There is no evidence of black holes, dangerous or not, being produced by the much more energetic cosmic rays that have hit the Earth regularly throughout history. There is no evidence for black holes smaller than neutron stars being made by this sort of event in nature.
You have to string together a long list of exceptions to even get a hint of a threat, let alone the destruction of Earth. What is the 75% chance again?
Look, I’m not saying don’t look into it. I’m saying don’t scare monger about total longshots without something a bit more serious to base it on. It’s also not like particle physics has advanced much in the decade-plus that the LHC has been under construction. To me this seems like a sexy issue for some people to use to get attention, rather than serious criticism.
I would like nothing better than to learn that the LHC would follow the spirit of Dr. Plaga’s recommendation to only achieve significantly higher energy levels after thoroughly studying results at prior energy levels. A 2.5-fold increase in luminosity is not a cautious increase; it’s an increase that would give almost no chance of spotting warning signs before micro black holes or similar might be produced.
Talk to me about iterative luminosity increases closer to 10% and I might believe you’re advocating reasonable caution.
Mike, who is being unrealistic?
You write: “Theory that is accepted indicates essentially a 100% chance any such black hole will decay before it can grow.”
Do I need to cite the several recent papers predicting black holes do not radiate?
Are you aware of the 2004 Delphi study that found significant doubt among physicists that micro black holes would evaporate? (Estimates for Hawking radiation failure: 0%, 0%, 0.000000001%, 0.1%, 1%, 1%, 1%, 2%, 2%, 7%, 10%, 10%, 30%, 35%, and 50%.)
I read that as closer to a 10% chance, not a 0% chance.
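For what it’s worth, here is the arithmetic behind my reading, taking the fifteen estimates at face value (and assuming a simple average is a fair summary):

```python
import statistics

# The fifteen Delphi-study estimates quoted above, in percent:
estimates = [0, 0, 0.000000001, 0.1, 1, 1, 1, 2, 2, 7, 10, 10, 30, 35, 50]

print(f"mean:   {statistics.mean(estimates):.1f}%")    # 9.9%, the "closer to 10%" reading
print(f"median: {statistics.median(estimates):.1f}%")  # 2.0%, a rather different summary
```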
I don’t think my issues are all that unreasonable either. As a scientist, I tend to think more realistically than most. Opinion doesn’t mean much. I mean, over 40% of Americans think that evolution is bunk, and they’re merely ignorant of the facts and blissfully unaware, while being totally wrong. Who is right, who is wrong, and what can we do to resolve the issues within reason?
In astronomy, we see no evidence for micro black holes, which should have been produced in large quantities in the early universe if the LHC is a threat.
Black hole evaporation was suggested 30+ years ago and is still the dominant point of view without having been shown to be in error. Perhaps some of the skeptics are equally unsure that black holes even exist, let alone evaporate?
I still don’t know why 10% is small and a factor of two is large. We’ve gone through many orders of magnitude in collision energies, and we’re not actually close to energy levels many find more interesting. Why is it now a problem? And given past collider history, increases in luminosity aren’t going to be huge in short time intervals, and there is a legion of particle physicists waiting to pore over every result, every single one dying to be the first to find something unusual.
Again, it’s a matter of the quality of the arguments being put forth. If high stakes is the only thing making them worthwhile, I’m skeptical.
Compelling arguments, but less than conclusive, I think.
If stable micro black holes were created in the early universe, is it possible they merged to form larger black holes before galaxies formed around them?
From NewScientist: Black Holes blew up from Nothing [1]
“Chris Carilli of the National Radio Astronomy Observatory in Socorro, New Mexico, and colleagues studied four galaxies from less than 2 billion years after the big bang. They found the black holes at the centres of these galaxies were as heavy as anything seen in the modern universe, with one estimated to have the mass of 20 billion suns.”
“Meanwhile, it looked as if the host galaxies had only begun to grow, with only modest masses compared with galaxies harbouring similarly sized black holes today. “This suggests the black holes came first,” says Carilli, who presented the results at a meeting of the American Astronomical Society in Long Beach, California.”
[1] NewScientist, Black Holes blew up from Nothing (17-23 Jan 2009) http://www.newscientist.com/article/mg20126915.500-black-holes-blew-up-from-nothing.html
Yeah, I know Chris a little bit and that’s in my area. Big questions, big unknowns. Not compelling evidence for or against at this point.
Perhaps it’d have been better if LHC had happened 10-15 years later, once more of the theory of complex Hawking radiation interactions had had a chance to be worked out in advance. But now that they’ve built it, they obviously want to use the thing ASAP.
This naturally makes some outsiders a little concerned about safety assessments, if they suspect that the people conducting those assessments are already “invested” in the project and personally committed to the project going ahead. There are some people within the LHC organisation who do make great efforts to be fair and honest about the situation, but they seem to be outnumbered by other guys who go around quoting silly numbers and declaring that the thing is known to be “perfectly safe”.
Exaggerated safety assurances meant to reassure tend to have the opposite effect. It’s natural for people who suspect that they’re being bluffed to assume the opposite of what they’re being told, so the more these guys insist that the LHC can’t possibly be dangerous in any conceivable way, and the more they try to paint folks with concerns as “stupid” or “anti-science,” the more worried people are going to get.
Not invalid points. Let me make one more.
Eric, sometimes the problem is that the complaints ARE stupid or anti-scientific. It’s all too easy to whip some people into a movement based on fast talk and an anti-whatever attitude (hear that Scalzi?). Scaring people in newspapers with half-assed (or fully assed) versions of perhaps serious complaints is also damaging. Too many people do make judgments based on their perception of people — I grant that — and ignore whatever facts there are. That’s a double-edged sword.