The Value of Decentralization in Science

The future of particle physics is currently deeply uncertain. So far there have been no surprising findings at the LHC, and no one knows for sure what the best next move is. Would another huge collider, even larger than the LHC, finally deliver the experimental information we need to figure out the correct theory beyond the standard model? Or would the money be better invested in small projects?

There is an interesting new essay by Adam Falkowski (well known for his blog Résonaances) about exactly this question. He concludes: “Shifting the focus away from high-energy colliders toward precision experiments may be the most efficient way to continue exploration of fundamental interactions in the decades ahead. It may even allow particle physics to emerge stronger from its current crisis.”

His conclusions and arguments immediately reminded me of a much older essay titled “Six Cautionary Tales for Scientists” by Freeman Dyson, published in “From Eros to Gaia”. The essay is from 1988 but remains amazingly relevant.

Dyson compares six situations in which people had to decide between a Plan A (several small projects) and a Plan B (one huge project). His main conclusion is that in most situations Plan A is the better option.

One relevant example concerns the time when “the community of molecular biologists in the United States has been struggling with the question of whether to set up a large project to map and sequence the human genome, the set of 3 billion base pairs in the genes of a human being.” One option was to “establish an industrial-scale facility for sequencing and would aim to have the whole job done by an army of technicians within a few years. In conjunction with the sequencing project, there would also be a large centralized mapping project, using the sequence data to identify all known and unknown human genes with precisely known places in the genome. Plan B would require a large new expenditure of public funds, with the usual attendant problems of deciding who should administer the funds and who should receive them.”

The other option would be to “continue unchanged so far as possible the existing way of doing things, with mapping and sequencing activities carried on in decentralized fashion by many groups of scientists investigating particular problems of human genetics. In Plan A there would be no centralized big project, and no drive to sequence the 3 billion bases of the human genome in their entirety irrespective of their genetic significance.”

He then argues that Plan A is the much better option:

The complete sequence should be done when, and only when, we have developed the technology to do the job cheaply and quickly. When it can be done cheaply, go ahead and do it.

Moreover, he concludes that it is wise

to stay flexible and avoid premature commitment to rigid programs. […] Unfortunately, in the history of committees planning scientific programs, such wisdom is rare.

Instead of investing a large chunk of the available budget in one big project, he argues, it makes more sense to give the same money to smaller projects. These smaller projects help to develop the needed technology further and drive down the cost of experiments. After some years it will probably be possible to get the same results for millions instead of billions. In addition, there is much more flexibility and more room for surprising findings.

His arguments against Plan Bs become even more convincing, and more relevant to the question of whether a new huge collider should be built, when he talks about the SSC (the Superconducting Super Collider):

“The SSC is an extreme example of Plan B. The question we have to address is whether SSC is a good Plan B like the Very Large Array or a disastrous Plan B like Zelenchukskaya. […] I do not claim to be infallible when I make guesses about the future. But the SSC shows all the characteristic symptoms of a bad Plan B. It is bad politically because it is being pushed by economic interests and by considerations of national prestige having little to do with scientific merit. It is bad educationally because it pours money into a project which offers little opportunity for creative involvement of students. It is bad scientifically because the proton-proton collisions which it produces are peculiarly difficult to interpret. It is bad ecologically because it squeezes out other avenues of research which are likely to lead to more cost-effective high-energy accelerators. None of these arguments by itself is conclusive, but together they make a strong case against the SSC. There is a serious risk that the SSC will be as great a setback to particle physics as the Zelenchukskaya Observatory has been to astronomy.

When I discuss these misgivings with my particle physicist friends, some who belong to HEPAP and some who don’t, they usually say things like this: “But look, we have no alternative. If we want to see the Higgs boson, or the top quark, or the photino, or any other new particles going beyond the standard model, we have to go to higher energy. It is either the SSC or nothing.” This is the same kind of talk you always hear when people are arguing for Plan B. It is either Plan B or nothing. This argument usually prevails because Plan B is one big thing and Plan A is a lot of little things. When your eyes are blinded by the glitter of something big, all the little things look like nothing. […] But to answer the physicists who say “SSC or nothing,” we must produce a practical alternative to the SSC. We must have a Plan A. Plan A does not mean giving up on high-energy physics. It does not mean that we stop building big accelerators. It does not mean that we lose interest in Higgs bosons and top quarks. My Plan A is rather like the plan recommended by the Alberts Committee. It says, let us put more money into exploring new ideas for building cost-effective accelerators. Let us build several clever accelerators instead of one dumb accelerator. Let us measure the value of an accelerator by its scientific output rather than by its energy input. And meanwhile, while the technology for cheaper and better accelerators is being developed, let us put more money into using effectively the accelerators we already have.

The advocates of the SSC often talk as if the universe were one-dimensional, with energy as the only dimension. Either you have the highest energy or you have nothing. But in fact, the world of particle physics is three-dimensional. The three dimensions in a particle physics experiment are energy, accuracy, and rarity. Energy and accuracy have obvious meanings. Energy is determined mainly by the accelerator, accuracy by the detection equipment. Rarity means the fraction of particle collisions that produce the particular process that the experiment is designed to study. To observe events of high rarity, we need an accelerator with high intensity to make a large number of collisions, and we also need a detector with good diagnostics to discriminate rare events from background. I am not denying that energy is important. Let us by all means build accelerators of higher energy when we can do so cost-effectively. But energy is not the only important variable.

[…]

My Plan A for the future of particle physics is a program giving roughly equal emphasis to the three frontiers. Plan A should be a program of maximum flexibility, encouraging the exploitation of new tools and new discoveries wherever they may occur. To encourage work on the accuracy frontier means continuing to put major effort into new detectors to be used with existing accelerators. To encourage work on the rarity frontier means building some new accelerators which give high intensity of particles with moderate energy. After these needs are taken care of, Plan A will still include big efforts to move ahead on the energy frontier. But the guiding principle should be: more money for experiments and less for construction. Let us find out how to explore the energy frontier cheaply before we get ourselves locked into a huge construction project.
Let us follow the good example of the Alberts Committee when they say: “Because the technology required to meet most of the project’s goals needs major improvement, the committee specifically recommends against establishing one or a few large sequencing centers at present.”
Plan A consists of a mixture of many different programs, looking for opportunities to do great science on all three frontiers. Plan A lacks the grand simplicity and predictability of the SSC. And that is to my mind the main reason for preferring Plan A. There is no illusion more dangerous than the belief that the progress of science is predictable. If you look for nature’s secrets in only one direction, you are likely to miss the most important secrets, those which you did not have enough imagination to predict.”

The Sociology of Theoretical Particle Physics according to H. Georgi

I recently stumbled upon an essay by Howard Georgi called “Effective quantum field theories”, which was published in a book called “The New Physics” edited by P. Davies. Quite surprisingly, near the end of the essay, he starts writing “about how theoretical particle physics works as a sociological and historical phenomenon”.

Georgi is not only the inventor of one of my favorite ideas for physics beyond the standard model (Grand Unified Theories) but also the author of “Lie Algebras in Particle Physics”, one of the best books on the role of group theory in physics. (The role of group theory in physics happens to be my favorite topic.) So he knows what he is talking about.

The essay is from 1989, but in light of recent experimental null results it reads as if it could have been written a few days ago. He outlines a quite unique perspective, and I'm pretty sure not many people have heard of it, especially considering how well this passage is hidden in a big book, in an essay about a completely different topic.

Here is the relevant passage:

“The progress of the field is determined, in the long run, by the progress of experimental particle physics. Theorists are, after all, parasites. Without our experimental friends to do the real work, we might as well be mathematicians or philosophers.

When the science is healthy, theoretical and experimental particle physics track along together, each reinforcing the other. These are the exciting times. But there are often short periods during which one or the other aspect of the field gets way ahead. Then theorists tend to lose contact with reality. This can happen either because there are no really surprising or convincing experimental results being produced (in which case I would say that theory is ahead – this was the situation in the late 1970s and early 1980s, before the discovery of the W and Z) or because the experimental results, while convincing, are completely mysterious (in which case I would say that experiment is ahead – this was the situation during much of the 1960s).

During such periods, without experiment to excite them, theorists tend to relax back into their ground states, each doing whatever comes most naturally. As a result, since different theorists have different skills, the field tends to fragment into little subfields. Finally, when the crucial ideas or the crucial experiments come along and the field regains its vitality, most theorists find that they have been doing irrelevant things.

But the wonderful thing about physics is that good theorists don’t keep doing irrelevant things after experiment has spoken. The useless subfields are pruned away and everyone does more or less the same thing for a while, until the next boring period. […]

As I suggested at the beginning of this chapter, I am somewhat concerned about the present state of particle theory. The problem is, as I mentioned before, that we are in a period during which experiment is not pushing us in any particular direction. At such times, particle physicists must be especially careful.

We now understand the strong, weak and electromagnetic interactions pretty well. Of course, that doesn’t mean that there isn’t anything left to do in these fields any more than the fact that we understand quantum electrodynamics means that there is nothing left to do in atomic physics. The strong interactions, quantum chromodynamics in particular, will rightly continue to absorb the energies of lots of theorists for many decades to come. But it is no longer frontier particle physics in the sense that it was fifteen years ago.”


Making Sense of Particle Physics Research

I recently attended a workshop on “Open Questions in Particle Physics and Cosmology” in Göttingen, where, among other things, I learned a classification scheme for ideas and models beyond the standard model.

This categorization helps me a lot, as a young researcher, in understanding what is currently going on in modern particle physics. It not only helps me understand the work of others better but also lets me formulate more clearly what kind of research I am doing now and want to do in the future.

Broadly, the categorization goes as follows:

1.) Curiosity-Driven Models (a.k.a. “Why not?”)

In these kinds of models, anything goes that is allowed by basic principles and data. Curiosity-driven research is characterized by no restrictions on the type of particle or interaction. In general, there is no further motivation for the addition of a particle besides the fact that it is not yet excluded by the data.

For this reason, many such research projects add a large number of “curiosity-driven” particles and interactions and then perform parameter scans to check the current experimental bounds on the various models.

For example, a prototypical curiosity-driven research project checks the current bounds on the mass and interaction strength of spin-0, spin-1/2, spin-1, … particles that would show up through a specific signature at the LHC.

Usually, such models are called simplified models.

The motivation behind such efforts is to scan the possible “model landscape” as systematically as possible.
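As a purely illustrative sketch of what such a parameter scan looks like in practice, here is a minimal Python example. The grid ranges and the exclusion function are made up for illustration and do not correspond to any real experimental limit:

```python
# Toy parameter scan for a simplified model: one hypothetical new
# particle with mass m (in GeV) and coupling g to known particles.
import numpy as np

def excluded(mass_gev: float, coupling: float) -> bool:
    """Hypothetical exclusion curve, purely for illustration:
    light, strongly coupled particles are assumed ruled out."""
    return coupling > 0.1 * (mass_gev / 1000.0)  # made-up bound

masses = np.linspace(100, 3000, 30)   # scan masses in GeV
couplings = np.logspace(-3, 0, 30)    # scan couplings (dimensionless)

allowed = [(m, g) for m in masses for g in couplings
           if not excluded(m, g)]
print(f"{len(allowed)} of {masses.size * couplings.size} "
      f"grid points are still experimentally allowed")
```

A real analysis would replace the toy `excluded` function with recast limits from actual LHC searches, but the grid-scan structure is the same.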

2.) Data-Driven Models

Models in this second category are invented in response to an experimental “anomaly”, i.e. an observation that does not exactly match the theoretical expectation. Usually the statistical significance is between 2 and 4 sigma, and thus below the “magical” 5 sigma at which people start talking about a discovery. There can be many reasons for such an anomaly: an experimental error, an error in the interpretation of the experimental data, an error in the standard theory prediction, or simply a statistical fluctuation.
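To make the sigma jargon concrete, here is a minimal sketch (assuming Python with scipy is available) that converts a one-sided Gaussian significance into the corresponding p-value, i.e. the probability that a background fluctuation alone produces an effect at least that large:

```python
# Convert a one-sided Gaussian significance (in sigma) into a p-value.
from scipy.stats import norm

for n_sigma in (2, 3, 4, 5):
    p_value = norm.sf(n_sigma)  # survival function: 1 - CDF
    print(f"{n_sigma} sigma  ->  p = {p_value:.2e}")

# prints roughly:
# 2 sigma -> p = 2.28e-02
# 3 sigma -> p = 1.35e-03
# 4 sigma -> p = 3.17e-05
# 5 sigma -> p = 2.87e-07
```

This is why a 3 sigma anomaly (roughly a one-in-a-thousand background fluctuation) is treated very differently from a 5 sigma discovery (roughly one in 3.5 million).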

Some examples of such anomalies are:

  • The current flavor anomalies in $R_K$ and $R_{K^\star}$ observables.
  • The long-standing discrepancy in the anomalous magnetic moment of the muon, usually just called “g−2”.
  • The infamous 750 GeV diphoton excess.
  • The Fermi LAT GC excess.
  • The reactor antineutrino anomalies.
  • The 3.5 keV X-ray line.
  • The positron fraction excess.
  • The DAMA/LIBRA annual modulation effect.
  • The “discovery” of gravitational waves by the BICEP2 experiment.

It is not uncommon that data-driven models try to explain several of these anomalies at once. For an example of a data-driven model, have a look at this paper, and for further examples, see slide 7 here.

(Take note that most of the “anomalies” from the list above are no longer “hot”. For example, the 750 GeV diphoton excess is now regarded as a statistical fluctuation, the positron fraction can be explained by pulsars, the significance of the 3.5 keV X-ray line is decreasing, the reactor antineutrino anomalies can be explained by a “miscalculation”, the DAMA/LIBRA “discovery” has been refuted by several other experiments, and the “discovery” of gravitational waves by the BICEP2 experiment is “now officially dead”…)

The motivation behind such research efforts is, of course, to be the first to propose the correct explanation if the “anomaly” turns out to be a real discovery.

3.) Theory-Driven Models

Research projects in this third category try to solve some big theoretical problem or puzzle and, as a byproduct, predict something that can be measured.

Examples of such puzzles are:

  • The gauge hierarchy puzzle.
  • The strong CP puzzle.
  • The quantization of electric charge.

Again, as with the data-driven models, many models in this category try to solve more than one of these puzzles. Examples are supersymmetric models, which solve the gauge hierarchy puzzle; axion models, which solve the strong CP puzzle; and GUT models, which explain the quantization of electric charge.

It is important to note that this classification only applies to research in the arXiv category “hep-ph”, i.e. high-energy physics phenomenology.

In addition, there is, of course, a lot going on in “hep-th”, i.e. high-energy physics theory. Research projects in this category are not started to make a prediction for an experiment, but rather to understand some fundamental aspect of, say, Yang–Mills theory better, or to invent new methods to calculate amplitudes.

Quite prophetic, and relevant to the classification above, is the following quote by Nobel Prize winner Sheldon Lee Glashow from a discussion at the “Conceptual Foundations of Quantum Field Theory” conference in 1996:

“The age of model building is done, except if you want to go beyond this theory. Now the big leap is string theory, and they want to do the whole thing; that’s very ambitious. But others would like to make smaller steps. And the smaller steps would be presumably, many people feel, strongly guided by experiment. That is to say the hope is that experiment will indicate some new phenomenon that does not agree with the theory as it is presently constituted, and then we just add bells and whistles insofar as it’s possible. But as I said, it ain’t easy. Any new architecture has, by its very nature, to be quite elaborate and quite enormous. Low energy supersymmetry is one of the things that people talk about. It’s a hell of a lot of new particles and new forces which may be just around the corner. And if they see some of these particles, you’ll see hundreds, literally hundreds of people sprouting up, who will have claimed to predict exactly what was seen. In fact they’ve already sprouted up and claimed to have predicted various things that were seen and subsequently retracted. But you see you can’t play this small modification game anymore. It’s not the way it was when Sam and I were growing up and there were lots of little tricks you could do here and there that could make our knowledge better. They’re not there any more, in terms of changing the theory. They’re there in terms of being able to calculate things that were too hard to calculate yesterday. Some smart physicists will figure out how to do something slightly better, that happens. But we can’t monkey around. So it’s either the big dream, the big dream for the ultimate theory, or hope to seek experimental conflicts and build new structures. But we are, everybody would agree that we have right now the standard theory, and most physicists feel that we are stuck with it for the time being. We’re really at a plateau, and in a sense it really is a time for people like you, philosophers, to contemplate not where we’re going, because we don’t really know and you hear all kinds of strange views, but where we are. And maybe the time has come for you to tell us where we are. ‘Cause it hasn’t changed in the last 15 years, you can sit back and, you know, think about where we are.”