I recently attended a workshop on “Open Questions in Particle Physics and Cosmology” in Göttingen, and there, among other things, I learned about a classification of ideas/models beyond the Standard Model.
This categorization helps me a lot as a young researcher to understand what is currently going on in modern particle physics. It not only helps me understand the work of others better but also lets me articulate what kind of research I’m currently doing and want to do in the future.
Broadly the categorization goes as follows:
1.) Curiosity-Driven Models (a.k.a. “Why not?”)
In these kinds of models, anything goes that is allowed by basic principles and by the data. Curiosity-driven research is characterized by no restrictions on the types of particles and interactions considered. In general, there is no further motivation for adding a given particle beyond the fact that it is not yet excluded by the data.
For this reason, many such research projects introduce a large number of these “curiosity-driven” particles and interactions and then perform parameter scans to determine the current experimental bounds on the various models.
For example, a prototypical curiosity-driven research project determines the current bounds on the mass and interaction strength of spin 0, spin 1/2, spin 1, … particles that would show up through a specific signature at the LHC.
Usually, such models are called simplified models.
The motivation behind such efforts is to scan the possible “model landscape” as systematically as possible.
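To make the idea of such a parameter scan a bit more concrete, here is a minimal toy sketch in Python. Both the signal-rate formula and the experimental upper limit are purely illustrative assumptions of mine and are not tied to any real LHC analysis; a real scan would use event generators and published exclusion limits.

```python
import numpy as np

# Minimal toy sketch of a simplified-model parameter scan:
# a hypothetical new particle with mass m (in GeV) and coupling g.
# Both the signal-rate formula and the upper limit below are purely
# illustrative placeholders, not real experimental numbers.

masses = np.linspace(100.0, 2000.0, 20)   # hypothetical mass grid [GeV]
couplings = np.logspace(-3, 0, 40)        # hypothetical coupling grid


def toy_signal_rate(mass, coupling):
    """Illustrative signal rate: grows with the coupling, falls with the mass."""
    return coupling**2 * (1000.0 / mass)**4  # arbitrary toy scaling


TOY_UPPER_LIMIT = 1e-3  # assumed experimental upper limit on the rate

# For each mass, find the smallest coupling already excluded by the toy limit.
for m in masses:
    rates = toy_signal_rate(m, couplings)
    excluded = couplings[rates > TOY_UPPER_LIMIT]
    if excluded.size > 0:
        print(f"m = {m:7.1f} GeV: couplings above {excluded.min():.3g} are excluded")
    else:
        print(f"m = {m:7.1f} GeV: no coupling in the scanned range is excluded")
```

The output of such a scan is typically presented as an exclusion region in the mass–coupling plane, one plot per simplified model.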
2.) Data-Driven Models
Models in this second category are invented as a response to an experimental “anomaly”. An experimental “anomaly” arises when the outcome of an experiment does not match the theoretical prediction. Usually, the statistical significance lies between 2 and 4 sigma and is thus below the “magical” 5 sigma at which people start talking about a discovery. There can be many reasons for such an anomaly: an experimental error, an error in the interpretation of the experimental data, an error in the standard theory prediction, or simply a statistical fluctuation.
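To put these significance values in perspective, the following short Python snippet (a standard textbook conversion, not tied to any particular anomaly) translates a one-sided Gaussian significance in units of sigma into the corresponding p-value, i.e. the probability that a pure statistical fluctuation produces an excess at least this large.

```python
from scipy.stats import norm

# Convert a one-sided Gaussian significance (in units of sigma) into the
# probability that a pure statistical fluctuation produces an excess at
# least this large. Whether a one- or two-sided convention is appropriate
# depends on the analysis.
for sigma in (2, 3, 4, 5):
    p_value = norm.sf(sigma)  # survival function = 1 - CDF
    print(f"{sigma} sigma -> p-value ~ {p_value:.2e} (about 1 in {1 / p_value:,.0f})")
```

At 5 sigma the p-value drops to roughly 3 × 10⁻⁷ (about 1 in 3.5 million), which is why this threshold is conventionally reserved for claiming a discovery.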
Some examples of such anomalies are:
- The current flavor anomalies in the $R_K$ and $R_{K^\star}$ observables.
- The long-standing discrepancy in the anomalous magnetic moment of the muon, usually just called “g-2”.
- The infamous 750 GeV diphoton excess.
- The Fermi-LAT Galactic Center (GC) excess.
- The reactor antineutrino anomalies.
- The 3.5 keV X-ray line.
- The positron fraction excess.
- The DAMA/LIBRA annual modulation effect.
- The “discovery” of gravitational waves by the BICEP2 experiment.
It is not uncommon that such data-driven models try to explain several of these anomalies at once. For an example of a data-driven model, have a look at this paper, and for further examples, see slide 7 here.
(Take note that most of the “anomalies” from the list above are no longer “hot”. For example, the 750 GeV diphoton excess is now regarded as a statistical fluctuation, the positron fraction can be explained by pulsars, the significance of the 3.5 keV X-ray line is decreasing, the reactor antineutrino anomalies can be explained by a “miscalculation”, the DAMA/LIBRA “discovery” has been refuted by several other experiments, and the “discovery” of gravitational waves by the BICEP2 experiment is “now officially dead”…)
The motivation behind such research efforts is, of course, to be the first to propose the correct explanation in case the “anomaly” turns out to be a real discovery.
3.) Theory-Driven Models
Research projects in this third category try to solve some big theoretical problem or puzzle and, as a byproduct, predict something that can be measured.
Examples of such puzzles are:
- The gauge hierarchy puzzle.
- The strong CP puzzle.
- The quantization of electric charge.
Again, as for the data-driven models, many models in this category try to solve more than one of these puzzles. Examples are supersymmetric models, which solve the gauge hierarchy puzzle; axion models, which solve the strong CP puzzle; and GUT models, which explain the quantization of electric charge.
It is important to note that this classification is only valid for research in the category “hep-ph”, i.e. high-energy physics phenomenology.
In addition, there is, of course, a lot going on in “hep-th”, i.e. high-energy physics theory. Research projects in this category are not started to make a prediction for an experiment, but rather to understand some fundamental aspect of, say, Yang-Mills theory better, or to invent new methods for calculating amplitudes.
Quite prophetic and relevant to the classification above is the following quote by Nobel Prize winner Sheldon Lee Glashow, from a discussion at the “Conceptual Foundations of Quantum Field Theory” conference in 1996:
“The age of model building is done, except if you want to go beyond this theory. Now the big leap is string theory, and they want to do the whole thing; that’s very ambitious. But others would like to make smaller steps. And the smaller steps would be presumably, many people feel, strongly guided by experiment. That is to say the hope is that experiment will indicate some new phenomenon that does not agree with the theory as it is presently constituted, and then we just add bells and whistles insofar as it’s possible. But as I said, it ain’t easy. Any new architecture has, by its very nature, to be quite elaborate and quite enormous. Low energy supersymmetry is one of the things that people talk about. It’s a hell of a lot of new particles and new forces which may be just around the corner. And if they see some of these particles, you’ll see hundreds, literally hundreds of people sprouting up, who will have claimed to predict exactly what was seen. In fact they’ve already sprouted up and claimed to have predicted various things that were seen and subsequently retracted. But you see you can’t play this small modification game anymore. It’s not the way it was when Sam and I were growing up and there were lots of little tricks you could do here and there that could make our knowledge better. They’re not there any more, in terms of changing the theory. They’re there in terms of being able to calculate things that were too hard to calculate yesterday. Some smart physicists will figure out how to do something slightly better, that happens. But we can’t monkey around. So it’s either the big dream, the big dream for the ultimate theory, or hope to seek experimental conflicts and build new structures. But we are, everybody would agree that we have right now the standard theory, and most physicists feel that we are stuck with it for the time being. We’re really at a plateau, and in a sense it really is a time for people like you, philosophers, to contemplate not where we’re going, because we don’t really know and you hear all kinds of strange views, but where we are. And maybe the time has come for you to tell us where we are. ‘Cause it hasn’t changed in the last 15 years, you can sit back and, you know, think about where we are.”