I always thought that such a suggestion was ridiculous. How could a robot ever do what physicists do? While many jobs seem to be in danger because of recent advances in automation – up to 47% according to recent studies – the last things that will be automated, if ever, are jobs like that of a physicist, which require creativity, right?
For example, this site, which was featured in many major publications, states that there is only a 10% chance that robots will take the jobs of physicists:
As physicist I have only 10% Probability of Automation in Future! Robots will have to work harder to get my job 🤖 https://t.co/NSLtxuzXDa
— Freya Blekman (@freyablekman) June 18, 2017
Recently, author James Gleick commented on how shocked professional Go players are by the tactics of Google’s AlphaGo software:
If humans are this blind to the truth of Go, how well are we doing with theoretical physics? https://t.co/UeeGqyxRD8
— James Gleick (@JamesGleick) October 18, 2017
Sean Carroll answered and summarized how most physicists think about this:
We’re doing great with theoretical physics! It’s the worst possible analogy to how AI does better than humans at complex games like Go. https://t.co/Cki1q0hRgS
— Sean Carroll (@seanmcarroll) October 18, 2017
Fundamental physics is analogous to "the rules of Go." Which are simple and easily mastered. Go *strategy* is more like bio or neuroscience.
— Sean Carroll (@seanmcarroll) October 18, 2017
A Counterexample
Until very recently I would have agreed. However, a few weeks ago I discovered this little paper, and it got me thinking. The idea of the paper is really simple: feed measurement data into an algorithm, give it a fixed set of objects to play around with, and then let the algorithm find the laws that describe the data best. The authors argue that their algorithm is able to rediscover Maxwell’s equations, which are still the best equations we have to describe how light behaves. Their algorithm was able to find them “in about a second”. Moreover, they describe their program as a “computational embodiment of the scientific method: observation, consideration of candidate theories, and validation.” That’s pretty cool. Once more I was reminded that “everything seems impossible until it’s done.”
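To get a feeling for the flavor of such a search, here is a toy sketch of the general idea (my own drastically simplified version, not the authors’ actual code): enumerate candidate laws built from a fixed set of building blocks and keep the one that fits the data best.

```python
# Toy "law discovery": brute-force search over a fixed set of candidate
# terms, keeping the one that best fits the measured data. The building
# blocks and the data itself are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 5, 50)
y = 3.0 * x**2  # "measurement data", secretly generated by y = 3 x^2

# The fixed set of objects the algorithm is allowed to play around with.
candidates = {
    "x": x,
    "x^2": x**2,
    "x^3": x**3,
    "1/x": 1 / x,
}

best = None
for name, feature in candidates.items():
    # Least-squares fit of y = c * feature, scored by the residual.
    c = np.dot(feature, y) / np.dot(feature, feature)
    residual = np.sum((y - c * feature) ** 2)
    if best is None or residual < best[2]:
        best = (name, c, residual)

print(f"Best candidate law: y = {best[1]:.2f} * {best[0]}")
```

The real algorithm searches a vastly bigger space of vector and tensor expressions, but the loop – generate candidates, fit, validate – is the same.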
Couldn’t we do the same thing to search for new laws by feeding such an algorithm the newest collider data? Are the jobs of physicists really that safe after all?
What do physicists do?
First of all, the category “physicist” is much too broad for discussing the danger of automation. For example, there are experimental physicists and theoretical physicists. And even inside these subcategories, there are further important sub-sub-categories.
On the experimental side, there are people who actually build experiments. Those are the guys who know how to use a screwdriver. In addition, there are people who analyze the data gathered by experiments.
On the theoretical side, there are theorists and phenomenologists. The distinction here is not so clear. For example, one can argue that phenomenology is a subfield of theoretical physics, and many phenomenologists call themselves theoretical physicists. Broadly, the job of a theoretical physicist is to explain and predict how nature behaves by writing down equations. However, there are many different approaches to writing down new equations. I find the classification outlined here helpful. There are:
- Curiosity-Driven Research, where “anything goes, that is allowed by basic principles and data. […] In general, there is no further motivation for the addition of some particle, besides that it is not yet excluded by the data.”
- Data-Driven Research, where new equations are written down as a response to experimental anomalies.
- Theory-Driven Research, which is mostly about “aesthetics” and “intuition”. The prototypical example is, of course, Einstein’s invention of General Relativity.
The job of someone working in one such sub-sub-category is completely different from the jobs in another. Therefore, there is certainly no universal answer to the question of how likely it is that “robots” will replace physicists. Each of the sub-sub-categories mentioned above must be analyzed on its own.
What could robots do?
Let’s start with the most obvious one. Data analysis is an ideal job for robots. Unsurprisingly, several groups are already working with, or at least experimenting with, neural networks to analyze LHC data. In the traditional approach to collider data analysis, people have to invent criteria for distinguishing different particles in the detector: if the angle between two detected photons is larger than X° and their combined energy is smaller than Y GeV, then with probability Z% the signal comes from some given particle. In contrast, if you use a neural network, you just have to train it on Monte Carlo data, where you know which particle is which. Then you can let the trained network analyze the real collider data. In addition, after the training you can investigate the network to see what it has learned. This way, neural networks can be used to find new useful variables that help to distinguish different particles in a detector. I should mention that this approach is not universally favored, because some feel that a neural network is too much of a black box to be trusted.
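As a rough illustration of this workflow (a minimal sketch, assuming nothing about the real LHC software; the features and all numbers are invented), one could train a small network on labelled Monte Carlo events and then apply it to new data:

```python
# A minimal sketch of the Monte-Carlo training workflow described above.
# Everything here is hypothetical: the two features (photon opening angle
# and combined energy) and all numbers are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000

# Simulated events: [opening angle (deg), combined energy (GeV)].
# In the simulation we know which particle produced each event.
signal = np.column_stack([rng.normal(15, 3, n), rng.normal(125, 5, n)])
background = np.column_stack([rng.normal(40, 10, n), rng.normal(80, 20, n)])
X_mc = np.vstack([signal, background])
y_mc = np.array([1] * n + [0] * n)  # 1 = signal particle, 0 = background

# Train the network on the labelled simulation...
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(X_mc, y_mc)

# ...then let the trained network classify new "collider" events.
new_events = np.array([[14.0, 126.0], [45.0, 70.0]])
print(clf.predict_proba(new_events))  # class probabilities per event
```

The point of the Monte Carlo step is exactly the one described above: in the simulation we know which particle produced each event, so the network can learn the distinguishing criteria instead of us having to invent them by hand.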
What about theoretical physicists?
In the tweet quoted above, Sean Carroll argues that “Fundamental physics is analogous to ‘the rules of Go.’ Which are simple and easily mastered. Go *strategy* is more like bio or neuroscience.” Well, yes and no. Finding new fundamental equations is certainly similar to inventing new rules for a game, and this is broadly the job of a theoretical physicist. However, the three approaches to “doing theoretical physics” mentioned above are quite different.
In the first and second approach, the “rules of the game” are pretty much fixed. You write down a Lagrangian and afterward compare its predictions with measured data. The new Lagrangian involves new fields, new coupling constants, etc., but must be written down according to fixed rules. Usually, only terms that respect the rules of special relativity are allowed. Moreover, we know that the simplest possible terms are the most important ones, so you focus on them first. (More complicated terms are “non-renormalizable” and therefore suppressed by some large scale.) Given some new field or fields, writing down the Lagrangian and deriving the corresponding equations of motion is straightforward. Moreover, while deriving the experimental consequences of a given Lagrangian can be quite complicated, the general rules of how to do it are fixed. The framework that allows us to derive predictions for colliders or other experiments starting from a Lagrangian is known as Quantum Field Theory.
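To make these “fixed rules” concrete, here is a standard textbook example (my illustration, not something from the paper): for a single real scalar field $\phi$, the allowed Lagrangian reads

$$\mathcal{L} = \frac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi - \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4!}\, \phi^4 + \frac{c_6}{\Lambda^2}\, \phi^6 + \dots$$

Every term is a Lorentz scalar, as special relativity demands, and the first non-renormalizable term $\phi^6$ is suppressed by two powers of the large scale $\Lambda$, which is why one focuses on the simple terms first.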
This is exactly the kind of problem that was solved, although in a much simpler setting, by Mark A. Stalzer and Chao Ju in the paper mentioned above. There are already powerful tools, like SPheno or micrOMEGAs, which are capable of deriving many important consequences of a given Lagrangian almost completely automagically. So with further progress in this direction, it does not seem completely impossible that an algorithm will one day be able to find the best possible Lagrangian to describe given experimental data.
As an aside: a funny name for this goal of theoretical physics, the search for the “ultimate Lagrangian of the world”, was coined by Arthur Wightman, who called it “the hunt for the Green Lion”. (Source: Tian Yu Cao, Conceptual Foundations of Quantum Field Theory)
What then remains on the theoretical side is “Theory-Driven Research”. I have no idea how a “robot” could do this kind of research, which is probably what Sean Carroll had in mind in his tweets. For example, the algorithm by Mark A. Stalzer and Chao Ju only searches for laws built from predefined objects (vectors, tensors) using predefined rules of how to combine them (scalar products, cross products, etc.). It is hard to imagine how paradigm-shifting discoveries could be made by an algorithm like this. General Relativity is a good example. The correct theory of gravity needed completely new mathematics that wasn’t previously used by physicists. No physicist around 1900 would have programmed crazy rules such as those of non-Euclidean geometry into the set of allowed rules. An algorithm that was designed to guess Lagrangians will only ever spit out a Lagrangian. If the fundamental theory of nature cannot be written down in Lagrangian form, such an algorithm would be doomed to fail.
To summarize, there will still be physicists in 100 years. However, I don’t think that all jobs currently done by theoretical and experimental physicists will survive. This is probably a good thing. Most physicists would love to have more time to think about fundamental problems, like Einstein did.