Will physicists be replaced by robots?

I always thought such a suggestion was ridiculous. How could a robot ever do what physicists do? While many jobs seem to be in danger because of recent advances in automation – up to 47% according to recent studies – the last thing to be automated, if ever, will be jobs like that of a physicist, which require creativity, right?

For example, this site, which was featured in many major publications, states that there is only a 10% chance that robots will take the job of a physicist:

Recently author James Gleick commented on how shocked professional “Go” players are by the tactics of Google’s software “AlphaGo”:

Sean Carroll answered and summarized how most physicists think about this:

A Counterexample

Until very recently I would have agreed. However, a few weeks ago I discovered this little paper and it got me thinking. The idea of the paper is really simple: feed measurement data into an algorithm, give it a fixed set of objects to play around with, and then let the algorithm find the laws that describe the data best. The authors argue that their algorithm is able to rediscover Maxwell’s equations, which are still the best equations we have to describe how light behaves. Their algorithm was able to find these equations “in about a second”. Moreover, they describe their program as a “computational embodiment of the scientific method: observation, consideration of candidate theories, and validation.” That’s pretty cool. Once more I was reminded that “everything seems impossible until it’s done.”
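To make the general recipe more concrete, here is a toy sketch in Python. It is not the algorithm from the Stalzer and Ju paper, which searches over vector and tensor expressions; the data, the candidate terms, and all variable names below are made up purely for illustration.

```python
# Toy sketch of "law discovery from data": fix a small set of candidate terms,
# then keep the combination that describes the measurements best.
import itertools
import numpy as np

# "Measurement data": samples of an unknown law y = 3*x1 - 2*x2 plus a little noise.
rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=(2, 200))
y = 3.0 * x1 - 2.0 * x2 + 0.01 * rng.normal(size=200)

# Fixed set of building blocks the algorithm is allowed to combine.
candidates = {"x1": x1, "x2": x2, "x1*x2": x1 * x2, "x1**2": x1 ** 2}

best = None
for n_terms in (1, 2):
    for combo in itertools.combinations(candidates, n_terms):
        # Least-squares fit of y as a linear combination of the chosen terms.
        basis = np.column_stack([candidates[name] for name in combo])
        coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
        error = float(np.sum((basis @ coeffs - y) ** 2))
        if best is None or error < best[0]:
            best = (error, combo, coeffs)

error, combo, coeffs = best
law = " + ".join(f"{c:.2f}*{name}" for c, name in zip(coeffs, combo))
print(f"best candidate law: y = {law}   (squared error {error:.3f})")
```

The point is only that once the set of allowed building blocks is fixed, “finding the law” reduces to a search plus a fit, which is exactly the kind of task computers are good at.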

Couldn’t we do the same to search for new laws by feeding such an algorithm the newest collider data? Maybe the jobs of physicists aren’t that safe after all?

What do physicists do?

First of all, the category “physicist” is much too broad to discuss the danger of automation. For example, there are experimental physicists and theoretical physicists. And even inside these subcategories, there are further important sub-sub-categories.

On the experimental side, there are people who actually build experiments. Those are the guys who know how to use a screwdriver. In addition, there are people who analyse the data gathered by experiments.

On the theoretical side, there are theorists and phenomenologists. The distinction here is not so clear. For example, one can argue that phenomenology is a subfield of theoretical physics, and many phenomenologists call themselves theoretical physicists. Broadly, the job of a theoretical physicist is to explain and predict how nature behaves by writing down equations. However, there are many different approaches to writing down new equations. I find the classification outlined here helpful. There is:

  1. Curiosity Driven Research; where “anything goes, that is allowed by basic principles and data. […] In general, there is no further motivation for the addition of some particle, besides that it is not yet excluded by the data.”
  2. Data Driven Research; where new equations are written down as a response to experimental anomalies.
  3. Theory Driven Research; which is mostly about “aesthetics” and “intuition”. The prototypical example is, of course, Einstein’s invention of General Relativity.

The job of someone working in one such sub-sub-category is completely different from the jobs in another sub-sub-category. Therefore, there is certainly no universal answer to the question of how likely it is for “robots” to replace physicists. Each of the sub-sub-categories mentioned above must be analyzed on its own.

What could robots do?

Let’s start with the most obvious one. Data analysis is an ideal job for robots. Unsurprisingly, several groups are already experimenting with neural networks to analyze LHC data. In the traditional approach to collider data analysis, people have to invent criteria for how to distinguish different particles in the detector: if the angle between two detected photons is larger than X° and their combined energy is smaller than Y GeV, then with probability Z% the signal corresponds to some given particle. In contrast, if you use a neural network, you just have to train it on Monte Carlo data, where you know which particle is where. Then you can let the trained network analyze the collider data. In addition, after the training you can investigate the network to see what it has learned. This way, neural networks can be used to find new useful variables that help to distinguish different particles in a detector. I should mention that this approach is not universally favored, because some feel that a neural network is too much of a black box to be trusted.
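As a rough illustration of the difference between the two approaches, here is a minimal sketch using scikit-learn; the variables (a photon opening angle and a summed energy), the thresholds, and the simulated “Monte Carlo” labels are all made up, and real LHC analyses look nothing like this in detail.

```python
# Cut-based selection vs. a small neural-network classifier on toy "events".
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 5000
angle = rng.uniform(0.0, 60.0, n)      # opening angle between the two photons [degrees]
energy = rng.uniform(0.0, 200.0, n)    # summed photon energy [GeV]
# Pretend the Monte Carlo simulation tells us which events really contain the particle.
is_signal = ((angle > 20.0) & (energy < 120.0)).astype(int)

# Traditional cut-based selection: hand-picked thresholds.
cut_prediction = ((angle > 25.0) & (energy < 100.0)).astype(int)

# Neural-network approach: train a small classifier on the labeled Monte Carlo events.
X = np.column_stack([angle, energy])
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X, is_signal)
nn_prediction = net.predict(X)   # evaluated on the training set only to keep the sketch short

print("cut-based accuracy:", (cut_prediction == is_signal).mean())
print("network accuracy:  ", (nn_prediction == is_signal).mean())
```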

What about theoretical physicists?

In the tweet quoted above, Sean Carroll argues that “Fundamental physics is analogous to “the rules of Go.” Which are simple and easily mastered. Go *strategy* is more like bio or neuroscience.” Well yes, and no. Finding new fundamental equations is certainly similar to inventing new rules for a game. This is broadly the job of a theoretical physicist. However, the three approaches to “doing theoretical physics”, mentioned above, are quite different.

In the first and second approach the “rules of the game” are pretty much fixed. You write down a Lagrangian and afterwards compare its predictions with measured data. The new Lagrangian involves new fields, new coupling constants, etc., but must be written down according to fixed rules. Usually, only terms that respect the rules of special relativity are allowed. Moreover, we know that the simplest possible terms are the most important ones, so you focus on them first. (More complicated terms are “non-renormalizable” and therefore suppressed by some large scale.) Given some new field or fields, writing down the Lagrangian and deriving the corresponding equations of motion is straightforward. Moreover, while deriving the experimental consequences of a given Lagrangian can be quite complicated, the general rules of how to do it are fixed. The framework that allows us to derive predictions for colliders or other experiments starting from a Lagrangian is known as Quantum Field Theory.
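To give a flavor of how mechanical these rules are, here is a toy sketch for a single real scalar field, tracking only mass dimensions: keep the products of building blocks with total dimension at most four (the renormalizable terms) and set aside the rest. Lorentz indices, symmetry factors, and total derivatives are ignored, and the names are invented purely for illustration.

```python
# Enumerate candidate Lagrangian terms for one real scalar field by mass dimension.
from itertools import combinations_with_replacement

# In 4 spacetime dimensions the field phi has mass dimension 1,
# its derivative dphi has mass dimension 2.
building_blocks = {"phi": 1, "dphi": 2}

renormalizable, suppressed = [], []
for n_factors in range(2, 7):
    for combo in combinations_with_replacement(building_blocks, n_factors):
        dimension = sum(building_blocks[b] for b in combo)
        term = "*".join(combo)
        if dimension <= 4:
            renormalizable.append((term, dimension))   # keep: renormalizable
        else:
            suppressed.append((term, dimension))       # non-renormalizable, scale-suppressed

print("renormalizable candidate terms:")
for term, d in renormalizable:
    print(f"  {term}  (mass dimension {d})")
print(f"plus {len(suppressed)} higher-dimensional terms suppressed by some large scale")
```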

This rule-based setup is exactly the kind of problem that was solved, although in a much simpler setting, by Mark A. Stalzer and Chao Ju in the paper mentioned above. There are already powerful tools, for example SPheno or micrOMEGAs, which are capable of deriving many important consequences of a given Lagrangian almost completely automagically. So with further progress in this direction it seems not completely impossible that an algorithm will be able to find the best possible Lagrangian to describe given experimental data.

As an aside: a funny name for this goal of theoretical physics, the search for the “ultimate Lagrangian of the world”, was coined by Arthur Wightman, who called it “the hunt for the Green Lion”. (Source: Tian Yu Cao, Conceptual Foundations of Quantum Field Theory)

What then remains on the theoretical side is “Theory Driven Research”. I have no idea how a “robot” could do this kind of research, which is probably what Sean Carroll had in mind in his tweets. For example, the algorithm by Mark A. Stalzer and Chao Ju only searches for laws built from predefined objects (vectors, tensors) combined according to predefined rules: scalar products, cross products, and so on. It is hard to imagine how paradigm-shifting discoveries could be made by an algorithm like this. General Relativity is a good example. The correct theory of gravity needed completely new mathematics that wasn’t previously used by physicists. No physicist around 1900 would have programmed crazy rules such as those of non-Euclidean geometry into the set of allowed rules. An algorithm designed to guess Lagrangians will only ever spit out a Lagrangian. If the fundamental theory of nature cannot be written down in Lagrangian form, such an algorithm would be doomed to fail.

To summarize, there will be physicists in 100 years. However, I don’t think that all jobs currently done by theoretical and experimental physicists will survive. This is probably a good thing. Most physicists would love to have more time to think about fundamental problems like Einstein did.

Unfortunately, repetition is a convincing argument.

I recently wrote about the question “When do you understand?”. In that post I outlined a pattern I have observed in how I end up with a deep understanding of a given topic. However, there is also a second path that I totally missed in that post.

The path to understanding that I outlined requires a massive effort to get to the bottom of things. I argued that you only understand something when you are able to explain it in simple terms.

The second path that I missed in my post doesn’t really lead to understanding. Yet, the end result is quite similar. Often it’s not easy to tell whether someone arrived at their understanding via path 1 or path 2. Even worse, often you can’t tell which path led to your own understanding.

So what is this second path?

It consists of reading something so often that you start to accept it as a fact. The second path makes use of repetition as a strong argument.

Once you know this, it is shocking to observe how easily one gets convinced by mere repetition.

However, this isn’t as bad as it may sound. If dozens of experts whom you respect repeat something, it is a relatively safe bet to believe them. This isn’t a bad strategy, at least not always. Especially when you are starting out, you need orientation. If you want to move forward quickly, you can’t get to the bottom of every argument.
Still, there are instances where this second path is especially harmful. Fundamental physics is definitely one of them. If we want to expand our understanding of nature at the most fundamental level, we need to constantly ask ourselves:

Do we really understand this? Or have we simply accepted it, because it got repeated often enough?

The thing is that physics is not based on axioms. Even if you could manage to condense our current state of knowledge into a set of axioms, it would be a safe bet that at least one of them will be dropped in the next century.

Here’s an example.

Hawking and the Expanding Universe

In 1983, Stephen Hawking gave a lecture about cosmology in which he explained:

The De Sitter example was useful because it showed how one could solve the Wheeler-DeWitt equation and apply the boundary conditions in a simple case. […] However, if there are two facts about our universe which we are reasonably certain, one is that it is not exponentially expanding and the other is that it contains matter.

Only 15 years later, physicists were no longer “reasonably certain” that the universe isn’t exponentially expanding. On the contrary, we are now reasonably certain of the exact opposite. By observing the most distant supernovae, two experimental groups established the accelerating expansion as an experimental fact. This was a big surprise for everyone and rightfully led to a Nobel Prize for its discoverers.

The moral of this example isn’t, of course, that Hawking is stupid. He only summarized what everyone at the time believed they knew. This example shows how quickly our most basic assumptions can change. Although most experts were certain that the expansion of the universe isn’t accelerating, they were all wrong.

Theorems in Physics and the Assumptions Behind Them

If you want further examples, just have a look at almost any theorem that is commonly cited in physics.

Usually, the short final message of the theorem is repeated over and over. However, you almost never hear about the assumptions that are absolutely crucial for the proof.
This is especially harmful, because, as the example above demonstrated, our understanding of nature constantly changes.

Physics is never as definitive as mathematics. Even theorems aren’t bulletproof in physics, because their assumptions can be invalidated by new experimental findings. What we currently think is true about physics will be completely obsolete in 100 years. That’s what history teaches us.

An example, closely related to the accelerating universe example from above, is the Coleman-Mandula theorem. There is probably no theorem that is cited more often. Most talks related to supersymmetry mention it at some point. It is no exaggeration to say that I have heard at least 100 talks that mentioned the theorem’s final message: “space-time and internal symmetries cannot be combined in any but a trivial way”.

Yet, so far I’ve found no one who was able to discuss the assumptions of the theorem. The theorem got repeated so often in the last decades that it is almost universally accepted to be true. And yes, the proof is, of course, correct.
However, what if one of the assumptions that go into the proof isn’t valid?

Let’s have a look.

An important condition, already mentioned in the abstract of the original paper, is Poincaré symmetry. The original paper was published in 1967, and back then it was reasonably certain that we are living in a universe with Poincaré symmetry.
However, as already mentioned above, we have known since 1998 that this isn’t correct. The expansion of the universe is accelerating, which means the cosmological constant is nonzero. The correct symmetry group that preserves the constant speed of light and the value of a nonzero cosmological constant is the De Sitter group, not the Poincaré group. In the limit of a vanishing cosmological constant, the De Sitter group contracts to the Poincaré group. The cosmological constant is indeed tiny, and therefore we aren’t too wrong if we use the Poincaré group instead of the De Sitter group.
Yet, for a mathematical proof like the one proposed by Coleman and Mandula, whether we use De Sitter symmetry or Poincaré symmetry makes all the difference in the world.

The Poincaré group is a quite ugly group that consists of Lorentz transformations and translations: $\mathbb{R}^{3,1} \rtimes SL(2,\mathbb{C})$. The Coleman-Mandula proof makes crucial use of the inhomogeneous translation part of this group, $\mathbb{R}^{3,1}$. In contrast, the De Sitter group is a simple group; there is no inhomogeneous part. As far as I know, there is no Coleman-Mandula theorem if we replace the assumption “Poincaré symmetry” with “De Sitter symmetry”.
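To make the structural difference concrete, here is a compressed sketch of the İnönü-Wigner contraction that relates the two groups; signs and normalizations depend on conventions. With $J_{AB}$ the De Sitter generators and $R$ the De Sitter radius, define the would-be translations as $P_\mu \equiv J_{\mu 4}/R$. Then, schematically,

$$ [P_\mu, P_\nu] \;\sim\; \frac{1}{R^2}\, J_{\mu\nu} \;\xrightarrow{\;R \to \infty\;}\; 0 , \qquad \Lambda \sim \frac{1}{R^2} . $$

Only in the limit $\Lambda \to 0$ do the “translations” commute and form the invariant abelian subgroup $\mathbb{R}^{3,1}$ that the Coleman-Mandula argument leans on; for any nonzero cosmological constant this structure is simply absent.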

This is an example where repetition is the strongest argument. The final message of the Coleman-Mandula theorem is universally accepted as a fact. Yet, almost no one has had a look at the original paper and its assumptions. The strongest argument for the Coleman-Mandula theorem seems to be that it was repeated so often in the last decades.

Maybe you think: what’s the big deal?

Well, if the Coleman-Mandula no-go theorem is no longer valid, because we live in a universe with De Sitter symmetry, a whole new world would open up in theoretical physics. We could start thinking about how spacetime symmetries and internal symmetries fit together.

The QCD Vacuum

Here is another example of something people take as given only because it has been repeated often enough: the structure of the QCD vacuum. I’ve written about this at great length here.

I talked to several PhD students who work on problems related to the strong CP problem and the vacuum in quantum field theory. Few knew the assumptions that are necessary to arrive at the standard interpretation of the QCD vacuum. No one knew where the assumptions actually come from and whether they are really justified. The thing is that when you dig deep enough, you’ll notice that the restriction to gauge transformations that satisfy $U \to 1$ at infinity is not based on something bulletproof, but is simply an assumption. This is a crucial difference, and if you want to think about the QCD vacuum and the strong CP problem you should know this. However, most people take this restriction for granted, because it has been repeated often enough.
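For context, here is a compressed sketch of the standard construction that this restriction makes possible; conventions and normalizations vary between textbooks. Demanding $U \to 1$ at spatial infinity effectively compactifies space to $S^3$, so gauge transformations become maps $S^3 \to SU(3)$ that fall into classes labeled by an integer winding number, and the corresponding vacua $|n\rangle$ are superposed into the familiar theta vacuum:

$$ n[U] \;=\; \frac{1}{24\pi^2} \int d^3x \;\epsilon^{ijk}\, \mathrm{Tr}\!\left[(U^{-1}\partial_i U)(U^{-1}\partial_j U)(U^{-1}\partial_k U)\right] \;\in\; \mathbb{Z}, \qquad |\theta\rangle \;=\; \sum_{n} e^{-i n \theta}\, |n\rangle . $$

Drop the $U \to 1$ restriction and this classification, and with it the usual starting point for the strong CP problem, has to be re-justified from scratch, which is precisely the point made above.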

Progress in Theoretical Physics without Experimental Guidance

The longer I study physics, the more I become convinced that people should be more careful about what they think is definitely correct. Actually, there are very few things we know for certain, and it never hurts to ask: what if this assumption everyone uses is actually wrong?

For a long time, physics was strongly guided by experimental findings. From what I’ve read, these must have been amazingly exciting times. There was tremendous progress after each experimental finding. However, in the last decades there have been no experimental results that have helped us understand nature better at a fundamental level. (I’ve written about the status of particle physics here.)

So currently a lot of people are asking: How can there be progress without experimental results that excite us?

I think a good idea would be to take a step back and talk openly, clearly and precisely about what we know and understand and what we don’t.

Already in 1996, Nobel Prize winner Sheldon Lee Glashow noted:

[E]verybody would agree that we have right now the standard theory, and most physicists feel that we are stuck with it for the time being. We’re really at a plateau, and in a sense it really is a time for people like you, philosophers, to contemplate not where we’re going, because we don’t really know and you hear all kinds of strange views, but where we are. And maybe the time has come for you to tell us where we are. ‘Cause it hasn’t changed in the last 15 years, you can sit back and, you know, think about where we are.

The first step in this direction would be for more people to be aware that while repetition is a strong argument, it is not a good one when we try to make progress. The examples above hopefully made clear that just because many people state that something is correct does not mean that it actually is. The message of a theorem can be invalid, although the proof is correct, simply because the assumptions are no longer up to date.

This is what science is all about. We should always question what we take for given. As for many things, Feynman said it best:

Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation. . . Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.

The Academic System in 2050

One day robots will be able to do all the tasks that are necessary for our modern-day lives.

Surprisingly, many people I have talked to can’t imagine that robots will take all the usual jobs. The main argument goes like this: “Well, isn’t this what people already thought a hundred years ago? Have a look, everybody is still working 8 hours a day!”

Of course, this is correct. For example, John Maynard Keynes predicted in 1930 that by the end of the century we would have a 15-hour work week. This is not what happened; everyone is still working 40 hours. This is surprising, because the problem isn’t that technology evolved too slowly. The technology that fits into our pockets today is beyond anything Keynes could have imagined.

The reasons for our 40-hour work weeks aren’t particularly important for the point I’m trying to make here. Still, for an interesting perspective, have a look at “Bullshit Jobs” by David Graeber. He argues:

Technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.

The reasons do not matter, because there will be a day in the near (!) future when robots will do all these “bullshit jobs”. Just because it hasn’t happened so far does not mean that it won’t happen in the future. At some point there will be no way around it.

Of course, robots won’t replace every employee in the next century. Yet, a large number of employees will be replaced, starting with bus drivers, taxi drivers, and so on.

What should sound like a utopian vision is usually regarded as a horror scenario. Mass unemployment! Poverty! What should all these people do?

The Solution and its Side-Effects

The best answer is “universal basic income”. (Negative income tax?)

This will allow all people who don’t enjoy working to spend all their time in virtual realities. Those who want luxuries beyond “basic needs”, whatever that means in 30 years, will have to work for them. There will still be jobs, but fewer, and not enough for everyone. But with a sufficient “universal basic income” this won’t be a problem; most people don’t enjoy their job anyway. In addition to these two categories of people, there will be a third, and this is what I want to talk about here.

A universal basic income will have a particularly nice side effect: it will revolutionize the academic system. With no worries about money, more people would do whatever is of interest to them. As a result, there would be a lot more scientists. This is a usually overlooked aspect of a “universal basic income”.

In such a future, many people would do what, for example, Julian Barbour and Garrett Lisi are currently doing: serious theoretical research without an academic position.

For some people, money without a job will mean that they stop doing anything that requires effort. They will start watching TV or playing video games all day.

Yet, for some it will mean the opposite.

In her essay “On The Leisure Track: Rethinking the Job Culture“, JoAnne Swanson summarizes exactly this opposite point of view:

Whatever else you can say about a shitty job that pays the bills, one thing’s for sure: as long as you have that job, or are busy looking for another one, you’ve got a built-in, airtight, socially acceptable excuse for any lack of progress toward your dreams in life. […] When I say I don’t want a job, I definitely don’t mean that I refuse to do anything that involves effort, or that I want to do nothing but watch TV and sleep. What I mean is that to the extent that it is possible, I want to find a way to use my gifts effectively without worrying about where my support will come from, and I want to help make it possible for others to use their own gifts in the same manner. The job culture is not designed to reward people for developing their gifts and working with love and joy; it’s designed, primarily, to concentrate wealth at the top.

In addition to the slackers, there will be a huge number of people who will see the chance to finally fulfill their dreams. For many this will mean that they can now educate themselves about all the things they were always interested in. A lot more people will have the time, leisure, and knowledge to work on fundamental problems in science.

Currently, there is fierce competition in science. People fight for grants and positions. As a result, no one has the time to spend, say, three years thinking about one problem. The risk of not finding a solution is too large. In the current academic system such a long, failed project would mean career suicide.

With a universal basic income people could think, read and write without any pressure. There would be no problem if they quit their studies after several years without any groundbreaking discovery. People would do research not to collect citations, but to understand and discover.

As Einstein put it:

Of all the communities available to us, there is not one I would want to devote myself to except for the society of the true seekers, which has very few living members at any one time.

With a universal basic income this “society of the true seekers” would get a lot of new members.

New Einsteins?

Of course, not everyone can and will be a new Einstein. Not everyone can make groundbreaking discoveries. Still, there would be dramatic benefits to science if people could work without the pressure to collect citations.

One example: a lot more people would start to write down what they understand, because they would have the time. Currently almost no one is doing this. The time you spend writing about what you learned last week does not help your career. You don’t collect citations with an introductory text. No matter how good your explanation is, everyone will still cite the original discoverer. Writing down what you’ve learned is not something that the academic system values. The same is true for pedagogical explanations. But without the need to optimize their h-index, people would have enough time for such “fun projects”.

But … aren’t there already enough textbooks, review articles, etc.?

No! The current problem is: who is actually “allowed” to write textbooks nowadays? Who actually has the time to write textbooks? Who writes review articles? Only a tiny number of people: people with permanent academic positions. Even worse, in practice only a tiny fraction of those who could write introductory texts actually do it. This is not surprising. Writing a textbook is, from a monetary standpoint, not a smart move. (I’m speaking from experience.)

However, without the pressure to make money, more people could write. This means we would get lots of new, unique perspectives. Everyone would have the chance to find an author who actually speaks a language they understand.

There would be hundreds of great introductory texts on any topic. If you’re having problems understanding something, no problem: just have a look at another explanation. We could all learn every topic much more easily.

Also, people would have the time to study all the things they were always interested in. A lot more people would learn the fundamentals of quantum mechanics, quantum field theory and general relativity.

In summary: the chances of groundbreaking scientific discoveries would skyrocket. All the obstacles in the current academic system that prevent a “new Einstein” would no longer exist. While not everyone would be a “new Einstein”, the chances of one emerging would definitely be much higher.

A Glimpse of the Future

Still, we must keep in mind: Academia offers much more than money for people to do research. It also offers a community, a stimulating environment and access to scientific literature.

The last item on this list is the easiest for people without an academic affiliation to get access to. In 2050 we will laugh about the fact that we used to pay to download scientific papers. The existence of the arXiv is a great sign of the change in this direction.

What about community? People need other people with similar interests to discuss topics and problems. While sites like StackExchange help a lot, they are not enough.

Luckily, we can already see the first signs of how this could work in practice. The crucial idea is: there can be a scientific community outside of the established academic system. This is possible through independent science institutes. Existing examples are the Ronin Institute for Independent Scholarship, the CORES Science and Engineering Institute and the Pacific Science Institute.

Such institutes offer a community through regular in-person meet-ups. In addition, they can enable access to the established scientific community. For example, in this Nature article, Jon Wilkins, the founder of the Ronin Institute, explains one of the benefits:

Simply giving people an affiliation and an e-mail address means that when they submit a paper to a journal or a conference, it will get read and their work will have a shot at surviving on its own merits.

With such concepts starting to emerge, a future with a large community of independent researchers is something to look forward to. All the obstacles researchers outside of academia currently face can and will be overcome. Through a universal basic income, the scientific community would grow and change for the better. This is an aspect that people should emphasize more in discussions about basic income. In the long run, everyone would benefit from the resulting scientific advances. Although it always takes time, scientific advances ultimately result in technological breakthroughs. This way, the unemployed researchers would pay back their universal basic income “salary”.