Unfortunately, repetition is a convincing argument.

I recently wrote about the question “When do you understand?”. In that post I outlined a pattern I have observed in how I end up with a deep understanding of a given topic. However, there is also a second path that I completely missed.

The path to understanding that I outlined requires massive efforts to get to the bottom of things. I argued that you only understand something when you are able to explain it in simple terms.

The second path that I missed in my post doesn’t really lead to understanding. Yet the end result is quite similar. Oftentimes it’s not easy to tell whether someone arrived at their understanding via path 1 or via path 2. Even worse, you often can’t tell which path led to your own understanding.

So what is this second path?

It consists of reading something so often that you start to accept it as a fact. The second path makes use of repetition as a strong argument.

Once you know this, it is shocking to observe how easily one gets convinced by mere repetition.

However, this isn’t as bad as it may sound. If dozens of experts whom you respect repeat something, it is a relatively safe bet to believe them. This isn’t a bad strategy, at least not always. Especially when you are starting out, you need orientation. If you want to move forward quickly, you can’t get to the bottom of every argument.
Still, there are instances where this second path is especially harmful. Fundamental physics is definitely one of them. If we want to expand our understanding of nature at the most fundamental level, we need to constantly ask ourselves:

Do we really understand this? Or have we simply accepted it, because it got repeated often enough?

The thing is that physics is not based on axioms. Even if you could manage to condense our current state of knowledge into a set of axioms, it would be a safe bet that at least one of them will be dropped in the next century.

Here’s an example.

Hawking and the Expanding Universe

In 1983 Stephen Hawking gave a lecture about cosmology, in which he explained

“The De Sitter example was useful because it showed how one could solve the Wheeler-DeWitt equation and apply the boundary conditions in a simple case. […] However, if there are two facts about our universe of which we are reasonably certain, one is that it is not exponentially expanding and the other is that it contains matter.”

Only 15 years later, physicists were no longer “reasonably certain” that the universe isn’t exponentially expanding. On the contrary, we are now reasonably certain of the exact opposite. By observing the most distant supernovae, two experimental groups established the accelerating expansion as an experimental fact. This was a big surprise for everyone and rightfully led to a Nobel Prize for its discoverers.

The moral of this example isn’t, of course, that Hawking was stupid. He only summarized what everyone at that time believed they knew. This example shows how quickly our most basic assumptions can change. Although most experts were certain that the expansion of the universe isn’t accelerating, they were all wrong.

Theorems in Physics and the Assumptions Behind Them

If you want further examples, just have a look at almost any theorem that is commonly cited in physics.

Usually, the short final message of the theorem is repeated over and over. However, you almost never hear about the assumptions that are absolutely crucial for the proof.
This is especially harmful, because, as the example above demonstrated, our understanding of nature constantly changes.

Physics is never as definitive as mathematics. Even theorems aren’t bulletproof in physics, because the assumptions can turn out to be wrong through new experimental findings. Much of what we currently think is true in physics will be obsolete in 100 years. That’s what history teaches us.

An example, closely related to the accelerating universe example from above, is the Coleman-Mandula theorem. There is probably no theorem that is cited more often. Most talks related to supersymmetry mention it at some point. It is no exaggeration when I say that I have heard at least 100 talks that mentioned the final message of the proof: “space-time and internal symmetries cannot be combined in any but a trivial way”.

Yet, so far I’ve found no one who was able to discuss the assumptions of the theorem. The theorem has been repeated so often over the last decades that it is almost universally accepted to be true. And yes, the proof is, of course, correct.
However, what if one of the assumptions that go into the proof isn’t valid?

Let’s have a look.

An important condition, already mentioned in the abstract of the original paper, is Poincare symmetry. The original paper was published in 1967, and at that time it was reasonably certain that we live in a universe with Poincare symmetry.
However, as already mentioned above, we have known since 1998 that this isn’t correct. The expansion of the universe is accelerating. This means the cosmological constant is nonzero. The correct symmetry group that preserves the constant speed of light and the value of a nonzero cosmological constant is the De Sitter group, not the Poincare group. In the limit of a vanishing cosmological constant, the De Sitter group contracts to the Poincare group. The cosmological constant is indeed tiny, and therefore we aren’t too wrong if we use the Poincare group instead of the De Sitter group.
Yet, for a mathematical proof like the one proposed by Coleman and Mandula, whether we use De Sitter symmetry or Poincare symmetry makes all the difference in the world.

The Poincare group is a quite ugly group: it is the semidirect product of translations and Lorentz transformations, $\mathbb{R}^{3,1} \rtimes SL(2,\mathbb{C})$. The Coleman-Mandula proof makes crucial use of the inhomogeneous translation part $\mathbb{R}^{3,1}$ of this group. In contrast, the De Sitter group is a simple group; there is no inhomogeneous part. As far as I know, there is no Coleman-Mandula theorem if we replace the assumption “Poincare symmetry” with “De Sitter symmetry”.
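To make the contraction argument concrete, here is the standard İnönü-Wigner contraction, sketched in one common convention (exact signs and factors depend on the metric convention; this is textbook material, not part of the Coleman-Mandula paper itself):

```latex
% Generators of the de Sitter algebra so(4,1): M_{AB} with A,B = 0,1,2,3,4.
% Define translation generators using the de Sitter radius R (\Lambda \sim 1/R^2):
P_\mu \;\equiv\; \frac{1}{R}\, M_{4\mu}, \qquad \mu = 0,1,2,3.
% These would-be translations fail to commute:
[P_\mu, P_\nu] \;=\; \frac{1}{R^2}\,[M_{4\mu}, M_{4\nu}]
  \;\propto\; \frac{1}{R^2}\, M_{\mu\nu}.
% Only in the flat limit R \to \infty (i.e. \Lambda \to 0) do the P_\mu commute,
% and so(4,1) contracts to the Poincare algebra \mathbb{R}^{3,1} \rtimes so(3,1).
```

This is exactly why the Poincare group has the inhomogeneous part the proof relies on, while the simple De Sitter group does not: for nonzero $\Lambda$ there is no invariant abelian translation subgroup to single out.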

This is an example where repetition is the strongest argument. The final message of the Coleman-Mandula theorem is universally accepted as a fact. Yet almost no one has had a look at the original paper and its assumptions. The strongest argument for the Coleman-Mandula theorem seems to be that it has been repeated so often over the last decades.

Maybe you think: what’s the big deal?

Well, if the Coleman-Mandula no-go theorem is no longer valid because we live in a universe with De Sitter symmetry, a whole new world would open up in theoretical physics. We could start thinking about how spacetime symmetries and internal symmetries fit together.

The QCD Vacuum

Here is another example of something people take as given only because it has been repeated often enough: the structure of the QCD vacuum. I’ve written about this at great length here.

I talked to several PhD students who work on problems related to the strong CP problem and the vacuum in quantum field theory. Few knew the assumptions that are necessary to arrive at the standard interpretation of the QCD vacuum. No one knew where the assumptions actually come from and whether they are really justified. The thing is that when you dig deep enough, you’ll notice that the restriction to gauge transformations that satisfy $U \to 1$ at infinity is not based on something bulletproof, but is simply an assumption. This is a crucial difference, and if you want to think about the QCD vacuum and the strong CP problem, you should know this. However, most people take this restriction for granted, because it has been repeated often enough.
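For readers who haven’t dug into this, here is the standard chain of reasoning that rests on the $U \to 1$ boundary condition (conventions, e.g. phase signs, vary between textbooks; the point is only that everything below hinges on that assumption):

```latex
% With U(x) \to 1 as |x| \to \infty, space is effectively compactified to S^3,
% so gauge transformations become maps U: S^3 \to SU(3). These fall into
% homotopy classes, \pi_3(SU(3)) = \mathbb{Z}, labeled by the winding number
n \;=\; \frac{1}{24\pi^2} \int d^3x \;\epsilon^{ijk}\,
  \mathrm{Tr}\!\left[(U^\dagger \partial_i U)(U^\dagger \partial_j U)
  (U^\dagger \partial_k U)\right].
% "Large" gauge transformations (n \neq 0) map a vacuum |m\rangle to |m+n\rangle,
% and demanding that physical states change by at most a phase yields the theta vacuum:
|\theta\rangle \;=\; \sum_{n=-\infty}^{\infty} e^{-i n \theta}\, |n\rangle.
% Drop the boundary condition U \to 1, and this classification no longer goes
% through as stated, and with it the standard motivation for the theta vacuum.
```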

Progress in Theoretical Physics without Experimental Guidance

The longer I study physics, the more I become convinced that people should be more careful about what they think is definitely correct. Actually, there are very few things we know for certain, and it never hurts to ask: what if this assumption everyone uses is actually wrong?

For a long time, physics was strongly guided by experimental findings. From what I’ve read, these must have been amazingly exciting times. There was tremendous progress after each experimental finding. However, in the last decades there have been no experimental results that helped us understand nature better at a fundamental level. (I’ve written about the status of particle physics here.)

So currently a lot of people are asking: How can there be progress without experimental results that excite us?

I think a good idea would be to take a step back and talk openly, clearly and precisely about what we know and understand and what we don’t.

Already in 1996, Nobel Prize winner Sheldon Lee Glashow noted:

“[E]verybody would agree that we have right now the standard theory, and most physicists feel that we are stuck with it for the time being. We’re really at a plateau, and in a sense it really is a time for people like you, philosophers, to contemplate not where we’re going, because we don’t really know and you hear all kinds of strange views, but where we are. And maybe the time has come for you to tell us where we are. ‘Cause it hasn’t changed in the last 15 years, you can sit back and, you know, think about where we are.”

The first step in this direction would be for more people to be aware that while repetition is a strong argument, it is not a good one when we try to make progress. The examples above hopefully made clear that just because many people state that something is correct does not mean that it actually is. The message of a theorem can be invalid although the proof is correct, simply because the assumptions are no longer up to date.

This is what science is all about. We should always question what we take for granted. As with many things, Feynman said it best:

Science alone of all the subjects contains within itself the lesson of the danger of belief in the infallibility of the greatest teachers in the preceding generation. . . Learn from science that you must doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.

The Academic System in 2050

One day robots will be able to do all the tasks that are necessary for our modern day lives.

Surprisingly, many people I have talked to can’t imagine that robots will take over all the usual jobs. The main argument goes like this: “Well, isn’t this what people already thought a hundred years ago? Have a look, everybody is still working 8 hours a day!”

Of course, this is correct. For example, John Maynard Keynes predicted in 1930 that by the end of the century we would have a 15-hour work week. This is not what happened. Everyone is still working 40 hours. This is surprising, because the problem isn’t that technology evolved too slowly. The technology that fits into our pockets today is beyond anything Keynes could’ve imagined.

The reasons for our 40-hour work weeks aren’t particularly important for the point I’m trying to make here. Still, for an interesting perspective, have a look at “Bullshit Jobs” by David Graeber. He argues:

Technology has been marshaled, if anything, to figure out ways to make us all work more. In order to achieve this, jobs have had to be created that are, effectively, pointless. Huge swathes of people, in Europe and North America in particular, spend their entire working lives performing tasks they secretly believe do not really need to be performed.

The reasons do not matter, because there will be a day in the near (!) future when robots will do all these “bullshit jobs”. Just because it hasn’t happened so far does not mean that it won’t happen in the future. At some point there will be no way around it.

Of course, robots won’t replace every employee in the next century. Yet a large number of employees will be replaced, starting with bus drivers, taxi drivers, and so on.

What might sound like a utopian vision is usually regarded as a horror scenario. Mass unemployment! Poverty! What should all these people do?

The Solution and its Side-Effects

The best answer is “universal basic income”. (Negative income tax?)

This will allow all people who don’t enjoy working to spend all their time in virtual realities. Those who want luxuries beyond “basic needs”, whatever that means in 30 years, will have to work for them. There will still be jobs, but fewer, and not enough for everyone. But with a sufficient “universal basic income” this won’t be a problem. Most people don’t enjoy their job anyway. In addition to these two categories of people, there will be a third, and this is what I want to talk about here.

A universal basic income will have a particularly nice side-effect: it will revolutionize the academic system. With no worries about money, more people would do whatever is of interest to them. As a result, there would be a lot more scientists. This is a usually overlooked aspect of a “universal basic income”.

In such a future, many people would do what, for example, Julian Barbour and Garrett Lisi are currently doing: serious theoretical research without an academic position.

For some people, money without a job will mean that they stop doing anything that requires effort. They will start watching TV or playing video games all day.

Yet, for some it will mean the opposite.

In her essay “On The Leisure Track: Rethinking the Job Culture”, JoAnne Swanson summarizes exactly this opposite point of view:

Whatever else you can say about a shitty job that pays the bills, one thing’s for sure: as long as you have that job, or are busy looking for another one, you’ve got a built-in, airtight, socially acceptable excuse for any lack of progress toward your dreams in life. […] When I say I don’t want a job, I definitely don’t mean that I refuse to do anything that involves effort, or that I want to do nothing but watch TV and sleep. What I mean is that to the extent that it is possible, I want to find a way to use my gifts effectively without worrying about where my support will come from, and I want to help make it possible for others to use their own gifts in the same manner. The job culture is not designed to reward people for developing their gifts and working with love and joy; it’s designed, primarily, to concentrate wealth at the top.

In addition to the slackers, there will be a huge number of people who will see the chance to finally fulfill their dreams. For many, this will mean that they can now educate themselves about all the things they were always interested in. A lot more people will have the time, leisure, and knowledge to work on fundamental problems in science.

Currently, there is fierce competition in science. People fight for grants and positions. As a result, no one has the time to spend, say, three years thinking about one problem. The risk that he or she will not find a solution is too large. In the current academic system, such a long failed project would mean career suicide.

With a universal basic income people could think, read and write without any pressure. There would be no problem if they quit their studies after several years without any groundbreaking discovery. People would do research not to collect citations, but to understand and discover.

As Einstein put it:

Of all the communities available to us, there is not one I would want to devote myself to except for the society of the true seekers, which has very few living members at any one time.

With a universal basic income this “society of the true seekers” would get a lot of new members.

New Einsteins?

Of course, not everyone can and will be a new Einstein. Not everyone can make groundbreaking discoveries. Still, there would be dramatic benefits to science if people could work without the pressure to collect citations.

One example: a lot more people would start to write down what they understand, because they would have the time. Currently, almost no one is doing this. The time you spend writing about what you’ve learned last week does not help your career. You don’t collect citations with an introductory text. No matter how good your explanation is, everyone will still cite the original discoverer. Writing down what you’ve learned is not something the academic system values. The same is true for pedagogical explanations. But without the need to optimize their h-index, people would have enough time for such “fun projects”.

But … aren’t there already enough textbooks, review articles, etc.?

No! The current problem is: who is actually “allowed” to write textbooks nowadays? Who actually has the time to write textbooks? Who writes review articles? Only a tiny number of people: people with permanent academic positions. Even worse, in practice only a tiny fraction of those who could write introductory texts actually do it. This is not surprising. From a monetary standpoint, writing a textbook is not a smart move. (I’m speaking from experience.)

However, without the pressure to make money, more people could write. This means we would get lots of new, unique perspectives. Everyone would have the chance to find an author who actually speaks a language they understand.

There would be hundreds of great introductory texts on any topic. If you’re having trouble understanding something, no problem: just have a look at another explanation. We could learn every topic much more easily.

Also, people would have the time to study all the things they were always interested in. A lot more people would learn the fundamentals of quantum mechanics, quantum field theory and general relativity.

In summary: the chances of groundbreaking scientific discoveries would skyrocket. All the obstacles in the current academic system that prevent a “new Einstein” would no longer exist. While not everyone would be a “new Einstein”, the chances of one new Einstein would definitely be much higher.

A Glimpse of the Future

Still, we must keep in mind: Academia offers much more than money for people to do research. It also offers a community, a stimulating environment and access to scientific literature.

The last item on this list is the easiest one for people without an academic affiliation to get access to. In 2050 we will laugh about the fact that we used to pay to download scientific papers. The existence of the arXiv is a great sign of the change in this direction.

What about community? People need other people with similar interests to discuss topics and problems. While sites like StackExchange help a lot, they are not enough.

Luckily, we can already see first signs how this could work in practice. The crucial idea is: there can be a scientific community outside of the established academic system. This is possible through independent science institutes. Existing examples are the Ronin Institute for Independent Scholarship, the CORES Science and Engineering Institute and the Pacific Science Institute.

Such institutes offer a community through regular in-person meet-ups. Besides, they can also enable access to the established scientific community. For example, in this Nature article, Jon Wilkins, the founder of the Ronin Institute, explains one of the benefits:

Simply giving people an affiliation and an e-mail address means that when they submit a paper to a journal or a conference, it will get read and their work will have a shot at surviving on its own merits.

With such concepts starting to emerge, a future with a large community of independent researchers is something to look forward to. All the obstacles researchers outside of academia currently face can and will be overcome. Through a universal basic income, the scientific community would grow and change for the better. This is an aspect that people should emphasize more in discussions about basic income. In the long run, everyone would benefit from the resulting scientific advances. Although it always takes time, scientific advances ultimately result in technological breakthroughs. This way, the unemployed researchers would pay back their universal basic income “salary”.

To Collider?

If you don’t understand the title of this post, have a look at this comic.

Currently, the future of particle physics is completely uncertain. So far there have been no surprising findings at the LHC, and no one knows for sure what the best next move is. Would another huge collider, even larger than the LHC, finally bring us the much-needed experimental information that helps us figure out the correct theory beyond the standard model? Or would the money be better invested in small projects?

There is an interesting new essay by Adam Falkowski (well known for his blog Résonaances) about exactly this question. He concludes: “Shifting the focus away from high-energy colliders toward precision experiments may be the most efficient way to continue exploration of fundamental interactions in the decades ahead. It may even allow particle physics to emerge stronger from its current crisis.”

His conclusions and arguments immediately reminded me of a much older essay titled “Six Cautionary Tales for Scientists” by Freeman Dyson, published in “From Eros to Gaia”. The essay is from 1988, but it reads as amazingly relevant today.

Dyson compares six situations where people had to decide between Plan A (several small projects) and Plan B (one huge project). His main conclusion is that in most situations Plan A is the better option.

One relevant example: “the community of molecular biologists in the United States has been struggling with the question of whether to set up a large project to map and sequence the human genome, the set of 3 billion base pairs in the genes of a human being.” They had the option to “establish an industrial-scale facility for sequencing, and would aim to have the whole job done by an army of technicians within a few years. In conjunction with the sequencing project there would also be a large centralized mapping project, using the sequence data to identify all known and unknown human genes with precisely known places in the genome. Plan B would require a large new expenditure of public funds, with the usual attendant problems of deciding who should administer the funds and who should receive them.”

The other option would be to “continue unchanged so far as possible the existing way of doing things, with mapping and sequencing activities carried on in decentralized fashion by many groups of scientists investigating particular problems of human genetics. In Plan A there would be no centralized big project, and no drive to sequence the 3 billion bases of the human genome in their entirety irrespective of their genetic significance.”

He then argues that Plan A is the much better option: “The complete sequence should be done when, and only when, we have developed the technology to do the job cheaply and quickly. When it can be done cheaply, go ahead and do it.” Moreover, he concludes that it is clever “to stay flexible and avoid premature commitment to rigid programs. […] Unfortunately, in the history of committees planning scientific programs, such wisdom is rare.”

Instead of investing a large chunk of the available budget into one big project, he argues, it makes more sense to give the same money to smaller projects. These smaller projects will help develop the needed technology further and drive down the costs of experiments. After some years, it will probably be possible to get the same results for millions instead of billions. In addition, there is much more flexibility and more room for surprising findings.

His arguments against Plan B become even more convincing, and relevant to the question of whether a new huge collider should be built, when he talks about the SSC.

The SSC is an extreme example of Plan B. The question we have to address is whether SSC is a good Plan B like the Very Large Array or a disastrous Plan B like Zelenchukskaya. […] I do not claim to be infallible when I make guesses about the future. But the SSC shows all the characteristic symptoms of a bad Plan B. It is bad politically because it is being pushed by economic interests and by considerations of national prestige having little to do with scientific merit. It is bad educationally because it pours money into a project which offers little opportunity for creative involvement of students. It is bad scientifically because the proton-proton collisions which it produces are peculiarly difficult to interpret. It is bad ecologically because it squeezes out other avenues of research which are likely to lead to more cost-effective high-energy accelerators. None of these arguments by itself is conclusive, but together they make a strong case against the SSC. There is a serious risk that the SSC will be as great a setback to particle physics as the Zelenchukskaya Observatory has been to astronomy.

When I discuss these misgivings with my particle physicist friends, some who belong to HEPAP and some who don’t, they usually say things like this: “But look, we have no alternative. If we want to see the Higgs boson, or the top quark, or the photino, or any other new particles going beyond the standard model, we have to go to higher energy. It is either the SSC or nothing.” This is the same kind of talk you always hear when people are arguing for Plan B. It is either Plan B or nothing. This argument usually prevails because Plan B is one big thing and Plan A is a lot of little things. When your eyes are blinded by the glitter of something big, all the little things look like nothing. […] But to answer the physicists who say “SSC or nothing,” we must produce a practical alternative to the SSC. We must have a Plan A. Plan A does not mean giving up on high-energy physics. It does not mean that we stop building big accelerators. It does not mean that we lose interest in Higgs bosons and top quarks. My Plan A is rather like the plan recommended by the Alberts Committee. It says, let us put more money into exploring new ideas for building cost-effective accelerators. Let us build several clever accelerators instead of one dumb accelerator. Let us measure the value of an accelerator by its scientific output rather than by its energy input. And meanwhile, while the technology for cheaper and better accelerators is being developed, let us put more money into using effectively the accelerators we already have.

The advocates of the SSC often talk as if the universe were one-dimensional, with energy as the only dimension. Either you have the highest energy or you have nothing. But in fact the world of particle physics is three-dimensional. The three dimensions in a particle physics experiment are energy, accuracy, and rarity. Energy and accuracy have obvious meanings. Energy is determined mainly by the accelerator, accuracy by the detection equipment. Rarity means the fraction of particle collisions that produce the particular process that the experiment is designed to study. To observe events of high rarity, we need an accelerator with high intensity to make a large number of collisions, and we also need a detector with good diagnostics to discriminate rare events from background. I am not denying that energy is important. Let us by all means build accelerators of higher energy when we can do so cost-effectively. But energy is not the only important variable.


My Plan A for the future of particle physics is a program giving roughly equal emphasis to the three frontiers. Plan A should be a program of maximum flexibility, encouraging the exploitation of new tools and new discoveries wherever they may occur. To encourage work on the accuracy frontier means continuing to put major effort into new detectors to be used with existing accelerators. To encourage work on the rarity frontier means building some new accelerators which give high intensity of particles with moderate energy. After these needs are taken care of, Plan A will still include big efforts to move ahead on the energy frontier. But the guiding principle should be: more money for experiments and less for construction. Let us find out how to explore the energy frontier cheaply before we get ourselves locked into a huge construction project.
Let us follow the good example of the Alberts Committee when they say: “Because the technology required to meet most of the project’s goals needs major improvement, the committee specifically recommends against establishing one or a few large sequencing centers at present.”
Plan A consists of a mixture of many different programs, looking for opportunities to do great science on all three frontiers. Plan A lacks the grand simplicity and predictability of the SSC. And that is to my mind the main reason for preferring Plan A. There is no illusion more dangerous than the belief that the progress of science is predictable. If you look for nature’s secrets in only one direction, you are likely to miss the most important secrets, those which you did not have enough imagination to predict.