Jakob Schwichtenberg

Demystifying Symmetry Breaking and Goldstone’s theorem

What an imperfect world it would be if every symmetry was perfect
B. G. Wybourne

Symmetry breaking is an incredibly important phenomenon in modern physics. The best theory of nature at the fundamental level that we have, the standard model, wouldn’t make sense without it. Mathematically, symmetry breaking is easy to describe.

What is much harder is to understand intuitively what is going on.

For example, every advanced student of physics knows Goldstone’s theorem. The punchline of the theorem is that every time a continuous symmetry gets spontaneously broken, massless particles automatically appear in the theory. These particles are known as Goldstone bosons.

So far, so good.

However, what almost no student knows is why this happens. In addition to the punchline, the only thing that is presented in the standard textbooks and lectures is a proof of the theorem. But knowing the punchline + knowing that you can somehow prove the punchline does not equal understanding. To express it in terms I introduced here: what is missing is a “first layer” explanation. Students are usually only shown abstract second and third layer explanations.

I strongly believe there is always an intuitive explanation and at least Goldstone’s theorem and the famous Higgs loophole are no exceptions.

Symmetry Breaking Intuitively

Speaking colloquially, a symmetry is broken when the system we are considering is in some sense stiff. Before we consider a stiff system and why this means that a symmetry is broken, let’s consider the opposite situation first.

A gas of molecules is certainly not stiff. Consequently, we have the usual symmetries: rotational symmetry and translational symmetry.

What this means is the following:

The molecules move chaotically, and if you close your eyes for a moment, I perform a global translation, i.e. move all molecules in some direction, and then you open your eyes again, it is impossible for you to tell that I changed anything at all. This is the definition of a symmetry: you close your eyes, I perform a transformation on an object/system, and if you can’t tell that anything changed, the transformation I performed is a symmetry of the object/system.

Hence, translations are a symmetry of a gas of molecules. Equally, we can argue rotations are a symmetry of the system.

Now, systems of molecules can not only appear as a gas but also as a solid. If we cool down the gas it becomes a liquid and eventually freezes. The thing is that solid systems like an ice crystal are stiff and possess less symmetry than a gas. This is the opposite of what most laypersons would suspect. Thinking of beautiful ice crystals, for example, most people would agree immediately that ice is much more symmetric than water or steam.

However, this is wrong. An ice crystal looks the same only if it is rotated by very special angles, like 120 degrees or 240 degrees. In contrast, water or steam can be rotated arbitrarily and always looks the same. “Looks the same” means, as described above, that you close your eyes, I perform a transformation, and if you can’t tell the difference, the transformation is a symmetry.

An important side note: a single snapshot of a gas has no symmetry at all. The molecules are just randomly jumbled together with no long-range pattern. However, the gas is not well described by a single image; a video would be much better, because the gas molecules are constantly flying around. So better imagine a long series of such snapshots. This series of snapshots will look the same if, for example, it is rotated.

Next, we want to understand, as promised, Goldstone’s theorem intuitively. To do this, let’s first talk about energy for a moment.

A crystal consists of molecules arranged in regular, repeating rows and columns. The molecules arrange themselves like this because the perfect lattice is the configuration with the lowest energy. As noted above, symmetry is broken once the atoms are arranged like this. This directly means that I can no longer move the molecules around freely, as I could in a gas. It now costs energy to move molecules.

With this in mind, we are ready to understand Goldstone’s theorem.

Why do we expect Goldstone bosons when a continuous symmetry gets broken?

We just noted that symmetry breaking means that a system becomes stiff. This, in turn, means that it now costs energy to move molecules around.

However, there is no resistance if we try to move all the atoms at once by the same amount. This is a result of the previously existing translational symmetry. This observation is exactly what is made precise in Goldstone’s famous theorem.

The relation between displacements and the corresponding energy cost is called a dispersion relation. In technical terms, a dispersion relation describes the connection between the wavelength $\lambda$ and the frequency $\nu$, or equivalently the energy $E$. The shift mentioned above, moving all atoms at once by the same amount, is a wave with infinite wavelength. The corresponding energy cost is zero because no atoms are brought closer to each other or pulled apart.

The interactions among the atoms are completely unaltered by such a global shift. Therefore, we have the dispersion relation $\nu(\lambda) \propto \frac{1}{\lambda}$. As $\lambda$ goes to infinity, the frequency and thus the energy go to zero. Such a “wave” with infinite wavelength is called a Goldstone mode in this context. While you can always consider waves with infinite wavelength in any system, the special thing here is that they cost zero energy.

This is a result of the translational symmetry of the physical laws, which is only broken by the ground state, i.e. the perfect lattice configuration. Shifting the complete perfect lattice costs no energy. Only relative displacements within the lattice cost energy. Such displacements correspond to waves with shorter wavelength and hence have a non-zero frequency.
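
To make this concrete, consider the standard textbook toy model (my addition here, not something spelled out in the text above): a one-dimensional chain of atoms of mass $m$, spaced a distance $a$ apart and connected by springs of stiffness $K$. Its angular frequency $\omega = 2\pi\nu$ obeys

$$ \omega(k) = 2\sqrt{\frac{K}{m}}\,\left|\sin\left(\frac{ka}{2}\right)\right|, \qquad k = \frac{2\pi}{\lambda}. $$

As the wavelength $\lambda$ grows, $k \to 0$ and therefore $\omega \to 0$: the wave that shifts the whole chain uniformly costs no energy. This gapless branch is the acoustic phonon, i.e. exactly the Goldstone mode of the broken translational symmetry.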

At first sight, Goldstone’s theorem is surprising. Why should moving all the atoms at once cost no energy, whereas small changes to the lattice structure cost energy?

The reason for this surprising fact is the translational invariance of the laws of physics and, here, of the background spacetime in which we imagine our crystal to live. Spacetime is everywhere the same, and hence it makes no difference to which location we move our crystal. Therefore, there is no energy penalty for changes that move the complete crystal at once.

In contrast, there is a huge energy penalty for displacing individual atoms in the lattice, because the perfect lattice is the configuration with the lowest energy. In this sense, the ground state configuration is stiff.

So to summarize: whenever we have a system that is described by physical laws which possess some symmetry, but the state with the lowest energy (the ground state) does not respect this symmetry, waves with infinite wavelength cost no energy. Expressed more concisely: whenever a global symmetry gets broken by the ground state, we get Goldstone modes.

In a crystal, low-frequency phonons are the Goldstone modes.

Completely analogously, we can discuss what happens in a magnet. Above the Curie temperature, all the spins point in random directions and therefore we have rotational symmetry. However, below the Curie temperature the individual spins conspire and align in some direction. This leads to magnetic stiffness, which means that the individual spins resist twists. However, a uniform rotation of all spins costs no energy. The corresponding Goldstone modes are called spin waves.

A spin wave – inspired by Fig. 8.4 in Quantum Field Theory by Lewis H. Ryder

In the ground state below the Curie temperature, all spins are aligned along some direction. This random choice of alignment breaks the rotational symmetry. The Goldstone modes correspond to those transformations that transform the various possible ground states into each other.

It is convenient to introduce the notion of an “order parameter” in this context. The order parameter is a way to classify which phase a given system is in. In the case of the ferromagnet, the overall magnetization (= the total spin vector) is the order parameter.

Above the Curie temperature all spins are oriented randomly and thus the total sum of all spins is zero. However, below the Curie temperature the spins align and we get a non-zero overall spin vector, i.e. a non-zero order parameter.
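
Both statements, the non-zero order parameter and the zero-cost uniform rotation, are easy to check numerically in a toy model. The following sketch is my own illustration; the model, the coupling $J$ and all numbers are assumptions, not taken from the text. It uses a classical chain of planar spins with nearest-neighbor coupling:

```python
import numpy as np

def energy(angles, J=1.0):
    """Classical chain of planar spins described by angles theta_i,
    with nearest-neighbor coupling E = -J * sum_i cos(theta_{i+1} - theta_i)."""
    return -J * np.cos(np.diff(angles)).sum()

N = 1000
ground = np.zeros(N)  # all spins aligned: the symmetry-breaking ground state

# Order parameter: the average spin vector (magnetization) is non-zero in the ground state.
magnetization = np.array([np.cos(ground).mean(), np.sin(ground).mean()])
print("magnetization:", magnetization)                              # -> [1. 0.]

# Rotating *every* spin by the same angle costs exactly zero energy (the Goldstone mode):
print("uniform rotation:", energy(ground + 0.7) - energy(ground))   # -> 0.0

# A spatially varying twist costs energy, but the cost vanishes as its wavelength grows:
for wavelength in (10, 100, 1000):
    twist = 2 * np.pi * np.arange(N) / wavelength
    print(wavelength, energy(twist) - energy(ground))
```

The last loop is the spin-wave statement in miniature: the longer the wavelength of the twist, the smaller its energy cost, and in the limit of infinite wavelength we are back at the free global rotation.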

A nice example to keep all this in mind is a chair. The above observation is exactly what allows us to move all the atoms in a chair at once. Instead of deforming their lattice structure, the roughly $10^{27}$ atoms that make up the chair prefer to move all at once.

In a second essay, I will try to explain which loophole Peter Higgs (and others) discovered that makes it possible to have symmetry breaking without Goldstone bosons.

Why there is rarely only one viable explanation

“Nature is a collective idea, and, though its essence exist in each individual of the species, can never in its perfection inhabit a single object.” ―Henry Fuseli

I recently came across a WIRED story titled “There’s no one way to explain how flying works”. The author published a video in which he explained how airplanes fly. Afterward, he got attacked in the comments because he didn’t mention “Bernoulli’s principle”, which is the conventional way to explain how flying works.

Was his explanation wrong? No, as he emphasizes himself in the follow-up article mentioned above.

So is the conventional “Bernoulli’s principle” explanation wrong? Again, the answer is no.

It’s not just for flying that there are lots of equally valid ways to explain something. In fact, such a situation is more common than not.

The futility of psychology in economics

Another good example is economics. Economists try to produce theories that describe the behavior of large groups of people. In this case, the individual humans are the fundamental building blocks and a more fundamental theory would explain economic phenomena in terms of how humans act in certain situations.

An economic phenomenon that we can observe is that stock prices move randomly most of the time. How can we explain this?

So let’s say I’m an economist and I propose a model that explains the random behavior of stock prices. My model is stunningly simple: humans are crazy and unpredictable. Everyone does what he feels is right. Some buy because they feel the price is cheap. Others sell because they think the very same price is too high. Humans act randomly and this is why stock prices are random. I call my fundamental model, which explains economic phenomena in terms of individual random behavior, the theory of the “Homo randomicus”.

This hypothesis certainly makes sense and we can easily test it in experiments. There are numerous experiments that exemplify how irrationally humans act most of the time. A famous one is the following “loss aversion” experiment:

Participants were given \$50. Then they were asked if they would rather keep \$30 or flip a coin to decide if they can keep all \$50 or lose it all. The majority decided to avoid gambling and simply keep the \$30.

However, then the experimenters changed the setup a bit. Again the participants were given \$50, but now they were asked if they would rather lose \$20 or flip a coin to decide if they could keep all \$50 or lose it all. This time the majority decided to gamble.

This behavior certainly makes no sense. The rules are exactly the same but only framed differently. The experiment, therefore, proves that humans act irrationally.

So my model makes sense and is backed up by experiments. End of the story right?

Not so fast. Shortly after my proposal, another economist comes around and argues that he has a much better model. He argues that humans act perfectly rationally all the time and use all the available information to make decisions. In other words, humans act as a “Homo oeconomicus”. With a bit of thought it is easy to deduce from this model that stock prices move randomly.

This line of thought was first proposed by Louis Bachelier, and you can read a nice excerpt that explains it, taken from the book “The Physics of Wall Street” by James Owen Weatherall, below.

Why stocks move randomly even though people act rationally

But why would you ever assume that markets move randomly? Prices go up on good news; they go down on bad news. There’s nothing random about it. Bachelier’s basic assumption, that the likelihood of the price ticking up at a given instant is always equal to the likelihood of its ticking down, is pure bunk. This thought was not lost on Bachelier. As someone intimately familiar with the workings of the Paris exchange, Bachelier knew just how strong an effect information could have on the prices of securities. And looking backward from any instant in time, it is easy to point to good news or bad news and use it to explain how the market moves. But Bachelier was interested in understanding the probabilities of future prices, where you don’t know what the news is going to be. Some future news might be predictable based on things that are already known. After all, gamblers are very good at setting odds on things like sports events and political elections — these can be thought of as predictions of the likelihoods of various outcomes to these chancy events. But how does this predictability factor into market behavior?
Bachelier reasoned that any predictable events would already be reflected in the current price of a stock or bond. In other words, if you had reason to think that something would happen in the future that would ultimately make a share of Microsoft worth more — say, that Microsoft would invent a new kind of computer, or would win a major lawsuit — you should be willing to pay more for that Microsoft stock now than someone who didn’t think good things would happen to Microsoft, since you have reason to expect the stock to go up. Information that makes positive future events seem likely pushes prices up now; information that makes negative future events seem likely pushes prices down now.
But if this reasoning is right, Bachelier argued, then stock prices must be random. Think of what happens when a trade is executed at a given price. This is where the rubber hits the road for a market. A trade means that two people — a buyer and a seller — were able to agree on a price. Both buyer and seller have looked at the available information and have decided how much they think the stock is worth to them, but with an important caveat: the buyer, at least according to Bachelier’s logic, is buying the stock at that price because he or she thinks that in the future the price is likely to go up. The seller, meanwhile, is selling at that price because he or she thinks the price is more likely to go down. Taking this argument one step further, if you have a market consisting of many informed investors who are constantly agreeing on the prices at which trades should occur, the current price of a stock can be interpreted as the price that takes into account all possible information. It is the price at which there are just as many informed people willing to bet that the price will go up as are willing to bet that the price will go down. In other words, at any moment, the current price is the price at which all available information suggests that the probability of the stock ticking up and the probability of the stock ticking down are both 50%. If markets work the way Bachelier argued they must, then the random walk hypothesis isn’t crazy at all. It’s a necessary part of what makes markets run.
– Quote from “The Physics of Wall Street” by James Owen Weatherall
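
Bachelier’s conclusion is easy to play with numerically. The sketch below is my own illustration with made-up numbers; it simply encodes the statement from the excerpt that at every tick the price is equally likely to move up or down:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bachelier-style toy model: each day the price moves up or down by one unit with probability 1/2.
n_paths, n_days = 10_000, 250
steps = rng.choice([-1, 1], size=(n_paths, n_days))
prices = 100 + np.cumsum(steps, axis=1)        # every row is one random-walk price path

final = prices[:, -1]
print("average final price:", final.mean())    # ~100: no predictable drift
print("spread of final prices:", final.std())  # ~sqrt(250) = 15.8: uncertainty grows like sqrt(time)
```

Note that no psychology enters anywhere; the random walk follows from the 50/50 assumption alone, which is exactly why very different models of individual behavior can all end up at the same macroscopic result.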

 

Certainly, it wouldn’t take long until a third economist comes along and proposes yet another model. Maybe in his model humans act rationally 50% of the time and randomly 50% of the time. He could argue that just like photons sometimes act like particles and sometimes like waves, humans sometimes act as a “Homo oeconomicus” and sometimes as a “Homo randomicus”. A fitting name for his model would be the theory of the “Homo quantumicus”.

Which model is correct?

Before tackling this question it is instructive to talk about yet another example. Maybe it’s just that flying is so extremely complicated and that humans are so strange that we end up in the situation where we have multiple equally valid explanations for the same phenomenon?

The futility of microscopic theories that explain the ideal gas law

Another great example is the empirical law that the pressure of an ideal gas is inversely proportional to the volume:

$$ P \propto \frac{1}{V} $$

This means that if we have a gas like air in some bottle and then make the bottle smaller, the pressure inside the bottle increases. Conversely, if we make the bottle larger, the pressure inside drops. It’s important that the relationship is exactly as written above and not something like $ P \propto \frac{1}{V^2}$ or $ P \propto \frac{1}{V^{1.3}}$. How can we explain this?

It turns out there are lots of equally valid explanations.

The first one was provided by Boyle (1660), who compared the air particles to coiled-up balls of wool or springs. These naturally resist compression and expand if they are given more space. Newton quantified this idea and proposed a repelling force between nearest neighbors whose strength is inversely proportional to the distance between them. He was able to show that this explains the experimental observation $ P \propto \frac{1}{V} $ nicely.

However, some time afterward he showed that the same law can also be explained if we consider air as a swarm of almost free particles, which only attract each other when they come extremely close to each other. Formulated differently, he explained $ P \propto \frac{1}{V} $ by proposing an attractive, short-ranged force. This is almost exactly the opposite of the explanation above, where he proposed a repulsive force.

Afterwards, other famous physicists offered their own explanations of $ P \propto \frac{1}{V} $. For example, Bernoulli proposed a model where air consists of hard spheres that collide elastically all the time. Maxwell proposed a model with an inverse power law, similar to Newton’s first proposal above, but preferred an inverse fifth-power law instead of Newton’s inverse-first-power law.

The story continues. In 1931 Lennard-Jones took the by now established quantum-mechanical electronic structure of orbitals into account and proposed a seventh-power attractive law.

Science isn’t about opinions. We do experiments and test our hypotheses. That’s how we find out which hypothesis is favored over a competing one. While we can never achieve 100% certainty, it’s possible to get an extremely high, quantifiable confidence in a hypothesis. So how can it be that there are multiple equally valid explanations for the same phenomenon?

Renormalization

There is a good reason why, and it has to do with the following law of nature:

Details become less important if we zoom out and look at something from a distance.

For the laws of ideal gases this means not only that there are lots of possible explanations, but that almost any microscopic model works. You can use an attractive force, you can use a repulsive force, or even no force at all (= particles that only collide with the container walls). You can use a power law or an exponential law. It really doesn’t matter, as the sketch below illustrates.
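
Here is a sketch of arguably the most boring microscopic model of all, the one with no forces whatsoever: non-interacting particles in a one-dimensional box that only bounce off the walls. Everything in it (units, particle number, temperature) is a made-up assumption for the purpose of illustration, but even this model reproduces $ P \propto \frac{1}{V} $, here in the form “time-averaged force on the wall times box length is constant”:

```python
import numpy as np

rng = np.random.default_rng(0)

def wall_pressure(L, n=2000, m=1.0, kT=1.0, dt=1e-3, steps=20_000):
    """Non-interacting particles bouncing elastically between walls at 0 and L.
    Returns the time-averaged force on the right wall (the 1D analogue of pressure)."""
    x = rng.uniform(0, L, n)                    # random initial positions
    v = rng.normal(0.0, np.sqrt(kT / m), n)     # thermal (Maxwell-Boltzmann) velocities
    impulse = 0.0
    for _ in range(steps):
        x += v * dt
        right = x > L                           # elastic bounce off the right wall,
        impulse += 2 * m * np.abs(v[right]).sum()   # recording the momentum it receives
        v[right] *= -1
        x[right] = 2 * L - x[right]
        left = x < 0                            # elastic bounce off the left wall
        v[left] *= -1
        x[left] = -x[left]
    return impulse / (steps * dt)

for L in (1.0, 2.0, 4.0):
    P = wall_pressure(L)
    print(f"L = {L}:  P = {P:7.1f},  P * L = {P * L:7.1f}")   # P * L stays roughly constant
```

Swap the free particles for hard spheres or add some short-ranged force and, as long as the gas stays dilute, the product of pressure and volume barely changes; the microscopic details wash out when we zoom out.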

Your microscopic model doesn’t really matter as long as we are only interested in something macroscopic like air. If we zoom in, all these microscopic models look completely different: the individual air particles move and collide in completely different ways. But if we zoom out and only look at the properties of the whole collection of air particles as a gas, these microscopic details become unimportant.

The law $ P \propto \frac{1}{V} $ is not the result of some microscopic model. None of the models mentioned above is the correct one. Instead, $ P \propto \frac{1}{V} $ is a generic macroscopic expression of certain conservation laws and therefore of symmetries.

Analogously it is impossible to incorporate the individual psychology of each human into an economic theory. When we describe the behavior of large groups of people we must gloss over many details. As a result, things that we observe in economics can be explained by many equally valid “microscopic” models.

You can start with the “Homo oeconomicus”, the “Homo randomicus” or something in between. It really doesn’t matter since we always end up with the same result: stock markets move randomly. Most importantly, the pursuit of the one correct more fundamental theory is doomed to fail, since all the microscopic details get lost anyway when we zoom out.

This realization has important implications for many parts of science and especially for physics.

What makes theoretical physics difficult?

The technical term for the process of “zooming out” is renormalization. We start with a microscopic theory and zoom out by renormalizing it.

The set of transformations which describe the “zooming out” process are called the renormalization group.

Now the crux is that this renormalization group is not really a group, but a semi-group. The difference between a group and a semi-group is that semi-group elements need not have inverses. So while we can start with a microscopic theory and zoom out using the renormalization group, we can’t do the opposite: we can’t start with a macroscopic theory and zoom in to recover the correct microscopic theory. In general, there are many, if not infinitely many, microscopic theories that yield exactly the same macroscopic theory.
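
A tiny sketch makes the missing inverse tangible. The majority-rule block-spin map below (in the spirit of Kadanoff’s construction; my own minimal illustration, not the full renormalization-group machinery) sends two different microscopic configurations to the same coarse-grained one, so the zoomed-out picture cannot tell you which microscopic picture it came from:

```python
import numpy as np

def coarse_grain(spins, block=3):
    """One 'zooming out' step: replace each block of spins (+1/-1) by its majority sign."""
    return np.sign(spins.reshape(-1, block).sum(axis=1)).astype(int)

# Two different microscopic configurations ...
micro_a = np.array([+1, +1, -1,   -1, -1, +1,   +1, +1, +1])
micro_b = np.array([+1, +1, +1,   -1, -1, -1,   +1, -1, +1])

# ... are mapped to the same macroscopic (coarse-grained) configuration:
print(coarse_grain(micro_a))   # [ 1 -1  1]
print(coarse_grain(micro_b))   # [ 1 -1  1]
```

Coarse-graining steps can be composed with each other but not undone, which is exactly what distinguishes a semi-group from a group in this context.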

This is what makes physics so difficult and why physics is currently in a crisis.

We have a nice model that explains the behavior of elementary particles and their interactions. This model is called the “standard model“. However, there are lots of things left unexplained by it. For example, we would like to understand what dark matter is. In addition, we would like to understand why the standard model is the way it is. Why aren’t the fundamental interactions described by different equations?

Unfortunately, there are infinitely many microscopic models that yield the standard model as a “macroscopic” theory, i.e. when we zoom out. There are infinitely many ways to add one or several new particles to the standard model which explain dark matter but remain invisible at present-day colliders like the LHC. There are infinitely many Grand Unified Theories that explain why the interactions are the way they are.

We simply can’t decide which one is correct without help from experiments.

The futility of arguing over fundamental models

Every time we try to explain something in terms of more fundamental building blocks, we must be prepared that there are many equally valid models and ideas.

The moral of the whole story is that explanations in terms of a more fundamental model are often not really that important. It makes no sense to argue about competing models if you can’t differentiate between them when you zoom out. Instead, we should focus on the universal features that survive the “zooming out” procedure. For each scale (think: planets, humans, atoms, quarks, …) there is a perfect theory that describes what we observe. However, there is no unique more fundamental theory that explains this theory. While we can perform experiments to check which of the many fundamental theories is more likely to be correct, this doesn’t help us much with our more macroscopic theory, which remains valid. For example, a perfect theory of human behavior will not give us a perfect theory of economics. Analogously, the standard model will remain valid even when the correct theory of quantum gravity is found.

The search for the one correct fundamental model can turn into a disappointing endeavor, not only in physics but everywhere, and it often doesn’t make sense to argue about more fundamental models that explain what we observe.

PS: An awesome book to learn more about renormalization is “The Devil in the Details” by Robert Batterman. A great and free course to learn more about it in a broader context (computer science, sociology, etc.) is “Introduction to Renormalization” by Simon DeDeo.

Why experts are bad teachers* and who you should learn from instead

When I started studying it didn’t take long until I was confused and disappointed.

Why were almost all lectures boring and useless?

Typically the lecturer dwelled endlessly on trivialities and rushed with lightning speed through everything complicated. Still, I continued attending the lectures, simply because I thought that this is how you learn at the university level. I thought that somehow something would stick subconsciously even though I didn’t learn anything consciously. The main reason I went to the lectures was that I feared I would miss something crucial if I didn’t go.

I also discovered that most textbooks are boring and useless. When you go to the library and read the textbook your professor recommended, you usually end up more confused. As a beginner student, you only know a few textbooks, and chances are high that they are all horrible.

Today I know that I wouldn’t have missed anything if I had skipped the lectures. Today I know that there is no way to magically learn something subconsciously. Today I know why lectures and textbooks are typically boring and useless.

What do bad lectures and textbooks have in common?

The thing that lectures and most textbooks have in common is that they are made by experts. Only after at least a decade of intensive research do you get into a position where you are allowed to give lectures. Analogously, usually only textbooks written by experts get published, or at least recommended by professors.

This sounds reasonable. To teach something you must be an expert. To write a book on something you must be an expert. What’s the problem?

The problem here is that the more we know about some subject, the more we think about it in abstract terms. This isn’t a bad thing per se. Abstraction allows us to compress vast amounts of knowledge into manageable pieces. The evolution towards abstraction is the reason why every mature field has its own jargon. If you are an expert, this jargon is immensely helpful, because it allows you to express things correctly and concisely.

However, abstract explanations and jargon are big obstacles for beginners. Beginners need simple words, pictures, and analogies.

The root of all confusion

So… why don’t experts use simple words when talking to beginners and save the abstract formulations for conversations with fellow experts?

This question was answered in 1990 by a Stanford University graduate student in psychology named Elizabeth Newton.

She conducted an experiment in which a person was instructed to tap out a given famous song like, for example, Jingle Bells, with their fingers. A second person listened to the tapping and had to guess the name of the song.

The tappers had to estimate how many songs the person listening would guess correctly. On average they estimated that 50% of the songs would be guessed correctly. However, the real figure was only 2.5%.

The people who tapped the songs on the table heard the song in their head, and thus for them the task of guessing the song seemed easy. The reason why the tappers’ estimates and the real figure are so different is that once we know something, like the melody of the song in the experiment, it’s usually incredibly hard to imagine what it’s like not to know it.

This phenomenon is called “the curse of knowledge“. What it boils down to is that most people find it incredibly hard to put themselves in the listener’s shoes. That’s why experts talk to beginners like they would talk to a fellow expert. That’s why most textbooks and lectures are useless for their intended audience.

The curse of knowledge in the wild

Hundreds of perfect examples of the curse of knowledge in action can be seen at Scholarpedia. The project describes itself as a “peer-reviewed open-access encyclopedia, where knowledge is curated by communities of experts.” While this sounds great in theory, it’s worth examining a few articles to see how this works in practice.

Here’s an article I recently stumbled upon: http://www.scholarpedia.org/article/Lagrangian_formalism_for_fields.

I was astonished at how useless and confusing it is for any beginner. The problems start right at the beginning: the introductory sentences are incredibly overstuffed with buzzwords and jargon. Then, in the first section, instead of sticking to the 4-dimensional spacetime we are living in, the article “explains” everything for a general D-dimensional spacetime. This is something only experts care about and it can confuse beginners immensely. Finally, check out the references: there is not one article or book among them that I would recommend to a beginner student. Every beginner will be hopelessly confused after reading this article.

So without any doubt, the author of the Scholarpedia article knows what he is writing about. But unfortunately, he is subject to the curse of knowledge.

Another good place to observe the “curse of knowledge” in action is Wikipedia.
Almost any page on a math topic is completely useless for a beginner. The reason for this is, of course, that nowadays almost any math page has been (re-)written by an expert. On the one hand, this is a good thing, because it means that most things on Wikipedia are nowadays correct; Wikipedia has reached a high level of accuracy. Nevertheless, this also makes Wikipedia a horrible place to learn things. Unfortunately, most experts are not only subject to the curse of knowledge but also think that explanations in simple terms are a bit silly, trivial, and naive. Thus, when you try to add explanations to Wikipedia that would be valuable for beginners, they almost always get deleted immediately.

Wikipedia wants to offer one page on a given topic that caters to all audiences. However, what experts find illuminating can confuse a beginner endlessly. There is no way to present a topic such that it is a great read for every audience.

Okay fine, this is a problem. But what’s the solution?

I quote it all the time, but here it is once more:

“It often happens that two schoolboys can solve difficulties in their work for one another better than the master can. […] The fellow-pupil can help more than the master because he knows less. The difficulty we want him to explain is one he has recently met. The expert met it so long ago he has forgotten. He sees the whole subject, by now, in such a different light that he cannot conceive what is really troubling the pupil; he sees a dozen other difficulties which ought to be troubling him but aren’t.” (C. S. Lewis)

The only way to fight the curse of knowledge is to write down what you learn while you learn it. This way you can always see which problems you struggled with when you were a beginner. After this realization, I started to write down everything I learn, and I encourage others to do the same.

Unfortunately, many beginners feel that their notes are not valuable and that their thoughts aren’t good enough to be written down. Nothing could be further from the truth. Just because there are already 50 textbooks on a topic written by experts doesn’t mean that notes written by a beginner can’t help hundreds of fellow students.

For example, before I wrote my book “Physics from Symmetry”, hundreds (!) of textbooks on group theory and symmetries in physics already existed. To quote Predrag Cvitanović:

“Almost anybody whose research requires sustained use of group theory (and it is hard to think of a physical or mathematical problem that is wholly devoid of symmetry) writes a book about it.”

Nevertheless, after my book was published I received dozens of messages from students all around the world who told me my book was exactly what they needed. This is not some lame attempt to brag. Instead, I mention this to demonstrate that what a beginner writes can be valuable for others, especially fellow students.

Of course, not everyone has the time to write a complete book. For this reason, I started a small project called the Physics Travel Guide.

It’s also a wiki like Wikipedia and Scholarpedia, but it contains multiple layers. This means that it takes the various layers of understanding into account by offering several versions of each page.

Each page contains a layman section that explains the topic solely in terms of analogies and pictures, without any equations. Then there is a student section that uses some math but is still beginner-friendly. Finally, there is the abstract layer, called the researcher section, where the topic is explained in abstract terms and as rigorously as possible.

This way everyone can find an explanation in a language he understands. In addition, people interested in participating can see what kind of information is missing and don’t get discouraged because there is already lots of high-level stuff available.

To get a better idea of what I am talking about, compare the Physics Travel Guide page for the Lagrangian formalism with the Scholarpedia page I mentioned above.

PS: Even if you think such a layered Wiki is a stupid idea, please, whenever you learn something, write it down and make it publicly available. There are too few people who currently do this, although such notes are incredibly valuable for anyone who tries to learn something. It doesn’t matter if you publish what you learn on a personal blog, a personal Wiki or if you participate in a Wiki project. The only thing that matters is that we get more explanations for each layer of understanding.


*Of course, there are rare exceptions like, for example, Richard Feynman who was an expert and a great teacher.