
 

GLUONS

by Ray Shelton

 

INTRODUCTION

The quarks and the leptons are the primary constituents in the physicist’s view of reality. These particles interact among themselves. Understanding the characteristics of these interactions has been the primary task of the physicist during the last century. Using high-energy particle accelerators to probe matter, physicists found that the complexity of the natural world disappears at tiny distances: the interactions between quantum particles become simple and fall into symmetrical patterns. A simple picture of physical reality emerged with the realization that the complicated interactions among the quarks and leptons are in fact mediated by a definite set of quantum particles called gluons. As the name implies, the gluons cause the quantum particles to stick together; gluons are the glue that holds the physical world together.

But what are gluons, and how did the physicists come to understand their role in quantum interactions? Physicists are always seeking patterns in the world which imply an underlying simplicity that can be understood by the human mind. They are like detectives looking for clues — patterns that point to the nature of physical reality. What patterns have the physicists discovered? On the macroscopic level they have discovered two patterns of interaction of matter with matter: gravitational interactions and electromagnetic interactions. On the submicroscopic level they have also discovered two patterns of interaction: the weak interaction responsible for radioactivity and the strong interaction holding the protons and neutrons together in the nucleus of the atom. What is the nature of these four interactions? Physicists have attempted to explain all four as quantum interactions; that is, associated with each interaction on the quantum level is a gluon, whose “stickiness” is a measure of the strength of the interaction.

A remarkable feature of these quantum interactions is that their strength depends upon the energy of the interacting particles. Quarks and leptons, interacting at the relatively low energies available in modern-day laboratories, experience four separate interactions. But at much higher energies the strengths of the four interactions — the stickiness of the gluons — might all become equal, and the distinction between them would vanish. The four interactions might really be manifestations of one universal interaction, a super-force. This possibility is the basis for unified field theory (UFT), which has long been the dream of physicists. During the 1980s physicists partially realized this dream of unifying the different interactions. Theories were formulated which unify the electromagnetic, weak, and strong interactions into one interaction. Such unified field theories have been the result of long years of effort, and while the theoretical physicists argue about details, the principle of unification is firmly established.
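The idea that an interaction’s strength changes with energy can be made concrete with the textbook one-loop formula for the running of the QED fine-structure constant. The formula is standard, but the numbers below (electron loop only, energies in GeV) are a rough illustration and are not taken from this article:

```python
import math

def alpha_qed(Q, alpha0=1/137.036, m_e=0.000511):
    """One-loop running of the QED coupling, with only the electron
    loop included; Q and m_e are in GeV.  A rough sketch of how a
    coupling's "stickiness" changes with energy, not a precision formula."""
    return alpha0 / (1 - (2 * alpha0 / (3 * math.pi)) * math.log(Q / m_e))

# The effective coupling grows slowly as the probing energy rises:
low  = alpha_qed(0.000511)   # at the electron mass: 1/137
high = alpha_qed(91.19)      # near the Z mass: about 1/134 in this sketch
print(1 / low, 1 / high)
```

Run at different energies, the same electron charge looks stronger the harder it is probed — the behavior that suggests the four couplings might meet at some very high energy.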

The central idea of the modern concept of interactions is that the interactions are mediated by quanta of energy. Each of the four fundamental interactions has associated with it quanta of energy called gluons. The gluon associated with the electromagnetic interaction is the photon; the gluon associated with the gravitational interaction is the graviton. Weak gluons mediate the weak interaction, and colored gluons provide the quark binding force. Particles such as the quarks and leptons interact by exchanging these gluons, much as two tennis players exchange a tennis ball.

 

ELECTROMAGNETIC INTERACTIONS

A good place to begin our thinking about quantum interactions is the simplest atom, hydrogen, in which a single electron is bound by the electric field of the proton, as an example of electromagnetic interaction. In the classical way of thinking about the hydrogen atom, the electron is bound to the proton by an electric field. In the quantum field way of thinking, the two quanta — the electron and proton — are exchanging a third quantum — the photon. From this point of view there are not particles and fields; there are only quanta: two quanta exchanging a third quantum between themselves. We can think of the electron and proton as two tennis players hitting a ball — the photon — back and forth between them. This exchange of the ball binds the two players together — the photon acts as a kind of glue that holds the components of the hydrogen atom together. The photon was the first example of a class of particles that the physicists called gluons.

The gluon associated with electromagnetic interaction is the photon. The photon is the particle of light postulated by Einstein in his 1905 article on the photoelectric effect. At the time he postulated the photon, few physicists believed it existed. But finally in 1923 experiments were done in which recoiling electrons hit by photons could be detected, and this convinced most physicists of the reality of the photon. The photon was the first — and so far the only — gluon that has been directly confirmed by experiment.

 

QUANTUM ELECTRODYNAMICS

The modern theory that so successfully accounts for these experiments is called quantum electrodynamics (QED), and it was first formulated in the 1920s by Werner Heisenberg (1901-1976), Wolfgang Pauli (1900-1958), Pascual Jordan, and Paul Dirac (1902-1984). The photon was described by the quantized electromagnetic field and the electron by the quantized electron field. In the late 1940s, after many struggles with the mathematics of the theory, the final version of quantum electrodynamics was completed by Richard Feynman (1918-1988), Julian Schwinger (1918-1994), and Sin-Itiro Tomonaga (1906-1979), an accomplishment for which they were awarded the 1965 Nobel Prize in physics. Feynman and Schwinger made their contributions to QED just after World War II; Tomonaga, working in Japan in isolation from the American scientists during the war, published his results in 1943, but his work did not become known to the English-speaking world until 1947. The three of them had arrived at the same model by three different routes, which is a good indication that the model they came up with described a fundamental feature of nature.

Quantum electrodynamics was the first practical example of what physicists call a relativistic quantum field theory. It was “relativistic” because it incorporated the principles of Einstein’s special theory of relativity; it was “quantum” because it embraced the ideas of the new quantum mechanics; and it was a “field theory” because the primary objects of investigation were fields like the electric and magnetic fields. Quantum electrodynamics incorporated the photon as the gluon of the electromagnetic field. It was so successful that it became the paradigm for future efforts to devise mathematical descriptions of the world of quanta. There is no doubt that it was the success of the photon concept and quantum electrodynamics that encouraged physicists to promote the view that all interactions are mediated by gluons. This viewpoint met with success when physicists turned to examining the weak interaction.

But there was still the problem of the infinities and renormalization. The marriage of quantum mechanics with the special theory of relativity at first produced only a series of meaningless infinities. Quantum mechanics, like nineteenth-century physics, was based on point particles. In high school physics, we learned that force fields, like gravity and the electric field, obey something called the “inverse square law”. That is, the further one moves from the particle that is the source of the field, the weaker the field becomes. For example, the further one travels from the sun, the weaker its gravitational pull will be. This means that as one approaches the particle that is the source of the field, the force increases dramatically. In fact, the force field of a point particle at the surface of the particle must be proportional to the inverse of zero squared, which is 1/0. Mathematical expressions like 1/0 are not defined; they are infinities. The result of introducing point particles into our theory of fields is that all our calculations of physical quantities, such as energy, are riddled with 1/0s. This is enough to make the theory useless; calculations with a theory plagued with infinities cannot be trusted. The original assumption made by Heisenberg and Schrödinger was that quantum mechanics should be based on point particles.
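The arithmetic described above can be written out explicitly for an inverse-square force; this is the standard textbook form, shown here only to make the divergence visible:

```latex
F(r) = \frac{k\,q_1 q_2}{r^2}, \qquad
\lim_{r \to 0} F(r) = \frac{k\,q_1 q_2}{0^2} \longrightarrow \infty
```

Every quantity computed from the field of a point particle inherits a factor like this, which is why the calculations end up riddled with 1/0s.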

 

RICHARD FEYNMAN

In the 1940s, Richard Feynman (1918-1988) found a partial solution to the infinity problem in QED. Feynman’s solution was quite novel, albeit controversial. QED is a theory with two parameters — the charge and mass of the electron. Feynman made diagrams of what happens when electrons collide with one another. These diagrams were actually a shorthand notation for a tremendous amount of tedious mathematics. Feynman was able to condense hundreds of pages of algebra, which had required months of painstaking work, and isolate the troublesome infinities. These mathematical doodlings allowed him to see further than those who were lost in a jungle of complex mathematics. Think about playing with Tinker Toys. Assume that there are only three types of Tinker Toys: a straight stick (representing a moving electron), a wavy stick (representing a moving photon), and a joint (representing the interaction) that can connect a wavy stick with two straight sticks. Now suppose we connect these Tinker Toys in all possible ways. For example, start with the collision of two electrons. With these Tinker Toys we can create an infinite sequence of diagrams representing how the two electrons collide, each diagram representing a definite mathematical expression. In essence, there are two types of Tinker Toy diagrams that can be assembled: “loops” and “trees”, which contain no loops but resemble the branches of a tree. Feynman found that the trees were finite and yielded results in good agreement with experiment. But the loop diagrams were troublesome, yielding meaningless infinities. Feynman found that he could regroup a large set of diagrams so as to redefine the charge and mass of the electron, and thus absorb or cancel the infinities. At first, this juggling of infinities was greeted with skepticism. After all, it meant that the original mass and charge of the electron (the “bare” mass and charge) were essentially infinite to start with, but they absorbed (that is, “renormalized”) the infinities emerging from the graphs and then became finite.

Can infinity minus infinity yield a meaningful result? (Or, in the mathematical language of physics, can ∞ – ∞ = 0?) To the critics, using one set of infinities (arising from loops) to cancel another set of infinities (coming from the electronic charge and mass) looked like a parlor trick, not a fundamental leap in understanding how to marry quantum theory with special relativity. For years Dirac criticized renormalization theory as too clumsy to represent a genuinely profound leap in the understanding of nature. To Dirac, renormalization theory was like a card shark rapidly reshuffling his deck of Feynman diagrams until the cards with the infinities mysteriously disappeared.

However, the experimental results were undeniable. In the 1950s, Feynman’s new theory of renormalization (which provided a way to absorb the infinities) allowed the physicists to calculate with incredible precision the energy levels of the hydrogen atom. No other theory came close to approaching the fantastic calculational precision of QED. Although the theory worked only for electrons and photons (and not for the weak, strong, or gravitational forces), it was undeniably a stunning success. After it was shown that Feynman’s version was equivalent to Julian Schwinger’s and Sin-Itiro Tomonaga’s, the three shared the Nobel Prize in 1965 for eliminating the infinities from QED. In the 1950s and 1960s, Feynman’s rules and diagrams were all the rage in physics. The blackboards of the nation’s laboratories, which were once filled with dense equations, now blossomed with diagrams full of loops and trees. It seemed that everyone suddenly became an expert at doodling on scraps of paper and constructing Tinker Toy-like diagrams. Physicists reasoned that if Feynman’s rules and renormalization theory were so successful in solving the problems of QED, then maybe lightning would strike twice and the strong and weak interactions could also be “renormalized”.

But the 1950s and 1960s were confusing decades marked by many false starts. Feynman’s rules alone were not enough to renormalize the strong or weak interactions. Finally, after two decades of chaos, the key breakthroughs were made in the weak interactions. For the first time in a hundred years, since the time of Maxwell, the forces of nature took another step toward unification.

 

GRAVITATIONAL INTERACTIONS

Like the electromagnetic interaction, gravity, which was seen as a force exerted by a gravitational field, is also long-ranged. But here the similarity to electromagnetic fields ends. The electromagnetic interaction between charged particles is billions of billions of times stronger than gravity. And unlike the electromagnetic interaction, which has its source in electrically charged particles that can be either positively or negatively charged, gravity has its source in masses, which are always positive quantities. Unlike electromagnetic forces, which can be either attractive (between oppositely charged particles) or repulsive (between like-charged particles), gravity is always attractive (repulsive gravity or antigravity just doesn’t seem possible in the present theory of gravity, the general theory of relativity). These are just a few of the differences between gravity and electromagnetic interactions.
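The claim that electromagnetism dwarfs gravity can be checked with a few lines of arithmetic. The comparison below, between two protons, uses standard SI constants; it is a back-of-the-envelope sketch added for illustration, not part of the original text:

```python
# Both forces obey an inverse-square law, so the separation r
# cancels out of the ratio F_electric / F_gravity for two protons.
k   = 8.988e9     # Coulomb constant, N m^2 / C^2
e   = 1.602e-19   # proton charge, C
G   = 6.674e-11   # Newton's gravitational constant, N m^2 / kg^2
m_p = 1.673e-27   # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"electric/gravitational force ratio: {ratio:.1e}")  # about 1.2e36
```

The ratio comes out near 10^36 — “billions of billions” (10^18) actually understates the gap for a pair of protons.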

According to modern quantum field theory, every field, like the gravity field, has an associated quantum particle. For the gravity field these quanta are called gravitons; they are the gluons that bind large masses like the stars together. Instead of thinking of the gravity field as some kind of force field that extends between two massive bodies, like the earth and the moon, the quantum field physicists see the gravity field as “quantized” into countless gravitons. The two bodies are exchanging gravitons, and these exchanges make up what we perceive as the gravitational field between them. This is an unfamiliar but accepted way to understand the effect of gravity and other fields. Although physicists accept the idea that gravity fields are really quantized, it is unlikely that the graviton — the quantum of gravity — will ever be directly detected. In order to observe quantum particles like the graviton, and not just their associated fields, one must look at the level of individual quantum interactions, and at that level graviton interactions are just too weak to be observed. That is, if a graviton should hit a proton, the proton would recoil; but this recoil would be so tiny that we would never detect it. Gravity is the weakling of the quantum interactions.

 

THE WEAK INTERACTIONS

The weak interactions are responsible for the disintegration of many of the quantum particles encountered in the laboratory; in particular, they account for radioactivity — the disintegration of atomic nuclei. Of the large number of quantum particles, only a small number — the electron, photon, proton, and neutrino — are observed to be stable; that is, if they are left to themselves they do not disintegrate. Other particles, such as muons, neutrons, and other hadrons, will disintegrate rather rapidly into stable ones. These particle decay processes provide important information about the properties of those particles. Physicists have identified a special weak interaction responsible for the decay.

Unlike the electromagnetic and gravitational interactions, which have long-ranged effects that we can see around us, the weak interactions are extremely short-ranged — their effects can only be studied by a careful examination of the quantum world. For a long time physicists did not understand these mysterious weak interactions. Historically, the weak interaction was first encountered in the strange glow of radium salts seen in the late nineteenth century. As physicists investigated this glow, it became clear that the glow could not have a chemical origin — the energy was too great for such an explanation. Eventually it was found that the glow was due to the emission of particles from the atomic nucleus. Physicists found that the nuclei of some atoms were unstable, and as they disintegrated the emitted particles were detected as radioactivity. The study of this radioactivity took the physicists into the nucleus and eventually into the subnuclear world of hadrons. For decades there was much confusion in the theoretical and experimental accounts of radioactivity. After much agony and confusion, a theory was developed that could account for the experimentally observed properties of what was called the weak interaction. The cornerstone of the theory is the assumption that, like the gravitational and electromagnetic interactions, the weak interaction is also mediated by gluons — “weak gluons”. These weak gluons, unlike the graviton and photon, are so massive that no existing particle accelerator has the energy to create them.

How do weak gluons cause particles like hadrons to disintegrate? The other gluons, the photon and the graviton, do not do that — why are the weak gluons so different? To understand how a hadron disintegrates we have to remember what a hadron is made of — different flavors of quarks: up, down, strange, charmed, bottom, and top. What the weak gluons do is change the flavor of the quarks, and that is how they make the hadrons disintegrate. For example, a strange quark in a hadron can be converted into an up or down quark by interacting with a weak gluon. This means that hadrons possessing a strange quark in them can change into hadrons with only up and down quarks in them — an example of a decay of strange hadrons. Likewise, charmed quarks can change into up or down quarks through weak interactions. So this is the role of weak gluons — they let strangeness and charm leak away, leaving only ordinary hadrons with, ultimately, only the proton as the remaining stable hadron. Weak gluons also interact with leptons and cause them to decay.
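As a toy illustration of this flavor bookkeeping — not of any real computational machinery — one can track a hadron’s quark content as a weak interaction flips one flavor. The function name and representation below are invented for this sketch; the s → u transition and the quark contents of the Λ baryon (uds) and the proton (uud) are standard:

```python
def weak_flavor_change(quarks, old, new):
    """Return the quark content after a weak gluon changes one
    quark of flavor `old` into flavor `new` (toy bookkeeping only)."""
    q = list(quarks)
    q[q.index(old)] = new
    return sorted(q)

# A Lambda baryon (u, d, s) losing its strangeness ends up with the
# proton's quark content (u, u, d) -- the leftover energy goes into
# other decay products, which this sketch ignores.
print(weak_flavor_change(["u", "d", "s"], "s", "u"))
```

The point the sketch makes is the one in the text: the weak gluon does not destroy the hadron directly, it changes a quark’s flavor, and the new quark content corresponds to a different (lighter) hadron.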

Faced with the problem of the weak interactions, the physicists used a time-honored technique: applying analogies borrowed from previous theories to create new theories. The essence of QED was that it explained the force between electrons as an exchange of photons between them. By the same reasoning, physicists conjectured that the force between electrons and neutrinos was caused by the exchange of a new set of particles, called the W-particles (W for “weak”). The resulting theory (with electrons, neutrinos, and W-particles) can be explained with three kinds of Tinker Toys: a straight stick (representing an electron), a dotted stick (representing the neutrino), and a spiral (representing a W-particle). According to this W-particle theory, an electron (represented by the straight stick) collides with a neutrino (represented by the dotted stick) and exchanges a series of W-particles (represented by spirals). It is not difficult to assemble hundreds of these Feynman diagrams for weak-interaction processes created by the exchange of W-particles. But the problem was that this theory of the weak interactions was nonrenormalizable. No matter how cleverly Feynman’s bag of tricks was used, the theory was still plagued with infinities. The problem was not the rules that Feynman had cooked up, but the W-particle theory itself. The W-particle theory was a tremendous flop. As a consequence the theory of the weak interactions languished for three decades. Not only were the experiments difficult to perform (because of the notoriously elusive neutrino), but the W-particle theory was unacceptable.

 

ELECTRO-WEAK THEORY

Julian Schwinger had the ideal background to pick up the Yang-Mills concept and apply it to the weak and electromagnetic interactions. He was something of a child prodigy in mathematics. He was born in 1918 and entered the City College of New York at the age of fourteen, then transferred to Columbia University, where he gained his B.A. at the age of seventeen and a Ph.D. degree three years later. He worked with Robert Oppenheimer (the “father of the atomic bomb”) at the University of California, then at the University of Chicago and at MIT, before joining the faculty of Harvard University in 1945. A year later, at the age of twenty-eight, he became one of the youngest full professors ever appointed at that august university. Schwinger made major contributions to the development of QED, and in 1965 he shared the Nobel Prize in Physics with Richard Feynman and Sin-Itiro Tomonaga, of Tokyo University, for his work. But the rules of the game are slightly different with the weak interaction. In beta decay, for example, a neutron is converted into a proton, so the isotopic spin (isospin) symmetry is disturbed. But at the same time, in such an interaction, a neutrino is converted into an electron (or an antineutrino and an electron are created together, which amounts to the same thing), so there has been a transformation in the lepton world analogous to the isospin change in the hadron world. This leads to the idea of “weak isospin”, a quantum parameter like isospin but one that applies to leptons as well as to hadrons. In 1957, Schwinger took over the non-Abelian local gauge theory developed by Yang and Mills for the strong interaction and applied it to the weak interaction and electromagnetism (QED) together. Like that of Yang and Mills, Schwinger’s version had three vector bosons, one without charge and the other two carrying charge. And like Yang and Mills, Schwinger identified the uncharged field quanta with photons.
But, unlike Yang and Mills, Schwinger regarded the two charged vector bosons as the W+ and W−, the carriers of the weak force. There was still the problem of masses. Masses had to be added to the theory for the W particles more or less by hand, on an ad hoc basis. But this theory, in spite of its obvious flaws, again raised interesting new ideas. It implied that the weak force and the electromagnetic force were “really” the same strength as each other, in some sense symmetric, but that the symmetry had gotten lost, or was broken, because the W particles had mass (and therefore a limited range) while the photon had none (and therefore infinite range).

This led to two lines of development in field theory. Sidney Bludman of the University of California at Berkeley took up the links with Yang-Mills theory and pointed out in 1958 that the weak force alone could be described by a local, non-Abelian gauge theory with three particles: the W+, the W−, and a third vector boson, with zero charge, which he called Z0 or simply Z. This left electromagnetism completely out of the picture for the time being, but carried with it the implication that there ought to be weak interactions that involve no change in electric charge — ones that are mediated by the Z particle and are known as neutral current interactions. All these quanta are massless in Bludman’s model, so the model was far from being realistic. It was no closer to the “answer” than the earlier models.

Meanwhile, Sheldon Lee Glashow, a physicist who had been born in the Bronx in 1932 and graduated from Cornell University in 1954, had been studying for his Ph.D. degree at Harvard under the supervision of Schwinger. Glashow found a way to take Bludman’s variation on the theme and combine it with a description of electromagnetism, producing a model which he published in 1961. It included both a triplet of vector bosons to carry the weak force and a single vector boson to carry the electromagnetic force. The immediate benefit of this approach was that it proved possible to ensure that the way the singlet and triplet mixed together produced one very massive neutral particle, the Z, and took all the mass away from the other one, the photon, instead of producing two neutral particles that each had mass. But the mass still had to be put in by hand, to break the symmetry between the electromagnetic and weak forces in the basic equations, and worst of all, the theory did not seem to be renormalizable: it was plagued by the kind of infinities that had cropped up in QED but had there been removed by suitable mathematical sleight of hand. The sleight of hand needed to put mass into the earlier electroweak models made it impossible to carry out the renormalization trick as well.

At the same time, starting in the late 1950s and continuing into the early 1960s, the Pakistani physicist Abdus Salam and his colleague John Ward were developing an electroweak theory very similar to the one proposed by Glashow. Salam was born in Jhang, in what is now Pakistan, in 1926. After attending Punjab University he went on to Cambridge, where he was awarded his Ph.D. degree in 1952. He taught in Lahore and at Punjab University until 1954, when he returned to Cambridge University and, among other things, supervised the work of the student Ronald Shaw. Since the subjects chosen by students for investigation usually reflect the interests of their supervisors, Shaw’s work was no exception: Salam was interested in gauge theories of the basic forces of nature, along the lines of the Yang-Mills theory. In 1957 he took up the post of professor of theoretical physics at Imperial College in London, and in 1964 he was the moving force behind the establishment of the International Centre for Theoretical Physics in Trieste, an institute that provides research opportunities for physicists from developing countries. Since then, Salam has been director of the Centre in Trieste, spending some of his time there and some at Imperial College.

Salam and his colleague Ward developed a variation on the electroweak theme. (Ward, a British physicist, later worked at several U.S. institutions in the 1960s, including Johns Hopkins University.) It suffered from the same defects as Glashow’s version — the masses had to be put in by hand, and largely as a result of this, it was impossible to renormalize the theory. The first step toward solving this problem was taken in 1967, when Salam and, independently, the American physicist Steven Weinberg found a way to make the masses of the weak vector bosons appear naturally (well, almost naturally) out of the equations. The trick involved spontaneous symmetry breaking, and once again it depended upon ideas that had been developed initially in the context of the strong field.

Symmetry breaking can be easily understood by looking at the weak gravity field of the earth. To an astronaut in free fall in a spacelab, there is no special direction in space. If the astronaut lets go of a pen, it floats off in whatever direction the astronaut pushes it. All directions are equivalent; there is a basic symmetry. However, on the surface of the earth, things are different. If you give the pen a push in any direction and let go of it, it always falls the same way: downwards. “Downwards” means toward the center of the earth. Drop a pen at the North Pole, it falls downwards; drop a pen at the South Pole, it falls downwards. But the two “downwards” are opposite in direction to one another. The basic symmetry is hidden, or broken, by the earth’s gravitational field.

Another form of hidden symmetry can be seen by another example, involving gravity again. Imagine a perfectly smooth, symmetrical surface shaped like a Mexican hat, with the brim turned up. If the “hat” rests upon a horizontal surface, it is completely symmetrical in the earth’s gravitational field. Now imagine placing a small, round ball on the top of the hump in the middle of the hat. Everything is perfectly symmetrical as long as the ball does not move. But we know what will actually happen in such a situation. The ball is unstable, balanced at the highest point of the hump, and it will soon roll off and roll down the side of the hump to rest in the rim of the hat. Once this happens, the hat and ball are no longer symmetrical. The unstable symmetry is broken. There is a special direction associated with the system, a direction defined by a line pointing outward from the center of the hat through the place where the ball rests on the rim. The system is now stable, in the lowest energy state that it can reach, but it is no longer symmetrical. It turns out that the masses associated with the field quanta in a Yang-Mills type of theory can arise from a similar symmetry breaking involving the abstract “internal space” in which the arrows of isospin point.

The idea gradually brewed up in the 1950s and 1960s from the work of several theoretical physicists, but it came to full flower with the work of Peter Higgs at the University of Edinburgh between 1964 and 1966. Higgs, who was born in 1929, had studied at King’s College in London from 1947 onward and received his Ph.D. in 1954. He took up a post in Edinburgh in 1960. Higgs proposed that there must be an extra field added to the Yang-Mills model, one with the unusual property that its energy is least not when the value of the field is zero, but when the field has a value bigger than zero. The electromagnetic field, and most other fields, have zero energy when the value of the field is zero, and the state in which all fields have minimum energy is what we call the vacuum. If all fields were like the electromagnetic field, that would be the same as saying that in the vacuum state all fields are zero. But the Higgs field has a nonzero value even in its state of minimum energy, and this gives the vacuum itself a character it would not otherwise possess. Reducing the Higgs field to zero would actually involve putting energy into the system.
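The “Mexican hat” picture and the nonzero-energy minimum described above correspond to a standard potential, written here as an illustration (the symbols are the usual textbook ones, not notation from this text):

```latex
V(\phi) = -\mu^2 |\phi|^2 + \lambda |\phi|^4 , \qquad \mu^2,\ \lambda > 0 ,
\qquad
|\phi|_{\min} = \frac{\mu}{\sqrt{2\lambda}} , \qquad
V_{\min} = -\frac{\mu^4}{4\lambda} < V(0) = 0 .
```

The minimum lies not at zero field but on a whole circle of field values, and the system must settle somewhere on that circle — picking out a direction and breaking the symmetry, exactly as the ball rolling off the hump does. Pushing the field back to zero costs energy, which is the unusual property Higgs needed.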

The implications of this are profound. In terms of isospin, the Higgs field provides a frame of reference, a direction against which the arrow that defines proton and neutron can be measured. A proton can be distinguished from a neutron by comparing the direction of its isospin arrow with the direction defined by the Higgs field. But when the isospin arrow rotates during a gauge transformation, the Higgs arrow rotates as well, so that the angle between them stays the same. The angle that used to correspond to a proton now corresponds to a neutron, and vice versa. Without the Higgs mechanism, there would be no way to tell the difference between proton and neutron at all, because there would be nothing to measure their different isospins against. All that can be measured is the relative angle between the isospin arrow and the Higgs arrow, not any absolute orientation of isospin. And the Higgs field does this even though the field itself is a scalar, which has only magnitude and does not point in any preferred direction at each point of “real” space.

The effect of all this on the vector bosons is dramatic. There are four scalar Higgs bosons required by the field theory, and the basic Yang-Mills approach gives three massless vector bosons. When the two elements are put together, three of the Higgs bosons merge with the three vector bosons — in the graphic terminology used by Abdus Salam, each vector boson “eats” one Higgs boson. And when this happens, each vector boson gains both mass and a spin state corresponding to the one carried by the Higgs boson. Instead of having three massless vector bosons and four Higgs particles, the theory predicts that there should be three observable vector bosons that each have a definite mass, plus one scalar Higgs boson, which also has a large mass but whose precise mass cannot be predicted by the theory. The Higgs field breaks the underlying symmetry in just the right way to fit in with what we observe. At the cost of one extra undetected particle, mass appears naturally in all the variations on the Yang-Mills approach.
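The bookkeeping in this “eating” can be checked by counting degrees of freedom; the count below is the standard one, added here only as a consistency check. A massless vector boson has two polarization states, a massive one has three, and each scalar has one:

```latex
\underbrace{3 \times 2}_{\text{massless vectors}}
\;+\; \underbrace{4 \times 1}_{\text{Higgs scalars}}
\;=\; 10 \;=\;
\underbrace{3 \times 3}_{\text{massive vectors}}
\;+\; \underbrace{1 \times 1}_{\text{surviving Higgs}}
```

Nothing is lost or created: the three eaten scalars reappear as the extra polarization state of each now-massive vector boson, and one scalar survives as the observable Higgs particle.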

Higgs himself had been working in the context of the strong field, but soon his ideas were taken over into the developing electroweak theory. First off the mark was Steven Weinberg in 1967. Weinberg had been an exact contemporary of Glashow (although he was six months the younger, having been born in May 1933) at the Bronx High School of Science, from which Weinberg graduated in 1950, and at Cornell University, where he graduated in 1954. But then he followed a different path to end up with a model very similar to Glashow’s description of the electroweak interaction, with the bonus of a Higgs-type mechanism included. By 1960 Weinberg had arrived at Berkeley, where he stayed until 1969 before moving on, first to MIT and then, in 1973, to Harvard. Weinberg’s approach to electroweak unification was largely his own, though he drew upon the same culture, the same background pool of knowledge, that Glashow and Salam were drawing on. His interest in the weak interaction went back to his Ph.D. work at Princeton, and in the 1960s he worked toward an equivalent of the Higgs mechanism in his own way. His electroweak model, including masses for the vector bosons generated by spontaneous symmetry breaking, was submitted for publication in October 1967 and appeared in the journal Physical Review Letters before the end of the year.

Salam heard about the Higgs mechanism from a colleague at Imperial College a few months before Weinberg submitted that paper for publication. Salam took the electroweak model that he had developed with Ward and added the Higgs mechanism to it, giving essentially the same basic model that Weinberg had developed, with the masses now arising naturally. He gave a series of lectures on the new model at Imperial College in 1967, followed by a talk at the Nobel Symposium in May 1968, later published in the symposium proceedings.

In due course, in 1979, Glashow, Salam, and Weinberg jointly received the Nobel Prize in Physics for their role in creating the electroweak theory, a step as important as Maxwell’s development of the unified electromagnetic theory a century before. It took some time for even most theorists to appreciate fully the significance of the Weinberg-Salam model, because it was not until 1971 that a Dutch physicist, Gerard ’t Hooft, showed that the electroweak theory was, indeed, renormalizable. And then, in 1973, experiments at CERN came up with evidence of the elusive neutral-current interactions that the theory predicted, interactions mediated by the neutral Z particle. It was this renormalization of gauge theory by ’t Hooft that led to the explosive development of field theory in the 1970s, to a theory of the strong interactions, and to an understanding of what happened in the earliest moments of the Universe itself.

In 1967-68, Steven Weinberg, Abdus Salam, and Sheldon Lee Glashow noticed the amazing similarity between the photon and the W-particle. They made the following observation: although Einstein had tried to unite light and the gravitational force, perhaps the correct unification scheme was to unite the photon and the W-particle. The result was a new W-particle theory, called the electroweak theory, which differed decisively from the previous W-particle theories because it used the most sophisticated form of gauge theory available at the time, the Yang-Mills theory. This theory, formulated in 1954, possessed more symmetries than Maxwell ever dreamed of. The Yang-Mills theory contained a new mathematical symmetry [represented mathematically as SU(2) × U(1)] that allowed Weinberg and Salam to unite the weak and electromagnetic forces on the same footing. The theory also treats the electron and the neutrino symmetrically, as members of one “family”; as far as the theory was concerned, the electron and the neutrino were actually two sides of the same coin. (But the theory did not explain why there were three seemingly redundant families.)

Although the Yang-Mills theory was the most ambitious and advanced theory of its time, it was ignored, because it was assumed to be nonrenormalizable, like all the other dead ends, and therefore riddled with infinities. However, all that changed in 1971. After three decades of agonizing over the infinities festering within the W-particle theory, a dramatic breakthrough was made by a twenty-four-year-old Dutch graduate student, Gerard ‘t Hooft, who finally proved that the Yang-Mills theory was renormalizable. To double-check his calculation showing the cancellation of infinities, ‘t Hooft put the calculations on a computer. One can imagine the excitement that ‘t Hooft must have felt as he awaited the results. He later recalled, “The results of that test were available by July 1971; the output of the program was an uninterrupted string of zeros. Every infinity canceled exactly.”

As word of the cancellation spread, within months hundreds of physicists rushed to learn the techniques of ‘t Hooft and the theory of Weinberg and Salam. For the first time, real numbers, not infinities, poured out of the theory for the S-matrix. Earlier, from 1968 to 1970, not a single published paper had made reference to the theory of Weinberg and Salam. But by 1973, when the full impact of their results was being appreciated, 162 papers on the theory were published.

Somehow, in ways that physicists still do not completely understand, the symmetries built into the Yang-Mills theory completely eliminated the infinities that had plagued the earlier W-particle theory. It was a replay of the discovery made by the physicists who had studied QED years earlier: the symmetries somehow canceled the divergences in a quantum field theory.

 

THE STRONG INTERACTION

The strong interaction is the quark-binding interaction. The hadrons are made out of quarks, but what holds the quarks together? Why don’t they just fly apart when hadrons collide? The answer that theoretical physicists have come up with is that the quarks are held together by a new set of gluons so sticky that the quarks can never come apart and become unglued. The necessity for these new gluons became apparent in the same famous electron-scattering experiments at the Stanford Linear Accelerator that first saw the quarks inside the proton. It became clear once again that interactions are mediated by gluons. Physicists turned their attention to understanding the new quark-binding gluons, and soon a new theory was developed, called quantum chromodynamics (QCD). This was a relativistic quantum field theory that gave a mathematical description of these strong gluons, just as quantum electrodynamics (QED) gave a description of the photon as the gluon of the electromagnetic field.

Meanwhile, in the early 1970s, the excitement over Weinberg and Salam’s electroweak theory was spilling over into the quark model. The natural question was asked: why not try symmetry and the Yang-Mills field to eliminate the divergences? As successful as the quark model was, one nagging question remained: where was a satisfactory renormalizable theory that could explain the force that held these quarks together? The quark model was still incomplete. Today there is practically universal consensus in the physics community that the Yang-Mills theory, with all its properties and symmetries, can successfully explain, within a renormalizable framework, how the quarks bind together. This application of the Yang-Mills theory to the problem of binding the quarks together is called quantum chromodynamics. QCD was proposed in 1973 by H. Fritzsch, H. Leutwyler, and Murray Gell-Mann (1929-) [the last of whom was one of the original proponents of quarks in 1963], although a similar idea had been put forth in 1966 by Yoichiro Nambu (1922-).

The main idea of quantum chromodynamic theory is that each of the quark flavors (the up, down, strange, charmed, top, and bottom quarks) also comes in three “colors”. Of course, the quarks are not really colored, any more than they really have flavors; that is just a way of talking about the quarks and describing how they act. The “color” of a quark refers to a new kind of charge. The new strong-binding gluons stick, or couple, to the color charges in the same way that a photon couples to electric charge. According to quantum chromodynamic theory, there are eight “color gluons” that provide the strong quark-binding forces. Because there are three quarks inside the proton, three quark “colors” are needed to distinguish them uniquely, namely red, green, and blue. So the net effect of the introduction of color is to triple the number of quarks; each of the flavors must come in three colors.

These forces, mediated by the colored gluons, are supposed to be so strong that all quanta possessing color charge (so that the colored gluons stick to them) are permanently bound together. Consequently the quarks, since they carry color charge, stick to one another so tightly that they never become unglued. But there can be combinations of colored quarks and gluons for which the sum of all the color charges adds to zero, just as the positive electric charge of an atomic nucleus cancels the negative electric charge of the orbiting electrons. Such “color neutral” combinations of colored quarks correspond exactly to the observed hadrons. This set of concepts of quantum chromodynamic theory is supported by a large body of experimental data.
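The idea of color charges summing to zero can be illustrated with a toy calculation. The sketch below represents the three colors by the standard SU(3) weight vectors (the particular vectors and normalization are a conventional choice, not something stated in the text above); a combination is “color neutral” exactly when the vectors sum to zero:

```python
import math

# Toy check of color neutrality.  Each color is represented by its
# SU(3) weight vector; an anticolor is the negative of the color.
RED   = ( 0.5,  1 / (2 * math.sqrt(3)))
GREEN = (-0.5,  1 / (2 * math.sqrt(3)))
BLUE  = ( 0.0, -1 / math.sqrt(3))

def anti(color):
    """Anticolor: all components reversed in sign."""
    return (-color[0], -color[1])

def is_color_neutral(charges, tol=1e-12):
    """True when the color-charge vectors sum to zero."""
    x = sum(c[0] for c in charges)
    y = sum(c[1] for c in charges)
    return abs(x) < tol and abs(y) < tol

baryon = [RED, GREEN, BLUE]           # e.g. the three quarks in a proton
meson  = [RED, anti(RED)]             # a quark-antiquark pair
print(is_color_neutral(baryon))       # True
print(is_color_neutral(meson))        # True
print(is_color_neutral([RED, BLUE]))  # False: not an observable hadron
```

The two neutral cases correspond to the two families of observed hadrons, baryons (three quarks) and mesons (quark plus antiquark), while a bare pair of unlike colors is never seen in isolation.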

 

THE STANDARD MODEL

The electroweak theory and quantum chromodynamics (QCD) have proved so successful in describing the particle world that the combination is called the Standard Model. It has reduced the scope of particle physics to two sets of particles (the leptons and the quarks) and two forces (the electroweak and the strong), ignoring gravity for the time being. But it is still incomplete. QCD has yet to be combined with the electroweak theory into one Grand Unified Theory, or GUT, and gravity isn’t included at all. So there is plenty to keep the theorists busy for a while. According to the current view of particle physics, all matter is made out of three kinds of elementary particles: leptons, quarks, and mediators.


1.  There are six leptons, classified according to their charge Q, electron number Le, muon number Lμ, and tau number Lτ. They are the electron e, the electron neutrino νe, the muon μ, the muon neutrino νμ, the tau τ, and the tau neutrino ντ.
They fall naturally into three families or generations:

 

LEPTON CLASSIFICATION
Generation   Lepton   Q    Le   Lμ   Lτ
First        e        -1    1    0    0
             νe        0    1    0    0
Second       μ        -1    0    1    0
             νμ        0    0    1    0
Third        τ        -1    0    0    1
             ντ        0    0    0    1

 

There are six antileptons, with all the signs reversed. The positron, for example, carries a charge of +1 and an electron number of -1. So there are really 12 leptons, all told.
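The lepton table and the sign-reversal rule for antileptons can be expressed as data; the names used here (e, nu_e, and so on) are just illustrative labels:

```python
# The lepton table above, as data.  Each entry: (Q, Le, Lmu, Ltau).
leptons = {
    "e":   (-1, 1, 0, 0), "nu_e":   (0, 1, 0, 0),
    "mu":  (-1, 0, 1, 0), "nu_mu":  (0, 0, 1, 0),
    "tau": (-1, 0, 0, 1), "nu_tau": (0, 0, 0, 1),
}

# An antilepton carries every quantum number with the sign reversed.
antileptons = {"anti_" + name: tuple(-q for q in nums)
               for name, nums in leptons.items()}

# The positron has charge +1 and electron number -1, as stated above.
assert antileptons["anti_e"] == (1, -1, 0, 0)
print(len(leptons) + len(antileptons))  # 12 leptons, all told
```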

2.  Similarly, there are six “flavors” of quarks, classified according to charge Q, downness D, upness U, strangeness S, charm C, beauty B, and truth T. They are down d, up u, strange s, charm c, bottom b, and top t. The quarks also fall into three generations:

 

QUARK CLASSIFICATION
Generation   Quark   Q      D    U    S    C    B    T
First        d       -1/3   -1   0    0    0    0    0
             u        2/3    0   1    0    0    0    0
Second       s       -1/3    0   0   -1    0    0    0
             c        2/3    0   0    0    1    0    0
Third        b       -1/3    0   0    0    0   -1    0
             t        2/3    0   0    0    0    0    1

 

3.  Finally, every interaction has its mediators: the photon for the electromagnetic interaction, two W‘s and a Z for the weak interaction, and the graviton for the gravitational interaction. What is the mediator for the strong interaction? According to Yukawa’s original theory (1934), the mediator of the strong force was the pion, but with the discovery of the heavy mesons this simple picture could not stand; the protons and neutrons could now exchange rho’s and eta’s and K‘s and phi’s and all the rest. The quark model brought an even more radical revision, for if the neutron, proton, and mesons are complicated composite structures, there is no reason to believe that their interactions should be simple. So to study the strong interaction at the fundamental level, the physicist should look at the interaction between individual quarks. The question then becomes: what particle is exchanged between two quarks in a strong interaction? The mediator of the strong interaction is called the gluon, and in the Standard Model there are eight of them. The gluons themselves carry color, and therefore (like the quarks) should not exist as isolated particles. Thus gluons can be detected only within hadrons, or in colorless combinations with other gluons called glueballs. There is substantial indirect experimental evidence for the existence of gluons.
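The count of eight gluons can be illustrated by naive counting: each gluon carries one color and one anticolor, giving 3 × 3 = 9 combinations, of which one (a particular totally color-neutral superposition) drops out, leaving 8. This matches the group-theoretic count of N² − 1 = 8 generators for SU(3). A toy tally:

```python
from itertools import product

# Naive gluon counting: each gluon carries a color and an anticolor.
colors = ["red", "green", "blue"]
combos = list(product(colors, colors))  # (color, anticolor) pairs

print(len(combos))      # 9 combinations
print(len(combos) - 1)  # 8 gluons, after removing the color singlet
print(3 ** 2 - 1)       # same count from the N*N - 1 generators of SU(3)
```

The combination removed is not any single (color, anticolor) pair but an equal-weight superposition of them; the naive subtraction nevertheless gives the right total.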


This adds up to an extraordinarily large number of elementary particles in the Standard Model: 12 leptons, 36 quarks, and 12 mediators (the graviton is not counted, since the gravitational interaction is not included in the Standard Model). And since the Glashow-Weinberg-Salam theory calls for at least one Higgs particle, the minimum number of particles in the Standard Model is 61.
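The arithmetic behind the count of 61 can be spelled out:

```python
# Tallying the particle count quoted above for the Standard Model.
leptons   = 6 * 2       # six leptons plus their six antileptons
quarks    = 6 * 3 * 2   # six flavors, three colors each, plus antiquarks
mediators = 1 + 3 + 8   # the photon; W+, W-, and Z; eight gluons
higgs     = 1           # at least one Higgs particle

total = leptons + quarks + mediators + higgs
print(leptons, quarks, mediators, total)  # 12 36 12 61
```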