Monday, October 27, 2008
Black Hole
A brief history of black holes
A black hole is an object or region of space where the pull of gravity is so strong that nothing can escape from it, i.e. the escape velocity exceeds the speed of light. The term was coined in 1968 by the physicist John Wheeler. However, the possibility that a lump of matter could be compressed to the point at which its surface gravity would prevent even the escape of light was first suggested in the late 18th century by the English physicist John Michell (c.1724-1793), and then by Pierre Simon, Marquis de Laplace (1749-1827).
Black holes began to take on their modern form in 1916 when the German astronomer Karl Schwarzschild (1873-1916) used Einstein's general theory of relativity to find out what would happen if all the mass of an object were squeezed down to a dimensionless point – a singularity. He discovered that around the infinitely compressed matter would appear a spherical region of space out of which nothing could return to the normal universe. This boundary is known as the event horizon since no event that occurs inside it can ever be observed from the outside. Although Schwarzschild's calculations caused little stir at the time, interest was rekindled in them when, in 1939, J. Robert Oppenheimer, of atomic bomb fame, and Hartland Snyder, a graduate student, described a mechanism by which black holes might actually be created in the real universe. A star that has exhausted all its useful nuclear fuel, they found, can no longer support itself against the inward pull of its own gravity. The stellar remains begin to shrink rapidly. If the collapsing star manages to hold on to a critical amount of mass, no force in the Universe can halt its contraction and, in a fraction of a second, the material of the star is squeezed down into the singularity of a black hole.
Stellar black holes
Artist's impression of Cygnus X-1
In theory, any mass, if sufficiently compressed, would become a black hole. The Sun would suffer this fate if it were shrunk down to a ball about 6 km in diameter. In practice, a stellar black hole is only likely to result from a heavyweight star whose remnant core exceeds the Oppenheimer-Volkoff limit following a supernova explosion.
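The compression figure for the Sun follows directly from the Schwarzschild radius formula, r_s = 2GM/c². A minimal check, using rounded standard values for the constants:

```python
# Schwarzschild radius r_s = 2GM/c^2: the size to which a mass must be
# compressed for its escape velocity to reach the speed of light.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

r_s = 2 * G * M_sun / c**2          # ~2.95e3 m
diameter_km = 2 * r_s / 1000.0      # ~5.9 km

print(f"Sun's Schwarzschild diameter: {diameter_km:.1f} km")
```

Any other mass works the same way; r_s scales linearly with M, so a galaxy-mass black hole is correspondingly enormous.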
More than two dozen stellar black holes have been tentatively identified in the Milky Way, all of them part of binary systems in which the other component is a visible star. A handful of stellar black holes have also been discovered in neighboring galaxies. Observations of highly variable X-ray emission from the accretion disk surrounding the dark companion, together with a mass determined from observations of the visible star, enable a black hole characterization to be made.
Among the best stellar black hole candidates are Cygnus X-1, V404 Cygni, and several microquasars. The two heaviest known stellar black holes lie in galaxies outside our own. One of these black hole heavyweights, called M33 X-7, is in the Triangulum Galaxy (M33), 3 million light-years from the Milky Way, and has a mass of 15.7 times that of the Sun. Another, whose discovery was announced in October 2007, just a few weeks after that of M33 X-7, is called IC 10 X-1, and lies in the nearby dwarf galaxy, IC 10, 1.8 million light-years away. IC 10 X-1 shattered the record for a stellar black hole with its mass of 24 to 33 times that of the Sun. Given that massive stars lose a significant fraction of their content through violent stellar winds toward the end of their lives, and that interaction between the members of a binary system can further increase the mass loss of the heavier star, it is a challenge to theorists to explain how any star could retain enough matter to form a black hole as heavy as that of IC 10 X-1.
The microquasar V4641 Sagittarii contains the closest known black hole to Earth, with a distance of about 1,500 light-years.
Supermassive, intermediate-mass and mini black holes
Supermassive black holes almost certainly exist at the centers of many large galaxies, and are the ultimate source of energy behind the phenomenon of the active galactic nucleus. At the other end of the scale, it has been hypothesized that countless mini black holes may populate the universe, formed in the early stages of the Big Bang; as yet, however, there is no observational evidence for them.
In 2002, astronomers found a missing link between stellar-mass black holes and the supermassive variety in the form of middleweight black holes at the centers of some large globular clusters. The giant G1 cluster in the Andromeda Galaxy appears to contain a black hole of some 20,000 solar masses. Another globular cluster, 32,000 light-years away within our own Milky Way, apparently harbors a similar object weighing 4,000 solar masses. Interestingly, the ratio of the black hole's mass to the total mass of the host cluster appears constant, at about 0.5%. This proportion matches that of a typical supermassive black hole at a galaxy's center compared with the total galactic mass. If this result holds for many more cluster black holes, it will suggest a profound link between the ways the two types of black hole form. It is possible that supermassive black holes form when clusters deposit their middleweight black holes in the galactic centers, where they merge.
Inside a black hole
According to the general theory of relativity, the material inside a black hole is squashed inside an infinitely dense point, known as a singularity. This is surrounded by the event horizon at which the escape velocity equals the speed of light and that thus marks the outer boundary of a black hole. Nothing from within the event horizon can travel back into the outside universe; on the other hand, matter and energy can pass through this surface-of-no-return from outside and travel deeper into the black hole.
For a non-rotating black hole, the event horizon is a spherical surface, with a radius equal to the Schwarzschild radius, centered on the singularity at the black hole's heart. For a spinning black hole (a much more likely contingency in reality), the event horizon is distorted – in effect, caused to bulge at the equator by the rotation. Within the event horizon, objects and information can only move inward, quickly reaching the singularity. A technical exception is Hawking radiation, a quantum mechanical process that is unimaginably weak for massive black holes but that would tend to cause the mini variety to explode.
Three distinct types of black hole are recognized:
* A Schwarzschild black hole is characterized solely by its mass, lacking both rotation and charge. It possesses both an event horizon and a point singularity.
* A Kerr black hole is formed by rotating matter, possesses a ring singularity, and is of interest in connection with time travel since it permits closed time-like paths (through the ring).
* A Reissner–Nordström black hole is formed from non-rotating but electrically charged matter. When collapsing, such an object forms a Cauchy horizon, but whether it also forms closed time-like paths is uncertain.
The equations of general relativity also allow for the possibility of spacetime tunnels, or wormholes, connected to the mouths of black holes. These could act as shortcuts linking remote points of the universe. Unfortunately, they appear to be useless for travel or even for sending messages since any matter or energy attempting to pass through them would immediately cause their gravitational collapse. Yet not all is lost. Wormholes, leading to remote regions in space, might be traversable if some means can be found to hold them open long enough for a signal, or a spacecraft, to pass through.
How black holes work
Escape Velocity
If a ball is thrown upwards from the surface of the Earth, it reaches a certain height and then falls back. The harder it is thrown, the higher it goes. Laplace calculated the height it would reach for a given initial speed. He found that the height increased faster than the speed, so that the height became very large for a not very great speed. At a speed of 40,000 km/h (25,000 mph, only 20 times faster than Concorde) the height becomes very great indeed - it tends to infinity, as a mathematician would say. This speed is called the 'escape velocity' from the surface of the Earth, and is the speed which must be achieved if a spacecraft is to reach the Moon or any of the planets. Being a mathematician, Laplace solved the problem for all round bodies, not just the Earth.
He found a very simple formula for the escape velocity. This formula says that small but massive objects have large escape velocities. For example, if the Earth could be squeezed to a quarter of its present size, the escape velocity would need to be twice as large. This surprisingly simple derivation gives exactly the same answer as is obtained from the full theory of relativity.
Light travels at just over 1,000 million km/h (670 million mph), and in 1905 Albert Einstein showed in the Special Theory of Relativity that nothing can travel faster than light. The Laplace formula can be turned around to tell us what radius an object must have if the escape velocity from its surface is to be the speed of light. This particular radius is called the 'Schwarzschild radius' in honor of the German astronomer who first derived it from Einstein's theory of gravity (the General Theory of Relativity). The formula tells us that the Schwarzschild radius for the Earth is less than a centimeter, compared with its actual radius of 6,357 km.
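The "very simple formula" referred to above is the standard Newtonian result v_esc = √(2GM/R), and turning it around for v_esc = c gives the Schwarzschild radius r_s = 2GM/c². A short numerical sketch checking the three claims in this passage (40,000 km/h, the quarter-size doubling, and the sub-centimeter r_s for the Earth), with rounded standard constants:

```python
from math import sqrt

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # Earth's mass, kg
R_earth = 6.371e6   # Earth's mean radius, m

# Escape velocity from the Earth's surface: v_esc = sqrt(2GM/R)
v_esc = sqrt(2 * G * M_earth / R_earth)     # m/s
v_esc_kmh = v_esc * 3.6                     # ~40,000 km/h

# Squeezing the Earth to a quarter of its radius doubles the escape velocity,
# since v_esc scales as 1/sqrt(R) at fixed mass.
ratio = sqrt(2 * G * M_earth / (R_earth / 4)) / v_esc   # exactly 2

# Radius at which v_esc would equal c: the Schwarzschild radius
r_s_cm = (2 * G * M_earth / c**2) * 100     # just under 1 cm

print(f"{v_esc_kmh:,.0f} km/h, ratio {ratio:.1f}, r_s {r_s_cm:.2f} cm")
```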
Apparent versus Event Horizon
As a doomed star reaches its critical circumference, an "apparent" event horizon forms suddenly. Why "apparent"? Because it separates light rays that are trapped inside a black hole from those that can move away from it. However, some light rays that are moving away at a given instant may find themselves trapped later if more matter or energy falls into the black hole, increasing its gravitational pull. The event horizon is traced out by "critical" light rays that will never escape nor fall in. Even before the star meets its final doom, the event horizon forms at the center, balloons out and breaks through the star's surface at the very moment the star shrinks through the critical circumference. At this point in time, the apparent and event horizons merge as one: the horizon. For more details, see the caption for the above diagram. The distinction between apparent horizon and event horizon may seem subtle, even obscure. Nevertheless, the difference becomes important in computer simulations of how black holes form and evolve. Beyond the event horizon, nothing, not even light, can escape. So the event horizon acts as a kind of "surface" or "skin" beyond which we can venture but cannot see. Imagine what happens as you approach the horizon, and then cross the threshold.
Care to take a one-way trip into a black hole?
The Singularity
At the center of a black hole lies the singularity, where matter is crushed to infinite density, the pull of gravity is infinitely strong, and space-time has infinite curvature. Here it's no longer meaningful to speak of space and time, much less space-time. Jumbled up at the singularity, space and time cease to exist as we know them.
The Limits of Physical Law
Newton and Einstein may have looked at the universe very differently, but they would have agreed on one thing: all physical laws are inherently bound up with a coherent fabric of space and time. At the singularity, though, the laws of physics, including General Relativity, break down. Enter the strange world of quantum gravity. In this bizarre realm in which space and time are broken apart, cause and effect cannot be unraveled. Even today, there is no satisfactory theory for what happens at and beyond the singularity.
It's no surprise that throughout his life Einstein rejected the possibility of singularities. So disturbing were the implications that, by the late 1960s, physicists conjectured that the universe forbade "naked singularities." After all, if a singularity were "naked," it could alter the whole universe unpredictably. All singularities within the universe must therefore be "clothed." But inside what? The event horizon, of course! Cosmic censorship is thus enforced. Not so, however, for that ultimate cosmic singularity that gave rise to the Big Bang.
ELECTROWEAK PHYSICS
W. Hollik
Max-Planck-Institut für Physik
D-80805 Munich, Germany
A. Accomando, F. del Aguila, M. Awramik, A. Ballestrero
J. van der Bij, W. Beenakker, R. Bonciani, M. Czakon
G. Degrassi, A. Denner, K. Diener, S. Dittmaier, A. Ferroglia
J. Fleischer, A. Freitas, N. Glover, J. Gluza, T. Hahn
S. Heinemeyer, S. Jadach, F. Jegerlehner, W. Kilian
M. Krämer, J. Kühn, E. Maina, S. Moretti, T. Ohl
C.G. Papadopoulos, G. Passarino, R. Pittau, S. Pozzorini
M. Roth, T. Riemann, J.B. Tausk, S. Uccirati
A. Werthenbach, and G. Weiglein
NCSR Demokritos Athens, CERN, Univ. Granada, Univ. Durham, DESY
Univ. Edinburgh, Univ. Freiburg, Univ. Karlsruhe, Univ. Katowice
HNINP Krakow, Max Planck Institut für Physik München, Univ. Nijmegen
Univ. Roma 3, Univ. Southampton, Univ. Torino, PSI Villigen, Univ. Würzburg
(Received October 18, 2004)
Work on electroweak precision calculations and event generators for electroweak physics studies at current and future colliders is summarized.
PACS numbers: 12.15.Lk, 13.66.Jn, 14.70.Fm, 14.70.Hp
1. Introduction

Apart from the still missing Higgs boson, the Standard Model (SM) has been impressively confirmed by successful collider experiments at the particle accelerators LEP, SLC, and Tevatron during the last decade with high precision, at the level of radiative corrections. Future colliders, like the upcoming LHC or an $e^+e^-$ International Linear Collider (ILC), offer an even greater physics potential, and in turn represent a great challenge for theory to provide even more precise calculations. Accurate predictions are necessary not only to increase the level of precision of SM tests, but also to study the indirect effects of possible new particles.

∗ Presented at the final meeting of the European Network "Physics at Colliders", Montpellier, France, September 26–27, 2004.
With increasing energies many new processes with heavy-particle production will be accessible. Such heavy particles are unstable, so that their production leads to many-particle final states, with e.g. four or six fermions. Predictions for such reactions should be based on full transition matrix elements, improved by radiative corrections as far as possible, and call for proper event generators.

Joint work done within the network is reviewed here; more details can also be found in previous studies for future colliders [1–3].
2. Precision observables and multi-loop calculations

2.1. Muon lifetime and W–Z mass correlation

The precise measurement of the muon lifetime, or equivalently of the Fermi constant $G_\mu$, sets an important constraint on the SM parameters,
$$
G_\mu = \frac{\pi\,\alpha(0)}{\sqrt{2}\,M_W^2\,s_w^2}\,(1 + \Delta r),
\tag{1}
$$
with $s_w^2 = 1 - c_w^2 = 1 - M_W^2/M_Z^2$, where the quantity $\Delta r$ comprises the radiative corrections to muon decay (apart from the photonic corrections in the Fermi model). Solving this relation for the W-boson mass $M_W$ yields a precise prediction for $M_W$ that can be compared with the directly measured value. Recently the full electroweak two-loop calculation has been completed, with the contributions from closed fermion loops [4,5], from bosonic loops involving virtual Higgs bosons [4], and the complete bosonic corrections [5,6]. The two-loop fermionic contributions influence the $M_W$ prediction at the level of $\sim 50$ MeV, the two-loop bosonic corrections by 1–2 MeV.
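Since $s_w^2$ itself depends on $M_W$, relation (1) is a quadratic in $M_W^2$ and can be solved in closed form. A toy solve is sketched below with $\Delta r$ as a free input; the numerical values of $\alpha$, $G_\mu$, and $M_Z$ are standard ones, not taken from this paper, and the sample $\Delta r = 0.035$ is merely a representative loop-corrected magnitude:

```python
from math import pi, sqrt

# Standard input values (assumptions, not from the paper)
alpha = 1 / 137.035999   # fine-structure constant alpha(0)
G_mu  = 1.16637e-5       # Fermi constant, GeV^-2
M_Z   = 91.1876          # Z-boson mass, GeV

def mw_prediction(delta_r=0.0):
    """Solve G_mu = pi*alpha / (sqrt(2) * M_W^2 * s_w^2) * (1 + delta_r)
    with s_w^2 = 1 - M_W^2/M_Z^2, i.e. a quadratic in M_W^2."""
    A = pi * alpha * (1 + delta_r) / (sqrt(2) * G_mu)
    mw2 = 0.5 * M_Z**2 * (1 + sqrt(1 - 4 * A / M_Z**2))
    return sqrt(mw2)

print(mw_prediction())       # tree level (delta_r = 0): ~80.94 GeV
print(mw_prediction(0.035))  # a representative delta_r pulls M_W near 80.4 GeV
```

The ~0.5 GeV gap between the tree-level value and the measured mass is what makes the tens-of-MeV shifts from the two-loop pieces of $\Delta r$, discussed above, phenomenologically relevant.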
The predictions at the two-loop level have been further improved by universal higher-order contributions to the ρ-parameter. The terms of order $O(G_\mu^2 m_t^4 \alpha_s)$ and $O(G_\mu^3 m_t^6)$ have been obtained in [7] and were found to change $M_W$ at the level of 5 MeV and 0.5 MeV, respectively. The leading three-loop bosonic contribution to the ρ-parameter in the large Higgs-mass limit [8], yielding the power $M_H^4/M_W^4$, is opposite in sign to the leading two-loop correction $\sim M_H^2/M_W^2$. The two terms cancel each other for a Higgs-boson mass around 480 GeV. This interesting new result stabilizes the perturbative expansion and makes a strongly interacting Higgs sector very unlikely in view of the electroweak precision data.

The prediction for $M_W$, including the above-mentioned two-loop and leading three-loop effects, carries, besides the parametric uncertainty, an intrinsic theoretical uncertainty, which is estimated to be 3–4 MeV [9,10]; this has to be compared with the aimed-for precision of 7 MeV in the $M_W$ determination at a future ILC [3].
2.2. Precision observables at the Z resonance

In order to describe the Z-boson resonance at LEP1 within satisfactory precision, it was possible to parametrize the cross section near the resonance in such a way [11,12] that a Born-like form with generalized "effective" couplings is convoluted with QED structure functions modeling initial-state radiation (ISR). From these effective Z-boson–fermion couplings, so-called "pseudo-observables" were derived, such as various asymmetries, the hadronic Z-peak cross section, partial Z-decay widths, etc. The precisely calculated pseudo-observables are implemented in the programs Zfitter and Topaz0 [13]. A critical overview of high-precision physics at the Z pole, focusing in particular on the theoretical uncertainties, can be found in [14]. The status of precision pseudo-observables in the MSSM is summarized in [15].
Following the formal tree-level-like parametrization of the couplings, an "effective weak mixing angle", $\sin^2\theta^f_{\rm eff}$, was derived for each fermion species. Among these parameters the leptonic variable $\sin^2\theta^l_{\rm eff}$ plays a particularly important role, since it is measured with the high accuracy of $1.7 \times 10^{-4}$ and is very sensitive to the Higgs-boson mass. Quite recently, a substantial subclass of the electroweak two-loop contributions, the fermionic contributions to $\sin^2\theta^l_{\rm eff}$, were calculated [16]; they reduce the intrinsic theoretical uncertainty to $\sim 5 \times 10^{-5}$.
Whether the pseudo-observable approach will also be sufficient for Z-boson physics at the high-luminosity GigaZ option remains to be investigated carefully. In any case, substantial theoretical progress will be needed to match the aimed-for GigaZ precision on the theoretical side, e.g. for the expected experimental accuracy in $\sin^2\theta^l_{\rm eff}$ of about $1.3 \times 10^{-5}$. Full control of observables at the two-loop level, improved by leading higher-order effects, seems indispensable.

An important input for the precision observables with a large parametric uncertainty is the photonic vacuum polarization at the Z scale. The hadronic part is determined via a dispersion relation from the cross section for $e^+e^- \to$ hadrons, with experimental data as input in the low-energy regime. Possible scans with the radiative-return method require a careful theoretical treatment to reach the required precision [17].
2.3. Deep-inelastic neutrino scattering

An independent precise determination of the electroweak mixing angle, in terms of the $M_W/M_Z$ ratio, has been made in deep-inelastic neutrino scattering off an isoscalar target by the NuTeV Collaboration [18], yielding a deviation of about 3 standard deviations from the value derived from the global analysis of the other precision observables. A new calculation of the electroweak one-loop corrections was performed [19] to investigate the stability and size of the quantum effects, showing that the theoretical error of the analysis in [18] was evidently underestimated. The new theoretical result is now being used for a re-analysis of the experimental data (see also [20] for another recent recalculation of the electroweak radiative corrections).
2.4. At the two-loop frontier

Although no complete next-to-next-to-leading-order (NNLO) predictions for 2 → 2 scattering reactions and 1 → 3 decays (with one truly massive leg) are available yet, significant progress has been made in this direction in recent years.

Complete virtual two-loop amplitudes for (massless) Bhabha scattering [21] and light-by-light scattering [23] have been worked out. First steps have also been made towards massive Bhabha scattering. In Ref. [22] the coefficient of the $O(\alpha^2 \log(s/m_e^2))$ fixed-order contribution to elastic large-angle Bhabha scattering is derived by adapting the classification of infrared divergences that was recently developed within dimensional regularization, and applying it to the regularization scheme with a massive photon and electron.

The subset of factorizable corrections, resulting from one-loop subrenormalization, is considered in [24]. This requires the evaluation of one-loop diagrams in arbitrary dimension $d = 4 - \varepsilon$. The $\varepsilon$-expansion covering the orders $1/\varepsilon$ and $\varepsilon$, in particular for the box graphs needed in Bhabha scattering, was performed in [25], based on the work of [26].¹ For the genuine two-loop QED corrections to Bhabha scattering, the master integrals for the box with the fermionic loop were calculated [28], and the cross section with the two-loop corrections resulting from these diagrams was obtained in [29].

A complete set of master integrals for massive two-loop Bhabha scattering has been derived in the meantime [30], as have master integrals for form factors with massless [31] and massive propagators [32]. Moreover, two-loop QCD corrections for the electroweak forward–backward asymmetries were obtained [33]. Two-loop master integrals and form factors from virtual light fermions were also derived and applied to Higgs-boson production and decays [34].

¹ Techniques applied in the latter work have also been used in [27] for the analytic calculation of Higgs-boson decay into two photons.
2.5. Electroweak radiative corrections at high energies

Electroweak corrections far above the electroweak scale, e.g. in the TeV range, are dominated by soft and collinear gauge-boson exchange, leading to corrections of the form $\alpha^N \log^M(s/M_W^2)$ with $M \le 2N$. The leading terms ($M = 2N$) are called Sudakov logarithms. At the one-loop ($N = 1$) and two-loop ($N = 2$) level, the leading and subleading corrections to a 2 → 2 process at $\sqrt{s} \sim 1$ TeV typically amount to [35]
$$
\begin{aligned}
\delta^{\text{1-loop}}_{\rm LL} &\sim -\frac{\alpha}{\pi s_w^2}\,\log^2\!\Big(\frac{s}{M_W^2}\Big) \simeq -26\%, &
\delta^{\text{1-loop}}_{\rm NLL} &\sim +\frac{3\alpha}{\pi s_w^2}\,\log\Big(\frac{s}{M_W^2}\Big) \simeq 16\%, \\
\delta^{\text{2-loop}}_{\rm LL} &\sim +\frac{\alpha^2}{2\pi^2 s_w^4}\,\log^4\!\Big(\frac{s}{M_W^2}\Big) \simeq 3.5\%, &
\delta^{\text{2-loop}}_{\rm NLL} &\sim -\frac{3\alpha^2}{\pi^2 s_w^4}\,\log^3\!\Big(\frac{s}{M_W^2}\Big) \simeq -4.2\%,
\end{aligned}
\tag{2}
$$
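The percentages in Eq. (2) can be reproduced to the stated accuracy with a few lines. The values of α and $s_w^2$ below are generic choices (the exact numbers depend on the input scheme, so only the rough size is meaningful):

```python
from math import log, pi

alpha = 1 / 137.0   # fine-structure constant (generic input-scheme choice)
sw2   = 0.231       # sin^2 of the weak mixing angle
M_W   = 80.4        # W-boson mass, GeV
s     = 1000.0**2   # (sqrt(s) = 1 TeV)^2, GeV^2

L = log(s / M_W**2)   # the Sudakov logarithm, ~5.0 at 1 TeV

delta_1LL  = -alpha / (pi * sw2) * L**2                # ~ -26%
delta_1NLL = +3 * alpha / (pi * sw2) * L               # ~ +16%
delta_2LL  = +alpha**2 / (2 * pi**2 * sw2**2) * L**4   # ~ +3.5%
delta_2NLL = -3 * alpha**2 / (pi**2 * sw2**2) * L**3   # ~ -4.2%

for name, d in [("1-loop LL", delta_1LL), ("1-loop NLL", delta_1NLL),
                ("2-loop LL", delta_2LL), ("2-loop NLL", delta_2NLL)]:
    print(f"{name:10s} {100 * d:+6.1f}%")
```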
revealing that these corrections become significant in the high-energy phase of a future ILC. In contrast to QED and QCD, where the Sudakov logarithms cancel in the sum of virtual and real corrections, these terms need not compensate in the electroweak SM, for two reasons. The weak charges of quarks, leptons, and electroweak gauge bosons are open, not confined, i.e. there is (in contrast to QCD) no need to average or to sum over gauge multiplets in the initial or final states of processes. Even for final states that are inclusive with respect to the weak charges, Sudakov logarithms do not completely cancel, owing to the definite weak charges in the initial state [36]. Moreover, the large W- and Z-boson masses make an experimental discrimination of real W- or Z-boson production possible, in contrast to unobservable soft-photon or gluon emission.

In recent years several calculations of these high-energy logarithms have been carried out in the Sudakov regime, where all kinematical invariants $(p_i p_j)$ of different particle momenta $p_i$, $p_j$ are much larger than all particle masses. A complete analysis of all leading and subleading logarithms at the one-loop level can be found in [37]. Diagrammatic calculations of the leading two-loop Sudakov logarithms have been carried out in [35,38]. Diagrammatic results on the so-called "angular-dependent" subleading logarithms have been presented in [35]. All these explicit results are compatible with proposed resummations [39,40] that are based on a symmetric SU(2)×U(1) theory at high energies matched with QED at the electroweak scale. In this ansatz, improved matrix elements $\mathcal{M}$ result from lowest-order matrix elements $\mathcal{M}_0$ upon dressing them with (operator-valued) exponentials,
$$
\mathcal{M} \sim \mathcal{M}_0 \otimes \exp(\delta_{\rm ew}) \otimes \exp(\delta_{\rm em})\,.
\tag{3}
$$
Explicit expressions for the electroweak and electromagnetic corrections $\delta_{\rm ew}$ and $\delta_{\rm em}$, which do not commute with each other, can, for instance, be found in [35]. For 2 → 2 neutral-current processes of four massless fermions, subsubleading logarithmic corrections have also been derived and resummed [40] using an infrared evolution equation that follows the pattern of QCD. Applications to vector-boson pair production in proton–proton collisions can be found in [41].
In supersymmetric models the form of radiative corrections at high energies has also been worked out for a broad class of processes [42]. Based on one-loop results, their exponentiation has been proposed.
2.6. Higher-order initial-state radiation

Photon radiation off initial-state electrons and positrons leads to large radiative corrections of the form $\alpha^N \log^N(m_e^2/s)$. These logarithmic corrections are universal and governed by the DGLAP evolution equations. The solution of these equations for the electron–photon system yields the structure functions, generically denoted by $\Gamma(x)$ below, which can be used via convolution to improve hard-scattering cross sections $\hat\sigma(p_{e^+}, p_{e^-})$ by photon-emission effects,
$$
\sigma(p_{e^+}, p_{e^-}) = \int_0^1 {\rm d}x_+\,\Gamma(x_+) \int_0^1 {\rm d}x_-\,\Gamma(x_-)\;\hat\sigma(x_+ p_{e^+},\, x_- p_{e^-})\,.
\tag{4}
$$
While the soft-photon part of the structure functions ($x \to 1$) can be resummed, resulting in an exponential form, the contributions of hard photons have to be calculated order by order in perturbation theory. In [43] the structure functions are summarized up to $O(\alpha^3)$. Ref. [44] describes a calculation of the (non-singlet) contributions up to $O(\alpha^5)$ and of the small-$x$ terms $[\alpha \log^2(x)]^N$ to all orders (for previous calculations see papers cited in Ref. [44]).
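The convolution in Eq. (4) can be made concrete with a toy model: the leading-log, soft-resummed structure function Γ(x) = (β/2)(1−x)^{β/2−1} together with a made-up hard cross section rising linearly with the reduced energy. This is purely illustrative, far short of the $O(\alpha^3)$–$O(\alpha^5)$ machinery of [43,44]:

```python
from math import log, pi

# Toy numerical evaluation of the ISR convolution in eq. (4).
# Gamma(x) = (beta/2) * (1-x)**(beta/2 - 1) is the leading-log soft-resummed
# structure function; the hard cross section below is an illustrative stand-in.
alpha = 1 / 137.0
m_e   = 0.000511                # electron mass, GeV
s     = 91.1876**2              # collider energy^2 at the Z pole, GeV^2
beta  = (2 * alpha / pi) * (log(s / m_e**2) - 1.0)   # ~0.11

def hard_xsec(s_reduced):
    """Toy hard cross section, normalized so that hard_xsec(s) = 1."""
    return s_reduced / s

# Substituting u = (1-x)**(beta/2) maps each x-integral onto a flat integral
# over u in (0,1), absorbing the integrable singularity of Gamma at x -> 1.
n, h = 400, 1.0 / 400
sigma = 0.0
for i in range(n):
    x_p = 1.0 - ((i + 0.5) * h) ** (2.0 / beta)
    for j in range(n):
        x_m = 1.0 - ((j + 0.5) * h) ** (2.0 / beta)
        sigma += hard_xsec(x_p * x_m * s) * h * h

print(f"ISR-corrected / Born cross section: {sigma:.3f}")   # ~0.90
```

Because radiation off both beams softens the effective collision energy, the convoluted cross section comes out about 10% below the Born value for this toy σ̂; with a resonant σ̂ the effect would be far more dramatic.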
3. Radiative corrections to 2 → 3, 4, ... processes

3.1. Four-fermion final states and W-pair production

Four-fermion final states in $e^+e^-$ collisions, which involve electroweak boson-pair production, are of special interest since they allow the mechanism of spontaneous symmetry breaking and the non-Abelian structure of the Standard Model to be tested directly by the experiments. Moreover, they provide a very important background to most searches for new physics. LEP2 has provided in this respect an ideal testing ground for the SM. The W profile has been measured with great accuracy, new bounds on anomalous trilinear gauge-boson couplings have been set, and single-W, single-Z, ZZ and Zγ cross sections have been determined for the first time. These studies will be continued with much higher statistics and energy at a future $e^+e^-$ Linear Collider.
In this context, the Monte Carlo four-fermion generator WPHACT 2.0 has been completed [45], adapted to experimental requests, and used for simulation of the LEP2 data [46]. WPHACT 2.0 computes all Standard Model processes with four fermions in the final state at $e^+e^-$ colliders; it makes use of complete, fully massive helicity-amplitude calculations and includes the Imaginary Fermion Loop gauge-restoring scheme.² Thanks to these features and new phase-space mappings, WPHACT has been extended to all regions of phase space, including kinematical configurations dominated by small momentum transfer and small invariant masses, such as single-W, single-Z, Zγ* and γ*γ* processes. Special attention has been devoted to QED effects, which have a large numerical impact, with new options for the description of initial-state radiation (ISR) and of the scale dependence of the electromagnetic coupling. Moreover, there is the possibility of including CKM mixing, and of accounting for resonances in q̄q channels.
The theoretical treatment and the presently gained level in accuracy in
the description of W-pair-mediated 4f production were triggered by LEP2,
as it is reviewed in Refs. [43,48]. The W bosons are treated as resonances
in the full 4f processes, e
+
e
−
→ 4f (+γ). Radiative corrections are split
into universal and non-universal corrections. The former comprise leading-
logarithmic corrections from ISR, higher-order corrections included by using
appropriate effective couplings, and the Coulomb singularity. These cor-
rections can be combined with the full lowest-order matrix elements easily.
The remaining corrections are called non-universal, since they depend on
the process under investigation. For LEP2 accuracy, it was sufficient to in-
clude these corrections by the leading term of an expansion about the two
W poles, defining the so-called double-pole approximation (DPA). Different
2
In [47] the incorporation of the fermion-loop corrections into e
+
e
−
→ n
fermions is
discussed in terms of an effective Lagrangian approach.
Page 8
2540
W. Hollik et al.
versions of such a DPA have been used in the literature [49–51]. Although
several Monte Carlo programs exist that include universal corrections, only
two event generators, YFSWW [50] and RacoonWW [51,52], include non-
universal corrections, as well as the option of anomalous gauge couplings [53].
In the DPA approach, the W-pair cross section can be predicted within
a relative accuracy of ∼ 0.5%(0.7%) in the energy range between 180GeV
(170GeV) and 500GeV, which was sufficient for the LEP2 accuracy of ∼ 1%
for energies 170−209GeV. At threshold (
√s <
∼
170GeV), the present state-
of-the-art prediction results from an improved Born approximation based
on leading universal corrections only, because the DPA is not reliable there,
and thus possesses an intrinsic uncertainty of about 2%, which demonstrates
the necessary theoretical improvements for the threshold region. At energies
beyond 500GeV, effects beyond O(α), such as the above-mentioned Sudakov
logarithms at higher orders, become important and have to be included in
predictions at per-cent accuracy.
At LEP2, the W-boson mass has been determined by the reconstruc-
tion of W bosons from their decay products with a final accuracy of about
30MeV. In [54] the theoretical uncertainty is estimated to be of the order
of ∼ 5MeV. Theoretical improvements are, thus, desirable.
The above discussion illustrates the necessity of a full one-loop calcu-
lation for the e
+
e
−
→ 4f process and of further improvements by leading
higher-order corrections.
3.2. Single-W production
The single-W production process e
+
e
−
→ eν
e
W → eν
e
+2f plays a par-
ticularly important role among the 4f production processes at high scatter-
ing energies. The process is predominantly initiated by eγ
∗
collision, where
the photon is radiated off the electron (or positron) by the Weizsäcker–
Williams mechanism, i.e. with a very small off-shellness q
2
γ
.
Consequently the cross section rises logarithmically with the scattering
energy and is of the same size as the W-pair production cross section around
√s = 500GeV; for higher energies single-W dominates over W-pair produc-
tion.
Theoretically the dominance of photon exchange at low q
2
γ
poses several
complications. Technically, q
2
γ
→ 0 means that the electrons (or positrons)
are produced in the forward direction and that the electron mass has to be
taken into account in order to describe the cross section there. Moreover,
the mere application of s-dependent leading-logarithmic structure functions
does not describe the leading photon-radiation effects properly, since ISR
and final-state radiation (FSR) show sizable interferences for forward scat-
tering. Thus, the improvement of lowest-order calculations by leading radi-
Page 9
Electroweak Physics
2541
ation effects is more complicated than for s-channel-like processes. Finally,
the running of the electromagnetic coupling α(q
2
γ
) has to be evaluated in the
region of small momentum transfer (q
2
γ
< 0) where the fit for the hadronic
part of the photon vacuum polarisation [55] should be used.
The Monte Carlo generator KoralW [56] has recently been updated to
include the ISR-FSR interference effects as well as the proper running of
α(q
2
γ
). Therefore, this program now has reached a level of accuracy similar
to the other state-of-the-art programs for single-W production: Grc4f [57],
Nextcalibur [58], Swap [59], Wphact [45,60], and Wto [61]. It should
be kept in mind that none of these calculations includes non-universal elec-
troweak corrections, leading to a theoretical uncertainty of about ∼ 5% in
cross-section predictions. Although the final solution for a high-energy Lin-
ear Collider certainly requires a full O(α) calculation of the 4f-production
process, a first step of improvement could be done by a careful expansion
about the propagator poles of the photon and W boson. The electroweak
O(α) corrections to the process eγ → ν
e
W, which are known [62], represent
a basic building block in this calculation.
3.3. Progress for multi-particle production processes
One-loop integrals become more and more cumbersome if the number N
of external legs in diagrams increases. For N > 4, however, not all exter-
nal momenta are linearly independent because of the four-dimensionality of
space-time. As known for a long time [63], this fact opens the possibility to
relate integrals with N > 4 to integrals with N ≤ 4. In recent years, various
techniques for actual evaluations of one-loop integrals with N = 5,6 have
been worked out [64,65] (see also references therein for older methods and
results). The major complication in the treatment of 2 → 3 processes at one
loop concerns the numerical evaluation of tensor 5-point integrals; in partic-
ular, the occurrence of inverse Gram determinants in the usual Passarino–
Veltman reduction to scalar integrals leads to numerical instabilities at the
phase-space boundary. A possible solution to this problem was worked out
in Ref. [65] where the known direct reduction [63] of scalar 5-point to 4-point
integrals was generalized to tensor integrals, thereby avoiding the occurrence
of leading Gram determinants completely. More work on one-loop N-point
integrals can be found in [66].
In the evaluation of real corrections, such as bremsstrahlung, a proper
and numerically stable separation of infrared (soft and collinear) divergences
represents one of the main problems. In the phase-space slicing approach
(see [67] and references therein) the singular regions are excluded from the
“regular” phase-space integration by small cuts on energies, angles, or invari-
ant masses. Using factorization properties, the integration over the singular
Page 10
2542
W. Hollik et al.
regions can be done in the limit of infinitesimally small cut parameters. The
necessary fine-tuning of cut parameters is avoided in so-called subtraction
methods (see [68–70] and references therein), where a specially tuned aux-
iliary function is subtracted from the singular integrand in such a way that
the resulting integral is regular. The auxiliary function has to be chosen
simple enough, so that the singular regions can be integrated over analyt-
ically. In [68] the so-called “dipole subtraction approach” has been worked
out for massless QCD. and subsequently extended for photon emission off
massive fermions [69] and for QCD with massive quarks [70].
Applications were preformed for complete one-loop calculations of elec-
troweak radiative corrections for specific 2 → 3 processes of special in-
terest for a future ILC, e
+
e
−
→ ν¯νH [71, 72] and e
+
e
−
→ t
¯
tH [73–75].
In [72, 73, 75] the technique [65] for treating tensor 5-point integrals was
employed. While [71, 73, 74] make use of the slicing approach for treat-
ing soft-photon emission, the results of Refs. [72,75] have been obtained by
dipole subtraction and checked by phase-space slicing for soft and collinear
bremsstrahlung. Analytic results for the one-loop corrections are provided
in [76].
The Yukawa coupling of the top quark could be measured at a future ILC
with high energy and luminosity at the level of ∼ 5% [2] by analyzing the
process e
+
e
−
→ t
¯
tH. A thorough prediction for this process, thus, has to
control QCD and electroweak corrections. Results on the electroweak O(α)
corrections of Refs. [74,75] show agreement within ∼ 0.1%. The results of
the previous calculation [73] roughly agree with the ones of Refs. [74,75] at
intermediate values of √s and M
H
, but are at variance at high energies (TeV
range) and close to threshold (large M
H
).
4. Event generators for multi-particle final states
4.1. Multi-purpose generators at parton level
The large variety of different final states for multi-particle production
renders multi-purpose Monte Carlo event generators rather important, i.e.
generators that deliver an event generator for a user-specified (as much as
possible) general final state based on full lowest-order amplitudes. As results,
these tools yield lowest-order predictions for observables, or more generally
Monte Carlos samples of events, that are improved by universal radiative
corrections, such as initial-state radiation at the leading-logarithmic level
or beamstrahlung effects. Most of the multi-purpose generators are also
interfaced to parton-shower and hadronization programs. The generality
renders these programs, however, rather complex devices and, at present,
they are far from representing tools for high-precision physics, because non-
universal radiative corrections are not taken into account in predictions.
Page 11
Electroweak Physics
2543
The following multi-purpose generators for multi-parton production, in-
cluding program packages for the matrix-element evaluation, are available:
• Amegic [77]: Helicity amplitudes are automatically generated by the
program for the SM, the MSSM, and some new-physics models. Vari-
ous interfaces (ISR, PDFs, beam spectra, Isajet, etc.) are supported.
The phase-space generation was successfully tested for up to six par-
ticles in the final state.
• Grace [78]: The amplitudes are delivered by a built-in package, which
can also handle SUSY processes. The phase-space integration is done
by BASES [79]. Tree-level calculations have been performed for up
to (selected) six-fermion final states. The extension of the system to
include one-loop corrections is the Grace-Loop [80] program.
• Madevent [81] + Madgraph [82]: The Madgraph algorithm can
generate tree-level matrix elements for any SM process (fully support-
ing particle masses), but a practical limitation is 9,999 diagrams. In
addition, Madgraph creates Madevent, an event generator for the
requested process.
• Phegas [83] + Helac [84]: The Helac program delivers amplitudes
for all SM processes (including all masses). The phase-space integra-
tion done by Phegas has been tested for selected final states with
up to seven particles. Recent applications concern channels with six-
fermion final states [85].
• Whizard [86] + Comphep [87] / Madgraph [82] / O’mega [88]:
Matrix elements are generated by an automatic interface to (older
versions of) Comphep, Madgraph, and (the up-to-date version of)
O’mega. Phase-space generation has been tested for most 2 → 6
and some 2 → 8 processes; unweighted events are supported, and a
large variety of interfaces (ISR, beamstrahlung, Pythia, PDFs, etc.)
exists. The inclusion of MSSM amplitudes (O’mega) and improved
phase-space generation (2 → 6) are work in progress.
• Alpgen [89] is a specific code for computing the perturbative part of
observables in high energy hadron-hadron collisions, which require a
convolution of the perturbative quantities with structure or fragmen-
tation functions that account for non perturbative effects.
Tuned comparisons of different generators, both at parton and detector level,
are extremely important, but become more and more laborious owing to the
large variety of multi-particle final states. Some progress to a facilitation
and automization of comparisons are made by MC-tester project [90] and
Java interfaces [91].
Page 12
2544
W. Hollik et al.
4.2. Event generators and results for e
+
e
−
→ 6f
Particular progress was reached in recent years in the description of
six-fermion production processes. Apart from the multi-purpose genera-
tors listed in the previous section, also dedicated Monte Carlo programs and
generators have been developed for this class of processes:
• Sixfap [92]: Matrix elements are provided for all 6f final states (with
finite fermion masses), including all electroweak diagrams. The gen-
eralization to QCD diagrams and the extension of the phase-space
integration for all final states is in progress.
• eett6f [93]: Only processes relevant for t
¯
t production are supported
(a new version includes e
±
in the final state and QCD diagrams); finite
fermion masses are possible.
• Lusifer [94]: All 6f final states are possible, including QCD diagrams
with up to four quarks; representative results for all these final states
have been presented. External fermions are massless. An unweighting
algorithm and an interface to Pythia are available.
• Phase [95]: All Standard Model process with six fermions in the final
state at the LHC. It employs the full set of tree-level Feynman dia-
grams, taking into account a finite mass for the b quark. An interface
to hadronization is provided.
A comparison of results [96] for some processes e
+
e
−
→ 6f relevant for t
¯
t
production for massless external fermions reveals good agreement between
the various programs, where minor differences are presumably due to the dif-
ferent treatments of the bottom-quark Yukawa coupling, which is neglected
in some cases.
A tuned comparison of results obtained with Lusifer and Whizard for
a large survey of 6f final states has been presented in Ref. [94].
5. Other developments
5.1. Automatization of loop calculations
Once the necessary techniques and theoretical subtleties of a perturbative
calculation are settled, to carry out the actual calculation is an algorithmic
matter. Thus, an automization of such calculations is highly desirable, in
order to facilitate related calculations. Various program packages have been
presented in the literature for automatized tree-level, one-loop, and multi-
loop calculations. A comprehensive overview can, e.g., be found in [97]; in
the following we have to restrict ourselves to a selection of topics, where
emphasis is put on electroweak aspects.
Page 13
Electroweak Physics
2545
The generation of Feynman graphs and amplitudes is a combinatorial
problem that can be attacked with computer algebra. The program pack-
ages FeynArts [98] (which has been extended in [99] for the MSSM) and
Diana [100] (based on Qgraf [101]) are specifically designed for this task;
also the Grace-Loop [80] system automatically generates Feynman dia-
grams and loop amplitudes. Moreover, the task of calculating virtual one-
loop and the corresponding real-emission corrections to 2 → 2 scattering
reactions is by now well understood. Such calculations are widely auto-
mated in the packages FormCalc combined with LoopTools [102], and
Grace-Loop [80].
An illustrating example was provided for the differential cross section
for e
+
e
−
→ t
¯
t in lowest order as well as including electroweak O(α) cor-
rections. A program FA+FC [103] was obtained from the output of the
FeynArts and FormCalc packages and makes use of the LoopTools
library for the numerical evaluation. Another program, Topfit [103,104],
was developed from an algebraic reduction of Feynman graphs (delivered
from Diana) within Form; for the numerics LoopTools is partially em-
ployed. A completely independent approach has been made by the Sanc
project [105]. The Sanc program contains another independent calculation
of the O(α) corrections to e
+
e
−
→ t
¯
t, the results of which are also included
in [103]. More details on comparisons, including also other fermion flavors,
can be found in [103,106]. The agreement between the numerical results at
the level of 10 digits reflects the enormous progress achieved in recent years
in the automatization of one-loop calculations.
The one-loop calculation for the process e
+
e
−
→ t
¯
t(γ) including hard
bremsstrahlung was originally performed in [104] without full automatiza-
tion; it was repeated in later course (apart from the hard bremsstrahlung
part) as an example for automatization in [107] . The extension of Di-
ana towards full automatization in terms of the package aıTalc is a new
development [107,108]; automatization of the full calculation is performed
including renormalization and the creation and running of a FORTRAN
code. Applications to the calculation of the processes e
+
e
−
→ f
¯
f(γ) for
various final fermions: t,b,µ,e and also s
¯
b,c
¯
t,µ¯τ are performed. For further
work in automatization see [109,110].
5.2. Numerical approaches to loop calculations
Most of the various techniques of performing loop calculations share the
common feature that the integration over the loop momenta is performed
analytically. This procedure leads to complications at one loop if five or more
external legs are involved, since both speed and stability of programs become
more and more jeopardized. At the two-loop level, already the evaluation of
Page 14
2546
W. Hollik et al.
self-energy and vertex corrections can lead to extremely complicated higher
transcendental functions that are hard to evaluate numerically.
An idea to avoid these complications is provided by a more or less purely
numerical evaluation of loop corrections. There are two main difficulties in
this approach. Firstly, the appearing ultraviolet and infrared divergences
have to be treated and canceled carefully. Secondly, even finite loop integrals
require a singularity handling of the integrand near particle poles, where
Feynman’s iϵ prescription is used as regularization.
In [111] a method for a purely numerical evaluation of loop integrals
is proposed. Each integral is parametrized with Feynman parameters and
subsequently rewritten with partial integrations. The final expression con-
sists of a quite simple part containing the singular terms and another more
complicated looking part that can be integrated numerically. The actual
application of the method to a physical process is still work in progress.
There are five papers in a series devoted to the numerical evaluation of
multi-loop, multi-leg Feynman diagrams. In [111] the general strategy is
outlined and in [112] a complete list of results is given for two-loop functions
with two external legs, including their infrared divergent on-shell derivatives.
Results for one-loop multi-leg diagrams are shown in [113] and additional
material can be found in [114]. Two-loop three-point functions for infrared
convergent configurations are considered in [115], where numerical results
can be found.
Ref. [113] presents a detailed investigation of the algorithms, based on
the Bernstein–Tkachov (BT) theorem [116], which form the basis for a fast
and reliable numerical integration of one-loop multi-leg (up to six in this
paper) diagrams. The rationale for this work is represented by the need
of encompassing a number of problems that one encounters in assembling
a calculation of some complicated process, e.g. full one-loop corrections to
e
+
e
−
→ 4fermions. Furthermore, in any attempt to compute physical ob-
servables at the two-loop level, we will have to include the one-loop part, and
it is rather obvious that the two pieces should be treated on equal footing.
All algorithms that aim to compute Feynman diagrams numerically are
based on some manipulation of the original integrands that brings the final
answer into something smooth. This has the consequence of bringing the
original (Landau) singularity of the diagram into some overall denominator
and, usually, the method overestimates the singular behavior around some
threshold. In these regions an alternative derivation is needed. Instead of
using the method of asymptotic expansions, a novel algorithm is introduced
based on a Mellin–Barnes decomposition of the singular integrand, followed
by a sector decomposition that allows one to write the Laurent expansion
around threshold.
Page 15
Electroweak Physics
2547
Particular care has been devoted to analyze those situations where a
sub-leading singularity may occur, and to properly account for those cases
where the algorithm cannot be applied because the corresponding BT factor
is zero although the singular point in parametric space does not belong to
the integration domain.
Finally, a description of infrared divergent one-loop virtual configura-
tions is given in the framework of dimensional regularization: here both the
residue of the infrared pole and the infrared finite remainder are cast into
a form that can be safely computed numerically. The collection of formu-
las that cover all corners of phase space have been translated into a set of
FORM codes and the output has been used to create a FORTRAN code
whose technical description will be given elsewhere.
Ref. [117] addresses the problem of deriving a judicious and efficient way
to deal with tensor Feynman integrals, namely those integrals that occur in
any field theory with spin and non trivial structures for the numerators of
Feynman propagators. This paper forms a basis for a realistic calculation of
physical observables at the two-loop level.
The complexity of handling two-loop tensor integrals is reflected in the
following simple consideration: the complete treatment of one-loop tensor in-
tegrals was confined to the appendices of [118], while the reduction of general
two-loop self-energies to standard scalar integrals already required a consid-
erable fraction of [119]; the inclusion of two-loop vertices requires the whole
content of this paper. The past experience in the field has shown the encom-
passed convenience of gathering in one single place the complete collection
of results needed for a broad spectrum of applications. In recent years, the
most popular and quite successful tool in dealing with multi-loop Feynman
diagrams in QED/QCD (or in selected problems in different models, charac-
terized by a very small number of scales), has been the Integration-By-Parts
Identities method [120]. However, reduction to a set of master integrals is
poorly known in the enlarged scenario of multi-scale electroweak physics.
In [121] another new method is presented in which almost all the work
can be performed numerically: the tensor integrals are numerically reduced
to the standard set of one-loop scalar functions and any amplitude is calcu-
lated simply contracting the numerically computed tensor integrals with the
external tensors. To this aim, a recursion relation is introduced that links
high-rank tensor integrals to lower-rank ones. Singular kinematical configu-
rations give a smoother behavior than in other approaches because, at each
level of iteration, only inverse square roots of Gram determinants appear.
Page 16
2548
W. Hollik et al.
6. Electroweak effects in hadronic processes
In Refs. [122–126] (see also [127]) it was proved the importance of elec-
troweak one-loop corrections to hadronic observables, such as b
¯
b, ‘prompt
photon + jet’ and ‘Z + jet’ production at Tevatron and LHC and jet and b
¯
b
production in linear colliders, which can compete in size with QCD correc-
tions. Their inclusion in experimental analyses is thus important, especially
in view of searches for new physics. In case of ‘Z + jet’ production they can
rise to O(15–20%) at large transverse momentum at the LHC, while being
typically half the size in case of ‘photon + jet’ production. As these two
channels are the contributors to the Drell-Yan process, and since the latter
is envisaged to be used as one of the means to measure the LHC luminosity,
it is clear that neglecting them in experimental analyses would spoil the
luminosity measurements.
Ref. [128] emphasised the importance of NLO electroweak effects in
three-jet production in e
+
e
−
scattering at the Z-pole (SLC, LEP and GigaZ),
showing typical corrections of O(2–4%) (e.g. in jet-rates and thrust), compa-
rable to the SLC and LEP experimental accuracy and certainly larger than
the one expected at GigaZ. They also introduce sizable parity-violating ef-
fects into the fully differential structure of three-jet events in presence of
beam polarisation, which are of relevance as background to new physics
searches in SLC and GigaZ data.
The complete set of electroweak O(α) corrections to the Drell–Yan-like
production of Z and W bosons have been studied in [129–131]. These cor-
rections are phenomenologically relevant both at the Tevatron and the LHC.
It is shown that the pole expansion yields a good description of resonance
observables, but it is not sufficient for the high-energy tail of transverse-
momentum distributions, relevant for new-physics searches. Horace and
Winhac are Monte Carlo event generators [132], developed for single W
production and decay, which in their current versions include higher-order
QED corrections in leptonic W decays, a crucial entry for precision deter-
mination of the W mass and width at hadron colliders.
Production of vector-boson pairs is an important probe for potential non-
standard anomalous gauge couplings. In order to identify possible deviations
from the SM predictions, an accurate knowledge of the electroweak higher-
order contributions is mandatory as well, in particular for large transverse
momenta. A complete electroweak one-loop calculation was performed for
γZ production [133]; for other processes like γW,... the large logarithms in
the Sudakov regime were derived [41].
A further aspect that should be recalled is that weak corrections nat-
urally introduce parity-violating effects in observables, detectable through
asymmetries in the cross-section. These effects are further enhanced if polar-
Page 17
Electroweak Physics
2549
isation of the incoming beams is exploited, such as at RHIC-Spin [134,135]
and will be used to measure polarised structure functions.
7. Conclusions
During the recent years there has been continuous progress in the devel-
opment of new techniques and in making precise predictions for electroweak
physics at future colliders. However, to be prepared for the LHC and a future
e
+
e
−
linear collider with high energy and luminosity, an enormous amount of
work is still ahead of us. Not only technical challenges, also field-theoretical
issues such as renormalization, the treatment of unstable particles, etc., de-
mand a higher level of understanding. Both loop calculations as well as the
descriptions of multi-particle production processes with the help of Monte
Carlo techniques require and will profit from further improving computing
devices. It is certainly out of question that the list of challenges and in-
teresting issues could be continued at will. Electroweak physics at future
colliders will be a highly exciting issue.
REFERENCES
[1] G. Altarelli, M. Mangano (Eds.), Proc. of the CERN Workshop on Standard
Model Physics (and More) at the LHC, CERN 2000-004.
[2] J.A. Aguilar-Saavedra et al., TESLA Technical Design Report Part
III: Physics at an e
+
e
−
Linear Collider; T. Abe et al. [Ameri-
can Linear Collider Working Group Collaboration], in Proc. of the
APS/DPF/DPB Summer Study on the Future of Particle Physics (Snow-
mass 2001)
ed. R. Davidson, C. Quigg, SLAC-R-570, Resource book
for Snowmass 2001 hep-ex/0106055, hep-ex/0106056, hep-ex/0106057,
hep-ex/0106058; K. Abe et al. [ACFA Linear Collider Working Group Col-
laboration], ACFA Linear Collider Working Group report, hep-ph/0109166.
[3] U. Baur et al., in Proc. of the APS/DPF/DPB Summer Study on the Fu-
ture of Particle Physics (Snowmass 2001) ed. N. Graf, hep-ph/0111314,
hep-ph/0202001.
[4] A. Freitas, W. Hollik, W. Walter, G. Weiglein, Phys. Lett. B495, 338 (2000),
[Erratum Phys. Lett. B570, 260 (2003)]; Nucl. Phys. B632, 189 (2002),
[Erratum Nucl. Phys. B666, 305 (2003)].
[5] M. Awramik, M. Czakon, Phys. Lett. B568, 48 (2003).
[6] M. Awramik, M. Czakon, Phys. Rev. Lett. 89, 241801 (2002); A. On-
ishchenko, O. Veretin, Phys. Lett. B551, 111 (2003); M. Awramik, M. Cza-
kon, A. Onishchenko, O. Veretin, Phys. Rev. D68, 053004 (2003).
[7] J. van der Bij, K. Chetyrkin, M. Faisst, G. Jikia, T. Seidensticker, Phys. Lett.
B498, 156 (2001); M. Faisst, J.H. Kühn, T. Seidensticker, O. Veretin, Nucl.
Phys. B665, 649 (2003).
Page 18
2550
W. Hollik et al.
[8] R. Boughezal, J.B. Tausk, J.J. van der Bij, hep-ph/0410216.
[9] W. Hollik, in proceedings of the Zeuthen workshop Electroweak Precision
Data and the Higgs Mass, Zeuthen, 2003, eds. S. Dittmaier, K. Mönig, DESY-
PROC-2003-01 and MPP-2003-50.
[10] M. Awramik, M. Czakon, A. Freitas, G. Weiglein, Phys. Rev. D69, 053006
(2004).
[11] F.A. Berends et al., in Z Physics at LEP 1, eds. G. Altarelli, R. Kleiss,
C. Verzegnassi (CERN 89-08, Geneva, 1989), Vol. 1, p. 89; W. Beenakker,
F.A. Berends, S.C. van der Marck, Z. Phys. C46, 687 (1990); A. Borrelli,
M. Consoli, L. Maiani, R. Sisto, Nucl. Phys. B333, 357 (1990).
[12] D.Y. Bardin et al., hep-ph/9709229, in Precision Calculations for the Z
Resonance, D. Bardin, W. Hollik, G. Passarino (Eds.), CERN 95-03.
[13] D.Y. Bardin, G. Passarino, hep-ph/9803425; G. Montagna, O. Nicrosini,
F. Piccinini, G. Passarino, Comput. Phys. Commun. 117, 278 (1999);
G. Montagna, F. Piccinini, O. Nicrosini, G. Passarino, R. Pittau, Comput.
Phys. Commun. 76, 328 (1993); G. Montagna, F. Piccinini, O. Nicrosini,
G. Passarino, R. Pittau, Nucl. Phys. B401, 3 (1993); D.Y. Bardin,
M. Grünewald, G. Passarino, hep-ph/9902452; D.Y. Bardin et al., Comput.
Phys. Commun. 133, 229 (2001).
[14] Contributions of W. Hollik, W. van Neerven, G. Passarino to the proceedings
of the Zeuthen workshop Electroweak Precision Data and the Higgs Mass,
Zeuthen, 2003, eds. S. Dittmaier, K. Mönig, DESY-PROC-2003-01 and MPP-
2003-50.
[15] S. Heinemeyer, hep-ph/0406245.
[16] M. Awramik, M. Czakon, A. Freitas, G. Weiglein, hep-ph/0407317;
M. Awramik, M. Czakon, A. Freitas, G. Weiglein, hep-ph/0408207.
[17] H. Czyz, J.H. Kühn, G. Rodrigo, Nucl. Phys. Proc. Suppl. 116, 249 (2003);
J. Gluza, A. Hoefer, S. Jadach, F. Jegerlehner, Eur. Phys. J. C28, 261
(2003); H. Czyz, A. Grzelinska, J.H. Kühn, G. Rodrigo, Eur. Phys. J. C33,
333 (2004).
[18] G.P. Zeller et al. [NuTeV Collaboration], Phys. Rev. Lett. 88, 091802 (2002),
[Erratum Phys. Rev. Lett. 90, 239902 (2003)].
[19] K.P.O. Diener, S. Dittmaier, W. Hollik, Phys. Rev. D69, 073005 (2004).
[20] A.B. Arbuzov, D.Y. Bardin, L.V. Kalinovskaya, hep-ph/0407203.
[21] Z. Bern, L.J. Dixon, A. Ghinculov, Phys. Rev. D63, 053007 (2001).
[22] E.W.N. Glover, J.B. Tausk, J.J. van der Bij, Phys. Lett. B516, 33 (2001).
[23] Z. Bern et al., J. High Energy Phys. 0111, 031 (2001).
[24] J. Fleischer, T. Riemann, O.V. Tarasov, A. Werthenbach, Nucl. Phys. Proc.
Suppl. 116, 43 (2003).
[25] J. Fleischer, T. Riemann, O.V. Tarasov, Acta Phys. Pol. B 34, 5345 (20003).
[26] J. Fleischer, F. Jegerlehner, O.V. Tarasov, Nucl. Phys. B672, 303 (2003).
[27] J. Fleischer, O.V. Tarasov, V.O. Tarasov, Phys. Lett. B584, 294 (2004).
Electroweak Physics
[28] R. Bonciani, A. Ferroglia, P. Mastrolia, E. Remiddi, J.J. van der Bij, Nucl.
Phys. B681, 261 (2004).
[29] R. Bonciani, A. Ferroglia, P. Mastrolia, E. Remiddi, J.J. van der Bij,
hep-ph/0405275 to appear in Nucl. Phys. B.
[30] M. Czakon, J. Gluza, T. Riemann, hep-ph/0406203.
[31] T.G. Birthwright, E.W.N. Glover, P. Marquard, J. High Energy Phys. 0409,
042 (2004).
[32] U. Aglietti, R. Bonciani, Nucl. Phys. B668, 3 (2003); U. Aglietti, R. Bon-
ciani, hep-ph/0401193.
[33] R. Bonciani, P. Mastrolia, E. Remiddi, Nucl. Phys. B690, 138 (2004).
[34] U. Aglietti, R. Bonciani, G. Degrassi, A. Vicini, hep-ph/0407162; U. Aglietti,
R. Bonciani, G. Degrassi, A. Vicini, Phys. Lett. B595, 432 (2004).
[35] A. Denner, M. Melles, S. Pozzorini, Nucl. Phys. B662, 299 (2003).
[36] M. Ciafaloni, P. Ciafaloni, D. Comelli, Phys. Lett. B501, 216 (2001).
[37] A. Denner, S. Pozzorini, Eur. Phys. J. C18, 461 (2001); Eur. Phys. J. C21,
63 (2001).
[38] W. Beenakker, A. Werthenbach, Phys. Lett. B489, 148 (2000); Nucl. Phys.
B630, 3 (2002); M. Melles, Phys. Lett. B495, 81 (2000); M. Hori, H. Kawa-
mura, J. Kodaira, Phys. Lett. B491, 275 (2000).
[39] V.S. Fadin, L.N. Lipatov, A.D. Martin, M. Melles, Phys. Rev. D61, 094002
(2000); J.H. Kühn, A.A. Penin, V.A. Smirnov, Eur. Phys. J. C17, 97 (2000);
M. Melles, Phys. Rep. 375, 219 (2003); Eur. Phys. J. C24, 193 (2002).
[40] J.H. Kühn, S. Moch, A.A. Penin, V.A. Smirnov, Nucl. Phys. B616, 286
(2001) [Erratum Nucl. Phys. B648, 455 (2003)].
[41] E. Accomando, A. Denner, S. Pozzorini, Phys. Rev. D65, 073003 (2002).
E. Accomando, A. Denner, A. Kaiser, hep-ph/0409247.
[42] M. Beccaria et al., Phys. Rev. D64, 053016 (2001); Phys. Rev. D65, 093007
(2002); LC-TH-2002-005; Phys. Rev. D68, 035014 (2003); LC-TH-2002-017
Int. J. Mod. Phys. A18, 5069 (2003); Nucl. Phys. B663, 394 (2003).
[43] W. Beenakker et al., in Physics at LEP2, eds. G. Altarelli, T. Sjöstrand,
F. Zwirner (CERN 96-01, Geneva, 1996), Vol. 1, p. 79.
[44] J. Blümlein, H. Kawamura, Acta Phys. Pol. B 33, 3719 (2002).
[45] E. Accomando, A. Ballestrero, E. Maina, Comput. Phys. Commun. 150, 166
(2003).
[46] A. Ballestrero, R. Chierici, F. Cossutti, E. Migliore, Comput. Phys. Commun.
152, 175 (2003).
[47] W. Beenakker, A.P. Chapovsky, A. Kanaki, C.G. Papadopoulos, R. Pittau,
Nucl. Phys. B667, 359 (2003).
[48] M.W. Grünewald et al., in Reports of the Working Groups on Precision Cal-
culations for LEP2 Physics, eds. S. Jadach, G. Passarino, R. Pittau (CERN
2000-009, Geneva, 2000), p. 1.
[49] W. Beenakker, F.A. Berends, A.P. Chapovsky, Nucl. Phys. B548, 3 (1999);
Y. Kurihara, M. Kuroda, D. Schildknecht, Nucl. Phys. B565, 49 (2000);
Phys. Lett. B509, 87 (2001).
[50] S. Jadach et al., Phys. Rev. D61, 113010 (2000); Phys. Rev. D65, 093010
(2002); Comput. Phys. Commun. 140, 432 (2001); Comput. Phys. Commun.
140, 475 (2001).
[51] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, Phys. Lett. B475, 127
(2000); LC-TH-2000-014; Nucl. Phys. B587, 67 (2000); hep-ph/0101257;
Comput. Phys. Commun. 153, 462 (2003).
[52] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, Nucl. Phys. B560, 33
(1999); Eur. Phys. J. C20, 201 (2001).
[53] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, hep-ph/0110402.
[54] S. Jadach et al., Phys. Lett. B523, 117 (2001).
[55] F. Jegerlehner, LC-TH-2001-035, hep-ph/0105283.
[56] S. Jadach et al., Comput. Phys. Commun. 119, 272 (1999); Eur. Phys. J.
C27, 19 (2003).
[57] J. Fujimoto et al., hep-ph/9603394; Comput. Phys. Commun. 100, 128
(1997).
[58] F.A. Berends, C.G. Papadopoulos, R. Pittau, hep-ph/0002249; Comput.
Phys. Commun. 136, 148 (2001).
[59] G. Montagna et al., Eur. Phys. J. C20, 217 (2001).
[60] E. Accomando, A. Ballestrero, Comput. Phys. Commun. 99, 270 (1997);
E. Accomando, A. Ballestrero, E. Maina, Phys. Lett. B479, 209 (2000);
Comput. Phys. Commun. 150, 166 (2003).
[61] G. Passarino, Comput. Phys. Commun. 97, 261 (1996); hep-ph/9810416;
Nucl. Phys. B578, 3 (2000).
[62] A. Denner, S. Dittmaier, Nucl. Phys. B398, 239 (1993); M. Böhm,
S. Dittmaier, Nucl. Phys. B409, 3 (1993).
[63] D.B. Melrose, Nuovo Cimento XL A, 181 (1965).
[64] J. Fleischer, F. Jegerlehner, O.V. Tarasov, Nucl. Phys. B566, 423 (2000);
T. Binoth, J.P. Guillet, G. Heinrich, Nucl. Phys. B572, 361 (2000); F. Tra-
montano, Phys. Rev. D67, 114005 (2003); T. Binoth, G. Heinrich, N. Kauer,
Nucl. Phys. B654, 277 (2003).
[65] A. Denner, S. Dittmaier, Nucl. Phys. B658, 175 (2003).
[66] W.T. Giele, E.W.N. Glover, J. High Energy Phys. 0404, 029 (2004);
S. Dittmaier, Nucl. Phys. B675, 447 (2003).
[67] B.W. Harris, J.F. Owens, Phys. Rev. D65, 094032 (2002).
[68] S. Catani, M.H. Seymour, Nucl. Phys. B485, 291 (1997) [Erratum Nucl.
Phys. B510, 291 (1997)].
[69] S. Dittmaier, Nucl. Phys. B565, 69 (2000).
[70] L. Phaf, S. Weinzierl, J. High Energy Phys. 0104, 006 (2001); S. Catani,
S. Dittmaier, M.H. Seymour, Z. Trócsányi, Nucl. Phys. B627, 189 (2002).
[71] G. Bélanger et al., Phys. Lett. B559, 252 (2003).
[72] A. Denner, S. Dittmaier, M. Roth, M.M. Weber, Phys. Lett. B560, 196
(2003); Nucl. Phys. B660, 289 (2003).
[73] Y. You et al., Phys. Lett. B571, 85 (2003).
[74] G. Bélanger et al., Phys. Lett. B571, 163 (2003).
[75] A. Denner, S. Dittmaier, M. Roth, M.M. Weber, Phys. Lett. B575, 290
(2003).
[76] F. Jegerlehner, O. Tarasov, Nucl. Phys. Proc. Suppl. 116, 83 (2003).
[77] F. Krauss, R. Kuhn, G. Soff, J. High Energy Phys. 0202, 044 (2002);
A. Schalicke, F. Krauss, R. Kuhn, G. Soff, J. High Energy Phys. 0212, 013
(2002).
[78] T. Ishikawa et al., [Minami-Tateya Collaboration], KEK-92-19; H. Tanaka
et al., [Minami-Tateya Collaboration], Nucl. Instrum. Methods Phys. Res.
A389, 295 (1997); F. Yuasa et al., Prog. Theor. Phys. Suppl. 138, 18 (2000).
[79] S. Kawabata, Comput. Phys. Commun. 88, 309 (1995).
[80] J. Fujimoto et al., Acta Phys. Pol. B 28, 945 (1997).
[81] F. Maltoni, T. Stelzer, J. High Energy Phys. 0302, 027 (2003).
[82] T. Stelzer, W.F. Long, Comput. Phys. Commun. 81, 357 (1994); H. Mu-
rayama, I. Watanabe, K. Hagiwara, KEK-91-11.
[83] C.G. Papadopoulos, Comput. Phys. Commun. 137, 247 (2001).
[84] A. Kanaki, C.G. Papadopoulos, Comput. Phys. Commun. 132, 306 (2000).
[85] T. Gleisberg, F. Krauss, C.G. Papadopoulos, A. Schaelicke, S. Schumann,
Eur. Phys. J. C34, 173 (2004).
[86] W. Kilian, LC-TOOL-2001-039.
[87] A. Pukhov et al., hep-ph/9908288.
[88] M. Moretti, T. Ohl, J. Reuter, LC-TOOL-2001-040, hep-ph/0102195.
[89] M.L. Mangano, M. Moretti, F. Piccinini, R. Pittau, A.D. Polosa, J. High
Energy Phys. 0307, 001 (2003); M.L. Mangano, M. Moretti, R. Pittau, Nucl.
Phys. B632, 343 (2002); F. Caravaglios, M.L. Mangano, M. Moretti, R. Pit-
tau, Nucl. Phys. B539, 215 (1999); F. Caravaglios, M. Moretti, Phys. Lett.
B358, 332 (1995).
[90] P. Golonka, T. Pierzchala, Z. Was, Comput. Phys. Commun. 157, 39 (2004).
[91] M.T. Ronan, LC-TOOL-2003-015 [physics/0306019].
[92] G. Montagna, M. Moretti, O. Nicrosini, F. Piccinini, Eur. Phys. J. C2, 483
(1998); F. Gangemi et al., Eur. Phys. J. C9, 31 (1999); Nucl. Phys. B559,
3 (1999); F. Gangemi, hep-ph/0002142.
[93] K. Kołodziej, Eur. Phys. J. C23, 471 (2002); Comput. Phys. Commun. 151,
339 (2003); A. Biernacik, K. Kołodziej, Nucl. Phys. Proc. Suppl. 116, 33
(2003); A. Biernacik, K. Kolodziej, A. Lorca, T. Riemann, Acta Phys. Pol.
B 34, 5487 (2003).
[94] S. Dittmaier, M. Roth, Nucl. Phys. B642, 307 (2002).
[95] E. Accomando, A. Ballestrero, E. Maina, hep-ph/0404236.
[96] S. Dittmaier, hep-ph/0308079.
[97] R. Harlander, M. Steinhauser, Prog. Part. Nucl. Phys. 43, 167 (1999).
[98] J. Küblbeck, M. Böhm, A. Denner, Comput. Phys. Commun. 60, 165 (1990);
T. Hahn, Comput. Phys. Commun. 140, 418 (2001).
[99] T. Hahn, C. Schappacher, Comput. Phys. Commun. 143, 54 (2002).
[100] M. Tentyukov, J. Fleischer, Comput. Phys. Commun. 132, 124 (2000).
[101] P. Nogueira, J. Comput. Phys. 105, 279 (1993).
[102] T. Hahn, M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999);
T. Hahn, Nucl. Phys. Proc. Suppl. 89, 231 (2000).
[103] J. Fleischer, T. Hahn, W. Hollik, T. Riemann, C. Schappacher, A. Werthen-
bach, LC-TH-2002-002 hep-ph/0202109.
[104] J. Fleischer, A. Leike, T. Riemann, A. Werthenbach, Eur. Phys. J. C31, 37
(2003).
[105] A. Andonov et al., hep-ph/0209297; A. Andonov et al., hep-ph/0202112;
Phys. Part. Nucl. 34, 577 (2003) [Fiz. Elem. Chast. Atom. Yadra 34, 1125
(2003)].
[106] T. Hahn, W. Hollik, A. Lorca, T. Riemann, A. Werthenbach, LC-TH-2003-
083 hep-ph/0307132.
[107] J. Fleischer, A. Lorca, T. Riemann, hep-ph/0409034.
[108] A. Lorca, T. Riemann, hep-ph/0407149.
[109] M. Tentyukov, J. Fleischer, Comput. Phys. Commun. 160, 167 (2004).
[110] J. Fleischer, A. Lorca, M. Tentyukov, Diana and Applications to Fermion
Production in Electron Positron Annihilation, to appear in the proceedings
of ACAT03, December 2003.
[111] G. Passarino, Nucl. Phys. B619, 257 (2001); G. Passarino, S. Uccirati, Nucl.
Phys. B629, 97 (2002); A. Ferroglia, M. Passera, G. Passarino, S. Uccirati,
Nucl. Phys. B650, 162 (2003).
[112] G. Passarino, S. Uccirati, Nucl. Phys. B629, 97 (2002).
[113] A. Ferroglia, M. Passera, G. Passarino, S. Uccirati, Nucl. Phys. B650, 162
(2003).
[114] A. Ferroglia, G. Passarino, S. Uccirati, M. Passera, Nucl. Instrum. Methods
Phys. Res. A502, 391 (2003).
[115] A. Ferroglia, M. Passera, G. Passarino, S. Uccirati, Nucl. Phys. B680, 199
(2004).
[116] F.V. Tkachov, Nucl. Instrum. Methods Phys. Res. A389, 309 (1997);
L.N. Bernstein, Functional Analysis and its Applications, 6, 66 (1972).
[117] S. Actis, A. Ferroglia, G. Passarino, M. Passera, S. Uccirati,
hep-ph/0402132.
[118] G. Passarino, M. Veltman, Nucl. Phys. B160, 151 (1979).
[119] G. Weiglein, R. Scharf, M. Böhm, Nucl. Phys. B416, 606 (1994).
[120] G. ’t Hooft, M. Veltman, Nucl. Phys. B44, 189 (1972); F.V. Tkachov, PhD
thesis, INR, Moscow, March 1984; F.V. Tkachov, Phys. Lett. B100, 65
(1981); K.G. Chetyrkin, F.V. Tkachov, Nucl. Phys. B192, 159 (1981).
[121] F. del Aguila, R. Pittau, J. High Energy Phys. 0407, 017 (2004).
[122] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, hep-ph/0401093.
[123] E. Maina, S. Moretti, D.A. Ross, Phys. Lett. B593, 143 (2004).
[124] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, hep-ph/0407150.
[125] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, Phys. Lett. B570, 205 (2003).
[126] J.H. Kühn, A. Kulesza, S. Pozzorini, M. Schulze, hep-ph/0408308.
[127] M. Dobbs et al., hep-ph/0403100.
[128] E. Maina, S. Moretti, D.A. Ross, J. High Energy Phys. 0304, 056 (2003);
hep-ph/0403269.
[129] O. Brein, U. Baur, W. Hollik, C. Schappacher, D. Wackeroth, Phys. Rev.
D65, 033007 (2002).
[130] S. Dittmaier, M. Krämer, Phys. Rev. D65, 073007 (2002).
[131] U. Baur, D. Wackeroth, hep-ph/0405191.
[132] C.M. Carloni Calame, S. Jadach, G. Montagna, O. Nicrosini, W. Placzek,
Acta Phys. Pol. B 35, 1643 (2004).
[133] W. Hollik, C. Meier, Phys. Lett. B590, 69 (2004).
[134] C. Bourrely, J.P. Guillet, J. Soffer, Nucl. Phys. B361, 72 (1991).
[135] J.R. Ellis, S. Moretti, D.A. Ross, J. High Energy Phys. 0106, 043 (2001).
Max-Planck-Institut für Physik
D-80805 Munich, Germany
E. Accomando, F. del Aguila, M. Awramik, A. Ballestrero
J. van der Bij, W. Beenakker, R. Bonciani, M. Czakon
G. Degrassi, A. Denner, K. Diener, S. Dittmaier, A. Ferroglia
J. Fleischer, A. Freitas, N. Glover, J. Gluza, T. Hahn
S. Heinemeyer, S. Jadach, F. Jegerlehner, W. Kilian
M. Krämer, J. Kühn, E. Maina, S. Moretti, T. Ohl
C.G. Papadopoulos, G. Passarino, R. Pittau, S. Pozzorini
M. Roth, T. Riemann, J.B. Tausk, S. Uccirati
A. Werthenbach, and G. Weiglein
NCSR Demokritos Athens, CERN, Univ. Granada, Univ. Durham, DESY
Univ. Edinburgh, Univ. Freiburg, Univ. Karlsruhe, Univ. Katowice
HNINP Krakow, Max Planck Institut für Physik München, Univ. Nijmegen
Univ. Roma 3, Univ. Southampton, Univ. Torino, PSI Villigen, Univ. Würzburg
(Received October 18, 2004)
Work on electroweak precision calculations and event generators for
electroweak physics studies at current and future colliders is summarized.
PACS numbers: 12.15.Lk, 13.66.Jn, 14.70.Fm, 14.70.Hp
1. Introduction
Apart from the still missing Higgs boson, the Standard Model (SM)
has been impressively confirmed by successful collider experiments at the
particle accelerators LEP, SLC, and Tevatron during the last decade with
high precision, at the level of radiative corrections. Future colliders like
the upcoming LHC or an e⁺e⁻ International Linear Collider (ILC) offer an
even greater physics potential, and in turn represent a great challenge for
∗ Presented at the final meeting of the European Network “Physics at Colliders”, Montpellier, France, September 26–27, 2004.
theory to provide even more precise calculations. Accurate predictions are
necessary not only to increase the level of precision of SM tests, but also to
study the indirect effects of possible new particles.
With increasing energies many new processes with heavy particle pro-
duction will be accessible. Such heavy particles are unstable, so that their
production leads to many-particle final states, with e.g. four or six fermions.
Predictions for such reactions should be based on full transition matrix el-
ements, improved by radiative corrections as far as possible, and call for
proper event generators.
Joint work done within the network is reviewed here; more details can
be found also in previous studies for future colliders [1–3].
2. Precision observables and multi-loop calculations
2.1. Muon lifetime and W–Z mass correlation
The precise measurement of the muon lifetime, or equivalently of the
Fermi constant G_µ, sets an important constraint on the SM parameters,

    G_µ = πα(0) / (√2 M_W² s_w²) × (1 + ∆r) ,        (1)

with s_w² = 1 − c_w² = 1 − M_W²/M_Z², where the quantity ∆r comprises the radiative corrections to muon decay (apart from the photonic corrections in the Fermi model). Solving this relation for the W-boson mass M_W yields a precise prediction for M_W that can be compared with the directly measured value. Recently the full electroweak two-loop calculation has been completed, with the contributions from closed fermion loops [4,5], from bosonic loops involving virtual Higgs bosons [4], and the complete bosonic corrections [5,6]. The two-loop fermionic contributions influence the M_W prediction at the level of ∼ 50 MeV, the two-loop bosonic corrections by 1−2 MeV.
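For illustration, Eq. (1) can be solved for M_W in closed form once ∆r is fixed, since s_w² = 1 − M_W²/M_Z² makes it a quadratic equation in M_W². A minimal numerical sketch (the input values and the representative choice ∆r ≈ 0.035 are illustrative, not taken from the text):

```python
import math

# Illustrative inputs; Delta r ~ 0.035 is an assumed representative value
alpha = 1.0 / 137.036      # fine-structure constant alpha(0)
G_mu  = 1.16637e-5         # Fermi constant [GeV^-2]
M_Z   = 91.1876            # Z-boson mass [GeV]

def mw_prediction(delta_r):
    """Solve G_mu = pi*alpha(0)/(sqrt(2) M_W^2 s_w^2) * (1 + Delta r),
    with s_w^2 = 1 - M_W^2/M_Z^2, for M_W (quadratic in M_W^2)."""
    A = 4.0 * math.pi * alpha * (1.0 + delta_r) / (math.sqrt(2.0) * G_mu * M_Z**2)
    return math.sqrt(0.5 * M_Z**2 * (1.0 + math.sqrt(1.0 - A)))

print(mw_prediction(0.035))   # ~80.4 GeV
```

A shift of ∆r at the 10⁻⁴ level moves this prediction by O(1 MeV), which is how the correction sizes quoted above translate into the M_W determination.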
The predictions at the two-loop level have been further improved by
universal higher-order contributions to the ρ-parameter. The terms of order O(G_µ² m_t⁴ α_s) and O(G_µ³ m_t⁶) have been obtained in [7] and were found to change M_W at the level of 5 MeV and 0.5 MeV, respectively. The leading three-loop bosonic contribution to the ρ-parameter in the large-Higgs-mass limit [8], yielding the power M_H⁴/M_W⁴, is opposite in sign to the leading two-loop correction ∼ M_H²/M_W². The two terms cancel each other for a Higgs-boson mass around 480 GeV. This interesting new result stabilizes the perturbative expansion and makes a strongly interacting Higgs sector very unlikely in view of the electroweak precision data.
The prediction for M_W, including the above-mentioned two-loop and leading three-loop effects, carries, besides the parametric uncertainty, an intrinsic theoretical uncertainty, estimated to be 3−4 MeV [9,10], which has to be compared with the precision of 7 MeV aimed at in the M_W determination at a future ILC [3].
2.2. Precision observables at the Z resonance
In order to describe the Z-boson resonance at LEP1 within satisfac-
tory precision it was possible to parametrize the cross section near the
resonance in such a way [11, 12] that a Born-like form with generalized
“effective” couplings is convoluted with QED structure functions model-
ing initial-state radiation (ISR). From these effective Z-boson–fermion cou-
plings so-called “pseudo-observables” were derived, such as various asymme-
tries, the hadronic Z-peak cross section, partial Z-decay widths, etc. The
precisely calculated pseudo-observables are implemented in the programs
Zfitter and Topaz0 [13]. A critical overview on high-precision physics at
the Z pole, in particular focusing on the theoretical uncertainties, can be
found in [14]. The status of precision pseudo-observables in the MSSM is
summarized in [15].
Following the formal tree-level-like parametrization of the couplings, an “effective weak mixing angle”, sin²θ_eff^f, was derived for each fermion species. Among these parameters the leptonic variable sin²θ_eff^l plays a particularly important role, since it is measured with the high accuracy of 1.7 × 10⁻⁴ and is very sensitive to the Higgs-boson mass. Quite recently, a substantial subclass of the electroweak two-loop contributions, the fermionic contributions to sin²θ_eff^l, was calculated [16]; they reduce the intrinsic theoretical uncertainty to ∼ 5 × 10⁻⁵.
Whether the pseudo-observable approach will also be sufficient for Z-boson physics at the high-luminosity GigaZ option remains to be investigated carefully. In any case, substantial theoretical progress will be needed to match the aimed GigaZ precision on the theoretical side, e.g. for the expected experimental accuracy in sin²θ_eff^l of about 1.3 × 10⁻⁵. A full control of observables at the two-loop level, improved by leading higher-order effects, seems indispensable.
An important entry for the precision observables with a large parametric uncertainty is the photonic vacuum polarization at the Z scale. The hadronic part is determined via a dispersion relation from the cross section for e⁺e⁻ → hadrons, with experimental data as input in the low-energy regime. Possible scans with the radiative-return method require a careful theoretical treatment to reach the required precision [17].
2.3. Deep-inelastic neutrino scattering
An independent precise determination of the electroweak mixing angle in terms of the M_W/M_Z ratio has been done in deep-inelastic neutrino scattering off an isoscalar target by the NuTeV Collaboration [18], yielding a deviation of about 3 standard deviations from the value derived from the global analysis of the other precision observables. A new calculation of the
electroweak one-loop corrections was performed [19] to investigate the stabil-
ity and size of the quantum effects, showing that the theoretical error of the
analysis in [18] was obviously underestimated. The new theoretical result is
now being used for a re-analysis of the experimental data (see also [20] for
another recent recalculation of the electroweak radiative corrections).
2.4. At the 2-loop frontier
Although there are no complete next-to-next-to-leading (NNLO) pre-
dictions for 2 → 2 scattering reactions and 1 → 3 decays (with one truly
massive leg) available yet, significant progress was reached in this direction
in recent years.
Complete virtual two-loop amplitudes for (massless) Bhabha scattering [21] and light-by-light scattering [23] have been worked out. Also, first steps have been made towards massive Bhabha scattering.
In Ref. [22] the coefficient of the O(α² log(s/m_e²)) fixed-order contribution to elastic large-angle Bhabha scattering is derived by adapting the classification of infrared divergences that was recently developed within dimensional regularization, and applying it to the regularization scheme with a massive photon and electron.
The subset of factorizable corrections, resulting from one-loop subrenormalization, is considered in [24]. This requires the evaluation of one-loop diagrams in arbitrary dimension d = 4 − ε. The ε-expansion covering the orders 1/ε and ε, in particular for the box graphs needed in Bhabha scattering, was performed in [25], based on the work of [26]¹. For the genuine two-loop QED corrections to Bhabha scattering, the master integrals for the box with the fermionic loop were calculated [28], and the cross section with the corrections at two loops resulting from these diagrams [29].
A complete set of master integrals for massive two-loop Bhabha scatter-
ing has been derived in the meantime [30], and for form factors with mass-
less [31] and massive propagators [32]. Moreover, two-loop QCD corrections
for the electroweak forward-backward asymmetries were obtained [33]. Also
two-loop master integrals and form factors from virtual light fermions were
derived and applied to Higgs boson production and decays [34].
¹ Techniques applied in the latter work have also been used in [27] for the analytic calculation of Higgs boson decay into two photons.
2.5. Electroweak radiative corrections at high energies
Electroweak corrections far above the electroweak scale, e.g. in the TeV
range, are dominated by soft and collinear gauge-boson exchange, leading to
corrections of the form α^N log^M(s/M_W²) with M ≤ 2N. The leading terms (M = 2N) are called Sudakov logarithms. At the one-loop (N = 1) and two-loop (N = 2) level the leading and subleading corrections to a 2 → 2 process at √s ∼ 1 TeV typically amount to [35]

    δ_LL^(1-loop)  ∼ − [α/(π s_w²)] log²(s/M_W²)    ≃ −26% ,
    δ_NLL^(1-loop) ∼ + [3α/(π s_w²)] log(s/M_W²)    ≃ +16% ,
    δ_LL^(2-loop)  ∼ + [α²/(2π² s_w⁴)] log⁴(s/M_W²) ≃ +3.5% ,
    δ_NLL^(2-loop) ∼ − [3α²/(π² s_w⁴)] log³(s/M_W²) ≃ −4.2% ,        (2)
revealing that these corrections become significant in the high-energy phase
of a future ILC. In contrast to QED and QCD, where the Sudakov logarithms
cancel in the sum of virtual and real corrections, these terms need not com-
pensate in the electroweak SM for two reasons. The weak charges of quarks,
leptons, and electroweak gauge bosons are open, not confined, i.e. there is
(in contrast to QCD) no need to average or to sum over gauge multiplets in
the initial or final states of processes. Even for final states that are inclusive
with respect to the weak charges Sudakov logarithms do not completely can-
cel owing to the definite weak charges in the initial state [36]. Moreover, the
large W- and Z-boson masses make an experimental discrimination of real
W- or Z-boson production possible, in contrast to unobservable soft-photon
or gluon emission.
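The numerical size of the logarithmic terms in Eq. (2) is straightforward to reproduce; a small sketch, assuming illustrative on-shell inputs (α ≈ 1/137, M_W = 80.4 GeV, s_w² = 1 − M_W²/M_Z²):

```python
import math

# Illustrative on-shell inputs (assumed values)
alpha, MW, MZ = 1.0 / 137.0, 80.4, 91.19
sw2 = 1.0 - MW**2 / MZ**2    # on-shell weak mixing angle s_w^2

def sudakov_terms(sqrt_s):
    """Numerical size of the four logarithmic terms of Eq. (2)."""
    L = math.log(sqrt_s**2 / MW**2)
    d1_LL  = -alpha / (math.pi * sw2) * L**2
    d1_NLL = +3.0 * alpha / (math.pi * sw2) * L
    d2_LL  = +alpha**2 / (2.0 * math.pi**2 * sw2**2) * L**4
    d2_NLL = -3.0 * alpha**2 / (math.pi**2 * sw2**2) * L**3
    return d1_LL, d1_NLL, d2_LL, d2_NLL

# At sqrt(s) = 1 TeV this reproduces roughly -26%, +16%, +3.5%, -4.2%
print([round(100.0 * d, 1) for d in sudakov_terms(1000.0)])
```

The alternating signs and slow decrease of the two-loop terms illustrate why fixed-order control up to two loops, or resummation, is needed for per-cent accuracy in the TeV range.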
In recent years several calculations of these high-energy logarithms have
been carried out in the Sudakov regime, where all kinematical invariants
(p_i p_j) of different particle momenta p_i, p_j are much larger than all particle
masses. A complete analysis of all leading and subleading logarithms at the
one-loop level can be found in [37]. Diagrammatic calculations of the leading
two-loop Sudakov logarithms have been carried out in [35,38]. Diagrammatic
results on the so-called “angular-dependent” subleading logarithms have been
presented in [35]. All these explicit results are compatible with proposed
resummations [39,40] that are based on a symmetric SU(2)×U(1) theory at
high energies matched with QED at the electroweak scale. In this ansatz,
improved matrix elements M result from lowest-order matrix elements M_0 upon dressing them with (operator-valued) exponentials,

    M ∼ M_0 ⊗ exp(δ_ew) ⊗ exp(δ_em) .        (3)

Explicit expressions for the electroweak and electromagnetic corrections δ_ew and δ_em, which do not commute with each other, can, for instance, be found in [35].
in [35]. For 2 → 2 neutral-current processes of four massless fermions,
also subsubleading logarithmic corrections have been derived and resummed
[40] using an infrared evolution equation that follows the pattern of QCD.
Applications to vector-boson pair production in proton–proton collisions can
be found in [41].
In supersymmetric models the form of radiative corrections at high en-
ergies has also been worked out for a broad class of processes [42]. Based on
one-loop results their exponentiation has been proposed.
2.6. Higher-order initial-state radiation
Photon radiation off initial-state electrons and positrons leads to large
radiative corrections of the form α^N log^N(m_e²/s). These logarithmic corrections are universal and governed by the DGLAP evolution equations. The solution of these equations for the electron–photon system yields the structure functions, generically denoted by Γ(x) below, which can be used via convolution to improve hard-scattering cross sections σ(p_e⁺, p_e⁻) by photon-emission effects,

    σ(p_e⁺, p_e⁻) = ∫₀¹ dx₊ Γ(x₊) ∫₀¹ dx₋ Γ(x₋) σ̂(x₊ p_e⁺, x₋ p_e⁻) .        (4)
While the soft-photon part of the structure functions (x → 1) can be resummed, resulting in an exponential form, the contributions of hard photons have to be calculated order by order in perturbation theory. In [43] the structure functions are summarized up to O(α³). Ref. [44] describes a calculation of the (non-singlet) contributions up to O(α⁵) and of the small-x terms [α log²(x)]^N to all orders (for previous calculations see papers cited in Ref. [44]).
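The convolution of Eq. (4) can be sketched numerically with a toy radiator. The form Γ(x) ∝ (1 − x)^(β/2 − 1) used below is the standard leading-log soft-photon ansatz, with all input values illustrative; the substitution u = (1 − x)^(β/2) removes the integrable singularity at x → 1:

```python
import math

# Toy leading-log soft-photon radiator (standard ansatz; inputs illustrative):
#   Gamma(x) = (beta/2) * (1 - x)**(beta/2 - 1), normalized to 1 on [0, 1]
m_e, sqrt_s, alpha = 0.000511, 200.0, 1.0 / 137.0
beta = (2.0 * alpha / math.pi) * (math.log(sqrt_s**2 / m_e**2) - 1.0)

def isr_convolution(sigma_hat, n=100000):
    """One-beam version of Eq. (4): integrate Gamma(x) * sigma_hat(x*s).
    The substitution u = (1-x)**(beta/2) maps this to
    int_0^1 du sigma_hat((1 - u**(2/beta)) * s), taming the x -> 1 spike."""
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n                # midpoint rule in u
        x = 1.0 - u ** (2.0 / beta)      # surviving momentum fraction
        total += sigma_hat(x * sqrt_s**2)
    return total / n

# A flat reduced cross section is unchanged by the normalized radiator
print(isr_convolution(lambda s: 1.0))    # 1.0
```

With the second beam included one would nest two such integrals, exactly as in Eq. (4).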
3. Radiative corrections to 2 → 3, 4, ... processes
3.1. Four-fermion final states and W-pair production
Four-fermion final states in e⁺e⁻ collisions, which involve electroweak
collisions, which involve electroweak
boson pair production, are of special interest since they allow the mechanism
of spontaneous symmetry breaking and the non-Abelian structure of the
Standard Model to be directly tested by the experiments. Moreover they
provide a very important background to most searches for new physics.
LEP2 has provided in this respect an ideal testing ground for the SM. The
W profile has been measured with great accuracy, new bounds on anomalous
trilinear gauge-boson couplings have been set, and single W, single Z, ZZ
and Zγ cross sections have been determined for the first time. These studies
will be continued with much higher statistics and energy at a future e⁺e⁻
Linear Collider.
In this context, the Monte Carlo four-fermion generator WPHACT 2.0 has been completed [45], adapted to experimental requests and used for simulation of the LEP2 data [46]. WPHACT 2.0 computes all Standard Model processes with four fermions in the final state at e⁺e⁻ colliders; it makes use of complete, fully massive helicity-amplitude calculations and includes the Imaginary Fermion Loop gauge-restoring scheme². Thanks to these features and new phase-space mappings, WPHACT has been extended to all regions of phase space, including kinematical configurations dominated by small momentum transfer and small invariant masses, like single-W, single-Z, Zγ* and γ*γ* processes. Special attention has been devoted to QED effects, which have a large numerical impact, with new options for the description of Initial State Radiation (ISR) and of the scale dependence of the electromagnetic coupling. Moreover, there is the possibility of including CKM mixing and of accounting for resonances in q̄q channels.
The theoretical treatment and the presently gained level in accuracy in
the description of W-pair-mediated 4f production were triggered by LEP2,
as it is reviewed in Refs. [43,48]. The W bosons are treated as resonances
in the full 4f processes, e⁺e⁻ → 4f(+γ). Radiative corrections are split
into universal and non-universal corrections. The former comprise leading-
logarithmic corrections from ISR, higher-order corrections included by using
appropriate effective couplings, and the Coulomb singularity. These cor-
rections can be combined with the full lowest-order matrix elements easily.
The remaining corrections are called non-universal, since they depend on
the process under investigation. For LEP2 accuracy, it was sufficient to in-
clude these corrections by the leading term of an expansion about the two
W poles, defining the so-called double-pole approximation (DPA). Different
² In [47] the incorporation of the fermion-loop corrections into e⁺e⁻ → n fermions is discussed in terms of an effective Lagrangian approach.
versions of such a DPA have been used in the literature [49–51]. Although
several Monte Carlo programs exist that include universal corrections, only
two event generators, YFSWW [50] and RacoonWW [51,52], include non-
universal corrections, as well as the option of anomalous gauge couplings [53].
In the DPA approach, the W-pair cross section can be predicted within
a relative accuracy of ∼ 0.5% (0.7%) in the energy range between 180 GeV (170 GeV) and 500 GeV, which was sufficient for the LEP2 accuracy of ∼ 1% for energies 170−209 GeV. At threshold (√s ≲ 170 GeV), the present state-of-the-art prediction results from an improved Born approximation based
on leading universal corrections only, because the DPA is not reliable there,
and thus possesses an intrinsic uncertainty of about 2%, which demonstrates
the necessary theoretical improvements for the threshold region. At energies
beyond 500GeV, effects beyond O(α), such as the above-mentioned Sudakov
logarithms at higher orders, become important and have to be included in
predictions at per-cent accuracy.
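The idea behind such a pole expansion can be illustrated with a toy single-resonance model (all names and numbers below are illustrative, not the actual DPA of the cited generators): replacing a slowly varying non-resonant remainder by its on-pole value changes a resonance-dominated integral only at relative order Γ/M:

```python
import math

# Toy single-resonance model; numbers illustrative ("W-like" mass/width)
M, Gamma = 80.4, 2.1  # [GeV]

def breit_wigner(sp):
    """|resonant propagator|^2 as a function of invariant mass squared sp."""
    return 1.0 / ((sp - M**2) ** 2 + (M * Gamma) ** 2)

def integrate(f, a, b, n=100000):
    """Midpoint rule, adequate for this smooth one-dimensional toy."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# A slowly varying "non-resonant remainder" multiplying the resonance
remainder = lambda sp: math.sqrt(sp) / M

a, b = (M - 20.0 * Gamma) ** 2, (M + 20.0 * Gamma) ** 2
full = integrate(lambda sp: remainder(sp) * breit_wigner(sp), a, b)
pole = remainder(M**2) * integrate(breit_wigner, a, b)

# The pole approximation reproduces the full result up to O(Gamma/M)
print(abs(full - pole) / full)
```

In the real DPA the expansion is performed about both W poles of the 4f matrix element, but the accuracy scaling with Γ_W/M_W is the same mechanism.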
At LEP2, the W-boson mass has been determined by the reconstruc-
tion of W bosons from their decay products with a final accuracy of about
30MeV. In [54] the theoretical uncertainty is estimated to be of the order
of ∼ 5MeV. Theoretical improvements are, thus, desirable.
The above discussion illustrates the necessity of a full one-loop calculation for the e⁺e⁻ → 4f process and of further improvements by leading higher-order corrections.
3.2. Single-W production
The single-W production process e⁺e⁻ → eν_e W → eν_e + 2f plays a particularly important role among the 4f production processes at high scattering energies. The process is predominantly initiated by eγ* collision, where the photon is radiated off the electron (or positron) by the Weizsäcker–Williams mechanism, i.e. with a very small off-shellness q_γ².
Consequently the cross section rises logarithmically with the scattering
energy and is of the same size as the W-pair production cross section around
√s = 500GeV; for higher energies single-W dominates over W-pair produc-
tion.
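The logarithmic rise can be made plausible with the standard leading-log equivalent-photon (Weizsäcker–Williams) spectrum; the Q² integration limits chosen below are illustrative for this sketch, not those of the cited calculations:

```python
import math

alpha, m_e = 1.0 / 137.0, 0.000511  # GeV units; inputs illustrative

def photon_flux(x, sqrt_s):
    """Leading-log equivalent-photon spectrum for a photon carrying
    momentum fraction x off the electron (standard approximation)."""
    q2_min = m_e**2 * x**2 / (1.0 - x)   # kinematic lower limit
    q2_max = sqrt_s**2                   # illustrative upper cutoff
    return (alpha / (2.0 * math.pi)) * (1.0 + (1.0 - x) ** 2) / x \
        * math.log(q2_max / q2_min)

# The flux is strongly peaked at small x and grows with log(s),
# driving the logarithmic rise of the single-W cross section.
print(photon_flux(0.01, 500.0) > photon_flux(0.5, 500.0))  # True
```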
Theoretically the dominance of photon exchange at low q_γ² poses several complications. Technically, q_γ² → 0 means that the electrons (or positrons) are produced in the forward direction and that the electron mass has to be taken into account in order to describe the cross section there. Moreover, the mere application of s-dependent leading-logarithmic structure functions does not describe the leading photon-radiation effects properly, since ISR and final-state radiation (FSR) show sizable interferences for forward scattering. Thus, the improvement of lowest-order calculations by leading radiation effects is more complicated than for s-channel-like processes. Finally, the running of the electromagnetic coupling α(q_γ²) has to be evaluated in the region of small momentum transfer (q_γ² < 0), where the fit for the hadronic part of the photon vacuum polarisation [55] should be used.
The Monte Carlo generator KoralW [56] has recently been updated to
include the ISR-FSR interference effects as well as the proper running of
α(q_γ²). Therefore, this program has now reached a level of accuracy similar
to the other state-of-the-art programs for single-W production: Grc4f [57],
Nextcalibur [58], Swap [59], Wphact [45,60], and Wto [61]. It should
be kept in mind that none of these calculations includes non-universal elec-
troweak corrections, leading to a theoretical uncertainty of about ∼ 5% in
cross-section predictions. Although the final solution for a high-energy Lin-
ear Collider certainly requires a full O(α) calculation of the 4f-production
process, a first step of improvement could be done by a careful expansion
about the propagator poles of the photon and W boson. The electroweak
O(α) corrections to the process eγ → ν_e W, which are known [62], represent
a basic building block in this calculation.
3.3. Progress for multi-particle production processes
One-loop integrals become more and more cumbersome if the number N
of external legs in diagrams increases. For N > 4, however, not all exter-
nal momenta are linearly independent because of the four-dimensionality of
space-time. As known for a long time [63], this fact opens the possibility to
relate integrals with N > 4 to integrals with N ≤ 4. In recent years, various
techniques for actual evaluations of one-loop integrals with N = 5,6 have
been worked out [64,65] (see also references therein for older methods and
results). The major complication in the treatment of 2 → 3 processes at one
loop concerns the numerical evaluation of tensor 5-point integrals; in partic-
ular, the occurrence of inverse Gram determinants in the usual Passarino–
Veltman reduction to scalar integrals leads to numerical instabilities at the
phase-space boundary. A possible solution to this problem was worked out
in Ref. [65] where the known direct reduction [63] of scalar 5-point to 4-point
integrals was generalized to tensor integrals, thereby avoiding the occurrence
of leading Gram determinants completely. More work on one-loop N-point
integrals can be found in [66].
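The structure of this reduction can be recalled from the scalar case of [63]: in four dimensions the five external momenta of a pentagon are linearly dependent, so the scalar 5-point function collapses into a sum of the five boxes obtained by pinching one propagator,

```latex
E_0 = \sum_{i=0}^{4} c_i\, D_0^{(i)}\,,
```

where D_0^(i) is the box integral with the i-th propagator omitted and the c_i are kinematic coefficients built from (modified) Cayley and Gram determinants. The point of the tensor generalization [65] is to achieve the analogous step for tensor integrals without introducing inverse leading Gram determinants.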
In the evaluation of real corrections, such as bremsstrahlung, a proper
and numerically stable separation of infrared (soft and collinear) divergences
represents one of the main problems. In the phase-space slicing approach
(see [67] and references therein) the singular regions are excluded from the
“regular” phase-space integration by small cuts on energies, angles, or invari-
ant masses. Using factorization properties, the integration over the singular
regions can be done in the limit of infinitesimally small cut parameters. The
necessary fine-tuning of cut parameters is avoided in so-called subtraction
methods (see [68–70] and references therein), where a specially tuned aux-
iliary function is subtracted from the singular integrand in such a way that
the resulting integral is regular. The auxiliary function has to be chosen
simple enough, so that the singular regions can be integrated over analyt-
ically. In [68] the so-called “dipole subtraction approach” has been worked
out for massless QCD, and subsequently extended to photon emission off
massive fermions [69] and for QCD with massive quarks [70].
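The difference between the two approaches can be illustrated on a one-dimensional toy model of a soft singularity, ∫₀¹ dx f(x)/x with f(0) ≠ 0, whose finite remainder is R = ∫₀¹ dx [f(x) − f(0)]/x. The sketch below is purely illustrative (the function f and all numerical choices are ours, not taken from the cited references): slicing needs the small cut δ and carries a residual δ-dependence, while subtraction needs no cut at all.

```python
import math

def f(x):
    # toy "matrix element": regular at the soft endpoint x = 0
    return math.exp(x)

def midpoint(g, a, b, n=200_000):
    # plain midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def slicing(delta):
    # phase-space slicing: integrate f(x)/x only above the cut x > delta and
    # add the soft region, integrated analytically in the f(x) -> f(0) limit;
    # the result carries a residual O(delta) cut dependence
    return midpoint(lambda x: f(x) / x, delta, 1.0) + f(0.0) * math.log(delta)

def subtraction():
    # subtraction: f(0)/x matches the integrand's soft behaviour, so the
    # difference is integrable down to x = 0 and no cut parameter is needed
    return midpoint(lambda x: (f(x) - f(0.0)) / x, 0.0, 1.0)

# both approximate the same finite remainder R
r_sub = subtraction()
r_slice = slicing(1e-4)
```

Shrinking δ reduces the slicing error but eventually spoils the numerics through large cancellations, which is exactly the fine-tuning problem mentioned above.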
Applications were performed for complete one-loop calculations of elec-
troweak radiative corrections for specific 2 → 3 processes of special in-
terest for a future ILC, e⁺e⁻ → ν̄νH [71, 72] and e⁺e⁻ → tt̄H [73–75].
In [72, 73, 75] the technique [65] for treating tensor 5-point integrals was
employed. While [71, 73, 74] make use of the slicing approach for treat-
ing soft-photon emission, the results of Refs. [72,75] have been obtained by
dipole subtraction and checked by phase-space slicing for soft and collinear
bremsstrahlung. Analytic results for the one-loop corrections are provided
in [76].
The Yukawa coupling of the top quark could be measured at a future ILC
with high energy and luminosity at the level of ∼ 5% [2] by analyzing the
process e⁺e⁻ → tt̄H. A thorough prediction for this process thus has to
control QCD and electroweak corrections. Results on the electroweak O(α)
corrections of Refs. [74,75] show agreement within ∼ 0.1%. The results of
the previous calculation [73] roughly agree with the ones of Refs. [74,75] at
intermediate values of √s and M_H, but are at variance at high energies (TeV
range) and close to threshold (large M_H).
4. Event generators for multi-particle final states
4.1. Multi-purpose generators at parton level
The large variety of different final states for multi-particle production
renders multi-purpose Monte Carlo event generators rather important, i.e.
generators that deliver an event generator for an (as far as possible) arbitrary
user-specified final state based on full lowest-order amplitudes. As output,
these tools yield lowest-order predictions for observables or, more generally,
Monte Carlo samples of events, which are improved by universal radiative
corrections, such as initial-state radiation at the leading-logarithmic level
or beamstrahlung effects. Most of the multi-purpose generators are also
interfaced to parton-shower and hadronization programs. The generality
renders these programs, however, rather complex devices and, at present,
they are far from representing tools for high-precision physics, because non-
universal radiative corrections are not taken into account in predictions.
The following multi-purpose generators for multi-parton production, in-
cluding program packages for the matrix-element evaluation, are available:
• Amegic [77]: Helicity amplitudes are automatically generated by the
program for the SM, the MSSM, and some new-physics models. Vari-
ous interfaces (ISR, PDFs, beam spectra, Isajet, etc.) are supported.
The phase-space generation was successfully tested for up to six par-
ticles in the final state.
• Grace [78]: The amplitudes are delivered by a built-in package, which
can also handle SUSY processes. The phase-space integration is done
by BASES [79]. Tree-level calculations have been performed for up
to (selected) six-fermion final states. The extension of the system to
include one-loop corrections is the Grace-Loop [80] program.
• Madevent [81] + Madgraph [82]: The Madgraph algorithm can
generate tree-level matrix elements for any SM process (fully support-
ing particle masses), but a practical limitation is 9,999 diagrams. In
addition, Madgraph creates Madevent, an event generator for the
requested process.
• Phegas [83] + Helac [84]: The Helac program delivers amplitudes
for all SM processes (including all masses). The phase-space integra-
tion done by Phegas has been tested for selected final states with
up to seven particles. Recent applications concern channels with six-
fermion final states [85].
• Whizard [86] + Comphep [87] / Madgraph [82] / O’mega [88]:
Matrix elements are generated by an automatic interface to (older
versions of) Comphep, Madgraph, and (the up-to-date version of)
O’mega. Phase-space generation has been tested for most 2 → 6
and some 2 → 8 processes; unweighted events are supported, and a
large variety of interfaces (ISR, beamstrahlung, Pythia, PDFs, etc.)
exists. The inclusion of MSSM amplitudes (O’mega) and improved
phase-space generation (2 → 6) are work in progress.
• Alpgen [89] is a specific code for computing the perturbative part of
observables in high energy hadron-hadron collisions, which require a
convolution of the perturbative quantities with structure or fragmen-
tation functions that account for non-perturbative effects.
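The convolution mentioned in the last item is the standard factorised form of a hadronic cross section, shown here schematically (μ_F denotes the factorisation scale):

```latex
\sigma(h_1 h_2 \to X) = \sum_{a,b}\int_0^1\!\mathrm{d}x_1\,\mathrm{d}x_2\,
f_{a/h_1}(x_1,\mu_F)\,f_{b/h_2}(x_2,\mu_F)\,
\hat{\sigma}_{ab\to X}(x_1 x_2 s,\mu_F)\,,
```

with parton distribution functions f_{a/h}; fragmentation functions enter analogously when identified hadrons are required in the final state.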
Tuned comparisons of different generators, both at parton and detector level,
are extremely important, but become more and more laborious owing to the
large variety of multi-particle final states. Some progress towards facilitating
and automating such comparisons has been made by the MC-tester project [90]
and Java interfaces [91].
4.2. Event generators and results for e⁺e⁻ → 6f
Particular progress was reached in recent years in the description of
six-fermion production processes. Apart from the multi-purpose genera-
tors listed in the previous section, also dedicated Monte Carlo programs and
generators have been developed for this class of processes:
• Sixfap [92]: Matrix elements are provided for all 6f final states (with
finite fermion masses), including all electroweak diagrams. The gen-
eralization to QCD diagrams and the extension of the phase-space
integration for all final states is in progress.
• eett6f [93]: Only processes relevant for tt̄ production are supported
(a new version includes e± in the final state and QCD diagrams); finite
fermion masses are possible.
• Lusifer [94]: All 6f final states are possible, including QCD diagrams
with up to four quarks; representative results for all these final states
have been presented. External fermions are massless. An unweighting
algorithm and an interface to Pythia are available.
• Phase [95]: All Standard Model processes with six fermions in the final
state at the LHC are covered. The program employs the full set of tree-level Feynman dia-
grams, taking into account a finite mass for the b quark. An interface
to hadronization is provided.
A comparison of results [96] for some processes e⁺e⁻ → 6f relevant for tt̄
production for massless external fermions reveals good agreement between
the various programs, where minor differences are presumably due to the dif-
ferent treatments of the bottom-quark Yukawa coupling, which is neglected
in some cases.
A tuned comparison of results obtained with Lusifer and Whizard for
a large survey of 6f final states has been presented in Ref. [94].
5. Other developments
5.1. Automatization of loop calculations
Once the necessary techniques and theoretical subtleties of a perturbative
calculation are settled, carrying out the actual calculation is an algorithmic
matter. Thus, an automatization of such calculations is highly desirable, in
order to facilitate related calculations. Various program packages have been
presented in the literature for automatized tree-level, one-loop, and multi-
loop calculations. A comprehensive overview can, e.g., be found in [97]; in
the following we have to restrict ourselves to a selection of topics, where
emphasis is put on electroweak aspects.
The generation of Feynman graphs and amplitudes is a combinatorial
problem that can be attacked with computer algebra. The program pack-
ages FeynArts [98] (which has been extended in [99] for the MSSM) and
Diana [100] (based on Qgraf [101]) are specifically designed for this task;
also the Grace-Loop [80] system automatically generates Feynman dia-
grams and loop amplitudes. Moreover, the task of calculating virtual one-
loop and the corresponding real-emission corrections to 2 → 2 scattering
reactions is by now well understood. Such calculations are widely auto-
mated in the packages FormCalc combined with LoopTools [102], and
Grace-Loop [80].
An illustrative example was provided for the differential cross section
for e⁺e⁻ → tt̄ in lowest order as well as including electroweak O(α) cor-
rections. A program FA+FC [103] was obtained from the output of the
FeynArts and FormCalc packages and makes use of the LoopTools
library for the numerical evaluation. Another program, Topfit [103,104],
was developed from an algebraic reduction of Feynman graphs (delivered
from Diana) within Form; for the numerics LoopTools is partially em-
ployed. A completely independent approach has been made by the Sanc
project [105]. The Sanc program contains another independent calculation
of the O(α) corrections to e⁺e⁻ → tt̄, the results of which are also included
in [103]. More details on comparisons, including also other fermion flavors,
can be found in [103,106]. The agreement between the numerical results at
the level of 10 digits reflects the enormous progress achieved in recent years
in the automatization of one-loop calculations.
The one-loop calculation for the process e⁺e⁻ → tt̄(γ) including hard
bremsstrahlung was originally performed in [104] without full automatiza-
tion; it was later repeated (apart from the hard bremsstrahlung
part) as an example of automatization in [107]. The extension of Di-
ana towards full automatization in terms of the package aITALC is a new
development [107, 108]; automatization of the full calculation is performed,
including renormalization and the creation and running of a FORTRAN
code. Applications to the calculation of the processes e⁺e⁻ → ff̄(γ) for
various final-state fermions (t, b, µ, e, and also sb̄, ct̄, µτ̄) have been performed. For further
work in automatization see [109,110].
5.2. Numerical approaches to loop calculations
Most of the various techniques of performing loop calculations share the
common feature that the integration over the loop momenta is performed
analytically. This procedure leads to complications at one loop if five or more
external legs are involved, since both speed and stability of programs become
more and more jeopardized. At the two-loop level, already the evaluation of
self-energy and vertex corrections can lead to extremely complicated higher
transcendental functions that are hard to evaluate numerically.
An idea to avoid these complications is provided by a more or less purely
numerical evaluation of loop corrections. There are two main difficulties in
this approach. Firstly, the appearing ultraviolet and infrared divergences
have to be treated and canceled carefully. Secondly, even finite loop integrals
require careful handling of integrand singularities near particle poles, where
Feynman’s iϵ prescription is used as regularization.
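A minimal numerical illustration of this iϵ handling (an illustrative toy of ours, not the method of the works cited below): the finite part of the scalar one-loop two-point function has the Feynman-parameter representation minus the integral over x from 0 to 1 of log[(x m1² + (1−x) m2² − x(1−x) p² − iε)/μ²]. Keeping a small imaginary shift −iε in the logarithm implements Feynman's prescription numerically and produces the correct absorptive part above threshold.

```python
import cmath

def b0_finite(p2, m1sq, m2sq, mu2=1.0, eps=1e-30, n=2000):
    """Finite part of the scalar one-loop two-point function, evaluated by
    midpoint-rule integration over the Feynman parameter x.  The shift -i*eps
    moves the logarithm's argument off the branch cut when it turns negative,
    which happens for momenta above the particle threshold."""
    h = 1.0 / n
    total = 0.0 + 0.0j
    for i in range(n):
        x = (i + 0.5) * h
        delta = x * m1sq + (1.0 - x) * m2sq - x * (1.0 - x) * p2 - 1j * eps
        total -= cmath.log(delta / mu2) * h
    return total

# below threshold (p^2 < (m1+m2)^2) the result is real; above threshold the
# -i*eps prescription generates the absorptive (imaginary) part automatically
below = b0_finite(p2=1.0, m1sq=1.0, m2sq=1.0)
above = b0_finite(p2=9.0, m1sq=1.0, m2sq=1.0)
```

Without the iε shift the integrand would sit exactly on the branch cut of the logarithm above threshold, and the sign of the imaginary part would be undefined.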
In [111] a method for a purely numerical evaluation of loop integrals
is proposed. Each integral is parametrized with Feynman parameters and
subsequently rewritten with partial integrations. The final expression con-
sists of a quite simple part containing the singular terms and another more
complicated looking part that can be integrated numerically. The actual
application of the method to a physical process is still work in progress.
There are five papers in a series devoted to the numerical evaluation of
multi-loop, multi-leg Feynman diagrams. In [111] the general strategy is
outlined and in [112] a complete list of results is given for two-loop functions
with two external legs, including their infrared divergent on-shell derivatives.
Results for one-loop multi-leg diagrams are shown in [113] and additional
material can be found in [114]. Two-loop three-point functions for infrared
convergent configurations are considered in [115], where numerical results
can be found.
Ref. [113] presents a detailed investigation of the algorithms, based on
the Bernstein–Tkachov (BT) theorem [116], which form the basis for a fast
and reliable numerical integration of one-loop multi-leg (up to six in this
paper) diagrams. The rationale for this work is the need to address a number
of problems that one encounters in assembling a calculation of some
complicated process, e.g. full one-loop corrections to
e⁺e⁻ → 4 fermions. Furthermore, in any attempt to compute physical ob-
servables at the two-loop level, we will have to include the one-loop part, and
it is rather obvious that the two pieces should be treated on equal footing.
All algorithms that aim to compute Feynman diagrams numerically are
based on some manipulation of the original integrands that brings the final
answer into a smooth form. This has the consequence of bringing the
original (Landau) singularity of the diagram into some overall denominator
and, usually, the method overestimates the singular behavior around some
threshold. In these regions an alternative derivation is needed. Instead of
using the method of asymptotic expansions, a novel algorithm is introduced
based on a Mellin–Barnes decomposition of the singular integrand, followed
by a sector decomposition that allows one to write the Laurent expansion
around threshold.
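The central identity behind such a decomposition is the standard Mellin–Barnes splitting of a sum raised to a power (quoted here for orientation):

```latex
\frac{1}{(A+B)^{\lambda}} = \frac{1}{2\pi i\,\Gamma(\lambda)}
\int_{-i\infty}^{+i\infty}\mathrm{d}s\,
\Gamma(\lambda+s)\,\Gamma(-s)\,\frac{B^{s}}{A^{\lambda+s}}\,,
```

which separates the two scales A and B; closing the contour over the poles of the Γ functions and combining with sector decomposition then yields the Laurent expansion around threshold.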
Particular care has been devoted to analyze those situations where a
sub-leading singularity may occur, and to properly account for those cases
where the algorithm cannot be applied because the corresponding BT factor
is zero although the singular point in parametric space does not belong to
the integration domain.
Finally, a description of infrared divergent one-loop virtual configura-
tions is given in the framework of dimensional regularization: here both the
residue of the infrared pole and the infrared finite remainder are cast into
a form that can be safely computed numerically. The collection of formu-
las that cover all corners of phase space has been translated into a set of
FORM codes and the output has been used to create a FORTRAN code
whose technical description will be given elsewhere.
Ref. [117] addresses the problem of deriving a judicious and efficient way
to deal with tensor Feynman integrals, namely those integrals that occur in
any field theory with spin and non-trivial structures for the numerators of
Feynman propagators. This paper forms a basis for a realistic calculation of
physical observables at the two-loop level.
The complexity of handling two-loop tensor integrals is reflected in the
following simple consideration: the complete treatment of one-loop tensor in-
tegrals was confined to the appendices of [118], while the reduction of general
two-loop self-energies to standard scalar integrals already required a consid-
erable fraction of [119]; the inclusion of two-loop vertices requires the whole
content of this paper. The past experience in the field has shown the encom-
passed convenience of gathering in one single place the complete collection
of results needed for a broad spectrum of applications. In recent years, the
most popular and quite successful tool in dealing with multi-loop Feynman
diagrams in QED/QCD (or in selected problems in different models, charac-
terized by a very small number of scales), has been the Integration-By-Parts
Identities method [120]. However, reduction to a set of master integrals is
poorly known in the enlarged scenario of multi-scale electroweak physics.
In [121] another new method is presented in which almost all the work
can be performed numerically: the tensor integrals are numerically reduced
to the standard set of one-loop scalar functions and any amplitude is calcu-
lated simply by contracting the numerically computed tensor integrals with the
external tensors. To this aim, a recursion relation is introduced that links
high-rank tensor integrals to lower-rank ones. Singular kinematical configu-
rations give a smoother behavior than in other approaches because, at each
level of iteration, only inverse square roots of Gram determinants appear.
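The pattern of such reductions can be recalled from the simplest textbook case, the rank-one two-point function (standard Passarino–Veltman algebra, not the recursion of [121] itself): writing B^μ(p) = p^μ B_1 and contracting with p_μ gives

```latex
B_1 = \frac{1}{2p^2}\Big[A_0(m_1^2)-A_0(m_2^2)
-(p^2+m_1^2-m_2^2)\,B_0(p^2,m_1^2,m_2^2)\Big]\,.
```

Already here an inverse kinematic determinant appears (for the two-point function the Gram determinant is simply p²); for higher points and ranks such inverse Gram determinants proliferate, which is precisely what the iteration of [121] softens to inverse square roots.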
6. Electroweak effects in hadronic processes
In Refs. [122–126] (see also [127]) the importance of electroweak one-loop
corrections to hadronic observables was demonstrated, such as bb̄, ‘prompt
photon + jet’ and ‘Z + jet’ production at the Tevatron and the LHC, and jet and bb̄
production at linear colliders, which can compete in size with QCD correc-
tions. Their inclusion in experimental analyses is thus important, especially
in view of searches for new physics. In case of ‘Z + jet’ production they can
rise to O(15–20%) at large transverse momentum at the LHC, while being
typically half the size in case of ‘photon + jet’ production. As these two
channels are the contributors to the Drell-Yan process, and since the latter
is envisaged to be used as one of the means to measure the LHC luminosity,
it is clear that neglecting them in experimental analyses would spoil the
luminosity measurements.
Ref. [128] emphasised the importance of NLO electroweak effects in
three-jet production in e⁺e⁻ scattering at the Z-pole (SLC, LEP and GigaZ),
showing typical corrections of O(2–4%) (e.g. in jet-rates and thrust), compa-
rable to the SLC and LEP experimental accuracy and certainly larger than
the one expected at GigaZ. They also introduce sizable parity-violating ef-
fects into the fully differential structure of three-jet events in the presence of
beam polarisation, which are of relevance as background to new physics
searches in SLC and GigaZ data.
The complete set of electroweak O(α) corrections to the Drell–Yan-like
production of Z and W bosons has been studied in [129–131]. These cor-
rections are phenomenologically relevant both at the Tevatron and the LHC.
It is shown that the pole expansion yields a good description of resonance
observables, but it is not sufficient for the high-energy tail of transverse-
momentum distributions, relevant for new-physics searches. Horace and
Winhac are Monte Carlo event generators [132], developed for single W
production and decay, which in their current versions include higher-order
QED corrections in leptonic W decays, a crucial entry for precision deter-
mination of the W mass and width at hadron colliders.
Production of vector-boson pairs is an important probe for potential non-
standard anomalous gauge couplings. In order to identify possible deviations
from the SM predictions, an accurate knowledge of the electroweak higher-
order contributions is mandatory as well, in particular for large transverse
momenta. A complete electroweak one-loop calculation was performed for
γZ production [133]; for other processes like γW,... the large logarithms in
the Sudakov regime were derived [41].
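For orientation, in the Sudakov regime (ŝ ≫ M_W²) the one-loop weak corrections are dominated by double and single logarithms of the generic form (schematic, with process-dependent coefficients c_1, c_2):

```latex
\delta \sim \frac{\alpha}{4\pi}\left[-c_2\log^2\frac{\hat{s}}{M_W^2}
+ c_1\log\frac{\hat{s}}{M_W^2}\right],
```

so that at TeV-scale transverse momenta the corrections can reach the 10% level, which is why the logarithmic approximations derived in [41] are phenomenologically valuable.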
A further aspect that should be recalled is that weak corrections nat-
urally introduce parity-violating effects in observables, detectable through
asymmetries in the cross-section. These effects are further enhanced if polarisation
of the incoming beams is exploited, such as at RHIC-Spin [134, 135],
where it will also be used to measure polarised structure functions.
7. Conclusions
During the recent years there has been continuous progress in the devel-
opment of new techniques and in making precise predictions for electroweak
physics at future colliders. However, to be prepared for the LHC and a future
e⁺e⁻ linear collider with high energy and luminosity, an enormous amount of
work is still ahead of us. Not only technical challenges but also field-theoretical
issues such as renormalization, the treatment of unstable particles, etc., de-
mand a higher level of understanding. Both loop calculations and the
descriptions of multi-particle production processes with the help of Monte
Carlo techniques require and will profit from further improving computing
devices. The list of challenges and interesting issues could certainly be
continued at will. Electroweak physics at future
colliders will be a highly exciting issue.
REFERENCES
[1] G. Altarelli, M. Mangano (Eds.), Proc. of the CERN Workshop on Standard
Model Physics (and More) at the LHC, CERN 2000-004.
[2] J.A. Aguilar-Saavedra et al., TESLA Technical Design Report Part
III: Physics at an e⁺e⁻
Linear Collider; T. Abe et al. [Ameri-
can Linear Collider Working Group Collaboration], in Proc. of the
APS/DPF/DPB Summer Study on the Future of Particle Physics (Snow-
mass 2001)
ed. R. Davidson, C. Quigg, SLAC-R-570, Resource book
for Snowmass 2001 hep-ex/0106055, hep-ex/0106056, hep-ex/0106057,
hep-ex/0106058; K. Abe et al. [ACFA Linear Collider Working Group Col-
laboration], ACFA Linear Collider Working Group report, hep-ph/0109166.
[3] U. Baur et al., in Proc. of the APS/DPF/DPB Summer Study on the Fu-
ture of Particle Physics (Snowmass 2001) ed. N. Graf, hep-ph/0111314,
hep-ph/0202001.
[4] A. Freitas, W. Hollik, W. Walter, G. Weiglein, Phys. Lett. B495, 338 (2000),
[Erratum Phys. Lett. B570, 260 (2003)]; Nucl. Phys. B632, 189 (2002),
[Erratum Nucl. Phys. B666, 305 (2003)].
[5] M. Awramik, M. Czakon, Phys. Lett. B568, 48 (2003).
[6] M. Awramik, M. Czakon, Phys. Rev. Lett. 89, 241801 (2002); A. On-
ishchenko, O. Veretin, Phys. Lett. B551, 111 (2003); M. Awramik, M. Cza-
kon, A. Onishchenko, O. Veretin, Phys. Rev. D68, 053004 (2003).
[7] J. van der Bij, K. Chetyrkin, M. Faisst, G. Jikia, T. Seidensticker, Phys. Lett.
B498, 156 (2001); M. Faisst, J.H. Kühn, T. Seidensticker, O. Veretin, Nucl.
Phys. B665, 649 (2003).
[8] R. Boughezal, J.B. Tausk, J.J. van der Bij, hep-ph/0410216.
[9] W. Hollik, in proceedings of the Zeuthen workshop Electroweak Precision
Data and the Higgs Mass, Zeuthen, 2003, eds. S. Dittmaier, K. Mönig, DESY-
PROC-2003-01 and MPP-2003-50.
[10] M. Awramik, M. Czakon, A. Freitas, G. Weiglein, Phys. Rev. D69, 053006
(2004).
[11] F.A. Berends et al., in Z Physics at LEP 1, eds. G. Altarelli, R. Kleiss,
C. Verzegnassi (CERN 89-08, Geneva, 1989), Vol. 1, p. 89; W. Beenakker,
F.A. Berends, S.C. van der Marck, Z. Phys. C46, 687 (1990); A. Borrelli,
M. Consoli, L. Maiani, R. Sisto, Nucl. Phys. B333, 357 (1990).
[12] D.Y. Bardin et al., hep-ph/9709229, in Precision Calculations for the Z
Resonance, D. Bardin, W. Hollik, G. Passarino (Eds.), CERN 95-03.
[13] D.Y. Bardin, G. Passarino, hep-ph/9803425; G. Montagna, O. Nicrosini,
F. Piccinini, G. Passarino, Comput. Phys. Commun. 117, 278 (1999);
G. Montagna, F. Piccinini, O. Nicrosini, G. Passarino, R. Pittau, Comput.
Phys. Commun. 76, 328 (1993); G. Montagna, F. Piccinini, O. Nicrosini,
G. Passarino, R. Pittau, Nucl. Phys. B401, 3 (1993); D.Y. Bardin,
M. Grünewald, G. Passarino, hep-ph/9902452; D.Y. Bardin et al., Comput.
Phys. Commun. 133, 229 (2001).
[14] Contributions of W. Hollik, W. van Neerven, G. Passarino to the proceedings
of the Zeuthen workshop Electroweak Precision Data and the Higgs Mass,
Zeuthen, 2003, eds. S. Dittmaier, K. Mönig, DESY-PROC-2003-01 and MPP-
2003-50.
[15] S. Heinemeyer, hep-ph/0406245.
[16] M. Awramik, M. Czakon, A. Freitas, G. Weiglein, hep-ph/0407317;
M. Awramik, M. Czakon, A. Freitas, G. Weiglein, hep-ph/0408207.
[17] H. Czyz, J.H. Kühn, G. Rodrigo, Nucl. Phys. Proc. Suppl. 116, 249 (2003);
J. Gluza, A. Hoefer, S. Jadach, F. Jegerlehner, Eur. Phys. J. C28, 261
(2003); H. Czyz, A. Grzelinska, J.H. Kühn, G. Rodrigo, Eur. Phys. J. C33,
333 (2004).
[18] G.P. Zeller et al. [NuTeV Collaboration], Phys. Rev. Lett. 88, 091802 (2002),
[Erratum Phys. Rev. Lett. 90, 239902 (2003)].
[19] K.P.O. Diener, S. Dittmaier, W. Hollik, Phys. Rev. D69, 073005 (2004).
[20] A.B. Arbuzov, D.Y. Bardin, L.V. Kalinovskaya, hep-ph/0407203.
[21] Z. Bern, L.J. Dixon, A. Ghinculov, Phys. Rev. D63, 053007 (2001).
[22] E.W.N. Glover, J.B. Tausk, J.J. van der Bij, Phys. Lett. B516, 33 (2001).
[23] Z. Bern et al., J. High Energy Phys. 0111, 031 (2001).
[24] J. Fleischer, T. Riemann, O.V. Tarasov, A. Werthenbach, Nucl. Phys. Proc.
Suppl. 116, 43 (2003).
[25] J. Fleischer, T. Riemann, O.V. Tarasov, Acta Phys. Pol. B 34, 5345 (2003).
[26] J. Fleischer, F. Jegerlehner, O.V. Tarasov, Nucl. Phys. B672, 303 (2003).
[27] J. Fleischer, O.V. Tarasov, V.O. Tarasov, Phys. Lett. B584, 294 (2004).
[28] R. Bonciani, A. Ferroglia, P. Mastrolia, E. Remiddi, J.J. van der Bij, Nucl.
Phys. B681, 261 (2004).
[29] R. Bonciani, A. Ferroglia, P. Mastrolia, E. Remiddi, J.J. van der Bij,
hep-ph/0405275 to appear in Nucl. Phys. B.
[30] M. Czakon, J. Gluza, T. Riemann, hep-ph/0406203.
[31] T.G. Birthwright, E.W.N. Glover, P. Marquard, J. High Energy Phys. 0409,
042 (2004).
[32] U. Aglietti, R. Bonciani, Nucl. Phys. B668, 3 (2003); U. Aglietti, R. Bon-
ciani, hep-ph/0401193.
[33] R. Bonciani, P. Mastrolia, E. Remiddi, Nucl. Phys. B690, 138 (2004).
[34] U. Aglietti, R. Bonciani, G. Degrassi, A. Vicini, hep-ph/0407162; U. Aglietti,
R. Bonciani, G. Degrassi, A. Vicini, Phys. Lett. B595, 432 (2004).
[35] A. Denner, M. Melles, S. Pozzorini, Nucl. Phys. B662, 299 (2003).
[36] M. Ciafaloni, P. Ciafaloni, D. Comelli, Phys. Lett. B501, 216 (2001).
[37] A. Denner, S. Pozzorini, Eur. Phys. J. C18, 461 (2001); Eur. Phys. J. C21,
63 (2001).
[38] W. Beenakker, A. Werthenbach, Phys. Lett. B489, 148 (2000); Nucl. Phys.
B630, 3 (2002); M. Melles, Phys. Lett. B495, 81 (2000); M. Hori, H. Kawa-
mura, J. Kodaira, Phys. Lett. B491, 275 (2000).
[39] V.S. Fadin, L.N. Lipatov, A.D. Martin, M. Melles, Phys. Rev. D61, 094002
(2000); J.H. Kühn, A.A. Penin, V.A. Smirnov, Eur. Phys. J. C17, 97 (2000);
M. Melles, Phys. Rep. 375, 219 (2003); Eur. Phys. J. C24, 193 (2002).
[40] J.H. Kühn, S. Moch, A.A. Penin, V.A. Smirnov, Nucl. Phys. B616, 286
(2001) [Erratum Nucl. Phys. B648, 455 (2003)].
[41] E. Accomando, A. Denner, S. Pozzorini, Phys. Rev. D65, 073003 (2002);
E. Accomando, A. Denner, A. Kaiser, hep-ph/0409247.
[42] M. Beccaria et al., Phys. Rev. D64, 053016 (2001); Phys. Rev. D65, 093007
(2002); LC-TH-2002-005; Phys. Rev. D68, 035014 (2003); LC-TH-2002-017
Int. J. Mod. Phys. A18, 5069 (2003); Nucl. Phys. B663, 394 (2003).
[43] W. Beenakker et al., in Physics at LEP2, eds. G. Altarelli, T. Sjöstrand,
F. Zwirner (CERN 96-01, Geneva, 1996), Vol. 1, p. 79.
[44] J. Blümlein, H. Kawamura, Acta Phys. Pol. B 33, 3719 (2002).
[45] E. Accomando, A. Ballestrero, E. Maina, Comput. Phys. Commun. 150, 166
(2003).
[46] A. Ballestrero, R. Chierici, F. Cossutti, E. Migliore, Comput. Phys. Commun.
152, 175 (2003).
[47] W. Beenakker, A.P. Chapovsky, A. Kanaki, C.G. Papadopoulos, R. Pittau,
Nucl. Phys. B667, 359 (2003).
[48] M.W. Grünewald et al., in Reports of the Working Groups on Precision Cal-
culations for LEP2 Physics, eds. S. Jadach, G. Passarino, R. Pittau (CERN
2000-009, Geneva, 2000), p. 1.
[49] W. Beenakker, F.A. Berends, A.P. Chapovsky, Nucl. Phys. B548, 3 (1999);
Y. Kurihara, M. Kuroda, D. Schildknecht, Nucl. Phys. B565, 49 (2000);
Phys. Lett. B509, 87 (2001).
[50] S. Jadach et al., Phys. Rev. D61, 113010 (2000); Phys. Rev. D65, 093010
(2002); Comput. Phys. Commun. 140, 432 (2001); Comput. Phys. Commun.
140, 475 (2001).
[51] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, Phys. Lett. B475, 127
(2000); LC-TH-2000-014; Nucl. Phys. B587, 67 (2000); hep-ph/0101257;
Comput. Phys. Commun. 153, 462 (2003).
[52] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, Nucl. Phys. B560, 33
(1999); Eur. Phys. J. C20, 201 (2001).
[53] A. Denner, S. Dittmaier, M. Roth, D. Wackeroth, hep-ph/0110402.
[54] S. Jadach et al., Phys. Lett. B523, 117 (2001).
[55] F. Jegerlehner, LC-TH-2001-035, hep-ph/0105283.
[56] S. Jadach et al., Comput. Phys. Commun. 119, 272 (1999); Eur. Phys. J.
C27, 19 (2003).
[57] J. Fujimoto et al., hep-ph/9603394; Comput. Phys. Commun. 100, 128
(1997).
[58] F.A. Berends, C.G. Papadopoulos, R. Pittau, hep-ph/0002249; Comput.
Phys. Commun. 136, 148 (2001).
[59] G. Montagna et al., Eur. Phys. J. C20, 217 (2001).
[60] E. Accomando, A. Ballestrero, Comput. Phys. Commun. 99, 270 (1997);
E. Accomando, A. Ballestrero, E. Maina, Phys. Lett. B479, 209 (2000);
Comput. Phys. Commun. 150, 166 (2003).
[61] G. Passarino, Comput. Phys. Commun. 97, 261 (1996); hep-ph/9810416;
Nucl. Phys. B578, 3 (2000).
[62] A. Denner, S. Dittmaier, Nucl. Phys. B398, 239 (1993); M. Böhm,
S. Dittmaier, Nucl. Phys. B409, 3 (1993).
[63] D.B. Melrose, Nuovo Cimento XL A, 181 (1965).
[64] J. Fleischer, F. Jegerlehner, O.V. Tarasov, Nucl. Phys. B566, 423 (2000);
T. Binoth, J.P. Guillet, G. Heinrich, Nucl. Phys. B572, 361 (2000); F. Tra-
montano, Phys. Rev. D67, 114005 (2003); T. Binoth, G. Heinrich, N. Kauer,
Nucl. Phys. B654, 277 (2003).
[65] A. Denner, S. Dittmaier, Nucl. Phys. B658, 175 (2003).
[66] W.T. Giele, E.W.N. Glover, J. High Energy Phys. 0404, 029 (2004);
S. Dittmaier, Nucl. Phys. B675, 447 (2003).
[67] B.W. Harris, J.F. Owens, Phys. Rev. D65, 094032 (2002).
[68] S. Catani, M.H. Seymour, Nucl. Phys. B485, 291 (1997) [Erratum Nucl.
Phys. B510, 291 (1997)].
[69] S. Dittmaier, Nucl. Phys. B565, 69 (2000).
[70] L. Phaf, S. Weinzierl, J. High Energy Phys. 0104, 006 (2001); S. Catani,
S. Dittmaier, M.H. Seymour, Z. Trócsányi, Nucl. Phys. B627, 189 (2002).
[71] G. Bélanger et al., Phys. Lett. B559, 252 (2003).
[72] A. Denner, S. Dittmaier, M. Roth, M.M. Weber, Phys. Lett. B560, 196
(2003); Nucl. Phys. B660, 289 (2003).
[73] Y. You et al., Phys. Lett. B571, 85 (2003).
[74] G. Bélanger et al., Phys. Lett. B571, 163 (2003).
[75] A. Denner, S. Dittmaier, M. Roth, M.M. Weber, Phys. Lett. B575, 290
(2003).
[76] F. Jegerlehner, O. Tarasov, Nucl. Phys. Proc. Suppl. 116, 83 (2003).
[77] F. Krauss, R. Kuhn, G. Soff, J. High Energy Phys. 0202, 044 (2002);
A. Schalicke, F. Krauss, R. Kuhn, G. Soff, J. High Energy Phys. 0212, 013
(2002).
[78] T. Ishikawa et al., [Minami-Tateya Collaboration], KEK-92-19; H. Tanaka
et al., [Minami-Tateya Collaboration], Nucl. Instrum. Methods Phys. Res.
A389, 295 (1997); F. Yuasa et al., Prog. Theor. Phys. Suppl. 138, 18 (2000).
[79] S. Kawabata, Comput. Phys. Commun. 88, 309 (1995).
[80] J. Fujimoto et al., Acta Phys. Pol. B 28, 945 (1997).
[81] F. Maltoni, T. Stelzer, J. High Energy Phys. 0302, 027 (2003).
[82] T. Stelzer, W.F. Long, Comput. Phys. Commun. 81, 357 (1994); H. Mu-
rayama, I. Watanabe, K. Hagiwara, KEK-91-11.
[83] C.G. Papadopoulos, Comput. Phys. Commun. 137, 247 (2001).
[84] A. Kanaki, C.G. Papadopoulos, Comput. Phys. Commun. 132, 306 (2000).
[85] T. Gleisberg, F. Krauss, C.G. Papadopoulos, A. Schaelicke, S. Schumann,
Eur. Phys. J. C34, 173 (2004).
[86] W. Kilian, LC-TOOL-2001-039.
[87] A. Pukhov et al., hep-ph/9908288.
[88] M. Moretti, T. Ohl, J. Reuter, LC-TOOL-2001-040, hep-ph/0102195.
[89] M.L. Mangano, M. Moretti, F. Piccinini, R. Pittau, A.D. Polosa, J. High
Energy Phys. 0307, 001 (2003); M.L. Mangano, M. Moretti, R. Pittau, Nucl.
Phys. B632, 343 (2002); F. Caravaglios, M.L. Mangano, M. Moretti, R. Pit-
tau, Nucl. Phys. B539, 215 (1999); F. Caravaglios, M. Moretti, Phys. Lett.
B358, 332 (1995).
[90] P. Golonka, T. Pierzchala, Z. Was, Comput. Phys. Commun. 157, 39 (2004).
[91] M.T. Ronan, LC-TOOL-2003-015 [physics/0306019].
[92] G. Montagna, M. Moretti, O. Nicrosini, F. Piccinini, Eur. Phys. J. C2, 483
(1998); F. Gangemi et al., Eur. Phys. J. C9, 31 (1999); Nucl. Phys. B559,
3 (1999); F. Gangemi, hep-ph/0002142.
[93] K. Kołodziej, Eur. Phys. J. C23, 471 (2002); Comput. Phys. Commun. 151,
339 (2003); A. Biernacik, K. Kołodziej, Nucl. Phys. Proc. Suppl. 116, 33
(2003); A. Biernacik, K. Kolodziej, A. Lorca, T. Riemann, Acta Phys. Pol.
B 34, 5487 (2003).
[94] S. Dittmaier, M. Roth, Nucl. Phys. B642, 307 (2002).
[95] E. Accomando, A. Ballestrero, E. Maina, hep-ph/0404236.
Page 22
2554
W. Hollik et al.
[96] S. Dittmaier, hep-ph/0308079.
[97] R. Harlander, M. Steinhauser, Prog. Part. Nucl. Phys. 43, 167 (1999).
[98] J. Küblbeck, M. Böhm, A. Denner, Comput. Phys. Commun. 60, 165 (1990);
T. Hahn, Comput. Phys. Commun. 140, 418 (2001).
[99] T. Hahn, C. Schappacher, Comput. Phys. Commun. 143, 54 (2002).
[100] M. Tentyukov, J. Fleischer, Comput. Phys. Commun. 132, 124 (2000).
[101] P. Nogueira, J. Comput. Phys. 105, 279 (1993).
[102] T. Hahn, M. Perez-Victoria, Comput. Phys. Commun. 118, 153 (1999);
T. Hahn, Nucl. Phys. Proc. Suppl. 89, 231 (2000).
[103] J. Fleischer, T. Hahn, W. Hollik, T. Riemann, C. Schappacher, A. Werthen-
bach, LC-TH-2002-002 hep-ph/0202109.
[104] J. Fleischer, A. Leike, T. Riemann, A. Werthenbach, Eur. Phys. J. C31, 37
(2003).
[105] A. Andonov el al., hep-ph/0209297; A. Andonov el al., hep-ph/0202112;
Phys. Part. Nucl. 34, 577 (2003) [Fiz. Elem. Chast. Atom. Yadra 34, 1125
(2003)]
[106] T. Hahn, W. Hollik, A. Lorca, T. Riemann, A. Werthenbach, LC-TH-2003-
083 hep-ph/0307132.
[107] J. Fleischer, A. Lorca, T. Riemann, hep-ph/0409034.
[108] A. Lorca, T. Riemann, hep-ph/0407149.
[109] M. Tentyukov, J. Fleischer, Comput. Phys. Commun. 160, 167 (2004).
[110] J. Fleischer, A. Lorca, M. Tentyukov, Diana and Applications to Fermion
Production in Electron Positron Annihilation, to appear in the proceedings
of ACAT03, December 1993.
[111] G. Passarino, Nucl. Phys. B619, 257 (2001); G. Passarino, S. Uccirati, Nucl.
Phys. B629, 97 (2002); A. Ferroglia, M. Passera, G. Passarino, S. Uccirati,
Nucl. Phys. B650, 162 (2003).
[112] G. Passarino, S. Uccirati, Nucl. Phys. B629, 97 (2002).
[113] A. Ferroglia, M. Passera, G. Passarino, S. Uccirati, Nucl. Phys. B650, 162
(2003).
[114] A. Ferroglia, G. Passarino, S. Uccirati, M. Passera, Nucl. Instrum. Methods
Phys. Res. A502, 391 (2003).
[115] A. Ferroglia, M. Passera, G. Passarino, S. Uccirati, Nucl. Phys. B680, 199
(2004).
[116] F.V. Tkachov, Nucl. Instrum. Methods Phys. Res. A389, 309 (1997);
L.N. Bernstein, Functional Analysis and its Applications, 6, 66 (1972).
[117] S. Actis, A. Ferroglia, G. Passarino, M. Passera, S. Uccirati,
hep-ph/0402132.
[118] G. Passarino, M. Veltman, Nucl. Phys. B160, 151 (1979).
[119] G. Weiglein, R. Scharf, M. Böhm, Nucl. Phys. B416, 606 (1994).
Page 23
Electroweak Physics
2555
[120] G. ’t Hooft, M. Veltman, Nucl. Phys. B44, 189 (1972); F.V. Tkachov, PhD
thesis, INR, Moscow, March 1984; F.V. Tkachov, Phys. Lett. B100, 65
(1981); K.G. Chetyrkin, F.V. Tkachov, Nucl. Phys. B192, 159 (1981).
[121] F. del Aguila, R. Pittau, J. High Energy Phys. 0407, 017 (2004);
[122] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, hep-ph/0401093.
[123] E. Maina, S. Moretti, D.A. Ross, Phys. Lett. B593, 143 (2004).
[124] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, hep-ph/0407150.
[125] E. Maina, S. Moretti, M.R. Nolten, D.A. Ross, Phys. Lett. B570, 205 (2003).
[126] J.H. Kühn, A. Kulesza, S. Pozzorini, M. Schulze, hep-ph/0408308.
[127] M. Dobbs et al., hep-ph/0403100.
[128] E. Maina, S. Moretti, D.A. Ross, J. High Energy Phys. 0304, 056 (2003);
hep-ph/0403269.
[129] O. Brein, U. Baur, W. Hollik, C. Schappacher, D. Wackeroth, Phys. Rev.
D65, 033007 (2002).
[130] S. Dittmaier, M. Krämer, Phys. Rev. D65, 073007 (2002).
[131] U. Baur, D. Wackeroth, hep-ph/0405191.
[132] C.M. Carloni Calame, S. Jadach, G. Montagna, O. Nicrosini, W. Placzek,
Acta Phys. Pol. B 35, 1643 (2004).
[133] W. Hollik, C. Meier, Phys. Lett. B590, 69 (2004).
[134] C. Bourrely, J.P. Guillet, J. Soffer, Nucl. Phys. B361, 72 (1991).
[135] J.R. Ellis, S. Moretti, D.A. Ross, J. High Energy Phys. 0106, 043 (2001
PHYSICA
Periodic, aperiodic, and transient patterns
in vibrated granular layers
Paul B. Umbanhowar a, Francisco Melo b, Harry L. Swinney a,*
a Center for Nonlinear Dynamics and Department of Physics, The University of Texas at Austin, Austin, TX 78712, USA
b Departamento de Física, Universidad de Santiago, Avenida Ecuador 3493, Casilla 307 Correo 2, Santiago, Chile
Abstract
Experiments on vertically vibrated granular layers in evacuated containers reveal a variety of
patterns for acceleration amplitudes above a critical value (≈ 2.5g). Stripes, squares, hexagons,
spirals, triangles, and targets, as well as particle-like localized excitations ("oscillons") and fronts
("kinks") between regions with different vibrational phase are observed as the layer depth and the
container oscillation frequency and amplitude are varied. A zig-zag instability, unstable hexagons,
phase-disordered patterns, and "two-phase" squares are also observed. With a few noteworthy
exceptions, the patterns are essentially independent of the lateral boundary conditions. © 1998
Elsevier Science B.V. All rights reserved.
Granular materials are composed of macroscopic particles (grains). Energy is dissi-
pated in each grain collision. The typical energy of a grain is many orders of magnitude
greater than k_B T, so temperature plays no role in grain dynamics. These two characteristics give granular media properties that differ from those of solids and fluids. Granular
media are important in many geological and industrial processes but are much less well
understood than solids and fluids. Research in the past decade has revealed a variety
of interesting collective phenomena in granular media, e.g. see Ref. [1] and references
therein. Here we present observations of spatial patterns in vertically vibrated granular
layers.
Our earlier experiments investigated extended standing wave patterns (planar stripes,
squares, and hexagons) [2] and localized structures ("oscillons") [3] as a function of the
sinusoidal container oscillation frequency f and the dimensionless acceleration amplitude Γ = 4π²Af²g⁻¹, where A is the amplitude and g the acceleration due to gravity.
* Corresponding author. E-mail: swinney@chaos.ph.utexas.edu.
0378-4371/98/$19.00 Copyright © 1998 Elsevier Science B.V. All rights reserved. PII: S0378-4371(97)00425-1
P.B. Umbanhowar et al. / Physica A 249 (1998) 1–9
[Fig. 1 shows regions labeled FLAT, STRIPES (f/2), HEXAGONS (f/2), FLAT WITH FRONTS, SQUARES (f/2), STRIPES (f/4), SQUARES (f/4), and DISORDERED in the Γ–f plane; the frequency axis is marked at 50 and 100 Hz.]
Fig. 1. Stability diagram showing transitions in a 7-particle deep layer. Shaded regions indicate the coexistence of slowly evolving patterns of squares and stripes. Solid (dashed) lines denote transitions with decreasing (increasing) Γ. The transition from a flat layer to square patterns is strongly hysteretic.
In this paper we present aspects of planar patterns and oscillons not presented in
the earlier work. A more detailed description of the various phenomena will appear
elsewhere [4].
The patterns described here, unless noted otherwise, are produced in a layer of non-cohesive, 0.15–0.18 mm diameter, bronze spheres placed in the bottom of an evacuated, upright, 126 mm diameter, cylindrical container [2,3]. The principal control parameters are the oscillation frequency f (10–120 Hz), Γ (0–10), and the dimensionless layer depth N = H/D (3–30), where H is the layer depth and D the particle diameter. Patterns are visualized by strobed side lighting, and images are acquired with a CCD camera mounted on axis.
Fig. 1 is a phase diagram showing the regions in which different patterns are observed as a function of the container acceleration amplitude and frequency. The effective forcing of the granular layer differs from the container forcing for Γ > 1 because the layer loses contact with the container when the container acceleration is less than −g. The effective forcing magnitude is determined by the collision acceleration, while the effective drive frequency is set by the time between layer–container collisions [2]. Hysteretic subharmonic standing waves arise when the effective forcing magnitude exceeds a critical value (Γ ≈ 2.4). At low f the patterns are squares, while at larger f stripe patterns appear. As Γ is increased past ≈ 4, the layer free-flight time exceeds f⁻¹, at which point there is a doubling of the period in the motion of the layer center of mass: successive free flights alternately last more than a period and less than a period. Hexagonal patterns, which can have two possible phases with respect to the drive signal, arise at this period doubling.
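The layer dynamics behind this period doubling can be illustrated with the standard single-particle, completely inelastic bouncing-ball model of a vibrated layer. The sketch below is ours (function name, units with g = 1, and step size are assumptions), not the authors' analysis:

```python
import math

def free_flight_times(gamma_val, n_flights=6, f=1.0, dt=2e-5):
    """Completely inelastic bouncing-ball caricature of the layer: it
    rides the plate until the plate accelerates downward faster than g,
    flies ballistically, and sticks again on landing.  Returns the
    durations of successive free flights in units of the drive period."""
    assert gamma_val > 1.0            # below Gamma = 1 the layer never leaves
    g = 1.0                           # work in units with g = 1
    w = 2.0 * math.pi * f
    A = gamma_val * g / w ** 2        # amplitude giving the requested Gamma
    t, flights = 0.0, []
    while len(flights) < n_flights:
        # ride the plate while its acceleration -A w^2 sin(wt) >= -g
        while A * w ** 2 * math.sin(w * t) < g:
            t += dt
        z0, v0 = A * math.sin(w * t), A * w * math.cos(w * t)
        tau = 5 * dt                  # ballistic flight: find the landing
        while z0 + v0 * tau - 0.5 * g * tau ** 2 > A * math.sin(w * (t + tau)):
            tau += dt
        t += tau
        flights.append(tau * f)
    return flights
```

In this caricature the takeoff phase is fixed by sin(ωt) = 1/Γ, so at moderate Γ every flight is identical; only once a flight outlasts a full drive period can successive flights begin to alternate, which is the period doubling described above.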
Further increases in Γ reduce both the shorter free-flight time and the effective forcing magnitude until the layer gently collides with the container only once every two container cycles. At this point (Γ ≈ 5) there are no extended wave patterns, just regions with opposite oscillation phase connected by fronts (Fig. 2). The regions with opposite phases collide with the container on alternate cycles, while a front separating these regions collides with the container each cycle and experiences a larger effective forcing than the rest of the layer. At high frequencies the fronts are undecorated (Fig. 2a), while at lower frequencies a subharmonic standing wave instability develops (Fig. 2b–d). The instability wavelength increases with decreasing f; at very low f the instability vanishes because grains are driven out of the front region by the violence of the collision. A variety of front shapes are formed by quickly increasing Γ into the front-forming region. A region embedded within another of opposite phase is unstable and shrinks, while fronts which extend to the wall are often stable [5].

Fig. 2. Fronts between domains with opposite phase in a 7-particle deep layer: (a) f = 67 Hz, Γ = 5.8, (b) f = 28 Hz, Γ = 5.2, (c) f = 35 Hz, Γ = 5.1, and (d) f = 40 Hz, Γ = 5.1. The fronts in (a) and (b) are stationary, while the isolated regions in (d), created by a step change in Γ, shrink and disappear.

Fig. 3. (a) and (b) Disordered patterns separated in time by 5 s; f = 70 Hz, Γ ≈ 8.6, and N = 5.3 (image size, 89 × 89 mm). (c) Pattern in a 0.7 mm deep layer of D < 0.15 mm titanium balls, f = 72 Hz, and Γ = 8.5 (image size, 100 × 75 mm).
As Γ is increased beyond ≈ 5.5, the effective forcing magnitude is large enough that squares and stripes reappear, as do hexagons at even higher Γ in conjunction with a second period doubling. This sequence does not continue indefinitely; at Γ ≈ 7.8 the patterns become disordered in both space and time and remain that way for further increases in Γ (Fig. 3). The transition to disorder results from layer–plate collisions that do not occur simultaneously or uniformly in time. With titanium beads a similar transition occurs, but the pattern appears to have less disorder (Fig. 3c).
We now return to the primary bifurcation. At higher frequencies the primary bifur-
cation is to stripe patterns. Asymptotically, the stripes form locally parallel domains
Fig. 4. Spirals (a) at f = 70 Hz, Γ = 3.6, N = 10, and D = 0.07 mm and (b) with a 2° upward sloping "beach" at f = 80 Hz, Γ = 3.6, and N = 5 (image size, 90 × 90 mm). Pattern (a) evolves to stripes, while (b) evolves from a state similar to (a).
and prefer to align perpendicular to the sidewall. However, at onset, and when the
aspect ratio is large, patterns reminiscent of the spiral-defect chaos observed in binary
fluid convection arise [6] (Fig. 4a). The granular spirals can be single- or multi-armed
and rotate in the direction of chirality. These patterns are transient, eventually being
destroyed by the encroachment of stripes growing perpendicular to the wall. However,
by adding a "beach" to the perimeter of the container bottom (here the beach is a
2.3 cm wide annular region with a 2° upward slope in a 147 mm diameter container),
the orthogonality condition is relaxed and spirals grow until a single large spiral fills
the container (Fig. 4b). Like the small spirals, the large spiral rotates in the winding
direction and can be multi-armed. The center of the large spiral drifts about and can
even "leave" the container.
In layers approximately 15 particles deep or deeper, more patterns arise. Among these are stable localized excitations or "oscillons", which appear for Γ slightly below the value for planar patterns and for f in the vicinity of the square-to-stripe transition [3]. Oscillons are subharmonic: on one cycle they form a peak and on the next a crater. Oscillons of like phase repel, while oscillons of opposite phase attract and bind to form more complex structures with coordination number up to three (Fig. 5a–d). When Γ is increased above the upper stability boundary, oscillons grow by nucleating excitations of opposite phase at their outer edges for low f (Fig. 5e), while at higher f they expand as a domain of growing stripes (Fig. 5f), similar to the transient localized states observed in binary fluid convection [7]. On occasion, an unstable oscillon appears near the wall and forms a chain that grows to encircle the container (Fig. 5g). It appears that sidewall-driven convection reduces the layer depth along the wall, which makes oscillons unstable there (as in Fig. 5e). In the interior, the layer is deep enough that oscillons are stable and do not grow.
Fig. 6. (a) Stable target pattern (f = 28 Hz, Γ ≈ 3, and N = 17) and (b) waves nucleated at a point along the wall, spreading into the flat region (f = 26 Hz, Γ ≈ 3, and N ≈ 20). (b) image size is 80 × 80 mm.
Fig. 7. (a) f/4 instability of stripe patterns (f = 23 Hz, Γ = 3.0, and N = 18) and (b) unstable hexagons in the square regime, 1% above onset (f = 24 Hz, Γ = 2.4, and N ≈ 20). Image sizes are 60 × 60 mm.
Another phenomenon, also related to sidewall friction, is the target pattern (Fig. 6a).
Targets are formed by stepping Γ from below onset to a value within the hysteretic
region for / in the stripe regime. Waves are nucleated at the wall and travel inward.
(Fig. 6b shows a wave nucleated at a point along the wall spreading into the container
interior.) If the nucleation occurs uniformly, a target is formed; otherwise, spiral or
disordered patterns result. Although stable target patterns are observed, they usually
decay via the introduction of defects at the wall, which travel to the center and create
a spiral. Spirals accumulate more arms with the introduction of further defects until they
break up and form a stripe pattern. We speculate that targets, once they are formed,
are stable if the container radius is an integral number of wavelengths.
Fig. 8. Two-phase squares with single-frequency forcing: (a) peak phase and (b) cellular phase (f = 17 Hz, Γ = 2.5, and N = 12); these uniform phase patterns are atypical: domains with opposite phase are usually present simultaneously (note the row of peaks at the bottom of (b)). Hexagons with two-frequency forcing A1 sin 2πft + A2 cos πft: (c) peak phase and (d) cellular phase (f = 32 Hz, Γ = 2.8, A2 = 0.1A1, and N = 5.3).
Other phenomena, in addition to targets and oscillons, are also observed in deep layers (N ≳ 15). For f near the square/stripe transition region, a lateral zig-zag instability of the stripe pattern occurs (Fig. 7a). Zig-zags oscillate at f/4 and grow during layer free flight; at low f and large Γ, the transverse oscillation amplitude is about as large as the pattern wavelength. Another pattern, unstable hexagons, occurs close to the lower stability boundary (Γ ≈ 2.4) for f in the square regime (Fig. 7b). Patches of hexagons are typically 2–4 wavelengths wide and persist for about 100 f⁻¹ before reverting to squares.
Another phenomenon, perhaps related to transient hexagons, is a regime of squares with two distinct phases, as shown in Fig. 8a and Fig. 8b. Unlike normal squares, which have the same profile from cycle to cycle, two-phase squares resemble hexagons (Fig. 8c and Fig. 8d): on one cycle the pattern is cellular, while on the next it is a lattice of peaks. Two-phase squares arise near onset. Typically, both peak and cellular square phases exist simultaneously in different spatial domains, but a single phase, as shown in Fig. 8a and Fig. 8b, can persist for hundreds of cycles.

Fig. 9. (a) and (b) Two phases of triangular patterns obtained by alternately forcing with sin φ sin 2πf1t for 5 cycles and cos φ sin 2πf2t for 4 cycles (f1 = 37.5 Hz, f2 = 30 Hz, φ = 60°, Γ = 3.8, and N = 13). Images are 60 × 60 mm. The pattern has rotated between (a) and (b).
The hexagons that form at Γ ≈ 4 arise from an intrinsic two-frequency forcing at f and f/2, which, as described, arises from a period doubling of the oscillating layer. Two-frequency forcing of the container itself can also produce hexagons, as Fig. 8c and Fig. 8d illustrate; unlike the hexagons produced by period doubling, the hexagons in Fig. 8c and Fig. 8d do not have the phase degeneracy that leads to fronts between domains of different phase.
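Such an external two-frequency drive is simple to sample; the waveform below is our reading of the garbled Fig. 8 caption (in particular, treat the f/2 cosine term as an assumption):

```python
import math

def two_freq_drive(t: float, f: float, a1: float, a2: float) -> float:
    """Plate displacement a1*sin(2*pi*f*t) + a2*cos(pi*f*t): components
    at f and f/2 (waveform reconstructed from the Fig. 8 caption)."""
    return a1 * math.sin(2.0 * math.pi * f * t) + a2 * math.cos(math.pi * f * t)

# The combined signal repeats only every 2/f, so the plate itself
# forces the layer subharmonically, mimicking the intrinsic f and f/2
# forcing that appears after period doubling.
wave = [two_freq_drive(i / (16 * 32.0), 32.0, 1.0, 0.1) for i in range(32)]
```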
Driving the container with more complicated waveforms can lead to other patterns. For example, forcing with composed sinusoids at f and 4f/5 produces a triangular pattern (Fig. 9). We have not yet, however, observed quasi-patterns like those found with multi-frequency driving of viscous fluids [8].
The variety of patterns observed in vibrated granular layers provides a challenge to
theory. Any theory that can reproduce these patterns (e.g. Ref. [9]) should yield new
general insights into the dynamics of granular media.
Acknowledgements
This research was supported by the US Department of Energy Office of Basic Energy
Sciences, the Texas Advanced Research Program, the US National Science Foundation
Division of International Programs, and Fondecyt (Chile).
in vibrated granular layers
Paul B. Umbanhowar3, Francisco Melob, Harry L. Swinney3 *
1 Center for Nonlinear Dynamics and Department of Physics. The University of Texas at Austin, Austin,
TX 78712, USA
b Deparlamento de Fisica, Universidad de Santiago, Avenida Ecuador 3493, Casilla 307 Correo 2,
Santiago. Chile
Abstract
Experiments on vertically vibrated granular layers in evacuated containers reveal a variety of
patterns for acceleration amplitudes above a critical value ( « 2.5 g). Stripes, squares, hexagons,
spirals, triangles, and targets, as well as particle-like localized excitations ("oscillons") and fronts
("kinks") between regions with different vibrational phase are observed as the layer depth and the
container oscillation frequency and amplitude are varied. A zig-zag instability, unstable hexagons,
phase-disordered patterns, and "two-phase" squares are also observed. With a few noteworthy
exceptions, the patterns are essentially independent of the lateral boundary conditions. © 1998
Elsevier Science B.V. All rights reserved.
Granular materials are composed of macroscopic particles (grains). Energy is dissi-
pated in each grain collision. The typical energy of a grain is many orders of magnitude
greater than kBT, so temperature plays no role in grain dynamics. These two character-
istics give granular media properties that differ from those of solids and fluids. Granular
media are important in many geological and industrial processes but are much less well
understood than solids and fluids. Research in the past decade has revealed a variety
of interesting collective phenomena in granular media, e.g. see Ref. [1] and references
therein. Here we present observations of spatial patterns in vertically vibrated granular
layers.
Our earlier experiments investigated extended standing wave patterns (planar stripes,
squares, and hexagons) [2] and localized structures ("oscillons") [3] as a function of the
sinusoidal container oscillation frequency f and the dimensionless acceleration ampli-
tude r = 4n2Af2g~], where A is the amplitude and g the acceleration due to gravity.
* Corresponding author. E-mail: swinneyfwehaos.ph.utcxas.edu.
0378-4371/98/S19.00 Copyright © 1998 Elsevier Science B.V. All rights reserved
/7/S0378-4371(97)00425-1
Page 2
PB. Umbanhowar et al. I Physica A 249 (1998) 1-9
^-^_ DISORDERED
SQUARES (f/4)
STRIPES (f/4)
SQUARES (f/2) !
FLAT WITH FRONTS
HEXAGONS (f/2)
STRIPES (f/2)
FLAT
50
100
Fig. I. Stability diagram showing transitions in a 7-particle deep layer. Shaded regions indicate the co-
existence of slowly evolving patterns of squares and stripes. Solid (dashed) lines denote transitions with
decreasing (increasing) r. The transition from a flat layer to square patterns is strongly hysteretic.
In this paper we present aspects of planar patterns and oscillons not presented in
the earlier work. A more detailed description of the various phenomena will appear
elsewhere [4].
The patterns described here, unless noted otherwise, are produced in a layer of
non-cohesive, 0.15-0.18 mm diameter, bronze spheres placed in the bottom of an evac-
uated, upright, 126 mm diameter, cylindrical container [2,3]. The principal control
parameters are the oscillation frequency / (10-120 Hz), F (0-10) and the dimen-
sionless layer depth N =H/D (3-30), where H is the layer depth and D the particle
diameter. Patterns are visualized by strobed side lighting, and images are acquired with
a CCD camera mounted on axis.
Fig. 1 is a phase diagram showing the regions in which different patterns are observed
as a function of the container acceleration amplitude and frequency. The effective forc-
ing of the granular layer is different from the container forcing for T > 1 because the
layer loses contact with the container when the container acceleration is less than —g.
The effective forcing magnitude is determined by the collision acceleration, while the
effective drive frequency is set by the time between layer-container collisions [2], Hys-
teretic subharmonic standing waves arise when the effective forcing magnitude exceeds
a critical value (r«2.4). At low / the patterns are squares, while at larger /' stripe
patterns appear. As r is increased past «4, the layer free-flight time exceeds /"',
at which point there is a doubling of the period in the motion of the layer center of
mass: successive free flights last more than a period and less than a period. Hexagonal
patterns, which can have two possible phases with respect to the drive signal, arise at
this period doubling.
Further increases in F reduce both the shorter free flight-time and the effective forc-
ing magnitude until the layer gently collides with the container only once every two
Page 3
PB. Umbcmhowar et allPhvsica A 249 (1998) 19
Fig. 2. Fronts between domains with opposite phase in a 7-particle deep layer: (a) / = 67 Hz, r = 5.8,
(b) / = 28Hz, r = 5.2, (c) / = 35Hz, r = 5.1, and (d) / = 40Hz, T = 5.1. The fronts in (a) and (b) are
stationary, while the isolated regions in (d), created by a step change in r, shrink and disappear.
Fig. 3. (a) and (b) Disordered patterns separated in time by 5 s; / = 70Hz, T==8.6, and N = 53 (image
size, 89 x 89m). (c) Pattern in a 0.7mm deep layer of D < 0.15mm titanium balls, / = 72Hz, and i' = 8.5
(image size, 100 x 75 mm).
container cycles. At this point (T « 5) there are no extended wave patterns, just re-
gions with opposite oscillation phase connected by fronts (Fig. 2). The regions with
opposite phases collide with the container on alternate cycles, while a front separating
these regions collides with the container each cycle and experiences a larger effec-
tive forcing than the rest of the layer. At high frequencies the fronts are undecorated
(Fig. 2a), while at lower frequencies, a subharmonic standing wave instability develops
(Fig. 2b-d). The instability wavelength increases with decreasing /; at very low / the
instability vanishes because grains are driven out of the front region due to the violence
of the collision. A variety of front shapes are formed by quickly increasing r into the
front-forming region. A region embedded within another of opposite phase is unstable
and shrinks, while fronts which extend to the wall are often stable [5].
As r is increased beyond «5.5, the effective forcing magnitude is large enough so
that squares and stripes reappear, as do hexagons at even higher F in conjunction with
a second-period doubling. This sequence does not continue indefinitely; at T « 7.8, the
patterns become disordered in both space and time and remain that way for further
increases in r (Fig. 3). The transition to disorder results from layer/plate collisions
that do not occur simultaneously or uniformly in time. With titanium beads, a similar
transition occurs but the pattern appears to have less disorder (Fig. 3c).
We now return to the primary bifurcation. At higher frequencies the primary bifur-
cation is to stripe patterns. Asymptotically, the stripes form locally parallel domains
Page 4
PB. Umbanhowar el al. I Physica A 249 (1998) 19
(a)?
Fig. 4. Spirals (a) at / = 70Hz, T = 3.6, 7V=10, and D = 0.07mm and (b) with a 2° upward sloping
"beach" at / = 80Hz, T = 3.6, and N = 5 (images size, 90 x 90mm). Pattern (a) evolves to stripes, while
(b) evolves from a state similar to (a).
and prefer to align perpendicular to the sidewall. However, at onset, and when the
aspect ratio is large, patterns reminiscent of the spiral-defect chaos observed in binary
fluid convection arise [6] (Fig. 4a). The granular spirals can be single- or multi-armed
and rotate in the direction of chirality. These patterns are transient, eventually being
destroyed by the encroachment of stripes growing perpendicular to the wall. However,
by adding a "beach" to the perimeter of the container bottom (here the beach is a
2.3 cm wide annular region with a 2 upward slope in a 147 mm diameter container),
the orthogonality condition is relaxed and spirals grow until a single large spiral fills
the container (Fig. 4b). Like the small spirals, the large spiral rotates in the winding
direction and can be multi-armed. The center of the large spiral drifts about and can
even "leave" the container.
In layers approximately 15 particles deep or deeper, more patterns arise. Among these
are stable localized excitations or "oscillons", which appear for T slightly below the
value for planar patterns and for / in the vicinity of the square-to-stripe transition [3].
Oscillons are subharmonic: on one cycle they form a peak and on the next a crater.
Oscillons of like phase repel while oscillons of opposite phase attract and bind to
form more complex structures with coordination number up to three (Fig. 5a-Fig. 5d).
When r is increased above the upper stability boundary, oscillons grow by nucleating
excitations of opposite phase at their outer edges for low / (Fig. 5e), while at higher /,
they expand as a domain of growing stripes (Fig. 5f), similar to the transient localized
states observed in binary fluid convection [7] On occasion, an unstable oscillon appears
near the wall and forms a chain that grows to encircle the container (Fig. 5g). It appears
that sidewall-driven convection reduces the layer depth along the wall, which makes
oscillons unstable there (as in Fig. 5e). In the interior, the layer is deep enough so
that oscillons are stable and do not grow.
Page 5
Page 6
PB Umbanhowar et al. I Physica A 249 (1998) 1-9
H fé
Fig. 6. (a) Stable target pattern (/ = 28Hz, rss3, and N = 17) and (b) waves nucleated at a point along
the wall, spreading into flat region (/ = 26Hz, fa; 3, and Nxs20). (b) image size is 80 x 80 mm.
Fig. 7. (a) //4 instability of stripe patterns (/ =
square regime, 1% above onset (/ = 24Hz, T =
= 23 Hz, r = 3.0, and N = 18) and (b) unstable hexagons in
2.4, and N ~ 20). Image sizes are 60 X 60 mm.
Another phenomenon, also related to sidewall friction, is the target pattern (Fig. 6a).
Targets are formed by stepping F from below onset to a value within the hysteretic
region for / in the stripe regime. Waves are nucleated at the wall and travel inward.
(Fig. 6b shows a wave nucleated at a point along the wall spreading into the container
interior.) If the nucleation occurs uniformly, a target is formed; otherwise, spiral or
disordered patterns result. Although stable target patterns are observed, they usually
decay via the introduction of defects at the wall, which travel to the center and create
a spiral. Spirals accumulate more arms with the introduction of further defects until they
break up and form a stripe pattern. We speculate that targets, once they are formed,
are stable if the container radius is an integral number of wavelengths.
Page 7
P.B. Umbanhowar et al. IPhysica A 249 (1998) 19
Fig. 8. Two-phase squares with single-frequency forcing, (a) peak phase and (b) cellular phase (/= 17
Hz, r = 2.5, and N = 12); these uniform phase patterns are atypical - domains with opposite phase are
usually present simultaneously (note, row of peaks at bottom of (b)). Hexagons with two-frequency forcing
A] sin 2nft+Ai cos Tift, (c) peak phase and (d) cellular phase (/ = 32 Hz, r = 2.8, A2 =0.\AU and N = 5.3).
Other phenomena, in addition to targets and oscillons, are also observed in deep
layers (N ^ 15). For / near the square/stripe transition region, a lateral zig-zag insta-
bility of the stripe pattern occurs (Fig. 7a). Zig-zags oscillate at /74 and grow during
layer free flight; at low / and large /\ the transverse oscillation amplitude is about
as large as the pattern wavelength. Another pattern, unstable hexagons, occurs close to
the lower stability boundary (T=s2.4) for / in the square regime (Fig. 7b). Patches
of hexagons are typically 2-4 wavelengths wide and persist for about 100/-1 before
reverting to squares.
Another phenomenon, perhaps related to transient hexagons, is a regime with squares
with two distinct phases, as shown in Fig. 8a and Fig. 8b. Unlike normal squares, which
Page 8
PB. Umbanhowar el ai. IPhysica A 249 (1998) 1-9
Fig. 9. (a) and (b) Two phases of triangular patterns obtained by alternately forcing with sin $ sin 2;r/i f for
5 cycles and cos > sin 271/2 í for 4 cycles (f\ = 37.5 Hz, fi = 30Hz,
are 60 x 60mm. Pattern has rotated between (a) and (b).
have the same profile from cycle to cycle, two-phase squares resemble hexagons (Fig.
8c and Fig. 8d): on one cycle the pattern is cellular, while on the next a lattice of
peaks. Two-phase squares arise near onset. Typically, both peak and cellular square
phases exist simultaneously in different spatial domains, but a single phase, as shown
in Fig. 8a and Fig. 8b can persist for hundreds of cycles.
The hexagons that form at T«4 arise from an intrinsic two-frequency forcing at f
and //2, which, as described, arises from a period doubling of the oscillating layer.
Two-frequency forcing of the container itself can also produce hexagons, as Fig. 8c
and Fig. 8d illustrates; unlike the hexagons produced by period doubling, the hexagons
in Fig. 8c and Fig. 8d do not have the phase degeneracy that leads to fronts between
domains of different phase.
Driving the container with more complicated waveforms can lead to other patterns.
For example, forcing with composed sinusoids at f and f/2 produces a triangular
pattern (Fig. 9). We have not yet, however, observed quasi-patterns like those found
with multi-frequency driving of viscous fluids [8].
The variety of patterns observed in vibrated granular layers provides a challenge to
theory. Any theory that can reproduce these patterns (e.g. Ref. [9]) should yield new
general insights into the dynamics of granular media.
Acknowledgements
This research was supported by the US Department of Energy Office of Basic Energy
Sciences, the Texas Advanced Research Program, the US National Science Foundation
Division of International Programs, and Fondecyt (Chile).
IMAGE FUSION FOR CONFORMAL RADIATION THERAPY
Marc L. Kessler, Ph.D. and Kelvin Li, M.S.
Department of Radiation Oncology, The University of Michigan
1. INTRODUCTION
2. IMAGING MODALITIES IN TREATMENT PLANNING
A. X-ray CT
B. Magnetic Resonance Imaging
C. Nuclear Medicine Imaging
D. Ultrasound Imaging
3. TECHNIQUES
A. Dataset Registration
1) Surface-based Registration
2) Image-based Registration
3) Interactive Registration
B. Data Mapping and Image Fusion
1) Structure Mapping
2) Image Mapping and Fusion
4. CLINICAL EXAMPLE
5. SUMMARY
6. REFERENCES
Acknowledgements
This paper is reprinted with permission from 3D Conformal Radiation Therapy and Intensity
Modulated Radiation Therapy: Physics and Clinical Considerations. Ed. J. A. Purdy, W. Grant III,
J. R. Palta, E. B. Butler, C. A. Perez, Madison, WI, Advanced Medical Publishing; 2001.
Marc L Kessler, PhD and Kelvin Li, MS Image Fusion for Conformal Radiotherapy AAPM 2001
1. INTRODUCTION
Medical imaging is now a fundamental tool in conformal radiation therapy. Almost every aspect of
patient management involves some form of two- or three-dimensional image data acquired using
one or more modalities. Image data are now used for diagnosis and staging, for treatment planning
and delivery and for monitoring patients after therapy. While x-ray computed tomography (CT) is
the primary modality for most image-based treatment planning, other modalities such as magnetic
resonance imaging (MR), positron and single photon emission computed tomography (PET and
SPECT) and ultrasound imaging (US) can provide important data that may improve overall
patient management. For example, MR provides excellent soft tissue contrast, allowing superior
delineation of normal tissues and tumor volumes in many sites. SPECT and PET provide unique
metabolic information capable of resolving ambiguities in anatomic image data and can quantify
partial organ function. Ultrasound imaging can now provide real-time volumetric information for
delineating organ boundaries for both treatment planning and treatment delivery. The data from
these modalities help the clinician to develop a more complete description of the patient. Each of
these modalities alone, however, does not provide all of the physical and geometric information
required for conformal radiotherapy treatment planning. Therefore, in order to take full
advantage of the information from each modality, it must be integrated with the primary CT data
before being used for treatment planning and delivery (Kessler, Rosenman).
This presentation will describe the general techniques available to carry out this integration
process in a quantitative fashion. First, an overview of the advantages and limitations of each
image modality is presented. Next, the basic procedures implemented in academic and
commercial treatment planning systems for supporting the use of multimodality data are
described. Finally, a clinical example is presented to illustrate the steps involved in using
multiple image datasets for conformal radiotherapy treatment planning.
2. IMAGING MODALITIES IN TREATMENT PLANNING
The radiation oncologist now has access to image data from several imaging modalities. In
addition to x-ray CT, data from MR, PET and SPECT and ultrasound imaging may be acquired
during the course of patient management. Each of these modalities has both advantages and
limitations for use in conformal radiation therapy. By registering and fusing the data from the
different modalities, the advantages of each modality can be combined and a more accurate
representation of the patient obtained (Figure 1).
Figure 1. Data from multiple imaging modalities are registered to a common patient reference system (usually
the planning CT) and then combined to construct a more complete representation of the patient.
A. X-ray Computed Tomography
X-ray computed tomography (CT) provides the primary dataset for most aspects of conformal
therapy treatment planning. The data from CT are used to construct geometric and physical
models of the patient. The geometric models are used to define anatomic structures, target
volumes and to aid in radiation beam placement and shaping. The physical models provide
density information required by most dose calculation algorithms. The planning CT dataset is
also used to generate graphical aids such as beam's-eye-view (BEV) displays and digitally
reconstructed radiographs (DRRs) for planning and treatment verification. The major drawback of
CT data is the limited soft tissue contrast, which can hinder accurate tissue discrimination.
B. Magnetic Resonance Imaging
Magnetic resonance imaging (MR) now plays an important role in treatment planning for several
tumor sites, offering several advantages over CT. The excellent soft tissue contrast provided by
MR permits better discrimination between normal tissues and many tumors (Khoo). A wide
variety of MR imaging pulse sequences are available that can improve image contrast by
enhancing or suppressing specific tissues such as fat and conditions such as edema. Also, MR
images can be directly acquired along sagittal and coronal planes, offering better visualization of
certain tissues. While these features make MR an excellent choice as a primary dataset for
treatment planning, several limitations have prevented the use of MR data alone. These
drawbacks include the greater susceptibility of MR to spatial distortions and intensity artifacts,
the lack of signal from cortical bone, and image intensity values that have no relationship to
electron or physical density (Fraass, Schad).
C. Emission Computed Tomography (Nuclear Medicine Imaging)
SPECT and PET permit imaging of the transport and accumulation of biologically active tracers.
Examples of tracers for PET include 18F-deoxyglucose and 82Rb, and for SPECT include 131I and 99mTc. SPECT and PET studies provide information about physiology rather than anatomy. The
data from these modalities can provide important information about tumor metabolism and
partial-volume tissue function. For patient monitoring, PET imaging has proven useful for
discriminating between tumor recurrence and radiation necrosis (Doyle). For treatment planning
SPECT data has been used to demonstrate regional lung function to help determine beam
directions (Marks) and to quantify the distribution of radiolabeled mono-clonal antibodies (Koral).
D. Ultrasound
Ultrasound imaging uses mechanical (sound) energy rather than electromagnetic energy to image
tissue. Basically, ultrasound images tissue boundaries rather than internal structure. It can also
be used to image blood flow. The major advantages of ultrasound are that images are produced in
real-time, the required apparatus is relatively small, and it does not involve ionizing radiation.
Three-dimensional data acquisition with ultrasound is now also possible. Together, these features
have been exploited for radioactive source placement and boundary definition for prostate
treatments (Narayana, Pathak). A disadvantage of ultrasound is that not all regions of the body,
such as the brain and lung, can be imaged effectively due to signal loss at large tissue density
changes, such as at tissue-bone and tissue-air interfaces.
While the imaging strengths of CT, MR, PET, SPECT, and ultrasound imaging techniques make
them important tools in patient management, each method has its drawbacks. The goal of image
fusion is to overlap the strengths of each modality. In the next section, the techniques to carry this
out in a quantitative fashion are described.
3. TECHNIQUES
The process of incorporating data from multiple imaging studies into the treatment planning
process involves two main tasks. The first task is to estimate the parameters of the coordinate
transformation that relates homologous points in the two studies (data registration). The second
task is to apply the resultant transformation to map structures or features of interest from one
study to another (structure mapping) or to directly combine grayscale data from the two studies
(data fusion). For the discussion that follows, the two datasets will be labeled Study A and Study B.
Study A is the base or reference dataset held fixed and Study B the homologous dataset that is
manipulated to be brought into geometric alignment with Study A.
A. Dataset Registration
Numerous techniques for estimating the coordinate transformation that relates homologous points
in two imaging studies have been reported and reviewed in the literature (van den Elsen). The
general approach of these methods is to devise a metric that measures the degree of mismatch (or
similarity) between homologous features in two datasets and to use standard numerical methods
to determine the transformation parameters that minimize (maximize) the metric (Press).
Features used to compute this metric are typically geometric structures (homologous points, lines
or surfaces or combinations of these) extracted from the datasets or the native grayscale data.
Geometric features include manually placed point fiducials and stereotactic frames or internal
landmarks such as anatomic points or extracted surfaces.
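The general approach described above, a mismatch metric driven by a standard numerical optimizer, can be sketched as follows. The quadratic toy metric and the six-parameter vector (3 rotations, 3 translations) are illustrative stand-ins, not taken from any planning system:

```python
# A minimal sketch of registration as numerical optimization: a metric of
# mismatch is evaluated as a function of the transformation parameters and
# minimized with a standard optimizer. The "true" parameters and the toy
# quadratic metric are illustrative; a real metric would measure surface
# distance or grayscale similarity between the two datasets.
import numpy as np
from scipy.optimize import minimize

# 3 rotations (radians) and 3 translations (mm), illustrative values.
true_params = np.array([0.05, -0.02, 0.01, 4.0, -2.5, 1.0])

def mismatch(params):
    """Toy metric: zero when the trial transform matches the true one."""
    return float(np.sum((params - true_params) ** 2))

# Start from the identity transform and search for the minimum.
result = minimize(mismatch, x0=np.zeros(6), method="Powell")
```

As the text notes, the same machinery applies whether the metric is surface-based or grayscale-based; only the `mismatch` function changes.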
The parameters used to model the coordinate transformation between two datasets depend on the
modalities involved and the clinical site. In the simplest situation, it is only necessary to account
for differences in patient orientation (pose) at the time of imaging. For rigid anatomy such as the
skull and pelvis, only 3 rotations (θx, θy, θz) and 3 translations (tx, ty, tz) are required. The presence
of image distortions or mis-calibrated imaging devices would require more degrees of freedom
(DOF) such as anisotropic scaling (sx, sy, sz). While this is really a quality control issue and not a
registration problem, one rarely has control over all the imaging equipment to correct for these
before registration. The situation is more difficult when the anatomy involved is not rigid. In
these cases, a more complicated (spatially variant) transformation involving a larger number of
degrees of freedom is required to register the data properly. While ad hoc approaches have been
described to address such situations, this is still an active area of investigation. As such, the major
uses of multimodality image data in conformal therapy are in regions of the brain and the pelvis.
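The rigid-body (6 DOF) case can be made concrete with a short sketch; the function name and the rotation order (Rz then Ry then Rx) are assumptions for illustration, not a convention mandated by any particular planning system:

```python
# Sketch of the 6-DOF rigid transform described above: three rotations
# (theta_x, theta_y, theta_z) and three translations (t_x, t_y, t_z)
# applied to an N x 3 array of points.
import numpy as np

def rigid_transform(points, thetas, t):
    """Apply the rotation Rz @ Ry @ Rx followed by translation t."""
    tx, ty, tz = thetas
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tx), -np.sin(tx)],
                   [0, np.sin(tx),  np.cos(tx)]])
    Ry = np.array([[ np.cos(ty), 0, np.sin(ty)],
                   [0, 1, 0],
                   [-np.sin(ty), 0, np.cos(ty)]])
    Rz = np.array([[np.cos(tz), -np.sin(tz), 0],
                   [np.sin(tz),  np.cos(tz), 0],
                   [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.asarray(t)
```

Anisotropic scaling (sx, sy, sz) would add a diagonal scaling matrix to this product, raising the DOF from 6 to 9.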
To illustrate in more detail the process of dataset registration, two popular algorithms in clinical
use will be described. One algorithm makes use of extracted anatomic structures (surface
matching), and the other uses the grayscale information directly (mutual information).
1) Surface-based Registration
In surface-based registration, the surfaces of one or more anatomic structures are extracted from
the image data and used for computing and minimizing the mismatch between the datasets. The
most common anatomic structures used are the skull for brain studies and the pelvis for prostate
studies. These structures can be easily extracted using automated techniques and minor hand
editing. The surfaces from Study A are represented as a binary volume or as an explicit polygon
surface and the surfaces from Study B are represented as a set of points sampled from the surface
(Figure 2). The metric, which represents the degree of mismatch between the two datasets, is
computed as the sum or average of the distances from the points from Study B to the surfaces from
Study A (van Herk).
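The surface-based metric can be illustrated with a brute-force computation of the mean distance from the Study B sample points to the nearest points of a densely sampled Study A surface. In practice a precomputed distance map or KD-tree would be used for speed, and all names here are illustrative:

```python
# A minimal sketch of the surface-based mismatch metric: Study A's surface
# is approximated by a dense cloud of surface points, Study B by a sparser
# point sample, and the metric is the mean distance from each Study B point
# to its nearest Study A surface point.
import numpy as np

def surface_mismatch(points_b, surface_a):
    """Mean nearest-neighbor distance from N x 3 points_b to M x 3 surface_a."""
    # Pairwise distances via broadcasting (brute force).
    diff = points_b[:, None, :] - surface_a[None, :, :]
    dists = np.linalg.norm(diff, axis=2)
    return float(dists.min(axis=1).mean())
```

Minimizing this quantity over the transformation parameters applied to `points_b` yields the registration.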
Figure 2. Illustration of surface-based dataset registration. a) Extracted surfaces from the different datasets
(the surface from Study B is represented by points sampled from the surface; the surface from Study A is
represented as an explicit polygon surface). b) The distance of closest approach from each point to the surface
is computed. c) The distances between points and surface are minimized.
2) Image-based Registration
In image-based registration, the grayscale data is used directly to compute a measure of mismatch
or similarity between two datasets. One advantage of this approach over surface-based algorithms
is that much less pre-processing of the data is required. Several metrics which measure the
similarity between the grayscale distributions of the two datasets have been described (Meyer,
Studholme, Wells). One method that is being widely developed and used is registration using a
mutual information (MI) based metric.
Mutual information, a concept from the field of information theory, is a measure of how much
redundant information is present between the pair of datasets being analyzed. The amount of
information content in a dataset, in this case images, is determined by integration of the joint and
individual probability densities. The joint probability distribution is computed from the 2-D
histogram of grayscale pairs from the datasets at each iteration of the coordinate transformation
estimate (Figure 3). When two images are not properly aligned, the probability of one voxel value
predicting the content of the homologous voxel value is small, resulting in a reduced MI (a less
clustered histogram). When the images are properly aligned, these joint probabilities are maximized,
increasing information redundancy and yielding a higher MI (a more clustered histogram).
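The mutual information computation from the 2-D joint histogram can be sketched as follows; the bin count and the test images are assumptions for illustration, not values taken from any registration program:

```python
# Sketch of a mutual-information metric computed from the 2-D joint
# grayscale histogram, as described above. MI is the KL divergence between
# the joint distribution and the product of its marginals; it is large when
# the histogram is tightly clustered (images aligned) and small otherwise.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI (in nats) between two equally shaped grayscale images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()      # joint probability distribution
    p_a = p_ab.sum(axis=1)          # marginal of image A
    p_b = p_ab.sum(axis=0)          # marginal of image B
    nz = p_ab > 0                   # avoid log(0)
    indep = p_a[:, None] * p_b[None, :]
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / indep[nz])))
```

Registration then amounts to searching the transformation parameters for the maximum of this quantity.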
3) Interactive Techniques
In addition to numerical approaches, interactive dataset registration techniques are also
available. These techniques replace the numerical metric with some form of visual feedback and
the appropriate widgets to manipulate and register two datasets. These techniques are effective in
cases when the transformation can be represented by a small number of degrees of freedom such
as only rotations (3 DOF), translations (3 DOF) and possibly isotropic scaling (1 DOF).
Unfortunately, interactive alignment with more degrees of freedom, such as anisotropic scaling (3
DOF) becomes increasingly difficult and error prone.
Even when using numerical methods, interactive tools can help to determine a good first
approximation for the parameters of the coordinate transformation. As with all search methods, a
good starting point can greatly reduce the time required for the algorithm to converge.
Figure 3. Illustration of image-based registration using mutual information. a) T1-weighted and T2-weighted
images from the same anatomic plane but displaced by 6mm. b) The 2-D histogram of joint grayscale values
from the mis-registered images. c) The same images but with no displacement. d) The 2-D histogram for these
images shows that the data is now more clustered (increased MI). (Data courtesy of Charles Meyer, University
of Michigan).
B. Structure Mapping and Image Fusion
Once the transformation relating imaging studies is calculated, part or all of the data relating to
one study may be integrated or fused with that of another. Depending on the goal of the clinician,
one of two types of fusion is generally employed. One approach, called "structure mapping," maps
the outlines of anatomic structures or treatment volumes defined from one imaging study to the
other. Another approach, called "image mapping" or "image fusion," involves transforming and
reformatting image data from one study to match the orientation and scale of the images of the
other. This makes it possible to simultaneously visualize grayscale information from
corresponding anatomic planes.
1) Structure Mapping
Structure mapping produces outlines of structures defined from one study on the images of
another. Figure 4 illustrates this process for a patient with an astrocytoma, imaged well by MR
but difficult to define on the CT. The first step is to construct a 3-D approximation of the structure
from the collection of 2-D outlines in Study B. This is accomplished by tiling outlines in adjacent
images and "capping" the top and bottom outlines to create a closed surface (Fuchs). Most
treatment planning systems incorporate such algorithms. The 3-D surface model defined in Study
B is then mapped to Study A using the computed transformation. The desired 2-D outlines for
Study A are derived by "slicing" the transformed surface model along the image planes of Study A.
The resultant outlines are used as input for the treatment planning process.
2) Image Mapping and Fusion
Another approach to integrating information from Study A and Study B is to simultaneously
display images of corresponding anatomic planes from these two imaging studies, regardless of
differences in the imaging parameters under which they were acquired. To accomplish this, the
images from one study are re-sampled to match the scale and orientation of the other (Figure 5).
Various display methods can then be used to see the relationship between the data contained in
the two studies (Figure 6).
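This backward-mapping resampling can be sketched with scipy.ndimage.affine_transform, whose `matrix` and `offset` arguments map output (Study B') voxel coordinates back into the input (Study B), matching the inverse-transform scheme described above. The volume and transform here are toy values, not case data:

```python
# Sketch of back-transforming resampling: each Study B' voxel is filled by
# mapping its coordinates back into Study B with the inverse of the computed
# registration transform and interpolating between surrounding voxels.
import numpy as np
from scipy import ndimage

study_b = np.random.default_rng(0).random((40, 40, 40))  # toy volume

# Illustrative inverse transform: small rotation about one axis plus a shift.
theta = np.deg2rad(5.0)
inv_matrix = np.array([[1, 0, 0],
                       [0,  np.cos(theta), np.sin(theta)],
                       [0, -np.sin(theta), np.cos(theta)]])
inv_offset = np.array([0.0, 1.5, -2.0])

# order=1 selects trilinear interpolation between the surrounding voxels.
study_b_prime = ndimage.affine_transform(study_b, inv_matrix,
                                         offset=inv_offset, order=1)
```

The resampled `study_b_prime` shares the grid of Study A and can be displayed side by side or fused with it.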
Figure 4. Process of mapping a structure from one set of images to another. a) The structure is outlined on
the images of Study B (the MR). b) The outlines are stacked and c) used to construct a surface model of the
structure. d) The computed transformation is applied to the model. e) The transformed model is intersected
along the image planes of Study A to produce f) the corresponding outlines of the structure on Study A (the
CT).
Figure 5. Image Mapping. The goal is to create a version of Study B with images that match the scale,
location and orientation of those in Study A. The voxel values for Study B' are determined by back
transforming the coordinates of the voxel into Study B using the inverse of the computed transformation and
interpolating between the surrounding voxels.
Figure 6. Various image fusion displays possible using Study B' and Study A. a) An "electronic
pantograph": points drawn on Study A are displayed on Study B' and vice versa. b) Split-screen display of Study
B' and Study A. c) Movable sub-window over Study B' that displays the grayscale data from Study A.
One method is to display identical anatomic planes in a side-by-side fashion. Such a presentation
allows the definition of outlines of structures using both of the imaging studies simultaneously.
This is achieved by tracking the movement of the cursor over an image from one study with
corresponding movement over the analogous image of the other and vice versa. This technique can
be thought of as an electronic version of a mechanical pantograph. Other presentations made
possible using registered datasets are synthetic images that consist of selected information from
each study. The different information can be composed using overlays, pseudo-coloring, or
grayscale information directly. For example, the hard bone features of a CT imaging study can be
combined with the soft tissue features of an MR imaging study by adding the bone extracted from
the CT to the MR dataset. Similarly, a rectangular region from an image of one study can be
removed and replaced with the corresponding region from the other study (Figure 6).
4. CLINICAL EXAMPLE
An example is presented to illustrate the general process of multimodality data fusion. The case
involves a patient with a large left skull base lesion that was determined to be a benign
meningioma. Although there was nothing extraordinary about this particular case, it highlights
many of the steps and requirements involved when using multiple image datasets for conformal
therapy treatment planning.
Data Acquisition
A treatment planning CT study was performed with the patient in a customized immobilization
mask. An MR study was also performed, but without the mask because of space limitations within
the head coil. No attempt was made to adjust the orientation of the image plane (pitch) on the
MRI study to match that used for the CT study, although having the patient in a similar position
during the different imaging studies can greatly simplify the registration process. The CT study
was acquired without contrast (80 images, 3.0mm thickness, 3.0mm separation, 0.67mm pixel
size, 512 x 512 image size). The MR study involved three series using a spin echo sequence;
proton- and T2-weighted axial images (21 images, TE = 15 & 90msec, TR = 4918msec), T1-
weighted post-gadolinium axial images (22 images, TE = 550msec, TR = 1400msec) and T1-
weighted post-gadolinium coronal images (22 images, TE = 16, TR = 500). All MR images were
256 x 256 pixels and had a pixel size of 0.78mm, a slice thickness of 5.0mm, and were separated
by 7.0mm. All scans included the entire head to help facilitate accurate dataset registration and
other aspects of treatment planning (such as DRR generation).
Figure 7. Visual validation of the estimated coordinate transformation. a) CT images were reformatted to match
the MR images and displayed together as an alternating "checkerboard" or "Venetian blind" of the two images
at the same anatomic level. b) Threshold-extracted outlines from a reformatted CT image overlaid on the
corresponding MR image.
Dataset Registration
Registration of the CT and MR datasets was performed using a mutual information-based image
registration program originally developed in the U-M Digital Image Processing Laboratory
(www.med.umich.edu/dipl/). The CT images were downsized to 256 x 256 and the data outside the
head cropped (e.g. air and head holder) to reduce memory and computational requirements. Four
points were placed in approximately the same anatomic locations on both data sets (centers of the
orbits and the top and back of brain) to initialize the algorithm. Convergence to a maximal MI
took less than five minutes. The accuracy of the registration was assessed visually using the
displays shown in Figure 7.
Tumor and Target Volume Definition
The gross tumor volume (GTV) was defined as the region of enhancement in the post Gd-DTPA
contrast MR studies. The physician outlined this volume on both the axial and coronal set of
images (Figure 8). Surface representations of the GTV on both studies were created from the
manually drawn outlines. These surfaces were then transformed into the coordinate system of the
planning CT by applying the computed transformation to the vertices of the surfaces. Once
transformed, the surfaces were re-sliced along the planes defined by the CT images yielding a set
of contours for the coronal-MR GTV and the axial-MR GTV. These contours were used to define a
"composite" GTV from the union of the two sets of MR-based GTV outlines. For this case, the CT
grayscale data did not contribute any information for the definition of the GTV. Had the physician
outlined a CT-based GTV, it could have been incorporated directly into the composite GTV or compared
to the MR based outlines to reconcile potential ambiguities. The final planning target volume was
created by expanding the composite GTV surface isotropically using a 5 mm margin for setup
uncertainty (in this case the clinical target volume (CTV) was the same as the GTV). The
expanded surface was then re-sliced back onto the CT images and checked by the physician for
final approval.
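The isotropic margin expansion can be sketched as a thresholded Euclidean distance transform on a binary mask; the mask, voxel spacing, and helper name are illustrative, not the actual case data:

```python
# Sketch of growing a binary GTV mask isotropically by a fixed margin:
# compute, for every voxel, the distance (in mm, honoring anisotropic voxel
# spacing) to the nearest GTV voxel, and keep voxels within the margin.
import numpy as np
from scipy import ndimage

def expand_margin(mask, margin_mm, spacing_mm):
    """Return mask grown isotropically by margin_mm (per-axis spacing in mm)."""
    dist = ndimage.distance_transform_edt(~mask, sampling=spacing_mm)
    return dist <= margin_mm

# Toy GTV in a small volume with CT-like voxel spacing (slice, row, column).
gtv = np.zeros((20, 20, 20), dtype=bool)
gtv[8:12, 8:12, 8:12] = True
ptv = expand_margin(gtv, margin_mm=5.0, spacing_mm=(3.0, 0.67, 0.67))
```

An equivalent surface-based implementation, as in the text, displaces the GTV surface vertices outward by the margin before re-slicing onto the CT planes.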
Treatment Planning and Dose Visualization
At this point, with the necessary coordinate transformations in hand, most of the steps of
treatment planning can be accomplished using any dataset. Because of the confidence in the
geometric accuracy and extent of the external patient surface and the density information from the
planning CT data, this dataset is chosen. Except for these criteria, the MR and CT data are
interchangeable; all options in the planning system can operate with either. For example, it is
just as simple to display dose over the MR data as the CT (Figure 9).
Figure 8. Left side a) Post contrast axial with GTV. b) Reconstructed sagittal CT with composite MR derived
GTV. c) Axial CT with coronal and axial MR derived GTVs and composite GTV. d) Post contrast coronal MR
with GTV. The thin white lines show the intersection of the plane of the other image in the same row. Right
side a) MR axial derived GTV. b) MR coronal derived GTV volume, optic nerves, and optic chiasm. c)
Composite GTV from MR axial and MR coronal tumor volumes. Orbits are derived directly from the CT
images.
Figure 9. Dose display using a CT-registered MR dataset. a) Isodose lines. b) Dose "Colorwash."
5. SUMMARY
In this presentation the benefits and process of combining multiple imaging modalities together to
enhance the accuracy of conformal radiation therapy have been described. While the imaging
strengths existing in CT, MR, PET, SPECT, and ultrasound imaging techniques make them
important tools in patient management, each method has its drawbacks. The goal of image fusion
is to effectively overlap the strengths of one imaging modality over the weaknesses of another.
Various types of image registration techniques are now being used in the clinic. The objective of
each is to derive a coordinate transformation that maps homologous anatomic points from one
dataset to the other. Each method uses a different measure of success. Surface-based registration
measures success by minimizing the distances between points and surface representations of
anatomic structures extracted from each dataset. Image-based registration measures success
directly from voxel grayscale data in the two datasets. One robust image-based algorithm operates
by maximizing the mutual information content between the two datasets. Interactive techniques
are also in wide use and involve manual manipulations of structure or image orientation and
allow immediate visual feedback and verification of success.
Once a coordinate transformation has been computed between two datasets, structure mapping
and image fusion can be used to visualize the data from one dataset in relation to another.
Structure mapping permits the display of outlines drawn on one dataset over the images of the
other. Image mapping and fusion allows 2-D images from the two datasets to be visualized
adjacent to or superimposed over the other.
The application of these techniques has been demonstrated with a typical clinical example using
CT and MR studies for a patient with a meningioma. While only two imaging modalities were used
in this example, the general steps required to exploit data from any imaging modality for
treatment planning in conformal therapy are the same.
6. REFERENCES
Doyle WK, et al, Differentiation of cerebral radiation necrosis from tumor recurrence by [18F]FDG
and 82Rb positron emission tomography. J Comput Assist Tomogr, 11(4):563-70, 1987.
Fraass BA, et al. Integration of magnetic resonance imaging into radiation therapy treatment
planning: I. Technical considerations. Int J Radiat Oncol Biol Phys, 13:1897-908, 1987.
Fuchs H, Kedem ZM, Uselton SP. Optimal Surface Reconstruction from Planar Contours. Comm
ACM 1977; 10:693-703.
Kessler ML, Pitluck S, Petti P, Castro JR. Integration of multimodality imaging data for
radiotherapy treatment planning. Int J Radiat Oncol Biol Phys, 21:1653-67, 1991.
Khoo V.S., et al., Magnetic resonance imaging (MRI): considerations and applications in
radiotherapy treatment planning, Radiother Oncol,42(1): 1-15; 1997.
Koral KF, Lin S, Fessler JA, Kaminski MS, Wahl RL, Preliminary results from intensity-based
CT-SPECT fusion in I-131 anti-B1 monoclonal-antibody therapy of lymphoma. Cancer, 80(12
Suppl):2538-44, 1997.
Marks LB, et al, The role of three dimensional functional lung imaging in radiation treatment
planning: the functional dose-volume histogram. Int J Radiat Oncol Biol Phys, 33(1):65-75, 1995.
Meyer CR, et al, Demonstration of accuracy and clinical versatility of mutual information for
automatic multimodality image fusion using affine and thin-plate spline warped geometric
deformations, Med Image Anal, 1(3):195-206, 1997
Narayana V, Roberson PL, Winfield RJ, McLaughlin PW, Impact of ultrasound and computed
tomography prostate volume registration on evaluation of permanent prostate implants, Int J
Radiat Oncol Biol Phys, 39(2):341-6, 1997
Pathak SD, Grimm PD, Chalana V, Kim Y, Pubic arch detection in transrectal ultrasound guided
prostate cancer therapy, IEEE Trans Med Imaging 17(5):762-71, 1998.
Press WH, Teukolsky SA, Vetterling WT, Flannery BP, Numerical Recipes in C: the art of
scientific computing. 2nd ed, Port Chester, NY: Cambridge University Press, 1992.
Rosenman, J.G., et al., Image registration: an essential part of radiation therapy treatment
planning, Int J Radiat Oncol Biol Phys, 40(1): 197-205; 1998.
Schad LR, Loot S, Schmidt F, Sturm V, Lorenz WJ. Correction of spatial distortion in MR imaging:
a prerequisite for accurate stereotaxy. J Comput Assist Tomogr, 11:499-505. 1987.
Studholme C, Hill DL, Hawkes DJ, Automated 3-D registration of MR and CT images of the head, Med Image Anal,
1(2):163-75, 1996.
Ten Haken R.K., et al., A quantitative assessment of the addition of MRI to CT-based, 3-D
treatment planning of brain tumors, Radiother Oncol 25(2):121-133, 1992.
van den Elsen PA, Pol MA, Viergever MA. Medical Image Matching - A review with classification.
IEEE Engineering in Med and Biol, 12:26-39, 1993.
van Herk M., et al., Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT
using chamfer matching, Med Phys, 21(7): 1163-1178; 1994.
Wells WM 3rd, Viola P, Atsumi H, Nakajima S, Kikinis R, Multi-modal volume registration by
maximization of mutual information. Med Image Anal, 1(1):35-51, 1996.
Department of Radiation Oncology, The University of Michigan
1. INTRODUCTION
2. IMAGING MODALITIES IN TREATMENT PLANNING
A. X-ray CT
B. Magnetic Resonance Imaging
C. Nuclear Medicine Imaging
D. Ultrasound Imaging
3. TECHNIQUES
A. Dataset Registration
1) Surface-based Registration
2) Image-based Registration
3) Interactive Registration
B.Data Mapping and Image Fusion
1) Structure Mapping
2) Image Mapping and Fusion
4. CLINICAL EXAMPLE
5. SUMMARY
6. REFERENCES
Acknowledgements
This paper is reprinted with permission from 3D Conformal Radiation Therapy and Intensity
Modulated Radiation Therapy: Physics and Clinical Considerations. Ed. J. A. Purdy, W. Grant III,
J. R. Palta, E. B. Butler, C. A. Perez, Madison, WI, Advanced Medical Publishing; 2001.
Page 2
Marc L Kessler, PhD and Kelvin Li, MS Image Fusion for Conformal Radiotherapy AAPM 2001
2
1. INTRODUCTION
Medical imaging is now a fundamental tool in conformal radiation therapy. Almost every aspect of
patient management involves some form of two- or three-dimensional image data acquired using
one or more modalities. Image data are now used for diagnosis and staging, for treatment planning
and delivery and for monitoring patients after therapy. While x-ray computed tomography (CT) is
the primary modality for most image-based treatment planning, other modalities such as magnetic
resonance imaging (MR), positron and single photon emission computed tomography (PET and
SPECT) and ultrasound imaging (US) can provide important data that may improve overall
patient management. For example, MR provides excellent soft tissue contrast, allowing superior
delineation of normal tissues and tumor volumes in many sites. SPECT and PET provide unique
metabolic information capable of resolving ambiguities in anatomic image data and can quantify
partial organ function. Ultrasound imaging can now provide real-time volumetric information for
delineating organ boundaries for both treatment planning and treatment delivery. The data from
these modalities help the clinician to develop a more complete description of the patient. Each of
these modalities alone, however, does not provide all of the physical and geometric information
required for conformal radiotherapy treatment planning. Therefore, in order to take full
advantage of the information from each modality, it must be integrated with the primary CT data
before being used for treatment planning and delivery (Kessler, Rosenman).
This presentation will describe the general techniques available to carry out this integration
process in a quantitative fashion. First, an overview of the advantages and limitations of each
image modality is presented. Next, the basic procedures implemented in academic and
commercial treatment planning systems for supporting the use of multimodality data are
described. Finally, a clinical example is presented to illustrate the steps involved in using
multiple image datasets for conformal radiotherapy treatment planning.
2. IMAGING MODALITIES IN TREATMENT PLANNING
The radiation oncologist now has access to image data from several imaging modalities. In
addition to x-ray CT, data from MR, PET and SPECT and ultrasound imaging may be acquired
during the course of patient management. Each of these modalities has both advantages and
limitations for use in conformal radiation therapy. By registering and fusing the data from the
different modalities, the advantages of each modality can be combined and a more accurate
representation of the patient obtained (Figure 1).
Figure 1. Data from multiple imaging modalities are registered to a common patient reference system (usually
the planning CT) and then combined to construct a more complete representation of the patient.
A. X-ray Computed Tomography
X-ray computed tomography (CT) provides the primary dataset for most aspects of conformal
therapy treatment planning. The data from CT are used to construct geometric and physical
models of the patient. The geometric models are used to define anatomic structures, target
volumes and to aid in radiation beam placement and shaping. The physical models provide
density information required by most dose calculation algorithms. The planning CT dataset is
also used to generate graphical aids such as beams-eye-view (BEV) displays and digitally
reconstructed radiographs (DRRs) for planning and treatment verification. The major drawback of
CT data is the limited soft tissue contrast, which can hinder accurate tissue discrimination.
B. Magnetic Resonance Imaging
Magnetic resonance imaging (MR) now plays an important role in treatment planning for several
tumor sites, offering several advantages over CT. The excellent soft tissue contrast provided by
MR permits better discrimination between normal tissues and many tumors (Khoo). A wide
variety of MR imaging pulse sequences are available that can improve image contrast by
enhancing or suppressing specific tissues such as fat and conditions such as edema. Also, MR
images can be directly acquired along sagittal and coronal planes, offering better visualization of
certain tissues. While these features make MR an excellent choice as a primary dataset for
treatment planning, several limitations have prevented the use of MR data alone. These
drawbacks include the greater susceptibility of MR to spatial distortions and intensity artifacts,
the lack of signal from cortical bone, and image intensity values that have no relationship to
electron or physical density (Fraass, Schad).
C. Emission Computed Tomography (Nuclear Medicine Imaging)
SPECT and PET permit imaging of the transport and accumulation of biologically active tracers.
Examples of tracers for PET include 18F-deoxyglucose and 82Rb, and for SPECT, 131I and
99mTc. SPECT and PET studies provide information about physiology rather than anatomy. The
data from these modalities can provide important information about tumor metabolism and
partial-volume tissue function. For patient monitoring, PET imaging has proven useful for
discriminating between tumor recurrence and radiation necrosis (Doyle). For treatment planning
SPECT data has been used to demonstrate regional lung function to help determine beam
directions (Marks) and to quantify the distribution of radiolabeled mono-clonal antibodies (Koral).
D. Ultrasound
Ultrasound imaging uses mechanical (sound) energy rather than electromagnetic energy to image
tissue. Ultrasound primarily images tissue boundaries rather than internal structure. It can also
be used to image blood flow. The major advantages of ultrasound are that images are produced in
real-time, the required apparatus is relatively small, and it does not involve ionizing radiation.
Three-dimensional data acquisition with ultrasound is now also possible. Together, these features
have been exploited for radioactive source placement and boundary definition for prostate
treatments (Narayana, Pathak). A disadvantage of ultrasound is that some regions of the body,
such as the brain and lung, cannot be imaged effectively due to signal loss at large tissue density
changes, such as at tissue-bone and tissue-air interfaces.
While the imaging strengths of CT, MR, PET, SPECT, and ultrasound imaging techniques make
them important tools in patient management, each method has its drawbacks. The goal of image
fusion is to overlap the strengths of each modality. In the next section, the techniques to carry this
out in a quantitative fashion are described.
3. TECHNIQUES
The process of incorporating data from multiple imaging studies into the treatment planning
process involves two main tasks. The first task is to estimate the parameters of the coordinate
transformation that relates homologous points in the two studies (data registration). The second
task is to apply the resultant transformation to map structures or features of interest from one
study to another (structure mapping) or to directly combine grayscale data from the two studies
(data fusion). For the discussion that follows the two datasets will be labeled Study A and Study B.
Study A is the base or reference dataset held fixed and Study B the homologous dataset that is
manipulated to be brought into geometric alignment with Study A.
A. Dataset Registration
Numerous techniques for estimating the coordinate transformation that relates homologous points
in two imaging studies have been reported and reviewed in the literature (van den Elsen). The
general approach of these methods is to devise a metric that measures the degree of mismatch (or
similarity) between homologous features in two datasets and to use standard numerical methods
to determine the transformation parameters that minimize (maximize) the metric (Press).
Features used to compute this metric are typically geometric structures (homologous points, lines
or surfaces or combinations of these) extracted from the datasets or the native grayscale data.
Geometric features include manually placed point fiducials and stereotactic frames or internal
landmarks such as anatomic points or extracted surfaces.
The parameters used to model the coordinate transformation between two datasets depend on the
modalities involved and the clinical site. In the simplest situation, it is only necessary to account
for differences in patient orientation (pose) at the time of imaging. For rigid anatomy such as the
skull and pelvis, only 3 rotations (θx, θy, θz) and 3 translations (tx, ty, tz) are required. The presence
of image distortions or mis-calibrated imaging devices would require more degrees of freedom
(DOF), such as anisotropic scaling (sx, sy, sz). While this is really a quality control issue and not a
registration problem, one rarely has control over all of the imaging equipment to correct for these
before registration. The situation is more difficult when the anatomy involved is not rigid. In
these cases, a more complicated (spatially variant) transformation involving a larger number of
degrees of freedom is required to register the data properly. While (ad hoc) approaches have been
described to address such situations, it is still an active area of investigation. As such, the major
uses of multimodality image data in conformal therapy are regions of the brain and the pelvis.
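For the rigid case, the six parameters above can be composed into a single 4x4 homogeneous matrix that maps the coordinates of one study into the other. A minimal sketch in Python/NumPy (the rotation order shown is one common convention, and the function names are illustrative, not from any planning system):

```python
import numpy as np

def rigid_transform(theta_x, theta_y, theta_z, t_x, t_y, t_z):
    """Compose a 6-DOF rigid-body transform (angles in radians,
    translations in mm) into a 4x4 homogeneous matrix."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # one common rotation-order convention
    T[:3, 3] = [t_x, t_y, t_z]
    return T

def apply_transform(T, points):
    """Map an (N, 3) array of points through the homogeneous transform."""
    homog = np.c_[points, np.ones(len(points))]
    return (T @ homog.T).T[:, :3]
```

Additional degrees of freedom such as anisotropic scaling would simply be folded into the matrix product as a diagonal scale matrix.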
To illustrate in more detail the process of dataset registration, two popular algorithms in clinical
use will be described. One algorithm makes use of extracted anatomic structures (surface
matching), and the other uses the grayscale information directly (mutual information).
1) Surface-based Registration
In surface-based registration, the surfaces of one or more anatomic structures are extracted from
the image data and used for computing and minimizing the mismatch between the datasets. The
most common anatomic structures used are the skull for brain studies and the pelvis for prostate
studies. These structures can be easily extracted using automated techniques and minor hand
editing. The surfaces from Study A are represented as a binary volume or as an explicit polygon
surface and the surfaces from Study B are represented as a set of points sampled from the surface
(Figure 2). The metric, which represents the degree of mismatch between the two datasets, is
computed as the sum or average of the distances from the points from Study B to the surfaces from
Study A (van Herk).
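The essence of this metric and of the parameter search can be sketched as follows (Python/NumPy; for brevity the Study A surface is represented by densely sampled points rather than a polygon surface, and the search covers integer translations only, whereas a real implementation would use a distance transform and a general 6-DOF optimizer):

```python
import numpy as np

def mismatch(points_b, surface_a):
    """Mean distance from each Study B point to the closest point of the
    (densely sampled) Study A surface -- a chamfer-style mismatch metric."""
    d = np.linalg.norm(points_b[:, None, :] - surface_a[None, :, :], axis=2)
    return d.min(axis=1).mean()

def register_translation(points_b, surface_a, search=range(-3, 4)):
    """Exhaustive search over integer translations minimizing the metric."""
    best_t, best_m = None, np.inf
    for tx in search:
        for ty in search:
            for tz in search:
                t = np.array([tx, ty, tz], float)
                m = mismatch(points_b + t, surface_a)
                if m < best_m:
                    best_t, best_m = t, m
    return best_t, best_m
```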
Figure 2. Illustration of surface-based dataset registration. a) Extracted surfaces from the different datasets
(surface from Study B is represented by points sampled from surface, surface from Study A is represented as
an explicit polygon surface. b) Distance of closest approach from each point to the surface is computed. c)
Distances between points and surface minimized.
2) Image-based Registration
In image-based registration, the grayscale data is used directly to compute a measure of mismatch
or similarity between two datasets. One advantage of this approach over surface-based algorithms
is that much less pre-processing of the data is required. Several metrics which measure the
similarity between the grayscale distributions of the two datasets have been described (Meyer,
Studholme, Wells). One method that is being widely developed and used is registration using a
mutual information (MI) based metric.
Mutual information, a concept from the field of information theory, is a measure of how much
redundant information is present between the pair of datasets being analyzed. The amount of
information content in a dataset, in this case images, is determined by integration of the joint and
individual probability densities. The joint probability distribution is computed from the 2-D
histogram of grayscale pairs from the datasets at each iteration of the coordinate transformation
estimate (Figure 3). When two images are not properly aligned, the probability of one voxel value
predicting the content of the homologous voxel value is small, resulting in a reduced MI (a less
clustered histogram). When the images are properly aligned, the joint probabilities, and with them
the information redundancy and the MI (a more clustered histogram), are maximized.
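A minimal sketch of the MI computation from the 2-D grayscale histogram (Python/NumPy; the bin count and the absence of intensity normalization are simplifications):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information computed from the 2-D histogram of paired
    voxel values of two equally sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)          # marginal distribution of image A
    p_b = p_ab.sum(axis=0)          # marginal distribution of image B
    nz = p_ab > 0                   # skip empty bins to avoid log(0)
    return np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz]))
```

Consistent with Figure 3, the MI of an image with itself (a tightly clustered histogram) exceeds the MI of the same image against a displaced copy.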
3) Interactive Techniques
In addition to numerical approaches, interactive dataset registration techniques are also
available. These techniques replace the numerical metric with some form of visual feedback and
the appropriate widgets to manipulate and register two datasets. These techniques are effective in
cases when the transformation can be represented by a small number of degrees of freedom such
as only rotations (3 DOF), translations (3 DOF) and possibly isotropic scaling (1 DOF).
Unfortunately, interactive alignment with more degrees of freedom, such as anisotropic scaling (3
DOF) becomes increasingly difficult and error prone.
Even when using numerical methods, interactive tools can help to determine a good first
approximation for the parameters of the coordinate transformation. As with all search methods, a
good starting point can greatly reduce the time required for the algorithm to converge.
Figure 3. Illustration of image-based registration using mutual information. a) T1-weighted and T2-weighted
images from the same anatomic plane but displaced by 6mm. b) The 2-D histogram of joint grayscale values
from the mis-registered images. c) The same images but with no displacement. d) The 2-D histogram for these
images shows that the data is now more clustered (increased MI). (Data courtesy of Charles Meyer, University
of Michigan).
B. Structure Mapping and Image Fusion
Once the transformation relating imaging studies is calculated, part or all of the data relating to
one study may be integrated or fused with that of another. Depending on the goal of the clinician,
one of two types of fusion is generally employed. One approach, called "structure mapping," maps
the outlines of anatomic structures or treatment volumes defined from one imaging study onto the
other. Another approach, called "image mapping" or "image fusion," involves transforming and
reformatting image data from one study to match the orientation and scale of the images of the
other. This makes it possible to simultaneously visualize grayscale information from
corresponding anatomic planes.
1) Structure Mapping
Structure mapping produces outlines of structures defined from one study on the images of
another. Figure 4 illustrates this process for a patient with an astrocytoma, imaged well by MR
but difficult to define on the CT. The first step is to construct a 3-D approximation of the structure
from the collection of 2-D outlines in Study B. This is accomplished by tiling outlines in adjacent
images and "capping" the top and bottom outlines to create a closed surface (Fuchs). Most
treatment planning systems incorporate such algorithms. The 3-D surface model defined in Study
B is then mapped to Study A using the computed transformation. The desired 2-D outlines for
Study A are derived by "slicing" the transformed surface model along the image planes of Study A.
The resultant outlines are used as input for the treatment planning process.
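The final "slicing" step amounts to intersecting the transformed surface with each image plane of Study A. A toy sketch of the idea, using a closed 3-D polyline in place of a tiled surface (Python/NumPy; illustrative only):

```python
import numpy as np

def slice_polyline(vertices, z_plane):
    """Intersect the edges of a closed 3-D polyline with the plane
    z = z_plane, returning the crossing points in edge order."""
    crossings = []
    n = len(vertices)
    for i in range(n):
        p, q = vertices[i], vertices[(i + 1) % n]
        if (p[2] - z_plane) * (q[2] - z_plane) < 0:   # edge straddles plane
            f = (z_plane - p[2]) / (q[2] - p[2])      # linear interpolation
            crossings.append(p + f * (q - p))
    return np.array(crossings)
```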
2) Image Mapping and Fusion
Another approach to integrating information from Study A and Study B is to simultaneously
display images of corresponding anatomic planes from these two imaging studies, regardless of
differences in the imaging parameters under which they were acquired. To accomplish this, the
images from one study are re-sampled to match the scale and orientation of the other (Figure 5).
Various display methods can then be used to see the relationship between the data contained in
the two studies (Figure 6).
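The re-sampling works backwards: each voxel of the output grid is mapped through the inverse transformation into Study B and the value interpolated there. A sketch with nearest-neighbour interpolation (Python/NumPy; trilinear interpolation is more typical in practice, and world-coordinate handling is omitted):

```python
import numpy as np

def resample(img_b, T_inv, shape_a):
    """Build Study B' on Study A's voxel grid: back-transform each output
    voxel coordinate into Study B and take the nearest source voxel."""
    out = np.zeros(shape_a)
    for idx in np.ndindex(shape_a):
        src = T_inv @ np.append(np.array(idx, float), 1.0)
        j = np.rint(src[:3]).astype(int)              # nearest neighbour
        if all(0 <= j[k] < img_b.shape[k] for k in range(3)):
            out[idx] = img_b[tuple(j)]                # outside stays zero
    return out
```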
Figure 4. Process of mapping a structure from one set of images to another. a) The structure is outlined on
the images of Study B (the MR). b) The outlines are stacked and c) used to construct a surface model of the
structure. d) The computed transformation is applied to the model. e) The transformed model is intersected
along the image planes of Study A to produce f) the corresponding outlines of the structure on Study A (the
CT).
Figure 5. Image Mapping. The goal is to create a version of Study B with images that match the scale,
location and orientation of those in Study A. The voxel values for Study B' are determined by back
transforming the coordinates of the voxel into Study B using the inverse of the computed transformation and
interpolating between the surrounding voxels.
Figure 6. Various image fusion displays possible using Study B' and Study A. a) An "electronic
pantograph": points drawn on Study A are displayed on Study B' and vice versa. b) Split-screen
display of Study B' and Study A. c) Movable sub-window over Study B' that displays the grayscale
data from Study A.
One method is to display identical anatomic planes in a side-by-side fashion. Such a presentation
allows the definition of outlines of structures using both of the imaging studies simultaneously.
This is achieved by tracking the movement of the cursor over an image from one study with
corresponding movement over the analogous image of the other and vice versa. This technique can
be thought of as an electronic version of a mechanical pantograph. Other presentations made
possible using registered datasets are synthetic images that consist of selected information from
each study. The different information can be composed using overlays, pseudo-coloring, or
grayscale information directly. For example, the hard bone features of a CT imaging study can be
combined with the soft tissue features of an MR imaging study by adding the bone extracted from
the CT to the MR dataset. Similarly, a rectangular region from an image of one study can be
removed and replaced with the corresponding region from the other study (Figure 6).
4. CLINICAL EXAMPLE
An example is presented to illustrate the general process of multimodality data fusion. The case
involves a patient with a large left skull base lesion that was determined to be a benign
meningioma. Although there was nothing extraordinary about this particular case, it highlights
many of the steps and requirements involved when using multiple image datasets for conformal
therapy treatment planning.
Data Acquisition
A treatment planning CT study was performed with the patient in a customized immobilization
mask. An MR study was also performed, but without the mask because of space limitations within
the head coil. No attempt was made to adjust the orientation of the image plane (pitch) on the
MRI study to match that used for the CT study, although having the patient in a similar position
during the different imaging studies can greatly simplify the registration process. The CT study
was acquired without contrast (80 images, 3.0mm thickness, 3.0mm separation, 0.67mm pixel
size, 512 x 512 image size). The MR study involved three series using a spin echo sequence;
proton- and T2-weighted axial images (21 images, TE = 15 & 90msec, TR = 4918msec), T1-
weighted post-gadolinium axial images (22 images, TE = 550msec, TR = 1400msec) and T1-
weighted post-gadolinium coronal images (22 images, TE = 16, TR = 500). All MR images were
256 x 256 pixels and had a pixel size of 0.78mm, a slice thickness of 5.0mm, and were separated
by 7.0mm. All scans included the entire head to help facilitate accurate dataset registration and
other aspects of treatment planning (such as DRR generation).
Figure 7. Visual validation of the estimated coordinate transformation. a) CT images were
reformatted to match the MR images and displayed together as an alternating "checkerboard" or
"Venetian blind" of the two images at the same anatomic level. b) Threshold-extracted outlines
from a reformatted CT image overlaid on the corresponding MR image.
Dataset Registration
Registration of the CT and MR datasets was performed using a mutual information-based image
registration program originally developed in the U-M Digital Image Processing Laboratory
(www.med.umich.edu/dipl/). The CT images were downsized to 256 x 256 and the data outside the
head cropped (e.g. air and head holder) to reduce memory and computational requirements. Four
points were placed in approximately the same anatomic locations on both data sets (centers of the
orbits and the top and back of brain) to initialize the algorithm. Convergence to a maximal MI
took less than five minutes. The accuracy of the registration was assessed visually using the
displays shown in Figure 7.
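The "checkerboard" display used for visual validation simply alternates square blocks from the two registered image sets, so residual misalignment shows up as broken anatomy at the block boundaries. A minimal sketch (Python/NumPy; illustrative, not the U-M software):

```python
import numpy as np

def checkerboard(img_a, img_b, block=8):
    """Alternate square blocks from two registered 2-D images; residual
    misalignment appears as broken edges at the block boundaries."""
    yy, xx = np.indices(img_a.shape)
    mask = ((yy // block + xx // block) % 2).astype(bool)
    return np.where(mask, img_b, img_a)
```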
Tumor and Target Volume Definition
The gross tumor volume (GTV) was defined as the region of enhancement in the post-Gd-DTPA
contrast MR studies. The physician outlined this volume on both the axial and coronal sets of
images (Figure 8). Surface representations of the GTV on both studies were created from the
manually drawn outlines. These surfaces were then transformed into the coordinate system of the
planning CT by applying the computed transformation to the vertices of the surfaces. Once
transformed, the surfaces were re-sliced along the planes defined by the CT images yielding a set
of contours for the coronal-MR GTV and the axial-MR GTV. These contours were used to define a
"composite" GTV from the union of the two sets of MR-based GTV outlines. For this case, the CT
grayscale data did not contribute any information for the definition of the GTV. Had the physician
outlined a CT-based GTV, it could have been incorporated directly into the composite GTV or
compared to the MR-based outlines to reconcile potential ambiguities. The final planning target volume was
created by expanding the composite GTV surface isotropically using a 5 mm margin for setup
uncertainty (in this case the clinical target volume (CTV) was the same as the GTV). The
expanded surface was then re-sliced back onto the CT images and checked by the physician for
final approval.
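The isotropic expansion can be performed on the surface model, by moving vertices outward along their normals, or on a voxelized mask. A brute-force mask-based sketch (Python/NumPy; illustrative, and far slower than the distance-transform or morphological-dilation methods a planning system would use):

```python
import numpy as np

def expand_margin(mask, margin_mm, voxel_mm):
    """Isotropically expand a binary structure mask (e.g. GTV -> PTV):
    a voxel belongs to the expanded structure if its physical distance
    to any structure voxel is within the margin."""
    fg = np.argwhere(mask)                  # indices of structure voxels
    scale = np.asarray(voxel_mm, float)     # voxel dimensions in mm
    out = np.zeros_like(mask)
    for idx in np.ndindex(mask.shape):
        d = np.linalg.norm((fg - idx) * scale, axis=1)
        if d.min() <= margin_mm:
            out[idx] = True
    return out
```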
Treatment Planning and Dose Visualization
At this point, with the necessary coordinate transformations in hand, most of the steps of
treatment planning can be accomplished using any dataset. Because of the confidence in the
geometric accuracy and extent of the external patient surface and the density information from the
planning CT data, this dataset is chosen. Except for these criteria, the MR and CT data are
interchangeable; all options in the planning system can operate with either. For example, it is
just as simple to display dose over the MR data as over the CT (Figure 9).
Figure 8. Left side a) Post contrast axial with GTV. b) Reconstructed sagittal CT with composite MR derived
GTV. c) Axial CT with coronal and axial MR derived GTVs and composite GTV. d) Post contrast coronal MR
with GTV. The thin white lines show the intersection of the plane of the other image in the same row. Right
side a) MR axial derived GTV. b) MR coronal derived GTV volume, optic nerves, and optic chiasm. c)
Composite GTV from MR axial and MR coronal tumor volumes. Orbits are derived directly from the CT
images.
Figure 9. Dose display using a CT-registered MR dataset. a) Isodose lines. b) Dose "colorwash."
5. SUMMARY
In this presentation, the benefits and process of combining multiple imaging modalities to
enhance the accuracy of conformal radiation therapy have been described. While the imaging
strengths of CT, MR, PET, SPECT, and ultrasound make them important tools in patient
management, each method has its drawbacks. The goal of image fusion is to offset the weaknesses
of one imaging modality with the strengths of another.
Various types of image registration techniques are now being used in the clinic. The objective of
each is to derive a coordinate transformation that maps homologous anatomic points from one
dataset to the other. Each method uses a different measure of success. Surface-based registration
measures success by minimizing the distances between points and surface representations of
anatomic structures extracted from each dataset. Image-based registration measures success
directly from voxel grayscale data in the two datasets. One robust image-based algorithm operates
by maximizing the mutual information content between the two datasets. Interactive techniques
are also in wide use and involve manual manipulations of structure or image orientation and
allow immediate visual feedback and verification of success.
Once a coordinate transformation has been computed between two datasets, structure mapping
and image fusion can be used to visualize the data from one dataset in relation to another.
Structure mapping permits the display of outlines drawn on one dataset over the images of the
other. Image mapping and fusion allows 2-D images from the two datasets to be visualized
adjacent to or superimposed over the other.
The application of these techniques has been demonstrated with a typical clinical example using
CT and MR studies for a patient with a meningioma. While only two imaging modalities were used
in this example, the general steps required to exploit data from any imaging modality for
treatment planning in conformal therapy are the same.
6. REFERENCES
Doyle WK, et al, Differentiation of cerebral radiation necrosis from tumor recurrence by [18F]FDG
and 82Rb positron emission tomography. J Comput Assist Tomogr, 11(4):563-70, 1987.
Fraass BA, et al. Integration of magnetic resonance imaging into radiation therapy treatment
planning: I. Technical considerations. Int J Radiat Oncol Biol Phys, 13:1897-908, 1987.
Fuchs H, Kedem ZM, Uselton SP. Optimal Surface Reconstruction from Planar Contours. Comm
ACM 1977; 10:693-703.
Kessler ML, Pitluck S, Petti P, Castro JR. Integration of multimodality imaging data for
radiotherapy treatment planning. Int J Radiat Oncol Biol Phys, 21:1653-67, 1991.
Khoo V.S., et al., Magnetic resonance imaging (MRI): considerations and applications in
radiotherapy treatment planning, Radiother Oncol,42(1): 1-15; 1997.
Koral KF, Lin S, Fessler JA, Kaminski MS, Wahl RL, Preliminary results from intensity-based
CT-SPECT fusion in I-131 anti-B1 monoclonal-antibody therapy of lymphoma. Cancer, 80(12
Suppl):2538-44, 1997.
Marks LB, et al, The role of three dimensional functional lung imaging in radiation treatment
planning: the functional dose-volume histogram. Int J Radiat Oncol Biol Phys, 30;33(1):65-75,
1995
Meyer CR, et al, Demonstration of accuracy and clinical versatility of mutual information for
automatic multimodality image fusion using affine and thin-plate spline warped geometric
deformations, Med Image Anal, 1(3):195-206, 1997
Narayana V, Roberson PL, Winfield RJ, McLaughlin PW, Impact of ultrasound and computed
tomography prostate volume registration on evaluation of permanent prostate implants, Int J
Radiat Oncol Biol Phys, 39(2):341-6, 1997
Pathak SD, Grimm PD, Chalana V, Kim Y, Pubic arch detection in transrectal ultrasound guided
prostate cancer therapy, IEEE Trans Med Imaging 17(5):762-71, 1998.
Press WH, Teukolsky SA, Vetterling WT, Flannery BP, Numerical Recipes in C: the art of
scientific computing. 2nd ed, Port Chester, NY: Cambridge University Press, 1992.
Rosenman, J.G., et al., Image registration: an essential part of radiation therapy treatment
planning, Int J Radiat Oncol Biol Phys, 40(1): 197-205; 1998.
Schad LR, Loot S, Schmidt F, Sturm V, Lorenz WJ. Correction of spatial distortion in MR imaging:
a prerequisite for accurate stereotaxy. J Comput Assist Tomogr, 11:499-505. 1987.
Studholme C, Hill DL, Hawkes DJ, Automated 3-D registration of MR and CT images of the head, Med Image Anal,
1(2):163-75, 1996.
Ten Haken R.K., et al., A quantitative assessment of the addition of MRI to CT-based, 3-D
treatment planning of brain tumors, Radiother Oncol 25(2):121-133, 1992.
van den Elsen PA, Pol MA, Viergever MA. Medical Image Matching - A review with classification.
IEEE Engineering in Med and Biol, 12:26-39, 1993.
van Herk M., et al., Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT
using chamfer matching, Med Phys, 21(7): 1163-1178; 1994.
Wells WM 3rd, Viola P, Atsumi H, Nakajima S, Kikinis R, Multi-modal volume registration by
maximization of mutual information. Med Image Anal, (1):35-51, 1996.