7314 Updates




Apr 28

There's a typo in AH 9.4(b) (3rd ed): Projection operators
always satisfy P^2 = P (not 1) (why?).  So P_L^2 = P_L and P_R^2 = P_R.

For the Euler-Lagrange equation for Psi_L, to show that it satisfies
partial^2 Psi_L = 0, I'd recommend acting on its E-L equation,
sigma_tilde_mu partial^mu Psi_L = 0, with sigma_nu partial^nu (that is,
with the other sigma).  There's a useful identity for the Pauli matrices,

 { sigma_i, sigma_j } = 2 delta_i_j

similar to the one for gammas, that should help.  This shouldn't be
very involved.
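
(A sketch of the step, with the conventions I'm assuming here being
sigma^mu = (1, sigma_i) and sigma_tilde^mu = (1, -sigma_i); adjust
signs to match AH if they differ.  Acting with sigma_nu partial^nu on
sigma_tilde_mu partial^mu Psi_L = 0 gives

 sigma^nu sigma_tilde^mu partial_nu partial_mu Psi_L = 0 .

Since partial_nu partial_mu is symmetric in mu and nu, only the
symmetric part of the matrix product contributes, and the Pauli
identity above implies

 sigma^nu sigma_tilde^mu + sigma^mu sigma_tilde^nu = 2 g^mu^nu ,

so the left side collapses to partial^2 Psi_L = 0.)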


Apr 13

For the charged scalar current, follow the usual procedure for a
global U(1) symmetry.  You should find it looks similar to the current
in the free case, except A_mu now also appears, which is curious;
it makes the current gauge invariant, as you'll see.  This doesn't happen 
for the fermion current, which doesn't have a derivative in its definition.  
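
(For reference, the usual procedure I mean: for an infinitesimal
internal transformation phi_a -> phi_a + delta phi_a that leaves L
invariant, the Noether current is, up to an overall normalization
convention,

 J^mu = [dL / d(partial_mu phi_a)] delta phi_a ,

summed over the independent fields phi_a, which here are phi and phi^*.)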

To get the Feynman rule for a vertex, I'd recommend picking the 
simplest matrix element you can that gives something nonzero if
you expand the time evolution operator to first order, and compute 
it directly using Wick's theorem.  Then compare it to the vertex you'd
need using Feynman rules.  Most of the rules are general: propagators,
momentum-conserving delta functions, and wavefunctions for external
on-shell particles are the same for every theory.  The only differences
are in the type and form of the vertices, so the vertex rule isn't
hard to identify.  Note that the matrix element for a scattering process
is a Lorentz scalar, so all the Lorentz indices (spinor and vector)
need to be tied up by the vertex.
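
(Schematically, what I have in mind: writing the interaction as an
integral of a Hamiltonian density H_int, compare

 <f| T exp(-i integral d^4x H_int(x)) |i>  ~  <f| 1 - i integral d^4x H_int(x) + ... |i> ,

evaluate the first-order term with Wick's theorem, and whatever is left
after stripping off the external wavefunctions and the momentum-conserving
delta function is the vertex factor.)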


Apr 7

For the problem about minus signs, this should be fairly
simple.  You don't even need to write down an explicit propagator.
What you should pay attention to is the minus signs as you
move around fields before Wick contracting them (after expanding
the time evolution operator to second order).  Under the time-ordering
operator T, you're free to commute scalar fields, and anti-commute
fermion fields, because T ultimately sets the order.  So, for
example, <0| T psi(x) psibar(y) |0> = - <0| T psibar(y) psi(x) |0>.
(The first one is the Feynman propagator.)


Mar 18

For the first problem in which you add a single non-renormalizable
(effective) interaction, I thought I should give a few more
comments about what I have in mind.  This is just a sketch with
some counting of powers of Lambda, not a calculation.  Suppose
you add an interaction like phi^6.  You should assume it comes
with a coupling constant like g^2/Lambda^2, with g of order 1,
based on our class discussion.  The idea is that this adds only
small corrections of order 1/Lambda^2 to what the renormalizable
theory gave.  This is clearly true at tree level; for example, for
2->4 scattering (i.e., for the 6-pt function).  But you might worry 
about what happens when you compute loops: do the extra powers
of Lambda these give in numerators ruin this idea?  Look at
the corrections it gives via loops to 2->2 scattering (the 4-pt 
function) and the propagator (2-pt function).  These do indeed
contribute extra powers of Lambda in the numerators, but you can
show that they only end up making order-1 shifts in the couplings
m and lambda you already have (which is fine; these need to be
fixed by data anyway and so there is no net effect), along with 
additional small corrections suppressed by 1/Lambda^2, as expected.
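
(A rough version of the counting, not a calculation: with a cutoff of
order Lambda, closing two legs of the phi^6 vertex into a loop gives a
correction to the 4-pt function of order (g^2/Lambda^2) x Lambda^2 ~ g^2,
i.e. a shift of lambda rather than something growing with Lambda, with
the remaining pieces of the loop suppressed by powers of 1/Lambda^2.
The 2-pt function works out similarly.)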

I hope that helps.  This isn't meant to be complicated; don't
spend an enormous amount of time on it.


For AH 7.8, note that there are many representations (explicit
forms) for the Dirac gamma matrices.  The only requirement is that
they satisfy the Dirac algebra {gamma_mu, gamma_nu} = 2 g_mu_nu.
Different choices have the same physical content, but the matrices
and spinors differ by a unitary transformation.  The choice you're
asked to use was the one Dirac introduced (now called the Dirac
representation), which emphasized the nonrelativistic limit.  Another 
choice, more useful in HEP (which appears later, in Ch 4), is called the
chiral representation, which is suited to the high-energy limit.  It emphasizes
the chirality (or handedness, or helicity) of the particles.

The main thing is to not mix the spinors and gammas from different
representations.  Any consistent choice will give the right answer.
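
(Concretely: if gamma'^mu = U gamma^mu U^dagger and psi' = U psi for a
unitary U, then the Dirac equation and bilinears like psibar psi take
the same form in either representation.)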


To derive the Hamiltonian for fermion fields and use it to
check the Heisenberg equation, you shouldn't have to write out
any components of the gamma matrices.  This is a simple
exercise, but you want to leave expressions in as simple
a form as possible.  I think the only property of gammas
you'll need is that gamma^0 squared is one.  You can check
the form of your Hamiltonian with the one in Bjorken and Drell
or Peskin (I think).
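
(For reference, hedging on sign and ordering conventions, the standard
free-Dirac result you should be able to match is

 H = integral d^3x psibar ( -i gamma^i partial_i + m ) psi .)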

This should be a very simple exercise - not more than half
a page.


Mar 4

For the radial integral needed for dimensional regularization, I looked 
it up, though I don't think it's too hard to evaluate.  I recall there's
a contour integral way to do it.  There are a couple useful versions:
The most direct is: (in latex)

\int_0^\infty dp \frac{p^{n-1}}{p^2 + m^2} = 
 m^{n-2} \int_0^\infty du \frac{u^{n-1}}{u^2 + 1} = m^{n-2} (\pi/2) \csc(n \pi/2)

another version that's equivalent and as useful uses

 \int_0^\infty du \frac{u^{a-1}}{(u + 1)^{a+b}} = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}

Either should work.
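
If you'd like a quick numerical sanity check of the first formula
(purely optional, and not part of the assignment), a few lines of
Python along these lines work:

  # check  int_0^inf dp p^(n-1)/(p^2 + m^2) = m^(n-2) (pi/2) csc(n pi/2),  for 0 < n < 2
  import numpy as np
  from scipy.integrate import quad

  n, m = 1.3, 2.0                    # any 0 < n < 2 and m > 0 will do
  lhs, _ = quad(lambda p: p**(n - 1) / (p**2 + m**2), 0, np.inf)
  rhs = m**(n - 2) * (np.pi / 2) / np.sin(n * np.pi / 2)
  print(lhs, rhs)                    # the two numbers should agree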
 

Feb 24

Lots of comments from previous years:

1. The cut-off integrals are very simple, but you may use Mathematica for
them and their series if you like.

2. For the regulator exercise, you may find that one of the first
two integrals only has the leading log Lambda term and no
additional 1/Lambda terms; that's ok.  Also, you may be more 
familiar with expansions around zero than infinity; you can just 
replace Lambda with 1/x, and expand x around zero.

3. For the dimen. reg. exercise, you'll need to figure out how Gamma
behaves near various special points.  You can ask Mathematica,
look it up online, find it in most field theory texts, or if you're 
ambitious, you can find an integral representation of Gamma and
work it out yourself.  

4. For the imaginary part of the self energy, you'll need to remind
yourself that log z has a branch cut, usually chosen to be along
the negative real axis.  For certain cases in this problem, you'll
find z along that axis, and the i epsilon prescription will tell
you which side of the branch cut to choose.  (For x > 0,
log(-x + i epsilon) = log(x) + i pi, while log(-x - i epsilon) = log(x) - i pi.)

5. For the two fields near the same point, you can observe that the
expression is a Lorentz scalar, and so must be a function of
epsilon_mu epsilon^mu.  You can then choose any frame you want
to evaluate it, which might help simplify it.  In principle, you 
should compute it separately for timelike and spacelike cases, but 
either one alone is fine for this exercise.  If the main contribution 
comes from large momentum, this can also simplify your integral.
Finally, it might help to give epsilon a small imaginary part to
define the large-k part of the exponent.

6. (Some more, somewhat redundant, comments about fields at a point from previous course:)
For the problem where you extract the short-distance singular behavior
of the vev of two fields at nearly the same point, the dominant contribution
comes from large momentum.  So in estimating the leading behavior,
you may ignore the mass, and use E(p) approx= |p|.  This should
make the integrals much simpler.  Also, when I computed this, I treated
timelike and spacelike eps^2's separately, taking advantage of a
special frame in each case (since it's a scalar fn of eps).  This
may be useful, depending on how you set your calculation up.

Also, you may end up with an integral that oscillates wildly at the
endpoints.  You can make sense of these types of integrals (which show
up all over in Fourier analysis, delta functions, field theory, ...)
by giving the integrand an exponential damping.  You can do this to
damp out a region of the integral that doesn't contribute anyway because
it oscillates to zero.  Ask me if this doesn't help.
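
(The standard example of the damping trick: integral_0^infinity dk e^{i k x}
oscillates at the upper limit, but

 integral_0^infinity dk e^{i k x - eta k} = i/(x + i eta)  ->  i/x   as eta -> 0+ ,

so the damping just discards the part of the integral that was
oscillating to zero anyway.)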

(Aside: the two-dimensional integrals you're looking at in the current
homework have purely spatial dimensions.  Just for your information, 
I'll mention that if all dimensions are spatial, with a unit-matrix norm, 
the space is referred to as Euclidean.  If one dimension is time-like, with 
a relative minus sign in the norm, it's Minkowskian.  Often in field theory 
it's useful to analytically continue t to i t, which turns a Minkowskian space 
into a Euclidean one.  This isn't important for this homework; I just thought 
it was useful to mention.)

Finally, for the dimensionally regulated integral, it's important to expand 
in epsilon wherever it appears.


Feb 11

For the problem where you consider the g^4 contribution to the self-energy,
I have in mind the ABC theory again.  One way to approach this is to
work backwards; start with the self-energy in the denominator, and
expand it consistently in g.
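
(Schematically, and hedging on the sign convention for Sigma: if the
corrected propagator is 1/(p^2 - m^2 - Sigma) with
Sigma = g^2 Sigma_1 + g^4 Sigma_2 + ..., then expanding consistently,

 1/(p^2 - m^2 - Sigma) = 1/(p^2 - m^2) + g^2 Sigma_1/(p^2 - m^2)^2
   + g^4 [ Sigma_2/(p^2 - m^2)^2 + Sigma_1^2/(p^2 - m^2)^3 ] + O(g^6) ,

and each term at order g^4 should correspond to a set of diagrams.)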

Feb 5

For the n-point Green's function, the main new ingredient for Feynman
rules that you should discover has to do with the cancellation of
diagrams with vacuum bubbles; that is, diagrams where particles appear
and disappear into the vacuum, without affecting the particles coming
in or out.  These processes represent corrections to the ground state
itself, and you should find that they cancel when the matrix element
is properly normalized by dividing out the vacuum inner product.

This result (which you'll find to only first order) is general, not
hard to prove, and greatly simplifies higher-order calculations.

Most of the rest should look the same as for the ABC model except for
the vertex.  The rule for it should be similar, but the vertex will
involve a different number of particles, and only one type.

For the n-point Green's function, I should point out that this is
not explicitly a scattering calculation, though you can turn it into
one via the LSZ reduction.  So you should keep all the contractions,
including the leading ones at order lambda^0, including the leading 1 in
the denominator.  You'll need these to see how the denominator
removes the vacuum bubbles to order lambda.
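
(In other words, you're computing something like

 G(x_1,...,x_n) = <0| T phi(x_1)...phi(x_n) exp(-i integral d^4x H_I) |0>
                   / <0| T exp(-i integral d^4x H_I) |0> ,

expanded to order lambda in both numerator and denominator; the
order-lambda bubble term in the numerator is just (leading term) x
(order-lambda bubble), which is exactly what the expanded denominator
removes.)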

I should also mention that the process of putting your result into
momentum space just means Fourier transforming in x_1 ... x_4 at the end.
All it does is peel off factors like integral d^4x exp(-i k x)
left over from the propagators, and gives you a simpler momentum-space
expression.  It should be very simple to implement; I would save it 
until the end. 



Jan 22

For the differential cross section problem, think in terms of
cylindrical coordinates along the beam axis (z), then convert
pz to rapidity.  The p_perp^2 in the expression is just the square of
the usual 2d radius.  You can start with the generic cross section 
expression for colliding beams, choose one of the sums 
over final states (that is, the phase space integral for one of the 
final particles), and convert it to the new variables p_perp^2, y, and
the cylindrical angle theta.  Everything left in the integrand along
with the Jacobian will be the diff cross section.  
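
(A fact you'll likely want along the way: at fixed p_perp, dy = dp_z/E,
so the invariant phase-space element becomes

 d^3p / E = d^2p_perp dp_z / E = (1/2) dp_perp^2 dtheta dy ,

with theta the cylindrical angle.)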


Dec 6

In the current homework, note that the matrix element involves
two identical particles.  This will make the matrix element slightly
different from what we're doing in lecture in the ABC model.  We'll
also discuss in class a modification we'll need to make in the final 
state normalization; don't worry about that here.

More specifically, if you have two factors of the field phi_B,
these don't operate in separate spaces, so you can't factor the
full matrix element into the product of three one-field matrix
elements.  To do this properly, please express the states
and operators in terms of creation and destruction operators,
and use commutators to get the matrix element by moving
destruction operators right, creation left, as we discussed
in class.  
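
(A toy version of the manipulation, with whatever state normalization
we're using in class: <0| a_p a_q a_k^dagger a_l^dagger |0> picks up two
terms, roughly delta(p-k) delta(q-l) + delta(p-l) delta(q-k), from moving
the destruction operators to the right.  The matrix element with two
identical particles works the same way, which is why it doesn't factor
into single-field pieces.)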


Nov 23

This one should be fairly straightforward.  As always, ask if you
have questions.


Nov 13

For the string, you'll want to consider a field expanded in
terms of solutions satisfying periodic boundary conditions.
This means the momenta will be discrete, and the field will
be a sum over solutions rather than an integral.  The necessary
orthogonality and completeness relations are included on
the second page.

It also means that the label for the creation and destruction
operators (which are the coefficients in this expansion) will
be discrete.  I recommend then using the usual commutation
relations, with [a_k, a_l^\dagger] = delta_{k,l} for example,
just as in quantum mechanics.  It will make the interpretation
simplest.  The Hamiltonian and momentum operators will be
particularly simple.

To get this requires having just the right overall constant in the 
expansion for the field, or you'll get unwanted constants in your
commutation relation.  I'd recommend working completely backwards.
Assume the above creation/destruction commutation relation, and
leave the normalization in the field sum undetermined.  Then get
the normalization right by requiring the proper commutator
between pi(x) and phi(y); that is, the usual -i delta(x-y).
(Note that the argument x of the field is still continuous, so
the commutator for the fields is still a Dirac delta, even though
it's a Kronecker delta for the creation and destruction ops.)
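
(So the structure I have in mind is something like

 phi(x, t) = sum_n N_n [ a_n e^{i(k_n x - omega_n t)} + a_n^dagger e^{-i(k_n x - omega_n t)} ] ,

with the constants N_n left undetermined; then requiring
[pi(x,t), phi(y,t)] = -i delta(x-y), together with the completeness
relation on the second page, fixes N_n.  This is just a sketch of the
setup; adjust it to whatever form of the mode expansion you're using.)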

Ask if this isn't clear.

One other point that seems to cause some confusion.  The energy,
omega_n, in the plane-wave solutions is chosen to be positive, even
if the momentum k_n is negative.  This is a convention, but it's hard
to interpret your results if you don't use it, and it won't look
like what we did in lecture.


Nov 6

The J3, K3 problem is intended to be very simple.  For the first
part, I'd like you to simply apply the general form for these generators
that we worked out in lecture to the particular case of the lambda phi^4
theory, so you can see what they look like.  The only thing interesting
in this problem is resolving the paradox, which also doesn't require a
calculation.  You just need to know what the Heisenberg equations look
like for the general case where an operator has an explicit time dependence.


Nov 3

For the internal symmetry problem, you'll want to think about
how you'd make a scalar out of the components of a three-vector.
(It's an internal space in this case, but the form will be
the same.) I didn't mention it, but you should restrict your 
Lagrangian to integer powers of the fields.  So you can have 
phi^2, but not phi^(1/2), for example.  And finally, only allow
renormalizable terms.  You should find that there are only
a few terms.

For the transformation of the conjugate momentum field pi, it 
might help to think about how pi is defined in terms of L and 
the field phi, and use the chain rule.  You can work out an 
arbitrary finite transformation first, then think about how that
changes the infinitesimal transformation, which is what we want.

For the problem on the conservation of the boost generator, it 
might help to review how the Poisson bracket equations (in classical 
mechanics), or the Heisenberg equations (in quantum mechanics) are 
modified when a function or operator depends on time not only implicitly 
through the coordinates and momenta q(t) and p(t), but also has an
extra explicit t dependence.
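
(For reference, the equations I mean: with explicit time dependence,

 dF/dt = {F, H} + partial F/partial t        (Poisson brackets)
 dO/dt = i [H, O] + partial O/partial t      (Heisenberg picture, hbar = 1)

with the sign conventions I'm used to; check them against the ones from
lecture.)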

It's not necessary to actually compute the commutator of the boost 
generators with H to answer this, but it's great if you'd
like to try that to see that it all works out.


Oct 25

For the Maxwell equation problem, just a reminder that when you
have repeated indices, so that these are summed over, you can't
use those indices elsewhere; they've already been used up.
So when you compute d/d(d_mu A_nu) of L, you should use
different indices for F other than mu and nu; for example
F_alpha_beta F^alpha^beta.

Also, d(d_alpha A_beta)/d(d_mu A_nu) is zero unless alpha and mu
match, and beta and nu match.  So this is delta_alpha^mu delta_beta^nu.
You also recall that delta_alpha^mu = g_alpha^mu.  That is, the identity
matrix is equal to the metric but with one lower and one upper index.
(Note that the derivative with respect to a lower index produces
an upper index.)

A last comment about Maxwell's equations: the E-L equations
give the two equations that involve J.  The other two
are simply constraints that follow from the antisymmetry
of F_mu_nu.  One way to write them is to use the identity
that epsilon^{mu nu alpha beta} d_nu F_{alpha beta} = 0,
then consider mu = 0, and mu=i separately.  If you use this identity,
first explain why it's correct.


For the commutator problem, one commutator you'll need to compute is
between a spatial derivative of the field phi and its conjugate
momentum.  Remember that in field theory, the fields are the operators, 
not derivatives, which can be pulled out of commutators.  If it's not
clear why this works, you could try defining the derivative as a finite 
difference, compute the commutator, and then take the difference to zero.
You should end up with the spatial derivative of a delta function,
which you can evaluate by partial integration, as usual.
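
(The kind of step I mean, schematically:

 [partial_i phi(x), pi(y)] = partial_i^x [phi(x), pi(y)] = i partial_i^x delta^3(x - y) ,

and inside the integral over x, integration by parts moves the
derivative onto whatever multiplies the delta function, up to a sign.)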

I should also mention that the commutation relations we derived
in lecture are only true for fields at the same time.  However,
the Hamiltonian is conserved; that is, independent of time.  So
you can set the fields inside the H integral to be any time you
want; in particular, equal to the time of the field you're computing
the commutator with.


Oct 19

d^4x is the infinitesimal volume element in four-dimensional integrals.  
(It's like d^3x when you integrate over all space, but now including time.)  
If you think of a Lorentz transformation as a change of variables, you can 
think of what happens to integrals under such a change.  Recall that you'll 
need a Jacobian to account for possible changes in volume from the variable 
change.  Here the Jacobian is | det[dx^m'/dx^n]|, which is very simply 
related to the Lorentz transformation matrix.

It will help to first know what the determinant of Lambda is.  You can learn
a lot by considering how we're defining Lambda in terms of g, and these
useful facts about determinants:

det(A B) = det(A) det(B)

det(A^(-1)) = 1/det(A) (where A^(-1) is A inverse).  This follows
immediately from the identity above.

det(A^T) = det(A)  (where A^T is A transpose, with [A^T]ij = [A]ji )

(Just as we'll do for the metric, it's useful to turn this exercise
around and define a proper Lorentz transformation as any transformation 
that leaves both g and epsilon invariant.  An improper transformation, 
such as parity and time reversal, leaves only g invariant, but changes 
epsilon by a sign.)


Oct 7

For the spin expectation value, some half-angle identities 
might help:

cos(x) = cos^2(x/2) - sin^2(x/2)
sin(x) = 2 cos(x/2) sin(x/2)

To show that the contravariant version of the metric is invariant, one
way would be to show that the Lorentz-transformed version satisfies exactly
the same equation as the untransformed version.  There are probably a lot
of other ways.

For time ordering, there are several ways to solve this; none
of them are complicated.  The way I did it was to first consider 
the size of Lambda^0_0.  (Since we want a general property, it's 
easiest to refer to the defining eqn for Lambda; that is, that it 
leaves g_mu_nu invariant, and consider mu = nu = 0.  But you could 
also just look at the explicit form of Lambda, I think.)  Then I 
considered boosting from the proper time frame.  Another method 
involves simply considering the invariance of x^2.

Incidentally, this property is essential for showing that special 
relativity is consistent with causality.  Note that it doesn't hold 
for spacelike separations.


Sep 26

For problem 4, where you show that psibar psi is invariant under
rotations and boosts, the relevant rotation and boost transformations
in the 3rd edition are eqs 4.83 and 4.90.  For the 4th edition, they
are 4.28 and 4.49.  If you end up looking at both editions, note
that at that point in the 3rd edition text, they're using the Dirac
representation for the Dirac matrices, while in the 4th edition they've
already switched over to what's called the chiral representation, which
is more useful for high energies.  Solving the problem, though, doesn't
require explicit use of their form in either case, just some very
general properties, and should only take a few lines.

For the boost and rotation properties of spinors, you'll be reading
ahead of what we're covering in lectures, but absorb as much as possible,
and ask questions.  It's important material.


Sep 16

For the Galilean invariance question, it's very straightforward
if you apply the transformation directly to the equations of motion.
If you apply it to the Lagrangian (as the question asks), you'll find
there's a slight subtlety, which recurs in field theory.  Recall that
the action S is the time integral of the Lagrangian L.  Not only are
constant terms in L irrelevant, but terms which are total time derivatives
also don't matter.  (These would give boundary terms after integration, 
which we usually assume vanish.) 

The discussion in Appx G, from eqs G.1 to G.9 might be useful
to answer part (i) of the gravity question.  You can imagine
solving Poisson's equation in dimensions other than three.
I found it very helpful to think in terms of Gauss' Law and lines of 
force, which applies to solutions of Poisson's equation in any number 
of dimensions.  Those tricks you learned in E&M apply here, also.

For the last part of AH 2.6 on gravity in extra dimensions, after you've
thought about it for a while, you might want to take a look at the PDG
web site, which has a review on the subject.  It includes a discussion of
experimental searches.


Sep 8

The authors have a list of corrections for the third edition (Volume I and Volume II),
as well as for the fourth edition (Volume I and Volume II).
It might be a good idea to get in the habit of checking these and noting them 
in your book before starting to read a new chapter or starting a set of problems 
to save yourself some confusion and pain.

For some reason, even though we're working in natural units with
hbar = c = 1, AH sometimes revert to including them in the problems.
Fortunately, you don't have to (and I'd prefer you didn't).  I think
they've done this less in the 4th edition.

For 2.2, recall that in QM, elastic scattering means energy is conserved,
while inelastic means energy is lost.  That's only possible because the QM
description is an approximation: the missing energy goes into degrees of
freedom that aren't being tracked.  In particle physics, we can't lose
energy.  In that case, elastic means the
same types of particles come out as went in.  Inelastic means some of the initial
energy goes into changing one or more particles.  This can result in one particle
being excited, so its internal energy and therefore its mass increases,
or it can mean a particle breaks up into constituents if it's a bound
state, or it can turn into one or more other particles.

Quasi-elastic (I think) means that the electron scatters elastically off one
of the nucleons inside the nucleus, almost as if it's a free particle, as if none
of the other nucleons are there.


Aug 18 

Class information and some suggestions for homework will appear here.

