May 5

I've returned HW 8 and posted solutions.  I hope your EM final went well.

May 2

I've now posted lectures through the end of the course.  There is
a lot more in there for Ch 7 than we'll have time to cover, including
derivations of the partition function for the three cases of quantum
ideal gases we've considered.  Much of the material is from Reif,
ch 7 and ch 9, which covers these topics in more detail.

Apr 30

I've returned hw 7, and posted solutions.  I'll have a few more
problems posted shortly to finish during finals week.

Apr 29

As I understand it, the essential point of Bose condensation is that you
have a macroscopic number of particles (something comparable to N) in a single
quantum state, the ground state.  The main point of the discussion in the text
(I think) is that you can see this happening when the continuum approximation
he's made to fix mu via N fails: the energy discretization is essential there,
and continuous momentum states are a poor approximation.  It seems a lot less
mysterious if you just keep everything discrete and don't try to do an integral.
The distributions we worked out for fermions, bosons and photons are perfectly
fine (in fact were designed) for discrete energy levels.

Bose condensation requires the average occupation n0 of the ground state to be
macroscopic and much larger than n1 in the first excited state.  You can see the essential
role mu plays for this to occur.
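If it helps to see this concretely, here's a short Python sketch (the evenly spaced levels, N, and kT are my own toy choices, not the actual spectrum of a gas in a box).  It keeps everything discrete, fixes mu by bisection so the occupations sum to N, and you can check that n0 comes out comparable to N while n1 stays of order one:

```python
import math

def bose_occupations(N, kT, eps=1.0, M=100):
    """Average occupations <n_s> for bosons on discrete levels
    e_s = s*eps, s = 0..M, with mu fixed by bisection so the
    occupations sum to N.  (Toy evenly spaced spectrum.)"""
    levels = [s * eps for s in range(M + 1)]
    def total(mu):
        return sum(1.0 / (math.exp((e - mu) / kT) - 1.0) for e in levels)
    # mu must stay below the ground-state energy (0 here), and
    # total(mu) grows monotonically as mu -> 0 from below.
    lo, hi = -50.0, -1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if total(mid) < N:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return [1.0 / (math.exp((e - mu) / kT) - 1.0) for e in levels]

n = bose_occupations(N=1000, kT=0.5)
# n[0] is comparable to N (macroscopic); n[1] is of order one.
```

The point to notice is how close mu has to sit below the ground-state energy to make n0 macroscopic.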

Apr 28

I should mention that when in lecture I compute the various averages
for n_s for bosons and fermions (following Reif), I'm considering the
case where N and T are fixed, but derive an approximate expression
good for large N.  So in this case the Canonical Distr applies,
and the chem potential mu appears as a trick.  In problem 1(b), you
won't have to make this approximation, since there are so few
particles; you can just enumerate.  In prob 1(c), N is allowed
to fluctuate, so the Grand Canon Distr applies, and you fix mu to
get the average N right.

For large N, the approach in Reif for N fixed and the Grand Canon
Distr give similar results, because for large systems N fluctuates
so little that it might as well be fixed.

Apr 27

Ch 7 lecture notes are up to date, plus some.

Apr 17

I've posted the remaining lecture notes through the end of Ch 6.

Apr 17

I've got set 5 graded and solutions posted.  I've written up
5.5 and 5.12 in some detail; they might be worth looking at.

Apr 11

transformations; they don't seem completely correct to me.  I've
posted a paper that gives a more general treatment on our lectures
web page.

Apr 11

I'll likely make the current hw set due next Monday (rather
than this Thursday).

For the rubber band problem, there are a couple ways to get Z
in closed form.  When summing over states, note that many states
have the same energy; it's useful to combine these as exp(-beta*E)
times the degeneracy, which should be very familiar by now.
In this form there's a clever way to sum the series.

It's also possible to consider Z for one link at a time. (Some
students solved it this way for the qual; I didn't catch this
myself.)
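Here's a small Python check of the two routes, for a toy model of N independent two-state links with energies 0 and eps (my assumption for illustration; the actual problem's states may differ).  The degeneracy sum collapses, via the binomial theorem, to the one-link-at-a-time product:

```python
import math

# Toy model (an assumption for illustration): N independent links,
# each contributing energy 0 or eps.
N, eps, beta = 12, 1.0, 0.7

# Group states by energy: C(N, m) states have energy m*eps.
Z_degeneracy = sum(math.comb(N, m) * math.exp(-beta * m * eps)
                   for m in range(N + 1))

# One link at a time: Z factorizes into a per-link sum, raised to N.
Z_per_link = (1.0 + math.exp(-beta * eps)) ** N
```

Both give the same Z; the clever series-summing trick is the binomial theorem in disguise.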

Apr 8

For an extensive quantity A which is a function of other extensive
quantities B, C, D ..., what happens to A if you rescale B, C, D, ..
all by the same constant?  (For example, double each?)
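As a concrete check, here's the Sackur-Tetrode entropy with all physical constants set to 1 (just to exhibit the scaling): doubling E, V, and N doubles S, i.e. S is homogeneous of degree one in its extensive arguments.

```python
import math

def S(E, V, N):
    # Sackur-Tetrode entropy with all physical constants set to 1;
    # only the scaling with (E, V, N) matters for this check.
    return N * (math.log((V / N) * (E / N) ** 1.5) + 2.5)

# S(2E, 2V, 2N) = 2 * S(E, V, N): entropy is extensive, i.e.
# homogeneous of degree one in its extensive arguments.
doubled = S(6.0, 10.0, 14.0)
single = S(3.0, 5.0, 7.0)
```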

Apr 6

For Sethna's Lagrange multiplier problem, what you can show is
that the distribution is of the form Pi = a*exp(-b Ei),
where a and b are determined by the two constraints.  In particular,
you can define b to be beta = 1/kT, and its value is whatever is needed
to get the average energy right.  What you won't be able to do in general
is solve explicitly for b (or beta or T) as a function of ave E; that's
too difficult.
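In case it's useful, here's a numerical illustration in Python for a made-up four-level spectrum (the levels and target energy are my own toy numbers): the form a*exp(-b Ei) is fixed, and b is tuned by bisection until the average energy comes out right, which is exactly what "its value is whatever is needed" means in practice.

```python
import math

levels = [0.0, 1.0, 2.0, 3.0]   # toy spectrum (an assumption)
E_target = 1.0                   # desired average energy

def avg_E(b):
    weights = [math.exp(-b * e) for e in levels]
    Z = sum(weights)
    return sum(w * e for w, e in zip(weights, levels)) / Z

# avg_E decreases monotonically in b; solve avg_E(b) = E_target
# by bisection (no closed form for b in general).
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if avg_E(mid) > E_target:
        lo = mid
    else:
        hi = mid
beta = 0.5 * (lo + hi)
```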

If you're not familiar with Lagrange multipliers, you can look in
Reif, or any mechanics text, or you can ask me.

Apr 5

For the flying brick problem, you'll have an integral to evaluate
over momentum for the last part.  The normalization is just a Gaussian,
but for the probability itself you'll need, roughly (in mathematica
notation here):
Integrate[Exp[- a p^2], {p, p0, Infinity}]
which gives
Sqrt[Pi/a] (1/2) Erfc[Sqrt[a] p0]
It's the complementary error function.  Error functions appear when
you integrate Gaussians over a limited range.  You can leave your
answer in the form of Erf or Erfc, though it would be helpful if you
plotted or sketched this function so you'd see that it's giving a
reasonable answer, or even better, give its approximate form when
the argument is large.
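You can confirm the Erfc form numerically; here's a quick Python check with illustrative values of a and p0, comparing a brute-force trapezoid integral to the math module's erfc:

```python
import math

a, p0 = 2.0, 0.8

# Closed form: Integral of exp(-a p^2) from p0 to infinity
closed = math.sqrt(math.pi / a) * 0.5 * math.erfc(math.sqrt(a) * p0)

# Brute force: trapezoid rule on [p0, p0 + 10/sqrt(a)];
# the tail beyond the cutoff is utterly negligible.
n = 100000
hi = p0 + 10.0 / math.sqrt(a)
h = (hi - p0) / n
brute = 0.5 * h * (math.exp(-a * p0 * p0) + math.exp(-a * hi * hi))
brute += h * sum(math.exp(-a * (p0 + i * h) ** 2) for i in range(1, n))
```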

Apr 4

I've updated the lectures into Ch 6, and replaced Ch 5 with
a better copy.

For the brick problem, recall that
1. our fundamental postulate relates probabilities to numbers of states
which is related to entropy
2. you can track changes in entropy by heat
3. you only need relative probabilities; you can get the
overall constant by normalization

Apr 2

I've graded and returned set 4 and posted solutions (finally).

Mar 31

I won't be around much today, but will try to check email for
questions.  I wanted to point out for the first problem from Ch4,
I think by near and far, Sethna just means in the same cell or
a different cell, so it's just a counting-cells problem (simple).
You'll find it very rare to end up back in the same cell, so it's
hard to test ergodicity.

Mar 29

I've updated the posted lectures through Ch 5.

Mar 22

Just to be clear, for the second spin problem (with plotting), don't
use your Gaussian approximation from the first problem, which is only
good around E=0. Go back to the previous expression.  The log/hyperbolic
functions below should help simplify expressions such as E(T).

Mar 15

For the Maxwell relation problem, I'm pretty sure Sethna has
in mind deriving the other two obvious relations involving
second derivatives of E(S,V,N).  But if you were to define
some new functions, such as the Free Energy F(T,V,N) = E - T S,
and take second derivatives, you can derive a very large
number of other relations.  Some of them are surprising.
They're often useful, because they can relate something
obscure but theoretically useful, like changes in S, to
something very easy to get at in an experiment (T, p or V).

For example: (ignoring N here)

(dS/dp)_T = -(dV/dT)_p

which says, surprisingly, that one can determine the rate at which
the log of the number of available states changes with pressure
but T fixed by measuring the thermal expansion (how does the volume
increase as you heat something up) at fixed pressure.
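You can verify this relation numerically for the ideal gas; here's a Python sketch with N*k set to 1 (the additive constant in S drops out of the derivatives), using central finite differences:

```python
import math

def S(T, p):
    # ideal-gas entropy up to an additive constant, with N*k = 1
    return 2.5 * math.log(T) - math.log(p)

def V(T, p):
    return T / p   # ideal gas law, N*k = 1

T0, p0, h = 300.0, 2.0, 1e-5
dS_dp_at_T = (S(T0, p0 + h) - S(T0, p0 - h)) / (2 * h)
dV_dT_at_p = (V(T0 + h, p0) - V(T0 - h, p0)) / (2 * h)
# Maxwell: (dS/dp)_T = -(dV/dT)_p; both sides equal -1/p here.
```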

Mar 15

I've graded set 3 and left it in your box and posted some solutions.

Mar 11

There are some simple relations that connect inverse hyperbolic
functions to logs.  I'll list a few below.  These might help in
simplifying an expression in one of the spin problems.

asinh(x) = sinh^(-1)(x) = log( x + (x^2 + 1)^(1/2) )
acosh(x) = log( x +- (x^2 - 1)^(1/2) )   (+ gives the principal branch)
atanh(x) = 1/2 log(|(1+x)/(1-x)|)
acoth(x) = 1/2 log(|(x+1)/(x-1)|)
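If you want to double-check these (or your own simplifications), Python's math module has the inverse hyperbolic functions built in, so the log forms can be tested directly:

```python
import math

# The log forms of the inverse hyperbolic functions listed above.
def asinh_log(x): return math.log(x + math.sqrt(x * x + 1))
def acosh_log(x): return math.log(x + math.sqrt(x * x - 1))  # principal branch
def atanh_log(x): return 0.5 * math.log(abs((1 + x) / (1 - x)))
def acoth_log(x): return 0.5 * math.log(abs((x + 1) / (x - 1)))
```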

You can save yourself a great deal of work in the spin problem
where you combine two systems by writing Omega_total = Omega*Omega'
explicitly as a function of E.  You should find, by completing
the square in the exponent, that the product of two Gaussians
is also a Gaussian, which is a very good thing.
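Here's a quick numerical way to convince yourself (with made-up widths and centers, just for illustration): if the product is Gaussian, its log is quadratic in E, so the second finite difference of the log is the same constant no matter where you evaluate it.

```python
# Two Gaussians in E, with E' = E_tot - E fixed by energy conservation.
# The widths (a, b), centers (E1, E2), and E_tot are illustrative numbers.
def log_product(E, E_tot=10.0, a=0.5, b=0.8, E1=3.0, E2=6.0):
    # Omega(E) ~ exp(-a (E - E1)^2), Omega'(E') ~ exp(-b (E' - E2)^2)
    return -a * (E - E1) ** 2 - b * (E_tot - E - E2) ** 2

# Quadratic in E <=> constant second finite difference:
h = 0.1
d2 = [log_product(E + h) - 2 * log_product(E) + log_product(E - h)
      for E in (0.0, 2.0, 7.0)]
```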

Also, for that problem, you're asked for the relation between
E and E' at the peak.  There are really two relations:
the first (obvious) one is from energy conservation; the
second is from the equilibrium condition we discussed in
lecture.  Give the second one.  (Knowing both relations will
let you solve for them explicitly.)

Mar 11

The posted lectures are now up to date through Ch 4.

I'll likely have some suggestions for the new spin problem
posted later today.

Mar 8

For the first spin problem, the counting of states is a lot like
what you'd need to decide the number of states where you have n_r
particles on the right side of a box divided in half, and n_l on the
left.  Also, when considering the number of states within a range
E to E + \delta E, you'll have a sum.  But if \delta E is much less
than E, you can simplify the sum into (a single term at E)x(number
of states in the range E to E + \delta E).  Most of the rest is just
Stirling's approximation.

More detail:
To get the Gaussian approximation, you'll need an expansion in
log Omega(E) out to second order in E around 0, but you'll also want to
simplify in 1/N, keeping only terms which don't go to zero as N
becomes infinite.  But to get this done consistently, I don't think
you'll want to assume E/N is small in that limit, or else you'll lose
the Taylor series in E.  You'll see that the Gaussian does allow for
E to be comparable to sqrt(N).
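You can test the resulting Gaussian against exact counting in Python (here for N spins with magnetization m, so n_up = (N + m)/2; math.comb does the exact count, and the prefactor comes from Stirling):

```python
import math

N = 1000

def omega_exact(m):
    # exact number of spin states with magnetization m (N + m even)
    return math.comb(N, (N + m) // 2)

def omega_gauss(m):
    # Gaussian approximation from Stirling, good for m << N
    return 2.0 ** N * math.sqrt(2.0 / (math.pi * N)) * math.exp(-m * m / (2.0 * N))

# For m of order sqrt(N), the two agree to a fraction of a percent.
ratio = omega_exact(30) / omega_gauss(30)
```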

Mar 1

If anyone wants to try doing fourier transforms with python, I've
made a first stab in fourtrans.py.
It's crude and in need of debugging, but you're welcome to use and
improve it.  It's got a transform by brute force integration, as well
as an FFT (fast fourier transform), which is the right way to do it.
(I'm sure Mathematica would be much simpler, but I'm trying to learn
python.)
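For anyone curious what the FFT buys you, here's a self-contained pure-Python sketch of both (not the code in fourtrans.py, just an illustration): a brute-force O(N^2) transform and a radix-2 Cooley-Tukey FFT, which agree to rounding error.

```python
import cmath
import random

def dft_brute(x):
    """Brute-force O(N^2) discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddle[k] for k in range(N // 2)] +
            [even[k] - twiddle[k] for k in range(N // 2)])

random.seed(0)
signal = [random.random() for _ in range(16)]
```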

I've updated my lectures for Ch 3.

For the pressure problem, you'll know you've got the analytic parts
put together properly if you can reproduce the ideal gas law (which
is impressive, I think).

For the Gaussian/Fourier Appx problem, I haven't been able to find any
related software on the textbook site.  Fortunately, the computing
part of this problem is trivial, and you shouldn't have any trouble
plotting a Gaussian or taking its Fourier transform in Mathematica.
I'll locate, post and test out some python code for a transform
as soon as I can.

Feb 27

For the last part of the (x(1-x))^N problem, I'd like you to estimate
the range of validity of the approximations in (x - 1/2).  (You know
that any Taylor series will fail if you go too far away from the
point of expansion.)

The answer will depend on N; you should find in the first case that
the range becomes narrow as N increases, but not in the second.  You
can use a standard tool for estimating a series when you only have a
few elements: you compare successive terms and use the idea
that each term should be smaller than the previous one; each
term is supposed to be an increasingly smaller refinement.
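Here's one way the term comparison can look in code, under my reading of the problem (treat the setup as an assumption): with u = x - 1/2, x(1-x) = 1/4 - u^2, and you can expand either (1 - 4u^2)^N directly, where the successive-term ratio starts out around 4Nu^2, or its log, N log(1 - 4u^2), where the term ratio approaches 4u^2 independent of N.

```python
def ratio_direct(u, N):
    # Successive-term ratio for (1 - 4u^2)^N = sum_k C(N,k) (-4u^2)^k,
    # evaluated at the first pair of terms: |t1/t0| = 4 N u^2.
    return 4 * N * u * u

def ratio_exponent(u, N):
    # Successive-term ratio for N*log(1 - 4u^2) = -N * sum_k (4u^2)^k / k
    # approaches 4u^2, independent of N.
    return 4 * u * u

def range_estimate(ratio, N):
    # Largest u for which the next term is still smaller than the
    # previous one: bisect ratio(u, N) = 1 on u in (0, 10).
    lo, hi = 0.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ratio(mid, N) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The first estimate shrinks like 1/sqrt(N) as N grows; the second stays put at u = 1/2, which is the pattern described above.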

Feb 27

I've been trying to run the software from Sethna's site, without
success as yet.  Don't waste your time on the simulation parts of
the pressure exercise 3.4 until/unless I get that working.  I'll
let you know if I can get it functioning, and will set up
at least one linux pc where it's working if so.  I would still
like you to do the analytic part of the problem, though.

Feb 17

Incidentally, for anyone using python/matplotlib, please don't
hesitate to ask for suggestions for functions to use or for me
to look over your code.  I just sorted this out myself, and am
happy to save you some time.

I'll be gone after about 2:30pm today, but accessible by email.

Feb 13

If you'd like to use python, you'll probably need a few additional
packages that go with it: numpy, for numerical work, including
generating random numbers, and matplotlib, which seems like a very
good plotting package that's patterned after matlab.  Like python,
they're open source and freely available.

I tried matplotlib for the first time today, and was able to learn
enough to get through problem 2.5 in just a few hours.  (If I can, no
doubt you can, but faster.)

If you prefer, we also have Mathematica on nearly all the dept machines,
and I've posted a short tutorial under references.  It comes with
built in plotting routines as well as the numerical routines you'd need
for this problem.

As I mentioned in class, please let me know if I can help get you
set up and started with some type of math software.  Whatever you
choose, you'll find it extremely useful, as you finish classes and
start research, to be able to perform simple calculations and generate
plots quickly; it's a good investment.

For Sethna 2.5(b), I think 10,000 is a typo. 1000 walks
for N=1 and 10 are sufficient.  It will help to plot with
a different color if you can.
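If you're setting this up in Python, the core of the walk generation is only a few lines; I've assumed uniform steps on [-1/2, 1/2] here (adjust if your reading of the problem differs).  The RMS endpoint should scale like sqrt(N):

```python
import math
import random

random.seed(42)

def endpoint(N):
    # one random walk of N uniform steps on [-1/2, 1/2]
    return sum(random.uniform(-0.5, 0.5) for _ in range(N))

N, walks = 10, 1000
ends = [endpoint(N) for _ in range(walks)]
rms = math.sqrt(sum(e * e for e in ends) / walks)
expected = math.sqrt(N / 12.0)   # each step has variance 1/12
```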

Feb 7

If you're still working on the Maxwell distribution, note that what you're
really trying to do is convert from cartesian to spherical coordinates (in v).
What I often find useful in cases like this is to write the normalization
integral and change variables; it guarantees you'll get the factor from
the Jacobian (ie from the measure) included properly.  Note the curious
fact that, although the cartesian distribution is peaked at v = 0,
there's zero probability of having v = 0, due to the v^2 in front.  It's
due to the measure.
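A quick sampling experiment in Python makes that point vivid: draw the cartesian components from unit Gaussians (the unit variance is just a convenient choice of units), and speeds near zero almost never occur, even though each component's distribution peaks at 0.

```python
import math
import random

random.seed(1)
samples = 20000
speeds = []
for _ in range(samples):
    # cartesian components, each peaked at zero
    vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
    speeds.append(math.sqrt(vx * vx + vy * vy + vz * vz))

mean_speed = sum(speeds) / samples
# For unit-variance components the Maxwell mean speed is 2*sqrt(2/pi).
expected_mean = 2.0 * math.sqrt(2.0 / math.pi)

# The v^2 measure suppresses small speeds: hardly any samples
# land below v = 0.1.
small = sum(1 for v in speeds if v < 0.1) / samples
```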

Feb 3

We're closed again tomorrow (Friday).  What an odd week.  Let's move
the due date to Monday.

How's everyone doing?

Feb 3

I'll be around for perhaps an hour or two from 4:30pm on this
afternoon.

Feb 3

Let's push the due date to Friday.  I'll try to be in the department
today (Thursday) for at least some time in case you have questions;
I'll post a time here when I know.  In any case, you can send questions
by email; I'll check on and off through the day.

Feb 1

Prob 1.5 might look a bit involved, but most of it is just description;
the actual work is very simple.  It does pay to look at his notes on
the radius of convergence if you haven't seen that before, and anything
you read in Bender and Orszag (which we have in our grad library) to
learn about asymptotic series will be good for you; it's a great
text, and asymptotic series are extremely useful.

I think most of 1.2 is pretty straightforward, but the distribution
for x+y confused me for a bit.  I recommend breaking z into two
regions: one from 0 to 1, the other from 1 to 2.  You can check
your answer by testing the normalization.
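Here's a small Python cross-check along those lines (the standard triangular density for the sum of two uniforms; use it to test your own derivation rather than as the write-up): the two regions integrate to 1, and half the probability lies in each.

```python
import random

def p_sum(z):
    # density of z = x + y for independent uniform x, y on [0, 1],
    # written in the two regions suggested above
    if 0.0 <= z <= 1.0:
        return z
    if 1.0 < z <= 2.0:
        return 2.0 - z
    return 0.0

# Normalization check by trapezoid rule on [0, 2]
n = 20000
h = 2.0 / n
norm = h * (0.5 * p_sum(0.0) + 0.5 * p_sum(2.0) +
            sum(p_sum(i * h) for i in range(1, n)))

# Monte Carlo check of one region: P(z < 1) should be 1/2
random.seed(3)
frac = sum(1 for _ in range(20000)
           if random.random() + random.random() < 1.0) / 20000
```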

Jan 28

I've posted the initial lectures from Ch 2.

Jan 28

I'll post hints for current problems, answers to questions, and
other useful information here.  Given that this is a new course,
I haven't collected many hints for problems yet; I typically post
those in response to questions, so ask away.