In Chap. 5 we introduced the statistical entropy defined in terms of the probabilities of the molecular distribution over energy levels, which may be discrete or continuous. In this chapter we find those probability distributions when the system attains thermodynamic equilibrium subject to the constraints on it. The probability distribution of a system in the state of equilibrium is determined by the principle of maximum entropy. The entropy so obtained is identified as the thermodynamic entropy. This enables one to establish the relationship between the statistical mechanical and thermodynamic descriptions.

6.1 Principle of Maximum Entropy

In Chap. 5 we introduced the concept of statistical entropy in terms of the phase space distribution function for classical systems and the probabilities of occupation of energy levels when the system is quantized. The evolution of the said probabilities is governed by the relevant equations of motion, and it is expected that the distribution approaches a definite limit, called the equilibrium distribution, as \(t\rightarrow \infty \). The thermodynamic properties derived from the statistical entropy corresponding to the said equilibrium distribution are expected to match the thermodynamic predictions in the so-called thermodynamic limit \(N\rightarrow \infty \), \(V\rightarrow \infty \), with N/V remaining finite.

As we noted in the kinetic theory, the approach based on determining the equilibrium distribution as the \(t\rightarrow \infty \) limit of a time-dependent distribution is, however, not simple and in fact may not always be workable. Since the state of thermodynamic equilibrium is one in which the thermodynamic entropy is maximum, it is envisaged that it is also the state of maximum statistical entropy. Accordingly, the equilibrium distribution is determined using the postulate of maximum entropy:

The state of thermal equilibrium is described by such probability distribution of microstates for which the statistical entropy is maximum subject to the constraints on the system.

The constraints of general interest are the ones pertaining to whether the system exchanges energy and/or particles with its environment. Those constraints define three types of ensembles:

  1.

    Microcanonical ensemble. It represents a system which is isolated and exchanges neither energy nor particles with the environment. 

  2.

    Canonical ensemble. It represents a system which exchanges energy but not particles with the environment. 

  3.

    Grand Canonical ensemble. It represents a system which exchanges energy as well as particles with the environment. 

In the following sections we derive the probabilities for each of the above-mentioned ensembles. We treat separately the systems having fixed number of particles and the ones which exchange particles with the environment.

6.2 Systems Having Fixed Number of Particles

Following Chap. 5,

  1.

    We denote by \(|E_{1N}\rangle , |E_{2N}\rangle ,\ldots \) the energy eigenstates of the Hamiltonian \(\hat{H}_N\) governing the evolution of the quantum system of N particles and by \(p_{mN}\) the probability of occupation of \(|E_{mN}\rangle \).

  2.

    A classical system of N particles whose evolution is governed by the Hamiltonian \(H(\{\textbf{r}_i, \textbf{p}_i\}_N)\) is described by the phase space distribution function \(f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\).

Accordingly, the averages are defined by

  1.

    In the quantum formalism, the average of an operator \(\hat{O}\) (caret on a symbol denotes operator) commuting with the Hamiltonian, for a fixed number of particles, is given by

    $$\begin{aligned} \langle \hat{O}\rangle _N =\sum _{m}p_{mN}\langle E_{mN}|\hat{O}|E_{mN}\rangle ,~~~~~ ~~\text{ quantum } \text{ systems }. \end{aligned}$$
    (6.1)

    The general case of an arbitrary \(\hat{O}\) is addressed in Chap. 13 in terms of the density matrix formalism.

  2.

    The average of a phase space function \(O(\{\textbf{r}_i, \textbf{p}_i\}_N)\) in the classical theory for fixed number of particles is given by

    $$\begin{aligned} \langle O\rangle _N =\int f_N(\{\textbf{r}_i, \textbf{p}_i\}_N) O(\{\textbf{r}_i, \textbf{p}_i\}_N)~\textrm{d}\tau _N,~~\text{ classical } \text{ systems }. \end{aligned}$$
    (6.2)

In particular the expressions for entropy for the two kinds of systems are:

  1.

    The quantum statistical entropy for fixed number of particles is

    $$\begin{aligned} S_N=-k_\textrm{B}\sum _{m}p_{mN}\textrm{ln}(p_{mN}). \end{aligned}$$
    (6.3)
  2.

    The classical statistical entropy for fixed number of particles is

    $$\begin{aligned} S_N=-k_\textrm{B}\int f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\textrm{ln}(f_N(\{\textbf{r}_i, \textbf{p}_i\}_N))~\textrm{d}\tau _N. \end{aligned}$$
    (6.4)

We extremize \(S_N\) subject to the normalization of the probabilities:

$$\begin{aligned} \sum _{m}p_{mN}=1,\qquad \qquad \qquad \quad ~\text{ quantum } \text{ systems }, \end{aligned}$$
(6.5)
$$\begin{aligned} \int f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)~\textrm{d}\tau _N=1,\qquad ~\text{ classical } \text{ systems }. \end{aligned}$$
(6.6)

and the constraints

$$\begin{aligned} \langle \hat{A}^{(k)}\rangle \equiv \sum _m p_{mN}A^{(k)}_{mN} = C_k,\quad k=1,2,\ldots , r, \end{aligned}$$
(6.7)

where \(\hat{A}^{(k)}\)’s are assumed to commute with the Hamiltonian of the system, the \(C_k\)’s are constants and

$$\begin{aligned} A^{(k)}_{mN}= \langle E_{mN}|\hat{A}^{(k)}|E_{mN}\rangle . \end{aligned}$$
(6.8)

For the classical system, the constraints are given by the equations

$$\begin{aligned} \langle A^{(k)}(\{\textbf{r}_i, \textbf{p}_i\}_N)\rangle =C_k,\quad k=1,2,\ldots , r. \end{aligned}$$
(6.9)

Invoking the results derived in Sect. 5.2, the probability distribution which maximizes the statistical entropy for the quantum systems may be seen to be given by

$$\begin{aligned} p_{mN} =\frac{1}{Z_N}\exp \Big (-\sum _{k=1}^{r} \alpha _k A^{(k)}_{mN}\Big ). \end{aligned}$$
(6.10)

The normalization condition (6.5) for quantum systems gives

$$\begin{aligned} Z_N=\sum _{m}\exp \Big (-\sum _{k=1}^{r} \alpha _k A^{(k)}_{mN}\Big ). \end{aligned}$$
(6.11)

The \(Z_N\) is called the N-particle quantum partition function.

The phase space distribution function which maximizes the classical statistical entropy reads

$$\begin{aligned} f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)= \frac{1}{Z_N}\exp \Big (-\sum _{k=1}^{r} \alpha _k A^{(k)}(\{\textbf{r}_i, \textbf{p}_i\}_N)\Big ). \end{aligned}$$
(6.12)

The normalization condition (6.6) for classical systems leads to

$$\begin{aligned} Z_N=\int \exp \Big (-\sum _{k=1}^{r}\alpha _k A^{(k)}(\{\textbf{r}_i, \textbf{p}_i\}_N)\Big ) \textrm{d}\tau _N. \end{aligned}$$
(6.13)

The \(Z_N\) is the classical partition function.
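
The construction above can be made concrete with a small numerical sketch. The Python snippet below (the level values \(A_m\) and the constraint value C are illustrative assumptions, not taken from the text) builds the maximum-entropy distribution (6.10) for a single constraint and determines the Lagrange multiplier by bisection so that the constraint (6.7) is met:

```python
import math

# Hypothetical four-level system: values A_m of a single constraint
# operator and the prescribed average C (both illustrative).
A = [0.0, 1.0, 2.0, 3.0]
C = 1.2

def distribution(alpha):
    """Maximum-entropy probabilities p_m = exp(-alpha*A_m)/Z_N, Eq. (6.10)."""
    w = [math.exp(-alpha * a) for a in A]
    Z = sum(w)
    return [x / Z for x in w], Z

def average(alpha):
    p, _ = distribution(alpha)
    return sum(pm * a for pm, a in zip(p, A))

# Solve <A>(alpha) = C by bisection; <A> decreases monotonically in alpha.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if average(mid) > C:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)

p, Z = distribution(alpha)
print(sum(p))          # normalization (6.5), equals 1 to machine precision
print(average(alpha))  # constraint (6.7), equals C = 1.2
```

The bisection exploits the fact that \(\langle \hat{A}\rangle \) decreases monotonically with \(\alpha \), which follows from (6.18) since the variance is non-negative.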

In the following we list some consequences following from the form of the equilibrium probabilities. We will work with the quantum formalism as the corresponding classical results follow by replacing the quantum averages by the classical ones.

  1.

    It is readily seen that

    $$\begin{aligned} \langle \hat{A}^{(k)}\rangle =-\frac{\partial \textrm{ln}(Z_N)}{\partial \alpha _k}, \quad k=1,2,\ldots , r. \end{aligned}$$
    (6.14)

    The Lagrange multipliers \(\{\alpha _k\}\) can be determined in terms of the given constant values of \(\{\langle \hat{A}^{(k)}\rangle \}\) by inverting the equations above.

  2.

    We leave it as an exercise to show that the expression (6.3) for entropy assumes the form

    $$\begin{aligned} \frac{S_N}{k_\textrm{B}}=\sum _{k=1}^{r}\alpha _k C_k+\textrm{ln}(Z_N),\qquad C_k\equiv \langle \hat{A}^{(k)}\rangle . \end{aligned}$$
    (6.15)

    Due to (6.14), it is readily seen that \(S_N/k_\textrm{B}\) is the Legendre transform of \(\textrm{ln}(Z_N)\) with respect to the \(\alpha _k\)’s to the corresponding conjugate variables \(C_k\)’s. This implies

    $$\begin{aligned} \alpha _k=\frac{1}{k_\textrm{B}}\frac{\partial S_N}{\partial C_k}. \end{aligned}$$
    (6.16)

    This may alternatively be obtained by differentiating (6.15) partially with respect to \(C_k\) and noting that, since \(Z_N\) is a function of the \(\alpha _k\)’s, (6.14) makes the terms involving \(\partial \alpha _j/\partial C_k\) cancel.

  3.

    It is straightforward to see that

    $$\begin{aligned} \langle \hat{A}^{(k)2}\rangle =\frac{1}{Z_N}\frac{\partial ^2 Z_N}{\partial \alpha ^2_k},\quad k=1,2,\ldots , r. \end{aligned}$$
    (6.17)

    We leave it as an exercise to show that the variance in the measurement of \(\hat{A}^{(k)}\) is given by

    $$\begin{aligned} \sigma ^2_k\equiv \langle \hat{A}^{(k)2}\rangle -\langle \hat{A}^{(k)}\rangle ^2=\frac{\partial ^2\textrm{ln}(Z_N)}{\partial \alpha ^2_k}. \end{aligned}$$
    (6.18)
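
Relations (6.14) and (6.18) lend themselves to a quick numerical check. The sketch below uses a hypothetical two-level system (the values of \(A_m\) and \(\alpha \) are arbitrary choices for the example) and compares the directly computed mean and variance with finite-difference derivatives of \(\textrm{ln}(Z_N)\):

```python
import math

# Hypothetical two-level system: one constraint with values A_m, so that
# Z_N(alpha) = sum_m exp(-alpha*A_m), Eq. (6.11).  alpha is arbitrary.
A = [0.0, 1.0]

def lnZ(alpha):
    return math.log(sum(math.exp(-alpha * a) for a in A))

def moments(alpha):
    w = [math.exp(-alpha * a) for a in A]
    Z = sum(w)
    p = [x / Z for x in w]
    mean = sum(pm * a for pm, a in zip(p, A))
    var = sum(pm * a * a for pm, a in zip(p, A)) - mean ** 2
    return mean, var

alpha, h = 0.7, 1e-4
mean, var = moments(alpha)

# Eq. (6.14): <A> = -d lnZ / d alpha, by central difference.
d1 = (lnZ(alpha + h) - lnZ(alpha - h)) / (2 * h)
# Eq. (6.18): variance = d^2 lnZ / d alpha^2, by second difference.
d2 = (lnZ(alpha + h) - 2 * lnZ(alpha) + lnZ(alpha - h)) / h ** 2

print(abs(mean + d1))  # ~0: mean agrees with -d lnZ/d alpha
print(abs(var - d2))   # ~0: variance agrees with the second derivative
```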

We use the results derived above to construct ensembles for systems with fixed number of particles, namely, microcanonical and canonical ensembles.

Exercises

  Ex. 6.1.

    Show that for \(p_{mN}\) as in (6.10), the expression (6.3) for entropy is given by (6.15).

  Ex. 6.2.

    Show that

    $$\begin{aligned} \langle \hat{A}^{(j)}\hat{A}^{(k)}\rangle -\langle \hat{A}^{(j)}\rangle \langle \hat{A}^{(k)}\rangle =\frac{\partial ^2\textrm{ln}(Z_N)}{\partial \alpha _j\partial \alpha _k}. \end{aligned}$$
    (6.19)

    For \(j=k\) we recover (6.18).
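
The cross-derivative identity of Ex. 6.2 can also be confirmed numerically. In the sketch below, a hypothetical three-state system (all numbers illustrative) carries two constraints, and the covariance is compared against a central-difference estimate of the mixed derivative of \(\textrm{ln}(Z_N)\):

```python
import math

# Hypothetical three-state system with two commuting constraints whose
# values on the states are A1_m and A2_m (all numbers illustrative).
A1 = [0.0, 1.0, 2.0]
A2 = [0.0, 2.0, 1.0]

def lnZ(a1, a2):
    # Z_N = sum_m exp(-a1*A1_m - a2*A2_m), Eq. (6.11) with r = 2.
    return math.log(sum(math.exp(-a1 * x - a2 * y) for x, y in zip(A1, A2)))

def covariance(a1, a2):
    w = [math.exp(-a1 * x - a2 * y) for x, y in zip(A1, A2)]
    Z = sum(w)
    p = [v / Z for v in w]
    m1 = sum(pm * x for pm, x in zip(p, A1))
    m2 = sum(pm * y for pm, y in zip(p, A2))
    m12 = sum(pm * x * y for pm, x, y in zip(p, A1, A2))
    return m12 - m1 * m2

a1, a2, h = 0.3, 0.5, 1e-4
# Mixed derivative d^2 lnZ / (d a1 d a2) by central differences:
mixed = (lnZ(a1 + h, a2 + h) - lnZ(a1 + h, a2 - h)
         - lnZ(a1 - h, a2 + h) + lnZ(a1 - h, a2 - h)) / (4 * h * h)
print(abs(covariance(a1, a2) - mixed))  # ~0, as asserted by Eq. (6.19)
```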

6.2.1 Microcanonical Ensemble

The microcanonical ensemble describes an isolated system. It is characterized by a constant number of particles whereas its energy E is given to lie in some small interval (\(E_0-\varDelta ,~ E_0+\varDelta \)) about a fixed value \(E_0\). Since in this case, apart from the normalization condition, there is no other constraint, \(\alpha _k=0\). Consequently, the general equilibrium distributions (6.10) and (6.12) lead to the following results for quantum and classical systems:

  1.

    With \(\alpha _k=0\), (6.12) for classical systems reduces to

    $$\begin{aligned} f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)=\frac{1}{Z_N}, \end{aligned}$$
    (6.20)

    where \(Z_N\) is given by (6.13) in which the integral is to be carried out over the part of the phase space for which the energy E lies in the interval \((E_0-\varDelta ,~ E_0+\varDelta )\), so that if \(\varGamma \) is the volume of the said part then

    $$\begin{aligned} Z_N=\int _{E_0-\varDelta \le E\le E_0+\varDelta }~\textrm{d}\tau _N=\varGamma . \end{aligned}$$
    (6.21)

    The \(Z_N\) above is the classical microcanonical partition function.  The equilibrium phase space distribution for the microcanonical ensemble is thus given by

    $$\begin{aligned} f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)=\frac{1}{\varGamma }, \qquad E_0-\varDelta \le E\le E_0+\varDelta . \end{aligned}$$
    (6.22)

    On substituting this in (6.4), the expression for entropy for a classical system described by microcanonical ensemble reads

    $$\begin{aligned} S_N=k_\textrm{B}\textrm{ln}(\varGamma ). \end{aligned}$$
    (6.23)
  2.

    For quantum systems, with \(\alpha _k=0\), (6.10) reduces to

    $$\begin{aligned} p_{mN}=\frac{1}{Z_N}, \end{aligned}$$
    (6.24)

    where \(Z_N\), given by (6.11), is a sum over states in the energy interval \((E_0-\varDelta ,~E_0+\varDelta )\). If W is the number of states in the said interval then

    $$\begin{aligned} Z_N=W. \end{aligned}$$
    (6.25)

    The \(Z_N\) above is the quantum microcanonical partition function.  On substituting this in (6.3), the expression for entropy for quantum systems reads

    $$\begin{aligned} S_N=k_\textrm{B}\textrm{ln}(W). \end{aligned}$$
    (6.26)

    The number of states W is analogous to the phase space volume \(\varGamma \) accessible to a classical system in the interval \((E_0-\varDelta ,~E_0+\varDelta )\) of energy.
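
As an illustration of (6.26), the following sketch counts microstates by brute force for a hypothetical system of ten spins (the energy function and the shell width are assumptions made for the example) and evaluates the microcanonical entropy \(S_N=k_\textrm{B}\,\textrm{ln}(W)\):

```python
import math
from itertools import product

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Hypothetical system: 10 spins s_i = +/-1 with energy E = -sum_i s_i (in
# units of some coupling constant).  Count the microstates W whose energy
# lies in the shell (E0 - Delta, E0 + Delta), then apply Eq. (6.26).
E0, Delta = 0.0, 1.5

W = 0
for spins in product((-1, 1), repeat=10):
    E = -sum(spins)
    if E0 - Delta <= E <= E0 + Delta:
        W += 1

S = k_B * math.log(W)
print(W)  # 252: only E = 0 falls in the shell, i.e. C(10, 5) microstates
```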

6.2.2 Canonical Ensemble

The canonical ensemble is defined as the one in which the system exchanges energy but not particles with the surroundings. Hence, apart from the normalization condition, the constraint under which entropy is to be maximized is the value U of the internal energy, given for quantum systems by

$$\begin{aligned} U=\langle \hat{H}_N\rangle =\sum _{m}E_{mN}p_{mN}, \end{aligned}$$
(6.27)

where \(\hat{H}_N\) is the system Hamiltonian and the second equation is due to the fact that \(|E_{mN}\rangle \) is the eigenstate of \(\hat{H}_N\) corresponding to energy \(E_{mN}\):

$$\begin{aligned} \hat{H}_N|E_{mN}\rangle =E_{mN}|E_{mN}\rangle . \end{aligned}$$
(6.28)

The corresponding constraint for classical systems is

$$\begin{aligned} U=\int H_N(\{\textbf{r}_i, \textbf{p}_i\}_N) f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)~\textrm{d}\tau _N, \end{aligned}$$
(6.29)

where \(H_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\) is the classical Hamiltonian.

The equilibrium distribution for quantum systems is then given by (6.10) with \(\hat{A}^{(1)}=\hat{H}_N\), \(\alpha _k=0\) (\(k\ne 1\)). Also, due to (6.28), the definition (6.8) of \(A^{(1)}_{mN}\) gives \(A^{(1)}_{mN}=E_{mN}\). Hence, with \(\alpha _1\rightarrow \beta \), (6.10) assumes the form

$$\begin{aligned} p_{mN}=\frac{1}{Z_N}\exp \Big (-\beta E_{mN}\Big ), \end{aligned}$$
(6.30)

where, due to (6.11),

$$\begin{aligned} Z_N=\sum _{m}\exp \left( -\beta E_{mN}\right) . \end{aligned}$$
(6.31)

The \(Z_N\) above is the quantum canonical partition function. 

The equilibrium distribution for classical systems may similarly be shown to be given by

$$\begin{aligned} f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)=\frac{1}{Z_N}\exp \{-\beta H_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\}, \end{aligned}$$
(6.32)

where

$$\begin{aligned} Z_N=\int \exp \left\{ -\beta H_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\right\} ~\textrm{d}\tau _N. \end{aligned}$$
(6.33)

The \(Z_N\) above is the classical canonical partition function.  In the following we derive some relations expressing physical quantities in terms of \(Z_N\) assuming the system to be quantum. Analogous relations hold in the phase space description.

  1.

    It is straightforward to see that 

    $$\begin{aligned} U=-\frac{\partial \textrm{ln}(Z_N)}{\partial \beta }, \end{aligned}$$
    (6.34)

    for quantum as well as classical systems. Since U is extensive, the equation above shows that \(\textrm{ln}(Z_N)\) must be extensive. Inversion of (6.34) gives the Lagrange multiplier \(\beta \) in terms of the internal energy U.

  2.

    Using (6.30), the expression (6.3) for entropy reads 

    $$\begin{aligned} S_N=k_\textrm{B}\left( \beta U+\textrm{ln}(Z_N)\right) , \end{aligned}$$
    (6.35)

    for quantum as well as classical systems.

  3.

    We derive the expression for pressure in terms of \(Z_N\). To that end, recall the well-known expression for pressure,

    $$\begin{aligned} P=-\frac{\partial E}{\partial V}, \end{aligned}$$
    (6.36)

    where E is energy and V the volume. Assume that the energy of the gas at some instant is \(E_{mN}\) so that the pressure at that instant is

    $$\begin{aligned} P_{mN}=-\frac{\partial E_{mN}}{\partial V}. \end{aligned}$$
    (6.37)

    The average pressure would be

    $$\begin{aligned} P= & {} \sum _{m}p_{mN}P_{mN}=-\sum _{m}p_{mN} \frac{\partial E_{mN}}{\partial V}\nonumber \\ = & {} -\frac{1}{Z_N}\sum _{m}\exp (-\beta E_{mN}) \frac{\partial E_{mN}}{\partial V}\nonumber \\ = & {} \frac{1}{\beta Z_N}\frac{\partial }{\partial V} \sum _{m}\exp (-\beta E_{mN}) =\frac{1}{\beta Z_N}\frac{\partial Z_N}{\partial V}. \end{aligned}$$
    (6.38)

    We thus see that 

    $$\begin{aligned} P=\frac{1}{\beta }\frac{\partial \textrm{ln}( Z_N)}{\partial V}. \end{aligned}$$
    (6.39)

    This is the expression for pressure in terms of the canonical partition function. See also Sect. 6.4.2 for an alternative derivation.

  4.

    A measure of fluctuations in the measurement of energy is its variance. Invoking (6.18) we have

    $$\begin{aligned} \varDelta E^2\equiv \langle E^2\rangle -\langle E\rangle ^2 =\frac{\partial ^2 \textrm{ln}(Z_N)}{\partial \beta ^2}. \end{aligned}$$
    (6.40)

    Some consequences of the relation above are:

    (a)

      Due to (6.34), (6.40) may be rewritten as

      $$\begin{aligned} \varDelta E^2=k_\textrm{B}T^2\frac{\partial U}{\partial T} =k_\textrm{B}T^2 C_V. \end{aligned}$$
      (6.41)

      This relates fluctuations in energy with the heat capacity.

    (b)

      Since \(\varDelta E^2\ge 0\) it follows from (6.40) that

      $$\begin{aligned} \frac{\partial ^2 \textrm{ln}(Z_N)}{\partial \beta ^2}\ge 0. \end{aligned}$$
      (6.42)
    (c)

      On dividing (6.40) by \(N^2\) we have

      $$\begin{aligned} \frac{\varDelta E^2}{N^2} =\frac{1}{N^2}\frac{\partial ^2 \textrm{ln}(Z_N)}{\partial \beta ^2}. \end{aligned}$$
      (6.43)

      As argued circa (6.34), \(\textrm{ln}(Z_N)\) is extensive and hence is proportional to N. The equation above therefore shows that fluctuation in the measurement of energy per particle, \(\varDelta E/N\), is proportional to \(1/\sqrt{N}\). Hence, for large N, fluctuations in energy may be ignored.

    (d)

      Invoking (6.34), rewrite (6.40) as

      $$\begin{aligned} \varDelta E^2=-\frac{\partial U}{\partial \beta }. \end{aligned}$$
      (6.44)

      Since \(\varDelta E^2\ge 0\), it follows that

      $$\begin{aligned} \frac{\partial U}{\partial \beta }\le 0. \end{aligned}$$
      (6.45)

      This shows that the internal energy is a decreasing function of \(\beta \). The equality holds when \(\varDelta E=0\):

      $$\begin{aligned} \frac{\partial U}{\partial \beta }=0,\qquad \text{ if } \varDelta E=0. \end{aligned}$$
      (6.46)

      Hence U is independent of \(\beta \) in the absence of randomness. We will show that \(\beta =1/k_\textrm{B}T\) where T is the temperature of the system. We therefore see that the concept of temperature is related with randomness. In particular we will see that for a non-interacting free classical gas, \(U=3N/(2\beta )\). The equation (6.44) in that case leads to

      $$\begin{aligned} \varDelta E^2=\frac{3N}{2}k^2_\textrm{B}T^2. \end{aligned}$$
      (6.47)

      This equation relates temperature directly with energy fluctuations in an ideal classical gas. Clearly, \(T=0\) if \(\varDelta E=0\), though the applicability of the ideal gas law is questionable at low temperatures.
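
The canonical relations (6.30), (6.34) and (6.40) may be checked numerically for the simplest nontrivial case, a hypothetical two-level system with level spacing eps (all numbers below are illustrative):

```python
import math

# Hypothetical two-level system with energies 0 and eps; canonical
# probabilities p_m = exp(-beta*E_m)/Z_N, Eq. (6.30).  Values illustrative.
eps, beta = 1.0, 0.8

def lnZ(b):
    return math.log(1.0 + math.exp(-b * eps))

p1 = math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))
U = eps * p1                       # direct average, Eq. (6.27)
varE = eps ** 2 * p1 - U ** 2      # direct <E^2> - <E>^2

h = 1e-4
U_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)                # (6.34)
var_from_Z = (lnZ(beta + h) - 2 * lnZ(beta) + lnZ(beta - h)) / h**2  # (6.40)

print(abs(U - U_from_Z))       # ~0: U = -d lnZ/d beta
print(abs(varE - var_from_Z))  # ~0: energy variance from lnZ
# Consistent with (6.44)-(6.45): varE >= 0, so U decreases with beta.
```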

Exercises

  Ex. 6.3.

    (a) Given that, besides the normalization condition, the probability distribution is constrained by specified average values U and P of energy and pressure, show that the probability that maximizes entropy is

    $$\begin{aligned} p_{mN}=\frac{1}{Z_{iN}}\exp \left( -\beta E_{mN} -\gamma P_{mN}\right) , \end{aligned}$$
    (6.48)
    $$\begin{aligned} Z_{iN}=\sum _{m}\exp \left( -\beta E_{mN} -\gamma P_{mN}\right) , \end{aligned}$$
    (6.49)

    where \(P_{mN}\) is as in (6.37). This is called the isobaric-isothermal distribution. Hint: Recall from (6.38) that pressure is the average of \(P_{mN}\).  (b) Show that the entropy corresponding to the probabilities in (6.48) is

    $$\begin{aligned} S=k_\textrm{B}\left( \beta U+\gamma P+\textrm{ln}(Z_{iN})\right) . \end{aligned}$$
    (6.50)

6.3 Grand Canonical Ensemble

The grand canonical ensemble describes systems which, in addition to exchanging energy with the environment, also exchange particles. The number of particles then also becomes a variable. We adopt the following notation for describing systems in the grand canonical ensemble:

  1.

    In a quantum system, the occupation probability of the state \(|E_{m, N}\rangle \) when N is variable will be denoted by \(p_{m}(N)\). The argument N of \(p_{m}(N)\) is to distinguish the probabilities for varying number of particles from those for fixed N which have been symbolized by \(p_{mN}\).

  2.

    In a classical system, N-particle probability density in the vicinity of \(\{\textbf{r}_i, \textbf{p}_i\}_N\) in case the particle number is varying will be denoted by \(f(\{\textbf{r}_i, \textbf{p}_i\}_N, N)\). The N in the argument of f is to distinguish the distribution for varying number of particles from the one for fixed N which has been symbolized by \(f_N(\{\textbf{r}_i, \textbf{p}_i\}_N)\).

  3.

    The average \(\langle \hat{O}\rangle \) of an operator \(\hat{O}\), commuting with the Hamiltonian, in the present case is given by

    $$\begin{aligned} \langle \hat{O}\rangle =\sum _{N=0}^{\infty } \sum _{m}p_{m}(N)\langle E_{mN}|\hat{O}|E_{mN}\rangle . \end{aligned}$$
    (6.51)
  4.

    The average of a phase space function \(O(\{\textbf{r}_i, \textbf{p}_i\}_N)\) in the classical description is given by

    $$\begin{aligned} \langle O\rangle =\sum _{N=0}^\infty \int f(\{\textbf{r}_i, \textbf{p}_i\}_N, N) O(\{\textbf{r}_i, \textbf{p}_i\}_N)~\textrm{d}\tau _N. \end{aligned}$$
    (6.52)
  5.

    Statistical entropy for varying N for quantum systems is

    $$\begin{aligned} S=-k_\textrm{B}\sum _{N=0}^{\infty }\sum _m p_{m}(N)\textrm{ln}(p_{m}(N)). \end{aligned}$$
    (6.53)
  6.

    Statistical entropy for varying N for classical systems is

    $$\begin{aligned} S=-k_\textrm{B}\sum _{N=0}^{\infty }\int f(\{\textbf{r}_i, \textbf{p}_i\}_N, N) \textrm{ln}\{f(\{\textbf{r}_i, \textbf{p}_i\}_N, N)\}~\textrm{d}\tau _N. \end{aligned}$$
    (6.54)
  7.

    The normalization condition for quantum probabilities reads

    $$\begin{aligned} \sum _{N=0}^{\infty }\sum _{m}p_{m}(N)=1. \end{aligned}$$
    (6.55)
  8.

    The normalization condition for classical probability distribution is

    $$\begin{aligned} \sum _{N=0}^{\infty }\int f(\{\textbf{r}_i, \textbf{p}_i\}_N, N)~\textrm{d}\tau _N=1. \end{aligned}$$
    (6.56)

The equilibrium probability distribution in the grand canonical ensemble is one which maximizes entropy subject to its normalization condition, the average values of internal energy, and the number of molecules which in the quantum formalism are

$$\begin{aligned} U=\langle \hat{H}_N\rangle =\sum _{N=0}^{\infty } \sum _{m}E_{mN}p_{m}(N), \end{aligned}$$
(6.57)

and

$$\begin{aligned} \bar{N}=\langle \hat{N}\rangle =\sum _{N=0}^{\infty }\sum _{m}Np_m(N). \end{aligned}$$
(6.58)

Following the general procedure outlined in Sect. 5.2, it is straightforward to see that the \(p_m(N)\) which maximizes entropy is given by

$$\begin{aligned} p_m(N)=\frac{1}{Z_G}\exp \left( -\beta E_{mN}+\alpha N\right) , \end{aligned}$$
(6.59)

where \(\alpha , \beta \) are Lagrange multipliers and \(Z_G\) is the quantum grand canonical partition function defined by 

$$\begin{aligned} Z_G=\sum _{N=0}^{\infty }\sum _{m} \exp \left( -\beta E_{mN}+\alpha N\right) . \end{aligned}$$
(6.60)

The equation above is the result of the normalization condition.

The equilibrium distribution function for classical systems is

$$\begin{aligned} f(\{\textbf{r}_i, \textbf{p}_i\}_N, N)=\frac{1}{Z_G} \exp \left\{ -\beta H_N(\{\textbf{r}_i, \textbf{p}_i\}_N)+\alpha N\right\} , \end{aligned}$$
(6.61)

where the grand partition function is

$$\begin{aligned} Z_G=\sum _{N=0}^{\infty } \int \exp \left\{ -\beta H_N(\{\textbf{r}_i, \textbf{p}_i\}_N)+\alpha N\right\} ~\textrm{d}\tau _N. \end{aligned}$$
(6.62)

The \(Z_G\) above is the classical grand canonical partition function.  In the following we consider quantum systems. The results for the classical systems follow by replacing the quantum averages by classical ones.

  1.

    It is straightforward to see that the internal energy is given by 

    $$\begin{aligned} U=-\frac{\partial \textrm{ln}(Z_G)}{\partial \beta }. \end{aligned}$$
    (6.63)
  2.

    The average number of molecules is

    $$\begin{aligned} \bar{N}=\frac{\partial \textrm{ln}(Z_G)}{\partial \alpha }. \end{aligned}$$
    (6.64)

    The Lagrange multipliers \(\alpha , \beta \) can be determined as functions of \(U, \bar{N}\) by inverting the equations (6.63) and (6.64).

  3.

    A useful relation between \(Z_N\) and \(Z_G\) is obtained by noting that, with \(Z_N\) given by (6.31) for quantum systems and by (6.33) for classical systems, the grand partition function \(Z_G\) in (6.60) as well as that in (6.62) can be written as

    $$\begin{aligned} Z_G=\sum _{N=0}^{\infty }\exp (\alpha N)Z_N =\sum _{N=0}^{\infty }z^NZ_N, \end{aligned}$$
    (6.65)

    where

    $$\begin{aligned} z=\exp (\alpha ) \end{aligned}$$
    (6.66)

    is called the fugacity.  From (6.65) follows the inverse relation

    $$\begin{aligned} Z_N=\frac{1}{N!}\frac{\partial ^N Z_G}{\partial z^N}\bigg |_{z=0}. \end{aligned}$$
    (6.67)

    This expresses \(Z_N\) in terms of \(Z_G\).

  4.

    The expression for pressure may be derived in the same way as has been done to arrive at the one in (6.39) for the canonical ensemble. The resulting expression for pressure is same as the one in (6.39) with \(Z_N\) therein replaced by \(Z_G\):

    $$\begin{aligned} P=\frac{1}{\beta }\frac{\partial \textrm{ln}( Z_G)}{\partial V}. \end{aligned}$$
    (6.68)

    Now, \(\textrm{ln}(Z_N)\) and \(\textrm{ln}(Z_G)\) are extensive quantities but with a difference: Whereas \(\textrm{ln}(Z_N)\) is a function of two extensive variables, namely, number of particles N and volume V, \(\textrm{ln}(Z_G)\) is a function of only one extensive variable, namely, V. For, as shown in (6.65), \(Z_G\) is obtained from \(Z_N\) by summation over N. Hence \(\textrm{ln}(Z_G)\) must be of the form \(\textrm{ln}(Z_G)=V f(\beta , \alpha )\) so that \(\partial \textrm{ln}(Z_G)/\partial V=f(\beta , \alpha )=\textrm{ln}(Z_G)/V\). Hence 

    $$\begin{aligned} P=\frac{1}{\beta V}\textrm{ln}(Z_G). \end{aligned}$$
    (6.69)
  5.

    The expression for variance in the measurement of energy can be derived in the same manner as in its derivation for canonical ensemble in (6.40). The resulting expression is same as that in (6.40) with \(Z_N\) therein replaced by \(Z_G\):

    $$\begin{aligned} \varDelta E^2=\frac{\partial ^2 \textrm{ln}(Z_G)}{\partial \beta ^2} =-\frac{\partial U}{\partial \beta }. \end{aligned}$$
    (6.70)

    The consequences of these relations have been discussed circa (6.40).

  6.

    The expression for variance in the measurement of number of molecules, obtained using (6.17), reads

    $$\begin{aligned} \varDelta N^2\equiv \langle N^2\rangle -\langle N\rangle ^2 =\frac{\partial ^2 \textrm{ln}(Z_G)}{\partial \alpha ^2}. \end{aligned}$$
    (6.71)

    Since \(\varDelta N^2\ge 0\) it follows that

    $$\begin{aligned} \frac{\partial ^2 \textrm{ln}(Z_G)}{\partial \alpha ^2}\ge 0. \end{aligned}$$
    (6.72)

    On using (6.64), (6.71) may be rewritten as

    $$\begin{aligned} \varDelta N^2=\frac{\partial \bar{N}}{\partial \alpha }. \end{aligned}$$
    (6.73)

    The quantity \(\varDelta N/\bar{N}\) serves as a measure of fluctuations in the number of molecules relative to the mean \(\bar{N}\). From (6.71) we have

    $$\begin{aligned} \frac{\varDelta N^2}{\bar{N}^2}= \frac{1}{\bar{N}^2}\frac{\partial ^2 \textrm{ln}(Z_G)}{\partial \alpha ^2}. \end{aligned}$$
    (6.74)

    Since \(\textrm{ln}(Z_G)\sim V\) and \(V/\bar{N}\) is a constant in the thermodynamic limit \(V\rightarrow \infty , \bar{N}\rightarrow \infty \), it follows that \(\varDelta N/\bar{N}\sim (\bar{N})^{-1/2}\). Thus fluctuation in the number of molecules is negligible in the thermodynamic limit. In particular, we will see that, for a non-interacting free classical gas, \(\partial \bar{N}/\partial \alpha =\bar{N}\) which on substitution in (6.73) yields

    $$\begin{aligned} \varDelta N^2=\bar{N}. \end{aligned}$$
    (6.75)

    This shows that variance in the number distribution of a non-interacting classical gas is same as its mean. This is the defining characteristic of the Poisson distribution. Hence the number distribution in an ideal classical gas is Poissonian.

  7.

    A relation of much importance exists between the number fluctuations and isothermal compressibility. To derive it, divide (6.73) by \(\bar{N}^2\) to obtain

    $$\begin{aligned} \frac{\varDelta N^2}{\bar{N}^2} = & {} \frac{1}{\bar{N}}\left( \frac{\partial \bar{N}}{\partial \alpha }\right) _{V, T} \left( \frac{\partial \textrm{ln}(Z_G)}{\partial \alpha }\right) ^{-1}_{V, T} =\frac{1}{\beta \bar{N}V}\left( \frac{\partial \bar{N}}{\partial \alpha }\right) _{V, T} \left( \frac{\partial P}{\partial \alpha }\right) ^{-1}_{V, T}\nonumber \\ = & {} \frac{1}{\beta V}\frac{V}{\bar{N}}\left( \frac{\partial \bar{N}/V}{\partial P}\right) _{V, T}, \end{aligned}$$
    (6.76)

    where (6.64) has been used to write an \(\bar{N}\) in the denominator in the first equality, (6.69) has been recalled to write \(\textrm{ln}(Z_G)\) in terms of pressure in the second, and (A.10) has been invoked to write the last equality. In terms of the specific volume \(v=V/\bar{N}\), (6.76) reads

    $$\begin{aligned} \frac{\varDelta N^2}{\bar{N}^2}= -\frac{1}{\beta V}\frac{1}{v}\left( \frac{\partial v}{\partial P}\right) _{T}=\frac{k_\textrm{B}T}{V}\kappa _T, \end{aligned}$$
    (6.77)

    where \(\kappa _T\) is the isothermal compressibility defined in (2.110). The expression above may be derived alternatively by using the Gibbs–Duhem equation (see Ex. 6.4).

  8.

    On substituting (6.59) in (6.53) follows the expression for entropy:

    $$\begin{aligned} S=k_\textrm{B}\left( \beta U-\alpha \bar{N}+\textrm{ln}(Z_G)\right) . \end{aligned}$$
    (6.78)

    Invoking (6.69) for \(\textrm{ln}(Z_G)\) in terms of the pressure P, we get

    $$\begin{aligned} S=k_\textrm{B}\left( \beta U-\alpha \bar{N}+\beta PV\right) . \end{aligned}$$
    (6.79)

    We will show that \(k_\textrm{B}\) is Boltzmann’s constant, \(\beta =1/k_\textrm{B}T\) and \(\alpha =\beta \mu \) where \(\mu \) is chemical potential. Consequently, (6.79) may be rewritten as

    $$\begin{aligned} S=\frac{1}{T}U-\frac{\mu }{T}\bar{N}+\frac{P}{T}V. \end{aligned}$$
    (6.80)

    This is Euler’s thermodynamic equation with N therein replaced by \(\bar{N}\).
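
The fugacity expansion (6.65) and the Poissonian fluctuation result (6.75) can be illustrated with a sketch that assumes an ideal-gas-like factorization \(Z_N=a^N/N!\) (with a playing the role of a single-particle partition function; the values of a and the fugacity z are arbitrary choices):

```python
import math

# Assumed ideal-gas-like canonical partition function Z_N = a**N / N!,
# where a stands in for a single-particle partition function and the 1/N!
# is the usual Gibbs factor; a and the fugacity z are illustrative.
a, z = 2.0, 1.5

Nmax = 100  # truncation of the sum over N; the terms decay factorially
ZN = [a ** N / math.factorial(N) for N in range(Nmax)]
ZG = sum(z ** N * zn for N, zn in enumerate(ZN))       # Eq. (6.65)
print(abs(ZG - math.exp(z * a)))  # ~0: closed form Z_G = exp(z*a)

pN = [z ** N * zn / ZG for N, zn in enumerate(ZN)]     # number distribution
mean = sum(N * p for N, p in enumerate(pN))
var = sum(N * N * p for N, p in enumerate(pN)) - mean ** 2
print(abs(mean - z * a))  # ~0: N-bar = z*a, via Eq. (6.64)
print(abs(var - mean))    # ~0: Poissonian, Eq. (6.75)
```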

While arriving at (6.80), we compared the statistical mechanical expression (6.79) for S with Euler’s equation and, based on the assumption that S in (6.80) is same as the thermodynamic entropy, we deduced the expressions for the Lagrange multipliers \(\beta , \alpha \) in terms of temperature and the chemical potential. Next we show that the assumed equivalence indeed holds.

  Ex. 6.4.

    Derive (6.77) using Gibbs–Duhem relation. Hint: With \(v=V/\bar{N}\), \(\alpha =\beta \mu \) rewrite (6.73) as

    $$\begin{aligned} \frac{\varDelta N^2}{\bar{N}^2}= & {} \frac{V}{\beta \bar{N}^2}\left( \frac{\partial \bar{N}/V}{\partial \mu }\right) _{V, T} =-\frac{1}{\beta V}\left( \frac{\partial v}{\partial \mu }\right) _{T} \nonumber \\ = & {} -\frac{1}{\beta V}\left( \frac{\partial v}{\partial P}\right) _{T} \left( \frac{\partial P}{\partial \mu }\right) _{T}. \end{aligned}$$
    (6.81)

    Using Gibbs–Duhem relation (2.49)

    $$\begin{aligned} s\textrm{d}T-v\textrm{d}P+\textrm{d}\mu =0, \end{aligned}$$
    (6.82)

    we have \((\partial P/\partial \mu )_T=1/v\) which on substitution in (6.81) would lead to the desired result (6.77).

6.4 Relation with Thermodynamics

In this section we establish the relationship of the statistical description with thermodynamics by using the standard distributions introduced above. It may appear that we need to use one or another ensemble depending on the system under consideration, i.e. depending on whether energy and particle number are given exactly or as averages. However, as shown above, in the thermodynamic limit, fluctuations in energy and number of particles are negligibly small. Hence, in that limit, whether an observable is given exactly or as an average leads to the same result. In other words, the three ensembles lead to the same results in the thermodynamic limit. We may therefore choose any of the three ensembles as per convenience. In what follows we work with the canonical ensemble. See [Balian] for further details.

6.4.1 Zeroth Law of Thermodynamics

Recall that the zeroth law of thermodynamics states that if two bodies are separately in thermal equilibrium with a third body then they are in thermal equilibrium with one another. This law defines temperature as the quantity that is equalized between bodies in thermal equilibrium with each other.

Using the zeroth law, we will show that \(\beta \) is a decreasing function of temperature.

To that end, let the probabilities of occupation of the energy eigenstates of systems 1 and 2 be given by (we drop the index indicating the number of particles)

$$\begin{aligned} p^{(k)}_m=\frac{1}{Z_k}\exp \left( -\beta _k E^{(k)}_m\right) , \quad Z_k=\sum _{m}\exp \left( -\beta _k E^{(k)}_m\right) , \quad k=1,2. \end{aligned}$$
(6.83)

The combined probability when the systems are not interacting is

$$\begin{aligned} p^{(c)}_{m, n}=p^{(1)}_mp^{(2)}_n. \end{aligned}$$
(6.84)

The systems interact when they are brought together. The energy levels of the combined system are obtained by taking account of the interaction potential between them. Let \(\{E^{(c)}_M\}\) be the energy eigenstates of the combined system so that the probability of occupation of the state of energy \(E^{(c)}_M\) is given by

$$\begin{aligned} p^{(c)}_M=\frac{1}{Z}\exp \left( -\beta E^{(c)}_M\right) ,\quad Z=\sum _{M}\exp \left( -\beta E^{(c)}_M\right) . \end{aligned}$$
(6.85)

If the interaction is weak then energy eigenvalues of the interacting system will be only negligibly different from the sum of their energies when not interacting. We therefore let \(E^{(c)}_M\approx E^{(1)}_m+E^{(2)}_n\) and rewrite (6.85) as

$$\begin{aligned} p^{(c)}_{m, n}\approx \frac{1}{Z}\exp \left( -\beta \left( E^{(1)}_m +E^{(2)}_n\right) \right) , \quad Z=\sum _{m, n}\exp \left( -\beta \left( E^{(1)}_m +E^{(2)}_n\right) \right) . \end{aligned}$$
(6.86)

This may be rewritten as

$$\begin{aligned} p^{(c)}_{m,n} = & {} \left( \frac{1}{Z^\prime _1}\exp \left( -\beta E^{(1)}_m\right) \right) \left( \frac{1}{Z^\prime _2}\exp \left( -\beta E^{(2)}_n\right) \right) , \end{aligned}$$
(6.87)

where

$$\begin{aligned} Z^\prime _k=\sum _{m}\exp \left( -\beta E^{(k)}_m\right) . \end{aligned}$$
(6.88)

Equations (6.87) show that the two systems attain the same value of the parameter \(\beta \) on reaching equilibrium through their mutual interaction. Since that equilibrium is established solely by the exchange of energy, and the thermodynamic quantity equalized by the exchange of energy is the temperature, the parameter \(\beta \) may be identified with temperature.

The manner in which \(\beta \) is related with temperature may be deduced by noting that, since the two systems together are isolated, their total internal energy does not change on interaction, i.e.

$$\begin{aligned} U_1+U_2=U^\prime _1+U^\prime _2. \end{aligned}$$
(6.89)

Hence if the internal energy of one increases then that of the other decreases. Also, from (6.45) we know that \(\partial U/\partial \beta \le 0\), i.e. U is a decreasing function of \(\beta \). This implies that the system whose energy is smaller after the interaction acquires the larger value of \(\beta \). Since the temperature of a body goes down when it loses energy, we see that \(\beta \) bears a reciprocal relationship with temperature.
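The inequality \(\partial U/\partial \beta \le 0\) invoked above is just the statement that the energy variance is non-negative, \(\partial U/\partial \beta =-\varDelta E^2\). A minimal numerical check, with an invented spectrum and \(k_\textrm{B}=1\):

```python
import math

# Illustrative check that U is a decreasing function of beta, i.e.
# dU/dbeta = -Var(E) <= 0 (cf. (6.45)), for an arbitrary toy spectrum.
E = [0.0, 0.7, 1.3, 2.9, 4.2]

def canonical_U_var(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)
    p = [wi / Z for wi in w]
    U = sum(pi * ei for pi, ei in zip(p, E))
    var = sum(pi * ei**2 for pi, ei in zip(p, E)) - U**2
    return U, var

beta = 1.2
U, var = canonical_U_var(beta)
h = 1e-6
dU = (canonical_U_var(beta + h)[0] - canonical_U_var(beta - h)[0]) / (2 * h)
print(dU, -var)   # dU/dbeta matches -Var(E), which is never positive
```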

6.4.2 First Law of Thermodynamics

Recall the first law of thermodynamics stated in (1.29):

$$\begin{aligned} \delta U=\delta Q-\delta W, \end{aligned}$$

where \(\delta Q\) is the amount of heat absorbed and \(\delta W\) the amount of work done by the system. To establish this law using the statistical mechanical formalism, we start with the expression (6.29) for the internal energy and obtain

$$\begin{aligned} \delta U=\sum _{m}\left( p_{mN}\delta E_{mN}+ E_{mN}\delta p_{mN}\right) . \end{aligned}$$
(6.90)

A comparison of the last two equations suggests that we may identify the amount of heat received by the system as

$$\begin{aligned} \delta Q=\sum _{m}E_{mN}\delta p_{mN}, \end{aligned}$$
(6.91)

and the amount of work done by it as

$$\begin{aligned} \delta W=-\sum _{m}p_{mN}\delta E_{mN}. \end{aligned}$$
(6.92)

In the following we elaborate the meaning of (6.91) and (6.92).

  1.

    Equation (6.91) shows that heat exchange is associated with a change in the probability distribution over the energy levels of the system, i.e. it is the redistribution of population among the energy levels which changes the heat content of a body. The relations derived above are independent of any specific form of \(p_{mN}\). If the system is described by the canonical ensemble then it is straightforward to see that, due to a change only in the parameter \(\beta \),

    $$\begin{aligned} \delta Q= & {} \sum _{m}E_{mN}\frac{\partial p_{mN}}{\partial \beta } \delta \beta =\left( \frac{\partial }{\partial \beta }\sum _{m}E_{mN}p_{mN}\right) \delta \beta \nonumber \\ = & {} \frac{\partial U}{\partial \beta }\delta \beta , \end{aligned}$$
    (6.93)

    where the second equality follows from the fact that \(E_{mN}\) is not a function of \(\beta \). On recalling (6.44), the equation above reads

    $$\begin{aligned} \delta Q=-\varDelta E^2\delta \beta . \end{aligned}$$
    (6.94)

    This shows that the amount of heat exchanged is related to the variance in energy. We will see that \(\beta \sim 1/T\). Hence an increase in temperature accompanies the absorption of heat. We have thus been able to express heat in terms of statistical mechanical quantities.

  2.

    The expression (6.92) for the work done by the system shows that work is associated with a change in the energy levels of the system. That change is caused by changes in the external parameters \(\{\xi _k\}\) on which the energy depends. One such parameter could be the volume; similarly, a change in applied fields, electromagnetic, gravitational, etc., would shift the energy levels. Hence, considering the energy a function of \(\{\xi _k\}\), we have

    $$\begin{aligned} \delta E_{mN}(\{\xi _k\})=\sum _{k}F^k_{mN}\delta \xi _k, \end{aligned}$$
    (6.95)

    where

    $$\begin{aligned} F^k_{mN}=\frac{\partial E_{mN}(\{\xi _k\})}{\partial \xi _k}. \end{aligned}$$
    (6.96)

    On substituting this in (6.92) the expression for work done reads

    $$\begin{aligned} \delta W=-\sum _{k}G_k\delta \xi _k,\qquad G_k=\sum _{m}F^k_{mN}p_{mN}. \end{aligned}$$
    (6.97)

    For example, if \(\xi _k=V\) where V is the volume then the corresponding \(G_k\) is \(\langle \partial E_{mN}/\partial V\rangle =-P\), the negative of the pressure, and (6.97) gives \(\delta W=P\delta V\).
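The identity behind this example, \(\langle -\partial E_{mN}/\partial \xi \rangle =\beta ^{-1}\partial \,\textrm{ln}(Z_N)/\partial \xi \), can be checked numerically. The sketch below is illustrative only: it uses box-like levels \(E_n=n^2/L^2\) with all physical constants dropped, and compares the ensemble-averaged generalized force with a finite-difference derivative of \(\textrm{ln}(Z)\).

```python
import math

# Toy check that <-dE_n/dL> = beta^{-1} d ln Z / d L for box-like levels
# E_n(L) = n^2 / L^2 (constants dropped; values invented for illustration).
beta, nmax = 0.5, 400

def lnZ(L):
    return math.log(sum(math.exp(-beta * n**2 / L**2) for n in range(1, nmax + 1)))

def avg_force(L):
    w = [math.exp(-beta * n**2 / L**2) for n in range(1, nmax + 1)]
    Z = sum(w)
    # Generalized force per level: -dE_n/dL = 2 n^2 / L^3.
    return sum(wi * 2 * n**2 / L**3 for wi, n in zip(w, range(1, nmax + 1))) / Z

L = 3.0
h = 1e-5
force_from_Z = (lnZ(L + h) - lnZ(L - h)) / (2 * h) / beta
print(avg_force(L), force_from_Z)   # the two should agree
```

With \(\xi =V\) this is precisely the route by which the pressure formula in (6.109) arises.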

6.4.3 Second Law of Thermodynamics

In the following we arrive at the second law of thermodynamics by starting with the statistical description. The statistical entropy will turn out to be proportional to the thermodynamic entropy. This will also enable us to find the functional relation between \(\beta \) and temperature. Recall that, while comparing the statistical theory with the zeroth law of thermodynamics, we found that \(\beta \) is related inversely with temperature, but that comparison did not determine the function relating them.

The change in statistical entropy in canonical ensemble description is

$$\begin{aligned} \delta S= & {} -k_\textrm{B}\sum _{m} \left( 1+\textrm{ln}(p_{mN})\right) \delta p_{mN}\nonumber \\ = & {} -k_\textrm{B}\sum _{m}\left( 1-\beta E_{mN} -\textrm{ln}(Z_N)\right) \delta p_{mN}\nonumber \\ = & {} k_\textrm{B}\beta \sum _{m}E_{mN}\delta p_{mN}-k_\textrm{B}\left( 1 -\textrm{ln}(Z_N)\right) \sum _{m}\delta p_{mN}\nonumber \\ = & {} k_\textrm{B}\beta \delta Q-k_\textrm{B}\left( 1 -\textrm{ln}(Z_N)\right) \sum _{m}\delta p_{mN}, \end{aligned}$$
(6.98)

where use has been made of the definition (6.91) of \(\delta Q\). Due to

$$\begin{aligned} \sum _{m}p_{mN}=1\implies \sum _{m}\delta p_{mN}=0, \end{aligned}$$
(6.99)

(6.98) reduces to

$$\begin{aligned} \delta S=k_\textrm{B}\beta \delta Q. \end{aligned}$$
(6.100)

Recall that the change \(\delta S_\textrm{th}\) in the thermodynamic entropy of a system when it receives the amount \(\delta Q\) of heat reversibly at temperature T is given by \(\delta S_\textrm{th}=\delta Q/T\), which on comparison with (6.100) shows that the statistical and the thermodynamic entropies are proportional to each other. If we demand that the statistical entropy be identical with the thermodynamic entropy then we arrive at the relation \(\beta =1/k_\textrm{B}T\). The constant \(k_\textrm{B}\) is fixed by the choice of the temperature scale. If the unit of temperature is chosen to be the kelvin then \(k_\textrm{B}\) turns out to be Boltzmann’s constant. Assuming that to be the case we are led to the following relations:

$$\begin{aligned} k_\textrm{B}= \text {Boltzmann's constant}, \qquad \beta =\frac{1}{k_\textrm{B}T}. \end{aligned}$$
(6.101)

This determines the Lagrange multiplier \(\beta \) in terms of a physical characteristic, namely, temperature.
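The proportionality \(\delta S=k_\textrm{B}\beta \delta Q\) of (6.100) can be checked directly: perturb \(\beta \) slightly, recompute the canonical probabilities, and compare the change in the statistical entropy with \(\beta \delta Q\). A sketch with an invented three-level spectrum and \(k_\textrm{B}=1\):

```python
import math

# Numerical check of dS = k_B * beta * dQ (6.100) for a small change
# in beta.  Toy spectrum; k_B set to 1, so beta = 1/T.
E = [0.0, 1.0, 2.5]

def probs(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)
    return [wi / Z for wi in w]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)   # statistical entropy, k_B = 1

beta, dbeta = 0.8, 1e-6
p0, p1 = probs(beta), probs(beta + dbeta)
dS = entropy(p1) - entropy(p0)
dQ = sum(e * (q - p) for e, p, q in zip(E, p0, p1))   # delta Q as in (6.91)
print(dS, beta * dQ)   # agree to first order in dbeta
```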

Having identified statistical entropy S as the thermodynamic entropy, we can legitimately equate its expression (6.79) with Euler’s equation to arrive at the relation

$$\begin{aligned} \alpha =\beta \mu , \end{aligned}$$
(6.102)

where \(\mu \) is the chemical potential.

Having identified the statistical entropy with the thermodynamic one, we need to show that the entropy of an isolated system never decreases. That assertion has been proved in Sect. 13.6. Using it, the second law (1.53) may be arrived at by statistical mechanical considerations as follows. Since the system and reservoir together are isolated, their combined entropy increases:

$$\begin{aligned} \textrm{d}S+\textrm{d}S_R\ge 0, \end{aligned}$$
(6.103)

where \(\textrm{d}S\) stands for the change in entropy of the system and \(\textrm{d}S_R\) is that for the reservoir. We know that if \(\textrm{d}Q\) is the amount of heat transmitted from the reservoir to the system at temperature T then the change in the reservoir’s entropy is

$$\begin{aligned} \textrm{d}S_R=-\frac{\textrm{d}Q}{T}. \end{aligned}$$
(6.104)

Substitution of (6.104) in (6.103) leads to the second law (1.53).

6.4.4 Third Law of Thermodynamics

Recall that the third law defines the scale for measuring entropy by asserting that entropy vanishes at absolute zero.

In order to see how it follows from statistical considerations, rewrite \(p_{mN}\) for the canonical ensemble in the form

$$\begin{aligned} p_{mN}=\frac{\exp (-\beta E_{mN})}{\sum _{m} \exp (-\beta E_{mN})} =\frac{\exp (-\beta (E_{mN}-E_{0N}))}{\sum _{m} \exp (-\beta (E_{mN}-E_{0N}))}, \end{aligned}$$
(6.105)

where \(E_{0N}\) is the ground state energy so that \(E_{mN}>E_{0N}\) for \(m\ne 0\). If the ground state is non-degenerate then, in the limit \(T\rightarrow 0\), the equation above shows that \(p_{mN}=\delta _{m0}\) which on substitution in (6.3) for entropy yields \(S=0\). If the ground state is degenerate with W as the number of states then \(S=k_\textrm{B}\textrm{ln}(W)\). Consequently, the entropy per unit volume is

$$\begin{aligned} s=\frac{k_\textrm{B}}{V}\textrm{ln}(W)\rightarrow 0,\qquad V\rightarrow \infty , \end{aligned}$$
(6.106)

provided \(\textrm{ln}(W)\) grows more slowly than V. Thus the statistical description leads to the third law.
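A quick numerical illustration of this limit, with an invented five-level spectrum whose ground state is doubly degenerate (\(W=2\), \(k_\textrm{B}=1\)):

```python
import math

# Illustrative check of the third law: for a spectrum with a W-fold
# degenerate ground state, S -> k_B ln(W) as T -> 0 (beta -> infinity).
# The level scheme below is invented for the demonstration.
E = [0.0, 0.0, 1.0, 1.5, 2.0]    # ground state doubly degenerate: W = 2

def entropy(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)
    p = [wi / Z for wi in w]
    return -sum(pi * math.log(pi) for pi in p)   # k_B = 1

for beta in (1.0, 10.0, 100.0):
    print(beta, entropy(beta))
print(math.log(2))   # limiting value k_B ln W with W = 2
```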

6.5 Thermodynamic Potentials in Terms of Partition Functions

In this section we derive relations between the thermodynamic potentials and the partition functions.

  1.

    With \(\beta \) given by (6.101), the expression (6.35) for entropy in the canonical ensemble reads (with \(S_N\rightarrow S\))

    $$\begin{aligned} TS=U+\beta ^{-1} \textrm{ln}(Z_N). \end{aligned}$$
    (6.107)

    Since we have identified the statistical entropy as the thermodynamic entropy, the expression above for the statistical entropy should be the same as the expression (2.57) for the thermodynamic entropy in terms of the Helmholtz free energy F(TVN), leading to the following relation between the canonical partition function \(Z_N\) and F(TVN):

    $$\begin{aligned} F(T, V, N)=-\beta ^{-1} \textrm{ln}(Z_N). \end{aligned}$$
    (6.108)

    The equations in (2.60) now assume the form

    $$\begin{aligned} {} & {} S=\left( \frac{\partial \beta ^{-1}\textrm{ln}(Z_N)}{\partial T}\right) _{V,N}, \quad P=\beta ^{-1}\left( \frac{\partial \textrm{ln}(Z_N)}{\partial V}\right) _{N, T},\nonumber \\ {} & {} \mu =-\beta ^{-1}\left( \frac{\partial \textrm{ln}(Z_N)}{\partial N} \right) _{V, T}. \end{aligned}$$
    (6.109)
  2.

    With \(\beta \), \(\alpha \) given by (6.101) and (6.102), rewrite (6.78) as

    $$\begin{aligned} ST=U-\mu N+\beta ^{-1}\textrm{ln}(Z_G). \end{aligned}$$
    (6.110)

    On comparing this with the expression for entropy in terms of the grand potential \(\varOmega (T, V, \mu )\) in (2.75) we see that the entropy therein will be the same as that in the equation above if

    $$\begin{aligned} \varOmega (T, V, \mu )=-\beta ^{-1}\textrm{ln}(Z_G)=-PV, \end{aligned}$$
    (6.111)

    where the last equality is due to (6.68). We see that \(\varOmega \) in (6.111) has the form (2.76) as it should. Equations (2.78) now read

    $$\begin{aligned} {} & {} S=\left( \frac{\partial \beta ^{-1}\textrm{ln}(Z_G)}{\partial T}\right) _{V, \mu },\quad P=\beta ^{-1}\left( \frac{\partial \textrm{ln}(Z_G)}{\partial V}\right) _{T, \mu },\nonumber \\ {} & {} N=\beta ^{-1}\left( \frac{\partial \textrm{ln}(Z_G)}{\partial \mu }\right) _{T, V}. \end{aligned}$$
    (6.112)
  3.

    The Gibbs potential defined in (2.65) may be written in terms of the Helmholtz potential as

    $$\begin{aligned} G(T, P, N)=F(T, V, N)+PV. \end{aligned}$$
    (6.113)

    Using (6.108), the expression for the Gibbs potential in terms of the partition function \(Z_N\) follows:

    $$\begin{aligned} G(T, P, N)=-\beta ^{-1} \textrm{ln}(Z_N)+PV. \end{aligned}$$
    (6.114)
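The relations of this section can be spot-checked on exactly solvable toy models. The sketch below is illustrative (units with \(k_\textrm{B}=1\), invented level spacings): it verifies \(S=-(\partial F/\partial T)_{V,N}\) with \(F=-\beta ^{-1}\textrm{ln}(Z_N)\) for a two-level system, and \(N=\beta ^{-1}(\partial \textrm{ln}(Z_G)/\partial \mu )_{T,V}\) for a single fermionic level.

```python
import math

# Canonical check: two-level system with gap eps, F = -T ln Z_N (k_B = 1),
# and S obtained both from -dF/dT and directly from the probabilities.
eps = 1.0

def F(T):
    return -T * math.log(1.0 + math.exp(-eps / T))

T = 0.7
h = 1e-6
S_from_F = -(F(T + h) - F(T - h)) / (2 * h)

Z = 1.0 + math.exp(-eps / T)
p = [1.0 / Z, math.exp(-eps / T) / Z]
S_direct = -sum(pi * math.log(pi) for pi in p)
print(S_from_F, S_direct)   # should agree

# Grand-canonical check: single fermionic level, ln Z_G = ln(1 + e^{beta(mu-eps)}),
# N = beta^{-1} (d ln Z_G / d mu)_{T,V}.
beta, mu = 2.0, 0.3

def lnZG(mu):
    return math.log(1.0 + math.exp(beta * (mu - eps)))

N_from_Z = (lnZG(mu + h) - lnZG(mu - h)) / (2 * h) / beta
N_exact = 1.0 / (math.exp(beta * (eps - mu)) + 1.0)
print(N_from_Z, N_exact)    # should agree
```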

Exercises

  1. Ex. 6.5.

    Using (2.16) for entropy of an ideal gas in (6.107), show that

    $$\begin{aligned} \textrm{ln}(Z_N)=N\left\{ \textrm{ln}(V)-c\textrm{ln}(\beta )-\textrm{ln}(N) +B\right\} , \end{aligned}$$
    (6.115)

    where \(B=c\textrm{ln}(c)+A-c\) is a constant. With \(c=3/2\), this is the same as \(Z_N\) derived in (7.18) using the theory of statistical mechanics, provided the factor N! therein is approximated using Stirling’s approximation and the constant B in (6.115) is identified with the corresponding term therein. As we know, the constant B itself cannot be determined by thermodynamics.

  2. Ex. 6.6.

    Compare the expression (6.50) of entropy in terms of \(Z_{iN}\) with the expression (2.65) of G(TPN) to show that if the undetermined Lagrange multiplier \(\gamma \) in (6.50) is

    $$\begin{aligned} \gamma =\beta V, \end{aligned}$$
    (6.116)

    then

    $$\begin{aligned} G(T, P, N)=-\beta ^{-1}\textrm{ln}(Z_{iN}). \end{aligned}$$
    (6.117)

    Consequently, the expression (6.49) for the isobaric-isothermal distribution assumes the form

    $$\begin{aligned} p_{mN}=\frac{1}{Z_i}\exp \left( -\beta E_{mN} -\beta P_{mN} V\right) . \end{aligned}$$
    (6.118)
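The thermodynamic form of \(\textrm{ln}(Z_N)\) in Ex. 6.5 can be compared numerically with \(\textrm{ln}\big (V^N/(N!\,\lambda ^{3N})\big )\). The sketch below assumes units with \(h=m=k_\textrm{B}=1\), for which the constant works out to \(B=1+\tfrac{3}{2}\textrm{ln}(2\pi )\); this choice of constants is for illustration only. The relative difference, which comes entirely from Stirling’s approximation, shrinks as N grows.

```python
import math

# Compare the thermodynamic form ln Z_N = N(ln V - c ln beta - ln N + B),
# c = 3/2, against ln(V^N / (N! lambda^{3N})), with h = m = k_B = 1 so
# that lambda^3 = (beta / (2 pi))^{3/2} and B = 1 + (3/2) ln(2 pi).
V, beta = 100.0, 0.5
B = 1.0 + 1.5 * math.log(2 * math.pi)

def lnZ_exact(N):
    lam3 = (beta / (2 * math.pi)) ** 1.5          # lambda^3 with h = m = 1
    return N * math.log(V) - math.lgamma(N + 1) - N * math.log(lam3)

def lnZ_thermo(N):
    return N * (math.log(V) - 1.5 * math.log(beta) - math.log(N) + B)

for N in (10, 1000, 100000):
    a, b = lnZ_exact(N), lnZ_thermo(N)
    print(N, (a - b) / abs(a))   # relative difference shrinks with N
```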