Arnold Sommerfeld was a highly influential physicist, one of those who helped define the principles of quantum mechanics alongside Schrödinger, Bohr, and other great scientific minds of the time. His thesis advisor was Lindemann, a very notable mathematician, and Heisenberg, Pauli, Debye, and Ewald were among the students he advised during his career as a scientist and academic. This is what Sommerfeld said about thermodynamics:
Thermodynamics is a funny subject. The first time you go through it, you don’t understand it at all. The second time you go through it, you think you understand it, except for one or two small points. The third time you go through it, you know you don’t understand it, but by that time you are so used to it, it doesn’t bother you any more.
Going through thermodynamics my third time in January 2009, I could affirm that Sommerfeld was absolutely correct. These are the notes I compiled during my preparations for the thermodynamics portion of my Ph.D. qualifying exam.
The Zeroth Law of Thermodynamics
If two systems are separately in thermal equilibrium with a third system, they must be in thermal equilibrium with each other.
There is something called temperature which determines thermal equilibrium. This law says nothing about hotter or colder; it just establishes a quantity that defines a system as being in equilibrium or non-equilibrium.
The First Law of Thermodynamics
If a system is caused to change from an initial state to a final state by adiabatic means alone, the amount of work is the same for all adiabatic paths.
Energy is a function of state, so it does not matter what path is taken between an initial and a final energy state; the change will be the same regardless. From this comes the fact that for any closed path there is no change in energy. This is also a roundabout way of stating that the sum of work and heat is conserved for any process, or

$$\Delta U = q + w$$

where $w$ is taken as the work done on the system.
Based upon this framework, enthalpy is defined as the sum of internal energy and the product of pressure and volume, $H = U + PV$. Restating in terms of infinitesimal changes, any small change in enthalpy is the sum of the change in internal energy, the product of pressure and the change in volume, and the product of volume and the change in pressure: $dH = dU + P\,dV + V\,dP$. The change in heat of a system is equal to its enthalpy change in any isobaric process ($dH = \delta q$ at constant pressure); this makes enthalpy of great interest in experiments which occur in open atmosphere.
From the definition of internal energy and enthalpy, the heat capacity (change in heat per change in temperature) can be conveniently expressed as:
$$C_V = \left(\frac{\partial U}{\partial T}\right)_V \quad \text{at constant volume, and}$$

$$C_P = \left(\frac{\partial H}{\partial T}\right)_P \quad \text{at constant pressure.}$$
Many values, such as the heat of formation, arise from this; the heat of formation is defined as the heat absorbed at constant pressure when a compound is formed from its constituent elements. This is a measurable thermodynamic quantity. Similarly, heats of reaction are measurable.
The Second Law of Thermodynamics
The second law is commonly defined in two ways. The first, the Clausius definition, states:
No process is possible whose sole result is transfer of heat from a colder body to a hotter body.
The Kelvin definition is stated as
No process is possible whose sole result is complete conversion of heat into work.
Both statements express the same underlying principles: heat flows from hot to cold, it is possible to extract work from this process, and work must be put into a process which is to run in reverse. This law gives rise to the study of heat engines, devices which take heat from a hot reservoir, transfer it to a cold reservoir, and produce work on the surroundings.
The First and Second Laws of Thermodynamics
At this point the first and second laws of thermodynamics can be brought together to define the behavior and efficiency of heat engines; the work done by an engine must be equal to the difference between the heat drawn from the hot reservoir and the heat expelled to the cold reservoir in order to maintain $\Delta U = 0$ over a complete cycle ($W = Q_h - Q_c$).
The efficiency of a thermodynamic engine is defined as the ratio of the work done by the engine to the heat that is put into the engine. A Carnot engine has the maximum efficiency of any engine held between two given temperature reservoirs and is also termed a “reversible engine” because the cycle it follows involves exclusively reversible expansions and compressions. Specifically,
- Isothermal and reversible expansion (temperature constant, heat in, work out)
- Adiabatic and reversible expansion (temperature decrease, heat constant, work out)
- Isothermal and reversible compression (temperature constant, heat out, work in)
- Adiabatic and reversible compression (temperature increase, heat constant, work in)
The efficiency of an engine following this process is $\eta = 1 - T_c/T_h$, and this maximum efficiency is shared by any reversible engine. From this arises (with a bit of a proof) the relationship that the ratio of the hot and cold reservoir temperatures is equal to the ratio of the heat transferred from the hot reservoir to the engine and the heat expelled from the engine to the cold reservoir ($Q_h/Q_c = T_h/T_c$). Also, the work done by this process is equal to the area enclosed by the four curves (plotted as pressure versus volume).
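To make the numbers concrete, here is a minimal sketch; the reservoir temperatures and heat input are arbitrary values chosen for illustration, not taken from any particular problem.

```python
# Sketch: efficiency and work output of a reversible (Carnot) engine
# operating between two reservoirs. All numerical values are arbitrary.

def carnot_efficiency(t_hot, t_cold):
    """Maximum efficiency of any engine operating between t_hot and t_cold (in kelvin)."""
    return 1.0 - t_cold / t_hot

t_hot, t_cold = 600.0, 300.0   # reservoir temperatures, K
q_hot = 1000.0                 # heat drawn from the hot reservoir, J

eta = carnot_efficiency(t_hot, t_cold)
work = eta * q_hot             # work delivered by the engine
q_cold = q_hot - work          # heat expelled to the cold reservoir

print(f"efficiency = {eta:.2f}")      # 0.50
print(f"work out   = {work:.0f} J")   # 500 J
print(f"Q_cold     = {q_cold:.0f} J") # 500 J
print(f"Q_hot/Q_cold = {q_hot/q_cold:.2f}, T_hot/T_cold = {t_hot/t_cold:.2f}")
```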
Knowing this, the work of any cycle of arbitrary P-V complexity can be expressed as the summation of work done by small Carnot cycles inscribed within the complex cycle in P-V space. As the area of each Carnot cycle becomes infinitesimally small, it becomes a more exact approximation of the true cycle. Each of these infinitesimally small Carnot cycles does an infinitesimal amount of work and expels an infinitesimal amount of heat. Given the relationship
$$\frac{Q_h}{T_h} = \frac{Q_c}{T_c}$$

and the fact that the heat transfers are conservative since the Carnot cycles are linked, $\oint \delta q_{\mathrm{rev}}/T$ (the sum around the cycle of each infinitesimal heat transfer divided by the temperature at which it is transferred) must be zero for a complete cycle. Thus, $dS = \delta q_{\mathrm{rev}}/T$ is the differential of a state function, and this state function is called entropy. The entropy change for any reversible cycle is therefore zero; furthermore, irreversible processes have the effect of increasing the entropy of the surroundings.
It is worth noting that this definition of entropy, the classical definition, simply states that the quotient of a reversible heat transfer and the temperature at which it occurs is (the differential of) a state function called entropy. The statistical mechanical definition gives it a physical significance, but that is beyond the scope of the second law, which is more general and was stated from a purely classical standpoint.
The Third Law of Thermodynamics
The third law of thermodynamics is rather anticlimactic in that, as stated by Arnold Sommerfeld,
As the temperature of a system tends to absolute zero, its entropy tends to a constant value $S_0$ which is independent of pressure, state of aggregation, et cetera.
This is important in that it reveals entropy to be a state function which has a determinable, absolute value. At thermodynamic equilibrium, $S_0$ can be taken to be zero and the entropy calculated explicitly without loss of generality or self-consistency. However, it should be noted that there are several apparent “exceptions” to the third law where the entropy is nonzero at zero Kelvin:
- amorphous materials - the nonzero entropy of glassy materials is due to the fact that the glassy state is never an equilibrium state; the equilibrium state for any material at absolute zero is crystalline.
- mixtures - solubility goes to zero as temperature goes to zero, so solutions are not thermodynamically stable at absolute zero.
- polycrystalline solids - again, polycrystals are not thermodynamically the most stable form of a solid.
Another variation on the third law discusses explicitly another ramification:
It is impossible to reach absolute zero in a finite number of operations.
I have yet to find a clear proof that explains how this statement is justified, but it is a postulate of Nernst’s.
The Statistical Interpretation of Entropy
Boltzmann claimed that there is a relationship between the entropy of a system in a given state and the probability of that state’s existence. This statement was then quantified by Planck as

$$S = k \ln W$$

where $W$ represents the probability that a given state will exist, $k$ is Boltzmann’s constant, and $S_0 = 0$ by the third law of thermodynamics. Determining $W$ exactly involves knowing information about the state of all particles in a system, which is impossible at the macroscale, so statistics must be employed to put this concept to practical use.
Necessary Statistics
Given eight distinguishable particles and four boxes (with each box capable of holding eight particles if need be), the first particle can be placed in any of the four boxes. Similarly, the second particle can be placed completely independently of the first, and it too can be placed in any of the four boxes. Thus, the total possible number of ways to place the eight particles in the four boxes would be $4^8$, or 65536.
Consider a different case where the eight particles are either of type A (e.g., Cu atoms) or type B (e.g., Ni atoms). Atoms of a given type are indistinguishable from each other, but not from atoms of the other type. This means that the 65536 outcomes from above would have a good deal of redundancy; many of them would be equivalent configurations of particle arrangement. To determine the number of distinct configurations, it is necessary to determine the number of possible ways each box can be filled.
For the sake of example, this case will assume that each box will contain two particles after the distribution is complete. There exists a theorem which says that the number of combinations of N (= 8) objects taken n (= 2) at a time is

$$\binom{N}{n} = \frac{N!}{n!\,(N-n)!}$$
Since there are eight particles and they are being placed two at a time (two fit in each box and the boxes are filled one at a time), $N = 8$ and $n = 2$, and $\binom{8}{2} = 28$. It is not clear to me how the fact that there are four particles of type A and four particles of type B factors into this. Perhaps the definition of $W$ only holds for binary mixtures; academic treatments of this topic are always limited to binary mixtures at best, so it is often implicit in such stated “theorems” that they only hold for mixtures of two species. These sorts of implicit limits of applicability are found throughout thermodynamics texts, and they are a large reason why the subject is so frustrating to learn.
There are six particles left, and placing another pair into the next box results in $\binom{6}{2}$, or 15. The third box gives $\binom{4}{2} = 6$, and the final box $\binom{2}{2} = 1$. Thus, the number of ways of placing two objects in the first box, two objects in the second, two in the third, and two in the fourth amounts to the product of these four possibilities, or equivalently,

$$W = \frac{N!}{\prod_i n_i!}$$
This is a generalized solution which can be applied to any distribution of $N$ atoms over boxes which can hold $n_i$ particles each. In the calculated case of eight atoms, $W = \frac{8!}{2!\,2!\,2!\,2!} = 2520$. That is, there are 2520 unique configurations of two particles being placed in each of four boxes (given the fact that the eight particles are equally divided into four of type A and four of type B?).
Now that there are 2520 unique configurations of this most probable distribution and a maximum of 65536 possible configurations (including degenerate ones), the probability of this most-probable distribution is $2520/65536$, or 0.03845. Any other possible distribution will have a lower probability of occurrence than this, and as a result of the relation $S = k \ln W$, it will also have a lower entropy.
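These counts are easy to reproduce numerically. The sketch below assumes, as in the example above, that the eight particles end up two per box:

```python
# Sketch: reproduce the counting for 8 particles distributed 2-per-box over 4 boxes.
import math

k_B = 1.380649e-23  # Boltzmann's constant, J/K

total = 4 ** 8      # every way to place 8 distinguishable particles -> 65536

# filling the boxes two at a time: C(8,2) * C(6,2) * C(4,2) * C(2,2)
W = math.comb(8, 2) * math.comb(6, 2) * math.comb(4, 2) * math.comb(2, 2)

# equivalently, the multinomial coefficient 8! / (2! 2! 2! 2!)
W_multinomial = math.factorial(8) // math.factorial(2) ** 4

probability = W / total        # probability of the most probable (2,2,2,2) distribution
entropy = k_B * math.log(W)    # S = k ln W for this distribution

print(W, W_multinomial)        # 2520 2520
print(round(probability, 5))   # 0.03845
print(entropy)                 # ~1.1e-22 J/K
```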
The value obtained only accounts for eight particles; because most systems have on the order of at least $10^{23}$ particles (atoms), direct calculation of the highest-entropy configuration for any real system is mathematically prohibitive. Stirling’s approximation mitigates this.
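As a quick illustration of why the approximation suffices, the sketch below compares $\ln N!$ with the Stirling form $N\ln N - N$ for a few arbitrarily chosen values of $N$:

```python
# Sketch: relative accuracy of Stirling's approximation, ln N! ~ N ln N - N.
import math

for n in (10, 100, 10_000, 1_000_000):
    exact = math.lgamma(n + 1)          # ln(N!) computed without overflow
    stirling = n * math.log(n) - n
    rel_err = (exact - stirling) / exact
    print(f"N = {n:>9}: ln N! = {exact:.4g}, Stirling = {stirling:.4g}, "
          f"relative error = {rel_err:.2e}")
```

The relative error shrinks rapidly with $N$, which is why the approximation is harmless at the particle counts relevant to thermodynamics.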
Entropy of Mixing
Given $N$ total atoms, $n_A$ of type A and $n_B$ of type B, there will be an entropy associated with the process of combining the two homogeneous systems and letting them equilibrate to their highest-entropy state (which is a condition of equilibrium).
Before mixing, the $n_A$ atoms of A occupy $n_A$ sites of A; there are $n_A$ particles and effectively only one box (since the A atoms are indistinguishable from each other), meaning $W_A = 1$. Similarly, $W_B = 1$. Since $S = k \ln W$, $S = k \ln 1 = 0$. The starting entropy is taken to be zero.
After mixing, the highest-entropy configuration will have A and B distributed evenly among the $N = n_A + n_B$ sites. Therefore $W = \frac{N!}{n_A!\,n_B!}$. Calculating the final entropy from this gives the entropy of mixing as $\Delta S_{mix} = k \ln\frac{N!}{n_A!\,n_B!}$. Stirling’s approximation causes this to simplify considerably, and by taking $x_A$ to be $n_A/N$ (the fraction A) and $x_B$ to be $n_B/N$, the result is the entropy of mixing

$$\Delta S_{mix} = -Nk\left(x_A \ln x_A + x_B \ln x_B\right)$$
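A short numerical check (the system size and composition below are arbitrary choices for illustration) compares the exact $k \ln W$ against the Stirling-approximated expression:

```python
# Sketch: exact S = k ln[N!/(n_A! n_B!)] versus the Stirling-approximated
# ideal entropy of mixing, -N k (x_A ln x_A + x_B ln x_B).
import math

k_B = 1.380649e-23  # J/K

N = 10_000          # arbitrary example size
n_A = 3_000
n_B = N - n_A
x_A, x_B = n_A / N, n_B / N

ln_W = math.lgamma(N + 1) - math.lgamma(n_A + 1) - math.lgamma(n_B + 1)
S_exact = k_B * ln_W
S_ideal = -N * k_B * (x_A * math.log(x_A) + x_B * math.log(x_B))

print(S_exact, S_ideal)  # the two values agree to within a fraction of a percent
```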
The Boltzmann Distribution
Starting with Boltzmann’s hypothesis of $S = k \ln W$ and applying the constraints of mass conservation ($N$ is a constant) and energy conservation (the sum of the energies of all particles is constant), $W$ can be maximized to determine what the equilibrium distribution of energies will be for a system of particles. The result of this is the Boltzmann distribution,

$$\frac{n_i}{N} = \frac{e^{-\epsilon_i/kT}}{\sum_j e^{-\epsilon_j/kT}}$$

where the denominator is also termed the partition function. This distribution represents the fraction of particles populating energy state $\epsilon_i$, or equivalently, the probability that a given particle will have energy $\epsilon_i$.
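As an illustration, the sketch below evaluates these fractions for a handful of made-up energy levels at a made-up temperature:

```python
# Sketch: fraction of particles in each energy level according to the
# Boltzmann distribution, n_i/N = exp(-e_i/kT) / sum_j exp(-e_j/kT).
import math

k_B = 1.380649e-23   # J/K
T = 300.0            # temperature, K (arbitrary example)

# arbitrary example energy levels, in joules
levels = [0.0, 1.0e-21, 2.0e-21, 4.0e-21]

weights = [math.exp(-e / (k_B * T)) for e in levels]
Z = sum(weights)                   # the partition function
fractions = [w / Z for w in weights]

for e, f in zip(levels, fractions):
    print(f"energy {e:.1e} J: fraction {f:.3f}")
print("sum of fractions:", round(sum(fractions), 6))  # 1.0
```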
For a sufficiently large distribution of energy levels, the Boltzmann distribution can be integrated. Thus, the probability of a particle having an energy greater than $U$, where $U$ is any value and not necessarily a discrete energy state, is the integral of the numerator from $U$ to infinity divided by the integral of the denominator from zero to infinity. This integral converges neatly to $e^{-U/kT}$, which (by no coincidence) is the Arrhenius factor for activated processes.
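Written out explicitly, the integration described above (treating the energy spectrum as continuous) is

$$P(\epsilon > U) \;=\; \frac{\displaystyle\int_U^{\infty} e^{-\epsilon/kT}\,d\epsilon}{\displaystyle\int_0^{\infty} e^{-\epsilon/kT}\,d\epsilon} \;=\; \frac{kT\,e^{-U/kT}}{kT} \;=\; e^{-U/kT}.$$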
Similarly, the mean energy of a system can be calculated by multiplying the probability of each energy level being occupied by the energy of that level; at the continuum level, this becomes the integral of the Boltzmann distribution multiplied by the energy at each infinitesimal energy level, taken from zero to infinity with respect to $\epsilon$. Again, the result simplifies neatly: the mean energy per particle is equal to $kT$.
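The corresponding mean-energy integral, evaluated by integration by parts, is

$$\bar{\epsilon} \;=\; \frac{\displaystyle\int_0^{\infty} \epsilon\, e^{-\epsilon/kT}\,d\epsilon}{\displaystyle\int_0^{\infty} e^{-\epsilon/kT}\,d\epsilon} \;=\; \frac{(kT)^2}{kT} \;=\; kT.$$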
This gives rise to the equipartition theorem, which states that a body has an average thermal energy of $\frac{1}{2}kT$ per degree of freedom. A degree of freedom is any way a particle can store energy. The three most common modes of energy storage are:
- vibrations (in the form of quantum harmonic oscillators)
- rotations (in the form of quantum rigid rotators)
- translations (travelling wave of any energy; no confinement, so translational modes behave classically to the zero Kelvin limit)
One important consequence of this is that the atomic structure of a gaseous molecule can provide new ways for the molecule to store energy; the heat capacity of a gas depends upon its molecular nature. For instance, a monatomic gas has only three translational modes, meaning its constant-volume molar heat capacity is $\frac{3}{2}R$. A diatomic gas has an additional vibrational mode and two rotational modes, resulting in a $C_V$ of $\frac{7}{2}R$ when the vibration is fully excited. More complex gaseous molecules incorporate more degrees of freedom and result in higher heat capacities. This is one major reason that He is used as a gaseous coolant.
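The counting just described can be written down directly. The sketch below assumes each translational or rotational degree of freedom contributes $R/2$ to the molar $C_V$ and each fully excited vibrational mode contributes $R$ (kinetic plus potential energy):

```python
# Sketch: constant-volume molar heat capacity from equipartition,
# assuming every listed mode is fully excited (classical behavior).
R = 8.314  # gas constant, J/(mol K)

def cv_molar(translational, rotational, vibrational):
    """Each translational/rotational DOF gives R/2; each vibrational mode gives R."""
    return (translational + rotational) * R / 2 + vibrational * R

print(cv_molar(3, 0, 0))  # monatomic gas (e.g. He): 3/2 R ~ 12.5 J/(mol K)
print(cv_molar(3, 2, 1))  # diatomic gas, vibration fully excited: 7/2 R ~ 29.1 J/(mol K)
```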
In the case of solids, there are three translational and three vibrational degrees of freedom, resulting in a constant heat capacity of $3R$. However, this principle set forth by Dulong and Petit only holds for solids which are said to be behaving “classically,” that is, those at a temperature above their Debye temperature. Most metals behave classically at room temperature and thus have a constant-volume heat capacity of about $3R$. Compounds such as ceramic oxides have higher Debye temperatures, though.
Below the Debye temperature, the heat capacity changes. Einstein proposed a model that was sufficiently accurate except at very low temperatures, with the deviation due to his approximation that the atomic oscillations were decoupled. Debye improved upon Einstein’s theory by treating the vibrational modes as standing waves; the resulting Debye model correctly reproduces the $T^3$ dependence of the heat capacity at very low temperatures.
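For reference (not derived in these notes), the low-temperature limit of the Debye model for the molar heat capacity is

$$C_V \;\approx\; \frac{12\pi^4}{5}\,R\left(\frac{T}{\Theta_D}\right)^{3} \qquad (T \ll \Theta_D),$$

which is the $T^3$ dependence mentioned above.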
Note to self: the original copy of these notes is in forrest/html/rci/mse.