Statistical Physics - Statistics Miscellanea
Information content and Entropy
Given a discrete random variable \(X\) with probability mass function \(p_X(x)\), the self-information (todo: what about mutual information of random variables?) is defined as the opposite of the logarithm of the mass function \(p_X(x)\),

\[ I_X(x) = -\log p_X(x) \ . \]

The information content of independent random variables is additive. Since \(p_{X,Y}(x,y) = p_X(x) \, p_Y(y)\),

\[ I_{X,Y}(x,y) = -\log \left( p_X(x) \, p_Y(y) \right) = -\log p_X(x) - \log p_Y(y) = I_X(x) + I_Y(y) \ . \]
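As a quick numerical check of the additivity (a minimal sketch with made-up probabilities, using natural logarithms):

```python
import numpy as np

def self_information(p: float) -> float:
    """Self-information I = -log(p) of an outcome with probability p (natural log, i.e. nats)."""
    return -np.log(p)

# Made-up marginal probabilities of two independent outcomes.
p_x, p_y = 0.2, 0.5

# For independent variables the joint mass function factorizes: p(x, y) = p(x) * p(y).
i_joint = self_information(p_x * p_y)
i_sum = self_information(p_x) + self_information(p_y)

assert np.isclose(i_joint, i_sum)
print(i_joint, i_sum)   # both equal -log(0.1), roughly 2.3026 nats
```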
Shannon entropy. The Shannon entropy of a discrete random variable \(X\) is defined as the expected value of its information content,

\[ H(X) = \mathbb{E}\left[ I_X(X) \right] = -\sum_x p_X(x) \log p_X(x) \ . \]
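The expectation above translates directly into a finite sum over the support; a minimal sketch (the example distribution is arbitrary, entropy in nats):

```python
import numpy as np

def shannon_entropy(p) -> float:
    """Shannon entropy H(X) = -sum_x p(x) log p(x) in nats; zero-probability terms are dropped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                 # convention: 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)))

p = np.array([0.5, 0.25, 0.125, 0.125])   # arbitrary example distribution
print(shannon_entropy(p))                 # ≈ 1.2130 nats
print(np.log(4))                          # uniform over 4 states gives the maximum, log(4) ≈ 1.3863
```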
Gibbs entropy. The Gibbs entropy was defined by J. W. Gibbs in 1878,

\[ S = -k_B \sum_i p_i \ln p_i \ , \]

with \(k_B\) the Boltzmann constant and \(p_i\) the probability of microstate \(i\).
Additivity holds for independent random variables.
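The additivity can be checked numerically as well (a sketch with assumed marginals, setting \(k_B = 1\)):

```python
import numpy as np

def gibbs_entropy(p, k_B: float = 1.0) -> float:
    """Gibbs entropy S = -k_B * sum_i p_i ln(p_i); k_B defaults to 1 for simplicity."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-k_B * np.sum(p * np.log(p)))

# Two independent subsystems with assumed (arbitrary) distributions.
p = np.array([0.7, 0.3])
q = np.array([0.1, 0.4, 0.5])

# The joint distribution of independent subsystems is the outer product p_i * q_j.
joint = np.outer(p, q)

assert np.isclose(gibbs_entropy(joint), gibbs_entropy(p) + gibbs_entropy(q))
print(gibbs_entropy(joint))   # equals S(p) + S(q)
```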
Boltzmann entropy. The Boltzmann entropy holds for uniform distributions over \(\Omega\) possible states, \(p_i = \frac{1}{\Omega}\). Gibbs’ entropy of this uniform distribution becomes

\[ S = -k_B \sum_{i=1}^{\Omega} \frac{1}{\Omega} \ln \frac{1}{\Omega} = k_B \ln \Omega \ . \]
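A one-line numerical check of the reduction above (again with \(k_B = 1\), and \(\Omega = 6\) chosen arbitrarily, e.g. a fair die):

```python
import numpy as np

omega = 6                              # number of equally likely states
p = np.full(omega, 1.0 / omega)        # uniform distribution p_i = 1 / Omega

gibbs = -np.sum(p * np.log(p))         # Gibbs entropy with k_B = 1
boltzmann = np.log(omega)              # Boltzmann entropy ln(Omega) with k_B = 1

assert np.isclose(gibbs, boltzmann)    # the two expressions coincide for the uniform case
print(gibbs, boltzmann)                # both ≈ 1.7918
```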
Entropy in Quantum Mechanics. todo
Boltzmann distribution
Given a set of discrete states \(i\) with energies \(E_i\) and probabilities \(p_i\), and the ensemble average as the “macroscopic quantity” \(E = \sum_i p_i E_i\), the Boltzmann distribution is the distribution that maximizes the entropy under these constraints (todo: link to min info, max uncertainty).
The distribution follows from the constrained optimization

\[ \max_{\{p_i\}} \left[ -\sum_i p_i \ln p_i + \alpha \left( 1 - \sum_i p_i \right) + \beta \left( E - \sum_i p_i E_i \right) \right] , \]

with Lagrange multipliers \(\alpha\) and \(\beta\) enforcing normalization and the fixed average \(E\). The stationarity condition reads

\[ -\ln p_i - 1 - \alpha - \beta E_i = 0 \ , \]

and thus

\[ p_i = C \, e^{-\beta E_i} \ , \qquad C = e^{-1-\alpha} \ , \]

and the normalization constant \(C\) is determined by the normalization condition

\[ \sum_i p_i = C \sum_i e^{-\beta E_i} = 1 \ . \]

The inverse \(Z = C^{-1}\) is defined as the partition function,

\[ Z = \sum_i e^{-\beta E_i} \ , \]

and the probability distribution becomes

\[ p_i = \frac{e^{-\beta E_i}}{Z} \ . \]
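A small numerical sketch of the final result (the three energy levels and the value of \(\beta\) are arbitrary choices for illustration; \(\beta\) is identified with \(1/k_B T\) below):

```python
import numpy as np

def boltzmann_distribution(energies, beta: float):
    """Return the Boltzmann probabilities p_i = exp(-beta * E_i) / Z and the partition function Z."""
    weights = np.exp(-beta * np.asarray(energies, dtype=float))  # unnormalized Boltzmann weights
    Z = weights.sum()                                            # partition function Z = sum_i exp(-beta * E_i)
    return weights / Z, Z

energies = [0.0, 1.0, 2.0]     # hypothetical three-level system, arbitrary energy units
beta = 0.7                     # arbitrary inverse temperature

p, Z = boltzmann_distribution(energies, beta)
print(p, Z)
print(p.sum())                 # normalization: the probabilities sum to 1
print(np.dot(p, energies))     # the constrained macroscopic quantity E = sum_i p_i E_i
```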
Properties.
Thermodynamics. A comparison between the statistical description and classical thermodynamics.
The first principle of classical thermodynamics (for a single-component gas with no electric charge, …) reads

\[ dE = T \, dS - p \, dV + \mu \, dN \ , \]

with internal energy \(E\), temperature \(T\), entropy \(S\), pressure \(p\), volume \(V\), chemical potential \(\mu\) and particle number \(N\).
The entropy of the Boltzmann distribution reads

\[ S = -k_B \sum_i p_i \ln p_i = -k_B \sum_i p_i \left( -\beta E_i - \ln Z \right) = k_B \beta E + k_B \ln Z \ . \]
From classical thermodynamics, the temperature \(T\) can be defined through the partial derivative of the entropy of a system with respect to its internal energy, keeping all the other independent variables constant,

\[ \frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N} \ . \]
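Combining the last two relations identifies the Lagrange multiplier \(\beta\) with the inverse temperature. The following is only a sketch of the composite-function derivative (the todo below asks for a cleaner treatment): \(S\) depends on \(E\) both explicitly and through \(\beta(E)\), but the implicit contribution cancels because \(\frac{\partial \ln Z}{\partial \beta} = -E\),

\[ \frac{1}{T} = \left( \frac{\partial S}{\partial E} \right)_{V,N} = k_B \beta + k_B \left( E + \frac{\partial \ln Z}{\partial \beta} \right) \frac{\partial \beta}{\partial E} = k_B \beta \qquad \Longrightarrow \qquad \beta = \frac{1}{k_B T} \ . \]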
todo

- write the derivative above clearly in terms of composite functions
- microscopic/statistical approach to the first principle of thermodynamics