Math 5062: Theory of Statistics II
Spring 2017

Instructor: Todd Kuffner (kuffner@math.wustl.edu)

Lecture: 2:30-4:00pm, Monday/Wednesday, Cupples I, Room 199

Office Hours: By appointment.

Course Overview: This course is intended for Ph.D. students in Statistics and Mathematics. Math 5061-5062 together form a year-long sequence in mathematical statistics leading to the Ph.D. qualifying exam in statistical theory. The first semester covers introductory measure-theoretic probability, decision theory, notions of optimality, principles of data reduction, and finite-sample estimation and inference. We will discuss foundational issues and consider several testing paradigms, such as the Neyman-Pearson, neo-Fisherian, and Bayesian approaches. Roughly half of the first semester is devoted to the measure-theoretic foundations of probability theory and statistics. The second semester covers asymptotic theory, including convergence in measure, limit theorems, integral and density approximations, and higher-order asymptotics. Maximum likelihood, Bayesian, and bootstrap methods will be considered. Empirical processes, large deviations, and modern topics (e.g., Bayesian nonparametric asymptotics) will be introduced as time permits. The course is taught in a theorem-proof style; applications will not be emphasized, and examples will be theoretical. Statistical software is not part of the course.

Prerequisite: It is assumed that students have taken a first course in real analysis, probability, and mathematical statistics, and are familiar with basic topology, multivariate calculus, and matrix algebra. Ph.D. students are strongly encouraged to enroll in Math 5051-5052 concurrently (Ph.D.-level measure theory and functional analysis).

If you are undecided about whether to take this course, it may help to look at the Ph.D. qualifying exam from the last time I taught the course (2014-2015). This time, the exams will include more measure theory and probability theory.

Textbook: There are many excellent books and online resources for the material in this course; however, no single book is suitable on its own. Because purchasing several books is costly, I will not require any particular book. The recommended readings for each lecture cite sections of the three books listed below (see the abbreviations under the Course Schedule), but students are welcome to consult other references for the same material. I will use the same books for Math 5061 and Math 5062. The links give Washington University students electronic access to two of the books through SpringerLink (when logged in to a library account), but I also recommend purchasing these books, as they are excellent references for researchers.
Another suggestion: Essentials of Statistical Inference by G.A. Young and R.L. Smith, Cambridge University Press. This book is much shorter and not intended as an encyclopedic reference, but it is perhaps the most clearly written, insightful treatment of modern statistical inference.

Homework: There will be weekly homework assignments, and the lowest homework grade will be dropped. You are strongly encouraged to write your solutions in LaTeX; if you write by hand, submissions must be clear and organized. Homework will be graded, but solutions will not be provided to students.

Homework grader: Qiyiwen Zhang (qiyiwenzhang@wustl.edu)

Blackboard: During the semester, homework assignments, grades for homework and the midterm exams, and any other course-related announcements will be posted to Blackboard or sent by email through Blackboard.

Attendance: Attendance is required for all lectures. Students who miss a lecture are responsible for any assignments and announcements made during it.

Grades: 20% Homework, 25% Midterm 1, 25% Midterm 2, 30% Final

Exams: Two in-class midterms and one final exam.

Final Course Grade: The letter grades for the course will be determined according to the following numerical grades on a 0-100 scale.
A+  impress me      B+  [87, 90)     C+  [77, 80)     D+  [67, 70)
A   93+             B   [83, 87)     C   [73, 77)     D   [63, 67)
A-  [90, 93)        B-  [80, 83)     C-  [70, 73)     D-  [60, 63)
F   [0, 60)
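As a quick illustration of how the weights in the grading scheme combine (hypothetical scores, chosen only to show the arithmetic): a student with homework average 90, midterm scores 85 and 80, and final score 88 earns

\[
0.20 \times 90 + 0.25 \times 85 + 0.25 \times 80 + 0.30 \times 88
= 18 + 21.25 + 20 + 26.4 = 85.65,
\]

which falls in $[83, 87)$ and corresponds to a B.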

Other Course Policies: Students are encouraged to look at the Faculty of Arts & Sciences policies.
Course Schedule: The schedule will be updated after each lecture to reflect what was actually covered. Abbreviations: AL = Athreya & Lahiri, Measure Theory and Probability Theory; TPE = Theory of Point Estimation (Lehmann & Casella); TSH = Testing Statistical Hypotheses (Lehmann & Romano).

Lecture 1
Neyman-Pearson testing; optimal simple tests via likelihood ratios; optimal one-sided tests via monotone likelihood ratios
Reading: TSH 3.1-3.2, 3.4
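For orientation, the key result here (a standard statement, paraphrased rather than taken from the lecture notes): the Neyman-Pearson lemma says that for testing a simple null $H_0: \theta = \theta_0$ against a simple alternative $H_1: \theta = \theta_1$, the most powerful level-$\alpha$ test rejects for large values of the likelihood ratio:

\[
\text{reject } H_0 \ \text{ iff } \ \frac{f_{\theta_1}(x)}{f_{\theta_0}(x)} > k,
\qquad k \text{ chosen so that } P_{\theta_0}\!\left(\frac{f_{\theta_1}(X)}{f_{\theta_0}(X)} > k\right) = \alpha .
\]

(Randomization on the boundary set where the ratio equals $k$ may be needed to attain exact level $\alpha$.)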
Lecture 2
Optimal tests with composite nulls via least favorable priors
Reading: TSH 3.8-3.9
Lecture 3
Optimal unbiased testing (without nuisance parameters)
Reading: TSH 4.1-4.4, 3.6
Lecture 4
Optimal unbiased testing (with nuisance parameters); optimal invariant testing
Reading: TSH 4.1-4.4, 3.6
Lecture 5
Optimal invariant testing
Reading: TSH 6.1-6.3
Lecture 6
Computing maximal invariants in stages; confidence regions
Reading: TSH 6.2, 5.4
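As a pointer to the central definition in Lectures 5-6 (standard, not specific to these notes): for a group $G$ acting on the sample space, a statistic $M$ is invariant if $M(gx) = M(x)$ for all $g \in G$, and is a maximal invariant if in addition

\[
M(x_1) = M(x_2) \ \Longrightarrow \ x_2 = g x_1 \ \text{for some } g \in G .
\]

Invariant tests are precisely those that depend on the data only through a maximal invariant.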
Lecture 7
Some philosophy of science; interpretations of probability; paradigms of statistical inference; the p-value controversy
Reading: TSH 3.3, plus
  • Ronald L. Wasserstein & Nicole A. Lazar (2016). The ASA's Statement on p-values: context, process, and purpose. The American Statistician 70(2), 129-133. A supplement with 23 essays/comments is also available. http://dx.doi.org/10.1080/00031305.2016.1154108
Lecture 8 Introduction to Likelihood
Sampling rules; principle of repeated sampling; parameterizations; likelihood quantities
Reading: TSH Chapter 12, TPE Chapter 6
Lecture 9 Likelihood Part II
MLE; identities under reparameterization; non-elementary likelihood examples
Reading: TSH Chapter 12, TPE Chapter 6
Other suggested references:
  • G.A. Young and R.L. Smith (2005). Essentials of Statistical Inference, Cambridge University Press.
  • O. Barndorff-Nielsen and D.R. Cox (1994). Inference and Asymptotics, Chapman & Hall.
  • Tom Severini (2000). Likelihood Methods in Statistics, Oxford University Press.
  • L. Pace and A. Salvan (1997). Principles of Statistical Inference, World Scientific.
  • A. Azzalini (1996). Statistical Inference, Chapman & Hall.
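Two likelihood identities worth keeping in mind for Lectures 8-9 (standard results, stated under the usual regularity conditions allowing differentiation under the integral sign): writing $\ell(\theta) = \log f_\theta(X)$ for the log-likelihood and $U(\theta) = \partial \ell(\theta)/\partial \theta$ for the score,

\[
E_\theta[U(\theta)] = 0,
\qquad
i(\theta) = \mathrm{Var}_\theta[U(\theta)] = -E_\theta\!\left[\frac{\partial^2 \ell(\theta)}{\partial \theta^2}\right],
\]

where $i(\theta)$ is the Fisher information.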
Lecture 10 Fisherian Principles Part I
Distribution-constant statistics; sufficiency and likelihood principles

Lecture 11 Fisherian Principles Part II
Completeness; conditionality principle; ancillary statistics; parameterization invariance
Suggested reading:
  • Anirban DasGupta, editor (2011). Selected Works of Debabrata Basu, Springer.
  • Malay Ghosh (2002). Basu's theorem with applications: a personalistic review. Sankhya A 64(3), 509-531.
  • Chapter 3 of Michael Evans (2015). Measuring Statistical Evidence Using Relative Belief, Chapman & Hall.
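Basu's theorem, which ties several of these topics together (standard statement): if $T$ is a complete sufficient statistic for the family $\{P_\theta\}$ and $A$ is ancillary (its distribution does not depend on $\theta$), then

\[
T \ \text{and} \ A \ \text{are independent under every } P_\theta .
\]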
Lecture 12 Some Distribution Theory
MGF and CGF; Lévy and Fourier inversion theorems; stable distributions and self-decomposable laws
Suggested references:
  • Stuart & Ord (1994). Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory, 6th edition, Wiley.
  • Cramér (1946). Mathematical Methods of Statistics, Princeton University Press.
  • Peter McCullagh (1987). Tensor Methods in Statistics, Chapman & Hall.
  • John Kolassa (2006). Series Approximation Methods in Statistics, 3rd edition, Springer.
  • Tom Severini (2005). Elements of Distribution Theory, Cambridge University Press.
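The basic relation underlying this lecture (standard definitions): when the MGF $M_X(t) = E[e^{tX}]$ is finite near $0$, the CGF is $K_X(t) = \log M_X(t)$, and the cumulants $\kappa_r$ are its Taylor coefficients:

\[
K_X(t) = \sum_{r \ge 1} \kappa_r \frac{t^r}{r!},
\qquad \kappa_1 = E[X], \quad \kappa_2 = \mathrm{Var}(X), \quad \kappa_3 = E\big[(X - E[X])^3\big].
\]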
Lecture 13 Multivariate Cumulants; Basic Stochastic Convergence
Joint cumulants; relationship between moments and cumulants; stochastic convergence definitions and examples
References: same as previous lecture
Lecture 14 Orders of Magnitude; Stochastic Orders of Magnitude
Definitions, examples; Slutsky's theorem; more stochastic convergence definitions and results
Suggested references:
  • Lehmann's Elements of Large-Sample Theory, Chapter 1
  • AL 8.1-8.4
  • TSH Chapter 11
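The key definitions for this lecture (standard notation, summarized here for convenience): $X_n = o_p(a_n)$ means $X_n/a_n \to 0$ in probability, while $X_n = O_p(a_n)$ means $X_n/a_n$ is bounded in probability:

\[
\forall \varepsilon > 0 \ \exists M, N \ \text{such that} \ P\big(|X_n/a_n| > M\big) < \varepsilon \ \text{for all } n \ge N .
\]

Slutsky's theorem: if $X_n \Rightarrow X$ and $Y_n \to c$ in probability, then $X_n + Y_n \Rightarrow X + c$ and $Y_n X_n \Rightarrow c X$.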
Lecture 15 Laws of Large Numbers
Asymptotic unbiasedness; consistency; several weak and strong LLNs; revisiting the following theorems: continuous mapping, Helly-Bray, dominated convergence
References: AL 8.1-8.4, 9.1-9.4, 10.1-10.4; TSH 11.2
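A representative statement from this lecture (Khinchine's weak law, standard): if $X_1, X_2, \ldots$ are i.i.d. with $E|X_1| < \infty$, then

\[
\bar{X}_n = \frac{1}{n}\sum_{i=1}^n X_i \ \xrightarrow{\ p\ } \ E[X_1],
\]

and under the same hypothesis Kolmogorov's strong law upgrades this to almost sure convergence.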
Lecture 16 Asymptotic Normality
Cramér-Wold device; multivariate normal; Lindeberg and Lyapunov conditions
References: AL 10.1-10.4, 11.1-11.4; TSH 11.2
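The Cramér-Wold device reduces multivariate weak convergence to the univariate case (standard statement): for random vectors in $\mathbb{R}^d$,

\[
X_n \Rightarrow X
\quad \Longleftrightarrow \quad
t^\top X_n \Rightarrow t^\top X \ \ \text{for every fixed } t \in \mathbb{R}^d .
\]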
Lecture 17 CLT
Lindeberg-Feller CLT for triangular arrays
References: Feller, An Introduction to Probability Theory and Its Applications, Volume II: Chapter VIII, Section 4 (Theorem 3, p. 262) and Chapter XV, Section 6 (Theorem 1, p. 518); also AL 11.1-11.4
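For reference (standard statement): for a triangular array $\{X_{n,k}\}$ with independent rows, $E[X_{n,k}] = 0$, and $s_n^2 = \sum_k \mathrm{Var}(X_{n,k})$, the Lindeberg condition

\[
\frac{1}{s_n^2} \sum_k E\big[X_{n,k}^2 \, \mathbf{1}\{|X_{n,k}| > \varepsilon s_n\}\big] \to 0
\quad \text{for every } \varepsilon > 0
\]

implies $s_n^{-1} \sum_k X_{n,k} \Rightarrow N(0,1)$.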
Lecture 18 Refinements to the CLT
Berry-Esseen theorem; Cramér's condition; second-order Edgeworth expansions
References: AL 11.4; Kolassa's book (Chapters 2 and 3); Bhattacharya & Rao's book (Chapters 3 and 4)
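The Berry-Esseen theorem quantifies the error in the CLT (standard statement; $C$ is an absolute constant): for i.i.d. $X_i$ with mean $\mu$, variance $\sigma^2 > 0$, and $\rho = E|X_1 - \mu|^3 < \infty$,

\[
\sup_x \left| P\!\left(\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \le x\right) - \Phi(x) \right|
\le \frac{C\rho}{\sigma^3 \sqrt{n}} .
\]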
Lecture 19 Local Linear Approximations
Delta method; variance-stabilizing transformations; sampling distributions of order statistics
References: Lehmann, Elements of Large-Sample Theory
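The delta method in its basic form (standard statement): if $\sqrt{n}(T_n - \theta) \Rightarrow N(0, \sigma^2)$ and $g$ is differentiable at $\theta$ with $g'(\theta) \neq 0$, then

\[
\sqrt{n}\big(g(T_n) - g(\theta)\big) \Rightarrow N\big(0,\ g'(\theta)^2 \sigma^2\big).
\]

A variance-stabilizing transformation chooses $g$ so that $g'(\theta)^2 \sigma^2(\theta)$ does not depend on $\theta$.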
Lecture 20 MLE Asymptotics and Efficiency
Consistency and asymptotic normality of MLE; asymptotic efficiency
References: Severini's book Likelihood Methods in Statistics
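The headline result for this lecture (standard statement, under regularity conditions on the model): for i.i.d. sampling with per-observation Fisher information $i(\theta)$, the MLE $\hat{\theta}_n$ satisfies

\[
\sqrt{n}\big(\hat{\theta}_n - \theta_0\big) \Rightarrow N\big(0,\ i(\theta_0)^{-1}\big),
\]

so it attains the Cramér-Rao bound asymptotically; this is the sense in which the MLE is asymptotically efficient.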
Lecture 21 Asymptotic Theory for Likelihood-Based Inference
Lecture 22 Nonparametric Bootstrap
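In brief (a standard description, not a substitute for the lecture): given data $X_1, \ldots, X_n$ from $F$ and a statistic $T_n = T(X_1, \ldots, X_n)$, the nonparametric bootstrap draws $X_1^*, \ldots, X_n^*$ i.i.d. from the empirical distribution $\hat{F}_n$ and approximates the sampling distribution of $T_n$ by the conditional distribution of $T_n^* = T(X_1^*, \ldots, X_n^*)$ given the data. For example, with $B$ Monte Carlo resamples,

\[
\widehat{\mathrm{Var}}(T_n) \approx \frac{1}{B} \sum_{b=1}^{B} \big(T_n^{*(b)} - \bar{T}_n^*\big)^2 .
\]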
April 28: Last day of spring semester classes
May 8: Final Exam