This page has been updated to reflect the topics actually covered, for future reference.
Course Details:
Instructor: Todd Kuffner
Lecture: 8:30-10am, Monday and Wednesday, Cupples I, Room 218
Required textbooks:
- (AL): Athreya & Lahiri's Measure Theory and Probability Theory, First Edition, Springer.
- (TPE): Lehmann & Casella's Theory of Point Estimation, Second Edition, Springer.
- (TSH): Lehmann & Romano's Testing Statistical Hypotheses, Third Edition, Springer.
We also made use of the following references:
- Lehmann's Elements of Large Sample Theory
- Young & Smith's Essentials of Statistical Inference
- Petrov's Limit Theorems in Probability Theory
- Hall's The Bootstrap and Edgeworth Expansion
- DasGupta's Asymptotic Theory of Statistics and Probability
- Severini's Likelihood Methods in Statistics
- Shao's Mathematical Statistics
- Kleijn, van der Vaart and van Zanten's lecture notes on Bayesian Nonparametrics (related to the forthcoming book by Ghosal & van der Vaart)
- David Hunter's lecture notes on asymptotic theory
Exams: one midterm and one final (the final also served as a Ph.D. qualifying exam for those students electing to take it)
Homework: homework assignments were given (worth 30% of the final grade)
In-class presentation: students were required to read a recent paper in a leading statistics journal and give a critical presentation of its methods, making reference to tools learned during Math 5061-5062. Two groups of four students each were formed, and each group gave a 40-minute slide presentation on one of the following two papers:
- Marin, J.-M., Pillai, N.S., Robert, C.P. and Rousseau, J. (2014). Relevant statistics for Bayesian model choice. Journal of the Royal Statistical Society B, 76 (5), 833-859. http://onlinelibrary.wiley.com/doi/10.1111/rssb.12056/abstract
- Martin, R. and Liu, C. (2015). Conditional inferential models: combining information for prior-free probabilistic inference. Journal of the Royal Statistical Society B, 77 (1), 195-217. http://onlinelibrary.wiley.com/doi/10.1111/rssb.12070/abstract
Grades: 30% Homework, 25% Midterm, 35% Final, 10% In-class presentation
Topics List (reflecting what was actually covered):
- Preliminaries: stochastic orders of magnitude, stochastic convergence, characteristic functions, multivariate normal distribution, Kullback-Leibler divergence, pivots, asymptotically pivotal quantities
- Optimal Estimation: asymptotic unbiasedness, consistency, various forms of the WLLN and SLLN
- Convergence in Distribution: continuous mapping theorem,
Helly-Bray theorem, dominated convergence theorem, Skorohod's
representation theorem, the joys of Fatou's lemma
- Asymptotic normality: Cramér-Wold theorem, Lindeberg and Lyapunov conditions, triangular arrays, Lindeberg-Feller CLT
- Refinements: Berry-Esseen theorem, Edgeworth expansions
- Toolbox: delta method, variance stabilizing transformations
- Applications: asymptotic distributions of sample moments and
order statistics
- Likelihood theory: MLE consistency, Fisher information,
asymptotic normality, MLE with many parameters; profile likelihood;
Wald, score and LR tests; Wilks' theorem
- Bayesian Parametric Asymptotics: Scheffé's theorem, Bernstein-von Mises theorem, Doob's theorem, difference between Bayes estimates and MLE
- Optimal Hypothesis Testing: consistency for tests, consistency
under local alternatives, asymptotic relative efficiency (in the
context of testing)
- Frequentist Nonparametric Methods: consistency of nonparametric
bootstrap; why and how the nonparametric bootstrap can be used to
achieve bias correction; why and how the nonparametric bootstrap can be
used to achieve refinement of first-order asymptotic distribution
theory for asymptotically standard normal pivots
- Bayesian Nonparametric Methods: key concepts: prior construction
(random measures), Dirichlet processes, consistency (Schwartz's
theorem), posterior rates of contraction
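As a small illustration of one item from the Toolbox topic above, the following sketch compares the delta-method variance approximation for g(X̄) against a Monte Carlo estimate. This is an illustrative example written for this page (the distribution, sample sizes, and variable names are my own choices, not course material).

```python
import numpy as np

rng = np.random.default_rng(0)

# Delta method: if sqrt(n)(Xbar - mu) -> N(0, sigma^2), then
# sqrt(n)(g(Xbar) - g(mu)) -> N(0, g'(mu)^2 * sigma^2).
# Toy check with g(x) = log(x) and Exponential(1) data (mu = sigma = 1).
n = 2000       # sample size per replication
reps = 5000    # number of Monte Carlo replications
mu, sigma = 1.0, 1.0
g = np.log
g_prime = lambda x: 1.0 / x

samples = rng.exponential(scale=1.0, size=(reps, n))
estimates = g(samples.mean(axis=1))        # g(Xbar) for each replication

delta_var = (g_prime(mu) ** 2) * sigma**2 / n   # delta-method variance of g(Xbar)
empirical_var = estimates.var()                 # Monte Carlo variance of g(Xbar)

print(delta_var, empirical_var)   # the two should agree for large n
```

For n this large the two variances agree to within a few percent, which is the sense in which the delta method gives a first-order approximation.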
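The bootstrap bias-correction idea from the Frequentist Nonparametric Methods topic can likewise be sketched in a few lines. This is a toy example of my own (the plug-in variance estimator, whose bias is known in closed form), not code from the course.

```python
import numpy as np

rng = np.random.default_rng(1)

# Nonparametric bootstrap bias correction for the plug-in variance
# estimator, which underestimates sigma^2 by the factor (n-1)/n.
n = 50
x = rng.normal(size=n)

theta_hat = x.var()   # plug-in estimate (ddof=0), biased downward

B = 2000              # number of bootstrap resamples
boot = np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)   # resample with replacement
    boot[b] = xb.var()

bias_hat = boot.mean() - theta_hat   # bootstrap estimate of the bias
theta_bc = theta_hat - bias_hat      # bias-corrected estimate

print(theta_hat, theta_bc)
```

Under resampling, the expected bootstrap bias is exactly -theta_hat/n here, so the corrected estimate inflates the plug-in value by roughly the factor (1 + 1/n), mimicking the usual ddof correction; the same recipe applies when no closed-form bias is available.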