Full Name
Yannick Baraud
Job Title
Professor of Mathematical Statistics
Company
University of Luxembourg
Speaker Bio
I joined the University of Luxembourg in 2019 as holder of the ERA Chair "SanDAL", a Chair in Mathematical Statistics and Data Science funded by the European Commission's Horizon 2020 research and innovation programme.
I work in the area of Mathematical Statistics with a special interest in model selection, hypothesis testing, parametric and non-parametric estimation, and robust estimation. https://math.uni.lu/baraud/Home.html
Speaking At
Abstract
We address the problem of estimating the distribution of presumed i.i.d. observations within the framework of Bayesian statistics. To do this, we consider a statistical model for the distribution of the data as well as a prior on it, and we propose a new posterior distribution that shares some similarities with the classical Bayesian one. In particular, when the statistical model is exact, we show that this new posterior distribution concentrates its mass around the target distribution, just as the classical Bayesian posterior would do under appropriate assumptions. Nevertheless, we establish that this concentration property holds under weaker assumptions than those generally required for the classical Bayesian posterior. Specifically, we do not require that the prior distribution allocate sufficient mass to Kullback-Leibler neighbourhoods but only to the larger Hellinger ones. More importantly, unlike the classical Bayesian posterior, ours proves to be robust against a potential misspecification of the prior and of the assumptions we started from. We prove that the concentration properties we establish remain stable when the equidistribution assumption is violated or when the data are i.i.d. with a distribution that does not belong to our model but only lies close enough to it. The results we obtain are non-asymptotic, involve explicit numerical constants, and are based on the interplay between information theory and robust testing.