Thesis
Bayes Factors Consistency for Nested Linear Models with Increasing Dimensions
Author
Keveny Innocent, Jean
Pericchi Guerra, Luis R. (Advisor)
Institution
Abstract
Priors on the regression coefficients of normal linear models have proved central to hypothesis testing and model selection when proper priors are assigned under the simpler null hypothesis in the computation of Bayes factors. For the class of normal linear models, it has long been known that, for pairwise comparison of nested models, a decision based on the Bayes factor produces a model selection that is consistent in the frequentist sense (Casella and Moreno, 2009).
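To fix ideas, in standard notation (generic rather than the thesis's own), for nested normal linear models M_i ⊂ M_p the Bayes factor of the full model against the reduced one is the ratio of marginal likelihoods,
\[
B_{pi}(\mathbf{y}) \;=\; \frac{m_p(\mathbf{y})}{m_i(\mathbf{y})}
\;=\; \frac{\int f(\mathbf{y}\mid \boldsymbol{\theta}_p, M_p)\,\pi_p(\boldsymbol{\theta}_p)\, d\boldsymbol{\theta}_p}
{\int f(\mathbf{y}\mid \boldsymbol{\theta}_i, M_i)\,\pi_i(\boldsymbol{\theta}_i)\, d\boldsymbol{\theta}_i},
\]
and consistency means that, as n → ∞, B_{pi}(y) → ∞ in probability when the data are generated from M_p and B_{pi}(y) → 0 when they are generated from M_i.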
In this sense, the Multivariate Cauchy prior used by Zellner and Siow (1980), the intrinsic priors and Smooth Cauchy priors introduced by Berger and Pericchi (1996), and the power expected-posterior (PEP) priors introduced and developed by Fouskakis, Ntzoufras and Draper (2015) have all been studied, within the objective Bayesian approach, with respect to consistency.
Consistency is one of the important advantages that Berger and Pericchi (2001) discuss as motivation for preferring a Bayesian approach to the model comparison problem over the classical approach.
Along these lines, Casella et al. (2009) proved that the Bayes factor for intrinsic priors is consistent when the number of parameters does not grow with the sample size n. Moreno et al. (2010) proved that the Bayes factor for intrinsic priors is consistent when the number of parameters of both models is of order O(n^b) for b < 1, and that for b = 1 it is consistent except for a small set of alternative models.
Elías Moreno, F. Javier Girón and George Casella (2010) noted that the properties of the Bayes factor in this setting are not fully understood, and they showed that the Schwarz approximation and the Bayes factor are not asymptotically equivalent. They found that the Schwarz approximation and the Bayes factor for intrinsic priors are both consistent when the dimension of the bigger model grows at rate O(n^b) for b < 1; for b = 1 the Schwarz approximation is always inconsistent under the alternative model, while the Bayes factor for intrinsic priors is consistent except for a small set of alternative models.
In order to produce minimally informative priors that reduce the effect of the training sample in the PEP approach, Dimitris Fouskakis and Ioannis Ntzoufras (2016), combining ideas from the power-prior and unit-information-prior methodologies, showed that the Bayes factor under the J-PEP prior is consistent under very mild conditions on the design matrix. They also proved that the J-PEP Bayes factor is asymptotically equivalent to the Bayesian Information Criterion (BIC), ensuring the consistency of the PEP approach to model selection.
Since the BIC was developed as an asymptotic approximation to the Bayes factor between models, the approximation is valid only under certain conditions; Stone (1979) proved that the BIC can fail to be asymptotically consistent and exhibited a counterexample in which the BIC is not an adequate approximation.
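For reference, the Schwarz approximation mentioned above takes, in its standard form,
\[
\mathrm{BIC}_k \;=\; -2\,\log f(\mathbf{y}\mid \hat{\boldsymbol{\theta}}_k, M_k) \;+\; \dim(\boldsymbol{\theta}_k)\,\log n,
\qquad
2\,\log B_{pi}(\mathbf{y}) \;\approx\; \mathrm{BIC}_i - \mathrm{BIC}_p,
\]
where \hat{\boldsymbol{\theta}}_k is the maximum likelihood estimator under M_k; for fixed dimensions the error of this approximation is O(1) in n, which is one reason it can break down when the number of parameters grows with the sample size.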
Here, for the case where the linear models are nested, we examine the Bayes factors for the Intrinsic Priors, Jeffreys Power Expected Posterior Priors, Multivariate Cauchy Priors and Smooth Cauchy Priors. We show that they are consistent in the case where the dimensions of both models are fixed, and that they are consistent except for a small set of alternative models when the dimension of the reduced model is fixed and the dimension of the full model increases as the sample size grows to infinity.
The reason for choosing the theme Bayes Factors Consistency for Nested Linear Models with Increasing Dimensions is to explore Bayesian procedures for variable selection by computing the Bayes factor under these prior formulations when the number of parameters increases as the sample size grows to infinity, using the method of encompassing to study the validity of its asymptotic consistency in comparisons of nested normal linear regression models.
Hence, the first chapter of the thesis, Bayes Factor for Nested Linear Models, includes a statement of the problem, the definition and presentation of the Bayes factors for all the priors examined in the thesis, and the materials and theorems that will be applied to the study of the consistency of the Bayes factors under the priors treated in the chapters that follow.
In the second chapter, Approximation to the Bayes Factors, we introduce Stirling's approximation and the Laplace approximation, define the Stirling and Laplace approximations to the Bayes factors, and approximate our Bayes factors using them; these approximations are then applied to study and simulate the Bayes factors under the priors presented in the following chapters.
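In generic notation (a sketch of the standard forms, not the thesis's exact statements), the two tools are
\[
\log \Gamma(x) \;=\; \Bigl(x - \tfrac{1}{2}\Bigr)\log x \;-\; x \;+\; \tfrac{1}{2}\log(2\pi) \;+\; O\!\bigl(x^{-1}\bigr), \qquad x \to \infty,
\]
\[
\int_{\mathbb{R}^{d}} e^{\,n\, h(\boldsymbol{\theta})}\, d\boldsymbol{\theta}
\;\approx\; \Bigl(\tfrac{2\pi}{n}\Bigr)^{d/2}
\bigl| -\nabla^{2} h(\hat{\boldsymbol{\theta}}) \bigr|^{-1/2}
e^{\,n\, h(\hat{\boldsymbol{\theta}})},
\]
where \hat{\boldsymbol{\theta}} maximizes h; applied to the Gamma functions and integrals that appear in the marginal likelihoods, these yield closed-form approximations to the Bayes factors.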
Chapters 3, 4, and 5 are entitled Consistency of the Bayes Factor under Intrinsic Prior, Consistency of the Bayes Factor under Jeffreys Power Expected Posterior, and Consistency of the Bayes Factor under Multivariate Cauchy and Smooth Cauchy, respectively; in them we study the validity of the consistency of the Bayes factors under these priors, and in each case we present a table that summarizes the consistency results. In Chapter 6, Simulations and Efficiency of the Bayes Factors, we present the results of some simulations obtained with the appropriate approximations of Chapter 2.
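As a purely illustrative sketch of this kind of simulation, and not the thesis's own code, the following uses the BIC (Schwarz) approximation rather than the Stirling or Laplace approximations of Chapter 2; all function names, sample sizes and coefficient values are arbitrary choices made here for illustration:

    import numpy as np

    def bic(y, X):
        # BIC of the normal linear model y = X b + e with unknown variance,
        # evaluated at the maximum likelihood estimates.
        n, k = X.shape
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta_hat) ** 2)
        sigma2_hat = rss / n
        loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2_hat) + 1.0)
        return -2.0 * loglik + (k + 1) * np.log(n)  # k coefficients + 1 variance

    def log_bf_full_vs_reduced(y, X_full, X_red):
        # Schwarz (BIC) approximation: 2 log BF ~ BIC_reduced - BIC_full.
        return 0.5 * (bic(y, X_red) - bic(y, X_full))

    rng = np.random.default_rng(0)
    n, p = 200, 5
    X_full = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    X_red = X_full[:, :2]                       # reduced model nested in the full one
    beta = np.array([1.0, 0.5, 0.0, 0.0, 0.0])  # data actually follow the reduced model
    y = X_full @ beta + rng.normal(size=n)
    print(log_bf_full_vs_reduced(y, X_full, X_red))  # typically negative: reduced model favoured

A fuller experiment in the spirit of Chapter 6 would repeat this over many replications, let p grow with n, and replace the BIC step with the corresponding Stirling or Laplace approximation for each prior.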
At the end of each of Chapters 3, 4, and 5, a series of discussions concerning the validity of the consistency is presented.
As the conclusion of the thesis, we present a discussion that summarizes Chapters 3, 4, and 5. Possible future investigations are also indicated, and the Ph.D. thesis ends with a set of references to articles and books relevant to this work.
The contributions of the thesis
We examine the consistency of the Bayes factors for nested normal linear models in which the number of regressors equals the model dimension, the number of parameters of the simple model is always fixed, and the full model is of order O(n^α), where α ∈ [0, 1], so that the dimension of the full model increases as the sample size grows to infinity.
Completely New
Let dim(M_i) = i = O(1) be the dimension of the reduced model M_i, and let dim(M_p) = p = O(n^α), α ∈ [0, 1], be the order of the full model M_p. We take the sample size n equal to the training sample size n*, and the power 1/d_j is such that d_j ∈ {n, n − p, p, d}, where d is a non-negative natural number.
We define and derive Stirling's approximation to the Bayes factors for Jeffreys Power Expected Posterior (J-PEP) priors and then use it to study the consistency of the Bayes factor for J-PEP.
We examine the consistency of the Bayes factor for Jeffreys Power Expected Posterior priors, showing that it is consistent when the power is such that d_j ∈ {n, n − p} for the case where the number of parameters of the full model is of order O(n^α) with 0 ≤ α < 1, and that for α = 1 it is inconsistent for a set of alternative models.
When the power is such that d_j = p, the Bayes factor for the Jeffreys Power Expected Posterior prior is inconsistent under the reduced model and consistent under the full model for the case where the number of parameters of the full model is of order O(n^α) with 0 ≤ α < 1, and for α = 1 it is inconsistent for a set of alternative models.
The Bayes factor for J-PEP is inconsistent for a small set of alternative models when the power is such that d_j = d is a large non-negative natural number, for the case where the number of parameters of the full model is of order O(n^α) with α = 1.
We illustrate the results by simulating Stirling's approximation to the Bayes factors for J-PEP under the situations considered, which are defined in Chapter 6.
Partially New
A new consistency proof for the Bayes factor for J-PEP when the dimensions of both models are fixed.
Extension of Theorem 2, page 1942, of Moreno, E., Girón, F. J. and Casella, G. (August 2010), Consistency of Objective Bayes Factors as the Model Dimension Grows, Annals of Statistics, Vol. 38, No. 4, pp. 1937-1952, by adding the case where
the limit of the distance between the two models is δ(r) = r^{-1/(1+r)} [r^{-1/r}]^{-1} for large values of
r, and showing that the Bayes factor for intrinsic priors is also consistent under the full model when the models increase their numbers of parameters at the rates i = O(n^α) and p = O(n), where 0 ≤ α < 1 and n ≈ rp.
Generalization of Theorem 3.2, pages 245-246, of Berger, J. O., Ghosh, J. K. and Mukhopadhyay, N. (2003), Approximations and Consistency of Bayes Factors as Model Dimension Grows, Journal of Statistical Planning and Inference, 122, 241-258, MR1961733, by passing from a reduced model of dimension 1 to one of dimension i = O(1).
New Presentation of Results
Use of the Laplace approximation to the Bayes factors for Intrinsic priors.
Use of the Laplace approximation to the Bayes factors for Multivariate Cauchy priors and Smooth Cauchy priors, instead of either the BIC or the high-dimensional approximation of Berger, J. O., Ghosh, J. K. and Mukhopadhyay, N. (2003).
Use of Stirling's approximation to the Bayes factor for J-PEP, even when the Laplace approximation is valid for fixed p.