Many methods for model selection fall under the “Loss + Penalty” approach, where the “Loss” rewards the model for fitting the data well and the “Penalty” penalizes the model for its complexity. One method in particular, the Fence method, uses a data-adaptive bootstrapping algorithm, the Adaptive Fence (AF), to select the penalty. The goal of the AF algorithm is to estimate the optimal penalty, i.e., the penalty that maximizes the probability of selecting the true data-generating model, a commonly desired objective in model selection. We show how the AF approach to estimating the optimal penalty can be improved and propose a new algorithm that can provide a better estimate, which we call the Resampling-based Optimal Penalty Estimate (ROPE) algorithm. We show how to implement both algorithms to select the penalty multiplier in the Generalized Information Criteria (GIC) for model selection in linear regression. We then carry out a simulation study to compare the two algorithms; the results indicate that the ROPE algorithm outperforms the AF algorithm in our simulation settings.
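To make the “Loss + Penalty” setting concrete, the sketch below scores candidate linear regression models with a GIC of one common form, n·log(RSS/n) + λ·p, and returns the model that minimizes the criterion for a given penalty multiplier λ. This is only an illustration of the selection step for a fixed λ, not the AF or ROPE procedure described in the thesis; the function name and the exhaustive subset search are illustrative assumptions.

```python
# Minimal sketch (assumed form): best-subset selection in linear regression with
# a GIC-style criterion GIC(model) = n*log(RSS/n) + lambda_*p, where p is the
# number of included predictors and lambda_ is the penalty multiplier that a
# procedure such as AF or ROPE would aim to choose. Illustrative only.
from itertools import combinations
import numpy as np

def gic_best_subset(X, y, lambda_):
    n, d = X.shape
    best_score, best_subset = np.inf, ()
    for size in range(1, d + 1):
        for subset in combinations(range(d), size):
            Xs = X[:, subset]
            # Ordinary least squares fit for this candidate model
            beta, _, _, _ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            score = n * np.log(rss / n) + lambda_ * len(subset)
            if score < best_score:
                best_score, best_subset = score, subset
    return best_subset, best_score

# Example: lambda_ = log(n) gives a BIC-like penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=100)
print(gic_best_subset(X, y, lambda_=np.log(100)))
```

Varying lambda_ trades off fit against model size; how to estimate the value of λ that maximizes the probability of selecting the true model is precisely the question the AF and ROPE algorithms address.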
Table of Contents
1. Introduction -- 2. Review -- 3. Methods and Algorithms -- 4. Simulation Study -- References
Notes
A thesis submitted to Macquarie University for the degree of Master of Research
Awarding Institution
Macquarie University
Degree Type
Thesis MRes
Degree
Thesis (MRes), Macquarie University, Faculty of Science and Engineering, 2022
Department, Centre or School
School of Mathematical and Physical Sciences
Year of Award
2022
Principal Supervisor
Samuel Muller
Additional Supervisor 1
Houying Zhu
Rights
Copyright: Ibrahim Joudah
Copyright disclaimer: https://www.mq.edu.au/copyright-disclaimer