Augmented Lagrange for constrained optimizations in empirical likelihood estimations
Thesis posted on 28.03.2022, 12:36 by Andrew Locke
Empirical likelihood is a useful tool for parameter estimation and inference, as it does not require knowledge of the distribution the data come from. A major strength is its flexibility: it can be extended in many ways, including to regression or to additional constraints imposed through estimating equations. The positivity constraint on the weights p_i has often been overlooked or ignored, which means existing methods may experience difficulties on some problems. This thesis looks at enforcing this constraint by applying the Karush–Kuhn–Tucker conditions together with a multiplicative iterative optimisation method for updating the parameters, which ensures movement towards the constrained maximum. For other equality constraints, we apply the augmented Lagrangian method to the empirical likelihood maximisation. We demonstrate our method with simulation examples in linear regression and in estimating equations on raw moments.