Presented By: Department of Statistics Dissertation Defenses
Topics on the Generalization and Learnability of Modern Machine Learning
Jake Trauger
Learning theory is a subfield of machine learning research in which the theoretical properties of machine learning problems and algorithms are analyzed mathematically. This thesis comprises three standalone works in the field of learning theory.
First, we analyze generalization bounds for the Transformer architecture and show that they need not depend on the maximum sequence length. To do this, we bound the Rademacher complexity of the architecture and derive novel covering number bounds for linear function classes that do not depend on the number of samples. We also run a simulation whose results support our theoretical findings.
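As background (this is the standard definition, not a result of the thesis), the empirical Rademacher complexity of a function class $\mathcal{F}$ on a sample $S = (x_1, \dots, x_n)$ is

```latex
\hat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\,
      \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i)
    \right],
```

where the $\sigma_i$ are i.i.d. uniform $\{-1, +1\}$ (Rademacher) variables. Standard symmetrization arguments convert bounds on this quantity into generalization bounds, which is why covering number bounds on the relevant function classes are the key technical ingredient.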
In the next chapter, we analyze a mismatch seen in the training of modern large language models. Most of these models are trained only to predict the next token, yet the output of the model is an entire sequence of tokens. We study this mismatch between the training objective and the output through the lens of the surrogate loss consistency framework. We analyze different ways of decoding these next-token predictors to determine when asymptotic consistency holds for two use cases, each encoded as a loss function.
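As a rough sketch of the framework (standard notation, not the thesis's own), a surrogate loss $\ell$ is consistent with respect to a target loss $L$ when driving the surrogate risk to its infimum forces the target risk to its infimum:

```latex
R_\ell(f_n) \;\longrightarrow\; \inf_{f} R_\ell(f)
\quad\Longrightarrow\quad
R_L(f_n) \;\longrightarrow\; \inf_{f} R_L(f),
```

where $R_\ell(f) = \mathbb{E}\!\left[\ell(f(X), Y)\right]$ and $R_L$ is defined analogously. Here the next-token training objective plays the role of the surrogate $\ell$, and the sequence-level target plays the role of $L$.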
In the final work of this thesis, the theoretical learnability of multiclass forgiving 0-1 loss functions is studied through the PAC-learnability framework. We show that a generalization of the Natarajan dimension [Natarajan, 1989] characterizes the learnability of many instantiations of learning problems that use forgiving 0-1 loss functions. We also show how this setting can be used to model other known settings in the literature.
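For context (the classical definition from [Natarajan, 1989], not the thesis's generalization of it), a set $C$ is N-shattered by a hypothesis class $\mathcal{H}$ if there exist $f_0, f_1 : C \to \mathcal{Y}$ with $f_0(x) \neq f_1(x)$ for all $x \in C$ such that

```latex
\forall B \subseteq C \;\; \exists h \in \mathcal{H} :
\quad h(x) = f_0(x) \;\; \forall x \in B,
\qquad h(x) = f_1(x) \;\; \forall x \in C \setminus B.
```

The Natarajan dimension is the size of the largest N-shattered set; in the classical multiclass setting, its finiteness characterizes PAC learnability, which is the property the thesis extends to forgiving 0-1 losses.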