
Nataly Brukhim FPO

Date and Time
Friday, June 28, 2024 - 11:00am to 1:00pm
Location
Computer Science 402
Type
FPO

Nataly Brukhim will present her FPO "New Directions in Boosting" on Friday, June 28, 2024 at 11:00 AM in CS 402.

The members of Nataly’s committee are as follows:
Examiners: Elad Hazan (Adviser), Ryan Adams, Robert Schapire (Microsoft Research)
Readers: Sanjeev Arora, Shay Moran (Technion)

A copy of her thesis is available upon request; please email gradinfo@cs.princeton.edu if you would like one.

Everyone is invited to attend her talk.

Abstract follows:
Boosting is a fundamental methodology in machine learning used to improve the accuracy of weak learners, transforming them into strong ones. This thesis establishes new directions in boosting theory, presenting algorithms and their analyses for complex learning scenarios that go beyond the realm of traditional classification. These developments extend the benefits of the boosting methodology to modern and challenging learning settings.

This thesis is divided into two main parts: Multiclass Boosting and Boosting for Sequential Decision Making. Part I explores the generalization of classical boosting theory to the multiclass setting, focusing on the challenging case of an extremely large label space. First, we present a hardness result showing that boosting does not extend easily from the binary to the multiclass case. Next, within the broader context of multiclass learning, we develop novel algorithmic strategies that provide a complete characterization of multiclass learnability, resolving a longstanding open problem. Finally, leveraging these ideas, we introduce new boosting techniques that circumvent the aforementioned hardness barrier, leading to efficient multiclass boosting methods.

In Part II, we develop boosting frameworks for sequential decision making. Sequential decision-making tasks such as control, bandits, and reinforcement learning can be thought of as challenging generalized variants of statistical learning that take into account interaction with the environment. Since boosting was developed for static datasets, extending the technique to these tasks poses significant challenges and requires grappling with feedback and with systems that change over time. In this line of work we develop boosting frameworks for various sequential decision-making tasks: online agnostic learning, online control of dynamical systems, the bandit setting in online learning, and reinforcement learning.
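For readers less familiar with boosting, below is a minimal sketch of the classical binary AdaBoost algorithm, included only as background on the weak-to-strong principle the abstract refers to; it is not one of the new algorithms developed in the thesis, and the stump weak learner and synthetic dataset are illustrative choices.

```python
# A minimal AdaBoost sketch (classical binary case), assuming labels in {-1, +1}
# and depth-1 decision trees ("stumps") as the weak learners.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

def adaboost_fit(X, y, n_rounds=50):
    """Run AdaBoost for n_rounds, returning the weak learners and their weights."""
    n = len(y)
    weights = np.full(n, 1.0 / n)               # start from a uniform distribution
    learners, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)   # weak learner on reweighted data
        pred = stump.predict(X)
        err = weights[pred != y].sum()           # weighted training error
        if err >= 0.5:                           # weak-learning assumption violated
            break
        err = max(err, 1e-10)                    # guard against division by zero
        alpha = 0.5 * np.log((1.0 - err) / err)  # learner weight
        weights *= np.exp(-alpha * y * pred)     # up-weight misclassified examples
        weights /= weights.sum()                 # renormalize to a distribution
        learners.append(stump)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(X, learners, alphas):
    """Predict with the weighted majority vote of the weak learners."""
    scores = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(scores)

# Usage on synthetic data (labels remapped from {0, 1} to {-1, +1}).
X, y = make_classification(n_samples=500, random_state=0)
y = 2 * y - 1
learners, alphas = adaboost_fit(X, y)
print("training accuracy:", (adaboost_predict(X, learners, alphas) == y).mean())
```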
