PhD Proposal: Submodular Optimization in Learning
Submodular optimization is a well-established area of combinatorial optimization and a powerful, widely used framework that underpins many foundational problems in machine learning and data science. A set function is said to be submodular if it satisfies the diminishing returns property, which captures the intuitive idea that the marginal benefit of adding an element to a set decreases as the set grows.
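For concreteness, the diminishing returns property admits the following standard formal statement, where V denotes the ground set and f : 2^{V} \to \mathbb{R} the set function:

\[
f(A \cup \{e\}) - f(A) \;\ge\; f(B \cup \{e\}) - f(B)
\qquad \text{for all } A \subseteq B \subseteq V \text{ and all } e \in V \setminus B.
\]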
This property enables the design of efficient approximation algorithms for maximizing or minimizing such functions under various constraints, even though the underlying problems are typically NP-hard.
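As one concrete illustration of such an approximation algorithm, the sketch below shows the classical greedy method for maximizing a monotone submodular function under a cardinality constraint, which attains a (1 - 1/e) approximation guarantee. The function names and the toy coverage objective are only illustrative, not part of the algorithms developed in this proposal.

def greedy_max(ground_set, f, k):
    """Classical greedy for maximizing a monotone submodular f subject to |S| <= k;
    attains a (1 - 1/e)-approximation in this setting."""
    S = set()
    for _ in range(k):
        # Select the element with the largest marginal gain f(S + e) - f(S).
        best, best_gain = None, 0.0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element has positive marginal value
            break
        S.add(best)
    return S

# Toy coverage objective: f(S) = number of points covered by the chosen subsets.
subsets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
coverage = lambda S: len(set().union(*(subsets[x] for x in S))) if S else 0
print(greedy_max(set(subsets), coverage, k=2))  # greedy picks {'a', 'c'}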
Submodular functions naturally arise in a wide range of applications across machine learning, economics, operations research, and network science. In machine learning, they are central to tasks such as data subset selection, data summarization, active learning, feature selection, diversity-aware recommendations, and influence maximization in networks.
Traditional algorithms for problems such as submodular maximization and submodular cover typically assume access to a static ground set. However, the growing prevalence of ever-evolving datasets has spurred significant interest in designing algorithms that operate effectively in large-scale and time-varying environments.
In this proposal, we investigate a range of optimization problems involving submodularity in the fully dynamic setting, where elements of the ground set may be inserted or deleted over time. Specifically, we address three key problems: submodular maximization under a matroid constraint, weighted submodular cover, and non-monotone submodular maximization under a cardinality constraint. For each, we develop dynamic algorithms that efficiently maintain high-quality approximate solutions as the input evolves through a stream of updates.
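As a rough illustration of the setting only, and not of any particular algorithm from this proposal, a fully dynamic procedure can be viewed as maintaining its internal state under an update interface of the following shape; the class and method names below are purely hypothetical.

from abc import ABC, abstractmethod

class DynamicSubmodularMaximizer(ABC):
    """Illustrative interface: the ground set evolves through a stream of
    insertions and deletions, and the algorithm must be able to report an
    approximate solution after every update."""

    @abstractmethod
    def insert(self, element):
        """Handle the arrival of a new ground-set element."""

    @abstractmethod
    def delete(self, element):
        """Handle the removal of a previously inserted element."""

    @abstractmethod
    def solution(self):
        """Return the currently maintained (approximate) solution set."""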