Probabilistic models of diversity: applications and algorithms for determinantal point processes

Stefanie Jegelka
MIT IDSS, CSAIL, EECS
Abstract:  Determinantal Point Processes (DPPs) are gaining popularity in machine learning as elegant probabilistic models of diversity. That is, they are probability distributions over subsets of a collection of items (data points, features, ...) that prefer diverse subsets. Notably, many computations that are difficult with other models "simply" reduce to linear algebra for DPPs. DPPs have long been known to arise in statistical physics, combinatorial probability, random matrix theory, and certain approximation algorithms. The first part of this talk will survey machine learning applications of DPPs, from recommendation, feature selection, and improving interpretability to matrix approximations for kernel methods and pruning of neural networks.
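To make the "reduces to linear algebra" point concrete, here is a minimal sketch (an illustrative example, not code from the talk) of the standard L-ensemble formulation: a subset S of n items gets probability det(L_S) / det(L + I), where L_S is the kernel submatrix indexed by S. All names below are hypothetical.

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
L = B @ B.T  # a positive semidefinite similarity kernel over n items

def dpp_prob(L, S):
    """L-ensemble DPP: P(Y = S) = det(L_S) / det(L + I).

    Similar (near-parallel) items make L_S nearly singular, so redundant
    subsets get low probability -- this is the diversity preference.
    """
    L_S = L[np.ix_(S, S)]
    return np.linalg.det(L_S) / np.linalg.det(L + np.eye(len(L)))
```

Because sum over all S of det(L_S) equals det(L + I), these probabilities form a valid distribution over all 2^n subsets, and normalization, marginals, and conditioning all stay in determinant land.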

Despite this ease of modeling, the wide applicability of DPPs has been hindered by computationally expensive sampling algorithms. The second part of the talk will cover recent progress in sampling algorithms for DPPs and its implications in theory and practice. Most of the talk will be tutorial-style and will not require any prior knowledge of DPPs.
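One family of approaches the sampling literature studies is Markov chain Monte Carlo. The sketch below (illustrative only; not necessarily the algorithm presented in the talk) is a simple add-delete Gibbs chain whose stationary distribution is the L-ensemble DPP P(S) proportional to det(L_S): at each step, one item's membership is resampled conditioned on the rest of the current subset. The function name and step count are assumptions.

```python
import numpy as np

def dpp_gibbs_sample(L, steps=2000, rng=None):
    """Sketch of an add-delete Gibbs chain targeting P(S) ~ det(L_S)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(L)
    S = set()

    def det_sub(T):
        idx = sorted(T)
        return np.linalg.det(L[np.ix_(idx, idx)])  # det of empty matrix is 1

    for _ in range(steps):
        i = int(rng.integers(n))
        with_i = det_sub(S | {i})
        without_i = det_sub(S - {i})
        # Resample membership of item i given the rest of S:
        # P(i in S | rest) = det(L_{S+i}) / (det(L_{S+i}) + det(L_{S-i})).
        if rng.random() < with_i / (with_i + without_i):
            S.add(i)
        else:
            S.discard(i)
    return S
```

Each step costs a determinant evaluation (cheaper with incremental updates in practice); the practical question, which the talk addresses, is how many steps such chains need to mix.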

Based on joint work with Chengtao Li and Suvrit Sra.


Chengtao Li
MIT IDSS, CSAIL, EECS
Primer: A primer on determinantal point processes

Abstract:  The primer will be a short tutorial that introduces Determinantal Point Processes with some detail and intuition, explains their relation to diversity, and covers basic computations and important models.