Presented By: Department of Mathematics

Applied Interdisciplinary Mathematics (AIM) Seminar

Metric representation learning

All data has some inherent mathematical structure. One of the many challenges in representation learning is determining ways to judge the quality of the learned representation. In many cases, the consensus is that if $d$ is the natural metric on the representation (such as the $L_2$ distance for Euclidean embeddings), then this metric should provide meaningful information about the data. Many examples of this can be seen in areas such as metric learning, manifold learning, and graph embedding. However, most algorithms that solve these problems learn a representation in a metric space first and then extract a metric.
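
A minimal sketch of this standard "embed first, extract the metric later" pipeline, using a toy PCA embedding purely for illustration (the data, the PCA step, and all names below are assumptions, not methods from the talk):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                # placeholder data

Z = PCA(n_components=3).fit_transform(X)      # step 1: learn a Euclidean embedding
D = squareform(pdist(Z, metric="euclidean"))  # step 2: extract the metric from it

# D is the "natural metric on the representation" referred to above.
```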

A large part of my research explores what happens if the order is switched, that is, learning the appropriate metric first and the embedding later. The philosophy behind this approach is that understanding the inherent geometry of the data is the most crucial part of representation learning. Often, studying the properties of the appropriate metric on the input data indicates the type of space we should seek for the representation, yielding more robust representations. Optimizing for the appropriate metric can also help overcome issues such as missing and noisy data.
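
A sketch of the reversed order: start from a (possibly noisy) distance matrix and choose the embedding afterwards, here via classical MDS. This illustrates the philosophy only; it is not the algorithm from the talk, and the toy data is an assumption:

```python
import numpy as np

def classical_mds(D, k):
    """Embed an n x n distance matrix D into R^k via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))

# toy usage: distances from random points, then re-embed from the metric alone
X = np.random.default_rng(1).normal(size=(50, 5))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Z = classical_mds(D, k=2)
```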

To learn the optimal metric, we are given a dissimilarity matrix $\hat{D}$, a function $f$, and a subset $S$ of the space of all metrics, and we want to find $D \in S$ that minimizes $f(D, \hat{D})$. In this talk, we consider the version of the problem where $S$ is the space of metrics defined on a fixed graph. That is, given a graph $G$, we let $S$ be the space of all metrics defined via $G$. For this $S$, we consider sparse as well as convex objective functions. We also consider the problem in which we want to learn a tree. We also show how the ideas behind learning the optimal metric can be applied to dimensionality reduction in the presence of missing data.
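
A minimal sketch of this setup when $S$ is the set of metrics defined via a fixed graph $G$: edge weights induce the shortest-path metric $D$, which is scored against the observed $\hat{D}$. The graph, weights, and Frobenius-norm choice of $f$ below are illustrative assumptions, not the objectives analyzed in the talk:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def graph_metric(W):
    """Shortest-path metric induced by a nonnegative weighted adjacency W
    (zeros off the diagonal are treated as non-edges)."""
    return shortest_path(W, method="D", directed=False)

def objective(W, D_hat):
    """One possible f(D, D_hat): the Frobenius norm ||D(W) - D_hat||_F."""
    return np.linalg.norm(graph_metric(W) - D_hat)

# toy example: a 4-cycle with unit weights vs. noisy observed dissimilarities
W = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D_hat = graph_metric(W) + 0.1 * np.random.default_rng(2).normal(size=(4, 4))
D_hat = (D_hat + D_hat.T) / 2            # keep the observation symmetric
print(objective(W, D_hat))
```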
Speaker(s): Rishi Sonthalia (UCLA)
