Presented By: Applied Interdisciplinary Mathematics (AIM) Seminar - Department of Mathematics
AIM Seminar: Randomized matrix decompositions for faster scientific computing
Robert Webber, Caltech
Abstract: Traditional numerical methods based on expensive matrix factorizations struggle with the scale of modern scientific applications. For example, kernel-based algorithms take a data set of size N, form the kernel matrix of size N x N, and then perform an eigendecomposition or inversion at a cost of O(N^3) operations. For data sets of size N >= 10^5, kernel learning is too expensive, straining the limits of personal workstations and even dedicated computing clusters. Randomized iterative methods have emerged as a faster alternative to the classical approaches. These methods combine randomized exploration with information about which matrix structures are important, leading to significant speed gains.
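To make the quoted cost concrete, here is a minimal sketch of the classical kernel pipeline described above, written in Python/NumPy. It is illustrative only, not code from the talk; the Gaussian (RBF) kernel and the bandwidth parameter are assumptions made for the example.

```python
import numpy as np

def naive_kernel_eigendecomposition(X, bandwidth=1.0):
    """Form the full N x N kernel matrix and eigendecompose it."""
    # Pairwise squared distances: O(N^2 d) time, O(N^2) memory.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * bandwidth**2))   # N x N kernel matrix
    # Dense symmetric eigendecomposition: O(N^3) operations.
    eigenvalues, eigenvectors = np.linalg.eigh(K)
    return eigenvalues, eigenvectors

# Feasible at N = 1000; at N = 10^5 the same call would need roughly
# 10^15 operations and about 80 GB just to store K in double precision.
X = np.random.default_rng(0).standard_normal((1000, 10))
evals, evecs = naive_kernel_eigendecomposition(X)
```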
In this talk, I will review recent developments concerning two randomized algorithms. The first is "randomized block Krylov iteration", which uses an array of random Gaussian test vectors to probe a large data matrix in order to provide a randomized principal component analysis. Remarkably, this approach works well even when the matrix of interest is not low-rank. The second algorithm is "randomly pivoted Cholesky decomposition", which iteratively samples columns from a positive semidefinite matrix using a novelty metric and reconstructs the matrix from the randomly selected columns. Ultimately, both algorithms furnish a randomized approximation of an N x N matrix with a reduced rank k << N, which enables fast inversion or singular value decomposition at a cost of O(N k^2) operations. The resulting speed-up factor, (N/k)^2 in going from N^3 to N k^2 operations, can reach 3 million. The newest algorithms achieve this speed-up factor while guaranteeing performance across a broad range of input matrices.
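For readers who want a concrete picture of the two algorithms before the talk, the NumPy sketches below follow their standard published forms. They are assumptions for illustration, not the speaker's implementations; the function names and the parameters depth and seed are hypothetical choices. The first sketch probes a symmetric matrix A with a block of Gaussian test vectors, builds a block Krylov basis, and extracts approximate top-k eigenpairs by Rayleigh-Ritz projection:

```python
import numpy as np

def randomized_block_krylov_pca(A, k, depth=4, seed=0):
    """Approximate the top-k eigenpairs of a symmetric N x N matrix A."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    Omega = rng.standard_normal((N, k))       # Gaussian test vectors
    blocks = [Omega]
    for _ in range(depth):                    # Krylov blocks A^j Omega
        blocks.append(A @ blocks[-1])
    # Production codes re-orthogonalize between blocks for stability;
    # a single QR on the stacked blocks suffices for this sketch.
    Q, _ = np.linalg.qr(np.hstack(blocks))    # orthonormal Krylov basis
    T = Q.T @ (A @ Q)                         # Rayleigh-Ritz projection
    evals, V = np.linalg.eigh(T)              # small dense eigenproblem
    top = np.argsort(evals)[::-1][:k]
    return evals[top], Q @ V[:, top]
```

The second sketch implements randomly pivoted Cholesky: the residual diagonal serves as the "novelty metric" mentioned above, each pivot column is sampled in proportion to it, and the factor is built up one column at a time.

```python
import numpy as np

def randomly_pivoted_cholesky(A, k, seed=0):
    """Rank-k approximation A ~= F F.T of a PSD matrix A."""
    rng = np.random.default_rng(seed)
    N = A.shape[0]
    F = np.zeros((N, k))
    d = np.diag(A).astype(float).copy()       # residual diagonal ("novelty")
    for t in range(k):
        s = rng.choice(N, p=d / d.sum())      # sample a pivot column
        g = A[:, s] - F[:, :t] @ F[s, :t]     # residual of column s
        F[:, t] = g / np.sqrt(g[s])           # Cholesky-style update
        d = np.maximum(d - F[:, t] ** 2, 0.0) # update novelty scores
    return F                                  # A ~= F @ F.T, O(N k^2) total
```

Given the rank-k factor F from either method, linear solves with A (for instance via the Woodbury identity) or a truncated singular value decomposition cost O(N k^2) operations, which is the source of the speed-up quoted above.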
Contact: Shravan Veerapaneni