Abstract
How much memory does learning a random linear function require? In his breakthrough paper, Raz showed that any algorithm making a single pass over the samples requires either quadratic memory or an exponential number of equations [FOCS'16, JACM'19]. This still left open the hope of a memory-efficient multi-pass learning algorithm; multi-pass algorithms are ubiquitous in practice, as training machine learning models usually takes multiple passes (epochs) over the data. In this talk, I will show a tight time-space lower bound for constant-pass algorithms, answering this question in the negative. The talk is based on joint work with Xin Lyu, Avishay Tal, and Junzhao Yang, and on a previous work by Sumegha Garg, Ran Raz, and Avishay Tal [CCC'19].
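To make the memory/sample tradeoff concrete, here is a minimal sketch (not from the talk) of the standard parity-learning formulation behind this line of work: a hidden x in {0,1}^n, and each sample is a random linear equation (a, <a, x> mod 2). Gaussian elimination recovers x from O(n) samples but stores roughly n^2 bits, which is exactly the memory regime the lower bound says a one-pass (and, per this talk, constant-pass) algorithm cannot avoid without exponentially many samples. Function names and parameters below are illustrative only.

```python
import random

def gen_sample(x):
    """One sample: a uniformly random equation (a, <a, x> mod 2) about the hidden x."""
    a = [random.randint(0, 1) for _ in range(len(x))]
    return a, sum(ai * xi for ai, xi in zip(a, x)) % 2

def learn_by_elimination(samples, n):
    """Gaussian elimination over GF(2): recovers x from O(n) samples,
    but keeps up to n reduced rows of length n+1, i.e. about n^2 bits of memory."""
    basis = {}  # leading index -> reduced row (n coefficients + constant term)
    for a, b in samples:
        row = a + [b]
        pivot = next((i for i in range(n) if row[i]), None)
        # Reduce against existing rows until the leading index is new.
        while pivot is not None and pivot in basis:
            row = [u ^ v for u, v in zip(row, basis[pivot])]
            pivot = next((i for i in range(n) if row[i]), None)
        if pivot is not None:
            basis[pivot] = row
    # Back-substitute; assumes full rank, which O(n) random equations
    # give with overwhelming probability.
    x = [0] * n
    for p in sorted(basis, reverse=True):
        r = basis[p]
        x[p] = (r[n] + sum(r[i] * x[i] for i in range(p + 1, n))) % 2
    return x

if __name__ == "__main__":
    n = 16
    secret = [random.randint(0, 1) for _ in range(n)]
    samples = [gen_sample(secret) for _ in range(4 * n)]  # linearly many samples
    assert learn_by_elimination(samples, n) == secret
```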
Time
2024-01-05 10:00 - 11:00
Speaker
Hongxun Wu, UC Berkeley
Room
Room 308