During Spring 2023, we held two groups, each focused on a different topic within TCS. The groups were: Deep Learning Theory and Prediction Markets. Each group was run by an undergraduate student organizer along with a graduate student mentor.
Please see the descriptions and tables below for a summary and the list of talks for each of the groups.
This was the sixth iteration of the TCS seminar, which was run by Alex Lindenbaum.
Deep Learning Theory. Organizers: Ari and Berkan. Graduate student mentor: Clayton.
Description: We studied the fundamental questions of deep learning theory and attempts to answer them through a series of "lenses" for understanding deep learning: approximation theory, classical machine learning theory, benign overfitting, kernel machines, and feature learning.
Date | Topic | Speaker | Reading |
---|---|---|---|
January 27th | Introduction to Deep Learning Theory | Ari, Berkan, and Clayton | |
February 3rd | Universal Approximation | Ari and Berkan | [Cybenko'89]; [HSW'89]; [Barron'93]; [KL'19] |
February 10th | Depth Separation | Rohan and Gilad | [Telgarsky'16]; [ES'15]; [Daniely'17]; [MYSS'21]; [SL'22]; [SC'22] |
February 17th | Classical ML Theory: Overview | Zachary and Noah | Understanding Machine Learning, Chap. 6 |
February 24th | Classical ML Theory for Deep Learning | Paden and Catherine | [BHLM'17]; [BFT'17] |
March 3rd | Limitations of Classical ML Theory | Bill and Eitan | [ZBHRV'16]; [BMM'18] |
March 24th | Benign Overfitting: Overview | Shujun | [BHMM'19]; [NKBYBS'19] |
March 31st | Benign Overfitting for Linear Regression | Jay and Anthony | [BLLT'19]; [BHX'19]; [MNSBHS'20]; [ASH'21] |
April 7th | Random Feature Models | Maksym and Nihar | [RR'07]; [HSSV'21] |
April 14th | Neural Tangent Kernel | Chris and Edward | [JGH'18]; [ADHLSW'19]; [BL'20] |
April 21st | Feature Learning and Limitations of Kernels | Johannes and Berkan | [CB'20]; [DLS'22]; [BBSS'22] |
Prediction Markets. Organizer: Alex. Graduate student mentor: Jason.
Description: In this seminar, we explored the foundations and practical uses of prediction markets, surveyed their diverse instantiations, and studied how to efficiently aggregate forecasts and optimally incentivize forecasters under uncertainty, using ideas from algorithmic game theory and convex optimization.
Resource | Title | Link |
---|---|---|

Date | Topic | Discussion Leader | Reading |
---|---|---|---|