Columbia Undergraduate Learning Seminar in Theoretical Computer Science

About

The Columbia Undergraduate Learning Seminar in Theoretical Computer Science is a student-run seminar for Columbia undergraduates interested in theoretical computer science (TCS). The goal of the learning seminar is to give undergraduate students the opportunity to learn about TCS in a collaborative, student-driven setting and to meet other students who share that interest.

The learning seminar is dedicated to providing an inclusive and welcoming environment for all students interested in theoretical computer science. No background in theoretical computer science is required to participate in the seminar, and everyone is welcome to join!

Each semester, the Columbia Undergraduate Learning Seminar in Theoretical Computer Science runs one or more groups on topics related to TCS. Presentations are given primarily by students, which is a great opportunity to gain experience giving a technical talk in TCS and to meet other students interested in the topic.


Join Us!

The seminar is currently run by Ekene Ezeunala. If you have any questions or would like to join the seminar's Slack channel, please email him here.

Sign up for the mailing list here: Mailing List Sign-Up.


Spring 2024

This Spring semester, we are running three groups, each focused on a different topic within TCS: Neural Manifolds (and TCS), Quantum Complexity Theory, and Randomness in Computation. Each group is run by an undergraduate student organizer along with a graduate student mentor. The groups meet roughly weekly and should be approachable for students with any level of prior exposure to TCS.

Please see the descriptions and tables below for a summary and the list of talks for each of the groups.

Neural Manifolds and TCS

Organizer: Shujun. Graduate Student Mentor: Todd.

Description: This seminar aims to connect geometric conceptions of learning with learning in the brain. Manifolds are a key structure for unifying these perspectives, and much of the seminar builds towards understanding how they work. The first half develops key concepts in geometric approaches to learning, while the second half uses these tools to study learning in the brain.
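
As a concrete warm-up for the early sessions, here is a minimal illustrative sketch (not taken from the readings; the toy data and the "teacher" direction are invented for illustration) of the classical perceptron learning rule in NumPy on a linearly separable dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data: labels are the sign of a hidden "teacher" direction.
n_samples, dim = 200, 10
teacher = rng.normal(size=dim)
X = rng.normal(size=(n_samples, dim))
y = np.sign(X @ teacher)

# Classical perceptron rule: update only on misclassified examples.
w = np.zeros(dim)
for _ in range(100):                      # epochs
    mistakes = 0
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i) <= 0:          # misclassified (or on the boundary)
            w += y_i * x_i                # move w toward the correct side
            mistakes += 1
    if mistakes == 0:                     # converged: every point classified correctly
        break

print("training accuracy:", np.mean(np.sign(X @ w) == y))
```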

Resource | Title | Link
Gerstner | Neuronal Dynamics | Link

Date | Topic | Speaker | Reading
February 23rd | Introduction | Shujun | Gabrie et al (2023), "Neural networks: from the perceptron to deep nets."
March 2nd | Sparse Coding | Shujun | S. Ganguli & H. Sompolinsky (2012), "Compressed Sensing, Sparsity, and Dimensionality in Neuronal Information Processing and Data Analysis."; S. Kakade and G. Shakhnarovich (2009), "Random Projections."
March 8th | Perceptrons | Christine and Audrey | H. Sompolinsky, "Perceptron."; Kernelization of Perceptrons (Slides 28-34, Nakul Verma)
March 15th | Break | |
March 22nd | Kernels and Similarity Matrices | Shujun | J. Diedrichsen & N. Kriegeskorte (2017), "Representational Models: A Common Framework for Understanding Encoding, Pattern-component, and Representational-similarity Analysis."; A. Jacot et al (2018), "Neural Tangent Kernel: Convergence and Generalization in Neural Networks."
March 29th | Physics of Capacity | Shujun | E. Gardner (1988), "The Space of Interactions in Neural Network Models."
April 5th | Manifolds | Shujun | S. Chung et al (2018), "Classification and Geometry of General Perceptual Manifolds."; S. Chung & L.F. Abbott (2021), "Neural population geometry: An approach for understanding biological and artificial neural networks."; N. Kriegeskorte & R.A. Kievit (2013), "Representational geometry: integrating cognition, computation, and the brain."; X. Li and S. Wang (2023), "Toward a computational theory of manifold untangling: from global embedding to local flattening."
April 12th | Generalisation Error Theory | Shujun | H.S. Seung et al (1992), "Statistical Mechanics of Learning from Examples."; A. Canatar et al (2021), "Spectral Bias and Task-model Alignment Explain Generalization in Kernel Regression and Infinitely Wide Neural Networks."
April 19th | Deep Learning in Neuroscience: A Look to the Future | Shujun | D.L. Yamins & J.J. DiCarlo (2016), "Using Goal-driven Deep Learning Models to Understand Sensory Cortex."; B.A. Richards et al (2019), "A Deep Learning Framework for Neuroscience."; A. Saxe et al (2021), "If Deep Learning Is the Answer, What Is the Question?"; T.P. Lillicrap et al (2020), "Backpropagation and the brain."

Quantum Complexity

Organizers: Mark and Ryan. Graduate Student Mentor: Natalie.

Description: This seminar aims to explore fundamental questions about the relationship of quantum computation to classical computation through the lens of complexity theory, looking high and low for answers. We will cover quantum complexity classes such as BQP and QMA, the classical simulation of quantum circuits, Hamiltonian complexity, and classical-quantum separations for constant-depth circuits.
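
To make "classical simulation of quantum circuits" concrete, below is a minimal, illustrative statevector simulator in NumPy (the gate set and the three-qubit circuit are chosen arbitrarily and are not taken from the course materials). The state of n qubits is stored as a vector of 2^n amplitudes, which is exactly the exponential overhead that the simulation and separation results in this seminar are about.

```python
import numpy as np

# Brute-force simulation: an n-qubit state is a vector of 2**n amplitudes, and a gate
# on the whole register is a 2**n x 2**n matrix -- hence the exponential cost of
# generic classical simulation.
I = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                        # control = first of the two qubits

def kron_all(ops):
    """Tensor product of a list of operators (qubit 0 is the leftmost factor)."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 3
state = np.zeros(2 ** n); state[0] = 1.0               # start in |000>
state = kron_all([H, I, I]) @ state                    # Hadamard on qubit 0
state = kron_all([CNOT, I]) @ state                    # CNOT on qubits 0,1
state = kron_all([I, CNOT]) @ state                    # CNOT on qubits 1,2 -> GHZ state
print(np.round(state, 3))                              # amplitude 1/sqrt(2) on |000> and |111>
```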

Resource | Title | Link
Grier | Quantum Complexity Theory | Link
DeWolf | Quantum Computing Notes | Link
Yuen | Advanced Topics in Quantum Information Theory | Link

Date | Topic | Speaker | Reading
February 23rd | Introduction and Overview | Ryan and Mark | [O'Donnell, Wright] Lecture 01: Introduction to the Quantum Circuit Model
March 1st | Quantum complexity theory: most functions are hard to compute; BPP ⊆ BQP ⊆ PSPACE | Shiv | DeWolf, Chapter 13
March 8th | Hardness of simulating depth-3 quantum circuits | Tyler | Chapter 2.5 of N. Parham (2022)
March 15th | Break | |
March 22nd | Efficient classical simulation of Clifford circuits | Christine | Grier, Lecture 11
March 29th | NYC Quantum Algorithms, Complexity and Cryptography (QuACC) Day | Natalie |
April 5th | Bravyi-Gosset-König separation between constant-depth quantum and classical circuits | Zara | DeWolf, Chapter 14; Yuen, Lecture 3
April 12th | QMA and the local Hamiltonian problem | Andrew Yang, Mark | Yuen, Lecture 4
April 26th | The quantum PCP conjecture | Ryan | Yuen, Lecture 5

Randomness in Computation

Organizer: Ekene.

Description: In this group, we'll learn what it means to treat randomness as a resource for computation. We'll study randomized algorithms for famous problems, take a complexity-theoretic look at randomness, and discuss probabilistic proofs as well as methods for obtaining deterministic algorithms from existing randomized ones. Some background in probability is helpful but not necessary.
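
As a taste of the Las Vegas versus Monte Carlo distinction from the first session, here is a small illustrative Python sketch (not from the course texts): Freivalds' algorithm verifies a matrix product in Monte Carlo fashion (always fast, wrong with small probability), while randomized quicksort is Las Vegas (always correct, with random running time).

```python
import random
import numpy as np

def freivalds(A, B, C, trials=20):
    """Monte Carlo: decide whether A @ B == C using O(trials * n^2) work,
    accepting a wrong C with probability at most 2**(-trials)."""
    n = C.shape[0]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=n)         # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # two matrix-vector products per trial
            return False                            # a mismatch is conclusive: A @ B != C
    return True                                     # "probably equal"

def quicksort(xs):
    """Las Vegas: the output is always sorted; only the running time is random."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)                       # random pivot => O(n log n) expected comparisons
    return (quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + quicksort([x for x in xs if x > pivot]))

# Integer matrices keep the Freivalds check exact (no floating-point rounding).
A = np.random.randint(0, 10, size=(50, 50))
B = np.random.randint(0, 10, size=(50, 50))
C = A @ B
print(freivalds(A, B, C))          # True: the product is verified
C[0, 0] += 1
print(freivalds(A, B, C))          # False with overwhelming probability
print(quicksort([5, 3, 8, 1, 2]))  # [1, 2, 3, 5, 8]
```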

Resource | Title | Link
Motwani and Raghavan | Randomized Algorithms | Link
Mitzenmacher and Upfal | Probability and Computing | Link
Goldreich | Randomness in Computation | Link

Date | Topic | Speaker | Reading
February 17th | Introduction, Las Vegas and Monte Carlo algorithms | Ekene | Chapter 1 of Motwani and Raghavan; Chapters 1 and 2 of Mitzenmacher and Upfal; Section 1 of Goldreich
February 24th | Moments, Deviations, and Concentration Inequalities | Nikhil | Chapters 3 and 4 of Motwani and Raghavan and of Mitzenmacher and Upfal
March 2nd | The Probabilistic Method | Lucas | Chapter 5 of Motwani and Raghavan; Chapter 6 of Mitzenmacher and Upfal
March 9th | Random Walks and Markov Chains | Mary | Chapter 6 of Motwani and Raghavan; Chapter 7 of Mitzenmacher and Upfal
March 16th | Approximate Counting, Jerrum-Sinclair-Vigoda | Ekene | Chapter 11 of Motwani and Raghavan; Jerrum, Sinclair, and Vigoda (2004)
March 23rd | Pseudorandomness and Derandomisation | Ekene | Section 2 of Goldreich
March 30th | Randomness in Sequential Computation | Lisa | Chapters 8 and 10 of Motwani and Raghavan
April 6th | Probabilistic Proof Systems | Niko | Section 3 of Goldreich
April 13th | Sublinear-time Algorithms with Randomness | Ekene | Section 5 of Goldreich

Previous Semesters

Fall 2023

Spring 2023

Fall 2022

Summer 2022

Spring 2022

Fall 2021

Summer 2021