avi1@umd.edu, [Google Scholar] [Twitter] [GitHub] [CV]
In 2023, I finished my Ph.D. in the Applied Math and Scientific Computation program at the University of Maryland, where I was advised by Tom Goldstein on my work in deep learning. My general interests range from security to generalization, and my work focuses on expanding our understanding of when and why neural networks work. My specific interest in data security and model vulnerability has led to work on adversarial attacks and data poisoning. I am also investigating neural networks' ability to extrapolate from easy training tasks to more difficult problems at test time. In the Fall of 2023, I will start a post-doc at Carnegie Mellon University with Zico Kolter.
From June 2022 through March 2023, I was a researcher at Arthur AI in New York City, where I worked on the consistency of post-hoc explainers; this work was published at AIES 2023.
Before starting at UMD, I received a master's degree in applied math from the University of Washington and a bachelor's degree in applied math from Columbia Engineering.
Avi Schwarzschild, Max Cembalest, Karthik Rao, Keegan Hines, and John Dickerson. Reckoning with the Disagreement Problem: Explanation Consensus as a Training Objective. Artificial Intelligence, Ethics, and Society (AIES), 2023. [ArXiv]
Roman Levin, Valeriia Cherepanova, Avi Schwarzschild, Arpit Bansal, C Bayan Bruss, Tom Goldstein, Andrew Gordon Wilson, and Micah Goldblum. Transfer Learning with Deep Tabular Models. International Conference on Learning Representations (ICLR), 2023. [Published Version]
Arpit Bansal*, Avi Schwarzschild*, Eitan Borgnia, Zeyad Emam, Furong Huang, Micah Goldblum, and Tom Goldstein. End-to-end Algorithm Synthesis with Recurrent Networks: Logical Extrapolation Without Overthinking. Neural Information Processing Systems (NeurIPS), 2022. [ArXiv]
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, and Tom Goldstein. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. [ArXiv] [Published Version]
Avi Schwarzschild*, Arjun Gupta*, Amin Ghiasi, Micah Goldblum, and Tom Goldstein. The Uncanny Similarity of Recurrence and Depth. International Conference on Learning Representations (ICLR), 2022. [Published Version]
Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Furong Huang, Uzi Vishkin, Micah Goldblum, and Tom Goldstein. Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks. Neural Information Processing Systems (NeurIPS), 2021. [Published Version]
Micah Goldblum*, Avi Schwarzschild*, Ankit Patel, and Tom Goldstein. Adversarial Attacks on Machine Learning Systems for High-Frequency Trading. International Conference on AI in Finance (ICAIF), 2021. [Published Version]
Avi Schwarzschild*, Micah Goldblum*, Arjun Gupta, John Dickerson, and Tom Goldstein. Just How Toxic is Data Poisoning? A Benchmark for Backdoor and Data Poisoning Attacks. International Conference on Machine Learning (ICML), 2021. [Published Version]
Ahmed Abdelkader, Michael Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, and Chen Zhu. Headless Horseman: Adversarial Attacks on Transfer Learning Models. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2020. [Published Version]
Micah Goldblum*, Jonas Geiping*, Avi Schwarzschild, Michael Moeller, and Tom Goldstein. Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory. International Conference on Learning Representations (ICLR), 2020. [Published Version]
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, and Micah Goldblum. A Cookbook of Self-Supervised Learning. Preprint. [ArXiv]
Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal Guidance for Diffusion Models. Preprint. [ArXiv]
Avi Schwarzschild*, Alex Stein*, Michael Curry, Tom Goldstein, and John Dickerson. Neural Auctions Compromise Bidder Information. Preprint. [ArXiv]
Gowthami Somepalli, Micah Goldblum, Avi Schwarzschild, C Bayan Bruss, and Tom Goldstein. SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training. Under Review. [ArXiv]
Arpit Bansal, Micah Goldblum, Valeriia Cherepanova, Avi Schwarzschild, C Bayan Bruss, and Tom Goldstein. MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data. Under Review. [ArXiv]