My research lies at the intersection of machine learning and security and privacy. I am currently exploring the limitations, vulnerabilities, and privacy implications of neural networks. My recent work includes defending neural networks against backdoor and adversarial attacks and using imperceptible perturbations to protect user privacy.
I received my BS in computer science from the University of Chicago in 2020. I spent two summers at Facebook as a software engineer.
Using Honeypots to Catch Adversarial Attacks on Neural Networks (CCS MTD Workshop 2021 Invited Talk)
Fawkes: Protecting Personal Privacy against Unauthorized Deep Learning Models (USENIX Security 2020)
Gotta Catch’Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks (ACM CCS 2020)