About Me
Background
I am a third-year PhD student in the Program in Applied and Computational Mathematics (PACM) at Princeton University, advised by Professor Charles Fefferman. My research focuses on formalizing manifold learning and extending it to high-noise settings. I graduated from Columbia University in 2023. My previous work spanned computational neuroscience, AI interpretability, and power systems.
Publications
Manifold Learning
- Reconstruction of Manifold Distances from Noisy Observations, preprint (joint with Charles Fefferman and Kevin Ren)
Other
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization, Proceedings of the AAAI Conference on Artificial Intelligence 2024 (joint with Nanfack et al.)
- Connectomic Analysis of the Drosophila Lateral Neural Clock Cells Reveals the Synaptic Basis of Functional Pacemaker Classes, eLife 2022 (joint with Shafer et al.)
- Adversarial Attacks on Feature Visualization Methods, NeurIPS ML Safety Workshop 2022 (joint with Eugene Belilovsky and Michael Eickenberg)
- Economic Incentives for Reducing Peak Power Utilization in Electric Vehicle Charging Stations, IEEE PES ISGT 2018 (joint with Stan Pietrowicz)
