Research
Papers:
- Quantifying Lie Group Learning with Local Symmetry Error (NeurIPS 2023 Workshop, “Symmetry and Geometry in Neural Representations”)
- A tradeoff between universality of equivariant models and learnability of symmetries (preprint)
- Representation Learning with Multisets (preprint; preliminary version accepted at NeurIPS 2019 Workshop on Sets and Partitions)
Some more formal writeups, from course projects and elsewhere, that I think are interesting enough to post here:
- A note on the first-projection method for proving central limit theorems
- A note on a representation of a Poisson(1/2) random variable as a sum of Bernoulli products
- An Exploration of Worst-Case Deletion Correcting Codes
- Understanding Successive Features via Random Low Rank Structure
- Applying Grover’s Algorithm to Unique-k-SAT
This last writeup is mostly for my own sake, to have a record of a conjecture answering the question “what does a typical Markov chain look like?” The work done here is minimal (as one can tell by the unfinished state of the document, for which I’ve since lost the source code). Nonetheless, the conjecture is interesting and, to the best of my knowledge at the time of writing, has no published confirmation or refutation, though a substantial body of work exists on related topics.
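The question is at least easy to make concrete in code. Below is a minimal sketch under one common convention for a “random” Markov chain, namely transition rows drawn i.i.d. uniformly from the probability simplex; this model is my assumption here, and the writeup’s conjecture may use a different one. The sketch samples a chain, computes its stationary distribution, and reports the second-largest eigenvalue modulus.

```python
import numpy as np

def random_stochastic_matrix(n, rng):
    """Sample an n x n row-stochastic matrix whose rows are i.i.d.
    uniform on the probability simplex, i.e. Dirichlet(1, ..., 1).
    (Normalized i.i.d. Exponential(1) entries give exactly this law.)
    NOTE: this model of a "typical" chain is an assumption, not
    necessarily the one used in the writeup."""
    rows = rng.exponential(size=(n, n))
    return rows / rows.sum(axis=1, keepdims=True)

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    w, v = np.linalg.eig(P.T)
    i = np.argmin(np.abs(w - 1.0))
    pi = np.real(v[:, i])
    return pi / pi.sum()  # fixes the sign and the scale

rng = np.random.default_rng(0)
n = 200
P = random_stochastic_matrix(n, rng)
pi = stationary_distribution(P)

# Empirically, for this model the stationary distribution sits close
# to uniform and the non-leading eigenvalues are small, so the chain
# mixes fast; whether that matches the conjecture is left open here.
print("max |pi_i - 1/n|:", np.max(np.abs(pi - 1.0 / n)))
eigs = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
print("second-largest |eigenvalue|:", eigs[1])
```

Rerunning with different seeds and growing n is a quick way to see the regularities such a conjecture would need to explain.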