I try to keep this page updated with teasers of what I’m currently working on. Please reach out to me if you’re interested in specifics or possible collaborations.
Visual Reasoning in Self-Supervised Learning: Self-supervised learning is still focused primarily on recognition tasks, where performance is fast approaching, and in some cases surpassing, that of typical supervised methods. Relative performance is much worse on visual/compositional reasoning tasks, and we’re working to close that gap.
Expert Model Selection: Task2Vec is one of my favorite papers to come out in recent years. I’m working on self-supervised versions of this method that don’t require target labels to select the proper expert.
Simplifying Contrastive Learning: Current contrastive learning methods frame a classification problem in which each image is its own class. We show that this formulation is not necessary and that superior results can be achieved on small- and moderate-sized datasets with extremely simplified techniques. A preprint is expected around the ICLR deadline (end of September). This work has been preempted by https://arxiv.org/abs/2005.10242
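To make the "each image is its own class" framing concrete, here is a minimal NumPy sketch of an InfoNCE-style instance-discrimination loss, the formulation used by methods like SimCLR. This is an illustrative sketch, not the simplified method described above; the function name, temperature value, and toy embeddings are my own assumptions.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style instance-discrimination loss (illustrative sketch).

    z1, z2: (n, d) embeddings of two augmented views of the same n images.
    Row i of z1 is the positive for row i of z2; every other image in the
    batch serves as a negative, i.e. each image is treated as its own class.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries are the positive pairs: image i classified as class i
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Nearly identical views give a low loss; unrelated views give a high loss
aligned_loss = info_nce_loss(z, z + 0.01 * rng.normal(size=z.shape))
random_loss = info_nce_loss(z, rng.normal(size=(8, 16)))
```

Each row of the similarity matrix is an n-way softmax classification, which is exactly the classification-per-image structure the paragraph above argues can be dispensed with.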