By uncovering and analyzing the hidden capabilities and properties of pretrained neural networks, we can gain insights that can be leveraged in a variety of ways. During my PhD, I focused on pretrained classifiers and self-supervised networks. More recently, I have turned my attention to generative models, exploring how to expand the generation and editing abilities of text-to-image networks from both inference and training perspectives. Prior to joining OpenAI, I led the early efforts in internal large-scale training of text-to-image models at Salesforce AI.