I am broadly interested in how the brain extracts structured, abstract representations from noisy, high-dimensional perceptual inputs and uses these representations to achieve intelligent behavior. To better understand these processes, my work exploits a bidirectional interaction between cognitive science and artificial intelligence, with an emphasis on the visual domain. This involves two major components. First, I use recently developed neural network modeling techniques to build models of higher-order cognitive processes (e.g., metacognition, analogical reasoning) that are grounded in realistic perceptual inputs (images). Second, I take inspiration from cognitive science to design novel inductive biases aimed at imbuing deep learning algorithms with a more human-like capacity for reasoning and abstraction.