Carlos Ponce, MD, PhD

Assistant Professor of Neuroscience

Ponce Lab


Research

The goal of the Ponce laboratory is to define how neurons in different cortical areas interact to give rise to our perception of shape and motion. The lab studies this question in the brain of the rhesus macaque, recording action potentials from neurons spanning the visual cortical hierarchy, including V1, V2, V4, MT, and inferotemporal cortex (IT). The lab holds that the best explanation for visual processing is mathematical, so we work to ensure that all of our results can be implemented in computational models such as deep neural networks.

To achieve this goal, the animals must perform behavioral tasks, so the lab uses modern techniques (including computer-based automated systems) to train them humanely and efficiently. The lab records from their brains using chronically implanted microelectrode arrays, which yield large amounts of data quickly, and sometimes with single electrodes for novel exploratory projects (i.e., our moonshot division!). During recording, activity-manipulation techniques (such as cortical cooling, optogenetics, and chemogenetics) are used to alter the cortical inputs to the neurons under study and establish results that are causal, not just correlational.

The lab’s experimental work is influenced by machine learning. We use a variety of deep neural networks (including convolutional, recurrent, and generative adversarial networks) to test preliminary hypotheses, interpret results, and generate interesting stimuli for biology-based experiments, as in the sketch below. The programming languages of choice are Matlab and Python.
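As a rough illustration of generator-based stimulus synthesis (the approach behind the 2019 Cell paper listed below), here is a minimal, hypothetical Python sketch. The generative network and the neuron's firing rate are stand-in functions, and the evolutionary loop is a toy version of the idea, not the lab's actual pipeline.

    # Illustrative sketch only: evolve latent codes so that a simulated
    # "neuron's" response to the generated images increases over generations.
    import numpy as np

    rng = np.random.default_rng(0)
    LATENT_DIM = 64 * 64  # hypothetical latent dimensionality for this toy generator

    def generate_image(code):
        """Stand-in for a pretrained deep generative network: latent code -> image."""
        return np.tanh(code).reshape(64, 64)

    def neuron_response(image):
        """Stand-in for a recorded neuron's firing rate to an image."""
        template = np.outer(np.hanning(64), np.hanning(64))  # arbitrary made-up preference
        return float((image * template).sum())

    # Simple evolutionary loop: keep the latent codes whose images drive the
    # "neuron" most strongly, then mutate them to form the next generation.
    population = rng.standard_normal((20, LATENT_DIM))
    for generation in range(50):
        fitness = np.array([neuron_response(generate_image(code)) for code in population])
        parents = population[np.argsort(fitness)[-5:]]  # top 5 codes this generation
        children = parents[rng.integers(0, 5, size=15)] + 0.5 * rng.standard_normal((15, LATENT_DIM))
        population = np.vstack([parents, children])

    best_image = generate_image(parents[-1])  # preferred image after the final generation

In the real experiments, the stand-in response function would be replaced by the recorded activity of a neuron, and the toy generator by a deep generative image network.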

Solving the problem of visual recognition at the intersection of visual neuroscience and machine learning will yield applications that improve automated visual recognition in fields like medical imaging, security, and self-driving vehicles. But just as importantly, it will illuminate how our inner experience of the visual world comes to be.


Selected publications

  • Wang B and Ponce CR, A Geometric Analysis of Deep Generative Image Models and Its Applications. In Proc. International Conference on Learning Representations, 2021.
  • Arcaro MJ, Ponce CR, Livingstone M. The neurons that mistook a hat for a face. Elife. 2020; 9:e53798. Published 2020 Jun 10.
  • Ponce CR, Xiao W, Schade PF, Hartmann TS, Kreiman G, Livingstone MS. (2019) Evolving images for visual neurons using a deep generative network reveals coding principles and neuronal preferences. Cell. May 2;177(4):999-1009.
  • Hartmann TS, Livingstone MS and Ponce CR. Feature maps as a common signature for theories of visual recognition. In preparation.
  • Ponce CR, Lomber SG, Livingstone MS. (2017) Posterior inferotemporal cortex cells use multiple visual pathways for shape encoding. J. Neurosci. May 10;37(19):5019-5034.
  • Ponce CR, Hartmann T, Livingstone MS. (2017) Curvature and end-stopping as organizing principles for ventral stream organization. J. Neurosci. December; 2507-16.
  • Ponce CR, Genecin MP, Perez-Melara G and Livingstone, MS. (2016) Automated chair-training of rhesus macaques. J Neurosci Methods, Apr 1; 263:75-80.
  • Ponce CR, Lomber SG, Born RT. (2008) Integrating motion and depth via parallel pathways. Nat Neurosci. Feb; 11(2):216-23.

See a complete list of Dr. Ponce’s publications on PubMed.


Education

2001, BS, Biology and Chemistry, University of Utah

2008, PhD, Neuroscience, Harvard Graduate School of Arts and Sciences

2010, MD, Harvard Medical School