Shashwat Sridhar

Young scientist talk \ Manfred Eigen lecture theatre

A key challenge in computational neuroscience is designing easy-to-fit models that also capture the wide range of responses exhibited by sensory neurons. In the retina, predictive models of retinal ganglion cell (RGC) activity that rely on linear receptive fields (RFs) require few parameters but fail to capture the cells’ sensitivity to high-frequency spatial contrast. Nonlinear models, such as subunit models, which divide the RF into smaller, nonlinearly combined regions, offer more accuracy but are often difficult to fit to experimental data, especially with natural stimuli. To address this gap, we use a model from the literature [1] that effectively captures nonlinear spatial integration in RGCs with few tunable parameters, by combining signals representing the mean light intensity and its variance within the RF.
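To make the core idea concrete, here is a minimal sketch of a spatial-contrast-style response computation under our own illustrative assumptions: a weighted mean of stimulus intensity within the RF and the weighted spatial variance are combined linearly and passed through an output nonlinearity. The weights `w_mean` and `w_var` and the softplus output are placeholders, not the fitted model of [1].

```python
import numpy as np

def sc_model_response(stimulus, rf_weights, w_mean=1.0, w_var=0.5):
    """Illustrative spatial contrast (SC) model response.

    stimulus   : 1D array of pixel intensities within the RF
    rf_weights : nonnegative spatial weights over the same pixels, summing to 1
    """
    # Weighted mean light intensity within the RF (the "linear RF" signal).
    mean_signal = np.dot(rf_weights, stimulus)
    # Weighted spatial variance of intensity within the RF (the "contrast" signal).
    var_signal = np.dot(rf_weights, (stimulus - mean_signal) ** 2)
    # Combine the two signals and apply a softplus output nonlinearity
    # (both the weighting and the nonlinearity are assumptions for this sketch).
    drive = w_mean * mean_signal + w_var * var_signal
    return np.log1p(np.exp(drive))

# A contrast-reversing grating has zero mean intensity but high spatial
# variance, so the variance term lets the model respond where a purely
# linear RF would predict no response.
rf = np.ones(8) / 8
grating = np.array([1., -1., 1., -1., 1., -1., 1., -1.])
uniform = np.zeros(8)
```

In this toy setup, the grating elicits a larger response than the uniform stimulus purely through the variance term, which is the signature of nonlinear spatial integration that linear RF models miss.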

We extend this spatial contrast (SC) model to the spatiotemporal domain and evaluate it on spiking activity we recorded from marmoset retinas under artificial and naturalistic stimulation. We find that the predictive performance of the SC model exceeds that of standard linear models and a subunit model, particularly for cells with larger RFs, and is comparable to that of a one-layer convolutional neural network. Furthermore, we use the model to estimate the cells’ optimal spatial scale of nonlinear integration, finding that this scale remains consistent across cell types. Our results indicate that the SC model offers a straightforward approach to capturing key aspects of nonlinear spatial integration with minimal parameters, making it an effective benchmark for comparison with more complex nonlinear models.