Mission

In neuroscience, the temporal firing patterns of neurons are heavily studied; they shed light on specific activity such as the repetitive activation of groups of neurons and the convergence of firing rates to stable steady states.

In modern neural networks, most temporal information is handled with either attention or RNNs. Attention has been highly successful in recent years but lacks the recurrent connections that are a critical component of the biological brain. RNNs, on the other hand, have major technical limitations in how information propagates through time, most notably vanishing and exploding gradients over long sequences.
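The propagation limitation can be illustrated with a toy example (my own sketch, not from the text above): in a linear RNN, the gradient flowing back through time is a product of recurrent Jacobians, so when the recurrent weight matrix has spectral radius below 1 the gradient shrinks geometrically with the number of steps.

```python
import numpy as np

# Toy illustration (assumed setup): backpropagation through a linear RNN
# multiplies the gradient by the recurrent Jacobian W at every time step,
# so with spectral radius 0.5 the gradient norm decays roughly as 0.5**t.
W = 0.5 * np.eye(4)            # recurrent weight, spectral radius 0.5
grad = np.eye(4)               # gradient of the final state w.r.t. itself
norms = []
for t in range(30):            # backpropagate through 30 time steps
    grad = grad @ W            # chain rule: one Jacobian per step
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])     # the norm collapses toward zero
```

A nonlinear RNN behaves similarly whenever the Jacobians are contractive on average, which is the standard account of why vanilla RNNs struggle to carry information across long time spans.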

The temporal patterns in neurons can be explored in the following settings: 1) Deep Learning for time-based modalities such as video and speech, where I believe the temporal structure can be leveraged for self-supervised learning. 2) Reinforcement Learning, where the temporal sequence of states and actions is fundamental to modeling the rewards, actions, and the state of the environment. 3) Hopfield Networks and Dense Associative Memory, which have investigated the convergence of neural networks to stable steady states, yet remain limited in their application to temporal data.
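The convergence behaviour in the third setting can be sketched with a classical Hopfield network (a toy example of my own, with arbitrary sizes): patterns are stored via the Hebbian outer-product rule, and under asynchronous updates the network's energy never increases, so the state settles into a stable steady state.

```python
import numpy as np

# Toy sketch (assumed sizes): a classical Hopfield network with 16 neurons
# storing two +/-1 patterns via the Hebbian outer-product rule.
rng = np.random.default_rng(0)
patterns = np.sign(rng.standard_normal((2, 16)))  # two random +/-1 patterns
W = patterns.T @ patterns / 16.0                  # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                          # no self-connections

state = patterns[0].copy()
state[:3] *= -1                                   # corrupt three bits

# Asynchronous updates: each flip lowers the energy -0.5 * s @ W @ s,
# so the dynamics must terminate in a fixed point.
for _ in range(10):
    for i in rng.permutation(16):
        state[i] = 1.0 if W[i] @ state >= 0 else -1.0

# The settled state is a stable steady state: one more update changes nothing.
fixed = all((W[i] @ state >= 0) == (state[i] > 0) for i in range(16))
print(fixed)
```

Dense Associative Memory generalizes this energy function to store far more patterns, but in both cases the dynamics converge to static attractors, which is why applying these models to temporal sequences remains an open direction.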

Of these, deep learning on video seems to leverage the temporal sequence of frames most directly, and it offers more explainability than speech because the visual frames can be inspected directly.