Presented By: Student AIM Seminar - Department of Mathematics
Student AIM Seminar: Spatially informed biologically interpretable machine learning approaches for analyzing EEG data
Madelyn Cruz
Biological neural networks (BNNs) are machine learning models that enhance the biological interpretability of artificial neural networks by modeling neural dynamics directly, providing insight into the mechanisms underlying neural system behavior. In our previous work, Hodgkin-Huxley neuron models were implemented in BNNs to classify electroencephalogram (EEG) signals. While biologically interpretable, those models treated each electrode independently and ignored spatial relationships, limiting their ability to capture large-scale brain dynamics.
Here, we develop trainable BNNs using time-aware backpropagation applied to networks of biophysically accurate neurons based on modified Hodgkin-Huxley equations, integrated within feedforward and recurrent architectures. Neuron sets represent brain regions, and synaptic weights reflect spatial distances and functional connectivity. These BNNs classify consciousness levels from EEG data, extrapolate deeper brain activity, and exhibit physiologically observed frequencies, while hidden-layer neurons remain biologically interpretable. Overall, spatially informed BNNs can capture deeper structures, generate emergent spatiotemporal patterns, and improve cross-subject generalization, advancing both EEG interpretability and generalizability while bridging neural modeling and modern machine learning.
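For readers unfamiliar with the neuron model underlying these BNNs, the sketch below simulates a single neuron with the classic (unmodified) Hodgkin-Huxley equations and standard 1952 parameter values, using simple forward-Euler integration. It is an illustrative assumption-laden sketch, not the speaker's modified equations or training setup; in the talk's setting, many such units would be coupled by trainable synaptic weights and differentiated through time.

```python
import numpy as np

def simulate_hh(I_ext=10.0, T=50.0, dt=0.01):
    """Simulate one Hodgkin-Huxley neuron (classic parameters) via forward Euler.

    Units: mV, ms, uA/cm^2. Returns the membrane-voltage trace.
    """
    # Maximal conductances (mS/cm^2), reversal potentials (mV), capacitance (uF/cm^2)
    g_Na, g_K, g_L = 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 50.0, -77.0, -54.4
    C_m = 1.0

    # Voltage-dependent gating rate functions (standard HH forms)
    def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
    def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

    # Start at rest, with gating variables at their steady-state values
    V = -65.0
    m = alpha_m(V) / (alpha_m(V) + beta_m(V))
    h = alpha_h(V) / (alpha_h(V) + beta_h(V))
    n = alpha_n(V) / (alpha_n(V) + beta_n(V))

    steps = int(T / dt)
    trace = np.empty(steps)
    for t in range(steps):
        # Ionic currents: sodium, potassium, and leak
        I_Na = g_Na * m**3 * h * (V - E_Na)
        I_K = g_K * n**4 * (V - E_K)
        I_L = g_L * (V - E_L)
        # Euler updates for voltage and gating variables
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace[t] = V
    return trace

# A 10 uA/cm^2 step current is above rheobase, so the trace shows repetitive spiking.
trace = simulate_hh()
```

Because every update above is a differentiable function of the parameters, the same dynamics can in principle be unrolled and trained with backpropagation through time, which is the kind of "time-aware backpropagation" the abstract refers to.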