As neuroscience continues to advance, the scale and complexity of the data we collect have increased dramatically. Today, it is common to collect **multi-modal datasets** that combine:

- **Neural recordings** (electrophysiology, two-photon calcium imaging, fMRI) from hundreds to thousands of neurons,
- **Behavioral data** (videos, audio, sensor inputs) collected during experiments,
- **Genetic data** (RNA sequencing, scRNA-seq) that track gene expression at single-cell resolution.

These datasets are often vast, encompassing thousands of dimensions, making them difficult to interpret using traditional analysis methods. The complexity of such data can obscure the underlying patterns that are crucial for understanding brain function and behavior. The primary goal of **dimensionality reduction** is to transform this high-dimensional data into a more manageable, lower-dimensional form, while retaining the essential structures and patterns. This process makes the data easier to visualize, analyze, and interpret.

Dimensionality reduction is a set of mathematical techniques that distill the most important information from a high-dimensional dataset into a lower-dimensional representation. This is crucial in neuroscience for several reasons:

- **Visualization**: High-dimensional data is often impossible to visualize. Dimensionality reduction methods like **Principal Component Analysis (PCA)** or **t-distributed Stochastic Neighbor Embedding (t-SNE)** allow researchers to project complex datasets into two- or three-dimensional spaces where they can be visually inspected.
- **Identifying patterns**: In multi-neuron recordings, for example, dimensionality reduction can reveal **population-level dynamics** or **latent neural states** that are not immediately apparent in raw data.
- **Noise reduction**: High-dimensional data often includes redundant or irrelevant features. Dimensionality reduction helps eliminate noise, allowing researchers to focus on the most meaningful aspects of the data.
- **Clustering**: By reducing the dimensionality of the data, methods like **UMAP** and **clustering algorithms** (e.g., k-means, DBSCAN) can be employed to identify subpopulations of neurons, cells, or behavioral patterns more effectively.

The course material offers both a theoretical foundation and hands-on practical experience with the most commonly used dimensionality reduction techniques in neuroscience, including:

- **Principal Component Analysis (PCA)**: A linear method that simplifies data by projecting it onto principal components that capture the greatest variance.
- **t-SNE (t-distributed Stochastic Neighbor Embedding)**: A non-linear technique ideal for visualizing high-dimensional data in low-dimensional spaces, often used to explore scRNA-seq data and neural population activity.
- **UMAP (Uniform Manifold Approximation and Projection)**: Another non-linear method that improves on t-SNE by preserving both local and global data structures, making it an excellent tool for large datasets.
- **Clustering**: Techniques such as **k-means** and **DBSCAN** for grouping similar data points, which can help identify neural states or cell types based on reduced data.
- **Autoencoders**: A type of neural network that learns compact, latent representations of data, particularly useful for uncovering hidden factors in large datasets.
- **Variational Autoencoders (VAE)**: A more advanced form of autoencoder that learns probabilistic representations of data, allowing for more flexible and interpretable latent spaces.
- **Artificial Neural Networks (ANN)**: Since we are using autoencoders, we will also cover the basics of neural networks, how they work, and how they can be tuned to perform well on a given task.
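
To give a first taste of how such a projection works in practice, here is a minimal PCA sketch on synthetic data (the simulated "population activity", its dimensions, and all parameter values are illustrative assumptions, not taken from the course datasets): a two-dimensional latent signal drives 50 simulated neurons, and projecting onto the first two principal components recovers most of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 time points x 50 neurons; a 2D latent signal drives all neurons, plus noise:
latent = rng.standard_normal((1000, 2))
mixing = rng.standard_normal((2, 50))
activity = latent @ mixing + 0.1 * rng.standard_normal((1000, 50))

# PCA via the SVD of the mean-centered data matrix:
X = activity - activity.mean(axis=0)             # center each neuron's activity
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:2].T                               # scores on the first two PCs

explained = (S**2 / np.sum(S**2))[:2]            # fraction of variance per PC
print(f"variance explained by the first two PCs: {explained.sum():.2f}")
```

With real population recordings, `activity` would simply be your time × neurons matrix; because the synthetic data has an intrinsically two-dimensional latent structure, the first two components capture nearly all of the variance here.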

The material also covers the mathematical foundations of these methods, but with a strong emphasis on practical application. The hands-on *Python* exercises will help you apply these techniques directly to your own datasets.

This course is designed for a wide range of audiences:

- **Neuroscientists** who are grappling with the analysis of complex, high-dimensional neural data and wish to leverage modern machine learning techniques to uncover hidden patterns.
- **Data scientists** who are looking to apply their expertise to the field of neuroscience and work with biological datasets.
- **Graduate students** and researchers in related fields who need a practical introduction to dimensionality reduction techniques, as applied to neuroscience.

A basic understanding of *Python* programming is recommended, and some familiarity with machine learning concepts will be helpful, but not required. All material is designed to be approachable, with detailed walkthroughs of code examples and exercises.

The course is structured to balance theoretical lectures with hands-on practice. Both the theoretical part and the hands-on material are designed so that they can be studied on their own, without attending the course. The material is self-contained and can be used as a reference for future projects.

You can access all of the teaching material, including lecture slides, code notebooks, and sample datasets, here and in my GitHub repository. The material is released under a CC BY 4.0 license, which means you are free to use, modify, and share it.

Feel free to explore the material and experiment with the provided code. If you have any questions or feedback, please don’t hesitate to reach out. I hope the material helps you unlock the potential of your own high-dimensional data and inspires you to further explore dimensionality reduction techniques in neuroscience and beyond.

Long-term potentiation (LTP) is a process by which the strength of synaptic connections between neurons is increased. It is a key mechanism underlying learning and memory in the brain. LTP is typically induced by repeated stimulation of a synapse, leading to long-lasting changes in synaptic strength. The process of LTP involves both presynaptic and postsynaptic changes that enhance the communication between neurons.

The following sketch illustrates the biological process of LTP between two neurons:

The key steps illustrated include:

**1. Repeated Stimulation**: A dendritic synapse is repeatedly stimulated (or activated) by neurotransmitters from a presynaptic axon terminal.

**2. Postsynaptic Changes**: This repeated stimulation causes an increase in the number of postsynaptic receptors (e.g., AMPA receptors) inserted into the postsynaptic membrane, enhancing its sensitivity.

**3. Presynaptic Changes**: Concurrently, the presynaptic neuron increases the release of neurotransmitters in response to stimuli.

**4. Resulting Synaptic Strengthening**: The cumulative effect is a stronger synaptic connection between the presynaptic axon terminal and the postsynaptic dendrite, thereby strengthening the communication between the two neurons.

The communication between neurons is mediated by neurotransmitters, which are chemical messengers that transmit signals across synapses. Several neurotransmitters play a role in modulating synaptic plasticity, including LTP. Here are some of the key neurotransmitters involved in synaptic plasticity:

A common neurotransmitter is **glutamate**, the main excitatory neurotransmitter in the brain, which binds to postsynaptic receptors such as AMPA and NMDA receptors. Activation of AMPA receptors leads to an influx of sodium ions (Na$^{+}$), causing depolarization of the postsynaptic membrane. NMDA receptors are typically blocked by magnesium ions (Mg$^{2+}$) at resting membrane potential. Depolarization (partly through AMPA receptor activation) removes the Mg$^{2+}$ block, allowing calcium ions (Ca$^{2+}$) to enter the postsynaptic neuron. This calcium influx is critical for the induction of LTP.

**Dopamine** can modulate LTP, particularly in brain regions like the hippocampus and striatum. Dopaminergic signaling can enhance or stabilize LTP, making it more likely that synaptic changes will be retained over long periods. Dopamine is supplied by dopaminergic neurons, primarily from areas such as the ventral tegmental area (VTA), which project to regions like the hippocampus and modulate synaptic plasticity through dopamine release.

**Norepinephrine** (**Noradrenaline**) is another neuromodulator that can influence LTP by modulating neuronal excitability and synaptic strength. It often acts through beta-adrenergic receptors to enhance the induction and maintenance of LTP. Released from the locus coeruleus, norepinephrine acts on various brain regions, including the hippocampus.

**Serotonin** has complex effects on LTP, which can be either facilitative or inhibitory depending on the receptor subtype and brain region involved. It can modulate synaptic plasticity and influence the stability and consolidation of LTP. Serotonergic neurons from the raphe nuclei project broadly throughout the brain, including to the hippocampus.

**Acetylcholine** is involved in attention and learning processes and can modulate LTP. Cholinergic modulation can enhance synaptic plasticity and the consolidation of LTP. Cholinergic neurons from the basal forebrain project to the hippocampus and cortex, releasing acetylcholine to modulate synaptic function.

Long-term depression (LTD) is a process through which synaptic strength is weakened, contributing to the fine-tuning of neural circuits, learning, and memory.

Key steps include (primarily focusing on the hippocampus):

**1. Low-frequency stimulation**: LTD is typically induced by low-frequency stimulation (LFS) of presynaptic neurons, often at a frequency of 1 Hz for 10-15 minutes. This type of stimulation is insufficient to remove the magnesium block from NMDA receptors and does not cause significant postsynaptic depolarization.

**2. Release of glutamate**: During low-frequency stimulation, glutamate is released from the presynaptic terminal and binds to postsynaptic receptors, including NMDA and AMPA receptors.

**3. Partial NMDA receptor activation**: Unlike LTP, the partial depolarization during LTD induction is not enough to fully unblock NMDA receptors. However, some calcium ions (Ca$^{2+}$) can still enter the postsynaptic neuron through NMDA receptors, albeit at a lower level than during LTP induction.

**4. Calcium signaling**: The low levels of Ca$^{2+}$ influx activate different intracellular signaling pathways compared to those activated during LTP. Specifically, protein phosphatases (e.g., PP1, PP2A, and calcineurin) are activated by the modest rise in intracellular calcium. These phosphatases dephosphorylate target proteins, leading to changes in the postsynaptic density.

**5. Dephosphorylation of AMPA receptors**: Activated phosphatases dephosphorylate AMPA receptors and associated proteins, leading to a reduction in the conductance of these receptors. This results in the internalization of AMPA receptors from the postsynaptic membrane through endocytosis.

**6. Reduction in postsynaptic receptors**: The removal of AMPA receptors from the postsynaptic membrane decreases the synaptic response to glutamate, effectively weakening the synapse.

**7. Synaptic structural changes**: Over time, LTD can lead to structural changes in the synapse, such as the reduction of postsynaptic dendritic spine size or even the elimination of synaptic connections.

The key neurotransmitters and molecules involved are:

- **Glutamate**: The primary excitatory neurotransmitter involved in both LTP and LTD, binding to NMDA and AMPA receptors.
- **NMDA receptors**: Partially activated during LTD induction, allowing limited calcium influx.
- **Calcium ions (Ca$^{2+}$)**: Critical for activating phosphatases that mediate LTD.
- **Protein phosphatases**: Enzymes such as PP1, PP2A, and calcineurin that dephosphorylate target proteins, leading to AMPA receptor internalization.
- **AMPA receptors**: Dephosphorylated and internalized during LTD, reducing synaptic strength.

In contrast to long-term plasticity, **short-term plasticity (STP)** refers to transient changes in synaptic strength that occur over a timescale of milliseconds to minutes. These changes in synaptic efficiency are reversible and are generally thought to play a key role in the fine-tuning of neural networks during ongoing neural activity, rather than in long-lasting changes required for memory storage.

STP can manifest in two main forms: **short-term potentiation (STP)** and **short-term depression (STD)**:

- **Short-term potentiation (STP)** is characterized by a temporary increase in synaptic strength following a brief period of intense presynaptic activity. This increase is often caused by the buildup of presynaptic calcium ions (Ca$^{2+}$), which enhances neurotransmitter release. Unlike LTP, the synapse’s ability to maintain this enhanced state is short-lived and decays as the calcium concentration returns to baseline.
- **Short-term depression (STD)**, on the other hand, is a temporary decrease in synaptic strength, often occurring when neurotransmitter vesicle pools are depleted due to high-frequency stimulation. Synapses need time to recover and replenish their stores of neurotransmitters, and during this recovery period, the synaptic response is weakened.

Thus, key differences between long-term and short-term plasticity are:

- **Timescale**: The most obvious difference between STP/STD and LTP/LTD lies in the timescale. While LTP and LTD can last from hours to the lifetime of the organism, STP and STD occur over much shorter periods, from a few milliseconds to several minutes, and they dissipate once the activity that triggered them ceases.
- **Mechanisms**: The molecular mechanisms underlying short-term plasticity are distinct from those of long-term plasticity. STP and STD typically involve presynaptic changes, such as alterations in neurotransmitter release probability, rather than the structural changes to the synapse (e.g., receptor insertion or removal) that are characteristic of LTP and LTD.
- **Role in neural circuits**: STP and STD are thought to play a critical role in regulating information flow during brief periods of neural activity, allowing synapses to adjust their transmission properties on-the-fly. This rapid form of modulation is particularly important for temporal coding, where the timing of neural signals is crucial for processing information. LTP and LTD, in contrast, are more relevant for the long-term storage of information, such as in learning and memory consolidation.
- **Reversibility**: Unlike the more permanent changes seen in LTP and LTD, STP and STD are fully reversible, with the synaptic strength returning to baseline after the cessation of the stimulus.
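
The vesicle-depletion mechanism behind short-term depression can be sketched in a few lines of Python. This is a minimal resource model, loosely in the spirit of the Tsodyks-Markram formalism; the parameter values and the function name are illustrative assumptions, not taken from a specific study:

```python
# Minimal sketch of short-term depression via vesicle depletion
# (parameters are arbitrary, chosen only for illustration):
tau_rec = 200.0  # recovery time constant of the vesicle pool (ms)
U = 0.4          # fraction of available resources released per spike

def simulate_std(spike_times, t_max=500):
    """Return the synaptic response at each spike time (1 ms time steps)."""
    resources = 1.0  # fraction of the vesicle pool currently available
    responses = {}
    for t in range(t_max):
        # the vesicle pool slowly replenishes toward its maximum of 1:
        resources += (1.0 - resources) / tau_rec
        if t in spike_times:
            responses[t] = U * resources  # response scales with available pool
            resources -= U * resources    # each spike depletes the pool
    return responses

# a high-frequency train (one spike every 10 ms): successive responses shrink
responses = simulate_std(spike_times=set(range(0, 100, 10)))
```

Because the pool recovers much more slowly (here $\tau_{rec} = 200$ ms) than it is depleted, each successive response is weaker than the last until the synapse settles at a depressed steady state, which is exactly the STD behavior described above.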

Long-term potentiation (LTP) and long-term depression (LTD) are fundamental processes that underlie synaptic plasticity in the brain. These mechanisms allow the brain to adapt to new information, form memories, and refine neural circuits. The balance between LTP and LTD is crucial for maintaining synaptic homeostasis and proper brain function. Dysregulation of these processes has been implicated in various neurological and psychiatric disorders, highlighting the importance of understanding the molecular and cellular mechanisms underlying synaptic plasticity.

In computational neuroscience, models of LTP and LTD are used to simulate learning and memory processes in artificial neural networks. By incorporating biologically plausible mechanisms of synaptic plasticity, we can develop more realistic models of brain function and behavior.

- Robert M. Mulkey, Robert C. Malenka, *Mechanisms underlying induction of homosynaptic long-term depression in area CA1 of the hippocampus*, 1992, Neuron, Vol. 9, Issue 5, pages 967-975, doi: 10.1016/0896-6273(92)90248-c
- Serena M. Dudek, Mark F. Bear, *Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade*, 1992, Proceedings of the National Academy of Sciences, Vol. 89, Issue 10, pages 4363-4367, doi: 10.1073/pnas.89.10.4363
- Robert C. Malenka, Mark F. Bear, *LTP and LTD*, 2004, Neuron, Vol. 44, Issue 1, pages 5-21, doi: 10.1016/j.neuron.2004.09.012
- G. L. Collingridge, T. V. P. Bliss, *Memories of NMDA receptors and LTP*, 1995, Trends in Neurosciences, Vol. 18, Issue 2, pages 54-56, doi: 10.1016/0166-2236(95)80016-U
- Adam J. Granger, Roger A. Nicoll, *Expression mechanisms underlying long-term potentiation: a postsynaptic view, 10 years on*, 2014, Philosophical Transactions of the Royal Society B: Biological Sciences, Vol. 369, Issue 1633, pages 20130136, doi: 10.1098/rstb.2013.0136
- Nicoll, *A Brief History of Long-Term Potentiation*, 2017, Neuron, Vol. 93, Issue 2, pages 281-290, doi: 10.1016/j.neuron.2016.12.015
- Zucker, Regehr, *Short-Term Synaptic Plasticity*, 2002, Annual Review of Physiology, Vol. 64, Issue 1, pages 355-405, doi: 10.1146/annurev.physiol.64.092501.114547
- Citri, Malenka, *Synaptic Plasticity: Multiple Forms, Functions, and Mechanisms*, 2008, Neuropsychopharmacology, Vol. 33, Issue 1, pages 18-41, doi: 10.1038/sj.npp.1301559
- L. F. Abbott, S. B. Nelson, *Synaptic plasticity: taming the beast*, 2000, Nature Neuroscience, doi: 10.1038/81453

The BCM rule was proposed by Elie Bienenstock, Leon Cooper, and Paul Munro to address how neurons in the visual cortex develop selectivity to specific patterns, such as orientation or spatial frequency. The rule posits that synaptic changes depend not only on the immediate activity of the pre- and postsynaptic neurons but also on the **history of postsynaptic activity**. This historical dependence introduces a **sliding threshold** that determines whether synaptic activity leads to long-term potentiation (LTP) or long-term depression (LTD). LTP refers to the strengthening of synaptic connections, while LTD refers to the weakening of synaptic connections. These processes are essential for learning and memory formation in the brain. In LTP, repeated activation of a synapse leads to an increase in synaptic strength, making it more likely to fire in response to a given input. In contrast, LTD weakens synaptic connections, reducing the likelihood of firing. We will further explain LTP and LTD in the next post.

The core idea is that there is a **dynamic threshold** for synaptic modification, $\theta_M$, which adjusts **based on the postsynaptic activity**. Synaptic strength increases (LTP) when the postsynaptic activity $y$ exceeds this threshold, and decreases (LTD) when it falls below it. Importantly, the dynamic threshold itself changes in response to the average postsynaptic activity, $\langle y \rangle$, allowing the system to adapt to different activity levels.

By incorporating the dynamic threshold into the learning process, the BCM rule captures the interplay between neural activity and synaptic plasticity, providing a mechanism for how neurons develop selectivity and maintain stability over time.

The BCM rule can be mathematically described using a few key equations. Let $x_i$ be the presynaptic activity of the $i$-th neuron, $w_i$ the corresponding synaptic weight, and $y$ the postsynaptic activity,

\[y = \sum_i w_i x_i\]

The change in synaptic weight, $dw_i/dt$, is given by:

\[\frac{dw_i}{dt} = \eta \cdot y \cdot (y - \theta_M) \cdot x_i\]

Here, $\eta$ is a learning rate constant, and $\theta_M$ is the modification threshold. The threshold $\theta_M$ itself is a function of the time-averaged postsynaptic activity, $\langle y \rangle$:

\[\theta_M = \phi(\langle y \rangle)\]

where $\phi(\langle y \rangle)$ is a monotonically increasing function, typically represented as:

\[\phi(\langle y \rangle) = \langle y \rangle^p\]

with $p > 1$.

To provide a complete mathematical model, we consider the dynamics of the time-averaged postsynaptic activity, $\langle y \rangle$, which evolves according to:

\[\tau \frac{d \langle y \rangle}{dt} = - \langle y \rangle + y\]

where $\tau$ is a time constant.
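
For simulation purposes, this averaging equation can be discretized with a simple Euler step. Assuming a step size of $\Delta t = 1$ ms (absorbed into $\tau$), the low-pass filter becomes the running-average update

\[\langle y \rangle_{t+1} = \langle y \rangle_t + \frac{y_t - \langle y \rangle_t}{\tau}\]

which is exactly the form used in the simulation code further below.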

In the literature, various extensions and refinements of the BCM rule have been proposed, incorporating additional factors such as metaplasticity, homeostatic regulation, and network interactions. These modifications aim to capture the complex interplay of factors influencing synaptic plasticity in neural circuits.

A variant of the BCM rule adds a weight decay term $-\epsilon w_i$ to the Hebbian-like term $y \cdot (y - \theta_M) \cdot x_i$ (which here appears without the learning rate prefactor), allowing for a more nuanced regulation of synaptic changes over time:

\[\frac{dw_i}{dt} = y \cdot (y - \theta_M) \cdot x_i - \epsilon w_i\]

The weight decay term acts to stabilize the weight, preventing it from growing indefinitely. This modified BCM rule differs from the previous BCM formulation in that it explicitly includes a mechanism to reduce the synaptic weight over time, proportional to its current value. This is a common modification in neural network models to ensure weights do not grow without bound and to introduce a form of weight regularization.

The BCM rule offers several significant implications for synaptic plasticity:

- **Stability of synaptic weights:** The classical Hebbian learning rule can be expressed as $\Delta w_i = \eta \cdot x_i \cdot y$, where $\Delta w_i$ is the change in synaptic weight, $\eta$ is again the learning rate, and $x_i$ and $y$ are the presynaptic and postsynaptic activity, respectively. Hebbian learning tends to lead to **runaway potentiation**, where synaptic weights increase without bound: the positive feedback loop of strengthening synapses leads to further increases in activity, which in turn strengthens the synapses even more. The introduction of the sliding threshold $\theta_M$ in the BCM rule addresses this issue by providing a mechanism for **homeostatic regulation** in the neural network, ensuring that synaptic weights remain stable over time. For example, if a neuron experiences high activity, $\theta_M$ increases, making further potentiation less likely.
- **Synaptic competition and selectivity:** Classical Hebbian learning does not inherently promote competition among synapses. Without additional mechanisms, such as normalization or competition, all synapses could potentially increase together, which is not biologically realistic. In contrast, the BCM rule naturally leads to synaptic **competition and selectivity**. Synapses receiving correlated inputs and, thus, frequently having their activity levels above $\theta_M$ are strengthened, while those receiving uncorrelated inputs (i.e., less active synapses) are weakened. This mechanism explains the development of feature selectivity, such as orientation selectivity in visual cortex neurons.
- **Activity-dependent plasticity:** By depending on the time-averaged postsynaptic activity, $\langle y \rangle$, the BCM rule adapts to varying activity regimes. This supports both **homeostatic regulation**, where synapses adjust to maintain overall stability, and **experience-dependent plasticity**, where synapses change based on specific patterns of activity. In contrast, classical Hebbian learning is purely activity-dependent and does not consider the historical activity of the neuron. This can lead to synapses becoming overly strong if there is sustained high activity, or overly weak if there is sustained low activity.
- **Bidirectional plasticity:** This point is an implication of the previous ones, but still worth mentioning. Classical Hebbian learning primarily accounts for potentiation (strengthening) of synapses; variants like anti-Hebbian learning are required to explain synaptic weakening (depression). The BCM rule inherently supports bidirectional plasticity: when the postsynaptic activity $y$ is above the threshold $\theta_M$, LTP (long-term potentiation) occurs; when $y$ is below $\theta_M$, LTD (long-term depression) occurs. This bidirectional nature makes the BCM rule more versatile and better suited to modeling biological synaptic plasticity.
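
The contrast between Hebbian runaway potentiation and BCM stabilization can be demonstrated numerically. The following toy comparison tracks a single synapse driven by a constant input under both rules; all parameter values are arbitrary assumptions chosen for illustration:

```python
# Toy comparison: plain Hebbian learning vs. the BCM rule with a sliding
# threshold theta_M = <y>^p, for one synapse with constant input x = 1.
eta = 0.005   # learning rate
tau = 20.0    # averaging time constant for <y>
p = 2         # threshold exponent
x = 1.0       # constant presynaptic input

w_hebb = 0.5  # Hebbian weight
w_bcm = 0.5   # BCM weight
avg_y = 0.0   # running average of postsynaptic activity

for _ in range(2000):
    # Hebbian: dw = eta * x * y -> positive feedback, weight grows without bound
    w_hebb += eta * x * (w_hebb * x)
    # BCM: dw = eta * y * (y - theta_M) * x, with the threshold sliding as <y>^p
    y = w_bcm * x
    avg_y += (y - avg_y) / tau
    w_bcm += eta * y * (y - avg_y ** p) * x

print(w_hebb)  # has exploded to a very large value
print(w_bcm)   # has settled near the fixed point y = theta_M, i.e. w = 1
```

The Hebbian weight grows geometrically, while the BCM weight converges to the self-consistent fixed point $y = \langle y \rangle^p$ (here $w = 1$), with no clipping or normalization needed; this illustrates the stability point above.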

To illustrate the dynamics of the BCM rule, consider a simple computational model with two synapses receiving inputs $x_1$ and $x_2$. The postsynaptic activity $y$ is given by:

\[y = w_1 x_1 + w_2 x_2\]

The synaptic weights $w_1$ and $w_2$ are updated according to the BCM rule:

\[\begin{align*} \Delta w_1 &= \eta \cdot y \cdot (y - \theta_M) \cdot x_1 \\ \Delta w_2 &= \eta \cdot y \cdot (y - \theta_M) \cdot x_2 \end{align*}\]

By simulating this system over time, we can observe the evolution of synaptic weights and the development of selectivity. For instance, if $x_1$ and $x_2$ represent different sensory inputs, the synapse corresponding to the more frequently activated input will strengthen, demonstrating the competitive nature of synaptic plasticity under the BCM rule.

Let’s transfer this model into a simple Python script to simulate the synaptic weight changes over time. We will consider the two-synapse system and simulate the system over multiple time steps to observe how the synaptic weights evolve based on the postsynaptic activity $y$:

```python
import numpy as np
import matplotlib.pyplot as plt

# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False

# for reproducibility:
np.random.seed(1)

# define parameters:
eta = 0.01             # learning rate
tau = 100.0            # time constant for averaging postsynaptic activity
epsilon = 0.001        # decay rate (only used if the decay term is included)
simulation_time = 500  # total simulation time in ms
time_step = 1          # time step for the simulation in ms
p = 2                  # exponent for the sliding threshold function

# initialize synaptic weights and inputs:
w = np.array([0.5, 0.5])              # initial synaptic weights
x1 = np.random.rand(simulation_time)  # presynaptic input 1
x2 = np.random.rand(simulation_time)  # presynaptic input 2
inputs = np.vstack((x1, x2))

# initialize variables for storing results:
y = np.zeros(simulation_time)
theta_M = np.zeros(simulation_time)
avg_y = 0  # initial average postsynaptic activity
w_history = np.zeros((simulation_time, 2))  # synaptic weights over time

# simulation loop:
for t in range(simulation_time):
    # compute postsynaptic activity:
    y[t] = np.dot(w, inputs[:, t])
    # update average postsynaptic activity:
    avg_y = avg_y + (y[t] - avg_y) / tau
    # update the sliding threshold:
    theta_M[t] = avg_y ** p
    # update synaptic weights according to the BCM rule:
    delta_w = eta * y[t] * (y[t] - theta_M[t]) * inputs[:, t]
    # uncomment the following line to include weight decay:
    # delta_w = eta * y[t] * (y[t] - theta_M[t]) * inputs[:, t] - epsilon * w
    w += delta_w
    # ensure weights remain within a reasonable range:
    w = np.clip(w, 0, 1)
    # store synaptic weights:
    w_history[t] = w

# plotting the results:
plt.figure(figsize=(6, 7))

# plot synaptic weights:
plt.subplot(2, 1, 1)
plt.plot(w_history[:, 0], label='weight 1')
plt.plot(w_history[:, 1], label='weight 2')
plt.xlabel('time [ms]')
plt.ylabel('synaptic weight')
plt.title('Evolution of synaptic weights')
plt.legend()

# plot postsynaptic activity and sliding threshold:
plt.subplot(2, 1, 2)
plt.plot(y, label='postsynaptic activity')
plt.plot(theta_M, label='sliding threshold', linestyle='--')
plt.xlabel('time [ms]')
plt.ylabel('activity / threshold')
plt.title('Postsynaptic activity and sliding threshold')
plt.legend()

plt.tight_layout()
plt.show()
```

Here is the corresponding output of the simulation for the default BCM rule:

The top plot shows the evolution of synaptic weights $w_1$ and $w_2$ over time. Initially, both weights increase steadily. After reaching a certain time point, they saturate and remain constant at the maximum value of 1. This behavior is expected under the BCM rule, where the weights increase when the postsynaptic activity $y$ is greater than the sliding threshold $\theta_M$. The saturation indicates that the inputs were sufficiently correlated or frequent to push the weights to their upper limit. The weights being clipped at 1 is a safeguard to prevent unbounded growth as discussed before.

The bottom plot shows the postsynaptic activity $y$ and the sliding threshold $\theta_M$ over time. The postsynaptic activity $y$ increases over time but fluctuates widely, whereas the sliding threshold $\theta_M$ rises more gradually. The wide fluctuations in $y$ reflect the variability in presynaptic inputs. The gradual increase in $\theta_M$ reflects the time-averaging of the postsynaptic activity, capturing the overall increase in activity over time. This shows how the BCM rule adapts the threshold to maintain stability and avoid runaway potentiation.

Thus, all core features of the BCM rule are demonstrated by this simple simulation:

- **Dynamic threshold adaptation:** The sliding threshold $\theta_M$ adapts based on the average postsynaptic activity, which is a core aspect of the BCM rule.
- **Synaptic plasticity mechanism:** The change in synaptic weights based on the relationship between postsynaptic activity and the sliding threshold demonstrates the core plasticity mechanism of the BCM rule.
- **Stability of weights:** The saturation of weights at a maximum value showcases the stability mechanism of the BCM rule, preventing unbounded growth.

In case you want to simulate the BCM rule including the weight decay term, simply change the update rule:

```python
# update synaptic weights according to the BCM rule,
# now including the weight decay term:
# delta_w = eta * y[t] * (y[t] - theta_M[t]) * inputs[:, t]  # original rule
delta_w = eta * y[t] * (y[t] - theta_M[t]) * inputs[:, t] - epsilon * w
```

While the synaptic weights approach saturation a bit slower this time, the simulation shows that the weight decay term is also able to stabilize the weights over time, preventing them from growing indefinitely. This is a common strategy in neural network models to ensure that weights do not become too large and to introduce a form of regularization.

The BCM rule has received substantial experimental support. Studies on visual cortex plasticity have demonstrated that neurons adapt their response properties based on sensory experience, consistent with the predictions of the BCM theory. Additionally, the rule has been applied to various neural circuits, providing insights into the mechanisms of learning and memory across different brain regions.

**Study on visual cortex plasticity** – For instance, Udeigwe et al. 2017 have demonstrated that the BCM rule can model how neurons in the visual cortex adapt their response properties based on sensory experience. Specifically, studies such as Lian et al. 2021 using natural images as stimuli have shown that neurons can develop receptive fields similar to those of simple cells in the visual cortex through a competitive synaptic learning process driven by the BCM rule. These findings highlight the rule’s ability to explain the development of stimulus selectivity in the visual cortex.

**Empirical evidence for bidirectional plasticity** – Experimental studies have confirmed the bidirectional nature of synaptic plasticity as predicted by the BCM rule. For example, experiments by Dudek and Bear (1992) and Mulkey and Malenka (1992) demonstrated that high-frequency stimulation induces LTP, while low-frequency stimulation leads to LTD in hippocampal neurons. These results are consistent with the BCM rule’s prediction that the direction and magnitude of synaptic changes depend on postsynaptic activity (Shouval et al. 2010).

**Modeling experience-dependent plasticity** – The BCM theory has been successfully applied to model various aspects of experience-dependent plasticity in the visual cortex. Studies have shown that under different rearing conditions, such as dark rearing, the BCM rule can account for the observed changes in synaptic strength and neuronal selectivity. Dark rearing, where animals (typically rodents) are raised in complete darkness for extended periods, is an experimental approach to study how visual deprivation affects cortical plasticity. The BCM theory predicts that in such conditions, the threshold for LTP would shift downward due to the overall reduced neuronal activity in the absence of visual stimuli. This decreased threshold makes it easier for synapses to undergo potentiation in response to any remaining input, but may impair the proper tuning of synaptic connections. When normal vision is restored after dark rearing, this shift in the LTP threshold can lead to impaired visual cortical plasticity, as the system is not adequately primed to adjust to the new sensory environment. These findings are supported by studies such as Rittenhouse et al. (1999) and Bear et al. (1987), which demonstrate that dark rearing leads to significant changes in the cortical representation of visual stimuli.

**Astrocytic modulation of plasticity** – Recent research by Squadrani et al. (2024)ꜛ has further advanced the experimental justification for the BCM rule by demonstrating the crucial role of astrocytes in synaptic plasticity during cognitive tasks such as reversal learning. In their study, astrocytes were shown to enhance the plasticity response, acting as key modulators of synaptic strength by regulating the activity of neurons. Their findings suggest that astrocytic signaling can influence the threshold dynamics predicted by the BCM rule, adjusting the balance between LTP and LTD in a manner dependent on the cognitive demands of the task. This not only supports the BCM model’s applicability in more complex, behaviorally relevant settings but also emphasizes the importance of glial cells in synaptic plasticity, extending the original neuron-centric framework of the BCM theory to incorporate neuron-astrocyte interactions.

The BCM rule is a foundational theory in neuroscience, providing a robust framework for understanding synaptic plasticity. Its mathematical formulation captures the dynamic nature of synaptic changes, incorporating both immediate neural activity and its historical context. By explaining how neurons develop selectivity while maintaining stability, and supported by a broad body of experimental evidence, the BCM theory continues to provide valuable insights into the mechanisms behind learning and memory formation, making it an essential tool for advancing research in neural plasticity.

The complete code used in this blog post is available in this Github repositoryꜛ (`bcm_rule.py` and `bcm_rule_with_decay_term.py`). Feel free to modify and expand upon it, and share your insights.

- E. L. Bienenstock, L. N. Cooper, P. W. Munro, *Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex*, 1982, Journal of Neuroscience, doi: 10.1523/JNEUROSCI.02-01-00032.1982ꜛ
- Intrator, Cooper, *Objective function formulation of the BCM theory of visual cortical plasticity: Statistical connections, stability conditions*, 1992, Neural Networks, Vol. 5, Issue 1, pages 3-17, doi: 10.1016/S0893-6080(05)80003-6ꜛ
- Brian S. Blais and Leon Cooper, *BCM theory*, 2008, Scholarpedia, 3(3):1570, doi: 10.4249/scholarpedia.1570ꜛ
- Lian, Almasi, Grayden, Kameneva, Burkitt, Meffin, *Learning receptive field properties of complex cells in V1*, 2021, PLOS Computational Biology, Vol. 17, Issue 3, e1007957, doi: 10.1371/journal.pcbi.1007957ꜛ
- Udeigwe, Munro, Ermentrout, *Emergent Dynamical Properties of the BCM Learning Rule*, 2017, The Journal of Mathematical Neuroscience, Vol. 7, Issue 1, doi: 10.1186/s13408-017-0044-6ꜛ
- Shouval, *Spike timing dependent plasticity: A consequence of more fundamental learning rules*, 2010, Frontiers in Computational Neuroscience, doi: 10.3389/fncom.2010.00019ꜛ
- Dudek, Bear, *Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade*, 1992, Proceedings of the National Academy of Sciences, Vol. 89, Issue 10, pages 4363-4367, doi: 10.1073/pnas.89.10.4363ꜛ
- Mulkey, Malenka, *Mechanisms underlying induction of homosynaptic long-term depression in area CA1 of the hippocampus*, 1992, Neuron, Vol. 9, Issue 5, pages 967-975, doi: 10.1016/0896-6273(92)90248-cꜛ
- Bear, Cooper, Ebner, *A Physiological Basis for a Theory of Synapse Modification*, 1987, Science, Vol. 237, Issue 4810, pages 42-48, doi: 10.1126/science.3037696ꜛ
- Rittenhouse, Shouval, Paradiso, Bear, *Monocular deprivation induces homosynaptic long-term depression in visual cortex*, 1999, Nature, Vol. 397, Issue 6717, pages 347-350, doi: 10.1038/16922ꜛ
- Squadrani, Wert-Carvajal, Müller-Komorowska, Bohmbach, Henneberger, Verzelli, Tchumatchenko, *Astrocytes enhance plasticity response during reversal learning*, 2024, Communications Biology, Vol. 7, Issue 1, doi: 10.1038/s42003-024-06540-8ꜛ

The Campbell and Siegert approximation combines Campbell’s theorem with Siegert’s analysis of the so-called first-passage time problem.

Campbell’s theorem states that the mean and variance of the summed input to a neuron driven by a Poisson process can be derived from the properties of the individual inputs. In general, for a point process $N$ defined on an $n$-dimensional Euclidean space $\textbf{R}^n$, Campbell’s theorem offers a way to calculate the expected value of a function $f$ of the point process $N$ as

\[\operatorname{E} \left[\sum _{x\in N}f(x)\right]=\int _{\textbf{R}^n}f(x)\Lambda (dx)\]where the sum is over all points in the point process. $\operatorname{E}$ denotes the expectation operator, and $\Lambda$ is the intensity measure of the point process:

\[\Lambda(B) = E\left[N(B)\right]\]where $N(B)$ is the number of points in the set $B$. $B$ can be any Borel set (any set that can be formed from open sets through the operations of countable union, countable intersection, and relative complement) in $\textbf{R}^n$.

Given a neuron receiving synaptic inputs from Poisson-distributed spike trains, Campbell’s theorem can be used to calculate the mean input current $\mu$:

\[\mu = \sum_{i=1}^{N} w_i \lambda_i\]where $w_i$ is the synaptic weight of the $i$-th synapse and $\lambda_i$ is the firing rate of the $i$-th presynaptic neuron, i.e., the rate of the Poisson process from input $i$. For a large number of inputs with similar properties, this sum becomes:

\[\mu = N \cdot w \cdot \lambda\]where $N$ is the number of inputs, $w$ is the average synaptic weight, and $\lambda$ is the average rate of the presynaptic neurons (or Poisson input). The variance $\sigma^2$ of the input current can be calculated accordingly:

\[\sigma^2 = \sum_{i=1}^{N} w_i^2 \lambda_i\]Similarly, for a large number of similar inputs we have:

\[\sigma^2 = N \cdot w^2 \cdot \lambda\]

Siegert’s approximation (Siegert 1950ꜛ), on the other hand, is used to estimate the firing rate of a neuron by considering the dynamics of the membrane potential and the first-passage time (the time it takes for the membrane potential to reach the threshold for the first time). This involves solving an integral that describes the distribution of first-passage times, which is influenced by the mean and variance of the input current:

\[\frac{1}{\nu} \approx t_{\text{ref}} + \tau_m \sqrt{\pi} \int_{\frac{V_{\text{reset}} - \mu}{\sigma \sqrt{2}}}^{\frac{V_{\text{th}} - \mu}{\sigma \sqrt{2}}} e^{u^2} \left( 1 + \operatorname{erf}(u) \right) du\]where:

- $\nu$ is the firing rate of the neuron that is being estimated
- $\mu$ and $\sigma^2$ are the mean and variance of the free membrane potential, which can be calculated using Campbell’s theorem
- $t_{\text{ref}}$ is the refractory period and $\tau_m$ the membrane time constant of the neuron
- $V_{\text{reset}}$ is the reset potential and $V_{\text{th}}$ the threshold potential of the neuron.

The integral runs over the normalized distance between reset and threshold. The integrand $e^{u^2} \left( 1 + \operatorname{erf}(u) \right)$ captures the effect of the input fluctuations: a larger variance $\sigma^2$ narrows the integration interval and thus increases the firing rate, because fluctuations can carry the membrane potential across the threshold even when the mean input is subthreshold.

The equation above is what is known as the Campbell & Siegert approximation, which provides a simple and effective way to estimate the firing rate of a neuron based on the mean and variance of the input current. This approximation is particularly useful for understanding the behavior of neurons in response to fluctuating inputs and is commonly used in theoretical neuroscience and computational modeling. It simplifies the complex dynamics of the LIF model under stochastic inputs to a more manageable form. However, the approximation has its limitations, especially when the input statistics deviate significantly from the assumptions of the model.
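The Campbell-theorem expressions for the mean and variance above can be verified with a quick Monte Carlo check on summed Poisson spike counts (all parameter values are arbitrary choices for illustration):

```
import numpy as np

# Monte Carlo check of Campbell's theorem for summed Poisson inputs:
# over a window T, input i contributes w * N_i spikes with N_i ~ Poisson(lam * T),
# so the weighted sum has mean N*w*lam*T and variance N*w**2*lam*T.
rng = np.random.default_rng(0)
N, w, lam, T = 100, 0.1, 10.0, 200.0  # inputs, weight, rate (1/s), window (s)

counts = rng.poisson(lam * T, size=(5000, N))  # 5000 independent trials
total = (w * counts).sum(axis=1)

mean_theory = N * w * lam * T       # predicted mean
var_theory = N * w ** 2 * lam * T   # predicted variance
```

With the seed fixed, the sampled mean and variance of `total` agree with $N w \lambda T$ and $N w^2 \lambda T$ to within a few percent.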

We will now replicate the NEST tutorial “Campbell & Siegert approximation example”ꜛ, with some modifications, in order to calculate the firing rate of a neuron based on the mean and variance of the input current. We will then compare the estimated firing rate with the actual firing rate simulated with the NEST simulator.

Let’s first import the necessary libraries and define the parameters for the simulation:

```
import os
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
from scipy.optimize import fmin
from scipy.special import erf
import nest
import nest.raster_plot
# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
# set the simulation time:
simtime = 20000 # (ms) duration of simulation
# define some units:
pF = 1e-12
ms = 1e-3
pA = 1e-12
mV = 1e-3
# set the parameters of the neurons and noise sources:
n_neurons = 10 # number of simulated neurons
weights = [0.1] # (mV) psp amplitudes
rates = [10000.0] # (1/s) rate of Poisson sources
# weights = [0.1, 0.1] # (mV) psp amplitudes
# rates = [5000., 5000.] # (1/s) rate of Poisson sources
C_m = 250.0 # (pF) capacitance
E_L = -70.0 # (mV) resting potential
I_e = 0.0 # (pA) external current
V_reset = -70.0 # (mV) reset potential
V_th = -55.0 # (mV) firing threshold
t_ref = 2.0 # (ms) refractory period
tau_m = 10.0 # (ms) membrane time constant
tau_syn_ex = 0.5 # (ms) excitatory synaptic time constant
tau_syn_in = 2.0 # (ms) inhibitory synaptic time constant
```

Next, we analytically calculate the mean and variance of the input current using the Campbell theorem:

```
# estimate the mean and variance of the input current using the Campbell & Siegert approximation:
mu = 0.0
sigma2 = 0.0
J = []
assert len(weights) == len(rates)
for rate, weight in zip(rates, weights):
    if weight > 0:
        tau_syn = tau_syn_ex
    else:
        tau_syn = tau_syn_in

    # we define the form of a single PSP (post-synaptic potential), which allows us to match the
    # maximal value to our chosen weight:
    def psp(x):
        return -(
            (C_m * pF)
            / (tau_syn * ms)
            * (1 / (C_m * pF))
            * (np.exp(1) / (tau_syn * ms))
            * (
                ((-x * np.exp(-x / (tau_syn * ms))) / (1 / (tau_syn * ms) - 1 / (tau_m * ms)))
                + (np.exp(-x / (tau_m * ms)) - np.exp(-x / (tau_syn * ms)))
                / ((1 / (tau_syn * ms) - 1 / (tau_m * ms)) ** 2)
            )
        )

    min_result = fmin(psp, [0], full_output=1, disp=0)

    # we need to calculate the PSC amplitude (i.e., the weight we set in NEST)
    # from the PSP amplitude that we have specified above:
    fudge = -1.0 / min_result[1]
    J.append(C_m * weight / (tau_syn) * fudge)

    # we now use Campbell's theorem to calculate mean and variance of the input
    # due to the Poisson sources. The mean and variance add up for each Poisson source:
    mu += rate * (J[-1] * pA) * (tau_syn * ms) * np.exp(1) * (tau_m * ms) / (C_m * pF)
    sigma2 += (
        rate
        * (2 * tau_m * ms + tau_syn * ms)
        * (J[-1] * pA * tau_syn * ms * np.exp(1) * tau_m * ms / (2 * (C_m * pF) * (tau_m * ms + tau_syn * ms))) ** 2
    )

mu += E_L * mV  # add the resting potential and convert to mV
sigma = np.sqrt(sigma2)  # convert the variance to standard deviation
```

After calculating the mean and variance of the input current, we can now calculate the firing rate using Siegert’s approximation:

```
num_iterations = 100 # number of iterations for the integral
upper = (V_th * mV - mu) / (sigma * np.sqrt(2))
lower = (E_L * mV - mu) / (sigma * np.sqrt(2))
interval = (upper - lower) / num_iterations
tmpsum = 0.0
for cu in range(0, num_iterations + 1):
    u = lower + cu * interval
    f = np.exp(u**2) * (1 + erf(u))
    tmpsum += interval * np.sqrt(np.pi) * f
r = 1.0 / (t_ref * ms + tau_m * ms * tmpsum)  # firing rate
```

`r` is the estimated firing rate of the neuron based on the mean and variance of the input current.

We can now simulate the neurons receiving Poisson spike trains as input and compare the theoretical firing rate with the empirical value:

```
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
# define the parameters of the neurons for NEST:
neurondict = {
    "V_th": V_th,
    "tau_m": tau_m,
    "tau_syn_ex": tau_syn_ex,
    "tau_syn_in": tau_syn_in,
    "C_m": C_m,
    "E_L": E_L,
    "t_ref": t_ref,
    "V_m": E_L,
    "V_reset": E_L,
}
# create the neurons, Poisson generators, voltmeter, and spike recorder:
neurons = nest.Create("iaf_psc_alpha", n_neurons, params=neurondict)
neuron_free = nest.Create("iaf_psc_alpha", params=dict(neurondict, V_th=1e12))
poissongen = nest.Create("poisson_generator", len(rates), {"rate": rates})
voltmeter = nest.Create("voltmeter", params={"interval": 0.1})
spikerecorder = nest.Create("spike_recorder")
# connect the nodes:
poissongen_n_synspec = {"weight": np.tile(J, ((n_neurons), 1)), "delay": 0.1}
nest.Connect(poissongen, neurons, syn_spec=poissongen_n_synspec)
nest.Connect(poissongen, neuron_free, syn_spec={"weight": [J]})
nest.Connect(voltmeter, neuron_free)
nest.Connect(neurons, spikerecorder)
# simulate the network:
nest.Simulate(simtime)
```

In the simulation code above, we create a network of 10 neurons receiving Poisson spike trains as input. The neurons are modelled using the `iaf_psc_alpha` neuron model, which is a leaky integrate-and-fire model with alpha-shaped postsynaptic currents. We create an additional neuron, `neuron_free`, with a very high threshold potential that prevents it from spiking, making it a ‘silent’ neuron. This allows us to record the membrane potential without the neuron spiking. We also record the spike times of the neurons to calculate the empirical firing rate.

Finally, we plot the membrane potential of the silent neuron,

```
# extract the membrane potential of the silent neuron:
v_free = voltmeter.events["V_m"]
Nskip = 500 # we skip the first 500 ms of the simulation
# plot the membrane potential of the silent neuron:
plt.figure(figsize=(6, 4))
plt.plot(voltmeter.events["times"][Nskip:], v_free[Nskip:], label="membrane potential $V_m$")
plt.axhline(y=V_th, color='r', linestyle='--', label="threshold potential $V_{th}$")
plt.axhline(y=V_reset, color='g', linestyle='--', label="reset potential $V_{reset}$")
plt.xlabel("time [ms]")
plt.ylabel("membrane potential [mV]")
plt.ylim([-71, -51])
plt.legend()
plt.title("Membrane potential of silent neuron")
plt.tight_layout()
plt.savefig("figures/campbell_siegert_approximation_membrane_potential.png", dpi=300)
plt.show()
```

and the spike raster plot with the histogram of the spiking rate:

```
# extract the spike times and neuron IDs:
spike_events = nest.GetStatus(spikerecorder, "events")[0]
spike_times = spike_events["times"]
neuron_ids = spike_events["senders"]
# combine the spike times and neuron IDs into a single array and sort by time:
spike_data = np.vstack((spike_times, neuron_ids)).T
spike_data_sorted = spike_data[spike_data[:, 0].argsort()]
# extract sorted spike times and neuron IDs:
sorted_spike_times = spike_data_sorted[:, 0]
sorted_neuron_ids = spike_data_sorted[:, 1]
# spike raster plot and histogram of spiking rate ("manually" plotted):
fig = plt.figure(figsize=(6, 6))
gs = gridspec.GridSpec(5, 1)
# create the first subplot (3/4 of the figure)
ax1 = plt.subplot(gs[0:4, :])
ax1.scatter(sorted_spike_times, sorted_neuron_ids, s=9.0, color='mediumaquamarine', alpha=1.0)
ax1.set_title(f"Spike raster plot and histogram of spiking rate")
#ax1.set_xlabel("time [ms]")
ax1.set_xticks([])
ax1.set_ylabel("neuron ID")
ax1.set_xlim([0, simtime])
ax1.set_ylim([0, n_neurons+1])
ax1.set_yticks(np.arange(0, n_neurons+1, 10))
# create the second subplot (1/4 of the figure)
ax2 = plt.subplot(gs[4, :])
hist_binwidth = 55.0
t_bins = np.arange(np.amin(sorted_spike_times), np.amax(sorted_spike_times), hist_binwidth)
n, bins = np.histogram(sorted_spike_times, bins=t_bins)
heights = 10000 * n / (hist_binwidth * (n_neurons))
ax2.bar(t_bins[:-1], heights, width=hist_binwidth, color='violet')
#ax2.set_title(f"histogram of spiking rate vs. time")
ax2.text(0.05, 0.95, f"calculated firing rate: {np.round(r, 2)} Hz",
         color='black', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes, bbox=dict(facecolor='white', edgecolor='white', alpha=0.5))
ax2.text(0.05, 0.7, f"actual firing rate: {np.round(spikerecorder.n_events / (n_neurons * simtime * ms), 2)} Hz",
         color='black', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes, bbox=dict(facecolor='white', edgecolor='white', alpha=0.5))
ax2.set_ylabel("firing rate\n[Hz]")
ax2.set_xlabel("time [ms]")
ax2.set_xlim([0, simtime])
plt.tight_layout()
plt.show()
```

To compare the estimated firing rate with the actual firing rate, we can calculate the empirical firing rate from the spike times of the neurons:

```
print(f"mean membrane potential (actual / calculated): {np.mean(v_free[Nskip:])} / {mu * 1000}")
print(f"variance (actual / calculated): {np.var(v_free[Nskip:])} / {sigma2 * 1e6}")
print(f"firing rate (actual / calculated): {spikerecorder.n_events / (n_neurons * simtime * ms)} / {r}")
```

```
mean membrane potential (actual / calculated): -57.79420883300154 / -57.81894163122827
variance (actual / calculated): 0.6809326908771234 / 0.6897398528707916
firing rate (actual / calculated): 0.185 / 0.2898135046747145
```

The estimated firing rate using the Campbell & Siegert approximation is 0.29 Hz, while the empirical firing rate is 0.185 Hz. The mean membrane potential and its variance, in contrast, closely match the values calculated with Campbell’s theorem. The discrepancy between the estimated and empirical firing rates can be attributed to the simplifying assumptions made in the approximation, such as treating the filtered synaptic input as white noise.

The Campbell and Siegert approximation is a valuable tool for estimating the firing rate of neurons in models where the input is stochastic. By using the mean and variance of the input current, this approximation provides insights into how neurons might respond to varying synaptic inputs, making it a crucial concept in the study of neural coding and information processing in computational neuroscience.

The complete code used in this blog post is available in this Github repositoryꜛ (`campbell_siegert_approximation.py`). Feel free to modify and expand upon it, and share your insights.

- Wikipedia article on Campbell’s theoremꜛ
- Siegert, *On the First Passage Time Probability Problem*, 1950, Physical Review, Vol. 81, Issue 4, pages 617-623, doi: 10.1103/PhysRev.81.617ꜛ
- Athanasios Papoulis, S. Unnikrishna Pillai, *Probability, Random Variables, And Stochastic Processes*, 2002, McGraw-Hill Companies, ISBN: 9780071122566
- Florian Jug, Matthew Cook, Angelika Steger, *Recurrent competitive networks can learn locally excitatory topologies*, 2012, The 2012 International Joint Conference on Neural Networks (IJCNN), doi: 10.1109/IJCNN.2012.6252786ꜛ
- Luigi M. Ricciardi, Charles E. Smith, *Diffusion Processes And Related Topics In Biology*, 1977, Springer, ISBN: 9783540081463, source-urlꜛ
- Ostojic, S., & Brunel, N., *From spiking neuron models to linear-nonlinear models*, 2011, PLoS Comput Biol, 7(1), e1001056, doi: 10.1371/journal.pcbi.1001056ꜛ
- NEST tutorial “Campbell & Siegert approximation example”ꜛ

In our research, we targeted the medial prefrontal cortex (mPFC) in mice, a critical brain region involved in complex cognitive functions such as decision-making, memory, and emotional regulation. We also investigated the hippocampus (hippocampal CA1 neurons) and the spinal cord, regions that are typically challenging to access with conventional imaging techniques. Furthermore, we extended our imaging studies to *Drosophila*, utilizing three-photon microscopy to explore neural structures and activity in this model organism. This cross-species approach highlights the versatility of three-photon imaging and its applicability to a wide range of experimental models.

In mice, we focused on imaging the mPFC, including the prelimbic and infralimbic areas, as well as the hippocampus and spinal cord. These regions, located deep within the brain, are typically over 1 mm below the surface and have been largely inaccessible with previous imaging techniques. In *Drosophila*, we combined three-photon microscopy with non-invasive mounting methods to perform calcium imaging in the mushroom body Kenyon cells (KCs) through the intact cuticle, confirming the capability of three-photon imaging in small and complex organisms.

In mice, our three-photon setup enabled us to image structures as deep as **1.6 mm** in the mPFC with sub-cellular resolution. This is a significant improvement over two-photon imaging, which reached only 800 µm under similar conditions. We could visualize neuronal activity and detailed structures such as dendritic spines and microglial processes at these depths.

We recorded calcium transients from neurons and astrocytes up to 1.4 mm deep in the mPFC in awake, head-fixed mice. This allowed us to **observe real-time brain activity during behavior**. In *Drosophila*, we recorded neural activity at cellular resolution in the mushroom bodies, providing insights into the functioning of neural circuits in a different species.

In mice, we performed **longitudinal imaging of dendritic spines** on basal dendrites in the mPFC over a week, revealing their structural plasticity. We also imaged **microglial processes at depths greater than 1 mm**, quantifying their motility and observing stable turnover rates, even at these previously unreachable depths.

We also explored **astrocytic calcium signaling** in the mPFC, observing spontaneous calcium events in deep cortical layers (1.0-1.2 mm). These findings suggest a uniform astrocytic activity pattern across different cortical layers under anesthetized conditions.

This project was a highly collaborative effort, bringing together expertise from multiple research groups across different research institutes. The collaboration between the Neuroimmunology and Imaging Groupꜛ, the Axon Growth and Regeneration Groupꜛ, the Dynamics of Neuronal Circuits Groupꜛ, the Vascular Neurology Groupꜛ, and others was essential in advancing our understanding of three-photon imaging and its applications across various species and brain regions.

This work would also not have been possible without the support of the Core Research Facilities and Servicesꜛ and the Light Microscope Facilityꜛ at our research institute, the German Center for Neurodegenerative Diseases (DZNE)ꜛ.

Our findings demonstrate that three-photon microscopy is a powerful tool for investigating deep brain regions, such as the mPFC, without the need for invasive procedures like GRIN lens or microprism implantation. The successful application of this technique in both mice and *Drosophila* underscores its broad utility for neuroscience research.

The ability to image deep brain regions while animals are awake and behaving opens new avenues for exploring the neural circuits underlying cognition and behavior. This advancement will significantly enhance our understanding of brain function in both health and disease.

For a more detailed exploration of our methods and findings, feel free to read the full preprint hereꜛ.

Please cite the preprint as follows:

Falko Fuhrmann, Felix C Nebeling, Fabrizio Musacchio, Manuel Mittag, Stefanie Poll, Monika Mueller, Eleonora Ambrad Giovannetti, Michael Maibach, Barbara Schaffran, Emily Burnside, Ivy Chi Wai Chan, Alex Lagurin, Nicole Reichenbach, Sanjeev Kaushalya, Hans-Ulrich Fried, Stefan Linden, Gabor Petzold, Gaia Tavosanis, Frank Bradke, Martin Fuhrmann, *Three-photon in vivo imaging of neurons and glia in the medial prefrontal cortex with sub-cellular resolution*, bioRxiv 2024.08.28.610026; doi: 10.1101/2024.08.28.610026ꜛ

We will first derive the exponential Integrate-and-Fire (EIF) model from the Hodgkin-Huxley model. Let’s therefore recall the Hodgkin-Huxley (HH) model and its key equations. The HH model uses four coupled nonlinear differential equations to represent the dynamics of the membrane potential and the three gating variables of sodium ($\text{Na}^+$) and potassium ($\text{K}^+$) ion channels. The membrane potential is given by:

\[C \frac{dV}{dt} = I_{\text{ext}} - \left( I_{\text{Na}} + I_{\text{K}} + I_{\text{leak}} \right)\]where

- $V$ is the membrane potential
- $C$ is the membrane capacitance
- $I_{\text{ext}}$ is the external input current
- $I_{\text{Na}}$, $I_{\text{K}}$, and $I_{\text{leak}}$ are the sodium, potassium, and leak currents, respectively.

The ionic currents are described by:

\[\begin{align*} I_{\text{Na}} &= g_{\text{Na}} m^3 h (V - E_{\text{Na}}) \\ I_{\text{K}} &= g_{\text{K}} n^4 (V - E_{\text{K}}) \\ I_{\text{leak}} &= g_{\text{leak}} (V - E_{\text{leak}}) \end{align*}\]The gating variables $m$, $h$, and $n$ follow first-order kinetics:

\[\begin{align*} \frac{dm}{dt} &= \alpha_m (1 - m) - \beta_m m \\ \frac{dh}{dt} &= \alpha_h (1 - h) - \beta_h h \\ \frac{dn}{dt} &= \alpha_n (1 - n) - \beta_n n \end{align*}\]The EIF model was first introduced by Nicolas Fourcaud-Trocmé et al. in 2003ꜛ. To derive the model, several approximations and simplifications are applied to the Hodgkin-Huxley model. First, the dynamics of the gating variables are assumed to be much faster than the changes in the membrane potential, allowing us to approximate them by their quasi-steady states. Second, focusing on the rapid rise of the membrane potential during an action potential, the sodium current can be approximated by an exponential function of the membrane potential. This is because the activation of sodium channels increases rapidly with voltage. Let’s therefore approximate the sodium current as:

\[I_{\text{Na}} \approx g_L \Delta_T \exp \left( \frac{V - V_T}{\Delta_T} \right)\]where $V_T$ is the threshold potential, $g_L$ is the leak conductance, and $\Delta_T$ is the slope factor.

The leak current remains linear and is given by:

\[I_{\text{leak}} = g_L (V - E_L)\]where $E_L$ is the leak reversal potential.

Combining the approximations for the ionic currents, the total membrane current is:

\[\begin{align*} I_{\text{total}} =& \quad g_L (E_L - V) \\ &+ g_L \Delta_T \exp \left( \frac{V - V_T}{\Delta_T} \right) + I_{\text{ext}} \end{align*}\]Using the membrane capacitance $C$, the equation for the membrane potential $V$ becomes:

\[\begin{align*} C \frac{dV}{dt} =& -g_L (V - E_L) \\ &+ g_L \Delta_T \exp \left( \frac{V - V_T}{\Delta_T} \right) + I_{\text{ext}} \end{align*}\]This is the core equation of the EIF model. Once the membrane potential reaches the threshold $V_T$, a spike is generated, and the membrane potential is reset to a reset value $V_{\text{reset}}$:

\[\text{if } V \geq V_{\text{T}} \text{ then } V \leftarrow V_{\text{reset}}\]

The adaptive exponential Integrate-and-Fire (AdEx or AEIF) model was first introduced by Romain Brette and Wulfram Gerstner in 2005ꜛ. It builds upon the EIF model by incorporating an adaptation current to account for the spike-frequency adaptation observed in real neurons. This adaptation mechanism is crucial for modeling the neuron’s ability to adjust its firing rate in response to prolonged stimulation. The adaptation current $w$ represents the slow adaptation mechanism and is modeled as a function of the membrane potential:

\[\tau_w \frac{dw}{dt} = a (V - E_L) - w\]where

- $\tau_w$ is the adaptation time constant, and
- $a$ is the subthreshold adaptation parameter.

The membrane potential equation in the AdEx model is similar to the EIF model but incorporates the adaptation current $w$:

\[\begin{align*} C \frac{dV}{dt} &= -g_L (V - E_L) \\ &+ g_L \Delta_T \exp \left( \frac{V - V_T}{\Delta_T} \right) \\ &- w + I_{\text{ext}} \\ \end{align*}\]As for the Hodgkin-Huxley model, the AdEx model can include additional conductances and currents to capture specific neuronal properties. For instance, an excitatory synaptic conductance $g_\text{ex}$ and an inhibitory synaptic conductance $g_\text{in}$ can be added to model synaptic inputs:

\[\begin{align*} C \frac{dV}{dt} =& -g_L (V - E_L) \\ &+ g_L \Delta_T \exp \left( \frac{V - V_T}{\Delta_T} \right)\\ & - g_\text{ex} (V - E_\text{ex}) \\ &- g_\text{in} (V - E_\text{in}) - w + I_{\text{ext}} \end{align*}\]Once a spike occurs, the membrane potential is reset to a reset value $V_{\text{reset}}$, and the adaptation current is increased by an amount $b$, the spike-triggered adaptation parameter, as each spike causes a jump in the adaptation current:

\[\text{if } V \geq V_{\text{T}} \text{ then } \begin{cases} V \leftarrow V_{\text{reset}} \\ w \leftarrow w + b \end{cases}\]where

- $V_{\text{T}}$ is the threshold potential, and
- $b$ is the spike-triggered adaptation parameter.
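The AdEx update scheme above can be integrated directly with a short forward-Euler sketch. The parameter values below follow Brette and Gerstner (2005); the spike-detection ceiling `V_peak`, the step size, and the constant input current are our own illustrative choices. Setting $a = b = 0$ recovers the plain EIF model:

```
import numpy as np

# Forward-Euler sketch of the AdEx model (units: mV, ms, nS, pF, pA).
# Parameters follow Brette & Gerstner (2005); V_peak, dt, and I_ext are
# illustrative choices of ours.
C, g_L, E_L = 281.0, 30.0, -70.6      # capacitance, leak conductance, rest
V_T, Delta_T = -50.4, 2.0             # threshold and slope factor
tau_w, a, b = 144.0, 4.0, 80.5        # adaptation time constant and parameters
V_reset, V_peak = E_L, 0.0            # reset value and numerical spike ceiling
I_ext, dt, T = 800.0, 0.01, 500.0     # constant input, step size, duration

V, w = E_L, 0.0
spike_times = []
for step in range(int(T / dt)):
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I_ext) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:        # spike detected: reset membrane, increment adaptation
        V = V_reset
        w += b
        spike_times.append(step * dt)

isis = np.diff(spike_times)  # inter-spike intervals
```

Inspecting `isis` shows the hallmark of the AdEx model: the first inter-spike intervals are short and lengthen as the adaptation current $w$ builds up (spike-frequency adaptation).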

The EIF and AdEx models are widely used in computational neuroscience for their balance between biological realism and computational efficiency. Here are a few applications:

**Large-scale neuronal network simulations** – These models are computationally efficient and can be used to simulate large networks of spiking neurons, making them suitable for studying network dynamics, such as oscillations, synchronization, and information processing in the brain.
**Understanding neuronal response properties** – The models help to understand how neurons respond to different input currents and how various parameters affect the firing patterns, such as spike-frequency adaptation in the AdEx model.
**Comparison with experimental data** – These models provide a framework to compare theoretical predictions with experimental data, helping to refine our understanding of the underlying mechanisms of neuronal behavior.

To simulate the AdEx model in Python, we can use the `aeif_cond_alpha`ꜛ neuron model implemented in the NEST simulator. We will create an AdEx neuron with two DC inputs: one with a lower amplitude of 500 pA lasting from 0 to 200 ms and another with a higher amplitude of 800 pA lasting from 500 to 1000 ms. We will also record the membrane potential using a voltmeter. We will use a subthreshold adaptation parameter of $a=4.0$ and a spike-triggered adaptation parameter of $b=80.5$ to reproduce figure 2C from the original AdEx model paper by Brette and Gerstner (2005)ꜛ. The following Python code is adapted and slightly modified from the NEST tutorial “Testing the adapting exponential integrate and fire model in NEST (Brette and Gerstner Fig 2C)”ꜛ:

```
import os
import matplotlib.pyplot as plt
import nest
# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
# set the simulation and the resolution of the simulation:
T = 1000.0 # ms
nest.resolution = 0.1 # ms
# create an AEIF neuron with multiple synapses:
neuron = nest.Create("aeif_cond_alpha")
# set the parameters of the AEIF neuron:
neuron.set(a=4.0, b=80.5)
# create two DC generators:
dc = nest.Create("dc_generator", 2)
dc.set(amplitude=[500.0, 800.0], start=[0.0, 500.0], stop=[200.0, 1000.0])
# connect the DC generators to the neuron:
nest.Connect(dc, neuron, "all_to_all")
# create a voltmeter to record the membrane potential of the neuron:
voltmeter = nest.Create("voltmeter", params={"interval": 0.1})
nest.Connect(voltmeter, neuron)
# simulate the network:
nest.Simulate(T)
# extract the data from the voltmeter:
Vms = voltmeter.get("events", "V_m")
time = voltmeter.get("events", "times")
# plot the membrane potential:
plt.figure(figsize=(5.5, 4))
plt.plot(time, Vms)
plt.xlabel("time (ms)")
plt.ylabel("membrane potential (mV)")
plt.title(f"AdEx neuron with multiple DC inputs")
plt.grid(True)
plt.tight_layout()
plt.show()
```

From the simulation results, we can distinguish three phases in the membrane potential trace:

**Initial phase (0-200 ms)**: The neuron receives a DC input of 500 pA, which is below the threshold needed to generate spikes. This phase shows subthreshold activity with a small depolarization and possible overshoot due to the subthreshold adaptation mechanism.
**Intermediate phase (200-500 ms)**: There is no DC input, so the membrane potential returns closer to the resting potential.
**Late phase (500-1000 ms)**: The neuron receives a stronger DC input of 800 pA, which is above the threshold, leading to spiking activity. The spike frequency is high initially but decreases over time due to the spike-triggered adaptation, demonstrating the characteristic spike-frequency adaptation of the AdEx model.

These observations align well with the expected behavior of the AdEx model as described in the original paper by Brette and Gerstnerꜛ.

NEST holds a variant of the `aeif_cond_alpha` model called `aeif_cond_beta_multisynapse`ꜛ, which allows for an AdEx model with multiple synapses, each with its own synaptic dynamics. The following Python code demonstrates how to create such a model and record the membrane potential using a voltmeter. The neuron receives four synaptic inputs, each with a different delay (1 ms, 300 ms, 500 ms, and 700 ms) relative to a single generator spike at 10 ms. Each synapse has distinct rise and decay times, influencing how the input affects the membrane potential. The code is adapted from the NEST tutorial “Example of an AEIF neuron with multiple synaptic rise and decay time constants”ꜛ:

```
import os
import matplotlib.pyplot as plt
import numpy as np
import nest
import nest.raster_plot
# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
# define simulation time:
T = 1000.0 # ms
# define neuron parameters:
aeif_neuron_params = {
    "V_peak": 0.0,                          # spike detection threshold in mV
    "a": 4.0,                               # subthreshold adaptation in nS
    "b": 80.5,                              # spike-triggered adaptation in pA
    "E_rev": [0.0, 0.0, 0.0, -85.0],        # reversal potentials in mV
    "tau_decay": [50.0, 20.0, 20.0, 20.0],  # synaptic decay times in ms
    "tau_rise": [10.0, 10.0, 1.0, 1.0]}     # synaptic rise times in ms
# create an AEIF neuron with multiple synapses:
neuron = nest.Create("aeif_cond_beta_multisynapse")
nest.SetStatus(neuron, params=aeif_neuron_params)
# create a spike generator that emits a single spike at 10 ms:
spikegenerator = nest.Create("spike_generator", params={"spike_times": np.array([10.0])})
# create a voltmeter to record the membrane potential of the neuron:
voltmeter = nest.Create("voltmeter")
# connect the spike generator to the neuron, one synapse per receptor:
delays = [1.0, 300.0, 500.0, 700.0]
w = [1.0, 1.0, 1.0, 1.0]
for syn in range(4):
    nest.Connect(
        spikegenerator,
        neuron,
        syn_spec={"synapse_model": "static_synapse",
                  "receptor_type": 1 + syn,
                  "weight": w[syn],
                  "delay": delays[syn]},
    )
# connect the voltmeter to the neuron:
nest.Connect(voltmeter, neuron)
# simulate the network:
nest.Simulate(T)
# extract the data from the voltmeter:
Vms = voltmeter.get("events", "V_m")
ts = voltmeter.get("events", "times")
# plot the membrane potential:
plt.figure(figsize=(6, 4))
plt.plot(ts, Vms)
plt.xlabel("time [ms]")
plt.ylabel("membrane potential [mV]")
plt.title("AdEx neuron with multiple synapses")
plt.tight_layout()
plt.show()
```

The different synaptic inputs result in distinct effects on the membrane potential of the AdEx neuron. The first synaptic input occurs at approximately 10 ms. This input causes a rapid depolarization of the membrane potential, peaking at around -68.5 mV. Following this peak, the membrane potential decays back towards the resting potential due to the synaptic decay time and adaptation mechanisms. The second synaptic input occurs at 300 ms. Similar to the first input, this causes another depolarization, peaking at approximately the same level (-68.5 mV), followed by a decay back towards the resting potential. The third synaptic input occurs at 500 ms. This input causes another depolarization with characteristics similar to the previous inputs. The fourth synaptic input occurs at 700 ms. Unlike the previous inputs, this one causes a hyperpolarization, creating a trough in the membrane potential around -70.8 mV. Overall, between the synaptic inputs, the membrane potential shows a decay back towards the resting potential, which is around -70.5 mV. The subthreshold and spike-triggered adaptation mechanisms help modulate the membrane potential’s return to the resting state after each depolarization event.

The Exponential Integrate-and-Fire (EIF) and Adaptive Exponential Integrate-and-Fire (AdEx) models provide powerful tools for studying neuronal dynamics. By simplifying the complex Hodgkin-Huxley model, these models capture essential features of neuronal behavior, such as the sharp onset of action potentials and spike-frequency adaptation, while remaining computationally efficient for large-scale simulations. Understanding and applying these models is crucial for advancing our knowledge in computational neuroscience and developing practical applications in neural engineering.

The complete code used in this blog post is available in this Github repositoryꜛ (`aeif_neuron.py` and `aeif_neuron_multple_rices_and_decays.py`). Feel free to modify and expand upon it, and share your insights.

- Nicolas Fourcaud-Trocmé, David Hansel, Carl Van Vreeswijk, Nicolas Brunel, *How Spike Generation Mechanisms Determine the Neuronal Response to Fluctuating Inputs*, 2003, The Journal of Neuroscience, 23(37), 11628–11640, doi: 10.1523/JNEUROSCI.23-37-11628.2003ꜛ
- Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski, *Chapter 5 Nonlinear Integrate-and-Fire Models* and *Chapter 6 Adaptation and Firing Patterns* in *Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition*, 2014, Cambridge University Press, ISBN: 978-1-107-06083-8, Online-Versionꜛ
- Romain Brette and Wulfram Gerstner, *Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity*, 2005, Journal of Neurophysiology, 94(5), 3637–3642, doi: 10.1152/jn.00686.2005ꜛ
- Wulfram Gerstner and Romain Brette, *Adaptive exponential integrate-and-fire model*, 2009, Scholarpedia, 4(6):8427, doi: 10.4249/scholarpedia.8427ꜛ
- Wikipedia article on the EIF and AdEx modelsꜛ
- NEST’s tutorial “Testing the adapting exponential integrate and fire model in NEST (Brette and Gerstner Fig 2C)”ꜛ
- NEST implementation of the AdEx modelꜛ
- NEST’s tutorial “Example of an AEIF neuron with multiple synaptic rise and decay time constants”ꜛ
- NEST’s `aeif_cond_alpha` model descriptionꜛ
- NEST’s `aeif_cond_beta_multisynapse` model descriptionꜛ

The brain processes information through networks of neurons that communicate via electrical impulses or spikes. Traditional models often rely on the rate of spiking to encode information. However, spike-timing-based computation focuses on the precise timing of these spikes to perform complex computations. This approach is particularly relevant in sensory systems like the olfactory system, where the timing of neural responses can convey important information about odors.

To simulate spike-timing-based computation in the olfactory system, we use a network of spiking neurons modeled with the leaky integrate-and-fire (LIF) neuron model with alpha-shaped postsynaptic currents. The key equations governing the dynamics of the membrane potential $V_m(t)$ are:

\[\begin{align} C_m \frac{dV_m(t)}{dt} =& -g_L (V_m(t) - E_L) + \\ & I_{\text{syn}}(t) + I_{\text{ext}}(t) \nonumber \end{align}\]

where:

- $C_m$ is the membrane capacitance.
- $g_L$ is the leak conductance.
- $E_L$ is the resting potential.
- $I_{\text{syn}}(t)$ is the synaptic current.
- $I_{\text{ext}}(t)$ is the (constant) external input current.

A spike is emitted at time step $t^*=t_{k+1}$ when the membrane potential reaches a threshold $V_{\text{th}}$, i.e., when

\[\begin{align} V_\text{m}(t_k) < & V_\text{th} \quad\text{and}\quad V_\text{m}(t_{k+1})\geq V_\text{th}. \end{align}\]

After a spike, the membrane potential is reset to a reset potential $V_{\text{reset}}$ and held there for the refractory period $t_{\text{ref}}$:

\[\begin{align} V_\text{m}(t) &= V_{\text{reset}} \quad\text{for}\quad t^* \leq t < t^* + t_{\text{ref}}. \end{align}\]

The synaptic current $I_{\text{syn}}(t)$ is composed of excitatory and inhibitory alpha-shaped contributions:

\[\begin{align} I_{\text{syn}}(t) &= I_{\text{syn, ex}}(t) + I_{\text{syn, in}}(t) \end{align}\]

where

\[\begin{align} I_{\text{syn, X}}(t) &= \sum_{j} w_j \sum_k i_{\text{syn, X}}(t-t_j^k-d_j) \end{align}\]

and $X \in \{ \text{ex}, \text{in} \}$, i.e., either excitatory or inhibitory presynaptic neurons. $w_j$ is the synaptic weight, $t_j^k$ is the time of the $k$-th spike of the $j$-th presynaptic neuron, and $d_j$ is the axonal delay. $i_{\text{syn, X}}(t)$ describes the individual post-synaptic currents (PSCs) and is defined as:

\[\begin{align} i_{\text{syn, X}}(t) &= \frac{t}{\tau_{\text{syn, X}}} e^{1 - \frac{t}{\tau_{\text{syn, X}}}} \Theta(t) \end{align}\]

where $\Theta(t)$ is the Heaviside step function and $\tau_{\text{syn, X}}$ is the synaptic time constant. This function causes the synaptic current to rise and decay in an alpha-shaped manner (rapid rise to a peak value, followed by a slower exponential decay). The PSCs are normalized such that:

\[\begin{align} i_{\text{syn, X}}(t= \tau_{\text{syn, X}}) &= 1 \end{align}\]

The total charge $q$ transferred by a single PSC depends on the synaptic time constant according to:

\[\begin{align} q &= \int_0^{\infty} i_{\text{syn, X}}(t) \, dt = e \, \tau_{\text{syn, X}}. \end{align}\]

In our model, the external input $I_{\text{ext}}(t)$ will consist of a constant bias current and an alternating current (AC) generator to drive oscillations:

\[\begin{align} I_{\text{ext}}(t) &= I_{\text{bias}} + I_{\text{AC}}(t) \end{align}\]

with:

\[\begin{align} I_{\text{AC}}(t) &= A \sin(2 \pi f t) \end{align}\]

where:

- $I_{\text{bias}}$ is the constant bias current.
- $A$ is the amplitude of the AC generator.
- $f$ is the frequency of the AC generator.

Oscillations in the input currents are crucial for synchronizing neuronal activity and enhancing spike-timing precision. They reflect the natural oscillatory dynamics observed in the olfactory bulb, helping to simulate realistic neural processing and facilitating the segregation and integration of information.

In addition to the deterministic inputs, a noise generator is included to simulate synaptic noise:

\[\begin{align} I_{\text{noise}}(t) &\sim \mathcal{N}(\mu, \sigma^2) \end{align}\]

where:

- $\mu$ is the mean of the noise.
- $\sigma$ is the standard deviation of the noise.

By including these elements in our model, we can more accurately represent the complex dynamics of the olfactory system and explore the principles of spike-timing-based computation.
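Before turning to NEST, the dynamics described above can be sketched with a plain forward-Euler integration. The following toy example (all parameter values are illustrative and are not the NEST defaults used later) demonstrates the threshold crossing, the reset with refractory clamping, and an alpha-shaped PSC riding on top of the bias-plus-AC drive:

```
import numpy as np

# illustrative parameters (not NEST defaults):
C_m, g_L, E_L = 200.0, 10.0, -70.0        # pF, nS, mV
V_th, V_reset, t_ref = -55.0, -70.0, 2.0  # mV, mV, ms
dt, T = 0.1, 200.0                        # ms
tau_syn = 2.0                             # ms

def i_syn(t, tau):
    # alpha-shaped PSC, normalized to 1 at t = tau, zero for t <= 0:
    return np.where(t > 0, (t / tau) * np.exp(1 - t / tau), 0.0)

times = np.arange(0.0, T, dt)
# one excitatory presynaptic spike at 50 ms with weight 100 pA:
I_syn_trace = 100.0 * i_syn(times - 50.0, tau_syn)
# constant bias plus 35 Hz AC drive (in pA):
I_ext = 160.0 + 50.0 * np.sin(2 * np.pi * 35.0 * times / 1000.0)

V = np.full(times.size, E_L)
ref_until, spikes = -1.0, []
for k in range(times.size - 1):
    if times[k] < ref_until:       # clamp during the refractory period
        V[k + 1] = V_reset
        continue
    dV = (-g_L * (V[k] - E_L) + I_syn_trace[k] + I_ext[k]) / C_m
    V[k + 1] = V[k] + dt * dV
    if V[k] < V_th <= V[k + 1]:    # threshold crossing -> spike and reset
        spikes.append(times[k + 1])
        V[k + 1] = V_reset
        ref_until = times[k + 1] + t_ref

print(f"{len(spikes)} spikes" if spikes else "no spikes")
```

With the bias chosen just above threshold, the AC drive and the PSC modulate when exactly the crossings occur, which is the essence of the spike-timing mechanism explored below.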

The corresponding simulation code is adapted and slightly modified from the NEST tutorial “Spike synchronization through subthreshold oscillation”ꜛ.

We begin by importing the necessary libraries:

```
import os
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
import nest
# set global matplotlib properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
```

We create a network of 1000 neurons modeled with NEST’s `iaf_psc_alpha`ꜛ neuron model implementation:

```
N = 1000           # number of neurons
bias_begin = 140.0 # minimal value for the bias current injection [pA]
bias_end = 200.0   # maximal value for the bias current injection [pA]
T = 600            # simulation time [ms]
```

Next, we set the parameters for the input currents. We set up an alternating current generator (`ac_generator`) to drive oscillations in the input currents (35 Hz) with an amplitude of 50 pA. Additionally, we include a noise generator (`noise_generator`) to simulate synaptic noise with a standard deviation of 25 (the mean is set to 0):

```
# parameters for the alternating-current generator
driveparams = {"amplitude": 50.0, "frequency": 35.0}
# parameters for the noise generator
noiseparams = {"mean": 0.0, "std": 25.0}
neuronparams = {
    "tau_m": 20.0,   # membrane time constant [ms]
    "V_th": 20.0,    # threshold potential [mV]
    "E_L": 10.0,     # membrane resting potential [mV]
    "t_ref": 2.0,    # refractory period [ms]
    "V_reset": 0.0,  # reset potential [mV]
    "C_m": 200.0,    # membrane capacitance [pF]
    "V_m": 0.0,      # initial membrane potential [mV]
}
```

Now we create the corresponding simulation nodes. The neurons will receive a bias current that varies linearly from 140 pA (`bias_begin`) to 200 pA (`bias_end`) across the population. We also create a multimeter to record the membrane potential of the neurons:

```
neurons = nest.Create("iaf_psc_alpha", N)
spikerecorder = nest.Create("spike_recorder")
noise = nest.Create("noise_generator")
drive = nest.Create("ac_generator")
multimeter = nest.Create("multimeter", params={"record_from": ["V_m"]})
drive.set(driveparams)
noise.set(noiseparams)
neurons.set(neuronparams)
neurons.I_e = [(n * (bias_end - bias_begin) / N + bias_begin) for n in range(1, len(neurons) + 1)]
```

Next, we connect the nodes with each other, using the default all-to-all connection rule:

```
nest.Connect(drive, neurons)
nest.Connect(noise, neurons)
nest.Connect(neurons, spikerecorder)
nest.Connect(multimeter, neurons)
```

Finally, we simulate the network for `T=600` ms:

```
nest.Simulate(T)
```

For further analysis, we extract the spike times and neuron IDs from the spike recorder for plotting:

```
# extract spike times and neuron IDs from the spike recorder for plotting:
spike_events = nest.GetStatus(spikerecorder, "events")[0]
spike_times = spike_events["times"]
neuron_ids = spike_events["senders"]
# combine the spike times and neuron IDs into a single array and sort by time:
spike_data = np.vstack((spike_times, neuron_ids)).T
spike_data_sorted = spike_data[spike_data[:, 0].argsort()]
# extract sorted spike times and neuron IDs:
sorted_spike_times = spike_data_sorted[:, 0]
sorted_neuron_ids = spike_data_sorted[:, 1]
# extract recorded data from the multimeter:
multimeter_events = nest.GetStatus(multimeter, "events")[0]
```

And the corresponding plot commands are as follows:

```
# spike raster plot and histogram of spiking rate:
fig = plt.figure(figsize=(6, 6))
gs = gridspec.GridSpec(5, 1)
# create the first subplot (3/4 of the figure)
ax1 = plt.subplot(gs[0:4, :])
ax1.scatter(sorted_spike_times, sorted_neuron_ids, s=9.0, color='mediumaquamarine', alpha=1.0)
ax1.set_title("spike synchronization through subthreshold oscillation:\nspike times (top) and rate (bottom)")
#ax1.set_xlabel("time [ms]")
ax1.set_xticks([])
ax1.set_ylabel("neuron ID")
ax1.set_xlim([0, T])
ax1.set_ylim([0, N])
ax1.set_yticks(np.arange(0, N+1, 100))
# create the second subplot (1/4 of the figure)
ax2 = plt.subplot(gs[4, :])
hist_binwidth = 5.0
t_bins = np.arange(np.amin(sorted_spike_times), np.amax(sorted_spike_times), hist_binwidth)
n, bins = np.histogram(sorted_spike_times, bins=t_bins)
heights = 1000 * n / (hist_binwidth * (N))
ax2.bar(t_bins[:-1], heights, width=hist_binwidth, color='violet')
#ax2.set_title(f"histogram of spiking rate vs. time")
ax2.set_ylabel("firing rate\n[Hz]")
ax2.set_xlabel("time [ms]")
ax2.set_xlim([0, T])
plt.tight_layout()
plt.show()
# plot the membrane potential and synaptic currents for 3 exemplary neurons:
fig = plt.figure(figsize=(6, 2))
sender = 100
idc_sender = multimeter_events["senders"] == sender
plt.plot(multimeter_events["times"][idc_sender], multimeter_events["V_m"][idc_sender],
         label=f"neuron ID {sender}", alpha=1.0, c="k", lw=1.75, zorder=3)
sender = 200
idc_sender = multimeter_events["senders"] == sender
plt.plot(multimeter_events["times"][idc_sender], multimeter_events["V_m"][idc_sender],
         label=f"neuron ID {sender}", alpha=0.8)
sender = 800
idc_sender = multimeter_events["senders"] == sender
plt.plot(multimeter_events["times"][idc_sender], multimeter_events["V_m"][idc_sender],
         label=f"neuron ID {sender}", alpha=0.8)
plt.ylabel("membrane\npotential\n[mV]")
plt.xlabel("time [ms]")
plt.tight_layout()
plt.legend(loc="lower right")
plt.show()
```

Here is the resulting spike raster plot and histogram of the spiking rate:

Let’s take a closer look at three exemplary neurons and their membrane potential traces:

The three traces, each from a different regime of the bias current, indeed show oscillatory behavior. However, while the oscillatory behavior remains evident throughout the simulated time course, synchronicity between the neurons is not always maintained. For some periods, the neurons spike synchronously, while at other times their spike frequencies begin to diverge. This is due to the interplay between the alternating current generator, the noise generator, and the synaptic currents, which collectively shape the network’s activity. We can further examine this behavior on a more global scale by averaging the membrane potential traces of different groups of neurons, covering different regimes of the bias current, to identify common patterns and deviations:

```
# loop over all senders and collect all senders' V_m traces in a 2D array:
V_m_traces = np.zeros((N, T-1))
for sender_i, sender in enumerate(set(multimeter_events["senders"])):
    idc_sender = multimeter_events["senders"] == sender
    V_m_traces[sender_i, :] = multimeter_events["V_m"][idc_sender]
# plot neuron averages and std of membrane potential for different groups of neurons:
fig, ax = plt.subplots(5, 1, sharex=True, figsize=(6, 8), gridspec_kw={"hspace": 0.3})
axes = ax.flat
for i, (start, end) in enumerate([(800, 1000), (600, 800), (400, 600), (200, 400), (0, 200)]):
    V_m_mean = np.mean(V_m_traces[start:end, :], axis=0)
    V_m_std = np.std(V_m_traces[start:end, :], axis=0)
    axes[i].plot(multimeter_events["times"][idc_sender], V_m_mean, label="mean membrane potential", c="k")
    axes[i].fill_between(multimeter_events["times"][idc_sender], V_m_mean - V_m_std, V_m_mean + V_m_std, color='gray', alpha=0.5)
    axes[i].set_ylabel("membrane\npotential $V_m$\n[mV]")
    axes[i].set_title(f"average and std of neurons {start} to {end}")
axes[-1].set_xlabel("time [ms]")
plt.tight_layout()
plt.show()
```

From the average membrane potential traces of the different groups of neurons, we can observe that the synchronicity within each group is largely maintained at the beginning of the simulation (low standard deviation). As the simulation progresses, however, the synchronicity within almost all groups starts to break down, as indicated by the increasing standard deviation of the membrane potential. An exception is the group of neurons with IDs 400-600, where synchronicity, frequency, and amplitude of the oscillations remain almost constant. The groups with IDs 800-1000 and IDs 0-200 show the highest standard deviation, with the amplitude of the oscillations decreasing over the course of the simulation. An interesting behavior is observed for the group with IDs 600-800, where both the standard deviation (and thus the desynchronization) and the amplitude of the oscillations increase for a short period shortly after the beginning of the simulation. The differing behavior across the groups reflects the complex and variable dynamics of the network, as already observed in the spike raster plot.

You can further investigate the network dynamics by modifying the parameters of the alternating current generator, the noise generator, and the synaptic currents. By exploring different input patterns and noise levels, you can observe how these factors influence the synchronization and spiking activity of the network. This will provide valuable insights into the principles of spike-timing-based computation and the role of oscillatory inputs and synaptic noise in shaping neural activity.

In this tutorial, we have explored the principles of spike-timing-based computation in the context of olfactory processing using a simple network model proposed by Brody and Hopfieldꜛ. By simulating a network of spiking neurons with NEST, we have demonstrated how oscillatory input currents, synaptic noise, and alpha-shaped postsynaptic currents can shape the dynamics of the network and influence spike synchronization. The resulting spike raster plot and membrane potential traces provide insights into the complex interactions between neurons and the emergence of synchronized spiking activity. By analyzing the average membrane potential traces of different groups of neurons, we have observed how synchronicity within the network evolves over time, highlighting the importance of oscillatory inputs and synaptic noise in shaping neural activity.

This tutorial serves as a starting point for exploring spike-timing-based computation and its applications in neural processing. By modifying the parameters of the network model and exploring different input patterns, you can further investigate the dynamics of the network and gain a deeper understanding of how spiking neurons encode and process information.

The complete code used in this blog post is available in this Github repositoryꜛ (`spike_synchronization_through_oscillation.py`). Feel free to modify and expand upon it, and share your insights.

- Brody, Hopfield, *Simple Networks for Spike-Timing-Based Computation, with Application to Olfactory Processing*, 2003, Neuron, 37(5), 843–852, doi: 10.1016/S0896-6273(03)00120-Xꜛ
- NEST’s tutorial “Spike synchronization through subthreshold oscillation”ꜛ
- NEST’s `iaf_psc_alpha` model descriptionꜛ
- NEST’s `iaf_psc_alpha` implementationꜛ

The f-I curve or frequency-current curve is a fundamental concept in neuroscience that describes the relationship between the input current to a neuron and the firing rate (frequency) of that neuron. It is a graphical representation used to understand how a neuron’s output firing rate varies with changes in the input current.

The key concepts for generating an f-I curve are as follows:

- **Input current (I)**: This is the external current injected into the neuron, often referred to as the “stimulus current” or “input current.” It is typically measured in picoamperes (pA) or nanoamperes (nA).
- **Firing rate (f)**: This is the neuron’s output in response to the input current, measured in spikes per second (Hz). It represents how frequently a neuron fires action potentials.

The shape of the f-I curve can provide insights into the neuron’s response properties. Here are some common characteristics:

- **threshold**: The minimum input current required to elicit action potentials. Below this threshold, the neuron does not fire.
- **saturation**: At very high input currents, the firing rate may plateau, indicating that the neuron has reached its maximum firing capacity.
- **slope**: The steepness of the f-I curve indicates how sensitive the neuron is to changes in input current. A steep slope means small changes in input current cause large changes in firing rate.

Different types of neurons exhibit different f-I curves based on their intrinsic properties and the input they receive. Here are some common types:

- **Linear f-I curve**: Some neurons have a relatively linear relationship between input current and firing rate. This means that as the input current increases, the firing rate increases proportionally.
- **Non-linear f-I curve**: Many neurons exhibit a non-linear f-I curve, where the relationship between input current and firing rate is not proportional. This can include sigmoidal shapes, showing a more complex relationship with thresholds and saturation points.
- **Threshold-linear f-I curve**: A combination where the curve is linear above a certain threshold current but zero below it.
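As a toy illustration of the last case, a threshold-linear curve with an additional saturation plateau can be written as $f(I) = \min(f_{\max},\, k \cdot \max(0, I - I_{\text{th}}))$. The gain, threshold, and saturation values below are made up for demonstration and are not fitted to any real neuron:

```
import numpy as np

def threshold_linear_fi(I, I_th=200.0, gain=0.1, f_max=80.0):
    # toy threshold-linear f-I curve with saturation;
    # I: input current in pA, returns firing rate in Hz
    # (all parameter values are illustrative)
    return np.minimum(f_max, gain * np.maximum(0.0, I - I_th))

I = np.arange(0, 1500, 100.0)
rates = threshold_linear_fi(I)
print(rates)  # zero below threshold, linear rise, plateau at f_max
```

Sweeping the input current through such a function reproduces the three characteristic features listed above: the threshold, the linear sensitivity (slope), and the saturation.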

The f-I curve serves multiple essential functions in both experimental and computational contexts. It provides insights into how neurons translate input currents into output firing rates, thereby playing a key role in understanding neural coding and information processing in the brain.

The f-I curve helps in understanding how neurons encode information. By examining the relationship between input current and firing rate, one can infer how different stimuli are represented by neuronal activity. This is fundamental to deciphering the brain’s ‘language’ of spikes and how sensory inputs are processed and perceived.

Different neurons have different f-I curves, reflecting their unique response properties. For example, some neurons might be highly sensitive to small changes in input current, while others may require larger currents to change their firing rates significantly. By characterizing these properties, one can categorize neurons into functional types and understand their roles in neural circuits.

The f-I curve also reveals important aspects such as the threshold current required to elicit firing and the saturation point where further increases in input current no longer increase the firing rate. These features help in understanding the operating range of neurons and their potential role in processing various intensity levels of stimuli.

In computational neuroscience, the f-I curve is used to develop and refine neuron models. Accurate f-I curves ensure that simulated neurons behave similarly to biological neurons, which is crucial for realistic neural network models. This helps in studying complex neural dynamics and testing hypotheses about brain function.

The f-I curve allows for fine-tuning model parameters to match experimental data. Parameters such as membrane conductance, capacitance, and synaptic weights can be adjusted to ensure that the model’s f-I curve aligns with that of real neurons, thereby enhancing the model’s validity.

Models incorporating accurate f-I curves can predict how neurons will respond to novel stimuli. This predictive capability is valuable for understanding brain function, designing neural prosthetics, and developing treatments for neurological disorders.

Experimentally, f-I curves are used to classify different types of neurons. For example, excitatory and inhibitory neurons often have distinct f-I curves, reflecting their different roles in neural circuits. By analyzing these curves, one can identify neuron types and understand their contributions to brain function.

The shape and parameters of the f-I curve can also indicate the health of neurons. Changes in the f-I curve can signal pathological conditions such as neurodegenerative diseases or the effects of drugs. Monitoring these changes helps in diagnosing and understanding the progression of neurological disorders.

Here’s a simple Python example using the NEST simulator to calculate the f-I curve of a Hodgkin-Huxley neuron. The code simulates a range of input currents and records the firing rate of the neuron in response to each current level. The resulting plot shows the f-I curve of the neuron. The code is adapted from the NEST tutorial “Example using Hodgkin-Huxley neuron”ꜛ:

```
import os
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
import nest
import nest.raster_plot
# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
# define simulation time per current step:
time = 1000 # ms
# amplitude range (in pA):
I_start = 0
I_stop = 2000
I_step = 20
# define simulation step size (in ms):
h = 0.1
# create a Hodgkin-Huxley neuron and a spike recorder node:
neuron = nest.Create("hh_psc_alpha")
spikerecorder = nest.Create("spike_recorder")
spikerecorder.record_to = "memory"
nest.Connect(neuron, spikerecorder, syn_spec={"weight": 1.0, "delay": h})
# simulation loop:
n_data = int(I_stop / float(I_step))
amplitudes = np.zeros(n_data)
event_freqs = np.zeros(n_data)
for i, amp in enumerate(range(I_start, I_stop, I_step)):
    neuron.I_e = float(amp)
    nest.Simulate(1000)         # one second warm-up time to reach the equilibrium state
    spikerecorder.n_events = 0  # then reset the spike count
    nest.Simulate(time)         # another simulation call to record the firing rate
    n_events = spikerecorder.n_events
    amplitudes[i] = amp
    event_freqs[i] = n_events / (time / 1000.0)
    print(f"Simulating with current I={amp} pA -> {n_events} spikes in {time} ms ({event_freqs[i]} Hz)")
# plot the results:
plt.figure(figsize=(5, 4))
plt.plot(amplitudes, event_freqs, lw=2.0)
plt.xlabel("input current (pA)")
plt.ylabel("firing rate (Hz)")
plt.title("Firing rate vs. input current\nof a Hodgkin-Huxley neuron")
plt.tight_layout()
plt.grid(True)
plt.show()
```

The f-I curve is a crucial tool in neuroscience for understanding the relationship between a neuron’s input and its output. It provides valuable insights into neuronal behavior, response characteristics, and is fundamental in both experimental and computational neuroscience.

The complete code used in this blog post is available in this Github repositoryꜛ (`fi_curve.py`). Feel free to modify and expand upon it, and share your insights.

- Wikipedia article on “f-I curve”ꜛ
- B. Ermentrout, *Linearization of F-I curves by adaptation*, 1998, Neural Computation, 10(7), 1721–1729, doi: 10.1162/089976698300017106ꜛ
- A. L. Hodgkin, *The local electric changes associated with repetitive action in a non-medullated axon*, 1948, The Journal of Physiology, doi: 10.1113/jphysiol.1948.sp004260ꜛ
- Neuromatch’s “Tutorial 1: Neural Rate Models”ꜛ
- NEST’s tutorial “Example using Hodgkin-Huxley neuron”ꜛ
- NEST’s `hh_psc_alpha` model descriptionꜛ

We use the `iaf_psc_alpha`ꜛ model implemented in the NEST simulator to simulate the behavior of a single neuron or a population of neurons connected in a network. `iaf_psc_alpha` stands for “integrate-and-fire neuron with post-synaptic current shaped as an alpha function”. But what does ‘alpha-shaped current’ actually mean? In this short tutorial, we will explore the concept behind it.

In the context of neural modeling, alpha-shaped post-synaptic currents refer to the specific time course of the synaptic conductance change following a presynaptic spike. This time course is characterized by a rapid rise to a peak value, followed by a slower exponential decay. The mathematical form of this conductance change is given by the alpha function, which describes how the synaptic current evolves over time after a spike. The alpha function is often chosen over other functions such as a simple step function or more complex biophysical models due to its simplicity and computational efficiency, while still capturing the essential dynamics of synaptic transmission.

The alpha function $\alpha(t)$ is defined as:

\[\alpha(t) = \begin{cases} \frac{t}{\tau} e^{1 - t/\tau}, & \text{for } t > 0 \\ 0, & \text{for } t \leq 0 \end{cases}\]

Here, $\tau$ is a time constant that determines the time scale of the rise and decay of the synaptic current.

The alpha function has the following three characteristics:

- **rise phase**: for $t$ close to zero, the linear factor $\frac{t}{\tau}$ dominates, causing the synaptic current to increase approximately linearly with time.
- **peak value**: the peak of the alpha function occurs at $t=\tau$, where the current reaches its maximum (normalized) value of $1$.
- **decay phase**: for larger $t$, the exponential factor $e^{-t/\tau}$ dominates, leading to an exponential decay of the current.
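That the peak indeed lies at $t=\tau$ can be verified by setting the derivative of the alpha function to zero:

\[\frac{d\alpha}{dt} = \frac{1}{\tau}\left(1 - \frac{t}{\tau}\right) e^{1 - t/\tau} = 0 \quad\Rightarrow\quad t = \tau\]

and substituting $t=\tau$ back into the alpha function gives $\alpha(\tau) = \frac{\tau}{\tau}\, e^{1-\tau/\tau} = 1$, i.e., with the $e^{1-t/\tau}$ normalization used here the peak amplitude is exactly $1$.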

The synaptic current $I_{\text{syn}}(t)$ due to a spike at time $t_j$ from a presynaptic neuron can be expressed using the alpha function as follows:

\[I_{\text{syn}}(t) = w \cdot \alpha(t - t_j)\]

where:

- $w$ is the synaptic weight.
- $t_j$ is the time of the presynaptic spike.
- $\alpha(t - t_j)$ is the alpha function shifted to start at $t_j$.

If there are multiple presynaptic spikes occurring at times $t_j$, the total synaptic current is the sum of the alpha functions for each spike:

\[I_{\text{syn}}(t) = \sum_j w_j \alpha(t - t_j)\]

where $w_j$ is the weight of the synapse for the $j$-th presynaptic neuron.

To visualize the alpha function, consider the plot below:

```
import numpy as np
import matplotlib.pyplot as plt
# set global properties for all plots:
plt.rcParams.update({'font.size': 12})
plt.rcParams["axes.spines.top"] = False
plt.rcParams["axes.spines.bottom"] = False
plt.rcParams["axes.spines.left"] = False
plt.rcParams["axes.spines.right"] = False
def alpha_function(t, tau):
    return (t / tau) * np.exp(1 - t / tau) * (t > 0)

t = np.linspace(0, 10, 1000)
tau = 2
alpha = alpha_function(t, tau)
fig = plt.figure(figsize=(4.5, 3.5))
plt.plot(t, alpha, lw=2)
plt.title('Alpha function for synaptic current')
plt.xlabel('time (ms)')
plt.ylabel('synaptic current (normalized)')
plt.tight_layout()
plt.show()
```

The alpha-shaped synaptic current is biologically plausible and commonly used in neural modeling because it captures the essential dynamics of synaptic transmission observed in real neurons. The rapid rise corresponds to the quick response of the postsynaptic neuron to an incoming spike, while the slower decay represents the gradual return to baseline as the neurotransmitter effect dissipates.

In summary, the rise and decay of the synaptic current in an alpha-shaped manner provide a realistic and computationally efficient way to model synaptic interactions in neural networks. This approach is particularly useful in simulations where the timing of spikes plays a crucial role in neural computation. However, it is essential to note that the alpha function is a simplification of the complex dynamics of synaptic transmission, and more detailed models may be required to capture specific aspects of synaptic behavior in different contexts.

The complete code used in this blog post is available in this Github repositoryꜛ (`alpha_function.py`). Feel free to modify and expand upon it, and share your insights.

We will use the code provided in the NEST tutorial “Balanced neuron example”ꜛ and apply only minor modifications.

First, we need to import the necessary libraries and set the simulation parameters. We will also import the `bisect` functionꜛ from the `scipy.optimize` module, which will help us find the optimal rate for the inhibitory population:

```
import matplotlib.pyplot as plt
import numpy as np
import nest
import nest.voltage_trace
from scipy.optimize import bisect
nest.set_verbosity("M_WARNING")
nest.ResetKernel()
t_sim = 25000.0 # simulation time [ms]
n_ex = 16000 # size of the excitatory population
n_in = 4000 # size of the inhibitory population
r_ex = 5.0 # mean rate of the excitatory population [Hz]
r_in = 20.5 # initial rate of the inhibitory population [Hz]
epsc = 45.0 # peak amplitude of excitatory synaptic currents [pA]
ipsc = -45.0 # peak amplitude of inhibitory synaptic currents [pA]
d = 1.0 # synaptic delay [ms]
```

We set the firing rate of the excitatory population to 5 Hz and want the target neuron to fire at this same rate. To do so, we need to find the matching firing rate for the inhibitory population, which we initially guess to be 20.5 Hz.

To proceed, we first create the nodes for the neuron and the noise generator (as current input), and a voltmeter, spike recorder, and multimeter for recording the outputs generated by our simulation:

```
# create nodes:
neuron = nest.Create("iaf_psc_alpha") # single neuron with alpha-shaped postsynaptic currents
noise = nest.Create("poisson_generator", 2) # two Poisson generators for the excitatory and inhibitory populations
voltmeter = nest.Create("voltmeter")
spikerecorder = nest.Create("spike_recorder")
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"]) # record the membrane potential of the neuron to which the multimeter will be connected
# define the noise rates for the excitatory and inhibitory populations:
noise.rate = [n_ex * r_ex, n_in * r_in]
```
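Passing a single rate per generator works because the superposition of $n$ independent Poisson processes of rate $r$ is again a Poisson process of rate $n \cdot r$. A quick numerical sanity check of this (using plain NumPy rather than NEST):

```
import numpy as np

rng = np.random.default_rng(42)
n_ex, r_ex = 16000, 5.0   # population size and per-neuron rate [Hz]
dt = 0.001                # 1 ms bin [s]
lam = n_ex * r_ex * dt    # expected spikes per bin from the pooled process

# draw per-bin spike counts from the pooled Poisson process
counts = rng.poisson(lam=lam, size=100_000)
print(counts.mean())      # should be close to lam = 80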

Next, we connect the nodes,

```
nest.Connect(neuron, spikerecorder)
nest.Connect(multimeter, neuron)
nest.Connect(voltmeter, neuron)
nest.Connect(noise, neuron, syn_spec={"weight": [[epsc, ipsc]], "delay": d})
```

and run the simulation:

```
lower = 15.0 # lower bound of the search interval for bisect
upper = 25.0 # upper bound of the search interval for bisect
prec = 0.01 # precision of the bisect search

def output_rate(guess):
    print("Inhibitory rate estimate: %5.2f Hz" % guess)
    rate = float(abs(n_in * guess))
    noise[1].rate = rate # update the Poisson firing rate of the inhibitory population
    spikerecorder.n_events = 0 # reset the spike counter
    nest.Simulate(t_sim)
    out = spikerecorder.n_events * 1000.0 / t_sim # firing rate of the target neuron [Hz]
    print(f" -> Neuron rate: {out} Hz (goal: {r_ex} Hz)")
    return out

in_rate = bisect(lambda x: output_rate(x) - r_ex, lower, upper, xtol=prec)
print(f"Optimal rate for the inhibitory population: {in_rate} Hz")
```

Note that the defined function `output_rate` is used to estimate and update the firing rate of the inhibitory population. The function takes the firing rate of the inhibitory neurons as an argument. It scales the rate with the size of the inhibitory population and configures the inhibitory Poisson generator (`noise[1]`) accordingly. Then, the spike counter of the `spike_recorder` is reset to zero and the network is simulated. The return value `out` is the firing rate of the target neuron in Hz.

`output_rate` is called within the `bisect` function. In general, `bisect`ꜛ is a root-finding algorithm for a given function and interval. `bisect` receives four arguments. First, a function is passed whose zero crossing is to be determined. Here, the firing rate of the target neuron should equal the firing rate of the neurons of the excitatory population, so we define an anonymous function (using Python’s lambda expression) that returns the difference between the actual rate of the target neuron (`output_rate(x)`) and the rate of the excitatory Poisson generator (`r_ex`). The lower and upper bounds define the search interval for the zero crossing. The fourth argument, `xtol`, is the desired absolute precision of the zero crossing (here: 0.01).
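To see this interface in isolation, here is a toy root-finding example unrelated to NEST:

```
from scipy.optimize import bisect

# find the zero crossing of f(x) = x^2 - 4 inside [0, 5]
root = bisect(lambda x: x**2 - 4, 0.0, 5.0, xtol=0.01)
print(root)  # close to 2, within the requested tolerance
```

In our simulation, the expensive `output_rate(x) - r_ex` plays the role of `x**2 - 4`, and each function evaluation is a full 25,000 ms network simulation.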

`bisect` repeatedly halves the search interval, calling `output_rate` with the current guess for the inhibitory rate and obtaining the resulting firing rate of the target neuron. Once the firing rate of the target neuron is close enough to the desired rate of the excitatory population, the zero crossing is found and the optimal rate for the inhibitory population is printed:

```
Inhibitory rate estimate: 15.00 Hz
-> Neuron rate: 347.56 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 25.00 Hz
-> Neuron rate: 0.04 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.00 Hz
-> Neuron rate: 35.80 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 22.50 Hz
-> Neuron rate: 0.00 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 21.25 Hz
-> Neuron rate: 0.80 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.62 Hz
-> Neuron rate: 8.52 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.94 Hz
-> Neuron rate: 3.24 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.78 Hz
-> Neuron rate: 4.96 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.70 Hz
-> Neuron rate: 6.52 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.74 Hz
-> Neuron rate: 5.76 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.76 Hz
-> Neuron rate: 5.12 Hz (goal: 5.00 Hz)
Inhibitory rate estimate: 20.77 Hz
-> Neuron rate: 5.20 Hz (goal: 5.00 Hz)
Optimal rate for the inhibitory population: 20.77 Hz
```

The optimal rate for the inhibitory population is 20.77 Hz.

As a last step, we extract the recorded data from the multimeter and plot the corresponding membrane potential of the neuron:

```
# extract recorded data from the multimeter and plot it:
recorded_events = multimeter.get()
recorded_V = recorded_events["events"]["V_m"]
time = recorded_events["events"]["times"]
spikes = spikerecorder.get("events")
senders = spikes["senders"]
plt.figure(figsize=(7, 5))
plt.plot(time, recorded_V, label="membrane potential")
plt.xlabel("time (ms)")
plt.ylabel("membrane potential (mV)")
plt.title(f"Membrane potential of a {neuron.get('model')} neuron")
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["bottom"].set_visible(False)
plt.gca().spines["left"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.legend(loc="lower left")
plt.tight_layout()
plt.show()
```

The plot represents the membrane potential of the modelled `iaf_psc_alpha` neuron over a total recorded time of 300,000 ms: the voltmeter and multimeter keep recording across the repeated `nest.Simulate` calls of the bisection search, so the twelve runs of 25,000 ms each are concatenated. The membrane potential fluctuates between about -50 mV and roughly -350 mV. These fluctuations are primarily driven by the balance of excitatory and inhibitory synaptic inputs modeled through the Poisson spike trains. Sharp drops in membrane potential (hyperpolarizations) can be observed, followed by gradual recoveries (depolarizations); these indicate that the neuron receives strong inhibitory inputs followed by periods of reduced input, allowing the membrane potential to return towards a baseline. There are periods where the membrane potential appears relatively stable, especially around 0 to 25,000 ms and from around 100,000 ms until nearly 300,000 ms. During these phases, the balance between inhibitory and excitatory inputs likely reaches a temporary equilibrium.

NEST is very versatile and can be used to study the responses of single neurons or complex neural networks. In the presented NEST tutorial “Balanced neuron example”ꜛ, we simulated a neuron driven by an inhibitory and excitatory population of neurons firing Poisson spike trains. By applying an iterative approach, we found the optimal rate for the inhibitory population that drives the single neuron to fire at the same rate as the excitatory population.

The iterative adjustment process demonstrates the precision NEST offers for neural simulations. This method provides valuable insights into neuronal behavior and can be extended to more complex models and networks.

The modified code used in this blog post is available in this Github repositoryꜛ (`neuron_with_population_inputs.py`). Feel free to modify and expand upon it, and share your insights.

- NEST tutorial “Balanced neuron example”ꜛ
- Jochen Martin Eppler, Moritz Helias, Eilif Muller, Markus Diesmann, Marc-Oliver Gewaltig, *PyNEST: A convenient interface to the NEST simulator*, 2009, Front. Neuroinform., DOI: 10.3389/neuro.11.012.2008ꜛ