For a basic introduction to NEST, please refer to the previous post.

Before we start with the simulation, we will first take a general look at how to set up populations of neurons in NEST.

NEST’s `nest.Create()` function receives four arguments: the model name, the number of neurons to create (default: 1), a dictionary of parameters (optional), and a list of positions (optional):

```
nest.Create(model, n=1, params=None, positions=None)
```

The `model` can be any model supported by NEST. The `params` dictionary contains the parameters of the neurons to be created, which depend on the chosen model. If no parameters are provided, NEST will use the default parameters of the selected model. With the `positions` argument, spatial positions of the neurons can be defined. If no positions are provided, the neurons have no spatial attachment.

Let’s take a look at some examples. We first create a population of 100 `iaf_psc_alpha` neurons with a constant external input current of 200 pA and a membrane time constant of 20 ms. The parameters are provided as a dictionary:

```
ndict = {"I_e": 200.0, "tau_m": 20.0}
neuronpop = nest.Create("iaf_psc_alpha", 100, params=ndict)
```

A list of all parameters of a specific model can be obtained using the `nest.GetDefaults()` function:

```
nest.GetDefaults(neuronpop.model)
```

Custom model parameter values can also be set *before* creating the population of neurons, which makes simulations more flexible. To do so, we use the `nest.SetDefaults()` function:

```
ndict = {"I_e": 200.0, "tau_m": 20.0}
nest.SetDefaults("iaf_psc_alpha", ndict)
neuronpop1 = nest.Create("iaf_psc_alpha", 100)
neuronpop2 = nest.Create("iaf_psc_alpha", 100)
neuronpop3 = nest.Create("iaf_psc_alpha", 100)
```

If we want a variant of the `iaf_psc_alpha` model with a different external input current, we can copy the model with `nest.CopyModel()` and set the new parameters accordingly:

```
edict = {"I_e": 200.0, "tau_m": 20.0}
nest.CopyModel("iaf_psc_alpha", "exc_iaf_psc_alpha")
nest.SetDefaults("exc_iaf_psc_alpha", edict)
```

or in one step:

```
idict = {"I_e": 300.0}
nest.CopyModel("iaf_psc_alpha", "inh_iaf_psc_alpha", params=idict)
```

The new models are added to NEST’s model list (see `nest.Models()`) until you reset the kernel. The copied models can now be used to create different populations of neurons:

```
epop1 = nest.Create("exc_iaf_psc_alpha", 100)
epop2 = nest.Create("exc_iaf_psc_alpha", 100)
ipop1 = nest.Create("inh_iaf_psc_alpha", 30)
ipop2 = nest.Create("inh_iaf_psc_alpha", 30)
```

It is also possible to assign individual parameter values to each neuron in a population. To do so, we provide a list of values for each parameter that should differ per neuron. If we want to assign individual values for only some parameters, NEST happily accepts such an inhomogeneous set of parameters:

```
parameter_dict = {"I_e": [200.0, 150.0], "tau_m": 20.0, "V_m": [-77.0, -66.0]}
pop3 = nest.Create("iaf_psc_alpha", 2, params=parameter_dict)
print(pop3.get(["I_e", "tau_m", "V_m"]))
```

The two individual values for `I_e` as well as for `V_m` are assigned to the two neurons in the population. The single value for `tau_m` is applied to both neurons.
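To see what this broadcasting amounts to, here is a plain-Python sketch of the rule NEST applies (the helper `broadcast_params` is hypothetical, purely for illustration — PyNEST does this internally): scalar values are repeated for every neuron, while list values must match the population size and are distributed element-wise.

```python
def broadcast_params(params, n):
    """Sketch: expand a mixed parameter dict into one dict per neuron."""
    out = [{} for _ in range(n)]
    for key, value in params.items():
        if isinstance(value, (list, tuple)):
            if len(value) != n:
                raise ValueError(f"{key}: expected {n} values, got {len(value)}")
            for i, v in enumerate(value):
                out[i][key] = v  # element-wise: one value per neuron
        else:
            for i in range(n):
                out[i][key] = value  # scalar: broadcast to all neurons
    return out

per_neuron = broadcast_params({"I_e": [200.0, 150.0], "tau_m": 20.0}, 2)
print(per_neuron)
# [{'I_e': 200.0, 'tau_m': 20.0}, {'I_e': 150.0, 'tau_m': 20.0}]
```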

If you need to randomize the parameters of your neurons, you can use list comprehensions to create a list of random values. For instance, to randomize the membrane potential of a neuron population between -70 mV and -55 mV:

```
import numpy
Vth = -55.0
Vrest = -70.0
dVms = {"V_m": [Vrest + (Vth - Vrest) * numpy.random.rand() for x in range(len(epop1))]}
epop1.set(dVms)
```

Alternatively, you can also use NEST’s random parameters and distributions. NEST has a number of such parameters which can be combined and used with some mathematical functions provided by NEST:

```
epop1.set({"V_m": Vrest + nest.random.uniform(0.0, Vth-Vrest)})
```

To connect two populations, we can simply use NEST’s `nest.Connect()` function that we have extensively discussed in the previous post. For instance, to connect the `epop1` population to the `ipop1` population with a fixed indegree of 10, simply use:

```
conn_dict_epop_to_ipop = {"rule": "fixed_indegree", "indegree": 10}
nest.Connect(epop1, ipop1, conn_dict_epop_to_ipop)
```

Similarly, to connect the two populations to a multimeter:

```
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
nest.Connect(multimeter, epop1)
nest.Connect(multimeter, ipop1)
```

or

```
nest.Connect(epop1 + ipop1, multimeter)
```

Please read what to consider when connecting multiple neurons to a single recording device.

Now, let’s set up a SNN simulation with two distinct populations of Izhikevich neurons.

First, we reuse the Izhikevich neuron model definitions from the previous example:

```
import os
import matplotlib.pyplot as plt
import numpy as np
import nest
# set the verbosity of the NEST simulator:
nest.set_verbosity("M_WARNING")
# reset the kernel for safety:
nest.ResetKernel()
# define sets of typical parameters of the Izhikevich neuron model:
p_RS = [0.02, 0.2, -65, 8, "regular spiking (RS)"] # regular spiking settings for excitatory neurons (RS)
p_IB = [0.02, 0.2, -55, 4, "intrinsically bursting (IB)"] # intrinsically bursting (IB)
p_CH = [0.02, 0.2, -51, 2, "chattering (CH)"] # chattering (CH)
p_FS = [0.1, 0.2, -65, 2, "fast spiking (FS)"] # fast spiking (FS)
p_TC = [0.02, 0.25, -65, 0.05, "thalamic-cortical (TC)"] # thalamic-cortical (TC) (doesn't work well)
p_LTS = [0.02, 0.25, -65, 2, "low-threshold spiking (LTS)"] # low-threshold spiking (LTS)
p_RZ = [0.1, 0.26, -65, 2, "resonator (RZ)"] # resonator (RZ)
# copy the Izhikevich neuron model and set the parameters for the different neuron types:
nest.CopyModel("izhikevich", "izhikevich_RS", {"a": p_RS[0], "b": p_RS[1], "c": p_RS[2], "d": p_RS[3]})
nest.CopyModel("izhikevich", "izhikevich_IB", {"a": p_IB[0], "b": p_IB[1], "c": p_IB[2], "d": p_IB[3]})
nest.CopyModel("izhikevich", "izhikevich_CH", {"a": p_CH[0], "b": p_CH[1], "c": p_CH[2], "d": p_CH[3]})
nest.CopyModel("izhikevich", "izhikevich_FS", {"a": p_FS[0], "b": p_FS[1], "c": p_FS[2], "d": p_FS[3]})
nest.CopyModel("izhikevich", "izhikevich_TC", {"a": p_TC[0], "b": p_TC[1], "c": p_TC[2], "d": p_TC[3]})
nest.CopyModel("izhikevich", "izhikevich_LTS", {"a": p_LTS[0], "b": p_LTS[1], "c": p_LTS[2], "d": p_LTS[3]})
nest.CopyModel("izhikevich", "izhikevich_RZ", {"a": p_RZ[0], "b": p_RZ[1], "c": p_RZ[2], "d": p_RZ[3]})
```

Next, we create two neuron populations consisting of 800 regular spiking (RS) and 200 chattering (CH) Izhikevich neurons. We declare the neurons of the first population as *excitatory* and the neurons of the second population as *inhibitory*. We also create a multimeter and a spike recorder to monitor the membrane potential and record the spikes, respectively:

```
# set up a two-neuron-type network according to Izhikevich's original paper:
Ne = 800 # Number of excitatory neurons
Ni = 200 # Number of inhibitory neurons
T = 1000.0 # Simulation time (ms)
population_e = nest.Create("izhikevich_RS", n=Ne)
population_i = nest.Create("izhikevich_CH", n=Ni)
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
spikerecorder = nest.Create("spike_recorder")
```

As stimulation input, we use a Gaussian noise generator to inject random currents into the neurons:

```
# ensure that the models' default input currents are zero:
I_e = 0.0 # [pA]
population_e.I_e = I_e
population_i.I_e = I_e
# set up the Gaussian-noisy current input:
noise = nest.Create("noise_generator")
noise.mean = 10.0 # mean value of the noise current [pA]
noise.std = 2.0 # standard deviation of the noise current [pA]
noise.std_mod = 0.0 # modulation of the standard deviation of the noise current (pA)
noise.phase=0 # phase of sine modulation (0–360 deg)
```

The commands above set the average noise current to 10 pA with a standard deviation of 2 pA. The `noise.std_mod` parameter controls the modulation of the standard deviation of the noise current, and the `noise.phase` parameter sets the phase of the sine modulation. For more details on the noise generator model, please refer to the NEST documentation.
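To get an intuition for what this device feeds into each neuron, here is a rough numpy sketch (an assumption for illustration: a fresh Gaussian value with the given mean and standard deviation is drawn at every update interval, here 1 ms; the actual `noise_generator` additionally supports the sinusoidal modulation parameters shown above):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
mean, std = 10.0, 2.0  # [pA], as set on the noise generator above
dt = 1.0               # assumed update interval of the generator [ms]
T = 1000.0             # duration [ms]
# one independent Gaussian sample per update interval:
I_noise = mean + std * rng.standard_normal(int(T / dt))
print(f"mean: {I_noise.mean():.2f} pA, std: {I_noise.std():.2f} pA")
```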

Next, we define the connection rules between the two neuron populations. We set up random fixed-indegree connectivity, deriving the within-population indegrees from a connection probability of 10% for the excitatory population and 60% for the inhibitory population (the cross-population indegrees are fixed at 70):

```
# define connectivity based on a percentage:
conn_prob_ex = 0.10 # connectivity probability of population E
conn_prob_in = 0.60 # connectivity probability of population I
# compute the within-population indegrees from the probabilities (cross-population indegrees are fixed):
num_conn_ex_to_ex = int(Ne * conn_prob_ex)
num_conn_in_to_ex = 70
num_conn_in_to_in = int(Ni * conn_prob_in)
num_conn_ex_to_in = 70
# create connection dictionaries for fixed indegree:
conn_dict_ex_to_ex = {"rule": "fixed_indegree", "indegree": num_conn_ex_to_ex}
conn_dict_ex_to_in = {"rule": "fixed_indegree", "indegree": num_conn_ex_to_in}
conn_dict_in_to_ex = {"rule": "fixed_indegree", "indegree": num_conn_in_to_ex}
conn_dict_in_to_in = {"rule": "fixed_indegree", "indegree": num_conn_in_to_in}
```

We also define the synaptic weights and delays for the excitatory and inhibitory connections:

```
# synaptic weights and delays:
d = 1.0 # synaptic delay [ms]
syn_dict_ex = {"delay": d, "weight": 0.5}
syn_dict_in = {"delay": d, "weight": -1.0}
```

Finally, we connect the neurons as well as the stimulation and recording devices:

```
# connect neurons:
nest.Connect(population_e, population_e, conn_dict_ex_to_ex, syn_dict_ex) # E to E
nest.Connect(population_e, population_i, conn_dict_ex_to_in, syn_dict_ex) # E to I
nest.Connect(population_i, population_i, conn_dict_in_to_in, syn_dict_in) # I to I
nest.Connect(population_i, population_e, conn_dict_in_to_ex, syn_dict_in) # I to E
# connect noise to the populations:
nest.Connect(noise, population_e, syn_spec={'weight': 1.0})
nest.Connect(noise, population_i, syn_spec={'weight': 1.0})
# connect the multimeter to the excitatory population and to the inhibitory population:
nest.Connect(multimeter, population_e + population_i)
nest.Connect(population_e + population_i, spikerecorder)
```

We are now ready to run the simulation:

```
# run a simulation:
nest.Simulate(T)
```

To analyze the simulation results, we extract the recorded membrane potentials and spike times from the multimeter and spike recorder, respectively:

```
spike_events = nest.GetStatus(spikerecorder, "events")[0]
spike_times = spike_events["times"]
neuron_ids = spike_events["senders"]
# combine the spike times and neuron IDs into a single array and sort by time:
spike_data = np.vstack((spike_times, neuron_ids)).T
spike_data_sorted = spike_data[spike_data[:, 0].argsort()]
# Extract sorted spike times and neuron IDs:
sorted_spike_times = spike_data_sorted[:, 0]
sorted_neuron_ids = spike_data_sorted[:, 1]
```

Finally, we plot the spike times of the neurons in a spike raster plot,

```
# plotting spike times:
plt.figure(figsize=(6, 6))
plt.scatter(sorted_spike_times, sorted_neuron_ids, s=0.5, color='black')
plt.title("Spike times")
plt.xlabel("Time (ms)")
plt.ylabel("Neuron ID")
plt.axhline(y=Ne, color='k', linestyle='-', linewidth=1)
plt.text(0.7, 0.76, population_e.get('model')[0],
         color='k', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes,
         bbox=dict(facecolor='white', alpha=1))
plt.text(0.7, 0.84, population_i.get('model')[0],
         color='k', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes,
         bbox=dict(facecolor='white', alpha=1))
plt.xlim([0, T])
plt.ylim([0, Ne+Ni])
plt.yticks(np.arange(0, Ne+Ni+1, 200))
plt.tight_layout()
plt.show()
```

as well as a histogram of the spiking rate vs. time:

```
# plot histogram of spiking rate [Hz] vs. time [ms]:
hist_binwidth = 5.0
t_bins = np.arange(np.amin(sorted_spike_times), np.amax(sorted_spike_times), hist_binwidth)
n, bins = np.histogram(sorted_spike_times, bins=t_bins)
heights = 1000 * n / (hist_binwidth * (Ne+Ni)) # factor of 1000 is used to convert ms to s
plt.figure(figsize=(6, 2))
plt.bar(t_bins[:-1], heights, width=hist_binwidth, color='blue')
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["bottom"].set_visible(False)
plt.gca().spines["left"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.title(f"histogram of spiking rate vs. time")
plt.ylabel("firing rate [Hz]")
plt.xlabel("time [ms]")
plt.xlim([0, T])
plt.tight_layout()
plt.show()
```

We could also create both plots using NEST’s built-in plotting functions:

```
nest.raster_plot.from_device(spikerecorder, hist=True, hist_binwidth=5.0)
plt.show()
```

The results of the simulation running with the parameters defined above are shown in the following figures:

The plots show how well the neurons in both populations synchronize their spiking activity. We also see the oscillatory spiking behavior emerging among the Izhikevich neurons. The regular spiking (RS) neurons show a more regular spiking pattern, while the spiking pattern of the chattering (CH) neurons is more irregular.

In the following, we briefly study the influence of various simulation parameters on the network dynamics.

To assess the influence of the chosen stimulation source on the network dynamics, we run the simulation with a Poisson-noisy current input instead of the Gaussian-noisy current input:

```
# set up some Poisson-noisy current input:
noise = nest.Create("poisson_generator")
noise.rate = 5000.0 # [Hz]
```

You can see that the spiking activity of the neurons now contains more random spikes. However, the overall synchronization of the spiking activity is still maintained (for the regular spiking neurons).

A small change of the connection probability can have a significant impact on the network dynamics. For instance, changing the connection probability of the excitatory population to 20%,

```
conn_prob_ex = 0.20 # instead of 10%
conn_prob_in = 0.60
```

leads to a more synchronized spiking activity of the regular spiking neurons as well as the chattering neurons:

Increasing the connection probability of the inhibitory population to 200% (an indegree larger than the population itself, which works because multapses are allowed by default),

```
conn_prob_ex = 0.10
conn_prob_in = 2.0 # instead of 0.60
```

leads to a more synchronized spiking activity of the chattering neurons:

The type of connection rule itself can also have a significant impact on the network dynamics. For instance, changing the connection rule from a random fixed indegree to a random fixed outdegree connectivity,

```
conn_dict_ex_to_ex = {"rule": "fixed_outdegree", "outdegree": num_conn_ex_to_ex}
conn_dict_ex_to_in = {"rule": "fixed_outdegree", "outdegree": num_conn_ex_to_in}
conn_dict_in_to_ex = {"rule": "fixed_outdegree", "outdegree": num_conn_in_to_ex}
conn_dict_in_to_in = {"rule": "fixed_outdegree", "outdegree": num_conn_in_to_in}
```

will impact the synchronization of the spiking activity of the neurons:

The synaptic weights can also have a significant impact on the network dynamics. For instance, changing the synaptic weight of the inhibitory connections to -2.0,

```
# synaptic weights and delays:
d = 1.0 # synaptic delay [ms]
syn_dict_ex = {"delay": d, "weight": 0.5}
syn_dict_in = {"delay": d, "weight": -2.0} # instead of -1.0
```

will alter the synchronization of the spiking activity of the neurons and increase the synchronization of the chattering neurons:

NEST is a powerful and flexible simulator that allows for the simulation of large-scale, multi-population spiking neural networks with ease. In this post, we have explored how to set up a simple SNN simulation consisting of two distinct populations of Izhikevich neurons. We have shown how to create parameterized populations of neurons, randomize the parameters of the neurons, and connect the populations of neurons. We have also discussed the influence of various simulation parameters such as the stimulation source, connection probability, connection rule, and synaptic weights on the network dynamics. The results of the simulations show how well the neurons in both populations synchronize their spiking activity and how the different neuron types exhibit distinct spiking patterns.

The complete code used in this blog post is available in this GitHub repository. Feel free to modify and expand upon it, and share your insights.

- Gewaltig, M.-O., & Diesmann, M., *NEST (NEural Simulation Tool)*, 2007, Scholarpedia, 2(4), 1430, doi: 10.4249/scholarpedia.1430
- Documentation of the NEST simulator
- PyNEST API listing
- List of all supported neuron and synapse models in NEST
- Connection concepts in NEST
- NEST Tutorial “Part 2: Populations of neurons”

Connections in NEST are created via the

```
nest.Connect(pre, post, conn_spec, syn_spec)
```

command, where `pre` and `post` are the pre- and post-synaptic nodes (e.g., single neurons or neuron populations), respectively. The `conn_spec` and `syn_spec` arguments define the connection rule and the synapse model, respectively. If the latter two are omitted, the default `all_to_all` connection rule and the default `static_synapse` are used. A static synapse is a synapse whose strength (or weight) does not evolve over time and remains at a defined constant value.

`conn_spec` expects a connection rule alongside additional rule-specific parameters (if any). The following connection rules are available:

- `all_to_all` (default)
- `one_to_one`
- `pairwise_bernoulli` (parameter: `p`)
- `symmetric_pairwise_bernoulli` (parameter: `p`)
- `pairwise_poisson` (parameter: `pairwise_avg_num_conns`)
- `fixed_indegree` (parameter: `indegree`)
- `fixed_outdegree` (parameter: `outdegree`)
- `fixed_total_number` (parameter: `N`)

For instance, a connection using the `pairwise_bernoulli` rule would look like this:

```
n = 5
m = 5
p = 0.5
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'pairwise_bernoulli', 'p': p}
nest.Connect(S, T, conn_spec)
```

The chosen synapse model (`syn_spec`) along with its parameters, such as the synaptic weight, delay, and model-specific parameters, controls the strength of the connection between the pre- and post-synaptic neurons. The synaptic weight determines how strongly a spike from a presynaptic neuron affects the postsynaptic neuron. Thus, it quantifies the efficacy of the synapse in transmitting the signal. An exemplary `syn_spec` definition looks like this:

```
n = 10
neuron_dict = {'tau_syn': [0.3, 1.5]}
A = nest.Create('iaf_psc_exp_multisynapse', n, neuron_dict)
B = nest.Create('iaf_psc_exp_multisynapse', n, neuron_dict)
syn_spec_dict = {'synapse_model': 'static_synapse',
                 'weight': 2.5,
                 'delay': 0.5,
                 'receptor_type': 1}
nest.Connect(A, B, syn_spec=syn_spec_dict)
```

Further details on synapse specification can be found in the NEST documentationꜛ.

In the following, we will discuss the different connection concepts in more detail. For consistency, we will call a single neuron a “node” and a collection of nodes a “node collection” (or just collection). Pre- and postsynaptic node collections are further referred to as $S$ and $T$ for the source and target node collections, respectively. A single connection between two nodes is called an *edge*, while a group of edges that connect groups of nodes with similar properties (i.e., populations) is called a *projection*. In these terms, the `nest.Connect()` function establishes a projection between the source and target node collections.

The `conn_spec` argument can receive two additional parameters, `allow_autapses` and `allow_multapses`. Autapses are self-connections of a node, and multapses are multiple connections between the same pair of nodes. `allow_autapses` and `allow_multapses` are both set to `True` by default.
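The distinction is easiest to see in a small sampling sketch (plain Python, not PyNEST — just the concept of drawing presynaptic partners for one target node):

```python
import random

random.seed(0)
sources = list(range(1, 11))  # candidate presynaptic node IDs 1..10
target = 5                    # the target node is itself among the candidates
indegree = 4

# multapses allowed: draw with replacement, so a source may be picked twice
with_multapses = [random.choice(sources) for _ in range(indegree)]

# multapses forbidden: draw without replacement, all sources are distinct
without_multapses = random.sample(sources, indegree)

# autapses forbidden: the target itself is excluded from the candidates
without_autapses = random.sample([s for s in sources if s != target], indegree)

print(with_multapses, without_multapses, without_autapses)
```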

The `all_to_all` connection rule connects all pre-synaptic nodes (source) to all post-synaptic nodes (target). This is the default connection rule in NEST and requires no additional parameters.

Example:

```
n = 5
m = 5
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
nest.Connect(S, T, 'all_to_all')
nest.Connect(S, T) # this is equivalent
```

To explicitly connect source and target nodes, the respective node IDs from the node collections must be extracted and then connected using the one-to-one connection rule. For instance, in the following example we connect the 3rd, 4th, and 1st source nodes to the 8th, 6th, and 9th target nodes, respectively:

```
n = 5
m = 5
S = nest.Create('iaf_psc_alpha', n) # node ids: 1..5
T = nest.Create('iaf_psc_alpha', m) # node ids: 6..10
# source-target pairs: (3,8), (4,6), (1,9)
nest.Connect([3,4,1], [8,6,9], 'one_to_one')
```

The `one_to_one` connection rule connects each pre-synaptic node to exactly one post-synaptic node, i.e., the $i$-th source node in the source node collection $S$ is connected to the $i$-th target node in the target node collection $T$. The number of pre- and post-synaptic nodes must be equal. The `one_to_one` rule does not require any additional parameters.

Example:

```
n = 5
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('spike_recorder', n)
nest.Connect(S, T, 'one_to_one')
```

**Deterministic connection rules**: Both `all_to_all` and `one_to_one` are deterministic connection rules, i.e., precisely defined sets of connections are established between the source and target nodes without any randomness or variability across network realizations.

**Probabilistic connection rules**: In contrast, probabilistic connection rules such as `pairwise_bernoulli` or `pairwise_poisson` establish connections between the source and target nodes based on a probabilistic rule. This leads to variability in the network structure across different network realizations. However, such connectivity leads to specific expectation values of network characteristics, such as degree distributions or correlation structure.
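For instance, under `pairwise_bernoulli` each of the $n \times m$ possible edges exists with probability $p$, so the expected indegree of every target node is $n \cdot p$. A small numpy sketch (independent of NEST) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n, m, p = 200, 200, 0.1
# adjacency[i, j] is True if an edge from source i to target j exists:
adjacency = rng.random((n, m)) < p
indegrees = adjacency.sum(axis=0)  # incoming edges per target node
print(f"mean indegree: {indegrees.mean():.1f} (expected: {n * p:.1f})")
```

The mean indegree is close to $n \cdot p = 20$ but varies between realizations, unlike with `fixed_indegree`.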

The `pairwise_bernoulli` connection rule establishes connections between the source and target nodes based on a Bernoulli process: the probability `p` of establishing a connection between any pair of nodes is given as an additional parameter and must be set. Multapses cannot be established with this rule, as each possible edge is visited only once, independent of setting `allow_multapses` to `True`.

Example:

```
n = 5
m = 5
p = 0.5
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'pairwise_bernoulli', 'p': p}
nest.Connect(S, T, conn_spec)
```

The `symmetric_pairwise_bernoulli` connection rule is similar to the `pairwise_bernoulli` rule, but it ensures that the connection matrix is symmetric. This means that if node $i$ is connected to node $j$, then node $j$ is also connected to node $i$ (two connections in total). To use this rule, `allow_autapses` must be set to `False` and the `make_symmetric` argument must be set to `True`.

Example:

```
n = 10
m = 12
p = 0.2
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'symmetric_pairwise_bernoulli', 'p': p,
             'allow_autapses': False, 'make_symmetric': True}
nest.Connect(S, T, conn_spec)
```

The `pairwise_poisson` connection rule establishes connections between the source and target nodes based on a Poisson distribution. The average number of connections per pair, `pairwise_avg_num_conns`, is given as an additional parameter. Multiple connections between the same pair of nodes are possible, even for a small average number of connections. Thus, multapses can be established, and `allow_multapses` cannot be set to `False`.

Example:

```
n = 10
m = 12
p_avg_num_conns = 0.2 # can be greater than 1
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'pairwise_poisson',
             'pairwise_avg_num_conns': p_avg_num_conns}
nest.Connect(S, T, conn_spec)
```
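Since the number of edges per pair is Poisson-distributed, even a small `pairwise_avg_num_conns` leaves some pairs with two or more edges, i.e., multapses. A quick numpy check of this claim (independent of NEST):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
avg = 0.2  # corresponds to pairwise_avg_num_conns above
# number of edges drawn for each of 10,000 source-target pairs:
counts = rng.poisson(avg, size=10_000)
print("pairs with multapses (>= 2 edges):", int((counts >= 2).sum()))
```

Analytically, a pair has $\geq 2$ edges with probability $1 - e^{-0.2}(1 + 0.2) \approx 1.75\%$, so roughly 175 of the 10,000 pairs end up with a multapse.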

The `fixed_total_number` connection rule randomly establishes a fixed total number of connections between the source and target nodes. The number of connections `N` is given as an additional parameter and must be specified. While multapses can be established with this rule, you can also disable them by setting `allow_multapses` to `False`.

Example:

```
n = 5
m = 5
N = 10
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'fixed_total_number', 'N': N}
nest.Connect(S, T, conn_spec)
```

The `fixed_indegree` connection rule randomly establishes a fixed number of incoming connections to each target node. The number of incoming connections `indegree` is given as an additional parameter and must be specified. While multapses can be established with this rule, you can also disable them by setting `allow_multapses` to `False`.

Example:

```
n = 5
m = 5
N = 2
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'fixed_indegree', 'indegree': N}
nest.Connect(S, T, conn_spec)
```

The `fixed_outdegree` connection rule randomly establishes a fixed number of outgoing connections from each source node. The number of outgoing connections `outdegree` is given as an additional parameter and must be specified. While multapses can be established with this rule, you can also disable them by setting `allow_multapses` to `False`.

Example:

```
n = 5
m = 5
N = 2
S = nest.Create('iaf_psc_alpha', n)
T = nest.Create('iaf_psc_alpha', m)
conn_spec = {'rule': 'fixed_outdegree', 'outdegree': N}
nest.Connect(S, T, conn_spec)
```

The `tripartite_bernoulli_with_pool` rule connects three node collections: for each possible pair of nodes from a source node collection $S$ and a target node collection $T$, a primary connection is created with probability `p_primary`. For each primary connection, a third-party connection pair involving a node from a third node collection, e.g., an astrocyte population $A$, is created with the conditional probability `p_third_if_primary`. This connection pair includes a connection from the $S$ node to the $A$ node, and a connection from the $A$ node to the $T$ node. The $A$ node to connect to is chosen at random from a pool, a subset of the nodes in $A$. By default, this pool is all of $A$.

The `pool_type` parameter controls the pool formation and can be `random` (default) or `block`. The `pool_size` parameter must be between 1 and the size of $A$ (the default). For random pools, `pool_size` nodes from $A$ are chosen randomly without replacement for each node from $T$.

For block pools, two variants exist. Let `N_T` and `N_A` be the number of nodes in $T$ and $A$, respectively. If `pool_size == 1`, the first `N_T/N_A` nodes in $T$ are assigned the first node in $A$ as their pool, the second `N_T/N_A` nodes in $T$ the second node in $A$, and so forth; in this case, `N_T` must be a multiple of `N_A`. If `pool_size > 1`, the first `pool_size` elements of $A$ are the pool for the first node in $T$, the second `pool_size` elements of $A$ the pool for the second node in $T$, and so forth; in this case, `N_T * pool_size == N_A` is required.

The corresponding code snippet for each of the presented cases above is:

```
# left plot: random pool
N_S = 6
N_T = 6
N_A = 3
p_primary = 0.2
p_third_if_primary = 1.0
pool_type = 'random'
pool_size = 2
S = nest.Create('aeif_cond_alpha_astro', N_S)
T = nest.Create('aeif_cond_alpha_astro', N_T)
A = nest.Create('astrocyte_lr_1994', N_A)
conn_spec = {'rule': 'tripartite_bernoulli_with_pool',
             'p_primary': p_primary,
             'p_third_if_primary': p_third_if_primary,
             'pool_type': pool_type,
             'pool_size': pool_size}
syn_specs = {'third_out': 'sic_connection'}
nest.TripartiteConnect(S, T, A, conn_spec, syn_specs)
```

```
# middle plot: block pool, pool_size = 1
N_S = 6
N_T = 6
N_A = 3
p_primary = 0.2
p_third_if_primary = 1.0
pool_type = 'block'
pool_size = 1
S = nest.Create('aeif_cond_alpha_astro', N_S)
T = nest.Create('aeif_cond_alpha_astro', N_T)
A = nest.Create('astrocyte_lr_1994', N_A)
conn_spec = {'rule': 'tripartite_bernoulli_with_pool',
             'p_primary': p_primary,
             'p_third_if_primary': p_third_if_primary,
             'pool_type': pool_type,
             'pool_size': pool_size}
syn_specs = {'third_out': 'sic_connection'}
nest.TripartiteConnect(S, T, A, conn_spec, syn_specs)
```

```
# right plot: block pool, pool_size > 1
N_S = 6
N_T = 3
N_A = 6
p_primary = 0.2
p_third_if_primary = 1.0
pool_type = 'block'
pool_size = 2
S = nest.Create('aeif_cond_alpha_astro', N_S)
T = nest.Create('aeif_cond_alpha_astro', N_T)
A = nest.Create('astrocyte_lr_1994', N_A)
conn_spec = {'rule': 'tripartite_bernoulli_with_pool',
             'p_primary': p_primary,
             'p_third_if_primary': p_third_if_primary,
             'pool_type': pool_type,
             'pool_size': pool_size}
syn_specs = {'third_out': 'sic_connection'}
nest.TripartiteConnect(S, T, A, conn_spec, syn_specs)
```
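To make the block-pool assignment concrete, here is a plain-Python sketch of the two variants (`block_pools` is a hypothetical helper for illustration, not part of PyNEST):

```python
def block_pools(T_ids, A_ids, pool_size):
    """Sketch: map each target node ID to its block pool of A node IDs."""
    if pool_size == 1:
        # the first N_T/N_A targets share the first A node, and so on
        assert len(T_ids) % len(A_ids) == 0, "N_T must be a multiple of N_A"
        per_pool = len(T_ids) // len(A_ids)
        return {t: [A_ids[i // per_pool]] for i, t in enumerate(T_ids)}
    # pool_size > 1: consecutive, disjoint slices of A, one per target
    assert len(T_ids) * pool_size == len(A_ids), "N_T * pool_size must equal N_A"
    return {t: A_ids[i * pool_size:(i + 1) * pool_size] for i, t in enumerate(T_ids)}

print(block_pools([1, 2, 3, 4], [5, 6], 1))   # {1: [5], 2: [5], 3: [6], 4: [6]}
print(block_pools([1, 2], [3, 4, 5, 6], 2))   # {1: [3, 4], 2: [5, 6]}
```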

`sic_connection` is a synapse model that is used to connect third-party nodes such as astrocytes to the source and target nodes.

NEST allows for the creation of complex neural networks by providing a variety of connection rules. These rules can be used to establish connections between different types of nodes, such as neurons and astrocytes. The connection rules can be deterministic or probabilistic, and they can be used to create different types of network structures. By using the appropriate connection rules, you can create networks that mimic the connectivity patterns found in the brain and study the dynamics of these networks.

In the next post, we will create a simple neural network using the connection concepts discussed here and explore the dynamics of the network using the NEST simulator.

- Gewaltig, M.-O., & Diesmann, M., *NEST (NEural Simulation Tool)*, 2007, Scholarpedia, 2(4), 1430, doi: 10.4249/scholarpedia.1430
- Documentation of the NEST simulator
- PyNEST API listing
- List of all supported neuron and synapse models in NEST
- Connection concepts in NEST
- Synapse specification in NEST
- Senk, Kriener, Djurfeldt, Voges, Jiang, Schüttler, Gramelsberger, Diesmann, Plesser, van Albada, *Connectivity concepts in neuronal network modeling*, 2022, PLOS Computational Biology, Vol. 18, Issue 9, e1010086, doi: 10.1371/journal.pcbi.1010086
- Djurfeldt, *The Connection-set Algebra—A Novel Formalism for the Representation of Connectivity Structure in Neuronal Network Models*, 2012, Neuroinformatics, Vol. 10, Issue 3, pages 287-304, doi: 10.1007/s12021-012-9146-1
- Daniel Hjertholm, *Statistical tests for connection algorithms for structured neural networks*, 2013, Master’s thesis, Norwegian University of Life Sciences, Ås, Norway. PDF

NEST can be easily installed via `conda`:

```
conda create -n nest -y python=3.11
conda activate nest
conda install -c conda-forge mamba
mamba install -c conda-forge ipykernel matplotlib numpy pandas scipy nest-simulator
```

**Windows users**: Unfortunately, NEST is not supported on Windows. However, you can try to use the Windows Subsystem for Linux (WSL) to run NEST on your Windows machine.

In your script, simply import the `nest` module via `import nest`. It is recommended to import any additionally required modules, such as `numpy`, `matplotlib`, `scipy`, or `sklearn`, before importing `nest` to avoid any potential conflicts:

```
import numpy as np
import matplotlib.pyplot as plt
import nest
```

A typical NEST network consists of two main components: **nodes** and **connections**. A node is either a neuron, a device or even a sub-network. A connection is a directed link between two nodes. Devices are used to inject current into and stimulate neurons or to record data from them.

Let’s begin by creating our first node, a single neuron. First, choose one of the many neuron models available in NEST. For this example, we will use the `iaf_psc_alpha` neuron model, which is a simple leaky integrate-and-fire neuron with an alpha-shaped postsynaptic current. With `nest.Create(model, n=1, params=None, positions=None)`, we create a single neuron of this type:

```
neuron = nest.Create('iaf_psc_alpha')
```

**List all available models**: You can list all available neuron and synapse models in NEST by using the `nest.Models()` function. Detailed information about each model can be obtained from the corresponding model documentation.

By default, a single neuron is created unless `n` is set to a larger value. The neuron is created with a predefined set of parameters. The `params` argument can be used to create a neuron with specific parameters. For example, to create a neuron with a membrane potential of -70 mV, a spike threshold of -55 mV, and a reset potential of -70 mV, we can use:

```
neuron = nest.Create('iaf_psc_alpha', params={'V_m': -70.0, 'V_th': -55.0, 'V_reset': -70.0})
```

To review the parameters of the neuron and their currently set values, we can use the `get()` function:

```
neuron.get()
```

To retrieve specific parameters, we can specify the key of the parameter we are interested in, e.g.,

```
print(neuron.get("I_e"))
neuron.get(["V_reset", "V_th"])
```

You can change the parameters of the neuron at any time by using the `set()` function, e.g.:

```
neuron.set(I_e=376.0)
neuron.set({"V_reset": -70.0, "V_th": -55.0})
```

Another way to retrieve and change parameters is to address them directly, e.g.:

```
neuron.I_e
neuron.I_e = 376.0
```

**NEST is type sensitive**: If a parameter expects a certain type, e.g., `float`, NEST will raise an error if you try to set it to an `int`.

Recording devicesꜛ are used to collect or sample observables from the simulation such as membrane potentials, spike times, or conductances. NEST comes with a variety of recording devices, such as

- `multimeter`ꜛ: records various analog values from neurons
- `voltmeter`ꜛ: a pre-configured multimeter that records the membrane potential `V_m` of a neuron
- `spike_recorder`ꜛ: records the spiking events produced by a neuron
- `weight_recorder`ꜛ: records the synaptic weights of a connection

These devices are again created with the `nest.Create()` function, e.g.:

```
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
spikerecorder = nest.Create("spike_recorder")
```

For the `multimeter`, we need to specify the observables we want to record. In this case, we record the membrane potential `V_m`. A sampling interval (default: 1.0 ms) can be set either by providing the corresponding argument while creating the device,

```
multimeter = nest.Create("multimeter", params={'interval': 0.1, 'record_from': ['V_m', 'g_ex']})
```

or by using the `nest.SetStatus()` function:

```
multimeter = nest.Create("multimeter")
nest.SetStatus(multimeter, {'interval': 0.1, 'record_from': ['V_m', 'g_ex']})
```

Note that we also provided `record_from` in the handed-over parameter dictionary. It is very important that the set of variables to record from and the recording interval are set *before* the multimeter is connected to the neuron; these properties cannot be changed afterwards.

Recording devices can receive further parameters such as

- `n_events`: the number of events collected by the recorder, which can be read out of the `n_events` entry
- `start` and `stop`: start and stop time of the recording in ms, relative to the origin
- `record_to`: defines the recording backend (default: `memory`)

The `record_to` argument is quite interesting as it allows you to specify the recording backend, for which you can select from:

- `memory`ꜛ: writes data to memory (default)
- `ascii`ꜛ: writes data to plain text files
- `screen`ꜛ: writes data to the terminal
- `sionlib`ꜛ: writes data to a file in the SIONlib formatꜛ (an efficient binary format)
- `mpi`ꜛ: sends data with MPI

Changing the recording backend from the default (`memory`) to a file-based backend can become important when you, e.g., run large-scale simulations and want to avoid memory overflows.

Specific neuron models come with specific parameters that can be recorded. To get a list of all recordable parameters, you can use the `nest.GetDefaults()` function:

```
nest.GetDefaults(neuron.model)['recordables']
```

Now that we have created one node for the neuron and two nodes for the recording devices, we can connect them. To connect two nodes, we use the `nest.Connect()`ꜛ function, which connects a pre-node to a post-node. Before we do so, it is important to understand that there are two fundamentally different kinds of recording devices: **samplers** and **collectors**. Samplers are recording devices that actively communicate with their target nodes at given time intervals (e.g., to record membrane potentials). Collectors, in contrast, are recording devices that gather events sent to them (e.g., spikes). It matters in which order or direction you connect them to the nodes: collectors are connected to the neuron(s), while neuron(s) are connected to the samplers:

```
nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikerecorder)
```

The order specified in the `nest.Connect()` command reflects the actual flow of events: if the neuron spikes, it sends an event to the spike recorder (collector), while the multimeter (sampler) periodically sends requests to the neuron to ask for its membrane potential at that point in time.

Without further specifications, a default connection paradigm is used. However, you can specify the connection parameters, such as the synaptic weight, the delay, or the connection rule. Since this starts becoming relevant when you work with networks with more than one neuron (population), we will not go into detail here and refer to the next post, where we will discuss multi-neuron network simulations.

**Connecting neurons to each other**: Of course, you can also connect neurons to each otherꜛ. This is done in the same way as connecting a neuron to a recording device.

After setting up the network, we can finally simulate it by using the `nest.Simulate()` function. This function takes the simulation time in milliseconds as an argument:

```
nest.Simulate(1000.0)
```

And that’s it! By executing this command, NEST will simulate the network and record the defined observables for subsequent analysis.

After the simulation is finished, you can retrieve the recorded data from the recording devices via the `get()` function:

```
recorded_events = multimeter.get()
recorded_V = recorded_events["events"]["V_m"]
time = recorded_events["events"]["times"]
spikes = spikerecorder.get("events")
senders = spikes["senders"]
```

You can now plot `time` vs. `recorded_V` to visualize the membrane potential, and `spikes["times"]` vs. `senders` to visualize the spike times:

The corresponding plot commands are provided in the next subsection, where the entire simulation script is presented.

Here is the complete simulation code including all settings made above:

```
import matplotlib.pyplot as plt
import numpy as np
from pprint import pprint
import nest
# set the verbosity of the NEST simulator:
nest.set_verbosity("M_WARNING")
# reset the kernel for safety:
nest.ResetKernel()
# list all available models:
pprint(nest.Models())
# create the neuron, a spike recorder and a multimeter (all called "nodes"):
neuron = nest.Create("iaf_psc_alpha")
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
spikerecorder = nest.Create("spike_recorder")
pprint(neuron.get())
pprint(f"I_e: {neuron.get('I_e')}")
pprint(f"V_reset: {neuron.get('V_reset')}")
pprint(f"{neuron.get(['V_m', 'V_th'])}")
neuron.set({"V_reset": -70.0})
pprint(f"{neuron.get('V_reset')}")
# set a constant input current for the neuron:
I_e = 376.0 # [pA]
neuron.I_e = I_e # [pA]
pprint(f"{neuron.get('I_e')}")
# list all recordable quantities
pprint(f"recordables of {neuron.model}: {nest.GetDefaults(neuron.model)['recordables']}")
# now, connect the nodes:
nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikerecorder)
# run a simulation for 1000 ms:
nest.Simulate(1000.0)
# extract recorded data from the multimeter and plot it:
recorded_events = multimeter.get()
recorded_V = recorded_events["events"]["V_m"]
time = recorded_events["events"]["times"]
spikes = spikerecorder.get("events")
senders = spikes["senders"]
plt.figure(figsize=(8, 4))
plt.plot(time, recorded_V, label="membrane potential")
plt.plot(spikes["times"], spikes["senders"]+np.max(recorded_V), "r.", markersize=10,
label="spike events")
plt.xlabel("Time (ms)")
plt.ylabel("Membrane potential (mV)")
plt.title(f"Membrane potential of a {neuron.get('model')} neuron ($I_e$={I_e} pA)")
plt.gca().spines["top"].set_visible(False)
plt.gca().spines["bottom"].set_visible(False)
plt.gca().spines["left"].set_visible(False)
plt.gca().spines["right"].set_visible(False)
plt.legend(loc="lower left")
plt.tight_layout()
plt.show()
```

Note that we additionally set the verbosity of the NEST simulator to `M_WARNING` to suppress any unnecessary output. Also, we reset the kernel at the beginning of the script (`nest.ResetKernel()`) to ensure that we start with a clean slate.

In the above example, we used a constant input current to stimulate the neuron, defined by the parameter `I_e`. However, NEST offers a variety of other stimulation devicesꜛ:

- `ac_generator`: produces an alternating current (AC) input
- `dc_generator`: provides a direct current (DC) input
- `step_current_generator`: provides a piecewise constant DC input current
- `noise_generator`: generates a Gaussian white noise current
- `poisson_generator`: generates spikes with Poisson process statistics
- `spike_generator`: generates spikes from an array of spike times
- `spike_train_injector`: a neuron that emits prescribed spike trains

These devices can be created like any other device in NEST with the `nest.Create()` function. For instance, here is how to create a `poisson_generator` to stimulate the neuron with a Poisson process:

```
neuron = nest.Create("iaf_psc_alpha")
noise = nest.Create("poisson_generator")
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m", "I_syn_ex", "I_syn_in"])
spikerecorder = nest.Create("spike_recorder")
```

In order to see just the effect of the Poisson process on the neuron, we should ensure that the neuron is not stimulated by any other external input current:

```
neuron.I_e = 0.0
pprint(f"I_e: {neuron.get('I_e')}")
```

We can further specify the average firing rate (spikes/s) of the Poisson generator by setting the `rate` parameter:

```
noise.rate = 68000.0 # [Hz]
```

Also for stimulation devices, you can set the `start` and `stop` parameters to define the time interval in which the device is active. See the documentation of the respective device for further details.

We further adjust the parameters of our integrate-and-fire model to generate some spikes and to have something to play around with later:

```
# change the membrane time constant:
nest.SetStatus(neuron, {"tau_m": 11.0}) # [ms], default is 10 ms
# change the spike threshold:
nest.SetStatus(neuron, {"V_th": -55.0}) # [mV], default is -55 mV
```

By increasing the membrane time constant `tau_m`, the neuron integrates the input current over a longer time period, i.e., it becomes more sensitive to the input current and fires more easily. Decreasing the spike threshold `V_th` likewise lets the neuron fire more easily, i.e., it fires more often and at lower input currents.
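The combined effect of these two parameters can be sketched with a simple forward-Euler integration of a leaky integrate-and-fire neuron in plain Python (a toy sketch for illustration only, not NEST code; the parameter names mirror NEST conventions, and the subthreshold dynamics are reduced to $dV/dt = -(V - E_L)/\tau_m + I_e/C_m$ with the default values assumed here):

```python
def lif_spike_count(tau_m, V_th, I_e=380.0, C_m=250.0, V_reset=-70.0,
                    E_L=-70.0, T=1000.0, dt=0.1):
    """Count spikes of a leaky integrate-and-fire neuron driven by a
    constant current I_e (pA), using forward-Euler integration."""
    V, spikes = E_L, 0
    for _ in range(int(T / dt)):
        # subthreshold dynamics: dV/dt = -(V - E_L)/tau_m + I_e/C_m
        V += dt * (-(V - E_L) / tau_m + I_e / C_m)
        if V >= V_th:     # threshold crossing: reset and count a spike
            V = V_reset
            spikes += 1
    return spikes

# a longer membrane time constant yields more spikes for the same input:
print(lif_spike_count(tau_m=10.0, V_th=-55.0))
print(lif_spike_count(tau_m=20.0, V_th=-55.0))
# lowering the threshold has the same qualitative effect:
print(lif_spike_count(tau_m=10.0, V_th=-60.0))
```

In this toy model, the steady-state depolarization is $E_L + I_e \tau_m / C_m$, so increasing `tau_m` pushes the membrane further above threshold and the neuron fires more often, just as described above.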

We need to connect the Poisson generator to the neuron and run the simulation:

```
nest.Connect(multimeter, neuron)
nest.Connect(noise, neuron)
nest.Connect(neuron, spikerecorder)
nest.Simulate(1000.0)
```

This time, we also extract the excitatory and inhibitory synaptic input currents from the multimeter to see what the injected Poisson current looks like:

```
recorded_events = multimeter.get()
recorded_V = recorded_events["events"]["V_m"]
time = recorded_events["events"]["times"]
spikes = spikerecorder.get("events")
senders = spikes["senders"]
recorded_current_ex = recorded_events["events"]["I_syn_ex"] # Excitatory synaptic input current
recorded_current_in = recorded_events["events"]["I_syn_in"] # Inhibitory synaptic input current
```

Here are the corresponding simulation results:

By increasing the mean firing rate of the Poisson generator to, e.g., 88,000 Hz, the neuron will fire more often:

You may have noticed that the firing rate of the Poisson generator does not directly translate into the firing rate of the neuron. This is because the neuron integrates the input current over time, and the firing rate of the neuron depends on the actual input current and the neuron’s parameters such as the firing threshold or the membrane time constant. Regarding the input current, the `noise.rate` parameter of the `poisson_generator` indeed sets the mean rate of the Poisson process used to generate spikes. On average, spikes are generated at this rate, but due to the stochastic nature of the process, the actual number of spikes in any given second can vary.
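This trial-to-trial variability can be illustrated with a minimal Poisson process in plain Python (a sketch for illustration, not NEST code), which draws exponentially distributed inter-spike intervals and counts the spikes in a one-second window:

```python
import random

def poisson_spike_count(rate, duration=1.0, rng=None):
    """Count spikes of a homogeneous Poisson process with the given
    mean rate (spikes/s) in a window of `duration` seconds, by drawing
    exponentially distributed inter-spike intervals."""
    rng = rng or random.Random()
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate)  # next inter-spike interval
        if t > duration:
            return count
        count += 1

rng = random.Random(1)
counts = [poisson_spike_count(1000.0, rng=rng) for _ in range(20)]
# the per-second counts scatter around the mean rate of 1000 spikes/s:
print("mean:", sum(counts) / len(counts), "min:", min(counts), "max:", max(counts))
```

Each one-second trial yields a slightly different spike count, even though the mean rate is fixed; the same happens inside the `poisson_generator`.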

Another factor affecting the actual spike frequency is the connection weights. So far, we have used default connection parameters; we will discuss different connection paradigms in the next post. However, the synaptic weight of the connection between the `poisson_generator` and the neuron determines how much each spike affects the neuron’s membrane potential. Higher weights mean each spike has a greater effect, potentially leading to more frequent firing if the integrated input reaches the threshold more often.

In summary, setting a high `noise.rate` does not mean that the neuron will fire at that rate. Instead, it means that the neuron will receive a high rate of synaptic inputs, which then interact with the neuron’s properties to determine its actual firing rate. Feel free to play around with the example above and change the `noise.rate` as well as the neuron’s parameters to see how they affect the actual spiking behavior of the neuron.

With NEST, it is very easy to study the behavior of neuron models under changing conditions. For instance, we can simulate the Hodgkin-Huxley model by using the `hh_psc_alpha` neuron modelꜛ:

```
# define the neuron model and recording devices:
neuron = nest.Create("hh_psc_alpha")
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
spikerecorder = nest.Create("spike_recorder")
# set a constant input current for the neuron:
I_e = 650.0 # [pA] # 630.0: spike train; 620.0: a few spikes; 360.0: single spike
neuron.I_e = I_e # [pA]
pprint(f"{neuron.get('I_e')}")
# connect the nodes:
nest.Connect(multimeter, neuron)
nest.Connect(neuron, spikerecorder)
# run a simulation for 200 ms:
nest.Simulate(200.0)
# extract recorded data:
recorded_events = multimeter.get()
recorded_V = recorded_events["events"]["V_m"]
time = recorded_events["events"]["times"]
spikes = spikerecorder.get("events")
senders = spikes["senders"]
```

For different input currents, we trigger different responses of the neuron model:

- for an input current of 360 pA, the neuron fires a single spike
- for an input current of 620 pA, the neuron fires a few spikes
- for input currents above 630 pA, the neuron fires a spike train

You can further play around with the parameters of the neuron model by changing the membrane capacitance, the leak conductances, or several other parameters of the Hodgkin-Huxley model.

Sometimes, you may want to study a neuron model under different conditions, e.g., with different parameters or different initial conditions. In this case, you can create a copy of the model and adjust the parameters of the copy. This can be achieved with the `nest.CopyModel()`ꜛ command, which creates a copy of a given model with the specified parameters and adds it to the current model zoo. To demonstrate this, let’s recap the Izhikevich neuron model from the previous post with its different typical parameter sets for generating different firing patterns:

```
# define sets of typical parameters of the Izhikevich neuron model:
p_RS = [0.02, 0.2, -65, 8, "regular spiking (RS)"] # regular spiking settings for excitatory neurons (RS)
p_IB = [0.02, 0.2, -55, 4, "intrinsically bursting (IB)"] # intrinsically bursting (IB)
p_CH = [0.02, 0.2, -51, 2, "chattering (CH)"] # chattering (CH)
p_FS = [0.1, 0.2, -65, 2, "fast spiking (FS)"] # fast spiking (FS)
p_TC = [0.02, 0.25, -65, 0.05, "thalamic-cortical (TC)"] # thalamic-cortical (TC) (doesn't work well)
p_LTS = [0.02, 0.25, -65, 2, "low-threshold spiking (LTS)"] # low-threshold spiking (LTS)
p_RZ = [0.1, 0.26, -65, 2, "resonator (RZ)"] # resonator (RZ)
```

We now create a copy of the original NEST `izhikevich`ꜛ model for each parameter set:

```
nest.CopyModel("izhikevich", "izhikevich_RS", {"a": p_RS[0], "b": p_RS[1], "c": p_RS[2], "d": p_RS[3]})
nest.CopyModel("izhikevich", "izhikevich_IB", {"a": p_IB[0], "b": p_IB[1], "c": p_IB[2], "d": p_IB[3]})
nest.CopyModel("izhikevich", "izhikevich_CH", {"a": p_CH[0], "b": p_CH[1], "c": p_CH[2], "d": p_CH[3]})
nest.CopyModel("izhikevich", "izhikevich_FS", {"a": p_FS[0], "b": p_FS[1], "c": p_FS[2], "d": p_FS[3]})
nest.CopyModel("izhikevich", "izhikevich_TC", {"a": p_TC[0], "b": p_TC[1], "c": p_TC[2], "d": p_TC[3]})
nest.CopyModel("izhikevich", "izhikevich_LTS", {"a": p_LTS[0], "b": p_LTS[1], "c": p_LTS[2], "d": p_LTS[3]})
nest.CopyModel("izhikevich", "izhikevich_RZ", {"a": p_RZ[0], "b": p_RZ[1], "c": p_RZ[2], "d": p_RZ[3]})
```

You can verify that the models have been created by listing all available models:

```
pprint(nest.Models())
```

Note that your custom models are not saved permanently. If you restart the kernel, the default NEST model zoo is restored.

Now, let’s simulate and plot all different model variants:

```
model_loop_list = ["izhikevich_RS", "izhikevich_IB", "izhikevich_CH", "izhikevich_FS", "izhikevich_TC", "izhikevich_LTS", "izhikevich_RZ"]
for model in model_loop_list:
    # create the neuron, a spike recorder and a multimeter:
    neuron = nest.Create(model)
    multimeter = nest.Create("multimeter")
    multimeter.set(record_from=["V_m"])
    spikerecorder = nest.Create("spike_recorder")
    # set a constant input current for the neuron:
    I_e = 10.0 # [pA]
    neuron.I_e = I_e # [pA]
    # now, connect the nodes:
    nest.Connect(multimeter, neuron)
    nest.Connect(neuron, spikerecorder)
    # run a simulation for 1000 ms:
    nest.Simulate(1000.0)
    # extract recorded data from the multimeter and plot it:
    recorded_events = multimeter.get()
    recorded_V = recorded_events["events"]["V_m"]
    time = recorded_events["events"]["times"]
    spikes = spikerecorder.get("events")
    senders = spikes["senders"]
    plt.figure(figsize=(8, 4))
    plt.plot(time, recorded_V, label="membrane potential")
    plt.plot(spikes["times"], spikes["senders"]+np.max(recorded_V), "r.", markersize=10,
             label="spike events")
    plt.xlabel("Time (ms)")
    plt.ylabel("Membrane potential (mV)")
    plt.title(f"Membrane potential of a {neuron.get('model')} neuron ($I_e$={I_e} pA)")
    plt.gca().spines["top"].set_visible(False)
    plt.gca().spines["bottom"].set_visible(False)
    plt.gca().spines["left"].set_visible(False)
    plt.gca().spines["right"].set_visible(False)
    plt.legend(loc="center right")
    plt.tight_layout()
    plt.show()
```

In the last example, you may have noticed that the time was not reset between simulations: each new time array starts where the previous simulation ended. This is actually due to a mistake that I made in the simulation: I should have reset the kernel before each simulation in the for loop (which unfortunately would have deleted our individual Izhikevich model copies) or created an individual recording device for each model. However, this brings up an important point in NEST regarding the attachment of a single recording device to multiple neurons. If you connect a single recording device to $n$ neurons or neuron populations, the samples of all $n$ neurons are stored interleaved in one flat array. Thus, to extract the data for each neuron in the correct order, you need to sliceꜛ the data array with a step. Here is an exampleꜛ:

```
# create two neuron nodes:
neuron1 = nest.Create("iaf_psc_alpha")
neuron1.set({"I_e": 340.0})
neuron2 = nest.Create("iaf_psc_alpha")
neuron2.set({"I_e": 370.0})
# create a multimeter node:
multimeter = nest.Create("multimeter")
multimeter.set(record_from=["V_m"])
spikerecorder = nest.Create("spike_recorder")
# connect all nodes:
nest.Connect(multimeter, neuron1)
nest.Connect(multimeter, neuron2)
nest.Simulate(1000.0)
```

To retrieve the data from the multimeter in the correct order you need to correctly slice the data array:

```
mm = multimeter.get()
Vms1 = mm["events"]["V_m"][::2]    # every second entry, starting at index 0 (neuron1)
ts1 = mm["events"]["times"][::2]
Vms2 = mm["events"]["V_m"][1::2]   # every second entry, starting at index 1 (neuron2)
ts2 = mm["events"]["times"][1::2]
plt.figure(1)
plt.plot(ts1, Vms1)
plt.plot(ts2, Vms2)
```
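The interleaving itself can be illustrated with plain Python lists standing in for the multimeter’s flat `events` arrays (toy values, no NEST required):

```python
# two neurons sampled at t = 1, 2, 3 ms; the multimeter stores the samples
# interleaved: neuron1@1ms, neuron2@1ms, neuron1@2ms, neuron2@2ms, ...
V_m   = [-70.0, -65.0, -69.5, -64.2, -69.0, -63.5]
times = [1.0, 1.0, 2.0, 2.0, 3.0, 3.0]

Vms1, ts1 = V_m[::2], times[::2]    # every second entry, from index 0 -> neuron1
Vms2, ts2 = V_m[1::2], times[1::2]  # every second entry, from index 1 -> neuron2

print(Vms1)  # [-70.0, -69.5, -69.0]
print(Vms2)  # [-65.0, -64.2, -63.5]
```

With $n$ recorded neurons, the general pattern is `V_m[i::n]` for the $i$-th neuron.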

While we use NEST to study the behavior of single neurons throughout this tutorial, NEST is primarily designed to simulate large-scale networks of spiking neurons. There are other simulators that are more suitable for single neuron simulations, such as Brian2ꜛ. Brian2 is a simulator for spiking neural networks written in Python that is designed to be easy to use and highly flexible. It is particularly well-suited for single neuron simulations and small networks.

NEST is a robust and versatile simulator designed for large-scale simulations of spiking neural networks. In this tutorial, we have learned the fundamental aspects of using NEST by modeling and simulating the behavior of single neurons. By starting with the installation and setup of NEST, we progressed through the creation and manipulation of individual neuron models, demonstrated how to connect neurons with recording devices, and explored various stimulation paradigms.

Understanding the behavior of single neurons is crucial as it forms the building block for more complex network simulations. With the skills and knowledge gained from this tutorial, you are now ready to explore and create intricate neural network models. In my next posts, we will learn how to extend these concepts into multi-neuron networks and large-scale simulations, further uncovering the potential of NEST.

The complete code used in this blog post is available in this Github repositoryꜛ. Feel free to modify and expand upon it, and share your insights.

- Gewaltig, M.-O., & Diesmann, M., *NEST (NEural Simulation Tool)*, 2007, Scholarpedia, 2(4), 1430, doi: 10.4249/scholarpedia.1430ꜛ
- Documentation of the NEST simulatorꜛ
- PyNEST API listingꜛ
- List of all supported neuron and synapse models in NESTꜛ
- NEST Tutorial “Part 1: Neurons and simple neural networks”ꜛ
- NEST tutorial “One neuron example”ꜛ
- NEST tutorial “One neuron with noise”ꜛ
- NEST tutorial “Two neuron example”ꜛ

NEST development was started in 1993 by Markus Diesmann and Marc-Oliver Gewaltig at the Ruhr University Bochum, Germany, and the Weizmann Institute of Science in Rehovot, Israel. Initially called SYNOD and utilizing a stack-based simulation language named SLI, the software was renamed NEST in 2001. Until 2004, NEST was exclusively developed and used by the founding members of the NEST Initiativeꜛ, with the first public release appearing in the summer of that year. Since then, NEST has been regularly updated, typically releasing new versions once or twice a year.

In 2007, NEST introduced hybrid parallelism using POSIX threadsꜛ and MPIꜛ, enabling both shared-memory and distributed-memory parallelism. In 2008, the stack-based SLI language was largely replaced by a modern Python interface (although SLI remains in use internally). Around the same time, the simulator-independent specification language PyNNꜛ was developed with support for NEST. PyNN is a common interface for neuronal network simulators, allowing users to write code that can run on multiple simulators such as NEURONꜛ, Brianꜛ – and NEST – without modification. In 2012, the NEST Initiative transitioned from the proprietary NEST License to the GNU GPL v2 or later.

NEST is built to handle networks with an extensive number of neurons and synapses, ensuring efficient memory usage and computational speed. Designed to mimic the logic of electrophysiological experiments within a computer environment, NEST requires users to explicitly define the neural system under investigation, providing all necessary tools for this purpose.

NEST supports a wide range of experimental paradigms, including setups with multiple neuron populations, complex connectivity patterns, and various input stimuli. Different neuron and synapse models can coexist, allowing for diverse neuronal connections.

To manipulate or observe network dynamics, virtual recording devices can be defined within the simulation. These devices simulate instruments used in real experiments for measurement and stimulation, recording data either to memory or to a file. NEST’s extensibility allows the addition of new models for neurons, synapses, and devices as needed.

In summary, the core design principles of NEST are **scalability**, **flexibility**, and **precision**:

**Scalability** – NEST employs advanced algorithms and data structures to manage large-scale simulations efficiently. It leverages parallel computing architectures, such as multi-core processors and distributed computing systems, enhancing performance and making it suitable for networks with millions of neurons and billions of synapses.

**Flexibility** – The simulator supports various neuron and synapse models, including integrate-and-fire neurons, Hodgkin-Huxley-type neurons, and plastic synapses. Users can extend functionality by defining custom models and incorporating them into the simulation environment, making NEST versatile for diverse research applications.

**Precision** – NEST ensures high numerical accuracy through precise time integration methods and accurate representation of neuronal dynamics, critical for capturing intricate neural behavior and producing reliable simulation results.

The primary user interface for NEST is PyNEST, a Python package that provides a high-level interface to the simulator. PyNEST simplifies the construction of neuronal networks, simulation control, and data analysis, and enables users to focus on experimental design and data interpretation rather than low-level programming tasks.

NEST offers a wide range of features that make it a versatile tool for neuroscientific research:

**Modeling** – Users can construct complex neuronal networks using a high-level language interface, primarily Python. NEST provides a rich library of pre-defined neuron and synapse modelsꜛ, allowing users to create detailed simulations with minimal effort.

**Simulation control** – NEST offers extensive control over simulation parameters, including the ability to specify simulation duration, time resolution, and input stimuli. This flexibility enables users to tailor simulations to their specific experimental needs.

**Analysis tools** – The simulator includes built-in tools for analyzing simulation data, such as spike train analysis, raster plots, and firing rate calculations. Additionally, NEST supports integration with external data analysis tools, like NumPy and SciPy, for advanced data processing.

**Parallel computing** – To handle large-scale simulations, NEST is designed to run efficiently on parallel computing infrastructuresꜛ. It supports both shared-memory and distributed-memory systems, enabling users to utilize high-performance computing resources effectively.

**Extensibility** – NEST’s modular architecture allows users to extend its functionality by adding custom neuron and synapse models, as well as new simulation and analysis tools. This extensibility is facilitated by a well-documented APIꜛ and a supportive user communityꜛ.

NEST has been instrumental in advancing the understanding of neural dynamics and network behavior. Its applications span various domains within neuroscience:

**Neural coding** – NEST can be used to study how neurons encode and process information. By simulating different coding schemes, such as rate coding and temporal coding, one can investigate the principles underlying neural representation and computation.

**Network dynamics** – NEST allows the exploration of dynamic phenomena in neural networks, such as oscillations, synchrony, and propagation of activity. These studies provide insights into the mechanisms of information processing and communication in the brain.

**Plasticity and learning** – The simulator is employed to model synaptic plasticity mechanisms, such as Hebbian learning and spike-timing-dependent plasticity (STDP). These models help elucidate how learning and memory processes are implemented at the synaptic level.

**Brain disorders** – By simulating pathological conditions, like epilepsy and Parkinson’s disease, NEST contributes to the understanding of disease mechanisms and the development of therapeutic interventions. It enables the testing of hypotheses about disease progression and the effects of potential treatments.

The NEST simulator stands as a cornerstone in computational neuroscience, providing a robust and versatile platform for simulating and analyzing large-scale neuronal networks. Its scalable architecture, flexible modeling capabilities, and precise simulation methods make it invaluable for studying brain function, neural dynamics, and neurological disorders. By enabling detailed investigations into the complex interplay of neurons and synapses, NEST facilitates the exploration of fundamental questions in neuroscience. With its user-friendly Python interface and extensive feature set, NEST enables this research to be conducted efficiently and effectively, from small-scale simulations on a laptop to large-scale models on high-performance computing clusters.

In my next posts, we will explore these capabilities further with short step-by-step examples, demonstrating how to set up and run simulations in NEST and analyze simulation results. Stay tuned.

- Gewaltig, M.-O., & Diesmann, M., *NEST (NEural Simulation Tool)*, 2007, Scholarpedia, 2(4), 1430, doi: 10.4249/scholarpedia.1430ꜛ
- Eppler, J. M., Helias, M., Muller, E., Diesmann, M., & Gewaltig, M.-O., *PyNEST: a convenient interface to the NEST simulator*, 2009, Frontiers in Neuroinformatics, 2, 12, doi: 10.3389/neuro.11.012.2008ꜛ
- Davison, A. P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., Pecevski, D., Perrinet, L., & Yger, P., *PyNN: A Common Interface for Neuronal Network Simulators*, 2009, Frontiers in Neuroinformatics, 2, 11, doi: 10.3389/neuro.11.011.2008ꜛ
- Plesser, H. E., Eppler, J. M., Morrison, A., Diesmann, M., & Gewaltig, M.-O., *Efficient Parallel Simulation of Large-Scale Neuronal Networks on Clusters of Multiprocessor Computers*, 2007, Euro-Par 2007 Parallel Processing, Lecture Notes in Computer Science, Vol. 4641, pp. 672–681, doi: 10.1007/978-3-540-74466-5_71ꜛ
- Morrison, A., Straube, S., Plesser, H. E., & Diesmann, M., *Exact Subthreshold Integration with Continuous Spike Times in Discrete-Time Neural Network Simulations*, 2007, Neural Computation, 19(1), 47–79, doi: 10.1162/neco.2007.19.1.47ꜛ
- Jordan, J., Ippen, T., Helias, M., Kitayama, I., Sato, M., Igarashi, J., … & Diesmann, M., *Extremely scalable spiking neuronal network simulation code: From laptops to exascale computers*, 2018, Frontiers in Neuroinformatics, 12, 2, doi: 10.3389/fninf.2018.00002ꜛ
- *NEST - A brain simulator*, Bernstein Network, 2012-07-11, via YouTubeꜛ
- Wikipedia article on the NEST simulatorꜛ

Spiking Neural Networks (SNNs) represent a class of artificial neural networks that closely emulate the neuronal dynamics observed in the biological brain. Unlike traditional artificial neural networks (ANNs) that process information through continuous signals and utilize activation functions like ReLU or sigmoid, SNNs operate on a different principle. Neurons within an SNN communicate via discrete spikes, firing only when their membrane potential exceeds a specific threshold. This spike-based communication is event-driven, mirroring the way biological neurons interact.

This fundamental difference in information processing not only enhances the biological plausibility of SNNs but also contributes to their computational efficiency. In SNNs, the absence of constant signal transmission reduces power consumption, making them particularly suitable for energy-efficient computing in fields like robotics and embedded systems.

Let’s recall the basic concept of the Izhikevich neuron model. The model is based on two coupled differential equations that describe the **membrane potential** $v$ and the **recovery variable** $u$ of a neuron:

\[\begin{align} \frac{dv}{dt} &= 0.04v^2 + 5v + 140 - u + I \label{eq:model1} \\ \frac{du}{dt} &= a(bv - u) \label{eq:model2} \end{align}\]with the after-spike reset condition:

\[\begin{align} \text{if } v \geq 30 \text{ mV, then } & \begin{cases} v \leftarrow c \\ u \leftarrow u + d \end{cases} \label{eq:reset} \end{align}\]The membrane potential $v$ in Eq. ($\ref{eq:model1}$) represents the voltage across the cell membrane, while the recovery variable $u$ in Eq. ($\ref{eq:model2}$) accounts for the activation of the potassium ionic currents. The **parameters** $a$, $b$, $c$, and $d$ determine the dynamics of the neuron (for a detailed description, please refer to the previous post). The **reset condition** Eq. ($\ref{eq:reset}$) ensures that the membrane potential is reset to a specific value $c$ after a spike (i.e., after reaching the threshold of 30 mV), and the recovery variable $u$ is increased by a constant $d$. The **input current** $I$ represents the sum of all incoming synaptic currents and external inputs.

In order to simulate a network of Izhikevich neurons, we need to extend this model to multiple neurons and define the synaptic connections between them. The network structure can be represented by a **connectivity matrix $S$**, where $S_{ij}$ denotes the synaptic weight from a presynaptic neuron $j$ to a postsynaptic neuron $i$. In this matrix:

- positive values ($S_{ij}>0$) indicate excitatory synaptic connections, meaning that the presynaptic neuron’s firing tends to increase the postsynaptic neuron’s membrane potential, making it more likely to fire.
- negative values ($S_{ij}<0$) indicate inhibitory synaptic connections, where the presynaptic neuron’s firing reduces the postsynaptic neuron’s likelihood of firing by lowering its membrane potential.

This connectivity matrix $S$ not only determines the presence and absence of synaptic connections but also quantifies their strength, profoundly influencing the network dynamics. The overall behavior of the network – such as its ability to exhibit patterns like synchronization, oscillations, or even chaotic activity – depends on the layout and weights of these connections.

In our simulation, we will initialize the synaptic weights randomly, reflecting the diverse connectivity patterns observed in biological neural networks.
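The random initialization and the sign convention of $S$ can be sketched in a few lines; the network sizes here are made-up, chosen only to keep the example small:

```python
import numpy as np

rng = np.random.default_rng(0)
Ne, Ni = 4, 2  # tiny hypothetical network: 4 excitatory, 2 inhibitory neurons
# columns 0..Ne-1 hold the outgoing weights of excitatory neurons (non-negative),
# columns Ne.. hold the outgoing weights of inhibitory neurons (non-positive):
S = np.hstack((0.5 * rng.random((Ne + Ni, Ne)), -rng.random((Ne + Ni, Ni))))

assert S.shape == (Ne + Ni, Ne + Ni)
assert (S[:, :Ne] >= 0).all() and (S[:, Ne:] <= 0).all()
```

Row $i$ of $S$ then collects all incoming weights of neuron $i$, which is exactly the layout the input-current calculation below relies on.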

In order to make the simulation more realistic, we will also consider two types of neurons in our network: **excitatory** ($N_e$) and **inhibitory neurons** ($N_i$). We implement these neuron types such that they differ in their intrinsic properties, i.e., the parameters $a$, $b$, $c$, and $d$ of the Izhikevich model (in our code, we achieve this by adding a random value to the respective parameter for each neuron of each type). By assigning different parameter values to these neuron types, we can capture the diverse spiking behaviors observed in biological neural networks. And of course, you can add further neuron types to the network, each with its own set of parameters.

In our simulation code, we will initialize the network with 800 excitatory and 200 inhibitory neurons, reflecting the common 4:1 ratio of excitatory to inhibitory neurons in the mammalian cortex.

The time-dependent input current $I=I(t)$ in the network is a critical component that drives neuronal activity. It is composed of external inputs and the synaptic currents derived from other neurons within the network. Each neuron’s input current is calculated by summing the products of the synaptic weights $S$ and the membrane potentials $v$ of all presynaptic neurons:

\[\begin{equation} I_i(t) = I_{\text{external}, i}(t) + \sum_{j=1}^{N} S_{ij} \cdot v_j(t) \label{eq:input_current} \end{equation}\]where:

- $I_i(t)$ is the total input current to neuron $i$ at time $t$,
- $I_{\text{external}, i}(t)$ represents external inputs to neuron $i$, which could include random noise or specific stimuli patterns,
- $S_{ij}$ is the synaptic weight from neuron $j$ to neuron $i$,
- $v_j(t)$ is the membrane potential of the presynaptic neuron $j$,
- $N$ is the total number of neurons in the network.

In our simulation, we enhance the realism by incorporating random noise in the external inputs, reflecting the variability observed in biological neural systems. This approach enables the simulation to exhibit complex network dynamics, such as synchronous firing, oscillations, and potentially chaotic behaviors.

Additionally, in our implementation, $I(t)$ is calculated such that only neurons whose membrane potential $v$ exceeded the threshold in the previous time-step contribute to the input current of other neurons at the current time-step. This event-driven mechanism ensures that the network dynamics are not just reactive to the current state of neuronal activations but are influenced by recent spiking activity, mimicking the temporal dynamics seen in real neural circuits.
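A minimal sketch of this event-driven update (with made-up weights and spike indices, not values from the simulation): only the columns of $S$ belonging to neurons that fired in the previous step are summed into the current input.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
S = rng.normal(size=(N, N))        # hypothetical connectivity matrix
fired = np.array([1, 3])           # neurons that spiked in the previous time step
I_ext = rng.normal(size=(N, 1))    # external (noise) input for the current step
# event-driven synaptic input: sum the weight columns of the fired neurons only:
I = I_ext + S[:, fired].sum(axis=1, keepdims=True)

# equivalent formulation with an explicit 0/1 spike vector:
spikes = np.zeros((N, 1))
spikes[fired] = 1.0
assert np.allclose(I, I_ext + S @ spikes)
```

The column-slicing form is what the simulation loop below uses; the spike-vector form makes explicit that non-firing neurons contribute nothing.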

The Python code below covers all concepts described above and is based on Izhikevich’s originally published Matlab codeꜛ:

```
import numpy as np
import matplotlib.pyplot as plt

# for reproducibility:
np.random.seed(0)

# simulation time:
T = 1000  # ms

# constants:
Ne = 800  # number of excitatory neurons
Ni = 200  # number of inhibitory neurons

# initialize parameters; pre-define parameter sets for different neuron types:
re = np.random.rand(Ne, 1)  # excitatory neurons, "r" stands for random
ri = np.random.rand(Ni, 1)  # inhibitory neurons
p_RS = [0.02, 0.2, -65, 8, "regular spiking (RS)"]  # regular spiking (RS), excitatory
p_IB = [0.02, 0.2, -55, 4, "intrinsically bursting (IB)"]
p_CH = [0.02, 0.2, -51, 2, "chattering (CH)"]
p_FS = [0.1, 0.2, -65, 2, "fast spiking (FS)"]
p_TC = [0.02, 0.25, -65, 0.05, "thalamic-cortical (TC)"]  # (doesn't work well)
p_LTS = [0.02, 0.25, -65, 2, "low-threshold spiking (LTS)"]
p_RZ = [0.1, 0.26, -65, 2, "resonator (RZ)"]
a_e, b_e, c_e, d_e, name_e = p_RS
a_i, b_i, c_i, d_i, name_i = p_LTS
a = np.vstack((a_e * np.ones((Ne, 1)), a_i + 0.08 * ri))
b = np.vstack((b_e * np.ones((Ne, 1)), b_i - 0.05 * ri))
c = np.vstack((c_e + 15 * re**2, c_i * np.ones((Ni, 1))))
d = np.vstack((d_e - 6 * re**2, d_i * np.ones((Ni, 1))))
S = np.hstack((0.5 * np.random.rand(Ne+Ni, Ne), -1 * np.random.rand(Ne+Ni, Ni)))

# initial values of v and u:
v = -65 * np.ones((Ne+Ni, 1))
u = b * v
firings = np.array([]).reshape(0, 2)  # spike timings

# initialize variables for recording data:
I_array = np.zeros((Ne+Ni, T))
v_array = np.zeros((Ne+Ni, T))
u_array = np.zeros((Ne+Ni, T))

# simulation of 1000 ms:
for t in range(0, T):
    # step 1: input current calculation, i.e., calculate the input current for each
    # neuron with a noise contribution (this is our "I_external(t)"):
    I = np.vstack((5 * np.random.randn(Ne, 1), 2 * np.random.randn(Ni, 1)))
    # sum the synaptic contributions of all neurons that fired in the previous time step:
    if t > 0:
        I += np.sum(S[:, fired], axis=1).reshape(-1, 1)
    # step 2: update the membrane potential and recovery variable (neuron dynamics)
    # with Euler's method (two half-steps of 0.5 ms for numerical stability):
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    # step 3: check for spikes and apply the reset condition:
    fired = np.where(v >= 30)[0]  # indices of neurons whose potential exceeds 30 mV
    if fired.size > 0:
        firings = np.vstack((firings, np.hstack((t * np.ones((fired.size, 1)),
                                                 fired.reshape(-1, 1)))))
        v[fired] = c[fired]  # reset v for fired neurons
        u[fired] = u[fired] + d[fired]  # increment u for fired neurons
    # step 4: record data:
    I_array[:, t] = I.flatten()
    v_array[:, t] = v.flatten()
    u_array[:, t] = u.flatten()

# plotting the spike timings:
plt.figure(figsize=(7, 7))
plt.scatter(firings[:, 0], firings[:, 1], s=1, c='k')
plt.axhline(y=Ne, color='k', linestyle='-', linewidth=1)
plt.text(0.8, 0.76, 'excitatory', color='k', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes, bbox=dict(facecolor='white', alpha=1))
plt.text(0.8, 0.84, 'inhibitory', color='k', fontsize=12, ha='left', va='center',
         transform=plt.gca().transAxes, bbox=dict(facecolor='white', alpha=1))
plt.xlabel('Time (ms)')
plt.ylabel('Neuron index')
plt.xlim([0, T])
plt.ylim([0, Ne+Ni])
plt.yticks(np.arange(0, Ne+Ni+1, 200))
plt.tight_layout()
plt.show()
```

The code above shows the core concept of the network model simulation. The entire code can be found in this Github repositoryꜛ.

As a first simulation run, we replicate Izhikevich’s original example and consider a network with 800 excitatory neurons simulated as regular spiking (RS) neurons and 200 inhibitory neurons simulated as low-threshold spiking (LTS) neurons. The simulation is run for 1000 ms. The following plot shows the spike events of each neuron in the network as a function of time:

What we can observe in this simulation is that synchronous firing patterns emerge episodically in the network, with groups of neurons firing together in a coordinated manner. These episodes repeat with a frequency of about 10 Hz (first three peaks in the plot) and about 40 Hz (remaining peaks). This replicates Izhikevich’s original results, which associate these firing patterns with the brain’s alpha and gamma oscillations, respectively. Between these episodes, the network exhibits a more irregular, Poisson-like firing pattern. The observed firing patterns demonstrate the network’s ability to exhibit complex spiking dynamics, even though the neurons are connected randomly and no synaptic plasticity rules are implemented. The neurons organize themselves into synchronous firing patterns or assemblies, exhibiting a collective rhythmic behavior. According to Izhikevich, this rhythmic behavior corresponds to that of the mammalian cortex in awake states.
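Such rhythms can also be estimated numerically rather than read off the plot, by binning all spike times into a per-millisecond population rate and inspecting its spectrum. The sketch below demonstrates the procedure on a synthetic 10 Hz rate signal (a stand-in for the spike counts you would derive from `firings`):

```python
import numpy as np

T = 1000  # ms of activity, 1-ms bins
t = np.arange(T)
# toy population rate with a 10 Hz modulation (replace with the binned spike counts):
rate = 5 + 4 * np.sin(2 * np.pi * 10 * t / 1000)
spectrum = np.abs(np.fft.rfft(rate - rate.mean()))  # remove DC before transforming
freqs = np.fft.rfftfreq(T, d=1e-3)                  # 1-ms bins -> frequencies in Hz
dominant = freqs[np.argmax(spectrum)]               # -> 10.0 Hz for this toy signal
```

Applied to the actual simulation output, the dominant peaks should sit near the alpha- and gamma-band frequencies described above.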

Let’s also take a look at the corresponding input currents of each neuron in the network:

We can see how the input currents of the neurons peak synchronously with the observed clusters of spiking events. This synchronous behavior is a result of the network’s connectivity and the synaptic weights between the neurons. In our model, the input current $I$ for each neuron includes contributions from the spikes of other neurons, as defined by the connectivity matrix $S$ (compare Eq. ($\ref{eq:input_current}$)). This means that when a neuron fires (its membrane potential $v$ surpasses the threshold), it influences the input currents of other neurons according to the synaptic weights specified in $S$. If $S_{ij}$ is positive (excitatory connection), a spike of neuron $j$ increases the input current $I_{i}$ of neuron $i$. If $S_{ij}$ is negative (inhibitory connection), it decreases $I_{i}$. During episodes of synchronous firing, where many neurons fire (nearly) simultaneously, these spiking events contribute collectively to significant peaks in the input currents of neurons connected to them. And this is exactly what we observe in the plot above.

This synchronous behavior between spikes and input currents is an important aspect of how neural networks function, both in biological and artificial contexts. It underscores the interconnected nature of neurons, where the action of one neuron can influence many others, leading to complex behaviors such as oscillations, waves of activity, and even synchronized firing patterns seen in various neural processes and disorders.

Finally, let’s take a brief look at the membrane potential and recovery variable traces of one exemplary excitatory neuron (blue) and an inhibitory neuron (orange) in the network:

The plots show Poisson-like spiking patterns of both neuron types along with distinct spiking events that correlate with the spiking clusters within the network. The recovery variable $u$ of the inhibitory neuron (orange) exhibits a different behavior with lower amplitudes compared to the excitatory neuron (blue). This difference in the recovery variable dynamics is a result of the different parameter values assigned to the two neuron types, reflecting the ability to assign diverse spiking behaviors to different neuron populations in the network.

By altering the synaptic weights in the connectivity matrix $S$, we can observe how the network dynamics change and generate different spiking patterns. For instance, we run the simulation for four different scenarios with varying excitatory and inhibitory synaptic weights, defined as follows, while keeping the other parameters constant:

```
# define various synaptic weights for different scenarios (uncomment one of the following):
S = np.hstack((0.60 * np.random.rand(Ne+Ni, Ne), -1.6*np.random.rand(Ne+Ni, Ni))) # S1
#S = np.hstack((0.60 * np.random.rand(Ne+Ni, Ne), -0.6*np.random.rand(Ne+Ni, Ni))) # S2
#S = np.hstack((0.30 * np.random.rand(Ne+Ni, Ne), -0.1*np.random.rand(Ne+Ni, Ni))) # S3
#S = np.hstack((0.10 * np.random.rand(Ne+Ni, Ne), -0.1*np.random.rand(Ne+Ni, Ni))) # S4
```

- **Scenario 1 (S1)**: strong excitatory and strong inhibitory synaptic weights
- **Scenario 2 (S2)**: strong excitatory and weak inhibitory synaptic weights
- **Scenario 3 (S3)**: weak excitatory and very weak inhibitory synaptic weights
- **Scenario 4 (S4)**: very weak excitatory and very weak inhibitory synaptic weights

By increasing both the excitatory and inhibitory synaptic weights (S1) (compared to Izhikevich’s original set-up), we observe a more synchronized firing pattern in the network. This synchronous behavior is due to the stronger synaptic connections, which lead to more pronounced interactions between neurons. However, the frequency of the synchronous firing episodes has now changed. By decreasing the inhibitory synaptic weights (S2), the firing patterns change even more dramatically, with less frequent and longer-lasting synchronous firing episodes of heavily synchronized neurons. If we additionally decrease the excitatory synaptic weights (S3), the frequency of synchronous firing episodes becomes higher and, as expected, the synchronicity between neurons decreases. Finally, by setting both excitatory and inhibitory synaptic weights to very low values (S4), the network exhibits a more irregular firing pattern with almost no recognizable synchronicity between neurons.

These results demonstrate how the network dynamics are shaped by the synaptic weights and how different connectivity patterns can lead to distinct spiking behaviors. By adjusting the synaptic weights, we can observe a wide range of network behaviors, from synchronous firing to irregular spiking patterns, reflecting the diverse dynamics observed in biological neural networks.
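To compare the scenarios beyond visual inspection, the degree of synchrony can be quantified. A simple index (our own illustrative choice, not taken from Izhikevich’s paper) is the variance of the population-averaged signal divided by the mean single-neuron variance; it approaches 1 for fully synchronized traces and 0 for independent ones:

```python
import numpy as np

def synchrony(v):
    """Variance of the population mean over the mean single-trace variance.
    v: array of shape (neurons, time steps), e.g. v_array from the simulation."""
    return v.mean(axis=0).var() / v.var(axis=1).mean()

# toy check with fabricated traces:
rng = np.random.default_rng(0)
common = rng.normal(size=1000)
v_sync = np.tile(common, (50, 1))      # 50 identical traces -> index 1
v_asyn = rng.normal(size=(50, 1000))   # 50 independent traces -> index ~1/50
```

Running `synchrony(v_array)` for each scenario S1–S4 should rank them roughly in the order of synchrony described above.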

In addition to altering the synaptic weights, we can also change the neuron types in the network to observe how different spiking behaviors emerge. As an example, we simulate a network with

- regular spiking (RS) and chattering (CH) neurons (**combination 1**),
- regular spiking (RS) and thalamic-cortical (TC) neurons (**combination 2**),
- resonator (RZ) and regular spiking (RS) neurons (**combination 3**), and
- intrinsically bursting (IB) and intrinsically bursting (IB) neurons (**combination 4**).

We assign different connectivity matrices $S$ to these simulations in order to enhance emerging spiking patterns. We use the connectivity scenarios defined in the previous section and define the following additional scenario S5:

```
S = np.hstack((0.30 * np.random.rand(Ne+Ni, Ne), -0.2*np.random.rand(Ne+Ni, Ni))) # S5
```

By simulating various neuron type combinations, we can observe a wide range of spiking behaviors in the network, each combination exhibiting distinct firing patterns. This demonstrates how the intrinsic properties of neurons, such as their parameter values in the Izhikevich model, can shape the network dynamics and lead to diverse spiking behaviors. By combining different neuron types and synaptic weights, we can explore the rich repertoire of spiking patterns that can emerge in neural networks, reflecting the complexity and adaptability of biological neural systems.

In this exploration of the Izhikevich neuron model applied to spiking neural networks (SNNs), we’ve showcased its capability to simulate complex neural dynamics that mirror biological processes. Renowned for its simplicity, computational efficiency, and biological plausibility, the Izhikevich model is an exceptional tool for understanding how neuronal interactions within neural networks lead to phenomena like synchronous firing and oscillatory behaviors. Furthermore, due to its computational efficiency, the Izhikevich SNN enables real-time processing, making it suitable for applications in computational neuroscience without compromising biological realism.

By adjusting network parameters and connectivity, we’ve demonstrated how different neuronal behaviors can be elicited, which enhances our understanding of neural circuit functionality and adaptability. Moving forward, integrating mechanisms like synaptic plasticity could open new pathways for simulating learning and memory, further bridging the gap between artificial and biological neural systems. The insights gained from these models are invaluable for advancing artificial intelligence and developing new treatments for neurological disorders.

The complete code used in this blog post is available in this Github repositoryꜛ. Feel free to experiment with it, modify the parameters, and explore the dynamics of the Izhikevich SNN model further.

- Izhikevich, Eugene M., *Simple model of spiking neurons*, 2003, IEEE Transactions on Neural Networks, Vol. 14, Issue 6, pages 1569-1572, doi: 10.1109/TNN.2003.820440ꜛ, PDFꜛ
- Izhikevich, Eugene M., *Dynamical systems in neuroscience: The geometry of excitability and bursting* (First MIT Press paperback edition), 2010, The MIT Press, ISBN: 978-0-262-51420-0, PDFꜛ

The Izhikevich model bridges the gap between detailed biophysical models like the Hodgkin-Huxley model and more abstract models like the Integrate-and-Fire model. While the former is biologically realistic but computationally expensive, the latter is computationally efficient but biologically unrealistic and lacks the ability to reproduce the rich dynamics of real neurons. In contrast, the Izhikevich model offers a compromise by capturing essential neuronal dynamics with a reduced set of equations. This reduction, based on bifurcation methodologies, allows for faster simulations while retaining essential features of neuronal dynamics.

The model is defined by a two-dimensional system of ordinary differential equations (ODE),

\[\begin{align} \frac{dv}{dt} &= 0.04v^2 + 5v + 140 - u + I \label{eq:model1} \\ \frac{du}{dt} &= a(bv - u) \label{eq:model2} \end{align}\]along with an after-spike reset condition:

\[\begin{align} \text{if } v \geq 30 \text{ mV, then } & \begin{cases} v \leftarrow c \\ u \leftarrow u + d \end{cases} \label{eq:reset} \end{align}\]The variable $v$ represents the **membrane potential**, and $u$ is a **recovery variable** that provides negative feedback to $v$. Both variables are dimensionless. $I$ is the synaptic or injected current. $a$, $b$, $c$, and $d$ are dimensionless parameters that define neuron dynamics, and they can be tuned to replicate various types of neuronal behaviors. In particular,

- $a$ controls the **time scale of the recovery variable $u$**, with smaller values leading to slower recovery.
- $b$ determines the **sensitivity of the recovery variable $u$** to the subthreshold fluctuations of the membrane potential $v$. Larger values result in a stronger coupling between $v$ and $u$, which possibly leads to subthreshold oscillations and low-threshold spiking behavior.
- $c$ is the **after-spike reset value of the membrane potential $v$**.
- $d$ is the **increment of the recovery variable $u$ after a spike**. It mimics the slow recovery of the membrane potential after an action potential.
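The qualitative effect of these parameters can be checked with a quick numerical experiment. Below, a plain Euler integration of the model equations (a rough sketch with arbitrarily chosen step size and duration, not the post's full simulation code) compares a regular spiking parameter set against a fast spiking one under the same constant input; the larger $a$ and smaller $d$ of the FS set yield a higher firing rate:

```python
def count_spikes(a, b, c, d, I=10.0, T=500.0, dt=0.5):
    """Integrate the Izhikevich equations with Euler steps and count spike resets."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(int(T / dt)):
        if v >= 30.0:                # after-spike reset condition
            v, u, spikes = c, u + d, spikes + 1
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
    return spikes

rs = count_spikes(0.02, 0.2, -65, 8)  # regular spiking parameters
fs = count_spikes(0.1, 0.2, -65, 2)   # fast spiking parameters
```

With these settings, `fs` exceeds `rs`, matching the intuition that fast spiking neurons fire at much higher rates than adapting regular spiking neurons.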

The following plot visualizes the effect of the four parameters on the dynamics of the Izhikevich model:

The specific numbers in Eq. ($\ref{eq:model1}$), $0.04v^2 + 5v + 140$, result from fitting the spike generation dynamics to experimental data of cortical neurons so that the membrane potential $v$ has the scale of mV and the time $t$ has the scale of ms. Other choices of the parameters in Eq. ($\ref{eq:model1}$) are also possible to model different types of neurons.
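For instance, setting $u = bv$ (the $u$-nullcline) and $I=0$ in Eq. ($\ref{eq:model1}$) gives the fixed-point condition $0.04v^2 + (5-b)v + 140 = 0$; with $b=0.2$ its roots recover the familiar values of about $-70$ mV (resting potential) and $-50$ mV (instability threshold):

```python
import numpy as np

b = 0.2
# fixed points of the model for I = 0 and u = b*v: 0.04 v^2 + (5 - b) v + 140 = 0
roots = np.sort(np.roots([0.04, 5 - b, 140]))  # -> approximately [-70, -50]
```

This is one way to see that the quadratic's coefficients place the membrane potential on a millivolt scale.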

Eq. ($\ref{eq:model1}$) and ($\ref{eq:model2}$) describe the evolution of the membrane potential and recovery variable over time. The reset condition Eq. ($\ref{eq:reset}$) ensures the model generates realistic action potentials by resetting the membrane potential and adjusting the recovery variable whenever the potential reaches a peak of 30 mV. The Izhikevich model is particularly well-suited for exploring the diverse behaviors of different types of neurons, including regular spiking, fast spiking, and bursting neurons as we will explore in the following sections.

To simulate the model, we choose the Euler method for numerical integration, similar to Izhikevich’s original paper. However, we modify the code such that the integration is performed with an adjustable step size `dt`. Choosing a step size smaller than 1 ms increases the accuracy of the simulation but also increases the computational cost. A value of `dt=0.1` is a good starting point for most simulations, offering a good balance between accuracy and computational efficiency.

In the code, different sets of parameters are pre-defined for various neuron types, which we will discuss in more detail in the next section. By changing the parameter set, you can simulate different types of neurons. The code also allows you to adjust the input current `I_baseline` over time, as well as the start and end time of the input current.

For plotting the membrane potential $v$, we clip the values to a maximum of 30 mV to visualize realistic action potentials.

```
import numpy as np
import matplotlib.pyplot as plt

# set simulation time and time step size:
T = 400              # total simulation time in ms
dt = 0.1             # time step size in ms
steps = int(T / dt)  # number of simulation steps
t_start = 50         # start time for the input current
t_end = T            # end time for the input current

# pre-defined parameter sets for different neuron types:
p_RS = [0.02, 0.2, -65, 8, "regular spiking (RS)"]  # regular spiking (RS), excitatory
p_IB = [0.02, 0.2, -55, 4, "intrinsically bursting (IB)"]
p_CH = [0.02, 0.2, -51, 2, "chattering (CH)"]
p_FS = [0.1, 0.2, -65, 2, "fast spiking (FS)"]
p_TC = [0.02, 0.25, -65, 0.05, "thalamic-cortical (TC)"]  # (doesn't work well)
p_LTS = [0.02, 0.25, -65, 2, "low-threshold spiking (LTS)"]
p_RZ = [0.1, 0.26, -65, 2, "resonator (RZ)"]
a, b, c, d, neuron_type = p_RS  # change the parameter set here to simulate other neuron types

# initial values of v and u:
v = -65  # mV
u = b * v

# initialize arrays to store the u, v, I and t values over time:
u_values = np.zeros(steps)
v_values = np.zeros(steps)
I_values = np.zeros(steps)
t_values = np.zeros(steps)

# set the baseline current:
I_baseline = 10  # nA

# simulation:
for t in range(steps):
    t_ms = t * dt  # current time in ms
    if t_ms >= t_start and t_ms <= t_end:
        I = I_baseline
    else:
        I = 0
    # check for spike and reset if v >= 30 mV (reset condition):
    if v >= 30:
        v = c   # reset membrane potential v to c
        u += d  # increase recovery variable u by d
    # Euler's method for numerical integration (two half-steps for stability):
    v += dt * 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    v += dt * 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    # store values for plotting:
    u_values[t] = u
    v_values[t] = v
    I_values[t] = I
    t_values[t] = t_ms

# ensure v_values do not exceed 30 mV in the plot:
v_values = np.clip(v_values, None, 30)

# plotting:
fig, ax1 = plt.subplots(figsize=(8, 3.85))
# plot v_values on the left y-axis:
ax1.plot(t_values, v_values, label='Membrane potential v(t)', color='k', lw=1.3)
ax1.set_xlabel('Time (ms)')
ax1.set_ylabel('membrane potential $v$ [mV]', color='k')
ax1.tick_params(axis='y', colors='k')
# create a second y-axis for u_values:
ax2 = ax1.twinx()
ax2.plot(t_values, u_values, label='Recovery variable u(t)', color='r', lw=2, alpha=1.0)
ax2.set_ylabel('recovery variable $u$ [a.u.]', color='r')
ax2.tick_params(axis='y', colors='r')
ax2.set_ylim(-20, 10)
# create a third y-axis for I_values:
ax3 = ax1.twinx()
ax3.spines['right'].set_position(('outward', 60))
ax3.plot(t_values, I_values, label='Input Current I(t)', color='b', lw=2, alpha=0.75)
ax3.set_ylabel('input current $I$ [nA]', color='b')
ax3.tick_params(axis='y', colors='b')
ax3.set_ylim(-1, 60)
ax3.set_frame_on(True)
ax3.patch.set_visible(False)
plt.title(f'Membrane potential, recovery variable, and input current, {neuron_type}\n'
          f'Parameters: a={a}, b={b}, c={c}, d={d}', fontsize=12)
plt.tight_layout()
plt.show()
```

The code above shows the core concept of the Izhikevich model simulation. The entire code can be found in this Github repositoryꜛ.

The Izhikevich model can simulate various types of neurons by adjusting the parameters $a$, $b$, $c$, and $d$. Here are some examples of different neuron types and their corresponding parameter sets that can be found in the mammalian neocortex:

The most common type of excitatory neuron in the neocortex is the regular spiking (RS) neuron. RS neurons are characterized by a regular firing pattern when exposed to a constant input current, showing a short inter-spike interval at the beginning which increases over time. This behavior is also called **spike frequency adaptation**. An increase in the input current will increase the inter-spike frequency. However, RS neurons will never fire at high frequencies due to the large spike-afterhyperpolarization. The parameters for RS neurons are $a=0.02$, $b=0.2$, $c=-65$, and $d=8$. $c=-65$ corresponds to a large voltage reset after a spike, and $d=8$ corresponds to a large after-spike increase of the recovery variable $u$.
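Spike frequency adaptation can be verified numerically: recording the spike times of an RS neuron under constant input shows the inter-spike intervals growing over time. This is a rough sketch using plain Euler integration (step size, duration, and input chosen arbitrarily, not taken from the post's full simulation):

```python
def spike_times(a, b, c, d, I=10.0, T=800.0, dt=0.5):
    """Return the spike times (ms) of one Izhikevich neuron under constant input I."""
    v, u, times = -65.0, b * -65.0, []
    for k in range(int(T / dt)):
        if v >= 30.0:            # after-spike reset condition
            v, u = c, u + d
            times.append(k * dt)
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
    return times

ts = spike_times(0.02, 0.2, -65, 8)  # regular spiking parameters
isis = [t2 - t1 for t1, t2 in zip(ts, ts[1:])]
# with adaptation, later inter-spike intervals are longer than the first one
```

The lengthening of `isis` reflects the accumulation of the recovery variable $u$ through the large $d=8$ increments.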

Intrinsically bursting (IB) neurons are characterized by a burst of action potentials followed by repetitive single spikes. The parameters for IB neurons are $a=0.02$, $b=0.2$, $c=-55$, and $d=4$. During the initial burst, the recovery variable $u$ increases rapidly and then switches to (regular) spiking dynamics.

Chattering (CH) neurons are characterized by bursts of closely spaced action potentials followed by a hyperpolarization. The inter-burst frequency can reach values as high as 40 Hz. The parameters for CH neurons are $a=0.02$, $b=0.2$, $c=-51$ to $-50$, and $d=2$. The lower value of $d$ compared to IB and RS neurons results in a slower recovery of the membrane potential after a burst.

Fast spiking (FS) neurons are one of two types of inhibitory neurons in the cortex. They fire periodic trains of action potentials at high frequencies with almost no adaptation, i.e., no slowing down of the firing rate. The parameters for FS neurons are $a=0.1$, $b=0.2$, $c=-65$, and $d=2$. The higher value of $a$ compared to RS neurons results in a faster recovery of the membrane potential.

Low-threshold spiking (LTS) neurons are the second type of inhibitory neurons in the cortex. Similar to FS neurons, they fire periodic trains of action potentials at high frequencies. However, LTS neurons exhibit spike-frequency adaptation, leading to a decrease in the firing rate over time. These neurons also exhibit a low threshold for spiking and can be simulated with the parameters $a=0.02$, $b=0.25$, $c=-65$, and $d=2$, where $b=0.25$ accounts for the low-threshold spiking behavior.

The model is also able to simulate thalamic-cortical (TC) neurons, which are found in the thalamus and project to the cortex. TC neurons provide the major input to the cortex and are involved in the generation of sleep rhythms. They have two firing modes. First, when exposed to a constant positive input current, they exhibit a tonic firing pattern. Second, when exposed to a negative input current that abruptly switches to 0, TC neurons fire a rebound burst of action potentials. The parameters for TC neurons are $a=0.02$, $b=0.25$, $c=-65$, and $d=0.05$.

In his original paper, Izhikevich shows that the model is able to simulate another interesting type of neuron: resonator (RZ) neurons. These neurons are able to resonate to rhythmic inputs of an appropriate frequency. As far as I understand, the model mimics this behavior by exhibiting subthreshold oscillations in response to a constant input current. Furthermore, the neuron would switch to repetitive spiking when exposed to a short input current pulse. The corresponding parameters for RZ neurons are $a=0.1$, $b=0.26$, $c=-65$, and $d=2$. However, I was not able to reproduce the resonator behavior with these parameters. I tried different values and different input currents, without success. If you have any suggestions on how to simulate the resonator behavior, please let me know in the comments below.

The following plot summarizes the different parameter sets for the various neuron types described above:

The Izhikevich model is a powerful tool for simulating the spiking and bursting behavior of neurons with a remarkable balance between simplicity and biological relevance. By adjusting the parameters $a$, $b$, $c$, and $d$, the model can simulate various types of neurons found in the mammalian neocortex, including regular spiking, fast spiking, and bursting neurons, while being computationally efficient.

Despite its advantages, the Izhikevich model is not without limitations. The model can oversimplify certain neuronal behaviors, particularly those involving complex ion channel dynamics and second messenger systems. Furthermore, the model’s reliance on parameter tuning for different neuron types can make it less predictive compared to more detailed models.

Nevertheless, the Izhikevich model serves as a bridge between biologically detailed, computationally demanding models and more abstract, simplified neuronal models. It provides a versatile platform for exploring neuronal behavior and network dynamics with considerable ease and efficiency. In the next post, we will discover how the Izhikevich model can be used to efficiently simulate networks of spiking neurons.

The complete code used in this blog post is available in this Github repositoryꜛ. Feel free to experiment with it, modify the parameters, and explore the dynamics of the Izhikevich model further. And for any ideas regarding the not yet solved resonator behavior, please leave a comment below.

- Izhikevich, Eugene M., *Simple model of spiking neurons*, 2003, IEEE Transactions on Neural Networks, Vol. 14, Issue 6, pages 1569-1572, doi: 10.1109/TNN.2003.820440ꜛ, PDFꜛ
- Izhikevich, Eugene M., *Dynamical systems in neuroscience: The geometry of excitability and bursting* (First MIT Press paperback edition), 2010, The MIT Press, ISBN: 978-0-262-51420-0, PDFꜛ

The model is named after the British physiologists Alan Lloyd Hodgkin and Andrew Fielding Huxley. Both scientists worked at the University of Cambridge and conducted their experiments on the giant axon of the squid. The Hodgkin-Huxley model was published in 1952ꜛ and builds on the voltage-clamp experiments they conducted in the 1940s. It describes the dynamics of the membrane potential of a neuron in terms of the opening and closing of ion channels and the resulting flow of sodium and potassium ions across the cell membrane, formulated as a set of four coupled ordinary differential equations (ODEs).

Both scientists received the Nobel Prize in Physiology or Medicine in 1963 for their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central regions of the nerve cell membrane. The Hodgkin-Huxley model enabled scientists for the first time to understand the dynamics of action potentials in neurons and the behavior of neural networks by using a mathematical framework, making it a cornerstone of computational neuroscience.

In the following, we derive the Hodgkin-Huxley model step by step. We will first introduce the basic concepts of ion flows and the Nernst equation, followed by the membrane potential equation and the gating variables which comprise the model.

When we discussed the Integrate-and-Fire model or the FitzHugh-Nagumo model, we actually neglected the role of **ion flows** across the cell membrane in the generation of action potentials. Indeed, ion flows across the cell membrane play a crucial role in the generation of action potentials in neurons. They are responsible for the changes in the membrane potential $U_{m}$ that lead to the generation of action potentials. Biologically, cell membranes consist of a lipid bilayer that separates the intracellular and extracellular compartments of the cell. The lipid bilayer is impermeable to ions. However, ion transport is facilitated by specific proteins that act as **gates** through the membrane: **ion channels** and **ion pumps**. Ion channels allow ions to flow passively across the membrane and are selective for specific ions, with their activity regulated by voltage, i.e., the membrane potential. In contrast, ion pumps actively transport ions across the membrane, establishing concentration gradients. For example, inside mammalian neurons, sodium ions are less concentrated ($\sim 10$ mM) compared to the outside ($\sim 145$ mM), while potassium ions are more concentrated inside ($\sim 140$ mM) than outside ($\sim 5$ mM). This imbalance is crucial for neuron function.

Let’s recall that the **membrane potential** $U_{m}$ is defined as the voltage difference between the intra- and extracellular compartments of the cell, measured at the cell membrane.

The cartoon above demonstrates that differences in ion concentrations across the membrane, particularly potassium’s ($\text{K}$) higher internal concentration and sodium’s ($\text{Na}$) external dominance, contribute to establishing the membrane potential $U_m$. Typically, membrane potentials vary from $-70$ mV to $-40$ mV, with potassium ions being a primary driver due to their outward diffusion, leaving behind negative charges and creating charge separation. While potassium’s role is emphasized for its major contribution to $U_m$, sodium, chloride ($\text{Cl}$), calcium ($\text{Ca}$), and other ions also affect the membrane potential.

When each ion type’s electrochemical gradient is balanced, the ions have reached their **equilibrium potential** $U_\text{eq,ion}$, ceasing net movement. This equilibrium potential is determined by the **Nernst equation**, given by

\[\begin{align} U_\text{eq,ion} = \frac{k_B T}{q} \ln\left(\frac{n_{out}}{n_{in}}\right), \label{eq:nernst} \end{align}\]

where $U_\text{eq,ion}$ is the equilibrium potential of the ion, $k_B$ is the Boltzmann constant, $T$ is the temperature, $q$ is the charge of the ion, and $n_{out}$ and $n_{in}$ are the concentrations of the ion outside and inside the cell, respectively. $U_\text{eq,ion}$ is typically measured with the inside of the cell as the reference point, relative to the outside (which is conventionally set as 0 mV). For sodium, for instance, the equilibrium potential $U_\text{eq, Na}$ is around $+67$ mV, thus, there are more sodium ions outside the cell than inside (which turns the logarithm in Eq. ($\ref{eq:nernst}$) positive). At this potential, the electrochemical driving force for sodium ions into the cell balances with the force driving them out, given the ion’s higher concentration outside the cell. The equilibrium potential of potassium $U_\text{eq, K}$ is around $-85$ mV, analogously indicating the opposite.
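To connect the Nernst equation to the concentration values quoted above, here is a quick numeric sanity check. The temperature of roughly 20 °C is an assumption for illustration; body temperature shifts the results by a few mV:

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C (monovalent cation)
T = 293.15           # temperature in K (~20 degrees C, an assumption)

def nernst(n_out, n_in):
    """Equilibrium potential in mV via the Nernst equation."""
    return 1e3 * (k_B * T / q) * np.log(n_out / n_in)

# mammalian concentrations quoted above, in mM:
print(f"U_eq,Na = {nernst(145, 10):+.1f} mV")  # roughly +67 mV
print(f"U_eq,K  = {nernst(5, 140):+.1f} mV")   # roughly -84 mV
```

Note how the sign falls out of the logarithm: sodium is more concentrated outside ($n_{out} > n_{in}$), so $U_\text{eq,Na}$ is positive, while the opposite holds for potassium.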

If the membrane potential $U_{m}$ becomes lower than $U_\text{eq, Na}$, $\text{Na}^+$ ions flow into the cell via ion channels in order to reduce the difference of the sodium concentration between the intra- and extracellular compartments. This flow leads to a depolarization of the membrane, i.e., the membrane potential becomes more positive. If $U_{m}$ is higher than $U_\text{eq, Na}$, $\text{Na}^+$ ions flow out of the cell, leading to a hyperpolarization of the membrane. Thus, the direction of the ion flow is reversed when the membrane potential reaches the equilibrium potential, which is why the equilibrium potential is often also called the **reversal potential**.

Neurons are influenced by multiple ion types simultaneously, contributing to the overall membrane potential. The **resting potential** $U_\text{rest}$ of a neuron, which is around $-65$ mV, is a balanced state where the outflow of potassium is matched by the inflow of sodium ions (since $U_{eq,K} < U_\text{rest} < U_{eq,Na}$), maintained actively by ion pumps. In the resting state, no action potentials are generated.

The **membrane potential equation** integrates the effects of all ion flows through the cell membrane, accounting for the passive movement of ions through channels and the active transport by ion pumps. We could think of the neuron’s membrane as an electrical circuit, where the lipid bilayer acts as a capacitor $C$ storing charge (thus, reflecting the membrane’s ability to hold ions on either side, contributing to the membrane potential’s temporal dynamics), and the ion channels and pumps serve as variable resistors $R_{ion} = 1/g_{ion}$, controlling the flow of electrical current. $g_{ion}$ is the conductance of the ion channel.

The relationship between current $I_{ion}$, voltage $U_{m}$, and the equilibrium potential $U_{eq,ion}$ is given by Ohm’s law,

\[\begin{align} I_{ion} &= g_{ion} (U_{m} - U_{eq,ion}), \end{align}\]where $(U_{m} - U_{eq,ion})$ can be considered as the driving force for the ion flow. The overall externally applied current $I_{ext}(t)$ splits into the current at the capacitor, $I_{C} = C \frac{dU_{m}}{dt}$, and the sum of the currents through the ion channels, $\sum_{ion} I_{ion}$,

\[\begin{align} I_{ext}(t) & = I_{C} + \sum_{ion} I_{ion} \\ \Leftrightarrow \; C \frac{dU_{m}}{dt} &= I_{ext}(t) - \sum_{ion} I_{ion}. \label{eq:membrane_potential} \end{align}\]The last transformation is the **membrane potential equation** and the first ODE of the Hodgkin-Huxley model. It describes the dynamics of the membrane potential $U_{m}$ over time $t$. The sum of the currents through the ion channels, $\sum_{ion} I_{ion}$, is the sum of the currents due to, e.g., sodium, potassium, and other leak ions (like chloride), $I_{Na}$, $I_{K}$, and $I_{L}$, respectively. Each of these currents is driven by the difference between the membrane potential and their respective equilibrium (or reversal) potentials, as described by the Nernst equation. The external current $I_{ext}(t)$ is any external current applied to the neuron, such as synaptic inputs or experimental stimulations.
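To make the bookkeeping concrete, here is a minimal numerical sketch of this current balance with a single forward-Euler step. The fixed conductance values are placeholders, since in the full model they vary with voltage and time via the gating variables introduced below:

```python
C_m = 1.0                                       # membrane capacitance, uF/cm^2
g = {"Na": 0.01, "K": 0.37, "L": 0.3}           # conductances, mS/cm^2 (illustrative)
U_eq = {"Na": 120.0, "K": -77.0, "L": -54.387}  # equilibrium potentials, mV

def dUm_dt(U_m, I_ext):
    """Membrane potential equation: C dU_m/dt = I_ext - sum_ion g_ion (U_m - U_eq,ion)."""
    I_ion = sum(g[ion] * (U_m - U_eq[ion]) for ion in g)
    return (I_ext - I_ion) / C_m

# one forward-Euler step: a positive external current depolarizes the membrane
U_m, dt = -65.0, 0.01  # mV, ms
U_m = U_m + dt * dUm_dt(U_m, I_ext=10.0)
print(U_m)  # slightly depolarized from -65 mV
```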

The conductance of the ion channels, $g_{ion}$, is not constant but voltage and time dependent. For instance, when all channels are open, $g_{ion}$ is maximal. However, some of the channels are usually blocked. Hodgkin and Huxley measured how the channel conductance changes with time and voltage. They introduced so-called **gating variables** $m$, $h$, and $n$ for the sodium and potassium channels to describe their findings. The leak conductance showed no voltage dependence and is constant:

\[g_{Na} = \bar{g}_{Na} \, m^3 h, \qquad g_{K} = \bar{g}_{K} \, n^4, \qquad g_{L} = \text{const.},\]

with $\bar{g}_{ion}$ being the maximal conductance of the respective channel type.

The gating variables describe the probability of the channels to be open at a given time. $m$ and $h$ control the sodium channel, while $n$ controls the potassium channel. $m$ controls the activation of the sodium channel, $h$ its inactivation, and $n$ the activation of the potassium channel. The gating variables are described by differential equations that depend on the membrane potential $U_{m}$:

\[\begin{align} \frac{dm}{dt} & = \alpha_{m}(U_{m}) (1 - m) - \beta_{m}(U_{m}) m, \label{eq:dm} \\ \frac{dh}{dt} & = \alpha_{h}(U_{m}) (1 - h) - \beta_{h}(U_{m}) h, \label{eq:dh} \\ \frac{dn}{dt} & = \alpha_{n}(U_{m}) (1 - n) - \beta_{n}(U_{m}) n. \label{eq:dn} \end{align}\]Equations ($\ref{eq:dm}$) to ($\ref{eq:dn}$) constitute the second set of ODEs of the Hodgkin-Huxley model.

The functions $\alpha_{p}(U_{m})$ and $\beta_{p}(U_{m})$ with $p \in \lbrace m, h, n\rbrace$ are voltage-dependent functions that describe the opening and closing of the ion channels. They are also called **rate functions** and were experimentally determined by Hodgkin and Huxley. They can be expressed as functions of $U_m$ that typically involve exponential or sigmoidal terms, reflecting the voltage-sensitive opening and closing of ion channels. They are given by

\[\begin{align} \alpha_{p}(U_{m}) & = \frac{p_\infty(U_{m})}{\tau_{p}}, \\ \beta_{p}(U_{m}) & = \frac{1 - p_\infty(U_{m})}{\tau_{p}}, \end{align}\]

where $p_\infty(U_{m})$ and $(1 - p_\infty(U_{m}))$ are the steady-state values of the gating variable $p$ at voltage $U_{m}$ for activation and inactivation, respectively, and $\tau_{p}$ is the time constant of the gating variable. The rate functions, which give these steady-state values a sigmoidal (Boltzmann-like) voltage dependence, have the general form:

\[\begin{align} \alpha_{p}(U_{m}) & = \frac{\theta_{p,1} (U_{m} - \theta_{p,2})}{\theta_{p,4} - \exp\left(\frac{\theta_{p,2}-U_{m}}{\theta_{p,3}}\right)}, \label{eq:alpha} \\ \beta_{p}(U_{m}) & = \theta_{p,5} \exp\left(-\frac{U_{m}}{\theta_{p,6}}\right), \label{eq:beta} \end{align}\]with $\theta_{p,i}$ being the parameters of these equations. In their original work, Hodgkin and Huxley found the following values for the parameters $\theta_{p,i}$:

\[\begin{align*} \alpha_{m}(U_{m}) & = 0.1 \cdot \frac{25 - U_{m}}{\exp\left(\frac{25 - U_{m}}{10}\right) - 1}, \\ \beta_{m}(U_{m}) & = 4 \cdot \exp\left(-\frac{U_{m}}{18}\right) \end{align*}\]and

\[\begin{align*} \alpha_{h}(U_{m}) & = 0.07 \cdot \exp\left(-\frac{U_{m}}{20}\right), \\ \beta_{h}(U_{m}) & = \frac{1}{\exp\left(\frac{30 - U_{m}}{10}\right) + 1}, \end{align*}\]and

\[\begin{align*} \alpha_{n}(U_{m}) & = 0.01 \cdot \frac{10 - U_{m}}{\exp\left(\frac{10 - U_{m}}{10}\right) - 1}, \\ \beta_{n}(U_{m}) & = 0.125 \cdot \exp\left(-\frac{U_{m}}{80}\right). \end{align*}\]In total, the **Hodgkin-Huxley model** consists of a set of four coupled, non-linear ordinary differential equations: one for the membrane potential equation (Eq. ($\ref{eq:membrane_potential}$)) and three for the gating variables (Eq. ($\ref{eq:dm}$) to ($\ref{eq:dn}$)) associated with the sodium and potassium channels. We can express the latter equations in a slightly more elegant form, so that we finally have the following system of ODEs:

\[\begin{align} C \frac{dU_{m}}{dt} & = I_{ext}(t) - \bar{g}_{Na} m^3 h \, (U_{m} - U_{eq,Na}) - \bar{g}_{K} n^4 (U_{m} - U_{eq,K}) - \bar{g}_{L} (U_{m} - U_{eq,L}), \label{eq:hh1} \\ \frac{dp}{dt} & = \alpha_{p}(U_{m}) (1 - p) - \beta_{p}(U_{m}) \, p, \label{eq:hh2} \end{align}\]

with $\alpha_{p}(U_{m})$ and $\beta_{p}(U_{m})$ given by Equations ($\ref{eq:alpha}$) and ($\ref{eq:beta}$) and $p \in \lbrace m, h, n\rbrace$.
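These definitions can be sanity-checked numerically. The snippet below evaluates the rate functions of the potassium gate $n$ at an arbitrary test voltage and verifies that the steady-state value $p_\infty = \alpha_p/(\alpha_p + \beta_p)$ and the time constant $\tau_p = 1/(\alpha_p + \beta_p)$ reproduce the rate functions via $\alpha_p = p_\infty/\tau_p$ and $\beta_p = (1 - p_\infty)/\tau_p$:

```python
import numpy as np

# rate functions of the potassium activation gate n (original HH values):
def alpha_n(U_m): return 0.01 * (10 - U_m) / (np.exp((10 - U_m) / 10) - 1)
def beta_n(U_m):  return 0.125 * np.exp(-U_m / 80)

U_m = -30.0  # an arbitrary test voltage in mV
a, b = alpha_n(U_m), beta_n(U_m)

n_inf = a / (a + b)  # steady-state value of the gating variable
tau_n = 1 / (a + b)  # time constant in ms

# the (alpha, beta) and (n_inf, tau_n) descriptions are equivalent:
print(np.isclose(a, n_inf / tau_n), np.isclose(b, (1 - n_inf) / tau_n))
```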

Equations ($\ref{eq:hh1}$) and ($\ref{eq:hh2}$) together provide a dynamic system that models the electrical behavior of neurons by describing how ion conductances change in response to membrane potential and how these changes affect the membrane potential over time. The huge advantage of this model over simpler models is that it can reproduce the dynamics of action potentials in neurons, depending on the actual physical properties of the ion channels. Unlike simplified models such as the previously discussed FitzHugh-Nagumo model, which often describe the action potential phenomenologically, the Hodgkin-Huxley model is based on the biophysical properties of the neuron and can therefore be used to study the dynamics of action potentials in neurons in more detail. Also, the model can be extended to include many other ion channel types, which proves the versatility of the model and its applicability to a broad variety of neurons.

The four ODEs cannot generally be solved analytically, but only by numerical approximation methods such as the Runge-Kutta method that we will apply in our simulations below.
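For orientation, here is a minimal sketch of the classical fourth-order Runge-Kutta (RK4) step for a generic ODE system $dy/dt = f(t, y)$; our simulations below rely on SciPy's `solve_ivp`, which by default uses an adaptive Runge-Kutta method (RK45):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """Advance the state y(t) by one step of size dt using classical RK4."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# quick check on dy/dt = -y with y(0) = 1, whose exact solution is exp(-t):
def f(t, y):
    return -y

y, dt = np.array([1.0]), 0.01
for k in range(100):
    y = rk4_step(f, k * dt, y, dt)
print(y[0])  # very close to exp(-1) ≈ 0.36788
```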

The units of the variables in the Hodgkin-Huxley model are as follows:

Variable | Unit | Description |
---|---|---|
$U_{m}$ | mV | Membrane potential |
$I_{ext}$ | $\mu$A cm$^{-2}$ | External current |
$C$ | $\mu$F cm$^{-2}$ | Membrane capacitance |
$g_{Na}$ | mS cm$^{-2}$ | Sodium conductance |
$g_{K}$ | mS cm$^{-2}$ | Potassium conductance |
$g_{L}$ | mS cm$^{-2}$ | Leak conductance |
$m, h, n$ | - | Gating variables |
$U_{eq,Na}$ | mV | Sodium equilibrium potential |
$U_{eq,K}$ | mV | Potassium equilibrium potential |
$U_{eq,L}$ | mV | Leak equilibrium potential |
$\alpha_{p}$ | ms$^{-1}$ | Rate function |
$\beta_{p}$ | ms$^{-1}$ | Rate function |
$\tau_{p}$ | ms | Time constant |

The normalization by cm$^2$ accounts for a (potential) scaling by the membrane area. The membrane potential change, $\Delta U_{m}$, due to ionic currents is influenced by the membrane’s capacitance, $C_m$, which in turn is proportional to the membrane area or patch ($A$). Thus, $C_m$ is typically given in units of capacitance per unit area ($\mu$F/cm$^2$). When we define current density, $I_{ion}$, in terms of $\mu$A/cm$^2$, we are essentially considering how much current flows per unit area of the neuron’s membrane. This approach ensures that the model’s output can be applied universally to different neurons regardless of their size by normalizing the effect of the current to an area.

In the following, we will go through the main parts of the Python code for simulating the Hodgkin-Huxley model. The entire code can be found in this Github repositoryꜛ.

We use the following constants for our simulations:

Constant | Value | Description |
---|---|---|
$C_m$ | 1.0 $\mu$F/cm$^2$ | Membrane capacitance |
$g_{Na}$ | 120.0 mS/cm$^2$ | Sodium conductance |
$g_{K}$ | 36.0 mS/cm$^2$ | Potassium conductance |
$g_{L}$ | 0.3 mS/cm$^2$ | Leak conductance |
$U_{eq,Na}$ | 120.0 mV | Sodium equilibrium potential |
$U_{eq,K}$ | -77.0 mV | Potassium equilibrium potential |
$U_{eq,L}$ | -54.387 mV | Leak equilibrium potential |

To calculate the resulting resting membrane potential ($U_{\text{rest}}$) from these constants, we need to consider the equilibrium condition at rest, where the net current through the membrane is zero: the sum of all ionic currents and any applied currents must cancel out. This calculation typically assumes no external stimulation ($I_{\text{ext}} = 0$):

\[I_{\text{Na}} + I_{\text{K}} + I_{\text{L}} = 0\]We assume that $m$, $h$, and $n$ are at their steady-state values at $U_{\text{m}} = U_{\text{rest}}$. However, because calculating this directly can be complex due to the nonlinear nature of the equations, we make a simplification and treat the sodium current as negligible far from the threshold potential (its activation $m$ is close to zero there), reducing the equation to the potassium and leak currents:

\[\begin{align*} I_{\text{K}} + I_{\text{L}} &= 0 \\ \Leftrightarrow \quad g_{\text{K}} n^4 (U_{\text{rest}} - U_{\text{eq,K}}) + g_{\text{L}} (U_{\text{rest}} - U_{\text{eq,L}}) &= 0 \\ \Leftrightarrow \quad U_{\text{rest}} &= \frac{g_{\text{K}} n^4 U_{\text{eq,K}} + g_{\text{L}} U_{\text{eq,L}}}{g_{\text{K}} n^4 + g_{\text{L}}} \end{align*}\]To estimate $n^4$ at rest, we use $n_{\infty}$ (see next section) calculated at an assumed $U_{\text{rest}}$ like $-65$ mV:

```
import numpy as np

def n_inf(U_m):
    alpha_n = 0.01 * (10 - U_m) / (np.exp((10 - U_m) / 10) - 1)
    beta_n = 0.125 * np.exp(-U_m / 80)
    return alpha_n / (alpha_n + beta_n)

# constants:
g_K = 36.0    # mS/cm^2
g_L = 0.3     # mS/cm^2
U_K = -77.0   # mV
U_L = -54.387 # mV

# assume an initial U_rest for calculating n_inf:
U_rest_guess = -65.0 # mV
n4 = n_inf(U_rest_guess)**4

# calculate U_rest:
U_rest = (g_K * n4 * U_K + g_L * U_L) / (g_K * n4 + g_L)
print(f"Estimated resting membrane potential U_rest: {U_rest} mV")
```

```
Estimated resting membrane potential U_rest: -54.387000012713244 mV
```

For more accuracy, you might consider an iterative approach where you adjust $U_{\text{rest}}$ based on recalculating $n_{\infty}$ until the changes in $U_{\text{rest}}$ are minimal between iterations.
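Such a fixed-point iteration is only a few lines. Here is a sketch that reuses the constants above and stops once successive estimates agree to within a nanovolt; it converges almost immediately because $g_K n^4$ is several orders of magnitude smaller than $g_L$ at these voltages:

```python
import numpy as np

def n_inf(U_m):
    """Steady-state value of the potassium gating variable n."""
    alpha_n = 0.01 * (10 - U_m) / (np.exp((10 - U_m) / 10) - 1)
    beta_n = 0.125 * np.exp(-U_m / 80)
    return alpha_n / (alpha_n + beta_n)

g_K, g_L = 36.0, 0.3       # mS/cm^2
U_K, U_L = -77.0, -54.387  # mV

U_rest = -65.0  # initial guess in mV
for _ in range(100):
    n4 = n_inf(U_rest) ** 4
    U_new = (g_K * n4 * U_K + g_L * U_L) / (g_K * n4 + g_L)
    converged = abs(U_new - U_rest) < 1e-9  # tolerance: 1 nV
    U_rest = U_new
    if converged:
        break

print(f"converged U_rest: {U_rest:.3f} mV")  # close to -54.387 mV
```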

Our chosen approach provides a close approximation of the resting membrane potential based on our model’s constants and is typically sufficient unless precise dynamics near the threshold are critical.

We want to have a good starting point for the gating variables at the resting membrane potential $U_{\text{rest}}$. This ensures that the starting conditions for the gating variables are consistent with the biophysical properties of the channels at the resting or initial membrane potential.

The steady-state (equilibrium) values $m_{\infty}$, $h_{\infty}$, and $n_{\infty}$ for the gating variables can be calculated using Equation ($\ref{eq:hh2}$):

\[\frac{dp}{dt} = \alpha_p(U_m) (1 - p) - \beta_p(U_m) p\]To find the steady-state values $p_{\infty}$, we set the time derivative of the gating variable to zero (assuming the system is in equilibrium and does not change): $\frac{dp}{dt} = 0$. Plugging this into the equation, we get:

\[\begin{align*} \alpha_p(U_m) (1 - p) - \beta_p(U_m) p &=0 \\ \Leftrightarrow \quad \alpha_p(U_m) (1 - p) &= \beta_p(U_m) p \\ \Leftrightarrow \quad \alpha_p(U_m) - \alpha_p(U_m) p &= \beta_p(U_m) p \\ \Leftrightarrow \quad \beta_p(U_m) p + \alpha_p(U_m) p &= \alpha_p(U_m) \\ \Leftrightarrow \quad \frac{\alpha_p(U_m)}{\alpha_p(U_m) + \beta_p(U_m)} &= p = p_\infty \end{align*}\]$p_{\infty}$ represents the equilibrium or steady-state proportion of channels that are in the open state (for $m$ and $n$) or not inactivated (for $h$) at a given membrane potential. At any given $U_m$, $p_{\infty}$ provides a snapshot of how receptive the channels are to opening (for $m$ and $n$) or how likely they are to be available (for $h$).

Next, we need to select a range of $U_m$ values around the resting membrane potential (e.g., from -100 mV to 100 mV) to calculate the steady-state values. The corresponding values at $U_m=U_{rest}$ will be our initial conditions for the gating variables, $m_0$, $h_0$, and $n_0$:

```
import numpy as np
import matplotlib.pyplot as plt

# define the range of membrane potentials:
U_m_range = np.linspace(-100, 100, 200)

# define alpha and beta functions for m, h, and n:
def alpha_m(U_m): return 0.1 * (25 - U_m) / (np.exp((25 - U_m) / 10) - 1)
def beta_m(U_m):  return 4.0 * np.exp(-U_m / 18)
def alpha_h(U_m): return 0.07 * np.exp(-U_m / 20)
def beta_h(U_m):  return 1 / (np.exp((30 - U_m) / 10) + 1)
def alpha_n(U_m): return 0.01 * (10 - U_m) / (np.exp((10 - U_m) / 10) - 1)
def beta_n(U_m):  return 0.125 * np.exp(-U_m / 80)

# calculate the steady-state values for m, h, and n:
m_inf = [alpha_m(V) / (alpha_m(V) + beta_m(V)) for V in U_m_range]
h_inf = [alpha_h(V) / (alpha_h(V) + beta_h(V)) for V in U_m_range]
n_inf = [alpha_n(V) / (alpha_n(V) + beta_n(V)) for V in U_m_range]

# find the index where U_m is closest to U_rest:
U_find = U_rest
index_zero = np.argmin(np.abs(U_m_range - U_find))
print(f"At U_m = {U_find:.2f} mV, m_inf = {m_inf[index_zero]:.4f}, "
      f"h_inf = {h_inf[index_zero]:.4f}, n_inf = {n_inf[index_zero]:.4f}")

# plotting:
plt.figure(figsize=(5.5, 5))
plt.plot(U_m_range, m_inf, label=r'$m_\infty(U_m)$', c='r')
plt.plot(U_m_range, h_inf, label=r'$h_\infty(U_m)$', c='g')
plt.plot(U_m_range, n_inf, label=r'$n_\infty(U_m)$', c='b')
plt.axvline(x=U_find, color='gray', linestyle='--', label=f'$U_m$={U_find:.2f} mV')
# indicate and annotate the steady-state values at U_m = U_rest:
plt.plot(U_find, m_inf[index_zero], 'ro')
plt.text(U_find, m_inf[index_zero], f' {m_inf[index_zero]:.2f}',
         verticalalignment='bottom', color='red')
plt.plot(U_find, h_inf[index_zero], 'go')
plt.text(U_find, h_inf[index_zero], f' {h_inf[index_zero]:.2f}',
         verticalalignment='bottom', color='green')
plt.plot(U_find, n_inf[index_zero], 'bo')
plt.text(U_find, n_inf[index_zero], f'{n_inf[index_zero]:.2f} ',
         verticalalignment='bottom', horizontalalignment='right', color='blue')
plt.title('Finding steady-state values of m, h, and n')
plt.xlabel('Membrane potential $U_m$ (mV)')
plt.ylabel('Steady-state value')
plt.legend()
plt.grid(True)
plt.show()
```

```
At U_m = -54.39 mV, m_inf = 0.0000, h_inf = 0.9998, n_inf = 0.0040
```

Interpreting the results:

- $m_{\infty} = 0.0000$: This indicates that the activation gates of the sodium channels are nearly completely closed at this hyperpolarized potential. Sodium channels are mostly inactive because the membrane potential is far from the threshold needed to open these channels.
- $h_{\infty} = 0.9998$: This shows that the inactivation gates of the sodium channels are fully open, meaning the channels are ready to be activated if the membrane potential depolarizes to the threshold level. This is a protective mechanism ensuring that sodium channels can quickly activate if the neuron begins to depolarize.
- $n_{\infty} = 0.0040$: Reflects that only a very small fraction of the potassium channels’ activation gates are open at this membrane potential. However, since potassium channels play a significant role in maintaining the resting membrane potential, even a small proportion of open channels is significant.

When simulating the neuron with these values, starting from rest, the model accurately reflects the physiological state of the ion channels at rest, ensuring that any response to stimuli or changes in conditions is physiologically realistic.

```
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# constants:
C_m = 1.0    # membrane capacitance, in uF/cm^2
g_Na = 120.0 # maximum conductances, in mS/cm^2
g_K = 36.0
g_L = 0.3
U_Na = 120.0 # reversal potentials, in mV
U_K = -77.0
U_L = -54.387

# set initial conditions: V_m, m, h, n:
m0 = m_inf[index_zero]
h0 = h_inf[index_zero]
n0 = n_inf[index_zero]
y0 = [U_rest, m0, h0, n0]

# parameters for I_ext:
I_amp = 19 # amplitude of external current in uA/cm^2
intervals = [[5, 20]]

# time range:
t = np.linspace(0, 50, 5000) # 50 milliseconds, 5000 points

# define the I_ext function to handle multiple intervals:
def I_ext(t, I_amp=1.0, intervals=[[5, 6], [10, 17]]):
    """Return I_amp if t is within any of the specified intervals, else return 0."""
    for (start, end) in intervals:
        if start <= t <= end:
            return I_amp
    return 0

# Hodgkin-Huxley model differential equations:
def hodgkin_huxley(t, y, I_amp, intervals):
    V_m, m, h, n = y
    I_ext_current = I_ext(t, I_amp, intervals)
    dVmdt = (I_ext_current
             - g_Na * m**3 * h * (V_m - U_Na)
             - g_K * n**4 * (V_m - U_K)
             - g_L * (V_m - U_L)) / C_m
    alpha_m = 0.1 * (25 - V_m) / (np.exp((25 - V_m) / 10) - 1)
    beta_m = 4.0 * np.exp(-V_m / 18)
    alpha_h = 0.07 * np.exp(-V_m / 20)
    beta_h = 1 / (np.exp((30 - V_m) / 10) + 1)
    alpha_n = 0.01 * (10 - V_m) / (np.exp((10 - V_m) / 10) - 1)
    beta_n = 0.125 * np.exp(-V_m / 80)
    dmdt = alpha_m * (1 - m) - beta_m * m
    dhdt = alpha_h * (1 - h) - beta_h * h
    dndt = alpha_n * (1 - n) - beta_n * n
    return [dVmdt, dmdt, dhdt, dndt]

# solve ODE:
sol = solve_ivp(hodgkin_huxley, [t.min(), t.max()], y0, t_eval=t,
                args=(I_amp, intervals))

# plot results:
plt.figure(figsize=(7, 9.75))

# plotting membrane potential:
plt.subplot(4, 1, 1)
plt.plot(sol.t, sol.y[0], 'k', label='$U_m(t)$')
plt.ylabel('membrane potential\n$U_m$ (mV)')
plt.title('Membrane potential, gating variables, external current and phase plane plots')
# indicate U_rest:
plt.axhline(y=U_rest, color='gray', linestyle='--', label='$U_{rest}$')
plt.legend(loc='upper right')
plt.ylim(-100, 125)

# plotting gating variables:
plt.subplot(4, 1, 2)
plt.plot(sol.t, sol.y[1], 'r', label='$m$')
plt.plot(sol.t, sol.y[2], 'g', label='$h$')
plt.plot(sol.t, sol.y[3], 'b', label='$n$')
plt.ylabel('gating variables')
plt.legend(loc='upper right')
plt.xlabel('time (ms)')

# plotting external current:
plt.subplot(4, 1, 3)
plt.plot(sol.t, [I_ext(time, I_amp, intervals) for time in sol.t], label='$I_{ext}(t)$')
plt.ylabel('external current\n$I_{ext}$ ($\\mu A/cm^2$)')
plt.legend(loc='upper right')
plt.xlabel('time (ms)')

# plot U_m and m in phase space:
plt.subplot(4, 3, 10)
plt.plot(sol.y[0], sol.y[1], 'r', lw=1)
plt.plot(sol.y[0][0], sol.y[1][0], 'bo', label='start point', alpha=0.75, markersize=7)
plt.plot(sol.y[0][-1], sol.y[1][-1], 'o', c="yellow", label='end point', alpha=0.75, markersize=7)
plt.xlabel('$U_m$ (mV)')
plt.ylabel('$m$/$h$/$n$')
plt.ylim(-0.1, 1.1)

# plot U_m and h in phase space:
plt.subplot(4, 3, 11)
plt.plot(sol.y[0], sol.y[2], 'g', lw=1)
plt.plot(sol.y[0][0], sol.y[2][0], 'bo', label='start point', alpha=0.75, markersize=7)
plt.plot(sol.y[0][-1], sol.y[2][-1], 'o', c="yellow", label='end point', alpha=0.75, markersize=7)
plt.xlabel('$U_m$ (mV)')
plt.ylim(-0.1, 1.1)

# plot U_m and n in phase space:
plt.subplot(4, 3, 12)
plt.plot(sol.y[0], sol.y[3], 'b', lw=1)
plt.plot(sol.y[0][0], sol.y[3][0], 'bo', label='start', alpha=0.75, markersize=7)
plt.plot(sol.y[0][-1], sol.y[3][-1], 'o', c="yellow", label='end', alpha=0.75, markersize=7)
plt.xlabel('$U_m$ (mV)')
plt.ylim(-0.1, 1.1)
plt.legend(loc='upper right')

plt.tight_layout()
plt.show()
```

As input current, we define a Heaviside step function with an adjustable amplitude `I_amp` that is applied for an adjustable set of time intervals:

\[I_{ext}(t) = I_{amp} \sum_{k=1}^{n} \left[ \Theta(t - t_{2k-1}) - \Theta(t - t_{2k}) \right],\]

with $n$ being the number of intervals, $t_{2k-1}$ and $t_{2k}$ the start and end times of the $k$-th interval, and $\Theta$ the Heaviside step function.

Besides plotting the resulting membrane potential, gating variables, and external current, we also plot the phase plane trajectories of the membrane potential $U_m$ with the gating variables $m$, $h$, and $n$.

Let’s explore at which minimum current an action potential can be triggered with our model. We will apply step currents of different amplitudes and observe the resulting membrane potential.
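Such an amplitude sweep can be sketched as follows. The snippet restates the model compactly so that it is self-contained; the rounded initial gating values and the spike criterion (a peak above 50 mV, far above any subthreshold response for these constants) are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# model constants as in the script above:
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
U_Na, U_K, U_L = 120.0, -77.0, -54.387

def hodgkin_huxley(t, y, I_amp, t_on, t_off):
    V_m, m, h, n = y
    I = I_amp if t_on <= t <= t_off else 0.0  # single step current
    alpha_m = 0.1 * (25 - V_m) / (np.exp((25 - V_m) / 10) - 1)
    beta_m = 4.0 * np.exp(-V_m / 18)
    alpha_h = 0.07 * np.exp(-V_m / 20)
    beta_h = 1 / (np.exp((30 - V_m) / 10) + 1)
    alpha_n = 0.01 * (10 - V_m) / (np.exp((10 - V_m) / 10) - 1)
    beta_n = 0.125 * np.exp(-V_m / 80)
    dVmdt = (I - g_Na * m**3 * h * (V_m - U_Na)
               - g_K * n**4 * (V_m - U_K)
               - g_L * (V_m - U_L)) / C_m
    return [dVmdt,
            alpha_m * (1 - m) - beta_m * m,
            alpha_h * (1 - h) - beta_h * h,
            alpha_n * (1 - n) - beta_n * n]

def spike_triggered(I_amp):
    """True if a 15-ms step current of amplitude I_amp evokes a spike."""
    y0 = [-54.387, 0.0, 1.0, 0.004]  # approximate resting-state values
    sol = solve_ivp(hodgkin_huxley, [0, 50], y0,
                    args=(I_amp, 5, 20), max_step=0.05)
    return sol.y[0].max() > 50.0  # assumed spike criterion (overshoot)

for I_amp in [5, 10, 15, 20, 25]:
    print(f"I_amp = {I_amp:2d} uA/cm^2 -> spike: {spike_triggered(I_amp)}")
```

With the constants used in this post, such a sweep brackets the threshold amplitude near the value discussed below.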

The above study reveals that, with the given set of constants, our model fires an action potential at a minimum current of roughly 19 $\mu$A/cm$^2$. That is, a certain **threshold potential** needs to be reached to trigger an action potential. This behavior mirrors a neuron’s physiological response to external stimuli: the threshold potential is an essential property of neurons, determined by the balance of the inward sodium current and the outward potassium current, and a certain level of depolarization is required to initiate an action potential.

Now, let’s further investigate how the ion channels’ gating variables $m$, $h$, and $n$ behave when the threshold potential is reached.

In the plots above, we have applied an external current of 19 $\mu$A/cm$^2$ to trigger an action potential. The membrane potential $U_m$ shows the typical phases of an action potential: **depolarization**, **repolarization**, and **hyperpolarization**. The phase plane plots show that the system converges to a limit cycle of one period. No further cycles and, thus, no further oscillations or action potentials were triggered, as the current was stopped after 14 ms. The gating variables reflect the physiological processes inside the cell during an action potential, as we will now discuss in more detail.

During the **depolarization phase**, the activation variable $m$ increases rapidly, while the inactivation variable $h$ decreases. This reflects the sodium channels opening and the sodium current flowing into the cell. The potassium activation variable $n$ also increases, but slower compared to $m$ since the potassium channels open more slowly.

The opening of the potassium channels leads to the subsequent **repolarization phase** as potassium ions start to flow out of the neuron, reversing the depolarization that occurred due to the influx of sodium ions. At the onset of the repolarization phase, the sodium activation variable $m$ reaches its maximum and then starts to decrease rapidly, while the inactivation variable $h$ increases, and both variables subsequently turn back towards their resting values after just a few milliseconds. This reflects the sodium channels closing and the sodium current decreasing quickly. The potassium activation variable $n$ keeps increasing and reaches its maximum at the end of the repolarization phase.

After its maximum, $n$ decreases, but much slower than $m$, thus potassium is still flowing out of the cell while no more sodium is flowing in. This leads to the **hyperpolarization phase** as the membrane potential becomes more negative than the normal resting potential. Hyperpolarization eventually diminishes as the membrane potential stabilizes back to the resting level, assisted by the closing of potassium channels and the restoration of sodium and potassium ion gradients by the sodium-potassium pump.

These dynamics ensure that the neuron does not remain in a permanently activated state and can respond to new stimuli appropriately. As previously described, the refractory period splits into the **absolute refractory period** and **relative refractory period**. The absolute refractory period spans the depolarization and part of the repolarization phase, during which the neuron cannot fire again, regardless of stimulus intensity. The relative refractory period follows during the hyperpolarization phase, requiring a stronger-than-usual stimulus to initiate another action potential. Together, these periods prevent the neuron from firing again too soon, ensuring that each signal is distinct and leads to controlled neural activity. The interplay of the gating variables and their corresponding ionic currents underpins the complex yet finely tuned process of neuronal firing. The changes in membrane potential and the activities of the ion channels are what make neural signaling possible, allowing for the transmission of information throughout the nervous system.

You may have noticed that the applied current was not turned off before the end of the depolarization phase. If the current were turned off too early, the threshold potential would not have been reached for the set current amplitude and no action potential would be triggered. However, if we increase the current amplitude, the neuron will reach the threshold potential and fire an action potential, even for current durations shorter than the absolute refractory period:

The shorter, but now increased current pulse is able to trigger the action potential at an earlier time point than in the first simulation. Also, the dynamics of the action potential phases and the gating variables are accelerated.

In the plots above, we applied an external current of 19 $\mu$A/cm$^2$ to trigger an action potential. If we introduce another short current pulse just 0.5 ms after the first, the system will not respond with a second action potential. This is because the second pulse still falls within the absolute refractory period, during which the sodium channels are inactivated and cannot be opened again (the gating variable $h$ is close to 0). The system needs to recover from this inactivation before it can fire another action potential.

Injecting the second pulse right during the relative refractory period, we can see a response in the membrane potential. However, this is not an action potential but a subthreshold response or peak. Neither do all gating variables really respond, nor is a second limit cycle established:

However, if we increase the current amplitude further, the system will respond with a second action potential, even if the second pulse is still in the relative refractory period of the first action potential. This indicates that a higher stimulus can overcome the increased threshold of excitability during this phase:

Triggering the second pulse outside the refractory period of the first action potential, the system will respond with a second action potential flawlessly:

In summary, the Hodgkin-Huxley model accurately captures the refractory periods of neurons, reflecting their physiological behavior in response to external stimuli. As stated in the previous section, the refractory periods are essential for ensuring that neurons fire in a controlled manner and do not become overexcited. It is remarkable that the model is able to reproduce these complex physiological dynamics based on the interplay of the gating variables and the ionic currents.

Neurons often fire in patterns of action potentials, known as spike trains. These patterns can encode information and are crucial for neural communication. The Hodgkin-Huxley model can be used to simulate spike trains by applying different current pulses at specific intervals. Here, we will simulate a series of action potentials triggered both by an elongated constant external current pulse and a series of short external current pulses.

Let’s begin with the constant current pulse. We let the simulation run a bit longer than before to better observe the repetitive firing behavior of our model:

In the simulation above we have applied a constant current pulse of 19 $\mu$A/cm$^2$ for 95 ms. The simulation triggers only one action potential. We already observed a similar behavior when we investigated the FitzHugh-Nagumo model. With the Hodgkin-Huxley model, we now have the opportunity to investigate the underlying mechanisms from a more physiological perspective.

The sudden increase of external current causes a rapid change in the gating variables and, thus, an opening and closing of the ion channels, which establishes dynamics for generating an action potential as described before. The membrane potential depolarizes rapidly, which is enough to surpass the threshold needed to open voltage-gated sodium channels.

However, after the first action potential, the gating variables do not return to their resting values due to the applied constant external current, which is, however, not sufficient to maintain the membrane potential above the threshold potential. The low current keeps the inactivation ($h$) of the sodium channel at a level that is neither fully inactivated nor fully recovered. This partial recovery of the inactivation variable $h$ prevents the system from firing another action potential, as the activation levels of both sodium ($m$) and potassium ($n$) channels do not reach the thresholds required for another action potential initiation. Even though $m$ may recover quickly enough to respond to continued depolarization, the sodium channels cannot fully activate because a significant portion remains inactivated. Concurrently, the potassium channels, governed by $n$, continue to facilitate the outward flow of potassium ions, which further stabilizes the membrane potential and counteracts depolarization induced by the external current.

The rapid depolarization of the membrane potential due to the abrupt increase in the external current occurs also for shorter current pulses, which was the reason why we could observe the generation of single action potentials in the previous section. We will further investigate this behavior in the subsequent section.

However, if we increase the current amplitude to, e.g. 25 $\mu$A/cm$^2$, the system responds with a series of action potentials:

The applied current is now able to maintain the membrane potential above the threshold potential, allowing the system to fire a series of action potentials. The gating variables $m$, $h$, and $n$ show the typical dynamics of an action potential, with rapid changes in response to the external current. The system is now able to recover from the refractory period and fire multiple action potentials in response to the sustained depolarization. The potassium channels open and close in response to the membrane potential, allowing the neuron to repolarize and hyperpolarize between action potentials. The sodium channels also recover from inactivation, enabling the neuron to fire again. This behavior is consistent with the physiological response of neurons to external stimuli.

Furthermore, if we increase the current amplitude, e.g., to 50 $\mu$A/cm$^2$, the system responds with a higher firing rate:

To further investigate this behavior, we can run the model for a series of external currents and plot the resulting firing rate. To do so, we simulate the model for 500 ms and for different external current amplitudes (each lasting for 490 ms) and count the number of action potentials triggered. We then fit a sigmoid function to the data:

\[f(x) = \frac{L}{1 + e^{-k(x - x_0)}}\]where $L$ is the maximum number of action potentials, $k$ is the slope of the curve, and $x_0$ is the midpoint of the curve. This will allow us to better visualize the relationship between the external current and the firing rate:

```
from scipy.signal import argrelextrema
from scipy.optimize import curve_fit

I_amp_end = 200
intervals = [[5, 495]]

# time range:
t = np.linspace(0, 500, 5000)

# simulate the model for different external currents:
I_amps = []
spike_counts = []
for I_amp in range(0, I_amp_end, 10):
    sol = solve_ivp(hodgkin_huxley, [t.min(), t.max()], y0, t_eval=t, args=(I_amp, intervals))
    idx_spikes = argrelextrema(sol.y[0], np.greater)[0]
    idx_spikes = [idx for idx in idx_spikes if sol.y[0][idx] > 0]
    I_amps.append(I_amp)
    spike_counts.append(len(idx_spikes))

# from I_amp=30 on, we fit a sigmoid function to the data:
def sigmoid(x, L, k, x0):
    return L / (1 + np.exp(-k * (x - x0)))

popt, pcov = curve_fit(sigmoid, I_amps[3:], spike_counts[3:], bounds=(0, [100., 0.1, 200]))

# generate enough x values to create a smooth plot:
x_values = np.linspace(0, 190, 400)
fitted_values = sigmoid(x_values, *popt)

# plot the frequency of the action potential as a function of the external current:
plt.figure(figsize=(5, 4))
plt.plot(I_amps, spike_counts, 'ko', lw=1.75, label='data points')
plt.plot(x_values, fitted_values, label='fitted sigmoid curve', color='red', lw=1.75)
plt.title("Firing rate vs. external current")
plt.xlabel('external current $I_{ext}$ ($\\mu A/cm^2$)')
plt.ylabel('number of spikes')
plt.grid(True)
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.legend()
plt.tight_layout()
plt.show()
```

Indeed, the firing rate increases with the external current amplitude. However, the relationship is not linear but follows a sigmoid curve. Thus, it will become increasingly difficult to trigger additional action potentials as the external current amplitude increases.

Overall, this behavior is consistent with the physiological response of neurons to external stimuli. It highlights the neuron’s capability to encode the intensity and duration of stimuli into the frequency of action potentials, a fundamental feature of neural information processing known as frequency coding.

To inject a series of current pulses, we modify the `intervals`

variable in our code in such a way, that we gain control over the timing of the current pulses:

```
t_start = 5     # ms
t_stop = 150    # ms
t_duration = 15 # ms
t_off_time = 10 # ms

intervals = []
t = t_start
while t < t_stop:
    intervals.append([t, t + t_duration])
    t += t_duration + t_off_time
```

In the code above, we define a start time `t_start`

, a stop time `t_stop`

, a duration of the current pulse `t_duration`

, and an off time `t_off_time`

. We then create a list of intervals for the current pulses, starting at `t_start`

and ending at `t_stop`

, with each pulse lasting for `t_duration`

ms and an off time of `t_off_time`

ms between each pulse.
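The construction above can also be wrapped into a small helper function (a hypothetical convenience, not part of the original code) to make it easy to generate different pulse patterns:

```python
def make_pulse_intervals(t_start, t_stop, t_duration, t_off_time):
    """Generate [on, off] intervals for a periodic pulse train (times in ms)."""
    intervals = []
    t = t_start
    while t < t_stop:
        intervals.append([t, t + t_duration])
        t += t_duration + t_off_time
    return intervals

# pulses of 15 ms with 10 ms pauses, starting at 5 ms, until 150 ms:
intervals = make_pulse_intervals(t_start=5, t_stop=150, t_duration=15, t_off_time=10)
print(intervals)
# → [[5, 20], [30, 45], [55, 70], [80, 95], [105, 120], [130, 145]]
```

This way, different pulse patterns can be generated by a single function call instead of editing the loop in place.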

A pulse period of 15 ms, with pulses lasting 4 ms and an amplitude of 50 $\mu$A/cm$^2$, will trigger a moderate series of action potentials:

Increasing the pulse frequency leads to a higher firing rate:

Further increasing the pulse frequency and applying shorter pulses of 2 ms will lead to an even higher firing rate:

Even more complex firing patterns are possible by setting the current pulse frequency appropriately and, e.g., increasing the current amplitude:

One could further modify the external current function in such a way that it follows a more complex pattern, e.g., a sinusoidal function, to investigate the response of the model to more intricate stimuli. Implementing a variable current amplitude and pulse frequency would also allow for the simulation of more realistic scenarios. We omit this here for the sake of brevity, but feel free to further experiment with the given code.
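As a sketch of such an extension, a sinusoidal drive could look like the following (the function name and parameter values are hypothetical and not part of the original code):

```python
import numpy as np

def I_ext_sinusoidal(t, I_amp=25.0, period=50.0):
    """Sinusoidal external current oscillating between 0 and I_amp
    with the given period (in ms)."""
    return I_amp * 0.5 * (1 + np.sin(2 * np.pi * t / period))

# the current peaks at a quarter period and vanishes at three quarter periods:
print(I_ext_sinusoidal(12.5))  # → 25.0
print(I_ext_sinusoidal(37.5))  # → ~0.0
```

Passing such a function into the ODE right-hand side in place of the pulse-based current would let the membrane potential follow a periodically modulated drive.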

In summary, the Hodgkin-Huxley model is able to simulate a broad range of different firing patterns, from simple to complex structured spike trains. Thus, the model can be used to investigate the response of neurons to various external stimuli and to study the underlying mechanisms of neural information processing.

In our discussion of a constant current injection, we observed that the system could fire a single action potential when a low-amplitude current was applied but failed to trigger further action potentials. This behavior, also noted in the FitzHugh-Nagumo model, results from the abrupt increase in external current which causes rapid depolarization of the membrane potential (in the Hodgkin-Huxley model) or a rapid increase of the $v$ variable (in the FitzHugh-Nagumo model).

To investigate whether the Hodgkin-Huxley model responds differently to a non-abrupt, but ramped current injection, we will simulate the model for a ramped current pulse, similar to our experiments with the FitzHugh-Nagumo model. Here, we modify the external current definition in our code to gradually increase the current, allowing us to examine the neuronal response under different stimulation dynamics:

```
def I_ext_ramped(t, I_amp=10.0, t_ramp_start=5, t_ramp_end=10, t_off=20):
    if t < t_ramp_start or t > t_off:
        return 0
    elif t_ramp_start <= t <= t_ramp_end:
        return I_amp * (t - t_ramp_start) / (t_ramp_end - t_ramp_start)
    else:
        return I_amp

# redefine the Hodgkin-Huxley model to include the ramped external current:
def hodgkin_huxley_ramped(t, y, I_amp, t_ramp_start, t_ramp_end, t_off):
    V_m, m, h, n = y
    # exchange I_ext with I_ext_ramped:
    I_ext_current = I_ext_ramped(t, I_amp, t_ramp_start, t_ramp_end, t_off)
    dVmdt = (I_ext_current - g_Na * m**3 * h * (V_m - U_Na) - g_K * n**4 * (V_m - U_K) - g_L * (V_m - U_L)) / C_m
    # continue with the rest of the model as before:
    alpha_m = 0.1 * (25 - V_m) / (np.exp((25 - V_m) / 10) - 1)
    beta_m = 4.0 * np.exp(-V_m / 18)
    alpha_h = 0.07 * np.exp(-V_m / 20)
    beta_h = 1 / (np.exp((30 - V_m) / 10) + 1)
    alpha_n = 0.01 * (10 - V_m) / (np.exp((10 - V_m) / 10) - 1)
    beta_n = 0.125 * np.exp(-V_m / 80)
    dmdt = alpha_m * (1 - m) - beta_m * m
    dhdt = alpha_h * (1 - h) - beta_h * h
    dndt = alpha_n * (1 - n) - beta_n * n
    return [dVmdt, dmdt, dhdt, dndt]
```

The modified external current function `I_ext_ramped`

smoothly increases the current from 0 to the desired amplitude `I_amp`

between `t_ramp_start`

and `t_ramp_end`

, maintains it until `t_off`

, and then drops to 0, mimicking a more physiologically realistic scenario where stimuli often increase gradually.
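As a quick sanity check of the ramp shape, we can evaluate the function at a few characteristic time points (the function is repeated here so the snippet is self-contained; it mirrors the definition above):

```python
def I_ext_ramped(t, I_amp=10.0, t_ramp_start=5, t_ramp_end=10, t_off=20):
    if t < t_ramp_start or t > t_off:
        return 0
    elif t_ramp_start <= t <= t_ramp_end:
        return I_amp * (t - t_ramp_start) / (t_ramp_end - t_ramp_start)
    else:
        return I_amp

# before the ramp, halfway up the ramp, on the plateau, after switch-off:
print([I_ext_ramped(t, I_amp=19.0) for t in [2, 7.5, 15, 25]])
# → [0, 9.5, 19.0, 0]
```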

Let’s investigate the model with the modified external current function:

Unlike the abrupt pulse scenario, the ramped current injection does not elicit an action potential, even though we apply the same current amplitude of 19 $\mu$A/cm$^2$. This result can be attributed to the slower rate of depolarization, which allows the gating variables, particularly $m$ and $h$, to adjust without rapidly exceeding the threshold potential. The gradual increase in current provides a smoother transition in membrane potential, preventing the sudden changes necessary for triggering an action potential. This reflects a critical aspect of neuronal excitability, where the rate of change in membrane potential can significantly influence the firing behavior. For instance, if we apply the same current amplitude but with a shorter ramping time, the system will respond with an action potential as if the current pulse was applied abruptly:

This experiment underscores the nuanced role of current dynamics in neuronal firing and illustrates the complex interplay between external stimuli and neuronal response mechanisms. By modulating the rate of current application, we gain insights into how neurons might differentially process slowly developing versus sudden stimuli, a key consideration in natural neural function.

We have explored the Hodgkin-Huxley model, a fundamental model of neuronal excitability that describes the dynamics of action potentials in neurons. By simulating the model, we have gained insights into the complex interplay between ion channels, gating variables, and membrane potential that underlie the generation of action potentials. We have investigated the response of the model to different external stimuli, including constant and pulsed current injections, and examined the refractory periods of neurons. We have also explored the model’s ability to simulate spike trains and firing rate modulation, highlighting its utility in studying neural information processing.

The Hodgkin-Huxley model provides a powerful framework for understanding the biophysical mechanisms of neuronal excitability and the generation of action potentials. By capturing the intricate dynamics of ion channels and gating variables, the model offers a detailed and physiologically realistic description of neuronal behavior. Through simulations and analyses, we have demonstrated how the model can be used to investigate a wide range of neuronal responses to external stimuli and to study the underlying mechanisms of neural information processing.

To gain further insights in the dynamics of the model, it is often simplified by reducing the number of variables and parameters. The discussed FitzHugh-Nagumo model is one such simplification. Other simplifications include the Morris-Lecar modelꜛ, the Izhikevich model, or the Integrate-and-Fire model. Each of these models captures different aspects of neuronal dynamics and can be used to study specific phenomena in neural systems.

The entire code used in this blog post is freely available in this Github repositoryꜛ. Feel free to experiment with the code, modify the parameters, and explore the dynamics of the Hodgkin-Huxley model further.

- Hodgkin, A. L., & Huxley, A. F., *A quantitative description of membrane current and its application to conduction and excitation in nerve*, 1952, The Journal of Physiology, 117(4), 500–544, doi: 10.1113/jphysiol.1952.sp004764ꜛ
- Hodgkin, A. L., *The local electric changes associated with repetitive action in a non-medullated axon*, 1948, The Journal of Physiology, doi: 10.1113/jphysiol.1948.sp004260ꜛ
- Gerstner, W., Kistler, W. M., Naud, R., & Paninski, L., *Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Chapter 2: Ion Channels and the Hodgkin-Huxley Model*, 2014, Cambridge University Press, ISBN: 978-1-107-06083-8
- Izhikevich, E. M., *Dynamical systems in neuroscience: The geometry of excitability and bursting* (First MIT Press paperback edition), 2010, The MIT Press, ISBN: 978-0-262-51420-0
- Purves, D. (Ed.), *Neuroscience* (Sixth edition), 2018, Oxford University Press, ISBN: 978-1-60535-380-7

The FitzHugh-Nagumo model is a two-dimensional system of ordinary differential equations (ODEs) that describes the dynamics of the action potential in neurons, i.e., their firing of voltage spikes or pulses. The model was introduced by Richard FitzHugh in 1961 and later modified by J. Nagumo, S. Arimoto, and S. Yoshizawa in 1962. The model is a simplification of the Hodgkin-Huxley model, which is a more complex system of ODEs that describes the dynamics of the action potential in neurons in greater detail.

Let’s consider the Van der Pol oscillator from our previous post:

\[\begin{align} \ddot{x} - \mu (1-x^{2})\dot{x}+x&=0 \end{align}\]Using the Liénard transformation $y=x-x^{3}/3-{\dot {x}}/\mu$, the oscillator can be transformed into a two-dimensional system of ODEs:

\[\begin{align} \dot{x} &= \mu \left(x-{\tfrac {1}{3}}x^{3}-y\right) \\ \dot{y} &= {\frac {1}{\mu }}x \end{align}\]In order to make the transition from classical mechanics to a model for the action potential in neurons, we rename the system’s variables accordingly. We rename the state variables $x$ and $y$ to $v$ and $w$, which represent the membrane potential and the recovery variable, respectively:

\[\begin{align} \dot{v} &= \mu(v-\tfrac {1}{3}v^{3}-w) \\ \dot{w} &= \frac{1}{\mu}v \end{align}\]From the phase plane analysis we know that the Van der Pol oscillator exhibits a stable limit cycle, which is a closed trajectory in the phase space, and has an unstable fixed point in the origin, where the system’s nullclines intersect. Around this fixed point, the system behaves like a spiral source, with trajectories spiraling outward and converging to the limit cycle.
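As a quick check of the Liénard transformation above: differentiating $y=x-x^{3}/3-\dot{x}/\mu$ and substituting $\ddot{x}=\mu(1-x^{2})\dot{x}-x$ from the Van der Pol equation gives

\[\begin{align} \dot{y} = \dot{x}-x^{2}\dot{x}-\frac{\ddot{x}}{\mu} = (1-x^{2})\dot{x}-\left((1-x^{2})\dot{x}-\frac{x}{\mu}\right) = \frac{x}{\mu}, \end{align}\]while solving the definition of $y$ for $\dot{x}$ directly yields $\dot{x}=\mu\left(x-\tfrac{1}{3}x^{3}-y\right)$, confirming the two-dimensional system.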

In order to gain some more control over this fixed point and to increase flexibility in the system’s dynamics, we adjust the ODE system so that the fixed point can be either stable or unstable, depending on the parameters. This involves altering the system in such a way that the $\dot{w}=0$ nullcline gets tilted. We achieve this by incorporating two additional terms into the $\dot{w}$ equation and introducing a new term to the $\dot{v}$ equation:

\[\begin{align} \dot{v} &= \mu(v-{\tfrac {1}{3}}v^{3}-w + I_\text{ext}) \label{eq:1} \\ \dot{w} &= \frac{1}{\mu}(v + a -b w) \label{eq:2} \end{align}\]These are the equations of the FitzHugh-Nagumo model. The newly introduced term $I_\text{ext}$ in the $\dot{v}$ equation represents the effect of an external current on the membrane potential $v$. In order to maintain the units of the membrane potential in volts, we should have added $I_{ext}R$ instead of just $I_{ext}$, with $R$ being the membrane resistance. However, we will keep it simple and set $R=1$ and neglect it in the following. The term $a$ in the $\dot{w}$ equation represents the effect of a constant input current on the recovery variable $w$. The term $b w$ represents the effect of the recovery variable $w$ on itself. By setting the newly added parameters to zero, we can recover the original Van der Pol oscillator, making it a special case of the FitzHugh-Nagumo model.

The FitzHugh-Nagumo model is called *excitability* model as the added external current term, $I_{\text{ext}}$, can trigger the system to generate action potentials. The model exhibits excitability as it requires the external input to generate action potentials. Thus, the FitzHugh-Nagumo model is way more biologically plausible and comparable to the natural behavior of neurons than the Van der Pol oscillator, as we can trigger system responses depending on external inputs. The Van der Pol oscillator, in contrast, is a *relaxation* oscillator and exhibits oscillations without external input.

In the literature, one can find different derivations of the FitzHugh-Nagumo model, which lead to slightly different ODE systems.

In his 1961 publicationꜛ, FitzHugh started with the same differential equation of the Van der Pol oscillator as we did above. However, he used a slightly different Liénard transformation, $y=-x+x^{3}/3+{\dot {x}}/\mu$, which alters the signs of some terms in the transformed system:

\[\begin{align} \dot{v} &= \mu(v-{\tfrac {1}{3}}v^{3}+w + I_\text{ext}) \\ \dot{w} &= - \frac{1}{\mu}(v - a +b w) \end{align}\]In his 1969 publicationꜛ, FitzHugh started with a different differential equation,

\[\begin{align} \ddot{x} - (1-x^{2})\dot{x}+\phi x&=0 \end{align}\]applied yet another variant of the Liénard transformation, $y=x-x^{3}/3-{\dot {x}}$, and obtained the following system of ODEs:

\[\begin{align} \dot{v} &= v-{\tfrac {1}{3}}v^{3}-w + I_\text{ext} \\ \dot{w} &= \phi(v + a -b w) \end{align}\]Both Scholarpediaꜛ and Wikipediaꜛ seem to follow the derivation from the 1969 publication (in the Wikipedia version, $\phi$ is substituted by $\frac{1}{\tau}$, with $\tau$ being a time constant that is not further explained).

Gerstner et al. (2014)ꜛ use yet another variant of the model (their Eq. (4.11)), which is derived from the Hodgkin-Huxley model:

\[\begin{align} F(u,w) &= u-{1\over 3}u^{3}-w \\ {G}(u,w) &=b_{0}+b_{1}\,u-w \end{align}\]Here, one could identify $F(u,w)$ as $\dot{v}$ and $G(u,w)$ as $\dot{w}$ with $u=v$ as well as $b_{0}$ as our $a$ and $b_{1}$ as our $b$. However, why we would then have $b_{1}u$ (i.e., $b v$) instead of $b w$ remains unclear.

Nagumo et al. (1962)ꜛ used yet another variant,

\[\begin{align} J &= \frac{1}{\mu} \dot{v} -w - \left(v-\frac{v^3}{3}\right) \\ \mu \dot{w} + bw &= a-v \end{align}\]which, by setting $J=0$, yields:

\[\begin{align} \dot{v} &= \mu\left(w + v -\frac{1}{3} v^3\right) \\ \dot{w} &= \frac{1}{\mu}(a - v - bw) \end{align}\]These equations share the same structure as the ones from FitzHugh (1961), but with different signs.

Overall, it seems that the FitzHugh-Nagumo model can be derived in different ways, depending on the starting equation used for the Van der Pol oscillator and the Liénard transformation applied. Since some sources do not provide the derivation, it is not always clear how the model was obtained. We will continue with the model as derived in Eq. ($\ref{eq:1}$) and ($\ref{eq:2}$).

As we have learned in the previous posts, the nullclines of a dynamical system are the curves in the phase plane where the derivatives of the state variables are zero. Nullclines provide important information about the system’s dynamics, such as the location of fixed points and the direction of the flow in the phase space. To derive the nullclines of the FitzHugh-Nagumo model, we set the derivatives of the state variables to zero and solve for the other state variable.

Let’s start with Eq. ($\ref{eq:1}$) and calculate $\dot{v}$-nullcline by setting $\dot{v}=0$:

\[\begin{align} & \dot{v} = \mu(v-{\tfrac {1}{3}}v^{3}-w + I_\text{ext}) \stackrel{!}{=} 0 \notag \\ \Leftrightarrow & \; w = v-{\tfrac {1}{3}}v^{3} + I_\text{ext} \end{align}\]We do the same for Eq. ($\ref{eq:2}$) to calculate the $\dot{w}$-nullcline:

\[\begin{align} & \; \dot{w} = \frac{1}{\mu}(v + a -b w) \stackrel{!}{=} 0 \notag \\ \Leftrightarrow & \; w = \frac{1}{b}(v + a) \end{align}\]Recall our initial motivation for introducing the additional terms in the Van der Pol equations: We wanted to make the fixed point of the system stable by tilting the $\dot{w}$-nullcline (former $\dot{v}$-nullcline). And indeed, the additional terms have the effect of tilting the $\dot{w}$-nullcline with slope $1/b$ and intercept $a/b$. The $\dot{v}$-nullcline is again a cubic curve. The intersection of the two nullclines is the fixed point of the system.
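The fixed point can be computed exactly by equating the two nullclines, $v-\tfrac{1}{3}v^{3}+I_\text{ext} = \tfrac{1}{b}(v+a)$, which reduces to the cubic $-\tfrac{b}{3}v^{3}+(b-1)v+(bI_\text{ext}-a)=0$. A minimal sketch using `np.roots`, with the illustrative parameter values $a=0.7$, $b=0.8$ (used later in this post) and $I_\text{ext}=0$:

```python
import numpy as np

a, b, I_ext = 0.7, 0.8, 0.0

# equating the nullclines w = v - v**3/3 + I_ext and w = (v + a)/b
# gives the cubic -(b/3) v^3 + (b - 1) v + (b*I_ext - a) = 0:
roots = np.roots([-b / 3, 0.0, b - 1, b * I_ext - a])

# pick the (numerically) real root; the fixed point lies on the w-nullcline:
v_fix = roots[np.argmin(np.abs(roots.imag))].real
w_fix = (v_fix + a) / b
print(v_fix, w_fix)  # ≈ -1.1994, -0.6243
```

For these parameter values, the cubic has a single real root, so the nullclines intersect in exactly one fixed point.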

To study the dynamics of the FitzHugh-Nagumo model, we can reutilize the Python code from the previous post and modify it accordingly:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
plt.rcParams.update({'font.size': 14})

import os
if not os.path.exists('figures'):
    os.makedirs('figures')

# define the FitzHugh-Nagumo model:
def fitzhugh_nagumo(t, z, mu, a, b, I_ext):
    v, w = z
    dvdt = mu * (v - (v**3) / 3 - w + I_ext)
    dwdt = (1 / mu) * (v + a - b * w)
    return [dvdt, dwdt]

# define the nullclines:
def v_nullcline(v, I_ext):
    return v - (v**3) / 3 + I_ext

def w_nullcline(v, a, b):
    return (1 / b) * (v + a)

# set time span:
eval_time = 100
t_iteration = 1000
t_span = [0, eval_time]
t_eval = np.linspace(*t_span, t_iteration)

# set initial conditions and parameters:
z0 = [-1, -0.8]
mu = 2.0
a = 0.7
b = 0.8
I_ext = 0.25

# calculate the vector field:
mgrid_size = 3
x, y = np.meshgrid(np.linspace(-mgrid_size, mgrid_size, 15),
                   np.linspace(-mgrid_size, mgrid_size, 15))
u = mu * (x - (x**3)/3 - y + I_ext)
v = (1/mu) * (x + a - b * y)

# calculate the trajectory of the FitzHugh-Nagumo model:
sol = solve_ivp(fitzhugh_nagumo, t_span, z0, args=(mu, a, b, I_ext), t_eval=t_eval)

# define the x-array for the nullclines:
x_null = np.arange(-mgrid_size, mgrid_size, 0.001)

# plot vector field and trajectory:
plt.figure(figsize=(6, 6))
plt.clf()
# plot the streamline plot colored by the speed of the flow:
speed = np.sqrt(u**2 + v**2)
plt.streamplot(x, y, u, v, color=speed, cmap='cool', density=2.0)
plt.plot(x_null, v_nullcline(x_null, I_ext), '.', c="darkturquoise", markersize=2)
plt.plot(x_null, w_nullcline(x_null, a, b), '.', c="darkturquoise", markersize=2)
plt.plot(sol.y[0], sol.y[1], 'r-', lw=3, label=f'Trajectory, $z_0$={z0}')
# indicate start and end point:
plt.plot(sol.y[0][0], sol.y[1][0], 'bo', label='start point', alpha=0.75, markersize=7)
plt.plot(sol.y[0][-1], sol.y[1][-1], 'o', c="yellow", label='end point', alpha=0.75, markersize=7)
plt.title(f'phase plane plot: FitzHugh-Nagumo model\na: {a}, b: {b}, $\\mu$: {mu}, $I_{{ext}}$: {I_ext}')
plt.xlabel('v')
plt.ylabel('w')
plt.legend(loc='lower right', fontsize=12.5)
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
#plt.xlim(-mgrid_size, mgrid_size)
plt.ylim(-mgrid_size, mgrid_size)
plt.tight_layout()
plt.show()

# plot v over time to visualize the voltage curve:
plt.figure(figsize=(8, 5))
plt.plot(sol.t, sol.y[0], 'b-', lw=2, label='Voltage $v(t)$')
plt.title(f'voltage curve: FitzHugh-Nagumo model\na: {a}, b: {b}, $\\mu$: {mu}, $I_{{ext}}$: {I_ext}')
plt.xlabel('Time')
plt.ylabel('Voltage $v$')
plt.legend(loc='best', fontsize=12.5)
plt.grid(True)
ax = plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
plt.tight_layout()
plt.show()
```

Note that we have added an additional plot of the voltage curve $v(t)$ to visualize the spiking behavior of the simulated neuron. The entire code can be found in this Github repositoryꜛ.

Before we start, let’s briefly recap the phases of an action potential that we have already discussed in the post on the Integrate and Fire model:

An action potential is a rapid change in the membrane potential of a neuron that allows it to communicate with other neurons. It is evoked when an external stimulus reaches a certain voltage threshold, and consists of three main phases: **depolarization**, **repolarization**, and **hyperpolarization**. The depolarization phase is characterized by a rapid increase in the membrane potential due to the opening of voltage-gated sodium channels and a subsequent influx of sodium ions. This phase is followed by repolarization, where sodium channels start to close and voltage-gated potassium channels open, allowing potassium to exit the neuron and bring the membrane potential back toward the resting level. Hyperpolarization occurs as potassium channels close slowly, causing the membrane potential to become temporarily more negative than the resting potential. This phase is associated with the **relative refractory period**, during which the neuron requires a stronger stimulus to fire another action potential. The **absolute refractory period** spans from the onset of depolarization to the end of repolarization, during which the neuron cannot initiate another action potential regardless of stimulus strength. After the hyperpolarization phase, the neuron returns to its **resting potential**, and the cycle can begin again.

Let’s check whether the FitzHugh-Nagumo model can reproduce these phases. We will simulate the model for different initial conditions and external currents and observe the resulting trajectories in the phase plane. We will also plot the voltage curve $v(t)$ to visualize the action potentials generated by the model.

Setting the initial condition $(v_0, w_0)$ to $(-1.0, -0.8)$ (which corresponds to the `z0`

variable in the Python code), $a$ to $0.7$, $b$ to $0.8$, and $\mu=2$, and applying an external current of $I_{\text{ext}}=0.25$, we obtain the following phase plane plot and voltage curve:

With the chosen set of parameters, the model is indeed able to simulate a single action potential and its different phases. To identify the phases, parts of the trajectory and voltage curve are color-coded according to the different phases of the action potential. The voltage curve starts in the depolarization phase (green), where the membrane potential increases rapidly. In the phase plane plot, this phase corresponds to the part of the trajectory that starts at the initial condition (blue dot) and follows the streamlines towards the $\dot{v}$-nullcline. Once the nullcline is reached, the trajectory briefly has an almost purely vertical $w$-component and then follows the streamlines in the negative $v$-direction with a relatively low $w$-component. This part of the trajectory can be identified as the repolarization phase (yellow) in the voltage curve. However, when the voltage curve passes $v \approx -1$ (considered the level of the resting potential, red), the actual hyperpolarization phase (purple) starts. This phase can be further subdivided into a fast (dashed line) and a slow part (solid line). During the fast part, the trajectory in phase space is still on the fast track in the negative $v$-direction. During the slow part, the trajectory reaches the left branch of the $\dot{v}$-nullcline and drops down to the fixed point (the intersection of the nullclines), again with an almost purely vertical $w$-component and only slow changes in the $v$-direction, roughly following the left branch of the $\dot{v}$-nullcline. The turning point from fast to slow hyperpolarization coincides with the global minimum of the voltage curve at around $v=-2$. After the turning point, the trajectory’s $v$-component turns from negative to positive values, indicating the (slow) return to the resting potential.

Both the depolarization phase and the repolarization phase take roughly 2.5 time units each. The relative refractory period, in contrast, takes roughly 5 time units, as long as the depolarization and repolarization phases combined, making it the slowest phase of the action potential. This is consistent with what is observed in real neurons.

The reason for the observed speed differences in the different phases of the action potential can be explained by the structure of the FitzHugh-Nagumo model. The nonlinear term $-\frac{1}{3}v^3$ in the $\dot{v}$ equation introduces a strong nonlinearity to the $v$-dynamics, allowing for rapid changes in $v$ when $v$ is far from equilibrium. This nonlinearity is absent from the $\dot{w}$ equation, making $w$ dynamics more linear and generally slower. Furthermore, time scale separation is explicitly introduced by $\mu$ and its reciprocal in the equations. Typically, in the FitzHugh-Nagumo model, $\mu$ is chosen such that $v$ evolves on a faster time scale than $w$, representing the fast response of the membrane potential compared to the slower recovery processes modeled by $w$. We have chosen $\mu=2$ in our simulation, which ensures that the $v$-dynamics are faster than the $w$-dynamics.

After the hyperpolarization phase, the trajectory does not completely return to the rest state, but starts a new cycle in our case. This cycle, however, is less pronounced than the first one, reaching lower amplitudes and time intervals, indicating a dampening of the action potential’s intensity over time. This behavior is reflected in the phase plane plot, where the trajectory does indeed not fully return to the equilibrium point, but starts a new, much shorter cycle with reduced dynamical expression compared to the initial one. We will discuss this behavior in more detail in the next section.

Overall, the FitzHugh-Nagumo model is able to simulate the different phases of an action potential and the subsequent refractory periods. The model captures the essential dynamics of the action potential and provides a simple yet effective way to study the behavior of neurons and their firing mechanisms.

Note that the initial condition I have chosen seems quite arbitrary. I tried to place it at the equilibrium point of the system in phase space; however, I missed the exact point, so the system already started with some amount of voltage amplitude, on a dynamic trajectory in the phase plane (you can verify this by setting $I_{ext}=0$ in the previous simulation). The equilibrium point can actually be calculated exactly by finding the intersection of the nullclines. In the Python code, this can be done by adding the following lines:

```
v_range = np.linspace(-3.5, 3.5, 400)
v_nullcline_w = v_nullcline(v_range, I_ext)
w_nullcline_v = w_nullcline(v_range, a, b)
intersection = np.argmin(np.abs(v_nullcline_w - w_nullcline_v))
# reset z0 to the intersection point:
z0 = [v_range[intersection], w_nullcline_v[intersection]]
print(z0)
```

```
[-1.2017543859649122, -0.6271929824561404]
```
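The grid search above only returns the nearest sample on `v_range`; the intersection can be refined to machine precision with a root finder. A minimal sketch, assuming the standard parameter values $a=0.7$, $b=0.8$ and the nullclines $w = v - v^3/3 + I_{\text{ext}}$ and $w = (v+a)/b$ used in this post (the bracket $[-2, 0]$ is read off the phase plane plot):

```python
from scipy.optimize import brentq

a, b, I_ext = 0.7, 0.8, 0.0  # assumed parameter values

def nullcline_difference(v):
    # v-nullcline minus w-nullcline; a root marks a fixed point of the system:
    return (v - v**3 / 3 + I_ext) - (v + a) / b

v_eq = brentq(nullcline_difference, -2.0, 0.0)  # bracket chosen from the plot
w_eq = (v_eq + a) / b
```

With these parameter values, `brentq` yields $v_{eq}\approx-1.199$, $w_{eq}\approx-0.624$, close to the grid estimate above but independent of the grid resolution.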

Updating the initial condition to the intersection point, we obtain the following phase plane plot and voltage curve:

As we see, no action potential is generated and the system converges to the resting potential. However, the system is not fully at rest but exhibits small oscillations around the resting potential. In the phase plane plot, the trajectory starts small new cycles each time before reaching the equilibrium point:

This behavior is consistent with what one would expect: for the chosen $I_{\text{ext}}$, the intersection of the nullclines is a stable fixed point of the system, and the trajectory converges to a small stable limit cycle around it. The system does not generate an action potential, as the trajectory never develops the large excursion necessary for an action potential to evolve. Instead, the system exhibits low-amplitude oscillations around the resting potential. This can be considered biophysically plausible, as neurons can exhibit **sub-threshold oscillations** in the absence of external stimuli.

However, as soon as we add some amount of external current, the equilibrium point will start to move away from its former position and the dynamics of the trajectory will change. For example, setting $I_{\text{ext}}=0.03$, the system becomes able to generate an action potential:

The increase of the external current slightly shifts the $\dot{v}$-nullcline to the right, relocating the fixed point, which is now located at

```
[-1.1842105263157894, -0.6052631578947367]
```

The previously chosen initial condition is now clearly located away from the new equilibrium point and a limit cycle with more pronounced dynamics can evolve. The higher the external current, the higher the amplitude of the action potential, as the trajectory becomes more pronounced in the phase plane:

However, an action potential is not necessarily generated for every increase of the external current. The increase must be high enough to shift the equilibrium point sufficiently away from the initial condition. An external current of, e.g., $I_{\text{ext}}=0.005$ is not sufficient to significantly shift the equilibrium point and to trigger an action potential:

The corresponding equilibrium point is still located at

```
[-1.2017543859649122, -0.6271929824561404]
```

Thus, a certain amount of external current is required to trigger the action potential, which is consistent with the behavior of real neurons, where a certain **threshold potential** must be reached to generate the action potential.

Note that while we increased the external current, the resting potential of the system also increased. The more we increase the current, the further the $\dot{v}$-nullcline shifts to the right and the higher the voltage level at which the fixed point (equilibrium point) is located. The system maintains the new resting potential since the external current is applied constantly for the entire simulation time. As soon as the current is removed, the system returns to its initial resting potential.

So far, we were able to simulate a single action potential. However, neurons can generate multiple action potentials in a short period of time, forming so-called **spike trains**. Spike trains are the fundamental way in which neurons communicate with each other and encode information. They can be regular or irregular, and their timing and frequency can carry important information about the stimulus the neuron is responding to.

In the previous two sections, we have chosen $I_{\text{ext}}$ in such a way that the intersection of the nullclines became a stable fixed point and the system converged to a stable limit cycle, resulting in the observed single action potential and sub-threshold oscillations of the membrane potential. To simulate a spike train, we need to modify the external current $I_{\text{ext}}$ in such a way that the fixed point becomes unstable. Let’s therefore apply different external currents to the system, keep all other parameters constant, and observe the resulting trajectories in the phase plane:

For $I_{\text{ext}}=0.3$, the system generates a pronounced action potential, as described before. After the first pronounced cycle in phase space (which generates the action potential), the system converges to the low-amplitude limit-cycle around the stable fixed point, generating sub-threshold oscillations.

For $I_{\text{ext}}=0.4$, the system generates a series of action potentials, forming a spike train. The fixed point has now become unstable and the system converges to a pronounced limit cycle around this fixed point. The first part of the trajectory is a bit elongated due to the large distance between the initial condition and the new fixed point. This results in the first action potential having a higher amplitude than the following ones.

The frequency of the action potentials depends on the external current and the resulting dynamics of the system. For $I_{\text{ext}}=0.9$, the system generates a higher frequency of action potentials and thus simulates a neuron that reacts more strongly to the external input and is therefore more excitable.

For $I_{\text{ext}}=1.5$, no spike train is generated and we again get a single action potential followed by sub-threshold oscillations. This time, the increase of the external current was so high that the intersection of the nullclines became a stable fixed point again and the system converges to a stable limit cycle around this fixed point. Before reaching this limit cycle, the trajectory has an elongated part due to the large distance between the initial condition and the new fixed point, which results in the single action potential or voltage peak. This holds for any further increase of the external current. One could interpret this behavior as follows: the system is so strongly excited by the external current that the fixed point is shifted too far away from the initial condition. As a result, the oscillations of the system (the spike trains) are blocked by the excessive excitation. In biological terms, this phenomenon is known as the **excitation block**, where repetitive spiking is blocked as the amplitude of the stimulus current increases. The neuron is overstimulated and cannot generate spike trains.

In summary, it seems that there is a certain range of external currents that can trigger the generation of spike trains. Outside this range, the system converges to a stable limit cycle around a fixed point and generates either a single action potential or just a low- or high-amplitude voltage peak. In order to further determine this range, let’s have a detailed look at the nullclines and the fixed points for distinct external currents:

In the plot above, nullclines for distinct external currents are shown. The fixed points at the intersections of the nullclines are indicated by red dots. For $I_{\text{ext}}=0.0$, the fixed point is stable and the system converges to a stable limit cycle around this fixed point. From the previous sections we know that no action potential is generated in this case. Taking a closer look at the plot, we can see that the fixed point is located to the left of the trough of the $\dot{v}$-nullcline (which is why the system converges to the stable limit cycle).

For $I_{\text{ext}}=0.03$, the fixed point is located almost at the trough of the $\dot{v}$-nullcline, at the verge of becoming unstable. The system generates a single action potential followed by sub-threshold oscillations, as described before. For $I_{\text{ext}}=0.05$, the fixed point is located to the right of the trough and is unstable. The system converges to a stable limit cycle around this fixed point and generates spike trains. For all other external currents up to and including $I_{\text{ext}}=1.4$, the system generates spike trains (see plot below). All these fixed points are located to the right of the trough and to the left of the peak of the $\dot{v}$-nullcline, making them unstable. However, as soon as the fixed point passes the peak to the right, it becomes stable again and the system converges to a stable limit cycle around it. This is the case for $I_{\text{ext}}=1.5$ and $I_{\text{ext}}=1.9$.

From these observations, we identify the left trough and the right peak of the $\dot{v}$-nullcline as critical points for understanding the dynamics of the system. The left and right parts of the N-shaped $\dot{v}$-nullcline are “stable” branches, where the fixed point is stable and the system converges to a stable limit cycle around this fixed point. The middle part is the “unstable” branch, where the fixed point is unstable and the system generates spike trains. This divides the phase plane into two distinct regions:

- **outside the trough-peak range**: If the fixed point is on the far left or far right stable branches of the $\dot{v}$-nullcline, outside the trough-peak range, it is stable and small perturbations will return to the fixed point without initiating spike trains or a single action potential (or just a low- or high-amplitude voltage peak). This is because the system is pushed back towards the stable branches of the $\dot{v}$-nullcline, preventing the generation of sustained oscillations.
- **within the trough-peak range**: If the fixed point lies within the trough-peak range, it becomes unstable and perturbations can lead the system away from the fixed point, across the unstable middle branch, and into spiking behavior. This is because, within this range, the nonlinear dynamics facilitate repetitive excursions around the phase plane that correspond to limit cycles.
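This geometric trough-peak criterion can be cross-checked with a linear stability analysis: the fixed point is unstable exactly when the Jacobian of the system, evaluated at the fixed point, has an eigenvalue with positive real part. A sketch, assuming the parameter values $\mu=2$, $a=0.7$, $b=0.8$ used throughout this post (the exact boundary currents depend on these values):

```python
import numpy as np

mu, a, b = 2.0, 0.7, 0.8  # assumed parameter values

def fixed_point(I_ext):
    # fixed point: v - v**3/3 + I_ext = (v + a)/b, i.e.
    # -(b/3)*v**3 + (b - 1)*v + b*I_ext - a = 0
    roots = np.roots([-b / 3, 0.0, b - 1.0, b * I_ext - a])
    real_roots = roots[np.abs(roots.imag) < 1e-9].real
    return real_roots[0]  # this cubic is monotonic, so there is one real root

def is_unstable(I_ext):
    v = fixed_point(I_ext)
    # Jacobian of the FitzHugh-Nagumo system at the fixed point:
    J = np.array([[mu * (1 - v**2), -mu],
                  [1 / mu, -b / mu]])
    return np.max(np.linalg.eigvals(J).real) > 0
```

For instance, `is_unstable(0.0)` returns `False` while `is_unstable(0.9)` returns `True`, consistent with the resting and spiking regimes described above.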

Constant oscillations (spike trains) occur when the system’s dynamics force it to repeatedly traverse a path in the phase plane that loops around, typically encompassing parts of both stable branches and the unstable middle branch of the $v$-nullcline. This looping behavior represents the system repeatedly being pushed away from one stable branch, moving through the unstable region, approaching the other stable branch, and then being pulled back, creating a continuous cycle.

To summarize our observations and verify our findings, we apply a ramping external current to the system, starting from $I_{\text{ext}}=0.0$ and increasing it continuously until $I_{\text{ext}}=1.9$:

```
def fitzhugh_nagumo_time_dependent_ramping(t, z, mu, a, b,
                                           I_ext_start, I_ext_end, eval_time):
    v, w = z
    # linearly ramp I_ext from I_ext_start to I_ext_end over the time span:
    I_ext = I_ext_start + (I_ext_end - I_ext_start) * (t / eval_time)
    dvdt = mu * (v - (v**3) / 3 - w + I_ext)
    dwdt = (1 / mu) * (v + a - b * w)
    return [dvdt, dwdt]

# set parameters for I_ext ramping:
I_ext_start = 0.0  # starting value of I_ext
I_ext_end = 1.9    # ending value of I_ext
```
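The snippet above only defines the ramped right-hand side and the sweep limits; the integration call itself is omitted. A hedged sketch of the driving code, assuming the same `solve_ivp` setup and parameter values ($\mu=2$, $a=0.7$, $b=0.8$) as in the earlier simulations (the evaluation time and initial condition here are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, a, b = 2.0, 0.7, 0.8  # assumed parameter values

def fitzhugh_nagumo_time_dependent_ramping(t, z, mu, a, b,
                                           I_ext_start, I_ext_end, eval_time):
    v, w = z
    # linearly ramp I_ext from I_ext_start to I_ext_end over the time span:
    I_ext = I_ext_start + (I_ext_end - I_ext_start) * (t / eval_time)
    dvdt = mu * (v - v**3 / 3 - w + I_ext)
    dwdt = (1 / mu) * (v + a - b * w)
    return [dvdt, dwdt]

eval_time = 400  # illustrative: slow enough for a quasi-static sweep
t_eval = np.linspace(0, eval_time, 4000)
sol = solve_ivp(fitzhugh_nagumo_time_dependent_ramping, [0, eval_time],
                [-1.2, -0.625],  # start near the I_ext = 0 equilibrium
                args=(mu, a, b, 0.0, 1.9, eval_time), t_eval=t_eval)
# sol.y[0] is v(t); spikes appear only in the mid-range of the ramp
```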

The plot above clearly shows that the system generates spike trains for $I_{\text{ext}}\approx0.05$ to $I_{\text{ext}}\approx1.4$, which corresponds to the range where the fixed point is unstable and located within the trough-peak range of the $\dot{v}$-nullcline. Outside this range, the system either converges to a stable limit cycle around a fixed point or does not generate any action potentials at all.

Simulating a spike train with a continuous external current is straightforward. However, in reality, neurons receive a variety of inputs from other neurons, which are often not continuous but rather occur in the form of **synaptic inputs**. Synaptic inputs are brief, transient changes in the membrane potential that are caused by the release of neurotransmitters from other neurons. These inputs can be excitatory or inhibitory, and their timing and frequency can have a significant impact on the firing behavior of the neuron. To simulate the effect of synaptic inputs on the FitzHugh-Nagumo model, we can apply a time-dependent external current to the system. This external current can mimic the effect of excitatory or inhibitory synaptic inputs on the neuron and allow us to study how the neuron responds to different input patterns.

To do so, we redefine the FitzHugh-Nagumo model with a time-dependent external current in such a way that we can define different time ranges during which the external current is applied:

```
# define the FitzHugh-Nagumo model with time-dependent (pulsed) I_ext:
def fitzhugh_nagumo_time_dependent_pulse(t, z, mu, a, b, I_ext, pulse_time_ranges):
    v, w = z
    # set I_ext to zero except during the pulse time ranges:
    I_ext_use = 0.0
    # add a pulse during each pulse_time_range:
    for pulse_time_range in pulse_time_ranges:
        if (t >= pulse_time_range[0]) and (t <= pulse_time_range[1]):
            I_ext_use = I_ext
    dvdt = mu * (v - (v**3) / 3 - w + I_ext_use)
    dwdt = (1 / mu) * (v + a - b * w)
    return [dvdt, dwdt]

# set time span:
eval_time = 50
t_iteration = 400

# set parameters for pulsed I_ext:
I_ext = 1.0  # external current during pulse
pulse_time_ranges = [[10, 11]]
```

The remaining parameters are the same as before except for the evaluation time, which is now set to 50 time units. We also reduced the time resolution to 400 iterations to adapt it to the new evaluation time. This means that, supposing one time unit corresponds to one second, one second is resolved with eight time steps. When we apply a current pulse of 1 second by setting `pulse_time_ranges`, e.g., to `[[10, 11]]`, we actually apply a pulse lasting eight time steps. By definition, this is not a delta pulse. Therefore, bear in mind that a pulse of 1 second in the context of our simulation is not a delta pulse; we can consider it a short pulse instead.
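A related practical caveat: an adaptive solver such as `solve_ivp` may take internal steps larger than the pulse and step over it entirely, since `t_eval` only controls the output sampling, not the integration step. A minimal sketch, assuming the parameter values used throughout this post, that caps the step size so the pulse is always resolved:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu, a, b = 2.0, 0.7, 0.8       # assumed parameter values
I_ext = 1.0                    # external current during the pulse
pulse_time_ranges = [[10, 11]]

def fhn_pulsed(t, z):
    v, w = z
    # apply I_ext only inside the pulse windows, zero otherwise:
    I = I_ext if any(t0 <= t <= t1 for t0, t1 in pulse_time_ranges) else 0.0
    dvdt = mu * (v - v**3 / 3 - w + I)
    dwdt = (1 / mu) * (v + a - b * w)
    return [dvdt, dwdt]

sol = solve_ivp(fhn_pulsed, [0, 50], [-1.2, -0.625],
                t_eval=np.linspace(0, 50, 400),
                max_step=0.1)  # keep internal steps shorter than the pulse width
```

Without `max_step`, the solver can integrate straight past a short pulse window and the pulse would have no effect on the trace.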

Let’s evaluate the system with a short pulse external current released at two different time points, $t=9$ and $t=10$:

For the pulse released at $t=9$, the system generates a single action potential. Even though we applied an external current that is high enough to trigger a spike train when applied continuously (see previous section), the short pulse was not sufficient to generate multiple action potentials. This is no surprise, as the system’s dynamics are altered only for a short period of time and return to the sub-threshold oscillations after the pulse is over.

When applying the same pulse released at $t=10$, however, no action potential is triggered. Thus, depending on the current location of the trajectory in phase space at the onset of the pulse, the system can either spike or not spike in response to a short pulse. Remember that before the pulse is applied, the external current is zero and the system exhibits the sub-threshold oscillations observed earlier. This variability in response actually highlights another key advantage of the model. Specifically, the model’s ability to simulate sub-threshold oscillations due to the diminished limit cycle behavior around the stable fixed point introduces an element of apparent randomness. This mirrors the stochastic nature observed in real neurons, where synaptic input timing and frequency crucially influence neuronal firing behavior. Consequently, the system’s differential response to identical pulses, contingent upon input timing, underscores the critical role of synaptic input patterns in determining neuronal activity.

Furthermore, this variability in the system also affects the absolute amplitude of the action potential. The same short pulse released at different times, e.g., at $t=13$ and $t=14$, leads to different action potential amplitudes, introducing another element of variability into the simulation and, thus, a more realistic behavior of the model:

By applying a time-dependent external current, we can also demonstrate that the system cannot spike during the absolute refractory period. For example, setting the initial pulse at $t=30$ triggers an action potential. Another pulse anywhere between $t=31$ and $t=34$ does not trigger a new action potential; only from $t=35$ on is another action potential triggered. This is an important aspect of the model, as it mimics the absolute refractory period of real neurons, during which a neuron cannot spike again. The neuron’s response to external stimuli is thus highly dependent on the timing of the stimuli, which is a crucial aspect of neuronal excitability and spike-generating mechanisms.

From the phase plane plots above, we can extract further details of the dynamics during pulses released during and after the absolute refractory period. For instance, the double peak that we observe in the voltage curve for a second pulse released at $t=32$ (upper panel) is not due to a potentially triggered second action potential overlapping the first one. Instead, it is caused by a short-lived distortion on the depolarizing branch of the trajectory in phase space that does not result in the beginning of another limit cycle. The same is true for the second pulse released at $t=34$ (upper middle panel). Here, the short-lived distortion lies on the hyperpolarizing branch of the trajectory. The distortion is again insufficient to trigger a new limit cycle and, thus, another action potential. This changes when the second pulse is released at $t=35$ (lower middle panel). Here, the system generates a second limit cycle that is less pronounced than the first one, resulting in a second action potential of lower amplitude. And finally, when the second pulse is released at $t=36$ (lower panel), the system generates a fully established second limit cycle, resulting in a second action potential. Its amplitude is a bit lower than the first one, as the second limit cycle is a bit less pronounced than the first. Note that each pulse activation shifts the $\dot{v}$-nullcline a bit further to the right, so the fixed point of the system is also relocated, resulting in the observed distortions of the trajectories. Also note that the trajectories in each simulated scenario start and end at the same location (the blue and yellow dots overlap each time), as the system starts and ends at $I_{\text{ext}}=0.0$ due to the chosen pulse intervals.

If we apply a longer pulse, the system will spike again continuously. As soon as the external current is removed, the system returns to the resting state as previously assumed:

Another important aspect of the FitzHugh-Nagumo model is its different response to slowly ramped external currents compared to abruptly increased ones. During a gradual increase of the external current (upper panel in the plot below), the resting equilibrium point of the model moves slowly to the right, with the system’s state transitioning smoothly alongside it, without generating any spikes. In contrast, if the stimulation is increased abruptly (lower panel), even by a smaller magnitude, the system’s trajectory does not move straight to the new equilibrium. Instead, it produces a transient spike. This is why we observed a single action potential for, e.g., $I_{\text{ext}}=0.03$ in the previous sections, where the applied external current was always increased abruptly.

The same is true for external currents that shift the fixed point into the trough-peak range of the $\dot{v}$-nullcline, i.e., that cause the system to generate spike trains. If the external current is increased abruptly, the first action potential of the spike train is a bit higher than the following ones, while in the case of a gradually increased external current, all action potentials have the same amplitude (except for the spike(s) that may occur during the ramping period; compare the plots below). This is because the system has more time to adjust to the new equilibrium point when the external current is ramped slowly.

The FitzHugh-Nagumo model is a simple two-dimensional system that captures the essential dynamics of neuronal excitability and spike-generating mechanisms. By analyzing the phase plane of the system, we can gain insights into the system’s behavior and understand how different external currents affect the system’s response. The system’s dynamics are highly sensitive to the external input, and small changes in the external current can have a significant impact on the firing behavior of the neuron. The system can exhibit a variety of behaviors, including sub-threshold oscillations, single action potentials, and spike trains, depending on the external current and the system’s initial conditions. The model’s ability to simulate sub-threshold oscillations and stochastic behavior makes it a valuable tool for studying neuronal excitability and spike-generating mechanisms. By applying time-dependent external currents to the system, we can mimic the effect of synaptic inputs on the neuron and study how the neuron responds to different input patterns. The model’s response to different input patterns highlights the critical role of synaptic input timing and frequency in determining neuronal firing behavior.

Overall, the FitzHugh-Nagumo model provides a simple yet powerful framework for studying the dynamics of neuronal excitability and spike generation, and its insights can help us better understand the complex behavior of real neurons. However, we should also bear in mind, that the model is a simplification of the real neuronal dynamics and does not capture all the intricacies of real neurons. For a more detailed and biophysically accurate model, the Hodgkin-Huxley model is often used, which includes additional ion channels and more complex dynamics. Nevertheless, the simplicity of the FitzHugh-Nagumo model makes it a valuable tool for studying the basic principles of neuronal excitability and spike generation and provides a solid foundation for further exploration of more complex models.

You can find the Python code used to generate the plots in this post in this GitHub repositoryꜛ. Feel free to experiment with the code and explore the dynamics of the FitzHugh-Nagumo model further.

- FitzHugh R., *Impulses and Physiological States in Theoretical Models of Nerve Membrane*, 1961, Biophysical Journal, Vol. 1, Issue 6, pages 445-466, doi: 10.1016/s0006-3495(61)86902-6
- FitzHugh R. (1969), *Mathematical models of excitation and propagation in nerve. Chapter 1*, pp. 1-85 in H.P. Schwan, ed., Biological Engineering, McGraw-Hill Book Co., N.Y., PDF available on ResearchGateꜛ
- FitzHugh R., *Mathematical models of threshold phenomena in the nerve membrane*, 1955, The Bulletin of Mathematical Biophysics, Vol. 17, Issue 4, pages 257-278, doi: 10.1007/BF02477753
- Nagumo J., Arimoto S., Yoshizawa S., *An Active Pulse Transmission Line Simulating Nerve Axon*, 1962, Proceedings of the IRE, Vol. 50, Issue 10, pages 2061-2070, doi: 10.1109/JRPROC.1962.288235
- Wulfram Gerstner, Werner M. Kistler, Richard Naud, and Liam Paninski, *Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Chapter 4: Dimensionality Reduction and Phase Plane Analysis*, 2014, Cambridge University Press, ISBN: 978-1-107-06083-8
- D. H. Rothman, *Lecture notesꜛ for 12.006J/18.353J/2.050J, Nonlinear Dynamics: Bifurcations in two dimensionsꜛ*, MIT, September 27, 2022
- Ian Cooper, *Doing physics with Python/Matlab: FitzHugh-Nagumo model for spiking neurons*ꜛ
- Scholarpedia article on the FitzHugh-Nagumo modelꜛ

The system is described by a second-order ODE,

\[\begin{align} \frac{d^{2}x}{dt^{2}} -\mu (1-x^{2}){dx \over dt}+x=0, \label{eq:1} \end{align}\]where $x$ is the position of the oscillator and $\mu$ is a parameter that determines the nonlinearity and the damping of the system. The MIT lecture notes by Rothman (2022)ꜛ provide a detailed derivation of the Van der Pol equation and I recommend reading it to get a deeper understanding of the origin of the system parameters.

The ODE ($\ref{eq:1}$) can be converted into a system of first-order ODEs by introducing a new variable $y = \frac{dx}{dt}$:

\[\begin{align*} \dot{x} &= y, \\ \dot{y} &= \mu(1 - x^2)y - x, \end{align*}\]The nullclines of the Van der Pol oscillator are given by the conditions under which the derivatives $\frac{dx}{dt}$ and $\frac{dy}{dt}$ are zero. The $x$-nullcline is determined by setting $\frac{dx}{dt} = 0$, which occurs when $y = 0$, i.e., this nullcline is represented by a horizontal line at $y = 0$. The $y$-nullcline is found by setting $\frac{dy}{dt} = 0$, which occurs when $\mu(1 - x^2)y - x = 0$ $\Leftrightarrow$ $y = \frac{x}{\mu(1-x^2)}$.

For certain values of $\mu$, the Van der Pol oscillator exhibits limit cycle behavior. Let’s explore this behavior by visualizing the phase plane of the Van der Pol oscillator and identifying its limit cycle:

```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# define the Van der Pol oscillator model:
def van_der_pol(t, z, mu):
    x, y = z
    dxdt = y
    dydt = mu * (1 - x**2) * y - x
    return [dxdt, dydt]

# define the nullclines:
def y_nullcline(x, mu):
    return x / (mu * (1 - x**2))

def x_nullcline(y, mu):
    return 0 * y

# set time span:
eval_time = 100
t_iteration = 1000
t_span = [0, eval_time]
t_eval = np.linspace(*t_span, t_iteration)

# set initial conditions:
z0 = [2, 0]

# set Van der Pol oscillator parameter:
mu = 1  # stable: 1

# calculate the vector field:
mgrid_size = 8
x, y = np.meshgrid(np.linspace(-mgrid_size, mgrid_size, 15),
                   np.linspace(-mgrid_size, mgrid_size, 15))
u = y
v = mu * (1 - x**2) * y - x

# calculate the trajectory of the Van der Pol oscillator:
sol_stable = solve_ivp(van_der_pol, t_span, z0, args=(mu,), t_eval=t_eval)

# define the x-array for the nullclines:
x_null = np.arange(-mgrid_size, mgrid_size, 0.001)

# plot vector field and trajectory:
plt.figure(figsize=(6, 6))
plt.clf()
speed = np.sqrt(u**2 + v**2)
plt.streamplot(x, y, u, v, color=speed, cmap='cool', density=2.0)
plt.plot(x_null, y_nullcline(x_null, mu), '.', c="darkturquoise", markersize=2)
plt.plot(x_null, x_nullcline(x_null, mu), '.', c="darkturquoise", markersize=2)
plt.plot(sol_stable.y[0], sol_stable.y[1], 'r-', lw=3,
         label=f'Trajectory for $\\mu$={mu}\nand $z_0$={z0}')  # trajectory
# indicate start and end points:
plt.plot(sol_stable.y[0][0], sol_stable.y[1][0], 'bo', label='start point',
         alpha=0.75, markersize=7)
plt.plot(sol_stable.y[0][-1], sol_stable.y[1][-1], 'o', c="yellow",
         label='end point', alpha=0.75, markersize=7)
plt.title('phase plane plot: Van der Pol oscillator')
plt.xlabel('x')
plt.ylabel('y')
plt.legend(loc='lower right')
plt.gca().spines['top'].set_visible(False)
plt.gca().spines['right'].set_visible(False)
plt.gca().spines['bottom'].set_visible(False)
plt.gca().spines['left'].set_visible(False)
plt.ylim(-mgrid_size, mgrid_size)
plt.tight_layout()
plt.show()
```

The code above generates a phase plane plot of the Van der Pol oscillator for a given set of initial conditions `z0` ($=(x_0, y_0)$) and a parameter $\mu$. The trajectory is represented by the red curve, and its start and end points are indicated by blue and yellow dots, respectively. The nullclines are plotted in cyan, and the vector field is represented by the color-coded streamlines. To understand the dynamics of the system, we can change the initial conditions and the parameter $\mu$ to see how the system’s behavior changes.

Let’s start with a stable limit cycle by setting $\mu=1$:

For $\mu=1$, the phase plane plot shows that the system converges to a closed limit cycle for all four initial conditions. The vector field indicates that the system’s trajectories are attracted to the limit cycle. Near the origin (`z0=[0.01, 0]`), the fixed point is an unstable spiral (a spiral source), and the trajectory spirals outward onto the limit cycle. Far from the origin (`z0=[7, 1]` and `z0=[-7, 1]`), the system is damped, but also evolves to the limit cycle.

The $y$-nullcline intersects the $x$-axis at $x = 0$ and has vertical asymptotes at $x = \pm1$, corresponding to the points where the denominator becomes zero and the dynamics of the system become strongly nonlinear. The shape of the $y$-nullcline helps to identify the regions in the phase plane where the rate of change of the $y$-variable increases or decreases, which is crucial for analyzing the stability and dynamic behavior of the system.

In general, for all initial conditions with $\mu\gt0$, the system converges to a globally stable limit cycle:

For $\mu=0$, there is no damping and the system reduces to a simple harmonic oscillator.
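This limiting case is easy to verify numerically: with $\mu=0$ the equations reduce to $\dot{x}=y$, $\dot{y}=-x$, a harmonic oscillator for which the quantity $x^2+y^2$ is conserved. A quick check with tight solver tolerances:

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, z, mu):
    x, y = z
    return [y, mu * (1 - x**2) * y - x]

# mu = 0: no damping term, pure harmonic oscillation
sol = solve_ivp(van_der_pol, [0, 50], [2, 0], args=(0,),
                rtol=1e-10, atol=1e-10)
energy = sol.y[0]**2 + sol.y[1]**2  # constant (= 4 here) for mu = 0
```

The spread of `energy` over the whole integration stays tiny, confirming undamped sinusoidal oscillation.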

For negative values of $\mu$, the system exhibits unstable behavior and the trajectories diverge from the origin:

How to interpret our findings? The graphical analysis of the phase plane plots allows us to identify parameter values of $\mu$, which lead to a stable oscillatory behavior of the Van der Pol oscillator. It also allows us to identify the stability of the limit cycle and the system’s behavior for different initial conditions. This information is crucial for understanding the dynamic properties of the Van der Pol oscillator and its behavior in different parameter regimes.

Another approach to analyze the Van der Pol oscillator is to transform it into a Liénard system. A Liénard equation is a second-order differential equation of the form

\[{d^{2}x \over dt^{2}}+f(x){dx \over dt}+g(x)=0\]with $f(x)$ and $g(x)$ being continuously differentiable functions on $\mathbb{R}$, where $f$ is an even function and $g$ is an odd function. The Van der Pol oscillator is such a system with $f(x) = -\mu(1-x^2)$ and $g(x) = x$. The advantage of the Liénard equation is that, by applying an appropriate transformation by introducing a new variable, it can be reduced to a first-order system of ODEs, which can be analyzed more easily.

The commonly used Liénard transformation for the Van der Pol oscillator is given by defining a new variable $y=x-x^{3}/3-{\dot {x}}/\mu$, that combines $x$, $\dot{x}$, and a non-linear term of $x$. The transformation is tailored to both linearize parts of the system and clearly separate the dynamics into two interacting parts. Using this transformation, we differentiate $y$ with respect to $t$ to connect it to the original equation:

\[\dot{y} = \dot{x} - x^2\dot{x} - \frac{1}{\mu}\ddot{x}.\]Substituting $\ddot{x}$ from the Van der Pol equation gives us:

\[\dot{y} = \dot{x} - x^2\dot{x} - (1-x^2)\dot{x} + \frac{x}{\mu} = \frac{x}{\mu},\]which simplifies due to the cancellation of terms. Therefore, the transformation leads to the following two first-order differential equations:

\[\begin{align*} \dot{x} &= \mu \left(x-{\tfrac {1}{3}}x^{3}-y\right), \\ \dot{y} &= {\frac {1}{\mu }}x. \end{align*}\]The $\dot{x}$ equation follows from rearranging the definition of $y$. The system is now in a form suitable for phase plane analysis, numerical simulation, and further analytical study. The corresponding $\dot{x}=0$- and $\dot{y}=0$-nullclines are

\[\begin{align*} & \dot{x}=0 \\ \Leftrightarrow \; & \mu \left(x - \frac{1}{3}x^3 - y\right) = 0 \\ \Leftrightarrow \; & y = x - \frac{1}{3}x^3 \end{align*}\]and

\[\begin{align*} \dot{y}=0 \Leftrightarrow \; & \frac{1}{\mu}x = 0 \end{align*}\]The $\dot{x}=0$-nullcline is a cubic function and the $\dot{y}=0$-nullcline is a vertical line through the origin.

The transformed ODE and its nullclines do indeed look different from the ones we have derived above, which can lead to variations in the qualitative depiction of the dynamics on the phase plane. However, this difference does not necessarily contradict the equivalence of both transformations in capturing the dynamical behavior of the original equation. The primary goal of both transformations is to make the analysis of the original system more tractable. Each does so by unpacking the oscillator’s dynamics into a format that is easier to analyze. While the nullclines are indeed different between the two transformed systems, each set of nullclines provides unique insights into the oscillator’s behavior. The first-order system transformation directly translates the original dynamics into a two-dimensional phase plane, making the oscillator’s limit cycle behavior directly observable. The Liénard transformation, on the other hand, restructures the dynamics to highlight the energy exchange and damping effects in a different but equivalently valid two-dimensional representation. The difference in nullclines and the resulting phase portraits does not imply a contradiction but rather reflects different perspectives on the same dynamical phenomena. The core dynamical behavior — the presence of self-sustained oscillations and how they are affected by the non-linearity parameter $\mu$ — remains consistent across both representations and the original system, as we will see in the following.
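This equivalence can also be verified numerically: since the Liénard form is only a change of variables, both systems must produce the same $x(t)$ once the initial condition is transformed consistently via $y = x - x^3/3 - \dot{x}/\mu$. A quick cross-check, assuming $\mu=1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0

def vdp_original(t, z):
    x, y = z                      # here y = dx/dt
    return [y, mu * (1 - x**2) * y - x]

def vdp_lienard(t, z):
    x, y = z                      # here y = x - x**3/3 - (dx/dt)/mu
    return [mu * (x - x**3 / 3 - y), x / mu]

x0, xdot0 = 2.0, 0.0
y0_lienard = x0 - x0**3 / 3 - xdot0 / mu   # transform the initial condition
t_eval = np.linspace(0, 20, 2000)
sol_a = solve_ivp(vdp_original, [0, 20], [x0, xdot0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)
sol_b = solve_ivp(vdp_lienard, [0, 20], [x0, y0_lienard],
                  t_eval=t_eval, rtol=1e-9, atol=1e-9)
max_dev = np.max(np.abs(sol_a.y[0] - sol_b.y[0]))  # agreement of x(t)
```

The maximum deviation between the two $x(t)$ traces remains tiny (well below $10^{-3}$ here), confirming that the two representations describe the same dynamics.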

Our previous Python code can be easily adapted to visualize the phase plane of the Van der Pol oscillator using the Liénard transformation: simply modify the `van_der_pol` function to represent the Liénard system and adjust the nullclines accordingly. The corresponding code can be found in the GitHub repository mentioned below.
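As a sketch of what this adaptation might look like (assuming, as in the previous post, that SciPy’s `solve_ivp` is used for the integration; the variable names here are only illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, z, mu):
    """Van der Pol oscillator in Liénard-transformed coordinates."""
    x, y = z
    dxdt = mu * (x - x**3 / 3.0 - y)
    dydt = x / mu
    return [dxdt, dydt]

# Integrate one trajectory starting close to the (unstable) fixed point:
mu = 1.0
sol = solve_ivp(van_der_pol, [0, 50], [0.0, 0.5], args=(mu,),
                dense_output=True, max_step=0.05)

# Nullclines of the Liénard system, e.g., for overlaying on the phase plane:
x = np.linspace(-3, 3, 400)
y_xnull = x - x**3 / 3.0  # dx/dt = 0  =>  y = x - x^3/3 (cubic)
# dy/dt = 0  =>  x = 0 (the y-axis)
```

For $\mu=1$, the trajectory should settle onto the limit cycle with an $x$-amplitude of roughly 2.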

Let’s interpret the results:

With $\mu=1$, the phase plane plot shows that the system converges to a closed limit cycle for different initial conditions. The vector field indicates that trajectories are always attracted to the limit cycle. Near the origin (`z0=[0, 0.5]`), the fixed point is unstable and trajectories spiral outward onto the limit cycle. Far from the origin (`z0=[4, 0]` and `z0=[4, -4]`), the motion is initially damped, but these trajectories also evolve to the limit cycle. When the trajectory starts exactly at the origin (`z0=[0, 0]`), no oscillations are observed, which is consistent with the fact that the origin is an equilibrium point of the Liénard system: a trajectory that starts exactly on it remains there, even though the equilibrium itself is unstable.

For increasing values of $\mu\gt1$, the limit cycle becomes smaller and the system’s behavior becomes more damped. This leads to faster convergence to the limit cycle and a decrease in the amplitude of the oscillations.

For $0\lt\mu\lt1$, the system still exhibits stable limit cycle behavior, but the limit cycle becomes larger and the system’s behavior becomes less damped. This leads to slower convergence to the limit cycle and an increase in the amplitude of the oscillations. Note that for $\mu=0$, no solution can be derived this time, as the Liénard transformation would involve a division by zero.

For negative values of $\mu$, the system exhibits unstable behavior. A negative $\mu$ alters the nature of the damping in the system. Instead of the non-linear damping that stabilizes the limit cycle for positive $\mu$, negative $\mu$ values can lead to an increase in the system’s energy, potentially destabilizing it or altering its convergence behavior.

Depending on the initial conditions, the system either converges to a stable equilibrium point or the trajectories diverge from the origin. Which of the two occurs depends on where the initial conditions lie with respect to the $\dot{x}=0$-nullcline. This nullcline represents the points in phase space where the horizontal ($x$-direction) velocity component is zero. The system’s behavior on either side of it is determined by the sign and magnitude of $\dot{y}$, which, in combination with the effects of $\mu$, determines whether trajectories move towards or away from the nullcline and potentially cross it. The position of the initial condition relative to this nullcline thus determines the trajectory’s initial direction and behavior, and with it the convergence or divergence outcome.

And again, the magnitude of $\mu$ affects the rate at which trajectories converge or diverge: larger absolute values of $\mu$ result in faster dynamics, either hastening the convergence to stable equilibrium points or accelerating the divergence from unstable ones.

In order to further analyze the fix points and the stability of the system, we can again use the Jacobian matrix and its eigenvalues. The Jacobian matrix of the Liénard system is given by

\[J = \begin{bmatrix} \frac{\partial \dot{x}}{\partial x} & \frac{\partial \dot{x}}{\partial y} \\ \frac{\partial \dot{y}}{\partial x} & \frac{\partial \dot{y}}{\partial y} \end{bmatrix} = \begin{bmatrix} \mu(1 - x^2) & -\mu \\ \frac{1}{\mu} & 0 \end{bmatrix}.\]Evaluating the Jacobian at the fixed point $(0,0)$ gives:

\[J(0,0) = \begin{bmatrix} \mu & -\mu \\ \frac{1}{\mu} & 0 \end{bmatrix}.\]The eigenvalues $\lambda$ of the Jacobian matrix are found by solving the characteristic equation

\[\det(J - \lambda I) = 0,\]where $I$ is the identity matrix.

For the given $J(0,0)$, the characteristic equation becomes:

\[\begin{vmatrix} \mu - \lambda & -\mu \\ \frac{1}{\mu} & -\lambda \end{vmatrix} = -\lambda(\mu - \lambda) + 1 = \lambda^2 - \mu\lambda + 1 = 0.\]Solving this quadratic equation gives the eigenvalues:

\[\lambda = \frac{\mu}{2} - \frac{\sqrt{\mu^2 - 4}}{2}, \quad \frac{\mu}{2} + \frac{\sqrt{\mu^2 - 4}}{2}.\]Let’s analyze what the eigenvalues tell us about the system’s behavior in case of small perturbations around the fixed point $(0,0)$ and how the system’s stability depends on the parameter $\mu$:

- **for $\mu \gt 2$:** The term under the square root, $\mu^2 - 4$, is positive, leading to real and distinct eigenvalues. Since $0 \lt \sqrt{\mu^2 - 4} \lt \mu$, both adding and subtracting the square root to/from the positive $\mu/2$ yields a positive value, so both eigenvalues are positive. This indicates that the fixed point is an **unstable node**, as all trajectories move away from it.
- **for $0 \lt \mu \lt 2$:** In this range, the term under the square root becomes negative ($\mu^2 - 4 < 0$), which means the eigenvalues are complex, with real parts given by $\mu/2$ and imaginary parts determined by the square root of a negative number. Since the real part $\mu/2$ is positive for any $\mu > 0$, the fixed point is an **unstable spiral**, not a node, implying that trajectories spiral out from the fixed point.
- **for $-2 \lt \mu \lt 0$:** In this range, the term under the square root is still negative, leading to complex eigenvalues with real parts given by $\mu/2$. Since $\mu/2$ is negative in this case, the fixed point is a **stable spiral**, implying that trajectories spiral in towards the fixed point.
- **for $|\mu| = 2$:** The term under the square root is zero, resulting in a repeated real eigenvalue $\mu/2$. This is a special case indicating a **bifurcation point**, where the nature of the fixed point can change; a bifurcation occurs when the system’s behavior changes qualitatively as a parameter is varied. For $\mu = 2$, the repeated eigenvalue is positive, corresponding to a degenerate unstable node. For $\mu = -2$, the repeated eigenvalue is negative, corresponding to a degenerate stable node.
- **for $\mu \lt - 2$:** The term under the square root, $\mu^2 - 4$, is again positive, resulting in real and distinct eigenvalues. Since $\mu/2$ is negative and $\sqrt{\mu^2 - 4} \lt \vert\mu\vert$, both eigenvalues are negative. This means that the fixed point is a **stable node**, with trajectories moving toward it from all directions.
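This case distinction can be checked numerically. A minimal sketch (the helper name is made up) that evaluates $J(0,0)$ and its eigenvalues for a few representative values of $\mu$:

```python
import numpy as np

def lienard_jacobian_origin(mu):
    # Jacobian of the Liénard system evaluated at the fixed point (0, 0).
    return np.array([[mu,       -mu],
                     [1.0 / mu, 0.0]])

for mu in [3.0, 1.0, -1.0, -3.0]:
    eig = np.linalg.eigvals(lienard_jacobian_origin(mu))
    print(f"mu = {mu:+.1f}: eigenvalues = {eig}")
```

For $\mu=3$ this prints two positive real eigenvalues (unstable node), for $\mu=1$ a complex pair with positive real part $\mu/2 = 0.5$ (unstable spiral), and the mirrored stable cases for negative $\mu$.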

To understand what it means when the system converges to a stable limit cycle, we can examine its $x(t)$ component. For instance, below are two plots of $x(t)$, one for $\mu=-1$ and `z0=[1.0, 0.0]` (top), and another one for $\mu=2$ and `z0=[0.5, 0.5]` (bottom):

In the first case (top plot), the system does not converge to a stable limit cycle. After an initial swing, the system converges rapidly to the stable equilibrium point ($x=0$). In the second case (bottom plot), the system converges to a stable limit cycle and the $x(t)$ component oscillates around the equilibrium point. Convergence to a stable limit cycle means that the system exhibits a self-sustained oscillation that persists over time. Furthermore, the limit cycle is a **stable attractor**: trajectories are attracted to the limit cycle and remain on it over time, for (almost) any choice of initial conditions. The limit cycle is a characteristic feature of the Van der Pol oscillator and represents the system’s long-term behavior.
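These two regimes can be reproduced with a short simulation (again assuming SciPy’s `solve_ivp`; initial conditions as in the plots above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lienard(t, z, mu):
    # Van der Pol oscillator in Liénard-transformed coordinates.
    x, y = z
    return [mu * (x - x**3 / 3.0 - y), x / mu]

# mu = -1: the origin is a stable spiral, so x(t) decays to 0.
decay = solve_ivp(lienard, [0, 60], [1.0, 0.0], args=(-1.0,), max_step=0.05)

# mu = 2: trajectories settle onto the stable limit cycle.
cycle = solve_ivp(lienard, [0, 60], [0.5, 0.5], args=(2.0,), max_step=0.05)

print("final |x| for mu = -1:", np.abs(decay.y[0, -1]))
print("late-time amplitude for mu = 2:",
      np.max(np.abs(cycle.y[0, cycle.t > 30])))
```

The first value should be close to zero (convergence to the equilibrium), while the second should be close to 2, the $x$-amplitude of the limit cycle.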

In this post, we have applied phase plane analysis to the Van der Pol oscillator. We have visualized the phase plane of the Van der Pol oscillator and identified its limit cycle for different initial conditions and parameter values. We have also discussed the influence of the Liénard transformation on the phase plane and the system’s dynamics. The graphical analysis of the phase plane plots has allowed us to identify parameter values of $\mu$ that lead to a stable oscillatory behavior and those that lead to unstable behavior. We have also identified the stability of the limit cycle and the system’s behavior for different initial conditions. This information is crucial for understanding the dynamic properties of the Van der Pol oscillator and its behavior in different parameter regimes. The phase plane analysis has again provided us with a powerful tool to gain insights into the behavior of a dynamical system and to predict its long-term behavior.

You can find the complete code used in this post in this GitHub repositoryꜛ.

- Scholarpedia article on the Van der Pol oscillatorꜛ
- Wikipedia article on the Liénard equationꜛ
- D. H. Rothman, *Lecture notesꜛ for 12.006J/18.353J/2.050J, Nonlinear Dynamics: Chaos. Forced oscillators and limit cyclesꜛ*, MIT, September 27, 2022

Let’s recall the system of ordinary differential equations (ODEs) that define the Rössler attractor:

\[\begin{align*} \dot{x} &= -y - z \\ \dot{y} &= x + ay \\ \dot{z} &= b + z(x - c) \end{align*}\]The nullclines of a system are the curves where the derivative of the variables with respect to time is zero. Thus, to find the nullclines and the fixed points of the system, we start by setting each equation to zero:

\[\begin{align*} \dot{x} =0& \quad \Leftrightarrow \quad y=-z \\ \dot{y} =0& \quad \Leftrightarrow \quad y=-\frac{x}{a} \\ \dot{z} =0& \quad \Leftrightarrow \quad 0 = b + z(x - c) \end{align*}\]which leads to

\[\begin{align*} y&=-z \\ x&=az \\ 0&= b + z(az - c) = az^2 - cz + b \end{align*}\]$z$ can be solved for using the quadratic formula:

\[z = \frac{c \pm \sqrt{-4ab + c^2}}{2a}\]The three conditions above define the $\dot{x}$-, $\dot{y}$-, and $\dot{z}$-nullclines, respectively; the quadratic formula gives the $z$-coordinates of the points where all three intersect, i.e., of the fixed points.

Given $z$, we can now further specify the values of $x$ and $y$:

\[\begin{align*} x_{1,2} &= a\left(\frac{c \pm \sqrt{-4ab + c^2}}{2a}\right) \\ y_{1,2} &= -\left(\frac{c \pm \sqrt{-4ab + c^2}}{2a}\right) \\ z_{1,2} &= \frac{c \pm \sqrt{-4ab + c^2}}{2a} \end{align*}\]Thus, there are two fixed points, depending on the value of $z$, which in turn depends on the parameters $a$, $b$, and $c$.
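These expressions are easy to evaluate numerically. A small sketch (the function name is made up) that returns both fixed points as $(x, y, z)$ tuples:

```python
import numpy as np

def rossler_fixed_points(a, b, c):
    """Fixed points of the Rössler system, from a z^2 - c z + b = 0."""
    disc = c**2 - 4 * a * b
    if disc < 0:
        return []  # no real fixed points
    z_plus = (c + np.sqrt(disc)) / (2 * a)
    z_minus = (c - np.sqrt(disc)) / (2 * a)
    # x = a z and y = -z for each root:
    return [(a * z, -z, z) for z in (z_plus, z_minus)]

# Rössler's original parameters:
fp_outer, fp_center = rossler_fixed_points(0.2, 0.2, 5.7)
print("outer fixed point:  ", fp_outer)
print("central fixed point:", fp_center)
```

For $a=b=0.2$ and $c=5.7$, the central fixed point sits very close to the origin, at roughly $(0.0070, -0.0351, 0.0351)$.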

One of the fixed points, usually found near the center of the attractor loop, acts as an “organizing center” for the dynamics, especially in parameter regimes where the attractor exhibits chaotic behavior. Trajectories spiral out from this fixed point, loop around in the characteristic shape of the attractor, and then come back close to it, only to be pushed away again. This fixed point can be thought of as being part of the “chaotic sea” where trajectories move in complex, sensitive, and unpredictable paths.

The other fixed point, lying relatively far from the main attractor loop, typically has a different stability property and doesn’t directly influence the visible structure of the attractor in the same way. Thus, in the following, we will focus on the first fixed point, which is often the more interesting one.

These fixed points are valid assuming that the discriminant $(-4ab + c^2)$ under the square root is non-negative, which is necessary for $z$ to have real values. The presence and stability of these fixed points, as well as the overall behavior of the Rössler attractor, depend significantly on the specific values of the parameters $a$, $b$, and $c$.

To derive the eigenvalues and eigenvectors for the centrally located fixed point, we’ll first specify the fixed point’s coordinates and then linearize the system around this point. The Jacobian matrix $J$ of the system is:

\[J = \begin{bmatrix} \frac{\partial \dot{x}}{\partial x} & \frac{\partial \dot{x}}{\partial y} & \frac{\partial \dot{x}}{\partial z} \\ \frac{\partial \dot{y}}{\partial x} & \frac{\partial \dot{y}}{\partial y} & \frac{\partial \dot{y}}{\partial z} \\ \frac{\partial \dot{z}}{\partial x} & \frac{\partial \dot{z}}{\partial y} & \frac{\partial \dot{z}}{\partial z} \end{bmatrix} = \begin{bmatrix} 0 & -1 & -1 \\ 1 & a & 0 \\ z & 0 & x-c \end{bmatrix}\]To find the eigenvalues $\lambda$, we solve the characteristic equation obtained by setting the determinant of $J - \lambda I$ to zero, where $I$ is the identity matrix:

\[\begin{align*} \text{det}(J - \lambda I) &= 0 \end{align*}\] \[\begin{align*} \Leftrightarrow \, \text{det}\left( \begin{bmatrix} -\lambda & -1 & -1 \\ 1 & a-\lambda & 0 \\ z & 0 & x-c-\lambda \end{bmatrix} \right) &= 0 \end{align*}\]By expanding this determinant and simplifying it we obtain the characteristic polynomial of $J$, which is a cubic equation in terms of $\lambda$:

\[-\lambda^3 + \lambda^2(a+x-c) + \lambda(ac-ax-1-z) + x-c+az = 0\]The characteristic polynomial captures the behavior of the system around the fixed point in terms of the eigenvalues, which indicate the nature of the fixed point (stable, unstable, saddle point, etc.). Inserting the centrally located fixed point’s coordinates and Rössler’s original parameters, $a=0.2$, $b=0.2$, and $c=5.7$, we can solve for the eigenvalues $\lambda$:

\[\begin{align*} \lambda_{1}&=0.0971028+0.995786i \\ \lambda_{2}&=0.0971028-0.995786i \\ \lambda_{3}&=-5.68718 \end{align*}\]The eigenvalues with positive real part indicate repulsion along the corresponding eigenvectors, while the negative real eigenvalue indicates attraction. The eigenvectors corresponding to these eigenvalues can be calculated by solving the eigenvalue equation:

\[(J - \lambda I) \mathbf{v} = 0,\]which leads to:

\[\begin{align*} v_{1}&={\begin{pmatrix}0.7073\\-0.07278-0.7032i\\0.0042-0.0007i\\\end{pmatrix}} \\ v_{2}&={\begin{pmatrix}0.7073\\0.07278+0.7032i\\0.0042+0.0007i\\\end{pmatrix}} \\ v_{3}&={\begin{pmatrix}0.1682\\-0.0286\\0.9853\\\end{pmatrix}} \end{align*}\]The eigenvectors indicate the directions of the principal axes of the local dynamics around the examined centrally located fixed point. The first two eigenvalue/eigenvector pairs ($\lambda_{1}/v_1$ and $\lambda_{2}/v_2$) account for the consistent outward movement observed in the primary disc of the attractor. The last pair ($\lambda_{3}/v_3$) is responsible for the attraction along an axis that intersects the manifold’s center, explaining the $z$-motion within the attractor. The sign of the real part of an eigenvalue indicates the stability of the fixed point in the direction of the corresponding eigenvector: a negative real part indicates stability, while a positive real part indicates instability. The complex pair with positive real part thus makes the fixed point a spiral source within the plane spanned by $v_1$ and $v_2$, while the single real, negative eigenvalue $\lambda_{3}$ indicates stability in the direction associated with its eigenvector ($v_{3}$); together, this saddle-focus structure is consistent with the observable behavior of the attractor.

The following plot shows the same plot that we have already generated in the previous post, simulating the attractor using Rössler’s default values ($a = 0.2$, $b = 0.2$, and $c = 5.7$), but with the fixed point and the eigenvectors added:

The plot is consistent with what we have discussed above. The first two eigenvectors (green vectors) span a plane in which the constant outward movement observed in the primary disc of the attractor occurs (the repelling plane). The third eigenvector (magenta vector) indicates the direction of attraction.

By applying phase plane analysis to the Rössler attractor, we have gained valuable insights into the system’s behavior. We have identified the nullclines and fixed points of the system, and analyzed the attractor’s dynamics in the phase space. We have also calculated the eigenvalues and eigenvectors of the centrally located fixed point, which provide further information about the attractor’s local dynamics. The results are consistent with the observed behavior of the attractor, and we were able to gain a deeper understanding of the system’s structure and behavior.

Overall, we have seen that phase plane analysis is a powerful tool to investigate the dynamics of dynamical and even chaotic systems, and that it can provide valuable insights into a system’s behavior without the need for extensive numerical simulations.

You can find the code used to generate the plot for this post in this GitHub repositoryꜛ.
