Blog
Articles about computational and data science, neuroscience, and open-source solutions. Personal stories are filed under Weekend Stories. Browse all topics here. All posts are licensed under CC BY-NC-SA unless otherwise stated. Feel free to share, remix, and adapt the content, as long as you give appropriate credit and distribute your contributions under the same license.
New teaching material: Functional imaging data analysis – From calcium imaging to network dynamics
We have just taught our new course, Functional Imaging Data Analysis: From Calcium Imaging to Network Dynamics, for the first time as part of the Master of Neuroscience program. The course is now freely available online under ‘Teaching’, alongside my other open educational materials. All resources were created with a strict focus on open content and reproducibility, so anyone interested can make full use of the lectures, figures, and example code without copyright concerns. Feel free to use and share them.
Miniforge: The minimal, open solution for institutional Python environments
Licensing changes to the Anaconda distribution have raised questions for many research institutions about how to provide Python environments to their members. Miniforge offers a minimal, fully open-source alternative: a lightweight conda installer preconfigured to use only the community-maintained conda-forge channel, avoiding Anaconda’s commercial package repositories and their licensing terms entirely. In this post, I outline why Miniforge is a sensible default for institutional Python environments and how to set it up.
Exploring connected notes: Local graph views for DEVONthink knowledge bases
DEVONthink has long been my preferred tool for managing personal knowledge. Its smart features, like automatic Wiki-links and AI-based classification, make it ideal for organizing interconnected notes. But these links often stay invisible without a visual layer. To address this, I developed a tool that generates interactive local graphs, embedded directly into each note. These graphs reveal how notes connect, highlighting clusters and patterns that text alone might miss. Fully local and integrated into DEVONthink 3, the system offers a lightweight way to navigate Markdown notes spatially. After refining it for my own workflow, I’m now sharing it for others who might benefit from clearer structure and better insight into their knowledge base.
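For readers curious what building such a local graph involves, here is a minimal, hypothetical sketch of the core idea, not the actual tool: it collects [[wiki-link]] targets from a folder of exported Markdown notes and extracts the one-hop neighbourhood of a single note with networkx. The "notes" folder and the note title are placeholders.

```python
import re
from pathlib import Path

import networkx as nx

WIKILINK = re.compile(r"\[\[([^\]|#]+)")        # captures the target of [[...]] links

notes_dir = Path("notes")                        # placeholder: exported Markdown notes

# Build a directed link graph: one node per note, one edge per wiki-link
G = nx.DiGraph()
for note in notes_dir.glob("*.md"):
    G.add_node(note.stem)
    for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
        G.add_edge(note.stem, target.strip())

# "Local graph" of one note: the note itself plus its direct neighbours
center = "Some Note"                             # placeholder note title
local = nx.ego_graph(G.to_undirected(), center, radius=1)
print(sorted(local.nodes))
```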
Astrocytes enhance plasticity response during reversal learning
Astrocytes, a type of glial cell traditionally considered support cells in the brain, are now recognized as active participants in synaptic plasticity and memory. I found this development particularly compelling and presented the following study in our Journal Club. The paper by Squadrani et al. (2024) explores the role of astrocyte-mediated D-serine regulation in modulating learning flexibility, particularly during reversal learning — the ability to adapt to changes in the environment. The work builds on prior experiments by Bohmbach et al. (2022), which identified an astrocyte-neuron feedback loop involving endocannabinoids and astrocytic D-serine release in the hippocampus.
New teaching material: Dimensionality reduction in neuroscience
We just completed a new two-day course on Dimensionality Reduction in Neuroscience, and I am pleased to announce that the full teaching material is now freely available under a Creative Commons (CC BY 4.0) license. The course provides an introductory overview of dimensionality reduction techniques for neuroscientists and data scientists alike, focusing on how to handle the increasingly high-dimensional datasets generated by modern neuroscience research.
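To give a flavour of the kind of workflow the course covers, here is a minimal sketch, assuming scikit-learn and a synthetic stand-in for a trials-by-neurons activity matrix: PCA recovers the few latent factors that explain most of the variance in a nominally 100-dimensional recording.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic data: 3 latent factors projected into 100 "neurons", plus noise
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 100))
X = latent @ mixing + 0.5 * rng.normal(size=(500, 100))

pca = PCA(n_components=10)
Z = pca.fit_transform(X)                        # low-dimensional representation
print(pca.explained_variance_ratio_.round(2))   # variance drops sharply after PC 3
```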
Long-term potentiation (LTP) and long-term depression (LTD)
Both long-term potentiation (LTP) and long-term depression (LTD) are forms of synaptic plasticity, the ability of synapses to change their strength over time. These processes are crucial for learning and memory, as they allow the brain to adapt to new information and experiences. Since both processes come up frequently in computational neuroscience, I thought it would be useful to provide a brief overview of the biological mechanisms underlying them and their significance in the brain.
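In models, the distinction often reduces to the sign of a weight change. As a toy illustration only, not part of the post itself, here is a pair-based spike-timing-dependent plasticity (STDP) kernel with made-up parameters, in which pre-before-post spike pairs potentiate (LTP-like) and post-before-pre pairs depress (LTD-like):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Weight change for a single pre/post spike pair.
    delta_t = t_post - t_pre (s): positive pairs potentiate (LTP-like),
    negative pairs depress (LTD-like). All parameters are illustrative."""
    if delta_t >= 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

for dt_pair in (0.005, -0.005):
    print(f"dt = {dt_pair * 1e3:+.0f} ms -> dw = {stdp_dw(dt_pair):+.4f}")
```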
Bienenstock-Cooper-Munro (BCM) rule
The Bienenstock-Cooper-Munro (BCM) rule is a cornerstone of theoretical neuroscience, offering a comprehensive framework for understanding synaptic plasticity – the process by which connections between neurons are strengthened or weakened over time. Since its introduction in 1982, the BCM rule has provided critical insights into the mechanisms of learning and memory formation in the brain. In this post, we briefly explore the BCM rule: its theoretical foundations, mathematical formulation, and implications for neural plasticity.
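The core of the rule can be stated compactly: the weight change is proportional to x·y·(y − θ), where the modification threshold θ slides with a running average of the squared output. Below is a minimal rate-based sketch with a single scalar weight; the parameter values are illustrative assumptions, not taken from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

eta, tau_theta, dt = 1e-3, 50.0, 0.1    # learning rate, threshold time constant, step
w, theta = 0.5, 0.0                     # initial weight and sliding threshold

for _ in range(100_000):
    x = rng.choice([0.5, 1.5])          # input alternates between two "patterns"
    y = w * x                           # linear rate neuron
    # BCM update: potentiation when y > theta, depression when y < theta
    w += dt * eta * x * y * (y - theta)
    # the threshold tracks a running average of y^2, stabilising the rule
    theta += dt / tau_theta * (y ** 2 - theta)

print(f"w = {w:.3f}, theta = {theta:.3f}")
```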
Campbell and Siegert approximation for estimating the firing rate of a neuron
The Campbell and Siegert approximation is a method used in computational neuroscience to estimate the firing rate of a neuron given a certain input. This approximation is particularly useful for analyzing the firing behavior of neurons that follow a leaky integrate-and-fire (LIF) model or similar models under the influence of stochastic input currents.
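Under the standard white-noise assumptions, the resulting mean first-passage-time expression is straightforward to evaluate numerically. Here is a sketch assuming scipy, with illustrative parameter values in SI units:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def siegert_rate(mu, sigma, tau_m=20e-3, tau_ref=2e-3, v_th=20e-3, v_reset=0.0):
    """Stationary firing rate (Hz) of a LIF neuron whose free membrane
    potential has mean `mu` and standard deviation `sigma` (volts)."""
    integrand = lambda u: np.exp(u ** 2) * (1.0 + erf(u))
    integral, _ = quad(integrand, (v_reset - mu) / sigma, (v_th - mu) / sigma)
    return 1.0 / (tau_ref + tau_m * np.sqrt(np.pi) * integral)

# The rate rises steeply as the mean input drives V closer to threshold
for mu in (10e-3, 15e-3, 18e-3):
    print(f"mu = {mu * 1e3:.0f} mV -> {siegert_rate(mu, sigma=5e-3):6.1f} Hz")
```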
New preprint: Breaking new ground in brain imaging with three-photon microscopy
Our new preprint on Three-photon in vivo imaging of neurons and glia in the medial prefrontal cortex with sub-cellular resolution is out! In the study, we show that three-photon microscopy can resolve neurons and glia at sub-cellular resolution deep in the medial prefrontal cortex of live, behaving animals, at depths beyond the reach of conventional two-photon imaging.
Exponential (EIF) and adaptive exponential Integrate-and-Fire (AdEx) model
The exponential Integrate-and-Fire (EIF) model is a simplified neuronal model that captures the essential dynamics of action potential generation. It extends the classical Integrate-and-Fire (IF) model by incorporating an exponential term to model the rapid rise of the membrane potential during spike initiation more accurately. The adaptive exponential Integrate-and-Fire (AdEx) model is a variant of the EIF model that includes an adaptation current to account for spike-frequency adaptation observed in real neurons. In this tutorial, we will explore the key features of the EIF and AdEx models and their applications in simulating neuronal dynamics.
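To make the model concrete, here is a minimal forward-Euler sketch of an AdEx neuron under constant current drive. The parameter values are illustrative, in the range of published fits (e.g. Brette & Gerstner, 2005), and are not taken from the tutorial itself.

```python
import numpy as np

# AdEx parameters (illustrative values, SI units)
C, g_L, E_L = 281e-12, 30e-9, -70.6e-3       # capacitance, leak conductance, rest
V_T, Delta_T = -50.4e-3, 2e-3                # threshold and slope factor
tau_w, a, b = 144e-3, 4e-9, 80.5e-12         # adaptation time constant/coupling/jump
V_r, V_peak = -70.6e-3, 0.0                  # reset value and numerical spike cutoff

dt, T = 1e-5, 0.5                            # time step and duration (s)
I = 0.8e-9                                   # constant input current (A)

V, w = E_L, 0.0
spikes = []
for i in range(int(T / dt)):
    # the exponential term produces the rapid upstroke once V approaches V_T
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:                          # spike: reset V, increment adaptation
        V = V_r
        w += b
        spikes.append(i * dt)

print(f"{len(spikes)} spikes; mean rate {len(spikes) / T:.1f} Hz")
```

Because w grows with each spike and decays slowly, successive inter-spike intervals lengthen, which is the spike-frequency adaptation the AdEx model adds over the plain EIF.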