Deep learning based 3D-segmentation of dendritic spines recorded with two-photon in vivo imaging

Published in: 6th International BonnBrain Conference, Bonn, Germany, 2023
Type: Poster Presentation

Citation

Fabrizio Musacchio, Pragya Mishra, Pranjal Dhole, Shekoufeh Gorgi Zadeh, Sophie Crux, Felix Nebeling, Stefanie Poll, Manuel Mittag, Falko Fuhrmann, Eleonora Ambrad, Andrea Baral, Julia Steffen, Miguel Fernandes, Thomas Schultz, Martin Fuhrmann, "Deep learning based 3D-segmentation of dendritic spines recorded with two-photon in vivo imaging" (2023). 6th International BonnBrain Conference, Bonn, Germany, https://bonnbrain.de/

Abstract

The automatic detection of dendritic spines in 3D remains a challenging and not fully resolved problem for two-photon in vivo imaging. The emergence of convolutional neural networks (CNNs) such as U-Net [1] has enabled deep learning based segmentation pipelines for biomedical images in general and for dendritic spines in particular (e.g., [2, 3]). While these pipelines perform well on in vitro confocal image data, they yield lower prediction accuracy when applied to volumetric in vivo two-photon images, which have a lower signal-to-noise ratio and larger motion artifacts. Researchers in this field therefore still tend to analyze dendritic spines manually, which is time-consuming and prone to human bias. We developed a pipeline for multi-class semantic image segmentation based on a fully convolutional neural network that specifically targets 3D two-photon in vivo image data. By choosing U-Net as the underlying network architecture, only a few labeled training images (<50) are required. The U-Net processes 2D images to reduce computation time; a post-hoc 3D connectivity analysis then merges the classified spine pixels and reconstructs the 3D morphology. Our pipeline segments spines from their associated dendrites with 85% accuracy and enables further analysis of, e.g., spine morphology and spine density.
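The post-hoc 3D connectivity step described above can be illustrated with a minimal sketch: per-slice 2D spine masks are stacked into a volume and 3D-connected components are labeled, so that spine pixels overlapping across neighboring z-slices merge into one object. This is not the authors' implementation; the function name and the use of `scipy.ndimage.label` with 26-connectivity are assumptions for illustration.

```python
# Hypothetical sketch of a post-hoc 3D connectivity analysis:
# stack per-slice binary spine masks and label 3D-connected components.
import numpy as np
from scipy import ndimage

def merge_spine_slices(masks):
    """Stack 2D binary spine masks into a (z, y, x) volume and label
    3D-connected components using a full 26-neighborhood."""
    volume = np.stack(masks, axis=0)
    structure = np.ones((3, 3, 3), dtype=int)  # 26-connectivity
    labels, n_spines = ndimage.label(volume, structure=structure)
    return labels, n_spines

# Toy example: a blob overlapping across two slices plus one isolated pixel.
m0 = np.zeros((8, 8), dtype=bool); m0[1:3, 1:3] = True
m1 = np.zeros((8, 8), dtype=bool); m1[2:4, 2:4] = True; m1[6, 6] = True
labels, n = merge_spine_slices([m0, m1])
print(n)  # → 2: the overlapping blobs fuse into one spine, the lone pixel is another
```

In a real pipeline, each labeled component could then be measured (volume, extent, position along the dendrite) to derive spine morphology and density.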

References

  1. Ronneberger et al. (2015), doi: 10.48550/arXiv.1505.04597

  2. Xiao et al. (2018), doi: 10.1016/j.jneumeth.2018.08.019 

  3. Vidaurre-Gallart et al. (2022), doi: 10.3389/fnana.2022.817903 
