May 04, 2021
Tuesday

07:50 AM - 08:00 AM
Welcome

08:00 AM - 08:45 AM
Topological insights on neuronal morphologies
Lida Kanari (École Polytechnique Fédérale de Lausanne (EPFL))
- Location: SLMath: Online/Virtual
- Abstract:
The morphological diversity of neurons supports the complex information-processing capabilities of biological neuronal networks. A major challenge in neuroscience has been to reliably describe neuronal shapes with universal morphometrics that generalize across cell types and species. Inspired by algebraic topology, we have developed a topological descriptor of trees that couples the topology of their complex arborization with their geometric structure, retaining more information than traditional morphometrics. The topological morphology descriptor (TMD) has proved to be very powerful in separating neurons into well-defined groups on morphological grounds. The TMD algorithm led to the discovery of two distinct morphological classes of pyramidal cells in the human cortex that also have distinct functional roles, suggesting the existence of a direct link between the anatomy and the function of neurons. The TMD-based classification also led to the objective and robust morphological clustering of rodent cortical neurons. Recently we proved that the TMD of neuronal morphologies is also essential for the computational generation (i.e., synthesis) of dendritic morphologies. Our results demonstrate that a topology-based synthesis algorithm can reproduce both morphological and electrical properties of reconstructed biological rodent cortical dendrites and generalizes well to a wide variety of different dendritic shapes. Therefore it is suitable for the generation of unique neuronal morphologies to populate the digital reconstruction of large-scale, physiologically realistic networks.
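As a rough illustration of the barcode construction behind the TMD (the actual algorithm and data structures are more elaborate than this), here is a minimal pure-Python sketch on a made-up toy dendrite: each leaf opens a bar at its radial distance from the soma, and at every branch point all but the largest active branch die. The tree, the distances, and the (branch value, leaf value) bar convention are all illustrative assumptions, not the authors' code.

```python
# Hypothetical toy dendrite: parent pointers and radial distance of each
# node from the soma (node 0). All numbers are made up for illustration.
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
dist = {0: 0.0, 1: 5.0, 2: 4.0, 3: 12.0, 4: 9.0, 5: 7.0, 6: 10.0}

def tmd_barcode(parent, dist, root=0):
    """TMD-style barcode: each leaf opens a bar at its distance; at every
    branch point the largest active value survives and the others die."""
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)

    bars = []

    def active(node):
        # Returns the largest leaf value still alive in this subtree.
        kids = children.get(node, [])
        if not kids:                      # leaf: opens a bar
            return dist[node]
        vals = sorted(active(k) for k in kids)
        for v in vals[:-1]:               # all but the largest die here
            bars.append((dist[node], v))
        return vals[-1]

    survivor = active(root)
    bars.append((dist[root], survivor))   # the longest branch dies at the root
    return sorted(bars)
```

One bar is produced per leaf, so the barcode size matches the number of terminal branches, as in the TMD.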

09:00 AM - 09:45 AM
A topological approach for understanding the neural representation of natural auditory signals
Tim Gentner (UC San Diego)
- Location: SLMath: Online/Virtual
- Abstract:
How complex, natural signals are represented in the activity (spiking) patterns of neural populations is not well understood. In this talk, I will describe data from a series of experiments that examine the spatiotemporal pattern of song-evoked spiking in populations of simultaneously recorded neurons in the secondary auditory cortices of European starlings (Sturnus vulgaris). Single neurons in these regions display composite receptive fields that incorporate large numbers (a dozen or more) of orthogonal features matched to the acoustics of species-typical song. Considered independently, the spiking response of a given neuron at a given point in time is therefore ambiguous with respect to the stimulus. Applied topology provides a promising tool to resolve this ambiguity and capture invariant structure in the spiking coactivity of arbitrarily large neural populations. I will show that the topology of the population spike train carries stimulus-specific structure that is not reducible to that of individual neurons. I will then introduce a topology-based similarity measure for population coactivity that is sensitive to invariant stimulus structure and show that this measure captures invariant neural representations tied to the learned relationships between natural vocalizations.

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
The Persistent Topology of Dynamic Data
Woojin Kim (Duke University)
- Location: SLMath: Online/Virtual
- Abstract:
This talk introduces a method for characterizing the dynamics of time-evolving data within the framework of topological data analysis (TDA), specifically through the lens of persistent homology. Popular instances of time-evolving data include flocking or swarming behaviors in animals and social networks in the human sphere. A natural mathematical model for such collective behaviors is that of a dynamic metric space. In this talk I will describe how to extend the well-known Vietoris-Rips filtration for metric spaces to the setting of dynamic metric spaces. We also extend a celebrated stability theorem on persistent homology for metric spaces to multiparameter persistent homology for dynamic metric spaces. To establish this stability property, we extend the notion of Gromov-Hausdorff distance between metric spaces to dynamic metric spaces. This talk will not require any prior knowledge of TDA, and is based on joint work with Facundo Mémoli and Nate Clause.
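For a single static finite metric space (the classical setting the talk generalizes), degree-0 persistence of the Vietoris-Rips filtration reduces to single-linkage merging and can be sketched in a few lines of stdlib Python. The point cloud below is made up for illustration; the dynamic, multiparameter setting of the talk is far richer than this.

```python
import math
from itertools import combinations

# Hypothetical point cloud: two loose clusters in the plane (made-up data).
points = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.5), (5.0, 5.0), (6.0, 5.0)]

def h0_barcode(points):
    """Degree-0 persistence of the Vietoris-Rips filtration: every point is
    born at scale 0; components merge at the pairwise distance that joins
    them (single-linkage), yielding one bar per merge plus an infinite bar."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    bars = []
    for d, i, j in edges:                   # process edges by increasing scale
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))           # a component dies at scale d
    bars.append((0.0, math.inf))            # one component lives forever
    return bars
```

Higher-degree persistence, and the multiparameter invariants of the talk, require substantially more machinery than this union-find pass.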

11:00 AM - 11:45 AM
Using Mapper to reveal a unique hub-like brain state at rest in highly sampled individuals
Manish Saggar (Stanford University School of Medicine)
- Location: SLMath: Online/Virtual
- Abstract:
Even in the absence of external stimuli, neural activity is both highly dynamic and organized across multiple spatiotemporal scales. The continuous evolution of brain activity patterns during rest is believed to help maintain a rich repertoire of possible functional configurations that relate to typical and atypical cognitive phenomena. Whether these transitions or "explorations" follow some underlying arrangement or instead lack a predictable ordered plan remains to be determined. Here, using a precision individual connectomics dataset, we aimed to reveal the rules that govern transitions in brain activity at rest. The dataset includes individually defined parcellations and ~5 hours of resting-state functional Magnetic Resonance Imaging (fMRI) data for each participant, both of which allowed us to examine the topology and dynamics of at-rest whole-brain configurations in unprecedented detail. We hypothesized that by revealing and characterizing the overall landscape of whole-brain configurations we could interpret the rules (if any) that govern transitions in brain activity at rest. To generate the landscape of whole-brain configurations we used the Topological Data Analysis (TDA)-based Mapper approach, which can reveal the shape of an underlying dataset as a graph (a.k.a. shape graph). We observed a rich topographic landscape in which the transition of activity from one canonical brain network to the next involved a large, shared attractor-like basin, or "transition state", where all networks were represented equally prior to entering distinct network configurations. The intermediate transition state and traversal through it seemed to provide the underlying structure for the continuous evolution of brain activity patterns at rest. In addition, differences in the manifold architecture were more consistent within than between subjects, providing evidence that this approach holds potential utility for precision medicine approaches.
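To make the shape-graph idea concrete, here is a minimal Mapper sketch in stdlib Python: a 1-D filter, a cover of its range by overlapping intervals, naive gap-based clustering within each preimage, and the nerve of the resulting clusters. This is a toy with made-up data and parameters, not the fMRI pipeline used in the study.

```python
def mapper_graph(values, n_intervals=2, overlap=0.25, gap=1.0):
    """Toy 1-D Mapper: cover the filter range with overlapping intervals,
    cluster each preimage by splitting at large gaps, and return the nerve
    (clusters as nodes, an edge whenever two clusters share a data point)."""
    lo, hi = min(values), max(values)
    length = (hi - lo) / n_intervals
    nodes = []
    for k in range(n_intervals):
        # Interval k of the cover, widened on both sides by the overlap.
        a = lo + k * length - overlap * length
        b = lo + (k + 1) * length + overlap * length
        members = sorted((i for i, v in enumerate(values) if a <= v <= b),
                         key=lambda i: values[i])
        cluster = []
        for i in members:
            if cluster and values[i] - values[cluster[-1]] > gap:
                nodes.append(frozenset(cluster))   # gap: start a new cluster
                cluster = []
            cluster.append(i)
        if cluster:
            nodes.append(frozenset(cluster))
    edges = {(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]}
    return nodes, edges

# Two well-separated blobs on a line: the shape graph has two components.
nodes, edges = mapper_graph([0.0, 0.1, 0.2, 3.0, 3.1, 3.2])
```

Real Mapper implementations use a multi-dimensional filter, a proper clustering algorithm in each preimage, and careful parameter selection; the skeleton above only shows how the nerve arises from an overlapping cover.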

May 05, 2021
Wednesday

08:00 AM - 08:45 AM
Discovering implicit computation graphs in nonlinear brain dynamics
Xaq Pitkow (Baylor College of Medicine)
- Location: SLMath: Online/Virtual
- Abstract:
Repeating patterns of microcircuitry in the cerebral cortex suggest that the brain reuses elementary or "canonical" computations. Neural representations, however, are distributed, so the relevant operations may be related only indirectly to single-neuron transformations. It thus remains an open challenge how to define these canonical computations. We present a theory-driven mathematical framework for inferring implicit canonical computations from large-scale neural measurements. This work is motivated by one important class of cortical computation, probabilistic inference. We posit that the brain has a structured internal model of the world, and that it approximates probabilistic inference on this model using nonlinear message-passing implemented by recurrently connected neural population codes. Our general analysis method simultaneously finds (i) the neural representation of relevant variables, (ii) interactions between these latent variables that define the brain's internal model of the world, and (iii) canonical message-functions that specify the implicit computations. With enough data, these properties are statistically distinguishable due to the symmetries inherent in any canonical computation, up to a global transformation of all interactions. As a concrete demonstration of this framework, we analyze artificial neural recordings generated by a model brain that implicitly implements advanced mean-field inference. Given external inputs and noisy neural activity from the model brain, we successfully estimate the latent dynamics and canonical parameters that explain the simulated measurements. In this first example application, we use a simple polynomial basis to characterize the latent canonical transformations. While this construction matches the true model, it is unlikely to capture a real brain's nonlinearities efficiently.
To address this, we develop a general, flexible variant of the framework based on graph neural networks to infer approximate inferences with a known neural embedding. Finally, analysis of these models reveals certain features of experiment design required to successfully extract canonical computations from neural data.

09:00 AM - 09:45 AM
Hyperbolic geometry in biological networks
Tatyana Sharpee (The Salk Institute for Biological Studies)
- Location: SLMath: Online/Virtual
- Abstract:
Using the sense of smell as an example, I will describe both theoretical reasons and experimental evidence that natural stimuli and human perception can be mapped onto a low-dimensional curved surface. This surface turns out to have negative curvature, corresponding to a hyperbolic metric. Although this map was derived purely from the statistics of co-occurrence between mono-molecular odorants in the natural environment, it revealed topography in the organization of human perception of smell. I will conclude with arguments for why a hyperbolic metric can be useful for other sensory systems.

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
Geometrical and topological data analyses reveal that higher-order structures provide flow channels for neuronal avalanches
Dane Taylor
- Location: SLMath: Online/Virtual

11:00 AM - 11:45 AM
Topological analysis of quasiperiodic signals
Jose Perea (Northeastern University)
- Location: SLMath: Online/Virtual
- Abstract:
This talk will be about quasiperiodic recurrence in time series data; i.e., the superposition of periodic oscillators with non-commensurate frequencies. The sliding window (or time delay) embeddings of such functions can be shown to be dense in high-dimensional tori, and we will discuss techniques to study the persistent homology of such sets. Along the way, we will present a recent Künneth theorem for persistent homology, as well as several applications.
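The sliding window (time delay) construction mentioned above is simple to write down. Below is a small stdlib-Python sketch with an illustrative quasiperiodic signal built from two incommensurate frequencies, 1 and sqrt(3); the specific signal, window dimension, and delay are arbitrary choices, not parameters from the talk.

```python
import math

def sliding_window(f, t, dim, tau):
    """Time-delay embedding: SW f(t) = (f(t), f(t + tau), ..., f(t + dim*tau))."""
    return [f(t + k * tau) for k in range(dim + 1)]

# A quasiperiodic signal: cosines at incommensurate frequencies 1 and sqrt(3).
def f(t):
    return math.cos(2 * math.pi * t) + math.cos(2 * math.pi * math.sqrt(3) * t)

# The point cloud swept out by the window; for suitable dim and tau it is
# dense in a 2-torus inside R^(dim + 1), whose persistent homology can then
# be computed with standard TDA software.
cloud = [sliding_window(f, 0.01 * n, dim=5, tau=0.2) for n in range(500)]
```

Detecting the torus then amounts to finding two independent 1-dimensional persistent homology classes (and one 2-dimensional class) in this point cloud.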

May 06, 2021
Thursday

08:00 AM - 08:45 AM
Topological Data Analysis of Functional Brain Connectivity in Time and Space Domains
Bei Wang (University of Utah)
- Location: SLMath: Online/Virtual
- Abstract:
Functional magnetic resonance imaging (fMRI) measures brain activity by detecting changes in blood oxygenation levels (BOLD signals). Relating these activity measurements to behavioral and cognitive measures is a topic of great interest in neuroscience. The functional architecture of the brain can be described as a dynamic system where components interact in flexible ways, constrained by physical connections between regions. Using correlation, either in time or in space, as an abstraction of functional connectivity, we perform topological data analysis of resting-state fMRI data. We find that temporal vs. spatial functional connectivity can encode different aspects of cognition and personality. Topological analyses using persistent homology show that persistence barcodes are significantly correlated to individual differences in cognition and personality, with high reproducibility. Topological data analyses, including approaches to model connectivity in the time domain, are promising tools for representing high-level aspects of cognition, development, and neuropathology.
This talk is a summary of joint work with Sourabh Palande, Keri L. Anderson, Jeffrey S. Anderson, Archit Rathore, Brandon Zielinski, and Tom Fletcher.

09:00 AM - 09:45 AM
Homotopy Theoretic and Categorical Models of Neural Information Networks
Matilde Marcolli (California Institute of Technology)
- Location: SLMath: Online/Virtual
- Abstract:
We propose a mathematical formalism for neural information networks endowed with assignments of resources (computational, metabolic, or informational), suitable for describing assignments of concurrent or distributed computing architectures and associated binary codes, governed by a categorical form of the Hopfield network dynamics, together with measures of informational complexity in the form of a cohomological version of integrated information.

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
Combining Geometric and Topological Information for Boundary Estimation
Hengrui Luo (Lawrence Berkeley National Laboratory)
- Location: SLMath: Online/Virtual

11:00 AM - 11:45 AM
Identifying dynamics of networks
Konstantin Mischaikow (Rutgers University)
- Location: SLMath: Online/Virtual
- Abstract:
The classical theory of nonlinear dynamics exhibits wonderfully rich and exotic structures. I will argue that as we move to an era of data-driven dynamics it offers too many riches. Stated differently, if we only have finite data at our disposal, we need a simpler theory of dynamics. I will present such a theory based on combinatorics and algebraic topology. After describing the theory, I will apply it to the analysis of dynamics of networks.

May 07, 2021
Friday

08:00 AM - 08:45 AM
Nerve theorems for fixed points of neural networks
Daniela Egas Santander (École Polytechnique Fédérale de Lausanne)
- Location: SLMath: Online/Virtual
- Abstract:
A fundamental question in computational neuroscience is to understand how a network's connectivity shapes neural activity. A popular framework for modeling neural activity is a class of recurrent neural networks called threshold-linear networks (TLNs). A special case of these are combinatorial threshold-linear networks (CTLNs), whose dynamics are completely determined by the structure of a directed graph, making them an ideal setup in which to study the relationship between connectivity and activity.
Even though nonlinear network dynamics are notoriously difficult to understand, work of Curto, Geneson, and Morrison shows that CTLNs are surprisingly tractable mathematically. In particular, for small networks, the fixed points of the network dynamics can often be completely determined via a series of combinatorial "graph rules" that can be applied directly to the underlying graph. However, for larger networks, it remains a challenge to understand how the global structure of the network interacts with local properties.
In this talk, we will present a method of covering graphs of CTLNs with a set of smaller "directional graphs" that reflect the local flow of activity. The combinatorial structure of the graph cover is captured by the "nerve" of the cover. The nerve is a smaller, simpler graph that is more amenable to graphical analysis. We present three "nerve theorems" that provide strong constraints on the fixed points of the underlying network from the structure of the nerve, effectively providing a kind of "dimensionality reduction" on the dynamical system of the underlying CTLN. We will illustrate the power of our results with some examples.
This is joint work with F. Burtscher, C. Curto, S. Ebli, K. Morrison, A. Patania, and N. Sanderson.
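To give a concrete feel for the model (a generic CTLN toy in stdlib Python, not the authors' code), CTLN dynamics take the form dx_i/dt = -x_i + [(Wx)_i + theta]_+, with W built from a directed graph by the standard rule W_ij = -1 + eps if j -> i is an edge and -1 - delta otherwise. The parameter values and the Euler-integration settings below are conventional but arbitrary choices.

```python
def ctln_simulate(edges, n, steps=20000, dt=0.01, eps=0.25, delta=0.5, theta=1.0):
    """Euler-integrate the CTLN dynamics dx/dt = -x + [W x + theta]_+ where
    W_ij = -1 + eps if j -> i is an edge, -1 - delta otherwise, W_ii = 0."""
    W = [[0.0 if i == j else (-1 + eps if (j, i) in edges else -1 - delta)
          for j in range(n)] for i in range(n)]
    x = [0.1 * (i + 1) for i in range(n)]   # arbitrary initial condition
    for _ in range(steps):
        x = [xi + dt * (-xi + max(0.0, sum(W[i][j] * x[j] for j in range(n)) + theta))
             for i, xi in enumerate(x)]
    return x

# A 3-clique (all nodes mutually connected) supports a stable symmetric
# fixed point: each coordinate solves x = theta + 2*(-1 + eps)*x, i.e.
# x = theta / (1 + 2*(1 - eps)) = 0.4 with the defaults above.
clique = {(i, j) for i in range(3) for j in range(3) if i != j}
x_star = ctln_simulate(clique, 3)
```

Other graphs (e.g., a directed 3-cycle) instead produce limit cycles with no stable fixed point, which is exactly the connectivity-to-dynamics relationship the graph rules and nerve theorems constrain.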

09:00 AM - 09:45 AM
Extracting topological features from multiple measurements
Martina Scolamiero (Royal Institute of Technology (KTH))
- Location: SLMath: Online/Virtual

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
Identifying analogous topological features across multiple systems
Iris Yoon (University of Delaware)

11:00 AM - 11:45 AM
A grid cell torus
Benjamin Dunn (Norwegian University of Science and Technology (NTNU))
- Location: SLMath: Online/Virtual
- Abstract:
Grid cells are neurons typically described as being spatially selective, with increased activity in specific regions tessellating the environment with a hexagonal grid-like pattern. Grid cells can be grouped into distinct populations called modules, determined by their spatial scaling and orientation. We recorded neural activity from six modules across three rats during both free foraging and sleep, and, using persistent cohomology and circular coordinatization as our main tools, we depicted the toroidal structure of the population activity and decoded the time-varying toroidal positions encoded by the modules. We show that individual neurons are preferentially active at singular positions on the torus, and that these positions are maintained between environments and from wakefulness to sleep, as predicted by attractor network models of grid cells.

May 10, 2021
Monday

08:00 AM - 08:45 AM
Decoding geometry and topology of neural representations
Vladimir Itskov (Pennsylvania State University)
- Location: SLMath: Online/Virtual
- Abstract:
The brain represents the perceived world via the activity of individual neurons, or groups of neurons. There is an increasing body of evidence that neural activity in a number of sensory systems is organized on low-dimensional manifolds. Understanding neural representations (a.k.a. the neural code) thus requires methods for inferring the structure of the underlying stimulus space, as well as natural decoding mechanisms that take advantage of this structure.
Neural representations are constrained by the receptive field properties of individual neurons as well as by the underlying neural network. It is therefore essential to utilize these constraints in any meaningful analysis of the underlying space. In my talk, I will describe two different methods, based on computational topology and differential geometry, that take advantage of receptive field properties to infer the dimension of (non-linear) neural representations, as well as a geometry-based learning algorithm that can be reinterpreted as the output of a neural network. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.

09:00 AM - 09:45 AM
Topological Characterization for Multi-Variate Pattern Analysis
Alice Patania (Indiana University)
- Location: SLMath: Online/Virtual

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
Simplicial connectivities of directed networks and higher paths
Henri Riihimäki (University of Aberdeen)
- Location: SLMath: Online/Virtual

11:00 AM - 11:45 AM
Wasserstein stability for persistence diagrams
Katharine Turner (Australian National University)
- Location: SLMath: Online/Virtual
- Abstract:
The stability of persistence diagrams is among the most important results in applied and computational topology, but most results are stated with respect to the bottleneck distance between diagrams. This has two main implications: it makes the space of persistence diagrams rather pathological, and it often provides very pessimistic bounds with respect to outliers. In this talk I will discuss new stability results with respect to the p-Wasserstein distance between persistence diagrams. The main result is stability of persistence diagrams between different functions on the same finite simplicial complex in terms of the p-norm of the functions. This has applications to image analysis, persistent homology transforms, and Vietoris-Rips complexes. This is joint work with Primoz Skraba.
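As a concrete, brute-force illustration of the distance in question (feasible only for tiny diagrams, and using the l-infinity ground metric as one common convention), here is a stdlib-Python sketch; it states the generic definition and is not code from the talk.

```python
import math
from itertools import permutations

def wasserstein(dgm1, dgm2, p=2):
    """p-Wasserstein distance between two small persistence diagrams by
    brute force over matchings: points may match each other or their own
    diagonal projection, and diagonal-to-diagonal matches are free."""
    proj = lambda b, d: ((b + d) / 2, (b + d) / 2)
    # Pad each diagram with the diagonal projections of the other's points,
    # so the two padded multisets have equal size and can be permuted.
    A = list(dgm1) + [proj(*pt) for pt in dgm2]
    B = list(dgm2) + [proj(*pt) for pt in dgm1]

    def cost(u, v):
        if u[0] == u[1] and v[0] == v[1]:    # diagonal paired with diagonal
            return 0.0
        return max(abs(u[0] - v[0]), abs(u[1] - v[1]))

    best = min(sum(cost(A[i], B[j]) ** p for i, j in enumerate(perm))
               for perm in permutations(range(len(B))))
    return best ** (1 / p)
```

Production TDA libraries replace the factorial-time search with an optimal-transport or assignment solver, but the matching being optimized is the same.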

May 11, 2021
Tuesday

08:00 AM - 08:45 AM
Discrete Morse based Graph Skeletonization and Applications in Computational Neuroscience
Yusu Wang (University of California, San Diego)
- Location: SLMath: Online/Virtual
- Abstract:
Recent years have witnessed a surge in the use of topological objects and methods in various applications. Many such applications leverage either the summarization power (e.g., persistent homology) or the characterization power of topological objects. In this talk, we will discuss our graph skeletonization algorithm based on discrete Morse theory, both for 2D/3D images and for (high-dimensional) point data. We will then describe two applications of the resulting algorithm: how the resulting graph skeleton can help us reconstruct or summarize neurons from 2D/3D neuronal images, and how to use it to analyze high-dimensional single-cell RNA-seq data. This is joint work with many collaborators, whom we will acknowledge in the talk.

09:00 AM - 09:45 AM
Persistent homology in one or more parameters
Ezra Miller (Duke University)
- Location: SLMath: Online/Virtual
- Abstract:
This talk is an overview of persistent homology in one or more discrete or continuous parameters, including past and potential applications to brain structure and function. After a review of the main concepts, the focus will be on challenges and recent developments in multiparameter methods, concerning mathematics, statistics, and computation.

09:45 AM - 10:15 AM
Break
- Location: SLMath: Online/Virtual

10:15 AM - 10:45 AM
Using "Concurrence Topology" to Detect Statistical (In)dependence Among Items of the Hamilton Depression Rating Scale
Steven Ellis (Columbia University)
- Location: SLMath: Online/Virtual

11:00 AM - 11:45 AM
Topological cavities in the human connectome
Ann Blevins (University of Pennsylvania)
- Location: SLMath: Online/Virtual
- Abstract:
The convoluted web of interactions within the human brain expertly supports a diverse array of behaviors. Modeling the brain as a network, with brain regions as nodes and white matter tracts as edges, has provided a wealth of information about the global organization of the connectome. Specifically, network science has shown that the brain functions via collaboration between sets of densely connected regions called modules. However, the importance of sparsely connected regions – or topological cavities – around which information must flow remains elusive, due to the inability to detect such structures with traditional network measures. Such cavities may serve to intentionally separate tasks or allow parallel processing of information. In this study we leverage the capabilities of algebraic topology to both detect and chronicle topological cavities within the adult structural brain network. We find that multiple topological cavities exist, and we extract the participating brain regions in order to determine cavity function. Furthermore, we show that many of the recovered persistent cavities do not exist in an energy-conserving, minimally wired null model of the structural brain network. Additionally, by comparing with a cortical-only subset of the brain network, we determine that subcortical nodes often project onto many cortical cycles, agreeing with previously reported findings on subcortical connectivity patterns. Finally, we show that many recovered persistent cavities exist across multiple individuals and that these features are not necessarily lateralized. We discuss possibilities for topological methods in disease detection and suggest roles for topology across scales of neuroscience.