On whole-mammalian-brain connectomics

In response to David Markowitz’s questions on Twitter:
https://twitter.com/neurowitz/status/1080131620361912320

1. What are current obstacles to generating a connectomic map of a whole mammalian brain at nanometer scale?
2. What impt questions could we answer with n=1? n=2?
3. What new analysis capabilities would be needed to make sense of whole brain data?

Responses:

David himself knows much or all of the below, as he supported a lot of the relevant work through the IARPA MICRONS program (thank you), and I have discussed these ideas with many key people for some years now, but I will go into some detail here for those not following this area closely and for the purposes of concreteness.

1) What are current obstacles to generating a connectomic map of a whole mammalian brain at nanometer scale?

I believe the main obstacles are a matter of “getting organized” for a project of this scale, rather than any fundamental technical limitation, although it would make sense to start such a project in a few years, after the enabling chemistry and genetic advances have had a bit more time to mature and to be well tested at much smaller scales.

a) Technical approaches

As for technical obstacles, I will not address the case of electron microscopy, where I am not an expert. I will also not address a pure DNA sequencing approach (as first laid out in Zador’s 2012 Sequencing the Connectome). Instead, I will focus solely on optical in-situ approaches (which are evolutionary descendants of the BrainBow and Sequencing the Connectome approaches):

Note: By “at nanometer scale”, I assume you are not insisting upon literally all voxels being a few nanometers on a side, but rather that this is a functional requirement, e.g., the ability to identify a large fraction of synapses and associate them with their parent cells, the ability to stain for nanoscale structures such as gap junctions, and so forth. Otherwise an optical approach with, say, >20 nm spatial resolution would be ruled out by definition, but I think the spirit of the question is more functional.

There are multiple in-situ fluorescent neuronal barcoding technologies that, in conjunction with expansion microscopy (ExM), and with optimization and integration, will enable whole mammalian brain connectomics.

We are really talking about connectomics++: It could optionally include molecular annotations, such as in-situ transcriptome profiling of all the cells, as well as some multiplexed mapping of ion channel protein distributions and synaptic sub-types/protein compositions. This would be an added advantage of optical approaches, although some of these benefits could conceivably be incorporated into an electron microscopy approach through integration with methods like Array Tomography.

For the sake of illustration, the Rosetta Brain whitepaper laid out one potential approach, which uses targeted in-situ sequencing of Zador barcodes (similar to those used in MAPseq) both at cell somas/nuclei and on both sides of the synaptic cleft, where the barcodes would be localized by RNA-binding proteins fused to synaptic proteins, which traffic them there. This would be a form of FISSEQ-based “synaptic BrainBow” (a concept first articulated by Yuriy Mishchenko) and would not require a direct physical linkage of the pre- and post-synaptic barcodes; see the whitepaper for explanations of what this means.

The early Rosetta Brain whitepaper sketch proposed to do connectomics via this approach using a combination of:

a) high-resolution optical microscopy,

b) maximally tight targeting of the RNA barcodes to the synapse, and

c) “restriction of the FISSEQ biochemistry to the synapse”, to prevent confusion of synaptic barcodes with those in passing fine axonal or dendritic processes.

All of this is made much easier by Expansion Microscopy, which has now been demonstrated at cortical column scale, although that was not yet the case back in 2014 when we initially looked at this (update: expansion lattice light sheet microscopy is on the cover of the journal Science and looks really good).

(Because ExM was not around yet, circa 2014, we proposed complicated tissue thin-sectioning and structured illumination schemes to get the necessary resolution, as well as various other “molecular stratification” and super-resolution schemes, which are now unnecessary as ExM enables the requisite resolution using conventional microscopes in intact, transparent 3D tissue, requiring only many-micron-scale thick slicing.)

(This approach does rely quite heavily on synaptic targeting of the barcodes; whether the “restriction of FISSEQ biochemistry to the synapse” is required depends on the details of the barcode abundance and trafficking, as well as the exact spatial resolution used, and is beyond the scope of the discussion here.)

With a further boost in resolution, using higher levels of ExM expansion (e.g., iterated ExM can go above 20x linear expansion) in combination with a fluorescent membrane stain, or alternatively using generalized BrainBow-like multiplexed protein labeling approaches alone or in combination with Zador barcodes, the requirement for synaptic barcode targeting and for restriction of FISSEQ biochemistry to the synapse could likely be relaxed. Indeed, it may be possible to do this without any preferential localization of barcodes at synapses in the first place, e.g., with membrane-localized barcodes, an idea which we study computationally here:
https://www.frontiersin.org/articles/10.3389/fncom.2017.00097/full

In the past few years, we have integrated the necessary FISSEQ, barcoding and expansion microscopy chemistries — see the last image in
https://spectrum.ieee.org/biomedical/imaging/ai-designers-find-inspiration-in-rat-brains
for a very early prototype example — and ongoing improvements are being made to the synaptic targeting of the RNA barcodes (which MAPseq already shows can traffic far down axons at some reasonable though not ideal efficiency), and to many other aspects of the chemistry.

Moreover, many other in-situ multiplexing and high-resolution intact-tissue imaging primitives have been demonstrated with ExM that would broadly enable this kind of program, with further major advances expected over the coming few years from a variety of groups.

At this point, I fully believe that ExM plus combinatorial molecular barcoding can, in the very near term, enable at minimum a full mammalian brain single cell resolution projection map with morphological & molecular annotations, and with sparse synaptic connectivity information — and that such an approach can, with optimization, likely be engineered to get a large fraction of all synapses (plus labeling gap junctions with appropriate antibodies or other tags).

This is not to downplay the amount of work still to be done, and the need for incremental validation and improvement of these techniques, which are still far less mature than electron microscopy as an actual connectomics method. But it is to say that many of the potential fundamental obstacles that could have turned out to stand in the way of an optical connectomics approach — e.g., if FISSEQ-like multiplexing technologies could not work in intact tissue, or if optical microscopy at appropriate spatial resolution in intact tissue was unavailable or would necessitate difficult ultra-thin sectioning, or if barcodes could not be expressed in high numbers or could not traffic down the axon — have instead turned out to be non-problems or at least tractable. So with a ton of work, I believe a bright future lies ahead for these methods, with the implication that whole-brain scale molecularly annotated connectomics is likely to become feasible within the planning horizon of many of the groups that care about advancing neuroscience.

b) Cost

In the Rosetta Brain whitepaper sketch, we estimated a cost of about $20M over about 3 years for this kind of RNA-barcoded synaptic BrainBow optical in-situ approach, for a whole mouse brain.

Although this is a very particular approach, and may not be the exact “right” one, going through this kind of cost calculation for a concrete example can still be useful to get an order-of-magnitude sense of what is involved in an optical in-situ approach:

If we wish to image at a 300 nm optical voxel size with 4.5x expansion, that’s about 1 mm^3 / ((300 x 300 x 300 nm^3) / 4.5^3) = 3e12 voxels per mm^3 of tissue.
(The effective spatial resolution is about 300/4.5 = 67 nm.) (Note that we’ve assumed isotropic resolution, which can be attained and even exceeded with a variety of practical microscope designs.)

For FISSEQ-based RNA barcode connectomics, we want, say, 4 colors in parallel (one for each of the four bases), with 4 cameras, and to image, say, 20 successive sequencing cycles, giving on the order of 4^15 = 1B unique cell labels (assuming some error rate and/or near- but not complete base-level diversification of the barcode, such that we get a diversity corresponding to, say, 15 bases after sequencing 20).

The imaging takes much longer than the fluidic handling (which is highly parallel) as sample volumes get big, so let’s focus on the imaging time:

We have ~3e12 resolution voxels * 20 cycles, i.e., ~6e13 voxel measurements in total; allowing roughly an order of magnitude for oversampling of each resolution voxel by the camera pixels (e.g., Nyquist-rate sampling) and for imaging overhead, call it ~6e14 camera pixels. Suppose each camera operates at a frame rate of ~10 Hz with ~4 megapixels, i.e., 4e7 pixels per second (the four color channels run in parallel on separate cameras). Then 6e14 / (4e7 per sec) = 1.5e7 seconds, or about 24 weeks = 6 months. Realistically, we could get perhaps a 12 megapixel camera and use computational techniques to reduce the required number of cycles somewhat, so 1-2 months seems reasonable for 1 mm^3 on a single microscope setup.

So, let’s say roughly 1 mm^3 per microscope per month.
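
For concreteness, here is a minimal back-of-envelope sketch of the imaging-time arithmetic above in Python. All the parameters (300 nm optical voxel, 4.5x expansion, ~10x oversampling, 20 cycles, a 4 megapixel camera at 10 Hz) are just the illustrative assumptions from the text, not measured values.

```python
# Back-of-envelope imaging-time estimate for barcode-based optical connectomics.
# All numbers are the illustrative assumptions from the text above, not measurements.

optical_voxel_nm = 300.0      # diffraction-limited voxel edge at the microscope
expansion = 4.5               # linear expansion factor (ExM)
effective_voxel_nm = optical_voxel_nm / expansion       # ~67 nm in native tissue
voxels_per_mm3 = (1e6 / effective_voxel_nm) ** 3        # ~3e12 resolution voxels per mm^3

cycles = 20                   # FISSEQ sequencing cycles (~4^15 ~= 1e9 usable barcodes)
oversampling = 10             # assumed Nyquist-ish oversampling of each resolution voxel

camera_pixels_per_s = 4e6 * 10   # 4 MP camera at 10 Hz; the 4 colors run on parallel cameras

seconds = voxels_per_mm3 * cycles * oversampling / camera_pixels_per_s
months = seconds / (30 * 24 * 3600)
print(f"voxels per mm^3: {voxels_per_mm3:.1e}")
print(f"baseline imaging time for 1 mm^3: {months:.1f} months")   # ~6 months

# With, e.g., a 12 MP camera (3x faster) and ~30% fewer cycles via computational tricks:
print(f"improved estimate: {months / 3 * 0.7:.1f} months")        # ~1-2 months per mm^3
```

This reproduces the rough 6-month baseline and the 1-2 month improved figure, which is where the 1 mm^3 per microscope per month working number comes from.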

Further, each microscope costs around $400k, suppose. (It could be brought below that with custom hardware.)

Suppose 1 person is needed per 2 microscopes, plus a fixed 5 other people; let’s say these people cost on average $150k a year.

We wish to image 0.5 cm^3 during the course of the project, i.e., roughly the size of a whole mouse brain.

0.5 cm^3 = 500 mm^3, so at 1 mm^3 per microscope per month that’s 500 microscope-months.

Suppose the imaging part of the project can last no more than 24 months, for expediency.

That’s 500/24 = 21 microscopes, each at $400k, or $8.3M. Let’s call that $10M on the imaging hardware itself.

That’s also ~10 people (for running the microscopes and associated tissue handling) plus the fixed 5 others, over three years total, or $150k * 15 * 3 = $6.75M for salaries; call it $7M for people.

There are also things to consider like lab space, other equipment, and reagents. Let’s call that another $50k per person per year, or $50k * 15 * 3 = $2.25M; call it $3M, just very roughly.

What about data storage? I suspect that a lot of compression could be done online, such that enormous storage of raw images might not be necessary in an optimized pipeline. But if a 1 TB hard drive costs on the order of $50, then ($50 / 1 terabyte) * 3e12 voxels per mm^3 * 400 mm^3 * 20 biochemical cycles * 8 bytes per voxel (e.g., 4 color channels at 16 bits each) = about 190,000 TB, or roughly $9.6M for the raw image data.

So $10M (equipment) + $7M (salaries) + $3M (space and reagents) + $10M (data storage) = $30M mouse connectome, with three years total and 2 of those years spent acquiring data.

To be conservative, let’s call it $40M or so for your first mouse connectome using next-generation optical in-situ barcoding technologies with expansion microscopy.
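
Here is a similarly minimal sketch of the cost roll-up, again using only the assumed figures from the text (microscope price, salaries, storage cost, and so on); change any of these assumptions and the total moves accordingly.

```python
import math

# Cost roll-up for a whole-mouse-brain optical connectome, using the assumptions above.
brain_mm3 = 500               # ~0.5 cm^3 of tissue to image
mm3_per_scope_month = 1.0     # throughput working figure from the imaging estimate
imaging_months = 24           # imaging phase limited to 2 years
project_years = 3

scope_months = brain_mm3 / mm3_per_scope_month          # 500 microscope-months
n_scopes = math.ceil(scope_months / imaging_months)     # 21 microscopes
equipment = n_scopes * 400e3                            # ~$8.4M; text calls it ~$10M

n_people = math.ceil(n_scopes / 2) + 5                  # ~11 operators + 5 others (text rounds to 15)
salaries = n_people * 150e3 * project_years             # ~$7M
space_reagents = n_people * 50e3 * project_years        # ~$2.4M; text calls it ~$3M

# Raw data: ~3e12 voxels/mm^3, 20 cycles, 8 bytes/voxel (e.g., 4 channels x 16 bits), $50/TB.
raw_bytes = 3e12 * 400 * 20 * 8                         # text uses ~400 mm^3 in this line
storage = raw_bytes / 1e12 * 50                         # ~$9.6M

total = equipment + salaries + space_reagents + storage
print(f"equipment ${equipment/1e6:.1f}M, salaries ${salaries/1e6:.1f}M, "
      f"space/reagents ${space_reagents/1e6:.1f}M, storage ${storage/1e6:.1f}M")
print(f"total ~ ${total/1e6:.0f}M (text rounds to ~$30M, or ~$40M to be conservative)")
```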

Very importantly, the cost for future experiments is lower as you’ve already invested in the imaging hardware.

c) This is for the mouse. Don’t forget that the Etruscan Shrew is, as Ed Boyden has emphasized, >10x smaller and yet still a mammal with a cortex, and that many of the genetic technologies may (or may not) be readily adaptable to it, especially if viral techniques are used for delivery rather than transgenics.

2) What important questions could we answer with n=1? n=2?

There are many potential answers to this and I will review a few of them.

Don’t think of it as n=1 or n=2, think of it as a technology for diverse kinds of data generation:
First, using the same hardware you developed/acquired to do the n=1 or n=2 full mouse connectome, you could scale up statistically by, for instance, imaging barcodes only in the cell bodies at low spatial resolution, and then doing MAPseq for projection patterns, only zooming into targeted small regions to look at more detailed morphology, detailed synaptic connectivity, and molecular annotations. Zador’s group is starting to do just this here
https://www.biorxiv.org/content/early/2018/08/31/294637
by combining in-situ sequencing and MAPseq. Notably, the costs are then much lower, because one needs to image far fewer optical resolution voxels in-situ: essentially only as many voxels as there are cell somas/nuclei, i.e., on the order of 100M micron-sized voxels for the mouse brain (e.g., just enough to read out the soma of each cell); the rest is done by Illumina sequencing on commercial HiSeq machines, which have already attained large economies of scale and optimization, and where one is inherently capturing less spatial data.
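
To give a rough sense of why this soma-level readout is so much cheaper, here is a small sketch comparing voxel counts; the ~10^8 cell count for the mouse brain and the other figures are order-of-magnitude assumptions for illustration, not measurements.

```python
# Rough comparison: soma-only in-situ barcode readout vs. full synaptic-resolution imaging.
# Cell count and voxel sizes are approximate assumptions for illustration only.

cells_in_mouse_brain = 1e8          # order-of-magnitude cell count
soma_voxels = cells_in_mouse_brain  # ~one micron-scale readout voxel per soma/nucleus

full_res_voxels_per_mm3 = 3e12      # from the expansion-microscopy estimate above
brain_mm3 = 500
full_res_voxels = full_res_voxels_per_mm3 * brain_mm3

print(f"soma-only voxels:       {soma_voxels:.1e}")
print(f"full-resolution voxels: {full_res_voxels:.1e}")
print(f"reduction factor:       ~{full_res_voxels / soma_voxels:.1e}")  # ~10^7-fold fewer voxels
```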

Thus, one should think of this not (just) as an investment in 1-2 connectomes, but as an investment in a technology basis set that allows truly large-scale neuro-anatomy to be done at all, with a variety of possible distributions of that anatomy over individual subjects and a variety of scales of analysis accessible and usable in combination even within one brain.

Use it as a lens on microcircuit uniformity/heterogeneity:
Second, even the n=1 detailed connectome could be quite powerful, e.g., to resolve a lot of the long-standing questions regarding cortical microcircuit uniformity or heterogeneity across areas, which were recently re-kindled by papers like this one.

I should mention that this is a super important question, not least because it connects to many big theoretical debates in neuroscience. For instance,

–in one view of cortical microcircuitry, pre-structured connectivity will provide “inductive biases” for learning and inference computations, and in that view, we may expect different structured inductive biases in different areas (which process data with different properties and towards different computational goals) to be reflected in differing local microcircuit connectomes across areas: see another Markowitz-induced thread on this;

–in another view (see, e.g., Blake Richards), it is all about the input data and cost functions entering an area, which train an initially relatively unstructured network, and thus we may expect to see connectivity that appears highly random locally, but with differences in long-range inputs defining the area-specific cost functions;

–in yet another view, advocated by some in the Blue Brain Project, connectivity literally is random subject to certain geometric constraints determined by gross neural morphologies and statistical positional patterns;

–and so on…

Although a full connectome is not strictly needed to answer these questions, it would put any mapped microcircuits into a helpful whole-brain context, and in any case, if one wants to map lots of microcircuits, why not do so in the context of something at least approximating a whole-brain connectome?

Think of it as millions of instances of an ideal single neuron input/output map (with dendritic compartment or branch level resolution):
Third, think of the n=1 connectome instead as N=millions of studies on “what are the inputs and outputs, and their molecular profiles, of this single neuron, across the entire brain”. For instance, you could ask millions of questions of the form “whole brain inputs to a single neuron, resolved according to dendritic compartment, synaptic type, and location and cell type of the pre-synaptic neuron”.

For instance, in the Richards/Senn/Bengio/Larkum/Kording et al picture — wherein the somatic compartments of cortical pyramidal neurons are doing real-time computation, but the apical dendritic compartments are receiving error signals or cost functions used to perform gradient-descent-like updates on the weights underlying that computation — you could ask, for neurons in different cortical areas: what are the full sets of inputs to those neurons’ apical dendrites, where in the brain do they come from, from which cell types, and how do they impinge upon them through the local interneuron circuitry? This, I believe, would then give you a map of the diversity or uniformity of the brain’s feedback signals or cost functions, and start to allow building a taxonomy of these cost functions. In the (speculative) picture outlined here, moreover, this cost function information is in many ways the key architectural information underlying mammalian intelligence.

Notably, this would include a detailed map of the neuromodulatory pathways, including, with molecular multiplexing in-situ, their molecular diversity. Of particular interest might be the acetylcholine system, which innervates the cortex, drives important learning phenomena (some of which have very complex local mechanisms), and involves very diverse and area-specific long-range projection pathways from the basal forebrain, as well as interesting dendritic targeting. A recent paper also found very dense and diverse neuropeptide networks in cortex.

Answer longstanding questions in neuroanatomy, and disambiguate existing theoretical interpretations:
Fourth, there are a number of concrete, already-known large-scale neuroanatomy questions that require an interplay of local circuit and long range information.

For instance, a key question pertains to the functions of different areas of the thalamus. Sherman and Guillery, for instance, propose the higher-order relay theory, and further propose that the neurons projecting from layer 5 into the thalamic relays are the same neurons that project to the basal ganglia and other sub-cortical centers, and thus that the thalamic relays should be interpreted as sending “efference copies” of motor outputs throughout the cortical hierarchies. But to my knowledge, more detailed neuroanatomy is still needed to confirm or contradict nearly all key aspects of this picture: e.g., are the axons sending the motor outputs really the exact same ones that enter the putative thalamic relays, are those the same axons that produce “driver” synapses on the relay cells, what about branches through the reticular nucleus, and so on. (Similar questions could be framed in the context of other theoretical interpretations of thalamo-cortical (+striatal, cerebellar, collicular, and so forth) loops.)

Likewise, there are basic and fundamental architectural questions about the cortical-subcortical interface, which arguably require joint local microcircuit and large-scale projection information, e.g., how many “output channels” do the basal ganglia have, and to what extent are they discrete?

Think of it as a (molecularly annotated) projectome++:
Fifth, there are many questions that would benefit from a whole brain single cell resolution projectome, which requires much the same technology as what would be needed to add synaptic information on top of that (in this optical context), e.g., papers like this one propose an entire set of theoretical ideas based on putative projection anatomy that is inferred from the literature but not yet well validated
https://www.frontiersin.org/articles/10.3389/fnana.2011.00065/full
One may view these ideas as speculative, of course, but they suggest the kinds of functionally-relevant patterns that one might find at that level, if a truly solid job of whole-brain-scale, single-cell-resolution neuroanatomy were finally done. Granted, this doesn’t require mapping every synapse, but again, the technology to map some or all synapses optically, together with the projectome, is quite similar to what is needed to simply do the projectome at, say, dendritic-compartment-level spatial resolution and with molecular annotations, which one would (arguably) want to do anyway.

Use it for many partial maps, e.g., the inter-connectome of two defined areas/regions:
Again, with the same basic technology capability, you can do many partial maps, but importantly, maps that include both large-scale and local information. For example, if you think beyond n=1 or n=2, to say n=10 to n=100, you can definitely start to look at interesting disease models. You likely don’t need full connectomes for those; instead, you might, for instance, look at long-range projections between two distal areas, with detailed inter-connectivity information mapped on both sides as well as their long-range correspondences. Same infrastructure, easily done before, during, or after a full connectome, or (as another example) a connectome at a user-chosen level of sparsity, induced by the sparsity of, e.g., the viral barcode or generalized BrainBow labeling.

Finally: questions we don’t know to ask yet, but that will be suggested by truly comprehensive mapping! For example, as explained in this thread, molecular annotation might allow one to measure the instantaneous rate of change of synaptic strength, dS/dt, which is key to inferring learning rules. As another example, entirely new areas of the very-well-studied mouse visual cortex, with new and important-looking functions, are still being found circa 2019… sometimes it seems like the unknown unknowns still outnumber the known unknowns. See also Ed Boyden’s and my essay on the importance of “assumption proof” brain mapping.

Anyway, is all of this worth a few tens of millions of dollars of investment in infrastructure that can then be used for many other purposes and bespoke experiments? In my mind, of course it is.

3) What new analysis capabilities would be needed to make sense of whole brain data?

To generate the connectome++ in the first place:
For the case of optical mapping with barcodes, there are at least some versions of the concept, e.g., the pure synaptic BrainBow approach, where morphological tracing is not needed, and the analysis is computationally trivial compared with the electron microscopy case, which relies on axon tracing from image stacks of ultra-thin sections. The barcodes just digitally identify the parent cells of each synapse.
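
As a toy illustration of why no tracing is needed in this readout mode, here is a minimal sketch under the assumption that each detected synapse yields a (pre-synaptic barcode, post-synaptic barcode) pair from in-situ sequencing; the record format and barcode strings are hypothetical.

```python
from collections import Counter

# Toy synaptic-BrainBow readout: each detected synapse carries the barcode of its
# pre-synaptic parent cell and the barcode of its post-synaptic parent cell, as read
# out by in-situ sequencing. Building the (cell x cell) connectivity map is then just
# an aggregation over these records -- no morphological tracing of axons is required.
# The records below are made-up examples for illustration.

synapse_records = [
    # (x, y, z in microns, pre-synaptic barcode, post-synaptic barcode)
    (102.1, 55.3, 12.0, "ACGTACGTACGTACG", "TTGACCGTAAGGCTA"),
    (102.4, 55.1, 12.2, "ACGTACGTACGTACG", "TTGACCGTAAGGCTA"),
    (310.9, 80.2, 44.5, "GGCATTACGATCCGA", "ACGTACGTACGTACG"),
]

# Count synapses per (pre, post) barcode pair = weighted directed connectivity matrix.
connectivity = Counter((pre, post) for (_, _, _, pre, post) in synapse_records)

for (pre, post), n_synapses in connectivity.items():
    print(f"{pre} -> {post}: {n_synapses} synapse(s)")
```

A real pipeline would of course have to handle sequencing errors, missing or mis-trafficked barcodes, and ambiguous synapse calls, which is where the error correction and segmentation discussed next come in.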

For various other potential versions of optical connectomics, some morphological image analysis and segmentation would be required, for instance to correct errors or ambiguities associated with imperfect trafficking of barcodes. Likewise, in an approach that relies heavily on tracing, barcodes could be used to error-correct that tracing, and/or to help train the algorithms for that tracing. Those approaches might start to have computational analysis requirements at a similar level to those for electron microscopy image segmentation.

Sparse labeling, to map the connectome of a sub-graph of neurons that still spans all brain areas, would likely simplify the analysis as well.

(For general infrastructural comparison, a quick search tells me that Facebook handles “14.58 million photo uploads per hour”.)

To derive insight from the connectome in the context of other studies:
To more broadly “make sense” of the data in light of theoretical neuroscience, and in terms of exploratory data analysis, I defer to experts on connectomic data analysis and on the many specific areas of neuroscience that could benefit and contribute.

But broadly, I don’t feel it is qualitatively different from what is needed to analyze other kinds of large-scale neuroscience data that fall short of full mammalian connectomes, and moreover, much can be learned from studying whole-brain analyses in small organisms like zebrafish or fly. I certainly don’t see any kind of fundamental obstacle that implies this data could not be “made sense of”, if contextualized with other functional data, behavioral studies, and computational ideas of many kinds.
