General automation, and science

Generative AI is moving fast, with scaling laws visibly playing out. What will the next few years of deep learning bring us, in terms of real-world impact outside of tech/software? I’ve been hearing some bold predictions of disruption. I think about this a lot, since my job is to help the world do beneficial, counterfactually necessary science projects. If LLMs are going to just up and replace or transform large swaths of scientific research anytime soon, I need to be aware of that.

This is not to mention being concerned about AI safety and disruptive AI impacts in general. Some of the smartest people I know have recently dropped everything to work on AI safety per se. I’m confused about this and not trying to touch on it in this blog post.

In thinking about the topic over the last few months, I’ve come across a few apparently useful frames, though they still need some conceptual critique, and I make no claim to novelty for them. I’d like to know where the below is especially wrong or confused.

A possible unifying frame is simply that ~current generation AI models, trained on large data (and possibly fine-tuned on smaller data), can often begin to recapitulate existing human skills, for which there is ample demonstration data, at a “best of humanity” level. There will be some practical limitations to this, but for considering the effects it may be useful to take this notion quite seriously. If this were true quite generally, then what would be the implications of this for science?

Type 1, 2 and 3 skills

Let’s begin with a distinction between “Type 1, Type 2, and Type 3” skills.

For some skills (Type 1), like playing Go, that are closed worlds, we’ve seen that models can get strongly superhuman by self-play or dense reinforcement learning (RL).

This will probably be true for some scientific areas like automated theorem proving too, since we can verify proofs once formalized (e.g., in the context of an interactive theorem prover like Lean), and thus create entirely within the software a reinforcement-learning-like signal not so different from that from winning a game. So the impact on math could be very large (although there are certainly very non-trivial research challenges along the way).
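
As a tiny illustration of the kind of machine-checkable statement involved (a minimal toy Lean 4 example of my own, purely illustrative; the point is just that the kernel either accepts or rejects a candidate proof, giving a clean binary signal not so different from a game score):

```lean
-- Toy example: a statement plus a candidate proof term. The Lean kernel
-- checks it mechanically; acceptance or rejection is the verifiable signal
-- an automated prover could be trained against.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```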

For many other skills (Type 2), there isn’t an easily accessible game score, simulation or RL signal. But there is ample demonstration data. Thus, GPT-3 writing essays and DALL-E synthesizing paintings. For these skills, a given relatively untrained person will be able to access existing “best of humanity” level skills in under 10 seconds on their web browser. (The extent to which reinforcement learning with human feedback is going to be essential for this to work in any given application is unclear to me and may matter for the details.)

So roughly, the impact model for Type 2 skills is “Best of Existing Skills Instantaneously in Anyone’s Browser”.

What are some emerging Type 2 skills we don’t often think of? Use of your computer generally via keyboard and mouse. Every tap on your phone. Every eye movement in your AR glasses. Every vibration of the accelerometer in your smartwatch. The steps in the design of a complex machine using CAD software.

Let’s suppose that near-term AI is like that above, just applied in ~every domain it can be. This probably will have some limitations in practice, but let’s think about it as a conceptual model.

Routine coding has elements of Type 2 and elements of Type 1, and is almost certainly going to be heavily automated. Many more people will be able to code. Even the best coders will get a productivity boost.

Suppose you have a Type 2 skill. Say, painting photo-realistic 3D scenes. A decent number of humans can do it, and hence DALL-E can do it. Soon, millions of people will do prompt generation for that. Enough people will then be insanely good at such prompt generation that this leads to a new corpus of training data. That then gets built into the next model. Now, everyone AI-assisted is insanely good at the skilled prompt generation itself, with nearly zero effort. And so on. So there is clearly a compounding effect.

Even more so for skills closer to Type 1. Say you have an interactive theorem prover like Lean. Following the narrative for Type 2 skills, a GPT-like system learns to help humans generate proofs in the interactive prover software, or to generate those proofs fully automatically. Then many humans are making proofs with GPT. Some are very good at that. Then, the next model learns how to prompt GPT in the same way, so now everyone can do proofs easily at the level of the best GPT-assisted humans. 

Then the next model learns how to do proofs at the level of the best GPT-assisted humans, and the loop continues? But even more so, because with automatic verification of proofs you can get an RL-like signal without a human in the loop. You can also use language models to help mathematicians formalize their areas in the first place. Math, in turn, is a fantastic testbed for very general AI reasoning. Fortunately, at least some people think that the “math alignment problem” is not very hard, and formalized math will have a lot of applications towards secure and verified software and perhaps AI safety itself.

Some figures from Stanislas Polu’s talk at the Harvard New Technologies in Mathematics Seminar are pretty illustrative of how this formal-math-based testbed could be important for AI itself, too.

What about, say, robotics? The impact on robotics will likely be in significant part via the impact on coding. The software engineers programming robots will be faster. Many more people will be able to program robots more effectively.

But wait, is it true that the robotics progress rate depends mostly on the time spent by people writing a lot of code? Possibly not. You have to actually test the robots in the physical world. Let’s say that the coding part of robotics progress is highly elastic relative to the above “Best of Existing Skills Instantaneously in Anyone’s Browser” model of AI-induced changes in the world, and speeds up by 10x, but that the in-the-lab hardware testing part of robotics is less elastic and only speeds up by 2x. Let’s suppose that right now these two components — the highly elastic coding part of robotics R&D, and the less elastic in-the-lab testing part — each take about half of the total time R. The new total time is then R/2/10 + R/2/2 = 0.3R, i.e., roughly a 3x speedup of robotics overall.

These numbers may be completely wrong, e.g., Sim2Real transfer and better machine vision may be able to reduce a lot more of the in-the-lab testing than I’m imagining, but I’m just trying to get to some kind of framework for back-of-the-envelope calculations in light of the clear progress in language models.
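
For concreteness, here is that back-of-the-envelope speedup as a tiny sketch (the 50/50 time split and the 10x/2x factors are just the placeholder guesses above, not estimates of anything real):

```python
# Amdahl's-law-style estimate: split robotics R&D into a "coding" part and an
# "in-the-lab" part that accelerate by different factors. All inputs are the
# placeholder assumptions from the text.
def overall_speedup(coding_fraction=0.5, coding_speedup=10.0, lab_speedup=2.0):
    new_time = coding_fraction / coding_speedup + (1 - coding_fraction) / lab_speedup
    return 1.0 / new_time

print(overall_speedup())  # ~3.3, i.e., the "roughly 3x" used above
```

The usual Amdahl’s-law caveat applies: the less elastic part quickly dominates, so even a 100x coding speedup would only lift the overall figure to about 3.9x on these assumptions.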

Suppose that the above factors lead soon to 3x increased rate of progress in robotics generally. Once this 3x speedup kicks in, if we were 30 years away from robots that could do most of the highly general and bespoke things that humans do in a given challenging setting, such as a biology lab, we are now perhaps roughly 

[10 years of accelerated robotics progress away: to get the baseline general robotics capability otherwise expected 30 years from now] 

+ [say one (accelerated) startup lifetime away: from adapting and productizing that to the very specific bio lab use case, say 2 years] 

+ [how long it takes this accelerated progress to kick in, starting where we are now, say 2 years] 

+ [how long it takes for bio at some reasonable scale to uptake this, say another 2 years]. 

So that means we are perhaps about 10 to 15 years away from a level of lab automation that we’d be expecting otherwise 30+ years from now (in the absence of LLM related breakthroughs), on this simple model. 
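
Adding those components up, as a quick sanity check (again, every number here is just the placeholder guess from the list above):

```python
# Placeholder timeline components from the list above, in years.
baseline_years = 30      # general robotics capability otherwise ~30 years out
robotics_speedup = 3     # from the rough calculation earlier

components = {
    "accelerated robotics progress": baseline_years / robotics_speedup,  # ~10
    "productizing for the bio-lab use case": 2,
    "waiting for the acceleration to kick in": 2,
    "uptake by bio labs at reasonable scale": 2,
}
print(sum(components.values()))  # ~16, in the same rough ballpark as the 10-15 years above
```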

Let’s say this level of automation lets one person do what 10 could previously do, in the lab, through some combination of robotics in the lab per se, software control of instruments, programming of cloud labs and external services relying on more bespoke software-intensive automation. Is that right? I don’t know. Note that in the above, this is still dominated by the general robotics progress rate, so to the extent that AI impacts robotics progress other than just via speeding up coding, say, or that my above numbers are too conservative, this could actually happen sooner.

We haven’t talked about Type 3 skills yet. We’ll come back to those later. 

Elastic and inelastic tasks relative to general automation

What about science generally? Here I think it is useful to remember what Sam Rodriques recently posted about

https://www.sam-rodriques.com/post/why-is-progress-in-biology-so-slow

namely that there are many factors slowing down biology research other than the brilliance of scientists in reading the literature and coming up with the next idea, say.

Consider the impact of the above robotic lab automation. That’s (in theory at least) very helpful for parts of experiments that are basically human labor, e.g., cloning genes, running gels. The human labor heavy parts of R&D are very elastic relative to the “Best of Existing Skills in Anyone’s Browser” model of near term AI impacts, i.e., they respond to this change with strong acceleration. A lot of what is slow about human labor becomes fast if any given human laborer has access to a kind of oracle representing the best of existing human skills, in this case represented by a robot. Consider, for example, the time spent training the human laborer to learn a manual skill — this disappears entirely, since the robot can boot up with that skill out of the box, and indeed, can do so at a “best of humanity” level.

Certain other parts of what scientists do are clearly at least somewhat elastic relative to “Best of Existing Skills in Anyone’s Browser”. Finding relevant papers given your near-term research goals, digesting and summarizing the literature, planning out the execution of well-known experiments or variants of them, writing up protocols, designing DNA constructs, ordering supplies, hacking together a basic data visualization or analysis script, re-formatting your research proposal for a grant or journal submission template, writing up tutorials and onboarding materials for new students and technicians, making a CAD drawing and translating it to a fabrication protocol, designing a circuit board, and so on, and so forth.

Given the above, it is easy to get excited about the prospects for accelerated science (and perhaps also quite worried about broader economic disruption, perhaps the need for something like universal basic income, and so on, but that’s another subject). Especially considering that what one lab can do depends positively on what other labs can do, since they draw continually on one another’s knowledge. Should we expect not just a linear increase in the rate of progress, but a change in the time constant of an exponential? How should we model the effect of increased single-lab or single-person productivity, due to broad Best of Existing Skills Instantaneously in Anyone’s Browser capabilities?
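
One toy way to frame that question, purely illustrative and assuming the simplest possible dynamics (the field’s output rate proportional to its accumulated shared knowledge times a per-lab productivity factor):

```python
import math

# Toy model: dK/dt = m * K / tau, i.e., K(t) = K0 * exp(m * t / tau).
# A sustained productivity multiplier m doesn't just add a one-time bump;
# it divides the doubling time of the exponential by m.
# tau and the multipliers below are arbitrary illustrative numbers.
tau = 10.0                 # baseline "time constant" of a field, in years
for m in [1, 2, 5]:        # hypothetical AI-driven productivity multipliers
    doubling_time = math.log(2) * tau / m
    print(f"productivity x{m} -> doubling time ~{doubling_time:.1f} years")
```

Whether the real dynamics look anything like this depends on exactly the kinds of inelastic bottlenecks discussed next.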

But what about parts of scientific experiments that are, e.g., as an extreme example, something like “letting the monkeys grow old”? These are the “inelastic” parts. If we need to see if a monkey will actually get an age-related disease, we need to let it grow old. That takes years. This speed isn’t limited by access to previously-rare-and-expensive human skills. The monkey lifecycle is just what it is. If we want to know if a diagnostic can predict cancer 20 years before it arises, then at least naively, we’ll need to wait 20 years to find out if we’re right. We’ll be able to come up with some surrogate endpoints and predictors/models for long-timescale or other complex in-vivo biology (e.g., translation of animal findings to humans), but they’ll still need to be validated relative to humans. If we want in-silico models, we’ll need massive data generation to get the training data, often at a level of scale and resolution where we don’t have existing tools. That seems to set a limit on how quick the most relevant AI-accelerated bio progress could be.

Sometimes there are clever work-arounds, of course, e.g., you don’t necessarily need to grow full plants to usefully genetically engineer plants, and in the “growing old” example, one can use pre-aged subjects and study aging reversal rather than prevention/slowing. In fact, coming up with and validating those kinds of work-arounds may itself be what is ultimately rate-limiting. FRO-like projects to generate hard-to-get ground truth data or tooling to underpin specific uses of AI in science (like making a universal latent variable model of cellular state) may be one fruitful avenue. Concerned that the clinical trial to test a new cell therapy is going to be expensive and dangerous? Maybe try a cell therapy with a chemical off-switch instead. How inelastic must inelastic be, really?

Type 3 skills?

Finally, what about “Type 3” skills? On the one hand, someone could say, “science” requires more than just existing “best of humanity level” skills. What scientists do is NOT just practice skills that other people already have. It is not enough to just make routine and fast on anyone’s web browser something that humanity at large already knows, because science is about discovering what even humanity at large does not know yet. What scientists are doing is creating new ideas that go beyond what humanity in aggregate already knows. So “science” is not a Type 1 or Type 2 skill, one might say; it is a “Type 3 skill”, perhaps, i.e., one that does not come from simply imitating the best of what humanity at large already knows and has documented well, but rather extends the all-of-humanity reach out further. Furthermore, as Sam points out, a lot of the literature is basically wrong (or severely incomplete) in bio, so reasoning directly from the literature to scientific conclusions or even correct hypotheses may not get you all that far. Furthermore, LLMs currently generate a lot of incorrect ramblings and don’t have a strong “drive towards truth” as opposed to just a “drive to imitate what humans typically do”.

On the other hand, much of what scientists actually spend their time on is not some essentialized novel truth generating insight production galaxy brain mind state per se, but rather things like making an Excel spreadsheet to plan your experiment and the reagents you need to order and prepare. Enumerating/tiling a space of possibilities and then trying several. Visualizing data to find patterns. Finding and digesting relevant literature. Training new students so they can do the same. Furthermore, as mentioned, in certain more abstruse areas of science, like pure math, we have the possibility of formal verification. So theoretical areas may see a boost of some kind too to the extent that they rely on formal math, perhaps seeing in just a few years the kind of productivity boost that came from Matlab and Mathematica over a multi-decade period. A lot of science can be sped up regardless of whether there are real and important Type 3 skills.

Are there real Type 3 skills? Roger Penrose famously said that human mathematical insight is actually uncomputable (probably wrong). But in the above it seems like we can accelerate formal math in the context of interactive theorem provers. Where is the Type 3 math skill? In the above we also said that a lot of what seems like Type 3 science skill is just an agglomeration of bespoke regular skills. I’d like to hear people’s thoughts on this. I bet Type 3 skills do very much exist. But how rate limiting are they for a given area of science?

More bespoke, and direct, AI applications in science

This is not to mention other more bespoke applications of AI to science. Merely having the ability to do increasingly facile protein design has unblocked diverse areas from molecular machine design to molecular ticker-tape recording in cells. This is now being boosted by generative AI, and there are explicit efforts to automate it. (Hopefully, this will help quickly bring a lot of new non-biological elements into protein engineering, and thus help the world’s bioengineers move away from autonomously self-replicating systems, partially mitigating some biosafety and biosecurity risks.)

There isn’t the space here to go over all the other exciting boosts to specific areas of science that come from specific areas of deep learning, as opposed to more general automation of human cognition as we’re considering with language models, their application to coding, and so on. Sequence to sequence models for decoding mass spectrometry into protein sequences, predicting antigens from receptor sequences, stabilizing fusion plasmas, density functional theory for quantum chemistry calculations, model inference, representations of 3D objects and constructions for more seamless CAD design… the list is just getting started. Not to mention the possible role of “self-driving labs” that become increasingly end-to-end AI driven, even if in narrower areas? It seems like we could be poised for quite broad acceleration in the near term just given the agglomeration of more narrow deep learning use cases within specific research fields.

Inhabiting a changing world

We haven’t even much considered ML-driven acceleration of ML research itself, e.g., via “AutoML” or efficient neural architecture search, or just via LLMs taking a lot of the more annoying work out of coding.

A recent modeling paper concludes that: “We find that deep learning’s idea production function depends notably more on capital. This greater dependence implies that more capital will be deployed per scientist in AI-augmented R&D, boosting scientists’ productivity and economy more broadly. Specifically our point estimates, when analysed in the context of a standard semi-endogenous growth model of the US economy, suggest that AI-augmented areas of R&D would increase the rate of productivity growth by between 1.7- and 2-fold compared to the historical average rate observed over the past 70 years”

In any case, it seems that there could be a real acceleration in the world outside of software and tech, from generative AI. But “inelastic” tasks, and fundamentally missing data, within areas like biology, may still set a limit on the rate of progress, even with AI acceleration of many scientific workflows. It is worth thinking about how to unblock areas of science that are more inelastic.

In this accelerated world model, I’m (somewhat) reassured that people are thinking about how to push forward beneficial uses of this technology, and how to align society to remain cooperative and generative in the face of fast change.

Acknowledgements

Thanks to Eric Drexler, Sam Rodriques and Alexey Guzey, as well as Erika De Benedectis, Milan Cvitkovic, Eliana Lorch and David Dalrymple for useful discussions that informed these thoughts (but no implied endorsement by them of this post).

Selected mentions of the Focused Research Organization (FRO) concept online

convergentresearch.org

e11.bio

cultivarium.org

https://www.nature.com/articles/d41586-022-00018-5

https://www.forbes.com/sites/alexknapp/2023/03/17/why-billionaires-ken-griffin-and-eric-schmidt-are-spending-50-million-on-a-new-kind-of-scientific-research/?sh=7bd83bbb2847

https://www.schmidtfutures.com/schmidt-futures-and-ken-griffin-commit-50-million-to-support-the-next-big-breakthroughs-in-science/

https://en.m.wikipedia.org/wiki/Convergent_Research

https://www.nature.com/articles/d41586-024-00928-6

https://www.metaculus.com/tournament/fro-casting/

https://scienceplusplus.org/metascience/index.html

https://www.sciencedirect.com/science/article/pii/S0092867423013272?dgcid=author

https://www.gov.uk/government/publications/research-ventures-catalyst-successful-applications/research-ventures-catalyst-successful-applications

https://aria.org.uk (“BUILD” or “focused research unit”)
https://www.gov.uk/government/news/plan-to-forge-a-better-britain-through-science-and-technology-unveiled

https://www.gov.uk/government/publications/research-ventures-catalyst-successful-applications

https://twitter.com/SGRodriques/status/1654921083613745152
https://bit.ly/CTP-AdamMarblestone

https://www.punkrockbio.com/p/developing-the-problem-centric-founder

https://www.biorxiv.org/content/10.1101/2023.05.19.541510v2.abstract

https://www.prnewswire.com/news-releases/cultivarium-announces-collaboration-with-atcc-to-expand-repertoire-of-microbes-available-for-the-bioeconomy-301798984.html


https://www.youtube.com/watch?v=ekYeqvMcaWQ


https://fas.org/publication/focused-research-organizations-a-new-model-for-scientific-research/

https://www.philanthropy.com/article/quick-grants-from-tech-billionaires-aim-to-speed-up-science-research-but-not-all-scientists-approve

https://www.sam-rodriques.com/post/optical-microscopy-provides-a-path-to-a-10m-mouse-brain-connectome-if-it-eliminates-proofreading

https://twitter.com/SGRodriques/status/1680219753267466240

https://www.sam-rodriques.com/post/academia-is-an-educational-institution

https://www.thetimes.co.uk/article/8cb0c0c2-bb80-11ed-b039-425ba6c60d6d

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1140211/rdi-landscape-review.pdf

https://tytonpartners.com/voices-of-impact-tom-kalil-schmidt-futures-2/ 
https://www.thendobetter.com/arts/2022/11/15/michael-nielsen-metascience-how-to-improve-science-open-science-podcast?format=amp


https://ai.objectives.institute/

https://corinwagen.github.io/public/blog/20221026_structural_diversity.html

https://xcorr.net/2022/11/03/how-do-science-startups-actually-work/

https://www.notboring.co/p/gassing-the-miracle-machine

https://nadia.xyz/science-funding

https://nadia.xyz/idea-machines

https://nadia.xyz/reports/early-stage-science-funding-asparouhova-jan-2023.pdf
https://mobile.twitter.com/ProtoResearch/status/1504587632948482049

https://www.fastcompany.com/90684882/these-focused-research-organizations-are-taking-on-gaps-in-scientific-discovery

https://www.forbes.com/sites/johncumbers/2023/02/15/ben-reinhardt-is-on-a-mission-to-make-sci-fi-a-reality/?sh=240e12fe4148

https://www.nature.com/articles/s42254-022-00426-6
https://inews.co.uk/opinion/too-much-good-science-never-gets-funded-heres-something-that-might-help-fix-that-1777792

https://institute.global/policy/new-model-science
https://progress.institute/fund-organizations-not-projects-diversifying-americas-innovation-ecosystem-with-a-portfolio-of-independent-research-organizations/
https://logancollinsblog.files.wordpress.com/2021/05/list-of-biotechnology-companies-to-watch-1.pdf

https://overlapholdings.substack.com/p/brave-capital-a-mini-manifesto?utm_source=profile&utm_medium=reader2
https://progress.institute/progress-is-a-policy-choice/

https://manifund.org/projects/optimizing-clinical-metagenomics-and-far-uvc-implementation

https://endpts.com/inside-the-multibillion-dollar-silicon-valley-backed-effort-to-reimagine-how-the-world-funds-and-conducts-science/

https://twitter.com/SGRodriques/status/1447976944948088832

https://noahpinion.substack.com/p/interview-jason-crawford-nonprofit

https://elidourado.com/blog/geothermal/

http://gaia.cs.umass.edu/NNRI/

https://applieddivinitystudies.com/FRO/

https://www.geroscience.health/white-paper

https://www.economist.com/united-states/2021/06/05/congress-is-set-to-make-a-down-payment-on-innovation-in-america

https://mobile.twitter.com/JvNixon/status/1404808694278279183

https://dweb.news/2021/09/05/technology-on-beaming-solar-power-from-low-earth-orbit/

https://innovationfrontier.org/geothermal-everywhere-a-new-path-for-american-renewable-energy-leadership/

https://podcasts.apple.com/au/podcast/adam-marblestone-ben-reinhardt-fro-parpa-innovating/id1573395849?i=1000539374581

https://www.sciencefutures.org/resources/

https://leadegen.com/index.php/frontiers/7-tom-kalil/

https://astera.org/fros/

http://tib.matthewclifford.com/issues/tib-134-technological-sovereignty-why-we-dream-the-missing-piece-in-r-d-and-more-281762

https://ideamachinespodcast.com/adam-marblestone-ii

https://austinvernon.site/blog/drillingplan.html

https://rootsofprogress.org/how-to-end-stagnation

https://www.lawfareblog.com/chinatalk-tough-tech-roombas-valleys-death-and-woolly-mammoths

https://nintil.com/bottlenecks-workshop/

https://www.scibetter.com/interview/ricon

https://foresight.org/salon/aging-ecosystem-multipliers-focused-research-orgs-adam-marblestone-schmidt-futures-fellow/

https://arbesman.net/overedge/

https://corinwagen.github.io/public/blog/20230717_fmufros.html

https://rootsofprogress.org/a-career-path-for-invention

https://ntc.columbia.edu/wp-content/uploads/2021/04/National-Brain-Observatory.pdf

https://www.dayoneproject.org/post/focused-research-organizations-to-accelerate-science-technology-and-medicine

https://dash.harvard.edu/handle/1/42029733

https://dspace.mit.edu/handle/1721.1/123401

Related:


https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years?commentId=GBvdQoNG7L2vqPu3v

https://moalquraishi.wordpress.com/2020/12/08/alphafold2-casp14-it-feels-like-ones-child-has-left-home/#s5
“Resources also helped and this is not to be underestimated, but I would like to focus on organizational structure as I believe it is the key factor beyond the individual contributors themselves. DeepMind is organized very differently from academic groups. There are minimal administrative requirements, freeing up time to do research. This research is done by professionals working at the same job for years and who have achieved mastery of at least one discipline. Contrast this with academic labs where there is constant turnover of students and postdocs. This is as it should be, as their primary mission is the training of the next generation of scientists. Furthermore, at DeepMind everyone is rowing in the same direction. There is a reason that the AF2 abstract has 18 co-first authors and it is reflective of an incentive structure wholly foreign to academia. Research at universities is ultimately about individual effort and building a personal brand, irrespective of how collaborative one wants to be. This means the power of coordination that DeepMind can leverage is never available to academic groups. Taken together these factors result in a “fast and focused” research paradigm.

AF2’s success raises the question of what other problems exist that are ripe for a “fast and focused” attack. The will does exist on the part of funding agencies to dedicate significant resources to tackling so-called grand challenges. The Structural Genomics Initiative was one such effort and the structures it determined set the stage, in part, for DeepMind’s success today. But all these efforts tend to be distributed. Does it make sense to organize concerted efforts modeled on the DeepMind approach but focused on other pressing issues? I think so. One can imagine some problems in climate science falling in this category.

To be clear, the DeepMind approach is no silver bullet. The factors I mentioned above—experienced hands, high coordination, and focused research objectives—are great for answering questions but not for asking them, whereas in most of biology defining questions is the interesting part; protein structure prediction being one major counterexample. It would be short-sighted to turn the entire research enterprise into many mini DeepMinds.

There is another, more subtle drawback to the fast and focused model and that is its speed. Even for protein structure prediction, if DeepMind’s research had been carried out over a period of ten years instead of four, it is likely that their ideas, as well as other ideas they didn’t conceive of, would have slowly gestated and gotten published by multiple labs. Some of these ideas may or may not have ultimately contributed to the solution, but they would have formed an intellectual corpus that informs problems beyond protein structure prediction. The fast and focused model minimizes the percolation and exploration of ideas. Instead of a thousand flowers blooming, only one will, and it may prevent future bloomings by stripping them of perceived academic novelty. Worsening matters is that while DeepMind may have tried many approaches internally, we will only hear about a single distilled and beautified result.

None of this is DeepMind’s fault—it reflects the academic incentive structure, particularly in biology (and machine learning) that elevates bottom-line performance over the exploration of new ideas. This is what I mean by stripping them from perceived academic novelty. Once a solution is solved in any way, it becomes hard to justify solving it another way, especially from a publication standpoint.”

Notes on indoor food production, life support for space colonies, “refuges”, “biobox”, and related

Epistemic status: initial stab from a non-expert

Thanks to Shannon Nangle and Max Schubert for helpful discussions. 

Summary:

There are a variety of potential scenarios where a self-sustaining, biologically isolated Refuge would be needed towards existential risk reduction. 

Currently, there is very little work going on towards technical common denominators required for all such approaches, e.g., highly efficient and compact indoor food production. 

Shallow overview of prior writings related to the BioBox/Refuge concept:

There are a number of concepts floating around about Refuge or “Bio Box” that gesture at different design criteria.

There is the abstract concept around bio-risk

https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf

There is the Carl Shulman blog version

http://reflectivedisequilibrium.blogspot.com/2020/05/what-would-civilization-immune-to.html

Shulman envisions greatly increased wealth or cost-benefit calculus motivating society to equip many standard living and working spaces with BSL-4 level biosafety precautions, e.g., “large BSL-4 greenhouses”. This does not address the issue of radical improvements in the density or autonomy of food production or waste management, but rather proposes the use of advanced filtering and sterilization procedures in the context of more conventional infrastructure.  

There are the old Biosphere 2 projects, which included a lot of extraneous stuff up front, like goats and waterfalls — these are arguably not technically focused enough to drive the core capability advancements needed for a Refuge.

https://biosphere2.org/

There is the notion of a regenerative life-support system for space, e.g., for future space stations. This bleeds into the notion of a bio-regenerative life-support system, where a large number of essential regenerative functions (e.g., waste recycling or gas balancing) are performed by biological organisms, e.g., MELiSSA, BIOS-3.

https://en.wikipedia.org/wiki/MELiSSA

MELiSSA stands for Micro-Ecological Life Support System Alternative, where “alternative” means it uses biology for some functions where, for example, the International Space Station would use more conventional chemical engineering methods. See:

https://www.youtube.com/watch?v=PHTVep3Fik0

http://www.sciencedirect.com/science/article/pii/S0168165602002225

There is the notion of a food production and waste recycling system roadmap for early Mars colonies

https://www.nature.com/articles/s41587-020-0485-4

See the excerpt below.

While optimized for the Martian setting, they point to core technology problems that may be relevant for Earth-based refuges, including efficient indoor food production, and have led to some roadmapping work taxonomizing where biological versus non-biological solutions could be most useful in a simplified, minimal closed habitat.

There is simply the idea of highly efficient indoor food production to protect against risks to the food supply.

There is the idea of nuclear submarines being able to operate and hide ~indefinitely as a deterrent to attacks, by having a Refuge on board.

There is the notion of using Refuges as a way of maintaining other defensive or counter-offensive biotech capabilities in a safe space, e.g., if a Refuge is where we keep our vaccine/countermeasure synthesis capacity.

Within all this there are parameters including whether it is totally sealed, size, number of people supported, comfort level supported, and so on. 

There is the George Church version of BioBox, which is closest to a modern idea for a fully closed bio-regenerative life-support system on Earth, building on MELiSSA but with an emphasis on photosynthesis and certain unconventional applications in mind. This proposes to use photosynthetic microbes as a food source, in contrast to the Nangle et al. first-stage Mars plan, which proposes to use methanol-using heterotrophic and CO2-using lithoautotrophic fermentation.

Finally there is the idea of pushing relevant (e.g., compact indoor food production) technologies by first developing economically viable products (such as niche food products for consumers).

Then there is the “hydrogen oxidizing bacteria” (HOB) approach — see these papers:

Alvarado, Kyle A., et al. “Food in space from hydrogen-oxidizing bacteria.” Acta Astronautica 180 (2021): 260-265.

Martínez, Juan B. García, et al. “Potential of microbial protein from hydrogen for preventing mass starvation in catastrophic scenarios.” Sustainable Production and Consumption 25 (2021): 234-247.

Nangle, Shannon N., et al. “Valorization of CO2 through lithoautotrophic production of sustainable chemicals in Cupriavidus necator.” Metabolic Engineering 62 (2020): 207-220.

Liu, Chong, et al. “Water splitting–biosynthetic system with CO2 reduction efficiencies exceeding photosynthesis.” Science 352.6290 (2016): 1210-1213.

Chen, Janice S., et al. “Production of fatty acids in Ralstonia eutropha H16 by engineering β-oxidation and carbon storage.” PeerJ 3 (2015): e1468.

From a Denkenberger paper: “The main companies currently pioneering mass production of H₂ SCP are: SolarFoods, NovoNutrients, Avecom, Deep Branch Biotechnology [looks like they are making animal feed], Kiverdi [“air protein”] and LanzaTech [making Omega3 fatty acids from CO2, for one thing]…”. See also, notably, Circe

Other more mature aspects of life support systems outside food production

https://www.nasa.gov/content/life-support-systems

https://www.esa.int/Science_Exploration/Human_and_Robotic_Exploration/Concordia

https://patents.google.com/patent/WO1998025858A1/en

https://ntrs.nasa.gov/api/citations/20060005209/downloads/20060005209.pdf

https://www.nasa.gov/pdf/473486main_iss_atcs_overview.pdf

https://www.nasa.gov/centers/marshall/pdf/104840main_eclss.pdf

Note: the ISS imports food and some water from Earth but recycles other things like oxygen, as I understand it

Relevant excerpt from The Case for Biotechnology on Mars (Nangle et al):

“Recent advances in fermentative production of flavors, textures and foods can form the basis for new Mars-directed engineering efforts. Successful deployment will require the in-tandem development of organisms and fermenters for Martian conditions; the system must use CO2 and CH3OH as its sole carbon sources, accommodate unreliable solar irradiance and tolerate the potential presence of contaminants in water and regolith. To support this development, we propose scaling Martian food production in three stages: Stage I involves lithoautotrophic and heterotrophic fermentation; Stage II involves photoautotrophic fermentation and small-scale crop growth; and Stage III involves large-scale crop cultivation. 

Stage I. Both methanol-using heterotrophic and CO2-using lithoautotrophic fermentation will be used to complement the crew’s diet and serve as an initial demonstration of Martian food production. 

Fermentation technologies also have the added benefit of shorter boot-up and production timelines (days to weeks) compared with the production of staple plant crops (weeks to months). Fermentation can be carried out in simple stir tanks or airlift reactors that use engineered organisms to produce complex carbohydrates and proteins40,41. Several suitable methylotrophic organisms, such as Methylophilus methylotrophus and Pichia pastoris, are already genetically characterized, industrially optimized and extensively deployed for large-scale production. Methylotrophic genes have also been heterologously expressed in model organisms such as Escherichia coli and Bacillus subtilis41. Such organisms can be engineered to produce a wealth of ingredients, including flavors, protein, organic acids, vitamins, fatty acids, gums, textures and polysaccharides41. Bioreactors with these organisms have very high process intensities, with a single 50-m3 reactor able to produce as much protein as 25 acres of soybeans, with only a few days to the first harvest42–44. CO2-using lithoautotrophs could similarly be engineered to couple their hydrogen oxidation and CO2 fixation into oligosaccharides, protein and fatty acid production.

Maximizing yields in these microbial chassis and adapting the above organisms to Martian minimal medium remain key challenges. Initial applications can focus on small-scale sources of backup calories and on establishing benchmarks for subsequent larger-scale implementation. Demonstration of aero- and hydroponic systems to grow spices, herbs and greens would be explored in this stage45.

Stage II. The second stage focuses on introducing photoautotrophs to synthesize food. With increasing investment in Martian infrastructure, more complex bioreactors can be deployed to grow green algae rich in carbohydrates, fatty acids and protein46. Several well-developed terrestrial examples of algal industrialization exist, such as Arthrospira platensis for food or commercial algal biofuels47. On Earth, the high capital costs of building reactors and supplying high concentrations of CO2 for optimal production are commercially challenging. On Mars, however, this challenge becomes an advantage: the CO2-rich atmosphere can be enclosed and pressurized for algal growth.

As photoautotrophic growth is scaled to meet more nutritional requirements of the crew, maintaining reliable production despite the weaker Martian sunlight and planet-engulfing dust storms will be a key challenge, requiring surface testing of several reactor designs. We do not anticipate using natural sunlight as an energy source for photoautotrophs at these stages because it alone is insufficient for growth: once solar photons have passed through greenhouse materials, photoautotrophs would receive around 17 mol m⁻² sol⁻¹ — up to fourfold less than their typical minimal requirements35,48.

Thus, at this stage, photosynthetic organisms would be grown in photobioreactors or growth chambers with optimized artificial lighting. For longer habitation, the psychological benefits of having living plants and familiar foods are substantial49….”

Cyanobacterial food

https://wyss.harvard.edu/news/max-schubert-on-fast-growing-cyanobacteria/

But is the cyanobacterial path the right one? To compare the hydrogen oxidizing bacteria (HOB) approach with a photosynthetic microalgae or cyanobacteria approach, consider this quote from one of the Denkenberger papers: “Electricity to biomass efficiencies were calculated for space to be 18% and 4.0% for HOB [hydrogen oxidizing bacteria] and microalgae, respectively. This study indicates that growing HOB is the least expensive alternative. The [equivalent system mass] of the HOB is on average a factor of 2.8 and 5.5 less than prepackaged food and microalgae, respectively.” So HOB is significantly more efficient, per this analysis. 
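
A quick sanity check on those quoted numbers (the figures below come straight from the Denkenberger/Alvarado quote above; nothing new is estimated):

```python
# Electricity-to-biomass efficiencies quoted above (for the space setting).
hob_efficiency = 0.18          # hydrogen-oxidizing bacteria
microalgae_efficiency = 0.040  # photosynthetic microalgae

print(hob_efficiency / microalgae_efficiency)  # ~4.5x advantage for HOB
# The quoted equivalent-system-mass ratios (HOB ~2.8x better than prepackaged
# food and ~5.5x better than microalgae) point in the same direction.
```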

The supplemental materials of the Metabolic Engineering paper include this comparison with cyanobacterial food production:

“Comparison to cyanobacterial co-culture systems

As bioproduction technologies have expanded, co-culture and cross-feeding has been explored as a possible solution to lower feedstock costs while supporting the existing infrastructure of engineered heterotrophs. Efforts towards autotrophic-heterotrophic co-cultures have primarily focused on cyanobacteria as the autotroph1,2. Cyanobacteria are an obvious choice as they natively produce sucrose as an osmoprotectant—rather than a carbon source—to high concentrations without toxicity, making it an attractive feedstock-producer for heterotrophs. Engineered cyanobacterial strains able to convert and export up to 80% of their fixed carbon successfully fed three phylogenetically distinct heterotrophic microbes (E. coli, B. subtilis, and S. cerevisiae)3. However, cyanobacteria produce reactive oxygen species through photosynthesis and protective cyanotoxins, which are ultimately toxic to the heterotrophs. While cyanobacteria have higher solar-to-biomass conversion efficiencies than plants, efficiency remains 5-7% and is thermodynamically limited to ~12%—several fold lower than photovoltaics4. In addition to their biological limitations, there are a variety of implementation constraints that hinder industrial scale-up. Because cyanobacteria grown at scale require sunlight, two common culturing methods allow for optimal sunlight penetration: pools and photobioreactors. The large shallow pools can only be used in certain regions, are susceptible to environmental changes and contamination—and so it is difficult to maintain consistent batch-to-batch cultivation. In an effort to mitigate some of these issues, these pools can be modified to grow the cyanobacteria in small diameter tubing, but this kind of containment often deteriorates from radiation exposure as well as generates substantial plastic waste5. Because these issues4 are all challenges for cyanobacteria monoculture, it is not clear how a co-culture system would be successfully implemented at scale.”

See here for more on comparison with using cyanobacteria:

https://microbialcellfactories.biomedcentral.com/articles/10.1186/s12934-018-0879-x

https://science.sciencemag.org/content/332/6031/805

I am not an expert here but this seems to basically be saying photosynthesis is not actually that great compared to what can be done with other kinds of conversion. 

Note: other comparisons could be made to other food from gas and food from woody biomass approaches. 

A counter-argument against this kind of industrial-instrumentation- and bioengineering-heavy approach is that in some catastrophic scenarios on Earth, e.g., post-nuclear, one might have very limited infrastructure capacity (no power grid, no chemical manufacturing whatsoever, no good temperature control systems, etc.), and one could perhaps instead focus on producing sufficient food from woody biomass, with enough net caloric gain for the human workers doing a lot of the processing by hand in that scenario.

This addresses a scenario relevant to post-apocalyptic (nuclear winter) food production but not necessarily the Refuge/BioBox scenario per se.

Some questions:

Q: Is efficient indoor food production from simple feedstocks indeed the “long pole in the tent”, technically, for a Refuge/BioBox/closed life support system in general?

Other parts of life support do seem more solved, e.g., from work done by the International Space Station teams: 

https://patents.google.com/patent/WO1998025858A1/en

https://ntrs.nasa.gov/api/citations/20060005209/downloads/20060005209.pdf

https://www.nasa.gov/pdf/473486main_iss_atcs_overview.pdf

https://www.nasa.gov/centers/marshall/pdf/104840main_eclss.pdf

Q: What about doing natural gas to food biologically as a means of producing food

https://agfundernews.com/two-startups-converting-methane-into-animal-feed-raise-funding-from-gas-giants-in-europe-asia.html

or coal to food chemically (See: https://www.washingtonpost.com/archive/lifestyle/food/1984/05/27/can-food-be-made-from-coal/d80567ac-c656-4e0b-9f54-d505bd6d261a/)?

Q: Is there value in pushing conventional indoor vertical farming instead? See:

https://www.pnas.org/content/117/32/19131

Much less efficient than using microbes? 

Q: Would we really do direct air (or ocean) capture of CO2 for a refuge on Earth?

One of the Denkenberger papers states: “Electrolysis based H₂ SCP production requires an external carbon source. This study conservatively uses direct air capture (DAC) of CO₂ as the basis of our calculations; however, CO₂ capture from industrial emitters is in most cases less expensive and in some cases can already contain some amount of hydrogen that can be used.”

“The nitrogen requirements can be satisfied by using ammonia from the fertilizer industry….”

Notes on sequence programmability in bio-templated electronics

Note: prepared as a response to this RFI.

Response to prompt (b) on Biotemplated Registration capabilities:

b1. What are the physical mechanisms underlying your registration approach(es)?  Include surface chemistry requirements in your discussion.

The proposed approach would achieve sequence-specific addressable chips that can direct unique-sequence DNA origami to specific spots on chip with an exponential diversity of sequence programmability rather than a more limited diversity of shape and surface affinity programmability as in previous work. 

To do this, one needs to be able to approximately size-match single DNA origami-like structures with single sequence-specific spots on chip (think of them as localized “forests” of copies of a particular DNA sequence on chip). One way of doing that would be to photo-pattern a sequence-specific DNA microarray, and then shrink the spots with Implosion Fabrication. 

What are the physical principles underlying Implosion Fabrication? It turns out that there are materials called hydrogels that, when you put them in water, can swell uniformly by a large factor, say 10x along each axis. If you add salt, they uniformly shrink back down. Implosion Fabrication uses a focused spot of light to pattern materials into a swollen hydrogel, and then shrinks it. So you can get 10x better resolution, say, than the smallest diameter of a focused spot of light, i.e., you can get tens-of-nanometer resolution using light with a wavelength of hundreds of nanometers. This can be done with a variety of materials, though the original demonstration used fluorescent dyes for illustration purposes.
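
As a rough sense of the numbers (the spot size and shrink factor below are just the ballpark figures mentioned above, not specs of any particular process):

```python
# Roughly: final feature size ≈ optically patterned spot size / linear shrink factor.
optical_spot_nm = 300.0  # a diffraction-limited focused spot, on the order of the wavelength
shrink_factor = 10.0     # ~10x uniform linear shrinkage of the hydrogel, as described above

print(optical_spot_nm / shrink_factor)  # ~30 nm features from visible-light patterning
```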

A key advantage of this approach is that Implosion Fabrication operates directly in three dimensions. 

b2. Does your approach incorporate biomaterials into the resulting device?

It can, in theory, though this depends on the nature of the post-processing, e.g., whether it involves high temperatures. The approach could be adapted for different such scenarios.  

b3. What are the expected capabilities of your registration approach(es) (e.g., location, orientation and geometrical tolerances, pitch, critical dimensions, error in critical dimensions, density multiplication, critical dimension shrinkage)?  Please include a discussion of how computational and metrology resources assist in this approach.

Goal: The key goal would be to take the full addressability within DNA origami — the fact that each staple strand goes to a unique site on the origami, with few-nanometer precision, and can thus bring a unique attached chemical, nanoparticle or other cargo to that particular site on the origami (the 2007 Battelle roadmap has a good description of this concept, which it calls “unique addressing”) — and extend that to an area approaching the size of a computer chip, so say a millimeter on a side instead of 100 nm on a side.

Background: What the current state of the art can do is use electron beam lithography to make small “sticky” spots (I’m glossing over the chemistry obviously) on a silicon surface — and importantly, those spots can have a well-defined orientation and be of the exact right size and shape to stick to a shape-matched DNA origami. In the published images, the DNA origami triangles line up quite well inside the lithographic triangles.

They can then use this to make some basic photonic devices. This is one of those technologies where it feels like it now needs exploration to find its killer app. One possibility is positioning of a small number of discrete photonic components at the right locations on chips, e.g., for single photon sources — there is some progress in that general direction: “the authors were able to position and orient a molecular dipole within the resonant mode of an optical cavity”.

Proposed innovations: The proposed approach would go beyond just matching the shapes of lithographic spots to the shapes of DNA origami, and instead actually have unique DNA sequences at unique spots that could uniquely bind a given DNA origami. This would be a combination of a few technologies.

In more detail, in my mostly theoretical thesis chapter on “nm2cm” fabrication

http://web.mit.edu/amarbles/www/docs/Marblestone_nm2cm_thesis_excerpt.pdf

http://web.mit.edu/amarbles/www/docs/Bigger-NanoBots-Marblestone.pdf

https://dash.harvard.edu/handle/1/12274513

we proposed that the key gap in this field — of integrating biomolecular self-assembly with top-down nanofabrication to construct chip-scale systems — is that, as impressive as works like Ashwin Gopinath’s (using shape to direct DNA origami to particular spots on chips) are, it would be even more powerful if we could direct specific origami to specific spots on chip in a sequence-specific way: each spot on the chip should have a unique DNA address that matches a unique DNA origami slated to land there. How can we do that?

1.1) Optical approaches (faster, cheaper than electron beam lithography) can deposit or synthesize particular DNA sequences at particular spots on chip — and this is widely used to create DNA microarrays — but the spot sizes and spacings of the resulting DNA “forests” are too large to achieve “one origami per spot”

http://www.biostat.jhsph.edu/~iruczins/snp/extra/05.08.31/nbt1099_974.pdf

1.2) So we proposed to combine sequence non-specific but higher resolution photolithography to make small spots, with coarser grained optical patterning to define sequences for those spots, and then large origami rods spanning spot to spot to help ratchet orientation and spacing into a global “crystal-like” pattern: see nm2cm chapter above

Anyway, we didn’t demonstrate much of this experimentally at all (alas, it needed an ARPA program not just a rather clumsy grad student, or at least that would be my excuse!), but since then

1.a) Implosion Fabrication (ImpFab), mentioned above, may now provide a way to take a sequence-specific DNA microarray and shrink it so that the spot size matches achievable sizes of DNA origami.

1.b) Researchers have started making smaller/finer-resolution microarray-like sequence-specific (albeit random) patterns on chips, and even transferred them to other substrates

https://www.biorxiv.org/content/10.1101/2021.01.25.427807v1.full

https://www.biorxiv.org/content/10.1101/2021.01.17.427004v1.full

(With these, you make a fine grained but random pattern and then image/sequence it in-situ to back out what is where. This would obviously entail a significant metrology component, i.e., figuring out what sequence is where and then synthesizing a library of adaptor strands to bring the right origami to the right sequences.)

1.c) DNA origami have gotten bigger, too, closer to matching the sizes even of existing non-shrunken microarray spots

https://www.nature.com/articles/nature24655

1.d) Another approach that could be used for fine-grained, sequence-specific patterning would be something like ACTION-PAINT. This is in the general category of nanopatterning via “running a microscope in reverse”. Basically, there is a microscopy method called DNA PAINT that works like this. You have some DNA strands on a surface, arranged just nanometers apart from one another, and you want to see how they are all arranged. If you just put fluorescent labels on all of them at once, and look in an optical microscope, then the limited resolution of the optical microscope — set by the wavelength of light, a few hundred nanometers — blurs out your image. But if you can have complementary DNA strands bind and unbind transiently with the strands on the surface, fluorescing only when they bind, and such that at any given time only one is bound, then you can localize each binding event, one at a time, with higher precision than the wavelength (by finding the centroid of a single Gaussian spot at a time). That’s the basic principle of single-molecule localization microscopy, which won the Nobel Prize in Chemistry in 2014.

The magic is that you can localize the centroid of one (and only one) isolated fluorescent spot much more precisely than you can discriminate the distance between two (or more) overlapping fluorescent spots. So you rely on having a sparse image at any one time, as DNA molecules bind on and off to different sites on the object such that typically only one site has a bound partner at any given time, and then you localize each binding event one by one and build up the overall image as a composite of those localizations.

Anyway, that’s a microscopy method that lets you see with resolution down to a couple nanometers, well below the wavelength of light.
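
To make the “localize one spot at a time” point concrete, here is a minimal simulation sketch (the photon count and PSF width are arbitrary illustrative values; the standard rule of thumb is that localization precision scales roughly as the PSF width divided by the square root of the number of photons collected):

```python
import numpy as np

rng = np.random.default_rng(0)
psf_sigma_nm = 150.0   # width of a diffraction-limited spot, set by the wavelength
n_photons = 2000       # photons collected from one isolated binding event

# Each detected photon lands at the true molecule position plus PSF blur.
true_position_nm = 0.0
photon_positions = rng.normal(true_position_nm, psf_sigma_nm, size=n_photons)

estimate = photon_positions.mean()                     # centroid of the single spot
predicted_precision = psf_sigma_nm / np.sqrt(n_photons)

print(f"centroid error: {abs(estimate - true_position_nm):.1f} nm")
print(f"predicted precision: ~{predicted_precision:.1f} nm")  # a few nm, far below the wavelength
```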

But how can you use this for nano-patterning? Well, imagine you have a desired pattern you want to make, and you are doing this “single-molecule localization microscopy” process in real time. Then, if you can detect that a DNA strand has bound to a spot that is supposed to be part of your pattern, and you can register this in real time, then you can quickly blast the sample with a burst of UV light which locks that strand in place, preventing it from ever leaving again. That “locks in” a DNA strand bound to that spot. Now, most of the time, the localizations you’ll see will be at spots you don’t want to be part of your pattern, so you don’t blast the UV light then. But every so often, you’ll see a probe bound at a spot you want to be in the pattern, and when that happens, you take fast action, locking it in. That’s what ACTION-PAINT does.

This can be seen as a kind of molecular printer with, in principle, roughly the same resolution as that of the underlying single-molecule localization microscopy method — which in practice is not quite as high as the best AFM positioning resolution, but is still pretty high, in the single-digit nanometers in the very best case.

Thus, I think sequence-specific bio-chips — in which thousands of distinct origami, defined by sequence rather than just a few defined by shape, can be directed to their appropriate spots on chip in a multiplexed fashion — should be possible. Exactly what their killer applications would be is less clear to me as of now.

b4. How broadly can your approach be applied (i.e., is it limited to a single material and/or device)?

The approach would constitute a general platform for 3D hierarchical multi-material nanofabrication. If developed intensively, many thousands of different DNA origami bearing different functionalizations could in theory be brought to appropriate defined locations in 3D. Orientation of parts would be challenging to achieve, but see the “nm2cm” crystal-like annealing process mentioned above as one route to enabling this. Other materials could also be patterned in-situ using the standard implosion fabrication methods.

b5. What constitutes a defect in your approach?

One could have a) defective origami, b) spots that are not patterned with DNA, c) spots with DNA that do not receive the right origami, d) other larger-scale defects, e.g., non-uniformities in the implosion process if using implosion fabrication, e) orientation defects if aiming to achieve defined orientations, e.g., in something like the nm2cm scheme.

b6. What defect rate and/or density can your approach achieve?

Currently unknown. In theory, layers of error correction could be applied at various levels to reduce defect rates. 

b7. Can defect reduction techniques be applied to your approach and if so, what is the expected impact?

Exact design scheme and quantitative impact not yet clear.

b8. What manufacturing throughput can your approach achieve?

Because it can rely on photolithography rather than electron-beam lithography, with the few-nm-scale patterning carried by the origami themselves in a massively parallel way, the approach could potentially be very fast, e.g., with holographic optical patterning of the initial template to be imploded.

In general, for all detailed implementation questions here, it should be noted this is more of a set of design concepts and these are quite early-stage. In my mind this would form an ancillary, more speculative part of a program, aiming to seed sequence specific assembly and registration principles beyond the “bread and butter” parts of a program that might involve aspects closer to the published literature on registration in 2D by, for example, Gopinath/Rothemund et al.

b9. What existing nanomanufacturing infrastructure (e.g., tooling, processes) is required to enable your approach?  Are these resources currently available to you?

I am currently doing other kinds of work, more on the institutional side. I would suggest doing this via groups like Irradiant Technologies and collaborations with DNA nanotechnology groups (e.g., Ashwin Gopinath, William Shih) and DNA microarray fabrication groups (e.g., Franco Cerrina, the Church lab, spatial transcriptomics labs using related methods). In other words, I’m not in a direct position to execute on this experimentally right now.

b10. What computational resources would assist in simulating your approach?  If you could design the ideal computational infrastructure/ecosystem, what would it look like?  Please be quantitative with expected gains from having access to this ecosystem.

Depends on further narrowing down what this gets used for. Computing doesn’t seem to be the key limitation right now for this project. 

b11. In what way(s) are these resources different from what is currently available?

Computing doesn’t seem to be the key limitation right now for this project. 

b12. How and to what magnitude would these computational resources assist your approach (e.g., improving throughput, decreasing defects, predicting device characteristics)?

Computing doesn’t seem to be the key limitation right now for this project. 

b13. What are the expected resource requirements for your approach (e.g., raw materials required, power, water)? 

Comparable to DNA microarray manufacturing. 

b14. What are the expected costs (including waste streams) of your approach and how do they compare to existing approaches?

Comparable to DNA microarray manufacturing. 

b15. What metrology tools are needed to achieve the capabilities of your registration approach?  If you could design the ideal infrastructure/ecosystem, what would it look like? 

Depending on whether one does random patterning of the initial sequences and then reads them out, this may need something like an Illumina sequencing machine to read the locations of the sequences prior to implosion and addition of the DNA origami. 

b16. In what way(s) are these metrology resources different from what is currently available?

Just needs adaptation of detailed protocols. 

References

Oran D, Rodriques SG, Gao R, Asano S, Skylar-Scott MA, Chen F, Tillberg PW, Marblestone AH, Boyden ES. 3D nanofabrication by volumetric deposition and controlled shrinkage of patterned scaffolds. Science. 2018 Dec 14;362(6420):1281-5.

Marblestone AH. Designing Scalable Biological Interfaces (Doctoral dissertation, Harvard, 2014).

Singh-Gasson S, Green RD, Yue Y, Nelson C, Blattner F, Sussman MR, Cerrina F. Maskless fabrication of light-directed oligonucleotide microarrays using a digital micromirror array. Nature biotechnology. 1999 Oct;17(10):974-8.

Cho CS, Xi J, Park SR, Hsu JE, Kim M, Jun G, Kang HM, Lee JH. Seq-Scope: Submicrometer-resolution spatial transcriptomics for single cell and subcellular studies. bioRxiv. 2021 Jan 1.

Chen A, Liao S, Ma K, Wu L, Lai Y, Yang J, Li W, Xu J, Hao S, Chen X, Liu X. Large field of view-spatially resolved transcriptomics at nanoscale resolution. bioRxiv. 2021 Jan 1.

Liu N, Dai M, Saka SK, Yin P. Super-resolution labelling with Action-PAINT. Nature chemistry. 2019 Nov;11(11):1001-8.

Next-generation brain mapping technology from a “longtermist” perspective

Summary

Neuroscience research might provide critical clues on how to “align” future brain-like AIs. Development of improved connectomics technology would be important to underpin this research. Improved connectomics technology would also have application to accelerated discovery of new potential treatments for currently intractable brain disorders. 

Neuroscience research capabilities may be important to underpin AI alignment research

Since the human brain is the only known generally intelligent system, it is plausible (though by no means certain) that the AGI systems we will ultimately build may converge with some of the brain’s key “design features”.

The presence of 40+ person neuroscience teams at AI companies like DeepMind, or heavily neuroscience-inspired AI companies like Vicarious Systems, supports this possibility.

If this is the case, then learning how to “align” brain-like AIs, specifically, will be critical for the future. There may be much to learn from neuroscience, of utility for the AGI alignment field, about how the brain itself is trained to optimize objective functions.

Neuroscience-focused work is still a small sub-branch of AI safety/alignment research. There are preliminary suggestions that, in this context, the mammalian brain can be thought of as a very particular kind of model-based reinforcement learning agent, with notable differences from current reinforcement learning systems, including the existence of many reward channels rather than one.

See Steve Byrnes’s recent writings on this:

https://www.alignmentforum.org/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine

https://www.alignmentforum.org/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent

https://www.alignmentforum.org/posts/diruo47z32eprenTg/my-computational-framework-for-the-brain

We are even starting to see a bit of empirical evidence for such connections based on recent fly connectome datasets:

https://www.lesswrong.com/posts/GnmLRerqNrP4CThn6/dopamine-supervised-learning-in-mammals-and-fruit-flies

In this scenario, it becomes particularly important to understand the nature of the brain’s reward circuitry, i.e., how the subcortex provides “training signals” to the neocortex. This could potentially be used to inform AI alignment strategies that mimic those used by biology to precisely shape mammalian development and behavior.

In another scenario, which could unfold later this century, closer integration of brains and computers through brain computer interfacing or digitization of brain function may play a role in how more advanced intelligence develops, yet our ability to design and reason about such systems is also currently strongly limited by a lack of fundamental understanding of brain architecture.

Current brain circuit mapping capabilities do not adequately support this agenda

Unfortunately, current brain mapping technologies are insufficient to underpin the necessary research. In particular, although major progress is being made in mapping circuitry in small, compact brain volumes using electron microscopy, e.g.,

https://www.biorxiv.org/content/10.1101/2021.05.29.446289v1
https://www.microns-explorer.org/

this method has some severe limitations.

First, it is at best expensive, and also still technically unproven, to scale this approach to much larger volumes (centimeter distances, entire mammalian brains), when one considers issues like lost or warped serial sections or the need for human proofreading.

This scale is required, though, to reveal the long-range interactions between the subcortical circuitry that (plausibly) provides training signals to the neocortex, and the neocortex itself. Scale is also crucial for revealing general aspects of holistic brain architecture that inform the nature of this training process. This is particularly the case when considering larger brains closer to those of humans.

Second, electron microscopy provides only a “black and white” view of the circuitry that does not reveal key molecules that may be essential to the architecture of the brain’s reward systems. The brain may use multiple different reward and/or training signals conveyed by different molecules, and many of these differences are invisible to current electron microscopy brain mapping technology.

Improved brain mapping technologies could help

New anatomical/molecular brain circuit mapping technology has the potential to increase the rate of knowledge generation about long-range brain circuitry/architecture, such as the subcortical/cortical interactions that may underlie the brain’s “objective functions”. 

This *could* prove to be important to underpin AI alignment in a scenario where AI converges with at least some aspects of mammalian brain architecture. 

See, e.g., the following comment in Steve Byrnes’s latest AI safety post here: “I do think the innate hypothalamus-and-brainstem algorithm is kinda a big complicated mess, involving dozens or hundreds of things like snake-detector circuits, and curiosity, and various social instincts, and so on. And basically nobody in neuroscience, to my knowledge, is explicitly trying to reverse-engineer this algorithm. I wish they would!”

A big part of the reason one can’t do that today is because our technologies for long-range yet precise molecular neuroanatomy are still poor.

At recent NIH/DOE connectome brainstorming workshops, I spoke about emerging possibilities for “next-generation connectomics” technologies.

Possible risks

It is also possible that advances in brain mapping technology would generally accelerate AGI timelines, as opposed to specifically accelerating the safety research component. However, I think it is at least plausible that long-range molecular neuroanatomy specifically could differentially support looking at interactions between brain subsystems separated across long distances in the brain, which is relevant to understanding the brain’s own reward / “alignment” circuitry, versus just its cortical learning mechanisms. This might bias development of certain next-generation connectomics technologies towards helping with AI safety as opposed to capabilities research, given a background state of affairs in which we are already getting pretty good at mapping local cortical circuitry.

Other possible benefits

Another core benefit of improved connectomics would, if successful, be an improved ability to understand mechanisms for, and screen drugs against, neurological and psychiatric disorders that afflict more than one billion people worldwide and are currently intractable for drug development. See more here and here.

Ways to accelerate aging research

With Jose Luis Ricon.
Adaptations of some of these notes have now made it into a report here.
See also: our review on in-vivo pooled screening for aging.

Much of the global disease burden arises from age-related diseases. If we could slow or reverse the root mechanisms of the aging process itself, to extend healthspan, the benefits would be enormous (this includes relevance to infectious disease).

Why now is an exciting moment to take action in the aging field

Early advances (e.g., Kenyon et al) discovered genes, conserved in animals across the evolutionary tree, that regulate a balance between energy consumption and repair/preservation (e.g., autophagy, mitochondrial maintenance, DNA repair), and drove the field in the direction of metabolic perturbations such as caloric restriction and the biochemical pathways involved. Unfortunately, these pathways are ubiquitously involved in diverse essential functions and there is probably an upper limit to how far they can be safely tweaked without side effects, i.e., we may already be near the “Pareto front” for tweaking these aspects of metabolism.

The field has started to pick up pace in recent years, with a large gain in legitimacy owed to the formation of Calico, and with novel demonstrations of rejuvenation treatments in mammals that go beyond simply tweaking metabolism. Indeed, methods are being developed to target all 9 “Hallmarks of Aging”: stem cell exhaustion, cellular senescence, mitochondrial dysfunction, altered intercellular communication, genomic instability, telomere attrition, epigenetic alterations, loss of proteostasis and deregulated nutrient sensing. Commercially, the field is heating up, with many companies pursuing different hypotheses for interventions towards rejuvenation-based healthspan extension.

 A good summary of compelling opportunities arising in the past decade is provided by OpenPhil, which includes the following items, for which I here add some details:

  • Prevent the accumulation of epigenetic errors (e.g., changes in DNA methylation patterns) associated with aging, or restore more youthful epigenetic states in cells
    • Ocampo et al (2016) demonstrated in mice transient, cyclic induction, via a gene therapy, of some of the same cellular reprogramming factors that were used by the seminal induced pluripotent stem cell procedure of Yamanaka et al (2006), giving rise to “partial reprogramming” which appeared to restore cells to a more youthful epigenetic state
    • Recent work from Sinclair’s lab (2019) called “recovery of information via epigenetic reprogramming or REVIVER” demonstrated rejuvenation of the retina (a central nervous system tissue) in mice in a manner that appears causally dependent on the DNA demethylases Tet1 and Tet2, suggesting a possible causal role for observed epigenetic changes in aging generally
    • This has recently been studied somewhat more mechanistically in human cells: “Here we show that transient expression of nuclear reprogramming factors, mediated by expression of mRNAs, promotes a rapid and broad amelioration of cellular aging, including resetting of epigenetic clock, reduction of the inflammatory profile in chondrocytes, and restoration of youthful regenerative response to aged, human muscle stem cells, in each case without abolishing cellular identity.”
  • Solve the problem of senescent cell accumulation
    • Senolytic drugs (see the work of the Judy Campisi lab, and related startups such as Unity Biotechnology) show a median (as opposed to maximum) lifespan extension in mice on the order of 25%. These are being tested in humans on chronic kidney disease (ClinicalTrials.gov: NCT02848131) and osteoarthritis (ClinicalTrials.gov: NCT03513016).
    • Although senescence plays important beneficial roles in wound healing, pregnancy and other functions, and removing some senescent cell populations in adulthood can be harmful, there may be ways to clear or target aspects of the senescence associated secretory phenotype (SASP) without wholesale removal of all senescent cells, and/or to remove specific subsets of senescent cells transiently; e.g., there is also some basic research work on senomodulators and senostatics as an alternative to senolytics.
  • Bloodborne factors for reversing stem cell exhaustion and potentially ameliorating diverse aspects of aging
    • The addition of youthful bloodborne factors and/or dilution of age-associated bloodborne factors can appear to re-activate aged stem cell populations, leading to increased neurogenesis, improvement of muscle function and many other improved properties
    • Potentially relevant old-blood factors include: eotaxin, β2-microglobulin, TGF-beta, interferon, VCAM1 (which may mediate aberrant immune cell crossing of blood brain barrier with age or increased aberrant transmission of inflammatory molecules across the BBB)
    • Potentially relevant young-blood factors include: GDF11 (questionable), TIMP2, RANKL (Receptor activator of nuclear factor kappa-B ligand), growth hormone, IGF-1
    • Recently, Irina Conboy’s lab at Berkeley published a study in mice showing that simple dilution of old blood plasma, via replacement with a mixture of saline and albumin (so-called apheresis), could produce rejuvenative effects on multiple tissues. This is an already-FDA-approved procedure. If it holds up, it is revolutionary.

From a review of emerging rejuvenation strategies from Anne Brunet’s lab, we have a summary figure:

Reproduced from: Mahmoudi S, Xu L, Brunet A. Turning back time with emerging rejuvenation strategies. Nature cell biology. 2019 Jan;21(1):32-43.

In addition, I would add a few other directions:

  • Thymic regeneration: the TRIIM (Thymus Regeneration, Immunorestoration, and Insulin Mitigation) study resulted in a variety of beneficial biomarker indicators for epigenetic age reversal and immune function restoration. Greg Fahy gave an excellent talk on this. Combinations of thymus transplants and hematopoietic stem cell transplants can yield profound system-wide effects. (Another company has recently emerged focusing on thymic restoration via FOXN1.)
  • Advances in understanding immunosenescence generally, and its coupling to other aspects of aging: e.g., CD38 on the surface of monocytes may be responsible for aberrantly clearing NAD+, which then couples to more traditionally understood metabolic changes in aging — see this excellent summary of the field
  • Brain-based neuroendocrine control: IKKβ in microglial cells of the medial basal hypothalamus seems to control multiple aspects of aging, and aging genes FOXO1 and DAF-16 seem to be at least in part under the control of neural excitation
  • Combinatorial gene therapies: We will discuss the need for this extensively below. In at least one paper, the authors focused on non-cell-autonomous genes that could drive systemic changes (fibroblast growth factor 21 [FGF21], αKlotho, and a soluble form of mouse transforming growth factor-β receptor 2 [sTGFβR2]), showing they could ameliorate obesity, type II diabetes, heart failure, and renal failure simultaneously (the effect seems mostly due to FGF21 alone)
  • Enhancing mitochondrial function: activating the expression of peroxisome proliferator activated receptor gamma coactivator-1α (PGC-1α) and mitochondrial transcription factor A (TFAM) to enhance mitochondrial biogenesis and quality control. T cells deficient in TFAM induce a multi-systemic aging phenotype in mice.
  • Novel metabolic targets: e.g., J147, targeting ATP synthase, may not be redundant with mTOR inhibition and thus could be used combinatorially: “J147 reduced cognitive deficits in old SAMP8 mice, while restoring multiple molecular markers associated with human AD, vascular pathology, impaired synaptic function, and inflammation to those approaching the young phenotype”.

“Epigenetic clock” biomarkers, reviewed here, and recent proteomic clocks, are another key advance from the past few years. In theory, clocks like these could serve as surrogate endpoints for trials, greatly accelerating clinical studies, but there are issues to be solved (see below).
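For concreteness, here is a minimal sketch (my own illustration, using scikit-learn on random placeholder data rather than real methylation measurements) of how such a clock is typically built: an elastic-net regression from CpG methylation values to chronological age, with the residual often interpreted as “age acceleration”.

```python
# Minimal sketch of how an "epigenetic clock" is typically built: an elastic-net
# regression from CpG methylation beta values to chronological age.
# The data here are random placeholders, not real methylation measurements.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_cpgs = 400, 5_000                    # hypothetical cohort and array sizes
X = rng.uniform(0, 1, size=(n_samples, n_cpgs))   # stand-in for beta values in [0, 1]
age = rng.uniform(20, 90, size=n_samples)         # stand-in for chronological ages

X_train, X_test, y_train, y_test = train_test_split(X, age, random_state=0)

# Elastic net selects a sparse set of age-informative CpGs
# (Horvath-style clocks use essentially this model class).
clock = ElasticNetCV(l1_ratio=0.5, cv=5, n_jobs=-1).fit(X_train, y_train)

# "Age acceleration" = predicted (biological) age minus chronological age.
age_acceleration = clock.predict(X_test) - y_test
print(f"CpGs retained: {np.count_nonzero(clock.coef_)}, "
      f"median |error|: {np.median(np.abs(age_acceleration)):.1f} years")
```

On random data the fit is of course meaningless; the point is only the model class and the “predicted minus chronological age” readout used throughout the clock literature.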

Major problems holding back the field today

Still, there are a set of related, self-reinforcing factors that likely dramatically slow progress in the field:

  1. Studies are often bespoke and low-N, focus on single hypotheses, and only measure a small subset of the phenotype
    • I believe that the aging field suffers from fragmented incentives, with many small academic groups competing with one another while needing to differentiate themselves individually — this limits the scope for replication studies, combination studies and systemic convergence
    • In the absence of robust pharmaceutical industry interest in supporting the entire pipeline for anti-aging, including at early stages, the more ambitious studies (e.g., bloodborne factors or blood plasma dilution, epigenetic reprogramming) are being done by academic labs with limited resources, and are therefore carried out by postdocs with incentives focused around academic credit-assignment, i.e., limited incentives for large-scale system-building beyond publishing individual papers.
      • For example, the potentially revolutionary recent Conboy lab result that an already-FDA-approved blood plasma dilution procedure ameliorates multi-systemic aging in mice used only N=4 mice for most of its measurements. For such an important-if-true result, this seems absurdly under-powered (see the rough power calculation after this list)! Indeed, the authors apparently replaced the entire first figure of the paper with a post-hoc justification as to why N=4 would give sufficient statistical power, probably in response to critical peer reviewers, instead of simply running more mice. This smells to me like “problematic local academic incentives”, and possibly symptomatic of under-funding as well. See here for more discussion.
      • Likewise, the finding of GDF11 as a bloodborne rejuvenative factor has had trouble with replication. That’s the nature of science but it seems prevalent in the aging field. 
      • Another recent result on bloodborne factors was published with fanfare but had N=6 rats, did not list what their factors actually were, did not list complete methods, and was led by a small unknown company in India — see here for a discussion of some of the problematic aspects in the publication of what otherwise would be a clear win. This study relied heavily on epigenetic clocks. 
      • The TRIIM (Thymus Regeneration, Immunorestoration, and Insulin Mitigation) study result, which looks preliminarily very compelling, was done with only N=9 human subjects, and was conducted by brilliant but “off-the-beaten path researchers”, working at a small startup (Intervene Immune, Inc) taking donations via its website. This study also relied heavily on epigenetic clocks to argue its anti-aging effect.
      • The NIH Interventions Testing Program (ITP) at the National Institute on Aging (NIA) specifically exists to replicate and independently test aging drugs, but to my knowledge it has not extended beyond studies of single small molecules or addressed the most cutting-edge rejuvenation therapies, let alone combinations thereof.
  2. Tools and tool access/utilization are still limited compared to what they could be
    • As in many biological fields, key bottlenecks could be accelerated via targeted engineering advances, but tools companies and tool development projects generally remain under-funded relative to potential impact
      • For example, for bloodborne factors, we might benefit from a tool that can very precisely add or remove many specific user-defined factors from circulation, but the field has primarily focused on much simpler interventions, beginning with simply suturing young and old animals together (which raised the confound that the young animal’s organs can filter the old blood, not just provide circulating factors)
      • For epigenetic clocks, the technology is still based on methylation arrays, rather than next-generation sequencing based assays which could be multiplexed/pooled to optimize sequencing cost and could then be made much cheaper and thus applied at much larger scale (see below for details)
      • For proteomic measurements, these mostly still use defined sets of a few thousand targets, e.g., assayed using SomaLogic aptamer arrays, rather than next-generation unbiased technologies like improved mass spec proteomics (Parag Mallick et al), single cell proteomics (see recent major advances from e.g. Nikolai Slavov’s lab at Northeastern, which currently operate at very small scale and have not been industrialized at all), let alone emerging single-molecule protein sequencing (see many new companies in this area such as QuantumSi and Encodia, and recent work from Edward Marcotte’s lab, now part of the company Erisyon). Aptamer array based methods may be missing, for example, small peptides like the promising mitochondrial-derived peptide humanin.
      • For epigenetic measurements and proteomic measurements, these are mostly not done at single-cell level, limiting our understanding of the specific cell types that contribute causally — for instance, if aging is heavily due to exhaustion of specific adult stem cell populations, we might find that these stem cell populations are the primary locus of the epigenetic changes, but discovering this requires single cell measurement (or else cumbersome cell type specific bulk isolation procedures)
      • Epigenetic measurements mostly haven’t taken into account the latest technologies for measuring chromatin accessibility directly, such as ATAC-seq (short for Assay for Transposase-Accessible Chromatin using sequencing) or combined single-cell ATAC-seq and RNAseq
      • Measurements of epigenetic effects do not yet include the most advanced in-situ epigenomics measurement technologies that operate by imaging in the context of intact tissues, e.g., FISSEQ, MERFISH, ExSEQ
  3. There is limited infrastructure and incentive for combinatorial studies
    • There appears to be a market failure around combinatorial testing of interventions
    • Combinatorial interventions seem warranted for several reasons, including:
      • aging may be fundamentally multi-factorial 
      • there may be synergies between mechanisms that can create self-reinforcing positive feedback loops in regeneration 
      • using many mechanisms in combination may allow each to be used at lower dosage and thus avoid side effects by avoiding driving any one pathway too strongly 
      • there may be scientific value in understanding the combinations that turn out to be useful, e.g., for identifying overlapping underlying factors and common effects behind diverse superficially-different sets of interventions 
    • For the most part, academic labs are not incentivized to systematically pursue combination therapies. This is because they focus on proving out the specific hypotheses that each lab specializes in and depends on for its reputation, often focusing on telling simple stories about mechanisms. For example, while there are labs that specialize in senescent cell clearance or bloodborne factors, it is hard to differentiate oneself academically while having a lab that combines both or is hypothesis agnostic. (Technology/tools focused labs may be better for this but then may lack the long-term follow-through to make the scientific discoveries in this field or to operate large in-vivo studies.)
      • It is also simply a lot of work to build up enough expertise in multiple domains to properly combine interventions and measurements, and this may put combination studies beyond the scale of most academic projects, even if well funded, which involve only 1-3 first-author grad students and postdocs playing primary driving roles due to the needs of academic credit assignment.
      • Tools in the field are also not optimized for combination studies, e.g., one may have one transgenic mouse model for dealing with senescent cell experiments, and another for epigenetic reprogramming experiments, and these may not be compatible or easy to fuse. (Using appropriate viral methods could overcome this but depends on good delivery vectors, and so forth.)
      • This is not to mention the fact that combination studies will inherently require a larger number of subjects N to gather appropriate statistics, which as mentioned above seems to be hard for academic labs to achieve.
    • Meanwhile, biotech and pharma are also not incentivized to do combinatorial studies
      • This is because pharma companies make money off of readily-translatable drugs, mostly small molecule drugs. Combinations would be harder to get approval for, especially if they involve new modalities like epigenetic reprogramming that may require more far-off inducible gene therapies, or unusual methods like blood dilution or combinations of multiple antibodies to block multiple age-associated targets. To make a simple and robust business case with a well-defined risk calculation, a pharma company wants simple single-target small molecule drugs, and that is simply not what the aging field requires to make progress, at its current level of development and possibly ever.
    • This leaves a gap where, to my knowledge, no organizations are seriously pursuing large-scale combination studies, neither in humans nor in animals, and neither for advanced interventions like epigenetic reprogramming nor for simple lifestyle interventions, despite proposals.
  4. Biomarkers are not yet established to be causal, as opposed to correlative
    • Epigenetic aging clocks and/or proteomic aging clocks could, in principle, serve as primary endpoints for pre-clinical or clinical studies. Rather than waiting years to see extension of a mouse or human’s health-span, one could, in a matter of weeks or months, measure changes to epigenetic or proteomic clocks that are predictive of their ultimate healthspan.
    • Yet the field of aging biomarkers still has major problems that limit this possibility at a technical level.
    • Specifically, it is not yet known to what degree epigenetic or proteomic aging signatures are causal of aging, versus correlative. (There are some statistical reasons to worry they may not be causal, although probably this particular reasoning applies mostly to first gen studies that relied on patchy cross-sectional datasets like the Horvath multi-tissue clock; if one uses cohorts then one gets better predictors, which is part of why GrimAge and PhenoAge work so well for mortality.) This poses several problems:
      • If they are only correlative, then there may be ways that putative therapies could “turn back the clocks”, but without affecting aging itself, i.e., they would only treat surface level indicators and thus be misleading
      • In the worst case, the epigenetic and proteomic changes could represent compensations in the body acting against aging or to forestall aging. In that case, turning back the clocks might actually accelerate aging! 
    • Additionally, for epigenetic reprogramming therapies that probably operate at least in part through DNA methylation changes (and thus are visible in epigenetic clocks), the full set of damage types they reverse is not yet established. To my knowledge (and also Laura Deming’s), for example, nobody has measured whether the epigenetic reprogramming procedures used by Ocampo et al or the Sinclair lab will reduce the age-dependent buildup of lipofuscin aggregates in cells (whereas rapamycin seems to in some studies, as well as centrophenoxine). Likewise it would be nice to look at the shape/integrity of the nuclear lamina, and whether epigenetic changes repair this as well. There are probably many other examples of this sort.
    • Finally, controversy about epigenetic clocks limits their adoption, so many studies coming out in different parts of the aging field don’t measure them even though it would be easy to; e.g., the recent Conboy blood results don’t include an epigenetic clock measurement
    • See here for a table of current putative limitations of epigenetic clocks.
  5. Aging itself is not yet established as a disease or as an endpoint for clinical trials, and this may exacerbate the systemic market failures that plague preventative medicine
    • Since underlying aging factors likely cause many diseases in a multi-systemic fashion (e.g., systemic inflammation and circulatory damage may mediate both Alzheimer’s susceptibility and many other problems, e.g., cardiovascular, renal), and since these factors would best be dealt with preventatively, it would be ideal if aging itself could be classified as a disease, and even more ideal if changes in a predictive biomarker of aging could be used as an endpoint for trials
    • Many aging researchers made this same point recently in an essay in Science.
    • Yet the government is not moving on this issue, to my knowledge.
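As flagged in item 1 above (the N=4 blood-dilution example), here is the rough power calculation behind the “under-powered” complaint, framed as a simple two-sample t-test in statsmodels; the framing and numbers are illustrative, not a reanalysis of the actual study.

```python
# Rough power check: what effect size can a two-group mouse comparison detect?
# Illustrative only; assumes a simple two-sample t-test framing.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for n_per_group in (4, 10, 20):
    # Smallest standardized effect size (Cohen's d) detectable with 80% power
    # at alpha = 0.05 (two-sided), with n_per_group animals in each group.
    d = power_analysis.solve_power(effect_size=None, nobs1=n_per_group,
                                   alpha=0.05, power=0.8, ratio=1.0)
    print(f"N = {n_per_group:2d} per group -> minimum detectable d ≈ {d:.2f}")

# With N = 4 per group, only very large effects (d on the order of 2 or more)
# reach 80% power, which is why modest multi-tissue effects are easy to miss
# at that sample size.
```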

How to accelerate progress

I see four categories of systemic intervention here, all of which could be done under the umbrella of a coordinated ARPA-style initiative, at a scale of multiple tens of millions of dollars, that would cut across multiple sectors:

  1. Build and/or fund non-academic Focused Research Organizations that would carry out large-N, combinatorial screens while assaying as many phenotypic features as possible
    • We can choose to view the problem through an engineering lens, as a “search” and “measurement” problem. Viewed through this lens, we should build a dedicated, appropriately-scaled, incentive-aligned, well-managed organization to carry out the required scalable assays and search procedures, or potentially create the impetus, via milestone-based funding, for an existing organization to pivot to focus heavily on this (possibilities could include aging gene therapy startups like Rejuvenate Bio or Gordian Biotechnologies, a biology experiment platform company like Strateos or Vium, an organization like the Buck Institute partially re-structuring or expanding to focus on this, or some partnership of such; there also appears to be at least one aging-focused CRO).
    • The purpose of such an organization could be: Searching the combinatorial space of aging interventions, while assaying the combinatorial space of aging phenotypes
    • While many individual aging mechanisms and their associated biology control knobs are being discovered in a piecemeal fashion, we do not yet have a way to comprehensively rejuvenate mammals or extend their lifespans.  Achieving this likely requires not just turning one knob at a time but turning multiple independent and interacting knobs in the correct pattern. This leads to a large combinatorial search space, but to our knowledge no organization has yet tried to search it except on the smallest of scales, probably due to the mismatch of this problem with both short term corporate (low-risk single-drug-at-a-time development) and individual-academic-lab (competitive differentiation and credibility) incentives.

More on the combinatorial intervention aspect:

  • Rather than immediately shooting for a single mechanism (what an academic would mostly be incentivized to do) or a single blockbuster drug molecule (what a company would mostly be incentivized to do), we need to first find some set of sufficient conditions for modulating healthspan by large “leaps” in the first place
  • We need a systematic project to apply factorial design to screen combination therapies across many mechanistic classes, in order to get a hit on at least one combined set of perturbations to mammalian organismal biology that can boost healthspan and regenerate many tissues without side effects (a toy sketch of the factorial framing follows this list).
  • Even in a mouse, I think one could argue that this would revolutionize prospects in aging research and allow efforts to then be focused on more promising pathways — we would know at least one approach that works towards the end goal of broad healthspan extension, and the question would be translating it to a viable therapy, not whether it is fundamentally possible to slow, stop or reverse aging in a complex long-lived mammal. 
  • The “search process” could focus on simple endpoints like lifespan and serum-derived biomarkers, but alternatively, and preferably, it could comprehensively measure multi-system impacts (e.g., on all 9 “hallmarks of aging”) of interventions using multi-omics assays, and dynamically adapt the interventions accordingly, while building up a digital mapping between a multi-dimensional space of interventions and a multi-dimensional space of phenotypic effects.
    • On the assay side, this could utilize, and in the process push forward, new technologies for scanning 3D biomolecular matter in a very general and precise way (e.g., combinations of FISSEQ, MERFISH, Expansion Microscopy, multiplexed antibody staining with DNA or isotope barcoded antibodies), where one can localize and identify many different molecules per specimen at high 3D spatial resolution and over large volumes of tissue, e.g., mapping the locations and identities of hundreds to thousands of disease-relevant proteins and RNAs throughout intact specimens at nanoscale resolution. The core chemistries and imaging technologies for this exist, but they need to be integrated and brought to scale. 
      • At this level of detail, we could ask questions like: How does the loss of synapse integrity in the aging brain, say, relate to the disruption of gene expression patterns in particular parts of the tissue, subcellular damage such as holes in the nuclear membranes inside cells or buildup of lipofuscin “junk”, changes to blood brain barrier integrity, or altered surveillance by the immune system? Rather than measuring these sparsely and separately, we could measure them comprehensively and integratively inside the same intact specimen.
  • A more advanced version of the combinatorial intervention approach might use highly multiplexed CRISPR activation/inactivation of genes using libraries of CRISPR gRNAs and transgenic animals that already express the CRISPR machinery itself. Already, people are doing dozens of CRISPR sites per cell in other contexts, and reading out results based on whole-transcriptome RNAseq. This could be done first at a cellular level in a dish, and then at an organismal level using appropriate systemic gene delivery vectors, e.g., optimized blood brain barrier crossing AAVs or similar. See our review on in-vivo pooled screening for aging.
  • Note that the company Rejuvenate Bio was founded on the basis of this PNAS paper on a combinatorial gene therapy approach in animals (3 non-cell-autonomous genes with individually known effects), but it appears that its go-to-market strategy is based on a fixed combination of just two of those gene therapy targets in dogs with a congenital cardiovascular disease. It is unclear to me if it is planning to do large scale screening of novel combinations or to target aging itself in the near term. Startups often need to take the shortest path to revenue.
  • Outside of animal models, a program could also strongly pursue combinations of, e.g., already-FDA-approved compounds in human studies. Intervene Immune’s and Steve Horvath’s work with the TRIIM study is suggestive (but could be massively scaled, expanded and better outfitted), as is the TAME metformin trial. This would synergize nicely with a focused effort to improve epigenetic clocks and other aging biomarkers, e.g., the Immune Risk Profile. Indeed, arguably an entire program could focus on human studies of systemic anti-aging / rejuvenation interventions that would fall short of a completely general approach to aging, e.g., combination therapies for immune system rejuvenation in the elderly.
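As referenced above, a toy sketch of what a full-factorial screen over candidate interventions looks like; the intervention names are illustrative placeholders, not a proposed study design, and a real program would likely use fractional-factorial or adaptive designs rather than brute-force enumeration.

```python
# Toy sketch of the full-factorial framing: enumerate every on/off combination
# of a set of candidate interventions. Intervention names are placeholders.
from itertools import product

interventions = ["senolytic", "partial_reprogramming", "plasma_dilution", "rapalog"]

# Full 2^k factorial: one arm per on/off combination of the k interventions.
arms = [dict(zip(interventions, combo))
        for combo in product([False, True], repeat=len(interventions))]

print(f"{len(arms)} arms for a full factorial over {len(interventions)} interventions")
print(arms[:2])

# 4 interventions -> 16 arms; 8 interventions -> 256 arms. This combinatorial
# growth is why fractional or adaptive designs, plus large-N shared
# infrastructure, quickly become necessary.
```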
  2. Fund an ARPA-style initiative to develop new tools for measurement and highly specific perturbation of aging phenotypes
    • One notable low-hanging-fruit opportunity here is to lower the cost of genome-wide epigenetic clock measurements by 10x, while increasing their depth of mechanistic access
      • Epigenetic clocks currently use methylation arrays, costing around $400 per sample. This limits the size of study that can be done: e.g., a study of 100k-1M people, or of 1-10k people or animals with ~30 tissue samples and 3 time points each, would be cost-prohibitive even for the largest organizations.
      • The emergence of next-generation sequencing based methods for DNA methylation measurement (e.g., TAPS) would allow reduction to $10 per sample, via DNA-sequence-tag-based multiplexing of many samples into a single sequencing run, a 40x cost reduction
        • Note that this could have major spinoff application to areas like liquid biopsy for cancer detection, some of which already rely on circulating tumor DNA (ctDNA) methylation, and wherein, because cancer incidence is low, very large N studies are needed to test for the statistical significance of an early detection method
        • New epigenetic and proteomic profiling tools would also have major application to patient stratification or novel primary endpoints for clinical trials of diverse diseases
      • One could also fund this to be applied at the single-cell level (definitely possible — see these papers) and in combination with recent methods for single-cell chromatin accessibility, single-cell RNAseq, single-cell chromatin conformation capture, or scRNAseq + CyTOF to measure both RNA and proteins at single-cell resolution. This would be likely to get to a far more mechanistic level of description than bulk methylation based epigenetic clocks. (An interesting form of clock indeed would be one that measured mismatches between DNA methylation, RNA and protein levels.)
    • Next generation clocks based on proteomics, mtDNA, or exosomes could be powerful
      • One could also invest in next generation proteomic profiling that goes beyond the SomaLogic targeted aptamer panels (on the order of a couple of thousand proteins assayed, out of 23k protein-coding genes in the genome not to mention many variants) to profile many more proteins or in a more unbiased way, and/or catalyze the use of such emerging technologies by the aging field by stimulating/funding collaborations with emerging companies in the proteomics field
      • Mitochondrial DNA profiling could be relevant as well, e.g.,  single cell mitochondrial genome sequencing
      • We could fund entirely new, and possibly even more powerful, methods of profiling aging. For example, exosomal vesicles appear to provide a noninvasive transcriptome measure that could be applied in humans with a blood draw, and methods are emerging for cell-type-specific exosome extraction, which could in theory give us a tissue-type or cell-type specific epigenetic clock from sequencing of exosomal vesicles sampled from circulation.
    • We could develop improved and more accessible ways to measure key aging-related metabolites like NAD (i.e., specific small molecules)
    • We could stimulate the maturation and application to aging of in-situ spatially resolved epigenomics methods
    • On the precise perturbation side, we could look into the state of the technology for multiplex targeted removal of specific factors from blood. This could also use a combinatorial approach, e.g., with combinations of inactivating aptamers or antibodies, either delivered to the animal or used as capture arrays for a blood filtration device. This could allow causal examination of the significance of potential detrimental factors found in aged blood plasma. We could also invest in other improved tools for tracking the causal influence of specific blood factors on aging in downstream tissues.
    • We could also ensure that appropriate in-vivo multiplex CRISPR activation and inactivation technologies are available to meet the needs of the aging field. 
  3. Fund an ARPA-style initiative to create validated, causal epigenetic, proteomic, scRNAseq, exosomal or combined “clocks” and apply them to large existing banks of samples
    • For epigenetic clocks, this could include:
      • Single-cell studies to determine whether epigenetic clocks (in various tissues) reflect large changes to a sub-population of cells (e.g., adult stem cells), versus small changes to many types of cells indiscriminately, versus changes in the cell type composition of the tissue without changes to any individual cell type (a toy illustration of the composition scenario follows this list) — this could help to establish the causal mechanisms, if any. From a recent paper: “DNA methylation-based age predictors are built with data from bulk tissues that represent a mix of different cell types. It is plausible that small changes in cell composition during ageing could affect the epigenetic age of a tissue. Stem cells, which generally decrease in number during ageing, might be one of these cell populations. Consequently, the epigenetic clock might be a measure of the different proportions of stem and differentiated cells in a tissue.” If true, this could provide an impetus to target epigenetic reprogramming efforts specifically toward ameliorating stem cell exhaustion.
      • More generally, we can try to measure the mechanisms driving epigenetic clock “progression” over aging
      • Decoupling damage from aging: e.g., understanding what is unique about aging as opposed to other kinds of damage to cells, and what can be uniquely tracked by next-gen “clocks”. Cells can be exposed to various specific kinds of damage and repair to see the impact on the clocks, as opposed to the impact of aging as such, and clocks can be “focused” to specifically measure aging itself.
      • Fund measurements to determine whether epigenetic reprogramming therapies (e.g., pulsed OSK(M) as in Ocampo et al 2016) also reverse other types of cellular damage beyond epigenetic states per se, e.g., buildup of lipofuscin, or decreased proteostasis, and determine whether improved biomarkers can be devised that track all of these aspects as opposed to merely epigenetics
      • Applying the resulting new, richer, cheaper biomarkers to existing bio-banks, including those that tracked people over long timescales and recorded their ages and causes of mortality, and releasing the resulting data in a machine-learning-accessible way such that anyone can train new predictive models on these data
        • Existing cohorts (e.g., mentioned in Wyss-Coray papers) include the INTERVAL study, the SG90 Longevity Cohort study, the Einstein Aging Study cohort, Lothian Birth Cohorts (LBCs) of 1921 and 1936, the Whitehall study, Baltimore Longitudinal Study of Aging (BLSA), and probably many others.
        • We would want to make all the data easily accessible in a universal database so lots of people with expertise in machine learning around the world could extract new kinds of predictors from it
      • Developing proteomic or epigenetic profiles that are targeted to more specific known effects, as was recently done for the senescence-associated secretory phenotype (SASP), but much more broadly and aggressively. Other profiles could include a set of human homologues of naked mole rat proteins thought to be involved in their aging resistance.
  4. Educate and incentivize the government to take appropriate actions
    • Lobby the FDA to classify organismal senescence itself as a disease
      • Perhaps this needs to be done by first educating members of Congress, Tom Kalil suggests
      • Recent though partial progress has occurred with the TAME metformin trial, which had languished for years but recently appears to have accelerated due to a $40M infusion of private funding: “Instead of following a traditional structure given to FDA approved trials (that look for a single disease endpoint) TAME has a composite primary endpoint – of stroke, heart failure, dementia, myocardial infarction, cancer, and death. Rather than attempting to cure one endpoint, it will look to delay the onset of any endpoint, extending the years in which subjects remain in good health – their healthspan.” It may be possible to replicate this trial design for future studies. 
    • Push for more aging-related biomarker endpoints in clinical trials of all sorts of drugs
    • Lobby for expansion of the NIH Interventions Testing Program (ITP) for aging drugs, or perhaps for NIA to fund external such programs — including going to more complex treatments beyond small molecules, and replicating key aging studies at large N
    • Create funding-tied incentives for larger-N and replicated studies in the aging field, as well as for certain phenotypic measurements to become common and released openly for all funded studies
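Finally, the toy illustration referenced in category 3 above: a bulk methylation “clock” readout can drift with age purely because tissue composition shifts, with no per-cell epigenetic change at all. All numbers below are made up for illustration.

```python
# Toy illustration: bulk methylation can track cell-type composition rather than
# per-cell epigenetic change. All numbers are made-up placeholders.
import numpy as np

m_stem, m_diff = 0.30, 0.60   # fixed per-cell-type methylation at one CpG
ages = np.array([20, 40, 60, 80])
stem_fraction = np.array([0.20, 0.14, 0.08, 0.04])   # stem cell pool depletes with age

bulk = stem_fraction * m_stem + (1 - stem_fraction) * m_diff
for age, f, b in zip(ages, stem_fraction, bulk):
    print(f"age {age}: stem fraction {f:.2f} -> bulk methylation {b:.3f}")

# The bulk readout climbs toward m_diff as stem cells are lost, so a clock trained
# on bulk data could partly be tracking composition rather than per-cell change;
# single-cell measurements are what distinguish the two scenarios.
```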

Acknowledgements: 

Thanks to Sarah Constantin, Laura Deming and Sam Rodriques for helpful discussions prior to the writing of this document.

Notes: Some climate tech companies & projects

(as a non-expert, circa early 2021)

See also my earlier notes on climate:
https://johncarlosbaez.wordpress.com/2019/10/05/climate-technology-primer-part-1/
https://johncarlosbaez.wordpress.com/2019/10/13/climate-technology-primer-part-2/
https://longitudinal.blog/co2-series-part-3-other-interventions/

Cement/concrete —
https://www.engine.xyz/founders/sublime-systems/
https://www.solidiatech.com/

Batteries —
http://www.a123systems.com/
https://ionicmaterials.com/
https://formenergy.com/technology/battery-technology/

Cheap green hydrogen production by electrolysis, useful for grid scale storage —
https://www.crunchbase.com/organization/origen-hydrogen
https://hydroxholdings.co.za/technology/

Thermal energy storage, for grid-scale storage —
https://www.antoraenergy.com/technology
See also:
https://escholarship.org/content/qt2vz9b61f/qt2vz9b61f.pdf
https://arxiv.org/abs/2106.07624
https://www.science.org/doi/abs/10.1126/science.1218761

Food with less animals and ultimately less agriculture overall —
https://impossiblefoods.com
https://www.calysta.com/ (animal feed from methane)
https://www.activate.org/circe-bioscience (food from water and CO2)
See: https://www.nature.com/articles/s41587-020-0485-4
https://www.washingtonpost.com/archive/lifestyle/food/1984/05/27/can-food-be-made-from-coal/d80567ac-c656-4e0b-9f54-d505bd6d261a/
https://www.pnas.org/content/117/32/19131

Fusion —
Z-pinch and other magneto-inertial methods
https://www.zapenergyinc.com/
https://www.helionenergy.com/
Commonwealth Fusion and Tokamak Energy also

Compact and safer nuclear fission —
https://oklo.com/ (minimalist approach) https://twitter.com/oklo?lang=en
https://www.nuscalepower.com/
https://www.terrapower.com/
https://thorconpower.com/

Geothermal anywhere drilling —
https://www.engine.xyz/founders/quaise/
https://www.texasgeo.org/
https://pangea.stanford.edu/ERE/db/GeoConf/papers/SGW/2021/Malek.pdf (analysis of closed loop)

Solid state heat to electricity conversion —
https://modernelectron.com/

Electrofuels —
https://infiniumco.com/
https://carbonengineering.com/

Other carbon utilization —
https://www.twelve.co/
https://www.lanzatech.com/

Improved air conditioning and refrigeration with low global-warming potential —
https://www.gradientcomfort.com/
https://www.rebound-tech.com/

Substituting for nitrogen fertilizers —
https://www.pivotbio.com/

Marine cloud brightening —
https://www.nature.com/articles/d41586-021-02290-3 (Australia)
https://www.silverlining.ngo/research-efforts (analysis)

Soil carbon measurement —
https://arpa-e.energy.gov/technologies/programs/roots

Electro-swing direct air capture —
https://www.crunchbase.com/organization/verdox

Far out ideas to locally divert hurricanes —
https://viento.ai/

Philanthropic/patient capital for climate ventures with PRIs —
https://primecoalition.org/what-is-prime/

Novel approaches to enhance natural ocean carbon drawdown —
https://faculty-directory.dartmouth.edu/mukul-sharma (clay minerals)
https://www.frontiersin.org/articles/10.3389/fmars.2019.00022/full

Advance market commitments for negative emissions and other problems —
https://stripe.com/sessions/2021/building-carbon-removal
https://www.gavi.org/vaccineswork/what-advance-market-commitment-and-how-could-it-help-beat-covid-19
https://www.nuclearinnovationalliance.org/search-spacex-nuclear-energy

Kelp farming, ocean utilization, biochar and other multidisciplinary problems —
https://www.climatefoundation.org/

Managing wildfires —
https://nintil.com/managing-wildfires

Crops —
https://x.company/projects/mineral/

Desalination —
https://www.tridentdesal.com/
https://www.energy.gov/eere/solar/american-made-challenges-solar-desalination-prize

Getting more science-based companies to happen in this space —
https://www.activate.org/mission

Enhanced weathering for negative emissions —
https://www.projectvesta.org/

AI and data —
https://www.annualreviews.org/doi/abs/10.1146/annurev-nucl-101918-023708
https://www.climatechange.ai/

Venture portfolios —
https://www.breakthroughenergy.org/
https://lowercarboncapital.com/
https://www.engine.xyz/

ARPA-E

“Reframing Superintelligence” is a must-read

Eric Drexler of Oxford’s Future of Humanity Institute has published a new book called Reframing Superintelligence: Comprehensive AI Services as General Intelligence.

The book’s basic thesis is that the discussion around super-intelligence to date has suffered from implicit and poorly justified background assumptions — particularly the idea that advanced AGI systems would necessarily take the form of “rational utility-directed agents”.

Drexler argues a number of key points, including:

–One can, in principle, achieve super-intelligent and highly general and programmable sets of services useful to humans, which embody broad knowledge about the real world, without creating such rational utility-directed agents, thus sidestepping the problem of “agent alignment” and leaving humans in full control.

–A sketch of architectural principles for how to do so, through an abstract systems model Drexler terms Comprehensive AI Services (CAIS)

–The need to clarify key distinctions between concepts, such as reinforcement learning based training versus the construction of “reward seeking” agents, which are sometimes conflated

–A set of safety-related concerns that Drexler claims are pressing and do need attention, within the proposed framework

In addition, Drexler provides a fun conceptual synthesis of modern AI, including a chapter on “How do neural and symbolic technologies mesh?” which touches on many of the concerns raised by Gary Marcus in his essay “Why robot brains need symbols”.

I will be keen to see whether any coherent and lucid counter-arguments emerge.

Update: a couple of nice blog posts have appeared on this topic from Richard Ngo and from Rohin Shah.

On whole-mammalian-brain connectomics

In response to David Markowitz’s questions on Twitter:
https://twitter.com/neurowitz/status/1080131620361912320

1. What are current obstacles to generating a connectomic map of a whole mammalian brain at nanometer scale?
2. What impt questions could we answer with n=1? n=2?
3. What new analysis capabilities would be needed to make sense of whole brain data?

Responses:

David himself knows much or all of the below, as he supported a lot of the relevant work through the IARPA MICRONS program (thank you), and I have discussed these ideas with many key people for some years now, but I will go into some detail here for those not following this area closely and for the purposes of concreteness.

1) What are current obstacles to generating a connectomic map of a whole mammalian brain at nanometer scale?

I believe the main obstacles are in “getting organized” for a project of this scale, not a fundamental technical limitation, although it would make sense to start such a project in a few years after the enabling chemistry and genetic advances have a bit more time to mature and to be well tested at much smaller scales.

a) Technical approaches

As far as technical obstacles, I will not address the case of electron microscopy where I am not an expert. I will also not address a pure DNA sequencing approach (as first laid out in Zador’s 2012 Sequencing the Connectome). Instead, I will focus solely on optical in-situ approaches (which are evolutionary descendants of BrainBow and Sequencing the Connectome approaches):

Note: By “at nanometer scale”, I assume you are not insisting upon literally all voxels being few nanometer cubes, but instead that this is a functional requirement, e.g., ability to identify a large fraction of synapses and associate them with their parent cells, ability to stain for nanoscale structures such as gap junctions, and so forth. Otherwise an optical approach with say > 20 nm spatial resolution is ruled out by definition — but I think the spirit of the question is more functional.

There are multiple in-situ fluorescent neuronal barcoding technologies that, in conjunction with expansion microscopy (ExM), and with optimization and integration, will enable whole mammalian brain connectomics.

We are really talking about connectomics++: it could optionally include molecular annotations, such as in-situ transcriptome profiling of all the cells, as well as some multiplexed mapping of ion channel protein distributions and synaptic sub-types/protein compositions. This would be an added advantage of optical approaches, although some of these benefits could conceivably be incorporated in an electron microscopy approach through integration with methods like Array Tomography.

For the sake of illustration, the Rosetta Brain whitepaper laid out one potential approach, which uses targeted in-situ sequencing of Zador barcodes (similar to those used in MAPseq) both at cell somas/nuclei and on both sides of the synaptic cleft, where the barcodes would be localized by being dragged there via trafficking of RNA binding proteins fused to synaptic proteins. This would be a form of FISSEQ-based “synaptic BrainBow” (a concept first articulated by Yuriy Mishchenko) and would not require a direct physical linkage of the pre- and post-synaptic barcodes — see the whitepaper for explanations of what this means.
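A quick back-of-envelope (my own, not from the whitepaper) on why random nucleic-acid barcodes scale to whole-brain cell labeling; the barcode length and neuron count are rough assumptions.

```python
# Back-of-envelope: can random RNA barcodes uniquely label every neuron in a
# mouse brain? Assumes ~30-nt random barcodes and ~7e7 neurons (rough numbers).
barcode_length = 30
n_possible = 4 ** barcode_length        # ~1.2e18 distinct sequences
n_neurons = 7e7

# Birthday-problem estimate of the expected number of barcode collisions:
expected_collisions = n_neurons ** 2 / (2 * n_possible)
print(f"possible barcodes: {n_possible:.2e}")
print(f"expected collisions among {n_neurons:.0e} neurons: {expected_collisions:.4f}")
# ~0.002 expected collisions, i.e., with a well-mixed library, cells effectively
# all receive unique barcodes (real libraries have biases that reduce this margin).
```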

The early Rosetta Brain white-paper sketch proposed to do connectomics via this approach using a combination of

a) high-resolution optical microscopy,

b) maximally tight targeting of the RNA barcodes to the synapse, and

c) “restriction of the FISSEQ biochemistry to the synapse”, to prevent confusion of synaptic barcodes with those in passing fine axonal or dendritic processes.

This is now all made much easier with Expansion Microscopy, which has since been demonstrated at cortical column scale, although this was not yet the case back in 2014 when we were initially looking at this (update: expansion lattice light sheet microscopy is on the cover of the journal Science and looks really good).

(Because ExM was not around yet, circa 2014, we proposed complicated tissue thin-sectioning and structured illumination schemes to get the necessary resolution, as well as various other “molecular stratification” and super-resolution schemes, which are now unnecessary as ExM enables the requisite resolution using conventional microscopes in intact, transparent 3D tissue, requiring only many-micron-scale thick slicing.)

(This approach does rely quite heavily on synaptic targeting of the barcodes; whether the “restriction of FISSEQ biochemistry to the synapse” is required depends on the details of the barcode abundance and trafficking, as well as the exact spatial resolution used, and is beyond the scope of the discussion here.)

With a further boost in resolution, using higher levels of ExM expansion (e.g., iterated ExM can go above 20x linear expansion) in combination with a fluorescent membrane stain, or alternatively using generalized BrainBow-like multiplexed protein labeling approaches alone or in combination with Zador barcodes, the requirements for synaptic barcode targeting and for restriction of FISSEQ biochemistry to the synapse could likely be relaxed. Indeed, it may be possible to do without any preferential localization of barcodes at synapses in the first place, e.g., with membrane-localized barcodes, an idea which we computationally study here:
https://www.frontiersin.org/articles/10.3389/fncom.2017.00097/full

In the past few years, we have integrated the necessary FISSEQ, barcoding and expansion microscopy chemistries — see the last image in
https://spectrum.ieee.org/biomedical/imaging/ai-designers-find-inspiration-in-rat-brains
for a very early prototype example — and ongoing improvements are being made to the synaptic targeting of the RNA barcodes (which MAPseq already shows can traffic far down axons at some reasonable though not ideal efficiency), and to many other aspects of the chemistry.

Moreover, many other in-situ multiplexing and high-resolution intact-tissue imaging primitives have been demonstrated with ExM that would broadly enable this kind of program, with further major advances expected over the coming few years from a variety of groups.

At this point, I fully believe that ExM plus combinatorial molecular barcoding can, in the very near term, enable at minimum a full mammalian brain single cell resolution projection map with morphological & molecular annotations, and with sparse synaptic connectivity information — and that such an approach can, with optimization, likely be engineered to get a large fraction of all synapses (plus labeling gap junctions with appropriate antibodies or other tags).

This is not to downplay the amount of work still to be done, and the need for incremental validation and improvement of these techniques, which are still far less mature than electron microscopy as an actual connectomics method. But it is to say that many of the potential fundamental obstacles that could have turned out to stand in the way of an optical connectomics approach — e.g., if FISSEQ-like multiplexing technologies could not work in intact tissue, or if optical microscopy at appropriate spatial resolution in intact tissue was unavailable or would necessitate difficult ultra-thin sectioning, or if barcodes could not be expressed in high numbers or could not traffic down the axon — have instead turned out to be non-problems or at least tractable. So with a ton of work, I believe a bright future lies ahead for these methods, with the implication that whole-brain scale molecularly annotated connectomics is likely to become feasible within the planning horizon of many of the groups that care about advancing neuroscience.

b) Cost

In the Rosetta Brain whitepaper sketch, we estimated a cost of about $20M over about 3 years for this kind of RNA-barcoded synaptic BrainBow optical in-situ approach, for a whole mouse brain.

Although this is a very particular approach, and may not be the exact “right” one, going through this kind of cost calculation for a concrete example can still be useful to get an order-of-magnitude sense of what is involved in an optical in-situ approach:

If we wish to image 1 mm^3 of tissue at 4.5x expansion with 300 nm imaging voxels, that’s about 1 mm^3 / ((300 nm)^3 / 4.5^3) ≈ 3e12 voxels.
(The effective spatial resolution there is about 300/4.5 ≈ 67 nm.) (Note that we’ve assumed isotropic resolution, which can be attained and even exceeded with a variety of practical microscope designs.)
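As a quick sanity check on that voxel count, here is a minimal back-of-envelope sketch in Python; the expansion factor and voxel size are just the illustrative assumptions above:

```python
# Back-of-envelope voxel count for 1 mm^3 of tissue imaged at 4.5x expansion
# with 300 nm imaging voxels (i.e., ~67 nm effective resolution).
expansion = 4.5
voxel_nm = 300.0                                   # imaging voxel edge, post-expansion
effective_voxel_nm = voxel_nm / expansion          # ~67 nm in original tissue coordinates

voxels_per_mm3 = 1e18 / effective_voxel_nm**3      # 1 mm^3 = 1e18 nm^3
print(f"effective resolution ~{effective_voxel_nm:.0f} nm")
print(f"voxels per mm^3 ~{voxels_per_mm3:.1e}")    # ~3e12
```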

For FISSEQ-based RNA barcode connectomics, we want say 4 colors in parallel (one per base), with 4 cameras, and to image say 20 successive sequencing cycles, giving on the order of 4^15 ≈ 1B unique cell labels (assuming there is some error rate and/or near but not complete base-level diversification of the barcode, such that sequencing 20 bases yields a usable diversity corresponding to, say, 15).

The imaging takes much longer than the fluidic handling (which is highly parallel) as sample volumes get big, so let’s focus on considering the imaging time:

We have ~3e12 resolution-voxels * 20 cycles, so ~6e13 resolution-voxels total; allowing roughly an order of magnitude for oversampling relative to the resolution limit and other acquisition overheads, call it ~6e14 camera pixels to acquire. Let’s suppose we have a camera that operates at a frame rate of ~10 Hz and has ~4 megapixels, i.e., ~4e7 pixels per second per color channel. Then 6e14 / (4e7 per second) ≈ 24 weeks ≈ 6 months. Realistically, we could get perhaps a 12 megapixel camera and use computational techniques to reduce the required number of cycles somewhat, so 1-2 months seems reasonable for 1 mm^3 on a single microscope setup.

So, let’s say roughly 1 mm^3 per microscope per month.
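Here is the same imaging-time arithmetic as a small sketch; the camera specs, cycle count, and in particular the ~10x oversampling factor are illustrative assumptions, not measured figures:

```python
# Rough imaging time for 1 mm^3 under the assumptions above: ~3e12 resolution-voxels
# per cycle, 20 sequencing cycles, ~10x oversampling of the optical resolution, and a
# ~10 Hz, ~4 megapixel camera per color channel (colors acquired in parallel).
voxels_per_cycle = 3e12
cycles = 20
oversampling = 10.0                                        # assumed raw camera pixels per resolution-voxel
pixels_total = voxels_per_cycle * cycles * oversampling    # ~6e14

pixels_per_second = 10 * 4e6                               # ~4e7 pixels/s per channel
months = pixels_total / pixels_per_second / 86400 / 30
print(f"~{months:.0f} months per mm^3 per microscope (before camera/cycle optimizations)")

# Barcode diversity: 20 cycles read out, effectively ~15 informative bases
print(f"~{4**15:.1e} distinguishable cell barcodes")
```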

Further, each microscope costs around $400k, suppose. (It could be brought below that with custom hardware.)

Suppose 1 person is needed per 2 microscopes, plus a fixed staff of 5 other people; let’s say these people cost on average $150k a year.

We wish to image 0.5 cm^3 during the course of the project, i.e., roughly the size of a whole mouse brain.

0.5 cm^3 / 1 mm^3 = 500 microscope-months

Suppose the imaging part of the project can last no more than 24 months, for expediency.

That’s 500/24 = 21 microscopes, each at $400k, or $8.3M. Let’s call that $10M on the imaging hardware itself.

That’s also 10 people (for running the microscopes and associated tissue handling) plus the fixed 5 others, over three years total, i.e., $150k * 15 * 3 = $6.75M for salaries; call it $7M for people.

There are also things to consider like lab space, other equipment, and reagents. Let’s call that another $50k per person per year or $2.25M, call it $3M, just very roughly.

What about data storage? I suspect that a lot of compression could be done online, such that enormous storage of raw images might not be necessary in an optimized pipeline. But if a 1 TB hard drive costs on the order of 50 bucks, then ($50 / 1 terabyte) * 3e12 voxels per mm^3 * ~400 mm^3 * 20 biochemical cycles * 8 bytes per voxel = $9.6M for the raw image data.

So $10M (equipment) + $7M (salaries) + $3M (space and reagents) + $10M (data storage) = $30M mouse connectome, with three years total and 2 of those years spent acquiring data.
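Collecting the line items in one place, here is a sketch of the same back-of-envelope roll-up; all unit costs, staffing ratios, and volumes are the illustrative assumptions above:

```python
# Whole-mouse-brain optical connectome: rough cost roll-up under the assumptions above.
brain_mm3 = 500                          # ~0.5 cm^3 mouse brain
rate_mm3_per_scope_month = 1.0           # ~1 mm^3 per microscope per month
imaging_months = 24                      # cap the acquisition phase at 2 years
project_years = 3

scopes = brain_mm3 / (rate_mm3_per_scope_month * imaging_months)    # ~21
equipment = scopes * 0.4e6                                          # $400k per scope, ~$8.3M

people = scopes / 2 + 5                  # 1 operator per 2 scopes, plus 5 fixed staff
salaries = people * 150e3 * project_years                           # ~$7M
space_and_reagents = people * 50e3 * project_years                  # ~$2-3M

bytes_total = 3e12 * 400 * 20 * 8        # voxels/mm^3 * mm^3 * cycles * bytes/voxel
storage = bytes_total / 1e12 * 50        # $50 per TB of raw data, ~$9.6M

total = equipment + salaries + space_and_reagents + storage
print(f"~${total / 1e6:.0f}M before rounding each line item up to the ~$30M above")
```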

To be conservative, let’s call it $40M or so for your first mouse connectome using next-generation optical in-situ barcoding technologies with expansion microscopy.

Very importantly, the cost for future experiments is lower as you’ve already invested in the imaging hardware.

c) This is for the mouse. Don’t forget that the Etruscan Shrew is, as Ed Boyden has emphasized, >10x smaller and yet still a mammal with a cortex, and that many of the genetic technologies may (or may not) be readily adaptable to it, especially if viral techniques are used for delivery rather than transgenics.

2) What important questions could we answer with n=1? With n=2?

There are many potential answers to this and I will review a few of them.

Don’t think of it as n=1 or n=2; think of it as a technology for diverse kinds of data generation:
First, using the same hardware you developed/acquired to do the n=1 or n=2 full mouse connectome, you could scale up statistically by, for instance, imaging barcodes only in the cell bodies at low spatial resolution, and then doing MAPseq for projection patterns, only zooming into targeted small regions to look at more detailed morphology, detailed synaptic connectivity, and molecular annotations. Zador’s group is starting to do just this here
https://www.biorxiv.org/content/early/2018/08/31/294637
by combining in-situ sequencing and MAPseq. Notably, the costs are then much less, because one needs to image many fewer optical resolution-voxels in-situ: essentially only as many voxels as there are cell somas/nuclei, i.e., on the order of 100M micron-sized voxels for the mouse brain (e.g., to just look at the soma of each cell); the rest is done by Illumina sequencing on commercial HiSeq machines, which have already attained large economies of scale and optimization, and where one is inherently capturing less spatial data.
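To make that scale difference concrete, here is a rough comparison, assuming on the order of 1e8 neurons/somas in the mouse brain and the ~3e12 voxels per mm^3 full-resolution figure from earlier (both order-of-magnitude assumptions):

```python
# Rough comparison: in-situ imaging burden for soma-only barcode readout versus
# full-resolution volumetric imaging (order-of-magnitude assumptions as in the text).
neurons = 1e8                        # approximate mouse brain neuron/soma count
soma_voxels = neurons                # ~one micron-scale voxel (or a few) per soma
full_res_voxels = 3e12 * 500         # ~3e12 voxels/mm^3 over a ~500 mm^3 brain

print(f"soma-only readout: ~{soma_voxels:.0e} in-situ voxels")
print(f"full resolution:   ~{full_res_voxels:.0e} in-situ voxels")
print(f"soma-only needs ~{full_res_voxels / soma_voxels:.1e}x fewer voxels to image")
```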

Thus, one should think of this not (just) as an investment in 1-2 connectomes, but as an investment in a technology basis set that allows truly large-scale neuro-anatomy to be done at all, with a variety of possible distributions of that anatomy over individual subjects and a variety of scales of analysis accessible and usable in combination even within one brain.

Use it as a lens on microcircuit uniformity/heterogeneity:
Second, even the n=1 detailed connectome could be quite powerful, e.g., to resolve a lot of the long-standing questions regarding cortical microcircuit uniformity or heterogeneity across areas, which were recently re-kindled by papers like this one.

I should mention that this is a super important question, not least because it connects to many big theoretical debates in neuroscience. For instance,

–in one view of cortical microcircuitry, pre-structured connectivity will provide “inductive biases” for learning and inference computations, and in that view, we may expect different structured inductive biases in different areas (which process data with different properties and towards different computational goals) to be reflected in differing local microcircuit connectomes across areas: see another Markowitz-induced thread on this;

–in another view (see, e.g., Blake Richards), it is all about the input data and cost functions entering an area, which train an initially relatively unstructured network, and thus we may expect to see connectivity that appears highly random locally, but with differences in long-range inputs defining the area-specific cost functions;

–in yet another view, advocated by some in the Blue Brain Project, connectivity literally is random subject to certain geometric constraints determined by gross neural morphologies and statistical positional patterns;

–and so on…

Although a full connectome is not strictly needed to answer these questions, it would put any mapped microcircuits into a helpful whole-brain context, and in any case, if one wants to map lots of microcircuits, why not do so in the context of something at least approximating a whole brain connectome?

Think of it as millions of instances of an ideal single neuron input/output map (with dendritic compartment or branch level resolution):
Third, think of the n=1 connectome instead as N=millions of studies on “what are the inputs and outputs, and their molecular profiles, of this single neuron, across the entire brain”. For instance, you could ask millions of questions of the form “whole brain inputs to a single neuron, resolved according to dendritic compartment, synaptic type, and location and cell type of the pre-synaptic neuron”.

For instance, in the Richards/Senn/Bengio/Larkum/Kording et al picture — wherein the somatic compartments of cortical pyramidal neurons are doing real-time computation, but the apical dendritic compartments are receiving error signals or cost functions used to perform gradient-descent-like updates on the weights underlying that computation — you could ask, for neurons in different cortical areas: what are the full sets of inputs to those neurons’ apical dendrites, where else in the brain do they come from, from which cell types, and how do they impinge upon those dendrites through the local interneuron circuitry? This, I believe, would then give you a map of the diversity or uniformity of the brain’s feedback signals or cost functions, and start to allow making a taxonomy of these cost functions. In the (speculative) picture outlined here, moreover, this cost function information is in many ways the key architectural information underlying mammalian intelligence.

Notably, this would include a detailed map of the neuromodulatory pathways including, with molecular multiplexing in-situ, their molecular diversity. Of particular interest might be the acetylcholine system, which innervates the cortex, drives important learning phenomena, some of which have very complex local mechanisms, and involves very diverse and area-specific long-range projection pathways from the basal forebrain as well as interesting dendritic targeting. A recent paper also found very dense and diverse neuropeptide networks in cortex.

Answer longstanding questions in neuroanatomy, and disambiguate existing theoretical interpretations:
Fourth, there are a number of concrete, already-known large-scale neuroanatomy questions that require an interplay of local circuit and long range information.

For instance, a key question pertains to the functions of different areas of the thalamus. Sherman and Guillery, for instance, propose the higher-order relay theory, and further that the neurons projecting from layer 5 into the thalamic relays are the same neurons that project to the basal ganglia and other sub-cortical centers, and thus that the thalamic relays should be interpreted as sending “efference copies” of motor outputs throughout the cortical hierarchies. But to my knowledge, more detailed neuroanatomy is still needed to confirm or contradict nearly all key aspects of this picture, e.g., are the axons sending the motor outputs really the exact same ones that enter the putative thalamic relays, are those the same axons that produce “driver” synapses on the relay cells, what about branches through the reticular nucleus, and so on. (Similar questions could be framed in the context of other theoretical interpretations of thalamo-cortical (+striatal, cerebellar, collicular, and so forth) loops.)

Likewise, there are basic and fundamental architectural questions about the cortical-subcortical interface, which arguably require joint local microcircuit and large-scale projection information, e.g., how many “output channels” do the basal ganglia have, and to what extent are they discrete?

Think of it as a (molecularly annotated) projectome++:
Fifth, there are many questions that would benefit from a whole brain single cell resolution projectome, which requires much the same technology as what would be needed to add synaptic information on top of that (in this optical context); e.g., papers like this one propose an entire set of theoretical ideas based on putative projection anatomy that is inferred from the literature but not yet well validated:
https://www.frontiersin.org/articles/10.3389/fnana.2011.00065/full
One may view these ideas as speculative, of course, but they suggest the kinds of functionally-relevant patterns that one might find at that level, if a truly solid job of whole-brain-scale single-cell-resolution neuroanatomy were finally done. Granted, this doesn’t require mapping every synapse, but again, the technology to map some or all synapses optically, together with the projectome, is quite similar to what is needed to simply do the projectome at, say, dendritic-compartment-level spatial resolution and with molecular annotations, which one would (arguably) want to do anyway.

Use it for many partial maps, e.g., the inter-connectome of two defined areas/regions:
Again, with the same basic technology capability, you can do many partial maps, and importantly, maps that include both large-scale and local information. For example, if you think beyond n=1 or n=2, to say n=10 to n=100, you can definitely start to look at interesting disease models, and then you likely don’t need full connectomes; for instance, you might look at long-range projections between two distal areas, with detailed inter-connectivity information mapped on both sides as well as their long-range correspondences. Same infrastructure, easily done before, during or after a full connectome, or (as another example) a connectome at a user-chosen level of sparsity, induced by the sparsity of, e.g., the viral barcode or generalized BrainBow labeling.

Finally: questions we don’t know to ask yet, but that will be suggested by truly comprehensive mapping! For example, as explained in this thread, molecular annotation might allow one to measure the instantaneous rate of change of synaptic strength, dS/dt, which is key to inferring learning rules. As another example, entirely new areas of the very-well-studied mouse visual cortex, with new and important-looking functions, are still being found circa 2019… sometimes it seems like the unknown unknowns still outnumber the known unknowns. See also Ed Boyden’s and my essay on the importance of “assumption proof” brain mapping.

Anyway, is all of this worth a few tens of millions of dollars of investment in infrastructure that can then be used for many other purposes and bespoke experiments? In my mind, of course it is.

3) What new analysis capabilities would be needed to make sense of whole brain data?

To generate the connectome++ in the first place:
For the case of optical mapping with barcodes, there are at least some versions of the concept, e.g., the pure synaptic BrainBow approach, where morphological tracing is not needed, and the analysis is computationally trivial by comparison with the electron microscopy case that relies on axon tracing from image stacks of ultra-thin sections. The barcodes just digitally identify the parent cells of each synapse.
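To illustrate why that readout is computationally light, here is a toy sketch (not any group’s actual pipeline; the barcode strings are made up): if each sequenced synapse yields a pre-synaptic and post-synaptic barcode, the connectivity graph is just a count over those pairs.

```python
from collections import Counter

# Toy illustration: each in-situ sequenced synapse yields a (pre_barcode, post_barcode)
# pair, so building the connectivity graph reduces to counting pairs; no morphological
# tracing through image stacks is required.
synapse_reads = [
    ("ACGTTGCA", "GGATCCTA"),   # hypothetical barcode pairs read out at synapses
    ("ACGTTGCA", "GGATCCTA"),
    ("ACGTTGCA", "TTCAGGAC"),
    ("CCGAATGT", "GGATCCTA"),
]

connectome = Counter(synapse_reads)       # (pre, post) -> synapse count
for (pre, post), count in connectome.items():
    print(f"{pre} -> {post}: {count} synapse(s)")
```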

For various other potential versions of optical connectomics, some morphological image analysis and segmentation would be required, particularly, for instance, to correct errors or ambiguities associated with imperfect trafficking of barcodes. Likewise, in an approach that relies heavily on tracing, barcodes could be used to error-correct that tracing, and/or to help train the algorithms for that tracing. Those approaches might start to have computational analysis requirements on a similar level as those for electron microscopy image segmentation.

Sparse labeling, yielding a connectome of a sub-graph of neurons that still spans all brain areas, could likely simplify the analysis as well.

(For general infrastructural comparison, a quick search tells me that Facebook handles “14.58 million photo uploads per hour”.)

To derive insight from the connectome in the context of other studies:
To more broadly “make sense” of the data in light of theoretical neuroscience, and in terms of exploratory data analysis, I defer to experts on connectomic data analysis and on the many specific areas of neuroscience that could benefit and contribute.

But broadly, I don’t feel it is qualitatively different from what is needed to do analysis for other kinds of large-scale neuroscience that fall short of full mammalian connectomes, and moreover much can be learned from studying whole-brain analyses in small organisms like zebrafish or the fly. I certainly don’t see any kind of fundamental obstacle implying that this data could not be “made sense of”, if contextualized with other functional data, behavioral studies, and computational ideas of many kinds.