manyagents.ai

11 Sep 2021

Is reverse-engineering and implementing the mammalian brain a random search?

This article is concerned with two approaches to reverse-engineering the mammalian brain: (i) a design derived from deductive steps based on observations; (ii) a (biased) random search of the space of possible designs, i.e. considering a vastly larger set of options.

Assume a threshold Tᵣ which sets a limit on the complexity of all programs that a given civilization can feasibly design (as per approach (i)). To discover useful programs above Tᵣ, the civilization must employ a massively parallel search over designs ordered by some fitness function. I doubt the assumption that the complexity of the brain is below Tᵣ for the human civilization of the 21st century. I argue that reverse-engineering the brain on a digital machine might practically amount to a random search. Then I explore some implications of this argument.
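To make approach (ii) concrete, below is a minimal sketch of what I mean by a biased random search over designs ordered by a fitness function. All names (sample_design, mutate, fitness) are hypothetical placeholders; the point is only the shape of the procedure, not a concrete proposal for how to search for brain-like programs.

```python
import random

def biased_random_search(sample_design, mutate, fitness,
                         iterations=100_000, population=256):
    """Toy evolutionary search over candidate designs.

    sample_design() -> a random candidate design
    mutate(design)  -> a slightly perturbed copy of a design
    fitness(design) -> a score; evaluating it is the expensive step
    """
    # Seed the pool with random designs, scored once each.
    pool = [(fitness(d), d) for d in (sample_design() for _ in range(population))]
    pool.sort(key=lambda pair: pair[0], reverse=True)

    for _ in range(iterations):
        # Bias: parents near the top of the pool are picked more often.
        parent = pool[int(random.random() ** 2 * len(pool))][1]
        child = mutate(parent)
        pool.append((fitness(child), child))
        pool.sort(key=lambda pair: pair[0], reverse=True)
        pool.pop()  # drop the worst design to keep the pool size fixed

    return pool[0][1]  # best design found so far
```

The expensive part is fitness: for programs above Tᵣ we cannot shortcut it with deduction, so the budget is dominated by evaluating candidates.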

Interdependency in the brain

There is an immense amount of detail in the brain. My sense is that it’s not clear how much we are missing, and there’s little thought, let alone consensus, about whether any particular known detail is required for a useful digital reconstruction, or whether it can be abstracted away as a mere complicated biological protocol serving a simple idea. An implementation that makes no such discrimination is impractical, because it would amount to a simulation at the sub-nm scale. But if the myriad protocols and parameters depend on each other, the discrimination itself turns into a random search rather than a series of deductive steps.
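For a rough sense of why an undiscriminating simulation is impractical, here is a back-of-envelope comparison using commonly cited ballpark figures (~86 billion neurons, on the order of 10¹⁴ synapses, roughly 10²⁶ atoms in ~1.4 kg of mostly-water tissue); the per-entity state counts are my toy guesses, and only the gaps between the orders of magnitude matter.

```python
import math

# Ballpark scales for a whole-brain simulation at different levels of detail.
neurons  = 8.6e10   # commonly cited estimate for the human brain
synapses = 1e14     # order-of-magnitude estimate
atoms    = 1e26     # ~1.4 kg of (mostly) water: 1400/18 mol * ~3 atoms * 6.0e23

levels = {
    "point neurons, scalar weights": neurons + synapses,
    "multi-compartment neurons":     neurons * 1_000 + synapses * 10,  # toy guess
    "sub-nm (molecular) detail":     atoms * 10,                       # toy guess
}

for name, state_vars in levels.items():
    print(f"{name:>32}: ~10^{math.log10(state_vars):.0f} state variables")
```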

Let me present some selected examples.

During neocortical ontogenesis, the neuroepithelium forms minicolumns. In the rat, for about 90% of neural cells the minicolumn they migrate to is precisely given by the horizontal (tangential to the pia mater) position of the mother cell; the rest move horizontally first to join another minicolumn [1]. The ratio varies from area to area and species to species. How sensitive is this ratio with respect to the rest of the brain’s protocols? How far can we push it either way before the result is a dud? For example, hyper-connectivity between the medial geniculate nucleus and the auditory cortex correlates with schizophrenia and auditory hallucinations [2]. The way these migrations proceed isn’t random either; there are specific patterns to it [3]. Not only can we vary the ratio, but also the direction.
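As a toy illustration of the two knobs just mentioned (the fraction of cells that stay in their mother cell’s minicolumn and the directional bias of the ones that move), here is a hypothetical sketch; it makes no claim about the actual migration mechanism, only about what “tuning the ratio and the direction” would mean in a reconstruction.

```python
import random

def assign_minicolumns(n_cells, n_columns, stay_ratio=0.9, drift_bias=(1, 2)):
    """Toy assignment of daughter cells to minicolumns.

    stay_ratio : fraction of cells that inherit the mother cell's column
                 (~0.9 in the rat example above).
    drift_bias : inclusive range of column offsets for cells that move;
                 a stand-in for the non-random migration patterns.
    """
    assignments = []
    for _ in range(n_cells):
        mother_column = random.randrange(n_columns)
        if random.random() < stay_ratio:
            column = mother_column
        else:
            # The minority that moves does so with a directional bias, not uniformly.
            column = (mother_column + random.randint(*drift_bias)) % n_columns
        assignments.append(column)
    return assignments

# One could sweep stay_ratio and drift_bias and feed the resulting connectivity
# into a downstream model to probe how sensitive the outcome is.
```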

Tracts are bundles of axons that travel horizontally to connect adjacent gyri. By 1991, over 750 tracts were known between 70 areas of the macaque monkey’s cortex [4]. How important is each tract? How many can we omit during neurogenesis before the result is a dud? And looking at the neocortex solely through the lens of corticocortical connections is simplistic: the cortical areas project to other brain structures, which may project back to the cortex through the (dorsal) thalamus [5]. Cortical areas also communicate with each other via corticothalamic axons.

A common cell type is the pyramidal neuron, found in different forms throughout the brain. Its dendrites are covered in thousands of sub-μm structures called spines, which connect the dendrites to the axons of other cells [6]. Along with ion pumps [5], they appear to be the low-level mechanism of plasticity between neural connections. Further, spines sometimes perforate to form larger synapses, and sometimes dendrites from multiple cells form synaptic triads [7, 8]. Dendrites and their rich variations are vital to the brain’s performance [9]. Can all of this be merged and abstracted away as mere neuron-to-neuron weights in ℝ?
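To make the question concrete: the abstraction in question collapses everything between two neurons into a single real number, whereas keeping even a coarse version of the biology above means tracking several coupled quantities per connection. The field names in this sketch are illustrative assumptions, not a proposal.

```python
from dataclasses import dataclass, field

# The abstraction under discussion: everything between two neurons is one real number.
AbstractSynapse = float

# A still-very-coarse alternative that keeps some of the structure explicit.
@dataclass
class DetailedSynapse:
    spine_volume: float          # sub-um spine geometry affects efficacy
    perforated: bool             # perforated spines form larger synapses
    receptor_counts: dict = field(default_factory=dict)  # e.g. {"AMPA": 80, "NMDA": 20}
    ion_pump_state: float = 0.0  # stand-in for slower plasticity machinery
    triad_partners: list = field(default_factory=list)   # other dendrites in a synaptic triad

    def weight(self) -> float:
        """The single scalar that the abstraction keeps; a toy read-out here."""
        scale = 1.5 if self.perforated else 1.0
        return scale * self.spine_volume * sum(self.receptor_counts.values())
```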

C. elegans is a popular example of a primitive neural system, with its ~300 neurons accounting for a third of all its cells. Its locomotion circuits were reverse-engineered and shown to depend on the particular way single neurons connect with each other [10]. A cricket’s brain contains a single interneuron that inhibits auditory processing during its own chirp [11]; the neuron is wired very specifically to facilitate this function. Drosophila’s protocerebral bridge is a collection of 16-18 modules, each mapping a fraction of the visual field. Their projection onto the fan-shaped body’s 8-9 modules connects opposite sectors, establishing axes that pass through the centre of the head [12]. To achieve this, the connections must again be predetermined very specifically. How many such “single neuron precision” circuits, critical to the development and function of “intelligence” (i.e. ignoring the likes of the vestibulo-ocular reflex), are there in the mammalian brain?
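As an illustration of what a “single neuron precision” circuit means computationally, here is a toy version of the opposite-sector projection described for Drosophila; the module counts come from the cited description, while the exact pairing rule is my simplification.

```python
def opposite_sector_projection(n_pb_modules=16, n_fb_modules=8):
    """Toy wiring rule: protocerebral-bridge module i and its opposite
    (i + n/2) both project to the same fan-shaped-body module, so each
    target integrates a visual axis through the centre of the head.
    The pairing rule is a simplification for illustration only."""
    wiring = {}
    half = n_pb_modules // 2
    for fb in range(n_fb_modules):
        # Each fan-shaped-body module receives a sector and the sector opposite it.
        pb_a = fb % half
        pb_b = pb_a + half
        wiring[fb] = (pb_a, pb_b)
    return wiring

# Swapping even two entries of this mapping breaks the "axes through the centre"
# property: the circuit only works with this specific wiring.
print(opposite_sector_projection())
```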

These are but a fraction of the cases I found during my literature review. There is parameter after parameter that one might need to tune, and having even one misconfigured, I worry, cascades into degenerating many protocols.

These observations lead me to believe that we won’t be able to write a program that reconstructs the workings of the brain. As I see it, most of the work will be done by an expensive search. How to define the search space, how to traverse it, and how to evaluate the candidate programs? I don’t have anything to say about that yet.

I presume that the genome is an efficient way to search the vast space of designs. One line of counter-argument goes: natural selection might favor efficiency of execution over efficiency of innovation, but it needs to innovate nonetheless, and one can innovate faster with a modular design that has fewer interdependencies. Presently, I do not subscribe to this.

Implications

Let’s consider two motivations for reverse-engineering the brain: (i) clinical research, that is, gaining insights into neurodegenerative diseases, mental illnesses, etc.; (ii) productivity, that is, employing AGI.

(i) I predict a scarcity of even area-limited computational models that could provide strong or conclusive evidence for clinical research. Such evidence typically needs a healthy control model, but determining the normal function of some subsystem in the face of this complexity will be too hard for most research. Ergo, successful applications of (i) are blocked on having a faithful reconstruction of the brain.

(ii) The probability that the brain is the simplest program with an “intelligent” phenotype is zero. However, unlike with the brain, we must invent simpler programs without a reference, so they are bound by a stricter threshold Tᵢ. Whether the density of intelligent programs under Tᵢ is sufficient for us to discover one is up for debate, but to summarize my view: given the successes of fairly primitive programs like DNNs, I suspect that AGI is the easier problem of the two. I cannot imagine we will be even close to a faithful reconstruction of the brain when we build the first AGIs.

Tangentially, I am not aware of any reason why the brains we are equipped with should scale much beyond their present performance, since an evolutionary search under any selection (let alone the one we obey) builds on top of the present design. I presume that without a solid theory of how the brain performs higher cognitive tasks this cannot be answered; still, I would like to read a take on the brain’s ability to scale from someone close to the subject.