Kwabena Boahen grew up in Accra, Ghana, and vividly remembers his father bringing home the family’s first digital computer when Kwabena was 16 years old [1]. He immediately set about learning its ins and outs in an attempt to program the classic arcade game Pong. The more he read, however, the more Boahen disapproved. Every action in such a simple game required manipulating thousands of ones and zeros, a crude “brute force” approach by which he was “totally disgusted.” It was this dissatisfaction with computers that inspired Boahen’s later research career as a professor of bioengineering at Stanford University and head of the Brains in Silicon Lab [1].

For decades, we have relied on the same fundamental building block for our computers: the transistor. Specifically, metal-oxide-semiconductor field-effect transistors (MOSFETs) have continually been shrunk to smaller and smaller sizes so as to fit more of them onto a chip. Gordon Moore, Intel co-founder and former CEO, posited in 1965 that the number of transistors on a chip would double at regular intervals, a cadence he later pegged at every two years: a prediction that held for decades.
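
As a back-of-the-envelope check (my own sketch, not from the article), doubling every two years from the roughly 2,300 transistors of 1971’s Intel 4004 lands in the tens of billions by the 2020s, which is exactly where modern chips sit:

    # Project Moore's Law forward from a known starting point.
    def moores_law(start_year, start_count, year, doubling_period=2.0):
        """Transistor count projected with a fixed doubling period (in years)."""
        return start_count * 2 ** ((year - start_year) / doubling_period)

    # Intel 4004 (1971): ~2,300 transistors. Projecting 50 years ahead:
    print(f"{moores_law(1971, 2_300, 2021):.2e}")  # ~7.7e+10, the right ballpark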

Figure 1: AI’s demand for flops is growing unsustainably. (Fig. 1 Boahen, 2022)

But all good things must eventually come to an end, and Kwabena Boahen details this in his latest Nature Perspective [2]. Tiling smaller and smaller transistors onto a chip is all well and good, but it yields diminishing returns. Think about it: signals from each transistor must now travel a longer relative distance to reach all the others. Moreover, with rapidly evolving artificial intelligence (AI) systems, demand for computing power is only increasing, and at a rate nearly 12 times faster than Moore’s Law. It’s a computing crisis for AI [2].
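
A rough illustration of how fast that gap opens (my own arithmetic, with the roughly two-month demand-doubling period inferred from the “12 times faster” figure):

    # Supply: Moore's Law doubles transistor counts every 24 months.
    # Demand: AI's appetite for flops doubles ~12x faster, i.e., every ~2 months.
    years = 5
    supply = 2 ** (years * 12 / 24)  # ~5.7x more transistors
    demand = 2 ** (years * 12 / 2)   # ~1.1e9x more compute wanted
    print(f"After {years} years, demand outgrows supply by {demand / supply:,.0f}x")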

Demand for flops (floating-point operations per second) in AI is growing at an unsustainable pace [2]. Boahen acknowledges that the signal-transmission-distance problem of a flat chip can be surmounted by stacking transistors into a three-dimensional chip. However, this comes with its own set of challenges, most notably a greatly reduced surface area for dissipating heat. We now need a solution to this thermal problem, and Boahen proposes one: “signaling sparsely,” eliminating redundant signals in such a fashion that energy use scales linearly with the number of computing units in a circuit [2].
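
Why sparsity helps can be seen with a toy comparison (my own illustration, not from the paper): all-to-all signaling grows quadratically with the number of units, while a fixed fan-out per unit grows linearly:

    def dense_signals(n):
        """All-to-all signaling: every unit signals every other unit (O(n^2))."""
        return n * (n - 1)

    def sparse_signals(n, fanout=10):
        """Sparse signaling: each unit signals a fixed number of partners (O(n))."""
        return n * fanout

    for n in (1_000, 1_000_000):
        print(f"{n:>9} units -> dense: {dense_signals(n):.1e}, sparse: {sparse_signals(n):.1e}")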

Figure 2: The energy a synthetic brain consumes could scale with its neuron count the way a biological brain’s does. (Fig. 2 Boahen, 2022)

Meanwhile, nature has already figured out her own way of designing exactly this kind of linearly scaling, sparsely signaling, energy-efficient intelligent system [3]. Boahen states that the “brain processes information using 100,000 times less energy than we do right now with this computer technology that we have.” [2] To investigate how this is done, Boahen looks at the main computational unit of the mammalian brain: the pyramidal neuron. Specifically, the spiny dendrites of these neurons have been shown to be selective to inputs’ spatiotemporal ordering, i.e., which parts of the dendrites are triggered and in what order. This is unlike the dominant simple model of dendrites, which merely sums inputs from presynaptic neurons. The electrical environment in and around these dendrites is practically independent of that of axons and traditional action potentials, which allows for local computations [3].

Boahen created a computational model of the various channels within a dendrite’s series of spiny heads and associated segments, observing the effects of input spikes on voltage [2]. He found that the membrane potential of spines is bistable: it sits either at rest or at a higher “plateau potential.” When one spine reaches a plateau from synaptic input, the potential travels down the shaft of the dendrite, summating with that of the next spine (which has just received input of its own) and continuing the plateau all the way down. Switching the order in which consecutive input spikes arrive at the spines halts the plateau, giving rise to spatiotemporal selectivity [2].
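
The ordering logic itself fits in a few lines. Here is a minimal sketch (my own toy, not Boahen’s biophysical model) in which a spine flips to its plateau only if it spikes while the spine upstream of it is already at plateau:

    def dendrite_recognizes(sequence, n_spines):
        """True if spikes arrive in spine order 0, 1, 2, ... (toy model)."""
        plateau = [False] * n_spines
        for spine in sequence:
            if spine == 0 or plateau[spine - 1]:
                plateau[spine] = True   # plateau propagates one spine further
            else:
                return False            # an out-of-order spike halts the plateau
        return all(plateau)

    print(dendrite_recognizes([0, 1, 2, 3], 4))  # True: in-order sequence
    print(dendrite_recognizes([0, 2, 1, 3], 4))  # False: two spikes swapped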

Figure 3: Structural properties of mouse neocortical pyramidal neurons. (Fig. 1a-b Luebke, 2010)

Knowing exactly which neurons have fired and in what sequence allows for the encoding of highly specific information [1]. Each input spine can be assigned a specific digit, so a dendrite responding to a specific sequence effectively recognizes a particular number (a combination of the input digits). This allows information to be encoded in an n-ary number system (where n is the number of input spines), instead of relying on the binary ones and zeros of existing computers [2].
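
As a hypothetical illustration of that encoding (my own construction), reading the spine indices in arrival order as the digits of a base-n number gives every ordering its own value:

    def sequence_to_number(sequence, n_spines):
        """Interpret spine indices, in arrival order, as base-n digits."""
        value = 0
        for spine in sequence:
            value = value * n_spines + spine
        return value

    # With 5 input spines, different orderings encode different base-5 numbers:
    print(sequence_to_number([0, 1, 2, 3, 4], 5))  # 194
    print(sequence_to_number([4, 3, 2, 1, 0], 5))  # 2930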

The bistability with applied voltage in Boahen’s model is also seen in ferroelectric capacitors [2]. Electric dipoles within these capacitors align with the voltage-induced electric field, and these dipoles flip if the field is large enough. Flipping the dipoles in one capacitor can lower the energy barrier for flipping them in the capacitor next to it, just as dendritic plateaus travel from spine to spine. Putting several of these ferroelectric capacitors into a field-effect transistor (FET) lets them selectively gate the flow of current. By exploiting the spatiotemporal order of signals, a circuit of such ferroelectric FETs could signal sparsely, just like the brain, realizing the hypothetical sparse 3D chip: redundant signals are eliminated, and energy use scales linearly with the number of computing units in the circuit [2].
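
A toy version of that neighbor-assisted flipping (my own sketch, with made-up barrier values) shows the same order sensitivity as the spine chain:

    BARRIER_ALONE = 1.0     # field needed to flip an isolated dipole (arbitrary units)
    BARRIER_ASSISTED = 0.4  # lower barrier once the upstream neighbor has flipped

    def chain_flips(fields):
        """Apply a field to each capacitor in order; return final dipole states."""
        flipped = [False] * len(fields)
        for i, field in enumerate(fields):
            barrier = BARRIER_ASSISTED if i > 0 and flipped[i - 1] else BARRIER_ALONE
            flipped[i] = field >= barrier
        return flipped

    print(chain_flips([1.0, 0.5, 0.5, 0.5]))  # all flip: the chain propagates
    print(chain_flips([0.5, 0.5, 0.5, 1.0]))  # only the last flips: no propagation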

Figure 4: Concept for a dendrite-like nanoscale device and its 3D integration. (Fig. 4a Boahen, 2022)

“Every 10 years, I realize some blind spot that I have or some dogma that I’ve accepted,” says Boahen. “I call it ‘raising my consciousness.’” [2] He believes this so-called “dendrocentric computing” will give rise to a new paradigm in computer engineering. Just as the legendary theoretical physicist Richard Feynman’s vision of quantum computers gave rise to the pursuit of “quantum supremacy,” Boahen strives for “neural supremacy.” [2]

Although Boahen acknowledges that Feynman’s promise of quantum computing has yet to be fulfilled and that dendrocentric computing might face a similar timeline, he simply can’t go back. “Once you see it, you can’t unsee it…I’m not the same guy I was before,” affirms Boahen [2]. With dendrocentric artificial intelligence (or “synthetic intelligence,” as Boahen terms it for short), GPT-3, a large language model, could go from running on megawatts in the cloud to watts on your smartphone. The two weeks spent training the model, which released as much carbon as 1,300 cars would in the same period, will be seen as primitive, and a 16-year-old Boahen will have finally found the elegant solution he was searching for.

References: 

[1] Computer History Museum. (2022). Oral history of Kwabena Boahen: Interview transcript. Retrieved from https://archive.computerhistory.org/resources/access/text/2022/10/102792242-05-01-acc.pdf

[2] Boahen, K. (2022). Dendrocentric learning for synthetic intelligence. Nature, 612(7938), 43–50. https://doi.org/10.1038/s41586-022-05340-6

[3] Branco, T., Clark, B. A., & Häusser, M. (2010). Dendritic discrimination of temporal input sequences in cortical neurons. Science, 329(5999), 1671–1675. https://doi.org/10.1126/science.1189664

[4] Luebke, J. I., Weaver, C. M., Rocher, A. B., Rodriguez, A., Crimins, J. L., Dickstein, D. L., Wearne, S. L., & Hof, P. R. (2010). Dendritic vulnerability in neurodegenerative disease: Insights from analyses of cortical pyramidal neurons in transgenic mouse models. Brain Structure and Function, 214(2–3), 181–199. https://doi.org/10.1007/s00429-010-0244-2