The Power of the Individual Neuron.
A while back, I wrote an entry titled The Power of the Human Brain. I’m still proud of that entry, and I think it does a good job explaining the complexity of neuronal computation in everyday language. However, my entry was posted at a forum as an argument, and the engineer whom the poster was arguing against (who has a fairly sophisticated understanding of the gist of neuroscience) absolutely demolished my entry. Simply put, the assertions I made (that an individual neuron is more powerful than a slightly dated supercomputer) were not well supported by the evidence I put forward in my blog post. Fair enough; this is true. However, the assertions themselves are supported by the evidence, just not the evidence I presented there. This entry will be an explication of the true power of individual neurons.
This entry will be fairly technical and complex at times, but I assure you, it will be worth the read. Neuronal communication is amazingly complex. I will assume the reader’s familiarity with my previous blog entry (linked above), and go from there. To reiterate briefly: a neuron can have up to 200,000 synaptic connections. Every time an action potential reaches these connections, information is conveyed from one neuron to another in the form of excitatory or inhibitory charges, which travel down the dendrites to the cell body. This information is integrated and computed at the cell body, or soma, and if the excitatory threshold is reached, the neuron initiates an action potential; that is, it conveys the integrated and computed information to another neuron. The engineer I was discussing with in my previous entry did me the favor of a calculation: multiplying 10 bits of information by 20,000 synapses, and assigning a weighting factor to these 10-bit signals for spatial and temporal priority (a factor whose numerical value he did not give), takes around 40 kiloFLOPS. Now, admittedly, engineering is not my strong point. I would like to adjust this calculation to use the correct information rate conveyed by individual neural spike trains (300 bits per second; Koch 2004, 333) and the correct density of synaptic connections, mentioned above, to get the maximum computational rate of a neuron in this incredibly simplistic neural model. But I don’t know how to convert anything to FLOPS, so I can’t say exactly how different these new figures would make the final product. It seems to be at least a few orders of magnitude greater, but I can’t say for certain. In addition, the weighting factor given to neural coding even in simplistic models of artificial neural networks is debated, so I don’t know which he used; the actual computation might require greater power still.
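Though a proper conversion is beyond me, the arithmetic of simply rescaling the engineer’s figure is easy to sketch. Here is a hypothetical back-of-envelope in Python; the assumption that his unknown per-bit weighting cost stays fixed (so the estimate scales linearly with bit rate and synapse count) is mine, not his:

```python
# Back-of-envelope rescaling of the engineer's ~40 kFLOPS estimate.
# Assumption (mine): the per-bit weighting cost is held fixed, so the
# estimate scales linearly with bit rate and with synapse count.
engineer_estimate = 40e3          # ~40 kFLOPS for 10 bits x 20,000 synapses
bits_ratio = 300 / 10             # 300 bits/s per spike train (Koch 2004)
synapse_ratio = 200_000 / 20_000  # up to 200,000 synaptic connections
rescaled = engineer_estimate * bits_ratio * synapse_ratio
print(rescaled)                   # 12000000.0, i.e. roughly 12 megaFLOPS
```

That is about 300 times the original figure, consistent with my guess of “at least a few orders of magnitude,” though the linear-scaling assumption may well understate or overstate the true cost.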
Fortunately, however, I don’t have to be proficient in engineering to demonstrate my claim. About 10^4 parameters need to be specified to create model neurons with biophysically realistic spatiotemporal computational abilities (Huys, Ahrens, & Paninski, 2006). These parameters must be set to account for odd aspects of neural computation, such as backpropagation of action potentials (Grewe, Bonnan, & Frick, 2010), which affects first-order neural communication, among other things. In modeling these neurons, a realistic level of synaptic input is prohibitively demanding for a computer’s memory, and much of the synaptic input is set to zero for purposes of calculation. Already we have begun to stress the computational power of computers, and neurons haven’t even begun to show off.
The type of neural model mentioned above is known as a “multicompartmental” model, so named because it divides the neuron into many interacting compartments, each with its own parameters. However, even multicompartmental models, which, again, are our most biophysically realistic models of neurons, cannot account for a variety of computations and behaviors known in neurons (Herz, Gollisch, Machens, & Jaeger, 2006). While they can add, subtract, multiply, and divide inputs, they cannot engage in a variety of observed, behaviorally relevant biological computations that actual neurons can; for a list of these, see Table 1 and Table 2 in the linked article. Some neural models that can engage in these behaviorally relevant computations are abstract, pay little attention to biological characteristics, and provide no rationale for how actual neurons could perform their computations. Nonetheless, because these abstract models do reproduce the function of neurons without explaining how, they have made some testable predictions. One of these is that an individual neuron may perform the function of an entire artificial neural network of multicompartmental units, which has been shown to be true.
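To make concrete just how little the simplistic picture captures, here is a minimal “point neuron” sketch in Python: a leaky integrate-and-fire unit that collapses the entire cell into a single summed voltage and a threshold. All constants are illustrative choices of mine, and the model is of course nothing like a real neuron; that is the point.

```python
def simulate_lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Leaky integrate-and-fire: the whole neuron reduced to one variable.

    inputs: total synaptic drive at each time step (illustrative units).
    Returns the time steps at which the model 'fires'.
    """
    v = 0.0          # membrane potential, a single number for the whole cell
    spikes = []
    for t, drive in enumerate(inputs):
        v += (dt / tau) * (-v + drive)   # leak toward rest, integrate input
        if v >= threshold:               # fire and reset at threshold
            spikes.append(t)
            v = 0.0
    return spikes

# A constant suprathreshold drive yields perfectly regular spiking --
# no dendrites, no spines, no DNA, no long-term computation.
print(simulate_lif([1.5] * 100))
```

Everything the rest of this entry describes is what this picture leaves out.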
Neurons are nowhere near as simplistic as summing junctions. Take dendrites, for example. Far from being passive conveyors of excitatory or inhibitory charges, they are “the paramount examples of biological nanotechnology,” as Kolb and Whishaw put it in their An Introduction to Brain and Behavior, second edition. Indeed, one review article on dendrites states:
“Spines appear as one of the ultimate examples of miniaturization in Biology, and could represent a good testing ground for biological nanotechnology (Mehta et al., 1999). Indeed, there is already some evidence for both small number of molecules and extreme precision in their location…. Moreover, even the position of these channels and receptors, particularly with respect to other molecular components of the spine, appears determined with extreme precision… It seems to us that spines are built with great molecular sophistication, and that future studies to understand their structure and function must operate at an equally high level of experimental precision. It does not appear exaggerated to argue that the understanding of the function of the biochemical pathways present in spines may require single-molecule techniques (Mehta et al., 1999), in combination with detailed computational modeling of the spatio-temporal dynamics and kinetics of these molecules (Kennedy, 2000).” (Tashiro & Yuste, 2003)
Kolb and Whishaw state further about dendritic spines,
“to mediate learning, each spine must be able to act independently, undergoing changes that its neighbors do not undergo. Spines may appear and disappear on a dendrite in a matter of seconds, and they may even extend or move along a dendrite to search out and contact a presynaptic axon terminal. When forming a synapse, they can change in size and shape and even divide…. The mechanisms that allow spines to appear and change in shape include a number of different cytoskeletal filaments linked to the membrane receptors…. The activation of receptors can induce mRNA within the spine to produce more of these structural proteins. In addition, second messengers within the spine can carry signals to the cell’s DNA to send more mRNA addressed to just the signaling spine.” (p. 181)
In addition to the information-processing carried out in the soma, then, dendrites are carrying out two further forms of information-processing, all using the same resources as the rest of the cell.
The computational abilities of neurons do not stop there, however. One article on neuronal computation, titled Inside the brain of a neuron, explicates some of the vast computations performed by neurons that go undetected by simplistic models of neural computation (Sidiropoulou, Pissadaki, & Poirazi, 2006). To name a few: dendrites themselves engage in nonlinear summation, amplification, and delay of inputs before conveying that information to the soma. The soma itself can computationally maintain persistent activity, generating action potentials even in the absence of synaptic input. Finally, axons themselves seem to be involved in a neuron’s information-processing. These are just a few of the extraordinary computational abilities of neurons often neglected in neuron models, and I recommend reading the whole article.
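As a cartoon of what nonlinear dendritic summation buys a neuron, here is a hypothetical two-stage sketch, loosely inspired by two-layer models of pyramidal neurons. Every number, and the sigmoid itself, is my illustrative choice, not a figure from the article:

```python
import math

def branch_output(inputs, gain=2.0, half_point=1.0):
    """Each dendritic branch saturates its own summed input (a sigmoid)."""
    s = sum(inputs)
    return 1.0 / (1.0 + math.exp(-gain * (s - half_point)))

def soma_output(branches):
    """The soma then sums the branch outputs, not the raw inputs."""
    return sum(branch_output(b) for b in branches)

# The same total input, clustered on one branch vs. scattered across three:
clustered = soma_output([[0.5, 0.5, 0.5, 0.5], [0.0], [0.0]])
scattered = soma_output([[0.5, 0.5], [0.5], [0.5]])
print(clustered > scattered)   # True: spatial arrangement of inputs matters
```

A linear summing junction would give identical output in both cases; with a per-branch nonlinearity, where the inputs land changes the result, which is one way dendrites add computational power.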
One more thing that I can’t avoid mentioning is the information-processing in each cell’s DNA. A neuron’s DNA is an incredibly complex and computationally powerful information-processor in itself (Shapiro, 2006). Moreover, neural computations regularly initiate epigenetic changes (Borrelli, Nestler, Allis, & Sassone-Corsi, 2008). These changes then influence neural computation, resulting in an interactive information-processing loop, all performed with the same resources that the cell has for input summation. Indeed, recent experiments have even implemented simple artificial neural networks out of DNA strand-displacement reactions (Qian, Winfree, & Bruck, 2011), hinting at the computational potential of DNA itself. The information-processing in DNA alone parallels or rivals computers in many ways, and that is before combining it with the other capacities of the individual neuron.
Finally, individual neurons engage in large-scale, long-term computations simultaneously with their short-term computations, and their long-term computations result in a different output pattern (Sheffield, Best, Mensh, Kath, & Spruston, 2010). How neurons do this is currently unknown.
In sum, though I can’t do the math to convert a neuron’s computational ability into FLOPS, I don’t think anyone can. The known computational abilities of our most biophysically realistic neuron models strain most computers even when synaptic input is set to a minimal level. Yet these models cannot come close to performing many biologically relevant computations that actual neurons can; some of those computations are matched only by entire networks of multicompartmental models, or by abstract models that are not biophysically realistic. Moreover, each neuron possesses computational abilities currently untapped by neuron models, and these abilities greatly enhance its computational power. The interplay between the powerful information-processing capabilities of DNA and neural computation increases the known computational power of neurons further still. Finally, neurons engage in separate short-term and long-term computations simultaneously. It is no wonder that some claim each neuron is more powerful than a supercomputer.
And we haven’t even touched mirror neurons (neurons that somehow conceptually understand actions) with a twenty-foot pole.