- Frank (1954) - General Theory of Heart-Vector Projection. Classic electrophysiological theory of the heart.
- Kim and Rutherford (2016) - Maturation of NaV and KV Channel Topographies in the Auditory Nerve Spike Initiator before and after Developmental Onset of Hearing Function.
- Benned-Jensen et al (2016) - Live Imaging of Kv7.2/7.3 Cell Surface Dynamics at the Axon Initial Segment- High Steady-State Stability and Calpain-Dependent Excitotoxic Downregulation Revealed.
- Gao et al (2016) - Universal resilience patterns in complex networks. Mathematical work on robustness.
- Phillips and Milo (2009) - A feeling for the numbers in biology. An introduction to the bionumbers database.
- Le Bihan and Johansen-Berg (2012) - Diffusion MRI at 25- Exploring brain tissue structure and function. Diffusion of water in the brain.
- Tasaki et al (1985) - Rapid swelling of neurons during synaptic transmission in the bullfrog sympathetic ganglion. Electroosmotic effects in neurons.
- Bressloff and Levien (2015) - Synaptic Democracy and Vesicular Transport in Axons. Mathematical work on axonal transport.
- Zhang et al (2016) - Axon Membrane Skeleton Structure is Optimized for Coordinated Sodium Propagation. Functional significance of the ring structure of the axon.
A book on the theory of action potentials
I am writing a book on the theory of action potentials. I will post chapters on this website as I write them. Note however that this is essentially preparatory work for the book, and I am probably going to rewrite and reorganize it quite extensively. So do not expect a very well organized and didactic text at this stage; only the essential content should remain. I would be happy to read your comments, in particular if you find errors or omissions: please let me know and I will reward you with your name in the acknowledgments!
The plan is to start with standard biophysics of excitability, and then to move on to more advanced topics, such as: how spikes initiate in real life (as opposed to when you stimulate an axon), how excitability changes on different time scales, and how a cell learns to spike. The book adopts a systemic viewpoint; that is, the goal is to understand how the coordination of channels creates and maintains functional action potentials. I would also like to give it an epistemological flavor that I find missing in most textbooks: what is a model, how is it built and tested, what is its empirical value, etc.
Why this book and who is it for? With this book, I am hoping to bring theoreticians to the field of neural excitability, and to give them the necessary material that is currently scattered over many references. The field is largely dominated by experimental biologists. Yet, as I will try to convey, it is a field where one can ask many key neuroscientific questions, including questions of learning and adaptation, in a context where the link between structure and function is much less speculative than in neural network research, and where one can actually develop quantitative, testable theories. As a bonus, I would also be happy if I could manage to convey some elements of theory to biologists.
Prerequisites. In principle you do not need to know much about biology to read this book, as I will try to introduce the necessary information. I am expecting some mathematical skills, mostly calculus and the basics of differential equations, but nothing very advanced. Regarding physics, electrophysiology is obviously a lot about electricity. In the current version, I am assuming the reader has some basic knowledge of electricity (what currents and charges are, Ohm's law). But I am planning to add a primer on electricity.
Each chapter is accompanied by a set of examples using the Brian 2.0 simulator, in the form of Jupyter notebooks.
I am also compiling a general bibliography on action potential theory (books and reviews only).
Here is a tentative outline (available chapters are in bold):
- Action potentials. An overview of action potentials, their scientific history and their function. Brian notebooks for chapter 1. Last update: 9.6.2016
- The membrane potential. The biophysical basis of membrane polarization. Brian notebooks for chapter 2. Last update: 9.6.2016
- Action potential of an isopotential membrane. Basic biophysics of the squid giant axon and Paramecium, and the Hodgkin-Huxley model. Brian notebooks for chapter 3. Last update: 19.7.2016
- Excitability of an isopotential membrane. Theoretical analysis of excitability in isopotential models [Some content to be added on excitability types]. Last update: 14.3.2017.
- Propagation of action potentials (technical draft). The cable equation; active propagation in unmyelinated and myelinated axons. Last update: 13.4.2017.
- Spike initiation in the axonal initial segment (incomplete draft). Excitation through the soma and AIS, as opposed to excitation of the middle of a squid axon. Last update: 5.4.2018.
- Dynamics of excitability. How excitability (spike threshold) changes at different time scales (adaptation and plasticity).
- Energy consumption.
- Learning to spike. How a cell builds and maintains a functional spiking system.
This week's paper selection (10 – 17 Feb 2016)
- Sillar et al (2008) - Neuromodulation and developmental plasticity in the locomotor system of anuran amphibians during metamorphosis. Metamorphosis: the ultimate plasticity.
- Bertrand and Cazalets (2013) - Activity-dependent Synaptic Plasticity and Metaplasticity in Spinal Motor Networks. Plasticity of motor control.
- LeMasson et al (1993) - Activity-dependent regulation of conductances in model neurons. A bit (too) short, but a seminal paper on intrinsic plasticity.
- Abbott and LeMasson (1993) - Analysis of Neuron Models with Dynamically Regulated Conductances. The same, but more detailed (interesting analysis).
- Henson et al (1994) - The Baroreceptor Reflex- A Biological Control System with Applications in Chemical Process Control. Cardiovascular control.
This week's paper selection (3-10 Feb 2016)
- Celani et al (2014). Odor Landscapes in Turbulent Environments. The ecological problem of odor localization.
- Lewandowska et al (2015). Recording Large Extracellular Spikes in Microchannels along Many Axonal Sites from Individual Neurons. Interesting system to monitor axon activity in cultures.
- Bertrand and Cazalets (2013). Activity-dependent Synaptic Plasticity and Metaplasticity in Spinal Motor Networks. It all started there!
- Vergassola et al (2007). 'Infotaxis' as a strategy for searching without gradients. Possibly an algorithm for odor localization in realistic environments.
- Fukson et al (1980). The Spinal Frog Takes into Account the Scheme of Its Body During the Wiping Reflex. Fascinating!
Frankenstein models and the double-edged sword of modelling tools
The reason we develop simulation tools is to boost productivity. Coding numerical integration schemes for neuron models can be rather time-consuming, and we want to invest our time in the science, rather than in the implementation. It can also be tricky. For example, simulating the cable equation in an efficient and accurate way is quite a bit of work. This is why we developed the Brian simulator, and presumably why the Neuron simulator was developed. As those tools become successful, scientists start to publish and share their models. There is for example the modeldb database, a great resource to find model code from published papers. In turn, this can yield a great increase in productivity, because models can be reused. This second productivity benefit of tools is also a motivation for the development of common formats for models and data.
On paper, this looks great. I'm all in favor of encouraging everyone to share everything they can in the most convenient way. However, I think we should be aware that the development of model databases also encourages problematic modeling practices. Making a tool entails hiding what exactly the tool does and how it does it. This can be a good thing; you don't really need to know how the cable equation is numerically integrated to understand it. But when it comes to the models themselves, I think it's a problem. The meaning of a model is specific to the context in which it was conceived. For example, the original Hodgkin-Huxley model is a model of a space-clamped squid giant axon, which actually results from the fusion of hundreds of cells; it is not a neuron model. So taking pieces of published models to build a new model for a different scientific question and context is a delicate exercise. Unfortunately, model sharing encourages the development of what one might call “Frankenstein models”.
Let me be more specific about what I mean by “Frankenstein models”. Most detailed biophysical neuron models that people use today (see e.g. modeldb) use ion channel models that come from different species, different cell types and brain areas, and different experimental conditions. Those models are then almost always hand-tuned, i.e. parameters are changed relative to the measurements, because a model never works when you just put disparate pieces together. For example, one would shift the activation curve of a channel so as to obtain the right spike threshold. If you examine a modern detailed model, you will therefore find not only that the ion channels are aggregates from various conditions and species, but also that its components carry a history of hand-tuning done for various unrelated problems. That these detailed models are considered realistic puzzles me (and by these, I mean just about all published models except perhaps the original Hodgkin-Huxley model).
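To make this hand-tuning concrete, here is a minimal sketch (my own toy illustration, with made-up numbers, not taken from any published model): the half-activation voltage of a Boltzmann activation curve, measured in one preparation, is shifted by an ad hoc amount so that the assembled model ends up with the desired spike threshold.

```python
import numpy as np

def m_inf(v, v_half, k):
    """Steady-state (Boltzmann) activation curve of a voltage-gated channel."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / k))

# Hypothetical numbers: half-activation voltage as measured in one preparation
# (say, an expression system at room temperature).
v_half_measured = -30.0   # mV
k = 6.0                   # mV (slope factor)

# Hand-tuning step: the curve is shifted so that the assembled model produces
# the "right" spike threshold. The shift is a fitting convenience, not a measurement.
shift = -8.0              # mV, chosen by the modeller

v = np.linspace(-80.0, 20.0, 1001)
curve_measured = m_inf(v, v_half_measured, k)
curve_tuned = m_inf(v, v_half_measured + shift, k)

# Report where each curve crosses 50% activation
idx_m = np.argmin(np.abs(curve_measured - 0.5))
idx_t = np.argmin(np.abs(curve_tuned - 0.5))
print("Half-activation: measured %.1f mV, hand-tuned %.1f mV" % (v[idx_m], v[idx_t]))
```

The final model then contains this silent 8 mV shift, with no record of the context that originally gave the measured value its meaning.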
That these models are not realistic might not be that bad, if one were lucid about the question (i.e., acknowledged the assumptions and limitations of the model). Unfortunately, there seems to be a general lack of education in neuroscience about epistemological questions related to modeling, and one often seems happy to observe that the model “has sodium channels” with experimentally measured properties. So I will make a few remarks to emphasize that to “have sodium channels” is essentially meaningless. Take the original (isopotential) Hodgkin-Huxley model. It is a model of the space-clamped squid giant axon, i.e., obtained by inserting a metal wire in the axon. As I mentioned above, that axon is actually one exception to the neuron doctrine, i.e., it is a syncytium resulting from the fusion of hundreds of cells: it is not, and has never been, a neuron model. Secondly, the exponents in the channel currents (the number of gates) were obtained by optimizing the spike shape. The potassium current has exponent 4 only because higher exponents could not be tried on the computers of the time. It was recognized later that the potassium current is actually more delayed and a higher exponent would be needed. You can still read in the literature today that the squid giant axon's spike is highly energetically inefficient, but that claim actually refers to the HH model; with a better computer, the squid's spike would have turned out to be more efficient!
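For readers who have not seen the model written out, the “exponent” is the power of the gating variable in the channel current; the Hodgkin-Huxley delayed-rectifier potassium current has the standard textbook form

```latex
I_K = \bar{g}_K \, n^4 \, (V - E_K), \qquad
\frac{dn}{dt} = \alpha_n(V)\,(1 - n) - \beta_n(V)\, n
```

Raising n to a higher power makes the current rise with a longer delay after a voltage step, which is why capturing the more delayed potassium current would require a larger exponent.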
Then, one must realize that there is no such thing as “the sodium channel”. A sodium channel consists of a main unit, called the alpha subunit, of which there are 9 subtypes (corresponding to 9 genes), and a number of beta subunits. The function of the channel depends on the subtype and on which beta subunits are expressed. But that is not all. The alpha subunits can also take different forms, by alternative splicing. More importantly, their function can be modulated in many ways; for example they can be phosphorylated by enzymes, which can change every aspect of their kinetics (activation and inactivation curves and dynamics). All these things depend not only on cell type (brain area, etc.) but on the individual cell, and they can vary over time. To top it all off, the properties of different channels (say sodium and potassium) are also co-tuned in cells (not surprisingly, since modulators such as PKA act on many types of channels). So when you measure channel properties in an oocyte, or simply in different cells, and put things together, there is close to zero probability that you will get a model that produces normal function – not to mention the subcellular localization of channels, which is highly heterogeneous. This is why, in the end, all models are the result of some more or less laborious hand-tuning. In general, the more detailed the model, the more laborious the tuning and the more likely it is to be inaccurate. I recently published a review where I argued that the integrate-and-fire model is actually more realistic than isopotential Hodgkin-Huxley models, but there is a general point: the more detailed a model, the more likely it is to be plain wrong. Worse, detailed models are likely to be wrong in unpredictable ways.
Model databases and standards promote the illusion that realistic models can be built simply by reusing bits of previous models; they rest on overly reductionist views of living systems, when systemic views should prevail. This leads to the spread of Frankenstein models with a dangerous illusion of realism.
What should we do about it? I still think that sharing models is good and should be encouraged. But how do we avoid the spread of Frankenstein models? There are at least two directions I can think of. First, the way models are communicated and shared: it should be about science, not about computers. If all we do is load a file into our software tool and never actually look at the model itself, then we're doomed. There is a trend of developing standards for models, so as to facilitate exchange. I think these standards should not go beyond the mathematical formulation of the models, for otherwise we throw away all the context in which the models are supposed to be meaningful. This is also the reason why, in the Brian simulator, we insist that models are specified by their equations, and not by pre-designed components (e.g. “the sodium channel”). In this way it is clear what specific model is used, and we do not introduce modeling biases. The second direction is to educate scientists about epistemological and scientific questions in modeling. But that won't be easy!
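As an illustration of that design choice, here is a minimal Brian 2 sketch (a toy model with made-up parameter values, not taken from the book or from any database): a membrane equation with a single n⁴ potassium current, written out in full, so that the activation curve, time constant and reversal potentials are all explicit in the script rather than hidden inside a pre-built “potassium channel” component.

```python
from brian2 import *

# Toy parameter values (hypothetical, for illustration only)
C = 200*pF
gl, gK = 10*nS, 300*nS
El, EK = -65*mV, -90*mV
vhalf, kslope = -30*mV, 10*mV   # activation curve of the K current
tau_n = 5*ms

# The model is the set of equations below; nothing else is implied.
eqs = '''
dv/dt = (gl*(El - v) + gK*n**4*(EK - v) + I)/C : volt
dn/dt = (ninf - n)/tau_n : 1
ninf = 1/(1 + exp((vhalf - v)/kslope)) : 1
I : amp
'''

group = NeuronGroup(1, eqs, method='exponential_euler')
group.v = El
group.I = 150*pA   # constant depolarizing current
run(100*ms)
```

Whether such a toy model is appropriate for a given question is of course a scientific judgment, but at least every assumption is on the table in the script itself.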
This week's paper selection (27 Jan - 3 Feb 2016)
- Cuntz et al (2010) - One Rule to Grow Them All- A General Theory of Neuronal Branching and Its Practical Application. A formalism to synthesize dendritic morphologies.
- Williams et al (2016) - Dendritic trafficking faces fundamental speed-precision tradeoffs. How to produce spatial gradients of channels along dendrites (I guess relevant for axons too).
- Roland et al (2014) - Cannabinoid-induced actomyosin contractility shapes neuronal morphology and growth. Apparently neurons are contractile?!
- Brachet et al (2010) - Ankyrin G restricts ion channel diffusion at the axonal initial segment before the establishment of the diffusion barrier. Somehow related to the question above.
- Anderson and Mackie (1977) - Electrically coupled, photosensitive neurons control swimming in a jellyfish. Sounds fun!
- Harvey and Bovell (2006) - Spontaneous alternation behavior in Paramecium. Paramecium has short-term memory!
- Araya and Brownell (2015) - The Neuromechanics of Hearing. An intriguing parallel between axon initial segment and electromotility in outer hair cells.
- Park et al (2016) - Neurally Encoding Time for Olfactory Navigation. Odor localization based on timing.
- Apostolides et al (2016) - Axonal Filtering Allows Reliable Output during Dendritic Plateau-Driven Complex Spiking in CA1 Neurons.
- Astman et al (1998) - Activation of Protein Kinase C Increases Neuronal Excitability by Regulating Persistent Na+ Current in Mouse Neocortical Slices. PKC lowers spike threshold (probably via axonal Nav1.6).
- Franceschetti et al (2000) - Protein kinase C-dependent modulation of Na+ currents increases the excitability of rat neocortical pyramidal neurones. Apparently because of a depolarizing shift of Nav inactivation.
This week's paper selection (20-27 Jan 2016)
- Axelrod (1983). Lateral Motion of Membrane Proteins and Biological Function. On the diffusion of proteins in the membrane (the membrane is fluid).
- Schlaepfer and Wessel (2015). Excitable Membranes and Action Potentials in Paramecia- An Analysis of the Electrophysiology of Ciliates. The Journal of Undergraduate Neuroscience Education suggests Paramecium for learning electrophysiology.
- Cembrowski et al (2016). Spatial Gene-Expression Gradients Underlie Prominent Heterogeneity of CA1 Pyramidal Neurons. Cells of the same type are not all the same! (oh no!) (or: “cell type” doesn't really exist)
- Ling (1982) - The cellular resting and action potentials- interpretation based on the association-induction hypothesis. Probably wrong but fascinating: Gilbert Ling (inventor of the microelectrode, nominated for a Nobel prize in the 1950s) insists that the membrane theory is wrong: membrane polarization is not based on movements of ions and pumps, but on adsorption/desorption of K+ by proteins (i.e. the intracellular medium is not a solution).
- Zeuthen (1978) - Intracellular gradients of ion activities in the epithelial cells of the Necturus gallbladder recorded with ion-selective microelectrodes. More about the effect of electrical fields inside cells.
- Tomassy et al (2014) - Distinct Profiles of Myelin Distribution Along Single Axons of Pyramidal Neurons in the Neocortex. Puzzling: why are myelination patterns so weird?
- Ferrante et al (2013) - Functional Impact of Dendritic Branch-Point Morphology. Interesting: the precise way the membrane splits at branch points is electrically important.
This week's paper selection (13-20 Jan 2016)
- Konur and Ghosh (2005) - Calcium Signaling and the Control of Dendritic Development. How does the dendritic tree get to take the shape it has?
- Ramoino et al (2003) - Swimming behavior regulation by GABAB receptors in Paramecium. Do Paramecium cells chemically inhibit each other?
- Bucci et al (2005) - A role for GABAA receptors in the modulation of Paramecium swimming behavior.
- Fehmi and Bullock (1967) - Discrimination among temporal patterns of stimulation in a computer model of a coelenterate nerve net. Models of nerve nets (decentralized nervous systems).
- Smith and Bullock (1990) - Model nerve net can produce rectilinear, non-diffuse propagation as seen in the skin plexus of sea urchins.
- Nickel (2010) - Evolutionary emergence of synaptic nervous systems- what can we learn from the non-synaptic, nerveless Porifera? Sponges!
- Bullock (1970) - The Reliability of Neurons. Bullock points out that variability is not the same as noise (see my review on the rate vs. spike debate).
- von Bekesy (1964) - Olfactory analog to directional hearing. Binasal smelling, with experiments.
- Eckert and Naitoh (1972) - Bioelectric Control of Locomotion in the Ciliates. Nice review.
- Golowasch et al (1999) - Activity-Dependent Regulation of Potassium Currents in an Identified Neuron of the Stomatogastric Ganglion of the Crab Cancer borealis. Yes, neurons learn to spike.
The magic of Popper vs. the magic of Darwin
In a previous post, I pointed out that Darwin's theory of evolution is incomplete, because it does not explain why random variations are not arbitrary. The emergence of complex adapted living beings does not follow from the logical structure of the explanation: introduce random variations and select the best ones. I gave the example of binary programs: introduce random variations in the code and select the best ones. It doesn't work. Yet the Darwinian explanation has something self-evident in it: if you select the best variations around something, then you should end up with something better. The fallacy is that a variation does not necessarily result in better and worse outcomes with comparable probabilities. For a program, changing one bit generally results in a faulty program. This is why no one in software engineering uses the Darwinian method to write any program.
For the record, and even though it should be self-evident, I am not advocating for “intelligent design” of any sort. I find the debate between educated creationists (ie intelligent design advocates) and neo-Darwinians generally quite disappointing. Creationists would point out some mysterious aspect of life and evolution, and assume that anything mysterious must be divine. Neo-Darwinians would respond that there is nothing mysterious. I would think that a scientific attitude is rather to point out that mysterious does not imply divine, and to try to understand that mysterious thing.
Later it occurred to me that the same fallacy occurs in epistemology, namely in Karl Popper's view of science, probably the most influential epistemological theory among scientists. Popper proposed that a scientific statement, as opposed to a metaphysical statement, is one that can be falsified by an observation. A scientific statement is nothing more than a logical proposition, and you can make an infinite number of such propositions. To distinguish between them, you need to do experiments that falsify some of them. So many scientists seem to think that the scientific process consists in designing experiments that can distinguish between different theories. This explains the focus on tools that I have talked about before.
There is a rather direct analogy with Darwin's theory of evolution. The focus is on selection, but this neglects a critical aspect of the process, the creative process: how do you come up with candidate theories in the first place? I discussed this problem in a previous post. For any given set of observations, there is an infinite number of logical statements that are consistent with it. Therefore, you cannot deduce theories from observations; this is the problem of induction. How then do scientists propose theories, and why do we test some theories and not others that would have the same degree of logical validity (e.g. theory = existing set of observations + random prediction of a new observation)? This is what we might call the hard problem of epistemology, in reference to the hard problem of consciousness. Popper doesn't address that problem, yet it is critical to the scientific process. How about this epistemological process:
- Consider a set of theories, which are logical propositions.
- Select the best ones using Popper's falsificationism.
- Allow those theories to reproduce, kill the other ones.
- Introduce random variations, e.g. randomly add/remove quantifiers or variables.
- Repeat.
How well would that work? Science does seem to follow a sort of evolutionary process, but selection itself is not sufficient to explain it; one also needs to explain the creative process.
It is true that Popper has correctly identified experiment as a central aspect of science, just as Darwin has correctly identified selection as a central aspect of evolution. But Popper does not address the hard problem of epistemology, just as Darwin does not address the hard problem of evolution, and Tononi does not address the hard problem of consciousness.
This week's paper selection (6 – 13 Jan 2016)
- Berg (1975) - Bacterial behaviour. On the tumble-and-swim behavior of bacteria.
- Purcell (1977) - Life at low Reynolds number. On the physics of bacteria movements.
- Wu et al (2016) - Substantial contribution of extrinsic risk factors to cancer development. Contradicts a previous Science paper claiming that cancer is just due to bad luck (my comment at the time).
- Hodgkin (1939) - The relation between conduction velocity and the electrical resistance outside a nerve fibre. Hodgkin immerses an axon in oil to show that conduction velocity decreases, confirming the idea that the action potential propagates through local current loops involving the extracellular medium.
- Schuetze (1983) - The discovery of the action potential. A pretty good historical paper.
- Bernstein (1868) - Ueber den zeitlichen Verlauf der negativen Schwankung des Nervenstroms. Julius Bernstein describes the first electrophysiological recording of an action potential, using a sample-and-hold device with a galvanometer. 1868!
- Elgeti et al (2014) - Physics of Microswimmers – Single Particle Motion and Collective Behavior. Self-explanatory.
- Tu et al (2008) - Modeling the chemotactic response of Escherichia coli to time-varying stimuli. Self-explanatory.
