A few ideas to improve the grant system

In France, the success rate for national grants (from the ANR) has dropped below 10%. Not surprising, since the budget has shrunk in recent years. Unfortunately, there is not much else available to labs (apart from European grants, which have a similar success rate). The situation is also quite difficult at the American NIH (although not as bad). With success rates so low, many scientists complain about the time wasted writing grants that in the end seem to be selected more or less arbitrarily. This is not just the frustration of losers: once again, a study has come out showing that peer review of grants is no better than chance when applied to the top grants. The public should be aware of this: writing a grant application takes a significant amount of time (not to mention the reviewers' time), and 90% of this taxpayer-funded time is completely wasted, in a process that is no better than a lottery. Does this really make sense?

How have we arrived at such an absurd situation? There are several factors. The first obvious error I see is funding scientific projects almost entirely through a competitive grant system. Only the best science should be funded, we hear. Makes sense. Really? Given that both the basic infrastructure (administration, space) and the salaries of permanent scientists are paid for anyway, what this advertising slogan means concretely is that only a small proportion of paid scientists will be given the means to work normally. How does that make sense? So the first obvious thing to do is to ensure a baseline that allows labs to work normally (e.g. pay for conferences, publications, basic equipment). Why would you permanently hire a scientist and then not give her the means to work? Absurd, but that is what the CNRS (the main French research organization) currently does. INSERM (the medical research organization) gives a start-up package to junior scientists; it covers only two years, but it is certainly an improvement.

The competitive grant system is quite recent in France. Many people didn't (and still don't) like it, partly for the obvious reasons I just discussed. Personally, I don't think it is a bad thing in principle. It has one virtue, which is to shift power from lab directors to individual scientists, who can be more independent (that is, if they get a grant). The problem is not so much that it exists, but that it has entirely replaced any other means of funding.

But a competitive grant system does not necessarily need to be based on projects. Great if you have a particular project that requires specific expensive equipment, for example, or a project that requires simultaneous funding of several partners. But in most cases I know, scientists use the grant system to fund the daily work of their lab and pitch it as a “project”, sometimes assembling arbitrary “partners” just because the system requires it. Everyone knows that the best way to get a grant is to already have the results but not have published them yet (the so-called “preliminary results” that grant reviewers like so much). Then you get the grant to fund the “preliminary results” of the next one. It is not always like that; sometimes you really do have a good project idea, although in most cases you will use the budget partly for things that are not in the grant anyway, since it is research, not an engineering project. This happens because the funding system is built this way. Scientists have no choice but to be cynical, which also results in a huge waste of resources. For programmatic funding (i.e. funding political priorities, say research on energy), a project-based grant system might make sense. But for basic research, all we ask of the funding system is to fund individual scientists or labs because it is believed that they will do good science. For experienced scientists, the best way to predict whether they will do good science is probably to just look at what they did before. Anyway, that is partly what grant reviewers do when they judge CVs and “preliminary results”, with the added bias of the storytelling skills that the project system emphasizes. So there should be a competitive system that allocates grants based on previous work. This system already exists: it is used for hiring and promotion; it also exists at the level of teams and labs, except that it has little impact on budgets. Let us just expand it and combine it with some long-term budget, and a lot of resources will be saved.

One possible objection is that it is difficult to estimate budgets when there is no specific project. True, but in reality exactly the same problem arises for project-based grants. Reviewers, who are often international scientists, cannot and do not judge budgets. In practice, the institution generally sets orders of magnitude for budgets, and applicants simply ask for the maximum (there are of course exceptions).

A second objection is that it does not work for junior scientists, since they do not have much to show on their CV. I disagree, because they are hired through a highly competitive process, which is based not so much on projects as on their achievements. So by design, the junior scientist has already gone through a highly selective process and should be granted a budget. This is simply called a start-up package, and it should be generalized. There is a valid concern, however, which is how to compare scientists at very different stages of their careers. But this does not sound like an impossible difficulty; one could, for example, use a system similar to the ERC's (starting/consolidator/advanced).

All this being said, I believe there is still a need for project-based grants, but with a smaller scope. Projects make sense when they require setting up a collaboration, buying specific equipment, or working within programmatic constraints; not so much, in general, for funding a single team or individual. I would like to make a final suggestion. If the project-based system still has a very low success rate, so low that selection is random, then there is a simple way to increase it and waste fewer resources. One suggestion made in the above-mentioned paper is to use a lottery. Indeed, if the process is no better than chance, we might as well dispense with the review process and save some time for reviewers. This is an improvement, but it still wastes the applicants' time, and I don't think it would be received very well. There is another way that also saves the applicants' time: let them apply only every other year (say, if you were born in an even year, you can apply only in even years). The number of funded grants would of course stay the same, if the budget is the same, but the success rate would automatically increase. Maybe not by a factor of two, but probably not far from it. Note that this is not the same as an individual strategy of applying only every two years; when selection is random, it is in your best interest to apply every year, because each application is independent. There is not even any reason to modify your application between successive years. Whereas if the success rate is forcibly increased by blocking half of the applications, applicants obtain the same individual success rate with half as many applications. In addition, it makes selection less random. This system is clearly better than a lottery, since you know in advance when you will be blocked and do not have to write a grant. You can also prepare better for the next round.
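To make the arithmetic concrete, here is a toy simulation in Python (my own illustration, not from any cited study; all the numbers are hypothetical) of a fixed yearly budget with purely random selection, with and without the every-other-year rule.

    import random

    # Toy model: a fixed number of grants is funded each year, winners drawn at random.
    # All numbers are hypothetical.
    n_applicants = 1000   # applicant pool
    n_funded = 100        # grants funded each year (same budget in both scenarios)
    n_years = 20

    def simulate(block_half):
        applications = 0
        grants = 0
        for year in range(n_years):
            if block_half:
                # only applicants whose (hypothetical) birth-year parity matches may apply
                pool = [a for a in range(n_applicants) if a % 2 == year % 2]
            else:
                pool = list(range(n_applicants))
            winners = random.sample(pool, n_funded)  # random selection among applications
            applications += len(pool)
            grants += len(winners)
        success_rate = grants / applications
        grants_per_scientist_per_year = grants / (n_applicants * n_years)
        return success_rate, grants_per_scientist_per_year

    print(simulate(block_half=False))  # -> (0.1, 0.1): 10% success rate
    print(simulate(block_half=True))   # -> (0.2, 0.1): success rate doubles,
                                       #    funding per scientist is unchanged

In this idealized setting the success rate exactly doubles; in practice the increase would be a bit smaller (for instance, not everyone applies every year), but the number of grants per scientist stays the same.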

In summary, I have made four complementary proposals:

  1. Ensure a baseline budget for each individual scientist or team, which allows them to work normally (conferences, publication costs, basic equipment).
  2. Have a competitive grant system based on past work rather than on hypothetical future work.
  3. Have a smaller competitive project-based grant system.
  4. Increase success rates of competitive systems (possibly both) above chance level by letting scientists apply only every two years.

None of these proposals depends on the total budget (which is clearly too low in France, but that is an obvious point).


A book on the theory of action potentials


I am writing a book on the theory of action potentials. I will post chapters on this website as I write them. Note, however, that this is essentially preparatory work for the book, and I will probably rewrite and reorganize it quite extensively. So do not expect a very well organized and didactic text at this stage; only the essential content should remain. I would be happy to read your comments, in particular if you find errors or omissions: please let me know and I will reward you with your name in the acknowledgments!

The plan is to start with the standard biophysics of excitability and then move on to more advanced topics, such as: how spikes initiate in real life (as opposed to when you stimulate an axon), how excitability changes on different time scales, and how a cell learns to spike. The book adopts a systemic viewpoint; that is, the goal is to understand how the coordination of channels creates and maintains functional action potentials. I would also like to give it an epistemological flavor that I find missing in most textbooks: what a model is, how it is built and tested, what its empirical value is, etc.

Why this book, and who is it for? With this book, I am hoping to bring theoreticians to the field of neural excitability and to give them the necessary material, which is currently scattered across many references. At present, the field is largely dominated by experimental biologists. Yet, as I will try to convey, this is a field where one can ask many key neuroscientific questions, including questions of learning and adaptation, in a context where the link between structure and function is much less speculative than in neural network research, and where one can actually develop quantitative, testable theories. As a bonus, I would also be happy if I could manage to convey some elements of theory to biologists.

Prerequisites. In principle you do not need to know much about biology to read this book, as I will try to introduce the necessary information. I am expecting some mathematical skills, mostly calculus and the basics of differential equations, but nothing very advanced. Regarding physics, electrophysiology is obviously largely about electricity. In the current version, I am assuming that the reader has some basic knowledge of electricity (what currents and charges are, Ohm's law), but I am planning to add a primer on electricity.

Each chapter is accompanied by a set of examples using the Brian 2.0 simulator, in the form of Jupyter notebooks.
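To give an idea of the style of these examples, here is a minimal sketch (my own illustration, not the actual notebook content) of the kind of model covered in chapter 3: the classic Hodgkin-Huxley membrane model, simulated with Brian 2 in response to a current step. The membrane area and stimulation current are arbitrary choices, and the voltage convention is the original one (rest at 0 mV).

    from brian2 import *
    import matplotlib.pyplot as plt

    # Membrane and channel parameters of the original squid axon model
    area = 20000*umetre**2
    Cm = 1*ufarad*cm**-2 * area
    gl = 0.3*msiemens*cm**-2 * area
    g_na = 120*msiemens*cm**-2 * area
    g_kd = 36*msiemens*cm**-2 * area
    El = 10.613*mV
    ENa = 115*mV
    EK = -12*mV

    # The model is written out as equations: leak, sodium (m^3 h) and potassium (n^4)
    eqs = '''
    dv/dt = (gl*(El - v) + g_na*m**3*h*(ENa - v) + g_kd*n**4*(EK - v) + I) / Cm : volt
    dm/dt = alpham*(1 - m) - betam*m : 1
    dh/dt = alphah*(1 - h) - betah*h : 1
    dn/dt = alphan*(1 - n) - betan*n : 1
    alpham = 0.1/mV*(25*mV - v)/(exp((25*mV - v)/(10*mV)) - 1)/ms : Hz
    betam = 4*exp(-v/(18*mV))/ms : Hz
    alphah = 0.07*exp(-v/(20*mV))/ms : Hz
    betah = 1/(exp((30*mV - v)/(10*mV)) + 1)/ms : Hz
    alphan = 0.01/mV*(10*mV - v)/(exp((10*mV - v)/(10*mV)) - 1)/ms : Hz
    betan = 0.125*exp(-v/(80*mV))/ms : Hz
    I : amp
    '''

    neuron = NeuronGroup(1, eqs, method='exponential_euler')
    neuron.v = 0*mV    # resting potential in the original convention
    neuron.I = 10*nA   # constant current step (arbitrary suprathreshold value)
    M = StateMonitor(neuron, 'v', record=0)

    run(50*ms)
    plt.plot(M.t/ms, M.v[0]/mV)
    plt.xlabel('t (ms)')
    plt.ylabel('v (mV)')
    plt.show()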

I am also compiling a general bibliography on action potential theory (books and reviews only).

Here is a tentative outline (chapters that are already available are those with a last-update date):

  1. Action potentials. An overview of action potentials, their scientific history and their function. Brian notebooks for chapter 1. Last update: 9.6.2016.
  2. The membrane potential. The biophysical basis of membrane polarization. Brian notebooks for chapter 2. Last update: 9.6.2016.
  3. Action potential of an isopotential membrane. Basic biophysics of the squid giant axon and Paramecium, and the Hodgkin-Huxley model. Brian notebooks for chapter 3. Last update: 19.7.2016.
  4. Excitability of an isopotential membrane. Theoretical analysis of excitability in isopotential models [Some content to be added on excitability types]. Last update: 14.3.2017.
  5. Propagation of action potentials (technical draft). The cable equation; active propagation in unmyelinated and myelinated axons. Last update: 13.4.2017.
  6. Spike initiation in the axonal initial segment (incomplete draft). Excitation through the soma and AIS, as opposed to excitation of the middle of a squid axon. Last update: 5.4.2018.
  7. Dynamics of excitability. How excitability (spike threshold) changes at different time scales (adaptation and plasticity).
  8. Energy consumption.
  9. Learning to spike. How a cell builds and maintains a functional spiking system.

 


Frankenstein models and the double-edged sword of modelling tools

The reason we develop simulation tools is to boost productivity. Coding numerical integration schemes for neuron models can be rather time-consuming, and we want to invest our time in the science rather than in the implementation. It can also be tricky; for example, simulating the cable equation in an efficient and accurate way is quite a bit of work. This is why we developed the Brian simulator, and presumably why the Neuron simulator was developed. As these tools become successful, scientists start to publish and share their models. There is, for example, the modeldb database, a great resource for finding model code from published papers. In turn, this can yield a great increase in productivity, because models can be reused. This second productivity benefit of tools is also a motivation for the development of common formats for models and data.

On paper, this looks great. I'm all in favor of encouraging everyone to share everything they can in the most convenient way. However, I think we should be aware that the development of model databases also encourages problematic modeling practices. Making a tool entails hiding exactly what the tool does and how it does it. This can be a good thing; you don't really need to know how the cable equation is numerically integrated to understand it. But when it comes to the models themselves, I think it's a problem. The meaning of a model is specific to the context in which it was conceived. For example, the original Hodgkin-Huxley model is a model of a space-clamped squid giant axon, which actually results from the fusion of hundreds of cells; it is not a neuron model. So taking pieces of published models to build a new model for a different scientific question and context is a delicate exercise. Unfortunately, model sharing encourages the development of what one might call “Frankenstein models”.

Let me be more specific about what I mean by “Frankenstein models”. Most detailed biophysical neuron models that people use today (see e.g. modeldb) use ion channel models derived from different species, different cell types and brain areas, and different experimental conditions. Those models are then almost always hand-tuned, i.e. parameters are changed relative to measurements, because a model never works when you just put disparate things together. So, for example, one shifts the activation curve of a channel so as to obtain the right spike threshold. If you examine a modern detailed model, you will thus find not only that the ion channels are aggregates from various conditions and species, but also that its components carry a history of hand-tuning done for various unrelated problems. That these detailed models are considered realistic puzzles me (and by these, I mean just about all published models except perhaps the original Hodgkin-Huxley model).

That these models are not realistic might not be so bad if one were lucid about it (i.e., acknowledged the assumptions and limitations of the model). Unfortunately, there seems to be a general lack of education in neuroscience about epistemological questions related to modeling, and one often seems happy to observe that the model “has sodium channels” with experimentally measured properties. So I will make a few remarks to emphasize that to “have sodium channels” is essentially meaningless. Take the original (isopotential) Hodgkin-Huxley model. It is a model of the space-clamped squid giant axon, i.e., obtained by inserting a metal wire into the axon. As I mentioned above, that axon is actually an exception to the neuron doctrine, i.e., a syncytium resulting from the fusion of hundreds of cells: it is not, and has never been, a neuron model. Secondly, the exponents in the channel currents (the number of gates) were obtained by optimizing the spike shape. The potassium current has exponent 4 only because Hodgkin and Huxley could not try higher exponents, given the computers of the time. It was recognized later that the potassium current is actually more delayed, and a higher exponent would be needed. You can still read in the literature today that the squid giant axon's spike is highly energetically inefficient, but that actually refers to the HH model; with a better computer, the squid's spike would have turned out to be more efficient!

Then one must realize that there is no such thing as “the sodium channel”. A sodium channel consists of a main unit, called the alpha subunit, of which there are 9 subtypes (corresponding to 9 genes), and a number of beta subunits. The function of the channel depends on the subtype and on which beta subunits are expressed. But that is not all. The alpha subunits can also take different forms, through alternative splicing. More importantly, their function can be modulated in many ways; for example, they can be phosphorylated by enzymes, which can change every aspect of their kinetics (activation and inactivation curves and dynamics). All of this depends not only on the cell type (brain area, etc.) but on the individual cell, and it can vary over time. To top it all off, the properties of different channels (say, sodium and potassium) are also co-tuned in cells (not surprisingly, since various compounds such as PKA modulate many types of channels). So when you measure channel properties in an oocyte, or simply in different cells, and put things together, there is close to zero probability that you will get a model that produces normal function (not to mention the subcellular localization of channels, which is highly heterogeneous). This is why, in the end, all models are the result of some more or less laborious hand-tuning. In general, the more detailed the model, the more laborious the tuning, and most likely the more inaccurate the result. I recently published a review in which I argued that the integrate-and-fire model is actually more realistic than isopotential Hodgkin-Huxley models, but there is a general point: the more detailed a model, the more likely it is to be plain wrong. Worse, detailed models are likely to be wrong in unpredictable ways.

Model databases and standards promote the illusion that realistic models can be built simply by reusing bits of previous models, resting on overly reductionist views of living systems when systemic views should prevail. This leads to the spread of Frankenstein models with a dangerous illusion of realism.

What should we do about it? I still think that sharing models is good and should be encouraged. But how can we avoid the spread of Frankenstein models? There are at least two directions I can think of. First, the way models are communicated and shared: it should be about science, not about computers. If all we do is load a file into our software tool and never actually look at the model itself, then we're doomed. There is a trend of developing standards for models so as to facilitate exchange. I think these standards should not go beyond the mathematical formulation of the models, for otherwise we throw away all the context in which the models are supposed to be meaningful. This is also why, in the Brian simulator, we insist that models be specified by their equations, and not by pre-designed components (e.g. “the sodium channel”); a minimal sketch follows below. In this way it is clear exactly which model is used, and we do not introduce modeling biases. The second direction is to educate scientists about epistemological and scientific questions in modeling. But that won't be easy!
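For concreteness, here is a minimal sketch (my own illustration, with arbitrary parameter values) of this equation-oriented style: a leaky integrate-and-fire cell in Brian 2, where the model is exactly the equation string you can read in the script, with nothing hidden behind a pre-built component.

    from brian2 import *

    tau = 10*ms
    # The model is the visible equation string; all assumptions are in plain sight.
    eqs = '''
    dv/dt = (v0 - v) / tau : volt
    v0 : volt (constant)
    '''
    group = NeuronGroup(1, eqs, threshold='v > -50*mV', reset='v = -70*mV',
                        method='euler')
    group.v = -70*mV
    group.v0 = -45*mV   # constant drive above threshold, so the cell fires regularly

    spikes = SpikeMonitor(group)
    run(200*ms)
    print(spikes.count[0], 'spikes')

Changing the model means editing these equations rather than picking an opaque pre-built object, which keeps the modeling assumptions explicit.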
