The villainous monster recursion

In his paper on the sensorimotor theory of perception (O’Regan and Noë, BBS 2001), O’Regan uses the analogy of the “villainous monster”. I quote it in full:

“Imagine a team of engineers operating a remote-controlled underwater vessel exploring the remains of the Titanic, and imagine a villainous aquatic monster that has interfered with the control cable by mixing up the connections to and from the underwater cameras, sonar equipment, robot arms, actuators, and sensors. What appears on the many screens, lights, and dials, no longer makes any sense, and the actuators no longer have their usual functions. What can the engineers do to save the situation? By observing the structure of the changes on the control panel that occur when they press various buttons and levers, the engineers should be able to deduce which buttons control which kind of motion of the vehicle, and which lights correspond to information deriving from the sensors mounted outside the vessel, which indicators correspond to sensors on the vessel’s tentacles, and so on.”

The point is that all knowledge must come from the sensors and from the effect of actions on them, because there is simply no other source of knowledge. This point of view changes the computational problem of perception: instead of inferring objective properties of the physical world from the senses, the problem becomes finding relations between actions and sensor data.
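To make the engineers’ strategy concrete, here is a minimal sketch (my own illustration, not from the paper): the “monster” is a hidden permutation of the channels, and the agent probes each scrambled actuator with a test action, watching which sensor responds, thereby recovering the wiring purely from the relation between actions and sensor data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                  # number of actuators and sensors
monster = rng.permutation(n)           # hidden rewiring of the control cable

def sense(action):
    """Noisy sensor readings produced by an action, seen through the scrambled cable."""
    effect = np.zeros(n)
    effect[monster] = action           # actuator i now drives sensor monster[i]
    return effect + 0.1 * rng.normal(size=n)

# Press each "button" in turn (one-hot actions) and record the sensor responses.
responses = np.array([sense(np.eye(n)[i]) for i in range(n)])
recovered = responses.argmax(axis=1)   # strongest-responding sensor per actuator

print("hidden wiring:   ", monster)
print("recovered wiring:", recovered)  # matches `monster` up to the noise
```

With richer dynamics the engineers would look at the statistical structure of the changes rather than a single peak response, but the principle is the same.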

This remark is not specific to the brain. It would apply whether or not the perceptual system is made of neurons – for example, it could be an engineered piece of software for a robot. So what, in fact, is specific about the brain? The question is perhaps too broad, but I can at least name one specificity. The brain is made of neurons, and each neuron is a separate entity (with a membrane) that interacts with other neurons. Neurons are relatively elementary (compared to the entire organism) and essentially identical (broadly speaking). Each of these entities has sensors (dendrites) and can act by sending spikes through its axon (and also in other ways, but on a slower timescale).

So in fact we could apply the villainous monster concept at different levels. The highest level is the organism, with sensors (photoreceptors) and actuators (muscle contraction). At a lower level, we could consider a brain structure, for example the hippocampus, and see it as a system with sensors (spiking inputs to the hippocampus) and actuators (spiking outputs). What can be said about the relationship between actions and sensor inputs? In fact, we could arbitrarily define a system by making a graph cut in the connectivity graph of the brain (sketched below).

At the lowest level of analysis, we might analyze the neuron as a perceptual system, with a set of sensors (dendrites) and one possible action (to produce a spike). At this level, it may also be possible to define the same neuron as a different perceptual system by redefining the set of sensors and actions. For example, the sensors could be a number of state variables, such as the membrane potential at different points along the dendritic tree, the calcium concentration, and so on; the actions could be changes in channel densities, in synaptic weights, etc. This is not completely crazy, because in a way these sensed properties and the effects of cellular actions are all that the cell can ever know about the “outside world”.
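As a speculative sketch of the graph-cut idea (my own illustration, with a made-up toy connectivity graph): any subset of nodes in the brain’s connectivity graph defines a perceptual system, whose sensors are the edges crossing the cut inward and whose actions are the edges crossing it outward.

```python
from typing import Iterable

def system_from_cut(edges: Iterable[tuple[str, str]], inside: set[str]):
    """Return the (sensors, actions) induced by cutting around `inside`."""
    sensors = {(u, v) for u, v in edges if u not in inside and v in inside}
    actions = {(u, v) for u, v in edges if u in inside and v not in inside}
    return sensors, actions

# Toy connectivity graph (purely illustrative).
edges = [("retina", "thalamus"), ("thalamus", "cortex"),
         ("cortex", "hippocampus"), ("hippocampus", "cortex"),
         ("cortex", "muscle")]

# The hippocampus as a perceptual system: spiking inputs are its sensors,
# spiking outputs its actions.
print(system_from_cut(edges, {"hippocampus"}))
# ({('cortex', 'hippocampus')}, {('hippocampus', 'cortex')})
```

The same function applied to a single neuron, or to the whole organism, yields the other levels of the recursion.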

One might call this conceptual framework the “villainous monster recursion”. I am not sure where it could lead, but it seems intriguing enough to think about!
