In a previous post, I noted that the concept of neural assembly is limited by the fact that it does not represent relations. This means that a complex thing such as a car or a face cannot be represented in this way. That might seem odd, since many authors claim that there are neurons or groups of neurons that code for faces (in inferotemporal cortex, IT). I believe there might again be some confusion between representation and information in Shannon’s sense. When it is stated that an assembly of neurons codes for a face, what is meant is that its activity stands for the presence of a face in the visual field. So in this sense the complex thing, the face, is represented, but the representation itself is not complex. With such a concept of representation, complex things can only be represented by removing all complexity.
This is related to the problem of invariant representations. How is it that we can recognize a face under different viewpoints and lighting conditions, and despite changes in hair style and facial expression? One answer is that there must be a representation that is invariant, i.e., a neural assembly that codes for the concept “Paul’s face” independently of the specific way it appears. However, this is an incomplete answer, for when I see Paul’s face, I can recognize that it’s Paul, but I can also see that he smiles, that I’m looking at him from the side, that he has dyed his hair black. It’s not that some process has removed all the details that are not constitutive of the identity of Paul’s face; rather, I am seeing everything that makes up Paul’s face, both in the way it usually appears and in the specific way it appears this time. So the fact that we can recognize a complex thing in an invariant way does not mean that the complexity itself is discarded. We still register this complexity, and our mental representation of a complex thing is indeed complex. As I argued before, the concept of neural assembly is too crude to capture such complexity.
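The point can be made concrete with a minimal sketch. This is not a neural model, and all names and fields here are hypothetical, invented only to contrast an “assembly” that merely stands for Paul’s face with a structured representation that recognizes Paul without discarding how his face appears this time:

```python
# Illustrative sketch only (all names hypothetical, not a neural model):
# an invariant label versus a structured representation of a face percept.
from dataclasses import dataclass, asdict

@dataclass
class FacePercept:
    identity: str     # the invariant part: whose face it is
    expression: str   # details that vary between encounters
    viewpoint: str
    hair_color: str

def assembly_code(p: FacePercept) -> bool:
    # Stands for "Paul's face is present"; every detail is lost.
    return p.identity == "Paul"

def structured_code(p: FacePercept) -> dict:
    # The invariant identity is still recoverable, but the complexity
    # of this particular appearance is retained alongside it.
    return asdict(p)

p = FacePercept("Paul", "smiling", "profile", "black")
assert assembly_code(p)                               # recognition is invariant
assert structured_code(p)["expression"] == "smiling"  # detail is not discarded
```

Both functions “recognize Paul” invariantly, but only the second is itself a complex representation, which is the distinction the argument turns on.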
The concept of invariance is even more interesting when applied to categories of objects, for example chairs. In contrast with Paul’s face, different chairs are not just different viewpoints on the same physical object; they really are different physical objects. They can have different colors, and widely different shapes and materials. They usually have four legs, but surely we would recognize a three-legged chair as such. What really makes a chair is that one can sit on it and rest one’s back against it. This is related to Gibson’s concept of “affordances”: Gibson argued that we perceive the affordances of things, i.e., the possibilities of interaction with them.
So now I could imagine that there is an assembly of neurons that codes for the category “chair”. This is fine, but it is only something that stands for the category; it does not describe what the category is. It is not the representation of an affordance. Representing an affordance would involve representing the potential actions one could take with the object. I do not know what kind of neural representation would be adequate, but it would certainly be more complex (i.e., structured) than a neural assembly.