In discussions of “neural coding” issues, I have often heard the idea that “the brain uses all available information”. This idea generally pops up in response to the observation that neural responses are complex and vary with stimuli in ways that are difficult to comprehend. In this variability there is information about stimuli, and as complex as the mapping from stimuli to neural responses may be, the brain might well be able to invert this mapping. I sympathize with the notion that neural heterogeneity is information rather than noise, but I believe that, phrased in this way, this idea reveals two important misconceptions.
First of all, there is often a confusion between sensitivity (responses vary along several stimulus dimensions) and information (you can recover these dimensions from the responses). I made this point in a specific paper two years ago (pdf). Neural responses are observed for a specific experimental protocol, which is always constrained to a limited set of stimuli. One can often recover stimulus dimensions from the responses within this set, but it is a mistake to conclude that the brain can do the same, because this inverse mapping depends on the particular experimental set of stimuli. In other words, the mapping is in fact from the observed neural responses, together with knowledge of the experimental protocol, to the stimulus. The brain does not have access to such external knowledge. Therefore, information is always highly overestimated in this type of analysis. This is in fact a classical problem in machine learning, related to the issues of training vs. test error, generalization and overfitting. The key concept is robustness: the hypothesized inverse mapping should be robust to large changes in the set of stimuli.
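To make this concrete, here is a minimal sketch with an entirely hypothetical population model (my own toy illustration, not the analysis in the paper above): a linear decoder is fitted to responses obtained within a narrow “experimental” stimulus range, then evaluated on a wider stimulus set. It looks informative within the protocol, but the implied inverse mapping is not robust to a change of stimulus set.

```python
# Toy sketch: a decoder fitted on a limited stimulus set can look informative
# within that set, yet fail badly once the stimulus set changes.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 20 neurons with fixed, arbitrary nonlinear tuning
n_neurons = 20
freqs = rng.uniform(1.0, 5.0, n_neurons)
phases = rng.uniform(0.0, 2.0 * np.pi, n_neurons)

def population_responses(stimuli):
    """Noisy nonlinear responses of the model population to a 1-D stimulus."""
    clean = np.cos(np.outer(stimuli, freqs) + phases)
    return clean + 0.1 * rng.standard_normal(clean.shape)

# "Experimental protocol": stimuli drawn from a narrow range only
s_protocol = rng.uniform(0.0, 0.3, 200)
R_protocol = population_responses(s_protocol)

# Least-squares linear readout fitted on the protocol data
w, *_ = np.linalg.lstsq(R_protocol, s_protocol, rcond=None)

# Decoding error on new stimuli from the same narrow set vs. a wider set
s_same = rng.uniform(0.0, 0.3, 1000)
s_wide = rng.uniform(0.0, 1.0, 1000)
err_same = np.mean((population_responses(s_same) @ w - s_same) ** 2)
err_wide = np.mean((population_responses(s_wide) @ w - s_wide) ** 2)
print(f"MSE within protocol: {err_same:.4f}")
print(f"MSE on a wider stimulus set: {err_wide:.4f}")  # much larger in this toy model
```

The point of the sketch is not the particular model, but that the fitted readout only “works” relative to the experimenter’s stimulus set, which the brain does not know.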
The second misconception is more philosophical, and has to do with the general investigation of “neural codes”. What is a code? It is a way of representing information. But sensory information is already present at the level of sensory inputs, and it is a theorem (the data processing inequality) that information can only decrease along a processing chain. So if we say that the goal of a code is only to represent the maximum amount of information about stimuli, then what is gained by having a second (central) code, which can only be a degraded version of the initial sensory inputs? Thinking in this way is in fact committing the homunculus fallacy: treating the neural responses as a projection of sensory inputs, which “the brain” then observes. This projection achieves nothing, for it still leaves unexplained how the brain makes sense of sensory inputs: nothing has been gained in terms of what these inputs mean. At some point there needs to be something other than just representing sensory inputs in a high-dimensional space.
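For reference, the theorem alluded to above is the data processing inequality: if the central representation Z is computed from the sensory input R, which is itself driven by the stimulus S (so that S → R → Z forms a Markov chain), then

I(S; Z) ≤ I(S; R).

In other words, no amount of processing of the sensory input can add information about the stimulus; at best it can preserve it.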
The answer, of course, is that the goal of a “neural code” is not just to represent information, but to do so in a way that makes relevant information easier to process. This is the answer provided by representational theories (e.g. David Marr's). Alternatively, you might argue that the very notion of a neural code is misleading, because the role of a perceptual system is not to encode sensory inputs but to guide behavior, and that it is therefore more appropriate to speak of computation rather than of a code. In either view, the relevant question when interpreting neural responses is not how the rest of the brain can make use of them, but rather how they participate in solving the perceptual problem. I believe one key aspect is behavioral invariance, for example the fact that you can localize a sound source independently of its level (within a certain range). Another key aspect is that the “code” should in some way be easier to decode for “neural observers” (not just for any observer).
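As a small illustration of what I mean by behavioral invariance (again a toy model of my own, not a claim about the actual auditory code): each ear's level confounds source direction with overall sound level, but a relative quantity such as the interaural difference does not, so it can support level-independent localization.

```python
# Toy sketch: individual ear levels depend on overall sound level,
# but the interaural level difference is invariant to it.
def ear_levels(azimuth_deg, level_db):
    """Hypothetical left/right intensities (dB) with a crude head-shadow model."""
    ild = 0.1 * azimuth_deg          # toy linear interaural level difference
    return level_db + ild / 2.0, level_db - ild / 2.0

for level in (50.0, 70.0):           # same direction (30 degrees), two overall levels
    left, right = ear_levels(30.0, level)
    print(f"{level:.0f} dB source: left ear = {left:.1f} dB (level-dependent), "
          f"left - right = {left - right:.1f} dB (level-invariant)")
```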