The behaviour of this model of a neuron is nice: a huge outlier can be ignored, rather than the output simply being clipped at its upper bound of activation as in the traditional model. Varying the strength given to the constant-valued input gives some flexibility in how outliers are dealt with.
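To make that concrete, here is a minimal sketch (the numbers, weights, and the iteratively reweighted mean are my own illustration, not a claim about the mechanism) contrasting the traditional clipped neuron with a robust one. The robust version treats its inputs as noisy measurements of a single quantity and takes a t-weighted location estimate, so a huge outlier gets a tiny weight instead of pinning the output at its bound:

```python
import math

def clipped_neuron(inputs, weights):
    # Traditional model: a weighted sum pushed through a saturating
    # nonlinearity. A huge outlier simply pins the output at its bound.
    s = sum(w * x for w, x in zip(weights, inputs))
    return math.tanh(s)

def robust_neuron(inputs, nu=1.0):
    # Robust alternative: treat each input as a noisy measurement of one
    # quantity and take a t-distribution-weighted location estimate via
    # an iteratively reweighted mean. An extreme outlier gets a tiny
    # weight rather than dominating the sum.
    mu = sorted(inputs)[len(inputs) // 2]   # start from the median
    for _ in range(50):
        ws = [(nu + 1) / (nu + (x - mu) ** 2) for x in inputs]
        mu = sum(w * x for w, x in zip(ws, inputs)) / sum(ws)
    return mu

clean = [0.9, 1.1, 1.0, 0.95]
spiked = [0.9, 1.1, 1000.0, 0.95]
clipped_neuron(spiked, [0.25] * 4)  # saturates at 1.0
robust_neuron(spiked)               # stays near 1, outlier ignored
```

Here `nu` plays the role the repetitions / degrees-of-freedom parameter plays later in these notes; the reweighted mean is just one standard way of computing such an estimate.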

A single robust estimator would be useful to a simple organism. A chain of robust estimators sounds like a good way of dealing with complex and noisy phenomena, and could plausibly evolve from an organism having a single estimator.

This slots nicely into my new model of autism, providing a neural basis. Autism may be caused by neurons using a Lévy stable distribution with a higher alpha, or possibly by using inputs that are more correlated (which has a similar effect, and would explain the odd distribution of white matter in autistic brains). In t-distribution terms: greater degrees of freedom.

*hmm...*

So for each input we have three parameters: a scale, an offset, and a number of repetitions (similar to degrees of freedom in a t distribution). Autistic neurons will tend to have a higher repetition count, and probably a compensatory larger scale (or have inputs that are more correlated, which will produce much the same effect).
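A sketch of what that three-parameter scheme might look like (the reweighting scheme and all the numbers are assumptions of mine): each input has its offset removed, is judged against its scale, and its repetition count acts as the degrees of freedom of a t-style weight. Raising the repetitions flattens the weight function towards Gaussian behaviour, so the same outlier pulls the estimate much harder:

```python
def estimate(inputs, scales, offsets, reps, iters=100):
    # Hypothetical three-parameter-per-input estimator. Model assumed:
    # x_i ~ offset_i + mu + scale_i * noise, with reps_i playing the
    # role of t-distribution degrees of freedom for input i.
    zs = [x - o for x, o in zip(inputs, offsets)]
    mu = sorted(zs)[len(zs) // 2]           # start from the median
    for _ in range(iters):
        num = den = 0.0
        for x, s, o, r in zip(inputs, scales, offsets, reps):
            z = (x - o - mu) / s
            w = (r + 1) / (r + z * z) / (s * s)   # t-style influence weight
            num += w * (x - o)
            den += w
        mu = num / den
    return mu

inliers_plus_outlier = [1.0, 1.05, 0.95, 10.0]
low = estimate(inliers_plus_outlier, [1] * 4, [0] * 4, reps=[1] * 4)
high = estimate(inliers_plus_outlier, [1] * 4, [0] * 4, reps=[1000] * 4)
# low stays near 1; high is dragged well towards the outlier
```

With few repetitions the weight of a distant input collapses; with many repetitions the weight barely falls off, which is the "more Gaussian, less robust" regime described above.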

*hmm...*

Oops. Of course all the inputs to the neuron will be correlated: they're all rough measures of the same quantity (which the neuron is an estimator for). So the autistic person would simply have a greater number of connections per neuron: more white matter, less grey matter. Greater breadth and lesser depth of processing.

As a further twist: the neuron need never explicitly produce its output. It may be merely a means whereby the inputs exert force on one another. A neuron in this model represents a "single hidden factor" correlating the inputs. This is nicer, I think: it means the neuron can work from causes to effects, or from effects to causes, or some mixture of the two, depending on what is known and what is unknown. A node in a Bayesian network with some fairly extreme independence assumptions.
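One way to sketch that "no explicit output" reading (purely illustrative; the model and all the numbers are assumptions of mine): the neuron is just a latent location `mu`. Whichever inputs happen to be observed constrain `mu` through the same robust weighting, and the remaining inputs are then predicted from it, so information can flow through the node in either direction:

```python
def hidden_factor(observed, offsets, scales, reps, iters=100):
    # observed: {input index -> measured value}; the rest are unknown.
    # Model assumed: x_i ~ offset_i + mu + scale_i * noise, with the
    # inputs conditionally independent given the hidden factor mu.
    idx = sorted(observed)
    vals = [observed[i] - offsets[i] for i in idx]
    mu = sorted(vals)[len(vals) // 2]        # start from the median
    for _ in range(iters):
        num = den = 0.0
        for i in idx:
            z = (observed[i] - offsets[i] - mu) / scales[i]
            w = (reps[i] + 1) / (reps[i] + z * z) / scales[i] ** 2
            num += w * (observed[i] - offsets[i])
            den += w
        mu = num / den
    # No explicit "output": every input, known or unknown, is simply
    # predicted from the shared hidden factor.
    return {i: offsets[i] + mu for i in range(len(offsets))}

# Inputs 0 and 1 observed; input 2's value falls out of the inferred factor.
pred = hidden_factor({0: 1.0, 1: 3.1}, [0, 2, 0], [1, 1, 1], [1, 1, 1])
```

Had input 2 been the observed one instead, the flow would reverse, which is the causes-to-effects / effects-to-causes flexibility described above. The "fairly extreme independence assumption" here is that the inputs are conditionally independent given `mu`.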