A Wolfram-style interpretation of autism




Look at the size of the triangles in these automata:

(Somewhere in-between is class 4 "complex" computation, as typified by Rule 110. I find Wolfram's numbering system a little wacky, as the order of increasing randomness is 1-2-4-3. Anyway...)

Rule 90 displays triangles of all sizes; the distribution of sizes has a fat tail. Large triangles are much rarer in Rule 30, where the distribution of sizes looks to have a very thin tail. [1]
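To see the difference yourself, here's a minimal sketch in Python/NumPy that runs both rules from a single black cell (the standard Wolfram setup) and prints them as ASCII. The widths and step counts are arbitrary illustrative choices; Rule 90 comes out as the nested Sierpinski-style triangles, Rule 30 as an irregular texture of mostly small ones.

    import numpy as np

    def run_eca(rule, width=79, steps=40):
        """Run an elementary cellular automaton from a single black cell."""
        table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
        grid = np.zeros((steps, width), dtype=np.uint8)
        grid[0, width // 2] = 1                        # single seed cell
        for t in range(1, steps):
            prev = grid[t - 1]
            # encode each 3-cell neighbourhood as a number 0..7, then look it up
            idx = (np.roll(prev, 1) << 2) | (prev << 1) | np.roll(prev, -1)
            grid[t] = table[idx]
        return grid

    for rule in (90, 30):
        print(f"Rule {rule}:")
        for row in run_eca(rule):
            print("".join("#" if cell else "." for cell in row))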

Ok, so obviously I am thinking this links to my paragraph-length stuff, with the autistic spectrum relating to the variable spectrum of complexity/randomness seen, if Wolfram is correct, in essentially all forms of computation. The idea that autistic people are pushing towards Rule 30-type computation also nicely explains the higher incidence of epilepsy in autism, if their brains occasionally flip over the line from complex computation to full-blown Rule 30-style randomness.

Unfortunately, Wolfram's "A New Kind of Science" is frustratingly vague about the definitions of these ideas [2]. He does speculate somewhat as to the cause of this spectrum: different rates of information transmission. In class 3 systems, information propagates rapidly. In class 4 and class 2 systems, it is somewhat slower. So the form of computation seems to have to do with the kind of information routing being used. However, information in a cellular automaton can only propagate at the "speed of light"; it can't make hyper-spatial jumps through the third dimension as it can in the human brain, so it's not clear how this spectrum would emerge from the brain's structure.
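The "speed of light" constraint is easy to demonstrate: run the same rule twice from initial conditions that differ at a single cell and watch the region of disagreement grow by at most one cell per side per step. A rough sketch, again in Python/NumPy, with Rule 30 and the grid sizes chosen arbitrarily:

    import numpy as np

    rule = 30
    table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)

    def step(row):
        idx = (np.roll(row, 1) << 2) | (row << 1) | np.roll(row, -1)
        return table[idx]

    width, steps = 201, 50
    rng = np.random.default_rng(0)
    a = rng.integers(0, 2, width, dtype=np.uint8)
    b = a.copy()
    b[width // 2] ^= 1                                 # flip a single cell

    for t in range(steps):
        diff = np.nonzero(a != b)[0]
        spread = diff.max() - diff.min() + 1 if diff.size else 0
        print(f"step {t}: difference spans {spread} cells")   # grows by at most 2 per step
        a, b = step(a), step(b)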

At a slightly more meta level, Wolfram describes processes such as engineering and evolution as consisting of two distinct forms of advancement: the composition of modular components into larger devices, a fairly straightforward process, and the search for novel components, in which it is difficult to do better than a brute-force survey. My guess would be that autism leans towards the brute-force approach: slower, but more likely to generate novel solutions. Such processes, as instances of computation, undoubtedly fit into Wolfram's computational taxonomy just fine. [3]


[1] My earlier best-fit synthesis results are somewhat explained by this. Best-fit synthesis is a form of cellular automaton, and therefore capable of displaying these different classes of computation. Simple repetition, gliders, and pseudo-randomness -- the signs of Wolfram's various classes of computation -- are also common occurrences in best-fit synthesis. However, the power laws I obtained are the reverse of what would be expected :-(. ... Update: some tinkering shows that the neighbourhood used to generate the image also affects the average copied-patch size, so there are parameters that will show both eidetic memory and a thin-tailed distribution of sizes. Not convinced, though.

(One reason I had not investigated this before is that synthesis with large neighbourhood sizes takes a lot more computation... heh)

[2] In the section on quantum physics he steps over the line into true everything's-magically-connected style crack-pottery.

[3] As it appears that generating randomness is trivial and commonplace in computation, random search algorithms such as Metropolis-Hastings probably have very simple implementations (even in discrete, non-randomized systems). However, to find such simple implementations we can probably do no better than a brute-force survey of all simple programs :-P.
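For what it's worth, a textbook Metropolis-Hastings sampler really is only a few lines. A minimal sketch, sampling a standard normal (up to a constant) with a symmetric Gaussian random-walk proposal; the target, step size, and sample count are arbitrary choices, just to show the shape of the algorithm:

    import math
    import random

    def metropolis_hastings(log_density, x0=0.0, step=1.0, n=10_000):
        x, samples = x0, []
        for _ in range(n):
            proposal = x + random.gauss(0.0, step)
            # accept with probability min(1, p(proposal) / p(x));
            # the proposal is symmetric, so no Hastings correction is needed
            if math.log(random.random()) < log_density(proposal) - log_density(x):
                x = proposal
            samples.append(x)
        return samples

    samples = metropolis_hastings(lambda x: -0.5 * x * x)   # unnormalized N(0,1)
    print(sum(samples) / len(samples))                      # should be near 0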


Addendum: Idea for how to compute the alpha of a cellular automaton rule.

Run the cellular automaton, using random^H^H^H^H^H^H simple initial conditions. The run should be about as wide as it is high. Calculate the density of black cells on each line. Fit a Lévy-stable distribution to the distribution of these densities.

This should give a numerical measure of Wolfram's ontology.
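A sketch of that recipe, assuming scipy's levy_stable.fit is an acceptable way to do the fitting (it exists, but can be painfully slow, so treat this as illustrative rather than practical); the rule and grid size are arbitrary:

    import numpy as np
    from scipy.stats import levy_stable

    def run_eca(rule, width, steps):
        table = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
        grid = np.zeros((steps, width), dtype=np.uint8)
        grid[0, width // 2] = 1                        # simple initial condition
        for t in range(1, steps):
            prev = grid[t - 1]
            idx = (np.roll(prev, 1) << 2) | (prev << 1) | np.roll(prev, -1)
            grid[t] = table[idx]
        return grid

    width = 400
    grid = run_eca(rule=30, width=width, steps=width)  # about as wide as it is high
    densities = grid.mean(axis=1)                      # density of black cells per row
    alpha, beta, loc, scale = levy_stable.fit(densities)
    print("alpha =", alpha)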



