Sparrowfall design notes




Life has a texture to it. Each life has its own unique path, but all are variations on a theme. Happiness and sadness pass in their various forms, following well-worn trajectories. And if we document these paths, and if we record each trajectory, others might avoid our mistakes, and repeat our successes. And we might do the same of theirs.

21/1/2004: Single-person version largely complete, still needs polishing. Statistical side all working (if slow).

7/4/2004: Algorithm really is too slow to be useful. It needs to be fast enough to play with and try different scenarios, or it will be dangerous. Have to wait for Moore's law or find a better algorithm :-(.

These ideas are in a state of flux; this is my thinking as of when I last updated this page. I'm interested in any comments and critiques. As always, feel free to steal any ideas in these pages. The very reasons that I'm interested in this also mean that I may not be able to follow through on it -- a competing project or two would make me very happy. My motivation for all this will, I think, be obvious to some people: you know who you are.

What and why

The name "Sparrowfall" should give you a fair idea, but to state it formally: software and/or a website to provide

The initial focus is on mental illness, since this is an area in which conventional medicine is weak (and because I have a personal stake in this area). Also chronic illnesses, such as chronic pain or arthritis.

Some things that Sparrowfall is not:

The aim is for something of equivalent power to our current system of double-blind placebo-controlled studies, but of far larger scope. A double-blind study eliminates bias due to the placebo effect and selective reporting of results, in order to study a single effect. Sparrowfall instead aims to accurately model the placebo effect and reporting bias, by taking into account factors such as personality traits, producing a more complete model. Sparrowfall assumes people will try out all sorts of wacky random things and believe strongly in all kinds of wacky random things, so that the effects of belief can be factored out (or even used therapeutically). This is not an unreasonable assumption.


Starting points

A simple starting point would be some software to record one's day-to-day condition: things such as how long you slept, how you are feeling, what you are doing, and any medications you are taking.

Care will have to be taken over temporal reporting biases (eg reporting while happy but not while depressed, and the validity of volunteered vs asked-for information) (see Blog).

So, we have collected some data. Suppose we then construct a Markov model:

The ontology here is that a person is considered as a state vector evolving over time. We have partial data of that state vector at some times.

To model many people at once (i.e. the full Sparrowfall system), add entries to the state vector indicating which person they are -- eg the person at this time is in a state of Paulness (and this state is connected to past and future states that are also in a state of Paulness). This lets the system tailor itself to individuals, without further complication to the modelling thingy.
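(Just to make this ontology concrete, here is a rough Python sketch -- the field names are made up, and this isn't part of the design proper.)

  # A person-state as a partially observed vector evolving over time.
  # Field names here are invented for illustration only.
  FIELDS = ["is_paul", "is_alice", "slept_hours", "mood", "took_med_x"]

  def empty_state():
      # NaN marks a measurement we have no data for at this time step
      return {f: float("nan") for f in FIELDS}

  day1 = empty_state(); day1.update(is_paul=1.0, slept_hours=7.5, mood=2.0)
  day2 = empty_state(); day2.update(is_paul=1.0, mood=4.0)  # sleep not recorded

  trajectory = [day1, day2]  # the Markov model describes transitions between such vectors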

The state vector has high dimension. Therefore, we cannot just tabulate the Markov model -- a less complex model is required.

The method for constructing the model must be valid, eg Minimum Message Length is ok, but *not* Maximum Likelihood. Alternatively, the model is constructed in an ad-hoc way, then validated (eg feed in half the data, try on the remaining half) to give some kind of trust level.
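(A rough sketch of the validate-on-held-out-data option, in Python. fit_model and log_likelihood stand in for whatever ad-hoc method is actually used; they are not real library calls.)

  def validate(records, fit_model, log_likelihood):
      # Feed in half the data, try on the remaining half.
      split = len(records) // 2
      train, test = records[:split], records[split:]
      model = fit_model(train)
      score = sum(log_likelihood(model, r) for r in test)
      return model, score  # higher held-out log-likelihood = more trust in the model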

The simplest valid method would be to construct a correctly distributed sample of possible models given the data. One can then examine the samples, measure means, medians, variances, etc of each parameter to determine a "best" model and how sure we are about it, or test hypotheses against the sampling itself.

The Metropolis Algorithm can do this. The only caveat is knowing at what point to stop sampling (and I'm sure someone has studied this). Also it might be slow.
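(Something like the following, in Python -- a bare-bones random-walk Metropolis sampler over the model parameters. log_posterior is assumed to compute log P(data | theta) + log P(theta); this is a sketch, not a tuned implementation.)

  import numpy as np

  def metropolis(log_posterior, theta0, n_samples, step=0.1, rng=None):
      rng = rng or np.random.default_rng()
      theta = np.asarray(theta0, dtype=float)
      logp = log_posterior(theta)
      samples = []
      for _ in range(n_samples):
          proposal = theta + rng.normal(0.0, step, size=theta.shape)
          logp_new = log_posterior(proposal)
          if np.log(rng.random()) < logp_new - logp:  # accept with probability min(1, ratio)
              theta, logp = proposal, logp_new
          samples.append(theta.copy())
      return np.array(samples)  # discard an initial burn-in portion before use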

The "burn-in" phase might be supplemented by an ad-hoc algorithm for choosing a good initial model.

Once we have this model we can then pose and compare hypotheticals, eg:

and calculate a probability distribution of outcome trajectories (probably by generating a large set of synthetic state sequences into the future, then averaging -- high dimensionality is a bitch). We could then compare, eg:

and also be able to decide if any differences are significant (ie P(H0) < .01 or so).
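(Roughly, in Python -- sample_next_state and outcome are assumed helpers, one drawing a plausible next state under a given model and the other summarising a trajectory, eg average happiness.)

  import numpy as np

  def outcome_distribution(models, start_state, n_steps, sample_next_state, outcome, rng=None):
      rng = rng or np.random.default_rng()
      results = []
      for model in models:  # models = samples from the Metropolis run
          state, trajectory = start_state, [start_state]
          for _ in range(n_steps):
              state = sample_next_state(model, state, rng)  # one synthetic step forward
              trajectory.append(state)
          results.append(outcome(trajectory))
      return np.array(results)  # compare two such distributions to judge a hypothetical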


We can also approximately fill in the unknown values in the data collected, in order to analyse it directly (rather than the model).

Notes on statistical inference

Bayes. Occam.

Maximum Likelihood is only useful when the model is limited in size. In this case it might choose the best model in the limit as more data is collected, or it might not. Without a limit on model size, it will choose infinitely complex models in the limit.

Minimum Message Length will converge on the correct model in the limit, even without a limit on model size. It also seems fairly good with limited data. However it is *not* the optimal way to make predictions given limited data.

Optimal inference where data has no missing values:

-- the first item may be approximated by using the Metropolis algorithm to generate a set of sample models. These can be fed into the second item. With sufficient time and computing resources, this method looks like it will produce optimal predictions about the future, with any amount of data.
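(In other words, average predictions over the sampled models -- something like this sketch, where predict is an assumed helper giving the probability of some future event under one model.)

  import numpy as np

  def averaged_prediction(sampled_models, predict, query):
      # Average the per-model predictions over the set of sampled models.
      return np.mean([predict(m, query) for m in sampled_models])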

Optimal inference with missing values:

The things we need to work out are:

These will have to be optimized in tandem, since the space of states is so large (high dimensional).

Is even this feasible?

Fun things to consider:

A tentative structure for the model:

This is extremely similar to my PhD work on image de-noising. I'm in the middle of writing that up, and will try to post it here once it's done.

We have a sequence of states, each containing a set of measurements (the same set for each state). These measurements are real valued. Let us order all of the measurements, by writing each state in turn, and within each state writing each measurement (in arbitrary order, but consistent between states). Then we may calculate the probability density by calculating the density of each measurement given the preceding measurements.

  S1  S2  S3  S4 ...
  m1  m1  m1  m1 ...
  m2  m2  m2  m2 ...
  m3  m3  m3  m3 ...

To represent discrete values, such as a binary choice, partition the number line. For example, less than -1 is false, greater than 1 is true, and in-between is fuzzy.

      -1      1
<FFFFFF|~~~~~~|TTTTTT>

Or for a choice with more options, which is ordered:

      1.5 2.5 3.5 4.5
<111111|222|333|444|555555>

Note that there are no bounds on a measurement.

When the user enters some data, this constrains the value of a measurement. For example, if to the question "Are you male?" they answer "true", then the maleness measurement at that time must exceed 1. Or, if when asked "On a scale of 1 to 5, how happy are you?" they answer "4", then the happiness measurement at that time is constrained to lie between 3.5 and 4.5.
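(As a sketch, the mapping from answers to constraints might look like this -- the thresholds follow the partitioning above, and the question strings are just the examples from this page.)

  def constraint_from_answer(question, answer):
      # Returns (low, high) bounds on the underlying real-valued measurement.
      if question == "Are you male?":  # binary: greater than 1 is true, less than -1 is false
          return (1.0, float("inf")) if answer else (float("-inf"), -1.0)
      if question == "On a scale of 1 to 5, how happy are you?":
          return (answer - 0.5, answer + 0.5)  # "4" -> (3.5, 4.5)
      raise ValueError("question not covered by this sketch")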

How shall we decide the density of each measurement (and of the model)?

A simple way would be to produce for each measurement a prediction of its most likely value, based on a linear sum of preceding measurements in our ordered list of measurements. This sum need not include all preceding measurements; the measurements in this state and the preceding several states should suffice. Then let us say that the difference between the predicted and actual values tends to be normally distributed. So our probability density function for each measurement is a Gaussian distribution centered on the prediction.

Our "texture" model is the weights used in each linear sum. We have a linear sum for each of the measurement types, comprising weights for preceding measurements in the current state (according to an arbitrary ordering), weights for measurements in previous states, and weights for information such as which questions the user answered.

We need to calculate the densities for the weights that comprise the model. A reasonable guess would be that they have a Gaussian or two-tailed exponential distribution about zero.

The densities for each weight in the model, and for each measurement in the sequence of states, are multiplied together, giving an overall density.
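(A sketch of that overall density, simplified to a single measurement type and written as a log-density. The sigma values and array shapes are illustrative assumptions, not decided parts of the design.)

  import numpy as np

  def log_density(weights, history, targets, sigma_noise=1.0, sigma_weight=1.0):
      # history: one row of preceding measurements per predicted target value
      predictions = history @ weights
      residuals = targets - predictions
      log_p_data = -0.5 * np.sum((residuals / sigma_noise) ** 2)   # Gaussian noise about each prediction
      log_p_model = -0.5 * np.sum((weights / sigma_weight) ** 2)   # Gaussian prior on the weights
      return log_p_data + log_p_model  # log of the product of densities (up to constants)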

And this is all we need :-)

Why this form of model?


Things to track

A large set of yes/no or 1/2/3/4/5 type questions. There should also be an option not to answer a question. The system will have to be able to select a set of questions that are likely to give information. Also, the person using the system should be able to voluntarily answer other questions (and add new questions).

Questions have to refer to a time frame, eg in the last 24 hours, in the last week, etc. This could be part of the question. Alternatively it could be used to constrain the state vector, i.e. the person's state is entirely missing data, but has constraints.
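(A possible shape for a question record, purely as a sketch -- all the field names are invented.)

  question = {
      "text": "In the last 24 hours, how well did you sleep? (1-5)",
      "kind": "scale_1_to_5",     # or "yes/no"
      "time_frame_hours": 24,     # used to constrain the states the answer refers to
      "optional": True,           # people can decline, or volunteer answers to other questions
  }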

Below are some possibilities. See also the book "Authentic Happiness" by Martin Seligman.

Goal type items:

Resolutions (Sparrowfall's primary function would be to give estimates of the effects of these on the goal items):

Other:

and so on. It should be easy for people using Sparrowfall to add items addressing their own very specific concerns. People should also be able to enter a blog entry: even though this can't be analysed, other people can read it.


References

www.remedyfind.com -- Closest thing to Sparrowfall existing today. The statistics are not perfect, but this is more than made up for by the value of the comments people write about each remedy.

Authentic Happiness questionnaire site -- A tie-in to a popular psychology book by Martin Seligman. Provides a facility for tracking changes in response to various psychological questionnaires. Alas, he's still hung up about the whole privacy thing, and the web-site is kind of clunky. But the guy is a psychologist, so he probably knows his statistics.

Gawande, A. (2002), Complications -- Very readable account of the methods and culture of surgery. Of particular note:

Horton, R. (2003), Second Opinion




Less definite thoughts

Full success would be a very strange thing:

Let's take this wild speculation just that one step further. A few decades down the track, we've gone through several generations of increasingly precise models as Moore's law continues its steady progression, and started gathering truly massive quantities of data. Sparrowfall-2020 is pretty cluey. What does that mean?


You and what army?

This is obviously a fairly big project. Here are some possibilities (a sampling of future trajectories if you like :-) ):



