- How to fix a clipped sound file.
- How to deal with missing data in Little Snob and Sparrowfall Design Notes.
- How to clean up an image that has been over-compressed or degraded in some way.
Underlying all these problems is a situation where we have incomplete data (in the form of a set of constraints) about something, and want to fill in the gaps so as to maintain the "texture" of the data. All three problems are surprisingly similar at a mathematical level.
The last of the three is what I'm bouncing about today. The result is kind of like the GIMP's Selective Gaussian Blur, only much cleverer, and with one less parameter... *cough* and a little slower *cough*
How is this done? First, I made a set of simplifying assumptions (a sketch of the resulting optimisation follows the list):
- The image has a texture that can be modelled by causal prediction.
- Prediction errors are normally distributed with constant variance.
- The value of each pixel lies within a certain range, and is equally likely to be any value within that range.
- In the simplest and most useful case, each range is the same size and centered about the pixel value of the input image. The size of the range is then the only parameter!
- Different ways of choosing ranges can be used to adapt the method to different situations: for example, palettization, JPEG noise and image over-exposure (sketched in code after the examples below).
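To make those assumptions concrete: the "most likely image" is the one that minimises the total squared prediction error while keeping every pixel inside its allowed range. Here is a minimal sketch of that optimisation. The causal predictor (mean of the left and upper neighbours), the projected-gradient optimiser, and all the numbers are my own illustrative choices, not the actual implementation:

```python
import numpy as np

def restore(image, radius, steps=300, lr=0.1):
    """Approximate the most likely image under the assumptions above:
    a causal predictor with constant-variance Gaussian errors, and
    every pixel confined to [input - radius, input + radius]."""
    x = image.astype(float)
    lo, hi = x - radius, x + radius
    for _ in range(steps):
        # Causal prediction: each pixel from its left and upper neighbours.
        resid = np.zeros_like(x)
        resid[1:, 1:] = x[1:, 1:] - 0.5 * (x[1:, :-1] + x[:-1, 1:])
        # Gradient of the total squared prediction error: each pixel
        # appears in its own residual and in the residuals of the two
        # pixels that use it as a predictor.
        grad = 2.0 * resid
        grad[:, :-1] -= resid[:, 1:]   # as the left neighbour
        grad[:-1, :] -= resid[1:, :]   # as the upper neighbour
        # Step downhill, then clamp every pixel back into its range.
        x = np.clip(x - lr * grad, lo, hi)
    return x
```

With a radius of zero this hands back the input unchanged; a larger radius gives the optimiser more room to smooth out whatever the predictor can't explain.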
It is then possible to find the most likely image, given these assumptions. For example:
- Palettization: before / after (this kind of degradation occurs with over-zealous squeezing of GIFs)
- JPEG artifacts: before / after (this kind of degradation occurs when you set the JPEG quality level too low)
- Over-exposure: before / after (no, not like that, you sick people. Leaving the camera shutter open too long. Deary deary me)
- Analogue noise: before / after (film grain, imperfect printing, noise while scanning an image and so on)
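The range choices behind these examples are easy to sketch for the per-pixel cases. (JPEG is the odd one out: its ranges constrain DCT coefficients rather than pixels, with each stored coefficient pinning the true one to within half a quantisation step, so it doesn't fit a pixel-space sketch.) The function below is hypothetical, and the specific numbers are illustrative guesses rather than anything from the actual tool:

```python
import numpy as np

def ranges_for(image, kind, radius=8.0, palette_step=32.0, white=255):
    """Hypothetical per-pixel ranges for the degradations shown above."""
    x = image.astype(float)
    if kind == "analogue noise":
        # Symmetric: the true value is somewhere near the recorded one.
        lo, hi = x - radius, x + radius
    elif kind == "palettization":
        # Each palette level stands in for a whole quantisation cell.
        lo, hi = x - palette_step / 2, x + palette_step / 2
    elif kind == "over-exposure":
        # A clipped pixel could have been anything at least as bright.
        lo, hi = x - radius, x + radius
        hi[x >= white] = np.inf
    else:
        raise ValueError(f"unknown degradation: {kind}")
    return lo, hi
```

Plugging these into the earlier sketch just means replacing its internal `lo, hi` with the pair returned here; the rest of the optimisation is unchanged.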