Removing compression artifacts




I've actually made some progress on my PhD! The funny thing is I came at this particular idea from three different directions simultaneously:

Underlying all these problems is a situation where we have incomplete data (in the form of a set of constraints) about something, and want to fill in the gaps so as to maintain the "texture" of the data. All three problems are surprisingly similar at a mathematical level.

The last of the three is what I'm bouncing about today. The result is kind of like the GIMP's Selective Gaussian Blur, only much cleverer, and with one less parameter... *cough* and a little slower *cough*

How is this done? First, I made a set of simplifying assumptions:

It is then possible to find the most likely image, given these assumptions. For example:

Palettization: before / after (this kind of degradation occurs with over-zealous squeezing of GIFs)

JPEG artifacts: before / after (this kind of degradation occurs when you set the JPEG quality level too low)

Over-exposure: before / after (no, not like that, you sick people. Leaving the camera shutter open too long. Deary deary me)

Analogue noise: before / after (film grain, imperfect printing, noise while scanning an image and so on)
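The actual model and assumptions are in the thesis; as a rough illustration of the general flavour of the idea (not the method used here), the following sketch treats each observed pixel value as a constraint (the true value lies within half a quantization bin of what was stored, as in palettization) and does projected gradient descent on a simple smoothness prior to find the most probable image satisfying those constraints. All the names (`restore`, `half_width`, the step size) are made up for this sketch, and a plain squared-difference prior stands in for whatever "texture-preserving" prior the real method uses.

```python
import numpy as np

def smoothness(x):
    """Squared-difference smoothness energy of a 2-D image (a toy prior)."""
    return np.sum((x[1:, :] - x[:-1, :]) ** 2) + np.sum((x[:, 1:] - x[:, :-1]) ** 2)

def restore(observed, half_width, step_size=0.1, steps=200):
    """Find a smooth image consistent with the observed, quantized one.

    Assumption for this sketch: the true pixel value lies within
    +/- half_width of the observed value (a quantization bin).
    """
    x = observed.astype(float).copy()
    lo, hi = x - half_width, x + half_width
    for _ in range(steps):
        # Gradient of the smoothness energy (up to a constant factor).
        grad = np.zeros_like(x)
        d = x[1:, :] - x[:-1, :]
        grad[1:, :] += d
        grad[:-1, :] -= d
        d = x[:, 1:] - x[:, :-1]
        grad[:, 1:] += d
        grad[:, :-1] -= d
        x -= step_size * grad
        # Project back onto the constraint set: stay inside each pixel's bin.
        x = np.clip(x, lo, hi)
    return x
```

Run on a coarsely quantized gradient, this smooths away the banding while never moving any pixel outside the bin it was quantized into, which is the "fill in the gaps without contradicting the data" behaviour described above.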

Further details in my thesis.



