Submitted by so-gold t3_117npc4 in askscience
Let’s say you take a photo and then digitally blur it in Photoshop. The only possible image that could’ve created the new blurred image is your original photo, right? In other words, any given sharp photo has only one possible digitally blurred version.
If that’s true, then why can’t the blur be reversed without knowing the original image?
I know that photos can be blurred different amounts, but let’s assume you already know how much it’s been blurred.
SlingyRopert t1_j9dtray wrote
Unblurring an image is conceptually similar to the following story problem:
Bob says he has three numbers you don’t know. He tells you the sum of the numbers is thirty-four and that all of the numbers are positive. Your job is to figure out what those numbers are based on the given information. You can’t, really. You can make clever guesses about what the numbers might be based on assumptions, but there isn’t a way to know for sure unless you get additional information. In this example, thirty-four represents the image your camera gives you and the unknown numbers represent the unblurred image.
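The story problem is easy to sketch: many different "originals" collapse to the same observation, so there is no way to invert the map without extra information. (The candidate triples below are made up for illustration.)

```python
# Summing, like blurring, is a many-to-one map: distinct inputs
# produce identical outputs, so the inverse is ambiguous.
candidates = [
    (1, 3, 30),
    (10, 10, 14),
    (2, 15, 17),
]

# Every triple "observes" as the same number, 34.
observations = {sum(triple) for triple in candidates}
print(observations)  # {34} — three different originals, one observation
```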
In practice, there is a continuum of situations between images that can’t be unblurred and images that can be usefully improved. The determining factor is usually the “transfer function” — the linear, translation-invariant representation of the blurring operator applied to the image. If the transfer function is zero, or less than 5% of unity, at some spatial frequencies, the portions of the image information at those spatial frequencies and above are probably not salvageable unless you make big assumptions.
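You can see this directly: the transfer function of a blur is the Fourier transform of its kernel. A minimal 1-D sketch (the 9-tap box blur and signal length here are arbitrary choices for illustration):

```python
import numpy as np

# A normalized 9-tap box blur embedded in a length-64 signal.
n = 64
kernel = np.zeros(n)
kernel[:9] = 1.0 / 9.0   # sums to 1, so overall brightness is preserved

# The transfer function: how much each spatial frequency survives the blur.
transfer = np.abs(np.fft.fft(kernel))

print(transfer[0])       # 1.0 — the DC (average brightness) passes untouched
print(transfer.min())    # nearly zero — information at that frequency is gone
```

Frequencies where the transfer function dips below a few percent of unity are drowned out by even tiny amounts of noise, which is why they can’t be recovered by straightforward inversion.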
A classical tool called the Wiener filter can help you figure out which spatial frequencies of an image are salvageable and can be unblurred in a minimum squared error sense. The key to whether a spatial frequency can be salvaged is the ratio of the amount of signal (after being cut by the transfer function of the blur) to the amount of noise at that same spatial frequency.
When the signal-to-noise ratio approaches one to one, you have to give up on unblurring that spatial frequency in the Wiener filter / unbiased mean squared error sense because there is no information left. This loss of information is what prevents unbiased deblurring.
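A toy 1-D Wiener deconvolution makes the trade-off concrete. This is a sketch, not production code: the blur kernel and noise level are assumed known, and the signal power is estimated from the true signal (which a real deblurrer wouldn’t have).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sharp" signal: two bright regions on a dark background.
n = 256
x = np.zeros(n)
x[60:90] = 1.0
x[150:155] = 2.0

# Known 7-tap box blur, plus a little sensor noise.
kernel = np.zeros(n)
kernel[:7] = 1.0 / 7.0
H = np.fft.fft(kernel)
noise_sigma = 0.01
y = np.real(np.fft.ifft(np.fft.fft(x) * H)) + rng.normal(0, noise_sigma, n)

# Wiener filter: W = conj(H) / (|H|^2 + noise/signal power ratio).
# Where |H| is large, this inverts the blur; where |H| is tiny, the
# regularizing term takes over and the filter gives up on that frequency.
snr = (np.abs(np.fft.fft(x)) ** 2).mean() / (noise_sigma ** 2 * n)
W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))

err_blurred = np.mean((y - x) ** 2)
err_wiener = np.mean((x_hat - x) ** 2)
print(err_wiener < err_blurred)  # the Wiener estimate beats the raw blurry signal
```

Note that the filter never "recovers" the frequencies where the transfer function is near zero; it just declines to amplify noise there, which is exactly the give-up behavior described above.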
If you are OK with having “biased” solutions and making some “big assumptions,” you can often do magic, though. For instance, you could assume that the image is of something that you have seen before and search a dictionary of potential images to see which one would (after blurring) look the most like the image you received from the camera. If you find something whose blurred image matches, you could assume that the unblurred corresponding image is what you imaged, and nobody could prove you wrong given the blurry picture you have. This is similar to what machine learning algorithms do to unblur an image, by relying on statistical priors and training. The risk with this sort of extrapolation is that the resulting unblurred image is a bit fictitious.
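The dictionary-search idea can be sketched in a few lines. Everything here is hypothetical and illustrative — the candidate signals, the `blur` helper, and the match criterion are all made up to show the shape of the approach, not any particular algorithm:

```python
import numpy as np

def blur(signal, width=5):
    """Apply a normalized box blur (stands in for the known blur operator)."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# A tiny "dictionary" of things we might have photographed.
dictionary = {
    "edge":  np.repeat([0.0, 1.0], 50),
    "spike": np.eye(1, 100, 50).ravel(),
    "ramp":  np.linspace(0.0, 1.0, 100),
}

# Pretend the camera handed us a blurred edge.
observed = blur(dictionary["edge"])

# Blur every candidate and keep whichever matches the observation best.
best = min(
    dictionary,
    key=lambda name: np.sum((blur(dictionary[name]) - observed) ** 2),
)
print(best)  # "edge" — its blurred version matches the observation
```

The catch is the one the comment names: if the true scene isn’t in your dictionary (or your learned prior), the search still returns *something*, and that something is fiction.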
I personally recommend being cautious with unblurring using biased estimators due to the risk of fictitious imagery output.
It is always best to address the blur directly and make sure that you don’t apply a blur so strong that the transfer function goes to near zero.