About a year ago I saw a presentation from Andrew Price covering a range of topics. While I really enjoyed the parts on generating textures and models algorithmically, the part of the presentation that really grabbed me was on using AI to enlarge pictures.
One of the things movies and TV always do that annoys me is use "image enhancement" to pull incredible detail from tiny images simply to move the plot along. For the longest time this was flatly impossible: you can't add detail to an image if the source image never captured that detail in the first place.
Now that it's 2020, that is only "mostly true". Welcome to the future.
For the cases shown in movies, this is still true. You still can't take a fuzzy picture of a license plate and make it sharp. (Ok, not exactly true. You could feed a million fuzzy pictures of license plates into AI learning software and get back a list of the "most likely" numbers. Not as exact or sexy, but incredibly useful: narrowing a million candidate plates down to a couple hundred would still be an enormous benefit.)
This advancement is slightly different.
What about a case where you want to upscale an image, but you don't need to know what exactly is in the original image because you can just copy information from a very similar image?
Say you had a low resolution picture of a sunset over the ocean and wanted the picture to be larger.
In this case it doesn't matter whether your final image is "exactly what the sunset looked like"; it only matters that the image is close enough that you can't notice the difference. This is where AI makes the difference.
If I take 10,000 pictures of sunsets over the ocean, I get a really good idea of how these pictures look. I then feed those images into an AI learning routine. After that I enlarge your small image and ask the computer to "fill in the blanks". It does so based on its knowledge of "how these images usually look", not "exactly how this image looked the second it was taken".
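To see why "filling in the blanks" needs that learned knowledge at all, it helps to look at what a classic resize actually does. The sketch below (my own toy example, not any particular product's algorithm) doubles a tiny grayscale image with bilinear interpolation: every new pixel is just a weighted average of its original neighbours, so no detail the source didn't capture can ever appear. An AI upscaler replaces exactly this averaging step with a prediction learned from thousands of similar photos.

```python
# Toy sketch: why classic 2x upscaling can only blur, never invent detail.
# Every output pixel below is a weighted average of existing source pixels;
# a learned super-resolution model would instead *predict* plausible values.

def bilinear_upscale_2x(img):
    """Double a grayscale image (list of rows of numbers) in each dimension
    using bilinear interpolation."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * 2, w * 2
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into source coordinates.
            sy = min(y / 2, h - 1)
            sx = min(x / 2, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Blend the four nearest source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# A 2x2 "image" with a hard brightness edge...
small = [[0, 100],
         [100, 200]]
# ...becomes a 4x4 image where the edge is smeared into in-between values.
big = bilinear_upscale_2x(small)
for row in big:
    print([round(v) for v in row])
```

Running this prints rows like `[0, 50, 100, 100]`: the new pixels (50, 150, etc.) are pure averages, which is exactly the blur I saw from a standard resize. No amount of interpolation math can sharpen that edge back up; only a model that has seen what real edges look like can guess what belongs there.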
The results can be scary good. They are also entirely dependent on how specific your reference photos are. The online demonstrations I saw used very simple scenes like pictures of kitchens or floors. The results they got were mind blowing.
In this particular case, I have pictures from my 15-year high school reunion. In a classic case of not backing up data, I had lost my original files and only had the quarter-size web page files. After 15 years, what was "quarter size" at 250x180 was now just a thumbnail. Using the standard resize command in Photoshop to double the size resulted in photos that were either blurry or had horrible artifacts.
Time to try some AI.
You can see the full results here. Each of these pictures was resized to 2x the original.
15 Year Reunion
Personally I think the results are very good but not quite "great". They are better than the default resize from Photoshop but are not yet "mind-blowing". Given that this is still rather new technology, I think mind-blowing is less than 5 years away.
I was a bit constrained by money (i.e. I was too cheap to pay for a solution that was only "slightly" better) and ended up using BigJPg because it was free. I did try LetsEnhance on the first picture just to compare. I found it much better, but you have to pay to upscale more than a few trial images. To get an idea of other options, I found this video of someone doing a head-to-head comparison of several different solutions.