> I downsized it to 170x170 pixels and applied a gaussian blur, so that all the detail is GONE. This means it's not recoverable, the information is just not there, it's digitally blurred
Strictly speaking, applying a Gaussian blur does not destroy the information. You can undo a Gaussian blur with a simple deconvolution, which is something I would expect even a non-AI image enhancement algorithm to do (given that, you know, lenses are involved here).
I'd like to see what detail can be "recovered" with just the downsizing, which DOES destroy information.
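For the curious, here's a minimal sketch of that deconvolution idea in plain NumPy. The 170x170 random "image", the kernel size, and the sigma are made-up stand-ins with nothing to do with the actual phone pipeline; it just shows that a noise-free Gaussian blur is invertible in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((170, 170))                # stand-in for the downsized photo

# Blur: convolve with a Gaussian kernel via the FFT (circular convolution).
sigma = 1.0
y, x = np.mgrid[-85:85, -85:85]
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
kernel /= kernel.sum()
K = np.fft.fft2(np.fft.ifftshift(kernel))   # the blur's transfer function

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))

# Deconvolution: divide the spectrum by the same transfer function.
# This works because there is no noise and the kernel is known exactly.
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))

print(np.abs(restored - img).max())         # tiny: the blur is undone
```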
Well, the OP did downsize, so details had to be reconstructed. Also, the noise from displaying the image on a screen and then recapturing it through the camera sensor means it isn't just your standard perfect convolution.
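To put that objection in toy-code form: the same setup as above, but with a bit of simulated screen-and-recapture noise added after the blur. The crude "moon" disc, the noise level, and the regularization constant are all invented for illustration; the point is that a naive inverse filter amplifies the noise, while a Wiener-style regularized inverse stays usable:

```python
import numpy as np

rng = np.random.default_rng(1)
yy, xx = np.mgrid[:170, :170]
img = 0.1 + 0.8 * (((xx - 85)**2 + (yy - 85)**2) < 60**2)  # crude "moon" disc

sigma = 1.0
y, x = np.mgrid[-85:85, -85:85]
kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
kernel /= kernel.sum()
K = np.fft.fft2(np.fft.ifftshift(kernel))

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * K))
noisy = blurred + rng.normal(0.0, 0.01, img.shape)         # ~1% recapture noise

# Naive inverse filter: the noise gets divided by tiny |K| values and explodes.
naive = np.real(np.fft.ifft2(np.fft.fft2(noisy) / K))

# Wiener-style inverse: damp frequencies the blur has pushed below the noise.
nsr = 1e-3                                                 # assumed noise-to-signal ratio
wiener = np.conj(K) / (np.abs(K)**2 + nsr)
regularized = np.real(np.fft.ifft2(np.fft.fft2(noisy) * wiener))

rms = lambda a: np.sqrt(np.mean((a - img)**2))
print(rms(naive), rms(regularized))  # naive error is huge, regularized is small
```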
If you downsize an image to 170x170px and then blow it up so it's visible to a camera from across the room without any sort of blurring, it's not going to look like anything, and the camera's object detection won't recognize it as the moon - it's just going to look like a huge pixel grid.
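As a toy illustration of the "pixel grid" point (the 12x zoom factor below is arbitrary): nearest-neighbour upscaling just repeats each pixel as a block, so the blown-up image carries exactly the same 170x170 values as the small one, and no detail appears until something later invents it:

```python
import numpy as np

rng = np.random.default_rng(2)
small = rng.random((170, 170))               # stand-in for the downsized image

big = np.kron(small, np.ones((12, 12)))      # 2040x2040 nearest-neighbour blow-up

# Averaging each 12x12 block gives the small image back exactly: making it
# bigger added no information.
recovered = big.reshape(170, 12, 170, 12).mean(axis=(1, 3))
print(big.shape, np.allclose(recovered, small))
```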
I am not quite so confident. I would like to see an experiment to test how badly you can distort an image of the moon before the AI stops recognising it.
I'm not so confident either, especially when you consider that whatever input the NN gets is probably downscaled to hell from however many megapixels the sensor has, otherwise good luck running it on a phone with 8GB of RAM.