It seems fairly easy to bake a chain of custody into your images: the sensor outputs a signed raw image, and the AI outputs a different signed “touched up” image. We can afford to keep both in this hypothetical future; use whichever one you want.
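A minimal sketch of what those two signed outputs could look like, assuming Ed25519 device keys and Python’s `cryptography` package; the record layout and field names (payload/sig/signer, hash/parent/gps) are illustrative, not any existing standard:

```python
# A two-link chain of custody: the sensor signs the raw capture, the AI
# pipeline signs the touched-up derivative and points back at the raw
# hash. Record layout and field names are hypothetical.
import hashlib
import json

from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ed25519


def signed_record(key: ed25519.Ed25519PrivateKey, payload: dict) -> dict:
    """Sign a canonicalised JSON payload and attach the signer's public key."""
    blob = json.dumps(payload, sort_keys=True).encode()
    signer = key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )
    return {"payload": payload, "sig": key.sign(blob).hex(), "signer": signer.hex()}


sensor_key = ed25519.Ed25519PrivateKey.generate()  # burned into the camera
editor_key = ed25519.Ed25519PrivateKey.generate()  # held by the AI pipeline

raw_bytes = b"...raw sensor dump..."
raw_record = signed_record(sensor_key, {
    "hash": hashlib.sha256(raw_bytes).hexdigest(),
    "gps": [52.52, 13.40],                    # baked in at capture time
    "timestamp": "2024-05-01T12:00:00Z",
})

touched_up_bytes = b"...AI-denoised JPEG..."
edit_record = signed_record(editor_key, {
    "hash": hashlib.sha256(touched_up_bytes).hexdigest(),
    "parent": raw_record["payload"]["hash"],  # links back to the raw capture
    "tool": "denoise-v3",
})
```

Keeping both records means a viewer can always walk back from the touched-up image to the raw capture it was derived from.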
Once generative AI really takes off, we will need some system for unambiguously proving where an image or video came from; the solution is quite obvious in this case and many have sketched it already.
Images from DSLR and mirrorless cameras are already signed with device-embedded keys when the EXIF data is written. Manufacturers sell verification systems to law enforcement and other institutions so they can authenticate such images.
There have been exploits that extract these keys from the cameras themselves, but I haven't heard of any recently.
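For illustration, the check such a verification system performs might reduce to something like this, assuming the vendor publishes or escrows the camera’s public key (the real in-camera schemes differ in detail, so treat this as a sketch):

```python
# Sketch of the verification side: confirm that the image bytes were
# signed by the claimed device key. Assumes an Ed25519 device key; the
# actual in-camera signing schemes vary by vendor.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def image_is_authentic(image_bytes: bytes, signature: bytes,
                       device_pubkey: ed25519.Ed25519PublicKey) -> bool:
    try:
        device_pubkey.verify(signature, image_bytes)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False
```

The check is only as strong as the secrecy of the device key, which is why those extraction exploits matter.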
And the answer is “spam filters and AI personal curation agents will drop any image without a chain of custody from every feed that claims to be about reality”.
In a world where any image or video can be generated, chain of custody to a real-world ground-truth will be vitally important.
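A minimal sketch of what that curation rule could reduce to, reusing the hypothetical record format from the earlier sketch: walk the chain from the displayed image back to a signature by a device key the agent trusts, and drop anything that doesn’t verify.

```python
# Sketch of the curation-agent rule: an image stays in a "reality" feed
# only if its chain of custody verifies all the way back to a trusted
# device key. The record format (payload/sig/signer) is hypothetical.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def link_verifies(record: dict) -> bool:
    """Check the Ed25519 signature on one custody record."""
    pubkey = ed25519.Ed25519PublicKey.from_public_bytes(bytes.fromhex(record["signer"]))
    blob = json.dumps(record["payload"], sort_keys=True).encode()
    try:
        pubkey.verify(bytes.fromhex(record["sig"]), blob)
        return True
    except InvalidSignature:
        return False


def keep_in_feed(chain: list[dict], trusted_device_keys: set[str]) -> bool:
    """chain[0] is the capture record; later entries describe edits."""
    if not chain or chain[0]["signer"] not in trusted_device_keys:
        return False                     # no hardware-rooted origin: drop it
    for parent, child in zip(chain, chain[1:]):
        if child["payload"]["parent"] != parent["payload"]["hash"]:
            return False                 # broken custody link: drop it
    return all(link_verifies(record) for record in chain)
```

The point is that anything without a verifiable root simply never shows up in a feed that claims to depict reality.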
I think anything analog is going to be suspect; you could take an analog photo of a generated digital image and it could pass as a real analog photograph.
Absent a chain of custody (perhaps including GPS coordinates baked into the signed image data blob), I think analog artifacts will become untrustworthy. Unless you can physically date them to the pre-generative era!
So now not only are there AI-imagined details in your images, but those details also differ depending on which device the image is viewed on. Lovely.
It’s a fair point, but with high enough resolution (and perhaps GPS baked into the trusted data) I suspect it would be very hard to actually forge a digital image from an analog source.
Likewise for depth fields and other potential forms of sensor augmentation.