I am sincerely confused by how often this rationale is presented in so few words. Obviously AI has made it much easier - much, much easier - for many more people to make dramatic edits or fabricate whole images.
I understand the point that photo tampering has always been an issue, but by itself that point can't really mean anything without at all addressing the tremendous difference between traditional editing and AI editing.
We can say things like, "you should never trust X format of information anyway", but it's unrealistic for essentially any given X. A sentient human being can't function without trust heuristics. It's always been that way, for all of time - there is no such thing as 100% trust nor 0% trust in any human-to-human communication domain. That squishiness is why things like "how easy/hard is it to fabricate" matters pragmatically. And we can't rely entirely on sources either, since a ton of information is not readily sourceable, and often that's just kicking the can (is the source true?), and we often don't even notice the source. We see photos constantly, and we make assumptions about how likely they are to be real subconsciously and consciously based on a ton of variables. This is just basic human stuff. It's why new ways to more easily/broadly poison information have real consequences.
But the thing is that this isn't even the first jump in terms of making photo tampering easier - this has already happened in the past. Both naturally as these processes matured, but also when manual editing has been superseded by digital editing.
The introduction of digital photo editing (and especially making it accessible to everyone) also brought the skill floor down by several orders of magnitude. To my knowledge, people back then were also saying things akin to "you can't trust anything now". Before that point, you needed skills in working with physical photographs, and had to be a visual artist to some extent. It was a pretty niche skill set. Then suddenly any random user could fire up some software and, after just a few hours of training, start doing things that used to be impossible or unthinkably hard. And that was just the beginning - photo editing became easier and more accessible over the decades that followed.
So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good? And if this was always bound to happen with our technology evolving, should we even fight back?
> So why didn't this "data collapse" happen then? I'm not saying it's impossible, but people in this comment thread are acting like there's some hard barrier in technological accessibility, before which everything is good and trustworthy, and after which there's apocalyptic consequences, permanent decay of trust, erasure of history, etc. Is it a barrier or a sliding scale? What makes people so confident that it's this exact development that will finally tip the scales for good?
I think people tend to get too focused on one narrow thing (e.g. this technology) and lose sight of the context. Some things going on now:
1. AI fakes will take the skill floor down "several orders of magnitude" more.
2. Trust in all kinds of institutions is declining, including the government and media (precisely the ones that were relied on to debunk and discredit fakes in the past).
3. More and more people are getting their information from social media.
tl;dr: "data collapse" hasn't happened yet because previously we had the institutional capacity to manage airbrushed fakes, photoshops, etc. As the technology gets cheaper and the institutions get weaker, eventually they won't be able to keep up.
I have completely distrusted photos for several years now.
I can still look at them and take them at face value, but as soon as they contain something of importance, something that would actually matter in a big way, it doesn't work - I "checked out" of trusting them for that purpose years ago.
This will only make more people trust photos even less, which is a good thing, because they already shouldn't be as trusting of them as they are.
Arthur Conan Doyle was duped by a couple of young girls with a camera and paper cutouts of faeries. Has there ever been a time when it wasn't trivial to trick at least some people with low-effort fakes?
People who want to believe what you're showing them will believe even the lamest fabrication, while people who don't want to believe will doubt even authentic photographs with reputable provenance.
On the other hand, quantity has a quality all of its own: what once took effort… well, I've now got a pile of "photographs" of an anthropomorphic dragon-raccoon hybrid sitting on a motorbike in a rainy Japanese city, for about €0.03 each.
I'm not sure exactly what I spent on film and development in the 90s, but I bet it was more than that per photograph.
For sure, but there are all kinds of bad photoshops out there where it is super easy to tell, much like bad AI images that stick out like a sore thumb (or 2 thumbs).
But getting good at Photoshop takes a lot of effort (as well as a license, or the willingness to use it without one), and that will remain the same. The same can't necessarily be said for AI images, which have been improving year over year and becoming more and more accessible.
But anyone could already have paid someone decent at Photoshop to make a convincing edited photo. It's not like you yourself needed to be good at Photoshop to get a convincing fake of any image someone wanted.
People could previously have afforded to get that done for any one specific image, but now it's possible for a propagandist to give each and every resident of the USA their own personalised faked image for a total cost of about $25k (if their electricity costs $0.1/kWh).
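That ~$25k figure checks out as a back-of-the-envelope estimate. The comment only states the electricity price; the population count and the per-image energy draw below are my own assumptions, chosen as plausible round numbers:

```python
# Sanity check of the ~$25k "one fake per US resident" figure.
# Only PRICE_PER_KWH comes from the comment; the rest are assumptions.
US_POPULATION = 340_000_000   # rough 2024 estimate (assumed)
WH_PER_IMAGE = 0.7            # assumed GPU energy per generated image, watt-hours
PRICE_PER_KWH = 0.10          # electricity price from the comment, $/kWh

total_kwh = US_POPULATION * WH_PER_IMAGE / 1000
total_cost = total_kwh * PRICE_PER_KWH
print(f"{total_kwh:,.0f} kWh -> ${total_cost:,.0f}")  # 238,000 kWh -> $23,800
```

So at well under a watt-hour per image, one image per resident lands in the low tens of thousands of dollars, consistent with the $25k ballpark (hardware cost excluded).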
I made this point on Threads and Nilay's response was "yes making visual lies trivial to make is bad". It's never been photos that made "truth", it's been the source of the photos. You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.
>You trust a photo from a photojournalist. You don't trust a photo from some rando in your social feed.
The problem is, this isn't entirely true.
Sometimes we don't trust photos from some journalists, not necessarily because we think it is dramatically edited, but we know even professionals have been caught mildly editing, either in-camera or with tools afterward.
Conversely - sure, we don't trust when we see a photo from a rando slandering a politician, unless we want to believe it. At the same time, we mostly believe a rando photo of a fireman rescuing a cat. The latter is less likely to be fake, and if it is, the consequences of believing it are less severe.
Trust heuristics are complex and highly psychological.
I would add that, at least historically, a reputable photojournalist wouldn't likely build a very successful career on faked photos. It's heavily disincentivized. The time and effort required to build the necessary skills and clout won't casually be wasted by a professional. And if and when it does happen that a photojournalist is caught in a lie, the rest are quick to reject it, because it damages their own reputations and livelihoods.
But now, there's little to stop anyone from producing images depicting anything, and we've seen how systems that are blind to ethics can be manipulated into disseminating such images at a speed and scale that far outpaces fact-checking. Professional standards and traditional gatekeeping have no power against it.
This reminds me of the other take that's been in vogue this last week: that it's a moral panic to complain there's an AI app that's happy to write school shooting instructions, meth recipes, etc., because The Internet already has all that.
This "everything was already broken" take elides a lot that's obvious and intuitive, and the obtuseness gets you regulated and everyone saying you deserved it.
For example, this is scaling Photoshopping to literally everyone, in their pocket, with hours of work transformed into typing out a short message and waiting 20 seconds.
Eh. Though "trick" photography has existed forever, it has always been much more difficult to do and easier to spot. Now it is super easy. That has to change the calculus of trust and the basic assumption that most images aren't doctored.