Using it as a reference is a high bar, not a low bar.
The AI videos aren't trying to be accurate. They're put out by propaganda groups as part of a "firehose of falsehood". Not trusting an AI that has been told to lie to you is different from not trusting an AI.
Even without that, playing a game of broken telephone is a good way to get bad information. Hence even a reasonably trustworthy AI is not a good reference.
Not that this makes it any better, but a lot of AI videos on YouTube are published with no specific intent beyond capturing ad revenue - they're not meant to deceive, just to make money.
Not just YouTube, either. With Meta and TikTok paying out for "engagement", all forms of engagement are good for the creator, not just positive engagement. These companies are directly encouraging rage-bait content, pure propaganda, and misinformation, because it gets people interacting with the content.
There's no incentive to produce anything of value outside of "whatever will get me the most clicks/likes/views/engagement".
One type of deception, conspiracy content, sells products on the premise that the rest of the world is wrong or hiding something from you, and only the demagogue knows the truth.
Anti-vax quacks rely on this tactic in particular. The reason they attack vaccines is that they are so profoundly effective and universally recognized that to believe otherwise effectively isolates the follower from the vast majority of healthcare professionals, forcing trust and dependency on the demagogue for all their health needs. Mercola built his supplement business on this concept.
The more widespread the idea they’re attacking, the more isolating (and hence stickier) the theory. This might be why flat earthers are so dogmatic.
Saying "some people use llms to spread lies therefore I don't trust any llms" is like saying "since people use people to spread lies therefore I don't trust any people". Regardless of whether or not you should trust llms this argument is clearly not proof of it.
Those are false equivalences. If a technology can’t reliably sort out which sources are trustworthy and filter out the rest, then it’s not a trustworthy technology. These are tools, after all. I should be able to trust a hammer if I use it correctly.
All this also misses the other point: it proves that the narrative companies are selling about AI is not based on objective capabilities.
The claim here isn't that the technology can't, but that the people using it chose not to. It's equivalent to a person with a hammer who chose to smash the 2x4 into pieces instead of driving a nail into it.
The claim here is that it can’t, because it won’t filter its own garbage, let alone anyone else’s.
The narrative being pushed boils down to LLMs and AI systems being reliable. The fact that Google's AI can’t even tag YouTube videos as unreliable sources and filter them out of the result set before analysis is telling.