Better than no checking, which is what we will do 99% of the time. There is an NLP task called "entailment" (also known as natural language inference), where a model judges whether one statement supports, contradicts, or is neutral toward another. A combination of search + entailment would work for fact-checking articles.
But if you want to do this properly you need to first mine all facts from all sources, then do reconciliation, then update your "truth" table for reference. Everyone will probably want to select the sources of truth they want loaded into the system; we're not going to agree on truth.
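A rough sketch of that pipeline, assuming claims have already been mined from each source as (source, claim, verdict) triples. All the names and structures here are hypothetical, just to illustrate the per-user source selection:

```python
# Build a "truth table" mapping each claim to what every source says
# about it, then reconcile a claim against only the sources a given
# user has chosen to trust. All names here are made up for illustration.

from collections import defaultdict

def build_truth_table(mined):
    """mined: iterable of (source, claim, verdict) triples."""
    table = defaultdict(dict)
    for source, claim, verdict in mined:
        table[claim][source] = verdict
    return table

def lookup(table, claim, trusted_sources):
    """Answer a claim using only the sources this user trusts."""
    verdicts = {s: v for s, v in table.get(claim, {}).items()
                if s in trusted_sources}
    if not verdicts:
        return "unknown"      # none of the trusted sources mention it
    if len(set(verdicts.values())) > 1:
        return "disputed"     # trusted sources disagree among themselves
    return next(iter(verdicts.values()))
```

Two users with different trusted-source sets can get different answers for the same claim, which is exactly the "we're not going to agree on truth" point.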
Even the bare minimum of knowing when an affirmation is controversial or doesn't exist in references would be of great help. AI could emit <controversial> tags for the former and <citation needed> for the latter. Fortunately search can tell us when no results are found, unlike LLMs.
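The tagging logic could look something like this. The entailment model is stubbed out as a callable; in practice it could be any NLI classifier that labels a (premise, hypothesis) pair. The labels and function names are assumptions, not a real API:

```python
# Sketch of search + entailment tagging: run the claim through search,
# then run each retrieved passage through an NLI model (stubbed here as
# `nli`) and decide which tag, if any, to attach to the claim.

def tag_claim(claim, search_hits, nli):
    """Tag a claim based on search results and entailment labels.

    search_hits: passages returned by a search engine for the claim.
    nli: callable (premise, hypothesis) -> "ENTAILS" | "CONTRADICTS" | "NEUTRAL".
    """
    if not search_hits:
        # Search found nothing: the claim is unsourced.
        return "<citation needed>"
    labels = {nli(passage, claim) for passage in search_hits}
    if "ENTAILS" in labels and "CONTRADICTS" in labels:
        # Sources disagree with each other about this claim.
        return "<controversial>"
    if "ENTAILS" in labels:
        return "<supported>"
    if "CONTRADICTS" in labels:
        return "<contradicted>"
    # Only neutral passages: nothing actually addresses the claim.
    return "<citation needed>"
```

The empty-results branch is the part LLMs can't do on their own: search returning zero hits is a hard signal, while a bare LLM will happily answer anyway.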