I asked experts how to counter political rage farming and deception. Here’s a citizen’s tool kit.
Pressure MPs to get rid of American-owned news media. Prevention is better than a reactive truth that lags behind loudspeakers blaring disinformation 24/7 and drowning out any rebuttal.
American culture has turned toxic…
Real-time fact-checking should become the standard. With recent advances in AI it isn't hard to add, and political speeches are the obvious first avenue: whenever a politician speaks, every statement can be checked against sources in real time and shown as subtitles.
When someone makes a claim, it shows what data supports it and what data contradicts it. For contested information, present both sides as evenly as possible to reduce accusations of bias; for a quote, show the context. Make it mandatory on news broadcasts first, and let independents and other news sources decide for themselves whether to follow the trend.
Huge, radical, wide-sweeping change opens up potential problems, especially around enforcing it on others. So start with government-funded news (or even just the CBC at first, while making the software free or cheap for anyone else who wants to use it), so that people get used to it and it can be tuned to work well while minimizing complaints of bias.
By showing sources and supporting/conflicting data for all statements, rather than just blatantly false ones, we can set a new standard based on facts, not persuasive power.
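Just to make it concrete, here's the rough shape of what I'm picturing, sketched in Python. Everything in it is invented for illustration (the Evidence/CheckedClaim types, the hard-coded example data); a real system would need actual speech-to-text, claim detection, and source databases behind it.

```python
# Rough sketch only, not a real product: take one statement from a live
# transcript, look up supporting and contradicting data, and render both
# sides as a single subtitle line.

from dataclasses import dataclass, field


@dataclass
class Evidence:
    source: str   # where the data comes from (e.g. a statistics table, Hansard)
    summary: str  # one-line summary of what that source says


@dataclass
class CheckedClaim:
    statement: str
    supporting: list[Evidence] = field(default_factory=list)
    contradicting: list[Evidence] = field(default_factory=list)

    def as_subtitle(self) -> str:
        """Show both sides as evenly as possible, in one subtitle line."""
        pro = "; ".join(f"{e.summary} ({e.source})" for e in self.supporting) or "no supporting data found"
        con = "; ".join(f"{e.summary} ({e.source})" for e in self.contradicting) or "no contradicting data found"
        return f'"{self.statement}" -> supports: {pro} | contradicts: {con}'


def lookup_evidence(statement: str) -> CheckedClaim:
    """Stand-in for the retrieval step; a real version would query source databases."""
    # Hard-coded example data so the sketch runs end to end.
    return CheckedClaim(
        statement=statement,
        supporting=[Evidence("Example dataset A", "figure roughly matches for 2022")],
        contradicting=[Evidence("Example dataset B", "trend reversed in 2023")],
    )


if __name__ == "__main__":
    # Pretend this line just came out of the live transcript of a speech.
    claim = "Unemployment has fallen every year since we took office"
    print(lookup_evidence(claim).as_subtitle())
```

The point is the output format: every claim carries both its supporting and its contradicting sources, not just a true/false verdict.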
I agree that fact checking needs to be the standard in media coverage of political statements.
But I don’t think the AI tools that are currently available are ready to do it without significant human oversight. They are still prone to hallucinations and other unpredictable behavior now and then.
Yes, AIs as they are now do need oversight. But it isn't possible to do this in real time without AIs, and issuing corrections afterwards when an AI slips up is far better than just letting politicians get away with blatant lying. Also, as long as they're supervised, any line can be vetoed if the supervisor thinks it may be off, keeping the corrections and source statements conservative; for this sort of thing it's obviously better to be silent than to be wrong.
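The supervision step could be as simple as a review queue like this (again, the names and the confidence threshold are made up, just to show the shape): nothing airs unless it clears both the model's own confidence bar and the human reviewer's veto.

```python
# Sketch of the human-in-the-loop step: drop anything the model is unsure
# about or the supervisor vetoes, so silent beats wrong.

from dataclasses import dataclass


@dataclass
class DraftSubtitle:
    text: str
    model_confidence: float  # 0.0-1.0, as reported by whatever model produced it


def review_queue(drafts: list[DraftSubtitle],
                 approve: callable,
                 min_confidence: float = 0.8) -> list[str]:
    """Return only the subtitles that clear both the confidence bar and the human check."""
    aired = []
    for draft in drafts:
        if draft.model_confidence < min_confidence:
            continue  # too shaky to show at all
        if approve(draft):  # the supervisor's veto: False means it never airs
            aired.append(draft.text)
    return aired


if __name__ == "__main__":
    drafts = [
        DraftSubtitle("Claim matches Example dataset A for 2022", 0.93),
        DraftSubtitle("Speaker misquoted the 2019 report", 0.55),  # dropped: low confidence
    ]
    # Stand-in for the human reviewer; a real interface would show sources and context.
    print(review_queue(drafts, approve=lambda d: True))
```

Erring toward dropping lines is what keeps the on-screen corrections conservative.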
And the earlier such projects start, the more we learn about doing it well as AIs improve, and about recognizing the signs when an AI is hallucinating.