The Defense Advanced Research Projects Agency is seeking to create software with the capability to “automatically detect, attribute, and characterize falsified multi-modal media to defend against large-scale, automated disinformation attacks.”
The software will scan news stories, photos, and videos to identify “polarizing viral content” and halt its spread, with the stated aim of eliminating “malicious intent” entirely.
Given that the program does not account for the fact that so-called “trusted sources” in the mainstream media have produced some of the biggest fake news stories in modern history, such as the Trump-Russia election collusion narrative, the software will only succeed in eliminating dissident narratives.
“To hear them tell it, the Pentagon just wants to even the playing field between the ‘good guys’ – the fake-hunters pursuing the cause of truth in media – and the ‘bad guys’ sowing discord one slowed-down Nancy Pelosi speech at a time,” she writes.
“But the Pentagon’s targets aren’t limited to deepfakes, the bogeyman-of-the-month being used to justify this unprecedented military intrusion into the social media and news realm, or fake news at all. If the program is successful after four years of trials, it will be expanded to target all ‘malicious intent’ – a possibility that should send chills down the spine of any journalist who’s ever disagreed with the establishment narrative.”
A study undertaken by researchers at University College London found that the most effective memes in the run-up to the 2016 presidential election largely originated in two places – the subreddit r/the_donald, a forum devoted to boosting President Donald Trump, and 4chan’s “politically incorrect” /pol/ board.
A VICE write-up of the study acknowledges that the most “effectively spread” memes originated on r/the_donald and /pol/.
Last year, Facebook also announced that it was developing a new AI algorithm to detect and ban “offensive” memes.