While the expression ‘Four Horsemen of the Infocalypse’ has long been used on the internet to refer to criminals like drug dealers, money launderers, pedophiles and terrorists, the term ‘Infocalypse’ was brought to the mainstream relatively recently by MIT grad Aviv Ovadya, currently the Chief Technologist at the University of Michigan’s Center for Social Media Responsibility (UMSI). In 2016, at the height of the fake news crisis, Ovadya expressed his concerns to technologists in Silicon Valley about the spread of misinformation and propaganda disguised as real news, in a presentation titled ‘Infocalypse’.
According to his presentation, the revenue models of internet platforms like Google, Facebook and Twitter are geared towards rewarding clicks, shares and viewership rather than prioritizing the quality of information. That, he argued, was going to be a real problem sooner rather than later, given how easy it has become to post whatever anyone feels like without any filter. With major internet companies largely ignoring his concerns, Ovadya described the situation as a “car careening out of control” that everyone just ignored.
While his predictions have now proven to be frighteningly accurate, there’s more bad news for those concerned about the blurring of lines between truth and politically motivated propaganda. According to Ovadya, AI will be widely used over the next couple of decades for misinformation campaigns and propaganda. To stave off that dystopian future, he is working in tandem with a group of researchers and academics to see what can be done to prevent an information apocalypse.
One such threat comes from deepfakes: AI-generated videos in which a celebrity’s face is morphed onto other footage, largely porn. Reddit recently banned at least two such subreddits, ‘r/deepfakes’ and ‘r/deepfakesNSFW’, which already had thousands of members and were distributing fake pornographic videos of celebrities like Taylor Swift and Meghan Markle. Other platforms, like Discord, Gfycat, Pornhub, and Twitter, have also banned non-consensual face-swap porn.
Beyond violating individual privacy, Ovadya says such videos could have a destabilizing effect on the world if used for political gain. They can “create the belief that an event has occurred” in order to influence geopolitics and the economy. All it takes is feeding the AI as much footage of a target politician as possible and morphing their likeness onto another video of someone saying potentially damaging things.
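For the technically curious, the original face-swap tools were widely reported to rely on a shared-encoder, dual-decoder autoencoder: one encoder learns a common representation of faces, while a separate decoder is trained per identity, so encoding person A and decoding with person B’s decoder produces B’s likeness in A’s pose and expression. The following is a minimal sketch of that idea, assuming PyTorch; the random tensors, layer sizes and training length are toy placeholders, not a reproduction of any actual tool.

# Sketch of the shared-encoder / dual-decoder autoencoder behind early
# face-swap "deepfakes", assuming PyTorch. Random tensors stand in for
# aligned 64x64 face crops of persons A and B.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

encoder = Encoder()                          # shared across both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)  # placeholder: face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder: face crops of person B

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-3,
)
loss_fn = nn.MSELoss()

for step in range(100):  # toy training loop; real models train far longer
    opt.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode with person B's decoder,
# producing B's likeness in A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))

The swap at the end is the essence of the technique: because the encoder is shared, it captures pose and expression rather than identity, and the identity-specific decoder repaints those with the target’s face.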
The one sliver of hope, however, is that people are finally admitting that they were wrong to dismiss the threat of fake news two years ago. According to Ovadya, “In the beginning it was really bleak — few listened. But the last few months have been really promising. Some of the checks and balances are beginning to fall into place”. While it’s certainly positive that technologists are looking to address a problem that’s expected to get worse in the coming years, it remains to be seen how well prepared they will be for the coming information warfare when the worst-case scenario finally arrives, given that, according to Ovadya, “a lot of the warning signs have already happened”.