A few years ago, experts and cybersecurity researchers tried to imagine the kinds of tactics and manipulation campaigns that might threaten the 2018 and 2020 elections. Misleading AI-generated videos topped the list. Even though the technology was still emerging at the time, its potential for abuse was alarming enough that technology companies and academic developers prioritized and funded methods for detecting these fake AI-generated videos. Social media platforms developed policies for posts containing 'synthetic and manipulated media' in an effort to strike the right balance between preserving freedom of expression and discouraging viral lies and propaganda. The wave of deepfake videos has never seemed to slow down, and today a new form of AI-generated media is making headlines. It is harder to detect, yet more likely to become a pervasive force across the internet: deepfake AI-generated text.
Last month, GPT-3, the next frontier of auto-generated writing, was introduced to the public: an AI capable of producing shockingly human-sounding and, at times, surreal sentences. As it learns, its output becomes harder to distinguish from text produced by humans. Can you imagine a near future in which the vast majority of written content we see on the internet is made by machines like this?
Deepfaked video or written output from GPT-3 differs from photoshopped images or edited video because there is no raw, unaltered source material to use as a reference for comparison or as evidence for a fact-check. In the early 2000s, it was fairly easy to compare before-and-after photos of celebrities. Now we are confronted with increasingly convincing celebrity face-swaps in porn, and with clips in which world leaders say things they never said. Everyone has to adapt and adjust to a new level of unreality. Social media platforms recognize this distinction and moderate deepfake content that is synthetic or 'modified' accordingly.
AI-generated text has the potential to warp the entire ecosystem of social communication.
To be able to moderate deepfake content, you first have to know it is circulating on the internet.
Video is perhaps the easiest form of AI-generated content to detect. But the AI-generated text being produced today is particularly challenging to identify.
Today, it is possible to detect recycled, repetitive comments that reuse similar snippets of text to flood the comment section of an article or blog, inflate Twitter's hashtag counts, or sway Facebook audiences. Recent manipulation campaigns have been observed, including ones targeting government matters that call for public attention. Suspicious contributions were spotted among them, identified as such because they repeated content that different people were unlikely to have composed independently. Such campaigns would be much harder to uncover if the text were generated by an AI.
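The repeated-snippet detection described above can be sketched as a simple near-duplicate check: fingerprint each comment with overlapping word n-grams ("shingles") and flag pairs whose wording overlaps too much. This is an illustrative sketch only; the function names and the threshold are assumptions, not any platform's actual moderation method, and real systems use far more sophisticated signals.

```python
def shingles(text, n=3):
    """Fingerprint a comment as a set of lowercase word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a, b):
    """Overlap of two shingle sets: 1.0 = identical wording, 0.0 = nothing shared."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_recycled(comments, threshold=0.5):
    """Return index pairs of comments whose wording overlaps suspiciously."""
    sets = [shingles(c) for c in comments]
    pairs = []
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            if jaccard(sets[i], sets[j]) >= threshold:
                pairs.append((i, j))
    return pairs

comments = [
    "Please support the new policy it helps everyone",
    "Please support the new policy it helps us all",
    "I disagree with this proposal entirely",
]
print(flag_recycled(comments))  # the first two comments share most of their phrasing
```

The catch, as the article notes, is that a capable text generator can paraphrase each comment uniquely, so no pair ever crosses the similarity threshold, and this whole class of detection quietly fails.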
We must all learn to be more critical consumers of online content, evaluating the substance of a campaign so we are not swayed by its deceit. As synthetic media of every type increases, whether photo, video, text, or audio, detection will only become more challenging.