Researchers at the University of Maryland have shown that current methods of watermarking artificial intelligence (AI)-generated content are easy to evade. Watermarks are a crucial tool for combating the misuse of AI-generated content, such as deepfakes and misinformation, yet the UMD researchers found it simple to remove existing watermarks or to add fake ones to images that were not AI-generated. Even so, the team also developed a watermark it describes as unremovable, which can be used to detect stolen products. The findings highlight the need for more robust watermarking technology to combat AI-generated misinformation effectively.
In a separate collaboration between the University of California, Santa Barbara and Carnegie Mellon University, researchers likewise found watermarks easy to remove through simulated attacks. Destructive attacks treated the watermark as part of the image and altered it in ways that noticeably degraded image quality. Constructive attacks, which treat the watermark as noise to be stripped with techniques such as Gaussian blur, removed watermarks with subtler changes to the image. That watermarks fall to such simulated attacks suggests watermarking for AI-generated content must improve considerably before it can reliably counter deepfake ads and political manipulation.
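To illustrate why a blur-based attack works, here is a minimal, self-contained sketch (not the researchers' actual method): it embeds a hypothetical high-frequency checkerboard pattern as a stand-in watermark in a synthetic image, then applies a separable Gaussian blur with NumPy and measures how strongly the pattern still correlates with the image. Because the blur averages neighboring pixels, it wipes out high-frequency signals while leaving the smooth image content largely intact.

```python
import numpy as np

def gaussian_kernel(size=9, sigma=2.0):
    """1D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma=2.0):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma=sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

rng = np.random.default_rng(0)
# Smooth synthetic "natural" image: heavily blurred noise.
base = blur(rng.normal(size=(64, 64)), sigma=4.0)
# Hypothetical watermark: a faint +/-1 checkerboard (pure high-frequency signal).
wm = (np.indices((64, 64)).sum(axis=0) % 2) * 2.0 - 1.0
watermarked = base + 0.05 * wm

def wm_strength(img):
    """Correlation of the image with the known watermark pattern."""
    return abs(np.mean((img - img.mean()) * wm))

before = wm_strength(watermarked)
after = wm_strength(blur(watermarked, sigma=2.0))
```

In this toy setup, `after` is a small fraction of `before`: the blur attenuates the checkerboard far more than it changes the underlying image, which is the essence of a "constructive" removal attack. Real watermarking schemes embed signals that are harder to separate from image content, but the researchers' results suggest many are still vulnerable to this style of attack.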
As the 2024 US presidential election approaches, the potential influence of AI-generated content on political opinion grows increasingly significant. The Biden administration has acknowledged concerns about AI's potential for disruption, particularly in misinformation campaigns. While developers continue to refine tools such as Google's identification tool for generative art, digital watermarking must evolve to stay ahead in the race against attackers and to safeguard against the misuse of AI-generated content in shaping public perception.