How AI-generated images are leveraged for propaganda: NPR


Following the online dissemination of images depicting the devastation caused by Hurricane Helene, a particular image of a crying child holding a puppy on a boat gained significant attention. The image, shared on X (formerly Twitter), garnered millions of views and sparked emotional reactions from users, including numerous Republicans who were critical of the Biden administration’s disaster management. However, many users quickly noticed signs that the image was likely produced with generative artificial intelligence tools, such as malformed limbs and blurriness, which are common in AI-generated imagery.

Throughout the current election cycle, AI-generated synthetic images have proliferated on social media, often appearing after politicized news events. Observers suggest that these images are being used to propagate partisan narratives, frequently without regard to factual accuracy. After users on X flagged the image of the child as being AI-generated, some individuals who shared it, including Senator Mike Lee (R-Utah), deleted their posts, as reported by Rolling Stone. Despite the revelation of the image’s artificial origin, some individuals persisted in supporting its distribution. Amy Kremer, a Republican National Committee member from Georgia, stated on X that the image’s origin was irrelevant.

Renée DiResta, a professor at the McCourt School of Public Policy at Georgetown University, commented on the phenomenon, describing it as a form of political propaganda. She explained that such images serve as signals of support for certain candidates, akin to a form of fandom. Political campaigns can capitalize on this by sharing and amplifying these images, thereby appearing to engage with the ongoing conversation or joke.

AI-generated images have also emerged depicting animals on rooftops barely above flood water in the aftermath of Hurricanes Helene and Milton. Additionally, baseless claims about Haitian immigrants in Springfield, Ohio eating pets and wildlife, shared by former President Trump and his running mate JD Vance, led to the circulation of AI-generated images featuring Trump with various animals. DiResta remarked that generative AI has become another tool for supporters to engage with their campaigns online due to its affordability, ease of use, and entertainment value.

The distinction between truth and factual accuracy in images has also come under discussion, with figures like Matthew Barnidge, a professor at the University of Alabama, noting the philosophical basis for seeking deeper truths beyond factual accuracy. Barnidge pointed out that in philosophical works by Kant, Kierkegaard, and Hegel, deeper truths are often associated with concepts such as freedom and the sublime.

Research suggests that while fact-checking can influence voter perception in certain contexts, it is less effective in changing entrenched views, as evidenced by studies comparing fact-checking impacts in Australia and the United States. Emily Vraga, a health communication researcher, highlighted the challenges in fact-checking images, emphasizing that distinguishing real from AI-generated content is difficult for many people.

The role of generative AI in spreading misinformation has also been examined. Ara Merjian, an art historian at New York University, noted that photorealistic AI-generated images, like those of Taylor Swift appearing to support Trump, can have a significant impact because they look convincingly real despite being fabricated. Swift herself cited misinformation as a reason for endorsing Vice President Kamala Harris.

AI-generated content often fills the void left by a shrinking news industry and the deprioritization of news by tech platforms. Barnidge explained that propagandists frequently exploit this space by introducing content as lifestyle material rather than news. AI-generated images of various kinds, including politically inspired ones, proliferate online, with some creators seeking to monetize engagement through emotional appeals and others aiming to collect personal data.

An investigation by 404 Media revealed that some individuals in developing countries instruct others on creating trending posts using AI-generated images to earn payouts from platforms like Facebook, which often exceed average local monthly incomes.

Concerns about AI-generated images influencing political perceptions have been raised, especially with striking examples such as an image shared by X’s owner Elon Musk. The image depicted Vice President Kamala Harris in Communist attire, casting aspersions on her national allegiance. Eddie Perez of the nonpartisan OSET Institute noted that such images contribute to political polarization and risk eroding trust in election results. Republicans have floated claims of potential election manipulation, and generative AI is just one of many tools cited in that context.