Deepfake Kari Lake video shows coming chaos of AI in elections

Hank Stephenson has a finely tuned B.S. detector. The longtime journalist has made a living sussing out lies and political spin.

But even he was fooled at first when he watched the video of one of his home state’s most prominent congressional candidates.

There was Kari Lake, the Republican Senate hopeful from Arizona, on his phone screen, speaking words written by a software engineer. Stephenson was watching a deepfake — an artificial-intelligence-generated video produced by his news organization, Arizona Agenda, to underscore the dangers of AI misinformation in a pivotal election year.

“When we started doing this, I thought it was going to be so bad it wouldn’t trick anyone, but I was blown away,” Stephenson, who co-founded the site in 2021, said in an interview. “And we are unsophisticated. If we can do this, then anyone with a real budget can do a good enough job that it’ll trick you, it’ll trick me, and that is scary.”

As a tight 2024 presidential election draws ever nearer, experts and officials are increasingly sounding the alarm about the potentially devastating power of AI deepfakes, which they fear could further corrode the country’s sense of truth and destabilize the electorate.

There are signs that AI — and the fear surrounding it — is already having an impact on the race. Late last year, former president Donald Trump falsely accused the producers of an ad, which showed his well-documented public gaffes, of trafficking in AI-generated content. Meanwhile, actual fake images of Trump and other political figures, designed both to boost and to bruise, have gone viral again and again, sowing chaos at a crucial point in the election cycle.

Now some officials are rushing to respond. In recent months, the New Hampshire Justice Department announced it was investigating a spoofed robocall featuring an AI-generated voice of President Biden; Washington state warned its voters to be on the lookout for deepfakes; and lawmakers from Oregon to Florida passed bills restricting the use of such technology in campaign communications.

And in Arizona, a key swing state in the 2024 contest, the top elections official used deepfakes of himself in a training exercise to prepare staff for the onslaught of falsehoods to come. The exercise inspired Stephenson and his colleagues at the Arizona Agenda, whose daily newsletter seeks to explain complex political stories to an audience of some 10,000 subscribers.

They brainstormed ideas for about a week and enlisted the help of a tech-savvy friend. On Friday, Stephenson published the piece, which included three deepfake clips of Lake.

It begins with a ploy, telling readers that Lake — a hard-right candidate whom the Arizona Agenda has pilloried in the past — had decided to record a testimonial about how much she enjoys the outlet. But the video quickly pivots to the giveaway punchline.

“Subscribe to the Arizona Agenda for hard-hitting real news,” the fake Lake says to the camera, before adding: “And a preview of the terrifying artificial intelligence coming your way in the next election, like this video, which is an AI deepfake the Arizona Agenda made to show you just how good this technology is getting.”

By Saturday, the videos had generated tens of thousands of views — and one very unhappy response from the real Lake, whose campaign attorneys sent the Arizona Agenda a cease-and-desist letter. The letter demanded “the immediate removal of the aforementioned deep fake videos from all platforms where they have been shared or disseminated.” If the outlet refuses to comply, the letter said, Lake’s campaign would “pursue all available legal remedies.”

A spokesperson for the campaign declined to comment when contacted on Saturday.

Stephenson said he was consulting with lawyers about how to respond, but as of Saturday afternoon, he was not planning to remove the videos. The deepfakes, he said, are good learning devices, and he wants to arm readers with the tools to detect such forgeries before they’re bombarded with them as the election season heats up.

“Fighting this new wave of technological disinformation this election cycle is on all of us,” Stephenson wrote in the article accompanying the clips. “Your best defense is knowing what’s out there — and using your critical thinking.”

Hany Farid, a professor at the University of California at Berkeley who studies digital propaganda and misinformation, said the Arizona Agenda videos were useful public service announcements that appeared carefully crafted to limit unintended consequences. Even so, he said, outlets should be wary of how they frame their deepfake reportage.

“I’m supportive of the PSAs, but there’s a balance,” Farid said. “You don’t want your readers and viewers to look at everything that doesn’t conform to their worldview as fake.”

Deepfakes present two distinct “threat vectors,” Farid said. First, bad actors can generate false videos of people saying things they never actually said; and second, people can more credibly dismiss any real embarrassing or incriminating footage as fake.

This dynamic, Farid said, has been especially apparent during Russia’s invasion of Ukraine, a conflict rife with misinformation. Early in the war, Ukraine promoted a deepfake showing Paris under attack, urging world leaders to react to the Kremlin’s aggression with as much urgency as they might show if the Eiffel Tower had been targeted.

It was a potent message, Farid said, but it opened the door for Russia’s baseless claims that subsequent videos from Ukraine, which showed evidence of Kremlin war crimes, were similarly feigned.

“I am worried that everything is becoming suspect,” he said.

Stephenson, whose backyard is a political battleground that lately has become a crucible of conspiracy theories and false claims, has a similar fear.

“For many years now we’ve been battling over what’s real,” he said. “Objective facts can be written off as fake news, and now objective videos will be written off as deep fakes, and deep fakes will be treated as reality.”

Researchers like Farid are feverishly working on software that would allow journalists and others to more easily detect deepfakes. Farid said the suite of tools he currently uses easily classified the Arizona Agenda video as bogus, a hopeful sign for the coming flood of fakes. However, deepfake technology is improving at a rapid rate, and future phonies could be much harder to spot.

And even Stephenson’s admittedly sub-par deepfake managed to dupe a few people: After blasting out Friday’s newsletter with the headline “Kari Lake does us a solid,” a handful of paying readers unsubscribed. Most likely, Stephenson suspects, they thought Lake’s endorsement was real.

Maegan Vazquez contributed to this report.

This post appeared first on The Washington Post

