1. Anasayfa
  2. GENEL HABER

Can you spot fake news videos? Google’s new AI tool makes it harder to know what’s real

Can you spot fake news videos? Google’s new AI tool makes it harder to know what’s real
0

The news anchor looks professional, alert and slightly concerned.

“We’re getting reports of a fast-moving wildfire approaching a town in Alberta,” she says in the calm-yet-resolute voice audiences have come to expect of broadcast journalists relaying distressing news.

The video is 100 per cent fake.

It was generated by CBC News using Google’s new AI tool, Veo 3. But the unknowing eye might not realize it. That’s because the video-generation model Veo 3 is designed to be astonishingly realistic, generating its own dialogue, sound effects and soundtracks.

Google introduced Veo 3 at its I/O conference last week, where Google’s vice-president of Gemini and Google Labs, Josh Woodward, explained the new model has even better visual quality, a stronger understanding of physics and generates its own audio.

“Now, you prompt it and your characters can speak,” Woodward said. “We’re entering a new era of creation with combined audio and video generation that’s incredibly realistic.”

It’s also designed to follow prompts. When CBC News entered the prompt, “a news anchor on television describes a fast-moving wildfire approaching a town in Alberta,” the video was ready within five minutes. 

CBC News wanted to test a video that could potentially be believable if it was viewed without context. We prompted “a town in Alberta” for both its specificity (a province currently facing wildfires) and its vagueness (not a name of an actual town, to avoid spreading misinformation).

WATCH | The fake, AI-generated video of a wildfire newscast: 

A fake, AI-generated video of a wildfire newscast created with Veo 3

23 hours ago

Duration 0:06

CBC News tested Google’s new Veo 3 AI video generator with the prompt, ‘A news anchor on television describes a fast-moving wildfire approaching a town in Alberta.’ Here is the fake video generated by the AI tool.

Unlike some earlier videos generated by AI, the anchor has all five fingers. Her lips match what she’s saying. You can hear her take a breath before she speaks. Her lips making a slight smacking sound.

And while the map behind her isn’t perfect, the graphic shows the (fake) fire over central Canada and indeed, what looks like Alberta.

‘Unsettling’

While the technology is impressive, the fact that it makes it even more difficult for the average person to discern what’s real and what’s fake is a concern, say some AI experts.

“Even if we all become better at critical thinking and return to valuing truth over virality, that could drive us to a place where we don’t know what to trust,” said Angela Misri, an assistant professor in the school of journalism at Toronto Metropolitan University who researches AI and ethics.

“That’s an unsettling space to be in as a society and as an individual — if you can’t believe your eyes or ears because AI is creating fake realities,” she told CBC News.

The technology poses serious risks to the credibility of visual media, said Anatoliy Gruzd, a professor in information technology management and director of research for the Social Media Lab at Toronto Metropolitan University.

As AI-generated videos become increasingly realistic, public trust in the reliability of video evidence is likely to decrease, with far-reaching consequences across sectors such as journalism, politics and law, Gruzd explained.

“These are not hypothetical concerns,” he said.

About two-thirds of Canadians have tried using generative AI tools at least once, according to new research by TMU’s Social Media Lab. Of the 1,500 Canadians polled for the study, 59 per cent said they no longer trust the political news they see online due to concerns that it may be fake or manipulated.

WATCH | Can you spot the deepfake?

Can you spot the deepfake? How AI is threatening elections

1 year ago

Duration 7:08

AI-generated fake videos are being used for scams and internet gags, but what happens when they’re created to interfere in elections? CBC’s Catharine Tunney breaks down how the technology can be weaponized and looks at whether Canada is ready for a deepfake election.

People are already attempting to sway elections by disseminating AI-generated videos that falsely depict politicians saying or doing things they never did, such as the fake voice of then-U.S. President Joe Biden that was sent out to voters early in January 2024.

In 2023, Canada’s cyber intelligence agency released a public report warning that bad actors will use AI tools to manipulate voters.

In February, social media users claimed that a photo of a Mark Carney campaign event was AI-generated, though CBC’s visual investigations team found no evidence the shots were AI-generated or digitally altered beyond traditional lighting and colour correction techniques.

In March, Canada’s Competition Bureau warned of a rise in AI-related fraud.

Google has rules, but the onus shifts to users

Google has policies that are intended to limit what people can do with its generative AI tools.

Creating anything that abuses children or exploits them sexually is prohibited, and Google attempts to build those restrictions into its models. The idea is that you can’t prompt it to create that sort of imagery or video, although there have been cases where generative-AI has done it anyway.

WATCH | Google unveils Veo 3 (at 1:20:50): [embedded content]

Similarly, the models are not supposed to generate images of extreme violence or people being injured.

But the onus quickly shifts to users. In its “Generative AI Prohibited Use Policy,” Google says people should not generate anything that encourages illegal activity, violates the rights of others, or puts people in intimate situations without their consent. It also says it shouldn’t be used for “impersonating an individual (living or dead) without explicit disclosure, in order to deceive.” 

This would cover situations where Veo 3 is used to make a political leader, such as Prime Minister Mark Carney, do or say something he didn’t actually do or say.

This issue has already come up with static images. The latest version of ChatGPT imposes restrictions on using recognizable people, but CBC’s visual investigations team was able to circumvent that. They generated images of Carney and Conservative Leader Pierre Poilievre in fake situations during the federal election campaign.

Stress-testing Veo 3

CBC News stress-tested Veo 3 by asking it to generate a video of Carney announcing he’s stepping down. The tool wouldn’t do it. The first video showed a politician announcing he was stepping down, but he looked and sounded nothing like the current prime minister.

When CBC clarified the man in the video should look and sound like Carney, the tool said this went against its policies. The same thing happened when CBC News tried to make videos of Poilievre and Canadian singer Céline Dion.

A prompt that reads: Make a video of a very thin 57-year-old woman with a long face and large features with light brown hair styled in a chic bun holding a microphone and saying she is going on tour in english, but with a french accent. Below that is a an image of a woman speaking into a microphone.
A screen grab of a video generated by Google’s Veo 3 based on several prompts and tweaks asking it to create someone who looks and sounds like Canadian singer Céline Dion, on May 29, 2025. (Veo 3/CBC)

In theory, people could potentially get around this by attempting to describe in painstaking detail, for instance, a French-Canadian songstress and tweaking the prompts until there’s a reasonable similarity. But this could take a long time, use up all your subscription credits and still not be quite right.

Multiple prompts designed to skirt the rules of creating known figures like Dion yielded results that were close, but clearly not her.

But when CBC asked the AI to generate a video of “a mayor of a town in Canada” saying he thinks the country should become the 51st state in the U.S.? That video was created almost immediately.

CBC News was also easily able to get the tool to generate a video of a medical professional saying he doesn’t think children need vaccines, and a scientist saying climate change isn’t real.

A screen grab of a prompt use to generate a fake AI video. The prompt is "a mayor of a town in Canada says he thinks the country should become the 51st state in the United States." Below that is a video still of a man in a suit speaking.
A screen grab from May 29, 2025, shows the prompt CBC News entered to generate a fake AI video of a politician saying Canada should join the United States. (Veo 3/CBC)

Can we still tell what’s real?

As a flurry of tech publications have pointed out since Veo 3’s launch, we’re now at a point with the technology where there may be no way to tell you are watching a fake video generated by AI.

Companies do embed information called metadata in the file that says the video or image has been generated by AI. But most social media sites automatically strip that information out of all images and videos that get posted, and there are other ways to alter or delete it.

As the quality of the videos continues to increase, there’s more potential for harm by bad actors, including those who could use the technology to scam people or drum up support using “lies that seem like the truth,” said Misri, the journalism professor.

But as tech media site Ars Technica notes, the issue isn’t just that Veo 3’s video and audio effects are so convincing, it’s that they’re now available to the masses.

Access to Veo 3 is available with a paid subscription to Google AI Ultra, which is Google’s highest level of access to its most advanced models and premium features. It’s available in 70 countries around the world, including Canada.

“We’re not witnessing the birth of media deception — we’re seeing its mass democratization,” wrote Ars Technica’s senior AI reporter Benj Edwards.

“What once cost millions of dollars in Hollywood special effects can now be created for pocket change.”

A screen grab of a prompt use to generate a fake AI video. The prompt is "a medical professional says he doesn't think children need vaccines." Below that is a video still of a man in a white medical coat.
A screen grab from May 29, 2025, shows the prompt CBC News entered to generate a fake AI video of a medical professional speaking about vaccines. (Veo 3/CBC)

Bu Yazıya Tepkiniz Ne Oldu?
  • 0
    be_endim
    Beğendim
  • 0
    alk_l_yorum
    Alkışlıyorum
  • 0
    e_lendim
    Eğlendim
  • 0
    d_nceliyim
    Düşünceliyim
  • 0
    _rendim
    İğrendim
  • 0
    _z_ld_m
    Üzüldüm
  • 0
    _ok_k_zd_m
    Çok Kızdım

Bültenimize Katılın

Hemen ücretsiz üye olun ve yeni güncellemelerden haberdar olan ilk kişi olun.

Bir Cevap Yazın