TechyMag.com - is an online magazine where you can find news and updates on modern technologies



Deepfake of ex-minister Dmytro Kuleba held a video call with a US senator and asked "provocative questions"


Benjamin Cardin, the Chairman of the U.S. Senate Foreign Relations Committee, found himself on a video call with a fake Dmytro Kuleba. Unidentified individuals created a deepfake of the former Ukrainian foreign minister using artificial intelligence.

Last Thursday, the senator's office received an email allegedly from Dmytro Kuleba requesting a video conference. During the Zoom call, the interlocutor appeared and sounded like the Ukrainian diplomat.

The senator became suspicious when the faux Kuleba began asking provocative questions about the upcoming elections and pressing for opinions on sensitive foreign policy issues. In particular, the interlocutor inquired about support for missile strikes on Russian territory.

Cardin ended the call and reported the incident to the State Department, which confirmed that the person on the video call was not the real Dmytro Kuleba.

Deepfake technology utilizes artificial intelligence to create realistic videos in which real people appear to say or do things they never did. It has previously been used to impersonate public figures, notably in 2022, when a fake video of Ukrainian President Volodymyr Zelensky surfaced.

The incident involving Senator Cardin has heightened concerns about potential foreign interference in the U.S. elections set to take place in November. U.S. intelligence agencies warn that Russia, Iran, and China may use artificial intelligence and deepfakes to influence the electoral process.

While the organizers of the incident remain unknown, the questions regarding missile strikes on Russia suggest possible involvement of Russian intelligence services in this operation.

The Senate Security Service has urged lawmakers to remain vigilant.

“This attempt is marked by technical complexity and plausibility. It is likely that other similar attempts will be made in the coming weeks,” the statement said.

Meanwhile, in response to the growing threat of deepfakes, YouTube has begun developing new protective tools. These tools are intended to prevent unauthorized use of the appearance and voice of artists and content creators on the platform.

Source: New York Times
