Regulators Grapple With Dilemma Over How to Allow AI Innovations and Thwart Scams

By BeInCrypto, October 4, 2023

YouTuber MrBeast has shared an artificial intelligence (AI) deepfake clip of himself to warn his followers about scams. Meanwhile, one of the most senior European Union (EU) officials has cautioned against overly strict AI regulation.

With the rise of generative AI tools, regulators and users have become aware of the threats the technology can pose. The challenge is to draft balanced rules that protect users without halting innovation.

Scammers Leverage Influencers’ Popularity Through AI Deepfake Videos

James Stephen Donaldson, the creator behind one of the world’s largest YouTube channels, MrBeast, shared a deepfake video of himself on X (Twitter) to warn his followers about AI scams. He wrote:

“Lots of people are getting this deepfake scam ad of me… are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem.”


The video tricks viewers into believing that MrBeast is giving away the latest iPhone 15 Pro for $2. It asks them to click a link, likely redirecting them to a phishing website.


Besides MrBeast, scammers are creating AI deepfake videos of other famous personalities, including Elon Musk. In September, BeInCrypto reported that bad actors were running crypto scams on TikTok using deepfake videos of Musk.

Read more: 15 Most Common Crypto Scams To Look Out For

European Commission Official Warns Against Restrictive AI Regulation

Deepfake videos have been circulating for quite a while, prompting some to call for strict AI regulation.

On the other hand, stricter regulation could slow innovation. Regulators therefore need to strike a balance rather than swing to either extreme.

Věra Jourová, the European Commission’s vice president for values and transparency, believes AI regulation should rest on a solid analysis of potential risks rather than on paranoia. She told the Financial Times:

“We should not mark as high risk things which do not seem to be high risk at the moment. There should be a dynamic process where, when we see technologies being used in a risky way we are able to add them to the list of high risk later on.”

The EU has been working on drafting an AI bill for the last two years. Notably, the bill would require mandatory disclosure of the copyrighted material used to train AI models. EU officials are expected to conclude their discussions on the bill by the end of 2023.

However, OpenAI, the company behind ChatGPT, believes the EU’s draft AI bill over-regulates the technology. In May, OpenAI CEO Sam Altman threatened to leave Europe if the bill passed without changes.

Meanwhile, the UK is also focusing on AI regulation and will host a global summit on the topic in November.

Read more: The 6 Hottest Artificial Intelligence (AI) Jobs in 2023



Disclaimer

In adherence to the Trust Project guidelines, BeInCrypto is committed to unbiased, transparent reporting. This news article aims to provide accurate, timely information. However, readers are advised to verify facts independently and consult with a professional before making any decisions based on this content.
