Deepfake: British firm Arup falls prey to $25M scam, how can you protect yourself?

May 17, 2024
  • This sophisticated fraud resulted in one of its Hong Kong employees transferring $25 million to scammers.
  • Arup notified Hong Kong police in January about the incident, confirming that fake voices and images were used.
  • Deepfake technology presents significant risks that require vigilance and proactive measures to mitigate.

British multinational design and engineering company Arup, renowned for iconic buildings like the Sydney Opera House, confirmed it was targeted by a deepfake scam.

This sophisticated fraud resulted in one of its Hong Kong employees transferring $25 million to scammers.

What was the fraud?

Arup notified Hong Kong police in January about the incident, confirming that fake voices and images were used.

The scam involved a finance worker who was tricked into attending a video call with people he believed were the chief financial officer and other staff members, all of whom were deepfake recreations.

Despite initially suspecting a phishing email, the employee was persuaded by the realistic appearance and voices of his supposed colleagues and proceeded with the transactions: 200 million Hong Kong dollars ($25.6 million) across 15 transfers.

Rising threat of deepfake technology

The incident underscores the increasing sophistication of deepfake technology.

Rob Greig, Arup’s global chief information officer, said:

“Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes.”

The number and sophistication of such attacks have been rising sharply, posing significant challenges for companies worldwide.

Global concern and internal response

Authorities globally are growing concerned about the malicious uses of deepfake technology.

In an internal memo, Arup’s East Asia regional chairman, Michael Kwok, emphasized the increasing frequency and sophistication of these attacks, urging employees to stay informed and alert to spot different scamming techniques.

Operational resilience and ongoing investigation

Despite the significant financial loss, Arup assured that its financial stability and business operations were unaffected, and none of its internal systems were compromised. The company continues to work with authorities, and the investigation is ongoing.

This high-profile incident highlights the urgent need for businesses to enhance their cybersecurity measures to combat the growing threat of deepfake technology and other sophisticated scams.

What is a deepfake?

A deepfake is content generated using deep learning techniques that appears real but is fabricated. Artificial intelligence (AI) used to create deepfakes typically employs generative models, such as Generative Adversarial Networks (GANs) or auto-encoders.
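The adversarial idea behind a GAN can be made concrete with a toy, stdlib-only sketch. Everything here is illustrative and far simpler than real deepfake models: the "generator" is just g(z) = a·z + b trying to mimic samples from a fixed Gaussian, while a one-feature logistic "discriminator" tries to tell real samples from generated ones, each nudging its parameters against the other.

```python
import math
import random

random.seed(0)

def sigmoid(u: float) -> float:
    u = max(-60.0, min(60.0, u))     # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(-u))

# Real data: samples from N(4, 1). The generator g(z) = a*z + b maps
# noise z ~ N(0, 1) toward that distribution; the discriminator
# D(x) = sigmoid(w*x + c) tries to tell real samples from fakes.
a, b = 1.0, 0.0      # generator parameters
w, c = 0.0, 0.0      # discriminator parameters
lr = 0.05

for _ in range(2000):
    z = random.gauss(0.0, 1.0)
    x_real = random.gauss(4.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. fool the discriminator
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1.0 - d_fake) * w      # d/dx of log D(x)
    a += lr * grad_x * z
    b += lr * grad_x

print(f"generator after training: a={a:.2f}, b={b:.2f}")
```

Over training, the generator's output distribution drifts toward the real one, which is exactly why mature versions of this setup produce faces and voices that are hard to distinguish from genuine recordings.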

Deepfakes can be videos, audio recordings, or images depicting individuals or groups doing or saying things they never did.

To produce convincing content, AI must train on large datasets to recognize and replicate natural patterns.

Deepfake technology, while innovative, opens up dangerous opportunities for illegal use, including identity theft, evidence forging, disinformation, slander, and biometric security bypass.

Fraudsters often leverage the depicted person’s authority or personal connection to their targets.

Types of deepfakes

Deepfake techniques can produce video, audio, or image content, delivered as recorded media or in real-time streams. These formats can be encountered in many scenarios, from social media posts to phone calls and video conferences.

Face swapping: This application replaces the facial features of a target person with fake features, often of another person.

Techniques like facial landmark detection and manipulation make the blending seamless and hard to spot, especially for a viewer caught unaware.

Voice cloning: This technique replicates an individual’s voice. High-quality audio data from recordings of the target person speaking in various contexts is needed to train a voice cloning model.

Real-time video deepfakes

Real-time video deepfakes generate manipulated video content instantly during live streams and video calls.

Voice cloning and face swapping are frequently used to create a convincing fake environment. Deepfake generation software can integrate with streaming platforms and video conferencing tools in several ways:

  • A separate application captures, processes, and sends the manipulated video feed to the conferencing software.
  • Direct integration into the video conferencing software as an optional feature or plugin.
  • A virtual camera that intercepts the feed from the physical camera and outputs the manipulated feed.
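The virtual-camera route can be pictured as a filter sitting between the real camera and the conferencing app: every frame is intercepted, altered, and passed on. The sketch below is purely structural, with frames as stand-in lists of pixel values and `manipulate_frame` as a hypothetical placeholder for where a deepfake model would run; real tools register an actual virtual device with the operating system.

```python
from typing import Callable, Iterable, Iterator, List

Frame = List[int]  # stand-in for an image; real pipelines use pixel arrays

def virtual_camera(
    physical_feed: Iterable[Frame],
    manipulate: Callable[[Frame], Frame],
) -> Iterator[Frame]:
    """Intercept each frame from the physical camera, apply the
    manipulation, and yield the result as if it came from a real device."""
    for frame in physical_feed:
        yield manipulate(frame)

# Hypothetical "manipulation": invert pixel values so the interception
# is visible; a deepfake model would perform a face swap here instead.
def manipulate_frame(frame: Frame) -> Frame:
    return [255 - px for px in frame]

camera = iter([[0, 128, 255], [10, 20, 30]])   # fake two-frame feed
for out in virtual_camera(camera, manipulate_frame):
    print(out)   # [255, 127, 0] then [245, 235, 225]
```

The conferencing software simply sees a camera device and has no way to know, at that layer, that the frames were rewritten in transit — which is what makes this integration route hard to detect from inside the call.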

How to protect yourself against deepfakes

As deepfake technology advances, it is crucial to protect yourself and your organization from fraud. Here are some ways to safeguard against deepfakes:

Watch out for red flags: Look for unrealistic facial expressions or movements, inconsistencies in lighting and shadows, unnatural head or body movements, and mismatched audio and video quality.
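One of those red flags, mismatched audio and video, can be made concrete. Suppose you had a per-frame mouth-openness track from the video and a per-frame loudness track from the audio (both hypothetical features here, not something the article describes extracting): a simple cross-correlation estimates the lag between them, and a consistently large lag hints at dubbed or manipulated content. A stdlib-only toy:

```python
from typing import List

def best_lag(a: List[float], b: List[float], max_lag: int) -> int:
    """Return the shift (in frames) at which track b best lines up
    with track a; a positive result means b trails a by that many
    frames. Score is a plain dot product over the overlapping region."""
    def score(lag: int) -> float:
        return sum(a[j] * b[j + lag]
                   for j in range(len(a)) if 0 <= j + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=score)

# Toy tracks: the audio "loudness" peak arrives 3 frames after the
# mouth-openness peak, as if the sound were dubbed in late.
mouth = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0, 0, 0]
audio = [0, 0, 0, 0, 0, 1, 5, 9, 5, 1, 0, 0]
print(best_lag(mouth, audio, max_lag=5))   # 3: audio trails by 3 frames
```

Real detection systems are far more sophisticated, but the principle is the same: speech and lip movement should line up, and a stable offset between them is suspicious.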

Be proactive if suspicious: Engage in casual conversation to catch a faker off guard. Ask the person to share their screen or confirm their identity by providing exclusive information or sending a confirmation message through a different channel.

Set up a passphrase: Establish a password or passphrase for sensitive topics with colleagues and family members. This method is effective in voice, video, and text communication.
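The passphrase idea can be hardened one step further: rather than saying the secret aloud, where a scammer could record and replay it, both parties derive a short response from the shared secret and a fresh challenge, HMAC-style. This is a minimal stdlib sketch of that challenge-response pattern, assuming the secret was agreed in person beforehand; all names and values here are illustrative.

```python
import hmac
import hashlib
import secrets

SHARED_SECRET = b"exchanged-in-person-beforehand"   # illustrative value

def make_challenge() -> str:
    """A fresh nonce the caller reads aloud on the suspicious call."""
    return secrets.token_hex(8)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    """The real colleague computes this on their own device and reads
    back the short code; a deepfake without the secret cannot."""
    mac = hmac.new(secret, challenge.encode(), hashlib.sha256)
    return mac.hexdigest()[:8]   # 8 hex chars is short enough to say aloud

def verify(challenge: str, answer: str) -> bool:
    return hmac.compare_digest(respond(challenge), answer)

challenge = make_challenge()
print(verify(challenge, respond(challenge)))   # True for the real person
print(verify(challenge, "not-real"))           # False for an impostor
```

Because the challenge changes every time, even a perfect voice clone armed with a recording of a past call cannot produce a valid response, which is the property a spoken static passphrase lacks.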

Deepfake technology presents significant risks that require vigilance and proactive measures to mitigate.

By understanding the types of deepfakes and implementing strategies to identify and counteract them, individuals and organizations can better protect themselves from potential fraud.

As generative AI continues to develop, staying informed and prepared is crucial in safeguarding against the growing threat of deepfakes.
