Cybersecurity Awareness Month: Risks of AI-Powered Deepfakes

Cyber fraud is an emerging claims area for CLIA. Fraudsters are becoming increasingly sophisticated, especially as AI technologies become more accessible. Even those well versed in technology and armed with training are vulnerable to cyber fraud. With substantial involvement in high-value transactions, and as custodians of large volumes of highly sensitive information, law firms are prime targets for cyber fraud and must remain as vigilant as possible.

Deepfake Scams

AI deepfakes are a particularly concerning emerging trend. Deepfake tools create highly realistic fake images, audio, or video, which fraudsters can use to impersonate key contacts and trick people into making unauthorized payments or revealing credentials. In 2024, a multinational engineering company was the target of a US$25 million deepfake scam in which an employee was duped into joining a video call with what appeared to be the company's CFO and other staff. Believing the call was legitimate, the employee transferred funds to the fraudsters. In fact, every other participant on the call was an AI deepfake impersonation of the CFO and staff.

With the introduction of AI video creators such as Google Veo 3, and more recently OpenAI's Sora, AI deepfake technology is becoming more accessible, and deepfake scams may become more prevalent as a result. These apps allow everyday users to create artificial video content from text prompts and uploaded images. Using OpenAI's Sora, users have created realistic videos of deceased celebrities (like Michael Jackson), fake videos of real people shoplifting at grocery stores like Target, and fake home invasions from the point of view of a doorbell camera. AI deepfake videos are becoming more common in everyday life, making it more difficult than ever to discern what is real and what is fake. Apps like Sora are supposed to include safeguards against fraudulent use; however, it is not yet known how robust or effective those safeguards will be.

Prevention and Mitigation

To combat the increasing threat of AI-enhanced cyber fraud, including deepfake scams, it's important that law firms continue to:

  • Stay informed on new and emerging trends

  • Allocate resources and initiate regular training for everyone in the firm, including staff

  • Consider the use of AI-detection tools to verify legitimacy of transactions and communications

  • Maintain up-to-date data management practices and policies

    • Keep a second point of contact and/or multi-factor authentication available to verify whether a transaction or communication is legitimate

    • Have policies in place for when a breach does occur, such as identifying the staff or contractors responsible for assisting with mitigation

    • Have cyber-specific insurance in place

  • Maintain up-to-date and appropriate software

    • Updated software is less vulnerable to attack

  • Maintain updated and appropriate backups

    • Reduces the likelihood of permanent data loss

AI deepfake technology is becoming increasingly accessible, making it more difficult than ever to discern what is real and what is fake, and this poses a real risk to law firms. To enhance client trust and protect clients from the consequences of cyber fraud, law firms should gain an understanding of AI deepfake scams and take proactive steps to reduce their vulnerabilities.

