Vishing Scam, DeepFakes


tl;dr

Fraudsters are sending text messages with a deceptive notice stating that a suspicious $3,500 transaction has been identified on the recipient’s account, and prompting the recipient to reply Y or N to indicate whether the transaction is legitimate. When users reply N, they receive a spoofed phone call from “Bank of America”, which is actually the fraudster, offering to help them reverse the transaction. In reality, the fraudster walks the user through creating a new transaction in the Zelle app that sends money to the fraudster.

The image above, displaying a vishing alert, is NOT what the victims saw on their phones; instead, they saw a bank name displayed on their caller ID.

Background

There’s been a rise in deceptive vishing scams tricking individuals into transferring money using the Zelle app. Vishing is the fraudulent practice of using voice (hence the “V” in vishing), via phone call or voicemail, to trick unsuspecting users into providing personal information. Beyond harvesting personal information, attackers can also guide users through a series of steps over the phone that the attacker can then leverage for financial gain. In the Zelle case, attackers guided their targets, over the phone, through steps to “reverse” a fraudulent bank transfer, when in reality the users were creating a new Zelle transfer and sending money to the fraudsters.

Conditioning

These attacks are so effective because the attackers take advantage of our naturally conditioned response to this scenario. The attackers sent text messages saying a suspicious transaction had been identified on the recipient’s account and asking them to confirm whether they approved it. Many banks send similar text messages when suspicious transactions arise, so users are already conditioned to seeing these types of messages and replying “Y” or “N”.

When legitimate bank alerts arrive, they come from a random 6- or 10-digit number, with no easy way to confirm that the message actually came from your bank. The attackers clearly knew this and sent their targets a similarly structured text message.

The fraudsters sent a text message along the lines of:

Did you make a transaction at X location for $$$$? If YES reply Y, if NO reply N

At this point, users are conditioned to seeing and responding to these types of messages, so when an attacker sends the same message, they reply N to cancel the “fraudulent” transaction, as they’ve done in the past.

Eager To Help

The moment the user replies N, the fraudsters are waiting on the other end of the line to “help” reverse the transaction. The victim receives a phone call from a spoofed number, and caller ID may even display “Bank of America” or whichever bank the attackers choose.

As cybersecurity experts, we know how easy it is to spoof a phone number, but the average person is not aware that a caller ID displaying “Bank of America” may not actually be Bank of America. Many services allow threat actors, or penetration testers (ethical hackers), to spoof phone numbers, as the sketch below illustrates. They could even spoof a family member’s phone number; if that number is listed in your contacts, your phone will display the contact’s name when the spoofed call comes in.
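To make this concrete, here is a minimal sketch using Twilio’s Python SDK, one legitimate programmable-telephony service. The credentials and phone numbers are hypothetical placeholders. The key point is that the number shown on the recipient’s caller ID is simply a parameter the caller supplies; Twilio restricts it to numbers you own or have verified, but less-regulated VoIP providers perform no such check.

```python
# pip install twilio
from twilio.rest import Client

# Hypothetical placeholder credentials.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

# The caller ID the recipient sees is just the "from_" parameter below.
call = client.calls.create(
    to="+15555550100",     # the recipient's phone (placeholder)
    from_="+15555550199",  # the number shown on caller ID (placeholder)
    url="http://demo.twilio.com/docs/voice.xml",  # TwiML defining the call audio
)
print(call.sid)
```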

Futuristic Vishing Implications with DeepFakes

If an attacker spoofs the phone number of one of your family members or colleagues and tries to communicate with you, there’s a good chance you’ll notice their voice sounds different and hang up. However, in the not-too-distant future there may be a scenario where you receive a call from a family member or colleague, it sounds just like them, and it isn’t them.

With advances in artificial intelligence, we’re living in a time when technologists can gather voice or video samples from an individual and use those samples to generate new video or audio that is nearly identical in appearance and voice to the original, thereby creating a DeepFake of the individual. DeepFakes are currently used for business purposes and have important, reputable use cases, but as with many technologies, they can also be abused for nefarious purposes.

Building a DeepFake Model

DeepFake models are “trained” by providing samples of the target’s audio or video. For targets like actors or public news figures, it would be trivial for a threat actor to obtain samples to train a model, and the more input the model receives, the more accurate its generated output. Once the model has been trained, it can be used to generate new audio or video that is difficult to distinguish from the original; a minimal sketch follows.
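As an illustration of how low the barrier has become, here is a minimal sketch using Coqui TTS, an open-source text-to-speech library whose XTTS v2 model supports zero-shot voice cloning from a short reference recording. The file names and spoken text are hypothetical; this is the kind of workflow a red team might use in an authorized awareness exercise, not a recipe tied to any specific incident.

```python
# pip install TTS
from TTS.api import TTS

# Load XTTS v2, a multilingual model that supports zero-shot voice cloning.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "target_sample.wav" is a hypothetical reference clip of the target's voice,
# e.g. taken from a publicly posted conference talk. A few clean seconds are
# enough for a rough clone; more audio yields more convincing output.
tts.tts_to_file(
    text="Hi, it's me. Please call me back as soon as you get this.",
    speaker_wav="target_sample.wav",
    language="en",
    file_path="cloned_voicemail.wav",
)
```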

DeepFake Vishing Scenario

Let’s imagine your organization has an employee who is public facing and often in the media. This individual may be required to do this as part of their job, or maybe they’re part of an organization outside their day-to-day work that requires presenting and recording those presentations. A threat actor could perform reconnaissance to identify DeepFake targets by researching your organization, seeing which individuals have the most publicly accessible audio/visual content, and using that content to build a DeepFake model of the individual.

Social Media Concerns

Employees in the public eye aren’t the only juicy targets of a DeepFake attack. As younger generations grow more accustomed to filming themselves and putting content on the internet, they’re also unknowingly creating more material that a threat actor could use to generate a DeepFake of them specifically. An individual who is active on social media or runs a YouTube channel is producing content that could be leveraged to train a model and then used in a DeepFake attack in the work setting.

Conducting The DeepFake Vishing Attack

Once the threat actor has trained the model, they can generate new audio that sounds identical to the target and craft a highly realistic vishing scenario. Combining this realistic audio with phone number spoofing creates a scenario where you could receive a voicemail, from what appears to be a known contact, in their familiar voice, instructing you to perform an action that furthers the scam.

This could look like a voicemail coming from your boss’s number, in a voice that sounds exactly like your boss, instructing you to take some urgent action. A sketch of this end-to-end flow, combining the two examples above, follows.
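Tying the earlier sketches together, a red team running an authorized vishing simulation could place a call that plays the cloned audio. The hosting URL, credentials, and numbers below are hypothetical placeholders; the inline TwiML simply tells Twilio to play the file once the call is answered.

```python
from twilio.rest import Client

# Hypothetical placeholder credentials.
client = Client("ACCOUNT_SID", "AUTH_TOKEN")

# Hypothetical URL where the cloned audio from the previous sketch is hosted.
# If the call rolls to voicemail, the message is left in the target's
# cloned "voice".
call = client.calls.create(
    to="+15555550100",
    from_="+15555550199",
    twiml="<Response><Play>https://example.com/cloned_voicemail.wav</Play></Response>",
)
```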

Practical Takeaways

Today’s Vishing

Do not reply to any banking-related text or call initiated by a “bank”; these calls may be spoofed. Do not call the person back using the number they used to call you. If you receive a text or voice message that concerns you, call your bank using the official phone number listed on its main website. If a link is provided in a text message, do not navigate to the link.

DeepFake Vishing

For the time being, largely the same guidance applies. In the future, there could be scenarios where you receive messages from someone you recognize, or even have real-time communication with the individual, when it isn’t actually them. Hang up, ignore the message, and reach out to the individual in a manner you know is safe and secure.



Need more assistance?

If you found the information above difficult to consume or need additional assistance, please reach us by email at [email protected] or by filling out the contact form below.
