When a video call is no longer proof: How deepfakes threaten the crypto sector

There was a time when video calls served as an extra safeguard against scammers. If you saw someone live on screen, you assumed you were speaking with the real person. But technology has evolved — and what was recently considered secure no longer guarantees safety.
Former Binance CEO Changpeng Zhao has drawn attention to a new wave of fraud that’s gaining momentum: the use of real-time deepfake video to target and compromise members of the crypto community. According to Zhao, even video verification may soon become meaningless. If a face and voice can be convincingly faked, how can anyone be sure who they’re really talking to?
Real attacks are no longer rare
This isn’t just a hypothetical threat. Japanese crypto influencer Mai Fujimoto lost access to both her Telegram account and her MetaMask wallet after joining a Zoom call with a deepfake version of someone she trusted. When she struggled to hear the audio, she was tricked into clicking an “update” link. Before joining the call, Fujimoto hadn’t realized that her acquaintance’s Telegram account had already been hacked.
“She sent me the link and instructed me to follow some steps to adjust the audio settings, and I believe that’s when the attack compromised my computer.”
Deepfake attacks are increasingly targeting employees at crypto companies, funds, and exchanges. In a recent case, malicious actors posed as executives from a crypto fund over multiple Zoom calls, convincing a staff member to install software the attackers needed. The result: a keylogger, a screen recorder, and the theft of private keys.
Almost everything can be faked
The problem is that the traditional model of digital trust no longer works. A face, a username, a voice — all of these can now be convincingly forged. Modern deepfake algorithms can not only replicate someone’s tone and facial expressions but also adapt in real time to a person’s reactions. This means that visual and audio contact is no longer a reliable indicator of authenticity.
While corporate environments can still implement multi-level verification (internal platforms, access tokens, or backup confirmation channels), informal conversations and personal communication often rely solely on trust. That’s exactly what makes users vulnerable: familiarity with someone’s “voice” or “face” once provided a sense of security, but now it may become a weak point.
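For illustration, here is a minimal sketch of what one such access-token check could look like: a standard time-based one-time password (TOTP, RFC 6238) computed from a pre-shared secret, which an internal tool could demand before acting on any request made during a video call. The secret shown is a widely used demo value, and the snippet is an assumption about how such a check might be wired up, not a recommendation of a specific tool:

```python
# Minimal RFC 6238 TOTP check (illustrative sketch; assumes a pre-shared
# base32 secret was distributed through a trusted internal channel).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                      # 30-second time window
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Hypothetical usage: require the current code before honoring a sensitive request.
print(totp("JBSWY3DPEHPK3PXP"))  # demo secret commonly used in TOTP examples
```

The point of the design is that the code proves possession of the secret, not a familiar face or voice, so a convincing deepfake on the call gains nothing from it.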
Technology is moving even further. Some plugins can now generate hyper-realistic faces during video calls, simulating eye movement, blinking, and even audio delays — all to create the illusion of a live conversation. And that illusion can be enough to convince someone to grant access or perform a dangerous action without suspecting a thing.
Cyber hygiene is no longer optional
Zhao’s call to never install software from unofficial sources is no longer just a general reminder — it’s a baseline requirement for cyber hygiene. In a world where even video calls can be compromised, the only effective protection is critical thinking and clear digital behavior protocols.
That means, without exception:
- never installing software from links received in private messages;
- never entering passwords or codes during a video call;
- never skipping a secondary confirmation channel, such as a separate message or call via another platform (see the sketch after this list).
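To make the last point concrete, here is a minimal sketch of one out-of-band confirmation pattern: one party sends a random challenge over a second platform, and only someone holding a previously shared secret can produce the matching response. The function names and the secret are hypothetical; treat this as an illustration of the idea rather than a vetted implementation:

```python
# Out-of-band challenge-response sketch (illustrative, not production code).
# Assumes both parties exchanged a shared secret in advance over a trusted channel.
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    """A short one-time challenge, sent over a *different* channel than the call."""
    return secrets.token_hex(4)

def expected_response(shared_secret: bytes, challenge: str) -> str:
    """Both sides compute this independently; only the real counterparty can."""
    return hmac.new(shared_secret, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(shared_secret: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison, so timing leaks nothing."""
    return hmac.compare_digest(expected_response(shared_secret, challenge), response)

# Hypothetical usage: the secret was agreed in person beforehand.
secret = b"exchanged-in-person-beforehand"
challenge = make_challenge()                     # sent via a second platform, not the Zoom chat
reply = expected_response(secret, challenge)     # what the genuine counterparty sends back
print(verify(secret, challenge, reply))          # True only for the real party
```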
Meanwhile, companies must introduce internal identity verification policies that apply even when the voice on the call sounds familiar, adopt behavior monitoring tools, and train teams to recognize the signs of impersonation.
There’s no law requiring this yet. But reality does — and the cost of a mistake is measured not only in funds lost, but in reputation and business continuity.