
‘HELLO BOSS’: Inside the Chinese Realtime Deepfake Software Powering Scams Around the World

May 7, 2026

A Chinese real-time deepfake tool called ‘Haotian AI’ is being used by scammers worldwide to impersonate other people during live video calls on platforms like WhatsApp, Zoom, and Teams. 404 Media obtained a copy of the software, which lets fraudsters swap their faces in real time so they appear to be someone else entirely. The technology has become a potent tool for deception, sharply raising the risk of identity fraud in live video interactions.

The significance of this discovery lies in how it erodes trust in digital communication channels. Video calls are often treated as more secure and authentic than voice or text alone because they show a real face in real time. When scammers can convincingly manipulate their appearance, imposters become far harder to spot. That raises the stakes for businesses, governments, and individuals who rely on video for sensitive conversations, making fraud easier to commit and potential losses larger.

Deepfake technology has advanced rapidly over the last few years, emerging from research in artificial intelligence and computer vision. It uses AI models trained on images of a target to generate highly convincing fake faces. Early versions worked only on static images or pre-recorded video, but real-time applications like Haotian AI use fast computing and more sophisticated algorithms to re-render a face as the user moves and speaks. That leap makes the technique viable for live scams, extending AI’s impact beyond media manipulation into direct, real-world fraud.

This development signals a need for new defenses in digital identity verification and communication security. Traditional methods that trust video authenticity are now vulnerable to AI-powered deception. The next moves will likely involve combining AI detection tools that can identify deepfakes with stronger multi-factor authentication processes and biometric verification. Organizations should prepare for a future where realtime deepfake scams become more common and harder to detect simply by eye.
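As a rough illustration of what pairing video with a second factor could look like, the sketch below (hypothetical function names, Python standard library only, not a description of any real product) issues a short-lived one-time code delivered out of band, e.g. via an authenticator app, that a call participant must read back, so a convincing deepfake face alone is not enough:

```python
import secrets
import time
import hmac

def issue_challenge(ttl_seconds: int = 120) -> dict:
    """Create a random 6-digit code with an expiry timestamp.

    secrets.randbelow provides cryptographically strong randomness,
    unlike the random module.
    """
    return {
        "code": f"{secrets.randbelow(10**6):06d}",
        "expires_at": time.time() + ttl_seconds,
    }

def verify_challenge(challenge: dict, response: str) -> bool:
    """Accept only an unexpired, exactly matching response."""
    if time.time() > challenge["expires_at"]:
        return False
    # Constant-time comparison avoids leaking partial matches via timing
    return hmac.compare_digest(challenge["code"], response)
```

In practice such a code would travel over a channel the scammer does not control; the point is that verification no longer rests on the face shown in the video feed.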

The wide availability of software like Haotian AI also raises ethical questions about the regulation of AI tools and the responsibilities of developers. Preventing malicious uses will require collaboration between software creators, platform providers, and lawmakers to establish clear guidelines and technical safeguards. The unfolding battle between AI-generated forgeries and detection techniques will be central to maintaining trust in digital communication systems.

— AI Quick Briefs Editorial Desk
