HONOR has announced that its AI Deepfake Detection feature will be available worldwide starting in April 2025. This move aims to help users identify manipulated audio and video content in real time.

Deepfake technology—which uses AI to produce highly realistic but fake media—has become a growing concern for businesses and individuals. The Entrust Cybersecurity Institute found that in 2024 alone, a deepfake attack occurred every five minutes. Deloitte’s 2024 Connected Consumer Study also reported that 59% of respondents struggled to tell the difference between content created by humans and material generated by AI. Meanwhile, 84% of people using generative AI said they want clearly labeled AI-generated content.

HONOR first introduced its AI Deepfake Detection technology at IFA 2024. The system uses AI algorithms to pick up on subtle inconsistencies that are hard for the human eye to spot. These can include pixel-level flaws, issues with border compositing, irregularities between video frames, and unusual facial features such as face-to-ear ratios or hairstyle anomalies. When the system identifies manipulated content, it alerts the user so they can steer clear of possible risks.
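To illustrate one of the signals mentioned above, irregularities between video frames, here is a minimal toy heuristic in Python: score each frame transition by its mean pixel difference and flag statistical outliers. This is only a hypothetical sketch for intuition; HONOR has not published its actual model, and real detectors rely on trained neural networks rather than simple difference scores.

```python
import numpy as np

def frame_inconsistency_scores(frames):
    """Score each frame-to-frame transition by mean absolute pixel
    difference. Sudden spikes can hint at temporal irregularities of
    the kind deepfake detectors look for (toy heuristic, not HONOR's model)."""
    return [float(np.mean(np.abs(a.astype(float) - b.astype(float))))
            for a, b in zip(frames, frames[1:])]

def flag_anomalies(scores, z_threshold=3.0):
    """Return indices of transitions whose score is a z-score outlier."""
    arr = np.asarray(scores, dtype=float)
    mu, sigma = arr.mean(), arr.std()
    if sigma == 0:  # perfectly uniform motion: nothing to flag
        return []
    return [i for i, s in enumerate(arr) if (s - mu) / sigma > z_threshold]
```

For example, a video whose brightness ramps smoothly and then jumps abruptly in the final frame would have that last transition flagged, while a uniform ramp produces no flags at all.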
This global rollout comes at a time when deepfake attacks are on the rise. Between 2023 and 2024, digital document forgeries increased by 244%. Sectors such as iGaming, fintech, and crypto have been especially hard-hit, with deepfake incidents growing year-over-year by 1520%, 533%, and 217%, respectively.
HONOR’s initiatives are part of a wider industry effort to address deepfake concerns. Organizations like the Coalition for Content Provenance and Authenticity (C2PA), founded by Adobe, Arm, Intel, Microsoft, and Truepic, are working on technical standards to verify digital content authenticity. Microsoft has introduced AI tools to help prevent deepfake misuse, including an automatic face-blurring feature for images uploaded to Copilot. Additionally, Qualcomm’s Snapdragon X Elite NPU supports on-device deepfake detection using McAfee’s AI models, keeping the analysis local to preserve user privacy.
Marco Kamiya of the United Nations Industrial Development Organization (UNIDO) praised this technology, noting that AI Deepfake Detection is a critical security measure on mobile devices and can help shield users from digital manipulation.