Modeling Biomechanical Constraint Violations for Language-Agnostic Lip-Sync Deepfake Detection
Abstract
Current lip-sync deepfake detectors rely on pixel-level artifacts or audio-visual correspondence, failing to generalize across languages because these cues encode data-dependent patterns rather than universal physical laws. We identify a more fundamental principle: generative models do not enforce the biomechanical constraints of authentic orofacial articulation, producing measurably elevated temporal lip variance -- a signal we term temporal lip jitter -- that is empirically consistent across the speaker's language, ethnicity, and recording conditions. We instantiate this principle through BioLip, a lightweight framework operating on 64 perioral landmark coordinates extracted by MediaPipe.
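The abstract does not give BioLip's exact jitter statistic or its 64-point perioral landmark set, so the following is a minimal illustrative sketch only: it tracks MediaPipe FaceMesh lip landmarks (the library's built-in FACEMESH_LIPS index set, roughly 40 points, standing in for the paper's 64 perioral coordinates) and scores a clip by the variance of frame-to-frame landmark displacements. The function name `temporal_lip_jitter` and the scale normalization are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch of a temporal-lip-jitter score; the exact statistic
# used by BioLip is not specified in the abstract and is assumed here.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
# Lip landmark indices derived from MediaPipe's lip connectivity set
# (~40 points; the paper uses 64 perioral coordinates, not reproduced here).
LIP_IDX = sorted({i for pair in mp_face_mesh.FACEMESH_LIPS for i in pair})

def temporal_lip_jitter(video_path: str) -> float:
    """Return a scalar jitter score: variance of frame-to-frame lip-landmark
    displacements. Higher values correspond to the elevated temporal lip
    variance the paper attributes to generated video."""
    tracks = []  # per-frame (N, 2) arrays of normalized lip coordinates
    cap = cv2.VideoCapture(video_path)
    with mp_face_mesh.FaceMesh(static_image_mode=False,
                               max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if not res.multi_face_landmarks:
                continue  # skip frames where no face is detected
            lms = res.multi_face_landmarks[0].landmark
            tracks.append(np.array([[lms[i].x, lms[i].y] for i in LIP_IDX]))
    cap.release()
    if len(tracks) < 2:
        raise ValueError("too few frames with a detected face")
    pts = np.stack(tracks)               # (T, N, 2) landmark trajectories
    # Normalize by a mouth-size proxy so the score is scale-invariant.
    scale = np.linalg.norm(pts.std(axis=1), axis=-1).mean() + 1e-8
    disp = np.diff(pts, axis=0) / scale  # frame-to-frame displacements
    return float(disp.var())
```

In this framing, a clip would be flagged as likely generated when its score exceeds a threshold calibrated on authentic speech; the threshold and any per-speaker calibration are left unspecified here, as they are in the abstract.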