For most of human history, sensory evidence has been the cornerstone of trust. To see someone’s face or hear their voice was to know they were present. That assumption no longer holds. Generative AI systems are capable of producing video and audio so realistic that even experts struggle to distinguish them from genuine recordings (Tolosana et al., 2020; Verdoliva, 2020). The collapse of sensory trust has profound implications, not only for financial fraud and disinformation but for the very notion of reality in digital spaces.
In this environment, the question is not whether images or sounds can be trusted—they cannot, at least not on their own—but what new foundation can replace them. Trust must shift from what we see and hear to what we can measure biologically. Psychophysiology provides that foundation.
Most existing deepfake detection systems rely on AI classifiers trained to spot artifacts: unnatural eye blinks, inconsistent lighting, or irregular textures. While effective in the short term, these approaches are inherently fragile. Each generation of generative models eliminates prior weaknesses, forcing detectors to play catch-up (Goodfellow, McDaniel, & Papernot, 2018).
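As a concrete illustration of this paradigm, the sketch below shows the skeleton of a frame-level artifact classifier in PyTorch. It is a minimal, assumption-laden toy rather than any specific published detector: real systems operate on aligned face crops, use much deeper backbones, and are trained on large labeled corpora.

```python
# Minimal sketch of the artifact-classifier paradigm: a small CNN
# that labels video frames as real or synthetic. Illustrative only;
# production detectors (see Tolosana et al., 2020) are far larger.
import torch
import torch.nn as nn

class ArtifactClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # logit: P(frame is synthetic)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# A detector like this learns only the artifacts present in its
# training data, which is exactly why the next generator that removes
# those artifacts forces a retrain.
model = ArtifactClassifier()
frames = torch.randn(4, 3, 224, 224)    # batch of RGB frames
scores = torch.sigmoid(model(frames))   # per-frame synthetic probability
```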
This cat-and-mouse dynamic creates diminishing returns. Detectors tuned to one type of artifact often fail on another. Adversaries, meanwhile, can train their models against known detectors, further eroding accuracy. The result is an endless arms race, one that consumes resources without producing lasting security. In a post-deepfake era, detection must move beyond artifacts and into signals that generative systems cannot convincingly replicate.
Psychophysiology offers a fundamentally different approach. Unlike surface features, psychophysiological signals—heartbeats, microexpressions, pupillary reflexes—are emergent properties of complex living systems. They cannot be convincingly simulated because they depend on integrated biological processes operating across multiple timescales.
For example, heart rate variability reflects the dynamic interplay of sympathetic and parasympathetic branches of the autonomic nervous system (Keene, Bolls, Clayton, & Berke, 2017). Facial EMG captures micro-activations of muscle fibers that reflect affective states outside conscious control (Cacioppo et al., 1986). Pupillary dilation indexes both low-level light reflexes and high-level cognitive load (Beatty & Lucero-Wagoner, 2000). These signals are not independent; they move together in coherent patterns tied to external stimuli (Fisher, Huskey, Keene, & Weber, 2018).
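To make the heart-rate example concrete, the sketch below computes RMSSD (the root mean square of successive differences between inter-beat intervals), one standard index of heart rate variability. The data are synthetic and purely illustrative, not any specific measurement pipeline.

```python
# RMSSD: a common HRV index. A living heart shows beat-to-beat
# variability; a naively "simulated" heartbeat with a fixed period
# does not. Values below are synthetic, for illustration only.
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """RMSSD over a series of inter-beat intervals in milliseconds."""
    diffs = np.diff(ibi_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rng = np.random.default_rng(0)
live_ibis = 800 + 40 * rng.standard_normal(60)  # ~75 bpm, natural jitter
fake_ibis = np.full(60, 800.0)                  # metronomic, no variability

print(f"live RMSSD: {rmssd(live_ibis):.1f} ms")  # typically tens of ms
print(f"fake RMSSD: {rmssd(fake_ibis):.1f} ms")  # 0.0 ms
```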
Synthetic systems may approximate a smile or simulate a heartbeat, but they cannot reproduce the coherence of multiple, interacting physiological systems responding in real time to environmental demands. This is the basis of psychophysiological distinction—the irreducible gap between living humans and synthetic agents.
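The notion of coherence invoked here can be operationalized in many ways. One minimal sketch, assuming two sampled channels and known stimulus-onset indices, is to correlate their stimulus-locked average responses; the channel names, window length, and data below are all illustrative assumptions.

```python
# Toy measure of cross-channel coherence: do two physiological
# channels co-vary in response to the same stimulus events?
import numpy as np

def stimulus_locked(signal: np.ndarray, onsets: np.ndarray,
                    window: int) -> np.ndarray:
    """Average the signal over a fixed window after each stimulus onset."""
    epochs = np.stack([signal[o:o + window] for o in onsets])
    return epochs.mean(axis=0)

def coherence_score(sig_a, sig_b, onsets, window=50):
    """Correlation between the event-locked responses of two channels."""
    ra = stimulus_locked(sig_a, onsets, window)
    rb = stimulus_locked(sig_b, onsets, window)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(0)
onsets = np.arange(100, 2600, 300)        # stimulus-onset sample indices
response = np.exp(-np.arange(50) / 15.0)  # shared transient response shape

heart = 0.1 * rng.standard_normal(3000)   # simulated heart-rate channel
pupil = 0.1 * rng.standard_normal(3000)   # simulated pupil channel
for o in onsets:
    heart[o:o + 50] += response           # both channels react to events
    pupil[o:o + 50] += 0.8 * response

print(f"coherence: {coherence_score(heart, pupil, onsets):.2f}")  # near 1.0
```

A synthetic agent that fakes each channel independently would produce little or no stimulus-locked correlation across channels, which is the intuition behind the "coherence" criterion above.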
If psychophysiology becomes the standard for digital trust, the boundary between humans and AI will be redefined. Verification will no longer ask whether a face matches a stored template; it will ask whether the subject exhibits coherent, dynamic signatures of life.
This reframing has far-reaching implications. In financial systems, it ensures that only living individuals—not bots or synthetic identities—gain access. In media, it restores credibility by grounding authenticity in biology. In communication, it enables platforms to distinguish between genuine human interlocutors and AI impersonators. More broadly, it signals a societal shift: human presence is no longer established by appearance alone but by measurable embodiment.
For psychophysiology to serve as the new foundation of trust, it must also be transparent. Users, regulators, and organizations need to understand how signals are measured, what coherence means, and how classifications are made. Moveris emphasizes interpretability by providing not only human likelihood scores but also evidence of coherence—such as graphs showing alignment of heart rate or pupil dilation with stimulus events.
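To illustrate what such interpretable output might look like, here is a hypothetical result structure pairing a human-likelihood score with per-channel coherence evidence. All field names are assumptions for exposition, not the actual Moveris API.

```python
# Hypothetical interpretable verification result: an overall score
# plus the per-channel evidence that produced it. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class ChannelEvidence:
    channel: str        # e.g. "heart_rate", "pupil_dilation"
    coherence: float    # stimulus-locked coherence, 0..1
    samples: int        # how much data supported the estimate

@dataclass
class VerificationResult:
    human_likelihood: float             # overall score, 0..1
    evidence: list = field(default_factory=list)

    def explain(self) -> str:
        lines = [f"human likelihood: {self.human_likelihood:.2f}"]
        for ev in self.evidence:
            lines.append(f"  {ev.channel}: coherence={ev.coherence:.2f} "
                         f"(n={ev.samples})")
        return "\n".join(lines)

result = VerificationResult(
    human_likelihood=0.94,
    evidence=[ChannelEvidence("heart_rate", 0.91, 60),
              ChannelEvidence("pupil_dilation", 0.87, 60)],
)
print(result.explain())
```

Exposing the evidence alongside the score is what allows an auditor to check the classification rather than take it on faith.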
This commitment to explainability ensures that psychophysiology does not become a “black box.” Instead, it becomes a scientifically interpretable, auditable framework for trust, one that regulators can evaluate and users can understand.
Adopting psychophysiology as a foundation for trust raises important research and ethical questions. Future work must expand validation across demographics to ensure fairness, develop best practices for privacy and data protection, and establish international standards for psychophysiological verification.
Ethically, psychophysiology must be deployed with sensitivity to consent and transparency. Users must know what signals are being measured, how long they are retained, and for what purposes. Regulators and standards bodies, such as the European Union (through the AI Act) and NIST in the United States, are already moving toward frameworks that prioritize fairness and accountability. Moveris positions psychophysiology not only as a technological solution but as an ethical one, grounded in inclusivity and transparency.
The post-deepfake era demands a redefinition of trust. Appearances can be forged, artifacts can be erased, and AI-vs-AI detection is unsustainable. What remains uniquely human are the psychophysiological signatures of life—dynamic, multimodal, and coherent with environmental events. By grounding verification in these signals, psychophysiology provides a durable, scientifically robust framework for distinguishing humans from synthetic agents.
Moveris is at the forefront of this shift, operationalizing decades of psychophysiological research into scalable, interpretable systems. In a world where what we see and hear can no longer be trusted, psychophysiology offers a new answer: trust grounded in biology, not appearance.