# Methodology

Definitions, datasets, and caveats behind ANIMA benchmark claims.

*All figures below are internal benchmarks; an independent audit is forthcoming in Q3 2026.*
## Claim definitions
| Claim | Definition | Dataset / Sample | Test window |
|---|---|---|---|
| 99.9% human accuracy* | Share of legitimate human sessions classified as human | Internal benchmark cohort, mixed device/browser traffic, n=120,000 sessions | 2026-02-01 to 2026-04-15 |
| <0.1% AI bypass* | Share of automated attack attempts that received a verified-human outcome | Scripted, headless-browser, and AI-assisted bypass attempts, n=180,000 attempts | 2026-02-01 to 2026-04-15 |
| 0.4s average verify time* | Mean end-to-end user verification completion time | Successful verification events, n=95,000 events | 2026-03-01 to 2026-04-15 |
| 7 signals | Tremor, rhythm entropy, associative latency, mouse dynamics, device motion, scroll behavior, fusion score | Signal engine specification | Current production architecture |
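To make the three headline metrics concrete, here is a minimal sketch of how they could be computed from session-level logs. The field names (`is_human`, `verified`, `verify_seconds`) are illustrative assumptions, not ANIMA's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Session:
    is_human: bool         # ground-truth label assigned in the benchmark cohort
    verified: bool         # did the session receive a verified-human outcome?
    verify_seconds: float  # end-to-end verification time (meaningful only if verified)

def human_accuracy(sessions):
    """Share of legitimate human sessions classified as human."""
    humans = [s for s in sessions if s.is_human]
    return sum(s.verified for s in humans) / len(humans)

def ai_bypass_rate(sessions):
    """Share of automated attempts that received a verified-human outcome."""
    bots = [s for s in sessions if not s.is_human]
    return sum(s.verified for s in bots) / len(bots)

def mean_verify_time(sessions):
    """Mean completion time over successful verification events only."""
    times = [s.verify_seconds for s in sessions if s.verified]
    return sum(times) / len(times)
```

Note that the accuracy and bypass figures are computed over disjoint populations (labeled human sessions vs. labeled attack attempts), which is why the table reports separate sample sizes for each.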
## Biometric research basis
- Tremor features are informed by established human motor-control frequency bands (the 8-12 Hz physiological micro-tremor range).
- Rhythm entropy uses inter-event interval variance distributions observed in human interaction studies.
- Associative latency thresholds reflect cognitive reaction-time literature for semantic prompt tasks.
- Passive dynamics combine mouse curvature variance, motion jitter, and scroll deceleration signatures.
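As an illustration of the rhythm-entropy idea above, the sketch below computes the Shannon entropy of inter-event intervals from a stream of input timestamps. The bin width and the interpretation thresholds are assumptions for demonstration, not ANIMA's production parameters.

```python
import math

def rhythm_entropy(timestamps, bin_width=0.05):
    """Shannon entropy (bits) of inter-event intervals, quantized into bins.

    Human interaction tends to produce variable intervals (higher entropy),
    while scripted input often yields near-constant intervals (entropy ~ 0).
    """
    # Successive differences between event timestamps (seconds).
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # Quantize each interval into a bucket of width `bin_width` and count.
    counts = {}
    for iv in intervals:
        bucket = round(iv / bin_width)
        counts[bucket] = counts.get(bucket, 0) + 1
    n = len(intervals)
    # Shannon entropy over the empirical bucket distribution.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A perfectly metronomic click stream collapses into a single bucket and scores 0 bits, whereas varied human-like timing spreads across buckets and scores higher.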
*Benchmark methodologies are periodically revised; historical claims may reflect prior model versions.*

## Caveats and limits
- Results are internal and environment-specific; customer traffic distributions may differ.
- Adversary behavior evolves continuously, requiring retraining and threshold updates.
- Performance claims exclude customer integration misconfiguration and invalid token workflows.