Most biodiversity scores cannot survive an audit. WildMetrics Earth does not estimate: it detects, classifies, and logs every species event to a traceable timeseries, grounded in physical sensors and peer-reviewed models.
Biodiversity claims built on proxies and desktop models fail when scrutiny arrives. WildMetrics Earth processes real sensor events (field recordings, satellite passes, and camera trap images) through validated ML models. Every metric traces back to a date, a location, and a physical source.
Three sensor pipelines
Acoustic · Satellite · Camera trap, each with dedicated ML processing
Five ML models
BirdNET, BatDetect2, MegaDetector, SpeciesNET, and Acoustic Indices, purpose-built for field biodiversity
Structured timeseries output
Species detections, habitat quality, and land cover change, delivered to your dashboard and reporting pipeline
Each pipeline ingests raw sensor data, runs validated ML models, and writes structured metrics to a shared timeseries database. All three feed the same reporting and dashboard layer.
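The record shape the three pipelines converge on can be sketched as a single event row. This is an illustrative schema only, not the platform's actual one; every field name here is an assumption for the sketch:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class MetricEvent:
    """One row in the shared timeseries store (illustrative shape)."""
    timestamp: datetime
    site_id: str
    sensor: str                   # "acoustic" | "satellite" | "camera_trap"
    model: str                    # e.g. "BirdNET", "MegaDetector"
    metric: str                   # e.g. "species_detection", "ndvi"
    value: float                  # confidence score or index value
    species: Optional[str] = None # set for species-level detections
```

Because every event carries its timestamp, site, sensor, and model, any downstream aggregate can be traced back to the physical source that produced it.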
AudioMoth field recorders capture WAV audio. WildMetrics Earth processes every recording through three models, identifying bird species, bat calls, and broader soundscape health in a single pass.
Bird species identification from raw audio. Detects and classifies hundreds of species, producing per-recording species lists, confidence scores, and detection timeseries.
Bat echolocation call detection and species classification. Identifies bat activity in the ultrasonic spectrum, producing call event timeseries, species-level classification, and nightly activity profiles.
Soundscape biodiversity proxy metrics computed from spectrogram analysis: ACI (Acoustic Complexity Index), BI (Bioacoustic Index), NDSI (Normalized Difference Soundscape Index), ADI (Acoustic Diversity Index), and EVE (Acoustic Evenness).
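ACI is representative of how these soundscape proxies work: it measures how much the intensity in each frequency bin fluctuates frame to frame, relative to its total energy. A minimal sketch of the published formula (Pieretti et al. 2011), assuming a magnitude spectrogram has already been computed; this is not the platform's implementation:

```python
import numpy as np

def acoustic_complexity_index(spectrogram: np.ndarray) -> float:
    """ACI over a magnitude spectrogram shaped (freq_bins, time_frames)."""
    # Absolute intensity change between consecutive time frames, per bin
    diffs = np.abs(np.diff(spectrogram, axis=1))
    # Normalise each bin's total change by its total intensity
    totals = spectrogram[:, :-1].sum(axis=1)
    totals[totals == 0] = 1e-12  # guard against silent bins
    return float((diffs.sum(axis=1) / totals).sum())
```

A flat, unchanging soundscape scores zero; biophony, with its rapid intensity modulation, pushes the index up, which is why ACI serves as a vocal-activity proxy.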
Outputs
Sentinel-2 and Landsat imagery is processed through atmospheric correction, spectral index computation, and change detection, producing a longitudinal land cover intelligence layer for every project site.
NDVI, EVI, and NDWI computed per acquisition, tracking vegetation health, canopy density, and water stress over time.
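These three indices come from standard published formulas over surface-reflectance bands. A minimal sketch (the function name and band arguments are choices for this illustration, not the platform's API):

```python
import numpy as np

def spectral_indices(red, green, blue, nir):
    """NDVI, EVI (standard coefficients), and NDWI (McFeeters) from
    surface-reflectance bands scaled to 0..1."""
    red, green, blue, nir = (np.asarray(b, dtype=float)
                             for b in (red, green, blue, nir))
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    ndwi = (green - nir) / (green + nir)  # positive over open water
    return ndvi, evi, ndwi
```

Healthy vegetation drives NDVI toward 1 (strong near-infrared reflectance, strong red absorption), while NDWI flips sign over water, which is what makes the pair useful for separating canopy from water stress.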
Automated change detection identifies land cover transitions, capturing deforestation, degradation, fire aftermath, and restoration progress. Integrates ESA WorldCover, GFW, and GEDI data.
Per-pixel temporal statistics across all Sentinel-2 acquisitions (median, percentile bands, standard deviation), building a statistical fingerprint of each landscape.
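The fingerprint reduces to NaN-aware reductions along the time axis of an index stack. A minimal sketch, assuming cloud-masked pixels arrive as NaN; statistic names are choices for this illustration:

```python
import numpy as np

def temporal_fingerprint(stack: np.ndarray) -> dict:
    """Per-pixel statistics over a (time, height, width) stack of one
    spectral index, ignoring cloud-masked NaN observations."""
    return {
        "median": np.nanmedian(stack, axis=0),   # typical state
        "p10": np.nanpercentile(stack, 10, axis=0),
        "p90": np.nanpercentile(stack, 90, axis=0),
        "std": np.nanstd(stack, axis=0),         # variability
    }
```

The percentile band (p10 to p90) captures a pixel's normal seasonal range, so a later acquisition falling outside it flags genuine change rather than seasonal noise.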
Outputs
Field camera traps collect images and video of wildlife. WildMetrics Earth runs two sequential ML models: first detecting and localizing animals, then classifying them to species level with IUCN threat attribution.
Animal, person, and vehicle detection with bounding box localization. Filters every image for wildlife presence, dramatically reducing processing cost on large deployments.
Fine-grained wildlife species classification. Produces species names, common names, and IUCN Red List threat categories, enabling direct TNFD-ready reporting from camera trap records.
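The two-stage flow above can be sketched as a filter-then-classify loop. `detect` and `classify` are injected wrappers around whatever detector and classifier you deploy; their signatures and the record shapes here are assumptions for the sketch, not the platform's API:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # normalised (x, y, w, h)
    category: str                           # "animal" | "person" | "vehicle"
    confidence: float

def classify_wildlife(images, detect: Callable, classify: Callable,
                      threshold: float = 0.2) -> List[dict]:
    """Stage 1 filters empty and non-wildlife frames; stage 2 runs the
    (more expensive) species classifier only on surviving animal crops."""
    records = []
    for image in images:
        for det in detect(image):
            # Drop people, vehicles, and low-confidence hits early
            if det.category != "animal" or det.confidence < threshold:
                continue
            records.append(classify(image, det.box))
    return records
```

Because most camera trap frames are empty or human-triggered, the cheap first stage is what makes species-level classification affordable at deployment scale.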
Automated frame extraction from MP4 video clips, enabling the same species detection pipeline on motion-triggered video as on still images.
Outputs
All three pipelines write to a shared timeseries database. WildMetrics Earth aggregates across sensor types and delivers structured reports automatically, on a schedule.
Unique species count per taxonomic group: Aves (BirdNET), Mammalia (MegaDetector + SpeciesNET), and Chiroptera (BatDetect2), aggregated from acoustic recorders, camera traps, and satellite-derived habitat layers. Filter by date range, site, or sensor deployment to compare biodiversity across projects.
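A richness rollup like this reduces, in essence, to a set union per taxonomic group. A minimal sketch over detection records; the dict field names are assumptions for the illustration:

```python
from collections import defaultdict

def species_richness(detections) -> dict:
    """Unique species count per taxonomic group, from records shaped
    like {"group": "Aves", "species": "Turdus merula"} (assumed shape)."""
    seen = defaultdict(set)
    for d in detections:
        seen[d["group"]].add(d["species"])  # sets deduplicate repeats
    return {group: len(names) for group, names in seen.items()}
```

Counting distinct species rather than raw detections is what keeps a single vocal blackbird from inflating a site's biodiversity score.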
Daily detection counts grouped by sensor type and pipeline: acoustic events from AudioMoth deployments, camera trap triggers from field units, and satellite acquisition passes from Sentinel-2. Track monitoring coverage gaps, seasonal migration patterns, and dataset completeness across the full project lifecycle.
Every detected species is matched against the IUCN Red List, from Least Concern through Critically Endangered. The breakdown maps directly to TNFD disclosure requirements, giving project developers and corporate buyers an auditable view of threatened species presence at each monitoring site.
The cross-pillar summary combines acoustic biodiversity indices, camera trap species richness, satellite-derived vegetation health (NDVI/EVI), and land cover change events into a single project-level dashboard. This is the view project developers share with verification bodies, buyers use for due diligence, and government planners use for jurisdictional reporting.
24-hour diurnal detection histograms per taxonomic group, built from timestamped BirdNET, BatDetect2, and MegaDetector events. Dawn chorus peaks for birds, dusk echolocation bursts for bats, nocturnal mammal activity from camera traps. A key quality signal that demonstrates ecological realism and supports habitat assessment claims.
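Binning timestamped events by hour of day is all such a histogram requires. A minimal sketch, assuming detections arrive as ISO-8601 timestamp strings in site-local time:

```python
from collections import Counter
from datetime import datetime

def diurnal_histogram(timestamps) -> list:
    """24-bin activity profile: detection count per local hour of day."""
    counts = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    return [counts.get(h, 0) for h in range(24)]
```

A bird pipeline whose histogram lacks a dawn peak, or a bat pipeline active at noon, is an immediate red flag, which is why the hourly profile doubles as a data-quality check.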
Randomly sampled detection records per project, linking each species event to its source audio clip, camera trap image, or satellite tile. Auditors and rating agencies can independently verify any claim by tracing it back to the raw sensor data. No black boxes, no unverifiable scores.
Every metric in WildMetrics Earth traces back to a physical sensor event: a WAV file, a satellite acquisition, an image frame. The models that process those events are published, peer-reviewed, and in active use across the field conservation community. There are no proprietary black-box scores.
When a buyer, auditor, or rating agency asks "where does this biodiversity score come from?", you have an answer. BirdNET v2 detected Turdus merula on AudioMoth unit 4 at 05:42 on 14 March. SpeciesNET classified a jaguar at the northeast transect on 3 separate camera nights.
Explainable outputs are not a feature. They are the baseline requirement for any data that enters a carbon or nature credit verification cycle.
The current platform is the evidence layer. The roadmap extends into predictive intelligence and real-time alerting.
Automated early-warning signals triggered by acoustic anomalies, NDVI drops, or camera trap disturbance events, delivered before the next verification cycle.
Machine learning on multi-decade satellite time series, predicting permanence risk trajectories before they appear in standard vegetation indices.
Integrated inference across acoustic, satellite, and camera trap data: species habitat models calibrated simultaneously against multiple independent sensor types.
Book a demo and we'll walk you through the dashboard for your project type: carbon project, corporate portfolio, or government planning program.