WingID: Bringing AI to the Parts Collection Survey
The Survey That Hasn't Changed in 65 Years
Since 1961, the U.S. Fish & Wildlife Service has relied on the Parts Collection Survey (PCS) to determine species, age, and sex composition of the national waterfowl harvest. It remains one of the most important tools in migratory bird management.
The process is straightforward: USFWS mails postage-paid envelopes to a sample of hunters. Hunters mail back one wing from each duck harvested. Wings are frozen and shipped to regional offices, where trained biologists called “speciators” examine each specimen by hand at multi-day events called “wingbees.” A second person verifies each identification using the Carney reference guide — Species, Age, and Sex Identification of Ducks Using Wing Plumage — first published in 1964.[1]
The program collects approximately 90,000 duck wings, 20,000 goose tail feathers, 10,000 dove wings, and 8,000 woodcock wings annually.[1] It's an impressive operation. It's also entirely analog, dependent on a shrinking pool of trained speciators (there are only four in the entire country — one per flyway), and constrained by the physical logistics of mailing, storing, and manually examining tens of thousands of biological specimens.[1]
Accuracy varies by species. Mallard identification exceeds 99% for species and sex. But aging accuracy drops to 82–84% for Blue-winged Teal and 92% for Wood Duck.[2] The hardest task — distinguishing female and juvenile Canvasbacks from Redheads — remains a documented challenge even for expert speciators.[2]
What the PCS Captures — and What It Doesn't
The PCS answers a critical question: of the birds hunters take, what species, ages, and sexes are represented? That compositional data drives the population models that inform season frameworks and bag limits. But it has structural limitations:
- Months of latency between harvest and results.
- Sample-based — a fraction of hunters, not a census.
- No geographic precision beyond state-level attribution.
- No real-time feedback to hunters or agencies during the season.
- Single point of failure in the speciator pipeline — four people for the entire country.
AI Has Already Proven It Works for Birds
The question isn't whether computer vision can identify bird species. It can.
Cornell Lab of Ornithology's Merlin app achieves approximately 95% accuracy on live bird photo identification across 6,000+ species, using a convolutional neural network trained on 6 million photos from the Macaulay Library.[3] Over 33 million people have used Merlin worldwide.[3]
USGS has demonstrated that deep learning can achieve 93.6% agreement with manual waterfowl counts from aerial survey imagery while reducing human labor by 96%.[4]
These systems prove that AI-assisted wildlife identification is technically viable, field-tested, and already operating at scale. But as of today, no AI-based system for identifying waterfowl from harvested wing photos exists in production anywhere. The exact methodology the PCS has used manually since 1961 — wing-based species, age, and sex determination — has never been digitized.
WingID is the first.
How WingID Works
WaterfowlAI's WingID runs proprietary computer vision models entirely on the hunter's device, processing wing photos without an internet connection. That matters in the field, where cell service is often nonexistent. The model identifies species locally, then syncs results and training data when connectivity is restored.
This isn't a cloud-dependent prototype. It's field-grade technology designed for the realities of where waterfowl hunting actually happens — flooded timber at 5 AM with no signal.
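The classify-locally, sync-later pattern described above can be sketched in a few lines. Everything here is a hypothetical illustration, not WingID's actual (proprietary) implementation: the `WingRecord` fields, the `OfflineQueue` class, and the classifier/uploader callables are all assumptions.

```python
import time
from dataclasses import dataclass, field, asdict

@dataclass
class WingRecord:
    """One locally stored classification awaiting sync (hypothetical schema)."""
    photo_path: str
    species: str
    confidence: float
    timestamp: float = field(default_factory=time.time)
    synced: bool = False

class OfflineQueue:
    """Classify on-device, queue the result, and flush when a connection appears."""

    def __init__(self, classifier, uploader):
        self.classifier = classifier  # on-device model: photo_path -> (species, confidence)
        self.uploader = uploader      # network call: record dict -> bool (True on success)
        self.pending: list[WingRecord] = []

    def classify(self, photo_path: str) -> WingRecord:
        # Runs entirely locally; no network access required.
        species, confidence = self.classifier(photo_path)
        record = WingRecord(photo_path, species, confidence)
        self.pending.append(record)
        return record

    def sync(self) -> int:
        """Try to upload every pending record; return how many succeeded."""
        sent = 0
        for record in self.pending:
            if not record.synced and self.uploader(asdict(record)):
                record.synced = True
                sent += 1
        self.pending = [r for r in self.pending if not r.synced]
        return sent
```

A production app would back the queue with durable on-device storage rather than an in-memory list, so records survive an app restart between the blind and the boat ramp.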
First Season Results
| Metric | Value |
|---|---|
| Wing photo submissions | 742 |
| Unique users | 207 |
| Species identified | 52 |
| Avg AI confidence | 75.4% |
| Geographic reach | 9 US states + England |
| Top species (AI) | Mallard (96 IDs) |
| User correction rate | 35.5% |
The Correction Loop Is the Product
The 35.5% correction rate isn't a failure metric — it's the engine of improvement. When a hunter overrides an AI prediction, both the original classification and the correction are stored. Over time, this creates a gold-standard training set labeled by the people who know these species best — waterfowl hunters who handle birds every day of the season.
For context: Cornell's Merlin achieved its current ~95% accuracy through years of iterative training on millions of community-submitted photos.[3] WingID is at the beginning of that same curve — but with a fundamentally harder identification task. Merlin identifies live birds in habitat with full body plumage visible. WingID identifies species from a single harvested wing.
Each correction makes the next model version better. The first season's 742 submissions and corrections are the foundation of a training dataset that will grow with every photo submitted.
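The correction loop described above only needs to pair each AI prediction with the hunter's optional override. A minimal sketch, with field names that are assumptions rather than WingID's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CorrectionRecord:
    """Pairs an AI prediction with an optional hunter override (hypothetical schema)."""
    photo_id: str
    predicted_species: str
    confidence: float
    corrected_species: Optional[str] = None  # None means the hunter confirmed the AI

    @property
    def label(self) -> str:
        """Gold-standard training label: the correction if present, else the confirmed prediction."""
        return self.corrected_species or self.predicted_species

    @property
    def was_corrected(self) -> bool:
        return self.corrected_species is not None

def correction_rate(records: list[CorrectionRecord]) -> float:
    """Fraction of submissions the hunter overrode (the metric reported as 35.5% above)."""
    if not records:
        return 0.0
    return sum(r.was_corrected for r in records) / len(records)
```

Because every record carries both the prediction and the final label, the same table serves double duty: it measures model accuracy over time and supplies the labeled examples for the next training run.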
A Biologist's Evaluation
For the biologists and researchers who have spent careers working with the PCS, a new technology should be evaluated skeptically. That skepticism is earned — the history of wildlife management is littered with promising technologies that didn't survive contact with field conditions.
It digitizes the existing methodology rather than replacing it. WingID uses the same diagnostic approach as the PCS — wing plumage characteristics — but captures them as high-resolution digital images instead of physical specimens. The underlying biology doesn't change.
It scales the speciators rather than eliminating them. Four speciators for an entire country is a single point of failure for a critical federal dataset. WingID doesn't aim to replace expert identification — it aims to extend it. AI handles initial classification; experts review, correct, and validate.
It generates training data passively. Every hunter who submits a photo and confirms or corrects the ID is contributing to a labeled dataset. Unlike traditional training programs that require dedicated effort, WingID's training data is a byproduct of hunting.
It works offline. On-device inference means the model runs where hunters actually are — in the field, without connectivity. Results sync later. This isn't a lab tool that requires ideal conditions.
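The division of labor in the second point above (AI classifies, experts validate) can be expressed as a simple confidence-threshold triage. The threshold value, the routing labels, and the hard-pair escalation rule are all illustrative assumptions, not WingID parameters:

```python
# Species pairs the PCS accuracy data flags as hard to separate
# (e.g. female/juvenile Canvasback vs. Redhead, per the discussion above).
HARD_PAIRS = {frozenset({"Canvasback", "Redhead"})}

def triage(predicted: str, runner_up: str, confidence: float,
           threshold: float = 0.85) -> str:
    """Route a prediction: auto-accept confident IDs, queue the rest for experts.

    The 0.85 threshold is an illustrative assumption.
    """
    if frozenset({predicted, runner_up}) in HARD_PAIRS:
        # Documented confusion pairs always get a human look,
        # regardless of how confident the model is.
        return "expert_review"
    return "auto_accept" if confidence >= threshold else "expert_review"
```

In a setup like this, the expert-review queue becomes the speciators' remote, asynchronous workload: they see only the cases the model is unsure about or that are known to be hard, rather than every wing.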
The Cost and Revenue Equation
The current PCS requires USFWS to print and mail postage-paid envelopes, receive and store biological specimens, ship frozen wings to regional offices, and staff multi-day wingbee events — all to process approximately 90,000 wings per year with four speciators. Every step carries cost: postage, cold storage, labor, travel, and the opportunity cost of biologists who could be doing other work.
WingID inverts that cost structure. Hunters submit wing photos with a free app on the phone they already carry. There are no envelopes to mail, no specimens to freeze, no shipping logistics, and no multi-day identification events to staff. The AI handles initial classification instantly. Expert review can happen remotely, asynchronously, from anywhere — multiplying the capacity of the existing speciator workforce rather than replacing it.
For state agencies, the value proposition is equally direct. Real-time species composition data during the season — not months after it ends — means managers can monitor harvest composition as it happens. If an unusual pattern emerges (unexpected species shifts, concerning sex ratios), agencies see it in weeks rather than discovering it the following year.
For university research programs, WingID creates an entirely new class of data asset. The growing library of labeled wing photographs — with species, AI predictions, and hunter corrections — is a training dataset that didn't exist before. Graduate students and faculty studying waterfowl identification, morphology, or population dynamics can access this dataset for published research at no cost to their institutions.
For conservation organizations like Ducks Unlimited and Delta Waterfowl, WingID offers a scalable way to engage their member bases in citizen science. Every hunter who uses the app is contributing to the same compositional data that drives season frameworks — turning routine hunting activity into a conservation contribution with no additional effort required.
From Missouri to California: Two Flyways, One Architecture
WingID's data architecture is designed to be interoperable with existing PCS workflows. The platform's wing sample schema includes collection method fields that distinguish harvest data, survey data, and research submissions — the same categories the PCS uses.
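The collection-method distinction described above maps naturally to an enumerated field. A minimal sketch: the enum values mirror the three categories named in the text, but the field names and export shape are assumptions, not WaterfowlAI's production schema.

```python
from dataclasses import dataclass
from enum import Enum

class CollectionMethod(Enum):
    """Categories named in the text; string values are illustrative."""
    HARVEST = "harvest"
    SURVEY = "survey"
    RESEARCH = "research"

@dataclass
class WingSample:
    photo_id: str
    species: str
    collection_method: CollectionMethod
    state: str

def pcs_compatible_rows(samples: list[WingSample]) -> list[dict]:
    """Flatten samples into plain rows, the shape a PCS-style export might expect."""
    return [
        {
            "photo_id": s.photo_id,
            "species": s.species,
            "collection_method": s.collection_method.value,
            "state": s.state,
        }
        for s in samples
    ]
```

Keeping the categories as a closed enum, rather than free text, is what makes the data mergeable with existing PCS workflows: every record lands in exactly one of the buckets the survey already uses.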
WaterfowlAI is actively working with university research partners in California — including collaborations with UC Davis, home to the Dennis G. Raveling Endowed Chair in Waterfowl Biology and the only avian biology-focused graduate program in the UC system[5] — to explore how AI-powered wing identification can modernize harvest monitoring for the Pacific Flyway.
California's Central Valley has lost 95% of its historic wetlands but still supports approximately 60% of all wintering Pacific Flyway waterfowl.[6] WaterfowlAI has developed formal proposals for modernizing harvest monitoring in California and built a dedicated California WingID platform as a proof of concept.
Citizen Science Without the Friction
WingID is free to every user — no subscription required. That's a deliberate choice modeled on what made eBird successful. eBird has amassed 2.1 billion bird observations from 1.2 million contributors, produced over 90 peer-reviewed publications, and has been validated by USFWS itself as providing “the best available high-resolution information across wide U.S. areas for the entire year.”[7][8]
But eBird requires active participation — birders go out specifically to observe and report. WingID generates species identification data as a natural byproduct of the hunting experience. Snap a photo, get an ID, move on. The conservation data is captured automatically.
When WaterfowlAI launched its Flyway Drive wing photo collection initiative, the effort was supported by Ducks Unlimited's Conservation team — an endorsement from the organization that conserved a record 1.2 million acres in FY2025 and generated $421 million in revenue.[9]
What Comes Next
- Model accuracy improvement through the growing correction-labeled training dataset.
- Age and sex determination beyond species identification — closing the gap with full PCS capability.
- Formal research validation with university partners in both the Mississippi and Pacific flyways.
- Agency pilot programs to test integration with existing PCS workflows.
- International expansion — WingID already has users in England, and the underlying technology is species-agnostic.
The Parts Collection Survey has served conservation well for 65 years. WingID isn't here to retire it. It's here to give it a digital future — one where every hunter with a smartphone can contribute to the same species composition data that currently depends on four people, a mail system, and a lot of freezer space.
Sources
1. USFWS, “Speciator? Wingbee? One Way We Count Waterfowl”; USFWS, Migratory Bird Parts Collection Survey.
2. USGS, “Accuracy of Aging Ducks in the US Fish and Wildlife Service Waterfowl Parts Collection Survey” (Raftovich & Chandler, Journal of Wildlife Management, 2013).
3. Cornell Lab of Ornithology, Merlin Photo ID v3.5; Cornell Tech, “Merlin Bird Photo ID.”
4. USGS Upper Midwest Environmental Sciences Center, “Deep Learning for Automated Detection and Classification of Waterfowl.”
5. UC Davis Avian Sciences Graduate Group; UC Davis, John M. Eadie research program.
6. USGS, “Waterfowl Ecology in California and Pacific Flyway.”
7. eBird 2025 Year in Review.
8. USFWS/Cornell Lab, eBird policy validation study; Horns et al. (2021), Journal of Applied Ecology.
9. Ducks Unlimited 2025 Annual Report.