The MN Wolf Count: What the Numbers Do (and Don't) Tell Us
Part Two of Beyond the Count: the science, the stories, and the stakes
When it comes to wolves, everything comes back to the numbers.
How many wolves are there?
Are there too many?
Have they “recovered”?
Should protections be removed?
Numbers shape headlines. Numbers drive public perception. And most importantly, they influence policy decisions that determine whether wolves live or die.
So it might surprise you to know that the method we use to estimate Minnesota’s wolf population hasn’t fundamentally changed in over two decades.
Yes, we’ve added GPS collars and updated maps. But the core framework—the assumptions, the math, the sampling strategy—remains largely the same.
And that matters. Because how we count wolves shapes how we understand their status, their impact, and their future.
For decades, Minnesota’s wolf population has been estimated at around 2,700 individuals. This number is cited in debates over delisting, hunting, and management. But does it tell the full story?

At first glance, a stable population might seem like a sign of success. But when it comes to wildlife conservation, numbers alone don’t define recovery. To understand the true status of wolves in Minnesota, we need to look beyond population estimates and consider factors like pack stability, genetic diversity, and ecological function.
How We Count Wolves in Minnesota: A Primer
Each winter, the Minnesota DNR estimates the number of wolves in the state using a combination of:
GPS-collared wolves from a sample of packs
Aerial surveys and track counts to estimate or confirm the size of collared packs
GPS data to determine average territory sizes of collared packs
And a constant assumption that 15% of wolves are lone individuals
From there, they divide the estimated total wolf-occupied range (updated at five-year intervals) by the average collared pack territory size, multiply by the average collared pack size, and add 15% to account for lone wolves.
The result? A population estimate.
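To make that framework concrete, here's a minimal sketch of the arithmetic in Python. The occupied-range and pack-size values echo figures cited later in this piece; the average territory size is a placeholder I picked so the output lands near the familiar ~2,700, not an official DNR input.

```python
# A sketch of the estimation formula, with illustrative inputs.
# The ~74,000 km^2 occupied range and the 3.8-4.9 pack-size range
# appear later in this piece; the territory size is a placeholder.

occupied_range_km2 = 74_000    # statewide wolf-occupied range
avg_territory_km2 = 140        # average collared-pack territory (placeholder)
avg_pack_size = 4.4            # average collared pack size
lone_wolf_fraction = 0.15      # fixed assumption: 15% of wolves are loners

estimated_packs = occupied_range_km2 / avg_territory_km2
pack_wolves = estimated_packs * avg_pack_size
population = pack_wolves * (1 + lone_wolf_fraction)   # "add 15%"

print(f"{estimated_packs:.0f} packs -> {population:,.0f} wolves")
# 529 packs -> 2,675 wolves
```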
It’s clean. Simple. Predictable.
But here’s the thing: nature isn’t clean, simple, or predictable, is it?
Where This Method Falls Short
Over the past 8 years, I’ve spent a lot of time analyzing this method and asking hard questions about it.
Questions like:
Why do we assume lone wolves make up 15% of the population every single year, no matter what?
Why are different wolf packs sampled in different locations each year, despite known variation in habitat, prey, and human disturbance, all of which shape pack make-up, behavior, and ecology?
Why do we only update our "occupied range" measure every 5 years?
And what happens when we base statewide wolf estimates on the behavior of a few dozen packs out of several hundred?
And, perhaps most importantly, what are we missing?
A Margin of Error
Annually, population estimates are reported along with a margin of error (at 90% confidence), which typically ranges from ±16% to ±27% of the estimated total.
In 2022–23, the margin was ±800 wolves, more than one-quarter of the entire estimate: a possible swing of 1,600 wolves between the low and high bounds.
A 25%+ error range isn't just a “small uncertainty.” It means the estimate could be off by hundreds of wolves in either direction.
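For concreteness, here's that bound arithmetic in a few lines of Python. The point estimate below is illustrative (actual values vary by report year); only the ±800 margin comes from the text above.

```python
# Illustrative only: a round ~2,900-wolf estimate with the reported ±800 margin.
estimate, margin = 2_900, 800
low, high = estimate - margin, estimate + margin
print(f"90% CI: {low:,} to {high:,} wolves ({margin / estimate:.0%} either way)")
# 90% CI: 2,100 to 3,700 wolves (28% either way)
```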
When management decisions (like delisting, hunting quotas, or lethal control thresholds) are based on these numbers, it gives the illusion of precision where very little actually exists.
In science, error margins are expected. But we should be deeply cautious when the method we use introduces as much uncertainty as the phenomenon we’re trying to measure.
Yet year after year, the reports say the same thing:
"The population is stable."
No mention of how wolf behavior, mortality, or dispersal might be changing.
No caveat that pack size and territory estimates are built on shifting sand.
No mention that over 7% of the state's estimated population is killed by USDA-APHIS annually.
No serious analysis of genetic diversity, ecological function, or habitat connectivity.
Just a number.
And when federal protections hang in the balance, a number is often all it takes.
The Territory Problem
At the heart of Minnesota’s population estimate is a simple equation:
Estimated Wolf Population =
(Occupied Range ÷ Average Pack Territory Size) × Average Pack Size × 1.15 (the 15% lone-wolf adjustment)
It sounds straightforward: just plug in some field data and out comes a number. But in reality, this formula hinges almost entirely on a single, highly sensitive variable: average collared pack territory size.
Here’s why that matters:
The occupied range (around 74,000 km²) hasn’t changed significantly since 2017.
Average pack size varies only slightly year to year (typically 3.8–4.9 wolves).
The 15% lone wolf adjustment is fixed, regardless of real-world fluctuation.
So when the population estimate goes up or down by hundreds of wolves, it’s not because the landscape changed, or wolves suddenly multiplied or vanished.
It’s because the average pack territory size—the denominator in the equation—shifted.
A smaller average territory means more estimated packs fit into the landscape, inflating the total population.
A larger average territory shrinks the number of estimated packs, bringing the population estimate down.
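A quick sensitivity check makes the point. Using the same illustrative inputs as the sketch above, hold everything fixed and nudge only the average territory size:

```python
# How much the statewide estimate moves when only the average
# territory size (the denominator) changes. Inputs are illustrative.

OCCUPIED_RANGE_KM2 = 74_000
AVG_PACK_SIZE = 4.4
LONE_WOLF_MULTIPLIER = 1.15   # the fixed 15% adjustment

def estimate_population(avg_territory_km2: float) -> float:
    packs = OCCUPIED_RANGE_KM2 / avg_territory_km2
    return packs * AVG_PACK_SIZE * LONE_WOLF_MULTIPLIER

for territory in (120, 140, 160):
    print(f"{territory} km^2 -> {estimate_population(territory):,.0f} wolves")
# 120 km^2 -> 3,120 wolves
# 140 km^2 -> 2,675 wolves
# 160 km^2 -> 2,340 wolves
```

A 20 km² shift in the denominator, well within what a few dozen sampled packs could produce by chance, moves the statewide total by 300–450 wolves.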
Crucially, remember where the number for average territory size comes from in the first place.
It’s based on GPS data from just 35–50 different packs each year—out of more than 500 estimated statewide.
In other words, the entire estimate rests on a very small (and sometimes incomplete) sample. Even worse, some years lean heavily on scaled estimates: when a pack has fewer than 100 GPS points, its observed territory size is multiplied by 1.37 as a correction factor. But that scaling is applied inconsistently from year to year.
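Here's a toy illustration of that correction (all numbers invented): whether the 1.37 scaling gets applied to sparse-data packs meaningfully shifts the mean territory, and with it the whole estimate.

```python
# The sparse-data correction described above: packs with fewer than
# 100 GPS fixes get their observed territory scaled by 1.37.
# All values below are invented for illustration.

territories_km2 = [110, 125, 150, 95, 130]  # observed territory sizes
gps_fix_counts  = [240, 85, 310, 60, 400]   # GPS points per pack
SCALE = 1.37

corrected = [
    area * SCALE if fixes < 100 else area
    for area, fixes in zip(territories_km2, gps_fix_counts)
]

print(f"mean territory, raw:       {sum(territories_km2) / 5:.0f} km^2")  # 122
print(f"mean territory, corrected: {sum(corrected) / 5:.0f} km^2")        # 138
```

In this made-up sample, the correction alone moves the mean territory by about 16 km², enough, per the sensitivity sketch above, to shift the statewide estimate by a few hundred wolves.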
The upshot: a few collars in the wrong place, or a few territory sizes misread, can swing the statewide estimate by hundreds or even 1,000+ wolves.
That’s not a flaw in the math.
It’s a flaw in what we ask that math to carry.
And it only gets worse.
Collars Go Missing. Wolves Do, Too.
Territory size isn’t the only weak link.
Each year, a substantial percentage of collared wolves are recorded as dead, missing, or dropped from the data entirely. In some years, as many as 40–47% of collared wolves were killed or simply vanished during the survey period.
On its face, the model does something sensible: it only includes data from active collars. But this means wolves that go missing aren't factored into mortality estimates; they're essentially erased from the equation.
There’s no correction for poaching, disease, car strikes, or dispersal-related deaths. And no modeling of survival probabilities for missing animals.
This creates a dangerous illusion of stability. If your dataset silently loses wolves each year—but you never account for where they went or why—you’re left with numbers that look solid, but are quietly leaking meaning.
When we treat the annual estimate as a stable indicator of biological reality, we ignore how much it's propped up by incomplete telemetry, fixed assumptions, and unacknowledged attrition.
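To see how much that silent attrition can matter, here's a back-of-the-envelope bound (all counts invented) on collared-wolf survival under different assumptions about wolves whose collars simply went quiet. Known-fate and Kaplan–Meier survival models exist precisely to handle this kind of censoring; nothing below is the DNR's actual accounting.

```python
# Bounding annual collared-wolf survival under different assumptions
# about missing animals. All counts are invented for illustration.

collared = 50       # wolves collared at the start of the period
known_dead = 10     # confirmed mortalities
missing = 12        # collars that went silent; fate unknown

# Missing wolves silently dropped from the denominator:
dropped = 1 - known_dead / (collared - missing)
# Optimistic bound: every missing wolf is alive.
all_alive = 1 - known_dead / collared
# Pessimistic bound: every missing wolf is dead.
all_dead = 1 - (known_dead + missing) / collared

print(f"missing dropped:   {dropped:.0%}")    # 74%
print(f"all missing alive: {all_alive:.0%}")  # 80%
print(f"all missing dead:  {all_dead:.0%}")   # 56%
```

Dropping the missing animals silently picks one answer from a 24-point range without ever saying so.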
So... Why Do We Do It This Way?
That’s exactly the question I want to explore throughout the remainder of this series.
Why do we use methods that haven’t kept pace with the science?
Why are we okay with defining “recovery” with a calculator instead of grounding it in biological reality?
And why do we treat wolves like a statistic when their survival depends on understanding them as individuals, families, communities, and keystone species within complex ecosystems?
I have asked. I have held meetings. I have sent emails. I have waited. And to this day, I've received nothing more than an “oh, damn” and a few acknowledgements that the methodology could be more robust and that budget is a hindrance.
Fortunately, I've written lengthy reports about how the methodology could be updated in ways that require little to no increase in budget, since they rely almost entirely on updates to statistical modeling and computation.
Those messages have gone ignored, too.
But you know what? That's all beside the point. Because, as helpful as estimates are, and as needed as these updates may be, they still won't tell the full story beyond the count.
What’s Next
In the coming weeks, I’ll break down each part of Minnesota’s wolf estimation method—where it works, where it doesn’t, and how we could do better.
We’ll also look at:
What wolves actually do for Minnesota’s ecosystems
How rare (and often misrepresented) conflicts with livestock really are
The effectiveness of nonlethal conflict prevention
What “recovery” should actually mean—and what happens when we get it wrong
This isn’t about choosing sides.
It’s about choosing better questions and finding better solutions.
Because wolves are more than a number.
Thanks for reading,
- Devon