69.9% of sales teams forecast from CRM data they don’t trust. Not data they haven’t checked. Data they actively distrust — and use anyway.

That number should end every forecast accuracy conversation. Especially the ones that start with “our reps need more CRM discipline.”

The reps aren’t the problem. The system feeding the forecast is.

The Trust Gap

A forecast is only as reliable as the data beneath it. And in most B2B organizations, that data has been degrading for months before anyone names the problem.

Here’s what degradation looks like in practice: required fields get filled with placeholder values. Stage definitions drift from how deals actually move. Activity logging becomes inconsistent because the CRM doesn’t reflect the actual sales motion.

Contacts go stale. Relationships between accounts, opportunities, and contacts break down.

None of this is visible in the forecast number itself. The number looks like a number. It sits in a slide deck. Someone presents it with confidence. The board nods.

Then the quarter ends 15-20% below plan, and the post-mortem focuses on deal slippage — not on the fact that the foundation was unreliable from the start.

The Complexity Multiplier

The trust gap gets worse as deal complexity increases. B2B win rates average 21-25%. Enterprise deals close at 17%. And the average B2B deal now involves 13 decision-makers, up from 7 just a few years ago.

Every additional stakeholder creates another handoff. Every handoff is a potential data gap. The opportunity record should capture buying committee structure, engagement status across contacts, competitive positioning, and deal progression signals. In most CRMs, it captures a stage dropdown and a close date.

So the rep forecasts based on their relationship with one champion. The manager discounts the number based on gut feel. The VP discounts it again. The number that reaches the board has been adjusted three times. Each adjustment compensates for a system that doesn’t capture what’s actually happening.

That’s not a forecast. It’s a consensus estimate built on institutional distrust.

Where the System Breaks

The forecast trust gap isn’t a single failure. It’s a cascade of small misalignments that compound over time — the same erosion pattern that shows up across every revenue system.

Stage definitions don’t match reality. The CRM stages were defined when the company was smaller, the product was simpler, and the buyer journey was shorter. They haven’t been updated. Reps force-fit deals into stages that don’t describe what’s actually happening.

Activity data is incomplete. The CRM captures logged calls and emails. It misses the Slack conversation, the hallway meeting, the champion’s internal presentation to their CFO. The activities that actually move deals forward are invisible to the system.

Contact relationships aren’t mapped. Knowing that 13 people are involved in a deal is useless if the CRM only tracks 3 of them. The buying committee structure — who influences whom, who has budget authority, who can veto — lives in the rep’s head, not the system.

Historical patterns aren’t available. What’s the average cycle time for deals of this size? What’s the win rate when a specific competitor is involved? What stage do deals most commonly stall at? This data exists in the CRM, but it’s buried under inconsistent tagging and unreliable stage progression timestamps.
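When the tagging is consistent, the historical questions above reduce to simple aggregations over opportunity records. A minimal sketch, using hypothetical field names and toy data rather than any real CRM schema:

```python
# Illustrative only: the field names ("competitor", "won", "cycle_days")
# and the records themselves are assumptions, not a real CRM export.
opportunities = [
    {"competitor": "Acme", "won": True,  "cycle_days": 92},
    {"competitor": "Acme", "won": False, "cycle_days": 140},
    {"competitor": None,   "won": True,  "cycle_days": 60},
    {"competitor": "Acme", "won": False, "cycle_days": 121},
]

# Win rate and average cycle time when a specific competitor is involved.
acme_deals = [o for o in opportunities if o["competitor"] == "Acme"]
win_rate = sum(o["won"] for o in acme_deals) / len(acme_deals)
avg_cycle = sum(o["cycle_days"] for o in acme_deals) / len(acme_deals)
print(f"win rate vs. Acme: {win_rate:.0%}, avg cycle: {avg_cycle:.0f} days")
```

The query is trivial; the hard part is that inconsistent tagging makes the `competitor` field, and therefore the answer, unreliable.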

The Discount Economy

When teams don’t trust their forecast data, they develop coping mechanisms. The most common is the layered discount — a percentage cut applied at each management level to compensate for data they don’t believe.

Rep forecasts $500K. Manager cuts to $400K. VP cuts to $340K. The board sees $340K and plans accordingly. If the quarter lands at $360K, everyone calls it a beat. If it lands at $310K, the finger-pointing starts.
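The cuts compound. A quick sketch of the arithmetic in that example (the specific percentages are inferred from the numbers above, not prescribed):

```python
# Illustrative only: each management level applies its own haircut
# to the rep's number, and the haircuts multiply.
rep_forecast = 500_000
manager_cut = 0.20   # manager trims 20% -> $400K
vp_cut = 0.15        # VP trims another 15% -> $340K

board_number = rep_forecast * (1 - manager_cut) * (1 - vp_cut)
total_discount = 1 - board_number / rep_forecast
print(f"${board_number:,.0f}")        # $340,000
print(f"{total_discount:.0%} total")  # 32% total
```

Two individually modest cuts add up to a 32% total discount, none of it traceable to anything in the data itself.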

The discount economy feels like pragmatism. It’s actually an expensive workaround for a broken system. Every hour spent debating discount percentages is an hour not spent fixing the data. The discounting becomes unnecessary when the foundation is reliable.

And the discounts themselves are unreliable. They’re based on pattern recognition from previous quarters — quarters where the same data quality issues existed. You’re calibrating your correction factor against a baseline that was already wrong.

What Fixing This Actually Looks Like

The fix isn’t a better forecasting model layered on top of bad data. It’s fixing the data.

That means aligning stage definitions with how deals actually progress — not how they were designed to progress three years ago. It means capturing buying committee structure in the CRM, not just the primary contact. It means building activity logging into the workflow so it happens automatically, not relying on reps to manually log after every interaction.

It also means accepting that some of this is a system design problem, not an effort problem. If the CRM doesn’t make it easy to capture the right data, the right data won’t get captured. No amount of training or accountability changes that equation.

The companies that forecast accurately aren’t staffed with more disciplined reps. They have systems that capture what matters, structured so the data stays reliable as the business scales.

The Question Worth Asking

If 69.9% of teams forecast from data they don’t trust, the question isn’t whether your forecast is accurate. It’s which parts of your system are producing data you can’t rely on — and how long that’s been compounding.

The answer determines whether you’re looking at a configuration fix or a full system rebuild. The earlier you ask, the cheaper the answer.


Most forecast problems aren’t forecasting problems. They’re system problems that surface at forecast time. Take the Designate Scorecard to find out where your revenue systems are breaking down.
