
Why 'Do We Have Enough Pipeline?' Is the Wrong Question (And the Three Pipeline Questions That Actually Matter)
Your forecast missed last quarter. Your CRO asked the same question they ask every Monday morning: "Do we have enough pipeline?"
The team scrambled. SDRs got pushed to add more leads. Marketing got asked for more campaigns. Three weeks later, the gap is the same.
That question hides the actual problem.
I sat down with Yannis Sarantis, VP of Revenue Operations at Hack The Box, on The Revenue Vault. Yannis runs revenue ops at a company with 4 million community members and one of the most operator-driven GTM organizations in cybersecurity. Before Hack The Box, he survived Greece's financial collapse at 17, paid his way through US schools, did 40 startup interviews after HBS, and joined a company that eventually shut down.
He calls the calm-under-pressure trait that made him a great rev ops leader his "superpower." It came from chaos.
What follows is the operator-level breakdown of the conversation.
The Wrong Question Every CRO Asks
The default conversation in most exec rooms is binary. Do we have pipeline? Yes or no. Are we covered? 4x or 5x? The number gets thrown around like a poker chip.
Yannis's framing is sharper. Pipeline questions in binary form give you a binary answer. Structural problems hide underneath. The answer to "do we have pipeline" can be yes while the company is still bleeding revenue.
He gave the example of a team running a 10% disco-to-close win rate. More pipeline does not fix that team. More pipeline pushes more leads through the same leak.
I see this pattern across hundreds of B2B sales orgs. The CRO asks for more pipeline. The board asks for more pipeline. The team responds. Three quarters later the win rate is unchanged, the deal cycle is longer, and the forecast still misses.
The Amazon Analogy: Track Leading Indicators
Yannis's framework comes from outside revenue ops entirely. He is a former mechanical engineer with a physics background. His instinct is to break complex systems into first principles and look at the layer underneath.
His Amazon analogy lands clean. Amazon's mission is customer happiness. Their KPIs are revenue and retention. But in their operating cadences, leadership rarely looks at those headline numbers. They look at the leading indicators: percent of customers returning packages, average delivery times, defect rates per million.
Why? Because by the time the headline metric moves, the cause is six weeks old.
The same pattern applies to pipeline. By the time you notice the forecast is missing, the leak happened a quarter ago.
The Three Pipeline Questions Every CRO Should Run Weekly
Yannis's pipeline review at Hack The Box runs three diagnostic questions every Monday. The CRO is in the room. The head of marketing is in the room. The head of BDR is in the room. They run the same three questions every week.
Question 1: What percent of ICP leads got first response inside SLA?
Most companies have an SLA on response time. Almost none of them measure it. The leak shows up here first. If 70% of your ICP leads are getting a response within an hour and the rest are taking three days, the three-day cohort dies. The fix is response speed, not more leads.
Question 2: What's our week-over-week ICP volume by segment?
Pipeline is not pipeline. There is ICP pipeline and there is noise. If your ICP volume is flat or declining and your total volume is climbing, you are looking at a targeting problem masquerading as a coverage problem. More SDRs and more campaigns will not help.
Question 3: What's our disco-to-close win rate this month versus last?
This is the one most teams skip. Win rate by stage is the most diagnostic metric in the funnel. If it drops, something changed between disco and close: a competitive shift, a pricing problem, or a skill gap on the team.
Run all three weekly. The leaks surface early. The fixes go in before the quarter is gone.
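If you want to make these three questions mechanical rather than a Monday scramble, they reduce to a few lines of code over a flat CRM export. The sketch below is a minimal illustration, not the Hack The Box implementation: the field names (is_icp, first_response_minutes, segment, reached_disco, and so on) and the one-hour SLA are placeholder assumptions you would map to whatever your CRM actually exposes.

```python
# Minimal sketch of the three weekly diagnostics over a flat CRM export.
# All field names and the SLA threshold are illustrative assumptions.
from collections import Counter

SLA_MINUTES = 60  # assumed SLA: first response within one hour

def pct_icp_response_inside_sla(leads):
    """Question 1: percent of ICP leads that got a first response inside SLA."""
    icp = [l for l in leads if l["is_icp"]]
    if not icp:
        return 0.0
    hit = [l for l in icp
           if l["first_response_minutes"] is not None
           and l["first_response_minutes"] <= SLA_MINUTES]
    return 100.0 * len(hit) / len(icp)

def icp_volume_by_segment(leads, week):
    """Question 2: ICP lead counts per segment for a given week."""
    return dict(Counter(
        l["segment"] for l in leads if l["is_icp"] and l["created_week"] == week
    ))

def disco_to_close_win_rate(opps, month):
    """Question 3: win rate among deals that reached discovery and closed in the month."""
    closed = [o for o in opps if o["closed_month"] == month and o["reached_disco"]]
    if not closed:
        return 0.0
    won = [o for o in closed if o["stage"] == "closed_won"]
    return 100.0 * len(won) / len(closed)

# Toy Monday run:
leads = [
    {"is_icp": True, "first_response_minutes": 45, "created_week": "2025-W12", "segment": "enterprise"},
    {"is_icp": True, "first_response_minutes": 4320, "created_week": "2025-W12", "segment": "mid-market"},
]
opps = [
    {"reached_disco": True, "stage": "closed_won", "closed_month": "2025-03"},
    {"reached_disco": True, "stage": "closed_lost", "closed_month": "2025-03"},
]
print(pct_icp_response_inside_sla(leads))        # 50.0
print(icp_volume_by_segment(leads, "2025-W12"))  # {'enterprise': 1, 'mid-market': 1}
print(disco_to_close_win_rate(opps, "2025-03"))  # 50.0
```

Put the three numbers on one dashboard, compare them week over week, and the Monday meeting moves from "do we have pipeline" to "which of these three moved."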
Removing Forecast Bias
A second framework Yannis breaks down on the episode is the three-layer forecast.
Most forecasts are biased in two directions. Early in the quarter, reps over-commit to look good. Late in the quarter, they sandbag so they can beat their own number. Either bias produces a forecast that misses.
Yannis's stack at Hack The Box layers three independent inputs:
Layer 1: Bottom-up rep submission, no peer visibility. Reps submit their own forecasts independently. They do not see each other's numbers until the manager review. This eliminates the cohort drift where everyone shifts their commit toward the team average.
Layer 2: Mathematical model based on stage history. A weighted model takes deal age, stage, and historical conversion rates, and produces an expected close number. The model is dumb, but it is unbiased.
Layer 3: Qualitative context layer. Pull the call transcripts. Look at sentiment of the last call. Track time since last contact. A deal in commit with negative call sentiment and a two-week gap of no contact is a slipping deal, regardless of what the rep said.
When all three layers agree, the forecast is real. When they disagree, the deal goes back a stage.
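To make the triangulation concrete, here is a minimal sketch of how Layers 2 and 3 can be computed independently of the rep's submission and then checked against it. The stage-conversion weights, the sentiment field, the two-week silence threshold, and the 15% tolerance are all assumptions for illustration; this is not the model Yannis runs at Hack The Box.

```python
# Minimal sketch of three-layer forecast triangulation.
# Weights, thresholds, and field names are illustrative assumptions.

# Layer 2: historical stage-conversion rates give a dumb-but-unbiased expected value.
STAGE_CONVERSION = {"disco": 0.15, "proposal": 0.35, "negotiation": 0.60, "commit": 0.85}

def model_expected_value(deals):
    """Expected close dollars from stage history alone."""
    return sum(d["amount"] * STAGE_CONVERSION.get(d["stage"], 0.0) for d in deals)

# Layer 3: qualitative red flags pulled from call logs and contact recency.
def qualitative_flags(deal, max_days_silent=14):
    flags = []
    if deal.get("last_call_sentiment") == "negative":
        flags.append("negative sentiment on last call")
    if deal.get("days_since_last_contact", 0) > max_days_silent:
        flags.append("no contact in over two weeks")
    return flags

def layers_agree(rep_commit_total, deals, tolerance=0.15):
    """Layer 1 vs Layer 2: is the bottom-up number within tolerance of the model?"""
    model = model_expected_value(deals)
    if model == 0:
        return rep_commit_total == 0
    return abs(rep_commit_total - model) / model <= tolerance

def review_deal(deal):
    """A committed deal with qualitative red flags goes back a stage for review."""
    if deal["stage"] == "commit" and qualitative_flags(deal):
        return "demote: review before counting in the forecast"
    return "keep"

deal = {"amount": 50_000, "stage": "commit",
        "last_call_sentiment": "negative", "days_since_last_contact": 16}
print(model_expected_value([deal]))   # 42500.0
print(layers_agree(60_000, [deal]))   # False -- rep commit is ~41% above the model
print(review_deal(deal))              # demote: review before counting in the forecast
```

The point of the sketch is the independence: the model never sees the rep's number, and the qualitative layer never sees the model. Agreement between the three is what makes the forecast real.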
The 12-week version of this lifts forecast accuracy from the industry-average 63% to north of 80%.
The Hardest Conversation a Rev Ops Leader Has
Yannis told a story about pushing for multi-year contracts at Hack The Box. The board wanted longer terms. The execs wanted to lock in revenue. The CRO and the rev ops team aligned on the play.
The reps did not execute it.
The natural exec move is to redesign the comp plan. Push more incentives toward the longer-term deal. Yannis's response was different. Get out of the office. Shadow the calls. Talk to the reps. Talk to the customers.
What he found was a skill gap. The reps lacked the ability to handle the multi-year objection. They got hit with the price objection on the second call, did not know how to reframe, and conceded to the shorter term.
The fix was skill development, not a new comp plan.
A mentor told me a line years ago that fits this exactly: "The answer is never in your office."
My version: revenue ops should look at the data first, then talk to the reps, then talk to the customers. The numbers tell you something is wrong. The reps tell you what is actually broken. The customers tell you what is actually true.
Run the loop in that order and the answer surfaces.
FAQ
Why does asking "do we have enough pipeline?" hide problems?
It compresses a structural question into a binary answer. A team with low ICP fit and high lead volume can answer "yes." A team with great fit but slow response can answer "yes." Both teams will miss forecast.
What are leading indicators for pipeline health?
The three Yannis runs at Hack The Box: ICP response time inside SLA, week-over-week ICP volume by segment, and disco-to-close win rate by month. Each surfaces a different leak.
How do you remove forecast bias?
Three independent inputs that triangulate: bottom-up rep submission with no peer visibility, a mathematical model based on stage history, and a qualitative context layer pulling call sentiment and contact recency. When all three align, the forecast is real.
What's the right order of investigation when forecast misses?
Data first, reps second, customers third. The data tells you where to look. The reps tell you what is broken. The customers tell you why.
Bottom Line
Pipeline coverage is a vanity metric. The leaks live one layer below. Run the three diagnostic questions weekly. Triangulate the forecast across three independent inputs. Get out of the office and into the trenches when the numbers stop adding up.
If you lead a B2B sales team and you want to see exactly where your forecast is leaking, book a free Executive Snapshot below.
For the full conversation with Yannis, watch the episode embedded at the top of this post.

