Summary: Most agencies fail when asked one specific question: “Why should we not hire you?” The 11 other questions in this list are designed to expose the gap between what an agency claims and what it can deliver. Use them as a structured filter before you sign anything. The good answers will help you. The bad answers will save you a year and several crore.
Founders ask the wrong questions when evaluating growth agencies. They ask about packages, pricing, and “what would you do for us in the first 90 days.” Those questions get rehearsed answers. Every agency has the same elevator pitch about “data-driven strategies” and “compound growth.” Walking out of those conversations, you learn nothing about whether the agency can actually do the work.
The questions that matter are the ones agencies cannot rehearse. They expose internal process, hiring quality, attribution discipline, and willingness to disqualify themselves. After 13 years running engagements at upGrowth Digital, I can tell you which questions separate good agencies from bad ones inside 30 minutes. There are twelve of them. Use this list before you sign anything.
Run them in this order. The early questions filter for capability. The middle ones filter for honesty. The last few filter for fit. An agency that handles all twelve well is rare. An agency that fumbles three or more should be off your shortlist.
Question 1: “Show us three specific results you have delivered in our vertical.”

Good answer: a senior team member walks you through three engagements in your vertical. They name the specific metric (organic clicks, CPL, ROAS, AI citation share), the specific number, the timeframe, and what they did differently from the previous agency. The case studies are within the last 24 months because patterns from 2022 do not transfer to 2026. They acknowledge what went sideways in each engagement, not just the wins.
Bad answer: case studies that are “anonymous” because of NDA, or that quote “300% growth” without naming the metric or the baseline, or that describe what the agency did without telling you what the client actually got. If they cannot show specific numbers in your vertical, they have not done the work in your vertical. The Lendingkart engagement at upGrowth delivered 5.7x lead volume, a 30% CPL reduction, and 4x spend scaling on Google Ads. That is the level of specificity to expect. If an agency cannot match that disclosure, they are either new to your vertical or hiding the numbers.
Question 2: “Who exactly will work on our account, by name?”

Good answer: they name the strategist, the specialist (SEO, paid, content, GEO, depending on the engagement), and the account manager. Each named operator has been with the agency for at least 18 months. The strategist has worked with at least three companies in your stage range. The agency commits in writing that these people will not be swapped out without your approval.
Bad answer: vague references to “our team,” senior people on the pitch call who you never see again after signing, junior operators on the actual work, no commitment to staffing continuity. The pitch-and-switch pattern is the most common reason engagements fail. The work the senior team promised is executed by a junior team that does not understand the strategy. Get the names in writing before you sign.
Question 3: “What is your diagnostic process before you propose anything?”

Good answer: they describe a structured diagnostic that runs across attribution, conversion, retention, paid efficiency, organic, AI visibility, and content. They explain how they will sequence the bottleneck fixes. They show you the framework they apply, and they can name it (the Organic Compounding System, the Paid-to-Organic Transition Model, the GEO Visibility Framework, etc). They commit to a written diagnostic deliverable in the first 30 days that you own regardless of whether the engagement continues.
Bad answer: “we’ll start with a discovery call and propose a strategy.” That is not a process. That is a placeholder. Agencies without a real diagnostic process default to their own playbook regardless of what your business actually needs. They fix what they know how to fix, not what is broken in your situation.
Question 4: “How do you approach SEO versus GEO?”

Good answer: they explain that the two disciplines run in parallel rather than as substitutes. They walk you through what changes in content engineering for AI extractors (FAQ schema, entity definitions, original data, structured sections). They name specific examples of brands that have successfully integrated both disciplines. They have an opinion about which signals matter most in 2026 and can defend it.
Bad answer: “GEO is just SEO with extra steps” or “we mostly do SEO and GEO is a future priority” or anything that suggests they have not thought carefully about how AI search has changed the discipline. ChatGPT has 883 million monthly active users in 2026. Google AI Overviews appear in 18% of all searches. Pew Research found pages featured in AI Overviews see a 46.7% drop in click-through rates. An agency that has not integrated GEO thinking by now is two years behind.
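If “FAQ schema” in the good answer above sounds abstract, it is a concrete artifact you can ask the agency to show you. A minimal illustrative sketch of the schema.org FAQPage markup that gets embedded in a page as JSON-LD (the question and answer text here are hypothetical placeholders, not from any client engagement):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is Generative Engine Optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "GEO is the practice of structuring content so AI search engines can extract, cite, and surface it in generated answers."
      }
    }
  ]
}
```

An agency doing real GEO work should be able to pull up markup like this from a live client page, not just talk about it.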
Also Read: Generative Engine Optimization Services
Question 5: “What is your honest win rate?”

Good answer: a specific number, ideally between 60% and 80% of engagements meeting their primary KPI. They explain what defines a win for them and what defines a loss. They tell you what kills engagements (client side: leadership change, budget cut, lack of internal owner; agency side: scope creep, wrong fit at signing). The number feels uncomfortable to share but they share it.
Bad answer: “almost all our engagements succeed” or “we have a 95% retention rate” or anything north of 90%. No agency wins 95% of engagements. The few that exist do so by quietly dropping clients before failure, redefining success, or operating at such low scope that the bar is meaningless. The most credible answer is a real number with real losses attached. Agencies that admit they fail occasionally are usually the ones that succeed more often.
Question 6: “Tell us about a client you turned down, and why.”

Good answer: a specific story, maybe two. The client was the wrong stage, the wrong vertical, had unrealistic expectations, or had broken attribution that needed to be fixed before any agency work made sense. The agency declined the engagement and explained why. They lost the revenue. They were right.
Bad answer: “we work with anyone who is the right fit” or “we have never turned down a client.” Agencies that never disqualify themselves take on engagements they should not. Six months later both sides are unhappy. The willingness to lose a deal at signing is one of the strongest predictors of engagement quality. We use this signal at upGrowth and it has saved both us and the client multiple times.
Question 7: “What will your reporting look like?”

Good answer: weekly reporting with specific metrics tied to the diagnosis they did at kickoff. The metrics include leading indicators (rankings, traffic, CTR, AI citation share, CPL, ROAS) and lagging indicators (qualified leads, pipeline contribution, revenue attribution). They can show you a sample report from another engagement (with names redacted) so you can see the format. They commit to a monthly strategy review with you, not just a metrics dump.
Bad answer: monthly reports with vanity metrics. “We grew your traffic 40%” without explaining whether that traffic converted. Reports built around the agency’s deliverables (X blogs published, Y backlinks built) rather than the client’s outcomes. Agencies that report on activity rather than outcome are agencies that cannot prove their work moved the needle.
Question 8: “How do you handle attribution?”

Good answer: they explain how they distinguish organic-assisted paid conversions, brand search lift from content marketing, AI citation referrals, and direct attribution. They acknowledge the limits of last-click models. They have a position on multi-touch attribution that is more sophisticated than “we use Google Analytics.” They are willing to integrate with your CRM to track downstream pipeline.
Bad answer: “we report on the GA4 default model.” That is not attribution thinking, that is dashboard interpretation. The 2026 reality is that 18% of searches return AI Overviews and 60.7% of AI search market share sits with ChatGPT. Last-click attribution misses most of the value chain. Agencies that have not built a sophisticated attribution view are reporting on the wrong things.
Also Read: How Fi.Money Became the Top Authority in Google AI Overviews
Question 9: “What happens when we disagree with your recommendations?”

Good answer: a clear escalation path. The strategist owns the day-to-day. The senior leadership at the agency is available for cross-functional conflicts. They commit to a quarterly business review with you and your CEO/CMO. They have a process for surfacing disagreements rather than executing on conflicting instructions. They tell you they will push back when they think you are wrong, and they describe a recent example.
Bad answer: “we are very flexible and accommodate client preferences.” Translation: they will execute whatever you ask and not push back. That is not a partner. That is a vendor. Vendors are useful when you know exactly what you want. They are dangerous when your strategic clarity is incomplete because they will execute against your blind spots without challenge.
Question 10: “What does off-boarding look like if we part ways?”

Good answer: a structured 30-to-60-day handover. They commit to documenting the work, transferring asset ownership (content, dashboards, links, accounts), and providing a final strategic recommendation. The off-boarding includes the diagnostic, frameworks, and execution playbooks they built during the engagement. You leave with assets you can hand to the next agency or in-house team without losing six months of context.
Bad answer: anything that suggests off-boarding is an afterthought, or that key information lives in the agency’s tools without your access, or that asset transfer would require additional fees. Agencies that build hostage situations into their engagements are agencies that fear competition on results. The good agencies make the off-boarding clean because their reputation lives on what clients say after the engagement ends.
Question 11: “What will you commit to over the first 12 months?”

Good answer: a specific commitment tied to your bottleneck. If your bottleneck is organic compounding, they commit to a target organic traffic growth, featured snippet count, and topical authority expansion. If your bottleneck is AI visibility, they commit to citation share growth in target queries. If your bottleneck is paid efficiency, they commit to CPL reduction and ROAS targets. The numbers feel specific and conservative rather than aspirational.
Bad answer: vague language about “compound growth” or “marketing leadership transformation” without a specific metric or number. Or wildly aspirational targets that the agency will not be held accountable for. The right number is uncomfortable to commit to. If the agency commits without flinching, they are either confident or careless. The follow-up question to ask is “what conditions would make you fail to hit that number” and watch how they answer.
Question 12: “Why should we not hire you?”

This is the question that breaks most agencies. It is also the most useful question on the list.
Good answer: a specific honest answer. “If your bottleneck is product-market fit, we cannot fix that.” “If you do not have a senior internal owner for this engagement, we will fail.” “If you need execution at sub-six-month timelines and you are still pre-attribution, an SEO engagement is not the right fit; you need a fractional CMO first.” The agency is willing to articulate the boundary conditions where they would not succeed.
Bad answer: silence, awkward laughter, or “we will figure out a way to make it work for any client.” That is not confidence. That is a sales script. The agencies that can name when they are not the right fit are the ones that succeed when they are. The willingness to lose a deal at signing is the same muscle that makes the engagement work after signing.
Also Read: How We Helped Lendingkart Through Google Ads
It is only fair to run our own answers through this list, because the questions are useless if the people who wrote them cannot pass them.
Capability questions: case studies are public at upgrowth.in/case-study/, with named numbers, named clients, named timeframes. Fi.Money grew 200,000+ monthly clicks in 9 months. Lendingkart hit 5.7x lead volume with 30% lower CPL. Scripbox crossed 198,000 in traffic within 2 months. Vance hit 70% organic traffic from target geos in 3 months. Delicut Dubai went from 40,000 AED to over 2 million AED monthly sales. The named operators on each engagement are senior and have been with us for years. The diagnostic process runs through the seven-bottleneck framework before any execution work. The SEO versus GEO position is integrated rather than parallel because that is what the 2026 buyer requires.
Honesty questions: our engagement win rate sits in the 60-to-75% range depending on how you define success. We have walked away from prospects whose attribution was so broken that any agency engagement would have failed. Our reporting is weekly with metrics tied to the kickoff diagnostic. Our attribution methodology accounts for AI citation referrals, organic-assisted paid, and pipeline contribution rather than just last-click.
Fit questions: our escalation path runs from strategist to senior leadership to founder. We push back when we think clients are wrong, and we have ended engagements when the disagreement was structural. Our off-boarding includes documentation, asset transfer, and a strategic recommendation that survives the engagement. Our 12-month commitments are specific to the diagnosed bottleneck and conservative enough to be defensible. We tell prospects when we are not the right fit, and Grove (our diagnostic at upgrowth.in/grove) does this automatically when prospects are pre-revenue, sub-50K marketing budget, or looking for execution at freelancer pricing.
The honest answer to “why should we not hire upGrowth” is also worth stating. If you are pre-revenue or pre-product-market-fit, we are the wrong partner. Our work is built for funded, post-PMF companies that need to scale a working motion. If you have a strong existing in-house team and need pure execution, a specialist on retainer will cost less than our agency engagement and may produce comparable results. If your category does not yet have AI search demand, GEO work is premature and you can wait six months. We say all of this in the first conversation rather than after the contract is signed.
Also Read: SEO Agency vs GEO Agency vs In-House: How to Decide in 2026
Also Read: AI Growth Strategist vs Marketing Chatbot: The Real Difference
Q: What is the most important question to ask a growth agency before hiring them?
A: “Why should we not hire you?” Most agencies cannot answer this. The ones that can name the conditions where they would not succeed are the ones that actually succeed when conditions match. The willingness to disqualify themselves is the same muscle that makes the engagement work. We treat this as the single most important filter at upGrowth, and Grove (our diagnostic at upgrowth.in/grove) runs this filter on prospects automatically by walking them through stage, team setup, and bottleneck before recommending an engagement.
Q: How do I know if a growth agency case study is credible?
A: Three signals. The case study names the specific client (not “a Series B fintech”). The case study quotes specific metrics with specific timeframes (200,000 clicks in 9 months, 5.7x lead volume, 30% CPL reduction). The case study acknowledges what was hard, what almost did not work, or what the team did differently from a previous agency. Anonymous case studies, vague growth percentages, and stories that read like marketing copy are warning signs. Real case studies sound like operators talking, not marketers writing.
Q: What is a normal win rate for a growth agency?
A: 60% to 80% of engagements meeting their primary KPI is a credible range. Agencies claiming above 90% are usually quietly dropping failures, redefining success, or operating at low enough scope that any outcome counts as a win. Agencies below 50% have a process problem, a hiring problem, or a fit problem. The right agencies are honest about the failures and explain what kills engagements (leadership change, budget cut, lack of internal owner on the client side).
Q: Should I hire an agency that does both SEO and GEO, or specialists in each?
A: Integrated, in most cases. The two disciplines share the same content engineering surface and increasingly the same content engineers. Splitting them across two agencies creates coordination overhead and conflicting recommendations on what to publish, how to structure schema, and which queries to target. The integrated path produces better results faster. The exception is at very high spend levels (multiple crore per year on each discipline) where two specialist agencies with a fractional CMO coordinating between them can outperform a single integrated agency. For most companies, integrated is the right call.
Q: What red flags should I watch for in a growth agency pitch?
A: Five common ones. Senior people on the pitch who disappear after signing. Case studies that quote percentages without naming the metric or baseline. Reports built around agency deliverables rather than client outcomes. Inability to articulate when they would not be the right fit. Aspirational 12-month commitments without conservative anchor numbers. Any one of these is a yellow flag. Two together is a red flag. Three is a no.
Q: How long should the contract be with a growth agency?
A: 6 to 12 months for most engagements. Below 6 months, the agency cannot do compounding work; they default to short-term tactical execution. Above 12 months, you lose the option value of testing whether the engagement is working. The right structure is a 12-month engagement with quarterly business reviews and an explicit off-ramp at the 6-month mark if the agreed-upon KPIs are not tracking. The off-boarding terms should be in the contract from day one. Agencies that resist these terms are signaling they expect to perform poorly enough that you would want to leave early.
Pull your current shortlist of growth agencies. Run all twelve questions on each one in the next pitch call. Score the answers as good (clean and specific), partial (acceptable but lacking specificity), or weak (rehearsed or evasive). The agencies with mostly good answers stay on the list. The agencies with three or more weak answers come off it. The exercise takes 30 minutes per agency and saves 6 to 12 months of misaligned engagement.
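The scoring exercise above reduces to a mechanical filter. A minimal sketch, assuming the rubric in this article (three or more weak answers removes an agency; the agency names and scores below are hypothetical):

```python
# Score each agency's answers to the twelve questions as
# "good", "partial", or "weak", then apply the shortlist rule
# from this article: three or more weak answers is a no.

def shortlist(agencies: dict[str, list[str]]) -> list[str]:
    """Keep agencies with fewer than three 'weak' answers."""
    kept = []
    for name, answers in agencies.items():
        weak = sum(1 for a in answers if a == "weak")
        if weak < 3:
            kept.append(name)
    return kept

# Hypothetical scoring of two agencies across the twelve questions.
scores = {
    "Agency A": ["good"] * 9 + ["partial"] * 2 + ["weak"],
    "Agency B": ["good"] * 4 + ["partial"] * 4 + ["weak"] * 4,
}

print(shortlist(scores))  # Agency B has four weak answers and drops off
```

A spreadsheet does the same job; the point is that the rule is fixed before the pitch call, so a charming answer cannot move the threshold.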
If you want to run the diagnostic conversationally before the agency pitches start, Grove at upgrowth.in/grove walks you through the framework in 5 to 7 minutes. The output tells you which agency structure your situation actually needs (full SEO, GEO-heavy, fractional plus specialist, or in-house). With that diagnosis in hand, the twelve questions become much sharper because you know what you are filtering for.
About the Author: I’m Amol Ghemud, Chief Growth Officer at upGrowth Digital. We help SaaS, fintech, and D2C companies shift from traditional SEO to Generative Engine Optimization. This shift has generated 5.7x lead volume increases for clients like Lendingkart and 287% revenue growth for Vance.