How to Audit Your GTM Stack (and Actually Find the Waste)
Most GTM teams run between twelve and eighteen paid tools at once. Ask the budget owner what a third of them do on a Tuesday, and you get a shrug. That is not a technology problem. It is an accountability problem, and it gets worse every renewal because nobody wants to be the person who admits the org bought the wrong thing. At $1M-$50M ARR, the average mid-market GTM org is bleeding about $3k-$15k/month on overlap, shelfware, and contracts that made sense eighteen months ago. A real audit surfaces most of that in under a day. Teams skip it because it sounds slow, political, and thankless. It does not have to be: you are chasing facts finance already cares about, not running a philosophy workshop. This guide is the sequence we run when an operator needs ammunition before the next CFO conversation: what to measure, where the bodies are buried, and how to turn findings into renewal decisions instead of Slack debates.
What a GTM stack audit actually is
A GTM stack audit is a structured judgment on four realities, not a prettier spreadsheet. First, what you are paying for: executed contracts, billed seats, renewal dates, true-up clauses, and anything that auto-renews without approval. Second, what people actually use: active users in the last thirty days, login rates, rough "last updated" signals in CRM or MAP, and which workflows still run through the tool. Third, tool overlap: two or more vendors paid to do the same job (sequencing, enrichment, lifecycle automation, comms). Fourth, stage fit: enterprise SKUs on a twenty-person motion, or SMB tools choking now that you run multi-threaded six-figure deals. Most teams stop after dumping vendor names into a tab called "stack inventory." That is bookkeeping, not an audit. An audit ties spend to usage, usage to jobs-to-be-done, and jobs-to-be-done to a single owner. When you cannot draw those lines, you do not have a stack map; you have a rumor mill with invoices attached.
The five things you're actually looking for
You are not hunting "best-in-class" logos. You are hunting duplicated spend, quiet shelfware, and contracts that outlived the motion they were bought for. The next sections are the five patterns that show up in almost every audit we see between $5M and $40M ARR.
Tool overlap (the biggest leak)
Overlap is the pattern where two tools execute the same workflow for two different parts of the org. Classic pairs: Salesforce plus Outreach when both teams run sequences and tasks out of two systems. HubSpot Marketing Hub sitting next to Marketo when both own lifecycle automation. ZoomInfo plus Apollo plus Clearbit when three departments each bought a contact graph. Slack plus Microsoft Teams plus Notion plus Confluence when nobody agreed where decisions live. Operator tell: most teams running HubSpot and Marketo do not feel the pain in the UI week-to-week; they feel it in the renewal packet when both platforms bill for nurture, scoring, and routing. The overlap is obvious on paper before it is obvious in the product tour. If you want the political story for why this persists, read why stacks slide into chaos: it is rarely malice; it is fragmented buying with no single person looking at the combined tab.
Unused seats and shelfware
Seat math is blunt and bruising. Typical failure mode: sales leadership buys fifty Outreach or Salesloft seats; eighteen reps touched the tool last month; finance renews fifty because no one forwarded the usage export. Another: a visibility tool rolled out company-wide two years ago; six people log in today; the renewal still says "enterprise." Pull active-user counts from each admin console (most surface last-login or thirty-day actives in two clicks). Compare to contracted seats. Anything under about seventy percent utilization on a per-seat SKU deserves a red flag before you negotiate. The embarrassment is the point: executives assume teams max tools out; RevOps knows they do not. This is the cleanest early win because it does not require ripping workflows apart on day one.
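The seat math above fits in a few lines. A minimal sketch, with tool names and counts that are purely illustrative (substitute your own admin-console exports):

```python
# Flag per-seat tools under ~70% utilization before the renewal conversation.
# All numbers below are made up for illustration.
seats = {
    "Outreach": {"entitled": 50, "active_30d": 18},
    "Gong":     {"entitled": 40, "active_30d": 35},
    "ZoomInfo": {"entitled": 25, "active_30d": 9},
}

THRESHOLD = 0.70  # below this, it is a negotiation lever

flags = {}
for tool, s in seats.items():
    utilization = s["active_30d"] / s["entitled"]
    flags[tool] = utilization < THRESHOLD
    status = "FLAG" if flags[tool] else "ok"
    print(f"{tool}: {utilization:.0%} of entitled seats active -> {status}")
```

The threshold is not sacred; the point is to compute the ratio the same way for every tool so the renewal packet compares like with like.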
Pricing model mismatch
Three recurring hits: per-seat tools priced like you still have ninety sellers when headcount is fifty-five. Usage-based SKUs (events, credits, API calls) where nobody set guardrails and overage shows up as a "surprise" line eight months in. Multi-year commits signed at peak hiring that now overshoot reality by thirty percent. Spotting it is boring arithmetic. Take billed amount, divide by active users, compare to list bands you can sanity-check in public pricing. If the effective ACV per active user is double what a peer team pays for the same motion, you either have a discount story to tell, or a mismatch story. Same for data vendors: if credits burn at twenty percent but the contract assumes eighty, you are financing someone's outdated model of pipeline.
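That "boring arithmetic" is worth writing down once so everyone computes it the same way. A sketch with hypothetical figures (the bill, user count, and list band are assumptions, not benchmarks):

```python
# Effective monthly cost per active user vs. a public list-price band.
# All figures are illustrative.
annual_bill = 90_000        # net billed per year, after discounts and true-ups
active_users = 12           # thirty-day actives from the admin console
list_band = (150, 250)      # sanity-check band, $/user/month from public pricing

monthly = annual_bill / 12
effective_per_user = monthly / active_users
print(f"${effective_per_user:,.0f} per active user per month")

# Double the top of the public band = a mismatch story, not a discount story.
if effective_per_user > 2 * list_band[1]:
    print("mismatch: paying more than double the top of the public band")
```

Run the same division for credit-based vendors (credits billed vs. credits burned) and the twenty-percent-burn-on-an-eighty-percent-contract problem falls out of the same two lines.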
Wrong-stage tools
The stack that got you to $8M ARR often cracks at $25M. The reverse happens too: a fifteen-person team paying Salesforce Enterprise with four admins worth of workflows is usually overbought. Not because Salesforce is "bad," but because the operating cost (implementation, governance, time-to-change) dwarfs the upside until you have real segment complexity. HubSpot routinely carries teams toward roughly $30M-$50M ARR before governance and object model pressure push serious Salesforce evaluations. That band is not gospel; it is a directional warning from hundreds of renewals. If your audit shows nine months of workarounds, phantom fields, and CSV bridges, you are past the logo debate; you are in a stage-fit problem.
AI-native replacement opportunities
Some categories already have fast, cheap challengers that assume modern outbound or enrichment flows. Sequencing skews toward tools like Smartlead or Instantly for high-volume mail; orchestration-heavy enrichment often runs through Clay-style tables before it touches a legacy database vendor; call-centric teams trial Grain or Fellow-class helpers for notes instead of bolting another "legacy recorder" on Zoom. You do not have to standardize on any one name here. You do have to flag categories where the incumbent was bought three budgeting cycles ago and the workflow today is patched with spreadsheets and Zapier. For how operators are thinking about that shift (without the hype words), read how AI changes sales operations. If your audit skips this lane, you will over-credit "stable" tools that quietly tax headcount.
How to actually run the audit (step by step)
1. Pull every GTM tool into one list. Start with finance AP or your procure-to-pay tool (Rippling, Airbase, Spendesk, whatever actually records spend). Cross-check SSO (Okta, Google Workspace SAML). Expect three to five invoices nobody can attribute; flag them early instead of pretending they are "IT."
2. Map each tool to one primary job. Write plain language: "Salesforce: CRM system of record." "Outreach: rep sequencing." "HubSpot: marketing automation." When two rows describe the same job, you have a candidate overlap pair before you debate features.
3. Pull usage data per tool. Export thirty-day active users, login rates, and any shipped reports the vendor emails you. Quality is uneven. I have seen perfect Workday tiles and garbage from niche SaaS. Even directional counts beat guessing. If you want a crisp rubric for benchmark-backed audits before you brute-force every admin UI, skim best GTM stack audit tools; then still download the CSVs.
4. Record true cash cost. Not list price: net bill after discounts, plus true-ups, plus support SKUs people forgot. Annualize to monthly so you can compare vendors on the same basis. If procurement will not share net terms, use the last three invoices and average.
5. Cluster overlap pairs. Group tools that share a job from step two. CRM+SEP, MAP+MAP, data+data, internal comms stacks. Rank pairs by combined spend; that is your consolidation shortlist.
6. Flag shelfware. Divide active users by entitled seats. Sub-seventy percent on per-seat SaaS is a negotiation lever, not a moral judgment.
7. Decide cut, consolidate, or keep-with-terms. The output is a one-pager per tool: owner, cost, utilization, overlap, renewal date, recommendation. Anything without an owner does not survive the next quarter.
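Steps two, five, and six reduce to a small grouping exercise once the list exists. A sketch of the mechanics, with hypothetical tools, jobs, and dollar figures standing in for your real inventory:

```python
from collections import defaultdict

# One row per tool: primary job (step 2), monthly net cost (step 4),
# entitled seats and thirty-day actives (step 3). Figures are illustrative.
stack = [
    {"tool": "HubSpot",    "job": "marketing automation", "cost": 3200, "seats": 20, "active": 14},
    {"tool": "Marketo",    "job": "marketing automation", "cost": 4100, "seats": 15, "active": 6},
    {"tool": "Outreach",   "job": "rep sequencing",       "cost": 5500, "seats": 50, "active": 18},
    {"tool": "Salesforce", "job": "CRM system of record", "cost": 7800, "seats": 60, "active": 52},
]

# Step 5: cluster tools that share a job, rank clusters by combined spend.
by_job = defaultdict(list)
for row in stack:
    by_job[row["job"]].append(row)

overlaps = {job: rows for job, rows in by_job.items() if len(rows) > 1}
shortlist = sorted(overlaps.items(), key=lambda kv: -sum(r["cost"] for r in kv[1]))

for job, rows in shortlist:
    names = " + ".join(r["tool"] for r in rows)
    print(f"overlap: {job}: {names}, ${sum(r['cost'] for r in rows):,}/month combined")

# Step 6: shelfware flags from the same rows.
for row in stack:
    if row["active"] / row["seats"] < 0.70:
        print(f"shelfware flag: {row['tool']} ({row['active']}/{row['seats']} seats active)")
```

The job strings are the whole trick: if step two was done in plain language, duplicates collide on their own and the consolidation shortlist writes itself.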
What to do with the findings (the political part)
Collecting the data is the easy afternoon. Spending political capital is the hard quarter. Frame every conversation as budget and risk, not "tool religion." CFOs follow sentences like "we pay for the same lifecycle workflow twice" or "we renewed fifty seats when eighteen people logged in." They tune out screenshots. Bundle cuts to renewals when you can. Mid-contract kills burn goodwill and often cost more in break fees than they save. At renewal, not renewing a shelfware SKU is just a line-item negotiation. Give each functional leader one slide: what you propose, what you save, what work shifts for thirty days. If nobody owns the downside, the tool will boomerang. The goal is not a perfect stack on paper. The goal is stopping unnoticed overlap so the next audit gets easier instead of harder.
Red flags that tell you the audit is overdue
- Three internal tools end in "Ops" and nobody can crisply explain what each does on a pipeline review.
- Marketing bought a MAP, sales bought a CRM add-on that does MAP-like nurtures, and neither team was in the same room.
- A contract auto-renewed six months ago and the first anyone heard was an invoice forwarded with "FYI."
- You are still paying seats for people who left; HR offboarding never reached IT.
- Finance says "I know we pay for something starting with Z" and nobody can find the DocuSign.
What this looks like in practice (the StackSwap moment)
Doing this by hand for one GTM org usually takes four to eight hours of pulls, plus another few hours of arguments. StackSwap runs the same overlap and spend framing from a pasted tool list in under a minute: where categories stack, what looks redundant, and what modeled waste band you might be sitting in before you open a fourth spreadsheet tab. It does not cancel contracts for you; someone still has to look a VP in the eye. It does give you the picture most teams never bother to assemble until a board deck forces it. Most groups find something on the order of $3k-$10k/month they can explain on a single slide once the map exists.