There's a version of benchmarking that dealers do because their 20 Group facilitator asks them to. They fill out the composite, review the report, note that they're 12 points behind on service absorption, and move on. Nothing changes.
Then there's peer benchmarking, a fundamentally different model in which the comparison data is only the starting point; the real work is a candid, structured conversation about how to close the gap. This is the version that actually transforms performance. Here's why it works, how it differs from tool-based benchmarking, and what the research says about outcomes.
Free tool: Use the Peer Pod ROI Calculator to estimate the dollar impact of closing your dealership's performance gaps through structured peer learning. Also see the 2026 Automotive Dealership Leadership Report for the research behind peer program performance outcomes.
The Fundamental Limitation of Solo Benchmarking
A DMS report can tell you that your used vehicle turn is 44 days. A manufacturer composite can tell you the network average is 32 days. What neither can tell you is:
- Whether that 32-day average includes stores with wholesale-heavy strategies that inflate the number
- Which specific operational changes produced the improvement at the stores ahead of you
- Whether a comparable store in a comparable market solved the same problem you're facing right now
- What it felt like to make that change — the resistance, the timeline, the unexpected consequences
That knowledge only travels through relationships. And it only gets shared under conditions of trust and confidentiality — which is precisely what structured peer benchmarking programs are built to create.
Peer Benchmarking vs. Tool-Based Benchmarking
| Tool-Based Benchmarking | Peer Benchmarking |
| --- | --- |
| Shows where you stand | Explains why you stand there |
| Historical / lagging data | Real-time operational insight |
| Averaged peer groups | Curated, comparable peers |
| No context for the gap | Context behind the numbers |
| No path to improvement | Specific strategies to close gaps |
| Passive — you interpret alone | Active — peers challenge you |
The best operators use both. Tools provide visibility; peers provide insight. Together, they produce action.
The Research Case for Peer Learning in Performance Improvement
Across industries, organizations that participate in structured peer learning programs consistently outperform those that rely on self-directed improvement. The mechanism is well-documented: peer accountability raises follow-through rates, social comparison accelerates adoption of new practices, and confidential data-sharing surfaces operational benchmarks that surveys and manufacturer reports routinely miss.
In the automotive retail context specifically, dealers who participate actively in 20 Groups and structured peer programs consistently rank in the top quartile of their market segments — not because the programs select for high performers, but because the sustained peer comparison and accountability loop drives continuous improvement.
How Peer Benchmarking Works in Practice
The mechanics of effective peer benchmarking are straightforward but require deliberate design to function well. Here's what a high-functioning peer benchmarking session looks like:
Pre-Session: Data Submission
Participants submit standardized operational data — typically gross per unit (new/used), F&I income per vehicle, service absorption, CSI scores, closing ratio, and key fixed-ops metrics. A facilitator aggregates this into a composite with individual store identifiers visible only to each store's own representative.
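The facilitator's aggregation step can be sketched in a few lines of code. This is an illustrative sketch only; the store names, metric names, and figures below are hypothetical, not drawn from any actual program:

```python
import statistics

# Hypothetical raw submissions, keyed by store (the facilitator's private view).
submissions = {
    "Store A": {"new_gpu": 1900, "fi_per_vehicle": 1450},
    "Store B": {"new_gpu": 2300, "fi_per_vehicle": 1700},
    "Store C": {"new_gpu": 2050, "fi_per_vehicle": 1600},
}

def composite_for(viewer, submissions):
    """Build the report one store sees: the group median for each metric
    plus that store's own number. Other stores stay anonymous."""
    metrics = next(iter(submissions.values())).keys()
    report = {}
    for m in metrics:
        values = [s[m] for s in submissions.values()]
        report[m] = {
            "group_median": statistics.median(values),
            "you": submissions[viewer][m],
        }
    return report

print(composite_for("Store A", submissions))
```

The design choice worth noting: each participant receives a view keyed to their own identity, so no store ever sees another store's raw line.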
Opening Round: Numbers Without Context
The facilitator presents the composite. Every participant sees where they stand relative to the group. This deliberate exposure of gaps — in a room of peers, not a room of competitors — is what makes the conversation that follows substantive. Nobody is protecting their ego from an OEM representative or a vendor. They're among colleagues who face the same challenges.
Core Discussion: Gap Analysis
The facilitator identifies the two or three largest variance points across the group — areas where the spread between top and bottom performers is greatest. These become the focus. The top performers explain what they're doing differently. The middle and bottom performers ask questions and share their constraints. The group problem-solves together.
This is where peer benchmarking diverges most sharply from tool-based analysis. The "why" behind the gap — the specific operational change, the management decision, the vendor relationship, the hiring approach — only surfaces in conversation. It can't be read off a dashboard.
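The variance scan itself, though, is mechanical. As an illustrative sketch (store names and numbers are hypothetical), a facilitator could rank metrics by relative spread so dollar figures and percentages land on a comparable scale:

```python
# Hypothetical composite: each store's submitted metrics (illustrative numbers).
composite = {
    "Store A": {"used_gpu": 2400, "service_absorption": 0.58, "closing_ratio": 0.11},
    "Store B": {"used_gpu": 1750, "service_absorption": 0.91, "closing_ratio": 0.09},
    "Store C": {"used_gpu": 2100, "service_absorption": 0.74, "closing_ratio": 0.14},
}

def largest_variance_points(composite, top_n=2):
    """Rank metrics by spread (max - min) normalized by the group mean,
    so metrics in different units can be compared on one scale."""
    metrics = next(iter(composite.values())).keys()
    spreads = {}
    for m in metrics:
        values = [store[m] for store in composite.values()]
        mean = sum(values) / len(values)
        spreads[m] = (max(values) - min(values)) / mean
    return sorted(spreads, key=spreads.get, reverse=True)[:top_n]

print(largest_variance_points(composite))
```

What the dashboard can't do is the next step: explaining why Store B's absorption is 91% while Store A's is 58%. That's the conversation.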
Accountability Commitments
Each participant commits to at least one specific action item before the next session. These are documented. At the following session, the first agenda item is reviewing whether those commitments were kept. The accountability loop is structural, not aspirational.
The key design principle: Peer benchmarking groups should be non-competing stores in comparable markets. The condition for honest data-sharing is that participants have nothing to lose by being transparent — which requires that their geographic markets don't overlap. This is non-negotiable for high-quality data and high-quality conversation.
The Metrics That Move in Peer Benchmarking Programs
Not all dealership metrics respond equally to peer benchmarking. The metrics that show the fastest improvement tend to be those driven by management practices and processes — not capital investment or market conditions. Here's what typically moves fastest:
Service Absorption
Service absorption is almost entirely a function of process, pricing strategy, and capacity management. Stores in peer programs that are lagging on absorption can see meaningful improvement within 90–120 days of adopting practices from peers who've already solved the problem. The solutions are rarely novel — they're just not obvious without a peer who's already found them.
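For readers newer to the metric, one common industry definition expresses service absorption as fixed-operations gross profit divided by total dealership fixed overhead (the figures below are illustrative, not benchmarks):

```python
def service_absorption(parts_gross, service_gross, body_gross, fixed_overhead):
    """Common industry definition: fixed-ops gross profit as a share of total
    fixed overhead. At 100%+, fixed ops alone covers the store's overhead."""
    return (parts_gross + service_gross + body_gross) / fixed_overhead

# Illustrative monthly figures (hypothetical store):
pct = service_absorption(180_000, 220_000, 40_000, 550_000)
print(f"{pct:.0%}")
```

Because every input is a management-controlled number, the metric responds quickly once a lagging store adopts a peer's process changes.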
Used Vehicle Gross Per Unit
Recon process, sourcing strategy, and pricing discipline are all peer-learnable. Stores that struggle with used gross often have one or two specific process breakdowns — and peers who've solved them know exactly what those breakdowns look like.
F&I Income Per Vehicle Retailed
F&I is highly sensitive to menu presentation, product selection, and deal structure. Peer exposure to high-performing F&I processes accelerates improvement faster than any single training program — because it provides real-world context and immediate validation.
Closing Ratio
Closing ratios respond quickly to process changes in how dealerships handle internet leads, follow-up cadences, and showroom presentation. Peers who've moved the needle share specific scripts, tools, and structures that can be adapted and implemented within a few weeks.
Building Your Own Peer Benchmarking System
You don't need to wait for a formal program to start capturing some of the benefits of peer benchmarking. Here's a minimal viable approach:
- Identify 5–10 non-competing peers: dealers in different geographic markets with similar volume and brand mix. These can be people you've met at 20 Group events, OEM conferences, or through dealer associations.
- Agree on a small set of shared metrics: start with five — GPU new, GPU used, F&I income per vehicle, service absorption, and one leading indicator (closing ratio or internet lead response time). More metrics reduce participation; start simple.
- Establish a quarterly meeting cadence: virtual meetings work for data review; in-person meetings work better for strategic conversation. A quarterly in-person session with monthly virtual check-ins is a sustainable cadence for most operators.
- Use a neutral facilitator: self-facilitated peer groups tend to drift toward conversation and away from accountability. A facilitator keeps the group focused on data, gap analysis, and commitments — which is where the value lives.
- Formalize the accountability structure: document action items from each session and begin the next session by reviewing them. This single structural element doubles follow-through rates compared to informal peer conversations.
Why Confidentiality Is the Foundation
The quality of data in a peer benchmarking program is directly proportional to the confidentiality of the structure. Dealers share real numbers — real gross, real expenses, real operational details — only when they're certain that information won't reach competitors, vendors, or manufacturers.
This is why the best peer benchmarking programs have strict confidentiality agreements, geographic non-overlap requirements, and clear policies about what can and cannot be discussed outside the group. When these structures are in place, the quality of sharing — and therefore the quality of insight — is dramatically higher than any data you can get from a vendor composite or manufacturer report.
LeaderSpin's peer pods are designed around this principle. Every group operates under a confidentiality agreement. Members are screened to prevent geographic overlap. The facilitator's role includes protecting the integrity of the confidential sharing environment. The result is a level of operational transparency that's simply not available anywhere else.
The Compounding Effect Over Time
One of the underappreciated aspects of peer benchmarking is how the value compounds. In the first session, you're comparing data. By the sixth session, you have a baseline for each peer, you understand their market conditions, and you can interpret their numbers in context. By the twelfth session, you have a two-year view of what worked, what didn't, and which strategies survived contact with reality.
This longitudinal context is the asset that tool-based benchmarking can never replicate. No dashboard captures the full arc of a management decision — the initial implementation struggles, the adjustments, the eventual results. Peers who've watched each other make decisions over multiple years carry that context natively.
The dealers who report the highest long-term ROI from peer programs are almost always the ones who've participated for three or more years. The relationship depth and shared history are the asset — not any single session's insights.
Getting Started
If you're not currently in a structured peer benchmarking program, the question isn't whether it would improve your performance. The research and the track record of 20 Groups are clear on that. The question is which program gives you the right peer group, the right structure, and the right facilitator to maximize the value.
LeaderSpin's peer pods are built specifically for automotive dealership leaders — single-point and small-group operators, GMs and principals — who want the benefits of structured peer benchmarking with the flexibility and confidentiality that more traditional programs don't always offer. Groups are limited to 8–12 members to ensure genuine peer conversation, not lecture-style facilitation.