
    🎯 The Ultimate Guide to Lead & Account Scoring

    6/4/2025
    A comprehensive, practical template to help build and improve productive lead and account scoring models and functionality.

    🤔 But Sara…why should I care? Isn’t scoring so 2012?

    Simple: if you get a lot of inbound interest, but you’re trying to keep a lean sales team — you’re going to have to do 2 things:
    • Automate as much as you can, in an effective way
    • Prioritize leads that are ICP VIPs vs. fringe vs. not relevant at all
    Why? Because if you don’t do this, your MQLs will quickly pile up untouched. And untouched MQLs are the death knell of marketing budget and, eventually, marketing itself. Scoring has matured with machine learning and AI, but yes — some companies are still adopting basic scoring models, even today. 🫠 (Which is okay! No shade; it just shows how different industries mature at different rates.)

    🧭 Dashboard Overview

    A scoring dashboard helps you centralize your key metrics for visibility and action. It’s your mission control for identifying trends, spotting anomalies, and making data-driven decisions. Here are some common examples:
    • Lead Score Distribution: Visualize lead scores across the funnel (MQLs, SQLs, SALs, etc.)
    • Account Score Distribution: Monitor the health of your account scoring model (Tiers, ABM targets, ICP match)
    • For PLG: In-Product Activity: Monitor product activity trends, like adding new users, adopting new functionality, hitting usage limits, etc. This is an intent signal that is often forgotten!
    • High-Intent Signals This Week: Dynamic list of leads/accounts showing key intent signals (demo requests, pricing views, job changes, company hiring, etc.)
    • Action Needed: Leads/Accounts with inconsistent scores or flagged by sales as needing CS or exec Sales attention

    🏗️ Lead & Account Scoring Model Blueprint

    Lay the foundation for your scoring system by identifying what to score, why it matters, and how to quantify behavior and fit.

    🎯 Scoring Purpose

    Before you build, define the objective of your model (i.e., prioritizing sales efforts, filtering MQLs, guiding nurture streams).
    Some examples of different goals and approaches:
    1. Prioritizing Sales Efforts
    Goal: Help sales reps focus on leads/accounts most likely to convert right now.
    Model Focus:
    • High weighting on intent and recency.
    • Include behaviors like demo requests, pricing page visits, reply to outbound.
    • Use 3rd-party data (Vector, Bombora, G2) to detect active buying signals.
    • Shorter decay windows (e.g., 7–14 days) to maintain urgency.
    Fit Criteria: Still important, but less strict than in other models. Sometimes non-ICP leads with strong intent are worth outreach.
    Outputs: A ranked list of hot leads/accounts updated in real time or daily.
    Who uses it? SDRs, BDRs, AEs
    2. Filtering MQLs
    Goal: Automatically qualify marketing-sourced leads before handing to sales.
    Model Focus:
    • Balanced weighting between fit and engagement.
    • Emphasis on form fills, email clicks, content downloads…but only if the lead fits your ICP.
    • May include progressive profiling fields (job title, company size, tech stack).
    Fit Criteria: Must be strong. This model serves as a gatekeeper.
    Threshold Logic: Score > 65 + meets job title & industry criteria = MQL.
    Who uses it? Marketing automation platforms (e.g., Marketo, HubSpot)
    3. Guiding Nurture Streams (less common)
    Goal: Route leads to the right automated campaigns based on score/behavior.
    Model Focus:
    • Granular scoring logic across fit, engagement, and topic interest.
    • Includes content themes consumed (e.g., ABM vs. product-led content).
    • Uses low-to-mid intent signals to determine nurture stage.
    Fit Criteria: Used to segment messaging, not to exclude leads.
    Examples:
    • Low score + good fit = awareness stream
    • Medium score + product content = consideration stream
    Who uses it? Marketing ops, demand gen

    🧩 Typical Core Scoring Components

    Dimension | Lead Scoring | Account Scoring
    --- | --- | ---
    Fit | Title, Industry, Company Size, Location, Tech Stack | Industry, Revenue, Employee Count, Funding, Geography
    Engagement | Email Opens, Clicks, Form Fills, Content Views | # of Contacts Engaging, Web Visits, Event Attendance
    Intent | Demo Requests, Pricing Page Views, Contact Sales | Buying Signals (e.g., Product Activity, G2 Visits, Intent Data Providers)
    Negative | Unsubscribes, Bounces, Inactivity | Low Engagement, Competitor Alignment, Irrelevant Signals
    ✅ Use weighted scoring for each category
    ✅ Include decay logic for time sensitivity (base the decay on your typical sales cycle — if your typical cycle is 12 months, for example, you can safely phase out any activity older than 18 months)
    ✅ Build scoring models in layers — start simple and refine iteratively (see the sketch below)
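
    A minimal sketch of the "weighted categories plus decay" idea, in Python. The category weights, per-signal points, and half-life below are illustrative assumptions, not benchmarks; tune them against your own funnel data.

    ```python
    from datetime import datetime, timezone

    # Illustrative category weights and per-signal points (assumptions, not benchmarks)
    CATEGORY_WEIGHTS = {"fit": 0.35, "engagement": 0.25, "intent": 0.40}
    SIGNAL_POINTS = {
        "title_match": ("fit", 20),
        "industry_match": ("fit", 15),
        "email_click": ("engagement", 5),
        "form_fill": ("engagement", 10),
        "pricing_page_view": ("intent", 15),
        "demo_request": ("intent", 50),
    }

    def decay_factor(event_time: datetime, now: datetime, half_life_days: float = 90.0) -> float:
        """Halve a signal's value every `half_life_days` (tie this to your sales cycle length)."""
        age_days = (now - event_time).total_seconds() / 86400
        return 0.5 ** (age_days / half_life_days)

    def lead_score(events: list[dict], now: datetime | None = None) -> float:
        """events look like [{"signal": "demo_request", "timestamp": datetime(...)}, ...]."""
        now = now or datetime.now(timezone.utc)
        totals = {category: 0.0 for category in CATEGORY_WEIGHTS}
        for event in events:
            category, points = SIGNAL_POINTS.get(event["signal"], (None, 0))
            if category:
                totals[category] += points * decay_factor(event["timestamp"], now)
        return round(sum(CATEGORY_WEIGHTS[c] * totals[c] for c in totals), 1)
    ```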

    📡 Incorporating 3rd-Party Intent Data into Scoring

    Use data from external sources to understand what prospects are researching before they ever engage with you directly. 3rd-party intent data reveals research behavior happening elsewhere — think of it as an "outside-in" view of buyer intent.

    Top Providers Include:

    • 6sense: Topic and solution research tracking across B2B sites
    • Bombora: Surge data based on content consumption
    • Vector: Firm-level and person-level signals (e.g., competitor views, ad clicks)
    • G2 / TrustRadius: Review and comparison activity insights

    📡 Fields & Values for 3rd-Party Intent Scoring

    🔎 Common Fields Across Providers

    Field Name | Type | Example Values | Usage
    --- | --- | --- | ---
    intent_topic | Text | "account-based marketing", "CRM tools" | Match to buying stage or product category
    intent_intensity | Integer (1–100) | 75, 90 | Weight higher scores more
    intent_type | Text | "research", "comparison", "purchase intent" | Use for stage mapping
    intent_timestamp | DateTime | 2025-06-01T10:00:00Z | Use for decay logic
    source | Text | Bombora, G2, Vector | Track provider performance
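
    If you pull these fields from more than one provider, it helps to normalize them into a single shared shape before scoring. Here's a minimal sketch of what that record could look like; the field names follow the table above, but the structure itself is an assumption, not any provider's schema.

    ```python
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class IntentSignal:
        """One normalized 3rd-party intent record, regardless of provider."""
        intent_topic: str        # e.g., "account-based marketing"
        intent_intensity: int    # 1-100; weight higher values more
        intent_type: str         # "research", "comparison", "purchase intent"
        intent_timestamp: datetime  # used for decay logic
        source: str              # "Bombora", "G2", "Vector" (track provider performance)

    # Example: a Bombora surge mapped into the common shape
    signal = IntentSignal(
        intent_topic="revenue operations",
        intent_intensity=75,
        intent_type="research",
        intent_timestamp=datetime.fromisoformat("2025-06-01T10:00:00+00:00"),
        source="Bombora",
    )
    ```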

    🧠 Examples of Provider-Specific Fields

    🟣 Bombora

    Field Name | Type | Example Values | Notes
    --- | --- | --- | ---
    surge_score | Integer | 65 | Score over 60 usually indicates intent
    company_domain | Text | example.com | Match to account
    topic | Text | "revenue operations" | Use to align with product categories

    🔵 G2

    Field Name | Type | Example Values | Notes
    --- | --- | --- | ---
    page_view_type | Text | "Pricing Page", "Comparison" | Indicates buying stage
    product_compared | Text | Competitor name | Use for competitive scoring
    user_title | Text | "Director of Marketing" | Use for decision-maker weighting
    location | Text | "San Francisco, CA" | Helpful for geo-targeting

    🔧 How to Use These in Scoring

    • Assign base points per topic + intensity (e.g., +10 for CRM topic + intensity > 70)
    • Add bonus points if the source is high-converting (e.g., +5 if G2 pricing page)
    • Decay scores based on intent_timestamp (e.g., -50% after 14 days)
    • Use boolean fields (like person-level match) as triggers or overrides (a minimal sketch follows this list)
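
    Here's a rough sketch of how those four rules could combine into one function. The topic points, source bonus, and 14-day decay mirror the examples above; the exact values are placeholders you'd validate against closed-won data.

    ```python
    from datetime import datetime, timezone

    # Placeholder rule values mirroring the bullets above
    TOPIC_POINTS = {"crm tools": 10, "account-based marketing": 10}
    SOURCE_BONUS = {"G2": 5}  # bonus when the source historically converts well

    def intent_score(signal: dict, now: datetime | None = None) -> float:
        """signal uses the common fields: intent_topic, intent_intensity, intent_timestamp, source."""
        now = now or datetime.now(timezone.utc)
        score = 0.0

        # Base points per topic, only when intensity clears a threshold
        if signal["intent_intensity"] > 70:
            score += TOPIC_POINTS.get(signal["intent_topic"].lower(), 0)

        # Bonus points for high-converting sources
        score += SOURCE_BONUS.get(signal["source"], 0)

        # Decay: -50% once the signal is older than 14 days
        if (now - signal["intent_timestamp"]).days > 14:
            score *= 0.5

        # Boolean override: a person-level match acts as a floor/trigger
        if signal.get("person_level_match"):
            score = max(score, 15)

        return score
    ```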

    🛠️ Integration Strategy

    1. Align intent signals to funnel stages (awareness, consideration, decision)
    2. Assign point values based on relevance and conversion correlation
    3. Set up decay functions (e.g., intent signals older than 14 days lose weight)
    4. Blend with first-party data for richer insight (a minimal sketch follows this list)
    5. Don’t forget to review performance with sales every quarter
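
    Step 4 (blending with first-party data) can start as a simple weighted sum of the two scores. The 70/30 split below is only an assumption to show the shape of the logic, not a recommended ratio.

    ```python
    def blended_score(first_party_score: float, third_party_intent_score: float,
                      first_party_weight: float = 0.7) -> float:
        """Blend owned engagement/fit scoring with external intent into one composite score."""
        return round(
            first_party_weight * first_party_score
            + (1 - first_party_weight) * third_party_intent_score,
            1,
        )

    # Example: strong on-site engagement plus a moderate Bombora surge
    print(blended_score(first_party_score=62, third_party_intent_score=40))  # -> 55.4
    ```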

    🤖 AI in Scoring

    🧠 Predictive Scoring

    Predictive scoring uses machine learning to analyze historical CRM, marketing, and sales data to determine which leads and accounts are most likely to convert. It identifies patterns across closed-won opportunities, such as buyer journey paths, engagement velocity, firmographics, and even deal velocity.
    • Inputs: Historical win/loss data, marketing engagement, sales interactions, customer profiles.
    • Outputs: A predictive score that represents conversion likelihood — often with thresholds like 80+ = high propensity.
    • Benefits: Reduces reliance on guesswork and anecdotal sales feedback. Enables more data-backed prioritization.
    Common tools: MadKudu, Salesforce Einstein, HubSpot Predictive Lead Scoring.
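
    If you want to prototype predictive scoring before committing to a vendor, a minimal sketch with scikit-learn might look like the following. The CSV file and feature names are hypothetical; in practice you'd pull these from your CRM and MAP exports.

    ```python
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical export of historical leads with a closed-won label
    leads = pd.read_csv("historical_leads.csv")
    features = ["email_clicks", "pricing_page_views", "demo_requested",
                "employee_count", "icp_industry_match"]

    X_train, X_test, y_train, y_test = train_test_split(
        leads[features], leads["closed_won"], test_size=0.2, random_state=42
    )

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # Turn predicted probabilities into a 0-100 propensity score
    probabilities = model.predict_proba(X_test)[:, 1]
    print("Holdout AUC:", round(roc_auc_score(y_test, probabilities), 3))
    propensity_scores = (probabilities * 100).round()  # e.g., 80+ = high propensity
    ```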

    📈 Dynamic Models

    Dynamic models continuously adjust lead/account scores in real time based on new signals, like fresh engagement, intent spikes, or inactivity. Unlike static rules-based models, dynamic models “learn” as they go: think of a static model as a big if-then rule set, versus a model that keeps adjusting its own weights as new data arrives.
    • Think of these as machine learning-based systems that respond to changing buyer behaviors, content consumption patterns, or even seasonal trends.
    • Example: A lead score automatically increases after a visit to your pricing page and again when they download a whitepaper, without needing manual score recalibration.
    These models can adapt to individual buyer journeys instead of treating everyone the same.
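
    The vendor tools learn these adjustments automatically; hard-coding them as below is only a rough approximation of the behavior described above (the score moves as each new signal arrives, plus an inactivity penalty). Event names and point values are illustrative assumptions.

    ```python
    from datetime import datetime, timezone

    # Illustrative real-time adjustments per event type
    EVENT_DELTAS = {
        "pricing_page_view": +15,
        "whitepaper_download": +10,
        "email_unsubscribe": -25,
    }
    INACTIVITY_PENALTY_PER_WEEK = 5

    def update_score(current_score: float, event: dict,
                     last_activity: datetime, now: datetime | None = None) -> float:
        """Recompute a lead's score whenever a new event arrives, with no manual recalibration."""
        now = now or datetime.now(timezone.utc)

        # Penalize the quiet period since the last touch
        idle_weeks = max((now - last_activity).days // 7, 0)
        score = current_score - idle_weeks * INACTIVITY_PENALTY_PER_WEEK

        # Apply the new signal and keep the score inside a 0-100 band
        score += EVENT_DELTAS.get(event["type"], 0)
        return min(max(score, 0.0), 100.0)
    ```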

    🎯 Recommended Actions

    Here, AI doesn’t just score; it can also recommend next best actions based on behavior and conversion probability.
    • For Sales: AI can generate ranked lists of hot leads/accounts, with suggested outreach timing and messaging.
    • For Marketing: AI may trigger personalized nurture tracks, retargeting ads, or content offers based on interest.
    • Cross-Functional: These recommendations can feed into task automation in CRMs or sales engagement tools like Outreach or Salesloft.
    This empowers GTM teams to act faster and more precisely, and takes some load off management. Gong is an example vendor in this space.

    🔒 Other Important Considerations

    • Be careful about sales fatigue. If you send over a bunch of lemons, reps will get discouraged and start to worry about hitting quota vs. helping you. Set reasonable expectations, tell them what’s in it for them (being cited as a key contributor to the effort, getting more leads more quickly than other reps as part of a beta), and react quickly to feedback so as not to burn the bridge. Not all feedback is valid (sales can be a little fussy at times!), but if you look into it and you are sending a bunch of junk, that is a valid complaint that deserves quick rectification.
    • Validate any assumptions with A/B testing and closed-won insights. DO NOT rely only on vibes or ego contests! It doesn’t matter if marketing thinks a signal should be meaningful; it needs to be proven out in results.
    • Watch for data quality issues and model drift. If you leave a model alone for too long or if your sales teams are inserting biases into which MQLs they pick up vs. don’t, this could impact the accuracy of your scoring model. Also consider which types of campaigns you’ve been running lately — if you stop running campaigns of a certain type and they were a substantial part of your scoring model, you could see a massive dip in MQLs.
    • Keep models simple until proven effective — do not start with a super complex AI model! Your stakeholders will drown in the complexity.
    • Use score trends to refine nurture tracks — if a score isn’t going up through a nurture track, your nurture may need some content or timing work.
    • Align with sales on what makes a lead “sales-ready” to avoid sending leads that go nowhere and don’t sell through.
    • Make sure you create a playbook for each type of lead sent over — if you introduce Influencer Marketing campaigns but don’t give sales a playbook on what to do with those leads, they’ll likely just leave them to rot in favor of leads they already have pre-built outreach playbooks for.
    • Cross-check scoring with lifecycle stage logic. If too many non-ready records or accounts are becoming MQLs, your scoring needs to be adjusted.

    Example Scoring Table:

    Signal | Source | Points | AI Weight Adjustment
    --- | --- | --- | ---
    Pricing page visit | 1st-Party Web | +15 | +5 if correlated with closed-won deals
    G2 competitor comparison visit | 3rd-Party (G2) | +20 | +10 for enterprise accounts
    Bombora surge on "ABM tools" | 3rd-Party (Bombora) | +10 | +0 (baseline)
    Vector competitor interest | 3rd-Party (Vector) | +15 | +5 for strategic accounts
    Clicked ad but didn't convert, ICP | 1st-Party Ad Data (Vector) | +20 | +8 if title = decision maker
    Email click | 1st-Party Email | +5 | -2 if bounce rate >5%
    Demo request | 1st-Party Web | +50 | +0 (baseline)
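
    One way to make a table like this machine-readable is a small rules config your scoring job iterates over. The sketch below mirrors a few of the rows above; the condition functions and record field names (segment, closed_won_correlated, bounce_rate) are assumptions for illustration.

    ```python
    # A few rows from the table above expressed as rules; conditions inspect the lead/account record
    SCORING_RULES = [
        {"signal": "pricing_page_visit", "points": 15, "adjustment": 5,
         "condition": lambda record: record.get("closed_won_correlated", False)},
        {"signal": "g2_competitor_comparison", "points": 20, "adjustment": 10,
         "condition": lambda record: record.get("segment") == "enterprise"},
        {"signal": "email_click", "points": 5, "adjustment": -2,
         "condition": lambda record: record.get("bounce_rate", 0) > 0.05},
        {"signal": "demo_request", "points": 50, "adjustment": 0,
         "condition": lambda record: False},  # baseline, no adjustment
    ]

    def score_signal(signal_name: str, record: dict) -> int:
        """Return base points plus the AI weight adjustment when its condition holds."""
        for rule in SCORING_RULES:
            if rule["signal"] == signal_name:
                bonus = rule["adjustment"] if rule["condition"](record) else 0
                return rule["points"] + bonus
        return 0

    # Example: an enterprise account comparing you on G2
    print(score_signal("g2_competitor_comparison", {"segment": "enterprise"}))  # -> 30
    ```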

    📊 Score Mapping & Thresholds

    Define your numerical ranges for lead stages and account tiers so that scoring outputs are actionable and clearly understood across teams. Nothing is worse than a scoring model that sits by itself, disconnected from any action, and is therefore useless. 🫠 (A sketch of this mapping follows the list below.)
    • Lead score stages: Inquiry < 40, MQL 65+
    • Account tiers: Tier 1 (ICP + recent intent), Tier 2 (ICP only), Tier 3 (Not ICP)
    • Trigger logic: demo request = auto-SQL, pricing view + fit = MQL bump
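
    As a sketch, the stage and tier logic above might be encoded like this. The thresholds come from the bullets; the "Engaged" label for the 40–64 band is an assumption, since the bullets only name the ends of the range.

    ```python
    def lead_stage(score: float, demo_requested: bool = False) -> str:
        """Map a numeric lead score to a lifecycle stage using the thresholds above."""
        if demo_requested:
            return "SQL"      # trigger logic: demo request = auto-SQL
        if score >= 65:
            return "MQL"
        if score < 40:
            return "Inquiry"
        return "Engaged"      # assumed label for the 40-64 band

    def account_tier(is_icp: bool, has_recent_intent: bool) -> str:
        """Tier 1 = ICP + recent intent, Tier 2 = ICP only, Tier 3 = not ICP."""
        if is_icp and has_recent_intent:
            return "Tier 1"
        if is_icp:
            return "Tier 2"
        return "Tier 3"
    ```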

    🔍 Audit & Optimization Framework

    Maintain scoring effectiveness over time by conducting regular reviews and adjustments. As buying and marketing trends change, adjust your scoring to match.
    • Run monthly conversion correlation reports
    • Compare scoring tiers with closed-won outcomes
    • Collect qualitative feedback from SDRs/AEs
    • Adjust weights and triggers as needed
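
    A monthly conversion correlation report can start as a few lines of pandas, assuming an export with a score, a score tier, and a closed-won flag (the file and column names here are hypothetical):

    ```python
    import pandas as pd

    # Hypothetical CRM export: one row per lead with its score tier and outcome
    leads = pd.read_csv("scored_leads_export.csv")  # columns: lead_id, score, score_tier, closed_won

    report = (
        leads.groupby("score_tier")
        .agg(leads=("lead_id", "count"),
             avg_score=("score", "mean"),
             win_rate=("closed_won", "mean"))
        .sort_values("win_rate", ascending=False)
    )
    print(report)  # tiers whose win_rate doesn't track their avg_score need weight or trigger adjustments
    ```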

    🛠️ Tools & Integrations

    Build a scoring tech stack that unifies first-party data, external insights, and predictive analytics for full-funnel intelligence.
    • CRM: Salesforce, HubSpot
    • MAP: Marketo, Pardot, HubSpot
    • Intent: Bombora, 6sense, G2, Vector
    • AI: MadKudu, Salesforce Einstein
    • BI & Dashboards: Looker, Tableau, Power BI

    ✅ Action Items & Reminders

    Operationalize your scoring system with a recurring calendar of reviews and cross-functional collaboration. Setting these touchpoints early will help you avoid setting-and-forgetting your scoring models.
    • Weekly: Check for high-intent surges in Tier 1 accounts
    • Monthly: Review engagement decay logic and refresh content weights
    • Quarterly: Stakeholder model review and AI recalibration
    • Annually: Reset benchmarks, update ICP data, refresh scoring model

    Related Guides

    👥 Strategy & Technical Guide to ABM & CBM

    🌐 The Ultimate Guide to Forms & Landing Pages That Perform
