Definition
Red/Amber/Green (RAG) Scoring is a simple status classification method that uses three color-coded categories—Red, Amber (or Yellow), and Green—to communicate the health, risk level, or progress of an item such as a project, campaign, KPI, initiative, or operational process.
In marketing, RAG scoring is used to make performance and delivery status legible at a glance across channels, campaigns, creative production, experimentation, and MarTech operations—especially when leaders need fast visibility without reading a novel disguised as a dashboard.
How it relates to marketing
Marketing organizations use RAG scoring to standardize how they communicate status across:
- Campaign performance: pacing to goal, ROAS, conversion rate, CAC, pipeline contribution
- Delivery and operations: on-time launch readiness, creative production stages, QA pass/fail, dependency tracking
- Customer journey orchestration: journey health, audience eligibility, suppression logic, message frequency compliance
- Experimentation: test progress, sample size attainment, statistical confidence readiness
- MarTech and data: data quality checks, tag governance, integration uptime, consent enforcement
It is common in marketing dashboards, weekly business reviews (WBRs), project steering updates, and cross-functional standups where stakeholders need consistent semantics for “how bad is it?”
How to calculate a RAG score
RAG scoring is calculated by mapping one or more metrics or criteria to threshold ranges. The most common patterns are:
Single-metric thresholding
- Choose a KPI and define numeric thresholds for Green/Amber/Red.
Example (campaign pacing to monthly target):
- Green: ≥ 95% of expected-to-date target
- Amber: ≥ 85% and < 95%
- Red: < 85%
Formula example:
- Pacing % = (Actual to Date ÷ Expected to Date) × 100
- Then assign RAG based on the threshold bands.
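A minimal sketch of this pattern in Python, using the illustrative bands above (95% and 85% are examples, not standards):

```python
def pacing_rag(actual_to_date: float, expected_to_date: float) -> str:
    """Map campaign pacing to RAG using the illustrative bands above."""
    if expected_to_date <= 0:
        return "Gray"  # no meaningful target yet; don't guess Green
    pacing_pct = (actual_to_date / expected_to_date) * 100
    if pacing_pct >= 95:
        return "Green"
    if pacing_pct >= 85:
        return "Amber"
    return "Red"

# Example: $42,500 delivered against a $50,000 expected-to-date target
print(pacing_rag(42_500, 50_000))  # 85.0% -> "Amber"
```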
Multi-metric rule-based scoring
- Use multiple measures and define rules (e.g., “Red if any critical metric is Red” or “Amber if two metrics are Amber”).
Example (email program health):
- Deliverability, click-to-open rate, complaint rate
- Red if complaint rate exceeds threshold or deliverability drops below threshold
- Amber if engagement is down but deliverability is stable
- Green if all metrics are within range
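A hedged sketch of this rule-based pattern for the email example; the specific thresholds (0.1% complaint rate, 95% deliverability, 10% click-to-open) are illustrative assumptions, not benchmarks:

```python
def email_program_rag(deliverability_pct: float,
                      click_to_open_pct: float,
                      complaint_rate_pct: float) -> str:
    """Rule-based RAG: any critical breach is Red; soft engagement dips are Amber."""
    # Illustrative thresholds; tune these to your own program's baselines.
    if complaint_rate_pct > 0.1 or deliverability_pct < 95:
        return "Red"    # a critical metric breached its threshold
    if click_to_open_pct < 10:
        return "Amber"  # engagement is down, but deliverability is stable
    return "Green"      # all metrics within range

print(email_program_rag(98.5, 8.2, 0.03))  # -> "Amber"
```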
Weighted composite scoring
- Convert metrics to points, weight them, total a score, then map score bands to RAG.
Example:
- Score = (0.4 × Pacing Score) + (0.3 × ROAS Score) + (0.3 × Conversion Score)
- Map final score to Red/Amber/Green thresholds
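A sketch of the composite pattern, assuming each metric has already been normalized to a 0–100 sub-score; the weights and band cut-offs are illustrative:

```python
def composite_rag(pacing_score: float, roas_score: float,
                  conversion_score: float) -> tuple[float, str]:
    """Weight normalized 0-100 sub-scores, then map the total to RAG bands."""
    score = 0.4 * pacing_score + 0.3 * roas_score + 0.3 * conversion_score
    if score >= 80:
        status = "Green"
    elif score >= 60:
        status = "Amber"
    else:
        status = "Red"
    return score, status

print(composite_rag(90, 70, 65))  # -> (76.5, 'Amber')
```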
How to utilize RAG scoring
Common marketing use cases for RAG scoring include:
- Executive dashboards
  - Show top KPIs and initiatives with clear status
  - Quickly surface where leadership attention is needed
- Campaign health monitoring
  - Apply RAG to pacing, spend efficiency, and conversions
  - Trigger review workflows when Amber/Red appears (see the sketch after this list)
- Operational readiness
  - Track campaign launch readiness (creative approvals, QA, tagging, legal review)
  - Prevent “we launched anyway” incidents
- Experimentation governance
  - RAG for test readiness (hypothesis approved, instrumentation complete, sample size on track)
  - Improve experiment velocity without sacrificing rigor
- MarTech support and incident management
  - RAG for system uptime, integration latency, data pipeline failures
  - Standardize escalation and incident comms
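To make the review-workflow trigger concrete, here is a minimal sketch; the escalation actions and the `notify` helper are hypothetical stand-ins for whatever ticketing or chat integration you actually use:

```python
# Illustrative escalation map; owners, cadences, and actions are assumptions
# to be replaced by your own governance rules.
ESCALATION = {
    "Green": None,  # no action needed
    "Amber": "Add to weekly review; owner posts a mitigation note.",
    "Red": "Open an incident ticket; daily check-in until resolved.",
}

def notify(item: str, status: str) -> None:
    """Hypothetical stand-in for a Slack/Jira/email integration."""
    action = ESCALATION.get(status)
    if action:
        print(f"[{status}] {item}: {action}")

notify("Q3 Paid Social", "Red")
# [Red] Q3 Paid Social: Open an incident ticket; daily check-in until resolved.
```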
Compare to similar approaches
| Approach | What it communicates | Strengths | Limitations | When to use |
|---|---|---|---|---|
| RAG Scoring | Discrete status (3 levels) | Fast, simple, consistent | Can oversimplify; subjective thresholds | Exec views, portfolio health, operational status |
| Numeric KPI reporting | Exact performance values | Precise, trend-friendly | Harder to scan; overload risk | Analyst views, deep dives, optimization |
| Traffic-light with more levels (e.g., 5-band) | More granular status | Better nuance | More complexity; less intuitive | Large portfolios, mature governance |
| Risk registers (qualitative) | Risks, mitigations, likelihood/impact | Captures context and actions | Not instantly scannable | Program management, dependencies |
| Alerting/anomaly detection | Unexpected change signals | Detects issues quickly | Requires tooling/tuning | Real-time monitoring, high-volume programs |
Best practices
- Define thresholds with governance
  - Document the exact rules for each KPI (and who owns them).
  - Avoid “Green means I feel good today.”
- Separate performance RAG from delivery RAG
  - A campaign can be Green on execution (launched) and Red on outcomes (underperforming). Don’t blend them unless you want confusion.
- Use consistent directionality
  - Ensure everyone knows whether “higher is better” or “lower is better” for each metric (e.g., CAC: lower is better).
- Include trend context
  - Pair RAG with a short trend indicator (up/down/flat) or delta so Amber doesn’t become permanent wallpaper.
- Define escalation actions
  - Predefine what happens for Amber and Red (review cadence, owner, mitigation plan, decision rights).
- Avoid threshold churn
  - Changing thresholds every week makes RAG meaningless. Revisit on a set cadence (e.g., quarterly) or when strategy changes.
- Watch for aggregation traps
  - Portfolio rollups should have clear logic (e.g., “Red if any Tier-1 KPI is Red”); a rollup sketch follows this list.
  - Don’t average problems into invisibility.
- Be explicit about confidence
  - If data is incomplete or delayed, use an “Unknown/Gray” status or annotate it; don’t guess and color it Green.
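A sketch of worst-status rollup logic for the aggregation point above, with an explicit Gray status so missing data is never averaged into a healthy-looking Green; the severity ordering is an assumption you should set deliberately:

```python
# Higher number = worse. Gray (unknown) outranks Green so that a data gap
# is surfaced instead of silently reported as healthy.
SEVERITY = {"Green": 0, "Gray": 1, "Amber": 2, "Red": 3}

def rollup(statuses: list[str]) -> str:
    """Portfolio RAG = worst child status (never an average)."""
    if not statuses:
        return "Gray"
    return max(statuses, key=lambda s: SEVERITY[s])

print(rollup(["Green", "Green", "Amber"]))         # -> "Amber"
print(rollup(["Green", "Gray"]))                   # -> "Gray" (data gap surfaced)
print(rollup(["Green", "Amber", "Red", "Green"]))  # -> "Red"
```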
Future trends
- Automated RAG via anomaly detection
  - Instead of static thresholds, RAG can reflect statistically unusual behavior relative to historical baselines (a sketch follows this list).
- Context-aware scoring
  - Thresholds that adjust based on seasonality, channel mix, inventory constraints, or lifecycle stage.
- Action-linked dashboards
  - Clicking a Red status launches a workflow: incident ticket, root-cause checklist, or optimization playbook.
- Portfolio optimization integration
  - RAG feeding prioritization models (e.g., which campaigns to pause, which tests to accelerate, where to reallocate spend).
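One plausible shape for anomaly-driven RAG is a z-score against a trailing baseline; the |z| cut-offs of 2 and 3 are illustrative, and a production system would likely use more robust detectors (seasonality-aware models, median-based statistics):

```python
import statistics

def anomaly_rag(history: list[float], today: float) -> str:
    """RAG from how unusual today's value is versus the historical baseline."""
    if len(history) < 2:
        return "Gray"  # not enough history to judge
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return "Green" if today == mean else "Red"
    z = abs(today - mean) / stdev
    if z < 2:
        return "Green"  # within normal variation
    if z < 3:
        return "Amber"  # unusual; worth a look
    return "Red"        # extreme deviation from baseline

daily_conversions = [120, 118, 125, 122, 119, 121, 117]
print(anomaly_rag(daily_conversions, 96))  # -> "Red" (z is roughly 9)
```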
Related Terms
- KPI Thresholds
- Marketing Dashboard
- Performance Pacing
- Service Level Agreement (SLA)
- Operational Readiness
- Risk Register
- Exception Reporting
- Anomaly Detection
- Portfolio Management
- Experimentation Governance
