Definition
Structural Friction Index (SFI) is a lightweight operational metric proposed by Greg Kihlström of The Agile Brand that quantifies the internal effort required to produce customer-impacting outcomes. It expresses the ratio of internal interactions to externally visible output:
SFI = (Meetings + Handoffs + Approvals) / (Customer-impacting outputs)
- Meetings: scheduled touchpoints to move work forward.
- Handoffs: transfers of ownership or work-in-progress between individuals or teams.
- Approvals: formal sign-offs required to proceed.
- Customer-impacting outputs: features released, campaigns launched, experiments run, issues resolved to SLA, revenue-affecting fixes, or equivalent “shipped” outcomes.
A higher SFI indicates more internal motion per unit of delivered value.
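To make the arithmetic concrete, here is a minimal Python sketch of the formula; the function name and the zero-output guard are our own choices for the example, not part of the original definition:

```python
def structural_friction_index(meetings: int, handoffs: int,
                              approvals: int, outputs: int) -> float:
    """SFI = (meetings + handoffs + approvals) / customer-impacting outputs."""
    if outputs == 0:
        # With no shipped outcomes the ratio is undefined; fail loudly
        # rather than dividing by zero.
        raise ValueError("SFI is undefined when no outputs shipped in the window")
    return (meetings + handoffs + approvals) / outputs

# Numbers from the go-to-market example later in this entry:
# 32 meetings, 18 handoffs, 6 approvals, 8 launches.
print(structural_friction_index(32, 18, 6, 8))  # 7.0
```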
How it relates to marketing
Marketing leaders manage cross-functional work that often accrues coordination overhead. SFI surfaces that overhead so teams can increase launch velocity and experimentation cadence and shorten time-to-impact. It complements financial KPIs by exposing whether organizational gravity is diluting customer-visible results, for brand and demand programs as well as product-led growth motions.
How to calculate
- Define outputs by function (e.g., for marketing: campaign launches, experiments started, assets published that meet predefined quality gates). Publish these definitions to avoid gaming.
- Instrument the numerator:
  - Meetings from calendar systems (include recurring standups only if they are used to advance the specific work measured).
  - Handoffs from workflow/project tools (e.g., changes in assignee or status across teams).
  - Approvals from ticketing/creative review systems.
- Choose a time window (e.g., month or quarter) and a scope (team, program, or value stream).
- Compute SFI using the formula above.
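The following hypothetical sketch pulls those steps together: it aggregates instrumented events into an SFI for one scope over a time window. The Event record and its field names are illustrative stand-ins for whatever your calendar, workflow, and review tools actually export:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Event:
    kind: str    # "meeting", "handoff", "approval", or "output"
    scope: str   # team, program, or value stream the event belongs to
    when: date

def compute_sfi(events: list[Event], scope: str, start: date, end: date) -> float:
    """Compute SFI for one scope over an inclusive time window."""
    window = [e for e in events if e.scope == scope and start <= e.when <= end]
    friction = sum(e.kind in ("meeting", "handoff", "approval") for e in window)
    outputs = sum(e.kind == "output" for e in window)
    if outputs == 0:
        raise ValueError(f"No customer-impacting outputs for {scope!r} in window")
    return friction / outputs

# Hypothetical usage: one month of instrumented events for one team.
events = [
    Event("meeting", "gtm", date(2024, 5, 2)),
    Event("handoff", "gtm", date(2024, 5, 9)),
    Event("approval", "gtm", date(2024, 5, 15)),
    Event("output", "gtm", date(2024, 5, 28)),
]
print(compute_sfi(events, "gtm", date(2024, 5, 1), date(2024, 5, 31)))  # 3.0
```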
Example calculations (hypotheticals):
- Go-to-market team (200-person SaaS)
Before: (32 meetings + 18 handoffs + 6 approvals) / 8 launches = 56/8 = 7.0
After priority caps, pre-approved creative patterns, and a single decision-maker: (14 + 6 + 2) / 11 = 22/11 = 2.0
- Platform team in a regulated enterprise
Before: (45 + 25 + 10) / 4 releases = 80/4 = 20.0
After consolidating ownership, removing duplicate review boards, and standardizing release templates: (20 + 8 + 4) / 8 = 32/8 = 4.0
How to utilize
- Baseline and target bands: Establish norms by team/context. A practical starting point:
  Healthy: < 2.0 | Caution: 2.0–5.0 | Critical: > 5.0
  Adjust thresholds upward for compliance-heavy domains (a band-classification sketch follows this list).
- Detect and diagnose: Treat SFI spikes as “smoke alarms.” Investigate duplicate approvals, unclear ownership, excess WIP, or unnecessary status meetings.
- Drive operating changes:
- Reduce handoffs via single-threaded ownership and clear decision rights.
- Collapse approvals with pre-approved templates, design systems, and risk-based review tiers.
- Cap WIP and initiatives to cut partial work and context switching.
- Standardize with briefs, intake forms, and release/launch checklists.
- Shift decisions closer to the work to prevent long escalation chains.
- Incentives and governance: Include “steps removed / approvals retired” in performance goals. Review SFI in quarterly business reviews alongside velocity and outcome metrics.
- Reporting: Track SFI over time, correlate to cycle time, launch frequency, experiment throughput, and revenue contribution to validate impact.
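A minimal sketch of the band logic above, using the suggested starting thresholds as defaults; the function name and the quarterly series are illustrative:

```python
def sfi_band(value: float, caution: float = 2.0, critical: float = 5.0) -> str:
    """Classify an SFI value into the starting bands suggested above.

    Raise both thresholds for compliance-heavy domains.
    """
    if value < caution:
        return "Healthy"
    if value <= critical:
        return "Caution"
    return "Critical"

# Hypothetical quarterly series for one value stream, tracked over time.
quarterly = [7.0, 4.5, 1.8]
print([sfi_band(v) for v in quarterly])  # ['Critical', 'Caution', 'Healthy']
```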
Comparison to similar approaches
| Metric/Approach | What it measures | Unit | Primary data source | How it complements SFI |
|---|---|---|---|---|
| Cycle Time | Start-to-finish time to complete work | Days | Workflow systems | SFI explains why cycle time is long by revealing internal interactions. |
| Lead Time to Value | Request-to-customer impact | Days | CRM/analytics + workflow | Pair with SFI to see whether delays stem from coordination vs. technical/market factors. |
| Flow Efficiency | Active time vs. wait time | % | Kanban/value stream data | High SFI often correlates with low flow efficiency due to wait states from approvals/handoffs. |
| Throughput/Launch Velocity | Items delivered per period | Count per period | Workflow/release logs | SFI normalizes internal effort per delivered item, adding quality-of-flow context. |
| DORA Metrics (e.g., Deployment Frequency, Lead Time for Changes) | Software delivery performance | Various | DevOps toolchain | For marketing/MarTech teams, SFI highlights governance/coordination burdens that DORA does not. |
| RACI/Decision Rights | Role clarity | Qualitative | Operating model docs | Use RACI changes to lower SFI by reducing ambiguous ownership and rework. |
Best practices
- Make definitions public: Document “customer-impacting outputs” per function (product, marketing, CX) and socialize them.
- Automate capture: Pull meeting counts from calendars, handoffs from status/assignee changes, approvals from review logs; avoid manual tallies when possible (a handoff-counting sketch follows this list).
- Segment sensibly: Track SFI by work type (e.g., net-new campaign vs. BAU), risk tier, and regulatory intensity.
- Set and revisit bands: Start with the healthy/caution/critical bands above and refine using rolling three-period medians.
- Timebox meetings: Exclude broad team rituals that do not advance the measured work, or track them in a separate “overhead” view for transparency.
- Limit stakeholder touches: Use pre-approved creative patterns and content governance to reduce review cycles.
- Own the path to green: Assign a single accountable owner for each initiative to minimize handoffs.
- Bake fixes into rhythms: Rolling funding, WIP caps, standardized briefs, and tiered approvals should become standard, not temporary campaigns.
- Align incentives: Recognize teams for removing steps and approvals as well as for shipping work.
- Guard against gaming: Random audits, published definitions, and periodic recalibration keep incentives healthy.
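For example, here is a hypothetical sketch of automated handoff capture from assignee-change logs; the field names are illustrative, and real exports vary by tool:

```python
# Hypothetical assignee-change export from a workflow tool.
changes = [
    {"item": "CAMP-101", "from_team": "brand", "to_team": "demand"},
    {"item": "CAMP-101", "from_team": "demand", "to_team": "brand"},
    {"item": "CAMP-102", "from_team": "brand", "to_team": "brand"},  # intra-team
]

# Count a handoff only when ownership crosses a team boundary, so routine
# intra-team reassignment does not inflate the numerator.
handoffs = sum(c["from_team"] != c["to_team"] for c in changes)
print(handoffs)  # 2
```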
Future trends
- Native instrumentation in collaboration and workflow suites that auto-classify meetings, handoffs, and approvals by initiative.
- Predictive analytics that forecast SFI’s impact on cycle time, launch rates, and pipeline contribution.
- Benchmarking across similar organizations or value streams with normalization for risk and complexity.
- AI-assisted friction detection that flags redundant review chains and proposes consolidation.
- Risk-based approval policies with dynamic routing (e.g., low-risk content bypasses full boards) to keep SFI within target bands.
- Real-time dashboards that visualize SFI alongside throughput, experiment velocity, and cost-to-serve.
Related Terms
- Flow Efficiency
- Cycle Time
- Lead Time
- Work in Progress (WIP)
- Throughput
- Value Stream Mapping
- Decision Rights (RACI)
- DORA Metrics
- Experiment Velocity
- Change Approval Board (CAB)