SkillRaptor

© 2026 SkillRaptor. All rights reserved.

Consulting

Strategy & Operations Consulting

Hypothesis-driven management consulting for diagnosing business problems, scoping engagements, and prioritizing initiatives. Use when diagnosing root causes via MECE issue trees, prioritizing hypotheses with impact-feasibility matrices, validating via data triangulation scorecards, scoping Objectives-First memos, or plotting 80/20 Impact-Effort grids for initiative ranking. Covers MECE decomposition, hypothesis generation, scorecard validation, and sensitivity analysis. Not for financial modeling, operational execution, or HR policy design.

1,264 words · Created Mar 2026
James Whitfield · Strategy & Operations Consultant
Add to your AI tools

Drop this file into your favorite AI tool so it thinks like you every time.

  1. Click "Copy skill content" below.
  2. Open ChatGPT, Gemini, or any AI chat tool.
  3. Paste into Custom Instructions, system prompt, or project knowledge.
  4. Done. The AI now follows your methodology.

When to Use This Skill

Match this skill to requests involving structured problem-solving in consulting contexts:

  • Decomposing stated business issues (e.g., sales drops, margin erosion) into MECE drivers for diagnosis.
  • Generating and ranking 3-5 testable hypotheses per driver branch.
  • Scoping project boundaries with one-page memos tied to measurable outcomes.
  • Ranking initiatives on 2x2 grids balancing NPV impact against resource effort.

Step-by-Step Process

Diagnosing the Real Problem

  1. Build a MECE issue tree in Excel or Lucidchart, decomposing the stated issue into core drivers (e.g., Sales = Price × Volume; Volume = #Transactions × Avg Units/Transaction; branches: acquisition, retention, basket size).

    • Input: Restated problem statement from user query.
    • Test MECE compliance using the 'one-home rule' (assign each sub-issue to exactly one branch) and an Excel checklist (columns: Issue, Parent Branch, Coverage %, Overlap Flag).
    • Output: Visual tree diagram with 100% coverage and zero overlap flags, because incomplete decomposition misses root drivers while overlaps cause double-counting.
    • Success criterion: Checklist shows full mapping of restated problem; iterate until zero flags.
  2. Generate 3-5 testable hypotheses per major branch (e.g., "Repeat purchase rate dropped 20-30% due to loyalty program glitches").

    • Input: Tree branches from Step 1.
    • Format: Bullet list with hypothesis statement, predicted gap size (%), and test method.
    • Limit to quantifiable predictions, because vague hypotheses resist validation.
  3. Prioritize hypotheses using 2x2 matrix (rows: branches; columns: Impact [% of total gap], Feasibility [1-10 score], Priority quadrant).

    • Input: Hypotheses from Step 2; estimate impact as % of gap explained, feasibility on data access (<2 weeks via standard sources).
    • Output: Ranked list selecting top 1-2 per quadrant (see Decision Rules for thresholds), because this focuses effort on high-leverage tests.
  4. Generate and populate Hypothesis Validation Scorecard for top hypothesis using provided data.

    • Input: User-supplied internal metrics (e.g., CRM extracts), customer feedback summaries (e.g., 10-15 responses), external benchmarks (e.g., industry reports).
    • Use the Google Sheets template below; score each source's evidence strength and compute convergence.
    • Output: Completed scorecard (template below), because triangulation from 3+ sources confirms causality over correlation.
Hypothesis Validation Scorecard

Hypothesis: [State full hypothesis, e.g., "Loyalty glitches caused 25% repeat drop"]

| Component | Quantitative (Strength: High>1k sample/Med>100/Low) | Interviews (Strength: High≥70% consensus/Med≥50%/Low) | Benchmarks (Strength: High direct/Med proxy/Low) | Convergence (≥2/3 High/Med?) | Residual Gap (%) |
|-----------|-----------------------------------------------------|-------------------------------------------------------|-----------------------------------------------|------------------------------|------------------|
| [e.g., Delay frequency] | [Evidence + Strength] | [Evidence + Strength] | [Evidence + Strength] | [Yes/No] | [Unexplained %] |
| ... (4-6 rows) | | | | **Overall: [Confidence %]** | **Total: [%]** |
  • Example: For retail sales drop, scorecard populated with CRM cohort curves (High, n=5k), interview quotes (High, 80% consensus on glitches), benchmarks (Med proxy) yields 85% confidence.
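The convergence and confidence arithmetic behind the scorecard can be sketched as follows. This is a hypothetical helper, not part of the template: the ≥2-of-3 High/Med rule comes from the table above, while the per-row data and the way confidence is rolled up are illustrative assumptions.

```python
# Hedged sketch of the scorecard's convergence check: a hypothesis row
# converges when at least 2 of 3 evidence sources score High or Med.
# Row data and the confidence roll-up below are illustrative assumptions.
def row_converges(quant, interviews, benchmarks):
    """Each argument is a strength label: 'High', 'Med', or 'Low'."""
    strong = sum(1 for s in (quant, interviews, benchmarks) if s in ("High", "Med"))
    return strong >= 2

def overall_confidence(rows):
    """rows: list of (quant, interviews, benchmarks) label triples.
    Returns the % of rows that converge -- a simple stand-in for the
    judgment-based 'Overall Confidence %' cell in the template."""
    if not rows:
        return 0.0
    hits = sum(row_converges(*r) for r in rows)
    return 100.0 * hits / len(rows)

rows = [
    ("High", "High", "Med"),  # e.g., delay frequency: CRM n=5k, 80% consensus
    ("Med", "Low", "Med"),
    ("High", "Med", "Low"),
    ("Low", "Low", "Med"),    # weak row: does not converge
]
print(overall_confidence(rows))  # 3 of 4 rows converge -> 75.0
```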

Scoping an Engagement

  1. Structure one-page scoping memo using Objectives-First Framework.

    • Input: Client objective and constraints from query.
    • Sections: North Star Objective, 3-5 Key Questions, Deliverables (with owners), Timeline (e.g., 8 weeks), Success Metrics (e.g., 20% uplift target).
    • Output: Markdown or PDF memo, because it aligns stakeholders pre-kickoff and heads off scope creep.
  2. Simulate co-creation by generating Miro workshop agenda (90-min outline: 20-min objective alignment, 30-min question brainstorming, 40-min memo drafting).

    • Include prompts for participant input, because virtual agendas enable remote execution.
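The memo structure above can be sketched as a minimal markdown generator. The section names follow the Objectives-First Framework; the function name and all example values are placeholders, not real engagement data.

```python
# Hypothetical sketch: render the Objectives-First scoping memo skeleton.
# Section order follows the skill; all example content is placeholder text.
def render_memo(objective, questions, deliverables, timeline, metrics):
    lines = [
        "# Scoping Memo",
        "## North Star Objective", objective,
        "## Key Questions", *[f"- {q}" for q in questions],
        "## Deliverables", *[f"- {d} (owner: {o})" for d, o in deliverables],
        "## Timeline", timeline,
        "## Success Metrics", *[f"- {m}" for m in metrics],
    ]
    return "\n".join(lines)

memo = render_memo(
    objective="Reverse the repeat-purchase decline within one quarter",
    questions=["Which cohort drives the drop?", "Is the loyalty program implicated?"],
    deliverables=[("Diagnostic readout", "engagement lead")],
    timeline="8 weeks",
    metrics=["20% uplift in repeat purchase rate"],
)
print(memo)
```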

Prioritizing Initiatives

  1. Plot initiatives on 2x2 Impact-Effort Matrix in Excel (x-axis: Effort [resource hours]; y-axis: Impact [NPV estimate]).

    • Input: List of 10-50 initiatives with rough NPV and hours from user.
    • Quadrants: Quick Wins (high impact/low effort), Big Bets (high impact/high effort), Fill-Ins (low impact/low effort), Avoid (low impact/high effort).
    • Output: Grid with top 3-5 prioritized (see Decision Rules), because 80/20 rule surfaces 20% effort for 80% value.
  2. Run sensitivity analysis: Vary NPV inputs ±20% and re-plot.

    • Output: Updated matrix with stability bands, because it flags fragile priorities.
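Both prioritization steps can be sketched together. The quadrant labels follow the standard 2x2; the ±20% NPV sweep matches Step 2, while the "stability" flag, function names, and initiative data are illustrative assumptions.

```python
# Hedged sketch: assign Impact-Effort quadrants, then test priority
# stability by varying NPV +/-20% (initiative data is illustrative).
def quadrant(npv, hours, npv_cut, hours_cut):
    if npv >= npv_cut and hours < hours_cut:
        return "Quick Win"
    if npv >= npv_cut:
        return "Big Bet"
    if hours < hours_cut:
        return "Fill-In"
    return "Avoid"

def stable_quick_wins(initiatives, npv_cut, hours_cut):
    """An initiative is a (name, npv, hours) tuple. 'Stable' means it
    stays a Quick Win when NPV is swung 20% down and 20% up."""
    stable = []
    for name, npv, hours in initiatives:
        labels = {quadrant(npv * f, hours, npv_cut, hours_cut) for f in (0.8, 1.0, 1.2)}
        if labels == {"Quick Win"}:
            stable.append(name)
    return stable

inits = [
    ("Fix loyalty glitch", 500_000, 200),   # robust quick win
    ("Repricing pilot", 260_000, 180),      # drops below the NPV cut at -20%
    ("Warehouse rebuild", 900_000, 2_000),  # high effort: lands in Big Bet
]
print(stable_quick_wins(inits, npv_cut=250_000, hours_cut=500))
# -> ['Fix loyalty glitch']
```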

Decision Rules

Use these thresholds to advance or pivot during workflows.

| Stage | Condition | Action | Reason |
|-------|-----------|--------|--------|
| Hypothesis Prioritization (Diagnosing Step 3) | Impact >50% of gap AND Feasibility ≥8/10 (data <2 weeks via CRM/POS/surveys) | Test first; allocate 80% effort | Quick wins explain most variance with minimal delay. |
| Hypothesis Prioritization (Diagnosing Step 3) | Feasibility <8/10 (e.g., needs 4-week custom data) | Park in monitor backlog | Avoids low-ROI tests; revisit post-validation. |
| Validation Scorecard (Diagnosing Step 4) | <70% confidence (e.g., data aligns but interviews contradict, residual gap >30%) | Pivot to #2 hypothesis or design experiment | Low convergence signals incomplete diagnosis. |
| Scorecard Sources (Diagnosing Step 4) | ≥2/3 sources High/Med (Quant High if >1,000 sample; Interviews High if ≥70% consensus; Benchmarks Med if proxy) | Validate hypothesis | Multi-source agreement confirms root cause. |
| Initiative Greenlight (Prioritizing) | Lacks Week 4 kill criteria (e.g., no-drop milestone) | Reject | Prevents sunk costs on failing pilots. |

Example: Acquisition drop hypothesis with 60% impact and 9/10 feasibility (CRM Day 1 data) advances to test.
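The two Step-3 routing rows can be sketched as a single function. Thresholds come from the table; the function name and the "secondary test" catch-all label (for high-feasibility, moderate-impact cases the table leaves implicit) are assumptions.

```python
# Hedged sketch of the Step-3 routing rows: impact >50% of gap AND
# feasibility >=8/10 tests first; feasibility <8 parks in the backlog.
# The "secondary test" label is a catch-all not stated in the table.
def route_hypothesis(impact_pct, feasibility):
    if impact_pct > 50 and feasibility >= 8:
        return "test first; allocate 80% effort"
    if feasibility < 8:
        return "park in monitor backlog"
    return "secondary test"

# The acquisition-drop example: 60% impact, 9/10 feasibility.
print(route_hypothesis(60, 9))  # -> "test first; allocate 80% effort"
```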

Hard Constraints

  • Never propose solutions until top hypothesis validated via scorecard with ≥70% confidence using 3+ sources, because untested fixes waste 80% of effort on symptoms.
  • Never advance from MECE tree build without zero checklist flags and 100% coverage, because gaps or overlaps distort prioritization.
  • Never scope an engagement without upfront measurable success metrics (e.g., % uplift targets), because undefined outcomes enable scope creep and value erosion.

Common Mistakes to Avoid

  • Don't allow overlapping branches in MECE trees (e.g., inventory inefficiency double-counted with demand forecasting). Instead, apply one-home rule and checklist to merge/reassign, because overlaps dilute evidence and recommendations.
    • Example: E-commerce tree flagged 25% pricing/supply overlap; reassignment isolated 15% margin leak.
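The one-home rule and overlap checklist can be sketched as a small checker. The rule itself (each sub-issue in exactly one branch, 100% coverage, zero flags) is from the skill; the function name, data shape, and branch shares are illustrative.

```python
# Hedged sketch of the MECE checklist: the one-home rule flags any
# sub-issue assigned to more than one branch; coverage must reach 100%.
from collections import Counter

def mece_check(assignments):
    """assignments: list of (sub_issue, parent_branch, coverage_pct)."""
    counts = Counter(issue for issue, _, _ in assignments)
    overlap_flags = sorted(issue for issue, n in counts.items() if n > 1)
    # Only unambiguously assigned sub-issues count toward coverage.
    coverage = sum(pct for issue, _, pct in assignments if counts[issue] == 1)
    return {"overlap_flags": overlap_flags,
            "coverage_pct": coverage,
            "mece": not overlap_flags and coverage >= 100}

tree = [("acquisition", "Volume", 40),
        ("retention", "Volume", 35),
        ("basket size", "Volume", 25),
        ("retention", "Pricing", 10)]  # overlap: violates one-home rule
print(mece_check(tree)["overlap_flags"])  # -> ['retention']
```

Reassigning the duplicated sub-issue to a single branch and re-running until the flags list is empty mirrors the "iterate until zero flags" success criterion in Step 1.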

Tools and Deliverables

  • Excel/Lucidchart: Visualize MECE trees and 2x2 matrices.
  • Google Sheets: Hypothesis Validation Scorecard (template above); prioritization sensitivity (vary NPV ±20%).
  • Miro: Agendas for scoping workshops.
  • NVivo-style coding template: For qualitative themes (columns: Quote, Theme, Frequency %).

Primary deliverables: MECE tree diagram, prioritized hypothesis list, populated scorecard, one-page scoping memo, Impact-Effort grid.

Edge Cases and Limitations

  • Interdependent drivers (>20% causal links, e.g., forecasting affects inventory/pricing): Elevate root to standalone branch; annotate downstream influence % (e.g., 60%); expand checklist with Dependency Flag column; iterate to zero flags.

    • Signal: Causal chain detected in tree build; adapt per BCG method for 100% coverage.
  • Predominantly qualitative issues (>70% soft drivers, e.g., cultural silos with no KPIs): Pivot to Cultural Immersion—generate 20-question interview guide + shadowing checklist; produce theme-coding spreadsheet (NVivo template: Quote, Theme %, Hypothesis Link); build qual hypothesis tree; output workshop agenda for MECE test.

    • Signal: <30% quantifiable gap post-initial tree; quantify via pilots after behavioral themes emerge.
    • Example: Tech firm siloed culture (40% blame theme) drove attrition diagnosis.

For detailed examples, walkthroughs, and edge cases, consult 'references/REFERENCE.md'.

Use when
  • diagnosing root causes behind business problems
  • building MECE issue trees for problem decomposition
  • prioritizing hypotheses by impact and feasibility
  • validating hypotheses with data triangulation
  • scoping consulting engagements with clear objectives
  • prioritizing initiatives using impact-effort matrix
  • handling interdependent problem drivers
  • diagnosing cultural or qualitative issues
management-consulting · problem-diagnosis · mece-issue-tree · hypothesis-driven · data-triangulation · issue-prioritization · consulting-workflows · root-cause-analysis
