SkillRaptor

© 2026 SkillRaptor. All rights reserved.

Design

Product & UX Design Workflows

UI design audits for e-commerce checkout flows using mobile-first Nielsen heuristics. Use when auditing checkout flows for drop-offs, mapping flows in Figma, scoring friction 1-5 per step, prototyping high-friction fixes, validating prototypes on iOS Simulator, or analyzing Hotjar heatmaps for rage clicks and cold zones. Key capabilities include friction scoring with Nielsen's 10 principles, staging validation targeting <90s completion, and structured Google Slides handoff decks. Not for full product roadmaps, code implementation, or brand identity design.

1,306 words · Created Mar 2026
Mira Santos · Product & UX Design Consultant
Add to your AI tools

Drop this file into your favorite AI tool so it thinks like you every time.

  1. Click "Copy skill content" below.
  2. Open ChatGPT, Gemini, or any AI chat tool.
  3. Paste into Custom Instructions, system prompt, or project knowledge.
  4. Done. The AI now follows your methodology.

Step-by-Step Process

Auditing Checkout Flows

Follow this workflow when requested to audit a checkout flow for drop-offs or friction.

  1. Map the flow mobile-first using Nielsen's 10 usability heuristics.

    • Input: Screenshots or URL of the current checkout flow on iOS/Android devices.
    • Output: Figma-style wireframe diagram (text-based or descriptive) with each screen limited to three fields maximum and a persistent progress indicator like a bold step counter.
    • Why: Limits input fatigue and keeps users oriented toward completion, reducing abandonment.
  2. Time each step using simulated Hotjar heatmaps and session recordings.

    • Input: Provided session data or describe 50-100 sessions; focus on real-device timings for iOS/Android.
    • Output: Table of step timings with hesitations noted (e.g., >45s on shipping selector).
    • Why: Identifies cognitive overload points where users hesitate, correlating to drop-offs.
  3. Score each step 1-5 on friction based on cognitive load.

    • Input: Mapped flow and timing data.
    • Output: Spreadsheet or table with scores: deduct 1 point per Nielsen violation (e.g., -1 for no progress indicator, -1 for >3 fields/screen, -1 for unclear CTAs); target average <2 per step.
    • Why: Quantifies violations across heuristics like error prevention and flexibility/efficiency for prioritization.
  4. Prioritize and prototype fixes targeting <90s total completion.

    • Input: Scores and timings.
    • Output: Figma prototype descriptions (e.g., reduce shipping to 2-field screen with guest checkout toggle and predictive address dropdowns).
    • Why: High-friction steps (>4 score, >45s hesitation) drive 80% of abandons; prototypes test fixes directly.
  5. Validate prototype on iOS Simulator for tap targets and smoothness.

    • Input: Prototype specs.
    • Output: Validation report confirming reductions (e.g., guest toggle drops shipping fields from 5 to 2).
    • Why: Ensures mobile parity before staging, catching tap target issues early.
  6. Run staging Hotjar simulation across 50-100 sessions.

    • Input: Validated prototype.
    • Output: Metrics report: time-to-complete (<90s), heatmaps, rage clicks, drop-offs.
    • Why: Measures uplift (e.g., 18% completion boost) with real-user simulation data.
  7. Generate handoff deck in Google Slides format (10-15 slides max).

    • Input: All prior outputs.
    • Output: Structured deck with embedded Figma links (template below).
    • Why: Enables PM review with evidence-backed proposals, minimizing ping-pong.
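The scoring in steps 1-4 can be sketched in code. This is a hypothetical illustration, not part of the skill itself: the step fields (`seconds`, `field_count`, and so on) are invented names, and the rule that each Nielsen violation raises friction by one point is an interpretation of the "deduct 1 point per violation" scoring above, read against the 1 = seamless / 5 = overload friction scale.

```python
def friction_score(step: dict) -> int:
    """Friction per the workflow: 1 = seamless, 5 = overload.
    Each Nielsen violation (no progress indicator, >3 fields/screen,
    unclear CTA, >45s hesitation) raises friction by one point."""
    violations = 0
    if not step["has_progress_indicator"]:
        violations += 1
    if step["field_count"] > 3:
        violations += 1
    if not step["has_clear_cta"]:
        violations += 1
    if step["seconds"] > 45:  # hesitation threshold from step 2
        violations += 1
    return min(1 + violations, 5)

# Worked example: the shipping selector from the workflow text
flow = [
    {"name": "shipping", "seconds": 52, "field_count": 5,
     "has_progress_indicator": False, "has_clear_cta": True},
    {"name": "payment", "seconds": 20, "field_count": 3,
     "has_progress_indicator": True, "has_clear_cta": True},
]
scores = {s["name"]: friction_score(s) for s in flow}
total_time = sum(s["seconds"] for s in flow)
avg = sum(scores.values()) / len(scores)
print(scores, f"avg={avg:.1f} (target <2)", f"total={total_time}s (target <90s)")
```

The shipping step accumulates three violations (no progress indicator, five fields, 52s hesitation) and scores 4, which under the decision rules below would trigger a prototype.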

Handoff Deck Template

Executive Summary (1 slide): Problem (e.g., 25% cold zone at shipping), uplift projection (18%).

Audit Evidence (3-4 slides): Hotjar heatmaps (>20% cold zones), 5-10 annotated session screenshots/GIFs.

Root Causes & Prioritization (2 slides): Decision matrix by abandonment >10-15%.

Proposed Fixes (3-4 slides): Figma before/after links, microcopy tweaks.

Metrics & Validation (1-2 slides): Staging uplift (18%), QA sign-off.

Example: For a shipping selector with 45s hesitations, prototype a guest toggle reducing fields to 2, validating 22% conversion uplift in staging.

Reviewing Landing Pages

  1. Run 5-second test simulation.
    • Output: Note key elements recalled (headline, CTA).
  2. Layer with AIDA framework (Attention, Interest, Desire, Action).
    • Score 1-5 per stage using Zeplin for pixel-perfection and WebAIM for 4.5:1 CTA contrast.
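The 4.5:1 CTA contrast check follows the WCAG 2.1 relative-luminance formula, which is standard and can be computed directly. A minimal sketch (the example colors are arbitrary):

```python
def _linearize(channel_8bit: int) -> float:
    # sRGB channel to linear value, per WCAG 2.1 relative luminance
    c = channel_8bit / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb: tuple) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Example: white CTA text on a mid-blue button
ratio = contrast_ratio((255, 255, 255), (0, 102, 204))
print(f"{ratio:.2f}:1 ->", "passes AA (4.5:1)" if ratio >= 4.5 else "fails AA")
```

WebAIM's contrast checker applies this same formula; the code is only useful for batch-checking CTA color pairs before a Zeplin review.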

Accessibility Audits for Forms

  1. Scan with WAVE/Lighthouse for WCAG 2.1 AA.
    • Output: Compliance report focusing on tab order (focus-first).
    • Why: Ensures keyboard navigation without mouse dependency.

Decision Rules

Use these conditionals for branching during audits.

  • Condition: Score ≥4 AND hesitation >45s per step. Action: Build and test Figma prototype on iOS Simulator. Reason: Targets overload steps violating efficiency heuristics, where fixes yield >15% uplift.
  • Condition: Score ≤3 AND no Hotjar drop-off spikes. Action: Proceed to PM handoff. Reason: Indicates a low-risk flow; prioritizes high-conversion opportunities without over-investment.
  • Condition: Staging shows drop-offs <15%, avg score ≤2/step (1-2 violations), friction ≤2 (<15s hesitations), and no >20% heatmap drop-offs at gates. Action: PM handoff. Reason: Meets the pass threshold; prevents production pollution pre-A/B.
  • Condition: Rage clicks in >3 clusters/session (≥5 rapid clicks in 3s) OR >5% of sessions. Action: Revert Figma changes, re-validate on iOS Simulator. Reason: Correlates 80% with >25% production drop-offs; catches unresponsive elements.
  • Condition: Pre-audit shows a non-ecommerce flow, <100 monthly checkouts, or <500 sessions with <10% drop-offs. Action: Bail on the full workflow. Reason: Avoids sunk cost on low-impact noise; focus on high-traffic 80/20 gates.
  • Condition: Hotjar/FullStory show <5% relative drop-off at gates AND <10 absolute abandons/month. Action: Lightweight Figma Mirror review + PM sign-off. Reason: Bypasses heavy staging for minimal-ROI scenarios.
  • Condition: Key gates (abandonment >10-15% historically: shipping, billing, payment, review) show >20% cold zones. Action: Flag for deeper session review. Reason: Cold zones (>20% of interactive area) signal hidden errors driving abandons.
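As a sketch of how these conditionals branch during an audit, the rules can be ordered from hard stops to handoff. This is an illustrative interpretation; the dict keys are invented, and the thresholds are taken directly from the rules above.

```python
def next_action(step: dict) -> str:
    """Return the next audit action for a step, applying the decision
    rules roughly in order of severity. Keys are hypothetical."""
    # Pre-audit bail: low-volume flows aren't worth the full workflow
    if step.get("monthly_checkouts", 0) < 100:
        return "bail"
    # Rage clicks: revert and re-validate before anything else ships
    if step.get("rage_click_session_pct", 0) > 5:
        return "revert_and_revalidate"
    # Overload step: high friction plus long hesitation -> prototype a fix
    if step["friction_score"] >= 4 and step["hesitation_s"] > 45:
        return "prototype_on_ios_simulator"
    # Low-risk step with no drop-off spikes -> straight to PM handoff
    if step["friction_score"] <= 3 and not step.get("dropoff_spike", False):
        return "pm_handoff"
    return "deeper_session_review"

# The shipping-selector example: score 4, 52s hesitation
print(next_action({"monthly_checkouts": 800,
                   "friction_score": 4, "hesitation_s": 52}))
```

Ordering matters: the bail and rage-click checks fire before any prototyping work, which mirrors the sunk-cost reasoning in the rules.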

Friction scale:

  • 1: Seamless (<10s, auto-fill, predictive dropdowns for flexibility).
  • 5: Overload (>45s, redundant fields sans guest toggle, vague errors).

Example: If shipping gate shows 25% cold zone cross-referenced with error-state recordings, prioritize revert and re-prototype.

Hard Constraints

  • Never exceed three fields per screen; staying under this cap minimizes input fatigue and cognitive overload.
  • Always include a persistent progress indicator, like a bold step counter; it keeps users oriented and reduces disorientation abandons.
  • Never hand off without 20+ cross-referenced Hotjar replays and prototype tests validating <10% cold zones across gates; vague reports cause PM ping-pong and live risks.
  • Never audit flows with <5% of total site traffic (GA4-validated); doing so wastes bandwidth on insignificant impact versus the 80/20 high-traffic gates.
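These constraints are mechanical enough to check before a handoff. A minimal sketch, assuming invented field names for screens and flow-level metadata:

```python
def violations(flow: dict) -> list:
    """Collect hard-constraint violations for a flow; empty list = clear."""
    problems = []
    for screen in flow["screens"]:
        if screen["field_count"] > 3:
            problems.append(f"{screen['name']}: more than 3 fields per screen")
        if not screen["has_progress_indicator"]:
            problems.append(f"{screen['name']}: missing progress indicator")
    if flow["hotjar_replays_reviewed"] < 20:
        problems.append("handoff blocked: fewer than 20 cross-referenced replays")
    if flow["site_traffic_pct"] < 5:
        problems.append("do not audit: flow is <5% of site traffic (GA4)")
    return problems

# Example: a flow that is clear for handoff
flow = {"screens": [{"name": "shipping", "field_count": 2,
                     "has_progress_indicator": True}],
        "hotjar_replays_reviewed": 25, "site_traffic_pct": 12}
print(violations(flow) or "ready for handoff")
```

Running this as a pre-handoff gate keeps the "never hand off without evidence" rule from depending on memory.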

Common Mistakes to Avoid

  • Don't fixate on minor heatmap blips (e.g., isolated margin clicks) while ignoring cold zones (>20% interactive area) at key gates (>10-15% abandonment). Instead, segment by gates and cross-reference 5-10 session recordings per flag; this contextualizes errors and avoids false positives that mask high-risk drop-offs.

Example: A junior fixating on peripheral heatmap clicks missed a 25% cold zone on an expedited-shipping toggle, leaving a hidden error undetected.

Tools and Deliverables

  • Figma: Generate mobile-first flow maps and prototypes (e.g., 2-field guest toggle screens); embed links in decks.
  • Hotjar: Simulate heatmaps/session recordings for timings, cold zones (>20%), rage clicks; target 50-100 staging sessions.
  • iOS Simulator: Validate prototypes for tap targets and flow (e.g., address dropdowns).
  • Zeplin: Check landing page pixel-perfection against guidelines.
  • WebAIM/WAVE/Lighthouse: Verify WCAG 2.1 AA contrast (4.5:1 CTAs) and tab order.
  • FullStory: Sync real-time heatmaps/sessions with Hotjar.
  • GA4: Validate traffic % for bail decisions.
  • Google Slides: Primary deliverable (10-15 slides, see template in workflow).

Primary deliverable: Google Slides handoff deck with embedded Figma prototypes.

Edge Cases and Limitations

  • New key gate post-fix (e.g., promo code >20% cold zone despite overall pass): Extend to 20+ Hotjar replays, A/B microcopy (e.g., "Got a code? Skip →" with inline validation), FullStory sync, Figma Mirror re-test, 48h uplift confirmation before deck. Why: Isolates regressions without full re-audit; prioritizes evidence over speed to avoid live risks.
  • Low-volume flows (<100 checkouts/month, e.g., SaaS signup): Fast-track after 5 replays confirming <3% cold zones. Why: Skips 2-week staging loops with negligible uplift potential.

Example: A SaaS signup flow projecting 80 checkouts/month was fast-tracked after spot-checks, enabling a quick handoff.

For detailed examples, walkthroughs, and edge cases, consult 'references/REFERENCE.md'.

Use when
  • auditing checkout flows for drop-offs
  • mapping mobile-first flows in Figma
  • scoring steps on friction scale with Nielsen heuristics
  • analyzing Hotjar heatmaps and session recordings
  • prototyping high-friction fixes in Figma
  • validating prototypes on iOS Simulator
  • running staging Hotjar tests for rage clicks
  • preparing handoff decks for PM review
Tags: ui-design · checkout-audit · usability-heuristics · nielsen-principles · mobile-first · hotjar-analysis · friction-scoring · rage-clicks-detection
