
Validation Playbook

Phase: 8 — Validation (Fast Track: top 3 experiments + risk highlights + kill criteria)
Project: likeness
Date: 2026-05-09
Confidence: High — these are the right experiments. Whether they validate or invalidate the business is the question they exist to answer.


What this is

The actionable list of what to do next. Three experiments, ordered cheapest-fastest first. Each tells the founder how to run it, what to measure, and what result triggers proceed vs. pivot vs. stop.

The structure compresses Phase 8's full set (validation playbook + risk analysis + assumptions tracker + experiment design + kill criteria) into a single readable document because we're in Fast Track mode. If the founder wants to expand any section, that's a follow-up.

Top 3 Experiments

Experiment 1 — Structured creator discovery interviews (PRIORITY 1)

Purpose: Resolve the load-bearing unknown — will adult creators with existing audiences actually participate in Likeness's specific configuration?

Hypothesis: ≥7 of 10 mid-tier verified adult creators interviewed will express genuine interest in the concierge configuration as proposed (verified-only, license-gated AI, no model export, revocable, ~75% net to creator after platform + processor fees).

Method:
  1. Use docs/reviewer-checklist.md as the structured interview spine.
  2. Recruit 10-15 candidate creators via the founder's small insider network plus 2-3 adult industry events (XBIZ, AVN if the calendar permits).
  3. Show the working mockup (the existing one in mockup/).
  4. Walk through the license configuration UX in detail.
  5. Walk through the economics honestly: 80% creator take on subscriptions; pass-through compute cost plus creator markup on AI generations; processor fees of 5-15%.
  6. Ask the bottom-line question: "Would you participate in concierge if we launched it next quarter?"
  7. Record verbatim quotes for the highest-frequency objections, concerns, and excitements.
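The fee math in the economics walkthrough can be made concrete. A minimal sketch, assuming the processor fee comes off gross revenue before the 80/20 platform split (the stacking order is an assumption; the brief does not specify it):

```python
def creator_net_share(processor_fee: float, creator_take: float = 0.80) -> float:
    """Creator's share of a gross subscription dollar, assuming the
    processor fee is deducted from gross before the platform split.
    (The stacking order is an assumption, not confirmed by the brief.)"""
    return creator_take * (1.0 - processor_fee)

# At the low end of the 5-15% processor range the creator nets 76% of
# gross; at the high end, 68%. The "~75% net to creator" figure
# therefore holds only near the low end of processor fees.
low = creator_net_share(0.05)   # 0.76
high = creator_net_share(0.15)  # 0.68
```

If the fee instead came out of the platform's 20%, the creator would net a flat 80%; which stacking the "~75%" figure assumes is worth pinning down before the interviews, since it changes the pitch.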

What to measure:
  • # of creators who say yes to concierge participation
  • Pain hierarchy: rank-order the problems creators actually name first
  • Pricing-tier resonance: which tier do creators think makes sense as their default?
  • Most common objection (the thing that, if unsolved, kills participation)
  • Most common excitement (the thing creators most want from this)

Success criteria: ≥7 of 10 say yes; pain hierarchy aligns with the founder brief's framing or reveals a compelling alternative; ≥3 commit to concierge participation conversations.

Estimated time: 4-6 weeks from the Creator Ops cofounder hire (2 weeks of scheduling, then 4 weeks of interviews + write-up).

Estimated cost: $5-15K travel + signing-incentive prep; mostly time-cost of founder + Creator Ops cofounder.

What invalidation looks like: ≤3 of 10 say yes; pain hierarchy reveals a different problem we're not solving; pricing economics are flat-out rejected.
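One way to sanity-check the 7-of-10 success threshold against the ≤3-of-10 invalidation threshold: treat each interview as an independent draw (a simplifying assumption; recruiting from an insider network is not a random sample) and ask how likely 7+ yeses would be if true interest were actually low. A binomial tail sketch:

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k yeses in n independent interviews if the true yes-rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If only 30% of the creator population were genuinely interested,
# seeing 7+ yeses in 10 interviews would happen only ~1% of the time,
# so a 7-of-10 result is strong evidence interest is real.
false_positive = binom_tail(10, 7, 0.30)  # ~0.011
```

With only 10 interviews the 4-6 yes range stays genuinely ambiguous, which is why the playbook's thresholds leave a gray zone between proceed and pivot.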

Experiment 2 — Pre-launch payment processor diligence

Purpose: Validate that the named #1 risk (processor de-banking) is manageable through the multi-processor approach.

Hypothesis: At least 2 of {CCBill, Segpay, Verotel, Epoch} will approve Likeness for merchant onboarding based on a complete consent-posture pitch, with terms within the 5-15% fee range.

Method:
  1. The CEO (post-hire) leads conversations with each named processor.
  2. The pitch package includes: TAKE IT DOWN Act 48-hour compliance plan, 2257 records system, identity-verification provider selection, license-engine architecture, watermarking + provenance stack, hard-block category list, and multi-processor redundancy intent.
  3. Highlight the Civitai precedent and how Likeness's design specifically prevents that failure mode.
  4. Request indicative terms from each processor.
  5. Document each processor's diligence questions and any red flags they raise.

What to measure:
  • # of processors who issue indicative approval, in writing or as a strong verbal
  • Range of fees offered
  • Red-flag concerns surfaced by processors that may shape the product
  • Estimated time to operational onboarding post-incorporation

Success criteria: ≥2 processors with indicative approval; fees in the 5-15% range; no insurmountable diligence asks.

Estimated time: 8-12 weeks (diligence is slow in this category).

Estimated cost: Time of CEO + Compliance Lead. Possibly $5-10K outside counsel for processor-friendly contract review.

What invalidation looks like: Fewer than two of the four processors approve, or all decline based on category positioning; demanded fees exceed 15%, breaking unit economics; diligence asks compromise core product commitments (e.g., a processor demands a model-export path for moderation purposes).

Experiment 3 — Concierge cohort fan-economics test

Purpose: Validate the most leveraged unit-economics variable: % of fans who adopt AI-generation tiers and at what spend.

Hypothesis: Within 90 days of concierge cohort launching, ≥15% of an active creator's fans will be on a $25+ tier including AI access, and active-fan AI-credit spend will average ≥$50/month.

Method (run only after Experiments 1 and 2 succeed):
  1. Onboard 5-10 concierge creators with a fully instrumented rollout.
  2. Each creator launches the AI tier to their existing fan base via a standard creator-side announcement.
  3. A/B price tiers across the cohort: half launch at $25/$50 entry; half at $35/$60.
  4. Measure adoption velocity, AI-credit consumption, retention, and per-fan cumulative spend over 90 days.
  5. Survey fans (with the creator's blessing) on perceived value vs. the real-content baseline.

What to measure:
  • % of active fans on AI-inclusive tiers
  • Average per-active-fan AI-credit consumption (in dollars)
  • Subscription-to-AI-tier upgrade rate
  • 30/60/90-day retention on the AI tier
  • Cannibalization signal: did total fan spend per creator go up, or did AI spend simply substitute for real-content spend?

Success criteria: ≥15% of active fans on AI tier within 90 days; avg $50+/month AI-credit spend; retention >50% at 60 days; total fan spend per creator non-cannibalized.

Estimated time: 90 days running + 4 weeks setup + 4 weeks analysis = ~5 months total.

Estimated cost: Inside the planned compute and infrastructure budget; marginal cash. Real cost is creator-time and analysis time.

What invalidation looks like: <5% of fans on AI tier OR avg AI spend <$15/month OR pure cannibalization (total per-creator spend doesn't rise). Any of these means the AI-as-additive-revenue thesis fails for at least the early-cohort segment.
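The gap between the success and invalidation thresholds above is wide, which is what makes this experiment decisive. A sketch of the implied per-creator AI revenue at each threshold (the 1,000-active-fan cohort size is an illustrative assumption, not a figure from the plan):

```python
def monthly_ai_revenue(active_fans: int, adoption: float, avg_spend: float) -> float:
    """Gross monthly AI-credit revenue for one creator's fan base."""
    return active_fans * adoption * avg_spend

def is_additive(pre_total: float, post_total: float) -> bool:
    """Cannibalization signal: AI revenue counts as additive only if
    total per-creator fan spend rose versus the pre-launch baseline."""
    return post_total > pre_total

# Success thresholds: >=15% adoption at >=$50/month average spend.
success = monthly_ai_revenue(1_000, 0.15, 50.0)   # $7,500/month
# Invalidation thresholds: <5% adoption at <$15/month average spend.
failure = monthly_ai_revenue(1_000, 0.05, 15.0)   # $750/month
```

A 10x spread between the two outcomes means the 90-day data should land clearly on one side; the cannibalization check then decides whether even a "success" reading is real new revenue.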

Risk Highlights

Critical risks to monitor

| Risk | Likelihood | Impact | Mitigation |
| --- | --- | --- | --- |
| Payment processor de-banks | Medium | Catastrophic | Multi-processor redundancy; rigorous consent posture documented (Experiment 2) |
| Creator participation fails to materialize | High (because unvalidated) | Catastrophic | Front-load creator interviews (Experiment 1) before deep capital commitment |
| Vylit or Fanvue ships a consent-first, explicit-allowed product within 12 months | Medium | High | Speed of cofounder hires; competitive monitoring; differentiation on architectural commitments |
| ML stack costs exceed pass-through pricing | Medium | High | ML Lead owns the cost/latency/quality curve; conservative initial pricing |
| TAKE IT DOWN Act enforcement creates a platform-level operational shock | Low | Medium | 48-hour takedown SLA built into the platform from day 1; Compliance Lead owns enforcement readiness |
| State-level AI likeness law creates a compliance gap | Low-Medium | Medium | Federal floor + California compliance + monitoring posture |

Risks that are real but well-handled in the plan

  • 2257 record-keeping (Compliance Lead seat funded; system selected)
  • Mastercard / Visa rules on AI-generated adult content (Likeness's design IS the compliant posture)
  • Model leakage (architecture commitments + ML Lead seat; threat model documented in ML brief)

Risks named but plan-level under-resourced

  • Litigation reserve: $170K reserve is one bad legal month, not deep insurance against multi-quarter litigation. Bridge round contingency is implied.
  • CEO and Compliance Lead recruiting risk: both have specialty profiles where market rate is roughly double cofounder-modest cash. The team is exposed to slow recruiting.

Kill Criteria

These are the specific conditions that should trigger stop or pivot. Each ties to an experiment.

  1. ≤3 of 10 creators express willingness in Experiment 1. Pivot toward Variation B (defensive takedown service first, monetization second) or stop.
  2. All four adult-friendly processors decline based on category in Experiment 2. Stop. The category is not viable through legitimate processors.
  3. Concierge cohort AI-tier adoption stays below 5% at 90 days (Experiment 3). Pivot away from AI-as-primary-revenue thesis toward AI-as-defensive-feature thesis.
  4. A funded direct competitor (consent-first + explicit-allowed + verified-creator-licensed) launches within 6 months of Likeness funding close. Re-evaluate competitive position; possibly accelerate or pivot to a niche within the niche.
  5. Concierge creator total monthly fan spend cannibalizes (AI substitutes for real content) rather than additively grows. Pivot pricing model; AI as packaged-with-subscription rather than separate tier.
  6. Compliance & Legal Lead or CEO recruiting fails at cofounder-modest cash within 90 days. Either raise additional capital, reduce team size, or stop. Ten months of runway with two vacant seats is not a viable plan.
  7. Federal regulatory shock — e.g., a federal ban on AI-generated explicit content of real performers regardless of consent. Shut down or pivot to mainstream creator vertical.

Strategic Connections

  • Experiment 1 directly resolves the load-bearing unknown documented in 01-discovery/target-audience.md.
  • Experiment 2 maps to the named #1 risk in 01-discovery/raw/regulatory.md and the founder brief.
  • Experiment 3 tests the most leveraged variable in 05-financial/revenue-model.md.
  • Kill criterion 4 connects to the competitive analysis in 01-discovery/competitor-landscape.md (Vylit window, Fanvue threat).

Flags

Red Flags:
  • Don't start spending heavily on Experiments 2 or 3 until Experiment 1 produces signal. Sequencing matters; investing $200K in processor BD before knowing whether 7 creators will participate is the wrong order.

Yellow Flags:
  • Experiment 1 is gated on the Creator Ops cofounder hire. If that hire doesn't close within 90 days, the validation timeline slips.
  • All three experiments together take ~6-8 months from funding close to completion. That's roughly a third of the 18-month runway. Front-loading them is non-negotiable.

Sources

  • 01-discovery/target-audience.md — discovery gap analysis
  • 01-discovery/raw/regulatory.md — processor risk evidence
  • 05-financial/revenue-model.md — sensitivity analysis identifying the leveraged variable
  • docs/reviewer-checklist.md — interview spine
  • docs/founder-brief.md and docs/budget.md — original strategy and capital plan