MVP Definition¶
Phase: 6 — Product
Project: likeness
Date: 2026-05-09
Confidence: Medium-High — MVP scope is well-grounded in research and existing planning; the hypothesis it tests is the load-bearing unknown.
Core hypothesis the MVP tests¶
The MVP is a single coordinated experiment with one compound hypothesis:
Mid-tier verified adult creators will participate in a consent-first AI likeness platform at the proposed economics (~75% net to creator after platform + processor fees), and their existing fans will adopt AI-inclusive tiers at >15% within 90 days, generating per-active-fan AI spend averaging ≥$50/month.
That sentence has three parts, each independently falsifiable:
- Creator participation — creators sign on and stay engaged through the concierge phase.
- Fan adoption — existing fans of those creators adopt AI tiers in measurable proportions.
- Unit economics — per-fan AI spend, at the adoption rates above, clears compute cost + platform overhead.
If all three validate, the closed-beta phase becomes a scale question rather than a viability question. If any of them fails, the appropriate response is documented in 06-validation/validation-playbook.md's kill criteria.
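The unit-economics leg of the hypothesis can be sanity-checked with a small sketch. Only the ~75% creator share, the 15% adoption threshold, and the $50/month spend figure come from this document; the processor fee rate and compute cost below are placeholder assumptions, not projections.

```python
# Hypothetical unit-economics check for the compound hypothesis.
# creator_share, adoption, and spend thresholds come from this doc;
# processor_fee_rate and compute_cost are assumed placeholders.

def platform_margin_per_active_fan(
    monthly_ai_spend: float = 50.0,      # hypothesis threshold ($/month)
    creator_share: float = 0.75,         # ~75% net to creator
    processor_fee_rate: float = 0.05,    # assumed processor cut
    compute_cost: float = 6.0,           # assumed inference cost/fan/month
) -> float:
    """Platform's monthly gross margin per AI-engaging fan."""
    platform_gross = monthly_ai_spend * (1 - creator_share - processor_fee_rate)
    return platform_gross - compute_cost
```

With these placeholder numbers the platform nets $4/month per AI-engaging fan; the point of the MVP is to replace every assumed input with a measured one.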
Scope: the concierge MVP¶
What's IN the MVP¶
The MVP runs the full vertical slice — creator onboarding through fan generation through revenue payout — at small scale. Not a prototype. A real, money-handling, regulator-compliant operation, just for 5-10 creators instead of thousands.
This is deliberate. Half-built MVPs that stub the compliance and processor work cannot answer the unit-economics question and cannot run safely in this category.
What's OUT of the MVP (named explicitly)¶
- Video generation — deferred per founder brief. ML cost is materially higher; quality bar is materially higher; doing it badly damages the brand. Won't have this time.
- Voice cloning — deferred. Same reasoning.
- Fan self-insert tier — operationally complex (verification of fan identity, double-license management, abuse vectors). Founder brief is correct to defer. Won't have this time.
- Public discovery / search — Likeness is not a discovery platform; the creator brings the audience. No browse-creators feature. Won't have this time.
- Open API / developer access — would create export-path attack surface. Architecturally incompatible with no-export commitment at MVP stage. Won't have this time.
- Mobile app distribution — adult content is not viable on Apple App Store / Google Play. Mobile web works fine. Won't have this time.
- Unrestricted creator collaborations — collaborations require both creators' license overlap, abuse-vector analysis, and per-collab agreements. Won't have this time; consider opt-in cohort collaborations only after concierge unit economics validate.
- Mainstream creator vertical (non-adult) — different sales motion, different processors, different brand. Won't have this time.
- International — UK / EU / other geographies require dedicated compliance investment. US-first per 01-discovery/market-analysis.md. Won't have this time.
The discipline matters: each named exclusion above will get pushed back into scope by someone (a creator, a fan, a partner, an investor) at some point in the first 18 months. Naming these exclusions out loud, with reasons, makes the pushback easier to resist.
Must-have features (for MVP to test the hypothesis)¶
These are the features without which the concierge MVP cannot operate or cannot answer the hypothesis. RICE prioritization details are in feature-prioritization.md.
Identity and verification¶
- Creator identity verification (government ID + liveness check via third-party provider; e.g., Persona / Yoti / Veriff)
- Fan age verification (US baseline; UK Ofcom-grade infrastructure deferred)
- 2257 records system integration for all real-creator depictions including AI-generated
License engine¶
- Structured creator license object — explicit categories, blocked categories, distribution rules, per-fan permissions, revocation status
- License-gated prompt parser + classifier — every prompt parses against the creator's license before any model is loaded; deterministic rules + classifier model + human escalation
- Revocation flow — immediate forward-effective; pause-all, per-category disable, per-fan ban, full takedown
- Audit log of every license decision — every approve/deny written to an append-only log per creator
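The license-engine features above can be sketched as a data structure plus a deterministic gate. This is an illustrative sketch, not the platform's schema: field names, the category model, and the assumption that prompts arrive pre-classified into categories are all placeholders; the classifier-model and human-escalation layers sit upstream and downstream of this rules layer.

```python
# Minimal sketch of a license-gated prompt check. Assumes prompts are
# already classified into content categories upstream; all field names
# are illustrative placeholders, not the real schema.
from dataclasses import dataclass, field

@dataclass
class CreatorLicense:
    creator_id: str
    allowed_categories: set[str]
    blocked_categories: set[str]
    banned_fans: set[str] = field(default_factory=set)
    revoked: bool = False          # revocation is immediate, forward-effective

def gate_prompt(license: CreatorLicense, fan_id: str,
                prompt_categories: set[str], audit_log: list) -> bool:
    """Deterministic rules layer: deny on revocation, per-fan ban, any
    blocked category, or any category outside the license. Every
    approve/deny is appended to the creator's audit log."""
    if license.revoked or fan_id in license.banned_fans:
        decision = False
    elif prompt_categories & license.blocked_categories:
        decision = False
    elif not prompt_categories <= license.allowed_categories:
        decision = False   # unrecognized categories escalate to human review
    else:
        decision = True
    audit_log.append((license.creator_id, fan_id,
                      sorted(prompt_categories), decision))
    return decision
```

The key property the real engine must preserve: the gate runs before any model is loaded, and every decision, including denials, lands in the append-only log.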
ML pipeline¶
- Per-creator LoRA training on curated source material (ML brief specifies methodology)
- License-gated inference service — creator's LoRA + face adapter + ControlNet, loaded per-request, isolated per-creator
- Distilled / accelerated inference variant for cost reasons (Flux Schnell or equivalent)
- Face matching for output verification — identity preservation against creator face vector; flag/reject mismatches
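The face-matching check reduces to an embedding comparison. A minimal sketch, assuming embeddings are produced by an upstream face-recognition model (out of scope here); the 0.85 threshold is an assumed operating point, not a validated one.

```python
# Sketch of output-verification face matching: compare a generated
# image's face embedding to the creator's reference vector and flag
# mismatches. Embedding extraction is assumed to happen upstream;
# the threshold is a placeholder operating point.
import numpy as np

def passes_identity_check(output_vec: np.ndarray,
                          creator_vec: np.ndarray,
                          threshold: float = 0.85) -> bool:
    """Cosine similarity between face embeddings; reject below threshold."""
    cos = float(output_vec @ creator_vec /
                (np.linalg.norm(output_vec) * np.linalg.norm(creator_vec)))
    return cos >= threshold
```

Outputs that fail the check are flagged or rejected before they ever reach the fan, which is what makes this a verification step rather than a quality metric.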
Provenance and watermarking¶
- Invisible watermark per output — Tree-Ring / StableSignature / equivalent; survives JPEG, screenshot, mild edit
- Perceptual hash + signed metadata + license ID attached to every generation
- C2PA-compliant content credentials on outputs
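The signed-metadata requirement can be illustrated with stdlib primitives. This is a hedged stand-in: a real pipeline would use a perceptual hash and C2PA manifests with asymmetric keys, not SHA-256 and an in-process HMAC key, but the shape (content hash + license ID + signature over the whole record) is the same.

```python
# Sketch of signed metadata attached to every generation. SHA-256 and
# HMAC stand in for a perceptual hash and C2PA signing so the block is
# self-contained; the key below is a placeholder, not how keys are held.
import hashlib, hmac, json

SIGNING_KEY = b"placeholder-signing-key"   # real key lives in an HSM/KMS

def sign_generation(image_bytes: bytes, license_id: str, creator_id: str) -> dict:
    metadata = {
        "license_id": license_id,
        "creator_id": creator_id,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    metadata["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return metadata

def verify_generation(metadata: dict) -> bool:
    """Recompute the signature over everything except the signature field."""
    unsigned = {k: v for k, v in metadata.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(metadata["signature"], expected)
```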
Creator monetization¶
- Subscription tier configuration — creator-set tier prices; multiple tiers
- Compute credit purchase + consumption — fan buys credits, consumes per generation
- Approval queue — creator reviews fan submissions; approve / reject / publish to gallery / sell as PPV
- Submission fee mechanic — fan pays to submit for review; creator captures most of the fee
- PPV unlock of approved generations
- Creator payout flow — multi-processor with redundancy
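The credit mechanic above is simple enough to sketch. Integer credits, the class shape, and the error type are assumptions for illustration; real billing would be transactional, idempotent, and backed by the multi-processor layer.

```python
# Sketch of the fan credit mechanic: fans buy credits, each generation
# consumes a per-generation cost. In-memory dict and integer credits
# are illustrative; production billing is transactional and idempotent.

class InsufficientCredits(Exception):
    pass

class CreditLedger:
    def __init__(self) -> None:
        self._balances: dict[str, int] = {}

    def purchase(self, fan_id: str, credits: int) -> int:
        """Credit a fan's balance after a successful processor charge."""
        self._balances[fan_id] = self._balances.get(fan_id, 0) + credits
        return self._balances[fan_id]

    def consume(self, fan_id: str, generation_cost: int) -> int:
        """Debit one generation's cost, refusing to go negative."""
        balance = self._balances.get(fan_id, 0)
        if balance < generation_cost:
            raise InsufficientCredits(fan_id)
        self._balances[fan_id] = balance - generation_cost
        return self._balances[fan_id]
```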
Trust & safety operations¶
- Pre-publication content review for any creator-uploaded source material (real photos / videos)
- Hard-block category enforcement — minors, age-ambiguous, public figures, third-party uploads, nonconsent, leaked-tape framing
- Abuse detection on prompts and outputs — classifier + human escalation queue
- Takedown intake and outbound takedown — TAKE IT DOWN Act 48-hour SLA; takedown pipeline for unauthorized off-platform posts of platform-generated content
Platform basics¶
- Creator onboarding flow with the explicit "three things to know" disclosure (per tone-of-voice.md)
- Fan signup + subscription flow
- Fan generation interface — prompt + license-bounded options + credit cost
- Creator dashboard — earnings, fan activity, audit log, license editor, revocation controls
- Fan gallery / approved-content view per subscribed creator
- Multi-processor billing with auto-failover — chargeback monitoring, fraud detection, retry logic
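The auto-failover requirement amounts to trying processors in priority order and falling through on failure. A minimal sketch under assumed interfaces: the `ProcessorError` type and the `(name, charge_fn)` pairing are placeholders, and real failover would also handle timeouts, idempotency keys, and per-processor health state.

```python
# Sketch of auto-failover across payment processors. The error type and
# the (name, charge_fn) interface are illustrative assumptions.

class ProcessorError(Exception):
    pass

def charge_with_failover(processors, amount_cents: int, fan_id: str):
    """processors: ordered list of (name, charge_fn) pairs; returns the
    first successful (processor_name, transaction_id)."""
    errors = []
    for name, charge_fn in processors:
        try:
            return name, charge_fn(amount_cents, fan_id)
        except ProcessorError as exc:
            errors.append((name, exc))   # record and fall through to next
    raise ProcessorError(f"all processors failed: {errors}")
```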
Nice-to-have features (Should Have, build if scope permits)¶
These extend the MVP but are not gating to hypothesis validation:
- Custom requests / 1:1 messaging — present on every other adult creator platform; carry-over functionality fans expect. Strong should-have.
- Creator-side analytics — fan adoption rates, top-performing license configurations, revenue trends. Useful for creators to optimize.
- Creator-side takedown monitoring for unauthorized off-platform AI use of the creator (the absorbed lesson from 00-intake/brainstorm.md Variation B). High-value trust signal even if low operational scale at MVP.
- Tip / one-off fan transactions outside generation (extends the OnlyFans-style mental model).
- Creator-controlled gallery curation with public / fan-tier-gated / private states.
Could-have features (defer unless trivial)¶
- Per-output ratings / favoriting by creator to inform future training iterations.
- Creator-set generation rate limits per fan (in addition to platform-level rate limits).
- Creator-to-creator messaging for cohort coordination.
- Theming / creator-side branding of their fan-facing surfaces.
Won't-have (MVP, to prevent scope creep)¶
Already named in the "Out of MVP" section above. Repeating the list here as a single line for scope-meeting convenience: video, voice, self-insert, public discovery, open API, mobile app store, unrestricted collaborations, mainstream-creator vertical, international, posting / open-network social features.
Success criteria¶
The MVP validates if all three of the following are true at end of 90-day concierge cohort run:
| Criterion | Threshold |
|---|---|
| Creator retention | ≥60% of concierge cohort still actively engaged at 90 days |
| Fan AI-tier adoption | ≥15% of subscribed fans on a tier including AI |
| Per-active-fan AI spend | ≥$50/month average across AI-engaging fans |
| Multi-processor uptime | No single-processor failure events causing creator-payout interruption |
| Compliance | Zero TAKE IT DOWN Act SLA breaches; zero hard-block category violations passing classifier |
The MVP is not validated if:
- Creator retention <40% at 90 days
- Fan AI-tier adoption <5%
- Per-fan AI spend <$15/month
- A processor de-banks the platform
- A hard-block category violation reaches publication
Borderline outcomes (40-60% creator retention, 5-15% fan adoption, $15-50/month AI spend) trigger a pivot conversation, not a pass-or-fail call. See kill criteria in 06-validation/validation-playbook.md.
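The three metric thresholds imply a simple decision function at the end of the 90-day run. A sketch of that logic only; it deliberately omits the processor and compliance criteria, which are binary operational facts rather than measured metrics, and the function shape and names are ad hoc.

```python
# Sketch of the 90-day decision logic implied by the thresholds above:
# all metrics pass -> validated; any hard-fail -> not validated;
# otherwise the borderline band triggers a pivot conversation.

def mvp_decision(creator_retention: float,
                 fan_adoption: float,
                 fan_spend_per_month: float) -> str:
    if (creator_retention >= 0.60 and fan_adoption >= 0.15
            and fan_spend_per_month >= 50):
        return "validated"
    if (creator_retention < 0.40 or fan_adoption < 0.05
            or fan_spend_per_month < 15):
        return "not validated"
    return "borderline: pivot conversation"
```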
What MVP success enables¶
If MVP validates, the immediate next phase is closed beta — expand to 50-200 creators, prove scaling motion, start building the seed-round narrative. Specifically:
- Onboard a second cohort of 50-200 creators with semi-automated rather than concierge-manual workflows
- Begin video generation R&D (deferred from MVP, planned for closed beta) with concierge cohort as opt-in pilot
- Open processor relationships beyond the initial 2-3 to a redundancy floor of 4-5 active processors
- Begin partnership conversations with creator-rights organizations and talent agencies as a Year 2 channel
What MVP failure looks like and what to do¶
Mode 1: Creator participation fails (≤3 of 10 creators say yes in pre-MVP discovery). Pivot to Variation B (defensive takedown service first, monetization second) or stop. The platform's hypothesis is wrong about creator priorities.
Mode 2: Fan adoption fails (<5% on AI tiers). The architecture and creator experience may be fine, but the fan-side product doesn't clear comparable AI girlfriend platforms on perceived value. Pivot pricing model — package AI access into base subscription rather than separate tier.
Mode 3: Unit economics fail (per-fan spend too low to clear inference cost). Either creators set markup too low (creator-side fix) or compute cost is structurally higher than projected (ML Lead-side fix). Iterate before pivoting.
Mode 4: Processor de-banks. Pre-MVP processor BD should have prevented this. If it happens, the response is the multi-processor redundancy plan kicking in, plus an honest re-evaluation of whether the platform's compliance posture is being read correctly by processor compliance teams.
Strategic Connections¶
- The MVP success criteria directly map to the experiments in 06-validation/validation-playbook.md.
- The "Out of MVP" list reflects the founder brief's deliberate scope discipline and the founder's pre-existing decisions in CLAUDE.md.
- The Must-have feature list reflects the architectural commitments in 01-discovery/raw/regulatory.md and the ML brief.
- The 90-day success threshold connects to the financial sensitivity analysis in 05-financial/revenue-model.md (15% fan adoption is the leveraged variable).
Flags¶
Red Flags:
- The MVP cannot ship without all 30 must-have features. This is heavier than typical MVPs because the category requires the full compliance stack to operate at all. The engineering plan should reflect this honestly — the MVP is roughly 6-9 months of engineering, not 8 weeks.
Yellow Flags:
- "Pre-publication content review" (feature 21) and "abuse detection with human escalation" (feature 23) are operational, not just engineering. The Trust & Safety Lead hire is gating to MVP launch, not just nice-to-have.
- Distilled / accelerated inference variants (feature 10) introduce a quality / cost trade-off. The ML Lead must own the operating point and surface trade-offs to the team rather than choosing silently.
Sources¶
- 01-discovery/raw/regulatory.md — must-have feature compliance grounding
- 01-discovery/competitor-landscape.md — out-of-scope competitive context
- docs/founder-brief.md and docs/ml-lead-technical-brief.md — MVP scope baseline
- 05-financial/revenue-model.md — success criteria thresholds
- 06-validation/validation-playbook.md — kill criteria and pivot modes