# Feature Prioritization
- Phase: 6 — Product
- Project: likeness
- Date: 2026-05-09
- Confidence: Medium-High on the prioritization logic; Low on absolute effort estimates (no engineering team in place yet to size accurately)
## Method
Two complementary frames are applied:
- MoSCoW — drawn from the must-have / should-have / could-have / won't-have lists in mvp-definition.md. Tells the team what's optional vs. non-negotiable.
- RICE — applied to the must-have list to determine build order within the must-haves. RICE doesn't decide what's in MVP; that's a strategic call the must-have list already made. RICE decides what gets built first, second, third.
Effort is in T-shirt sizes (S = days, M = weeks, L = months, XL = multiple months). Concrete person-month numbers depend on team makeup; the relative ordering is what matters.
## RICE rationale
For an MVP that must ship the full vertical slice, RICE is mostly used to decide which features the engineering team starts on first. Reach is essentially constant (every must-have affects the same concierge cohort), so the formula reduces to: prioritize by (Impact × Confidence) / Effort with ties broken by dependency order.
- Reach = 100 (treating "all concierge users" as the unit; relative reach across must-haves is essentially equal)
- Impact = 0.25 (minimal) / 0.5 (low) / 1 (medium) / 2 (high) / 3 (massive)
- Confidence = 50% (low) / 80% (medium) / 100% (high)
- Effort = T-shirt size mapped to person-weeks (S=2, M=8, L=20, XL=40)
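The scoring above can be sketched in a few lines of Python. The constants mirror the parameter definitions in this section, and the spot-check values come straight from the build-order table below.

```python
# RICE = (Reach × Impact × Confidence) / Effort, with the parameter scales
# defined above. Reach is constant across must-haves, so relative ordering
# reduces to (Impact × Confidence) / Effort.

REACH = 100  # "all concierge users"; effectively equal for every must-have
EFFORT_WEEKS = {"S": 2, "M": 8, "L": 20, "XL": 40}  # T-shirt → person-weeks

def rice(impact: float, confidence: float, size: str) -> int:
    """Score a feature, rounded to the nearest integer as in the table."""
    return round(REACH * impact * confidence / EFFORT_WEEKS[size])

# Spot-checks against the build-order table:
assert rice(3, 1.0, "M") == 38   # #1 license engine
assert rice(3, 0.8, "L") == 12   # #2 license-gated prompt parser
assert rice(3, 1.0, "S") == 150  # #3 creator identity verification
assert rice(2, 1.0, "S") == 100  # #10 perceptual hash + signed metadata
```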
## Build order — must-haves (MVP)

Listed by feature number. Build order roughly follows RICE descending, adjusted for dependencies (see the phasing below).
| # | Feature | MoSCoW | Impact | Conf | Effort | RICE | Notes |
|---|---|---|---|---|---|---|---|
| 1 | License engine: structured creator license object | Must | 3 | 100% | M (8wk) | 38 | Foundation; everything depends on it |
| 2 | License-gated prompt parser + classifier | Must | 3 | 80% | L (20wk) | 12 | Hardest single piece; ML + compliance jointly |
| 3 | Creator identity verification (3rd-party provider) | Must | 3 | 100% | S (2wk) | 150 | Vendor integration; must be first to receive any creator |
| 4 | Per-creator LoRA training pipeline | Must | 3 | 80% | L (20wk) | 12 | ML Lead's primary work in months 1-3 |
| 5 | License-gated inference service | Must | 3 | 80% | L (20wk) | 12 | Tightly coupled to feature 4 |
| 6 | Audit log of every license decision | Must | 3 | 100% | S (2wk) | 150 | Foundational for compliance and trust |
| 7 | 2257 records system integration | Must | 3 | 100% | M (8wk) | 38 | Vendor integration (Quick2257 or similar) |
| 8 | Multi-processor billing with auto-failover | Must | 3 | 80% | L (20wk) | 12 | CEO-led BD then engineering; gating to launch |
| 9 | Watermark per output (invisible) | Must | 2 | 80% | M (8wk) | 20 | Off-the-shelf at MVP, possibly custom layer in beta |
| 10 | Perceptual hash + signed metadata + license ID | Must | 2 | 100% | S (2wk) | 100 | Standard cryptographic plumbing |
| 11 | Creator onboarding flow (the "three things to know" disclosure) | Must | 3 | 80% | M (8wk) | 30 | Brand-aligned; voice work matters here |
| 12 | Subscription tier configuration | Must | 2 | 100% | M (8wk) | 25 | Standard creator-platform feature |
| 13 | Compute credit purchase + consumption | Must | 3 | 80% | M (8wk) | 30 | Tightly coupled to feature 8 (billing) |
| 14 | Fan generation interface | Must | 3 | 80% | M (8wk) | 30 | Where the user-facing AI experience happens |
| 15 | Approval queue (creator-side) | Must | 2 | 80% | M (8wk) | 20 | Creator-side workflow for fan submissions |
| 16 | Submission fee mechanic | Must | 1 | 80% | S (2wk) | 40 | Lightweight; rides on credit and approval flows |
| 17 | PPV unlock of approved generations | Must | 1 | 80% | M (8wk) | 10 | Standard creator-platform feature |
| 18 | Creator payout flow | Must | 3 | 80% | M (8wk) | 30 | Multi-processor reconciliation |
| 19 | Revocation flow | Must | 3 | 100% | M (8wk) | 38 | The brand-defining feature; UX must be confident |
| 20 | Face matching for output verification | Must | 2 | 80% | M (8wk) | 20 | Identity-preservation enforcement |
| 21 | Pre-publication content review (creator-uploaded) | Must | 2 | 100% | M (8wk) | 25 | Operational; T&S Lead operates, engineering enables |
| 22 | Hard-block category enforcement | Must | 3 | 100% | M (8wk) | 38 | Classifier + deterministic rule + human escalation |
| 23 | Abuse detection on prompts and outputs | Must | 2 | 80% | M (8wk) | 20 | Same classifier system as feature 22 |
| 24 | Takedown intake + outbound takedown pipeline | Must | 2 | 80% | M (8wk) | 20 | Required by TAKE IT DOWN Act; brand asset |
| 25 | Fan signup + subscription flow | Must | 2 | 100% | S (2wk) | 100 | Standard |
| 26 | Creator dashboard (earnings, fan activity, audit log, license editor, revocation) | Must | 3 | 80% | M (8wk) | 30 | The creator's day-to-day surface |
| 27 | Fan gallery / approved-content view per creator | Must | 1 | 100% | S (2wk) | 50 | Standard |
| 28 | Distilled / accelerated inference variant (Flux Schnell or equivalent) | Must | 2 | 80% | M (8wk) | 20 | Cost-side requirement; not at MVP launch but pre-cohort scale |
| 29 | C2PA-compliant content credentials | Must | 2 | 100% | M (8wk) | 25 | Pre-launch; standard adoption |
| 30 | Fan age verification (US baseline) | Must | 2 | 100% | S (2wk) | 100 | Regulatory baseline |
## Recommended build order (dependency-aware)
Pure RICE order isn't shippable because some features depend on others. Suggested implementation phasing:
Phase A (months 1-3) — foundation

1. Creator identity verification (#3) — vendor integration, immediate
2. Audit log infrastructure (#6) — foundational for everything
3. Structured license object (#1) — the data model that everything else queries
4. 2257 records system (#7) — vendor integration in parallel with above
5. Per-creator LoRA training pipeline (#4) — ML Lead's primary work
6. Distilled inference variant (#28) — set up early to inform cost trade-offs
7. Watermark + perceptual hash + signed metadata (#9, #10, #29) — small ML/crypto work, can start early
Phase B (months 3-6) — vertical slice

8. License-gated prompt parser + classifier (#2) — the hardest piece; depends on #1
9. License-gated inference service (#5) — depends on #4 + #2
10. Hard-block category enforcement (#22) — same classifier system as #23, ship together
11. Abuse detection (#23) — see above
12. Face matching for output verification (#20) — depends on #5
13. Multi-processor billing (#8) — engineering after CEO secures relationships
14. Compute credit purchase + consumption (#13) — depends on #8
15. Subscription tier configuration (#12) — depends on #8
16. Fan signup + subscription flow (#25) — depends on #12
17. Fan age verification (#30) — vendor integration, in parallel
18. Fan generation interface (#14) — depends on #5 + #13
Phase C (months 6-9) — full product

19. Approval queue (#15) — creator workflow
20. Submission fee mechanic (#16) — rides on #15 + #13
21. PPV unlock of approved generations (#17) — rides on #15
22. Creator dashboard (#26) — pulls from #6, #1, #19
23. Fan gallery (#27) — pulls from #15
24. Creator payout flow (#18) — depends on #8
25. Revocation flow (#19) — touches every prior surface; ship after the rest is stable
26. Creator onboarding flow (#11) — built throughout but polished at end with brand voice
27. Pre-publication content review tooling (#21) — T&S Lead drives operationally
28. Takedown intake + outbound (#24) — required by TAKE IT DOWN Act 48-hour SLA
Phase A duration: ~3 months. Phase B: ~3 months. Phase C: ~3 months. MVP launch target: month 9.
This is faster than typical for a regulated category, and it depends on hiring all six founding seats by month 4 per the budget plan. A slip in any of those hires extends the timeline proportionally.
## Should-haves — order if scope allows
These are not gating to MVP launch but should be sized into the engineering plan as soon as the must-haves clear Phase B:
| Feature | Impact | Effort | Reason in this tier |
|---|---|---|---|
| Custom requests / 1:1 messaging | 1 | M | Creator carry-over expectation; absent at MVP creates friction |
| Creator-side analytics | 2 | M | Helps creators optimize; partial visibility shipped at MVP via dashboard |
| Creator-side takedown monitoring (off-platform) | 2 | L | Trust signal from Variation B; high-value but heavy |
| Tip / one-off fan transactions outside generation | 1 | S | Creator-economy carry-over |
| Creator-controlled gallery curation (public / fan-tier / private) | 1 | S | Granularity beyond fan tier visibility |
## Could-haves — only if trivial
| Feature | Notes |
|---|---|
| Per-output ratings / favoriting by creator | Useful for future LoRA iteration; not load-bearing |
| Creator-set per-fan generation rate limits | Beyond platform-level limits |
| Creator-to-creator messaging | Cohort coordination; can be done in Discord at MVP |
| Theming / creator-side branding of fan surfaces | Makes the platform feel less like a single-template platform; nice but not essential |
## Won't-haves (MVP, do not start)
Restated for scope discipline:
- Video generation
- Voice cloning
- Fan self-insert tier
- Public discovery / browse-creators
- Open API / developer access
- Mobile app store distribution
- Unrestricted creator collaborations
- Mainstream creator vertical (non-adult)
- International (UK, EU, others)
- Posting / open-network social features
## Effort summary

T-shirt size summary across must-haves:

- 7 × S features (~14 person-weeks)
- 19 × M features (~152 person-weeks)
- 4 × L features (~80 person-weeks)
- 0 × XL features

Total: ~246 person-weeks ≈ 4.7 person-years of engineering for must-haves alone. This is consistent with the budget assumption of CTO + ML Lead + engineering hires in months 1-9, supplemented by T&S Lead and Compliance Lead doing operational rather than engineering work.
If this number looks high, it's because regulated categories require the full vertical slice. The alternative is shipping a half-built product that can't safely go in front of creators.
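As a sanity check, the totals can be recomputed from the Effort column of the must-have table; the `sizes` string below transcribes that column in feature order (1-30).

```python
# Tally of T-shirt sizes from the must-have table (S=2, M=8, L=20 person-weeks).
from collections import Counter

sizes = (
    "M L S L L S M L M S "   # features 1-10
    "M M M M M S M M M M "   # features 11-20
    "M M M M S M S M M S"    # features 21-30
).split()

weeks = {"S": 2, "M": 8, "L": 20}
tally = Counter(sizes)                    # {'M': 19, 'S': 7, 'L': 4}
total = sum(weeks[s] for s in sizes)      # 246 person-weeks ≈ 4.7 person-years

assert tally == {"M": 19, "S": 7, "L": 4}
assert total == 246
```

Re-run this tally whenever the CTO and ML Lead re-size features in week 2, per the yellow flag below.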
## Dependencies between features
Critical paths to watch:
- License engine (#1) blocks license-gated parser (#2), inference (#5), revocation (#19), audit logs (#6).
- Multi-processor billing (#8) blocks credit purchase (#13), subscriptions (#12), payout (#18).
- Identity verification (#3) and 2257 records (#7) block creator onboarding (#11) and pre-publication review (#21).
- LoRA training pipeline (#4) blocks inference service (#5) which blocks fan generation (#14) and face matching (#20).
The hardest critical path: #1 → #2 → #5 → #14. License engine to license-gated parser to inference to fan generation. This is the primary engineering risk and should be the dedicated focus of the CTO + ML Lead through Phase B.
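These paths can be checked mechanically. A sketch using Python's stdlib `graphlib` (3.9+), with edges transcribed from the dependency list above:

```python
# Topologically sort the dependency graph to confirm the critical path
# #1 -> #2 -> #5 -> #14 is respected. Feature numbers match the table.
from graphlib import TopologicalSorter

# blocker -> features it blocks (edges from the dependency list above;
# the #2 -> #5 edge comes from Phase B: "#5 depends on #4 + #2")
blocks = {
    1: [2, 5, 19, 6],    # license engine
    8: [13, 12, 18],     # multi-processor billing
    3: [11, 21],         # identity verification
    7: [11, 21],         # 2257 records
    4: [5],              # LoRA training pipeline
    2: [5],              # license-gated parser
    5: [14, 20],         # inference service
}

ts = TopologicalSorter()
for blocker, blocked in blocks.items():
    for feature in blocked:
        ts.add(feature, blocker)  # feature depends on blocker

order = list(ts.static_order())
# The hardest critical path appears in this relative order:
assert order.index(1) < order.index(2) < order.index(5) < order.index(14)
```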
## Strategic Connections

- The build order assumes the budget hiring sequence in docs/budget.md (4 people in months 1-3, full team by month 4). A slip in the hiring sequence cascades into engineering delays.
- The "ship the full vertical slice" discipline reflects the regulatory analysis in 01-discovery/raw/regulatory.md. Half-shipping is not safe in this category.
- The MVP timeline (9 months from funding close to launch) aligns with the founder brief's concierge phase target and the validation experiments in 06-validation/validation-playbook.md.
## Flags

Red Flags:

- The MVP is ~9 months of engineering, not 8 weeks. Founder and investors should align on this expectation before pre-seed close. Selling a "concierge launch in Q4" requires Q1 funding close + immediate hires; selling "MVP at 6 months" misrepresents the build.

Yellow Flags:

- All effort sizes are estimates without an engineering team in place. Once the CTO and ML Lead are operating, re-size in week 2.
- Feature #2 (license-gated prompt parser + classifier) is the single hardest piece. The ML Lead should own this from day 1; it's the feature most likely to slip and most painful to slip.
## Sources

- 04-product/mvp-definition.md — feature list ground truth
- docs/ml-lead-technical-brief.md — engineering effort context
- docs/budget.md — hiring sequence
- 01-discovery/raw/regulatory.md — compliance-feature grounding