Regulatory Landscape: Consent-First AI Likeness for Adult Creators¶
Phase: 3 — Wave 1, Agent A3 (executed in main thread, sequential mode)
Date: 2026-05-09
Confidence: High (live-search data from Tier 1/2 legal sources for all major findings)
US Federal Regulations¶
TAKE IT DOWN Act (S. 146, signed into law May 19, 2025)¶
- Status: ENACTED. Federal law as of May 19, 2025. Bipartisan: 409-2 House passage. [Source: Congress.gov, Tier 1]
- Criminal prohibition: Publishing or threatening to publish nonconsensual intimate imagery (NCII), explicitly including AI-generated digital forgeries. Effective immediately upon signing.
- Platform notice-and-removal: Covered platforms must implement a notice-and-removal process and remove flagged content within 48 hours of notification. Compliance deadline: May 19, 2026.
Implication for Likeness: Platform-level compliance is mandatory. Likeness's existing takedown infrastructure satisfies this requirement for content hosted on the platform. The NEW operational obligation is handling notices about content depicting a Likeness creator that originated on third-party platforms. The platform should maintain a 48-hour-compliant takedown response for any verified creator, including content not generated on Likeness — as a feature, this is both a creator-facing benefit and a compliance posture.
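The 48-hour removal window above can be sketched as a simple deadline computation for a takedown queue. This is a minimal sketch; the function names are illustrative assumptions, not part of any actual Likeness codebase.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the TAKE IT DOWN Act's 48-hour platform
# obligation: every verified NCII notice gets a hard removal deadline.
REMOVAL_WINDOW = timedelta(hours=48)

def removal_deadline(notice_received_at: datetime) -> datetime:
    """Return the UTC timestamp by which flagged content must be removed."""
    if notice_received_at.tzinfo is None:
        raise ValueError("notice timestamp must be timezone-aware")
    return notice_received_at + REMOVAL_WINDOW

def is_overdue(notice_received_at: datetime, now: datetime) -> bool:
    """True if the 48-hour removal window has elapsed."""
    return now > removal_deadline(notice_received_at)
```

In practice a compliance queue would alert well before the deadline (e.g. at 24 and 40 hours) rather than only flagging breaches after the fact.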
NO FAKES Act (HR 2794 / Senate companion, reintroduced April 9, 2025)¶
- Status: PENDING. Reintroduced 2025; not yet enacted. Broad industry support (SAG-AFTRA, OpenAI, Disney, Google, RIAA). [Source: Congress.gov / Reed Smith / Recording Academy, Tier 1/2]
- Establishes a federal right of publicity with statutory damages.
- Term: lifetime plus 70 years post-mortem, with 10-year renewals thereafter.
- Critical for Likeness: "Rights granted may be licensed but not assigned during an individual's lifetime." This provision was specifically introduced to prevent studios/labels from extracting perpetual likeness transfers. This aligns directly with Likeness's revocation premise.
Implication for Likeness: If passed, the NO FAKES Act becomes a tailwind, not a headwind — it formalizes the licensable-but-not-assignable structure Likeness already operates under.
2257 (18 U.S.C. § 2257) — record-keeping for explicit performer content¶
- Applies to "visual depiction of an actual human being engaged in actual sexually explicit conduct."
- Wholly synthetic content (no real human depicted, no real likeness used) is exempt.
- Content depicting a real verified performer's likeness — exactly Likeness's scope — is COVERED. [Confirmed via legal sources, Tier 2]
- Required records: government ID, dated proof of age, addresses, custodian designation. Standard adult industry compliance.
Implication for Likeness: The conservative posture in the founder brief is correct. AI content of a real verified creator must be treated as 2257-covered. The Compliance & Legal Lead role is sized correctly for this.
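The required records listed above translate naturally into a minimal record schema. This is an illustrative sketch of the data shape only — field names are assumptions, and this is not a legal template or compliance advice.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of the records 18 U.S.C. § 2257 requires a producer to keep
# for each depicted performer: government ID, dated proof of age,
# addresses, and a designated records custodian.

@dataclass(frozen=True)
class PerformerRecord:
    legal_name: str
    government_id_type: str       # e.g. "passport", "driver_license"
    government_id_number: str
    date_of_birth: date
    proof_of_age_collected_on: date
    addresses: tuple[str, ...]    # current and known prior addresses

@dataclass(frozen=True)
class RecordsCustodian:
    name: str                     # the designated custodian of records
    business_address: str         # address disclosed in the 2257 statement
```

Records like these must be retrievable on inspection, which argues for a dedicated, access-logged records system rather than rows scattered across product databases.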
CSAM and AI-generated obscenity (federal)¶
- Federal CSAM statutes apply to AI-generated material that depicts a minor. Hard zero tolerance.
- Federal obscenity statutes (Miller test) and PROTECT Act remain applicable.
- No safe harbor for "barely legal," age-ambiguous, or school/youth-themed content involving real or synthetic depictions.
Implication for Likeness: Founder brief's hard-block list (no minors, no age-ambiguous, no public figures, no third-party uploads, no nonconsent, no leaked-tape framing) is correctly conservative.
US State Regulations¶
Tennessee — ELVIS Act (signed March 2024, effective July 1, 2024)¶
- Status: ENACTED. First state statute with explicit AI likeness and voice protection. [Source: Vanderbilt Law / Latham, Tier 1]
- Class A misdemeanor for AI cloning of voice without consent.
- Expanded right of publicity to cover voice + likeness for AI use.
- Record companies / contracting parties may enforce on artists' behalf.
California — AB 2602 + AB 1836 (signed September 2024)¶
- AB 2602: Voids contracts that allow creation/use of digital replicas in personal-services agreements unless (a) specific list of all proposed uses AND (b) individual is represented by counsel or labor union during negotiation. [Source: Fenwick, Tier 1]
- AB 1836: Posthumous protections.
- Critical alignment with Likeness: AB 2602's "specific list of all proposed uses" is essentially the license object Likeness's product is built around. California law has formalized the consent posture Likeness sells.
State landscape generally¶
- In 2025, lawmakers in every state introduced some form of sexual deepfake legislation. [Source: MultiState, Tier 2]
- Right of publicity expansions are uneven; ~12-15 states have explicit AI likeness statutes as of 2026; the rest rely on common-law publicity rights.
Implication for Likeness: California is the highest-stakes jurisdiction (largest creator population, strict statute). The platform's license framework is structurally compliant with AB 2602 by design — this is a feature, not a checkbox.
UK / EU¶
UK Online Safety Act — age verification (effective July 25, 2025)¶
- Status: ENFORCED. Penalties: £18M or 10% of global revenue, whichever is greater. [Source: Ofcom / law firm analysis, Tier 1]
- Acceptable verification methods: Open Banking, photo ID matching, facial age estimation, mobile network operator checks, credit card checks, digital identity services.
- As of February 2026: Ofcom investigations into 90+ services, 6 fines issued.
- VPN use jumped 1400% on day one of enforcement [Tier 2 — directional].
Implication for Likeness: UK launch requires Ofcom-grade age verification on the FAN side, not just the creator side. Adds materially to the identity-verification line in the budget if/when UK launches. Validates the founder brief's "geofence US-only at launch" position.
EU AI Act — Article 50 (enforcement August 2026)¶
- Mandates machine-readable disclosure on AI-generated content.
- C2PA / Content Credentials are the de facto standard the regulation contemplates.
- [Source: arxiv pre-print + C2PA documentation, Tier 1/2]
Implication for Likeness: Per-output watermark + signed metadata + C2PA manifest is the compliant posture. The Likeness ML brief's "watermark + perceptual hash + signed metadata" stack maps directly onto Article 50 requirements.
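The "signed metadata" leg of that stack can be sketched as a tamper-evident per-output manifest. A production system would use a C2PA manifest with an X.509-backed signature; the HMAC, key handling, and field names below are simplified illustrative assumptions, not the actual Likeness implementation.

```python
import hashlib
import hmac
import json

# Sketch: attach a machine-readable AI-disclosure manifest to each
# generated output, signed so tampering is detectable. A real deployment
# would use a managed signing key, not a hardcoded constant.
SIGNING_KEY = b"replace-with-managed-key"

def build_manifest(content: bytes, creator_id: str, license_id: str) -> dict:
    """Build and sign a disclosure manifest for one generated output."""
    payload = {
        "ai_generated": True,                 # Article 50 disclosure flag
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator_id": creator_id,
        "license_id": license_id,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(manifest: dict) -> bool:
    """Recompute the signature over the unsigned fields and compare."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

Binding the license ID into the signed payload is the useful design point: the disclosure manifest then doubles as evidence that the output was generated under an active consent license.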
Payment Processor Rules¶
Mastercard adult content policy (updated 2021, AI-extended in 2024-2025 revisions)¶
- Written consent required from all individuals appearing in adult content, whether uploaded or generated, real-time or live-streamed. [Source: Mastercard rules + corepay analysis, Tier 1]
- Identity AND age verification of all persons depicted.
- Pre-publication content review.
- Mastercard explicitly extended coverage to AI and synthetic images to prevent illegal or brand-damaging transactions.
Visa Integrity Risk Program (VIRP, April 2024)¶
- Increased merchant scrutiny: chargeback thresholds, content moderation reviews, bank-level due diligence. [Source: Tier 2]
- Five required policies: Content Management, Age Verification, Complaint & Removal, Third-Party Agreement, Chargeback/Fraud Mitigation.
Adult-friendly processor landscape¶
- Active processors as of 2025: CCBill, Segpay, Verotel, Epoch, Paxum, plus crypto gateways (NowPayments, etc.). [Source: Multiple Tier 3 industry guides]
- Processors do approve AI-based adult platforms but with elevated diligence.
- Civitai's payment processor cut the platform off in May 2025 over nonconsensual content — a direct precedent. [Source: MIT Tech Review, Tier 1]
Implication for Likeness: Likeness's product is structurally a better fit for adult-friendly processors than the gray-market AI platforms — written consent on every depicted person, identity verification, pre-publication review. This should be a CEO-led BD asset: the Likeness compliance posture is a processor-relationship advantage.
Enforcement & Precedent¶
- Civitai processor de-banking (May 2025) — see above.
- The Take It Down Act has not yet generated reported prosecutions as of the search cutoff, but it is criminally enforceable now.
- State-level deepfake enforcement actions are emerging, particularly under the TN ELVIS Act and CA AB 2602.
- Class actions against celebrity deepfake sites are accelerating. [Tier 2 — multiple sources]
Compliance Cost Estimate¶
Per the founder budget at docs/budget.md:
- Legal & professional services: $130K (entity, agreements, AI/likeness counsel, adult industry counsel, insurance, trademarks)
- Compliance & T&S infrastructure: $35K (2257 records system, classifier tooling, watermarking services, external audit)
- Total: $165K of $1.5M raise (~11%)
This is plausible for the concierge phase but lean. A single contested takedown or processor-litigation event could double the line. The $170K reserve provides roughly one bad month of breathing room.
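The arithmetic above checks out as a simple sanity calculation against the quoted budget figures:

```python
# Sanity check of the compliance line items against the raise,
# per the figures quoted from docs/budget.md (all values in $K).
legal_k = 130          # legal & professional services
compliance_k = 35      # compliance & T&S infrastructure
raise_k = 1_500        # total pre-seed raise

total_k = legal_k + compliance_k            # 165
share_pct = round(total_k / raise_k * 100)  # ~11% of the raise
```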
Risk Assessment¶
| Risk | Level | Comment |
|---|---|---|
| Federal regulatory ban of legitimate consent-first AI likeness | Low | Trend is enabling, not banning. NO FAKES Act if passed is favorable. |
| 2257 enforcement against Likeness | Low if compliance is rigorous | Conservative posture is correct. Compliance Lead must own this. |
| State AI likeness law non-compliance | Low | Likeness's design IS the compliant posture (esp. CA AB 2602). |
| Mastercard/Visa rules tightening further | Medium-High | Active trend. Not company-killing but adds cost. |
| Payment processor de-banking | Medium-High | Civitai precedent. Multi-processor redundancy is mandatory. |
| UK/EU compliance cost shock if expanding | Medium | Geofence US-first is correct. UK/EU adds 15-25% to compliance ops. |
| Platform-level takedown obligation under Take It Down Act | Low (operationally) | Likeness's takedown stack already addresses this. |
Overall regulatory risk: MEDIUM, structurally manageable. Likeness's product-architecture choices align with the regulatory direction. The practical risk concentration is at the payment processor layer, not the regulatory body layer.
Flags¶
Red Flags:
- None at the regulatory level. Likeness is structurally aligned with US federal direction (Take It Down Act, NO FAKES Act if passed) and California state law (AB 2602). The platform's design IS the compliant posture.
Yellow Flags:
- NO FAKES Act has not yet passed. Likeness's positioning leans on a federal-right-of-publicity narrative that is currently aspirational, not law. Be careful not to overclaim federal alignment in investor materials.
- Civitai precedent is real. Even with strong consent posture, a single bad-actor incident detected by a payment processor's compliance team can cause de-banking. Multi-processor redundancy and explicit content-moderation logging are non-negotiable.
- UK/EU expansion is much more expensive than US-first launch. Pre-seed budget does not contemplate UK age-verification compliance. Year 2 expansion will require dedicated compliance investment.
Sources¶
- Take It Down Act (Congress.gov) — Tier 1
- Skadden: Take It Down Act analysis — Tier 1
- NO FAKES Act 2025 (Congress.gov) — Tier 1
- Vanderbilt Law: Tennessee ELVIS Act — Tier 1
- Fenwick: California AI likeness laws — Tier 1
- Mastercard Rules 2025 — Tier 1
- MIT Technology Review: Civitai processor cutoff — Tier 1
- Ofcom UK Online Safety Act guidance — Tier 1