Likeness: a founder brief¶
The idea in one paragraph¶
Likeness is a subscription platform where verified adult creators monetize controlled access to their own likeness. Creators upload real photos and videos like they would on OnlyFans, and they can also opt in to having a private AI model of themselves trained on the platform. Subscribers use that model to generate images within rules the creator sets, submit favorites for review, and access galleries the creator has approved. Every output is watermarked, tied to a license, and revocable. Creators own the rights to their likeness throughout. The platform never lets the model leave its servers.
The shorter version: an OnlyFans-style monetization layer for the AI version of yourself, where the creator stays in charge of every decision.
Why this idea exists¶
Two things are true at the same time right now.
First, the behavior is already happening. People are training image and video models on adult creators without permission and generating content with them. The tools get easier every month. Some of this content gets sold on gray-market sites; most of it circulates in private Discord servers and forums. Creators have no consent, no control, no revenue, and limited recourse. The technology is not going back in the box.
Second, creators in the adult industry are facing a parallel problem with their physical workplaces. Some venues are pushing performers to sign agreements that hand over likeness rights in perpetuity, sometimes as a condition of working there. There is an active labor movement pushing back on this. The demand in those fights is straightforward: workers want to keep ownership of their own image, set the terms of its use, and revoke those terms if the relationship goes bad.
Likeness is, in a real sense, the same demand turned into infrastructure. The premise is that if a creator's likeness is going to be used to generate AI content — which is happening regardless — the creator should be the one licensing it, setting the rules, approving the outputs, and earning the money.
What makes this defensible¶
The version of this idea that gets sued, deplatformed, and morally torched is "deepfakes for everyone." That is not the company.
The defensible version is a consent-first licensing platform where verified adult creators sell controlled access to their own likeness, approve derivative AI content individually, and retain the right to revoke at any time.
That distinction determines almost everything else: who can be on the platform, what content is allowed, how payments flow, what the legal posture is, and which conversations with regulators and processors are survivable.
The product¶
What the creator gets¶
A creator account on Likeness includes the monetization tools you'd expect from OnlyFans: real photos and videos, subscription tiers, tips, custom requests, pay-per-view drops, direct messaging.
On top of that, the creator can opt in to an AI likeness layer:
- Upload curated source material to train a private likeness model.
- Define what the model is allowed to generate — explicitness levels, allowed and blocked categories, scene types, wardrobe and setting rules, and whether collaborations are permitted at all.
- Decide who can use the model: all subscribers, a specific tier, or only fans the creator has individually approved.
- Decide what subscribers can do with outputs: view only, view and download with watermark, submit for creator review, or some combination.
- Run an approval queue for fan submissions. Approved generations can be added to a public gallery, sold as PPV, or kept private. Rejected ones are blocked. Patterns the creator doesn't like can be banned at the prompt level.
- Revoke at any time. Pause the model entirely. Disable specific categories. Take the gallery down. Cut off a specific subscriber.
The model never leaves the platform. There is no version of "export my LoRA" or "give me the weights." That is non-negotiable, because the moment weights leave, control is gone forever.
What the fan gets¶
Fans subscribe to creators they already follow. Likeness is not a discovery platform; the creator brings the audience. The fan picks a tier, and depending on the tier they get:
- Access to the creator's real content.
- Access to the creator's approved AI gallery.
- A generation interface where they can use the creator's model within the creator's rules.
- The ability to submit favorites to the creator, optionally with a tip attached.
- The ability to download approved private generations, watermarked.
The high-end version is a self-insert tier. A fan can verify their own identity, train a small private likeness model of themselves, and — if the creator they subscribe to has opted in — generate scenes that include both. This is the most parasocially intense version of the product and also the most operationally complex. It launches later, not first.
The fan never sees raw weights. They never get an API. They generate through a controlled interface where every prompt is checked against the creator's license before inference runs.
What the controls actually look like¶
This is where most "AI consent" pitches collapse, so it's worth being concrete.
A license is a structured object the creator configures. It includes things like:
- Allowed content categories (solo, partnered with verified collaborators, fan-insert, etc.)
- Explicitness ceiling
- Blocked categories — a creator-controlled list, plus categories the platform itself bans regardless
- Public gallery: yes / no / approval required
- Downloads: yes / no / watermarked / paid tier only
- Collaborations: closed / approved-only / open to verified creators
- Per-fan permissions and bans
- Revocation status
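To make that concrete, here is a minimal sketch of what the license object could look like, assuming a Python service. Every name and field below is hypothetical, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class GalleryPolicy(Enum):
    CLOSED = "closed"
    APPROVAL_REQUIRED = "approval_required"
    OPEN = "open"


class DownloadPolicy(Enum):
    NONE = "none"
    WATERMARKED = "watermarked"
    PAID_TIER_ONLY = "paid_tier_only"


class CollabPolicy(Enum):
    CLOSED = "closed"
    APPROVED_ONLY = "approved_only"
    VERIFIED_OPEN = "verified_open"


@dataclass
class LikenessLicense:
    """Creator-configured rules, checked before every generation request."""
    creator_id: str
    allowed_categories: set[str]      # e.g. {"solo", "fan_insert"}
    blocked_categories: set[str]      # creator's own blocklist
    explicitness_ceiling: int         # 0 = non-explicit ... N = most explicit tier
    gallery: GalleryPolicy
    downloads: DownloadPolicy
    collaborations: CollabPolicy
    fan_overrides: dict[str, bool] = field(default_factory=dict)  # fan_id -> allowed
    revoked: bool = False             # flipping this stops all generation
```

The point of keeping it a small, explicit structure is that every control in the list above maps to one field a creator can read, change, and audit.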
Every generation request runs through that license before it gets near a model. If the prompt fails the check, the request fails. The creator can change the license at any time and the change applies forward immediately.
The platform layers its own non-negotiable rules on top: no minors, no age-ambiguous content, no public figures who haven't onboarded as creators, no third-party reference uploads, no nonconsensual scenarios, no "leaked tape" framing, no impersonation. These are not creator-configurable. They are platform floor.
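The enforcement order matters: platform floor first, creator license second, with the license fetched fresh on every request so changes apply forward immediately. A minimal sketch of that check, reusing the hypothetical LikenessLicense above:

```python
# Categories the platform bans regardless of creator settings (illustrative list).
PLATFORM_FLOOR_BLOCKED = {
    "minor", "age_ambiguous", "third_party_reference",
    "nonconsensual", "leaked_tape_framing", "impersonation",
}


def check_request(creator_license: LikenessLicense, fan_id: str,
                  categories: set[str], explicitness: int) -> bool:
    """True only if the request clears the platform floor and the
    creator's current license. Runs before any inference."""
    # Platform floor first: not creator-configurable.
    if categories & PLATFORM_FLOOR_BLOCKED:
        return False
    # Creator controls: revocation, per-fan ban, categories, explicitness.
    if creator_license.revoked:
        return False
    if creator_license.fan_overrides.get(fan_id) is False:  # explicit per-fan ban
        return False
    if categories & creator_license.blocked_categories:
        return False
    if not categories <= creator_license.allowed_categories:
        return False
    if explicitness > creator_license.explicitness_ceiling:
        return False
    return True
```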
The honest part about revocation¶
A creator can revoke a license. The platform can stop generation, take down the public gallery, cut off subscribers, watermark everything that was downloaded, and pursue takedowns when revoked content shows up elsewhere.
What the platform cannot do is make files that already left the system disappear. If a fan downloaded a watermarked image last week, that image still exists on their device. The platform can hash it, watermark it, prove provenance, help with takedowns, and ban the user — but it cannot delete what's already on someone's hard drive.
Being upfront about this with creators is more valuable than overpromising. The pitch is "more control than you have today, and a serious enforcement toolkit." It is not "perfect rights management." Anyone selling perfect rights management on the internet is lying.
Why creators might actually want this¶
The value proposition for someone working in the field looks roughly like this:
- The deepfakes are happening anyway. This puts the creator on the supply side of that economy instead of the victim side.
- It is a new revenue line on top of existing real content, not a replacement.
- It is a worker-controlled answer to perpetual-likeness contracts. The license belongs to the creator and is revocable.
- It is a structured way to enforce against unauthorized AI use of their image, because the platform has provenance, hashes, and watermarks for everything legitimate, which makes everything else easier to identify and take down.
- The creator doesn't have to learn to train models, run inference, or deal with the technical side. They review submissions and set rules.
This is the part I most want to test by talking to people in the industry. I have a hypothesis about what would feel respectful, useful, and worth a working creator's time, but I don't have ground truth, and the difference between hypothesis and ground truth is the whole company.
Business model¶
The money flows the way it does on OnlyFans, plus a layer for compute.
- Subscriptions, tips, PPV, and custom requests work the same way they do elsewhere. The platform takes a percentage of the creator's earnings. Industry standard is around 20%.
- AI generation costs the fan compute credits. The creator can set a markup on top of platform compute cost. This is the new revenue line — it benefits the creator without requiring them to produce more content. A worked example of the math follows this list.
- Creators can charge for submissions to their approval queue. Fans pay for review; creators choose what to publish.
- Optional premium creator tools — analytics, takedown monitoring, more sophisticated approval workflows — either as paid add-ons or part of revenue share.
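Here is that worked example of the compute-credit math, with every number invented for illustration, including the assumption that the platform's revenue share applies to the creator's markup:

```python
# Illustrative compute-credit math; every number here is made up.
platform_compute_cost = 0.04   # platform's cost per generated image, USD
creator_markup = 0.06          # creator's markup per generation, USD
platform_rev_share = 0.20      # platform's cut of creator earnings (assumed
                               # here to apply to the markup as well)

fan_price = platform_compute_cost + creator_markup        # $0.10 per image
creator_take = creator_markup * (1 - platform_rev_share)  # ~$0.048 per image

# A $50/month generation tier at these numbers buys ~500 images and
# pays the creator ~$24/month without producing any new content.
images_per_month = 50 / fan_price
creator_monthly = images_per_month * creator_take
print(round(images_per_month), round(creator_monthly, 2))  # 500 24.0
```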
A reasonable starting tier structure for a fan might look like:
- Real content only — $15/month
- Real content + AI gallery viewing — $25/month
- Real content + AI generation with monthly credits — $50/month
- Premium generator with submission privileges — $100/month
- Self-insert tier — $200/month, eventually
These are illustrative. Every creator sets their own.
Legal and compliance reality¶
This is where most adult-AI ideas die, so it's worth being direct.
Age and identity verification. Every creator needs government ID, liveness verification, and ongoing recordkeeping. Federal recordkeeping requirements for explicit content involving real performers exist (18 U.S.C. § 2257), and the conservative posture is that synthetic content depicting a real likeness should be treated the same as content depicting that performer directly. The same applies to anyone using the self-insert feature.
Nonconsensual intimate deepfake laws. U.S. law is moving in a clear direction: nonconsensual intimate AI imagery is increasingly regulated and increasingly illegal at both federal and state levels. A consent-first platform actually benefits from this trend — it is the legitimate alternative to the gray market.
CSAM is zero tolerance. AI-generated or altered child sexual abuse material is criminal under expanding state and federal statutes. The platform must hard-block minors, age-ambiguous content, school settings, and "barely legal" framing. There is no edge case to litigate.
Payment processors are the single hardest operational problem. Stripe and PayPal will not work. The platform needs adult-friendly high-risk processors, redundancy across multiple processors, conservative content rules, low chargeback rates, and visible compliance infrastructure. This actively shapes the product. Categories like nonconsent fantasy or incest roleplay — allowed on some adult platforms — have to be off the table here. Processors will pull the plug, and losing payments ends the company overnight.
International regulation. Launching in the U.S. with a verified-adults-only posture is the safe starting position. UK and EU regulation around deepfakes and platform safety is real and adds compliance overhead for those markets. Geofence at launch and expand jurisdiction by jurisdiction.
The pattern across all of this: compliance is not a slide in the deck. It is a feature of the product. The platforms that survive in this category are the ones that treat trust and safety as a profit center, not a tax.
The biggest risks, named honestly¶
Payment processors pulling out. This kills the company. Mitigation: conservative content rules, multiple processors, visible audit logs, a reserve fund, and a legal posture that holds up to processor due diligence.
Misuse by fans. People will try to upload reference photos of celebrities, exes, and random people. The platform's hard rule is no third-party reference uploads at all. Every face in the system is verified. Face-matching catches attempted violations. Repeat offenders are banned and reported where required.
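One plausible shape for that face-matching step, sketched with numpy over precomputed embeddings. The embedding model behind them is an assumption (any off-the-shelf face embedder could produce them), and the threshold is a placeholder to tune:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # placeholder; tune on a labeled validation set


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def screen_upload(upload_embedding: np.ndarray,
                  verified_embeddings: dict[str, np.ndarray]) -> str | None:
    """Return the verified identity the uploaded face matches, or None.

    verified_embeddings maps verified account IDs to face embeddings
    computed at onboarding. An upload that matches no verified identity
    is treated as a third-party reference and rejected outright.
    """
    best_id, best_score = None, SIMILARITY_THRESHOLD
    for identity, embedding in verified_embeddings.items():
        score = cosine_similarity(upload_embedding, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

The design choice worth noting: the check is allowlist-shaped. An upload is accepted only when it positively matches a verified account, not merely rejected when it matches a known-bad face.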
Creators not trusting the platform. This is the biggest non-legal risk. If creators feel the platform is going to replace them with AI versions of themselves, the product fails. The mitigation is structural: creators own the rights, the model never leaves the platform, the license is revocable, the logs are transparent, and the creator's earnings on AI generation are higher than what the gray market currently gives them, which is zero.
Model leakage. If weights leak, trust dies. Mitigation is never exposing weights, isolating model storage, signed model registries, access logs, encryption, and per-creator adapter separation. The technical bar is high but well-defined.
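A minimal sketch of the signed-registry piece, using only the standard library; key management, storage isolation, and access logging are assumed to sit around it:

```python
import hashlib
import hmac
from pathlib import Path


def sign_weights(path: Path, registry_key: bytes) -> str:
    """HMAC tag over the SHA-256 of a per-creator adapter file. The
    registry stores the tag when the adapter is written."""
    digest = hashlib.sha256(path.read_bytes()).digest()
    return hmac.new(registry_key, digest, hashlib.sha256).hexdigest()


def verify_weights(path: Path, registry_key: bytes, expected_tag: str) -> bool:
    """Inference hosts recompute and compare before loading; a mismatch
    means the file was swapped or tampered with and must not be loaded."""
    return hmac.compare_digest(sign_weights(path, registry_key), expected_tag)
```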
Output leakage. Fans will try to leak approved generations. Mitigation is buyer-specific watermarks, perceptual hashes, takedown automation, and account bans. This is the same problem OnlyFans has, just more so.
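A hedged sketch of the buyer-specific part: building the signed payload that a watermark embedder would carry. The embedding and decoding steps themselves are assumed and out of scope here:

```python
import hashlib
import hmac
import json
import time


def watermark_payload(creator_id: str, fan_id: str, generation_id: str,
                      signing_key: bytes) -> dict:
    """Build a signed, buyer-specific payload for the watermark embedder.

    If a watermarked file surfaces off-platform, decoding the payload
    and checking the signature identifies the account that downloaded it.
    """
    body = {
        "creator": creator_id,
        "fan": fan_id,
        "generation": generation_id,
        "issued_at": int(time.time()),
    }
    message = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(signing_key, message, hashlib.sha256).hexdigest()
    return body
```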
Regulatory shift. Laws around AI likeness are moving. The mitigation is staying narrow, over-complying, maintaining strong consent records, and treating legal counsel as core staff rather than an outside vendor.
What I would build first¶
I do not want to build the full platform on day one. The right v0 is a concierge prototype.
- Recruit five to ten creators who are interested in the idea and willing to work closely on it.
- Train their likeness models manually, with their direct involvement in approving training data and test outputs.
- Build a basic fan generation interface for still images only. No video. No voice. No collaborations yet.
- Build the approval queue and the license engine.
- Run subscriptions and credits through an adult-friendly processor.
- Watermark every output.
- Manually moderate everything that needs moderating.
The point is to learn three things:
- Do fans pay for authorized AI access at a rate that works economically?
- Do creators feel the workflow respects them and is worth their time?
- Does the moderation and compliance load scale with revenue, rather than growing faster than it?
If those three answers are yes, yes, and yes, the platform is real. Then add video, then collaborations, then self-insert, then everything else, in that order.
Total cost of a serious MVP — engineering, compliance, infrastructure, model pipeline, trust and safety operations, and initial creator acquisition — is probably in the range of $750K to $1.5M. The model training itself is not the expensive part. The expensive parts are everything around it: legal, payments, moderation, identity infrastructure, and the operational work of getting compliance right. Bare-bones idle infrastructure runs in the low five figures per month. Real cost scales with inference volume, and video, when it eventually arrives, is genuinely expensive.
What I am looking for in cofounders¶
This is a six-person company at the start, give or take. Roles I think need to exist from day one:
- Founding CTO / Product. That is me. Direct CTO experience; owns the stack, infrastructure, product direction, and UX until the team is mature enough to bring on a separate design hire. The decision not to lead fundraising is deliberate — recruiting a CEO who can do that job well is the first cofounder priority.
- Founding CEO. Narrow scope: fundraising, investor relations, BD, and business strategy. Has raised before; ideally with real reps against adult-friendly high-risk processors. Comfortable being the business face while a technical cofounder owns vision and product.
- Compliance and legal lead. Adult industry experience, AI policy literacy, and a willingness to make compliance a core product rather than a checkbox. Not an outside lawyer — an inside leader.
- ML lead. Comfortable with image-generation pipelines, fine-tuning, isolated inference, model registries, and the security posture required to keep weights from ever leaving the platform. Narrower than a typical ML+infra lead — the CTO owns broader platform infrastructure.
- Trust and safety lead. Has built moderation systems before. Understands abuse vectors. Knows the difference between policy and enforcement.
- Creator relationships and operations. Ideally someone with existing trust in the adult industry. This person owns onboarding, support, and the creator side of the product.
If someone in the industry wants to be on the founding team in a creator-facing role, that is the version of this I would most want to build. The premise of the platform is that the creator is in charge. The company should reflect that.
Closing¶
The thing I most want to know from people in the field is whether this idea, as described, would actually be useful and respectful — or whether it has the smell of something built about creators rather than for them. The technical and legal pieces are solvable. The cultural piece is not solvable by me alone.
If a version of this exists that real creators want to use — that gives them control they do not currently have, revenue they are not currently capturing, and a defense against the unauthorized use of their image that already exists — then it is worth building. If the right answer is something different from what I have described here, I would rather hear that early than ship the wrong thing.
That is the brief.