Runtime — SOFIA's first implementation uses Claude Code and CLAUDE.md files. The concepts described here (persona, isolation, artifacts) are provider-agnostic — only the runtime layer is provider-specific. Some sections describe Claude Code specifics. Later versions will support other providers.

Field Feedback


Textbook case — ADR-051: when the architect says "not now"

Productive friction between a dev and an architect. Nobody is wrong.


The context

Katen project, v0.21. The developer (Axel) proposes an ADR to add concurrent execution via Web Workers to the engine. The ADR is solid: theoretical grounding (Petri nets, independent transitions), clean design (worker pool, fireable partition), opt-in.

What happens

The architect (Mira) reviews the ADR and recommends Deferred.

The review is tough: 5 recommendations, 3 of them high priority.

Why this is a good example of friction

The dev isn't wrong. The ADR anticipates a real need. Concurrency will be necessary when the engine handles heavy compositions (monitoring 200 sources, ML compute). The design is ready.

The architect isn't wrong. The ADR adds complexity to the engine core for a need that doesn't exist yet. The project principles say "make it work, make it right, make it fast — in that order". We're not at the "fast" stage.

The tension produces a better decision:

Without the review, the ADR could have been implemented too early, adding complexity to a player mid-refactoring. Without the ADR, the concurrency need would not have been formalized and would have arrived as an emergency later.

What this illustrates

  1. Constraints create friction — Mira (architect) doesn't code, so she can't "let an ADR slide" to move faster. She's forced to challenge it on principles.
  2. The dev escalates, the architect filters — Axel (dev) anticipates a technical need. Mira confronts it with the roadmap and principles. Both perspectives are necessary.
  3. Deferred ≠ Rejected — the decision isn't "no" but "not now, and here's what needs to be fixed when the time comes". Axel's work isn't lost.
  4. The orchestrator decides — the orchestrator reads the review, evaluates, decides. The personas exposed the tension. The orchestrator resolves it.

The pitfall avoided

Without friction: the dev implements concurrency in v0.22, the engine cleanup in v0.23 breaks the player, concurrency has to be rewritten. Two months of work lost.

With friction: the ADR waits for the player to stabilize. When reactivated, the design will be better and the player will be clean.


Field experience report — Challenger pattern

One producer, N challengers. Each on their axis. Each with blocking rights.


The pattern

One persona produces. Other personas intervene to verify quality on their axis of expertise, without producing themselves. Each challenger has blocking rights on their axis. The orchestrator arbitrates and decides.

This is distinct from peer friction (two personas at the same level who contest). The challenge is asymmetric: one produces, others verify.

Instances observed on Katen

Product chain (code)

| Role | Persona | Axis |
| --- | --- | --- |
| Producer | Axel | Code, implementation |
| Challenger | Mira | Architecture, ADR coherence |
| Challenger | Léa | Formal invariants, rigor |
| Challenger | Nora | UX, user flows |

One producer, three challengers. Maximum intensity — it's the key product.

Editorial chain (blue book)

| Role | Persona | Axis |
| --- | --- | --- |
| Producer | Winston | Writing, narrative |
| Challenger | Mira | Structure, argumentative coherence |
| Challenger | Léa | Academic references, facts |
| Challenger | Marc | Positioning, tone |
| Challenger | Nora | UX of published deliverables |

Multi-format production chain

| Role | Persona | Axis |
| --- | --- | --- |
| Producer | Sofia | PDF, PPTX, web, social media |
| Challenger | Nora | UX, accessibility |

Properties

The producer produces, the challengers verify. Not the other way around.

Each challenger challenges on their own axis, not on everything. The architect doesn't challenge UX. UX doesn't challenge architecture.

The orchestrator decides whether a block is lifted or maintained.

The cost is linear, not combinatorial.
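The linearity claim is just arithmetic: N challengers around one producer means N review relationships, while N peers who all contest each other means N(N-1)/2. A quick sketch:

```python
# Review relationships: one producer with n challengers (challenger pattern)
# versus n peers who all contest each other (peer friction everywhere).
def challenger_reviews(n: int) -> int:
    return n  # each challenger reviews the single producer

def peer_reviews(n: int) -> int:
    return n * (n - 1) // 2  # every unordered pair of peers

for n in [3, 4, 7]:
    print(n, challenger_reviews(n), peer_reviews(n))
```

At 3 personas the difference is negligible (3 vs 3); at 7 it is already 7 vs 21.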

Academic signal

Huang et al. (2025) — Resilience of Multi-Agent Systems to Untrustworthy Agents (arXiv:2408.00989) — measure the resilience of multi-agent topologies against untrustworthy agents. The hierarchical topology (central coordinator + specialized agents) loses only 5.5% of performance with failing agents, versus 10% to 24% for flat topologies (debate, relay).

Limitation: the study is about pure multi-agent (AI↔AI), without a human at the center. The challenger pattern in SOFIA is human↔AI orchestration — the orchestrator arbitrates, not a coordinator agent. This is a convergent signal (hierarchical topology is resilient), not a validation of our method. Nobody has measured this pattern with a human orchestrator.

For your project

The challenger pattern emerges naturally when a persona starts producing deliverables that impact multiple dimensions. A few rules:

Give each challenger one precise axis — "verify reference rigor" or "verify accessibility".

When a challenger blocks, the orchestrator decides, but the producer doesn't push through.


Context transfer post-reorganization

Creating personas isn't enough. You need to transfer what they need to know.


The case

Split of a 7-persona instance into 3 instances. The diagnosis: the problem wasn't the bus saturating — it was persona granularity (two professions in the same context). The split fixes granularity. But the 6 new personas start without context — no history of decisions, studies, or failures.

This is exactly the onboarding problem for a human joining an existing team.

What we observed

  1. The first transfer worked. Mira produced a structured note (9 studies + figures, with what/where/why/priorities). Aurele was able to start immediately.
  2. The transfer wasn't planned in the migration plan. The target study described the topology, personas, scripts. Not the context transfer. We discovered it after the fact.
  3. The mapping isn't 1:1. One sender can feed multiple recipients (Mira → Aurele + Emile + Livia). The orchestrator identified a missing transfer that the architect hadn't seen.
  4. Self-transfer is a special case. When a persona changes instance but keeps their name (Marc), they must document what changes in their scope themselves. No external note — reflective work.

Transfer protocol

When a scope changes hands:

  1. The former owner produces a structured note — what, where, why, what stays with them. Free format, mandatory content.
  2. The orchestrator checks completeness — are all knowledge flows covered? The sender/recipient mapping isn't obvious.
  3. Distinguish operational context from history — files transfer. The "why we made this choice" is harder to capture. Sessions contain this context but aren't structured for transfer.

Claude memory

Claude project memory (~/.claude/projects/) doesn't transfer automatically during a split. After a reorg, stale memory can mislead the new personas.

Sign: a persona who "remembers" decisions made in a scope that's no longer theirs. The memory has become noise.
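A quick way to spot stale memory after a reorg is to list the project-memory directories and their last-modified dates. Only the ~/.claude/projects/ path comes from this report; the internal file format is provider-specific, so this sketch deliberately inspects nothing but names and dates:

```python
from pathlib import Path
from datetime import datetime

def list_project_memory(root: Path) -> list[str]:
    """Return 'name  last touched YYYY-MM-DD' for each project dir under root."""
    if not root.is_dir():
        return []
    lines = []
    for d in sorted(root.iterdir()):
        if d.is_dir():
            mtime = datetime.fromtimestamp(d.stat().st_mtime)
            lines.append(f"{d.name}  last touched {mtime:%Y-%m-%d}")
    return lines

# Default location mentioned in this report; names and dates only.
for line in list_project_memory(Path.home() / ".claude" / "projects"):
    print(line)
```

A directory untouched since before the reorg, for a scope that changed hands, is a candidate for archiving.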

The rule

Context transfer is an explicit step in any reorganization, on par with creating personas and conventions. Not planning for it = letting new personas navigate blind.


Field experience report — Editorial chain

The multi-persona editorial chain emerges naturally. You need to document it before it breaks.


The pattern

Observed during the production of the SOFIA blue book. Reproducible for any editorial deliverable: article, fragment, public documentation.

Team thinks
  → fragments, exchanges, inter-persona friction
  → raw material accumulates

Writer drafts
  → distills exchanges into structured text
  → exchanges with experts to calibrate substance

Experts validate substance
  → architect: structure, argumentative coherence
  → researcher: references, factual rigor
  → strategist: positioning, tone

Producer formats
  → layout, multi-format adaptation (PDF, web, social media)

Challenger verifies the output
  → UX, accessibility, coherence

The orchestrator validates before release
  → especially the contextualization of academic references

What we learned

Role separation holds: the producer delivers format, the strategist decides timing.

The chain has a fixed order: content → validation → production → distribution.

Academic references can end up miscontextualized and change the thesis. Only the orchestrator checks that.

Key rule

Nothing goes out without the orchestrator's review and validation before publication. The writer formulates, the researcher sources, the orchestrator validates that the use of the reference is sound.

For your project

If your SOFIA team produces public content (articles, documentation, presentations):

Define the chain explicitly: who produces, who challenges, who decides timing.

Document the chain — without documentation it will break at the first persona change.

Have the orchestrator validate the use of sources and references before release.


Field experience report — Factual contamination

The repo is not a source of truth for facts. It never was.


The problem

LLMs don't count, don't calculate durations, and favor internal coherence over external truth. An approximate data point entered once — sometimes by the orchestrator, sometimes hallucinated by the AI — will be propagated into every document generated afterwards.

The larger the repo grows, the more invisible the error becomes. It looks like coherence because each contaminated document reinforces the others. The AI doesn't doubt a data point it finds in 10 repo files. The fact that it wrote it in those 10 files itself doesn't enter its reasoning.

Real case — Katen

The orchestrator used "15 years" to describe how long he had been reflecting on the project. It was an approximation — the real duration is 18 years (2008-2026). The AI picked up the figure, propagated it across ~30 documents, and stabilized it.

The audit's conclusion:

The error came from the orchestrator himself. The AI amplified it and made it invisible.

The mechanism

  1. An approximate data point enters a session
  2. The AI picks it up without checking, phrases it nicely, propagates it
  3. Each contaminated document becomes a source for subsequent sessions
  4. The error stabilizes — it looks correct because it's consistent with the other contaminated documents

This is a mutual reinforcement effect. The same phenomenon exists at web scale (model collapse, Habsburg AI) — but at web scale, it's irreversible. In a SOFIA repo, it's traceable and fixable. Provided the orchestrator checks.
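The reinforcement effect can be reproduced in miniature. In this toy version (the documents are illustrative; the "15 years"/"18 years" figures are the real Katen case), a frequency check over the repo confidently returns the contaminated value:

```python
import re
from collections import Counter

# Toy repo: one early approximation ("15 years") copied into many documents,
# the correct figure ("18 years") present in only a few.
docs = ["The orchestrator reflected on this for 15 years."] * 10 \
     + ["18 years of reflection (2008-2026)."] * 2

counts = Counter()
for doc in docs:
    match = re.search(r"(\d+) years", doc)
    if match:
        counts[match.group(1)] += 1

majority = counts.most_common(1)[0][0]
print(counts.most_common())         # [('15', 10), ('18', 2)]
print("majority value:", majority)  # '15' — frequent, consistent, and wrong
```

Frequency inside the repo measures consistency, not truth — which is why verification has to go outside the repo.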

What is vulnerable

Dates, durations, counts, references — anything the AI can phrase plausibly without checking.

Three classes of sourcing errors
Three classes of sourcing errors

Contamination isn't limited to raw facts. Sources themselves can be problematic in three ways:

Class 1 — Assertion without source. The text states something as fact, no source supports it. The AI produced a plausible assertion from its distribution, not from data.

Class 2 — Source contradicts the assertion. The source exists and is cited — but it says something different from what the text claims. The AI "summarized" with distortion, or confused two sources.

Class 3 — Correct source, incoherent usage. The source is correct and faithfully cited — but it doesn't say what the usage context requires. Example: citing a study on AI agents to justify human behavior, without a caveat about the transfer.

Class 3 is the most dangerous: everything looks correct, the source is verifiable, the summary is faithful. Only someone who understands the usage context can detect the incoherence.

Safeguards

1. Continuous factual verification

Not at the end of the project — continuously. Every session that handles facts (dates, numbers, refs) should include a verification pass. That's duty 1 of the method.

2. Decontamination passes

Targeted audits on the most sensitive data, at regular intervals. On Katen, an audit identified ~55 occurrences across ~42 files in a single session. It's doable — provided you plan for it.

3. Explicit source of truth

Critical project facts must be declared once, in a reference document, and always verified against that source. Not against the repo — against the source.
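A sketch of what "verify against the source, not the repo" can look like in practice. The fact and its known-wrong variant come from the Katen case; the FACTS structure and file layout are illustrative assumptions:

```python
from pathlib import Path

# Declared source of truth: for each critical fact, the correct value and the
# known-wrong variants to hunt for. The values are the real Katen example;
# the structure itself is an illustrative convention.
FACTS = {
    "reflection duration": {"correct": "18 years", "wrong": ["15 years"]},
}

def audit(repo: Path) -> list[tuple[Path, str]]:
    """Return (file, wrong variant) for every contaminated occurrence."""
    hits = []
    for path in repo.rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for fact in FACTS.values():
            for wrong in fact["wrong"]:
                if wrong in text:
                    hits.append((path, wrong))
    return hits

for path, wrong in audit(Path(".")):
    print(f"{path}: contaminated value '{wrong}'")
```

Run as a decontamination pass at regular intervals; the human still decides what counts as a critical fact.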

For your project

This is not a flaw of the method. It's a property of the underlying technology. Precision errors are normal — LLMs favor plausibility over truth.

The orchestrator is the only safeguard. The method must state this explicitly, and the orchestrator must integrate it as practice, not as an abstract principle.

And it's one of the strongest arguments in favor of the SOFIA method: in a world where the web contaminates itself irreversibly, a structured repo with cross-reviews is one of the rare spaces where decontamination remains possible.


Field experience report — Katen

7 personas, 210+ sessions, a real project.


The project

Katen is a formally verified orchestration engine for Data & AI pipelines, based on Petri nets. Built in pure HTML/JS/SVG, zero dependencies. Open source MIT. 18 years of reflection (2008-2026) — from the v1 C++/Qt to v2 pure web.

The team

| Persona | Role | Workspace |
| --- | --- | --- |
| Mira | System & solution architect | experiments/architecture/ |
| Axel | Full stack developer | katen/ (product repo) |
| Léa | Researcher & scientific validation | experiments/recherche/ |
| Nora | Product design & UX | experiments/ux/ |
| Marc | Strategic advisor | experiments/strategie/ |
| Sofia | Visual identity & multi-format production | experiments/graphisme/ |
| Winston | Writer & editorial distiller | experiments/maturation/ |

One human (the orchestrator/project creator) arbitrates.

What worked

Friction produces better decisions

Mira challenging an Axel implementation. Léa (recherche) flagging that a claim doesn't hold up to the literature. Marc (strategie) asking "who's going to pay for this?". Sofia (visual identity) refusing a visual theme that looks good but doesn't carry the project identity. These frictions prevented real mistakes.

Isolation forces rigor

Mira doesn't code → she's forced to specify clearly. Axel doesn't decide on architecture → he escalates frictions instead of working around them. Sofia produces, Nora (UX) challenges — the one who decides the form is the one who delivers it, the one who challenges doesn't produce. The result: 62 ADRs, 24 architecture principles, usable specs.

Sessions structure continuity

The session summary is the bridge between conversations. Without it, each session starts from zero. With it, the persona picks up exactly where they left off.

Artifacts as protocol

Cross-reviews (Mira reviews an ADR, Léa reviews a public claim, Nora challenges a Sofia deliverable) are more useful than any chat. Writing forces clarity.

The challenger pattern

One producer, N challengers with blocking rights on their axis. Axel codes → Mira challenges architecture, Léa the invariants, Nora the UX. Winston (writer) writes → Mira challenges structure, Léa the refs, Marc the positioning. The orchestrator decides when a challenger blocks.

The editorial chain

Winston writes, experts validate substance, Sofia produces format, Nora challenges before publication, the orchestrator validates last. The SOFIA blue book is the first complete product of this chain.

What broke

Lost session

A week of work (6-7 sessions) was lost following a Claude app crash. Context was partially reconstructed from produced files, but untraced intermediate decisions were lost.

Lesson: session summaries are not optional. Produced files are the only source of truth that survives.

Scope drift

Some personas occasionally overstepped their scope — the architect starting to write pseudo-code, the strategist giving technical opinions. Isolation in the CLAUDE.md works, but it must be actively maintained.

Lesson: the "What they don't do" section is the most important in the persona sheet. Review it regularly. The "What they challenge" section makes friction structural.

Initial calibration too broad

The first personas were too generalist. It was by using them that we narrowed them down — adding constraints, sharpening stance, reducing scope. Defining by medium (spec, code, review, PDF) is more reliable than by skill.

Lesson: the first draft is always too broad. Iterate.

Factual contamination

~30 documents contained "15 years" instead of "18 years" for the orchestrator's duration of reflection. The error came from the orchestrator himself, propagated and stabilized by the AI. Detected by Léa during a targeted audit.

Lesson: the repo is not a source of truth for facts. Continuous human verification — not at the end of the project.

Blurry production boundaries

When personas started producing (not just specifying), scope boundaries became blurry. Who publishes what on which channel? Resolved by separating producer and challenger, and centralizing scripts in shared/tools/.

Lesson: thinking isolation is in the persona sheets. Production isolation is in the publication conventions. Both are necessary.


Field experience report — Merging two personas

When two personas don't produce friction, it's a signal, not a failure.


The problem

Two personas — UX and Graphic Design — covered two distinct professions in a real professional context. In a solo SOFIA context, they produced zero friction on content. No disagreement, no contestation, no tension on decisions. Flat flow.

The only observed friction was logistical: the cost of switching from one persona to the other for an output that never diverged.

What we observed

On Katen, over 6 weeks, the two personas never diverged: no contested decision, no blocking review, only the logistical cost of switching between them.

The diagnosis

The UX/Graphic Design separation rests on a real tension in a team: "what is usable" vs. "what is beautiful". Two different people with different training and different priorities naturally produce that friction.

In a solo context, that tension didn't exist. The orchestrator carried both judgments in the same head. His aesthetic filter and his usability filter were already integrated before prompting. One handled structure, the other designed — but nobody contested.

This is a case where the professional distinction (real in the world) didn't produce a distinction of decision axis (in the project). Two roles, one axis. So one persona.

The rule

Don't derive personas from professions. Derive them from axes of tension.

Diagnostic signals

How to detect that merging is needed: the two personas always agree, their reviews never block anything, and the only friction between them is the cost of switching.

For your project

Friction failures are data, not practitioner failures. If two personas never oppose each other, it's not that you calibrated them poorly — it's that they cover the same decision axis in your context. Merge and move on.

This is also the first documented case of justified removal of a persona in SOFIA. As such, it has as much value for the method as creation cases.


Field experience report — Persona calibration

Define by medium, not just by skill.


The problem

The initial persona sheets defined roles by professional skill: "UI Designer", "UX Lead", "Full stack Dev". That's enough for thinking — the architect specifies, the dev implements, the strategist questions.

It's no longer enough when personas start producing. When the same content must exist in markdown, PDF, HTML, and social media visuals — who is responsible for each transformation? "UI Designer" says nothing about that.

What we observed

On Katen, three personas produced content for distribution channels, each with their own tools, with no clear contract on boundaries. The SOFIA blue book revealed the problem: Winston writes it, Sofia formats the PDF, but who produces the web version? Winston's publication scripts generated HTML — that overlapped with Nora's scope.

The learning

Defining a persona by the "what" (UI design) isn't enough. You also need the "for which medium":

The medium clarifies the boundary where skill blurs it. Sofia and Nora both have design skills. It's the medium that separates them: Sofia produces across all channels, Nora challenges the output.

For your project

When you calibrate a persona, ask two questions:

  1. What — which skill?
  2. On which medium — which output format?

If two personas share the same skill but different media, the boundary is clear. If two personas share the same medium, you have a scope conflict to resolve.

Calibration isn't a day-one exercise. It evolves with use. Plan to revisit persona sheets when production roles emerge — and they always do.


Pitfalls and classic mistakes

What doesn't work, so you don't have to find out yourself.


1. Too many personas too early

You don't need 5 personas on day 1. Start with 1. Calibrate it. Add the second when the need is clear.

Sign: you have personas you never use. Solution: delete unused personas. No regrets.

2. The compliant persona

A persona that always says yes is useless. It's an assistant with a first name.

Sign: the persona never says "no", "that's not my role", or "the spec is too vague". Solution: strengthen the constraints in the persona sheet and the CLAUDE.md.

3. Forgetting session summaries

The next session has no context without a summary. You'll waste time re-explaining, or worse, the persona will head in a direction inconsistent with the previous session.

Sign: you start each session with 10 minutes of explanation. Solution: the summary is mandatory in the CLAUDE.md. Not optional.

4. Soft isolation

A CLAUDE.md without an Isolation section is a broken CLAUDE.md. The persona will read and write everywhere, and friction disappears.

Sign: the architect modifies code, the dev rewrites specs. Solution: add explicit boundaries. "Never read/write outside of X."
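This boundary check can be mechanized. The section names below ("Isolation", "What they don't do") are the ones this guide treats as load-bearing; the lint helper itself is an illustrative sketch, not part of the method:

```python
from pathlib import Path

# Sections this guide treats as load-bearing in a persona's CLAUDE.md.
REQUIRED_SECTIONS = ["Isolation", "What they don't do"]

def missing_sections(claude_md: str) -> list[str]:
    """Return the required headings absent from a CLAUDE.md body."""
    return [s for s in REQUIRED_SECTIONS if s not in claude_md]

def lint(workspace: Path) -> dict[str, list[str]]:
    """Map each CLAUDE.md under workspace to the sections it is missing."""
    report = {}
    for f in workspace.rglob("CLAUDE.md"):
        gaps = missing_sections(f.read_text(encoding="utf-8"))
        if gaps:
            report[str(f)] = gaps
    return report
```

A non-empty report is the "soft isolation" sign made visible before the architect starts modifying code.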

5. The orchestrator who doesn't decide

Personas expose tensions. If the orchestrator doesn't decide, tensions accumulate and nothing moves forward.

Sign: the same open questions come back session after session. Solution: decide. Even if it's imperfect. An "Accepted" ADR is better than an eternally "Proposed" ADR.

6. Confusing persona and assistant

A persona is not a politer assistant. It's a role with constraints that force it to think differently. If you remove the constraints, you're back to a generalist assistant.

Sign: you give the same instructions to all your personas. Solution: each persona has a stance, constraints, and scope that are different. The difference is what creates value.

7. The lost session

Claude Code can crash. The app can freeze. Context can get corrupted. It will happen.

Sign: you've lost a week of work. Solution: files are the only source of truth. Produce artifacts (ADRs, specs, reviews) continuously. The session summary is the bare minimum. Files survive crashes.

8. The dev who never flushes

The developer persona is different from the others. They code, they're in the flow, they have a long-running session going continuously. Stopping for a session summary breaks the rhythm.

Result: no summary, no trace of intermediate decisions. The code is in git, but the why behind implementation choices disappears if the session crashes.

Sign: your dev session never closes and has no summary. No silver bullet — it's a trade-off. A few approaches: flush at natural breakpoints (end of a feature, before a refactoring), and capture the "why" of implementation choices in commit messages.

"Thinking" personas (architect, strategist, researcher) have short sessions with file deliverables. The "doing" persona has a long session with code deliverables. It's not the same rhythm and that's fine — but you need to know it.

9. Copying personas from another project

Katen personas are calibrated for Katen. If you copy them without adapting, they won't match your project or your way of working.

Sign: the persona talks about Petri nets when you're building a mobile app. Solution: use the examples as structural reference, not as content. The SOFIA guide is there to help you design your own.

10. The shared blind spot

All personas are calibrated by the same orchestrator. Their implicit biases become the biases of the entire team. Friction is real — but it plays within a thinking space bounded by what the orchestrator knows they don't know. What they don't know they don't know, no persona will surface.

Sign: all personas converge quickly, nobody questions the premises. No simple solution — this is the structural limit of a single-orchestrator system. An outside perspective (human peer, user, meta persona outside the flow) is the only compensation mechanism.

11. Signal overproduction

A well-functioning method generates more material than the orchestrator can absorb. Notes, reviews, sessions, artifacts accumulate. The product (tight scope) stays under control. Explorations pile up without being sorted or qualified.

Sign: unread notes accumulate, sessions go without follow-up, the orchestrator skims instead of reading. Solution: archive regularly, distinguish signal from noise, accept that not everything will be processed. Overproduction is a sign of system health — provided you govern it.

12. Framework ossification

Conventions, constraints, scopes freeze. What was a contextual decision becomes dogma. Personas apply rules that no longer have a reason to exist because the context changed but nobody flagged it.

Sign: a constraint nobody can justify, a convention everyone works around, a persona whose scope no longer matches their actual deliverables. Solution: recalibrate periodically. Revisit the constraints — they're the most powerful calibration lever. A soft persona is a persona whose constraints have eroded.

13. The orchestrator bottleneck

SOFIA relies on a single control point — the orchestrator. That's its strength (coherence, arbitration, big picture) and its structural limit. Everything goes through one person: reading, filtering, contextualizing, transmitting, arbitrating, verifying.

Intentional friction has a cost, borne by a single person. The question isn't how to eliminate it — it's how to make it sustainable.

Sign: the orchestrator approves without reading, sessions become mechanical, decisions are deferred. Approaches: reduce the number of active personas, space out sessions, prioritize friction circuits that produce value, accept that some flows run in degraded mode.


Field experience report — Product chain

Specify, implement, challenge. Three roles, one cycle, zero shortcuts.


The pattern

The product chain is Katen's core cycle. It's the oldest, most battle-tested pattern, and the most revealing of the SOFIA method in action.

The orchestrator identifies a need or friction
  → opens architect session

Architect specifies
  → ADR, interface contracts, formal specs
  → doesn't code — if the spec is vague, the dev will say so

Dev implements
  → code, tests, friction reports
  → doesn't reinterpret — if the spec is ambiguous, they escalate

UX challenges
  → review of flows, visual states, accessibility
  → doesn't produce the code — she specifies expected behaviors

Researcher verifies
  → formal invariants, consistency with the Petri-net model
  → doesn't decide on architecture — she validates the theory

The orchestrator arbitrates and decides
  → when personas diverge, the orchestrator decides
  → the decision is traced (ADR, note, session)

What we observed on Katen

The spec forces clarity

Mira doesn't code. That's the most productive constraint on the team. Because she can't "show it in the code", she's forced to specify clearly — interface contracts, expected states, edge cases. The result: 62 ADRs, 24 architecture principles. Specs that Axel can implement without guessing.

Without that constraint, the architect jumps straight to pseudo-code. The spec stays vague. The dev interprets. Bugs are structural, not technical.

Implementation friction surfaces the real problems

Axel doesn't work around issues. When an interface contract generates unexpected complexity, he flags it rather than patching. These escalations changed ADRs — not because the spec was bad, but because the field reveals what theory can't see.

Concrete case: operator parallelization. Mira blocks ("not now, the roadmap has higher priorities"). Léa confirms from an orthogonal angle ("no research interest"). Two refusals, two independent reasons. The topic is deferred. Three weeks later, the design comes back — better than it would have been.

UX challenges what the dev doesn't see

Nora questions onboarding flows that satisfy the developer but lose the user. She doesn't code — she specifies expected behaviors. Axel escalates technical constraints. The friction between them produces interfaces that hold up both technically AND humanly.

The researcher anchors in the formal

Léa doesn't decide on architecture. But when an implementation touches the Petri-net model's invariants — firing policy, connection states, reversibility — she verifies. Her "that doesn't hold" carries the same authority as a failing test: you don't push through.

The orchestrator carries the context

The orchestrator is the only one who sees all sessions. They filter, reformulate, contextualize. When Mira deposits a review for Axel, the orchestrator adds: "we decided yesterday with Marc to delay publication — that changes the priority of this spec." No persona has that context alone.

ADRs as the backbone

Every structural decision produces an ADR. The format is standard: context, decision, consequences, status. The ADR isn't bureaucracy — it's memory.

An unwritten ADR is a decision that will be questioned three sessions later by someone who didn't know about it. Over 210+ sessions, that happens fast.

ADRs potentially go through 4 challengers, each on their own axis.

The orchestrator decides. The ADR is accepted, rejected, or deferred. Status is traced.

For your project

The product chain is SOFIA's fundamental use case. The rules are the ones the cycle embodies: the architect specifies without coding, the dev implements without reinterpreting, challengers review without producing, and the orchestrator arbitrates — with every decision traced.

Field experience report — Production role isolation

When personas start producing, boundaries shift.


The problem

The SOFIA method documents thinking role isolation well: the architect doesn't code, the strategist doesn't touch code, the dev doesn't decide on architecture. These constraints create productive friction.

But when personas move from thinking to production — writing a white paper, generating a PDF, publishing on social media — scope boundaries become blurry. Who publishes what on which channel? Who maintains the build scripts? Who validates before release?

What we observed

On Katen, publication scripts were scattered across individual workspaces: maturation/bin/publish-*.py under Winston, graphisme/tools/build_pptx.py under Sofia. Result: nobody could audit anyone else's scripts, and ownership of each output was unclear.

The solution

Two decisions:

1. Separate thinking and production in roles. A persona who thinks AND produces the final deliverable is judge and jury. Friction disappears. On Katen: Sofia produces (all channels), Nora challenges (UX, accessibility). The one who decides the form is the one who delivers it. The one who challenges doesn't produce.

2. Centralize scripts in shared/tools/. Each persona triggers their scripts, but the code lives in a space visible to all. The architect audits coherence, the dev audits quality, UX audits the output.

For your project

When your personas start producing public deliverables: separate the one who produces from the one who challenges, and keep both distinct from the one who validates.

Thinking isolation is in the persona sheets. Production isolation is in the publication conventions. Both are necessary.

Multi-format — when a deliverable exists on multiple channels

The problem worsens when the same content must exist in markdown, PDF, HTML, and social media visuals. The question is no longer just "who publishes" but "who owns which transformation".

What we observed

Without a clear contract on channels, tasks fall through the cracks: the web version of the blue book, for instance, ended up with no clear owner.

The rule

One channel = one owner. The persona who produces the deliverable for a given channel is responsible for triggering, coherence, and updates. Others challenge via review — they don't produce.

| Channel | Owner | Challengers |
| --- | --- | --- |
| Markdown source | Writer | Architect (structure), Researcher (sources) |
| Generated PDF/HTML | Graphic designer | UX (accessibility), Writer (content) |
| Social media visuals | Graphic designer | Strategist (message), UX (readability) |
| Website | Dev or Graphic designer | UX (user journey), Architect (coherence) |
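The ownership table can be encoded as a checkable map — one owner per channel, challengers reviewing but never producing. A sketch (persona names are simplified, and the "website" row picks one of the two possible owners named above):

```python
# One channel = one owner; challengers review, they never produce.
# Persona names are simplified; "website" picks one of the two possible owners.
CHANNELS = {
    "markdown source":      {"owner": "writer",  "challengers": ["architect", "researcher"]},
    "generated pdf/html":   {"owner": "graphic", "challengers": ["ux", "writer"]},
    "social media visuals": {"owner": "graphic", "challengers": ["strategist", "ux"]},
    "website":              {"owner": "dev",     "challengers": ["ux", "architect"]},
}

def check(channels: dict) -> None:
    """Fail loudly if a channel lacks a single owner or lets its owner self-challenge."""
    for name, c in channels.items():
        assert isinstance(c["owner"], str) and c["owner"], f"{name}: exactly one owner"
        assert c["owner"] not in c["challengers"], f"{name}: owner cannot challenge own channel"

check(CHANNELS)
print("ownership map is consistent")
```

Keeping the map in the repo makes the "who owns which transformation" question answerable without a session.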

The orchestrator validates before any release — regardless of channel.