About Second Pass

Second Pass produces a single, non-iterative framing of a conversation transcript and then stops. Because users cannot clarify, debate, or refine the output, misinterpretation and projection are the primary risk vectors.

This role exists to govern that boundary.

About the Role

We are hiring an Interpretation Risk & Misuse Lead to identify and mitigate risks arising from how users read, project onto, or over-trust product outputs.

This role focuses on second-order harms: false closure, authority attribution, paranoid or confirmatory readings, and misuse driven by interpretation rather than system behavior.

This is not a moderation or therapy role. It is epistemic risk governance.

Responsibilities

  • Identify patterns where users may misread outputs as truth, diagnosis, or intent.
  • Stress-test example outputs, labels, and framing for projection and over-confidence risk.
  • Review interpretive language for epistemic force (what reads as definitive versus what reads as framed).
  • Define when ambiguity or restraint is safer than clarity.
  • Recommend removal or revision of outputs that create false closure, even if “accurate.”
  • Partner with the Product Integrity Lead to ensure interpretation risks are addressed without expanding scope.

What This Role Does Not Do

  • Does not add features or expand interpretive depth.
  • Does not provide user guidance, reassurance, or advice.
  • Does not optimize for engagement or satisfaction.
  • Does not resolve user problems or relationships.

Who You Are

  • Highly sensitive to how language is interpreted, not just what it says.
  • Comfortable killing outputs that feel “useful” but are risky.
  • Skeptical of definitive framing in ambiguous human contexts.
  • Oriented toward preventing harm through restraint, not explanation.

Relevant Experience

Backgrounds that tend to fit this role include:

  • Trust & Safety or AI policy (interpretation-focused)
  • AI red-teaming or misuse research
  • Linguistics, discourse analysis, or behavioral risk
  • Editorial or review roles involving high-stakes language decisions

Formal AI engineering experience is not required.

Compensation

  • Base Salary Range: $220,000 – $320,000 USD (depending on experience and scope)
  • Equity / Long-Term Incentives: Available and negotiable
  • Compensation is benchmarked against senior governance and safety-adjacent roles at AI organizations

Logistics

  • Location: Remote (United States)
  • Employment Type: Full-time
  • Reporting: Direct to founder; works closely with Product Integrity Lead
  • Visa Sponsorship: Not available at this time

How to Apply

Please submit:

  1. An example of language you removed or softened because it was risky despite being accurate.
  2. One interpretation-driven failure mode you believe AI tools underestimate.

Concise answers preferred.