# **Founding Engineer—Build AI-Native Ops for Mental Health (YC S21, $1M+ ARR)**

---

### **🧠 Build AI-native care infrastructure. Ship for real patients. Operate at speed.**

---

### **ℹ️ TL;DR**

We're building the **AI-native operations layer for psychiatric care**. Not diagnostics, but what happens **_outside_** the visit—the real operational backend: scheduling, documentation, billing, intake, risk detection, and more.

If you want to build core infrastructure with real AI, **own** systems end-to-end, and work directly with a deeply technical founder still up to his neck in the code—read on.

---

### **🗺️ Context**

Hey—I'm Daniel, co-founder and CTO of Legion Health (YC S21).

Mental health care is **operationally broken**—patients ghosted, clinicians buried in forms, payers chasing missing notes. The industry is flooded with AI startups trying to automate away diagnosis—or even providers entirely. **But diagnostics aren't the bottleneck. Operations are.**

So, we're solving the _real_ problem:

> "What if operations worked?"

We're building a real-time, AI-powered backend for mental health clinics—**LLM agents + structured systems** that coordinate human care like it's software. Think:

* Automated intake that personalizes itself mid-call
* AI copilots that document visits while verifying insurance
* Risk detectors trained on full transcripts and events
* Schedulers that close the loop without human friction

And we're already live. Our agent infrastructure supports **2,000+ patients**—with **only one support human**.

**Unlike most AI startups, we are our own customer.** We operate our own large psychiatric practice. The systems you build directly impact our clinicians and patients _today_. No theoretical B2B pitches, no begging for pilots, no months of stakeholder alignment and bureaucracy—just real-world, high-stakes operational challenges.

⭐ If you've felt like AI is being wasted on toy tools or B2B busywork—**come work on something real**.

---

### **🏆 What we've already built**

* $1M+ ARR, growing fast—**post-PMF, pre-scale**
* $6M raised, **$3M in the bank**
* A live agentic co-pilot (think "Cursor for care ops") that actually reduces admin overhead
* A functioning psychiatric practice with real clinicians, patients, and claims
* A high-leverage AI architecture in motion—LLM tool use, RAG, event-driven infra
* A clear path toward an event-driven, simulation-capable architecture that self-improves
* A small, intense team (~11 people) shipping daily—no dead weight

This is the moment when **foundational hires make the company**. You're not joining an idea-stage pipe dream or a late-stage dinosaur. You're helping us define the future.

---

### **👷🏻‍♂️ What you'll build and own**

This isn't a feature factory. We move **_fast_**, shipping multiple meaningful features straight to real patients and providers every single day. You'll **own entire domains** end-to-end—architecture, implementation, iteration—not just JIRA tickets. This is the frontier of applied AI in healthcare.

* **Core Event-Driven Backend:** Architect and scale our Node.js / Next.js / TypeScript / Supabase (Postgres) / AWS stack. Design schemas, event flows, and APIs that power real-time, resilient psychiatric care ops.
* **LLM Agent Infrastructure:** Build _actual agent loops_—tool use, memory, retry logic, context updates, feedback mechanisms. Not a demo. A co-worker. (A rough sketch of the loop shape we mean follows this section.)
* **Human + AI Ops UX:** Shape real-time interfaces where human teammates and agents co-work, co-learn, and co-adapt. Agents learn by using the same UI our humans do.
* **World-State Simulation:** Define the canonical state of a patient's journey. Power alerting, planning, and agent behavior with a simulation of psychiatric care at scale. (See the event-and-state sketch after the Stack section below.)
* **Data & Compliance:** Engineer secure, HIPAA-compliant pipelines for transcripts, events, and EHR data—structured for both operations _and_ AI training.
* **System Design & Strategy:** You'll work directly with Danny (CTO) to debate architecture, invent new primitives, and define the foundation for AI-native mental health systems.

You'll help answer questions at the frontier of AI, like:

> _What does reliable agent infrastructure look like in production?_
>
> _What's the role of structured data in a world with LLMs?_
>
> _How do we make agents auditable, evolvable, and fast?_

---
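To make "actual agent loops" concrete, here's a minimal sketch of the shape we mean: the model proposes a tool call, we execute it with retries, feed the result back into context, and repeat until it produces a final answer. Everything here (`LLM`, `Tool`, `runAgent`) is a hypothetical, simplified illustration, not our production code.

```typescript
// Illustrative agent loop: propose tool call -> execute with retries ->
// feed result back -> repeat until the model returns a final answer.
type ToolCall = { tool: string; args: Record<string, unknown> };
type AgentStep =
  | { kind: "tool_call"; call: ToolCall }
  | { kind: "final"; answer: string };

interface LLM {
  // Given the running transcript, decide the next step.
  next(history: string[]): Promise<AgentStep>;
}

type Tool = (args: Record<string, unknown>) => Promise<string>;

async function withRetries<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Simple exponential backoff before the next attempt.
      await new Promise((r) => setTimeout(r, 250 * 2 ** i));
    }
  }
  throw lastErr;
}

async function runAgent(
  llm: LLM,
  tools: Record<string, Tool>,
  task: string,
  maxSteps = 10
): Promise<string> {
  const history: string[] = [`task: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const decision = await llm.next(history);
    if (decision.kind === "final") return decision.answer;

    const tool = tools[decision.call.tool];
    if (!tool) {
      // Feed the error back so the model can self-correct instead of crashing.
      history.push(`error: unknown tool "${decision.call.tool}"`);
      continue;
    }
    const result = await withRetries(() => tool(decision.call.args));
    history.push(`tool ${decision.call.tool} -> ${result}`);
  }
  throw new Error("agent exceeded max steps without finishing");
}
```

The interesting engineering lives around this loop, not inside it: memory, context updates, observability and evals, and making failures recoverable instead of silent.
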
### **🫵🏻 This is for you IF**

* You've built and owned complex systems from 0 to 1—not just features but foundations.
* You think in systems, state machines, and event flows—not just endpoints.
* You're LLM-fluent, or you're a strong systems engineer eager to become fluent _fast_.
* You default to velocity and appreciate clean architecture. Move fast _and_ build resiliently.
* You're allergic to bureaucracy, performative work, and slow decisions.
* You hold an incredibly high bar for yourself and expect it from others. Mediocrity is painful.
* You want direct impact, technical depth, and to solve problems that have never been solved before.
* You believe AI today is the worst AI will ever be—and want to be a part of building the future.

We're not looking for warm bodies. We're looking for **founding technologists** who can _think in systems_ and ship fast.

---

### **🙅🏻‍♀️ This is NOT for you IF**

* You need extensive structure, mentorship programs, or a predefined career ladder.
* You prefer working on a single, well-defined component.
* You view LLMs as just another API call.
* You're uncomfortable with ambiguity or rapid iteration.
* You can't handle direct, honest feedback or can't thrive in a high-candor environment.

---

### **⚙️ Our Stack**

**Backend:** Node.js, TypeScript, Supabase (Postgres), AWS (ECS, Lambda, S3)
**Frontend:** Next.js 15, Tailwind, Vercel
**AI Infra:** OpenAI, Anthropic, agentic loops and workflows, observability and evals (e.g., Langfuse), embeddings and vector DBs, tool-calling, Model Context Protocol (MCP), etc.
**Other:** PHI security, audit pipelines, real-time schedulers, transcript parsing
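To ground "event-driven" and "world-state" a little, here's an equally simplified sketch: an append-only log of care-ops events folded by a pure reducer into a canonical per-patient state. The event names, stages, and fields are made up for illustration; they're not our real schema, just one way to picture the World-State Simulation work described above.

```typescript
// Illustrative care-ops events as a discriminated union.
type CareEvent =
  | { type: "intake_completed"; patientId: string; at: string }
  | { type: "visit_scheduled"; patientId: string; at: string; visitAt: string }
  | { type: "note_signed"; patientId: string; at: string; visitId: string }
  | { type: "claim_submitted"; patientId: string; at: string; claimId: string };

// Canonical, replayable view of where a patient is in their journey.
interface PatientJourney {
  patientId: string;
  stage: "intake" | "scheduled" | "seen" | "billed";
  lastEventAt: string;
}

// Pure reducer: the same event log always rebuilds the same state.
function applyEvent(state: PatientJourney, event: CareEvent): PatientJourney {
  switch (event.type) {
    case "intake_completed":
      return { ...state, stage: "intake", lastEventAt: event.at };
    case "visit_scheduled":
      return { ...state, stage: "scheduled", lastEventAt: event.at };
    case "note_signed":
      return { ...state, stage: "seen", lastEventAt: event.at };
    case "claim_submitted":
      return { ...state, stage: "billed", lastEventAt: event.at };
  }
}

// Usage: replay a patient's log to recover their current journey state.
const log: CareEvent[] = [
  { type: "intake_completed", patientId: "p_1", at: "2025-01-02T10:00:00Z" },
  { type: "visit_scheduled", patientId: "p_1", at: "2025-01-02T10:05:00Z", visitAt: "2025-01-09T15:00:00Z" },
];
const initial: PatientJourney = { patientId: "p_1", stage: "intake", lastEventAt: "" };
const journey = log.reduce(applyEvent, initial);
```

Replaying the same log always produces the same state, which is one way to think about keeping agents auditable, testable, and simulation-friendly.
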
---

### **🚀 Why Legion? Why Now?**

This is the moment when the infrastructure is still malleable. When the next 20 years of mental health care can still be shaped by a few engineers with systems taste, speed, and conviction.

You won't be joining as employee #73. You'll be founding the engineering culture. You'll have a direct line to me. You'll shape core systems _and_ help decide what we build next.

If you've ever said, "I wish I could've been there when [insert legendary product] was getting built," this is that moment.

**If this resonates, I want to work with you.** Let's build the founding systems of AI-native mental healthcare—and make something people didn't think was possible.