Cedana (YC S23) Is Hiring a Systems Engineer
At Cedana, we are solving what many thought was impossible: the seamless, live migration of active CPU+GPU containers across global compute. We're building the next generation of AI orchestration systems, founded on our pioneering work in checkpoint/restore technology. This isn't just an incremental improvement; it's a fundamental shift that makes distributed computing truly portable, elastic, and resilient across planet-scale compute. This is an exceptionally difficult systems problem that requires a rare combination of kernel engineering, distributed systems design, and a relentless pursuit of perfection.

We're backed by leading investors, including a co-founder of OpenAI, the former Chief Architect of Slack, the founding team of Meta AI, YC, Initialized Capital, and Garry Tan.

To achieve our mission, we're looking for brilliant systems engineers: the kind who are obsessed with understanding how computing works from the silicon up, who live deep in the container stack, and who understand Kubernetes beyond just the surface. If you thrive on solving deep, complex problems in uncharted territory, we invite you to join us.

What You Will Do

As a core member of our engineering team, you will build and fortify the "magic" that powers our platform. You will operate across the entire compute stack, from the Linux kernel to our managed Kubernetes offering, to deliver a product that is both powerful and exceptionally reliable.

- Design and Build New Orchestration Primitives: Architect and implement core components of our system, leveraging our unique insights into checkpointing, virtualization, and container orchestration to create capabilities that don't exist anywhere else. Design and implement novel scheduling and resource management capabilities by integrating our core checkpoint/restore engine directly into the control planes of Kubernetes, SLURM, and other orchestrators.
- Engineer Unbreakable Reliability: Enhance the stability and performance of our entire system, from kernel-level interactions and hypervisor optimizations to our managed Kubernetes cloud platform. Dive deep into the Linux kernel, container runtimes, and hypervisors to ensure our live migration capability is bulletproof.
- Partner with Customers: Work directly with customers to solve their most complex infrastructure challenges, acting as a trusted technical partner and gathering insights that drive our product roadmap.
- Develop Sophisticated Tooling: Build and refine our internal observability and alerting infrastructure to proactively identify and resolve issues anywhere in the stack, ensuring our systems meet the highest standards of performance and availability.

Who You Are

You aren't a traditional full-stack developer. You are driven by a deep curiosity to understand every layer of the technology you work with. You have a track record of solving challenging problems in complex systems and a passion for building robust, high-performance infrastructure.

- A Systems Thinker: You have the intellectual bandwidth and desire to learn the full compute stack, from hardware and device drivers to the OS kernel, container runtimes, and distributed systems.
- A Creative Problem-Solver: You possess a history of tackling difficult technical challenges, perhaps in compilers, distributed systems, embedded systems, or highly available platforms.
- A Proven Collaborator: You have a demonstrated ability to work effectively with a team of high-caliber engineers to achieve ambitious goals.
- Intellectually Fearless: You are energized, not intimidated, by problems that have no known solutions. The prospect of building something that has never been built before is your primary motivator.
Required Experience

- Deep Understanding of Concurrency and Distributed Systems: Strong grasp of the theoretical and practical challenges of building distributed systems, including concurrency control, multi-threading, preemption, and resource contention. You can reason about race conditions, deadlocks, and consistency models from first principles.
- Mastery of Systems Programming: You have demonstrable, expert-level proficiency in C for kernel-level work and either Go or Rust for building high-performance, concurrent services, plus Python for integrating with existing orchestration frameworks. You are not just a user of these languages; you understand their memory models, concurrency primitives, and how they translate to machine code.
- Linux & Container Internals: You possess a fundamental understanding of Linux/UNIX (system libraries, services, networking, kernel/user-space interaction) and containerization tech (containerd/cri-o, runc, cgroups, namespaces, seccomp).
- Orchestrator Internals: Understanding of fairshare scheduling principles, including multifactor priority, fairshare decay, and QOS management.
- HPC & GPU Workloads: You have deployed or managed GPU workloads under SLURM, with knowledge of workload isolation and accelerator resource accounting.
- Understanding of Networking: You understand how packets flow in Kubernetes, and have hacked on or deployed tooling like CNI, Cilium, and/or Istio.
- Production Experience and On-call Ready: You have hands-on experience scaling infrastructure, managing production-level Kubernetes clusters, and working with infrastructure-as-code tools like Helm and Terraform. You understand the importance of reliability and are familiar with being on-call. (Our founders have extensive on-call experience and are committed to building a sane, sustainable rotation.)

Bonus Points If You Have

- Contributed to open-source projects like Kubernetes, containerd, or the Linux kernel.
- Experience with virtualization in Kubernetes, like KubeVirt or Kata.
- Experience checkpointing and restoring jobs within SLURM (e.g., DMTCP, BLCR, CRIU).
- Experience writing SLURM plugins (e.g., sched, job_submit, prolog/epilog), or extending SLURM behavior via Lua or C.
- Worked on multi-cluster or federated SLURM setups.
- Built tooling to bridge SLURM and Kubernetes, or run mixed-workload environments.
- Contributed to open-source schedulers or job systems (SLURM, Flux, Torque, PBS, etc.).
- Familiarity with HPC environments (SLURM, MPI, RDMA) or GPU-centric Kubernetes tooling (Kueue, Kubeflow, KServe).
- A passion for debugging weird kernel panics just as much as you enjoy writing elegant Go or Rust code.
- Experience leading teams or mentoring other engineers in a remote environment.
- Written your own container runtime!