The role
- Architect and implement backend services and APIs that orchestrate model training, simulation jobs, and data access for internal researchers and external partners.
- Ensure that our compute infrastructure scales gracefully, remains observable, and meets high reliability standards under sustained load.
- Introduce intelligent caching and data-placement strategies to deliver low-latency access to large scientific datasets.
- Contribute to core libraries and shared tooling that promote maintainability, security, and consistent engineering practices company-wide.
- Partner with machine-learning and domain-science teams to convert evolving research requirements into stable, well-documented production systems.
You might be a fit if you
- Have 3+ years of industry experience building and operating production distributed systems with modern frameworks.
- Care about reliability, observability, and operational excellence as much as you care about API design and test coverage.
- Enjoy turning ambiguous scientific requirements into shipped features and iterating quickly based on user telemetry.
- Write clear docs, review code constructively, and value a culture of shared ownership.
Bonus points
- Familiarity with molecular biology, protein science, or a similar field.
- Prior work on GPU-accelerated services, low-level performance profiling, or compiler/runtime optimization.
- Contributions to open-source infrastructure or high-performance computing projects.
- Past exposure to laboratory information-management systems (LIMS) or scientific data formats (PDB, CIF, HDF5).
Why us
- Top-tier cash compensation plus generous equity.
- Hardware to match your ambitions (all the compute you need!).
- Full medical/dental, 401(k) with match, unlimited PTO.