Sarah Smith Fund – Portfolio Jobs

Member of Technical Staff - Backend Engineer

GenPeach AI

Software Engineering, IT
Posted on Mar 3, 2026

About GenPeach AI

GenPeach AI builds next-generation multimodal foundation models for creative freedom and human-centered AI experiences. We train and deploy our own large-scale models and ship them into real products – operating at the intersection of research-grade AI and production-grade systems.

You’ll join the team responsible for the backend systems that serve our models in production. This team builds the infrastructure that turns frontier research into reliable, scalable, measurable product capability.

About the Role

We’re looking for a backend engineer to build and scale the services that power ML inference and internal ML tooling. This is a high-ownership role with direct impact on latency, throughput, reliability, and developer velocity across research and product.

In this role, you will

  • Build and own Python backend services powering ML inference and internal ML tooling

  • Design and operate high-performance async APIs (FastAPI, asyncio), deployed on Kubernetes

  • Develop and run task queues and background processing for inference and batch workloads (NATS)

  • Shape a scalable microservices architecture: service boundaries, routing, and APIs

  • Partner closely with ML engineers to productionize models: service wrapping, inference pipelines, scalability

  • Improve observability across services: logging, metrics, dashboards, alerting

  • Debug and resolve performance bottlenecks across Python, networking, and storage

  • Own services end-to-end: design → deploy → monitor → operate
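To give a flavor of the task-queue work described above, here is a minimal, standard-library sketch of queue-based background processing with a small async worker pool. It is illustrative only: the production system uses NATS, and the worker names and the `job * 2` payload are stand-ins for real inference calls.

```python
import asyncio

async def worker(name: str, queue: asyncio.Queue, results: list) -> None:
    # Pull jobs until cancelled; each job stands in for one inference call.
    while True:
        job = await queue.get()
        try:
            results.append((name, job * 2))  # placeholder for model inference
        finally:
            queue.task_done()

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    results: list = []
    for job in range(6):
        queue.put_nowait(job)
    # Three concurrent workers, analogous to queue consumers on a broker.
    workers = [asyncio.create_task(worker(f"w{i}", queue, results))
               for i in range(3)]
    await queue.join()   # block until every enqueued job is processed
    for w in workers:
        w.cancel()       # workers loop forever; stop them once the queue drains
    return results

# Run with: asyncio.run(main())
```

A broker-backed queue adds durability and redelivery on top of this shape, but the consumer loop and acknowledgement (`task_done`) pattern carry over.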

Minimum Qualifications

  • 3+ years of professional backend experience with Python

  • Strong proficiency with FastAPI and asyncio (async programming, concurrency patterns)

  • Experience with PostgreSQL and Redis/Valkey in production environments

  • Solid understanding of microservices architecture and API design

  • Hands-on experience with Docker, Kubernetes, S3 storage and core DevOps practices

  • Ability to own services end-to-end: design, implementation, deployment, monitoring, and incident response
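As a concrete example of the asyncio concurrency patterns we care about, here is a short sketch of bounded concurrency with a semaphore: fan out many I/O-bound calls while capping how many are in flight. The `fetch` name and the `i * i` result are hypothetical placeholders for a real downstream call.

```python
import asyncio

async def fetch(i: int, sem: asyncio.Semaphore, active: list, peaks: list) -> int:
    # At most 3 "requests" run concurrently, enforced by the semaphore.
    async with sem:
        active[0] += 1
        peaks.append(active[0])
        await asyncio.sleep(0)   # stand-in for real I/O (DB call, HTTP request)
        active[0] -= 1
        return i * i             # placeholder for the downstream response

async def run_all(n: int) -> list:
    sem = asyncio.Semaphore(3)
    active, peaks = [0], []
    # gather preserves input order regardless of completion order.
    results = await asyncio.gather(
        *(fetch(i, sem, active, peaks) for i in range(n))
    )
    assert max(peaks) <= 3       # concurrency never exceeded the bound
    return results
```

The same pattern shows up in production as connection-pool limits, per-upstream rate caps, and backpressure on inference endpoints.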

Strong candidates may also have

  • Familiarity with ML infrastructure or building APIs for ML inference workloads

  • Experience with event-driven systems (pub/sub, idempotency, eventual consistency)

  • Familiarity with message brokers (NATS, Kafka, RabbitMQ)

  • Knowledge of web security fundamentals (OAuth2, JWT, rate limiting, OWASP)

  • Exposure to high-load systems and performance optimization

  • Experience with observability tooling (Prometheus, Grafana, tracing)
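On the event-driven side, idempotency under at-least-once delivery is a recurring theme: brokers such as NATS JetStream may redeliver a message, so consumers must tolerate duplicates. Below is a minimal, hedged sketch of one common approach, deduplicating on a deterministic event key; the class name, event shape, and in-memory `seen` set are illustrative (a real system would persist keys in Redis or Postgres, typically with a TTL).

```python
import hashlib

class IdempotentConsumer:
    """Run side effects at most once per event, keyed on a stable event ID."""

    def __init__(self) -> None:
        self.seen: set[str] = set()    # in production: durable store with TTL
        self.processed: list[dict] = []

    @staticmethod
    def event_key(event: dict) -> str:
        # Deterministic key: same event (even redelivered) hashes identically.
        raw = f"{event['id']}:{event['type']}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def handle(self, event: dict) -> bool:
        key = self.event_key(event)
        if key in self.seen:
            return False               # duplicate delivery: skip side effects
        self.seen.add(key)
        self.processed.append(event)   # side effect runs once per key
        return True
```

Calling `handle` twice with the same event performs the side effect only on the first call, which is what makes redelivery safe.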


Our Stack

Python 3.13, FastAPI, asyncio, PostgreSQL, Redis/Valkey, NATS JetStream, Kubernetes, Docker, Werf + Helm, Envoy Gateway, Prometheus/Grafana

What Makes This Role Unique

  • Work on real production ML inference systems where performance matters

  • Tight collaboration with ML teams – no silos between research and production

  • High ownership in a small, senior team with strong engineering standards

  • Opportunity to shape backend and infra foundations from first principles

Our Culture

  • High ownership and accountability

  • Strong technical standards

  • Direct, low-ego communication

  • Bias toward impact: measure → iterate → ship

Logistics

  • Location: Zurich or Warsaw (onsite or hybrid). If you’re elsewhere, we’re open to remote (team/timezone fit considered).

  • Competitive salary + meaningful equity (depending on role and level)

  • Interview process: quick screen → technical (practical + systems) → team fit/values