Following a year of rapid growth in our open-source ML projects and wins with top-tier customers across sectors supporting strategic AI platform initiatives, Seldon is expanding with our first technical hire in Bengaluru, India!
We're looking for our first technical presence on the ground to help build our India operation from day zero. Seldon is looking for a Machine Learning Engineer to join our Solutions Team.
Seldon is a London-based scale-up that builds open source and enterprise machine learning frameworks powering massive-scale deployments of production AI systems. Our open source frameworks benefit from over 200,000 installations and power our enterprise product, Seldon Deploy, which is currently used by some of the leading global organisations across industries such as automotive, pharma, and finance.
About the role
Your role at Seldon will primarily involve:
- Supporting production systems at scale based on our open source and enterprise machine learning products in Kubernetes
- Triaging production client deployments to ensure success for large scale production machine learning systems in critical environments
- Submitting bug reports and patches to our open source and enterprise products to resolve issues
- Contributing to our open source projects to extend their functionality
- Architecting solutions for critical industry machine learning systems
- Identifying & documenting best practices for ML Engineering
- Optimising the performance of machine learning systems
- Designing and delivering high impact solutions with top tier organisations
- Contributing to global open source projects and technology conferences
- Growing within a scaling startup crafting a role of your own
What we look for
- A degree or higher-level academic background in a scientific or engineering subject
- Strong computer science foundations
- Strong system architecture knowledge and experience
- Familiarity with Linux-based development
- Experience architecting and applying technology to solve real-world challenges
- Experience delivering production-level, client-facing projects
- Experience with Kubernetes and the ecosystem of Cloud Native tools
- Experience using machine learning tools in production
What we offer
- Share options to align you with the long-term success of the company
- An exciting phase of fast-paced start-up challenges with an ambitious team and strong potential for professional growth
- Access to discounted lunches, gyms, shopping and cinema tickets
- Healthcare benefits
- Flexible work-from-home policy
About our tech stack
Some of our high profile technical projects:
- We are core authors and maintainers of Seldon Core, the most popular Open Source model serving solution in the Cloud Native (Kubernetes) ecosystem
- We built and maintain the black box model explainability tool Alibi
- We are co-founders of the KFServing project, and collaborate with Microsoft, Google, IBM, and others on extending the project
- We are core contributors to the Kubeflow project and meet weekly on several workstreams with Google, Microsoft, Red Hat, and others
- We are part of the SIG-MLOps Kubernetes open source working group, where we contribute through examples and prototypes around ML serving
- We run the largest Tensorflow meetup in London
- And much more
Some of the technologies we use in our day-to-day:
- Go is our primary language for all things backend infrastructure, including our Kubernetes Operator and our new Go microservice orchestrator
- Python is our primary language for machine learning, and powers our most popular Seldon Core Microservices wrapper, as well as our Explainability Toolbox Alibi
- We leverage the Elastic Stack to provide full data provenance on inputs and outputs for thousands of models in production clusters
- Metrics from our models are collected using Prometheus, with custom Grafana integrations for visualisation and monitoring
- Our primary service mesh backend leverages the Envoy Proxy, fully integrated with Istio, but also with an option for Ambassador
- We leverage gRPC and protobufs to standardise our schemas and achieve high processing throughput through complex inference graphs
- We use React.js for all our enterprise user products and interfaces
- Kubernetes and Docker to schedule and run our services (Oliver, our Head of Engineering, gave a great talk at KubeCon on how we use these technologies)
- AWS for most of our infrastructure
- React for internal web dashboards
- We also have two physical datacenter sites with actual cables to connect to various third parties
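To give a flavour of the Python side of this stack, here is a minimal sketch of the kind of model class that Seldon Core's Python wrapper turns into a serving microservice. The `predict` entry point follows the wrapper's documented convention; the class name and the toy "model" logic (a per-row mean, standing in for real inference) are invented purely for illustration:

```python
# Minimal sketch of a Seldon Core-style Python model class.
# The `predict(X, names, meta)` method is the convention the Seldon Core
# Python wrapper exposes over REST/gRPC; everything else here is a toy
# stand-in for a real trained model.

class IrisClassifier:
    """Illustrative model: returns the mean of each input feature row."""

    def __init__(self):
        # In a real deployment you would load trained model weights here,
        # e.g. from a file baked into the container image.
        self.ready = True

    def predict(self, X, names=None, meta=None):
        # X is a batch of feature rows; the wrapper calls this method for
        # each inference request and serialises the return value into the
        # response payload.
        return [sum(row) / len(row) for row in X]


if __name__ == "__main__":
    model = IrisClassifier()
    print(model.predict([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]))
```

In a real cluster, a class like this is packaged into a container and wired into an inference graph via a SeldonDeployment resource, with the wrapper handling the REST/gRPC plumbing.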
Our interview process is normally a phone interview, a coding task, and 2-3 hours of onsite interviews. We promise not to ask you any brain teasers or trick questions. We might design a system together on a whiteboard, the same way we often work together, but we won’t make you write code on one. Our recruitment process has an average length of 3 weeks.