
Opus 2

Machine Learning Ops Engineer - AI

Reposted 17 Days Ago
In-Office or Remote
Hiring Remotely in London, England
Mid level
Description

As Opus 2 continues to embed AI into our platform, we need robust, scalable data systems that power intelligent workflows and support advanced model behaviours. We’re looking for an MLOps Engineer to build and maintain the infrastructure that powers our AI systems. You will be the bridge between our data science and engineering teams, ensuring that our machine learning models are deployed, monitored, and scaled efficiently and reliably. You’ll be responsible for the entire lifecycle of our ML models in production, from building automated deployment pipelines to ensuring their performance and stability. This role is ideal for a hands-on engineer who is passionate about building robust, scalable, and automated systems for machine learning, particularly for cutting-edge LLM-powered applications.

What you'll be doing
  • Design, build, and maintain our MLOps infrastructure, establishing best practices for CI/CD for machine learning, including model testing, versioning, and deployment.
  • Develop and manage scalable and automated pipelines for training, evaluating, and deploying machine learning models, with a specific focus on LLM-based systems.
  • Implement robust monitoring and logging for models in production to track performance, drift, and data quality, ensuring system reliability and uptime (an illustrative drift-check sketch follows this list).
  • Collaborate with Data Scientists to containerize and productionize models and algorithms, including those involving RAG and Graph RAG approaches.
  • Manage and optimize our cloud infrastructure for ML workloads on platforms like Amazon Bedrock or similar, focusing on performance, cost-effectiveness, and scalability.
  • Automate the provisioning of ML infrastructure using Infrastructure as Code (IaC) principles and tools.
  • Work closely with product and engineering teams to integrate ML models into our production environment and ensure seamless operation within the broader product architecture.
  • Own the operational aspects of the AI lifecycle, from model deployment and A/B testing to incident response and continuous improvement of production systems.
  • Contribute to our AI strategy and roadmap by providing expertise on the operational feasibility and scalability of proposed AI features.
  • Collaborate closely with Principal Data Scientists and Principal Engineers to ensure that the MLOps framework supports the full scope of AI workflows and model interaction layers.
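
To ground the monitoring point above, here is a minimal, hypothetical sketch of the kind of drift check such a pipeline might run. It is not Opus 2's implementation: the feature sample sources, the alerting threshold, and the choice of a two-sample Kolmogorov-Smirnov test are illustrative assumptions.

# Hypothetical drift check: compare a live production feature sample against a
# training-time baseline with a two-sample Kolmogorov-Smirnov test.
# Data sources, threshold, and alerting action are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE_THRESHOLD = 0.01  # assumed alerting threshold


def has_drifted(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Return True when the live sample differs significantly from the baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < DRIFT_P_VALUE_THRESHOLD


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in for a training-time sample
    live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # stand-in for a shifted production sample
    if has_drifted(baseline, live):
        print("Drift detected: raise an alert and consider retraining")
    else:
        print("No significant drift detected")

In a production pipeline a check like this would typically run on a schedule per monitored feature, with results exported to whatever monitoring stack (for example Prometheus and Grafana, which are named later in this posting) backs the alerting.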

What excites us?

We’ve moved past experimentation. We have live AI features and a strong pipeline of customers excited to get access to new and improved AI-powered workflows. Our focus is on delivering real, valuable AI-powered features to customers and doing it responsibly. You’ll be part of a team that owns the entire lifecycle of these systems, and your role is critical to ensuring they are not just innovative, but also stable, scalable, and performant in the hands of our users.

Requirements

What we're looking for in you

  • You are a practical and automation-driven engineer. You think in terms of reliability, scalability, and efficiency.
  • You have hands-on experience building and managing CI/CD pipelines for machine learning.
  • You're comfortable writing production-quality code and reviewing PRs, and you're dedicated to delivering a reliable and observable production environment.
  • You are passionate about MLOps and have a proven track record of implementing MLOps best practices in a production setting.
  • You’re curious about the unique operational challenges of LLMs and want to build robust systems to support them.

Qualifications

  • Experience with model lifecycle management and experiment tracking.
  • Ability to reason about and implement infrastructure for complex AI systems, including those leveraging vector stores and graph databases.
  • Proven ability to ensure the performance and reliability of systems over time.
  • 3+ years of experience in an MLOps, DevOps, or Software Engineering role with a focus on machine learning infrastructure.
  • Proficiency in Python, with experience in building and maintaining infrastructure and automation, not just analyses.
  • Experience working in Java or TypeScript environments is beneficial.
  • Deep experience with at least one major cloud provider (AWS, GCP, Azure) and their ML services (e.g., SageMaker, Vertex AI). Experience with Amazon Bedrock is a significant plus (an illustrative Bedrock sketch follows this list).
  • Strong familiarity with containerization (Docker) and orchestration (Kubernetes).
  • Experience with Infrastructure as Code (e.g., Terraform, CloudFormation).
  • Experience in deploying and managing LLM-powered features in production environments.
  • Bonus: experience with monitoring tools (e.g., Prometheus, Grafana), agent orchestration, or legaltech domain knowledge.
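
As a purely illustrative note on the Amazon Bedrock item above, a minimal sketch of invoking a hosted model through boto3's Bedrock Runtime Converse API might look like the following. The region, model ID, prompt, and inference settings are placeholder assumptions, not details taken from the posting.

# Hypothetical sketch: call a hosted foundation model via Amazon Bedrock's
# Converse API using boto3. Region, model ID, and prompt are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="eu-west-2")  # assumed region

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarise the key dates in this document: ..."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])

Productionising a call like this is where the MLOps concerns in this posting come in: containerising the service, versioning prompts and models, and monitoring latency, cost, and output quality.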
Benefits

Working for Opus 2

Opus 2 is a global leader in legal software and services and a trusted partner of the world’s leading legal teams. All our achievements are underpinned by our unique culture, where our people are our most valuable asset. Working at Opus 2, you’ll receive:

  • Contributory pension plan.
  • 26 days’ annual holiday, hybrid working, and length-of-service entitlement.
  • Health Insurance.
  • Loyalty Share Scheme.
  • Enhanced Maternity and Paternity.
  • Employee Assistance Programme.
  • Electric Vehicle Salary Sacrifice.
  • Cycle to Work Scheme.
  • Calm and Mindfulness sessions.
  • A day of leave to volunteer for charity or to provide dependent cover.
  • Accessible and modern office space and regular company social events.

Top Skills

Amazon Bedrock
AWS
Azure
CloudFormation
Docker
GCP
Grafana
Java
Kubernetes
Prometheus
Python
SageMaker
Terraform
TypeScript
Vertex AI

