Recraft

ML Data Engineer

In-Office
London, Greater London, England
Mid level
The ML Data Engineer will design and manage data pipelines for large-scale unstructured data, primarily images, ensuring efficient ingestion and preprocessing while collaborating with ML engineers for model training.
About Us

Founded in the US in 2022 and now based in London, UK, Recraft builds an AI tool for professional designers, illustrators, and marketers, setting a new standard for excellence in image generation.

We designed a tool that lets creators quickly generate and iterate original images, vector art, illustrations, icons, and 3D graphics with AI. Over 3 million users across 200 countries have produced hundreds of millions of images using Recraft, and we’re just getting started.

Join a universe of professional opportunities, develop and support large-scale projects, and shape the future of creativity. We are committed to making Recraft an essential, daily tool for every designer and setting the industry standard. Our mission is to ensure that creators can fully control their creative process with AI, providing them with innovative tools to turn ideas into reality.

If you’re passionate about pushing the boundaries of AI, we want you on board!

Job Description

At Recraft, we’re building the next generation of generative models across images and text. We’re looking for an ML Data Engineer to scale our data pipelines for unstructured data (primarily images) and keep our training flows fast, reliable, and repeatable. You’ll design and operate high-throughput ingestion and preprocessing on Kubernetes, evolve our internal data-pipeline framework, and work hand-in-hand with ML engineers to ship datasets that move model quality forward.

Key Responsibilities
  • Develop and maintain data-ingestion pipelines to source and prepare large-scale image (and occasional text/HTML) datasets from open, publicly accessible, and permitted sources.

  • Own the end-to-end flow: raw data → quality/beauty/relevance filtering → dedup/validation → ready-to-train artifacts (a minimal sketch of this flow follows the list below).

  • Operate and improve our Kubernetes-based data-pipeline framework (distributed jobs, retries, monitoring, automation).

  • Work with S3-style object storage: efficient layouts, lifecycle, throughput, and cost awareness.

  • Add tooling around pipelines (progress/health visualization, metrics, alerts) for observability and faster iteration.

  • Collaborate closely with ML engineers to align datasets with training needs and accelerate experimentation.
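
To make the end-to-end flow above concrete, here is a minimal, single-process sketch of the ingest → filter → dedup → artifact step, assuming S3-style object storage via boto3 and Pillow for image handling. The bucket names, size threshold, and exact-hash dedup are illustrative placeholders, not a description of Recraft's internal framework, which runs this kind of step as distributed Kubernetes jobs.

    import hashlib
    import io

    import boto3                    # any S3-compatible client works; boto3 assumed for illustration
    from PIL import Image

    s3 = boto3.client("s3")
    SRC_BUCKET = "raw-images"       # hypothetical source bucket
    DST_BUCKET = "train-ready"      # hypothetical destination for ready-to-train artifacts

    def iter_keys(bucket: str, prefix: str = ""):
        """Yield object keys page by page so huge listings never sit in memory."""
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                yield obj["Key"]

    def passes_filters(img: Image.Image) -> bool:
        """Placeholder quality/relevance filter: keep reasonably large RGB images."""
        return img.mode == "RGB" and min(img.size) >= 256

    seen = set()                    # exact-duplicate check; real pipelines would add perceptual hashing

    for key in iter_keys(SRC_BUCKET):
        body = s3.get_object(Bucket=SRC_BUCKET, Key=key)["Body"].read()
        digest = hashlib.sha256(body).hexdigest()
        if digest in seen:
            continue                # drop exact duplicates
        seen.add(digest)
        try:
            img = Image.open(io.BytesIO(body))
            img.load()              # force a full decode so corrupt files fail here, not downstream
        except Exception:
            continue                # skip unreadable files rather than failing the whole run
        if passes_filters(img):
            s3.put_object(Bucket=DST_BUCKET, Key=key, Body=body)

In practice each stage (listing, filtering, dedup, writing) becomes its own distributed job with retries and metrics, but the data flow is the same.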

Requirements

Must-have

  • Strong Python fundamentals; you write clean, maintainable, production-ready code.

  • Solid hands-on Kubernetes experience (containers, jobs, batch/distributed processing); see the sketch after this list.

  • Proven track record with unstructured data, especially images (loading, filtering, transforming at scale).

  • Experience developing data-ingestion or parsing tools for publicly accessible sources, including handling real-world reliability and failure cases gracefully.

  • Comfort with S3/object storage and moving lots of data efficiently and safely.

  • Pragmatic, detail-oriented, ownership mindset; you enjoy making systems reliable and fast.
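
As a rough sketch of the kind of Kubernetes batch processing mentioned above, an indexed Job could be submitted from Python with the official kubernetes client roughly as follows. The image name, namespace, and sharding scheme are hypothetical placeholders, not a description of Recraft's internal framework.

    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config() when running inside the cluster
    batch = client.BatchV1Api()

    job = client.V1Job(
        metadata=client.V1ObjectMeta(name="image-preprocess"),        # hypothetical job name
        spec=client.V1JobSpec(
            completions=8,             # 8 shards of the key listing
            parallelism=8,             # process all shards at once
            completion_mode="Indexed", # each pod gets JOB_COMPLETION_INDEX to pick its shard (needs a recent cluster/client)
            backoff_limit=3,           # retry a failed pod a few times before giving up
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="worker",
                            image="registry.example.com/preprocess:latest",   # hypothetical image
                            command=["python", "preprocess.py"],
                        )
                    ],
                )
            ),
        ),
    )

    batch.create_namespaced_job(namespace="ml-data", body=job)        # hypothetical namespace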

Nice-to-have

  • Familiarity with ML workflows (PyTorch) and downstream training considerations.

  • Experience with image quality scoring, captioning, or image-to-text pipelines (see the sketch after this list).

  • Experience building DAG/workflow visualizations or pipeline UX tooling.

  • DevOps fluency: Docker, CI/CD, infra automation.
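
For the image-scoring item above, one common pattern is to rank images by their similarity to text prompts describing the desired quality. The sketch below uses an off-the-shelf CLIP checkpoint from Hugging Face transformers purely for illustration; the prompts and threshold are arbitrary and not Recraft's actual scoring stack.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    MODEL_ID = "openai/clip-vit-base-patch32"      # small public checkpoint, chosen for illustration
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)

    PROMPTS = [
        "a high quality, well composed professional illustration",
        "a blurry, low quality image",
    ]

    def quality_score(path: str) -> float:
        """Return the probability mass CLIP assigns to the 'high quality' prompt."""
        image = Image.open(path).convert("RGB")
        inputs = processor(text=PROMPTS, images=image, return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image   # image-to-text similarity logits
        return logits.softmax(dim=-1)[0, 0].item()

    # Example: keep only images scoring above an (arbitrary) threshold.
    # keep = quality_score("sample.png") > 0.6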

What We Offer
  • Competitive salary and equity.

  • UK Skilled Worker visa sponsorship for qualified candidates.

  • Real impact on model quality: your pipelines directly power training runs and product improvements.

  • Ownership with support: autonomy to design and improve systems, alongside experienced ML peers.

  • Modern stack: Python, Kubernetes, S3, and an internal pipeline framework built for scale.

  • Growth: a fast-moving environment where shipping well-engineered systems is the norm.

Top Skills

Kubernetes
Python
S3
