Role Description
We’re looking for someone who wants to bring a full-stack perspective to data. As a Software Engineer supporting our Data function, you will be responsible for creating and maintaining pipelines that enable our Data Science, Engineering, and Product teams, and the wider Mercor organization.
Your focus will be on data reliability, availability, and timeliness, with close collaboration with (and significant operational crossover with) our Data Science team and many partner functions.
Responsibilities
- Building robust pipelines to ingest, transform, and consolidate data from diverse sources (e.g., MongoDB, Airtable, PostHog, production databases).
- Designing dbt models and transformations to standardize and unify many disparate tables into clean, production-ready schemas.
- Implementing scalable, fault-tolerant data workflows with Fivetran, dbt, SQL, and Python.
- Partnering with engineers, data scientists, and business stakeholders to ensure data availability, accuracy, and usability.
- Owning data quality and reliability across the stack, from ingestion through to consumption.
- Continuously improving pipeline performance, monitoring, and scalability.
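As a purely illustrative sketch of the kind of work described above (not Mercor's actual stack or schema; the source names, fields, and helpers here are hypothetical), consolidating heterogeneous sources into one clean schema with basic quality gates might look like:

```python
# Hypothetical example: merge user records from disparate sources
# (e.g. a MongoDB export and an Airtable export) into one clean schema,
# applying simple quality and de-duplication checks along the way.

from dataclasses import dataclass


@dataclass(frozen=True)
class UserRecord:
    user_id: str
    email: str
    source: str  # which upstream system the record came from


def normalize_email(raw: str) -> str:
    """Standardize emails so records from different sources can be joined."""
    return raw.strip().lower()


def consolidate(sources: dict[str, list[dict]]) -> list[UserRecord]:
    """Merge rows from heterogeneous sources into UserRecord instances,
    skipping malformed rows and de-duplicating across sources."""
    seen: set[str] = set()
    out: list[UserRecord] = []
    for source_name, rows in sources.items():
        for row in rows:
            uid, email = row.get("user_id"), row.get("email")
            if not uid or not email or "@" not in email:
                continue  # quality gate: drop rows missing/invalid key fields
            if uid in seen:
                continue  # first source wins; later duplicates are dropped
            seen.add(uid)
            out.append(UserRecord(uid, normalize_email(email), source_name))
    return out
```

In practice this transformation layer would live in dbt models over a warehouse rather than application code, but the shape of the work is the same: normalize, validate, and de-duplicate before anything downstream consumes the data.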
Qualifications
- Proven experience in data engineering, with strong knowledge of SQL, Python, and modern data stack tools (Fivetran, dbt, Snowflake, or similar).
- Experience building and maintaining large-scale ETL/ELT pipelines across heterogeneous sources (databases, analytics platforms, SaaS tools).
- Strong understanding of data modeling, schema design, and transformation best practices.
- Familiarity with data governance, monitoring, and quality assurance.
- Comfort working cross-functionally with engineering, product, and operations teams.
- Bonus: prior experience supporting machine learning workflows or analytics platforms.