Senior Machine Learning Engineer
TrueAccord
Key Responsibilities
- Building ML Infrastructure: As the primary architect, developer, and owner of our production Machine Learning infrastructure, you will design and build a scalable, efficient platform that serves our needs.
- ML Pipeline Development: Our vibrant data science team develops new models that solve business problems. As our Senior MLE, you will understand each model, find ways to scale it, and deploy the pipeline it requires.
- Architecting the Data Platform: As part of the Data and ML Engineering team, you will work closely with other Data Engineers to find the best solutions for building a scalable data platform and supporting existing pipelines.
- Feature Engineering: Creating and maintaining offline and online feature stores and developing the features each model requires.
- ML Infrastructure Development: Building scalable, modern, and efficient infrastructure for ML models is one of the main responsibilities of this role.
- Model Monitoring and Maintenance: We have a few models in production that require support and monitoring.
- Data Strategy: Participating in data engineering team strategy decisions is a key responsibility of this role.
- Collaboration: The Senior Machine Learning Engineer will help the data engineering team make architectural and design decisions that enable robust data and ML products, develop ETLs, and work closely with the Data Science team to scale their algorithms and deploy their models to production.
You have:
- Bachelor's degree in Computer Science, Engineering, or related technical field; Master's degree preferred
- 5+ years of hands-on experience in machine learning engineering, with at least 3 years in data engineering-focused roles
- Deep understanding of database systems, ETL architecture, and data warehousing concepts
- Strong proficiency in Python and ML frameworks (TensorFlow, PyTorch, Scikit-learn)
- Proven experience building and optimizing large-scale data infrastructure on AWS cloud services, using tools such as Terraform, CDK, or CloudFormation
- Advanced SQL skills and experience with NoSQL databases, with demonstrated expertise in big data technologies (e.g., Redshift, Databricks)
- Experience with Docker containerization; knowledge of orchestration platforms such as Kubernetes is required
- Strong analytical and problem-solving skills, with proven ability to design scalable, efficient systems
- Track record of successful collaboration with data science teams and stakeholders
You might also have:
- Experience with Data Lakes and Snowflake.
- Experience with NoSQL databases such as DynamoDB.
- Experience with streaming technologies (e.g., Kafka) and event-based architectures.
- Knowledge of emerging technologies and trends in machine learning engineering.
- Familiarity with Domain-Driven Design principles
- Certification in relevant technologies or methodologies.