About the Role
Join us in building the backbone of production-grade AI systems. As an MLOps Engineer, you will be responsible for designing, deploying, and maintaining scalable machine learning infrastructure that powers real-world applications.
You will work at the intersection of machine learning, software engineering, and DevOps, ensuring models move seamlessly from experimentation to reliable production systems. This role is ideal for engineers who enjoy solving complex infrastructure challenges and enabling ML teams to move faster.
Responsibilities
• Design, build, and maintain end-to-end ML pipelines.
• Automate model training, validation, and deployment workflows.
• Develop CI/CD pipelines specifically for ML systems.
• Monitor production models for performance, drift, and reliability.
• Manage model versioning, experiment tracking, and reproducibility.
• Collaborate with ML engineers, data scientists, and backend teams.
• Optimize infrastructure for scalability, cost, and performance.
• Ensure best practices in security, governance, and compliance.
Tech Stack & Tools
• Programming: Python, Bash
• ML Tools: MLflow, Weights & Biases, Kubeflow
• Cloud Platforms: AWS (SageMaker, S3, EC2), GCP (Vertex AI), Azure ML
• Orchestration: Airflow, Prefect
• Containerization: Docker, Kubernetes
• Data Tools: SQL, Spark, Kafka (streaming pipelines)
• CI/CD: GitHub Actions, Jenkins, GitLab CI
• Monitoring: Prometheus, Grafana, ELK Stack
Requirements
Key Focus: Build reliable, scalable, and automated ML systems
Required Skills:
• Strong experience with Python and software engineering fundamentals.
• Hands-on experience with MLOps tools and pipeline automation.
• Experience deploying ML models in production environments.
• Familiarity with cloud platforms (AWS/GCP/Azure).
• Knowledge of containerization and orchestration (Docker, Kubernetes).
• Understanding of the ML lifecycle and model evaluation concepts.
• Experience with CI/CD pipelines and version control (Git).
Valuable Experience (Nice to Have):
• Experience with real-time ML systems or streaming pipelines.
• Familiarity with LLM deployment and inference optimization.
• Knowledge of feature stores and model registries.
• Exposure to distributed systems and large-scale data processing.
• Understanding of monitoring, logging, and observability systems.
About the Company
SierraCorp AI Staffing is a specialized engineering recruitment firm dedicated to closing the talent gap for high-growth startups by placing elite AI and Machine Learning engineering talent. Unlike generalist agencies, we focus exclusively on roles at the intersection of AI and infrastructure, including ML, Data Science, and MLOps Engineers. Our technical depth is a key differentiator: our recruiters are former engineers who thoroughly pre-vet every candidate, assessing engineering ability and coding skill before submission. We prioritize speed and precision, delivering a curated slate of qualified candidates within an average of 5–7 business days, supported by proprietary AI-powered sourcing tools. Whether you need full-time permanent hires or flexible part-time contractors, SierraCorp is built for long-term partnership and mutual success.