Design and implement cloud solutions and build MLOps pipelines on them.
Receive models from data scientists, create and apply benchmarks and metrics, and make ML systems available to the systems that consume them.
Develop, deploy, and maintain machine learning models, pipelines, and workflows in production environments.
Work with the DevOps team to deploy and manage infrastructure for machine learning services.
Technical Skills – must-have
Bachelor’s or master’s degree in computer science, engineering, or a related field.
5+ years of experience in software development, machine learning engineering, or a related field.
Proficiency in programming languages such as Shell, Python, and SQL.
Ability to understand tools used by data scientists (e.g., Jupyter notebooks, pandas, SciPy), and experience with software development and test automation (Docker, Kubernetes).
Strong understanding of machine learning concepts and frameworks, including TensorFlow, PyTorch, scikit-learn, etc.
Familiarity with one or more data-oriented workflow orchestration frameworks (Kubeflow, Airflow, MLflow, Argo, etc.).
Good English communication skills (verbal, written, email) and the ability to explain complex ideas.
Technical Skills – nice-to-have
GCP certification.
Experience with time-series data and forecasting models.
Experience with data streaming technologies such as Kafka, Kinesis, etc.
Cloud DevOps tooling (Terraform, Jenkins, Ansible).
Security (IAM, AD, AD LDS, roles, service accounts, entitlements).
Agile development principles (Scrum, Kanban, MVP).
Python for data analytics (pandas, NumPy, etc.).
Additional Requirements
Ability to work in a team distributed across multiple countries and regions.