We're looking for a Data Engineer to join our Data Labs (DL) department, which specializes in professional services for our super-premium customers. This role reports to the DL Data Science Team Manager in R&D.
Why is this role so important at our company?
We are a data-focused company, and our unique AI and machine learning capabilities are at the center of our business.
As part of this role, you will build and support a complex data model pipeline that helps analyze the petabytes of data we receive from various sources, and you will research and develop new features and capabilities for our product solutions.
As a data engineer on the Data Labs team, you will work at the very core of the company. Part of your role will be to create processes that turn raw data into usable metrics, and to leverage AI models and statistical algorithms to support out-of-the-box requests from customers who want custom data labs. The Data Labs department's business-oriented nature also means you will be supporting a team of analysts and data scientists who interact directly with customers. Together with them, you will translate the voice of these customers into best-in-class data labs.
So, what will you be doing all day?
Build and maintain our big-data pipelines
Take a major part in designing and implementing complex, high-scale systems using a wide variety of technologies
Collaborate with a team of smart, motivated engineers and data scientists on the planning, development, and maintenance of our products
Implement solutions in the AWS cloud environment, and work in Databricks with PySpark
Requirements:
This is the perfect job for someone who:
Holds a BSc degree in Computer Science or has equivalent practical experience
Loves building robust, fault-tolerant, and scalable systems and products
Is a go-getter and a team player with a sense of ownership
Has at least 3 years of server-side software development experience in one or more general-purpose programming languages (C#, Go, Python, etc.)
Has experience building large-scale web APIs; experience with microservices architecture, AWS, and databases (Redis, PostgreSQL, Firebolt) is an advantage
Is familiar with Big Data technologies; familiarity with Spark, Databricks, and Airflow is a big advantage
Has worked in a cloud environment such as AWS or GCP and is familiar with its various services
Is familiar with ML pipelines and applications
Is familiar with LLM tools and frameworks
This position is open to all candidates.
