We are the leader in Behavioral Biometrics, a technology that leverages machine learning to analyze an online user's physical and cognitive digital behavior to protect individuals online. Our mission is to unlock the power of behavior and deliver actionable insights, creating a digital world where identity, trust, and ease coexist.

Today, 32 of the world's 100 largest banks and 210 financial institutions in total rely on our Connect platform to combat fraud, facilitate digital transformation, and grow customer relationships. Our Client Innovation Board, an industry-led initiative that includes American Express, Barclays, Citi Ventures, and National Australia Bank, helps us identify creative, cutting-edge ways to leverage the unique attributes of behavior for fraud prevention. With over a decade of analyzing data, more than 80 registered patents, and unparalleled experience, we continue to innovate to solve tomorrow's problems.
Main responsibilities:
Set the direction of our data architecture and determine the right tool for each job: we collaborate on the requirements, and then you call the shots on what gets built.
Manage end-to-end execution of high-performance, large-scale data-driven projects, including design, implementation, and ongoing maintenance.
Monitor and optimize the team's cloud costs.
Design and construct monitoring tools to ensure the efficiency and reliability of data processes.
Implement CI/CD for data workflows.
Requirements:
3+ years of experience in data engineering and big data at large scale. – Must
Extensive experience with the modern data stack – Must:
Snowflake, Delta Lake, Iceberg, BigQuery, Redshift
Kafka, RabbitMQ, or similar for real-time data processing.
PySpark, Databricks
Strong software development background with Python/OOP and hands-on experience in building large-scale data pipelines. – Must
Hands-on experience with Docker and Kubernetes. – Must
Expertise in ETL development, data modeling, and data warehousing best practices.
Knowledge of monitoring & observability tools (Datadog, Prometheus, ELK, etc.).
Experience with infrastructure as code, deployment automation, and CI/CD practices, using tools such as Helm, ArgoCD, Terraform, GitHub Actions, and Jenkins.
This position is open to all candidates.