You will work with Engineers, Data Scientists, DevOps, and PMs, and will be in charge of designing and developing Apache Spark applications on AWS infrastructure that process large volumes of media exposure records while preserving privacy.
If you want to grow with us and work for a world-leading company, now is the right time to join.
What You'll Do
Be part of an engineering team that focuses on Big Data analytical platforms and tools
Design and implement highly reliable and scalable Spark applications that efficiently process big data
Collaborate with data scientists on the design and implementation of ML data pipelines
Integrate with relational databases, MongoDB, and AWS services (e.g. EMR, S3, SQS)
Work in an agile team to improve the development life cycle, development practices, and testing facilities
What You'll Bring
B.Sc. in Computer Science or equivalent
4+ years of experience with server-side Java programming
4+ years of experience building production SaaS solutions in the cloud (AWS, Azure, GCP)
Strong analytical and troubleshooting skills
Strong software design skills
Self-motivated and fast learner with a strong sense of ownership
Proficiency in English
Nice to Have
Experience with Spark
Experience with Big Data processing and use cases
Experience with Python
Experience with NoSQL databases such as MongoDB
Significant experience with EMR