Our mission is to explore and implement new technologies and enrich the company's internal data assets through smart collection, integration, and automation.
We move fast, work across multiple domains, and maintain a culture that values curiosity, ownership, and impact.
This is an on-site position.
Responsibilities
Take end-to-end ownership of data pipelines, from extraction (web scraping, APIs) through transformation and orchestration to delivering accessible, high-value datasets.
Integrate new and external data sources into the company's internal platforms.
Troubleshoot issues in real time and optimize pipeline performance through smart automation.
Collaborate with cross-functional teams to improve access to high-quality, structured data.
Work on multiple projects simultaneously in a dynamic and agile environment.
Lead and contribute to early-stage innovation projects directly impacting business strategy.
Requirements
2+ years of hands-on Python development (ETL, scripting, automation).
Strong knowledge of SQL and ability to work independently with relational databases.
Experience building and maintaining ETL workflows and orchestrating data processes.
Familiarity with scraping tools/frameworks (e.g., requests, Selenium, BeautifulSoup).
Ability to manage multiple tasks and projects independently and efficiently.
A genuine love for technology, a curiosity to explore new tools, and an eagerness to learn.
Advantages
Experience with modern data platforms such as Snowflake.
Background in web data extraction and automation.
Understanding of data warehouse architecture and data quality practices.
System-level thinking and an innovation-driven mindset.