Our toolchain democratizes access to data and makes it easy and painless to run experiments that establish cause and effect. The team focuses on the complete data life cycle to ensure that any data leaving Kiwi.com is of the highest quality.
If you are interested in taking our data loads and collectors to the next level, covering both batch and real-time processing of our data routines, you are the one we are looking for! If you love to experiment and build on top of technologies like Airflow, custom Python apps, or anything else from the open-source world, come and see us!
A few examples of our Data Engineers’ work:
Data workflow management: To manage our data loads for the analytics world, we use Apache Airflow. Airflow lets us schedule data-related workflows with a code-as-configuration model and a web front end, and it drives the data routines that feed our data provisioning customers.
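To give a feel for the code-as-configuration idea, here is a minimal sketch using only the Python standard library (no Airflow required): the workflow is declared as data, and a scheduler runs tasks in dependency order. The task names are hypothetical, not from a real Kiwi.com pipeline.

```python
# Toy illustration of code-as-configuration workflow scheduling.
# Task names ("extract", "transform", ...) are made up for the example.
from graphlib import TopologicalSorter

# Declare the workflow as data: each task maps to its upstream dependencies.
workflow = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run(workflow):
    """Execute tasks in dependency order, as a scheduler would."""
    order = list(TopologicalSorter(workflow).static_order())
    for task in order:
        print(f"running {task}")
    return order

if __name__ == "__main__":
    run(workflow)
```

In Airflow proper, the same structure is expressed as a DAG of operators with `>>` dependencies, and the scheduler adds retries, backfills, and the web UI on top.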
Real-time streaming infrastructure: To enable our analytics teams to move quickly, getting accurate data with minimal delay is a core focus of data provisioning & engineering. We are currently building out real-time infrastructure that allows easy development of streaming applications, including anomaly detection and forecasting.
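As a rough sketch of what a streaming anomaly check can look like, here is a simple rolling z-score detector over a stream of values. This is an illustrative assumption on our part, not the team's actual implementation; real pipelines would consume from a broker such as Kafka.

```python
# Sketch: flag stream values that deviate strongly from a recent window.
from collections import deque
from math import sqrt

class RollingZScore:
    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)  # recent values only
        self.threshold = threshold

    def update(self, x):
        """Return True if x looks anomalous versus the recent window."""
        anomalous = False
        if len(self.window) >= 2:
            n = len(self.window)
            mean = sum(self.window) / n
            var = sum((v - mean) ** 2 for v in self.window) / n
            std = sqrt(var)
            if std > 0 and abs(x - mean) / std > self.threshold:
                anomalous = True
        self.window.append(x)
        return anomalous
```

A detector like this keeps constant memory per metric, which is what makes it practical to run inline on a high-volume stream.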
Interactive dimensional analysis: Our data analysts have a strong need to query data and compute aggregates over various dimensional cuts on a “yesterday was too late” timeframe. To address this, we are building a query tool stack that lets users interactively slice and dice large datasets.
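“Slicing and dicing” boils down to aggregating a fact table over a chosen set of dimensions. The sketch below shows the idea in plain Python; the column names and figures are invented for illustration, and in practice this is a SQL `GROUP BY` run on an engine like Presto over much larger data.

```python
# Illustrative dimensional aggregation; rows and columns are made up.
from collections import defaultdict

bookings = [
    {"market": "CZ", "carrier": "A", "bookings": 3},
    {"market": "CZ", "carrier": "B", "bookings": 5},
    {"market": "UK", "carrier": "A", "bookings": 2},
]

def rollup(rows, dims, measure):
    """Sum `measure` over every combination of values of `dims`."""
    out = defaultdict(int)
    for row in rows:
        key = tuple(row[d] for d in dims)
        out[key] += row[measure]
    return dict(out)
```

Changing `dims` from `["market"]` to `["market", "carrier"]` is exactly the interactive “cut” an analyst makes in the query tool.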
What will you do?
- Develop, monitor, and support our data workflow management environments and ELT/ETL tooling as a service; decommission services and tools that are no longer used and handle the data associated with them appropriately
- Provide continuous support for data workflow management and ETL jobs across our data infrastructure services; maintain and share all relevant information on the current infrastructure and tools
- Educate colleagues on the current tooling and data used within the data provisioning stack, to make access easier for anyone in the company
- Regularly and clearly communicate the team’s achievements and the progress of its main projects
What do we expect?
- 2+ years of full-time, industry experience
- Experience with and interest in technologies like Airflow, Postgres, Redis/Kafka, or Presto
- Working knowledge of relational databases and query authoring (SQL)
- Experience with batch and real-time data processing routines
- Strong coding skills in Python (preferred) or Ruby
- Rigor in high code quality, automated testing, and other engineering best practices
- Experience operating robust distributed systems in the cloud (AWS, Google Cloud) is a strong plus
- BS/MS in Computer Science or a related field (ideal)
We believe we are a fun bunch to work with: you’ll grow in a fresh, global company, talk with people from around the world, and never get bored.
You’ll work in one of the most promising tech companies (awarded Forbes Startup of 2017, the Super-brand Award 2017, and a place in the Deloitte Technology Fast 50 as the fastest-growing technological company in Central Europe), and here is what we offer:
- We use our work time wisely and we have a flexible work schedule.
- We like to party and hang out together.
- We also enjoy benefits such as meal vouchers, a flexible benefits scheme, sick days, VIP Medical Care, flight vouchers, a free Multisport card, etc.
- Do, fail, learn – repeat! We understand that mistakes happen and we learn fast.
- Besides a fair salary, we can also look forward to quarterly bonuses dependent on our performance.
- Kids and parties are welcome in our offices.
Does this sound interesting to you? Grab your chance and apply today!
Your manager to be