Senior Data Engineer (Scoober)
Whether they know us as Takeaway.com, Lieferando.de, Pyszne.pl or any of our 11 international brands, we’re connecting food-lovers with the dishes they love. From romantic ramen for two to fries with friends, we’re serving up memorable mealtimes – anytime, anywhere.
We’ve come a long way since our start-up roots back in the days of dial-up internet (remember that?). But our commitment to staying ahead of the game with data analysis has remained constant. And that’s why we need you…
All about the role:
- Manage data infrastructure of the logistics department
- Collect and transform unstructured data from different sources into structured output, whether a columnar data store, flat files, Parquet/ORC files, NoSQL, or streams
- Apply best practices and lessons learned to build reliable agents that run several times a day or continuously, and that can handle model changes and other issues as they occur
- Create reusable, maintainable, scalable (vertical, horizontal) integrations/services using a cutting-edge hybrid cloud infrastructure
- Design and build new ETL/ELT and manage existing logistics data pipeline/ETL (batch and real-time data)
- Set up monitoring to ensure continuous high data quality
- Support performance tuning of existing tables and our Data Analysts in optimising queries
- Support Data Scientists in productionising their ML models
- Collaborate closely with the team and stakeholders in agile, iterative processes
Can you deliver…
- 4+ years of experience in data engineering
- You are experienced in SQL and in one or more programming languages – preferably Python
- You preferably have experience retrieving external data (scraping or via APIs)
- You have extensive, proven professional experience handling big data pipelines, data warehouses, or other (preferably distributed) data stores
- You have worked with distributed data warehouses such as Redshift, Hive, or Cassandra, or with database management systems such as Postgres, Oracle, or MySQL
- You are comfortable with transformation tools and workflow management systems (e.g. Airflow, Luigi)
- Experience in the cloud (AWS/GCP/Azure) and Spark/Hadoop is preferred
- You are comfortable handling streaming data and have proven experience designing and implementing real-time pipelines (required)
- Experience with microservices architecture, containerization, and k8s is preferred
- You are familiar with parsing structured and unstructured data and with creating ETL pipelines for data warehousing
- You know how to model data and processes, test data, implement proper logging, and troubleshoot issues swiftly
- You do not wait for a DevOps engineer to hand you servers – you would rather deploy services to the cloud yourself using Continuous Integration / Continuous Deployment
- You are proficient in written and spoken English
Here is our offer:
Like the best food pairings, your expertise and these great rewards belong together:
- Competitive salary
- The opportunity to play a key analytical role for a major international company
- Great company events, including our annual snow event
- A dynamic, welcoming office in the heart of Berlin (we’re right by Potsdamer Platz!)
- Contribution to your pension plan
- Office perks: Delicious tea & coffee, table tennis, our famous Friday drinks…and more
- Contribution to travel costs and Takeaway Pay allowance