Data Engineer

NET CHECK, dedicated to providing high-quality engineering services to the telecommunications industry, is continuously expanding its global presence and has recently established a local office in Serbia. We are currently moving from an existing on-premise PostgreSQL infrastructure to a scalable Kubernetes solution on Azure. We are looking for Data Engineers to join our team in Berlin to support this process and to establish and improve pipelines, from basic ETL to product creation.


As a Data Engineer, you will:

  • develop, set up, improve and maintain big data projects,
  • design and support testing environments and support ongoing projects,
  • work internally as part of our Engineering team and take an integral role in the development,
  • be responsible for data storage architecture, the operation and maintenance of the data warehouse, and the creation of data backup strategies; you must be self-directed and comfortable supporting multiple teams, systems, and products,
  • create and maintain optimal data pipeline architecture,
  • assemble large, complex data sets,
  • identify, design, and implement internal process improvements: automate manual processes, optimize data delivery, re-design infrastructure for greater scalability, etc.,
  • build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using state-of-the-art tools and procedures,
  • build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics,
  • work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs,
  • create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader,
  • work with data and analytics experts to strive for greater functionality in our data systems.


Mandatory Requirements:

  • Bachelor’s degree in computer science, computer software/computer systems engineering, computer systems and networks, electrical/electronic engineering, mathematics, or physics,
  • 2+ years of experience in a Data Engineer role,
  • Good knowledge of/experience in transforming existing SQL scripts into scalable Spark jobs/services,
  • Knowledge of Hadoop and Spark,
  • Knowledge of database systems (SQL and NoSQL),
  • Knowledge of ETL tools,
  • Knowledge of Data APIs,
  • Knowledge of algorithms and data structures,
  • Experience with Spark and Spark environments for Big Data solutions,
  • Understanding the basics of distributed systems,
  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets,
  • Strong analytic skills related to working with structured datasets,
  • Experience with object-oriented/functional scripting languages (e.g., Java),
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores,
  • Strong project management and organizational skills,
  • Ability to work independently but comfortable working in a team environment,
  • Fluent English, spoken and written; an additional language (German) is an advantage

Optional Requirements:

  • Experience in Telecommunication,
  • PySpark knowledge,
  • C++,
  • AWS and Azure knowledge,
  • Linux and bash scripting,
  • Experience with Apache Airflow,
  • Kaggle experience

We offer:

  • Hands-on experience in a real big data environment,
  • Working in a pleasant and dynamic environment with a motivated team,
  • Responsible tasks with scope for creative input,
  • Flat hierarchies with short decision paths,
  • Competitive compensation and benefits,
  • Possibility to work remotely, with availability to travel for on-site projects,
  • Subsidised BVG-Jobticket,
  • Company bicycle (Dienstfahrrad)
