Data engineer

Are you passionate about designing robust data pipelines and ready to develop and deploy them at our clients? Then you are a great fit for our role as data engineer.

As a data engineer you are responsible for:

  • Talking with the customer to correctly identify their needs
  • Constructing ETL/ELT pipelines to support their needs
  • Combining data from multiple sources in an efficient and performant manner
  • Setting up a monitoring and alerting system
  • Choosing the right architecture based on the client's needs and current stack
  • Advising the client's teams (business analysts, data scientists) on the correct use of their data

Desired qualifications:

  • Knowledge of Python (Pandas, NumPy, …) and PySpark (data loading and transformation, ETL/ELT)
  • Knowledge of SQL
  • Knowledge of how to build a data lake (HDFS, Hive, Parquet, Delta)
  • Knowledge of Databricks
  • Knowledge of at least one major cloud infrastructure vendor (AWS, GCP, Azure)

Nice to have (plus):

  • Knowledge of Docker
  • Knowledge of DevOps, CI/CD and testing
  • Knowledge of a Unix environment
  • Knowledge of streaming data (Kafka)
  • Knowledge of Airflow, Apache Beam, …
  • Knowledge of data modelling (Kimball, historization, SCD)

What we can offer:

  • You will face a variety of projects where you can use state-of-the-art technologies
  • We offer an individualized learning path with dedicated feedback and training sessions
  • We focus not only on your technical skills, but also provide the guidance you need to become a seasoned business professional
  • We offer numerous internal and external training sessions
  • We provide flexible work-from-home arrangements in line with the client's needs
  • We offer a competitive remuneration package with bonuses and a company car
  • We have numerous knowledge-sharing sessions and fun social activities such as team building, a trip abroad, …

Apply now