About the company

Our client is a consulting company focused on cloud data engineering, with an innovative service model that both delivers advanced data management services and develops its clients' own cloud data engineering capabilities.

About the role

You will work on data engineering projects in highly up-to-date technological ecosystems, taking on development, architecture, and operations responsibilities.


  • Implement the new Modern Data Infrastructure working closely with the DevOps team
  • Establish a Data Governance system, Data Infrastructure and GDPR compliance
  • Develop complex and efficient pipelines to transform raw data sources into powerful, reliable components for our data lake
  • Optimize and improve existing features or data processes for performance and stability

Job requirements

  • Solid SQL knowledge and expertise. You can write complex queries and optimize them for performance, scalability and ease of maintenance
  • Solid experience with Python
  • Solid experience orchestrating Data Pipelines with Airflow
  • Experience with Terraform, Spark and Kafka
  • 2+ Years experience working with Azure or any other Cloud Platform (E.g. GCP, AWS)
  • Demonstrable experience defining technical roadmaps and implementing data observability
  • Experience with software development best practices, such as version control, testing, CI/CD and automation

What they offer

  • Hybrid work: at least 1-2 days on-site at the office
  • 23 vacation days
  • An annual bonus


Job sector:
Requirements · Minimum experience:
1 to 3 years
Requirements · Languages:
  • English
Requirements · Skills:
  • Self-motivation
Requirements · Programming languages:
  • Python
  • Spark
  • SQL
Requirements · Environments: