Our client is the leading Visual Media Platform, and we believe in high-quality advertising content. We help brands take advantage of visual media in the most responsive, bespoke, and non-intrusive way. Our proprietary technology uses Machine Learning to provide human-like understanding of content, the highest level of brand safety in the industry, and unmatched, cookieless targeting capabilities.

The team faces massive scalability challenges: it must process millions of requests per minute under very strict latency requirements. Using machine learning algorithms, we analyze the content of every page and achieve human-like content understanding, providing the best possible ad contextualization in the industry.



You will be a key player in developing a reliable data architecture for ingesting, processing, and surfacing data for large-scale applications.
You will cooperate with other teams to unify data sources, and will recommend and implement ways to improve data reliability, quality, and integrity.
You will start by processing data from different sources using tools such as SQL, MongoDB, and Apache Beam, and will explore and propose new methods and tools for acquiring new data.
You will work with the data science and data analytics teams to help them improve their processes by building new tools and implementing best practices.
You will ensure continuous improvement in delivery, applying engineering best practices to the development, monitoring, and data quality of the data pipelines.

You have 2-4 years of solid experience in Data Engineering.
You have a degree in Computer Science, Engineering, Statistics, Mathematics, Physics, or another degree with a strong quantitative component.
You are comfortable with object-oriented languages such as Python or Scala, and you are fluent in working with a Linux terminal and writing basic bash scripts.
You have ample experience with Data Engineering tools such as Apache Beam, Spark, Flink, or Kafka.
You have experience orchestrating ETL processes with systems such as Apache Airflow, and managing data stores such as SQL databases, Hive, or MongoDB.

Access to a flexible benefits plan (plan de retribución flexible) with restaurant, transportation, and kindergarten vouchers, and discounts on medical insurance
A great work location in the heart of Madrid (Gran Via) with food, snacks, and great coffee
The option to work from home when you need to, and genuinely flexible working hours
The option to take company-paid English and/or Spanish courses weekly

Job sector:
Requirements · Minimum experience:
3 to 5 years
Requirements · Languages:
  • English
Requirements · Skills:
  • Autonomy
  • Leadership
  • Decision-making
Requirements · Programming languages:
  • Python
  • Spark
Requirements · Environments:
  • Google Cloud
  • Kafka
  • TensorFlow