About the role
We are looking for a savvy Data Engineer to join our growing team of data and analytics experts. You will play a key role in designing and developing our data ecosystem: big data pipelines, data tooling and processes. You will also help make our huge volume of data accessible, reliable and efficient to query.
- You will contribute your knowledge and experience to improving the current architecture and to researching and implementing new components on top of it, working with all kinds of information (external, internal, structured, unstructured).
- You will be involved in turning the data team into a high-performance, self-managed team with autonomy and a product mindset.
- You will have a personal training budget, flexible working hours and remote work (up to you!), and one more thing: 10% of your time to work on any project you’re interested in.
- You will work with lean and agile methodologies, XP, good practices, design principles and patterns. We love pragmatic, agile, quality-oriented ways of working.
- You will have everything you need to work comfortably.
Perks & Benefits
We have a beautiful office in a bike-friendly location in the center of Barcelona, with a big terrace and plenty of nice places nearby. We have a transparent and open culture with a high degree of autonomy. We offer competitive salaries, 23 vacation days per year plus 2 free days, English and Spanish classes, discounts on health insurance, flexible compensation, and an amazing cafeteria with freshly brewed coffee, tea, fruit, drinks and much more!
Your personal challenge
- Find the best tools or solutions to respond to new needs from the company.
- Develop complex, efficient pipelines to transform raw data sources into powerful, reliable components of our Data Lake and Data Warehouse.
- Participate in converting the Data Team from a Tech Team to a Product Team.
- Expand our current data flow with external data sources and streaming data.
- Work closely with Data Scientists, Data Analysts and the whole Engineering team to ensure high data quality and find new opportunities.
- Help implement Data Governance in the data infrastructure, working closely with the DevOps team.
- Optimize and improve existing features or data processes for performance and stability.
Requirements
- Good knowledge of Apache Spark, especially PySpark.
- Demonstrable experience (2+ years) working as a Data Engineer on AWS cloud or similar.
- Quality and detail-oriented with a keen eye for accuracy. We need to trust all the data delivered by the team.
- An analytical mind; you make decisions based on the data available.
- Strong SQL knowledge. You can write complex queries and optimise them for performance, scalability and ease of maintenance.
- Experience with relational databases such as PostgreSQL.
- Comfortable working with version control tools such as Git.
- Familiar with Zeppelin or Jupyter notebooks.
- Write efficient, well-tested code with a keen eye on scalability and maintainability.
- Excellent interpersonal skills including written and verbal communication.
- Fluent spoken and written English.
- European Work Permit.
Nice to have
- Experience in the following technologies: AWS Glue and Apache Airflow.
- Data Modeling: Snowflake or Star Schema
- Experience building stream-processing systems, using solutions such as Kafka and Spark Streaming.
- Redshift, PostgreSQL (RDS).
- Git / GitHub
- Fluent spoken and written Spanish.