Senior Data Engineer
SENIOR DATA ENGINEER, FREELANCE – HILVERSUM, THE NETHERLANDS
We are embracing Big Data technologies to enable data-driven decisions, and we are looking to expand the Data Engineering team to keep pace. As a Senior Data Engineer you will work with a variety of talented teammates and be a driving force in building solutions for Digital. You will work on development projects related to consumer behavior, commerce, and web analytics.
ABOUT THE COMPANY
To us, innovation is about elevating human potential by building a creative, diverse global team. We strive to make a positive impact in the communities where we live and work, and to make our products more sustainably. We obsess over the needs of the world's best athletes, using their insights to create products that are beautiful and useful for everybody. To make big leaps, we take big risks, creating groundbreaking sport innovations. Incremental change won't get us where we want to go fast enough. The company is a place where everyone is an explorer, bringing diverse perspectives together: scientists and shoe designers, coders and quarterbacks, all sharing knowledge of the body in motion. We serve athletes…billions of them. Because, as our co-founder said, if you have a body, you're an athlete.
WHAT WILL YOU DO:
• Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem. You should be able to design and implement end-to-end solutions.
• Build utilities, user defined functions, and frameworks to better enable data flow patterns.
• Research, evaluate and utilize new technologies/tools/frameworks centered around Hadoop and other elements in the Big Data space.
• Define and build data acquisition and consumption strategies.
• Build and incorporate automated unit tests, participate in integration testing efforts.
• Work with teams to resolve operational and performance issues.
• Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and that engineering best practices are defined and adhered to.
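To give a flavor of the utility-building and unit-testing work described above, here is a minimal, hypothetical sketch of a reusable data-flow function with an automated test. The function name, the event schema, and the "type" field are illustrative assumptions, not part of the role; in practice such logic would typically run as a Spark transformation.

```python
import json
from collections import Counter

def count_events_by_type(lines):
    """Aggregate newline-delimited JSON event records by their 'type' field.

    A stand-in for the kind of small, testable utility a pipeline stage
    might apply; the schema here is purely illustrative.
    """
    counts = Counter()
    for line in lines:
        record = json.loads(line)
        # Records without a 'type' field are bucketed as 'unknown'.
        counts[record.get("type", "unknown")] += 1
    return dict(counts)

# Automated unit-test style check, as the responsibilities call for:
sample = ['{"type": "view"}', '{"type": "purchase"}', '{"type": "view"}']
assert count_events_by_type(sample) == {"view": 2, "purchase": 1}
```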
WHAT DO YOU NEED:
• MS/BS degree in a computer science field or related discipline.
• 6+ years’ experience in large-scale software development.
• 1+ year of experience with Hadoop or other big data technologies.
• Strong Java, Python, shell scripting, and SQL skills.
• Strong development skills around Hadoop, Spark, Hive, and Pig.
• Good understanding of file formats including JSON, Parquet, Avro, and others.
• Experience with performance/scalability tuning, algorithms and computational complexity.
• Ability to understand relational database schemas.
• Proven ability to work with cross-functional teams to deliver appropriate resolutions.
• Experience with AWS components and services, particularly EMR, S3, and Lambda.
• Experience with automated testing and Continuous Integration / Continuous Delivery.
NICE TO HAVE:
• Experience with NoSQL technologies such as HBase, DynamoDB, or Cassandra.
• Experience with messaging & complex event processing systems such as Kafka and Storm.
• Experience with machine learning frameworks.
• Statistical analysis with Python, R or similar.
• Experience (at least familiarity) with data warehousing, dimensional modeling and ETL development.
CONTRACT DETAILS:
• 6- to 12-month project (possible extension) based on a B2B contract.
• 75 euro gross per hour based on a 40-hour workweek.