Uber is looking for an experienced Hadoop Operations Engineer to join our backend data engineering group. This group is responsible for real-time business metrics aggregation, data warehousing and querying, large-scale log processing, and schema and data management, as well as a number of other analytics infrastructure systems. Our mission is to architect, develop, and deploy world-class data systems to empower every tier of our incredibly fast-growing company. It’s a broad goal, and we have the right people, passionate about the right products, to make it happen.
In this role you will have the opportunity to design and build out the infrastructure that powers how every group within Uber accesses our huge trove of real-world data. Everything we do at Uber is data-driven, and as a Hadoop Operations Engineer in Data Engineering you will be at the heart of it.
HERE IS WHAT WE’RE LOOKING FOR:
- Deep experience building out and managing big data infrastructure, especially HDFS, Hadoop, Spark, Storm, and Kafka
- Experience with capacity planning and scaling systems to keep up with rapid growth
- Ability to work well across the entire engineering organization
- Excellent troubleshooting and debugging skills
- Programming experience, preferably in Java and Python
- Passion for building tools and automating everything