
Job ID: 5626

Data Engineer

Job Description:

• Contribute to the design and growth of our Data Products and Data Warehouses around Content Performance and Content Metadata.

• Design and develop scalable data warehousing solutions, building ETL pipelines in Big Data environments (cloud, on-prem, hybrid)

• Our tech stack includes Hadoop, AWS, Snowflake, Spark and Airflow

• Help architect data solutions/frameworks and define data models for the underlying data warehouse and data marts

• Collaborate with Data Product Managers, Data Architects and Data Engineers to design, implement, and deliver successful data solutions

• Maintain detailed documentation of your work and changes to support data quality and data governance

• Ensure high operational efficiency and quality of your solutions to meet SLAs and support commitment to our customers (Data Science, Data Analytics teams)



Qualifications:

• 5+ years of data engineering experience developing large data pipelines

• Strong SQL skills and ability to create queries to extract data and build performant datasets

• Hands-on experience with distributed data processing systems such as Spark (including PySpark) and the Hadoop ecosystem (HDFS, Hive, Presto) to query and process data

• Experience with at least one major MPP or cloud database technology (Snowflake, Redshift, BigQuery)

• Nice to have: experience with cloud technologies such as AWS (S3, EMR, EC2)

• Solid experience with data integration toolsets (e.g., Airflow) and with writing and maintaining data pipelines

• Familiarity with Data Modeling techniques and Data Warehousing standard methodologies and practices

• Good scripting skills, including Bash and Python

• Familiar with Scrum and Agile methodologies

• You are a problem solver with strong attention to detail and excellent analytical and communication skills

• Bachelor’s or Master’s Degree in Computer Science, Information Systems or related field
