Job Location: Chennai
Experience: 2 Years
Primary Skill: Knowledge of Hadoop
Responsibilities
- Design, implement, and support efficient and reliable data pipelines that move data from a wide variety of sources to our data warehouses/data lake
- Design and implement data aggregation, cleansing and reporting layers
- Work cross-functionally with analysts, product managers, and BU/FG teams to build systems for effective data exploration and consumption
- Work in an agile, SCRUM driven environment to deliver new, innovative products
- Ability to proactively follow issues through to resolution
- Contribute to innovations that fuel the growth of Intuit as a whole
- End-to-end engineering: design, development, testing, deployment, and operations
- Ability to work in a dynamic environment and adapt to changing business requirements using Agile methodologies and a DevOps culture
- Team player with strong analytical, problem-solving, and communication skills, willing to share expertise with others
Qualifications
- BE in Computer Science or equivalent work experience
- Strong CS fundamentals, including data structures, algorithms, and distributed systems.
- Strong problem solving, decision-making, and analytical skills.
- 3 to 5 years of experience in a software development role with a focus on data systems.
- Hands-on experience with AWS (S3, Redshift/Spectrum, Athena, EMR, Spark, Storm, QuickSight, Kinesis, Lambda, etc.)
- Advanced SQL skills and experience developing data warehouse and data mart solutions.
- Hands-on experience with Informatica ETL (traditional PowerCenter and Informatica Cloud) is a plus
- Hands-on experience and programming skills in Python or Java
- Hands-on experience building scalable and reliable data pipelines based on Big Data technologies like Hadoop, MapReduce, Hive, Pig, Spark, etc.