Job Location: Chennai
Experience: 2-5 Years
Primary Skill: Hadoop
- 10+ years of relevant work experience managing and delivering large and complex software solutions.
- Extensive experience in the Big Data ecosystem using Hadoop, HBase, Spark, Kafka, Hive, and Java technologies. Experience with the Cloudera Hadoop distribution is preferred.
- Nice to have – knowledge of a stream processing framework (Spark Streaming, Storm, or Flink).
- Nice to have – understanding of search platforms such as Solr or Elasticsearch.
- Strong understanding of OLTP, Data Lake, Data Integration, and ETL/ELT technologies.
- Nice to have – knowledge of OLAP and data cubes.
- Advanced knowledge of multiple design patterns for real-time and batch processing use cases.
- Fluency in data platform architectural principles, engineering principles, and the implementation of distributed and parallel computing solutions.
- Strong technical vision and exceptional communication skills, including the ability to effectively communicate and negotiate with partner architects.
- Ability to work closely with multi-location, cross-platform teams from business and technology.
- Ability to handle multiple competing priorities in a fast-paced environment.
- Undergraduate degree in Computer Science or an Engineering discipline; a Master's degree from a top-tier school is preferred.