The Software Engineer will be responsible for:
Participating in end-to-end development, including design, development, and deployment.
Working closely with Technical Leads/Architects to ensure that solutions are consistent with the IT Roadmap.
Participating in technical life-cycle processes, including impact analysis, design review, code review, and peer testing.
Contributing toward building a highly effective team.
Participating in development in the Hadoop ecosystem using Python and Spark.
Troubleshooting and fixing issues reported in production.
Providing input to project planning meetings and system analysis.
Required skills
Experience with complex Data Warehouse applications.
Hands-on experience with the Hadoop ecosystem
Experience with Hive (must have)
Experience with Spark, including a deep understanding of Spark architecture and internals.
Experience with ETL using Spark SQL (must have), including Spark SQL performance tuning
Experience with Python (must have)
Experience with the Cloudera distribution of Hadoop
Experience with the data ingestion tools Sqoop and NiFi (Niagara Files)
Experience with Oracle/Netezza databases, ETL, and dimensional modeling.
Experience with shell scripting and Control-M
Good interpersonal skills, with the ability to work with cross-functional teams located across geographies.
Good communication skills and the ability to interact with partners
Attention to detail and sound judgment
Ability to deal effectively with ambiguity
Self-motivation and a high degree of intellectual curiosity
Ability to prepare for and adjust to changes in the operating environment
Commitment to promoting a positive and professional work environment
High level of commitment, initiative, vision, and enthusiasm
Education and Experience
Bachelor's degree in Computer Science or MCA
2+ years of relevant experience in the development and implementation of ETL projects in the Hadoop ecosystem, with 2.5–4.5 years of overall development experience