Responsibilities:
⇒ Demonstrate ability to translate functional/high-level design into detailed technical design
⇒ Understand basic relational database concepts and design database schemas based on the design
⇒ Understand and follow the defined processes in the project
⇒ Perform solid design, coding, testing, and debugging
⇒ Lead a team of 2–3 junior developers and mentor them
⇒ Help debug client-side issues in production or UAT
⇒ Take complete ownership of a module and see it through to completion
⇒ Design and implement Java and Big Data software components from scratch using the specifications provided
⇒ Develop key technologies that support data management, including data collection, processing, transformation, storage, and analytics
⇒ Support the implementation of application software releases and other related activities
⇒ Provide debugging and code analysis support
⇒ Troubleshoot production issues.
Requirements/Qualifications:
⇒ Extensive experience in a distributed big data environment
⇒ Expertise in Java, J2EE, Spring, and Hibernate
⇒ 1–3 years of experience with any of the following: Hadoop, MapReduce, Hive, Pig, Sqoop/Sqoop2, Flume, Kafka, Storm, HBase, Elasticsearch, Spark, Oozie
⇒ Ability to gather and process raw data at scale using techniques such as writing scripts, web crawling, web scraping, calling APIs, and writing SQL queries
⇒ Experience managing AWS, Rackspace, or other cloud services
⇒ Knowledge of data modeling concepts, methodologies, and architecture
⇒ Demonstrated ability to translate functional/high-level design into detailed technical design
⇒ Strong technical documentation skills; demonstrates initiative and is a self-starter
⇒ Ability to design, code, test, debug, and document software to satisfy business requirements for large, complex projects
⇒ Responsible for on-time delivery of high-quality code
⇒ Support the QA phase by tracking defects and assigning them to the development team