Capgemini Hiring Big Data Developer

Job Role- Spark Developer
Experience- 2.5 to 4 Years
Job Location- Chennai

The primary focus of the role is to own and drive delivery of the target solution architecture, along with the analysis, development, administration, installation and management of the APE systems supported by the IRP IT team.

The job holder will work with the APE management team to define the technical strategy for deliveries and will also participate in all core development undertaken by the APE (training on Big Data and Ab Initio technologies may be provided if necessary).

  • Act as an analyst/developer for all core (cross-APE) development undertaken by the APE
  • Work with onshore and offshore developers
  • Track and report on the delivery of shared functionality
  • Respect development guidelines (shared environments, versioning, team working)
  • Contribute to installation, administration and management activities
  • Architectural Oversight
  • Work across multiple projects being delivered by the APE to ensure target architecture of solutions is to standard
  • Definition of technical roadmap/strategy for BI systems within APE and also the CIB Data Hub
  • Working with APE management globally on alignment to overall CIB strategy for APE
  • Definition of data management strategy with CDO teams in the business
  • Reporting and presenting to architecture committees as required
  • Excellent knowledge of core Java, object-oriented architecture and programming (5+ years)
  • Successful experience in a Data Hub or Big Data environment using SparkSQL (3 years)
  • Excellent shell scripting skills
  • Knowledge and experience of programming in distributed environments
  • Ability to deliver SQL and NoSQL developments
  • Ability to use Hive, Apache SparkSQL, Drill
  • Ability to work in Agile mode and participate in daily stand-ups and other Agile ceremonies
  • Knowledge or experience of Ab Initio or another ETL tool
  • Knowledge and usage of Eclipse or similar tools
  • Ability to work with Quality Center
  • Knowledge and usage of SVN, Git or similar tools
  • Ability to work in an international environment
  • Good English communication and teamwork skills
  • Database (SQL and Big Data) modelling, administration and installation
  • Basic Hadoop knowledge, with the ability to work in this ecosystem and to be trained further
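To illustrate the kind of day-to-day scripting the shell-development requirement implies, here is a minimal sketch; the feed file, field layout and currency values are hypothetical, not part of the role description:

```shell
#!/bin/sh
# Hypothetical example: count records per currency in a pipe-delimited feed,
# the sort of pre-load sanity check a data-platform developer often scripts.
printf 'EUR|100\nUSD|250\nEUR|75\n' > /tmp/trades_feed.dat

# Extract the first field, then count occurrences of each distinct value.
cut -d'|' -f1 /tmp/trades_feed.dat | sort | uniq -c | sort -rn
```

Running this prints each currency prefixed by its record count, most frequent first.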