Pride Technology Is Hiring Big Data Developers

Experience: 3-5 years
Location: Chennai

Minimum 2 years of hands-on experience in Spark/Scala, with overall development experience of 3+ years
In-depth knowledge of Spark components and the ecosystem is a must, especially the DataFrame API
Able to create database schemas and design ETL that represents and supports business processes
Experience with / understanding of any NoSQL or column-store database (preferably HBase or MapR-DB)
Must be able to recognize code that is more parallel and less memory-constrained, and must be able to apply best practices to avoid runtime issues and performance bottlenecks
Must have extensive experience with performance tuning, optimization, configuration, and scheduling in Apache Spark
Must have a basic understanding of distributed file systems (preferably Apache HDFS or MapR-FS) and be inclined to learn them in greater depth
Knowledge of processing different file formats and experience in building generic Spark/Scala frameworks for data ingestion, ETL, etc.
Knowledge of abstract classes, traits, higher-order functions, pure functions, recursion, lazy evaluation, immutable data structures, collections, and multi-threading
Strong problem solver with a deep curiosity about data processing and patterns
Able to work as an individual contributor and/or take responsibility for technical design, with experience in guiding/mentoring others
Strong functional programming knowledge, including experience with design patterns
Experience with relational databases such as MySQL, Oracle, or similar
Exposure to Agile/Scrum, TDD, and continuous integration tools such as Jenkins, Bamboo, etc.
Excellent analytical aptitude and problem-solving skills
Proficient understanding of code versioning tools
Excellent communication and customer interfacing skills
Experience in API development is a plus
Solid basic SQL skills (inner joins, outer joins, aggregations)
Basic familiarity with Parquet files
Basic Linux and bash scripting skills
Basic familiarity with Apache Drill (creating views, impersonation, etc.)
Basic familiarity with the MapR-DB / HBase API (creating indexes, etc.)
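As a reference point for the Scala concepts listed above (traits, higher-order functions, pure functions, recursion, lazy evaluation, immutable collections), here is a minimal, self-contained sketch; `Parser`, `CsvParser`, and `Pipeline` are hypothetical names used for illustration only, not part of any real framework:

```scala
// Trait with an abstract member, implemented by a concrete object.
trait Parser {
  def parse(line: String): List[String]
}

object CsvParser extends Parser {
  // Pure function: the result depends only on the input, no side effects.
  def parse(line: String): List[String] = line.split(",").toList
}

object Pipeline {
  // Higher-order function: accepts another function as a parameter.
  def transform(records: List[String], f: String => List[String]): List[List[String]] =
    records.map(f)

  // Tail recursion over an immutable List.
  @annotation.tailrec
  def countFields(rows: List[List[String]], acc: Int = 0): Int = rows match {
    case Nil          => acc
    case head :: tail => countFields(tail, acc + head.size)
  }

  // Lazy evaluation: not computed until first accessed.
  lazy val header: List[String] = CsvParser.parse("id,name,amount")
}
```

For example, `Pipeline.transform(List("1,a", "2,b"), CsvParser.parse)` yields `List(List("1", "a"), List("2", "b"))`, and `Pipeline.header` is only parsed on first access.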