Vega IntelliSoft is hiring a Big Data Developer

Job Description:

We are looking for candidates who have worked with AWS or Azure, or who know PySpark.

o Hands-on experience developing routines using Hadoop MapReduce, PySpark, Hive, Sqoop, and Linux shell scripting.
o Expertise in Hadoop, Hive, Spark (PySpark), Spark Streaming with Kafka, and Sqoop.
o Demonstrated knowledge of cloud computing fundamentals on either AWS (EC2, EMR, data lakes and analytics on AWS) or Azure (HDInsight, Data Factory, etc.).
o Create PySpark jobs for data transformation and aggregation; tune PySpark queries and optimize their performance (an illustrative sketch follows this list).
o Demonstrated knowledge and use of the following languages and formats: Python, Scala, shell scripts, JSON, SQL.
o Demonstrated performance in all areas of the SDLC, specifically related to ETL solutions.
o Design data processing pipelines.
o Experience implementing production data pipelines and creating repeatable ingestion patterns.
o Experience with various databases and platforms, including but not limited to DB2, Oracle, and Teradata.
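
For illustration only, the kind of PySpark transformation and aggregation routine referred to above might look like the minimal sketch below. The app name, input/output paths, and column names (amount, event_ts, region) are assumptions made for this example, not part of the role description.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job: daily sales aggregation (names and paths are illustrative).
spark = (
    SparkSession.builder
    .appName("sales-daily-aggregation")
    .getOrCreate()
)

# Read raw events (schema and path are assumed for the sketch).
events = spark.read.parquet("s3://example-bucket/raw/sales/")

# Transformation: keep valid rows and derive an event_date column.
cleaned = (
    events
    .filter(F.col("amount") > 0)
    .withColumn("event_date", F.to_date("event_ts"))
)

# Aggregation: daily totals and order counts per region.
daily_totals = (
    cleaned
    .groupBy("region", "event_date")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Basic tuning step: repartition by the write key to avoid many small output files.
(
    daily_totals
    .repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/sales_daily/")
)

spark.stop()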

Key Skills: AWS, Azure
Location: Chennai
Experience: 0-3 years
Positions: 1
Skill: Hadoop
Role: Big Data Developer