It will be practically impossible for human brains to understand how to run and optimize a 5G network; Machine Learning (ML) and other Artificial Intelligence (AI) technologies will be vital for us to handle that complexity. We are setting up a Global AI Accelerator in the US, Sweden and India, with 300 experts, to fast-track our strategy execution. Machine Intelligence (MI), the combination of Machine Learning and other Artificial Intelligence technologies, is what Ericsson uses to drive thought leadership and to automate and transform Ericsson's offerings and operations. MI is a key competence for enabling new and emerging business. This includes the development of models, frameworks and infrastructure that power our 5G networks and services. We engage in both academic and industry collaborations to drive the digitalization of Ericsson and the industry. Our global group develops state-of-the-art solutions that simplify and automate processes in our products and services, and creates new value through data insights.
Ericsson’s Global AI Accelerator (GAIA) is a global team of talented data scientists, data engineers and business translators, chartered to accelerate the transformational journey for our customers and their customers. Teams in GAIA India, located in Bangalore and Chennai, work with technologists, engineers and operators around the world to build revolutionary solutions and models that drive economic growth in the real world.
- Bachelor's/Master's degree in Engineering from a reputed institute, preferably with a Computer Science / Information Science major. First Class, preferably with Distinction.
- Overall industry experience of 5+ years.
- At least 3 years’ experience as a Data Engineer.
- Programming knowledge of Python, Java or Scala (advanced level in at least one)
- Expert knowledge of SQL and traditional RDBMSs
- Experience in Data warehouse design and dimensional modeling
- Familiarity with NoSQL databases such as Cassandra, Solr, MongoDB, etc.
- Experience with tools/software for big data processing such as Hadoop, Spark
- Experience with handling data streams with tools such as Flink, Spark Streaming, Kafka or Storm
- Experience with data and model pipeline and workflow management tools such as Azkaban, Luigi, Airflow or Dataiku.
- Experience with Docker containers, orchestration systems (e.g. Kubernetes), continuous integration and job schedulers.
- Knowledge of serverless architectures (e.g. Lambda, Kinesis, Glue).
- Experience with microservices and REST APIs.
- Familiar with agile development and lean principles.
- Contributor to or owner of a GitHub repository.
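The workflow-management tools listed above (Azkaban, Luigi, Airflow, Dataiku) share one core idea: a pipeline is a directed acyclic graph (DAG) of dependent tasks. As a minimal, tool-agnostic sketch in plain Python (the `extract`/`transform`/`load` task names are purely illustrative), dependency-ordered execution can be shown with the standard library's `graphlib`:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Illustrative tasks of a tiny ETL pipeline; in Airflow or Luigi these
# would be operators/Task classes, here they are plain functions.
results = []

def extract():
    results.append("extract")

def transform():
    results.append("transform")

def load():
    results.append("load")

# DAG edges: load depends on transform, transform depends on extract.
dag = {"load": {"transform"}, "transform": {"extract"}}
tasks = {"extract": extract, "transform": transform, "load": load}

# Run every task after all of its dependencies have completed.
for name in TopologicalSorter(dag).static_order():
    tasks[name]()

print(results)  # tasks executed in dependency order
```

A real orchestrator layers scheduling, retries and monitoring on top of the same DAG declaration; the ordering itself is exactly this topological sort.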
- Gain a good understanding of business processes and domain knowledge by working with stakeholders, including the Executive, Product, Data and Design teams.
- Contribute to the data warehouse design and data preparation by implementing a solid, robust, extensible design that supports key business flows.
- Assist with the creation and maintenance of complex data pipelines.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Keep data separated and secure across national boundaries through multiple data centers and strategic customers/partners.
- Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management systems.
- Build sanity checks and dashboards for monitoring data quality, pipeline performances and infrastructure health.
- Support DataOps competence build-up in Ericsson Businesses and Customer Serving Units
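The "sanity checks" mentioned above can be as simple as validating row counts and null rates before a pipeline stage publishes a batch downstream. A minimal sketch in plain Python (column names and thresholds are made up for illustration; production setups would typically use a data-quality framework or custom Spark jobs):

```python
def check_batch(rows, required_cols=("user_id", "event_ts"), max_null_rate=0.01):
    """Return a list of human-readable violations; an empty list means the batch is OK."""
    violations = []
    if not rows:
        violations.append("batch is empty")
        return violations
    for col in required_cols:
        nulls = sum(1 for r in rows if r.get(col) is None)
        rate = nulls / len(rows)
        if rate > max_null_rate:
            violations.append(f"{col}: null rate {rate:.1%} exceeds {max_null_rate:.1%}")
    return violations

# A clean batch passes; a batch with 5% nulls in both columns is flagged.
good = [{"user_id": i, "event_ts": 1000 + i} for i in range(100)]
bad = good[:95] + [{"user_id": None, "event_ts": None}] * 5

print(check_batch(good))  # no violations
print(check_batch(bad))   # one violation per failing column
```

Checks like this are cheap to run on every batch and feed naturally into the monitoring dashboards described above.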