We are looking for a savvy Data Engineer to join our growing team of data engineering and analytics experts. The new hire will be responsible for developing data pipeline architecture and optimizing data flow and collection across Trimble's divisions. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up, and who is self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing, or even re-designing, parts of Trimble's architecture to support our next generation of products and data initiatives.
Responsibilities
- Read and understand high-level product descriptions or requirement documents, and propose one or more module-level software designs that are highly reusable and consistent with the design principles of data-processing frameworks.
- Decompose design elements into structured code per prevailing coding guidelines; prepare and execute unit test cases, and develop test code or test harnesses. Trace issues back through code and design, and resolve bugs.
- Document work, software designs, and code; produce test reports and release notes.
- Plan, organize, and execute assignments with minimal to moderate supervision.
- Be responsible for deliveries within the required deadlines. Deliverables can include modules, documentation, customer releases, etc.
- Coordinate with the team for timely delivery of work products, and ensure their quality through reviews.
- Take sub-module-level responsibility in large projects, module (or component) level responsibility in small to medium-sized projects, and complete responsibility in small projects, depending on complexity and decomposition.
- Work with the QA team to ensure the validity of the solution.
- Stay current with technological and market developments.
- Other responsibilities as assigned by the management from time to time.
Skills & Competencies
- Good understanding of operating system principles, software architectures, algorithms, and software engineering principles.
- Proficiency in Java/Python and strong knowledge of object-oriented programming.
- Good understanding of how databases work.
- Experience implementing a data warehouse for an enterprise customer.
- Good exposure to design, development, and debugging tools.
- Knowledge of various Software Development Lifecycles.
- Good analytical and problem-solving skills.
- Excellent communication and interpersonal skills.
- Upbeat, highly motivated self-starter.
- Great to have:
- Working knowledge of big data technologies such as Hadoop, Hive, Spark, Elasticsearch, HBase, and Kafka
- Cloud experience with AWS, Azure, or GCP
Qualifications & Experience
- B.E / B.Tech or M.E / M.Tech / M.S. in Computer Engineering or an equivalent degree with a good JEE / AIEEE / GATE score.
- 2 to 5 years of working experience in a tier-1 or tier-2 organization.
- A good score in a national-level Olympiad or talent search examination will be an added advantage.