
Job Location: Pune
Experience: 2 – 4 Years
Skills Required:
- Analyze, design, develop, implement and support Business Intelligence technical solutions leveraging Apache Spark and Amazon Redshift
- Participate in Agile delivery and in the implementation and maintenance of source control and release procedures
- Develop Spark jobs in Scala to extract incoming data from multiple sources (see the Scala sketch after this list)
- Identify data quality issues and their root causes; propose fixes and design data audits (a simple audit sketch also follows this list)
- Create technical and end-user documentation for developed BI components
- Take ownership of issues, identify process improvements, and provide strategic direction for their resolution
- Experience with Scala, Java, or similar typed languages; Python is optional
- Solid understanding of big data systems in the Hadoop ecosystem; experience with Apache Spark
- Experience with the AWS ecosystem
- Sense of ownership and accountability, along with strong critical thinking skills
- Strong oral and written communication skills
- Exceptional analytical, conceptual, and problem-solving skills
- Ability to identify process improvements and provide strategic direction
- Strong skills in writing technical architecture and technical support documentation
- Ability to translate business requirements into system or technology requirements and vice versa
- Contribute to a collaborative work environment in which all team members are respected regardless of their individual differences and are motivated to improve both their individual and team contributions
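
To give a concrete feel for the Spark work referenced above, here is a minimal sketch of a Scala Spark job that reads incoming data from two hypothetical sources on S3, joins them, and appends the result to Amazon Redshift over JDBC. The bucket paths, table names, credentials, and driver class are placeholders, and a real pipeline might use the spark-redshift connector instead of plain JDBC.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object IngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("multi-source-ingest")
      .getOrCreate()

    // Read incoming data from two hypothetical S3 locations.
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3a://example-bucket/incoming/orders/")

    val customers = spark.read
      .json("s3a://example-bucket/incoming/customers/")

    // Join the sources and tag each row with an ingestion timestamp.
    val enriched = orders
      .join(customers, Seq("customer_id"), "left")
      .withColumn("ingested_at", current_timestamp())

    // Append to Redshift over JDBC; connection details and the driver
    // class are placeholders and depend on the driver version in use.
    enriched.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/analytics")
      .option("dbtable", "reporting.orders_enriched")
      .option("user", sys.env.getOrElse("REDSHIFT_USER", ""))
      .option("password", sys.env.getOrElse("REDSHIFT_PASSWORD", ""))
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .mode("append")
      .save()

    spark.stop()
  }
}
```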
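
The data-audit responsibility can start from something as simple as counting nulls per column and duplicate key groups. The sketch below assumes a hypothetical orders dataset keyed by order_id; the dataset path and key column are illustrative only.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object DataAudit {
  // Report null counts per column and the number of duplicate key groups;
  // a starting point for the kind of data audits described above.
  def audit(df: DataFrame, keyCols: Seq[String]): Unit = {
    val nullCounts = df.select(
      df.columns.map(c => count(when(col(c).isNull, c)).alias(s"${c}_nulls")): _*
    )
    nullCounts.show(truncate = false)

    val duplicateKeys = df.groupBy(keyCols.map(col): _*)
      .count()
      .filter(col("count") > 1)
    println(s"Duplicate key groups: ${duplicateKeys.count()}")
  }

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("data-audit").getOrCreate()
    val orders = spark.read
      .option("header", "true")
      .csv("s3a://example-bucket/incoming/orders/")
    audit(orders, Seq("order_id"))
    spark.stop()
  }
}
```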