
Big Data Engineer | Openings in Nishta Solutions – Apply Now!

Last updated on 19th May 2020, Blog, Jobs in Coimbatore


Job Description

Plans, designs, develops, and tests software systems or applications for software enhancements and new products, including cloud-based and internet-related tools. Analyzes requirements, tests and integrates application components, and ensures system improvements are successfully implemented. Drives unit-test automation. Is well versed in current development methodologies such as Agile, Scrum, DevOps, and test-driven development. Enables solutions that account for APIs, security, scalability, manageability, usability, and other factors critical to complete solutions. Usually holds an academic degree in Computer Science, Computer Engineering, or Computational Science.

Navigate the Hadoop ecosystem and know how to leverage and optimize what it has to offer:

  • Hadoop development, debugging, and implementation of workflows and common algorithms
  • Apache Hadoop and data ETL (extract, transform, load), ingestion, and processing with Hadoop tools
  • Knowledge of building a scalable and integrated Data Lake for an Enterprise
  • Use the HDFS architecture, including how HDFS implements file sizes, block sizes, and block abstraction. Understand default replication values and storage requirements for replication. Determine how HDFS stores, reads, and writes files.
  • Analyze the order of operations in a MapReduce job, how data moves from place to place, how partition and combiners function, and the sort and shuffle process
  • Analyze and determine which of Hadoop’s data types for keys and values are appropriate for a job. Understand common key and value types in the MapReduce framework and the interfaces they implement
  • Organizing data into tables, performing transformations, performance tuning and simplifying complex queries with Hive and Impala
  • How to pick the best tool for a given task in Hadoop, achieve interoperability, and manage recurring workflows
  • Strong programming skills in Java or Python
  • Working knowledge of data ingestion using Spark, supporting file types such as JSON, XML, and CSV as well as databases
  • Hands-on development experience extracting data from sources such as SFTP, Amazon S3, and other cloud data sources
  • Designing optimal HBase schemas for efficient data storage, and ingestion into HBase using the native API
  • Knowledge of Kafka, Spark Streaming, and streaming data load types and techniques
  • Strong SQL and data analysis skills
  • Strong shell scripting (or another scripting language)
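The map → sort-and-shuffle → reduce flow described in the MapReduce bullets above can be sketched in plain Python, mimicking the Hadoop Streaming model in which the framework sorts mapper output by key before the reducer sees it. This is a minimal illustration, not a Hadoop API; the function names are made up for the sketch.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit (word, 1) pairs, like a Hadoop Streaming mapper."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_sort(pairs):
    """Hadoop's sort-and-shuffle step: sort mapper output and group it by key."""
    return groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0))

def reducer(grouped):
    """Reduce phase: sum the counts emitted for each word."""
    return {word: sum(count for _, count in values) for word, values in grouped}

def word_count(lines):
    return reducer(shuffle_sort(mapper(lines)))
```

On a real cluster the mapper and reducer run as separate processes on different nodes, and a combiner would typically pre-aggregate counts on the map side to cut shuffle traffic.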
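The ETL and file-format bullets above come down to an extract-transform-load pipeline; here is a minimal standard-library sketch of that flow. In practice the same steps would go through Spark readers and DataFrames, and the field names (`user`, `amount`) are hypothetical.

```python
import csv
import io
import json

def extract(csv_text):
    """Extract: parse CSV text into one dict per record."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: cast types and drop malformed records."""
    out = []
    for row in rows:
        try:
            out.append({"user": row["user"], "amount": float(row["amount"])})
        except (KeyError, ValueError):
            continue  # skip rows that fail validation
    return out

def load(records):
    """Load: serialize to newline-delimited JSON, a common ingestion format."""
    return "\n".join(json.dumps(r) for r in records)

raw = "user,amount\nalice,10.5\nbob,oops\ncarol,3\n"
ndjson = load(transform(extract(raw)))
```

The malformed `bob` row is dropped during the transform step, so only the two valid records reach the load stage.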
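Organizing data into tables and simplifying queries, as the Hive/Impala bullet describes, ultimately comes down to standard SQL. This sqlite3 sketch (table and column names are invented for illustration) shows the kind of per-key aggregation a HiveQL query would run over HDFS-resident data.

```python
import sqlite3

# In-memory database standing in for a Hive/Impala table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("alice", "buy", 10.0), ("alice", "buy", 5.0), ("bob", "view", 0.0)],
)
# Aggregate per user -- the equivalent HiveQL would look the same.
rows = conn.execute(
    "SELECT user, COUNT(*), SUM(amount) FROM events GROUP BY user ORDER BY user"
).fetchall()
```

In Hive or Impala the main differences are physical, not syntactic: the table is partitioned and stored on HDFS, and the engine parallelizes the scan and aggregation across the cluster.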


Benefits:

  • Opportunity to learn software development skills from the best in the industry
  • Health Insurance
  • Snacks and beverages in the office

Job Type: Full-time


Experience:

  • Big Data: 1 year (Preferred)


Education:

  • Bachelor’s (Preferred)
