Job Seekers, Please send resumes to email@example.com
Detailed Job Description:
- Overall 7 years of experience, with at least 5 years of experience in Hadoop administration.
- Perform Big Data administration and engineering activities on multiple Hadoop, Kafka, HBase, and Spark clusters
- Perform performance tuning and improve operational efficiency on a continuous basis
- Monitor platform health, generate performance reports, and drive continuous improvements
- Work closely with development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
- Develop and enhance platform best practices
- Ensure the Hadoop platform effectively meets performance and SLA requirements
- Responsible for the Big Data production environment, which includes Hadoop (HDFS and YARN), Hive, Spark, Livy, Solr, Oozie, Kafka, Airflow, NiFi, HBase, etc.
- Perform optimization, debugging and capacity planning of a Big Data cluster
- Perform security remediation, automation, and self-healing as required
- Requires experience with cluster design, configuration, installation, patching, upgrading, and high-availability support.
- Experience with monitoring and configuration management tools such as Nagios, Ganglia, Chef, and Puppet
- Should have experience with Hadoop cluster security implementations such as Kerberos, Knox, and Sentry
Does this position require Visa independent candidates only? No