Hadoop Admin Resume
Addison, TX
SUMMARY:
- Big Data Admin with over 8 years of professional IT experience, including 4 years in the field of Big Data.
- Involved in Hadoop cluster environment administration, including adding and removing cluster nodes, cluster capacity planning, performance tuning, cluster monitoring, troubleshooting, and installing, configuring, and deploying the cluster (Cloudera, Kerberos with AD/LDAP).
- Hands-on experience with Kerberos, the KDC, and Kerberos principals.
- Installation of various Hadoop ecosystem and non-default components.
- Installed and configured a multi-node, fully distributed Hadoop cluster.
- Adding/removing nodes in an existing Hadoop cluster.
- Recovering from node failures and troubleshooting common Hadoop cluster issues.
- Supported various ecosystem programs running on the cluster.
- Managed and reviewed Hadoop log files as part of administration for troubleshooting purposes; communicated and escalated issues appropriately.
- Involved in HDFS maintenance and administration through the Hadoop API.
- Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
- Configured dynamic resource pools to provide service-level agreements for multiple users of a cluster.
- Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
- Involved in analyzing system failures, identifying root causes, and recommending courses of action; documented system processes and procedures for future reference.
- Monitored Hadoop cluster connectivity and performance (a health-check sketch follows this summary).
- As an admin, followed standard backup policies to ensure high availability of the cluster.
- Assisted with data capacity planning and node forecasting.
- Extensive knowledge of Linux/Unix commands.
- File system management and monitoring.
- Involved in installing Hadoop Ecosystem components.
- Adding/removing nodes and data rebalancing.
- Good experience in maintenance activities like patching and cloning (adop).
- Administering and supporting the Hadoop environment.
- Good exposure to backup and recovery tasks; good knowledge of hot, cold, and RMAN backups.
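To illustrate the monitoring and file-system management points above, a minimal HDFS health-check sketch. It assumes the standard `hdfs dfsadmin -report` CLI is on the path; the usage threshold and the report parsing are illustrative and may need adjustment for a given Hadoop version.

```python
#!/usr/bin/env python3
"""Illustrative HDFS health check: runs `hdfs dfsadmin -report` and flags
high DFS usage or dead DataNodes. Threshold is a placeholder; the report
format can vary by Hadoop version."""
import re
import subprocess
import sys

USAGE_THRESHOLD_PCT = 80.0  # hypothetical alerting threshold


def main():
    # Needs appropriate HDFS privileges and, on a secured cluster,
    # a valid Kerberos ticket for the calling user.
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout

    problems = []

    # The first "DFS Used%" line in the report is the cluster-wide figure.
    used = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if used and float(used.group(1)) > USAGE_THRESHOLD_PCT:
        problems.append(f"cluster DFS usage at {used.group(1)}%")

    dead = re.search(r"Dead datanodes\s*\((\d+)\)", report)
    if dead and int(dead.group(1)) > 0:
        problems.append(f"{dead.group(1)} dead DataNode(s) reported")

    if problems:
        print("ALERT: " + "; ".join(problems))
        sys.exit(1)
    print("HDFS report looks healthy.")


if __name__ == "__main__":
    main()
```

A script like this would typically run from cron on an edge node, feeding an alerting channel such as e-mail or Nagios.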
TECHNICAL SKILLS:
Distribution Framework: Cloudera, Hortonworks
Hadoop Technologies: Hive, HDFS, MapReduce, Sqoop, Spark, Sentry, Impala, Kerberos, AD, Oozie, Hue, etc.
Oracle ERP: EBS R12
Operating Systems: UNIX, Linux and Windows.
Databases: Oracle 10g, 11g.
Ticketing Tools: Jira
PROFESSIONAL EXPERIENCE:
Hadoop Admin
Confidential - Addison, TX
Responsibilities:
- Configured a load balancer for Impala services.
- Configured HAProxy for the web framework application to connect to Impala services.
- User creation with the required access privileges on the OS, Kerberos, and Hue.
- Configured cross-realm Kerberos trust between 3 clusters.
- Troubleshot user performance issues on Spark, Hive, Oozie, etc.
- Changed configurations based on user requirements to improve job performance.
- Configured e-mail alerts for Hadoop services/components.
- Added new nodes to the cluster and performed data rebalancing.
- Registered new certificates for SSL/TLS across all nodes for Level 3 encryption.
- Configured security on DMZ nodes for APIs.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Backups for the MySQL database.
- Enabled new components/services on the cluster, such as Anaconda (including NumPy), via parcels.
- Installed Kerberos, created Kerberos principals for new users, and created keytabs for DMZ users (a principal/keytab sketch follows this role's environment line).
Environment: Hadoop YARN, Spark, Hive, Sqoop, Solr, Impala, Cloudera, Oracle 10g, Linux.
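As a sketch of the principal/keytab work noted above, a minimal wrapper around kadmin.local. The realm, user names, and keytab directory are placeholders, and the script assumes it is run on the KDC host with the necessary privileges.

```python
#!/usr/bin/env python3
"""Illustrative bulk creation of Kerberos principals and keytabs for new
users, wrapping kadmin.local (run locally on the KDC, typically as root).
Realm, users, and keytab path below are placeholders."""
import subprocess
from pathlib import Path

REALM = "EXAMPLE.COM"                        # placeholder realm
KEYTAB_DIR = Path("/etc/security/keytabs")   # placeholder directory
NEW_USERS = ["dmzuser1", "dmzuser2"]         # placeholder user list


def kadmin(query: str) -> None:
    # kadmin.local operates on the local KDC database, no separate auth needed.
    subprocess.run(["kadmin.local", "-q", query], check=True)


for user in NEW_USERS:
    principal = f"{user}@{REALM}"
    keytab = KEYTAB_DIR / f"{user}.keytab"
    # Create the principal with a random key, then export its keytab.
    kadmin(f"addprinc -randkey {principal}")
    kadmin(f"ktadd -k {keytab} {principal}")
    print(f"Created {principal}, keytab at {keytab}")
```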
Hadoop Admin
Confidential - Carlsbad, CA
Responsibilities:
- Providing admin support for Hortonworks.
- Supported a 45-node production environment and a 10-node QA environment.
- Adding users to the Linux box as well as HDFS.
- Taking action on Nagios alerts related to space and memory.
- Troubleshooting failed jobs.
- Communicating with Hortonworks and raising cases for issues.
- Handling HDFS storage utilization.
- Working with Ranger to grant users access to databases.
- Hue administration.
- Adding LDAP users.
- Taking action on long-running jobs.
- Deployed an HDP cluster on Google Cloud.
- Deployed an HDP cluster using the Cloudbreak automation tool.
- Installed Hue separately on Hortonworks.
- Working on Hue dashboard.
- Wrote an automation script for long-running jobs (a sketch follows this role's environment line).
- Wrote an automation script for resource-consuming jobs.
Environment: Hadoop, MapReduce, HDFS, Hive, Java, SQL, Hortonworks, Sqoop, Flume, Oozie, MySQL, Ranger, Ambari and Unix/Linux.
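A minimal sketch of a long-running-job check against the YARN ResourceManager REST API. The ResourceManager host/port and the 4-hour threshold are placeholders; on a Kerberized cluster, SPNEGO authentication (e.g. via requests-kerberos) would be needed instead of this plain HTTP call.

```python
#!/usr/bin/env python3
"""Illustrative detection of long-running YARN applications via the
ResourceManager REST API. Host and threshold are placeholders."""
import requests

RM_URL = "http://rm-host.example.com:8088"   # placeholder ResourceManager
MAX_ELAPSED_MS = 4 * 60 * 60 * 1000          # placeholder: 4 hours

resp = requests.get(
    f"{RM_URL}/ws/v1/cluster/apps",
    params={"states": "RUNNING"},
    timeout=30,
)
resp.raise_for_status()
# "apps" is null in the JSON when nothing is running, so guard the lookup.
apps = (resp.json().get("apps") or {}).get("app", []) or []

for app in apps:
    if app.get("elapsedTime", 0) > MAX_ELAPSED_MS:
        # Report offenders; killing an app would be a PUT to
        # /ws/v1/cluster/apps/{id}/state with {"state": "KILLED"},
        # normally after notifying the owning user.
        print(f"Long-running: {app['id']} user={app['user']} "
              f"name={app['name']} elapsed={app['elapsedTime'] // 60000} min")
```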
Hadoop Developer/Admin
Confidential - Peoria, IL
Responsibilities:
- Providing admin support for Cloudera Manager.
- Supported a 40-node production environment and a 20-node QA environment.
- Providing Hue access.
- Taking care of the cluster during patching activities by the systems engineering team.
- Restarting failed Hadoop services (a sketch follows this role's environment line).
- Logging cases with Cloudera as required.
- Managing the Dev cluster manually, keeping all services up through the CLI.
- Handling job failures.
- Configured alerts for Cloudera Manager as well as the Dev cluster.
- Working on the lab cluster with CDH deployments and CDH upgrades.
Environment: HDFS, MapReduce, Pig, Hive, Oozie, Sqoop, Flume, HBase, Talend, HiveQL, Java, Maven, Avro, Eclipse and Shell Scripting.
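A sketch of restarting an unhealthy service through the Cloudera Manager REST API. The host, credentials, cluster/service names, and API version are placeholders; the endpoint follows the documented /api/vN/clusters/{cluster}/services/{service}/commands/restart pattern.

```python
#!/usr/bin/env python3
"""Illustrative health check and restart of a single service via the
Cloudera Manager REST API. All connection details are placeholders."""
import requests

CM_URL = "https://cm-host.example.com:7183/api/v19"  # placeholder host/version
AUTH = ("admin", "changeme")                          # placeholder credentials
CLUSTER = "cluster1"                                  # placeholder cluster name
SERVICE = "hive"                                      # placeholder service name

session = requests.Session()
session.auth = AUTH
session.verify = False  # lab-only shortcut; trust the CM CA cert in production

# Read the service health first, then issue a restart only if it is degraded.
svc = session.get(
    f"{CM_URL}/clusters/{CLUSTER}/services/{SERVICE}", timeout=30
).json()
if svc.get("healthSummary") in ("BAD", "CONCERNING"):
    cmd = session.post(
        f"{CM_URL}/clusters/{CLUSTER}/services/{SERVICE}/commands/restart",
        timeout=30,
    ).json()
    print(f"Restart command {cmd.get('id')} issued for {SERVICE}")
else:
    print(f"{SERVICE} health is {svc.get('healthSummary')}, no restart issued")
```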
Oracle DBA
Confidential
Responsibilities:
- Planning and performing daily, weekly, and monthly RMAN backups.
- Patching the R12 instance for product enhancements, bug fixes, etc.
- Involved in periodic verification of the alert log file and archive log status.
- Scheduling and managing cron jobs.
- Implementing Backup and recovery strategy for database and applications.
- Expertise in Oracle DBA activities and resolving end-user issues.
- Managing, maintaining, and troubleshooting Oracle Applications R12.
- Performance tuning of applications.
- Performed cloning of the production instance for the Test and Dev environments.
- Experience with Oracle Apps R12 cloning/refresh and RMAN backup and recovery.
- Routine activities also include monitoring health-check reports and taking necessary actions if required.
- Applying the latest RDBMS patch releases using Oracle's OPatch utility on production and test instances to resolve technical issues.
- Monitoring database growth and increasing tablespace sizes as needed by adding new data files or resizing existing ones (a tablespace-usage sketch follows this role's environment line).
- Providing transition to teammates and creating documentation for known issues and activities, uploading it to a centralized location in the portal.
- Custom top, printer, and Workflow configuration; prepared shell scripts for the custom module and deployed scripts for database and application health checks and backups.
- UPK configuration on Windows Server (multi-user).
Environment: Oracle Applications R12.2.4, Endeca & UPK, Database 11g, and Linux.
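A minimal sketch of the tablespace-growth monitoring mentioned above, using cx_Oracle and the standard DBA_DATA_FILES / DBA_FREE_SPACE views. Connection details and the 85% threshold are placeholders, and a DBA-privileged account is assumed.

```python
#!/usr/bin/env python3
"""Illustrative tablespace-usage report; flags tablespaces above a
placeholder threshold so datafiles can be added or resized."""
import cx_Oracle

DSN = cx_Oracle.makedsn("db-host.example.com", 1521, service_name="PROD")  # placeholder
THRESHOLD_PCT = 85  # placeholder warning threshold

SQL = """
SELECT d.tablespace_name,
       ROUND(d.total_mb, 1)                                          AS total_mb,
       ROUND(d.total_mb - NVL(f.free_mb, 0), 1)                      AS used_mb,
       ROUND((d.total_mb - NVL(f.free_mb, 0)) / d.total_mb * 100, 1) AS used_pct
FROM   (SELECT tablespace_name, SUM(bytes) / 1048576 AS total_mb
        FROM dba_data_files GROUP BY tablespace_name) d
LEFT JOIN
       (SELECT tablespace_name, SUM(bytes) / 1048576 AS free_mb
        FROM dba_free_space GROUP BY tablespace_name) f
ON d.tablespace_name = f.tablespace_name
ORDER BY used_pct DESC
"""

with cx_Oracle.connect(user="system", password="changeme", dsn=DSN) as conn:  # placeholder creds
    for name, total, used, pct in conn.cursor().execute(SQL):
        flag = "  <-- consider adding/resizing datafiles" if pct >= THRESHOLD_PCT else ""
        print(f"{name:<20} {used:>10} / {total:>10} MB ({pct}%){flag}")
```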
Oracle DBA
Confidential
Responsibilities:
- Planning and performing daily, weekly, and monthly RMAN backups (a backup-wrapper sketch follows this role's environment line).
- Patching the R12 instance for product enhancements, bug fixes, etc.
- Involved in periodic verification of the alert log file and archive log status.
- Scheduling and managing cron jobs.
- Implementing Backup and recovery strategy for database and applications.
- Expertise in Oracle DBA activities and resolving end-user issues.
- Managing, maintaining, and troubleshooting Oracle Applications R12.
- Performance tuning of applications.
- Performed cloning of the production instance for the Test and Dev environments.
- Experience with Oracle Apps R12 cloning/refresh and RMAN backup and recovery.
- Routine activities also include monitoring health-check reports and taking necessary actions if required.
- Applying the latest RDBMS patch releases using Oracle's OPatch utility on production and test instances to resolve technical issues.
- Monitoring database growth and increasing tablespace sizes as needed by adding new data files or resizing existing ones.
- Providing transition to teammates.
- Creating documentation for known issues and activities and uploading it to a centralized location in the portal.
Environment: Oracle Applications R12 on IBM AIX and Oracle 11g databases.
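A sketch of a nightly RMAN backup wrapper of the kind scheduled from cron on the database host as the oracle OS user. The log path is a placeholder, and ORACLE_HOME/ORACLE_SID are assumed to be set in the environment.

```python
#!/usr/bin/env python3
"""Illustrative RMAN backup wrapper: feeds a fixed RMAN script to
`rman target /` (OS authentication) and logs the output. Paths are
placeholders; retention policy is whatever RMAN is configured with."""
import datetime
import subprocess

LOG = f"/u01/backup/logs/rman_{datetime.date.today():%Y%m%d}.log"  # placeholder path

RMAN_SCRIPT = """
RUN {
  BACKUP DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}
EXIT;
"""

with open(LOG, "w") as logfile:
    # The RMAN commands are supplied on stdin; stdout/stderr go to the log.
    result = subprocess.run(
        ["rman", "target", "/"],
        input=RMAN_SCRIPT, text=True,
        stdout=logfile, stderr=subprocess.STDOUT,
    )

status = "succeeded" if result.returncode == 0 else f"failed (rc={result.returncode})"
print(f"RMAN backup {status}; see {LOG}")
```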