Sr Hadoop Administrator Resume

Lewisville, TX

SUMMARY

  • Over 8 years of Information Technology experience, with extensive experience in the design, development, and implementation of robust technology systems and specialized expertise in Hadoop, Linux administration, and data management.
  • 5+ years of experience in Hadoop administration and Big Data technologies and 3+ years of experience in Linux administration.
  • Experience with the complete software development lifecycle, including design, development, testing, and implementation of systems of moderate to advanced complexity.
  • Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using Cloudera, Hortonworks, and MapR.
  • Hadoop cluster capacity planning, performance tuning, cluster monitoring, and troubleshooting.
  • Designed Big Data solutions for traditional enterprise businesses.
  • Configured backups and performed recovery from NameNode failures.
  • Excellent command of backup, recovery, and disaster recovery procedures, implementing backup and recovery strategies for both offline and online backups.
  • Involved in benchmarking Hadoop/HBase cluster file systems with various batch jobs and workloads.
  • Prepared Hadoop clusters for development teams.
  • Good knowledge of the Hadoop ecosystem.
  • Experience in minor and major upgrades of Hadoop and the Hadoop ecosystem.
  • Experience monitoring and troubleshooting issues with Linux memory, CPU, OS, storage, and networking.
  • Hands-on experience analyzing log files for Hadoop and ecosystem services and finding root causes.
  • Experience in commissioning, decommissioning, balancing, and managing nodes, and tuning servers for optimal cluster performance.
  • As an admin, involved in cluster maintenance, troubleshooting, and monitoring, and followed proper backup and recovery strategies.
  • Experience in setting up Linux environments: passwordless SSH, creating file systems, disabling firewalls, tuning swappiness, configuring SELinux, and installing Java (a node-preparation sketch follows this list).
  • Experience in planning, installing, and configuring Hadoop clusters on the Cloudera and MapR distributions.
  • Installed and configured Hadoop ecosystem components such as Pig, Confidential, HBase, Sqoop, Flume, and Oozie.
  • Hands-on experience installing, configuring, and managing Hue and HCatalog.
  • Experience importing and exporting data using Sqoop between HDFS and relational database systems/mainframes (a Sqoop sketch follows this list).
  • Experience importing and exporting logs using Flume.
  • Experience managing streaming data with Kafka brokers and topics (a topic-management sketch follows this list).
  • Optimized the performance of HBase, Confidential, and Pig jobs.
  • Hands-on experience with ZooKeeper and ZKFC, managing and configuring NameNode failover scenarios.
  • Hands-on experience with Linux administration activities on RHEL and CentOS.
  • Experience installing and configuring Spark.
  • Knowledge of designing and developing applications in Spark with Scala.
  • Knowledge of DevOps tools such as Nagios, Docker, Puppet, and Icinga2.
  • Experience writing shell scripts for automation.
  • Diligently teamed with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Familiar with writing Oozie workflows and Job Controllers for job automation.
  • Effective problem-solving skills and outstanding interpersonal skills.
  • Ability to work independently as well as within a team environment.
  • Driven to meet deadlines. Ability to learn and use recent technologies quickly.
  • Quick to understand business and technical requirements, with excellent communication skills and a strong work ethic.
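
A minimal node-preparation sketch for the Linux setup steps listed above; the host names, key path, and RHEL 6-style service commands are illustrative assumptions, not values from any actual engagement:

    #!/usr/bin/env bash
    # Prepare nodes for Hadoop: passwordless SSH, swappiness, SELinux, firewall.
    # Run as root; hosts and values below are placeholders.

    # Passwordless SSH from the admin node to each cluster node.
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
    for host in node1 node2 node3; do
      ssh-copy-id "root@${host}"
    done

    # Lower swappiness so the kernel avoids swapping Hadoop daemons.
    sysctl -w vm.swappiness=1
    echo 'vm.swappiness = 1' >> /etc/sysctl.conf

    # Set SELinux to permissive and disable the firewall (RHEL 6 style).
    setenforce 0
    sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
    service iptables stop
    chkconfig iptables off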
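
A minimal sketch of the Sqoop import/export flow described above; the JDBC URL, credentials, table names, mapper counts, and HDFS paths are placeholders:

    # Import a relational table into HDFS, splitting the work across 4 mappers.
    sqoop import \
      --connect jdbc:mysql://dbhost.example.com:3306/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/staging/orders \
      --num-mappers 4

    # Export processed results from HDFS back to the relational side.
    sqoop export \
      --connect jdbc:mysql://dbhost.example.com:3306/sales \
      --username etl_user -P \
      --table orders_summary \
      --export-dir /data/output/orders_summary \
      --num-mappers 4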
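
A topic-management sketch for the Kafka work mentioned above, using the ZooKeeper-era CLI syntax; the ZooKeeper address, topic name, and sizing are assumptions:

    # Create a replicated, partitioned topic, then inspect its layout.
    kafka-topics.sh --create --zookeeper zk1:2181 \
      --replication-factor 3 --partitions 6 --topic app-events
    kafka-topics.sh --describe --zookeeper zk1:2181 --topic app-events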

PROFESSIONAL EXPERIENCE:

Confidential, Lewisville, TX

Sr Hadoop Administrator

Responsibilities:

  • Involved in major and minor version upgrades from 4.2 to 5.0, then to 5.2.0 and 5.2.2, in all environments.
  • Applied Confidential patches and OS/firmware patches on the cluster to maintain interoperability.
  • Added and decommissioned nodes as they arrived to expand the production cluster, after thorough validation.
  • Troubleshot and resolved issues, including P1 incidents; worked with developers and architects to analyze jobs and tune them for optimum performance.
  • Wrote shell scripts to automate day-to-day tasks and patches/upgrades (a patch-rollout sketch follows this list).
  • Tuned YARN configurations for efficient resource utilization in the clusters.
  • Monitored, managed, configured, and administered batch, MapR-DB, and disaster recovery clusters.
  • Maintained the data mirroring process to remote Confidential clusters for all data mirrored from all clusters, so that backups were available at any time.
  • Planned and implemented production changes without causing impact or downtime; documented and prepared change plans for each change.
  • Installed MapR core and ecosystem components on single- and multi-node clusters from scratch for production and non-production environments.
  • Performed cluster validations and ran various pre-install and post-install tests.
  • Gained experience in architecture, planning and preparing nodes, data ingestion, disaster recovery, high availability, management, and monitoring.
  • Experienced in setting up projects and volumes for new Hadoop projects.
  • Used snapshots and mirroring, including to remote clusters, to maintain backups of cluster data (a snapshot/mirror sketch follows this list).
  • Working experience with MySQL database creation, user setup, and database backup maintenance.
  • Helped users with production deployments throughout the process.
  • Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Used configuration management tools (StackIQ, Puppet).
  • Wrote shell scripts to automate processes.
  • Coordinated with the QA team during the testing phase.
  • Provided application support to the production support team.
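
A sketch of the kind of shell automation used for day-to-day patching; the host list, log path, and one-node-at-a-time policy are illustrative assumptions:

    #!/usr/bin/env bash
    # Roll OS patches across cluster nodes one at a time.
    set -euo pipefail

    HOSTS_FILE=/etc/cluster/hosts.txt   # placeholder path
    LOG=/var/log/patch_rollout.log

    while read -r host; do
      echo "$(date '+%F %T') patching ${host}" | tee -a "$LOG"
      # Patch serially so the cluster keeps quorum and capacity.
      ssh "root@${host}" 'yum -y update' >> "$LOG" 2>&1
    done < "$HOSTS_FILE"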
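
A snapshot and mirroring sketch using maprcli; the volume and snapshot names are placeholders:

    # Point-in-time snapshot of a project volume, named by date.
    maprcli volume snapshot create \
      -volume projects.analytics \
      -snapshotname "nightly_$(date +%Y%m%d)"

    # Start mirroring to the DR cluster's mirror volume.
    maprcli volume mirror start -name projects.analytics.mirror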

Tools & Technologies: MapR, MCS, Sqoop, Confidential, HBase, Pig, Spark, ZooKeeper, Nagios, Ganglia, MapR tickets, Unravel, Red Hat Enterprise Linux, Puppet, Icinga2.

Confidential, Fort Worth, TX

Hadoop Administrator

Responsibilities:

  • Installed and configured the Cloudera distribution of Hadoop and other ecosystem components: YARN, MapReduce, Flume, HDFS, Sqoop, ZooKeeper, Oozie, Confidential, and Pig.
  • Managed and monitored the Hadoop cluster using Cloudera Manager.
  • Installed a Hortonworks HDP 2.6.x cluster from scratch.
  • Deployed jobs and made sure they were up and running at all times.
  • Commissioned and decommissioned nodes in the cluster.
  • Handled ingestion failures, waiting jobs, and job failures.
  • Performed data backup and data purging based on the retention policy (a purge-script sketch follows this list).
  • Managed data subscriptions and maintained cluster health and HDFS space for better performance.
  • Handled alerts for CPU, memory, network, and storage related processes.
  • Integrated Oozie with the rest of the Hadoop stack, supporting several types of Hadoop jobs (MapReduce, Pig, Confidential, and Sqoop) as well as system-specific jobs such as Java programs and shell scripts.
  • Exported managed data sets mined from large data volumes to MySQL using Sqoop.
  • Configured the Confidential metastore to use a MySQL database, enabling multiple user connections to Confidential tables.
  • Performed administration using the Hue web UI to create and manage user spaces in HDFS.
  • Configured Hadoop MapReduce and HDFS core properties as part of performance tuning to achieve high computational performance.
  • Configured Cloudera Manager to raise alerts on critical cluster failures by integrating it with custom shell scripts.
  • Maintained comprehensive project, technical and architectural documentation for enterprise systems.
  • Deployed and monitored scalable infrastructure on Amazon Web Services (AWS) and handled configuration management.
  • Installed and configured Hadoop clusters for the Dev, QA, and Production environments per the project plan.
  • Created Hadoop user accounts and granted access to the Hadoop cluster.
  • Set up Hadoop quotas (a quota and snapshot sketch follows this list).
  • Configured and maintained the YARN schedulers.
  • Experienced in using configuration management tools.
  • Took snapshots to protect against data corruption from user/application errors and accidental deletes.
  • Handled Hadoop cluster upgrades and maintained data replication.
  • Wrote shell scripts to automate processes.
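
A quota and snapshot sketch using standard HDFS commands; the directory, limits, and snapshot name are placeholders:

    # Cap a project directory at 100,000 names and 10 TB of raw space.
    hdfs dfsadmin -setQuota 100000 /user/projectA
    hdfs dfsadmin -setSpaceQuota 10t /user/projectA
    hadoop fs -count -q /user/projectA    # verify quota usage

    # Allow and take a snapshot to guard against accidental deletes.
    hdfs dfsadmin -allowSnapshot /user/projectA
    hdfs dfs -createSnapshot /user/projectA before_cleanup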
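
A retention-based purge sketch; the staging path, 30-day window, and the decision to skip the trash are illustrative assumptions:

    #!/usr/bin/env bash
    # Remove staging data older than the retention window.
    RETENTION_DAYS=30
    STAGING=/data/staging   # placeholder path

    # hdfs dfs -ls prints: perms repl owner group size date time path;
    # compare the date column against the cutoff lexically (yyyy-mm-dd).
    cutoff=$(date -d "-${RETENTION_DAYS} days" +%Y-%m-%d)
    hdfs dfs -ls "$STAGING" | awk -v c="$cutoff" 'NF >= 8 && $6 < c {print $8}' |
    while read -r path; do
      hdfs dfs -rm -r -skipTrash "$path"
    done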

Tools & Technologies: Cloudera Distribution of Hadoop, Cloudera Manager, Flume, Sqoop, Confidential, HBase, Pig, ZooKeeper, Nagios, Red Hat Enterprise Linux.

Confidential, Waterloo, IA

Sr. Linux Administrator

Responsibilities:

  • Main responsibility was providing server security.
  • Performed performance tuning and monitoring using netstat, iostat, and vmstat.
  • Monitored server logs and application logs.
  • Troubleshot network issues and server problems.
  • Applied security policies to harden the servers.
  • Used security tools such as iptables, firewalls, TCP wrappers, Nmap, and Wireshark (an iptables sketch follows this list).
  • Developed automated processes that run daily to check disk usage and clean up file systems in UNIX environments using shell scripting and cron (a disk-check sketch follows this list).
  • Wrote shell scripts to automatically apply the appropriate configuration to new hosts being provisioned, according to rules defined at the server.
  • Performed server installations and configurations of Red Hat Enterprise Linux.
  • Compiled, built, and upgraded the Linux kernel.
  • Set up and implemented Kickstart installations.
  • Worked in Fedora and Red Hat Linux environments.
  • Performed regular day-to-day system administrative tasks.
  • Performed user management, backups, and network management.
  • Performed reorganization of disk partitions and file systems, hard disk additions, and memory upgrades.
  • Monitored system activities and handled log maintenance and disk space management.
  • Administered Apache servers and published clients' web sites on our Apache server.
  • Handled software management, including documentation.
  • Recommended system configurations for clients based on estimated requirements.
  • Fixed system problems based on system email notifications and users' complaints.
  • Upgraded software, applied patches, and added new hardware to Linux machines.
  • Upgraded and patched existing servers.
  • Encapsulated root file systems and mirrored them to ensure systems had redundant boot disks.
  • Improved system performance by working with the development team to analyze, identify, and resolve issues quickly.
  • Worked the ticketing process based on ITIL (IT Infrastructure Library), handling both automated and manual tickets.
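
A minimal iptables hardening sketch in the RHEL 6 style; the allowed subnet and the SSH-only policy are illustrative assumptions:

    # Default-deny inbound; allow loopback and established traffic.
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Allow SSH only from the internal network (placeholder subnet).
    iptables -A INPUT -p tcp -s 10.0.0.0/8 --dport 22 -j ACCEPT

    # Persist the rules across reboots (RHEL 6 style).
    service iptables save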
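
A disk-check sketch of the daily cron job described above; the threshold, temp-file cleanup policy, and paths are placeholders:

    #!/usr/bin/env bash
    # Flag file systems over a usage threshold and clean up old temp files.
    # Example crontab entry: 0 6 * * * /usr/local/bin/diskcheck.sh
    THRESHOLD=85

    df -hP | awk 'NR > 1 {gsub("%", "", $5); print $5, $6}' |
    while read -r used mount; do
      if [ "$used" -ge "$THRESHOLD" ]; then
        logger -t diskcheck "${mount} at ${used}% used, cleaning temp files"
        find "${mount}/tmp" -xdev -type f -mtime +7 -delete 2>/dev/null
      fi
    done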

Tools & Technologies: Red Hat Enterprise Linux, DNS, Samba, Apache, Nmap, Wireshark, Cron, Kickstart, Penetration Testing Tools.

Confidential

Linux System Administrator

Responsibilities:

  • Worked in a Red Hat Linux environment.
  • Performed regular day-to-day system administrative tasks.
  • Performed user management, backups, and network management.
  • Performed reorganization of disk partitions and file systems, hard disk additions, and memory upgrades.
  • Monitored system activities and handled log maintenance and disk space management.
  • Administered Apache servers and published clients' web sites on our Apache server.
  • Handled software management, including documentation.
  • Recommended system configurations for clients based on estimated requirements.
  • Fixed system problems based on system email notifications and users' complaints.
  • Upgraded software, applied patches, and added new hardware to Linux machines (a patch-cycle sketch follows this list).
  • Upgraded and patched existing servers.
  • Supported the vulnerability management process, which covered data, application, and infrastructure vulnerabilities and was facilitated by a combination of manual processes and vulnerability management tools (Qualys, Nessus) across areas such as the e-commerce application, laptop encryption, network security, and data security.
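
A minimal patch-cycle sketch for a RHEL host; run during a maintenance window, and treat the kernel check as an assumption about local reboot policy:

    # List available updates, apply them, then decide whether to reboot.
    yum check-update
    yum -y update
    uname -r    # if the running kernel differs from the newest installed one, schedule a reboot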

Tools & Technologies: Red Hat Enterprise Linux, Penetration Testing Tools, Snort IDS, Windows XP, Windows 2003 Server.
