
Hadoop Admin Resume Profile


St Louis, MO

Summary:

  • 9 years of extensive experience in Linux/Unix, including Hadoop administration.
  • Experienced in managing Linux servers and in installing, configuring, supporting and managing Hadoop clusters.
  • Installed and configured Apache Hadoop, Hive and Pig environments on Amazon EC2 and assisted in the design, development and architecture of Hadoop and HBase systems.
  • Hands-on experience with major components of the Hadoop ecosystem, including Hive, Sqoop and Flume, and knowledge of the MapReduce/HDFS framework.
  • Hands-on programming experience in technologies such as Java, J2EE, JSP, Servlets, SQL, JDBC, HTML, XML, Struts, Web Services, SOAP, REST, Eclipse and Visual Studio on Windows, UNIX and AIX.
  • Experienced in database server performance tuning and optimization, and in troubleshooting and tuning complex SQL queries.
  • Working knowledge of Sqoop and Flume for data processing.
  • Expertise in developing solutions using Hadoop and the Hadoop ecosystem.
  • Loaded data from sources such as Teradata and DB2 into HDFS using Sqoop, and loaded it into partitioned Hive tables.
  • Formulated procedures for installing Hadoop patches, updates and version upgrades, and automated processes for troubleshooting, resolution and tuning of Hadoop clusters.
  • Experienced in developing MapReduce programs using Apache Hadoop for working with big data, and with Hadoop architecture and components such as HDFS, JobTracker, TaskTracker, NameNode, DataNode and the MapReduce programming paradigm.
  • Involved in log file management: logs older than 7 days were removed from the log folder, loaded into HDFS and retained for 3 months.
  • Supported WebSphere Application Server (WPS) and IBM HTTP/Apache web servers in Linux environments for various projects.
  • Supported geographically diverse customers and teams in a 24/7 environment.
  • Team player with strong analytical, technical negotiation and client relationship management skills.
  • Developed Oozie workflows and sub-workflows to orchestrate Sqoop scripts, Pig scripts and Hive queries; the Oozie workflows are scheduled through Autosys.
  • Experienced in Hadoop cluster maintenance, including data and metadata backups, file system checks, commissioning and decommissioning nodes, and upgrades.
  • Conducted detailed analysis of system and application architecture components against functional requirements.
  • Ability to work effectively in cross-functional team environments, with experience providing training to business users.
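The log-retention task above (archiving logs older than 7 days into HDFS and keeping them for 3 months) can be sketched roughly as follows. The directory paths are illustrative assumptions, and the HDFS upload is shown as a comment since it needs a live cluster; here a local copy stands in for it:

```shell
#!/bin/sh
# Sketch of the log-retention flow: local logs older than 7 days are
# archived (on a real cluster, to HDFS) and then removed locally.
# LOG_DIR and ARCHIVE_DIR are hypothetical paths.
LOG_DIR="${LOG_DIR:-/var/log/hadoop}"
ARCHIVE_DIR="${ARCHIVE_DIR:-/archive/logs}"

archive_old_logs() {
    # Select log files not modified in the last 7 days.
    find "$LOG_DIR" -type f -name '*.log' -mtime +7 | while read -r f; do
        # On the cluster this would be: hdfs dfs -put "$f" /logs/archive/
        cp "$f" "$ARCHIVE_DIR/" && rm -f "$f"
    done
}

if [ -d "$LOG_DIR" ]; then
    archive_old_logs
fi
```

A companion job on the HDFS side would then delete archived copies older than roughly 90 days to implement the 3-month retention.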

Technical Skills:

  • Operating Systems: Red Hat Linux, UNIX, Windows, Windows Server 2003/2007/2008
  • Application Servers: WebLogic Server administration, IIS 6.0
  • Hadoop Ecosystem: Hadoop MapReduce, HDFS, Flume, Sqoop, Hive, Pig, Oozie, Cloudera Manager, Ambari
  • Databases: Oracle, DB2, MS SQL Server, MySQL, MS Access
  • Tools: Virtualization, Ganglia, Nagios, AWS, BSM, Control-M, SiteScope

Work Summary

Confidential

Hadoop Admin

Environment: Hadoop, HDFS, Ambari, MapReduce, YARN, Oracle 11g/10g, Cloudera CDH (Apache Hadoop), SQL*Plus, Shell Scripting, GoldenGate, Red Hat/SUSE Linux, EM Cloud Control.

Responsibilities:

  • Installed and configured various components of Hadoop ecosystem and maintained their integrity
  • Planned production cluster hardware and software installation and coordinated with multiple teams to complete it.
  • Designed, configured and managed the backup and disaster recovery for HDFS data.
  • Commissioned Data Nodes when data grew and decommissioned when the hardware degraded.
  • Collected metrics for Hadoop clusters using Ganglia and Ambari.
  • Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Monitored multiple Hadoop clusters environments using Ganglia and Nagios. Monitored workload, job performance and capacity planning using Ambari.
  • Worked with application teams to install Hadoop updates, patches, version upgrades as required.
  • Installed and configured Hive, Pig, Sqoop and Oozie on the HDP cluster.
  • Involved in implementing High Availability and automatic failover infrastructure to overcome the NameNode single point of failure, utilizing ZooKeeper services.
  • Implemented HDFS snapshot feature.
  • Performed a Major upgrade in production environment from HDP 1.3 to HDP 2.0.
  • Worked with big data developers, designers and scientists in troubleshooting map reduce job failures and issues with Hive, Pig and Flume.
  • Administered Tableau Server, backing up reports and providing privileges to users.
  • Worked on Tableau for generating reports on HDFS data.
  • Installed Ambari on existing Hadoop cluster.
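The commissioning/decommissioning work mentioned above typically hinges on the HDFS excludes file: the node is listed there, then the NameNode is told to re-read it. A minimal sketch, where the excludes path is an assumption (the real path is whatever the `dfs.hosts.exclude` property in hdfs-site.xml points at):

```shell
#!/bin/sh
# Sketch: decommission a DataNode by adding it to the excludes file.
# EXCLUDES is a hypothetical path; the real one comes from the
# dfs.hosts.exclude property in hdfs-site.xml.
EXCLUDES="${EXCLUDES:-/etc/hadoop/conf/dfs.exclude}"

decommission_node() {
    host="$1"
    # Add the host once (skip if it is already listed).
    grep -qx "$host" "$EXCLUDES" 2>/dev/null || echo "$host" >> "$EXCLUDES"
    # On a live cluster the NameNode is then told to re-read the file:
    # hdfs dfsadmin -refreshNodes
}
```

After `-refreshNodes`, the node shows as "Decommission in progress" until its blocks are re-replicated, at which point it can be safely removed from the cluster.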

Confidential

Hadoop Admin

Environment: Hortonworks HDFS, MapReduce, Hive, Pig, Java JDK 1.6, AWS, CentOS 6.4, Shell Scripting, Flume, Apache Sqoop, HBase, Red Hat Linux 6.4, MySQL 5.5

Responsibilities:

  • Designed and deployed Hadoop cluster that can scale to petabytes.
  • Commissioned Data Nodes when data grew and decommissioned when the hardware degraded.
  • Managed Hadoop clusters using Cloudera Manager.
  • Worked closely with data analysts to construct creative solutions for their analysis tasks.
  • Installed, Configured and managed Flume Infrastructure.
  • Developed MR jobs for analyzing the data stored in the HDFS by performing map-side joins, reduce-side joins.
  • Performed data analytics in Hive, then exported these metrics back to an Oracle database using Sqoop.
  • Experienced in managing and reviewing Hadoop log files.
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Provided ad-hoc queries and data metrics to the Business Users using Hive, Pig.
  • Conducted root cause analysis and resolved production problems and data issues.
  • Proactively involved in ongoing maintenance, support and improvements in Hadoop cluster.
  • Executed tasks for upgrading cluster on the staging platform before doing it on production cluster.
  • Monitored cluster stability and used tools to gather statistics and improve performance.
  • Helped plan future upgrades and improvements to both processes and infrastructure.
  • Kept current with the latest technologies to help automate tasks and implement tools and processes to manage the environment.
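The Hive-to-Oracle export mentioned above maps naturally onto Sqoop's export tool. A sketch of the invocation, where the JDBC connection string, table name and export directory are all hypothetical placeholders; the command is printed rather than run, since it needs a live cluster and database:

```shell
#!/bin/sh
# Sketch of a Sqoop export of Hive-produced metrics back to Oracle.
# Connection string, table and export dir are illustrative placeholders.
SQOOP_CMD="sqoop export \
  --connect jdbc:oracle:thin:@dbhost:1521:ORCL \
  --username etl_user -P \
  --table METRICS_SUMMARY \
  --export-dir /user/hive/warehouse/metrics_summary \
  --input-fields-terminated-by '\001'"

# Print the command instead of executing it.
echo "$SQOOP_CMD"
```

The `--input-fields-terminated-by '\001'` flag matches Hive's default field delimiter (Ctrl-A) for text-format tables.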

Confidential

Unix/Linux Infrastructure Administrator

Environment: Linux, WebLogic Server administration, IIS 6.0, Control-M Scheduler, VERITAS Volume Manager 4.x/5.0, Windows 2000/2003/2008, EMC CX500/600, EMC Control Center 5.2, Nagios

Responsibilities:

  • Maintained 50 Linux and UNIX servers running high-availability applications and databases for clients.
  • Monitored and maintained all storage devices, created reports, created vdisks and allocated them to different hosts on the storage network.
  • Performed storage allocation from EVA storage to match client needs for production and staging environments; configured LUNs on the server end and created file systems per application/database requirements.
  • Created CIFS and NFS shares on Linux and HP-UX hosts per client requests.
  • Managed CRM cluster environment for critical SAP application running in client's production environment.
  • Monitored all environments through the SiteScope and BSM monitoring systems.
  • Installed application and database packages requested by clients.
  • Made changes required by Database and Application team for Linux application servers.
  • Ran monthly security checks through UNIX and Linux environment and installed security patches required to maintain high security level for our clients.
  • Involved in monthly health check reports for SAP HANA environment plus SAN switch and Storage environments.
  • Monitored and maintained backup and disaster recovery environment created for clients.
  • Performed backup and restores for client for production and staging environments.
  • Copied/moved file systems from one server to another as per client requests.
  • Configured NFS servers, set up servers in the network environment and configured FTP/NTP/NIS servers and clients for various departments and clients.
  • Involved in systems maintenance, including patching, setting up print servers and configuring file systems using LVM/VERITAS on HP-UX/Linux/Sun Solaris.
  • Set up backup solutions using native Unix/Linux tools and also worked with OmniBack/Data Protector solutions.
  • Supported applications running on Linux machines for multiple clients.
  • Installed, configured and administered Red Hat Linux, CentOS, Sun Solaris and HP-UX servers.

Confidential

UNIX Administrator

Environment: Windows 2008/2007 Server, Unix Shell Scripting, SQL Server Management Studio, Red Hat Linux, Microsoft SQL Server 2000/2005/2008, MS Access, PuTTY Connection Manager, PuTTY, SSH, OpenSSH, Telnet

Responsibilities:

  • Provided cross-platform data access from Linux to Windows users by deploying code on WebLogic servers.
  • Managed Users on Unix/Linux systems.
  • Provided application support for various applications deployed in Linux environment.
  • Gathered requirements from the Engineering team and built application installation documents.
  • Validated the Engineering requirements with Process Improvement teams.
  • Managed nodes, jobs and configuration using the HPC Cluster Manager tool.
  • Monitored Sun Solaris and Linux Servers running in a 24x7 data center supporting approximately 100 servers.
  • Helped engineers troubleshoot non-linear FEA startup files and database files.
  • Installation, Maintenance, Administration and troubleshooting of Sun Solaris, AIX, HP-UX, Linux.
  • As a Linux administrator, primary responsibilities included building new servers: rack mounting, OS installation, configuration of OS-native and third-party tools, and securing of the OS.
  • Extensively worked on hard disk mirroring and striping with parity using RAID controllers.
  • Involved in Server sizing and identifying, recommending optimal server hardware based on User requirements.
  • Support of Applications running on Linux machines for multiple clients.
  • Patched Linux CEL/HP servers in a timely manner to keep them aligned with service-level agreements and to remove bugs from earlier patch bundles.

Confidential

UNIX Systems Administrator.

Environment: ESX Server, VMware Workstation, Linux, WebLogic Server 9.0/10/11, SSH server/client, WebLogic Server 8.1 SP1/SP2, WebLogic Portal, JDK 1.4, J2EE, JRockit 8.1, Solaris.

Responsibilities:

  • Planned, designed and implemented servers and network for engineering test applications.
  • Performed few test deployments on Weblogic servers.
  • Validated the servers to check that they met customer requirements.
  • Provided application support to applications hosted and developed internally.
  • Created Virtual machines on VMware ESX 3.0 for Windows and Linux clients to perform Integration Testing of Switch components.
  • Created and scheduled cron jobs for backups, system monitoring and removal of unnecessary files.
  • Developed, maintained and updated various Unix-based Korn shell and Bash scripts for service start, stop, restart and recycle, and for cron jobs.
  • Updated and ran various source code for migrations, following up with release management.
  • Engaged and worked with different vendors to determine problem resolution and the root cause of hardware problems.
  • Maintained a server farm of around 50 Linux servers and resolved trouble tickets.
  • Installed patches and packages for internal web applications.
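The backup cron jobs mentioned above can be sketched as a nightly tar job with a simple retention sweep. Paths, schedule and the 14-day retention are illustrative assumptions:

```shell
#!/bin/sh
# Sketch: nightly tar backup of an application directory, as would be
# scheduled from cron. All paths are hypothetical.
# Example crontab entry (runs daily at 02:30):
#   30 2 * * * /usr/local/bin/nightly_backup.sh

SRC_DIR="${SRC_DIR:-/opt/app}"
BACKUP_DIR="${BACKUP_DIR:-/backup}"

nightly_backup() {
    stamp=$(date +%Y%m%d)
    # Archive the source directory by name, relative to its parent.
    tar -czf "$BACKUP_DIR/app-$stamp.tar.gz" \
        -C "$(dirname "$SRC_DIR")" "$(basename "$SRC_DIR")"
    # Drop backups older than 14 days (retention period is an assumption).
    find "$BACKUP_DIR" -name 'app-*.tar.gz' -mtime +14 -exec rm -f {} \;
}
```

Restores are then a matter of extracting the dated tarball into the target location with `tar -xzf`.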

Confidential

Unix/Linux Systems Administrator

Environment: Oracle 10g, WebLogic 8.1, Windows 2000/NT/2003 Server, Linux, UNIX, SQL Server 2005, PL/SQL, XML.

Responsibilities:

  • Managed servers on VMware and provided test environments on virtual machines.
  • Provide IT support to internal Presbyterian Home Health Services staff members.
  • Provided application support to large users groups.
  • Installed hardware, installed Linux OS, and configured required network on 25 Node HPC cluster.
  • Configured and managed Apache web servers.
  • Managed software and hardware RAID systems.
  • Managed user accounts and the authentication process via the NIS service.
  • Managed the system firewall using ipchains and iptables; implemented SSH/SSL.
  • Managed user disk usage by setting up quotas.
  • Administered system logs and security logs.
  • Updated software packages and applied security patches.
  • Wrote documentation for internal use about system changes, systems administration, etc.
