
Hadoop Admin Resume


NJ

SUMMARY:

  • 6+ years of experience designing and implementing high-availability systems using various cluster software packages across Sun SPARC and x86 platforms.
  • Experience designing, configuring, and implementing virtualization for dynamic data centers, consolidating servers and saving millions of dollars in hardware, software, and maintenance costs using VMware, Solaris Zones and Containers, and LDOMs on Solaris and Linux operating systems.
  • Experience with VMware ESX/ESXi, vSphere, and vCenter technologies.
  • Experience installing and configuring dynamic space utilization using volume-management software such as Symantec (Veritas) Volume Manager, ZFS, Sun Volume Manager (SVM), and Linux LVM.
  • Experience with Sun Blade T- and X-series, CoolThreads Enterprise T-series, and SPARC64 VI architecture servers.
  • Hands-on experience with major components of the Hadoop ecosystem: Hadoop MapReduce, HDFS, Hive, Pig, HBase, ZooKeeper, Sqoop, Oozie, Flume, and Avro.
  • Experience in Object-Oriented Analysis and Design (OOAD) and software development using UML methodology; good knowledge of J2EE and Core Java design patterns.
  • Experience with IBM BladeCenter H chassis and IBM BladeCenter HS22/HS21, including configuration of the BNT Virtual Fabric 10Gb switch module.
  • Experience in Java, JSP, Servlets, EJB, WebLogic, WebSphere, Hibernate, Spring, JBoss, JDBC, RMI, JavaScript, Ajax, jQuery, XML, and HTML.
  • Experience installing, configuring, supporting, and managing Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions.
  • Experience migrating from Solaris to Linux on Sun x86 hardware.
  • Experience deploying server-automation tools, including Operations Orchestration and CMDB setup.
  • Experience with NIS, NIS+, NFS, DNS, DHCP, and TCP/IP.
  • Experience with backups, restores, and disaster recovery using Legato and NetBackup technologies.
  • Experience with Jumpstart technology.
  • Experience managing Hadoop clusters using the Cloudera Manager tool.
  • Experience with Sun Ops Center for OS provisioning, firmware provisioning, patching, and updates across operating systems (Solaris, Linux, and Windows), and for creating patch-compliance reports.
  • Experience in shell scripting for automating system and Hadoop administration tasks.
  • Experience installing, configuring, and administering standalone and clustered WebLogic environments.
  • Experience installing and configuring sendmail with mail filters.
  • Experience with Solaris system security implementation.
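The scripting-for-automation experience above can be illustrated with a minimal sketch: a cleanup function that compresses aging Hadoop daemon logs and expires old archives. The paths and retention windows here are illustrative assumptions, not any specific production policy, and GNU find/gzip behavior on Linux is assumed.

```shell
#!/bin/sh
# Minimal log-retention sketch (illustrative): compress daemon logs not
# modified in the last 7 days, and delete compressed logs older than 30
# days to cap disk usage on cluster nodes.
cleanup_logs() {
    dir="$1"
    # gzip plain .log files older than 7 days (gzip preserves the mtime)
    find "$dir" -name '*.log' -mtime +7 -exec gzip -f {} \;
    # remove gzipped logs older than 30 days
    find "$dir" -name '*.log.gz' -mtime +30 -delete
}

# Example (hypothetical path): cleanup_logs /var/log/hadoop
```

A script like this would typically be driven from cron on each node rather than run by hand.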

TECHNICAL SKILLS:

Programming Languages: Java (J2SE), Sun Java JDK 1.4/1.5

Application/Web Servers: WebSphere Portal Server 6.0, WebSphere 6.0/5.1, Tomcat 5.5, WebLogic Server administration, IIS 6.0

Databases: Oracle 8.x/10.x, DB2, MS SQL Server, MySQL, MS Access

IDEs & Utilities: WSAD 5.1 and Eclipse 3.3/3.4/3.5.1

Scripting Languages: JavaScript, HTML

Protocols: TCP/IP, HTTP, and HTTPS

Operating Systems: Linux, iOS, Windows 98/2000/NT/XP, UNIX, Windows Server 2003/2007/2008

Hadoop Ecosystem: Cloudera, Hadoop MapReduce, Sqoop, Hive, Pig, HBase, HDFS, ZooKeeper, Lucene, Sun Grid Engine administration

PROFESSIONAL EXPERIENCE:

Confidential - NJ

Hadoop Admin

Responsibilities:

  • Installed and configured various components of the Hadoop ecosystem and maintained their integrity.
  • Planned production cluster hardware and software installation and coordinated with multiple teams to complete it.
  • Designed, configured, and managed backup and disaster recovery for HDFS data.
  • Commissioned DataNodes as data grew and decommissioned them when hardware degraded.
  • Migrated data across clusters using DistCp.
  • Collected metrics for Hadoop clusters using Ganglia and Ambari.
  • Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
  • Monitored multiple Hadoop cluster environments using Ganglia and Nagios; monitored workload, job performance, and capacity planning using Ambari.
  • Worked with application teams to install Hadoop updates, patches, and version upgrades as required.
  • Installed and configured Hive, Pig, Sqoop, and Oozie on the HDP cluster.
  • Implemented High Availability and automatic failover for the NameNode using ZooKeeper services, eliminating a single point of failure.
  • Implemented the HDFS snapshot feature.
  • Performed a major upgrade of the production environment from HDP 1.3 to HDP 2.0.
  • Worked with big data developers, designers, and scientists to troubleshoot MapReduce job failures and issues with Hive, Pig, and Flume.
  • Configured custom interceptors in Flume agents for replicating and multiplexing data into multiple sinks.
  • Administered Tableau Server, backing up reports and granting privileges to users.
  • Worked with Tableau to generate reports on HDFS data.
  • Installed Ambari on an existing Hadoop cluster.
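The Flume interceptor/multiplexing work above can be sketched as a minimal agent configuration: an interceptor stamps each event with a routing header, and a multiplexing channel selector fans events out to different channels and sinks. Agent, channel, and path names here are illustrative assumptions, not the actual production setup.

```
# Illustrative Flume agent properties (names are assumptions)
agent1.sources = src1
agent1.channels = ch-hdfs ch-hbase
agent1.sinks = sink-hdfs sink-hbase

agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app/app.log
agent1.sources.src1.channels = ch-hdfs ch-hbase

# Static interceptor stamps each event with a routing header
agent1.sources.src1.interceptors = i1
agent1.sources.src1.interceptors.i1.type = static
agent1.sources.src1.interceptors.i1.key = route
agent1.sources.src1.interceptors.i1.value = hdfs

# Multiplexing selector routes on that header; unmatched events default
# to the HBase channel
agent1.sources.src1.selector.type = multiplexing
agent1.sources.src1.selector.header = route
agent1.sources.src1.selector.mapping.hdfs = ch-hdfs
agent1.sources.src1.selector.default = ch-hbase
```

Swapping the selector type to `replicating` would instead deliver every event to all listed channels.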

Environment: Hadoop, HDFS, Ambari, MapReduce, YARN, Oracle 11g/10g, Cloudera CDH (Apache Hadoop), SQL*Plus, Shell Scripting, GoldenGate, Red Hat/SUSE Linux, EM Cloud Control.

Confidential, Philadelphia, PA

Hadoop Admin

Responsibilities:

  • Designed and deployed a Hadoop cluster that can scale to petabytes.
  • Commissioned DataNodes as data grew and decommissioned them when hardware degraded.
  • Installed and configured Ganglia and Nagios to monitor Hadoop clusters.
  • Managed Hadoop clusters using Cloudera Manager.
  • Worked closely with data analysts to construct creative solutions for their analysis tasks.
  • Installed, configured, and managed Flume infrastructure.
  • Developed MapReduce jobs for analyzing data stored in HDFS, performing map-side and reduce-side joins.
  • Performed data analytics in Hive and exported the resulting metrics back to an Oracle database using Sqoop.
  • Loaded log data directly into HDFS using Flume.
  • Managed and reviewed Hadoop log files.
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Provided ad-hoc queries and data metrics to business users using Hive and Pig.
  • Managed Hadoop clusters: setup, installation, monitoring, and maintenance.
  • Debugged and troubleshot issues in development and test environments.
  • Conducted root-cause analysis and resolved production problems and data issues.
  • Proactively involved in ongoing maintenance, support, and improvements of the Hadoop cluster.
  • Documented and managed failure/recovery scenarios (loss of NameNode, loss of DataNode, replacement of hardware or nodes).
  • Involved in minor and major release work activities.
  • Executed cluster-upgrade tasks on the staging platform before performing them on the production cluster.
  • Installed and administered a Hadoop cluster.
  • Implemented total order sorting for globally sorted reducer result sets.
  • Configured Datameer for generating reports and visual analysis of HDFS data.
  • Worked with the Linux server admin team in administering server hardware and operating systems.
  • Performed maintenance, monitoring, deployments, and upgrades across the infrastructure supporting all our Hadoop clusters.
  • Monitored cluster stability and used tools to gather statistics and improve performance.
  • Helped plan future upgrades and improvements to both processes and infrastructure.
  • Kept current with the latest technologies to help automate tasks and implement tools and processes to manage the environment.
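The Hive-to-Oracle export step above can be sketched as a Sqoop invocation. All connection details, table names, and warehouse paths below are illustrative assumptions, and the command is only echoed (a dry run) so it can be reviewed before being executed against a real cluster.

```shell
#!/bin/sh
# Dry-run sketch of exporting a Hive-produced metrics table to Oracle via
# Sqoop. Connection string, user, table, and export dir are hypothetical.
build_sqoop_export() {
    echo sqoop export \
        --connect "jdbc:oracle:thin:@//oradb.example.com:1521/ORCL" \
        --username etl_user \
        --table DAILY_METRICS \
        --export-dir /user/hive/warehouse/daily_metrics \
        --input-fields-terminated-by ',' \
        --num-mappers 4
}
build_sqoop_export
```

Removing the `echo` (and supplying a password source) would run the actual export on a node with Sqoop and the Oracle JDBC driver installed.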

Environment: Hortonworks HDP (HDFS), MapReduce, Hive, Pig, Java (JDK 1.6), AWS, CentOS 6.4, Shell Scripting, Flume, Apache, Sqoop, HBase, Red Hat Linux 6.4, MySQL 5.5, Oracle 11g/10g, PL/SQL, SQL*Plus, Toad 9.6, Windows NT, UNIX Shell Scripting.

Confidential - Winston Salem, NC

Hadoop Admin

Responsibilities:

  • Designed, deployed, and managed Hadoop machines for our data platform operations (racking/stacking), including datacenter capacity planning and deployment.
  • Installed and configured Hadoop 0.20, HBase, ZooKeeper, and Hive; set up Puppet for centralized configuration management.
  • Set up the monitoring tools Ganglia and Nagios for Hadoop monitoring and alerting; monitored the Hadoop/HBase/ZooKeeper cluster with these tools.
  • Performed Hadoop cluster tasks such as adding and removing nodes without affecting running jobs or data.
  • Used Cloudera Manager to pull metrics on various cluster features such as the JVM and running map and reduce tasks.
  • Wrote scripts to automate application deployments and configuration; tuned and monitored Hadoop cluster performance; troubleshot and resolved Hadoop cluster system problems.
  • Made extensive use of Veritas Volume Manager for disk and file system management in a Sun Solaris environment.
  • Developed UDFs (User-Defined Functions) in Java for use in Pig programs.
  • Installed, maintained, upgraded, and supported Apache and JBoss application servers on Red Hat Linux systems.
  • Installed and configured Veritas NetBackup on Sun servers and performed backups using it.
  • Installed various Hadoop ecosystem components and Hadoop daemons.
  • Added new DataNodes when needed and upgraded the Cloudera Hadoop ecosystem components in the cluster using Cloudera distribution packages.
  • Performed Linux backup/restore with tar, including disk partitioning and formatting.
  • Involved in planning, building, and administering various High Availability clusters with heartbeat checking on Sun Solaris using VCS in a heterogeneous SAN environment.
  • Planned, scheduled, and implemented OS patches on both Solaris and Linux boxes as part of proactive maintenance.
  • Debugged and resolved major Cloudera Manager issues by working with the Cloudera support team.
  • Worked with other IT teams, customers (users), and managers to help build and implement systems and standards.
  • Involved in migrating projects from one OS flavor to another.
  • Involved in development, user acceptance, performance testing, and production and disaster recovery servers.
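The centralized configuration management mentioned above can be sketched as a small Puppet manifest fragment: keep a Hadoop config file identical on every node and bounce the DataNode service when it changes. Module, path, and service names are illustrative assumptions.

```
# Illustrative Puppet manifest fragment (names are assumptions)
file { '/etc/hadoop/conf/hdfs-site.xml':
  ensure => file,
  owner  => 'hdfs',
  group  => 'hadoop',
  mode   => '0644',
  source => 'puppet:///modules/hadoop/hdfs-site.xml',
  notify => Service['hadoop-hdfs-datanode'],
}

service { 'hadoop-hdfs-datanode':
  ensure => running,
  enable => true,
}
```

The `notify` relationship is what makes the restart automatic: any change to the managed file triggers a refresh of the subscribed service.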

Environment: Solaris 8/9/10, Red Hat Linux 4/5, BMC Tools, Nagios, Veritas NetBackup, Korn Shell, Bash Scripting, Veritas Volume Manager, web servers, LDAP directory, Active Directory, BEA WebLogic servers, SAN switches, Apache, Tomcat, WebSphere, AIX 5.2/5.3, JBoss 4.0, WebLogic application server, Java.

Confidential - Los Angeles, CA

Unix/Linux Infrastructure Administrator

Responsibilities:

  • Provided cross-platform data access from Linux to Windows users by deploying code on WebLogic servers.
  • Managed users on Unix/Linux systems.
  • Provided application support for various applications deployed in the Linux environment.
  • Gathered requirements from the engineering team and built application installation documents.
  • Validated the engineering requirements with process improvement teams.
  • Managed nodes, jobs, and configuration using the HPC Cluster Manager tool.
  • Monitored Sun Solaris and Linux servers running in a 24x7 data center supporting approximately 100 servers.
  • Helped engineers troubleshoot non-linear FEA startup files and database files.
  • Installed, maintained, administered, and troubleshot Sun Solaris, AIX, HP-UX, and Linux.
  • As a Linux administrator, primary responsibilities included building new servers: rack mounting, OS installation, configuring various OS-native and third-party tools, securing the OS, and job scheduling.
  • Installed and configured Solaris 10 with zone configuration.
  • Worked extensively on hard disk mirroring and striping with parity using RAID controllers.
  • Involved in server sizing and in identifying and recommending optimal server hardware based on user requirements.
  • Supported applications running on Linux machines for multiple clients.
  • Patched Linux (CEL) and HP servers in a timely manner to keep them aligned with service-level agreements and to remove bugs introduced by earlier patch bundles.

Environment: Windows 2008/2007 server, Unix Shell Scripting, SQL Manager Studio, Red Hat Linux, Microsoft SQL Server 2000/2005/2008, MS Access, Putty Connection Manager, Putty, SSH, OpenSSH, Telnet

Confidential - Rhode Island

UNIX/LINUX Systems Admin

Responsibilities:

  • Maintained 50+ Linux and UNIX servers running high-availability applications and databases for clients.
  • Monitored and maintained all storage devices, created reports, and created vdisks and allocated them to different hosts in the storage network.
  • Performed storage allocation from EVA storage to match client needs for production and staging environments; configured LUNs on the server end and created file systems per application/database requirements.
  • Created CIFS and NFS shares on Linux and HP-UX hosts per client requests.
  • Managed a CRM cluster environment for a critical SAP application running in the client's production environment.
  • Monitored all environments through the SiteScope and BSM monitoring systems.
  • Installed application and database packages requested by clients.
  • Made changes required by the database and application teams for Linux application servers.
  • Ran monthly security checks across the UNIX and Linux environment and installed the security patches required to maintain a high security level for our clients.
  • Involved in monthly health-check reports for the SAP HANA environment plus the SAN switch and storage environments.
  • Monitored and maintained the backup and disaster recovery environment created for clients.
  • Performed backups and restores for clients' production and staging environments.
  • Copied/moved file systems from one server to another per client requests.
  • Configured NFS servers, set up servers in the network environment, and configured FTP/NTP/NIS servers and clients for various departments and clients.
  • Involved in system maintenance, including patching, setting up print servers, and configuring file systems using LVM/Veritas on HP-UX/Linux/Sun Solaris OS.
  • Set up backup solutions using native Unix/Linux tools and also worked with OmniBack/Data Protector solutions.
  • Supported applications running on Linux machines for multiple clients.
  • Installed, configured, and administered Linux (Red Hat, CentOS), Sun Solaris, and HP-UX servers.
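The NFS share configuration described above boils down to `/etc/exports` entries on the server. The hosts, paths, and options below are illustrative assumptions, not the client's actual layout.

```
# Illustrative /etc/exports entries (paths and hosts are assumptions):
# read-write export to an application subnet with root squashed,
# and a read-only export to a single named client.
/export/appdata  10.10.20.0/24(rw,sync,root_squash)
/export/builds   client01.example.com(ro,sync)
```

After editing the file, `exportfs -ra` re-reads it and applies the new exports without restarting the NFS service.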

Environment: Linux, Weblogic Server administration, IIS 6.0, Control-M Scheduler, VERITAS Volume Manager 4.x/5.0, Windows 2000 / 2003 / 2008, CX 500 & 600, EMC Control Center 5.2, Nagios

Confidential - Houston, TX

Sr. UNIX Admin

Responsibilities:

  • Managed servers on VMware and provided test environments on virtual machines.
  • Provided IT support to internal Presbyterian Home Health Services staff members.
  • Provided application support to large user groups.
  • Installed hardware, installed the Linux OS, and configured the required network on a 25-node HPC cluster.
  • Configured and managed the Apache web server.
  • Managed software and hardware RAID systems.
  • Managed user accounts and the authentication process via the NIS service.
  • Managed the system firewall using ipchains and iptables; implemented SSH/SSL.
  • Managed user disk usage by setting up quotas.
  • Administered system logs and security logs.
  • Updated software packages and applied security patches.
  • Performed hardware maintenance, upgrades, and troubleshooting on workstations and servers.
  • Communicated with hardware and software vendors and equipment dealers and maintained good relations.
  • Wrote documentation for internal use about system changes, systems administration, etc.

Environment: Oracle 10g, WebLogic 8.1, Windows 2003, Linux, SQL Server 2005, PL/SQL, XML, Windows 2000/NT/2003 Server, UNIX.

Confidential - Fairfield, NJ

Unix Administrator

Responsibilities:

  • Planned, designed, and implemented servers and the network for engineering test applications.
  • Performed a few test deployments on WebLogic servers.
  • Validated the servers to check that they met customer requirements.
  • Provided application support for applications hosted and developed internally.
  • Created virtual machines on VMware ESX 3.0 for Windows and Linux clients to perform integration testing of switch components.
  • Created and scheduled cron jobs for backups, system monitoring, and removal of unnecessary files.
  • Developed, maintained, and updated various scripts for services (start, stop, restart, recycle, cron jobs) in Unix-based Korn shell and Bash.
  • Updated and ran various source code for migration and updates, following release management.
  • Engaged and worked with different vendors to determine problem resolutions and the root causes of hardware problems.
  • Maintained a server farm of around 50 Linux servers and resolved trouble tickets.
  • Installed patches and packages on internal web applications.
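The cron scheduling described above can be sketched as a crontab fragment. The scripts and paths are illustrative assumptions; the five fields are minute, hour, day of month, month, and day of week.

```
# Illustrative crontab entries (script paths are assumptions)
# m  h  dom mon dow  command
0    2  *   *   *    /usr/local/bin/nightly_backup.sh >> /var/log/backup.log 2>&1
*/15 *  *   *   *    /usr/local/bin/check_services.sh
30   3  *   *   0    find /tmp -type f -mtime +7 -delete
```

Entries like these are installed per-user with `crontab -e`, and redirecting output to a log file keeps cron from mailing it to the owner.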

Environment: ESX Server, VMware Workstation, Linux, WebLogic Server 9.0/10/11, SSH server & client, WebLogic Server 8.1 SP1 & SP2, WebLogic Portal, JDK 1.4, J2EE, JRockit 8.1, Solaris, Linux.

Confidential

UNIX Administration Engineer

Responsibilities:

  • Designed and implemented all network and server resources.
  • Performed user management: creating user and group accounts and assigning permissions to file systems.
  • Monitored and analyzed batch jobs through the Control-M tool.
  • Installed patch clusters and firmware updates on a periodic basis.
  • Worked with other teams on planning changes to servers.
  • Managed crontab jobs, batch processing, and job scheduling.
  • Performed software installation and maintenance.
  • Handled security and user/group administration.
  • Monitored networking services, performance, and resources.
  • Worked a 24x7 on-call rotation handling Remedy trouble tickets, break-fix work, and incident changes.

Environment: CRONTAB, XML, SQL, Windows 2000/NT/2003 Server, UNIX.
