
Hadoop Admin Resume


Atlanta, GA

SUMMARY:

  • Systems administrator with 8+ years of overall IT experience in enterprise application development across diverse industries, including hands-on experience with Big Data ecosystem technologies and WebSphere Application Server. 3.5 years of comprehensive experience in Cloudera and MapR Hadoop administration. Experienced in processing Big Data on the Cloudera Hadoop framework using MapReduce programs.
  • Experienced in installation, configuration, supporting and monitoring of Hadoop clusters using Apache, Cloudera, Hortonworks and MapR distributions.
  • Experienced in using Pig, Hive, Sqoop, Oozie, ZooKeeper and Cloudera Manager.
  • Imported and exported data using Sqoop from HDFS to RDBMS.
  • Imported webservers data using Flume.
  • Extended Hive and Pig core functionality by writing custom UDFs.
  • Significant exposure to the Hortonworks Data Platform (HDP) and Ambari.
  • Experienced in analyzing data using HiveQL, Pig Latin, and custom Map Reduce programs in Java.
  • Familiar with the Java virtual machine (JVM) and multi-threaded processing.
  • Experienced in job workflow scheduling and monitoring tools like Oozie and Zookeeper.
  • Collaborated with various application development teams to design solutions for multi-tenant platforms.
  • Hands on experience on HDFS, HIVE, PIG, Hadoop Map Reduce framework and SQOOP.
  • Worked extensively with HIVE DDLs and Hive Query language (HQLs).
  • Developed PIG Latin scripts for handling business transformations.
  • Implemented SQOOP for large dataset transfer between Hadoop and RDBMS.
  • Collaborated with vendors and users to coordinate and accomplish repairs, upgrades, patches, and other enhancements, additions, or replacements.
  • Provided query analysis and tuning advice to end users to maintain throughput and reliable operation.
  • Wrote scripts to deploy monitors, checks, and other sysadmin automation.
  • Good knowledge of JSON and XML.
  • Good knowledge of Python, Perl and HTTP REST.
  • Perform Production Support for any problem leading to acceptable resolution, including daytime, nighttime, and weekend support if required.
  • Perform Incident resolution, Problem Determination and Root Cause Analysis. (i.e. hardware and software diagnostic tools to monitor performance and perform problem determination).
  • Possess familiarity with hardware and software diagnostic tools to monitor performance and perform problem determination.
  • Oversee installations, monitoring and managing change to servers (Overall Change Management for Servers).
  • Oversee implementation of security guidelines in order to prevent unauthorized access to servers and report any violations.
  • Monitor and tune operating systems to achieve optimum performance levels in standalone and multi-tiered environments.
  • Solve complex and recurring operational issues and develop corrective actions as needed.
  • Interact regularly with the Metrics team, developers, engineers, and the IT outsourcer to ensure the company's Reliability, Availability and Serviceability (RAS) metrics are sustained and improved from their current level.
  • Experience in setup, configuration and management of security for Hadoop clusters using Kerberos.
  • Develop and direct enhancement of application monitoring, reporting, error handling and recovery to ensure customer satisfaction, improved operational efficiency, and improved employee technical knowledge and training.
  • Develop a relationship with business unit customers based on understanding their needs and partnering on developing solutions.
  • Partner with project technical leads to ensure technology solutions and staff are in line with customer expectations and overall technology vision and goals.
  • Coordinate with external project teams / resources, as needed.
  • Frequently communicate progress and issues.
  • Ensure compliance with standard practices, processes and enterprise standards.
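The Sqoop transfer work summarized above can be sketched as a shell script. The host, database, and table names below are hypothetical placeholders, not values from this resume.

```shell
#!/bin/sh
# Sketch of a Sqoop import from an RDBMS into HDFS, as in the bullets above.
# Host, database, and table names are hypothetical placeholders.
DB_HOST="dbhost.example.com"
DB_NAME="sales"
TABLE="orders"

# Build the command; replace the final 'echo' with 'eval' on a cluster edge node.
CMD="sqoop import \
  --connect jdbc:mysql://${DB_HOST}/${DB_NAME} \
  --username etl_user -P \
  --table ${TABLE} \
  --target-dir /user/etl/${TABLE} \
  --num-mappers 4"

echo "$CMD"
```

The matching `sqoop export` reverses the direction, reading from an HDFS directory and writing back to an RDBMS table.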

TECHNICAL SKILLS:

Operating Systems: IBM AIX 5.x/6.x, Linux (RHEL 6.x/7.x, SUSE), Sun Solaris 7.x/8.x/9.x/10.x, Windows Server 2000/2003/2008/2012, HP-UX

Languages: Java, JavaScript, HTML, C, C++, Servlets

Web Technologies: JSP, Servlets, EJB, RMI, JAAS, JMS, XML, XSLT.

Tools: Wily Introscope 8.11/8.23/7.1, Resource Analyzer, Log Analyzer, Tivoli Performance Viewer, Thread dump Analyzer, IKeyman, PMAT, IBM support Assistant, Netegrity Siteminder.

Hadoop: Cloudera 4.x/5.x, Sqoop, Flume, Hive, Oozie, Zookeeper, Pig, HDP 2.0, HDP 2.3, HCatalog, Ambari 2.1.2, Ambari REST APIs

RDBMS: Oracle 9i/10g/11g, MS SQL Server, MS Access.

Application Servers: IBM WebSphere Application Server 4.x/5.x/6.x/7.x/8.x

Web Servers: JBoss 6.x, Tomcat 6.x/7.x, IBM HTTP Server 2.x/6.x/7.x/8.x, SunOne Web Server 6.x/7.x, Apache Web Server, IIS Web Server.

Networking: TCP/IP, HTTP/HTTPS, SOAP, SMTP, SSH, FTP.

Scripting Languages: Shell Scripting, Perl, JACL & Jython (wsadmin)

PROFESSIONAL EXPERIENCE:

Confidential, Atlanta, GA

Hadoop Admin

  • Installed and configured the Hortonworks Data Platform (HDP 2.3) on Amazon EC2 instances.
  • Installed Zookeeper, YARN, Slider and Tez on EC2 instances.
  • Configured the Ambari server and Ambari Metrics server to collect metrics from the cluster.
  • Configured a cluster to run long-running jobs using Slider.
  • Enabled high availability for the NameNode, ResourceManager and Hive.
  • Hands on experience on configuring Capacity scheduler.
  • Configured queues and their capacities in the cluster.
  • Configured YARN Queue Manager to accept multiple applications by setting User limit factor.
  • Implemented node labels in the cluster to run applications on particular nodes.
  • Imported data from SQL server to HDFS by using Sqoop.
  • Used Hive to do analysis on HDFS data.
  • Hands on experience in using REST APIs to start/stop services.
  • Wrote scripts using Ambari REST APIs to install/uninstall the Hadoop services.
  • Hands on experience on Apache Ambari 2.1.2.
  • Commissioned and decommissioned nodes in the cluster using REST APIs.
  • Created and deleted EC2 instances in the cluster.
  • Developed and documented procedure to replace hosts in the cluster.
  • Hands on experience in expanding volume for Amazon EC2 instances.
  • Integrated Nagios plugins with Hortonworks to monitor the Hadoop services and nodes.
  • Developed procedures and scripts to shut down services and delete instances from the clusters using REST APIs.
  • Gained good exposure supporting applications and the development team.
  • Commissioned and decommissioned nodes via Ambari.
  • Performed maintenance, monitoring, deployments, and upgrades across infrastructure that supports all our Hadoop clusters.
  • Hands on experience in upgrading the cluster from HDP 2.0 to HDP 2.3.
  • Created Ranger policies for Hive and HDFS.
  • Good exposure to tuning Spark configurations.
  • Implemented a use case to run applications in YARN containers as long-running jobs.
  • Worked with development team to give support for their long running applications and done root cause analysis while resolving their issues.
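The service start/stop scripting via the Ambari REST API described above can be sketched as follows. The Ambari host, cluster name, and credentials are placeholders; in Ambari's API, setting a service's state to INSTALLED stops it and STARTED starts it.

```shell
#!/bin/sh
# Sketch of stopping a service through the Ambari REST API, as used above.
# Host, cluster name, and admin credentials are placeholders.
AMBARI="ambari.example.com:8080"
CLUSTER="prodcluster"
SERVICE="HIVE"

# Build the PUT request that sets the service state to INSTALLED (stopped).
CMD="curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{\"RequestInfo\":{\"context\":\"Stop ${SERVICE}\"},\"Body\":{\"ServiceInfo\":{\"state\":\"INSTALLED\"}}}' \
  http://${AMBARI}/api/v1/clusters/${CLUSTER}/services/${SERVICE}"

# Print the request; swap 'echo' for 'eval' against a live Ambari server.
echo "$CMD"
```

The same endpoint with `"state":"STARTED"` starts the service again, which is what makes this pattern useful for scripted maintenance windows.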

Environment: CentOS, Oracle, MS SQL, Zookeeper, Oozie, MapReduce, YARN, Puppet, Nagios, Hortonworks HDP 2.3, REST APIs, Ranger, Amazon Web Services, Ambari 2.1.2, Sqoop, Hive, Spark

Confidential, Eden Prairie, MN

Hadoop Admin/Developer

  • Installed and configured various components of the Hadoop ecosystem and maintained their integrity.
  • Planned production cluster hardware and software installation and communicated with multiple teams to get it done.
  • Designed, configured and managed the backup and disaster recovery for HDFS data.
  • Commissioned Data Nodes when data grew and decommissioned when the hardware degraded.
  • Migrated data across clusters using DISTCP.
  • Experience in collecting metrics for Hadoop clusters using Ganglia.
  • Experience in creating shell scripts for detecting and alerting on system problems.
  • Worked with systems engineering team to plan and deploy new hadoop environments and expand existing hadoop clusters.
  • Monitored multiple hadoop clusters environments using Ganglia and Nagios. Monitored workload, job performance and capacity planning.
  • Worked with application teams to install hadoop updates, patches, version upgrades as required.
  • Installed and configured Hive, Pig, Sqoop and Oozie on the 2.0 cluster.
  • Involved in implementing High Availability and automatic failover infrastructure to overcome single point of failure for Namenode utilizing zookeeper services.
  • Implemented HDFS snapshot feature.
  • Worked with big data developers, designers and scientists in troubleshooting map reduce job failures and issues with Hive, Pig and Flume.
  • Configured custom interceptors in Flume agents for replicating and multiplexing data into multiple sinks.
  • Developed simple to complex MapReduce streaming jobs using Python, implemented alongside Hive and Pig.
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries (HiveQL) and running Pig scripts (Pig Latin) to study customer behavior.
  • Implemented Kerberos Security Authentication protocol for existing cluster.
  • Worked with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS and Hive access.
  • Used Impala to read, write and query the Hadoop data in HDFS or HBase or Cassandra.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Executed tasks for upgrading cluster on the staging platform before doing it on production cluster.
  • Worked with Linux server admin team in administering the server hardware and operating system.
  • Performed maintenance, monitoring, deployments, and upgrades across infrastructure that supports all our Hadoop clusters.
  • Provided ad-hoc queries and data metrics to the Business Users using Hive, Pig.
  • Managed Hadoop clusters: setup, install, monitor, maintain.
  • Commissioned DataNodes when data grew and decommissioned when the hardware degraded.
  • Debugging and troubleshooting the issues in development and Test environments.
  • Monitor cluster stability, use tools to gather statistics and improve performance.
  • Help to plan for future upgrades and improvements to both processes and infrastructure.
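The cross-cluster migration with DISTCP mentioned above can be sketched as a shell command. The NameNode hostnames and paths are placeholders; `-update` copies only changed files, and `-skipcrccheck` is commonly paired with it when the two clusters run different Hadoop versions.

```shell
#!/bin/sh
# Sketch of cross-cluster data migration with DistCp, as mentioned above.
# NameNode hosts and HDFS paths are hypothetical placeholders.
SRC="hdfs://nn-old.example.com:8020/data/events"
DST="hdfs://nn-new.example.com:8020/data/events"

CMD="hadoop distcp -update -skipcrccheck ${SRC} ${DST}"

# Print the command; run it via 'eval' on a node that can reach both clusters.
echo "$CMD"
```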

Environment: MapR, Sqoop, Flume, Hive, HQL, Pig, RHEL, CentOS, Oracle, MS SQL, Zookeeper, Oozie, MapReduce, PostgreSQL, Nagios, Hortonworks HDP 2.2/2.3

Confidential - Chicago, IL

Hadoop Administrator

  • Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
  • Wrote the shell scripts to monitor the health check of Hadoop daemon services and respond accordingly to any warning or failure conditions.
  • Managing and scheduling Jobs on a Hadoop cluster.
  • Deployed Hadoop clusters in two modes: pseudo-distributed and fully distributed.
  • Implemented NameNode backup using NFS. This was done for High availability.
  • Worked on importing and exporting data from Oracle and DB2 into HDFS using Sqoop.
  • Developed PIG Latin scripts to extract the data from the web server output files to load into HDFS.
  • Created Hive External tables and loaded the data in to tables and query data using HQL.
  • Wrote and automated shell scripts for rolling day-to-day processes.
  • Collected log data from web servers and integrated it into HDFS using Flume.
  • Implemented the Fair Scheduler on the JobTracker to share cluster resources among users' MapReduce jobs.
  • Experience in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions.
  • Hands-on experience on major components in Hadoop Ecosystem including Hive, HBase, HBase-Hive Integration, PIG, Sqoop, Flume & knowledge of Mapper/Reduce/HDFS Framework.
  • Have knowledge about Hadoop 2.0 version and CDH 5.x.
  • Set up standards and processes for Hadoop based application design and implementation.
  • Worked on NoSQL databases including HBase and MongoDB.
  • Good experience in analysis using PIG and HIVE and understanding of SQOOP and Puppet.
  • Expertise in database performance tuning & data modeling.
  • Experienced in developing MapReduce programs using Apache Hadoop for working with Big Data.
  • Good understanding of XML methodologies (XML, XSL, XSD) including Web Services and SOAP.
  • Expertise in working with different databases like Oracle, MS-SQL Server, Postgresql and MS Access 2000 along with exposure to Hibernate for mapping an object-oriented domain model to a traditional relational database.
  • Extensive experience in data analysis using tools like Syncsort and HZ along with Shell Scripting and UNIX.
  • Involved in log file management: logs older than 7 days were removed from the log folder, loaded into HDFS, and stored for 3 months.
  • Expertise in development support activities including installation, configuration and successful deployment of changes across all environments.
  • Familiarity and experience with data warehousing and ETL tools.
  • Good understanding of Scrum methodologies, Test Driven Development and continuous integration.
  • Experience in production support and application support by fixing bugs.
  • Used HP Quality Center for logging test cases and defects.
  • Major strengths include familiarity with multiple software systems and the ability to quickly learn new technologies and adapt to new environments; a self-motivated, focused team player and quick learner with excellent interpersonal, technical and communication skills.
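The daemon health-check scripting described above can be sketched as a small shell function. The daemon names checked at the bottom are examples; the output format is one line per check, suitable for piping into an alerting tool such as Nagios.

```shell
#!/bin/sh
# Sketch of a Hadoop daemon health check, as described in the bullets above:
# report whether a named process is running, one parseable line per check.
check_daemon() {
    name="$1"
    if pgrep -f "$name" >/dev/null 2>&1; then
        echo "OK: $name is running"
    else
        echo "CRITICAL: $name is not running"
        return 1
    fi
}

# Example daemon names; on a real node these would match the JVM process names.
check_daemon "NameNode" || true
check_daemon "DataNode" || true
```

A cron entry running this every few minutes, with CRITICAL lines mailed or forwarded to the monitoring system, covers the "respond to any warning or failure conditions" case.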

Environment: Cloudera CDH 4.4 and CDH 3, Cloudera Manager, Sqoop, Flume, Hive, HQL, Pig, RHEL, CentOS, Oracle, MS SQL, Zookeeper, Oozie, MapReduce, Apache Hadoop 1.x, PostgreSQL, Nagios.

Confidential, Columbus, OH

Hadoop Administrator

  • Installed, configured, administered, troubleshot, and tuned WAS ND 6.1/7.0 on AIX, HP-UX and Solaris and provided extensive support in deployment, change management and application-level troubleshooting for the Dev, Test, Pre-Prod and Production environments.
  • Responsible for setting up the Dev, Test, QA, Pre-Prod and Production Environments which includes installing the WebSphere, Creating Profiles, nodes, federation of nodes, WAS instances, Application Servers, Clusters, Virtual hosts, Data Sources and Plug-in configuration.
  • Installed and Configured IBM HTTP Server V6.0/6.1 and Apache Web server V2.2/2.3 with WebSphere Application Server in cluster environment and configured Site minder to work with WAS.
  • Deployed application EARS on WebSphere Application Server Network Deployment in QA, Staging and Production environments on a daily basis and troubleshoot various configuration and application issues.
  • Highly experienced in optimizing performance of WebSphere Application server using Workload Management (WLM).
  • Configured the Webserver Instances, Session Management and Virtual hosts for WebSphere Application Server.
  • Responsible for deploying enterprise applications from Admin console and enabling security using LTPA and LDAP for WebSphere Application Server.
  • Migrated existing applications from WAS 6.1 to WAS 7.0 using pre-upgrade and WAS post-upgrade tools.
  • Maintained WebLogic Application Servers, JBOSS servers, and Oracle Database Servers.
  • Developed wsadmin scripts in JACL and Jython to automate WebSphere processes including start/stop, creating and configuring servers, clusters, JDBC and MQ resources, deploying applications, and setting JVM custom properties.
  • Experience with in product & fix pack installations and version upgrades of Websphere Application Server.
  • Configured the extranet Web Servers and intranet application servers using the firewalls between the Web Servers and application servers (DMZ model).
  • Installed, Configured & regenerated the Web Server Plug-in for IBM HTTP Server.
  • Expert in administering the product from the command line.
  • Involved in installing WebSphere MQ 5.3/6.0 on AIX and Linux environments.
  • Analyzed Heap dumps and Core dumps using IBM thread analyzer and heap analyzer.
  • Performed troubleshooting on Java applications using WAS logs, traces, Log Analyzer, Resource Analyzer Performance Viewer in production environment.
  • Configured WebSphere resources like JDBC providers, JDBC data sources 4/5 and connection pooling and tuning it and monitoring it using Tivoli Performance viewer by enabling PMI.
  • Installed SSL certificates on the web servers and troubleshot problem tickets; worked with developers to identify the root cause and resolve the issue or propose a potential workaround.
  • Problem determination using local error logs and by running user traces and service traces.
  • Involved in Creating & Configuring SSL for high security of web application.
  • Worked closely with developers to define and configure application Servers, Virtual Hosts, Web Applications, Web resources, Servlets, JDBC drivers and Servlet Engines-as well as deployment of EJBs across multiple instances of WebSphere.
  • Performed routine management of WebSphere Environment like monitoring Disk Space, CPU Utilization and resolved dynamic cache problems.
  • Managed multiple high profiles, complex Applications and implemented with minimal disruption to end-users and provided 24x7 support on shift rotation basis.
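The wsadmin automation described above can be sketched as a shell wrapper that passes an inline Jython command to the tool. The WAS_HOME path is a placeholder; `AdminControl.queryNames` is the standard wsadmin call for listing running MBeans such as servers.

```shell
#!/bin/sh
# Sketch of driving wsadmin with inline Jython, in the spirit of the
# automation scripts above. WAS_HOME is a placeholder installation path.
WAS_HOME="/opt/IBM/WebSphere/AppServer"

CMD="${WAS_HOME}/bin/wsadmin.sh -lang jython \
  -c \"print AdminControl.queryNames('type=Server,*')\""

# Print the command; run it on the deployment manager host to list servers.
echo "$CMD"
```

Longer automation (start/stop, resource creation) is usually kept in a `.py` script passed via `-f` instead of `-c`.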

Environment: IBM WebSphere Application Server Network Deployment 6.x/7.x, AIX, HP-UX, WebSphere MQ 7.0, IHS Web Server 6.0/6.1/7.0, JBoss, Wily, JProbe, TAM and ITCAM

Confidential, Richardson, TX

Unix/WebSphere Administrator

  • Installed, configured and administered WebSphere Application Server 6.0.2/6.1 (ND, XD) on multiple platforms.
  • Installed and configured 6.1/6.0 version of IHS, IIS and plug-in in development, staging and production environments.
  • Used WebSphere Admin Console and wsadmin/JACL/Jython scripting to install enterprise EAR, WAR files for deployment on WebSphere Application Server Network Deployment in QA, Staging and Production environments on a daily basis and troubleshoot various configuration and application issues.
  • Configured WebSphere resources like JDBC providers, JDBC data sources 4/5 and connection pooling and tuning it and monitoring it using Tivoli Performance viewer by enabling PMI.
  • Troubleshoot problems on the various environments involving the integrations of WebSphere, IBM HTTP Server, iPlanet web Servers, and LDAP.
  • Configured WebSphere Application Server on multiple platforms for both Horizontal and Vertical clustering for Work Load Management.
  • Worked with developers and QA team in various stages of development and testing and taking the application from DEV to Test to QA and PROD environments.
  • Involved in opening and working on PMR’s with IBM to solve various issues related to the environment.
  • Enabled traces as part of troubleshooting and used collector tool to submit the logs and traces to IBM after running the must gather scripts and enabling various traces and taking thread dumps.
  • Reviewed Web Server, Application Server Performance Monitoring data using both Wily Introscope and Tivoli Performance Viewer and reviewed historical Tivoli logs for root cause analysis, recurring events and involved in troubleshooting the recurring problems.
  • Performed troubleshooting on Java applications using WAS logs, traces, Log Analyzer, Resource Analyzer/Tivoli Performance Viewer in production environment.
  • Configured GC parameters, monitoring the heap sizes by setting verbose GC, fine tuning and fixing memory issues.
  • Installed SSL certificates on the web servers and troubleshot problem tickets; worked with developers to identify the root cause and resolve the issue or propose a potential workaround.
  • Configured enterprise applications and corrected performance problems by monitoring server availability and resource utilization analysis.
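A generic first step for the SSL troubleshooting mentioned above is to inspect the certificate a server actually presents. This is a standard `openssl` check, not a command from this resume; the hostname is a placeholder.

```shell
#!/bin/sh
# Sketch of a generic SSL troubleshooting check related to the certificate
# work above: print the validity dates of the certificate a server presents.
# The hostname is a hypothetical placeholder.
HOST="www.example.com"

CMD="echo | openssl s_client -connect ${HOST}:443 -servername ${HOST} 2>/dev/null \
  | openssl x509 -noout -dates"

# Print the pipeline; run it from any host with network access to the server.
echo "$CMD"
```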

Environment: WebSphere Application Server Network Deployment 7.0/6.1/6.0, Sun Solaris 10/9, AIX, HP-UX, WebSphere MQ 6.0, iPlanet Web Server 6.1, IHS Web Server 6.0/6.1, JBoss, Wily, JProbe, Tivoli Directory Server v5.0 (LDAP), Tivoli Performance Viewer, ITCAM

Confidential

Unix/WebSphere Administrator

  • Responsible for installation, configuration and maintenance of J2EE applications on WebSphere Application Server, Tomcat, HTTP Server in a multi clustered high availability environments.
  • Installed, configured, administered and supported WebSphere Application Servers 5.0.x/5.1.x/6.0.x/6.1.x.
  • Installed Fix packs, Cumulative Fixes and Refresh Packs on the Base and ND Versions.
  • Highly involved in deploying, troubleshooting, maintaining and configuring J2EE applications in various environments like Dev, Integration, QA, Stress, UAT and Prod.
  • Involved in developing enterprise-level MQ infrastructure with distributed queuing and clustering; managed MQ channels and collected JVM statistics and garbage collection data to monitor heap and physical memory.
  • Involved in Load balancing/Tuning/Clustering for IBM WebSphere Application Server using Deployment Manager (Network Deployment).
  • Managed the security and performance optimizations for EJB containers and web applications in IBM WebSphere, IBM HTTP Server and Apache web server.
  • Administration and troubleshooting of working Application - starting and stopping the application Server - Regenerating/updating plug-in for Apache Web Server.
  • Debugging of the application problems working by very closely with development teams.
  • Installed and configured IBM HTTP web server and Implemented security-using LTPA for Netscape LDAP Server.
  • Used Tivoli Performance Viewer, Log Analyzer, and Thread Analyzer for performance and troubleshooting.
  • Involved in configuring the WebSphere load balancing utilizing WebSphere Workload Management (WLM) including horizontal scaling and vertical scaling.
  • Package, build, integrate and deploy enterprise J2EE applications on WebSphere 5.0 that involves EAR, JAR, WAR files.
  • Involved extensively in troubleshooting issues and finding root causes by analyzing Java core dumps while investigating and resolving system crashes on WebSphere AS.
  • Developed many JACL, Jython, Perl, WSCP scripts and shell scripts to automate the maintenance process of the WebSphere and recovered the backed up WebSphere configuration using XML Config tool.
  • Involved in troubleshooting and performance tuning using Resource Analyzer and Log Analyzer.
  • Managing cron jobs, batch processing and job scheduling.
  • Debugged WebSphere Application Server connection pooling and connection manager with Oracle.
  • Used various commands to check the Performance of the system, load balances on the CPU and memory.
  • Created Problem Management Records (PMRs) with IBM for any unresolved bugs.
  • Security, users, groups administration and daily backup and restore operations.
  • Packaged, built, integrated and deployed enterprise J2EE applications on WebSphere 6.1.0.9/6.0.2.13/5.x involving EAR (Enterprise Archive) and WAR (Web Archive) files.
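The scripted EAR deployments described above can be sketched with wsadmin's Jython interface. The EAR path and application name are placeholders; `AdminApp.install` and `AdminConfig.save` are the standard wsadmin calls for installing an application and persisting the configuration.

```shell
#!/bin/sh
# Sketch of a scripted EAR deployment through wsadmin (Jython), matching the
# package/build/deploy bullets above. EAR path and app name are placeholders.
WAS_HOME="/opt/IBM/WebSphere/AppServer"
EAR="/tmp/myapp.ear"
APP="MyApp"

CMD="${WAS_HOME}/bin/wsadmin.sh -lang jython -c \
  \"AdminApp.install('${EAR}', ['-appname', '${APP}']); AdminConfig.save()\""

# Print the command; run it against the deployment manager to install the EAR.
echo "$CMD"
```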

Environment: J2EE, Tomcat Application Server 4.0/5.2, WebSphere Application Server 6.1/6.0, Apache HTTP Server v1.3/v2.0, WebSphere MQ, CA SiteMinder 5.x, Sun ONE Directory Server v5.x, DB2, Windows NT, Solaris, AIX.
