WebLogic/Middleware Administrator Resume
Tampa, FL
SUMMARY
- 9 years of professional work experience in the IT industry, including 4 years in Big Data ecosystem technologies.
- Excellent understanding of Hadoop architecture and components such as HDFS, MapReduce, NameNode, DataNode, ResourceManager, NodeManager, JobTracker, and TaskTracker, as well as the Hadoop ecosystem (Hive, Impala, Sqoop, Flume, Oozie, ZooKeeper, Kafka, Spark).
- 4 years of proven experience in Hadoop administration using Apache Hadoop and Cloudera (CDH), plus extensive experience in Linux and system administration.
- Experience with Hortonworks and Cloudera Manager administration, including installing and updating Hadoop and its related components in both single-node and multi-node cluster environments using Apache, Cloudera, and Hortonworks distributions.
- Automated Hadoop cluster setup and implemented Kerberos security for various Hadoop services using Hortonworks.
- Experience in database administration, performance tuning, backup and recovery, and troubleshooting in large-scale, customer-facing environments.
- Experience in managing and reviewing Hadoop log files.
- Experience in analyzing data using HiveQL, Impala, and custom MapReduce programs in Java.
- Extending Hive and Pig core functionality by writing custom UDFs.
- Experience in data management and implementation of Big Data applications using Hadoop frameworks.
- Experience in importing and exporting data using Sqoop between HDFS and relational database systems.
- Experience with leveraging Hadoop ecosystem components, including Pig and Hive for data analysis, Sqoop for data ingestion, Oozie for scheduling, and HBase as a NoSQL data store.
- Experienced in deploying Hadoop clusters using Ambari and Cloudera Manager.
- Experience with Hadoop shell commands, writing MapReduce programs, and verifying, managing, and reviewing Hadoop log files.
- Proficient in configuring ZooKeeper, Flume, and Sqoop on existing Hadoop clusters.
- Good knowledge of Apache Flume, Sqoop, Hive, HCatalog, Impala, ZooKeeper, Oozie, Ambari, and Chef.
- Expertise in deploying Hadoop, YARN, Spark, and Storm, integrated with Cassandra, Ignite, RabbitMQ, and Kafka.
- Maintaining load balancing and high availability across servers.
- Very good knowledge of YARN (Hadoop 2.x) terminology and high-availability Hadoop clusters.
- Experience in analyzing log files for Hadoop and ecosystem services to determine root causes.
- Performed thread dump analysis for stuck threads and heap dump analysis for memory leaks using the Memory Analyzer tool.
- Extensive use of verbose GC output for JVM monitoring during performance tuning (see the sketch after this list).
- Good understanding of garbage collection and of tuning its performance.
- Very good experience with high-volume transactional systems running on Unix/Linux and Windows.
- Involved in all phases of Software Development Life Cycle (SDLC) in large-scale enterprise software using Object Oriented Analysis and Design.
- Provided 24/7 on-call Support for production.
- Able to coordinate multiple tasks under tight schedules; efficient in meeting deadlines.
- Self-starter, fast learner, and team player with strong communication and interpersonal skills.
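A minimal sketch of the verbose GC monitoring referenced above, expressed as JVM options for a HotSpot JVM in the Java 1.4-1.7 range listed under Technical Skills; the heap sizes and log path are illustrative assumptions, not values from any specific engagement:

    # Illustrative JVM options for verbose GC logging (HotSpot, Java 6/7).
    # Heap sizes and log path are placeholder assumptions.
    JAVA_OPTIONS="-Xms2g -Xmx2g \
      -verbose:gc \
      -XX:+PrintGCDetails \
      -XX:+PrintGCDateStamps \
      -Xloggc:/var/log/appserver/gc.log"
    export JAVA_OPTIONS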
TECHNICAL SKILLS
Hadoop / Big Data: MapReduce v1 & v2, Hive, Impala, HBase, Sqoop, Flume, Oozie, ZooKeeper, Kafka, Sentry, Kerberos, Ambari, Chef.
Frameworks: MVC, Struts, Spring, Hibernate.
Internet/Java Technologies: JSP, Servlets, EJB 2.0, XML, HTML, JavaScript, JNDI, JMS, and JDBC 2.0.
Web/App servers: Oracle/BEA WebLogic 9.x/10.x/11g/12c, WebLogic Portal Server 9.x, OSB, Apache Tomcat 4.5/5.0, Apache HTTP Server 2.0/2.2, Oracle HTTP Server (OHS).
Databases: Oracle 9i/10g/11g, MS SQL Server 2005/2008, MySQL, DB2, MS Access.
Scripting: Shell, Python, WLST.
Protocols: HTTP, HTTPS, FTP, t3, t3s, TCP/IP, LDAP.
Platforms: Red Hat Linux 4/5/6, Linux 2.6.32, Windows XP/2003.
Others: Jenkins, Sun ONE Directory Server, Wily Introscope 7.x, Console, LAN/WAN, Java 1.4/1.5/1.6/1.7.
PROFESSIONAL EXPERIENCE
Hadoop Administrator
Confidential, Atlanta, GA
Responsibilities:
- Responsible for cluster maintenance, commissioning and decommissioning DataNodes, cluster monitoring, troubleshooting, and managing and reviewing data backups and Hadoop log files.
- Experience with Cloudera Hadoop upgrades and patches, installation of ecosystem products through Cloudera Manager, and Cloudera Manager upgrades.
- Capacity planning, hardware recommendations, performance tuning and benchmarking.
- Cluster balancing and performance tuning of Hadoop components such as HDFS, Hive, Impala, MapReduce, and Oozie workflows.
- Taking backups of metadata and databases before upgrading the BDA cluster and deploying patches.
- Adding and decommissioning Hadoop cluster nodes, including balancing HDFS block data.
- Implemented the Fair Scheduler on the ResourceManager to share cluster resources among users' MapReduce jobs.
- Configured Kerberos and AD/LDAP integration for the Hadoop cluster.
- Worked with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
- Implemented Kerberos security across the cluster.
- Worked with data delivery teams to set up new Hadoop users, including creating Linux accounts, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce/YARN access for the new users.
- Experience in setting up data ingestion tools such as Flume and Sqoop.
- Installed and set up HBase and Impala.
- Set up quotas on HDFS and implemented rack topology scripts.
- Worked with big data analysts, designers, and scientists to troubleshoot MapReduce job failures and issues with Hive, Pig, Flume, Apache Spark, and Sentry.
- Utilized Apache Spark for interactive data mining and data processing.
- Used Apache Kafka as a fast, scalable, fault-tolerant buffer to accommodate incoming load before the data is analyzed.
- Configured Sqoop to import and export data between HDFS and RDBMSs (see the sketch after this list).
- Handled data exchange between HDFS, web applications, and databases using Flume and Sqoop.
- Created Hive tables and was involved in data loading.
- Expertise in the Hadoop stack: MapReduce, Sqoop, Pig, Hive, HBase, Kafka, and Spark.
- Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
- Set up automated processes to analyze system and Hadoop log files for predefined errors and send alerts to the appropriate groups.
- Set up automated processes to archive/clean unwanted data on the cluster, including on the NameNode and Secondary NameNode.
- Analyzed system failures, identified root causes, and recommended courses of action.
- Documented system processes and procedures for future reference.
- Worked with the systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
- Installed and configured Kerberos for the authentication of users and Hadoop daemons.
- Supported technical team members in management and review of Hadoop log files and data backups.
- Participated in development and execution of system and disaster recovery processes.
- Experience with AWS EMR and Cloudera Manager, as well as Hadoop deployed directly on EC2 (non-EMR).
- Monitored the Hadoop cluster through Cloudera Manager and implemented alerts based on error messages; provided cluster usage metrics reports to management and charged back customers based on their usage.
- Performed performance tuning, client/server connectivity checks, and database consistency checks using various utilities.
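A minimal sketch of the Sqoop import/export commands referenced above, of the kind used to move data between an RDBMS and HDFS; the hostname, database, tables, and paths are hypothetical placeholders, not actual engagement details:

    # Hypothetical import of a MySQL table into HDFS.
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Hypothetical export of aggregated results back to MySQL.
    sqoop export \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user -P \
      --table order_summary \
      --export-dir /data/out/order_summary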
Environment: Hadoop, MapReduce v2, Hive, HDFS, Sqoop, Oozie, Cloudera, Flume, Kafka, Spark, HBase, ZooKeeper, BI, LDAP, NoSQL, Cognos, DB2, Unix/Linux, Kerberos.
Hadoop Administrator
Confidential, Richardson, TX
Responsibilities:
- Performed various configurations, including networking and iptables, hostname resolution, user accounts and file permissions, HTTP, FTP, and passwordless SSH login.
- Implemented authentication service using Kerberos authentication protocol.
- Created volume groups, logical volumes and partitions on the Linux servers and mounted file systems on the created partitions.
- Performed regular disk management, such as adding/replacing hot-swappable drives on existing servers/workstations, partitioning according to requirements, creating new file systems or growing existing ones, and managing file systems.
- Performed benchmarking on the Hadoop cluster using different benchmarking mechanisms.
- Tuned the cluster by commissioning and decommissioning DataNodes.
- Upgraded the Hadoop cluster from CDH3 to CDH4.
- Implemented the Fair Scheduler on the JobTracker to allocate a fair share of resources to small jobs.
- Deployed high availability on the Hadoop cluster using quorum journal nodes.
- Implemented automatic failover using ZooKeeper and the ZooKeeper Failover Controller (ZKFC).
- Implemented Kerberos for authenticating all the services in Hadoop Cluster.
- Deployed a network file system (NFS) mount for NameNode metadata backup.
- Performed cluster backups using DistCp, Cloudera Manager BDR, and parallel ingestion.
- Designed and allocated HDFS quotas for multiple groups.
- Utilized the Apache Hadoop environment provided by Hortonworks.
- Hands-on experience with Hortonworks services.
- Performed both major and minor upgrades to the existing Hortonworks Hadoop cluster.
- Automated Hadoop cluster setup and implemented Kerberos security for various Hadoop services using Hortonworks.
- Configured and deployed the Hive metastore using MySQL.
- Used Hive schemas to create relations in Pig via HCatalog.
- Designed and built a platform on AWS Elastic MapReduce with S3/HDFS storage, HBase, and Apache Mahout.
- Developed automated Unix shell scripts for running the HDFS balancer, file system health checks, schema creation in Hive, and user/group creation on HDFS (see the sketch after this list).
- Worked on NoSQL databases including HBase, MongoDB, and Cassandra.
- Deployed Sqoop server to perform imports from heterogeneous data sources to HDFS.
- Deployed and configured flume agents to stream log events into HDFS for analysis.
- Deployed YARN, which enables multiple applications to run on the cluster.
- Configured Oozie for workflow automation and coordination.
- Wrote custom shell scripts to automate repetitive tasks on the cluster.
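A minimal sketch of the balancer and file-system health-check automation referenced above; the balancer threshold and report path are placeholder assumptions:

    #!/bin/bash
    # Rebalance HDFS until no DataNode deviates more than 10% from the
    # cluster-average utilization (threshold is a placeholder assumption).
    hdfs balancer -threshold 10

    # File system health check; keep a dated report for review.
    hdfs fsck / > /var/log/hadoop/fsck-$(date +%F).log 2>&1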
Environment: Hadoop, Hortonworks, HDFS, MapReduce, Hive, Sqoop, BI, Flume, Oozie, Cloudera Manager, Kerberos, Ambari, MySQL, SQL.
Hadoop Administrator
Confidential, Charlotte, NC
Responsibilities:
- Installed, configured, and maintained Apache Hadoop clusters for application development, along with Hadoop tools such as Hive, Pig, HBase, ZooKeeper, and Sqoop.
- Wrote shell scripts to monitor the health of Hadoop daemon services and respond to warning or failure conditions (see the sketch after this list).
- Installed and configured Hadoop MapReduce and HDFS (Hadoop Distributed File System); developed multiple MapReduce jobs in Java for data cleaning.
- Developed data pipelines using Flume, Sqoop, Pig, and Java MapReduce to ingest customer behavioral data and financial histories into HDFS for analysis.
- Involved in collecting and aggregating large amounts of log data using Apache Flume and staging the data in HDFS for further analysis.
- Collected log data from web servers and integrated it into HDFS using Flume.
- Worked on installing cluster, commissioning & decommissioning of DataNodes, NameNode recovery, capacity planning, and slots configuration.
- Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
- Involved in the installation of CDH3 and the upgrade from CDH3 to CDH4.
- Responsible for developing data pipelines using HDInsight, Flume, Sqoop, and Pig to extract data from weblogs and store it in HDFS.
- Installed Oozie workflow engine to run multiple Hive and Pig Jobs.
- Used Sqoop to import and export data between HDFS and RDBMSs.
- Used Hive and created Hive tables and involved in data loading and writing Hive UDFs.
- Exported the analyzed data to relational databases.
- Deployed Hadoop Cluster in Fully Distributed and Pseudo-distributed modes.
- Supported setting up the QA environment and updating configurations for implementing scripts with Pig and Sqoop.
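A minimal sketch of the daemon health-check scripting referenced in the first bullet of this list; the daemon names (MRv1-era, matching the CDH3/CDH4 cluster above) and the alert address are placeholder assumptions:

    #!/bin/bash
    # Check that core Hadoop daemons are running; mail an alert if not.
    ALERT="hadoop-ops@example.com"   # placeholder address
    for daemon in NameNode DataNode JobTracker TaskTracker; do
        if ! jps | grep -q "$daemon"; then
            echo "$(date): $daemon not running on $(hostname)" \
                | mail -s "Hadoop daemon alert: $daemon down" "$ALERT"
        fi
    done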
Environment: Hadoop, MapReduce, Hive, HDFS, Pig, Sqoop, Oozie, Cloudera, Flume, HBase, ZooKeeper, CDH4, MongoDB, Cassandra, Oracle, NoSQL, Unix/Linux.
WebLogic/Middleware Administrator
Confidential, Tampa, FL
Responsibilities:
- Migrated WebLogic Server 9.2 to 10.0 MP1, including configuration and administration of the WebLogic-based environment.
- Configured WebLogic 10.0 MP1, with extensive experience in configuring web applications for the new version.
- Worked with application development teams to troubleshoot Tomcat administration issues.
- Administered and monitored major J2EE technologies such as JDBC, JNDI, JMS, and JMX.
- Worked on both clustered and non-clustered environments for web and application servers.
- Automated tasks across existing domains, such as RAM sizing and monitoring, through extensive use of shell scripts (see the sketch after this list).
- Excellent knowledge of OIM, OAM, and LDAP directory servers.
- Worked with the JProbe profiling tool to optimize the application.
- Created and monitored JMS servers, JMS connection factories, queues, topics, database stores, and message bridges.
- Involved in 24/7 support for production environments.
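A minimal sketch of the kind of shell monitoring referenced above for WebLogic JVM memory; the process pattern and threshold are placeholder assumptions:

    #!/bin/bash
    # Report any WebLogic server JVM whose resident memory (RSS) exceeds
    # a threshold. The ~4 GB threshold is a placeholder assumption.
    THRESHOLD_KB=$((4 * 1024 * 1024))
    ps -eo pid,rss,args | grep '[w]eblogic\.Server' | \
    while read -r pid rss args; do
        if [ "$rss" -gt "$THRESHOLD_KB" ]; then
            echo "$(date): PID $pid RSS ${rss} KB exceeds threshold"
        fi
    done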
Environment: WebLogic 8.1/9.2/10.0, Apache Tomcat 5.x, iPlanet, Sun ONE, Oracle ERP, Apache, Oracle E-Business Suite, Windows 2003/2008, Perl, MQ Series, Clarify, Red Hat Linux, Shell scripting, Oracle 9i/10g, JDK 1.5.
WebLogic/Middleware Administrator
Confidential, LA, CA
Responsibilities:
- Production web support engineer providing 24x7 expert technical support and consultation for WebLogic and web servers running on Linux 2.6.32, Solaris 10, and RHEL 4 operating systems.
- Created repeatable, documented processes for onboarding services onto the middleware platform, and built automation for repeatable work.
- Installed and configured new Tomcat and Oracle Fusion instances.
- Maintained and debugged existing applications that interface with a database back end.
- Reported on incidents and followed through to resolution.
- Identified offending processes in various systems and prevented future occurrences.
- Created technical procedures to prevent unscheduled outages.
- Production web support engineer providing 24x7 expert technical support and consultation for WebLogic and MQ Series messaging web servers running on Solaris 8, Solaris 10, and RHEL 4 operating systems.
- Wrote Unix shell scripts for alerting and alarming on applications (see the sketch after this list).
- Upgraded applications from WebLogic 10.3.0 to 10.3.5.
- Developed and troubleshot applications written in Java 1.4/1.5/1.6.
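A minimal sketch of the alerting scripts referenced above: scan a WebLogic server log for new Error/Critical entries since the last run and mail them to on-call; the log path and mail alias are placeholder assumptions:

    #!/bin/bash
    LOG=/opt/weblogic/domains/prod/servers/ms1/logs/ms1.log   # placeholder
    ALIAS=middleware-oncall@example.com                       # placeholder
    STATE=/var/tmp/ms1.lastline

    last=$(cat "$STATE" 2>/dev/null || echo 0)
    total=$(wc -l < "$LOG")
    if [ "$total" -gt "$last" ]; then
        # Only the lines appended since the previous run are examined.
        new_errors=$(tail -n "$((total - last))" "$LOG" | grep -E '<Error>|<Critical>')
        if [ -n "$new_errors" ]; then
            echo "$new_errors" | mail -s "WebLogic errors on ms1" "$ALIAS"
        fi
    fi
    echo "$total" > "$STATE"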
Environment: Oracle WebLogic Server 8.1/9.2/10.x/11g, OSB, OEM, OEM Cloud Control, Oracle SOA Suite 11g, ODSI, JDBC, Red Hat Linux, WSDL, SVN, Hibernate, Oracle 9i/10g, Oracle ERP, JConsole, WLST, RHEL 4/5, MQ Series, IIS, SQL, JDK 1.6.
Middleware Administrator
Confidential, NJ/OH
Responsibilities:
- Installed, configured, and administered Oracle WebLogic Server 8.0/9.2/10.3 and WebLogic Integration in various environments.
- Configured domains, Admin Servers, Managed Servers, clusters, Node Managers, LDAP users, connector modules, JDBC connection pools, data sources, and foreign and WebLogic JMS servers.
- Performed configuration, deployments, and issue troubleshooting in development, testing, demo, certification, training, and production environments.
- Extended support for Development team in configuring domains and troubleshooting issues in development environment.
- Built horizontally and vertically clustered environments per client and environment owners' requests.
- Supported a 24/7 on-call schedule for non-production environments and for QA teams in India and the US.
- Troubleshot emerging application issues, from WebLogic configuration and inter-domain communication to code issues.
- Monitored performance during load testing using Wily Introscope; developed and utilized a Wily Introscope instrumentation/dashboard presentation scheme.
- Wrote shell scripts to start and stop WebLogic servers for automation (see the sketch after this list).
- Installed and configured different web servers such as Apache and Sun ONE.
- Instrumented applications, working closely with development and performance teams to identify slow-running classes, JSPs, SQL, and DB operations.
- Worked closely with the monitoring group to set up production-level dashboards for management and SLA teams.
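A minimal sketch of the start/stop automation referenced above, wrapping the WebLogic domain's standard managed-server scripts; the domain path, server name, and Admin Server URL are placeholder assumptions:

    #!/bin/bash
    DOMAIN_HOME=/opt/bea/user_projects/domains/proddomain   # placeholder
    SERVER=ManagedServer1                                   # placeholder
    ADMIN_URL=t3://admin.example.com:7001                   # placeholder

    case "$1" in
        start)
            # Start the managed server in the background, capturing output.
            nohup "$DOMAIN_HOME/bin/startManagedWebLogic.sh" "$SERVER" "$ADMIN_URL" \
                > "$DOMAIN_HOME/$SERVER.out" 2>&1 &
            ;;
        stop)
            "$DOMAIN_HOME/bin/stopManagedWebLogic.sh" "$SERVER" "$ADMIN_URL"
            ;;
        *)
            echo "Usage: $0 {start|stop}"
            ;;
    esac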
Environment: WebLogic Application Server 8.0/9.2/10.x, IBM WebSphere Application Server 6.1/7.0, Apache, Sun ONE Web Server, Oracle 9i, UNIX (AIX 5.1/6, Linux 5), Wily Introscope, Python, Java, J2EE.