DevOps Engineer Resume
Sunnyvale, CA
SUMMARY
- Around 10 years of professional IT experience, including 2.5 years in the Hadoop/Big Data ecosystem. Domains worked in include Life and Pension insurance and Financial and Banking applications.
- Committed team player with strong analytical and problem-solving skills, willingness to quickly adapt to new environments and technologies, dedication to successful project completion, and excellent communication and interpersonal skills.
- 2.5 years of hands-on experience with Hadoop ecosystem technologies such as Pig, Hive, HBase, MapReduce, Oozie, Flume and Sqoop.
- Good understanding of Hadoop architecture and hands-on experience with Hadoop components such as JobTracker, TaskTracker, NameNode and DataNode, and with MapReduce concepts in the HDFS framework.
- Handled importing and exporting data between HDFS and RDBMS using Sqoop.
- Hands-on experience writing MapReduce programs in Java.
- Experience with Apache Spark, using PySpark and Scala to process real-time data.
- Experience in analytics with Apache Spark (RDD, DataFrames and Streaming APIs).
- Used Spark Streaming to divide streaming data into micro-batches fed to the Spark engine for batch processing (a minimal sketch follows this summary).
- Imported and exported data into HDFS, Hive and HBase using Sqoop.
- Branched and merged code lines in Git and resolved conflicts that arose during merges.
- Hands-on experience with the Cloudera platform.
- Hands-on experience installing Hadoop clusters.
- Extracted data from databases with SQL and used Python for analytics and data cleaning.
- Analyzed data using statistical features in Tableau to develop trend analyses.
- Involved in consuming data from RESTful web services.
- Excellent knowledge of unit testing, regression testing, integration testing, user acceptance testing, production implementation and maintenance.
- Hands-on experience with OS/390, z/OS, ISPF, COBOL, PL1, DB2, CICS, JCL, VSAM, SYNCSORT, Easytrieve, REXX, File-AID, Xpediter, InterTest, IBM Debugger, Endevor, OPC, SPUFI, IBM utilities, RDz and the change configuration system.
- Worked on annual statistics reporting used to generate detailed information about pension capital at Confidential.
- Brief exposure to Oracle 10g, TOAD, Informatica PowerCenter and Python.
- Brief exposure to ETL transformations such as Source Qualifier, Lookup, Filter, Expression, Router, Joiner, Update Strategy, Rank, Aggregator, Sorter, Sequence Generator and Normalizer.
- Experience working with Waterfall model and Agile Methodology.
- Performed impact analysis and provided solutions for users' change requests.
- Performed production implementation upon successful user acceptance testing.
- Good experience in troubleshooting and system performance tuning.
- Experience analyzing entire systems and their impact on other back-end and front-end systems.
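To illustrate the Spark Streaming batching mentioned above, here is a minimal Scala sketch; the socket source, host/port and 10-second batch interval are illustrative assumptions rather than details of any specific project.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingBatchSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("StreamingBatchSketch").setMaster("local[2]")
    // The 10-second interval cuts the incoming stream into micro-batches,
    // each of which is handed to the Spark engine as a regular batch job.
    val ssc = new StreamingContext(conf, Seconds(10))

    // Hypothetical text source on a local socket; the real feeds were different.
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split("\\s+")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```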
TECHNICAL SKILLS
Hadoop & Spark: HDFS, MapReduce v2.6.x, YARN, HBase v0.98.0, Pig 0.14.0, Pig Latin, Hive 0.14.0, Sqoop 1.4.4, Flume 1.4.0, Kafka 0.8.0, Impala, Oozie 4.0.1, Hue, Zookeeper 3.4.6, Spark v2.0 (Python API, Scala API)
Programming languages: COBOL, PL1, JCL, REXX, Java, Python, Linux shell scripts
Java & J2EE Technologies: Core Java, JSP, JDBC
IDE: Eclipse
Frameworks: MVC
Databases: MySQL 5.0.x, DB2 v10
ETL tools: Informatica
BI tool: Tableau 9.0
Mainframe middleware: VSAM
Mainframe utilities: TSO, ISPF, SPUFI, QMF, DataManager
Mainframe tools: Endevor, Xpediter, InterTest, FileMaster, IBM Debugger, RDz
PROFESSIONAL EXPERIENCE
Confidential, Sunnyvale, CA
DevOps Engineer
Responsibilities:
- Installing, configuring and maintaining Continuous Integration and Configuration Management tools.
- Involved in adding hosts back to the Storm and HBase clusters.
- Worked on creating communities and granting users read/write access based on requests.
- Created dashboards for users to monitor the applications.
- Used the Git version control system to access repositories and to coordinate with CI tools.
- Provided production support and was involved in analyzing and resolving errors.
- Used Splunk to debug issues related to the application.
- Worked on HBase and Cloudera upgrades.
- Involved in adding capacity to HBase and various other components.
- Worked on understanding and resolving user queries.
- Used Sqoop (Hadoop to Oracle/Teradata) to copy data from a temporary Hive table to an intermediate table on the destination.
- Developed a program to check data quality by validating the data after loading it into the temporary Hive table, and validated data between the temporary Hive table and the intermediate table on the destination (a minimal sketch follows this list).
- Exported data from Hadoop to Oracle, handling null records in the Hive table for non-string columns.
- Worked on automating regression runs using shell scripts.
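A minimal Scala sketch of the kind of data-quality check described above, assuming JDBC access to both sides; the URLs, credentials and table names are placeholders, and the real validation covered more than row counts.

```scala
import java.sql.DriverManager

object RowCountCheck {
  // Returns SELECT COUNT(*) for one table over JDBC.
  def rowCount(url: String, user: String, pass: String, table: String): Long = {
    val conn = DriverManager.getConnection(url, user, pass)
    try {
      val rs = conn.createStatement().executeQuery(s"SELECT COUNT(*) FROM $table")
      rs.next()
      rs.getLong(1)
    } finally conn.close()
  }

  def main(args: Array[String]): Unit = {
    // Placeholder connection details; the Hive JDBC and Oracle JDBC drivers
    // must be on the classpath for these URLs to resolve.
    val hiveCount   = rowCount("jdbc:hive2://hive-host:10000/default", "etl", "secret", "temp_stage_table")
    val oracleCount = rowCount("jdbc:oracle:thin:@ora-host:1521:ORCL", "etl", "secret", "intermediate_table")

    if (hiveCount == oracleCount) println(s"Row counts match: $hiveCount")
    else println(s"Row count mismatch: hive=$hiveCount oracle=$oracleCount")
  }
}
```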
Technologies: Hadoop, HDFS, MR 2.0.x, Hive 0.10.0, Oracle, HBase, Cloudera Manager, Linux and shell scripting
Confidential
Hadoop Developer
Responsibilities:
- Developed a program to list HBase tables and their corresponding regions, identify empty regions, and merge them with adjacent non-empty regions as part of a performance improvement effort.
- Loaded Apache log data into a Hive table and created Hive queries that helped spot long-running agent queries.
- Worked on HBase upgrade from 0.94 to 0.96 version.
- Worked on Cloudera upgrade from CDH 4.3 to CDH 5.7.
- Wrote a Spark-Scala script to find the most popular movie (a minimal sketch follows this list).
- Used Spark RDD to identify the top-rated items.
- Used self-join and filter functions to identify duplicate data.
- Generated structured, formatted data and loaded it into the Spark cluster using Spark SQL and the DataFrame API.
- Implemented logic in Spark-Scala to calculate the average, maximum and minimum values for a given item per year.
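A minimal Spark-Scala sketch of the "most popular movie" job mentioned above, assuming a MovieLens-style ratings file (userId::movieId::rating::timestamp); the input path is a placeholder.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MostPopularMovie {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MostPopularMovie").setMaster("local[*]"))

    // Placeholder path; each line looks like userId::movieId::rating::timestamp.
    val ratings = sc.textFile("hdfs:///data/ratings.dat")

    // Count ratings per movie, then take the movie with the highest count.
    val (movieId, numRatings) = ratings
      .map(line => (line.split("::")(1), 1))
      .reduceByKey(_ + _)
      .sortBy(_._2, ascending = false)
      .first()

    println(s"Most popular movieId: $movieId with $numRatings ratings")
    sc.stop()
  }
}
```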
Technologies: Hadoop, HDFS, MR 2.0.x, Hive 0.10.0, Pig, HBase 0.96, MySQL, PuTTY, Zookeeper 3.4.6, Linux and shell scripting, CDH 4.3, CDH 5.9, JSP, Servlet, Scala 2.11.8, Spark 2.0.0
Confidential
Software Engineer
Responsibilities:
- Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS.
- Implemented partitioning, dynamic partitioning and bucketing in Hive.
- Involved in creating Hive tables, then applied HQL on those tables for data validation.
- Used Impala to pull the data from Hive tables.
- Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
- Hands-on experience extracting data from different databases and copying it into HDFS using Sqoop.
- Wrote a Sqoop incremental import job to move new/updated data from the database to HDFS.
- Created an Oozie coordinator workflow to execute the Sqoop incremental job daily.
- Loaded data from various data sources into HDFS using Kafka.
- Hands-on experience joining raw data with reference data using Pig scripting.
- Used different file formats such as text files, SequenceFiles and Avro.
- Implemented Spark RDD transformations and actions in Scala to migrate MapReduce algorithms (a minimal sketch follows this list).
- Created various parser programs in Scala to extract structured data from unstructured sources.
- Experience using ZooKeeper to coordinate servers in clusters, maintain data consistency and monitor services.
- Used Oozie workflow engine to run multiple Hive and Pig jobs.
- Used Tableau to generate dashboards for product trend analysis.
- Worked with clients on requirements based on their business needs.
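A minimal sketch of migrating a classic MapReduce aggregation to Spark RDD transformations and actions, as mentioned above; the paths are placeholders and the word count stands in for the actual business logic.

```scala
import org.apache.spark.{SparkConf, SparkContext}

object MapReduceToRdd {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("MapReduceToRdd").setMaster("local[*]"))

    // The mapper becomes flatMap/map, the shuffle-plus-reducer becomes reduceByKey.
    val counts = sc.textFile("hdfs:///data/input")   // placeholder input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)                            // transformation, evaluated lazily

    counts.saveAsTextFile("hdfs:///data/output")     // action that triggers the job
    sc.stop()
  }
}
```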
Technologies: Hadoop, HDFS, MR 2.5.x, Hive 0.14.0, Pig, Sqoop 1.4.4, HBase 0.98, Oozie 4.0.1, MySQL, PuTTY, Spark v1.4, Scala, Flume 1.4.0, Impala, Zookeeper 3.4.6, Linux and shell scripting, Tableau 9.0
Confidential
Software Engineer
Responsibilities:
- Worked closely with business analysts to convert business requirements into technical requirements and prepared low- and high-level documentation.
- Hands-on experience using log files and copying them into HDFS using Flume.
- Hands-on experience writing MapReduce code to convert unstructured data into structured data and to insert data into HBase from HDFS (a minimal parsing sketch follows this list).
- Experience creating integration between Hive and HBase.
- Involved in creating Hive tables, loading them with data and writing Hive queries that run internally as MapReduce jobs.
- Handled importing data from various data sources, performed transformations using Hive and MapReduce, and loaded the data into HDFS.
- Involved in loading data from Linux file system to HDFS.
- Hands-on experience joining raw data with reference data using Pig scripting.
- Hands-on experience extracting data from different databases and copying it into HDFS using Sqoop.
- Wrote a Sqoop incremental import job to move new/updated data from the database to HDFS.
- Created Tableau reports and dashboards for business users to show the number of policies falling under a category of products.
- Involved in the review process and, as a senior member of the team, helped new team members get involved in project assignments.
- Developed custom MapReduce programs to analyze data and used Pig Latin to clean unwanted data.
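A minimal Scala sketch of the "unstructured to structured" step referenced above, limited to parsing a Common Log Format line into typed fields; the record layout is an assumption, and the MapReduce/HBase plumbing is omitted.

```scala
// Hypothetical structured record; the real layout depended on the source feed.
case class LogRecord(host: String, timestamp: String, request: String, status: Int)

object LogParser {
  // Common Log Format, e.g.:
  // 127.0.0.1 - - [10/Oct/2014:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
  private val LogLine = """^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) \S+$""".r

  def parse(line: String): Option[LogRecord] = line match {
    case LogLine(host, ts, request, status) => Some(LogRecord(host, ts, request, status.toInt))
    case _                                  => None // unparseable lines are dropped
  }

  def main(args: Array[String]): Unit = {
    val sample = """127.0.0.1 - - [10/Oct/2014:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326"""
    println(parse(sample)) // prints the parsed Some(LogRecord(...)) for the sample line
  }
}
```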
Technologies: Hadoop, HDFS, MR 2.3.x, Hive 0.12.0, Pig, Sqoop 1.4.1, HBase 0.98.0, DB2 v8, PuTTY, Zookeeper 3.4.5, UNIX and shell scripting, Tableau 8.0
Confidential
Software Engineer
Responsibilities:
- Involved in analyzing and understanding the requirements and functional specifications from the client.
- Prepared technical specifications based on the existing functionality and requirements. Care was taken to re-use most of the existing components/modules.
- Involved in estimating tasks once the high-level solution was approved by the client.
- Performed analysis of requirement changes to identify the list of affected components.
- Implemented services using Core Java.
- Developed and deployed UI-layer logic for the sites using JSP.
- Business logic was implemented using COBOL and PL1 language. DB2 was used for data storage and retrieval.
- Worked on screen changes using the Gemini and Netsyr tools in the EDI and DLIPS web applications; Gemini and Netsyr are built on top of HTML and JavaScript.
- Worked on CICS screen maintenance for implementing business changes.
- Performed debugging using IBM Debugger tool for understanding and fixing the bugs.
- System testing was performed using QC tool to keep track of the defects.
- Change configuration management tool was used for version control.
- FTP jobs were used for sending reports to the client mailbox.
- Involved in the review process and, as a senior member of the team, helped new team members get involved in project assignments.
Technologies: Java, Eclipse, Web Services, DB2, COBOL, PL1, JCL, CICS
Confidential
System Engineer
Responsibilities:
- Involved in analyzing functional specifications, identifying the list of affected programs and ensuring homogeneous implementation.
- Prepared technical specifications based on the existing functionality and requirements.
- Developed programs and jobs using JCL, COBOL, DB2, CICS, and REXX.
- Used the Xpediter tool for debugging and understanding program flow.
- Created detailed technical design specifications for enhancing the batch programs; care was taken to re-use most of the existing components/modules.
- Responsible for correct versioning of code by creating and moving the package using Endevor.
- Involved in preparing test plans for unit and system testing.
- Followed coding standards to ensure code consistency.
Technologies: COBOL, REXX, JCL, DB2, CICS
Confidential, NY
Mainframe Programmer
Responsibilities:
- Coordinated with management to deliver tasks on time and with good quality.
- Involved in development and enhancement of applications using COBOL, JCL, VSAM and DB2.
- Involved in production support activities, ensuring the batch cycle completed on time.
- Fixed issues within the time specified in the Service Level Agreement (SLA).
- Involved in fixing abends such as space abends, file contention errors, VSAM space abends and DB2 abends.
- Involved in monitoring the batch cycles.
- As a value-add, created tools using REXX to make routine tasks easier and faster.
Technologies: COBOL, REXX, JCL, DB2, VSAM
Confidential, Bridgewater, NJ
Mainframe Programmer
Responsibilities:
- Involved in the maintenance of the Confidential UDS application using JCL, COBOL, and DB2.
- Involved in initiating and monitoring batch trial jobs using the OPC scheduler and handled ad-hoc requests.
Technologies: COBOL, JCL, DB2