
DevOps Engineer Resume


Sacramento, CA

SUMMARY

  • Almost 8 years of experience in ETL and Big Data, including software development, operational support, design, and implementation, with a major focus on Data Warehousing and Business Intelligence.
  • Good communication and strong interpersonal skills; able to deal effectively with a broad range of contacts, from technical staff to clients to management.
  • Worked across the full Software Development Life Cycle (SDLC).
  • Quick to master new concepts and capable of working in a group as well as independently. Good oral and written communication skills; able to work to tight schedules and on several applications at the same time.
  • Worked extensively in a Hadoop, Hive, Pig, and Sqoop dev-ops role for more than 3 years.
  • Implemented custom scripts for monitoring Hadoop/Hive availability from the user end, for transferring data from one cluster to another, and for checking and creating Hive table partitions when they do not already exist. Created a post-maintenance validation script to check that all services are up and running.
  • Moved data to and from the EDW database using Sqoop and performed field-level verification of the data in Hive against EDW (see the Sqoop sketch after this list).
  • Monitored regular production Hadoop jobs using Cloudera Manager and the JobTracker/TaskTracker UIs, taking necessary action in case of failures or other issues.
  • Fixed data in production Hive tables where two fields had been corrupted with null values: created a temporary table containing the corrupted fields, populated correctly from other sources along with some key fields, then joined it with the original table to correct the data.
  • Experience using the Ab Initio suite to design and build scalable architectures addressing parallelism, data integration, ETL, data repositories, and analytics; developed and deployed well-tuned Ab Initio graphs and heavily parallel, CPU-bound ETL jobs in a dynamic, high-volume environment.
  • Worked extensively on performance tuning of ETL (Ab Initio graphs), UNIX shell scripts, and database instances.
  • Worked with the EME data store for version control, dependency analysis, production support, metadata migration, security management, etc., and used EME common projects, sandboxes, and the standard environment.
  • Hands-on experience with end-to-end data warehousing ETL routines, including writing custom scripts, stress testing, data mining, and data quality processes.
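
The Sqoop retrieval and verification described above was script-driven. A minimal bash sketch is shown below; the JDBC URL, credentials, and table names (EDWDB, SALES.TRANSACTIONS, staging.transactions) are illustrative assumptions rather than details from the actual environment.

#!/bin/bash
# Hypothetical sketch: import an EDW table into Hive with Sqoop, then run a
# basic row-count check as part of field-level verification. All connection
# details and object names below are placeholders.
set -euo pipefail

JDBC_URL="jdbc:db2://edw-host:50000/EDWDB"   # placeholder connection string
EDW_USER="etl_batch"                         # placeholder account
SRC_TABLE="SALES.TRANSACTIONS"               # placeholder EDW table
HIVE_TABLE="staging.transactions"            # placeholder Hive target
# EDW_PASS is assumed to be exported by the calling environment.

# Import the EDW table into Hive using four parallel mappers.
sqoop import \
  --connect "$JDBC_URL" --username "$EDW_USER" --password "$EDW_PASS" \
  --table "$SRC_TABLE" \
  --hive-import --hive-table "$HIVE_TABLE" \
  --num-mappers 4

# Count the rows that landed in Hive so the figure can be compared against the
# EDW side; a fuller check would also compare sums or checksums of key columns.
hive -S -e "SELECT COUNT(*) FROM ${HIVE_TABLE};"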

PROFESSIONAL EXPERIENCE

Confidential, Sacramento, CA

DevOps Engineer

Responsibilities:

  • System operations: monitoring failures, providing resolution within the stipulated time, and taking part in maintenance activities. Worked as primary on-call every alternate weekend, including full 12-hour support on weekends.
  • Resolved user incidents and data issues; user incidents are usually queries raised by users seeking clarification. Worked on a number of critical-priority incidents with no client escalations and no reopening of resolved incidents.
  • Delivered production enhancements to resolve recurring production issues and worked on many development/enhancement activities.
  • Worked extensively on Ab Initio and UNIX shell scripting.
  • Implemented monitoring scripts.
  • Fixed production issues by modifying graphs.
  • Enhanced performance by introducing multifile systems (MFS) instead of accessing a large number of small files, tuning max-core settings, changing static table access into lookups, and similar optimizations.
  • Developed graphs from scratch for production deployment, e.g. a graph comparing data counts between EDW and Hadoop.
  • Took care of maintenance activities such as controlling service graphs.
  • Major achievements in Hadoop include:
  • Data fixing in production Hive tables, where two fields had been corrupted with null values: created a temporary table containing the corrupted fields, populated correctly from other sources along with some key fields, then joined it with the original table to correct the data (a minimal HiveQL sketch follows this list).
  • Implemented a script for monitoring Hadoop/Hive availability from the user end (see the availability-check sketch after this list).
  • Implemented a script for transferring data from one cluster to another, and for checking and creating Hive table partitions when they do not already exist.
  • Retrieved data to and from the EDW database using Sqoop and performed field-level verification of the data in Hive.
  • Wrote a post-maintenance validation script to check that all services are up and running.
  • Monitored regular production Hadoop jobs and took necessary action in case of failure.
  • Worked on exporting the analyzed data to the existing relational databases using Sqoop, making it available for visualization and report generation by the BI team.
  • Created Pig Latin scripts to sort, group, join, and filter the enterprise-wide data (see the Pig sketch after this list).
  • Analyzed large data sets by running Hive queries and Pig scripts.
  • Involved in creating a workflow engine to run multiple Hive and Pig jobs.
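
A rough HiveQL sketch of the data fix described above is shown here. It is a hypothetical reconstruction: the table and column names (orders, orders_source, customer_id, order_status) are placeholders, not the production schema.

#!/bin/bash
# Hypothetical sketch of the null-column repair: rebuild the two corrupted
# fields in a temporary table from an alternate source, then overwrite the
# original table, taking the corrected values wherever the originals are NULL.
hive -e "
CREATE TABLE tmp_fix AS
SELECT o.order_id,
       s.customer_id  AS customer_id_fixed,
       s.order_status AS order_status_fixed
FROM   orders o
JOIN   orders_source s ON o.order_id = s.order_id;

INSERT OVERWRITE TABLE orders
SELECT o.order_id,
       COALESCE(o.customer_id,  f.customer_id_fixed),
       COALESCE(o.order_status, f.order_status_fixed),
       o.order_amount,
       o.order_date
FROM   orders o
LEFT JOIN tmp_fix f ON o.order_id = f.order_id;

DROP TABLE tmp_fix;
"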
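
The availability-monitoring and partition-creation scripts mentioned above were shell-based; a minimal sketch follows. The database, table, partition column, and HDFS path (staging, web_logs, ds, /data/web_logs) are assumed names for illustration.

#!/bin/bash
# Hypothetical sketch of a user-side availability probe plus an idempotent
# "create today's partition if it is missing" step. Names are placeholders.
set -euo pipefail

DT=$(date +%Y-%m-%d)
HIVE_DB="staging"
HIVE_TABLE="web_logs"
PART_LOCATION="/data/web_logs/ds=${DT}"

# 1. Availability probe: can we reach HDFS and get a response from Hive?
if ! hdfs dfs -ls / >/dev/null 2>&1; then
    echo "ALERT: HDFS not reachable from the user end" >&2
    exit 1
fi
if ! hive -S -e "SHOW DATABASES;" >/dev/null 2>&1; then
    echo "ALERT: Hive not responding" >&2
    exit 1
fi

# 2. Add today's partition only if it does not already exist.
hive -S -e "USE ${HIVE_DB}; ALTER TABLE ${HIVE_TABLE} ADD IF NOT EXISTS PARTITION (ds='${DT}') LOCATION '${PART_LOCATION}';"

echo "Availability check passed; partition ds=${DT} present on ${HIVE_DB}.${HIVE_TABLE}"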
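
The Pig Latin bullet above covers sorting, grouping, joining, and filtering; a small illustrative script is below. The input paths, field names, and the 100.0 threshold are hypothetical.

#!/bin/bash
# Hypothetical Pig Latin sketch: filter, join, group, and sort enterprise data.
# Paths and field names are placeholders.
pig -e "
orders    = LOAD '/data/orders'    USING PigStorage(',') AS (order_id:chararray, cust_id:chararray, amount:double);
customers = LOAD '/data/customers' USING PigStorage(',') AS (cust_id:chararray, segment:chararray);

big_orders = FILTER orders BY amount > 100.0;                      -- filter
joined     = JOIN big_orders BY cust_id, customers BY cust_id;     -- join
projected  = FOREACH joined GENERATE customers::segment AS segment, big_orders::amount AS amount;
grouped    = GROUP projected BY segment;                           -- group
summary    = FOREACH grouped GENERATE group AS segment, SUM(projected.amount) AS total_amount;
sorted     = ORDER summary BY total_amount DESC;                   -- sort

STORE sorted INTO '/data/out/segment_summary' USING PigStorage(',');
"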

Environment: Ab Initio CO>OS 2.12, GDE 1.15, IBM Mainframe, UNIX bash shell scripting, DB2, Cloudera Hadoop, Hive, Pig, Sqoop

Confidential, Alpharetta, GA

Hadoop Developer

Responsibilities:

  • Responsible for architecting Hadoop clusters with CDH4 on CentOS and managing them with Cloudera Manager.
  • Involved in initiating and successfully completing a proof of concept on Flume for pre-processing, offering increased reliability and easier scalability compared to traditional MSMQ (see the Flume sketch after this list).
  • Involved in loading data from the Linux file system into HDFS.
  • Imported and exported data into HDFS and Hive using Flume.
  • Performed end-to-end performance tuning of Hadoop clusters and Hadoop MapReduce routines against very large data sets.
  • Developed Pig UDFs to pre-process the data for analysis.
  • Monitored Hadoop cluster job performance, performed capacity planning, and managed nodes on the Hadoop cluster.
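
The Flume proof of concept described above can be sketched as a simple spooling-directory source feeding an HDFS sink. The agent name, directories, and HDFS path below are assumptions for illustration.

#!/bin/bash
# Hypothetical sketch: generate a minimal Flume agent configuration
# (spooldir source -> memory channel -> HDFS sink) and start the agent.
cat > /tmp/poc-agent.conf <<'EOF'
poc.sources  = src1
poc.channels = ch1
poc.sinks    = sink1

# Watch a landing directory for new files to ingest.
poc.sources.src1.type     = spooldir
poc.sources.src1.spoolDir = /var/log/incoming
poc.sources.src1.channels = ch1

poc.channels.ch1.type     = memory
poc.channels.ch1.capacity = 10000

# Write ingested events into date-partitioned HDFS directories.
poc.sinks.sink1.type                   = hdfs
poc.sinks.sink1.hdfs.path              = /data/raw/%Y-%m-%d
poc.sinks.sink1.hdfs.fileType          = DataStream
poc.sinks.sink1.hdfs.useLocalTimeStamp = true
poc.sinks.sink1.channel                = ch1
EOF

flume-ng agent --name poc --conf /etc/flume-ng/conf --conf-file /tmp/poc-agent.conf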

Environment: Cloudera Hadoop, Hive, Pig, Flume, MapReduce, Unix

Confidential, Deerfield, IL

Ab Initio Developer

Responsibilities:

  • Responsible for creating multifile systems (MFS), which give the user the ability to centrally control distributed data files and provide the scalability and access patterns that parallel applications require.
  • Designed and deployed well-tuned Ab Initio graphs (generic and custom) for ODS and DSS instances in both Windows and UNIX environments.
  • Developed end-to-end solutions for integrating and processing data throughout the enterprise with a high level of scalability.
  • Worked with an ETL framework to implement the various Ab Initio projects, providing a best-practice approach to deploying and tuning the ETL process.
  • Worked on improving the performance of Ab Initio graphs using various techniques, such as using lookups instead of joins.
  • Created low-level ETL design specification documents from business requirements.
  • Used Ab Initio for error handling by attaching error and reject files to each transformation and making provision for capturing and analyzing the messages and data separately.
  • The extract portion of the system was designed to selectively filter a subset of production processing data as specified by the test group users.
  • Involved in the migration from Oracle 9i to Teradata V2R4; worked on gap analysis, capacity planning, data validation, and implementing the testing requirements.
  • Assisted legacy data team members in translating business requirements into mapping specifications.
  • Used program, dataset, and graph components to build the Ab Initio graphs.
  • Developed the necessary technical documentation per ETL standards to facilitate easy understanding.
  • Coded well-tuned SQL, PL/SQL, SQL*Loader, and UNIX shell scripts for a high-volume data warehouse instance (see the SQL*Loader sketch after this list).
  • Established and maintained multiple databases to support development, test, production, and historical/archival systems.
  • Created snapshots to minimize data processing when database network traffic is heavy.
  • Created and populated database tables in Oracle and Teradata and wrote triggers and stored procedures to move business logic to the server side.
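
The SQL*Loader and shell scripting work above can be illustrated with a short sketch; the control file, schema, table, and connect string (stg_sales, DWPROD) are placeholders, not the actual warehouse objects.

#!/bin/bash
# Hypothetical sketch of a high-volume SQL*Loader run driven from a shell script.
# Object names, file paths, and the connect string are placeholders.
cat > /tmp/stg_sales.ctl <<'EOF'
LOAD DATA
INFILE '/data/extracts/sales_feed.dat'
APPEND INTO TABLE stg_sales
FIELDS TERMINATED BY '|'
(sale_id, cust_id, sale_amount, sale_date DATE 'YYYY-MM-DD')
EOF

# Direct-path load for throughput; credentials would normally come from a
# secured source rather than being hard-coded.
sqlldr userid="$ORA_USER/$ORA_PASS@DWPROD" \
       control=/tmp/stg_sales.ctl \
       log=/tmp/stg_sales.log bad=/tmp/stg_sales.bad \
       direct=true errors=1000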

Environment: Ab Initio CO>OS 2.12, GDE 1.13, Sun Solaris, IBM AIX 4.3.3, Oracle 8i/9i, Teradata V2Rx, PL/SQL

Confidential

Ab Initio Developer

Responsibilities:

  • Analyzed the business requirement specifications (BRD) and worked with business users and business analysts to streamline the requirements.
  • Worked with business analysts on preparation of the functional requirements specification (FRS) and analyzed the data requirements.
  • Developed Ab Initio graphs after analyzing the requirements.
  • Worked on performance improvements to existing Ab Initio graphs.
  • Wrote UNIX shell scripts to automate job scheduling (see the wrapper sketch after this list).
  • Wrote PL/SQL stored procedures and did performance tuning of complex queries.
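
The job-scheduling automation mentioned above typically took the form of wrapper scripts invoked by the scheduler (Autosys in this environment). A minimal sketch is below; the job name, paths, and the wrapped command are hypothetical.

#!/bin/bash
# Hypothetical sketch of a scheduler-invoked wrapper: run an ETL step, log the
# outcome, and exit non-zero on failure so the scheduler marks the job failed.
set -uo pipefail

JOB_NAME="daily_customer_load"           # placeholder job name
LOG_DIR="/var/log/etl"
LOG_FILE="${LOG_DIR}/${JOB_NAME}_$(date +%Y%m%d_%H%M%S).log"
mkdir -p "$LOG_DIR"

echo "$(date '+%F %T') starting ${JOB_NAME}" | tee -a "$LOG_FILE"

# The wrapped command is a placeholder for the actual deployed ETL script.
/opt/etl/bin/run_daily_customer_load.ksh >> "$LOG_FILE" 2>&1
RC=$?

if [ "$RC" -ne 0 ]; then
    echo "$(date '+%F %T') ${JOB_NAME} FAILED with rc=${RC}" | tee -a "$LOG_FILE"
    exit "$RC"
fi
echo "$(date '+%F %T') ${JOB_NAME} completed successfully" | tee -a "$LOG_FILE"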

Environment: Ab Initio, UNIX bash shell scripting, SQL, Autosys

Confidential

Ab Initio Developer

Responsibilities:

  • Responsible for cleansing the data from source systems and reporting data quality levels on a system-by-system basis using Ab Initio.
  • Developed UNIX shell scripts for file manipulation and automation of batch jobs (see the archiving sketch after this list).
  • Converted existing COBOL scripts into Ab Initio graphs.
  • Created several test plans and test scenarios for stress testing and performance tuning of Ab Initio graphs and sessions, and improved performance by calculating cache requirements for transformations/components and optimizing the components.
  • Worked on ten terabytes of data in a multifile system and used partition components to fine-tune the graphs.
  • Replicated operational tables into staging tables, then transformed and loaded data into warehouse tables using Ab Initio GDE.
  • Responsible for automating the ETL process through scheduling and exception-handling routines.
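
A small sketch of the file-manipulation side of the batch automation is below; the landing and archive directories and the 30-day retention window are assumptions for illustration.

#!/bin/bash
# Hypothetical sketch: archive processed extract files into a dated directory,
# compress them, and purge archives older than 30 days. Paths are placeholders.
set -euo pipefail

LANDING="/data/landing"
ARCHIVE="/data/archive/$(date +%Y%m%d)"
mkdir -p "$ARCHIVE"

# Move and compress every processed extract file.
for f in "$LANDING"/*.dat; do
    [ -e "$f" ] || continue       # nothing to do if no files landed
    mv "$f" "$ARCHIVE/"
    gzip "$ARCHIVE/$(basename "$f")"
done

# Purge compressed archives older than 30 days.
find /data/archive -type f -name '*.dat.gz' -mtime +30 -delete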

Environment: Ab Initio, GDE 1.15, IBM Mainframe, JCL, COBOL, UNIX bash shell scripting, DB2

Confidential

Java Developer

Responsibilities:

  • Underwent training on Java and Advanced Java.
  • Developed prototype test screens in HTML and JavaScript.
  • Involved in developing JSPs for client data presentation and client-side data validation within the forms.
  • Used the Java Collections framework to transfer objects between the different layers of the application.
  • Designed the application architecture and data models.
  • Interfaced with product managers to determine key requirements.

Environment:  Java, JDBC, Servlets, JSP, XML, Design Patterns, CSS, HTML, JavaScript, Tomcat
