Big Data Architect Resume

EXPERIENCE SUMMARY:

  • 11+ years of experience designing, developing, deploying, and supporting large-scale distributed systems, including 4+ years in the Hadoop ecosystem and Big Data analytics.
  • Good experience with the Apache Hadoop framework, Spark, MapReduce, Pig, Hive, Sqoop, and Cloudera's Hadoop distribution.
  • Developed applications using the Hadoop ecosystem (Hive, Pig, Spark, Oozie).
  • Hands-on experience writing MapReduce jobs in Hive and Pig.
  • Experience developing Hive UDFs, loading data into Hive partitions, and creating buckets in Hive.
  • Experience importing and exporting data between HDFS and relational database systems using Sqoop (a minimal sketch appears after this list).
  • Experience implementing Oozie workflows and coordinators for Hadoop batch jobs.
  • Comfortable working with the Cloudera Hadoop User Experience (Hue) browser.
  • Testing experience includes extensive work on web-based banking applications (retail cards) and point-of-sale (retail) applications.
  • Good exposure to testing concepts and automation tools.
  • Strong experience with Agile and Scrum software development methodologies.
  • Involved in incident management and change management processes.
  • Worked on various production issues and provided immediate hotfixes/break fixes.
  • Experience with automation testing tools Quick Test Professional (QTP) 9.1, Rational Functional Tester (RFT) 7.0, and Selenium; test management tools Quality Center 9.0 and SOAP UI 4.0.1.
  • Experience writing complex PL/SQL procedures to purge expired data in production as part of support activities.
  • Provided effective support to ensure critical application availability, SLA adherence, and other routine support activities for the Confidential Mexico IPOS application.
  • Good knowledge of Liferay Portal workflows.
  • A highly focused individual with the capacity to work efficiently and maintain high levels of productivity.
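
A minimal sketch of the Sqoop import pattern referenced above, wrapped in Python so it can be called from batch scripts. The JDBC URL, credentials file, table, and HDFS target directory are hypothetical placeholders, not details from any engagement below.

    import subprocess

    # Hypothetical source connection; real jobs pointed at production RDBMS tables.
    SQOOP_IMPORT = [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/sales",  # hypothetical JDBC URL
        "--username", "etl_user",                       # hypothetical account
        "--password-file", "/user/etl/.db_password",    # keeps the password off the command line
        "--table", "orders",                            # hypothetical source table
        "--target-dir", "/data/raw/orders",             # HDFS landing directory
        "--num-mappers", "4",                           # parallel import tasks
    ]

    def run_import():
        """Run the Sqoop import; raise if Sqoop exits non-zero."""
        subprocess.run(SQOOP_IMPORT, check=True)

    if __name__ == "__main__":
        run_import()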

TECHNOLOGY COMPETENCIES:

Hadoop: HDFS, Spark, MapReduce, Pig, Hive, HBase, Sqoop, Oozie, Flume

Test/Task Management Tools: Quality Center 9.0, JIRA, Rally

Version Control Tools: CVS, GitHub, PVCS

Coding: Core Java, Shell Script, Python, Scala, Excel Macros

Databases: Oracle, MySQL, MS Access, DB2, Teradata

Operating System: MS-Windows, UNIX, Linux

Testing Tools: QuickTest Professional, RFT, Selenium, SOAP UI

Defect Tracking Tools: JIRA, ClearQuest, ServiceNow, QC

WORK EXPERIENCE:

Confidential

Big Data Architect

Responsibilities:

  • Involved in creating Hive queries and loading data into target tables.
  • Developed shell scripts to create jobs, handle audit checks, and log messages.
  • Implemented partitioning, dynamic partitions, and buckets in Hive to improve performance and organize data logically.
  • Loaded data into Spark RDDs and performed advanced procedures such as text analytics using Spark's in-memory computation capabilities (see the sketch after this list).
  • Developed scripts to handle purging tasks and to load Hive and Teradata (via TPT) tables with hourly jobs through the UC4 scheduler.
  • Owned the overall design and deliverables for several modules in the loss system.
  • Hadoop/Teradata optimizations: small-file reduction, compression, and workload isolation.
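
A minimal PySpark sketch of the in-memory text-analytics step described above: tokenize text files from HDFS, count term frequencies, and pull the top terms. The input path is a hypothetical placeholder.

    from pyspark import SparkConf, SparkContext

    # Hypothetical input path; the real jobs read production text/log data.
    INPUT_PATH = "hdfs:///data/raw/notes/*.txt"

    sc = SparkContext(conf=SparkConf().setAppName("text-analytics-sketch"))

    # Tokenize, normalize, and count term frequencies entirely in memory.
    term_counts = (
        sc.textFile(INPUT_PATH)
          .flatMap(lambda line: line.lower().split())
          .filter(lambda word: word.isalpha())      # drop numeric/punctuation tokens
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b)
    )

    term_counts.cache()  # reused by the two actions below

    print("distinct terms:", term_counts.count())
    for word, n in term_counts.takeOrdered(10, key=lambda kv: -kv[1]):
        print(word, n)

    sc.stop()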

Tools:

  • Apache Spark
  • Hadoop (Hive, Pig, Sqoop)
  • Shell script
  • Python
  • Git
  • ServiceNow

Confidential

Big Data Architect

Responsibilities:

  • Involved in creating Hive queries and loading data into target tables.
  • Developed shell scripts to handle audit checks and log messages.
  • Involved in developing Pig scripts and creating job scripts for them.
  • Implemented partitioning, dynamic partitions, and buckets in Hive to improve performance and organize data logically.
  • Loaded data into Spark RDDs and performed advanced procedures such as text analytics using Spark's in-memory computation capabilities.
  • Developed Python scripts using both DataFrames/SQL and RDDs in Spark for data aggregation (see the sketch after this list).
  • Developed Spark programs using the Scala APIs for data processing.
  • Involved in testing the end-to-end flow for OPDDPE.
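
A minimal sketch of the DataFrame/SQL aggregation work described above, showing the same daily rollup expressed both ways. The database, table, and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("aggregation-sketch").enableHiveSupport().getOrCreate()

    # Hypothetical Hive source table; the real jobs aggregated production data.
    txns = spark.table("prod_db.transactions")

    # DataFrame-style aggregation: daily totals and counts per store.
    daily_df = (
        txns.groupBy("store_id", "txn_date")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("txn_count"))
    )

    # The same aggregation expressed in SQL.
    txns.createOrReplaceTempView("txns")
    daily_sql = spark.sql("""
        SELECT store_id, txn_date,
               SUM(amount) AS total_amount,
               COUNT(*)    AS txn_count
        FROM txns
        GROUP BY store_id, txn_date
    """)

    daily_df.write.mode("overwrite").saveAsTable("prod_db.daily_store_totals")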

Tools:

  • Apache Spark
  • Hadoop (Hive, Pig, Sqoop)
  • Shell script
  • Python
  • Kafka
  • Git
  • BitBucket
  • ITSM

Confidential

Big Data Architect

Responsibilities:

  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Implemented partitioning, dynamic partitions, and buckets in Hive.
  • Developed code for importing and exporting data into HDFS and Hive using Sqoop.
  • Implemented a POC to migrate MapReduce jobs to Spark RDD transformations using Scala/Python (see the sketch after this list).
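
A minimal sketch of the POC direction described above: the classic MapReduce word count re-expressed as Spark RDD transformations in Python. The paths are hypothetical placeholders.

    from pyspark import SparkContext

    sc = SparkContext(appName="mr-to-rdd-poc")

    lines = sc.textFile("hdfs:///data/poc/input")

    counts = (
        lines.flatMap(lambda line: line.split())   # mapper: emit one token per word
             .map(lambda word: (word, 1))          # mapper: emit (word, 1) pairs
             .reduceByKey(lambda a, b: a + b)      # reducer: sum counts per key
    )

    counts.saveAsTextFile("hdfs:///data/poc/output")
    sc.stop()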

Tools:

  • Apache Spark
  • Hadoop (Hive, Pig, Sqoop, Oozie)
  • Shell script
  • Python
  • Git
  • BitBucket

Confidential

Big Data Architect

Responsibilities:

  • Extensive experience writing HDFS commands.
  • Developed complex Hive queries.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that invoke and run MapReduce jobs in the backend.
  • Worked extensively with Sqoop to import and export data between HDFS and DB2 database systems.
  • Wrote Hive and Pig scripts per requirements.
  • Designed and created managed/external tables in Hive per requirements.
  • Involved in writing UDFs in Hive (see the sketch after this list).
  • Responsible for turnover and promoting code to QA; created CRs and CRQs for releases.
  • Successfully created and implemented complex code changes.
  • Involved in production support and helped provide uninterrupted business for the Confidential Checkout team.
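
Hive UDFs on an engagement like this would typically be written in Java; for consistency with the other sketches, the example below shows the external-table DDL and an equivalent UDF registered through Spark SQL in Python. Table names, columns, and paths are hypothetical.

    from pyspark.sql import SparkSession
    from pyspark.sql.types import StringType

    spark = (SparkSession.builder
             .appName("hive-ddl-udf-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # External table: Hive owns only the metadata; the files stay at the given
    # HDFS location and survive DROP TABLE. A managed table would omit
    # EXTERNAL and LOCATION.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS checkout_raw (
            txn_id STRING,
            sku    STRING,
            amount DOUBLE
        )
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        LOCATION 'hdfs:///data/external/checkout_raw'
    """)

    # A small masking UDF registered for use in SQL, analogous to a Hive UDF.
    def mask_txn_id(txn_id):
        return ("***" + txn_id[-4:]) if txn_id else None

    spark.udf.register("mask_txn_id", mask_txn_id, StringType())

    spark.sql("""
        SELECT mask_txn_id(txn_id) AS masked_id, sku, amount
        FROM checkout_raw
    """).show(5)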

Tools:

  • Hadoop (Hive, Pig, Sqoop)
  • Shell script
  • ClearQuest
  • Java

Confidential, Minneapolis

Big Data Architect

Responsibilities:

  • Worked in an Agile environment to validate continuous enhancement of the application across multiple sprints from a testing standpoint.
  • Led a team of 8, responsible for sprint stories from requirements gathering through development, testing, and production.
  • Performed testing on tablet and mobile devices.
  • Performed business, functional, and end-to-end testing.
  • Built the automation framework and automated functional scripts using Selenium (see the sketch after this list).
  • Created and maintained test collateral, including test execution status and defect reports.
  • Performed integration testing with third-party interfaces for data setup and validation in downstream systems.
  • Involved in defect tracking and analysis.
  • Acted as a subject matter expert for overall application testing.
  • Ensured the team's deliverables met product requirements and quality standards.
  • Responsible for support during project go-live.
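
The automation framework on this project was Java-based; for consistency with the other sketches, the example below shows the same Selenium pattern (drive a login flow, then assert on the result with an explicit wait) in Python. The URL and element locators are hypothetical.

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    driver = webdriver.Chrome()
    try:
        # Hypothetical application URL and locators.
        driver.get("https://example.com/login")

        driver.find_element(By.ID, "username").send_keys("test_user")
        driver.find_element(By.ID, "password").send_keys("test_pass")
        driver.find_element(By.ID, "login-button").click()

        # Explicit wait so the assertion does not race the page load.
        banner = WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "welcome-banner"))
        )
        assert "Welcome" in banner.text, "login flow failed functional check"
    finally:
        driver.quit()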

Tools:

  • Selenium
  • Java
  • Jenkins
  • Confidential
  • JIRA
  • Agile Methodology

Confidential, Minneapolis

Big Data Architect

Responsibilities:

  • Worked in an Agile environment to validate continuous enhancement of the application across multiple sprints from a testing standpoint.
  • Performed business, functional, and end-to-end testing.
  • Built the automation framework and automated functional scripts.
  • Created and maintained test collateral, including test execution status and defect reports.
  • Performed integration testing with third-party interfaces for data setup and validation in downstream systems.
  • Involved in defect tracking and analysis.
  • Acted as a subject matter expert for overall application testing.
  • Ensured the team's deliverables met product requirements and quality standards.
  • Responsible for support during project go-live.

Tools:

  • Liferay product
  • Java
  • Selenium
  • Rally
  • SOAP UI
  • Excel Macro

Confidential, Minneapolis

Big Data Architect

Responsibilities:

  • Handled live production issues, ensuring the SLAs (Service Level Agreements) set by the business were met.
  • Performed root cause analysis for production issues.
  • Performed development, unit testing, and deployment for change requests in non-prod environments.
  • Promptly escalated application, server, and database errors; created and executed change requests for production deployments.
  • Involved in monitoring activity for new store openings; worked with third parties on data and transaction validations.
  • Involved in requirements gathering.
  • Prepared and maintained automation scripts for different language packages.
  • Created performance runs with the functional automation tool (RFT).
  • Worked in an Agile environment to validate continuous enhancement of the application across multiple sprints from a testing standpoint.
  • Performed business, functional, and end-to-end testing.
  • Created and maintained test collateral, including test execution status and defect reports.
  • Performed integration testing with third-party interfaces for data setup and validation in downstream systems.
  • Involved in defect tracking and analysis.
  • Acted as a subject matter expert for overall application testing.
  • Ensured the team's deliverables met product requirements and quality standards.
  • Responsible for support during project go-live.

Tools:

  • ORPOS (Oracle Point of Sale)
  • Java
  • Rational Functional Tester
  • Excel Macro

Confidential, Seattle

Big Data Architect

Responsibilities:

  • Understood the workflow of the application.
  • Categorized the test cases to automate.
  • Prepared scripts for the test cases.
  • Prepared test data for the automated test cases.
  • Executed the scripts and sent the results to the onsite lead.
  • Generated regression test results in HTML format (see the sketch after this list).
  • Prepared the Daily Status Report (DSR).
  • Prepared Excel macros for test data preparation.
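
The regression results here were produced through QTP/VBScript; a minimal Python sketch of the same idea, rendering pass/fail results to an HTML report, is shown below with made-up result data.

    from datetime import date

    # Made-up execution results standing in for a real regression run.
    results = [
        ("TC_001_login", "PASS"),
        ("TC_002_checkout", "FAIL"),
        ("TC_003_refund", "PASS"),
    ]

    rows = "\n".join(
        f"<tr><td>{name}</td><td>{status}</td></tr>" for name, status in results
    )
    html = f"""<html><body>
    <h1>Regression Results - {date.today()}</h1>
    <table border="1">
    <tr><th>Test Case</th><th>Status</th></tr>
    {rows}
    </table>
    </body></html>"""

    with open("regression_results.html", "w") as f:
        f.write(html)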

Tools:

  • VB Script
  • Quick Test Professional
  • Excel Macro
