
Senior Developer/Lead Resume


Phoenix, AZ

SUMMARY

  • 10 years of experience handling Data Warehousing and Business Intelligence projects in the Banking, Finance, and Credit Card industries, including 3+ years of application development using Hadoop and related Big Data technologies.
  • Extensive experience in data analytics supporting marketing campaigns.
  • Good knowledge of Hadoop architecture and its components, such as HDFS, MapReduce, JobTracker, TaskTracker, NameNode, and DataNode.
  • Extensive hands-on Hadoop experience in data storage, query writing, and data processing and analysis.
  • Strong understanding of NoSQL databases such as HBase.
  • Experience working with Apache Sqoop for relational data ingestion into HDFS.
  • Experience importing data into HBase using the HBase shell.
  • Experience in Data Warehousing applications, responsible for the Extraction, Transformation, and Loading (ETL) of data from multiple sources into the Data Warehouse.
  • Experience in cost analysis of SQL queries and stored procedures.
  • Experience in optimizing SQL queries.
  • Experience in large-file processing and data integration.
  • Proven skills in various roles as Programmer, Business Analyst, and Technical Lead.
  • Implemented frameworks such as Data Quality Analysis, Data Governance, Data Trending, Data Validation, and Data Profiling using DataStage 11.3 (ETL) and Mainframe, with databases such as Netezza and DB2.
  • Good knowledge of business process analysis and design, re-engineering, cost control, capacity planning, performance measurement, and quality.
  • Learned and implemented Velocidata to integrate AMEX mainframe data into the Big Data (Hadoop) analytics environment by offloading the mainframe transformation process.
  • Experience creating technical documents for Functional Requirements, Impact Analysis, and Technical Design, including Data Flow Diagrams in MS Visio.
  • Prepared High-Level Design, Low-Level Design, and Technical Design documents for various projects; good at bug fixing, code reviews, and unit and system testing.
  • Experience delivering highly complex projects using Agile and Scrum methodologies.
  • Quick learner, up to date with industry trends; excellent written and oral communication, analytical, and problem-solving skills; a good team player, well organized, and able to work independently.

TECHNICAL SKILLS

  • ETL
  • DataStage 8.1 / 8.7 / 11.3
  • PL/SQL
  • DB2
  • Netezza
  • UNIX/Linux shell scripting
  • Control-M
  • AIX platform
  • HP Quality Centre
  • TOAD
  • Mainframe
  • XML
  • APPTUNE

PROFESSIONAL EXPERIENCE

Confidential

Senior Developer/Lead

Responsibilities:

  • Coordinated with business customers to gather business requirements, interacted with other technical peers to derive technical requirements, and delivered the BRD and TDD documents.
  • Designed and modified database tables and used HBase queries to insert and fetch data from tables.
  • Created external Hive tables for data validation.
  • Involved in loading and transforming large sets of structured, semi-structured, and unstructured data from relational databases into the Raw Data Zone (HDFS) using Sqoop imports.
  • Developed Hive queries for data sampling and analysis for the analysts.
  • Worked on importing data into HBase using the HBase shell.
  • Extensive experience working on various databases and developing database scripts using SQL and PL/SQL.
  • Wrote Sqoop jobs to import data from DB2 tables into Hadoop (a minimal Sqoop sketch follows this list).
  • Configured and used the QuerySurge tool to connect to HBase through Apache Phoenix for data validation.
  • Worked on the Spark Core, Spark Streaming, and Spark SQL modules of Spark.
  • Developed Spark applications in Python (PySpark) in a distributed environment.
  • Worked on reading multiple data formats from HDFS using PySpark.
  • Involved in converting Hive queries into Spark actions and transformations by creating RDDs from the required files in HDFS.
  • Worked with several DataFrame operations to perform the required validations on the data.
  • Used the Spark DataFrame API across the platform to perform analytics on Hive data.
  • Stored DataFrames into Hive as tables using PySpark (see the PySpark sketch after this list).
  • Analyzed existing SQL scripts and redesigned them with PySpark SQL for faster performance.
  • Worked with the PySpark API to write data from HDFS to Hive.
  • Developed Spark code in Python for faster data processing.
  • Implemented the Oozie workflow engine to run multiple Hive and Python jobs.
  • Wrote Unix shell scripts to call DataStage jobs from Unix with the help of dsrunjob options.
  • Responsible for creating design documents, establishing specific solutions, and creating test cases.
  • Responsible for developing and reviewing shell scripts, and developing and testing DataStage jobs.
  • Responsible for data cloning and validation in the ODS and EDW.
  • Responsible for closing defects identified by the QA team.
  • Responsible for managing the release process for the modules.
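
The DB2-to-HDFS ingestion described in the Sqoop bullets above is typically driven from a small wrapper script. The following is a minimal, hypothetical sketch in Python, not the actual project code; the connection string, credentials file, table name, and target directory are placeholder values.

    import subprocess

    # Hypothetical Sqoop import: pull one DB2 table into the HDFS raw data zone.
    # All connection details and paths below are placeholders.
    sqoop_cmd = [
        "sqoop", "import",
        "--connect", "jdbc:db2://db2-host:50000/SALESDB",
        "--username", "etl_user",
        "--password-file", "/user/etl_user/.db2.password",
        "--table", "CARD_TRANSACTIONS",
        "--target-dir", "/data/raw/card_transactions",
        "--fields-terminated-by", "|",
        "--num-mappers", "4",
    ]

    # Fail the wrapper if Sqoop exits with a non-zero status.
    subprocess.run(sqoop_cmd, check=True)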
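
The PySpark bullets above (reading files from HDFS, validating them with the DataFrame API, running Spark SQL, and saving results back to Hive) follow the standard Hive-enabled SparkSession pattern. This is a minimal sketch with made-up paths, column names, and table names, shown only to illustrate the approach.

    from pyspark.sql import SparkSession, functions as F

    # Hive-enabled session so DataFrames can be read from and written to Hive tables.
    spark = (SparkSession.builder
             .appName("card-analytics-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # Read a pipe-delimited file landed in the HDFS raw zone (placeholder path and schema).
    raw_df = (spark.read
              .option("header", "true")
              .option("sep", "|")
              .csv("/data/raw/card_transactions"))

    # Basic validation with the DataFrame API: drop null amounts and cast to a numeric type.
    clean_df = (raw_df
                .filter(F.col("txn_amount").isNotNull())
                .withColumn("txn_amount", F.col("txn_amount").cast("decimal(18,2)")))

    # Run Spark SQL over the staged data and persist the result to Hive as a table.
    clean_df.createOrReplaceTempView("txn_stage")
    summary_df = spark.sql("""
        SELECT account_id, SUM(txn_amount) AS total_amount
        FROM txn_stage
        GROUP BY account_id
    """)
    summary_df.write.mode("overwrite").saveAsTable("analytics.txn_summary")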

Confidential, Phoenix, AZ

Senior Developer

Responsibilities:

  • In-depth expertise in implementing ETL solutions.
  • Responsible for leading a project team in delivering solutions to the customer.
  • Delivered new and complex high-quality solutions to clients in response to varying business requirements.
  • Responsible for managing the scope, planning, tracking, and change-control aspects of the project.
  • Responsible for effective communication between the project team and the onsite project counterpart; provided day-to-day direction to the project team and regular project status to the onsite counterpart.
  • Responsible for creating design documents and establishing specific solutions.
  • Established quality procedures for the team and continuously monitored and audited to ensure the team met quality goals.
  • Worked extensively with NZLOAD and NZSQL to migrate data from DB2 to the Netezza database (a hedged nzload sketch follows this list).
  • Worked on migrating the existing DB2 data to Netezza.
  • Involved in loading data into Netezza from legacy systems and flat files using UNIX scripts.
  • Used Netezza GROOM to reclaim space for tables and databases.
  • Responsible for developing and reviewing shell scripts and developing DataStage jobs.
  • Involved in gathering business requirements from business users; analyzed all jobs in the project and prepared the ADS document for the impacted jobs.
  • Designed, developed, and tested DataStage jobs using Designer and Director based on business requirements and business rules to load data from source to target tables.
  • Worked on various stages such as Sequential File, Hashed File, Aggregator, Funnel, Change Capture, Change Apply, Row Generator, Peek, Remove Duplicates, Copy, Lookup, Join, Merge, Filter, and Dataset during development of the DataStage jobs.
  • Deployed the code to all other test environments and ensured QA could pass all their test cases.
  • Resolved the defects raised by QA.
  • Established best practices for DataStage jobs to ensure optimal performance, reusability, and restartability.
  • Involved in developing business reports by writing complex SQL queries.
  • Used SQL explain plans in Query and SQL Developer to fine-tune the SQL code used to extract data in database stages.
  • Used Control-M to schedule, run, and monitor DataStage jobs.
  • Worked on standing up the performance environment by creating value files, parameter sets, and Unix shell scripts, and running one-time PL/SQL scripts to insert/update values in the various tables.
  • Extracted data from the DB2 database and loaded it into downstream mainframe files for generating reports.
  • Ability to propose sound design/architecture solutions for the mainframe.
  • Able to propose or choose the best possible distribution for a table based on the type and contents of its columns.
  • Hands-on experience optimizing DB2 queries and stored-procedure cost using APPTUNE.
  • Used various JCL SORT steps to join, merge, sort, and filter data on various conditions.
  • Presented and detailed Product Backlog Items to the Scrum team in Sprint Planning sessions and assisted the team in arriving at story points for the user stories.
  • Reviewed user stories and acceptance criteria with the team.
  • Assisted the Scrum Master in creating and managing the release planning documents.
  • Worked closely with the Scrum development architects and content engineers on design and development.
  • Worked with the Scrum QA team to go over the various test scenarios for different types of system records data.
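
The flat-file loads into Netezza mentioned above were driven by nzload from UNIX scripts. The sketch below is a hedged, hypothetical Python wrapper around nzload: the database, table, file, and delimiter values are placeholders, and exact nzload option names should be checked against the installed Netezza client version.

    import subprocess

    # Hypothetical nzload call: load a pipe-delimited flat file into a Netezza staging table.
    # -db / -t / -df / -delim follow commonly documented nzload options; verify for your client version.
    nzload_cmd = [
        "nzload",
        "-db", "EDW_STAGE",
        "-t", "STG_ACCOUNT",
        "-df", "/data/feeds/account_feed.dat",
        "-delim", "|",
    ]

    subprocess.run(nzload_cmd, check=True)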

Confidential

Application Developer

Responsibilities:

  • Monitored job status and resolved errors across environments.
  • Ran jobs and scripts to check the status of the application.
  • Overrode and restarted jobs as per requirements.
  • Involved in IMS setup between HP and IBM.
  • Performed data refreshes, plex build activities, and file fixes.
  • Performed batch processing of CLAIMS and analysis of the batch runs.
  • Involved in data refresh and plex build activities.

Confidential

Software Engineer

Responsibilities:

  • Played an important role in understanding the core business logic for maintenance and bug fixes.
  • Interacted with the client to specify user requirements.
  • Prepared test cases and test case documents.
  • Modified and generated the General Ledger report.
  • Modified and generated feeds sent to DTCC as per the business requirements.
  • Handled and implemented DTCC changes to JETS and PC Mapper.
  • Generated the C-tax report for the May run and year-end run as per business needs.
  • Enhanced and modified Pyramid reports according to business needs.
  • Handled change management requests.
  • Analyzed jobs that frequently abended in production and fixed them.
