
Sr. Hadoop Developer Resume

St. Louis, MO

SUMMARY

  • 10+ years of IT experience in software development across Big Data technologies, Java, ETL development, and mainframe technologies.
  • 2+ years of experience as a Hadoop developer with good knowledge of the Hadoop framework and the Hadoop Distributed File System.
  • Experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, Java, Pig, Oozie, and Sqoop.
  • Excellent understanding of Hadoop architecture and its components, including HDFS and the MapReduce programming paradigm.
  • Experience in managing and reviewing Hadoop log files.
  • Hands-on experience importing and exporting data with Sqoop, the Hadoop data management tool.
  • Strong experience writing MapReduce programs for data analysis, including hands-on experience writing custom partitioners.
  • Involved in schema design.
  • Involved in writing Java and Pig scripts to reduce job execution time.
  • 1 year of experience developing web-based applications using Core Java, Struts 2.0, and EJB 3.0.
  • 2 years of strong experience in the analysis, design, development, implementation, and troubleshooting of data warehouse applications using ETL tools such as Informatica PowerCenter 9.1/8.x/7.x.
  • Extensive experience creating and maintaining database objects such as tables, views, indexes, constraints, sequences, and synonyms.
  • Experience in data modeling: dimensional data modeling, star and snowflake schemas, fact and dimension tables, physical and logical data modeling, and denormalization techniques.
  • Experience with data extraction, transformation, and loading from data sources such as DB2, Oracle, and MS SQL Server into a common analytical data model using Informatica PowerCenter 6.x/7.x/8.1/8.5/8.6.
  • 5 years of experience with mainframe technologies, including MVS, JCL, COBOL, MF-COBOL, DB2, IMS DB, CICS, and UNIX.
  • Tools: configuration management tools (CHANGEMAN, Endevor, Workbench, PANVALET, SCLM); File-AID, SPUFI, BMS GT, DB2 Visualizer, and Abend-AID.
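In practice, the Sqoop import/export work summarized above comes down to command lines along these lines. This is a minimal sketch: the connection string, credentials, table names, mapper count, and HDFS paths are hypothetical placeholders, not details from any engagement below.

```shell
# Hypothetical example: pull an RDBMS table into HDFS with Sqoop 1.
# All connection details and paths below are placeholders.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4

# Push aggregated results from HDFS back to the relational database.
sqoop export \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table order_summary \
  --export-dir /user/etl/order_summary
```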

TECHNICAL SKILLS

Hadoop/Big Data: HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, Cloudera

Java Technologies: Core Java, Struts 2.0, Eclipse 3.2.2

Mainframe Technologies: MVS/ZOS, COBOL, JCL, DB2, CICS, IMS DB

Programming Languages: C, C++, Java, Unix Shell Script

Operating Systems/Platforms: Linux, Cloudera, MVS/ZOS, Windows 98/2000/XP

Databases: MySQL, DB2, Oracle

Tools: SPUFI, BMS GT, File-AID, Informatica PowerCenter 9.1/8.x/7.x

Version Control Tools: CHANGEMAN, Endevor, Workbench

PROFESSIONAL EXPERIENCE

Confidential, St. Louis, MO

Sr. Hadoop Developer

Responsibilities:

  • Worked with big data analytic tools including Pig, Hive, HDFS, and Sqoop.
  • Involved in loading data from the Linux file system into HDFS.
  • Implemented business logic using Pig scripts and UDFs.
  • Implemented test scripts to support test-driven development and continuous integration.
  • Worked on tuning the performance of Pig queries.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Responsible for managing data coming from different sources.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Provided cluster coordination services through ZooKeeper.
  • Managed and reviewed data backups and Hadoop log files.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Supported setting up the QA environment and updating configurations for implementing scripts with Pig and Sqoop.

Environment: Hadoop, HDFS, Pig, Sqoop, Hive, Oozie, Shell Scripting & MapReduce.
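The log-management work above can be sketched with a small shell pass. This assumes the standard log4j line format used by Hadoop daemon logs ("date time LEVEL class: message"); the sample file here is a stand-in for a real log under the cluster's log directory.

```shell
# Minimal log-review sketch; the sample file below substitutes for a real
# daemon log (e.g. under /var/log/hadoop), whose exact path varies by setup.
cat > sample-datanode.log <<'EOF'
2015-01-01 12:00:01,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode
2015-01-01 12:00:02,000 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
2015-01-01 12:00:03,000 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Disk failure on /data1
2015-01-01 12:00:04,000 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent block report
EOF

# Count messages per severity (field 3 in the log4j layout) to spot
# troubled daemons quickly.
awk '{ print $3 }' sample-datanode.log | sort | uniq -c | sort -rn

# Pull the ERROR lines for closer inspection.
grep ' ERROR ' sample-datanode.log
```

The same two commands work unchanged against real NameNode/DataNode logs; only the file name changes.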

Confidential, Jacksonville, FL

Sr. Hadoop Developer

Responsibilities:

  • Evaluated business requirements and prepared detailed program specifications that follow project guidelines.
  • Led the team and assigned work to team members.
  • Analyzed large data sets to determine the optimal way to aggregate and report on them.
  • Developed simple to complex MapReduce jobs using Hive and Pig.
  • Optimized MapReduce jobs to use HDFS efficiently through various compression mechanisms.
  • Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Exported the analyzed data to relational databases using Sqoop.
  • Created partitioned tables in Hive.
  • Managed and reviewed Hadoop log files.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
  • Loaded and transformed large sets of structured and semi-structured data.
  • Responsible for managing data coming from different sources.

Environment: Hadoop, MapReduce, HDFS, Hive, Pig, SQL and Sqoop.
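Creating and loading a partitioned Hive table, as described above, typically looks like the following sketch. It assumes a configured Hive client; the table name, columns, and HDFS input path are hypothetical.

```shell
# Hypothetical partitioned-table DDL; filtering on log_date lets Hive read
# only the matching partition's files instead of scanning the whole table.
hive -e "
CREATE TABLE IF NOT EXISTS web_logs (
  host    STRING,
  request STRING,
  status  INT
)
PARTITIONED BY (log_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;
"

# Move one day's extract (already in HDFS) into its own partition.
hive -e "
LOAD DATA INPATH '/user/etl/web_logs/2015-01-01'
INTO TABLE web_logs PARTITION (log_date = '2015-01-01');
"
```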

Confidential, Jacksonville, FL

Sr. Developer

Responsibilities:

  • Prepared and reviewed impact analysis documents.
  • Prepared and reviewed test plan documents.
  • Led the team and assigned work to team members.
  • Developed code for break-fix enhancements.
  • Performed integration and unit testing.

Environment: Java, Struts 2.0, DB2, IntelliJ IDEA, XML, DMS, Author.

Confidential, Kansas, OH

Sr. Software Engineer

Responsibilities:

  • Responsible for business analysis and requirements collection.
  • Responsible for data analysis.
  • Extracted data from various internal systems comprising details of HR, projects, time, trades, expenses, banker statements, and compliance.
  • Extracted data from various external sources comprising details of deals, mergers and acquisitions, securities, bonds, and instruments.
  • Created complex mappings using Connected/Unconnected Lookup, Aggregator, and Router transformations to populate target tables efficiently.
  • Created events and tasks in the workflows using Workflow Manager.
  • Provided production support by monitoring the processes running daily.
  • Worked on dimensional modeling to design and develop star schemas by identifying the facts and dimensions.
  • Worked on dimension and fact tables to implement business rules and produce the required results.
  • Developed Informatica mappings, mapplets, sessions, and workflows per the client's standards.
  • Prepared design specifications for Informatica mappings.
  • Tuned Informatica mappings and sessions for optimum performance.
  • Developed shell scripts for data retrieval and ETL processes.
  • Developed and scheduled Autosys jobs by creating and deploying JIL files.

Environment: Informatica PowerCenter 9.0.1, Windows 98, UNIX, Oracle 8i, PL/SQL & Import/Export Utilities.
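As a sketch of the kind of ETL shell script mentioned above: extract usable rows from a delimited feed and reshape them for a downstream loader. The file names and record layout here are hypothetical.

```shell
# Hypothetical pipe-delimited trade feed; the second record has an empty
# quantity field and should be rejected.
cat > trades.dat <<'EOF'
T001|AAPL|100|2015-03-02
T002|MSFT||2015-03-02
T003|ORCL|250|2015-03-03
EOF

# Keep only rows with a non-empty quantity and reorder columns into the
# comma-separated layout the loader expects.
awk -F'|' '$3 != "" { print $2 "," $3 "," $4 }' trades.dat > trades.csv
cat trades.csv
# Expected contents: AAPL,100,2015-03-02 and ORCL,250,2015-03-03
```

In production the same pattern would read a dated feed file, log reject counts, and hand trades.csv to the load step (e.g. a database import utility).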

Confidential

Responsibilities:

  • Communicated and coordinated with the onsite coordinator on the analysis and implementation of work; provided production troubleshooting solutions within predefined deadlines depending on the criticality of the task; handled OPC batch scheduling and related issues.
  • Worked on MN tickets; maintained the applications day-to-day and supported quarter-end and year-end processes; coordinated with team members; analyzed programs per improvement requests from users; performed complete reviews of every part of the applications.
  • Responsible for Release Activity.

Environment: MVS, COBOL, JCL, DB2, SCLM, OPC.
