
Big Data/Hadoop Technical Lead Resume


SUMMARY

  • Around 7 years of progressive professional experience in system design and development using Big Data/Hadoop, Teradata, and mainframe technologies
  • More than 3 years of in-depth knowledge and hands-on experience with Big Data/Hadoop core components such as MapReduce, HDFS, Spark, Hive, Impala, Sqoop, Oozie, and Hue
  • Experienced professional with a successful career in the banking, finance, and insurance domains
  • Good acumen in software development life cycles covering system study, analysis, development, enhancement, implementation, and support activities
  • Vast experience in mapping client requirements and designing solutions by understanding the core of the change
  • Experienced in ETL processing via Hive and Syncsort DMX-h
  • Experienced in writing UNIX shell scripts for builds and deployments in different environments
  • Strong experience working with relational databases such as Teradata, as well as on mainframes
  • Good exposure to integration testing and data validation in Hadoop environments
  • Proactive approach that has earned praise from clients
  • Excellent interpersonal skills that help in clearly stating and recording ideas
  • Strong analytical, organizational, and leadership skills that have led to vital roles

TECHNICAL SKILLS

Hadoop Technology: Cloudera Hadoop, MapReduce (MR1, MR2/YARN), Spark, HDFS, Hive, Impala, Pig, Sqoop, Oozie, Hue, Cloudera Manager, Kafka, Flume, HCatalog, Spark Streaming

Operating System: UNIX, Linux, MS-DOS, Windows, OS/390 Mainframe

Database: Teradata, DB2, IMS

ETL Tool: DMExpress (DMX-h), the Hadoop ETL tool by Syncsort

Scheduling Tool: Autosys, CA7

Language: Java, Scala, JCL, COBOL, SQL, UNIX shell scripting

Other Software: Eclipse, Maven, SharePoint, Maximo/Remedy, Teradata SQL Assistant, TSO/ISPF

PROFESSIONAL EXPERIENCE

Confidential

Big Data/Hadoop Technical Lead

Responsibilities

  • Proposing technical solutions and laying out the plan for successful implementation
  • Preparing high-level and low-level design documents
  • Converting the existing mainframe-Teradata ETL to Hadoop ETL in order to reduce the computational and storage load on Teradata
  • Using Syncsort's DMX-h ETL tool to facilitate application development in HDFS
  • Developing MapReduce and Spark code to structure the data and support the use cases (see the Spark sketch after this list)
  • Using Java and Scala for programming
  • Developing Hive scripts equivalent to the existing Teradata SQL (see the translation sketch after this list)
  • Using Sqoop to move data in and out of Teradata
  • Developing automated scripts for all jobs that load data from the mainframe to Teradata after processing in Hadoop
  • Handling data from Flume and Kafka sources via Spark Streaming (see the streaming sketch after this list)
  • Scheduling the Hadoop jobs using Oozie and Autosys
  • Developing custom Hive UDFs (see the UDF sketch after this list)
  • Handling fixed-block, variable-block, text-delimited, binary, Avro, and Parquet files
  • Using Network Data Movement (NDM) / Connect:Direct to move data across servers
  • Using Impala for end-user queries and validation
  • Resolving issues raised by other application teams via Nexus requests
  • Building archival and recovery jobs for disaster recovery (DR) purposes
  • Building reusable common components that reduce application coding effort
  • Preparing the necessary technical standards and functional manuals for the application
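
The Spark development above covers structuring raw feeds for downstream use. Below is a minimal sketch, in Scala, of that kind of job: it reads a delimited mainframe extract, applies column names and types, and writes Parquet for Hive/Impala. The paths, column names, and pipe delimiter are illustrative assumptions, not the actual application code.

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: structure a raw delimited extract into typed Parquet.
// Paths, column names, and the pipe delimiter are hypothetical.
object StructureData {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("StructureData")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._

    // Read the raw landing-zone extract as plain delimited text
    val raw = spark.read
      .option("delimiter", "|")
      .csv("/data/raw/accounts/")
      .toDF("account_id", "branch_cd", "balance")

    // Cast the amount column to a proper decimal type
    val structured = raw.withColumn("balance", $"balance".cast("decimal(18,2)"))

    // Write Parquet for downstream Hive/Impala queries and validation
    structured.write.mode("overwrite").parquet("/data/curated/accounts/")

    spark.stop()
  }
}
```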
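
The Hive scripts mirrored existing Teradata SQL. One recurring translation is Teradata's QUALIFY clause, which Hive does not support, so the window function moves into a subquery. A hedged sketch of that pattern, executed here through a Hive-enabled SparkSession; the table and column names are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// Hedged sketch: a Teradata QUALIFY-style dedup rewritten for Hive.
// Teradata original: SELECT ... QUALIFY ROW_NUMBER() OVER (...) = 1
object LatestSnapshot {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("LatestSnapshot")
      .enableHiveSupport()
      .getOrCreate()

    // Hive has no QUALIFY, so the row_number filter sits in a subquery
    spark.sql(
      """
        |INSERT OVERWRITE TABLE stg.accounts_latest
        |SELECT account_id, branch_cd, balance
        |FROM (
        |  SELECT a.*,
        |         ROW_NUMBER() OVER (PARTITION BY account_id
        |                            ORDER BY load_dt DESC) AS rn
        |  FROM stg.accounts a
        |) t
        |WHERE rn = 1
      """.stripMargin)

    spark.stop()
  }
}
```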
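
For the Kafka side of the streaming ingestion, a minimal Spark Streaming sketch using the kafka010 direct-stream API follows; the broker, topic, and consumer group names are placeholders:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// Hedged sketch: consume a Kafka topic in 30-second micro-batches.
// Broker, topic, and group id are placeholders.
object StreamIngest {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("StreamIngest"), Seconds(30))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "broker1:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "ingest-grp",
      "auto.offset.reset"  -> "latest")

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](Seq("txn-topic"), kafkaParams))

    // Placeholder action: count records per batch; a real job would
    // transform and persist each micro-batch instead.
    stream.map(_.value).foreachRDD(rdd => println(s"records in batch: ${rdd.count()}"))

    ssc.start()
    ssc.awaitTermination()
  }
}
```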
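
And a hedged sketch of a custom Hive UDF, written in Scala against the classic org.apache.hadoop.hive.ql.exec.UDF base class; the class name and masking rule are illustrative only:

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hedged sketch: mask all but the last four characters of a value.
// The rule and name are illustrative, not the production UDF.
class MaskAccount extends UDF {
  def evaluate(input: Text): Text = {
    if (input == null) return null
    val s = input.toString
    val masked =
      if (s.length <= 4) s
      else "*" * (s.length - 4) + s.takeRight(4)
    new Text(masked)
  }
}
```

Once packaged into a jar, a UDF like this would typically be registered in a Hive script with ADD JAR and CREATE TEMPORARY FUNCTION before use.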

Confidential

Lead Developer

Responsibilities

  • Used Hadoop as the data processing layer when moving data from the mainframe to Teradata
  • Used Syncsort's DMX-h ETL tool to facilitate application development in HDFS
  • Developed MapReduce jobs in Java for data manipulation
  • Used Hive, Oozie, and Sqoop extensively for ETL processing
  • Created a batch calculation process over historical data comprising account balances and the customer's aggregated deposits and investments (see the aggregation sketch after this list)
  • Designed the data model and flow to achieve the requirement
  • Changed the BTEQ/MLOAD/TPUMP/FLOAD/FASTEXPORT/TPT/JCL scripts as per requirements
  • Wrote and executed Teradata SQL scripts to validate the final data
  • Created views on the tables, along with access categories, to provide users with data access
  • Prepared design, test plan, implementation plan, test script, validation script, and unit testing documents
  • Prepared a job flow diagram in MS Visio in order to hand over the implementation to the production support team
  • Tuned poorly performing Teradata SQL queries and inefficient collect statistics
  • Provided root cause analysis for critical and non-critical issues that occurred in production
  • Analyzed the dashboard and performance metrics
  • Prepared the necessary technical and functional manuals for the application
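
A minimal sketch of such a batch aggregation in Spark (Scala) over a Hive table; the table and column names (hist.daily_positions, deposit_amt, and so on) are hypothetical stand-ins for the actual historical data:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, sum}

// Hedged sketch: per-customer aggregates over historical positions.
// Table and column names are hypothetical.
object BalanceAggregates {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("BalanceAggregates")
      .enableHiveSupport()
      .getOrCreate()

    val agg = spark.table("hist.daily_positions")
      .groupBy("customer_id")
      .agg(
        avg("account_balance").as("avg_balance"),
        sum("deposit_amt").as("total_deposits"),
        sum("investment_amt").as("total_investments"))

    // Persist the aggregates for downstream reporting and validation
    agg.write.mode("overwrite").saveAsTable("mart.customer_aggregates")
    spark.stop()
  }
}
```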

Confidential

Developer

Responsibilities

  • Developed the application as per High Level Design
  • Designed and implemented the changes via change request
  • Used JCL components and checked out the impacted components into a ChangeMan package
  • Changed the JCL, COBOL, and copybook components as per requirements
  • Prepared design, test plan, implementation plan, test scripts, validation script and unit testing documents
  • Scheduled the jobs via CA7 and Autosys
  • Tuned poorly performing jobs
  • Provided root cause analysis for critical and non-critical issues that occurred in production
  • Coded, peer reviewed and tested the programs for quality assurance
  • Provided on call support for daily job streams in production
