
Claims Data Analysis Resume


SUMMARY

  • An Information Technology professional with 5+ years of experience.
  • 2+ years of experience in Data Engineering using HDFS, Sqoop, Oozie, Spark and Scala.
  • 3.5 years of experience in ETL testing and data warehouse concepts.
  • Good knowledge of Hadoop architecture, including the NameNode, DataNode, Job Tracker, Task Tracker, Secondary NameNode and YARN.
  • Good knowledge of Spark architecture, including Spark Core, Spark SQL, RDDs, Datasets and DataFrames.
  • Experience with Hadoop distributions, in particular Hortonworks Data Platform (HDP) 2.6.
  • Experience transferring data from an RDBMS to HDFS and Hive tables using Sqoop (see the import sketch after this list).
  • Performance tuning using optimization techniques such as caching data in memory and broadcast hints for SQL queries (illustrated after this list).
  • Good knowledge of handling bad data in text and JSON files (see the example after this list).
  • Good knowledge of scheduling Spark jobs using Oozie.
  • Vast experience with ETL and database testing, and a good understanding of ETL testing principles and concepts.
  • Strength in meeting organizational expectations and delivering assigned tasks on time.
  • Team player with strong communication, interpersonal and client-facing skills; good at understanding and resolving problems and working toward the organization's goals and deadlines.
  • Able to communicate with end clients by email, with good email etiquette, and by phone.
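
The Sqoop transfers mentioned above generally take the shape below. This is a minimal sketch only; the connection string, credentials, paths and table names are hypothetical placeholders, not values from the actual projects.

    # All names below are hypothetical; --split-by and --num-mappers
    # control how the import is parallelized across mappers.
    sqoop import \
      --connect jdbc:mysql://db.example.com:3306/claims \
      --username etl_user \
      --password-file /user/etl/.sqoop_pw \
      --table claim_transactions \
      --split-by claim_id \
      --num-mappers 4 \
      --target-dir /data/raw/claim_transactions \
      --hive-import \
      --hive-table staging.claim_transactions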
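
The two tuning techniques named above, in-memory caching and broadcast hints, look roughly like this in Scala; the table and column names are hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    val spark = SparkSession.builder().appName("TuningSketch").getOrCreate()

    // Cache a frequently re-read table in memory so repeated queries
    // avoid re-scanning the underlying files.
    spark.catalog.cacheTable("claims")        // hypothetical table name
    spark.sql("SELECT status, COUNT(*) FROM claims GROUP BY status").show()

    // Broadcast the small dimension table to every executor so the join
    // avoids shuffling the large fact table.
    val claims  = spark.table("claims")
    val members = spark.table("members")      // hypothetical small table
    val joined  = claims.join(broadcast(members), Seq("member_id"))

    // The equivalent hint in SQL form:
    spark.sql(
      "SELECT /*+ BROADCAST(m) */ c.*, m.plan_type " +
      "FROM claims c JOIN members m ON c.member_id = m.member_id")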
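
For the bad-data handling called out above, one standard Spark approach is to read in PERMISSIVE mode and divert unparseable rows into a corrupt-record column; the schema and paths here are made up for illustration.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.types._

    val spark = SparkSession.builder().appName("BadDataSketch").getOrCreate()

    // Hypothetical schema; the extra string column collects rows that fail to parse.
    val schema = new StructType()
      .add("claim_id", LongType)
      .add("amount", DoubleType)
      .add("_corrupt_record", StringType)

    val raw = spark.read
      .schema(schema)
      .option("mode", "PERMISSIVE")                          // keep bad rows instead of failing
      .option("columnNameOfCorruptRecord", "_corrupt_record")
      .json("/data/raw/claims_json")                         // hypothetical path

    val good = raw.filter(raw("_corrupt_record").isNull).drop("_corrupt_record")
    val bad  = raw.filter(raw("_corrupt_record").isNotNull)  // route to a quarantine area
    // Setting "mode" to "DROPMALFORMED" instead silently discards bad rows.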

TECHNICAL SKILLS

Big Data Ecosystem: HDFS, Sqoop, Spark, YARN

Operating Systems: Windows, Linux.

Programming Languages: Scala, Spark SQL, MySQL

Hadoop Distributions: Hortonworks Data Platform 2.6

PROFESSIONAL EXPERIENCE

Claims Data Analysis

Confidential

Environment: HDFS, Sqoop, MySQL, Spark and Scala.

Responsibilities:

  • Loaded and transformed large sets of structured and semi-structured data with Spark.
  • Built robust ETL pipelines using Spark (a minimal sketch follows this list).
  • Handled errors and bad records in JSON and text files.
  • Tuned performance using optimization techniques.
  • Automated Spark jobs using Oozie.
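
The pipeline itself is Confidential, but its general shape in Scala would be something like the following; the paths, schema and business rule are hypothetical.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object ClaimsEtl {                                  // hypothetical job name
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ClaimsEtl").getOrCreate()

        // Extract: semi-structured JSON landed in HDFS by the ingestion jobs.
        val claims = spark.read.json("/data/raw/claims_json")

        // Transform: normalize types, keep settled claims, derive a partition key.
        val settled = claims
          .withColumn("amount", col("amount").cast("double"))
          .withColumn("claim_year", year(to_date(col("claim_date"))))
          .filter(col("status") === "SETTLED")

        // Load: write partitioned Parquet for downstream consumers.
        settled.write.mode("overwrite")
          .partitionBy("claim_year")
          .parquet("/data/curated/claims")

        spark.stop()
      }
    }
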
Data Engineering

Confidential

Environment: HDFS, Sqoop, MySQL, Spark and Scala.

Responsibilities:

  • Loaded and transformed large sets of structured and semi-structured data with Spark.
  • Built robust ETL pipelines using Spark.
  • Tuned performance using optimization techniques.
  • Automated Spark jobs using Oozie (see the workflow sketch after this list).
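
Automating a Spark job with Oozie typically means wrapping it in a workflow action like the one below. This is a minimal sketch; the workflow name, main class and jar path are hypothetical placeholders.

    <workflow-app name="claims-etl-wf" xmlns="uri:oozie:workflow:0.5">
      <start to="spark-etl"/>
      <action name="spark-etl">
        <spark xmlns="uri:oozie:spark-action:0.1">
          <job-tracker>${jobTracker}</job-tracker>
          <name-node>${nameNode}</name-node>
          <master>yarn-cluster</master>
          <name>ClaimsEtl</name>
          <class>com.example.ClaimsEtl</class>       <!-- hypothetical main class -->
          <jar>${nameNode}/apps/claims-etl.jar</jar> <!-- hypothetical jar path -->
        </spark>
        <ok to="end"/>
        <error to="fail"/>
      </action>
      <kill name="fail">
        <message>Spark job failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
      </kill>
      <end name="end"/>
    </workflow-app>

A coordinator definition with a frequency then triggers this workflow on the desired schedule.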

SQL Assistant

Confidential

Environment: Teradata SQL Assistant, Informatica, IBM Rational ClearQuest, Ataccama DQC, SAS Enterprise Guide, Teradata utilities (FastLoad and MultiLoad), WinSCP.

Responsibilities:

  • Involved in preparing testing estimates based on the requirements and ensured that the estimates were in line with project plan timelines.
  • Prepared the test environment and loaded the data into SIT before testing activities were commissioned.
  • Identified test scenarios and was involved in test data preparation.
  • Prepared test plans, test cases, test scripts and test documentation (testing metrics document and defect tracking).
  • Created, optimized, reviewed and executed SQL test queries to validate transformation rules used in source-to-target mappings and to verify the data in outbound files; also prepared scripts for various business scenarios (a typical query shape follows this list).
  • Prepared daily and weekly status reports and published them to the project manager and development teams.
  • Prepared the regression test suite.
  • Held defect triage meetings with the development team and ensured that all critical and high-severity defects were prioritized and resolved at the earliest.
  • Prepared KT documents, reviewed them with developers and subject matter experts, and trained new joiners.
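
Source-to-target validation queries like those described above often follow a "minus" pattern; the tables, columns and transformation rule below are hypothetical, not the actual mappings.

    -- Rows that should have reached the target but did not.
    -- Hypothetical rule: stg_claims.claim_amount is rounded to 2 decimals in dw_claims.
    SELECT claim_id, ROUND(claim_amount, 2) AS expected_amount
    FROM   stg_claims
    WHERE  load_date = CURRENT_DATE - 1
    MINUS
    SELECT claim_id, claim_amount
    FROM   dw_claims
    WHERE  load_date = CURRENT_DATE - 1;

An empty result means every transformed source row reached the target; running the same query with the operands swapped catches unexpected extra rows in the target.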
