
Lead/Senior Hadoop Developer Resume


Minneapolis, MN

SUMMARY

  • Extensive experience architecting and developing solutions, with a focus on creating business value, for more than 9 years in Big Data, Java, and Analytics applications for the Healthcare, Insurance, Retail, Logistics, and Hospitality industries.
  • Key member of the Centre of Excellence team, responsible for exploring new technologies, executing POCs, and conducting benchmarking/performance testing of technologies, solutions, and tools to incorporate best-fit technologies and deliver efficient, cost-effective solutions.
  • Responsible for architecting consumption patterns to deliver data with low latency, offline (high latency), and in near real time from the Datalake to downstream applications and systems.
  • Played an active role in capacity planning (queue sharing: allocating resources such as executors, cores, memory, and storage in the cluster) for Big Data applications/systems with respect to reliability, scalability, security, compliance, and cost.
  • Partnered with Solution Architects and Business Analysts to firm up requirements and create high-level solutions and designs.
  • Contributed heavily to Healthcare by designing and building Big Data Datalake products and systems to store and process terabytes of healthcare data daily using Spark, Hive, the Hadoop ecosystem, WebMQ, Java, Talend, and MapR-DB (HBase), and by building Master Data Management (MDM) solutions.
  • Collaborated with the Analytical Common Capabilities team and provided near-real-time data delivery solutions for the Insurance domain using Kafka and Spark to identify risks in member health and claims processes by applying statistical tests such as Nelson Rules, and to support campaigns and promotions.
  • Explored machine learning algorithms and Spark's machine learning libraries.
  • Extensively used design patterns such as DAO, Facade, Command, Strategy, Adapter, and Visitor in developing ERP and Retail applications using Spring and Spring Security.
  • Extensively used Talend for delivering Big Data programs.
  • Developed utilities to improve the performance of the production environment.
  • Created dashboards, alerts, and reports using Splunk to give business teams detailed insight into data source ingestion status, failures, issues, error types, and the health of the production system.

TECHNICAL SKILLS

Distributed Processing FW: Spark

Hadoop Ecosystem: Hive, Sqoop, MapR-DB (HBase)

Hadoop Distributions: MapR, CDH4 - Cloudera, Pivotal, AWS

Messaging systems: Kafka, RabbitMQ, WebMQ

NoSQL DB: MapR-DB (HBase), MarkLogic

Reporting: Splunk, Kibana

Workflow Engine: Talend TAC, Bedrock

J2EE Frameworks: Spring, Struts

Languages: Java

J2EE Technologies: JDBC, Servlets, JSP, EJB.

Web UI Technologies: HTML, JavaScript, AngularJS

Application Server: Oracle OC4J, JBoss 4.3

Web Server: Tomcat 5.5

IDEs/Tools: Talend Studio, Eclipse, iReport, IntelliJ, Karma, Keytool

Database: Greenplum, Oracle, MySQL

Unit Testing: JUnit, Jasmine

Operating Systems: Windows, Linux

PROFESSIONAL EXPERIENCE

Confidential, Minneapolis, MN

Lead/Senior Hadoop Developer

Responsibilities:

  • Conducted capacity planning to acquire and allocate the required resources from the MapR BDPaaS environment for the Spark application.
  • Transformed and enriched data based on pre-defined business rules (Nelson Rules) using Spark SQL's DataFrames and Datasets.
  • Stored and structured the enriched results in Hive using Spark SQL to support data visualization.
  • Created partitions on the data while storing it in Hive.
  • Provisioned data to Kibana.
  • Created and deployed Talend jobs in TAC.
  • Created context groups for multiple environments and routines using Talend.
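
As a minimal, framework-free sketch of the Nelson Rules enrichment above (the real pipeline applied these rules with Spark SQL), here is Rule 1 (a point more than three standard deviations from the baseline mean) with illustrative numbers:

```python
from statistics import mean, pstdev

def nelson_rule_1(baseline, stream):
    """Nelson Rule 1: flag stream points more than three standard
    deviations from the baseline mean. Control limits come from an
    in-control baseline window, as in standard SPC practice."""
    center = mean(baseline)
    sigma = pstdev(baseline)
    return [i for i, v in enumerate(stream)
            if abs(v - center) > 3 * sigma]

# Illustrative values only; the production rules ran over claim/health
# metrics inside Spark SQL, not plain Python lists.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0]
stream = [10.0, 9.9, 12.5]
print(nelson_rule_1(baseline, stream))  # [2] -> the 12.5 reading
```

The same predicate translates directly into a Spark SQL filter once the center line and sigma are computed over a baseline window.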

Environment: Spark SQL, Hive, RabbitMQ, Kibana, Yarn, IBM CDC, MaprDB (Hbase), Java, Talend, TAC, MapR, BDPaaS, Linux

Confidential

Lead/Senior Developer

Responsibilities:

  • Designed and developed topics and partitions for the healthcare data from IBM Change Data Capture and broadcast the data to consumers using Kafka.
  • Designed and developed consumption patterns for consuming data from the topics.
  • Benchmarked near-real-time data delivery by exploring technologies such as DataTorrent.
  • Performed root cause analysis and code refactoring for various issues.
  • Involved in capacity planning to acquire the resources required on the MapR BDPaaS cluster for the project.
  • Executed the program using Agile methodology.
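
A hedged sketch of the partition-design idea behind the Kafka work above: records are routed to partitions by key, so all events for one member land on one partition and keep their order. This stdlib-only illustration uses md5 where Kafka's default partitioner actually uses murmur2, and the partition count is invented for the example:

```python
import hashlib

NUM_PARTITIONS = 6  # illustrative partition count for a CDC topic

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Route a record key to a partition deterministically.
    Kafka's default partitioner hashes the key bytes with murmur2;
    md5 stands in here so the sketch needs only the standard library."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# All events for the same member id always map to the same partition,
# so per-member ordering is preserved for downstream consumers.
assert partition_for("member-12345") == partition_for("member-12345")
```

Choosing the member/claim identifier as the key is what makes downstream consumption patterns order-safe per entity.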

Environment: Kafka, Spark, Yarn, IBM CDC, MaprDB (Hbase), Java, Talend, TAC, MapR, BDPaaS, Linux

Project #3: House Calls DF1.0 & DF2.0

Confidential

Lead/Senior Hadoop Developer

Responsibilities:

  • Created tables and views using Hive for provisioning, querying, and managing large datasets from various sources to facilitate downstream systems and analytics teams.
  • Extensively used components including tHive, tHbase, tSqoop, tWaitForFile, tIterateToFlow, tFlowToIterate, tHashOutput, tHashInput, tMap, tRunJob, tJava, tJavaFlex, tNormalize, tFileInputDelimited, tSSH, tSystem, and MapReduce components to create Talend jobs.
  • Created contexts to pass values throughout the process, from parent to child jobs and from child to parent jobs, using tBufferOutput, tContextLoad, and tJavaRow.
  • Developed Talend jobs that are reused across different processes in the flow.
  • Implemented Sqoop to acquire data from disparate data sources into Hadoop.
  • Worked on Talend routines used extensively for pattern matching and building SQL queries throughout the process.
  • Worked on various UAT issues raised before moving the code to production.
  • Migrated the project from the DF1.0 to the DF2.0 framework.
  • Created workflows to ingest data files for various sources using the Bedrock workflow engine.
  • Participated in all sprint demos, ceremonies, planning sessions, and retrospectives, and provided suggestions, changes, and ideas during sprint execution.
  • Coordinated with the Production team during the transition of the code to production.
  • Created deployment plans and documentation for both the testing and production teams.
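
The Hive provisioning described above typically centers on external, partitioned tables; a hypothetical HiveQL sketch (the table name, columns, and location are illustrative, not from the source):

```sql
-- Illustrative external table; actual schemas and locations are project-specific.
CREATE EXTERNAL TABLE IF NOT EXISTS claims_enriched (
  member_id  STRING,
  claim_id   STRING,
  amount     DOUBLE
)
PARTITIONED BY (load_date STRING)
STORED AS ORC
LOCATION '/datalake/claims/enriched';

-- Register a new daily partition after ingestion completes.
ALTER TABLE claims_enriched ADD IF NOT EXISTS
  PARTITION (load_date = '2016-01-01');
```

Partitioning by load date lets downstream queries prune to a single day's files instead of scanning the whole dataset.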

Environment: Talend, Hive 0.13, Yarn, WebMQ, IBM CDC, MaprDB (Hbase), Java, TAC, MapR, BDPaaS, Linux

Confidential

Senior Hadoop Developer

Responsibilities:

  • Created the workflow to ingest data files for the corresponding sources (HSR, EVI, Triage, Optumint, OptumOds) in the Bedrock workflow engine.
  • Extracted data from MySQL to Hive tables for a couple of sources using Sqoop.
  • Created external tables using Hive for provisioning, querying, and managing large datasets from various sources to facilitate other systems and analytics.
  • Converted data files to Avro format using MapReduce programming.
  • Worked on various UAT issues raised before moving the code to production.
  • Worked on defects raised by the QA team.
  • Coordinated with the Production team during the transition of the code to production.
  • Created deployment plans and documentation for both the testing and production teams.
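
The MySQL-to-Hive extraction above is commonly a single Sqoop invocation; a hypothetical example in which the host, database, table, and user names are placeholders:

```shell
# Illustrative only: host, database, table, and credentials are placeholders.
sqoop import \
  --connect jdbc:mysql://db-host:3306/source_db \
  --username etl_user -P \
  --table triage_events \
  --hive-import \
  --hive-table staging.triage_events \
  --num-mappers 4
```

The mapper count controls how many parallel JDBC slices Sqoop opens against the source database.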

Environment: Hadoop 2, Hive 0.13, Greenplum, Avro, Java 1.7, CDH4, Pivotal, Bedrock, Linux

Confidential

Senior Developer

Responsibilities:

  • Implemented the security module using Spring Security's ACL module to maintain object-level access control.
  • Generated SSL certificates for the development environment using keytool.
  • Developed UI screens using AngularJS and KPIs using NVD3 charts.
  • Wrote XML configuration files for Spring beans; used Spring IoC for dependency injection of beans between the presentation, business, and database layers.
  • Wrote new promotions for the application using a Test-Driven Development (TDD) approach.
  • Involved in peripheral integrations.
  • Developed UI and business logic using TB 5.0 AI message flow with XML configurations.
  • Performed unit testing using JUnit, Jasmine, and the Karma test runner.

Environment: Java 1.7, Spring 3.0, Spring Security, Hibernate, RESTful Web Services, JMS, JSON, Glassfish 3.1, PostgreSQL 9.1, MS SQL, JUnit 4, IntelliJ 11.1, Confidential Self Service Checkout Framework, VMware, NEP Framework, Tomcat 7, SVN, Eclipse, MQTT (Paho), AngularJS, Node.js, Sencha Touch, Cordova, ActiveMQ, Taxonomy, CEP, Jenkins CI

Confidential

Software Engineer

Responsibilities:

  • Developed the project using EJB 1.x, JSP, Servlets, and JavaScript.
  • Involved in developing the Warehouse module.
  • Involved in resolving various issues.
  • Followed the Session Facade design pattern while designing and developing the application.
  • Designed the technical document for Air Operations.
  • Participated in team meetings and provided input on the application design.
  • Debugged the application.
  • Performed unit testing.

Environment: Java, JSP, Servlets, EJB, JDBC, JavaScript, Oracle, OC4J

Confidential

Software Engineer

Responsibilities:

  • Developed the project using the Struts framework.
  • Involved in developing the Data Capture and Reports modules.
  • Involved in developing the user interface, invoking the business logic, and writing the data access logic.
  • Involved in code changes, code reviews, and standard performance and functional reviews.
  • Generated reports in normal, printable, Excel, and PDF formats.
  • Coordinated the onsite and offshore teams.

Environment: Struts, JDBC, JBoss, iReport, MS SQL
