
Technical Lead Resume


SUMMARY:

  • 9.5 years of IT experience spanning requirements elicitation, analysis, design, development, testing, ETL, management and maintenance projects on platforms including Big Data analytics (Hadoop ecosystem) and Java/J2EE.
  • Wrote generic microservices to consume data from multiple Kafka topics and upsert it into Cassandra in a skinny-row model (see the sketch after this list).
  • Wrote a generic framework to ingest data into PostgreSQL.
  • Extensive working experience with Hadoop ecosystem components such as HDFS, Spark, MapReduce, Hive, Pig, HBase, Kafka, Sqoop, Oozie and ZooKeeper.
  • Experience in end-to-end code deployment and performance tuning on Kubernetes.
  • Completed Big Data Spark Foundations certification from Big Data University.
  • Hands-on experience with Spark Streaming, Kafka and Scala.
  • Hands-on experience writing generic frameworks for various data extracts.
  • Experience in OpenShift deployments using Docker images.
  • Experience writing data into Amazon S3 buckets from Spark applications.
  • Expert in performance tuning techniques for Hive and HBase.
  • Experience developing custom Hive UDFs and UDAFs in Java, JDBC connectivity with Hive, and development and execution of Pig scripts and Pig UDFs.
  • Good experience with Hive partitioning and bucketing, performing different types of joins on Hive tables, and implementing Hive SerDes such as RegEx, JSON and Avro.
  • Clear understanding of Hadoop MRv1 architectural components (HDFS, JobTracker, TaskTracker, NameNode, DataNode, Secondary NameNode) and of YARN architectural components (ResourceManager, NodeManager and ApplicationMaster).
  • Expert in working with Spark DataFrames.
  • Expert in XML parsing techniques in Python using Hadoop Streaming.
  • Experience in writing external Pig Latin scripts.
  • Experience in validating and cleansing the data using Pig statements.
  • Experience using Apache Sqoop to import and export data between HDFS and external RDBMSs. Hands-on experience setting up workflows with the Apache Oozie workflow engine for managing and scheduling Hadoop jobs.
  • Experience using Sqoop to migrate data between HDFS and MySQL or Oracle, and deploying Hive-HBase integration to perform OLAP operations on HBase data.
  • Hands-on experience in scripting languages such as Python.
  • Proficient in writing MapReduce programs and using the Apache Hadoop MapReduce API to analyze structured and unstructured data.
  • Hands-on experience handling different file formats such as SequenceFile, CSV, XML, log, ORC and RC.
  • Expertise in design and development of web applications using J2EE, Servlets and GWT.
  • Expertise in server-side design and development using technologies such as Java, Spring and Hibernate.
  • Experienced in writing UNIX shell scripts for builds and deployments to different environments, and in working with Network File System, FTP and mail services.
  • Hands-on experience with Ant and Maven for writing build scripts.
  • Extensive experience in application development and deployment on WebLogic and Tomcat application servers.
  • Experienced and skilled Agile developer with a strong record of excellent teamwork and project management.
  • Excellent verbal and written communication skills. Strong analytical, organizational and interpersonal skills.
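
For illustration, below is a minimal sketch of the Kafka-to-Cassandra ingestion pattern described in the bullets above. It assumes Spark Structured Streaming with the spark-cassandra-connector on the classpath; the broker, topic, keyspace, table and column names are hypothetical placeholders, not those of any actual engagement.

    import org.apache.spark.sql.{DataFrame, SparkSession}

    object KafkaToCassandraSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-cassandra")
          .config("spark.cassandra.connection.host", "cassandra-host") // placeholder host
          .getOrCreate()

        // Read raw events from a Kafka topic (all names are placeholders).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "member-events")
          .load()
          .selectExpr("CAST(key AS STRING) AS member_id",
                      "CAST(value AS STRING) AS payload")

        // Upsert each micro-batch into a Cassandra table modeled with skinny rows;
        // Cassandra writes are idempotent upserts keyed on the primary key.
        val writeBatch: (DataFrame, Long) => Unit = (batch, _) =>
          batch.write
            .format("org.apache.spark.sql.cassandra")
            .options(Map("keyspace" -> "member_ks", "table" -> "member_events"))
            .mode("append")
            .save()

        val query = events.writeStream
          .foreachBatch(writeBatch)
          .option("checkpointLocation", "/tmp/checkpoints/kafka-to-cassandra")
          .start()

        query.awaitTermination()
      }
    }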

TECHNICAL SKILLS:

Hadoop Technologies and Distributions: Apache Hadoop, Hortonworks Data Platform (HDP) and MapR

Hadoop Ecosystem: HDFS, MapReduce, Hive, Pig, Sqoop, Oozie, Flume, Spark, ZooKeeper, Storm, Kafka, Spark Streaming.

NoSQL Databases: HBase, Cassandra, MongoDB

Programming: J2SE & J2EE (certified), Oracle Certified PL/SQL Developer Associate, Python and Scala.

Frameworks: Spring Boot, Spring, Spring Batch and Hibernate.

Version and Source Control: SVN, Git

RDBMS: PostgreSQL, Oracle, MySQL, SQL Server.

Application Deployment: Kubernetes, OpenShift, Docker

Web Development: GWT, Web Services, HTML, JSP, Servlets, JavaScript, CSS, XML

IDEs: Eclipse, IntelliJ, Oracle SQL Developer.

Web Servers: Apache Tomcat and WebLogic.

Domains: Retail Banking, Investment Banking, Healthcare

PROFESSIONAL EXPERIENCE:

Confidential

Technical Lead

Technologies Used: Spark Streaming, Hive Context, Scala, Kubernetes, Spring Boot, Kafka, Cassandra, Apache Drill, Apache Thrift, MapR-DB, Pig, MapReduce, HBase, Shell Script, Hive, Sqoop.

Responsibilities:

  • Wrote generic microservices to ingest data from Kafka into Cassandra using a skinny-row model.
  • Wrote a generic framework to ingest real-time data into PostgreSQL (a JDBC sink sketch follows this list).
  • Processed and ingested real-time event data on the big data platform using Spark Streaming and Scala.
  • Generated big data feeds for clients such as OptumRx and CVS to cut over from existing legacy application processes that had performance issues.
  • Generated complex transaction feeds on a daily and weekly basis using Spark and Scala and loaded them into Kafka topics.
  • Wrote a generic framework in Spark using Scala to process ECI extracts and load them into a Kafka topic.
  • Fine-tuned the code that inserts data into PostgreSQL.
  • Successfully completed multiple POCs for end-to-end data migration from legacy source systems to the big data platform.
  • Designed various complex processes such as RDS feeds and Healthcare Economics, Actuarial and Underwriting reports.
  • Set up the Kubernetes environment for new microservices and Spark jobs.
  • Denormalized Member Eligibility data and loaded it into MapR-DB.
  • Handled large datasets during data ingestion using partitioning, Spark in-memory capabilities, and effective, efficient joins and transformations.
  • Performed advanced procedures such as text analytics and processing using the in-memory computing capabilities of Spark.
  • Participated in project kick-off meetings to understand high-level business requirements.
  • Provided estimation and solution design for complex projects (Healthcare Economics, Actuarial, Underwriting) based on business requirements.
  • Defined business requirements, developed business processes and validated the processes against the system.
  • Analyzed ongoing issues and implemented solutions to improve performance and scalability.
  • Responsible for providing consultation to the business and interacting with different internal technical and business teams to manage the end-to-end life cycle of the project.
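
A minimal sketch of the kind of PostgreSQL sink referenced above is shown below (for illustration only; the connection URL, environment variables, table name and batch size are placeholders rather than the project's actual configuration):

    import org.apache.spark.sql.{DataFrame, SaveMode}

    object PostgresSinkSketch {
      // Appends a DataFrame to a PostgreSQL table over JDBC; all names are hypothetical.
      def writeToPostgres(df: DataFrame, table: String): Unit = {
        df.write
          .mode(SaveMode.Append)
          .format("jdbc")
          .option("url", "jdbc:postgresql://pg-host:5432/feeds")
          .option("dbtable", table)
          .option("user", sys.env.getOrElse("PG_USER", "app"))
          .option("password", sys.env.getOrElse("PG_PASSWORD", ""))
          .option("driver", "org.postgresql.Driver")
          .option("batchsize", "10000") // larger JDBC batches reduce round trips
          .save()
      }
    }

The number of partitions in the DataFrame at write time determines how many parallel JDBC connections are opened, which is one of the main knobs when tuning such inserts.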

Confidential

Hadoop Tech Lead

Technologies Used: Spark Streaming, Spark Hive Context, Kafka, Shell Scripting and JRules.

Responsibilities:

  • Performed requirements elicitation and analysis and prepared high-level design and documentation.
  • Developed a Spark/Scala module to ingest real-time customer application data.
  • Extracted, transferred and loaded data from different sources to build solutions for Hadoop projects.
  • Designed and developed applications in Spark using Scala to compare the performance of Spark with Hive and SQL/Oracle (see the timing sketch after this list).
  • Developed Spark scripts using Scala shell commands as per requirements.
  • Used Spark Streaming APIs to perform transformations and actions on the fly for building the common learner data model, which receives data from Kafka in near real time.
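
As an illustration of the Spark-versus-Hive comparison mentioned above, a rough timing sketch is shown below; the table and column names are invented, and the equivalent HiveQL would be timed separately from the Hive CLI or Beeline for the comparison.

    import org.apache.spark.sql.SparkSession

    object SparkVsHiveTimingSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("spark-vs-hive-timing")
          .enableHiveSupport() // access Hive tables (successor to the older HiveContext)
          .getOrCreate()

        // Run the same aggregation the Hive query performs and time it.
        val start = System.nanoTime()
        spark.sql(
          """SELECT txn_date, COUNT(*) AS txn_count, SUM(amount) AS total_amount
            |FROM customer_txns
            |GROUP BY txn_date""".stripMargin)
          .write.mode("overwrite").parquet("/tmp/daily_txn_summary")
        val elapsedSec = (System.nanoTime() - start) / 1e9
        println(f"Spark SQL aggregation took $elapsedSec%.1f s")

        spark.stop()
      }
    }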

Confidential

Hadoop Tech Lead

Technologies Used: Spark Streaming, Spark SQL Context, Hive Context, Hive, Pig, MapReduce, Python, Kafka, HBase and Oozie.

Responsibilities:

  • Determined feasibility requirements, compatibility with the current system, and system capabilities to integrate new acquisitions and new business functionality.
  • Independently formulated detailed program specifications using structured data analysis and design methodology, and prepared project documentation when needed.
  • Worked on POCs with Apache Spark using Scala to adopt Spark in the project.
  • Consumed data from Kafka using Apache Spark.
  • Implemented a solution to analyze real-time transaction data using Kafka and Spark.
  • Migrated Teradata stored procedure logic to the big data ecosystem using Spark.
  • Developed Python scripts to parse XML and store the results in HDFS using Hadoop Streaming.
  • Implemented Hive scripts to structure data in HDFS.
  • Developed Pig scripts for ETL.
  • Maintained HBase tables from Hive that are used by downstream applications.
  • Implemented dynamic partitioning and bucketing (see the sketch after this list).
  • Integrated Hadoop into traditional ETL, accelerating the extraction, transformation, and loading of massive structured and unstructured data.
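
The dynamic partition loading mentioned above roughly follows the pattern sketched below; the table and column names are placeholders, and bucketing would additionally be declared with a CLUSTERED BY clause in the Hive DDL.

    import org.apache.spark.sql.SparkSession

    object DynamicPartitionLoadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("dynamic-partition-load")
          .enableHiveSupport()
          .getOrCreate()

        // Hive settings required for dynamic partition inserts.
        spark.sql("SET hive.exec.dynamic.partition = true")
        spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")

        spark.sql(
          """CREATE TABLE IF NOT EXISTS txn_by_date (
            |  account_id STRING,
            |  amount DOUBLE
            |) PARTITIONED BY (txn_date STRING)
            |STORED AS ORC""".stripMargin)

        // The partition column must come last in the SELECT for dynamic partitioning.
        spark.sql(
          """INSERT OVERWRITE TABLE txn_by_date PARTITION (txn_date)
            |SELECT account_id, amount, txn_date
            |FROM txn_staging""".stripMargin)

        spark.stop()
      }
    }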

Confidential

Senior Hadoop Developer

Technologies Used: HBase, Apache Pig, Hive, Sqoop, MapReduce and Shell Script.

Responsibilities:

  • Automated the data ingestion process.
  • Developed MapReduce jobs using the Java API and HiveQL.
  • Developed Sqoop scripts to extract data from source tables and load it into HDFS.
  • Developed scripts to create and load HBase tables.
  • Implemented dynamic partitions, bucketing, sequence files and multi-insert queries.
  • Implemented compression techniques such as LZO.
  • Developed UDF, UDAF and UDTF functions and used them in Hive queries (a UDF sketch follows this list).
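
A minimal sketch of a custom Hive UDF of the kind referenced above is shown below. The masking use case and class name are hypothetical, and it is written in Scala here for brevity; the same org.apache.hadoop.hive.ql.exec.UDF contract applies to the Java implementations.

    import org.apache.hadoop.hive.ql.exec.UDF
    import org.apache.hadoop.io.Text

    // Hypothetical UDF that masks all but the last four characters of a value.
    class MaskValueUDF extends UDF {
      def evaluate(input: Text): Text = {
        if (input == null) null
        else {
          val s = input.toString
          val masked =
            if (s.length <= 4) s
            else "*" * (s.length - 4) + s.takeRight(4)
          new Text(masked)
        }
      }
    }

After packaging into a JAR, the function is registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION mask_value AS 'MaskValueUDF', and can then be called like any built-in function in HiveQL.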

Confidential

Team Lead/Senior Developer

Technologies Used: GWT, Spring, Spring Batch, Hibernate Framework, Web Services, Core Java, Oracle Database, Apache Tomcat.

Responsibilities:

  • Analyzed and estimated requirements for the SEVI application.
  • Prepared project scoping and specification documents to clearly communicate the project roadmap.
  • Involved in documenting requirement specifications and reviewing the technical design document.
  • Implemented credit decision strategies, building new UI features and tools using Google Web Toolkit, Spring and Hibernate.
  • Deployed the application on Apache Tomcat using Ant and Maven.
  • Tested and supported existing functionality.
