
Sr Principal Software Engineer Resume


SUMMARY

  • 8+ years of experience in the IT industry developing applications, including 5 years in Big Data (Hadoop, Spark, Scala, Python, and Hive) and 3 years as an SDET.
  • Strong background with distributed file systems in the Big Data arena.
  • Experience in implementation of multi-layer architecture for data ingestion.
  • Strong domain knowledge in the Supply Chain, Finance, and Retail domains.
  • Good knowledge of writing complex queries in SQL and HiveQL.
  • Ingested data from RDBMS systems into an HDFS data lake (Confidential Data Lake).
  • Worked on high volumes of data and applied optimization techniques such as partitioning and bucketing (a sketch appears after this list).
  • Good understanding of Spark Architecture and Framework including Storage Management.
  • Experience in complete project life cycle (design, development, testing and implementation).
  • Experience writing SQL queries for MySQL and other relational databases.
  • Designed, developed, and implemented unit and scenario tests for the existing code base and for new functionality under development.
  • Ability to use version control software such as GIT and TFS.
  • Good knowledge of MongoDB and Cassandra.
  • Strong working experience with UNIX/Linux environments and shell scripting.
  • Experience in UNIX/Linux administration and DBA work.
  • Good knowledge of and working experience with Java.
  • Experience working in an Agile environment.
  • Experience in communicating with other technical teams and management to collect requirements, identify tasks, provide estimates, and meet production deadlines.
  • Experience with professional software engineering best practices for the full software development life cycle, including coding standards, code reviews, build processes, testing and operations.
  • Ability to understand requirements clearly and efficiently interact with the client.
  • Quick learner and excellent team player as well as individual contributor, with the ability to meet tight deadlines, work under pressure, and stay productive with new technologies.
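
A minimal PySpark sketch of the kind of RDBMS-to-HDFS ingestion and Hive partitioning/bucketing referenced above. The JDBC URL, credentials, database, table, and column names are hypothetical placeholders, and Spark's JDBC reader is used here purely for illustration (the work itself also involved tools such as Sqoop).

# Sketch: pull a table from MySQL over JDBC and land it in the data lake
# as a partitioned, bucketed Hive table (all names are placeholders).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("rdbms-to-datalake")
    .enableHiveSupport()
    .getOrCreate()
)

# Read the source table from the RDBMS (URL and credentials are placeholders).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db-host:3306/sales")
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "etl_password")
    .load()
)

# Write into the data lake as a Hive table, partitioned by order_date and
# bucketed by customer_id so joins and date-filtered scans prune data early.
(
    orders.write
    .partitionBy("order_date")
    .bucketBy(16, "customer_id")
    .sortBy("customer_id")
    .format("parquet")
    .mode("overwrite")
    .saveAsTable("datalake.orders")
)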

TECHNICAL SKILLS

Big Data/Hadoop Technologies: HDFS, YARN, MapReduce, Hive, Pig, Impala, Sqoop, Flume, Spark, Kafka, ZooKeeper, and Oozie

NoSQL Databases: HBase, Cassandra, MongoDB

Languages: C, Java, Scala, Python, SQL, PL/SQL, Pig Latin, HiveQL, JavaScript, Shell Scripting

Java & J2EE Technologies: Core Java, Servlets, Hibernate, Spring, Struts, JMS, EJB, RESTful

Application Servers: WebLogic, WebSphere, JBoss, Tomcat

Operating Systems: UNIX, Linux, Windows

Databases: Microsoft SQL Server, MySQL, Oracle, DB2

Build Tools: Jenkins, Maven, ANT

Development Tools: Microsoft SQL Studio, Eclipse, NetBeans, IntelliJ

Development Methodologies: Agile/Scrum, Waterfall

Version Control Tools: TFS, Git, GitLab

PROFESSIONAL EXPERIENCE

Sr Principal Software Engineer

Confidential

Responsibilities:

  • Expertise in UNIX/Linux shell scripting.
  • Involved in loading data from UNIX file system to HDFS.
  • Involved in managing and reviewing Hadoop log files.
  • Involved in running Hadoop streaming jobs to process terabytes of text data.
  • Developed HIVE queries for the analysts.
  • Implemented Partitioning, Dynamic Partitions, Buckets in HIVE.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Developed shell scripts to generate Hive CREATE TABLE statements from the data and load the data into the tables.
  • Exported result sets from Hive to MySQL using shell scripts.
  • Used TFS, Git, GitLab for version control.
  • Involved in setting up the Confidential Data lake.
  • Monitor System health and logs and respond accordingly to any warning or failure conditions.
  • Moved the data from Hive tables into Mongo collections.
  • Performed various performance optimizations such as using the distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins.
  • Involved in developing a BI-style application for services configuration.
  • Good experience with system testing in the PROD environment.
  • Good working experience with Off-shore teams.
  • Experience with SOAP and REST API development.
  • Optimized existing Hadoop algorithms using SparkContext, Spark SQL, DataFrames, and pair RDDs.
  • Implemented the ELK (Elasticsearch, Logstash, Kibana) stack to collect and analyze the logs produced by the Spark cluster.
  • Worked on a cluster of 40 nodes.
  • Analyzed the existing SQL scripts and designed the solution for implementation in PySpark.
  • Responsible for developing a data pipeline that extracts pricing data from Kafka and stores it in HDFS.
  • Developed Spark scripts using PySpark as per the requirements.
  • Used Spark SQL to transform data from Parquet files (a sketch appears after this list).
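
A minimal PySpark sketch of the kind of Kafka-to-HDFS pipeline and Spark SQL Parquet transformation described above. The topic, message schema, HDFS paths, and table names are hypothetical, and in practice the streaming ingest and the downstream transformation would typically run as separate jobs.

# Sketch: stream pricing events from Kafka into HDFS as Parquet, then
# transform the landed files with Spark SQL (all names are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StringType, DoubleType

spark = (
    SparkSession.builder
    .appName("pricing-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

schema = (
    StructType()
    .add("sku", StringType())
    .add("price", DoubleType())
    .add("event_date", StringType())
)

# Consume the Kafka topic and parse the JSON payloads.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "pricing-events")
    .load()
)
parsed = (
    raw.select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the parsed events in HDFS as date-partitioned Parquet files.
ingest = (
    parsed.writeStream
    .format("parquet")
    .option("path", "hdfs:///datalake/pricing")
    .option("checkpointLocation", "hdfs:///checkpoints/pricing")
    .partitionBy("event_date")
    .start()
)

# Downstream job: transform the landed Parquet data with Spark SQL.
spark.read.parquet("hdfs:///datalake/pricing").createOrReplaceTempView("pricing")
daily_avg = spark.sql(
    "SELECT event_date, sku, AVG(price) AS avg_price "
    "FROM pricing GROUP BY event_date, sku"
)
daily_avg.write.mode("overwrite").saveAsTable("datalake.pricing_daily_avg")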

Lead Software Engineer

Confidential

Responsibilities:

  • Developed utilities for reloading the database and deploying new builds.
  • Involved in creating a tool for comparing two databases (onsite and offshore DBs).
  • Designed and developed a keyword-driven automation framework with Selenium WebDriver and Java.
  • Automated the build deployment process.
  • Worked under the Agile methodology.
  • Designed and developed an automation framework for web services using Java and SOAP.
  • Involved in developing the Unit Test cases and executing them.
  • Involved in code coverage assessment.
  • Involved in creating POCs for automation assessment and framework design.
  • Designed and developed a Page Object Model automation framework using Selenium, C#, and the .NET Framework (a sketch appears after this list).
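
An illustrative Page Object Model sketch. The frameworks above were built with Selenium WebDriver in Java and C#/.NET; this Python rendition only demonstrates the pattern, and the URL, locators, and credentials are hypothetical.

# Sketch: a page object wrapping a login screen, plus a test that uses it.
from selenium import webdriver
from selenium.webdriver.common.by import By


class LoginPage:
    """Encapsulates the login page's locators and user actions."""

    URL = "https://example.com/login"
    USERNAME = (By.ID, "username")
    PASSWORD = (By.ID, "password")
    SUBMIT = (By.CSS_SELECTOR, "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(*self.USERNAME).send_keys(username)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
        return self


def test_valid_login():
    driver = webdriver.Chrome()
    try:
        LoginPage(driver).open().login("test_user", "test_password")
        assert "Dashboard" in driver.title  # placeholder post-login check
    finally:
        driver.quit()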
