
Lead Software Engineer Resume


SUMMARY:

  • Certified Hadoop/Big Data/Spark/Azure/Java engineer with over a decade of experience across Java/J2EE, Hadoop, Spark, data science, cloud, and WMB, focused on exploring, learning, and extracting the best from modern software development and Big Data engineering technologies.
  • Expertise in Hadoop and its ecosystem technologies (HDFS, MapReduce, Hive, Pig, Sqoop, Oozie, Kafka) as well as Spark.
  • Experience applying data science models with Big Data technologies.
  • Experience with the Azure cloud environment.
  • Application knowledge in Financial and Retail domains.
  • Experience using Java technologies such as Portlets, Servlets, JSP, and JDBC, along with the Struts, Spring, Hibernate, Web Services, P2G, and ES frameworks.
  • Experience using middleware services such as GSI, IBM WebSphere Message Broker, IBM WebSphere MQ, and Kafka.
  • Experience using Adobe LiveCycle Designer to develop dynamic PDFs.
  • Excellent ability to quickly learn and implement new technologies and methodologies.
  • Proven skills in solving technical issues and delivering technical components on time with zero defects.
  • Experience in all the phases of software development including requirements gathering and designing.
  • Excellent communication and interpersonal skills.
  • Good team player, experienced in working with small, medium, and large teams on software development projects.

TECHNICAL SKILLS:

Languages: Java, Hive, Pig, Python, Scala

Java Technologies: JDBC, JNDI, Servlets, JSP, IBM Portlets, JavaScript

Frameworks: P2G, ES 5.3, Struts 1.2, Spring 3.0.5, Hibernate 3.5.1, Web Services

Middleware Technologies: GSI, IBM WebSphere Message Broker 7.0, IBM WebSphere MQ Explorer 7.0

Database: Oracle9i, Oracle10g, MySQL, AS400, DB2

Web Servers: Tomcat 5.0, Tomcat 6.0.32

Application Servers: Web Logic 8.0, WebSphere Application Server 5.1.2, WebSphere Portal Server 6.0

Other Tools: Adobe LiveCycleDesigner 7.1, Git, MKS, VSS, RTC, Log4J, Ant, Maven, Teamsite, Sonar, CAWA

IDE: Eclipse 3.1, Eclipse Helios, WSAD 5.1.2, RAD 7.0

Operating System: Linux, Windows 7/NT/98/2000/XP

Hadoop Ecosystem: Hadoop, HDFS, MR1, MR2, Hive, Pig, Java, Python, Sqoop, Oozie, ZooKeeper, Kafka

Hadoop Distributions: HDP 1.3, HDP 2.4.2, CDH5.3.2

Apache Spark: Spark Core, Spark Streaming, Spark SQL, RDDs, DataFrames, Datasets

Azure Cloud: Blob Storage, HDInsight, ADW

File Formats & Tuning: Text, SequenceFile, Avro, ORC, JSON, Parquet; compression techniques; performance tuning, optimization, and customization

WORK EXPERIENCE:

Lead Software Engineer

Confidential

Responsibilities:

  • Participating in high-level and low-level requirements analysis with business stakeholders and peers to understand the requirements.
  • Working with architecture teams and senior leads to design solutions for the requirements.
  • Performing feasibility tests and POCs per the business requirements before starting development.
  • Developing jobs and flows using Big Data, Spark, and Azure cloud technologies as per the design.
  • Developing jobs and scripts based on inputs from the Infrastructure, Data Science, and Business Analysis teams along with Business Intelligence management.
  • Testing in various environments, fixing issues, and tuning the applications to meet SLAs.
  • Productionizing the projects using the Oozie and CAWA schedulers.
  • Fixing cluster space, performance, and resource issues caused by our applications.
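Productionizing a job with Oozie, as mentioned above, typically means wrapping it in a workflow definition. A minimal sketch of such a workflow (the workflow name, script name, and parameters are illustrative, not taken from the project):

```xml
<workflow-app name="daily-ingest" xmlns="uri:oozie:workflow:0.5">
  <start to="hive-load"/>
  <!-- Single Hive action; real workflows chain Sqoop, Spark, and shell actions. -->
  <action name="hive-load">
    <hive xmlns="uri:oozie:hive-action:0.2">
      <job-tracker>${jobTracker}</job-tracker>
      <name-node>${nameNode}</name-node>
      <script>load_partition.hql</script>
      <param>run_date=${runDate}</param>
    </hive>
    <ok to="end"/>
    <error to="fail"/>
  </action>
  <kill name="fail">
    <message>Hive load failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
  </kill>
  <end name="end"/>
</workflow-app>
```

In practice this workflow would be driven by an Oozie coordinator (or, per the resume, CAWA) that supplies `runDate` on a schedule.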

Environment: HDP 1.3, HDP 2.4.2, Spark Core, Spark Streaming, Spark SQL, Spark DataFrames, Hadoop, Hive, Pig, Java, Python, Sqoop, Kafka, SeeBeyond JMS, IBM MQ, RabbitMQ, CAWA, Oozie, shell scripting, Azure Cloud (Blob Storage, HDInsight, ADW)

Lead Software Engineer

Confidential

Responsibilities:

  • Requirements analysis with business stakeholders and peers.
  • Preparing design documents (request-response mapping documents, Hive mapping documents).
  • Implementing the design using Hive scripts, UDFs, and Sqoop scripts.
  • Preparing Oozie workflows and Korn shell jobs, and pushing code to the DEV, ANA, and PROD environments.
  • Testing in various environments and providing reports to the business.
  • Performing various POCs as per business requirements.
  • Automating projects using Oozie and shell jobs.
  • Generating reports using Apache POI.
  • Implementing performance optimization techniques.
  • Providing feasibility reports on various file formats and compression techniques.
  • Developing in-house modules using Spark.
  • Creating metadata in DB2 for Solr API projects.
  • Preparing Solr queries.
  • Developing a Java API that mediates between Solr and external consumers.
  • Fixing bugs and providing PROD support.
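One bullet above describes a Java API that mediates between Solr and the external world. A minimal sketch of the query-building side of such an API (the class name, collection name, and field names are hypothetical, not from the project):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Hypothetical helper: builds Solr /select URLs for an API layer in
// front of Solr. Collection and query values below are illustrative.
public class SolrQueryBuilder {

    /** Builds a Solr /select URL with a URL-encoded query string. */
    public static String buildSelectUrl(String baseUrl, String collection,
                                        String query, int rows) {
        String q = URLEncoder.encode(query, StandardCharsets.UTF_8);
        return baseUrl + "/" + collection + "/select?q=" + q
                + "&rows=" + rows + "&wt=json";
    }

    public static void main(String[] args) {
        // The resulting URL could then be fetched with java.net.http.HttpClient.
        System.out.println(buildSelectUrl(
                "http://localhost:8983/solr", "orders", "status:SHIPPED", 10));
    }
}
```

Keeping URL construction in one place like this makes the encoding rules testable without a running Solr instance.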

Environment: CDH5.3.2, Hadoop, Spark, YARN, Pig, Hive, Sqoop, Oozie, Java, DB2, Solr, Kafka

Sr. Software Engineer

Confidential

Responsibilities:

  • As a senior developer, responsible for end-to-end delivery of critical and new modules.
  • Involved in requirements meetings with the business and preparing design reports.
  • Working with the BE, GSI, and MQ teams to prepare message structures and transformation rules.
  • Assisting with data capacity planning and node forecasting.
  • Preparing design documents (sequence diagrams, message mappings, screen flow documents, message definitions, and external and internal design reports) and cascading them to the team.
  • Performing POCs and providing solutions to various issues, including performance tuning, storage options, and issues with different compression techniques.
  • Analyzing the latest versions, solutions, and software in the Hadoop ecosystem and proposing them, with proven solutions, to the stakeholders for a better project architecture.
  • Developing solutions to process data in different formats (text, Avro, sequence files); developing Pig and Hive scripts to parse raw data, populate staging tables, and store the refined data in partitioned Hive tables; writing Sqoop jobs to export data from Hive to MySQL; performing POCs in HBase for storing URLs; developing Hive utility classes using UDFs; and developing validation frameworks using MRUnit and PigUnit.
  • Coding typical Java modules such as the Download, ES, and Internet Banking modules and Adobe Forms for different countries; developing services for Message Broker and modules for the AS Server.
  • Performing code reviews for peers' modules and fixes.
  • Debugging and testing software modules, fixing bugs, and completing enhancements on time.
  • Building and deploying code to various servers (DEV, SIT, UAT, and Simulation) as per business decisions.
  • Providing post-implementation and production support after each release.
  • Providing technical assistance to management.
  • Training newly joined team members on the project and the organization.
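The staging-to-partitioned-table flow described above can be sketched in HiveQL, with the corresponding Sqoop export alongside (database, table, and column names are hypothetical, not from the project):

```sql
-- Illustrative staging-to-refined load into a partitioned Hive table.
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE refined.orders PARTITION (load_date)
SELECT order_id, customer_id, amount, load_date
FROM   staging.orders_raw
WHERE  load_date = '${hivevar:run_date}';

-- Export the refined partition to MySQL (run from the shell, not Hive):
-- sqoop export --connect jdbc:mysql://dbhost/reports \
--   --table orders \
--   --export-dir /warehouse/refined.db/orders/load_date=2016-01-01 \
--   --num-mappers 4
```

Partitioning by `load_date` keeps each day's load idempotent: rerunning the job overwrites only that partition, and the Sqoop export targets the same partition directory.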

Environment: Big Data: Hadoop, Apache Pig, Hive, Sqoop, Oozie

Database: MySQL, Oracle 9i, AS400.

Middleware: IBM WebSphere Message Broker 7.0, IBM WebSphere Message Broker Toolkit 7.0, IBM WebSphere MQ Explorer 7.0, ESQL

Front End: Java 5.0, XML, JavaScript, AdobeScript, Spring 3.0.5, Hibernate 3.5.1, P2G Framework, ES Framework, SOAP, RESTful Web Services, IBM Portlets, Servlets, JSPs, WSAD 5.1.2, RAD 6, RAD 7, Adobe LiveCycle Designer 7.1

Software Engineer

Confidential

Responsibilities:

  • Coding and developing Java modules for Snapfish.
  • Debugging and testing software modules.
  • Fixing bugs.
  • Documenting the modules.

Environment: Java, JSP, Servlets, Struts 1.2, JavaScript, Tomcat 5.0, MyEclipse, Oracle 9i
