Tech Lead Resume
SUMMARY
- 11 years and 8 months of experience in system analysis, design, development, testing, implementation, and maintenance of business applications using the Hadoop ecosystem, mainframe applications, and Java.
- Very good understanding of Hadoop architecture, HDFS, the MapReduce 2.0 programming model, and the Hadoop 2.0 daemons: NameNode, DataNode, ResourceManager, NodeManager, and Proxy Server.
- Working experience with the MapReduce programming model and the Hadoop Distributed File System (HDFS).
- Hands-on experience installing, configuring, and using Hadoop ecosystem components such as Hive, HBase, Pig, Sqoop, Flume, Oozie, and Spark for scalability, distributed computing, streaming, and high-performance computing.
- Experience in managing and reviewing Hadoop log files.
- Experience in importing and exporting data with Sqoop between HDFS and relational database systems.
- Experience in analyzing data using HiveQL, Pig Latin and custom MapReduce programs in Java.
- Extending Hive and Pig core functionality by writing custom UDFs.
- Experienced in migrating MapReduce programs to Spark transformations using Scala.
- Very good understanding and working knowledge of object-oriented programming (OOP) and multithreading.
- Experience in Core Java, Web Services, JDBC, MySQL, Oracle, DB2, IMS-DB.
- Expertise in creating databases, users, tables, views, stored procedures, functions, joins, and indexes in Oracle DB.
- Expert in building, deploying, and maintaining applications.
- Very good data analysis and data validation skills, with good exposure to the entire Software Development Lifecycle (SDLC), CoE-centric, and Agile methodologies. Good at preparing SOWs and amendments.
- Acted as Scrum Master for Product teams with a focus on guiding teams towards improving the way they work.
- Strong skills in resource loading (recruiting/staffing), team building, developing project scope (budget, timelines, and delivery dates), cost avoidance, continuous design improvement, and customer relationships.
- Involved in review meetings with project managers, developers, and business associates.
TECHNICAL SKILLS
Big Data Ecosystem: HDFS, MapReduce 2.0, YARN, Hive, Pig, Sqoop, HBase, Oozie, Flume, Spark, Scala, Hue, Python
Programming Languages: JAVA, PL/SQL, Bash scripting, COBOL, PL/1, JCL, Unix/Linux shell scripts
Databases: IMS-DB, DB2, MS SQL Server, Oracle, MS Access and MySQL
Platforms: Windows 95/98/NT/2K/XP/Win7, UNIX, Linux, MVS, Z/OS
IDEs and Tools: Eclipse, PuTTY, MS Visio
Mainframe Tools & Utilities: ENDEVOR, XPEDITER, FILE-AID, IMS-FILEAID, SPUFI, QMF, ABEND-AID, IBM Data Studio
Development Approach: SDLC, Agile, CoE
Operating Systems: Windows NT/9x/2000, UNIX, Linux, z/OS 1.6
Defect Tracking Tools: HP Quality Center (QC), HP Application Lifecycle Management (ALM)
Version Control: SVN, Endevor and Changeman
Other tools: PuTTY, WinSCP, VersionOne
PROFESSIONAL EXPERIENCE
Confidential
Tech Lead
Environment: Linux, MapReduce 2, HBase, Hive, Pig, Sqoop, Spark, Scala, Oozie, Kafka, PL/SQL, Windows NT, UNIX shell scripting, and SQL Server.
Responsibilities:
- Led the EDW application, directly interacting with the operational users in Client Statements to gather functional specifications and translate them into technical specifications.
- Involved in software architecture, detailed design, coding, testing, and creation of functional specs of the application, especially for insert/message/special-handling/forcing.
- Developed MapReduce programs to parse the raw data, populate staging tables, and store the refined data in partitioned tables in HBase.
- Responsible for building scalable, distributed data solutions using Hadoop.
- Designed and developed Spark applications in Scala to compare Spark's performance with Hive and Pig.
- Developed Spark scripts using Scala shell commands as per client requirements.
- Installed and configured Hadoop and developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
- Developed Pig and Hive scripts for transforming/formatting data as per business rules.
- Used data ingestion tools such as Sqoop to import data from Oracle to HDFS and vice versa.
- Involved in loading data from UNIX file system to HDFS.
- Hands-on experience writing Hadoop MapReduce jobs to implement core logic using the Java API, Pig scripts, and Hive queries.
- Built reusable Hive UDF libraries for business requirements, enabling users to apply these UDFs in Hive queries.
- Migrated the default Hive metastore from Derby to MySQL.
- Imported the processed data into the Hive warehouse, enabling business analysts and operations groups to write Hive queries.
- Good understanding of and exposure to Hadoop cluster administration.
- Used various performance optimization techniques to speed up processing.
- Developed a suite of unit test cases for Mapper, Reducer, and Driver classes using an MR unit-testing library.
- Supported end customers by answering their questions using Hive queries.
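The data-cleaning and aggregation MapReduce work described above can be sketched in Hadoop Streaming style. This is an illustrative Python sketch only, not the production code; the pipe-delimited record layout (account, region, amount) and field names are hypothetical:

```python
def clean_record(line):
    """Mapper-style step: parse a raw pipe-delimited record, drop
    malformed rows, and emit a (key, value) pair for aggregation."""
    fields = line.rstrip("\n").split("|")
    if len(fields) < 3:          # malformed row: skip
        return None
    _account, region, amount = fields[0], fields[1], fields[2]
    try:
        amount = float(amount)
    except ValueError:           # non-numeric amount: skip
        return None
    return region, amount

def reduce_by_key(pairs):
    """Reducer-style step: sum values per key."""
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0.0) + value
    return totals

if __name__ == "__main__":
    raw = ["A1|EAST|10.5", "A2|WEST|3.0", "bad-row", "A3|EAST|2.0"]
    pairs = [clean_record(line) for line in raw]
    print(reduce_by_key(p for p in pairs if p))
    # {'EAST': 12.5, 'WEST': 3.0}
    # In Hadoop Streaming, the mapper would instead read stdin and
    # emit tab-separated key/value lines for the shuffle phase.
```

In the actual jobs these steps ran as Java Mapper and Reducer classes; the sketch only shows the shape of the clean-then-aggregate logic.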
Confidential
Hadoop Data Integration Lead & Agile Coach
Environment: Linux, Hadoop, HBase, Hive, Pig, Sqoop, Oozie, PL/SQL, Windows NT, UNIX shell scripting.
Responsibilities:
- Expert in Hadoop configuration and MapReduce programming; designed and developed MapReduce jobs for data integration of Assortment Optimization.
- Responsible for building scalable, distributed data solutions using Hadoop.
- Installed and configured Hadoop and developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
- Developed Pig and Hive scripts for transforming/formatting data as per business rules.
- Used data ingestion tools such as Sqoop to import data from Oracle to HDFS and vice versa.
- Involved in loading data from the UNIX file system to HDFS.
- Hands-on experience writing Hadoop MapReduce jobs to implement core logic using the Java API, Pig scripts, and Hive queries.
- Built reusable Hive UDF libraries for business requirements, enabling users to apply these UDFs in Hive queries.
- Good understanding of and exposure to Hadoop cluster administration.
- Used various performance optimization techniques to speed up processing.
- Developed a suite of unit test cases for Mapper, Reducer, and Driver classes using an MR unit-testing library.
- Supported end customers by answering their questions using Hive queries.
- Experience in software product development across all phases of the Project Development Life Cycle (PDLC). In-depth experience in UI technologies such as HTML and JavaScript.
- End-to-end planning of project resources and timeline requirements.
- Troubleshooting and resolving high-priority issues.
- Managed and reviewed Hadoop log files.
- Gained in-depth understanding on various merchandising concepts.
- Shared responsibility for administration of Hadoop, Hive and Pig.
- Gained hands-on experience in shell scripting.
- Experienced in defining job flows using Oozie.
- Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics.
- Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
- Tested raw data and executed performance scripts.
- Coached team members on Agile principles and provided general guidance on the methodology.
- Engaged with other Scrum Masters to increase the effectiveness of the application of Scrum in the organization.
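The trend-spotting Hive queries described above followed a common pattern: join fresh daily data against EDW reference tables and flag deviations from historical metrics. A minimal sketch of that pattern, using in-memory SQLite in place of Hive, with hypothetical table and column names:

```python
import sqlite3

# Stand-in for the Hive warehouse: a daily feed table and an EDW
# reference table of historical averages (all names illustrative).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE daily_sales (item TEXT, qty INTEGER);
    CREATE TABLE edw_avg_sales (item TEXT, avg_qty REAL);
    INSERT INTO daily_sales VALUES ('widget', 120), ('gadget', 30);
    INSERT INTO edw_avg_sales VALUES ('widget', 80.0), ('gadget', 35.0);
""")

# Flag items selling above their historical average -- the kind of
# emerging-trend signal a market analyst would look for.
rows = conn.execute("""
    SELECT d.item, d.qty, e.avg_qty
    FROM daily_sales d
    JOIN edw_avg_sales e ON d.item = e.item
    WHERE d.qty > e.avg_qty
""").fetchall()
print(rows)   # [('widget', 120, 80.0)]
```

In Hive the same join would run in HiveQL over partitioned warehouse tables; SQLite is used here only so the example is self-contained.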
Confidential
Java Developer & Scrum Master
Environment: Java, J2EE, XML, AJAX, UNIX shell scripts, Apache Ant, Adobe Flex, JAXB, Apache Ivy, JDBC, Eclipse, SQL, SQL Server 2000, Windows NT, UNIX
Responsibilities:
- Led the IMN application, directly interacting with the operational users in Client Statements to gather functional specifications and translate them into technical specifications.
- Involved in software architecture, detailed design, coding, testing, and creation of functional specs of the application, especially for insert/message/special-handling/forcing.
- Developed using new Java 1.5 features such as generics, the enhanced for loop, and enums. Developed the functionality using Agile methodology.
- Used multithreading concepts in Java to design the application to support multiple users processing inserts/messages during month-end.
- Developed a Java exception-handling framework for the whole system.
- Created wrapper classes for Java collections.
- Implemented the Hibernate ORM framework in place of traditional JDBC code.
- Created and injected Spring services, Spring controllers, and DAOs to achieve dependency injection and to wire objects of business classes.
- Integrated the IMN application with upstream applications through JMS, WebSphere MQ, SOAP-based web services, and XML.
- Designed the logical and physical data model, generated DDL scripts, and wrote DML scripts for the SQL Server database.
- Tuned SQL statements, Hibernate mapping, and WebSphere application server to improve performance, and consequently met the SLAs.
- Prepared builds, deployed, and coordinated with the release management team to ensure the proper process was followed during releases.
- Provided end-to-end support for testing activities during system testing and UAT.
- Provided production support for the application and handled critical issues in a timely manner by analyzing and writing SQL queries in SQL Server.
- Continuously learned Agile/Scrum techniques and shared findings with the team.
- Final review of all deliverables.
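The month-end multi-user processing described above relied on a worker-pool pattern: a shared queue feeds worker threads, and a lock serializes writes to shared state. A conceptual sketch only (the production code was Java; this Python version and all names are illustrative):

```python
import queue
import threading

work = queue.Queue()     # shared queue of inserts/messages
results = []             # shared state, protected by the lock
lock = threading.Lock()

def worker():
    """Process messages until a None sentinel arrives."""
    while True:
        msg = work.get()
        if msg is None:          # sentinel: shut this worker down
            work.task_done()
            break
        with lock:               # serialize writes to shared state
            results.append(msg.upper())
        work.task_done()

# Start a small pool, enqueue work, then one sentinel per worker.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for msg in ["insert-1", "message-2", "insert-3"]:
    work.put(msg)
for _ in threads:
    work.put(None)
work.join()                      # wait until every item is processed
print(sorted(results))           # ['INSERT-1', 'INSERT-3', 'MESSAGE-2']
```

The Java version used the same idea with its own threading primitives (synchronized blocks and worker threads) rather than this exact API.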
Confidential
Database Analyst & Application DBA
Environment: IBM S/390, z/OS, Windows NT, JCL, DB2, IMS DB, COBOL, ENDEVOR, XPEDITER, FILEAID, SPUFI, ISPF, Visio Client, CA-7, VSAM, SAM.
Responsibilities:
- Offer DB2 DBA support for the application development team.
- Ensure integrity, availability and performance of DB2 database systems by providing technical support and maintenance.
- Monitor database performance and recommend improvements for operational efficiency.
- Assist in capacity planning, space management, and data maintenance activities for database systems.
- Perform database enhancement and modification as per the requirements.
- Perform database recovery and backup tasks on a daily and weekly basis.
- Develop and maintain patches for database environments.
- Identify and recommend database techniques to support business needs.
- Maintain database security and disaster recovery procedures.
- Perform troubleshooting and maintenance of multiple databases.
- Resolve any database issues in an accurate and timely fashion.
- Monitor databases regularly to check for any errors such as existing locks and failed updates.
- Oversee utilization of data and log files.
- Manage database logins and permissions for users.
Confidential
Analyst / Developer
Environment: COBOL, PL/I, TELON, JCL, DB2, IMS-DB, FILE AID, XP-EDITOR, ENDEVOR, SDSF, ISPF, ALCHEMIST
Responsibilities:
- Reviewing the requirements sent by the Client/ Onsite coordinators.
- Coding as per the Parker standards.
- Preparing the Impact Analysis Documents and Test plans (UTP).
- Involved in Analysis, Design, Coding and Testing
- Analysis of abended programs and preparation of analysis reports.
- Documentation, code reviews, unit testing, integration testing.
- Participated in Implementation & post production product warranty activities.
