Project Lead / Senior Developer Resume
EXPERIENCE SUMMARY:
- 10 years of experience in all phases of the SDLC, including data analysis, design, development, testing, troubleshooting, and bug fixing using Oracle SQL, PL/SQL, shell scripting, C, and C++ on Unix/Linux platforms; this includes around 3 years of experience in Big Data analysis using the Hadoop Distributed File System (HDFS), the MapReduce framework, and the wider Hadoop ecosystem.
- Three years' experience installing, configuring, and testing Hadoop ecosystem components.
- Created proofs of concept from scratch illustrating how data integration techniques can meet specific business requirements while reducing cost and time to market.
- Capable of processing large sets of structured, semi-structured, and unstructured data and supporting systems application architecture.
- Able to assess business rules, collaborate with stakeholders and perform source-to-target data mapping, design and review.
- Familiar with data architecture including data ingestion pipeline design, Hadoop information architecture, data modeling and data mining, machine learning and advanced data processing. Experience optimizing ETL workflows.
- Excellent understanding of Hadoop architecture, including its daemons and components such as HDFS, YARN, the ResourceManager, NodeManager, NameNode, and DataNode.
- Experience in DevOps, working with data scientists to ensure availability of needed data sets.
- Experience in creating data pipelines to move data between RDBMS and HDFS in both directions for improved business intelligence and reporting.
- Good knowledge in Apache Spark for data analysis and data staging.
- Experience in writing stored procedures that call both internal and public web services via XML over UTL_HTTP requests and responses using Oracle built-in packages; extracted data from XML files and loaded it into the database. Experience in SQL and PL/SQL tuning and query optimization with tools such as SQL Trace, Explain Plan, and DBMS_PROFILER; extensively used packages such as DBMS_STATS, UTL_FILE, and DBMS_SCHEDULER.
- Experience with data flow diagrams, data dictionaries, database normalization techniques, and entity-relationship modeling and design. Effectively made use of table functions, indexes, table partitioning, collections, analytic functions, materialized views, query rewrite, and transportable tablespaces.
- Partitioned large tables using the range partitioning technique. Experience with performance tuning for Oracle RDBMS using Explain Plan and hints.
- Ability to work in high-pressure environments delivering to and managing stakeholder expectations.
- Strong analytical and problem-solving skills with great interpersonal skills and the ability to work as part of a team. Exceptional ability to learn and master new technologies and to deliver under tight deadlines.
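As one concrete illustration of the XML-extraction-and-load work summarized above: a minimal Python sketch of parsing an XML payload and inserting rows into a database. The original work used Oracle PL/SQL; here sqlite3 stands in for Oracle, and the waybill tag and table names are hypothetical.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Sample payload standing in for a web-service response; the tag names
# (waybill, origin, destination) are illustrative, not from the original system.
XML_DOC = """
<waybills>
  <waybill id="1"><origin>JAX</origin><destination>MIA</destination></waybill>
  <waybill id="2"><origin>TPA</origin><destination>ORL</destination></waybill>
</waybills>
"""

def load_waybills(conn, xml_text):
    """Parse the XML and insert one row per <waybill> element."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS waybill (id INTEGER, origin TEXT, dest TEXT)"
    )
    root = ET.fromstring(xml_text)
    rows = [
        (int(w.get("id")), w.findtext("origin"), w.findtext("destination"))
        for w in root.iter("waybill")
    ]
    conn.executemany("INSERT INTO waybill VALUES (?, ?, ?)", rows)
    return len(rows)

conn = sqlite3.connect(":memory:")
print(load_waybills(conn, XML_DOC))  # prints 2, the number of rows loaded
```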
TECHNICAL SKILLS:
Big Data Ecosystem: Hadoop, MapReduce, HDFS, HBase, Hive, Scala, Spark, Pig, Sqoop, Cassandra, Oozie, Flume.
Languages: C, C++, PL/SQL, Pro*C.
Operating Systems: WINDOWS 9X/NT/2000/XP/7, UNIX (AIX).
Development Tools: Toad, PuTTY, Exceed, Oracle PL/SQL Developer, SQL*Plus, PQEDIT, MQMON
Databases: Oracle 10g, 11g, and 12c; DB2; MySQL; NoSQL
Client/Server: Web services, SoapUI.
Messaging Systems: IBM MQ Series.
Configuration Tools: Rational ClearCase, Rational ClearQuest, Atlassian Git, Atlassian Bitbucket
Methodologies: Agile, UML, Design Patterns
Domain Knowledge: RAILROAD TRANSPORTATION
PROFESSIONAL SUMMARY:
Confidential, FL
Project Lead / Senior Developer (C++, Oracle PL/SQL, Hadoop)
Technologies: Hadoop, Sqoop, Pig, Hortonworks, Shell Scripting, Git, Bitbucket, Oracle 10g, SQL/PL SQL, C++, Pro*C, IBM MQ Series, CICS, TXSeries, Data Structures, AIX UNIX.
Role / Responsibility and contribution:
- Actively participated in the Sprint Planning, Sprint Review and Sprint Retrospective. Effectively involved in the design meetings and formulation of the solutions for all the tasks for the sprint.
- Processed data using Pig and Hive and loaded data using Sqoop.
- Worked with data scientists to ensure availability of needed data sets.
- Coded, tested, and documented new or modified data systems to create robust and scalable applications for data analytics.
- Created database objects like tables, views, sequences, synonyms, indexes, functions, procedures, cursor and packages using Oracle tools like SQL*Plus, SQL Developer and Toad.
- Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the EDW.
- Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics.
- Designed and developed several C/C++ programs for the Interline Settlement System, executing different modules of the system, and selected and integrated the Big Data tools and frameworks required to provide requested capabilities.
- Wrote C++ code to access MQs to consume data, then process it and update the Oracle database.
- Developed a Pro*C batch program to process large data volumes and send responses to EDI messages.
- Designed and developed UNIX shell scripts to handle pre- and post-session processes.
- Responsible for providing technical assistance on the developed modules to business partners as required; all deliverables were delivered within the agreed timelines.
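The MapReduce parsing-and-staging flow described in the bullets above can be sketched as a plain-Python map phase and reduce phase. The record layout and partition keys here are illustrative, not taken from the original jobs.

```python
from collections import defaultdict

# Raw pipe-delimited records: date|station|count. The field layout is a
# made-up example; the real feeds and schemas are not shown in the resume.
RAW = [
    "2015-03-01|JAX|4",
    "2015-03-01|MIA|2",
    "2015-03-02|JAX|5",
    "bad-record",
]

def map_phase(lines):
    """Parse each raw line, skipping malformed records, and emit (partition_key, value)."""
    for line in lines:
        parts = line.split("|")
        if len(parts) != 3:
            continue  # a real job would route this to a reject file
        date, station, count = parts
        yield (date, station), int(count)

def reduce_phase(pairs):
    """Aggregate values per key, like a reducer feeding a partitioned staging table."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

staged = reduce_phase(map_phase(RAW))
print(staged[("2015-03-01", "JAX")])  # prints 4
```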
Confidential, FL
Project Lead / Developer (Hadoop)
Technologies: Hadoop, Hive, Pig, Sqoop, Oozie, Flume, Hortonworks, Oracle, SQL/PL SQL, Rational ClearCase, ClearQuest, Git, Bitbucket, Cassandra.
Role / Responsibility and contribution:
- Involved in data modeling, analyzing ER diagrams, and preparing mapping documents.
- Developed MapReduce programs to parse the raw data, populate staging tables and store the refined data in partitioned tables in the EDW.
- Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics.
- Developed Spark scripts using the Scala shell as required.
- Loaded data into Spark RDDs and performed in-memory computation to generate output responses.
- Developed CQL queries to interact with the Cassandra NoSQL database.
- Enabled speedy reviews and first mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System and PIG to pre-process the data.
- Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
- Managed and reviewed Hadoop log files.
- Tested raw data and executed performance scripts.
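The Spark RDD work above (loading data into RDDs and computing in memory) follows the usual map/reduceByKey pattern. Since the original Scala scripts are not shown, here is a toy Python stand-in that mimics only the shape of the RDD API; it is not Spark and has no partitioning or laziness.

```python
class MiniRDD:
    """Toy stand-in for a Spark RDD: a list plus chainable transformations.
    Mimics the API shape only; real RDDs are lazy, partitioned, and distributed."""
    def __init__(self, data):
        self.data = list(data)
    def map(self, f):
        return MiniRDD(f(x) for x in self.data)
    def filter(self, p):
        return MiniRDD(x for x in self.data if p(x))
    def reduceByKey(self, f):
        acc = {}
        for k, v in self.data:
            acc[k] = f(acc[k], v) if k in acc else v
        return MiniRDD(acc.items())
    def collect(self):
        return list(self.data)

# "Load" some station,count lines and aggregate counts per station in memory.
lines = MiniRDD(["MIA,2", "JAX,5", "MIA,3"])
result = dict(
    lines.map(lambda s: s.split(","))
         .map(lambda p: (p[0], int(p[1])))
         .reduceByKey(lambda a, b: a + b)
         .collect()
)
print(result)  # prints {'MIA': 5, 'JAX': 5}
```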
Confidential, FL
Project Lead / Developer (Hadoop)
Technologies: Hadoop, Hive, HDFS, Pig, Flume, Sqoop, HBase, Scala, UNIX, Shell Scripting, Oracle PL/SQL, Git, Bitbucket.
Role / Responsibility and contribution:
- Hands-on experience developing applications using Hadoop ecosystem components such as MapReduce, Hive, Spark, Pig, Flume, Sqoop, and HBase.
- Assessed business rules, worked on source to target data mappings and collaborated with the stakeholders.
- Familiarity with the Hadoop information architecture, design of data ingestion pipeline, data mining and modeling, advanced data processing and machine learning. Experience in optimizing ETL workflows.
- Handled structured and unstructured data and applied ETL processes.
- Wrote MapReduce procedures for data extraction, transformation, and aggregation from multiple file formats, including XML, JSON, CSV, and other compressed file formats.
- Expertise in data migration from various databases to Hadoop HDFS and Hive using Sqoop.
- Worked with Hive's data warehousing infrastructure to analyze large structured datasets.
- Experienced in creating Hive schema, external tables and managing views.
- Responsible for Data loading involved in creating Hive tables and partitions based on the requirement.
- Executed Map Reduce programs to cleanse data in HDFS gathered from heterogeneous data sources to make it suitable for ingestion into Hive schema for analysis.
- Developed Spark applications in Scala using the DataFrame and Spark SQL APIs.
- Strong Knowledge on Architecture of Distributed systems and parallel processing, In-depth understanding of MapReduce programming paradigm.
- Imported data into HDFS using Sqoop, including incremental loading.
- Designed and developed MapReduce jobs to process logs and feed the data warehouse, load Hive tables for analytics, and store a daily data feed on HDFS for other teams' use.
- Developed automated shell scripts responsible for data flow, monitoring, and status reporting.
- Took on-call responsibility, responding whenever Hadoop jobs or clusters failed.
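The incremental Sqoop loading mentioned above hinges on a watermark, as in `sqoop import --incremental append --check-column id --last-value N`. Below is a minimal Python sketch of that idea only, with sqlite standing in for both the RDBMS source and the HDFS/Hive target; the `orders` schema is a made-up example.

```python
import sqlite3

# Two in-memory databases stand in for the RDBMS source and the Hadoop target.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
source.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(1, 10.0), (2, 20.0), (3, 30.0)])
target.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

def incremental_load(src, dst, last_value):
    """Copy only rows whose check column exceeds the stored watermark,
    then return the new watermark (unchanged if nothing was copied)."""
    rows = src.execute(
        "SELECT id, amount FROM orders WHERE id > ? ORDER BY id", (last_value,)
    ).fetchall()
    dst.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return rows[-1][0] if rows else last_value

watermark = incremental_load(source, target, 0)          # first run copies all 3 rows
source.execute("INSERT INTO orders VALUES (4, 40.0)")    # new row arrives at the source
watermark = incremental_load(source, target, watermark)  # second run copies only id 4
print(watermark)  # prints 4
```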
Confidential, FL
Project Lead / Senior Developer (C++, Oracle PL/SQL)
Technologies: Hadoop, Hive, HDFS, Pig, Sqoop, Scala, Spark, Oracle 10g, PL/SQL, C++, IBM MQ Series, CICS, AIX UNIX, Shell Scripting, Rational ClearCase, ClearQuest.
Role / Responsibility and contribution:
- Hands-on experience developing applications using Hadoop ecosystem components such as MapReduce, Hive, Spark, Pig, Flume, Sqoop, and HBase.
- Processed data using Hadoop MapReduce.
- Processed unstructured data using Pig and Hive and loaded data using Sqoop.
- Developed Spark scripts using the Scala shell as required.
- Loaded data into Spark RDDs and performed in-memory computation to generate output responses.
- Involved in Data modeling, analyzing ER Diagram and preparing Mapping documents.
- Created triggers for validation and auditing.
- Created procedures, packages, and functions.
- Fixed SQL query optimization issues using explain plans, hints, and parallel processing; fixed PL/SQL performance issues using BULK COLLECT, FORALL, dynamic SQL, analytic functions, and subqueries.
- Wrote C++ code to access MQ's to consume data and then process and update Oracle Database.
- Created SQL queries to validate and reconcile data in tables.
- Uploaded files using the UTL_FILE package.
- Wrote a custom web service call in Oracle using UTL_HTTP to consume PC*Miler web service data hosted on AWS, to get mileage information between two junctions and store it in the Oracle database.
- Involved in SQL, PL/SQL, and application tuning using tools such as EXPLAIN PLAN.
- Implemented several UNIX shell scripts, executed by CA7, to call Oracle procedures and then FTP reports to mainframe and Windows servers.
- Created numerous UNIX shell scripts to automate processes.
- Created and used XML for data management and transmission.
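The PC*Miler mileage call above was implemented in PL/SQL with UTL_HTTP; the request-build-and-parse pattern it follows can be sketched in Python. The endpoint URL, parameter names, and response tags below are hypothetical placeholders, and a canned string stands in for the HTTP round trip.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

def build_mileage_url(origin, destination):
    """Build the request URL; 'example.com' and the parameter names are placeholders."""
    query = urlencode({"origin": origin, "dest": destination})
    return f"https://example.com/pcmiler/mileage?{query}"

def parse_mileage(xml_text):
    """Pull the mileage value out of an XML response body."""
    root = ET.fromstring(xml_text)
    return float(root.findtext("miles"))

# A canned response stands in for the real HTTP call and its XML payload.
canned = "<response><miles>312.4</miles></response>"
print(build_mileage_url("JAX", "MIA"))
print(parse_mileage(canned))  # prints 312.4
```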
Confidential, FL
Project Lead / Senior Developer (C++, Oracle PL/SQL)
Technologies: Oracle 10g, PL/SQL, C++, IBM MQ Series, CICS, AIX UNIX, Shell Scripting, Rational ClearCase, ClearQuest.
Role / Responsibility and contribution:
- Prepared functional design documents, requirements documents, and technical design documents.
- Followed Agile scrum Methodology with a sprint of 2 weeks.
- Created numerous procedures, packages, and functions, and wrote triggers for validation and auditing.
- Fixed SQL query optimization issues using explain plans, analytic functions, and subqueries.
- Created SQL queries to validate and reconcile data in tables.
- Created Oracle stored procedures to call web services.
- Uploaded files using the UTL_FILE package.
- Involved in SQL, PL/SQL, and application tuning using tools such as EXPLAIN PLAN.
- Implemented several UNIX shell scripts, executed by CA7, to call Oracle procedures and then FTP reports to mainframe and Windows servers.
- Created numerous UNIX shell scripts to automate processes.
- Created and used XML for data management and transmission via Oracle stored procedures.
Confidential, FL
Senior Developer (C++, Oracle PL/SQL)
Technologies: Oracle 10g, PL/SQL, C++, IBM MQ Series, CICS, AIX UNIX, Shell Scripting, Rational ClearCase, ClearQuest
Role / Responsibility and contribution:
- Prepared functional design documents, requirements documents, and technical design documents.
- Created triggers on Oracle tables for validation and auditing.
- Created Oracle procedures, packages, and functions to be used by the UI application.
- Fixed SQL query optimization issues using explain plans, hints, and parallel processing; fixed PL/SQL performance issues using BULK COLLECT, FORALL, dynamic SQL, analytic functions, and subqueries.
- Involved in SQL, PL/SQL, and application tuning using tools such as EXPLAIN PLAN.
- Implemented several UNIX shell scripts, executed by CA7, to call Oracle procedures and then FTP reports to mainframe and Windows servers.
- Created UNIX shell scripts to automate manual processes.
Confidential, FL
Senior Developer (C++, Oracle PL/SQL)
Technologies: Oracle 10g, PL/SQL, C++, IBM MQ Series, CICS, AIX UNIX, Shell Scripting, Rational ClearCase, ClearQuest.
Role / Responsibility and contribution:
- Prepared functional design documents, requirements documents, and technical design documents.
- Created numerous procedures, packages, and functions to be called by various other internal applications sharing the same database and by front-end applications.
- Fixed SQL query optimization issues using explain plans, hints, and parallel processing; fixed PL/SQL performance issues using BULK COLLECT, FORALL, dynamic SQL, analytic functions, and subqueries.
- Created SQL queries to validate and reconcile data in tables.
- Wrote procedures to call web services using the UTL_HTTP package to consume data from other railroads.
- Involved in SQL, PL/SQL, and application tuning using tools such as EXPLAIN PLAN.
- Created numerous UNIX shell scripts to automate processes.
Confidential, FL
Developer (C++, Oracle PL/SQL)
Technologies: Oracle 10g, PL/SQL, C++, IBM MQ Series, Visual Basic 6.0, Rational ClearCase, ClearQuest.
Role / Responsibility and contribution:
- Coding and testing.
- Installation of software at vendor sites.
- Developed Visual Basic code to consume messages from IBM Message Broker.
- End-to-end testing of the application.
- Made changes to Visual Basic code per new business requirements.
- Debugged the application to resolve critical issues.