
Senior Developer Resume


Weehawken, NJ

SUMMARY

  • 14+ years of strong experience in software development using Big Data, Hadoop (Cloudera), Apache Spark and Oracle PL/SQL technologies.
  • Knowledge of Principles of Data Warehouse using Fact Tables, Dimension Tables and Star/Snowflake schema modeling.
  • Experience with Data flow diagrams, Data dictionary, Database normalization theory techniques, Entity relation modeling and design techniques.
  • Hands-on experience with major components of the Hadoop ecosystem, including External Tables, Avro, HDFS, Hive, Impala, Spark, HBase and Sqoop.
  • Converted SQL to be compatible across Hive, Impala and Spark; worked with NoSQL.
  • Performance tuning of Spark SQL queries using the explain plan (see the sketch after this list).
  • Experience in Oracle using Table Functions, Indexes, Table Partitioning, Collections, Analytical functions, and Materialized Views.
  • Developed complex Oracle database objects like Stored Procedures, Functions, Packages, Triggers, Dynamic SQL, Collections and Exception handling.
  • Experience with Oracle-supplied packages such as DBMS_SQL, DBMS_JOB and UTL_FILE.
  • Good knowledge of key Oracle performance related features such as Query Optimizer, Execution Plans, Indexes and HINTS.
  • Created Shell Scripts for invoking SQL scripts and scheduled them using Appworx and Dollar Universe.
  • Well versed in using Software development methodologies like LEAN, Agile and Scrum.
  • Excellent communication, interpersonal, analytical skills and strong ability to perform as part of a team.
  • Extensive experience in requirement elicitation and in documenting and maintaining Business Requirements and Application (HLD and LLD) design documents for business applications.
  • Good financial knowledge of SAP data, Hyperion, Balance Sheets, Risks, Projections, Assets and Liabilities.
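
A minimal illustration of the Spark SQL explain-plan tuning noted above; the table and column names are hypothetical, not from any system described here:

    EXPLAIN EXTENDED
    SELECT account_id,
           SUM(balance) AS total_balance
    FROM   daily_positions
    WHERE  business_date = '2017-06-30'
    GROUP  BY account_id;

The physical plan in the output shows how the filter and aggregation will execute, which is what guides this kind of tuning.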

TECHNICAL SKILLS

Databases & Platforms: Big Data, Hadoop (Cloudera), Apache Spark 2.0, Oracle 12c/11g/10g/9i

Languages: HiveQL (HQL), Impala SQL, Spark SQL, SQL, PL/SQL, C, C++, UNIX Shell scripting

SQL Utilities: Hue, Hive, Impala, Beeline, Toad for Oracle, SQL*Plus, PL/SQL Developer

Tools: Git, WinSCP, PuTTY, JIRA, Kintana, PVCS, Remedy, SVN, Appworx, IntelliJ, Autosys

Data Modeling Tools: MS Visio

Operating Systems: Windows 98/2000/XP/Server 2008, Linux, UNIX

Applications: MS Word, PowerPoint, FrontPage, Outlook

Training Attended: Python, Pig, Scala

PROFESSIONAL EXPERIENCE

Confidential, Weehawken, NJ

Senior Developer

Responsibilities:

  • Work with analysts to understand business requirements.
  • Load data from flat files using the ingestion framework and from Oracle using Sqoop.
  • Write complex SQL to transform the loaded data into the required business format.
  • Convert Impala SQL to Spark SQL for use by Scala code (see the sketch after this list).
  • Create Autosys jobs for scheduling data-refresh jobs and set up their dependencies.
  • Rewrite existing daily/monthly data processing in Spark for better performance.
  • Successfully deployed Autosys and Spark code to production for the first time.
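
A sketch of the Impala-to-Spark SQL conversion referenced above, with illustrative table and column names; strleft() and now() exist in Impala but not in Spark 2.0, so portable equivalents are substituted:

    -- Original Impala SQL:
    SELECT strleft(cusip, 6)            AS issuer_code,
           trunc(now(), 'DD')           AS load_date,
           SUM(market_value)            AS total_mv
    FROM   positions
    GROUP  BY 1, 2;

    -- Rewritten for Spark SQL 2.0:
    SELECT substr(cusip, 1, 6)          AS issuer_code,
           to_date(current_timestamp()) AS load_date,
           SUM(market_value)            AS total_mv
    FROM   positions
    GROUP  BY 1, 2;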

Environment: Spark 2.0, Hadoop (Cloudera), Big Data, JIRA, Git (Stash), UNIX Shell Script, IntelliJ, Autosys.

Confidential, NYC, NY

Senior Developer

Responsibilities:

  • Gather and study business requirements by interacting with functional users and product managers.
  • Perform impact and risk analysis of new business requirements on existing system functionality.
  • Design the database for the various modules within the application.
  • Map business requirements to IT requirements; review and create TSDs.
  • Collaborate with Application Development, Database, QA and DevOps/Infrastructure teams.
  • Work with Big Data, Hadoop and the Oracle database.
  • Write complex HiveQL/Spark SQL queries to generate data for the FED CCAR reports.
  • Modify SQL to make it compatible across all three engines (Impala/Hive/Spark).
  • Execute SQL using spark-shell, assigning the desired executors, cores and memory.
  • Create Avro schemas, partitions and external-table HQL for Big Data tables (see the sketch after this list).
  • Create HBase tables for logging and batch-process tracking.
  • Create Hive tables on top of HBase tables to query the data (also shown below).
  • Query HBase tables using NoSQL commands (scan, put, get).
  • Set up environments (SIT/UAT/Prod) using distcp for avsc and data files.
  • Create shell scripts for data validation and database comparison.
  • Compare Avro data across databases using Postman.
  • Review team code; performance tuning.
  • Track all requirements using JIRA (updates, release tags, etc.).
  • Release Management: set a process for all production releases and ensure version control.
  • Perform application unit testing to validate the correctness of code/data used in CCAR.
  • Provide long-term fixes to recurring production issues.
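
Sketches of the Avro external-table and Hive-on-HBase DDL referenced above; the paths, schema, table names and column mappings are illustrative assumptions, not the original definitions:

    -- Partitioned external table over Avro data files:
    CREATE EXTERNAL TABLE ccar_balances
    PARTITIONED BY (business_date STRING)
    STORED AS AVRO
    LOCATION '/data/ccar/balances'
    TBLPROPERTIES ('avro.schema.url' = 'hdfs:///schemas/ccar_balances.avsc');

    ALTER TABLE ccar_balances ADD PARTITION (business_date = '2017-06-30');

    -- Hive table mapped onto an existing HBase batch-tracking table,
    -- so it can be queried with SQL instead of scan/get:
    CREATE EXTERNAL TABLE batch_log (
      run_id   STRING,
      job_name STRING,
      status   STRING,
      run_ts   STRING
    )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    WITH SERDEPROPERTIES (
      'hbase.columns.mapping' = ':key,d:job_name,d:status,d:run_ts'
    )
    TBLPROPERTIES ('hbase.table.name' = 'batch_log');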

Environment: Oracle 12c/11g, Toad, Hadoop (Cloudera), Big Data, JIRA, Git (Stash), UNIX Shell Script, TeamCity, Postman, IntelliJ

Confidential, NYC, NY

Senior Developer

Responsibilities:

  • Involved in System Analysis, Design, Coding, Data Conversion, Development and implementation.
  • Analyzed the business requirements of the project by studying the Business Requirement Specification document.
  • Created database objects Tables, Views, Indexes, Constraints and Synonyms.
  • Extracted data from Flat files and transformed it in accordance with the Business logic mentioned by the client and loaded data into the staging tables.
  • Created packages for Data Validation after ETL process.
  • Performed Unit Testing on the scripts and ensured all the Exceptions are handled according to the business logic.
  • Analyze data in the Big Data environment ingested from Mainframe/Teradata.
  • Create external tables in Hive so the ingested data can be queried from Beeline (see the sketch after this list).
  • Logically merge multiple tables into one.
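
A sketch of the external-table and logical-merge pattern above; the table layouts and paths are hypothetical, and stg_accounts_td is an assumed second staging table:

    -- External table so files already landed in HDFS can be queried from Beeline:
    CREATE EXTERNAL TABLE stg_accounts_mf (
      account_id  STRING,
      branch_code STRING,
      balance     DECIMAL(18,2)
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    LOCATION '/data/staging/accounts_mf';

    -- Logical merge: one view over the mainframe and Teradata feeds:
    CREATE VIEW accounts_all AS
    SELECT account_id, branch_code, balance, 'MAINFRAME' AS src FROM stg_accounts_mf
    UNION ALL
    SELECT account_id, branch_code, balance, 'TERADATA'  AS src FROM stg_accounts_td;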

Environment: Oracle 11g, Toad, Hadoop, Big Data, UNIX Shell Scripting.

Confidential, Troy, Michigan

Senior PL/SQL Developer

Responsibilities:

  • Extensively used advanced PL/SQL features like Records, Tables, Object Types and Dynamic SQL.
  • Created PL/SQL scripts to extract data from the operational database into simple flat text files using the UTL_FILE package (see the sketch after this list).
  • Generated server-side PL/SQL scripts for data manipulation and validation, and Materialized Views for remote instances.
  • Created Tables, Views, Constraints and Indexes (B-tree, Bitmap and Function-Based).
  • Designed snowflake and star schemas for the ETL process.
  • Performed the uploading and downloading of flat files from the UNIX server using FTP.
  • Improved the performance of slow SQL queries by implementing indexes and using FORALL and BULK COLLECT, and developed various procedures, functions and packages to implement new business logic.
  • Wrote Korn Shell scripts and control files to load data into staging tables and DW tables.
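
A minimal sketch of the UTL_FILE flat-file extract described above, combined with BULK COLLECT; the directory object, table and columns are assumptions:

    CREATE OR REPLACE PROCEDURE extract_accounts IS
      TYPE t_lines IS TABLE OF VARCHAR2(400);
      l_lines t_lines;
      l_file  UTL_FILE.FILE_TYPE;
    BEGIN
      -- Fetch the extract rows in one round trip instead of row by row.
      SELECT account_id || '|' || branch_code || '|' || TO_CHAR(balance)
      BULK COLLECT INTO l_lines
      FROM   accounts;

      l_file := UTL_FILE.FOPEN('EXTRACT_DIR', 'accounts.dat', 'w');
      FOR i IN 1 .. l_lines.COUNT LOOP
        UTL_FILE.PUT_LINE(l_file, l_lines(i));
      END LOOP;
      UTL_FILE.FCLOSE(l_file);
    EXCEPTION
      WHEN OTHERS THEN
        IF UTL_FILE.IS_OPEN(l_file) THEN
          UTL_FILE.FCLOSE(l_file);
        END IF;
        RAISE;
    END extract_accounts;
    /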

Environment: Oracle 11g PL/SQL, SQL*Plus, PL/SQL Developer

Confidential, Raleigh, North Carolina

PL/SQL Developer, Data Modeler

Responsibilities:

  • Involved in full development cycle of Planning, Analysis, Design, Development, Testing and Implementation.
  • Designed logical and physical data models for snowflake schemas for data warehousing.
  • Developed Advance PL/SQL packages, procedures, triggers, functions, Indexes and Collections to implement business logic using Toad.
  • Performed SQL and PL/SQL tuning and Application tuning using various tools like EXPLAIN PLAN, SQL*TRACE, TKPROF and AUTOTRACE. Created indexes on the tables for faster retrieval of the data.
  • Migration from Siebel Analytics to OBIEE; software upgrades (Java, Oracle, Perl, Dollar Universe); maintained code versions using PVCS.
  • Deployment of application to latest software versions using Kintana.
  • Coordinated with the front-end design team to provide them with the necessary stored procedures and packages and the necessary insight into the data.
  • Extensively worked on ETL code using PL/SQL to extract, transform and load data from source to target data structures.
  • Sent notification emails to the support group on completion of loads (success/failure).
  • Developed a common UTIL package to keep all commonly used functions and procedures in one place (see the sketch after this list).
  • Scheduled jobs in Dollar Universe using Shell Scripts to load data into the system at the business-required frequency.
  • Involved in Cisco year-end activity, seeding new geographical hierarchies for the new fiscal year.
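
A skeleton of the kind of shared UTIL package described above; the procedure names, signatures, addresses and fiscal-year convention are assumptions (not the original code), and the notify routine assumes UTL_MAIL has been configured:

    CREATE OR REPLACE PACKAGE util_pkg IS
      FUNCTION  fiscal_year (p_date IN DATE) RETURN NUMBER;
      PROCEDURE notify (p_subject IN VARCHAR2, p_body IN VARCHAR2);
    END util_pkg;
    /

    CREATE OR REPLACE PACKAGE BODY util_pkg IS
      FUNCTION fiscal_year (p_date IN DATE) RETURN NUMBER IS
      BEGIN
        -- Assumes a fiscal year that starts in August.
        RETURN TO_NUMBER(TO_CHAR(ADD_MONTHS(p_date, 5), 'YYYY'));
      END fiscal_year;

      -- Load-completion notification (success/failure), as in the bullets above.
      PROCEDURE notify (p_subject IN VARCHAR2, p_body IN VARCHAR2) IS
      BEGIN
        UTL_MAIL.SEND(sender     => 'etl@example.com',
                      recipients => 'support@example.com',
                      subject    => p_subject,
                      message    => p_body);
      END notify;
    END util_pkg;
    /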

Environment: Oracle 11g/10g PL/SQL, Dollar Universe, CDB, Toad, SQL*Plus, PL/SQL Developer, Shell Script

Confidential, Richmond, Virginia

PL/SQL and Shell Script Developer

Responsibilities:

  • Creating business reports using Magic v2.0 software.
  • Scheduling reports using Appworx.
  • Creating Shell Script to deploy scheduled report run scripts on to Appworx.
  • Creating PL/SQL queries to embed in Magic v2.0 to implement complex business logic.
  • Tuning existing PL/SQL report queries to handle large data volumes, improving overall performance (see the sketch after this list).
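
One plausible way the report-query tuning above could be inspected, using Oracle's EXPLAIN PLAN with DBMS_XPLAN; the query and table are illustrative:

    EXPLAIN PLAN FOR
    SELECT branch_code,
           SUM(balance) AS total_balance
    FROM   accounts
    GROUP  BY branch_code;

    -- Display the stored plan to check access paths and join methods:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);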

Environment: Oracle 9i PL/SQL, SQL*Plus, PL/SQL Developer, Shell Script, Linux v2.1, Appworx, Magic v2.0

Confidential

C++ Developer

Responsibilities:

  • Workspace memory allocation for each user as per business requirements.
  • Assigning server memory locations using C++ pointers.
  • Writing C++ code on Linux to allocate memory and free unused memory.
  • Periodically calculating used/unused memory for each user.
  • Scheduling a report to send usage statistics to the admin.

Environment: Linux, C++
