
Sr Data Warehouse Developer Resume

SUMMARY

  • Oracle DB - 10g, 11g, 12c - more than 15 years of experience
  • EXADATA - migration from different platforms/databases to Oracle on the EXADATA platform
  • Troubleshooting Oracle performance (in real time; reproducible and non-reproducible issues)
  • Extensive hands-on experience in advanced PL/SQL techniques in Oracle 11g/12c
  • Experience in building RAC environments (Oracle 11g, Oracle Clusterware, ASM)
  • In-Memory option, Multitenant
  • Postgres DB - more than 4 years of experience (tuning, scaling, replication, backup/recovery, moving data in and out)
  • Experience with large databases
  • Extensive experience in relational/dimensional modeling (Ralph Kimball’s methodology) for Data Warehouse systems as well as for OLTP - 3NF, Star, Snowflake
  • ETL processing: ODI, Informatica ETL, Pentaho, GoldenGate, Data Pump, SQL*Loader, External tables
  • Building data pipelines as part of the ETL processing flow (Luigi, Airflow, Celery)
  • Extracting/shaping/cleaning data from JSON, XML, Excel, and CSV files and from databases (cx_Oracle, psycopg2)
  • Data analysis with Pandas
  • Familiarity with NumPy, SciPy
  • Experience working in global teams
  • Experience with version control systems (Subversion, GitHub)
  • Participating in the full lifecycle development, from conceptual modeling to “go-live” in Production
  • Agile (Scrum methodology)

PROFESSIONAL EXPERIENCE

Confidential

Sr Data Warehouse developer

Responsibilities:

  • I created an abstraction layer on top of the staging data. This layer hides the complexity of the data, secures the information, and represents the data as separate flows - Orders, Routes, Executions, Actions, Re-Actions - for further processing by the ETL layer. It was implemented as a set of pipelined functions and views on the DB side (a minimal sketch follows this list), along with an application written in Python that extracted and cleaned the data captured from different sources.
  • I also designed a star schema with several fact tables and dimensions to keep the processed data together with referential info for reporting purposes. Based on the processed data, I created two types of data sources: a view with all the tags needed to create FIX messages, and a set of reports for the Actimize system (a schema sketch appears after the technology list below).
  • My responsibility was also to tune SQL queries and PL/SQL code and to troubleshoot Oracle DB issues. For example, the old system returned the data for Actimize reports in approximately 20-25 minutes for one day of data; I managed to rebuild the system in such a way that the average time to get the data became ~15-60 seconds.
  • I also participated in managing Data Quality and Data Lineage, communicating with Business Analysts and other technology teams.
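
A minimal sketch of the pipelined-function approach used for the abstraction layer; the type, table, and column names (order_rec_t, stg_orders, v_orders) are hypothetical:

    -- Row and collection types for one flow ("Orders").
    CREATE TYPE order_rec_t AS OBJECT (
      order_id NUMBER,
      symbol   VARCHAR2(12),
      qty      NUMBER,
      order_ts TIMESTAMP
    );
    /
    CREATE TYPE order_tab_t AS TABLE OF order_rec_t;
    /

    -- Hide the staging complexity: cleanse and reshape rows, then
    -- stream them to the caller instead of materializing a table.
    CREATE OR REPLACE FUNCTION orders_flow RETURN order_tab_t PIPELINED IS
    BEGIN
      FOR r IN (SELECT order_id, symbol, qty, order_ts
                  FROM stg_orders
                 WHERE status = 'VALID') LOOP
        PIPE ROW (order_rec_t(r.order_id, r.symbol, r.qty, r.order_ts));
      END LOOP;
      RETURN;
    END;
    /

    -- The ETL layer reads the flow through a simple view.
    CREATE OR REPLACE VIEW v_orders AS
      SELECT * FROM TABLE(orders_flow);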

Technologies and tools: Oracle 12c on Exadata, Compression, Parallelism, Partitions, Data Pump, SQL Developer, PL/SQL Developer, Oracle Data Modeler, SQL*Plus, SQL*Loader, External Tables, Python
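
A sketch of the kind of star schema described above; the real model had several fact tables, and all names here are hypothetical:

    CREATE TABLE dim_instrument (
      instrument_key  NUMBER PRIMARY KEY,
      symbol          VARCHAR2(12),
      instrument_type VARCHAR2(30)
    );

    CREATE TABLE dim_date (
      date_key   NUMBER PRIMARY KEY,   -- e.g. 20180131
      cal_date   DATE,
      fiscal_qtr VARCHAR2(6)
    );

    -- Fact rows reference the dimensions and are range-partitioned
    -- by date key for partition pruning.
    CREATE TABLE fact_execution (
      execution_id   NUMBER,
      instrument_key NUMBER REFERENCES dim_instrument,
      date_key       NUMBER REFERENCES dim_date,
      qty            NUMBER,
      price          NUMBER
    )
    PARTITION BY RANGE (date_key) (
      PARTITION p2017 VALUES LESS THAN (20180101),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );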

Confidential

Data Architect

Responsibilities:

  • Per the scope of the project, I proposed the plan and the technologies to move the data from Sybase and Oracle to Oracle on the EXADATA platform. The data was divided into three categories - hot, warm, and cold - depending on the importance of the information and how quickly it had to be available to work with. During migration the data was extracted, cleaned, and enriched (if needed); the prepared information was then transformed and loaded into Oracle on EXADATA. I used fast and convenient techniques such as Insert As Select and Data Pump in network mode. The migration was completed in the expected time frame.
  • I created logical and physical models for the new platform based on analysis of business requirements, data sources, and data flows.
  • I also participated in building the DB-related part of the “FEED” subsystem to retrieve and process data coming from different sources such as Bloomberg (I used ODI together with GoldenGate and PL/SQL packages).
  • I continuously optimized PL/SQL code and tuned SQL queries to benefit from Offloading and Smart Scan, as well as writing unit tests.
  • I used “hot patching” of the PL/SQL application code (the Edition-Based Redefinition feature of Oracle DB), so when we needed to deploy a new set of changes to the Production server we did not have to stop the application at all; a minimal sketch follows this list.
  • I implemented the Audit Trail requirement using Oracle's Flashback Data Archive feature, which let us track audit information in a fast and convenient way (sketched after the technology list below).
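
A minimal sketch of the Edition-Based Redefinition "hot patch" flow, assuming a hypothetical schema app_owner and procedure calc_fees:

    -- One-time setup: make the application schema editionable and
    -- create an edition for the new release (names are hypothetical).
    ALTER USER app_owner ENABLE EDITIONS;
    CREATE EDITION release_2 AS CHILD OF ora$base;

    -- Deploy the patched code into the new edition; sessions still
    -- running in ora$base keep executing the old code untouched.
    ALTER SESSION SET EDITION = release_2;

    CREATE OR REPLACE PROCEDURE calc_fees AS
    BEGIN
      NULL;  -- the patched logic would go here
    END;
    /

    -- Once verified, route new sessions to the patched code:
    ALTER DATABASE DEFAULT EDITION = release_2;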

Technologies and tools: Oracle 11g on EXADATA, Oracle Data Modeler, TOAD, PL/SQL, Partitioning, Python, Generic connectivity gateway (from Sybase to Oracle), Flashback archiving, FGAC, Edition-Based Redefinition
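
A minimal sketch of the Flashback Data Archive audit trail mentioned above; the archive, tablespace, and table names are hypothetical:

    -- Create an archive with the required retention and attach a table to it.
    CREATE FLASHBACK ARCHIVE audit_fba
      TABLESPACE fba_ts
      RETENTION 7 YEAR;

    ALTER TABLE orders FLASHBACK ARCHIVE audit_fba;

    -- Audit queries can then read row history with VERSIONS BETWEEN:
    SELECT order_id, status, versions_starttime, versions_operation
      FROM orders
      VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '7' DAY
                   AND SYSTIMESTAMP
     WHERE order_id = 42;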

Confidential

Sr Oracle Developer

Responsibilities:

  • I completely rewrote the logic to process TRANSACTIONS, POSITIONS, and SETTLEMENTS records, along with introducing a set of new PL/SQL packages to handle new types of derivatives. I made heavy use of collections together with the FORALL statement (a minimal sketch follows this list) and optimized the parts of the code that were slowest from a performance perspective. This greatly improved the performance of the flow, reducing the total processing time: the old code took about 40-50 minutes to run through all the feeds; after the optimization it ran in about 10-17 minutes.
  • I tested the code to make sure the system stayed stable and showed no performance degradation once we migrated from Oracle 10g to 11g.
  • I modified the “FEED” subsystem of the platform (completely rebuilt it using Informatica ETL and PL/SQL code; before that it was a mix of code on the Oracle side, such as external tables and pipelined functions, and code on the Java side). After that we had an optimized flow and a single point of verification in case of any issues.
  • I was also responsible for troubleshooting Oracle performance issues.
  • I participated in building an OLAP environment for further BI reporting.
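
A minimal sketch of the collections + FORALL pattern referenced above; the staging and target tables (stg_transactions, transactions) are hypothetical and assumed to share the same shape:

    DECLARE
      CURSOR c_feed IS
        SELECT * FROM stg_transactions;
      TYPE txn_tab_t IS TABLE OF c_feed%ROWTYPE;
      l_rows txn_tab_t;
    BEGIN
      OPEN c_feed;
      LOOP
        -- Read in batches to bound PGA memory usage.
        FETCH c_feed BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;

        -- One SQL context switch per batch instead of one per row.
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO transactions VALUES l_rows(i);

        COMMIT;
      END LOOP;
      CLOSE c_feed;
    END;
    /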

Technologies and skills: Oracle 10g/11g, TOAD, PL/SQL, SQL, Analytic functions, Partitioning, External tables, Shell scripting, ODI, GoldenGate, OLAP

Confidential

Lead Oracle developer

Responsibilities:

  • Created the logical, relational, and dimensional models and defined the physical database structure.
  • Built the strategy, and chose the stack of technologies, for capturing the data from different types of sources.
  • Built the strategy for partitioning, indexing, and compressing the data.
  • Developed the information life cycle (ILC) approach (rolling old data over).
  • Implemented data capture from different kinds of source systems (Sybase, MS SQL, files, Oracle DB - Unix/Windows/ftp platforms). I configured generic gateway connectivity between Sybase and Oracle through a database link; this approach replaced the extra step previously used to obtain the data (Perl scripts and a Java application).
  • Implemented the transformation of the data using MERGE synchronization, multi-table inserts, and PL/SQL capabilities (a minimal MERGE sketch follows this list).
  • Implemented the movement of data from the staging area to the historical area.
  • Developed and maintained source-to-target mappings for the bidirectional movement of data between the source systems and the Data Warehouse.
  • Designed and maintained the Data Validation process.
  • Was also responsible for capacity planning.
  • Led a team of 6 software developers.
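
A minimal sketch of the MERGE-based synchronization step; the table and column names are hypothetical (in practice the source rows arrived via the Sybase gateway database link or external tables):

    MERGE INTO dwh_positions tgt
    USING (SELECT account_id, instrument_id, qty, as_of_date
             FROM stg_positions) src
       ON (tgt.account_id    = src.account_id
       AND tgt.instrument_id = src.instrument_id)
     WHEN MATCHED THEN
       UPDATE SET tgt.qty = src.qty, tgt.as_of_date = src.as_of_date
     WHEN NOT MATCHED THEN
       INSERT (account_id, instrument_id, qty, as_of_date)
       VALUES (src.account_id, src.instrument_id, src.qty, src.as_of_date);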

Technologies and skills: Oracle 9i-10g, SQL Navigator, Oracle Data Integrator, PL/SQL, SQL, Analytic functions, External tables, Sybase and Teradata gateways, Data Pump, Change data capture (CDC), Direct-Path load, Materialized views, Dimensions, Shell scripting
