Big Data Developer/ Data Engineer Resume
SUMMARY:
- Over 5 years of overall IT experience across a variety of industries, including 2+ years of hands-on experience in database technologies and around 3 years of extensive experience in Business Intelligence
- Experience with Software Development Life Cycle (SDLC) and Project Management Methodologies
- In-depth knowledge of Hadoop architecture and its components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, and MapReduce concepts; experience writing MapReduce programs with Apache Hadoop to analyze large data sets efficiently
- Excellent knowledge of creating Databases, Tables, Stored Procedures, DDL/DML Triggers, Views, user-defined data types, Cursors, and Indexes using T-SQL
- Experience in analyzing execution plans, managing indexes, and troubleshooting deadlocks
- Experience in tuning SQL, T-SQL, and MDX queries to improve query performance
- Experience in database modeling, administration, and development using PL/SQL in Oracle, DB2, and SQL Server environments; extensive experience working with databases such as Oracle, IBM DB2, SQL Server, and MySQL, and writing Stored Procedures, Functions, Joins, and Triggers for different data models
- Experienced in data loading using T-SQL, database fine-tuning and performance improvement, creating new databases, and creating and modifying tablespaces
- Experience in database development and release management concepts, practices, and procedures, along with performance tuning of database code in OLTP and OLAP environments
- Experience in data Extraction, Transformation, and Loading (ETL) processes using SSIS, DTS Import/Export, Bulk Insert, BCP, and DTS packages
- Strong experience in Essbase cube analysis, design, building, and maintenance, as well as Essbase database administration, backups, performance tuning, and optimization
- Extensive experience in development and maintenance using Hyperion Essbase, designing Essbase cubes, Hyperion Planning, and Hyperion Financial Reports
- Strong knowledge of Hyperion Planning (web forms, security, metadata maintenance, and Smart View for Office)
- Expertise in Outline creation, Dimension Rules, Data Load Rules, Calc. Scripts & MAXL Scripts
- Experience in importing and exporting terabytes of data using Sqoop between HDFS and Relational Database Systems
- Versatile team player with good communication, analytical, presentation and interpersonal skills
TECHNICAL SKILLS:
Languages: Java, SQL, T-SQL, PL/SQL, C, C++
Operating Systems: Windows 95/98/NT/2000/2003/XP, UNIX/Linux.
OLAP: Hyperion System 11/9 BI+ Analytic Administration Services, Hyperion Planning System 11.3.1/9.x/4.0/3.5.1, Hyperion Shared Services 11.x/9.x, Hyperion Essbase 11.3.1/9.x/7.x/6.x, Hyperion Excel Add-in, Smart View, Essbase Administration Services, Hyperion Planning 11.1.1.2/9.3, Hyperion Application Link (HAL) 9.2, MDM 9.2.0.10.0, DRM 11.1.2.1
Big Data: HDFS, MapReduce, Hive, Sqoop, Oozie, Spark, HBase, Spark SQL
Databases: Essbase 11.x/9 BI+, Oracle 9i/10g, DB2, SQL Server 2008/2014/2016, MS Access
Reporting tools: SAP BO, OBIEE, Hyperion Smart View, and Hyperion Spreadsheet Add-in.
IDE: Eclipse, IntelliJ, SQL Developer, Microsoft Visual Studio.
Data Modeling: Logical/Physical/Dimensional, Star/Snowflake Schema, OLAP, Agile/Waterfall
PROFESSIONAL EXPERIENCE:
Confidential
Big Data Developer/ Data Engineer
Responsibilities:
- Follow the complete Agile methodology, from the product backlog and sprint backlog through sprint planning and user stories
- Participate in gathering the requirements with S&P business users and converting them into User Stories.
- Export analyzed data to relational databases using Sqoop
- Work on Apache Spark with Scala, using SQL/Hive contexts to process clickstream data for Confidential's product suites such as Expedient, LPS, and Confidential.com
- Use Teradata Fast Export and Parallel Transporter utilities and Sqoop to extract data and load to Hadoop
- Write complex SQL queries, including inline queries and subqueries, for faster data retrieval from multiple tables
- Develop automated tasks to perform aggregations and manipulations on the data
- Extract, transform and load data into SQL Server tables from Teradata
- Rebuild Indexes and Tables as part of Performance Tuning Exercise
- Use SSIS and T-SQL stored procedures to transfer data from OLTP databases to staging area and finally transfer into data warehouse
- Provide on-call Production support for any issues
- Create ETL packages using heterogeneous data sources (DB2, Oracle, flat files, etc.) and load the data into destination tables through various transformations
- Load data into AWS S3 buckets using ETL jobs
- Perform tuning and optimization of long-running report queries using SQL Profiler, Index Tuning Wizard, and SQL Query Analyzer
- Develop multiple MapReduce jobs for data cleaning and pre-processing
- Read data from S3 and process it using Spark SQL
- Support MapReduce programs running on the cluster
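The data-cleaning and aggregation pattern described in the bullets above can be sketched in plain Java (no Hadoop dependency; the record layout and product names here are hypothetical, for illustration only):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of a map/reduce-style cleaning-and-aggregation pass:
// the "map" step parses "product,clicks" records and drops malformed
// rows; the "reduce" step sums click counts per product key.
public class ClickAggregator {

    public static Map<String, Long> aggregate(List<String> rawLines) {
        Map<String, Long> totals = new HashMap<>();
        for (String line : rawLines) {
            String[] parts = line.split(",");
            if (parts.length != 2) continue;          // data cleaning: bad shape
            String product = parts[0].trim();
            long clicks;
            try {
                clicks = Long.parseLong(parts[1].trim());
            } catch (NumberFormatException e) {
                continue;                              // data cleaning: bad number
            }
            totals.merge(product, clicks, Long::sum); // aggregation per key
        }
        return totals;
    }

    public static void main(String[] args) {
        Map<String, Long> out = aggregate(List.of(
                "Expedient,3", "LPS,2", "Expedient,5", "bad-row"));
        System.out.println(out); // totals per product: Expedient=8, LPS=2
    }
}
```

In a real MapReduce job the map and reduce phases would run as separate distributed tasks, but the per-record filtering and per-key summing follow the same shape.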
Environment: SQL Server 2014/2016, SSAS and OLAP, SQL Server Integration Services (SSIS), SQL Server Management Studio, Teradata 15.00, Splunk 6.5, Visual Studio 14.0, SQL, T-SQL, MongoDB, Redis, AWS, Amazon S3, Hive, MapReduce, Hadoop, Pig
Confidential
Oracle Hyperion Essbase Administrator
Responsibilities:
- Provided immediate response to production system calls and delivered urgent bug fixes
- Setup of the planning application and all its components
- Responsible for dimension building and outline maintenance using planning web interface and reflecting changes in Essbase
- Loaded metadata through HAL jobs
- Created web forms according to the requirement
- Created filters, groups and users
- Provided business users access to applications in Shared Services according to their business roles
- Checked drill-through reports on a daily basis
- Resolved data-reconciliation and other (mostly security) issues during the UAT phase
- Developed detailed project documentation covering all tasks performed and issues faced, for future reference
- Assigned (dimension and data form) Access rights to users and user groups
- Developed calc scripts for final administration tasks such as full aggregations, full currency conversion, and data copies from one scenario to another
- Developed Business Rules with run-time prompts for end-user calculations by Entity, including driver calculations, currency conversion, and aggregation at the user's entity
- Developed Essbase components such as load rules for Actual and other data loads, and set up substitution variables
- Used various calculation types such as Dynamic Calc, Two-Pass Calculations, and Intelligent Calculations for outline calculations
- Performed optimization for various data load rules and calc. scripts
- Trained business analysts on retrieving data from Essbase cubes using the Excel Spreadsheet Add-in and on planning features such as alias tables and the Smart View offline process
- Provided 24/7 technical support to all business units, offered suggestions and proposed solutions to resolve issues
- Monitored the automated data-load and aggregations using Appworx
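The full-aggregation calc scripts described above typically follow the Essbase FIX / CALC DIM pattern; the sketch below is illustrative only (the member names and the substitution variable are hypothetical, not taken from the actual application):

```
/* Illustrative full-aggregation calc script; member names are hypothetical */
SET UPDATECALC OFF;
SET AGGMISSG ON;
FIX ("Actual", &CurrYear)
    CALC DIM ("Entity", "Product");
ENDFIX
```

The FIX block restricts the calculation to one scenario and year (here via a substitution variable), and CALC DIM aggregates the sparse dimensions inside that slice, which keeps full aggregations from touching the whole cube.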
Environment: Hyperion System 9/11, Hyperion Planning 9/11.x, Hyperion Essbase 9/11.x, EAS 9/11.x, Hyperion Reports 9/11.x, Hyperion Workspace, ODI 10.1.3.5, HFM, Smart View, Oracle 9i/10g, DRM, MDM.