
Sr. Data Architect Resume


Reston, VA

SUMMARY:

  • Over 10 years of Information Technology (IT) experience as a Sr. Data Modeler/Data Architect and Data Analyst in the design, development, testing and maintenance of data warehouse, business intelligence and operational data systems.
  • Experience in developing Entity-Relationship diagrams and modeling transactional databases and data warehouses using tools like Erwin, ER/Studio and Power Designer.
  • Experienced in building enterprise data warehouses using the Kimball and Inmon data warehousing methodologies.
  • Experience in designing Star and Snowflake schemas for data warehouses using tools like Erwin Data Modeler, Power Designer and Embarcadero ER/Studio (see the DDL sketch after this list).
  • Good knowledge of SQL queries and of creating database objects like stored procedures, triggers, packages and functions using SQL and PL/SQL for implementing business logic.
  • Good knowledge of Hive, Sqoop, MapReduce, Storm, Pig, HBase, Flume, and Spark.
  • Expertise in SQL Server Analysis Services (SSAS) and SQL Server Reporting Services (SSRS)
  • Involved in various data modeling tasks that included forward engineering, reverse engineering, complete compare, creating DDL scripts, creating subject areas etc.
  • Good knowledge of Big Data technologies - Hadoop, AWS Cloud, Amazon Redshift, AWS EC2, AWS S3, AWS Lambda, MongoDB and Python.
  • Good knowledge of Apache Big Data frameworks and standards such as Hadoop, Hive, Spark and Pig.
  • Proficient in UML Modeling like Use Case Diagrams, Activity Diagrams, and Sequence Diagrams with Rational Rose and MS Visio.
  • Excellent understanding of Tableau Workbooks and Background Tasks for targeted interactions validation.
  • Experience in NoSQL databases and tools like Apache HBase to handle massive data tables containing billions of rows and millions of columns.
  • Strong hands-on experience using Teradata utilities (SQL, BTEQ, FastLoad, MultiLoad, FastExport, TPump, Visual Explain, and Queryman), Teradata parallel support and UNIX shell scripting.
  • Well versed in conducting Gap analysis, Joint Application Design (JAD) session, User Acceptance Testing (UAT), Cost benefit analysis and ROI analysis.
  • Excellent understanding of and working experience with industry-standard methodologies such as the System Development Life Cycle (SDLC) under Rational Unified Process (RUP), Agile and Waterfall methodologies.
  • Experience in Logical/Physical Data Modeling, Relational and Dimensional Modeling for Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) systems (ODS, ADS and Data Marts).
  • Good understanding of Normalization (1NF, 2NF, 3NF and BCNF) techniques for OLTP environments and Denormalization techniques for improved database performance in OLAP environments.
  • Experience in designing Enterprise Data warehouse, Reporting data stores (RDS) and Operational data stores (ODS).
  • Excellent understanding of MDM approaches, including creating a data dictionary and using Informatica or other tools to map sources to the target MDM data model.
  • Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems (RDBMS) and from RDBMS to HDFS.
  • Knowledge and working experience on big data tools like Hadoop, AWS Redshift.
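
As a concrete illustration of the star-schema design work described above, the following is a minimal DDL sketch with one fact table and two dimension tables; the table and column names are hypothetical and are not drawn from any of the projects below.

```sql
-- Hypothetical star schema: one fact table keyed to two dimension tables.
CREATE TABLE dim_date (
    date_key      INTEGER      PRIMARY KEY,   -- surrogate key, e.g. 20180131
    calendar_date DATE         NOT NULL,
    month_name    VARCHAR(10)  NOT NULL,
    year_number   SMALLINT     NOT NULL
);

CREATE TABLE dim_customer (
    customer_key  INTEGER      PRIMARY KEY,   -- surrogate key
    customer_id   VARCHAR(20)  NOT NULL,      -- natural key from the source system
    customer_name VARCHAR(100),
    region        VARCHAR(50)
);

CREATE TABLE fact_sales (
    date_key      INTEGER NOT NULL REFERENCES dim_date (date_key),
    customer_key  INTEGER NOT NULL REFERENCES dim_customer (customer_key),
    sales_amount  DECIMAL(18,2),
    units_sold    INTEGER
);
```

Reporting queries then join the fact table to the dimensions on the surrogate keys, which is what makes the star-join pattern efficient for OLAP workloads.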

TECHNICAL SKILLS:

Data Modeling Tools: Erwin 9.7, IBM InfoSphere Data Architect, ER/Studio 17, Power Designer and Oracle SQL Developer.

Big Data Technology: MapReduce, HBase 1.2, HDFS, Sqoop, Hadoop 3.0, Hive 2.3, Pig 0.17

Cloud Platforms: Amazon EC2, S3, Elasticsearch, Elastic Load Balancing.

Operating System: Windows 8/10, UNIX, Sun Solaris.

Database Tools: Microsoft SQL Server 2016, Teradata 16.0, Oracle 12c and MS Access.

BI Tools: Tableau 10, Tableau server, Tableau Reader, Crystal Reports

Packages: Microsoft Office 2016, Microsoft Project 2016, SAP, Microsoft Visio and SharePoint Portal Server.

Version Tool: VSS, SVN, CVS.

Programming Languages: Oracle PL/SQL, UNIX Shell Scripting

Methodologies: Agile, Ralph Kimball and Bill Inmon data warehousing methodologies, Rational Unified Process (RUP), Rapid Application Development (RAD), and Joint Application Development (JAD).

PROFESSIONAL EXPERIENCE:

Confidential - Reston, VA

Sr. Data Architect

Responsibilities:

  • Provide data architecture support to enterprise data management efforts, such as the development of the enterprise data model and master and reference data, as well as support to projects, such as the development of physical data models, data warehouses and data marts.
  • Gathered business requirements, working closely with business users, project leaders and developers. Analyzed the business requirements and designed conceptual and logical data models.
  • Lead the strategy, architecture and process improvements for data architecture and data management, balancing long and short-term needs of the business.
  • Building relationships and trust with key stakeholders to support program delivery and adoption of enterprise architecture.
  • Providing technical leadership, mentoring throughout the project life-cycle, developing vision, strategy, architecture and overall design for assigned domain and for solutions.
  • Assumed a leadership role in various divisions of the Data Warehouse group, such as Business Analysis (the group that defines the data transformation rules), database architecture (the group that defines the logical and physical architecture), ETL (with DataStage as the platform) and Business Intelligence (reporting).
  • Used Python for developing Spark code for faster processing of data on Hive, and used Spark Streaming to divide streaming data into batches as input to the Spark engine for batch processing.
  • Worked on Dimensional and Relational Data Modeling using Star and Snowflake Schemas, OLTP/OLAP system, Fact and Dimension tables, Conceptual, Logical and Physical data modeling using Erwin r9.6.
  • Played key role in defining all aspects of Data Governance - data architecture, data security, master data management, data archival & purging and metadata.
  • Performed PoC for Big data solution using Cloudera Hadoop for data loading and data querying
  • Wrote and optimized queries (T-SQL and SQL) across Oracle 12c, SQL Server 2014, DB2, Netezza and Teradata.
  • Worked on migrating the EDW to AWS using EMR and various other technologies.
  • Created MDM, OLAP data architecture, analytical data marts, and cubes optimized for reporting.
  • Involved in Logical modeling using the Dimensional Modeling techniques such as Star Schema and Snow Flake Schema.
  • Involved in Normalization and De-Normalization of existing tables for faster query retrieval.
  • Developed LINUX Shell scripts by using NZSQL/NZLOAD utilities to load data from flat files to Netezza database.
  • Developed scripts in Python (Pandas, NumPy) for data ingestion, analysis and data cleaning.
  • Worked with dimensional data modeling concepts like Star Join Schema modeling, Snowflake modeling, fact and dimension tables, and physical and logical data modeling.
  • Collected large amounts of log data using Apache Flume and aggregated it using Pig/Hive in HDFS for further analysis.
  • Developed logical & physical data model using data warehouse methodologies, including Star schema - Star-joined schemas, conformed dimensions data architecture, early/late binding techniques, data modeling, designing & developing ETL applications using Informatica Power Center.
  • Created data models for AWS Redshift and Hive from dimensional data models and worked on Data modeling, Advanced SQL with Columnar Databases using AWS and driven the technical design of AWS solutions by working with customers to understand their needs.
  • Loaded data into Hive tables from the Hadoop Distributed File System (HDFS) to provide SQL-like access to Hadoop data (see the HiveQL sketch after this list).
  • Worked on Teradata 15 and its utility domains; optimized queries in a Teradata database environment and worked with Teradata tools like FastLoad, MultiLoad, TPump, FastExport, Teradata Parallel Transporter (TPT) and BTEQ.
  • Performed data analysis, statistical analysis, generated reports, listings and graphs using SAS tools, SAS Integration Studio, SAS/Graph, SAS/SQL, SAS/Connect and SAS/Access.
  • Worked on importing and cleansing of data from various sources like Teradata 15, Oracle, flat files and SQL Server with high-volume data.
  • Successfully loaded files to Hive and HDFS from Oracle, loaded data from the UNIX file system to HDFS, and was involved in unit and system testing of the OLAP report functionality and the data displayed in the reports.
  • Involved in creating Informatica mappings to populate staging tables and data warehouse tables from various sources like flat files, DB2, Netezza and Oracle.
  • Worked on the full life cycle of a Data Lake and Data Warehouse with big data technologies like Spark, Hadoop and Cassandra; developed enhancements to the MongoDB architecture to improve performance and scalability; and worked with MapReduce frameworks such as Hadoop and associated tools (Pig, Sqoop, etc.).
  • Developed Data Mapping, Data profiling, Data Governance, and Transformation and cleansing rules for the Master Data Management Architecture involving OLTP, ODS.
  • Tested Complex ETL Mappings and Sessions based on business user requirements and business rules to load data from source flat files and RDBMS tables to target tables.
  • Created HBase tables to load large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
  • Managed the definition and execution of data mapping, conversion and reconciliation processes for data originating from a plethora of enterprise and SAP systems, leading into the ongoing data governance organization design.
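
A minimal HiveQL sketch of the kind of Hive table definition and HDFS load described above; the table name, columns and paths are illustrative assumptions, not details from the project.

```sql
-- Hypothetical external Hive table over delimited files landed in HDFS.
CREATE EXTERNAL TABLE IF NOT EXISTS stg_orders (
    order_id     BIGINT,
    customer_id  BIGINT,
    order_ts     TIMESTAMP,
    amount       DECIMAL(18,2)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/staging/orders';

-- Move a file already copied from the UNIX file system into HDFS under the table,
-- then query it with ordinary SQL-like syntax.
LOAD DATA INPATH '/data/landing/orders_2018_01.csv' INTO TABLE stg_orders;

SELECT to_date(order_ts) AS order_date, SUM(amount) AS daily_amount
FROM   stg_orders
GROUP  BY to_date(order_ts);
```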

Environment: Erwin r9.6, Oracle 12c, Teradata 15, Netezza, PL/SQL, T-SQL, MDM, BI (Tableau), DB2, SQL Server 2014, Informatica PowerCenter, SQL, Big Data, Hadoop, Hive, MicroStrategy, MapReduce, Pig, Cassandra, MongoDB, SAS, Spark, SSRS, SSIS, SSAS, AWS, S3, Redshift, EMR, Tableau, Excel, MS Access, SAP etc.

Confidential - Columbus, OH

Sr. Data Architect

Responsibilities:

  • Worked in a Sr. Data Architect role to review business requirements and compose source-to-target data mapping documents.
  • Worked with the IT team in a Sr. Data Architect/Data Modeler role that involved data profiling and data modeling.
  • Developed and implemented data cleansing, data security, data profiling and data monitoring processes.
  • Designed and developed architecture for data services ecosystem spanning Relational, NoSQL, and Big Data technologies.
  • Created new database objects like tables, procedures, functions, indexes and views.
  • Designed constraints and rules and set primary, foreign, unique and default keys for the hierarchical database.
  • Developed stored procedures in SQL Server to standardize DML transactions such as insert, update and delete against the database (see the T-SQL sketch after this list).
  • Created databases, tables, stored procedures, DDL/DML triggers, views, user-defined data types, functions, cursors and indexes using T-SQL.
  • Used Hadoop eco-system for migrating data from relational staging databases to big data.
  • Responsible for Big data initiatives and engagement including analysis, brain storming, POC, and architecture.
  • Interviewed Business Users to gather Requirements and analyzed the feasibility of their needs by coordinating with the project manager and technical lead.
  • Extracted large data sets from Amazon Redshift on AWS and from the Elasticsearch engine using SQL queries to create reports.
  • Supported project metrics analysis, team communication, resource planning, risk analysis, report generation and documentation control using workbench application.
  • Ensure data integrity, data security and process optimization by identifying trends and provide reports as necessary.
  • Established a business analysis methodology around the RUP (Rational Unified Process). Developed use cases, project plans and manage scope.
  • Reviewed and approved the QA team's test scripts also participated in the QA team's defect meetings to review defects as entered in Quality Center.
  • Managed Production and UAT defects raised by business end-users using HP Quality Center; analyzed user stories and data attributes.
  • Experienced with AWS ecosystem (EC2, S3, RDS, and Redshift).
  • Backed up databases using MongoDB backup facility in OPS manager.
  • Documented and managed data dictionaries of systems.
  • Supported logical and physical mapping efforts.
  • Helped review the accuracy of the logical data models and data mapping transformation rules.
  • Developed all the required stored procedures, user-defined functions and triggers using T-SQL and SQL.
  • Created SSRS reports and prepared prompt-generated/parameterized reports using SSRS.
  • Created reports from OLAP, sub reports, bar charts and matrix reports using SSRS.
  • Worked on preparation of Test plans, Test data and execution of test cases to check application functionality that meets user requirements.
  • Developed test plans with QA team and helped test every scenario using Mercury Test Director Tool for system testing.
  • Performance tuning and stress-testing of NoSQL database environments in order to ensure acceptable database performance in production mode.
  • Loaded and transformed large sets of structured, semi structured and unstructured data using Hadoop/Big Data concepts.
  • Used Test Case distribution and development reports to track the progress of test case planning, implementation and execution results.
  • Wrote queries and analyzed SQL Server views and stored procedures in the underlying Data Warehouse.
  • Identified the dimension, fact tables and designed the data warehouse using star schema and designed new schema for the data mart.
  • Created SSIS packages to load data from flat files, Excel and Access to SQL Server using connection managers.
  • Transformed the data using various SSIS components such as Derived Column, Conditional Split, Data Conversion, Aggregate and Pivot transformations.
  • Created data transformation tasks such as BULK INSERT to import data from the client.
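
A minimal T-SQL sketch of the kind of stored procedure used to standardize DML transactions, as mentioned above; the procedure, table and parameter names are hypothetical.

```sql
-- Hypothetical T-SQL procedure standardizing an insert/update (upsert) against a table;
-- the object and parameter names are illustrative only.
CREATE PROCEDURE dbo.usp_upsert_customer
    @customer_id   INT,
    @customer_name NVARCHAR(100),
    @region        NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM dbo.customer WHERE customer_id = @customer_id)
        UPDATE dbo.customer
        SET customer_name = @customer_name,
            region        = @region
        WHERE customer_id = @customer_id;
    ELSE
        INSERT INTO dbo.customer (customer_id, customer_name, region)
        VALUES (@customer_id, @customer_name, @region);
END;
```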

Environment: SQL, RUP, Quality Center, UAT, SQL Server 2017, DML, DDL, T-SQL, SSIS, SSRS, OLAP, Mercury Test Director, Rational Rose, Agile, PL/SQL, HTML 5, MS Office 2016, MS Visio 2016, Hadoop 3.0, Big Data, Hive 2.3, AWS

Confidential - Greensboro, NC

Sr. Data Architect/Data Modeler

Responsibilities:

  • Responsible for the data architecture design delivery, data model development, review, approval and Data warehouse implementation.
  • Worked as a Sr. Data Architect/Data Modeler as part of IDW (Integrated Data Warehouse) project under Agile DevOps methodology.
  • Created HBase tables to load large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
  • Designed and architected AWS Cloud solutions for data and analytical workloads such as warehouses, Big Data, data lakes, real-time streams and advanced analytics.
  • Interacted with End-users for gathering Business Requirements and Strategizing the Data Warehouse processes
  • Actively involved in the complete life cycle of the agile methodology to design, develop, deploy and support solutions.
  • Wrote complex Netezza views to improve performance and push the load down to the database rather than doing it in the ETL tool (see the view sketch after this list).
  • Involved in data model reviews with internal data architect, business analysts, and business users with explanation of the data model to make sure it is in-line with business requirements.
  • Created DDL scripts using ER Studio and source to target mappings to bring the data from source to the warehouse.
  • Worked with MapReduce frameworks such as Hadoop and associated tools (Pig, Sqoop, etc.).
  • Used ETL methodology for supporting data extraction, transformations and loading processing, in a complex MDM using Informatica.
  • Used MultiLoad, FastLoad and TPump loading to migrate data from Oracle to Teradata.
  • Designed the data marts using Ralph Kimball's Dimensional Data Mart modeling methodology in ER/Studio 9.7.
  • Drove the technical design of AWS solutions by working with customers to understand their needs.
  • Conducted numerous POCs (Proof of Concepts) to efficiently import large data sets into the database from AWS S3 Bucket.
  • Worked on analyzing source systems and their connectivity, discovery, data profiling and data mapping.
  • Generated ad-hoc SQL queries using joins, database connections and transformation rules to fetch data from Teradata database.
  • Collected large amounts of log data using Apache Flume and aggregated it using Pig in HDFS for further analysis.
  • Generated the framework model from IBM Data Architect for the Cognos reporting team.
  • Created logical and physical 3NF relational models based on XML data extracts from transportation logistics application (ER/Studio)
  • Used Star Schema and Snowflake Schema methodologies in building and designing the Logical Data Model into Dimensional Models
  • Led a team responsible for the design and implementation of the ODS for a transportation and logistics management system.
  • Documented logical, physical, relational and dimensional data models. Designed the Data Marts in dimensional data modeling using Star and Snowflake schemas.
  • Attended numerous trainings to understand the Healthcare Domain and the concepts related to the project (Healthcare Informatics).
  • Collaborated with other data modelers to understand and implement best practices within the organization.
  • Involved in Data Architecture, Data profiling, data mapping and Data architecture artifacts design.
  • Developed strategies for warehouse implementation, data acquisition, and archive recovery.
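
A minimal sketch of a Netezza view that pushes work down to the database rather than the ETL tool, as described above; the schema, table and column names are assumed for illustration.

```sql
-- Hypothetical Netezza view that pushes filtering and aggregation down to the database
-- so the ETL tool reads pre-aggregated rows instead of raw detail.
CREATE OR REPLACE VIEW dw.v_daily_shipment_summary AS
SELECT
    s.ship_date,
    s.origin_terminal,
    s.destination_terminal,
    COUNT(*)            AS shipment_count,
    SUM(s.weight_lbs)   AS total_weight_lbs,
    SUM(s.freight_cost) AS total_freight_cost
FROM dw.fact_shipment s
WHERE s.ship_date >= CURRENT_DATE - 90   -- keep only the rolling window the ETL actually needs
GROUP BY s.ship_date, s.origin_terminal, s.destination_terminal;
```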

Environment: ER/Studio 9.7, Teradata 15, Amazon Redshift, AWS, Oracle 12c, ODS, OLAP, OLTP, Hadoop 3.0, MapReduce, HDFS, Sqoop 1.4, Apache Flume 1.8, Agile, SAP, Kafka, Pig 0.17, Oozie, Cassandra 3.11, MDM, Informatica 9.6, NoSQL, Unix.

Confidential - Stamford, CT

Sr. Data Architect/Data Modeler

Responsibilities:

  • Responsible for the data architecture design delivery, data model development, review, approval and Data warehouse implementation.
  • Generated ad-hoc SQL queries using joins, database connections and transformation rules to fetch data from legacy DB2 and SQL Server 2014 database systems.
  • Managed the meta-data for the Subject Area models for the Data Warehouse environment.
  • Generated DDL and created the tables and views in the corresponding architectural layers.
  • Handled importing of data from various data sources, performed transformations using MapReduce, loaded data into HDFS and extracted data from MySQL into HDFS using Sqoop.
  • Involved in performing extensive Back-End testing by writing SQL queries and PL/SQL stored procedures to extract the data from SQL Database.
  • Participate in code/design reviews and provide input into best practices for reports and universe development.
  • Designed and developed the conceptual then logical and finally physical data models to meet the needs of reporting.
  • Involved in designing and developing Data Models and Data Marts that support the Business Intelligence Data Warehouse.
  • Implemented logical and physical relational database and maintained Database Objects in the data model using Erwin
  • Responsible for Big data initiatives and engagement including analysis, brainstorming, POC, and architecture.
  • Involved in Testing like Unit testing, System integration and regression testing.
  • Worked with SQL Server Analysis Services (SSAS) and SQL Server Reporting Service (SSRS).
  • Worked on Data modeling, Advanced SQL with Columnar Databases using AWS.
  • Perform reverse engineering of the dashboard requirements to model the required data marts.
  • Developed Source to Target Matrix with ETL transformation logic for ETL team.
  • Cleansed, extracted and analyzed business data on a daily basis and prepared ad-hoc analytical reports using Excel and T-SQL.
  • Created Data Migration and Cleansing rules for the Integration Architecture (OLTP, ODS, DW).
  • Handled performance requirements for databases in OLTP and OLAP models.
  • Conducted meetings with business and development teams for data validation and end-to-end data mapping.
  • Responsible for Metadata Management, keeping up to date centralized metadata repositories using Erwin modeling tools.
  • Involved in debugging and tuning the PL/SQL code, tuning queries, and optimization for the SQL database.
  • Lead data migration from legacy systems into modern data integration frameworks from conception to completion.
  • Involved in Netezza Administration Activities like backup/restore, performance tuning, and Security configuration
  • Involved in the validation of the OLAP reports, and in unit and system testing of the OLAP report functionality and the data displayed in the reports.
  • Created a high-level, industry-standard, generalized data model and converted it into logical and physical models at later stages of the project using Erwin and Visio.
  • Participated in performance tuning using Explain Plan and TKPROF (see the Explain Plan sketch after this list).
  • Involved in translating business needs into long-term architecture solutions and reviewing object models, data models and metadata.
  • Developed Master data management strategies for storing reference data.
  • Worked with Data Stewards and Business analysts to gather requirements for MDM Project.
  • Used Agile Methodology of Data Warehouse development using Kanbanize.
  • Worked with the Hadoop ecosystem covering HDFS, HBase, YARN and MapReduce.
  • Performed data mapping and data design (data modeling) to integrate data across multiple databases into the EDW.
  • Designed both 3NF Data models and dimensional Data models using Star and Snowflake schemas.
  • Involved in Normalization/Denormalization techniques for optimum performance in relational and dimensional database environments.
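
A minimal sketch of the Explain Plan usage mentioned above for Oracle query tuning; the query and object names are illustrative assumptions.

```sql
-- Hypothetical Oracle tuning session: capture and read the execution plan for a slow query.
EXPLAIN PLAN FOR
SELECT c.region, SUM(f.sales_amount) AS total_sales
FROM   fact_sales f
JOIN   dim_customer c ON c.customer_key = f.customer_key
WHERE  f.date_key BETWEEN 20170101 AND 20171231
GROUP  BY c.region;

-- Display the plan the optimizer chose (access paths, join methods, estimated rows).
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```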

Environment: Erwin 9.5, HDFS, AWS, HBase, Hadoop 3.0, Metadata, MS Visio 2016, SQL Server 2016, Agile, PL/SQL, ODS, OLAP, OLTP, flat files, MDM.

Confidential - Boston, MA

Sr. Data Analyst

Responsibilities:

  • Understand the high level design choices and the defined technical standards for software coding, tools and platforms and ensure adherence to the same.
  • Analyzed business requirements and built logical data models that describe all the data and the relationships between the data.
  • Designed both 3NF data models for ODS and OLTP systems and dimensional data models using Star and Snowflake schemas.
  • Developed the batch program in PL/SQL for OLTP processing and used UNIX shell scripts to run it in the crontab.
  • Identified and recorded defects with the information required for issues to be reproduced by the development team.
  • Worked on the reporting requirements and was involved in generating the reports for the data model using Crystal Reports.
  • Collected, analyzed and interpreted complex data for reporting and/or performance trend analysis.
  • Wrote and executed unit, system, integration and UAT scripts in data warehouse projects.
  • Extensively used ETL methodology for supporting data extraction, transformations and loading processing, in a complex DW using Informatica.
  • Developed and maintained sales reporting using MS Excel queries, SQL in Teradata, and MS Access.
  • Involved in writing T-SQL and working on SSIS, SSRS, SSAS, data cleansing, data scrubbing and data migration.
  • Redefined many attributes and relationships in the reverse engineered model and cleansed unwanted tables/columns as part of Data Analysis responsibilities.
  • Designed the data marts using the Ralph Kimball's Dimensional Data Mart modeling methodology using Erwin.
  • Wrote complex SQL queries for validating the data against different kinds of reports generated by Business Objects XI R2 (see the validation query sketch after this list).
  • Worked on importing and cleansing of data from various sources like Teradata, Oracle and flat files, with high-volume data.
  • Part of team conducting logical data analysis and data modeling JAD sessions, communicated data-related standards.
  • Performed Reverse Engineering of the current application using Erwin, and developed Logical and Physical data models for Central Model consolidation.
  • Translated logical data models into physical database models, generated DDLs for DBAs
  • Performed Data Analysis and Data Profiling and worked on data transformations and data quality rules.
  • Wrote SQL scripts to test the mappings and developed a traceability matrix of business requirements.
  • Involved in extensive data validation by writing several complex SQL queries; involved in back-end testing and worked on data quality issues.
  • Created SQL tables with referential integrity, constraints and developed queries using SQL, SQL*PLUS and PL/SQL.
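
A minimal sketch of the kind of complex validation SQL described above, comparing a source staging table against a warehouse target; all object names are hypothetical.

```sql
-- Hypothetical validation query: compare row counts and amounts between a source staging
-- table and the warehouse target to flag load discrepancies.
SELECT src.load_date,
       src.row_count    AS source_rows,
       tgt.row_count    AS target_rows,
       src.total_amount AS source_amount,
       tgt.total_amount AS target_amount
FROM  (SELECT load_date, COUNT(*) AS row_count, SUM(amount) AS total_amount
       FROM   stg_transactions GROUP BY load_date) src
LEFT JOIN
      (SELECT load_date, COUNT(*) AS row_count, SUM(amount) AS total_amount
       FROM   dw_fact_transactions GROUP BY load_date) tgt
       ON tgt.load_date = src.load_date
WHERE tgt.load_date IS NULL
   OR src.row_count    <> tgt.row_count
   OR src.total_amount <> tgt.total_amount;
```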

Environment: Erwin 8.5, PL/SQL, Business Objects XI R2, Informatica 8.6, Oracle 10g, Teradata R13, Teradata SQL Assistant 12.0, Flat Files

Confidential - Chicago, IL

Data Analyst/ Data Modeler

Responsibilities:

  • Identified and compiled common business terms for the new policy-generating system and worked on the Contract subject area.
  • Maintained the stage and production conceptual, logical, and physical data models along with related documentation for a large data warehouse project.
  • Involved in logical and Physical Database design & development, Normalization and Data modeling using Erwin and SQL Server Enterprise manager.
  • Deleted users from cost centers, deleted users' authority to grant certain monetary amounts to certain departments, and deleted certain cost centers and profit centers from the database.
  • Created a report, using the SAP reporting feature, that showed which users had not performed scanning of journal voucher documents into the system.
  • Created Excel pivot tables that showed a table of users who had not performed scanning of journal voucher documents; users were able to find documents by double-clicking on their name within the pivot table.
  • Loaded new or modified data into the back-end Oracle database.
  • Optimized/tuned several complex SQL queries for better performance and efficiency.
  • Created various PL/SQL stored procedures for dropping and recreating indexes on target tables (see the PL/SQL sketch after this list); worked on issues with migration from development to testing.
  • Designed and developed UNIX shell scripts as part of the ETL process to automate the process of loading and pulling the data.
  • Validated cube and query Data from the reporting system back to the source system. Tested analytical reports using Analysis Studio
  • Involved in Regression, UAT and Integration testing
  • Participated in testing of procedures and Data, utilizing PL/SQL, to ensure integrity and quality of Data in Data warehouse.
  • Performed metrics reporting, data mining and trend analysis in a helpdesk environment using Access.
  • Gathered data from the Help Desk ticketing system and wrote ad-hoc reports, charts and graphs for analysis.
  • Compiled data analysis, sampling, frequencies and stats using SAS.
  • Used SQL Server and T-SQL in constructing tables and applying normalization and denormalization techniques on database tables.
  • Identified and reported on various computer problems within the company to upper management.
  • Reported on emerging trends to identify changes or trouble within the systems, using Access and Crystal Reports.
  • Performed User Acceptance Testing (UAT) to ensure that proper functionality is implemented.
  • Guided, trained and supported teammates in testing processes, procedures, analysis and quality control of data, utilizing past experience and training in Oracle, SQL, UNIX and relational databases.
  • Maintained Excel workbooks, including development of pivot tables, exporting data from external SQL databases, producing reports and updating spreadsheet information.
  • Modified user profiles, which included changing users' cost center locations and changing users' authority to grant monetary amounts to certain departments; these amounts were part of the overall budget granted per department.
  • Performed segmentation to extract data and create lists to support direct marketing mailings and marketing campaigns.
  • Defined data requirements and elements used in XML transactions; reviewed and recommended database modifications.
  • Analyzed and rectified data in source systems and Financial Data Warehouse databases.
  • Generated and reviewed reports to analyze data using different Excel formats and documented requirements for numerous ad-hoc reporting efforts.
  • Troubleshot, resolved and escalated data-related issues and validated data to improve data quality.
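
A minimal PL/SQL sketch of a stored procedure that drops and recreates an index around a bulk load, as mentioned above; the index and table names are illustrative only.

```sql
-- Hypothetical PL/SQL procedure that drops and recreates an index on a target table
-- so that a bulk load is not slowed down by index maintenance.
CREATE OR REPLACE PROCEDURE rebuild_target_index IS
BEGIN
    BEGIN
        EXECUTE IMMEDIATE 'DROP INDEX idx_fact_sales_datekey';
    EXCEPTION
        WHEN OTHERS THEN
            NULL;  -- ignore the error if the index does not exist yet
    END;

    -- Recreate the index after the load so reporting queries stay fast.
    EXECUTE IMMEDIATE
        'CREATE INDEX idx_fact_sales_datekey ON fact_sales (date_key)';
END rebuild_target_index;
```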

Environment: SAS/BASE, SAS/Access, SAS/Connect, Informatica PowerCenter (PowerCenter Designer, Workflow Manager, Workflow Monitor), SQL*Loader, Cognos, Oracle 11g, SQL Server 2014, Erwin 9.2, Windows 7, TOAD

Confidential

Data Analyst

Responsibilities:

  • Worked with the business analysts to understand the project specification and helped them to complete the specification.
  • Worked in Data Analysis, data profiling and data governance identifying Data Sets, Source Data, Source Metadata, Data Definitions and Data Formats.
  • Involved with all the phases of Software Development Life Cycle (SDLC) methodologies throughout the project life cycle.
  • Used cascaded parameters to generate a report from two different Data Sets.
  • Involved with Query Optimization to increase the performance of the Report.
  • Built risk templates for team members to enhance the business risk framework of Asset Management Services. Results from these templates will be reported to Senior Management.
  • Translated Data from multiple sources into useful information and business drivers utilized by senior management for strategic decision making.
  • Acknowledged by superiors for excellent communication of findings and projections in both narrative and oral reports.
  • Prepared reports by collecting, merging, analyzing, and summarizing information.
  • Developed and implemented a custom inventory management database utilizing Excel (built regression analysis models) and transferred it to MS Access, which contributed to more efficient inventory control and sales projection.
  • Documented the complete process flow to describe program development, logic, testing, implementation, application integration and coding.
  • Wrote the test cases and technical requirements and got them electronically signed off.
  • Performed Data analysis and Data profiling using complex SQL on various sources systems.
  • Created SQL scripts to identify keys, data anomalies, and data validation issues (see the profiling sketch after this list).
  • Performed Gap Analysis to check the compatibility of the existing system infrastructure with the new business requirement.
  • Helped to maintain systems and provided overall support to internal and external customers.
  • Worked with internal IT resources to communicate both technical and non-technical requirements, proposed solution architecture, and supported implementation of new features and system upgrades.
  • Created a package to transfer data between the OLTP and OLAP databases.
  • Created pivot tables and charts using worksheet data and external resources, modified pivot tables, sorted items and grouped data, and refreshed and formatted pivot tables.
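
A minimal sketch of the kind of SQL scripts used to identify key and data anomaly issues, as mentioned above; the table and column names are assumed for illustration.

```sql
-- Hypothetical data-profiling checks used to surface key and anomaly issues.

-- Candidate-key check: natural keys that appear more than once.
SELECT customer_id, COUNT(*) AS occurrences
FROM   stg_customer
GROUP  BY customer_id
HAVING COUNT(*) > 1;

-- Anomaly check: required attributes that are missing or out of range.
SELECT COUNT(*) AS bad_rows
FROM   stg_customer
WHERE  customer_name IS NULL
   OR  birth_date > CURRENT_DATE;
```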

Environment: Erwin 8.0, SQL, MS Office, MS Visio, SAS/BASE, SAS/Access, SAS/Connect, MS Access, SSRS, Windows 2000, Oracle 9i
