Data Analyst/SQL Developer Resume
Dallas, Texas
PROFESSIONAL SUMMARY:
- Almost 9 years of professional experience in Oracle PL/SQL, T-SQL, Data Warehousing, Data Mining, Data Science, Machine Learning, Data Modelling, and Source Mapping, providing solutions for competitive challenges and meeting client expectations.
- Experienced in Data Science, Data Mining, Advanced Analytics, Consulting, ETL, BI, and Reporting, all of which contribute to cost savings and increased ROI.
- Experienced in writing dynamic SQL to extract information from unstructured JSON data into relational tables for analytical consumption (see the sketch following this summary).
- Solid, practical experience in hypothesis testing, data cleaning, data translation, and machine learning & model building.
- Designed insightful visualizations and interactive dashboards in both Power BI and Tableau environments.
- Effective storyteller; created appealing, goal-oriented dashboards in Power BI tailored to the target audience.
- Knowledge of Microsoft Azure services such as Azure Data Factory (ADF), Azure Synapse Analytics, Databricks, Cognitive Search, Power Platform, and Blob storage.
- Developed and implemented an end-to-end ETL process for consuming data from blob storage by using Azure Data Factory (ADF) pipelines.
- Extensive experience with Online Transaction Processing (OLTP) and Online Analytical Processing (OLAP) systems and a good understanding of Conceptual, Logical, and Physical Model Development for OLTP and OLAP.
- Experienced in Dimensional Data Modelling, Star/Snowflake Schema, and FACT and Dimension tables.
- Knowledgeable in gathering and documenting Business Requirements Documents (BRDs), developing UAT Test Plans, administering the Traceability Matrix, and assisting with Post Implementation tasks.
- Skilled Data Modeler with strong Logical and Physical Data Modelling abilities, Data Profiling skills, and Data Quality Maintenance experience; developed data mapping documents, functional specifications, and queries.
- Rich engagement with the healthcare industry, working with claims data for Medicare, Medicare-Advantage, and Commercial Insurance Payers.
- Worked on a variety of CMS (Medicare) value-based initiatives including MSSP, BPCI-A, BPCI classic, and CJR models.
- Strong working knowledge of healthcare claims data from varied insurance payers across both the Commercial and Medicare space.
- Strong hold on medical coding, including DRG, HCPCS, and ICD-9/ICD-10 diagnosis and procedure codes.
- Exposure to EDI data formats such as 835 and 837 datasets for Claims and Payments.
- Real-life experience working with the Medicare Fee Schedule for Part B reimbursements for Physician claims.
- Exposure to Pricing methodology for home health, SNF and Hospice Service types with respect to rural and urban settings and various other factors.
- Working exposure to designing visit-level and stay-level Financial and Utilization tables and metrics for KPI reporting.
- Knowledge of the nuances and subtleties involved in Revenue Center Codes, Bill Type codes, Place of Service, and Provider Type codes for claims processing and categorization.
- Extensive familiarity with Python Pandas, NumPy, Scikit-learn, seaborn, and Matplotlib packages.
- Successfully developed custom Python packages for automation jobs that would otherwise consume 500 or more person-hours per month.
- Knowledgeable in Best Practices and Design Patterns, Cube Design, BI Strategy and Design, and 3NF Modelling.
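The summary above mentions flattening unstructured JSON into relational tables for analysis. The resume describes doing this with dynamic SQL inside the database; the following is only a minimal Python sketch of the same flattening idea, with a hypothetical claims.json file, illustrative field names, and a placeholder connection string.

```python
import json

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical input: nested claim records, one JSON object per claim.
with open("claims.json") as f:
    raw_claims = json.load(f)

# Flatten the nested claim lines into one relational row per claim line.
claim_lines = pd.json_normalize(
    raw_claims,
    record_path="lines",                      # assumed nested array of claim lines
    meta=["claim_id", "member_id", "payer"],  # assumed claim-level fields
    sep="_",
)

# Load the flattened rows into a relational table for analytical consumption.
engine = create_engine("mssql+pyodbc://analytics_dsn")  # placeholder connection
claim_lines.to_sql("stg_claim_lines", engine, if_exists="replace", index=False)
```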
PROFESSIONAL EXPERIENCE:
Confidential, Dallas, Texas
Data Analyst/SQL developer
Responsibilities:
- Writing ad hoc SQL queries employing joins, database links, and transformation rules to extract data from legacy MySQL, Oracle, and SQL Server database systems.
- Assisted with cost reduction and ROI improvement through BI reporting, supported by knowledge of advanced analytics, consulting, and ETL.
- Converted an on-premises application to Azure; processed and stored data sets using Azure services including Databricks, Azure SQL, and Blob storage.
- Created ADF pipelines with event/time triggers to transport data from an aging MySQL database to an Azure SQL data warehouse on a regular basis.
- Developed and implemented an end-to-end ETL process using Azure Data Factory (ADF) and Synapse to consume large volumes of claims data from Blob storage.
- Used Azure Functions to transfer data from Oracle systems to Azure SQL DB.
- Compiled and constructed data sets, source data, source metadata, data definitions, and data formats.
- Used data analysis techniques to identify low-quality and missing data in existing data and to evaluate business rules.
- Used COVID claims data to calculate provider payments.
- Gathered data from a variety of sources (MA claims, commercial claims, etc.) to create data sets for statistical model building.
- Recreated multivariate regression models to estimate target prices and a LASSO regression model for accurate patient risk prediction, and used Random Forest and Gradient Boosting techniques to reduce readmission utilization, adding value to patient care.
- Conducted various model evaluations, such as cross-validation, for machine learning models (a minimal sketch follows this list).
- Saved 400+ person-hours by automating the collection of raw flat files, iterating through dynamic folder and sub-folder trees to produce summary reports and write the results to a SQL database.
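A minimal scikit-learn sketch of the model building and cross-validation described above, assuming a hypothetical claims feature file and a continuous patient-risk target; the column names and scoring choice are illustrative, not taken from the actual project.

```python
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical modelling dataset built from claims extracts.
claims = pd.read_csv("claims_features.csv")
X = claims.drop(columns=["patient_risk_score"])  # assumed numeric feature columns
y = claims["patient_risk_score"]                 # assumed continuous target

# LASSO with built-in cross-validation to choose the regularization strength.
model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))

# Separate 5-fold cross-validation to report out-of-sample performance.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"Mean CV R^2: {scores.mean():.3f} (+/- {scores.std():.3f})")

model.fit(X, y)  # final fit on the full dataset
```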
Confidential, Hartford, CT
Sr. Healthcare Data Analyst
Responsibilities:
- Migrated an application from an on-premises server to Azure; processed and stored datasets using Azure Databricks, SQL, and Blob storage.
- Knowledge of Microsoft Azure services such as Azure Data Factory (ADF), Azure Synapse Analytics, Databricks, Cognitive Search, Power Platform, and Blob storage.
- Highly effective at data modelling: maintaining RDBMSs, logical and physical data modelling up to Third Normal Form (3NF), and multidimensional modelling schema principles (Star Schema, Snowflake Modelling, Facts, and Dimensions).
- Built ADF pipelines with time and event triggers for regular data transfer from the legacy MySQL database to the Azure SQL data warehouse.
- Used Azure DevOps to monitor transitions from the development environment to the production environment over time while taking lookback scenarios into account.
- Proficient in ETL (Extraction, Transformation, and Loading), Oracle, SQL Server, and other relational and non-relational databases; expert in data analysis, design, development, implementation, and testing.
- Created dynamic SQL stored procedures to decode patient structured data kept in JSON format.
- Wrote ad hoc SQL queries employing joins, database links, and transformation rules to profile data from DB2 and SQL Server database systems.
- Designed custom Azure solutions to manage data transfer jobs from Oracle systems to Azure SQL DB.
- Worked on normalization/denormalization procedures for relational and dimensional database environments to achieve the best performance.
- Experienced in data modelling, Star/Snowflake Schema, and FACT and Dimension tables.
- Participated in data mapping requirements to determine what data would be extracted from an internal data warehouse, transformed, and provided to an outside party; used this information to develop and execute thorough system test plans.
- Connected to data sources and created visualizations using Power BI's Direct Query and Import mode to meet reporting requirements.
- Validated business rules and found poor-quality and missing data in existing data using data analysis tools.
- Identified data sets, source data, source metadata, data definitions, and data formats pertinent to data analysis.
- Built an SSAS OLAP cube that gives non-technical business leaders access to customizable, high-level revenue data analyses.
- Used techniques such as Random Forest and Gradient Boosting to decrease readmissions while enhancing patient care.
- Developed a custom Python package that automates the collection of raw flat files by walking dynamic folder and sub-folder trees to produce a summary report and write the results to a SQL database at the click of a button (a minimal sketch follows this list).
- The tool allowed even non-technical stakeholders to obtain the required output in under 5 minutes, with no user interaction beyond a single client-provided input.
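A minimal sketch of the folder-walking automation described above, assuming hypothetical paths, CSV flat files, illustrative grouping columns, and a placeholder SQL Server connection string; the real package handled additional file types and reporting logic.

```python
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine

ROOT = Path(r"\\fileshare\raw_extracts")  # hypothetical root of the folder tree

# Recursively collect every flat file under the dynamic folder/sub-folder tree.
frames = [pd.read_csv(path) for path in ROOT.rglob("*.csv")]
combined = pd.concat(frames, ignore_index=True)

# Produce a simple summary report (grouping columns are assumed for illustration).
summary = (
    combined.groupby(["payer", "claim_month"], as_index=False)
    .agg(claim_count=("claim_id", "count"), paid_amount=("paid_amount", "sum"))
)

# Write the summary back to a SQL database in one step.
engine = create_engine("mssql+pyodbc://reporting_dsn")  # placeholder connection
summary.to_sql("monthly_claim_summary", engine, if_exists="replace", index=False)
```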
Confidential, Portland, OR
Data Analyst
Responsibilities:
- Developed a process flow and determined the data flow for the existing data ingestion system.
- Used data analysis to convert data in test and production environments, including data mapping from source to target database schemas, writing specifications, and creating data extraction scripts and programs.
- Automated the creation of primary keys using master tables and PL/SQL triggers.
- Automated scraping, ETL pipeline steps, and QA checks for possible data redundancy during the transform step, and set up CRON jobs for scheduling.
- Built a variety of database objects using T-SQL, including tables, views, procedures, triggers, and functions, to effectively describe, organize, and preserve data.
- Developed high performance data integration solutions for data warehousing utilizing the SQL Server SSIS tool, including extraction, transformation, and load packages.
- Created and designed form validation processes for data querying and updating.
- Used exception handling extensively to make debugging easier and to provide meaningful error messages in the program.
- Developed a polynomial regression model to estimate project costs and completion times.
- Used model regularization approaches to prevent the ML model from overfitting (a minimal sketch follows this list).
- Created materialized views for remote instances and server-side PL/SQL scripts for data manipulation and validation.
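A minimal sketch of the polynomial regression with regularization mentioned above, using a Ridge penalty as one common regularization approach; the input file, feature names, and polynomial degree are hypothetical stand-ins for the project data.

```python
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Hypothetical project history with numeric predictors and a cost target.
projects = pd.read_csv("project_history.csv")
X = projects[["scope_points", "team_size", "duration_weeks"]]  # assumed features
y = projects["actual_cost"]                                    # assumed target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Degree-2 polynomial terms plus a Ridge penalty to keep the model from overfitting.
model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    StandardScaler(),
    Ridge(alpha=1.0),
)
model.fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")
```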
Confidential, Charlotte, NC
Data Analyst/Scientist
Responsibilities:
- Created ad hoc SQL queries using joins, database connections, and transformation rules to extract information from legacy MySQL, Oracle, and SQL Server database systems.
- Assisted with cost reduction and ROI improvement through BI reporting, supported by knowledge of advanced analytics, consulting, and ETL.
- Created dynamic SQL stored procedures to reveal JSON-formatted data from patient forms.
- Collected, produced, and identified data formats, data definitions, source data, and source metadata.
- Used data analysis tools to find low-quality and missing data in existing data and to validate business rules.
- Developed a standardized and normalized data engine for claims reporting using data from Commercial Insurance, Medicare, Medicare-Advantage, and other payers.
- Constructed MA claims, Medicare claims, baseline claims, and monthly claims datasets for training, testing, and validation of statistical models.
- Performed model cross-validation and employed objective performance criteria for assessing statistical models.
- Used audit table data trails to create a pricing model framework that supported successfully renegotiating a contract and reducing costs by 60%, from $100 million to $40 million.
- Offered a data solution that helped NH and BCBS negotiate successfully (> $1B contract).
- Created new multivariate regression models to calculate target prices and a LASSO regression model for precise patient risk prediction, and employed Random Forest and Gradient Boosting techniques to cut down on readmission utilization and improve patient care.
- Developed tools that even non-technical stakeholders could run to obtain the needed results in less than five minutes.
- Developed a Python program that can analyze any timeline and any time period (month, quarter, or year), enabling retroactive negotiation of previous client overpayments (a minimal sketch follows).
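A minimal sketch of the flexible time-period analysis described in the last bullet, assuming a hypothetical payments extract with service_date and paid_amount columns; the period argument ("M", "Q", or "Y") selects month, quarter, or year.

```python
import pandas as pd


def summarize_payments(path: str, period: str = "M") -> pd.DataFrame:
    """Aggregate paid amounts by month ("M"), quarter ("Q"), or year ("Y")."""
    payments = pd.read_csv(path, parse_dates=["service_date"])  # assumed columns
    return (
        payments.groupby(pd.Grouper(key="service_date", freq=period))["paid_amount"]
        .sum()
        .reset_index(name="total_paid")
    )


# Example: quarterly totals for a retroactive overpayment review.
quarterly = summarize_payments("payments_extract.csv", period="Q")
print(quarterly)
```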