
Data Architect and Data Modeler Resume


Pennsylvania

PROFILE:

  • Overall 8.5+ years of work experience in the IT industry across Big Data (Hadoop development and data modeling) and BPM.
  • 4.5+ years of experience on Big Data Hadoop as a modeler and 4 years on BPM.
  • Strong development techniques for data migration to Hadoop environments and for analytics.
  • Highly versatile and experienced in adapting and implementing the latest technologies in new application solutions.
  • Able to work effectively both in a team and individually, and to deliver scalable, business-critical projects.
  • Strong leadership traits with an excellent ability to coordinate many people at once under difficult situations.
  • Conducted Big Data/Hadoop sessions for the unit and acted as a technical panelist for internal/external hiring.
  • Extensive experience with data migration into Hadoop; implemented complex queries using Hadoop ecosystem tools.
  • Experience with the Cloudera and Hortonworks Hadoop distributions, plus basic knowledge of MapR, IBM Bluemix, and AWS.
  • Strong knowledge of the Hadoop ecosystem: HDFS, MapReduce, Hive, Pig, HCatalog, Sqoop, Oozie, Tez, Spark, Apache NiFi, and Ranger policies.
  • Highly proficient in data modeling concepts: ALDM, logical, physical, and multidimensional data modeling.
  • Strong problem-solving and technical skills coupled with confident decision making, enabling effective solutions and high customer satisfaction.
  • Strong knowledge of data governance and of metadata creation for different sources, aligned with target databases.
  • Strong knowledge of data warehouses and data lakes.
  • Extensive experience converting business requirements into appropriate technical solutions.
  • Experience with application/web servers such as WebLogic 8.1 and Tomcat 5.0, and with the Oracle 9i and 10g databases.

TECHNICAL SKILLS:

Languages: Core Java

Scripting Languages: HTML, Python

Big Data Ecosystem: Hadoop (HDFS, MapReduce), Hive, Sqoop, Hue, Oozie, Ranger

Basic Knowledge: HBase, HCatalog, Pig, Tez, Spark, Apache NiFi

Databases: Oracle 11g, MySQL 5.5, PL/SQL

Operating Systems: Windows (XP, Vista, 7, 8), Linux, Ubuntu

Web/App Servers: WebLogic 9.1, Tomcat 6.0

BPM Tools: Savvion 6.5, Pega 6.2

Database Tools: Oracle SQL Developer, Toad 9.1

IDEs: Eclipse, NetBeans 7.0, BPM Studio, PRPC 6.2 SP2

Development Methodologies: Agile Scrum, Agile Iterative

PROFESSIONAL EXPERIENCE:

Confidential, Pennsylvania

Environment: Hadoop and ecosystem tools, Hortonworks, PostgreSQL, Informatica, MySQL

Tools: Rally, Remedy, PuTTY, WinSCP, Jira, Box, ALM, GitHub

Data Architect and Data Modeler

Responsibilities:

  • Gathered business requirements by organizing and managing scheduled meetings with business stakeholders, subject matter experts, technical architects, and IT analysts.
  • Designed and developed entity-relationship diagrams and modeled transactional databases and the data warehouse using PowerDesigner 16.x.
  • Identified facts and dimensions from the business requirements and developed the data models; transformed the logical data model into the physical data model, ensuring primary key/foreign key relationships, consistent data attribute definitions, and primary index considerations in the PDM.
  • Used PowerDesigner repository services for effective model management, sharing, dividing, and reusing information and designs for productivity improvement.
  • Generated Hive schemas for business service tables and uploaded them to Box for developers (see the DDL sketch after this list).
  • Maintained different versions of the ALDM and updated schemas accordingly.
  • Collaborated with data architects on data model management and version control.
  • Conducted data model reviews with project team members.
  • Captured technical metadata through data modeling tools.
  • Created data objects (DDL).
  • Enforced standards and best practices around data modeling efforts.
  • Ensured data warehouse and data mart designs efficiently support BI and end-user reporting.
  • Collaborated with BI teams to create reporting data structures.
  • Implemented Ranger policies per client and downstream needs.
  • Drove the team to build tools for ingestion, processing, consumption, and security management.
  • Drove the team's migration of the HDP platform to AWS and StreamSets.
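
A minimal sketch of the kind of Hive DDL generated for a business service table; the database, table, column names, and HDFS path are hypothetical placeholders, not the actual client schema:

    -- Hypothetical business service table (all names illustrative).
    -- External table over HDFS data, partitioned so downstream Hive
    -- queries can prune by load date.
    CREATE EXTERNAL TABLE IF NOT EXISTS svc_db.business_service (
        service_id   BIGINT,
        service_name STRING,
        status_code  STRING,
        updated_ts   TIMESTAMP
    )
    PARTITIONED BY (load_dt STRING)
    STORED AS ORC
    LOCATION '/data/svc/business_service';

    -- Register each new partition after its data lands in HDFS.
    ALTER TABLE svc_db.business_service
        ADD IF NOT EXISTS PARTITION (load_dt = '2018-01-01');

Keeping such DDL in version control alongside the model exports makes schema changes traceable across ALDM versions.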

Confidential

Environment: Hadoop (MapR) and ecosystem tools, Teradata, Informatica, MetaLoad

Tools: Remedy, PuTTY, WinSCP

Big Data Lead

Responsibilities:

  • Worked with business analysts to understand the business requirements.
  • Designed and developed solutions implementing these business requirements.
  • Designed and developed ETL workflows.
  • Monitored data load processes in Tidal Scheduler.
  • Prepared design documents and built data mappings, code, and groups in MetaLoad.
  • Wrote Hive queries per client requirements for pulling data from Hadoop.
  • Converted Hive HQL to Spark SQL.
  • Optimized the Hive HQL (see the sketch after this list).
  • Performed unit testing and system integration testing, and assisted business users in user acceptance testing.
  • Performed peer code reviews.
  • Coordinated and synchronized with the offshore/onsite development team to identify priorities and update scope and delivery schedules.
  • Debugged production issues on priority.
  • Documented daily and weekly status reports and sent them to the client.
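
A minimal sketch of the kind of HQL optimization involved, using hypothetical sales tables: restrict the scan to one partition of an ORC table and project only the needed columns, instead of selecting everything from a flat text table.

    -- Before (hypothetical): full scan of an unpartitioned text table.
    --   SELECT * FROM sales_txt WHERE txn_date = '2017-06-01';

    -- After: the same data stored as ORC and partitioned by load date.
    -- Hive reads one partition and only the referenced columns.
    SELECT txn_id,
           cust_id,
           amount
    FROM   sales_orc
    WHERE  load_dt = '2017-06-01';

Simple statements like this typically run unchanged in Spark SQL, which keeps the HQL-to-Spark conversion largely mechanical.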

Confidential

Environment: Hadoop Hortonworks 2.2 and ecosystem tools, Teradata

Tools: JIRA, ActiveBatch, Patrol, ELK, Remedy, SVN, DatManager, PuTTY, WinSCP

Hadoop Developer

Responsibilities:

  • Analyzed the use case document and business requirement documents.
  • Validated the user stories given for development.
  • Participated in client calls with business users and data scientists.
  • Involved in the design of the data lineage life cycle.
  • Moved all FAB data flat files generated from various sources to HDFS for further processing (see the sketch after this list).
  • Created Hive tables to store the data and populated them using complex queries.
  • Wrote Hive queries per client requirements for pulling data from Hadoop.
  • Monitored production systems and logs through Patrol and dashboards.
  • Documented daily and weekly status reports and sent them to the client.
  • Resolved JIRA tasks proactively.
  • Prioritized and resolved issues; ensured proper allocation of work to all team members and followed up to closure, with end-to-end ownership.
  • Acted as offshore lead and performed end-to-end tasks.
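
A minimal sketch of landing a delimited flat file in HDFS and exposing it through Hive; the database, table, delimiter, and paths are hypothetical:

    -- Hypothetical staging table over flat files landed in HDFS.
    CREATE EXTERNAL TABLE IF NOT EXISTS stg.fab_feed (
        record_id  STRING,
        source_sys STRING,
        payload    STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
    STORED AS TEXTFILE
    LOCATION '/landing/fab_feed';

    -- Move a newly arrived HDFS file into the table's location so it
    -- becomes queryable immediately.
    LOAD DATA INPATH '/incoming/fab_feed_20160101.dat'
    INTO TABLE stg.fab_feed;

Because the table is external, dropping it removes only the Hive metadata, not the landed files.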

Confidential

Environment: Hadoop Cloudera 5.x and ecosystem tools

Tools: Stash, PuTTY, WinSCP

Hadoop Developer

Responsibilities:

  • Fully involved in the requirements analysis phase.
  • Acted as a business analyst.
  • Participated in client and business calls.
  • Supported the onshore analytics team by writing HiveQL scripts that generated data sets from data coming in from various systems.
  • Migrated Oracle data to the Hadoop cluster and transformed existing SQL scripts into Hive queries.
  • Evaluated Hive features such as partitioning and file formats to benchmark Hive performance (see the sketch after this list).
  • The data-loading utility migrated both static and moving data sets, such as transactional snapshots and accumulating snapshots, from Oracle to the Hadoop cluster.
  • Gained familiarity with data analytics through a use case: finding line-failure thresholds and generating different views of the data.
  • Responsible for daily deliverables.
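
A minimal sketch of the partitioning and file-format benchmarking, with hypothetical table names: the raw text data is rewritten as partitioned ORC, and the same aggregate is timed against each copy.

    -- Hypothetical benchmark target: ORC copy of a raw text table,
    -- partitioned by event date.
    CREATE TABLE IF NOT EXISTS bench.events_orc (
        event_id BIGINT,
        event_ts TIMESTAMP,
        metric   DOUBLE
    )
    PARTITIONED BY (event_dt STRING)
    STORED AS ORC;

    -- Rewrite the text data into ORC using dynamic partitioning.
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    INSERT OVERWRITE TABLE bench.events_orc PARTITION (event_dt)
    SELECT event_id, event_ts, metric, to_date(event_ts) AS event_dt
    FROM   bench.events_txt;

    -- Run (and time) the same aggregate against events_txt and
    -- events_orc to compare scan behavior.
    SELECT event_dt, AVG(metric)
    FROM   bench.events_orc
    WHERE  event_dt BETWEEN '2015-01-01' AND '2015-01-07'
    GROUP BY event_dt;

In such comparisons the partitioned ORC copy typically scans only the requested date range, while the text table requires a full scan.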

Confidential

Environment: Pega PRPC 6.2 SP2

Developer/System Architect

Responsibilities:

  • Coded in Pega Rules Process Commander (PRPC) for the assigned tasks.
  • Designed harnesses, sections, flows, flow actions, when rules, and decision tables.
  • Created report definitions and reports.
  • Implemented client-side and server-side validations.
  • Involved in implementing SLAs, correspondence, and circumstancing.
  • Responsible for daily deliverables.
  • Supporting QA and UAT teams.

Confidential

Environment: Savvion 6.5/7.0, Java, J2EE, Oracle 10g, BPM Studio, WebLogic Server 8.1

BPM Savvion Technical Consultant

Responsibilities:

  • Provided L2-level development and production support.
  • Dealt with the challenge of keeping the production system up and running 100% of the time.
  • Analyzed impact and deployed changes without affecting production systems.
  • Handled change requests, involving development of new process templates, changes to existing templates, and changes to adapters, daemons, etc.
  • Prepared the user manual for this application.
  • Prepared the installation and deployment guide for users of the application.
  • Debugged and resolved issues.
  • Participated in client calls.
