Big Data Architect Resume

Norfolk, VA

SUMMARY:

  • 20+ years of IT experience, predominantly with large companies, architecting, designing, and implementing large-scale systems, including 6+ years in Big Data consulting, driving and delivering results-oriented software solutions through continual innovation in techniques, tools, and processes
  • Designed and implemented scalable Big Data solutions for a variety of application needs, working with rapidly evolving technologies to analyze requirements and define unique solutions
  • Strong hands-on experience implementing Big Data solutions using technologies including Hadoop, HDFS, MapReduce, Sqoop, Attunity, Flume, Spark SQL, Hive, Kafka, Oozie, Python, HBase, Informatica Big Data Edition, SnapLogic, MongoDB 3.4.9, MongoDB Compass, Spark 2.2.0, Beeline, Hue, and Impala
  • Worked in multiple capacities across various projects as Solution Architect, Technical Lead, and Project Manager
  • An individual with excellent interpersonal & communication skills, strong business acumen, creative problem-solving skills, technical competency, team-player spirit and leadership skills
  • Technically strong hands-on with very positive attitude & passion for excellence
  • Results-focused / customer-oriented
  • Continuous improvement approach
  • Extensive technical background
  • Diagnostic & problem-solving skills
  • Time management & effort prioritization
  • Communication & Interpersonal skills
  • Proactive self-starter
  • Energetic & adaptive
  • Common sense / Intuitive approach
  • Ability & willingness to learn

AREAS OF EXPERTISE:

  • Domain: Telecom, Healthcare, e-Commerce, Retail & Banking
  • Architect / Design / Development / Big Data Implementation
  • Requirement Analysis / Process Improvements / Strategic Planning

TECHNICAL SKILLS:

Big Data Technologies: Hadoop, HDFS, MapReduce, Hive, Spark, HBase, Kafka, Sqoop, Attunity, Flume, Oozie, Hue, Splunk, AWS Kinesis, Zeppelin, ZooKeeper, Nagios, MongoDB

Languages: HiveQL, SQL, PL/SQL, UNIX shell scripting, Python, C, C++, CSS, XML, COBOL, Visual Basic, Java, Clipper 5.2, Perl, HTML

Databases: HBase, Greenplum, DB2, Oracle 9i/10g, Teradata 14.3, MySQL, MS SQL Server, MS Access

Tools: Eclipse, TOAD, SQL Developer, Tidal Job Scheduler, Autosys, AccuRev, Remedy, TFS

Source Control: Git, Subversion, CVS, Clear Case, TFS, Bitbucket

Data Warehousing Tools: Informatica, Informatica BDE, Talend, Pentaho, SSIS

PROFESSIONAL EXPERIENCE:

Confidential, Norfolk, VA

Big Data Architect

Responsibilities:

  • Architect and design a Big Data application solution, the Pharmacy Data Resiliency data repository, which consolidates pharmacy member demographic and eligibility data exposed through eligibility service APIs to external systems for prior authorization; it acts as the failover system when the primary SOA service cannot respond within 5 seconds or is down
  • Lead the implementation of the defined strategy through design and architecture for Hadoop Big Data technologies including Hadoop, Sqoop, MapReduce, Hive, Spark, Python, and the NoSQL database MongoDB
  • Define the Big Data and Analytics strategy leveraging relevant technology and analytics
  • Understand and translate project requirements into technical requirements and solutions for engineering team to execute
  • Deploy guidelines, standards, and processes to ensure the highest data quality and integrity deployment process and version control
  • Collaborate and partner with business stakeholders, data SMEs to determine usage of analytic solutions to drive business value
  • Drive innovation by developing proofs of concept and prototypes that illustrate approaches to technology and business problems
  • Design a core engine using the Spark framework to handle high data volumes, load the results into MongoDB collections, and integrate with external systems via web services (see the sketch after this list)
  • Develop and migrate the entire Big Data solution onto the AWS cloud platform in the next phase, the Voyager Initiative
  • Lead a team of onsite and offshore developers from multiple vendors in an Agile development environment
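
The core load engine above can be illustrated with a minimal PySpark sketch, assuming the MongoDB Spark connector is on the classpath; the URI, Hive table, and column names below are hypothetical, not the actual schemas.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  # Hypothetical URI; the connector resolves the target database/collection
  # from spark.mongodb.output.uri.
  spark = (SparkSession.builder
           .appName("pharmacy-resiliency-load")
           .config("spark.mongodb.output.uri",
                   "mongodb://mongo-host:27017/pharmacy.memberEligibility")
           .enableHiveSupport()
           .getOrCreate())

  # Read consolidated member demographic and eligibility rows from Hive
  # (illustrative table and columns).
  members = spark.sql("""
      SELECT member_id, first_name, last_name,
             eligibility_start, eligibility_end
      FROM pharmacy.member_eligibility_refined
  """)

  # Stamp each document, then load the API-facing MongoDB collection.
  docs = members.withColumn("load_ts", F.current_timestamp())
  (docs.write
       .format("com.mongodb.spark.sql.DefaultSource")
       .mode("append")
       .save())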

Environment: Cloudera Enterprise Data Hub CDH 5.12.2, MongoDB 3.4.9 Enterprise, MongoDB Compass, Spark 2.2.0, SQL Server, Beeline, Hue, Impala, Bitbucket, Confluence, Control-M, JIRA

Confidential, TN

Lead Architect

Responsibilities:

  • Responsible for architecting and designing the application solution that solves the data integration challenges of the Magellan Med Management systems
  • Designed the solution for the Aetna Coventry Specialty business, building an ingestion and outgestion framework with the integration tool SnapLogic
  • Created the solution map and the physical and technical architecture for an analytics platform that provides users with reporting and ad-hoc querying capabilities
  • Responsible for building the solution: a comprehensive system of loosely coupled modules that can be configured easily per the business needs of Magellan Medical Management. The system is flexible enough for a business user to set up a secure and efficient data integration process from the different data sources in the Magellan environment, as well as from raw data received through external data sources, storing everything in a single big data repository (a configuration-driven sketch follows this list)
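
SnapLogic pipelines themselves are assembled in its designer, but the loosely coupled, per-source configuration idea can be sketched in PySpark; every source name, URL, and path below is a hypothetical illustration, and the JDBC driver is assumed to be on the classpath.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("ingestion-framework").getOrCreate()

  # One entry per business-configured feed (all values illustrative).
  SOURCES = [
      {"name": "coventry_claims",
       "url": "jdbc:sqlserver://dbhost;databaseName=claims",
       "table": "dbo.claims", "target": "/data/raw/coventry_claims"},
      {"name": "specialty_members",
       "url": "jdbc:sqlserver://dbhost;databaseName=members",
       "table": "dbo.members", "target": "/data/raw/specialty_members"},
  ]

  for src in SOURCES:
      df = (spark.read.format("jdbc")
            .option("url", src["url"])
            .option("dbtable", src["table"])
            .load())
      # Land every source in the single big data repository, one path per feed.
      df.write.mode("overwrite").parquet(src["target"])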

Environment: Cloudera CDH 5.12.2, Zaloni, Hive 1.2, SnapLogic, Spark, SQL Server

Confidential, ATLANTA, GA

Big Data Architect

Responsibilities:

  • Responsible for architecting, designing, and deploying application solutions for Contour2 IP-playback data to meet various reporting requirements
  • Created the solution map and the physical and technical architecture for an analytics platform that provides users with reporting and ad-hoc querying capabilities
  • Interacted with the business to finalize and prioritize requirements, creating interactive data analyses in Zeppelin for business discussion and understanding
  • Optimized processes and implemented naming standards and best practices; identified and documented bad/incomplete data issues, analyzing and validating data from the Kinesis stream using Splunk
  • Applied business rules using Hive queries to enrich data with reference data from the EDW and ODS, and built Hive aggregate tables for reporting through Tableau (see the sketch after this list)
  • Automated the data load process with Oozie workflows, with scheduling options to deliver reports by email, by SFTP, and as Hive aggregate data tables
  • Built vendor payment analytics for TV Everywhere mobile app usage from IP-playback data; provided an extensible solution that generates monthly reports, using Hive scripts to perform ETL and enrich extracts with customer and site information from different sources
  • Built VOD analytics by unique household and app usage across dimensions such as TVOD and SVOD, in comparison to Linear, VOD, DVR, ROVI, DTA, and C1 users
  • Built app usage analytics to determine true C2 app usage and reduce calls generated by app users
  • Tracked and trended plant issues; built behavioral analytics for in-home and out-of-home usage, providing insights to marketing and advanced advertising and helping incentivize users
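
A minimal sketch of the enrich-and-aggregate pattern above, run as a Hive query through Spark SQL; the database, table, and column names are illustrative assumptions, not the actual schemas.

  from pyspark.sql import SparkSession

  spark = (SparkSession.builder
           .appName("ip-playback-aggregates")
           .enableHiveSupport()
           .getOrCreate())

  # Enrich playback events with customer attributes from the EDW, then build
  # the aggregate table Tableau connects to.
  spark.sql("""
      CREATE TABLE IF NOT EXISTS rpt.playback_daily_agg AS
      SELECT c.region,
             p.app_version,
             TO_DATE(p.event_ts)            AS event_date,
             COUNT(DISTINCT p.household_id) AS unique_households,
             COUNT(*)                       AS playback_events
      FROM raw.ip_playback p
      JOIN edw.customer c ON p.customer_id = c.customer_id
      GROUP BY c.region, p.app_version, TO_DATE(p.event_ts)
  """)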

Environment: HortonWorks HDP 2.7.3, Kerberos, Hive 1.2, Splunk 6.3.3, Kinesis Stream, Zeppelin 0.7.0, YARN, Ambari/Tez Views 0.7.2.6, Oozie Workflow 1.0.0, Toad for Apache Hadoop 1.5.3, NiFi, Spark, HBase, Linux, Kafka, Oracle 11i, SQL Developer 4.1.5, SQL Server, Tableau

Confidential, Tampa, FL

Architect / Sr. Big Data Consultant

Responsibilities:

  • Responsible for leading efforts to modernize and migrate the legacy analytics system to a big data-based analytics system
  • Responsible for designing and deploying EDW application solutions, optimizing processes, and defining and implementing best practices
  • Created solution maps, physical, technical and data architectures for Analytics Platform that provides users with reporting and ad-hoc querying capabilities
  • Designed and implemented the data acquisition and ETL pipeline using Attunity, Spark, Spark Streaming, and Hive. Architected the data ingestion process, ingesting more than 2,500 tables into the landing and raw zones, transforming the data with business logic into the refined zone, and loading Greenplum data marts as the reporting layer consumed through Tableau (a zone-to-zone sketch follows this list)
  • Engage with business analysts to understand business requirements and translate them to functional specs and technical design. Ensure full requirement traceability
  • Advised management team on tactical goals and provided the long-term road map for IT and business systems and processes
  • PEGA business process management application: built and deployed end-to-end data integration on Hadoop for PEGA's Care, Appeals & Grievances (CAG) and Medical Management Platform (MMP) AUTH applications and for Members, Claims & Pharmacy analytics; designed and developed Informatica BDE applications and Hive queries
  • Implemented critical solution components using technologies including Spark Streaming, Spark SQL, Python, Hadoop, MapReduce, Hive, HDFS, Sqoop, Oozie, Shell scripting and other big data technologies
  • Implemented Spark, leveraging interactive SQL queries to process large volumes of data; semi-automated chart review and doctor-notes processing
  • Worked with the data science analytics team on POCs of newer technologies: Jupyter Notebook, NumPy, SciPy, pandas, R, Tableau, H2O, and Spark MLlib
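
A minimal sketch of the raw-to-refined zone transformation and Greenplum hand-off referenced above, assuming data was landed as Parquet; the paths, the example business rule, and the connection details are hypothetical. Greenplum speaks the PostgreSQL wire protocol, so the stock PostgreSQL JDBC driver applies.

  from pyspark.sql import SparkSession
  from pyspark.sql import functions as F

  spark = SparkSession.builder.appName("raw-to-refined").getOrCreate()

  # Read data landed in the raw zone by Attunity/Sqoop (hypothetical path).
  raw = spark.read.parquet("/data/raw/claims")

  # Example business rule; the real transformations were table-specific.
  refined = (raw
             .filter(F.col("claim_status").isNotNull())
             .withColumn("processed_ts", F.current_timestamp()))
  refined.write.mode("overwrite").parquet("/data/refined/claims")

  # Push the reporting subset to the Greenplum data mart consumed by Tableau.
  (refined.write.format("jdbc")
      .option("url", "jdbc:postgresql://gp-host:5432/marts")
      .option("dbtable", "rpt.claims")
      .option("user", "etl_user")
      .option("password", "changeme")
      .mode("append")
      .save())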

Environment: HortonWorks HDP 2.3, Spark, Hadoop, MapReduce, HBase, Hive, Linux, Agile-Scrum, Storm, Kafka, Oozie, Informatica 9.6.1 BDE, Oracle 11i, PL/SQL Developer, SQL Server, Greenplum, Attunity, Autosys, Hue, YARN, Python, Tableau

Confidential, Dayton, OH

Architect

Responsibilities:

  • Architected future technology goals and the strategic initiative to implement Hadoop
  • Performed a technical assessment of the current architecture and identified gaps
  • Designed the technical footprint and implementation of the Hadoop data lake
  • Assessed the data integration tools Pentaho and Talend against Informatica BDE
  • Designed the strategic architectural road map for the enterprise Hadoop platform; envisioned and architected the integration framework as the overall solution
  • Signed off deliverables from the assessment phase to the implementation phase
  • Designed business use cases and documented processes with data flows and diagrams

Environment: HortonWorks HDP 2.2, Informatica 9.6.1, Teradata 14.11, HDFS, Flume, Sqoop, YARN, Hive, Ambari, ZooKeeper, Oozie

Confidential, Atlanta, GA

Architect / Sr. Consultant

Responsibilities:

  • Analyzed, sourced, and parsed the web log activity data of my. Confidential .com to identify the search activity patterns of customers before churn (a parsing sketch follows this list)
  • Ingested data sets such as customer usage data, event logs, and web logs into HDFS
  • Responsible for designing and deploying EDW application solutions, optimizing processes, and defining and implementing best practices
  • Designed and implemented Hadoop to store high-volume data such as billing and contact history, delivered as both batch and stream
  • Designed and implemented a system-wide data retention and archive strategy in the legacy EDW system, averting a multimillion-dollar expansion in 2013 by preserving the existing Teradata system
  • Decreased the incremental EDW load time by reorganizing the EDW load schedule and tuning ETL jobs with enhanced Teradata functionality and techniques
  • Collaborated with departments and cross-organizational teams to implement improvements in processes and in Business Support System/Operations Support System support; designed, executed, and updated the implementation plan
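
A minimal sketch of the web-log parsing step above, assuming the Apache common log format; the regex, the paths, and the /search URL prefix are illustrative assumptions.

  import re
  from pyspark.sql import Row, SparkSession

  spark = SparkSession.builder.appName("weblog-churn").getOrCreate()

  # Assumed format: ip - - [timestamp] "METHOD url PROTOCOL" status
  LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) \S+" (\d{3})')

  def parse(line):
      m = LOG_RE.match(line)
      if not m:
          return None  # drop malformed lines
      ip, ts, method, url, status = m.groups()
      return Row(ip=ip, ts=ts, method=method, url=url, status=int(status))

  logs = (spark.sparkContext.textFile("/data/raw/weblogs/")
          .map(parse)
          .filter(lambda r: r is not None)
          .toDF())

  # Keep the search activity that feeds the churn-pattern analysis.
  searches = logs.filter(logs.url.startswith("/search"))
  searches.write.mode("overwrite").parquet("/data/refined/search_activity")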

Environment: HortonWorks HDP 2.0, Informatica 9.1, Teradata 14, Oracle 11i, SQL Server, Perl, AccuRev, Remedy, TIBCO CEP, Hue, Flume, Sqoop, Hive, Spark, Ambari, Tableau

Confidential, Raleigh, NC

Tech Lead

Responsibilities:

  • Developed the Informatica mapping redesign for the enterprise data warehouse
  • Architected the ETL flow to capture record counts across the staging, EDW, and semantic layers to provide data lineage (see the sketch after this list)
  • Standardized processes and procedures; enforced processes that enable proactive involvement in value-added activities for the organization, developed innovative testing strategies, and streamlined processes to reduce testing redundancy
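
The record-count lineage pattern, sketched in Python against a generic DB-API cursor for brevity; the actual implementation was Informatica mappings, and the layer queries and audit table below are hypothetical.

  import datetime

  # One reconciliation query per layer (illustrative tables).
  LAYERS = {
      "staging":  "SELECT COUNT(*) FROM stg.orders",
      "edw":      "SELECT COUNT(*) FROM edw.orders",
      "semantic": "SELECT COUNT(*) FROM sem.orders_v",
  }

  def capture_counts(cursor, run_id):
      """Record per-layer row counts so each load can be reconciled end to end."""
      for layer, sql in LAYERS.items():
          cursor.execute(sql)
          (count,) = cursor.fetchone()
          cursor.execute(
              "INSERT INTO audit.load_lineage"
              " (run_id, layer, row_count, captured_at)"
              " VALUES (%s, %s, %s, %s)",
              (run_id, layer, count, datetime.datetime.now()),
          )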

Environment: Informatica 9.5.1, Teradata 14, Oracle 11i

Confidential, Chicago, IL

Architect

Responsibilities:

  • Architected the enterprise EDW to integrate with SAP
  • Produced the high-level design document for the new architectural setup
  • Translated requirements into technical solutions; obtained signoff and finalized the ETL technical design
  • Designed and developed 30+ mappings that extract data for Sales, Returns, Inventory, Forecast, Contract, and Chargeback from different internal and external feeds (IMS, Chargeback, Cardinal, McKesson, AHA), applying business logic and loading the data to SAP PSA
  • Received the Customer Excellence Project Star award in recognition for managing the end-to-end implementation of the project, from successful onsite client requirements gathering through development and implementation

Environment: Informatica 9.1, SAP BW, Xcelsius, SAP Business Objects 4.0, Bex Queries

Confidential

Project Lead

Responsibilities:

  • Led the delivery of the entire offshore development activity
  • Redesigned the framework for Profitability, Sales, Order, Inventory, and Purchasing per the business rules
  • Led a team of 4 resources, handling allocation and tracking, and reported status to the onsite Delivery Manager
  • Configured 40+ pre-packaged ETL, SDE, and SIL mappings and sessions using Oracle DAC
  • Configured the DAC execution plans for full and incremental loads
  • Received Spot Recognition for managing the end-to-end implementation of the project 1 month ahead of the timeline

Environment: Informatica 8.6, Oracle DAC 10.1.3.4.1, Oracle 11i, OBIEE 10.1

Confidential

Tech Lead

Responsibilities:

  • Implemented the KANA Phase 3 changes
  • Redesigned the Informatica logic to handle the new data flow per client requirements, including metrics stored in the subject areas Feedback, Rating, Content, Freshness, and Search
  • Implemented complex business logic to maintain a history of changes to the subject areas (a slowly changing dimension sketch follows this list)
  • Developed mappings, optimized mappings written by others, and guided team members
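
The history-of-changes logic above is the classic Type 2 slowly changing dimension pattern; the sketch below expresses it as Python-wrapped Oracle SQL for illustration only, since the actual implementation was Informatica mappings, and all table and column names are hypothetical.

  # Pass 1: close out current rows whose attributes changed in staging.
  EXPIRE_CHANGED = """
      UPDATE dim_content d
         SET d.effective_end = SYSDATE, d.is_current = 'N'
       WHERE d.is_current = 'Y'
         AND EXISTS (SELECT 1 FROM stg_content s
                      WHERE s.content_id = d.content_id
                        AND s.rating <> d.rating)
  """

  # Pass 2: insert new current versions for changed and brand-new keys
  # (changed rows no longer have a current version after pass 1).
  INSERT_NEW_VERSION = """
      INSERT INTO dim_content
          (content_id, rating, effective_start, effective_end, is_current)
      SELECT s.content_id, s.rating, SYSDATE, NULL, 'Y'
        FROM stg_content s
        LEFT JOIN dim_content d
               ON d.content_id = s.content_id AND d.is_current = 'Y'
       WHERE d.content_id IS NULL
  """

  def apply_scd2(cursor):
      """Expire changed rows, then insert the new current versions."""
      cursor.execute(EXPIRE_CHANGED)
      cursor.execute(INSERT_NEW_VERSION)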

Environment: Informatica 8.6.1 & Oracle 11
