
Next Generation Data Analytics Platform Architect Resume


SUMMARY:

  • Around 14 years of IT experience specializing in solution architecture for Big Data, data migration, and data warehousing at large, complex institutions.
  • Around 4 years of experience designing and building data integration and Big Data platforms on AWS.
  • Currently working as a Next Generation Data Analytics Platform Architect; previously held Technical Architect, Project Manager, Informatica Administrator, and Technical Lead roles, with extensive experience managing large teams.
  • Attended Snowflake Summit 2019 and AWS re:Invent 2019.
  • Hands-on experience building a next generation analytics platform on Amazon Web Services (IAM, DynamoDB, EMR, EC2, S3, SNS, SQS, Kinesis, Lambda, Athena, ECS, API Gateway, Redshift).
  • Hands-on experience building a next generation data science platform on Amazon Web Services (SageMaker, JupyterHub, and RStudio).
  • Good hands-on experience deploying and administering an Airflow environment using the Celery Executor (see the Airflow sketch after this list).
  • Good hands-on experience architecting, designing, implementing, and administering a large-scale Snowflake cloud data warehouse.
  • Good hands-on experience with the Snowflake Business Critical and Enterprise editions.
  • Used import and export through Snowflake internal stages versus external stages backed by S3 buckets (see the Snowflake staging sketch after this list).
  • Hands-on experience designing and building CI/CD code deployment pipelines using Ansible, Jenkins, GitHub, and sbt.
  • Extensive experience with infrastructure as code using Terraform and AWS CloudFormation templates.
  • Excellent hands-on experience designing and implementing the Looker visualization tool.
  • Looker administration and development experience with large teams.
  • Skilled in designing and configuring secured VPCs with private and public networks in AWS, creating subnets, route tables, network ACLs, and NAT gateways as required and distributing them across the VPC's availability zones (see the VPC sketch after this list).
  • Experience with AWS instances spanning Dev, Test, and Pre-production environments, and with cloud automation through open-source DevOps tools such as Ansible, Jenkins, and Docker.
  • Worked on migrating existing systems to AWS, proposing both lift-and-shift and redesign approaches for data warehouse applications.
  • 8 years of onsite experience, including engaging with clients to estimate and manage large projects as a Solution Architect.
  • Working knowledge of Hadoop, including MapReduce, Hive, HBase, Spark, and Scala.
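To make the Airflow bullet above concrete, here is a minimal DAG sketch for a CeleryExecutor-backed deployment, assuming Airflow 2.x. The dag_id, schedule, and task body are hypothetical, and the executor itself is selected in airflow.cfg rather than in DAG code:

```python
# Minimal Airflow 2.x DAG sketch. The Celery Executor is configured in
# airflow.cfg ([core] executor = CeleryExecutor), not in the DAG itself.
# dag_id, schedule, and the task body are hypothetical examples.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder task body; a real task would pull from a source system.
    print("extracting orders...")


with DAG(
    dag_id="example_celery_dag",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_orders", python_callable=extract_orders)
```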
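Likewise, a sketch of the internal-stage versus external-stage load/unload pattern from the Snowflake bullet, using the snowflake-connector-python package. Connection parameters and the table, stage, file, and bucket names are hypothetical:

```python
# Snowflake internal vs. external stage usage via snowflake-connector-python.
# Connection parameters, table, stage, file, and bucket names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="LOAD_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Internal stage: upload a local file to the user stage, then COPY it in.
cur.execute("PUT file:///tmp/orders.csv @~/orders/")
cur.execute(
    "COPY INTO orders FROM @~/orders/ "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)

# External stage: an S3-backed stage, usable for both loads and unloads.
cur.execute(
    "CREATE STAGE IF NOT EXISTS s3_stage "
    "URL = 's3://my-bucket/exports/' "
    "STORAGE_INTEGRATION = my_s3_integration"
)
cur.execute("COPY INTO @s3_stage/orders_export FROM orders")

cur.close()
conn.close()
```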
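And a boto3 sketch of the public/private VPC layout described above. The region, availability zones, and CIDR blocks are hypothetical, and a production build would more likely express this as Terraform or CloudFormation:

```python
# boto3 sketch of a VPC with one public and one private subnet, an internet
# gateway, a NAT gateway, and a public route table. The region, AZs, and
# CIDRs are hypothetical; tagging and error handling are omitted.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

public_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
private_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)["Subnet"]["SubnetId"]

# Internet gateway plus a default route makes the first subnet public.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_id)

# NAT gateway in the public subnet gives the private subnet outbound access
# (a private route table pointing 0.0.0.0/0 at it is still needed).
alloc_id = ec2.allocate_address(Domain="vpc")["AllocationId"]
nat_id = ec2.create_nat_gateway(SubnetId=public_id, AllocationId=alloc_id)[
    "NatGateway"]["NatGatewayId"]
```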

TECHNICAL SKILLS:

  • Cloud Platform: AWS (EMR, SageMaker, EBS, EFS, ALB, EC2, S3, SNS, SQS, Lambda, Athena, ECS, API Gateway), Salesforce
  • Cloud DB Platform: Redshift, Snowflake
  • Languages & Tools: C, C++, PL/SQL, Visual SourceSafe, Toad, Perl, Python
  • Data Integration: Informatica, SnapLogic, DataStage
  • Business Intelligence: Looker, Tableau, BusinessObjects, MicroStrategy
  • DBMS: Oracle 8i, PL/SQL, IBM DB2, Teradata, PostgreSQL, EnterpriseDB, Netezza, SOQL
  • CI/CD: GitHub, Nexus, Jenkins, Terraform, Ansible

PROFESSIONAL EXPERIENCE:

Next Generation Data Analytics Platform Architect

Confidential

Roles and Responsibilities:

  • Implemented solutions in AWS using services such as IAM, DynamoDB, EMR, EC2, S3, SNS, SQS, Kinesis, Lambda, Athena, ECS, API Gateway, and Redshift.
  • Built a next generation platform in the cloud and implemented supporting tools (SnapLogic, Looker, Alation, TigerGraph, Airflow).
  • Good hands-on experience setting up and administering the SnapLogic environment.
  • Good hands-on experience setting up and administering the Looker environment.
  • Implemented solutions for ingesting data from various sources and processing data at rest using Big Data technologies such as Hadoop, Spark, and Scala.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data using Hadoop/Big Data techniques.
  • Good hands-on experience building infrastructure code using Terraform.
  • Good hands-on experience designing and building CI/CD pipelines using GitHub, Ansible, and Jenkins.
  • Worked in an Agile environment with tools such as JIRA, ServiceNow, and Confluence.
  • Provided recommendations to the machine learning group on the customer roadmap using SageMaker and containerization.
  • Explored Spark to improve the performance of and optimize existing Hadoop algorithms using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN (see the PySpark sketch after this list).
  • Set up and configured AWS EMR clusters and used Amazon IAM to grant users fine-grained access to AWS resources (see the EMR sketch after this list).
  • Worked with sequence files, RC files, map-side joins, bucketing, and partitioning for Hive performance enhancement and storage improvement (see the Hive DDL sketch after this list).
  • Enabled and configured cluster services such as HDFS, Spark, Scala, Kinesis, Kafka, Sqoop, Zeppelin Notebook, and Spark/Spark2 on AWS.
  • Installed and configured dashboards on the Looker and Tableau visualization tools.
  • Hands-on expertise running Spark and Spark SQL.
  • Evaluated machine learning algorithms for segmentation, prospecting, and recommendation models.
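To illustrate the Spark optimization bullet above, a small PySpark sketch combining a DataFrame, Spark SQL, and an equivalent pair-RDD aggregation. The input path and column names are hypothetical:

```python
# PySpark sketch: DataFrame + Spark SQL + pair-RDD aggregation.
# The S3 input path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example").getOrCreate()

df = spark.read.option("header", True).csv("s3://my-bucket/raw/orders.csv")
df.createOrReplaceTempView("orders")

# Spark SQL over the registered DataFrame.
top = spark.sql(
    "SELECT customer_id, COUNT(*) AS n FROM orders "
    "GROUP BY customer_id ORDER BY n DESC LIMIT 10"
)
top.show()

# The same aggregation as a pair-RDD reduceByKey on (key, 1) pairs.
counts = df.rdd.map(lambda row: (row["customer_id"], 1)) \
               .reduceByKey(lambda a, b: a + b)
print(counts.take(5))

spark.stop()
```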
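A boto3 sketch of the EMR cluster setup mentioned above; the cluster name, release label, instance sizing, log bucket, and IAM role names (here the EMR defaults) are hypothetical placeholders:

```python
# boto3 sketch: launch an EMR cluster with Spark and Hive installed.
# Name, release label, instance types/counts, log bucket, and IAM roles
# are placeholders; the roles carry the fine-grained IAM access mentioned.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="analytics-cluster",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    LogUri="s3://my-bucket/emr-logs/",
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```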
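And a sketch of the partitioning/bucketing pattern behind the Hive performance bullet, issued here through a Hive-enabled SparkSession (the same DDL can be run in the Hive CLI). Table and column names are hypothetical:

```python
# Hive partitioning/bucketing sketch via a Hive-enabled SparkSession.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Partition by date so queries prune whole partitions; bucket by
# customer_id so joins on that key can be converted to map-side
# (bucketed) joins without a full shuffle.
spark.sql("""
    CREATE TABLE IF NOT EXISTS orders_bucketed (
        order_id BIGINT,
        customer_id BIGINT,
        amount DOUBLE
    )
    PARTITIONED BY (order_date STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC
""")
```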

Data Integration Platform Admin

Confidential

Responsibilities:

  • Provided production support on shared platforms hosting 50+ projects.
  • Set up and configured the PowerCenter domain, grid, and services.
  • Identified root causes and implemented resolutions for production processing issues.
  • Performed object deployments on versioned repositories from QA to Production.
  • Set up and administered PowerCenter security for projects.
  • Implemented capacity management and performance best-practice guidelines and monitoring.
  • Coordinated PowerCenter technical review/approval before on-boarding.

Technical Lead / Designer

Confidential

Responsibilities:

  • As ETL lead, assessed requirements and defined the strategy for data migration and data synchronization.
  • Analyzed source systems and created best-practice mappings to the target database schema.
  • Defined and implemented the required business transformation rules and logic (ETL development).
  • Designed and developed exception handling and data cleansing/standardization procedures.
  • Optimized ETL performance.
  • Identified and managed data quality issues.
  • Defined a suitable architecture for integrating data from multiple, disparate sources.

Technical Architect

Confidential

Responsibilities:

  • As architect, assessed requirements and defined the strategy for data migration and data synchronization.
  • Defined the technical architecture and implementation plan for Salesforce data migration and data synchronization.
  • Designed appropriate technical solutions and architectures to meet customer needs.
  • Facilitated product issue escalation and resolution, working with the Informatica, storage, system administration, database administration, and network performance monitoring teams.

Confidential

Technology/Software

Responsibilities:

  • Provided production support on shared platforms hosting 50+ projects.
  • Set up and configured the PowerCenter domain, grid, and services.
  • Identified root causes and implemented resolutions for production processing issues.
  • Performed object deployments on versioned repositories from QA to Production.
  • Set up and administered PowerCenter security for projects.
  • Implemented capacity management and performance best-practice guidelines and monitoring.
  • Coordinated PowerCenter technical review/approval before on-boarding.

Project ECS

Confidential

Technology/Software

Responsibilities:

  • Developed data naming standards.
  • Analyzed source systems and created best-practice mappings to the target database schema.
  • Defined and implemented the required business transformation rules and logic (ETL development).
  • Designed and developed exception handling and data cleansing/standardization procedures.
  • Optimized ETL performance.
  • Identified and managed data quality issues.
  • Assessed and recommended ETL tools.
  • Defined a suitable architecture for integrating data from multiple, disparate sources.

Confidential

Technology/Software: Informatica

Responsibilities:

  • Created mappings using the ETL tool.
  • Performed data cleansing and transformation.
  • Performed performance tuning of the data extraction process.
  • Supported user acceptance testing.
  • Delivered on time and with high quality throughout the current project; the customer was very satisfied with the deliverables.
  • Designed ETL processes that run successfully even under heavy data volumes (normal load of ~30M records).
  • Received the customer's CAP recognition for the March release.
  • Earned 15 highest Bingo scores from customers for excellent support in case handling, communication of problem status, technical quality, and overall experience.
  • Delivered Informatica knowledge-transfer sessions to team members.
