
Big Data Architect Consultant Resume

PROFILE:

  • Working in the IT industry since August 2004, with extensive experience managing and leading teams and strong knowledge of functional and object-oriented programming; Agile and Waterfall SDLC; NiFi, HDFS, Spark, Scala, Python, Kafka, Pig, Hive, Sqoop, Phoenix, SQuirreL, Splunk, Podium Data, MongoDB, PostgreSQL, MapReduce, HBase, MVC and ETL on Hadoop; Spring and Core Java/J2EE; IBM IGC and Global IDs (metadata discovery); SOAP/REST APIs; IBM WebSphere Application Server (WAS) and JBoss Application Server; Apache Camel and Maven; cloud-based OpenShift Docker pods/servers and IBM Bluemix cloud; Solace; Oracle DB; Java multithreading and Unix shell scripting; Jenkins, TeamCity, UCD, Ansible and Dynatrace; AWS Cloud and GCP (Google Cloud Platform).
  • Expertise in several GCP services, focusing on Dataflow, Beam, Cloud Functions, Dataproc, Spark, Hive, Kubernetes, Spanner, Datastore and BigQuery.
  • Expertise in automation of the infrastructure using Terraform for both AWS and GCP.
  • Built and configured a virtual data center in the AWS cloud to support Enterprise Data Warehouse hosting, including Virtual Private Cloud (VPC), public and private subnets, security groups, route tables and Elastic Load Balancer.
  • Thorough experience setting up and building AWS infrastructure resources (VPC, EC2, S3, IAM, EBS, security groups, Auto Scaling, SES, SNS and RDS) with CloudFormation JSON templates, along with expertise in AWS Lambda for deploying microservices and triggering code runs from S3 and SNS. Provided highly durable and available data by creating and managing policies for the S3 data store, including versioning and lifecycle policies.
  • Expertise in various AWS services, such as managing Amazon instances by taking AMIs and administering and monitoring EC2 instances using Amazon CloudWatch. Used Amazon Route 53 to manage DNS zones and assign public DNS names to Elastic Load Balancer IPs.
  • Extensive experience managing Amazon Redshift clusters, including launching clusters and specifying node types. Expertise in transferring data from data centers to the cloud using the AWS Import/Export Snowball service. For web application deployment, used AWS Elastic Beanstalk to deploy and scale web applications and services developed in Java, Node.js, Python, Ruby and Docker on familiar servers such as Apache.
  • A proven track record of successfully managing & delivering projects to tight deadlines in Business change, Regulatory Compliance, Service Transformation and Large-Scale System Implementation areas.
  • Have a strong track record of project delivery in the banking/finance and retail sectors for clients.
  • A delivery-focused big data architect and lead with attention to detail, personal commitment to work, a can-do attitude and a hands-on approach. Builds high-performance delivery teams by building relationships with the business and suppliers, developing the team, coaching and mentoring, and encouraging healthy competition and collective ownership of delivery. Strong oral and written communication skills.

PROFESSIONAL EXPERIENCE:

Confidential

Big Data Architect Consultant

Responsibilities:

  • Designed and built infrastructure for the Google Cloud environment from scratch. Identified the architectural constructs used as boundaries between various environments.
  • Provided insight into designing the architecture securely for the specific use case, since this is a regulated financial institution.
  • Designed the architecture and automated the infrastructure using Terraform.
  • Deployed Istio on GKE using Google Cloud Deployment Manager. Bootstrapped gcloud and kubectl for the cluster.
  • Configured deployment scripts for Kubernetes clusters. Configured Dockerfiles to optimize container performance and capabilities to support the required existing builds.
  • Set up and tuned GKE scheduling and scaling to ensure low queuing times for Jenkins builds.
  • Configured Jenkins to provision one-use ephemeral Jenkins slave agents on GKE.
  • Expertise in multiple Google Cloud services: for compute, Compute Engine and App Engine; for networking, Cloud Virtual Network and Cloud Load Balancing; for data, Cloud Bigtable, Cloud SQL, BigQuery, Cloud Dataflow, Beam, Dataproc, Spark, Hive, Datastore and Kafka; for identity and security, IAM, CRM and Cloud Security Scanner; for monitoring, Stackdriver Logging, Stackdriver Monitoring and Pub/Sub.
  • Configured and standardized permanent and highly available communication between on-premises network and Google Cloud Platform.
  • Created a VPN connection between on-prem and GCP with multiple tunnels and policy-based routing. Automated the whole process with Terraform scripting.
  • Assisted in selecting an appropriate VPC layout by creating a Shared VPC to share VPN connectivity with the on-prem network. Automated the whole process with Terraform scripting.
  • Established network micro-segmentation between different workloads. Created subnets and configured routing for workloads to be deployed across availability zones.
  • Used Pub/Sub to export all project-level and organization-level objects to Splunk.
  • Encrypted data at rest for core GCP services such as virtual machines, containers and persistent disks by designing the architecture for the Key Management Service (KMS).
  • Responsible for creating a reconciliation framework in Spark and Scala for more than 60 systems.
  • Responsible for designing the architecture of the data processing framework for trading data.
  • Prepared detailed blueprints and architectural proofs of concept.
  • Responsible for migrating old Spark applications to the new Spark framework.
  • Set best-practice guidelines. Identified and resolved blockers before they impacted delivery.
  • Used Spark 2.3.3, Scala 2.11.8, Java 8, Elasticsearch 7, Hive 1.2, Confluent Kafka, Avro 1.8.2 and Confluent Schema Registry for data serialization.
  • Developed application test cases using the Cucumber framework.
  • Used IntelliJ 19, Maven and TeamCity for code builds; Bitbucket was used for version control.
  • Used object-oriented and functional programming paradigms for developing solutions in Scala and Spark.
  • The solution is staged in Linux, HDFS and GCP environments.
  • Developed a front-end application in React JS for data visualization.
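The reconciliation framework above was built in Spark and Scala across more than 60 systems; as a simplified, dependency-free illustration of the underlying idea (matching on a business key, then comparing record fingerprints), here is a Python sketch. The `trade_id` field and sample records are hypothetical, not from the actual project.

```python
import hashlib

def record_fingerprint(record: dict) -> str:
    """Stable hash of a record's sorted key/value pairs."""
    canonical = "|".join(f"{k}={record[k]}" for k in sorted(record))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def reconcile(source_records, target_records, key_field):
    """Compare two systems' extracts and report breaks by business key."""
    src = {r[key_field]: record_fingerprint(r) for r in source_records}
    tgt = {r[key_field]: record_fingerprint(r) for r in target_records}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "missing_in_source": sorted(tgt.keys() - src.keys()),
        "mismatched": sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k]),
    }

# Hypothetical extracts from two systems being reconciled.
source = [{"trade_id": "T1", "qty": 100}, {"trade_id": "T2", "qty": 50}]
target = [{"trade_id": "T1", "qty": 100}, {"trade_id": "T3", "qty": 75}]
print(reconcile(source, target, "trade_id"))
```

In a Spark version, the two dict comprehensions would become keyed DataFrames joined full-outer on the business key, with the same fingerprint comparison applied per row.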

Confidential

Big Data and Cloud Consultant - Data and Analytics

Responsibilities:

  • Designed the architecture around the Google Cloud security model: secure-by-design infrastructure, built-in protection and the global network Google uses to protect information, identities, applications and devices.
  • Used Terraform to automate infrastructure builds and resource provisioning.
  • Expertise in various GCP services including Bigtable, BigQuery, Stackdriver, Pub/Sub, Dataflow, Dataproc, Cloud Functions, Datastore, Spark, Hive and Kafka.
  • Encrypted data at rest and in-transit using KMS.
  • Integrated Google Cloud Identity with IAM for SSO.
  • Customized firewall rules, user domain rules.
  • Created customized security groups as requested in both organization and project level along with folder level security groups.
  • Created custom Policies to attach to the security groups as per the use-cases.
  • Created a VPN connection between GCP to On-Prem and GCP to AWS. Automated using Terraform.
  • Converted the existing GCP Deployment Manager configurations.
  • Integrated GitHub hooks to automate build process in GCP.
  • Bootstrapped gcloud and kubectl for the cluster.
  • Configured deployment scripts for Kubernetes clusters. Configured Dockerfiles to optimize container performance and capabilities to support the required existing builds.
  • Set up and tuned GKE scheduling and scaling to ensure low queuing times for Jenkins builds.
  • Configured Jenkins to provision one-use ephemeral Jenkins slave agents on GKE.
  • Used Stackdriver for inventory management and to monitor deployed services. Configured several alerts for various use cases.
  • Created dashboards in Stackdriver for resource monitoring and error reporting.
  • Led formal and informal sessions with multiple teams, along with Google support engineers, to teach GCP best practices for a financial enterprise.
  • Responsible for planning strategies for various data ingestion requirements and finding suitable solutions for the same.
  • Responsible for designing the architecture of various data solutions, such as the Data Foundation Security product, the Data Ingestion Framework and the Surveillance Data Factory solution.
  • Prepared detailed blueprints and architectural proofs of concept.
  • Delivered working solutions for the data security product and the surveillance data processing pipeline.
  • Set best-practice guidelines. Identified and resolved blockers before they impacted delivery.
  • Used Spark 2.3.3, Scala 2.11.8, Java 8, Elasticsearch 7, Hive 1.2, Python, Confluent Kafka 0.11, Avro 1.8.2, GCP Cloud, AWS Cloud and Confluent Schema Registry for data ingestion.
  • Developed application test cases and application build script in Python.
  • Used Eclipse, IntelliJ 19, Maven and SBT for code builds; GitHub was used for version control.
  • Used object-oriented and functional programming paradigms for developing solutions.
  • The solution is staged in Linux, HDFS and GCP environments.
  • Ran a new-technology assessment of the Trisata metadata discovery product.
  • Implemented a DaaS (Data as a Service) platform on Google Cloud to host data-consumption microservices.
  • Used Elasticsearch for data storage as part of a lambda architecture built on Kafka and Spark; daily batch data is stored in HDFS using Spark batch jobs.
  • Mentoring team members to achieve the best results considering the delivery deadline.
  • Conducted stakeholder meetings to capture various data requirements.
  • Led a team of 5 developers delivering data solutions.
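The lambda architecture mentioned above combined a Spark/HDFS batch layer with a Kafka/Spark speed layer and an Elasticsearch serving store. The merge logic at the heart of that pattern can be illustrated with a minimal, stdlib-only Python sketch; the user/amount events are hypothetical, and a real system would run these layers on the distributed stack described.

```python
from collections import defaultdict

def batch_view(events):
    """Batch layer: full recomputation over the immutable master dataset."""
    totals = defaultdict(int)
    for user, amount in events:
        totals[user] += amount
    return dict(totals)

def speed_view(recent_events):
    """Speed layer: incremental totals for events the batch run hasn't absorbed yet."""
    totals = defaultdict(int)
    for user, amount in recent_events:
        totals[user] += amount
    return dict(totals)

def serve(batch, speed, user):
    """Serving layer: merge both views at query time."""
    return batch.get(user, 0) + speed.get(user, 0)

batch = batch_view([("alice", 10), ("bob", 5), ("alice", 7)])   # nightly Spark job
speed = speed_view([("alice", 3)])                              # streaming updates
print(serve(batch, speed, "alice"))  # 20
```

The key design point is that the speed view is disposable: once the next batch recomputation lands, recent events are absorbed into the batch view and the speed view resets.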

Confidential

Big Data Ingestion Lead

Responsibilities:

  • Responsible for planning strategies for various data ingestion requirements and finding suitable solutions for the same.
  • Responsible for designing the architecture of the ACE data ingestion platform.
  • Prepared detailed blueprints and an architectural diagram for the ACE data ingestion platform.
  • Set best-practice guidelines. Identified and resolved blockers before they impacted delivery.
  • Used NiFi 1.4, Scala 2.11.8, Spark 2.2, Java 8, Sqoop, Hive 1.2, Python, Confluent Kafka 0.11, AWS Cloud, Kotlin, Avro 1.8.2, ActiveMQ and Confluent Schema Registry for data ingestion.
  • Developed application test cases and application build script in Python.
  • Used Eclipse, IntelliJ 17.3, Maven and SBT for code builds; Git was used for version control.
  • Used object-oriented and functional programming paradigms for developing solutions.
  • The solution is staged in Linux and HDFS environments.
  • Used Splunk queries to generate various reports and dashboards that the business team reviews every day.
  • Implemented a DaaS (Data as a Service) platform on Bluemix cloud to host data-consumption microservices.
  • Used MongoDB for data storage as part of a lambda architecture built on Kafka and Spark; daily batch data is stored in HDFS using Spark batch jobs.
  • Used ERWin for data modeling.
  • Mentoring team members to achieve the best results considering the delivery deadline.
  • Worked with various stakeholders to achieve their goals.
  • Deployed application on PCF cloud.
  • Extensively involved in infrastructure as code, execution plans, resource graph and change automation using Terraform. Managed AWS infrastructure as code using Terraform.
  • Utilized Puppet for configuration management of hosted instances within AWS. Configured networking for the Virtual Private Cloud (VPC). Utilized S3 buckets and Glacier for storage and backup on AWS.
  • Automated alerts flowing to Slack via AWS, Sumo Logic and PagerDuty.
  • Developed Templates for AWS infrastructure as a code using Terraform to build staging and production environments.
  • Experience setting up CI/CD pipelines, integrating various tools with CloudBees Jenkins to build and run Terraform jobs that create infrastructure in AWS.
  • Designed the architecture and automated the infrastructure using Terraform.
  • Deployed Istio on GKE using Google Cloud Deployment Manager. Bootstrapped gcloud and kubectl for the cluster.
  • Configured deployment scripts for Kubernetes clusters. Configured Dockerfiles to optimize container performance and capabilities to support the required existing builds.
  • Set up and tuned GKE scheduling and scaling to ensure low queuing times for Jenkins builds.
  • Configured Jenkins to provision one-use ephemeral Jenkins slave agents on GKE.
  • Expertise in multiple Google Cloud services: for compute, Compute Engine and App Engine; for networking, Cloud Virtual Network and Cloud Load Balancing; for data, Dataproc, Beam, Cloud Dataflow, Cloud Bigtable, Cloud SQL and BigQuery; for identity and security, IAM, CRM and Cloud Security Scanner; for monitoring, Stackdriver Logging, Stackdriver Monitoring and Pub/Sub.
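Ingestion platforms built on Kafka and ActiveMQ, like the ACE platform above, typically deliver messages at least once, so a consumer-side deduplication step is what keeps the sink idempotent under redelivery. Here is a minimal, stdlib-only Python sketch of that idea; the message IDs and payloads are hypothetical, and a production version would persist the seen-ID set rather than hold it in memory.

```python
class IdempotentSink:
    """Consumer-side dedup for an at-least-once pipeline (e.g. Kafka redeliveries)."""

    def __init__(self):
        self.seen_ids = set()   # IDs of messages already applied
        self.store = []         # stand-in for the downstream data store

    def ingest(self, message_id, payload):
        """Apply a message once; silently skip duplicate redeliveries."""
        if message_id in self.seen_ids:
            return False        # duplicate: already applied, skip
        self.seen_ids.add(message_id)
        self.store.append(payload)
        return True

sink = IdempotentSink()
batch = [("m1", "a"), ("m2", "b"), ("m1", "a")]  # m1 redelivered by the broker
results = [sink.ingest(mid, p) for mid, p in batch]
print(results, sink.store)  # [True, True, False] ['a', 'b']
```

The same pattern applies whether the dedup key is a broker offset, a schema-registry-tagged record key, or a business key carried in the Avro payload.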

Confidential

Big Data Architect - Big Data Enterprise Architecture

Responsibilities:

  • Worked closely with a team of 17 members on a metadata discovery and classification project.
  • Worked with various lines of business and vendor teams to enable data ingestion and extraction into the Hadoop data lake.
  • Designed architectural patterns for data ingestion and extraction for enterprise level data lake.
  • Provided solutions to various in-house big data applications using HDFS, HBase, Phoenix, Sentry, Hive, Talend, Podium Data, Spark, ZooKeeper, Kafka, Cloudera Navigator, Cloudera Impala, CDH 5.7, Teradata FSDM, Global IDs, IBM IGC and IBM Information Analyzer.
  • Worked with various project teams to standardize data ingestion, transformation and extraction.
  • Created lambda architecture patterns using Scala, Spark Streaming, Kafka, HDFS, Hive, Phoenix and HBase.
  • Created big data architectural design patterns for metadata ingestion, provisioning, profiling, discovery and classification using Podium Data, Global IDs, IBM IGC (Information Governance Catalog) and Information Analyzer.
  • Performed architectural design and set up the infrastructure for design and development.
  • Worked with the infrastructure team to stabilize connectivity for underlying architecture.
  • Provided alignment to existing big data projects for newly developed design patterns and ran PoCs with the development team.
  • Working knowledge of SCM tools like Git and SVN.
  • Working knowledge of Hadoop data lake architecture and design, data architecture principles, data requirements gathering, data modelling and database design.
  • Worked on Data Management solutions, including Metadata Management, Data Lineage, Data Security and Data Quality Solutions.
  • Created physical and logical data models and performed reverse and forward engineering using ERWin.
  • Used object-oriented and functional programming for developing the solutions.
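A metadata discovery and classification pass, as in the project above, can be sketched as rule-based column tagging: sample a column's values and tag it when enough of them match a pattern. The regex rules, threshold and sample values below are illustrative stand-ins for the much richer profiling that tools like IBM IGC, Information Analyzer or Global IDs perform.

```python
import re

# Illustrative PII patterns only; real discovery tools ship far richer rule sets.
CLASSIFIERS = {
    "EMAIL": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "PHONE": re.compile(r"^\+?\d[\d\- ]{7,14}$"),
}

def classify_column(values, threshold=0.8):
    """Tag a column with the first classifier matching >= threshold of its non-null values."""
    non_null = [v for v in values if v]
    if not non_null:
        return "UNKNOWN"
    for tag, pattern in CLASSIFIERS.items():
        hits = sum(1 for v in non_null if pattern.match(v))
        if hits / len(non_null) >= threshold:
            return tag
    return "UNCLASSIFIED"

print(classify_column(["a@x.com", "b@y.org", "c@z.net"]))  # EMAIL
print(classify_column(["123-45-6789", "987-65-4321"]))     # US_SSN
```

Tags produced this way would then feed a governance catalog, driving the data security and lineage policies the role describes.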

Confidential

Solutions Architect and Project Lead

Responsibilities:

  • Led and managed a team of 8 members driving solutions using an agile framework.
  • Worked with the team distributed at offshore and onshore.
  • Designed the architecture of the merged application based on Java platform application stacks, including Java Spring, Java multithreading, Solace MQ, Apache Camel, MongoDB, JUnit, Postgres and Oracle DB, Redis in-memory cache, microservices, and HBase/HDFS.
  • Developed, deployed and tested Java-based microservices on Docker-based cloud pods, using various Java technology frameworks.
  • Optimized Postgres, Oracle and MongoDB queries for better throughput.
  • Followed a test-driven development approach.
  • Created architectural blueprints for the projects worked on.
  • Performed architectural design and set up the infrastructure for design and development.
  • Worked with the infrastructure team to stabilize connectivity for underlying architecture.
  • Led the development team in an onshore and offshore model.
  • Implemented multithreaded solutions using the Java 7 executor framework for faster processing.
  • Working knowledge of SCM tools Git and SVN.
  • Developed an ingestion framework for ingesting mainframe data into HBase using Kafka and Spark Streaming.
  • Created physical and logical data models and performed reverse and forward engineering using ERWin.
  • Installed solutions on Confidential in-house cloud, which is an openshift distribution.
  • Used object oriented and functional programming for developing the solutions.
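The multithreaded processing in this role used the Java 7 executor framework (a fixed-size thread pool consuming submitted tasks). An analogous fixed-size pool in Python's `concurrent.futures`, shown here purely for illustration, captures the same submit/collect pattern; `process_record` is a hypothetical stand-in for one unit of work such as enriching a single mainframe record.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_record(record_id):
    """Hypothetical unit of work (e.g. enriching one mainframe record)."""
    return record_id * record_id

# Fixed-size pool, mirroring Executors.newFixedThreadPool(4) in Java.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(process_record, i): i for i in range(8)}
    # Collect results as tasks finish, regardless of submission order.
    results = sorted(f.result() for f in as_completed(futures))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

As in the Java executor framework, the pool bounds concurrency while futures decouple task submission from result collection.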

Confidential

Prod Support Manager

Responsibilities:

  • Managed a team of 18 members distributed at onshore and offshore.
  • Gathered requirements for new change requests (CRs) which came as a part of BAU activities.
  • Analyzed the new CRs and defect fixes in detail and prepared design, development and production implementation approaches. This process took multiple reviews and approvals from different stakeholders in the bank.
  • Prepared impact analysis reports for the different production changes and defect fixes.
  • Implemented industry standards during the design, coding and testing of changes and fixes, i.e., developed loosely coupled, scalable, reusable and cohesive components as part of production changes and enhancements.
  • Developed code and pseudo codes for helping offshore team members.
  • Created several artefacts to document and streamline the production support processes, which helped the entire team develop solutions effectively.
  • Implemented multithreaded solutions using the Java 7 executor framework for faster processing.
  • Reviewed the artefacts developed by the offshore team and guided them for developing the quality artefacts/deliverables.
  • Delegated work items to the offshore team in India and worked with the team to develop the solution with a pragmatic approach.
  • Prepared and shared daily status reports with the customer.
  • Tested the developed code in Dev and CIT environments.
  • Sought various approvals from the business for implementing the production changes within the SLA.
  • Used OOM for development and database creation.
  • Used ERWin for data modeling in oracle.

Confidential

Big Data Development Lead

Technologies: HDFS, Cloudera CDH4 Hadoop framework, MapReduce, Pig, Hive, Sqoop, Oozie, Java-based UDFs, MySQL, Maven and Unix/Linux.

Responsibilities:

  • Managed a team of 6 members and worked with various teams to develop successful big data solutions.
  • Involved in the complete life cycle of Custom Dataset generation, a report for researchers, universities and publishers. This involved understanding requirements, creating design documents, developing Pig scripts and Hive queries, and testing.
  • Optimized queries for better throughput.
  • Designed and populated hive tables in HDFS for the research articles.
  • Led a team of 6 members helping Thomson Reuters India and assisted in ramping up the team.
  • Worked with the infrastructure team to configure the HDFS distributed clusters, PIG and Hive.
  • Used Oozie to orchestrate the MapReduce jobs that extract the data in a timely manner.
  • Developed MapReduce jobs in Java for data formatting and custom input format inclusion.
  • Used ERWin for data modeling.
  • Developed Hive user-defined functions in Java, compiled them into JARs and added them to HDFS for execution with Hive queries.
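The MapReduce jobs above were written in Java against CDH4; the map/shuffle/reduce phases they relied on can be sketched in a few lines of Python. Word counts over hypothetical article text stand in for the real formatting logic, but the three-phase structure is the same one Hadoop executes in a distributed fashion.

```python
from collections import defaultdict
from itertools import chain

def mapper(line):
    """Map phase: emit (word, 1) pairs from one input split."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle phase: group all mapper outputs by key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reducer(key, values):
    """Reduce phase: aggregate the grouped values for one key."""
    return key, sum(values)

lines = ["cited article", "article review", "cited cited"]  # hypothetical input
pairs = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'cited': 3, 'article': 2, 'review': 1}
```

In Hadoop, the shuffle is performed by the framework between the map and reduce tasks; Oozie, as described above, would orchestrate chains of such jobs on a schedule.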

Confidential, Charlotte, NC

Development Lead

Technologies: Spring MVC, web services, SOAP, IBM WebSphere Application Server 6.0, Java, Tuxedo, Tivoli JSC console, Unix and Oracle DB.

Responsibilities:

  • Managed a team of 16 members which was distributed at onshore, offshore and nearshore.
  • Gathered requirements for the merger of the MBNA credit card Netaccess applications (Canada and US) into the BoA OLB Borneo portal.
  • Prepared design, development and production implementation approaches. This process took multiple reviews and approvals from different stakeholders in the Confidential.
  • Developed code using Rational Application Developer and followed the bank's coding standards. Also developed pseudocode to help offshore team members.
  • Used SOAP for communicating with other systems in the Confidential.
  • Used TCS Accent tool for modeling of the new services.
  • Reviewed the artefacts developed by the offshore team and guided them for developing the quality artefacts/deliverables.
  • Used Ant scripts for compiling and building the deliverable EAR files.
  • Used ClearCase for version control of the code.
  • Delegated work items to the offshore team in India and worked with the team to develop the solution with a pragmatic approach.
  • Prepared and shared daily status reports with the customer on the work items for transparency.
  • Tested the developed code in Dev and CIT environments.
  • Validated and summarized the production changes after the production implementation. Kept the business informed during the various stages of the production deployment and implementation.

Confidential, Newark, Delaware

Production Support Lead

Technologies: Java Swing, SOAP, IBM WebSphere Application Server 6.0, Core Java, Oracle DB and DB2.

Responsibilities:

  • Managed a team of 7 members distributed at onshore and offshore.
  • Gathered requirements for new change requests (CRs) which came as a part of BAU activities and regulatory changes in the credit card area.
  • Analyzed the new CRs in detail and prepared design, development and production implementation approaches. This process took multiple reviews and approvals from different stakeholders in the Confidential.
  • Prepared causal analysis reports for the different production changes and defect fixes.
  • Implemented industry standards during the design, coding and testing of changes and fixes, i.e., developed loosely coupled, scalable, reusable and cohesive components as part of production changes and enhancements.
  • Developed code using Rational Application Developer and followed the bank's coding standards. Also developed pseudocode and stubs to help offshore team members.
  • Created several artefacts to document and streamline the production support processes, which helped the entire team develop solutions effectively.
  • Reviewed the artefacts developed by the offshore team and guided them for developing the quality artefacts/deliverables.
  • Delegated work items to the offshore team in India and worked with the team to develop the solution with a pragmatic approach.
  • Prepared and shared daily status reports with the customer on the work items for transparency.
  • Tested the developed code in Dev, CIT and SIT environments.
  • Sought various approvals from the business for implementing the production changes within the SLA duration.
  • Validated and summarized the production changes after the production implementation. Kept the business informed during the various stages of the production deployment and implementation.

Confidential, Charlotte, NC

Module Lead

Technologies: IBM WCC, Core Java, J2EE, z/OS, WebSphere Application Server 6.0, TSO and Unix.

Responsibilities:

  • Prepared strategies to migrate the existing BOSS functions/APIs on to the WCC platform.
  • Developed code using WCC extensions.
  • Designed WCC database structure to map the relevant tables from the BOSS legacy system.
  • Implemented industry standards during the design, coding and testing phases of the conversion, i.e., developed loosely coupled, scalable, reusable and cohesive components.
  • Developed code/APIs using Rational Application Developer 7 and followed the bank's coding standards. Also developed pseudocode and stubs to help offshore team members.
  • Analyzed the existing APIs and created several artefacts related to conversion of the APIs.
  • Delegated work items to the offshore team in India and worked with the team to develop the solution with a pragmatic approach.
  • Prepared and shared daily status reports with the customer on the work items for transparency.
  • Tested the developed code in Dev, CIT and SIT environments.
  • Created the database migration scripts for uploading the data to the target system (WCC database).
