
Senior Data Architect Resume


SUMMARY

  • 11+ years of experience designing, planning, implementing, and maintaining system applications both on premises and in the cloud.
  • Experienced working as an AWS Cloud Solutions Architect, Big Data Architect, Software Engineer, and Data Scientist.
  • Experienced in the Agile Scrum software development lifecycle, covering operations, functional and technical specifications, development, resource planning, testing (manual and automated), and maintenance.
  • Expert in product support and analytics insights (Business Intelligence), with strong experience applying a variety of data mining and data analysis methods, using a variety of data tools, and building and implementing models by creating algorithms and running simulations.

TECHNICAL SKILLS

EXPERT IN: Orchestration Services (ECS, Docker Containers, CloudFormation, Elastic Beanstalk); Cloud Computing - Amazon Web Services (EC2, EBS, S3, BCP & DRP, PKI, IAM, AMI, VPC, VPC Peering, NACLs, Security Groups, Route 53, Auto Scaling, ELB, SNS, CloudWatch, Kinesis, EMR) and Azure; MySQL; SQL Server; Automation; CI/CD; Jenkins; GitHub; Salesforce; Kafka; Talend

DATABASE SPECIALTY: PostgreSQL, MySQL, SQL Server, NoSQL/DynamoDB, Data Warehousing, Data Validation, Data Analysis, Data Visualization, Data Reporting, Data Metric Analysis, Data Migration, Business Objects, D3, MapReduce, Hadoop, Spark.

Languages: Python, SQL, and R

Business Intelligence Tools: Tableau, Power BI, SAS, QlikView, Alteryx, Splunk

Project Management Tools: HP ALM, JIRA, Monday.com, Microsoft Teams

PROFESSIONAL EXPERIENCE

Confidential

Senior Data Architect

Responsibilities:

  • Developed cost/benefit modeling and created compelling business use cases/total cost of ownership studies for migration.
  • Configured and customized AI/BI applications per the client’s requirements.
  • Maintained and configured user accounts for dev, QA, and production servers.
  • Built an end-to-end real-time data pipeline by building four micro-services on top of Apache Kafka for data processing.
  • Developed enterprise platforms using Big Data tools and technologies (Hadoop, Spark, Hive, Impala, Zeppelin, Jupyter).
  • Responsible for ingesting large volumes of IoT data into Kafka.
  • Developed microservices in Java using the Spring Boot framework.
  • Identified that the existing Jenkins pipelines used the scripted syntax and recommended migrating to the declarative style to reduce deployment time.
  • Provided expertise and hands-on experience with Kafka Connect using the Schema Registry in a very high-volume environment (~900 million messages).
  • Provided expertise in Kafka brokers, ZooKeeper, KSQL, Kafka Streams, and Confluent Control Center.
  • Provided expertise and hands-on experience with Kafka Connect converters (AvroConverter, JsonConverter, StringConverter).
  • Provided expertise and hands-on experience with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, along with tasks, workers, converters, and transforms.
  • Provided expertise and hands-on experience building custom connectors using Kafka core concepts and APIs.
  • Applied working knowledge of the Kafka REST Proxy.
  • Ensured optimum performance, high availability and stability of solutions.
  • Created topics, set up redundant clusters, deployed monitoring tools and alerts, and applied best practices.
  • Created stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms; leveraged Hadoop ecosystem knowledge to design and develop solution capabilities using Spark, Scala, Python, Hive, Kafka, and other components of the Hadoop ecosystem.
  • Wrote Kafka producers to stream data from external REST APIs to Kafka topics (a minimal producer sketch follows this list).
  • Implemented several security groups in the AWS cloud and worked with S3.
  • Gained strong experience with continuous integration of applications using Jenkins.
  • Used Chef and Terraform as Infrastructure as Code (IaC) tools, including for defining Jenkins plugins.
  • Maintained security group inbound rules and prevented duplication of EC2 instances.
  • Spun up servers such as Jenkins and automated builds for development.
  • Built intuitive and interactive dashboards per client requirements for their internal corporate affairs.
  • Provided in-depth onsite and remote technical guidance to customers to ensure successful project implementation.
  • Worked collaboratively in a cross-functional team including Product Management, Partner Operations, and Development.
  • Gathered, analyzed, and documented pre-project and post-project requirements.
  • Partnered with the sales team to design solutions for customers that drive AWS adoption and revenue.
  • Engaged with C-level executives to define and execute enterprise cloud strategies.
  • Analyzed application portfolios, identified dependencies and common infrastructure platform components, and assessed migration feasibility.
  • Built VPCs from scratch: created private and public subnets, security groups, and network access lists; configured internet gateways and OpenVPN; created AMIs; applied user access management, role-based access, multi-factor authentication, and API access controls; and configured Auto Scaling and Elastic Load Balancing to scale services.
  • Set up a NAT gateway as the route out to the internet for instances in the private subnet (see the VPC/NAT sketch after this list).
  • Provided expertise for clients’ early adoption strategies, including end-user training, evangelizing cloud solutions, and bringing understanding, experience, and best practices in the AWS cloud ecosystem.
  • Configured and deployed microservices and instances (EC2, ECS, Auto Scaling, S3, security groups) using CloudFormation.
  • Deployed DevOps techniques and practices such as continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable rapid delivery of working code.
  • Used Terraform with the AWS provider to build the infrastructure hosting web applications and RDS.
  • Acted as a liaison between customers and Product Management to drive product development.
  • Collaborated with the sales team on existing customers’ up-sell and cross-sell opportunities.
  • Performed hands-on technical design, configuration, and troubleshooting of Alion Science & Technology’s solutions.
  • Managed multiple concurrent deployment projects.
  • Applied knowledge of technologies and protocols to support identity federation and robust access control models, such as SAML 2.0, WS-Federation, OAuth, and OpenID Connect.
  • Applied software development experience to build Multi-Tier applications when working with customers.
  • Utilized knowledge of typical enterprise identity life cycle management processes and standards.
  • Provided mentoring, guidance, and expertise to less experienced team members.
  • Ensured quality and time-management processes were followed by the team (e.g., change controls, time tracking), and committed to and used a knowledge repository of deployment best practices and customer ideas for continuous improvement.
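
The Kafka producer work above can be illustrated with a minimal Python sketch using the kafka-python client. The endpoint URL, broker address, and topic name are illustrative placeholders, not details from the engagement:

```python
# Minimal sketch: stream records from an external REST API into a Kafka topic.
import json
import time

import requests
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def poll_and_publish(api_url: str, topic: str, interval_s: int = 30) -> None:
    """Poll the REST endpoint and publish each record to the topic."""
    while True:
        resp = requests.get(api_url, timeout=10)
        resp.raise_for_status()
        for record in resp.json():  # assumes the API returns a JSON array
            producer.send(topic, value=record)
        producer.flush()            # block until all records are acknowledged
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_and_publish("https://example.com/api/events", "iot-events")  # hypothetical endpoint/topic
```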
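
Likewise, a minimal boto3 sketch of the VPC/NAT build-out described above: one public and one private subnet, an internet gateway, and a NAT gateway routing the private subnet out to the internet. CIDR blocks are illustrative, and a production build would add tagging, multi-AZ redundancy, and error handling:

```python
# Minimal sketch: VPC with public/private subnets and a NAT gateway.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
public = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]
private = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24")["Subnet"]

# Internet gateway gives the public subnet a route to the internet.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

# NAT gateway lives in the public subnet and needs an Elastic IP.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId=public["SubnetId"], AllocationId=eip["AllocationId"]
)["NatGateway"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat["NatGatewayId"]])

# Private route table sends all outbound traffic through the NAT gateway.
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId=nat["NatGatewayId"])
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=private["SubnetId"])
```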

Confidential

Enterprise Solutions Architect / Big Data Architect

Responsibilities:

  • Engaged customers: collaborated with ad tech sales managers and sales executives to develop strong customer relationships, vetted requirements upfront, and drove excitement for the right ad tech solution to achieve each customer’s business outcomes.
  • Managed and delivered ad tech integration engagements using EC2, S3, RDS, and other AWS services.
  • Responsible for launching Amazon EC2 cloud instances using Amazon Web Services (Linux) and configuring launched instances with respect to specific applications and regions.
  • Responsible for S3 bucket creation, bucket policies, and IAM role-based policies (a bucket-policy sketch follows this list).
  • Built servers using AWS: imported volumes; launched EC2, RDS, SQS, SNS, Lambda, and Kinesis resources; and created VPCs from scratch to client specifications.
  • Worked as the company’s Subject Matter Expert for the AWS public cloud and designed, architected, and operated solutions built on AWS.
  • Provided technical guidance concerning the business implications of application development projects.
  • Leveraged ETL programming skills in open source languages including Python, Scala, and SQL on various frameworks using Apache Spark.
  • Used AWS Cloud technologies at the IaaS layer (network, compute, storage) and managed services such as RDS, SQS, SNS, Kinesis, ElastiCache, Elastic Beanstalk, IAM, Cognito, and others.
  • Designed for resiliency, high availability, fault tolerance, and scalability in the context of the AWS Cloud.
  • Developed hybrid cloud environments.
  • Involved in assessing, planning, designing, and migrating/transforming legacy applications to the AWS Cloud.
  • Worked with cloud-native/12-factor application architectures and microservices.
  • Built and executed microservice applications using Spring Boot, Spring Cloud, NodeJS, and Python (Flask, Django).
  • Architected stateless and stateful applications for containers and container managers/schedulers such as Kubernetes, AWS EKS, AWS ECS, and Docker.
  • Demonstrated the ability to architect and model mission-critical solutions leveraging multiple DBMS technologies: relational, Big Data, and NoSQL (key-value stores, document stores, graph, and column).
  • Worked with event-driven and real-time architectures, patterns, messaging, and streaming technologies such as Apache Kafka, AWS Kinesis, Amazon SQS/SNS, Amazon MQ, and Amazon Managed Streaming for Apache Kafka (MSK).
  • Used Big Data, analytics, and machine learning technologies on AWS such as EMR, Apache Spark, and SageMaker.
  • Practiced software engineering with CI/CD and associated toolsets such as Git, AWS CodeCommit, Jenkins, Travis, Bamboo, Concourse, Salt, AWS CodeDeploy, and AWS CodePipeline.
  • Worked with distributed systems architecture and the MapReduce and Spark execution frameworks for large-scale parallel processing.
  • Worked extensively with Hadoop ecosystem components: MapReduce, Pig, Hive, HBase, Flume, Sqoop, Hue, Oozie, Spark, and Kafka.
  • Worked with all major Hadoop distributions: Cloudera (CDH), Hortonworks (HDP), and AWS EMR.
  • Developed highly scalable Spark applications using the Spark Core, DataFrames, Spark SQL, and Spark Streaming APIs in Scala.
  • Gained good experience troubleshooting and fine-tuning Spark applications.
  • Worked with DStreams in Spark Streaming, accumulators, broadcast variables, and various levels of caching and optimization techniques in Spark.
  • Worked on real-time data integration using Kafka, Spark Streaming, and HBase (a streaming-read sketch follows this list).
  • Worked with NoSQL databases such as HBase and their integration with the Hadoop cluster.
  • Performed extraction, wrangling, ingestion, processing, storage, querying, and analysis of structured, semi-structured, and unstructured data.
  • Worked with Hadoop MRv1 and MRv2 (YARN) architectures.
  • Developed, deployed, and supported several MapReduce applications in Java to handle semi-structured and unstructured data.
  • Worked with map-side joins, reduce-side joins, shuffle and sort, distributed cache, compression techniques, and multiple Hadoop input and output formats.
  • Experienced in working with CSV, text, SequenceFile, Avro, Parquet, ORC, and JSON data formats.
  • Expertise in the Hive data warehouse tool: creating tables, distributing data via static and dynamic partitioning and bucketing, and optimizing HiveQL queries (a partitioning/bucketing sketch follows this list).
  • Involved in ingesting structured data from SQL Server, MySQL, and Teradata into HDFS and Hive using Sqoop.
  • Experienced in writing ad-hoc queries in Hive and analyzing data using HiveQL.
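
A minimal boto3 sketch of the S3 bucket and IAM role-based policy work above. The bucket name, account ID, and role name are hypothetical, and buckets outside us-east-1 would also need a CreateBucketConfiguration:

```python
# Minimal sketch: create a bucket and attach a role-restricted bucket policy.
import json
import boto3

s3 = boto3.client("s3")
bucket = "example-data-bucket"  # hypothetical bucket name

s3.create_bucket(Bucket=bucket)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowRoleReadOnly",
        "Effect": "Allow",
        # Hypothetical account ID and role name.
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/analytics-role"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```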
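
For the real-time Kafka integration bullets, a minimal PySpark Structured Streaming sketch that consumes a topic and maintains running counts per device. The broker, topic, and schema are assumed, and the Kafka source requires the spark-sql-kafka package on the classpath:

```python
# Minimal sketch: consume a Kafka topic with Spark Structured Streaming.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Assumed message schema for the illustration.
schema = StructType([
    StructField("device_id", StringType()),
    StructField("status", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
          .option("subscribe", "iot-events")                    # assumed topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Running count of events per device, printed to the console sink.
counts = events.groupBy("device_id").count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```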
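
And a minimal sketch of the Hive partitioning/bucketing pattern, run through Spark with Hive support; the table and column names are illustrative:

```python
# Minimal sketch: a partitioned, bucketed Hive table plus a pruned query.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-partitioning-sketch")
         .enableHiveSupport()
         .getOrCreate())

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales (
        order_id STRING,
        amount   DOUBLE
    )
    PARTITIONED BY (order_date STRING)      -- partition pruning cuts data scanned
    CLUSTERED BY (order_id) INTO 32 BUCKETS -- bucketing helps joins and sampling
    STORED AS PARQUET
""")

-- A partition predicate lets the engine read only the matching directories.
daily = spark.sql("""
    SELECT order_id, amount
    FROM sales
    WHERE order_date = '2020-01-15'
""")
daily.show()
```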

Confidential

Senior Data Engineer / Data Scientist

Responsibilities:

  • Provided best-of-breed, fit-for-purpose data science and architectural recommendations leveraging cloud and traditional on-premises data services.
  • Provided detailed, hands-on expertise in creating data, AI, and advanced analytics solutions for clients.
  • Responsible for the successful delivery of cloud and data science solutions and services in a client consulting environment.
  • Developed and implemented platform architecture as per established standards.
  • Formulated architectural plans for mitigation purposes.
  • Supported integration of reference architectures and standards.
  • Utilized Big Data technologies for producing technical designs.
  • Prepared architectures and blueprints for Big Data implementation.
  • Evaluated and documented use cases and proofs of concept.
  • Participated in learning and evaluating tools in the Big Data ecosystem.
  • Installed and maintained Big Data systems on Linux laptops.
  • Supported, designed, and delivered data architecture, AI, and advanced analytics proposals.
  • Defined key business problems to be solved, formulated mathematical approaches and gathered data to solve them, developed and tested solutions, analyzed results and drew conclusions, and presented findings to clients.
  • Assisted in assuring client satisfaction and maintaining a strong client relationship through delivery excellence.
  • Used predictive modeling, optimization, and machine learning techniques and their associated tools and programming languages.
  • Worked independently and led a workstream effectively on a team.
  • Used statistical computing languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Applied machine learning techniques (clustering, decision-tree learning, artificial neural networks, etc.) with an understanding of their real-world advantages and drawbacks.
  • Applied advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) in real applications.
  • Executed several statistical and data mining techniques: GLM/regression, random forests, boosting, trees, text mining, social network analysis, etc. (a modeling sketch follows this list).
  • Involved in querying databases using statistical computing languages (R, Python, SQL) for AI and BI reporting.
  • Used web services: Redshift, S3, Spark, and DigitalOcean.
  • Created and used advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, and neural networks.
  • Analyzed data from third-party providers: Google Analytics, Site Catalyst, Coremetrics, AdWords, and Crimson Hexagon.
  • Used distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, Gurobi, and MySQL.
  • Created visualizations and presented dashboards for stakeholders using Periscope, Business Objects, D3, and ggplot.
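
A minimal scikit-learn sketch of the modeling pattern behind the bullets above, comparing a GLM-style logistic regression with a random forest; the data here is synthetic, not client data:

```python
# Minimal sketch: fit and compare two classifiers on a toy dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=42)),
]:
    model.fit(X_train, y_train)
    # AUC on held-out data is one simple way to compare the two approaches.
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```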

Confidential

Senior Big Data Architect

Responsibilities:

  • Created views, database triggers, stored procedures, and functions in SQL so that information entered by a given WM makes the appropriate changes to the respective tables.
  • Researched best-practice procedures pertaining to performance metrics and standards.
  • Designed and developed servers and integrated application components.
  • Prepared automation systems and tested server hardware.
  • Supported administration and designing of server infrastructure.
  • Utilized Spiceworks to provide helpdesk support services.
  • Analyzed and resolved software bugs with hardware manufacturers.
  • Created architecture components with cloud and visualization methodologies.
  • Evaluated and documented source system from RDBMS and other data sources.
  • Developed process frameworks and supported data migration on Hadoop systems.
  • Provided best practices in data visualization and business intelligence software, including recent versions of Tableau.
  • Performed ad-hoc reporting analysis and manipulated complex data on MS SQL Server.
  • Involved in extensive data validation by writing several complex SQL queries; involved in back-end testing and worked through data quality issues.
  • Analyzed duplicate data and data errors to support inter-departmental communication and monthly reports.
  • Provided guidance to business, solution development, and operations teams in applying leading and emerging data technologies.
  • Provided general guidance to teams on data architecture topics.
  • Created, established, and enforced an MDM strategy and architecture.
  • Established and managed metadata and semantic master data across data domains such as products, pricing, parts, customers, contacts, and contracts.
  • Facilitated data architecture working sessions across solution teams.
  • Maintained currency with leading and emerging data technologies including “big data” platforms, NoSQL databases, data streaming, in-memory data management, real-time analytics, cloud data services.
  • Collaborated with the various Business, Operations and IT stakeholders to define, establish and run a data governance processes and working groups as part of the overall architecture governance that would fit with current PTC culture.
  • Designed database architecture and performed administration, system analysis, design, development, and support of MS SQL Server and MSBI ETL tools, with Core Java, JSP, Servlets, JavaScript, XML, jQuery, Python, and Scala scripting.
  • Worked extensively on database programming, database architecture, and Hadoop.
  • Worked with HDFS, the MapReduce framework, and Hadoop ecosystem tools such as Hive, HBase, Sqoop, and Oozie.
  • Installed, configured, and used Hadoop components such as MapReduce, HDFS, HBase, Hive, Sqoop, Pig, and Flume.
  • Responsible for managing data coming from different sources; involved in HDFS maintenance and loading of structured and unstructured data.
  • Analyzed data using HiveQL, Pig Latin, and custom MapReduce programs in Java.
  • Worked on the backend using Scala and Spark to implement aggregation logic.
  • Involved in requirement analysis, design, coding, and implementation.
  • Processed data into HDFS by developing solutions, analyzed the data using MapReduce, Pig, and Hive, and produced summary results from Hadoop for downstream systems.
  • Created algorithms for address cleansing and address-matching count factors.
  • Worked on various performance optimizations such as using distributed cache for small datasets, partitioning and bucketing in Hive, and map-side joins (a broadcast-join sketch follows this list).
  • Experienced in data mapping and data transformation between source and target data models.
  • Analyzed reporting requirements and developed various dashboards.
  • Involved in extraction, transformation and loading of data directly from different source systems like flat files, Excel, Oracle and SQL Server.
  • Experienced in creating various views in Tableau (tree maps, heat maps, scatter plots).
  • Experienced in creating filters, quick filters, table calculations, calculated measures, and parameters.
  • Responsible for developing, analyzing, and reporting key risk indicator and key performance indicator metrics that enable management to make timely and effective decisions about threats, risks, and control requirements.
  • Developed SQL queries to extract, manipulate, and calculate information to fulfill data and reporting requirements, including identifying the tables and columns from which data is extracted.
  • Used statistical techniques such as regression, cluster analysis, factor analysis, time series forecasting, and experimental design to solve business problems (a regression sketch follows this list).
  • Used SAS to solve problems with data and analytics.
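
A minimal PySpark sketch of the map-side (broadcast) join optimization referenced above; the fact and dimension tables are toy data:

```python
# Minimal sketch: broadcast the small dimension table to every executor so the
# large fact table is never shuffled (the same idea as Hive map joins backed
# by the distributed cache for small datasets).
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

# Illustrative data; in practice the fact table is large, the dimension small.
facts = spark.createDataFrame(
    [(1, "A", 100.0), (2, "B", 250.0), (3, "A", 75.0)],
    ["order_id", "region_code", "amount"],
)
regions = spark.createDataFrame(
    [("A", "East"), ("B", "West")],
    ["region_code", "region_name"],
)

# broadcast() hints Spark to ship `regions` to each executor (map-side join).
joined = facts.join(broadcast(regions), on="region_code")
joined.show()
```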
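
And a minimal statsmodels sketch of the regression work referenced above, fit on synthetic data:

```python
# Minimal sketch: ordinary least squares regression with statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=200)  # synthetic relationship

X = sm.add_constant(x)   # adds the intercept column
model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients, R-squared, t-tests, etc.
```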
