AWS Architect/Engineer Resume
Richmond, VA
SUMMARY
- Strong understanding of the AWS product and service suite, primarily EC2, S3, VPC, Lambda, Redshift, Redshift Spectrum, Athena, EMR (Hadoop), and related monitoring services, including their applicable use cases, best practices, implementation, and support considerations.
- Design highly available, elastic, and performant cloud infrastructure; plan and test new solutions; execute quick POCs to evaluate new functionality and features.
- Experience migrating existing databases from on-premises environments to AWS Redshift using various AWS services.
- Worked with automation tools such as Git, Terraform, and Ansible.
- Built auto-scaling solutions, including multi-region deployments and hybrid cloud solutions (connectivity to on-premises technologies).
- Created and maintained CI/CD (continuous integration and deployment) pipelines and applied automation to environments and applications.
- As an architect, involved in multiple initiatives for cloud-based applications, working closely with development teams and team members.
- Strong knowledge of the Hadoop/EMR ecosystem.
- Knowledge of processing data using Pig scripts and Hive queries.
- Strong knowledge of data warehousing implementation concepts in Redshift; completed a POC with Matillion and Redshift for a data warehouse implementation.
- Excellent interpersonal and communication skills.
- Strong database and ETL background.
TECHNICAL SKILLS
- Redshift
- AWS services
- DevOps
- Ansible
- Big data
- Hadoop
- Cloudera Manager
- HDFS
- Hive
- Informatica
- Matillion
- ETL
- Linux
- Oracle
- Tidal
- Python Scripting
- Shell Scripting
- Git and Terraform
PROFESSIONAL EXPERIENCE
AWS Architect/Engineer
Confidential - Richmond, VA
Responsibilities:
- Used AWS Database Migration Service (DMS) and Schema Conversion Tool (SCT) along with the Matillion ETL tool.
- Designed Redshift tables and columns for data distribution across the cluster's data nodes, keeping columnar database design considerations such as distribution and sort keys in mind.
- Used custom housekeeping utilities and monitoring services to keep the cluster running efficiently.
- Wrote Redshift UDFs and Lambda functions in Python for custom data transformation and ETL (see the Lambda sketch after this list).
- Used Redshift, S3, Redshift Spectrum, and Athena to query large amounts of data stored on S3, creating a virtual data lake without a separate ETL process (see the Athena sketch below).
- Provided seamless connectivity from BI tools such as Tableau and Qlik to Redshift endpoints.
- Managed IAM roles and console access for EC2, RDS, and ELB services.
- Monitored and created alarms for CPU, memory, and disk space using CloudWatch.
- Automated code deployment and EC2 provisioning using Ansible and Terraform.
- Completed a POC on Redshift Spectrum, creating external tables over S3 files.
- Completed a POC on the AWS Athena service.
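A minimal sketch of the Lambda-based ETL pattern noted above, assuming an S3 PUT trigger and the Redshift Data API; the cluster identifier, database, user, target table, and IAM role ARN are hypothetical placeholders, not the actual project configuration.

```python
import boto3

# Hypothetical placeholders - not the actual project configuration
CLUSTER_ID = "analytics-cluster"
DATABASE = "warehouse"
DB_USER = "etl_user"
TARGET_TABLE = "staging.events"
IAM_ROLE_ARN = "arn:aws:iam::123456789012:role/redshift-copy-role"

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    """Triggered by an S3 PUT event; COPYs the new object into Redshift."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    copy_sql = (
        f"COPY {TARGET_TABLE} "
        f"FROM 's3://{bucket}/{key}' "
        f"IAM_ROLE '{IAM_ROLE_ARN}' "
        "FORMAT AS CSV IGNOREHEADER 1;"
    )

    # Submit the COPY asynchronously through the Redshift Data API
    response = redshift_data.execute_statement(
        ClusterIdentifier=CLUSTER_ID,
        Database=DATABASE,
        DbUser=DB_USER,
        Sql=copy_sql,
    )
    return {"statement_id": response["Id"], "object": f"s3://{bucket}/{key}"}
```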
Environment: AWS, EBS, VPC, Redshift, Elastic Load Balancer (ELB), Auto Scaling groups, IAM, CloudWatch, Glacier, JIRA, Chef, S3, SCT, DMS, CloudFormation, CloudFront, Direct Connect, Linux, WinSCP, Oracle 11g, Teradata 15, Tableau 10.
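A minimal sketch of querying data in place on S3 through Athena, as referenced above; the database name, result output location, and example query are assumptions for illustration only.

```python
import time
import boto3

athena = boto3.client("athena")

# Hypothetical names used for illustration only
DATABASE = "datalake"
OUTPUT_LOCATION = "s3://example-athena-results/"

def run_query(sql: str) -> list:
    """Run a SQL query against S3-backed tables via Athena and return the raw rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query reaches a terminal state
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")

    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

# Example: aggregate files on S3 in place, with no separate ETL load step
rows = run_query("SELECT event_type, count(*) FROM events GROUP BY event_type")
```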
AWS Architect/Engineer
Confidential, St. Louis
Responsibilities:
- Responsible for configuration management, logical access control, data encryption, network configuration and management, and security logging and monitoring.
- Responsible for creating well-architected applications on AWS using Auto Scaling, SQS, SNS, ELB, and caching and database layers as necessary.
- Created AWS Identity and Access Management (IAM) roles for specific privileged users with cross-account access to resources in another AWS account (see the IAM sketch below).
- Worked closely with the PMO, developers, and QA teams on building the infrastructure and troubleshooting applications.
- Designed highly available applications on AWS across Availability Zones and Regions.
- Designed applications on AWS taking advantage of disaster recovery design guidelines.
- Served as the technical point of contact to plan, debug, and navigate the operational challenges of cloud computing.
- Set up CloudWatch alarms, configured CloudTrail, created CloudFormation templates, and created S3 buckets (see the CloudWatch sketch below).
- Migrated data from Oracle to Redshift using SCT and DMS.
- Worked with AWS Database Migration Service and Schema Conversion Tool along with the Talend ETL tool.
- Created, modified, and executed DDL for AWS Redshift tables to load data.
- Performed performance tuning on Redshift tables.
- Reviewed explain plans for SQL queries in Redshift.
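A minimal sketch of the CloudWatch alarm setup referenced above, using boto3; the instance ID and SNS topic ARN are placeholders for illustration.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholders for illustration only
INSTANCE_ID = "i-0123456789abcdef0"
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:ops-alerts"

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods
cloudwatch.put_metric_alarm(
    AlarmName=f"high-cpu-{INSTANCE_ID}",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[SNS_TOPIC_ARN],
)
```

Memory and disk-space metrics are not published by EC2 by default, so alarms on those would additionally rely on the CloudWatch agent or custom metrics.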
Environment: AWS, EBS, VPC, Redshift, Elastic Load Balancer (ELB), Auto Scaling groups, IAM, CloudWatch, Glacier, Talend, JIRA, Chef, S3, SCT, DMS, CloudFormation, CloudFront, Direct Connect, Linux, WinSCP, Oracle 11g, Toad.
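A minimal sketch of the cross-account IAM role noted above, using boto3; the trusted account ID, role name, and attached managed policy are illustrative assumptions.

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative placeholder for the account whose users may assume the role
TRUSTED_ACCOUNT_ID = "111122223333"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{TRUSTED_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Create the role that privileged users in the trusted account can assume
iam.create_role(
    RoleName="cross-account-admin",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Cross-account access for privileged users",
)

# Attach a managed policy granting the permissions needed in this account
iam.attach_role_policy(
    RoleName="cross-account-admin",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```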
AWS Architect
Confidential, Pleasanton, CA
Responsibilities:
- Understood requirements from the architects and proposed designs accordingly.
- Managed Amazon Web Services (AWS) infrastructure with automation and orchestration tools such as Chef.
- Proficient in AWS services such as VPC, EC2, S3, ELB, Auto Scaling Groups (ASG), EBS, RDS, IAM, CloudFormation, Route 53, CloudWatch, CloudFront, and CloudTrail.
- Created multiple VPCs with public and private subnets as required and distributed them across the Availability Zones of each VPC.
- Managed IAM roles and console access for EC2, RDS, and ELB services.
- Used IAM to create roles, users, and groups, and implemented MFA to provide additional security for the AWS account and its resources.
- Created snapshots to back up EBS volumes and AMIs to capture the launch configurations of EC2 instances (see the backup sketch after this list).
- Installed the Chef workstation, bootstrapped nodes, wrote recipes and cookbooks, uploaded them to the Chef server, and managed AWS EC2, S3, and ELB resources with Chef cookbooks.
- Monitored and created alarms for CPU, memory, and disk space using CloudWatch.
- Managed Route 53 DNS hosted zones, configuring alias records for Elastic Load Balancer endpoints (see the Route 53 sketch below).
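A minimal sketch of the snapshot and AMI backups noted above, using boto3; the instance ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder instance ID for illustration
INSTANCE_ID = "i-0123456789abcdef0"

# Snapshot every EBS volume attached to the instance
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE_ID]}]
)
for volume in volumes["Volumes"]:
    ec2.create_snapshot(
        VolumeId=volume["VolumeId"],
        Description=f"Backup of {volume['VolumeId']} from {INSTANCE_ID}",
    )

# Create an AMI so the instance's launch configuration can be re-created later
ec2.create_image(
    InstanceId=INSTANCE_ID,
    Name=f"{INSTANCE_ID}-backup-image",
    NoReboot=True,
)
```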
Environment: EC2, S3, Auto Scaling, AMI, ELB, EBS, IAM, RDS, DNS, CloudWatch, Route 53, VPC, Toad, Unix (SunOS), SQL Developer.
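A minimal sketch of the Route 53 alias configuration noted above, using boto3; the hosted zone IDs, record name, and load balancer DNS name are placeholders.

```python
import boto3

route53 = boto3.client("route53")

# All identifiers below are hypothetical placeholders
HOSTED_ZONE_ID = "Z1EXAMPLE"           # Route 53 hosted zone for example.com
ELB_DNS_NAME = "my-app-lb-1234567890.us-east-1.elb.amazonaws.com"
ELB_HOSTED_ZONE_ID = "Z35SXDOTRQ7X7K"  # the load balancer's own hosted zone ID

# UPSERT an alias A record pointing app.example.com at the load balancer
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Alias record for the application load balancer",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": ELB_HOSTED_ZONE_ID,
                    "DNSName": ELB_DNS_NAME,
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```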
Informatica/ETL Architect
Confidential, Gaithersburg, MD
Responsibilities:
- Working closely with the client and analysts, examined the existing business models and data flows, discussed the findings with the client, and designed new systems.
- Worked as a data modeler; created conceptual, logical, and physical data models and data modeling guidelines.
- Created dimensional (star and snowflake) models; familiar with CDC, slowly changing dimension, and unbalanced hierarchy techniques.
- Involved in Informatica issue resolution, folder migration, and administration; served internally as first- and second-line support for Informatica issues raised or escalated by project teams.
- Created multiple nodes, integration services, and repository services; assigned integration services to the grid.
- Created folders and managed users, groups, permissions, and privileges.
- Planned and managed Informatica software patching and upgrades.
- Monitored and managed resources associated with the Informatica environment.
- Built a repository purge process to keep the repository lean and performing well.
- Experience in security administration: creating and managing users, groups, and roles, and assigning privileges.
- Interfaced with the enterprise scheduler support team to schedule and monitor Informatica production jobs.
Environment: Oracle 10g, SQL Server 2008, IBM iSeries (DB2), Erwin, Toad, SQL Developer, Informatica 9.5.
Sr. ETL Developer
Confidential, East Hanover, NJ
Responsibilities:
- Involved in understanding the client's requirements and preparing the estimation plan for development.
- Prepared design and mapping documents for each table involved in the Crescendo Weekly Reports project.
- Used various transformations such as Filter, Router, Expression, Lookup (connected and unconnected), Aggregator, Sequence Generator, Update Strategy, Joiner, Normalizer, Sorter, and Union to develop robust mappings in the Informatica Designer.
- Developed job sequencers with proper job dependencies, job control stages, and triggers.
- Developed an error handling process to collect rejects for data analysis.
- Optimized and performance tuned targets, sources, and mappings to increase the efficiency of the process.
- Performed extensive data quality analysis using column, primary key, and cross-domain techniques.
- Used real-time stages such as MQ, Web Services, XML Input, and XML Output to send and receive messages from different vendors.
- Prepared unit test case and unit test result documents and performed end-to-end testing of the jobs.
Environment: Oracle 10g, IBM iSeries (DB2), MS Access, Toad, SQL developer, Informatica 8.6, Cognos 8.3.
ETL Developer
Confidential
Responsibilities:
- Involved in understanding the client's requirements and preparing the estimation plan for development.
- Prepared design and mapping documents for each table involved in the Crescendo Weekly Reports project.
- Used stages such as Join, Lookup, Sparse Lookup, Sequential File, Dataset, Transformer, Sort, Aggregator, Merge, Funnel, Filter, Copy, Modify, Remove Duplicates, Change Data Capture, and Stored Procedure to develop different jobs.
- Developed DataStage 7.5 jobs in Designer to extract data from source flat files, CSV files, and COBOL files, cleanse it, transform it by applying business rules, and load it into the target DWH.
- Used DataStage Director for monitoring jobs and debugging issues.
- Implemented performance-tuning techniques across various stages of the ETL process.
- Participated in client meetings to gather requirements and provided services to meet the required SLAs. Worked with DataStage Manager to import and export the DSX files for the jobs.
- Performed troubleshooting, maintenance, and performance tuning of DataStage jobs and SQL statements.
- Created parameter sets to group DataStage and QualityStage job parameters and store default values in files, making sequence jobs and shared containers faster and easier to build.
- Worked with CVS for version control and to migrate code from Development to UAT and then to Production.
Environment: IBM DataStage Server 7.5, IBM DB2, UNIX, Windows XP, DB2 8.2.2, Erwin, Control-M.