Technical Project Manager Resume
Richmond, VA
SUMMARY
- AWS Certified Solutions Architect with 14+ years of expertise in architecture and definition of large distributed systems, technical consulting, and technology implementation across Cloud, Big Data, Hadoop, Business Analytics, and Business Intelligence.
- Currently working as an AWS/Snowflake Architect at Confidential Consultancy Services.
- Worked on business problems and use cases across multiple industry verticals such as Banking, Insurance, Telecom, Energy, Utilities and Manufacturing.
- Extensive experience authoring proposals for RFPs/RFIs, managing deliveries, and providing technical thought leadership on cloud enablement, digital transformation, big data adoption, and data migration.
TECHNICAL SKILLS
- Big Data & Cloud
- Data Science & Machine Learning
- Enterprise Data Warehouse Architect
- ETL Architect
- Analytics Solutions
- Delivery Manager
- Project Manager - Agile Methodology.
PROFESSIONAL EXPERIENCE
Confidential
AWS/Snowflake Architect
Responsibilities:
- Currently helping the customer adopt AWS cloud services within its existing technology landscape.
- Created an ETL pipeline to move data from S3 to Kafka and from Kafka to Snowflake using a custom framework (see the sketch after this list).
- Enhanced the custom ETL framework to read encrypted data and added a Snowflake connector.
- Created custom logic to enrich Workday data for business users using a custom Snowflake procedure utility.
- Also helping the recruitment team hire candidates by conducting technical evaluations.
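A minimal sketch of the S3-to-Kafka leg of such a pipeline; the bucket, key, topic, and broker address are hypothetical placeholders, not the actual project configuration:

```python
import json
import boto3
from kafka import KafkaProducer  # kafka-python client

s3 = boto3.client("s3")
producer = KafkaProducer(
    bootstrap_servers="broker:9092",                       # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_s3_file(bucket: str, key: str, topic: str) -> None:
    """Read a newline-delimited JSON file from S3 and publish each record to Kafka."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    for line in body.iter_lines():
        if line:
            producer.send(topic, json.loads(line))
    producer.flush()

publish_s3_file("raw-landing-bucket", "workday/extract.json", "workday-raw")
```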
Confidential
Technical Project Manager
Responsibilities:
- Provided technical expertise in designing and planning the overall system architecture for the Talend-to-PySpark conversion project.
- Designed a Python-based framework to orchestrate the jobs and manage the audit logs (see the sketch after this list).
- Managing the overall execution strategy, project plan, and resource alignment.
- Ramped up a team of 15 resources with mixed skills and system/domain knowledge.
- Interacting with the client to communicate the overall project execution plan, design architecture, major milestones, and any challenges that need to be resolved.
- Designed and delivered internal technical trainings for the company.
- Published a white paper on Spark best practices and hands-on project experience as a project manager.
- Successfully delivered the project within the given timelines, exceeding the estimated project margin.
- Supervising the planning and development of a web development project: a single SSO application to manage Redshift user access, scratch space, and Dataiku access.
- Working on a project assessment for the transition from SAS to Dataiku.
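A simplified sketch of the orchestration idea behind such a framework, using SQLite as a stand-in for the audit store; the job names and steps are illustrative only:

```python
import sqlite3
import time
from typing import Callable

# Placeholder steps standing in for converted PySpark jobs.
def extract_orders() -> None:
    print("running extract step")

def load_orders() -> None:
    print("running load step")

conn = sqlite3.connect("audit.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS job_audit (job_name TEXT, status TEXT, started REAL, finished REAL)"
)

def run_with_audit(job_name: str, job_fn: Callable[[], None]) -> None:
    """Run one job step and persist its outcome to the audit table."""
    started = time.time()
    status = "SUCCESS"
    try:
        job_fn()
    except Exception:
        status = "FAILED"
        raise
    finally:
        conn.execute(
            "INSERT INTO job_audit VALUES (?, ?, ?, ?)",
            (job_name, status, started, time.time()),
        )
        conn.commit()

# Orchestrate the steps in dependency order.
for name, step in [("extract_orders", extract_orders), ("load_orders", load_orders)]:
    run_with_audit(name, step)
```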
Confidential
AWS Architect / Lead Data Engineer
Responsibilities:
- Provided technical leadership in planning, executing, and governing the data integration project in AWS.
- Proposed the architecture design for data migration from the source (ETL) to the end target (Snowflake) via different channels.
- Set up the automation pipeline to load incremental source raw files into the cleansed layer using Glue, Lambda, and DynamoDB (see the sketch after this list).
- Converted complex business logic to PySpark scripts using Jupyter, Glue, EMR, and Snowflake.
- Created an API to reconcile source file sizes between on-premises systems and the cloud.
- Performed a PoC with Airflow and AWS Step Functions to explore the job workflow setup.
- Incorporated data quality check steps into the Glue jobs and automation workflow.
- Performed performance tuning of Glue jobs to handle huge datasets in production.
- Modularized the automation framework pipeline and made it generic for other teams inside the organization to use.
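A hedged sketch of how a Glue/Lambda/DynamoDB pipeline like this is commonly wired together; the table name, job name, and argument keys are placeholders following standard S3-trigger conventions, not the actual project values:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
glue = boto3.client("glue")
tracking_table = dynamodb.Table("raw_file_tracking")       # placeholder table name

def lambda_handler(event, context):
    """Record each newly arrived raw file and start the Glue job that cleanses it."""
    for record in event["Records"]:                        # one record per S3 object created
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Track the incremental file so reruns and reconciliation are possible.
        tracking_table.put_item(Item={"file_key": key, "bucket": bucket, "status": "RECEIVED"})

        # Kick off the Glue job that loads the raw file into the cleansed layer.
        glue.start_job_run(
            JobName="raw_to_cleansed_loader",              # placeholder job name
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
    return {"processed": len(event["Records"])}
```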
Confidential, Richmond, VA
Lead Data Analyst
Responsibilities:
- Managed the team for script remediation during the Teradata-to-Snowflake database migration.
- Designed and set up the Airflow scheduler to process fraud-detection business reports on an AWS EMR cluster, saving the time and manual effort of managing reports individually.
- Created a Python module for data validation between two rule-engine databases (Bonsai and PRM) using Python, Zeppelin, and SQL.
- Set up a module to upload report files to AWS S3 using presigned URLs (see the first sketch after this list).
- Wrote a Python API to process load-ready files, convert them into daily delta files, and load them into S3.
- Created a machine learning model using scikit-learn/Python to understand customer behavior and flag suspicious activity or fraud (see the second sketch after this list).
- Used machine learning/text analytics to classify and prioritize the most relevant articles to present to an AML investigator.
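Two illustrative sketches follow. First, the presigned-URL upload approach; the bucket, key, and file names are placeholders:

```python
import boto3
import requests

s3 = boto3.client("s3")

def get_upload_url(bucket: str, key: str, expires: int = 3600) -> str:
    """Generate a presigned PUT URL that is valid for a limited time."""
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )

# The client side uploads the report without needing AWS credentials.
url = get_upload_url("fraud-reports-bucket", "daily/report.csv")
with open("report.csv", "rb") as f:
    response = requests.put(url, data=f)
response.raise_for_status()
```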
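Second, a minimal sketch of the kind of scikit-learn classifier used for flagging suspicious activity; the feature names and input file are illustrative, not the real dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical labeled extract with engineered behavior features.
df = pd.read_csv("transactions_sample.csv")
features = df[["amount", "txn_per_day", "merchant_risk_score"]]
labels = df["is_fraud"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, stratify=labels, random_state=42
)

# Class weighting helps with the heavy class imbalance typical of fraud data.
model = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```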
Confidential, Summit, NJ
Hadoop Architect
Responsibilities:
- Designed and reviewed the CLAD, TAD, and other integration specification documents.
- Planned and set up AWS infrastructure and integrated it with the on-premises integration hub server.
- Set up the Cloudera Hadoop cluster on AWS cloud nodes.
- Set up a tool to feed the latest incoming data into targets such as SAS Cloud and Amazon S3.
- Set up a client hub server to feed data to Amazon S3 and send alerts via a Python web service API.
- Loaded data into Impala tables, matching metadata and resolving file format and data issues.
- Set up the Hadoop Docker cluster and established connectivity with databases such as Oracle and Netezza.
- Set up an email alert script for high CPU, server status, and WebLogic server stats (see the sketch after this list).
- Set up a file manager tool to provision MDM files for target vendors.
- Set up a tool to load CSV data into Netezza and manipulate tables and columns in the Netezza database.
- Set up a tool to transform incoming data and load it into Netezza.
- Created graphical reports of database growth patterns using R.
- Installed Graylog, Neo4j, MongoDB, and an Apache Hadoop cluster on Docker.
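A rough sketch of the CPU-alert portion of such a script; the threshold, addresses, and mail relay are placeholders, and the real script also covered server status and WebLogic stats:

```python
import os
import smtplib
from email.message import EmailMessage

CPU_ALERT_THRESHOLD = 4.0            # 1-minute load average treated as "high" (illustrative)

def check_and_alert() -> None:
    """Mail the admin group when the node's CPU load crosses the threshold."""
    load_1min, _, _ = os.getloadavg()
    if load_1min < CPU_ALERT_THRESHOLD:
        return
    msg = EmailMessage()
    msg["Subject"] = f"High CPU alert: load {load_1min:.2f}"
    msg["From"] = "monitor@example.com"              # placeholder addresses
    msg["To"] = "hadoop-admins@example.com"
    msg.set_content(f"1-minute load average is {load_1min:.2f} on this node.")
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

check_and_alert()
```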
Confidential, Charlotte, NC
Bigdata Lead
Responsibilities:
- Managed, monitored, and administered the Hadoop node cluster.
- Supported systems with High Availability and Federation.
- Monitored Hadoop cluster connectivity and security.
- Managed and reviewed Hadoop log files.
- Used Splunk to analyze logs for troubleshooting and to create dashboards.
- Loaded log data into HDFS using Flume.
- Strong knowledge of Hadoop monitoring tools such as Ambari 2.1 (see the sketch after this list).
- Performed performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Worked with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.
- Collaborated with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
- Teamed diligently with the infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability.
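A hedged sketch of one such monitoring check against the Ambari REST API; the host, cluster name, credentials, and the STARTED-state convention are assumptions based on standard Ambari usage, not the actual cluster configuration:

```python
import requests

AMBARI_URL = "http://ambari-host:8080/api/v1/clusters/prod_cluster/services"
AUTH = ("admin", "admin")            # placeholder credentials; use a secrets store in practice

def unhealthy_services() -> list:
    """Return the names of services whose state is not STARTED."""
    resp = requests.get(AMBARI_URL, auth=AUTH, params={"fields": "ServiceInfo/state"})
    resp.raise_for_status()
    return [
        item["ServiceInfo"]["service_name"]
        for item in resp.json().get("items", [])
        if item["ServiceInfo"].get("state") != "STARTED"
    ]

print("Services needing attention:", unhealthy_services())
```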
Confidential, Bentonville, AR
DataStage Architect
Responsibilities:
- Worked with system administrators to acquire the necessary hardware and plan capacity.
- Installed IIS 9.1 on the zLinux platform and applied the necessary patches.
- Set up database connections and SAP BW connectivity.
- Helped developers resolve issues with job migration and testing.
- Worked with the DataStage lead developer to plan and execute the cutover smoothly.
Confidential, Newark, DE
ETL Lead
Responsibilities:
- Provided support for DataStage issues and other technical support for developers on IIS 9.1 and IIS 8.7 servers.
- Recently deployed the IIS 9.1 grid server on the Linux platform.
- Deployed DataStage patches and kept the server and clients up to date.
- Performed cleanup and server recycles, maintained the logs, and monitored filesystem space (see the sketch after this list).
- Performed regular checks of machine performance.
- Worked with various other teams within the bank to integrate the DataStage application with their applications.
- Explored new tools and processes, such as SVN integration with DataStage.
- Diagnosed issues encountered by DataStage developers while developing jobs.
- Worked on a Blaze-Talend PoC to integrate the Talend ETL tool with the Java-based Blaze application.
- Planned and managed the IIS 8.1 migration to IIS 8.7, completing end-to-end installation and configuration of IIS 8.7.
- Implemented DataStage connectivity with Hadoop.
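A simple sketch of the filesystem-space check behind that routine maintenance; the paths and threshold are illustrative:

```python
import shutil

WATCHED_PATHS = ["/opt/IBM/InformationServer", "/data/datastage/scratch"]   # illustrative paths
ALERT_PCT = 85                                                              # illustrative threshold

for path in WATCHED_PATHS:
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    if used_pct >= ALERT_PCT:
        print(f"WARNING: {path} is {used_pct:.0f}% full")
```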
Confidential
Product Deployment Specialist
Responsibilities:
- Provided support for any DataStage issue and other technical support.
- Deployed DataStage patches to keep the server and clients up to date.
- Performed cleanup and maintenance of the logs and monitored filesystem space.
- Performed regular checks of machine performance.
- Created builds for MDM and deployed them on the MDM server.
- Created software packages for the InfoSphere tool software.
- Diagnosed issues encountered by DataStage developers while developing jobs.
- Created an automated code deployment package for DataStage.
- Performed IIS 8.1 installation and configuration.
- Performed transition activities for MDM (Master Data Management) in a timely manner as the sole member handling it in ISL.
- Performed setup of the UNIX servers with supporting software for deployment.
- Deployed Master Data Management on servers using the deployment tool and ran sanity tests to confirm success before handing over to QA for testing (see the sketch after this list).
- Resolved issues that QA faced while testing or that were related to the machine environment.
- Worked with developers to test bug fixes at an initial stage before QA testing.
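A conceptual sketch of the deploy-then-sanity-check flow; the deployment and test commands are placeholder scripts, while the real package drove the DataStage/MDM tooling:

```python
import subprocess
import sys

DEPLOY_CMD = ["./deploy_mdm_build.sh", "--env", "qa"]        # placeholder deployment script
SANITY_CMD = ["./run_sanity_tests.sh", "--suite", "smoke"]   # placeholder sanity-test script

def run(step_name: str, cmd: list) -> None:
    """Run one step and stop the pipeline if it fails."""
    print(f"Running {step_name}: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"{step_name} failed with exit code {result.returncode}")

run("deployment", DEPLOY_CMD)
run("sanity test", SANITY_CMD)     # only hand over to QA when this passes
print("Build deployed and sanity-checked; ready for QA.")
```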