- Worked on Ansible orchestration and automation defined through playbooks in YAML format, the entry point for provisioning and managing servers with Ansible.
- Worked extensively in Data Warehousing and Business Intelligence Application support.
- Proficient in supporting data warehouse and data management solutions using SAS, SQL, and basic UNIX.
- Exposure to full life cycle data warehousing development projects, including requirement gathering, analysis, design, development, UAT testing, implementation, and support.
- Core experience in the project implementation life cycle, issue resolution, enhancements, and SAS ETL/reporting tools.
- Experience in various SAS 9.3 Tools like SAS/BASE, SAS/MACROS and SAS Management Console.
- Expert in developing data loading jobs for RAMP using Control-M, as well as SAS programming.
- Experience in data extraction and data importing of different file structures.
- Created and modified reports using SAS BI toolsets (SAS Enterprise Miner and SAS Enterprise Guide).
- Extensive use of PROC SQL for extracting data from different database systems.
- Proficient in handling large databases in Oracle, SQL Server, and the Hadoop ecosystem.
- Prepared migration documents and automated tasks using Ansible.
- Highly motivated, positive, and goal oriented, with an analytical approach and well-developed interpersonal and communication skills; have dealt with a variety of clients.
- Expertise with the MS Office suite: MS Word, MS Excel, MS Access, and MS Outlook.
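The PROC SQL data extraction noted above can be sketched as a minimal example. The library reference, table and column names, and spend threshold here are hypothetical placeholders for illustration, not actual project objects.

```sas
/* Hypothetical sketch: extract and summarize data from an Oracle source  */
/* via SAS/ACCESS. Libref, schema, tables, and columns are placeholders. */
libname ora oracle user=&user password=&pass path=dwprod schema=dw;

proc sql;
    create table work.high_value_accts as
    select a.acct_id,
           a.open_dt,
           sum(t.txn_amt) as total_spend
    from ora.accounts a
         inner join ora.transactions t
             on a.acct_id = t.acct_id
    where t.txn_dt >= '01JAN2020'd
    group by a.acct_id, a.open_dt
    having calculated total_spend > 10000;
quit;
```

The `calculated` keyword lets the HAVING clause reference the aliased summary column without repeating the SUM expression.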
- Create Ansible playbooks to close production gaps and implement auto-healing mechanisms across multiple application platforms, reliably and efficiently orchestrating systems previously managed by system administrators by hand. Other activities include installing software, changing configurations, and administering services on individual servers.
- Work closely with the business to identify and fix data quality issues reported via incidents, through batch job and schedule corrections in Control-M, ad hoc data patches, and manual extraction and processing of data.
- Partner with development and infrastructure teams on software upgrades and migrations, such as the SAS uplift from 9.3 to 9.4 and upgrading Spark code from version 1.6 to 2.1.
- Engage with Database Operations partners to identify long-running sessions, review data slice utilization, and increase table space in Netezza.
- On-board the APM tool Dynatrace to provide auto-remediation, helping maintain higher application availability for our customers.
- Analyze existing designs and optimize processes to use minimal resources by implementing enhancements and performance tuning in SAS, Linux, Hive, and Spark.
- Lead and drive high-severity bridges for high-business-impact issues, engaging and coordinating with different tiers of application, network, and storage teams.
- Administrator for the Rapid Analytics and Modelling Platform (RAMP) and Self-Managed Analytics and Reporting Tool (SMART) projects, validating user access and performing audit-level checks.
- Review application code and turnover documents, and maintain version control for seamless code changes in Bitbucket, Jira, and Confluence.
- Implement and validate disaster recovery exercises per annual AXP policies; set up failover environments for business continuity in case of hardware failures.
- Manage, monitor, and validate system patches per ITIL standards; approve and examine RFC documents in ServiceNow and communicate the production impact of changes to stakeholders.
- Develop, test, and deploy automation solutions for first-touch-resolution events in collaboration with Enterprise Command Center 2.0.
- Apply SAS hotfixes and license key updates per vendor recommendations, and renew browser certificates on Citrix servers.
- Deliver recommendations and permanent solutions for recurring events by monitoring Splunk/Nagios dashboards, categorizing events, and gaining insights from them.
- Maintain and update the knowledge base repository, playbooks, runbooks, and service mapping documents.
- Monitor long-running application IDs in the Hadoop Platinum cluster, along with resource allocations, executors lost and assigned to jobs, and stages completed at node level on the cluster.
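The auto-healing playbook work described above can be sketched as a minimal Ansible playbook. The host group, service name, and path below are assumptions chosen for illustration, not actual environment values.

```yaml
# Hypothetical auto-healing sketch: ensure a SAS mid-tier service is
# running and clear stale work directories. Host group, service name,
# and /saswork path are illustrative placeholders.
- name: Auto-heal SAS mid-tier services
  hosts: sas_servers
  become: true
  tasks:
    - name: Ensure the object spawner service is running
      ansible.builtin.service:
        name: sas-objectspawner
        state: started

    - name: Remove stale SAS work directories older than 2 days
      ansible.builtin.shell: >
        find /saswork -mindepth 1 -maxdepth 1 -type d -mtime +2
        -exec rm -rf {} +
```

Using the `service` module with `state: started` makes the remediation idempotent: Ansible restarts the service only when it is down, so the playbook can run safely on a schedule or be triggered by a monitoring event.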