Technical Operations Resume
SUMMARY:
- A dynamic professional with 9.5 years of experience delivering software services, with core competency in data analysis technologies.
- Certified Splunk Power User v6.3
- Certified Splunk Admin v6.3
- Certified in Python and R programming
- Trained and certified in Tableau 10
- Trained in SolarWinds NPM (network performance monitoring) and AppDynamics APM (application performance monitoring)
- Exposure to SDLC and Agile processes, including Rally, covering analysis, design, coding, testing and implementation.
- Gained exposure to the Scrum development methodology: backlog/user-story grooming, release planning, sprint iterations, show-and-tell sessions, sprint retrospectives/reviews, and release/implementation.
- Experience in handling end-to-end software development projects, re-engineering projects and maintenance projects, ensuring project delivery on time and with good quality.
- Possesses excellent interpersonal, communication and analytical skills.
- Currently working in a 30-member team that includes both onsite and offsite members.
- Utilized static tables (TAMS tables) as a cost-efficient alternative to DB2 tables for look-ups.
- Prepared mainframe jobs that check CPU utilization and send automated daily emails when the cost exceeds $20 per day.
- Automated testing using scheduled jobs, eliminating manual intervention.
- Modernized mainframe applications using IBM ODM, moving them to a rule-driven system that allows rules to be changed without recompilation and reimplementation.
- Used Splunk to raise alerts for issue detection and resolution.
- Used R programming to build analytical visualizations of issues identified during monitoring.
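The daily CPU-cost alert mentioned above can be sketched in Python; the per-CPU-second chargeback rate and the job usage figures below are purely illustrative assumptions, not values from the original jobs:

```python
# Sketch of the daily CPU-cost check described above.
# The chargeback rate and job usage figures are illustrative assumptions.
COST_PER_CPU_SECOND = 0.05  # hypothetical rate in dollars
DAILY_COST_LIMIT = 20.00    # alert threshold from the summary

def jobs_over_budget(job_cpu_seconds, rate=COST_PER_CPU_SECOND,
                     limit=DAILY_COST_LIMIT):
    """Return {job: cost} for jobs whose daily CPU cost exceeds the limit."""
    over = {}
    for job, cpu_seconds in job_cpu_seconds.items():
        cost = cpu_seconds * rate
        if cost > limit:
            over[job] = round(cost, 2)
    return over

# Example usage with made-up daily CPU figures:
usage = {"PAYJOB01": 120.0, "BILLJOB2": 900.0}
print(jobs_over_budget(usage))  # only BILLJOB2 (900 * 0.05 = $45) exceeds $20
```

In the real setup this check ran as a scheduled mainframe job; the sketch only shows the thresholding and reporting logic.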
TECHNICAL SKILLS:
Languages: COBOL, JCL, Easytrieve, MQ Services, TAMS tables, REXX, CICS, file handling, VSAM, Control-M, XML, Core Java, Splunk, Tableau, SolarWinds, AppDynamics and R programming
Scripting: JavaScript and Python
Database: DB2, IMS, MySQL
Special Software/Tools: Endevor, SPUFI, Xpediter, InterTest, ChangeMan, BMC tools (BMCADM & BMCAES), Infoman, XMLGEN, Control-M, CA-7, Autosys (R11), File-AID, BRMS ODM, CLIC case handling
Methodology: Waterfall and Agile using Rally
Change Management: ServiceNow
Project Maintenance: SharePoint and Confluence
PROFESSIONAL EXPERIENCE:
Confidential
Technical Operations
Responsibilities:
- Late-batch monitoring of mainframe jobs using the SysView utility on the mainframe.
- Late-batch monitoring of distributed batch jobs using Autosys (R11).
- Identification and documentation of job dependencies and workflows.
- Support batch consolidation and implementation activities.
- Identification and documentation of interface dependencies of batch jobs.
- Creating daily, monthly and quarterly reports of the Risks, Issues and delayed jobs for the impacted applications.
- Coordinate with the application teams in cleaning up non-running jobs in MBFS batch process.
- Coordinate with the application teams in cleaning up unused files, datasets and directories; reclaim the unused disk space and prepare metrics for CPU utilization, heap usage and garbage collection.
- Modernize the Mainframe applications
- Coordination with the application teams in case of job abnormal termination and job delays.
- Organize and support the problem management activity.
- Liaison with interfacing teams for support during outages.
- Produce materials aligned with the MBFS IT strategy, under MBFS IT Technical Operations direction.
- Support IT-Technical Operations command centre activities.
- Pulling reports for the unused files in disks and Tape; sharing the details with the application teams and getting their confirmation for deleting the files in both the production and development regions.
- Monitoring the Network performance, CPU utilization, Server capacity, Garbage and Heap collection for distributed applications using the Solarwinds and AppDynamics tool.
- Monitoring the application transactions using the AppDynamics.
- Monitor the alerts raised in the Splunk system.
- Prepare and test Splunk search strings.
- Develop, evaluate and document specific error logs and audit logs for management purpose.
- Create and configure management reports and dashboards.
- Validate whether raised alerts are legitimate.
- Check whether the query used to raise an alert handles the scenarios properly and is optimised to do so.
- Manage and maintain use cases into correlation systems.
- Coordination with the application teams in case of any incidents raised.
- Group incidents by category and ETA, ensuring proper actions are taken within the ETA.
- Engage the correct application teams and assist them with root cause analysis (RCA).
- Document the RCA and resolve the incident within the ETA.
- Creating the CISM tickets for change tasks and handling the incidents using the same.
- Co-ordinate with the application teams to plan the release activities.
- Raising the change tickets for the change requests on behalf of the application teams.
- Co-ordinate the implementation activities
- Provide support in scheduling the batch jobs in both Mainframe and Distributed environment.
- Grant individual team members access to particular applications to handle job scheduling, using the Job Request System (JRS).
Environment: Mainframe, COBOL, JCL, DB2, IMS, Autosys (R11), SolarWinds, AppDynamics, Splunk, CISM, CA SysView, JRS
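The incident grouping and ETA tracking described above can be sketched in Python; the record fields (`category`, `eta`) and the fixed "now" timestamp are illustrative assumptions, not the actual ticket schema:

```python
from collections import defaultdict
from datetime import datetime

def group_incidents(incidents, now=datetime(2024, 1, 1, 12, 0)):
    """Group incidents by category, flagging those past their ETA.
    Field names ('category', 'eta') are illustrative assumptions;
    'now' is fixed here only to make the example reproducible."""
    grouped = defaultdict(list)
    for inc in incidents:
        inc["breached"] = inc["eta"] < now  # past-ETA incidents need escalation
        grouped[inc["category"]].append(inc)
    return dict(grouped)

# Example usage with made-up incidents:
incidents = [
    {"id": 1, "category": "batch", "eta": datetime(2024, 1, 1, 10, 0)},
    {"id": 2, "category": "network", "eta": datetime(2024, 1, 1, 14, 0)},
]
groups = group_incidents(incidents)
print(sorted(groups))                  # ['batch', 'network']
print(groups["batch"][0]["breached"])  # True: its ETA has passed
```

Each breached group would then drive the escalation and RCA steps listed in the bullets above.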
Confidential
Sr. System Architect
Responsibilities:
- Onboarding formalities for new resources (Including initial knowledge transfer on Infrastructure and Project related structures)
- Requirement gathering, data setup, developing query using Splunk, testing and implementation activities in E3.
- Creating Data Model, Field Extractions, Tags and Event Types Field, Aliases, Calculated Fields & Macros.
- Set up a KV store for high-volume source types so that searches run faster.
- Develop dashboards for the application team displaying the top 10 errors, major failures, authentication errors and other errors for the whole application and for every module.
- Created Retro-run dashboard to trigger alerts for old dates without having to actually understand different earliest and latest time ranges.
- Set up scripted inputs and file monitors and update various configuration files. Develop reliable, efficient queries that feed custom alerts, dashboards, reports and data models.
- Provide warranty support and transition to Technical investigation team.
- Provide forecast, estimate staffing and effort to deliver solutions aligned with strategies and technology plans.
- Monitor the alerts raised in the Splunk system.
- Prepare and test Splunk search strings.
- Develop, evaluate and document specific error logs and audit logs for management purpose.
- Create and configure management reports and dashboards.
- Validate whether raised alerts are legitimate.
- Go through the business processes and scenarios.
- Check whether the query used to raise an alert handles the scenarios properly and is optimised to do so.
- Manage and maintain use cases into correlation systems.
- Python scripting to check balancing during file transmission, record mismatches and failures in intermediate jobs.
- Check whether files are fed to the system on time and whether duplicate files are fed.
- Notify the business team if an alert is legitimate.
- Communicate the steps to be followed to handle the alerts and prevent future alerts.
- Discuss with the Data sourcing and configuration teams to raise change requests, if any modification or optimization is required for the query.
- Daily and weekly status meetings to discuss open and in-progress defects and alerts with the team and the client director.
Environment: SPLUNK, Python scripting, UNIX Bash, Archer, CLIC case monitoring, Tableau and R Programming
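The balancing and duplicate-feed checks described above can be sketched in Python; the `TRL|<count>` trailer layout and the pipe-delimited record format are illustrative assumptions, not the actual file specification:

```python
import hashlib

def check_balance(lines):
    """Compare the record count in a 'TRL|<count>' trailer against the
    actual detail records. The trailer layout is an illustrative assumption."""
    details = [line for line in lines if not line.startswith("TRL|")]
    trailer = next(line for line in lines if line.startswith("TRL|"))
    expected = int(trailer.split("|")[1])
    return len(details) == expected

def is_duplicate(content, seen_digests):
    """Flag a re-fed file by comparing its SHA-256 digest with prior feeds."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    dup = digest in seen_digests
    seen_digests.add(digest)
    return dup

# Example usage with a made-up feed:
feed = ["ACCT|001|100.00", "ACCT|002|250.00", "TRL|2"]
print(check_balance(feed))  # True: 2 detail records match the trailer count

seen = set()
print(is_duplicate("ACCT|001|100.00", seen))  # False: first time seen
print(is_duplicate("ACCT|001|100.00", seen))  # True: same content re-fed
```

In practice these checks fed Splunk alerts rather than printing to a console; the sketch only shows the comparison logic.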
Confidential
System Architect
Responsibilities:
- Meeting the interfacing teams to discuss the scope of data to be utilized for this project.
- Develop thorough understanding of the system.
- Create the high-level and detailed process flows and data structure for storing the data.
- Evaluate and document use cases and proof of concepts.
- Install, test and deploy monitoring solution with Splunk services.
- Provide technical services by preparing Splunk queries to projects, user requests and data queries.
- Requirement gathering, data setup, developing query using Splunk, testing and implementation activities in E3.
- Creating Data Model, Field Extractions, Tags and Event Types Field, Aliases, Calculated Fields & Macros.
- Set up scripted inputs, file monitors and update various configuration files. Develops reliable, efficient queries that will feed custom Alerts, Dashboards, Reports and Data Models.
- Set up search heads and indexers, ensuring they are connected and can communicate properly.
- Ensure the search head can search data on the indexers.
- Ensure the indexers receive data properly from the forwarders.
- Configure data inputs for collecting data from forwarders and ensure that incoming data is read correctly by Splunk.
- Coordinating troubleshooting activities with Splunk Technical team.
- Support data source configurations and change management processes.
- Analyse and monitor incident management and incident-resolution problems in ServiceNow.
- Log the alert details in CLIC management system.
- Maintain and manage assigned systems, Splunk related issues and administrators.
- Raise access-authorization requests for new joiners.
- Mentoring team on various aspects of technology and domain knowledge.
Environment: SPLUNK, Python scripting, HTML, XML, UNIX Bash commands and CLIC management system
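The forwarder-to-indexer wiring described above is typically configured in Splunk's `outputs.conf` and `inputs.conf`; the host name, port, monitored path and sourcetype below are illustrative assumptions, not the actual environment:

```ini
# outputs.conf on the forwarder: where to send events
# (the indexer host name is an assumption; 9997 is Splunk's
# conventional receiving port)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997

# inputs.conf on the forwarder: which files to monitor
# (path, sourcetype and index are illustrative)
[monitor:///var/log/app/*.log]
sourcetype = app_logs
index = main
```

With this in place, the indexer-side check reduces to confirming that events with the expected sourcetype are arriving in the target index.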
Confidential
Technology Lead
Responsibilities:
- Analysing the AS-IS Triumph process.
- Discuss with Subject matter experts to get the thorough understanding of the system.
- Prepare the Task flows and Process flows of the existing system and futuristic TO-BE approach.
- Identify unused code and variables in the programs, which add unnecessary cost.
- Figure out the scope for automation
- Modernization of the mainframe, moving the core modules to a rule-based engine.
- Extract the rules from the core modules driving the system.
- Prepare the flow in IBM ODM (Operational Decision Manager) using the extracted rules.
- Prepare the graphical user interface so the business can change rules in Decision Center without code changes, compilation or implementation, thereby reducing the cost of maintenance and modification.
- Executing the batch jobs and online jobs for fetching the code coverage.
- Tracking the test results, analysing the modules and preparing the performance analysis reports.
- Prepare the Feature break down of the processes handled in Triumph in coordination with Business.
- After finalizing the TO-BE approach, prepared the iteration plan to split the tasks sprint-wise and estimated the time needed to deliver the processes.
- Prepare the Estimation and sizing of the suggestive approach.
- Conduct multiple discussions with Client Product Director, Client Tech Director and Client VP, for the presentation of estimation and sizing.
- Conduct knowledge-transfer sessions to bring new people joining the project up to speed on both domain and technology.
Environment: COBOL, JCL, IMS, DB2, stored procedures, XML, MQ, ChangeMan & Xpediter, Code Coverage, File-AID and IBM ODM.
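The rule-extraction idea above (business rules kept as data rather than hard-coded logic, in the spirit of ODM's Decision Center) can be illustrated with a small Python sketch; the rule names, fields and thresholds are hypothetical:

```python
# Rules as data: changing a threshold needs no recompilation,
# mirroring the rule-driven approach described above.
# All rule names, fields and thresholds here are illustrative.
RULES = [
    # (name, predicate on an application record, decision)
    ("high_value", lambda app: app["amount"] > 10000, "manual_review"),
    ("low_score",  lambda app: app["score"] < 600,    "decline"),
]

def decide(app, rules=RULES, default="approve"):
    """Return the decision of the first matching rule, else the default."""
    for name, predicate, decision in rules:
        if predicate(app):
            return decision
    return default

# Example usage with made-up application records:
print(decide({"amount": 15000, "score": 700}))  # manual_review
print(decide({"amount": 500, "score": 550}))    # decline
print(decide({"amount": 500, "score": 700}))    # approve
```

In ODM the rule table lives in Decision Center and is edited by business users; the sketch only shows why externalized rules avoid the compile-and-deploy cycle.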
Confidential
Technology Analyst
Responsibilities:
- Kick off calls with the Business team to understand the requirements.
- Study the current state of the processes. Prepare the AS-IS process map and the TO-BE scenario.
- Bridging the gap in domain understanding between the client and technical team. Coordinating with the stakeholders to translate the business requirements into actual project plan.
- Creating Detailed Requirement Design Documents (DRD).
- Creating Use Cases, User Interface Designs, Report Specifications, Print Specifications and System-to-System Interfaces.
- Identifying areas for automation.
- Follow the Agile software process and prepare the iteration plan for a sprint-wise breakdown of tasks to ensure successful delivery.
- Coding, testing and preparing test scenarios of the functionalities. Work with the testing team to prepare the testing plan, scope and other relevant documents and reviewing the test cases prepared.
- Analyse change requests and coordinate the required changes end-to-end.
- Prepare reports such as sign-off reports and root cause analysis reports.
- Training new resources on Domain and Technology understanding.
- Taking calls with the end customer and remotely working on the environment to troubleshoot the issue.
- Review call status and client deliverables, report to client and Infosys management, and manage onsite builds for pre-production and production environments.
- Deploy components to production and provide support during the warranty period.
- Contributing for successful and smooth running of the project.
Environment: COBOL, JCL, IMS, DB2, stored procedures, XML, SOAP UI, MQ, ChangeMan & Xpediter.
Confidential
Sr. Systems Engineer
Responsibilities:
- Impact analysis of the existing system based on Symbology changes. High level and detailed design of identified modules. Coding and Unit testing of the designed modules
- Peer and group review of the modified components and test results with leads and onsite coordinators. Upload test results, test cases and logged defects into Quality Center.
- Creating change requests, packages and performing install activities including implementation plans and pre & post install activities. Review of project deliverables
- Provide warranty support for the installed components
- Meetings with the production support team to provide the knowledge transfer on the project details and installed components. Monitoring the quality adherence of project activities
Environment: COBOL, JCL, CICS, DB2, IMS-DB, Endevor.