- Over 8 years of IT experience in data analytics and managing large platforms as a systems administrator.
- In-depth knowledge of Splunk architecture and its various components. Passionate about machine data and operational intelligence.
- Headed proof-of-concept (POC) Splunk implementations; mentored and guided other team members on understanding Splunk use cases.
- In-depth knowledge of search head clustering and indexer clustering.
- Implemented workflow actions to drive troubleshooting across multiple event types in Splunk.
- Expert in installing and using the Splunk App for Unix and Linux (Splunk *nix).
- Knowledge of Splunk configuration files (props.conf, transforms.conf, outputs.conf).
- BigFix experience in detecting malicious activity, threat analysis, endpoint auditing, vulnerability detection, and patching. Good experience in building Splunk security analytics. Led logging enrollments from multi-tier applications into the enterprise logging platforms.
- Extensive experience and active involvement in requirements gathering, analysis, and reviews.
- Expertise in creating accurate reports, dashboards, visualizations, and pivot tables for business users.
- Expert in using rex, sed, erex, and IFX to extract fields from log files.
- In-depth knowledge of setting up alerts and monitoring recipes for machine-generated data.
- In-depth knowledge of multi-vendor platforms such as Cisco, Check Point, Juniper NetScreen, Palo Alto, and Blue Coat.
- Created customized dashboard panels for specific requirements and updated existing dashboard panels for urgent SOC requests.
- Extensive experience and active involvement in requirements gathering, analysis, reviews, coding and code reviews, and unit and integration testing.
- Provided engineering support for threat intelligence, security operations, incident response, and inspection services for the client.
- Managed and maintained use cases and data feeds in correlation systems.
- About 3 years of experience in Big Data.
- Familiar with components of Hadoop Ecosystem: HDFS, HAWQ, Hive, HBase, Pig.
- Expertise in Hadoop Application Development.
- Proficiency in using HAWQ and Hive to develop Hadoop applications and jobs.
- Advanced analytics and interpretation skills on large datasets.
- Worked with Agile methodology and SOA for many applications.
- Experience using XML, XSD and XSLT.
- Worked on NoSQL databases including HBase, Cassandra and MongoDB.
- Good knowledge of Log4j for error logging, as well as Terraform.
- Developed end-to-end monitoring scripts for the content publishing system.
- Team player with excellent communication, presentation and interpersonal skills.
- Highly motivated team player with zeal to learn new technologies.
- Strong willingness to accept new projects and learn new tools.
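The rex/erex field-extraction work listed above can be approximated in plain Python. This is a minimal sketch of what a Splunk `rex` extraction does; the sample log line, field names, and pattern are hypothetical.

```python
import re

def extract_fields(raw_event: str) -> dict:
    r"""Pull named fields out of a raw event, the way a Splunk rex
    command such as `rex "user=(?<user>\w+) status=(?<status>\d+)"` would."""
    pattern = re.compile(r"user=(?P<user>\w+)\s+status=(?P<status>\d+)")
    match = pattern.search(raw_event)
    # Return a dict of field name -> extracted value, or empty if no match
    return match.groupdict() if match else {}

fields = extract_fields("2016-03-02 10:15:01 user=alice status=404")
# fields == {"user": "alice", "status": "404"}
```

The same named-group syntax carries over directly to Splunk's rex command, which is why regex fluency transfers between the two.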
Hadoop/Big Data: Splunk, Hunk, HDFS, HBase, Pig, Hive, Sqoop, PowerPivot, Puppet, Oozie, ZooKeeper
Big Data Analytics: Datameer 2.0.5, Splunk, Tableau
Automation Tools: Ansible, Git
Programming Languages: Python, Linux shell scripts
Databases: Oracle 11g/10g/9i, MySQL, DB2, MS-SQL Server
Web Servers: WebLogic, WebSphere, Apache Tomcat
Network Protocols: TCP/IP, UDP, HTTP, DNS, DHCP
ETL Tools: Informatica, Talend
Confidential, New York
Splunk Site Reliability Engineer
- Configured the AWS Splunk environment, with forecasting for data ingest of 10 TB per day.
- Managed alerting and scheduling for various teams with the help of several supporting applications.
- Created alerts and dashboards in Datadog to monitor CPU and other instance-related metrics.
- Provided support for Cloudera, monitoring the Kafka applications.
- Knowledgeable in SolarWinds monitoring.
- Experience in Splunk development creating Apps, Dashboards, Data Models, etc.
- Experience with Splunk Enterprise deployments; enabled continuous integration as part of configuration management.
- Managed Indexer Clusters including security, hot and cold bucket management and retention policies.
- Developed Splunk infrastructure and related solutions using automation toolsets. Experience in Splunk GUI development, creating Splunk apps, searches, data models, dashboards, and reports using the Splunk Search Processing Language (SPL).
- Responsible for documenting the current architectural configurations and detailed data flow and troubleshooting guides for application support.
- Architected the various components within Splunk (indexer, forwarder, search head, deployment server), heavy and universal forwarders, Terraform, parsing, indexing, and searching concepts, hot/warm/cold/frozen bucketing, and the license model.
- Upgraded and optimized the Splunk setup with new releases.
- Disaster Recovery and Change Management: Designed and implemented disaster recovery solutions such as clusters using VERITAS Cluster server for storage replication and to allow seamless failover of IP on the DR site.
- Held weekly meetings to discuss change management and datacenter and infrastructure approvals/issues.
- Production support and systems engineering duties related to Red Hat system administration: DNS, DHCP, NFS, NIS, LDAP, and user account maintenance.
- Backup & recovery, Auto-mounting, License Management, Printer configuration.
- Interacted with clients for requirements gathering in order to design and plan the software and hardware infrastructure; handled installation and configuration of the Squid web proxy.
- Installed, configured Oracle 10g & MySQL databases for Dev, Prod & QA environments.
- Installed and maintained the Jakarta Tomcat and Apache HTTP Server (1.3, 2.2) web servers on Red Hat Linux.
- Handled installation and configuration of Linux MTAs - (Sendmail, Postfix and Axigen).
- Preparation of operational testing scripts for Log check, Backup and recovery and Failover.
- Created and implemented shell scripts for database backups, alert-log monitoring, and log rotation reports.
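The bucket-management and retention work described in this role typically lives in indexes.conf on the indexers. Below is a minimal sketch; the index name and retention values are hypothetical, not taken from the engagement above.

```ini
# indexes.conf -- hypothetical index with retention settings
[app_logs]
homePath   = $SPLUNK_DB/app_logs/db
coldPath   = $SPLUNK_DB/app_logs/colddb
thawedPath = $SPLUNK_DB/app_logs/thaweddb
# Roll data to frozen (deleted by default) after ~90 days
frozenTimePeriodInSecs = 7776000
# Cap total index size at roughly 500 GB
maxTotalDataSizeMB = 512000
```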
Confidential, Dallas, TX
- Configured Splunk on various systems consisting of Linux, Windows, and Mac OS, collecting more than 12 TB per day.
- Created a complex application from scratch, including making calls to REST APIs through various scripts.
- Handled configuration files such as inputs.conf (to schedule times for scripts to run) and props.conf and transforms.conf (to handle unstructured data, change the source checksum, and more).
- Possess excellent regex skills, developed on data that is very complex, with mixed structure and a variety of formats.
- Experienced in implementing REST API based scripts covering all the HTTP methods: POST, GET, PUT, PATCH, and DELETE.
- Excellent knowledge of scripting languages such as Python and shell. Used Python to write scripts that perform complex operations such as gathering data from different sources and fine-tuning it, so that end users can easily understand the raw data if they come across it.
- Provided access control using LDAP to various teams based on their role and requirement.
- Used Splunk Enterprise Security (ES) as a SIEM tool, providing insight into machine data generated from security technologies such as network, endpoint, access, malware, vulnerability, and identity information.
- Solid experience with the Splunk Search Processing Language (SPL), used to create complex alerts, reports, and saved searches. Also created various pivot tables from complex raw data upon request.
- For automation, created Ansible playbooks to deploy apps and configuration changes to the cluster.
- Followed the Agile model throughout the project; used JIRA to implement the model and track progress.
- Responsible for creating and keeping up to date documents such as release notes, ES upgrade guides, and useful information for running scripts.
- Installed/Configured/Maintained Pivotal Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
- Wrote the shell scripts to monitor the health check of Hadoop daemon services and respond accordingly to any warning or failure conditions.
- Developed a data pipeline using shell scripting, HAWQ, Hive, and Java MapReduce to ingest customer behavioral data into HDFS for analysis.
- Involved in collecting and aggregating large amounts of log data using Apache Flume and staging data in HDFS for further analysis.
- Collected the logs data from web servers and integrated in to HDFS using Flume.
- Worked on cluster installation, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slot configuration.
- Implemented NameNode backup using NFS for high availability.
- Developed PIG Latin scripts to extract the data from the web server output files to load into HDFS.
- Used Pig as an ETL tool to do transformations, event joins, and some pre-aggregations before storing the data.
Environment: Hadoop, Greenplum, HAWQ, MapReduce, Terraform, Phantom, Hive, HDFS, Pig, Sqoop, Oozie, Pivotal Command Center, Flume, HBase, ZooKeeper, MongoDB, Cassandra, Oracle, NoSQL, Unix/Linux, Datameer.
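The inputs.conf and props.conf handling described in this role can be sketched with minimal stanzas. The script path, sourcetypes, monitored paths, and line-breaking pattern below are hypothetical illustrations, not the actual configuration from the engagement.

```ini
# inputs.conf -- scripted input: run a hypothetical collection script every 5 minutes
[script://./bin/collect_metrics.py]
interval = 300
sourcetype = custom:metrics

# inputs.conf -- salt the source checksum so files with identical headers are not skipped
[monitor:///var/log/app/*.log]
crcSalt = <SOURCE>
sourcetype = app:log

# props.conf -- line-merge unstructured multi-line events (hypothetical sourcetype)
[app:log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2}
```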
Confidential, San Diego, CA
Sr. Splunk Engineer
- Good understanding of Splunk architecture, design, and best practices for both on-premises and AWS cloud deployments.
- Responsible for designing, developing, testing, troubleshooting, deploying and maintaining Splunk solutions, Reporting, alerting and dashboards
- Helped write strong Splunk search language queries and created production-quality dashboards, reports, and threshold-alerting mechanisms.
- Supported Splunk on Linux, Windows, and virtualized platforms, including parsing, indexing, and searching concepts and hot/warm/cold/frozen bucketing.
- Solid understanding of logging technologies (syslog, Windows and UNIX native logging)
- Involved in multi-tier Splunk installation; configured indexers, forwarders, search heads, and clusters.
- Helped teams onboard data, create various knowledge objects, and install and maintain Splunk apps and TAs.
- Decoded and debugged complex Splunk queries.
- Created many proof-of-concept dashboards for IT operations and service owners, used to monitor application and server health.
- Functional understanding of TCP/IP networks and firewalls
- Performed system backups, patching, and updates with BigFix, RDP, and SSH.
- Installed the Splunk DB Connect app to feed database logs into Splunk.
- Configured various summary indexes by creating saved searches to collect aggregated data, and created dashboards on top of the summary indexes.
- Lead the team in actively implementing smart Splunk solutions.
- Used REST endpoints in onboarding Salesforce data.
- Handled server builds, F5 LTM, Brocade, and SolarWinds network monitoring/reports; installed the Palo Alto app for network data.
- Extensively worked on building range maps for various SLA conditions using Splunk 6.x dashboard examples, for urgent SOC requests regarding active threats.
- Risk and threat analysis: IT security monitoring, analysis, and vulnerability analysis using BigFix.
- BigFix experience in detecting malicious activity, threat analysis, and patching.
- Audited endpoints and performed vulnerability detection through BigFix.
- Supported, monitored, and managed the SIEM environment: Splunk administration and analytics development on information security, infrastructure and network, and data security; the Splunk Enterprise Security app; event triage; and incident analysis.
- Created data models and used report acceleration for faster searches.
- Managed clusters of 25 indexers and 15 search heads.
- Provided knowledge about Splunk architecture and various components (indexer, forwarder, and search head). Experience in working on Enterprise Security log management and SIEM solutions.
- Integrated Splunk with ServiceNow to create incidents automatically based on alerts.
- Took part in building indexers, heavy forwarders, and search heads.
- Extensive knowledge in installing and configuring Hadoop NameNodes and DataNodes with Splunk.
- Took part in the Splunk platform upgrade from 6.4.3 to 6.5.
- Integrated Splunk with LDAP.
- Created custom Splunk app and TA for each team on onboarding their data and access it with LDAP security roles.
- Updated active directory to add new users, creating new security roles and set permissions.
- Writing new firewall rules (Access rules and reverse Access rules).
- Designed and implemented syslog network traffic flows and the syslog server.
- Installed and configured universal and heavy forwarders.
Environment: Splunk Enterprise, Splunk 6.2.x, 6.4.x, 6.5 Universal Splunk forwarder, Splunk Db connect, Phantom, Oracle, MS SQL 2008, Regular expressions, Windows, UNIX, UNIX shell scripting, XML, Microsoft Active Directory, Splunk App for Enterprise Security (SIEM).
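The REST-endpoint scripting mentioned in this and the previous role can be sketched in Python with only the standard library. The endpoint URL, payload, and token below are hypothetical; a real onboarding script would point at the actual service's API.

```python
import json
import urllib.request
from typing import Optional

def build_request(url: str, method: str,
                  payload: Optional[dict] = None,
                  token: Optional[str] = None) -> urllib.request.Request:
    """Build a JSON REST request for any HTTP method (GET/POST/PUT/PATCH/DELETE)."""
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    headers = {"Content-Type": "application/json"}
    if token:
        # Bearer auth shown for illustration; the real scheme depends on the API
        headers["Authorization"] = "Bearer " + token
    return urllib.request.Request(url, data=data, headers=headers, method=method)

# Hypothetical endpoint; actually sending it would be urllib.request.urlopen(req)
req = build_request("https://api.example.com/v1/events", "POST",
                    payload={"event": "login"}, token="dummy-token")
# req.get_method() == "POST"
```

Separating request construction from transmission keeps the scripts testable without network access, which matters when the same helper drives POST, PUT, PATCH, and DELETE calls.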
- Expertise with Splunk 6.3.04 (Currently involved in cluster upgrade to 6.3.15)
- Installed and configured heavy, universal, and intermediate forwarders.
- Installed and deployed forwarders with the help of puppet team
- On boarded data from various sources such as Oracle, Informatica, Salesforce, Autosys and Cognos.
- Extracted various fields using the field extractor, field extractions (rex), and calculated fields to optimize search performance and reduce the load on the search head.
- Led the team in communicating with application subject matter experts to understand the pain points in the logs.
- Wrote complex alert logic for smart, proactive alerting for various other teams.
- In-depth experience with props.conf, transforms.conf, and inputs.conf.
- Conducted scheduled patching with BigFix.
- Implemented Zone Based Firewalling and Security Rules on the Palo Alto Firewall.
- Exposure to the WildFire feature of Palo Alto.
- Managed Cisco IDS and IPS modules with Firepower Management Center.
- As a part of SIEM team, monitored notable events through Splunk Enterprise Security.
- Used FireEye tool to run against application servers to generate reports about vulnerabilities for that server.
- Onboard new log sources and parsing to enable SIEM correlation.
- Experience with search head clustering and indexer clustering.
- Assisted various other power users in optimizing the searches.
- Supported Linux system administration customers on RHEL/SUSE with a combination of systems architecture, systems development, and Unix/Linux operating system lifecycle development.
- Performed technical assessments, implementation administration, regression process development, SAN support from a directory standpoint, technology scaling, and server builds.
- Provided ongoing support in systems management and administration of Linux/UNIX in a global-scale environment.
- Provisioned and supported Production, QA, and Developer environments.
- Deployed UNIX/Linux and VERITAS and built various environments.
- Provide rotational on-call Tier II support responsibilities.
- Production support: Provided 24x7 support for various divisions within the company and resolved all production issues in a high-pressure, time-sensitive environment.
- Resolved issues related to NFS, NIS, LVM, GRUB corruption, configuration and maintenance of RAID (levels 0, 1, 5), and troubleshooting of VERITAS Volume Manager and cluster server environments.
- Developed scripts in Perl and shell to automate processes.
Environment: Splunk 6.3.04, Appdynamics, New Relic, Linux, Bash, Perl, Sed, rex, erex, Splunk Knowledge Objects, Python.
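The proactive alert logic described in this role is typically wired up in savedsearches.conf. Below is a minimal sketch; the search, schedule, threshold, and recipient address are all hypothetical placeholders.

```ini
# savedsearches.conf -- hypothetical scheduled error-rate alert
[app_error_rate_alert]
search = index=app_logs sourcetype=app:log level=ERROR | stats count
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
cron_schedule = */15 * * * *
# Fire when more than 100 error events land in the 15-minute window
alert_type = number of events
alert_comparator = greater than
alert_threshold = 100
action.email = 1
action.email.to = oncall@example.com
```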