Hadoop Big Data Engineer/Admin Resume
Newport Beach, CA
SUMMARY
- 7+ years of expertise in Hadoop, Big Data analytics, and Linux, including architecture, design, installation, configuration, and management of Apache Hadoop clusters across the MapR, Hortonworks, and Cloudera Hadoop distributions.
- Hands-on experience in installing, configuring, monitoring, and using Hadoop components such as Hadoop MapReduce, HDFS, HBase, Hive, Sqoop, Pig, Zookeeper, Hortonworks, Oozie, Apache Spark, and Impala.
- Working experience with large scale Hadoop environments build and support including design, configuration, installation, performance tuning and monitoring.
- Working knowledge of monitoring tools and frameworks such as Splunk, InfluxDB, Prometheus, Sysdig, Datadog, AppDynamics, New Relic, and Nagios.
- Experience in setting up automated monitoring and escalation infrastructure for Hadoop Cluster using Ganglia and Nagios.
- Standardized Splunk forwarder deployment, configuration, and maintenance across a variety of Linux platforms. Also worked on DevOps tools like Puppet and Git.
- Hands on experience on configuring a Hadoop cluster in a professional environment and on Amazon Web Services (AWS) using an EC2 instance.
- Experience with complete Software Design Lifecycle including design, development, testing and implementation of moderate to advanced complex systems.
- Hands-on experience in installation, configuration, supporting, and managing Hadoop clusters using the Apache, Hortonworks, and Cloudera distributions and MapReduce.
- Extensive experience in installing, configuring and administrating Hadoop cluster for major Hadoop distributions like CDH5 and HDP.
- Experience in Ranger and Knox configuration to provide security for Hadoop services (Hive, HBase, HDFS, etc.). Experience in administration of Kafka and Flume streaming using the Cloudera distribution.
- Developed automated scripts using Unix shell for performing RUNSTATS, REORG, REBIND, COPY, LOAD, BACKUP, IMPORT, EXPORT, and other database-related activities.
- Installed, configured, and maintained several Hadoop clusters which includes HDFS, YARN, Hive, HBase, Knox, Kafka, Oozie, Ranger, Atlas, Infra Solr, Zookeeper, and Nifi in Kerberized environments.
- Experienced with deployments, maintenance, and troubleshooting of applications on Microsoft Azure cloud infrastructure. Excellent knowledge of NoSQL databases like HBase and Cassandra.
- Experience with large-scale Hadoop clusters, handling all Hadoop environment builds, including design, cluster setup, and performance tuning.
- Experience in HBase replication and MapR-DB replication setup between two clusters.
- Implemented release processes such as DevOps and Continuous Delivery methodologies for existing builds and deployments. Experience with scripting languages such as Python, Perl, and shell.
- Involved in the architecture of the storage service to meet changing requirements for scaling, reliability, performance, and manageability.
- Worked on Google Cloud Platform Services like Vision API, Instances.
- Modified reports and Talend ETL jobs based on feedback from QA testers and users in development and staging environments.
- Worked with systems engineering team to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
- Deployed Grafana dashboards for monitoring cluster nodes, using Graphite as a data source and collectd as a metric sender.
- Good knowledge of Apache Flume, Sqoop, Hive, HCatalog, Impala, Zookeeper, and Oozie.
- Experienced in workflow scheduling and monitoring tool Rundeck and Control-M.
- Proficiency with application servers such as WebSphere, WebLogic, JBoss, and Tomcat.
- Working experience designing and implementing complete end-to-end Hadoop infrastructure.
- Good experience designing, configuring, and managing backup and disaster recovery for Hadoop data.
- Experienced in developing MapReduce programs using Apache Hadoop for working with Big Data.
- Responsible for designing highly scalable big data clusters to support various data storage and computation needs across varied big data platforms: Hadoop, Cassandra, MongoDB, and Elasticsearch.
TECHNICAL SKILLS
Hadoop/Big Data Technologies: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, Storm, Zookeeper, Kafka, Impala, HCatalog, Apache Spark, Spark Streaming, Spark SQL, HBase, NiFi, Cassandra, AWS (EMR, EC2), Hortonworks, Cloudera
Languages: Java, SQL
Protocols: TCP/IP, HTTP, LAN, WAN
Network Services: SSH, DNS/BIND, NFS, NIS, Samba, DHCP, Telnet, FTP, IPtables, MS AD/LDS/ADC and OpenLDAP
Other Tools: Tableau, SAS
Mail Servers and Clients: Microsoft Exchange, Lotus Domino, Sendmail, Postfix
Databases: Oracle 9i/10g, MySQL 4.x/5.x, HBase, NoSQL, Postgres
Platforms: Red Hat Linux, CentOS, Solaris, and Windows
Methodologies: Agile Methodology -SCRUM, Hybrid
PROFESSIONAL EXPERIENCE
Confidential, Newport Beach, CA
Hadoop Big Data Engineer/Admin
Responsibilities:
- Experience in architecting, designing, installing, configuring, and managing Apache Hadoop clusters across the MapR, Hortonworks, and Cloudera Hadoop distributions.
- Worked on analyzing Hortonworks Hadoop cluster and different big data analytic tools including Pig, HBase Database and Sqoop.
- Responsible for installing, configuring, supporting and managing of Hadoop Clusters.
- Managed and reviewed Hadoop Log files as a part of administration for troubleshooting purposes.
- Monitoring and support through Nagios and Ganglia.
- Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
- Created MapR-DB tables and was involved in loading data into those tables.
- Maintained the operations, installation, and configuration of a 100+ node cluster with the MapR distribution.
- Responsible for designing and implementing the data pipeline using Big Data tools including Hive, Oozie, Airflow, Spark, Drill, Kylin, Sqoop, Kylo, Nifi, EC2, ELB, S3 and EMR.
- Installed and configured Cloudera CDH 5.7.0 on RHEL 5.7 and 6.2 64-bit operating systems and was responsible for maintaining the cluster.
- Installed and configured Apache Airflow for workflow management and created workflows in Python (see the DAG sketch after this list).
- Worked on setting up clusters for various services such as Hadoop, Spark, HBase, Kafka, etc. on Azure HDInsight.
- Experience in Azure services beyond basic IaaS functionality.
- Continuous monitoring and managing the Hadoop cluster through Cloudera Manager.
- Experience with Cloudera Navigator and Unravel Data for auditing Hadoop access.
- Performed data blending of Cloudera Impala and Teradata ODBC data sources in Tableau.
- Cloudera Navigator installation and configuration using Cloudera Manager.
- Cloudera rack awareness and JDK upgrades using Cloudera Manager.
- Sentry installation and configuration for Hive authorization using Cloudera Manager.
- Handled importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
- Involved in analyzing data using Google BigQuery to discover information, business value, patterns, and trends in support of decision making. Worked on data profiling and data quality analysis.
- Utilized Apache Spark with Python to develop and execute Big Data analytics.
- Developed Spark code in Scala and Python for a regular expression (regex) project in the Hadoop/Hive environment on Linux/Windows for big data sources (see the PySpark sketch after this list).
- Data sources are extracted, transformed and loaded to generate CSV data files with Python programming and SQL queries.
- Built pipelines to move hashed and un-hashed data from Azure Blob to Data lake.
- Utilized Azure HDInsight to monitor and manage the Hadoop Cluster.
- Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Scala, and Python.
- Developed backend code using the Hadoop REST API (WebHDFS) to perform file system operations such as creating and deleting files and directories and opening, reading, or writing files (see the WebHDFS sketch after this list).
- Developed UDFs in Java for Hive and Pig; worked on reading multiple data formats on HDFS using Scala.
- Developed RESTful endpoints (Rest Controller) to retrieve data or perform an operation on the back end.
- Experience designing and supporting RESTful web services.
- Hands on experience in provisioning and managing multi-node Hadoop Clusters on public cloud environment Amazon Web Services (AWS) - EC2 and on private cloud infrastructure.
- Actively involved on proof of concept for Hadoop cluster in AWS. Used EC2 instances, EBS volumes and S3 for configuring the cluster.
- Used NiFi processors to build and deploy end-to-end data processing pipelines and schedule the workflows.
- Improved reporting mechanisms for the Splunk tool for clients.
- Gained experience with Splunk DB Connect and imported data from the Oracle platform to the Splunk platform.
- Worked on setting up Apache NiFi and performing POC with NiFi in orchestrating a data pipeline.
- Enhanced and provided core design impacting the Splunk framework and components.
- Actively involved in monitoring server health using the Splunk monitoring and alerting tool and the Tivoli alerting tool.
- Installed and configured Hadoop ecosystem components (MapReduce, Pig, Sqoop, Hive, Kafka) both manually and using Ambari Server.
- Integrated Apache Storm with Kafka to perform web analytics. Uploaded clickstream data from Kafka to HDFS, HBase, and Hive by integrating with Storm.
- Implemented an instance of Zookeeper for the Kafka brokers.
- Successfully secured the Kafka cluster with Kerberos.
- Experienced in writing Spark Applications in Scala and Python (Pyspark).
- Analyzed SQL scripts and designed the solution to implement them using PySpark.
- Worked on reading multiple data formats on HDFS using PySpark.
- Implemented Kafka security features using SSL without Kerberos; later, for more fine-grained security, set up Kerberos with users and groups to enable more advanced security features.
- Installed Ranger in all environments as a second level of security for the Kafka brokers. Set up a no-authentication Kafka listener in parallel with the Kerberos (SASL) listener and tested non-authenticated (anonymous) users in parallel with Kerberos users.
- Experience in setup, configuration and management of security for Hadoop clusters using Kerberos and integration with LDAP/AD at an Enterprise level.
- Used Hive, created Hive tables, and loaded data from the local file system to HDFS.
- Production experience in large environments using configuration management tools like Chef and Puppet, supporting a Chef environment with 250+ servers, and involved in developing manifests.
- Created EC2 instances and implemented large multi-node Hadoop clusters in the AWS cloud from scratch using automated Terraform scripts.
- Created a NoSQL solution for a legacy RDBMS using Kafka, Spark, Solr, and HBase Indexer for ingestion, and Solr and HBase for real-time querying.
- Imported data from AWS S3 into Spark RDD, performed transformations and actions on RDD's.
- Set up a test cluster with new services like Grafana, integrating with Kafka and HBase for intensive monitoring.
- Developed multiple POCs using PySpark and deployed them on the YARN cluster; compared the performance of Spark with Hive and SQL/Teradata.
- Administering and configuring Kubernetes.
- Worked with Spark to improve performance and optimize the existing algorithms in Hadoop using Spark Context, Spark SQL, DataFrames, and Pair RDDs.
- Responsible for copying 400 TB of HDFS snapshot from Production cluster to DR cluster.
- Responsible for copying 210 TB of HBase table data from the Production cluster to the DR cluster.
- Created SOLR collection and replicas for data indexing.
- Worked on Google Cloud Platform Services like Vision API, Instances.
- Administering 150+ Hadoop servers that need Java version updates, the latest security patches, and OS-related upgrades, and taking care of hardware-related outages.
- Upgraded Ambari from 2.2.0 to 2.4.2.0 and Solr from 4.10.3 to Ambari Infra (Solr 5.5.2).
- Implemented Cluster Security using Kerberos and HDFS ACLs.
- Involved in cluster-level security: perimeter security (authentication - Cloudera Manager, Active Directory, and Kerberos), access (authorization and permissions - Sentry), visibility (audit and lineage - Navigator), and data (data encryption at rest). Wrote Pig Latin scripts to analyze and process the data.
- Involved in loading data from the UNIX file system to HDFS. Handled root cause analysis (RCA) efforts for high-severity incidents.
- Investigated the root cause of Critical and P1/P2 tickets.
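The Airflow workflows referenced above followed roughly this shape. A minimal Airflow 1.x DAG sketch, assuming hypothetical DAG/task names, a hypothetical MySQL source, and that the sqoop and hive CLIs are on the worker's PATH:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "hadoop-admin",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

# Daily ingest-then-transform pipeline (all names and paths are hypothetical).
with DAG(
    dag_id="daily_orders_ingest",
    default_args=default_args,
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    # Pull the day's rows from MySQL into HDFS with Sqoop.
    sqoop_import = BashOperator(
        task_id="sqoop_import",
        bash_command=(
            "sqoop import --connect jdbc:mysql://dbhost/sales "
            "--table orders --target-dir /data/raw/orders/{{ ds }} -m 4"
        ),
    )

    # Register the new partition in Hive once the import lands.
    hive_add_partition = BashOperator(
        task_id="hive_add_partition",
        bash_command=(
            "hive -e \"ALTER TABLE orders ADD IF NOT EXISTS "
            "PARTITION (ds='{{ ds }}') LOCATION '/data/raw/orders/{{ ds }}'\""
        ),
    )

    sqoop_import >> hive_add_partition
```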
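The regex work over Hive data looked roughly like the PySpark sketch below; the table, columns, and log layout are hypothetical stand-ins, not the actual project schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("regex-extract-example")
    .enableHiveSupport()              # read straight from the Hive metastore
    .getOrCreate()
)

# Hypothetical Hive table of raw web logs with a single 'line' column.
logs = spark.sql("SELECT line FROM web_logs_raw")

# Pull the client IP and HTTP status code out of each log line.
ip_re = r"^(\d{1,3}(?:\.\d{1,3}){3})"
status_re = r"\s(\d{3})\s"

parsed = logs.select(
    F.regexp_extract("line", ip_re, 1).alias("client_ip"),
    F.regexp_extract("line", status_re, 1).cast("int").alias("status"),
)

# Simple downstream aggregation: server errors per client.
errors = (
    parsed.filter(F.col("status") >= 500)
          .groupBy("client_ip")
          .count()
          .orderBy(F.desc("count"))
)
errors.show(20, truncate=False)
```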
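The WebHDFS-backed file system operations follow the standard redirect-based REST flow. A hedged sketch using the requests library, with a hypothetical NameNode host and simple user.name authentication (a Kerberized cluster would need SPNEGO instead):

```python
import requests

NAMENODE = "http://namenode.example.com:50070"   # hypothetical NameNode address
USER = "hdfs"

def webhdfs_url(path, op, **params):
    """Build a WebHDFS URL for the given path and operation."""
    params.update({"op": op, "user.name": USER})
    query = "&".join(f"{k}={v}" for k, v in params.items())
    return f"{NAMENODE}/webhdfs/v1{path}?{query}"

# Create a directory.
requests.put(webhdfs_url("/tmp/demo", "MKDIRS")).raise_for_status()

# Write a file: the NameNode first redirects to a DataNode, which accepts the data.
resp = requests.put(webhdfs_url("/tmp/demo/hello.txt", "CREATE", overwrite="true"),
                    allow_redirects=False)
requests.put(resp.headers["Location"], data=b"hello webhdfs\n").raise_for_status()

# Read the file back (requests follows the OPEN redirect automatically).
print(requests.get(webhdfs_url("/tmp/demo/hello.txt", "OPEN")).text)

# List, then delete, the directory.
print(requests.get(webhdfs_url("/tmp/demo", "LISTSTATUS")).json())
requests.delete(webhdfs_url("/tmp/demo", "DELETE", recursive="true")).raise_for_status()
```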
Environment: Cloudera, Apache Hadoop, HDFS, YARN, Cloudera Manager, Sqoop, Flume, Oozie, Zookeeper, Kerberos, Sentry, AWS, Pig, Spark, Hive, Docker, HBase, Python, LDAP/AD, NoSQL, Exadata Machines X2/X3, Toad, MySQL, PostgreSQL, Teradata.
Confidential - Louisville, KY
Hadoop Big Data Engineer
Responsibilities:
- Responsible for designing highly scalable big data clusters to support various data storage and computation needs across varied big data platforms: Hadoop, Cassandra, MongoDB, and Elasticsearch.
- Responsible for installing, configuring, supporting, and managing Cloudera Hadoop clusters.
- Installed a Kerberos-secured Kafka cluster with no encryption for a POC and also set up Kafka ACLs.
- Created a NoSQL solution for a legacy RDBMS using Kafka, Spark, Solr, and HBase Indexer for ingestion, and Solr and HBase for real-time querying.
- Experienced in administering, installing, upgrading, and managing distributions of Hadoop clusters with MapR 5.1 on a cluster of 200+ nodes in different environments such as Development, Test, and Production (Operational & Analytics).
- Troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
- Worked on implementing a NoSQL database Cassandra cluster.
- Extensively worked on Elasticsearch querying and indexing to retrieve documents at high speed.
- Developed Spark scripts using Python on Azure HDInsight for data aggregation and validation, and verified their performance against MR jobs.
- Experience in automation of code deployment across multiple cloud providers such as Amazon Web Services, Microsoft Azure, Google Cloud, VMware, and OpenStack.
- Involved in migrating on-premises data to AWS.
- Installed, configured, and maintained several Hadoop clusters which includes HDFS, YARN, Hive, HBase, Knox, Kafka, Oozie, Ranger, Atlas, Infra Solr, Zookeeper, and Nifi in Kerberized environments.
- Used Python for pattern matching in build logs to format errors and warnings (see the log-parsing sketch after this list).
- Developed simple to complex MapReduce jobs using Hive, Pig, and Python.
- Installed and configured Hadoop, MapReduce, and HDFS (Hadoop Distributed File System), and developed multiple MapReduce jobs in Java for data cleaning.
- Experience in managing the Hadoop cluster with IBM Big Insights, Hortonworks Distribution Platform.
- Regular maintenance of commissioned/decommissioned nodes as disk failures occur, using the MapR file system.
- Worked on setting up the Hadoop ecosystem and a Kafka cluster on AWS EC2 instances.
- Worked on installing cluster, commissioning & decommissioning of Data Nodes, Name Node recovery, capacity planning, and slots configuration in MapR Control System (MCS).
- Experience with innovative and, where possible, automated approaches to system administration tasks.
- Worked on setting up high availability for the major production cluster and designed automatic failover control using Zookeeper and quorum journal nodes.
- Used Spark Streaming to divide streaming data into batches as input to the Spark engine for batch processing. Mentored the EQM team in creating Hive queries to test use cases.
- Sqoop configuration of JDBC drivers for the respective relational databases: controlling parallelism, controlling the distributed cache, controlling the import process, compression codecs, importing data to Hive and HBase, incremental imports, configuring saved jobs and passwords, the free-form query option, and troubleshooting.
- Collection and aggregation of large amounts of streaming data into HDFS using Flume: configuration of multiple agents, Flume sources, sinks, channels, and interceptors; defined channel selectors to multiplex data into different sinks and configured log4j properties.
- Responsible for implementation and ongoing administration of MapR 4.0.1 infrastructure.
- Maintained the operations, installation, and configuration of a 150+ node cluster with the MapR distribution.
- Monitoring the health of the cluster and setting up alert scripts for memory usage on the edge nodes.
- Experience with Linux systems administration on production and development servers (Red Hat Linux, CentOS, and other UNIX utilities). Worked on NoSQL databases like HBase and created Hive tables on top.
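A small sketch of the build-log pattern matching mentioned above; the log format, levels, and paths are hypothetical:

```python
import re
import sys
from collections import Counter

# Assumed log layout: "<date> <time> LEVEL message ..."
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+)\s+(?P<level>ERROR|WARN)\s+(?P<msg>.*)$")

def summarize(log_path):
    """Print normalized ERROR/WARN lines and return counts per level."""
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            match = LINE_RE.match(line)
            if not match:
                continue
            counts[match.group("level")] += 1
            print(f"[{match.group('level')}] {match.group('ts')} :: "
                  f"{match.group('msg').strip()}")
    return counts

if __name__ == "__main__":
    totals = summarize(sys.argv[1])
    print(f"\nwarnings={totals['WARN']} errors={totals['ERROR']}")
```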
Environment: HBase, Hadoop 2.2.4, Hive, Kerberos, Kafka, YARN, Spark, Impala, Solr, Hadoop cluster, HDFS, Ambari, Ganglia, Red Hat, Windows, MapR, Sqoop, Cassandra.
Confidential - New York, NY
Hadoop Cloudera Administrator
Responsibilities:
- Installing and working on Hadoop clusters for different teams; supported 50+ users of the Hadoop platform, resolved tickets and issues they ran into, and provided training to users to make Hadoop usability simple, updating them on best practices.
- Installed/Configured/Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
- Installed Cloudera Manager on Oracle Big Data Appliance to help with CDH operations.
- Involved in collecting and aggregating large amounts of log data using Apache Flume and staging data in HDFS for further analysis.
- Upgraded the Hadoop cluster from CDH 5.8 to CDH 5.9.
- Worked on Installing cluster, Commissioning & Decommissioning of DataNodes, NameNode Recovery, Capacity Planning, and Slots Configuration.
- Created collections within Apache Solr and installed the Solr service through the Cloudera Manager installation wizard.
- Managing and scheduling Jobs on Hadoop Clusters using Apache, Cloudera (CDH5.7.0, CDH5.10.0) distributions.
- Successfully upgraded Cloudera Distribution of Hadoop distribution stack from 5.7.0 to 5.10.0.
- Installed and configured a Cloudera Distribution of Hadoop (CDH) manually through the command line.
- Created graphs for each HBase table in Cloudera based on writes, reads, and file size in the respective dashboards.
- Exported Cloudera logs and created dashboards in Grafana using the JMX exporter and Prometheus.
- Installed and configured Hadoop cluster across various environments through Cloudera Manager.
- Managing, monitoring, and troubleshooting the Hadoop cluster using Cloudera Manager.
- Working on Oracle Big Data SQL; integrated big data analysis into existing applications.
- Used Oracle Big Data Appliance for Hadoop and NoSQL processing and integrated data in Hadoop and NoSQL with data in Oracle Database.
- Worked on installing Cloudera Manager and CDH, installing the JCE policy files, creating a Kerberos principal for the Cloudera Manager Server, and enabling Kerberos using the wizard.
- Monitored cluster for performance, networking, and data integrity issues.
- Responsible for troubleshooting issues in the execution of MapReduce jobs by inspecting and reviewing log files.
- Used NiFi to ping Snowflake to keep the client session alive.
- Created Snowpipe for continuous data load.
- Installed the OS and administered the Hadoop stack with the CDH 5.9 (with YARN) Cloudera distribution, including configuration management, monitoring, debugging, and performance tuning.
- Supported MapReduce Programs and distributed applications running on the Hadoop cluster.
- Scripting Hadoop package installation and configuration to support fully-automated deployments.
- Designing, developing, and ongoing support of data warehouse environments.
- Deployed the Hadoop cluster using Kerberos to provide secure access to the cluster.
- Converted MapReduce programs into Spark transformations using Spark RDDs and Scala.
- Performed maintenance, monitoring, deployments, and upgrades across the infrastructure that supports all our Hadoop clusters.
- Created Hive external tables, loaded data into them, and queried the data using HQL (see the Hive sketch after this list).
- Worked with application teams to install operating system, Hadoop updates, patches, version upgrades as required.
- Worked on Hive for exposing data for further analysis and for transforming files from different analytical formats to text files.
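The external-table work boils down to HQL like the following; it is wrapped in a Hive-enabled PySpark session here only so the sketch is self-contained and runnable (table name, schema, and HDFS location are hypothetical):

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-external-table-example")
    .enableHiveSupport()
    .getOrCreate()
)

# External table over files already in HDFS; dropping the table leaves the data.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS web_events (
        event_time STRING,
        user_id    STRING,
        url        STRING
    )
    PARTITIONED BY (ds STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    STORED AS TEXTFILE
    LOCATION '/data/warehouse/web_events'
""")

# Register a day's worth of data, then query it with ordinary HQL.
spark.sql("ALTER TABLE web_events ADD IF NOT EXISTS PARTITION (ds='2017-06-01')")
spark.sql("""
    SELECT url, COUNT(*) AS hits
    FROM web_events
    WHERE ds = '2017-06-01'
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10
""").show(truncate=False)
```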
Environment: MapReduce, Hive 0.13.1, PIG 0.16.0, Sqoop 1.4.6, Spark 2.1, Oozie 4.1.0, Flume, HBase 1.0, Cloudera Manager 5.9, Oracle Server X6, SQL Server, Solr, Zookeeper 3.4.8, Cloudera 5.8, Kerberos and RedHat 6.5.
Confidential, Boston, MA
Hadoop Administrator
Responsibilities:
- Responsible for building scalable distributed data solutions using Hadoop
- Resource management of the Hadoop cluster, configuring the cluster with optimal parameters.
- Performing day-to-day activities such as upgrades, applying patches, and adding/removing nodes from the cluster for maintenance and capacity needs.
- Responsible for monitoring the Hadoop cluster using Nagios.
- Performed benchmarking of Kafka cluster to measure the performance and resource considerations and tuning the cluster for optimal performance.
- Worked with tuning and configuring various parameters to maintain High Availability and consistency targets of the cluster.
- Implemented Apache Centri for access control and authorization.
- Extensively worked on managing Kafka logs for traceability and debugging.
- Worked on designing, implementing and managing Secure Authentication mechanism to Hadoop Cluster with Kerberos.
- Working on Centri to enable metadata management, governance, and audit.
- Performed backup of metadata at regular intervals and other maintenance activities such as balancing the cluster, and HDFS health check.
- Continuous monitoring and managing the Hadoop cluster using Cloudera Manager.
- Responsible for maintaining the clusters in different environments.
- Involved in the upgrade process of the Hadoop cluster from CDH4 to CDH5.
- Installed and configured Flume, Oozie on the Hadoop cluster.
- Managing, defining and scheduling Jobs on a Hadoop cluster.
- Worked on installing the cluster, commissioning and decommissioning of DataNodes, NameNode recovery, capacity planning, and slots configuration.
- Involved in performance tuning of various Hadoop ecosystem components like YARN and MRv2.
- Worked with different file formats such as Text, SequenceFile, Avro, ORC, and Parquet.
- Installed and configured Spark on Yarn.
- Implemented indexing of Oozie logs into Elasticsearch (see the indexing sketch after this list).
- Analysis of integrating Kibana with Elasticsearch.
- Monitoring the log flow from LM Proxy to elasticsearch-head.
- Responsible for managing data coming from different sources.
- Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
- Experience in managing and reviewing Hadoop log files.
- Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
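A hedged sketch of the Oozie-log indexing with the official Elasticsearch Python client; the host, index name, and log layout are hypothetical, and the doc_type/body form matches the 5.x/6.x-era client (newer clients drop doc_type):

```python
import re
from datetime import datetime

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://es-node1.example.com:9200"])   # hypothetical ES node

# Assumed Oozie log layout: "<date> <time> LEVEL message ..."
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+)\s+(?P<level>\w+)\s+(?P<msg>.*)$")

def index_oozie_log(log_path, workflow_id):
    """Index one document per parsed log line so Kibana can search over them."""
    with open(log_path) as fh:
        for line in fh:
            match = LINE_RE.match(line)
            if not match:
                continue
            doc = {
                "workflow_id": workflow_id,
                "level": match.group("level"),
                "message": match.group("msg").strip(),
                "logged_at": match.group("ts"),
                "indexed_at": datetime.utcnow().isoformat(),
            }
            es.index(index="oozie-logs", doc_type="log", body=doc)

index_oozie_log("/var/log/oozie/oozie.log", "example-workflow-id")  # hypothetical
```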
Environment: Cloudera Manager, Ambari, Hadoop, Nagios, Zabbix, Spark, Kafka, Storm, Shark, Hive, Pig, Sqoop, MapReduce, Oracle, Teradata, SAS, Log4j, JUnit, MRUnit, SVN, JIRA.
Confidential - Cambridge, MA
Hadoop Linux Administrator
Responsibilities:
- Experience in server builds using the JumpStart, NIM, Ignite, and Kickstart processes.
- Involved in Designing, Planning, Administering, Installation, Configuring, Updating, Troubleshooting, Performance monitoring and Fine-tuning of Hadoop cluster.
- Collecting requirements from clients, analyzing them, and finding a solution to set up the Hadoop cluster environment.
- Conducted meetings with team members and assigned work to each of them. Reported to the manager weekly on work progress.
- Planned data topology, rack topology, and resource availability for users; gathered requirements for migrating users to production; implemented data migration from the existing staging cluster to the production cluster; and proposed effective Hadoop solutions to meet specific customer requirements.
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
- Worked with Logical Volume Manager, creating volume groups and logical volumes, and performed Red Hat Linux kernel tuning.
- Running different jobs on a daily basis to test the issues and improve the performance.
- Monitoring, managing, and reviewing Hadoop log files; conducting performance tuning, capacity management, and root cause analysis on failed components and implementing corrective measures.
- Setting up the cluster size and memory size based on requirements, setting up queues to run jobs based on capacities and node labels, and enabling them for the job queues to run.
- Running job configurations with a combination of default, per-site, per-node, and per-job configuration.
- Performing minor and major upgrades, commissioning and decommissioning of nodes on Hadoop cluster.
- Work with network and Linux system engineers to define optimum network configurations, server hardware and operating system.
- Created HBase tables to store various data formats of data coming from different portfolios.
- Implemented the Kerberos security software to CDH cluster for user level as well as service level to provide strong security to the cluster.
- Troubleshooting, diagnosing, tuning, and solving Hadoop issues.
- Setting up the racks to improve the HDFS availability and increase the cluster performance.
- Limiting the creation and addition of HDFS files and folders by setting up HDFS quotas (see the quota sketch after this list).
- Tracking and protecting sensitive data access: who is accessing what data and what they are doing with it.
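A small sketch of the quota setup, wrapping the standard hdfs dfsadmin commands in Python so the same limits can be applied to several user directories at once (paths and limits are hypothetical):

```python
import subprocess

# Hypothetical user directories -> (max file/dir count, max raw space).
QUOTAS = {
    "/user/etl":     (100000, "5t"),
    "/user/analyst": (50000, "1t"),
}

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

for path, (name_quota, space_quota) in QUOTAS.items():
    # Cap the number of files/directories that can be created under the path.
    run(["hdfs", "dfsadmin", "-setQuota", str(name_quota), path])
    # Cap the raw disk space (replication included) the path may consume.
    run(["hdfs", "dfsadmin", "-setSpaceQuota", space_quota, path])

# Verify: -count -q prints the quotas and remaining headroom for each path.
run(["hdfs", "dfs", "-count", "-q", "-h"] + list(QUOTAS.keys()))
```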
Environment: Hadoop, Map Reduce, Hive, HDFS, PIG, Sqoop, Solr, Flume, HBase, and Unix/Linux, Hue (Beeswax).
Confidential
Linux System Administrator
Responsibilities:
- Installation, configuration and administration of Linux (RHEL 4/5) and Solaris 8/9/10 servers.
- Maintained and supported mission-critical front-end and back-end production environments.
- Configured RedHat Kickstart server for installing multiple production servers.
- Provided Tier 2 support to issues escalated from Technical Support team and interfaced with development teams, software vendors, or other Tier 2 teams to resolve issues.
- Installing and partitioning disk drives. Creating, mounting and maintaining file systems to ensure access to system, application and user data.
- Maintenance and installation of RPM and YUM packages and other server software.
- Creating users, assigning groups and home directories, setting quota and permissions; administering file systems and recognizing file access problems.
- Experience in managing and scheduling cron jobs, such as enabling system logging and network logging of servers, for maintenance, performance tuning, and testing.
- Maintaining appropriate file and system security, monitoring and controlling system access, changing permission, ownership of files and directories, maintaining passwords, assigning special privileges to selected users and controlling file access.
- Extensive use of LVM, creating Volume Groups, Logical volumes.
- Performed various configurations, including networking and iptables, resolving hostnames, and SSH passwordless login.
- Build, configure, deploy, support, and maintain enterprise class servers and operating systems.
- Building CentOS 5/6 servers, Oracle Enterprise Linux 5, and RHEL 4/5 servers from scratch.
- Organized the patch depots and acted as the POC for patch-related issues.
- Configuration and installation of Red Hat Linux 5/6, CentOS 5, and Oracle Enterprise Linux 5 using Kickstart to reduce installation issues.
- Attended team meetings, change control meetings to update installation progress and for upcoming changes in environment.
- Handled patch upgrades and firmware upgrades on RHEL servers and Oracle Enterprise Linux servers.
- User and Group administration on RHEL Systems.
- Creation of various user profiles and environment variables to ensure security.
- Server hardening and security configurations as per the client specifications.
- A solid understanding of networking/distributed computing environment concepts, including principles of routing, bridging and switching, client/server programming, and the design of consistent network-wide file system layouts.
- Strong understanding of Network Infrastructure Environment.
Environment: Red Hat Linux, AIX, RHEL, Oracle 9i/10g, Samba, NT/2000 Server, VMware 2.x, Tomcat 5.x, Apache Server.