
Senior Hadoop Administrator Resume


Pleasanton, CA

PROFESSIONAL SUMMARY

  • 8+ years of experience in the IT field, including 5 years of Hadoop Administration across diverse industries, with hands-on experience in Big Data ecosystem technologies.
  • Extensive knowledge and experience in Big Data with MapReduce, HDFS, Hive, Pig, Impala, Sentry, and Sqoop.
  • Good knowledge of Hadoop architecture and components such as HDFS, JobTracker, TaskTracker, ResourceManager, NameNode, DataNode, and MapReduce (MRv1 and YARN) concepts.
  • Excellent understanding of Hadoop architecture and underlying framework including storage management.
  • Hands-on experience in installation, configuration, management, and development of big data solutions using MapR, Azure, Cloudera (CDH4, CDH5), and Hortonworks distributions.
  • Good experience with design, management, configuration, and troubleshooting of distributed production environments based on Apache Hadoop, HBase, etc.
  • Production experience in large environments using configuration management tools Chef and Puppet.
  • Experience in building new OpenStack Deployment through Puppet and managing them in production environment.
  • Working experience in designing and implementing complete end-to-end Hadoop infrastructure.
  • Set up automated 24x7 monitoring and escalation infrastructure for Hadoop clusters using Nagios and Ganglia.
  • In-depth knowledge of the modifications required for cluster setup and maintenance: static IP configuration (interfaces), hosts file entries, password-less SSH, and Hadoop configuration.
  • Experienced in using Sqoop to import data from RDBMS into HDFS and vice versa (a brief sketch appears after this list).
  • Responsible for collecting information from, and configuring, network devices such as servers, printers, hubs, switches, and routers on an Internet Protocol (IP) network.
  • Experience in understanding the security requirements for Hadoop and integrating with Kerberos authentication infrastructure: KDC server setup and creating realms/domains.
  • Experience in administering, installing, configuring, troubleshooting, securing, backing up, performance monitoring, and fine-tuning Red Hat Linux and CentOS.
  • Good understanding of HDFS design, daemons, and HDFS high availability (HA).
  • Implemented a Continuous Integration and Continuous Delivery (CI/CD) framework using Jenkins, Puppet, Maven & Nexus in a Linux environment.
  • Integration of Maven/Nexus, Jenkins, UrbanCode Deploy with Patterns/Release, Git, Confluence, Jira, and Cloud Foundry.
  • Extensive experience in data analysis using tools like Syncsort and HZ along with Shell Scripting and UNIX.
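A minimal sketch of the Sqoop import/export flow referenced above, assuming a MySQL source; the host, database, table names, and HDFS paths are illustrative placeholders rather than details from any engagement:

    # Import an RDBMS table into HDFS as delimited text, using 4 parallel mappers.
    sqoop import \
      --connect jdbc:mysql://dbhost.example.com:3306/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

    # Export aggregated results from HDFS back into an RDBMS reporting table.
    sqoop export \
      --connect jdbc:mysql://dbhost.example.com:3306/reporting \
      --username etl_user -P \
      --table daily_order_totals \
      --export-dir /data/curated/daily_order_totals \
      --input-fields-terminated-by '\t'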

TECHNOLOGY SKILLS:

Big Data Technologies: HDFS, Hive, MapReduce, Cassandra, Pig, HCatalog, Phoenix, Falcon, Sqoop, Flume, ZooKeeper, Mahout, Oozie, Avro, HBase, Storm, CDH 5.3, CDH 5.4

Scripting Languages: Shell Scripting, Puppet, Python, Bash, CSH, Ruby, PHP

Databases: Oracle 11g, MySQL, MS SQL Server, HBase, Cassandra, MongoDB

Networks: HTTP, HTTPS, FTP, UDP, TCP/IP, SNMP, SMTP

Monitoring Tools: Cloudera Manager, Solr, Ambari, Nagios, Ganglia

Application Servers: Apache Tomcat, WebLogic Server, WebSphere

Security: Kerberos, Ranger, LDAP

Reporting Tools: Cognos, Hyperion Analyzer, OBIEE & BI+

Analytic Tools: Elasticsearch, Logstash, Kibana (ELK)

PROFESSIONAL EXPERIENCE:

Senior Hadoop Administrator

Confidential, Pleasanton, CA

Roles & Responsibilities:

  • Primary participant in cluster installation and maintenance, cluster upgrades, patch management, and manual installation of Cloudera Manager; set up and configured High Availability; handled day-to-day operational activities such as monitoring critical parts of the cluster, adding/decommissioning data nodes, resolving service-related issues, tuning services (YARN, Kafka, Impala, Spark, Hive), configuration checks, user access management, HDFS support and maintenance, role addition, code and data migration, metadata backup, and capacity planning.
  • POCs:
  • Hadoop evaluation POC to evaluate Hadoop capabilities as a better, faster, more cost-efficient, and deeper way to source data and support discovery analytics.
  • Hadoop POC Development Engagement
  • Performance Benchmarking Report
  • Functional Design Deliverable
  • POC Space Sizing, Software & Hardware Requirements
  • Used Teradata Grid to move data between Teradata and Hadoop and Aster platforms.
  • Runbook and Load Routines
  • Performed a Pepperdata POC to get real-time reallocation of resources beyond what YARN can provide, allowing the creation of additional YARN containers; it throttles non-high-priority jobs to ensure that high-priority job SLAs are met.
  • Hadoop Cluster Installation and Maintenance:
  • Worked on installing and configuring the cluster, adding new nodes to an existing cluster, safely decommissioning nodes (e.g., data node hang issues), recovering from NameNode failures (NameNode down and memory issues), monitoring cluster health using Teradata Viewpoint and Ganglia, capacity planning, and tuning MapReduce job parameters and slot configuration.
  • Installed and configured CDH 5.5.2 on AWS (Amazon Web Services).
  • Primary participant in Hadoop cluster planning, including identifying the right hardware, network considerations, and node configuration.
  • Participated in upgrading cluster, which involved coordination with all ETL application teams, working with System Engineer and performed pre and post checks.
  • Responsible for cluster maintenance, including copying data between clusters, adding and removing cluster nodes, checking HDFS status, and rebalancing the cluster.
  • Installed Cloudera Manager and used its features such as configuration management, service management, resource management, reports, alerts, and aggregated logging; also involved in Hadoop (CDH) installation.
  • Handled Python errors while upgrading the Cloudera Manager 5.5.2 agent.
  • High Availability:
  • Worked on setting up high availability (HA) for major production cluster
  • Replaced a Standby NameNode in a Running HDFS HA cluster
  • Created multiple instances of the Hue service for HA
  • Handled Oozie/Hue HA issues and changed Name Service ID for HDFS HA
  • Setup multiple HiveServer2 (HS2) instances behind a Load Balancer for HA
  • Enabled Hue HA and load-balanced Hue queries between both servers.
  • Security:
  • Primary participant in integrating the Enterprise Data Lake (EDL) with LDAP/Kerberos for authentication and in setting up authorization via Sentry roles and ACL permissions specific to the business application to which each user/service account belongs.
  • Used Cloudera Manager to configure and enable Kerberos and to set up Kafka in the Kerberized cluster. Fixed multiple Kerberos issues, such as Kerberos errors in Hue logs, a Kerberos role error after an Impala upgrade, and Kerberos errors when connecting to the Hive Metastore.
  • Machine Learning:
  • Designed and developed a scalable statistical machine learning framework using localized linear/logistic regression, autoencoders, and decision trees on top of Topological Data Analysis.
  • Kafka:
  • Handled many Kafka issues, including Kafka and ZooKeeper services not coming up after a patch upgrade to CDH 5.7.1 and being unable to produce messages from the kafka-console-producer tool due to a Kafka exception error.
  • HUE:
  • Used Hue applications to browse HDFS and jobs, manage the Hive Metastore, run Hive and Cloudera Impala queries and Pig scripts, browse HBase, export data with Sqoop, submit MapReduce programs, build custom search engines with Solr, and schedule repetitive workflows with Oozie; tested Oozie workflows via Hue.
  • Enabled Django Debug mode for Hue to handle different Hue related issues
  • Resolved Hue-related issues, including slowness, hangs, login timeouts, and poor concurrent performance of the Hue web server.
  • Scheduled the Hue history cron script via a cron job to clean up old data in the Oozie and Beeswax Hue tables.
  • Metadata (MySQL) Backup and recovery:
  • Primary participant in setting up MySQL in a master-slave format on each of the EDL environments.
  • Set up a backup job that backs up the master data daily via a housekeeping job (Python scripting) scheduled in crontab, sending an email notification on success/failure to the EDL team. Once the backup completes, the copy is replicated to one of the data nodes for increased availability (a bash sketch of the equivalent logic appears after this list).
  • Patch Management:
  • Participated in evaluation of the severity of the patch by working with the Cloudera System Engineer (SE)
  • Participated in decision-making on whether a patch was required.
  • Based on patch severity (critical/non-critical), deployed the necessary software within the change management window deadlines.
  • Alerting and Monitoring:
  • Set up an alerting mechanism integrated with Cloudera Manager, with alert emails for any change in the health status of cluster services or for configuration/security-related changes, scheduled to be sent to the EDL Administration team.
  • Added new alerts in a timely manner based on business requirements.
  • Monthly Log Review: Performed a monthly review of audit logs and security-related events from the Navigator audit logs and other security/audit logs to look for any suspicious user or admin activity on the cluster, and reported findings to management for appropriate action.
  • Co-ordination with Application Leads: Worked closely with App Leads in addressing the impacts reported by the alert emails
  • Python Scripting:
  • Built a ‘Transporter’ migration tool to migrate files and new objects from DEV to QA and QA to PRD environments.
  • Automated Housekeeping/Audit jobs using Python Scripting
  • HDFS & Tools:
  • Worked on HDFS management, the upgrade process, and rack management; factored in NameNode memory considerations; and was involved in HDFS security enhancements.
  • Configured Flume for efficiently collecting, aggregating and moving large amounts of log data from many different sources to HDFS
  • Installed and configured Hive, Impala, MapReduce, and HDFS
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs and responsible for Oozie job scheduling and maintenance
  • Resolved Class loader issues with parquet-scala
  • Used Hive and Python to clean and transform geographical event data.
  • Used Pig and a Pig user-defined filter function (UDF) to remove all non-human traffic from a sample web server log dataset and used Pig to sessionize web server log data.
  • Troubleshooting user issues on services such as Spark, Kafka, Hue, Solr, HBase, Hive, and Impala
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team
  • Configured Spark with the Python APIs.
  • Implemented UDFs, UDAFs, and UDTFs in Java for Hive to process data in ways that Hive's built-in functions cannot.
  • Designed and implemented Pig UDFs for evaluating, filtering, loading, and storing data.
  • Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several recurring workflows.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting
  • Used Ganglia and Nagios to monitor system health and give reports to management
  • HBase:
  • Diagnosing and fixing performance problems on HBase Cluster.
  • Troubleshooting HBase problems and fine-tuned HBase configurations
  • Security enhancements, Authentication and Access Control to HBase
  • Troubleshooting and Optimization:
  • Checked configuration files, error messages, and Java exceptions to troubleshoot cluster startup, data node, and task tracker issues.
  • Used fsck (the file system checker) to check for corrupt data node blocks and to clean up unwanted blocks (example commands after this list).
  • Tweaked the mapred configuration file to enhance job performance.
  • Tuned Hadoop Configuration files to fix performance issues such as swapping and CPU Saturation issues
  • Assisted Application developer in fixing the application code using Python scripting.
  • Addressed multiple execution problems of Python based streaming jobs
  • Design & Documentation:
  • Documented EDL (Enterprise Data Lake) best practices and standards, covering data management (file formats such as Avro and Parquet, compression, partitioning, bucketing, de-normalizing) and data movement (data ingestion, file transfers, RDBMS, streaming data/log files, data extraction, data processing, MapReduce, Hive, Impala, Pig, HBase).
  • Documented the EDL overview and the services offered on the EDL platform, including environments, data storage, data ingestion, data access, security, and indexing.
  • Primary participant in designing EDL Migration form
  • Worked with the management in defining and documenting the EDL onboarding process.
  • Tested and documented the step-by-step process for application users on how to download, install, run, and test the Cloudera Impala ODBC driver and connect it to Tableau.
  • Tested and documented the step-by-step process for creating an HDFS connection, Teradata connection, Hive connection, HDFS file object, Hive table object, and Teradata object, and for pushdown (load balancing) optimization using Hive.

Technical Environment: CDH 4.5 to 5.5.2, RHEL, Cloudera Manager 5.5.3, Cloudera Navigator, YARN, Impala 2.1.2, Hive, Hue, Kafka, Sqoop, Pig, HBase, Avro.
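The metadata backup described in the bullets above was a Python housekeeping job scheduled in crontab; the bash sketch below shows the equivalent logic under assumed hostnames, paths, and mail addresses (all placeholders):

    #!/usr/bin/env bash
    # Illustrative sketch only; the production job was a Python script run from cron,
    # e.g. "30 1 * * * /opt/edl/bin/metadata_backup.sh" for a daily 01:30 run.
    set -euo pipefail

    BACKUP_DIR=/backup/mysql/$(date +%F)
    REMOTE_NODE=datanode01.example.com        # placeholder data node for the second copy
    NOTIFY=edl-admins@example.com             # placeholder notification list

    mkdir -p "$BACKUP_DIR"

    if mysqldump --single-transaction --all-databases \
           --user=backup_user --password="$(cat /etc/mysql_backup.pass)" \
           | gzip > "$BACKUP_DIR/metadata_$(date +%F).sql.gz" \
       && scp "$BACKUP_DIR"/metadata_*.sql.gz "$REMOTE_NODE:/backup/mysql/"
    then
        echo "Backup written to $BACKUP_DIR and copied to $REMOTE_NODE" \
            | mail -s "EDL metadata backup OK" "$NOTIFY"
    else
        echo "Backup failed on $(hostname); check $BACKUP_DIR" \
            | mail -s "EDL metadata backup FAILED" "$NOTIFY"
        exit 1
    fi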
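Example usage of the HDFS fsck checks mentioned in the troubleshooting bullets; the paths are placeholders:

    # Report overall namespace health and list any files with corrupt blocks.
    hdfs fsck / -list-corruptfileblocks

    # Inspect a specific directory, showing its files, blocks, and block locations.
    hdfs fsck /data/raw -files -blocks -locations

    # Either move unrecoverable files to /lost+found or delete them outright
    # (destructive; run only after the corruption report has been reviewed).
    hdfs fsck / -move
    hdfs fsck / -delete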

Hadoop Infrastructure Administrator

Confidential, San Ramon, CA

Roles & Responsibilities:

  • Worked on the CIP Rating platform (4 clusters: Data Acquisition, Anomaly Detection, Rating & ML).
  • Worked on the Data Lake project, a multi-tenant analytics platform where different small businesses bring their data and work on different use cases.
  • Worked on the Hadoop stack, ETL tools like Talend, reporting tools like Tableau, security with Kerberos, user provisioning with LDAP, and many other Big Data technologies for multiple use cases.
  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning data nodes, troubleshooting, cluster planning, managing and reviewing data backups, and managing and reviewing log files.
  • Worked with the Data Science team to gather requirements for various data mining projects.
  • Installed 5 Hadoop clusters for different teams and developed a data lake that serves as a base layer to store data and run analytics for developers; provided services to developers, installed their custom software, upgraded Hadoop components, resolved their issues, and helped them troubleshoot long-running jobs; acted as L3 and L4 support for the data lake and also managed clusters for other teams.
  • Built automation frameworks for data ingestion and processing in Python and Scala with NoSQL and SQL databases, using Chef, Puppet, Kibana, Elasticsearch, Tableau, GoCD, and Red Hat infrastructure for ingestion, processing, and storage.
  • Served in a combined DevOps/Hadoop admin role: worked on L3 issues, installed new components as requirements came in, automated as much as possible, and implemented a CI/CD model.
  • Involved in implementing security on the Hortonworks Hadoop cluster with Kerberos, working with the operations team to move the non-secured cluster to a secured cluster.
  • Responsible for upgrading Hortonworks Hadoop HDP 2.2.0 and MapReduce 2.0 with YARN in a multi-node clustered environment. Handled importing data from various sources, performed transformations using Hive, MapReduce, and Spark, and loaded the data into HDFS.
  • Set up Hadoop security using MIT Kerberos, AD integration (LDAP), and Sentry authorization (a brief sketch appears after this list).
  • Migrated services from a managed hosting environment to AWS including: service design, network layout, data migration, automation, monitoring, deployments and cutover, documentation, overall plan, cost analysis, and timeline.
  • Used R for effective data handling and storage.
  • Managed Amazon Web Services (AWS) infrastructure with automation and configuration management tools such as Chef, Ansible, Puppet, or custom-built tooling; designed cloud-hosted solutions with specific AWS product suite experience.
  • Performed a major upgrade of the production environment from HDP 1.3 to HDP 2.2; as an admin, followed standard backup policies to ensure high availability of the cluster.
  • Monitored multiple Hadoop clusters environments using Ganglia and Nagios. Monitored workload, job performance and capacity planning using Ambari. Installed and configured Hortonworks and Cloudera distributions on single node clusters for POCs.
  • Created Teradata Database Macros for Application Developers which assist them to conduct performance and space analysis, as well as object dependency analysis on the Teradata database platforms
  • Implemented a Continuous Delivery framework using Jenkins, Puppet, Maven & Nexus in a Linux environment. Integrated Maven/Nexus, Jenkins, UrbanCode Deploy with Patterns/Release, Git, Confluence, Jira, and Cloud Foundry.
  • Defined Chef Server and workstation to manage and configure nodes.
  • Experience in setting up the chef-repo, Chef workstations, and Chef nodes.
  • Involved in running Hadoop jobs to process millions of text records. Troubleshot build issues during the Jenkins build process. Implemented Docker to create containers for Tomcat servers and Jenkins.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Involved in leading Automation Deployment Team by working with Puppet.
  • Configured and built OpenStack Havana and Icehouse using Ansible and Puppet scripts.
  • Established business-centric data checks, such as comparing daily transaction and dollar volumes between the loaded DW data and Cognos reports.
  • Developed Python, Shell/Perl, and PowerShell scripts for automation purposes.
  • Used ServiceNow and JIRA to track issues; managed and reviewed log files as part of administration for troubleshooting purposes, meeting SLAs on time.
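A minimal sketch of the MIT Kerberos setup work noted above (creating a service principal and keytab, then validating access on the secured cluster); the realm, host, and keytab path are assumed placeholders:

    # Create a service principal for a worker host and export its keytab on the KDC.
    kadmin -p admin/admin@EXAMPLE.COM -q "addprinc -randkey hdfs/worker01.example.com@EXAMPLE.COM"
    kadmin -p admin/admin@EXAMPLE.COM -q "ktadd -k /etc/security/keytabs/hdfs.service.keytab hdfs/worker01.example.com@EXAMPLE.COM"

    # Verify the keytab, obtain a ticket, and confirm HDFS access.
    klist -kt /etc/security/keytabs/hdfs.service.keytab
    kinit -kt /etc/security/keytabs/hdfs.service.keytab hdfs/worker01.example.com@EXAMPLE.COM
    hdfs dfs -ls /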

Technical Environment: Hortonworks Hadoop, Cassandra, Flat files, Oracle 11g/10g, MySQL, Toad 9.6, Windows NT, Sqoop, Hive, Oozie, Ambari, SAS, SPSS, Unix Shell Scripts, ZooKeeper, SQL, MapReduce, Pig.

Hadoop System Administrator

Confidential, San Ramon, CA

Roles & Responsibilities:

  • Worked on a live Big Data Hadoop production environment with 300 nodes.
  • Involved in the upgrade of the Hadoop cluster from CDH4 to CDH5.
  • Worked on installing the cluster, commissioning & decommissioning of data nodes, NameNode recovery, capacity planning, and slots configuration.
  • Installed and configured Flume & Oozie on the Hadoop cluster and Managed, Defined and Scheduled Jobs on a Hadoop cluster.
  • Deployed the MapR Distribution for Apache Hadoop, which speeds up MapReduce jobs with an optimized shuffle algorithm, direct access to the disk, built-in compression, and code written in Scala.
  • Responsible for cluster maintenance, monitoring, commissioning and decommissioning data nodes, troubleshooting, managing and reviewing data backups, and managing and reviewing log files.
  • Day-to-day responsibilities included solving developer issues, handling deployments (moving code from one environment to another), providing access to new users, providing prompt solutions to reduce impact, documenting them, and preventing future issues.
  • Added and installed new components, and removed them, through Cloudera Manager.
  • Responsible for implementation and ongoing administration of Hadoop infrastructure.
  • Installed, Configured & Maintained Apache Hadoop clusters for application development and Hadoop tools like Hive, Pig, HBase, Zookeeper and Sqoop.
  • Involved in implementing security on the Hortonworks Hadoop cluster with Kerberos, working with the operations team to move the non-secured cluster to a secured cluster.
  • Experience in installation, configuration, supporting and monitoring Hadoop clusters using Apache, Cloudera distributions and AWS.
  • Involved in architecting Hadoop clusters using major Hadoop Distributions - CDH4 and CDH5.
  • Bootstrapping instances using Chef and integrating with auto scaling.
  • Managed the configurations of more than 40 servers using Chef.
  • Monitored systems and services; handled architecture design and implementation of the Hadoop deployment, configuration management, backup, and disaster recovery systems and procedures.
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Worked with data delivery teams to set up new Hadoop users.
  • This included setting up Linux users, creating Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users (a sketch of this onboarding flow appears after this list).
  • Used Informatica Power Center to create mappings, mapplets, User defined functions, workflows, worklets, sessions and tasks.
  • Developed a framework for automated testing of ElasticSearch index validation using Java and MySQL.
  • Created User defined types to store specialized data structures in Cloudera.
  • Followed standard Back up policies to make sure the high availability of cluster.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action.
  • Documented the systems processes and procedures for future references.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Screen Hadoop cluster job performances and capacity planning.
  • Built, stood up, and delivered a Hadoop cluster in pseudo-distributed mode, with the NameNode, Secondary NameNode, JobTracker, and TaskTracker running successfully, ZooKeeper installed and configured, and Apache Accumulo (a NoSQL store modeled on Google's Bigtable) stood up in a single-VM environment.
  • Participated in Database Migration from Sybase IQ to Teradata
  • Created Teradata incidents and worked closely with Viewpoint engineering, GSC and GTS hunt groups
  • Monitored Hadoop cluster connectivity and security; also involved in managing and monitoring Hadoop log files.
  • Involved in migrating a Java test framework to Python Flask.
  • Defined instances in code alongside the relevant configuration for what they run, and then created the instances via Puppet.
  • Assembled Puppet Master, Agent and Database servers on Red Hat Enterprise Linux Platforms.
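A hedged sketch of the new-user onboarding flow described above (Linux account, Kerberos principal, HDFS home directory, then smoke tests); the user name, realm, JDBC URL, and jar path are illustrative assumptions:

    NEWUSER=jdoe                               # placeholder user
    REALM=EXAMPLE.COM                          # placeholder Kerberos realm

    # 1. Linux account on the gateway/edge node.
    sudo useradd -m -G hadoop "$NEWUSER"

    # 2. Kerberos principal (prompts for an initial password).
    sudo kadmin.local -q "addprinc ${NEWUSER}@${REALM}"

    # 3. HDFS home directory owned by the new user.
    sudo -u hdfs hdfs dfs -mkdir -p /user/"$NEWUSER"
    sudo -u hdfs hdfs dfs -chown "$NEWUSER":"$NEWUSER" /user/"$NEWUSER"

    # 4. Smoke tests as the new user: HDFS write, Hive query, sample MapReduce job.
    #    (On a Kerberized cluster the user must kinit first; the examples jar path
    #    varies by distribution.)
    sudo -u "$NEWUSER" hdfs dfs -put /etc/hosts /user/"$NEWUSER"/hosts_test
    sudo -u "$NEWUSER" beeline -u "jdbc:hive2://hive01.example.com:10000/default;principal=hive/_HOST@${REALM}" -e "SHOW DATABASES;"
    sudo -u "$NEWUSER" hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 2 10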

Technical Environment: Hadoop, MapReduce, Cassandra, Cloudera Manager, HDFS, Hive, Pig, HBase, Sqoop, Oozie, AWS, SQL, Java (JDK 1.6), Eclipse.

Hadoop Administrator

Confidential, Mountain View, CA

Roles & Responsibilities:

  • Worked on Administrating Hadoop Clusters, Installation, Configuration and Management of Hadoop Cluster.
  • Designed and developed Hadoop system to analyze the SIEM (Security Information and Event Management) data using MapReduce, HBase, Hive, Sqoop and Flume.
  • Developed custom Writable MapReduce Java programs to load web server logs into HBase using Flume.
  • Worked on Hadoop CDH upgrade from CDH3 to CDH4
  • Integrated Oozie with the rest of the Hadoop stack supporting several types of Hadoop jobs out of the box (like MapReduce, Pig, Hive, Sqoop) as well as system specific jobs.
  • Developed entire data transfer model using Sqoop framework.
  • Provided explicit support for partitioning messages over Kafka servers and for distributing consumption over a cluster of consumer machines while maintaining per-partition ordering semantics.
  • Integrated Kafka with Flume in sand box Environment using Kafka source and Kafka sink.
  • Configured flume agent with flume syslog source to receive the data from syslog servers.
  • Implemented Hadoop NameNode HA services to make the Hadoop services highly available.
  • Exported data from RDBMS to Hive and HDFS, and from Hive and HDFS back to RDBMS, using Sqoop.
  • Installed and managed multiple Hadoop clusters: production, stage, and development.
  • Performance tuning for infrastructure and Hadoop settings for optimal performance of jobs and their throughput.
  • Involved in analyzing system failures, identifying root causes, and recommending courses of action, including on lab clusters.
  • Designed the Cluster tests before and after upgrades to validate the cluster status.
  • Regularly commissioned/decommissioned nodes as disk failures occurred, using Cloudera Manager.
  • Documented and prepared run books of systems processes and procedures for future references.
  • Performed Benchmarking and performance tuning on the Hadoop infrastructure.
  • Automated data loading between the production and Disaster Recovery (DR) clusters (see the sketch after this list).
  • Migrated the Hive schema from the production cluster to the DR cluster.
  • Worked on migrating applications from relational database systems by doing POCs.
  • Helping users and teams with incidents related to administration and development.
  • Onboarding and training on best practices for new users who are migrated to our clusters.
  • Guide users in development and work with developers closely for preparing a data lake.
  • Migrated data from SQL Server to HBase using Sqoop.
  • Processed and analyzed log data stored in HBase and imported it into the Hive warehouse, enabling business analysts to write HQL queries.
  • Replicated the Jenkins build server to a test VM using Packer, VirtualBox, Vagrant, Chef, Perlbrew, and Serverspec.
  • Built reusable Hive UDF libraries, which enabled various business analysts to use these UDFs in their Hive queries.
  • Created partitioned Hive external tables for loading the parsed data.
  • Developed various workflows using custom MapReduce, Pig, Hive and scheduled them using Oozie.
  • Responsible for installing, setting up, and configuring Apache Kafka and Apache ZooKeeper.
  • Extensive knowledge in troubleshooting code related issues.
  • Developed a suite of unit test cases for Mapper, Reducer, and Driver classes using the MR testing library.
  • Auto-populated HBase tables with data.
  • Designed and coded application components in an agile environment using a test-driven development approach.
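The resume does not spell out the mechanism for the automated production-to-DR loading; below is a hedged sketch using hadoop distcp and a Hive DDL replay, which is one common approach rather than the documented production job. Cluster hostnames, the table name, and paths are placeholders:

    #!/usr/bin/env bash
    # Illustrative DR replication sketch: copy yesterday's partition to the DR
    # cluster and replay the Hive schema there.
    set -euo pipefail

    DAY=$(date -d yesterday +%Y-%m-%d)                       # GNU date
    SRC=hdfs://prod-nn.example.com:8020/data/events/dt=$DAY
    DST=hdfs://dr-nn.example.com:8020/data/events/dt=$DAY

    # Copy the partition, preserving block size and skipping files already up to date.
    hadoop distcp -pb -update "$SRC" "$DST"

    # Export the table DDL from production, replay it on the DR cluster,
    # then refresh partition metadata.
    hive -e "SHOW CREATE TABLE events;" > /tmp/events_ddl.sql
    scp /tmp/events_ddl.sql dr-edge.example.com:/tmp/
    ssh dr-edge.example.com 'hive -f /tmp/events_ddl.sql && hive -e "MSCK REPAIR TABLE events;"'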

Technical Environment: Hadoop, HDFS, MapReduce, Shell Scripting, Spark, Splunk, Solr, Pig, Hive, HBase, Sqoop, Flume, Oozie, ZooKeeper, cluster health monitoring, security, Red Hat Linux, Impala, Cloudera Manager, Hortonworks.

Linux System Administrator

Confidential, Pune, IN

Roles & Responsibilities:

  • Day-to-day administration on Sun Solaris and RHEL 4/5, including installation, upgrades, patch management, and loading packages.
  • Responsible for monitoring overall project and reporting status to stakeholders.
  • Developed project user guide documents, which help transfer knowledge to new testers, and a solution repository document that gives quick resolutions for issues that occurred in the past, thereby reducing the number of invalid defects.
  • Identified repeated issues in production by analyzing production tickets after each release and strengthened the system testing process to keep those issues from reaching production, enhancing customer satisfaction.
  • Designed and coordinated creation of Manual Test cases according to requirement and executed them to verify the functionality of the application.
  • Manually tested the various navigation steps and basic functionality of the Web based applications.
  • Experience interpreting physical database models and understanding relational database concepts such as indexes, primary and foreign keys, and constraints using Oracle.
  • Writing, optimizing, and troubleshooting dynamically created SQL within procedures
  • Creating database objects such as Tables, Indexes, Views, Sequences, Primary and Foreign keys, Constraints and Triggers.
  • Responsible for creating virtual environments for the rapid development.
  • Responsible for handling tickets raised by end users, including package installation, login issues, access issues, and user management such as adding, modifying, deleting, and grouping users (see the sketch after this list).
  • Responsible for preventive maintenance of the servers on monthly basis.
  • Configured RAID for the servers; managed resources using disk quotas.
  • Responsible for change management release scheduled by service providers.
  • Generated weekly and monthly reports for the tickets worked on and sent them to management.
  • Managed systems operations with final accountability for smooth installation, networking, operation, and troubleshooting of hardware and software in a Linux environment.
  • Identifying operational needs of various departments and developing customized software to enhance System's productivity.
  • Established and implemented firewall rules and validated them with vulnerability scanning tools.
  • Proactively detected computer security violations, collected evidence, and presented results to management.
  • Accomplished system/e-mail authentication using an enterprise LDAP database.
  • Implemented a database-enabled intranet web site using Linux, Apache, and a MySQL database backend.
  • Installed CentOS using PXE (Preboot Execution Environment) boot and the Kickstart method on multiple servers; monitored system metrics and logs for problems.
  • Worked on GIS for the "Atlas" tool for Google Maps.
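A small sketch of the routine user-management and disk-quota work listed above; user names, groups, and the /home mount point are placeholder assumptions:

    # Add, modify, and group a user as part of routine ticket handling.
    sudo useradd -m -s /bin/bash -G developers alice
    sudo usermod -aG wheel alice             # grant an additional group
    sudo passwd alice                        # set an initial password

    # Enable and set per-user disk quotas on /home (assumes the usrquota
    # mount option is already present in /etc/fstab for /home).
    sudo quotacheck -cum /home
    sudo quotaon /home
    sudo setquota -u alice 5000000 6000000 0 0 /home   # soft/hard block limits in KB, no inode limits

    # Remove a departed user after archiving the home directory.
    sudo tar czf /archive/bob_home.tar.gz /home/bob
    sudo userdel -r bob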

Technical Environment: Windows 2008/2007 server, Unix Shell Scripting, SQL Manager Studio, Red Hat Linux, Microsoft SQL Server 2000/2005/2008, MS Access, NoSQL, Linux/Unix, Putty Connection Manager, Putty, SSH.
