Seeking a responsible and challenging position with a dynamic organization that offers opportunities for personal and professional development and allows me to apply my knowledge and skills.
- Specialized in deploying Hadoop and its ecosystem components.
- Experienced in Hadoop installation, configuration and deployment on Linux servers.
- Assisted in planning and estimating cluster capacity and creating roadmaps for Hadoop cluster deployment.
- Working experience with Hadoop internals such as HDFS, Sqoop, Flume, Hive, YARN and MapReduce; contributed to building the Hadoop architecture.
- Strong knowledge of the components, services and daemons of the Hadoop ecosystem (HDFS, MapReduce v2, YARN) and of configuring ecosystem tools such as Pig, Hive, Oozie, Sqoop, ZooKeeper and Flume.
- Managing services such as the NameNode, DataNodes and ResourceManager by configuring XML files such as core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml.
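The service-to-file mapping above can be sketched with a minimal hdfs-site.xml fragment; the property names are standard Hadoop ones, but the paths and values here are illustrative, not taken from any actual cluster:

```xml
<!-- hdfs-site.xml: minimal sketch of NameNode/DataNode settings -->
<configuration>
  <!-- HDFS block replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Local directory where the NameNode stores its metadata -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/dfs/nn</value>
  </property>
  <!-- Local directories where DataNodes store blocks -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/dfs/dn</value>
  </property>
</configuration>
```

The ResourceManager is configured analogously in yarn-site.xml (for example via `yarn.resourcemanager.hostname`).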
- Experienced in monitoring cluster health and resources, troubleshooting Hadoop cluster and service configuration issues, and performance tuning using the CLI or the web UI.
- Handled responsibilities such as raising job priorities, checking for hung jobs and updating users about long-running jobs, taking appropriate action accordingly.
- Experienced in commissioning and decommissioning nodes, trash configuration and the HDFS balancer.
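As a sketch, trash retention and the decommissioning exclude file are driven by configuration properties along these lines (the values and paths are illustrative):

```xml
<!-- core-site.xml: keep deleted HDFS files in .Trash for 24 hours -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value> <!-- minutes -->
</property>

<!-- hdfs-site.xml: hosts file consulted when decommissioning DataNodes -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/dfs.exclude</value>
</property>
```

After adding a host to the exclude file, `hdfs dfsadmin -refreshNodes` starts decommissioning, and `hdfs balancer` redistributes blocks across the remaining nodes.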
- Experience in planning and implementing Backup & Disaster Recovery for Hadoop Cluster.
- Processing of large data sets and assisting in hardware architecture.
- Integration and migration of data between various sources and HDFS by building data pipelines.
- Integration and migration of data between production and integration clusters using DistCp (distributed copy).
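An inter-cluster copy of this kind can be sketched with DistCp; the NameNode hostnames and paths below are illustrative:

```shell
# Copy a dataset from the production cluster to the integration cluster.
# -update copies only files that changed; -p preserves permissions and timestamps.
hadoop distcp -update -p \
  hdfs://prod-nn:8020/data/events \
  hdfs://int-nn:8020/data/events
```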
- Configuring connectivity between HiveServer2 and Tableau.
- Experienced in deploying a production-ready data warehouse using Hive by configuring a remote metastore backed by a MySQL database.
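A remote-metastore setup of this kind is driven by hive-site.xml properties along these lines; the hostnames, database name and user below are illustrative:

```xml
<!-- hive-site.xml: point Hive at a remote metastore backed by MySQL -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://metastore-db:3306/metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<!-- Clients reach the metastore service over Thrift -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```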
- Configured security for authorization in the Hadoop production cluster using POSIX-style permissions and Confidential .
- Knowledge of Ranger and Sentry security tools
- Alert configuration in Cloudera Manager.
- Good understanding of Software Development Life Cycle (SDLC), agile methodology.
- Extensive knowledge of database administration for Oracle 11g/10g in very large database environments and mission-critical OLTP systems across a variety of environments.
- Experience in various data-loading techniques such as export/import, Data Pump and SQL*Loader.
- Experience in data replication using materialized views and import/export.
- Writing scripts in UNIX-based environments to manage user roles and privileges, schedule backups (using Data Pump and cold backups), gather statistics and manage disk space.
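A scheduled Data Pump backup of the kind mentioned above might be sketched as a cron entry; the schedule, schema name and directory object are illustrative, and credentials are deliberately elided:

```shell
# crontab fragment: nightly schema-level Data Pump export at 01:30
# (in practice credentials come from a wallet or a protected parfile, not the crontab)
30 1 * * * expdp system schemas=APP_SCHEMA directory=DP_DIR dumpfile=app_schema.dmp logfile=app_exp.log reuse_dumpfiles=y
```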
- Provided 24*7 on-call production database support and maintenance, including remote database support.
Hadoop Ecosystems: HDFS, YARN, Hive, Solr, Hue, Oozie, HBase, Sqoop, Kafka, Spark and ZooKeeper
Operating Systems: Linux (Redhat)
Database: Oracle 10g, 11g, MySQL
Monitoring Tools: Cloudera Manager, Remedy (Ticketing Tool), Ambari
Scripting: Shell Scripting
Technologies: Hadoop HDFS, MapReduce, YARN, Hive, Oozie, Sqoop, Flume, Spring MVC framework, Tableau
- Working on development and production clusters: setting up the Hadoop cluster and installing required ecosystem components.
- HDFS support and maintenance.
- Working with data delivery teams to set up new Hadoop users, which includes granting permissions at various levels and testing them.
- Commissioning and decommissioning of nodes, running cluster balancer to manage the data load.
- Experience in securing the cluster using HA configuration, Confidential and Kerberos.
- Managed and reviewed Hadoop log files as part of administration for troubleshooting purposes.
- Monitoring cluster health status; fixing issues, alerts and warnings related to memory, resources and performance.
- Experience in performance tuning, cluster resource management, job schedulers and resource pool allocations for clusters. Monitoring and troubleshooting Oozie scheduler jobs.
- Writing scripts to automate daily activities
- Ingesting streaming data using Flume and Kafka as well.
- Ingesting data from various sources; importing and exporting structured data between RDBMS sources such as Oracle, MySQL and DB2 and HDFS using Sqoop.
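A typical Sqoop transfer of this kind can be sketched as follows; the connection strings, usernames, tables and paths are illustrative:

```shell
# Import an Oracle table into HDFS with 4 parallel mappers.
sqoop import \
  --connect jdbc:oracle:thin:@//oradb-host:1521/ORCL \
  --username scott -P \
  --table EMPLOYEES \
  --target-dir /user/etl/employees \
  --num-mappers 4

# Export processed HDFS data back to a MySQL table.
sqoop export \
  --connect jdbc:mysql://mysql-host:3306/reports \
  --username etl -P \
  --table daily_summary \
  --export-dir /user/etl/daily_summary
```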
- Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades when required.
- Coordinating with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
- Deploying Hadoop cluster into production.
- Provided 24*7 production database support and maintenance including on-call support.
- Monitoring mount points and managing space for archive logs and application backup dumps, taking appropriate action within SLA timelines.
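A mount-point space check of this kind can be sketched in shell; the threshold, paths and helper-function names here are hypothetical, not from an actual monitoring setup:

```shell
#!/bin/sh
# Alert when a filesystem in `df -P` output crosses a usage threshold,
# e.g. the archive-log destination filling up.

THRESHOLD=90   # alert at 90% used (illustrative)

# Pull the numeric "Use%" field out of one df -P data line.
usage_pct() {
  echo "$1" | awk '{gsub(/%/, "", $5); print $5}'
}

# Print an ALERT line if the mount in the df line is over the threshold.
check_line() {
  pct=$(usage_pct "$1")
  if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "ALERT: $1"
  fi
}

# In production this would iterate over `df -P | tail -n +2`;
# here we demonstrate on a sample line.
check_line "/dev/sda1 52428800 49283072 3145728 94% /u01/arch"
```

The threshold check is kept in a small function so it can be reused per line and wired into cron alongside the other daily-activity scripts.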
- Worked on tickets for creation and maintenance of data files, tablespaces and other database structures like undo, temp space management.
- Handling new user-management requests such as creating and dropping users and assigning the corresponding roles, privileges and profiles.
- Handling schema refreshes, selective objects export and import from one database to another.
- Handling listener issues and purging the listener log to avoid connectivity issues.
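Purging the listener log without bouncing the listener can be sketched as follows; the log path is illustrative and varies with the ADR layout:

```shell
# Stop listener logging, rotate the log file, then re-enable logging.
lsnrctl set log_status off
mv "$LSNR_LOG" "$LSNR_LOG.$(date +%Y%m%d)"   # LSNR_LOG: full path to listener.log
lsnrctl set log_status on
```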
- Handling RMAN backup failures and confirming the consistency of production database backups.
- Working with the traditional export/import and Data Pump utilities for schema refreshes and data export/import on production/testing/development at the engineering team's request.
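A Data Pump schema refresh of this kind can be sketched as below; the schema names, directory object and file names are illustrative:

```shell
# Export the schema from the source database...
expdp system schemas=HR directory=DP_DIR dumpfile=hr.dmp logfile=hr_exp.log

# ...then import it into the target, remapping the schema if needed.
impdp system directory=DP_DIR dumpfile=hr.dmp logfile=hr_imp.log \
      remap_schema=HR:HR_TEST table_exists_action=replace
```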
- Installation and configuration of the catalog server for RMAN backups.
- Used data pump utility to do table level and full database level defragmentation.
- Monitoring database regarding backup status (failure or success).
- Actively participated in performing and communicating disaster recovery drills with all corresponding teams and stakeholders.
Environment: Oracle 11g, 10g, RHEL 5, RMAN, UNIX shell scripting, Toad, OEM.