
Database Developer/Administrator Resume


SUMMARY

  • 3+ years of experience in the database field, seeking opportunities to leverage my technical knowledge and hone skills that contribute to the technology industry.
  • A self-motivated, responsible and reliable team player with strong technical expertise and excellent communication skills.
  • Hands-on experience with Hadoop ecosystem components such as HDFS, MapReduce, Pig, Hive, Oozie, Sqoop, Spark, ZooKeeper and HBase
  • Knowledge of various Hadoop distributions such as Cloudera, Hortonworks and MapR
  • Comprehensive understanding of Hadoop architecture and its components, including the HDFS NameNode and DataNode and the MapReduce JobTracker and TaskTracker
  • Hands-on experience with data extraction, transformation and loading using Hive, Pig and HBase.
  • Good experience analyzing data using Hive Query Language, Pig Latin and custom MapReduce programs in Java, along with User Defined Functions (UDFs).
  • Set up ingestion systems that feed data into HDFS using Apache Kafka
  • Hands-on experience with MapReduce, Hive, Beeline, SQLite and Pig
  • Experience capturing data and importing it into HDFS, using Flume and Kafka for semi-structured data and Sqoop for existing relational databases.
  • Fair knowledge of NoSQL database management systems such as HBase and MongoDB
  • Experience designing RDBMS schemas and writing SQL queries to maintain and extract data
  • Good knowledge of ETL processes and data warehousing concepts
  • Worked on various execution engines such as Tez and Spark, along with MapReduce
  • Implemented daily cron jobs that automate parallel data-loading tasks into HDFS using Oozie coordinator jobs
  • Strong passion for, and expertise in, Linux and Windows operating systems
  • Experience performing daily DBA activities, including schema management, space management, monitoring and job scheduling
  • Experience with Oracle Enterprise Manager, RAC and RMAN
  • Experienced in developing UNIX shell scripts and performing database health checks
  • Experience in the design and maintenance of Oracle Data Guard and in RAC administration
  • Proficiency in Oracle database backup/recovery (RMAN), installation, maintenance, Exp/Imp/Data Pump and PL/SQL programming
  • Good knowledge of OEM Grid Control for monitoring and user management
  • Experience working with ServiceNow, Remedy and CA Service Desk (incident management/change management)
  • Expertise in e-commerce database infrastructure; worked on automation scripts for the database security environment
  • Good knowledge of developing complex database objects such as stored procedures, functions, packages and triggers using SQL and PL/SQL

TECHNICAL SKILLS

  • Hadoop Ecosystem: HBase, MapReduce, Hive, Scala, Pig, Oozie, Sqoop
  • Languages: Shell, PL/SQL, Java, Python, C, Scala
  • Tools: OEM, Oracle Grid Control, Golden Gate, Data Guard
  • Web: HTML/XHTML, CSS, JavaScript, J2EE
  • Databases: Hadoop, Oracle Database (10g, 11g, 12c), MySQL, SQL Server 2016, SQL Server 2012, MongoDB
  • Server Platforms: Red Hat/CentOS 5.x/6.x/7.x, Ubuntu 14.x/16.x, Debian, SuSE 9.x/11.x, Windows Server 2016/2012 R2/2008 R2, Windows 10/8.x/7, Solaris, Mac OS X
  • Web/Application Servers: Apache/Tomcat

PROFESSIONAL EXPERIENCE

Database Developer/Administrator

Confidential

Responsibilities:

  • Worked on a live Hadoop cluster running CDH 5.5
  • Hands-on experience with Hadoop components such as HDFS, MapReduce, JobTracker, NameNode, DataNode and TaskTracker
  • Loaded, transformed and analyzed data from various sources in HDFS (Hadoop Distributed File System) using Hive, Pig and Sqoop
  • Created Hive tables based on business requirements; Hive queries and Pig scripts were used to analyze the large data sets.
  • Generated reports based on Hive queries and ingested data using Apache Sqoop
  • Extracted data from SQL and Teradata databases into HDFS using Sqoop
  • Developed multiple Kafka producers and consumers as per the software requirement specifications (see the producer sketch after this list)
  • Developed UDFs in Java and Python for use in Pig and Hive queries (the UDF pattern is sketched after this list)
  • Developed Spark code using Scala and Spark-SQL/Streaming for faster data processing (see the Spark sketch after this list)
  • Handled importing data from various sources, performed transformations using Hive and Spark, and loaded the data into HDFS
  • Worked on Payroll and Time Management services; created databases of different versions (10g and 11g RAC) based on the client's requirements
  • Performed clones and migrations from one database to another; experienced in upgrading and downgrading databases across versions
  • Wrote automation scripts for database monitoring, sanity checks and other routine tasks
  • Worked on different RMAN recovery scenarios using hot backups and point-in-time recovery
  • Performed database sanity checks, including backup jobs, alert logs and file system monitoring; troubleshot issues and performed several system administration tasks
  • Effectively handled customer/end-user requests such as user creation and refreshes, strictly adhering to defined SLAs/SOWs
  • Worked on deployments that involved analyzing complex scripts; additionally, wrote subqueries to improve their efficiency
  • Set up Golden Gate for replication and fixed related issues; tuned the database as the load and user base grew over time
  • Involved in maintaining a multi-node RAC environment with the ASM file system using Clusterware; developed stored procedures, triggers, joins, views and synonyms for databases
  • On-call support: provided on-call support for P1/P2 (priority) databases, interacted with customers and gave regular updates to management on a parallel management bridge
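The Kafka producers mentioned above used the standard Kafka client API. Below is a minimal Scala sketch of such a producer; the broker addresses, topic name and file-based feed are illustrative placeholders, not the actual project configuration.

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object FeedProducer {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Broker list, serializers and topic name are placeholders for illustration.
    props.put("bootstrap.servers", "broker1:9092,broker2:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    try {
      // Publish each line of a local feed file as one record on the topic.
      scala.io.Source.fromFile(args(0)).getLines().foreach { line =>
        producer.send(new ProducerRecord[String, String]("data-feed", line))
      }
    } finally {
      producer.close()
    }
  }
}
```

A matching consumer would subscribe to the same topic and land the records in HDFS, for example through Flume or a small Spark Streaming job, as described in the experience above.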
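Hive resolves a plain UDF through reflection on an evaluate method, which is the pattern behind the Java UDFs mentioned above. The sketch below shows that pattern (written here in Scala) for a hypothetical function that trims and lower-cases a string column; the class and function names are placeholders.

```scala
import org.apache.hadoop.hive.ql.exec.UDF
import org.apache.hadoop.io.Text

// Hypothetical Hive UDF: trims and lower-cases a string column.
// Hive finds evaluate() by reflection once the jar is registered, e.g.
//   ADD JAR clean-text.jar; CREATE TEMPORARY FUNCTION clean_text AS 'CleanText';
class CleanText extends UDF {
  def evaluate(input: Text): Text =
    if (input == null) null
    else new Text(input.toString.trim.toLowerCase)
}
```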
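The Spark work above used Scala with Spark-SQL; a minimal batch sketch of that style follows. The input path, column names and output location are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FeedSummary {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("feed-summary").getOrCreate()
    import spark.implicits._

    // Hypothetical CSV feed already landed on HDFS by the ingestion jobs.
    val feed = spark.read.option("header", "true").csv("hdfs:///data/ingest/feed/*.csv")

    // Spark SQL-style aggregation; column names are placeholders.
    val summary = feed
      .groupBy($"category")
      .agg(count($"record_id").as("record_count"))

    // Write the result back to HDFS for downstream Hive tables.
    summary.write.mode("overwrite").parquet("hdfs:///data/curated/feed_summary")

    spark.stop()
  }
}
```

A streaming variant follows the same DataFrame API via spark.readStream against the Kafka-fed landing directory.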
