
Big Data Administrator Resume


Minneapolis, MN

SUMMARY

  • Senior IT professional with 5 years of experience in banking, insurance, and telecommunications.
  • These 5 years of big data experience include enterprise data lake administration, storage management, and operations in a large-scale distributed environment.
  • Currently working as a lead Big Data administrator for a client that is one of the major financial services corporations in the United States.
  • Excellent knowledge of Hadoop architecture and extensive experience installing and configuring Hadoop ecosystem components in existing clusters.
  • Experience in Hadoop administration (HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, and HBase) and NoSQL administration.
  • Set up automated 24x7 monitoring and escalation infrastructure for Hadoop clusters using Nagios.
  • Experience in installing Hadoop clusters using different distributions, including Apache Hadoop and Cloudera.
  • Good experience in understanding clients' big data business requirements and translating them into Hadoop-centric solutions.
  • Experience in analyzing clients' existing Hadoop infrastructure, identifying performance bottlenecks, and tuning performance accordingly.
  • Installed, configured, and maintained HBase.
  • Worked with Sqoop to import and export data between HDFS/Hive and databases such as MySQL and Oracle (a sample import command follows this list).
  • Experience in configuring ZooKeeper to provide cluster coordination services.
  • Loaded logs from multiple sources directly into HDFS using Flume.
  • Experience in benchmarking and in backing up and recovering NameNode metadata and data residing in the cluster.
  • Familiar with commissioning and decommissioning nodes on a Hadoop cluster (see the decommissioning sketch after this list).
  • Adept at configuring NameNode High Availability.
  • Worked on disaster recovery with Hadoop clusters.
  • Strong knowledge of Hadoop HDFS architecture and the MapReduce framework.
  • Experience in deploying and managing multi-node development, testing, and production clusters.
  • Experience in understanding Hadoop security requirements and integrating with Kerberos authentication infrastructure: KDC server setup and creating and managing the realm domain.
  • Worked on setting up NameNode high availability for a major production cluster and designed automatic failover using ZooKeeper and quorum journal nodes.
  • Well experienced in building servers such as DHCP, PXE with Kickstart, DNS, and NFS, and in using them to build infrastructure in a Linux environment.
  • Experienced in Linux administration tasks such as IP management (IP addressing, subnetting, Ethernet bonding, and static IPs).
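
As a representative example of the Sqoop work above, a minimal import sketch follows; the JDBC URL, credentials, table, and paths are placeholders rather than details from any engagement:

    # import a MySQL table into HDFS and register it in Hive
    sqoop import \
      --connect jdbc:mysql://db.example.com:3306/sales \
      --username etl_user -P \
      --table transactions \
      --target-dir /data/raw/transactions \
      --hive-import --hive-table staging.transactions \
      --num-mappers 4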
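
Decommissioning, as mentioned above, typically follows this pattern; the hostname and exclude-file path are assumptions (the exclude file is whatever dfs.hosts.exclude points to):

    # mark the node for decommissioning and let HDFS re-replicate its blocks
    echo "worker-17.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes
    hdfs dfsadmin -report | grep -A 3 "worker-17"   # wait for "Decommissioned" status
    yarn rmadmin -refreshNodes                      # also drain the node from YARN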

TECHNICAL SKILLS

Operating Systems: Red Hat, CentOS, Solaris, Windows Server 2008/2008 R2

Hardware: Sun Ultra Enterprise servers (E3500, E4500), SPARCserver 1000, SPARCserver 20.

Languages: C, C++, Java, Python, R

Hadoop Distribution: Cloudera and Hortonworks

Hadoop Ecosystem: MapReduce, YARN, HDFS, Sqoop, Hive, Pig, HBase, Flume, and Oozie

Tools: JIRA, Putty, WinSCP, Git

Protocols: TCP/IP, FTP, SSH, SFTP, SCP, SSL.

Databases: HBase, Oracle 7.x/8.0/9i, MySQL, SQL Server.

PROFESSIONAL EXPERIENCE

Confidential, Minneapolis, MN

Big Data Administrator

Responsibilities:

  • Maintained a 160-node production Hadoop cluster along with a smaller stage cluster.
  • Installed software on the Hadoop cluster to support data science and analytics users, including but not limited to R, Python packages, Spark, H2O, JupyterHub, R Shiny, KNIME, and other open-source big data components/services.
  • Maintained user accounts, system security, and cluster resource management to optimize user query execution time and data loading time.
  • Helped with the migration of Hadoop to a cloud-based distribution and helped decide on and implement the new configuration.
  • Helped with data migration and issue resolution while migrating existing Hadoop/Java applications from the on-premises cluster to the cloud distribution.
  • Experienced in supporting data science teams, super users, and analytics teams on complex code deployment, debugging, and performance optimization problems.
  • Experienced with Hadoop, HDFS, Hive, Spark, R, Python, Java, and UNIX.
  • Performed system monitoring and controls.
  • Experienced in managing CPU, memory, and storage resources for a large Hadoop cluster with hundreds of users (representative monitoring commands follow this list).
  • Worked exclusively with Hortonworks and IBM BigInsights (4.1).
  • Successfully migrated to AWS from our on-premises EHC.
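
The day-to-day resource monitoring referenced above leans on a handful of stock commands; this is an illustrative sampling, not a site-specific runbook:

    hdfs dfsadmin -report                        # capacity, usage, live/dead DataNodes
    hdfs dfs -du -h /user | sort -rh | head      # heaviest consumers of HDFS space
    yarn application -list -appStates RUNNING    # jobs currently holding resources
    yarn top                                     # live per-application view (recent Hadoop releases)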

Confidential, Phoenix, AZ

Lead Big Data Administrator

Responsibilities:

  • Responsible for architecting Hadoop clusters and translating functional and technical requirements into detailed architecture and design.
  • Worked exclusively on the Cloudera distribution of Hadoop.
  • Installed and configured a fully distributed, multi-node Hadoop cluster.
  • Provided Hadoop, OS, and hardware optimizations.
  • Set up machines with network control, static IPs, disabled firewalls, and swap memory.
  • Installed and configured Cloudera Manager for easy management of the existing Hadoop cluster.
  • Worked on setting up high availability for a major production cluster and designed automatic failover using ZooKeeper and quorum journal nodes (see the bootstrap sketch after this list).
  • Implemented the Fair Scheduler on the JobTracker to allocate a fair share of resources to small jobs.
  • Performed operating system installation and Hadoop version updates using automation tools.
  • Implemented a rack-aware topology on the Hadoop cluster.
  • Imported and exported structured data between HDFS/Hive and different relational databases using Sqoop.
  • Configured ZooKeeper for node coordination and clustering support.
  • Configured Flume for efficient collection, aggregation, and movement of large volumes of log data from various sources to HDFS.
  • Collected and aggregated large amounts of streaming data into HDFS using Flume, defining channel selectors to multiplex data into different sinks (an agent configuration sketch follows this list).
  • Implemented the Kerberos security authentication protocol for the existing cluster.
  • Backed up data on a regular basis to a remote cluster using DistCp (a sample command follows this list).
  • Good experience in troubleshooting production-level issues affecting the cluster and its functionality.
  • Regularly commissioned and decommissioned nodes depending on the amount of data.
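
The high-availability setup above is bootstrapped roughly as follows, assuming hdfs-site.xml already defines the nameservice, both NameNodes (nn1 here is a placeholder ID), and the quorum journal URI:

    hdfs zkfc -formatZK                # create the failover znode in ZooKeeper
    hdfs namenode -bootstrapStandby    # seed the standby NameNode's metadata (run on the standby)
    hdfs haadmin -getServiceState nn1  # confirm which NameNode is active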
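
A minimal sketch of the kind of Flume agent described above, tailing an application log into HDFS; the agent name, paths, and capacities are illustrative:

    cat > /etc/flume-ng/conf/agent1.conf <<'EOF'
    agent1.sources  = logsrc
    agent1.channels = memch
    agent1.sinks    = hdfssink

    agent1.sources.logsrc.type = exec
    agent1.sources.logsrc.command = tail -F /var/log/app/app.log
    agent1.sources.logsrc.channels = memch

    agent1.channels.memch.type = memory
    agent1.channels.memch.capacity = 10000

    agent1.sinks.hdfssink.type = hdfs
    agent1.sinks.hdfssink.hdfs.path = /data/logs/app/%Y-%m-%d
    agent1.sinks.hdfssink.hdfs.fileType = DataStream
    agent1.sinks.hdfssink.hdfs.useLocalTimeStamp = true
    agent1.sinks.hdfssink.channel = memch
    EOF
    flume-ng agent --conf /etc/flume-ng/conf \
      --conf-file /etc/flume-ng/conf/agent1.conf --name agent1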
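
The remote-cluster backups mentioned above boil down to a DistCp invocation of this shape (cluster hostnames and paths are placeholders):

    # incremental copy that preserves permissions/ownership (-p)
    hadoop distcp -update -p \
      hdfs://prod-nn.example.com:8020/data/warehouse \
      hdfs://dr-nn.example.com:8020/data/warehouse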

Environment: Hadoop, HDFS, MapReduce, Hive, Sqoop, Flume, Cloudera, MySQL

Confidential, Boston, MA

Senior Hadoop Administrator

Responsibilities:

  • The client used the Hortonworks distribution of Hadoop to store and process large volumes of data generated across different enterprises.
  • Installed YARN (ResourceManager, NodeManager, ApplicationMaster) and created volumes and the CLDB on edge nodes.
  • Responsible for the implementation and ongoing administration of the MapR infrastructure.
  • Monitored an already-configured cluster of 54 nodes.
  • Installed and configured Hadoop components: Hive, Pig, and Hue.
  • Communicated with development teams and attended daily meetings.
  • Addressed and troubleshot issues on a daily basis.
  • Deployed the R statistical tool for statistical computing and graphics.
  • Worked with data delivery teams to set up new Hadoop users, including setting up Linux users, setting up Kerberos principals, and testing MFS and Hive (see the onboarding sketch after this list).
  • Performed cluster maintenance as well as creation and removal of nodes.
  • Monitored Hadoop cluster connectivity and security.
  • Managed and reviewed Hadoop log files.
  • Performed file system management and monitoring.
  • Diligently teamed with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
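
User onboarding as described above usually combines OS, Kerberos, and HDFS steps; the username, realm, and admin principal below are placeholders:

    useradd -m jdoe                                       # Linux account
    kadmin -p admin/admin -q "addprinc jdoe@EXAMPLE.COM"  # Kerberos principal (prompts for passwords)
    hdfs dfs -mkdir /user/jdoe                            # HDFS home directory (run as the HDFS superuser)
    hdfs dfs -chown jdoe:jdoe /user/jdoe
    su - jdoe -c "kinit jdoe@EXAMPLE.COM && hdfs dfs -ls /user/jdoe"   # smoke test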

Environment: Hortonworks distribution, Hive, Pig, Hue, Linux, MySQL.

Confidential, Silver Spring, MD

Hadoop Operation engineer/ Administrator

Responsibilities:

  • Installed, configured, and maintained the Hadoop cluster for application development, along with Hadoop ecosystem components such as Hive, Pig, HBase, ZooKeeper, and Sqoop.
  • Assigned the number of mappers and reducers for jobs on the MapReduce cluster.
  • Set up HDFS quotas to enforce a fair share of storage resources (sample commands follow this list).
  • Configured and maintained YARN schedulers (Fair and Capacity).
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to any warning or failure conditions (a sketch follows this list).
  • Set up the HBase cluster, including master and RegionServer configuration, high-availability configuration, performance tuning, and administration.
  • Created user accounts and gave users access to the Hadoop cluster.
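
Quota enforcement as described above uses the stock dfsadmin commands; the directory and limits here are illustrative:

    hdfs dfsadmin -setQuota 1000000 /user/projectA    # cap at 1M names (files + directories)
    hdfs dfsadmin -setSpaceQuota 10t /user/projectA   # cap raw space (counts replicas) at 10 TB
    hdfs dfs -count -q -h /user/projectA              # verify quotas and current usage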
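
A minimal sketch of the kind of daemon health check mentioned above; the process list should be adjusted to each host's role, and the alert address is an assumption:

    #!/bin/bash
    # alert if any expected Hadoop daemon is missing from this host's JVM list
    for proc in NameNode DataNode ResourceManager NodeManager; do
        if ! jps | grep -qw "$proc"; then
            echo "$(date) $proc is not running on $(hostname)" \
                | mail -s "Hadoop daemon alert" ops-team@example.com
        fi
    done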

Environment: Hadoop, HDFS, Hive, Oozie, Java (JDK 1.6), Cloudera, MySQL.
