Support Engineer Resume

TX

SUMMARY

  • Around 7 years of professional experience in Hadoop and Linux administration activities such as administering, upgrading, securing, patching, configuring, troubleshooting, monitoring, and maintaining systems/clusters.
  • Hands on experience in applying Linux and Hadoop patches and upgrading Hadoop clusters.
  • Experience in configuring and working with HDFS, YARN (MapReduce), Hue, HBase, Hive, Impala, Oozie, Sqoop, Spark, Kafka, Spark2, Flume, and Ranger services.
  • Experience in setting up High Availability for Hadoop services such as HDFS, YARN, Oozie, Hue, HiveServer2, and Impala.
  • Experience in creating cookbooks/playbooks and documentation for special projects such as upgrades.
  • Participated in application onboarding meetings with architects and helped application teams by advising on application/user account character-length limits and onboarding the application and user accounts into the Hadoop environment.
  • Applied advanced methods and techniques and assisted other admins in the administration, implementation, and documentation of processes and procedures to ensure compliance with standard business practices.
  • Fixed issues by interacting with dependent and support teams and logged cases according to priority.
  • Collaborated with application and dependent teams on Hadoop updates and on special projects such as cluster upgrades, patches, and configuration changes when required.
  • Hands-on experience in Hadoop benchmarking and functionality testing; helped application teams/users integrate third-party tools with the Hadoop environment.
  • Experience in analyzing log files, identifying root causes of failures, and taking or recommending courses of action.
  • Additional responsibilities include interacting with offshore team on a daily basis, communicating the requirement and delegating the tasks to offshore/onsite team members and reviewing their delivery.
  • Effective problem-solving skills & ability to learn and use new technologies/tools quickly.
  • Experience in providing 24x7 production support, including on-call and weekly support.

TECHNICAL SKILLS

HADOOP DISTRIBUTIONS & ECOSYSTEMS: Azure HDInsight, Cloudera, Cloudera Manager, Hortonworks, Ambari, HDFS, YARN (MR), Hive, HBase, Storm, Tez, Flume, Pig, Oozie, Zookeeper, Spark, Kusto, Ranger, Hue, Kafka, Impala, Sqoop, Knox, Phoenix, Zeppelin, Grafana, Azure Data Factory, Azure Databricks

OPERATING SYSTEMS: Unix & Linux (CentOS 6, RHEL 5/6, SUSE SLES 11/12, Ubuntu 16.x/18.x), Windows 10/8/7/XP

SQL & NOSQL DATA STORAGE: PostgreSQL, MySQL, MongoDB, Teradata, Oracle, MS SQL Server, CosmosDB

SECURITY: Kerberos, Sentry, Ranger, Knox, LDAP, AD, SSL/TLS

BIG DATA INTEGRATED TOOLS: Informatica BDM, Talend, DbVisualizer, Tableau, Cognos, NetApp, DataStage, Denodo, ADF

LANGUAGES: PowerShell, SQL, PLSQL, Shell and Python Scripting, XML, HTML, JSON

NETWORKING & OTHERS: DNS, PSSH, FTP, SFTP, mRemoteNG, MobaXterm, Keepass, Putty, WinSCP, Kusto

PROFESSIONAL EXPERIENCE

Confidential, TX

Support Engineer

Responsibilities:

  • Responsible for the customer support experience with Azure HDInsight and Azure Databricks products.
  • Own, troubleshoot, and solve customer technical issues using collaboration, troubleshooting best practices, and transparency within and across teams (e.g., swarming).
  • Identify cases that require escalation (either technically or strategically).
  • Create and maintain incident management requests to the product group or engineering group as needed for patches or fixes.
  • Contribute to case deflection initiatives, automation, and other digital self-help assets to improve the customer and engineer experience.
  • Provide ramp-up activities, knowledge sharing, technical coaching, and mentoring to other team members.
  • Drive technical collaboration and engagement with Product Engineering teams, Services, Support, and Regions.
  • Lead or participate in building communities with peer delivery roles; may be workload- or specialty-specific.

Environment: Azure HDInsight 3.6 & 4.0, Azure Databricks, Hive LLAP, Hadoop, Spark, Kafka, Storm, HBase, Phoenix, Zeppelin, Grafana, Zookeeper, MS Outlook, and Teams

Confidential, Peoria, IL

Azure HDInsight Engineer

Responsibilities:

  • Work with various teams to establish Hadoop best practices and help build governance, process and procedures and training around platform changes/releases, integrations, and security.
  • Coordinate with various Teams, IT, business, and vendors, to plan and implement deployments and rollouts.
  • Introduce new services such as Zeppelin and Grafana and integrate them with the HDInsight clusters.
  • Responsible for capacity planning and estimating the requirements for lowering or increasing the capacity of the Hadoop cluster.
  • Responsible for maintaining and monitoring the Hadoop cluster and keeping it up at all times by providing 24/7 support for all applications.
  • Work with application and dependent teams to come up with strategic solutions to remediate frequent and intermittent issues.
  • Responsible for HDInsight cluster upgrades and patches and for improving security.
  • Hold regular discussions with Hadoop technical teams regarding upgrades, process changes, special processing, and feedback.
  • Coordinate with other admins for root cause analysis (RCA) efforts to minimize future system issues.
  • Write scripts to automate recurring work and prepare documentation.
  • Owning, tracking, and resolving Hadoop related incidents and providing service improvements and root cause analysis.
  • Collaborate with the application teams and provide the solutions/recommendations depending upon the requirement.
  • Responsible for sending Hadoop user alerts and creating change tickets for production changes going in during the Change Weekend window.
  • Point of contact for the client and responsible for owning, tracking, and coordinating with offshore team to resolve Hadoop related incidents within SLAs.
  • Ensure that critical production priority issues are addressed quickly and effectively.
  • Experience in providing 24x7 production support, including on-call and weekly support.

Environment: Azure HDInsight 3.5 & 3.6, HDP 2.6, Ambari 2.5, Ubuntu 16.04.4, Storm, HBase, Phoenix, Zeppelin, Grafana, Zookeeper, Azure storage, Key Vaults, MobaXterm, Service Now, MS Outlook, Skype for Business

Confidential, Chesterfield, MO

Hortonworks Hadoop Administrator

Responsibilities:

  • Responsible for maintaining and monitoring the cluster and keeping it up at all times by providing 24/7 support for the applications.
  • Experience in creating and maintaining documentation for upgrades, processes, and procedures.
  • Hands on experience in analyzing Log files for Hadoop eco system services and finding root cause.
  • Working experience on Kerberized Hadoop clusters; configured queues in the Capacity Scheduler.
  • Involved in cluster monitoring, backup, restore, and troubleshooting activities.
  • Experience onboarding new Hadoop application teams into the cluster with proper POCs to give application teams a better picture of how Hadoop meets their requirements, including guidance on application/user account character-length limits.
  • Experience in benchmark testing with TestDFSIO, TeraSort, TeraGen, TeraValidate, and Pi tests, and in granular functionality testing for all services in the Hadoop cluster.
  • Integrated tools such as DbVisualizer, Tableau, and SQL Developer with Hadoop so that users can pull data from HDFS and Hive.
  • Owning, tracking, and resolving Hadoop related incidents and providing service improvements and root cause analysis.
  • Commissioned and decommissioned data nodes from the cluster in case of problems.
  • Reviewing service-related reports (e.g.: Hadoop cluster configuration, maintenance, monitoring) on a daily basis to ensure service-related issues are identified and resolved within established SLAs.
  • Manage the backup and disaster recovery for Hadoop data and manage the scalable Hadoop cluster environment (tuning and optimizing for performance requirements).
  • Create and publish various production metrics including system performance and reliability information to systems owners and management.
  • Installed and upgraded MySQL and PostgreSQL database servers on all nodes for storing Hadoop metadata.
  • Balanced HDFS manually to decrease network utilization and improve job performance.
  • Held discussions with other technical teams on a regular basis regarding upgrades, process changes, special processing, and feedback.
  • Perform ongoing capacity management forecasts including timing and budget considerations.
  • Coordinate root cause analysis (RCA) efforts to minimize future Hadoop system issues.
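The benchmark suite mentioned above (TeraGen, TeraSort, TeraValidate) can be sketched as a small driver script. The examples-jar path, the /benchmarks HDFS directories, and the row count are illustrative assumptions that vary by distribution; the script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of a TeraGen -> TeraSort -> TeraValidate benchmark run (dry run by default).
# EXAMPLES_JAR and the /benchmarks HDFS paths are assumptions; adjust per distribution.
HADOOP="${HADOOP_CMD:-echo hadoop}"   # export HADOOP_CMD=hadoop on a real cluster to execute
EXAMPLES_JAR=/usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar
ROWS=10000000                         # 10M rows x 100 bytes = ~1 GB of input data

$HADOOP jar "$EXAMPLES_JAR" teragen "$ROWS" /benchmarks/teragen
$HADOOP jar "$EXAMPLES_JAR" terasort /benchmarks/teragen /benchmarks/terasort
$HADOOP jar "$EXAMPLES_JAR" teravalidate /benchmarks/terasort /benchmarks/teravalidate
```

TeraValidate fails if the sorted output is out of order, which makes the three steps a quick end-to-end health check after an upgrade.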

Environment: Hortonworks HDP2.x, Ambari, HDFS, Hive, Flume, Oozie, Sqoop, Spark, Ranger, Pig, YARN, SUSE Linux, Kerberos, SSH, PSSH, PostgreSQL, MySQL, DbVisualizer, Tableau, Putty, WinSCP, Service Now, MS Outlook, Skype for Business.

Confidential, Peoria, IL

Hadoop Infrastructure Administrator

Responsibilities:

  • Experience in upgrading the cluster from CDH 5.3.8 to CDH 5.8.0 then to CDH 5.8.2.
  • Perform on-going daily cluster maintenance and management by reviewing Cloudera Manager dashboards and log files to spot potential issues and take preventative and corrective measures.
  • Troubleshoot and resolve issues with ETL/Data Ingest, Hive and Impala queries, Spark jobs and other related items by analyzing job logs and error files for Hadoop services in both Dev and Prod clusters.
  • Work closely with developers to troubleshoot and resolve software errors (or bugs).
  • Collaborating with application teams for Hadoop updates, patches, version upgrades when required.
  • Experience in Sentry administration for creating databases and providing/revoking user access.
  • Set up and configured user and service accounts in Linux and Cloudera HUE.
  • Configure security for HDFS & Cluster databases (Hive, Impala) using Sentry and Linux/HDFS ACLs.
  • Created cookbooks/playbooks and documentation for special projects such as cluster upgrades, security implementation, and SSL certificate upgrades.
  • Enabled the utilization report after upgrading the cluster to CDH 5.8.2 for granular YARN and Impala CPU and memory utilization reporting.
  • Installed Spark2 and upgraded CM 5.8.2 to CM 5.8.3 (Prerequisite for Spark2).
  • Introduced Cloudera Navigator Optimizer, which provides developers with recommendations for Hive/Impala SQL queries.
  • Participated in implementing cluster authentication using Kerberos and authorization using Sentry.
  • Experience in doing benchmark testing and functionality testing for all services in Hadoop cluster while doing the cluster upgrades.
  • Set up passwordless SSH login for several servers and installed PSSH to save significant time by running tasks on all servers in parallel.
  • Wrote scripts to back up critical metadata (such as the fsimage, edit logs, and the Postgres database) and CM configuration blueprints.
  • Attend bi-weekly Cloudera CSM meetings/calls and interact with Cloudera support to log issues and take corrective measures.
  • Collaborated with Linux and MySQL teams on OS-level and security-vulnerability patch implementations and on fixing Hadoop issues.
  • Responsible for sending Hadoop user alerts and creating change tickets for production changes going in during the Infrastructure Change Weekend (ICW) window.
  • Point of contact for the client and responsible for owning, tracking and coordinating with offshore team to resolve Hadoop related incidents within SLAs.
  • Ensure that critical production P1 and P2 support issues are addressed quickly and effectively.
  • Configure and manage the BDR jobs for HDFS data and schedule fortnightly meetings with the application teams to check about backups and incorporate changes, if needed.
  • Coordinate with other admins for root cause analysis (RCA) efforts to minimize future system issues.
  • Experience in providing 24x7 production support, including on-call and weekly support.
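The metadata backup scripts described in this role could be sketched as below. The hostnames, database name, credentials, and CM API version are hypothetical placeholders, and the script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of a metadata backup: fsimage, the CM Postgres DB, and a CM config blueprint.
# cm-db-host, cm-host, the scm database, and the API version are hypothetical.
RUN="${RUN_CMD:-echo}"                              # dry run by default; set RUN_CMD= to execute
BACKUP_DIR="${BACKUP_ROOT:-/tmp/hadoop-backup}/$(date +%F)"
mkdir -p "$BACKUP_DIR"

# 1. Pull the latest fsimage from the active NameNode
$RUN hdfs dfsadmin -fetchImage "$BACKUP_DIR"

# 2. Dump the Cloudera Manager metadata database (Postgres)
$RUN pg_dump -h cm-db-host -U scm -f "$BACKUP_DIR/scm.sql" scm

# 3. Export the full CM configuration as a JSON blueprint via the CM API
$RUN curl -s -u admin:admin \
    "http://cm-host:7180/api/v13/cm/deployment" -o "$BACKUP_DIR/cm-deployment.json"
```

Keeping a dated directory per run makes it easy to restore the cluster configuration to a known point in time.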

Environment: CDH5.x, Cloudera Manager, HDFS, Hive, Impala, HUE, Flume, Oozie, Sqoop, Spark, Spark2, Zookeeper, RHEL6.x, Kerberos, MySQL, Remedy, IBM Lotus Notes, mRemoteNG, WinSCP, SSH, PSSH, Citrix NetScaler.

Confidential, Waukegan, IL

Big Data Administrator

Responsibilities:

  • Responsible for cluster monitoring, maintaining, managing, commissioning, and decommissioning of data nodes.
  • Perform on-going daily cluster maintenance and management by reviewing Cloudera Manager and log files to spot potential issues and take preventative and corrective measures.
  • Day-to-day responsibilities included solving users' issues and assisting them with code deployments, providing prompt solutions to reduce impact and prevent future cluster issues.
  • Created and provided access for new users and added them to requested groups by working with the Linux team.
  • Configure and manage permissions for the Hadoop users in HUE.
  • Monitored and managed the Hadoop cluster using Cloudera Manager and built Cloudera Manager charts for users according to their monitoring needs.
  • Troubleshoot and resolve issues with ETL/Data Ingest, Hive and Impala queries and other related items by analyzing job logs and error files for Hadoop services.
  • Configure security for HDFS & Cluster databases (Hive, Impala) using Sentry and Linux/HDFS ACLs.
  • Integrated third-party tools like Talend, Tableau, DbVisualizer, Denodo, Cognos and Data stage using ODBC/JDBC drivers with Hadoop clusters.
  • Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
  • Rebalanced the HDFS cluster, ran HDFS and HBase reports bi-weekly, and fixed issues, if any.
  • Experience with Sentry administration for creating databases and providing/revoking access as requested.
  • Worked on MySQL and Java upgrades on all Hadoop servers.
  • Experience in troubleshooting with Oozie, Hive, Impala and YARN jobs issues.
  • Created NFS mounts for taking backups of critical metadata such as the fsimage and the Postgres database.
  • Worked on commissioning and decommissioning data nodes and enabling/disabling the respective services in the load balancer.
  • Involved in analyzing failures, identifying root causes, and recommending course of actions to application teams/users.
  • Setup password less login for all Hadoop servers through SSH keys.
  • Monitored running Impala and Hive jobs and provided recommendations to optimize query performance.
  • Experience in analyzing log files for Hadoop ecosystem services, analyzing failures, and identifying root causes.
  • Worked with app teams to come up with strategic solutions to remediate frequent and intermittent issues.
  • Interacted with Cloudera support by logging issues in the Cloudera portal and fixed them per the recommendations.
  • Provided after hours and on-call support for development team & internal customers.
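The passwordless-SSH and PSSH setup mentioned above can be sketched as follows. The node names and the hadoop admin user are assumptions for illustration, and the script defaults to a dry run that only prints the commands:

```shell
#!/bin/sh
# Sketch of passwordless SSH setup plus a parallel command via pssh (dry run by default).
# node01..node03 and the 'hadoop' user are hypothetical; set RUN_CMD= on a real jump host.
RUN="${RUN_CMD:-echo}"
HOSTS_FILE="${HOSTS_FILE:-hosts.txt}"
printf '%s\n' node01 node02 node03 > "$HOSTS_FILE"   # hypothetical worker nodes

# Generate one key pair (no passphrase), then push the public key to every node
$RUN ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
while read -r host; do
    $RUN ssh-copy-id "hadoop@$host"
done < "$HOSTS_FILE"

# With keys in place, pssh runs the same command on every node at once
$RUN pssh -h "$HOSTS_FILE" -l hadoop -i "uptime"
```

Once keys are distributed, a single pssh invocation replaces a loop of serial SSH sessions, which is where the time savings come from on large clusters.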

Environment: CDH5.x, Cloudera Manager, HDFS, YARN, Hive, HUE, Flume, Oozie, Kafka, Sqoop, Zookeeper, RHEL6.x and CentOS 6.x, MySQL, PostgreSQL, MS Outlook with Lync, Service Now, SSH, PSSH, Putty, WinSCP.
