Hadoop Platform Engineer / Designer Resume
SUMMARY
- Seeking a Hadoop Administrator / Cloud Engineer / DevOps Engineer position where I can apply and advance strong technical abilities
- Over four years' experience designing, deploying, configuring, and supporting the Apache Hadoop ecosystem (both Hortonworks and Cloudera distributions)
- Google Cloud Certified Associate Cloud Engineer; solid experience with Google Cloud Platform, AWS, OpenStack, Docker, and Kubernetes
- Over four years' DevOps experience, including GitHub, Bitbucket, Ansible Tower, Jenkins, Ansible, SaltStack, JIRA, and Maven
- Over twenty years' experience in Linux/UNIX system administration, including RHEL, SLES, VMware, AIX, and other flavors
TECHNICAL SKILLS
Big Data Related Technologies (Hortonworks and Cloudera distributions): HDFS, YARN, MapReduce, Spark, ZooKeeper, Zeppelin, Oozie, Hue, Hive, HBase, Knox, Sentry, Ambari, Ranger, Solr, Flume, Impala, Sqoop, Kerberos, NiFi, PostgreSQL, Informatica, Podium, Pentaho, Ansible, SaltStack, HPE, Zabbix, UCS, Java, Scala, Jenkins, Splunk
Platform: GCP, AWS, OpenStack, Docker, Kubernetes, Cisco rack servers, HP ProLiant servers, IBM Power systems (P8/P7/P6/P5), IBM BladeCenter, HP blade servers, IBM xSeries servers, Oracle M-Series servers, Oracle T-Series servers, EMC disk arrays, Hitachi disk arrays, IBM disk arrays, SVC, USPV, VSP, Infoblox appliances
DevOps: Jenkins, GitHub, Ansible, Maven, JIRA, Splunk, Docker, shell scripting, Agile, Kanban
Operating Systems: Red Hat Enterprise Linux 3/4/5/6/7, SLES 9/10/11, VMware ESX/ESXi 3/4/5, AIX 5/6/7, Solaris 2.6/7/8/9/10/11, HP-UX 10.20/11.00/11i
PROFESSIONAL EXPERIENCE
Hadoop Platform Engineer / Designer
Confidential
Responsibilities:
- Design/Install/Configure a Hortonworks cluster with HDFS, YARN, MapReduce, Spark, ZooKeeper, Hive, and Zeppelin
- Upgrade the Hadoop cluster from HDP 2.6 to HDP 3.1; test and document upgrade procedures; provide support during production cutover
- Design/Implement Hadoop clusters on Google Cloud Platform and OpenStack; monitor and troubleshoot cluster performance
- Design/Implement a GPU-based data science environment: install GPU drivers and Python modules, and configure Zeppelin authentication/authorization to meet security requirements
- Provide guidance to Tier 2 and Tier 3 support teams; troubleshoot Hadoop cluster and Linux OS issues, identify root causes, and provide permanent solutions; fine-tune cluster configuration parameters
- Install/Configure Ansible Tower; configure RBAC access control; define workflows in Ansible Tower; monitor workflow execution status via Splunk
- Create Ansible scripts for deployment automation and operational tasks; store Ansible code in GitHub for CI/CD and version control (see the sketch after this list)
- Support Informatica BDM environments; support HPE/Voltage implementation
- Participate in disaster recovery activities; provide disaster recovery documentation; create scripts to automate disaster recovery tasks
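A minimal sketch of the Ansible-driven deployment automation described above; the inventory path, playbook name, and log location are illustrative placeholders, not artifacts from an actual engagement:

    #!/usr/bin/env bash
    # Run a deployment playbook in check mode first, then for real,
    # logging output to a file a Splunk forwarder can pick up.
    set -euo pipefail
    INVENTORY=inventories/hdp_prod          # hypothetical inventory path
    PLAYBOOK=playbooks/deploy_clients.yml   # hypothetical playbook
    LOG=/var/log/ansible/deploy_$(date +%Y%m%d%H%M%S).log
    mkdir -p "$(dirname "$LOG")"
    ansible-playbook -i "$INVENTORY" "$PLAYBOOK" --check   # dry run first
    ansible-playbook -i "$INVENTORY" "$PLAYBOOK" | tee "$LOG"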
Hadoop Platform Engineer / DevOps Engineer
Confidential
Responsibilities:
- Design/Install/Configure a Cloudera cluster with HDFS, YARN, MapReduce, Spark, ZooKeeper, Hive, Impala, and HBase services
- Implement multiple projects: identify and clarify business requirements, customize environments based on project requirements, work with developers to test and improve software quality, and create production implementation documents
- Configure Jenkins to automate environment build tasks; deploy new environments via Jenkins
- Utilize Ansible for platform automation tasks; build new releases via Maven; upload releases to Bitbucket (see the sketch after this list)
- Provide third-level support in the administrator team; troubleshoot Hadoop cluster and Linux OS issues; identify root causes and provide permanent solutions; fine-tune cluster configuration parameters; add new data nodes to increase cluster capacity
- Automate configuration procedures for various tasks and services, including Flume, HBase, and Hive; perform code deployment via Jenkins in the production environment
- Document technical procedures; provide backend support to team members and developers; mentor and coach team members
- Participate in disaster recovery tasks: meet with the DR team to identify recovery requirements, create DR technical plans, and implement DR tasks
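A minimal sketch of the Maven/Bitbucket release flow referenced above, assuming a Git remote hosted on Bitbucket; the script name and version argument are illustrative:

    #!/usr/bin/env bash
    # release.sh: build a release with Maven, then tag and push it
    # so the release source is versioned on the Bitbucket remote.
    set -euo pipefail
    VERSION=${1:?usage: release.sh <version>}
    mvn clean package                 # compile, run tests, and package
    git tag -a "v${VERSION}" -m "Release ${VERSION}"
    git push origin "v${VERSION}"     # Bitbucket-hosted remote assumed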
Hadoop Platform Engineer / Designer
Confidential
Responsibilities:
- Design/Install/Configure a new Hortonworks Hadoop cluster, including HDFS, YARN, MapReduce, Spark, ZooKeeper, Oozie, Hive, HBase, Ambari, and Ranger; set up Kerberos authentication for the cluster
- Review multiple big data vendor solutions; define technology requirements and grading criteria; implement POCs for multiple vendors
- Design and implement a cluster user access standard via Ranger and Hadoop ACLs to provide secure and flexible environment access (see the sketch after this list)
- Design and configure Hadoop components in the Production cluster, including Knox and Spark2
- Identify environment issues; work with developers and business owners to clarify business requirements; troubleshoot and address various issues to stabilize the environment
- Analyze log files, identify root causes, and provide solutions; troubleshoot Linux operating system issues, including both hardware and OS configuration
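A minimal sketch of the Kerberos and HDFS ACL setup referenced above; the realm, hostname, keytab path, and group name are placeholders:

    # Create a service principal and keytab for a Kerberized NameNode
    kadmin.local -q "addprinc -randkey nn/master1.example.com@EXAMPLE.COM"
    kadmin.local -q "ktadd -k /etc/security/keytabs/nn.service.keytab nn/master1.example.com@EXAMPLE.COM"
    klist -kt /etc/security/keytabs/nn.service.keytab   # verify the keytab

    # Grant a project group read/execute access via an HDFS ACL
    hdfs dfs -setfacl -m group:analysts:r-x /data/project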
Hadoop Platform Engineer
Confidential
Responsibilities:
- Configure Jenkins to automate environment build tasks; deploy new environments via Jenkins
- Utilize SaltStack for platform automation tasks; build new releases via Maven
- Provide third-level support for the enterprise Hadoop environments, including PROD and non-PROD clusters
- Work on cluster expansion projects to add capacity to the cluster; install and configure various Hadoop ecosystem services, including HDFS, YARN, MapReduce, ZooKeeper, Spark, Oozie, Hue, Hive, HBase, Pig, Impala, Sqoop, Sentry, Podium, PostgreSQL, and SaltStack
- Configure data replication between PROD and DR sites; perform DR tests and document DR procedures (see the sketch after this list)
- Commission, decommission, balance, and manage nodes; tune servers for optimal cluster performance
- Configure various levels of access control to secure the Hadoop environment; configure file encryption for PCI data on both HDFS and the local file system
- Communicate with business, developers, and other teams to design and implement Spark and Impala solutions
- Analyze log files for Hadoop and ecosystem services, identify root causes, and provide solutions; troubleshoot Linux operating system issues, including both hardware and OS configuration
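A minimal sketch of the PROD-to-DR replication and PCI encryption tasks referenced above; the cluster hostnames, paths, and key name are placeholders:

    # Replicate a dataset from PROD to the DR cluster with DistCp,
    # updating changed files and removing files deleted at the source
    hadoop distcp -update -delete \
        hdfs://prod-nn:8020/data/sales hdfs://dr-nn:8020/data/sales

    # Create an HDFS encryption zone for PCI data (requires a configured KMS)
    hadoop key create pciKey
    hdfs dfs -mkdir /data/pci            # zone directory must be empty
    hdfs crypto -createZone -keyName pciKey -path /data/pci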
Senior UNIX Consultant
Confidential
Responsibilities:
- Participate in the Hadoop POC project; provide technical input and recommendations
- Install Hadoop servers, including the OS, Hadoop ecosystem services, and Hadoop daemons
- Work on a data migration project to migrate more than 300 Linux servers; troubleshoot and resolve major roadblocks to keep the migration on its target timeline
- Plan, build, and customize multiple IBM Power8 and Power7 frames; create frame build documents and test plans; perform resilience tests to ensure frame builds meet standards
- Design UNIX component disaster recovery plans; deploy scripts to automate recovery activities; prepare for and participate in various DR tests
- Lead the UNIX team on PCI audit remediation to ensure the environment is PCI compliant; deploy and implement scripts based on DISA STIG compliance standards (see the sketch after this list)
- Develop and test AIX migration procedures; work with business units to schedule and implement server migration changes
- Provide mentoring and coaching within the team
- Certified in both IBM POWER8 scale-out and POWER8 enterprise technologies
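A minimal sketch of the kind of DISA STIG remediation scripting referenced above; the single rule shown (shadow file permissions) is one common example, not a full STIG implementation, and the log location is hypothetical:

    #!/usr/bin/env bash
    # Enforce restrictive permissions on /etc/shadow and log any drift.
    set -euo pipefail
    current=$(stat -c '%a' /etc/shadow)   # octal mode without leading zeros
    if [ "$current" != "0" ]; then
        echo "$(date): /etc/shadow mode ${current}, resetting to 0000" \
            >> /var/log/stig_fix.log      # hypothetical log location
        chmod 0000 /etc/shadow
    fi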
Senior UNIX Consultant
Confidential
Environment: More than 4,000 UNIX servers running AIX, SLES, RHEL, and Solaris
Responsibilities:
- Work on a complex data center migration project to migrate the DR site to a new location
- Contribute to project plans by providing input on UNIX platform and architecture; provide timeline estimates, scope, and controls; participate in pre- and post-implementation reviews and provide hands-on support during system cutover
- Plan, build, and customize multiple IBM Power 780 and 770 frames; create frame build documents and test plans; perform resilience tests to ensure frame builds meet standards
- Deploy various tools for system automation to improve server build performance; document installation guides, procedures, and operational runbooks; provide QA checklists to support production handover
- Build AIX LPARs and Linux VMs/servers; customize servers to meet various application and database requirements; provide 24/7 server support throughout the server build life cycle
- Collaborate with business leaders, IT professionals, and vendors; review customer and application owners' requirements; design the Infrastructure Specification spreadsheet
Senior UNIX Consultant
Confidential
Environment: More than 2,000 AIX LPARs running on over 50 IBM P795/P770/P595/P570 frames
Responsibilities:
- Designed, installed, and configured multiple IBM Power P795, P770, P595, and P570 frames
- Led a team that migrated more than 1,000 AIX LPARs from IBM Power5/Power6 frames to IBM Power7 frames, covering project planning, solution design, documentation, frame deployment, OS upgrades, PowerHA upgrades, Oracle 11g upgrades, LPAR migration, and Power6 frame decommissioning; the project was accomplished on a very tight timeline, and the team received recognition from the customer's senior management
- Planned, implemented, and documented a storage migration project, migrating more than 1,000 LPARs from IBM SVC to Hitachi USPV and VSP
- Reviewed customer requirements; created system build books for server builds on multiple projects
- Developed and documented various procedures for the server build and operational support teams, e.g. the server build process, an ORT (Operational Readiness Test) runbook template, a legacy Red Hat Linux recovery procedure, and an OS upgrade procedure
- Installed and configured IBM NIM servers; deployed servers with PowerHA configurations; configured PowerHA with advanced options (see the sketch after this list)
- Coached and motivated team members, creating a more productive environment
- Followed the change management process; created implementation plans for scheduled changes; met with the customer for plan walkthroughs
- First person to receive the employee award twice within a 130-person team
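A minimal sketch of a NIM-driven AIX install of the kind referenced above; the client name, SPOT, and lpp_source are placeholders, and exact attributes vary by environment:

    # Define a standalone NIM client, then push a BOS (rte) install to it
    nim -o define -t standalone -a platform=chrp -a netboot_kernel=64 \
        -a if1="find_net client01 0" client01
    nim -o bos_inst -a source=rte -a spot=spot_7100 \
        -a lpp_source=lpp_7100 -a accept_licenses=yes client01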