Sr. AWS Cloud DevOps Resume
Santa Clara, CA
SUMMARY:
- Experienced IT professional seeking a leadership position driving strategy, management, planning, development, team building, and end-to-end solutions, with a focus on leveraging information technology, Big Data, and data science to maximize business value.
- Developer/DevOps engineer with over 8 years of development, administration, and programming experience, with a primary focus on Big Data and an emphasis on Hadoop
- Spark RDD and Spark DataFrame development and DevOps using Python and Scala
- Worked with Hadoop, HBase, Cascading, ZooKeeper, Oozie, Hive, HiveQL, MapR, MongoDB, Pentaho, and Pig
- DevOps with AWS cloud services: EC2, EMR, Redshift, S3, etc.
- Experienced with using Docker CLI on multiple projects
- Experienced with using Rancher and RancherOS
- Experienced with using GlusterFS and CEPH
- Supported Hadoop and Vertica as an administrator, on clusters of about 7 to 700 nodes.
- Supported and administered user accounts and security rules within the Big Data domain.
- Worked with application servers (Tomcat), Oracle, and MySQL.
- Experienced with distributed systems, large scale non-relational data stores, map-reduce systems, data modeling, database performance, and multi-terabyte data warehouses.
- Experienced in SaaS environments with agile development processes.
- Experienced in Java, Python, and other object-oriented programming languages.
- Experienced on mobile, Linux, Unix, Android, and Mac platforms.
- Experienced with server management, server operating systems (Windows, Linux, Unix), and VMware.
- Experienced in testing APIs, including RESTful APIs.
- Experienced with the full software lifecycle and Agile practices.
- Experienced with scripting languages (Perl, Python, Ruby) and QTP for automated and performance tests.
- Experienced with testing DNA sequencing machine software for regulatory approval.
- Experienced testing networking and storage technologies, protocols, and hardware.
- Experienced with web services and SoapUI testing.
- Experienced with Oracle, TOAD, and SQL.
- Experienced in testing UI technologies such as HTML5.
- Experienced in Big Data, Hadoop, Cassandra, Memcached, NoSQL, and MapReduce.
- Led and coached teams of about 2 to 20 engineers onshore, offshore, and nearshore.
- Led the hiring process.
- Big Data and analytics platform administrator (Vertica, Greenplum, Hadoop, Hive).
- Experienced with display advertising and behavioral ad networks.
- Recommendation engines and personalization.
- Experienced with data analytics, data mining, and predictive modeling.
WORK EXPERIENCE:
Confidential, Santa Clara, CA
Sr. AWS Cloud DevOps
Responsibilities:
- Database development and performance tuning
- Managing and configuring AWS services including EC2, S3, SQS, Redshift, ElastiCache, CloudWatch, etc.
- Docker master setup and installation/deployment
- Worked on database persistence with Java Hibernate development on AWS
- Redshift performance tuning and testing
- Spark RDD and Airflow workflow design and testing (see the sketch after this list)
- Report design using MicroStrategy
- Built lead-recommender systems for the Confidential website
- Scala functional programming with Spark streaming
- Solr programming
- Hortonworks programming
- VM programming
- Docker programming
- Using Docker CLI on multiple projects
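A minimal sketch of the kind of Airflow workflow design mentioned above, assuming a hypothetical nightly Spark aggregation job; the DAG id, schedule, and script path are illustrative, not taken from the actual project:

```python
# Hypothetical Airflow DAG that submits a nightly Spark job.
# The dag_id, schedule, and script path are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="nightly_spark_aggregation",
    default_args=default_args,
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Submit the aggregation script to the cluster via spark-submit.
    run_spark = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit --master yarn /opt/jobs/aggregate.py",
    )
```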
Confidential, San Francisco, CA
Sr. Big Data DevOps
Responsibilities:
- Provision, configure, and maintain cloud resources including server instances, load balancers, security groups, DNS management, and certificates in AWS and the company data center (see the sketch after this list)
- Manage and maintain development and production systems. Troubleshoot issues as they arise by reviewing logs and attempting to reproduce them in a controlled environment. Apply periodic updates, patches, etc., to servers.
- Ensure system uptime by monitoring production systems and responding to issues with corrective action.
- Ensure network uptime and troubleshoot any issues or outages to ensure business continuity
- Diagnose and troubleshoot day-to-day issues users may experience with their PC and Mac computers - network connectivity, printing, etc.
- Spark RDD and Spark DataFrame development.
- Docker master setup and installation/deployment.
- Solr programming
- NoSQL database design and Spark SQL development.
- Unix/Hadoop administration and development
- Using Docker CLI
- Expertise with Splunk UI/GUI development and operations roles.
- Prepared, arranged and tested Splunk search strings and operational strings.
- Managed the Splunk licenses and restricted the daily indexing limit.
- AWS EMR/Redshift administration and development
- Cloudera administration, development, and DevOps
- Cloudera Navigator administration, development, and DevOps
- Kerberos administration.
- Zoomdata administrator.
- Using GlusterFS and CEPH.
- Big data platform design and architecture/development
- Hive and HBase development.
- Security officer for big data
- Built POCs of all other big data tools.
- Working with RancherOS
- Work with scripts for Oozie and Hue systems.
- Cassandra, MongoDB, HBase, and Hive development.
- Cloudera Hue and Solr development.
- Data governance policy/management and big data security.
- Big data security and Kerberos development.
- Multi-tenant setup and development.
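As an illustration of the provisioning work in the first bullet of this section, a hedged boto3 sketch that creates a security group and launches an EC2 instance; the region, AMI id, and CIDR range are placeholders, not values from the actual environment:

```python
# Hypothetical boto3 sketch: create a security group that only allows
# SSH from one CIDR range, then launch an instance into it.
# Region, AMI id, and CIDR are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

sg = ec2.create_security_group(
    GroupName="app-servers",
    Description="App servers, SSH restricted",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],
    }],
)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    SecurityGroupIds=[sg["GroupId"]],
)
```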
Confidential, San Francisco, CA
Sr. Database Architect
Responsibilities:
- Converting large SAS SQL scripts to HiveQL for Data Science
- Converting Oracle scripts to HiveQL for Data Science (see the sketch after this list)
- Architecting NoSQL databases and coding MapReduce functions over 10+ years of email campaign results data for analytics
- Working with data science to prototype data models and programming the business logic with Hive and Pig
- Solr programming
- Building database applications on Hive, Pig, HBase, MongoDB, Cassandra, Spark, Solr, and Shark on a large in-house Hadoop cluster
- Backend NoSQL database design, development, architecture, and testing
- Data governance
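To illustrate the Oracle-to-HiveQL conversion work above, a minimal sketch using PyHive; the host, database, table, and column names are invented for the example:

```python
# Hypothetical sketch: running a converted HiveQL query from Python.
# Host, database, table, and column names are invented.
from pyhive import hive

conn = hive.Connection(host="hive-gateway.example.com", port=10000,
                       database="campaigns")
cursor = conn.cursor()

# Oracle's SYSDATE - 365 becomes date_sub(current_date, 365) in HiveQL.
cursor.execute("""
    SELECT campaign_id, COUNT(*) AS opens
    FROM email_events
    WHERE event_type = 'open'
      AND event_date >= date_sub(current_date, 365)
    GROUP BY campaign_id
""")
for campaign_id, opens in cursor.fetchall():
    print(campaign_id, opens)
```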
Confidential, Redwood City, CA
Backend Big Data/Hadoop Engineer
Responsibilities:
- Spark RDD and Spark SQL/Hive SQL development (see the sketch after this list).
- NoSQL database modeling.
- AWS cloud computing and architecture with Hadoop on Big Data.
- Built cloud applications on AWS with S3, EMR, Hive, Pig, HBase, MongoDB, Cassandra, and Spark.
- Backend NoSQL database design, development, architecture, and testing.
- Solr programming
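A minimal PySpark sketch of the Spark RDD and Spark SQL work above, assuming a hypothetical JSON event dataset on S3; the bucket path and column names are illustrative:

```python
# Minimal PySpark sketch showing both the DataFrame/SQL and RDD APIs.
# The S3 path and column names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# DataFrame / Spark SQL side: read JSON events and aggregate by day.
events = spark.read.json("s3://example-bucket/events/")
events.createOrReplaceTempView("events")
daily = spark.sql("""
    SELECT to_date(ts) AS day, COUNT(*) AS n
    FROM events
    GROUP BY to_date(ts)
""")
daily.show()

# RDD side: the same rollup with low-level transformations,
# assuming ts is an ISO timestamp string (date = first 10 chars).
counts = (events.rdd
          .map(lambda row: (row["ts"][:10], 1))
          .reduceByKey(lambda a, b: a + b))
print(counts.take(5))
```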
Confidential, San Jose, CA
Big Data and BI/BW Consultant
Responsibilities:
- Create and maintain scripts and programs to extract, transform, and load data
- Automate ETL and aggregation processes using Bash/PHP and/or other scripting languages
- Create logging and monitoring for all data warehouse and ETL processes so the operations team can monitor them
- Respond to Data Warehouse and ETL process alerts, with support from operations team.
- Create, maintain and automate processes to distribute data warehouse extracts to various users
- Expertise with Splunk UI/GUI development and operations roles.
- Continually monitor, measure, and improve all data warehouse and ETL processes for speed, reliability, and accuracy.
- Perform analyses and execute improvements regarding overall data quality, including making recommendations to the business regarding inputs.
- Source and/or create tools to deliver monitoring metrics and dashboard reports to allow the rest of the company to understand the quality and timeliness of data warehouse data.
- Install and integrate third-party data maintenance tools, such as address-cleaning software.
- Use new technologies such as Hadoop and columnar databases to serve data warehousing needs.
- Confidential big data analytics platform development, testing, and deployment
- Working on data integration application design, development, and testing
- Using the Pentaho data integration tool to create data integration jobs
- Using MapR to run Hadoop/YARN jobs
- Moving enterprise data from Oracle to Hive for the big data platform (see the sketch after this list)
- Moving WebEx, voice, phone, video, and email enterprise data to Hive and building data marts.
- Writing user stories, tasks, plans, and test cases
- Writing scripts for automated testing
- Writing design documents for enterprise data onboarding
- Analyzing deployment logs for errors
- Monitoring the big data staging environment
- Monitoring the Hive staging environment
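As a sketch of the Oracle-to-Hive data movement above, a hypothetical Python wrapper around a Sqoop import; the JDBC connection string, table, and Hive target are placeholders:

```python
# Hypothetical wrapper around a Sqoop import moving an Oracle table
# into Hive. Connection string, table, and target names are placeholders.
import subprocess

cmd = [
    "sqoop", "import",
    "--connect", "jdbc:oracle:thin:@db.example.com:1521/ORCL",
    "--username", "etl_user",
    "--password-file", "/user/etl/.oracle.pw",  # avoid inline passwords
    "--table", "WEBEX_USAGE",
    "--hive-import",
    "--hive-table", "staging.webex_usage",
    "--num-mappers", "4",
]
subprocess.run(cmd, check=True)
```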
Confidential, Santa Clara, CA
Big Data DevOps Consultant
Responsibilities:
- Building and maintaining production systems on AWS using EC2, RDS, S3, ELB, CloudFormation, etc., and interacting with the AWS APIs; equally comfortable in a traditional data center setting.
- Using scripting languages (Python and/or Ruby) as well as shell environments such as Bash
- Administering Linux (CentOS, RHEL, Ubuntu) systems.
- Using Puppet, Chef, Ansible, or Salt.
- Monitoring, metrics, and visualization tools for network, server, and application status (Zenoss, Sensu, Nagios, Graphite, collectd, Ganglia, etc.)
- Using IPS, WAF, and additional security layers (LDAP, SSO, two-factor authentication)
- Maintaining RDBMS (PostgreSQL and MySQL) and NoSQL stores (Cassandra, DynamoDB, Couchbase, MongoDB)
- Spark RDD development and NoSQL database modeling
- Design/Architecture Big Data Hadoop Testing Frameworks
- Building Hadoop cluster in Cloud and local
- Working with the AWS cloud environment: EMR, Redshift, S3
- Working with Scala, Java, Python, Kafka, Akka, SQL, MongoDB, JSON, Avro, and Tableau (see the Kafka sketch after this list)
- Working with Git, SBT, Ant, Maven, Ganglia, and Jenkins
- Working with Hadoop, HBase, ZooKeeper, Oozie, Scalding, Spark, and Shark
- Testing automation and coding in Scala
- Worked with ETL and reporting tools (OBIEE, SAP, etc.)
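To illustrate the Kafka work noted above, a hedged kafka-python sketch that produces and consumes JSON messages; the broker address and topic name are placeholders:

```python
# Hypothetical kafka-python sketch: produce JSON events, then consume
# them. Broker address and topic name are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="broker.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("clickstream", {"user": 42, "page": "/home"})
producer.flush()

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="broker.example.com:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.value)
    break  # demonstrate a single message, then stop
```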
Confidential, Milpitas, CA
Big Data and BI Consultant
Responsibilities:
- Hadoop/MapR BW/BI data warehouse project.
- Big data migration from Informatica and Teradata.
- Load data from different sources into the newly built Hadoop analytics platform
- Build the Hadoop QA team
- Use Hadoop/Hive/Sqoop/Bash to deploy data loads and query data.
- Monitor the QA environment for Hadoop problems
- Test reports from the MapR reporting tool
- Provide ideas and working processes to other teams
- Review other teams' reports and code to understand the coding logic
- Managed a technical team and functioned as a team lead.
- Worked with the Hadoop stack (e.g., MapReduce, Sqoop, Pig, Hive, HBase, Flume) and related/complementary open-source platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef).
- Worked with ETL (Extract-Transform-Load) tools (e.g., Informatica, Talend, Pentaho).
- Worked with BI tools and reporting software (e.g., MicroStrategy, Cognos, OBIEE, Pentaho).
- Worked with analytical tools, languages, or libraries (e.g., SAS, SPSS, R, Mahout).
- Supported business development activities to shape and communicate proposed solutions to client executives.
- Implemented ETL applications
- Implemented reporting applications
- Applied and implemented custom analytics support
- Administered relational databases
- Data migration from existing data stores
- Infrastructure and storage design
- Developed capacity plans for new and existing systems (see the sketch below).
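A back-of-the-envelope sketch of the kind of capacity planning mentioned in the last bullet; the ingest volume, retention, and headroom figures are invented inputs, not numbers from the engagement:

```python
# Illustrative HDFS capacity estimate: raw data is tripled by the
# default replication factor, plus headroom for temp/shuffle space.
# All input figures below are made up for the example.
DAILY_INGEST_TB = 0.5   # assumed raw ingest per day
REPLICATION = 3         # HDFS default replication factor
HEADROOM = 1.25         # temp space, compaction, shuffle
RETENTION_DAYS = 365

raw_tb = DAILY_INGEST_TB * RETENTION_DAYS
required_tb = raw_tb * REPLICATION * HEADROOM
print(f"Raw data retained: {raw_tb:.1f} TB")
print(f"Provisioned HDFS capacity needed: {required_tb:.1f} TB")
```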
Confidential, San Jose, CA
Big Data and BI Consultant
Responsibilities:
- Conducted detailed design of applications developed on Hadoop platforms (feature testing, regression testing, acceptance testing, and sanity testing)
- Implemented business analysis tools with Hadoop MapReduce scripts, moving ETL data to the data warehouse for the BI and enterprise analysis platform
- Administered log analysis scripts for the business analyst tool with HDFS Hadoop (file-system level)
- Advised on I/O optimization solutions for Hadoop and analytical workloads
- Ran benchmarks on Hadoop/HBase clusters
- Supported development of Hadoop and Vertica Analytics Platform activities.
- Tested NetApp Open Solution for Hadoop (NOSH) settings
- Supported and scaled Hadoop systems
- Supported cluster failover tests and documented the results across various configurations
- Administered enterprise-grade storage arrays and eliminated Hadoop network bottlenecks
- Supported hot-pluggable disk shelves, added storage, and administered services
- Supported NFS and HDFS file systems
- Deployed network-free hardware RAID
- Provided day-to-day support for Hadoop hardware and software issues
- Used automation scripts to run performance testing of the new storage OS design (see the sketch below).
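A hypothetical automation sketch for the performance testing above, driving the standard TestDFSIO benchmark and scraping its throughput line; the jar path and test sizes are placeholders, and the output format may vary by Hadoop version:

```python
# Hypothetical driver for the TestDFSIO benchmark. Jar path and test
# sizes are placeholders; output parsing assumes the usual
# "Throughput mb/sec:" line, which may vary by Hadoop version.
import re
import subprocess

JAR = "/opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-tests.jar"

def run_dfsio(mode: str, nr_files: int = 10, file_size: str = "1GB") -> str:
    cmd = ["hadoop", "jar", JAR, "TestDFSIO",
           f"-{mode}", "-nrFiles", str(nr_files), "-fileSize", file_size]
    res = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return res.stdout + res.stderr

for mode in ("write", "read"):
    out = run_dfsio(mode)
    m = re.search(r"Throughput mb/sec:\s*([\d.]+)", out)
    if m:
        print(f"{mode} throughput: {m.group(1)} MB/s")
```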
Confidential, Concord, MA
Big Data System Consultant
Responsibilities:
- Supported over 100 projects and many functional teams
- Supported analytics on massive amounts of data and BI
- Communicated and tracked defects to closure
- Administered large-dataset testing with Big Data NoSQL Hadoop and Oracle/DB2 scripts
- Wrote, reviewed, and executed scripts, and tracked defects
- Expertise with Splunk UI/GUI development and operations roles.
- Used scripting languages (Perl and Python) for automated testing and test-data conditioning (see the sketch after this list)
- Used scripting languages for ETL loads with BW.
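An illustrative Python sketch of the test-data conditioning work above, validating a hypothetical ETL extract; the file name and expected columns are invented:

```python
# Illustrative data-validation check for an ETL extract. The file
# path and expected schema are invented for the example.
import csv

EXPECTED_COLUMNS = ["customer_id", "order_date", "amount"]

def validate_extract(path):
    errors = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if reader.fieldnames != EXPECTED_COLUMNS:
            errors.append(f"unexpected header: {reader.fieldnames}")
        for i, row in enumerate(reader, start=2):
            if not row.get("customer_id"):
                errors.append(f"line {i}: missing customer_id")
            try:
                float(row.get("amount") or "")
            except ValueError:
                errors.append(f"line {i}: bad amount {row.get('amount')!r}")
    return errors

if __name__ == "__main__":
    for err in validate_extract("daily_orders.csv"):
        print(err)
```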
Confidential, Walnut Creek, CA
Middleware Architect
Responsibilities:
- Designed all middleware, with 3,000 interfaces, for Big Data.
- IBM MQ design/architecture support (see the sketch after this list).
- Designed the QA environment and provided production support.
- Designed the message broker and provided production support.
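To illustrate the IBM MQ interface work above, a hedged pymqi sketch that puts one message on a queue; the queue manager, channel, host, and queue names are placeholders:

```python
# Hypothetical pymqi sketch: connect to a queue manager and put one
# message. Queue manager, channel, host, and queue names are placeholders.
import pymqi

queue_manager = "QM1"
channel = "DEV.APP.SVRCONN"
conn_info = "mq.example.com(1414)"

qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, "APP.REQUEST.QUEUE")
queue.put(b'{"order_id": 123, "action": "create"}')
queue.close()
qmgr.disconnect()
```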