Cloud and BigData Engineer in DevOps Resume

CAREER OBJECTIVES:

Seeking a senior engineering position in the development and testing of distributed systems infrastructure and applications

SUMMARY:

  • Solution-oriented, customer-focused software architect with 13+ years of experience building, testing, automating, and delivering distributed BigData and cloud systems and applications using DevOps and Agile methodologies.
  • Well-rounded experience across the complete software lifecycle: development, testing, project management, system administration, deployment, and product support.

SUMMARY OF TECHNICAL EXPERIENCE:

Cloud technologies: OpenStack (Neutron, Nova, Keystone), AWS (EC2, S3, Redshift)

BigData technologies: Data science with sentiment analysis (NLTK), Hadoop (Cloudera), Spark, Kafka, Pig, Hive, Flume, ElasticSearch, Logstash, Kibana, RabbitMQ

DevOps and monitoring: Docker, Ansible, Sensu, Nagios, Zabbix, Kubernetes, Ganglia

Operating systems and programming: Linux, Windows Server, Unix shell scripting, Python, PL/SQL, Git

Business Intelligence: Cognos report development, Informatica development and data modeling, Oracle, MySQL

TECHNOLOGY:

Cloud Services: OpenStack, AWS, Redshift

DevOps: Ansible, Docker, Kubernetes, Jenkins

Big Data (Hadoop, Spark): Data science with NLTK, Cloudera, Hortonworks (complete Hadoop ecosystem: HDFS, Pig, Hive, HBase, Sqoop, Flume), Cassandra

Programming, Scripting: Shell, Python, PL/SQL

Business Intelligence: Cognos, Informatica

Databases: Oracle, MySQL

Messaging Infrastructure: Kafka, RabbitMQ

Monitoring Infrastructure: Zabbix, Nagios, Ganglia, Sensu

Operating Systems: Linux (RedHat, CentOS, Ubuntu), Windows Server

WORK EXPERIENCE:

Cloud and BigData Engineer in DevOps

Confidential

Technology: OpenStack, ELK (ElasticSearch, Logstash, Kibana), Docker, Ansible, RHEL 7, Python, Kafka, Cloudera, BigData

Responsibilities:

  • As part of a continuously delivering Agile team, develop, test, and deploy OSS and Big Data platform features
  • Develop ongoing test automation using an Ansible, Python, and Docker based framework
  • Use Keystone, Nova, Neutron, and other OpenStack APIs to set up cloud environments
  • Deploy the BigData CDH 5 parcel on OpenStack using Ansible playbooks
  • Automate cluster health checks and Hive/Pig/Impala/Sqoop/Spark tests using the Python Cloudera Manager API (see the sketch after this list)
  • Benchmark and stress test the Hadoop cluster with TeraSort and TestDFSIO (Hadoop/Hive/Pig cluster computation)
  • Create Docker images with the required libraries and services in place for the management node
  • Use Ansible to set up and tear down the ELK stack (ElasticSearch, Logstash, Kibana)
  • Set up and test monitoring infrastructure based on Sensu, Nagios, Zabbix, and Kafka
  • Design and execute load, performance, and stress tests on the OSS and Big Data platforms, leveraging the OpenStack Tempest and Rally frameworks where possible
  • Design and execute messaging integrity and throughput testing of the Kafka infrastructure
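A minimal sketch of the cluster health check mentioned above, assuming the open-source cm_api Python client; the host name and credentials are placeholders rather than project values.

    # Health-check sketch using the cm_api Python client (pip install cm-api).
    from cm_api.api_client import ApiResource

    CM_HOST = "cm-master.example.com"   # hypothetical Cloudera Manager host

    def check_cluster_health():
        api = ApiResource(CM_HOST, username="admin", password="admin")
        unhealthy = []
        for cluster in api.get_all_clusters():
            for service in cluster.get_all_services():
                # serviceState is e.g. STARTED; healthSummary is GOOD/CONCERNING/BAD
                if service.serviceState != "STARTED" or service.healthSummary != "GOOD":
                    unhealthy.append((cluster.name, service.name,
                                      service.serviceState, service.healthSummary))
        return unhealthy

    if __name__ == "__main__":
        for cluster, svc, state, health in check_cluster_health():
            print("UNHEALTHY: %s/%s state=%s health=%s" % (cluster, svc, state, health))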

AWS Data Cloud Migration Engineer

Confidential

Technology: Amazon Web Services, EMR (Hadoop: Pig, Hive), Redshift, Oracle 11g

Responsibilities:

  • Perform dimensional modeling for the Sales data mart on the Amazon Redshift database
  • Build complex ETL using Pig and Hive in the Hadoop ecosystem
  • Optimize queries on the Redshift database
  • Export table scripts and data files from Oracle with Toad
  • Upload data files to S3 buckets
  • Write Pig scripts and stage them in an S3 location
  • Set up an EMR cluster with Pig that reads the data files and Pig scripts from the S3 bucket and performs data cleaning operations such as removing headers and unwanted records (see the sketch after this list)
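A sketch of the S3 staging and EMR launch described above, using boto3; the bucket, keys, script name, and instance sizes are illustrative placeholders, and the Pig step arguments may need adjusting for the EMR release in use.

    import boto3

    BUCKET = "sales-datamart-staging"        # hypothetical bucket name
    PIG_SCRIPT = "scripts/clean_sales.pig"   # hypothetical Pig script key

    # Stage the raw extract and the Pig script in S3
    s3 = boto3.client("s3")
    s3.upload_file("sales_extract.dat", BUCKET, "raw/sales_extract.dat")
    s3.upload_file("clean_sales.pig", BUCKET, PIG_SCRIPT)

    # Launch a transient EMR cluster that runs the cleaning script as a Pig step
    emr = boto3.client("emr")
    response = emr.run_job_flow(
        Name="sales-cleaning",
        ReleaseLabel="emr-5.0.0",
        Applications=[{"Name": "Pig"}],
        Instances={
            "MasterInstanceType": "m4.large",
            "SlaveInstanceType": "m4.large",
            "InstanceCount": 3,
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        Steps=[{
            "Name": "pig-cleaning-step",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                # command-runner.jar / pig-script convention; verify against
                # the EMR documentation for the release actually used
                "Jar": "command-runner.jar",
                "Args": ["pig-script", "--run-pig-script", "--args",
                         "-f", "s3://%s/%s" % (BUCKET, PIG_SCRIPT)],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Started cluster:", response["JobFlowId"])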

Sr. Big Data Hadoop Architect

Confidential

Technology: Hadoop (Confidential BigInsight) - MapReduce, HDFS, and ecosystem components (Hive, Pig, HBase, Sqoop, Oozie), Unix, Python, and NLTK

Responsibilities:

  • Determine the cluster size
  • Work with the team to troubleshoot any issues with the cluster
  • Develop custom crawlers to fetch relevant social media data from Twitter
  • Create the Hive schema structure on the ingested data
  • Map Hive external tables to the extracted Twitter data
  • Crawl all other sources with the SMA built-in board reader and map them to Hive external tables in the raw stage
  • Join the board reader data with the extracted Twitter data using Pig ETL
  • Perform sentiment analysis with NLTK (see the sketch after this list)
  • Export the data to the DB2 database with Sqoop
  • Integrate Hadoop with DB2 to support business intelligence reporting with Cognos
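One common NLTK approach to the sentiment scoring mentioned above is the VADER analyzer; the classifier actually used in the project may have differed, and the sample tweets are illustrative only.

    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon")          # lexicon required by the analyzer
    sia = SentimentIntensityAnalyzer()

    def label_sentiment(text):
        """Return positive/negative/neutral based on the compound score."""
        score = sia.polarity_scores(text)["compound"]
        if score >= 0.05:
            return "positive"
        if score <= -0.05:
            return "negative"
        return "neutral"

    # Example: score a couple of tweets as they would arrive from the Hive-mapped extract
    tweets = ["Love the new release!", "The outage made this product unusable."]
    for tweet in tweets:
        print(label_sentiment(tweet), "-", tweet)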

BigData Engineer

Confidential

Technology: Hortonworks, HCatalog, HIVE, PIG, Sqoop

Responsibilities:

  • Write a Python daemon that watches the network storage area for the latest files sent by Oracle GoldenGate (see the sketch after this list)
  • In the raw stage, send the data to the edge nodes as-is
  • Push the raw data to HDFS using the directory structure specified as a parameter in an Oracle table
  • Create Hive external tables and add partitions based on the date; up to this point the raw data is unchanged
  • Run ETL with Pig, using HCatalog for parameters, to identify the record sets with new, unmodified, and modified entries
  • Create a temporary history table as the union of the above three datasets, then delete the old history table
  • Load it into Hive dynamic partitions for analysis with table joins
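A minimal sketch of the file-watching daemon described above; the watch directory, polling interval, and HDFS landing path are illustrative placeholders, and the HDFS upload is shown via the hdfs command-line client.

    import os
    import time
    import subprocess

    WATCH_DIR = "/mnt/goldengate/outbound"   # hypothetical network storage path
    HDFS_DIR = "/data/raw"                   # hypothetical HDFS landing directory

    def new_files(seen):
        """Return files in WATCH_DIR that have not been processed yet."""
        return [f for f in os.listdir(WATCH_DIR) if f not in seen]

    def push_to_hdfs(filename):
        local_path = os.path.join(WATCH_DIR, filename)
        # Raw stage: copy the file unchanged into the HDFS landing directory
        subprocess.check_call(["hdfs", "dfs", "-put", local_path, HDFS_DIR])

    def main():
        seen = set()
        while True:
            for f in new_files(seen):
                push_to_hdfs(f)
                seen.add(f)
            time.sleep(60)   # poll once a minute

    if __name__ == "__main__":
        main()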

Data Architect

Confidential

Technology: Cognos Report Studio, Cube Transformer, Informatica

Role: Design and architect data warehouses and data marts for various operational units.

Responsibilities:

  • Data Modeling and ETL data Mapping in Informatica
  • Report design in COGNOS
  • Construction of Cognos cubes and Report Studio reports

Sr. Data Architect

Confidential

Technology: Cognos 8.3, Report Studio, Framework Manager

Responsibilities:

  • Analysis of the requirements and data model of the application
  • Data modeling in Framework Manager
  • Prepare report specifications
  • Prepare technical specifications
  • Develop reports
  • Prepare unit test documents for every module

ETL and Report Developer

Confidential

Technology: Business Objects, Oracle, SQL, PL/SQL

Responsibilities:

  • Performing detailed analysis of user requirements
  • Coordinating with the Confidential WT point of contact to resolve queries
  • Developing reports
  • Working on ETL mapping development in Informatica
