
Big Data Cloud Analytics Developer Resume


Professional Summary:

  • Over 9 years of IT experience in application design, development, testing, and technical support, spanning Python, Big Data, and cloud tools.
  • Experience working in all stages of the Software Development Life Cycle (SDLC), including requirements, analysis and design, implementation, integration and testing, deployment, and maintenance.
  • Hands-on experience with Hadoop technologies such as MapReduce, HDFS, Hive, Spark, Oozie, Pig, Kafka, Flume, NiFi, Impala, Storm, and ZooKeeper.
  • Hands-on experience importing and exporting data between HDFS and RDBMS using Sqoop.
  • Good experience with the Cassandra NoSQL database.
  • Good understanding of distributed technologies such as Spark and Hadoop.
  • Integrated Apache Kafka for data ingestion.
  • Good understanding of machine learning algorithms, statistical analysis and predictive modeling.
  • Developed large-scale data ingestion pipelines and designed robust ETL workflows.
  • Hands-on experience with the AWS Cloud platform and its services, including EC2, S3, EBS, VPC, ELB, IAM, Glacier, Elastic Beanstalk, Route 53, Auto Scaling, Lambda, CloudFront, CloudWatch, CloudTrail, and CloudFormation.
  • Experience using the AWS SDKs for Java and Python.
  • Experience uploading data, hosting static websites, encrypting data, implementing bucket policies, and setting up CORS in S3 using the web console, AWS CLI, and AWS SDK for Python (Boto3).
  • Experienced with Jenkins as a Continuous Integration / Continuous Deployment (CI/CD) tool, with strong experience in the Ant and Maven build frameworks.
  • Exceptional expertise in technologies such as Core Java, HTML5, CSS3, AJAX, XHTML, JavaScript, jQuery, jQuery Mobile, Bootstrap, Dojo, Backbone.js, Node.js, and AngularJS.
  • Experience in data visualization using Tableau.
  • Hands-on experience running jobs with Rundeck.
  • Strong abilities in design patterns, database design, and normalization, including writing stored procedures, triggers, views, and functions in MS SQL Server, Oracle, and PostgreSQL.
  • Adept at preparing business requirements documents, defining project plans, writing system requirements specifications.
  • Developed a data pipeline using Kafka and Storm for real-time streaming of data into HDFS.
  • Experience using Sqoop and ZooKeeper, including coordinating cluster manager services through ZooKeeper.
  • Developed Python classes around the respective APIs so they could be incorporated into the overall application.
  • Experience setting up and developing flows in Apache NiFi using processors and process groups.
  • Worked with different Hadoop distributions, including Hortonworks and Cloudera.
  • Worked with different file formats such as SequenceFile, Avro, ORC, Parquet, and CSV, using various compression techniques.
  • Experience with Apache Flume for collecting, aggregating, and moving large volumes of data from application servers.
  • Expertise in writing Hive UDFs and GenericUDFs to incorporate complex business logic into Hive queries.
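A minimal sketch of the kind of defensive transform step the ETL work above relies on (the field names, cleaning rules, and newline-delimited-JSON output here are hypothetical, purely for illustration):

```python
import csv
import io
import json

def extract(csv_text):
    """Parse raw CSV text into a list of dicts (one per row)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Normalize fields and drop malformed records instead of
    failing the whole batch. Field names are hypothetical."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "user": row["user"].strip().lower(),
                "amount": float(row["amount"]),
            })
        except (KeyError, ValueError):
            continue  # skip records with missing or non-numeric fields
    return cleaned

def load(records):
    """Serialize records as newline-delimited JSON, a common
    landing format for HDFS/S3 sinks."""
    return "\n".join(json.dumps(r) for r in records)
```

The per-record try/except is the robustness point: one bad row degrades to a skipped record rather than a failed pipeline run.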
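The S3 bucket-policy and CORS setup mentioned above can be sketched in Python. The helpers below only build the policy and CORS documents; the bucket name and origin are placeholders, and the boto3 calls shown in comments assume valid AWS credentials:

```python
import json

def build_public_read_policy(bucket_name):
    """Return an S3 bucket policy (as a JSON string) granting public
    read access to every object in the bucket, in the shape accepted
    by boto3's put_bucket_policy. Bucket name is a placeholder."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }
    return json.dumps(policy)

def build_cors_config(allowed_origins):
    """Return a CORS configuration dict in the shape accepted by
    boto3's put_bucket_cors."""
    return {
        "CORSRules": [{
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedOrigins": allowed_origins,
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    }

# Applying these with boto3 would look roughly like:
#   s3 = boto3.client("s3")
#   s3.put_bucket_policy(Bucket="my-bucket",
#                        Policy=build_public_read_policy("my-bucket"))
#   s3.put_bucket_cors(Bucket="my-bucket",
#                      CORSConfiguration=build_cors_config(["https://example.com"]))
```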
