
Hadoop Developer Resume


SUMMARY:

  • 12 years of experience in High Performance Computing, Optimization, and Big Data.
  • Developing Big Data applications and installing, administering, and monitoring Hadoop ecosystems.
  • Design and implementation of large-scale, fault-tolerant parallel applications over clusters and GPUs.
  • Strong experience in the acquisition and installation of Confidential infrastructures, including clusters.
  • Extensive experience and collaborative work in Big Data, Confidential, and Distributed Systems.

TECHNICAL SKILLS:

Big Data: HDFS, Hadoop/MapReduce, Spark, Yarn, Pig, Hive, Sqoop, Flume, Kafka, Cassandra, Hue, Ambari, Ranger, Kerberos

Hadoop Solutions: Databricks, Hortonworks HDP, HDF, Cloudera CDH, Azure HDInsight

BI: SSAS, SSIS, SSRS, Tableau, Visual Studio

Database: MySQL, PostgreSQL, SQL Server

Cluster: Slurm, Open Confidential, OAR, OARGRID, Nagios, Ganglia, Puppet, KCONF, BULL ClusterDB

Systems: Red Hat, CentOS, Debian, Ubuntu, Windows

Big Data Programming: Java/MapReduce, Scala, PySpark, SparkSQL, MLlib

Languages: C/C++, Java, Python, Scala, R

Cluster/Multicore: MPI/OpenMP, Pthread

GPU: CUDA, OpenCL, jCUDA

PROFESSIONAL EXPERIENCE:

Hadoop Developer

Confidential

Responsibilities:

  • Developing Big Data applications using MapReduce, Hive, Spark, and Databricks
  • Installing the Hadoop platform using Hortonworks HDP 2.x, HDP 3, and HDF 3
  • Securing the Hadoop platform with Kerberos and Ranger
  • Integrating HDP Cluster, Ranger and Ambari with the existing Active Directory system
  • Testing, monitoring and performance evaluation
  • Connecting HDF and HDP
Technical Environment: HDP 3.0.1.0/HDP 2.x, HDF 3.2.0, Ambari 2.7.1.0, Hadoop, MapReduce, Yarn, HDFS, Hive, Pig, Spark, Spark Streaming, Flume, Kafka, Sqoop, Databricks, Azure HDInsight, SSAS, SSIS, SSRS, Tableau, Visual Studio, Python, PySpark, SparkSQL, Scala, R
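The MapReduce development listed above can be illustrated with a minimal word-count sketch in pure Python; the map, shuffle, and reduce phases mirror what a Hadoop MapReduce job does, though a real job would use the Hadoop Java API or Hadoop Streaming (the sample input lines are illustrative, not from an actual project):

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Emit (word, 1) pairs, as a MapReduce mapper would.
    return [(word.lower(), 1) for word in line.split()]

def shuffle_phase(pairs):
    # Group values by key, as the framework's shuffle/sort step does.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Sum the counts for each word, as a reducer would.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big clusters", "data pipelines"]
pairs = chain.from_iterable(map_phase(line) for line in lines)
counts = reduce_phase(shuffle_phase(pairs))
```

In a real cluster the shuffle is performed by the framework between distributed mappers and reducers; here it is a single in-memory grouping.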

Confidential and Big Data Consultant

Confidential

Responsibilities:

  • Developing parallel and Big Data applications
  • Installing the Apache Hadoop ecosystem and Spark
  • Installation, administration, and monitoring of computing clusters
  • Designing and supervising a fleet and delivery management project with geolocation
Technical environment: Hadoop/MapReduce, Hive, Pig, HBase, Spark, PySpark, Java, C/MPI, OpenMP, CUDA, OpenCL, VMware ESXi, vSphere, pfSense
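The geolocation component of a fleet and delivery management system typically needs great-circle distances between GPS fixes; a minimal haversine sketch in Python (the function name and sample coordinates are illustrative, not taken from the original project):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

# Paris to Lyon: roughly 390 km by great circle.
d = haversine_km(48.8566, 2.3522, 45.7640, 4.8357)
```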

Researcher/Engineer

Confidential

Responsibilities:

  • Leading a research and development team in Confidential, Optimization and Big Data
  • Design and implementation of large-scale parallel and distributed applications over clusters and GPUs
  • Installing, configuring, administering, and monitoring computing and Hadoop clusters
  • Developing Hadoop/MapReduce and Spark/PySpark applications
  • Design and installation of new Confidential solutions
Technical Environment: Clusters (BULL, Supermicro, NVIDIA), Puppet, KCONF, SLURM, OAR, OARGRID, Hadoop/MapReduce, Yarn, Hive, Pig, Spark, PySpark, Nagios, Ganglia, BULL ClusterDB, C/C++, Java, CUDA, OpenCL, OpenMP, MPI
