
Sr. Systems Engineer / Hadoop Developer Resume


Coralville, IA

PROFESSIONAL SUMMARY:

  • 5+ years of IT development experience, including 2.5+ years focused on the Big Data ecosystem using the Hadoop framework and related technologies such as HDFS, MapReduce, Hive, Pig, Spark, HBase, Flume, Oozie, Sqoop, Impala, Kafka, and ZooKeeper.
  • Excellent knowledge of distributed storage (HDFS) and distributed processing (MapReduce, YARN) for real-time streaming and batch processing.
  • Experience in developing MapReduce programs in Java to perform data transformations.
  • Experience in writing custom MapReduce programs in Java and extending Hive (HQL) and Pig core functionality by writing custom UDFs (a minimal sketch follows this list).
  • Extensive experience with big data query tools like Pig Latin and HiveQL.
  • Experience in extracting data from RDBMSs into HDFS using Sqoop.
  • Experience in collecting logs from log collectors into HDFS using Flume.
  • Good understanding of NoSQL databases such as HBase, MongoDB, and Kudu.
  • Experience in analyzing data in HDFS through MapReduce, Hive and Pig.
  • Excellent understanding of Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Application Development using Java, Hadoop, RDBMS and shell scripting with performance tuning.
  • Experience in managing Hadoop clusters and services using Cloudera Manager, an end-to-end tool for managing Hadoop operations.
  • Experience in building and maintaining multiple Hadoop clusters of different sizes and configurations, and in setting up the rack topology for large clusters.
  • Good understanding of XML and JSON files.
  • Experience working with build tools like Maven and continuous integration tools like Jenkins.
  • Good working knowledge of data visualization using Tableau.
  • Experienced in handling Avro data files in MapReduce programs using the Avro data serialization system.
  • Experience with the Oozie workflow engine for running workflow jobs with actions that run Hadoop MapReduce and Pig jobs.
  • Experienced in loading unstructured data into HDFS using Flume/Kafka.
  • Good knowledge of Apache Spark.
  • Programmed datasets with transformations and actions using RDDs (see the RDD sketch after this list).
  • Used DataFrames to query and join datasets.
  • Created Hive tables using Spark's HiveContext.
  • Good knowledge of Scala concepts such as creating arrays, lists, and collection objects, inheritance, etc.
  • Performed Spark Streaming on real-time data from various sources.
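
As an illustration of the custom UDF work mentioned above, the following is a minimal sketch of a Hive UDF in Java; the class name, function, and column semantics are hypothetical, not taken from the projects described here.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical UDF: strips the domain part from an email address.
    public final class StripDomain extends UDF {
        public Text evaluate(final Text email) {
            if (email == null) {
                return null;
            }
            String value = email.toString();
            int at = value.indexOf('@');
            return at >= 0 ? new Text(value.substring(0, at)) : email;
        }
    }

Once packaged into a JAR, such a function would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION, after which it can be called from HQL like any built-in function.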
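
And a minimal Java sketch of the RDD transformation/action pattern referenced above; the input path is hypothetical. Transformations (flatMap, mapToPair, reduceByKey) are lazy, and the take action triggers the actual computation.

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class RddSketch {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("rdd-sketch");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                JavaRDD<String> lines = sc.textFile("hdfs:///data/input.txt"); // hypothetical path
                JavaPairRDD<String, Integer> counts = lines
                        .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator()) // transformation
                        .mapToPair(word -> new Tuple2<>(word, 1))                      // transformation
                        .reduceByKey(Integer::sum);                                    // transformation
                counts.take(10).forEach(System.out::println);                          // action
            }
        }
    }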

TECHNICAL SKILLS:

Hadoop: MapReduce, Spark with Scala, Pig, Hive, Sqoop, ZooKeeper, Flume, Oozie, Impala, Kafka, YARN, HDFS

NoSQL: HBase, Cassandra

Java Technologies: J2EE, JSTL, JDBC, JSP, Java Servlets, Struts, Spring, Hibernate

Languages: C, C++, Java, Scala, Pig Latin, HiveQL, UNIX shell scripts

Web Services: XML, SOAP, REST

Web Technologies: JavaScript, CSS, CSS3, HTML, HTML5, DHTML, XML, Bootstrap, XHTML, jQuery

Databases: DB2, SQL Server, MySQL, Teradata

Web Servers: JBoss, WebLogic, WebSphere, Apache Tomcat

Modeling Tools: UML on Rational Rose, Rational ClearCase, Enterprise Architect, Microsoft Visio, Eclipse

Build Tools: Apache Maven, Ant

PROFESSIONAL EXPERIENCE:

Confidential, Coralville, IA

Sr. Systems Engineer / Hadoop Developer

Responsibilities:

  • Worked with development and administration teams to quickly stream and profile data.
  • Streamed data from MySQL and Oracle into an existing data lake.
  • Worked heavily with Kafka, the Hortonworks platform, and Kubernetes daily.
  • Analyzed, designed, developed, tested, implemented, and maintained computer systems to meet functional objectives of the business.
  • Designed and implemented projects and handled some project management for small teams.
  • Worked as part of a Scrum team following SAFe agile practices.
  • Used version control tools such as Git and Bitbucket, and Jenkins for builds.
  • Used message broker solutions such as RabbitMQ.
  • Used production monitoring solutions such as Splunk and New Relic.
  • Developed scripts and automated end-to-end data management and synchronization between all the clusters.
  • Worked with Kafka when dealing with raw data, transforming it into new Kafka topics for further consumption (a minimal sketch follows this list).
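
A minimal sketch of the raw-topic-to-new-topic transformation described above, using the Kafka Streams API; the topic names and cleanup logic are hypothetical.

    import java.util.Properties;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class RawToCleanTopic {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "raw-to-clean");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> raw = builder.stream("raw-events");   // hypothetical topic
            raw.filter((key, value) -> value != null && !value.isEmpty()) // drop empty records
               .mapValues(value -> value.trim().toLowerCase())            // hypothetical cleanup
               .to("clean-events");                                       // hypothetical topic

            new KafkaStreams(builder.build(), props).start();
        }
    }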

Environment: Big Data, SQL, Java, Kafka, Spark, Kubernetes, MySQL, Oracle, Hortonworks Platform, Git, Bitbucket, DevOps, New Relic, Splunk, Hive, Zeppelin, MongoDB, Maven.

Confidential, Charlotte, NC

Hadoop Developer

Responsibilities:

  • Worked on loading CSV/TXT/Avro/Parquet files using Scala/Java in the Spark framework, processing the data by creating Spark DataFrames and RDDs, and saving the files in Parquet format in HDFS for loading into fact tables using an ORC reader (see the DataFrame sketch after this list).
  • Loaded the data into Spark RDDs and performed in-memory computation to generate the output response.
  • Ingested data in mini-batches and performed RDD transformations on those mini-batches.
  • Good knowledge of setting up batch intervals, slide intervals, and window intervals in Spark Streaming (see the streaming sketch after this list).
  • Used the Oozie workflow engine to run multiple jobs that execute independently.
  • Transformed raw data into new Kafka topics for further consumption downstream.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that internally invoke and run MapReduce jobs.
  • Wrote MapReduce (Hadoop) programs to convert text files into Avro and load them into Hive tables.
  • Implemented workflows using the Apache Oozie framework to automate tasks.
  • Worked with NoSQL databases like HBase, creating HBase tables to load large sets of semi-structured data coming from various sources.
  • Developed design documents considering all possible approaches and identifying the best of them.
  • Loaded data into HBase using both bulk and non-bulk loads (see the HBase sketch after this list).
  • Developed scripts to automate data management end to end and keep all the clusters in sync.
  • Explored Spark for improving the performance and optimization of existing algorithms in Hadoop.
  • Imported data from different sources like HDFS and HBase into Spark RDDs.
  • Experienced with SparkContext, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Scala, and Python.
  • Involved in gathering requirements, design, development, and testing.
  • Developed traits, case classes, etc., in Scala.
  • Ability to work with onsite and offshore team members.
  • Able to work on own initiative; highly proactive, self-motivated, and resourceful, with a strong commitment to the work.
  • Strong debugging and critical-thinking ability, with a good understanding of frameworks and advancements in methodologies and strategies.
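
A minimal Java sketch of the file-format pipeline in the first bullet above: read delimited input as a DataFrame and persist it as Parquet in HDFS. The paths and options are hypothetical.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SaveMode;
    import org.apache.spark.sql.SparkSession;

    public class CsvToParquet {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("csv-to-parquet")
                    .enableHiveSupport() // lets the result be queried as a Hive table
                    .getOrCreate();

            // Read CSV with a header row into a DataFrame (Dataset<Row>).
            Dataset<Row> df = spark.read()
                    .option("header", "true")
                    .option("inferSchema", "true")
                    .csv("hdfs:///staging/source.csv"); // hypothetical path

            // Persist in Parquet format for downstream fact-table loads.
            df.write().mode(SaveMode.Overwrite).parquet("hdfs:///warehouse/fact_stage");

            spark.stop();
        }
    }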
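
A sketch of the batch/window interval setup mentioned above, using the DStream API; the host, port, and interval values are hypothetical.

    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    public class StreamingSketch {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("streaming-sketch");
            // Batch interval: each mini-batch covers 10 seconds of input.
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

            JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999); // hypothetical source

            // Window of 60s, sliding every 20s (both must be multiples of the batch interval).
            lines.window(Durations.seconds(60), Durations.seconds(20))
                 .count()
                 .print();

            jssc.start();
            jssc.awaitTermination();
        }
    }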
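
For the HBase loading bullet, a minimal sketch of the non-bulk path using the standard client API; the table, column family, and values are hypothetical. A true bulk load would instead generate HFiles with HFileOutputFormat2 and hand them to LoadIncrementalHFiles.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBasePutSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("events"))) { // hypothetical table
                Put put = new Put(Bytes.toBytes("row-001"));
                put.addColumn(Bytes.toBytes("d"),          // column family
                              Bytes.toBytes("payload"),    // qualifier
                              Bytes.toBytes("{\"k\":1}")); // value
                table.put(put);
            }
        }
    }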

Environment: Hadoop, MapReduce, HDFS, HQL, Pig, Java, Spark, Kafka, SBT, Maven, Sqoop, ZooKeeper, Scala, Impala.

Confidential

Java Developer

Responsibilities:

  • Designed, developed and deployed various data gathering forms using HTML, CSS, JavaScript, JSP and Servlets.
  • Developed user interface module using JSP, Servlets and MVC framework.
  • Used Struts Tiles libraries for Web page layout, Struts validation via validation.xml and validation-rules.xml for validating user inputs, and the Struts exception handler for exception handling.
  • Used the Validator plug-in to Struts for server-side validation.
  • Designed and developed Struts action classes for the controller responsibility.
  • Created user interfaces with HTML and JSP; designed and developed interactive, dynamic front-end web applications using HTML, Bootstrap, and CSS.
  • Involved in developing various Servlets.
  • Developed JUnit test cases for unit testing.
  • Used Struts to implement the MVC framework for the presentation tier, and simplified client-side scripting of HTML using jQuery, a cross-browser JavaScript library.
  • Used the JDBC API to connect to the database and performed CRUD operations to get and check the data (a minimal DAO sketch follows this list).
  • Implemented design patterns like Singleton, Factory, Data Access Object, and Front Controller.
  • Prepared EJB deployment descriptors using XML and Used JAXB components for transferring the objects between the application and the database.
  • Used Java /J2EE Design patterns like Business Delegate, Session Façade and Service Locator in the project which facilitates clean distribution of roles and responsibilities across various layers of processing.
  • Performed code reviews and walkthroughs of the developed code and coordinated code reviews by component leads.
  • Developed Java Beans to use in JSPs.
  • Designed and developed various user interface screens using JSP, HTML.
  • Developed web interfaces using JSP and JavaScript.
  • Analyzed, designed, and developed components for the business logic.
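
A minimal sketch of the JDBC CRUD and Data Access Object pattern work listed above; the connection URL, credentials, and table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Hypothetical DAO encapsulating CRUD for a customers table.
    public class CustomerDao {
        private static final String URL = "jdbc:sqlserver://db-host:1433;databaseName=app"; // hypothetical
        private static final String USER = "app_user";   // hypothetical
        private static final String PASSWORD = "secret"; // hypothetical

        public void insert(int id, String name) throws SQLException {
            String sql = "INSERT INTO customers (id, name) VALUES (?, ?)";
            try (Connection con = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, id);
                ps.setString(2, name);
                ps.executeUpdate();
            }
        }

        public String findName(int id) throws SQLException {
            String sql = "SELECT name FROM customers WHERE id = ?";
            try (Connection con = DriverManager.getConnection(URL, USER, PASSWORD);
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }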

Environment: Java 1.6, J2EE 1.4, Servlets 2.4, JSP 2.0, JDBC 3.1, Eclipse, HTML, Struts, CSS3, JavaScript, SQL Server, SQL, Spring, Web services, Oracle 10g, PL/SQL, UML, XML, ANT, JUnit, Log4j and Linux.
