
Application Architect / Sr. Hadoop Developer Resume


Phoenix, AZ

SUMMARY

  • 7 years of overall IT experience across technologies including Big Data Hadoop, Java, Splunk and Mainframes.
  • Around 3.5 years of experience in Hadoop and its components.
  • Deep understanding of Hadoop architecture (versions 1.x and 2.x) and of components such as HDFS, YARN and MapReduce, along with Hive, Pig, Sqoop, Oozie, ZooKeeper and NoSQL databases like HBase.
  • Participated in the entire Software Development Life Cycle (SDLC), including requirement analysis, design, development, testing, implementation, production support and post-implementation support of software applications, following the Agile development model from requirement gathering through deployment and production support.
  • Focused on designing and delivering optimal solutions for critical business problems using Big Data technologies.
  • Lead contributor to a Big Data Centre of Excellence covering emerging technologies such as Big Data, text analytics, NoSQL databases and related areas.
  • Keen on building knowledge of emerging technologies in analytics, information management, Big Data, data science and related areas, and on providing the best business solutions.
  • Experience in application development frameworks like Spring and Hibernate, as well as validation plug-ins like the Validator framework.
  • Capable of storing and processing large sets of structured, semi-structured and unstructured data, and of supporting systems application architecture.
  • Good experience with core Java and OOP concepts, as well as shell scripting, cron jobs and Ehcache.
  • Practical knowledge of Spark, Scala and Python. Good working experience with MySQL.
  • Extensive experience in Banking, HealthCare and Retail domains.
  • Strong experience with version control tools such as Tortoise SVN and Git. Experienced in developing J2EE applications in IDEs such as Eclipse and NetBeans. Expertise in build scripts with Ant and Maven, and in build automation.

TECHNICAL SKILLS

Big Data Distributions: Cloudera 4, MapR M5 and M7

Big Data Ecosystem: HDFS, MapReduce, Pig, Hive, Sqoop, Oozie, HBase, MongoDB, Flume, Kafka, Storm.

Frameworks: MapReduce, Struts, Hadoop, Ext JS, Spring, Hibernate, Splunk and Platfora.

Servers: Apache Tomcat and MVS.

Database: Oracle, MySQL, VSAM and DB2.

Java Technologies: Core Java, Spring, Servlets, JSP, Android and Ehcache.

Cloud Technologies: Google App Engine (GAE), OrangeScape

Tools: NetBeans, Eclipse, WinSCP, FileZilla, PuTTY, SVN, EPV, Maven, SQL Developer, SOAP UI.

Knowledge on: Spark, Scala, AWS, DataStage and Android

Markup Languages: HTML, XML, JSON

Scripting Languages: Pig, JavaScript, Python and Shell

Languages: Java, J2EE, SQL, Shell, C, C++, JCL and COBOL

PROFESSIONAL EXPERIENCE

Confidential, Phoenix,AZ

Application Architect / Sr. Hadoop Developer

RESPONSIBILITIES:

  • Involved in understanding and analyzing business requirements; analyzed different data sources and designed the data pull from each source
  • Involved in user-story grooming and sprint planning
  • Prepared queries for extracting data from Cornerstone, the source data system
  • Extracted data from Cornerstone and loaded it into the Hadoop cluster using the Cornerstone Java API
  • Loaded extracted data into staging tables and subsequently into final partitioned tables using the Hive JDBC connection from Java
  • Loaded static data from MySQL tables into Ehcache
  • Developed Hive scripts to perform aggregations on extracted data and load the results into partitioned tables
  • Developed Hive scripts to load Hive-HBase integrated tables and calculate aggregations
  • Developed Linux shell scripts to handle exceptions raised while running aggregation tasks
  • Prepared Oozie workflow and coordinator jobs to schedule multiple jobs based on time and on data availability in staging tables
  • Coordinated with the infrastructure team to fix cluster-related issues
  • Provided design recommendations and thought leadership to sponsors/stakeholders, improving review processes and resolving technical problems
  • Provided training for new team members
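As a rough illustration of the aggregation and exception-handling steps above, a shell wrapper around a HiveQL aggregation might look like the following sketch. All table, column and variable names are hypothetical, and the Hive command is parameterized so the script can be dry-run without a cluster:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: run a HiveQL aggregation into a partitioned table
# and surface failures, as described in the bullets above.
# Table and column names are illustrative; HIVE_CMD defaults to the hive
# CLI but can be overridden for testing or dry runs.
set -u
HIVE_CMD="${HIVE_CMD:-hive}"

run_aggregation() {
  local hql="
    SET hive.exec.dynamic.partition.mode=nonstrict;
    INSERT OVERWRITE TABLE txn_summary PARTITION (load_date)
    SELECT account_id,
           COUNT(*)        AS txn_count,
           SUM(txn_amount) AS txn_total,
           load_date
    FROM   txn_staging
    GROUP BY account_id, load_date;"

  # Propagate Hive failures so a scheduler (cron/Oozie) can retry or alert.
  if ! $HIVE_CMD -e "$hql"; then
    echo "aggregation failed; see Hive logs" >&2
    return 1
  fi
  echo "aggregation succeeded"
}
```

In practice a function like this would be invoked from the Oozie workflow or a cron job rather than run interactively, with the non-zero exit code driving retries and alerts.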

Environment: Hadoop - MapR M5/M7, MapR FS (HDFS), Hive, MapR DB (HBase), Oozie, Cron Jobs, Shell Script, Datameer, Tomcat, Java, Spring, Ehcache, Maven, MySQL, PuTTY, WinSCP, SVN, SOAP UI, Rally.

Confidential, Columbus, GA 

Software Engineer

Responsibilities: 

  • Understood the requirement specifications and business logic; made Java and Hadoop changes to include new fields and their related business requirements.
  • Involved in unit testing and in documenting unit test plans and results.
  • Worked on ad-hoc changes (Hadoop and Java code changes, unit testing) based on change requests.
  • Implemented incremental and full backup methodologies using Sqoop.
  • Learned and understood Hadoop concepts at a quick pace.
  • Extensively involved in the design phase and delivered SDS documents.
  • Extensively involved in data extraction, transformation and loading (the ETL process) from source to target systems.
  • Took part in batch-process implementation for MapReduce jobs; involved in data-quality analysis to determine cleansing requirements.
  • Involved in debugging, troubleshooting, testing and documentation of the data warehouse.
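The incremental-backup step mentioned above could be sketched as a small helper that assembles a Sqoop incremental import command. Connection string, table and column names are placeholders; the function prints the command so it can be reviewed before being executed or scheduled:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of an incremental Sqoop import. The JDBC URL, table
# and column names are illustrative placeholders, not real systems.
build_sqoop_import() {
  local table="$1" check_col="$2" last_val="$3"
  # Print each argument of the sqoop command separated by spaces.
  printf '%s ' \
    sqoop import \
    --connect jdbc:mysql://dbhost/source_db \
    --table "$table" \
    --incremental append \
    --check-column "$check_col" \
    --last-value "$last_val" \
    --target-dir "/data/staging/$table"
  printf '\n'
}

# Example: import orders newer than the last-seen order_id of 1000.
build_sqoop_import orders order_id 1000
```

For a full backup, the `--incremental`, `--check-column` and `--last-value` flags would simply be dropped; for `lastmodified`-style increments, a timestamp column would replace the key column.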
