
Software Developer Resume


Foster City, CA

SUMMARY:

  • Hadoop Big Data/Java Engineer with over 5 years of experience in Java and Hadoop, with a strong interest in exploring, learning, and extracting the best out of modern technologies in software development and Big Data engineering.
  • Expertise in Hadoop and its ecosystem technologies (HDFS, MapReduce, Hive, Solr, Sqoop, Oozie).
  • Keen interest in observing and learning the latest technologies in the Big Data world.
  • Responsible for ownership of the complete software development lifecycle - requirement specification, analysis, design, implementation, and testing - for several projects.
  • Proficient in SDLC methodologies such as the Agile and Waterfall models.
  • Accountable to management for leading code reviews with junior developers and peers.
  • Exceptional ability to quickly master new technology; capable of working in a group as well as independently, with excellent communication skills.
  • Ability to work under pressure with critical deadlines.
  • Excellent team player with problem-solving and troubleshooting capabilities.

TECHNICAL SKILLS:

Primary Languages and Tools: Java, Hive, Solr, Sqoop.

Big Data Technology skills: HDFS, MapReduce, YARN, Hive, Pig, Solr, HBase, Sqoop, Oozie, ZooKeeper; file formats - Avro, SequenceFile, ORC, Parquet; experience in performance tuning, optimization, compression techniques, and customization; knowledge of Apache Spark and Shark.

Scripting Languages: Shell Scripting.

RDBMS: Oracle, Sybase

IDE, Tools, editors and debuggers: Eclipse Juno, vi editor, Visual Studio 2005, GDB, DBX, TotalView, PuTTY, Reflection, SQuirreL SQL Client, NX Client, FileZilla, Visual Difference, and Beyond Compare.

Others: JSON, JUnit.

Version Control: Git, ClearCase, StarTeam, PVCS

Operating Systems: Red Hat Linux, Sun Solaris (UNIX), IBM AIX, Windows

PROFESSIONAL EXPERIENCE:

Software Developer

Confidential, Foster City, CA

Responsibilities:

  • Performed requirement analysis together with the business and peers.
  • Prepared design documents (request-response mapping documents, Hive mapping documents).
  • Designed appropriate partitioning/bucketing schemas in Hive to allow faster data retrieval during analysis.
  • Involved in creating Hive tables, loading data, and running Hive queries.
  • Extensive working knowledge of partitioned tables, UDFs, performance tuning, and compression-related properties in Hive.
  • Good working experience using Sqoop to import data into HDFS from RDBMS.
  • Used the Oozie workflow engine to create workflow jobs for executing Hadoop jobs such as Hive and Sqoop operations.
  • Used Log4j for standard logging and the Jackson API for JSON request handling.
  • Worked extensively on shell scripts.
  • Used the Morphline tool for ingesting and indexing bulk data in Solr.
  • Used the Solr search API and developed a custom Solr request handler.
  • Developed a Java API for connecting to Solr and used DB2 to store metadata.
  • Performed testing in various environments (Dev, ANA, PREF) and provided reports to the business.
  • Performed various POCs as per business requirements.
  • Automated the project using Oozie and shell jobs.
  • Implemented performance optimization techniques.
  • Provided feasibility reports on various file formats and compression techniques.
  • Fixed bugs and provided production (PROD) support.
  • Used Git and ClearCase for software configuration management, in collaboration with ClearQuest to log and track SCM activities.
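
The Oozie and shell automation described above can be pictured as a small step wrapper of the kind an Oozie shell action might invoke. This is a minimal sketch, not the actual project script: the function and step names are hypothetical, and `true`/`false` stand in for real `sqoop import ...` or `hive -f ...` invocations so the sketch runs anywhere.

```shell
#!/usr/bin/env bash
# Hypothetical pipeline wrapper: log each step, run it, stop the chain on failure.
set -u

log() { printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$*"; }

# run_step NAME CMD...: log the step, run the command, report OK or FAIL.
run_step() {
  local name=$1; shift
  log "START $name"
  if "$@"; then
    log "OK $name"
  else
    log "FAIL $name"
    return 1
  fi
}

run_step import_from_rdbms true     # placeholder for a Sqoop import step
run_step load_hive_partition true   # placeholder for a Hive load step
run_step broken_step false || log "pipeline aborted"
```

In a real workflow each placeholder would be the actual Sqoop or Hive command, and Oozie would pick up the script's exit status to decide whether to continue the workflow.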

Environment: CDH 5.3.2, Hadoop, YARN, Hive, Sqoop, Oozie, Java, DB2, Shell Scripting, Solr, JSON, Log4j, Kerberos, Eclipse, SQL Developer, WinSCP.

Confidential, NY

Java Hadoop developer

Responsibilities:

  • Designed, tested, and deployed Java MapReduce applications in a Cloudera Apache Hadoop ecosystem.
  • Imported trading and derivatives data into the Hadoop Distributed File System and ecosystem (MapReduce, Hive).
  • Involved in creating Hive tables, loading data, and running Hive queries.
  • Helped set up the Hadoop ecosystem in the dev and QA environments.
  • Managed and reviewed Hadoop log files.
  • Responsible for requirement gathering, analysis, design, and implementation, with strong experience across the phases of the software development life cycle (SDLC).
  • Involved in initial product design discussions, requirement gathering, enhancements, and code flow discussions.
  • Participated in performance tuning of the application.
  • Wrote and analyzed Sybase triggers and stored procedures.
  • Analyzed and resolved database lock and deadlock issues.
  • Debugged and monitored running processes on UNIX using GDB and DBX.
  • Conducted code review walkthrough meetings for other developers.
  • Involved in critical production bug fixes, unit testing, and maintenance of the project.
  • Used PVCS for software configuration management and to track SCM activities.
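
Reviewing Hadoop log files, as mentioned above, often starts with a quick severity count. The helper below is a hypothetical sketch of that kind of triage; the function name and the log path in the usage comment are illustrative, not from the project.

```shell
#!/usr/bin/env bash
# Hypothetical helper for triaging a Hadoop daemon log:
# count ERROR and WARN lines in the file given as $1.
summarize_log() {
  awk '/ERROR/ {e++} /WARN/ {w++} END {printf "ERROR=%d WARN=%d\n", e+0, w+0}' "$1"
}

# usage (path is illustrative):
#   summarize_log /var/log/hadoop/hadoop-hdfs-namenode.log
```

The `e+0`/`w+0` guards make the counts print as `0` rather than blank when a severity never appears in the file.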

Environment: Java, C++, CDH 4.3, MapReduce architecture, Hive, Sybase, design patterns, UML, Linux/UNIX, Eclipse, Visual Studio 2005, TotalView, SQuirreL SQL Client, PVCS (version control tool), FileZilla.

Confidential

Java/C++ developer

Responsibilities:

  • Customized various functionality of the Manhattan WMOS product as per client requirements.
  • Managed implementation of Manhattan WMOS products, including inbound and outbound warehouse flows along with other modules needed by the client, such as LM.
  • Worked on C++ using smart pointers and various design patterns.
  • Designed and implemented end-to-end functionality for large retailers in the United States market.
  • Provided post-implementation functional and technical support, troubleshooting all kinds of issues to keep warehouse operations running uninterrupted.
  • Used the vi editor with the DBX debugger in the Sun Solaris UNIX environment, GDB in the Linux environment, and Visual Studio in the Windows environment for C/C++ code development.
  • Used GDB and DBX to debug C++ core dump files.
  • Implemented a CORBA framework for communication between C++ components.
  • Used SQL and PL/SQL on SQL Developer 3.0 to interact with the database.
  • Used ClearCase and StarTeam for software configuration management, in collaboration with ClearQuest to log and track SCM activities.
  • Worked closely with consultants to understand the flow and resolve issues under tight timelines.
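
Debugging C++ core dumps with gdb, as mentioned above, is commonly scripted in batch mode. The wrapper below is a hypothetical sketch; the function name and the binary/core paths in the usage comment are placeholders, not project artifacts.

```shell
#!/usr/bin/env bash
# Hypothetical batch triage of a C++ core dump with gdb:
# dump a full backtrace and the thread list without an interactive session.
triage_core() {
  local binary=$1 core=$2
  gdb -batch -ex 'bt full' -ex 'info threads' "$binary" "$core"
}

# usage (placeholders): triage_core ./wmos_server core.12345 > crash_report.txt
```

Running gdb with `-batch` and `-ex` commands makes the triage repeatable and easy to attach to a support ticket, compared with stepping through the same commands interactively.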

Environment: Java, C++, Oracle 10g, PL/SQL, SQL Developer 3.0, CORBA, XML, StarTeam, UML, design patterns, Linux/UNIX, FileZilla, Visual Studio 2005.

Confidential

Java Developer

Responsibilities:

  • Designed and managed implementation of a patient billing system, working exclusively on the ward management and billing modules.
  • Gathered functional and technical requirements and wrote the software requirement specification.
  • Created various documents, including high-level design, low-level design, test plans, and deployment plans.
  • Performed coding in Java and J2EE with Eclipse as the IDE.
  • Built and deployed the Java application on an Apache server.
  • Used SQL and PL/SQL to interact with the Oracle database.
  • Used JSP and JavaScript for front-end web page design.
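
The build-and-deploy step described above can be sketched as a tiny helper that copies the built WAR into the server's webapps directory. This is a minimal illustration: the function name and the paths in the usage comment are assumptions, not the actual deployment procedure.

```shell
#!/usr/bin/env bash
# Hypothetical deploy helper: copy a built WAR to the application server's
# webapps directory and report what was deployed.
deploy_war() {
  local war=$1 webapps=$2
  cp "$war" "$webapps"/ && echo "deployed $(basename "$war")"
}

# usage (placeholders): deploy_war target/billing.war /opt/apache/webapps
```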

Environment: Java/J2EE, Oracle 10g, PL/SQL, SQL Developer 3.0, UML, design patterns, Linux/UNIX, FileZilla, Eclipse Helios.
