
Full Stack Architect/Developer Resume


- Westport, CT

SUMMARY

  • 8 years of experience as a developer and business systems analyst across industry domains including robotics, retail, finance, healthcare, education, and data technology.
  • Experienced in, and with a good understanding of, the power, robotics, and manufacturing domains.
  • Hands-on with the Hadoop ecosystem, including Hive and Pig.
  • Excellent knowledge of Hadoop architecture: HDFS, JobTracker, TaskTracker, NameNode, DataNode, and the MapReduce programming paradigm.
  • Big Data frameworks and ecosystem tools, keeping pace with evolving technologies.
  • Experience with distributed systems, large-scale non-relational data stores, MapReduce, data modeling, and big data systems.
  • Knowledge of administrative tasks such as installing Hadoop and ecosystem components like Hive and Pig.
  • Knowledge of NoSQL databases such as HBase and MongoDB.
  • Developed unit tests with Jasmine for Angular controllers and services, and built custom validations using AngularJS and Node.js.
  • Extended Hive and Pig core functionality by writing custom UDFs (a minimal sketch follows this summary).
  • Good understanding of data mining and machine learning techniques.
  • Experience analyzing data using HiveQL, Pig Latin, and custom MapReduce programs in Java, as well as AngularJS, JavaScript, and reusable web components.
  • Installation experience with HBase, Oozie, Hive, Pig, Sqoop, and Flume.
  • Monitored Hadoop clusters using Nagios and Ganglia; troubleshot OS issues.
  • Installed SafeNet and Tripwire for data encryption and security monitoring of systems.
  • Installed and configured Squid proxy, VNC, NFS, FTP, rsync, DNS, and DHCP servers and clients.
  • Upgraded the Linux kernel to add tuning capabilities and optimized Linux servers.
  • Hardened Linux servers; compiled, built, and installed Apache Server from source with minimal modules.
  • Experience installing RPMs, scripting in Bash, ksh, and Perl, and using awk and sed.
  • Experience with virtualization and VMware administration (ESX/vSphere).
  • Experience compiling source files and installing open-source software on UNIX and Linux servers.
  • Installed Tripwire to check file-system integrity on Linux systems.
  • Automated tasks through crontab, Autosys, and Maestro to invoke Java programs and shell scripts.
  • Conducted pre-production meetings for requirements gathering and post-production meetings to review deployments and remediate mistakes for smoother future releases.
  • Maintained technical documentation of installations, upgrades, and configuration changes, plus the runbook for production support.
  • Experience with Apache, Cloudera, and Hortonworks Hadoop distributions.
  • Experienced in building robust websites conforming to web standards, using the SOAP communication protocol, valid code, and table-free layouts with HTML, XHTML, and XML.
  • Experience in Agile development environments using Scrum/sprint-based methodology.
  • Experience with JavaScript technologies, viz. jQuery and AJAX.
  • Extensively supported BI tools such as digital dashboards, data mining, and data warehousing.
  • Participated in the Quality Assurance (QA) process and maintained the traceability matrix to ensure requirements were consistent, comprehensible, traceable, feasible, and conformant to standards.
  • Participated in bug-review meetings with software developers, QA engineers, and managers; suggested enhancements to the existing application from a business perspective and provided solutions to existing bugs.
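
By way of illustration, a custom Hive UDF of the kind mentioned in this summary might look like the minimal sketch below; the class name and normalization behavior are hypothetical examples, not code from any engagement described here.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical custom UDF: trims and lower-cases a string column.
    public class NormalizeString extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // pass NULLs through, as Hive built-ins do
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }

After packaging the class into a JAR, it would be registered with ADD JAR and CREATE TEMPORARY FUNCTION and then called from HiveQL like any built-in function.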

TECHNICAL SKILLS

Languages: Java, .NET 4.0/3.5, C++

Databases: Oracle, SQL Server, Microsoft Access

Web Technologies: PHP, ASP, Perl, Shell, HTML, DHTML, Drupal, Flash Script

IDEs: Microsoft Visual Studio, NetBeans, Eclipse, IntelliJ

Operating Systems: Linux, Windows, and their server counterparts

PROFESSIONAL EXPERIENCE

Confidential - Westport, CT

Full Stack Architect/Developer

Environment: HDFS, Hadoop, Java, UNIX, JBoss, Highcharts

Responsibilities:

  • Developed MapReduce programs to parse the raw data, populate staging tables, and store the refined data in partitioned tables in the EDW (a representative sketch follows this list).
  • Created Hive queries that helped market analysts spot emerging trends by comparing fresh data with EDW reference tables and historical metrics.
  • Enabled speedy reviews and first-mover advantages by using Oozie to automate data loading into the Hadoop Distributed File System (HDFS) and Pig to pre-process the data.
  • Provided design recommendations and thought leadership to sponsors/stakeholders that improved review processes and resolved technical problems.
  • Managed and reviewed Hadoop log files.
  • Tested raw data and executed performance scripts.
  • Shared responsibility for administration of Hadoop, Hive and Pig.
  • Created robust, scalable designs for multiple features that meet business requirements.
  • Used Linux-based DEV and UAT environments.
  • Ensured designs, code, and processes were optimized for performance, scalability, security, reliability, and maintainability.
  • Created appropriate artifacts (white papers, video talks, etc.) to educate the organization on features and underlying technology.
  • Security was crucial, so supported the development of DRM files for authentication.
  • Extensively involved in integrating the front-end web interface with Spring MVC, AngularJS, Node.js, JSP, and HTML.
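
As a hedged illustration of the parse-and-populate MapReduce work described in this list, a job of that shape might look like the sketch below; the tab-delimited record layout, field positions, and class names are hypothetical.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class RawDataParser {

        // Parses tab-delimited raw records and emits (productId, 1).
        public static class ParseMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text productId = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\t");
                if (fields.length > 2) {        // skip malformed rows
                    productId.set(fields[1]);   // hypothetical field position
                    ctx.write(productId, ONE);
                }
            }
        }

        // Sums counts per product for the staging-table load.
        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "raw data parse");
            job.setJarByClass(RawDataParser.class);
            job.setMapperClass(ParseMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }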

Confidential - Bridgeport, CT

Full Stack Architect/Developer

Environment: HDFS, Hadoop, Java, UNIX, Highcharts, Spring, Hibernate, HTML5, PHP, SQL, HQL

Responsibilities:

  • Developed a platform-independent, deployable product.
  • Created robust, scalable designs for multiple features that meet business requirements.
  • Built wrapper shell scripts to invoke Oozie workflows.
  • Developed Pig Latin scripts to transform the log data files and load them into HDFS.
  • Used Pig as an ETL tool for transformations, event joins, and pre-aggregations before storing the data in HDFS.
  • Hands-on experience with NoSQL databases like Cassandra for a proof of concept (POC) storing URLs and images.
  • Developed Hive UDFs for functions not built into Hive, such as rank.
  • Created external Hive tables and was involved in data loading and writing Hive UDFs.
  • Implemented POCs to migrate iterative MapReduce programs into Spark transformations using Scala.
  • Enabled concurrent access to Hive tables with shared and exclusive locking, backed by the cluster's ZooKeeper implementation.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to warning or failure conditions.
  • Developed unit test cases for MapReduce code using MRUnit (see the sketch following this list).
  • Involved in creating Hadoop Streaming jobs.
  • Involved in installing and configuring the Hadoop ecosystem and Cloudera Manager using the CDH4 distribution.
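
A minimal MRUnit test of the kind mentioned in this list might look like the sketch below; it reuses the hypothetical ParseMapper from the earlier MapReduce example, so the input record and expected output are likewise assumptions.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    // Drives the hypothetical ParseMapper in isolation, without a cluster.
    public class ParseMapperTest {
        private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

        @Before
        public void setUp() {
            mapDriver = MapDriver.newMapDriver(new RawDataParser.ParseMapper());
        }

        @Test
        public void emitsProductIdWithCountOne() throws Exception {
            mapDriver
                .withInput(new LongWritable(0), new Text("2014-01-01\tSKU-42\t19.99"))
                .withOutput(new Text("SKU-42"), new IntWritable(1))
                .runTest();
        }
    }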

Confidential, Plymouth Meeting, PA

Sr. Data Analyst

Environment: HDFS, Hadoop, Java, UNIX, Highcharts, Hippo CMS, OpenDJ, MongoDB, and Groovy

Responsibilities:

  • Developed a data pipeline using Flume to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Prepared best practices for writing MapReduce programs and Hive scripts.
  • Used Oozie to schedule a workflow importing the revenue department's weekly transactions from an RDBMS.
  • Built wrapper shell scripts to invoke these Oozie workflows.
  • Developed Pig Latin scripts to transform the log data files and load them into HDFS.
  • Used Pig as an ETL tool for transformations, event joins, and pre-aggregations before storing the data in HDFS.
  • Developed Hive UDFs for functions not built into Hive, such as rank.
  • Created external Hive tables and was involved in data loading and writing Hive UDFs.
  • Implemented POCs to migrate iterative MapReduce programs into Spark transformations using Scala (an illustrative sketch follows this list).
  • Enabled concurrent access to Hive tables with shared and exclusive locking, backed by the cluster's ZooKeeper implementation.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to warning or failure conditions.
  • Developed unit test cases for MapReduce code using MRUnit.
  • Involved in creating Hadoop Streaming jobs.
  • Involved in installing and configuring the Hadoop ecosystem and Cloudera Manager using the CDH4 distribution.
  • Developed reports using SQL Server Reporting Services (SSRS).
  • Made extensive use of AJAX and the AJAX Toolkit to improve overall application performance.
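
The migration POCs above were written in Scala; purely as an illustration, and in Java to stay consistent with the other sketches here, a MapReduce-style aggregation translates to Spark transformations roughly as follows (the HDFS paths and field layout are hypothetical).

    import scala.Tuple2;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class TransactionCounts {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("transaction-counts");
            try (JavaSparkContext sc = new JavaSparkContext(conf)) {
                // Same map/reduce shape as the MapReduce job, as chained transformations.
                JavaPairRDD<String, Integer> counts = sc
                    .textFile("hdfs:///data/transactions")   // hypothetical input path
                    .mapToPair(line -> new Tuple2<>(line.split("\t")[1], 1))
                    .reduceByKey(Integer::sum);
                counts.saveAsTextFile("hdfs:///data/transaction-counts");
            }
        }
    }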

Confidential - Bridgeport, CT

Senior Researcher/ Developer

Environment: HDFS, Hadoop, Java, UNIX, Linux, C#, AForge.NET, Cygwin

Responsibilities:

  • Worked with students to develop and deploy fully scalable projects, supporting both the development stages and the final product and project deployments.
  • Designed a web-based system for educational research analytics and data visualization in the Hadoop ecosystem; integrated Tableau on the Hadoop framework to visualize and analyze data, with Hive used for data delivery (a JDBC sketch follows this list).
  • Worked on several Apache Hadoop projects; developed MapReduce programs using the Hadoop Java API as well as Hive and Pig.
  • Designed and implemented solutions for Hadoop AFMS system validation across all components of the data analytics stack.
  • Worked with Sqoop to import/export data between relational databases and Hadoop, and with Flume to collect data and populate Hadoop.
  • Implemented and integrated a Hadoop-based business intelligence and data warehouse system, including searching, filtering, indexing, and aggregation for reporting, report generation, and general information retrieval from data sets for research purposes.
  • Maintained Hadoop clusters for dev/staging/production; trained the development, administration, testing, and analysis teams on the Hadoop framework and ecosystem.
  • Gave extensive presentations on the Hadoop ecosystem, best practices, and data architecture in Hadoop.
  • Developed wireless sensor labs and programming modules for testing and developing new, efficient wireless algorithms.
  • Worked on data mining algorithms for images provided by NASA (currently implemented in the Discovery Museum project).
  • Published 5 journal and 11 conference papers related to my research; awarded best research paper at ASEE and several other conferences.
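
As a hedged sketch of the Hive data-delivery layer described above, a client could query HiveServer2 over JDBC roughly as follows; the host, credentials, and table are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveDeliveryExample {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver, shipped with the Hive distribution
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default", "analyst", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT course_id, COUNT(*) AS views "
                   + "FROM activity_log GROUP BY course_id")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                }
            }
        }
    }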

Confidential

Lead Java and UNIX Full Stack Developer

Environment: Java, JDBC, HTML

Responsibilities:

  • Implemented a first-of-its-kind cloud-based portal system in India.
  • Data sets and analytics were processed on UNIX servers running scripts.
  • Data segmentation ran on remote computers holding client information, using a simple servlet-based programming model for information exchange (a sketch follows this list).
  • Learned basic CSR terminology.
  • Involved in designing the application using UML.
  • Coding, testing, and implementation of the application.
  • Designed and developed process flow diagrams for both greenfield and brownfield projects.
  • Developed various index methods to help calculate activity progress.
  • Involved in developing the product's SRS (system requirements specification).
  • Managed the marketing and sales of our product.
  • Development of documentation and manuals.
  • Development of custom programs and robotic kits.
  • Custom-designed PCBs.
  • Conducted workshops and classes.
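
A minimal sketch of the servlet-based information-exchange model mentioned in this list; the endpoint, request parameter, and segmentation rule are hypothetical stand-ins for the client-data lookup.

    import java.io.IOException;
    import java.io.PrintWriter;

    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Returns the segment assigned to a client ID as plain text.
    public class SegmentServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String clientId = req.getParameter("clientId");
            // A placeholder rule stands in for the remote client-data lookup.
            String segment = (clientId != null && clientId.hashCode() % 2 == 0)
                    ? "A" : "B";
            resp.setContentType("text/plain");
            try (PrintWriter out = resp.getWriter()) {
                out.println("client=" + clientId + " segment=" + segment);
            }
        }
    }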
