Software Engineer Resume
Chicago, IL
SUMMARY:
- Experienced Software Engineer delivering software applications using object-oriented design, SOLID principles, and design patterns.
- Extensive experience in the full software development life cycle, including requirements gathering, implementation, testing, deployment, and maintenance (defect resolution).
- Took ownership of the modules I developed and supported others who needed to use or interact with them.
- Developed and deployed Spring Boot microservices using Java 8 and Angular 6 onto a SaaS cloud platform (GE’s Predix). The platform provided services that the Java code interfaced with, such as database management and message queueing with RabbitMQ.
- Developed unit and integration tests to increase code coverage and automate verification of the system.
- Designated Scrum Master on various projects to promote Scrum practices, helping teams deliver features iteratively in a fast-changing environment.
- Proponent, advocate, and practitioner of Test-Driven Development.
- Applied design patterns both when refactoring existing code and when designing new solutions.
- Designed and developed user interfaces for applications using IntelliJ, Eclipse, and Xcode.
- Used Gradle/Maven to assemble software projects and deploy them onto application servers, including Cloud Foundry and Tomcat, in order to test functionality.
- Maintained version control using Bitbucket and GitHub to keep a single source of truth for code written by many developers.
- Collaborated with other developers to resolve merge conflicts when integrating branches.
- Queried large database tables with SQL in various database environments, including Hive, Oracle, and PostgreSQL.
- Used CA Rally and Jira for requirements gathering and issue tracking to kick off development.
- Participated in SAFe & Agile by CA Technologies.
- Certified in Hadoop Development
- Team player.
TECHNOLOGY:
Programming Languages: Java (8 and earlier), JavaScript, TypeScript, Python, HTML, CSS, SQL
Frameworks: Spring, Spring Boot, Jackson (JSON serialization), Jersey/Spring (REST APIs), Hibernate, Lombok, AngularJS, PolymerJS
Unit Testing: JUnit, Mockito
Behavioral Testing: Cucumber, Rest Assured
Platforms: Cloud Foundry
Data: SQL (PostgreSQL, Oracle SQL Developer, DB2), Hive, Hadoop, RabbitMQ
Tools: IntelliJ, Eclipse, Vi Editor, DbVisualizer, Terminal (PuTTY, Cygwin), Git, SVN, JUnit, pgAdmin 4, Postman, Sublime Text, Slack, Confluence Wiki, Chrome Developer Tools
Build Tools: Gradle, Maven
Continuous Integration: Jenkins, Grafana (Dashboard UI)
Methodology: Agile (Rally, Jira)
EXPERIENCE:
Confidential, Chicago, IL
Software Engineer
Responsibilities:
- Mentored interns in the internship program, helping them configure their unit-test environments to perform data conversions for over 20 platform-specific data formats.
- Implemented RESTful APIs for the front-end business tool to request client-specific data extracted from the Snowflake database via queries generated by the internal data engine.
- Refactored large code bases, writing clean, reusable code so that new features could be implemented following SOLID design principles.
- Modified the existing logging framework by introducing a wrapper that converts log statements into Java objects for better readability when debugging and monitoring application activity (see the sketch after this list).
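A minimal sketch of the logging-wrapper idea above, assuming SLF4J as the underlying logger; the class and field names are illustrative, not the production code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative wrapper: turns a free-form log statement into a structured
// Java object before delegating to SLF4J, so every entry follows one
// parseable key=value format. Names are hypothetical.
public final class StructuredLogger {

    /** Simple value object representing one log entry. */
    public static final class LogEvent {
        final String component;
        final String action;
        final String detail;

        LogEvent(String component, String action, String detail) {
            this.component = component;
            this.action = action;
            this.detail = detail;
        }

        @Override
        public String toString() {
            return String.format("component=%s action=%s detail=%s",
                    component, action, detail);
        }
    }

    private final Logger delegate;

    public StructuredLogger(Class<?> owner) {
        this.delegate = LoggerFactory.getLogger(owner);
    }

    public void info(String component, String action, String detail) {
        delegate.info("{}", new LogEvent(component, action, detail));
    }
}
```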
Confidential, Chicago, IL
Software Engineer
Responsibilities:
- Implemented enhancements and new features on a web-based application designed to handle 5 million trades opened and closed per day.
- The technology stack consisted of a UI component in JavaScript, HTML, and CSS for the front end; Java/Spring for the middleware; and an Oracle database for the backend.
- Implemented enhancements in JavaScript to improve the front-end behavior of the application.
- Worked in an Agile/Scrum environment consisting of daily standups, planned backlog refinement meetings, and sprint reviews.
- Collaborated with business analysts and the production support team to integrate new features and enhancements into the production environment.
- Supported weekend release events, helping test and release new versions of the application so that delivery met expectations.
- Refactored the server-side portion of the web component, using dependency injection to make it extensible and easier to test.
- Used JUnit and Mockito to add unit tests across the entire application, raising code coverage from 0 to 100% and covering all critical and edge paths of the code (see the test sketch after this list).
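A minimal sketch of the JUnit/Mockito testing style described above; the TradeService and TradeDao types are hypothetical stand-ins for the application's real collaborators:

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.Test;

public class TradeServiceTest {

    // Hypothetical data-access dependency.
    interface TradeDao {
        int countOpenTrades(String account);
    }

    // Hypothetical class under test, taking its dependency via constructor.
    static class TradeService {
        private final TradeDao dao;
        TradeService(TradeDao dao) { this.dao = dao; }
        int openTrades(String account) { return dao.countOpenTrades(account); }
    }

    @Test
    public void returnsOpenTradeCountFromDao() {
        // Mock the dependency so the unit test stays isolated.
        TradeDao dao = mock(TradeDao.class);
        when(dao.countOpenTrades("ACCT-1")).thenReturn(42);

        TradeService service = new TradeService(dao);

        assertEquals(42, service.openTrades("ACCT-1"));
        verify(dao).countOpenTrades("ACCT-1");
    }
}
```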
Confidential, Chicago, IL
Software Engineer / SCRUM Master
Responsibilities:
- Worked on two projects, both applications built on a microservice architecture and deployed onto Predix, an industrial IoT cloud PaaS (Platform as a Service).
- Worked with a team to set up and begin a from-scratch rewrite of a legacy application.
- Collaborated with Architects and QA team to come up with designs for implementation and testing.
- Led the effort to establish coding standards and guidelines (git-flow).
- Worked on a POC to integrate Polymer components into Angular 2 web pages in order to establish the front-end framework for the rest of the project.
- Built a test framework using JUnit and Mockito for TDD and Cucumber for BDD practices.
- Developed a singleton class to store a JWT for use in the REST calls that microservices make to read or update resources (see the singleton sketch after this list).
- Wrote Java code that interfaced with Spring Data JPA to fetch business data in a code-friendly way.
- Developed an entity microservice representing a business task (adding an entity to the data store).
- The microservices consisted of REST APIs and business logic, using Jersey and Spring RestTemplate to interact with the data store.
- Developed a TypeScript component that communicates with the microservice above, adding UI functionality to create the entity.
- Acted as Scrum Master in a Scaled Agile Framework (SAFe) environment with over 6 scrum teams.
- Developed Java classes to communicate with Redis, caching business-logic data for faster retrieval at runtime.
- Used software design patterns such as Strategy and Factory to implement different data-translation behaviors per business requirements (see the translation sketch after this list).
- Interfaced Java classes with RabbitMQ to listen for and publish JSON messages that persist data across entity microservices (see the messaging sketch after this list).
- Worked on a system-level feature to map unique work orders across the legacy and new systems.
- Provided support to the customer.
- Fixed customer defects.
- Tested features alongside the customer.
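A minimal sketch of the JWT-holding singleton described above; token refresh and expiry handling are omitted, and the names are illustrative:

```java
// Thread-safe singleton holding the JWT used on outgoing REST calls.
public final class TokenHolder {

    private static final TokenHolder INSTANCE = new TokenHolder();

    private volatile String jwt;

    private TokenHolder() { }

    public static TokenHolder getInstance() {
        return INSTANCE;
    }

    public void setToken(String jwt) {
        this.jwt = jwt;
    }

    /** Value for the Authorization header on outgoing REST calls. */
    public String bearerHeader() {
        return "Bearer " + jwt;
    }
}
```

A caller would attach TokenHolder.getInstance().bearerHeader() as the Authorization header when issuing requests, for example through RestTemplate.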
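A condensed sketch of the Strategy-plus-Factory combination used for per-format data translation, kept to Java 8 idioms; the format names and translation rules are purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Strategy: one interface, one implementation per data format.
interface Translator {
    String translate(String raw);
}

class CsvTranslator implements Translator {
    public String translate(String raw) {
        return raw.replace(',', '|'); // illustrative rule only
    }
}

class JsonTranslator implements Translator {
    public String translate(String raw) {
        return raw.trim(); // illustrative rule only
    }
}

// Factory: selects the strategy for the requested format.
final class TranslatorFactory {
    private static final Map<String, Translator> STRATEGIES = new HashMap<>();
    static {
        STRATEGIES.put("csv", new CsvTranslator());
        STRATEGIES.put("json", new JsonTranslator());
    }

    static Translator forFormat(String format) {
        Translator translator = STRATEGIES.get(format);
        if (translator == null) {
            throw new IllegalArgumentException("Unknown format: " + format);
        }
        return translator;
    }
}
```

New formats can then be supported by registering one more strategy, without touching the calling code.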
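A minimal sketch of the RabbitMQ listen/publish pattern, assuming Spring AMQP; the queue, exchange, and routing-key names are hypothetical:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;

@Component
public class EntityEventBridge {

    private final RabbitTemplate rabbitTemplate;

    public EntityEventBridge(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Consume JSON messages published by sibling entity microservices.
    @RabbitListener(queues = "entity.updates")
    public void onEntityUpdate(String json) {
        // ...persist the update to the local data store...
    }

    // Publish a JSON message so other entity microservices stay in sync.
    public void publish(String json) {
        rabbitTemplate.convertAndSend("entity.exchange", "entity.updated", json);
    }
}
```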
Confidential, Dublin, OH
Big Data Developer
Responsibilities:
- Retrieved 7 TB of client data from a landing Windows server and stored it in a BigInsights HDFS cluster.
- Wrote a script to dynamically load the structured data into Hive, using command-line arguments to automatically load the headers and table name/data.
- Created a restricted Linux user account with access limited to SFTP-ing data onto the cluster, so that the data could then be loaded into HDFS.
- Installed the SPSS server and client software to connect to the analytic node to retrieve the Hadoop data.
- Set up and connected the SPSS client to the server, coordinating with the solution architect team.
Technologies: BigInsights, SAS, SPSS, Linux, Hive
Confidential, Bentonville, AR
Hadoop Developer
Responsibilities:
- Worked in an agile environment on a production 750-node Pivotal cluster containing 5 years' worth of customer transaction data.
- Designed the project's process in a flow chart depicting all of the steps, layers, and blockers of its components.
- Wrote a Java application to generate metrics for customer attribute data.
- The output of the application was a report which analyzed metrics of Confidential customer data.
- The report also displayed the hierarchy of each attribute of a product based on relevance to the category.
- The application generated metrics on the data, such as the number of distinct and missing values for each attribute in the category (see the sketch after this list).
- Deployed this application as a runnable jar on the HDFS environment.
- Delivered a shell script that automates the entire project, comprising the Java program and another script (also developed) that loads a high volume of rows into a Hive external table.
- Used the Map-Reduce design pattern to develop a Python Map-Reduce program that runs string matching across thousands of input Excel sheets, cleaning out errors and misspellings in the values of each attribute.
- Developed the Java program that takes the Map-Reduce output from Hadoop and generates a CSV file containing all product attributes and their values.
- Developed a Hive script, which loads the CSV file into a Hive external table.
- Wrote a shell script which combines all of these steps.
- Presented the project as a technical demo in front of the business customers.
- Gave a keynote presentation covering the background information.
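A condensed sketch of the distinct/missing-value metric computation described above; the record shape and report format are illustrative:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// For each attribute, count distinct values and missing (null/empty) values
// across a collection of records modeled as attribute->value maps.
public class AttributeMetrics {

    public static void report(List<Map<String, String>> records) {
        Map<String, Set<String>> distinct = new HashMap<>();
        Map<String, Integer> missing = new HashMap<>();

        for (Map<String, String> record : records) {
            for (Map.Entry<String, String> e : record.entrySet()) {
                String attribute = e.getKey();
                String value = e.getValue();
                if (value == null || value.isEmpty()) {
                    missing.merge(attribute, 1, Integer::sum);
                } else {
                    distinct.computeIfAbsent(attribute, k -> new HashSet<>())
                            .add(value);
                }
            }
        }

        for (String attribute : distinct.keySet()) {
            System.out.printf("%s: distinct=%d missing=%d%n",
                    attribute,
                    distinct.get(attribute).size(),
                    missing.getOrDefault(attribute, 0));
        }
    }
}
```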
Confidential
Software Engineer
Responsibilities:
- Modified the Confidential monitoring website to include 5 years’ worth of production inventory data alongside sales data.
- Created factories in JavaScript to represent the new inventory data types.
- Added statements to retrieve webservice responses from the backend.
- Modified existing applications in the website to retrieve data from multiple URLs.
- Restructured the website to include a security layer.
- Modified the front end to generate requests based on the new security architecture.
- Modified the data types to reflect the new security architecture; the goal was to fetch both inventory and sales data without calling multiple URLs, saving time while adding a security layer.
Technologies: Java, Hadoop, Job Tracker, Hive, SVN, Linux, Python, JavaScript
Confidential, Chicago, IL
Software Engineer
Responsibilities:
- Refactored a Map-Reduce job class in Java, writing an interface to unify the contexts and methods the jobs used.
- Tested this interface by running JUnit tests on the whole job to ensure it ran exactly as before.
- Performed data aggregations on Avro files passing through Flume, using the Map-Reduce design pattern to modify the existing Java application.
- Modified the job configuration class, editing the key, value, mapper, and reducer types for each aggregation (see the job-wiring sketch after this list).
- Wrote MRUnit tests for each MapReduce job.
- Ran a bash script that extracted the aggregated data into a PostgreSQL table.
- Configured a job scheduler to automatically run the jobs nightly.
- Visualized this data as a graph on a Django website by writing a Python module that queried a PostgreSQL table.
- Tested this graph by deploying it on a personal domain before deploying it to QA.
- Calculated metrics for ongoing Flume events in order to troubleshoot an issue involving an extraneous number of events with low Avro log counts.
- Wrote a Java class using Map and ArrayList collections to store and visualize the log types (event name, count) logged from each event.
- Optimized existing MapReduce jobs by editing job configuration XML files to reduce the time it took for a job to complete successfully.
- Created a spreadsheet which listed the recent successfully completed jobs and their statistics from the job tracker.
- Modified the mapred.max.split.size job property, reducing job duration from 52 minutes to 19 minutes.
- Queried log data from Hadoop by writing Pig scripts to verify that an ample amount of data was flowing through the servers.
- Ran Unix commands to transfer files between HDFS and Linux servers.
- Maintained version control of Java and Python code using Git (merge, branch, push, commit).
- Utilized JIRA for requirements and issue tracking.
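A sketch of the job wiring and split-size tuning described above (the key/value/mapper/reducer types plus the mapred.max.split.size property); the mapper, reducer, and the 256 MB value are illustrative, not the production settings:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AggregationJob {

    // Hypothetical mapper: emits (event name, 1) for each input line.
    public static class EventMapper
            extends Mapper<LongWritable, Text, Text, LongWritable> {
        private static final LongWritable ONE = new LongWritable(1);

        @Override
        protected void map(LongWritable key, Text line, Context ctx)
                throws IOException, InterruptedException {
            ctx.write(new Text(line.toString().trim()), ONE);
        }
    }

    // Hypothetical reducer: sums the counts per event name.
    public static class EventCountReducer
            extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> counts,
                Context ctx) throws IOException, InterruptedException {
            long total = 0;
            for (LongWritable c : counts) {
                total += c.get();
            }
            ctx.write(key, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Larger splits mean fewer map tasks; tuning this property is the
        // kind of change that cut the job duration described above.
        conf.setLong("mapred.max.split.size", 256L * 1024 * 1024);

        Job job = Job.getInstance(conf, "event-aggregation");
        job.setJarByClass(AggregationJob.class);

        // Key, value, mapper, and reducer types set per aggregation.
        job.setMapperClass(EventMapper.class);
        job.setReducerClass(EventCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```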
Technologies: Java, Python, Django, Hadoop, Hbase, Pig, Job Tracker, Git, PostgreSQL, JavaScript, Linux.
Confidential, Chicago, IL
Big-Data Developer
Responsibilities:
- Worked as a Java developer with Hadoop.
- Performed data ingestion of client data by importing relational tables into HDFS using Sqoop.
- Wrote queries in HiveQL to retrieve data from Hadoop to perform analytics.
- Performed transfers of files from the local filesystem to the Hadoop Distributed Filesystem (HDFS) using Hadoop commands on the UNIX shell.
- Developed a user-defined function (UDF) in Java for Pig to perform custom filtering and narrow the amount of data returned (see the UDF sketch after this list).
- Wrote a Python module to connect to an Apache Cassandra instance and view its status.
- Wrote a report covering the monitoring of the status of the different nodes in a Cassandra cluster.
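A minimal sketch of a Java filter UDF for Pig of the kind described above; the filtering rule itself is illustrative:

```java
import java.io.IOException;

import org.apache.pig.FilterFunc;
import org.apache.pig.data.Tuple;

// Keeps only rows whose first field is a non-empty value, narrowing the
// amount of data returned. The rule is a placeholder for the real filter.
public class NonEmptyFilter extends FilterFunc {

    @Override
    public Boolean exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return false;
        }
        Object field = input.get(0);
        return field != null && !field.toString().trim().isEmpty();
    }
}
```

In a Pig script, a UDF like this would be registered and applied along the lines of REGISTER udfs.jar; filtered = FILTER rows BY com.example.NonEmptyFilter(col); (jar, package, and column names hypothetical).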
Technologies: Hadoop, Hive, Pig, Hbase, Java, Python, Linux.