Software Developer Resume
OBJECTIVE:
Seeking a Software Developer position in which to apply and extend the skills acquired as a graduate student in Computer Science, and to make a valuable contribution to the company.
EDUCATION:
Master of Science, Computer Science
Bachelor of Engineering, Information Technology
RELEVANT COURSES:
Database Design, Multimedia Systems, Design and Analysis of Computer Algorithms, Advanced Operating Systems, Advanced Computer Networks, Information Retrieval, Data and Application Security, Machine Learning, Concurrent Data Structures in Multi-Core Systems, Artificial Intelligence, Computer Graphics (OpenGL).
TECHNICAL SKILLS:
LANGUAGES
Java, SQL, PL/SQL, C, C++, TCL
WEB TECHNOLOGIES AND LANGUAGES
HTML, CSS, JSP, Servlets, XML, JavaScript, JSON
DATABASES
MySQL 5, Oracle 9i/10g, Microsoft SQL Server
DATABASE TOOLS
SQL Developer, Toad for Oracle, MySQL Workbench
WEB APPLICATION SERVERS
Apache Tomcat, GlassFish
UML TOOLS
Altova UML Spy, MS Visio, Rational Rose
PROFESSIONAL EXPERIENCE:
Employer: Confidential, MAY '12 - AUG '12
Position: Performance profiling intern
Project: Development of a web application for analyzing a web page's components, its performance, and the back-end resource utilization involved in generating the page:
- Extract elements such as images, JavaScript files, style sheets and iframes from the web page, and capture all the 'GET' requests the browser generates to display the page, using WebDriver, HtmlUnit and BrowserMob.
- Parse each extracted element to find references to external files, and determine request and response headers for the extracted elements using proxies, Apache HttpComponents and WebDriver.
- Analyze back-end resource utilization for a particular request using Splunk's REST API, capturing details of the calls generated, errors encountered, entities involved and time spent at each entity while completing the request.
- Implement a proxy that stores and automates user activity, such as navigating through web pages and filling out forms, which can then be replayed on demand. Details for all the web pages involved in the activity are generated as a CSV file and can be used to identify differences between distinct runs of the same sequence of actions.
- Develop a profile of a web page over time and track changes to the page and its elements.
- Built the web interface with Servlets and JSPs hosted on Tomcat, and used Hibernate with Oracle 10g for persistence.
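The element-extraction step above can be sketched in simplified, standalone form. This is illustrative only: the actual project rendered pages with WebDriver and HtmlUnit, whereas this sketch merely scans static HTML with a regular expression.

```java
// Illustrative sketch of extracting external resource references
// (images, scripts, style sheets, iframes) from a page's HTML.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResourceExtractor {
    // Matches src="..." or href="..." inside img, script, iframe and link tags.
    private static final Pattern RES = Pattern.compile(
        "<(?:img|script|iframe|link)[^>]*?(?:src|href)=\"([^\"]+)\"",
        Pattern.CASE_INSENSITIVE);

    public static List<String> extract(String html) {
        List<String> urls = new ArrayList<>();
        Matcher m = RES.matcher(html);
        while (m.find()) {
            urls.add(m.group(1));
        }
        return urls;
    }

    public static void main(String[] args) {
        String page = "<html><img src=\"a.png\"><script src=\"b.js\"></script>"
                    + "<link href=\"c.css\"></html>";
        System.out.println(extract(page));
    }
}
```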
ACADEMIC PROJECTS:
Content-based image retrieval using SVM: JAN '11 - MAY '11
- This project was an implementation of Histograms of Oriented Gradients (HOG) for human detection using an SVM.
- Used a set of training images to train the SVM (Support Vector Machine). Each training image was converted to grayscale and filtered to obtain per-pixel derivatives, from which the magnitudes and orientations of gradients were calculated.
- Each image was divided into 16 cells; for each cell, a histogram was generated from the gradient orientations and magnitudes of the pixels in that cell. This set of histograms was normalized and used to train the SVM, which built a model file from the data for use in classification.
- Generated features from the query image in the same way, and used the SVM model file built from the training images to determine the objects in the image and download similar images.
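The gradient-histogram step described above can be sketched roughly as follows; the bin count, single-histogram layout and L2 normalization here are illustrative assumptions, not the project's exact parameters.

```java
// Sketch of the HOG gradient-histogram step: central-difference
// derivatives per interior pixel, gradient magnitude accumulated
// into unsigned-orientation bins, then L2-normalized.
public class HogHistogram {
    public static double[] orientationHistogram(double[][] img, int bins) {
        double[] hist = new double[bins];
        for (int y = 1; y < img.length - 1; y++) {
            for (int x = 1; x < img[0].length - 1; x++) {
                double gx = img[y][x + 1] - img[y][x - 1];
                double gy = img[y + 1][x] - img[y - 1][x];
                double mag = Math.hypot(gx, gy);
                // Map orientation to [0, PI) ("unsigned" gradients), then to a bin.
                double theta = Math.atan2(gy, gx);
                if (theta < 0) theta += Math.PI;
                if (theta >= Math.PI) theta -= Math.PI;
                int bin = (int) (theta / Math.PI * bins);
                if (bin == bins) bin = bins - 1;
                hist[bin] += mag;
            }
        }
        // L2 normalization, as applied to the histograms before SVM training.
        double norm = 0;
        for (double h : hist) norm += h * h;
        norm = Math.sqrt(norm) + 1e-9;
        for (int i = 0; i < bins; i++) hist[i] /= norm;
        return hist;
    }

    public static void main(String[] args) {
        double[][] ramp = new double[5][5];
        for (int y = 0; y < 5; y++)
            for (int x = 0; x < 5; x++)
                ramp[y][x] = x; // horizontal ramp: every gradient points along x
        System.out.println(java.util.Arrays.toString(orientationHistogram(ramp, 9)));
    }
}
```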
Web Search Engine: AUG '11 - DEC '11
Technologies: HTML, Servlets, Java
- This was a group project to implement a web search engine. The objective was to crawl a portion of the web and build a search engine that searches the crawled index for query terms, runs a page-ranking algorithm, and returns the results. We used a custom crawler based on the open-source Heritrix, and a standard iterative PageRank algorithm.
- My responsibility was the development of the Indexing Module:
- Created an indexer for the crawled documents from the ground up (no existing open-source indexer was used). The indexer extracted words from the crawled documents, determined the root of each extracted word using the Porter Stemmer, calculated document frequency and term frequency for each word, and mapped this information to the page from which the word was extracted.
- This index was used for link analysis and document retrieval.
- The web interface was designed using JavaServer Pages, and the search engine was hosted on an Apache Tomcat 7.0 server.
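A minimal sketch of the inverted-index structure described for the Indexing Module: each term maps to its postings (document to term frequency), so document frequency falls out as the size of a term's posting map. Porter stemming is omitted here for brevity.

```java
// Sketch of an inverted index tracking term frequency and document frequency.
import java.util.HashMap;
import java.util.Map;

public class InvertedIndex {
    // term -> (docId -> term frequency within that document)
    private final Map<String, Map<String, Integer>> postings = new HashMap<>();

    public void addDocument(String docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (token.isEmpty()) continue;
            postings.computeIfAbsent(token, t -> new HashMap<>())
                    .merge(docId, 1, Integer::sum);
        }
    }

    // Number of documents containing the term.
    public int documentFrequency(String term) {
        return postings.getOrDefault(term, Map.of()).size();
    }

    // Occurrences of the term within one document.
    public int termFrequency(String term, String docId) {
        return postings.getOrDefault(term, Map.of()).getOrDefault(docId, 0);
    }

    public static void main(String[] args) {
        InvertedIndex idx = new InvertedIndex();
        idx.addDocument("d1", "web search search engine");
        idx.addDocument("d2", "web crawler");
        System.out.println(idx.documentFrequency("web"));
        System.out.println(idx.termFrequency("search", "d1"));
    }
}
```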
Implementation of the Chandy-Lamport protocol and the Roucairol-Carvalho protocol in a distributed environment: AUG '11 - DEC '11
- Implemented a client-server architecture in a distributed environment.
- The distributed system consisted of N separate processes connected to each other in a random topology; channels between the nodes were implemented using reliable TCP socket connections.
- Wrote a multi-threaded program that made the distributed system follow the Chandy-Lamport protocol to collect a global snapshot and determine whether it is consistent.
- Wrote a multi-threaded program that made the distributed system follow the Roucairol-Carvalho protocol for distributed mutual exclusion, ensuring that at most one process executes its critical section at any time.
- Both protocols involved multiple message exchanges between the participating processes over bi-directional channels.
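The marker rule of the Chandy-Lamport snapshot, as seen from a single process, can be sketched as below. The TCP channels, threading, and marker broadcasting of the actual implementation are omitted; all names here are illustrative.

```java
// Single-process sketch of the Chandy-Lamport marker rule: on the first
// marker, record local state and start recording other incoming channels;
// a marker on a channel closes recording for that channel.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SnapshotProcess {
    private final Set<String> inChannels;
    private final Map<String, List<String>> recorded = new HashMap<>();
    private final Set<String> markerSeen = new HashSet<>();
    private Integer recordedState = null; // local state at snapshot time
    private int localState = 0;

    public SnapshotProcess(Set<String> inChannels) {
        this.inChannels = inChannels;
        for (String c : inChannels) recorded.put(c, new ArrayList<>());
    }

    public void applyEvent() { localState++; } // some local computation

    public void receiveMarker(String channel) {
        if (recordedState == null) {
            recordedState = localState; // first marker: record own state
            // (a real process would now send markers on all outgoing channels)
        }
        markerSeen.add(channel); // this channel's recorded state is now closed
    }

    public void receiveMessage(String channel, String msg) {
        // Record only between "snapshot started" and the channel's own marker.
        if (recordedState != null && !markerSeen.contains(channel)) {
            recorded.get(channel).add(msg);
        }
    }

    public boolean snapshotComplete() {
        return recordedState != null && markerSeen.containsAll(inChannels);
    }

    public List<String> channelState(String channel) { return recorded.get(channel); }
    public Integer localSnapshot() { return recordedState; }

    public static void main(String[] args) {
        SnapshotProcess p = new SnapshotProcess(Set.of("c1", "c2"));
        p.applyEvent();
        p.receiveMarker("c1");        // snapshot starts; c1 is closed
        p.receiveMessage("c2", "m1"); // in flight on c2: recorded
        p.receiveMessage("c1", "m2"); // after c1's marker: not recorded
        p.receiveMarker("c2");        // snapshot complete
        System.out.println(p.localSnapshot() + " " + p.channelState("c2"));
    }
}
```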
Confidential, JAN '11 - MAY '11
Technologies: SQL Developer, Oracle 10g, Java Swing framework, JDBC, Java IDE
- This project involved gathering the database requirements at the outset and creating a conceptual design, which was converted to a logical design using EER and class diagrams.
- Tables were implemented and normalized, SQL queries were executed against them, and multiple views were created as part of the project.
- An easy-to-use interface for maintaining records was designed and developed using Swing.
Implementation and analysis of Machine Learning algorithms: JAN '12 - MAY '12
- Implemented the decision tree algorithm to build a tree that classifies examples into classes. The tree was built from the attributes of the training examples. Three separate heuristics (information gain, one-step look-ahead and variance impurity) were used to choose the branching attribute, and the tree was used to classify the test examples.
- Implemented spam detection using the Naive Bayes classifier, Logistic Regression and Perceptron algorithms. Naive Bayes and Logistic Regression both involve calculating the probability with which a word occurs in spam; these probabilities were calculated from a set of training files. The Perceptron algorithm works by calculating and assigning weights to individual words. Words are extracted from a query mail, and the previously calculated probabilities and weights are used to classify it as spam or non-spam.
- Implemented the k-means algorithm for image compression. Each pixel of the image was assigned to one of 'k' clusters defined by pixels randomly selected at the start. All pixels in a cluster were assigned the same value, the cluster mean; cluster means were updated as the assignment step was repeated until convergence.
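The k-means compression loop described above can be sketched on one-dimensional grayscale intensities; the project operated on image pixels, and this simplification is purely for illustration.

```java
// Sketch of k-means image compression on grayscale intensities: assign
// each pixel to its nearest cluster mean, recompute means, repeat until
// assignments stop changing, then replace each pixel by its cluster mean.
import java.util.Arrays;

public class KMeansCompress {
    public static double[] cluster(double[] pixels, double[] initialMeans) {
        double[] means = initialMeans.clone();
        int[] assign = new int[pixels.length];
        boolean changed = true;
        while (changed) {
            changed = false;
            // Assignment step: nearest mean per pixel.
            for (int i = 0; i < pixels.length; i++) {
                int best = 0;
                for (int k = 1; k < means.length; k++) {
                    if (Math.abs(pixels[i] - means[k]) < Math.abs(pixels[i] - means[best])) {
                        best = k;
                    }
                }
                if (assign[i] != best) { assign[i] = best; changed = true; }
            }
            // Update step: each mean becomes the average of its assigned pixels.
            double[] sum = new double[means.length];
            int[] count = new int[means.length];
            for (int i = 0; i < pixels.length; i++) {
                sum[assign[i]] += pixels[i];
                count[assign[i]]++;
            }
            for (int k = 0; k < means.length; k++) {
                if (count[k] > 0) means[k] = sum[k] / count[k];
            }
        }
        // Compression: every pixel takes its cluster's mean value.
        double[] compressed = new double[pixels.length];
        for (int i = 0; i < pixels.length; i++) compressed[i] = means[assign[i]];
        return compressed;
    }

    public static void main(String[] args) {
        double[] pixels = {10, 12, 11, 200, 198, 201};
        System.out.println(Arrays.toString(cluster(pixels, new double[]{0, 255})));
    }
}
```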
Implementation of concurrent data structures: JAN '12 - MAY '12
- The goal was to implement and analyze algorithms for concurrent data structures such as queues, stacks, hash tables and linked lists, and to test them on a multi-core machine to analyze their performance.
- Locks for the concurrent data structures were implemented using Java's java.util.concurrent package. Techniques such as exponential back-off, item exchangers and elimination arrays were used to increase efficiency.
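One of the structures above, the concurrent stack, can be sketched in a lock-free (Treiber-style) variant built on compareAndSet from java.util.concurrent.atomic; the simple spin hint here stands in for the exponential back-off used in the project.

```java
// Sketch of a Treiber-style lock-free stack: push and pop retry a
// compareAndSet on the top pointer until it succeeds.
import java.util.concurrent.atomic.AtomicReference;

public class TreiberStack<T> {
    private static final class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> head = top.get();
            node.next = head;
            if (top.compareAndSet(head, node)) return;
            backoff();
        }
    }

    public T pop() {
        while (true) {
            Node<T> head = top.get();
            if (head == null) return null; // empty stack
            if (top.compareAndSet(head, head.next)) return head.value;
            backoff();
        }
    }

    private static void backoff() {
        // Brief spin hint before retrying the CAS; a production version
        // would back off exponentially under contention.
        Thread.onSpinWait();
    }

    public static void main(String[] args) {
        TreiberStack<Integer> s = new TreiberStack<>();
        s.push(1);
        s.push(2);
        System.out.println(s.pop() + " " + s.pop() + " " + s.pop());
    }
}
```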
Design and implementation of a new MAC-layer protocol: JAN '12 - MAY '12
- The goal was to design a new MAC-layer protocol that would enable an aircraft to learn about other aircraft in its vicinity using GPS instead of RADAR, using NS-2 and TCL.
- A simple protocol for 5-10 nodes was implemented in NS-2, which was then enhanced and modified to accommodate 90+ aircraft and their movements.
