Security Developer Resume

Philadelphia, PA

SUMMARY

  • Programming experience with Python, Java 8, Scala 2.7.0/3.5.0, R 3.5.1, HTML5, Node.js, and CSS3, in environments such as Linux and UNIX.
  • Technical experience using Hortonworks 2.6.5, Databricks 2.4.2, and a Hadoop working environment including Hadoop 2.8.3, Hive 1.2.2, Sqoop 1.4.7, and Apache Spark 2.2.1.
  • Good working knowledge of Eclipse IDE 4.14.0 and IntelliJ IDEA 2020.3.3 for developing and debugging Java and Scala applications.
  • Comprehensive knowledge of core Java concepts, the Collections Framework, object-oriented design, and exception handling, as well as Postman for REST API collections.
  • Experience in testing applications using JUnit 4.12 and ScalaTest.
  • Used JIRA for project tracking and reporting.
  • Familiar with Agile software development methodologies.
  • Experience working with source and version control systems such as Bitbucket and GitHub.
  • Worked with Amazon Web Services, using EC2 and Lambda for computation and S3, DynamoDB, and Redshift for storage.
  • Hands-on experience with Python libraries such as Matplotlib 2.2.2, NumPy, and Pandas.
  • Strong knowledge of the design and analysis of ML/data science algorithms such as classification, association rules, clustering, and regression, and of descriptive, predictive, and prescriptive analytics, machine learning (ML), and data mining.
  • Neural network libraries: TensorFlow r1.8.0, Keras 2.2.1.
  • Hands-on working experience with brain-computer interfaces, using Emotiv technology for data mining on brainwaves.

TECHNICAL SKILLS

Programming Languages: C, Java 8, R 3.5.1, Node.js

Web programming and Scripting: HTML5, CSS3, Bash - Shell scripting, Python 2.7/3.5.0

Databases: Oracle 10g, MySQL 5/8

Operating Systems: Windows 7, 8.1, 10; Linux; xv6 - Unix; Mac

Software: Microsoft Excel 2016, Microsoft Access 2016, Microsoft Word 2016, Visual Studio 2017, Eclipse Oxygen, NetBeans IDE 8.2, RStudio 1.1.456, Amazon S3, Anaconda 5.1.0, RapidMiner 7.2, KNIME 3.5, PyCharm 2.0, Emotiv, MySQL Workbench, Postman

Analysis Tools: MATLAB 2015, SciLab, Tableau

Hadoop Ecosystem: HDFS 2, MapReduce, Hive 1.2.2

AWS Services: S3, CloudWatch, DynamoDB, Lambda, API Gateway, Redshift

PROFESSIONAL EXPERIENCE

Confidential - Philadelphia, PA

Security Developer

Responsibilities:

  • Developing and maintaining features for the de-identification tool.
  • Calculating associated risk for PII datasets by classifying the datasets and running them through risk models using Java, AWS, and MySQL (see the sketch after this section).
  • Using Docker to deploy UI code; Maven is used as the build tool for the backend.
  • Using AngularJS for frontend development and Node.js scripts for parts of the backend.
  • Assisting teams in securing and protecting their datasets.
  • Testing and debugging issues using AWS CloudWatch.
  • Using AWS S3 and DynamoDB for data storage and retrieval.
  • Using AWS Lambda for serverless computation and modularity.
  • Cleaning and preparing data before running it through the model.

Environment: Java, Postman, AWS - S3, Lambda, API Gateway, CloudWatch, DynamoDB, Redshift, MySQL, DBeaver, Node.js, Microsoft Excel, MySQL Workbench
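
A minimal sketch of how a PII-masking step of this kind could be written as a Python AWS Lambda handler that persists de-identified records to DynamoDB; the table name, field list, and hashing scheme are illustrative assumptions, not the actual tool's implementation.

```python
# Hypothetical sketch: mask assumed PII fields before writing a record to DynamoDB.
import hashlib
import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
TABLE_NAME = os.environ.get("DEID_TABLE", "deidentified_records")  # assumed table name
PII_FIELDS = {"name", "email", "ssn", "phone"}                     # assumed PII columns


def mask(value):
    """Replace a PII value with a truncated one-way SHA-256 digest."""
    return hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:16]


def handler(event, context):
    # API Gateway proxies the record in event["body"]; direct invocations pass it as-is.
    record = json.loads(event["body"]) if "body" in event else event
    deidentified = {k: mask(v) if k in PII_FIELDS else v for k, v in record.items()}
    dynamodb.Table(TABLE_NAME).put_item(Item=deidentified)
    return {"statusCode": 200, "body": json.dumps(deidentified)}
```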

Confidential - Santa Monica, CA

Software Developer

Responsibilities:

  • Developing features for the job scheduler using Scala and Scala-SQL.
  • Testing features by executing MySQL queries in MySQL Workbench (see the sketch after this section).
  • Performing deployment and rollback procedures using standard Maven commands.
  • Automating deployment and Maven releases with Jenkins.
  • Writing Salt scripts for parallel workflow execution.
  • Estimating and testing BigQuery usage.
  • Migrating and testing GitHub repositories.
  • Researching and analyzing existing and new features.
  • Testing Kerberos secrets.

Environment: Scala, IntelliJ, Python 2.7/3.5, HDFS 2, Hive 1.2.2, MySQL 5, MySQL Workbench, Linux/Unix - Bash Shell Scripting
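
A minimal sketch of the kind of verification query that could be run against the scheduler's MySQL backend from Python; the connection details and the table/column names (scheduled_jobs, status, next_run) are assumptions for illustration, not the actual schema.

```python
# Hypothetical sketch: check pending scheduler jobs with a MySQL query.
# Table and column names are assumed for illustration.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="scheduler", password="secret", database="scheduler_db"
)
try:
    cursor = conn.cursor(dictionary=True)
    cursor.execute(
        "SELECT job_id, status, next_run FROM scheduled_jobs "
        "WHERE status = %s ORDER BY next_run LIMIT 10",
        ("PENDING",),
    )
    for row in cursor.fetchall():
        print(row["job_id"], row["status"], row["next_run"])
finally:
    conn.close()
```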

Confidential - Phoenix, AZ

Big Data Architect

Responsibilities:

  • Maintaining, enhancing, and upgrading the Cornerstone data ingestion capabilities.
  • Optimizing data retrieval using Hive.
  • Efficiently storing and retaining data in Cornerstone.
  • Creating and managing nodes that use Java JARs, Python, and shell scripts to schedule jobs and customize data ingestion for users (see the sketch after this section).
  • Implementing code changes in existing Java, Python, and shell-script modules as enhancements.
  • Running MySQL queries extensively across several tables in MySQL Workbench for efficient retrieval of ingested data.
  • Rewriting existing Python and shell scripts for more efficient execution.
  • Testing various modules as part of environment upgrades.
  • Resolving JIRA tickets for customer request tracking by debugging the code for errors.
  • Agile is used for project management, and Bitbucket is used for source code tracking.
  • The data formats handled are XML, JSON, Parquet, and text.

Environment: Python 2.7/3.5, Putty, Java 7, Eclipse - Oxygen, HDFS 2, Hive 1.2.2, MySQL 5/8, MySQL Workbench, Linux/Unix - Bash Shell Scripting
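
A minimal sketch of a small scheduling wrapper of the kind described above, which invokes an ingestion JAR and then pushes its output into a dated HDFS directory; the JAR name, arguments, and HDFS paths are illustrative assumptions, not the actual Cornerstone pipeline.

```python
# Hypothetical sketch: run an (assumed) ingestion JAR, then copy its output to HDFS.
import subprocess
import sys
from datetime import date

LOCAL_OUT = "/tmp/ingest_{}.parquet".format(date.today().isoformat())
HDFS_DIR = "/data/cornerstone/ingest/dt={}".format(date.today().isoformat())


def run(cmd):
    """Run a command and fail loudly so the job scheduler marks the run as failed."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # Step 1: invoke the assumed ingestion JAR to produce a local output file.
    run(["java", "-jar", "ingestion-job.jar", "--output", LOCAL_OUT])
    # Step 2: create the dated HDFS directory and copy the output into it.
    run(["hdfs", "dfs", "-mkdir", "-p", HDFS_DIR])
    run(["hdfs", "dfs", "-put", "-f", LOCAL_OUT, HDFS_DIR])


if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as exc:
        sys.exit(exc.returncode)
```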

Confidential

Responsibilities:

  • Abandoning the cart refers to the scenario of leaving the cart without purchasing.
  • The relevance of the project was underlined by the clickstream data used, which was 3.2 GB in size with 971 columns and 1.1 million rows.
  • Predictive analysis was performed on Dell shopping cart clickstream data using the Keras library running on top of TensorFlow (see the sketch after this section).
  • PySpark and Spark were used for data cleaning and pre-processing.
  • The Seaborn library was used for data visualization and analysis.
  • A decision rule classifier and the treatment learner TAR3 were used to obtain paths with a high likelihood of leading to a specific outcome: an abandoning or purchasing user.
  • Agile methodology was used for project management, and GitHub was used for source code tracking.

Environment: Pandas, TensorFlow r1.8.0, Keras libraries, PySpark 2.2.1, Spark 2.2.1
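
A minimal sketch of what a Keras binary classifier for abandonment prediction could look like, assuming the clickstream features have already been cleaned and numerically encoded; the layer sizes, feature file, and label column are illustrative assumptions, not the project's actual model.

```python
# Hypothetical sketch: predict cart abandonment (1) vs. purchase (0) from
# pre-processed clickstream features. File name, label column, and layer sizes are assumed.
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense, Dropout

df = pd.read_csv("clickstream_features.csv")            # assumed cleaned feature matrix
X = df.drop(columns=["abandoned"]).values.astype("float32")
y = df["abandoned"].values.astype("float32")            # assumed binary label

model = Sequential([
    Dense(64, activation="relu", input_shape=(X.shape[1],)),
    Dropout(0.3),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),                     # probability of abandonment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% of rows for validation to watch for overfitting.
model.fit(X, y, epochs=10, batch_size=256, validation_split=0.2)
```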

Confidential

Responsibilities:

  • Extracted problem datasets from the Kaggle repository for analysis and decision making.
  • Used k-fold cross-validation to divide the training and test data, obtaining an optimal split without data skew (see the sketch after this section).
  • Trained the neural network model to classify unknown input values until the difference between the estimated and actual values was small, while ensuring it was not overfitted.
  • Genetic algorithm analysis was used to find the best possible solution from the available options.
  • Processed the datasets using the GA, neuralnet, and nnet packages in RStudio.
  • Trained neural network models until a high precision value was obtained.
  • Utilized R packages to generate covariance and correlation matrices.
  • Visualization was done using ggplot.

Environment: RStudio 1.1.456 - Neuralnet, NNet, Ga packages, R programming, Python 3.5.0 - Pandas, Seaborn libraries, Apache Spark 2.2.1 - MLlib
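
A minimal sketch of a manual k-fold split in Python that shuffles row indices so each fold sees a representative mix of rows; the arrays X and y here are synthetic placeholders, and the project's actual validation was done with the R packages listed above.

```python
# Hypothetical sketch: manual k-fold cross-validation indices (k=5).
import numpy as np


def kfold_indices(n_samples, k=5, seed=42):
    """Yield (train_idx, test_idx) pairs over k shuffled, roughly equal folds."""
    rng = np.random.RandomState(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx


# Synthetic placeholder data standing in for the Kaggle features and labels.
X = np.random.rand(100, 4)
y = (X.sum(axis=1) > 2).astype(int)
for fold, (train_idx, test_idx) in enumerate(kfold_indices(len(X))):
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    print("fold", fold, "train:", len(train_idx), "test:", len(test_idx))
```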
