Software Engineering Intern Resume

SKILLS:

Languages: Python, C++, Java, JavaScript, SQL, MATLAB

Cloud: AWS (EC2, Kinesis, DynamoDB, EMR, ECS), Serverless Architecture, Microservices, Docker, HDInsight, BLOB

Big Data: PySpark, Hadoop (HDFS, MapReduce), Hive, Impala, ETL, Data Pipelines, Data Cleansing, Data Warehousing, Teradata

Databases: MySQL, Oracle, Postgres, DDL

Machine Learning: TensorFlow, Data Science, Segmentation, MRI

Tools: Git, Jenkins, Linux, Excel, OpenGL

Other: Algorithms, APIs, B2B Software, Software Engineering, Architecture, Model-View-Presenter (MVP), SFTP, Project Plans, Security, Telecommunication, Subject Matter Expert, Teaching

EXPERIENCE:

Confidential

Software Engineering Intern

Responsibilities:

  • Focus on building scalable cloud backend systems using AWS and integrating them into customer deliverables (see the sketch below).
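
A minimal sketch of the kind of scalable AWS backend component this describes, assuming a serverless Lambda handler that persists incoming API events to DynamoDB; the table name and event fields are hypothetical, not from the original:

    import json
    import boto3

    # Hypothetical table; any DynamoDB table keyed on "order_id" would do.
    TABLE = boto3.resource("dynamodb").Table("CustomerOrders")

    def handler(event, context):
        """AWS Lambda entry point: validate the payload and persist it."""
        body = json.loads(event.get("body") or "{}")
        if "order_id" not in body:
            return {"statusCode": 400, "body": "missing order_id"}
        TABLE.put_item(Item=body)
        return {"statusCode": 200, "body": json.dumps({"stored": body["order_id"]})}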

Confidential

Subject Matter Expert

Responsibilities:

  • Design course content and labs using services such as Amazon GuardDuty, AWS Security Hub, and Key Vault for secure cloud implementations (a small example follows).
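
One hedged example of what such a lab exercise might involve: a boto3 sketch that pulls recent GuardDuty findings for review. The region and result limit here are arbitrary choices, not from the original:

    import boto3

    # List recent GuardDuty findings in one region; region choice is arbitrary.
    gd = boto3.client("guardduty", region_name="us-east-1")
    for detector_id in gd.list_detectors()["DetectorIds"]:
        finding_ids = gd.list_findings(DetectorId=detector_id, MaxResults=10)["FindingIds"]
        if finding_ids:
            findings = gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
            for f in findings["Findings"]:
                print(f["Severity"], f["Type"], f["Title"])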

Confidential

Graduate Teaching Assistant

Responsibilities:

  • Deliver TA sessions, post solutions on Piazza, and guide ~80 students through their difficulties.
  • Grade assignments and exams.

Confidential

Data Science Intern 

Responsibilities:

  • Worked on building a smart face recognition system that detects a person's face and grants entry to authorized people.
  • Spearheaded the team and gave crucial input on the design of the algorithm's flow.
  • Further, worked on a Python/TensorFlow-based module to generate face embeddings from the input dataset (sketched below).
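
A short sketch of what such an embedding module might look like. The original likely used a face-specific network (e.g. FaceNet), so the MobileNetV2 backbone and 160x160 input size here are stand-in assumptions:

    import numpy as np
    import tensorflow as tf

    # Stand-in backbone; the actual module presumably used a face-specific model.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(160, 160, 3), include_top=False, pooling="avg")

    def embed_faces(face_crops):
        """Map a batch of aligned face crops (N, 160, 160, 3) to unit-norm embeddings."""
        x = tf.keras.applications.mobilenet_v2.preprocess_input(
            np.asarray(face_crops, dtype="float32"))
        emb = base(x, training=False).numpy()
        return emb / np.linalg.norm(emb, axis=1, keepdims=True)

Granting entry then reduces to comparing a new embedding against the enrolled embeddings, e.g. by cosine or Euclidean distance under a threshold.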

Confidential

Senior Business Technology Analyst

Responsibilities:

  • Led a team of three to develop an Azure-based ETL solution using Airflow and PySpark, modeled on an existing AWS-based solution.
  • Integrated PySpark modules such as a DQM engine, DDL generator, and DML executor, and delivered an MVP in three weeks.
  • Managed a team of four and architected a Python-based, microservices-oriented serverless data ingestion orchestrator to automate recurring data transfers from HDFS to S3. Created project plans in JIRA and SIPs in Excel for timely completion.
  • Implemented a data engineering pipeline using SnapLogic to ingest high-volume data (~300 GB) weekly.
  • Performed data cleansing and analytics, improving data ingestion times by ~30% by tuning Hive-on-Spark queries.
  • Designed a data transfer API from SFTP to S3 that handles arbitrary file formats, sizes, and frequencies; tested on data from 5 MB to 50 GB (see the sketch after this list).
  • Developed RESTful APIs in MuleSoft that fetch data from the Redshift data warehouse and from tables in Alation, and that trigger Airflow jobs.
  • Presented these project outcomes to company and client stakeholders on several occasions.
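
A hedged sketch of the SFTP-to-S3 transfer described above, assuming paramiko for the SFTP side and boto3 for S3; the host, credentials, and paths are placeholders, not details from the original:

    import boto3
    import paramiko

    def sftp_to_s3(host, username, password, remote_path, bucket, key):
        """Stream a file from an SFTP server straight into S3, with no local staging."""
        transport = paramiko.Transport((host, 22))
        transport.connect(username=username, password=password)
        sftp = paramiko.SFTPClient.from_transport(transport)
        try:
            with sftp.open(remote_path, "rb") as remote_file:
                # upload_fileobj performs managed multipart uploads, so it copes
                # with anything from small files up to the ~50 GB range above.
                boto3.client("s3").upload_fileobj(remote_file, bucket, key)
        finally:
            sftp.close()
            transport.close()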
