
Software Engineer Resume


SUMMARY:

  • 4+ years of experience in data science & big data
  • Experience with multiple programming languages, including Python, Scala, SQL, Shell scripting
  • Experience with big data technologies, including Spark, Hadoop, Hortonworks, EMR, AWS cloud
  • Worked with Puppet and other DevOps tools to automate production configurations and deployments on 250+ Hadoop nodes
  • US citizen and Army veteran, served in Iraq

TECHNICAL SKILLS:

Programming: Python (scikit-learn, pandas, NumPy, Seaborn, Matplotlib), Scala, R, MATLAB, SQL, Java, C++, Linux, Shell scripting

Distributed big data development: Spark in Scala and PySpark, on-premises and in the cloud

Hadoop Ecosystem: HDFS, MapReduce, Ambari, Hive, Knox, REST, Kafka, Storm, HBase

AWS cloud services: EC2, EMR, S3, SageMaker

Data Science: Machine learning, feature engineering, model validation, parameter tuning, neural networks

Data science platforms: Databricks, Domino Data Lab, IBM Watson Studio

Other: Puppet, MicroStrategy, Docker, Kubernetes, Excel Macros, Pivot Tables, Agile, Kanban, Actuarial exams (1-P, 2-FM), Jira, Confluence

EXPERIENCE:

Software Engineer

Confidential

Responsibilities:

  • Works on a data science team to develop machine learning projects, configure infrastructure, create deployment pipelines, and provide data science support across the organization
  • Acted as a lead in data science platform evaluations for the entire enterprise
  • Installed, deployed, configured, and tested multiple data science platforms, including Databricks, Domino Data Lab, and IBM Watson Studio
  • Communicated with enterprise leadership and key decision makers to select and plan architecture for data science platforms valued at $3 Million / year
  • Contributed to the development and production deployment of machine learning models to predict which loans will be delivered to Confidential
  • Utilized Spark in Scala and PySpark for ETL, data preparation, GBT models, grid search, and hyperparameter tuning
  • Developed binary classification model to detect anomalies in accounting records
  • Performed data preparation and built Random Forest models in Python with pandas and scikit-learn
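The Random Forest and hyperparameter-tuning work above can be sketched as follows. This is a minimal, hypothetical illustration: the synthetic DataFrame, feature names, and toy anomaly label stand in for the confidential loan/accounting data, and the small parameter grid mirrors the grid-search approach rather than the actual search space.

```python
# Hypothetical sketch of the workflow: synthetic data replaces the
# confidential records; feature names and the label rule are illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
# Stand-in features; the real pipeline prepared these from loan records.
df = pd.DataFrame({
    "amount": rng.normal(1000, 250, 500),
    "days_open": rng.integers(1, 90, 500),
})
df["anomaly"] = (df["amount"] > 1400).astype(int)  # toy label for the demo

X_train, X_test, y_train, y_test = train_test_split(
    df[["amount", "days_open"]], df["anomaly"], random_state=0
)

# Grid search over a small hyperparameter space, cross-validated.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5]},
    cv=3,
)
grid.fit(X_train, y_train)
print(grid.best_params_, round(grid.score(X_test, y_test), 3))
```

The same pattern carries over to PySpark, where `CrossValidator` and `ParamGridBuilder` play the roles of `GridSearchCV` and `param_grid`.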

Hadoop Platform Admin

Confidential

Responsibilities:

  • Hadoop administrator, responsible for installing, configuring, upgrading, testing and troubleshooting Hadoop clusters on the Hortonworks platform
  • Administered 250+ nodes on Dev, Test, Acceptance, and Prod clusters containing up to 1 petabyte of loan data
  • Provided 24/7 on-call production support, troubleshooting urgent production issues in real time
  • Big Data Financial Reporting in Hive:
  • Generated AWS billing reports calculated from company-wide AWS billing statements on scale of hundreds of millions of line items, using the Hive Query Language on HDFS
  • Fully automated the reporting process using Shell scripting, Excel macros, and pivot tables
  • DevOps Configuration Management with Puppet:
  • Led the automation effort of production deployment and configuration management of 250+ nodes using Puppet
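The Hive billing reports above boil down to grouped aggregation over billing line items. A minimal pure-Python sketch of that aggregation is below; the field names (`service`, `cost`) and values are hypothetical, and the real job ran HiveQL over hundreds of millions of line items on HDFS rather than an in-memory list.

```python
# Pure-Python sketch of the group-and-sum that the Hive reports performed.
# Field names and amounts are made up for illustration.
from collections import defaultdict

line_items = [
    {"service": "EC2", "cost": 120.50},
    {"service": "S3",  "cost": 30.25},
    {"service": "EC2", "cost": 79.50},
]

# Equivalent in spirit to: SELECT service, SUM(cost) ... GROUP BY service
totals = defaultdict(float)
for item in line_items:
    totals[item["service"]] += item["cost"]

print(dict(totals))  # → {'EC2': 200.0, 'S3': 30.25}
```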

IT Consultant Trainee

Confidential

Responsibilities:

  • Trained as an IT Consultant for Confidential, in preparation for an upcoming role at Confidential
  • Intensive full-time training included Java, Linux, SQL, web development, Agile Scrum, and finance

Accounting Clerk

Confidential

Responsibilities:

  • Prepared accounting statements and financial reports for a small real estate brokerage firm, handling amounts of up to $1,000,000
  • Performed maintenance and testing on power generation equipment
  • Deployed to Iraq in Operation New Dawn, providing on-site support for electrical equipment

Teaching Assistant / Tutor

Confidential

Responsibilities:

  • Provided private tutoring to students in Calculus, Trigonometry, Micro & Macro Economics, SAT, GMAT
