Senior Python Developer Resume
Williamsville, NY
SUMMARY:
- 4+ years of experience as a Python Developer; proficient in multiple languages and environments including Python, C, C++, HTML, JavaScript and SQL.
- Worked with standard Python packages such as Pandas, Matplotlib, Beautiful Soup, httplib2, Jinja2, NumPy, PySide, SciPy, wxPython, PyTables, Requests, urllib, SQLAlchemy, MySQLdb, XML and docx.
- Developed various Python scripts for report generation, FIX message simulation (FIX Simulator), SOAP requests, TCP/IP programming and multiprocessing jobs.
- Good experience in developing web applications and implementing Model View Controller (MVC) architecture using server-side frameworks such as Django, Django REST, Flask and Pyramid.
- Extensive experience in front-end development using HTML/HTML5, XML, DHTML, CSS/CSS3, SASS, LESS, JavaScript, React, Redux, AngularJS (1.x), jQuery, JSON, Node.js, Ajax and Bootstrap.
- Strong experience using REST web services for data communication between remote systems; designed, developed and tested REST interfaces in Java.
- Expertise in working with databases such as Microsoft SQL Server, Oracle, MySQL and PostgreSQL, and good knowledge of the NoSQL databases MongoDB and Cassandra.
- Proficient in developing complex SQL queries, stored procedures, functions and packages, along with performing DDL and DML operations on the database.
- Hands-on experience in data mining and data warehousing using ETL tools; proficient in building reports and dashboards in Pentaho (BI tool).
- Excellent working knowledge in UNIX and Linux shell environments using command line utilities.
- Created and stress-tested standalone and web applications and generated graph reports.
- Experience in working on BDPaaS (Big Data Platform as a Service).
- Experience in analyzing data using Big Data tools such as HiveQL; also experienced with HBase and Oozie.
- Extensively used SQL, NumPy, Pandas, scikit-learn, Spark and Hive for data analysis and model building.
- Extensively worked on Hadoop, Hive, Spark and Cassandra to build ETL and data processing systems spanning various data sources, data targets and data formats.
- Excellent knowledge of machine learning, mathematical modeling and operations research; comfortable with R, Python and relational databases; deep understanding of and exposure to the Big Data ecosystem.
- Experienced in implementing cloud solutions on AWS (EC2, EMR, S3, CloudWatch, Lambda, CloudTrail, SNS, SES, EBS, CLI, VPC, ELB, IAM, Redshift, RDS, Route 53), Google Cloud and Microsoft Azure.
- Strong experience with essential DevOps tools such as Ansible, Docker, Kubernetes, Git, Hudson, Jenkins and Ant.
- Strong knowledge of Data Structures and Algorithms, Object Oriented Analysis, machine learning and software design patterns.
- Expert in implementing Spark with Scala and Spark SQL for faster testing and processing of data; experienced with NoSQL databases such as HBase and Cassandra.
- Hands-on experience with Spark/Scala programming and good knowledge of Spark architecture and its in-memory processing; experience with unit testing, test-driven development (TDD) and load testing.
- Hands on experience working in WAMP (Windows, Apache, MYSQL, and Python/PHP) and LAMP (Linux, Apache, MySQL, and Python/PHP) Architecture.
- Expertise in production support: provided first-, second- and third-level support to different organizations; used Jira, GitLab, pdb, gdb and other debugging tools and deployed production hotfixes.
- Experience with Agile, Scrum and Waterfall methodologies. Used ticketing systems like Jira and Bugzilla.
- Knowledge of deployment and code-quality tools including Heroku, Jenkins, pylint, Cppcheck and Coverity.
- Expert at version control systems like Git, GitHub, SVN and CVS. Migrated repos from SVN to GitHub.
- Familiarity with development best practices such as code reviews, unit testing, system integration testing (SIT) and user acceptance testing (UAT).
- Experience in writing test scripts, test cases, test specifications and test coverage.
- Experience in test-driven development for functional and integration testing, using Python's PyUnit for unit testing and the Robot and Selenium frameworks for test automation.
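As an illustration of the PyUnit (unittest) style of testing mentioned above, a minimal test case might look like the following; the function under test is a hypothetical helper, not taken from any project listed here:

```python
import unittest

def normalize_symbol(raw):
    """Hypothetical helper: strip whitespace and upper-case a ticker symbol."""
    return raw.strip().upper()

class NormalizeSymbolTest(unittest.TestCase):
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_symbol("  aapl \n"), "AAPL")

    def test_clean_input_unchanged(self):
        self.assertEqual(normalize_symbol("MSFT"), "MSFT")
```

Such a module is typically run with `python -m unittest` as part of a TDD loop: write the failing test first, then the implementation.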
TECHNICAL SKILLS:
- Programming Languages: Python, R, C, C++, C# and Core Java
- Python Frameworks and Libraries: Django, Flask, Pandas, Matplotlib, Beautiful Soup, httplib2, Jinja2, NumPy, PySide, SciPy, wxPython, PyTables, Requests, urllib, SQLAlchemy, MySQLdb, XML, docx.
- Web Technologies: HTML, CSS, JavaScript, Bootstrap, JQuery, AJAX, XML, Angular JS, Node JS
- Development/ Deployment Tools: Sublime Text, Eclipse, EMACS, Jenkins, Coverity, pyLint, PyCharm, Spyder, Docker, Kubernetes
- Databases: Microsoft SQL Server, Oracle, MySQL, MS Access, PostgreSQL
- NoSQL Databases: MongoDB, Cassandra, HBase.
- Cloud Technologies: AWS, Amazon EC2, S3, EMR, Lambda, Redshift, Heroku, MS Azure.
- Operating Systems: Linux 4.x/5.x/6.x, Ubuntu, Red Hat Linux, Windows Server 2008/2012, IBM AIX.
- Version Controls: CVS, SVN, Git, GitHub.
- Testing, Issue Tracking and Debugging Tools: Jira, PDB, GDB, Bugzilla, Gitlab.
- Automation Testing: Selenium, Robot, PyTest.
- Development Methodologies: Agile, SCRUM and Waterfall.
PROFESSIONAL EXPERIENCE:
Confidential, Williamsville, NY
Senior Python Developer
Responsibilities:
- Used Python and Django for backend development, Bootstrap and Angular for the frontend, and MongoDB as the database. Used JavaScript for data validations and designed validation modules.
- Designed and developed the complete admin module, and resolved issues and enhanced the module.
- Developed test automation framework scripts using Python and Selenium WebDriver.
- Tested compatibility of application for dynamic and static content in cross browsers such as Chrome, IE, Edge, Firefox and Safari.
- Setup Selenium GRID to run automation scripts on different browsers.
- Implemented automation using Jenkins and Ansible on Unix/Linux systems and in Docker containers.
- Automated MySQL container deployment in Docker using Python and monitored these containers.
- Automated routine tasks using Python scripting, Raspberry Pi with Raspbian OS, and Unix/Linux shell scripting.
- Developed Django ORM queries that pre-load related data, reducing the number of database queries needed to retrieve the same amount of data.
- Developed RESTful microservices using Flask and Django and deployed them on AWS servers using EBS and EC2.
- Developed views and templates with Django's view controllers and templating language to create a user-friendly website interface.
- Created private cloud using Kubernetes that supports DEV, TEST, and PROD environments.
- Deployed and tested different modules in Docker containers and GIT.
- Identified and created issues and bugs based on the User Stories in JIRA.
- Worked on REST web services for data communication between remote systems; designed, developed and tested REST interfaces in Java.
- Developed the project's web page as a Single Page Application (SPA) using AngularJS and JavaScript APIs, and built a delivery-driver application.
- Used the Pandas API to put data in time-series and tabular formats for local-timestamp data manipulation and retrieval, storing it in MongoDB.
- Used the Python library Beautiful Soup for web scraping to extract data for building graphs.
- Uploaded the admin module to Elastic Beanstalk (EBS) and EC2 and stored static files in S3 on the Amazon cloud.
- Tracked automation results daily to improve testing performance.
- Followed Agile testing methodology, participated in daily Scrum meetings and tested each Sprint deliverables.
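The Pandas time-series handling described above can be sketched as follows; the tick values are invented for illustration, and the final write to MongoDB is omitted:

```python
import pandas as pd

# Hypothetical tick data keyed by local timestamps
raw = [
    ("2020-03-02 09:30:00", 101.2),
    ("2020-03-02 09:31:00", 101.7),
    ("2020-03-02 09:32:00", 101.5),
]
df = pd.DataFrame(raw, columns=["ts", "price"])
df["ts"] = pd.to_datetime(df["ts"])

# Index by timestamp to get a proper time series, then resample to
# 2-minute bars before persisting the tabular result
series = df.set_index("ts")["price"]
bars = series.resample("2min").mean()
print(bars)
```

From here, `bars.reset_index().to_dict("records")` yields documents ready for a MongoDB `insert_many` call.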
Environment: Python, Django, Django REST, Flask, AngularJS, JavaScript, Jenkins, Docker, Kubernetes, GitHub, Ansible, Java, HTML5/CSS, Bootstrap, Selenium & Robot Framework, Jira, MS SQL Server 2013, AWS, S3, EC2, EBS, MySQL, PyCharm, Linux, Shell Scripting.
Confidential, NYC, NY
Python Data Engineer
Responsibilities:
- Utilized Apache Spark with Python to develop and execute Big Data analytics and machine learning applications; executed machine learning use cases under Spark ML and MLlib.
- Identified areas of improvement in the existing business by unearthing insights from vast amounts of data using machine learning techniques.
- Interpreted business problems and provided solutions using data analysis, data mining, optimization tools, machine learning techniques and statistics.
- Involved in data modeling of tables in Cassandra; familiar with all of Cassandra's internal NoSQL tools.
- Created several tables as part of data modeling and measured table performance through load testing with the cassandra-stress tool.
- Restored backups with the sstableloader tool in the Cassandra NoSQL database management system.
- Led discussions with users to gather business process and data requirements and develop a variety of conceptual, logical and physical data models. Expert in business intelligence and data visualization tools: Tableau, MicroStrategy.
- Created data pipelines using Apache Spark, a big-data processing and computing framework, and updated and maintained Jenkins for automated build jobs and deployment.
- Worked on machine learning over large data sets using Spark and MapReduce.
- Led the implementation of new statistical algorithms and operators on Big Data and SQL platforms, utilizing optimization techniques, linear regression, K-means clustering, Naive Bayes and other approaches.
- Developed Spark/Scala and Python code for a regular-expression (regex) project in the Big Data Hive environment on Linux/Windows.
- Extracted, transformed and loaded data sources to generate CSV data files using Python and SQL queries.
- Stored and retrieved data from data-warehouses using Amazon Redshift.
- Used data warehousing concepts such as the Ralph Kimball and Bill Inmon methodologies, OLAP, OLTP, star schema, snowflake schema, fact tables and dimension tables.
- Refined time-series data and validated mathematical models using analytical tools like R and SPSS to reduce forecasting errors.
- Familiarity with the TCP/IP, IPv4 and IPv6 protocols in an environment providing multithreading, multi-tenancy and high-availability support at the network layer.
- Created data quality scripts using SQL and Hive to validate successful data loads and the quality of the data. Created various types of data visualizations using Python and Tableau.
Environment: Python, Django, Big Data, MapReduce, Spark, Spark MLlib, Tableau, SQL, Excel, VBA, SAS, MATLAB, AWS, SPSS, Cassandra, Oracle, MongoDB, SQL Server 2012, DB2, T-SQL, PL/SQL, XML.
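The extract-transform-load flow to CSV described above can be sketched with an in-memory SQLite database standing in for the real sources; the table and column names are invented for illustration:

```python
import sqlite3
import pandas as pd

# Illustrative stand-in for a production source database
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('east', 120.0), ('east', 80.0), ('west', 50.0);
""")

# Extract with SQL, transform with Pandas, load to a CSV data file
df = pd.read_sql_query("SELECT region, amount FROM sales", con)
summary = df.groupby("region", as_index=False)["amount"].sum()
summary.to_csv("sales_summary.csv", index=False)
print(summary)
```

Swapping the SQLite connection for an Oracle, SQL Server or Redshift one leaves the transform-and-load half of the script unchanged, which is the appeal of doing ETL this way.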
Confidential
Python Developer
Responsibilities:
- Worked with stakeholders, gathered requirements and developed high-level and detailed design documents.
- Developed both frontend and backend modules of the website using the Python Django web framework.
- Designed front end website using HTML, CSS, JavaScript, jQuery, Ajax, Bootstrap.
- Re-engineered various modules to implement changes and create a more efficient system.
- Developed a rich UI web application using JavaScript libraries such as jQuery UI, data grid, jscolor and Highcharts.
- Designed and developed components using Python; implemented Python code to retrieve and manipulate data.
- Implemented database access using Django ORM.
- Used MySQL as the backend database and Python's MySQLdb connector to interact with the MySQL server.
- Used RESTful APIs to access data from different suppliers.
- Developed Python and shell scripts for automation of the build and release process.
- Supported script configuration, testing, execution, deployment, run monitoring and metering.
- Used Python and Django for graphics creation, XML document processing, data exchange and business logic implementation between servers.
- Used RESTful APIs to gather network traffic data from servers.
- Supported Apache Tomcat web server on Linux Platform.
- Developed and executed User Acceptance Testing portion of test plan.
- Debugged software to identify and fix bugs.
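A minimal sketch of the build-and-release automation pattern described above; the steps here are placeholder commands, not the actual build, and the helper name is hypothetical:

```python
import subprocess
import sys

# Placeholder steps standing in for real build/release commands
STEPS = [
    [sys.executable, "-c", "print('running unit tests')"],
    [sys.executable, "-c", "print('building artifact')"],
]

def run_pipeline(steps):
    """Run each step in order, stop on the first failure, collect stdout."""
    outputs = []
    for cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step failed: {cmd}\n{result.stderr}")
        outputs.append(result.stdout.strip())
    return outputs

print(run_pipeline(STEPS))
```

Failing fast on a non-zero exit code, as here, is what lets such a script serve as a gate in a larger release process.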