Python Developer Resume
New York City, NY
SUMMARY
- IT professional with 6 years of experience and proficiency in design and development using Python, Django, and Java/J2EE.
- Experience in application development using Python, Django, HTML5, CSS, JavaScript, AngularJS, jQuery, and Node.js following W3C standards, with databases including Oracle, PostgreSQL, and SQLite.
- Proficient in developing Web Services (SOAP, RESTful) in Python using XML, JSON.
- Worked on various applications using Python-integrated IDEs such as Sublime Text and PyCharm.
- Experience working with application servers like WebSphere, WebLogic, and Tomcat; web servers like Apache and NGINX; and integrated development environments like PyCharm and Eclipse.
- Knowledge of working in WAMP (Windows, Apache, MySQL, and Python) and LAMP (Linux, Apache, MySQL, and Python) architectures.
- Experience configuring and developing with different database servers including MySQL, MSSQL, Oracle, and MongoDB. Knowledge of AWS services like Auto Scaling, CloudFormation, CloudTrail, and CloudWatch.
- Experience with version control tools including Git, AccuRev, SVN, and CVS. Good programming and problem-solving skills, commitment, and a results-oriented mindset, with a zeal to learn new technologies. Good knowledge of TCP/IP, UDP, HTTP, HTTPS, SSH, and SSL protocols.
- Good experience in software development using Python libraries like Beautiful Soup, Markdown, JsonLogic, ReportLab, Pandas DataFrames, network, urllib2, and MySQLdb for database connectivity (a scraping sketch appears at the end of this summary).
- Worked on big data technologies like Hadoop/HDFS, Spark, MapReduce, Pig, Hive, Sqoop, and Flume to extract, load, and transform data from heterogeneous sources such as Oracle, flat files, XML, and other streaming data sources into the EDW for analysis (ETL/ELT), using machine learning tools. Created Hive tables and queries using HiveQL.
- Worked on cloud technologies, with experience setting up Hadoop clusters on Amazon EC2 & S3 and hands-on experience with RDD architecture and Spark operations.
- Experience in working with business intelligence and data warehouse software, including SSAS/SSRS/SSIS, Business Objects, Amazon Redshift, Azure Data Warehouse.
- Working experience in data analysis techniques using Python libraries like NumPy, Pandas, SciPy and visualization libraries of Python like Seaborn, Matplotlib.
- Experience working with RDBMS including Oracle/ DB2, SQL Server, PostgreSQL 9.x, MS Access and Teradata for faster access to data on HDFS.
- Experience in AGILE (Scrum) Methodology. Involved in sprint planning, product backlog creation and acted in the capacity of Scrum Master.
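A minimal scraping sketch in the spirit of the Beautiful Soup work noted above; the URL and CSS class are hypothetical placeholders, not from a real project:

    # Fetch a page with Requests and extract values with Beautiful Soup.
    # example.com and the "price" class are illustrative only.
    import requests
    from bs4 import BeautifulSoup

    resp = requests.get("https://example.com/products", timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for cell in soup.find_all("td", class_="price"):
        print(cell.get_text(strip=True))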
TECHNICAL SKILLS
Programming Languages: Python, UNIX shell scripting, Perl, Visual Basic, T-SQL, PL/SQL, C#, Spark, HiveQL
Python Libraries: Requests, urllib, Pandas, NumPy, SciPy, Matplotlib, BeautifulSoup, HTMLParser, Swagger, SQLAlchemy, MySQLdb, XML, Docx, Boto3; PEP 8 style compliance.
Web Technologies: HTML/HTML5, CSS/CSS3, XML, DOM, AJAX, jQuery, JSON, Bootstrap.
Frameworks: Django, Flask, Pyramid, PyJamas, AngularJS, Node.js, Spring, Hibernate.
Big Data Technologies: HDFS, Sqoop, Flume, Oozie, Apache Apex (in-memory streaming), PySpark, Data Lake, TensorFlow, HBase, Cassandra, Redshift, MongoDB, Kafka, YARN, Spark Streaming, Edward, CUDA, MLlib, ZooKeeper
Data Modeling Tools: Toad Data Modeler, SQL Server Management Studio, MS Visio, SAP PowerDesigner, Erwin 9.x
Databases: Teradata MVS, MySQL Server, Oracle 12c/11g/10g/9i, MS Access 2016/2010, Sybase, Hive, SQL Server 2014/2016, Amazon Redshift, Azure SQL Database
Reporting Tools: Crystal Reports XI/2013, SSRS, Business Objects 5.x/6.x, Tableau, Informatica PowerCenter
Deployment Tools: Heroku, Amazon EC2, S3, Jenkins, Fabric, AWS Lambda, Docker.
Cloud Technologies: Amazon Web Services (AWS), Microsoft Azure, Amazon EC2, S3, Kinesis
Analytics: Tableau, Power BI, MS Excel
Project Execution Methodologies: Agile, Scrum, Lean Six Sigma, Ralph Kimball and Bill Inmon data warehousing methodologies
BI Tools: Tableau, Birst, Power BI, SAP Business Objects, SAP Business Intelligence
Operating Systems: Windows Server 2012 R2/2016, UNIX, CentOS
PROFESSIONAL EXPERIENCE
Confidential, New York City, NY
Python Developer
Responsibilities:
- Web-services backend development using Python (Django, SQLAlchemy).
- Involved in developing a RESTful service using the Python Django framework.
- Worked on MongoDB database tasks such as locking, transactions, indexing, sharding, replication, and schema design.
- Developed RESTful APIs using Python Flask and SQLAlchemy data models, and ensured code quality by writing unit tests with Pytest (a minimal sketch appears at the end of this list).
- Experience in working with Python ORM Libraries including Django ORM, SQLAlchemy.
- Worked with the Requests, Pysftp, GnuPG, ReportLab, NumPy, SciPy, Matplotlib, HTTPLib2, Urllib2, Beautiful Soup, and Pandas Python libraries during the development lifecycle.
- Created data tables using PyQt to display patient and policy information and to add, delete, and update patient records.
- Used PyUnit, the Python unit test framework, to run test cases for Python applications.
- Troubleshot, fixed, and deployed many Python bug fixes for the two main applications that were a primary source of data for both customers and the internal customer service team.
- Enhanced existing Python/Django modules to deliver data in specific formats.
- Involved in web development, programming, and engineering of Django, Web Server Gateway Interface (WSGI), and SQL internal admin tools using Behave for behavior-driven development (BDD).
- Designed and maintained databases using Python and developed Python based API (RESTful Web Service) using Flask, SQLAlchemy and PostgreSQL.
- Used Python and Django to interface with the jQuery UI and manage the storage and deletion of content.
- Worked on migrating data from the data warehouse to AWS using Python.
- Created Python programs to automate the creation of Excel sheets by reading data from Redshift.
- Worked on multiple AWS instances, using Boto3 to access S3 buckets and to set up security groups, Elastic Load Balancers, and AMIs.
- Automated cleaning and processing of 150+ data sources using Python and Informatica.
- Performed data analysis on the analytic data in Hadoop/Hive/Oozie/Sqoop and AWS using SQL, Python, Apache Spark, and SQL Workbench.
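As referenced above, a minimal sketch of a Flask REST endpoint backed by a SQLAlchemy model, with a Pytest check; the Patient model, route, and database URI are hypothetical placeholders, not the actual application code:

    # Hypothetical Flask + SQLAlchemy endpoint with a Pytest unit test.
    from flask import Flask, jsonify
    from flask_sqlalchemy import SQLAlchemy

    app = Flask(__name__)
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///demo.db"
    db = SQLAlchemy(app)

    class Patient(db.Model):
        id = db.Column(db.Integer, primary_key=True)
        name = db.Column(db.String(80), nullable=False)

    @app.route("/patients/<int:patient_id>")
    def get_patient(patient_id):
        patient = Patient.query.get_or_404(patient_id)  # 404 if absent
        return jsonify(id=patient.id, name=patient.name)

    def test_missing_patient_returns_404():
        with app.app_context():
            db.create_all()
        assert app.test_client().get("/patients/999").status_code == 404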
Environment: Python, Requests, Pysftp, Boto3, PEP 8, GnuPG, ReportLab, NumPy, SciPy, Matplotlib, HTTPLib2, Urllib2, Beautiful Soup, Django, MySQL, Windows, Linux, HTML, CSS, jQuery, JavaScript, Apache, Jira, Git, Teradata, Oracle, SQL, UNIX
Confidential, Irvine, CA
Python Developer
Responsibilities:
- Used the MVT pattern to encapsulate client/server interactions, separating objects, components, and services into multiple tiers with well-defined boundaries and clarifying both software-pattern and developer roles.
- Developed tools using Python, shell scripting, XML, and big data technologies to automate scheduled jobs.
- Designed and developed integration methodologies between client web portals and the existing software infrastructure using SOAP APIs and vendor-specific frameworks.
- Designed and maintained databases using Python and developed Python based API (RESTful Web Service) using Django, SQLAlchemy and PostgreSQL.
- Worked with health services research teams and business analysts to gain a clear overview of large claims databases, e-medical records, and other related healthcare registry data.
- Used PyQt to implement column filtering, helping customers effectively view their transactions and statements. Implemented navigation rules for the application and its pages.
- Performed testing using Django's Test Module.
- Used Django configuration to manage URLs and application parameters.
- Developed and tested hypotheses in support of research and product offerings, and communicated findings in a clear, precise, and actionable manner to clients.
- Understanding of data management technologies including Hadoop, Python, Hive, Spark, Flume, and Oozie, and cloud technologies like AWS Redshift and S3. Created EC2 instances in AWS and migrated data from the data center to AWS using Snowball and AWS Migration Service.
- Extensively used Python libraries like NumPy, Pandas, and SciPy for data wrangling and analysis, and libraries like Requests, Pysftp, GnuPG, ReportLab, Seaborn, and Matplotlib for developing the code (a wrangling sketch appears at the end of this list).
- Worked on multiple AWS instances; set up security groups, Elastic Load Balancers, and AMIs.
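A minimal Pandas/NumPy data-wrangling sketch along the lines described above; claims.csv and its column names are hypothetical placeholders:

    # Load, clean, and aggregate a hypothetical claims extract.
    import numpy as np
    import pandas as pd

    df = pd.read_csv("claims.csv", parse_dates=["service_date"])
    df = df.dropna(subset=["claim_amount"])          # drop incomplete rows
    df["log_amount"] = np.log1p(df["claim_amount"])  # tame skewed amounts
    monthly = df.groupby(df["service_date"].dt.to_period("M"))["claim_amount"].sum()
    print(monthly.head())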
Environment: Python, Django, MongoDB, HTML5, CSS, XML, MySQL, AWS, JavaScript, jQuery, Bootstrap, RESTful, IDLE, Pandas, Requests, Pysftp, GnuPG, ReportLab, NumPy, SciPy, Matplotlib, HTTPLib2, Urllib2, Beautiful Soup, Nginx, GitHub, Windows, Linux, Apache Spark, AWS EC2, S3, SQL Workbench, Redshift, Power BI.
Confidential, Norwalk, CT
Python Developer/ Engineer
Responsibilities:
- Troubleshot, fixed, and deployed many Python bug fixes for the two main applications that were a primary source of data for both customers and the internal customer service team.
- Worked with teams to identify and construct relations among various parameters for analyzing customer response data.
- Developed and improved bidding algorithms for daily optimization using Python, continuously analyzed and tested new data sources, and performed research analysis on bidding strategies.
- Executed queries from Python using the MySQL Connector/Python and MySQLdb packages (a minimal sketch appears at the end of this list).
- Implemented UI guidelines and standards throughout the development and maintenance of the website using CSS, HTML, JavaScript, and jQuery.
- Built a Python/Django-based web application with a PostgreSQL database and integrations with third-party email, messaging, and storage services.
- Developed GUI using webapp2 for dynamically displaying the test block documentation and other features of Python code using a web browser.
- Developed automated data pipelines from various external data sources (web pages, APIs, etc.) to the internal data warehouse (SQL Server, AWS), then exported the results to reporting tools like Tableau.
- Carried out various mathematical operations for calculation purposes using the Python libraries NumPy, SciPy, and Pandas.
- Configured various big data workflows to run on top of Hadoop, comprising heterogeneous jobs such as Pig, Hive, and Spark.
- Defined and created cloud data strategies, including designing multi-phased implementations using AWS and S3.
- Created views in Tableau Desktop that were published to the internal team for review and further data analysis, with customization using filters and actions; used KPIs to track business performance.
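A minimal sketch of executing a parameterized query from Python with MySQL Connector/Python, as referenced above; the connection settings, table, and columns are hypothetical:

    # Run a parameterized SELECT against a hypothetical bids table.
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="app",
                                   password="secret", database="bidding")
    cur = conn.cursor()
    cur.execute("SELECT campaign_id, bid FROM bids WHERE bid > %s", (1.50,))
    for campaign_id, bid in cur.fetchall():
        print(campaign_id, bid)
    cur.close()
    conn.close()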
Environment: Python, Django, HTML5, CSS, XML, MySQL, AWS, JavaScript, jQuery, Bootstrap, RESTful, IDLE, Requests, Pysftp, GnuPG, ReportLab, NumPy, SciPy, Pandas, Matplotlib, HTTPLib2, Urllib2, Beautiful Soup, scikit-learn, Hadoop, Pig, Hive, HiveQL, Tableau, S3, Redshift, JSON, KPI, ETL.
Confidential, Wilmington, DE
Python Developer/ Data Engineer
Responsibilities:
- Involved in the complete software development life cycle (SDLC): requirement analysis, conceptual design, detailed design, development, and system and user acceptance testing (UAT).
- Developed web-based LAMP-stack applications using Python and Django for large dataset analysis.
- Designed and Implemented a Random Unique Test Selector Package on processing large volume of data using Python and Django ORM.
- Worked with the internal DuPont Pioneer team to analyze, track, and visualize data on genetically modified products.
- Identified and analyzed patterns; designed experiments, tested hypotheses, and built models.
- Conducted advanced data analysis and built complex algorithms to help stakeholders discover and quantify leading information using data analytics software and tools including Python, R, Hadoop, Spark, and Redshift.
- Worked with statistical analysis methods such as time series, regression models, principal component analysis, and multivariate analysis.
- Involved in designing, implementing, and modifying the Python code and MySQL database schema on the back end.
- Designed a test system with SAS, SAS Macros, Oracle SQL, Oracle, AIX/UNIX, and UNIX shell scripts.
- Performed data analysis on the analytic data in Hadoop/Hive/Oozie/Sqoop and AWS using SQL, Python, Apache Spark, and SQL Workbench (a PySpark sketch appears at the end of this list).
- Created daily, weekly, and monthly reports using Hadoop/Hive/Oozie/Sqoop, MS Excel, and UNIX.
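A minimal PySpark sketch of the kind of Hive-backed analysis described above; the database, table, and column names are hypothetical:

    # Aggregate a hypothetical Hive table with Spark SQL.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder.appName("weekly-report")
             .enableHiveSupport().getOrCreate())
    weekly = spark.sql("""
        SELECT product, SUM(sales) AS total_sales
        FROM analytics.daily_sales
        GROUP BY product
    """)
    weekly.show()
    spark.stop()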
Environment: Python, MySQL, AWS, JavaScript, jQuery, Bootstrap, RESTful, IDLE, Hadoop, Spark, Oracle, Redshift, Hive, Oozie, Sqoop, SQL Server, SQL Workbench, SDLC, UAT.
Confidential
Python/ Big Data Engineer
Responsibilities:
- Worked with domain experts to identify data relevant for analysis across large data sets from multiple sources, to understand linkages and develop use cases.
- Assisted in the development of risk management predictive/analytical models to help management identify, measure, and manage risk.
- Developed Python scripts to read Excel files and generate XML configuration files, and built horizontally scalable APIs using Python Flask; designed the schema for the APIs (a minimal Excel-to-XML sketch appears at the end of this list).
- Created big data clusters with technologies like Hadoop, Hive, Flume, Sqoop, Oozie, and Python, and used data analytics libraries like Pandas, NumPy, Matplotlib, and Plotly to efficiently ingest, store, and analyze data.
- Expertise in creating databases, users, tables, triggers, macros, views, stored procedures, functions, packages, joins, and hash indexes in the database.
- Worked with query tools such as TOAD, SQL Developer, and PL/SQL Developer to export and load data to and from different source systems, including flat files.
- Created ETL jobs to load data from different sources into the data warehouse for data analysis.
- Supported applications using the Jira ticket management system (TMS).
- Knowledge of statistics and experience using statistical packages such as NumPy, Pandas, and Matplotlib for analyzing datasets, and automated processes using shell scripts in a UNIX environment.
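A minimal Excel-to-XML sketch as referenced above; the file names, sheet layout, and element names are hypothetical placeholders:

    # Read a hypothetical settings workbook and emit an XML config file.
    import pandas as pd
    import xml.etree.ElementTree as ET

    df = pd.read_excel("settings.xlsx")  # expects "name" and "value" columns
    root = ET.Element("config")
    for _, row in df.iterrows():
        ET.SubElement(root, "entry", name=str(row["name"])).text = str(row["value"])
    ET.ElementTree(root).write("config.xml", encoding="utf-8", xml_declaration=True)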
Environment: Python, Vim, BeautifulSoup, Requests, MySQL, CVS, SQL Developer, Hadoop, Hive, Flume, Sqoop, Oozie, SAS, SPSS, Unix, Oracle/DB2, Jira, BTEQ, TOAD, PL/SQL, Excel, Agile.
Confidential
BI/ VBA Developer
Responsibilities:
- Created Excel macros and VBA code to massage data in Excel and modified SQL stored procedures. Created and configured data sources and data source views, dimensions, cubes, measures, partitions, and KPIs using SQL Server Analysis Services.
- Gathered requirements and performed business analysis. Defined functional specifications and use cases to meet the new system's (Comverse) requirements relative to the old system (Kenan), and took part in direct client discussions to gain a clear understanding.
- Involved in data modeling and in populating business rules via data mappings into the target tables, with an understanding of telecom billing-related static and dynamic tables and their usage.
- Supported Java-based applications for the client and created Crystal Reports, including ad hoc, frequency, and summary reports, using SQL, PL/SQL, and UNIX scripts.
- Tested the reports in TOAD and SQL Developer. Created stored procedures, functions, and triggers to run and schedule the reports in the Oracle database.
- Supported applications using the Bugzilla bug tracker.
- Experience using Informatica to extract, transform, and load data into the data warehouse.
- Performed post-deployment (SIT/UAT/prod) checks and monitored for issues during the initial stages.
Environment: VBA, TOAD, SQL Developer, Oracle 11g, SAP Crystal reports, UNIX, SQL, PL/ SQL, Informatica