Python Backend Developer Resume
SUMMARY
- Proficient in working with Amazon Web Services such as EC2, Virtual Private Clouds (VPCs), storage models (EBS, S3, instance storage), and Elastic Load Balancers (ELBs).
- Experienced in MVW (Model-View-Whatever) frameworks and libraries such as Django, AngularJS, JavaScript, Backbone.js, jQuery, and Node.js.
- Experienced with Kubernetes and Docker as runtime environments for building, testing, and deploying systems.
- Knowledge of integrating ecosystems such as Kafka, Spark, and HDFS.
- Experienced in WAMP (Windows, Apache, MySQL, and Python) and LAMP (Linux, Apache, MySQL, and Python) architectures, and wrote automation test cases using Selenium WebDriver, JUnit, Maven, and Spring.
- Proficient knowledge of the Software Development Life Cycle (SDLC) and Software Testing Life Cycle (STLC).
- Experienced in developing web services with Python, with good working experience processing large datasets in Spark using Scala and PySpark (see the PySpark sketch after this list).
- Hands-on experience in MVC architecture and Java EE frameworks like Struts2, Spring MVC, and Hibernate.
- Worked in Agile and Waterfall methodologies, delivering high-quality deliverables on time.
- Experienced with Test-Driven Development (TDD), Agile, Scrum, and Waterfall methodologies. Used ticketing systems such as JIRA, Bugzilla, and other proprietary tools.
- Knowledge of the Scala programming language. Good experience with Talend Open Studio for designing ETL jobs for data processing. Familiar with JSON-based REST web services and Amazon Web Services.
- Deeply involved in writing complex Spark/Scala scripts using SparkContext and the Cassandra SQL context, working with multiple APIs and methods that support DataFrames, RDDs, and Cassandra table joins, and finally writing/saving the DataFrames/RDDs to the Cassandra database.
- Worked extensively on the design and development of business processes using Sqoop, Pig, Hive, and HBase.
- Expertise in data encryption (client-side and server-side) and securing data at rest and in transit in S3, EBS, RDS, EMR, and Redshift using the Key Management Service (KMS).
- Knowledge of AWS concepts such as EMR and EC2 web services, which provide fast and efficient processing of big data.
- Experienced in ingesting real-time data from various sources through Kafka data pipelines and applying transformations to normalize the data stored in an HDFS data lake.
- Hands-on experience configuring and working with Flume to load data from multiple sources. Complete understanding of Lambda architectures.
- Proficient in designing and querying NoSQL databases such as HBase, Cassandra, and MongoDB, and in querying with Impala.
- Experienced with Web Development, Amazon Web Services, Python and the Django framework.
- Experienced with MVC architecture using RESTful and SOAP web services, SoapUI, and high-level Python web frameworks such as Django and Flask. Experienced with object-oriented programming (OOP) concepts using Python and Django on Linux.
- Experienced in running Spark streaming applications in cluster mode and Spark log debugging.
- Skilled in migrating data from different databases to Hadoop HDFS and Hive using Sqoop.
- Proficient in the core concepts of the MapReduce framework and the Hadoop ecosystem.
- Experienced in optimizing volumes and EC2 instances, creating multiple VPCs, and setting up alarms and notifications for EC2 instances using CloudWatch.
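As a minimal illustration of the PySpark work summarized above, the sketch below reads a large CSV dataset, aggregates it, and writes the result as Parquet. The paths and column names (sales.csv, region, amount) are hypothetical and not taken from any specific project described here.

```python
# Minimal PySpark sketch: aggregate a large CSV dataset and write Parquet.
# Paths and column names (sales.csv, region, amount) are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("large-dataset-aggregation").getOrCreate()

# Read a large CSV dataset from HDFS (or S3) with an inferred schema.
sales = spark.read.csv("hdfs:///data/sales.csv", header=True, inferSchema=True)

# Normalize and aggregate: total amount per region.
totals = (
    sales
    .withColumn("amount", F.col("amount").cast("double"))
    .groupBy("region")
    .agg(F.sum("amount").alias("total_amount"))
)

# Persist the result as Parquet for downstream Hive queries.
totals.write.mode("overwrite").parquet("hdfs:///output/sales_by_region")
```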
PROFESSIONAL EXPERIENCE
Python Backend Developer
Confidential
Responsibilities:
- Hands-on experience with the installation, configuration, maintenance, monitoring, performance tuning, and troubleshooting of Hadoop clusters in different environments such as development, test, and production clusters.
- Developed a Python script to load CSV files into S3 buckets, created the AWS S3 buckets, performed folder management in each bucket, and managed logs and objects within each bucket (see the boto3 sketch after this list). Developed an application in a Linux environment and worked extensively with Linux commands.
- Used jQuery and AJAX calls to transmit JSON data objects between the front end and controllers, and utilized continuous integration and automated deployments with Jenkins, Ansible, and Docker.
- Managed and reviewed Hadoop log files, analysed SQL scripts, and designed the solution for the process using PySpark.
- Administered continuous integration services (Jenkins, Nexus artifact repository).
- Performed dynamic UI design with HTML5, CSS3, Less, Bootstrap, JavaScript, jQuery, JSON, and AJAX.
- Implemented RESTful web services for sending and receiving data between multiple systems.
- Rewrote an existing Python/Flask module to deliver data in a specific format. Loaded, analysed, and extracted data to and from Elasticsearch with Python.
- Created Hive DDL on Parquet and Avro data files residing in both HDFS and S3 buckets. Involved in application development for cloud platforms using technologies such as Java/J2EE, Spring Boot, Spring Cloud, microservices, and REST.
- Designed and developed the application using Agile Methodology and followed TDD and Scrum.
- Successfully migrated the Django database from SQLite to MySQL and then to PostgreSQL, and designed, developed, and deployed CSV parsing using a big data approach on AWS EC2.
- Designed and developed middleware using RESTful web services based on a centralized schema. Wrote Python scripts to parse JSON documents and load the data into the database.
- Worked on AWS Data Pipeline for data extraction, transformation, and loading from homogeneous and heterogeneous data sources.
- Used the pandas API to organize data in time-series and tabular formats for data manipulation and retrieval (see the pandas sketch after this list).
- Developed APIs to integrate with Confidential's EC2 cloud-based architecture in AWS, including creating machine images.
- Designed, implemented, and maintained solutions using Docker, Jenkins, Git, and Puppet for microservices and continuous deployment.
- Knowledge of the Software Development Life Cycle (SDLC) and Agile and Waterfall methodologies. Familiar with networking concepts and devices such as routers, switches, TCP/IP protocols, and the OSI layers.
- Utilized Python libraries such as NumPy and Pandas for processing tabular format data.
- Involved in the development of the applications using Python, HTML5, CSS3, AJAX, JSON and jQuery.
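A minimal sketch of the kind of CSV-to-S3 loading script described in the responsibilities above, assuming a hypothetical bucket name, region, key prefix, and local directory; it is illustrative only, not the actual project script.

```python
# Minimal boto3 sketch: create a bucket and upload local CSV files under a prefix.
# Bucket name, region, prefix, and local directory are hypothetical.
import os
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "example-data-bucket"

# Create the bucket (no extra configuration is needed in us-east-1).
s3.create_bucket(Bucket=bucket)

# Upload every CSV in a local directory under a "raw/csv/" key prefix,
# which acts as folder management within the bucket.
local_dir = "/data/exports"
for name in os.listdir(local_dir):
    if name.endswith(".csv"):
        s3.upload_file(os.path.join(local_dir, name), bucket, f"raw/csv/{name}")
```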
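Similarly, a small pandas sketch of putting tabular data into time-series form, with a hypothetical file name and columns (events.csv, timestamp, value).

```python
# Minimal pandas sketch: load tabular data and reshape it as a time series.
# File name and columns (events.csv, timestamp, value) are hypothetical.
import pandas as pd

# Load tabular data, parsing the timestamp column into datetimes.
df = pd.read_csv("events.csv", parse_dates=["timestamp"])

# Index by timestamp to treat the frame as a time series,
# then resample to daily totals for manipulation and retrieval.
daily = df.set_index("timestamp").resample("D")["value"].sum()

print(daily.head())
```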
Python Programmer
Confidential
Responsibilities:
- Developed an application in a Linux environment and worked extensively with Linux commands.
- Administered continuous integration services (Jenkins, Nexus artifact repository).
- Designed and developed DB2 SQL procedures and UNIX shell scripts for data import/export and conversions.
- Hands-on experience with the installation, configuration, maintenance, monitoring, performance tuning, and troubleshooting of Hadoop clusters in different environments such as development, test, and production clusters.
- Developed Sqoop scripts to migrate data from Oracle to the big data environment.
- Worked extensively with Avro and Parquet files and converted data between the two formats. Parsed semi-structured JSON data and converted it to Parquet using DataFrames in PySpark (see the PySpark sketch after this list).
- Developed a Python script to load CSV files into S3 buckets, created the AWS S3 buckets, performed folder management in each bucket, and managed logs and objects within each bucket.
- Performed dynamic UI design with HTML5, CSS3, Less, Bootstrap, JavaScript, jQuery, JSON, and AJAX.
- Loaded, analysed, and extracted data to and from Elasticsearch with Python.
- Monitored and troubleshot Hadoop jobs using the YARN Resource Manager and EMR job logs using Genie and Kibana.
- Consumed data from Kafka using Apache Spark (see the streaming sketch after this list).
- Worked with Sqoop jobs to import data from RDBMS sources and used various techniques to optimize Hive, Pig, and Sqoop.
- Developed an analytical component using Scala and Kafka.
- Designed Forms, Modules, Views and Templates using Django and Python.
- Involved in application development for cloud platforms using technologies such as Java/J2EE, Spring Boot, Spring Cloud, microservices, and REST.
- Created Hive DDL on Parquet and Avro data files residing in both HDFS and S3 buckets.
- Worked with Amazon Web Services (AWS), using EC2 for hosting and Elastic MapReduce (EMR) for data processing, with S3 as the storage mechanism.
- Worked with various HDFS file formats such as Avro and SequenceFile and compression formats such as Snappy and bzip2.
- Worked extensively with AWS and related components such as Airflow, Elastic MapReduce (EMR), Athena, and Snowflake.
- Implemented RESTful web services for sending and receiving data between multiple systems. Rewrote an existing Python/Flask module to deliver data in a specific format.
- Created data partitions on large data sets in S3 and DDL on the partitioned data. Converted all Hadoop jobs to run in EMR by configuring the cluster according to the data size. Extensively used Stash (Bitbucket) with Git for code control.
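A minimal PySpark sketch of parsing semi-structured JSON and converting it to Parquet, as mentioned in the responsibilities above; the input/output paths and the flattened fields are hypothetical.

```python
# Minimal PySpark sketch: parse semi-structured JSON and convert it to Parquet.
# Input/output paths and the flattened fields are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

# Read newline-delimited JSON; Spark infers a nested schema automatically.
events = spark.read.json("s3://example-bucket/raw/events/")

# Flatten the fields of interest into a tabular DataFrame.
flat = events.select(
    F.col("id"),
    F.col("user.name").alias("user_name"),
    F.to_timestamp("created_at").alias("created_at"),
)

# Write the result as Parquet, partitioned by date for efficient Hive DDL.
flat.withColumn("dt", F.to_date("created_at")) \
    .write.mode("overwrite").partitionBy("dt") \
    .parquet("s3://example-bucket/curated/events/")
```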
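And a minimal Spark Structured Streaming sketch of consuming data from Kafka and landing it in HDFS; the broker address, topic name, and message schema are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath.

```python
# Minimal Spark Structured Streaming sketch: consume JSON messages from Kafka.
# Broker address, topic name, and message schema are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("reading", DoubleType()),
])

# Subscribe to a Kafka topic; each record arrives with a binary value column.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "sensor-readings")
    .load()
)

# Decode the value as JSON and project it into typed columns.
readings = raw.select(
    F.from_json(F.col("value").cast("string"), schema).alias("msg")
).select("msg.*")

# Write the normalized stream to HDFS; a checkpoint directory is required.
query = (
    readings.writeStream.format("parquet")
    .option("path", "hdfs:///datalake/sensor_readings")
    .option("checkpointLocation", "hdfs:///checkpoints/sensor_readings")
    .start()
)
query.awaitTermination()
```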