Software Developer Resume
Littleton, CO
SUMMARY:
- Strong IT professional with 8 years of programming experience, including several years in Big Data and Big Data analytics.
- Experienced in installing and configuring Hadoop clusters on major Hadoop distributions.
- Hands-on experience writing MapReduce jobs in Java (a brief sketch follows this summary).
- Hands-on experience installing, configuring, and using ecosystem components such as Hadoop MapReduce, HDFS, HBase, ZooKeeper, Oozie, Hive, Cassandra, Sqoop, Pig, Flume, and Avro on Hortonworks, along with Talend.
- Developed analytical components using Scala, Spark, Apache Mesos, and Spark Streaming.
- Strong knowledge of Hadoop, Hive, and Hive's analytical functions.
- Proficient in building Hive, Pig, and MapReduce scripts.
- Worked on the Hadoop stack and various big data analytics tools, including migration from databases (SQL Server 2008 R2, Oracle, and MySQL) to Hadoop.
- Successfully loaded files into Hive and HDFS from MySQL.
- Loaded datasets into Hive for ETL operations.
- Good knowledge of Hadoop cluster architecture and cluster monitoring.
- Good understanding of cloud configuration in Amazon Web Services (AWS).
- Experience using ZooKeeper, Hue, and Hortonworks HDP.
- In-depth understanding of data structures and algorithms.
- Experience deploying applications on heterogeneous application servers: Tomcat, WebLogic, IBM WebSphere, and Oracle Application Server.
- Strong written, oral, interpersonal, and presentation communication skills.
- Implemented unit testing with JUnit during projects.
- Able to perform at a high level, meet deadlines, and adapt to ever-changing priorities.
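Illustrative MapReduce sketch (Java). This is a minimal, hypothetical example of the kind of Java MapReduce job referenced above; the class names, record delimiter, and input/output paths are assumptions for illustration only, not taken from any specific project.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical example: count records per key in delimited HDFS input.
    public class RecordCountJob {

        public static class CountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text outKey = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Assume pipe-delimited records; the first field is the grouping key.
                String[] fields = value.toString().split("\\|");
                if (fields.length > 0 && !fields[0].isEmpty()) {
                    outKey.set(fields[0]);
                    context.write(outKey, ONE);
                }
            }
        }

        public static class CountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "record-count");
            job.setJarByClass(RecordCountJob.class);
            job.setMapperClass(CountMapper.class);
            job.setCombinerClass(CountReducer.class);
            job.setReducerClass(CountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input path passed on the command line
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output path passed on the command line
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }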
TECHNICAL SKILLS:
Big Data: Hadoop, MapReduce, HDFS, HBase, ZooKeeper, Hive, Spark, Pig, Sqoop, Kafka, Oozie, Flume.
ETL Tools: Informatica, Talend.
Methodologies: Agile, Waterfall, UML, Design Patterns
Databases: Oracle 10g, 11g, MySQL, SQL Server 2008 R2, MariaDB, NoSQL, HBase, Cassandra
Application Servers: Apache Tomcat 5.x, 6.0.
Web Tools: HTML, XML, DTD, Schemas, JSON, AJAX.
Tools: SQL Developer, Toad, SQL Loader.
Operating Systems: Windows 7, 8, Linux (Ubuntu).
Testing API: JUnit
PROFESSIONAL EXPERIENCE:
Confidential, Littleton, CO
Software Developer
Responsibilities:
- Understand and analyze business requirements of the application and obtain clarification by identifying gaps in the requirements.
- Help resolve queries and provide solutions for defects using Java/J2EE technologies as required.
- Analyze pre- and post-production issues to provide optimal fixes.
- Maintain technical documentation for software and systems.
- Create, validate, and maintain scripts to load data into databases.
- Create, validate, and maintain scripts to load data using Sqoop.
- Work with the Hibernate framework to persist data through MDW to databases.
- Create Oozie workflows and coordinators to automate Sqoop jobs weekly and monthly.
- Develop, validate, and maintain HiveQL queries.
- Use Impala to run legacy-system queries.
- Write MapReduce programs to validate the data.
- Design schemas on HBase and clean data.
- Write Hive queries for analytics on user data.
- Write Spark DataFrame applications to read from HDFS and analyze records (see the sketch after this list).
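Illustrative Spark DataFrame sketch (Java), referenced in the last bullet above. A minimal example of reading delimited records from HDFS and summarizing them; the HDFS path, delimiter, and column names are hypothetical assumptions.

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    // Hypothetical sketch: read delimited records from HDFS and summarize them.
    public class HdfsRecordAnalysis {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("hdfs-record-analysis")
                    .getOrCreate();

            // Path, delimiter, and column names are illustrative assumptions.
            Dataset<Row> records = spark.read()
                    .option("header", "true")
                    .option("delimiter", "|")
                    .csv("hdfs:///data/incoming/records");

            // Example analysis: drop rows without an id and count records per status.
            Dataset<Row> summary = records
                    .filter(col("record_id").isNotNull())
                    .groupBy(col("status"))
                    .count();

            summary.show();
            spark.stop();
        }
    }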
Environment: Cloudera distribution, Hadoop stack (Hive, Pig, HCatalog, Impala, Sqoop, Oozie), Spark, Java JDK 1.5 (legacy system compilation), 1.6, 1.7, 1.8, Hibernate, Spring, EJB, Eclipse Kepler, WebLogic 10.0, SQL Server 2010, Git, Windows 7.
Confidential, Irving, Texas
Hadoop Programmer
Responsibilities:
- Created, validated, and maintained scripts to load data manually using Sqoop.
- Created several dashboards for multiple teams and clusters.
- Created Splunk Search Processing Language (SPL) queries, reports, alerts, and dashboards.
- Created Oozie workflows and coordinators to automate Sqoop jobs weekly and monthly.
- Responsible for the POC (proof of concept) and production clusters.
- Developed Ant build scripts, UNIX shell scripts, and an automated deployment process.
- Performed problem determination using local error logs and by running user and service traces.
- Technical owner for the clusters handling readmissions and claims data.
- Responsible for data profiling and data analysis of the legacy system.
- Integrated Apache Kafka for data ingestion and designed/implemented large-scale pub-sub message queues with Kafka (see the sketch after this list).
- Monitored production errors and performed root cause analysis using Splunk.
- Developed, validated, and maintained HiveQL queries.
- Created, validated, and maintained scripts to load data from and into SQL Server 2012 tables.
- Fetched data to and from HBase using MapReduce jobs.
- Wrote MapReduce jobs.
- Analyzed data and ran reports using Pig and Hive queries.
- Designed Hive tables to load data to and from external tables.
- Wrote DistCp shell scripts to load data across servers.
- Ran executive reports using Hive and QlikView.
- Documented design procedures and test plans.
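Illustrative Kafka producer sketch (Java), referenced in the Kafka bullet above. A minimal example of publishing an ingested record to a pub-sub topic; the broker address, topic name, and record contents are hypothetical assumptions.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    // Hypothetical sketch: publish an ingested record to a Kafka topic.
    public class ClaimsEventPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Broker address and topic name are illustrative assumptions.
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("claims-events", "claim-123", "{\"status\":\"RECEIVED\"}");
                producer.send(record);   // asynchronous send
                producer.flush();        // ensure delivery before the process exits
            }
        }
    }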
Environment: Hortonworks Hadoop, Hadoop stack (Hive, Pig, Sqoop, Oozie, HBase), QlikView, Splunk 6.2, Kibana, RESTful services, Windows 8, SQL Server 2010, Windows Server, WebSphere Application Server, Bitbucket.
Confidential, Marietta, Georgia
Hadoop Developer
Responsibilities:
- Set up Amazon EC2 servers and installed the required database servers and ETL tools.
- Installed Oracle Server 10g and SQL Server 2008 R2.
- Imported and exported database dumps in Oracle Server 10g and 11g (using IMPDP and EXPDP) and SQL Server 2008.
- Gathered and reviewed customer requirements and onboarded new data sources into Splunk.
- Set up light, universal, and heavy forwarders across different platforms.
- Set up alerts and monitoring on machine-generated live data.
- Created, validated, and maintained scripts to load data from and into tables in Oracle PL/SQL and SQL Server 2008 R2.
- Created, validated, and maintained Oracle scripts, stored procedures, and triggers for monitoring DML operations.
- Installed and deployed IBM WebSphere.
- Converted, tested, and validated Oracle scripts for SQL Server.
- Imported data from MySQL databases into Hive using Sqoop.
- Developed, validated, and maintained HiveQL queries (see the sketch after this list).
- Designed complex Hive queries to load data.
- Developed Apache Pig scripts to load data from and store data into Hive.
- Wrote stored procedures and triggers.
- Used AWS services such as S3 and EMR.
- Migrated Oracle Server from 10g to 11g.
- Upgraded the IBM Maximo database from 5.2 to 7.5.
- Thorough understanding of the MIF (Maximo Integration Framework) configuration.
- Analyzed, validated, and documented changed records for the IBM Maximo web application.
- Designed business models using SpagoBI, an analytics platform.
- Created QBE (Query by Example) models and calculated KPIs (Key Performance Indicators) for business purposes on the SpagoBI server.
- Created Splunk Search Processing Language (SPL) queries, reports, alerts, and dashboards.
- Created reports, pivots, alerts, advanced Splunk searches, and visualizations in Splunk Enterprise.
- Developed custom app configurations (deployment apps) within Splunk to parse and index multiple log formats across all application environments.
- Integrated Apache Kafka for data ingestion and designed/implemented large-scale pub-sub message queues with Kafka.
- Created backend tables for OASIS-BI to capture and store change records.
- Encrypted sensitive data in MySQL tables using hashing and salting methods.
- Researched and troubleshot emerging application issues, from WebLogic configuration to code issues.
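Illustrative HiveQL-over-JDBC sketch (Java), referenced in the HiveQL bullet above. A minimal example of running a HiveQL aggregate through HiveServer2; the connection URL, table name, and column names are hypothetical assumptions.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical sketch: run a HiveQL aggregate over a Hive table via HiveServer2 JDBC.
    public class HiveQueryRunner {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            // Connection URL, table, and column names are illustrative assumptions.
            try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT event_date, COUNT(*) AS events "
                       + "FROM machine_events GROUP BY event_date")) {
                while (rs.next()) {
                    System.out.println(rs.getString("event_date") + "\t" + rs.getLong("events"));
                }
            }
        }
    }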
Environment: Amazon EC2, Oracle Server 10g, 11g, SQL Server 2008 R2, MySQL, Apache Pig, Hive 2.0, Sqoop, SpagoBI 4.0, 4.1, OASIS-BI, Kibana, Splunk, WebSphere Application Server, WebLogic Application Server, IBM DataPower, IBM WebSphere, IBM Maximo, Talend, Informatica MDM.
Confidential, Irving, Texas
Oracle Developer
Responsibilities:
- Analyzed business needs and translated them into reports.
- Provided the business with thorough analysis of systems and risks.
- Designed, built, and validated reports, test process matrices, and macros.
- Performed SQL code debugging.
- Automated reports through batch scripting.
- Designed, created, and maintained VBA and VBS scripts to automate reports.
- Wrote stored procedures, packages, and triggers, and designed and built DSS components using SQL scripts (see the sketch after this list).
- Thorough understanding of the mortgage banking industry with emphasis on Default MIS (Management Information System).
- Performed data analysis to identify patterns and trends.
- Loaded data into tables using SQL Loader.
- Performed UAT (user acceptance testing).
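Illustrative stored-procedure call sketch (Java/JDBC), referenced in the stored-procedures bullet above. The procedures themselves were written in PL/SQL; this only sketches how such a procedure might be invoked from Java, and the connection URL, credentials, and procedure/parameter names are hypothetical assumptions.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    // Hypothetical sketch: invoke an Oracle stored procedure from Java via JDBC.
    public class ReportProcedureCaller {
        public static void main(String[] args) throws Exception {
            // URL, credentials, and procedure/parameter names are illustrative assumptions.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "report_user", "secret");
                 CallableStatement call = conn.prepareCall("{call default_mis.refresh_report(?, ?)}")) {
                call.setString(1, "2014-06");                 // reporting period (IN parameter)
                call.registerOutParameter(2, Types.INTEGER);  // rows processed (OUT parameter)
                call.execute();
                System.out.println("Rows processed: " + call.getInt(2));
            }
        }
    }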
Environment: Oracle 10g, 11g, TOAD 10.4.1.8, SQL Loader, Windows 7 i-Space Generation X VDI on VMware, Oracle Applications R11.5.10, JIRA.
Confidential, Irving, Texas
ETL Developer
Responsibilities:
- Worked with business analysts to identify appropriate sources for the data warehouse and to document business needs for decision-support data.
- Implemented ETL processes using Informatica to load data from flat files into the target Oracle data warehouse (see the sketch after this list).
- Wrote SQL overrides in Source Qualifier transformations according to business requirements.
- Wrote pre-session and post-session scripts in mappings.
- Created sessions and workflows for the designed mappings.
- Redesigned some of the existing mappings in the system to support new functionality.
- Used Workflow Manager to create, validate, test, run, and schedule sequential and concurrent batches and sessions.
- Worked extensively on performance tuning of programs, ETL procedures, and processes.
- Developed PL/SQL procedures for processing business logic in the database.
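Illustrative flat-file load sketch (Java/JDBC), referenced in the ETL bullet above. The actual loads were built as Informatica mappings, which are tool artifacts rather than code; this is only a rough plain-JDBC equivalent of a flat-file-to-Oracle staging load, and the file path, connection details, and table/column names are hypothetical assumptions.

    import java.io.BufferedReader;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Hypothetical sketch: batch-insert pipe-delimited flat-file rows into an Oracle staging table.
    public class FlatFileStagingLoad {
        public static void main(String[] args) throws Exception {
            // File path, URL, credentials, and table/column names are illustrative assumptions.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "etl_user", "secret");
                 PreparedStatement insert = conn.prepareStatement(
                         "INSERT INTO stg_orders (order_id, customer_id, amount) VALUES (?, ?, ?)");
                 BufferedReader reader = Files.newBufferedReader(Paths.get("/data/inbound/orders.dat"))) {
                conn.setAutoCommit(false);
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] f = line.split("\\|");
                    insert.setString(1, f[0]);
                    insert.setString(2, f[1]);
                    insert.setBigDecimal(3, new java.math.BigDecimal(f[2]));
                    insert.addBatch();
                }
                insert.executeBatch();
                conn.commit();
            }
        }
    }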
Environment: Informatica 9.x, Teradata, Oracle 10g, Windows 7, UNIX shell programming.
Confidential
ETL Developer
Responsibilities:
- Analyzed source data coming from Oracle and flat files.
- Used Informatica Designer to create mappings with various transformations to move data into a data warehouse, and developed complex mappings to load data from multiple sources into the data warehouse.
- Identified bottlenecks in sources, targets, and mappings and optimized them accordingly.
- Worked with NZLoad to load flat-file data into Netezza tables.
- Good understanding of Netezza architecture.
- Assisted the DBA in identifying proper distribution keys for Netezza tables.
- Created mappings using pushdown optimization to achieve good performance in loading data into Netezza.
- Created and configured workflows, worklets, and sessions to move data into the target Netezza warehouse tables using Informatica Workflow Manager.
Environment: Informatica PowerCenter 8.x, flat files, Netezza 4.x, Oracle, UNIX, WinSQL, shell scripting.