Sr Software Developer Resume
Roanoke, VA
SUMMARY:
- Around 14 years of IT experience in the development, analysis, and design of ETL solutions across all stages of the data warehousing life cycle, with knowledge of DataStage and Hadoop technologies.
- Extensive Information Technology (IT) experience in the Banking, Finance, Media & Entertainment, and Telecom domains.
- Experience with DataStage 11.5 (ETL), UNIX scripting, Control-M, SSMS, and SQL Developer.
- Experience in a Business IT Analyst role, including data modeling and technical specification document preparation; provided design support for the development of business intelligence solutions and worked on medium-to-complex, cross-functional IT and business intelligence solutions.
- Experience with mainframe tools such as COBOL, JCL, VSAM, and DB2.
- Extensively worked on several ETL assignments to extract, transform and load data into tables as part of Data Warehouse development.
- Extensive experience writing generic UNIX shell scripts for from-scratch development tasks and for other performance tuning activities.
- Built simple and complex code in SQL stored procedures, views, and triggers.
- Developed DataStage production job scheduling using Control-M, setting up predecessor and successor jobs based on requirements, as well as cyclic jobs.
- Experience with transform components such as Aggregate, Router, Sort, Filter by Expression, Join, and Normalize, and created appropriate DMLs.
- Responsible for preparing design documents, test case specifications, performance reviews, and code, and obtaining client sign-off.
- Holding internal and external certifications on many technologies.
- Experience with the traditional Waterfall model and Agile methodology, using DevOps tools like Jira and Confluence.
- Experience with the Amazon S3 Connector to load and retrieve data from Amazon Web Services (AWS); knowledge of EMR, S3 buckets, and DynamoDB.
- Onsite-offshore communication with clients and customers at regular intervals to complete work within stipulated time frames.
- Prepared technical documents such as functional and design documents, and conducted code walkthroughs with the team. Very good understanding of Hadoop architecture and its daemons: NameNode, DataNode, ResourceManager, NodeManager, TaskTracker, and JobTracker.
- Worked with Hive/HQL to query data from Hive tables in HDFS.
- Experience with Hadoop ecosystem components such as YARN, Spark, Hive, and Sqoop.
- Experience using Sqoop to import/export data between RDBMSs and HDFS, and used Flume to collect data and populate it into Hadoop.
- Hands-on experience with the Cloudera Hadoop environment and Spark for data transformation on larger data sets.
- Installation, configuration, and administration of Hadoop clusters. Extensive experience working with IDE tools like IntelliJ IDEA and PyCharm.
- Effectively led teams with all stakeholders, including offshore teams and clients.
- Skilled in writing technical specification documents, translating user requirements to technical specifications.
- Excellent analytical, interpersonal and communication skills with aptitude to assimilate new technologies.
TECHNICAL SKILLS:
Technologies: DataStage 11.3 (ETL), Linux scripting, PL/SQL, Hadoop (Spark, Hive, Sqoop, Flume, and Scala/Python), AWS, Mainframes and HDFS
Database: DB2, SSMS, SQL Developer 2008 and Teradata
Tools: QlikView, Quality Center, GSD, DevOps tools (Jira, Kanban, Confluence)
Scheduling tools: Control-M on DataStage, Zeke on Mainframes
PROFESSIONAL EXPERIENCE:
Confidential, Roanoke, VA
Sr Software Developer
Responsibilities:
- Gathered technical requirements and performed technical analysis for each job data stream, including identifying source systems, their connectivity, and the related tables and fields, and ensuring the data was appropriate for mapping.
- Monitored and tuned ETL performance, identifying bottlenecks and applying best-practice ETL standards in the DataStage job design to enhance job workflow performance.
- Analyzed and enhanced the performance of the jobs and project using standard techniques.
- Engaged effectively with project stakeholders and participated in weekly meetings with the project team, peers, and less experienced staff regarding progress on activities.
- Made the necessary enhancements to fix project bugs based on business priority.
- Improved job performance by gathering performance statistics of the jobs on the production server using DataStage Director.
- Simplified the existing transformer logic in the jobs to make them perform better.
- Performed performance tuning of SQL queries by applying sound SQL design standards.
- Coordinated monthly releases into production.
- Documented the changes made to the jobs, the comparative performance of the old and new jobs, and the test plan used to test them.
- Imported and exported repositories across different servers using DataStage.
- Performed unit and integration testing and validated test cases by comparing actual results with expected results.
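For illustration, the actual-versus-expected comparison in the last bullet can be sketched in plain Python; the row layout and values here are hypothetical, not taken from the actual project:

```python
from collections import Counter

def compare_results(actual_rows, expected_rows):
    """Order-insensitive comparison of ETL output rows against expected
    rows, counting duplicates; returns any discrepancies found."""
    actual, expected = Counter(actual_rows), Counter(expected_rows)
    missing = expected - actual   # expected rows the job did not produce
    extra = actual - expected     # rows the job produced unexpectedly
    return {
        "passed": not missing and not extra,
        "missing": sorted(missing.elements()),
        "extra": sorted(extra.elements()),
    }

# Hypothetical test case: same rows in a different order should pass
expected = [("1001", "ACTIVE"), ("1002", "CLOSED")]
actual = [("1002", "CLOSED"), ("1001", "ACTIVE")]
print(compare_results(actual, expected)["passed"])  # True
```

Using multisets (Counter) rather than sets keeps duplicate rows visible, which matters when validating loads that should not silently drop or double-insert records.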
Environment: IBM DataStage 11.5, Oracle SQL Developer, WinSCP, PuTTY, MS Office 10, Oracle 11g, UNIX scripting, AWS S3, Jira and Kanban
Confidential, Arlington Heights, IL
Technical Development Lead
Responsibilities:
- Ingested data from multiple source tables by running CDC subscriptions through the management console.
- Performed variance calculations on files between the previous day and the current day.
- Performed file validation to verify the accuracy of file transformations.
- Processed file-based feeds such as CSV and tab-delimited files and loaded them into landing-pad source tables.
- Processed full-refresh files in full at specified intervals (daily/monthly/yearly); delta files were processed similarly, but with a much smaller volume of data than full files.
- Implemented SQL views and triggers on landing source tables.
- Designed, developed, and tested DataStage jobs along with sequencers using Agile methodology.
- Experience in setting up audit routines, config files, and audit table setups.
- Implemented new UNIX scripts when requirements changed; otherwise reused existing scripts, amending them for new enhancements.
- Implemented SQL stored procedures when requirements changed; otherwise reused existing procedures, amending them for new enhancements.
- Performed unit testing in DEV, system testing in ST, and system integration testing in the SIT region, along with deployments.
- Worked on defect fixes using Jira or QC.
- In the SIT region, integrated all UNIX scripts using the Control-M scheduling tool.
- Deployed code into preproduction and production regions after obtaining full sign-off from the testing team.
- Experience in creating change requests and incident tickets.
- Experience in creating GSRs for the DB and DataStage admins when environmental issues occurred.
- Experience in writing generic scripts used by the entire team.
- Experience in implementing productivity initiatives on existing code.
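The day-over-day variance check and delimited-file validation described above might look like the following minimal sketch; the delimiter, expected column count, and sample data are assumptions for illustration only:

```python
import csv
from io import StringIO

def variance_pct(prev_count, curr_count):
    """Day-over-day variance of record counts, as a percentage of the
    previous day's file."""
    if prev_count == 0:
        return None  # variance is undefined when the previous file was empty
    return round((curr_count - prev_count) / prev_count * 100, 2)

def bad_rows(text, delimiter, expected_cols):
    """Validate a delimited file: return line numbers whose column count
    differs from the expected layout."""
    reader = csv.reader(StringIO(text), delimiter=delimiter)
    return [i for i, row in enumerate(reader, 1) if len(row) != expected_cols]

# Hypothetical counts: yesterday's full file had 1000 records, today's has 1050
print(variance_pct(1000, 1050))  # 5.0

# Hypothetical tab-delimited landing file with one short record on line 3
sample = "id\tname\n1\talice\n2\n"
print(bad_rows(sample, "\t", 2))  # [3]
```

In practice a job like this would flag the file for manual review whenever the variance exceeds an agreed threshold or any malformed rows are found, before the load into the landing-pad tables proceeds.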
Environment: IBM DataStage 8.1, SQL Server 2008, Oracle 11g, PL/SQL, Linux, AWS, AWS S3, Jira, Kanban, Confluence and GSD
Confidential, Arlington Heights, IL
Sr ETL Developer
Responsibilities:
- The code was initially in DataStage ETL; later, a few modules were moved to Hadoop ETL.
- Prepared technical design documents with the help of functional specification documents.
- Experience writing complex Linux scripts, such as reconciliation and file validation scripts.
- Experience writing stored procedures, triggers, and views on SQL Server.
- Performed job scheduling on Control-M.
- Imported and exported data to and from AWS S3 paths using Flume and Sqoop.
- Moved S3 files to HDFS and loaded the same data into Hive tables.
- Experience working with different file formats such as JSON, text files, sequence files, and Avro.
- Experience writing transformations in PyCharm using RDDs and DataFrames.
- Loaded data into Hive tables after transformations.
- Performed performance tuning using spark-submit.
- Created Hive scripts to load the historical data and partition it.
- Experienced in managing and reviewing Hadoop log files.
- Worked with the infrastructure team on cluster installation, commissioning and decommissioning of data nodes, NameNode recovery, and capacity planning.
- Experience with Amazon EMR, S3 buckets, and DynamoDB.
- Assisted in creating and maintaining technical documentation for launching Hadoop clusters and for executing Hive queries and Python scripts.
- Documented the entire process.
- Loaded and transformed large sets of structured, semi-structured, and unstructured data.
- Exported the analyzed data to relational databases using Sqoop for visualization and to generate reports for the BI team.
- Worked with different teams on ETL, data integration, and migration to Hadoop.
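The partitioned historical load mentioned above can be sketched in plain Python; this mirrors what a Hive dynamic-partition insert (a table `PARTITIONED BY (load_date)`) does, with illustrative field names rather than the actual project schema:

```python
from collections import defaultdict

def partition_by_load_date(records):
    """Group rows by their load_date, mimicking Hive dynamic partitioning
    during a historical load. Field names are illustrative assumptions."""
    partitions = defaultdict(list)
    for row in records:
        partitions[row["load_date"]].append(row)
    return dict(partitions)

# Hypothetical historical records spanning two load dates
history = [
    {"id": 1, "amount": 10.0, "load_date": "2016-01-01"},
    {"id": 2, "amount": 20.0, "load_date": "2016-01-01"},
    {"id": 3, "amount": 30.0, "load_date": "2016-01-02"},
]
parts = partition_by_load_date(history)
print(sorted(parts))             # ['2016-01-01', '2016-01-02']
print(len(parts["2016-01-01"]))  # 2
```

Partitioning by load date lets Hive prune whole directories at query time, so a query filtered to one day never scans the rest of the history.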
Environment: DATASTAGE 8.1, UNIX, CONTROL-M, QlikView, AWS, HIVE, HDFS and TERADATA
Confidential, Arlington Heights, IL
Sr DataStage Developer
Responsibilities:
- Understood technical specification documents and addressed queries to the IT analysis team.
- Built DataStage jobs precisely based on technical requirements.
- Performed performance tuning on DataStage jobs using appropriate partitioning techniques and by rewriting existing code.
- Implemented code in regions such as Development, System Testing, and System Integration Testing.
- Implemented UNIX code for new config file setups and UNIX scripts, or updated existing generic scripts based on requirements.
- Worked on scheduling jobs in Control-M, performing calendar setups, prerequisite IN/OUT condition setups, and cyclic job setups.
- Deployed code to production by creating GSD change requests, and raised incident tickets for the resolution of environmental issues.
- Made generic code changes required during architecture amendments.
- Worked with DevOps tools such as Kanban, Jira, and Confluence.
- Involved in the development of Common Error Management to capture and handle error records.
- Prepared documentation for unit testing.
- Optimized/tuned DataStage jobs for better performance and efficiency.
- Worked on various QC/Jira defects raised by the concerned business teams from various entities.
- Participated in the requirement specification process for new software functionality to ensure that definitions of new functionality were clearly defined and understood.
- Worked with business analysts and team members on software capabilities, functionality, and design options.
Environment: UNIX, DataStage 8.1, SQL Server Management Studio, Jira and Control-M
Confidential, Arlington Heights, IL
BI Analyst
Responsibilities:
- Performed analysis for a wide range of requests using data in different formats and from various platforms; hosted business meetings with all respective stakeholders to gather accurate information.
- Performed logical and physical data modeling based on the services identified.
- Prepared solution proposals to steer projects onto the right path and supported the build team during code implementation.
- Implemented technical specification documents based on the solution proposals and data modeling designs.
- Provided design support for the development of business intelligence solutions; worked on medium-to-complex, cross-functional IT and business intelligence solutions.
- Worked on multiple tasks/projects as a team member, occasionally serving as a functional lead.
- Participated in the workstream planning process, including inception, technical design, development, testing, and delivery of BI solutions.
- Developed work plans, reviewed other work plan timelines, and managed workflows to meet timeframes; participated in the project management estimation process.
- Identified and provided input on new technology opportunities impacting enterprise-wide BI systems.
- Provided direction for the planning, design, and execution of user efforts; reviewed test plans and monitored the testing process to ensure that business results were tested.
- Analyzed testing results to ensure the solution met the needs of the business; provided support to test teams.
Environment: Solution Proposal, Technical Specification, Physical and Logical Data Modeling, DataStage 7.5, QC and SSMS
Confidential
Application Developer
Responsibilities:
- Participated in architecture and design reviews and provided leadership in the areas of lifecycle, supportability, monitoring, care & feeding, and service restoration.
- Actively provided inputs on designing strategies and architecture for building a data warehouse.
- Provided recommendations regarding enhancements and/or improvements to the enterprise master data, operational data stores, data warehouse, and data marts.
- Actively participated in project coordination with business groups to define requirements and perform data analysis, data validation, data profiling, data discovery, and system design.
- Extensively used sequential files, complex flat files, Sort, Funnel, Remove Duplicates, Merge, and Aggregator stages, and created datasets.
- Extensively used the Change Data Capture (CDC) stage to implement slowly changing dimension and fact tables. Extensively used before/after subroutines in jobs.
- Experience in SQL scripting and PL/SQL scripting.
- Experience writing UNIX shell scripts for scheduling DataStage jobs and handling error and audit management; also developed scripts for cleansing, moving, and archiving files.
- Used DataStage sequencer jobs extensively to handle interdependencies and to run DataStage parallel jobs in sequence.
- Responsible for supporting and managing DataStage projects, monitoring daily runs in the production environment, and troubleshooting production issues.
- Monitored and tuned ETL performance, identifying bottlenecks and applying best-practice ETL standards in the DataStage job design to enhance job workflow performance.
- Performed performance tuning of SQL queries by applying sound SQL design standards.
- Made the necessary enhancements to fix project bugs based on business priority.
- Coordinated monthly releases into production.
- Engaged effectively with project stakeholders and participated in weekly meetings with the project team, peers, and less experienced staff regarding progress on activities.
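The Type-2 slowly changing dimension logic that the CDC stage feeds can be sketched as follows; the field names (key, attr, start_date, end_date) and dates are illustrative assumptions, not the actual warehouse schema:

```python
from datetime import date

def apply_scd2(dimension, changes, effective=date(2016, 1, 15)):
    """Type-2 SCD merge: expire the current version of a changed row and
    insert a new current version; unseen keys become plain inserts.
    Mutates and returns the row list. Field names are illustrative."""
    rows = list(dimension)
    for change in changes:
        for row in rows:
            if row["key"] == change["key"] and row["end_date"] is None:
                if row["attr"] != change["attr"]:
                    row["end_date"] = effective  # expire the old version
                    rows.append({"key": change["key"], "attr": change["attr"],
                                 "start_date": effective, "end_date": None})
                break
        else:  # key not present yet: insert a new current row
            rows.append({"key": change["key"], "attr": change["attr"],
                         "start_date": effective, "end_date": None})
    return rows

# Hypothetical dimension row whose attribute changes in the CDC delta
dim = [{"key": 1, "attr": "bronze",
        "start_date": date(2015, 1, 1), "end_date": None}]
out = apply_scd2(dim, [{"key": 1, "attr": "gold"}])
print(len(out))            # 2: the expired row plus the new current row
print(out[0]["end_date"])  # 2016-01-15
```

Keeping both versions, with start and end dates, is what lets fact tables join to the dimension as it looked on any given day.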
Environment: IBM DataStage 7.5, SQL Server 2005, Oracle 11g, PL/SQL, Red Hat Linux 4.2
Confidential, Abilene, KS
Application Programmer
Responsibilities:
- Consults with business and users to identify needs/problems; determines functional requirements; prepares definition of problems/solutions
- Translates functional requirements into systems requirements to ensure business needs/problems are addressed. Recommends, where appropriate, end-user changes or additions to ensure an effective systems application
- Outlines and recommends modifications necessary to processing activities, and interacts with programming staff throughout the project to ensure it meets user requirements
- Prepares system test plans along with involvement in user acceptance testing and prepares implementation plan. Works closely with Quality Assurance staff to support QA test phase for UAT signoff. Also, develops internal systems documentation and end-user documentation as needed
- Provides production support for all activities associated with complex and time critical JPMorgan Chase Treasury Solutions applications. Nighttime and weekend support periodically required
- Provides leadership, sets goals, assigns, and directs team activities as needed; provides guidance and training; reviews and evaluates work of staff and prepares periodic performance appraisals
- Performs performance tuning to improve performance across multiple systems. Manages all aspects of joint development and assists with vendor negotiations
- Develops feasibility studies and proposals for senior management and executive decisions on large complex projects. Manages all aspects of testing and verification ensuring all tasks are performed for all interfaces
- First-shift position working with a team of co-developers; creates technical design documents, for self or others, from functional design documents.
- Strong program/job development and unit testing; estimates tasks for project planning; performs quality coding per IBM technology standards.
- Writes and executes effective test plans to ensure quality and stability; responsible for completing deliverables according to business needs and project plans.
Environment: JCL, COBOL, DB2 and PEGASYS
Confidential, Millburn, NJ
System Engineer
Responsibilities:
- Analyzed the specifications in the PRD/DOE and addressed queries to the analysis team, resolving unclear points through the QC defect tracker.
- Developed code according to the technical specifications and performed performance tuning.
- Performed unit testing in the development environment, then deployed the code to the system testing environment to test it in integration with all other modules.
- Performed system integration testing with the scheduling tool; determined job flow to set up predecessor and successor jobs.
- Audited output files against the specifications and reworked the code based on the outcome of the audit.
- Sent QA material to the onsite coordinator at the client site for review; after review comments were addressed, the code was promoted to other regions.
- FTPed output files to the customer or released the JES to BH environments, all reviewed by the customer and sent for further analysis.
- Closed all DOE orders in the ACDB5 region at specified intervals and timelines, along with all pending QC defects.
- Closed PRD/DOE items in Timekeeper and sent daily/weekly status reports to the team lead for review.
- Incorporated team-building activities to achieve strong results in the workplace; regularly shared work experience with other team members to reduce future issues.
- Implemented performance tuning on existing code such as JCL, COBOL, and VSAM as part of the evergreen process.
- Performed code deployment activities from one region to another based on requirements.
- Shared productivity-initiative tasks with the team by writing efficient, reusable code.
Environment: JCL, SAS, Easytrieve Plus, VSAM, File-AID, UTIL50, ROSCOE, TSO, ACDB5, AROSCOE4