
AMS Data & Analytics Solutions Resume


Chicago

PROFESSIONAL SUMMARY:

  • 11+ years of experience in Information Technology, spanning development, testing, enhancement, and production support. Assignments have included production support activities, requirement analysis, solution design, application development, application maintenance, quality reviews, testing, data analysis, and impact analysis on Oracle databases (PL/SQL developer) and the big data ecosystem: Hadoop, HDFS, MapReduce concepts, Hive, Sqoop, Kafka, Spark, Scala, etc., with a good understanding of cloud systems such as AWS and the ELK (Elasticsearch, Logstash, Kibana) stack.

TECHNICAL SKILLS:

Technology: Oracle, Hadoop, HDFS, Hive, Sqoop, Kafka, Spark

Languages: Spark, Hive, Kafka, SQL (tuning of queries, advanced functions), PL/SQL (extensively worked on Packages, Procedures, Functions, PL/SQL collections, partitioning, Dynamic SQL), SQL*Loader, HiveQL, Scala

Databases: Oracle 10g, 11g

Scripting Languages: UNIX Shell Scripting, JavaScript, HTML

PROFESSIONAL EXPERIENCE:

Confidential, Chicago

AMS Data & Analytics Solutions

Responsibilities:

  • Capture data requirements and create a comprehensive data architecture that meets business reporting needs and promotes a standards-driven enterprise data strategy.
  • Participate in logical/physical data model discussions with architects and DBAs, and create both logical and physical data models for the application.
  • Load data from various sources into Hadoop production and Hive by designing and developing programs that run on Hadoop clusters using Spark, HQL, Scala, Kafka, and Python to build and improve the business reporting layer in the enterprise data lake (a minimal sketch follows this list).
  • Write SQL and HQL queries and Unix shell scripts to effectively transform the source data to business requirements, and identify potential bottlenecks in them.
  • Develop Hadoop ETL process to schedule the jobs in production using Zena job scheduler.
  • Build large data sets from Teradata and Hadoop to develop and train algorithms for predicting future data characteristics and provide recommendations in real time.
  • Defect and incident analysis: resolve reported issues for all enterprise-wide big data applications within the defined SLA (Service Level Agreement).
  • Design, develop, and/or modify enterprise-wide systems and big data applications, using data analytical models to predict and measure outcomes and consequences of design.
  • Create mock-ups to effectively visualize and demonstrate proposed analytical dashboard solutions in order to gain buy-in on requirements.
  • Tune Hadoop solutions to improve performance and end-user experience.
  • Maintain reports; fix data, visualization, and analytical issues/defects; and add new features.
  • Troubleshoot data lake performance issues using Shell, Pig, Hive, Scala, Python, and MapReduce, and support technical challenges during development, test, and production.
  • Influence decision-making with regard to infrastructure, back-end data solutions, reporting requirements, and design.
  • Work in an Agile/Scrum software development environment; actively participate in all sprint planning meetings, daily standups, and retrospective meetings.
  • Increase organizational and personal capacity by mentoring and enabling team members to complete coding and testing assignments.
  • Evaluate emerging technology trends, competitive products, and business needs to set the direction and roadmap for future application growth or buy-vs-build inflection points.
  • Recommend new technologies based upon business value drivers and return on investment; drive new technologies toward implementation and exploitation.
  • Mentor other team members on ongoing issues, provide technical support, and ensure project and incident deadlines are not impacted.
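
A minimal sketch of the kind of Spark-to-Hive ingestion job described in this list. It assumes a Spark cluster with Hive support enabled; the landing path, database, and table names are hypothetical stand-ins, not taken from this engagement.

    // Spark (Scala) batch ingestion sketch: land a raw extract from HDFS
    // into a pre-created Hive table in the reporting layer.
    import org.apache.spark.sql.{SaveMode, SparkSession}

    object LakeIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("lake-ingest")
          .enableHiveSupport()   // read/write Hive tables from Spark
          .getOrCreate()

        // Read a raw pipe-delimited extract landed on HDFS (hypothetical path).
        val raw = spark.read
          .option("header", "true")
          .option("delimiter", "|")
          .csv("hdfs:///landing/sales/current/")

        // Light cleanup before the data reaches the reporting layer.
        val cleaned = raw
          .filter(raw("order_id").isNotNull)
          .withColumnRenamed("ord_dt", "order_date")

        // Append into the Hive table; format and partitioning come from
        // the existing table definition itself.
        cleaned.write
          .mode(SaveMode.Append)
          .insertInto("reporting.sales_orders")

        spark.stop()
      }
    }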

Environment: FTP, PuTTY, Zena Workstation, Unix RHEL, Spark, Hadoop, Windows 7, GitHub, Jenkins

Confidential

Responsibilities:

  • Design and build data processing pipelines using tools and frameworks in the Hadoop ecosystem
  • Design and build ETL pipelines to automate ingestion of data from different sources.
  • Design and build pipelines to facilitate data analysis using Spark SQL.
  • Apply Hadoop best practices across the stack (YARN, HDFS, MapReduce).
  • Perform complex data analysis in support of ad-hoc and standing customer requests, using analytic functions (RANK, LEAD, LAG, ROW_NUMBER, and more) to address business data needs (see the Spark SQL sketch after this list).
  • Identify the entities and the relationships between them to develop the conceptual model.
  • Responsible for data mapping activities from source systems and for accommodating any necessary changes across the model.
  • Capture data requirements and create a comprehensive data architecture that meets business reporting needs and promotes a standards-driven enterprise data strategy.
  • Involved in requirement analysis, design, development, and production support of multiple data warehouse applications.
  • Use Kafka to build real-time data streams from various exchanges and move the data to HDFS for analytics and report generation using HiveQL (see the streaming sketch after this list).
  • Follow up and coordinate with various firms, affiliates, and stakeholders on the inbound/outbound daily EOD business files and on firm and affiliate transaction and regulatory fees.
  • Monitor the data warehouse batch jobs daily on all the CBOE exchanges, along with the surveillance and regulatory batch streams.
  • Involved in production support activities, fixing bugs in the applications and finding the root cause of bugs in production.
  • Hold a weekly support turnover meeting with the (onsite/offshore) client to avoid abends after a production rollout.
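
A sketch of the analytic-function work mentioned above, run through Spark SQL so it stays in the same Scala toolchain; the exchange_trades table and its columns are hypothetical.

    // Analytic functions (RANK, ROW_NUMBER, LAG, LEAD) over a trades table.
    import org.apache.spark.sql.SparkSession

    object TradeAnalysis {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("trade-analysis")
          .enableHiveSupport()
          .getOrCreate()

        // RANK/ROW_NUMBER order trades within each symbol; LAG/LEAD expose
        // the neighboring prices for gap and movement analysis.
        val ranked = spark.sql("""
          SELECT symbol,
                 trade_ts,
                 price,
                 RANK()       OVER (PARTITION BY symbol ORDER BY price DESC) AS price_rank,
                 ROW_NUMBER() OVER (PARTITION BY symbol ORDER BY trade_ts)   AS trade_seq,
                 LAG(price)   OVER (PARTITION BY symbol ORDER BY trade_ts)   AS prev_price,
                 LEAD(price)  OVER (PARTITION BY symbol ORDER BY trade_ts)   AS next_price
          FROM exchange_trades
        """)

        ranked.show(20, truncate = false)
        spark.stop()
      }
    }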
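
A sketch of the Kafka-to-HDFS stream described above, written with Spark Structured Streaming (one way to do it; the original pipeline may have used plain Kafka consumers). Broker, topic, and path names are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

    // Stream exchange messages from Kafka onto HDFS, where HiveQL reports
    // can read them through an external table over the output directory.
    import org.apache.spark.sql.SparkSession

    object ExchangeFeed {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("exchange-feed")
          .getOrCreate()

        // Subscribe to the (hypothetical) exchange feed topic.
        val feed = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "exchange-trades")
          .load()
          .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

        // Land micro-batches as Parquet files; the checkpoint directory
        // lets the stream restart without reprocessing.
        val query = feed.writeStream
          .format("parquet")
          .option("path", "hdfs:///lake/exchange_trades/")
          .option("checkpointLocation", "hdfs:///checkpoints/exchange_trades/")
          .start()

        query.awaitTermination()
      }
    }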

Environment: Oracle 11g, Toad 11.6, BMC Remedy, PuTTY, Control-M Version 8, SQL*Plus, IBM Rational ClearCase, Unix Sun Solaris, Windows XP/7, ERwin

Confidential

Oracle PL/SQL

Responsibilities:

  • Coordinate the offshore/onsite team as Test Lead, assigning work across the areas of CDB-DWS replication and data warehouse applications: analysis, coding, and testing.
  • Interact with solution design teams and process owners to understand requirements, finalize the analysis and test coverage, and lead the offshore team in automating the creation of test orders for various scenarios.
  • Perform end-to-end testing of ATG environments and performance testing of the data warehouse applications.
  • Set up test environments and perform production release activities, working closely with the client's EDMG data warehouse testing team and external firms to gather requirements and understand impacts on DWS.
  • Analyze requirements, enhance DWH applications, make necessary code changes, and perform unit testing to deliver functional changes in the various DWH applications.
  • Coordinate with dependent (upstream/downstream) systems such as the Options Clearing Corporation (“OCC”), CBOE Trade Match Rewrite (“CTMR”), CBOE Direct, Regulatory, and Accounting for integration testing.
  • Involved in production support activities, fixing bugs in the applications and finding the root cause of bugs in production.

Environment: Oracle 11g, Toad 11.6, Remedy, PuTTY, Control-M Version 8, SQL*Plus, IBM Rational ClearCase, FTP, Unix Sun Solaris, Oracle SQL, PL/SQL, Windows XP/7

Confidential

Oracle PL/SQL Developer

Responsibilities:

  • Incident Analysis and Resolution
  • Root Cause Analysis (RCA) and Batch Failure Analysis
  • Batch Support and PBI/PKE creation for frequent Client requests
  • Error Analysis and Report generation for Job failures
  • Raising trackers to reduce future incidents
  • Performance Tuning to make batch run more efficiently
  • Consistently look for potential Automation areas as part of Summit goals.
  • Offshore Point of Contact for VISTA Commissions Business
  • Documentation and Knowledge Transition to fellow Production Support team

Environment: Oracle SQL, PL/SQL, Oracle 10g, SQL Navigator, Remedy, FTP Pro, AutoSys, SQL*Plus, Windows XP

Confidential

Oracle PL/SQL Developer

Responsibilities:

  • Requirement Analysis and clarifications
  • Solution Design (High Level, Low Level) using the Visio tool
  • Knowledge Transition and coaching of fellow teammates.
  • Coding (PL/SQL following QUEST architecture and Standards) with Peer Review.
  • Creation of control files (SQL*Loader)
  • Test Plan Creation and review
  • Unit Testing, Assembly Testing with Peer Review
  • Performance Tuning of the Code developed
  • System Integration Testing (SIT) offshore Point of Contact
  • Batch Assembly Testing (BAT) offshore Point of Contact
  • Offshore Delegate for Daily Triage Call with client
  • Documentation and Knowledge Transition to Production Support team

Environment: Oracle SQL, PL/SQL, SQL Navigator (QUEST Architecture), TELNET, FTP Pro, HP Quality Center, Serena Version Manager, Microsoft Visio, Oracle 10g, Intel-based PC, Windows XP

Confidential

PL/SQL Developer

Responsibilities:

  • Coding (PL/SQL following QUEST architecture and Standards)
  • Solution Design using the Visio tool
  • Test Plan Creation
  • Unit Testing, Assembly Testing
  • System Integration Testing (SIT)

Environment: Oracle SQL, PL/SQL, SQL Navigator (QUEST Architecture), TELNET, FTP Pro, Oracle 10g, Windows XP

Confidential

PL/SQL Developer

Responsibilities:

  • Coding (QUEST architecture and Standards)
  • Test Plan Creation
  • Unit Testing, Assembly Testing

Environment: Oracle SQL, PL/SQL, SQL Navigator (QUEST Architecture), TELNET, FTP Pro, Oracle 10g, Windows XP
