Performance Test Lead Resume

PROFILE SUMMARY:

  • Expert in test automation and performance testing of multi-tiered enterprise SOA/BPMS applications, mobile applications, and big data distributed cloud applications.
  • Expert in performance and scalability testing, including load, stress, scalability, capacity, availability and reliability testing. Excellent analytical and troubleshooting skills for:
  • Identifying and eliminating performance bottlenecks in the system,
  • Tuning optimal parameter configurations for the application server, database server, web server, and OS/JVM,
  • Profiling and fixing applications for optimal performance.
  • Excellent test automation and programming skills for writing functional, performance, integration, regression and unit test scripts, covering all areas of software applications and infrastructure:
  • Front-end Web, GUI and mobile interfaces,
  • Backend APIs, J2EE application servers, BPMS, web services, databases, CRMs, business intelligence systems,
  • Distributed big data applications in the cloud (Hadoop HDFS and HBase).
  • Expert in all phases of test management, such as: (1) test planning, (2) test strategy and design, (3) test case and test script development, (4) test execution and analysis, (5) defect reporting, (6) Agile SDLC and SCRUM practices.
  • Expert in data analysis:
  • Machine learning and data mining based predictive analytics,
  • Business intelligence solutions such as dimensional modeling, ETL, OLAP cubes, reports and dashboards.

Test Tools:

  • Experienced in performance testing tools: JMeter, VisualVM, HP LoadRunner / Performance Center, HP SiteScope, HP Diagnostics, IBM Rational Performance Tester (RPT), DynaTrace APM, Hyperic HQ, Perfmon, Unix Tools
  • Experienced in functional testing tools: Selenium, SOAPUI, HP QTP / UFT, IBM Rational Functional Tester (RFT)
  • Experienced in test management tools: HP ALM / QC, IBM Rational ClearQuest Test Manager, RequisitePro
  • Experienced in software development and configuration management tools: Git, SVN, MS Visual Studio, Eclipse, Rational ClearCase

Programming Skills:

  • Java, J2EE
  • Python
  • .NET: C#, LINQ, C++
  • T-SQL, PL/SQL
  • XML, JSON
  • Unix/Linux/Solaris Shell Scripts
  • Pig, Hive, Mahout, Cosmos Scope
  • R, Matlab, Weka

Middleware Platforms Skills:

  • SOA frameworks: ESB, SOAP and JSON web services, CORBA; XML standards: WSDL, UDDI, XSD/XSLT
  • Application servers: JBoss Application Server, Oracle WebLogic Server
  • CRM platforms: SAP CRM (Telecom SAP Care), Microsoft Dynamics
  • Web servers: IIS Web Server, Apache Web Server
  • Business intelligence: SQL Server BI (SSIS, SSAS, SSRS), Excel PowerPivot
  • CMS: SharePoint Server, WordPress, Moodle

Databases:

  • Oracle
  • MS SQL Server
  • Sybase
  • MySQL

OS Platforms Skills:

  • OS: Windows, Linux, Solaris
  • Mobile: Android
  • Big data platforms: Hadoop HDFS, HBase, Microsoft Dryad / Bing Cosmos, Azure

PROJECTS:

Appian: Performance Test Lead

Responsibilities:
  • Providing test automation and performance testing of Appian’s cloud-deployed, multi-tiered J2EE and SOA based enterprise solution for the FTA National Transit Database (NTD). The NTD system is used by transit agencies across the USA to report transit-related information on a recurring basis. The application stack consists of the JBoss application server, a BPMS rule engine, the MySQL database server, and the Apache web server, integrated and orchestrated through automated business rules, workflows and JSON web services.
  • Test tools and programming languages currently used on the project: JMeter, Hyperic HQ monitoring, VisualVM, and Java (a sketch of driving a non-GUI JMeter run follows below).
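
A minimal sketch of launching such a non-GUI JMeter run from Python is shown below. The test plan file name and the users/rampup properties are hypothetical; the -n, -t, -l and -J flags are standard JMeter command-line options.

```python
# Minimal sketch: launch a non-GUI JMeter load test from Python.
# "ntd_load_test.jmx" and the users/rampup properties are hypothetical.
import subprocess

subprocess.run(
    [
        "jmeter",
        "-n",                       # non-GUI mode
        "-t", "ntd_load_test.jmx",  # test plan (hypothetical name)
        "-l", "results.jtl",        # results log for later analysis
        "-Jusers=100",              # property the plan can read via ${__P(users)}
        "-Jrampup=300",
    ],
    check=True,                     # raise if JMeter exits non-zero
)
```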

Ericsson (9 years): Performance Test Lead / Test Automation Lead / Sr. Systems Integrator

  • Provided systems integration, test automation and performance testing solutions for:
  • Ericsson-T-Mobile’s multi-tiered J2EE SOA based enterprise telecom billing and charging solution. The solution manages customer experience, billing and provisioning for 60 million T-Mobile Wireless subscribers and B2B customers across the USA. It includes Ericsson and T-Mobile in-house products such as BSCS, CS, CBiO-3, Rating, EMA, Activation & Provisioning, and Resource Management, as well as 3rd-party products such as SAP CRM, catalog-driven Order Management, Supply Chain & Inventory Management, Sales & Marketing, Campaign Management, Finance & Payment Processing, and BI Reporting. The BSS application stack consists of the WebLogic application server, a BPMS rule engine, the Oracle database server, the Apache web server, and Linux and Solaris boxes, integrated and orchestrated through automated business rules, workflows and SOA communication protocols such as SOAP web services, CORBA, and SFTP.
  • Ericsson’s multi-tiered CORBA based enterprise telecom management / operation support system (OSS) platform. The OSS platform is used by telco operators around the world for configuration, performance and fault management of 4G/3G wireless networks.
  • Ericsson-TEMS’ wireless network planning and performance optimization products.

Alcatel-Lucent: SME LTE OAM / Test Lead

Responsibilities:
  • Provided consulting services for integration and testing of Alcatel-Lucent’s multi-tiered enterprise telecom operation and maintenance (OAM) solution for 4G/3G wireless networks. Led the test team for extensive lab testing, audit and certification of Alcatel-Lucent equipment prior to launch on the Sprint LTE network.

Microsoft: Senior Software Development Engineer

Responsibilities:
  • Provided performance engineering and predictive analytics solutions for big data applications based on Microsoft’s Bing Internet search engine. Conducted performance engineering studies and analysis to discover congestion and outliers in the Cosmos MapReduce implementation. Created benchmark performance tests to measure performance improvements or regressions in the cluster. Designed and developed predictive models and programs for capacity, utilization, and scalability planning. Developed models for live-site job execution flows in the production cluster and created congestion-control algorithms to address job-failure scenarios. Performed code reviews, query optimization and technical discussions for data architecture.

Unisys: Test Manager

Responsibilities:
  • Provided consulting services for functional, performance, integration and user acceptance testing (UAT) of Unisys’ multi-tiered J2EE SOA based enterprise BPMS contract management solution for GSA. Performed requirements analysis and created feature packets for functional and performance requirements and system use cases. Worked with leads and engineers to ensure that the product met its functional, quality, performance, scalability, privacy, security and usability goals. Developed effective UAT methodologies, strategies, test cases, scripts, a test automation framework, KPI metrics, and QA metrics to ensure that the product met the specified requirements to a high standard of quality.

Work Experience Details:

Confidential

Responsibilities:
  • Providing consulting services for performance testing and tuning of Appian’s multi-tiered J2EE SOA based enterprise BPMS National Transit Database (NTD) solution for the FTA. The solution consists of a multi-tier, cloud-deployed J2EE SOA enterprise application stack (JBoss application server, BPMS rule engine, MySQL database server, Apache web server, Linux) integrated and orchestrated through automated rules-based workflows using JSON web services.
  • Created a test plan for benchmarking and performance tuning of the NTD.
  • Gathered performance requirements and acceptance criteria, such as current and desired load, response time, throughput, resource utilization, error rate, configuration parameters, and performance-intensive business use cases.
  • Identified performance-intensive business scenarios and usage patterns, including the typical workload for each scenario.
  • Identified the most time-consuming database queries/stored procedures and resource-intensive batch jobs for the NTD application.
  • Identified the physical topology and differences between test and production systems.
  • Identified performance measurement metrics for each tier of the application, such as response time; throughput; utilization (CPU, memory, disk I/O, network I/O); the JVM’s heap size, threads and GC interval; the application server’s thread pool size, JDBC connection pool, queue length and prepared statements; and the database server’s SQL query execution time.
  • Set up the test automation environment: evaluated performance tools and performed a distributed installation of JMeter and Hyperic HQ for load balancing, applying workload to the application under test, and collecting and monitoring performance metrics from the NTD application deployed in the cloud.
  • Designed performance test cases and created workload profile for each test.
  • Created and validated JMeter performance test scripts.
  • Created test data for automation of performance tests: distinct name-value pairs for the input parameters required by each virtual user / thread. Generated data for the NTD system to preload/populate the database for a realistic production scenario.
  • Executed load and stress tests using actual scenarios with desired load conditions (workload).
  • Performed resource monitoring and captured performance test results.
  • Analyzed test results and prepared test metrics (see the analysis sketch after this list).
  • Verified that the NTD framework meets the minimum performance standards established for this project.
  • Executed capacity testing and determined the maximum number of concurrent users that the NTD system can support under a given configuration while maintaining acceptable response time, throughput, resource utilization and error rate.
  • Executed baseline tests and compared response time, throughput and resource utilization across ongoing releases.
  • Published the performance test results.
  • Investigated performance bottlenecks in the system, identifying the application tier bound by a system resource, and eliminated the congestion by tuning performance-impacting configuration parameters of the bottlenecked backend resource.
  • Optimized code as needed by profiling it and focusing on the most time-consuming operations in the profile; compared the performance of creating new threads versus using thread pools, and looked for synchronization bottlenecks.
  • Published regression and improvement comparison charts: response time correlated with system metrics, load, tuning parameters and resource utilization; summary tables for mean, standard deviation, confidence interval and outliers; and histogram distribution charts. Reported the performance improvement trend over multiple builds.
  • Reported the current capacity of the system. Performed capacity planning and predicted resource utilization for future load growth. Predicted the performance and resource utilization of the production system under given load conditions.
  • Published performance tuning and recommendation reports.
  • Provided a conclusion on the overall quality of the application under normal and extreme load conditions to help make the go/no-go decision for the production system.
  • Published scalability recommendation report.
  • Reported bug fixes.
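
The analysis sketch referenced above, assuming a JMeter results file saved in CSV format with the default elapsed (response time in ms) and success fields; the file name, target throughput and think time are illustrative assumptions. The closing estimate applies Little's Law, N = X * (R + Z), to translate observed response times into a supported-concurrency figure.

```python
# Minimal sketch: summarize a JMeter CSV results file and estimate capacity.
import csv
import statistics

elapsed, errors, total = [], 0, 0
with open("results.jtl", newline="") as f:   # hypothetical file name
    for row in csv.DictReader(f):
        total += 1
        elapsed.append(int(row["elapsed"]))  # response time in ms
        if row["success"] != "true":
            errors += 1

elapsed.sort()
mean_ms = statistics.mean(elapsed)
stdev_ms = statistics.stdev(elapsed)
p90_ms = elapsed[max(0, int(0.9 * len(elapsed)) - 1)]  # simple 90th percentile
print(f"mean={mean_ms:.0f} ms  stdev={stdev_ms:.0f} ms  "
      f"p90={p90_ms} ms  error rate={errors / total:.2%}")

# Little's Law: concurrent users = throughput * (response time + think time).
throughput_rps = 50.0   # target requests/sec (assumption)
think_time_s = 5.0      # per-user think time in seconds (assumption)
users = throughput_rps * (mean_ms / 1000.0 + think_time_s)
print(f"supported concurrent users ~ {users:.0f}")
```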

Confidential

Responsibilities:
  • Provided consulting services for performance testing, tuning and integration of Ericsson-T-Mobile’s multi-tiered J2EE SOA based enterprise BPMS telecom billing and charging solution. The solution manages customer experience, billing and provisioning for 60 million T-Mobile Wireless subscribers and B2B customers across the USA. The BSS solution consists of a multi-tier J2EE SOA based enterprise application stack (WebLogic application server, BPMS rule engine, Oracle database server, Apache web server, Linux and Solaris boxes) integrated and orchestrated through automated rules-based workflows using SOA communication protocols such as SOAP web services, CORBA, and SFTP. The solution includes Ericsson and T-Mobile in-house products (BSCS, CS, CBiO-3, Rating, EMA, Activation & Provisioning, and Resource Management) as well as 3rd-party products such as SAP CRM, catalog-driven Order Management, Supply Chain & Inventory Management, Sales & Marketing, Campaign Management, Finance & Payment Processing, and BI Reporting.
  • Responsible for defining, refining, implementing and managing an automated performance testing solution to identify and eliminate performance problems in the BSS solution: identification, tuning and scaling of bottleneck components; profiling and diagnostics of application code; optimal parameter configuration; and optimal resource utilization with the desired throughput and responsiveness of the system prior to deployment into the production environment.
  • Participated in requirements analysis and design sessions and identified critical business use cases, workload and system performance requirements, benchmarks and KPIs for user experience, scalability, resource utilization (average and peak CPU, memory, disk I/O and network I/O usage), capacity, high availability, reliability and recoverability of the system, covering end-user response time and latency thresholds for the load balancer; normal and peak numbers of concurrent users; normal and peak throughput (orders per second, page views per second); scheduled and unscheduled downtime; slowdowns, unexpected reboots and system crashes; data integrity; hot patches; and backup and disk-failure recovery.
  • Responsible for implementing, leading and managing the performance testing environments, test data, and test artifacts such as reusable scripts and the test strategy. Identified the scope of test impacts by test phase, including level of effort, timeline and resource allocation. Assigned tasks to performance testers and tracked test progress through ongoing test reports and team meetings.
  • Developed a test plan that identified and established test metrics, workload ranges, test strategy, planned baseline tests and scripts, test data to be collected, performance counters, prerequisites, external resources, pass/fail/completion criteria, risks, dependencies, the test environment, test automation tools, tasks and activities, and test reporting processes.
  • Identified the metrics (counters/KPIs) to collect to measure performance against targets under normal and peak load conditions: response time, throughput, and CPU, memory, disk I/O and network I/O utilization, plus metrics for routers, switches, gateways, OS, JVM, web server, app server, database, and the CEB solution, such as thread allocation and number of concurrent threads, number of requests queued, connections per second, and deadlocks.
  • Generated load patterns and models based on past and projected load numbers. Created a workload profile for key usage scenarios under load and stress conditions (approximate usage model): think time, sessions per second, session duration, concurrent and overlapping users, ramp-up and ramp-down, distribution of load across scenarios, normal and peak load for current and new users, and test duration.
  • Determined the performance test approach (load, volume, stress and endurance tests) and interleaved it across the identified scenarios. Designed test scenarios, baseline tests and test scripts for load, stress and capacity testing of the BSS solution. Developed scripts and executed the performance tests to rigorously estimate the performance metrics of the system.
  • Executed load and stress tests and analyzed the results to uncover the bottleneck components bound by CPU, memory, disk I/O or network bandwidth, and recommended configuration tuning and hardware scaling to remove the bottlenecks. Investigated solution components that caused congestion in the system: monitored peak utilization (CPU, memory, disk I/O, network I/O), memory leaks, race conditions, data locking and blocking, network congestion and synchronization issues, and experimented with different parameter settings to tune the configuration. Profiled source code to find unoptimized code and algorithms and recommended optimizations to increase the overall performance of the application.
  • Reported the current capacity of the system (capacity and scalability): the maximum load that can be applied to the test environment before system-wide failure. Provided a scalability recommendation, i.e., the hardware configuration required for the production system, based on the resources the components were bound by under extreme load in the test environment.
  • Developed capacity planning models using the performance counters and machine learning algorithms to predict future scalability requirements under a specific workload and resource utilization (see the regression sketch after this list). Correlated response time with system metrics, load, and hardware/software configuration. Predicted the response time, throughput and resource utilization of the production system under given load conditions.
  • Provided a conclusion on the overall quality of the application under normal and extreme load conditions (current vs. desired) to help make the go-live decision for the production system. Reported the baseline test results (benchmarks) for future performance comparison (improvements/regressions). Reported performance improvement results (analysis and diagnostics): before vs. after the bottleneck was removed, before vs. after changing the configuration parameters, and before vs. after tuning/profiling of components.
  • Provided recommendations for optimal parameter configurations of the production system based on the performance improvement results in the test environment. Reported high-availability testing results (availability and reliability). Concluded whether the system meets the performance acceptance criteria consistently over time, provides consistent and acceptable response times for the entire user base, and stays within its stability SLAs (reliability, availability, recoverability).
  • Identified, analyzed, and documented defects discovered during testing. Worked with the development teams to troubleshoot and resolve issues and provided fine tuning recommendations.
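
The regression sketch referenced above: a minimal capacity-planning model that fits a straight line to measured load-versus-utilization pairs and extrapolates to future load. The data points are illustrative, not actual measurements, and a linear fit is only a first approximation of the modeling described.

```python
# Minimal sketch: linear capacity-planning model (illustrative data).
import numpy as np

load = np.array([100, 200, 400, 800, 1600])    # concurrent users (assumed)
cpu = np.array([8.0, 15.0, 29.0, 57.0, 88.0])  # avg CPU utilization % (assumed)

slope, intercept = np.polyfit(load, cpu, 1)    # least-squares straight line

def predicted_cpu(users: float) -> float:
    """Extrapolate CPU utilization for a projected user load."""
    return slope * users + intercept

print(f"CPU at 2000 users ~ {predicted_cpu(2000):.0f}%")
print(f"80% CPU ceiling reached near {(80 - intercept) / slope:.0f} users")
```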

Confidential

SME LTE OAM / Test Lead

Responsibilities:
  • Provided consulting services for integration and testing of Alcatel-Lucent’s multi-tiered enterprise telecom operation and maintenance (OAM) solution for 4G/3G wireless networks. Led the test team for extensive lab testing, audit and certification of Alcatel-Lucent equipment prior to launch on the Sprint LTE network. Activities included installation, upgrade, integration, configuration management (CM), fault management (FM), performance management (PM), wireless data analytics, and maintenance of OAM/OSS products (5620 SAM, NPO, NEM, WPS), the LTE core (MMEs, S/P-GWs, backhaul routers), and the RAN (eNodeBs).
  • Performed all phases of test development: requirements analysis, test planning, test strategy, test case and script development, test execution, test reporting, test result analysis, defect retesting, regression testing, and test management. Created test plans, the test strategy document, test cases and test scripts, and developed and executed feature and regression tests to verify connectivity, mobility and performance measurement across product releases (LE3.0 through LR13.3). Designed acceptance tests, exit reports, progress reports, MOPs and knowledge notes, and identified delta alarms for each release. Architected the test automation framework and developed test scripts for functional and performance testing using testing tools and Python scripts.
  • Analyzed wireless performance and fault data such as performance counters and indicators, OM counters, QoS counters, per-call measurements, logical parameters, PCAP/Snoop captures, call traces, NMS server logs and nodal debug files, and historical alarms and faults. Discovered correlations and failure patterns, and formulated optimal parameter configurations for LTE nodes to address those issues. Developed KPIs, metrics and reports for monitoring network performance (a KPI computation sketch follows this list).
  • Planned, installed and upgraded 5620 SAM and NPO in the Network Readiness Test Lab (NRT, MOCN). Performed software upgrades and executed regression, integration and interoperability tests for SAM, NPO, SGW, PGW and eNodeB. Designed test scripts using XMLAPI, CLI commands, HTTP SOAP, etc. Performed eNodeB database reconfiguration and call trace analysis; prepared snapshot instances; created work orders and templates using WPS for eNodeB parameter updates.
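
The KPI computation sketch referenced above: deriving an accessibility KPI from raw OM counters. The counter names and the CSV export format are hypothetical; actual counter sets vary by node type and release.

```python
# Minimal sketch: compute an RRC setup success-rate KPI from OM counters.
# Counter names and file format are assumptions, not actual product names.
import csv

attempts, successes = 0, 0
with open("om_counters.csv", newline="") as f:    # hypothetical export
    for row in csv.DictReader(f):
        attempts += int(row["RRC_CONN_ATT"])      # assumed counter name
        successes += int(row["RRC_CONN_SUCC"])    # assumed counter name

kpi = 100.0 * successes / attempts if attempts else 0.0
print(f"RRC setup success rate: {kpi:.2f}%")
```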

Confidential

Senior Software Development Engineer

Responsibilities:
  • Provided performance engineering and predictive analytics solutions for big data applications based on Microsoft’s Bing Internet search engine. Conducted performance engineering studies and analysis to discover congestion and outliers in the Cosmos MapReduce implementation (see the outlier-detection sketch after this list). Created benchmark performance tests to measure performance improvements or regressions in the cluster. Designed and developed predictive models and programs for capacity, utilization, and scalability planning. Developed models for live-site job execution flows in the production cluster and created congestion-control algorithms to address job-failure scenarios. Performed code reviews, query optimization and technical discussions for data architecture.
  • Developed data models to support analysis of search A/B experiment results, periodic standard reports, visualization graphs, scorecards and data feeds for other groups (business users, engineers, and management) to promote understanding of the high-level behavior and performance of the systems from which the data was generated. Investigated data visualization and summarization techniques for conveying key findings. Performed ad-hoc exploratory statistics and data mining tasks.
  • Extensively used Microsoft big data technologies (Dryad, Cosmos Scope, Azure, C# LINQ, SQL), open source implementations (Hadoop, HBase, Hive, Pig, and Mahout) and machine learning tools (R and Matlab) for predictive modeling, prototyping and data analytics.
  • Extensively used Python, R, Matlab, SQL and business intelligence frameworks for deep data munging, information retrieval, data analysis, predictive modeling, automation and prototyping. Performed data integration, data migration, data collection, cleansing, data reduction, data validation, data transformation, pre-processing, aggregation, etc. for large structured and unstructured datasets.
  • Performed complex data analytics using machine learning and statistical approaches on high dimensional massive data sets and streams coming from various pipelines—consisting of thousands of servers and multiple data centers around the world—such as impression logs, query result page views, user events and click streams, user behavior for each query, ad impressions, ad clicks, ad conversion and revenue, seen URLs and web browsing history etc. from Cosmos, Search, AdCenter, MSN Toolbar, Internet Explorer, and Web content generated by search crawlers and Web graphs generated by Index pipelines.
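
A minimal sketch of the outlier discovery mentioned above, flagging anomalously slow job executions with a simple interquartile-range rule; the durations are illustrative stand-ins for real cluster job metrics, and the actual analysis used richer statistical and machine learning methods.

```python
# Minimal sketch: flag slow-job outliers with a 1.5*IQR fence.
import statistics

durations = [310, 295, 305, 320, 298, 940, 312, 301, 1420, 307]  # seconds (illustrative)

q1, _, q3 = statistics.quantiles(durations, n=4)  # quartile cut points
upper_fence = q3 + 1.5 * (q3 - q1)

outliers = [d for d in durations if d > upper_fence]
print(f"upper fence = {upper_fence:.0f}s; outliers: {outliers}")
```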

Confidential

Test Manager

Responsibilities:
  • Provided consulting services for functional, performance, integration and user acceptance testing (UAT) of Unisys’ multi-tiered J2EE SOA based enterprise BPMS contract management solution for GSA. Performed requirements analysis and created feature packets for functional and performance requirements and system use cases. Worked with leads and engineers to ensure that the product met its functional, quality, performance, scalability, privacy, security and usability goals. Developed effective UAT methodologies, strategies, test cases, scripts, a test automation framework, KPI metrics, and QA metrics to ensure that the product met the specified requirements to a high standard of quality.
  • Worked with end users to capture existing manual business processes and workflows. Identified and developed new product enhancements, performed business case analysis, and obtained acceptance from stakeholders for the recommended enhancements. Major deliverables included use cases, flowcharts, business rules analysis, business requirements analysis and synthesis into a system specification, a traceability matrix, prototypes and interface designs. Formulated meaningful KPI metrics and reporting and provided recommendations to senior management on process and workflow improvements.
  • Worked with the technology team to deliver enhancements that generate business value. Worked closely with the development team to create a reference implementation and prototypes for the ESB Message Transformation and Routing service, which enabled interoperability among dissimilar interfaces and entities (a routing sketch follows below).
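
The routing sketch referenced above: a toy content-based router in the spirit of the ESB prototype. The message fields, type names and endpoints are hypothetical and stand in for whatever transformation and routing rules the real service applied.

```python
# Minimal sketch: content-based routing keyed on a message-type header.
# Route keys and endpoint URLs are hypothetical.
ROUTES = {
    "contract.created": "http://contracts-svc/intake",
    "contract.amended": "http://contracts-svc/amend",
}

def route(message: dict) -> str:
    """Return the destination endpoint for a message, by its type header."""
    msg_type = message.get("type")
    if msg_type not in ROUTES:
        raise ValueError(f"unroutable message type: {msg_type!r}")
    return ROUTES[msg_type]

print(route({"type": "contract.created", "body": {}}))
```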

Confidential

Test Automation Lead / Sr. Systems Integrator

Responsibilities:
  • Provided functional testing, integration testing, performance testing and test automation solutions for Ericsson’s multi-tiered CORBA based enterprise telecom management / operation support system (OSS) and TEMS’ wireless network planning, performance monitoring and optimization applications.
  • Led the development of a test automation framework for functional and non-functional testing to support incremental code development and release cycles. The functional testing framework included installation, smoke and sanity tests; API, unit and feature tests; regression tests; system integration tests; database tests; Web, GUI, and web services tests; compatibility tests; and UAT tests. The non-functional testing framework included performance tests (load, stress, capacity, scalability, availability and reliability tests), A/B tests, security tests, and usability and accessibility tests. Received a service excellence award and recognition from Confidential for major contributions to performance and quality improvements of Ericsson TEMS products.
  • Responsible for defining, refining, implementing and managing an automated performance testing solution to identify and eliminate performance problems in the OSS solution and TEMS products: identification, tuning and scaling of bottleneck components; profiling and diagnostics of application code; optimal parameter configuration; and optimal resource utilization with the desired throughput and responsiveness of the system prior to its deployment into the production environment.
  • Reviewed project specifications and worked with other organizations to understand the performance requirements of the project, including the system architecture, design, internal and external interfaces, use cases, etc. Worked across a broad range of technologies and hardware configurations; partnered with development teams to identify key risks and improvement opportunities; worked with managers and technical staff from Applications Development, Infrastructure, DBA and Quality Assurance team to determine the scope of deliverables requiring performance testing.
  • Participated in requirements analysis and design sessions and identified critical business use cases, workload and system performance requirements, benchmarks and KPIs for user experience, scalability, resource utilization (average and peak CPU, memory, disk I/O and network I/O usage), capacity, high availability, reliability and recoverability of the system, covering end-user response time and latency thresholds for the load balancer; normal and peak numbers of concurrent users; normal and peak throughput (orders per second, page views per second); scheduled and unscheduled downtime; slowdowns, unexpected reboots and system crashes; data integrity; hot patches; and backup and disk-failure recovery.
  • Identified the metrics (counters/KPIs) to collect to measure performance against targets under normal and peak load conditions: response time, throughput, and CPU, memory, disk I/O and network I/O utilization, plus metrics for routers, switches, gateways, OS, JVM, web server, app server, database, and the CEB solution, such as thread allocation and number of concurrent threads, number of requests queued, connections per second, and deadlocks.
  • Generated load patterns and models based on past and projected load numbers. Created a workload profile for key usage scenarios under load and stress conditions (approximate usage model): think time, sessions per second, session duration, concurrent and overlapping users, ramp-up and ramp-down, distribution of load across scenarios, normal and peak load for current and new users, and test duration.
  • Determined the performance test approach (load, volume, stress and endurance tests) and interleaved it across the identified scenarios. Designed test scenarios, baseline tests and test scripts for load, stress and capacity testing of the OSS solution. Developed scripts and executed the performance tests to rigorously estimate the performance metrics of the system.
  • Executed load and stress tests and analyzed the results to uncover the bottleneck components bound by CPU, memory, disk I/O or network bandwidth, and recommended configuration tuning and hardware scaling to remove the bottlenecks. Investigated solution components that caused congestion in the system: monitored peak utilization (CPU, memory, disk I/O, network I/O), memory leaks, race conditions, data locking and blocking, network congestion and synchronization issues, and experimented with different parameter settings to tune the configuration. Profiled source code to find unoptimized code and algorithms and recommended optimizations to increase the overall performance of the application.
  • Reported the current capacity of the system (capacity and scalability): the maximum load that can be applied to the test environment before system-wide failure. Provided a scalability recommendation, i.e., the hardware configuration required for the production system, based on the resources the components were bound by under extreme load in the test environment.
  • Developed capacity planning models using the performance counters and machine learning algorithms to predict future scalability requirements under a specific workload and resource utilization. Correlated response time with system metrics, load, and hardware/software configuration. Predicted the response time, throughput and resource utilization of the production system under given load conditions.
  • Provided a conclusion on the overall quality of the application under normal and extreme load conditions (current vs. desired) to help make the go-live decision for the production system. Reported the baseline test results (benchmarks) for future performance comparison (improvements/regressions). Reported performance improvement results (analysis and diagnostics): before vs. after the bottleneck was removed, before vs. after changing the configuration parameters, and before vs. after tuning/profiling of components.
  • Provided recommendations for optimal parameter configurations of the production system based on the performance improvement results in the test environment. Reported high-availability testing results (availability and reliability). Concluded whether the system meets the performance acceptance criteria consistently over time, provides consistent and acceptable response times for the entire user base, and stays within its stability SLAs (reliability, availability, recoverability).
  • Identified, analyzed, and documented defects discovered during testing. Worked with the development teams to troubleshoot and resolve issues and provided fine tuning recommendations.
  • Performed Monte Carlo simulations to estimate coverage and capacity issues under various uncertain radio conditions (see the coverage sketch after this list). Analyzed drive test data and tuned the propagation model. Performed link budget analysis and analysis of predicted versus actual data, and determined optimized antenna configurations. Conducted coverage and capacity predictions and identified coverage holes under various loading and fading scenarios. Modified and analyzed the system for capacity. Performed neighbor-list and PN-offset planning and interference analysis. Identified and resolved RF design deficiencies such as pilot pollution and hand-off issues in wireless networks. Performed code reviews, query optimization and technical discussions for software and network issues.
  • Used data mining and machine learning algorithms for root-cause analysis of wireless network problems such as coverage holes and capacity bottlenecks, and to uncover trends, hidden patterns and outliers. Extensively used Java, Python, R, Matlab, Weka, SQL and business intelligence frameworks for deep data munging, information retrieval, data analysis, predictive modeling, automation and prototyping.
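
The coverage sketch referenced above: a minimal Monte Carlo estimate of cell-edge coverage probability under log-normal shadow fading. All radio parameters are illustrative assumptions, not values from the actual networks.

```python
# Minimal sketch: Monte Carlo cell-edge coverage under log-normal shadowing.
import random

TX_POWER_DBM = 43.0        # transmit power (assumed)
PATHLOSS_DB = 130.0        # median path loss at cell edge (assumed)
SHADOW_SIGMA_DB = 8.0      # shadowing standard deviation (assumed)
RX_THRESHOLD_DBM = -100.0  # minimum usable received power (assumed)

def covered() -> bool:
    """One trial: received power above threshold after random shadowing."""
    shadowing_db = random.gauss(0.0, SHADOW_SIGMA_DB)
    return TX_POWER_DBM - PATHLOSS_DB + shadowing_db >= RX_THRESHOLD_DBM

trials = 100_000
coverage = sum(covered() for _ in range(trials)) / trials
print(f"estimated cell-edge coverage probability: {coverage:.1%}")
```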
