
Managing JMeter Test Reports

Test Report Description

Real-time and offline JMeter test reports allow you to view and analyze test data at any time.

For details about the JMeter test report, see Table 1.

This report shows the response performance of the tested system in a scenario with a large number of concurrent users. To help you understand the report, the following information is provided for reference:

  • Statistical dimension: In this report, RPS, response time, and concurrency are measured per thread. A request that consists of multiple packets is considered successful only when every packet receives a response, and its response time is the sum of the packets' response times.
  • Response timeout: If the corresponding TCP connection does not return response data within the configured response timeout (customized in *.jmx files), the thread's request is counted as a response timeout. Possible causes include a busy or crashed tested server, or fully occupied network bandwidth.
  • Verification failure: By default, the expected HTTP/HTTPS response code is 200. When the response returned by the server does not meet the expectation, a response code such as 404 or 502 is returned instead. A possible cause is that the tested service cannot process requests normally under a large number of concurrent users, for example, a database bottleneck in the distributed system or an error returned by the backend application.
  • Bandwidth: This report collects statistics on the bandwidth of the execution end of the performance test service. Uplink indicates traffic sent from the performance test service, and downlink indicates traffic received by it. In external pressure test scenarios, check whether the executor's EIP bandwidth meets the uplink requirement and whether the downlink traffic exceeds the 1 GB downlink bandwidth.
  • RPS: the number of requests that CPTS sends to the tested server per second.
  • How to evaluate the quality of tested applications: Per the application's service quality definition, the optimal state is that no response or verification failures occur. Any failures that do occur must stay within the defined service quality range, generally no more than 1%. The shorter the response time, the better: user experience is good when the response time is within 2s, acceptable within 5s, and needs optimization beyond 5s. TP90 and TP99 objectively reflect the response time experienced by 90% and 99% of users.
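The quality rules above (a 1% failure budget and the 2s/5s response-time tiers) can be sketched in a few lines. This is a minimal illustration only; the function names are hypothetical and not part of CPTS:

```python
# Illustrative sketch of the quality guidance above. The thresholds
# (1% failure budget, 2 s / 5 s response-time tiers) come from this
# section; the helper functions themselves are made up for clarity.

def failure_rate(total_requests, failed_requests):
    """Fraction of requests that timed out or failed verification."""
    return failed_requests / total_requests

def experience(avg_response_s):
    """Classify user experience by response time (2 s / 5 s tiers)."""
    if avg_response_s <= 2:
        return "good"
    if avg_response_s <= 5:
        return "acceptable"
    return "needs optimization"

rate = failure_rate(100000, 350)   # 0.35% of requests failed
print(rate <= 0.01)                # within the 1% budget -> True
print(experience(1.8))             # -> good
```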
Table 1 Description of the JMeter test report

Concept

Description

Total Number of Metrics

Summary of the metrics across all threads.

  • Maximum Concurrency: indicates the maximum number of concurrent virtual users.
  • Normal Response: indicates the number of transaction responses that pass the set checkpoints. If no checkpoints are set, transactions that return a 2XX response code are counted by default.
  • Bandwidth: records the real-time bandwidth usage during the running of a pressure test task.
  • Response Time: indicates the duration from the time when a client sends a request to the time when the client receives a response from the server.
  • Abnormal Response: indicates the number of parsing failures, verification failures, response timeouts, 3XX/4XX/5XX responses, and connection failures.
  • Average RPS: indicates the average number of requests that CPTS sends to the tested server per second in a statistical period.

Response distribution statistics

Indicates the number of transactions per second that return normal responses, parsing failures, verification failures, and response timeouts. This metric is related to the think time, the number of concurrent users, and the server's response capability. For example, if the think time is 500 ms and the response time of the current user's last request is less than 500 ms, two requests can be processed in 1s.

  • Normal response: indicates the number of transaction responses that pass the set checkpoints. If no checkpoints are set, transactions that return a 2XX response code are counted by default.
  • Verification failure: indicates the number of transaction responses that do not pass the set checkpoints. If no checkpoints are set, transactions that return a 2XX response code are not counted.
  • 3XX: indicates that the client needs to perform further operations to complete the request.
  • 4XX: indicates that an error occurs on the client. As a result, the request cannot be processed.
  • 5XX: indicates that the server cannot complete the request.
  • Rejected: indicates the number of rejected connection requests.
  • Others: indicates the number of other errors.
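The think-time example above can be expressed as simple arithmetic: each virtual user's request rate is bounded by response time plus think time. A minimal sketch under that simplifying model; the function names are hypothetical:

```python
# Illustrative model of per-user throughput: a user waits for the
# response, then thinks, then sends the next request, so the cycle
# length is (think time + response time). Names are made up here.

def per_user_rps(think_time_s, response_time_s):
    """Approximate requests per second a single virtual user can issue."""
    return 1.0 / (think_time_s + response_time_s)

def total_rps(concurrent_users, think_time_s, response_time_s):
    """Approximate aggregate request rate across all users."""
    return concurrent_users * per_user_rps(think_time_s, response_time_s)

# A 500 ms think time with a near-zero response time approaches the
# section's example of two requests per second per user.
print(per_user_rps(0.5, 0.0))    # 2.0
print(total_rps(100, 0.5, 0.5))  # 100 users -> 100.0 requests/s
```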

Bandwidth (kbit/s)

Records the real-time bandwidth consumed while the pressure test task runs.

  • Uplink bandwidth: speed at which the JMeter execution node sends out data.
  • Downlink bandwidth: speed at which the JMeter execution node receives data.
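The bandwidth check described earlier (whether the executor's uplink capacity can carry the generated load) reduces to simple arithmetic. A minimal sketch, assuming an average request payload size; the helper name is hypothetical and not part of CPTS:

```python
# Illustrative estimate of the uplink bandwidth a JMeter execution
# node needs: requests per second times average request size,
# converted from bytes to kbit/s. The helper name is made up here.

def required_uplink_kbits(rps, avg_request_bytes):
    """Uplink bandwidth (kbit/s) needed to send rps requests per second."""
    return rps * avg_request_bytes * 8 / 1000

# 5,000 requests/s of ~2 KB each:
print(required_uplink_kbits(5000, 2048))   # 81920.0 kbit/s (~82 Mbit/s)
```

Compare the result against the executor's EIP bandwidth to decide whether the uplink is a bottleneck.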

RPS/Average Response Time

  • RPS: indicates the number of requests that CPTS sends to the tested server per second.
  • Average response time: indicates the average response time of all requests sent in a second.

Concurrent Users

Indicates the changes in the number of concurrent virtual users during testing.

Response Time Ratio

Indicates the proportion of case response times falling into each range.

TP (ms)

To calculate the top percentile XX (TPXX) for a request, collect all the response time values for the request over a time period (such as 10s) and sort them in ascending order. Remove the top (100 - XX)% of values; the highest value left is the value of TPXX.

  • TP50: Remove the top 50% from the list, and the highest value left is the value of TP50.
  • TP90: Remove the top 10% from the list, and the highest value left is the value of TP90.
  • TP97: Remove the top 3% from the list, and the highest value left is the value of TP97.
  • TP99: Remove the top 1% from the list, and the highest value left is the value of TP99.
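The TPXX rule above can be sketched directly: sort the window's samples, drop the slowest (100 - XX)%, and take the highest remaining value. The function name and sample data below are illustrative:

```python
# Sketch of the TPXX computation described above: sort the response
# times collected in a window, keep the fastest XX%, and return the
# highest value kept. Names and data are illustrative only.

def tp(response_times_ms, percentile):
    """Highest response time after removing the slowest (100 - XX)%."""
    ordered = sorted(response_times_ms)
    keep = max(1, int(len(ordered) * percentile / 100))
    return ordered[keep - 1]

samples = list(range(10, 1010, 10))   # 100 samples: 10, 20, ..., 1000 ms
print(tp(samples, 50))   # 500
print(tp(samples, 90))   # 900
print(tp(samples, 99))   # 990
```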

Monitoring metrics

Overview: monitoring information, such as CPU usage, memory usage, disk read speed, and disk write speed.

CPU (%): CPU usage of a test object.

Memory (GB): memory usage of a test object.

Disk read (kbit/s): volume of data read from the monitored disk per second.

Disk write (kbit/s): volume of data written to the monitored disk per second.

JVM monitoring: displays memory and thread metrics of the JVM runtime for Java applications. You can monitor metric trends in real time for performance analysis.

Tracing analysis

Failed tracing: indicates application call failures and is displayed only when a call fails. Click View Call Relationship to view calling relationships in a pop-up window. The latest 600 records are displayed (up to the latest 50 records for a single application).

Topology relationship: shows the calling relationships between applications. Use it to analyze the call success rate and response time, and to view the relationships, number of calls, and latency.

Viewing a Real-Time Test Report

View the monitoring data of each metric during a pressure test through the real-time test report.

Prerequisites

A test task has been started.

Procedure

  1. Log in to the CPTS console. In the navigation pane on the left, choose JMeter Test Projects. Click Edit Test Plan next to the project or click More > View Real-Time Report.
  2. On the Real-Time Reports tab page, select the test plan whose test report you want to view.

    Click Stop Task to stop the test plan of a real-time report.

  3. Select the thread under the test plan and view the report. For details about the parameters, see Table 1.

    Note
    • By default, the report of the first thread is displayed. If no thread is selected from the drop-down list, the reports of all threads are displayed.
    • You can select multiple threads under the test plan to view the reports of these threads.
    • You can click the drop-down list box before the thread name under Test metrics, and select one or more packets of the thread to view performance metrics of these packets.

  4. View the response distribution statistics, bandwidth, response time ratio, TP (ms), and number of concurrent users.
  5. View the monitoring metrics of the test plan, and perform calling analysis on the test plan. For details about the parameters, see Table 1.

    • Monitoring metrics: CPU, memory, disk read rate, and disk write rate.
    • Tracing analysis: supports analysis of the failed tracing and topology relationship to locate problems during the test.

Viewing an Offline Test Report

After a pressure test is complete, the system generates an offline test result report.

Prerequisites

The test task is complete.

Procedure

  1. Log in to the CPTS console. In the navigation pane on the left, choose JMeter Test Projects. Click Edit Test Plan next to the project or click More > View Offline Report.
  2. On the Offline Reports tab page, select the test plan whose report you want to view.
  3. (Optional) Click the expand icon in front of the report name to view the test plan definition and thread definition.
  4. Click View Report, select a thread from the drop-down list, and view the test report result. For details about the parameters, see Table 1.

    Note
    • By default, the report of the first thread is displayed. If no thread is selected from the drop-down list, the reports of all threads are displayed.
    • You can select multiple threads under the test plan to view the reports of these threads.
    • You can click the drop-down list box before the thread name under Test metrics, and select one or more packets of the thread to view performance metrics of these packets.

  5. View the response distribution statistics, bandwidth, response time ratio, TP (ms), and number of concurrent users.

    Note

    Monitoring data in offline reports can be stored for 30 days, but failed tracings and topology relationships can only be stored for 7 days.