Verification of measurement consistency

Once you have measured your application, you should verify the consistency of the results (stability, potential errors, etc.).

These verifications are most useful at the beginning of a project; later in the project's life they become optional.

You can also deliberately choose not to correct the standard deviation, so that your measures reflect the real instability of your application (due to the network, servers, etc.).

Step-by-step guide

In your application on Greenspector App:

  1. Verify that each of your test cases has a minimum number of iterations (at least 3).

  2. On the Test Results view, check that all the functional tests passed. If not, you may still have measurements, but you should not be confident about them (you might not have been measuring what you thought)!

  3. On the Test Results view, check that all your measures are present.

  4. On the Meter view, check the standard deviation of the measures. If the standard deviation is too high (in that case, the average of the measures is displayed in red or orange), it means that the measures across iterations are not stable.

  5. To improve the standard deviation, you can add iterations and/or deactivate some of them (if you think they should be treated as errors). Measures outside the confidence range are displayed in color (from yellow to red); see the sketch after this list.

  6. Select the Test Duration metric and verify that the durations are higher than 0.5 second. If not, you have too few measurement points; in that case, consider modifying your automated tests to increase the test time.
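
Greenspector computes the standard deviation and confidence range for you, but the following minimal Python sketch (with hypothetical measures and an illustrative 1.5-sigma threshold, not necessarily the tool's actual rule) shows the kind of check behind steps 4 and 5: a high relative standard deviation signals instability, and the iterations far from the mean are the candidates for deactivation.

```python
import statistics

# Hypothetical energy measures (mAh) for 5 iterations of one test case
measures = [2.10, 2.14, 2.08, 2.95, 2.11]

mean = statistics.mean(measures)
stdev = statistics.stdev(measures)  # sample standard deviation

# Relative standard deviation: a rough stability indicator
rsd = stdev / mean * 100
print(f"mean={mean:.2f} mAh  stdev={stdev:.2f}  RSD={rsd:.1f}%")

# Flag iterations far from the mean (illustrative 1.5-sigma threshold;
# Greenspector's actual confidence range may be computed differently)
for i, m in enumerate(measures, start=1):
    if abs(m - mean) > 1.5 * stdev:
        print(f"iteration {i} ({m} mAh) is outside the confidence range")
```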


Link between measurement instability and backend instability

If the backend platform is unstable (which is common on development platforms), you can run additional iterations to work around test errors. However, we advise you to improve the stability of your environment in order to improve the testability of your application.

Link between measurement instability and sufficiency

Your application can be “unstable” in terms of resource consumption, for example if it performs many network requests or heavy CPU processing. In this case, we advise you to increase the number of iterations (>5) and not to deactivate any of them. Work first on the sufficiency of your solution: the instability of the measure most likely reflects the reality on the user's device. The sketch below illustrates why additional iterations still help.
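
To see why more iterations help even when individual runs stay noisy, here is a small simulation with made-up numbers (not Greenspector data): the spread of single measures stays roughly the same, but the uncertainty on the average, the standard error of the mean, shrinks as 1/sqrt(n).

```python
import random
import statistics

random.seed(1)

# Hypothetical noisy measure: ~100 mAh with network/CPU-induced noise
def run_iteration():
    return random.gauss(100, 15)

for n in (3, 5, 10, 20):
    measures = [run_iteration() for _ in range(n)]
    stdev = statistics.stdev(measures)  # spread of single runs: stays large
    sem = stdev / n ** 0.5              # uncertainty of the mean: shrinks as 1/sqrt(n)
    print(f"n={n:2d}  mean={statistics.mean(measures):6.1f} mAh  stdev={stdev:4.1f}  SEM={sem:4.1f}")
```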

Link between measurement instability and (micro)benchmarking

If you want to do some microbenchmarking (for example, comparing frameworks or coding best practices), the differences you are looking for may fall within the margin of error of the measure. We advise you to increase the number of iterations (>10).
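
One way to check whether an observed difference is real or just measurement noise is to compare rough confidence intervals around each variant's mean. Here is a sketch with hypothetical measures; the interval rule (mean +/- 2 standard errors) is a common approximation, not something Greenspector exposes.

```python
import statistics

def ci(measures, k=2):
    """Rough confidence interval: mean +/- k standard errors."""
    mean = statistics.mean(measures)
    sem = statistics.stdev(measures) / len(measures) ** 0.5
    return mean - k * sem, mean + k * sem

# Hypothetical energy measures (mAh) for two framework variants, 10 iterations each
variant_a = [2.10, 2.14, 2.08, 2.11, 2.13, 2.09, 2.12, 2.10, 2.15, 2.11]
variant_b = [2.05, 2.09, 2.04, 2.08, 2.06, 2.07, 2.05, 2.10, 2.06, 2.08]

low_a, high_a = ci(variant_a)
low_b, high_b = ci(variant_b)
print(f"A: [{low_a:.3f}, {high_a:.3f}]  B: [{low_b:.3f}, {high_b:.3f}]")

if high_b < low_a or high_a < low_b:
    print("Intervals do not overlap: the difference exceeds the margin of error.")
else:
    print("Intervals overlap: the difference is within measurement noise.")
```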


Campaign coherence

The measures on a given version need to be coherent. This means that:

  • If you have debugged the tests on a version, that version should not be used to analyze the results; use a dedicated version for measurement.

  • Avoid launching campaigns that are too far apart in time.