Understand the Workshop
Updated 26 November 2024
This page explains what the metrics in the different Workshop tabs correspond to. Each tab of the side and horizontal bars is described below, in the order they appear in the side bar:
Applications
Benchmark
Tests tracking
Modules
Rules
Documentation
Applications
On this view, you can:
Check the applications list and their properties.
Edit an application by clicking on the pen.
Archive it by clicking on the folder.
Add a new application that is not yet in the list by clicking on the Add an application button.
If you click on an application, you can view its detailed analysis, split across the following tabs:
Dashboard
Meter
Test Results
Evolution
Dashboard
The Dashboard tab gives you a summary of your application's results. You can select the version of the application by clicking in the box right next to its name.
This tab is only relevant for benchmark measures.
1 - Eco-score
The global eco-score is an indicator of the application's level of ecodesign. It is the average of the eco-scores of the following domains: Network and Client resources.
All eco-scores range from 0 to 100. To get the highest score, it is important to launch all analyses.
Just below the eco-score, the evolution since the previous version of the application is shown.
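As a concrete illustration of the averaging described above, here is a minimal sketch. The field names and values below are hypothetical, not the Workshop's actual data model:

```python
# Hypothetical domain eco-scores (each between 0 and 100); the values
# are illustrative only, not taken from a real analysis.
domain_scores = {"Network": 72, "Client resources": 58}

# The global eco-score is the average of the domain eco-scores.
global_eco_score = sum(domain_scores.values()) / len(domain_scores)
print(global_eco_score)  # 65.0
```

If a domain has no measurement yet, its eco-score is missing and the global score cannot reach its maximum, which is why launching all analyses matters.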
2 - Eco-score by domain
Network: evaluates the level of ecodesign of the requests exchanged between client and server. It is calculated by analyzing the requests and their content. To get a score, launch measurements either with the TestBench or with the Test Runner (with the http_request option activated).
Client resources: evaluates whether your application uses client-side resources efficiently. It is calculated from the client-side resource consumption (energy, CPU, memory…). To get a score, launch measurements either with the TestBench or with the Test Runner on a device.
3 - Improvements summary
This is a small bar graph summarizing the number of rules by priority. Click on a priority to filter the rules in the Improvements table below. To display all the rules again, select All rules in the select box at the top right of the Improvements table.
4 - Autonomy
This graph shows the impact of the application on the phone's battery: both the battery drain caused by using the application and the reduced autonomy due to the application's mere presence on the phone.
5 - Tests details
This bar chart represents the different steps according to the selected metric. The color indicates how impactful each step is for that metric, from dark green (the most sober) to red (the most impactful).
6 - Improvements
This is the list of all the rules that were checked during analysis: rules that are correctly respected appear in green, whereas violated rules appear in yellow, orange or red. They are prioritized according to the gains you can achieve. Each rule is classified by domain; the number next to the domain is that domain's eco-score.
Each rule is associated with a priority, a score and a gain.
The priority is an indicator to help you prioritize your work and is directly correlated with the gain.
The score indicates how well the rule is respected in the application. Adding up all the scores of a domain gives that domain's eco-score.
The gain is the difference between the maximum score and the actual score. Together with the priority, it helps you choose which rules to work on first.
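To make the relationship between score, gain and domain eco-score concrete, here is a minimal sketch. The rule names, maximum scores and values are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical rules for one domain; each rule has a maximum score and
# the score actually achieved by the application (illustrative values).
rules = [
    {"name": "Compress images", "max_score": 20, "score": 12},
    {"name": "Cache static assets", "max_score": 15, "score": 15},
    {"name": "Minify JavaScript", "max_score": 10, "score": 4},
]

# Gain = maximum score - actual score: the points still to be earned.
for rule in rules:
    rule["gain"] = rule["max_score"] - rule["score"]

# The domain's eco-score is the sum of its rules' scores.
domain_eco_score = sum(rule["score"] for rule in rules)
print(domain_eco_score)  # 31

# The rule with the largest gain is a natural candidate to tackle first.
print(max(rules, key=lambda r: r["gain"])["name"])  # Compress images
```

A fully respected rule (gain of 0) contributes its maximum score and needs no further work.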
7 - Consumed resources
A summary of consumed resources for the following metrics: Platform Discharge
, Process Data
, Process Memory
, Process CPU
.
The difference between the last two versions of the metric is shown on the left side of the value.
Meter
In the left menu, search and sort your test cases by metric. Click on a test case to check its detailed analysis.
Metrics are grouped into metrics associated with the entire platform and metrics associated with a process or thread (see Metrics List).
You can verify the stability of your measurements with Verification of measures consistency.
Test Results
You will find the results of your functional tests (passed or failed). If some iterations failed, you can see the error by clicking on the drop-down arrow.
Evolution
In the Evolution tab, you can compare the average measurement results by version on a graph. From the test case list, you can compare a test case's detailed analysis between two versions.
Benchmark
On the side bar, this tab allows you to launch benchmark tests.
To launch benchmark tests, please refer to the page Measure on the Testbench or 01 - Launch a first benchmark to discover.
Tests tracking
This tab lists all the tests you have launched on the Testbench.
Pending: jobs in a pending state (no device available for the moment).
Running: jobs in progress.
Finished: jobs that are done.
Once a job is finished, several pieces of information are available in the Finished list:
Ended at: the date and time at which the test finished.
Status: failed or finished; in case of failure, hover over the exclamation mark icon for more details.
Tests Passed: the status of the functional tests.
Modules
Find here all the Greenspector modules you can download.
Rules
Here you will find more detailed information about software ecodesign good practices. Rules are organized into 3 domains: code, network and client resources. For the code domain, you can also filter the rules by language.
Each rule includes an estimate of the potential gains in energy, memory and performance, as well as an indication of how difficult it is to apply.