TODO: consider adding a flag for verbose 'testSuite' benchmark output.
Each test case has benchmark output data. Benchmark data represents the expected outcome of a test case, against which subsequent test runs are measured. Benchmark data is generated with the version of the code at a specific Git commit ID.
Benchmark data is stored in a sub-folder within the test case's numbered folder.
The benchmark data sub-folder is named output.######, where ###### is the Git commit ID of the version of Jetstream that was used to generate the benchmark data.
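A minimal sketch of how the sub-folder name could be derived, assuming Python. The function names, the six-character commit length, and the cases/ directory layout are illustrative assumptions, not part of the test suite:

```python
import subprocess
from pathlib import Path

def current_commit_id(length: int = 6) -> str:
    # Short commit ID of the checked-out Jetstream code (assumes a git work tree).
    out = subprocess.run(
        ["git", "rev-parse", f"--short={length}", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def benchmark_dir(test_case_dir: Path, commit_id: str) -> Path:
    # Benchmark data lives in output.###### inside the numbered test case folder.
    return test_case_dir / f"output.{commit_id}"
```

For example, a test case folder cases/0042 benchmarked at commit a1b2c3 would get its benchmark data in cases/0042/output.a1b2c3.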
The test case run-date refers to the date the test case was submitted to the queue.
The test case Git commit ID refers to the state of the code when the test case was run.
After a test case has run, its critical values are compared against the benchmark values.
A test case receives a pass only if every critical value matches the corresponding benchmark value.
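The pass criterion above could be sketched as follows; the dict-of-values representation, the key names, and the rule that a missing critical value counts as a failure are assumptions for illustration:

```python
def compare_to_benchmark(run_values: dict, benchmark_values: dict,
                         critical_keys: set) -> bool:
    """Pass only if every critical value matches its benchmark counterpart."""
    for key in critical_keys:
        if key not in run_values or key not in benchmark_values:
            return False  # a missing critical value is a failure, not a match
        if run_values[key] != benchmark_values[key]:
            return False
    return True

benchmark = {"mass_flow": 1.25, "pressure": 101.3, "wall_time": 12.0}
critical = {"mass_flow", "pressure"}  # wall_time is informational, not critical

run_ok = {"mass_flow": 1.25, "pressure": 101.3, "wall_time": 14.0}
run_bad = {"mass_flow": 1.30, "pressure": 101.3, "wall_time": 12.0}
```

Note that non-critical values (here wall_time) may drift between runs without failing the comparison; only the critical set gates the pass/fail result.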
Each test case therefore has a set of benchmark values that define its expected results.