Total Statistics | Total | Pass | Fail | Skip | Elapsed
---|---|---|---|---|---
All Tests | 2 | 1 | 1 | 0 | 00:01:04
Statistics by Tag | Total | Pass | Fail | Skip | Elapsed
---|---|---|---|---|---
critical | 1 | 0 | 1 | 0 | 00:01:04
singlenode_setup | 1 | 0 | 1 | 0 | 00:01:04
Statistics by Suite | Total | Pass | Fail | Skip | Elapsed
---|---|---|---|---|---
controller-benchmark.txt | 2 | 1 | 1 | 0 | 00:01:12
Full Name: | controller-benchmark.txt |
---|---|
Documentation: | MD-SAL Data Store benchmarking. Copyright (c) 2015 Cisco Systems, Inc. and others. All rights reserved. This program and the accompanying materials are made available under the terms of the Eclipse Public License v1.0 which accompanies this distribution, and is available at http://www.eclipse.org/legal/epl-v10.html. This test suite uses the odl-dsbenchmark-impl feature, driven by the dsbenchmark.py tool, to test MD-SAL Data Store performance (see 'https://wiki.opendaylight.org/view/Controller_Core_Functionality_Tutorials:Tutorials:Data_Store_Benchmarking_and_Data_Access_Patterns'). Based on the values in the test suite variables, it triggers the required numbers of warm-up and measured test runs: the odl-dsbenchmark-impl module generates the specified structure, type and number of operations towards the MD-SAL Data Store. The suite checks start-up and test execution timeouts (Start Measurement, Wait For Results) and performs basic checks on test run results (Check Results). Finally, it provides totals per operation structure and type (by default in the perf_per_struct.csv and perf_per_ops.csv files) suitable for plotting in a system test environment; see also 'https://wiki.opendaylight.org/view/CrossProject:Integration_Group:System_Test:Step_by_Step_Guide#Optional_-_Plot_a_graph_from_your_job'. The included totals can be filtered using the FILTER parameter (a RegExp). Because of the way graphs are drawn, it is recommended to keep all test suite variables unchanged as defined for the first build; the WARMUPS and RUNS parameters and, accordingly, the TIMEOUT value can be changed for each build if needed. The UNITS parameter defines the time units returned by the odl-dsbenchmark-impl module; the dsbenchmark.py tool always returns values in milliseconds.
When running this robot suite, always use the --exclude tag to distinguish the setup: for a 3-node setup, a benchmark for both leader and follower is needed (--exclude singlenode_setup); for a 1-node setup, no follower is present (--exclude clustered_setup). |
Source: | /w/workspace/controller-csit-1node-benchmark-all-titanium/test/csit/suites/controller/benchmark/dsbenchmark.robot |
Start / End / Elapsed: | 20250209 23:19:40.319 / 20250209 23:20:52.018 / 00:01:11.699 |
Status: | 2 tests total, 1 passed, 1 failed, 0 skipped |
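The suite documentation says the included totals can be filtered with the FILTER parameter, a RegExp applied before the totals land in the CSV output. As a rough illustration of that kind of filtering, here is a minimal sketch; the `filter_totals` helper and the sample row format are hypothetical, not the actual columns of perf_per_ops.csv:

```python
import csv
import io
import re

def filter_totals(csv_text, pattern):
    """Keep only data rows whose first column matches the RegExp; the header row is preserved."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    rx = re.compile(pattern)
    return [header] + [row for row in body if rx.search(row[0])]

# Hypothetical totals; the real perf_per_ops.csv columns may differ.
sample = "operation,total\nPUT-BA,1200\nMERGE-BI,900\nDELETE-BA,400\n"
for row in filter_totals(sample, "BA$"):
    print(",".join(row))
```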
Documentation: | Setup imported resources, SSH-login to mininet machine, create HTTP session, put Python tool to mininet machine. |
---|---|
Start / End / Elapsed: | 20250209 23:19:40.808 / 20250209 23:19:47.918 / 00:00:07.110 |
Documentation: | Cleaning-up |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.969 / 20250209 23:20:52.018 / 00:00:00.049 |
Full Name: | controller-benchmark.txt.Measure_Both_Datastores_For_One_Node_Odl_Setup |
---|---|
Tags: | critical, singlenode_setup |
Start / End / Elapsed: | 20250209 23:19:47.917 / 20250209 23:20:51.831 / 00:01:03.914 |
Status: | FAIL |
Message: |
Documentation: | Test case setup which skips on previous failure. If not, logs test case name to Karaf log. Recommended to be used as the default test case setup. |
---|---|
Start / End / Elapsed: | 20250209 23:19:47.918 / 20250209 23:19:48.023 / 00:00:00.105 |
Documentation: | Keyword which covers a whole benchmark. If ${file_prefix} is empty, we have a 1-node ODL setup. |
---|---|
Start / End / Elapsed: | 20250209 23:19:48.028 / 20250209 23:20:51.814 / 00:01:03.786 |
Documentation: | Returns the node ip which should be tested |
---|---|
Start / End / Elapsed: | 20250209 23:19:48.029 / 20250209 23:19:48.030 / 00:00:00.001 |
Documentation: | Start the benchmark tool. Check that it has been running at least for 10s period. If the script exits early, retry once after ${retry} if specified. |
---|---|
Start / End / Elapsed: | 20250209 23:19:48.031 / 20250209 23:20:51.633 / 00:01:03.602 |
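The Start Measurement behaviour described above (start the tool, require it to stay alive for at least 10 s, retry once if it exits early) can be sketched outside Robot Framework roughly as follows; the function and parameter names are illustrative, not part of the suite:

```python
import subprocess
import time

def start_with_retry(cmd, min_uptime_s=10, retry_delay_s=5):
    """Start cmd; if the process exits within min_uptime_s seconds, retry once, then fail."""
    for attempt in range(2):
        proc = subprocess.Popen(cmd)
        deadline = time.monotonic() + min_uptime_s
        while time.monotonic() < deadline:
            if proc.poll() is not None:  # the script exited early
                break
            time.sleep(0.1)
        else:
            return proc  # still running after the required uptime
        if attempt == 0:
            time.sleep(retry_delay_s)
    raise RuntimeError("benchmark tool exited early twice")
```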
Documentation: | Wait until the benchmark tool is finished. Fail in case of test timeout (3h). In order to prevent SSH session from closing due to inactivity, newline is sent every check. |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.633 / 20250209 23:20:51.633 / 00:00:00.000 |
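The Wait For Results behaviour above (poll for completion, send a newline on every check so the SSH session does not close from inactivity, fail on the overall 3 h timeout) could be sketched generically as below; `is_finished` and `send_keepalive` are placeholder callables, not keywords from the suite:

```python
import time

def wait_for_results(is_finished, send_keepalive, timeout_s=3 * 3600, poll_s=30, sleep=time.sleep):
    """Poll until is_finished() is true; each check sends a newline to keep the session alive."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        send_keepalive("\n")  # prevents the SSH session from closing due to inactivity
        if is_finished():
            return
        sleep(poll_s)
    raise TimeoutError("benchmark did not finish within the allowed time")
```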
Documentation: | Fails if the given |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.633 / 20250209 23:20:51.633 / 00:00:00.000 |
Documentation: | Fails if the given |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.633 / 20250209 23:20:51.633 / 00:00:00.000 |
Documentation: | Check outputs for expected content. Fail in case of unexpected content. |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.634 / 20250209 23:20:51.634 / 00:00:00.000 |
Documentation: | Store the provided file from the MININET to the ROBOT machine. |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.634 / 20250209 23:20:51.634 / 00:00:00.000 |
Documentation: | Store the provided file from the MININET to the ROBOT machine. |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.634 / 20250209 23:20:51.634 / 00:00:00.000 |
Documentation: | Returns the node ip which should be tested |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.634 / 20250209 23:20:51.634 / 00:00:00.000 |
Documentation: | Fails if the given objects are unequal. |
---|---|
Start / End / Elapsed: | 20250209 23:20:51.634 / 20250209 23:20:51.634 / 00:00:00.000 |
Full Name: | controller-benchmark.txt.Merge_Csvs_Together |
---|---|
Documentation: | Merge the created CSVs into the given file, because the plot plugin cannot have more than one source file per graph. |
Start / End / Elapsed: | 20250209 23:20:51.832 / 20250209 23:20:51.969 / 00:00:00.137 |
Status: | PASS |
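The Merge_Csvs_Together step above boils down to concatenating the bodies of several CSV files under a single shared header, so the plot plugin can read one combined file. A minimal sketch, with a hypothetical `merge_csvs` helper:

```python
import csv
import io

def merge_csvs(csv_texts):
    """Concatenate CSV bodies under a single shared header row."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    header_written = False
    for text in csv_texts:
        rows = list(csv.reader(io.StringIO(text)))
        if not rows:
            continue
        if not header_written:
            writer.writerow(rows[0])  # keep the first header only
            header_written = True
        writer.writerows(rows[1:])    # skip each file's repeated header
    return out.getvalue()
```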