Merge topic 'ctest-j-default' into release-3.29

5de1e21659 ctest: Allow passing -j without value to choose a contextual default
bbcbcff7d9 cmCTestMultiProcessHandler: Modernize member initialization
7457b474a1 Tests: Remove unnecessary parallel suppression from CTestCoverageCollectGCOV
ae69801d96 Tests: Convert CTestTestSkipReturnCode to RunCMake.ctest_test case
30dda49416 Tests: Convert CTestTestSerialOrder to RunCMake.ctest_test case

Acked-by: Kitware Robot <kwrobot@kitware.com>
Acked-by: buildbot <buildbot@kitware.com>
Acked-by: scivision <michael@scivision.dev>
Merge-request: !9315
Brad King authored 2024-03-11 14:18:38 +00:00; committed by Kitware Robot
48 changed files with 440 additions and 208 deletions

View File

@@ -18,7 +18,7 @@ Perform the :ref:`CTest Test Step` as a :ref:`Dashboard Client`.
[EXCLUDE_FIXTURE <regex>]
[EXCLUDE_FIXTURE_SETUP <regex>]
[EXCLUDE_FIXTURE_CLEANUP <regex>]
[PARALLEL_LEVEL <level>]
[PARALLEL_LEVEL [<level>]]
[RESOURCE_SPEC_FILE <file>]
[TEST_LOAD <threshold>]
[SCHEDULE_RANDOM <ON|OFF>]
@@ -104,9 +104,14 @@ The options are:
Same as ``EXCLUDE_FIXTURE`` except only matching cleanup tests are excluded.
``PARALLEL_LEVEL <level>``
Specify a positive number representing the number of tests to
be run in parallel.
``PARALLEL_LEVEL [<level>]``
Run tests in parallel, limited to a given level of parallelism.
.. versionadded:: 3.29
The ``<level>`` may be omitted, or ``0``, to let ctest use a default
level of parallelism, or unbounded parallelism, respectively, as
documented by the :option:`ctest --parallel` option.
``RESOURCE_SPEC_FILE <file>``
.. versionadded:: 3.16
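
As a minimal sketch of the documented ``PARALLEL_LEVEL`` forms above (hypothetical paths and generator; not part of this change set), a ctest -S dashboard script could use the new omitted-value form like this:

  # Hypothetical ctest -S script; source/build paths and generator are placeholders.
  set(CTEST_SOURCE_DIRECTORY "/path/to/src")
  set(CTEST_BINARY_DIRECTORY "/path/to/build")
  set(CTEST_CMAKE_GENERATOR "Ninja")
  ctest_start(Experimental)
  ctest_configure()
  ctest_build()
  ctest_test(PARALLEL_LEVEL)      # omitted value: ctest chooses a default level (3.29+)
  # ctest_test(PARALLEL_LEVEL 0)  # 0: unbounded parallelism
  # ctest_test(PARALLEL_LEVEL 4)  # explicit limit of 4 tests at once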

View File

@@ -8,4 +8,15 @@ For example, if ``CTEST_PARALLEL_LEVEL`` is set to 8, CTest will run
up to 8 tests concurrently as if ``ctest`` were invoked with the
:option:`--parallel 8 <ctest --parallel>` option.
.. versionchanged:: 3.29
The value may be empty, or ``0``, to let ctest use a default level of
parallelism, or unbounded parallelism, respectively, as documented by
the :option:`ctest --parallel` option.
On Windows, environment variables cannot be set to an empty string.
CTest will interpret a whitespace-only string as empty.
In CMake 3.28 and earlier, an empty or ``0`` value was equivalent to ``1``.
See :manual:`ctest(1)` for more information on parallel test execution.
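
A small sketch (hypothetical driver script; the build directory is a placeholder) of exercising the new empty/zero handling of the environment variable before launching ctest:

  # Hypothetical script; the build directory path is a placeholder.
  # Empty (or, per the docs above, whitespace-only on Windows) lets ctest choose a default level.
  set(ENV{CTEST_PARALLEL_LEVEL} "")
  # set(ENV{CTEST_PARALLEL_LEVEL} 0)   # unbounded parallelism
  execute_process(COMMAND ctest --test-dir "/path/to/build" RESULT_VARIABLE rv)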

View File

@@ -118,17 +118,27 @@ Run Tests
previously interrupted. If no interruption occurred, the ``-F`` option
will have no effect.
.. option:: -j <jobs>, --parallel <jobs>
.. option:: -j [<level>], --parallel [<level>]
Run the tests in parallel using the given number of jobs.
Run tests in parallel, optionally limited to a given level of parallelism.
This option tells CTest to run the tests in parallel using given
number of jobs. This option can also be set by setting the
:envvar:`CTEST_PARALLEL_LEVEL` environment variable.
.. versionadded:: 3.29
The ``<level>`` may be omitted, or ``0``, in which case:
* Under `Job Server Integration`_, parallelism is limited by
available job tokens.
* Otherwise, if the value is omitted, parallelism is limited
by the number of processors, or 2, whichever is larger.
* Otherwise, if the value is ``0``, parallelism is unbounded.
This option may instead be specified by the :envvar:`CTEST_PARALLEL_LEVEL`
environment variable.
This option can be used with the :prop_test:`PROCESSORS` test property.
See `Label and Subproject Summary`_.
See the `Label and Subproject Summary`_.
.. option:: --resource-spec-file <file>
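
As a quick sketch (hypothetical wrapper script; the build directory is a placeholder), the three accepted spellings of the option described above can be driven from a CMake script:

  # Hypothetical wrapper; the build directory path is a placeholder.
  set(build_dir "/path/to/build")
  execute_process(COMMAND ctest --test-dir "${build_dir}" -j)    # no value: default level
  execute_process(COMMAND ctest --test-dir "${build_dir}" -j 0)  # 0: unbounded parallelism
  execute_process(COMMAND ctest --test-dir "${build_dir}" -j 4)  # at most 4 tests at once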

View File

@@ -150,6 +150,12 @@ CTest
* :manual:`ctest(1)` now supports :ref:`job server integration
<ctest-job-server-integration>` on POSIX systems.
* The :option:`ctest -j` option may now be given without a value to let
ctest choose a default level of parallelism, or with ``0`` to let ctest
use unbounded parallelism. The corresponding :envvar:`CTEST_PARALLEL_LEVEL`
environment variable, if set to the empty string, is now equivalent to
passing ``-j`` with no value.
* The :command:`ctest_test` command gained options
``INCLUDE_FROM_FILE`` and ``EXCLUDE_FROM_FILE`` to run or exclude
tests named in a file.

View File

@@ -43,6 +43,17 @@
#include "cmUVJobServerClient.h"
#include "cmWorkingDirectory.h"
namespace {
// For unspecified parallelism, limit to the number of processors,
// but with a minimum greater than 1 so there is some parallelism.
constexpr unsigned long kParallelLevelMinimum = 2u;
// For "unbounded" parallelism, limit to a very high value.
// Under a job server, parallelism is effectively limited
// only by available job server tokens.
constexpr unsigned long kParallelLevelUnbounded = 0x10000u;
}
namespace cmsys {
class RegularExpression;
}
@@ -66,18 +77,14 @@ private:
cmCTestMultiProcessHandler* Handler;
};
cmCTestMultiProcessHandler::cmCTestMultiProcessHandler()
cmCTestMultiProcessHandler::cmCTestMultiProcessHandler(
cmCTest* ctest, cmCTestTestHandler* handler)
: CTest(ctest)
, TestHandler(handler)
, ProcessorsAvailable(cmAffinity::GetProcessorsAvailable())
, HaveAffinity(this->ProcessorsAvailable.size())
, ParallelLevelDefault(kParallelLevelMinimum)
{
this->ParallelLevel = 1;
this->TestLoad = 0;
this->FakeLoadForTesting = 0;
this->Completed = 0;
this->RunningCount = 0;
this->ProcessorsAvailable = cmAffinity::GetProcessorsAvailable();
this->HaveAffinity = this->ProcessorsAvailable.size();
this->HasCycles = false;
this->HasInvalidGeneratedResourceSpec = false;
this->SerialTestRunning = false;
}
cmCTestMultiProcessHandler::~cmCTestMultiProcessHandler() = default;
@@ -102,9 +109,43 @@ void cmCTestMultiProcessHandler::SetTests(TestMap tests,
}
// Set the max number of tests that can be run at the same time.
void cmCTestMultiProcessHandler::SetParallelLevel(size_t level)
void cmCTestMultiProcessHandler::SetParallelLevel(cm::optional<size_t> level)
{
this->ParallelLevel = level < 1 ? 1 : level;
this->ParallelLevel = level;
if (!this->ParallelLevel) {
// '-j' was given with no value. Limit by number of processors.
cmsys::SystemInformation info;
info.RunCPUCheck();
unsigned long processorCount = info.GetNumberOfLogicalCPU();
if (cm::optional<std::string> fakeProcessorCount =
cmSystemTools::GetEnvVar(
"__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING")) {
unsigned long pc = 0;
if (cmStrToULong(*fakeProcessorCount, &pc)) {
processorCount = pc;
} else {
cmSystemTools::Error("Failed to parse fake processor count: " +
*fakeProcessorCount);
}
}
this->ParallelLevelDefault =
std::max(kParallelLevelMinimum, processorCount);
}
}
size_t cmCTestMultiProcessHandler::GetParallelLevel() const
{
if ((this->ParallelLevel && *this->ParallelLevel == 0) ||
(!this->ParallelLevel && this->JobServerClient)) {
return kParallelLevelUnbounded;
}
if (this->ParallelLevel) {
return *this->ParallelLevel;
}
return this->ParallelLevelDefault;
}
void cmCTestMultiProcessHandler::SetTestLoad(unsigned long load)
@@ -451,10 +492,11 @@ void cmCTestMultiProcessHandler::UnlockResources(int index)
inline size_t cmCTestMultiProcessHandler::GetProcessorsUsed(int test)
{
size_t processors = static_cast<int>(this->Properties[test]->Processors);
size_t const parallelLevel = this->GetParallelLevel();
// If processors setting is set higher than the -j
// setting, we default to using all of the process slots.
if (processors > this->ParallelLevel) {
processors = this->ParallelLevel;
if (processors > parallelLevel) {
processors = parallelLevel;
}
// Cap tests that want affinity to the maximum affinity available.
if (this->HaveAffinity && processors > this->HaveAffinity &&
@@ -508,8 +550,9 @@ void cmCTestMultiProcessHandler::StartNextTests()
size_t numToStart = 0;
if (this->RunningCount < this->ParallelLevel) {
numToStart = this->ParallelLevel - this->RunningCount;
size_t const parallelLevel = this->GetParallelLevel();
if (this->RunningCount < parallelLevel) {
numToStart = parallelLevel - this->RunningCount;
}
if (numToStart == 0) {
@@ -523,7 +566,7 @@ void cmCTestMultiProcessHandler::StartNextTests()
}
bool allTestsFailedTestLoadCheck = false;
size_t minProcessorsRequired = this->ParallelLevel;
size_t minProcessorsRequired = this->GetParallelLevel();
std::string testWithMinProcessors;
cmsys::SystemInformation info;
@@ -818,7 +861,7 @@ void cmCTestMultiProcessHandler::ReadCostData()
this->Properties[index]->PreviousRuns = prev;
// When not running in parallel mode, don't use cost data
if (this->ParallelLevel > 1 && this->Properties[index] &&
if (this->GetParallelLevel() > 1 && this->Properties[index] &&
this->Properties[index]->Cost == 0) {
this->Properties[index]->Cost = cost;
}
@@ -847,7 +890,7 @@ int cmCTestMultiProcessHandler::SearchByName(std::string const& name)
void cmCTestMultiProcessHandler::CreateTestCostList()
{
if (this->ParallelLevel > 1) {
if (this->GetParallelLevel() > 1) {
this->CreateParallelTestCostList();
} else {
this->CreateSerialTestCostList();
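
For readers skimming the hunk above, the resolution performed by SetParallelLevel()/GetParallelLevel() when a level was given can be mirrored, purely as an illustrative sketch in CMake script rather than the project's C++, as follows (the function name is hypothetical; 65536 stands in for kParallelLevelUnbounded):

  # Illustrative mirror of the C++ logic above, for the case where -j was passed.
  function(resolve_parallel_level out_var level have_jobserver)
    if(level STREQUAL "")                 # -j given with no value
      if(have_jobserver)
        set(resolved 65536)               # effectively unbounded; job tokens limit it
      else()
        cmake_host_system_information(RESULT nproc QUERY NUMBER_OF_LOGICAL_CORES)
        if(nproc LESS 2)                  # kParallelLevelMinimum
          set(nproc 2)
        endif()
        set(resolved ${nproc})
      endif()
    elseif(level EQUAL 0)                 # explicit 0: unbounded
      set(resolved 65536)
    else()
      set(resolved ${level})              # explicit positive level
    endif()
    set(${out_var} ${resolved} PARENT_SCOPE)
  endfunction()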

View File

@@ -58,12 +58,12 @@ public:
unsigned int Slots;
};
cmCTestMultiProcessHandler();
cmCTestMultiProcessHandler(cmCTest* ctest, cmCTestTestHandler* handler);
virtual ~cmCTestMultiProcessHandler();
// Set the tests
void SetTests(TestMap tests, PropertiesMap properties);
// Set the max number of tests that can be run at the same time.
void SetParallelLevel(size_t);
void SetParallelLevel(cm::optional<size_t> level);
void SetTestLoad(unsigned long load);
virtual void RunTests();
void PrintOutputAsJson();
@@ -81,13 +81,6 @@ public:
this->TestResults = r;
}
void SetCTest(cmCTest* ctest) { this->CTest = ctest; }
void SetTestHandler(cmCTestTestHandler* handler)
{
this->TestHandler = handler;
}
cmCTestTestHandler* GetTestHandler() { return this->TestHandler; }
void SetRepeatMode(cmCTest::Repeat mode, int count)
@@ -171,22 +164,26 @@ protected:
bool InitResourceAllocator(std::string& error);
bool CheckGeneratedResourceSpec();
private:
cmCTest* CTest;
cmCTestTestHandler* TestHandler;
bool UseResourceSpec = false;
cmCTestResourceSpec ResourceSpec;
std::string ResourceSpecFile;
std::string ResourceSpecSetupFixture;
cm::optional<std::size_t> ResourceSpecSetupTest;
bool HasInvalidGeneratedResourceSpec;
bool HasInvalidGeneratedResourceSpec = false;
// Tests pending selection to start. They may have dependencies.
TestMap PendingTests;
// List of pending test indexes, ordered by cost.
std::list<int> OrderedTests;
// Total number of tests we'll be running
size_t Total;
size_t Total = 0;
// Number of tests that are complete
size_t Completed;
size_t RunningCount;
size_t Completed = 0;
size_t RunningCount = 0;
std::set<size_t> ProcessorsAvailable;
size_t HaveAffinity;
bool StopTimePassed = false;
@@ -204,7 +201,15 @@ protected:
ResourceAvailabilityErrors;
cmCTestResourceAllocator ResourceAllocator;
std::vector<cmCTestTestHandler::cmCTestTestResult>* TestResults;
size_t ParallelLevel; // max number of process that can be run at once
// Get the maximum number of processors that may be used at once.
size_t GetParallelLevel() const;
// With no '-j' option, default to serial testing.
cm::optional<size_t> ParallelLevel = 1;
// Fallback parallelism limit when '-j' is given with no value.
size_t ParallelLevelDefault;
// 'make' jobserver client. If connected, we acquire a token
// for each test before running its process.
@@ -214,16 +219,14 @@ protected:
// Callback invoked when a token is received.
void JobServerReceivedToken();
unsigned long TestLoad;
unsigned long FakeLoadForTesting;
unsigned long TestLoad = 0;
unsigned long FakeLoadForTesting = 0;
cm::uv_loop_ptr Loop;
cm::uv_idle_ptr StartNextTestsOnIdle_;
cm::uv_timer_ptr StartNextTestsOnTimer_;
cmCTestTestHandler* TestHandler;
cmCTest* CTest;
bool HasCycles;
bool HasCycles = false;
cmCTest::Repeat RepeatMode = cmCTest::Repeat::Never;
int RepeatCount = 1;
bool Quiet;
bool SerialTestRunning;
bool Quiet = false;
bool SerialTestRunning = false;
};

View File

@@ -105,8 +105,8 @@ cmCTestGenericHandler* cmCTestTestCommand::InitializeHandler()
if (this->StopOnFailure) {
handler->SetOption("StopOnFailure", "ON");
}
if (!this->ParallelLevel.empty()) {
handler->SetOption("ParallelLevel", this->ParallelLevel);
if (this->ParallelLevel) {
handler->SetOption("ParallelLevel", *this->ParallelLevel);
}
if (!this->Repeat.empty()) {
handler->SetOption("Repeat", this->Repeat);

View File

@@ -8,7 +8,9 @@
#include <utility>
#include <cm/memory>
#include <cm/optional>
#include "cmArgumentParserTypes.h"
#include "cmCTestHandlerCommand.h"
#include "cmCommand.h"
@@ -56,7 +58,7 @@ protected:
std::string ExcludeFixture;
std::string ExcludeFixtureSetup;
std::string ExcludeFixtureCleanup;
std::string ParallelLevel;
cm::optional<ArgumentParser::Maybe<std::string>> ParallelLevel;
std::string Repeat;
std::string ScheduleRandom;
std::string StopTime;

View File

@@ -550,9 +550,21 @@ bool cmCTestTestHandler::ProcessOptions()
return false;
}
}
if (this->GetOption("ParallelLevel")) {
this->CTest->SetParallelLevel(
std::stoi(*this->GetOption("ParallelLevel")));
if (cmValue parallelLevel = this->GetOption("ParallelLevel")) {
if (parallelLevel.IsEmpty()) {
// An empty value tells ctest to choose a default.
this->CTest->SetParallelLevel(cm::nullopt);
} else {
// A non-empty value must be a non-negative integer.
unsigned long plevel = 0;
if (!cmStrToULong(*parallelLevel, &plevel)) {
cmCTestLog(this->CTest, ERROR_MESSAGE,
"ParallelLevel invalid value: " << *parallelLevel
<< std::endl);
return false;
}
this->CTest->SetParallelLevel(plevel);
}
}
if (this->GetOption("StopOnFailure")) {
@@ -1361,10 +1373,9 @@ bool cmCTestTestHandler::ProcessDirectory(std::vector<std::string>& passed,
this->StartTestTime = std::chrono::system_clock::now();
auto elapsed_time_start = std::chrono::steady_clock::now();
auto parallel = cm::make_unique<cmCTestMultiProcessHandler>();
parallel->SetCTest(this->CTest);
auto parallel =
cm::make_unique<cmCTestMultiProcessHandler>(this->CTest, this);
parallel->SetParallelLevel(this->CTest->GetParallelLevel());
parallel->SetTestHandler(this);
if (this->RepeatMode != cmCTest::Repeat::Never) {
parallel->SetRepeatMode(this->RepeatMode, this->RepeatCount);
} else {

View File

@@ -179,7 +179,7 @@ struct cmCTest::Private
int MaxTestNameWidth = 30;
int ParallelLevel = 1;
cm::optional<size_t> ParallelLevel = 1;
bool ParallelLevelSetInCli = false;
unsigned long TestLoad = 0;
@@ -380,14 +380,14 @@ cmCTest::cmCTest()
cmCTest::~cmCTest() = default;
int cmCTest::GetParallelLevel() const
cm::optional<size_t> cmCTest::GetParallelLevel() const
{
return this->Impl->ParallelLevel;
}
void cmCTest::SetParallelLevel(int level)
void cmCTest::SetParallelLevel(cm::optional<size_t> level)
{
this->Impl->ParallelLevel = level < 1 ? 1 : level;
this->Impl->ParallelLevel = level;
}
unsigned long cmCTest::GetTestLoad() const
@@ -1892,14 +1892,31 @@ bool cmCTest::HandleCommandLineArguments(size_t& i,
std::string arg = args[i];
if (this->CheckArgument(arg, "-F"_s)) {
this->Impl->Failover = true;
} else if (this->CheckArgument(arg, "-j"_s, "--parallel") &&
i < args.size() - 1) {
i++;
int plevel = atoi(args[i].c_str());
this->SetParallelLevel(plevel);
} else if (this->CheckArgument(arg, "-j"_s, "--parallel")) {
cm::optional<size_t> parallelLevel;
// No value or an empty value tells ctest to choose a default.
if (i + 1 < args.size() && !cmHasLiteralPrefix(args[i + 1], "-")) {
++i;
if (!args[i].empty()) {
// A non-empty value must be a non-negative integer.
unsigned long plevel = 0;
if (!cmStrToULong(args[i], &plevel)) {
errormsg =
cmStrCat("'", arg, "' given invalid value '", args[i], "'");
return false;
}
parallelLevel = plevel;
}
}
this->SetParallelLevel(parallelLevel);
this->Impl->ParallelLevelSetInCli = true;
} else if (cmHasPrefix(arg, "-j")) {
int plevel = atoi(arg.substr(2).c_str());
// The value must be a non-negative integer.
unsigned long plevel = 0;
if (!cmStrToULong(arg.substr(2), &plevel)) {
errormsg = cmStrCat("'", arg, "' given invalid value '", args[i], "'");
return false;
}
this->SetParallelLevel(plevel);
this->Impl->ParallelLevelSetInCli = true;
}
@@ -2799,10 +2816,20 @@ int cmCTest::Run(std::vector<std::string>& args, std::string* output)
// handle CTEST_PARALLEL_LEVEL environment variable
if (!this->Impl->ParallelLevelSetInCli) {
std::string parallel;
if (cmSystemTools::GetEnv("CTEST_PARALLEL_LEVEL", parallel)) {
int plevel = atoi(parallel.c_str());
this->SetParallelLevel(plevel);
if (cm::optional<std::string> parallelEnv =
cmSystemTools::GetEnvVar("CTEST_PARALLEL_LEVEL")) {
if (parallelEnv->empty() ||
parallelEnv->find_first_not_of(" \t") == std::string::npos) {
// An empty value tells ctest to choose a default.
this->SetParallelLevel(cm::nullopt);
} else {
// A non-empty value must be a non-negative integer.
// Otherwise, ignore it.
unsigned long plevel = 0;
if (cmStrToULong(*parallelEnv, &plevel)) {
this->SetParallelLevel(plevel);
}
}
}
}

View File

@@ -12,6 +12,7 @@
#include <string>
#include <vector>
#include <cm/optional>
#include <cm/string_view>
#include "cmDuration.h"
@@ -116,8 +117,8 @@ public:
cmDuration GetGlobalTimeout() const;
/** how many test to run at the same time */
int GetParallelLevel() const;
void SetParallelLevel(int);
cm::optional<size_t> GetParallelLevel() const;
void SetParallelLevel(cm::optional<size_t> level);
unsigned long GetTestLoad() const;
void SetTestLoad(unsigned long);

View File

@@ -48,9 +48,9 @@ const cmDocumentationEntry cmDocumentationOptions[] = {
"Truncate 'tail' (default), 'middle' or 'head' of test output once "
"maximum output size is reached" },
{ "-F", "Enable failover." },
{ "-j <jobs>, --parallel <jobs>",
"Run the tests in parallel using the "
"given number of jobs." },
{ "-j [<level>], --parallel [<level>]",
"Run tests in parallel, "
"optionally limited to a given level of parallelism." },
{ "-Q,--quiet", "Make ctest quiet." },
{ "-O <file>, --output-log <file>", "Output to log file" },
{ "--output-junit <file>", "Output test results to JUnit XML file." },

View File

@@ -2806,7 +2806,6 @@ if(BUILD_TESTING)
-S "${CMake_BINARY_DIR}/Tests/CTestCoverageCollectGCOV/test.cmake" -VV
--output-log "${CMake_BINARY_DIR}/Tests/CTestCoverageCollectGCOV/testOut.log"
)
set_property(TEST CTestCoverageCollectGCOV PROPERTY ENVIRONMENT CTEST_PARALLEL_LEVEL=)
configure_file(
"${CMake_SOURCE_DIR}/Tests/CTestTestEmptyBinaryDirectory/test.cmake.in"
@@ -3073,19 +3072,6 @@ if(BUILD_TESTING)
"Test command:.*Working Directory:.*Environment variables:.*foo=bar.*this=that"
)
configure_file(
"${CMake_SOURCE_DIR}/Tests/CTestTestSkipReturnCode/test.cmake.in"
"${CMake_BINARY_DIR}/Tests/CTestTestSkipReturnCode/test.cmake"
@ONLY ESCAPE_QUOTES)
add_test(CTestTestSkipReturnCode ${CMAKE_CTEST_COMMAND}
-S "${CMake_BINARY_DIR}/Tests/CTestTestSkipReturnCode/test.cmake" -V
--output-log "${CMake_BINARY_DIR}/Tests/CTestTestSkipReturnCode/testOutput.log"
-C \${CTEST_CONFIGURATION_TYPE}
)
set_tests_properties(CTestTestSkipReturnCode PROPERTIES
PASS_REGULAR_EXPRESSION "CMakeV1 \\.* +Passed.*CMakeV2 \\.+\\*+Skipped")
set_property(TEST CTestTestSkipReturnCode PROPERTY ENVIRONMENT CTEST_PARALLEL_LEVEL=)
ADD_TEST_MACRO(CTestTestSerialInDepends ${CMAKE_CTEST_COMMAND} -j 4
--output-on-failure -C "\${CTestTest_CONFIG}")
@@ -3095,10 +3081,6 @@ if(BUILD_TESTING)
PASS_REGULAR_EXPRESSION "\\*\\*\\*Not Run"
)
ADD_TEST_MACRO(CTestTestSerialOrder ${CMAKE_CTEST_COMMAND}
--output-on-failure -C "\${CTestTest_CONFIG}")
set_property(TEST CTestTestSerialOrder PROPERTY ENVIRONMENT CTEST_PARALLEL_LEVEL=)
if(NOT BORLAND)
set(CTestLimitDashJ_CTEST_OPTIONS --force-new-ctest-process)
add_test_macro(CTestLimitDashJ ${CMAKE_CTEST_COMMAND} -j 4

View File

@@ -1,40 +0,0 @@
cmake_minimum_required(VERSION 3.5)
project(CTestTestSerialOrder)
set(TEST_OUTPUT_FILE "${CMAKE_CURRENT_BINARY_DIR}/test_output.txt")
enable_testing()
function(add_serial_order_test TEST_NAME)
add_test(NAME ${TEST_NAME}
COMMAND ${CMAKE_COMMAND}
"-DTEST_OUTPUT_FILE=${TEST_OUTPUT_FILE}"
"-DTEST_NAME=${TEST_NAME}"
-P "${CMAKE_CURRENT_SOURCE_DIR}/test.cmake"
)
if(ARGC GREATER 1)
set_tests_properties(${TEST_NAME} PROPERTIES ${ARGN})
endif()
endfunction()
add_serial_order_test(initialization COST 1000)
add_serial_order_test(test1)
add_serial_order_test(test2)
add_serial_order_test(test3)
add_serial_order_test(test4 DEPENDS test5)
add_serial_order_test(test5)
set_tests_properties(test5 PROPERTIES DEPENDS "test6;test7b;test7a")
add_serial_order_test(test6 COST -2)
add_serial_order_test(test7a COST -1)
add_serial_order_test(test7b COST -1)
add_serial_order_test(test8 COST 10)
add_serial_order_test(test9 COST 20)
add_serial_order_test(test10 COST 0)
add_serial_order_test(test11)
add_serial_order_test(test12 COST 0)
add_serial_order_test(verification COST -1000)

View File

@@ -1,31 +0,0 @@
list(APPEND EXPECTED_OUTPUT
initialization
test9
test8
test1
test2
test3
test6
test7a
test7b
test5
test4
test10
test11
test12
)
if("${TEST_NAME}" STREQUAL "initialization")
file(WRITE ${TEST_OUTPUT_FILE} "${TEST_NAME}")
elseif("${TEST_NAME}" STREQUAL "verification")
file(READ ${TEST_OUTPUT_FILE} ACTUAL_OUTPUT)
if(NOT "${ACTUAL_OUTPUT}" STREQUAL "${EXPECTED_OUTPUT}")
message(FATAL_ERROR "Actual test order [${ACTUAL_OUTPUT}] differs from expected test order [${EXPECTED_OUTPUT}]")
endif()
else()
file(APPEND ${TEST_OUTPUT_FILE} ";${TEST_NAME}")
endif()

View File

@@ -1,8 +0,0 @@
cmake_minimum_required(VERSION 3.5)
project(CTestTestSkipReturnCode)
include(CTest)
add_test (NAME CMakeV1 COMMAND ${CMAKE_COMMAND} "--version")
add_test (NAME CMakeV2 COMMAND ${CMAKE_COMMAND} "--version")
set_tests_properties(CMakeV2 PROPERTIES SKIP_RETURN_CODE 0)

View File

@@ -1,4 +0,0 @@
set (CTEST_NIGHTLY_START_TIME "21:00:00 EDT")
set(CTEST_DROP_METHOD "http")
set(CTEST_DROP_SITE "open.cdash.org")
set(CTEST_DROP_LOCATION "/submit.php?project=PublicDashboard")

View File

@@ -1,23 +0,0 @@
cmake_minimum_required(VERSION 3.5)
# Settings:
set(CTEST_DASHBOARD_ROOT "@CMake_BINARY_DIR@/Tests/CTestTest")
set(CTEST_SITE "@SITE@")
set(CTEST_BUILD_NAME "CTestTest-@BUILDNAME@-SkipReturnCode")
set(CTEST_SOURCE_DIRECTORY "@CMake_SOURCE_DIR@/Tests/CTestTestSkipReturnCode")
set(CTEST_BINARY_DIRECTORY "@CMake_BINARY_DIR@/Tests/CTestTestSkipReturnCode")
set(CTEST_CVS_COMMAND "@CVSCOMMAND@")
set(CTEST_CMAKE_GENERATOR "@CMAKE_GENERATOR@")
set(CTEST_CMAKE_GENERATOR_PLATFORM "@CMAKE_GENERATOR_PLATFORM@")
set(CTEST_CMAKE_GENERATOR_TOOLSET "@CMAKE_GENERATOR_TOOLSET@")
set(CTEST_BUILD_CONFIGURATION "$ENV{CMAKE_CONFIG_TYPE}")
set(CTEST_COVERAGE_COMMAND "@COVERAGE_COMMAND@")
set(CTEST_NOTES_FILES "${CTEST_SCRIPT_DIRECTORY}/${CTEST_SCRIPT_NAME}")
#CTEST_EMPTY_BINARY_DIRECTORY(${CTEST_BINARY_DIRECTORY})
CTEST_START(Experimental)
CTEST_CONFIGURE(BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)
CTEST_BUILD(BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)
CTEST_TEST(BUILD "${CTEST_BINARY_DIRECTORY}" RETURN_VALUE res)

View File

@@ -0,0 +1,9 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-0
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,7 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-4
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,10 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-N
Test #1: test1
Test #2: test2
Test #3: test3
Test #4: test4
Test #5: test5
Test #6: test6
Total Tests: 6

View File

@@ -0,0 +1 @@
1

View File

@@ -0,0 +1 @@
^CMake Error: '--parallel' given invalid value 'bad'$

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-empty
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,9 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-env-0
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,6 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-env-3
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,4 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-env-bad
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-env-empty
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1 @@
1

View File

@@ -0,0 +1 @@
^CMake Error: '-j' given invalid value 'bad'$

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/CTestCommandLine/Parallel-j
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -244,6 +244,44 @@ add_test(Echo \"${CMAKE_COMMAND}\" -E echo \"EchoTest\")
endfunction()
run_SerialFailed()
function(run_Parallel case)
set(RunCMake_TEST_BINARY_DIR ${RunCMake_BINARY_DIR}/Parallel-${case})
set(RunCMake_TEST_NO_CLEAN 1)
file(REMOVE_RECURSE "${RunCMake_TEST_BINARY_DIR}")
file(MAKE_DIRECTORY "${RunCMake_TEST_BINARY_DIR}")
file(WRITE "${RunCMake_TEST_BINARY_DIR}/CTestTestfile.cmake" "
foreach(i RANGE 1 6)
add_test(test\${i} \"${CMAKE_COMMAND}\" -E true)
endforeach()
")
run_cmake_command(Parallel-${case} ${CMAKE_CTEST_COMMAND} ${ARGN})
endfunction()
# Spoof a number of processors to make these tests predictable.
set(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING} 1)
run_Parallel(bad --parallel bad)
run_Parallel(j-bad -j bad)
set(RunCMake_TEST_RAW_ARGS [[--parallel ""]])
run_Parallel(empty) # With 1 processor, defaults to 2.
unset(RunCMake_TEST_RAW_ARGS)
run_Parallel(j -j) # With 1 processor, defaults to 2.
run_Parallel(0 -j0)
run_Parallel(4 --parallel 4)
run_Parallel(N --parallel -N)
set(ENV{CTEST_PARALLEL_LEVEL} bad)
run_Parallel(env-bad)
if(CMAKE_HOST_WIN32)
set(ENV{CTEST_PARALLEL_LEVEL} " ")
else()
set(ENV{CTEST_PARALLEL_LEVEL} "")
endif()
run_Parallel(env-empty) # With 1 processor, defaults to 2.
set(ENV{CTEST_PARALLEL_LEVEL} 0)
run_Parallel(env-0)
set(ENV{CTEST_PARALLEL_LEVEL} 3)
run_Parallel(env-3)
unset(ENV{CTEST_PARALLEL_LEVEL})
unset(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING})
function(run_TestLoad name load)
set(RunCMake_TEST_BINARY_DIR ${RunCMake_BINARY_DIR}/TestLoad)
set(RunCMake_TEST_NO_CLEAN 1)

View File

@@ -1,11 +1,11 @@
NoPipe:
env MAKEFLAGS= $(CMAKE_CTEST_COMMAND) -j6
env MAKEFLAGS= $(CMAKE_CTEST_COMMAND) -j0
.PHONY: NoPipe
NoTests:
+$(CMAKE_CTEST_COMMAND) -j6 -R NoTests
+$(CMAKE_CTEST_COMMAND) -j -R NoTests
.PHONY: NoTests
Tests:
+$(CMAKE_CTEST_COMMAND) -j6
+$(CMAKE_CTEST_COMMAND) -j
.PHONY: Tests

View File

@@ -90,10 +90,13 @@ function(run_CTestJobServer)
set(RunCMake_TEST_BINARY_DIR ${RunCMake_BINARY_DIR}/CTestJobServer-build)
run_cmake(CTestJobServer)
set(RunCMake_TEST_NO_CLEAN 1)
# Spoof a number of processors to make sure jobserver integration is unbounded.
set(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING} 1)
run_make_rule(CTestJobServer NoPipe 2)
run_make_rule(CTestJobServer NoTests 2)
run_make_rule(CTestJobServer Tests 2)
run_make_rule(CTestJobServer Tests 3)
unset(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING})
endfunction()
# Jobservers are currently only supported by GNU makes, except MSYS2 make

View File

@@ -0,0 +1,9 @@
Test project [^
]*/Tests/RunCMake/ctest_test/Parallel0-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,7 @@
Test project [^
]*/Tests/RunCMake/ctest_test/Parallel4-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1 @@
(-1|255)

View File

@@ -0,0 +1 @@
^ParallelLevel invalid value: bad$

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelEmpty-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,9 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelEnv0-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,6 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelEnv3-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,4 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelEnvBad-build
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelEnvEmpty-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -0,0 +1,5 @@
Test project [^
]*/Tests/RunCMake/ctest_test/ParallelOmit-build
Start [0-9]+: test[0-9]+
Start [0-9]+: test[0-9]+
1/6 Test #[0-9]+: test[0-9]+ ............................ Passed +[0-9.]+ sec

View File

@@ -1,8 +1,10 @@
include(RunCTest)
set(RunCMake_TEST_TIMEOUT 60)
set(CASE_CTEST_TEST_ARGS "")
set(CASE_CTEST_TEST_LOAD "")
set(CASE_CTEST_TEST_RAW_ARGS "")
function(run_ctest_test CASE_NAME)
set(CASE_CTEST_TEST_ARGS "${ARGN}")
@@ -11,6 +13,60 @@ endfunction()
run_ctest_test(TestQuiet QUIET)
set(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING} 4)
set(CASE_CMAKELISTS_SUFFIX_CODE [[
foreach(i RANGE 1 6)
add_test(NAME test${i} COMMAND ${CMAKE_COMMAND} -E true)
endforeach()
set_property(TEST test1 PROPERTY COST -2)
set_property(TEST test2 PROPERTY COST -1)
set_property(TEST test3 PROPERTY COST 0)
set_property(TEST test4 PROPERTY COST 1)
set_property(TEST test5 PROPERTY COST 2)
set_property(TEST test6 PROPERTY COST 3)
set_property(TEST test6 PROPERTY DEPENDS test1)
]])
run_ctest_test(SerialOrder INCLUDE test)
unset(CASE_CMAKELISTS_SUFFIX_CODE)
unset(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING})
set(CASE_CMAKELISTS_SUFFIX_CODE [[
add_test(NAME skip COMMAND ${CMAKE_COMMAND} -E true)
set_property(TEST skip PROPERTY SKIP_RETURN_CODE 0)
]])
run_ctest_test(SkipReturnCode)
unset(CASE_CMAKELISTS_SUFFIX_CODE)
# Spoof a number of processors to make these tests predictable.
set(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING} 1)
set(CASE_CMAKELISTS_SUFFIX_CODE [[
foreach(i RANGE 1 6)
add_test(NAME test${i} COMMAND ${CMAKE_COMMAND} -E true)
endforeach()
]])
run_ctest_test(ParallelBad INCLUDE test PARALLEL_LEVEL bad)
set(CASE_CTEST_TEST_RAW_ARGS "PARALLEL_LEVEL \"\"")
run_ctest_test(ParallelEmpty INCLUDE test) # With 1 processor, defaults to 2.
unset(CASE_CTEST_TEST_RAW_ARGS)
run_ctest_test(ParallelOmit INCLUDE test PARALLEL_LEVEL) # With 1 processor, defaults to 2.
run_ctest_test(Parallel0 INCLUDE test PARALLEL_LEVEL 0)
run_ctest_test(Parallel4 INCLUDE test PARALLEL_LEVEL 4)
set(ENV{CTEST_PARALLEL_LEVEL} bad)
run_ctest_test(ParallelEnvBad INCLUDE test)
if(CMAKE_HOST_WIN32)
set(ENV{CTEST_PARALLEL_LEVEL} " ")
else()
set(ENV{CTEST_PARALLEL_LEVEL} "")
endif()
run_ctest_test(ParallelEnvEmpty INCLUDE test) # With 1 processor, defaults to 2.
set(ENV{CTEST_PARALLEL_LEVEL} 0)
run_ctest_test(ParallelEnv0 INCLUDE test)
set(ENV{CTEST_PARALLEL_LEVEL} 3)
run_ctest_test(ParallelEnv3 INCLUDE test)
unset(ENV{CTEST_PARALLEL_LEVEL})
unset(CASE_CMAKELISTS_SUFFIX_CODE)
unset(ENV{__CTEST_FAKE_PROCESSOR_COUNT_FOR_TESTING})
# Tests for the 'Test Load' feature of ctest
#
# Spoof a load average value to make these tests more reliable.

View File

@@ -0,0 +1,16 @@
Test project [^
]*/Tests/RunCMake/ctest_test/SerialOrder-build
Start 2: test1
1/6 Test #2: test1 ............................ Passed +[0-9.]+ sec
Start 7: test6
2/6 Test #7: test6 ............................ Passed +[0-9.]+ sec
Start 6: test5
3/6 Test #6: test5 ............................ Passed +[0-9.]+ sec
Start 5: test4
4/6 Test #5: test4 ............................ Passed +[0-9.]+ sec
Start 4: test3
5/6 Test #4: test3 ............................ Passed +[0-9.]+ sec
Start 3: test2
6/6 Test #3: test2 ............................ Passed +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 6

View File

@@ -0,0 +1,8 @@
Test project [^
]*/Tests/RunCMake/ctest_test/SkipReturnCode-build
Start 1: RunCMakeVersion
1/2 Test #1: RunCMakeVersion .................. Passed +[0-9.]+ sec
Start 2: skip
2/2 Test #2: skip .............................\*\*\*Skipped +[0-9.]+ sec
+
100% tests passed, 0 tests failed out of 2

View File

@@ -19,5 +19,5 @@ if("@CASE_NAME@" STREQUAL "TestChangingLabels")
ctest_test(${ctest_test_args} INCLUDE_LABEL "^a$")
ctest_test(${ctest_test_args} INCLUDE_LABEL "^b$")
else()
ctest_test(${ctest_test_args})
ctest_test(${ctest_test_args} @CASE_CTEST_TEST_RAW_ARGS@)
endif()