linux_kselftest-next-6.13-rc1

kselftest update for Linux 6.13-rc1
 
 -- timers test - removes duplicate defines
 -- timers test - fixes to improve error reporting
 -- rtc test - adds an rtc alarm status check to the alarm tests
 -- resctrl test - adds array overrun checks to the iMC config parsing code
 -- resctrl test - adds array overflow checks when reading strings
 -- resctrl test - fixes and code reorganization
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPZKym/RZuOCGeA/kCwJExA0NQxwFAmc7o5wACgkQCwJExA0N
 QxyY6A//RD3Cvt/9Zt6jmuqKDnXkfM+Ry5CAzm+YYPegCLj+yVnzdps8Juf91vd/
 KOmE0GaoTMu+Q3NEiNTTTlihYyTT3rA0JJYhAqfQ9b7m9QgMxSaTGhUDXjq83gXU
 ImbauuD1O3Sr84jLibdvWkgGfuhShz3a8ds3DzN+4S7xKMRguFsyYA/v4shugHMv
 X59gnuE2dtIFzHFOWJmTVuU3fyedcCiO6nUeIbaq6OFvz7dLKqMOP20r6YGfHUw+
 oc640OwbijhLRHINmXGUV8d0B/kEkvljTXfqHdLUIzHVgvwHR4eMcWN6ErKa/knB
 Phhm6crhC1CNVk0cFdS/VZweOpIMs2A7oBdvtTs3ANz/7ne1IX39jtyoeONkrcBU
 jh5wIrSuyHLgM+812RCvagRyJ/yMTKkISJFrQDCKTUJTZ2Iq9vDQVSnXBfcYiVLU
 Ff7PCtlir2OtvdO3RuT8pmEeFTxBMmnXwr0tZ4N1YDMX1dE/5DQofY57XWYWXVGc
 usBDpzssZda8K155KJnL9HYTWv4sUwh7I6nF2z95kW6llGFKNtJthqx2sdLNIi27
 PltopCXiKYtrL8HB3YY1+Oh3auL8NYlg49+W2J3zkbRCvRyJduVkPbZorqooQyLC
 8sjZEzlPCj8RvjzQ+nxzl1xS/wwvB/JC8Hsfv9+GOC+XI0HZaNk=
 =TIPs
 -----END PGP SIGNATURE-----

Merge tag 'linux_kselftest-next-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest

Pull kselftest update from Shuah Khan:
 "timer test:
   - remove duplicate defines
   - fixes to improve error reporting

  rtc test:
   - check rtc alarm status in alarm test

  resctrl test:
   - add array overrun checks to the iMC config parsing code and when
     reading strings
   - fixes and code reorganization"

* tag 'linux_kselftest-next-6.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest: (23 commits)
  selftests/resctrl: Replace magic constants used as array size
  selftests/resctrl: Keep results from first test run
  selftests/resctrl: Do not compare performance counters and resctrl at low bandwidth
  selftests/resctrl: Use cache size to determine "fill_buf" buffer size
  selftests/resctrl: Ensure measurements skip initialization of default benchmark
  selftests/resctrl: Make benchmark parameter passing robust
  selftests/resctrl: Remove unused measurement code
  selftests/resctrl: Only support measured read operation
  selftests/resctrl: Remove "once" parameter required to be false
  selftests/resctrl: Make wraparound handling obvious
  selftests/resctrl: Protect against array overflow when reading strings
  selftests/resctrl: Protect against array overrun during iMC config parsing
  selftests/resctrl: Fix memory overflow due to unhandled wraparound
  selftests/resctrl: Print accurate buffer size as part of MBM results
  selftests/resctrl: Make functions only used in same file static
  selftests: Add a test mangling with uc_sigmask
  selftests: Rename sigaltstack to generic signal
  selftest: rtc: Add to check rtc alarm status for alarm related test
  selftests:timers: remove local CLOCKID defines
  selftests: timers: Remove unneeded semicolon
  ...
Committed by Linus Torvalds on 2024-11-20 11:54:39 -08:00 as commit 856385e0c5.
31 changed files with 703 additions and 571 deletions.

View File

@ -31,6 +31,15 @@ kselftest runs as a userspace process. Tests that can be written/run in
userspace may wish to use the `Test Harness`_. Tests that need to be
run in kernel space may wish to use a `Test Module`_.
Documentation on the tests
==========================
For documentation on the kselftests themselves, see:
.. toctree::
testing-devices
Running the selftests (hotplug tests are run in limited mode)
=============================================================

View File

@ -0,0 +1,47 @@
.. SPDX-License-Identifier: GPL-2.0
.. Copyright (c) 2024 Collabora Ltd
=============================
Device testing with kselftest
=============================
There are a few different kselftests available for testing devices generically,
with some overlap in coverage and different requirements. This document aims to
give an overview of each one.
Note: Paths in this document are relative to the kselftest folder
(``tools/testing/selftests``).
Device oriented kselftests:
* Devicetree (``dt``)
* **Coverage**: Probe status for devices described in Devicetree
* **Requirements**: None
* Error logs (``devices/error_logs``)
* **Coverage**: Error (or more critical) log messages presence coming from any
device
* **Requirements**: None
* Discoverable bus (``devices/probe``)
* **Coverage**: Presence and probe status of USB or PCI devices that have been
described in the reference file
* **Requirements**: Manually describe the devices that should be tested in a
YAML reference file (see ``devices/probe/boards/google,spherion.yaml`` for
an example)
* Exist (``devices/exist``)
* **Coverage**: Presence of all devices
* **Requirements**: Generate the reference (see ``devices/exist/README.rst``
for details) on a known-good kernel
Therefore, the suggestion is to enable the error log and devicetree tests on all
(DT-based) platforms, since they don't have any requirements. Then to greatly
improve coverage, generate the reference for each platform and enable the exist
test. The discoverable bus test can be used to verify the probe status of
specific USB or PCI devices, but is probably not worth it for most cases.

View File

@ -91,7 +91,7 @@ TARGETS += rust
TARGETS += sched_ext
TARGETS += seccomp
TARGETS += sgx
TARGETS += sigaltstack
TARGETS += signal
TARGETS += size
TARGETS += sparc64
TARGETS += splice

View File

@ -99,14 +99,13 @@ static int check_results(struct resctrl_val_param *param, size_t span, int no_of
}
/* Field 3 is llc occ resc value */
if (runs > 0)
sum_llc_occu_resc += strtoul(token_array[3], NULL, 0);
sum_llc_occu_resc += strtoul(token_array[3], NULL, 0);
runs++;
}
fclose(fp);
return show_results_info(sum_llc_occu_resc, no_of_bits, span,
MAX_DIFF, MAX_DIFF_PERCENT, runs - 1, true);
MAX_DIFF, MAX_DIFF_PERCENT, runs, true);
}
static void cmt_test_cleanup(void)
@ -116,15 +115,13 @@ static void cmt_test_cleanup(void)
static int cmt_run_test(const struct resctrl_test *test, const struct user_params *uparams)
{
const char * const *cmd = uparams->benchmark_cmd;
const char *new_cmd[BENCHMARK_ARGS];
struct fill_buf_param fill_buf = {};
unsigned long cache_total_size = 0;
int n = uparams->bits ? : 5;
unsigned long long_mask;
char *span_str = NULL;
int count_of_bits;
size_t span;
int ret, i;
int ret;
ret = get_full_cbm("L3", &long_mask);
if (ret)
@ -155,32 +152,26 @@ static int cmt_run_test(const struct resctrl_test *test, const struct user_param
span = cache_portion_size(cache_total_size, param.mask, long_mask);
if (strcmp(cmd[0], "fill_buf") == 0) {
/* Duplicate the command to be able to replace span in it */
for (i = 0; uparams->benchmark_cmd[i]; i++)
new_cmd[i] = uparams->benchmark_cmd[i];
new_cmd[i] = NULL;
ret = asprintf(&span_str, "%zu", span);
if (ret < 0)
return -1;
new_cmd[1] = span_str;
cmd = new_cmd;
if (uparams->fill_buf) {
fill_buf.buf_size = span;
fill_buf.memflush = uparams->fill_buf->memflush;
param.fill_buf = &fill_buf;
} else if (!uparams->benchmark_cmd[0]) {
fill_buf.buf_size = span;
fill_buf.memflush = true;
param.fill_buf = &fill_buf;
}
remove(RESULT_FILE_NAME);
ret = resctrl_val(test, uparams, cmd, &param);
ret = resctrl_val(test, uparams, &param);
if (ret)
goto out;
return ret;
ret = check_results(&param, span, n);
if (ret && (get_vendor() == ARCH_INTEL))
ksft_print_msg("Intel CMT may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
out:
free(span_str);
return ret;
}

View File

@ -88,18 +88,6 @@ static int fill_one_span_read(unsigned char *buf, size_t buf_size)
return sum;
}
static void fill_one_span_write(unsigned char *buf, size_t buf_size)
{
unsigned char *end_ptr = buf + buf_size;
unsigned char *p;
p = buf;
while (p < end_ptr) {
*p = '1';
p += (CL_SIZE / 2);
}
}
void fill_cache_read(unsigned char *buf, size_t buf_size, bool once)
{
int ret = 0;
@ -114,20 +102,11 @@ void fill_cache_read(unsigned char *buf, size_t buf_size, bool once)
*value_sink = ret;
}
static void fill_cache_write(unsigned char *buf, size_t buf_size, bool once)
{
while (1) {
fill_one_span_write(buf, buf_size);
if (once)
break;
}
}
unsigned char *alloc_buffer(size_t buf_size, int memflush)
unsigned char *alloc_buffer(size_t buf_size, bool memflush)
{
void *buf = NULL;
uint64_t *p64;
size_t s64;
ssize_t s64;
int ret;
ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
@ -151,19 +130,15 @@ unsigned char *alloc_buffer(size_t buf_size, int memflush)
return buf;
}
int run_fill_buf(size_t buf_size, int memflush, int op, bool once)
ssize_t get_fill_buf_size(int cpu_no, const char *cache_type)
{
unsigned char *buf;
unsigned long cache_total_size = 0;
int ret;
buf = alloc_buffer(buf_size, memflush);
if (!buf)
return -1;
ret = get_cache_size(cpu_no, cache_type, &cache_total_size);
if (ret)
return ret;
if (op == 0)
fill_cache_read(buf, buf_size, once);
else
fill_cache_write(buf, buf_size, once);
free(buf);
return 0;
return cache_total_size * 2 > MINIMUM_SPAN ?
cache_total_size * 2 : MINIMUM_SPAN;
}
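
The helper above sizes the default buffer at twice the cache so that reads keep missing in L3, with MINIMUM_SPAN (250 MB, per the resctrl.h hunk below) as a floor. A minimal standalone sketch of that sizing rule, using illustrative cache sizes rather than anything from this series:

#include <stdio.h>

#define MB (1024 * 1024)
#define MINIMUM_SPAN ((size_t)250 * MB)

/* Hypothetical stand-in for get_fill_buf_size(): twice the cache,
 * but never below MINIMUM_SPAN. */
static size_t pick_span(size_t cache_total_size)
{
	return cache_total_size * 2 > MINIMUM_SPAN ?
	       cache_total_size * 2 : MINIMUM_SPAN;
}

int main(void)
{
	/* A 24 MiB L3: 2 * 24 MiB is below the floor, so 250 MB wins. */
	printf("%zu MB\n", pick_span((size_t)24 * MB) / MB);
	/* A 256 MiB cache: 2 * 256 MiB = 512 MiB exceeds the floor. */
	printf("%zu MB\n", pick_span((size_t)256 * MB) / MB);
	return 0;
}
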

View File

@ -21,7 +21,7 @@ static int mba_init(const struct resctrl_val_param *param, int domain_id)
{
int ret;
ret = initialize_mem_bw_imc();
ret = initialize_read_mem_bw_imc();
if (ret)
return ret;
@ -39,7 +39,8 @@ static int mba_setup(const struct resctrl_test *test,
const struct user_params *uparams,
struct resctrl_val_param *p)
{
static int runs_per_allocation, allocation = 100;
static unsigned int allocation = ALLOCATION_MIN;
static int runs_per_allocation;
char allocation_str[64];
int ret;
@ -50,7 +51,7 @@ static int mba_setup(const struct resctrl_test *test,
if (runs_per_allocation++ != 0)
return 0;
if (allocation < ALLOCATION_MIN || allocation > ALLOCATION_MAX)
if (allocation > ALLOCATION_MAX)
return END_OF_TESTS;
sprintf(allocation_str, "%d", allocation);
@ -59,7 +60,7 @@ static int mba_setup(const struct resctrl_test *test,
if (ret < 0)
return ret;
allocation -= ALLOCATION_STEP;
allocation += ALLOCATION_STEP;
return 0;
}
@ -67,13 +68,14 @@ static int mba_setup(const struct resctrl_test *test,
static int mba_measure(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid)
{
return measure_mem_bw(uparams, param, bm_pid, "reads");
return measure_read_mem_bw(uparams, param, bm_pid);
}
static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
{
int allocation, runs;
unsigned int allocation;
bool ret = false;
int runs;
ksft_print_msg("Results are displayed in (MB)\n");
/* Memory bandwidth from 100% down to 10% */
@ -84,18 +86,21 @@ static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
int avg_diff_per;
float avg_diff;
/*
* The first run is discarded due to inaccurate value from
* phase transition.
*/
for (runs = NUM_OF_RUNS * allocation + 1;
for (runs = NUM_OF_RUNS * allocation;
runs < NUM_OF_RUNS * allocation + NUM_OF_RUNS ; runs++) {
sum_bw_imc += bw_imc[runs];
sum_bw_resc += bw_resc[runs];
}
avg_bw_imc = sum_bw_imc / (NUM_OF_RUNS - 1);
avg_bw_resc = sum_bw_resc / (NUM_OF_RUNS - 1);
avg_bw_imc = sum_bw_imc / NUM_OF_RUNS;
avg_bw_resc = sum_bw_resc / NUM_OF_RUNS;
if (avg_bw_imc < THROTTLE_THRESHOLD || avg_bw_resc < THROTTLE_THRESHOLD) {
ksft_print_msg("Bandwidth below threshold (%d MiB). Dropping results from MBA schemata %u.\n",
THROTTLE_THRESHOLD,
ALLOCATION_MIN + ALLOCATION_STEP * allocation);
continue;
}
avg_diff = (float)labs(avg_bw_resc - avg_bw_imc) / avg_bw_imc;
avg_diff_per = (int)(avg_diff * 100);
@ -103,7 +108,7 @@ static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
avg_diff_per > MAX_DIFF_PERCENT ?
"Fail:" : "Pass:",
MAX_DIFF_PERCENT,
ALLOCATION_MAX - ALLOCATION_STEP * allocation);
ALLOCATION_MIN + ALLOCATION_STEP * allocation);
ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
@ -122,8 +127,9 @@ static bool show_mba_info(unsigned long *bw_imc, unsigned long *bw_resc)
static int check_results(void)
{
unsigned long bw_resc[NUM_OF_RUNS * ALLOCATION_MAX / ALLOCATION_STEP];
unsigned long bw_imc[NUM_OF_RUNS * ALLOCATION_MAX / ALLOCATION_STEP];
char *token_array[8], output[] = RESULT_FILE_NAME, temp[512];
unsigned long bw_imc[1024], bw_resc[1024];
int runs;
FILE *fp;
@ -170,11 +176,27 @@ static int mba_run_test(const struct resctrl_test *test, const struct user_param
.setup = mba_setup,
.measure = mba_measure,
};
struct fill_buf_param fill_buf = {};
int ret;
remove(RESULT_FILE_NAME);
ret = resctrl_val(test, uparams, uparams->benchmark_cmd, &param);
if (uparams->fill_buf) {
fill_buf.buf_size = uparams->fill_buf->buf_size;
fill_buf.memflush = uparams->fill_buf->memflush;
param.fill_buf = &fill_buf;
} else if (!uparams->benchmark_cmd[0]) {
ssize_t buf_size;
buf_size = get_fill_buf_size(uparams->cpu, "L3");
if (buf_size < 0)
return buf_size;
fill_buf.buf_size = buf_size;
fill_buf.memflush = true;
param.fill_buf = &fill_buf;
}
ret = resctrl_val(test, uparams, &param);
if (ret)
return ret;

View File

@ -22,17 +22,13 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span)
int runs, ret, avg_diff_per;
float avg_diff = 0;
/*
* Discard the first value which is inaccurate due to monitoring setup
* transition phase.
*/
for (runs = 1; runs < NUM_OF_RUNS ; runs++) {
for (runs = 0; runs < NUM_OF_RUNS; runs++) {
sum_bw_imc += bw_imc[runs];
sum_bw_resc += bw_resc[runs];
}
avg_bw_imc = sum_bw_imc / 4;
avg_bw_resc = sum_bw_resc / 4;
avg_bw_imc = sum_bw_imc / NUM_OF_RUNS;
avg_bw_resc = sum_bw_resc / NUM_OF_RUNS;
avg_diff = (float)labs(avg_bw_resc - avg_bw_imc) / avg_bw_imc;
avg_diff_per = (int)(avg_diff * 100);
@ -40,7 +36,8 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span)
ksft_print_msg("%s Check MBM diff within %d%%\n",
ret ? "Fail:" : "Pass:", MAX_DIFF_PERCENT);
ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
ksft_print_msg("Span (MB): %zu\n", span / MB);
if (span)
ksft_print_msg("Span (MB): %zu\n", span / MB);
ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
ksft_print_msg("avg_bw_resc: %lu\n", avg_bw_resc);
@ -90,7 +87,7 @@ static int mbm_init(const struct resctrl_val_param *param, int domain_id)
{
int ret;
ret = initialize_mem_bw_imc();
ret = initialize_read_mem_bw_imc();
if (ret)
return ret;
@ -121,7 +118,7 @@ static int mbm_setup(const struct resctrl_test *test,
static int mbm_measure(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid)
{
return measure_mem_bw(uparams, param, bm_pid, "reads");
return measure_read_mem_bw(uparams, param, bm_pid);
}
static void mbm_test_cleanup(void)
@ -138,15 +135,31 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param
.setup = mbm_setup,
.measure = mbm_measure,
};
struct fill_buf_param fill_buf = {};
int ret;
remove(RESULT_FILE_NAME);
ret = resctrl_val(test, uparams, uparams->benchmark_cmd, &param);
if (uparams->fill_buf) {
fill_buf.buf_size = uparams->fill_buf->buf_size;
fill_buf.memflush = uparams->fill_buf->memflush;
param.fill_buf = &fill_buf;
} else if (!uparams->benchmark_cmd[0]) {
ssize_t buf_size;
buf_size = get_fill_buf_size(uparams->cpu, "L3");
if (buf_size < 0)
return buf_size;
fill_buf.buf_size = buf_size;
fill_buf.memflush = true;
param.fill_buf = &fill_buf;
}
ret = resctrl_val(test, uparams, &param);
if (ret)
return ret;
ret = check_results(DEFAULT_SPAN);
ret = check_results(param.fill_buf ? param.fill_buf->buf_size : 0);
if (ret && (get_vendor() == ARCH_INTEL))
ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");

View File

@ -41,18 +41,48 @@
#define BENCHMARK_ARGS 64
#define DEFAULT_SPAN (250 * MB)
#define MINIMUM_SPAN (250 * MB)
/*
* Memory bandwidth (in MiB) below which the bandwidth comparisons
* between iMC and resctrl are considered unreliable. For example, RAS
* features or memory performance features that generate memory traffic
* may drive accesses that are counted differently by performance counters
* and MBM respectively, for instance generating "overhead" traffic which
* is not counted against any specific RMID.
*/
#define THROTTLE_THRESHOLD 750
/*
* fill_buf_param: "fill_buf" benchmark parameters
* @buf_size: Size (in bytes) of buffer used in benchmark.
* "fill_buf" allocates and initializes buffer of
* @buf_size. User can change value via command line.
* @memflush: If false the buffer will not be flushed after
* allocation and initialization, otherwise the
* buffer will be flushed. User can change value via
* command line (via integers with 0 interpreted as
* false and anything else as true).
*/
struct fill_buf_param {
size_t buf_size;
bool memflush;
};
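
For illustration only, a self-contained sketch of how a hypothetical "-b fill_buf 268435456 0" request would be expected to land in this struct after parsing (the literal values are invented, not from the patch):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Local mirror of struct fill_buf_param so this sketch is self-contained. */
struct fill_buf_param {
	size_t buf_size;
	bool memflush;
};

int main(void)
{
	/* "-b fill_buf 268435456 0": a 256 MiB buffer, no flush after init. */
	struct fill_buf_param p = {
		.buf_size = 268435456,
		.memflush = false,	/* command-line integer 0 -> false */
	};

	printf("buf_size=%zu memflush=%d\n", p.buf_size, p.memflush);
	return 0;
}
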
/*
* user_params: User supplied parameters
* @cpu: CPU number to which the benchmark will be bound to
* @bits: Number of bits used for cache allocation size
* @benchmark_cmd: Benchmark command to run during (some of the) tests
* @fill_buf: Pointer to user provided parameters for "fill_buf",
* NULL if user did not provide parameters and test
* specific defaults should be used.
*/
struct user_params {
int cpu;
int bits;
const char *benchmark_cmd[BENCHMARK_ARGS];
const struct fill_buf_param *fill_buf;
};
/*
@ -87,21 +117,29 @@ struct resctrl_test {
* @init: Callback function to initialize test environment
* @setup: Callback function to setup per test run environment
* @measure: Callback that performs the measurement (a single test)
* @fill_buf: Parameters for default "fill_buf" benchmark.
* Initialized with user provided parameters, possibly
* adapted to be relevant to the test. If the user
* provides neither "fill_buf" parameters nor a
* replacement benchmark, it is initialized with defaults
* appropriate for the test. NULL if the user provided a
* benchmark.
*/
struct resctrl_val_param {
const char *ctrlgrp;
const char *mongrp;
char filename[64];
unsigned long mask;
int num_of_runs;
int (*init)(const struct resctrl_val_param *param,
int domain_id);
int (*setup)(const struct resctrl_test *test,
const struct user_params *uparams,
struct resctrl_val_param *param);
int (*measure)(const struct user_params *uparams,
struct resctrl_val_param *param,
pid_t bm_pid);
const char *ctrlgrp;
const char *mongrp;
char filename[64];
unsigned long mask;
int num_of_runs;
int (*init)(const struct resctrl_val_param *param,
int domain_id);
int (*setup)(const struct resctrl_test *test,
const struct user_params *uparams,
struct resctrl_val_param *param);
int (*measure)(const struct user_params *uparams,
struct resctrl_val_param *param,
pid_t bm_pid);
struct fill_buf_param *fill_buf;
};
struct perf_event_read {
@ -126,7 +164,6 @@ int filter_dmesg(void);
int get_domain_id(const char *resource, int cpu_no, int *domain_id);
int mount_resctrlfs(void);
int umount_resctrlfs(void);
const char *get_bw_report_type(const char *bw_report);
bool resctrl_resource_exists(const char *resource);
bool resctrl_mon_feature_exists(const char *resource, const char *feature);
bool resource_info_file_exists(const char *resource, const char *file);
@ -139,19 +176,17 @@ int write_schemata(const char *ctrlgrp, char *schemata, int cpu_no,
int write_bm_pid_to_resctrl(pid_t bm_pid, const char *ctrlgrp, const char *mongrp);
int perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu,
int group_fd, unsigned long flags);
unsigned char *alloc_buffer(size_t buf_size, int memflush);
unsigned char *alloc_buffer(size_t buf_size, bool memflush);
void mem_flush(unsigned char *buf, size_t buf_size);
void fill_cache_read(unsigned char *buf, size_t buf_size, bool once);
int run_fill_buf(size_t buf_size, int memflush, int op, bool once);
int initialize_mem_bw_imc(void);
int measure_mem_bw(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid,
const char *bw_report);
ssize_t get_fill_buf_size(int cpu_no, const char *cache_type);
int initialize_read_mem_bw_imc(void);
int measure_read_mem_bw(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid);
void initialize_mem_bw_resctrl(const struct resctrl_val_param *param,
int domain_id);
int resctrl_val(const struct resctrl_test *test,
const struct user_params *uparams,
const char * const *benchmark_cmd,
struct resctrl_val_param *param);
unsigned long create_bit_mask(unsigned int start, unsigned int len);
unsigned int count_contiguous_bits(unsigned long val, unsigned int *start);

View File

@ -148,6 +148,78 @@ cleanup:
test_cleanup(test);
}
/*
* Allocate and initialize a struct fill_buf_param with user provided
* (via "-b fill_buf <fill_buf parameters>") parameters.
*
* Use defaults (that may not be appropriate for all tests) for any
* fill_buf parameters omitted by the user.
*
* Historically it may have been possible for user space to provide
* additional parameters, "operation" ("read" vs "write") in
* benchmark_cmd[3] and "once" (run "once" or until terminated) in
* benchmark_cmd[4]. Changing these parameters have never been
* supported with the default of "read" operation and running until
* terminated built into the tests. Any unsupported values for
* (original) "fill_buf" parameters are treated as failure.
*
* Return: On failure, forcibly exits the test on any parsing failure,
* returns NULL if no parsing needed (user did not actually provide
* "-b fill_buf").
* On success, returns pointer to newly allocated and fully
* initialized struct fill_buf_param that caller must free.
*/
static struct fill_buf_param *alloc_fill_buf_param(struct user_params *uparams)
{
struct fill_buf_param *fill_param = NULL;
char *endptr = NULL;
if (!uparams->benchmark_cmd[0] || strcmp(uparams->benchmark_cmd[0], "fill_buf"))
return NULL;
fill_param = malloc(sizeof(*fill_param));
if (!fill_param)
ksft_exit_skip("Unable to allocate memory for fill_buf parameters.\n");
if (uparams->benchmark_cmd[1] && *uparams->benchmark_cmd[1] != '\0') {
errno = 0;
fill_param->buf_size = strtoul(uparams->benchmark_cmd[1], &endptr, 10);
if (errno || *endptr != '\0') {
free(fill_param);
ksft_exit_skip("Unable to parse benchmark buffer size.\n");
}
} else {
fill_param->buf_size = MINIMUM_SPAN;
}
if (uparams->benchmark_cmd[2] && *uparams->benchmark_cmd[2] != '\0') {
errno = 0;
fill_param->memflush = strtol(uparams->benchmark_cmd[2], &endptr, 10) != 0;
if (errno || *endptr != '\0') {
free(fill_param);
ksft_exit_skip("Unable to parse benchmark memflush parameter.\n");
}
} else {
fill_param->memflush = true;
}
if (uparams->benchmark_cmd[3] && *uparams->benchmark_cmd[3] != '\0') {
if (strcmp(uparams->benchmark_cmd[3], "0")) {
free(fill_param);
ksft_exit_skip("Only read operations supported.\n");
}
}
if (uparams->benchmark_cmd[4] && *uparams->benchmark_cmd[4] != '\0') {
if (strcmp(uparams->benchmark_cmd[4], "false")) {
free(fill_param);
ksft_exit_skip("fill_buf is required to run until termination.\n");
}
}
return fill_param;
}
static void init_user_params(struct user_params *uparams)
{
memset(uparams, 0, sizeof(*uparams));
@ -158,11 +230,11 @@ static void init_user_params(struct user_params *uparams)
int main(int argc, char **argv)
{
struct fill_buf_param *fill_param = NULL;
int tests = ARRAY_SIZE(resctrl_tests);
bool test_param_seen = false;
struct user_params uparams;
char *span_str = NULL;
int ret, c, i;
int c, i;
init_user_params(&uparams);
@ -239,6 +311,10 @@ int main(int argc, char **argv)
}
last_arg:
fill_param = alloc_fill_buf_param(&uparams);
if (fill_param)
uparams.fill_buf = fill_param;
ksft_print_header();
/*
@ -257,24 +333,11 @@ last_arg:
filter_dmesg();
if (!uparams.benchmark_cmd[0]) {
/* If no benchmark is given by "-b" argument, use fill_buf. */
uparams.benchmark_cmd[0] = "fill_buf";
ret = asprintf(&span_str, "%u", DEFAULT_SPAN);
if (ret < 0)
ksft_exit_fail_msg("Out of memory!\n");
uparams.benchmark_cmd[1] = span_str;
uparams.benchmark_cmd[2] = "1";
uparams.benchmark_cmd[3] = "0";
uparams.benchmark_cmd[4] = "false";
uparams.benchmark_cmd[5] = NULL;
}
ksft_set_plan(tests);
for (i = 0; i < ARRAY_SIZE(resctrl_tests); i++)
run_single_test(resctrl_tests[i], &uparams);
free(span_str);
free(fill_param);
ksft_finished();
}

View File

@ -12,13 +12,10 @@
#define UNCORE_IMC "uncore_imc"
#define READ_FILE_NAME "events/cas_count_read"
#define WRITE_FILE_NAME "events/cas_count_write"
#define DYN_PMU_PATH "/sys/bus/event_source/devices"
#define SCALE 0.00006103515625
#define MAX_IMCS 20
#define MAX_TOKENS 5
#define READ 0
#define WRITE 1
#define CON_MBM_LOCAL_BYTES_PATH \
"%s/%s/mon_data/mon_L3_%02d/mbm_local_bytes"
@ -41,85 +38,71 @@ struct imc_counter_config {
static char mbm_total_path[1024];
static int imcs;
static struct imc_counter_config imc_counters_config[MAX_IMCS][2];
static struct imc_counter_config imc_counters_config[MAX_IMCS];
static const struct resctrl_test *current_test;
void membw_initialize_perf_event_attr(int i, int j)
static void read_mem_bw_initialize_perf_event_attr(int i)
{
memset(&imc_counters_config[i][j].pe, 0,
memset(&imc_counters_config[i].pe, 0,
sizeof(struct perf_event_attr));
imc_counters_config[i][j].pe.type = imc_counters_config[i][j].type;
imc_counters_config[i][j].pe.size = sizeof(struct perf_event_attr);
imc_counters_config[i][j].pe.disabled = 1;
imc_counters_config[i][j].pe.inherit = 1;
imc_counters_config[i][j].pe.exclude_guest = 0;
imc_counters_config[i][j].pe.config =
imc_counters_config[i][j].umask << 8 |
imc_counters_config[i][j].event;
imc_counters_config[i][j].pe.sample_type = PERF_SAMPLE_IDENTIFIER;
imc_counters_config[i][j].pe.read_format =
imc_counters_config[i].pe.type = imc_counters_config[i].type;
imc_counters_config[i].pe.size = sizeof(struct perf_event_attr);
imc_counters_config[i].pe.disabled = 1;
imc_counters_config[i].pe.inherit = 1;
imc_counters_config[i].pe.exclude_guest = 0;
imc_counters_config[i].pe.config =
imc_counters_config[i].umask << 8 |
imc_counters_config[i].event;
imc_counters_config[i].pe.sample_type = PERF_SAMPLE_IDENTIFIER;
imc_counters_config[i].pe.read_format =
PERF_FORMAT_TOTAL_TIME_ENABLED | PERF_FORMAT_TOTAL_TIME_RUNNING;
}
void membw_ioctl_perf_event_ioc_reset_enable(int i, int j)
static void read_mem_bw_ioctl_perf_event_ioc_reset_enable(int i)
{
ioctl(imc_counters_config[i][j].fd, PERF_EVENT_IOC_RESET, 0);
ioctl(imc_counters_config[i][j].fd, PERF_EVENT_IOC_ENABLE, 0);
ioctl(imc_counters_config[i].fd, PERF_EVENT_IOC_RESET, 0);
ioctl(imc_counters_config[i].fd, PERF_EVENT_IOC_ENABLE, 0);
}
void membw_ioctl_perf_event_ioc_disable(int i, int j)
static void read_mem_bw_ioctl_perf_event_ioc_disable(int i)
{
ioctl(imc_counters_config[i][j].fd, PERF_EVENT_IOC_DISABLE, 0);
ioctl(imc_counters_config[i].fd, PERF_EVENT_IOC_DISABLE, 0);
}
/*
* get_event_and_umask: Parse config into event and umask
* get_read_event_and_umask: Parse config into event and umask
* @cas_count_cfg: Config
* @count: iMC number
* @op: Operation (read/write)
*/
void get_event_and_umask(char *cas_count_cfg, int count, bool op)
static void get_read_event_and_umask(char *cas_count_cfg, int count)
{
char *token[MAX_TOKENS];
int i = 0;
strcat(cas_count_cfg, ",");
token[0] = strtok(cas_count_cfg, "=,");
for (i = 1; i < MAX_TOKENS; i++)
token[i] = strtok(NULL, "=,");
for (i = 0; i < MAX_TOKENS; i++) {
for (i = 0; i < MAX_TOKENS - 1; i++) {
if (!token[i])
break;
if (strcmp(token[i], "event") == 0) {
if (op == READ)
imc_counters_config[count][READ].event =
strtol(token[i + 1], NULL, 16);
else
imc_counters_config[count][WRITE].event =
strtol(token[i + 1], NULL, 16);
}
if (strcmp(token[i], "umask") == 0) {
if (op == READ)
imc_counters_config[count][READ].umask =
strtol(token[i + 1], NULL, 16);
else
imc_counters_config[count][WRITE].umask =
strtol(token[i + 1], NULL, 16);
}
if (strcmp(token[i], "event") == 0)
imc_counters_config[count].event = strtol(token[i + 1], NULL, 16);
if (strcmp(token[i], "umask") == 0)
imc_counters_config[count].umask = strtol(token[i + 1], NULL, 16);
}
}
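
The cas_count_read file parsed here typically holds a string of the form event=0xNN,umask=0xNN; the values below are illustrative. A self-contained sketch of the same strtok()-based tokenization:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_TOKENS 5

int main(void)
{
	/* Illustrative config; the real string is read from sysfs. A
	 * trailing "," is included, as in the selftest, so strtok()
	 * terminates cleanly. */
	char cas_count_cfg[] = "event=0x04,umask=0x03,";
	char *token[MAX_TOKENS];
	long event = 0, umask = 0;
	int i;

	token[0] = strtok(cas_count_cfg, "=,");
	for (i = 1; i < MAX_TOKENS; i++)
		token[i] = strtok(NULL, "=,");

	/* Stop at MAX_TOKENS - 1 so token[i + 1] stays in bounds. */
	for (i = 0; i < MAX_TOKENS - 1; i++) {
		if (!token[i])
			break;
		if (strcmp(token[i], "event") == 0)
			event = strtol(token[i + 1], NULL, 16);
		if (strcmp(token[i], "umask") == 0)
			umask = strtol(token[i + 1], NULL, 16);
	}

	printf("event=0x%lx umask=0x%lx\n", event, umask);
	return 0;
}
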
static int open_perf_event(int i, int cpu_no, int j)
static int open_perf_read_event(int i, int cpu_no)
{
imc_counters_config[i][j].fd =
perf_event_open(&imc_counters_config[i][j].pe, -1, cpu_no, -1,
imc_counters_config[i].fd =
perf_event_open(&imc_counters_config[i].pe, -1, cpu_no, -1,
PERF_FLAG_FD_CLOEXEC);
if (imc_counters_config[i][j].fd == -1) {
if (imc_counters_config[i].fd == -1) {
fprintf(stderr, "Error opening leader %llx\n",
imc_counters_config[i][j].pe.config);
imc_counters_config[i].pe.config);
return -1;
}
@ -127,7 +110,7 @@ static int open_perf_event(int i, int cpu_no, int j)
return 0;
}
/* Get type and config (read and write) of an iMC counter */
/* Get type and config of an iMC counter's read event. */
static int read_from_imc_dir(char *imc_dir, int count)
{
char cas_count_cfg[1024], imc_counter_cfg[1024], imc_counter_type[1024];
@ -141,7 +124,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
return -1;
}
if (fscanf(fp, "%u", &imc_counters_config[count][READ].type) <= 0) {
if (fscanf(fp, "%u", &imc_counters_config[count].type) <= 0) {
ksft_perror("Could not get iMC type");
fclose(fp);
@ -149,9 +132,6 @@ static int read_from_imc_dir(char *imc_dir, int count)
}
fclose(fp);
imc_counters_config[count][WRITE].type =
imc_counters_config[count][READ].type;
/* Get read config */
sprintf(imc_counter_cfg, "%s%s", imc_dir, READ_FILE_NAME);
fp = fopen(imc_counter_cfg, "r");
@ -160,7 +140,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
return -1;
}
if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
ksft_perror("Could not get iMC cas count read");
fclose(fp);
@ -168,34 +148,19 @@ static int read_from_imc_dir(char *imc_dir, int count)
}
fclose(fp);
get_event_and_umask(cas_count_cfg, count, READ);
/* Get write config */
sprintf(imc_counter_cfg, "%s%s", imc_dir, WRITE_FILE_NAME);
fp = fopen(imc_counter_cfg, "r");
if (!fp) {
ksft_perror("Failed to open iMC config file");
return -1;
}
if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
ksft_perror("Could not get iMC cas count write");
fclose(fp);
return -1;
}
fclose(fp);
get_event_and_umask(cas_count_cfg, count, WRITE);
get_read_event_and_umask(cas_count_cfg, count);
return 0;
}
/*
* A system can have 'n' number of iMC (Integrated Memory Controller)
* counters, get that 'n'. For each iMC counter get it's type and config.
* Also, each counter has two configs, one for read and the other for write.
* A config again has two parts, event and umask.
* counters, get that 'n'. Discover the properties of the available
* counters in support of needed performance measurement via perf.
* For each iMC counter get its type and config. Also obtain each
* counter's event and umask for the memory read events that will be
* measured.
*
* Enumerate all these details into an array of structures.
*
* Return: >= 0 on success. < 0 on failure.
@ -256,55 +221,46 @@ static int num_of_imcs(void)
return count;
}
int initialize_mem_bw_imc(void)
int initialize_read_mem_bw_imc(void)
{
int imc, j;
int imc;
imcs = num_of_imcs();
if (imcs <= 0)
return imcs;
/* Initialize perf_event_attr structures for all iMC's */
for (imc = 0; imc < imcs; imc++) {
for (j = 0; j < 2; j++)
membw_initialize_perf_event_attr(imc, j);
}
for (imc = 0; imc < imcs; imc++)
read_mem_bw_initialize_perf_event_attr(imc);
return 0;
}
static void perf_close_imc_mem_bw(void)
static void perf_close_imc_read_mem_bw(void)
{
int mc;
for (mc = 0; mc < imcs; mc++) {
if (imc_counters_config[mc][READ].fd != -1)
close(imc_counters_config[mc][READ].fd);
if (imc_counters_config[mc][WRITE].fd != -1)
close(imc_counters_config[mc][WRITE].fd);
if (imc_counters_config[mc].fd != -1)
close(imc_counters_config[mc].fd);
}
}
/*
* perf_open_imc_mem_bw - Open perf fds for IMCs
* perf_open_imc_read_mem_bw - Open perf fds for IMCs
* @cpu_no: CPU number that the benchmark PID is bound to
*
* Return: = 0 on success. < 0 on failure.
*/
static int perf_open_imc_mem_bw(int cpu_no)
static int perf_open_imc_read_mem_bw(int cpu_no)
{
int imc, ret;
for (imc = 0; imc < imcs; imc++) {
imc_counters_config[imc][READ].fd = -1;
imc_counters_config[imc][WRITE].fd = -1;
}
for (imc = 0; imc < imcs; imc++)
imc_counters_config[imc].fd = -1;
for (imc = 0; imc < imcs; imc++) {
ret = open_perf_event(imc, cpu_no, READ);
if (ret)
goto close_fds;
ret = open_perf_event(imc, cpu_no, WRITE);
ret = open_perf_read_event(imc, cpu_no);
if (ret)
goto close_fds;
}
@ -312,60 +268,52 @@ static int perf_open_imc_mem_bw(int cpu_no)
return 0;
close_fds:
perf_close_imc_mem_bw();
perf_close_imc_read_mem_bw();
return -1;
}
/*
* do_mem_bw_test - Perform memory bandwidth test
* do_imc_read_mem_bw_test - Perform memory bandwidth test
*
* Runs memory bandwidth test over one second period. Also, handles starting
* and stopping of the IMC perf counters around the test.
*/
static void do_imc_mem_bw_test(void)
static void do_imc_read_mem_bw_test(void)
{
int imc;
for (imc = 0; imc < imcs; imc++) {
membw_ioctl_perf_event_ioc_reset_enable(imc, READ);
membw_ioctl_perf_event_ioc_reset_enable(imc, WRITE);
}
for (imc = 0; imc < imcs; imc++)
read_mem_bw_ioctl_perf_event_ioc_reset_enable(imc);
sleep(1);
/* Stop counters after a second to get results (both read and write) */
for (imc = 0; imc < imcs; imc++) {
membw_ioctl_perf_event_ioc_disable(imc, READ);
membw_ioctl_perf_event_ioc_disable(imc, WRITE);
}
/* Stop counters after a second to get results. */
for (imc = 0; imc < imcs; imc++)
read_mem_bw_ioctl_perf_event_ioc_disable(imc);
}
/*
* get_mem_bw_imc - Memory bandwidth as reported by iMC counters
* @bw_report: Bandwidth report type (reads, writes)
* get_read_mem_bw_imc - Memory read bandwidth as reported by iMC counters
*
* Memory bandwidth utilized by a process on a socket can be calculated
* using iMC counters. Perf events are used to read these counters.
* Memory read bandwidth utilized by a process on a socket can be calculated
* using iMC counters' read events. Perf events are used to read these
* counters.
*
* Return: = 0 on success. < 0 on failure.
*/
static int get_mem_bw_imc(const char *bw_report, float *bw_imc)
static int get_read_mem_bw_imc(float *bw_imc)
{
float reads, writes, of_mul_read, of_mul_write;
float reads = 0, of_mul_read = 1;
int imc;
/* Start all iMC counters to log values (both read and write) */
reads = 0, writes = 0, of_mul_read = 1, of_mul_write = 1;
/*
* Get results which are stored in struct type imc_counter_config
* Log read event values from all iMC counters into
* struct imc_counter_config.
* Take overflow into consideration before calculating total bandwidth.
*/
for (imc = 0; imc < imcs; imc++) {
struct imc_counter_config *r =
&imc_counters_config[imc][READ];
struct imc_counter_config *w =
&imc_counters_config[imc][WRITE];
&imc_counters_config[imc];
if (read(r->fd, &r->return_value,
sizeof(struct membw_read_format)) == -1) {
@ -373,12 +321,6 @@ static int get_mem_bw_imc(const char *bw_report, float *bw_imc)
return -1;
}
if (read(w->fd, &w->return_value,
sizeof(struct membw_read_format)) == -1) {
ksft_perror("Couldn't get write bandwidth through iMC");
return -1;
}
__u64 r_time_enabled = r->return_value.time_enabled;
__u64 r_time_running = r->return_value.time_running;
@ -386,27 +328,10 @@ static int get_mem_bw_imc(const char *bw_report, float *bw_imc)
of_mul_read = (float)r_time_enabled /
(float)r_time_running;
__u64 w_time_enabled = w->return_value.time_enabled;
__u64 w_time_running = w->return_value.time_running;
if (w_time_enabled != w_time_running)
of_mul_write = (float)w_time_enabled /
(float)w_time_running;
reads += r->return_value.value * of_mul_read * SCALE;
writes += w->return_value.value * of_mul_write * SCALE;
}
if (strcmp(bw_report, "reads") == 0) {
*bw_imc = reads;
return 0;
}
if (strcmp(bw_report, "writes") == 0) {
*bw_imc = writes;
return 0;
}
*bw_imc = reads + writes;
*bw_imc = reads;
return 0;
}
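
SCALE (0.00006103515625) equals 64 / 2^20, consistent with converting 64-byte CAS-count events into MiB; the time_enabled/time_running ratio compensates for perf counter multiplexing. A worked sketch of that arithmetic with made-up counter values:

#include <stdio.h>

#define SCALE 0.00006103515625	/* 64-byte events -> MiB: 64 / 2^20 */

int main(void)
{
	/* Illustrative values; the real ones come from read() on the
	 * perf fd with PERF_FORMAT_TOTAL_TIME_ENABLED/_RUNNING set. */
	unsigned long long value = 4096000;		/* raw read events */
	unsigned long long time_enabled = 1000000000;	/* ns */
	unsigned long long time_running = 500000000;	/* multiplexed 50% */
	float of_mul = 1;

	if (time_enabled != time_running)
		of_mul = (float)time_enabled / (float)time_running;

	/* 4096000 events * 2 (multiplexing) * 64 B ~= 500 MiB read. */
	printf("read bandwidth: %.1f MiB\n", value * of_mul * SCALE);
	return 0;
}
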
@ -448,7 +373,7 @@ static int get_mem_bw_resctrl(FILE *fp, unsigned long *mbm_total)
return 0;
}
static pid_t bm_pid, ppid;
static pid_t bm_pid;
void ctrlc_handler(int signum, siginfo_t *info, void *ptr)
{
@ -506,13 +431,6 @@ void signal_handler_unregister(void)
}
}
static void parent_exit(pid_t ppid)
{
kill(ppid, SIGKILL);
umount_resctrlfs();
exit(EXIT_FAILURE);
}
/*
* print_results_bw: the memory bandwidth results are stored in a file
* @filename: file that stores the results
@ -552,35 +470,31 @@ static int print_results_bw(char *filename, pid_t bm_pid, float bw_imc,
}
/*
* measure_mem_bw - Measures memory bandwidth numbers while benchmark runs
* measure_read_mem_bw - Measures read memory bandwidth numbers while benchmark runs
* @uparams: User supplied parameters
* @param: Parameters passed to resctrl_val()
* @bm_pid: PID that runs the benchmark
* @bw_report: Bandwidth report type (reads, writes)
*
* Measure memory bandwidth from resctrl and from another source which is
* perf imc value or could be something else if perf imc event is not
* available. Compare the two values to validate resctrl value. It takes
* 1 sec to measure the data.
* resctrl does not distinguish between read and write operations so
* its data includes all memory operations.
*/
int measure_mem_bw(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid,
const char *bw_report)
int measure_read_mem_bw(const struct user_params *uparams,
struct resctrl_val_param *param, pid_t bm_pid)
{
unsigned long bw_resc, bw_resc_start, bw_resc_end;
FILE *mem_bw_fp;
float bw_imc;
int ret;
bw_report = get_bw_report_type(bw_report);
if (!bw_report)
return -1;
mem_bw_fp = open_mem_bw_resctrl(mbm_total_path);
if (!mem_bw_fp)
return -1;
ret = perf_open_imc_mem_bw(uparams->cpu);
ret = perf_open_imc_read_mem_bw(uparams->cpu);
if (ret < 0)
goto close_fp;
@ -590,17 +504,17 @@ int measure_mem_bw(const struct user_params *uparams,
rewind(mem_bw_fp);
do_imc_mem_bw_test();
do_imc_read_mem_bw_test();
ret = get_mem_bw_resctrl(mem_bw_fp, &bw_resc_end);
if (ret < 0)
goto close_imc;
ret = get_mem_bw_imc(bw_report, &bw_imc);
ret = get_read_mem_bw_imc(&bw_imc);
if (ret < 0)
goto close_imc;
perf_close_imc_mem_bw();
perf_close_imc_read_mem_bw();
fclose(mem_bw_fp);
bw_resc = (bw_resc_end - bw_resc_start) / MB;
@ -608,87 +522,30 @@ int measure_mem_bw(const struct user_params *uparams,
return print_results_bw(param->filename, bm_pid, bw_imc, bw_resc);
close_imc:
perf_close_imc_mem_bw();
perf_close_imc_read_mem_bw();
close_fp:
fclose(mem_bw_fp);
return ret;
}
/*
* run_benchmark - Run a specified benchmark or fill_buf (default benchmark)
* in specified signal. Direct benchmark stdio to /dev/null.
* @signum: signal number
* @info: signal info
* @ucontext: user context in signal handling
*/
static void run_benchmark(int signum, siginfo_t *info, void *ucontext)
{
int operation, ret, memflush;
char **benchmark_cmd;
size_t span;
bool once;
FILE *fp;
benchmark_cmd = info->si_ptr;
/*
* Direct stdio of child to /dev/null, so that only parent writes to
* stdio (console)
*/
fp = freopen("/dev/null", "w", stdout);
if (!fp) {
ksft_perror("Unable to direct benchmark status to /dev/null");
parent_exit(ppid);
}
if (strcmp(benchmark_cmd[0], "fill_buf") == 0) {
/* Execute default fill_buf benchmark */
span = strtoul(benchmark_cmd[1], NULL, 10);
memflush = atoi(benchmark_cmd[2]);
operation = atoi(benchmark_cmd[3]);
if (!strcmp(benchmark_cmd[4], "true")) {
once = true;
} else if (!strcmp(benchmark_cmd[4], "false")) {
once = false;
} else {
ksft_print_msg("Invalid once parameter\n");
parent_exit(ppid);
}
if (run_fill_buf(span, memflush, operation, once))
fprintf(stderr, "Error in running fill buffer\n");
} else {
/* Execute specified benchmark */
ret = execvp(benchmark_cmd[0], benchmark_cmd);
if (ret)
ksft_perror("execvp");
}
fclose(stdout);
ksft_print_msg("Unable to run specified benchmark\n");
parent_exit(ppid);
}
/*
* resctrl_val: execute benchmark and measure memory bandwidth on
* the benchmark
* @test: test information structure
* @uparams: user supplied parameters
* @benchmark_cmd: benchmark command and its arguments
* @param: parameters passed to resctrl_val()
*
* Return: 0 when the test was run, < 0 on error.
*/
int resctrl_val(const struct resctrl_test *test,
const struct user_params *uparams,
const char * const *benchmark_cmd,
struct resctrl_val_param *param)
{
struct sigaction sigact;
int ret = 0, pipefd[2];
char pipe_message = 0;
union sigval value;
unsigned char *buf = NULL;
cpu_set_t old_affinity;
int domain_id;
int ret = 0;
pid_t ppid;
if (strcmp(param->filename, "") == 0)
sprintf(param->filename, "stdio");
@ -699,111 +556,65 @@ int resctrl_val(const struct resctrl_test *test,
return ret;
}
/*
* If benchmark wasn't successfully started by child, then child should
* kill parent, so save parent's pid
*/
ppid = getpid();
if (pipe(pipefd)) {
ksft_perror("Unable to create pipe");
return -1;
}
/*
* Fork to start benchmark, save child's pid so that it can be killed
* when needed
*/
fflush(stdout);
bm_pid = fork();
if (bm_pid == -1) {
ksft_perror("Unable to fork");
return -1;
}
if (bm_pid == 0) {
/*
* Mask all signals except SIGUSR1, parent uses SIGUSR1 to
* start benchmark
*/
sigfillset(&sigact.sa_mask);
sigdelset(&sigact.sa_mask, SIGUSR1);
sigact.sa_sigaction = run_benchmark;
sigact.sa_flags = SA_SIGINFO;
/* Register for "SIGUSR1" signal from parent */
if (sigaction(SIGUSR1, &sigact, NULL)) {
ksft_perror("Can't register child for signal");
parent_exit(ppid);
}
/* Tell parent that child is ready */
close(pipefd[0]);
pipe_message = 1;
if (write(pipefd[1], &pipe_message, sizeof(pipe_message)) <
sizeof(pipe_message)) {
ksft_perror("Failed signaling parent process");
close(pipefd[1]);
return -1;
}
close(pipefd[1]);
/* Suspend child until delivery of "SIGUSR1" from parent */
sigsuspend(&sigact.sa_mask);
ksft_perror("Child is done");
parent_exit(ppid);
}
ksft_print_msg("Benchmark PID: %d\n", (int)bm_pid);
/*
* The cast removes constness but nothing mutates benchmark_cmd within
* the context of this process. At the receiving process, it becomes
* argv, which is mutable, on exec() but that's after fork() so it
* doesn't matter for the process running the tests.
*/
value.sival_ptr = (void *)benchmark_cmd;
/* Taskset benchmark to specified cpu */
ret = taskset_benchmark(bm_pid, uparams->cpu, NULL);
/* Taskset test to specified CPU. */
ret = taskset_benchmark(ppid, uparams->cpu, &old_affinity);
if (ret)
goto out;
return ret;
/* Write benchmark to specified control&monitoring grp in resctrl FS */
ret = write_bm_pid_to_resctrl(bm_pid, param->ctrlgrp, param->mongrp);
/* Write test to specified control & monitoring group in resctrl FS. */
ret = write_bm_pid_to_resctrl(ppid, param->ctrlgrp, param->mongrp);
if (ret)
goto out;
goto reset_affinity;
if (param->init) {
ret = param->init(param, domain_id);
if (ret)
goto out;
goto reset_affinity;
}
/* Parent waits for child to be ready. */
close(pipefd[1]);
while (pipe_message != 1) {
if (read(pipefd[0], &pipe_message, sizeof(pipe_message)) <
sizeof(pipe_message)) {
ksft_perror("Failed reading message from child process");
close(pipefd[0]);
goto out;
/*
* If not running user provided benchmark, run the default
* "fill_buf". First phase of "fill_buf" is to prepare the
* buffer that the benchmark will operate on. No measurements
* are needed during this phase, and the prepared memory is
* passed to the next part of the benchmark via copy-on-write,
* so there is no impact on a benchmark that only reads from
* memory.
*/
if (param->fill_buf) {
buf = alloc_buffer(param->fill_buf->buf_size,
param->fill_buf->memflush);
if (!buf) {
ret = -ENOMEM;
goto reset_affinity;
}
}
close(pipefd[0]);
/* Signal child to start benchmark */
if (sigqueue(bm_pid, SIGUSR1, value) == -1) {
ksft_perror("sigqueue SIGUSR1 to child");
ret = -1;
goto out;
fflush(stdout);
bm_pid = fork();
if (bm_pid == -1) {
ret = -errno;
ksft_perror("Unable to fork");
goto free_buf;
}
/* Give benchmark enough time to fully run */
/*
* What needs to be measured runs in a separate process until
* terminated.
*/
if (bm_pid == 0) {
if (param->fill_buf)
fill_cache_read(buf, param->fill_buf->buf_size, false);
else if (uparams->benchmark_cmd[0])
execvp(uparams->benchmark_cmd[0], (char **)uparams->benchmark_cmd);
exit(EXIT_SUCCESS);
}
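
A condensed standalone sketch of the prepare-then-fork pattern the comment above describes (buffer size and stride here are arbitrary): the parent faults in every page, and the read-only child inherits the prepared buffer via copy-on-write without any page being duplicated.

#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t size = 64 * 1024 * 1024, i;
	unsigned char *buf = malloc(size);
	volatile unsigned char sink = 0;
	pid_t pid;

	if (!buf)
		return 1;
	memset(buf, 1, size);	/* parent faults in every page up front */

	pid = fork();
	if (pid == 0) {
		/* The child only reads, so copy-on-write never actually
		 * copies a page; the prepared buffer is shared for free. */
		for (i = 0; i < size; i += 64)
			sink += buf[i];
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	free(buf);
	return 0;
}
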
ksft_print_msg("Benchmark PID: %d\n", (int)bm_pid);
/* Give benchmark enough time to fully run. */
sleep(1);
/* Test runs until the callback setup() tells the test to stop. */
@ -821,8 +632,10 @@ int resctrl_val(const struct resctrl_test *test,
break;
}
out:
kill(bm_pid, SIGKILL);
free_buf:
free(buf);
reset_affinity:
taskset_restore(ppid, &old_affinity);
return ret;
}

View File

@ -182,7 +182,7 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
return -1;
}
if (fscanf(fp, "%s", cache_str) <= 0) {
if (fscanf(fp, "%63s", cache_str) <= 0) {
ksft_perror("Could not get cache_size");
fclose(fp);
@ -831,23 +831,6 @@ int filter_dmesg(void)
return 0;
}
const char *get_bw_report_type(const char *bw_report)
{
if (strcmp(bw_report, "reads") == 0)
return bw_report;
if (strcmp(bw_report, "writes") == 0)
return bw_report;
if (strcmp(bw_report, "nt-writes") == 0) {
return "writes";
}
if (strcmp(bw_report, "total") == 0)
return bw_report;
fprintf(stderr, "Requested iMC bandwidth report type unavailable\n");
return NULL;
}
int perf_event_open(struct perf_event_attr *hw_event, pid_t pid, int cpu,
int group_fd, unsigned long flags)
{

View File

@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS += -O3 -Wl,-no-as-needed -Wall
CFLAGS += -O3 -Wl,-no-as-needed -Wall -I$(top_srcdir)/usr/include
LDLIBS += -lrt -lpthread -lm
TEST_GEN_PROGS = rtctest

View File

@ -25,6 +25,12 @@
static char *rtc_file = "/dev/rtc0";
enum rtc_alarm_state {
RTC_ALARM_UNKNOWN,
RTC_ALARM_ENABLED,
RTC_ALARM_DISABLED,
};
FIXTURE(rtc) {
int fd;
};
@ -82,6 +88,24 @@ static void nanosleep_with_retries(long ns)
}
}
static enum rtc_alarm_state get_rtc_alarm_state(int fd)
{
struct rtc_param param = { 0 };
int rc;
/* Validate kernel reflects unsupported RTC alarm state */
param.param = RTC_PARAM_FEATURES;
param.index = 0;
rc = ioctl(fd, RTC_PARAM_GET, &param);
if (rc < 0)
return RTC_ALARM_UNKNOWN;
if ((param.uvalue & _BITUL(RTC_FEATURE_ALARM)) == 0)
return RTC_ALARM_DISABLED;
return RTC_ALARM_ENABLED;
}
TEST_F_TIMEOUT(rtc, date_read_loop, READ_LOOP_DURATION_SEC + 2) {
int rc;
long iter_count = 0;
@ -197,11 +221,16 @@ TEST_F(rtc, alarm_alm_set) {
fd_set readfds;
time_t secs, new;
int rc;
enum rtc_alarm_state alarm_state = RTC_ALARM_UNKNOWN;
if (self->fd == -1 && errno == ENOENT)
SKIP(return, "Skipping test since %s does not exist", rtc_file);
ASSERT_NE(-1, self->fd);
alarm_state = get_rtc_alarm_state(self->fd);
if (alarm_state == RTC_ALARM_DISABLED)
SKIP(return, "Skipping test since alarms are not supported.");
rc = ioctl(self->fd, RTC_RD_TIME, &tm);
ASSERT_NE(-1, rc);
@ -210,6 +239,11 @@ TEST_F(rtc, alarm_alm_set) {
rc = ioctl(self->fd, RTC_ALM_SET, &tm);
if (rc == -1) {
/*
* Report an error if the rtc alarm was enabled. Fall back to checking
* the ioctl error number if the rtc alarm state is unknown.
*/
ASSERT_EQ(RTC_ALARM_UNKNOWN, alarm_state);
ASSERT_EQ(EINVAL, errno);
TH_LOG("skip alarms are not supported.");
return;
@ -255,11 +289,16 @@ TEST_F(rtc, alarm_wkalm_set) {
fd_set readfds;
time_t secs, new;
int rc;
enum rtc_alarm_state alarm_state = RTC_ALARM_UNKNOWN;
if (self->fd == -1 && errno == ENOENT)
SKIP(return, "Skipping test since %s does not exist", rtc_file);
ASSERT_NE(-1, self->fd);
alarm_state = get_rtc_alarm_state(self->fd);
if (alarm_state == RTC_ALARM_DISABLED)
SKIP(return, "Skipping test since alarms are not supported.");
rc = ioctl(self->fd, RTC_RD_TIME, &alarm.time);
ASSERT_NE(-1, rc);
@ -270,6 +309,11 @@ TEST_F(rtc, alarm_wkalm_set) {
rc = ioctl(self->fd, RTC_WKALM_SET, &alarm);
if (rc == -1) {
/*
* Report an error if the rtc alarm was enabled. Fall back to checking
* the ioctl error number if the rtc alarm state is unknown.
*/
ASSERT_EQ(RTC_ALARM_UNKNOWN, alarm_state);
ASSERT_EQ(EINVAL, errno);
TH_LOG("skip alarms are not supported.");
return;
@ -307,11 +351,16 @@ TEST_F_TIMEOUT(rtc, alarm_alm_set_minute, 65) {
fd_set readfds;
time_t secs, new;
int rc;
enum rtc_alarm_state alarm_state = RTC_ALARM_UNKNOWN;
if (self->fd == -1 && errno == ENOENT)
SKIP(return, "Skipping test since %s does not exist", rtc_file);
ASSERT_NE(-1, self->fd);
alarm_state = get_rtc_alarm_state(self->fd);
if (alarm_state == RTC_ALARM_DISABLED)
SKIP(return, "Skipping test since alarms are not supported.");
rc = ioctl(self->fd, RTC_RD_TIME, &tm);
ASSERT_NE(-1, rc);
@ -320,6 +369,11 @@ TEST_F_TIMEOUT(rtc, alarm_alm_set_minute, 65) {
rc = ioctl(self->fd, RTC_ALM_SET, &tm);
if (rc == -1) {
/*
* Report an error if the rtc alarm was enabled. Fall back to checking
* the ioctl error number if the rtc alarm state is unknown.
*/
ASSERT_EQ(RTC_ALARM_UNKNOWN, alarm_state);
ASSERT_EQ(EINVAL, errno);
TH_LOG("skip alarms are not supported.");
return;
@ -365,11 +419,16 @@ TEST_F_TIMEOUT(rtc, alarm_wkalm_set_minute, 65) {
fd_set readfds;
time_t secs, new;
int rc;
enum rtc_alarm_state alarm_state = RTC_ALARM_UNKNOWN;
if (self->fd == -1 && errno == ENOENT)
SKIP(return, "Skipping test since %s does not exist", rtc_file);
ASSERT_NE(-1, self->fd);
alarm_state = get_rtc_alarm_state(self->fd);
if (alarm_state == RTC_ALARM_DISABLED)
SKIP(return, "Skipping test since alarms are not supported.");
rc = ioctl(self->fd, RTC_RD_TIME, &alarm.time);
ASSERT_NE(-1, rc);
@ -380,6 +439,11 @@ TEST_F_TIMEOUT(rtc, alarm_wkalm_set_minute, 65) {
rc = ioctl(self->fd, RTC_WKALM_SET, &alarm);
if (rc == -1) {
/*
* Report an error if the rtc alarm was enabled. Fall back to checking
* the ioctl error number if the rtc alarm state is unknown.
*/
ASSERT_EQ(RTC_ALARM_UNKNOWN, alarm_state);
ASSERT_EQ(EINVAL, errno);
TH_LOG("skip alarms are not supported.");
return;

View File

@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0-only
mangle_uc_sigmask
sas

View File

@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
CFLAGS = -Wall
TEST_GEN_PROGS = sas
TEST_GEN_PROGS = mangle_uc_sigmask
TEST_GEN_PROGS += sas
include ../lib.mk

View File

@ -0,0 +1,184 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (C) 2024 ARM Ltd.
*
* Author: Dev Jain <dev.jain@arm.com>
*
* Test describing a clear distinction between signal states - delivered and
* blocked, and their relation with ucontext.
*
* A process can request blocking of a signal by masking it into its set of
* blocked signals; such a signal, when sent to the process by the kernel,
* will get blocked by the process and it may later unblock it and take an
* action. At that point, the signal will be delivered.
*
* We test the following functionalities of the kernel:
*
* ucontext_t describes the interrupted context of the thread; this implies
* that, when a handler is registered and the corresponding signal is
* caught, the saved state is the one from just before the jump into the handler.
*
* The thread's mask of blocked signals can be permanently changed, i.e., not
* just during the execution of the handler, by mangling uc_sigmask
* from inside the handler.
*
* Assume that we block a set of signals, S1, via sigaction(), and that the
* signal for which the handler was installed is S2. When S2 is sent to the
* program, it will be considered "delivered", since we will act on the
* signal and jump to the handler. Any instances of S1 or S2 raised, while the
* program is executing inside the handler, will be blocked; they will be
* delivered immediately upon termination of the handler.
*
* For standard signals (also see real-time signals in the man page), multiple
* blocked instances of the same signal are not queued; such a signal will
* be delivered just once.
*/
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <ucontext.h>
#include "../kselftest.h"
void handler_verify_ucontext(int signo, siginfo_t *info, void *uc)
{
int ret;
/* Kernel dumps ucontext with USR2 blocked */
ret = sigismember(&(((ucontext_t *)uc)->uc_sigmask), SIGUSR2);
ksft_test_result(ret == 1, "USR2 blocked in ucontext\n");
/*
* USR2 is blocked; can be delivered neither here, nor after
* exit from handler
*/
if (raise(SIGUSR2))
ksft_exit_fail_perror("raise");
}
void handler_segv(int signo, siginfo_t *info, void *uc)
{
/*
* Three cases possible:
* 1. Program already terminated due to segmentation fault.
* 2. SEGV was blocked even after returning from handler_usr.
* 3. SEGV was delivered on returning from handler_usr.
* The last option must happen.
*/
ksft_test_result_pass("SEGV delivered\n");
}
static int cnt;
void handler_usr(int signo, siginfo_t *info, void *uc)
{
int ret;
/*
* Break out of infinite recursion caused by raise(SIGUSR1) invoked
* from inside the handler
*/
++cnt;
if (cnt > 1)
return;
/* SEGV blocked during handler execution, delivered on return */
if (raise(SIGSEGV))
ksft_exit_fail_perror("raise");
ksft_print_msg("SEGV bypassed successfully\n");
/*
* Signal responsible for handler invocation is blocked by default;
* delivered on return, leading to recursion
*/
if (raise(SIGUSR1))
ksft_exit_fail_perror("raise");
ksft_test_result(cnt == 1,
"USR1 is blocked, cannot invoke handler right now\n");
/* Raise USR1 again; only one instance must be delivered upon exit */
if (raise(SIGUSR1))
ksft_exit_fail_perror("raise");
/* SEGV has been blocked in sa_mask, but ucontext is empty */
ret = sigismember(&(((ucontext_t *)uc)->uc_sigmask), SIGSEGV);
ksft_test_result(ret == 0, "SEGV not blocked in ucontext\n");
/* USR1 has been blocked, but ucontext is empty */
ret = sigismember(&(((ucontext_t *)uc)->uc_sigmask), SIGUSR1);
ksft_test_result(ret == 0, "USR1 not blocked in ucontext\n");
/*
* Mangle ucontext; this will be copied back into &current->blocked
* on return from the handler.
*/
if (sigaddset(&((ucontext_t *)uc)->uc_sigmask, SIGUSR2))
ksft_exit_fail_perror("sigaddset");
}
int main(int argc, char *argv[])
{
struct sigaction act, act2;
sigset_t set, oldset;
ksft_print_header();
ksft_set_plan(7);
act.sa_flags = SA_SIGINFO;
act.sa_sigaction = &handler_usr;
/* Add SEGV to blocked mask */
if (sigemptyset(&act.sa_mask) || sigaddset(&act.sa_mask, SIGSEGV)
|| (sigismember(&act.sa_mask, SIGSEGV) != 1))
ksft_exit_fail_msg("Cannot add SEGV to blocked mask\n");
if (sigaction(SIGUSR1, &act, NULL))
ksft_exit_fail_perror("Cannot install handler");
act2.sa_flags = SA_SIGINFO;
act2.sa_sigaction = &handler_segv;
if (sigaction(SIGSEGV, &act2, NULL))
ksft_exit_fail_perror("Cannot install handler");
/* Invoke handler */
if (raise(SIGUSR1))
ksft_exit_fail_perror("raise");
/* USR1 must not be queued */
ksft_test_result(cnt == 2, "handler invoked only twice\n");
/* Mangled ucontext implies USR2 is blocked for current thread */
if (raise(SIGUSR2))
ksft_exit_fail_perror("raise");
ksft_print_msg("USR2 bypassed successfully\n");
act.sa_sigaction = &handler_verify_ucontext;
if (sigaction(SIGUSR1, &act, NULL))
ksft_exit_fail_perror("Cannot install handler");
if (raise(SIGUSR1))
ksft_exit_fail_perror("raise");
/*
* Raising USR2 in handler_verify_ucontext is redundant since it
* is blocked
*/
ksft_print_msg("USR2 still blocked on return from handler\n");
/* Confirm USR2 blockage by sigprocmask() too */
if (sigemptyset(&set))
ksft_exit_fail_perror("sigemptyset");
if (sigprocmask(SIG_BLOCK, &set, &oldset))
ksft_exit_fail_perror("sigprocmask");
ksft_test_result(sigismember(&oldset, SIGUSR2) == 1,
"USR2 present in &current->blocked\n");
ksft_finished();
}

View File

@ -1,5 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
CFLAGS += -O3 -Wl,-no-as-needed -Wall
CFLAGS += -O3 -Wl,-no-as-needed -Wall -I $(top_srcdir)
LDLIBS += -lrt -lpthread -lm
# these are all "safe" tests that don't modify

View File

@ -22,14 +22,10 @@
#include <sys/time.h>
#include <sys/timex.h>
#include <time.h>
#include <include/vdso/time64.h>
#include "../kselftest.h"
#define CLOCK_MONOTONIC_RAW 4
#define NSEC_PER_SEC 1000000000LL
#define USEC_PER_SEC 1000000
#define MILLION 1000000
long systick;

View File

@ -28,24 +28,10 @@
#include <signal.h>
#include <stdlib.h>
#include <pthread.h>
#include <include/vdso/time64.h>
#include <errno.h>
#include "../kselftest.h"
#define CLOCK_REALTIME 0
#define CLOCK_MONOTONIC 1
#define CLOCK_PROCESS_CPUTIME_ID 2
#define CLOCK_THREAD_CPUTIME_ID 3
#define CLOCK_MONOTONIC_RAW 4
#define CLOCK_REALTIME_COARSE 5
#define CLOCK_MONOTONIC_COARSE 6
#define CLOCK_BOOTTIME 7
#define CLOCK_REALTIME_ALARM 8
#define CLOCK_BOOTTIME_ALARM 9
#define CLOCK_HWSPECIFIC 10
#define CLOCK_TAI 11
#define NR_CLOCKIDS 12
#define NSEC_PER_SEC 1000000000ULL
#define UNREASONABLE_LAT (NSEC_PER_SEC * 5) /* hopefully we resume in 5 secs */
#define SUSPEND_SECS 15
@ -142,8 +128,8 @@ int main(void)
alarmcount = 0;
if (timer_create(alarm_clock_id, &se, &tm1) == -1) {
printf("timer_create failed, %s unsupported?\n",
clockstring(alarm_clock_id));
printf("timer_create failed, %s unsupported?: %s\n",
clockstring(alarm_clock_id), strerror(errno));
break;
}

View File

@@ -28,24 +28,13 @@
#include <sys/timex.h>
#include <string.h>
#include <signal.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define CALLS_PER_LOOP 64
-#define NSEC_PER_SEC 1000000000ULL
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
-#define CLOCK_PROCESS_CPUTIME_ID 2
-#define CLOCK_THREAD_CPUTIME_ID 3
-#define CLOCK_MONOTONIC_RAW 4
-#define CLOCK_REALTIME_COARSE 5
-#define CLOCK_MONOTONIC_COARSE 6
-#define CLOCK_BOOTTIME 7
-#define CLOCK_REALTIME_ALARM 8
-#define CLOCK_BOOTTIME_ALARM 9
-/* CLOCK_HWSPECIFIC == CLOCK_SGI_CYCLE (Deprecated) */
-#define CLOCK_HWSPECIFIC 10
-#define CLOCK_TAI 11
-#define NR_CLOCKIDS 12
+#define CALLS_PER_LOOP 64
char *clockstring(int clockid)
{
@@ -152,7 +141,7 @@ int main(int argc, char *argv[])
{
int clockid, opt;
int userclock = CLOCK_REALTIME;
-int maxclocks = NR_CLOCKIDS;
+int maxclocks = CLOCK_TAI + 1;
int runtime = 10;
struct timespec ts;

View File

@@ -48,9 +48,9 @@
#include <string.h>
#include <signal.h>
#include <unistd.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000ULL
-#define CLOCK_TAI 11
time_t next_leap;

View File

@@ -29,9 +29,9 @@
#include <signal.h>
#include <errno.h>
#include <mqueue.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000ULL
#define TARGET_TIMEOUT 100000000 /* 100ms in nanoseconds */
#define UNRESONABLE_LATENCY 40000000 /* 40ms in nanosecs */

View File

@@ -27,23 +27,11 @@
#include <sys/timex.h>
#include <string.h>
#include <signal.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000ULL
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
-#define CLOCK_PROCESS_CPUTIME_ID 2
-#define CLOCK_THREAD_CPUTIME_ID 3
-#define CLOCK_MONOTONIC_RAW 4
-#define CLOCK_REALTIME_COARSE 5
-#define CLOCK_MONOTONIC_COARSE 6
-#define CLOCK_BOOTTIME 7
-#define CLOCK_REALTIME_ALARM 8
-#define CLOCK_BOOTTIME_ALARM 9
-/* CLOCK_HWSPECIFIC == CLOCK_SGI_CYCLE (Deprecated) */
-#define CLOCK_HWSPECIFIC 10
-#define CLOCK_TAI 11
-#define NR_CLOCKIDS 12
#define UNSUPPORTED 0xf00f
@@ -132,11 +120,12 @@ int main(int argc, char **argv)
{
long long length;
int clockid, ret;
+int max_clocks = CLOCK_TAI + 1;
ksft_print_header();
-ksft_set_plan(NR_CLOCKIDS);
+ksft_set_plan(max_clocks);
-for (clockid = CLOCK_REALTIME; clockid < NR_CLOCKIDS; clockid++) {
+for (clockid = CLOCK_REALTIME; clockid < max_clocks; clockid++) {
/* Skip cputime clockids since nanosleep won't increment cputime */
if (clockid == CLOCK_PROCESS_CPUTIME_ID ||

View File

@@ -24,26 +24,13 @@
#include <sys/timex.h>
#include <string.h>
#include <signal.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000ULL
#define UNRESONABLE_LATENCY 40000000 /* 40ms in nanosecs */
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
-#define CLOCK_PROCESS_CPUTIME_ID 2
-#define CLOCK_THREAD_CPUTIME_ID 3
-#define CLOCK_MONOTONIC_RAW 4
-#define CLOCK_REALTIME_COARSE 5
-#define CLOCK_MONOTONIC_COARSE 6
-#define CLOCK_BOOTTIME 7
-#define CLOCK_REALTIME_ALARM 8
-#define CLOCK_BOOTTIME_ALARM 9
-/* CLOCK_HWSPECIFIC == CLOCK_SGI_CYCLE (Deprecated) */
-#define CLOCK_HWSPECIFIC 10
-#define CLOCK_TAI 11
-#define NR_CLOCKIDS 12
#define UNSUPPORTED 0xf00f
@@ -145,11 +132,12 @@ int main(int argc, char **argv)
{
long long length;
int clockid, ret;
+int max_clocks = CLOCK_TAI + 1;
ksft_print_header();
-ksft_set_plan(NR_CLOCKIDS - CLOCK_REALTIME - SKIPPED_CLOCK_COUNT);
+ksft_set_plan(max_clocks - CLOCK_REALTIME - SKIPPED_CLOCK_COUNT);
-for (clockid = CLOCK_REALTIME; clockid < NR_CLOCKIDS; clockid++) {
+for (clockid = CLOCK_REALTIME; clockid < max_clocks; clockid++) {
/* Skip cputime clockids since nanosleep won't increment cputime */
if (clockid == CLOCK_PROCESS_CPUTIME_ID ||

View File

@@ -15,13 +15,12 @@
#include <string.h>
#include <unistd.h>
#include <time.h>
+#include <include/vdso/time64.h>
#include <pthread.h>
#include "../kselftest.h"
#define DELAY 2
-#define USECS_PER_SEC 1000000
-#define NSECS_PER_SEC 1000000000
static void __fatal_error(const char *test, const char *name, const char *what)
{
@@ -86,9 +85,9 @@ static int check_diff(struct timeval start, struct timeval end)
long long diff;
diff = end.tv_usec - start.tv_usec;
-diff += (end.tv_sec - start.tv_sec) * USECS_PER_SEC;
+diff += (end.tv_sec - start.tv_sec) * USEC_PER_SEC;
-if (llabs(diff - DELAY * USECS_PER_SEC) > USECS_PER_SEC / 2) {
+if (llabs(diff - DELAY * USEC_PER_SEC) > USEC_PER_SEC / 2) {
printf("Diff too high: %lld..", diff);
return -1;
}
@@ -448,7 +447,7 @@ static inline int64_t calcdiff_ns(struct timespec t1, struct timespec t2)
{
int64_t diff;
-diff = NSECS_PER_SEC * (int64_t)((int) t1.tv_sec - (int) t2.tv_sec);
+diff = NSEC_PER_SEC * (int64_t)((int) t1.tv_sec - (int) t2.tv_sec);
diff += ((int) t1.tv_nsec - (int) t2.tv_nsec);
return diff;
}
@@ -479,7 +478,7 @@ static void check_sigev_none(int which, const char *name)
do {
if (clock_gettime(which, &now))
fatal_error(name, "clock_gettime()");
-} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+} while (calcdiff_ns(now, start) < NSEC_PER_SEC);
if (timer_gettime(timerid, &its))
fatal_error(name, "timer_gettime()");
@@ -536,7 +535,7 @@ static void check_gettime(int which, const char *name)
wraps++;
prev = its;
-} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+} while (calcdiff_ns(now, start) < NSEC_PER_SEC);
if (timer_delete(timerid))
fatal_error(name, "timer_delete()");
@@ -587,7 +586,7 @@ static void check_overrun(int which, const char *name)
do {
if (clock_gettime(which, &now))
fatal_error(name, "clock_gettime()");
-} while (calcdiff_ns(now, start) < NSECS_PER_SEC);
+} while (calcdiff_ns(now, start) < NSEC_PER_SEC);
/* Unblock it, which should deliver a signal */
if (sigprocmask(SIG_UNBLOCK, &set, NULL))

View File

@@ -25,11 +25,9 @@
#include <sys/time.h>
#include <sys/timex.h>
#include <time.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define CLOCK_MONOTONIC_RAW 4
-#define NSEC_PER_SEC 1000000000LL
#define shift_right(x, s) ({ \
__typeof__(x) __x = (x); \
__typeof__(s) __s = (s); \

View File

@@ -27,10 +27,9 @@
#include <unistd.h>
#include <time.h>
#include <sys/time.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000LL
#define KTIME_MAX ((long long)~((unsigned long long)1 << 63))
#define KTIME_SEC_MAX (KTIME_MAX / NSEC_PER_SEC)

View File

@@ -28,24 +28,12 @@
#include <signal.h>
#include <stdlib.h>
#include <pthread.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define CLOCK_REALTIME 0
-#define CLOCK_MONOTONIC 1
-#define CLOCK_PROCESS_CPUTIME_ID 2
-#define CLOCK_THREAD_CPUTIME_ID 3
-#define CLOCK_MONOTONIC_RAW 4
-#define CLOCK_REALTIME_COARSE 5
-#define CLOCK_MONOTONIC_COARSE 6
-#define CLOCK_BOOTTIME 7
-#define CLOCK_REALTIME_ALARM 8
-#define CLOCK_BOOTTIME_ALARM 9
-/* CLOCK_HWSPECIFIC == CLOCK_SGI_CYCLE (Deprecated) */
-#define CLOCK_HWSPECIFIC 10
-#define CLOCK_TAI 11
-#define NR_CLOCKIDS 12
-#define NSEC_PER_SEC 1000000000ULL
#define UNRESONABLE_LATENCY 40000000 /* 40ms in nanosecs */
#define TIMER_SECS 1
@@ -80,7 +68,7 @@ char *clockstring(int clockid)
return "CLOCK_BOOTTIME_ALARM";
case CLOCK_TAI:
return "CLOCK_TAI";
-};
+}
return "UNKNOWN_CLOCKID";
}
@@ -254,6 +242,7 @@ int main(void)
struct sigaction act;
int signum = SIGRTMAX;
int ret = 0;
+int max_clocks = CLOCK_TAI + 1;
/* Set up signal handler: */
sigfillset(&act.sa_mask);
@@ -262,7 +251,7 @@
sigaction(signum, &act, NULL);
printf("Setting timers for every %i seconds\n", TIMER_SECS);
-for (clock_id = 0; clock_id < NR_CLOCKIDS; clock_id++) {
+for (clock_id = 0; clock_id < max_clocks; clock_id++) {
if ((clock_id == CLOCK_PROCESS_CPUTIME_ID) ||
(clock_id == CLOCK_THREAD_CPUTIME_ID) ||

View File

@@ -29,11 +29,9 @@
#include <string.h>
#include <signal.h>
#include <unistd.h>
+#include <include/vdso/time64.h>
#include "../kselftest.h"
-#define NSEC_PER_SEC 1000000000LL
-#define USEC_PER_SEC 1000000LL
#define ADJ_SETOFFSET 0x0100
#include <sys/syscall.h>