
Merge tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull non-MM updates from Andrew Morton:
 "There is no particular theme here - mainly quick hits all over the
  tree.

  Most notable is a set of zlib changes from Mikhail Zaslonko which
  enhances and fixes zlib's use of S390 hardware support: 'lib/zlib: Set
  of s390 DFLTCC related patches for kernel zlib'"

* tag 'mm-nonmm-stable-2023-02-20-15-29' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (55 commits)
  Update CREDITS file entry for Jesper Juhl
  sparc: allow PM configs for sparc32 COMPILE_TEST
  hung_task: print message when hung_task_warnings gets down to zero.
  arch/Kconfig: fix indentation
  scripts/tags.sh: fix the Kconfig tags generation when using latest ctags
  nilfs2: prevent WARNING in nilfs_dat_commit_end()
  lib/zlib: remove redundation assignement of avail_in dfltcc_gdht()
  lib/Kconfig.debug: do not enable DEBUG_PREEMPT by default
  lib/zlib: DFLTCC always switch to software inflate for Z_PACKET_FLUSH option
  lib/zlib: DFLTCC support inflate with small window
  lib/zlib: Split deflate and inflate states for DFLTCC
  lib/zlib: DFLTCC not writing header bits when avail_out == 0
  lib/zlib: fix DFLTCC ignoring flush modes when avail_in == 0
  lib/zlib: fix DFLTCC not flushing EOBS when creating raw streams
  lib/zlib: implement switching between DFLTCC and software
  lib/zlib: adjust offset calculation for dfltcc_state
  nilfs2: replace WARN_ONs for invalid DAT metadata block requests
  scripts/spelling.txt: add "exsits" pattern and fix typo instances
  fs: gracefully handle ->get_block not mapping bh in __mpage_writepage
  cramfs: Kconfig: fix spelling & punctuation
  ...
Linus Torvalds 2023-02-23 17:55:40 -08:00
commit d2980d8d82
66 changed files with 1778 additions and 297 deletions


@ -1852,11 +1852,11 @@ E: ajoshi@shell.unixbox.com
D: fbdev hacking
N: Jesper Juhl
E: jj@chaosbits.net
E: jesperjuhl76@gmail.com
D: Various fixes, cleanups and minor features all over the tree.
D: Wrote initial version of the hdaps driver (since passed on to others).
S: Lemnosvej 1, 3.tv
S: 2300 Copenhagen S.
S: Titangade 5G, 2.tv
S: 2200 Copenhagen N.
S: Denmark
N: Jozsef Kadlecsik


@ -453,9 +453,10 @@ this allows system administrators to override the
kexec_load_disabled
===================
A toggle indicating if the ``kexec_load`` syscall has been disabled.
This value defaults to 0 (false: ``kexec_load`` enabled), but can be
set to 1 (true: ``kexec_load`` disabled).
A toggle indicating if the syscalls ``kexec_load`` and
``kexec_file_load`` have been disabled.
This value defaults to 0 (false: ``kexec_*load`` enabled), but can be
set to 1 (true: ``kexec_*load`` disabled).
Once true, kexec can no longer be used, and the toggle cannot be set
back to false.
This allows a kexec image to be loaded before disabling the syscall,
@ -463,6 +464,24 @@ allowing a system to set up (and later use) an image without it being
altered.
Generally used together with the `modules_disabled`_ sysctl.
kexec_load_limit_panic
======================
This parameter specifies a limit to the number of times the syscalls
``kexec_load`` and ``kexec_file_load`` can be called with a crash
image. It can only be set with a more restrictive value than the
current one.
== ======================================================
-1 Unlimited calls to kexec. This is the default setting.
N Number of calls left.
== ======================================================
kexec_load_limit_reboot
=======================
Similar functionality as ``kexec_load_limit_panic``, but for a normal
image.
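
As a concrete illustration (not part of this patch set), a minimal
userspace sketch that exercises the new ``kexec_load_limit_panic`` knob
described above, via the usual kernel.* to /proc/sys/kernel/ mapping:

/* Hypothetical sketch: permit only one further crash-image load. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/proc/sys/kernel/kexec_load_limit_panic", O_WRONLY);

        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* "1" allows one more kexec_load()/kexec_file_load() call with a
         * crash image; the value can only be made more restrictive. */
        if (write(fd, "1", 1) != 1)
                perror("write");
        close(fd);
        return 0;
}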
kptr_restrict
=============


@ -231,6 +231,71 @@ proc entries
This feature is intended for systematic testing of faults in a single
system call. See an example below.
Error Injectable Functions
--------------------------
This section is for kernel developers who are considering adding a function
to the ALLOW_ERROR_INJECTION() macro.
Requirements for the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since function-level error injection forcibly changes the code path and
returns an error even if the input and conditions are proper, it can cause
an unexpected kernel crash if you allow error injection on a function which
is NOT error injectable. Thus, you (and reviewers) must ensure that:

- The function returns an error code if it fails, and its callers check that
  return value correctly (and can recover from it).

- The function does not execute any code which changes any state before the
  first error return. The state includes global, local, and input variables;
  for example, clearing an output address (e.g. `*ret = NULL`), incrementing
  or decrementing a counter, setting a flag, disabling preemption or
  interrupts, or taking a lock. (If such state is restored before returning
  the error, that is OK.)

The first requirement is important; it means that release (object-freeing)
functions are usually harder to inject errors into than allocation
functions. If an error from such a release function is not handled
correctly, it easily causes a memory leak, because the caller cannot tell
whether the object has been released or is corrupted.

The second requirement protects callers which expect the function to always
do something. If error injection skips the whole function, that expectation
is betrayed and causes an unexpected error.
Type of the Error Injectable Functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Each error injectable function has an error type, specified by the
ALLOW_ERROR_INJECTION() macro. You have to choose it carefully if you add
a new error injectable function: if the wrong error type is chosen, the
kernel may crash because it may not be able to handle the error.
There are 4 types of errors defined in include/asm-generic/error-injection.h:

EI_ETYPE_NULL
  This function returns `NULL` if it fails, e.g. it returns the address of
  an allocated object.

EI_ETYPE_ERRNO
  This function returns an `-errno` error code if it fails, e.g. -EINVAL if
  the input is wrong. This includes functions which return an address that
  encodes `-errno` via the ERR_PTR() macro.

EI_ETYPE_ERRNO_NULL
  This function returns either an `-errno` or `NULL` if it fails. If the
  caller checks the return value with the IS_ERR_OR_NULL() macro, this type
  is appropriate.

EI_ETYPE_TRUE
  This function returns `true` (a non-zero positive value) if it fails.

If you specify the wrong type, for example EI_ETYPE_ERRNO for a function
which returns an allocated object, it may cause a problem because the
returned value is not an object address and the caller cannot access it.
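
To make the requirements and error types above concrete, here is a
minimal, hypothetical sketch (the function and structure names are
invented for illustration, not taken from this series) of a function
that satisfies both rules and is tagged with the matching type:

#include <linux/errno.h>
#include <linux/error-injection.h>
#include <linux/types.h>

struct example_dev {
        bool ready;
};

/*
 * Returns -errno on failure and changes no state before its first error
 * return, so tagging it with ERRNO (EI_ETYPE_ERRNO) is appropriate.
 */
static int example_dev_prepare(struct example_dev *dev)
{
        if (!dev)
                return -EINVAL;         /* error before any side effect */

        dev->ready = true;              /* state changes only on success */
        return 0;
}
ALLOW_ERROR_INJECTION(example_dev_prepare, ERRNO);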
How to add new fault injection capability
-----------------------------------------


@ -35,7 +35,7 @@ config HOTPLUG_SMT
bool
config GENERIC_ENTRY
bool
bool
config KPROBES
bool "Kprobes"
@ -55,26 +55,26 @@ config JUMP_LABEL
depends on HAVE_ARCH_JUMP_LABEL
select OBJTOOL if HAVE_JUMP_LABEL_HACK
help
This option enables a transparent branch optimization that
makes certain almost-always-true or almost-always-false branch
conditions even cheaper to execute within the kernel.
This option enables a transparent branch optimization that
makes certain almost-always-true or almost-always-false branch
conditions even cheaper to execute within the kernel.
Certain performance-sensitive kernel code, such as trace points,
scheduler functionality, networking code and KVM have such
branches and include support for this optimization technique.
Certain performance-sensitive kernel code, such as trace points,
scheduler functionality, networking code and KVM have such
branches and include support for this optimization technique.
If it is detected that the compiler has support for "asm goto",
the kernel will compile such branches with just a nop
instruction. When the condition flag is toggled to true, the
nop will be converted to a jump instruction to execute the
conditional block of instructions.
If it is detected that the compiler has support for "asm goto",
the kernel will compile such branches with just a nop
instruction. When the condition flag is toggled to true, the
nop will be converted to a jump instruction to execute the
conditional block of instructions.
This technique lowers overhead and stress on the branch prediction
of the processor and generally makes the kernel faster. The update
of the condition is slower, but those are always very rare.
This technique lowers overhead and stress on the branch prediction
of the processor and generally makes the kernel faster. The update
of the condition is slower, but those are always very rare.
( On 32-bit x86, the necessary options added to the compiler
flags may increase the size of the kernel slightly. )
( On 32-bit x86, the necessary options added to the compiler
flags may increase the size of the kernel slightly. )
config STATIC_KEYS_SELFTEST
bool "Static key selftest"
@ -98,9 +98,9 @@ config KPROBES_ON_FTRACE
depends on KPROBES && HAVE_KPROBES_ON_FTRACE
depends on DYNAMIC_FTRACE_WITH_REGS
help
If function tracer is enabled and the arch supports full
passing of pt_regs to function tracing, then kprobes can
optimize on top of function tracing.
If function tracer is enabled and the arch supports full
passing of pt_regs to function tracing, then kprobes can
optimize on top of function tracing.
config UPROBES
def_bool n
@ -154,21 +154,21 @@ config HAVE_EFFICIENT_UNALIGNED_ACCESS
config ARCH_USE_BUILTIN_BSWAP
bool
help
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
__arch_bswapXX() macros, allows the compiler to see what's
happening and offers more opportunity for optimisation. In
particular, the compiler will be able to combine the byteswap
with a nearby load or store and use load-and-swap or
store-and-swap instructions if the architecture has them. It
should almost *never* result in code which is worse than the
hand-coded assembler in <asm/swab.h>. But just in case it
does, the use of the builtins is optional.
Modern versions of GCC (since 4.4) have builtin functions
for handling byte-swapping. Using these, instead of the old
inline assembler that the architecture code provides in the
__arch_bswapXX() macros, allows the compiler to see what's
happening and offers more opportunity for optimisation. In
particular, the compiler will be able to combine the byteswap
with a nearby load or store and use load-and-swap or
store-and-swap instructions if the architecture has them. It
should almost *never* result in code which is worse than the
hand-coded assembler in <asm/swab.h>. But just in case it
does, the use of the builtins is optional.
Any architecture with load-and-swap or store-and-swap
instructions should set this. And it shouldn't hurt to set it
on architectures that don't have such instructions.
Any architecture with load-and-swap or store-and-swap
instructions should set this. And it shouldn't hurt to set it
on architectures that don't have such instructions.
config KRETPROBES
def_bool y
@ -720,13 +720,13 @@ config LTO_CLANG_FULL
depends on !COMPILE_TEST
select LTO_CLANG
help
This option enables Clang's full Link Time Optimization (LTO), which
allows the compiler to optimize the kernel globally. If you enable
this option, the compiler generates LLVM bitcode instead of ELF
object files, and the actual compilation from bitcode happens at
the LTO link step, which may take several minutes depending on the
kernel configuration. More information can be found from LLVM's
documentation:
This option enables Clang's full Link Time Optimization (LTO), which
allows the compiler to optimize the kernel globally. If you enable
this option, the compiler generates LLVM bitcode instead of ELF
object files, and the actual compilation from bitcode happens at
the LTO link step, which may take several minutes depending on the
kernel configuration. More information can be found from LLVM's
documentation:
https://llvm.org/docs/LinkTimeOptimization.html
@ -1330,9 +1330,9 @@ config ARCH_HAS_CC_PLATFORM
bool
config HAVE_SPARSE_SYSCALL_NR
bool
help
An architecture should select this if its syscall numbering is sparse
bool
help
An architecture should select this if its syscall numbering is sparse
to save space. For example, MIPS architecture has a syscall array with
entries at 4000, 5000 and 6000 locations. This option turns on syscall
related optimizations for a given architecture.
@ -1356,35 +1356,35 @@ config HAVE_PREEMPT_DYNAMIC_CALL
depends on HAVE_STATIC_CALL
select HAVE_PREEMPT_DYNAMIC
help
An architecture should select this if it can handle the preemption
model being selected at boot time using static calls.
An architecture should select this if it can handle the preemption
model being selected at boot time using static calls.
Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
preemption function will be patched directly.
Where an architecture selects HAVE_STATIC_CALL_INLINE, any call to a
preemption function will be patched directly.
Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
call to a preemption function will go through a trampoline, and the
trampoline will be patched.
Where an architecture does not select HAVE_STATIC_CALL_INLINE, any
call to a preemption function will go through a trampoline, and the
trampoline will be patched.
It is strongly advised to support inline static call to avoid any
overhead.
It is strongly advised to support inline static call to avoid any
overhead.
config HAVE_PREEMPT_DYNAMIC_KEY
bool
depends on HAVE_ARCH_JUMP_LABEL
select HAVE_PREEMPT_DYNAMIC
help
An architecture should select this if it can handle the preemption
model being selected at boot time using static keys.
An architecture should select this if it can handle the preemption
model being selected at boot time using static keys.
Each preemption function will be given an early return based on a
static key. This should have slightly lower overhead than non-inline
static calls, as this effectively inlines each trampoline into the
start of its callee. This may avoid redundant work, and may
integrate better with CFI schemes.
Each preemption function will be given an early return based on a
static key. This should have slightly lower overhead than non-inline
static calls, as this effectively inlines each trampoline into the
start of its callee. This may avoid redundant work, and may
integrate better with CFI schemes.
This will have greater overhead than using inline static calls as
the call to the preemption function cannot be entirely elided.
This will have greater overhead than using inline static calls as
the call to the preemption function cannot be entirely elided.
config ARCH_WANT_LD_ORPHAN_WARN
bool
@ -1407,8 +1407,8 @@ config ARCH_SUPPORTS_PAGE_TABLE_CHECK
config ARCH_SPLIT_ARG64
bool
help
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
If a 32-bit architecture requires 64-bit arguments to be split into
pairs of 32-bit arguments, select this option.
config ARCH_HAS_ELFCORE_COMPAT
bool


@ -73,7 +73,7 @@ struct halt_info {
static void
common_shutdown_1(void *generic_ptr)
{
struct halt_info *how = (struct halt_info *)generic_ptr;
struct halt_info *how = generic_ptr;
struct percpu_struct *cpup;
unsigned long *pflags, flags;
int cpuid = smp_processor_id();


@ -628,7 +628,7 @@ flush_tlb_all(void)
static void
ipi_flush_tlb_mm(void *x)
{
struct mm_struct *mm = (struct mm_struct *) x;
struct mm_struct *mm = x;
if (mm == current->active_mm && !asn_locked())
flush_tlb_current(mm);
else
@ -670,7 +670,7 @@ struct flush_tlb_page_struct {
static void
ipi_flush_tlb_page(void *x)
{
struct flush_tlb_page_struct *data = (struct flush_tlb_page_struct *)x;
struct flush_tlb_page_struct *data = x;
struct mm_struct * mm = data->mm;
if (mm == current->active_mm && !asn_locked())


@ -283,7 +283,7 @@ config ARCH_FORCE_MAX_ORDER
This config option is actually maximum order plus one. For example,
a value of 13 means that the largest free memory block is 2^12 pages.
if SPARC64
if SPARC64 || COMPILE_TEST
source "kernel/power/Kconfig"
endif


@ -2615,8 +2615,8 @@ static bool emulator_io_port_access_allowed(struct x86_emulate_ctxt *ctxt,
return true;
}
static bool emulator_io_permited(struct x86_emulate_ctxt *ctxt,
u16 port, u16 len)
static bool emulator_io_permitted(struct x86_emulate_ctxt *ctxt,
u16 port, u16 len)
{
if (ctxt->perm_ok)
return true;
@ -3961,7 +3961,7 @@ static int check_rdpmc(struct x86_emulate_ctxt *ctxt)
static int check_perm_in(struct x86_emulate_ctxt *ctxt)
{
ctxt->dst.bytes = min(ctxt->dst.bytes, 4u);
if (!emulator_io_permited(ctxt, ctxt->src.val, ctxt->dst.bytes))
if (!emulator_io_permitted(ctxt, ctxt->src.val, ctxt->dst.bytes))
return emulate_gp(ctxt, 0);
return X86EMUL_CONTINUE;
@ -3970,7 +3970,7 @@ static int check_perm_in(struct x86_emulate_ctxt *ctxt)
static int check_perm_out(struct x86_emulate_ctxt *ctxt)
{
ctxt->src.bytes = min(ctxt->src.bytes, 4u);
if (!emulator_io_permited(ctxt, ctxt->dst.val, ctxt->src.bytes))
if (!emulator_io_permitted(ctxt, ctxt->dst.val, ctxt->src.bytes))
return emulate_gp(ctxt, 0);
return X86EMUL_CONTINUE;


@ -446,7 +446,7 @@ iscsi_iser_conn_create(struct iscsi_cls_session *cls_session,
* @is_leading: indicate if this is the session leading connection (MCS)
*
* Return: zero on success, $error if iscsi_conn_bind fails and
* -EINVAL in case end-point doesn't exsits anymore or iser connection
* -EINVAL in case end-point doesn't exists anymore or iser connection
* state is not UP (teardown already started).
*/
static int iscsi_iser_conn_bind(struct iscsi_cls_session *cls_session,


@ -38,7 +38,7 @@ config CRAMFS_MTD
default y if !CRAMFS_BLOCKDEV
help
This option allows the CramFs driver to load data directly from
a linear adressed memory range (usually non volatile memory
a linear addressed memory range (usually non-volatile memory
like flash) instead of going through the block device layer.
This saves some memory since no intermediate buffering is
necessary.


@ -786,11 +786,10 @@ static void ext4_update_bh_state(struct buffer_head *bh, unsigned long flags)
* once we get rid of using bh as a container for mapping information
* to pass to / from get_block functions, this can go away.
*/
old_state = READ_ONCE(bh->b_state);
do {
old_state = READ_ONCE(bh->b_state);
new_state = (old_state & ~EXT4_MAP_FLAGS) | flags;
} while (unlikely(
cmpxchg(&bh->b_state, old_state, new_state) != old_state));
} while (unlikely(!try_cmpxchg(&bh->b_state, &old_state, new_state)));
}
static int _ext4_get_block(struct inode *inode, sector_t iblock,
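
The ext4 hunk above converts an open-coded cmpxchg() retry loop into the
try_cmpxchg() form. A rough sketch of the general pattern, with a
hypothetical structure and flag mask (not ext4 code):

#include <linux/atomic.h>
#include <linux/compiler.h>

struct example_obj {
        unsigned long state;
};

#define EXAMPLE_FLAG_MASK       0xffUL

static void example_update_state(struct example_obj *obj, unsigned long flags)
{
        unsigned long old_state, new_state;

        /* Read once before the loop ... */
        old_state = READ_ONCE(obj->state);
        do {
                new_state = (old_state & ~EXAMPLE_FLAG_MASK) | flags;
                /*
                 * ... try_cmpxchg() reloads old_state with the current
                 * value when it fails, so the loop body does not need to
                 * re-read obj->state itself.
                 */
        } while (unlikely(!try_cmpxchg(&obj->state, &old_state, new_state)));
}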


@ -200,7 +200,7 @@ static const struct dentry_operations vfat_dentry_ops = {
/* Characters that are undesirable in an MS-DOS file name */
static inline wchar_t vfat_bad_char(wchar_t w)
static inline bool vfat_bad_char(wchar_t w)
{
return (w < 0x0020)
|| (w == '*') || (w == '?') || (w == '<') || (w == '>')
@ -208,7 +208,7 @@ static inline wchar_t vfat_bad_char(wchar_t w)
|| (w == '\\');
}
static inline wchar_t vfat_replace_char(wchar_t w)
static inline bool vfat_replace_char(wchar_t w)
{
return (w == '[') || (w == ']') || (w == ';') || (w == ',')
|| (w == '+') || (w == '=');


@ -31,7 +31,7 @@ vxfs_put_page(struct page *pp)
/**
* vxfs_get_page - read a page into memory.
* @ip: inode to read from
* @mapping: mapping to read from
* @n: page number
*
* Description:
@ -81,14 +81,14 @@ vxfs_bread(struct inode *ip, int block)
}
/**
* vxfs_get_block - locate buffer for given inode,block tuple
* vxfs_getblk - locate buffer for given inode,block tuple
* @ip: inode
* @iblock: logical block
* @bp: buffer skeleton
* @create: %TRUE if blocks may be newly allocated.
*
* Description:
* The vxfs_get_block function fills @bp with the right physical
* The vxfs_getblk function fills @bp with the right physical
* block and device number to perform a lowlevel read/write on
* it.
*


@ -165,7 +165,7 @@ static int vxfs_try_sb_magic(struct super_block *sbp, int silent,
}
/**
* vxfs_read_super - read superblock into memory and initialize filesystem
* vxfs_fill_super - read superblock into memory and initialize filesystem
* @sbp: VFS superblock (to fill)
* @dp: fs private mount data
* @silent: do not complain loudly when sth is wrong


@ -274,6 +274,7 @@ static struct hfs_bnode *__hfs_bnode_create(struct hfs_btree *tree, u32 cnid)
tree->node_hash[hash] = node;
tree->node_hash_cnt++;
} else {
hfs_bnode_get(node2);
spin_unlock(&tree->hash_lock);
kfree(node);
wait_event(node2->lock_wq, !test_bit(HFS_BNODE_NEW, &node2->flags));


@ -486,7 +486,7 @@ void hfs_file_truncate(struct inode *inode)
inode->i_size);
if (inode->i_size > HFS_I(inode)->phys_size) {
struct address_space *mapping = inode->i_mapping;
void *fsdata;
void *fsdata = NULL;
struct page *page;
/* XXX: Can use generic_cont_expand? */


@ -554,7 +554,7 @@ void hfsplus_file_truncate(struct inode *inode)
if (inode->i_size > hip->phys_size) {
struct address_space *mapping = inode->i_mapping;
struct page *page;
void *fsdata;
void *fsdata = NULL;
loff_t size = inode->i_size;
res = hfsplus_write_begin(NULL, mapping, size, 0,


@ -257,7 +257,7 @@ end_attr_file_creation:
int __hfsplus_setxattr(struct inode *inode, const char *name,
const void *value, size_t size, int flags)
{
int err = 0;
int err;
struct hfs_find_data cat_fd;
hfsplus_cat_entry entry;
u16 cat_entry_flags, cat_entry_type;
@ -494,7 +494,7 @@ ssize_t __hfsplus_getxattr(struct inode *inode, const char *name,
__be32 xattr_record_type;
u32 record_type;
u16 record_length = 0;
ssize_t res = 0;
ssize_t res;
if ((!S_ISREG(inode->i_mode) &&
!S_ISDIR(inode->i_mode)) ||
@ -606,7 +606,7 @@ static inline int can_list(const char *xattr_name)
static ssize_t hfsplus_listxattr_finder_info(struct dentry *dentry,
char *buffer, size_t size)
{
ssize_t res = 0;
ssize_t res;
struct inode *inode = d_inode(dentry);
struct hfs_find_data fd;
u16 entry_type;
@ -674,10 +674,9 @@ end_listxattr_finder_info:
ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size)
{
ssize_t err;
ssize_t res = 0;
ssize_t res;
struct inode *inode = d_inode(dentry);
struct hfs_find_data fd;
u16 key_len = 0;
struct hfsplus_attr_key attr_key;
char *strbuf;
int xattr_name_len;
@ -719,7 +718,8 @@ ssize_t hfsplus_listxattr(struct dentry *dentry, char *buffer, size_t size)
}
for (;;) {
key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset);
u16 key_len = hfs_bnode_read_u16(fd.bnode, fd.keyoffset);
if (key_len == 0 || key_len > fd.tree->max_key_len) {
pr_err("invalid xattr key length: %d\n", key_len);
res = -EIO;
@ -766,12 +766,12 @@ out:
static int hfsplus_removexattr(struct inode *inode, const char *name)
{
int err = 0;
int err;
struct hfs_find_data cat_fd;
u16 flags;
u16 cat_entry_type;
int is_xattr_acl_deleted = 0;
int is_all_xattrs_deleted = 0;
int is_xattr_acl_deleted;
int is_all_xattrs_deleted;
if (!HFSPLUS_SB(inode->i_sb)->attr_tree)
return -EOPNOTSUPP;


@ -40,8 +40,21 @@ static inline struct nilfs_dat_info *NILFS_DAT_I(struct inode *dat)
static int nilfs_dat_prepare_entry(struct inode *dat,
struct nilfs_palloc_req *req, int create)
{
return nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
create, &req->pr_entry_bh);
int ret;
ret = nilfs_palloc_get_entry_block(dat, req->pr_entry_nr,
create, &req->pr_entry_bh);
if (unlikely(ret == -ENOENT)) {
nilfs_err(dat->i_sb,
"DAT doesn't have a block to manage vblocknr = %llu",
(unsigned long long)req->pr_entry_nr);
/*
* Return internal code -EINVAL to notify bmap layer of
* metadata corruption.
*/
ret = -EINVAL;
}
return ret;
}
static void nilfs_dat_commit_entry(struct inode *dat,
@ -123,11 +136,7 @@ static void nilfs_dat_commit_free(struct inode *dat,
int nilfs_dat_prepare_start(struct inode *dat, struct nilfs_palloc_req *req)
{
int ret;
ret = nilfs_dat_prepare_entry(dat, req, 0);
WARN_ON(ret == -ENOENT);
return ret;
return nilfs_dat_prepare_entry(dat, req, 0);
}
void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req,
@ -149,19 +158,19 @@ void nilfs_dat_commit_start(struct inode *dat, struct nilfs_palloc_req *req,
int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req)
{
struct nilfs_dat_entry *entry;
__u64 start;
sector_t blocknr;
void *kaddr;
int ret;
ret = nilfs_dat_prepare_entry(dat, req, 0);
if (ret < 0) {
WARN_ON(ret == -ENOENT);
if (ret < 0)
return ret;
}
kaddr = kmap_atomic(req->pr_entry_bh->b_page);
entry = nilfs_palloc_block_get_entry(dat, req->pr_entry_nr,
req->pr_entry_bh, kaddr);
start = le64_to_cpu(entry->de_start);
blocknr = le64_to_cpu(entry->de_blocknr);
kunmap_atomic(kaddr);
@ -172,6 +181,15 @@ int nilfs_dat_prepare_end(struct inode *dat, struct nilfs_palloc_req *req)
return ret;
}
}
if (unlikely(start > nilfs_mdt_cno(dat))) {
nilfs_err(dat->i_sb,
"vblocknr = %llu has abnormal lifetime: start cno (= %llu) > current cno (= %llu)",
(unsigned long long)req->pr_entry_nr,
(unsigned long long)start,
(unsigned long long)nilfs_mdt_cno(dat));
nilfs_dat_abort_entry(dat, req);
return -EINVAL;
}
return 0;
}


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* aops.c - NTFS kernel address space operations and page cache handling.
*
* Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc.
@ -1646,7 +1646,7 @@ hole:
return block;
}
/**
/*
* ntfs_normal_aops - address space operations for normal inodes and attributes
*
* Note these are not used for compressed or mst protected inodes and
@ -1664,7 +1664,7 @@ const struct address_space_operations ntfs_normal_aops = {
.error_remove_page = generic_error_remove_page,
};
/**
/*
* ntfs_compressed_aops - address space operations for compressed inodes
*/
const struct address_space_operations ntfs_compressed_aops = {
@ -1678,9 +1678,9 @@ const struct address_space_operations ntfs_compressed_aops = {
.error_remove_page = generic_error_remove_page,
};
/**
/*
* ntfs_mst_aops - general address space operations for mst protecteed inodes
* and attributes
* and attributes
*/
const struct address_space_operations ntfs_mst_aops = {
.read_folio = ntfs_read_folio, /* Fill page with data. */


@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/**
/*
* aops.h - Defines for NTFS kernel address space operations and page cache
* handling. Part of the Linux-NTFS project.
*


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* compress.c - NTFS kernel compressed attributes handling.
* Part of the Linux-NTFS project.
*
@ -41,12 +41,12 @@ typedef enum {
NTFS_MAX_CB_SIZE = 64 * 1024,
} ntfs_compression_constants;
/**
/*
* ntfs_compression_buffer - one buffer for the decompression engine
*/
static u8 *ntfs_compression_buffer;
/**
/*
* ntfs_cb_lock - spinlock which protects ntfs_compression_buffer
*/
static DEFINE_SPINLOCK(ntfs_cb_lock);


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* dir.c - NTFS kernel directory operations. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2007 Anton Altaparmakov
@ -17,7 +17,7 @@
#include "debug.h"
#include "ntfs.h"
/**
/*
* The little endian Unicode string $I30 as a global constant.
*/
ntfschar I30[5] = { cpu_to_le16('$'), cpu_to_le16('I'),


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* inode.c - NTFS kernel inode handling.
*
* Copyright (c) 2001-2014 Anton Altaparmakov and Tuxera Inc.
@ -2935,7 +2935,7 @@ out:
}
/**
* ntfs_write_inode - write out a dirty inode
* __ntfs_write_inode - write out a dirty inode
* @vi: inode to write out
* @sync: if true, write out synchronously
*
@ -3033,7 +3033,7 @@ int __ntfs_write_inode(struct inode *vi, int sync)
* might not need to be written out.
* NOTE: It is not a problem when the inode for $MFT itself is being
* written out as mark_ntfs_record_dirty() will only set I_DIRTY_PAGES
* on the $MFT inode and hence ntfs_write_inode() will not be
* on the $MFT inode and hence __ntfs_write_inode() will not be
* re-invoked because of it which in turn is ok since the dirtied mft
* record will be cleaned and written out to disk below, i.e. before
* this function returns.


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* mft.c - NTFS kernel mft record operations. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2012 Anton Altaparmakov and Tuxera Inc.


@ -259,7 +259,7 @@ err_out:
}
}
/**
/*
* Inode operations for directories.
*/
const struct inode_operations ntfs_dir_inode_ops = {
@ -364,7 +364,7 @@ static struct dentry *ntfs_fh_to_parent(struct super_block *sb, struct fid *fid,
ntfs_nfs_get_inode);
}
/**
/*
* Export operations allowing NFS exporting of mounted NTFS partitions.
*
* We use the default ->encode_fh() for now. Note that they


@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/**
/*
* runlist.c - NTFS runlist handling code. Part of the Linux-NTFS project.
*
* Copyright (c) 2001-2007 Anton Altaparmakov


@ -58,9 +58,17 @@ const option_t on_errors_arr[] = {
};
/**
* simple_getbool -
* simple_getbool - convert input string to a boolean value
* @s: input string to convert
* @setval: where to store the output boolean value
*
* Copied from old ntfs driver (which copied from vfat driver).
*
* "1", "yes", "true", or an empty string are converted to %true.
* "0", "no", and "false" are converted to %false.
*
* Return: %1 if the string is converted or was empty and *setval contains it;
* %0 if the string was not valid.
*/
static int simple_getbool(char *s, bool *setval)
{
@ -2657,7 +2665,7 @@ static int ntfs_write_inode(struct inode *vi, struct writeback_control *wbc)
}
#endif
/**
/*
* The complete super operations.
*/
static const struct super_operations ntfs_sops = {


@ -17,6 +17,7 @@ static int __init proc_cmdline_init(void)
struct proc_dir_entry *pde;
pde = proc_create_single("cmdline", 0, NULL, cmdline_proc_show);
pde_make_permanent(pde);
pde->size = saved_command_line_len + 1;
return 0;
}


@ -4,7 +4,6 @@
#if defined(__KERNEL__) && !defined(__ASSEMBLY__)
enum {
EI_ETYPE_NONE, /* Dummy value for undefined case */
EI_ETYPE_NULL, /* Return NULL if failure */
EI_ETYPE_ERRNO, /* Return -ERRNO if failure */
EI_ETYPE_ERRNO_NULL, /* Return -ERRNO or NULL if failure */
@ -20,8 +19,10 @@ struct pt_regs;
#ifdef CONFIG_FUNCTION_ERROR_INJECTION
/*
* Whitelist generating macro. Specify functions which can be
* error-injectable using this macro.
* Whitelist generating macro. Specify functions which can be error-injectable
* using this macro. If you unsure what is required for the error-injectable
* functions, please read Documentation/fault-injection/fault-injection.rst
* 'Error Injectable Functions' section.
*/
#define ALLOW_ERROR_INJECTION(fname, _etype) \
static struct error_injection_entry __used \


@ -3,6 +3,7 @@
#define _LINUX_ERROR_INJECTION_H
#include <linux/compiler.h>
#include <linux/errno.h>
#include <asm-generic/error-injection.h>
#ifdef CONFIG_FUNCTION_ERROR_INJECTION
@ -19,7 +20,7 @@ static inline bool within_error_injection_list(unsigned long addr)
static inline int get_injectable_error_type(unsigned long addr)
{
return EI_ETYPE_NONE;
return -EOPNOTSUPP;
}
#endif


@ -403,7 +403,8 @@ extern int kimage_crash_copy_vmcoreinfo(struct kimage *image);
extern struct kimage *kexec_image;
extern struct kimage *kexec_crash_image;
extern int kexec_load_disabled;
bool kexec_load_permitted(int kexec_image_type);
#ifndef kexec_flush_icache_page
#define kexec_flush_icache_page(page)


@ -152,9 +152,11 @@ __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
static inline void
percpu_counter_add(struct percpu_counter *fbc, s64 amount)
{
preempt_disable();
unsigned long flags;
local_irq_save(flags);
fbc->count += amount;
preempt_enable();
local_irq_restore(flags);
}
/* non-SMP percpu_counter_add_local is the same with percpu_counter_add */
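
For context, the !SMP percpu_counter_add() above now disables local
interrupts instead of only preemption. A hypothetical sketch (names
invented) of the situation this protects against, where the same counter
is updated from both process and interrupt context on a uniprocessor
build:

#include <linux/interrupt.h>
#include <linux/percpu_counter.h>

/* Assumed to be initialized elsewhere with percpu_counter_init(). */
static struct percpu_counter example_counter;

static void example_account(s64 amount)
{
        /* Process-context update ... */
        percpu_counter_add(&example_counter, amount);
}

static irqreturn_t example_irq(int irq, void *data)
{
        /* ... can race with this interrupt-context update unless
         * interrupts are disabled around the read-modify-write. */
        percpu_counter_add(&example_counter, 1);
        return IRQ_HANDLED;
}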


@ -2,6 +2,8 @@
#ifndef _LINUX_HELPER_MACROS_H_
#define _LINUX_HELPER_MACROS_H_
#include <linux/math.h>
#define __find_closest(x, a, as, op) \
({ \
typeof(as) __fc_i, __fc_as = (as) - 1; \


@ -11,6 +11,7 @@
#include <linux/syscalls.h>
#include <linux/utime.h>
#include <linux/file.h>
#include <linux/kstrtox.h>
#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/namei.h>
@ -571,8 +572,7 @@ __setup("keepinitrd", keepinitrd_setup);
static bool __initdata initramfs_async = true;
static int __init initramfs_async_setup(char *str)
{
strtobool(str, &initramfs_async);
return 1;
return kstrtobool(str, &initramfs_async) == 0;
}
__setup("initramfs_async=", initramfs_async_setup);


@ -142,6 +142,8 @@ static void check_hung_task(struct task_struct *t, unsigned long timeout)
if (sysctl_hung_task_all_cpu_backtrace)
hung_task_show_all_bt = true;
if (!sysctl_hung_task_warnings)
pr_info("Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings\n");
}
touch_nmi_watchdog();


@ -190,10 +190,12 @@ out_unlock:
static inline int kexec_load_check(unsigned long nr_segments,
unsigned long flags)
{
int image_type = (flags & KEXEC_ON_CRASH) ?
KEXEC_TYPE_CRASH : KEXEC_TYPE_DEFAULT;
int result;
/* We only trust the superuser with rebooting the system. */
if (!capable(CAP_SYS_BOOT) || kexec_load_disabled)
if (!kexec_load_permitted(image_type))
return -EPERM;
/* Permit LSMs and IMA to fail the kexec */


@ -921,10 +921,64 @@ int kimage_load_segment(struct kimage *image,
return result;
}
struct kexec_load_limit {
/* Mutex protects the limit count. */
struct mutex mutex;
int limit;
};
static struct kexec_load_limit load_limit_reboot = {
.mutex = __MUTEX_INITIALIZER(load_limit_reboot.mutex),
.limit = -1,
};
static struct kexec_load_limit load_limit_panic = {
.mutex = __MUTEX_INITIALIZER(load_limit_panic.mutex),
.limit = -1,
};
struct kimage *kexec_image;
struct kimage *kexec_crash_image;
int kexec_load_disabled;
static int kexec_load_disabled;
#ifdef CONFIG_SYSCTL
static int kexec_limit_handler(struct ctl_table *table, int write,
void *buffer, size_t *lenp, loff_t *ppos)
{
struct kexec_load_limit *limit = table->data;
int val;
struct ctl_table tmp = {
.data = &val,
.maxlen = sizeof(val),
.mode = table->mode,
};
int ret;
if (write) {
ret = proc_dointvec(&tmp, write, buffer, lenp, ppos);
if (ret)
return ret;
if (val < 0)
return -EINVAL;
mutex_lock(&limit->mutex);
if (limit->limit != -1 && val >= limit->limit)
ret = -EINVAL;
else
limit->limit = val;
mutex_unlock(&limit->mutex);
return ret;
}
mutex_lock(&limit->mutex);
val = limit->limit;
mutex_unlock(&limit->mutex);
return proc_dointvec(&tmp, write, buffer, lenp, ppos);
}
static struct ctl_table kexec_core_sysctls[] = {
{
.procname = "kexec_load_disabled",
@ -936,6 +990,18 @@ static struct ctl_table kexec_core_sysctls[] = {
.extra1 = SYSCTL_ONE,
.extra2 = SYSCTL_ONE,
},
{
.procname = "kexec_load_limit_panic",
.data = &load_limit_panic,
.mode = 0644,
.proc_handler = kexec_limit_handler,
},
{
.procname = "kexec_load_limit_reboot",
.data = &load_limit_reboot,
.mode = 0644,
.proc_handler = kexec_limit_handler,
},
{ }
};
@ -947,6 +1013,32 @@ static int __init kexec_core_sysctl_init(void)
late_initcall(kexec_core_sysctl_init);
#endif
bool kexec_load_permitted(int kexec_image_type)
{
struct kexec_load_limit *limit;
/*
* Only the superuser can use the kexec syscall and if it has not
* been disabled.
*/
if (!capable(CAP_SYS_BOOT) || kexec_load_disabled)
return false;
/* Check limit counter and decrease it.*/
limit = (kexec_image_type == KEXEC_TYPE_CRASH) ?
&load_limit_panic : &load_limit_reboot;
mutex_lock(&limit->mutex);
if (!limit->limit) {
mutex_unlock(&limit->mutex);
return false;
}
if (limit->limit != -1)
limit->limit--;
mutex_unlock(&limit->mutex);
return true;
}
/*
* No panic_cpu check version of crash_kexec(). This function is called
* only when panic_cpu holds the current CPU number; this is the only CPU


@ -326,11 +326,13 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
unsigned long, cmdline_len, const char __user *, cmdline_ptr,
unsigned long, flags)
{
int ret = 0, i;
int image_type = (flags & KEXEC_FILE_ON_CRASH) ?
KEXEC_TYPE_CRASH : KEXEC_TYPE_DEFAULT;
struct kimage **dest_image, *image;
int ret = 0, i;
/* We only trust the superuser with rebooting the system. */
if (!capable(CAP_SYS_BOOT) || kexec_load_disabled)
if (!kexec_load_permitted(image_type))
return -EPERM;
/* Make sure we have a legal set of flags */
@ -342,11 +344,12 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
if (!kexec_trylock())
return -EBUSY;
dest_image = &kexec_image;
if (flags & KEXEC_FILE_ON_CRASH) {
if (image_type == KEXEC_TYPE_CRASH) {
dest_image = &kexec_crash_image;
if (kexec_crash_image)
arch_kexec_unprotect_crashkres();
} else {
dest_image = &kexec_image;
}
if (flags & KEXEC_FILE_UNLOAD)


@ -1382,6 +1382,10 @@ EXPORT_SYMBOL_GPL(kthread_flush_worker);
* Flush and destroy @worker. The simple flush is enough because the kthread
* worker API is used only in trivial scenarios. There are no multi-step state
* machines needed.
*
* Note that this function does not handle delayed work, so the caller is
* responsible for queuing or canceling all delayed work items before
* invoking this function.
*/
void kthread_destroy_worker(struct kthread_worker *worker)
{
@ -1393,6 +1397,7 @@ void kthread_destroy_worker(struct kthread_worker *worker)
kthread_flush_worker(worker);
kthread_stop(task);
WARN_ON(!list_empty(&worker->delayed_work_list));
WARN_ON(!list_empty(&worker->work_list));
kfree(worker);
}
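
The kernel-doc note added above says the caller must deal with delayed
work before destroying a worker; a minimal, hypothetical teardown sketch
(names invented) that follows that rule:

#include <linux/kthread.h>

/* Assumed created elsewhere with kthread_create_worker() and
 * kthread_init_delayed_work(). */
static struct kthread_worker *example_worker;
static struct kthread_delayed_work example_dwork;

static void example_teardown(void)
{
        /* Cancel (or flush) delayed work first; kthread_destroy_worker()
         * now warns if delayed_work_list is not empty. */
        kthread_cancel_delayed_work_sync(&example_dwork);
        kthread_destroy_worker(example_worker);
}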


@ -229,7 +229,7 @@ void __put_user_ns(struct user_namespace *ns)
EXPORT_SYMBOL(__put_user_ns);
/**
* idmap_key struct holds the information necessary to find an idmapping in a
* struct idmap_key - holds the information necessary to find an idmapping in a
* sorted idmap array. It is passed to cmp_map_id() as first argument.
*/
struct idmap_key {


@ -1185,13 +1185,16 @@ config DEBUG_TIMEKEEPING
config DEBUG_PREEMPT
bool "Debug preemptible kernel"
depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT
default y
help
If you say Y here then the kernel will use a debug variant of the
commonly used smp_processor_id() function and will print warnings
if kernel code uses it in a preemption-unsafe way. Also, the kernel
will detect preemption count underflows.
This option has the potential to introduce high runtime overhead,
depending on the workload, as it triggers debugging routines for each
this_cpu operation. It should only be used for debugging purposes.
menu "Lock Debugging (spinlocks, mutexes, etc...)"
config LOCK_DEBUGGING_SUPPORT
@ -2029,6 +2032,41 @@ menuconfig RUNTIME_TESTING_MENU
if RUNTIME_TESTING_MENU
config TEST_DHRY
tristate "Dhrystone benchmark test"
help
Enable this to include the Dhrystone 2.1 benchmark. This test
calculates the number of Dhrystones per second, and the number of
DMIPS (Dhrystone MIPS) obtained when the Dhrystone score is divided
by 1757 (the number of Dhrystones per second obtained on the VAX
11/780, nominally a 1 MIPS machine).
To run the benchmark, it needs to be enabled explicitly, either from
the kernel command line (when built-in), or from userspace (when
built-in or modular).
Run once during kernel boot:
test_dhry.run
Set number of iterations from kernel command line:
test_dhry.iterations=<n>
Set number of iterations from userspace:
echo <n> > /sys/module/test_dhry/parameters/iterations
Trigger manual run from userspace:
echo y > /sys/module/test_dhry/parameters/run
If the number of iterations is <= 0, the test will devise a suitable
number of iterations (test runs for at least 2s) automatically.
This process takes ca. 4s.
If unsure, say N.
config LKDTM
tristate "Linux Kernel Dump Test Tool Module"
depends on DEBUG_FS


@ -57,6 +57,8 @@ obj-$(CONFIG_TEST_HEXDUMP) += test_hexdump.o
obj-y += kstrtox.o
obj-$(CONFIG_FIND_BIT_BENCHMARK) += find_bit_benchmark.o
obj-$(CONFIG_TEST_BPF) += test_bpf.o
test_dhry-objs := dhry_1.o dhry_2.o dhry_run.o
obj-$(CONFIG_TEST_DHRY) += test_dhry.o
obj-$(CONFIG_TEST_FIRMWARE) += test_firmware.o
obj-$(CONFIG_TEST_BITOPS) += test_bitops.o
CFLAGS_test_bitops.o += -Werror

lib/dhry.h (new file, 358 lines)

@ -0,0 +1,358 @@
/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
/*
****************************************************************************
*
* "DHRYSTONE" Benchmark Program
* -----------------------------
*
* Version: C, Version 2.1
*
* File: dhry.h (part 1 of 3)
*
* Date: May 25, 1988
*
* Author: Reinhold P. Weicker
* Siemens AG, AUT E 51
* Postfach 3220
* 8520 Erlangen
* Germany (West)
* Phone: [+49]-9131-7-20330
* (8-17 Central European Time)
* Usenet: ..!mcsun!unido!estevax!weicker
*
* Original Version (in Ada) published in
* "Communications of the ACM" vol. 27., no. 10 (Oct. 1984),
* pp. 1013 - 1030, together with the statistics
* on which the distribution of statements etc. is based.
*
* In this C version, the following C library functions are used:
* - strcpy, strcmp (inside the measurement loop)
* - printf, scanf (outside the measurement loop)
* In addition, Berkeley UNIX system calls "times ()" or "time ()"
* are used for execution time measurement. For measurements
* on other systems, these calls have to be changed.
*
* Collection of Results:
* Reinhold Weicker (address see above) and
*
* Rick Richardson
* PC Research. Inc.
* 94 Apple Orchard Drive
* Tinton Falls, NJ 07724
* Phone: (201) 389-8963 (9-17 EST)
* Usenet: ...!uunet!pcrat!rick
*
* Please send results to Rick Richardson and/or Reinhold Weicker.
* Complete information should be given on hardware and software used.
* Hardware information includes: Machine type, CPU, type and size
* of caches; for microprocessors: clock frequency, memory speed
* (number of wait states).
* Software information includes: Compiler (and runtime library)
* manufacturer and version, compilation switches, OS version.
* The Operating System version may give an indication about the
* compiler; Dhrystone itself performs no OS calls in the measurement loop.
*
* The complete output generated by the program should be mailed
* such that at least some checks for correctness can be made.
*
***************************************************************************
*
* History: This version C/2.1 has been made for two reasons:
*
* 1) There is an obvious need for a common C version of
* Dhrystone, since C is at present the most popular system
* programming language for the class of processors
* (microcomputers, minicomputers) where Dhrystone is used most.
* There should be, as far as possible, only one C version of
* Dhrystone such that results can be compared without
* restrictions. In the past, the C versions distributed
* by Rick Richardson (Version 1.1) and by Reinhold Weicker
* had small (though not significant) differences.
*
* 2) As far as it is possible without changes to the Dhrystone
* statistics, optimizing compilers should be prevented from
* removing significant statements.
*
* This C version has been developed in cooperation with
* Rick Richardson (Tinton Falls, NJ), it incorporates many
* ideas from the "Version 1.1" distributed previously by
* him over the UNIX network Usenet.
* I also thank Chaim Benedelac (National Semiconductor),
* David Ditzel (SUN), Earl Killian and John Mashey (MIPS),
* Alan Smith and Rafael Saavedra-Barrera (UC at Berkeley)
* for their help with comments on earlier versions of the
* benchmark.
*
* Changes: In the initialization part, this version follows mostly
* Rick Richardson's version distributed via Usenet, not the
* version distributed earlier via floppy disk by Reinhold Weicker.
* As a concession to older compilers, names have been made
* unique within the first 8 characters.
* Inside the measurement loop, this version follows the
* version previously distributed by Reinhold Weicker.
*
* At several places in the benchmark, code has been added,
* but within the measurement loop only in branches that
* are not executed. The intention is that optimizing compilers
* should be prevented from moving code out of the measurement
* loop, or from removing code altogether. Since the statements
* that are executed within the measurement loop have NOT been
* changed, the numbers defining the "Dhrystone distribution"
* (distribution of statements, operand types and locality)
* still hold. Except for sophisticated optimizing compilers,
* execution times for this version should be the same as
* for previous versions.
*
* Since it has proven difficult to subtract the time for the
* measurement loop overhead in a correct way, the loop check
* has been made a part of the benchmark. This does have
* an impact - though a very minor one - on the distribution
* statistics which have been updated for this version.
*
* All changes within the measurement loop are described
* and discussed in the companion paper "Rationale for
* Dhrystone version 2".
*
* Because of the self-imposed limitation that the order and
* distribution of the executed statements should not be
* changed, there are still cases where optimizing compilers
* may not generate code for some statements. To a certain
* degree, this is unavoidable for small synthetic benchmarks.
* Users of the benchmark are advised to check code listings
* whether code is generated for all statements of Dhrystone.
*
* Version 2.1 is identical to version 2.0 distributed via
* the UNIX network Usenet in March 1988 except that it corrects
* some minor deficiencies that were found by users of version 2.0.
* The only change within the measurement loop is that a
* non-executed "else" part was added to the "if" statement in
* Func_3, and a non-executed "else" part removed from Proc_3.
*
***************************************************************************
*
* Compilation model and measurement (IMPORTANT):
*
* This C version of Dhrystone consists of three files:
* - dhry.h (this file, containing global definitions and comments)
* - dhry_1.c (containing the code corresponding to Ada package Pack_1)
* - dhry_2.c (containing the code corresponding to Ada package Pack_2)
*
* The following "ground rules" apply for measurements:
* - Separate compilation
* - No procedure merging
* - Otherwise, compiler optimizations are allowed but should be indicated
* - Default results are those without register declarations
* See the companion paper "Rationale for Dhrystone Version 2" for a more
* detailed discussion of these ground rules.
*
* For 16-Bit processors (e.g. 80186, 80286), times for all compilation
* models ("small", "medium", "large" etc.) should be given if possible,
* together with a definition of these models for the compiler system used.
*
**************************************************************************
*
* Dhrystone (C version) statistics:
*
* [Comment from the first distribution, updated for version 2.
* Note that because of language differences, the numbers are slightly
* different from the Ada version.]
*
* The following program contains statements of a high level programming
* language (here: C) in a distribution considered representative:
*
* assignments 52 (51.0 %)
* control statements 33 (32.4 %)
* procedure, function calls 17 (16.7 %)
*
* 103 statements are dynamically executed. The program is balanced with
* respect to the three aspects:
*
* - statement type
* - operand type
* - operand locality
* operand global, local, parameter, or constant.
*
* The combination of these three aspects is balanced only approximately.
*
* 1. Statement Type:
* ----------------- number
*
* V1 = V2 9
* (incl. V1 = F(..)
* V = Constant 12
* Assignment, 7
* with array element
* Assignment, 6
* with record component
* --
* 34 34
*
* X = Y +|-|"&&"|"|" Z 5
* X = Y +|-|"==" Constant 6
* X = X +|- 1 3
* X = Y *|/ Z 2
* X = Expression, 1
* two operators
* X = Expression, 1
* three operators
* --
* 18 18
*
* if .... 14
* with "else" 7
* without "else" 7
* executed 3
* not executed 4
* for ... 7 | counted every time
* while ... 4 | the loop condition
* do ... while 1 | is evaluated
* switch ... 1
* break 1
* declaration with 1
* initialization
* --
* 34 34
*
* P (...) procedure call 11
* user procedure 10
* library procedure 1
* X = F (...)
* function call 6
* user function 5
* library function 1
* --
* 17 17
* ---
* 103
*
* The average number of parameters in procedure or function calls
* is 1.82 (not counting the function values as implicit parameters).
*
*
* 2. Operators
* ------------
* number approximate
* percentage
*
* Arithmetic 32 50.8
*
* + 21 33.3
* - 7 11.1
* * 3 4.8
* / (int div) 1 1.6
*
* Comparison 27 42.8
*
* == 9 14.3
* /= 4 6.3
* > 1 1.6
* < 3 4.8
* >= 1 1.6
* <= 9 14.3
*
* Logic 4 6.3
*
* && (AND-THEN) 1 1.6
* | (OR) 1 1.6
* ! (NOT) 2 3.2
*
* -- -----
* 63 100.1
*
*
* 3. Operand Type (counted once per operand reference):
* ---------------
* number approximate
* percentage
*
* Integer 175 72.3 %
* Character 45 18.6 %
* Pointer 12 5.0 %
* String30 6 2.5 %
* Array 2 0.8 %
* Record 2 0.8 %
* --- -------
* 242 100.0 %
*
* When there is an access path leading to the final operand (e.g. a record
* component), only the final data type on the access path is counted.
*
*
* 4. Operand Locality:
* -------------------
* number approximate
* percentage
*
* local variable 114 47.1 %
* global variable 22 9.1 %
* parameter 45 18.6 %
* value 23 9.5 %
* reference 22 9.1 %
* function result 6 2.5 %
* constant 55 22.7 %
* --- -------
* 242 100.0 %
*
*
* The program does not compute anything meaningful, but it is syntactically
* and semantically correct. All variables have a value assigned to them
* before they are used as a source operand.
*
* There has been no explicit effort to account for the effects of a
* cache, or to balance the use of long or short displacements for code or
* data.
*
***************************************************************************
*/
typedef enum {
Ident_1,
Ident_2,
Ident_3,
Ident_4,
Ident_5
} Enumeration; /* for boolean and enumeration types in Ada, Pascal */
/* General definitions: */
typedef int One_Thirty;
typedef int One_Fifty;
typedef char Capital_Letter;
typedef int Boolean;
typedef char Str_30[31];
typedef int Arr_1_Dim[50];
typedef int Arr_2_Dim[50][50];
typedef struct record {
struct record *Ptr_Comp;
Enumeration Discr;
union {
struct {
Enumeration Enum_Comp;
int Int_Comp;
char Str_Comp[31];
} var_1;
struct {
Enumeration E_Comp_2;
char Str_2_Comp[31];
} var_2;
struct {
char Ch_1_Comp;
char Ch_2_Comp;
} var_3;
} variant;
} Rec_Type, *Rec_Pointer;
extern int Int_Glob;
extern char Ch_1_Glob;
void Proc_6(Enumeration Enum_Val_Par, Enumeration *Enum_Ref_Par);
void Proc_7(One_Fifty Int_1_Par_Val, One_Fifty Int_2_Par_Val,
One_Fifty *Int_Par_Ref);
void Proc_8(Arr_1_Dim Arr_1_Par_Ref, Arr_2_Dim Arr_2_Par_Ref,
int Int_1_Par_Val, int Int_2_Par_Val);
Enumeration Func_1(Capital_Letter Ch_1_Par_Val, Capital_Letter Ch_2_Par_Val);
Boolean Func_2(Str_30 Str_1_Par_Ref, Str_30 Str_2_Par_Ref);
int dhry(int n);

lib/dhry_1.c (new file, 283 lines)

@ -0,0 +1,283 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/*
****************************************************************************
*
* "DHRYSTONE" Benchmark Program
* -----------------------------
*
* Version: C, Version 2.1
*
* File: dhry_1.c (part 2 of 3)
*
* Date: May 25, 1988
*
* Author: Reinhold P. Weicker
*
****************************************************************************
*/
#include "dhry.h"
#include <linux/ktime.h>
#include <linux/slab.h>
#include <linux/string.h>
/* Global Variables: */
int Int_Glob;
char Ch_1_Glob;
static Rec_Pointer Ptr_Glob, Next_Ptr_Glob;
static Boolean Bool_Glob;
static char Ch_2_Glob;
static int Arr_1_Glob[50];
static int Arr_2_Glob[50][50];
static void Proc_3(Rec_Pointer *Ptr_Ref_Par)
/******************/
/* executed once */
/* Ptr_Ref_Par becomes Ptr_Glob */
{
if (Ptr_Glob) {
/* then, executed */
*Ptr_Ref_Par = Ptr_Glob->Ptr_Comp;
}
Proc_7(10, Int_Glob, &Ptr_Glob->variant.var_1.Int_Comp);
} /* Proc_3 */
static void Proc_1(Rec_Pointer Ptr_Val_Par)
/******************/
/* executed once */
{
Rec_Pointer Next_Record = Ptr_Val_Par->Ptr_Comp;
/* == Ptr_Glob_Next */
/* Local variable, initialized with Ptr_Val_Par->Ptr_Comp, */
/* corresponds to "rename" in Ada, "with" in Pascal */
*Ptr_Val_Par->Ptr_Comp = *Ptr_Glob;
Ptr_Val_Par->variant.var_1.Int_Comp = 5;
Next_Record->variant.var_1.Int_Comp =
Ptr_Val_Par->variant.var_1.Int_Comp;
Next_Record->Ptr_Comp = Ptr_Val_Par->Ptr_Comp;
Proc_3(&Next_Record->Ptr_Comp);
/* Ptr_Val_Par->Ptr_Comp->Ptr_Comp == Ptr_Glob->Ptr_Comp */
if (Next_Record->Discr == Ident_1) {
/* then, executed */
Next_Record->variant.var_1.Int_Comp = 6;
Proc_6(Ptr_Val_Par->variant.var_1.Enum_Comp,
&Next_Record->variant.var_1.Enum_Comp);
Next_Record->Ptr_Comp = Ptr_Glob->Ptr_Comp;
Proc_7(Next_Record->variant.var_1.Int_Comp, 10,
&Next_Record->variant.var_1.Int_Comp);
} else {
/* not executed */
*Ptr_Val_Par = *Ptr_Val_Par->Ptr_Comp;
}
} /* Proc_1 */
static void Proc_2(One_Fifty *Int_Par_Ref)
/******************/
/* executed once */
/* *Int_Par_Ref == 1, becomes 4 */
{
One_Fifty Int_Loc;
Enumeration Enum_Loc;
Int_Loc = *Int_Par_Ref + 10;
do {
/* executed once */
if (Ch_1_Glob == 'A') {
/* then, executed */
Int_Loc -= 1;
*Int_Par_Ref = Int_Loc - Int_Glob;
Enum_Loc = Ident_1;
} /* if */
} while (Enum_Loc != Ident_1); /* true */
} /* Proc_2 */
static void Proc_4(void)
/*******/
/* executed once */
{
Boolean Bool_Loc;
Bool_Loc = Ch_1_Glob == 'A';
Bool_Glob = Bool_Loc | Bool_Glob;
Ch_2_Glob = 'B';
} /* Proc_4 */
static void Proc_5(void)
/*******/
/* executed once */
{
Ch_1_Glob = 'A';
Bool_Glob = false;
} /* Proc_5 */
int dhry(int n)
/*****/
/* main program, corresponds to procedures */
/* Main and Proc_0 in the Ada version */
{
One_Fifty Int_1_Loc;
One_Fifty Int_2_Loc;
One_Fifty Int_3_Loc;
char Ch_Index;
Enumeration Enum_Loc;
Str_30 Str_1_Loc;
Str_30 Str_2_Loc;
int Run_Index;
int Number_Of_Runs;
ktime_t Begin_Time, End_Time;
u32 User_Time;
/* Initializations */
Next_Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL);
Ptr_Glob = (Rec_Pointer)kzalloc(sizeof(Rec_Type), GFP_KERNEL);
Ptr_Glob->Ptr_Comp = Next_Ptr_Glob;
Ptr_Glob->Discr = Ident_1;
Ptr_Glob->variant.var_1.Enum_Comp = Ident_3;
Ptr_Glob->variant.var_1.Int_Comp = 40;
strcpy(Ptr_Glob->variant.var_1.Str_Comp,
"DHRYSTONE PROGRAM, SOME STRING");
strcpy(Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING");
Arr_2_Glob[8][7] = 10;
/* Was missing in published program. Without this statement, */
/* Arr_2_Glob[8][7] would have an undefined value. */
/* Warning: With 16-Bit processors and Number_Of_Runs > 32000, */
/* overflow may occur for this array element. */
pr_debug("Dhrystone Benchmark, Version 2.1 (Language: C)\n");
Number_Of_Runs = n;
pr_debug("Execution starts, %d runs through Dhrystone\n",
Number_Of_Runs);
/***************/
/* Start timer */
/***************/
Begin_Time = ktime_get();
for (Run_Index = 1; Run_Index <= Number_Of_Runs; ++Run_Index) {
Proc_5();
Proc_4();
/* Ch_1_Glob == 'A', Ch_2_Glob == 'B', Bool_Glob == true */
Int_1_Loc = 2;
Int_2_Loc = 3;
strcpy(Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING");
Enum_Loc = Ident_2;
Bool_Glob = !Func_2(Str_1_Loc, Str_2_Loc);
/* Bool_Glob == 1 */
while (Int_1_Loc < Int_2_Loc) {
/* loop body executed once */
Int_3_Loc = 5 * Int_1_Loc - Int_2_Loc;
/* Int_3_Loc == 7 */
Proc_7(Int_1_Loc, Int_2_Loc, &Int_3_Loc);
/* Int_3_Loc == 7 */
Int_1_Loc += 1;
} /* while */
/* Int_1_Loc == 3, Int_2_Loc == 3, Int_3_Loc == 7 */
Proc_8(Arr_1_Glob, Arr_2_Glob, Int_1_Loc, Int_3_Loc);
/* Int_Glob == 5 */
Proc_1(Ptr_Glob);
for (Ch_Index = 'A'; Ch_Index <= Ch_2_Glob; ++Ch_Index) {
/* loop body executed twice */
if (Enum_Loc == Func_1(Ch_Index, 'C')) {
/* then, not executed */
Proc_6(Ident_1, &Enum_Loc);
strcpy(Str_2_Loc, "DHRYSTONE PROGRAM, 3'RD STRING");
Int_2_Loc = Run_Index;
Int_Glob = Run_Index;
}
}
/* Int_1_Loc == 3, Int_2_Loc == 3, Int_3_Loc == 7 */
Int_2_Loc = Int_2_Loc * Int_1_Loc;
Int_1_Loc = Int_2_Loc / Int_3_Loc;
Int_2_Loc = 7 * (Int_2_Loc - Int_3_Loc) - Int_1_Loc;
/* Int_1_Loc == 1, Int_2_Loc == 13, Int_3_Loc == 7 */
Proc_2(&Int_1_Loc);
/* Int_1_Loc == 5 */
} /* loop "for Run_Index" */
/**************/
/* Stop timer */
/**************/
End_Time = ktime_get();
#define dhry_assert_int_eq(val, expected) \
if (val != expected) \
pr_err("%s: %d (FAIL, expected %d)\n", #val, val, \
expected); \
else \
pr_debug("%s: %d (OK)\n", #val, val)
#define dhry_assert_char_eq(val, expected) \
if (val != expected) \
pr_err("%s: %c (FAIL, expected %c)\n", #val, val, \
expected); \
else \
pr_debug("%s: %c (OK)\n", #val, val)
#define dhry_assert_string_eq(val, expected) \
if (strcmp(val, expected)) \
pr_err("%s: %s (FAIL, expected %s)\n", #val, val, \
expected); \
else \
pr_debug("%s: %s (OK)\n", #val, val)
pr_debug("Execution ends\n");
pr_debug("Final values of the variables used in the benchmark:\n");
dhry_assert_int_eq(Int_Glob, 5);
dhry_assert_int_eq(Bool_Glob, 1);
dhry_assert_char_eq(Ch_1_Glob, 'A');
dhry_assert_char_eq(Ch_2_Glob, 'B');
dhry_assert_int_eq(Arr_1_Glob[8], 7);
dhry_assert_int_eq(Arr_2_Glob[8][7], Number_Of_Runs + 10);
pr_debug("Ptr_Comp: %px\n", Ptr_Glob->Ptr_Comp);
dhry_assert_int_eq(Ptr_Glob->Discr, 0);
dhry_assert_int_eq(Ptr_Glob->variant.var_1.Enum_Comp, 2);
dhry_assert_int_eq(Ptr_Glob->variant.var_1.Int_Comp, 17);
dhry_assert_string_eq(Ptr_Glob->variant.var_1.Str_Comp,
"DHRYSTONE PROGRAM, SOME STRING");
if (Next_Ptr_Glob->Ptr_Comp != Ptr_Glob->Ptr_Comp)
pr_err("Next_Ptr_Glob->Ptr_Comp: %px (expected %px)\n",
Next_Ptr_Glob->Ptr_Comp, Ptr_Glob->Ptr_Comp);
else
pr_debug("Next_Ptr_Glob->Ptr_Comp: %px\n",
Next_Ptr_Glob->Ptr_Comp);
dhry_assert_int_eq(Next_Ptr_Glob->Discr, 0);
dhry_assert_int_eq(Next_Ptr_Glob->variant.var_1.Enum_Comp, 1);
dhry_assert_int_eq(Next_Ptr_Glob->variant.var_1.Int_Comp, 18);
dhry_assert_string_eq(Next_Ptr_Glob->variant.var_1.Str_Comp,
"DHRYSTONE PROGRAM, SOME STRING");
dhry_assert_int_eq(Int_1_Loc, 5);
dhry_assert_int_eq(Int_2_Loc, 13);
dhry_assert_int_eq(Int_3_Loc, 7);
dhry_assert_int_eq(Enum_Loc, 1);
dhry_assert_string_eq(Str_1_Loc, "DHRYSTONE PROGRAM, 1'ST STRING");
dhry_assert_string_eq(Str_2_Loc, "DHRYSTONE PROGRAM, 2'ND STRING");
User_Time = ktime_to_ms(ktime_sub(End_Time, Begin_Time));
kfree(Ptr_Glob);
kfree(Next_Ptr_Glob);
/* Measurements should last at least 2 seconds */
if (User_Time < 2 * MSEC_PER_SEC)
return -EAGAIN;
return div_u64(mul_u32_u32(MSEC_PER_SEC, Number_Of_Runs), User_Time);
}

175
lib/dhry_2.c Normal file
View File

@ -0,0 +1,175 @@
// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
/*
****************************************************************************
*
* "DHRYSTONE" Benchmark Program
* -----------------------------
*
* Version: C, Version 2.1
*
* File: dhry_2.c (part 3 of 3)
*
* Date: May 25, 1988
*
* Author: Reinhold P. Weicker
*
****************************************************************************
*/
#include "dhry.h"
#include <linux/string.h>
static Boolean Func_3(Enumeration Enum_Par_Val)
/***************************/
/* executed once */
/* Enum_Par_Val == Ident_3 */
{
Enumeration Enum_Loc;
Enum_Loc = Enum_Par_Val;
if (Enum_Loc == Ident_3) {
/* then, executed */
return true;
} else {
/* not executed */
return false;
}
} /* Func_3 */
void Proc_6(Enumeration Enum_Val_Par, Enumeration *Enum_Ref_Par)
/*********************************/
/* executed once */
/* Enum_Val_Par == Ident_3, Enum_Ref_Par becomes Ident_2 */
{
*Enum_Ref_Par = Enum_Val_Par;
if (!Func_3(Enum_Val_Par)) {
/* then, not executed */
*Enum_Ref_Par = Ident_4;
}
switch (Enum_Val_Par) {
case Ident_1:
*Enum_Ref_Par = Ident_1;
break;
case Ident_2:
if (Int_Glob > 100) {
/* then */
*Enum_Ref_Par = Ident_1;
} else {
*Enum_Ref_Par = Ident_4;
}
break;
case Ident_3: /* executed */
*Enum_Ref_Par = Ident_2;
break;
case Ident_4:
break;
case Ident_5:
*Enum_Ref_Par = Ident_3;
break;
} /* switch */
} /* Proc_6 */
void Proc_7(One_Fifty Int_1_Par_Val, One_Fifty Int_2_Par_Val, One_Fifty *Int_Par_Ref)
/**********************************************/
/* executed three times */
/* first call: Int_1_Par_Val == 2, Int_2_Par_Val == 3, */
/* Int_Par_Ref becomes 7 */
/* second call: Int_1_Par_Val == 10, Int_2_Par_Val == 5, */
/* Int_Par_Ref becomes 17 */
/* third call: Int_1_Par_Val == 6, Int_2_Par_Val == 10, */
/* Int_Par_Ref becomes 18 */
{
One_Fifty Int_Loc;
Int_Loc = Int_1_Par_Val + 2;
*Int_Par_Ref = Int_2_Par_Val + Int_Loc;
} /* Proc_7 */
void Proc_8(Arr_1_Dim Arr_1_Par_Ref, Arr_2_Dim Arr_2_Par_Ref, int Int_1_Par_Val, int Int_2_Par_Val)
/*********************************************************************/
/* executed once */
/* Int_Par_Val_1 == 3 */
/* Int_Par_Val_2 == 7 */
{
One_Fifty Int_Index;
One_Fifty Int_Loc;
Int_Loc = Int_1_Par_Val + 5;
Arr_1_Par_Ref[Int_Loc] = Int_2_Par_Val;
Arr_1_Par_Ref[Int_Loc+1] = Arr_1_Par_Ref[Int_Loc];
Arr_1_Par_Ref[Int_Loc+30] = Int_Loc;
for (Int_Index = Int_Loc; Int_Index <= Int_Loc+1; ++Int_Index)
Arr_2_Par_Ref[Int_Loc][Int_Index] = Int_Loc;
Arr_2_Par_Ref[Int_Loc][Int_Loc-1] += 1;
Arr_2_Par_Ref[Int_Loc+20][Int_Loc] = Arr_1_Par_Ref[Int_Loc];
Int_Glob = 5;
} /* Proc_8 */
Enumeration Func_1(Capital_Letter Ch_1_Par_Val, Capital_Letter Ch_2_Par_Val)
/*************************************************/
/* executed three times */
/* first call: Ch_1_Par_Val == 'H', Ch_2_Par_Val == 'R' */
/* second call: Ch_1_Par_Val == 'A', Ch_2_Par_Val == 'C' */
/* third call: Ch_1_Par_Val == 'B', Ch_2_Par_Val == 'C' */
{
Capital_Letter Ch_1_Loc;
Capital_Letter Ch_2_Loc;
Ch_1_Loc = Ch_1_Par_Val;
Ch_2_Loc = Ch_1_Loc;
if (Ch_2_Loc != Ch_2_Par_Val) {
/* then, executed */
return Ident_1;
} else {
/* not executed */
Ch_1_Glob = Ch_1_Loc;
return Ident_2;
}
} /* Func_1 */
Boolean Func_2(Str_30 Str_1_Par_Ref, Str_30 Str_2_Par_Ref)
/*************************************************/
/* executed once */
/* Str_1_Par_Ref == "DHRYSTONE PROGRAM, 1'ST STRING" */
/* Str_2_Par_Ref == "DHRYSTONE PROGRAM, 2'ND STRING" */
{
One_Thirty Int_Loc;
Capital_Letter Ch_Loc;
Int_Loc = 2;
while (Int_Loc <= 2) {
/* loop body executed once */
if (Func_1(Str_1_Par_Ref[Int_Loc],
Str_2_Par_Ref[Int_Loc+1]) == Ident_1) {
/* then, executed */
Ch_Loc = 'A';
Int_Loc += 1;
}
} /* if, while */
if (Ch_Loc >= 'W' && Ch_Loc < 'Z') {
/* then, not executed */
Int_Loc = 7;
}
if (Ch_Loc == 'R') {
/* then, not executed */
return true;
} else {
/* executed */
if (strcmp(Str_1_Par_Ref, Str_2_Par_Ref) > 0) {
/* then, not executed */
Int_Loc += 7;
Int_Glob = Int_Loc;
return true;
} else {
/* executed */
return false;
}
} /* if Ch_Loc */
} /* Func_2 */

85
lib/dhry_run.c Normal file
View File

@ -0,0 +1,85 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Dhrystone benchmark test module
*
* Copyright (C) 2022 Glider bv
*/
#include "dhry.h"
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/mutex.h>
#include <linux/smp.h>
#define DHRY_VAX 1757
static int dhry_run_set(const char *val, const struct kernel_param *kp);
static const struct kernel_param_ops run_ops = {
.flags = KERNEL_PARAM_OPS_FL_NOARG,
.set = dhry_run_set,
};
static bool dhry_run;
module_param_cb(run, &run_ops, &dhry_run, 0200);
MODULE_PARM_DESC(run, "Run the test (default: false)");
static int iterations = -1;
module_param(iterations, int, 0644);
MODULE_PARM_DESC(iterations,
"Number of iterations through the benchmark (default: auto)");
static void dhry_benchmark(void)
{
int i, n;
if (iterations > 0) {
n = dhry(iterations);
goto report;
}
for (i = DHRY_VAX; i > 0; i <<= 1) {
n = dhry(i);
if (n != -EAGAIN)
break;
}
report:
if (n >= 0)
pr_info("CPU%u: Dhrystones per Second: %d (%d DMIPS)\n",
smp_processor_id(), n, n / DHRY_VAX);
else if (n == -EAGAIN)
pr_err("Please increase the number of iterations\n");
else
pr_err("Dhrystone benchmark failed error %pe\n", ERR_PTR(n));
}
static int dhry_run_set(const char *val, const struct kernel_param *kp)
{
int ret;
if (val) {
ret = param_set_bool(val, kp);
if (ret)
return ret;
} else {
dhry_run = true;
}
if (dhry_run && system_state == SYSTEM_RUNNING)
dhry_benchmark();
return 0;
}
static int __init dhry_init(void)
{
if (dhry_run)
dhry_benchmark();
return 0;
}
module_init(dhry_init);
MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
MODULE_LICENSE("GPL");
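For context: DHRY_VAX above is the Dhrystones-per-second score of the VAX 11/780, the traditional 1-DMIPS reference machine, so the module reports dhry()'s return value divided by 1757 as DMIPS. Below is a minimal userspace sketch of the same arithmetic; the iteration count and elapsed time are invented purely for illustration.

/* Userspace sketch of the DMIPS arithmetic; sample numbers are hypothetical. */
#include <stdio.h>
#include <stdint.h>

#define DHRY_VAX 1757	/* Dhrystones/s of the VAX 11/780, the 1-DMIPS reference */

int main(void)
{
	uint64_t number_of_runs = 10000000;	/* hypothetical iteration count */
	uint64_t user_time_ms = 2500;		/* hypothetical runtime, must be >= 2000 ms */
	uint64_t dhry_per_s = (1000 * number_of_runs) / user_time_ms;

	/* Same shape as the pr_info() emitted by dhry_benchmark() above */
	printf("Dhrystones per Second: %llu (%llu DMIPS)\n",
	       (unsigned long long)dhry_per_s,
	       (unsigned long long)(dhry_per_s / DHRY_VAX));
	return 0;
}

With these numbers the sketch prints 4,000,000 Dhrystones per second, i.e. 2276 DMIPS. dhry() returns -EAGAIN when the measurement lasts less than two seconds, and dhry_benchmark() responds by doubling the iteration count, which is what the i <<= 1 loop above implements.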

View File

@ -40,7 +40,7 @@ bool within_error_injection_list(unsigned long addr)
int get_injectable_error_type(unsigned long addr)
{
struct ei_entry *ent;
int ei_type = EI_ETYPE_NONE;
int ei_type = -EINVAL;
mutex_lock(&ei_mutex);
list_for_each_entry(ent, &error_injection_list, list) {

View File

@ -40,32 +40,30 @@ static inline size_t chunk_size(const struct gen_pool_chunk *chunk)
return chunk->end_addr - chunk->start_addr + 1;
}
static int set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
static inline int
set_bits_ll(unsigned long *addr, unsigned long mask_to_set)
{
unsigned long val, nval;
unsigned long val = READ_ONCE(*addr);
nval = *addr;
do {
val = nval;
if (val & mask_to_set)
return -EBUSY;
cpu_relax();
} while ((nval = cmpxchg(addr, val, val | mask_to_set)) != val);
} while (!try_cmpxchg(addr, &val, val | mask_to_set));
return 0;
}
static int clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
static inline int
clear_bits_ll(unsigned long *addr, unsigned long mask_to_clear)
{
unsigned long val, nval;
unsigned long val = READ_ONCE(*addr);
nval = *addr;
do {
val = nval;
if ((val & mask_to_clear) != mask_to_clear)
return -EBUSY;
cpu_relax();
} while ((nval = cmpxchg(addr, val, val & ~mask_to_clear)) != val);
} while (!try_cmpxchg(addr, &val, val & ~mask_to_clear));
return 0;
}
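The conversion above from an open-coded cmpxchg() loop to try_cmpxchg() works because try_cmpxchg() refreshes its "expected" argument with the current memory contents whenever the exchange fails, so the loop no longer needs the separate nval reload or the cpu_relax() dance. A standalone sketch of the same retry pattern, written with C11 atomics rather than the kernel primitives purely to keep the example self-contained:

#include <stdatomic.h>
#include <stdio.h>

/* Same shape as set_bits_ll() above: fail if any requested bit is already set. */
static int set_bits(_Atomic unsigned long *addr, unsigned long mask_to_set)
{
	unsigned long val = atomic_load(addr);

	do {
		if (val & mask_to_set)
			return -1;	/* -EBUSY in the kernel version */
		/* on failure, compare_exchange writes the current *addr back into val */
	} while (!atomic_compare_exchange_weak(addr, &val, val | mask_to_set));

	return 0;
}

int main(void)
{
	_Atomic unsigned long word = 0;

	printf("first set:  %d\n", set_bits(&word, 0x3));	/* succeeds */
	printf("second set: %d\n", set_bits(&word, 0x1));	/* bit already set */
	return 0;
}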

View File

@ -73,28 +73,33 @@ void percpu_counter_set(struct percpu_counter *fbc, s64 amount)
EXPORT_SYMBOL(percpu_counter_set);
/*
* This function is both preempt and irq safe. The former is due to explicit
* preemption disable. The latter is guaranteed by the fact that the slow path
* is explicitly protected by an irq-safe spinlock whereas the fast patch uses
* this_cpu_add which is irq-safe by definition. Hence there is no need muck
* with irq state before calling this one
* local_irq_save() is needed to make the function irq safe:
* - The slow path would be ok as protected by an irq-safe spinlock.
* - this_cpu_add would be ok as it is irq-safe by definition.
* But:
* The decision slow path/fast path and the actual update must be atomic, too.
* Otherwise a call in process context could check the current values and
* decide that the fast path can be used. If now an interrupt occurs before
* the this_cpu_add(), and the interrupt updates this_cpu(*fbc->counters),
* then the this_cpu_add() that is executed after the interrupt has completed
* can produce values larger than "batch" or even overflows.
*/
void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)
{
s64 count;
unsigned long flags;
preempt_disable();
local_irq_save(flags);
count = __this_cpu_read(*fbc->counters) + amount;
if (abs(count) >= batch) {
unsigned long flags;
raw_spin_lock_irqsave(&fbc->lock, flags);
raw_spin_lock(&fbc->lock);
fbc->count += count;
__this_cpu_sub(*fbc->counters, count - amount);
raw_spin_unlock_irqrestore(&fbc->lock, flags);
raw_spin_unlock(&fbc->lock);
} else {
this_cpu_add(*fbc->counters, amount);
}
preempt_enable();
local_irq_restore(flags);
}
EXPORT_SYMBOL(percpu_counter_add_batch);
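The rewritten comment above is easiest to see with concrete numbers. The timeline below is a worked illustration only (hypothetical values, batch = 32) of how the old preempt_disable()-based fast path could let the per-CPU delta exceed the batch:

/*
 * process context (old code: preemption       interrupt on the same CPU
 * disabled, interrupts still enabled)
 * ------------------------------------------  --------------------------------
 * count = __this_cpu_read(*counters) + 5;
 *   reads 25; 25 + 5 = 30 < 32, so the
 *   fast path is chosen
 *                                              percpu_counter_add(fbc, 6);
 *                                                sees 25 + 6 = 31 < 32, also
 *                                                takes the fast path, per-CPU
 *                                                value becomes 31
 * this_cpu_add(*counters, 5);
 *   per-CPU value becomes 36 > batch
 *
 * The per-CPU delta now exceeds 'batch', which the rest of the code assumes
 * cannot happen; repeated interrupts in that window can push it arbitrarily
 * far.  Disabling interrupts around the read/decide/update sequence, as the
 * patch does, closes the window.
 */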

View File

@ -54,7 +54,7 @@
/* architecture-specific bits */
#ifdef CONFIG_ZLIB_DFLTCC
# include "../zlib_dfltcc/dfltcc.h"
# include "../zlib_dfltcc/dfltcc_deflate.h"
#else
#define DEFLATE_RESET_HOOK(strm) do {} while (0)
#define DEFLATE_HOOK(strm, flush, bstate) 0
@ -106,7 +106,7 @@ typedef struct deflate_workspace {
deflate_state deflate_memory;
#ifdef CONFIG_ZLIB_DFLTCC
/* State memory for s390 hardware deflate */
struct dfltcc_state dfltcc_memory;
struct dfltcc_deflate_state dfltcc_memory;
#endif
Byte *window_memory;
Pos *prev_memory;
@ -451,17 +451,24 @@ int zlib_deflate(
Assert(strm->avail_out > 0, "bug2");
if (flush != Z_FINISH) return Z_OK;
if (s->noheader) return Z_STREAM_END;
/* Write the zlib trailer (adler32) */
putShortMSB(s, (uInt)(strm->adler >> 16));
putShortMSB(s, (uInt)(strm->adler & 0xffff));
if (!s->noheader) {
/* Write zlib trailer (adler32) */
putShortMSB(s, (uInt)(strm->adler >> 16));
putShortMSB(s, (uInt)(strm->adler & 0xffff));
}
flush_pending(strm);
/* If avail_out is zero, the application will call deflate again
* to flush the rest.
*/
s->noheader = -1; /* write the trailer only once! */
return s->pending != 0 ? Z_OK : Z_STREAM_END;
if (!s->noheader) {
s->noheader = -1; /* write the trailer only once! */
}
if (s->pending == 0) {
Assert(s->bi_valid == 0, "bi_buf not flushed");
return Z_STREAM_END;
}
return Z_OK;
}
/* ========================================================================= */
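One practical consequence of the hunk above: zlib_deflate() now returns Z_STREAM_END only once the trailer and the bit buffer have actually been flushed, so a caller must keep calling it with Z_FINISH while it returns Z_OK. A hedged sketch of such a caller using the in-kernel zlib API follows; the helper name is made up for the example and the buffer handling is deliberately simplified.

#include <linux/zlib.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Hypothetical helper: compress src into dst, return compressed size or -errno. */
static int example_zlib_compress(const u8 *src, unsigned int slen,
				 u8 *dst, unsigned int dlen)
{
	struct z_stream_s strm = {};
	int ret;

	strm.workspace = vmalloc(zlib_deflate_workspacesize(MAX_WBITS, MAX_MEM_LEVEL));
	if (!strm.workspace)
		return -ENOMEM;

	ret = zlib_deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
				MAX_WBITS, MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY);
	if (ret != Z_OK) {
		vfree(strm.workspace);
		return -EINVAL;
	}

	strm.next_in = src;
	strm.avail_in = slen;
	strm.next_out = dst;
	strm.avail_out = dlen;

	/* Keep flushing: Z_OK means "call me again", Z_STREAM_END means done. */
	do {
		ret = zlib_deflate(&strm, Z_FINISH);
	} while (ret == Z_OK && strm.avail_out > 0);

	ret = (ret == Z_STREAM_END) ? (int)strm.total_out : -EIO;
	zlib_deflateEnd(&strm);
	vfree(strm.workspace);
	return ret;
}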

View File

@ -23,37 +23,18 @@ char *oesc_msg(
}
}
void dfltcc_reset(
z_streamp strm,
uInt size
)
{
struct dfltcc_state *dfltcc_state =
(struct dfltcc_state *)((char *)strm->state + size);
struct dfltcc_qaf_param *param =
(struct dfltcc_qaf_param *)&dfltcc_state->param;
void dfltcc_reset_state(struct dfltcc_state *dfltcc_state) {
/* Initialize available functions */
if (is_dfltcc_enabled()) {
dfltcc(DFLTCC_QAF, param, NULL, NULL, NULL, NULL, NULL);
memmove(&dfltcc_state->af, param, sizeof(dfltcc_state->af));
dfltcc(DFLTCC_QAF, &dfltcc_state->param, NULL, NULL, NULL, NULL, NULL);
memmove(&dfltcc_state->af, &dfltcc_state->param, sizeof(dfltcc_state->af));
} else
memset(&dfltcc_state->af, 0, sizeof(dfltcc_state->af));
/* Initialize parameter block */
memset(&dfltcc_state->param, 0, sizeof(dfltcc_state->param));
dfltcc_state->param.nt = 1;
/* Initialize tuning parameters */
if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG)
dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG;
else
dfltcc_state->level_mask = DFLTCC_LEVEL_MASK;
dfltcc_state->block_size = DFLTCC_BLOCK_SIZE;
dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE;
dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE;
dfltcc_state->param.ribm = DFLTCC_RIBM;
}
EXPORT_SYMBOL(dfltcc_reset);
MODULE_LICENSE("GPL");

View File

@ -93,63 +93,32 @@ static_assert(sizeof(struct dfltcc_param_v0) == 1536);
struct dfltcc_state {
struct dfltcc_param_v0 param; /* Parameter block */
struct dfltcc_qaf_param af; /* Available functions */
char msg[64]; /* Buffer for strm->msg */
};
/*
* Extension of inflate_state and deflate_state for DFLTCC.
*/
struct dfltcc_deflate_state {
struct dfltcc_state common; /* Parameter block */
uLong level_mask; /* Levels on which to use DFLTCC */
uLong block_size; /* New block each X bytes */
uLong block_threshold; /* New block after total_in > X */
uLong dht_threshold; /* New block only if avail_in >= X */
char msg[64]; /* Buffer for strm->msg */
};
#define ALIGN_UP(p, size) (__typeof__(p))(((uintptr_t)(p) + ((size) - 1)) & ~((size) - 1))
/* Resides right after inflate_state or deflate_state */
#define GET_DFLTCC_STATE(state) ((struct dfltcc_state *)((state) + 1))
#define GET_DFLTCC_STATE(state) ((struct dfltcc_state *)((char *)(state) + ALIGN_UP(sizeof(*state), 8)))
void dfltcc_reset_state(struct dfltcc_state *dfltcc_state);
/* External functions */
int dfltcc_can_deflate(z_streamp strm);
int dfltcc_deflate(z_streamp strm,
int flush,
block_state *result);
void dfltcc_reset(z_streamp strm, uInt size);
int dfltcc_can_inflate(z_streamp strm);
typedef enum {
DFLTCC_INFLATE_CONTINUE,
DFLTCC_INFLATE_BREAK,
DFLTCC_INFLATE_SOFTWARE,
} dfltcc_inflate_action;
dfltcc_inflate_action dfltcc_inflate(z_streamp strm,
int flush, int *ret);
static inline int is_dfltcc_enabled(void)
{
return (zlib_dfltcc_support != ZLIB_DFLTCC_DISABLED &&
test_facility(DFLTCC_FACILITY));
}
#define DEFLATE_RESET_HOOK(strm) \
dfltcc_reset((strm), sizeof(deflate_state))
#define DEFLATE_HOOK dfltcc_deflate
#define DEFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_deflate((strm)))
#define DEFLATE_DFLTCC_ENABLED() is_dfltcc_enabled()
#define INFLATE_RESET_HOOK(strm) \
dfltcc_reset((strm), sizeof(struct inflate_state))
#define INFLATE_TYPEDO_HOOK(strm, flush) \
if (dfltcc_can_inflate((strm))) { \
dfltcc_inflate_action action; \
\
RESTORE(); \
action = dfltcc_inflate((strm), (flush), &ret); \
LOAD(); \
if (action == DFLTCC_INFLATE_CONTINUE) \
break; \
else if (action == DFLTCC_INFLATE_BREAK) \
goto inf_leave; \
}
#define INFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_inflate((strm)))
#define INFLATE_NEED_UPDATEWINDOW(strm) (!dfltcc_can_inflate((strm)))
#endif /* DFLTCC_H */
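The GET_DFLTCC_STATE() change above matters because the DFLTCC state is appended directly after the software inflate/deflate state; ALIGN_UP() rounds that offset to an 8-byte boundary so the appended structure is aligned no matter what sizeof(*state) happens to be. A tiny standalone illustration of the arithmetic, using an invented state size:

#include <stdio.h>
#include <stdint.h>

/* Same rounding as the ALIGN_UP() macro above, written out for plain integers. */
#define ALIGN_UP(x, a) (((uintptr_t)(x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

int main(void)
{
	uintptr_t sw_state_size = 9149;	/* invented sizeof(deflate_state)-like value */

	printf("unaligned offset: %lu\n", (unsigned long)sw_state_size);		/* 9149 */
	printf("aligned offset:   %lu\n", (unsigned long)ALIGN_UP(sw_state_size, 8));	/* 9152 */
	return 0;
}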

View File

@ -2,11 +2,13 @@
#include "../zlib_deflate/defutil.h"
#include "dfltcc_util.h"
#include "dfltcc.h"
#include "dfltcc_deflate.h"
#include <asm/setup.h>
#include <linux/export.h>
#include <linux/zutil.h>
#define GET_DFLTCC_DEFLATE_STATE(state) ((struct dfltcc_deflate_state *)GET_DFLTCC_STATE(state))
/*
* Compress.
*/
@ -15,7 +17,7 @@ int dfltcc_can_deflate(
)
{
deflate_state *state = (deflate_state *)strm->state;
struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);
struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state);
/* Check for kernel dfltcc command line parameter */
if (zlib_dfltcc_support == ZLIB_DFLTCC_DISABLED ||
@ -28,22 +30,39 @@ int dfltcc_can_deflate(
return 0;
/* Unsupported hardware */
if (!is_bit_set(dfltcc_state->af.fns, DFLTCC_GDHT) ||
!is_bit_set(dfltcc_state->af.fns, DFLTCC_CMPR) ||
!is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0))
if (!is_bit_set(dfltcc_state->common.af.fns, DFLTCC_GDHT) ||
!is_bit_set(dfltcc_state->common.af.fns, DFLTCC_CMPR) ||
!is_bit_set(dfltcc_state->common.af.fmts, DFLTCC_FMT0))
return 0;
return 1;
}
EXPORT_SYMBOL(dfltcc_can_deflate);
void dfltcc_reset_deflate_state(z_streamp strm) {
deflate_state *state = (deflate_state *)strm->state;
struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state);
dfltcc_reset_state(&dfltcc_state->common);
/* Initialize tuning parameters */
if (zlib_dfltcc_support == ZLIB_DFLTCC_FULL_DEBUG)
dfltcc_state->level_mask = DFLTCC_LEVEL_MASK_DEBUG;
else
dfltcc_state->level_mask = DFLTCC_LEVEL_MASK;
dfltcc_state->block_size = DFLTCC_BLOCK_SIZE;
dfltcc_state->block_threshold = DFLTCC_FIRST_FHT_BLOCK_SIZE;
dfltcc_state->dht_threshold = DFLTCC_DHT_MIN_SAMPLE_SIZE;
}
EXPORT_SYMBOL(dfltcc_reset_deflate_state);
static void dfltcc_gdht(
z_streamp strm
)
{
deflate_state *state = (deflate_state *)strm->state;
struct dfltcc_param_v0 *param = &GET_DFLTCC_STATE(state)->param;
size_t avail_in = avail_in = strm->avail_in;
size_t avail_in = strm->avail_in;
dfltcc(DFLTCC_GDHT,
param, NULL, NULL,
@ -104,39 +123,46 @@ int dfltcc_deflate(
)
{
deflate_state *state = (deflate_state *)strm->state;
struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);
struct dfltcc_param_v0 *param = &dfltcc_state->param;
struct dfltcc_deflate_state *dfltcc_state = GET_DFLTCC_DEFLATE_STATE(state);
struct dfltcc_param_v0 *param = &dfltcc_state->common.param;
uInt masked_avail_in;
dfltcc_cc cc;
int need_empty_block;
int soft_bcc;
int no_flush;
if (!dfltcc_can_deflate(strm))
if (!dfltcc_can_deflate(strm)) {
/* Clear history. */
if (flush == Z_FULL_FLUSH)
param->hl = 0;
return 0;
}
again:
masked_avail_in = 0;
soft_bcc = 0;
no_flush = flush == Z_NO_FLUSH;
/* Trailing empty block. Switch to software, except when Continuation Flag
* is set, which means that DFLTCC has buffered some output in the
* parameter block and needs to be called again in order to flush it.
/* No input data. Return, except when Continuation Flag is set, which means
* that DFLTCC has buffered some output in the parameter block and needs to
* be called again in order to flush it.
*/
if (flush == Z_FINISH && strm->avail_in == 0 && !param->cf) {
if (param->bcf) {
/* A block is still open, and the hardware does not support closing
* blocks without adding data. Thus, close it manually.
*/
if (strm->avail_in == 0 && !param->cf) {
/* A block is still open, and the hardware does not support closing
* blocks without adding data. Thus, close it manually.
*/
if (!no_flush && param->bcf) {
send_eobs(strm, param);
param->bcf = 0;
}
return 0;
}
if (strm->avail_in == 0 && !param->cf) {
*result = need_more;
/* Let one of deflate_* functions write a trailing empty block. */
if (flush == Z_FINISH)
return 0;
/* Clear history. */
if (flush == Z_FULL_FLUSH)
param->hl = 0;
/* Trigger block post-processing if necessary. */
*result = no_flush ? need_more : block_done;
return 1;
}
@ -163,13 +189,18 @@ again:
param->bcf = 0;
dfltcc_state->block_threshold =
strm->total_in + dfltcc_state->block_size;
if (strm->avail_out == 0) {
*result = need_more;
return 1;
}
}
}
/* No space for compressed data. If we proceed, dfltcc_cmpr() will return
* DFLTCC_CC_OP1_TOO_SHORT without buffering header bits, but we will still
* set BCF=1, which is wrong. Avoid complications and return early.
*/
if (strm->avail_out == 0) {
*result = need_more;
return 1;
}
/* The caller gave us too much data. Pass only one block worth of
* uncompressed data to DFLTCC and mask the rest, so that on the next
* iteration we start a new block.
@ -189,7 +220,7 @@ again:
param->cvt = CVT_ADLER32;
if (!no_flush)
/* We need to close a block. Always do this in software - when there is
* no input data, the hardware will not nohor BCC. */
* no input data, the hardware will not honor BCC. */
soft_bcc = 1;
if (flush == Z_FINISH && !param->bcf)
/* We are about to open a BFINAL block, set Block Header Final bit
@ -204,8 +235,8 @@ again:
param->sbb = (unsigned int)state->bi_valid;
if (param->sbb > 0)
*strm->next_out = (Byte)state->bi_buf;
if (param->hl)
param->nt = 0; /* Honor history */
/* Honor history and check value */
param->nt = 0;
param->cv = strm->adler;
/* When opening a block, choose a Huffman-Table Type */
@ -232,7 +263,7 @@ again:
} while (cc == DFLTCC_CC_AGAIN);
/* Translate parameter block to stream */
strm->msg = oesc_msg(dfltcc_state->msg, param->oesc);
strm->msg = oesc_msg(dfltcc_state->common.msg, param->oesc);
state->bi_valid = param->sbb;
if (state->bi_valid == 0)
state->bi_buf = 0; /* Avoid accessing next_out */

View File

@ -0,0 +1,21 @@
// SPDX-License-Identifier: Zlib
#ifndef DFLTCC_DEFLATE_H
#define DFLTCC_DEFLATE_H
#include "dfltcc.h"
/* External functions */
int dfltcc_can_deflate(z_streamp strm);
int dfltcc_deflate(z_streamp strm,
int flush,
block_state *result);
void dfltcc_reset_deflate_state(z_streamp strm);
#define DEFLATE_RESET_HOOK(strm) \
dfltcc_reset_deflate_state((strm))
#define DEFLATE_HOOK dfltcc_deflate
#define DEFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_deflate((strm)))
#endif /* DFLTCC_DEFLATE_H */

View File

@ -2,7 +2,7 @@
#include "../zlib_inflate/inflate.h"
#include "dfltcc_util.h"
#include "dfltcc.h"
#include "dfltcc_inflate.h"
#include <asm/setup.h>
#include <linux/export.h>
#include <linux/zutil.h>
@ -22,16 +22,20 @@ int dfltcc_can_inflate(
zlib_dfltcc_support == ZLIB_DFLTCC_DEFLATE_ONLY)
return 0;
/* Unsupported compression settings */
if (state->wbits != HB_BITS)
return 0;
/* Unsupported hardware */
return is_bit_set(dfltcc_state->af.fns, DFLTCC_XPND) &&
is_bit_set(dfltcc_state->af.fmts, DFLTCC_FMT0);
}
EXPORT_SYMBOL(dfltcc_can_inflate);
void dfltcc_reset_inflate_state(z_streamp strm) {
struct inflate_state *state = (struct inflate_state *)strm->state;
struct dfltcc_state *dfltcc_state = GET_DFLTCC_STATE(state);
dfltcc_reset_state(dfltcc_state);
}
EXPORT_SYMBOL(dfltcc_reset_inflate_state);
static int dfltcc_was_inflate_used(
z_streamp strm
)
@ -91,8 +95,10 @@ dfltcc_inflate_action dfltcc_inflate(
struct dfltcc_param_v0 *param = &dfltcc_state->param;
dfltcc_cc cc;
if (flush == Z_BLOCK) {
/* DFLTCC does not support stopping on block boundaries */
if (flush == Z_BLOCK || flush == Z_PACKET_FLUSH) {
/* DFLTCC does not support stopping on block boundaries (the Z_BLOCK flush option)
* or the Z_PACKET_FLUSH option (used exclusively by the PPP driver)
*/
if (dfltcc_inflate_disable(strm)) {
*ret = Z_STREAM_ERROR;
return DFLTCC_INFLATE_BREAK;
@ -121,8 +127,6 @@ dfltcc_inflate_action dfltcc_inflate(
/* Translate stream to parameter block */
param->cvt = CVT_ADLER32;
param->sbb = state->bits;
param->hl = state->whave; /* Software and hardware history formats match */
param->ho = (state->write - state->whave) & ((1 << HB_BITS) - 1);
if (param->hl)
param->nt = 0; /* Honor history for the first block */
param->cv = state->check;
@ -136,8 +140,6 @@ dfltcc_inflate_action dfltcc_inflate(
strm->msg = oesc_msg(dfltcc_state->msg, param->oesc);
state->last = cc == DFLTCC_CC_OK;
state->bits = param->sbb;
state->whave = param->hl;
state->write = (param->ho + param->hl) & ((1 << HB_BITS) - 1);
state->check = param->cv;
if (cc == DFLTCC_CC_OP2_CORRUPT && param->oesc != 0) {
/* Report an error if stream is corrupted */

View File

@ -0,0 +1,37 @@
// SPDX-License-Identifier: Zlib
#ifndef DFLTCC_INFLATE_H
#define DFLTCC_INFLATE_H
#include "dfltcc.h"
/* External functions */
void dfltcc_reset_inflate_state(z_streamp strm);
int dfltcc_can_inflate(z_streamp strm);
typedef enum {
DFLTCC_INFLATE_CONTINUE,
DFLTCC_INFLATE_BREAK,
DFLTCC_INFLATE_SOFTWARE,
} dfltcc_inflate_action;
dfltcc_inflate_action dfltcc_inflate(z_streamp strm,
int flush, int *ret);
#define INFLATE_RESET_HOOK(strm) \
dfltcc_reset_inflate_state((strm))
#define INFLATE_TYPEDO_HOOK(strm, flush) \
if (dfltcc_can_inflate((strm))) { \
dfltcc_inflate_action action; \
\
RESTORE(); \
action = dfltcc_inflate((strm), (flush), &ret); \
LOAD(); \
if (action == DFLTCC_INFLATE_CONTINUE) \
break; \
else if (action == DFLTCC_INFLATE_BREAK) \
goto inf_leave; \
}
#define INFLATE_NEED_CHECKSUM(strm) (!dfltcc_can_inflate((strm)))
#define INFLATE_NEED_UPDATEWINDOW(strm) (!dfltcc_can_inflate((strm)))
#endif /* DFLTCC_INFLATE_H */

View File

@ -17,7 +17,7 @@
/* architecture-specific bits */
#ifdef CONFIG_ZLIB_DFLTCC
# include "../zlib_dfltcc/dfltcc.h"
# include "../zlib_dfltcc/dfltcc_inflate.h"
#else
#define INFLATE_RESET_HOOK(strm) do {} while (0)
#define INFLATE_TYPEDO_HOOK(strm, flush) do {} while (0)

View File

@ -1013,7 +1013,7 @@ static int flow_mask_insert(struct flow_table *tbl, struct sw_flow *flow,
mask = flow_mask_find(tbl, new);
if (!mask) {
/* Allocate a new mask if none exsits. */
/* Allocate a new mask if none exists. */
mask = mask_alloc();
if (!mask)
return -ENOMEM;

View File

@ -80,8 +80,7 @@ def calc(oldfile, newfile, format):
if d<0: shrink, down = shrink+1, down-d
delta.append((d, name))
delta.sort()
delta.reverse()
delta.sort(reverse=True)
return grow, shrink, add, remove, up, down, delta, old, new, otot, ntot
def print_result(symboltype, symbolformat):

View File

@ -823,7 +823,9 @@ our %deprecated_apis = (
"get_state_synchronize_sched" => "get_state_synchronize_rcu",
"cond_synchronize_sched" => "cond_synchronize_rcu",
"kmap" => "kmap_local_page",
"kunmap" => "kunmap_local",
"kmap_atomic" => "kmap_local_page",
"kunmap_atomic" => "kunmap_local",
);
#Create a search pattern for all these strings to speed up a loop below
@ -3142,21 +3144,33 @@ sub process {
if ($sign_off =~ /^co-developed-by:$/i) {
if ($email eq $author) {
WARN("BAD_SIGN_OFF",
"Co-developed-by: should not be used to attribute nominal patch author '$author'\n" . "$here\n" . $rawline);
"Co-developed-by: should not be used to attribute nominal patch author '$author'\n" . $herecurr);
}
if (!defined $lines[$linenr]) {
WARN("BAD_SIGN_OFF",
"Co-developed-by: must be immediately followed by Signed-off-by:\n" . "$here\n" . $rawline);
} elsif ($rawlines[$linenr] !~ /^\s*signed-off-by:\s*(.*)/i) {
"Co-developed-by: must be immediately followed by Signed-off-by:\n" . $herecurr);
} elsif ($rawlines[$linenr] !~ /^signed-off-by:\s*(.*)/i) {
WARN("BAD_SIGN_OFF",
"Co-developed-by: must be immediately followed by Signed-off-by:\n" . "$here\n" . $rawline . "\n" .$rawlines[$linenr]);
"Co-developed-by: must be immediately followed by Signed-off-by:\n" . $herecurr . $rawlines[$linenr] . "\n");
} elsif ($1 ne $email) {
WARN("BAD_SIGN_OFF",
"Co-developed-by and Signed-off-by: name/email do not match \n" . "$here\n" . $rawline . "\n" .$rawlines[$linenr]);
"Co-developed-by and Signed-off-by: name/email do not match\n" . $herecurr . $rawlines[$linenr] . "\n");
}
}
# check if Reported-by: is followed by a Link:
if ($sign_off =~ /^reported(?:|-and-tested)-by:$/i) {
if (!defined $lines[$linenr]) {
WARN("BAD_REPORTED_BY_LINK",
"Reported-by: should be immediately followed by Link: to the report\n" . $herecurr . $rawlines[$linenr] . "\n");
} elsif ($rawlines[$linenr] !~ m{^link:\s*https?://}i) {
WARN("BAD_REPORTED_BY_LINK",
"Reported-by: should be immediately followed by Link: with a URL to the report\n" . $herecurr . $rawlines[$linenr] . "\n");
}
}
}
# Check Fixes: styles is correct
if (!$in_header_lines &&
$line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) {
@ -3250,6 +3264,18 @@ sub process {
$commit_log_possible_stack_dump = 0;
}
# Check for odd tags before a URI/URL
if ($in_commit_log &&
$line =~ /^\s*(\w+):\s*http/ && $1 ne 'Link') {
if ($1 =~ /^v(?:ersion)?\d+/i) {
WARN("COMMIT_LOG_VERSIONING",
"Patch version information should be after the --- line\n" . $herecurr);
} else {
WARN("COMMIT_LOG_USE_LINK",
"Unknown link reference '$1:', use 'Link:' instead\n" . $herecurr);
}
}
# Check for lines starting with a #
if ($in_commit_log && $line =~ /^#/) {
if (WARN("COMMIT_COMMENT_SYMBOL",
@ -3725,7 +3751,7 @@ sub process {
}
# check for embedded filenames
if ($rawline =~ /^\+.*\Q$realfile\E/) {
if ($rawline =~ /^\+.*\b\Q$realfile\E\b/) {
WARN("EMBEDDED_FILENAME",
"It's generally not useful to have the filename in the file\n" . $herecurr);
}
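For reference, the new BAD_REPORTED_BY_LINK check above expects a Reported-by: (or Reported-and-tested-by:) trailer to be immediately followed by a Link: line whose value starts with http:// or https://. A commit-message trailer block that satisfies the check would look like the following; the names and URL are placeholders, not taken from any real patch:

Reported-by: Jane Doe <jane.doe@example.org>
Link: https://lore.kernel.org/r/some-message-id@example.org/
Signed-off-by: Patch Author <author@example.org>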

222
scripts/gdb/linux/mm.py Normal file
View File

@ -0,0 +1,222 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# gdb helper commands and functions for Linux kernel debugging
#
# routines to introspect page table
#
# Authors:
# Dmitrii Bundin <dmitrii.bundin.a@gmail.com>
#
import gdb
from linux import utils
PHYSICAL_ADDRESS_MASK = gdb.parse_and_eval('0xfffffffffffff')
def page_mask(level=1):
# 4KB
if level == 1:
return gdb.parse_and_eval('(u64) ~0xfff')
# 2MB
elif level == 2:
return gdb.parse_and_eval('(u64) ~0x1fffff')
# 1GB
elif level == 3:
return gdb.parse_and_eval('(u64) ~0x3fffffff')
else:
raise Exception(f'Unknown page level: {level}')
#page_offset_base in case CONFIG_DYNAMIC_MEMORY_LAYOUT is disabled
POB_NO_DYNAMIC_MEM_LAYOUT = '0xffff888000000000'
def _page_offset_base():
pob_symbol = gdb.lookup_global_symbol('page_offset_base')
pob = pob_symbol.name if pob_symbol else POB_NO_DYNAMIC_MEM_LAYOUT
return gdb.parse_and_eval(pob)
def is_bit_defined_tupled(data, offset):
return offset, bool(data >> offset & 1)
def content_tupled(data, bit_start, bit_end):
return (bit_start, bit_end), data >> bit_start & ((1 << (1 + bit_end - bit_start)) - 1)
def entry_va(level, phys_addr, translating_va):
def start_bit(level):
if level == 5:
return 48
elif level == 4:
return 39
elif level == 3:
return 30
elif level == 2:
return 21
elif level == 1:
return 12
else:
raise Exception(f'Unknown level {level}')
entry_offset = ((translating_va >> start_bit(level)) & 511) * 8
entry_va = _page_offset_base() + phys_addr + entry_offset
return entry_va
class Cr3():
def __init__(self, cr3, page_levels):
self.cr3 = cr3
self.page_levels = page_levels
self.page_level_write_through = is_bit_defined_tupled(cr3, 3)
self.page_level_cache_disabled = is_bit_defined_tupled(cr3, 4)
self.next_entry_physical_address = cr3 & PHYSICAL_ADDRESS_MASK & page_mask()
def next_entry(self, va):
next_level = self.page_levels
return PageHierarchyEntry(entry_va(next_level, self.next_entry_physical_address, va), next_level)
def mk_string(self):
return f"""\
cr3:
{'cr3 binary data': <30} {hex(self.cr3)}
{'next entry physical address': <30} {hex(self.next_entry_physical_address)}
---
{'bit' : <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]}
{'bit' : <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]}
"""
class PageHierarchyEntry():
def __init__(self, address, level):
data = int.from_bytes(
memoryview(gdb.selected_inferior().read_memory(address, 8)),
"little"
)
if level == 1:
self.is_page = True
self.entry_present = is_bit_defined_tupled(data, 0)
self.read_write = is_bit_defined_tupled(data, 1)
self.user_access_allowed = is_bit_defined_tupled(data, 2)
self.page_level_write_through = is_bit_defined_tupled(data, 3)
self.page_level_cache_disabled = is_bit_defined_tupled(data, 4)
self.entry_was_accessed = is_bit_defined_tupled(data, 5)
self.dirty = is_bit_defined_tupled(data, 6)
self.pat = is_bit_defined_tupled(data, 7)
self.global_translation = is_bit_defined_tupled(data, 8)
self.page_physical_address = data & PHYSICAL_ADDRESS_MASK & page_mask(level)
self.next_entry_physical_address = None
self.hlat_restart_with_ordinary = is_bit_defined_tupled(data, 11)
self.protection_key = content_tupled(data, 59, 62)
self.executed_disable = is_bit_defined_tupled(data, 63)
else:
page_size = is_bit_defined_tupled(data, 7)
page_size_bit = page_size[1]
self.is_page = page_size_bit
self.entry_present = is_bit_defined_tupled(data, 0)
self.read_write = is_bit_defined_tupled(data, 1)
self.user_access_allowed = is_bit_defined_tupled(data, 2)
self.page_level_write_through = is_bit_defined_tupled(data, 3)
self.page_level_cache_disabled = is_bit_defined_tupled(data, 4)
self.entry_was_accessed = is_bit_defined_tupled(data, 5)
self.page_size = page_size
self.dirty = is_bit_defined_tupled(
data, 6) if page_size_bit else None
self.global_translation = is_bit_defined_tupled(
data, 8) if page_size_bit else None
self.pat = is_bit_defined_tupled(
data, 12) if page_size_bit else None
self.page_physical_address = data & PHYSICAL_ADDRESS_MASK & page_mask(level) if page_size_bit else None
self.next_entry_physical_address = None if page_size_bit else data & PHYSICAL_ADDRESS_MASK & page_mask()
self.hlat_restart_with_ordinary = is_bit_defined_tupled(data, 11)
self.protection_key = content_tupled(data, 59, 62) if page_size_bit else None
self.executed_disable = is_bit_defined_tupled(data, 63)
self.address = address
self.page_entry_binary_data = data
self.page_hierarchy_level = level
def next_entry(self, va):
if self.is_page or not self.entry_present[1]:
return None
next_level = self.page_hierarchy_level - 1
return PageHierarchyEntry(entry_va(next_level, self.next_entry_physical_address, va), next_level)
def mk_string(self):
if not self.entry_present[1]:
return f"""\
level {self.page_hierarchy_level}:
{'entry address': <30} {hex(self.address)}
{'page entry binary data': <30} {hex(self.page_entry_binary_data)}
---
PAGE ENTRY IS NOT PRESENT!
"""
elif self.is_page:
def page_size_line(ps_bit, ps, level):
return "" if level == 1 else f"{'bit': <3} {ps_bit: <5} {'page size': <30} {ps}"
return f"""\
level {self.page_hierarchy_level}:
{'entry address': <30} {hex(self.address)}
{'page entry binary data': <30} {hex(self.page_entry_binary_data)}
{'page size': <30} {'1GB' if self.page_hierarchy_level == 3 else '2MB' if self.page_hierarchy_level == 2 else '4KB' if self.page_hierarchy_level == 1 else 'Unknown page size for level:' + self.page_hierarchy_level}
{'page physical address': <30} {hex(self.page_physical_address)}
---
{'bit': <4} {self.entry_present[0]: <10} {'entry present': <30} {self.entry_present[1]}
{'bit': <4} {self.read_write[0]: <10} {'read/write access allowed': <30} {self.read_write[1]}
{'bit': <4} {self.user_access_allowed[0]: <10} {'user access allowed': <30} {self.user_access_allowed[1]}
{'bit': <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]}
{'bit': <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]}
{'bit': <4} {self.entry_was_accessed[0]: <10} {'entry has been accessed': <30} {self.entry_was_accessed[1]}
{"" if self.page_hierarchy_level == 1 else f"{'bit': <4} {self.page_size[0]: <10} {'page size': <30} {self.page_size[1]}"}
{'bit': <4} {self.dirty[0]: <10} {'page dirty': <30} {self.dirty[1]}
{'bit': <4} {self.global_translation[0]: <10} {'global translation': <30} {self.global_translation[1]}
{'bit': <4} {self.hlat_restart_with_ordinary[0]: <10} {'restart to ordinary': <30} {self.hlat_restart_with_ordinary[1]}
{'bit': <4} {self.pat[0]: <10} {'pat': <30} {self.pat[1]}
{'bits': <4} {str(self.protection_key[0]): <10} {'protection key': <30} {self.protection_key[1]}
{'bit': <4} {self.executed_disable[0]: <10} {'execute disable': <30} {self.executed_disable[1]}
"""
else:
return f"""\
level {self.page_hierarchy_level}:
{'entry address': <30} {hex(self.address)}
{'page entry binary data': <30} {hex(self.page_entry_binary_data)}
{'next entry physical address': <30} {hex(self.next_entry_physical_address)}
---
{'bit': <4} {self.entry_present[0]: <10} {'entry present': <30} {self.entry_present[1]}
{'bit': <4} {self.read_write[0]: <10} {'read/write access allowed': <30} {self.read_write[1]}
{'bit': <4} {self.user_access_allowed[0]: <10} {'user access allowed': <30} {self.user_access_allowed[1]}
{'bit': <4} {self.page_level_write_through[0]: <10} {'page level write through': <30} {self.page_level_write_through[1]}
{'bit': <4} {self.page_level_cache_disabled[0]: <10} {'page level cache disabled': <30} {self.page_level_cache_disabled[1]}
{'bit': <4} {self.entry_was_accessed[0]: <10} {'entry has been accessed': <30} {self.entry_was_accessed[1]}
{'bit': <4} {self.page_size[0]: <10} {'page size': <30} {self.page_size[1]}
{'bit': <4} {self.hlat_restart_with_ordinary[0]: <10} {'restart to ordinary': <30} {self.hlat_restart_with_ordinary[1]}
{'bit': <4} {self.executed_disable[0]: <10} {'execute disable': <30} {self.executed_disable[1]}
"""
class TranslateVM(gdb.Command):
"""Prints the entire paging structure used to translate a given virtual address.
Given the address space of the currently executed process, it translates the virtual address
and prints detailed information of all paging structure levels used for the translation.
Currently supported arch: x86"""
def __init__(self):
super(TranslateVM, self).__init__('translate-vm', gdb.COMMAND_USER)
def invoke(self, arg, from_tty):
if utils.is_target_arch("x86"):
vm_address = gdb.parse_and_eval(f'{arg}')
cr3_data = gdb.parse_and_eval('$cr3')
cr4 = gdb.parse_and_eval('$cr4')
page_levels = 5 if cr4 & (1 << 12) else 4
page_entry = Cr3(cr3_data, page_levels)
while page_entry:
gdb.write(page_entry.mk_string())
page_entry = page_entry.next_entry(vm_address)
else:
gdb.GdbError("Virtual address translation is not"
"supported for this arch")
TranslateVM()

View File

@ -37,3 +37,4 @@ else:
import linux.clk
import linux.genpd
import linux.device
import linux.mm

View File

@ -65,6 +65,7 @@ acumulative||accumulative
acumulator||accumulator
acutally||actually
adapater||adapter
adderted||asserted
addional||additional
additionaly||additionally
additonal||additional
@ -122,6 +123,7 @@ alue||value
ambigious||ambiguous
ambigous||ambiguous
amoung||among
amount of times||number of times
amout||amount
amplifer||amplifier
amplifyer||amplifier
@ -287,6 +289,7 @@ capapbilities||capabilities
caputure||capture
carefuly||carefully
cariage||carriage
casued||caused
catagory||category
cehck||check
challange||challenge
@ -370,6 +373,7 @@ conbination||combination
conditionaly||conditionally
conditon||condition
condtion||condition
condtional||conditional
conected||connected
conector||connector
configration||configuration
@ -423,6 +427,7 @@ cound||could
couter||counter
coutner||counter
cryptocraphic||cryptographic
cummulative||cumulative
cunter||counter
curently||currently
cylic||cyclic
@ -625,8 +630,10 @@ exeuction||execution
existance||existence
existant||existent
exixt||exist
exsits||exists
exlcude||exclude
exlcusive||exclusive
exlusive||exclusive
exmaple||example
expecially||especially
experies||expires
@ -664,11 +671,13 @@ feauture||feature
feautures||features
fetaure||feature
fetaures||features
fetcing||fetching
fileystem||filesystem
fimrware||firmware
fimware||firmware
firmare||firmware
firmaware||firmware
firtly||firstly
firware||firmware
firwmare||firmware
finanize||finalize
@ -838,6 +847,7 @@ integrety||integrity
integrey||integrity
intendet||intended
intented||intended
interal||internal
interanl||internal
interchangable||interchangeable
interferring||interfering
@ -1023,6 +1033,7 @@ negotation||negotiation
nerver||never
nescessary||necessary
nessessary||necessary
none existent||non-existent
noticable||noticeable
notication||notification
notications||notifications
@ -1044,6 +1055,7 @@ occured||occurred
occurence||occurrence
occure||occurred
occuring||occurring
ocurrence||occurrence
offser||offset
offet||offset
offlaod||offload
@ -1055,6 +1067,7 @@ omitt||omit
ommiting||omitting
ommitted||omitted
onself||oneself
onthe||on the
ony||only
openning||opening
operatione||operation
@ -1121,6 +1134,7 @@ perfomring||performing
periperal||peripheral
peripherial||peripheral
permissons||permissions
permited||permitted
peroid||period
persistance||persistence
persistant||persistent
@ -1334,6 +1348,7 @@ sacrifying||sacrificing
safly||safely
safty||safety
satify||satisfy
satisifed||satisfied
savable||saveable
scaleing||scaling
scaned||scanned
@ -1558,6 +1573,7 @@ tunning||tuning
ture||true
tyep||type
udpate||update
updtes||updates
uesd||used
unknwon||unknown
uknown||unknown
@ -1614,6 +1630,7 @@ unuseful||useless
unvalid||invalid
upate||update
upsupported||unsupported
upto||up to
useable||usable
usefule||useful
usefull||useful

View File

@ -264,10 +264,12 @@ exuberant()
--$CTAGS_EXTRA=+fq --c-kinds=+px --fields=+iaS --langmap=c:+.h \
"${regex[@]}"
setup_regex exuberant kconfig
all_kconfigs | xargs $1 -a \
--langdef=kconfig --language-force=kconfig "${regex[@]}"
KCONFIG_ARGS=()
if ! $1 --list-languages | grep -iq kconfig; then
setup_regex exuberant kconfig
KCONFIG_ARGS=(--langdef=kconfig --language-force=kconfig "${regex[@]}")
fi
all_kconfigs | xargs $1 -a "${KCONFIG_ARGS[@]}"
}
emacs()

View File

@ -811,7 +811,7 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
priv->card.num_links = 1;
if (asrc_pdev) {
/* DPCM DAI Links only if ASRC exsits */
/* DPCM DAI Links only if ASRC exists */
priv->dai_link[1].cpus->of_node = asrc_np;
priv->dai_link[1].platforms->of_node = asrc_np;
priv->dai_link[2].codecs->dai_name = codec_dai_name;