Merge branch 'akpm' (patches from Andrew)

Merge more updates from Andrew Morton:

 - the rest of MM

 - KASAN updates

 - procfs updates

 - exit, fork updates

 - printk updates

 - lib/ updates

 - radix-tree testsuite updates

 - checkpatch updates

 - kprobes updates

 - a few other misc bits

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (162 commits)
  samples/kprobes: print out the symbol name for the hooks
  samples/kprobes: add a new module parameter
  kprobes: add the "tls" argument for j_do_fork
  init/main.c: simplify initcall_blacklisted()
  fs/efs/super.c: fix return value
  checkpatch: improve --git <commit-count> shortcut
  checkpatch: reduce number of `git log` calls with --git
  checkpatch: add support to check already applied git commits
  checkpatch: add --list-types to show message types to show or ignore
  checkpatch: advertise the --fix and --fix-inplace options more
  checkpatch: whine about ACCESS_ONCE
  checkpatch: add test for keywords not starting on tabstops
  checkpatch: improve CONSTANT_COMPARISON test for structure members
  checkpatch: add PREFER_IS_ENABLED test
  lib/GCD.c: use binary GCD algorithm instead of Euclidean
  radix-tree: free up the bottom bit of exceptional entries for reuse
  dax: move RADIX_DAX_ definitions to dax.c
  radix-tree: make radix_tree_descend() more useful
  radix-tree: introduce radix_tree_replace_clear_tags()
  radix-tree: tidy up __radix_tree_create()
  ...
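Of note in the list above: the lib/GCD.c change replaces Euclid's modulo-based loop with binary GCD, which needs only shifts and subtraction, and the CPU_NO_EFFICIENT_FFS selections that appear in many per-architecture Kconfig hunks below exist so that gcd() can choose between an __ffs()-based variant and one that avoids ffs() on CPUs where it is slow. A user-space sketch of the idea (not the kernel's exact lib/gcd.c code):

    /* binary GCD: strip common factors of two, then subtract */
    unsigned long gcd(unsigned long a, unsigned long b)
    {
            unsigned int shift;

            if (!a || !b)
                    return a | b;
            shift = __builtin_ctzl(a | b);   /* common power of two */
            a >>= __builtin_ctzl(a);         /* make a odd */
            do {
                    b >>= __builtin_ctzl(b); /* make b odd */
                    if (a > b) {
                            unsigned long t = a;
                            a = b;
                            b = t;
                    }
                    b -= a;                  /* b becomes even again */
            } while (b);
            return a << shift;
    }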
Linus Torvalds 2016-05-20 22:31:33 -07:00
commit 5469dc270c
195 changed files with 4502 additions and 2267 deletions


@ -166,3 +166,12 @@ Description:
The mm_stat file is read-only and represents device's mm
statistics (orig_data_size, compr_data_size, etc.) in a format
similar to block layer statistics file format.
What: /sys/block/zram<id>/debug_stat
Date: July 2016
Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Description:
The debug_stat file is read-only and represents various
device debugging info useful for kernel developers. Its
format is not documented intentionally and may change
anytime without any notice.


@ -59,27 +59,16 @@ num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.
2) Set max number of compression streams
-Compression backend may use up to max_comp_streams compression streams,
-thus allowing up to max_comp_streams concurrent compression operations.
-By default, compression backend uses single compression stream.
+Regardless of the value passed to this attribute, ZRAM will always
+allocate multiple compression streams - one per online CPU - thus
+allowing several concurrent compression operations. The number of
+allocated compression streams goes down when some of the CPUs
+become offline. There is no single-compression-stream mode anymore,
+unless you are running a UP system or have only 1 CPU online.
-Examples:
-#show max compression streams number
+To find out how many streams are currently available:
 cat /sys/block/zram0/max_comp_streams
-#set max compression streams number to 3
-echo 3 > /sys/block/zram0/max_comp_streams
-Note:
-In order to enable the compression backend's multi stream support, max_comp_streams
-must be set to the desired concurrency level before ZRAM device
-initialisation. Once the device is initialised as a single stream compression
-backend (max_comp_streams equals 1), you will see an error if you try to change
-the value of max_comp_streams, because the single stream compression backend
-is implemented as a special case to avoid lock overhead and does not support
-dynamic max_comp_streams. Only the multi stream backend supports dynamic
-max_comp_streams adjustment.
3) Select compression algorithm
Using comp_algorithm device attribute one can see available and
currently selected (shown in square brackets) compression algorithms,
@ -183,6 +172,7 @@ mem_limit RW the maximum amount of memory ZRAM can use to store
pages_compacted RO the number of pages freed during compaction
(available only via zram<id>/mm_stat node)
compact WO trigger memory compaction
debug_stat RO this file is used for zram debugging purposes
WARNING
=======


@ -225,6 +225,7 @@ Table 1-2: Contents of the status files (as of 4.1)
TracerPid PID of process tracing this process (0 if not)
Uid Real, effective, saved set, and file system UIDs
Gid Real, effective, saved set, and file system GIDs
Umask file mode creation mask
FDSize number of file descriptor slots currently allocated
Groups supplementary group list
NStgid descendant namespace thread group ID hierarchy


@ -340,7 +340,7 @@ unaffected. libhugetlbfs will also work fine as usual.
== Graceful fallback ==
-Code walking pagetables but unware about huge pmds can simply call
+Code walking pagetables but unaware about huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
@ -414,7 +414,7 @@ tracking. The alternative is alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.
We set PG_double_map when a PMD of the page got split for the first time,
-but still have PMD mapping. The addtional references go away with last
+but still have PMD mapping. The additional references go away with last
compound_mapcount.
split_huge_page internally has to distribute the refcounts in the head
@ -432,10 +432,10 @@ page->_mapcount.
We are safe against physical memory scanners too: the only legitimate way
a scanner can get a reference to a page is get_page_unless_zero().
-All tail pages has zero ->_refcount until atomic_add(). It prevent scanner
-from geting reference to tail page up to the point. After the atomic_add()
-we don't care about ->_refcount value. We already known how many references
-with should uncharge from head page.
+All tail pages have zero ->_refcount until atomic_add(). This prevents the
+scanner from getting a reference to the tail page up to that point. After the
+atomic_add() we don't care about the ->_refcount value. We already know how
+many references should be uncharged from the head page.
For head page get_page_unless_zero() will succeed and we don't mind. It's
clear where reference should go after split: it will stay on head page.
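The rule above is what makes speculative page references safe. A minimal sketch of the scanner-side pattern (simplified, not a verbatim kernel code path):

    /* A physical memory scanner takes a speculative reference; a tail
     * page (or a page being freed) has ->_refcount == 0, so the
     * increment fails and the scanner simply skips it. */
    if (!get_page_unless_zero(page))
            continue;       /* try the next pfn */
    /* ... inspect the page ... */
    put_page(page);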


@ -0,0 +1,26 @@
z3fold
------
z3fold is a special purpose allocator for storing compressed pages.
It is designed to store up to three compressed pages per physical page.
It is a zbud derivative which allows for a higher compression
ratio while keeping the simplicity and determinism of its predecessor.
The main differences between z3fold and zbud are:
* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
* z3fold can hold up to 3 compressed pages in its page
* z3fold doesn't export any API itself and is thus intended to be used
via the zpool API.
To keep the determinism and simplicity, z3fold, just like zbud, always
stores an integral number of compressed pages per page, but it can store
up to 3 pages unlike zbud which can store at most 2. Therefore the
compression ratio goes up to around 2.7x while zbud's is around 1.7x.
Unlike zbud (but like zsmalloc for that matter) z3fold_alloc() does not
return a dereferenceable pointer. Instead, it returns an unsigned long
handle which encodes actual location of the allocated object.
While keeping the effective compression ratio close to zsmalloc's, z3fold
does not depend on the MMU being enabled and provides more predictable reclaim
behavior, which makes it a better fit for small and response-critical systems.
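Since z3fold is reachable only through zpool, callers never see a z3fold symbol directly. A hedged sketch of such a caller against the zpool API as of this merge (error handling trimmed; "demo" is an arbitrary pool name):

    #include <linux/zpool.h>

    static int z3fold_demo(void)
    {
            struct zpool *pool;
            unsigned long handle;
            void *buf;

            pool = zpool_create_pool("z3fold", "demo", GFP_KERNEL, NULL);
            if (!pool)
                    return -ENOMEM;

            /* handles are opaque encodings, not dereferenceable pointers */
            if (zpool_malloc(pool, 1024, GFP_KERNEL, &handle) == 0) {
                    buf = zpool_map_handle(pool, handle, ZPOOL_MM_RW);
                    memset(buf, 0, 1024);
                    zpool_unmap_handle(pool, handle);
                    zpool_free(pool, handle);
            }

            zpool_destroy_pool(pool);
            return 0;
    }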


@ -6264,7 +6264,7 @@ S: Maintained
F: arch/*/include/asm/kasan.h
F: arch/*/mm/kasan_init*
F: Documentation/kasan.txt
-F: include/linux/kasan.h
+F: include/linux/kasan*.h
F: lib/test_kasan.c
F: mm/kasan/
F: scripts/Makefile.kasan
@ -8280,7 +8280,6 @@ F: drivers/of/resolver.c
OPENRISC ARCHITECTURE
M: Jonas Bonn <jonas@southpole.se>
W: http://openrisc.net
L: linux@lists.openrisc.net (moderated for non-subscribers)
S: Maintained
T: git git://openrisc.net/~jonas/linux
F: arch/openrisc/
@ -8401,7 +8400,6 @@ F: drivers/platform/x86/panasonic-laptop.c
PANASONIC MN10300/AM33/AM34 PORT
M: David Howells <dhowells@redhat.com>
M: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
L: linux-am33-list@redhat.com (moderated for non-subscribers)
W: ftp://ftp.redhat.com/pub/redhat/gnupro/AM33/
S: Maintained
@ -8835,7 +8833,6 @@ F: drivers/pinctrl/pinctrl-single.c
PIN CONTROLLER - ST SPEAR
M: Viresh Kumar <vireshk@kernel.org>
L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained
@ -10040,7 +10037,6 @@ F: drivers/mmc/host/sdhci-s3c*
SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) ST SPEAR DRIVER
M: Viresh Kumar <vireshk@kernel.org>
L: spear-devel@list.st.com
L: linux-mmc@vger.kernel.org
S: Maintained
F: drivers/mmc/host/sdhci-spear.c
@ -10603,7 +10599,6 @@ F: include/linux/compiler.h
SPEAR PLATFORM SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
M: Shiraz Hashim <shiraz.linux.kernel@gmail.com>
L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained
@ -10612,7 +10607,6 @@ F: arch/arm/mach-spear/
SPEAR CLOCK FRAMEWORK SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained


@ -187,7 +187,11 @@ config HAVE_OPTPROBES
config HAVE_KPROBES_ON_FTRACE
bool
config HAVE_NMI
bool
config HAVE_NMI_WATCHDOG
depends on HAVE_NMI
bool
#
# An arch should select this if it provides all these things:
@ -517,6 +521,11 @@ config HAVE_ARCH_MMAP_RND_BITS
- ARCH_MMAP_RND_BITS_MIN
- ARCH_MMAP_RND_BITS_MAX
config HAVE_EXIT_THREAD
bool
help
An architecture implements exit_thread.
config ARCH_MMAP_RND_BITS_MIN
int
@ -638,4 +647,7 @@ config COMPAT_OLD_SIGACTION
config ARCH_NO_COHERENT_DMA_MMAP
bool
config CPU_NO_EFFICIENT_FFS
def_bool n
source "kernel/gcov/Kconfig"


@ -26,6 +26,7 @@ config ALPHA
select MODULES_USE_ELF_RELA
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND
select CPU_NO_EFFICIENT_FFS if !ALPHA_EV67
help
The Alpha is a 64-bit general-purpose processor designed and
marketed by the Digital Equipment Corporation of blessed memory,


@ -210,14 +210,6 @@ start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp)
}
EXPORT_SYMBOL(start_thread);
/*
* Free current thread data structures etc..
*/
void
exit_thread(void)
{
}
void
flush_thread(void)
{


@ -107,6 +107,7 @@ choice
config ISA_ARCOMPACT
bool "ARCompact ISA"
select CPU_NO_EFFICIENT_FFS
help
The original ARC ISA of ARC600/700 cores


@ -183,13 +183,6 @@ void flush_thread(void)
{
}
/*
* Free any architecture-specific thread data structures, etc.
*/
void exit_thread(void)
{
}
int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
{
return 0;


@ -50,6 +50,7 @@ config ARM
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD if (!XIP_KERNEL)
select HAVE_FUNCTION_GRAPH_TRACER if (!THUMB2_KERNEL)
select HAVE_FUNCTION_TRACER if (!XIP_KERNEL)
@ -66,6 +67,7 @@ config ARM
select HAVE_KRETPROBES if (HAVE_KPROBES)
select HAVE_MEMBLOCK
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS


@ -193,9 +193,9 @@ EXPORT_SYMBOL_GPL(thread_notify_head);
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	thread_notify(THREAD_NOTIFY_EXIT, current_thread_info());
+	thread_notify(THREAD_NOTIFY_EXIT, task_thread_info(tsk));
}
void flush_thread(void)


@ -644,9 +644,11 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
break;
case IPI_CPU_BACKTRACE:
printk_nmi_enter();
irq_enter();
nmi_cpu_backtrace(regs);
irq_exit();
printk_nmi_exit();
break;
default:
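printk_nmi_enter()/printk_nmi_exit() come from the printk part of this merge: inside the section, printk() is redirected into a per-CPU buffer that is flushed later from a safe context, because taking the console and logbuf locks directly in NMI context can deadlock. Conceptually it is just a per-CPU function pointer swap (a sketch; the real kernel/printk/nmi.c carries more machinery):

    /* conceptual sketch only */
    void printk_nmi_enter(void)
    {
            this_cpu_write(printk_func, vprintk_nmi);     /* buffer output */
    }

    void printk_nmi_exit(void)
    {
            this_cpu_write(printk_func, vprintk_default); /* normal path */
    }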


@ -421,18 +421,21 @@ config CPU_32v3
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
select CPU_NO_EFFICIENT_FFS
config CPU_32v4
bool
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
select CPU_NO_EFFICIENT_FFS
config CPU_32v4T
bool
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
select CPU_NO_EFFICIENT_FFS
config CPU_32v5
bool


@ -156,10 +156,6 @@ static void vfp_thread_copy(struct thread_info *thread)
* - we could be preempted if tree preempt rcu is enabled, so
* it is unsafe to use thread->cpu.
* THREAD_NOTIFY_EXIT
* - the thread (v) will be running on the local CPU, so
* v === current_thread_info()
* - thread->cpu is the local CPU number at the time it is accessed,
* but may change at any time.
* - we could be preempted if tree preempt rcu is enabled, so
* it is unsafe to use thread->cpu.
*/


@ -200,13 +200,6 @@ void show_regs(struct pt_regs * regs)
__show_regs(regs);
}
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
}
static void tls_thread_flush(void)
{
asm ("msr tpidr_el0, xzr");


@ -4,6 +4,7 @@ config AVR32
# that we usually don't need on AVR32.
select EXPERT
select HAVE_CLK
select HAVE_EXIT_THREAD
select HAVE_OPROFILE
select HAVE_KPROBES
select VIRT_TO_BUS
@ -17,6 +18,7 @@ config AVR32
select GENERIC_CLOCKEVENTS
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select HAVE_NMI
help
AVR32 is a high-performance 32-bit RISC microprocessor core,
designed for cost-sensitive embedded applications, with particular


@ -62,9 +62,9 @@ void machine_restart(char *cmd)
/*
* Free current thread data structures etc
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	ocd_disable(current);
+	ocd_disable(tsk);
}
void flush_thread(void)


@ -40,6 +40,7 @@ config BLACKFIN
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_NMI
config GENERIC_CSUM
def_bool y


@ -75,13 +75,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/*
* Free current thread data structures etc..
*/
static inline void exit_thread(void)
{
}
/*
* Return saved PC of a blocked thread.
*/


@ -82,10 +82,6 @@ void flush_thread(void)
{
}
void exit_thread(void)
{
}
/*
* Do necessary setup to start up a newly executed thread.
*/


@ -59,6 +59,7 @@ config CRIS
select GENERIC_IOMAP
select MODULES_USE_ELF_RELA
select CLONE_BACKWARDS2
select HAVE_EXIT_THREAD if ETRAX_ARCH_V32
select OLD_SIGSUSPEND
select OLD_SIGACTION
select GPIOLIB
@ -69,6 +70,7 @@ config CRIS
select GENERIC_CLOCKEVENTS if ETRAX_ARCH_V32
select GENERIC_SCHED_CLOCK if ETRAX_ARCH_V32
select HAVE_DEBUG_BUGVERBOSE if ETRAX_ARCH_V32
select HAVE_NMI
config HZ
int


@ -35,15 +35,6 @@ void default_idle(void)
local_irq_enable();
}
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
/* Nothing needs to be done. */
}
/* if the watchdog is enabled, we can simply disable interrupts and go
* into an eternal loop, and the watchdog will reset the CPU after 0.1s
* if on the other hand the watchdog wasn't enabled, we just enable it and wait


@ -33,9 +33,9 @@ void default_idle(void)
*/
extern void deconfigure_bp(long pid);
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	deconfigure_bp(current->pid);
+	deconfigure_bp(tsk->pid);
}
/*


@ -96,13 +96,6 @@ extern asmlinkage void *restore_user_regs(const struct user_context *target, ...
#define release_segments(mm) do { } while (0)
#define forget_segments() do { } while (0)
/*
* Free current thread data structures etc..
*/
static inline void exit_thread(void)
{
}
/*
* Return saved PC of a blocked thread.
*/


@ -20,6 +20,7 @@ config H8300
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
select HAVE_ARCH_KGDB
select CPU_NO_EFFICIENT_FFS
config RWSEM_GENERIC_SPINLOCK
def_bool y


@ -110,13 +110,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/*
* Free current thread data structures etc..
*/
static inline void exit_thread(void)
{
}
/*
* Return saved PC of a blocked thread.
*/


@ -136,13 +136,6 @@ void release_thread(struct task_struct *dead_task)
{
}
/*
* Free any architecture-specific thread data structures, etc.
*/
void exit_thread(void)
{
}
/*
* Some archs flush debug and FPU info here
*/


@ -18,6 +18,7 @@ config IA64
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_EXIT_THREAD
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_KPROBES


@ -4542,8 +4542,8 @@ pfm_context_unload(pfm_context_t *ctx, void *arg, int count, struct pt_regs *reg
/*
- * called only from exit_thread(): task == current
- * we come here only if current has a context attached (loaded or masked)
+ * called only from exit_thread()
+ * we come here only if the task has a context attached (loaded or masked)
*/
void
pfm_exit_thread(struct task_struct *task)


@ -570,22 +570,22 @@ flush_thread (void)
}
/*
- * Clean up state associated with current thread. This is called when
+ * Clean up state associated with a thread. This is called when
  * the thread calls exit().
  */
 void
-exit_thread (void)
+exit_thread (struct task_struct *tsk)
 {
-	ia64_drop_fpu(current);
+	ia64_drop_fpu(tsk);
 #ifdef CONFIG_PERFMON
 	/* if needed, stop monitoring and flush state to perfmon context */
-	if (current->thread.pfm_context)
-		pfm_exit_thread(current);
+	if (tsk->thread.pfm_context)
+		pfm_exit_thread(tsk);
 	/* free debug register resources */
-	if (current->thread.flags & IA64_THREAD_DBG_VALID)
-		pfm_release_debug_registers(current);
+	if (tsk->thread.flags & IA64_THREAD_DBG_VALID)
+		pfm_release_debug_registers(tsk);
#endif
}


@ -17,6 +17,7 @@ config M32R
select ARCH_USES_GETTIMEOFFSET
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
select CPU_NO_EFFICIENT_FFS
config SBUS
bool


@ -101,15 +101,6 @@ void show_regs(struct pt_regs * regs)
#endif
}
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
/* Nothing to do. */
DPRINTK("pid = %d\n", current->pid);
}
void flush_thread(void)
{
DPRINTK("pid = %d\n", current->pid);


@ -40,6 +40,7 @@ config M68000
select CPU_HAS_NO_MULDIV64
select CPU_HAS_NO_UNALIGNED
select GENERIC_CSUM
select CPU_NO_EFFICIENT_FFS
help
The Freescale (was Motorola) 68000 CPU is the first generation of
the well known M68K family of processors. The CPU core as well as
@ -51,6 +52,7 @@ config MCPU32
bool
select CPU_HAS_NO_BITFIELDS
select CPU_HAS_NO_UNALIGNED
select CPU_NO_EFFICIENT_FFS
help
The Freescale (was then Motorola) CPU32 is a CPU core that is
based on the 68020 processor. For the most part it is used in
@ -130,6 +132,7 @@ config M5206
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5206 processor support.
@ -138,6 +141,7 @@ config M5206e
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5206e processor support.
@ -163,6 +167,7 @@ config M5249
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5249 processor support.
@ -171,6 +176,7 @@ config M525x
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Freescale (Motorola) Coldfire 5251/5253 processor support.
@ -189,6 +195,7 @@ config M5272
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5272 processor support.
@ -217,6 +224,7 @@ config M5307
select COLDFIRE_SW_A7
select HAVE_CACHE_CB
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5307 processor support.
@ -242,6 +250,7 @@ config M5407
select COLDFIRE_SW_A7
select HAVE_CACHE_CB
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5407 processor support.
@ -251,6 +260,7 @@ config M547x
select MMU_COLDFIRE if MMU
select HAVE_CACHE_CB
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Freescale ColdFire 5470/5471/5472/5473/5474/5475 processor support.
@ -260,6 +270,7 @@ config M548x
select M54xx
select HAVE_CACHE_CB
select HAVE_MBAR
select CPU_NO_EFFICIENT_FFS
help
Freescale ColdFire 5480/5481/5482/5483/5484/5485 processor support.


@ -153,13 +153,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/*
* Free current thread data structures etc..
*/
static inline void exit_thread(void)
{
}
extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);


@ -11,6 +11,7 @@ config METAG
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DYNAMIC_FTRACE
select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_KERNEL_BZIP2
@ -29,6 +30,7 @@ config METAG
select OF
select OF_EARLY_FLATTREE
select SPARSE_IRQ
select CPU_NO_EFFICIENT_FFS
config STACKTRACE_SUPPORT
def_bool y


@ -134,8 +134,6 @@ static inline void release_thread(struct task_struct *dead_task)
#define copy_segments(tsk, mm) do { } while (0)
#define release_segments(mm) do { } while (0)
extern void exit_thread(void);
/*
* Return saved PC of a blocked thread.
*/


@ -345,10 +345,10 @@ void flush_thread(void)
/*
* Free current thread data structures etc.
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	clear_fpu(&current->thread);
-	clear_dsp(&current->thread);
+	clear_fpu(&tsk->thread);
+	clear_dsp(&tsk->thread);
}
/* TODO: figure out how to unwind the kernel stack here to figure out


@ -32,6 +32,7 @@ config MICROBLAZE
select OF_EARLY_FLATTREE
select TRACING_SUPPORT
select VIRT_TO_BUS
select CPU_NO_EFFICIENT_FFS
config SWAP
def_bool n


@ -70,11 +70,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/* Free all resources held by a thread. */
static inline void exit_thread(void)
{
}
extern unsigned long thread_saved_pc(struct task_struct *t);
extern unsigned long get_wchan(struct task_struct *p);
@ -127,11 +122,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/* Free current thread data structures etc. */
static inline void exit_thread(void)
{
}
/* Return saved (kernel) PC of a blocked thread. */
# define thread_saved_pc(tsk) \
((tsk)->thread.regs ? (tsk)->thread.regs->r15 : 0)


@ -48,6 +48,7 @@ config MIPS
select GENERIC_SCHED_CLOCK if !CAVIUM_OCTEON_SOC
select GENERIC_CMOS_UPDATE
select HAVE_MOD_ARCH_SPECIFIC
select HAVE_NMI
select VIRT_TO_BUS
select MODULES_USE_ELF_REL if MODULES
select MODULES_USE_ELF_RELA if MODULES && 64BIT


@ -204,6 +204,16 @@
#endif
#endif
/* __builtin_constant_p(cpu_has_mips_r) && cpu_has_mips_r */
#if !((defined(cpu_has_mips32r1) && cpu_has_mips32r1) || \
(defined(cpu_has_mips32r2) && cpu_has_mips32r2) || \
(defined(cpu_has_mips32r6) && cpu_has_mips32r6) || \
(defined(cpu_has_mips64r1) && cpu_has_mips64r1) || \
(defined(cpu_has_mips64r2) && cpu_has_mips64r2) || \
(defined(cpu_has_mips64r6) && cpu_has_mips64r6))
#define CPU_NO_EFFICIENT_FFS 1
#endif
#ifndef cpu_has_mips_1
# define cpu_has_mips_1 (!cpu_has_mips_r6)
#endif


@ -73,10 +73,6 @@ void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp)
regs->regs[29] = sp;
}
void exit_thread(void)
{
}
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
/*


@ -1,5 +1,6 @@
config MN10300
def_bool y
select HAVE_EXIT_THREAD
select HAVE_OPROFILE
select HAVE_UID16
select GENERIC_IRQ_SHOW


@ -76,11 +76,9 @@ static inline void unlazy_fpu(struct task_struct *tsk)
preempt_enable();
}
-static inline void exit_fpu(void)
+static inline void exit_fpu(struct task_struct *tsk)
 {
 #ifdef CONFIG_LAZY_SAVE_FPU
-	struct task_struct *tsk = current;
preempt_disable();
if (fpu_state_owner == tsk)
fpu_state_owner = NULL;
@ -123,7 +121,7 @@ static inline void fpu_init_state(void) {}
static inline void fpu_save(struct fpu_state_struct *s) {}
static inline void fpu_kill_state(struct task_struct *tsk) {}
static inline void unlazy_fpu(struct task_struct *tsk) {}
-static inline void exit_fpu(void) {}
+static inline void exit_fpu(struct task_struct *tsk) {}
static inline void flush_fpu(void) {}
static inline int fpu_setup_sigcontext(struct fpucontext *buf) { return 0; }
static inline int fpu_restore_sigcontext(struct fpucontext *buf) { return 0; }


@ -103,9 +103,9 @@ void show_regs(struct pt_regs *regs)
/*
* free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	exit_fpu();
+	exit_fpu(tsk);
}
void flush_thread(void)


@ -15,6 +15,7 @@ config NIOS2
select SOC_BUS
select SPARSE_IRQ
select USB_ARCH_HAS_HCD if USB_SUPPORT
select CPU_NO_EFFICIENT_FFS
config GENERIC_CSUM
def_bool y


@ -75,11 +75,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
/* Free current thread data structures etc.. */
static inline void exit_thread(void)
{
}
/* Return saved PC of a blocked thread. */
#define thread_saved_pc(tsk) ((tsk)->thread.kregs->ea)


@ -25,6 +25,7 @@ config OPENRISC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
select OR1K_PIC
select CPU_NO_EFFICIENT_FFS if !OPENRISC_HAVE_INST_FF1
config MMU
def_bool y


@ -84,15 +84,6 @@ void start_thread(struct pt_regs *regs, unsigned long nip, unsigned long sp);
void release_thread(struct task_struct *);
unsigned long get_wchan(struct task_struct *p);
/*
* Free current thread data structures etc..
*/
extern inline void exit_thread(void)
{
/* Nothing needs to be done. */
}
/*
* Return saved PC of a blocked thread. For now, this is the "user" PC
*/


@ -32,6 +32,7 @@ config PARISC
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
select ARCH_NO_COHERENT_DMA_MMAP
select CPU_NO_EFFICIENT_FFS
help
The PA-RISC microprocessor is designed by Hewlett-Packard and used


@ -144,13 +144,6 @@ void machine_power_off(void)
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL(pm_power_off);
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
}
void flush_thread(void)
{
/* Only needs to handle fpu stuff or perf monitors.


@ -155,6 +155,7 @@ config PPC
select NO_BOOTMEM
select HAVE_GENERIC_RCU_GUP
select HAVE_PERF_EVENTS_NMI if PPC64
select HAVE_NMI if PERF_EVENTS
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select ARCH_HAS_DMA_SET_COHERENT_MASK


@ -1329,10 +1329,6 @@ void show_regs(struct pt_regs * regs)
show_instructions(regs);
}
void exit_thread(void)
{
}
void flush_thread(void)
{
#ifdef CONFIG_HAVE_HW_BREAKPOINT


@ -123,6 +123,7 @@ config S390
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_EARLY_PFN_TO_NID
select HAVE_ARCH_JUMP_LABEL
select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_SOFT_DIRTY
select HAVE_ARCH_TRACEHOOK
@ -134,6 +135,7 @@ config S390
select HAVE_DMA_API_DEBUG
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
@ -165,6 +167,7 @@ config S390
select TTY
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS
select HAVE_NMI
config SCHED_OMIT_FRAME_POINTER


@ -68,9 +68,10 @@ extern void kernel_thread_starter(void);
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	exit_thread_runtime_instr();
+	if (tsk == current)
+		exit_thread_runtime_instr();
}
void flush_thread(void)


@ -14,6 +14,7 @@ config SCORE
select VIRT_TO_BUS
select MODULES_USE_ELF_REL
select CLONE_BACKWARDS
select CPU_NO_EFFICIENT_FFS
choice
prompt "System type"


@ -56,8 +56,6 @@ void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp)
regs->regs[0] = sp;
}
void exit_thread(void) {}
/*
* When a process does an "exec", machine state like FPU and debug
* registers need to be reset. This is a hook function for that.


@ -20,6 +20,7 @@ config SUPERH
select PERF_USE_VMALLOC
select HAVE_DEBUG_KMEMLEAK
select HAVE_KERNEL_GZIP
select CPU_NO_EFFICIENT_FFS
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
select HAVE_KERNEL_XZ
@ -44,6 +45,7 @@ config SUPERH
select OLD_SIGSUSPEND
select OLD_SIGACTION
select HAVE_ARCH_AUDITSYSCALL
select HAVE_NMI
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
@ -71,6 +73,7 @@ config SUPERH32
config SUPERH64
def_bool ARCH = "sh64"
select HAVE_EXIT_THREAD
select KALLSYMS
config ARCH_DEFCONFIG


@ -76,13 +76,6 @@ void start_thread(struct pt_regs *regs, unsigned long new_pc,
}
EXPORT_SYMBOL(start_thread);
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
}
void flush_thread(void)
{
struct task_struct *tsk = current;


@ -288,7 +288,7 @@ void show_regs(struct pt_regs *regs)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
/*
* See arch/sparc/kernel/process.c for the precedent for doing
@ -307,9 +307,8 @@ void exit_thread(void)
* which it would get safely nulled.
*/
#ifdef CONFIG_SH_FPU
-	if (last_task_used_math == current) {
+	if (last_task_used_math == tsk)
 		last_task_used_math = NULL;
-	}
#endif
}


@ -20,6 +20,7 @@ config SPARC
select HAVE_OPROFILE
select HAVE_ARCH_KGDB if !SMP || SPARC64
select HAVE_ARCH_TRACEHOOK
select HAVE_EXIT_THREAD
select SYSCTL_EXCEPTION_TRACE
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select RTC_CLASS
@ -41,6 +42,7 @@ config SPARC
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND
select ARCH_HAS_SG_CHAIN
select CPU_NO_EFFICIENT_FFS
config SPARC32
def_bool !64BIT
@ -78,6 +80,7 @@ config SPARC64
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
select HAVE_NMI
config ARCH_DEFCONFIG
string


@ -184,21 +184,21 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
 #ifndef CONFIG_SMP
-	if(last_task_used_math == current) {
+	if (last_task_used_math == tsk) {
 #else
-	if (test_thread_flag(TIF_USEDFPU)) {
+	if (test_ti_thread_flag(task_thread_info(tsk), TIF_USEDFPU)) {
 #endif
 		/* Keep process from leaving FPU in a bogon state. */
 		put_psr(get_psr() | PSR_EF);
-		fpsave(&current->thread.float_regs[0], &current->thread.fsr,
-		       &current->thread.fpqueue[0], &current->thread.fpqdepth);
+		fpsave(&tsk->thread.float_regs[0], &tsk->thread.fsr,
+		       &tsk->thread.fpqueue[0], &tsk->thread.fpqdepth);
 #ifndef CONFIG_SMP
 		last_task_used_math = NULL;
 #else
-		clear_thread_flag(TIF_USEDFPU);
+		clear_ti_thread_flag(task_thread_info(tsk), TIF_USEDFPU);
#endif
}
}


@ -417,9 +417,9 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
}
/* Free current thread data structures etc.. */
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	struct thread_info *t = current_thread_info();
+	struct thread_info *t = task_thread_info(tsk);
if (t->utraps) {
if (t->utraps[0] < 2)


@ -3,6 +3,7 @@
config TILE
def_bool y
select HAVE_EXIT_THREAD
select HAVE_PERF_EVENTS
select USE_PMC if PERF_EVENTS
select HAVE_DMA_API_DEBUG
@ -29,6 +30,7 @@ config TILE
select HAVE_DEBUG_STACKOVERFLOW
select ARCH_WANT_FRAME_POINTERS
select HAVE_CONTEXT_TRACKING
select HAVE_NMI if USE_PMC
select EDAC_SUPPORT
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER


@ -541,7 +541,7 @@ void flush_thread(void)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
#ifdef CONFIG_HARDWALL
/*
@ -550,7 +550,7 @@ void exit_thread(void)
* the last reference to a hardwall fd, it would already have
* been released and deactivated at this point.)
*/
-	hardwall_deactivate_all(current);
+	hardwall_deactivate_all(tsk);
#endif
}


@ -103,10 +103,6 @@ void interrupt_end(void)
tracehook_notify_resume(regs);
}
void exit_thread(void)
{
}
int get_current_pid(void)
{
return task_pid_nr(current);


@ -201,13 +201,6 @@ void show_regs(struct pt_regs *regs)
__backtrace();
}
/*
* Free current thread data structures etc..
*/
void exit_thread(void)
{
}
void flush_thread(void)
{
struct thread_info *thread = current_thread_info();


@ -105,6 +105,7 @@ config X86
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EFFICIENT_UNALIGNED_ACCESS
select HAVE_EXIT_THREAD
select HAVE_FENTRY if X86_64
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_FP_TEST
@ -130,6 +131,7 @@ config X86
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MIXED_BREAKPOINTS_REGS
select HAVE_NMI
select HAVE_OPROFILE
select HAVE_OPTPROBES
select HAVE_PCSPKR_PLATFORM


@ -5,6 +5,7 @@
*/
#include <linux/errno.h>
#include <linux/compiler.h>
#include <linux/kasan-checks.h>
#include <linux/thread_info.h>
#include <linux/string.h>
#include <asm/asm.h>
@ -721,6 +722,8 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
might_fault();
kasan_check_write(to, n);
/*
* While we would like to have the compiler do the checking for us
* even in the non-constant size case, any false positives there are
@ -754,6 +757,8 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
{
int sz = __compiletime_object_size(from);
kasan_check_read(from, n);
might_fault();
/* See the comment in copy_from_user() above. */


@ -7,6 +7,7 @@
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/lockdep.h>
#include <linux/kasan-checks.h>
#include <asm/alternative.h>
#include <asm/cpufeatures.h>
#include <asm/page.h>
@ -109,6 +110,7 @@ static __always_inline __must_check
int __copy_from_user(void *dst, const void __user *src, unsigned size)
{
might_fault();
kasan_check_write(dst, size);
return __copy_from_user_nocheck(dst, src, size);
}
@ -175,6 +177,7 @@ static __always_inline __must_check
int __copy_to_user(void __user *dst, const void *src, unsigned size)
{
might_fault();
kasan_check_read(src, size);
return __copy_to_user_nocheck(dst, src, size);
}
@ -242,12 +245,14 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
static __must_check __always_inline int
__copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
{
kasan_check_write(dst, size);
return __copy_from_user_nocheck(dst, src, size);
}
static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
kasan_check_read(src, size);
return __copy_to_user_nocheck(dst, src, size);
}
@ -258,6 +263,7 @@ static inline int
__copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
{
might_fault();
kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 1);
}
@ -265,6 +271,7 @@ static inline int
__copy_from_user_inatomic_nocache(void *dst, const void __user *src,
unsigned size)
{
kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 0);
}
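kasan_check_read()/kasan_check_write() are the new annotations from the KASAN part of this merge: they make KASAN validate a (pointer, size) pair up front, which catches bad usercopy sources and destinations even though the copy itself happens in assembly that KASAN cannot instrument. The interface is deliberately tiny; roughly (a sketch of linux/kasan-checks.h as merged):

    #ifdef CONFIG_KASAN
    void kasan_check_read(const void *p, unsigned int size);
    void kasan_check_write(const void *p, unsigned int size);
    #else
    static inline void kasan_check_read(const void *p, unsigned int size) { }
    static inline void kasan_check_write(const void *p, unsigned int size) { }
    #endif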


@ -18,7 +18,6 @@
#include <linux/nmi.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/seq_buf.h>
#ifdef CONFIG_HARDLOCKUP_DETECTOR
u64 hw_nmi_get_sample_period(int watchdog_thresh)


@ -97,10 +97,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
-	struct task_struct *me = current;
-	struct thread_struct *t = &me->thread;
+	struct thread_struct *t = &tsk->thread;
unsigned long *bp = t->io_bitmap_ptr;
struct fpu *fpu = &t->fpu;


@ -14,6 +14,7 @@ config XTENSA
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
select HAVE_DMA_API_DEBUG
select HAVE_EXIT_THREAD
select HAVE_FUNCTION_TRACER
select HAVE_FUTEX_CMPXCHG if !MMU
select HAVE_HW_BREAKPOINT if PERF_EVENTS


@ -115,10 +115,10 @@ void arch_cpu_idle(void)
/*
* This is called when the thread calls exit().
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
 {
 #if XTENSA_HAVE_COPROCESSORS
-	coprocessor_release_all(current_thread_info());
+	coprocessor_release_all(task_thread_info(tsk));
#endif
}


@ -27,6 +27,8 @@
#include <linux/pagemap.h>
#include <linux/stringify.h>
#include <linux/kernel.h>
#include <linux/uuid.h>
#include "ldm.h"
#include "check.h"
#include "msdos.h"
@ -65,60 +67,6 @@ void _ldm_printk(const char *level, const char *function, const char *fmt, ...)
va_end(args);
}
/**
* ldm_parse_hexbyte - Convert a ASCII hex number to a byte
* @src: Pointer to at least 2 characters to convert.
*
* Convert a two character ASCII hex string to a number.
*
* Return: 0-255 Success, the byte was parsed correctly
* -1 Error, an invalid character was supplied
*/
static int ldm_parse_hexbyte (const u8 *src)
{
unsigned int x; /* For correct wrapping */
int h;
/* high part */
x = h = hex_to_bin(src[0]);
if (h < 0)
return -1;
/* low part */
h = hex_to_bin(src[1]);
if (h < 0)
return -1;
return (x << 4) + h;
}
/**
* ldm_parse_guid - Convert GUID from ASCII to binary
* @src: 36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba
* @dest: Memory block to hold binary GUID (16 bytes)
*
* N.B. The GUID need not be NULL terminated.
*
* Return: 'true' @dest contains binary GUID
* 'false' @dest contents are undefined
*/
static bool ldm_parse_guid (const u8 *src, u8 *dest)
{
static const int size[] = { 4, 2, 2, 2, 6 };
int i, j, v;
if (src[8] != '-' || src[13] != '-' ||
src[18] != '-' || src[23] != '-')
return false;
for (j = 0; j < 5; j++, src++)
for (i = 0; i < size[j]; i++, src+=2, *dest++ = v)
if ((v = ldm_parse_hexbyte (src)) < 0)
return false;
return true;
}
/**
* ldm_parse_privhead - Read the LDM Database PRIVHEAD structure
* @data: Raw database PRIVHEAD structure loaded from the device
@ -167,7 +115,7 @@ static bool ldm_parse_privhead(const u8 *data, struct privhead *ph)
ldm_error("PRIVHEAD disk size doesn't match real disk size");
return false;
}
-	if (!ldm_parse_guid(data + 0x0030, ph->disk_id)) {
+	if (uuid_be_to_bin(data + 0x0030, (uuid_be *)ph->disk_id)) {
ldm_error("PRIVHEAD contains an invalid GUID.");
return false;
}
@ -944,7 +892,7 @@ static bool ldm_parse_dsk3 (const u8 *buffer, int buflen, struct vblk *vb)
disk = &vb->vblk.disk;
ldm_get_vstr (buffer + 0x18 + r_diskid, disk->alt_name,
sizeof (disk->alt_name));
-	if (!ldm_parse_guid (buffer + 0x19 + r_name, disk->disk_id))
+	if (uuid_be_to_bin(buffer + 0x19 + r_name, (uuid_be *)disk->disk_id))
return false;
return true;
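Both call sites now use the generic parsers that this merge adds to lib/uuid.c instead of the driver-private ldm_parse_hexbyte()/ldm_parse_guid() pair deleted above. uuid_be_to_bin() parses the textual GUID straight into big-endian byte order; its sibling uuid_le_to_bin() (used by the ACPI WMI conversion later in this diff) additionally byte-swaps the first three fields. A usage sketch:

    #include <linux/uuid.h>

    uuid_be u;

    /* returns 0 on success, a negative errno on a malformed string */
    if (uuid_be_to_bin("fa50ff2b-f2e8-45de-83fa-65417f2f49ba", &u))
            pr_err("invalid GUID\n");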


@ -13,6 +13,7 @@
#include <linux/slab.h>
#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/cpu.h>
#include "zcomp.h"
#include "zcomp_lzo.h"
@ -20,29 +21,6 @@
#include "zcomp_lz4.h"
#endif
/*
* single zcomp_strm backend
*/
struct zcomp_strm_single {
struct mutex strm_lock;
struct zcomp_strm *zstrm;
};
/*
* multi zcomp_strm backend
*/
struct zcomp_strm_multi {
/* protect strm list */
spinlock_t strm_lock;
/* max possible number of zstrm streams */
int max_strm;
/* number of available zstrm streams */
int avail_strm;
/* list of available strms */
struct list_head idle_strm;
wait_queue_head_t strm_wait;
};
static struct zcomp_backend *backends[] = {
&zcomp_lzo,
#ifdef CONFIG_ZRAM_LZ4_COMPRESS
@ -93,188 +71,6 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp, gfp_t flags)
return zstrm;
}
/*
* get idle zcomp_strm or wait until other process release
* (zcomp_strm_release()) one for us
*/
static struct zcomp_strm *zcomp_strm_multi_find(struct zcomp *comp)
{
struct zcomp_strm_multi *zs = comp->stream;
struct zcomp_strm *zstrm;
while (1) {
spin_lock(&zs->strm_lock);
if (!list_empty(&zs->idle_strm)) {
zstrm = list_entry(zs->idle_strm.next,
struct zcomp_strm, list);
list_del(&zstrm->list);
spin_unlock(&zs->strm_lock);
return zstrm;
}
/* zstrm streams limit reached, wait for idle stream */
if (zs->avail_strm >= zs->max_strm) {
spin_unlock(&zs->strm_lock);
wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
continue;
}
/* allocate new zstrm stream */
zs->avail_strm++;
spin_unlock(&zs->strm_lock);
/*
* This function can be called in swapout/fs write path
* so we can't use GFP_FS|IO. And it assumes we already
* have at least one stream in zram initialization so we
* don't do best effort to allocate more stream in here.
* A default stream will work well without further multiple
* streams. That's why we use NORETRY | NOWARN.
*/
zstrm = zcomp_strm_alloc(comp, GFP_NOIO | __GFP_NORETRY |
__GFP_NOWARN);
if (!zstrm) {
spin_lock(&zs->strm_lock);
zs->avail_strm--;
spin_unlock(&zs->strm_lock);
wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
continue;
}
break;
}
return zstrm;
}
/* add stream back to idle list and wake up waiter or free the stream */
static void zcomp_strm_multi_release(struct zcomp *comp, struct zcomp_strm *zstrm)
{
struct zcomp_strm_multi *zs = comp->stream;
spin_lock(&zs->strm_lock);
if (zs->avail_strm <= zs->max_strm) {
list_add(&zstrm->list, &zs->idle_strm);
spin_unlock(&zs->strm_lock);
wake_up(&zs->strm_wait);
return;
}
zs->avail_strm--;
spin_unlock(&zs->strm_lock);
zcomp_strm_free(comp, zstrm);
}
/* change max_strm limit */
static bool zcomp_strm_multi_set_max_streams(struct zcomp *comp, int num_strm)
{
struct zcomp_strm_multi *zs = comp->stream;
struct zcomp_strm *zstrm;
spin_lock(&zs->strm_lock);
zs->max_strm = num_strm;
/*
* if user has lowered the limit and there are idle streams,
* immediately free as much streams (and memory) as we can.
*/
while (zs->avail_strm > num_strm && !list_empty(&zs->idle_strm)) {
zstrm = list_entry(zs->idle_strm.next,
struct zcomp_strm, list);
list_del(&zstrm->list);
zcomp_strm_free(comp, zstrm);
zs->avail_strm--;
}
spin_unlock(&zs->strm_lock);
return true;
}
static void zcomp_strm_multi_destroy(struct zcomp *comp)
{
struct zcomp_strm_multi *zs = comp->stream;
struct zcomp_strm *zstrm;
while (!list_empty(&zs->idle_strm)) {
zstrm = list_entry(zs->idle_strm.next,
struct zcomp_strm, list);
list_del(&zstrm->list);
zcomp_strm_free(comp, zstrm);
}
kfree(zs);
}
static int zcomp_strm_multi_create(struct zcomp *comp, int max_strm)
{
struct zcomp_strm *zstrm;
struct zcomp_strm_multi *zs;
comp->destroy = zcomp_strm_multi_destroy;
comp->strm_find = zcomp_strm_multi_find;
comp->strm_release = zcomp_strm_multi_release;
comp->set_max_streams = zcomp_strm_multi_set_max_streams;
zs = kmalloc(sizeof(struct zcomp_strm_multi), GFP_KERNEL);
if (!zs)
return -ENOMEM;
comp->stream = zs;
spin_lock_init(&zs->strm_lock);
INIT_LIST_HEAD(&zs->idle_strm);
init_waitqueue_head(&zs->strm_wait);
zs->max_strm = max_strm;
zs->avail_strm = 1;
zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
if (!zstrm) {
kfree(zs);
return -ENOMEM;
}
list_add(&zstrm->list, &zs->idle_strm);
return 0;
}
static struct zcomp_strm *zcomp_strm_single_find(struct zcomp *comp)
{
struct zcomp_strm_single *zs = comp->stream;
mutex_lock(&zs->strm_lock);
return zs->zstrm;
}
static void zcomp_strm_single_release(struct zcomp *comp,
struct zcomp_strm *zstrm)
{
struct zcomp_strm_single *zs = comp->stream;
mutex_unlock(&zs->strm_lock);
}
static bool zcomp_strm_single_set_max_streams(struct zcomp *comp, int num_strm)
{
/* zcomp_strm_single support only max_comp_streams == 1 */
return false;
}
static void zcomp_strm_single_destroy(struct zcomp *comp)
{
struct zcomp_strm_single *zs = comp->stream;
zcomp_strm_free(comp, zs->zstrm);
kfree(zs);
}
static int zcomp_strm_single_create(struct zcomp *comp)
{
struct zcomp_strm_single *zs;
comp->destroy = zcomp_strm_single_destroy;
comp->strm_find = zcomp_strm_single_find;
comp->strm_release = zcomp_strm_single_release;
comp->set_max_streams = zcomp_strm_single_set_max_streams;
zs = kmalloc(sizeof(struct zcomp_strm_single), GFP_KERNEL);
if (!zs)
return -ENOMEM;
comp->stream = zs;
mutex_init(&zs->strm_lock);
zs->zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
if (!zs->zstrm) {
kfree(zs);
return -ENOMEM;
}
return 0;
}
/* show available compressors */
ssize_t zcomp_available_show(const char *comp, char *buf)
{
@ -299,19 +95,14 @@ bool zcomp_available_algorithm(const char *comp)
return find_backend(comp) != NULL;
}
bool zcomp_set_max_streams(struct zcomp *comp, int num_strm)
{
return comp->set_max_streams(comp, num_strm);
}
struct zcomp_strm *zcomp_strm_find(struct zcomp *comp)
{
-	return comp->strm_find(comp);
+	return *get_cpu_ptr(comp->stream);
}
void zcomp_strm_release(struct zcomp *comp, struct zcomp_strm *zstrm)
{
-	comp->strm_release(comp, zstrm);
+	put_cpu_ptr(comp->stream);
}
int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@ -327,9 +118,83 @@ int zcomp_decompress(struct zcomp *comp, const unsigned char *src,
return comp->backend->decompress(src, src_len, dst);
}
static int __zcomp_cpu_notifier(struct zcomp *comp,
unsigned long action, unsigned long cpu)
{
struct zcomp_strm *zstrm;
switch (action) {
case CPU_UP_PREPARE:
if (WARN_ON(*per_cpu_ptr(comp->stream, cpu)))
break;
zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
if (IS_ERR_OR_NULL(zstrm)) {
pr_err("Can't allocate a compression stream\n");
return NOTIFY_BAD;
}
*per_cpu_ptr(comp->stream, cpu) = zstrm;
break;
case CPU_DEAD:
case CPU_UP_CANCELED:
zstrm = *per_cpu_ptr(comp->stream, cpu);
if (!IS_ERR_OR_NULL(zstrm))
zcomp_strm_free(comp, zstrm);
*per_cpu_ptr(comp->stream, cpu) = NULL;
break;
default:
break;
}
return NOTIFY_OK;
}
static int zcomp_cpu_notifier(struct notifier_block *nb,
unsigned long action, void *pcpu)
{
unsigned long cpu = (unsigned long)pcpu;
struct zcomp *comp = container_of(nb, typeof(*comp), notifier);
return __zcomp_cpu_notifier(comp, action, cpu);
}
static int zcomp_init(struct zcomp *comp)
{
unsigned long cpu;
int ret;
comp->notifier.notifier_call = zcomp_cpu_notifier;
comp->stream = alloc_percpu(struct zcomp_strm *);
if (!comp->stream)
return -ENOMEM;
cpu_notifier_register_begin();
for_each_online_cpu(cpu) {
ret = __zcomp_cpu_notifier(comp, CPU_UP_PREPARE, cpu);
if (ret == NOTIFY_BAD)
goto cleanup;
}
__register_cpu_notifier(&comp->notifier);
cpu_notifier_register_done();
return 0;
cleanup:
for_each_online_cpu(cpu)
__zcomp_cpu_notifier(comp, CPU_UP_CANCELED, cpu);
cpu_notifier_register_done();
return -ENOMEM;
}
void zcomp_destroy(struct zcomp *comp)
{
-	comp->destroy(comp);
+	unsigned long cpu;
+
+	cpu_notifier_register_begin();
+	for_each_online_cpu(cpu)
+		__zcomp_cpu_notifier(comp, CPU_UP_CANCELED, cpu);
+	__unregister_cpu_notifier(&comp->notifier);
+	cpu_notifier_register_done();
+	free_percpu(comp->stream);
kfree(comp);
}
@ -339,9 +204,9 @@ void zcomp_destroy(struct zcomp *comp)
* backend pointer or ERR_PTR if things went bad. ERR_PTR(-EINVAL)
* if requested algorithm is not supported, ERR_PTR(-ENOMEM) in
* case of allocation error, or any other error potentially
- * returned by functions zcomp_strm_{multi,single}_create.
+ * returned by zcomp_init().
  */
-struct zcomp *zcomp_create(const char *compress, int max_strm)
+struct zcomp *zcomp_create(const char *compress)
{
struct zcomp *comp;
struct zcomp_backend *backend;
@ -356,10 +221,7 @@ struct zcomp *zcomp_create(const char *compress, int max_strm)
return ERR_PTR(-ENOMEM);
comp->backend = backend;
-	if (max_strm > 1)
-		error = zcomp_strm_multi_create(comp, max_strm);
-	else
-		error = zcomp_strm_single_create(comp);
+	error = zcomp_init(comp);
if (error) {
kfree(comp);
return ERR_PTR(error);
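The per-CPU model that replaces the single/multi stream backends keeps one compression stream per online CPU, maintained by the notifier above. Claiming a stream is then just get_cpu_ptr(), which also disables preemption so the stream cannot be reused on the same CPU mid-compression. A sketch of the claim/release pattern implemented by zcomp_strm_find()/zcomp_strm_release():

    struct zcomp_strm *zstrm;

    zstrm = *get_cpu_ptr(comp->stream);  /* pin this CPU's stream */
    /* ... compress into zstrm->buffer ... */
    put_cpu_ptr(comp->stream);           /* re-enable preemption */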


@ -10,8 +10,6 @@
#ifndef _ZCOMP_H_
#define _ZCOMP_H_
#include <linux/mutex.h>
struct zcomp_strm {
/* compression/decompression buffer */
void *buffer;
@ -21,8 +19,6 @@ struct zcomp_strm {
* working memory)
*/
void *private;
/* used in multi stream backend, protected by backend strm_lock */
struct list_head list;
};
/* static compression backend */
@ -41,19 +37,15 @@ struct zcomp_backend {
/* dynamic per-device compression frontend */
struct zcomp {
-	void *stream;
+	struct zcomp_strm * __percpu *stream;
struct zcomp_backend *backend;
-	struct zcomp_strm *(*strm_find)(struct zcomp *comp);
-	void (*strm_release)(struct zcomp *comp, struct zcomp_strm *zstrm);
-	bool (*set_max_streams)(struct zcomp *comp, int num_strm);
-	void (*destroy)(struct zcomp *comp);
+	struct notifier_block notifier;
};
ssize_t zcomp_available_show(const char *comp, char *buf);
bool zcomp_available_algorithm(const char *comp);
-struct zcomp *zcomp_create(const char *comp, int max_strm);
+struct zcomp *zcomp_create(const char *comp);
void zcomp_destroy(struct zcomp *comp);
struct zcomp_strm *zcomp_strm_find(struct zcomp *comp);


@ -304,46 +304,25 @@ static ssize_t mem_used_max_store(struct device *dev,
return len;
}
/*
* We switched to per-cpu streams and this attr is not needed anymore.
* However, we will keep it around for some time, because:
* a) we may revert per-cpu streams in the future
* b) it's visible to user space and we need to follow our 2 years
* retirement rule; but we already have a number of 'soon to be
altered' attrs, so max_comp_streams needs to wait for the next
* layoff cycle.
*/
static ssize_t max_comp_streams_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
-	int val;
-	struct zram *zram = dev_to_zram(dev);
-	down_read(&zram->init_lock);
-	val = zram->max_comp_streams;
-	up_read(&zram->init_lock);
-	return scnprintf(buf, PAGE_SIZE, "%d\n", val);
+	return scnprintf(buf, PAGE_SIZE, "%d\n", num_online_cpus());
}
static ssize_t max_comp_streams_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
-	int num;
-	struct zram *zram = dev_to_zram(dev);
-	int ret;
-	ret = kstrtoint(buf, 0, &num);
-	if (ret < 0)
-		return ret;
-	if (num < 1)
-		return -EINVAL;
-	down_write(&zram->init_lock);
-	if (init_done(zram)) {
-		if (!zcomp_set_max_streams(zram->comp, num)) {
-			pr_info("Cannot change max compression streams\n");
-			ret = -EINVAL;
-			goto out;
-		}
-	}
-	zram->max_comp_streams = num;
-	ret = len;
-out:
-	up_write(&zram->init_lock);
-	return ret;
+	return len;
}
static ssize_t comp_algorithm_show(struct device *dev,
@ -456,8 +435,26 @@ static ssize_t mm_stat_show(struct device *dev,
return ret;
}
static ssize_t debug_stat_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
int version = 1;
struct zram *zram = dev_to_zram(dev);
ssize_t ret;
down_read(&zram->init_lock);
ret = scnprintf(buf, PAGE_SIZE,
"version: %d\n%8llu\n",
version,
(u64)atomic64_read(&zram->stats.writestall));
up_read(&zram->init_lock);
return ret;
}
static DEVICE_ATTR_RO(io_stat);
static DEVICE_ATTR_RO(mm_stat);
static DEVICE_ATTR_RO(debug_stat);
ZRAM_ATTR_RO(num_reads);
ZRAM_ATTR_RO(num_writes);
ZRAM_ATTR_RO(failed_reads);
@ -514,7 +511,7 @@ static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
goto out_error;
}
-	meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO | __GFP_HIGHMEM);
+	meta->mem_pool = zs_create_pool(pool_name);
if (!meta->mem_pool) {
pr_err("Error creating memory pool\n");
goto out_error;
@ -650,7 +647,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
{
int ret = 0;
size_t clen;
-	unsigned long handle;
+	unsigned long handle = 0;
struct page *page;
unsigned char *user_mem, *cmem, *src, *uncmem = NULL;
struct zram_meta *meta = zram->meta;
@ -673,9 +670,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
goto out;
}
-	zstrm = zcomp_strm_find(zram->comp);
+compress_again:
user_mem = kmap_atomic(page);
if (is_partial_io(bvec)) {
memcpy(uncmem + offset, user_mem + bvec->bv_offset,
bvec->bv_len);
@ -699,6 +695,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
goto out;
}
zstrm = zcomp_strm_find(zram->comp);
ret = zcomp_compress(zram->comp, zstrm, uncmem, &clen);
if (!is_partial_io(bvec)) {
kunmap_atomic(user_mem);
@ -710,6 +707,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
pr_err("Compression failed! err=%d\n", ret);
goto out;
}
src = zstrm->buffer;
if (unlikely(clen > max_zpage_size)) {
clen = PAGE_SIZE;
@ -717,8 +715,35 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
src = uncmem;
}
-	handle = zs_malloc(meta->mem_pool, clen);
/*
* handle allocation has 2 paths:
* a) fast path is executed with preemption disabled (for
* per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
* since we can't sleep;
* b) slow path enables preemption and attempts to allocate
* the page with __GFP_DIRECT_RECLAIM bit set. we have to
* put per-cpu compression stream and, thus, to re-do
* the compression once handle is allocated.
*
* if we have a 'non-null' handle here then we are coming
* from the slow path and handle has already been allocated.
*/
if (!handle)
handle = zs_malloc(meta->mem_pool, clen,
__GFP_KSWAPD_RECLAIM |
__GFP_NOWARN |
__GFP_HIGHMEM);
if (!handle) {
zcomp_strm_release(zram->comp, zstrm);
zstrm = NULL;
atomic64_inc(&zram->stats.writestall);
handle = zs_malloc(meta->mem_pool, clen,
GFP_NOIO | __GFP_HIGHMEM);
if (handle)
goto compress_again;
pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
index, clen);
ret = -ENOMEM;
@ -1009,7 +1034,6 @@ static void zram_reset_device(struct zram *zram)
/* Reset stats */
memset(&zram->stats, 0, sizeof(zram->stats));
zram->disksize = 0;
zram->max_comp_streams = 1;
set_capacity(zram->disk, 0);
part_stat_set_all(&zram->disk->part0, 0);
@ -1038,7 +1062,7 @@ static ssize_t disksize_store(struct device *dev,
if (!meta)
return -ENOMEM;
-	comp = zcomp_create(zram->compressor, zram->max_comp_streams);
+	comp = zcomp_create(zram->compressor);
if (IS_ERR(comp)) {
pr_err("Cannot initialise %s compressing backend\n",
zram->compressor);
@ -1177,6 +1201,7 @@ static struct attribute *zram_disk_attrs[] = {
&dev_attr_comp_algorithm.attr,
&dev_attr_io_stat.attr,
&dev_attr_mm_stat.attr,
&dev_attr_debug_stat.attr,
NULL,
};
@ -1273,7 +1298,6 @@ static int zram_add(void)
}
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
zram->meta = NULL;
zram->max_comp_streams = 1;
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;


@ -85,6 +85,7 @@ struct zram_stats {
atomic64_t zero_pages; /* no. of zero filled pages */
atomic64_t pages_stored; /* no. of pages currently stored */
atomic_long_t max_used_pages; /* no. of maximum pages stored */
atomic64_t writestall; /* no. of write slow paths */
};
struct zram_meta {
@ -102,7 +103,6 @@ struct zram {
* the number of pages zram can consume for storing compressed data
*/
unsigned long limit_pages;
int max_comp_streams;
struct zram_stats stats;
atomic_t refcount; /* refcount for zram_meta */


@ -260,6 +260,7 @@
#include <linux/irq.h>
#include <linux/syscalls.h>
#include <linux/completion.h>
#include <linux/uuid.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
@ -1621,26 +1622,6 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
return urandom_read(NULL, buf, count, NULL);
}
/***************************************************************
* Random UUID interface
*
* Used here for a Boot ID, but can be useful for other kernel
* drivers.
***************************************************************/
/*
* Generate random UUID
*/
void generate_random_uuid(unsigned char uuid_out[16])
{
get_random_bytes(uuid_out, 16);
/* Set UUID version to 4 --- truly random generation */
uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40;
/* Set the UUID variant to DCE */
uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80;
}
EXPORT_SYMBOL(generate_random_uuid);
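This block is not dropped: the merge moves generate_random_uuid() into lib/uuid.c (declared in linux/uuid.h) alongside the new parsing helpers. The two fix-up lines implement RFC 4122 version-4 UUIDs; for a raw random byte 0xAB at offset 6, for example:

    uuid[6] = (0xAB & 0x0F) | 0x40;    /* = 0x4B: version nibble forced to 4 */
    uuid[8] = (uuid[8] & 0x3F) | 0x80; /* variant bits forced to 10xxxxxx */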
/********************************************************************
*
* Sysctl interface


@ -313,7 +313,7 @@ int of_hwspin_lock_get_id(struct device_node *np, int index)
hwlock = radix_tree_deref_slot(slot);
if (unlikely(!hwlock))
continue;
-		if (radix_tree_is_indirect_ptr(hwlock)) {
+		if (radix_tree_deref_retry(hwlock)) {
slot = radix_tree_iter_retry(&iter);
continue;
}
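radix_tree_deref_retry() is the right predicate here: it reports that the slot held a moved/indirect entry at the moment of the RCU lookup (possible while the tree grows or shrinks concurrently), so the iteration must restart at the current index instead of dereferencing a stale pointer. The canonical pattern, which the hunk above restores (sketch):

    rcu_read_lock();
    radix_tree_for_each_slot(slot, &tree, &iter, 0) {
            item = radix_tree_deref_slot(slot);
            if (radix_tree_deref_retry(item)) {
                    slot = radix_tree_iter_retry(&iter);
                    continue;
            }
            /* ... use item ... */
    }
    rcu_read_unlock();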


@ -37,6 +37,7 @@
#include <linux/acpi.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/uuid.h>
ACPI_MODULE_NAME("wmi");
MODULE_AUTHOR("Carlos Corbacho");
@ -115,100 +116,21 @@ static struct acpi_driver acpi_wmi_driver = {
* GUID parsing functions
*/
/**
* wmi_parse_hexbyte - Convert a ASCII hex number to a byte
* @src: Pointer to at least 2 characters to convert.
*
* Convert a two character ASCII hex string to a number.
*
* Return: 0-255 Success, the byte was parsed correctly
* -1 Error, an invalid character was supplied
*/
static int wmi_parse_hexbyte(const u8 *src)
{
int h;
int value;
/* high part */
h = value = hex_to_bin(src[0]);
if (value < 0)
return -1;
/* low part */
value = hex_to_bin(src[1]);
if (value >= 0)
return (h << 4) | value;
return -1;
}
/**
* wmi_swap_bytes - Rearrange GUID bytes to match GUID binary
* @src: Memory block holding binary GUID (16 bytes)
* @dest: Memory block to hold byte swapped binary GUID (16 bytes)
*
* Byte swap a binary GUID to match it's real GUID value
*/
static void wmi_swap_bytes(u8 *src, u8 *dest)
{
int i;
for (i = 0; i <= 3; i++)
memcpy(dest + i, src + (3 - i), 1);
for (i = 0; i <= 1; i++)
memcpy(dest + 4 + i, src + (5 - i), 1);
for (i = 0; i <= 1; i++)
memcpy(dest + 6 + i, src + (7 - i), 1);
memcpy(dest + 8, src + 8, 8);
}
/**
* wmi_parse_guid - Convert GUID from ASCII to binary
* @src: 36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba
* @dest: Memory block to hold binary GUID (16 bytes)
*
* N.B. The GUID need not be NULL terminated.
*
* Return: 'true' @dest contains binary GUID
* 'false' @dest contents are undefined
*/
static bool wmi_parse_guid(const u8 *src, u8 *dest)
{
static const int size[] = { 4, 2, 2, 2, 6 };
int i, j, v;
if (src[8] != '-' || src[13] != '-' ||
src[18] != '-' || src[23] != '-')
return false;
for (j = 0; j < 5; j++, src++) {
for (i = 0; i < size[j]; i++, src += 2, *dest++ = v) {
v = wmi_parse_hexbyte(src);
if (v < 0)
return false;
}
}
return true;
}
static bool find_guid(const char *guid_string, struct wmi_block **out)
{
char tmp[16], guid_input[16];
uuid_le guid_input;
struct wmi_block *wblock;
struct guid_block *block;
struct list_head *p;
wmi_parse_guid(guid_string, tmp);
wmi_swap_bytes(tmp, guid_input);
if (uuid_le_to_bin(guid_string, &guid_input))
return false;
list_for_each(p, &wmi_block_list) {
wblock = list_entry(p, struct wmi_block, list);
block = &wblock->gblock;
if (memcmp(block->guid, guid_input, 16) == 0) {
if (memcmp(block->guid, &guid_input, 16) == 0) {
if (out)
*out = wblock;
return true;
@ -498,20 +420,20 @@ wmi_notify_handler handler, void *data)
{
struct wmi_block *block;
acpi_status status = AE_NOT_EXIST;
char tmp[16], guid_input[16];
uuid_le guid_input;
struct list_head *p;
if (!guid || !handler)
return AE_BAD_PARAMETER;
wmi_parse_guid(guid, tmp);
wmi_swap_bytes(tmp, guid_input);
if (uuid_le_to_bin(guid, &guid_input))
return AE_BAD_PARAMETER;
list_for_each(p, &wmi_block_list) {
acpi_status wmi_status;
block = list_entry(p, struct wmi_block, list);
if (memcmp(block->gblock.guid, guid_input, 16) == 0) {
if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
if (block->handler &&
block->handler != wmi_notify_debug)
return AE_ALREADY_ACQUIRED;
@ -539,20 +461,20 @@ acpi_status wmi_remove_notify_handler(const char *guid)
{
struct wmi_block *block;
acpi_status status = AE_NOT_EXIST;
char tmp[16], guid_input[16];
uuid_le guid_input;
struct list_head *p;
if (!guid)
return AE_BAD_PARAMETER;
wmi_parse_guid(guid, tmp);
wmi_swap_bytes(tmp, guid_input);
if (uuid_le_to_bin(guid, &guid_input))
return AE_BAD_PARAMETER;
list_for_each(p, &wmi_block_list) {
acpi_status wmi_status;
block = list_entry(p, struct wmi_block, list);
if (memcmp(block->gblock.guid, guid_input, 16) == 0) {
if (memcmp(block->gblock.guid, &guid_input, 16) == 0) {
if (!block->handler ||
block->handler == wmi_notify_debug)
return AE_NULL_ENTRY;
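
All three call sites now share the same shape: parse once with
uuid_le_to_bin(), which returns 0 on success, then memcmp() against the
stored block GUID. A sketch of that pattern, reusing the sample GUID string
from the removed kerneldoc:

	uuid_le guid;

	if (uuid_le_to_bin("fa50ff2b-f2e8-45de-83fa-65417f2f49ba", &guid))
		return AE_BAD_PARAMETER;	/* malformed GUID string */

	if (memcmp(block->guid, &guid, 16) == 0)
		/* found the matching wmi_block */;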

View File

@ -20,13 +20,13 @@
#include <linux/slab.h>
#include <linux/buffer_head.h>
#include <linux/blkdev.h>
#include <linux/random.h>
#include <linux/iocontext.h>
#include <linux/capability.h>
#include <linux/ratelimit.h>
#include <linux/kthread.h>
#include <linux/raid/pq.h>
#include <linux/semaphore.h>
#include <linux/uuid.h>
#include <asm/div64.h>
#include "ctree.h"
#include "extent_map.h"

View File

@ -32,6 +32,15 @@
#include <linux/pfn_t.h>
#include <linux/sizes.h>
#define RADIX_DAX_MASK 0xf
#define RADIX_DAX_SHIFT 4
#define RADIX_DAX_PTE (0x4 | RADIX_TREE_EXCEPTIONAL_ENTRY)
#define RADIX_DAX_PMD (0x8 | RADIX_TREE_EXCEPTIONAL_ENTRY)
#define RADIX_DAX_TYPE(entry) ((unsigned long)entry & RADIX_DAX_MASK)
#define RADIX_DAX_SECTOR(entry) (((unsigned long)entry >> RADIX_DAX_SHIFT))
#define RADIX_DAX_ENTRY(sector, pmd) ((void *)((unsigned long)sector << \
RADIX_DAX_SHIFT | (pmd ? RADIX_DAX_PMD : RADIX_DAX_PTE)))
static long dax_map_atomic(struct block_device *bdev, struct blk_dax_ctl *dax)
{
struct request_queue *q = bdev->bd_queue;
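
The macros encode a sector number and an entry type into one exceptional
radix-tree entry; a round-trip sketch using only the definitions above:

	sector_t sector = 12345;			/* arbitrary example */
	void *entry = RADIX_DAX_ENTRY(sector, false);	/* PTE-sized */

	WARN_ON(RADIX_DAX_TYPE(entry) != RADIX_DAX_PTE);
	WARN_ON(RADIX_DAX_SECTOR(entry) != sector);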

View File

@ -11,6 +11,7 @@
#include <linux/fs.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <linux/uuid.h>
#include "internal.h"
@ -46,11 +47,7 @@ struct inode *efivarfs_get_inode(struct super_block *sb,
*/
bool efivarfs_valid_name(const char *str, int len)
{
static const char dashes[EFI_VARIABLE_GUID_LEN] = {
[8] = 1, [13] = 1, [18] = 1, [23] = 1
};
const char *s = str + len - EFI_VARIABLE_GUID_LEN;
int i;
/*
* We need a GUID, plus at least one letter for the variable name,
@ -68,37 +65,7 @@ bool efivarfs_valid_name(const char *str, int len)
*
* 12345678-1234-1234-1234-123456789abc
*/
for (i = 0; i < EFI_VARIABLE_GUID_LEN; i++) {
if (dashes[i]) {
if (*s++ != '-')
return false;
} else {
if (!isxdigit(*s++))
return false;
}
}
return true;
}
static void efivarfs_hex_to_guid(const char *str, efi_guid_t *guid)
{
guid->b[0] = hex_to_bin(str[6]) << 4 | hex_to_bin(str[7]);
guid->b[1] = hex_to_bin(str[4]) << 4 | hex_to_bin(str[5]);
guid->b[2] = hex_to_bin(str[2]) << 4 | hex_to_bin(str[3]);
guid->b[3] = hex_to_bin(str[0]) << 4 | hex_to_bin(str[1]);
guid->b[4] = hex_to_bin(str[11]) << 4 | hex_to_bin(str[12]);
guid->b[5] = hex_to_bin(str[9]) << 4 | hex_to_bin(str[10]);
guid->b[6] = hex_to_bin(str[16]) << 4 | hex_to_bin(str[17]);
guid->b[7] = hex_to_bin(str[14]) << 4 | hex_to_bin(str[15]);
guid->b[8] = hex_to_bin(str[19]) << 4 | hex_to_bin(str[20]);
guid->b[9] = hex_to_bin(str[21]) << 4 | hex_to_bin(str[22]);
guid->b[10] = hex_to_bin(str[24]) << 4 | hex_to_bin(str[25]);
guid->b[11] = hex_to_bin(str[26]) << 4 | hex_to_bin(str[27]);
guid->b[12] = hex_to_bin(str[28]) << 4 | hex_to_bin(str[29]);
guid->b[13] = hex_to_bin(str[30]) << 4 | hex_to_bin(str[31]);
guid->b[14] = hex_to_bin(str[32]) << 4 | hex_to_bin(str[33]);
guid->b[15] = hex_to_bin(str[34]) << 4 | hex_to_bin(str[35]);
return uuid_is_valid(s);
}
static int efivarfs_create(struct inode *dir, struct dentry *dentry,
@ -119,8 +86,7 @@ static int efivarfs_create(struct inode *dir, struct dentry *dentry,
/* length of the variable name itself: remove GUID and separator */
namelen = dentry->d_name.len - EFI_VARIABLE_GUID_LEN - 1;
efivarfs_hex_to_guid(dentry->d_name.name + namelen + 1,
&var->var.VendorGuid);
uuid_le_to_bin(dentry->d_name.name + namelen + 1, &var->var.VendorGuid);
if (efivar_variable_is_removable(var->var.VendorGuid,
dentry->d_name.name, namelen))
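
Both generic helpers are adopted here: uuid_is_valid() for name validation
and uuid_le_to_bin() for the VendorGuid conversion. A combined sketch,
assuming s points at the GUID portion of the name:

	if (!uuid_is_valid(s))
		return false;
	uuid_le_to_bin(s, &var->var.VendorGuid);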

View File

@ -275,7 +275,7 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
if (!bh) {
pr_err("cannot read volume header\n");
return -EINVAL;
return -EIO;
}
/*
@ -293,7 +293,7 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
bh = sb_bread(s, sb->fs_start + EFS_SUPER);
if (!bh) {
pr_err("cannot read superblock\n");
return -EINVAL;
return -EIO;
}
if (efs_validate_super(sb, (struct efs_super *) bh->b_data)) {

View File

@ -13,8 +13,8 @@
#include <linux/compat.h>
#include <linux/mount.h>
#include <linux/file.h>
#include <linux/random.h>
#include <linux/quotaops.h>
#include <linux/uuid.h>
#include <asm/uaccess.h>
#include "ext4_jbd2.h"
#include "ext4.h"

View File

@ -20,7 +20,7 @@
#include <linux/uaccess.h>
#include <linux/mount.h>
#include <linux/pagevec.h>
#include <linux/random.h>
#include <linux/uuid.h>
#include "f2fs.h"
#include "node.h"

View File

@ -931,7 +931,8 @@ void wb_start_writeback(struct bdi_writeback *wb, long nr_pages,
* This is WB_SYNC_NONE writeback, so if allocation fails just
* wakeup the thread for old dirty data writeback
*/
work = kzalloc(sizeof(*work), GFP_ATOMIC);
work = kzalloc(sizeof(*work),
GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);
if (!work) {
trace_writeback_nowork(wb);
wb_wakeup(wb);
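
The replacement mask spells out the intent flag by flag:

	/* GFP_NOWAIT       - don't sleep on this opportunistic allocation
	 * __GFP_NOMEMALLOC - never dip into the emergency reserves
	 * __GFP_NOWARN     - failure is handled here, so don't warn about it */
	work = kzalloc(sizeof(*work),
		       GFP_NOWAIT | __GFP_NOMEMALLOC | __GFP_NOWARN);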

View File

@ -83,6 +83,7 @@
#include <linux/tracehook.h>
#include <linux/string_helpers.h>
#include <linux/user_namespace.h>
#include <linux/fs_struct.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
@ -139,12 +140,25 @@ static inline const char *get_task_state(struct task_struct *tsk)
return task_state_array[fls(state)];
}
static inline int get_task_umask(struct task_struct *tsk)
{
struct fs_struct *fs;
int umask = -ENOENT;
task_lock(tsk);
fs = tsk->fs;
if (fs)
umask = fs->umask;
task_unlock(tsk);
return umask;
}
static inline void task_state(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *p)
{
struct user_namespace *user_ns = seq_user_ns(m);
struct group_info *group_info;
int g;
int g, umask;
struct task_struct *tracer;
const struct cred *cred;
pid_t ppid, tpid = 0, tgid, ngid;
@ -162,6 +176,10 @@ static inline void task_state(struct seq_file *m, struct pid_namespace *ns,
ngid = task_numa_group_id(p);
cred = get_task_cred(p);
umask = get_task_umask(p);
if (umask >= 0)
seq_printf(m, "Umask:\t%#04o\n", umask);
task_lock(p);
if (p->files)
max_fds = files_fdtable(p->files)->max_fds;
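
A task running with the conventional umask of 022 would thus show, for
example:

	Umask:	0022

in its /proc/<pid>/status output, while a task with no fs_struct gets
-ENOENT from the helper and simply omits the line.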

View File

@ -3162,6 +3162,44 @@ int proc_pid_readdir(struct file *file, struct dir_context *ctx)
return 0;
}
/*
* proc_tid_comm_permission is a special permission function exclusively
* used for the node /proc/<pid>/task/<tid>/comm.
* It bypasses generic permission checks in the case where a task of the same
* task group attempts to access the node.
* The rationale behind this is that glibc and bionic access this node for
* cross thread naming (pthread_set/getname_np(!self)). However, if
* PR_SET_DUMPABLE gets set to 0, this node among others becomes owned by uid=0/gid=0,
* which locks out the cross thread naming implementation.
* This function makes sure that the node is always accessible for members of
* same thread group.
*/
static int proc_tid_comm_permission(struct inode *inode, int mask)
{
bool is_same_tgroup;
struct task_struct *task;
task = get_proc_task(inode);
if (!task)
return -ESRCH;
is_same_tgroup = same_thread_group(current, task);
put_task_struct(task);
if (likely(is_same_tgroup && !(mask & MAY_EXEC))) {
/* This file (/proc/<pid>/task/<tid>/comm) can always be
* read or written by the members of the corresponding
* thread group.
*/
return 0;
}
return generic_permission(inode, mask);
}
static const struct inode_operations proc_tid_comm_inode_operations = {
.permission = proc_tid_comm_permission,
};
/*
* Tasks
*/
@ -3180,7 +3218,9 @@ static const struct pid_entry tid_base_stuff[] = {
#ifdef CONFIG_SCHED_DEBUG
REG("sched", S_IRUGO|S_IWUSR, proc_pid_sched_operations),
#endif
REG("comm", S_IRUGO|S_IWUSR, proc_pid_set_comm_operations),
NOD("comm", S_IFREG|S_IRUGO|S_IWUSR,
&proc_tid_comm_inode_operations,
&proc_pid_set_comm_operations, {}),
#ifdef CONFIG_HAVE_ARCH_TRACEHOOK
ONE("syscall", S_IRUSR, proc_pid_syscall),
#endif
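
A userspace sketch of the case the comment above describes, roughly what a
pthread_setname_np(tid != self) implementation does (the thread name is
illustrative):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	static void set_sibling_name(pid_t tid)
	{
		char path[64];
		int fd;

		snprintf(path, sizeof(path), "/proc/self/task/%d/comm", (int)tid);
		fd = open(path, O_WRONLY | O_CLOEXEC);
		if (fd >= 0) {
			write(fd, "worker-1", 8);
			close(fd);
		}
	}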

View File

@ -211,14 +211,11 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
struct page **pages = NULL, **ptr, *page;
loff_t isize;
if (!(flags & MAP_SHARED))
return addr;
/* the mapping mustn't extend beyond the EOF */
lpages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT;
isize = i_size_read(inode);
ret = -EINVAL;
ret = -ENOSYS;
maxpages = (isize + PAGE_SIZE - 1) >> PAGE_SHIFT;
if (pgoff >= maxpages)
goto out;
@ -227,7 +224,6 @@ static unsigned long ramfs_nommu_get_unmapped_area(struct file *file,
goto out;
/* gang-find the pages */
ret = -ENOMEM;
pages = kcalloc(lpages, sizeof(struct page *), GFP_KERNEL);
if (!pages)
goto out_free;
@ -263,7 +259,7 @@ out:
*/
static int ramfs_nommu_mmap(struct file *file, struct vm_area_struct *vma)
{
if (!(vma->vm_flags & VM_SHARED))
if (!(vma->vm_flags & (VM_SHARED | VM_MAYSHARE)))
return -ENOSYS;
file_accessed(file);

View File

@ -3,8 +3,8 @@
*/
#include <linux/string.h>
#include <linux/random.h>
#include <linux/time.h>
#include <linux/uuid.h>
#include "reiserfs.h"
/* find where objectid map starts */

View File

@ -28,8 +28,8 @@
#include "ubifs.h"
#include <linux/slab.h>
#include <linux/random.h>
#include <linux/math64.h>
#include <linux/uuid.h>
/*
* Default journal size in logical eraseblocks as a percent of total

View File

@ -137,7 +137,7 @@ static void userfaultfd_ctx_put(struct userfaultfd_ctx *ctx)
VM_BUG_ON(waitqueue_active(&ctx->fault_wqh));
VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
VM_BUG_ON(waitqueue_active(&ctx->fd_wqh));
mmput(ctx->mm);
mmdrop(ctx->mm);
kmem_cache_free(userfaultfd_ctx_cachep, ctx);
}
}
@ -434,6 +434,9 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
ACCESS_ONCE(ctx->released) = true;
if (!mmget_not_zero(mm))
goto wakeup;
/*
* Flush page faults out of all CPUs. NOTE: all page faults
* must be retried without returning VM_FAULT_SIGBUS if
@ -466,7 +469,8 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
vma->vm_userfaultfd_ctx = NULL_VM_UFFD_CTX;
}
up_write(&mm->mmap_sem);
mmput(mm);
wakeup:
/*
* After no new page faults can wait on this fault_*wqh, flush
* the last page faults that may have been already waiting on
@ -760,10 +764,12 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
start = uffdio_register.range.start;
end = start + uffdio_register.range.len;
ret = -ENOMEM;
if (!mmget_not_zero(mm))
goto out;
down_write(&mm->mmap_sem);
vma = find_vma_prev(mm, start, &prev);
ret = -ENOMEM;
if (!vma)
goto out_unlock;
@ -864,6 +870,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
} while (vma && vma->vm_start < end);
out_unlock:
up_write(&mm->mmap_sem);
mmput(mm);
if (!ret) {
/*
* Now that we scanned all vmas we can already tell
@ -902,10 +909,12 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
start = uffdio_unregister.start;
end = start + uffdio_unregister.len;
ret = -ENOMEM;
if (!mmget_not_zero(mm))
goto out;
down_write(&mm->mmap_sem);
vma = find_vma_prev(mm, start, &prev);
ret = -ENOMEM;
if (!vma)
goto out_unlock;
@ -998,6 +1007,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
} while (vma && vma->vm_start < end);
out_unlock:
up_write(&mm->mmap_sem);
mmput(mm);
out:
return ret;
}
@ -1067,9 +1077,11 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx,
goto out;
if (uffdio_copy.mode & ~UFFDIO_COPY_MODE_DONTWAKE)
goto out;
ret = mcopy_atomic(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
uffdio_copy.len);
if (mmget_not_zero(ctx->mm)) {
ret = mcopy_atomic(ctx->mm, uffdio_copy.dst, uffdio_copy.src,
uffdio_copy.len);
mmput(ctx->mm);
}
if (unlikely(put_user(ret, &user_uffdio_copy->copy)))
return -EFAULT;
if (ret < 0)
@ -1110,8 +1122,11 @@ static int userfaultfd_zeropage(struct userfaultfd_ctx *ctx,
if (uffdio_zeropage.mode & ~UFFDIO_ZEROPAGE_MODE_DONTWAKE)
goto out;
ret = mfill_zeropage(ctx->mm, uffdio_zeropage.range.start,
uffdio_zeropage.range.len);
if (mmget_not_zero(ctx->mm)) {
ret = mfill_zeropage(ctx->mm, uffdio_zeropage.range.start,
uffdio_zeropage.range.len);
mmput(ctx->mm);
}
if (unlikely(put_user(ret, &user_uffdio_zeropage->zeropage)))
return -EFAULT;
if (ret < 0)
@ -1289,12 +1304,12 @@ static struct file *userfaultfd_file_create(int flags)
ctx->released = false;
ctx->mm = current->mm;
/* prevent the mm struct from being freed */
atomic_inc(&ctx->mm->mm_users);
atomic_inc(&ctx->mm->mm_count);
file = anon_inode_getfile("[userfaultfd]", &userfaultfd_fops, ctx,
O_RDWR | (flags & UFFD_SHARED_FCNTL_FLAGS));
if (IS_ERR(file)) {
mmput(ctx->mm);
mmdrop(ctx->mm);
kmem_cache_free(userfaultfd_ctx_cachep, ctx);
}
out:
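
The underlying distinction: mm_count (atomic_inc/mmdrop) pins only the
struct mm_struct itself, while mm_users (mmget_not_zero/mmput) keeps the
whole address space alive. The ctx now holds the cheap reference permanently
and each ioctl takes a temporary mm_users reference just for the duration of
the operation (dst, src and len stand in for the uffdio_copy fields):

	if (mmget_not_zero(ctx->mm)) {
		ret = mcopy_atomic(ctx->mm, dst, src, len);
		mmput(ctx->mm);
	}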

View File

@ -2,21 +2,46 @@
#define _LINUX_COMPACTION_H
/* Return values for compact_zone() and try_to_compact_pages() */
/* compaction didn't start as it was deferred due to past failures */
#define COMPACT_DEFERRED 0
/* compaction didn't start as it was not possible or direct reclaim was more suitable */
#define COMPACT_SKIPPED 1
/* compaction should continue to another pageblock */
#define COMPACT_CONTINUE 2
/* direct compaction partially compacted a zone and there are suitable pages */
#define COMPACT_PARTIAL 3
/* The full zone was compacted */
#define COMPACT_COMPLETE 4
/* For more detailed tracepoint output */
#define COMPACT_NO_SUITABLE_PAGE 5
#define COMPACT_NOT_SUITABLE_ZONE 6
#define COMPACT_CONTENDED 7
/* When adding new states, please adjust include/trace/events/compaction.h */
enum compact_result {
/* For more detailed tracepoint output - internal to compaction */
COMPACT_NOT_SUITABLE_ZONE,
/*
* compaction didn't start as it was not possible or direct reclaim
* was more suitable
*/
COMPACT_SKIPPED,
/* compaction didn't start as it was deferred due to past failures */
COMPACT_DEFERRED,
/* compaction not active last round */
COMPACT_INACTIVE = COMPACT_DEFERRED,
/* For more detailed tracepoint output - internal to compaction */
COMPACT_NO_SUITABLE_PAGE,
/* compaction should continue to another pageblock */
COMPACT_CONTINUE,
/*
* The full zone was scanned, but compaction didn't succeed in freeing
* suitable pages.
*/
COMPACT_COMPLETE,
/*
* direct compaction has scanned part of the zone, but didn't succeed
* in freeing suitable pages.
*/
COMPACT_PARTIAL_SKIPPED,
/* compaction terminated prematurely due to lock contentions */
COMPACT_CONTENDED,
/*
* direct compaction partially compacted a zone and there might be
* suitable pages
*/
COMPACT_PARTIAL,
};
/* Used to signal whether compaction detected need_sched() or lock contention */
/* No contention detected */
@ -38,12 +63,13 @@ extern int sysctl_extfrag_handler(struct ctl_table *table, int write,
extern int sysctl_compact_unevictable_allowed;
extern int fragmentation_index(struct zone *zone, unsigned int order);
extern unsigned long try_to_compact_pages(gfp_t gfp_mask, unsigned int order,
extern enum compact_result try_to_compact_pages(gfp_t gfp_mask,
unsigned int order,
unsigned int alloc_flags, const struct alloc_context *ac,
enum migrate_mode mode, int *contended);
extern void compact_pgdat(pg_data_t *pgdat, int order);
extern void reset_isolation_suitable(pg_data_t *pgdat);
extern unsigned long compaction_suitable(struct zone *zone, int order,
extern enum compact_result compaction_suitable(struct zone *zone, int order,
unsigned int alloc_flags, int classzone_idx);
extern void defer_compaction(struct zone *zone, int order);
@ -52,12 +78,80 @@ extern void compaction_defer_reset(struct zone *zone, int order,
bool alloc_success);
extern bool compaction_restarting(struct zone *zone, int order);
/* Compaction has made some progress and retrying makes sense */
static inline bool compaction_made_progress(enum compact_result result)
{
/*
* Even though this might sound confusing, this in fact tells us
* that the compaction successfully isolated and migrated some
* pageblocks.
*/
if (result == COMPACT_PARTIAL)
return true;
return false;
}
/* Compaction has failed and it doesn't make much sense to keep retrying. */
static inline bool compaction_failed(enum compact_result result)
{
/* All zones were scanned completely and still no result. */
if (result == COMPACT_COMPLETE)
return true;
return false;
}
/*
* Compaction has backed off for some reason. It might be throttling or
* lock contention. Retrying is still worthwhile.
*/
static inline bool compaction_withdrawn(enum compact_result result)
{
/*
* Compaction backed off due to watermark checks for order-0, so the
* regular reclaim has to try harder and reclaim something.
*/
if (result == COMPACT_SKIPPED)
return true;
/*
* If compaction is deferred for high-order allocations, it is
* because sync compaction recently failed. If this is the case
* and the caller requested a THP allocation, we do not want
* to heavily disrupt the system, so we fail the allocation
* instead of entering direct reclaim.
*/
if (result == COMPACT_DEFERRED)
return true;
/*
* If compaction in async mode encounters contention or blocks a higher
* priority task, we back off early rather than cause stalls.
*/
if (result == COMPACT_CONTENDED)
return true;
/*
* Page scanners have met but we haven't scanned full zones, so this
* is in fact a back off.
*/
if (result == COMPACT_PARTIAL_SKIPPED)
return true;
return false;
}
bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
int alloc_flags);
extern int kcompactd_run(int nid);
extern void kcompactd_stop(int nid);
extern void wakeup_kcompactd(pg_data_t *pgdat, int order, int classzone_idx);
#else
static inline unsigned long try_to_compact_pages(gfp_t gfp_mask,
static inline enum compact_result try_to_compact_pages(gfp_t gfp_mask,
unsigned int order, int alloc_flags,
const struct alloc_context *ac,
enum migrate_mode mode, int *contended)
@ -73,7 +167,7 @@ static inline void reset_isolation_suitable(pg_data_t *pgdat)
{
}
static inline unsigned long compaction_suitable(struct zone *zone, int order,
static inline enum compact_result compaction_suitable(struct zone *zone, int order,
int alloc_flags, int classzone_idx)
{
return COMPACT_SKIPPED;
@ -88,6 +182,21 @@ static inline bool compaction_deferred(struct zone *zone, int order)
return true;
}
static inline bool compaction_made_progress(enum compact_result result)
{
return false;
}
static inline bool compaction_failed(enum compact_result result)
{
return false;
}
static inline bool compaction_withdrawn(enum compact_result result)
{
return true;
}
static inline int kcompactd_run(int nid)
{
return 0;
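
Taken together, the three helpers give allocator slow paths a tri-state
verdict; a sketch of how a caller might branch on it (the labels are
hypothetical):

	enum compact_result result;

	result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
				      mode, &contended);
	if (compaction_made_progress(result))
		goto retry_alloc;	/* pageblocks were migrated */
	if (compaction_failed(result))
		goto give_up;		/* whole zones scanned, no result */
	if (compaction_withdrawn(result))
		goto reclaim_and_retry;	/* backed off; reclaim first */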

View File

@ -21,6 +21,7 @@
#include <linux/pfn.h>
#include <linux/pstore.h>
#include <linux/reboot.h>
#include <linux/uuid.h>
#include <linux/screen_info.h>
#include <asm/page.h>
@ -44,17 +45,10 @@ typedef u16 efi_char16_t; /* UNICODE character */
typedef u64 efi_physical_addr_t;
typedef void *efi_handle_t;
typedef struct {
u8 b[16];
} efi_guid_t;
typedef uuid_le efi_guid_t;
#define EFI_GUID(a,b,c,d0,d1,d2,d3,d4,d5,d6,d7) \
((efi_guid_t) \
{{ (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
(b) & 0xff, ((b) >> 8) & 0xff, \
(c) & 0xff, ((c) >> 8) & 0xff, \
(d0), (d1), (d2), (d3), (d4), (d5), (d6), (d7) }})
UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7)
/*
* Generic EFI table header
@ -1117,7 +1111,7 @@ extern int efi_status_to_err(efi_status_t status);
* Length of a GUID string (strlen("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"))
* not including trailing NUL
*/
#define EFI_VARIABLE_GUID_LEN 36
#define EFI_VARIABLE_GUID_LEN UUID_STRING_LEN
/*
* The type of search to perform when calling boottime->locate_handle
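
Since efi_guid_t is now a typedef of uuid_le, the macro is a thin wrapper
and existing definitions keep working unchanged; an illustrative, made-up
GUID:

	static const efi_guid_t sample_guid =
		EFI_GUID(0x12345678, 0x1234, 0x1234,
			 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc);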

View File

@ -14,6 +14,7 @@
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/percpu-refcount.h>
#include <linux/uuid.h>
#ifdef CONFIG_BLOCK
@ -93,7 +94,7 @@ struct disk_stats {
* Enough for the string representation of any kind of UUID plus NULL.
* EFI UUID is 36 characters. MSDOS UUID is 11 characters.
*/
#define PARTITION_META_INFO_UUIDLTH 37
#define PARTITION_META_INFO_UUIDLTH (UUID_STRING_LEN + 1)
struct partition_meta_info {
char uuid[PARTITION_META_INFO_UUIDLTH];
@ -228,27 +229,9 @@ static inline struct gendisk *part_to_disk(struct hd_struct *part)
return NULL;
}
static inline void part_pack_uuid(const u8 *uuid_str, u8 *to)
{
int i;
for (i = 0; i < 16; ++i) {
*to++ = (hex_to_bin(*uuid_str) << 4) |
(hex_to_bin(*(uuid_str + 1)));
uuid_str += 2;
switch (i) {
case 3:
case 5:
case 7:
case 9:
uuid_str++;
continue;
}
}
}
static inline int blk_part_pack_uuid(const u8 *uuid_str, u8 *to)
{
part_pack_uuid(uuid_str, to);
uuid_be_to_bin(uuid_str, (uuid_be *)to);
return 0;
}
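
uuid_be is the right flavour here because partition UUID strings are packed
byte-for-byte in string order rather than the little-endian field order of
uuid_le; a usage sketch with a made-up UUID:

	static const u8 uuid_str[] = "12345678-1234-1234-1234-123456789abc";
	u8 raw[16];

	blk_part_pack_uuid(uuid_str, raw);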

Some files were not shown because too many files have changed in this diff