
Merge branch 'akpm' (patchbomb from Andrew) into next

Merge misc updates from Andrew Morton:

 - a few fixes for 3.16.  Cc'ed to stable so they'll get there somehow.

 - various misc fixes and cleanups

 - most of the ocfs2 queue.  Review is slow...

 - most of MM.  The MM queue is pretty huge this time, but not much in
   the way of feature work.

 - some tweaks under kernel/

 - printk maintenance work

 - updates to lib/

 - checkpatch updates

 - tweaks to init/

* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (276 commits)
  fs/autofs4/dev-ioctl.c: add __init to autofs_dev_ioctl_init
  fs/ncpfs/getopt.c: replace simple_strtoul by kstrtoul
  init/main.c: remove an ifdef
  kthreads: kill CLONE_KERNEL, change kernel_thread(kernel_init) to avoid CLONE_SIGHAND
  init/main.c: add initcall_blacklist kernel parameter
  init/main.c: don't use pr_debug()
  fs/binfmt_flat.c: make old_reloc() static
  fs/binfmt_elf.c: fix bool assignements
  fs/efs: convert printk(KERN_DEBUG to pr_debug
  fs/efs: add pr_fmt / use __func__
  fs/efs: convert printk to pr_foo()
  scripts/checkpatch.pl: device_initcall is not the only __initcall substitute
  checkpatch: check stable email address
  checkpatch: warn on unnecessary void function return statements
  checkpatch: prefer kstrto<foo> to sscanf(buf, "%<lhuidx>", &bar);
  checkpatch: add warning for kmalloc/kzalloc with multiply
  checkpatch: warn on #defines ending in semicolon
  checkpatch: make --strict a default for files in drivers/net and net/
  checkpatch: always warn on missing blank line after variable declaration block
  checkpatch: fix wildcard DT compatible string checking
  ...
Linus Torvalds 2014-06-04 16:55:13 -07:00
commit 00170fdd08
279 changed files with 4720 additions and 3522 deletions


@ -660,15 +660,23 @@ There are a number of driver model diagnostic macros in <linux/device.h>
which you should use to make sure messages are matched to the right device
and driver, and are tagged with the right level: dev_err(), dev_warn(),
dev_info(), and so forth. For messages that aren't associated with a
particular device, <linux/printk.h> defines pr_debug() and pr_info().
particular device, <linux/printk.h> defines pr_notice(), pr_info(),
pr_warn(), pr_err(), etc.
Coming up with good debugging messages can be quite a challenge; and once
you have them, they can be a huge help for remote troubleshooting. Such
messages should be compiled out when the DEBUG symbol is not defined (that
is, by default they are not included). When you use dev_dbg() or pr_debug(),
that's automatic. Many subsystems have Kconfig options to turn on -DDEBUG.
A related convention uses VERBOSE_DEBUG to add dev_vdbg() messages to the
ones already enabled by DEBUG.
you have them, they can be a huge help for remote troubleshooting. However
debug message printing is handled differently than printing other non-debug
messages. While the other pr_XXX() functions print unconditionally,
pr_debug() does not; it is compiled out by default, unless either DEBUG is
defined or CONFIG_DYNAMIC_DEBUG is set. That is true for dev_dbg() also,
and a related convention uses VERBOSE_DEBUG to add dev_vdbg() messages to
the ones already enabled by DEBUG.
Many subsystems have Kconfig debug options to turn on -DDEBUG in the
corresponding Makefile; in other cases specific files #define DEBUG. And
when a debug message should be unconditionally printed, such as if it is
already inside a debug-related #ifdef section, printk(KERN_DEBUG ...) can be
used.
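As an illustrative sketch only (not part of this patch; the "demo" names are
invented), the difference in behaviour looks like this in driver code:

	#include <linux/device.h>
	#include <linux/printk.h>

	static int demo_probe(struct device *dev)
	{
		/* silent unless DEBUG or dynamic debug enables it */
		dev_dbg(dev, "probing\n");
		/* likewise silent by default */
		pr_debug("demo: parsing options\n");
		/* always printed, at KERN_INFO level */
		pr_info("demo: ready\n");
		return 0;
	}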
Chapter 14: Allocating memory


@ -270,6 +270,11 @@ When oom event notifier is registered, event will be delivered.
2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)
WARNING: Current implementation lacks reclaim support. That means allocation
attempts will fail when close to the limit even if there are plenty of
kmem available for reclaim. That makes this option unusable in real
life so DO NOT SELECT IT unless for development purposes.
With the Kernel memory extension, the Memory Controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different than user memory, since it can't be swapped out, which makes it
@ -535,17 +540,15 @@ Note:
5.3 swappiness
Similar to /proc/sys/vm/swappiness, but affecting a hierarchy of groups only.
Similar to /proc/sys/vm/swappiness, but only affecting reclaim that is
triggered by this cgroup's hard limit. The tunable in the root cgroup
corresponds to the global swappiness setting.
Please note that unlike the global swappiness, memcg knob set to 0
really prevents from any swapping even if there is a swap storage
available. This might lead to memcg OOM killer if there are no file
pages to reclaim.
Following cgroups' swappiness can't be changed.
- root cgroup (uses /proc/sys/vm/swappiness).
- a cgroup which uses hierarchy and it has other cgroup(s) below it.
- a cgroup which uses hierarchy and not the root of hierarchy.
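As a usage sketch for the per-group knob described above (assuming the v1
memory controller is mounted at /sys/fs/cgroup/memory and a group "mygroup"
exists; both are assumptions, not part of this patch):

	% cat /sys/fs/cgroup/memory/mygroup/memory.swappiness
	60
	% echo 10 > /sys/fs/cgroup/memory/mygroup/memory.swappiness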
5.4 failcnt
A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
@ -754,7 +757,6 @@ You can disable the OOM-killer by writing "1" to memory.oom_control file, as:
#echo 1 > memory.oom_control
This operation is only allowed to the top cgroup of a sub-hierarchy.
If OOM-killer is disabled, tasks under cgroup will hang/sleep
in memory cgroup's OOM-waitqueue when they request accountable memory.


@ -630,8 +630,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Also note the kernel might malfunction if you disable
some critical bits.
cma=nn[MG] [ARM,KNL]
Sets the size of kernel global memory area for contiguous
cma=nn[MG]@[start[MG][-end[MG]]]
[ARM,X86,KNL]
Sets the size of kernel global memory area for
contiguous memory allocations and optionally the
placement constraint by the physical address range of
memory allocations. For more information, see
include/linux/dma-contiguous.h
@ -1309,6 +1312,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
for working out where the kernel is dying during
startup.
initcall_blacklist= [KNL] Do not execute a comma-separated list of
initcall functions. Useful for debugging built-in
modules and initcalls.
initrd= [BOOT] Specify the location of the initial ramdisk
inport.irq= [HW] Inport (ATI XL and Microsoft) busmouse driver
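Purely for illustration (the size, addresses and initcall names below are
invented), the two options added above could be combined on a command line as:

	cma=64M@0x20000000-0x60000000 initcall_blacklist=foo_driver_init,bar_init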


@ -88,16 +88,21 @@ phase by hand.
1.3. Unit of Memory online/offline operation
------------
Memory hotplug uses SPARSEMEM memory model. SPARSEMEM divides the whole memory
into chunks of the same size. The chunk is called a "section". The size of
a section is architecture dependent. For example, power uses 16MiB, ia64 uses
1GiB. The unit of online/offline operation is "one section". (see Section 3.)
Memory hotplug uses SPARSEMEM memory model which allows memory to be divided
into chunks of the same size. These chunks are called "sections". The size of
a memory section is architecture dependent. For example, power uses 16MiB, ia64
uses 1GiB.
To determine the size of sections, please read this file:
Memory sections are combined into chunks referred to as "memory blocks". The
size of a memory block is architecture dependent and represents the logical
unit upon which memory online/offline operations are to be performed. The
default size of a memory block is the same as memory section size unless an
architecture specifies otherwise. (see Section 3.)
To determine the size (in bytes) of a memory block please read this file:
/sys/devices/system/memory/block_size_bytes
This file shows the size of sections in byte.
-----------------------
2. Kernel Configuration
@ -123,42 +128,35 @@ config options.
(CONFIG_ACPI_CONTAINER).
This option can be kernel module too.
--------------------------------
4 sysfs files for memory hotplug
3 sysfs files for memory hotplug
--------------------------------
All sections have their device information in sysfs. Each section is part of
a memory block under /sys/devices/system/memory as
All memory blocks have their device information in sysfs. Each memory block
is described under /sys/devices/system/memory as
/sys/devices/system/memory/memoryXXX
(XXX is the section id.)
(XXX is the memory block id.)
Now, XXX is defined as (start_address_of_section / section_size) of the first
section contained in the memory block. The files 'phys_index' and
'end_phys_index' under each directory report the beginning and end section id's
for the memory block covered by the sysfs directory. It is expected that all
For the memory block covered by the sysfs directory. It is expected that all
memory sections in this range are present and no memory holes exist in the
range. Currently there is no way to determine if there is a memory hole, but
the existence of one should not affect the hotplug capabilities of the memory
block.
For example, assume 1GiB section size. A device for a memory starting at
For example, assume 1GiB memory block size. A device for a memory starting at
0x100000000 is /sys/devices/system/memory/memory4
(0x100000000 / 1GiB = 4)
This device covers address range [0x100000000 ... 0x140000000)
Under each section, you can see 4 or 5 files, the end_phys_index file being
a recent addition and not present on older kernels.
Under each memory block, you can see 4 files:
/sys/devices/system/memory/memoryXXX/start_phys_index
/sys/devices/system/memory/memoryXXX/end_phys_index
/sys/devices/system/memory/memoryXXX/phys_index
/sys/devices/system/memory/memoryXXX/phys_device
/sys/devices/system/memory/memoryXXX/state
/sys/devices/system/memory/memoryXXX/removable
'phys_index' : read-only and contains section id of the first section
in the memory block, same as XXX.
'end_phys_index' : read-only and contains section id of the last section
in the memory block.
'phys_index' : read-only and contains memory block id, same as XXX.
'state' : read-write
at read: contains online/offline state of memory.
at write: user can specify "online_kernel",
@ -185,6 +183,7 @@ For example:
A backlink will also be created:
/sys/devices/system/memory/memory9/node0 -> ../../node/node0
--------------------------------
4. Physical memory hot-add phase
--------------------------------
@ -227,11 +226,10 @@ You can tell the physical address of new memory to the kernel by
% echo start_address_of_new_memory > /sys/devices/system/memory/probe
Then, [start_address_of_new_memory, start_address_of_new_memory + section_size)
memory range is hot-added. In this case, hotplug script is not called (in
current implementation). You'll have to online memory by yourself.
Please see "How to online memory" in this text.
Then, [start_address_of_new_memory, start_address_of_new_memory +
memory_block_size] memory range is hot-added. In this case, hotplug script is
not called (in current implementation). You'll have to online memory by
yourself. Please see "How to online memory" in this text.
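For example (the address is made up and a 128MiB memory block size is assumed),
probing and then onlining one block might look like:

	% echo 0x100000000 > /sys/devices/system/memory/probe
	% echo online > /sys/devices/system/memory/memory32/state

(0x100000000 / 128MiB = 32, following the block id rule described earlier.)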
------------------------------
@ -240,36 +238,36 @@ Please see "How to online memory" in this text.
5.1. State of memory
------------
To see (online/offline) state of memory section, read 'state' file.
To see (online/offline) state of a memory block, read 'state' file.
% cat /sys/devices/system/memory/memoryXXX/state
If the memory section is online, you'll read "online".
If the memory section is offline, you'll read "offline".
If the memory block is online, you'll read "online".
If the memory block is offline, you'll read "offline".
5.2. How to online memory
------------
Even if the memory is hot-added, it is not at ready-to-use state.
For using newly added memory, you have to "online" the memory section.
For using newly added memory, you have to "online" the memory block.
For onlining, you have to write "online" to the section's state file as:
For onlining, you have to write "online" to the memory block's state file as:
% echo online > /sys/devices/system/memory/memoryXXX/state
This onlining will not change the ZONE type of the target memory section,
If the memory section is in ZONE_NORMAL, you can change it to ZONE_MOVABLE:
This onlining will not change the ZONE type of the target memory block,
If the memory block is in ZONE_NORMAL, you can change it to ZONE_MOVABLE:
% echo online_movable > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_MOVABLE)
(NOTE: current limit: this memory block must be adjacent to ZONE_MOVABLE)
And if the memory section is in ZONE_MOVABLE, you can change it to ZONE_NORMAL:
And if the memory block is in ZONE_MOVABLE, you can change it to ZONE_NORMAL:
% echo online_kernel > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_NORMAL)
(NOTE: current limit: this memory block must be adjacent to ZONE_NORMAL)
After this, section memoryXXX's state will be 'online' and the amount of
After this, memory block XXX's state will be 'online' and the amount of
available memory will be increased.
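A quick, illustrative way to confirm the result (the output values are of
course system dependent):

	% cat /sys/devices/system/memory/memoryXXX/state
	online
	% grep MemTotal /proc/meminfo
	MemTotal:       16777216 kB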
Currently, newly added memory is added as ZONE_NORMAL (for powerpc, ZONE_DMA).
@ -284,22 +282,22 @@ This may be changed in future.
6.1 Memory offline and ZONE_MOVABLE
------------
Memory offlining is more complicated than memory online. Because memory offline
has to make the whole memory section be unused, memory offline can fail if
the section includes memory which cannot be freed.
has to make the whole memory block be unused, memory offline can fail if
the memory block includes memory which cannot be freed.
In general, memory offline can use 2 techniques.
(1) reclaim and free all memory in the section.
(2) migrate all pages in the section.
(1) reclaim and free all memory in the memory block.
(2) migrate all pages in the memory block.
In the current implementation, Linux's memory offline uses method (2), freeing
all pages in the section by page migration. But not all pages are
all pages in the memory block by page migration. But not all pages are
migratable. Under current Linux, migratable pages are anonymous pages and
page caches. For offlining a section by migration, the kernel has to guarantee
that the section contains only migratable pages.
page caches. For offlining a memory block by migration, the kernel has to
guarantee that the memory block contains only migratable pages.
Now, a boot option for making a section which consists of migratable pages is
supported. By specifying "kernelcore=" or "movablecore=" boot option, you can
Now, a boot option for making a memory block which consists of migratable pages
is supported. By specifying "kernelcore=" or "movablecore=" boot option, you can
create ZONE_MOVABLE...a zone which is just used for movable pages.
(See also Documentation/kernel-parameters.txt)
@ -315,28 +313,27 @@ creates ZONE_MOVABLE as following.
Size of memory for movable pages (for offline) is ZZZZ.
Note) Unfortunately, there is no information to show which section belongs
Note: Unfortunately, there is no information to show which memory block belongs
to ZONE_MOVABLE. This is TBD.
6.2. How to offline memory
------------
You can offline a section by using the same sysfs interface that was used in
memory onlining.
You can offline a memory block by using the same sysfs interface that was used
in memory onlining.
% echo offline > /sys/devices/system/memory/memoryXXX/state
If offline succeeds, the state of the memory section is changed to be "offline".
If offline succeeds, the state of the memory block is changed to be "offline".
If it fails, some error code (like -EBUSY) will be returned by the kernel.
Even if a section does not belong to ZONE_MOVABLE, you can try to offline it.
If it doesn't contain 'unmovable' memory, you'll get success.
Even if a memory block does not belong to ZONE_MOVABLE, you can try to offline
it. If it doesn't contain 'unmovable' memory, you'll get success.
A section under ZONE_MOVABLE is considered to be able to be offlined easily.
But under some busy state, it may return -EBUSY. Even if a memory section
cannot be offlined due to -EBUSY, you can retry offlining it and may be able to
offline it (or not).
(For example, a page is referred to by some kernel internal call and released
soon.)
A memory block under ZONE_MOVABLE is considered to be able to be offlined
easily. But under some busy state, it may return -EBUSY. Even if a memory
block cannot be offlined due to -EBUSY, you can retry offlining it and may be
able to offline it (or not). (For example, a page is referred to by some kernel
internal call and released soon.)
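Because -EBUSY is usually transient, a simple (hypothetical) retry loop is
often all that is needed:

	% until echo offline > /sys/devices/system/memory/memoryXXX/state; do sleep 1; done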
Consideration:
Memory hotplug's design direction is to make the possibility of memory offlining
@ -373,11 +370,11 @@ MEMORY_GOING_OFFLINE
Generated to begin the process of offlining memory. Allocations are no
longer possible from the memory but some of the memory to be offlined
is still in use. The callback can be used to free memory known to a
subsystem from the indicated memory section.
subsystem from the indicated memory block.
MEMORY_CANCEL_OFFLINE
Generated if MEMORY_GOING_OFFLINE fails. Memory is available again from
the section that we attempted to offline.
the memory block that we attempted to offline.
MEMORY_OFFLINE
Generated after offlining memory is complete.
@ -413,8 +410,8 @@ node if necessary.
--------------
- allowing memory hot-add to ZONE_MOVABLE. maybe we need some switch like
sysctl or new control file.
- showing memory section and physical device relationship.
- showing memory section is under ZONE_MOVABLE or not
- showing memory block and physical device relationship.
- showing memory block is under ZONE_MOVABLE or not
- test and make it better memory offlining.
- support HugeTLB page migration and offlining.
- memmap removing at memory offline.


@ -746,8 +746,8 @@ Changing this takes effect whenever an application requests memory.
vfs_cache_pressure
------------------
Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.
This percentage value controls the tendency of the kernel to reclaim
the memory which is used for caching of directory and inode objects.
At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
@ -757,6 +757,11 @@ never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
Increasing vfs_cache_pressure significantly beyond 100 may have negative
performance impact. Reclaim code needs to take various locks to find freeable
directory and inode objects. With vfs_cache_pressure=1000, it will look for
ten times more freeable objects than there are.
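For instance, to make dentry/inode reclaim roughly half as aggressive as the
default (an illustrative value, not a recommendation made by this patch):

	% echo 50 > /proc/sys/vm/vfs_cache_pressure

or persistently via sysctl as vm.vfs_cache_pressure = 50.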
==============================================================
zone_reclaim_mode:
@ -772,16 +777,17 @@ This is value ORed together of
2 = Zone reclaim writes dirty pages out
4 = Zone reclaim swaps pages
zone_reclaim_mode is set during bootup to 1 if it is determined that pages
from remote zones will cause a measurable performance reduction. The
page allocator will then reclaim easily reusable pages (those page
cache pages that are currently not used) before allocating off node pages.
It may be beneficial to switch off zone reclaim if the system is
used for a file server and all of memory should be used for caching files
from disk. In that case the caching effect is more important than
zone_reclaim_mode is disabled by default. For file servers or workloads
that benefit from having their data cached, zone_reclaim_mode should be
left disabled as the caching effect is likely to be more important than
data locality.
zone_reclaim may be enabled if it's known that the workload is partitioned
such that each partition fits within a NUMA node and that accessing remote
memory would cause a measurable performance reduction. The page allocator
will then reclaim easily reusable pages (those page cache pages that are
currently not used) before allocating off node pages.
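As an illustration, enabling plain zone reclaim (bit 1 only) for such a
partitioned workload would be:

	% echo 1 > /proc/sys/vm/zone_reclaim_mode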
Allowing zone reclaim to write out pages stops processes that are
writing large amounts of data from dirtying pages on other nodes. Zone
reclaim will write out dirty pages if a zone fills up and so effectively


@ -84,6 +84,11 @@ PR_MCE_KILL
PR_MCE_KILL_EARLY: Early kill
PR_MCE_KILL_LATE: Late kill
PR_MCE_KILL_DEFAULT: Use system global default
Note that if you want to have a dedicated thread which handles
the SIGBUS(BUS_MCEERR_AO) on behalf of the process, you should
call prctl(PR_MCE_KILL_EARLY) on the designated thread. Otherwise,
the SIGBUS is sent to the main thread.
PR_MCE_KILL_GET
return current mode
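A minimal sketch of such a dedicated handler thread (error handling omitted;
the thread function name and overall structure are assumptions, not code from
this patch):

	#include <sys/prctl.h>
	#include <signal.h>
	#include <pthread.h>

	static void *mce_handler_thread(void *arg)
	{
		sigset_t set;
		siginfo_t info;

		/* opt this thread in for early delivery of AO machine-check SIGBUS */
		prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);

		sigemptyset(&set);
		sigaddset(&set, SIGBUS);
		pthread_sigmask(SIG_BLOCK, &set, NULL);

		for (;;) {
			if (sigwaitinfo(&set, &info) == SIGBUS) {
				/* info.si_code is BUS_MCEERR_AO and info.si_addr
				 * points at the poisoned page; react here */
			}
		}
		return NULL;
	}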


@ -3882,6 +3882,11 @@ L: kvm@vger.kernel.org
S: Supported
F: drivers/uio/uio_pci_generic.c
GET_MAINTAINER SCRIPT
M: Joe Perches <joe@perches.com>
S: Maintained
F: scripts/get_maintainer.pl
GFS2 FILE SYSTEM
M: Steven Whitehouse <swhiteho@redhat.com>
L: cluster-devel@redhat.com
@ -4006,9 +4011,8 @@ S: Odd Fixes
F: drivers/media/usb/hdpvr/
HWPOISON MEMORY FAILURE HANDLING
M: Andi Kleen <andi@firstfloor.org>
M: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
L: linux-mm@kvack.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-mce-2.6.git hwpoison
S: Maintained
F: mm/memory-failure.c
F: mm/hwpoison-inject.c


@ -86,12 +86,13 @@ static void show_faulting_vma(unsigned long address, char *buf)
unsigned long ino = 0;
dev_t dev = 0;
char *nm = buf;
struct mm_struct *active_mm = current->active_mm;
/* can't use print_vma_addr() yet as it doesn't check for
* non-inclusive vma
*/
vma = find_vma(current->active_mm, address);
down_read(&active_mm->mmap_sem);
vma = find_vma(active_mm, address);
/* check against the find_vma( ) behaviour which returns the next VMA
* if the container VMA is not found
@ -110,9 +111,10 @@ static void show_faulting_vma(unsigned long address, char *buf)
vma->vm_start < TASK_UNMAPPED_BASE ?
address : address - vma->vm_start,
nm, vma->vm_start, vma->vm_end);
} else {
} else
pr_info(" @No matching VMA found\n");
}
up_read(&active_mm->mmap_sem);
}
static void show_ecr_verbose(struct pt_regs *regs)


@ -56,8 +56,3 @@ int pmd_huge(pmd_t pmd)
{
return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
}
int pmd_huge_support(void)
{
return 1;
}


@ -58,11 +58,6 @@ int pud_huge(pud_t pud)
#endif
}
int pmd_huge_support(void)
{
return 1;
}
static __init int setup_hugepagesz(char *opt)
{
unsigned long ps = memparse(opt, &opt);


@ -12,7 +12,6 @@
#define __ARCH_WANT_SYS_ALARM
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_FADVISE64
#define __ARCH_WANT_SYS_GETPGRP


@ -15,7 +15,6 @@
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -13,7 +13,6 @@
/* #define __ARCH_WANT_SYS_GETHOSTNAME */
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
/* #define __ARCH_WANT_SYS_SGETMASK */
/* #define __ARCH_WANT_SYS_SIGNAL */
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -21,7 +21,8 @@
#define PENALTY_FOR_NODE_WITH_CPUS 255
/*
* Distance above which we begin to use zone reclaim
* Nodes within this distance are eligible for reclaim by zone_reclaim() when
* zone_reclaim_mode is enabled.
*/
#define RECLAIM_DISTANCE 15


@ -114,11 +114,6 @@ int pud_huge(pud_t pud)
return 0;
}
int pmd_huge_support(void)
{
return 0;
}
struct page *
follow_huge_pmd(struct mm_struct *mm, unsigned long address, pmd_t *pmd, int write)
{


@ -13,7 +13,6 @@
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -110,11 +110,6 @@ int pud_huge(pud_t pud)
return 0;
}
int pmd_huge_support(void)
{
return 1;
}
struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)
{


@ -19,7 +19,6 @@
#define __ARCH_WANT_SYS_ALARM
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -29,7 +29,6 @@
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_UTIME
#define __ARCH_WANT_SYS_WAITPID
#define __ARCH_WANT_SYS_SOCKETCALL


@ -84,11 +84,6 @@ int pud_huge(pud_t pud)
return (pud_val(pud) & _PAGE_HUGE) != 0;
}
int pmd_huge_support(void)
{
return 1;
}
struct page *
follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)


@ -26,7 +26,6 @@
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -145,7 +145,6 @@ type name(type1 arg1, type2 arg2, type3 arg3, type4 arg4, type5 arg5) \
#define __ARCH_WANT_SYS_ALARM
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_COMPAT_SYS_TIME


@ -44,6 +44,12 @@ static inline int pte_present(pte_t pte)
return pte_val(pte) & (_PAGE_PRESENT | _PAGE_NUMA);
}
#define pte_present_nonuma pte_present_nonuma
static inline int pte_present_nonuma(pte_t pte)
{
return pte_val(pte) & (_PAGE_PRESENT);
}
#define pte_numa pte_numa
static inline int pte_numa(pte_t pte)
{


@ -9,12 +9,8 @@ struct device_node;
#ifdef CONFIG_NUMA
/*
* Before going off node we want the VM to try and reclaim from the local
* node. It does this if the remote distance is larger than RECLAIM_DISTANCE.
* With the default REMOTE_DISTANCE of 20 and the default RECLAIM_DISTANCE of
* 20, we never reclaim and go off node straight away.
*
* To fix this we choose a smaller value of RECLAIM_DISTANCE.
* If zone_reclaim_mode is enabled, a RECLAIM_DISTANCE of 10 will mean that
* all zones on all nodes will be eligible for zone_reclaim().
*/
#define RECLAIM_DISTANCE 10


@ -29,7 +29,6 @@
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_IPC
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -86,11 +86,6 @@ int pgd_huge(pgd_t pgd)
*/
return ((pgd_val(pgd) & 0x3) != 0x0);
}
int pmd_huge_support(void)
{
return 1;
}
#else
int pmd_huge(pmd_t pmd)
{
@ -106,11 +101,6 @@ int pgd_huge(pgd_t pgd)
{
return 0;
}
int pmd_huge_support(void)
{
return 0;
}
#endif
pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)


@ -220,11 +220,6 @@ int pud_huge(pud_t pud)
return 0;
}
int pmd_huge_support(void)
{
return 1;
}
struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmdp, int write)
{


@ -11,7 +11,6 @@
# define __ARCH_WANT_SYS_GETHOSTNAME
# define __ARCH_WANT_SYS_IPC
# define __ARCH_WANT_SYS_PAUSE
# define __ARCH_WANT_SYS_SGETMASK
# define __ARCH_WANT_SYS_SIGNAL
# define __ARCH_WANT_SYS_TIME
# define __ARCH_WANT_SYS_UTIME


@ -52,7 +52,7 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
int i;
for (i = 0; i < sh_ubc->num_events; i++) {
struct perf_event **slot = &__get_cpu_var(bp_per_reg[i]);
struct perf_event **slot = this_cpu_ptr(&bp_per_reg[i]);
if (!*slot) {
*slot = bp;
@ -84,7 +84,7 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp)
int i;
for (i = 0; i < sh_ubc->num_events; i++) {
struct perf_event **slot = &__get_cpu_var(bp_per_reg[i]);
struct perf_event **slot = this_cpu_ptr(&bp_per_reg[i]);
if (*slot == bp) {
*slot = NULL;


@ -102,7 +102,7 @@ int __kprobes kprobe_handle_illslot(unsigned long pc)
void __kprobes arch_remove_kprobe(struct kprobe *p)
{
struct kprobe *saved = &__get_cpu_var(saved_next_opcode);
struct kprobe *saved = this_cpu_ptr(&saved_next_opcode);
if (saved->addr) {
arch_disarm_kprobe(p);
@ -111,7 +111,7 @@ void __kprobes arch_remove_kprobe(struct kprobe *p)
saved->addr = NULL;
saved->opcode = 0;
saved = &__get_cpu_var(saved_next_opcode2);
saved = this_cpu_ptr(&saved_next_opcode2);
if (saved->addr) {
arch_disarm_kprobe(saved);
@ -129,14 +129,14 @@ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb)
static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb)
{
__get_cpu_var(current_kprobe) = kcb->prev_kprobe.kp;
__this_cpu_write(current_kprobe, kcb->prev_kprobe.kp);
kcb->kprobe_status = kcb->prev_kprobe.status;
}
static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
struct kprobe_ctlblk *kcb)
{
__get_cpu_var(current_kprobe) = p;
__this_cpu_write(current_kprobe, p);
}
/*
@ -146,15 +146,15 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs,
*/
static void __kprobes prepare_singlestep(struct kprobe *p, struct pt_regs *regs)
{
__get_cpu_var(saved_current_opcode).addr = (kprobe_opcode_t *)regs->pc;
__this_cpu_write(saved_current_opcode.addr, (kprobe_opcode_t *)regs->pc);
if (p != NULL) {
struct kprobe *op1, *op2;
arch_disarm_kprobe(p);
op1 = &__get_cpu_var(saved_next_opcode);
op2 = &__get_cpu_var(saved_next_opcode2);
op1 = this_cpu_ptr(&saved_next_opcode);
op2 = this_cpu_ptr(&saved_next_opcode2);
if (OPCODE_JSR(p->opcode) || OPCODE_JMP(p->opcode)) {
unsigned int reg_nr = ((p->opcode >> 8) & 0x000F);
@ -249,7 +249,7 @@ static int __kprobes kprobe_handler(struct pt_regs *regs)
kcb->kprobe_status = KPROBE_REENTER;
return 1;
} else {
p = __get_cpu_var(current_kprobe);
p = __this_cpu_read(current_kprobe);
if (p->break_handler && p->break_handler(p, regs)) {
goto ss_probe;
}
@ -336,9 +336,9 @@ int __kprobes trampoline_probe_handler(struct kprobe *p, struct pt_regs *regs)
continue;
if (ri->rp && ri->rp->handler) {
__get_cpu_var(current_kprobe) = &ri->rp->kp;
__this_cpu_write(current_kprobe, &ri->rp->kp);
ri->rp->handler(ri, regs);
__get_cpu_var(current_kprobe) = NULL;
__this_cpu_write(current_kprobe, NULL);
}
orig_ret_address = (unsigned long)ri->ret_addr;
@ -383,19 +383,19 @@ static int __kprobes post_kprobe_handler(struct pt_regs *regs)
cur->post_handler(cur, regs, 0);
}
p = &__get_cpu_var(saved_next_opcode);
p = this_cpu_ptr(&saved_next_opcode);
if (p->addr) {
arch_disarm_kprobe(p);
p->addr = NULL;
p->opcode = 0;
addr = __get_cpu_var(saved_current_opcode).addr;
__get_cpu_var(saved_current_opcode).addr = NULL;
addr = __this_cpu_read(saved_current_opcode.addr);
__this_cpu_write(saved_current_opcode.addr, NULL);
p = get_kprobe(addr);
arch_arm_kprobe(p);
p = &__get_cpu_var(saved_next_opcode2);
p = this_cpu_ptr(&saved_next_opcode2);
if (p->addr) {
arch_disarm_kprobe(p);
p->addr = NULL;
@ -511,7 +511,7 @@ int __kprobes kprobe_exceptions_notify(struct notifier_block *self,
if (kprobe_handler(args->regs)) {
ret = NOTIFY_STOP;
} else {
p = __get_cpu_var(current_kprobe);
p = __this_cpu_read(current_kprobe);
if (p->break_handler &&
p->break_handler(p, args->regs))
ret = NOTIFY_STOP;
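The conversions in this and the neighbouring files all follow the same
mechanical pattern (summarized here for reference, not code from the patch):

	&__get_cpu_var(var)         ->  this_cpu_ptr(&var)
	__get_cpu_var(var) = val    ->  __this_cpu_write(var, val)
	val = __get_cpu_var(var)    ->  __this_cpu_read(var)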


@ -32,7 +32,7 @@ static DEFINE_PER_CPU(struct clock_event_device, local_clockevent);
*/
void local_timer_interrupt(void)
{
struct clock_event_device *clk = &__get_cpu_var(local_clockevent);
struct clock_event_device *clk = this_cpu_ptr(&local_clockevent);
irq_enter();
clk->event_handler(clk);


@ -227,7 +227,7 @@ again:
static void sh_pmu_stop(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx;
@ -245,7 +245,7 @@ static void sh_pmu_stop(struct perf_event *event, int flags)
static void sh_pmu_start(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx;
@ -262,7 +262,7 @@ static void sh_pmu_start(struct perf_event *event, int flags)
static void sh_pmu_del(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
sh_pmu_stop(event, PERF_EF_UPDATE);
__clear_bit(event->hw.idx, cpuc->used_mask);
@ -272,7 +272,7 @@ static void sh_pmu_del(struct perf_event *event, int flags)
static int sh_pmu_add(struct perf_event *event, int flags)
{
struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
int idx = hwc->idx;
int ret = -EAGAIN;


@ -111,7 +111,7 @@ void play_dead_common(void)
irq_ctx_exit(raw_smp_processor_id());
mb();
__get_cpu_var(cpu_state) = CPU_DEAD;
__this_cpu_write(cpu_state, CPU_DEAD);
local_irq_disable();
}


@ -83,11 +83,6 @@ int pud_huge(pud_t pud)
return 0;
}
int pmd_huge_support(void)
{
return 0;
}
struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)
{


@ -25,7 +25,6 @@
#define __ARCH_WANT_SYS_ALARM
#define __ARCH_WANT_SYS_GETHOSTNAME
#define __ARCH_WANT_SYS_PAUSE
#define __ARCH_WANT_SYS_SGETMASK
#define __ARCH_WANT_SYS_SIGNAL
#define __ARCH_WANT_SYS_TIME
#define __ARCH_WANT_SYS_UTIME


@ -231,11 +231,6 @@ int pud_huge(pud_t pud)
return 0;
}
int pmd_huge_support(void)
{
return 0;
}
struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)
{


@ -417,7 +417,7 @@ void __homecache_free_pages(struct page *page, unsigned int order)
if (put_page_testzero(page)) {
homecache_change_page_home(page, order, PAGE_HOME_HASH);
if (order == 0) {
free_hot_cold_page(page, 0);
free_hot_cold_page(page, false);
} else {
init_page_count(page);
__free_pages(page, order);


@ -166,11 +166,6 @@ int pud_huge(pud_t pud)
return !!(pud_val(pud) & _PAGE_HUGE_PAGE);
}
int pmd_huge_support(void)
{
return 1;
}
struct page *follow_huge_pmd(struct mm_struct *mm, unsigned long address,
pmd_t *pmd, int write)
{


@ -144,11 +144,11 @@ void __iomem *__uc32_ioremap_pfn_caller(unsigned long pfn,
* Don't allow RAM to be mapped
*/
if (pfn_valid(pfn)) {
printk(KERN_WARNING "BUG: Your driver calls ioremap() on\n"
WARN(1, "BUG: Your driver calls ioremap() on\n"
"system memory. This leads to architecturally\n"
"unpredictable behaviour, and ioremap() will fail in\n"
"the next kernel release. Please fix your driver.\n");
WARN_ON(1);
return NULL;
}
type = get_mem_type(mtype);


@ -26,7 +26,7 @@ config X86
select ARCH_MIGHT_HAVE_PC_SERIO
select HAVE_AOUT if X86_32
select HAVE_UNSTABLE_SCHED_CLOCK
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
select ARCH_SUPPORTS_INT128 if X86_64
select ARCH_WANTS_PROT_NUMA_PROT_NONE
select HAVE_IDE
@ -41,7 +41,7 @@ config X86
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_FRAME_POINTERS
select HAVE_DMA_ATTRS
select HAVE_DMA_CONTIGUOUS if !SWIOTLB
select HAVE_DMA_CONTIGUOUS
select HAVE_KRETPROBES
select GENERIC_EARLY_IOREMAP
select HAVE_OPTPROBES
@ -105,7 +105,7 @@ config X86
select HAVE_ARCH_SECCOMP_FILTER
select BUILDTIME_EXTABLE_SORT
select GENERIC_CMOS_UPDATE
select HAVE_ARCH_SOFT_DIRTY
select HAVE_ARCH_SOFT_DIRTY if X86_64
select CLOCKSOURCE_WATCHDOG
select GENERIC_CLOCKEVENTS
select ARCH_CLOCKSOURCE_DATA
@ -1874,6 +1874,10 @@ config ARCH_ENABLE_SPLIT_PMD_PTLOCK
def_bool y
depends on X86_64 || X86_PAE
config ARCH_ENABLE_HUGEPAGE_MIGRATION
def_bool y
depends on X86_64 && HUGETLB_PAGE && MIGRATION
menu "Power management and ACPI options"
config ARCH_HIBERNATION_HEADER


@ -176,8 +176,6 @@ int mce_available(struct cpuinfo_x86 *c);
DECLARE_PER_CPU(unsigned, mce_exception_count);
DECLARE_PER_CPU(unsigned, mce_poll_count);
extern atomic_t mce_entry;
typedef DECLARE_BITMAP(mce_banks_t, MAX_NR_BANKS);
DECLARE_PER_CPU(mce_banks_t, mce_poll_banks);


@ -62,66 +62,14 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi
return ((value >> rightshift) & mask) << leftshift;
}
#ifdef CONFIG_MEM_SOFT_DIRTY
/*
* Bits _PAGE_BIT_PRESENT, _PAGE_BIT_FILE, _PAGE_BIT_SOFT_DIRTY and
* _PAGE_BIT_PROTNONE are taken, split up the 28 bits of offset
* into this range.
*/
#define PTE_FILE_MAX_BITS 28
#define PTE_FILE_SHIFT1 (_PAGE_BIT_PRESENT + 1)
#define PTE_FILE_SHIFT2 (_PAGE_BIT_FILE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_PROTNONE + 1)
#define PTE_FILE_SHIFT4 (_PAGE_BIT_SOFT_DIRTY + 1)
#define PTE_FILE_BITS1 (PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2 (PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)
#define PTE_FILE_BITS3 (PTE_FILE_SHIFT4 - PTE_FILE_SHIFT3 - 1)
#define PTE_FILE_MASK1 ((1U << PTE_FILE_BITS1) - 1)
#define PTE_FILE_MASK2 ((1U << PTE_FILE_BITS2) - 1)
#define PTE_FILE_MASK3 ((1U << PTE_FILE_BITS3) - 1)
#define PTE_FILE_LSHIFT2 (PTE_FILE_BITS1)
#define PTE_FILE_LSHIFT3 (PTE_FILE_BITS1 + PTE_FILE_BITS2)
#define PTE_FILE_LSHIFT4 (PTE_FILE_BITS1 + PTE_FILE_BITS2 + PTE_FILE_BITS3)
static __always_inline pgoff_t pte_to_pgoff(pte_t pte)
{
return (pgoff_t)
(pte_bitop(pte.pte_low, PTE_FILE_SHIFT1, PTE_FILE_MASK1, 0) +
pte_bitop(pte.pte_low, PTE_FILE_SHIFT2, PTE_FILE_MASK2, PTE_FILE_LSHIFT2) +
pte_bitop(pte.pte_low, PTE_FILE_SHIFT3, PTE_FILE_MASK3, PTE_FILE_LSHIFT3) +
pte_bitop(pte.pte_low, PTE_FILE_SHIFT4, -1UL, PTE_FILE_LSHIFT4));
}
static __always_inline pte_t pgoff_to_pte(pgoff_t off)
{
return (pte_t){
.pte_low =
pte_bitop(off, 0, PTE_FILE_MASK1, PTE_FILE_SHIFT1) +
pte_bitop(off, PTE_FILE_LSHIFT2, PTE_FILE_MASK2, PTE_FILE_SHIFT2) +
pte_bitop(off, PTE_FILE_LSHIFT3, PTE_FILE_MASK3, PTE_FILE_SHIFT3) +
pte_bitop(off, PTE_FILE_LSHIFT4, -1UL, PTE_FILE_SHIFT4) +
_PAGE_FILE,
};
}
#else /* CONFIG_MEM_SOFT_DIRTY */
/*
* Bits _PAGE_BIT_PRESENT, _PAGE_BIT_FILE and _PAGE_BIT_PROTNONE are taken,
* split up the 29 bits of offset into this range.
*/
#define PTE_FILE_MAX_BITS 29
#define PTE_FILE_SHIFT1 (_PAGE_BIT_PRESENT + 1)
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define PTE_FILE_SHIFT2 (_PAGE_BIT_FILE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_PROTNONE + 1)
#else
#define PTE_FILE_SHIFT2 (_PAGE_BIT_PROTNONE + 1)
#define PTE_FILE_SHIFT3 (_PAGE_BIT_FILE + 1)
#endif
#define PTE_FILE_BITS1 (PTE_FILE_SHIFT2 - PTE_FILE_SHIFT1 - 1)
#define PTE_FILE_BITS2 (PTE_FILE_SHIFT3 - PTE_FILE_SHIFT2 - 1)
@ -150,16 +98,9 @@ static __always_inline pte_t pgoff_to_pte(pgoff_t off)
};
}
#endif /* CONFIG_MEM_SOFT_DIRTY */
/* Encode and de-code a swap entry */
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#endif
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)


@ -131,7 +131,8 @@ static inline int pte_exec(pte_t pte)
static inline int pte_special(pte_t pte)
{
return pte_flags(pte) & _PAGE_SPECIAL;
return (pte_flags(pte) & (_PAGE_PRESENT|_PAGE_SPECIAL)) ==
(_PAGE_PRESENT|_PAGE_SPECIAL);
}
static inline unsigned long pte_pfn(pte_t pte)
@ -296,6 +297,7 @@ static inline pmd_t pmd_mknotpresent(pmd_t pmd)
return pmd_clear_flags(pmd, _PAGE_PRESENT);
}
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline int pte_soft_dirty(pte_t pte)
{
return pte_flags(pte) & _PAGE_SOFT_DIRTY;
@ -331,6 +333,8 @@ static inline int pte_file_soft_dirty(pte_t pte)
return pte_flags(pte) & _PAGE_SOFT_DIRTY;
}
#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
/*
* Mask out unsupported bits in a present pgprot. Non-present pgprots
* can use those bits for other purposes, so leave them be.
@ -452,6 +456,12 @@ static inline int pte_present(pte_t a)
_PAGE_NUMA);
}
#define pte_present_nonuma pte_present_nonuma
static inline int pte_present_nonuma(pte_t a)
{
return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
}
#define pte_accessible pte_accessible
static inline bool pte_accessible(struct mm_struct *mm, pte_t a)
{
@ -858,23 +868,25 @@ static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
{
}
#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
{
VM_BUG_ON(pte_present(pte));
VM_BUG_ON(pte_present_nonuma(pte));
return pte_set_flags(pte, _PAGE_SWP_SOFT_DIRTY);
}
static inline int pte_swp_soft_dirty(pte_t pte)
{
VM_BUG_ON(pte_present(pte));
VM_BUG_ON(pte_present_nonuma(pte));
return pte_flags(pte) & _PAGE_SWP_SOFT_DIRTY;
}
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
VM_BUG_ON(pte_present(pte));
VM_BUG_ON(pte_present_nonuma(pte));
return pte_clear_flags(pte, _PAGE_SWP_SOFT_DIRTY);
}
#endif
#include <asm-generic/pgtable.h>
#endif /* __ASSEMBLY__ */


@ -143,12 +143,12 @@ static inline int pgd_large(pgd_t pgd) { return 0; }
#define pte_unmap(pte) ((void)(pte))/* NOP */
/* Encode and de-code a swap entry */
#if _PAGE_BIT_FILE < _PAGE_BIT_PROTNONE
#define SWP_TYPE_BITS (_PAGE_BIT_FILE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#ifdef CONFIG_NUMA_BALANCING
/* Automatic NUMA balancing needs to be distinguishable from swap entries */
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 2)
#else
#define SWP_TYPE_BITS (_PAGE_BIT_PROTNONE - _PAGE_BIT_PRESENT - 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_FILE + 1)
#define SWP_OFFSET_SHIFT (_PAGE_BIT_PROTNONE + 1)
#endif
#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)


@ -16,15 +16,26 @@
#define _PAGE_BIT_PSE 7 /* 4 MB (or 2MB) page */
#define _PAGE_BIT_PAT 7 /* on 4KB pages */
#define _PAGE_BIT_GLOBAL 8 /* Global TLB entry PPro+ */
#define _PAGE_BIT_UNUSED1 9 /* available for programmer */
#define _PAGE_BIT_IOMAP 10 /* flag used to indicate IO mapping */
#define _PAGE_BIT_HIDDEN 11 /* hidden by kmemcheck */
#define _PAGE_BIT_SOFTW1 9 /* available for programmer */
#define _PAGE_BIT_SOFTW2 10 /* " */
#define _PAGE_BIT_SOFTW3 11 /* " */
#define _PAGE_BIT_PAT_LARGE 12 /* On 2MB or 1GB pages */
#define _PAGE_BIT_SPECIAL _PAGE_BIT_UNUSED1
#define _PAGE_BIT_CPA_TEST _PAGE_BIT_UNUSED1
#define _PAGE_BIT_SPLITTING _PAGE_BIT_UNUSED1 /* only valid on a PSE pmd */
#define _PAGE_BIT_SPECIAL _PAGE_BIT_SOFTW1
#define _PAGE_BIT_CPA_TEST _PAGE_BIT_SOFTW1
#define _PAGE_BIT_SPLITTING _PAGE_BIT_SOFTW2 /* only valid on a PSE pmd */
#define _PAGE_BIT_IOMAP _PAGE_BIT_SOFTW2 /* flag used to indicate IO mapping */
#define _PAGE_BIT_HIDDEN _PAGE_BIT_SOFTW3 /* hidden by kmemcheck */
#define _PAGE_BIT_SOFT_DIRTY _PAGE_BIT_SOFTW3 /* software dirty tracking */
#define _PAGE_BIT_NX 63 /* No execute: only valid after cpuid check */
/*
* Swap offsets on configurations that allow automatic NUMA balancing use the
* bits after _PAGE_BIT_GLOBAL. To uniquely distinguish NUMA hinting PTEs from
* swap entries, we use the first bit after _PAGE_BIT_GLOBAL and shrink the
* maximum possible swap space from 16TB to 8TB.
*/
#define _PAGE_BIT_NUMA (_PAGE_BIT_GLOBAL+1)
/* If _PAGE_BIT_PRESENT is clear, we use these: */
/* - if the user mapped it with PROT_NONE; pte_present gives true */
#define _PAGE_BIT_PROTNONE _PAGE_BIT_GLOBAL
@ -40,7 +51,7 @@
#define _PAGE_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_DIRTY)
#define _PAGE_PSE (_AT(pteval_t, 1) << _PAGE_BIT_PSE)
#define _PAGE_GLOBAL (_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
#define _PAGE_UNUSED1 (_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
#define _PAGE_SOFTW1 (_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
#define _PAGE_IOMAP (_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
#define _PAGE_PAT (_AT(pteval_t, 1) << _PAGE_BIT_PAT)
#define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
@ -61,14 +72,27 @@
* they do not conflict with each other.
*/
#define _PAGE_BIT_SOFT_DIRTY _PAGE_BIT_HIDDEN
#ifdef CONFIG_MEM_SOFT_DIRTY
#define _PAGE_SOFT_DIRTY (_AT(pteval_t, 1) << _PAGE_BIT_SOFT_DIRTY)
#else
#define _PAGE_SOFT_DIRTY (_AT(pteval_t, 0))
#endif
/*
* _PAGE_NUMA distinguishes between a numa hinting minor fault and a page
* that is not present. The hinting fault gathers numa placement statistics
* (see pte_numa()). The bit is always zero when the PTE is not present.
*
* The bit picked must be always zero when the pmd is present and not
* present, so that we don't lose information when we set it while
* atomically clearing the present bit.
*/
#ifdef CONFIG_NUMA_BALANCING
#define _PAGE_NUMA (_AT(pteval_t, 1) << _PAGE_BIT_NUMA)
#else
#define _PAGE_NUMA (_AT(pteval_t, 0))
#endif
/*
* Tracking soft dirty bit when a page goes to a swap is tricky.
* We need a bit which can be stored in pte _and_ not conflict
@ -94,26 +118,6 @@
#define _PAGE_FILE (_AT(pteval_t, 1) << _PAGE_BIT_FILE)
#define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
/*
* _PAGE_NUMA indicates that this page will trigger a numa hinting
* minor page fault to gather numa placement statistics (see
* pte_numa()). The bit picked (8) is within the range between
* _PAGE_FILE (6) and _PAGE_PROTNONE (8) bits. Therefore, it doesn't
* require changes to the swp entry format because that bit is always
* zero when the pte is not present.
*
* The bit picked must be always zero when the pmd is present and not
* present, so that we don't lose information when we set it while
* atomically clearing the present bit.
*
* Because we shared the same bit (8) with _PAGE_PROTNONE this can be
* interpreted as _PAGE_NUMA only in places that _PAGE_PROTNONE
* couldn't reach, like handle_mm_fault() (see access_error in
* arch/x86/mm/fault.c, the vma protection must not be PROT_NONE for
* handle_mm_fault() to be invoked).
*/
#define _PAGE_NUMA _PAGE_PROTNONE
#define _PAGE_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER | \
_PAGE_ACCESSED | _PAGE_DIRTY)
#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | \
@ -122,8 +126,8 @@
/* Set of bits not changed in pte_modify */
#define _PAGE_CHG_MASK (PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT | \
_PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY | \
_PAGE_SOFT_DIRTY)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
_PAGE_SOFT_DIRTY | _PAGE_NUMA)
#define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE | _PAGE_NUMA)
#define _PAGE_CACHE_MASK (_PAGE_PCD | _PAGE_PWT)
#define _PAGE_CACHE_WB (0)


@ -29,4 +29,11 @@ static inline void pci_swiotlb_late_init(void)
static inline void dma_mark_clean(void *addr, size_t size) {}
extern void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs);
extern void x86_swiotlb_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_addr,
struct dma_attrs *attrs);
#endif /* _ASM_X86_SWIOTLB_H */


@ -41,7 +41,6 @@
# define __ARCH_WANT_SYS_OLD_GETRLIMIT
# define __ARCH_WANT_SYS_OLD_UNAME
# define __ARCH_WANT_SYS_PAUSE
# define __ARCH_WANT_SYS_SGETMASK
# define __ARCH_WANT_SYS_SIGNAL
# define __ARCH_WANT_SYS_SIGPENDING
# define __ARCH_WANT_SYS_SIGPROCMASK


@ -512,7 +512,7 @@ gart_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_addr, struct dma_attrs *attrs)
{
gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, NULL);
free_pages((unsigned long)vaddr, get_order(size));
dma_generic_free_coherent(dev, size, vaddr, dma_addr, attrs);
}
static int gart_mapping_error(struct device *dev, dma_addr_t dma_addr)


@ -60,8 +60,6 @@ static DEFINE_MUTEX(mce_chrdev_read_mutex);
#define SPINUNIT 100 /* 100ns */
atomic_t mce_entry;
DEFINE_PER_CPU(unsigned, mce_exception_count);
struct mce_bank *mce_banks __read_mostly;
@ -1040,8 +1038,6 @@ void do_machine_check(struct pt_regs *regs, long error_code)
DECLARE_BITMAP(valid_banks, MAX_NR_BANKS);
char *msg = "Unknown";
atomic_inc(&mce_entry);
this_cpu_inc(mce_exception_count);
if (!cfg->banks)
@ -1171,7 +1167,6 @@ void do_machine_check(struct pt_regs *regs, long error_code)
mce_report_event(regs);
mce_wrmsrl(MSR_IA32_MCG_STATUS, 0);
out:
atomic_dec(&mce_entry);
sync_core();
}
EXPORT_SYMBOL_GPL(do_machine_check);


@ -172,7 +172,7 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
*/
load_ucode_bsp();
if (console_loglevel == 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
early_printk("Kernel alive\n");
clear_page(init_level4_pgt);


@ -97,12 +97,17 @@ void *dma_generic_alloc_coherent(struct device *dev, size_t size,
dma_mask = dma_alloc_coherent_mask(dev, flag);
flag |= __GFP_ZERO;
flag &= ~__GFP_ZERO;
again:
page = NULL;
/* CMA can be used only in the context which permits sleeping */
if (flag & __GFP_WAIT)
if (flag & __GFP_WAIT) {
page = dma_alloc_from_contiguous(dev, count, get_order(size));
if (page && page_to_phys(page) + size > dma_mask) {
dma_release_from_contiguous(dev, page, count);
page = NULL;
}
}
/* fallback */
if (!page)
page = alloc_pages_node(dev_to_node(dev), flag, get_order(size));
@ -120,7 +125,7 @@ again:
return NULL;
}
memset(page_address(page), 0, size);
*dma_addr = addr;
return page_address(page);
}


@ -14,7 +14,7 @@
#include <asm/iommu_table.h>
int swiotlb __read_mostly;
static void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs)
{
@ -28,11 +28,14 @@ static void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
return swiotlb_alloc_coherent(hwdev, size, dma_handle, flags);
}
static void x86_swiotlb_free_coherent(struct device *dev, size_t size,
void x86_swiotlb_free_coherent(struct device *dev, size_t size,
void *vaddr, dma_addr_t dma_addr,
struct dma_attrs *attrs)
{
swiotlb_free_coherent(dev, size, vaddr, dma_addr);
if (is_swiotlb_buffer(dma_to_phys(dev, dma_addr)))
swiotlb_free_coherent(dev, size, vaddr, dma_addr);
else
dma_generic_free_coherent(dev, size, vaddr, dma_addr, attrs);
}
static struct dma_map_ops swiotlb_dma_ops = {


@ -1119,7 +1119,7 @@ void __init setup_arch(char **cmdline_p)
setup_real_mode();
memblock_set_current_limit(get_max_mapped());
dma_contiguous_reserve(0);
dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
/*
* NOTE: On x86-32, only from this point on, fixmaps are ready for use.


@ -58,11 +58,6 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
{
return NULL;
}
int pmd_huge_support(void)
{
return 0;
}
#else
struct page *
@ -80,11 +75,6 @@ int pud_huge(pud_t pud)
{
return !!(pud_val(pud) & _PAGE_PSE);
}
int pmd_huge_support(void)
{
return 1;
}
#endif
#ifdef CONFIG_HUGETLB_PAGE


@ -1230,17 +1230,43 @@ const char *arch_vma_name(struct vm_area_struct *vma)
return NULL;
}
#ifdef CONFIG_X86_UV
unsigned long memory_block_size_bytes(void)
static unsigned long probe_memory_block_size(void)
{
/* start from 2g */
unsigned long bz = 1UL<<31;
#ifdef CONFIG_X86_UV
if (is_uv_system()) {
printk(KERN_INFO "UV: memory block size 2GB\n");
return 2UL * 1024 * 1024 * 1024;
}
return MIN_MEMORY_BLOCK_SIZE;
}
#endif
/* less than 64g installed */
if ((max_pfn << PAGE_SHIFT) < (16UL << 32))
return MIN_MEMORY_BLOCK_SIZE;
/* get the tail size */
while (bz > MIN_MEMORY_BLOCK_SIZE) {
if (!((max_pfn << PAGE_SHIFT) & (bz - 1)))
break;
bz >>= 1;
}
printk(KERN_DEBUG "memory block size : %ldMB\n", bz >> 20);
return bz;
}
static unsigned long memory_block_size_probed;
unsigned long memory_block_size_bytes(void)
{
if (!memory_block_size_probed)
memory_block_size_probed = probe_memory_block_size();
return memory_block_size_probed;
}
#ifdef CONFIG_SPARSEMEM_VMEMMAP
/*
* Initialise the sparsemem vmemmap using huge-pages at the PMD level.


@ -559,7 +559,7 @@ static void __init numa_clear_kernel_node_hotplug(void)
int i, nid;
nodemask_t numa_kernel_nodes = NODE_MASK_NONE;
unsigned long start, end;
struct memblock_type *type = &memblock.reserved;
struct memblock_region *r;
/*
* At this time, all memory regions reserved by memblock are
@ -573,8 +573,8 @@ static void __init numa_clear_kernel_node_hotplug(void)
}
/* Mark all kernel nodes. */
for (i = 0; i < type->cnt; i++)
node_set(type->regions[i].nid, numa_kernel_nodes);
for_each_memblock(reserved, r)
node_set(r->nid, numa_kernel_nodes);
/* Clear MEMBLOCK_HOTPLUG flag for memory in kernel nodes. */
for (i = 0; i < numa_meminfo.nr_blks; i++) {


@ -35,7 +35,7 @@ enum {
static int pte_testbit(pte_t pte)
{
return pte_flags(pte) & _PAGE_UNUSED1;
return pte_flags(pte) & _PAGE_SOFTW1;
}
struct split_state {


@ -173,9 +173,7 @@ static void *sta2x11_swiotlb_alloc_coherent(struct device *dev,
{
void *vaddr;
vaddr = dma_generic_alloc_coherent(dev, size, dma_handle, flags, attrs);
if (!vaddr)
vaddr = swiotlb_alloc_coherent(dev, size, dma_handle, flags);
vaddr = x86_swiotlb_alloc_coherent(dev, size, dma_handle, flags, attrs);
*dma_handle = p2a(*dma_handle, to_pci_dev(dev));
return vaddr;
}
@ -183,7 +181,7 @@ static void *sta2x11_swiotlb_alloc_coherent(struct device *dev,
/* We have our own dma_ops: the same as swiotlb but from alloc (above) */
static struct dma_map_ops sta2x11_dma_ops = {
.alloc = sta2x11_swiotlb_alloc_coherent,
.free = swiotlb_free_coherent,
.free = x86_swiotlb_free_coherent,
.map_page = swiotlb_map_page,
.unmap_page = swiotlb_unmap_page,
.map_sg = swiotlb_map_sg_attrs,


@ -85,7 +85,7 @@ static cpumask_var_t uv_nmi_cpu_mask;
* Default is all stack dumps go to the console and buffer.
* Lower level to send to log buffer only.
*/
static int uv_nmi_loglevel = 7;
static int uv_nmi_loglevel = CONSOLE_LOGLEVEL_DEFAULT;
module_param_named(dump_loglevel, uv_nmi_loglevel, int, 0644);
/*


@ -258,7 +258,7 @@ endchoice
config CMA_ALIGNMENT
int "Maximum PAGE_SIZE order of alignment for contiguous buffers"
range 4 9
range 4 12
default 8
help
DMA mapping framework by default aligns all buffers to the smallest


@ -60,11 +60,22 @@ struct cma *dma_contiguous_default_area;
*/
static const phys_addr_t size_bytes = CMA_SIZE_MBYTES * SZ_1M;
static phys_addr_t size_cmdline = -1;
static phys_addr_t base_cmdline;
static phys_addr_t limit_cmdline;
static int __init early_cma(char *p)
{
pr_debug("%s(%s)\n", __func__, p);
size_cmdline = memparse(p, &p);
if (*p != '@')
return 0;
base_cmdline = memparse(p + 1, &p);
if (*p != '-') {
limit_cmdline = base_cmdline + size_cmdline;
return 0;
}
limit_cmdline = memparse(p + 1, &p);
return 0;
}
early_param("cma", early_cma);
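For reference (sizes and addresses invented), the parser above yields:

	cma=64M                          size=64M, base=0, limit=0 (kernel picks placement)
	cma=64M@0x20000000               size=64M, base=0x20000000, limit=base+size (fixed placement)
	cma=64M@0x20000000-0x60000000    size=64M, base=0x20000000, limit=0x60000000 (range hint)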
@ -108,11 +119,18 @@ static inline __maybe_unused phys_addr_t cma_early_percent_memory(void)
void __init dma_contiguous_reserve(phys_addr_t limit)
{
phys_addr_t selected_size = 0;
phys_addr_t selected_base = 0;
phys_addr_t selected_limit = limit;
bool fixed = false;
pr_debug("%s(limit %08lx)\n", __func__, (unsigned long)limit);
if (size_cmdline != -1) {
selected_size = size_cmdline;
selected_base = base_cmdline;
selected_limit = min_not_zero(limit_cmdline, limit);
if (base_cmdline + size_cmdline == limit_cmdline)
fixed = true;
} else {
#ifdef CONFIG_CMA_SIZE_SEL_MBYTES
selected_size = size_bytes;
@ -129,10 +147,12 @@ void __init dma_contiguous_reserve(phys_addr_t limit)
pr_debug("%s: reserving %ld MiB for global area\n", __func__,
(unsigned long)selected_size / SZ_1M);
dma_contiguous_reserve_area(selected_size, 0, limit,
&dma_contiguous_default_area);
dma_contiguous_reserve_area(selected_size, selected_base,
selected_limit,
&dma_contiguous_default_area,
fixed);
}
};
}
static DEFINE_MUTEX(cma_mutex);
@ -189,15 +209,20 @@ core_initcall(cma_init_reserved_areas);
* @base: Base address of the reserved area optional, use 0 for any
* @limit: End address of the reserved memory (optional, 0 for any).
* @res_cma: Pointer to store the created cma region.
* @fixed: hint about where to place the reserved area
*
* This function reserves memory from early allocator. It should be
* called by arch specific code once the early allocator (memblock or bootmem)
* has been activated and all other subsystems have already allocated/reserved
* memory. This function allows to create custom reserved areas for specific
* devices.
*
* If @fixed is true, reserve contiguous area at exactly @base. If false,
* reserve in range from @base to @limit.
*/
int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
phys_addr_t limit, struct cma **res_cma)
phys_addr_t limit, struct cma **res_cma,
bool fixed)
{
struct cma *cma = &cma_areas[cma_area_count];
phys_addr_t alignment;
@ -223,18 +248,15 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
limit &= ~(alignment - 1);
/* Reserve memory */
if (base) {
if (base && fixed) {
if (memblock_is_region_reserved(base, size) ||
memblock_reserve(base, size) < 0) {
ret = -EBUSY;
goto err;
}
} else {
/*
* Use __memblock_alloc_base() since
* memblock_alloc_base() panic()s.
*/
phys_addr_t addr = __memblock_alloc_base(size, alignment, limit);
phys_addr_t addr = memblock_alloc_range(size, alignment, base,
limit);
if (!addr) {
ret = -ENOMEM;
goto err;

View File

@ -118,16 +118,6 @@ static ssize_t show_mem_start_phys_index(struct device *dev,
return sprintf(buf, "%08lx\n", phys_index);
}
static ssize_t show_mem_end_phys_index(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct memory_block *mem = to_memory_block(dev);
unsigned long phys_index;
phys_index = mem->end_section_nr / sections_per_block;
return sprintf(buf, "%08lx\n", phys_index);
}
/*
* Show whether the section of memory is likely to be hot-removable
*/
@ -384,7 +374,6 @@ static ssize_t show_phys_device(struct device *dev,
}
static DEVICE_ATTR(phys_index, 0444, show_mem_start_phys_index, NULL);
static DEVICE_ATTR(end_phys_index, 0444, show_mem_end_phys_index, NULL);
static DEVICE_ATTR(state, 0644, show_mem_state, store_mem_state);
static DEVICE_ATTR(phys_device, 0444, show_phys_device, NULL);
static DEVICE_ATTR(removable, 0444, show_mem_removable, NULL);
@ -529,7 +518,6 @@ struct memory_block *find_memory_block(struct mem_section *section)
static struct attribute *memory_memblk_attrs[] = {
&dev_attr_phys_index.attr,
&dev_attr_end_phys_index.attr,
&dev_attr_state.attr,
&dev_attr_phys_device.attr,
&dev_attr_removable.attr,

View File

@ -200,11 +200,11 @@ static int copy_to_brd_setup(struct brd_device *brd, sector_t sector, size_t n)
copy = min_t(size_t, n, PAGE_SIZE - offset);
if (!brd_insert_page(brd, sector))
return -ENOMEM;
return -ENOSPC;
if (copy < n) {
sector += copy >> SECTOR_SHIFT;
if (!brd_insert_page(brd, sector))
return -ENOMEM;
return -ENOSPC;
}
return 0;
}
@ -360,6 +360,15 @@ out:
bio_endio(bio, err);
}
static int brd_rw_page(struct block_device *bdev, sector_t sector,
struct page *page, int rw)
{
struct brd_device *brd = bdev->bd_disk->private_data;
int err = brd_do_bvec(brd, page, PAGE_CACHE_SIZE, 0, rw, sector);
page_endio(page, rw & WRITE, err);
return err;
}
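brd_rw_page() above backs the rw_page method wired into brd_fops below; the generic bdev_read_page()/bdev_write_page() helpers added later in this diff invoke it with READ or WRITE, and completion is reported through page_endio().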
#ifdef CONFIG_BLK_DEV_XIP
static int brd_direct_access(struct block_device *bdev, sector_t sector,
void **kaddr, unsigned long *pfn)
@ -375,7 +384,7 @@ static int brd_direct_access(struct block_device *bdev, sector_t sector,
return -ERANGE;
page = brd_insert_page(brd, sector);
if (!page)
return -ENOMEM;
return -ENOSPC;
*kaddr = page_address(page);
*pfn = page_to_pfn(page);
@ -419,6 +428,7 @@ static int brd_ioctl(struct block_device *bdev, fmode_t mode,
static const struct block_device_operations brd_fops = {
.owner = THIS_MODULE,
.rw_page = brd_rw_page,
.ioctl = brd_ioctl,
#ifdef CONFIG_BLK_DEV_XIP
.direct_access = brd_direct_access,

View File

@ -572,10 +572,10 @@ static void zram_bio_discard(struct zram *zram, u32 index,
* skipping this logical block is appropriate here.
*/
if (offset) {
if (n < offset)
if (n <= (PAGE_SIZE - offset))
return;
n -= offset;
n -= (PAGE_SIZE - offset);
index++;
}
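A worked example of the corrected bounds check, assuming a 4096-byte PAGE_SIZE: with offset = 512, a discard of n = 2048 bytes stays within the partial first page (2048 <= 3584), so the function returns without touching it; with n = 8192, the 3584 bytes completing that page are skipped, n becomes 4608 and index moves on to the first fully covered page.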

View File

@ -467,14 +467,17 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
goto err_free;
}
down_read(&current->mm->mmap_sem);
vma = find_vma(current->mm, userptr);
if (!vma) {
up_read(&current->mm->mmap_sem);
DRM_ERROR("failed to get vm region.\n");
ret = -EFAULT;
goto err_free_pages;
}
if (vma->vm_end < userptr + size) {
up_read(&current->mm->mmap_sem);
DRM_ERROR("vma is too small.\n");
ret = -EFAULT;
goto err_free_pages;
@ -482,6 +485,7 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
g2d_userptr->vma = exynos_gem_get_vma(vma);
if (!g2d_userptr->vma) {
up_read(&current->mm->mmap_sem);
DRM_ERROR("failed to copy vma.\n");
ret = -ENOMEM;
goto err_free_pages;
@ -492,10 +496,12 @@ static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
ret = exynos_gem_get_pages_from_userptr(start & PAGE_MASK,
npages, pages, vma);
if (ret < 0) {
up_read(&current->mm->mmap_sem);
DRM_ERROR("failed to get user pages from userptr.\n");
goto err_put_vma;
}
up_read(&current->mm->mmap_sem);
g2d_userptr->pages = pages;
sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);

View File

@ -39,6 +39,7 @@
#include <linux/dmi.h>
#include <linux/pci-ats.h>
#include <linux/memblock.h>
#include <linux/dma-contiguous.h>
#include <asm/irq_remapping.h>
#include <asm/cacheflush.h>
#include <asm/iommu.h>
@ -3193,7 +3194,7 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t flags,
struct dma_attrs *attrs)
{
void *vaddr;
struct page *page = NULL;
int order;
size = PAGE_ALIGN(size);
@ -3208,17 +3209,31 @@ static void *intel_alloc_coherent(struct device *dev, size_t size,
flags |= GFP_DMA32;
}
vaddr = (void *)__get_free_pages(flags, order);
if (!vaddr)
return NULL;
memset(vaddr, 0, size);
if (flags & __GFP_WAIT) {
unsigned int count = size >> PAGE_SHIFT;
*dma_handle = __intel_map_single(dev, virt_to_bus(vaddr), size,
page = dma_alloc_from_contiguous(dev, count, order);
if (page && iommu_no_mapping(dev) &&
page_to_phys(page) + size > dev->coherent_dma_mask) {
dma_release_from_contiguous(dev, page, count);
page = NULL;
}
}
if (!page)
page = alloc_pages(flags, order);
if (!page)
return NULL;
memset(page_address(page), 0, size);
*dma_handle = __intel_map_single(dev, page_to_phys(page), size,
DMA_BIDIRECTIONAL,
dev->coherent_dma_mask);
if (*dma_handle)
return vaddr;
free_pages((unsigned long)vaddr, order);
return page_address(page);
if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
__free_pages(page, order);
return NULL;
}
@ -3226,12 +3241,14 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
dma_addr_t dma_handle, struct dma_attrs *attrs)
{
int order;
struct page *page = virt_to_page(vaddr);
size = PAGE_ALIGN(size);
order = get_order(size);
intel_unmap_page(dev, dma_handle, size, DMA_BIDIRECTIONAL, NULL);
free_pages((unsigned long)vaddr, order);
if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
__free_pages(page, order);
}
static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,

View File

@ -473,7 +473,7 @@ static struct nubus_dev* __init
if (slot == 0 && (unsigned long)dir.base % 2)
dir.base += 1;
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_functional_resource: parent is 0x%p, dir is 0x%p\n",
parent->base, dir.base);
@ -568,7 +568,7 @@ static int __init nubus_get_vidnames(struct nubus_board* board,
printk(KERN_INFO " video modes supported:\n");
nubus_get_subdir(parent, &dir);
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_vidnames: parent is 0x%p, dir is 0x%p\n",
parent->base, dir.base);
@ -629,7 +629,7 @@ static int __init nubus_get_vendorinfo(struct nubus_board* board,
printk(KERN_INFO " vendor info:\n");
nubus_get_subdir(parent, &dir);
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_vendorinfo: parent is 0x%p, dir is 0x%p\n",
parent->base, dir.base);
@ -654,7 +654,7 @@ static int __init nubus_get_board_resource(struct nubus_board* board, int slot,
struct nubus_dirent ent;
nubus_get_subdir(parent, &dir);
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_board_resource: parent is 0x%p, dir is 0x%p\n",
parent->base, dir.base);
@ -753,19 +753,19 @@ static void __init nubus_find_rom_dir(struct nubus_board* board)
if (nubus_readdir(&dir, &ent) == -1)
goto badrom;
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_INFO "nubus_get_rom_dir: entry %02x %06x\n", ent.type, ent.data);
/* This one takes us to where we want to go. */
if (nubus_readdir(&dir, &ent) == -1)
goto badrom;
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_rom_dir: entry %02x %06x\n", ent.type, ent.data);
nubus_get_subdir(&ent, &dir);
/* Resource ID 01, also an "Unknown Macintosh" */
if (nubus_readdir(&dir, &ent) == -1)
goto badrom;
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_rom_dir: entry %02x %06x\n", ent.type, ent.data);
/* FIXME: the first one is *not* always the right one. We
@ -780,7 +780,7 @@ static void __init nubus_find_rom_dir(struct nubus_board* board)
path to that address... */
if (nubus_readdir(&dir, &ent) == -1)
goto badrom;
if (console_loglevel >= 10)
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG)
printk(KERN_DEBUG "nubus_get_rom_dir: entry %02x %06x\n", ent.type, ent.data);
/* Bwahahahaha... */
@ -816,7 +816,7 @@ static struct nubus_board* __init nubus_add_board(int slot, int bytelanes)
board->fblock = rp;
/* Dump the format block for debugging purposes */
if (console_loglevel >= 10) {
if (console_loglevel >= CONSOLE_LOGLEVEL_DEBUG) {
int i;
printk(KERN_DEBUG "Slot %X, format block at 0x%p\n",
slot, rp);

View File

@ -88,7 +88,7 @@ static void sysrq_handle_loglevel(int key)
int i;
i = key - '0';
console_loglevel = 7;
console_loglevel = CONSOLE_LOGLEVEL_DEFAULT;
printk("Loglevel set to %d\n", i);
console_loglevel = i;
}
@ -343,7 +343,7 @@ static void send_sig_all(int sig)
static void sysrq_handle_term(int key)
{
send_sig_all(SIGTERM);
console_loglevel = 8;
console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
}
static struct sysrq_key_op sysrq_term_op = {
.handler = sysrq_handle_term,
@ -387,7 +387,7 @@ static struct sysrq_key_op sysrq_thaw_op = {
static void sysrq_handle_kill(int key)
{
send_sig_all(SIGKILL);
console_loglevel = 8;
console_loglevel = CONSOLE_LOGLEVEL_DEBUG;
}
static struct sysrq_key_op sysrq_kill_op = {
.handler = sysrq_handle_kill,
@ -520,7 +520,7 @@ void __handle_sysrq(int key, bool check_mask)
* routing in the consumers of /proc/kmsg.
*/
orig_log_level = console_loglevel;
console_loglevel = 7;
console_loglevel = CONSOLE_LOGLEVEL_DEFAULT;
printk(KERN_INFO "SysRq : ");
op_p = __sysrq_get_key_op(key);

View File

@ -537,7 +537,7 @@ static struct attribute_group v9fs_attr_group = {
*
*/
static int v9fs_sysfs_init(void)
static int __init v9fs_sysfs_init(void)
{
v9fs_kobj = kobject_create_and_add("9p", fs_kobj);
if (!v9fs_kobj)

View File

@ -42,7 +42,6 @@
/**
* struct p9_rdir - readdir accounting
* @mutex: mutex protecting readdir
* @head: start offset of current dirread buffer
* @tail: end offset of current dirread buffer
* @buf: dirread buffer

View File

@ -681,7 +681,7 @@ v9fs_direct_read(struct file *filp, char __user *udata, size_t count,
/**
* v9fs_cached_file_read - read from a file
* @filp: file pointer to read
* @udata: user data buffer to read data into
* @data: user data buffer to read data into
* @count: size of buffer
* @offset: offset at which to read data
*
@ -698,7 +698,7 @@ v9fs_cached_file_read(struct file *filp, char __user *data, size_t count,
/**
* v9fs_mmap_file_read - read from a file
* @filp: file pointer to read
* @udata: user data buffer to read data into
* @data: user data buffer to read data into
* @count: size of buffer
* @offset: offset at which to read data
*

View File

@ -580,7 +580,7 @@ static int v9fs_at_to_dotl_flags(int flags)
* v9fs_remove - helper function to remove files and directories
* @dir: directory inode that is being deleted
* @dentry: dentry that is being deleted
* @rmdir: removing a directory
* @flags: removing a directory
*
*/
@ -778,7 +778,7 @@ static int v9fs_vfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode
* v9fs_vfs_lookup - VFS lookup hook to "walk" to a new inode
* @dir: inode that is being walked from
* @dentry: dentry that is being walked to?
* @nameidata: path data
* @flags: lookup flags (unused)
*
*/
@ -1324,7 +1324,7 @@ v9fs_vfs_put_link(struct dentry *dentry, struct nameidata *nd, void *p)
* v9fs_vfs_mkspecial - create a special file
* @dir: inode to create special file in
* @dentry: dentry to create
* @mode: mode to create special file
* @perm: mode to create special file
* @extension: 9p2000.u format extension string representing special file
*
*/

View File

@ -226,7 +226,7 @@ int v9fs_open_to_dotl_flags(int flags)
* v9fs_vfs_create_dotl - VFS hook to create files for 9P2000.L protocol.
* @dir: directory inode that is being created
* @dentry: dentry that is being deleted
* @mode: create permissions
* @omode: create permissions
*
*/
@ -375,7 +375,7 @@ err_clunk_old_fid:
* v9fs_vfs_mkdir_dotl - VFS mkdir hook to create a directory
* @dir: inode that is being unlinked
* @dentry: dentry that is being unlinked
* @mode: mode for new directory
* @omode: mode for new directory
*
*/
@ -607,7 +607,6 @@ int v9fs_vfs_setattr_dotl(struct dentry *dentry, struct iattr *iattr)
* v9fs_stat2inode_dotl - populate an inode structure with stat info
* @stat: stat structure
* @inode: inode to populate
* @sb: superblock of filesystem
*
*/
@ -808,7 +807,7 @@ v9fs_vfs_link_dotl(struct dentry *old_dentry, struct inode *dir,
* v9fs_vfs_mknod_dotl - create a special file
* @dir: inode destination for new link
* @dentry: dentry for file
* @mode: mode for creation
* @omode: mode for creation
* @rdev: device associated with special file
*
*/

View File

@ -737,7 +737,7 @@ MODULE_ALIAS_MISCDEV(AUTOFS_MINOR);
MODULE_ALIAS("devname:autofs");
/* Register/deregister misc character device */
int autofs_dev_ioctl_init(void)
int __init autofs_dev_ioctl_init(void)
{
int r;

View File

@ -1686,7 +1686,7 @@ static size_t get_note_info_size(struct elf_note_info *info)
static int write_note_info(struct elf_note_info *info,
struct coredump_params *cprm)
{
bool first = 1;
bool first = true;
struct elf_thread_core_info *t = info->thread;
do {
@ -1710,7 +1710,7 @@ static int write_note_info(struct elf_note_info *info,
!writenote(&t->notes[i], cprm))
return 0;
first = 0;
first = false;
t = t->next;
} while (t);

View File

@ -380,7 +380,7 @@ failed:
/****************************************************************************/
void old_reloc(unsigned long rl)
static void old_reloc(unsigned long rl)
{
#ifdef DEBUG
char *segment[] = { "TEXT", "DATA", "BSS", "*UNKNOWN*" };

View File

@ -363,6 +363,69 @@ int blkdev_fsync(struct file *filp, loff_t start, loff_t end, int datasync)
}
EXPORT_SYMBOL(blkdev_fsync);
/**
* bdev_read_page() - Start reading a page from a block device
* @bdev: The device to read the page from
* @sector: The offset on the device to read the page to (need not be aligned)
* @page: The page to read
*
* On entry, the page should be locked. It will be unlocked when the page
* has been read. If the block driver implements rw_page synchronously,
* that will be true on exit from this function, but it need not be.
*
* Errors returned by this function are usually "soft", eg out of memory, or
* queue full; callers should try a different route to read this page rather
* than propagate an error back up the stack.
*
* Return: negative errno if an error occurs, 0 if submission was successful.
*/
int bdev_read_page(struct block_device *bdev, sector_t sector,
struct page *page)
{
const struct block_device_operations *ops = bdev->bd_disk->fops;
if (!ops->rw_page)
return -EOPNOTSUPP;
return ops->rw_page(bdev, sector + get_start_sect(bdev), page, READ);
}
EXPORT_SYMBOL_GPL(bdev_read_page);
/**
* bdev_write_page() - Start writing a page to a block device
* @bdev: The device to write the page to
* @sector: The offset on the device to write the page to (need not be aligned)
* @page: The page to write
* @wbc: The writeback_control for the write
*
* On entry, the page should be locked and not currently under writeback.
* On exit, if the write started successfully, the page will be unlocked and
* under writeback. If the write failed already (eg the driver failed to
* queue the page to the device), the page will still be locked. If the
* caller is a ->writepage implementation, it will need to unlock the page.
*
* Errors returned by this function are usually "soft", eg out of memory, or
* queue full; callers should try a different route to write this page rather
* than propagate an error back up the stack.
*
* Return: negative errno if an error occurs, 0 if submission was successful.
*/
int bdev_write_page(struct block_device *bdev, sector_t sector,
struct page *page, struct writeback_control *wbc)
{
int result;
int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE;
const struct block_device_operations *ops = bdev->bd_disk->fops;
if (!ops->rw_page)
return -EOPNOTSUPP;
set_page_writeback(page);
result = ops->rw_page(bdev, sector + get_start_sect(bdev), page, rw);
if (result)
end_page_writeback(page);
else
unlock_page(page);
return result;
}
EXPORT_SYMBOL_GPL(bdev_write_page);
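As a usage illustration, here is a minimal, hypothetical ->readpage sketch built on bdev_read_page(); example_readpage() and the buffered fallback it calls are invented names, and a real caller (for instance an mpage-based read path) would integrate this differently:

#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/pagemap.h>

/* Stand-in for the filesystem's ordinary buffered read path (not shown). */
static int example_buffered_readpage(struct file *file, struct page *page);

static int example_readpage(struct file *file, struct page *page)
{
	struct inode *inode = page->mapping->host;
	struct block_device *bdev = inode->i_sb->s_bdev;
	/* page index -> 512-byte sector number */
	sector_t sector = (sector_t)page->index << (PAGE_CACHE_SHIFT - 9);
	int ret;

	ret = bdev_read_page(bdev, sector, page);
	if (!ret)
		return 0;	/* submitted; the page is unlocked on completion */

	/*
	 * -EOPNOTSUPP (no ->rw_page) or a "soft" error such as -ENOMEM:
	 * fall back to the conventional path instead of failing the read.
	 */
	return example_buffered_readpage(file, page);
}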
/*
* pseudo-fs
*/

View File

@ -4510,7 +4510,8 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
spin_unlock(&eb->refs_lock);
}
static void mark_extent_buffer_accessed(struct extent_buffer *eb)
static void mark_extent_buffer_accessed(struct extent_buffer *eb,
struct page *accessed)
{
unsigned long num_pages, i;
@ -4519,7 +4520,8 @@ static void mark_extent_buffer_accessed(struct extent_buffer *eb)
num_pages = num_extent_pages(eb->start, eb->len);
for (i = 0; i < num_pages; i++) {
struct page *p = extent_buffer_page(eb, i);
mark_page_accessed(p);
if (p != accessed)
mark_page_accessed(p);
}
}
@ -4533,7 +4535,7 @@ struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
start >> PAGE_CACHE_SHIFT);
if (eb && atomic_inc_not_zero(&eb->refs)) {
rcu_read_unlock();
mark_extent_buffer_accessed(eb);
mark_extent_buffer_accessed(eb, NULL);
return eb;
}
rcu_read_unlock();
@ -4581,7 +4583,7 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
spin_unlock(&mapping->private_lock);
unlock_page(p);
page_cache_release(p);
mark_extent_buffer_accessed(exists);
mark_extent_buffer_accessed(exists, p);
goto free_eb;
}
@ -4596,7 +4598,6 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
attach_extent_buffer_page(eb, p);
spin_unlock(&mapping->private_lock);
WARN_ON(PageDirty(p));
mark_page_accessed(p);
eb->pages[i] = p;
if (!PageUptodate(p))
uptodate = 0;

View File

@ -470,11 +470,12 @@ static void btrfs_drop_pages(struct page **pages, size_t num_pages)
for (i = 0; i < num_pages; i++) {
/* page checked is some magic around finding pages that
* have been modified without going through btrfs_set_page_dirty
* clear it here
* clear it here. There should be no need to mark the pages
* accessed as prepare_pages should have marked them accessed
* in prepare_pages via find_or_create_page()
*/
ClearPageChecked(pages[i]);
unlock_page(pages[i]);
mark_page_accessed(pages[i]);
page_cache_release(pages[i]);
}
}

View File

@ -227,7 +227,7 @@ __find_get_block_slow(struct block_device *bdev, sector_t block)
int all_mapped = 1;
index = block >> (PAGE_CACHE_SHIFT - bd_inode->i_blkbits);
page = find_get_page(bd_mapping, index);
page = find_get_page_flags(bd_mapping, index, FGP_ACCESSED);
if (!page)
goto out;
@ -1366,12 +1366,13 @@ __find_get_block(struct block_device *bdev, sector_t block, unsigned size)
struct buffer_head *bh = lookup_bh_lru(bdev, block, size);
if (bh == NULL) {
/* __find_get_block_slow will mark the page accessed */
bh = __find_get_block_slow(bdev, block);
if (bh)
bh_lru_install(bh);
}
if (bh)
} else
touch_buffer(bh);
return bh;
}
EXPORT_SYMBOL(__find_get_block);
@ -1483,16 +1484,27 @@ EXPORT_SYMBOL(set_bh_page);
/*
* Called when truncating a buffer on a page completely.
*/
/* Bits that are cleared during an invalidate */
#define BUFFER_FLAGS_DISCARD \
(1 << BH_Mapped | 1 << BH_New | 1 << BH_Req | \
1 << BH_Delay | 1 << BH_Unwritten)
static void discard_buffer(struct buffer_head * bh)
{
unsigned long b_state, b_state_old;
lock_buffer(bh);
clear_buffer_dirty(bh);
bh->b_bdev = NULL;
clear_buffer_mapped(bh);
clear_buffer_req(bh);
clear_buffer_new(bh);
clear_buffer_delay(bh);
clear_buffer_unwritten(bh);
b_state = bh->b_state;
for (;;) {
b_state_old = cmpxchg(&bh->b_state, b_state,
(b_state & ~BUFFER_FLAGS_DISCARD));
if (b_state_old == b_state)
break;
b_state = b_state_old;
}
unlock_buffer(bh);
}
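The loop above folds what used to be five separate clear_buffer_*() calls (each an atomic bit operation) into a single compare-and-swap over b_state, clearing only the BUFFER_FLAGS_DISCARD bits while preserving anything set concurrently; presumably the point is to cut the number of atomic operations on the invalidate path.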
@ -2879,10 +2891,9 @@ EXPORT_SYMBOL(block_truncate_page);
/*
* The generic ->writepage function for buffer-backed address_spaces
* this form passes in the end_io handler used to finish the IO.
*/
int block_write_full_page_endio(struct page *page, get_block_t *get_block,
struct writeback_control *wbc, bh_end_io_t *handler)
int block_write_full_page(struct page *page, get_block_t *get_block,
struct writeback_control *wbc)
{
struct inode * const inode = page->mapping->host;
loff_t i_size = i_size_read(inode);
@ -2892,7 +2903,7 @@ int block_write_full_page_endio(struct page *page, get_block_t *get_block,
/* Is the page fully inside i_size? */
if (page->index < end_index)
return __block_write_full_page(inode, page, get_block, wbc,
handler);
end_buffer_async_write);
/* Is the page fully outside i_size? (truncate in progress) */
offset = i_size & (PAGE_CACHE_SIZE-1);
@ -2915,18 +2926,8 @@ int block_write_full_page_endio(struct page *page, get_block_t *get_block,
* writes to that region are not written out to the file."
*/
zero_user_segment(page, offset, PAGE_CACHE_SIZE);
return __block_write_full_page(inode, page, get_block, wbc, handler);
}
EXPORT_SYMBOL(block_write_full_page_endio);
/*
* The generic ->writepage function for buffer-backed address_spaces
*/
int block_write_full_page(struct page *page, get_block_t *get_block,
struct writeback_control *wbc)
{
return block_write_full_page_endio(page, get_block, wbc,
end_buffer_async_write);
return __block_write_full_page(inode, page, get_block, wbc,
end_buffer_async_write);
}
EXPORT_SYMBOL(block_write_full_page);

View File

@ -24,6 +24,12 @@
* configfs Copyright (C) 2005 Oracle. All rights reserved.
*/
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
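With this prefix in place, a bare pr_err("Tried to unregister non-subsystem!\n") in this module is emitted as "configfs: Tried to unregister non-subsystem!" (KBUILD_MODNAME expands to the module name), which is why the hand-written "configfs: " prefixes disappear from the messages converted later in this diff.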
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/spinlock.h>

View File

@ -940,9 +940,9 @@ static void client_drop_item(struct config_item *parent_item,
#ifdef DEBUG
static void configfs_dump_one(struct configfs_dirent *sd, int level)
{
printk(KERN_INFO "%*s\"%s\":\n", level, " ", configfs_get_name(sd));
pr_info("%*s\"%s\":\n", level, " ", configfs_get_name(sd));
#define type_print(_type) if (sd->s_type & _type) printk(KERN_INFO "%*s %s\n", level, " ", #_type);
#define type_print(_type) if (sd->s_type & _type) pr_info("%*s %s\n", level, " ", #_type);
type_print(CONFIGFS_ROOT);
type_print(CONFIGFS_DIR);
type_print(CONFIGFS_ITEM_ATTR);
@ -1699,7 +1699,7 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
struct dentry *root = dentry->d_sb->s_root;
if (dentry->d_parent != root) {
printk(KERN_ERR "configfs: Tried to unregister non-subsystem!\n");
pr_err("Tried to unregister non-subsystem!\n");
return;
}
@ -1709,7 +1709,7 @@ void configfs_unregister_subsystem(struct configfs_subsystem *subsys)
mutex_lock(&configfs_symlink_mutex);
spin_lock(&configfs_dirent_lock);
if (configfs_detach_prep(dentry, NULL)) {
printk(KERN_ERR "configfs: Tried to unregister non-empty subsystem!\n");
pr_err("Tried to unregister non-empty subsystem!\n");
}
spin_unlock(&configfs_dirent_lock);
mutex_unlock(&configfs_symlink_mutex);

View File

@ -168,9 +168,8 @@ static void configfs_set_inode_lock_class(struct configfs_dirent *sd,
* In practice the maximum level of locking depth is
* already reached. Just inform about possible reasons.
*/
printk(KERN_INFO "configfs: Too many levels of inodes"
" for the locking correctness validator.\n");
printk(KERN_INFO "Spurious warnings may appear.\n");
pr_info("Too many levels of inodes for the locking correctness validator.\n");
pr_info("Spurious warnings may appear.\n");
}
}
}

View File

@ -19,7 +19,7 @@
* Boston, MA 021110-1307, USA.
*
* Based on kobject:
* kobject is Copyright (c) 2002-2003 Patrick Mochel
* kobject is Copyright (c) 2002-2003 Patrick Mochel
*
* configfs Copyright (C) 2005 Oracle. All rights reserved.
*
@ -35,9 +35,9 @@
#include <linux/configfs.h>
static inline struct config_item * to_item(struct list_head * entry)
static inline struct config_item *to_item(struct list_head *entry)
{
return container_of(entry,struct config_item,ci_entry);
return container_of(entry, struct config_item, ci_entry);
}
/* Evil kernel */
@ -47,34 +47,35 @@ static void config_item_release(struct kref *kref);
* config_item_init - initialize item.
* @item: item in question.
*/
void config_item_init(struct config_item * item)
void config_item_init(struct config_item *item)
{
kref_init(&item->ci_kref);
INIT_LIST_HEAD(&item->ci_entry);
}
EXPORT_SYMBOL(config_item_init);
/**
* config_item_set_name - Set the name of an item
* @item: item.
* @name: name.
* @fmt: The vsnprintf()'s format string.
*
* If strlen(name) >= CONFIGFS_ITEM_NAME_LEN, then use a
* dynamically allocated string that @item->ci_name points to.
* Otherwise, use the static @item->ci_namebuf array.
*/
int config_item_set_name(struct config_item * item, const char * fmt, ...)
int config_item_set_name(struct config_item *item, const char *fmt, ...)
{
int error = 0;
int limit = CONFIGFS_ITEM_NAME_LEN;
int need;
va_list args;
char * name;
char *name;
/*
* First, try the static array
*/
va_start(args,fmt);
need = vsnprintf(item->ci_namebuf,limit,fmt,args);
va_start(args, fmt);
need = vsnprintf(item->ci_namebuf, limit, fmt, args);
va_end(args);
if (need < limit)
name = item->ci_namebuf;
@ -83,13 +84,13 @@ int config_item_set_name(struct config_item * item, const char * fmt, ...)
* Need more space? Allocate it and try again
*/
limit = need + 1;
name = kmalloc(limit,GFP_KERNEL);
name = kmalloc(limit, GFP_KERNEL);
if (!name) {
error = -ENOMEM;
goto Done;
}
va_start(args,fmt);
need = vsnprintf(name,limit,fmt,args);
va_start(args, fmt);
need = vsnprintf(name, limit, fmt, args);
va_end(args);
/* Still? Give up. */
@ -109,7 +110,6 @@ int config_item_set_name(struct config_item * item, const char * fmt, ...)
Done:
return error;
}
EXPORT_SYMBOL(config_item_set_name);
void config_item_init_type_name(struct config_item *item,
@ -131,20 +131,21 @@ void config_group_init_type_name(struct config_group *group, const char *name,
}
EXPORT_SYMBOL(config_group_init_type_name);
struct config_item * config_item_get(struct config_item * item)
struct config_item *config_item_get(struct config_item *item)
{
if (item)
kref_get(&item->ci_kref);
return item;
}
EXPORT_SYMBOL(config_item_get);
static void config_item_cleanup(struct config_item * item)
static void config_item_cleanup(struct config_item *item)
{
struct config_item_type * t = item->ci_type;
struct config_group * s = item->ci_group;
struct config_item * parent = item->ci_parent;
struct config_item_type *t = item->ci_type;
struct config_group *s = item->ci_group;
struct config_item *parent = item->ci_parent;
pr_debug("config_item %s: cleaning up\n",config_item_name(item));
pr_debug("config_item %s: cleaning up\n", config_item_name(item));
if (item->ci_name != item->ci_namebuf)
kfree(item->ci_name);
item->ci_name = NULL;
@ -167,21 +168,23 @@ static void config_item_release(struct kref *kref)
*
* Decrement the refcount, and if 0, call config_item_cleanup().
*/
void config_item_put(struct config_item * item)
void config_item_put(struct config_item *item)
{
if (item)
kref_put(&item->ci_kref, config_item_release);
}
EXPORT_SYMBOL(config_item_put);
/**
* config_group_init - initialize a group for use
* @k: group
* @group: config_group
*/
void config_group_init(struct config_group *group)
{
config_item_init(&group->cg_item);
INIT_LIST_HEAD(&group->cg_children);
}
EXPORT_SYMBOL(config_group_init);
/**
* config_group_find_item - search for item in group.
@ -195,11 +198,11 @@ void config_group_init(struct config_group *group)
struct config_item *config_group_find_item(struct config_group *group,
const char *name)
{
struct list_head * entry;
struct config_item * ret = NULL;
struct list_head *entry;
struct config_item *ret = NULL;
list_for_each(entry,&group->cg_children) {
struct config_item * item = to_item(entry);
list_for_each(entry, &group->cg_children) {
struct config_item *item = to_item(entry);
if (config_item_name(item) &&
!strcmp(config_item_name(item), name)) {
ret = config_item_get(item);
@ -208,9 +211,4 @@ struct config_item *config_group_find_item(struct config_group *group,
}
return ret;
}
EXPORT_SYMBOL(config_item_init);
EXPORT_SYMBOL(config_group_init);
EXPORT_SYMBOL(config_item_get);
EXPORT_SYMBOL(config_item_put);
EXPORT_SYMBOL(config_group_find_item);

View File

@ -85,7 +85,7 @@ static int configfs_fill_super(struct super_block *sb, void *data, int silent)
/* directory inodes start off with i_nlink == 2 (for "." entry) */
inc_nlink(inode);
} else {
pr_debug("configfs: could not get root inode\n");
pr_debug("could not get root inode\n");
return -ENOMEM;
}
@ -155,7 +155,7 @@ static int __init configfs_init(void)
return 0;
out4:
printk(KERN_ERR "configfs: Unable to register filesystem!\n");
pr_err("Unable to register filesystem!\n");
configfs_inode_exit();
out3:
kobject_put(config_kobj);

View File

@ -83,7 +83,7 @@ static int efivarfs_d_hash(const struct dentry *dentry, struct qstr *qstr)
return 0;
}
static struct dentry_operations efivarfs_d_ops = {
static const struct dentry_operations efivarfs_d_ops = {
.d_compare = efivarfs_d_compare,
.d_hash = efivarfs_d_hash,
.d_delete = always_delete_dentry,

View File

@ -26,7 +26,8 @@ static int efs_readdir(struct file *file, struct dir_context *ctx)
int slot;
if (inode->i_size & (EFS_DIRBSIZE-1))
printk(KERN_WARNING "EFS: WARNING: readdir(): directory size not a multiple of EFS_DIRBSIZE\n");
pr_warn("%s(): directory size not a multiple of EFS_DIRBSIZE\n",
__func__);
/* work out where this entry can be found */
block = ctx->pos >> EFS_DIRBSIZE_BITS;
@ -43,14 +44,15 @@ static int efs_readdir(struct file *file, struct dir_context *ctx)
bh = sb_bread(inode->i_sb, efs_bmap(inode, block));
if (!bh) {
printk(KERN_ERR "EFS: readdir(): failed to read dir block %d\n", block);
pr_err("%s(): failed to read dir block %d\n",
__func__, block);
break;
}
dirblock = (struct efs_dir *) bh->b_data;
if (be16_to_cpu(dirblock->magic) != EFS_DIRBLK_MAGIC) {
printk(KERN_ERR "EFS: readdir(): invalid directory block\n");
pr_err("%s(): invalid directory block\n", __func__);
brelse(bh);
break;
}
@ -69,10 +71,9 @@ static int efs_readdir(struct file *file, struct dir_context *ctx)
inodenum = be32_to_cpu(dirslot->inode);
namelen = dirslot->namelen;
nameptr = dirslot->name;
#ifdef DEBUG
printk(KERN_DEBUG "EFS: readdir(): block %d slot %d/%d: inode %u, name \"%s\", namelen %u\n", block, slot, dirblock->slots-1, inodenum, nameptr, namelen);
#endif
pr_debug("%s(): block %d slot %d/%d: inode %u, name \"%s\", namelen %u\n",
__func__, block, slot, dirblock->slots-1,
inodenum, nameptr, namelen);
if (!namelen)
continue;
/* found the next entry */
@ -80,7 +81,8 @@ static int efs_readdir(struct file *file, struct dir_context *ctx)
/* sanity check */
if (nameptr - (char *) dirblock + namelen > EFS_DIRBSIZE) {
printk(KERN_WARNING "EFS: directory entry %d exceeds directory block\n", slot);
pr_warn("directory entry %d exceeds directory block\n",
slot);
continue;
}

View File

@ -7,6 +7,12 @@
#ifndef _EFS_EFS_H_
#define _EFS_EFS_H_
#ifdef pr_fmt
#undef pr_fmt
#endif
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/fs.h>
#include <asm/uaccess.h>

View File

@ -22,10 +22,8 @@ int efs_get_block(struct inode *inode, sector_t iblock,
/*
* i have no idea why this happens as often as it does
*/
printk(KERN_WARNING "EFS: bmap(): block %d >= %ld (filesize %ld)\n",
block,
inode->i_blocks,
inode->i_size);
pr_warn("%s(): block %d >= %ld (filesize %ld)\n",
__func__, block, inode->i_blocks, inode->i_size);
#endif
return 0;
}
@ -38,7 +36,7 @@ int efs_get_block(struct inode *inode, sector_t iblock,
int efs_bmap(struct inode *inode, efs_block_t block) {
if (block < 0) {
printk(KERN_WARNING "EFS: bmap(): block < 0\n");
pr_warn("%s(): block < 0\n", __func__);
return 0;
}
@ -48,10 +46,8 @@ int efs_bmap(struct inode *inode, efs_block_t block) {
/*
* i have no idea why this happens as often as it does
*/
printk(KERN_WARNING "EFS: bmap(): block %d >= %ld (filesize %ld)\n",
block,
inode->i_blocks,
inode->i_size);
pr_warn("%s(): block %d >= %ld (filesize %ld)\n",
__func__, block, inode->i_blocks, inode->i_size);
#endif
return 0;
}

View File

@ -89,7 +89,7 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
bh = sb_bread(inode->i_sb, block);
if (!bh) {
printk(KERN_WARNING "EFS: bread() failed at block %d\n", block);
pr_warn("%s() failed at block %d\n", __func__, block);
goto read_inode_error;
}
@ -130,19 +130,16 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
for(i = 0; i < EFS_DIRECTEXTENTS; i++) {
extent_copy(&(efs_inode->di_u.di_extents[i]), &(in->extents[i]));
if (i < in->numextents && in->extents[i].cooked.ex_magic != 0) {
printk(KERN_WARNING "EFS: extent %d has bad magic number in inode %lu\n", i, inode->i_ino);
pr_warn("extent %d has bad magic number in inode %lu\n",
i, inode->i_ino);
brelse(bh);
goto read_inode_error;
}
}
brelse(bh);
#ifdef DEBUG
printk(KERN_DEBUG "EFS: efs_iget(): inode %lu, extents %d, mode %o\n",
inode->i_ino, in->numextents, inode->i_mode);
#endif
pr_debug("efs_iget(): inode %lu, extents %d, mode %o\n",
inode->i_ino, in->numextents, inode->i_mode);
switch (inode->i_mode & S_IFMT) {
case S_IFDIR:
inode->i_op = &efs_dir_inode_operations;
@ -162,7 +159,7 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
init_special_inode(inode, inode->i_mode, device);
break;
default:
printk(KERN_WARNING "EFS: unsupported inode mode %o\n", inode->i_mode);
pr_warn("unsupported inode mode %o\n", inode->i_mode);
goto read_inode_error;
break;
}
@ -171,7 +168,7 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
return inode;
read_inode_error:
printk(KERN_WARNING "EFS: failed to read inode %lu\n", inode->i_ino);
pr_warn("failed to read inode %lu\n", inode->i_ino);
iget_failed(inode);
return ERR_PTR(-EIO);
}
@ -216,7 +213,7 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
/* if we only have one extent then nothing can be found */
if (in->numextents == 1) {
printk(KERN_ERR "EFS: map_block() failed to map (1 extent)\n");
pr_err("%s() failed to map (1 extent)\n", __func__);
return 0;
}
@ -234,13 +231,12 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
}
}
printk(KERN_ERR "EFS: map_block() failed to map block %u (dir)\n", block);
pr_err("%s() failed to map block %u (dir)\n", __func__, block);
return 0;
}
#ifdef DEBUG
printk(KERN_DEBUG "EFS: map_block(): indirect search for logical block %u\n", block);
#endif
pr_debug("%s(): indirect search for logical block %u\n",
__func__, block);
direxts = in->extents[0].cooked.ex_offset;
indexts = in->numextents;
@ -262,7 +258,8 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
if (dirext == direxts) {
/* should never happen */
printk(KERN_ERR "EFS: couldn't find direct extent for indirect extent %d (block %u)\n", cur, block);
pr_err("couldn't find direct extent for indirect extent %d (block %u)\n",
cur, block);
if (bh) brelse(bh);
return 0;
}
@ -279,12 +276,12 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
bh = sb_bread(inode->i_sb, iblock);
if (!bh) {
printk(KERN_ERR "EFS: bread() failed at block %d\n", iblock);
pr_err("%s() failed at block %d\n",
__func__, iblock);
return 0;
}
#ifdef DEBUG
printk(KERN_DEBUG "EFS: map_block(): read indirect extent block %d\n", iblock);
#endif
pr_debug("%s(): read indirect extent block %d\n",
__func__, iblock);
first = 0;
lastblock = iblock;
}
@ -294,7 +291,8 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
extent_copy(&(exts[ioffset]), &ext);
if (ext.cooked.ex_magic != 0) {
printk(KERN_ERR "EFS: extent %d has bad magic number in block %d\n", cur, iblock);
pr_err("extent %d has bad magic number in block %d\n",
cur, iblock);
if (bh) brelse(bh);
return 0;
}
@ -306,7 +304,7 @@ efs_block_t efs_map_block(struct inode *inode, efs_block_t block) {
}
}
if (bh) brelse(bh);
printk(KERN_ERR "EFS: map_block() failed to map block %u (indir)\n", block);
pr_err("%s() failed to map block %u (indir)\n", __func__, block);
return 0;
}

View File

@ -23,20 +23,22 @@ static efs_ino_t efs_find_entry(struct inode *inode, const char *name, int len)
efs_block_t block;
if (inode->i_size & (EFS_DIRBSIZE-1))
printk(KERN_WARNING "EFS: WARNING: find_entry(): directory size not a multiple of EFS_DIRBSIZE\n");
pr_warn("%s(): directory size not a multiple of EFS_DIRBSIZE\n",
__func__);
for(block = 0; block < inode->i_blocks; block++) {
bh = sb_bread(inode->i_sb, efs_bmap(inode, block));
if (!bh) {
printk(KERN_ERR "EFS: find_entry(): failed to read dir block %d\n", block);
pr_err("%s(): failed to read dir block %d\n",
__func__, block);
return 0;
}
dirblock = (struct efs_dir *) bh->b_data;
if (be16_to_cpu(dirblock->magic) != EFS_DIRBLK_MAGIC) {
printk(KERN_ERR "EFS: find_entry(): invalid directory block\n");
pr_err("%s(): invalid directory block\n", __func__);
brelse(bh);
return(0);
}

View File

@ -134,7 +134,7 @@ static const struct export_operations efs_export_ops = {
static int __init init_efs_fs(void) {
int err;
printk("EFS: "EFS_VERSION" - http://aeschi.ch.eu.org/efs/\n");
pr_info(EFS_VERSION" - http://aeschi.ch.eu.org/efs/\n");
err = init_inodecache();
if (err)
goto out1;
@ -179,12 +179,12 @@ static efs_block_t efs_validate_vh(struct volume_header *vh) {
csum += be32_to_cpu(cs);
}
if (csum) {
printk(KERN_INFO "EFS: SGI disklabel: checksum bad, label corrupted\n");
pr_warn("SGI disklabel: checksum bad, label corrupted\n");
return 0;
}
#ifdef DEBUG
printk(KERN_DEBUG "EFS: bf: \"%16s\"\n", vh->vh_bootfile);
pr_debug("bf: \"%16s\"\n", vh->vh_bootfile);
for(i = 0; i < NVDIR; i++) {
int j;
@ -196,9 +196,8 @@ static efs_block_t efs_validate_vh(struct volume_header *vh) {
name[j] = (char) 0;
if (name[0]) {
printk(KERN_DEBUG "EFS: vh: %8s block: 0x%08x size: 0x%08x\n",
name,
(int) be32_to_cpu(vh->vh_vd[i].vd_lbn),
pr_debug("vh: %8s block: 0x%08x size: 0x%08x\n",
name, (int) be32_to_cpu(vh->vh_vd[i].vd_lbn),
(int) be32_to_cpu(vh->vh_vd[i].vd_nbytes));
}
}
@ -211,12 +210,11 @@ static efs_block_t efs_validate_vh(struct volume_header *vh) {
}
#ifdef DEBUG
if (be32_to_cpu(vh->vh_pt[i].pt_nblks)) {
printk(KERN_DEBUG "EFS: pt %2d: start: %08d size: %08d type: 0x%02x (%s)\n",
i,
(int) be32_to_cpu(vh->vh_pt[i].pt_firstlbn),
(int) be32_to_cpu(vh->vh_pt[i].pt_nblks),
pt_type,
(pt_entry->pt_name) ? pt_entry->pt_name : "unknown");
pr_debug("pt %2d: start: %08d size: %08d type: 0x%02x (%s)\n",
i, (int)be32_to_cpu(vh->vh_pt[i].pt_firstlbn),
(int)be32_to_cpu(vh->vh_pt[i].pt_nblks),
pt_type, (pt_entry->pt_name) ?
pt_entry->pt_name : "unknown");
}
#endif
if (IS_EFS(pt_type)) {
@ -226,11 +224,10 @@ static efs_block_t efs_validate_vh(struct volume_header *vh) {
}
if (slice == -1) {
printk(KERN_NOTICE "EFS: partition table contained no EFS partitions\n");
pr_notice("partition table contained no EFS partitions\n");
#ifdef DEBUG
} else {
printk(KERN_INFO "EFS: using slice %d (type %s, offset 0x%x)\n",
slice,
pr_info("using slice %d (type %s, offset 0x%x)\n", slice,
(pt_entry->pt_name) ? pt_entry->pt_name : "unknown",
sblock);
#endif
@ -268,7 +265,7 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
s->s_magic = EFS_SUPER_MAGIC;
if (!sb_set_blocksize(s, EFS_BLOCKSIZE)) {
printk(KERN_ERR "EFS: device does not support %d byte blocks\n",
pr_err("device does not support %d byte blocks\n",
EFS_BLOCKSIZE);
return -EINVAL;
}
@ -277,7 +274,7 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
bh = sb_bread(s, 0);
if (!bh) {
printk(KERN_ERR "EFS: cannot read volume header\n");
pr_err("cannot read volume header\n");
return -EINVAL;
}
@ -295,13 +292,14 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
bh = sb_bread(s, sb->fs_start + EFS_SUPER);
if (!bh) {
printk(KERN_ERR "EFS: cannot read superblock\n");
pr_err("cannot read superblock\n");
return -EINVAL;
}
if (efs_validate_super(sb, (struct efs_super *) bh->b_data)) {
#ifdef DEBUG
printk(KERN_WARNING "EFS: invalid superblock at block %u\n", sb->fs_start + EFS_SUPER);
pr_warn("invalid superblock at block %u\n",
sb->fs_start + EFS_SUPER);
#endif
brelse(bh);
return -EINVAL;
@ -310,7 +308,7 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
if (!(s->s_flags & MS_RDONLY)) {
#ifdef DEBUG
printk(KERN_INFO "EFS: forcing read-only mode\n");
pr_info("forcing read-only mode\n");
#endif
s->s_flags |= MS_RDONLY;
}
@ -318,13 +316,13 @@ static int efs_fill_super(struct super_block *s, void *d, int silent)
s->s_export_op = &efs_export_ops;
root = efs_iget(s, EFS_ROOTINODE);
if (IS_ERR(root)) {
printk(KERN_ERR "EFS: get root inode failed\n");
pr_err("get root inode failed\n");
return PTR_ERR(root);
}
s->s_root = d_make_root(root);
if (!(s->s_root)) {
printk(KERN_ERR "EFS: get root dentry failed\n");
pr_err("get root dentry failed\n");
return -ENOMEM;
}

View File

@ -259,7 +259,7 @@ static int filldir_one(void * __buf, const char * name, int len,
/**
* get_name - default export_operations->get_name function
* @dentry: the directory in which to find a name
* @path: the directory in which to find a name
* @name: a pointer to a %NAME_MAX+1 char buffer to store the name
* @child: the dentry for the child directory.
*
@ -337,7 +337,7 @@ out:
/**
* export_encode_fh - default export_operations->encode_fh function
* @inode: the object to encode
* @fh: where to store the file handle fragment
* @fid: where to store the file handle fragment
* @max_len: maximum length to store there
* @parent: parent directory inode, if wanted
*

View File

@ -1044,6 +1044,8 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
* allocating. If we are looking at the buddy cache we would
* have taken a reference using ext4_mb_load_buddy and that
* would have pinned buddy page to page cache.
* The call to ext4_mb_get_buddy_page_lock will mark the
* page accessed.
*/
ret = ext4_mb_get_buddy_page_lock(sb, group, &e4b);
if (ret || !EXT4_MB_GRP_NEED_INIT(this_grp)) {
@ -1062,7 +1064,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
ret = -EIO;
goto err;
}
mark_page_accessed(page);
if (e4b.bd_buddy_page == NULL) {
/*
@ -1082,7 +1083,6 @@ int ext4_mb_init_group(struct super_block *sb, ext4_group_t group)
ret = -EIO;
goto err;
}
mark_page_accessed(page);
err:
ext4_mb_put_buddy_page_lock(&e4b);
return ret;
@ -1141,7 +1141,7 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
/* we could use find_or_create_page(), but it locks page
* what we'd like to avoid in fast path ... */
page = find_get_page(inode->i_mapping, pnum);
page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
if (page == NULL || !PageUptodate(page)) {
if (page)
/*
@ -1176,15 +1176,16 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
ret = -EIO;
goto err;
}
/* Pages marked accessed already */
e4b->bd_bitmap_page = page;
e4b->bd_bitmap = page_address(page) + (poff * sb->s_blocksize);
mark_page_accessed(page);
block++;
pnum = block / blocks_per_page;
poff = block % blocks_per_page;
page = find_get_page(inode->i_mapping, pnum);
page = find_get_page_flags(inode->i_mapping, pnum, FGP_ACCESSED);
if (page == NULL || !PageUptodate(page)) {
if (page)
page_cache_release(page);
@ -1209,9 +1210,10 @@ ext4_mb_load_buddy(struct super_block *sb, ext4_group_t group,
ret = -EIO;
goto err;
}
/* Pages marked accessed already */
e4b->bd_buddy_page = page;
e4b->bd_buddy = page_address(page) + (poff * sb->s_blocksize);
mark_page_accessed(page);
BUG_ON(e4b->bd_bitmap_page == NULL);
BUG_ON(e4b->bd_buddy_page == NULL);

View File

@ -429,7 +429,7 @@ int ext4_bio_write_page(struct ext4_io_submit *io,
block_start = bh_offset(bh);
if (block_start >= len) {
/*
* Comments copied from block_write_full_page_endio:
* Comments copied from block_write_full_page:
*
* The page straddles i_size. It must be zeroed out on
* each and every writepage invocation because it may

View File

@ -69,7 +69,6 @@ repeat:
goto repeat;
}
out:
mark_page_accessed(page);
return page;
}
@ -137,13 +136,11 @@ int ra_meta_pages(struct f2fs_sb_info *sbi, int start, int nrpages, int type)
if (!page)
continue;
if (PageUptodate(page)) {
mark_page_accessed(page);
f2fs_put_page(page, 1);
continue;
}
f2fs_submit_page_mbio(sbi, page, blk_addr, &fio);
mark_page_accessed(page);
f2fs_put_page(page, 0);
}
out:

View File

@ -967,7 +967,6 @@ repeat:
goto repeat;
}
got_it:
mark_page_accessed(page);
return page;
}
@ -1022,7 +1021,6 @@ page_hit:
f2fs_put_page(page, 1);
return ERR_PTR(-EIO);
}
mark_page_accessed(page);
return page;
}

View File

@ -280,15 +280,15 @@ int fscache_add_cache(struct fscache_cache *cache,
spin_unlock(&fscache_fsdef_index.lock);
up_write(&fscache_addremove_sem);
printk(KERN_NOTICE "FS-Cache: Cache \"%s\" added (type %s)\n",
cache->tag->name, cache->ops->name);
pr_notice("Cache \"%s\" added (type %s)\n",
cache->tag->name, cache->ops->name);
kobject_uevent(cache->kobj, KOBJ_ADD);
_leave(" = 0 [%s]", cache->identifier);
return 0;
tag_in_use:
printk(KERN_ERR "FS-Cache: Cache tag '%s' already in use\n", tagname);
pr_err("Cache tag '%s' already in use\n", tagname);
__fscache_release_cache_tag(tag);
_leave(" = -EXIST");
return -EEXIST;
@ -317,8 +317,7 @@ EXPORT_SYMBOL(fscache_add_cache);
void fscache_io_error(struct fscache_cache *cache)
{
if (!test_and_set_bit(FSCACHE_IOERROR, &cache->flags))
printk(KERN_ERR "FS-Cache:"
" Cache '%s' stopped due to I/O error\n",
pr_err("Cache '%s' stopped due to I/O error\n",
cache->ops->name);
}
EXPORT_SYMBOL(fscache_io_error);
@ -369,8 +368,8 @@ void fscache_withdraw_cache(struct fscache_cache *cache)
_enter("");
printk(KERN_NOTICE "FS-Cache: Withdrawing cache \"%s\"\n",
cache->tag->name);
pr_notice("Withdrawing cache \"%s\"\n",
cache->tag->name);
/* make the cache unavailable for cookie acquisition */
if (test_and_set_bit(FSCACHE_CACHE_WITHDRAWN, &cache->flags))

View File

@ -519,7 +519,7 @@ void __fscache_disable_cookie(struct fscache_cookie *cookie, bool invalidate)
ASSERTCMP(atomic_read(&cookie->n_active), >, 0);
if (atomic_read(&cookie->n_children) != 0) {
printk(KERN_ERR "FS-Cache: Cookie '%s' still has children\n",
pr_err("Cookie '%s' still has children\n",
cookie->def->name);
BUG();
}

View File

@ -31,12 +31,10 @@ static int fscache_histogram_show(struct seq_file *m, void *v)
switch ((unsigned long) v) {
case 1:
seq_puts(m, "JIFS SECS OBJ INST OP RUNS OBJ RUNS "
" RETRV DLY RETRIEVLS\n");
seq_puts(m, "JIFS SECS OBJ INST OP RUNS OBJ RUNS RETRV DLY RETRIEVLS\n");
return 0;
case 2:
seq_puts(m, "===== ===== ========= ========= ========="
" ========= =========\n");
seq_puts(m, "===== ===== ========= ========= ========= ========= =========\n");
return 0;
default:
index = (unsigned long) v - 3;

Some files were not shown because too many files have changed in this diff.