Check the CPU ID in pj4_cp0_init(), so that for non-PJ4 v7 CPUs,
pj4_cp0_init() simply returns.
This fix allows all v7 CPUs (PJ4 and non-PJ4) to use this code.
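A minimal sketch of the shape of that check (the rest of pj4_cp0_init()
is assumed here, not taken from this description):

static int __init pj4_cp0_init(void)
{
	if (!cpu_is_pj4())
		return 0;	/* non-PJ4 v7 CPUs: nothing to set up here */

	/* ... existing PJ4 coprocessor access setup continues as before ... */
	return 0;
}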
Signed-off-by: Chao Xie <chao.xie@marvell.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch adds cpu_is_pj4() to arch/arm/include/asm/cputype.h.
PJ4 has some differences from standard v7, for example the coprocessor.
cpu_is_pj4() is needed to distinguish these cases.
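A rough sketch of what such a helper could look like; the exact main ID
register bits checked below are an assumption, not taken from the patch
text (PJ4 cores report Marvell, implementer 0x56, in the MIDR):

static inline int cpu_is_pj4(void)
{
	unsigned int id = read_cpuid_id();

	return (id >> 24) == 0x56;	/* implementer field == Marvell */
}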
Signed-off-by: Chao Xie <chao.xie@marvell.com>
Reviewed-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Kevin Hilman <khilman@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Reviewed-by: Stephen Warren <swarren@nvidia.com>
Tested-by: Matt Porter <mporter@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
arm_pm_restart(), arm_pm_idle() and soft_restart() are all declared in
system_misc.h, but this file is not included in process.c. Add this
missing include. Found via sparse:
arch/arm/kernel/process.c:98:6: warning: symbol 'soft_restart' was not declared. Should it be static?
arch/arm/kernel/process.c:127:6: warning: symbol 'arm_pm_restart' was not declared. Should it be static?
arch/arm/kernel/process.c:134:6: warning: symbol 'arm_pm_idle' was not declared. Should it be static?
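The fix is the single missing include at the top of
arch/arm/kernel/process.c:

#include <asm/system_misc.h>	/* declares soft_restart(), arm_pm_restart, arm_pm_idle */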
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
virt_to_page() is incredibly inefficient when virt-to-phys patching is
enabled. This is because we end up with this calculation:
page = &mem_map[asm virt_to_phys(addr) >> 12 - __pv_phys_offset >> 12]
in assembly. The asm virt_to_phys() is equivalent to this operation:
addr - PAGE_OFFSET + __pv_phys_offset
and we can see that because this is assembly, the compiler has no chance
to optimise some of that away. This should reduce down to:
page = &mem_map[(addr - PAGE_OFFSET) >> 12]
for the common cases. Permit the compiler to make this optimisation by
giving it more of the information it needs - do this by providing a
virt_to_pfn() macro.
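As a sketch, the new macro can be expressed purely in terms of the
virtual address, so virt_to_page() becomes
pfn_to_page(virt_to_pfn(kaddr)) with arithmetic the compiler can fold:

#define virt_to_pfn(kaddr) \
	((((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT) + \
	 PHYS_PFN_OFFSET)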
Another issue which makes this more complex is that __pv_phys_offset is
a 64-bit type on all platforms. This is needlessly wasteful - if we
store the physical offset as a PFN, we can save a lot of work having
to deal with 64-bit values, which sometimes ends up producing incredibly
horrid code:
a4c: e3009000 movw r9, #0
a4c: R_ARM_MOVW_ABS_NC __pv_phys_offset
a50: e3409000 movt r9, #0 ; r9 = &__pv_phys_offset
a50: R_ARM_MOVT_ABS __pv_phys_offset
a54: e3002000 movw r2, #0
a54: R_ARM_MOVW_ABS_NC __pv_phys_offset
a58: e3402000 movt r2, #0 ; r2 = &__pv_phys_offset
a58: R_ARM_MOVT_ABS __pv_phys_offset
a5c: e5999004 ldr r9, [r9, #4] ; r9 = high word of __pv_phys_offset
a60: e3001000 movw r1, #0
a60: R_ARM_MOVW_ABS_NC mem_map
a64: e592c000 ldr ip, [r2] ; ip = low word of __pv_phys_offset
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from drivers/scsi/arm
It's a NOOP since 2.6.35 and it will be removed one day.
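The change is mechanical; a hedged before/after illustration (the
handler and device names below are placeholders, not the actual driver
symbols):

/* before */
ret = request_irq(irq, example_intr, IRQF_DISABLED, "example", info);

/* after: the flag has been a no-op since 2.6.35 */
ret = request_irq(irq, example_intr, 0, "example", info);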
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from arch/arm/mach-w90x900/time.c
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Acked-by: Wan zongshun <mcuos.com@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from arch/arm/mach-spear/time.c
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from arch/arm/mach-mmp/time.c
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from miscellaneous code in mach-xxx and plat-xxx
This flag is a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Greg Ungerer <gerg@uclinux.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from arch/arm/mach-lpc32xx/timer.c
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from code in arch/arm/mach-ixp4xx
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
from arch/arm/mach-cns3xxx/core.c
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the use of the IRQF_DISABLED flag
in arch/arm/include/asm/floppy.h
It's a NOOP since 2.6.35 and it will be removed one day.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch removes the IRQF_DISABLED flag from footbridge
code. It's a NOOP since 2.6.35.
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Apart from setting the limit of memblock, it's also useful to be able
to get the limit to avoid recalculating it every time. Add the function
to do so.
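A minimal sketch of the accessor, mirroring the existing
memblock_set_current_limit():

phys_addr_t memblock_get_current_limit(void)
{
	return memblock.current_limit;
}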
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
While unwinding backtrace, stack overflow is possible. This stack
overflow can sometimes lead to data abort in system if the area after
stack is not mapped to physical memory.
To prevent this problem from happening, execute the instructions that
can cause a data abort in separate helper functions, where a check for
feasibility is made before reading each word from the stack.
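A hedged sketch of the kind of helper this introduces; the field and
helper names are approximations of the unwinder internals, not the
exact patch:

static int unwind_pop_register(struct unwind_ctrl_block *ctrl,
			       unsigned long **vsp, unsigned int reg)
{
	/* would this read run past the known top of the stack? */
	if (*vsp >= (unsigned long *)ctrl->sp_high)
		return URC_FAILURE;	/* stop unwinding instead of faulting */

	ctrl->vrs[reg] = **vsp;
	(*vsp)++;
	return URC_OK;
}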
Signed-off-by: Anurag Aggarwal <a.anurag@samsung.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This allocates feature bits 0-4 in HWCAP2 for the crypto and CRC
extensions introduced in ARMv8.
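The bit assignments end up looking like this (a sketch of the uapi
header additions):

#define HWCAP2_AES	(1 << 0)
#define HWCAP2_PMULL	(1 << 1)
#define HWCAP2_SHA1	(1 << 2)
#define HWCAP2_SHA2	(1 << 3)
#define HWCAP2_CRC32	(1 << 4)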
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This enables AT_HWCAP2 for ARM. The generic support for this
new ELF auxv entry was added in commit 2171364d1a (powerpc:
Add HWCAP2 aux entry).
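On the arch side this amounts to exposing a second hwcap word through
the ELF auxv; a minimal sketch of the hookup (names assumed to mirror
the existing elf_hwcap):

extern unsigned int elf_hwcap2;
#define ELF_HWCAP2	(elf_hwcap2)	/* reported to userspace via AT_HWCAP2 */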
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This patch moves bios32 over to using the generic code for enabling PCI
resources. Since the core code takes care of bridge resources too, we
can also drop the explicit IO and MEMORY enabling for them in the arch
code.
A side-effect of this change is that we no longer explicitly enable
devices when running in PCI_PROBE_ONLY mode. This stays closer to the
meaning of the option and prevents us from trying to enable devices
without any assigned resources (the core code refuses to enable
resources without parents).
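A hedged sketch of what pcibios_enable_device() reduces to once the
generic helper is used:

int pcibios_enable_device(struct pci_dev *dev, int mask)
{
	if (pci_has_flag(PCI_PROBE_ONLY))
		return 0;	/* see above: don't enable devices in PROBE_ONLY mode */

	return pci_enable_resources(dev, mask);
}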
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Tested-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Looking at perf profiles of multi-threaded hackbench runs, a significant
performance hit appears to manifest from the cmpxchg loop used to
implement the 32-bit atomic_add_unless function. This can be mitigated
by writing a direct implementation of __atomic_add_unless which doesn't
require iteration outside of the atomic operation.
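Roughly, the direct version folds the "unless" comparison into a single
ldrex/strex sequence instead of looping around cmpxchg(); a sketch
along the lines of the other ARM atomic helpers:

static inline int __atomic_add_unless(atomic_t *v, int a, int u)
{
	int oldval, newval;
	unsigned long tmp;

	smp_mb();
	prefetchw(&v->counter);

	/* load old value; if it equals u, skip the add, else store old + a,
	 * retrying the whole exclusive sequence if the store fails */
	__asm__ __volatile__ ("@ atomic_add_unless\n"
"1:	ldrex	%0, [%4]\n"
"	teq	%0, %5\n"
"	beq	2f\n"
"	add	%1, %0, %6\n"
"	strex	%2, %1, [%4]\n"
"	teq	%2, #0\n"
"	bne	1b\n"
"2:"
	: "=&r" (oldval), "=&r" (newval), "=&r" (tmp), "+Qo" (v->counter)
	: "r" (&v->counter), "r" (u), "r" (a)
	: "cc");

	if (oldval != u)
		smp_mb();

	return oldval;
}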
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Renames logical shift macros, 'push' and 'pull', defined in
arch/arm/include/asm/assembler.h, into 'lspush' and 'lspull'.
That eliminates name conflict between 'push' logical shift macro
and 'push' instruction mnemonic. That allows assembler.h to be
included in .S files that use 'push' instruction.
Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Delete ARM's asm/system.h. It's the last holdout and should be got rid of.
This builds for defconfig, lpc32xx_defconfig, exynos_defconfig + XEN, the
previous changed to a Gemini system and an omap3 config with TI_DAVINCI_EMAC.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The pte_accessible macro can be used to identify page table entries
capable of being cached by a TLB. In principle, this differs from
pte_present, since PROT_NONE mappings are mapped using invalid entries
identified as present and ptes designated as `old' can use either
invalid entries or those with the access flag cleared (guaranteed not to
be in the TLB). However, there is a race to take care of, as described
in 2084140594 ("mm: fix TLB flush race between migration, and
change_protection_range"), between a page being migrated and mprotected
at the same time. In this case, we can check whether a TLB invalidation
is pending for the mm and if so, temporarily consider PROT_NONE mappings
as valid.
This patch implements a quick pte_accessible macro for ARM by simply
checking if the pte is valid/present depending on the mm. For classic
MMU, these checks are identical and will generate some false positives
for PROT_NONE mappings, but this is better than the current asm-generic
definition of ((void)(pte),1).
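Concretely, the ARM definition can be as simple as (a sketch):

#define pte_accessible(mm, pte) \
	(mm_tlb_flush_pending(mm) ? pte_present(pte) : pte_valid(pte))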
Finally, pte_present_user is moved to use pte_valid (and renamed
appropriately) since we don't care about cache flushing for faulting
mappings.
Acked-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
After a bunch of benchmarking on the interaction between dmb and pldw,
it turns out that issuing the pldw *after* the dmb instruction can
give modest performance gains (~3% atomic_add_return improvement on a
dual A15).
This patch adds prefetchw invocations to our barriered atomic operations
including cmpxchg, test_and_xxx and futexes.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The Coherent DMA allocator allocates pages of high order and then splits
them up into smaller pages.
This splitting logic would run into problems if the allocator were
given compound pages. Thus the Coherent DMA allocator was originally
incompatible with compound pages existing and, by extension, huge
pages. A compile #error was put in place whenever huge pages were
enabled.
Compatibility with compound pages has since been introduced by the
following commit (which merely excludes GFP_COMP pages from being
requested by the coherent DMA allocator):
ea2e705 ARM: 7172/1: dma: Drop GFP_COMP for DMA memory allocations
When huge page support was introduced to ARM, the compile #error in
dma-mapping.c was replaced by a #warning when it should have been
removed instead.
This patch removes the compile #warning in dma-mapping.c when huge
pages are enabled.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The functions in mcpm_entry.c are mostly intended for use during
scary cache and coherency disabling sequences, or do other things
which confuse trace ... like powering a CPU down and not
returning. Similarly for the backend code.
For simplicity, this patch just makes whole files notrace.
There should be more than enough traceable points on the paths to
these functions, but we can be more fine-grained later if there is
a need for it.
Jon Medhurst:
Also added spc.o to the list of files as it contains functions used by
MCPM code which have comments like: "might be used in code
paths where normal cacheable locks are not working"
Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Jon Medhurst <tixy@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
CPU_32v6 currently selects CPU_USE_DOMAINS if CPU_V6 and MMU. This is
because ARM 1136 r0pX CPUs lack the v6k extensions, and therefore do
not have hardware thread registers. The lack of these registers requires
the kernel to update the vectors page at each context switch in order to
write a new TLS pointer. This write must be done via the userspace
mapping, since aliasing caches can lead to expensive flushing when using
kmap. Finally, this requires the vectors page to be mapped r/w for
kernel and r/o for user, which has implications for things like put_user
which must trigger CoW appropriately when targeting user pages.
The upshot of all this is that a v6/v7 kernel makes use of domains to
segregate kernel and user memory accesses. This has the nasty
side-effect of making device mappings executable, which has been
observed to cause subtle bugs on recent cores (e.g. Cortex-A15
performing a speculative instruction fetch from the GIC and acking an
interrupt in the process).
This patch solves this problem by removing the remaining domain support
from ARMv6. A new memory type is added specifically for the vectors page
which allows that page (and only that page) to be mapped as user r/o,
kernel r/w. All other user r/o pages are mapped also as kernel r/o.
Patch co-developed with Russell King.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Now that we select HAVE_EFFICIENT_UNALIGNED_ACCESS for ARMv6+ CPUs,
replace the __LINUX_ARM_ARCH__ check in uaccess.h with the new symbol.
Signed-off-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Booting on Feroceon CPUs requires the L2 cache to be turned off. With
some kernel configurations (notably CONFIG_ARM_PATCH_PHYS_VIRT
disabled) the kernel will boot even if the L2 is turned on.
However there may be subtle breakage, and when PATCH_PHYS_VIRT is
enabled it is very likely that booting with L2 will crash at early
boot before any kernel diagnostic output.
The diagnostic message is intended to discourage people from shipping
bootloaders that leave the L2 turned on.
The issue on Feroceon is that the L2 is bypassed when the L1 caches
are disabled. So the decompressor will place parts of the kernel image
into the L2 and the early cache-off boot code in head.S will write to
parts of the kernel image, bypassing the L2 and creating inconsistency.
Tested on ARM Kirkwood.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Add the trivial support necessary to get hardware breakpoints
working for GDB on ARMv8 simulators running in AArch32 mode.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Christopher Covington <cov@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The 32-bit sched_clock interface has supported 64 bits since 3.13-rc1.
Upgrade to the 64 bit function to allow us to remove the 32 bit
registration interface.
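The conversion is a one-line change per caller, plus widening the read
hook's return type; a hedged illustration with a hypothetical counter
(counter_base, example_sched_clock_read and rate are placeholders):

static void __iomem *counter_base;	/* placeholder MMIO counter */

static u64 notrace example_sched_clock_read(void)
{
	return readl_relaxed(counter_base);	/* 32-bit counter widened to u64 */
}

/* before: setup_sched_clock(example_read32, 32, rate); */
sched_clock_register(example_sched_clock_read, 32, rate);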
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
The A12 behaves as the A7/A15 does with respect to setting the SMP bit, and
doesn't require TLB ops broadcasting to be explicitly enabled like the A9 does.
Note that as the ACTLR cannot (usually) be written from the non-secure world,
it is the responsibility of the bootloader/firmware to set this bit per core;
it is done here in Linux only as a last resort in case of bad firmware.
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Pull SELinux fixes from James Morris.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security:
SELinux: Fix kernel BUG on empty security contexts.
selinux: add SOCK_DIAG_BY_FAMILY to the list of netlink message types
Pull vfs fixes from Al Viro:
"A couple of fixes, both -stable fodder. The O_SYNC bug is fairly
old..."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
fix a kmap leak in virtio_console
fix O_SYNC|O_APPEND syncing the wrong range on write()
While we are at it, don't do kmap() under kmap_atomic(), *especially*
for a page we'd allocated with GFP_KERNEL. It's spelled "page_address",
and had it been more than that, we'd be in real trouble - kmap_high()
can block, and doing that while holding kmap_atomic() is a Bad Idea(tm).
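To illustrate the point (not the exact driver hunk): a GFP_KERNEL page
is lowmem, so there is nothing to map in the first place, and kmap()
can sleep, so it must never be called with a kmap_atomic() mapping held.

struct page *page = alloc_page(GFP_KERNEL);
char *buf = page_address(page);		/* instead of kmap(page) */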
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
It actually goes back to 2004 ([PATCH] Concurrent O_SYNC write support)
when sync_page_range() had been introduced; generic_file_write{,v}() correctly
synced
pos_after_write - written .. pos_after_write - 1
but generic_file_aio_write() synced
pos_before_write .. pos_before_write + written - 1
instead. Which is not the same thing with O_APPEND, obviously.
A couple of years later the correct variant was killed off when
everything switched to using generic_file_aio_write().
All users of generic_file_aio_write() are affected, and the same bug
has been copied into other instances of ->aio_write().
The fix is trivial; the only subtle point is that generic_write_sync()
ought to be inlined to avoid calculations useless for the majority of
calls.
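A hedged sketch of the caller-side shape of the fix in ->aio_write():
derive the range to sync from the position after the write, so that
O_APPEND writes sync the bytes that were actually written.

if (ret > 0) {
	ssize_t err;

	/* ki_pos has already been advanced past the written bytes */
	err = generic_write_sync(file, iocb->ki_pos - ret, ret);
	if (err < 0)
		ret = err;
}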
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Pull btrfs fixes from Chris Mason:
"This is a small collection of fixes"
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
Btrfs: fix data corruption when reading/updating compressed extents
Btrfs: don't loop forever if we can't run because of the tree mod log
btrfs: reserve no transaction units in btrfs_ioctl_set_features
btrfs: commit transaction after setting label and features
Btrfs: fix assert screwup for the pending move stuff
Pull perf fixes from Ingo Molnar:
"Tooling fixes, mostly related to the KASLR fallout, but also other
fixes"
* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf buildid-cache: Check relocation when checking for existing kcore
perf tools: Adjust kallsyms for relocated kernel
perf tests: No need to set up ref_reloc_sym
perf symbols: Prevent the use of kcore if the kernel has moved
perf record: Get ref_reloc_sym from kernel map
perf machine: Set up ref_reloc_sym in machine__create_kernel_maps()
perf machine: Add machine__get_kallsyms_filename()
perf tools: Add kallsyms__get_function_start()
perf symbols: Fix symbol annotation for relocated kernel
perf tools: Fix include for non x86 architectures
perf tools: Fix AAAAARGH64 memory barriers
perf tools: Demangle kernel and kernel module symbols too
perf/doc: Remove mention of non-existent set_perf_event_pending() from design.txt
When using a mix of compressed file extents and prealloc extents, it
is possible to fill a page of a file with random, garbage data from
some unrelated previous use of the page, instead of a sequence of zeroes.
A simple sequence of steps to get into such case, taken from the test
case I made for xfstests, is:
_scratch_mkfs
_scratch_mount "-o compress-force=lzo"
$XFS_IO_PROG -f -c "pwrite -S 0x06 -b 18670 266978 18670" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "falloc 26450 665194" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "truncate 542872" $SCRATCH_MNT/foobar
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foobar
This results in the following file items in the fs tree:
item 4 key (257 INODE_ITEM 0) itemoff 15879 itemsize 160
inode generation 6 transid 6 size 542872 block group 0 mode 100600
item 5 key (257 INODE_REF 256) itemoff 15863 itemsize 16
inode ref index 2 namelen 6 name: foobar
item 6 key (257 EXTENT_DATA 0) itemoff 15810 itemsize 53
extent data disk byte 0 nr 0 gen 6
extent data offset 0 nr 24576 ram 266240
extent compression 0
item 7 key (257 EXTENT_DATA 24576) itemoff 15757 itemsize 53
prealloc data disk byte 12849152 nr 241664 gen 6
prealloc data offset 0 nr 241664
item 8 key (257 EXTENT_DATA 266240) itemoff 15704 itemsize 53
extent data disk byte 12845056 nr 4096 gen 6
extent data offset 0 nr 20480 ram 20480
extent compression 2
item 9 key (257 EXTENT_DATA 286720) itemoff 15651 itemsize 53
prealloc data disk byte 13090816 nr 405504 gen 6
prealloc data offset 0 nr 258048
The on disk extent at offset 266240 (which corresponds to 1 single disk block),
contains 5 compressed chunks of file data. Each of the first 4 compresses 4096
bytes of file data, while the last one compresses only 3024 bytes of file data.
Therefore a read into the file region [285648 ; 286720[ (length = 4096 - 3024 =
1072 bytes) should always return zeroes (our next extent is a prealloc one).
The solution here is for the compression code path to zero the remaining
(untouched) bytes of the last page it uncompressed data into, since the upper
layer (fs/btrfs/extent_io.c:__do_readpage()) does not know how much space the
file data consumes in that last page. In __do_readpage we were correctly zeroing
the remainder of the page but only if it corresponds to the last page of the inode
and if the inode's size is not a multiple of the page size.
This would not only return random data on reads, but also permanently
store random data when updating parts of the region that should be zeroed.
For the example above, it means updating a single byte in the region [285648 ; 286720[
would store that byte correctly but also store random data on disk.
A test case for xfstests follows soon.
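A hedged sketch of the idea in the compression read path (names and
variables approximate): once the uncompressed bytes have been copied
into the final destination page, zero whatever part of that page the
data did not reach.

if (bytes_filled < PAGE_CACHE_SIZE) {
	char *kaddr = kmap_atomic(page);

	memset(kaddr + bytes_filled, 0, PAGE_CACHE_SIZE - bytes_filled);
	kunmap_atomic(kaddr);
}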
Signed-off-by: Filipe David Borba Manana <fdmanana@gmail.com>
Signed-off-by: Chris Mason <clm@fb.com>
A user reported a 100% cpu hang with my new delayed ref code. Turns out I
forgot to increase the count check when we can't run a delayed ref because of
the tree mod log. If we can't run any delayed refs during this there is no
point in continuing to look, and we need to break out. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Chris Mason <clm@fb.com>
The modifications to the superblock added in the patch "btrfs: add ioctls to
query/change feature bits online" don't need to reserve metadata blocks when
starting a transaction.
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
The set_fslabel ioctl uses btrfs_end_transaction, which means it's
possible that the change will be lost if the system crashes, same for
the newly set features. Let's use btrfs_commit_transaction instead.
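The change itself is a one-liner per ioctl (a sketch):

/* was: ret = btrfs_end_transaction(trans, root); */
ret = btrfs_commit_transaction(trans, root);	/* persist the label/feature change now */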
Signed-off-by: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Chris Mason <clm@fb.com>
Wang noticed that he was failing btrfs/030 even though Filipe and I couldn't
reproduce it. It turns out this is because Wang didn't have CONFIG_BTRFS_ASSERT
set, which meant that a key part of Filipe's original patch was not being built in.
This appears to be a mix-up when merging Filipe's patch, as it does not exist in
his original patch. Fix this by changing how we make sure del_waiting_dir_move
asserts that it did not error, and take the function out of the ifdef check.
This makes btrfs/030 pass with the assert on or off. Thanks,
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Filipe Manana <fdmanana@gmail.com>
Signed-off-by: Chris Mason <clm@fb.com>
Merge tag 'pinctrl-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
Pull pinctrl fixes from Linus Walleij:
"First round of pin control fixes for v3.14:
- Protect pinctrl_list_add() with the proper mutex. This was
identified by RedHat. Caused nasty locking warnings was rootcased
by Stanislaw Gruszka.
- Avoid adding dangerous debugfs files when either half of the
subsystem is unused: pinmux or pinconf.
- Various fixes to various drivers: locking, hardware particulars, DT
parsing, error codes"
* tag 'pinctrl-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
pinctrl: tegra: return correct error type
pinctrl: do not init debugfs entries for unimplemented functionalities
pinctrl: protect pinctrl_list add
pinctrl: sirf: correct the pin index of ac97_pins group
pinctrl: imx27: fix offset calculation in imx_read_2bit
pinctrl: vt8500: Change devicetree data parsing
pinctrl: imx27: fix wrong offset to ICONFB
pinctrl: at91: use locked variant of irq_set_handler
Pull x86 fixes from Peter Anvin:
"Quite a varied little collection of fixes. Most of them are
relatively small or isolated; the biggest one is Mel Gorman's fixes
for TLB range flushing.
A couple of AMD-related fixes (including not crashing when given an
invalid microcode image) and fix a crash when compiled with gcov"
* 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86, microcode, AMD: Unify valid container checks
x86, hweight: Fix BUG when booting with CONFIG_GCOV_PROFILE_ALL=y
x86/efi: Allow mapping BGRT on x86-32
x86: Fix the initialization of physnode_map
x86, cpu hotplug: Fix stack frame warning in check_irq_vectors_for_cpu_disable()
x86/intel/mid: Fix X86_INTEL_MID dependencies
arch/x86/mm/srat: Skip NUMA_NO_NODE while parsing SLIT
mm, x86: Revisit tlb_flushall_shift tuning for page flushes except on IvyBridge
x86: mm: change tlb_flushall_shift for IvyBridge
x86/mm: Eliminate redundant page table walk during TLB range flushing
x86/mm: Clean up inconsistencies when flushing TLB ranges
mm, x86: Account for TLB flushes only when debugging
x86/AMD/NB: Fix amd_set_subcaches() parameter type
x86/quirks: Add workaround for AMD F16h Erratum792
x86, doc, kconfig: Fix dud URL for Microcode data
I missed a couple of errors in reviewing the patches converting jfs
to use the generic posix ACL function. Setting ACLs currently
fails with -EOPNOTSUPP.
Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Reported-by: Michael L. Semon <mlsemon35@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>