Use it to get the memory size under limit_pfn,
replacing the local version in the x86 reserve_initrd() path.
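A minimal sketch of such a helper, walking the memblock memory regions and clamping each one to limit_pfn (the name memblock_mem_size() and the accessors used here follow my reading of the memblock API and should be treated as assumptions):

	phys_addr_t __init memblock_mem_size(unsigned long limit_pfn)
	{
		unsigned long pages = 0;
		unsigned long start_pfn, end_pfn;
		struct memblock_region *r;

		for_each_memblock(memory, r) {
			start_pfn = memblock_region_memory_base_pfn(r);
			end_pfn = memblock_region_memory_end_pfn(r);
			/* Count only the part of the region below limit_pfn. */
			start_pfn = min_t(unsigned long, start_pfn, limit_pfn);
			end_pfn = min_t(unsigned long, end_pfn, limit_pfn);
			pages += end_pfn - start_pfn;
		}

		return (phys_addr_t)pages << PAGE_SHIFT;
	}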
-v2: remove an unneeded cast pointed out by HPA.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-29-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
That is for bootloaders: setup_data is in setup_header, and the bootloader
copies it from the bzImage, so old bootloaders should already keep it as 0.
Old kexec-tools has always set setup_data to 0 for ELF images, so it is ok.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-28-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
The 64-bit entry point is now fixed at 0x200 and can not be changed anymore.
Update the comments to reflect that.
Also document it in boot.txt.
-v2: fix some grammar errors
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-27-git-send-email-yinghai@kernel.org
Cc: Rob Landley <rob@landley.net>
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
xloadflags bit 1 indicates that we can load the kernel and all data
structures above 4G; it is set if the kernel is relocatable and 64-bit.
The bootloader checks whether xloadflags bit 1 is set to decide if it may
load the ramdisk and kernel above 4G.
When the bootloader loads the ramdisk above 4G, it fills
ext_ramdisk_image/size with the high 32 bits.
The kernel uses get_ramdisk_image/size, which fold in ext_ramdisk_image/size,
to get the right position of the ramdisk.
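A minimal sketch of the kernel-side accessor, assuming ext_ramdisk_image carries the upper 32 bits (get_ramdisk_size() would combine ext_ramdisk_size the same way):

	static u64 __init get_ramdisk_image(void)
	{
		u64 ramdisk_image = boot_params.hdr.ramdisk_image;

		/* Fold in the high 32 bits provided by a 64-bit-aware bootloader. */
		ramdisk_image |= (u64)boot_params.ext_ramdisk_image << 32;

		return ramdisk_image;
	}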
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Cc: Rob Landley <rob@landley.net>
Cc: Matt Fleming <matt.fleming@intel.com>
Cc: Gokul Caushik <caushik1@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joe Millenbach <jmillenbach@gmail.com>
Link: http://lkml.kernel.org/r/1359058816-7615-26-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
We should set up identity mappings only for usable memory ranges under
max_pfn, otherwise we cause the same problem that was fixed by
x86, mm: Only direct map addresses that are marked as E820_RAM
This patch exposes the pfn_mapped array and only sets up identity mappings
for the ranges in that array.
It relies on the new kernel_ident_mapping_init(), which can handle existing
pgd/pud entries across different calls.
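A sketch of the per-range setup described above, assuming the kexec page-table code simply walks pfn_mapped and hands each range to kernel_ident_mapping_init() (nr_pfn_mapped and the local variable names are illustrative):

	struct x86_mapping_info info = {
		.alloc_pgt_page	= alloc_pgt_page,
		.context	= image,
		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
	};
	unsigned long mstart, mend;
	int result, i;

	for (i = 0; i < nr_pfn_mapped; i++) {
		mstart = pfn_mapped[i].start << PAGE_SHIFT;
		mend   = pfn_mapped[i].end << PAGE_SHIFT;

		result = kernel_ident_mapping_init(&info, level4p, mstart, mend);
		if (result)
			return result;
	}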
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-25-git-send-email-yinghai@kernel.org
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
The local ident_mapping_init() checks whether the pgd/pud is present for
every 2M, so when several 2M ranges fall in the same PUD it keeps rechecking
the same pud, while init_level4_page() does not check existing pgd/pud
entries at all.
We can use the generic mapping_init with different settings in info to
replace those two locally grown functions.
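The key piece is the reuse-or-allocate check in the generic helper; roughly (a sketch, with ident_pud_init() and the table flags as illustrative names):

	pgd_t *pgd = pgd_page + pgd_index(addr);
	pud_t *pud;
	int result;

	if (pgd_present(*pgd)) {
		/* Reuse the PUD page installed by an earlier call. */
		pud = pud_offset(pgd, 0);
		result = ident_pud_init(info, pud, addr, next);
	} else {
		pud = (pud_t *)info->alloc_pgt_page(info->context);
		if (!pud)
			return -ENOMEM;
		result = ident_pud_init(info, pud, addr, next);
		set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
	}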
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-24-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
When the first kernel is booted with memmap= or mem= to limit max_pfn,
kexec can load the second kernel above that max_pfn.
In that case we need to set up the identity mapping for the whole image
instead of just the first 2M.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-23-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
The 64-bit kernel now supports more than 1T of RAM and kexec-tools can find
a buffer above 1T, so remove that obsolete limitation and use MAXMEM instead.
Tested on a system with more than 1024G of RAM.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-22-git-send-email-yinghai@kernel.org
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
commit 08da5a2ca
x86_64: Early segment setup for VT
sets up the LDT and TR to a valid state in order to speed up boot
decompression under VT.
That code is placed in the 64-bit section and uses a GDT that is only
loaded from the 32-bit path.
That breaks booting with a 64-bit bootloader that does not go through the
32-bit path, jumps to startup_64 directly and has a different GDT.
Move those lines into the 32-bit path, after its GDT is loaded.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-21-git-send-email-yinghai@kernel.org
Cc: Zachary Amsden <zamsden@gmail.com>
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
We need to move some code to the 32-bit section in the following patch:
x86, boot: Move lldt/ltr out of 64bit code section
but that would push startup_64 down from 0x200.
According to hpa, we can not change the position of startup_64; it is part
of the ABI.
We can move verify_cpu and no_longmode down instead, because verify_cpu is
reached via a function call and no_longmode does not return, so we don't
need extra code to jump back.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-20-git-send-email-yinghai@kernel.org
Cc: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
boot/compressed/misc.c is used for the bzImage on both 64-bit and 32-bit,
and cmd_line_ptr may point to a buffer above 4G, so cmd_line_ptr needs to
be 64-bit wide or the high 32 bits get capped off.
So change the data type to unsigned long, which is 64-bit on 64-bit kernels
and gets the correct address of the command line buffer.
It is still fine for the 32-bit bzImage, because unsigned long on a 32-bit
kernel is still 32-bit.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-19-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
cmdline.c::__cmdline_find_option...() are shared between the 16-bit setup
code and the 32/64-bit decompressor code.
For the 32/64-bit-only path via kexec we should not check whether the
pointer is below 1M, as the command line can be placed above 1M, or even
above 4G.
Move the accessibility check out of __cmdline_find_option() so the
decompressor in misc.c can parse the command line correctly.
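On the 16-bit setup side the wrapper keeps the old restriction; a sketch of what that looks like (the exact bound and wrapper shape follow my reading of the setup code):

	int cmdline_find_option(const char *option, char *buffer, int bufsize)
	{
		unsigned long cmd_line_ptr = boot_params.hdr.cmd_line_ptr;

		if (!cmd_line_ptr)
			return -1;	/* no command line at all */
		if (cmd_line_ptr >= 0x100000)
			return -1;	/* inaccessible from the 16-bit code */

		return __cmdline_find_option(cmd_line_ptr, option, buffer, bufsize);
	}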
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-18-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Add an accessor function for the command line address.
Later we will add support for holding a 64-bit address via ext_cmd_line_ptr.
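A minimal sketch of such an accessor in the decompressor, assuming the 64-bit extension will simply be OR-ed in later (how the code reaches struct boot_params is glossed over here):

	static unsigned long get_cmd_line_ptr(void)
	{
		unsigned long cmd_line_ptr = boot_params.hdr.cmd_line_ptr;

		/* later: cmd_line_ptr |= (u64)boot_params.ext_cmd_line_ptr << 32; */

		return cmd_line_ptr;
	}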
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-17-git-send-email-yinghai@kernel.org
Cc: Gokul Caushik <caushik1@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joe Millenbach <jmillenbach@gmail.com>
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
There are several places that need the ramdisk information early, for
reserving and relocating it.
Use accessor functions to make the code more readable and consistent.
Later we will add ext_ramdisk_image/size in those functions to support
loading the ramdisk above 4G.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-16-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
They are the same, so move them out of head32.c/head64.c and into setup.c.
We are using memblock, which handles overlaps properly, so we don't need to
reserve something first just to hold the location; we only need to make sure
they are reserved before memblock is used to find free memory.
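A sketch of the shared early reservation once it lives in setup.c (the helper name early_reserve_initrd() is illustrative):

	static void __init early_reserve_initrd(void)
	{
		u64 ramdisk_image = boot_params.hdr.ramdisk_image;
		u64 ramdisk_size  = boot_params.hdr.ramdisk_size;
		u64 ramdisk_end   = PAGE_ALIGN(ramdisk_image + ramdisk_size);

		if (!boot_params.hdr.type_of_loader ||
		    !ramdisk_image || !ramdisk_size)
			return;		/* No initrd provided by bootloader */

		memblock_reserve(ramdisk_image, ramdisk_end - ramdisk_image);
	}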
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-15-git-send-email-yinghai@kernel.org
Reviewed-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
max_pfn_mapped is not set correctly until init_memory_mapping(), so don't
print its initial value on 64-bit.
Also use KERNEL_IMAGE_SIZE directly for the highmap cleanup.
-v2: update comments about max_pfn_mapped according to Stefano Stabellini.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-14-git-send-email-yinghai@kernel.org
Acked-by: Borislav Petkov <bp@suse.de>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
We only map a single 2 MiB page per #PF, even though we should be able
to do this a full gigabyte at a time with no additional memory cost.
This is a workaround for a broken AMD reference BIOS (and its
derivatives in shipping systems) which maps a large chunk of memory as
WB in the MTRR system but will #MC if the processor wanders off and
tries to prefetch that memory, which can happen any time the memory is
mapped in the TLB.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-13-git-send-email-yinghai@kernel.org
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
[ hpa: rewrote the patch description ]
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Linear mode (CR0.PG = 0) is mutually exclusive with 64-bit mode; all
64-bit code has to use page tables. This makes it awkward, before we
have properly set up all-covering page tables, to access objects that
are outside the static kernel range.
So far we have dealt with that simply by mapping a fixed amount of
low memory, but that fails in at least two upcoming use cases:
1. We will support loading and running the kernel, struct boot_params,
ramdisk, command line, etc. above the 4 GiB mark.
2. We need to access the ramdisk early to get the microcode and apply that
update as early as possible.
We could use early_ioremap to access them too, but that would make the code
messy and harder to unify with 32-bit.
Hence, set up a #PF handler and use a fixed number of buffers to set up
page tables on demand. If the buffers fill up then we simply flush
them and start over. These buffers are all in __initdata, so it does
not increase RAM usage at runtime.
Thus, with the help of the #PF handler, we can set the final kernel
mapping from blank, and switch to init_level4_pgt later.
During the switchover in head_64.S, before the #PF handler is available,
we use three pages to handle the kernel crossing the 1G and 512G boundaries
with a shared page, by playing games with page aliasing: the same page is
mapped twice in the higher-level tables with appropriate wraparound.
The kernel region itself will be properly mapped; other mappings may
be spurious.
early_make_pgtable() uses the kernel high mapping address to access the
pages it uses to set up the page table.
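A condensed sketch of the on-demand fill in the #PF handler, assuming a small __initdata pool of page-table pages that is flushed and reused when it fills up (early_level4_pgt, early_dynamic_pgts, next_early_pgt and the helpers are named from my reading and may differ):

	int __init early_make_pgtable(unsigned long address)
	{
		unsigned long physaddr = address - __PAGE_OFFSET;
		pgdval_t pgd, *pgd_p;
		pudval_t pud, *pud_p;
		pmdval_t *pmd_p;

		if (physaddr >= MAXMEM)
			return -1;		/* not ours, let the fault be fatal */
	again:
		/* Page-table pages are reached through the kernel high mapping. */
		pgd_p = &early_level4_pgt[pgd_index(address)].pgd;
		pgd = *pgd_p;

		if (pgd) {
			pud_p = (pudval_t *)((pgd & PTE_PFN_MASK) +
					     __START_KERNEL_map - phys_base);
		} else {
			if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) {
				reset_early_page_tables();	/* flush pool, retry */
				goto again;
			}
			pud_p = (pudval_t *)early_dynamic_pgts[next_early_pgt++];
			memset(pud_p, 0, PAGE_SIZE);
			*pgd_p = (pgdval_t)pud_p - __START_KERNEL_map +
				 phys_base + _KERNPG_TABLE;
		}

		pud_p += pud_index(address);
		pud = *pud_p;

		if (pud) {
			pmd_p = (pmdval_t *)((pud & PTE_PFN_MASK) +
					     __START_KERNEL_map - phys_base);
		} else {
			if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) {
				reset_early_page_tables();
				goto again;
			}
			pmd_p = (pmdval_t *)early_dynamic_pgts[next_early_pgt++];
			memset(pmd_p, 0, PAGE_SIZE);
			*pud_p = (pudval_t)pmd_p - __START_KERNEL_map +
				 phys_base + _KERNPG_TABLE;
		}

		/* Finally map the faulting address with a 2M page. */
		pmd_p[pmd_index(address)] = (physaddr & PMD_MASK) + early_pmd_flags;

		return 0;
	}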
-v4: Add phys_base offset to make kexec happy, and add
init_mapping_kernel() - Yinghai
-v5: fix compiling with Xen, and add back ident level3 and level2 for Xen;
also move init_level4_pgt back from BSS to DATA again,
because we have to clear it anyway. - Yinghai
-v6: switch to init_level4_pgt in init_mem_mapping. - Yinghai
-v7: remove the not-needed clear_page for init_level4_page;
it is already filled with .fill 512,8,0 in head_64.S - Yinghai
-v8: we need to keep that handler alive until init_mem_mapping and must not
let early_trap_init() trash that early #PF handler.
So split early_trap_pf_init() out and move it down. - Yinghai
-v9: make the switchover cover only the kernel space instead of 1G, so we
can avoid touching possible memory holes. - Yinghai
-v11: change the far jmp back to a far return to initial_code; that is
needed to fix the failure reported by Konrad on AMD systems. - Yinghai
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-12-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
After we switch to using the #PF handler to help set up the page table,
init_level4_pgt will only have entries set after init_mem_mapping().
We need to move the copying of init_level4_pgt into trampoline_pgd to
after that point, so split the reservation from the setup and move the
setup after init_mem_mapping().
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-11-git-send-email-yinghai@kernel.org
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
Acked-by: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
With the #PF-handler way of setting up the early page table, level3_ident
will go away on the 64-bit native path.
So just use the entries in init_level4_pgt to set them in trampoline_pgd.
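Roughly, the trampoline page directory is then seeded straight from the kernel's own top-level entries; a sketch (the two indices reflect my understanding of the layout):

	u64 *trampoline_pgd = (u64 *)__va(real_mode_header->trampoline_pgd);

	/* Low mapping, taken from the direct-mapping PGD entry ... */
	trampoline_pgd[0] = init_level4_pgt[pgd_index(__PAGE_OFFSET)].pgd;
	/* ... plus the kernel text mapping in the top slot. */
	trampoline_pgd[511] = init_level4_pgt[511].pgd;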
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-10-git-send-email-yinghai@kernel.org
Cc: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
Acked-by: Jarkko Sakkinen <jarkko.sakkinen@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
We want to support struct boot_params (formerly known as the
zero-page, or real-mode data) above the 4 GiB mark. We will have the #PF
handler set up the page table for not-yet-accessible RAM early, but want to
keep that before x86_64_start_reservations() to limit the code change to
the native path only.
Also, we will need the ramdisk info in struct boot_params to access the
microcode blob in the ramdisk in x86_64_start_kernel(), so copying
struct boot_params early makes accessing the ramdisk info simple.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-9-git-send-email-yinghai@kernel.org
Cc: Alexander Duyck <alexander.h.duyck@intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
It is a simple version of kernel_physical_mapping_init(); it builds a page
table that will be used later.
Use mapping_info to control:
1. the alloc_pgt_page method
2. whether the PMD is EXEC
3. whether the pgd uses the kernel low mapping or the ident mapping.
It will be used to replace some local versions in kexec, hibernation, etc.
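A sketch of the control structure and entry point this describes (field names follow my reading and may differ slightly):

	struct x86_mapping_info {
		void *(*alloc_pgt_page)(void *);	/* allocate a page-table page */
		void *context;				/* context for alloc_pgt_page */
		unsigned long pmd_flag;			/* flags for the 2M PMD entries */
		bool kernel_mapping;			/* kernel low mapping vs. ident mapping */
	};

	int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
				      unsigned long addr, unsigned long end);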
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-8-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Trampoline code is executed by APs with the kernel low mapping on 64-bit,
so we need to mark the trampoline code EXEC early, before we boot the APs.
The problem was found after switching to the #PF-handler way of setting up
the page table, since we no longer set the initial kernel low mapping with
EXEC in arch/x86/kernel/head_64.S.
Change to using early_initcall instead, which makes sure the trampoline
has EXEC set.
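A sketch of the registration; the function name set_real_mode_permissions() and the size used here are placeholders for however the real-mode permission setup is actually structured:

	static int __init set_real_mode_permissions(void)
	{
		unsigned long text_start =
			(unsigned long) __va(real_mode_header->text_start);
		size_t text_size = PAGE_SIZE;		/* placeholder size */

		/* APs run this with the kernel low mapping, so it must be EXEC. */
		set_memory_x(text_start, text_size >> PAGE_SHIFT);

		return 0;
	}
	/* early initcalls run before the APs are brought up. */
	early_initcall(set_real_mode_permissions);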
-v2: Merge two comments according to Borislav Petkov <bp@alien8.de>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-7-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Just like the way we calculate next for pud and pmd, i.e. round down and
add the size.
Also, do not do boundary-checking with 'next', and just pass 'end' down
to phys_pud_init() instead. Because the loop in phys_pud_init() stops at
PTRS_PER_PUD and thus can handle a possibly bigger 'end' properly.
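In the pgd loop of kernel_physical_mapping_init() that amounts to something like this (a simplified sketch that skips the allocation of a missing pud page):

	for (; start < end; start = next) {
		pgd_t *pgd = pgd_offset_k(start);
		pud_t *pud;

		/* Round down to the PGD boundary, then add one PGD's worth;
		 * phys_pud_init() stops at PTRS_PER_PUD, so passing the full
		 * 'end' down is safe. */
		next = (start & PGDIR_MASK) + PGDIR_SIZE;

		pud = (pud_t *)pgd_page_vaddr(*pgd);
		last_map_addr = phys_pud_init(pud, __pa(start), __pa(end),
					      page_size_mask);
	}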
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-6-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Separate out the reservation of the kernel static memory areas into a
separate function.
Also add support for the case when memmap=xxM$yyM is used without exactmap:
we need to remove the reserved range first, before we add the E820_RAM
range, otherwise the added E820_RAM range will be ignored.
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-5-git-send-email-yinghai@kernel.org
Cc: Jacob Shin <jacob.shin@amd.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
While debugging loading the kernel above 4G, I found that one page was not
being used in the pre-allocated BRK area for early page allocation:
pgt_buf_top is the first address that can not be used, so we should check
whether the new end is above that top, otherwise the last page will not be
used.
Fix that check and also add a printout for allocations from the
pre-allocated BRK area, to catch possible bugs later.
But after we get that page back for page tables, it triggers a bug in the
page-table allocation with Xen: we need to avoid using a page as a page
table to map a range that overlaps with that page-table page itself.
Add a check for that overlap; when it happens, use memblock allocation
instead. That fixes the crash on a Xen PV guest with 2G that Stefan found.
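A sketch of the corrected check plus the overlap fallback in the low-page allocator (alloc_low_pages() and can_use_brk_pgt are names from my reading; details may differ):

	__ref void *alloc_low_pages(unsigned int num)
	{
		unsigned long pfn;

		/*
		 * Strict '>': pgt_buf_top is the first address that can not be
		 * used, so the last page right below it is still usable.  Also
		 * fall back to memblock when the pages would be used to map a
		 * range that overlaps the BRK page-table area itself.
		 */
		if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
			unsigned long ret = memblock_find_in_range(
					min_pfn_mapped << PAGE_SHIFT,
					max_pfn_mapped << PAGE_SHIFT,
					PAGE_SIZE * num, PAGE_SIZE);
			memblock_reserve(ret, PAGE_SIZE * num);
			pfn = ret >> PAGE_SHIFT;
		} else {
			pfn = pgt_buf_end;
			pgt_buf_end += num;
			printk(KERN_DEBUG "BRK [%#010lx, %#010lx] PGTABLE\n",
			       pfn << PAGE_SHIFT,
			       (pgt_buf_end << PAGE_SHIFT) - 1);
		}

		return __va(pfn << PAGE_SHIFT);
	}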
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-2-git-send-email-yinghai@kernel.org
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Coming patches to x86/mm2 require the changes and advanced baseline in
x86/boot.
Resolved Conflicts:
arch/x86/kernel/setup.c
mm/nobootmem.c
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Use the new sentinel field to detect bootloaders which fail to follow
protocol and don't initialize fields in struct boot_params that they
do not explicitly initialize to zero.
Based on an original patch and research by Yinghai Lu.
Changed by hpa to be invoked both in the decompression path and in the
kernel proper; the latter for the case where a bootloader takes over
decompression.
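A minimal sketch of the check, clearing only a few of the affected fields (the full list of fields that get cleaned is longer than shown here):

	static void sanitize_boot_params(struct boot_params *boot_params)
	{
		/*
		 * A protocol-conforming bootloader zeroes the whole structure,
		 * which also clears the sentinel byte; if the sentinel is still
		 * set, the loader copied a raw header and the fields it did not
		 * fill in explicitly contain garbage.
		 */
		if (boot_params->sentinel) {
			boot_params->ext_ramdisk_image = 0;
			boot_params->ext_ramdisk_size  = 0;
			boot_params->ext_cmd_line_ptr  = 0;
		}
	}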
Originally-by: Yinghai Lu <yinghai@kernel.org>
Link: http://lkml.kernel.org/r/1359058816-7615-26-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Define the 2.12 bzImage boot protocol: add xloadflags and additional
fields to allow the command line, initramfs and struct boot_params to
live above the 4 GiB mark.
The xloadflags now communicates if this is a 64-bit kernel with the
legacy 64-bit entry point and which of the EFI handover entry points
are supported.
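For reference, the bits communicated there look roughly like this in the boot parameter header (bit names per my reading of the 2.12 protocol):

	#define XLF_KERNEL_64			(1<<0)	/* legacy 64-bit entry at 0x200 */
	#define XLF_CAN_BE_LOADED_ABOVE_4G	(1<<1)
	#define XLF_EFI_HANDOVER_32		(1<<2)
	#define XLF_EFI_HANDOVER_64		(1<<3)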
Avoid adding new read flags to loadflags, because some bootloaders are
claimed to test the whole byte for == 1 to determine bzImage-ness, at
least until the issue can be researched further.
This is based on patches by Yinghai Lu and David Woodhouse.
Originally-by: Yinghai Lu <yinghai@kernel.org>
Originally-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: David Woodhouse <dwmw2@infradead.org>
Acked-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/r/1359058816-7615-26-git-send-email-yinghai@kernel.org
Cc: Rob Landley <rob@landley.net>
Cc: Gokul Caushik <caushik1@gmail.com>
Cc: Josh Triplett <josh@joshtriplett.org>
Cc: Joe Millenbach <jmillenbach@gmail.com>
Pull btrfs fixes from Chris Mason:
"It turns out that we had two crc bugs when running fsx-linux in a
loop. Many thanks to Josef, Miao Xie, and Dave Sterba for nailing it
all down. Miao also has a new OOM fix in this v2 pull as well.
Ilya fixed a regression Liu Bo found in the balance ioctls for pausing
and resuming a running balance across drives.
Josef's orphan truncate patch fixes an obscure corruption we'd see
during xfstests.
Arne's patches address problems with subvolume quotas. If the user
destroys quota groups incorrectly the FS will refuse to mount.
The rest are smaller fixes and plugs for memory leaks."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (30 commits)
Btrfs: fix repeated delalloc work allocation
Btrfs: fix wrong max device number for single profile
Btrfs: fix missed transaction->aborted check
Btrfs: Add ACCESS_ONCE() to transaction->abort accesses
Btrfs: put csums on the right ordered extent
Btrfs: use right range to find checksum for compressed extents
Btrfs: fix panic when recovering tree log
Btrfs: do not allow logged extents to be merged or removed
Btrfs: fix a regression in balance usage filter
Btrfs: prevent qgroup destroy when there are still relations
Btrfs: ignore orphan qgroup relations
Btrfs: reorder locks and sanity checks in btrfs_ioctl_defrag
Btrfs: fix unlock order in btrfs_ioctl_rm_dev
Btrfs: fix unlock order in btrfs_ioctl_resize
Btrfs: fix "mutually exclusive op is running" error code
Btrfs: bring back balance pause/resume logic
btrfs: update timestamps on truncate()
btrfs: fix btrfs_cont_expand() freeing IS_ERR em
Btrfs: fix a bug when llseek for delalloc bytes behind prealloc extents
Btrfs: fix off-by-one in lseek
...
Pull cifs fixes from Steve French:
"Two small cifs fixes"
* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
fs/cifs/cifs_dfs_ref.c: fix potential memory leakage
cifs: fix srcip_matches() for ipv6
Pull ARM fixes from Russell King:
"A number of fixes:
Patrik found a problem with preempt counting in the VFP assembly
functions which can cause the preempt count to be upset.
Nicolas fixed a problem with the parsing of the DT when it straddles a
1MB boundary.
Subhash Jadavani reported a problem with sparsemem and our highmem
support for cache maintenance for DMA areas, and TI found a bug in
their strongly ordered memory mapping type.
Also, three fixes by way of Will Deacon's tree from Dave Martin for
instruction compatibility and Marc Zyngier to fix hypervisor boot mode
issues."
* 'fixes' of git://git.linaro.org/people/rmk/linux-arm:
ARM: 7629/1: mm: Fix missing XN flag for for MT_MEMORY_SO
ARM: DMA: Fix struct page iterator in dma_cache_maint() to work with sparsemem
ARM: 7628/1: head.S: map one extra section for the ATAG/DTB area
ARM: 7627/1: Predicate preempt logic on PREEMP_COUNT not PREEMPT alone
ARM: virt: simplify __hyp_stub_install epilog
ARM: virt: boot secondary CPUs through the right entry point
ARM: virt: Avoid bx instruction for compatibility with <=ARMv4
Merge tag 'fixes-for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
Pull ARM SoC fixes from Olof Johansson:
"Here's a long-pending fixes pull request for arm-soc (I didn't send
one in the -rc4 cycle).
The larger deltas are from:
- A fixup of error paths in the mvsdio driver
- Header file move for a driver that hadn't been properly converted
to multiplatform on i.MX, which was causing build failures when
included
- Device tree updates for at91 dealing mostly with their new pinctrl
setup merged in 3.8 and mistakes in those initial configs
The rest are the normal mix of small fixes all over the place; sunxi,
omap, imx, mvebu, etc, etc."
* tag 'fixes-for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (40 commits)
mfd: vexpress-sysreg: Don't skip initialization on probe
ARM: vexpress: Enable A7 cores in V2P-CA15_A7's Device Tree
ARM: vexpress: extend the MPIDR range used for pen release check
ARM: at91/dts: correct comment in at91sam9x5.dtsi for mii
ARM: at91/at91_dt_defconfig: add at91sam9n12 SoC to DT defconfig
ARM: at91/at91_dt_defconfig: remove memory specification to cmdline
ARM: at91/dts: add macb mii pinctrl config for kizbox
ARM: at91: rm9200: remake the BGA as default version
ARM: at91: fix gpios on i2c-gpio for RM9200 DT
ARM: at91/at91sam9x5 DTS: add SCK USART pins
ARM: at91/at91sam9x5 DTS: correct wrong PIO BANK values on u(s)arts
ARM: at91/at91-pinctrl documentation: fix typo and add some details
ARM: kirkwood: fix missing #interrupt-cells property
mmc: mvsdio: use devm_ API to simplify/correct error paths.
clk: mvebu/clk-cpu.c: fix memory leakage
ARM: OMAP2+: omap4-panda: add UART2 muxing for WiLink shared transport
ARM: OMAP2+: DT node Timer iteration fix
ARM: OMAP2+: Fix section warning for omap_init_ocp2scp()
ARM: OMAP2+: fix build break for omapdrm
ARM: OMAP2: Fix missing omap2xxx_clkt_vps_late_init function calls
...
Merge tag 'pm+acpi-for-3.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull ACPI and power management fixes from Rafael Wysocki:
- Two cpuidle initialization fixes from Konrad Rzeszutek Wilk.
- cpufreq regression fixes for AMD processors from Borislav Petkov,
Stefan Bader, and Matthew Garrett.
- ACPI cpufreq fix from Thomas Schlichter.
- cpufreq and devfreq fixes related to incorrect usage of operating
performance points (OPP) framework and RCU from Nishanth Menon.
- APEI workaround for incorrect BIOS information from Lans Zhang.
* tag 'pm+acpi-for-3.8-rc5' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
cpufreq: Add module aliases for acpi-cpufreq
ACPI: Check MSR valid bit before using P-state frequencies
PM / devfreq: exynos4_bus: honor RCU lock usage
PM / devfreq: add locking documentation for recommended_opp
cpufreq: cpufreq-cpu0: use RCU locks around usage of OPP
cpufreq: OMAP: use RCU locks around usage of OPP
ACPI, APEI: Fixup incorrect 64-bit access width firmware bug
ACPI / processor: Get power info before updating the C-states
powernow-k8: Add a kconfig dependency on acpi-cpufreq
ACPI / cpuidle: Fix NULL pointer issues when cpuidle is disabled
intel_idle: Don't register CPU notifier if we are not running.
Merge tag 'regmap-fix-3.8-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap
Pull regmap fixes from Mark Brown:
"One more oversight in the debugfs code was reported and fixed, plus a
documentation fix."
* tag 'regmap-fix-3.8-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
regmap: fix small typo in regmap_bulk_write comment
regmap: debugfs: Fix seeking from the cache
Pull slave-dmaengine fixes from Vinod Koul:
"A few fixes on slave dmanengine. There are trivial fixes in imx-dma,
tegra-dma & ioat driver"
* 'fixes' of git://git.infradead.org/users/vkoul/slave-dma:
dma: tegra: implement flags parameters for cyclic transfer
dmaengine: imx-dma: Disable use of hw_chain to fix sg_dma transfers.
ioat: Fix DMA memory sync direction correct flag
btrfs_start_delalloc_inodes() locks the delalloc_inodes list, fetches the
first inode, unlocks the list, triggers btrfs_alloc_delalloc_work/
btrfs_queue_worker for this inode, and then locks the list and checks the
head of the list again. But because we did not delete the first inode it
dealt with, it fetches the same inode again. As a result, this function
allocates a huge number of btrfs_delalloc_work structures, and an OOM
happens.
Fix this problem by splicing the delalloc list.
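The fix amounts to the usual splice-onto-a-private-list pattern, roughly as below (a simplified sketch; the real patch also puts inodes back on the shared list and handles the error paths):

	LIST_HEAD(splice);
	struct btrfs_inode *binode;

	spin_lock(&root->fs_info->delalloc_lock);
	/* Detach the whole list so we only ever walk our private copy. */
	list_splice_init(&root->fs_info->delalloc_inodes, &splice);
	spin_unlock(&root->fs_info->delalloc_lock);

	while (!list_empty(&splice)) {
		binode = list_entry(splice.next, struct btrfs_inode,
				    delalloc_inodes);
		/* Drop it from the private list before queueing its work,
		 * so the same inode can never be fetched twice. */
		list_del_init(&binode->delalloc_inodes);

		/* queue btrfs_alloc_delalloc_work()/btrfs_queue_worker() here */
	}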
Reported-by: Alex Lyakas <alex.btrfs@zadarastorage.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
The max device number for the single profile is 1, not 0 (0 means 'as many
as possible'). Fix it.
Cc: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
First, though the current transaction->aborted check can stop the commit
early and avoid unnecessary operations, it is too early: some transaction
handles have not ended yet, and those handles may set transaction->aborted
after the check.
Second, when we commit the transaction, we wake up some worker threads to
flush the space cache and inode cache. Those threads also allocate some
transaction handles and may set transaction->aborted if a serious error
happens.
So we need more checks for ->aborted when committing the transaction. Fix it.
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
We may access and update transaction->aborted on different CPUs without a
lock, so we need the ACCESS_ONCE() wrapper to prevent the compiler from
creating unsolicited accesses and to make sure we get the right value.
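In practice the writer and reader of the flag end up looking roughly like this (a sketch):

	/* writer: a failing transaction handle records the error */
	ACCESS_ONCE(cur_trans->aborted) = errno;

	/* reader: the commit path re-checks it late enough to see it */
	if (unlikely(ACCESS_ONCE(cur_trans->aborted)))
		goto cleanup_transaction;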
Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
I noticed a WARN_ON going off when adding csums because we were going over
the amount of csum bytes that should have been allowed for an ordered
extent. This is a leftover from when we used to hold the csums privately
for direct io, but now we use the normal ordered sum stuff so we need to
make sure and check if we've moved on to another extent so that the csums
are added to the right extent. Without this we could end up with csums for
bytenrs that don't have extents to cover them yet. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
For compressed extents, the range of the checksum is covered by the disk
length, and the disk length is different from the ram length, so we need to
use the disk length instead to get the right checksum.
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
A user reported a BUG_ON(ret) that occurred during tree log replay. Ret was
-EAGAIN, so what I think happened is that we removed an extent that covered
a bitmap entry and an extent entry. We remove the part from the bitmap and
return -EAGAIN and then search for the next piece we want to remove, which
happens to be an entire extent entry, so we just free the sucker and return.
The problem is ret is still set to -EAGAIN so we trip the BUG_ON(). The
user used btrfs-zero-log so I'm not 100% sure this is what happened so I've
added a WARN_ON() to catch the other possibility. Thanks,
Reported-by: Jan Steffens <jan.steffens@gmail.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
We drop the extent map tree lock while we're logging extents, so somebody
could come in and merge another extent into this one and screw up our
logging, or they could even remove us from the list which would keep us from
logging the extent or freeing our ref on it, so we need to make sure to not
clear LOGGING until after the extent is logged, and then we can merge it to
adjacent extents. Thanks,
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
From Pawel Moll:
- makes the V2P-CA15_A7 (a.k.a. TC2) work with 3.8 kernels
- improves vexpress-sysreg.c behaviour on arm64 platforms
* 'vexpress/fixes' of git://git.linaro.org/people/pawelmoll/linux:
mfd: vexpress-sysreg: Don't skip initialization on probe
ARM: vexpress: Enable A7 cores in V2P-CA15_A7's Device Tree
ARM: vexpress: extend the MPIDR range used for pen release check