Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

In netdevice.h we removed the structure in net-next that is being
changed in 'net'.  In macsec.c and rtnetlink.c we have overlaps
between fixes in 'net' and the u64 attribute changes in 'net-next'.

The mlx5 conflicts have to do with vxlan support dependencies.

Signed-off-by: David S. Miller <davem@davemloft.net>
commit e800072c18
Author: David S. Miller <davem@davemloft.net>
Date:   2016-05-09 15:59:24 -04:00

147 changed files with 1271 additions and 577 deletions


@ -69,6 +69,7 @@ Jean Tourrilhes <jt@hpl.hp.com>
Jeff Garzik <jgarzik@pretzel.yyz.us>
Jens Axboe <axboe@suse.de>
Jens Osterkamp <Jens.Osterkamp@de.ibm.com>
John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
John Stultz <johnstul@us.ibm.com>
<josh@joshtriplett.org> <josh@freedesktop.org>
<josh@joshtriplett.org> <josh@kernel.org>


@ -32,6 +32,10 @@ Optional properties:
- target-supply : regulator for SATA target power
- phys : reference to the SATA PHY node
- phy-names : must be "sata-phy"
- ports-implemented : Mask that indicates which of the ports supported by
                      the HBA are available for software to use. Useful if
                      PORTS_IMPL is not programmed by the BIOS, which is
                      the case on some embedded SoCs.
Required properties when using sub-nodes:
- #address-cells : number of cells to encode an address
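For illustration, a minimal sketch of a node using the new property (values
invented here; the qcom apq8064 sata hunk later in this merge adds a real
'ports-implemented = <0x1>;' user):

	sata@29000000 {
		compatible = "generic-ahci";
		reg = <0x29000000 0x180>;
		phys = <&sata_phy0>;
		phy-names = "sata-phy";
		/* only port 0 is wired up; firmware leaves PORTS_IMPL at 0 */
		ports-implemented = <0x1>;
	};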


@ -69,18 +69,18 @@ LCO: Local Checksum Offload
LCO is a technique for efficiently computing the outer checksum of an
encapsulated datagram when the inner checksum is due to be offloaded.
The ones-complement sum of a correctly checksummed TCP or UDP packet is
equal to the sum of the pseudo header, because everything else gets
'cancelled out' by the checksum field. This is because the sum was
equal to the complement of the sum of the pseudo header, because everything
else gets 'cancelled out' by the checksum field. This is because the sum was
complemented before being written to the checksum field.
More generally, this holds in any case where the 'IP-style' ones complement
checksum is used, and thus any checksum that TX Checksum Offload supports.
That is, if we have set up TX Checksum Offload with a start/offset pair, we
know that _after the device has filled in that checksum_, the ones
know that after the device has filled in that checksum, the ones
complement sum from csum_start to the end of the packet will be equal to
_whatever value we put in the checksum field beforehand_. This allows us
to compute the outer checksum without looking at the payload: we simply
stop summing when we get to csum_start, then add the 16-bit word at
(csum_start + csum_offset).
the complement of whatever value we put in the checksum field beforehand.
This allows us to compute the outer checksum without looking at the payload:
we simply stop summing when we get to csum_start, then add the complement of
the 16-bit word at (csum_start + csum_offset).
Then, when the true inner checksum is filled in (either by hardware or by
skb_checksum_help()), the outer checksum will become correct by virtue of
the arithmetic.
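To make that arithmetic concrete, here is a small user-space C sketch of LCO
(helper names invented; an illustration, not the kernel implementation):

	#include <stdint.h>
	#include <stddef.h>

	/* Fold a big-endian ones-complement sum, as IP-style checksums do. */
	static uint16_t ones_sum(const uint8_t *p, size_t len, uint32_t sum)
	{
		while (len > 1) {
			sum += ((uint32_t)p[0] << 8) | p[1];
			p += 2;
			len -= 2;
		}
		if (len)
			sum += (uint32_t)p[0] << 8;
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)sum;
	}

	/*
	 * LCO: sum only the outer headers up to csum_start, then add the
	 * complement of the 16-bit value seeded at csum_start + csum_offset.
	 * No inner payload bytes are touched.
	 */
	static uint16_t lco_outer_sum(const uint8_t *pkt, size_t csum_start,
				      uint16_t seeded)
	{
		uint32_t sum = ones_sum(pkt, csum_start, 0);

		sum += (uint16_t)~seeded;
		while (sum >> 16)
			sum = (sum & 0xffff) + (sum >> 16);
		return (uint16_t)sum;
	}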


@ -872,9 +872,9 @@ F: drivers/perf/arm_pmu.c
F: include/linux/perf/arm_pmu.h
ARM PORT
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: arch/arm/
@ -886,35 +886,35 @@ F: arch/arm/plat-*/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc.git
ARM PRIMECELL AACI PL041 DRIVER
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: sound/arm/aaci.*
ARM PRIMECELL CLCD PL110 DRIVER
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: drivers/video/fbdev/amba-clcd.*
ARM PRIMECELL KMI PL050 DRIVER
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: drivers/input/serio/ambakmi.*
F: include/linux/amba/kmi.h
ARM PRIMECELL MMCI PL180/1 DRIVER
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: drivers/mmc/host/mmci.*
F: include/linux/amba/mmci.h
ARM PRIMECELL UART PL010 AND PL011 DRIVERS
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: drivers/tty/serial/amba-pl01*.c
F: include/linux/amba/serial.h
ARM PRIMECELL BUS SUPPORT
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
S: Maintained
F: drivers/amba/
F: include/linux/amba/bus.h
@ -1036,7 +1036,7 @@ L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
ARM/CLKDEV SUPPORT
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/include/asm/clkdev.h
@ -1093,9 +1093,9 @@ F: arch/arm/boot/dts/cx92755*
N: digicolor
ARM/EBSA110 MACHINE SUPPORT
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: arch/arm/mach-ebsa110/
F: drivers/net/ethernet/amd/am79c961a.*
@ -1124,9 +1124,9 @@ T: git git://git.berlios.de/gemini-board
F: arch/arm/mm/*-fa*
ARM/FOOTBRIDGE ARCHITECTURE
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: arch/arm/include/asm/hardware/dec21285.h
F: arch/arm/mach-footbridge/
@ -1457,7 +1457,7 @@ S: Maintained
ARM/PT DIGITAL BOARD PORT
M: Stefan Eletzhofer <stefan.eletzhofer@eletztrick.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
ARM/QUALCOMM SUPPORT
@ -1493,9 +1493,9 @@ S: Supported
F: arch/arm64/boot/dts/renesas/
ARM/RISCPC ARCHITECTURE
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: arch/arm/include/asm/hardware/entry-macro-iomd.S
F: arch/arm/include/asm/hardware/ioc.h
@ -1773,9 +1773,9 @@ F: drivers/clk/versatile/clk-vexpress-osc.c
F: drivers/clocksource/versatile.c
ARM/VFP SUPPORT
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: arch/arm/vfp/
@ -2924,7 +2924,7 @@ F: mm/cleancache.c
F: include/linux/cleancache.h
CLK API
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-clk@vger.kernel.org
S: Maintained
F: include/linux/clk.h
@ -3358,9 +3358,9 @@ S: Supported
F: drivers/net/ethernet/stmicro/stmmac/
CYBERPRO FB DRIVER
M: Russell King <linux@arm.linux.org.uk>
M: Russell King <linux@armlinux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.arm.linux.org.uk/
W: http://www.armlinux.org.uk/
S: Maintained
F: drivers/video/fbdev/cyber2000fb.*
@ -3885,7 +3885,7 @@ F: Documentation/devicetree/bindings/display/st,stih4xx.txt
DRM DRIVERS FOR VIVANTE GPU IP
M: Lucas Stach <l.stach@pengutronix.de>
R: Russell King <linux+etnaviv@arm.linux.org.uk>
R: Russell King <linux+etnaviv@armlinux.org.uk>
R: Christian Gmeiner <christian.gmeiner@gmail.com>
L: dri-devel@lists.freedesktop.org
S: Maintained
@ -4227,8 +4227,8 @@ F: Documentation/efi-stub.txt
F: arch/ia64/kernel/efi.c
F: arch/x86/boot/compressed/eboot.[ch]
F: arch/x86/include/asm/efi.h
F: arch/x86/platform/efi/*
F: drivers/firmware/efi/*
F: arch/x86/platform/efi/
F: drivers/firmware/efi/
F: include/linux/efi*.h
EFI VARIABLE FILESYSTEM
@ -6902,7 +6902,7 @@ L: linux-man@vger.kernel.org
S: Maintained
MARVELL ARMADA DRM SUPPORT
M: Russell King <rmk+kernel@arm.linux.org.uk>
M: Russell King <rmk+kernel@armlinux.org.uk>
S: Maintained
F: drivers/gpu/drm/armada/
@ -7902,7 +7902,7 @@ S: Supported
F: drivers/nfc/nxp-nci
NXP TDA998X DRM DRIVER
M: Russell King <rmk+kernel@arm.linux.org.uk>
M: Russell King <rmk+kernel@armlinux.org.uk>
S: Supported
F: drivers/gpu/drm/i2c/tda998x_drv.c
F: include/drm/i2c/tda998x.h
@ -7975,7 +7975,7 @@ F: arch/arm/*omap*/*pm*
F: drivers/cpufreq/omap-cpufreq.c
OMAP POWERDOMAIN SOC ADAPTATION LAYER SUPPORT
M: Rajendra Nayak <rnayak@ti.com>
M: Rajendra Nayak <rnayak@codeaurora.org>
M: Paul Walmsley <paul@pwsan.com>
L: linux-omap@vger.kernel.org
S: Maintained


@ -1,7 +1,7 @@
VERSION = 4
PATCHLEVEL = 6
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION = -rc7
NAME = Charred Weasel
# *DOCUMENTATION*


@ -58,6 +58,9 @@ config GENERIC_CSUM
config RWSEM_GENERIC_SPINLOCK
def_bool y
config ARCH_DISCONTIGMEM_ENABLE
def_bool y
config ARCH_FLATMEM_ENABLE
def_bool y
@ -347,6 +350,15 @@ config ARC_HUGEPAGE_16M
endchoice
config NODES_SHIFT
int "Maximum NUMA Nodes (as a power of 2)"
default "1" if !DISCONTIGMEM
default "2" if DISCONTIGMEM
depends on NEED_MULTIPLE_NODES
---help---
Accessing memory beyond 1GB (with or w/o PAE) requires 2 memory
zones.
if ISA_ARCOMPACT
config ARC_COMPACT_IRQ_LEVELS
@ -455,6 +467,7 @@ config LINUX_LINK_BASE
config HIGHMEM
bool "High Memory Support"
select DISCONTIGMEM
help
With ARC 2G:2G address split, only upper 2G is directly addressable by
kernel. Enable this to potentially allow access to rest of 2G and PAE


@ -13,6 +13,15 @@
#include <asm/byteorder.h>
#include <asm/page.h>
#ifdef CONFIG_ISA_ARCV2
#include <asm/barrier.h>
#define __iormb() rmb()
#define __iowmb() wmb()
#else
#define __iormb() do { } while (0)
#define __iowmb() do { } while (0)
#endif
extern void __iomem *ioremap(phys_addr_t paddr, unsigned long size);
extern void __iomem *ioremap_prot(phys_addr_t paddr, unsigned long size,
unsigned long flags);
@ -31,6 +40,15 @@ extern void iounmap(const void __iomem *addr);
#define ioremap_wc(phy, sz) ioremap(phy, sz)
#define ioremap_wt(phy, sz) ioremap(phy, sz)
/*
* io{read,write}{16,32}be() macros
*/
#define ioread16be(p) ({ u16 __v = be16_to_cpu((__force __be16)__raw_readw(p)); __iormb(); __v; })
#define ioread32be(p) ({ u32 __v = be32_to_cpu((__force __be32)__raw_readl(p)); __iormb(); __v; })
#define iowrite16be(v,p) ({ __iowmb(); __raw_writew((__force u16)cpu_to_be16(v), p); })
#define iowrite32be(v,p) ({ __iowmb(); __raw_writel((__force u32)cpu_to_be32(v), p); })
/* Change struct page to physical address */
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
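As a rough user-space model of what the new big-endian accessors expand to
(little-endian CPU assumed, barriers elided; an illustration, not the kernel
code):

	#include <stdint.h>

	/* ioread32be(): raw 32-bit read, then be32_to_cpu() byte swap */
	static uint32_t model_ioread32be(const volatile void *addr)
	{
		uint32_t raw = *(const volatile uint32_t *)addr;
		return __builtin_bswap32(raw);
	}

	/* iowrite32be(): cpu_to_be32() byte swap, then raw 32-bit write */
	static void model_iowrite32be(uint32_t val, volatile void *addr)
	{
		*(volatile uint32_t *)addr = __builtin_bswap32(val);
	}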
@ -108,15 +126,6 @@ static inline void __raw_writel(u32 w, volatile void __iomem *addr)
}
#ifdef CONFIG_ISA_ARCV2
#include <asm/barrier.h>
#define __iormb() rmb()
#define __iowmb() wmb()
#else
#define __iormb() do { } while (0)
#define __iowmb() do { } while (0)
#endif
/*
* MMIO can also get buffered/optimized in micro-arch, so barriers needed
* Based on ARM model for the typical use case


@ -0,0 +1,43 @@
/*
* Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#ifndef _ASM_ARC_MMZONE_H
#define _ASM_ARC_MMZONE_H
#ifdef CONFIG_DISCONTIGMEM
extern struct pglist_data node_data[];
#define NODE_DATA(nid) (&node_data[nid])
static inline int pfn_to_nid(unsigned long pfn)
{
int is_end_low = 1;
if (IS_ENABLED(CONFIG_ARC_HAS_PAE40))
is_end_low = pfn <= virt_to_pfn(0xFFFFFFFFUL);
/*
* node 0: lowmem: 0x8000_0000 to 0xFFFF_FFFF
* node 1: HIGHMEM w/o PAE40: 0x0 to 0x7FFF_FFFF
* HIGHMEM with PAE40: 0x1_0000_0000 to ...
*/
if (pfn >= ARCH_PFN_OFFSET && is_end_low)
return 0;
return 1;
}
static inline int pfn_valid(unsigned long pfn)
{
int nid = pfn_to_nid(pfn);
return (pfn <= node_end_pfn(nid));
}
#endif /* CONFIG_DISCONTIGMEM */
#endif


@ -72,11 +72,20 @@ typedef unsigned long pgprot_t;
typedef pte_t * pgtable_t;
/*
* Use virt_to_pfn with caution:
* If used in pte or paddr related macros, it could cause truncation
* in PAE40 builds
* As a rule of thumb, only use it in helpers starting with virt_
* You have been warned !
*/
#define virt_to_pfn(kaddr) (__pa(kaddr) >> PAGE_SHIFT)
#define ARCH_PFN_OFFSET virt_to_pfn(CONFIG_LINUX_LINK_BASE)
#ifdef CONFIG_FLATMEM
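/* relies on unsigned wrap: a pfn below ARCH_PFN_OFFSET becomes huge and fails the < max_mapnr test */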
#define pfn_valid(pfn) (((pfn) - ARCH_PFN_OFFSET) < max_mapnr)
#endif
/*
* __pa, __va, virt_to_page (ALERT: deprecated, don't use them)
@ -85,12 +94,10 @@ typedef pte_t * pgtable_t;
* virt here means link-address/program-address as embedded in object code.
* And for ARC, link-addr = physical address
*/
#define __pa(vaddr) ((unsigned long)vaddr)
#define __pa(vaddr) ((unsigned long)(vaddr))
#define __va(paddr) ((void *)((unsigned long)(paddr)))
#define virt_to_page(kaddr) \
(mem_map + virt_to_pfn((kaddr) - CONFIG_LINUX_LINK_BASE))
#define virt_to_page(kaddr) pfn_to_page(virt_to_pfn(kaddr))
#define virt_addr_valid(kaddr) pfn_valid(virt_to_pfn(kaddr))
/* Default Permissions for stack/heaps pages (Non Executable) */


@ -278,14 +278,13 @@ static inline void pmd_set(pmd_t *pmdp, pte_t *ptep)
#define pmd_present(x) (pmd_val(x))
#define pmd_clear(xp) do { pmd_val(*(xp)) = 0; } while (0)
#define pte_page(pte) \
(mem_map + virt_to_pfn(pte_val(pte) - CONFIG_LINUX_LINK_BASE))
#define pte_page(pte) pfn_to_page(pte_pfn(pte))
#define mk_pte(page, prot) pfn_pte(page_to_pfn(page), prot)
#define pte_pfn(pte) virt_to_pfn(pte_val(pte))
#define pfn_pte(pfn, prot) (__pte(((pte_t)(pfn) << PAGE_SHIFT) | \
pgprot_val(prot)))
#define __pte_index(addr) (virt_to_pfn(addr) & (PTRS_PER_PTE - 1))
#define pfn_pte(pfn, prot) (__pte(((pte_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot)))
/* Don't use virt_to_pfn for macros below: could cause truncations for PAE40*/
#define pte_pfn(pte) (pte_val(pte) >> PAGE_SHIFT)
#define __pte_index(addr) (((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
/*
* pte_offset gets a @ptr to PMD entry (PGD in our 2-tier paging system)


@ -30,11 +30,16 @@ static const unsigned long low_mem_start = CONFIG_LINUX_LINK_BASE;
static unsigned long low_mem_sz;
#ifdef CONFIG_HIGHMEM
static unsigned long min_high_pfn;
static unsigned long min_high_pfn, max_high_pfn;
static u64 high_mem_start;
static u64 high_mem_sz;
#endif
#ifdef CONFIG_DISCONTIGMEM
struct pglist_data node_data[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_data);
#endif
/* User can over-ride above with "mem=nnn[KkMm]" in cmdline */
static int __init setup_mem_sz(char *str)
{
@ -109,13 +114,11 @@ void __init setup_arch_memory(void)
/* Last usable page of low mem */
max_low_pfn = max_pfn = PFN_DOWN(low_mem_start + low_mem_sz);
#ifdef CONFIG_HIGHMEM
min_high_pfn = PFN_DOWN(high_mem_start);
max_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
#ifdef CONFIG_FLATMEM
/* pfn_valid() uses this */
max_mapnr = max_low_pfn - min_low_pfn;
#endif
max_mapnr = max_pfn - min_low_pfn;
/*------------- bootmem allocator setup -----------------------*/
/*
@ -129,7 +132,7 @@ void __init setup_arch_memory(void)
* the crash
*/
memblock_add(low_mem_start, low_mem_sz);
memblock_add_node(low_mem_start, low_mem_sz, 0);
memblock_reserve(low_mem_start, __pa(_end) - low_mem_start);
#ifdef CONFIG_BLK_DEV_INITRD
@ -149,13 +152,6 @@ void __init setup_arch_memory(void)
zones_size[ZONE_NORMAL] = max_low_pfn - min_low_pfn;
zones_holes[ZONE_NORMAL] = 0;
#ifdef CONFIG_HIGHMEM
zones_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn;
/* This handles the peripheral address space hole */
zones_holes[ZONE_HIGHMEM] = min_high_pfn - max_low_pfn;
#endif
/*
* We can't use the helper free_area_init(zones[]) because it uses
* PAGE_OFFSET to compute the @min_low_pfn which would be wrong
@ -168,6 +164,34 @@ void __init setup_arch_memory(void)
zones_holes); /* holes */
#ifdef CONFIG_HIGHMEM
/*
* Populate a new node with highmem
*
* On ARC (w/o PAE) HIGHMEM addresses are actually smaller (0 based)
* than addresses in normal, a.k.a. low memory (0x8000_0000 based).
* Even with PAE, the huge peripheral space hole would waste a lot of
* mem with a single mem_map[]. This warrants a mem_map-per-region design.
* Thus HIGHMEM on ARC is implemented with DISCONTIGMEM.
*
* DISCONTIGMEM in turn requires multiple nodes. Node 0 above is
* populated with the normal memory zone while node 1 only has highmem.
*/
node_set_online(1);
min_high_pfn = PFN_DOWN(high_mem_start);
max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
zones_size[ZONE_NORMAL] = 0;
zones_holes[ZONE_NORMAL] = 0;
zones_size[ZONE_HIGHMEM] = max_high_pfn - min_high_pfn;
zones_holes[ZONE_HIGHMEM] = 0;
free_area_init_node(1, /* node-id */
zones_size, /* num pages per zone */
min_high_pfn, /* first pfn of node */
zones_holes); /* holes */
high_memory = (void *)(min_high_pfn << PAGE_SHIFT);
kmap_init();
#endif
@ -185,7 +209,7 @@ void __init mem_init(void)
unsigned long tmp;
reset_all_zones_managed_pages();
for (tmp = min_high_pfn; tmp < max_pfn; tmp++)
for (tmp = min_high_pfn; tmp < max_high_pfn; tmp++)
free_highmem_page(pfn_to_page(tmp));
#endif


@ -329,6 +329,7 @@
regulator-name = "V28";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
regulator-always-on; /* due to battery cover sensor */
};
@ -336,30 +337,35 @@
regulator-name = "VCSI";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
};
&vaux3 {
regulator-name = "VMMC2_30";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <3000000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
};
&vaux4 {
regulator-name = "VCAM_ANA_28";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
};
&vmmc1 {
regulator-name = "VMMC1";
regulator-min-microvolt = <1850000>;
regulator-max-microvolt = <3150000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
};
&vmmc2 {
regulator-name = "V28_A";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <3000000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
regulator-always-on; /* due VIO leak to AIC34 VDDs */
};
@ -367,6 +373,7 @@
regulator-name = "VPLL";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
regulator-always-on;
};
@ -374,6 +381,7 @@
regulator-name = "VSDI_CSI";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
regulator-always-on;
};
@ -381,6 +389,7 @@
regulator-name = "VMMC2_IO_18";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <0x0e>; /* RES_STATE_ACTIVE */
};
&vio {


@ -46,7 +46,7 @@
0x480bd800 0x017c>;
interrupts = <24>;
iommus = <&mmu_isp>;
syscon = <&scm_conf 0xdc>;
syscon = <&scm_conf 0x6c>;
ti,phy-type = <OMAP3ISP_PHY_TYPE_COMPLEX_IO>;
#clock-cells = <1>;
ports {


@ -472,7 +472,7 @@
ldo1_reg: ldo1 {
/* VDDAPHY_CAM: vdda_csiport */
regulator-name = "ldo1";
regulator-min-microvolt = <1500000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
@ -498,7 +498,7 @@
ldo4_reg: ldo4 {
/* VDDAPHY_DISP: vdda_dsiport/hdmi */
regulator-name = "ldo4";
regulator-min-microvolt = <1500000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};


@ -513,7 +513,7 @@
ldo1_reg: ldo1 {
/* VDDAPHY_CAM: vdda_csiport */
regulator-name = "ldo1";
regulator-min-microvolt = <1500000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};
@ -537,7 +537,7 @@
ldo4_reg: ldo4 {
/* VDDAPHY_DISP: vdda_dsiport/hdmi */
regulator-name = "ldo4";
regulator-min-microvolt = <1500000>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
};


@ -269,7 +269,7 @@
omap5_pmx_wkup: pinmux@c840 {
compatible = "ti,omap5-padconf",
"pinctrl-single";
reg = <0xc840 0x0038>;
reg = <0xc840 0x003c>;
#address-cells = <1>;
#size-cells = <0>;
#interrupt-cells = <1>;


@ -666,7 +666,7 @@
};
sata0: sata@29000000 {
compatible = "generic-ahci";
compatible = "qcom,apq8064-ahci", "generic-ahci";
status = "disabled";
reg = <0x29000000 0x180>;
interrupts = <GIC_SPI 209 IRQ_TYPE_NONE>;
@ -688,6 +688,7 @@
phys = <&sata_phy0>;
phy-names = "sata-phy";
ports-implemented = <0x1>;
};
/* Temporary fixed regulator */


@ -125,8 +125,6 @@
};
&reg_dc1sw {
regulator-min-microvolt = <3000000>;
regulator-max-microvolt = <3000000>;
regulator-name = "vcc-lcd";
};


@ -84,6 +84,7 @@
#ifndef __ASSEMBLY__
#ifdef CONFIG_CPU_CP15_MMU
static inline unsigned int get_domain(void)
{
unsigned int domain;
@ -103,6 +104,16 @@ static inline void set_domain(unsigned val)
: : "r" (val) : "memory");
isb();
}
#else
static inline unsigned int get_domain(void)
{
return 0;
}
static inline void set_domain(unsigned val)
{
}
#endif
#ifdef CONFIG_CPU_USE_DOMAINS
#define modify_domain(dom,type) \


@ -236,7 +236,7 @@ ENTRY(__setup_mpu)
mov r0, #CONFIG_VECTORS_BASE @ Cover from VECTORS_BASE
ldr r5,=(MPU_AP_PL1RW_PL0NA | MPU_RGN_NORMAL)
/* Writing N to bits 5:1 (RSR_SZ) --> region size 2^N+1 */
mov r6, #(((PAGE_SHIFT - 1) << MPU_RSR_SZ) | 1 << MPU_RSR_EN)
mov r6, #(((2 * PAGE_SHIFT - 1) << MPU_RSR_SZ) | 1 << MPU_RSR_EN)
setup_region r0, r5, r6, MPU_DATA_SIDE @ VECTORS_BASE, PL0 NA, enabled
beq 3f @ Memory-map not unified


@ -1004,7 +1004,7 @@ static bool transparent_hugepage_adjust(kvm_pfn_t *pfnp, phys_addr_t *ipap)
kvm_pfn_t pfn = *pfnp;
gfn_t gfn = *ipap >> PAGE_SHIFT;
if (PageTransCompound(pfn_to_page(pfn))) {
if (PageTransCompoundMap(pfn_to_page(pfn))) {
unsigned long mask;
/*
* The address we faulted on is backed by a transparent huge


@ -121,6 +121,11 @@ static void read_factory_config(struct nvmem_device *nvmem, void *context)
const char *partnum = NULL;
struct davinci_soc_info *soc_info = &davinci_soc_info;
if (!IS_BUILTIN(CONFIG_NVMEM)) {
pr_warn("Factory Config not available without CONFIG_NVMEM\n");
goto bad_config;
}
ret = nvmem_device_read(nvmem, 0, sizeof(factory_config),
&factory_config);
if (ret != sizeof(struct factory_config)) {


@ -33,6 +33,11 @@ void davinci_get_mac_addr(struct nvmem_device *nvmem, void *context)
char *mac_addr = davinci_soc_info.emac_pdata->mac_addr;
off_t offset = (off_t)context;
if (!IS_BUILTIN(CONFIG_NVMEM)) {
pr_warn("Cannot read MAC addr from EEPROM without CONFIG_NVMEM\n");
return;
}
/* Read MAC addr from EEPROM */
if (nvmem_device_read(nvmem, offset, ETH_ALEN, mac_addr) == ETH_ALEN)
pr_info("Read MAC addr from EEPROM: %pM\n", mac_addr);


@ -92,7 +92,7 @@ static int exynos_pd_power(struct generic_pm_domain *domain, bool power_on)
if (IS_ERR(pd->clk[i]))
break;
if (IS_ERR(pd->clk[i]))
if (IS_ERR(pd->pclk[i]))
continue; /* Skip on first power up */
if (clk_set_parent(pd->clk[i], pd->pclk[i]))
pr_err("%s: error setting parent to clock%d\n",


@ -13,6 +13,7 @@
#include <asm/assembler.h>
.arch armv7-a
.arm
ENTRY(secondary_trampoline)
/* CPU1 will always fetch from 0x0 when it is brought out of reset.


@ -87,7 +87,6 @@ static unsigned long irbar_read(void)
/* MPU initialisation functions */
void __init sanity_check_meminfo_mpu(void)
{
int i;
phys_addr_t phys_offset = PHYS_OFFSET;
phys_addr_t aligned_region_size, specified_mem_size, rounded_mem_size;
struct memblock_region *reg;
@ -110,11 +109,13 @@ void __init sanity_check_meminfo_mpu(void)
} else {
/*
* memblock auto merges contiguous blocks, remove
* all blocks afterwards
* all blocks afterwards in one go (we can't remove
* blocks separately while iterating)
*/
pr_notice("Ignoring RAM after %pa, memory at %pa ignored\n",
&mem_start, &reg->base);
memblock_remove(reg->base, reg->size);
&mem_end, &reg->base);
memblock_remove(reg->base, 0 - reg->base);
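/* 0 - reg->base wraps around: removes everything from reg->base up to the top of the address space */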
break;
}
}
@ -144,7 +145,7 @@ void __init sanity_check_meminfo_mpu(void)
pr_warn("Truncating memory from %pa to %pa (MPU region constraints)",
&specified_mem_size, &aligned_region_size);
memblock_remove(mem_start + aligned_region_size,
specified_mem_size - aligned_round_size);
specified_mem_size - aligned_region_size);
mem_end = mem_start + aligned_region_size;
}
@ -261,7 +262,7 @@ void __init mpu_setup(void)
return;
region_err = mpu_setup_region(MPU_RAM_REGION, PHYS_OFFSET,
ilog2(meminfo.bank[0].size),
ilog2(memblock.memory.regions[0].size),
MPU_AP_PL1RW_PL0RW | MPU_RGN_NORMAL);
if (region_err) {
panic("MPU region initialization failure! %d", region_err);
@ -285,7 +286,7 @@ void __init arm_mm_memblock_reserve(void)
* some architectures which the DRAM is the exception vector to trap,
* alloc_page breaks with error, although it is not NULL, but "0."
*/
memblock_reserve(CONFIG_VECTORS_BASE, PAGE_SIZE);
memblock_reserve(CONFIG_VECTORS_BASE, 2 * PAGE_SIZE);
#else /* ifndef CONFIG_CPU_V7M */
/*
* There is no dedicated vector page on V7-M. So nothing needs to be


@ -120,7 +120,6 @@
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
status = "disabled";
};
soc {


@ -344,7 +344,7 @@ tracesys_next:
#endif
cmpib,COND(=),n -1,%r20,tracesys_exit /* seccomp may have returned -1 */
comiclr,>>= __NR_Linux_syscalls, %r20, %r0
comiclr,>> __NR_Linux_syscalls, %r20, %r0
b,n .Ltracesys_nosys
LDREGX %r20(%r19), %r19


@ -82,7 +82,7 @@ static inline unsigned long create_zero_mask(unsigned long bits)
"andc %1,%1,%2\n\t"
"popcntd %0,%1"
: "=r" (leading_zero_bits), "=&r" (trailing_zero_bit_mask)
: "r" (bits));
: "b" (bits));
return leading_zero_bits;
}


@ -474,6 +474,7 @@ static __init int _init_perf_amd_iommu(
static struct perf_amd_iommu __perf_iommu = {
.pmu = {
.task_ctx_nr = perf_invalid_context,
.event_init = perf_iommu_event_init,
.add = perf_iommu_add,
.del = perf_iommu_del,


@ -3637,6 +3637,8 @@ __init int intel_pmu_init(void)
pr_cont("Knights Landing events, ");
break;
case 142: /* 14nm Kabylake Mobile */
case 158: /* 14nm Kabylake Desktop */
case 78: /* 14nm Skylake Mobile */
case 94: /* 14nm Skylake Desktop */
case 85: /* 14nm Skylake Server */


@ -891,9 +891,7 @@ void __init uv_system_init(void)
}
pr_info("UV: Found %s hub\n", hub);
/* We now only need to map the MMRs on UV1 */
if (is_uv1_hub())
map_low_mmrs();
map_low_mmrs();
m_n_config.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR );
m_val = m_n_config.s.m_skt;


@ -106,14 +106,24 @@ static int __init efifb_set_system(const struct dmi_system_id *id)
continue;
for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
resource_size_t start, end;
unsigned long flags;
flags = pci_resource_flags(dev, i);
if (!(flags & IORESOURCE_MEM))
continue;
if (flags & IORESOURCE_UNSET)
continue;
if (pci_resource_len(dev, i) == 0)
continue;
start = pci_resource_start(dev, i);
if (start == 0)
break;
end = pci_resource_end(dev, i);
if (screen_info.lfb_base >= start &&
screen_info.lfb_base < end) {
found_bar = 1;
break;
}
}
}


@ -92,7 +92,7 @@ unsigned long try_msr_calibrate_tsc(void)
if (freq_desc_tables[cpu_index].msr_plat) {
rdmsr(MSR_PLATFORM_INFO, lo, hi);
ratio = (lo >> 8) & 0x1f;
ratio = (lo >> 8) & 0xff;
} else {
rdmsr(MSR_IA32_PERF_STATUS, lo, hi);
ratio = (hi >> 8) & 0x1f;


@ -2823,7 +2823,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
*/
if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
level == PT_PAGE_TABLE_LEVEL &&
PageTransCompound(pfn_to_page(pfn)) &&
PageTransCompoundMap(pfn_to_page(pfn)) &&
!mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
unsigned long mask;
/*
@ -4785,7 +4785,7 @@ restart:
*/
if (sp->role.direct &&
!kvm_is_reserved_pfn(pfn) &&
PageTransCompound(pfn_to_page(pfn))) {
PageTransCompoundMap(pfn_to_page(pfn))) {
drop_spte(kvm, sptep);
need_tlb_flush = 1;
goto restart;


@ -43,40 +43,40 @@ void __init efi_bgrt_init(void)
return;
if (bgrt_tab->header.length < sizeof(*bgrt_tab)) {
pr_err("Ignoring BGRT: invalid length %u (expected %zu)\n",
pr_notice("Ignoring BGRT: invalid length %u (expected %zu)\n",
bgrt_tab->header.length, sizeof(*bgrt_tab));
return;
}
if (bgrt_tab->version != 1) {
pr_err("Ignoring BGRT: invalid version %u (expected 1)\n",
pr_notice("Ignoring BGRT: invalid version %u (expected 1)\n",
bgrt_tab->version);
return;
}
if (bgrt_tab->status & 0xfe) {
pr_err("Ignoring BGRT: reserved status bits are non-zero %u\n",
pr_notice("Ignoring BGRT: reserved status bits are non-zero %u\n",
bgrt_tab->status);
return;
}
if (bgrt_tab->image_type != 0) {
pr_err("Ignoring BGRT: invalid image type %u (expected 0)\n",
pr_notice("Ignoring BGRT: invalid image type %u (expected 0)\n",
bgrt_tab->image_type);
return;
}
if (!bgrt_tab->image_address) {
pr_err("Ignoring BGRT: null image address\n");
pr_notice("Ignoring BGRT: null image address\n");
return;
}
image = memremap(bgrt_tab->image_address, sizeof(bmp_header), MEMREMAP_WB);
if (!image) {
pr_err("Ignoring BGRT: failed to map image header memory\n");
pr_notice("Ignoring BGRT: failed to map image header memory\n");
return;
}
memcpy(&bmp_header, image, sizeof(bmp_header));
memunmap(image);
if (bmp_header.id != 0x4d42) {
pr_err("Ignoring BGRT: Incorrect BMP magic number 0x%x (expected 0x4d42)\n",
pr_notice("Ignoring BGRT: Incorrect BMP magic number 0x%x (expected 0x4d42)\n",
bmp_header.id);
return;
}
@ -84,14 +84,14 @@ void __init efi_bgrt_init(void)
bgrt_image = kmalloc(bgrt_image_size, GFP_KERNEL | __GFP_NOWARN);
if (!bgrt_image) {
pr_err("Ignoring BGRT: failed to allocate memory for image (wanted %zu bytes)\n",
pr_notice("Ignoring BGRT: failed to allocate memory for image (wanted %zu bytes)\n",
bgrt_image_size);
return;
}
image = memremap(bgrt_tab->image_address, bmp_header.size, MEMREMAP_WB);
if (!image) {
pr_err("Ignoring BGRT: failed to map image memory\n");
pr_notice("Ignoring BGRT: failed to map image memory\n");
kfree(bgrt_image);
bgrt_image = NULL;
return;


@ -96,6 +96,7 @@ config CRYPTO_AKCIPHER
config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
select CRYPTO_MANAGER
select MPILIB
select ASN1
help


@ -69,8 +69,9 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
struct scatterlist *sg;
sg = walk->sg;
walk->pg = sg_page(sg);
walk->offset = sg->offset;
walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
walk->offset = offset_in_page(walk->offset);
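/* sg->offset can exceed PAGE_SIZE: advance the page pointer and keep only the in-page remainder */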
walk->entrylen = sg->length;
if (walk->entrylen > walk->total)


@ -428,6 +428,9 @@ acpi_ds_begin_method_execution(struct acpi_namespace_node *method_node,
obj_desc->method.mutex->mutex.
original_sync_level =
obj_desc->method.mutex->mutex.sync_level;
obj_desc->method.mutex->mutex.thread_id =
acpi_os_get_thread_id();
}
}


@ -287,8 +287,11 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
offset);
rc = -ENXIO;
}
} else
} else {
rc = 0;
if (cmd_rc)
*cmd_rc = xlat_status(buf, cmd);
}
out:
ACPI_FREE(out_obj);


@ -202,6 +202,14 @@ config SATA_FSL
If unsure, say N.
config SATA_AHCI_SEATTLE
tristate "AMD Seattle 6.0Gbps AHCI SATA host controller support"
depends on ARCH_SEATTLE
help
This option enables support for AMD Seattle SATA host controller.
If unsure, say N.
config SATA_INIC162X
tristate "Initio 162x SATA support (Very Experimental)"
depends on PCI


@ -4,6 +4,7 @@ obj-$(CONFIG_ATA) += libata.o
# non-SFF interface
obj-$(CONFIG_SATA_AHCI) += ahci.o libahci.o
obj-$(CONFIG_SATA_ACARD_AHCI) += acard-ahci.o libahci.o
obj-$(CONFIG_SATA_AHCI_SEATTLE) += ahci_seattle.o libahci.o libahci_platform.o
obj-$(CONFIG_SATA_AHCI_PLATFORM) += ahci_platform.o libahci.o libahci_platform.o
obj-$(CONFIG_SATA_FSL) += sata_fsl.o
obj-$(CONFIG_SATA_INIC162X) += sata_inic162x.o


@ -51,6 +51,9 @@ static int ahci_probe(struct platform_device *pdev)
if (rc)
return rc;
of_property_read_u32(dev->of_node,
"ports-implemented", &hpriv->force_port_map);
if (of_device_is_compatible(dev->of_node, "hisilicon,hisi-ahci"))
hpriv->flags |= AHCI_HFLAG_NO_FBS | AHCI_HFLAG_NO_NCQ;

drivers/ata/ahci_seattle.c (new file, 210 lines)

@ -0,0 +1,210 @@
/*
* AMD Seattle AHCI SATA driver
*
* Copyright (c) 2015, Advanced Micro Devices
* Author: Brijesh Singh <brijesh.singh@amd.com>
*
* based on the AHCI SATA platform driver by Jeff Garzik and Anton Vorontsov
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/device.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/libata.h>
#include <linux/ahci_platform.h>
#include <linux/acpi.h>
#include <linux/pci_ids.h>
#include "ahci.h"
/* SGPIO Control Register definition
*
* Bit Type Description
* 31 RW OD7.2 (activity)
* 30 RW OD7.1 (locate)
* 29 RW OD7.0 (fault)
* 28...8 RW OD6.2...OD0.0 (3bits per port, 1 bit per LED)
* 7 RO SGPIO feature flag
* 6:4 RO Reserved
* 3:0 RO Number of ports (0 means no port supported)
*/
#define ACTIVITY_BIT_POS(x) (8 + (3 * x))
#define LOCATE_BIT_POS(x) (ACTIVITY_BIT_POS(x) + 1)
#define FAULT_BIT_POS(x) (LOCATE_BIT_POS(x) + 1)
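/* e.g. port 2: activity bit 14, locate bit 15, fault bit 16 */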
#define ACTIVITY_MASK 0x00010000
#define LOCATE_MASK 0x00080000
#define FAULT_MASK 0x00400000
#define DRV_NAME "ahci-seattle"
static ssize_t seattle_transmit_led_message(struct ata_port *ap, u32 state,
ssize_t size);
struct seattle_plat_data {
void __iomem *sgpio_ctrl;
};
static struct ata_port_operations ahci_port_ops = {
.inherits = &ahci_ops,
};
static const struct ata_port_info ahci_port_info = {
.flags = AHCI_FLAG_COMMON,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_port_ops,
};
static struct ata_port_operations ahci_seattle_ops = {
.inherits = &ahci_ops,
.transmit_led_message = seattle_transmit_led_message,
};
static const struct ata_port_info ahci_port_seattle_info = {
.flags = AHCI_FLAG_COMMON | ATA_FLAG_EM | ATA_FLAG_SW_ACTIVITY,
.link_flags = ATA_LFLAG_SW_ACTIVITY,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &ahci_seattle_ops,
};
static struct scsi_host_template ahci_platform_sht = {
AHCI_SHT(DRV_NAME),
};
static ssize_t seattle_transmit_led_message(struct ata_port *ap, u32 state,
ssize_t size)
{
struct ahci_host_priv *hpriv = ap->host->private_data;
struct ahci_port_priv *pp = ap->private_data;
struct seattle_plat_data *plat_data = hpriv->plat_data;
unsigned long flags;
int pmp;
struct ahci_em_priv *emp;
u32 val;
/* get the slot number from the message */
pmp = (state & EM_MSG_LED_PMP_SLOT) >> 8;
if (pmp >= EM_MAX_SLOTS)
return -EINVAL;
emp = &pp->em_priv[pmp];
val = ioread32(plat_data->sgpio_ctrl);
if (state & ACTIVITY_MASK)
val |= 1 << ACTIVITY_BIT_POS((ap->port_no));
else
val &= ~(1 << ACTIVITY_BIT_POS((ap->port_no)));
if (state & LOCATE_MASK)
val |= 1 << LOCATE_BIT_POS((ap->port_no));
else
val &= ~(1 << LOCATE_BIT_POS((ap->port_no)));
if (state & FAULT_MASK)
val |= 1 << FAULT_BIT_POS((ap->port_no));
else
val &= ~(1 << FAULT_BIT_POS((ap->port_no)));
iowrite32(val, plat_data->sgpio_ctrl);
spin_lock_irqsave(ap->lock, flags);
/* save off new led state for port/slot */
emp->led_state = state;
spin_unlock_irqrestore(ap->lock, flags);
return size;
}
static const struct ata_port_info *ahci_seattle_get_port_info(
struct platform_device *pdev, struct ahci_host_priv *hpriv)
{
struct device *dev = &pdev->dev;
struct seattle_plat_data *plat_data;
u32 val;
plat_data = devm_kzalloc(dev, sizeof(*plat_data), GFP_KERNEL);
if (!plat_data)	/* devm_kzalloc() returns NULL on failure, not an ERR_PTR */
return &ahci_port_info;
plat_data->sgpio_ctrl = devm_ioremap_resource(dev,
platform_get_resource(pdev, IORESOURCE_MEM, 1));
if (IS_ERR(plat_data->sgpio_ctrl))
return &ahci_port_info;
val = ioread32(plat_data->sgpio_ctrl);
if (!(val & 0xf))
return &ahci_port_info;
hpriv->em_loc = 0;
hpriv->em_buf_sz = 4;
hpriv->em_msg_type = EM_MSG_TYPE_LED;
hpriv->plat_data = plat_data;
dev_info(dev, "SGPIO LED control is enabled.\n");
return &ahci_port_seattle_info;
}
static int ahci_seattle_probe(struct platform_device *pdev)
{
int rc;
struct ahci_host_priv *hpriv;
hpriv = ahci_platform_get_resources(pdev);
if (IS_ERR(hpriv))
return PTR_ERR(hpriv);
rc = ahci_platform_enable_resources(hpriv);
if (rc)
return rc;
rc = ahci_platform_init_host(pdev, hpriv,
ahci_seattle_get_port_info(pdev, hpriv),
&ahci_platform_sht);
if (rc)
goto disable_resources;
return 0;
disable_resources:
ahci_platform_disable_resources(hpriv);
return rc;
}
static SIMPLE_DEV_PM_OPS(ahci_pm_ops, ahci_platform_suspend,
ahci_platform_resume);
static const struct acpi_device_id ahci_acpi_match[] = {
{ "AMDI0600", 0 },
{}
};
MODULE_DEVICE_TABLE(acpi, ahci_acpi_match);
static struct platform_driver ahci_seattle_driver = {
.probe = ahci_seattle_probe,
.remove = ata_platform_remove_one,
.driver = {
.name = DRV_NAME,
.acpi_match_table = ahci_acpi_match,
.pm = &ahci_pm_ops,
},
};
module_platform_driver(ahci_seattle_driver);
MODULE_DESCRIPTION("Seattle AHCI SATA platform driver");
MODULE_AUTHOR("Brijesh Singh <brijesh.singh@amd.com>");
MODULE_LICENSE("GPL");
MODULE_ALIAS("platform:" DRV_NAME);


@ -507,6 +507,7 @@ void ahci_save_initial_config(struct device *dev, struct ahci_host_priv *hpriv)
dev_info(dev, "forcing port_map 0x%x -> 0x%x\n",
port_map, hpriv->force_port_map);
port_map = hpriv->force_port_map;
hpriv->saved_port_map = port_map;
}
if (hpriv->mask_port_map) {


@ -259,9 +259,6 @@ unsigned long dev_pm_opp_get_max_volt_latency(struct device *dev)
reg = opp_table->regulator;
if (IS_ERR(reg)) {
/* Regulator may not be required for device */
if (reg)
dev_err(dev, "%s: Invalid regulator (%ld)\n", __func__,
PTR_ERR(reg));
rcu_read_unlock();
return 0;
}


@ -21,7 +21,7 @@
static inline bool is_pset_node(struct fwnode_handle *fwnode)
{
return fwnode && fwnode->type == FWNODE_PDATA;
return !IS_ERR_OR_NULL(fwnode) && fwnode->type == FWNODE_PDATA;
}
static inline struct property_set *to_pset_node(struct fwnode_handle *fwnode)


@ -1557,21 +1557,25 @@ void cpufreq_suspend(void)
if (!cpufreq_driver)
return;
if (!has_target())
if (!has_target() && !cpufreq_driver->suspend)
goto suspend;
pr_debug("%s: Suspending Governors\n", __func__);
for_each_active_policy(policy) {
down_write(&policy->rwsem);
ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
up_write(&policy->rwsem);
if (has_target()) {
down_write(&policy->rwsem);
ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
up_write(&policy->rwsem);
if (ret)
pr_err("%s: Failed to stop governor for policy: %p\n",
__func__, policy);
else if (cpufreq_driver->suspend
&& cpufreq_driver->suspend(policy))
if (ret) {
pr_err("%s: Failed to stop governor for policy: %p\n",
__func__, policy);
continue;
}
}
if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy))
pr_err("%s: Failed to suspend driver: %p\n", __func__,
policy);
}
@ -1596,7 +1600,7 @@ void cpufreq_resume(void)
cpufreq_suspended = false;
if (!has_target())
if (!has_target() && !cpufreq_driver->resume)
return;
pr_debug("%s: Resuming Governors\n", __func__);
@ -1605,7 +1609,7 @@ void cpufreq_resume(void)
if (cpufreq_driver->resume && cpufreq_driver->resume(policy)) {
pr_err("%s: Failed to resume driver: %p\n", __func__,
policy);
} else {
} else if (has_target()) {
down_write(&policy->rwsem);
ret = cpufreq_start_governor(policy);
up_write(&policy->rwsem);


@ -453,6 +453,14 @@ static void intel_pstate_hwp_set(const struct cpumask *cpumask)
}
}
static int intel_pstate_hwp_set_policy(struct cpufreq_policy *policy)
{
if (hwp_active)
intel_pstate_hwp_set(policy->cpus);
return 0;
}
static void intel_pstate_hwp_set_online_cpus(void)
{
get_online_cpus();
@ -1062,8 +1070,9 @@ static inline bool intel_pstate_sample(struct cpudata *cpu, u64 time)
static inline int32_t get_avg_frequency(struct cpudata *cpu)
{
return div64_u64(cpu->pstate.max_pstate_physical * cpu->sample.aperf *
cpu->pstate.scaling, cpu->sample.mperf);
return fp_toint(mul_fp(cpu->sample.core_pct_busy,
int_tofp(cpu->pstate.max_pstate_physical *
cpu->pstate.scaling / 100)));
}
static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
@ -1106,8 +1115,6 @@ static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
int32_t core_busy, max_pstate, current_pstate, sample_ratio;
u64 duration_ns;
intel_pstate_calc_busy(cpu);
/*
* core_busy is the ratio of actual performance to max
* max_pstate is the max non turbo pstate available
@ -1191,8 +1198,11 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
if ((s64)delta_ns >= pid_params.sample_rate_ns) {
bool sample_taken = intel_pstate_sample(cpu, time);
if (sample_taken && !hwp_active)
intel_pstate_adjust_busy_pstate(cpu);
if (sample_taken) {
intel_pstate_calc_busy(cpu);
if (!hwp_active)
intel_pstate_adjust_busy_pstate(cpu);
}
}
}
@ -1346,8 +1356,7 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
out:
intel_pstate_set_update_util_hook(policy->cpu);
if (hwp_active)
intel_pstate_hwp_set(policy->cpus);
intel_pstate_hwp_set_policy(policy);
return 0;
}
@ -1411,6 +1420,7 @@ static struct cpufreq_driver intel_pstate_driver = {
.flags = CPUFREQ_CONST_LOOPS,
.verify = intel_pstate_verify_policy,
.setpolicy = intel_pstate_set_policy,
.resume = intel_pstate_hwp_set_policy,
.get = intel_pstate_get,
.init = intel_pstate_cpu_init,
.stop_cpu = intel_pstate_stop_cpu,


@ -259,6 +259,10 @@ static int sti_cpufreq_init(void)
{
int ret;
if ((!of_machine_is_compatible("st,stih407")) &&
(!of_machine_is_compatible("st,stih410")))
return -ENODEV;
ddata.cpu = get_cpu_device(0);
if (!ddata.cpu) {
dev_err(ddata.cpu, "Failed to get device for CPU0\n");


@ -50,7 +50,7 @@ static int arm_enter_idle_state(struct cpuidle_device *dev,
* call the CPU ops suspend protocol with idle index as a
* parameter.
*/
arm_cpuidle_suspend(idx);
ret = arm_cpuidle_suspend(idx);
cpu_pm_exit();
}


@ -236,6 +236,8 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
uint32_t vf_mask);
void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
int adf_init_pf_wq(void);
void adf_exit_pf_wq(void);
#else
static inline int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
{
@ -253,5 +255,14 @@ static inline void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
{
}
static inline int adf_init_pf_wq(void)
{
return 0;
}
static inline void adf_exit_pf_wq(void)
{
}
#endif
#endif


@ -462,12 +462,17 @@ static int __init adf_register_ctl_device_driver(void)
if (adf_init_aer())
goto err_aer;
if (adf_init_pf_wq())
goto err_pf_wq;
if (qat_crypto_register())
goto err_crypto_register;
return 0;
err_crypto_register:
adf_exit_pf_wq();
err_pf_wq:
adf_exit_aer();
err_aer:
adf_chr_drv_destroy();
@ -480,6 +485,7 @@ static void __exit adf_unregister_ctl_device_driver(void)
{
adf_chr_drv_destroy();
adf_exit_aer();
adf_exit_pf_wq();
qat_crypto_unregister();
adf_clean_vf_map(false);
mutex_destroy(&adf_ctl_lock);


@ -119,11 +119,6 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev)
int i;
u32 reg;
/* Workqueue for PF2VF responses */
pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq");
if (!pf2vf_resp_wq)
return -ENOMEM;
for (i = 0, vf_info = accel_dev->pf.vf_info; i < totalvfs;
i++, vf_info++) {
/* This ptr will be populated when VFs will be created */
@ -216,11 +211,6 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev)
kfree(accel_dev->pf.vf_info);
accel_dev->pf.vf_info = NULL;
if (pf2vf_resp_wq) {
destroy_workqueue(pf2vf_resp_wq);
pf2vf_resp_wq = NULL;
}
}
EXPORT_SYMBOL_GPL(adf_disable_sriov);
@ -304,3 +294,19 @@ int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
return numvfs;
}
EXPORT_SYMBOL_GPL(adf_sriov_configure);
int __init adf_init_pf_wq(void)
{
/* Workqueue for PF2VF responses */
pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq");
return !pf2vf_resp_wq ? -ENOMEM : 0;
}
void adf_exit_pf_wq(void)
{
if (pf2vf_resp_wq) {
destroy_workqueue(pf2vf_resp_wq);
pf2vf_resp_wq = NULL;
}
}


@ -77,7 +77,7 @@ static inline u16 fw_cfg_sel_endianness(u16 key)
static inline void fw_cfg_read_blob(u16 key,
void *buf, loff_t pos, size_t count)
{
u32 glk;
u32 glk = -1U;
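/* pre-initialized so the lock handle has a defined value even when the ACPI global lock is never acquired */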
acpi_status status;
/* If we have ACPI, ensure mutual exclusion against any potential


@ -541,6 +541,7 @@ int amdgpu_bo_set_metadata (struct amdgpu_bo *bo, void *metadata,
if (!metadata_size) {
if (bo->metadata_size) {
kfree(bo->metadata);
bo->metadata = NULL;
bo->metadata_size = 0;
}
return 0;


@ -298,6 +298,10 @@ bool amdgpu_atombios_encoder_mode_fixup(struct drm_encoder *encoder,
&& (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2)))
adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2;
/* vertical FP must be at least 1 */
if (mode->crtc_vsync_start == mode->crtc_vdisplay)
adjusted_mode->crtc_vsync_start++;
/* get the native mode for scaling */
if (amdgpu_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT))
amdgpu_panel_mode_fixup(encoder, adjusted_mode);


@ -792,7 +792,7 @@ static int i915_drm_resume(struct drm_device *dev)
static int i915_drm_resume_early(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = 0;
int ret;
/*
* We have a resume ordering issue with the snd-hda driver also
@ -803,6 +803,36 @@ static int i915_drm_resume_early(struct drm_device *dev)
* FIXME: This should be solved with a special hdmi sink device or
* similar so that power domains can be employed.
*/
/*
* Note that we need to set the power state explicitly, since we
* powered off the device during freeze and the PCI core won't power
* it back up for us during thaw. Powering off the device during
* freeze is not a hard requirement though, and during the
* suspend/resume phases the PCI core makes sure we get here with the
* device powered on. So in case we change our freeze logic and keep
* the device powered we can also remove the following set power state
* call.
*/
ret = pci_set_power_state(dev->pdev, PCI_D0);
if (ret) {
DRM_ERROR("failed to set PCI D0 power state (%d)\n", ret);
goto out;
}
/*
* Note that pci_enable_device() first enables any parent bridge
* device and only then sets the power state for this device. The
* bridge enabling is a nop though, since bridge devices are resumed
* first. The order of enabling power and enabling the device is
* imposed by the PCI core as described above, so here we preserve the
* same order for the freeze/thaw phases.
*
* TODO: eventually we should remove pci_disable_device() /
* pci_enable_device() from suspend/resume. Due to how they
* depend on the device enable refcount we can't anyway depend on them
* disabling/enabling the device.
*/
if (pci_enable_device(dev->pdev)) {
ret = -EIO;
goto out;


@ -2907,7 +2907,14 @@ enum skl_disp_power_wells {
#define GEN6_RP_STATE_CAP _MMIO(MCHBAR_MIRROR_BASE_SNB + 0x5998)
#define BXT_RP_STATE_CAP _MMIO(0x138170)
#define INTERVAL_1_28_US(us) (((us) * 100) >> 7)
/*
* Make these a multiple of magic 25 to avoid SNB (eg. Dell XPS
* 8300) freezing up around GPU hangs. Looks as if even
* scheduling/timer interrupts start misbehaving if the RPS
* EI/thresholds are "bad", leading to a very sluggish or even
* frozen machine.
*/
#define INTERVAL_1_28_US(us) roundup(((us) * 100) >> 7, 25)
#define INTERVAL_1_33_US(us) (((us) * 3) >> 2)
#define INTERVAL_0_833_US(us) (((us) * 6) / 5)
#define GT_INTERVAL_FROM_US(dev_priv, us) (IS_GEN9(dev_priv) ? \


@ -443,9 +443,17 @@ void intel_prepare_ddi_buffer(struct intel_encoder *encoder)
} else if (IS_BROADWELL(dev_priv)) {
ddi_translations_fdi = bdw_ddi_translations_fdi;
ddi_translations_dp = bdw_ddi_translations_dp;
ddi_translations_edp = bdw_ddi_translations_edp;
if (dev_priv->edp_low_vswing) {
ddi_translations_edp = bdw_ddi_translations_edp;
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
} else {
ddi_translations_edp = bdw_ddi_translations_dp;
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
}
ddi_translations_hdmi = bdw_ddi_translations_hdmi;
n_edp_entries = ARRAY_SIZE(bdw_ddi_translations_edp);
n_dp_entries = ARRAY_SIZE(bdw_ddi_translations_dp);
n_hdmi_entries = ARRAY_SIZE(bdw_ddi_translations_hdmi);
hdmi_default_entry = 7;
@ -3201,12 +3209,6 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
intel_ddi_clock_get(encoder, pipe_config);
}
static void intel_ddi_destroy(struct drm_encoder *encoder)
{
/* HDMI has nothing special to destroy, so we can go with this. */
intel_dp_encoder_destroy(encoder);
}
static bool intel_ddi_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
@ -3225,7 +3227,8 @@ static bool intel_ddi_compute_config(struct intel_encoder *encoder,
}
static const struct drm_encoder_funcs intel_ddi_funcs = {
.destroy = intel_ddi_destroy,
.reset = intel_dp_encoder_reset,
.destroy = intel_dp_encoder_destroy,
};
static struct intel_connector *
@ -3324,6 +3327,7 @@ void intel_ddi_init(struct drm_device *dev, enum port port)
intel_encoder->post_disable = intel_ddi_post_disable;
intel_encoder->get_hw_state = intel_ddi_get_hw_state;
intel_encoder->get_config = intel_ddi_get_config;
intel_encoder->suspend = intel_dp_encoder_suspend;
intel_dig_port->port = port;
intel_dig_port->saved_port_bits = I915_READ(DDI_BUF_CTL(port)) &


@ -13351,6 +13351,9 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
}
for_each_crtc_in_state(state, crtc, crtc_state, i) {
if (state->legacy_cursor_update)
continue;
ret = intel_crtc_wait_for_pending_flips(crtc);
if (ret)
return ret;


@ -4898,7 +4898,7 @@ void intel_dp_encoder_destroy(struct drm_encoder *encoder)
kfree(intel_dig_port);
}
static void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
{
struct intel_dp *intel_dp = enc_to_intel_dp(&intel_encoder->base);
@ -4940,7 +4940,7 @@ static void intel_edp_panel_vdd_sanitize(struct intel_dp *intel_dp)
edp_panel_vdd_schedule_off(intel_dp);
}
static void intel_dp_encoder_reset(struct drm_encoder *encoder)
void intel_dp_encoder_reset(struct drm_encoder *encoder)
{
struct intel_dp *intel_dp;


@ -1238,6 +1238,8 @@ void intel_dp_set_link_params(struct intel_dp *intel_dp,
void intel_dp_start_link_train(struct intel_dp *intel_dp);
void intel_dp_stop_link_train(struct intel_dp *intel_dp);
void intel_dp_sink_dpms(struct intel_dp *intel_dp, int mode);
void intel_dp_encoder_reset(struct drm_encoder *encoder);
void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder);
void intel_dp_encoder_destroy(struct drm_encoder *encoder);
int intel_dp_sink_crc(struct intel_dp *intel_dp, u8 *crc);
bool intel_dp_compute_config(struct intel_encoder *encoder,


@ -1415,8 +1415,16 @@ intel_hdmi_detect(struct drm_connector *connector, bool force)
hdmi_to_dig_port(intel_hdmi));
}
if (!live_status)
DRM_DEBUG_KMS("Live status not up!");
if (!live_status) {
DRM_DEBUG_KMS("HDMI live status down\n");
/*
* Live status register is not reliable on all intel platforms.
* So consider live_status only for certain platforms, for
* others, read EDID to determine presence of sink.
*/
if (INTEL_INFO(dev_priv)->gen < 7 || IS_IVYBRIDGE(dev_priv))
live_status = true;
}
intel_hdmi_unset_edid(connector);


@ -310,6 +310,10 @@ static bool radeon_atom_mode_fixup(struct drm_encoder *encoder,
&& (mode->crtc_vsync_start < (mode->crtc_vdisplay + 2)))
adjusted_mode->crtc_vsync_start = adjusted_mode->crtc_vdisplay + 2;
/* vertical FP must be at least 1 */
if (mode->crtc_vsync_start == mode->crtc_vdisplay)
adjusted_mode->crtc_vsync_start++;
/* get the native mode for scaling */
if (radeon_encoder->active_device & (ATOM_DEVICE_LCD_SUPPORT)) {
radeon_panel_mode_fixup(encoder, adjusted_mode);


@ -1068,7 +1068,6 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
goto err_register;
}
pdev->dev.of_node = of_node;
pdev->dev.parent = dev;
ret = platform_device_add_data(pdev, &reg->pdata,
@ -1079,6 +1078,12 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
platform_device_put(pdev);
goto err_register;
}
/*
* Set of_node only after calling platform_device_add. Otherwise
* the platform:imx-ipuv3-crtc modalias won't be used.
*/
pdev->dev.of_node = of_node;
}
return 0;


@ -103,15 +103,29 @@ static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
* there is room for the producer to send the pending packet.
*/
static bool hv_need_to_signal_on_read(u32 prev_write_sz,
struct hv_ring_buffer_info *rbi)
static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
{
u32 cur_write_sz;
u32 r_size;
u32 write_loc = rbi->ring_buffer->write_index;
u32 write_loc;
u32 read_loc = rbi->ring_buffer->read_index;
u32 pending_sz = rbi->ring_buffer->pending_send_sz;
u32 pending_sz;
/*
* Issue a full memory barrier before making the signaling decision.
* Here is the reason for having this barrier:
* If the reading of the pend_sz (in this function)
* were to be reordered and read before we commit the new read
* index (in the calling function) we could
* have a problem. If the host were to set the pending_sz after we
* have sampled pending_sz and go to sleep before we commit the
* read index, we could miss sending the interrupt. Issue a full
* memory barrier to address this.
*/
mb();
pending_sz = rbi->ring_buffer->pending_send_sz;
write_loc = rbi->ring_buffer->write_index;
/* If the other end is not blocked on write don't bother. */
if (pending_sz == 0)
return false;
@ -120,7 +134,7 @@ static bool hv_need_to_signal_on_read(u32 prev_write_sz,
cur_write_sz = write_loc >= read_loc ? r_size - (write_loc - read_loc) :
read_loc - write_loc;
if ((prev_write_sz < pending_sz) && (cur_write_sz >= pending_sz))
if (cur_write_sz >= pending_sz)
return true;
return false;
@ -455,7 +469,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
/* Update the read index */
hv_set_next_read_location(inring_info, next_read_location);
*signal = hv_need_to_signal_on_read(bytes_avail_towrite, inring_info);
*signal = hv_need_to_signal_on_read(inring_info);
return ret;
}


@ -451,6 +451,8 @@ static int at91_adc_probe(struct platform_device *pdev)
if (ret)
goto vref_disable;
platform_set_drvdata(pdev, indio_dev);
ret = iio_device_register(indio_dev);
if (ret < 0)
goto per_clk_disable_unprepare;


@ -104,6 +104,19 @@ static int inv_mpu6050_deselect_bypass(struct i2c_adapter *adap,
return 0;
}
static const char *inv_mpu_match_acpi_device(struct device *dev, int *chip_id)
{
const struct acpi_device_id *id;
id = acpi_match_device(dev->driver->acpi_match_table, dev);
if (!id)
return NULL;
*chip_id = (int)id->driver_data;
return dev_name(dev);
}
/**
* inv_mpu_probe() - probe function.
* @client: i2c client.
@ -115,14 +128,25 @@ static int inv_mpu_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct inv_mpu6050_state *st;
int result;
const char *name = id ? id->name : NULL;
int result, chip_type;
struct regmap *regmap;
const char *name;
if (!i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_I2C_BLOCK))
return -EOPNOTSUPP;
if (id) {
chip_type = (int)id->driver_data;
name = id->name;
} else if (ACPI_HANDLE(&client->dev)) {
name = inv_mpu_match_acpi_device(&client->dev, &chip_type);
if (!name)
return -ENODEV;
} else {
return -ENOSYS;
}
regmap = devm_regmap_init_i2c(client, &inv_mpu_regmap_config);
if (IS_ERR(regmap)) {
dev_err(&client->dev, "Failed to register i2c regmap %d\n",
@ -131,7 +155,7 @@ static int inv_mpu_probe(struct i2c_client *client,
}
result = inv_mpu_core_probe(regmap, client->irq, name,
NULL, id->driver_data);
NULL, chip_type);
if (result < 0)
return result;


@ -46,6 +46,7 @@ static int inv_mpu_probe(struct spi_device *spi)
struct regmap *regmap;
const struct spi_device_id *id = spi_get_device_id(spi);
const char *name = id ? id->name : NULL;
const int chip_type = id ? id->driver_data : 0;
regmap = devm_regmap_init_spi(spi, &inv_mpu_regmap_config);
if (IS_ERR(regmap)) {
@ -55,7 +56,7 @@ static int inv_mpu_probe(struct spi_device *spi)
}
return inv_mpu_core_probe(regmap, spi->irq, name,
inv_mpu_i2c_disable, id->driver_data);
inv_mpu_i2c_disable, chip_type);
}
static int inv_mpu_remove(struct spi_device *spi)


@ -462,6 +462,8 @@ static int ak8975_setup_irq(struct ak8975_data *data)
int rc;
int irq;
init_waitqueue_head(&data->data_ready_queue);
clear_bit(0, &data->flags);
if (client->irq)
irq = client->irq;
else
@ -477,8 +479,6 @@ static int ak8975_setup_irq(struct ak8975_data *data)
return rc;
}
init_waitqueue_head(&data->data_ready_queue);
clear_bit(0, &data->flags);
data->eoc_irq = irq;
return rc;
@ -732,7 +732,7 @@ static int ak8975_probe(struct i2c_client *client,
int eoc_gpio;
int err;
const char *name = NULL;
enum asahi_compass_chipset chipset;
enum asahi_compass_chipset chipset = AK_MAX_TYPE;
/* Grab and set up the supplied GPIO. */
if (client->dev.platform_data)


@ -612,6 +612,7 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
struct Scsi_Host *shost;
struct iser_conn *iser_conn = NULL;
struct ib_conn *ib_conn;
u32 max_fr_sectors;
u16 max_cmds;
shost = iscsi_host_alloc(&iscsi_iser_sht, 0, 0);
@ -632,7 +633,6 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
iser_conn = ep->dd_data;
max_cmds = iser_conn->max_cmds;
shost->sg_tablesize = iser_conn->scsi_sg_tablesize;
shost->max_sectors = iser_conn->scsi_max_sectors;
mutex_lock(&iser_conn->state_mutex);
if (iser_conn->state != ISER_CONN_UP) {
@ -657,8 +657,6 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
*/
shost->sg_tablesize = min_t(unsigned short, shost->sg_tablesize,
ib_conn->device->ib_device->attrs.max_fast_reg_page_list_len);
shost->max_sectors = min_t(unsigned int,
1024, (shost->sg_tablesize * PAGE_SIZE) >> 9);
if (iscsi_host_add(shost,
ib_conn->device->ib_device->dma_device)) {
@ -672,6 +670,15 @@ iscsi_iser_session_create(struct iscsi_endpoint *ep,
goto free_host;
}
/*
* FRs or FMRs can only map up to a (device) page per entry, but if the
* first entry is misaligned we'll end up using two entries
* (head and tail) for a single page worth of data, so we have to drop
* one segment from the calculation.
*/
max_fr_sectors = ((shost->sg_tablesize - 1) * PAGE_SIZE) >> 9;
shost->max_sectors = min(iser_max_sectors, max_fr_sectors);
if (cmds_max > max_cmds) {
iser_info("cmds_max changed from %u to %u\n",
cmds_max, max_cmds);
@ -989,7 +996,6 @@ static struct scsi_host_template iscsi_iser_sht = {
.queuecommand = iscsi_queuecommand,
.change_queue_depth = scsi_change_queue_depth,
.sg_tablesize = ISCSI_ISER_DEF_SG_TABLESIZE,
.max_sectors = ISER_DEF_MAX_SECTORS,
.cmd_per_lun = ISER_DEF_CMD_PER_LUN,
.eh_abort_handler = iscsi_eh_abort,
.eh_device_reset_handler= iscsi_eh_device_reset,
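The head/tail adjustment above is simple arithmetic; a worked example assuming 4 KiB pages and an illustrative sg_tablesize of 128:

	max_fr_sectors = ((128 - 1) * 4096) >> 9;	/* = 1016 sectors */
	shost->max_sectors = min(iser_max_sectors, max_fr_sectors);

so a transfer is capped at 1016 512-byte sectors (508 KiB) instead of the 1024 the tablesize would naively permit, keeping one entry in reserve for a misaligned first page.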


@ -181,6 +181,14 @@ static void vibra_play_work(struct work_struct *work)
{
struct vibra_info *info = container_of(work,
struct vibra_info, play_work);
int ret;
/* Do not allow the effect while the routing is set to use audio */
ret = twl6040_get_vibralr_status(info->twl6040);
if (ret & TWL6040_VIBSEL) {
dev_info(info->dev, "Vibra is configured for audio\n");
return;
}
mutex_lock(&info->mutex);
@ -199,14 +207,6 @@ static int vibra_play(struct input_dev *input, void *data,
struct ff_effect *effect)
{
struct vibra_info *info = input_get_drvdata(input);
int ret;
/* Do not allow the effect while the routing is set to use audio */
ret = twl6040_get_vibralr_status(info->twl6040);
if (ret & TWL6040_VIBSEL) {
dev_info(&input->dev, "Vibra is configured for audio\n");
return -EBUSY;
}
info->weak_speed = effect->u.rumble.weak_magnitude;
info->strong_speed = effect->u.rumble.strong_magnitude;


@ -1093,6 +1093,19 @@ static int mxt_t6_command(struct mxt_data *data, u16 cmd_offset,
return 0;
}
static int mxt_acquire_irq(struct mxt_data *data)
{
int error;
enable_irq(data->irq);
error = mxt_process_messages_until_invalid(data);
if (error)
return error;
return 0;
}
static int mxt_soft_reset(struct mxt_data *data)
{
struct device *dev = &data->client->dev;
@ -1111,7 +1124,7 @@ static int mxt_soft_reset(struct mxt_data *data)
/* Ignore CHG line for 100ms after reset */
msleep(100);
enable_irq(data->irq);
mxt_acquire_irq(data);
ret = mxt_wait_for_completion(data, &data->reset_completion,
MXT_RESET_TIMEOUT);
@ -1466,19 +1479,6 @@ release_mem:
return ret;
}
static int mxt_acquire_irq(struct mxt_data *data)
{
int error;
enable_irq(data->irq);
error = mxt_process_messages_until_invalid(data);
if (error)
return error;
return 0;
}
static int mxt_get_info(struct mxt_data *data)
{
struct i2c_client *client = data->client;


@ -370,8 +370,8 @@ static int zforce_touch_event(struct zforce_ts *ts, u8 *payload)
point.coord_x = point.coord_y = 0;
}
point.state = payload[9 * i + 5] & 0x03;
point.id = (payload[9 * i + 5] & 0xfc) >> 2;
point.state = payload[9 * i + 5] & 0x0f;
point.id = (payload[9 * i + 5] & 0xf0) >> 4;
/* determine touch major, minor and orientation */
point.area_major = max(payload[9 * i + 6],


@ -846,11 +846,11 @@ struct media_device *media_device_find_devres(struct device *dev)
}
EXPORT_SYMBOL_GPL(media_device_find_devres);
#if IS_ENABLED(CONFIG_PCI)
void media_device_pci_init(struct media_device *mdev,
struct pci_dev *pci_dev,
const char *name)
{
#ifdef CONFIG_PCI
mdev->dev = &pci_dev->dev;
if (name)
@ -866,16 +866,16 @@ void media_device_pci_init(struct media_device *mdev,
mdev->driver_version = LINUX_VERSION_CODE;
media_device_init(mdev);
#endif
}
EXPORT_SYMBOL_GPL(media_device_pci_init);
#endif
#if IS_ENABLED(CONFIG_USB)
void __media_device_usb_init(struct media_device *mdev,
struct usb_device *udev,
const char *board_name,
const char *driver_name)
{
#ifdef CONFIG_USB
mdev->dev = &udev->dev;
if (driver_name)
@ -895,9 +895,9 @@ void __media_device_usb_init(struct media_device *mdev,
mdev->driver_version = LINUX_VERSION_CODE;
media_device_init(mdev);
#endif
}
EXPORT_SYMBOL_GPL(__media_device_usb_init);
#endif
#endif /* CONFIG_MEDIA_CONTROLLER */


@ -1446,22 +1446,13 @@ static int fimc_md_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, fmd);
/* Protect the media graph while we're registering entities */
mutex_lock(&fmd->media_dev.graph_mutex);
ret = fimc_md_register_platform_entities(fmd, dev->of_node);
if (ret) {
mutex_unlock(&fmd->media_dev.graph_mutex);
if (ret)
goto err_clk;
}
ret = fimc_md_register_sensor_entities(fmd);
if (ret) {
mutex_unlock(&fmd->media_dev.graph_mutex);
if (ret)
goto err_m_ent;
}
mutex_unlock(&fmd->media_dev.graph_mutex);
ret = device_create_file(&pdev->dev, &dev_attr_subdev_conf_mode);
if (ret)


@ -493,21 +493,17 @@ static int s3c_camif_probe(struct platform_device *pdev)
if (ret < 0)
goto err_sens;
mutex_lock(&camif->media_dev.graph_mutex);
ret = v4l2_device_register_subdev_nodes(&camif->v4l2_dev);
if (ret < 0)
goto err_unlock;
goto err_sens;
ret = camif_register_video_nodes(camif);
if (ret < 0)
goto err_unlock;
goto err_sens;
ret = camif_create_media_links(camif);
if (ret < 0)
goto err_unlock;
mutex_unlock(&camif->media_dev.graph_mutex);
goto err_sens;
ret = media_device_register(&camif->media_dev);
if (ret < 0)
@ -516,8 +512,6 @@ static int s3c_camif_probe(struct platform_device *pdev)
pm_runtime_put(dev);
return 0;
err_unlock:
mutex_unlock(&camif->media_dev.graph_mutex);
err_sens:
v4l2_device_unregister(&camif->v4l2_dev);
media_device_unregister(&camif->media_dev);


@ -945,6 +945,11 @@ static long vop_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
ret = -EFAULT;
goto free_ret;
}
/* Ensure desc has not changed between the two reads */
if (memcmp(&dd, dd_config, sizeof(dd))) {
ret = -EINVAL;
goto free_ret;
}
mutex_lock(&vdev->vdev_mutex);
mutex_lock(&vi->vop_mutex);
ret = vop_virtio_add_device(vdev, dd_config);


@ -1608,21 +1608,22 @@ static int xgene_enet_probe(struct platform_device *pdev)
ret = xgene_enet_init_hw(pdata);
if (ret)
goto err;
goto err_netdev;
mac_ops = pdata->mac_ops;
if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) {
ret = xgene_enet_mdio_config(pdata);
if (ret)
goto err;
goto err_netdev;
} else {
INIT_DELAYED_WORK(&pdata->link_work, mac_ops->link_state);
}
xgene_enet_napi_add(pdata);
return 0;
err:
err_netdev:
unregister_netdev(ndev);
err:
free_netdev(ndev);
return ret;
}


@ -1439,6 +1439,10 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
if (!TX_CMP_VALID(txcmp, raw_cons))
break;
/* The validity test of the entry must be done before
* reading any further.
*/
rmb();
if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
tx_pkts++;
/* return full budget so NAPI will complete. */
@ -4096,9 +4100,11 @@ static int bnxt_alloc_rfs_vnics(struct bnxt *bp)
}
static int bnxt_cfg_rx_mode(struct bnxt *);
static bool bnxt_mc_list_updated(struct bnxt *, u32 *);
static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
{
struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
int rc = 0;
if (irq_re_init) {
@ -4154,13 +4160,22 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
netdev_err(bp->dev, "HWRM vnic filter failure rc: %x\n", rc);
goto err_out;
}
bp->vnic_info[0].uc_filter_count = 1;
vnic->uc_filter_count = 1;
bp->vnic_info[0].rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
vnic->rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
if ((bp->dev->flags & IFF_PROMISC) && BNXT_PF(bp))
bp->vnic_info[0].rx_mask |=
CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
if (bp->dev->flags & IFF_ALLMULTI) {
vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
vnic->mc_list_count = 0;
} else {
u32 mask = 0;
bnxt_mc_list_updated(bp, &mask);
vnic->rx_mask |= mask;
}
rc = bnxt_cfg_rx_mode(bp);
if (rc)


@ -1521,9 +1521,15 @@ fec_enet_rx(struct net_device *ndev, int budget)
struct fec_enet_private *fep = netdev_priv(ndev);
for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) {
clear_bit(queue_id, &fep->work_rx);
pkt_received += fec_enet_rx_queue(ndev,
int ret;
ret = fec_enet_rx_queue(ndev,
budget - pkt_received, queue_id);
if (ret < budget - pkt_received)
clear_bit(queue_id, &fep->work_rx);
pkt_received += ret;
}
return pkt_received;
}


@ -698,7 +698,7 @@ static int get_fixed_ipv6_csum(__wsum hw_checksum, struct sk_buff *skb,
if (ipv6h->nexthdr == IPPROTO_FRAGMENT || ipv6h->nexthdr == IPPROTO_HOPOPTS)
return -1;
hw_checksum = csum_add(hw_checksum, (__force __wsum)(ipv6h->nexthdr << 8));
hw_checksum = csum_add(hw_checksum, (__force __wsum)htons(ipv6h->nexthdr));
csum_pseudo_hdr = csum_partial(&ipv6h->saddr,
sizeof(ipv6h->saddr) + sizeof(ipv6h->daddr), 0);
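Why htons() rather than a shift: in the IPv6 pseudo-header, the next-header value sits in the low-order byte of a 16-bit big-endian word. A worked example with nexthdr = IPPROTO_TCP (6):

	/* bytes on the wire: 0x00 0x06 */
	hw_checksum = csum_add(hw_checksum, (__force __wsum)htons(6));

On little-endian hosts, (6 << 8) happens to produce the same bit pattern as htons(6), which is why the bug went unnoticed on x86; on big-endian hosts it folds 0x0600 instead of 0x0006 into the sum and corrupts the checksum.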


@ -14,7 +14,6 @@ config MLX5_CORE_EN
bool "Mellanox Technologies ConnectX-4 Ethernet support"
depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
select PTP_1588_CLOCK
select VXLAN if MLX5_CORE=y
default n
---help---
Ethernet support in Mellanox Technologies ConnectX-4 NIC.
@ -32,3 +31,10 @@ config MLX5_CORE_EN_DCB
This flag depends on the kernel's DCB support.
If unsure, set to Y.
config MLX5_CORE_EN_VXLAN
bool "VXLAN offloads Support"
default y
depends on MLX5_CORE_EN && VXLAN && !(MLX5_CORE=y && VXLAN=m)
---help---
Say Y here if you want to use VXLAN offloads in the driver.


@ -6,6 +6,7 @@ mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o \
en_main.o en_fs.o en_ethtool.o en_tx.o en_rx.o \
en_txrx.o en_clock.o vxlan.o en_tc.o en_arfs.o
en_txrx.o en_clock.o en_tc.o en_arfs.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_VXLAN) += vxlan.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o


@ -522,7 +522,12 @@ struct mlx5e_priv {
struct mlx5e_direct_tir direct_tir[MLX5E_MAX_NUM_CHANNELS];
struct mlx5e_flow_steering fs;
struct mlx5e_flow_tables fts;
struct mlx5e_eth_addr_db eth_addr;
struct mlx5e_vlan_db vlan;
#ifdef CONFIG_MLX5_CORE_EN_VXLAN
struct mlx5e_vxlan_db vxlan;
#endif
struct mlx5e_params params;
struct workqueue_struct *wq;


@ -2509,6 +2509,7 @@ static int mlx5e_get_vf_stats(struct net_device *dev,
vf_stats);
}
#if IS_ENABLED(CONFIG_MLX5_CORE_EN_VXLAN)
static void mlx5e_add_vxlan_port(struct net_device *netdev,
sa_family_t sa_family, __be16 port)
{
@ -2580,6 +2581,7 @@ static netdev_features_t mlx5e_features_check(struct sk_buff *skb,
return features;
}
#endif
static const struct net_device_ops mlx5e_netdev_ops_basic = {
.ndo_open = mlx5e_open,
@ -2614,6 +2616,7 @@ static const struct net_device_ops mlx5e_netdev_ops_sriov = {
.ndo_set_features = mlx5e_set_features,
.ndo_change_mtu = mlx5e_change_mtu,
.ndo_do_ioctl = mlx5e_ioctl,
#ifdef CONFIG_MLX5_CORE_EN_VXLAN
.ndo_add_vxlan_port = mlx5e_add_vxlan_port,
.ndo_del_vxlan_port = mlx5e_del_vxlan_port,
.ndo_features_check = mlx5e_features_check,


@ -48,14 +48,21 @@ struct mlx5e_vxlan_work {
static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev)
{
return (MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) &&
return IS_ENABLED(CONFIG_MLX5_CORE_EN_VXLAN) &&
(MLX5_CAP_ETH(mdev, tunnel_stateless_vxlan) &&
mlx5_core_is_pf(mdev));
}
#ifdef CONFIG_MLX5_CORE_EN_VXLAN
void mlx5e_vxlan_init(struct mlx5e_priv *priv);
void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv);
#else
static inline void mlx5e_vxlan_init(struct mlx5e_priv *priv) {}
static inline void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv) {}
#endif
void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family,
u16 port, int add);
struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port);
void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv);
#endif /* __MLX5_VXLAN_H__ */
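The header now follows the usual conditional-API pattern; a generic sketch with invented names:

	#ifdef CONFIG_FOO_OFFLOAD
	void foo_offload_init(struct foo_priv *priv);
	#else
	static inline void foo_offload_init(struct foo_priv *priv) {}
	#endif

Callers stay unconditional: with the option off, the empty inline compiles away, and IS_ENABLED(CONFIG_FOO_OFFLOAD) in a helper like mlx5e_vxlan_allowed() folds the capability test to constant false so the guarded paths are discarded at compile time.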


@ -2843,11 +2843,11 @@ static int mlxsw_sp_port_lag_join(struct mlxsw_sp_port *mlxsw_sp_port,
lag->ref_count++;
return 0;
err_col_port_enable:
mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id);
err_col_port_add:
if (!lag->ref_count)
mlxsw_sp_lag_destroy(mlxsw_sp, lag_id);
err_col_port_enable:
mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id);
return err;
}
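The swap restores the usual unwind rule: error labels undo operations in the reverse order of their success. A minimal sketch with invented names:

	err = op_add();
	if (err)
		goto err_add;
	err = op_enable();
	if (err)
		goto err_enable;
	return 0;
err_enable:
	undo_add();	/* op_add() succeeded, so it must be undone */
err_add:
	return err;

Before the fix the two labels were inverted, so a failing col_port_add fell through into col_port_remove for a port that was never added.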


@ -214,7 +214,15 @@ static int __mlxsw_sp_port_flood_set(struct mlxsw_sp_port *mlxsw_sp_port,
mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_BM, idx_begin,
table_type, range, local_port, set);
err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl);
if (err)
goto err_flood_bm_set;
else
goto buffer_out;
err_flood_bm_set:
mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_UC, idx_begin,
table_type, range, local_port, !set);
mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl);
buffer_out:
kfree(sftr_pl);
return err;


@ -1015,20 +1015,24 @@ static int netxen_get_flash_block(struct netxen_adapter *adapter, int base,
{
int i, v, addr;
__le32 *ptr32;
int ret;
addr = base;
ptr32 = buf;
for (i = 0; i < size / sizeof(u32); i++) {
if (netxen_rom_fast_read(adapter, addr, &v) == -1)
return -1;
ret = netxen_rom_fast_read(adapter, addr, &v);
if (ret)
return ret;
*ptr32 = cpu_to_le32(v);
ptr32++;
addr += sizeof(u32);
}
if ((char *)buf + size > (char *)ptr32) {
__le32 local;
if (netxen_rom_fast_read(adapter, addr, &v) == -1)
return -1;
ret = netxen_rom_fast_read(adapter, addr, &v);
if (ret)
return ret;
local = cpu_to_le32(v);
memcpy(ptr32, &local, (char *)buf + size - (char *)ptr32);
}
@ -1940,7 +1944,7 @@ void netxen_nic_set_link_parameters(struct netxen_adapter *adapter)
if (adapter->phy_read &&
adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG,
&autoneg) != 0)
&autoneg) == 0)
adapter->link_autoneg = autoneg;
} else
goto link_down;


@ -852,7 +852,8 @@ netxen_check_options(struct netxen_adapter *adapter)
ptr32 = (__le32 *)&serial_num;
offset = NX_FW_SERIAL_NUM_OFFSET;
for (i = 0; i < 8; i++) {
if (netxen_rom_fast_read(adapter, offset, &val) == -1) {
err = netxen_rom_fast_read(adapter, offset, &val);
if (err) {
dev_err(&pdev->dev, "error reading board info\n");
adapter->driver_mismatch = 1;
return;


@ -429,7 +429,7 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
u8 xmit_type;
u16 idx;
u16 hlen;
bool data_split;
bool data_split = false;
/* Get tx-queue context and netdev index */
txq_index = skb_get_queue_mapping(skb);
@ -2094,8 +2094,6 @@ static struct qede_dev *qede_alloc_etherdev(struct qed_dev *cdev,
edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
DP_INFO(edev, "Allocated netdev with 64 tx queues and 64 rx queues\n");
SET_NETDEV_DEV(ndev, &pdev->dev);
memset(&edev->stats, 0, sizeof(edev->stats));
@ -2274,9 +2272,9 @@ static void qede_update_pf_params(struct qed_dev *cdev)
{
struct qed_pf_params pf_params;
/* 16 rx + 16 tx */
/* 64 rx + 64 tx */
memset(&pf_params, 0, sizeof(struct qed_pf_params));
pf_params.eth_pf_params.num_cons = 32;
pf_params.eth_pf_params.num_cons = 128;
qed_ops->common->update_pf_params(cdev, &pf_params);
}


@ -495,8 +495,6 @@ static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb,
int gh_len;
int err = -ENOSYS;
udp_tunnel_gro_complete(skb, nhoff);
gh = (struct genevehdr *)(skb->data + nhoff);
gh_len = geneve_hlen(gh);
type = gh->proto_type;
@ -507,6 +505,9 @@ static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb,
err = ptype->callbacks.gro_complete(skb, nhoff + gh_len);
rcu_read_unlock();
skb_set_inner_mac_header(skb, nhoff + gh_len);
return err;
}


@ -85,7 +85,7 @@ struct gcm_iv {
* @tfm: crypto struct, key storage
*/
struct macsec_key {
u64 id;
u8 id[MACSEC_KEYID_LEN];
struct crypto_aead *tfm;
};
@ -1530,7 +1530,8 @@ static const struct nla_policy macsec_genl_sa_policy[NUM_MACSEC_SA_ATTR] = {
[MACSEC_SA_ATTR_AN] = { .type = NLA_U8 },
[MACSEC_SA_ATTR_ACTIVE] = { .type = NLA_U8 },
[MACSEC_SA_ATTR_PN] = { .type = NLA_U32 },
[MACSEC_SA_ATTR_KEYID] = { .type = NLA_U64 },
[MACSEC_SA_ATTR_KEYID] = { .type = NLA_BINARY,
.len = MACSEC_KEYID_LEN, },
[MACSEC_SA_ATTR_KEY] = { .type = NLA_BINARY,
.len = MACSEC_MAX_KEY_LEN, },
};
@ -1577,6 +1578,9 @@ static bool validate_add_rxsa(struct nlattr **attrs)
return false;
}
if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
return false;
return true;
}
@ -1642,7 +1646,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
rx_sa->active = !!nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
rx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
nla_memcpy(rx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEY], MACSEC_KEYID_LEN);
rx_sa->sc = rx_sc;
rcu_assign_pointer(rx_sc->sa[assoc_num], rx_sa);
@ -1723,6 +1727,9 @@ static bool validate_add_txsa(struct nlattr **attrs)
return false;
}
if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
return false;
return true;
}
@ -1778,7 +1785,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
}
tx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
nla_memcpy(tx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEY], MACSEC_KEYID_LEN);
spin_lock_bh(&tx_sa->lock);
tx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
@ -2365,9 +2372,7 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) ||
nla_put_u64_64bit(skb, MACSEC_SA_ATTR_KEYID,
tx_sa->key.id,
MACSEC_SA_ATTR_PAD) ||
nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, tx_sa->key.id) ||
nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) {
nla_nest_cancel(skb, txsa_nest);
nla_nest_cancel(skb, txsa_list);
@ -2469,9 +2474,7 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
nla_put_u32(skb, MACSEC_SA_ATTR_PN, rx_sa->next_pn) ||
nla_put_u64_64bit(skb, MACSEC_SA_ATTR_KEYID,
rx_sa->key.id,
MACSEC_SA_ATTR_PAD) ||
nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, rx_sa->key.id) ||
nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, rx_sa->active)) {
nla_nest_cancel(skb, rxsa_nest);
nla_nest_cancel(skb, rxsc_nest);
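One detail worth noting: for NLA_BINARY, the policy's .len is only an upper bound, so exact-size enforcement needs the explicit checks added above. Condensed:

	/* policy: .len caps the maximum, it does not demand an exact size */
	[MACSEC_SA_ATTR_KEYID] = { .type = NLA_BINARY,
				   .len = MACSEC_KEYID_LEN },

	/* hence the explicit length test in both validators */
	if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
		return false;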


@ -384,7 +384,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
goto wake_up;
}
kfree_skb(skb);
consume_skb(skb);
while (segs) {
struct sk_buff *nskb = segs->next;


@ -613,8 +613,9 @@ out:
static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
{
udp_tunnel_gro_complete(skb, nhoff);
/* Sets 'skb->inner_mac_header' since we are always called with
* 'skb->encapsulation' set.
*/
return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
}


@ -397,10 +397,17 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
*/
start += start_pad;
npfns = (pmem->size - start_pad - end_trunc - SZ_8K) / SZ_4K;
if (nd_pfn->mode == PFN_MODE_PMEM)
offset = ALIGN(start + SZ_8K + 64 * npfns, nd_pfn->align)
if (nd_pfn->mode == PFN_MODE_PMEM) {
unsigned long memmap_size;
/*
* vmemmap_populate_hugepages() allocates the memmap array in
* HPAGE_SIZE chunks.
*/
memmap_size = ALIGN(64 * npfns, HPAGE_SIZE);
offset = ALIGN(start + SZ_8K + memmap_size, nd_pfn->align)
- start;
else if (nd_pfn->mode == PFN_MODE_RAM)
} else if (nd_pfn->mode == PFN_MODE_RAM)
offset = ALIGN(start + SZ_8K, nd_pfn->align) - start;
else
goto err;
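A worked example of the new reservation, assuming 4 KiB pages, 64 bytes of memmap per page and HPAGE_SIZE = 2 MiB: for npfns = 32769, 64 * npfns comes to 64 bytes over 2 MiB, so

	memmap_size = ALIGN(64 * npfns, HPAGE_SIZE);	/* 4 MiB, not 2 MiB + 64 B */

reserves a full second hugepage rather than letting vmemmap_populate_hugepages() allocate past the declared offset.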


@ -94,7 +94,7 @@ static int mxs_ocotp_read(void *context, const void *reg, size_t reg_size,
if (ret)
goto close_banks;
while (val_size) {
while (val_size >= reg_size) {
if ((offset < OCOTP_DATA_OFFSET) || (offset % 16)) {
/* fill up non-data register */
*buf = 0;
@ -103,7 +103,7 @@ static int mxs_ocotp_read(void *context, const void *reg, size_t reg_size,
}
buf++;
val_size--;
val_size -= reg_size;
offset += reg_size;
}
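A worked example of the corrected stride, assuming reg_size = 4 and val_size = 16 (both in bytes): the loop now makes four passes, consuming one 32-bit register each time, where the old `val_size--` made sixteen and read twelve words past the requested range:

	while (val_size >= reg_size) {	/* val_size: 16, 12, 8, 4 */
		/* register read as in the hunk above */
		buf++;
		val_size -= reg_size;	/* consume a whole register, not one byte */
		offset += reg_size;
	}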


@ -126,7 +126,7 @@ struct rio_mport_mapping {
struct list_head node;
struct mport_dev *md;
enum rio_mport_map_dir dir;
u32 rioid;
u16 rioid;
u64 rio_addr;
dma_addr_t phys_addr; /* for mmap */
void *virt_addr; /* kernel address, for dma_free_coherent */
@ -137,7 +137,7 @@ struct rio_mport_mapping {
struct rio_mport_dma_map {
int valid;
uint64_t length;
u64 length;
void *vaddr;
dma_addr_t paddr;
};
@ -208,7 +208,7 @@ struct mport_cdev_priv {
struct kfifo event_fifo;
wait_queue_head_t event_rx_wait;
spinlock_t fifo_lock;
unsigned int event_mask; /* RIO_DOORBELL, RIO_PORTWRITE */
u32 event_mask; /* RIO_DOORBELL, RIO_PORTWRITE */
#ifdef CONFIG_RAPIDIO_DMA_ENGINE
struct dma_chan *dmach;
struct list_head async_list;
@ -276,7 +276,8 @@ static int rio_mport_maint_rd(struct mport_cdev_priv *priv, void __user *arg,
return -EFAULT;
if ((maint_io.offset % 4) ||
(maint_io.length == 0) || (maint_io.length % 4))
(maint_io.length == 0) || (maint_io.length % 4) ||
(maint_io.length + maint_io.offset) > RIO_MAINT_SPACE_SZ)
return -EINVAL;
buffer = vmalloc(maint_io.length);
@ -298,7 +299,8 @@ static int rio_mport_maint_rd(struct mport_cdev_priv *priv, void __user *arg,
offset += 4;
}
if (unlikely(copy_to_user(maint_io.buffer, buffer, maint_io.length)))
if (unlikely(copy_to_user((void __user *)(uintptr_t)maint_io.buffer,
buffer, maint_io.length)))
ret = -EFAULT;
out:
vfree(buffer);
@ -319,7 +321,8 @@ static int rio_mport_maint_wr(struct mport_cdev_priv *priv, void __user *arg,
return -EFAULT;
if ((maint_io.offset % 4) ||
(maint_io.length == 0) || (maint_io.length % 4))
(maint_io.length == 0) || (maint_io.length % 4) ||
(maint_io.length + maint_io.offset) > RIO_MAINT_SPACE_SZ)
return -EINVAL;
buffer = vmalloc(maint_io.length);
@ -327,7 +330,8 @@ static int rio_mport_maint_wr(struct mport_cdev_priv *priv, void __user *arg,
return -ENOMEM;
length = maint_io.length;
if (unlikely(copy_from_user(buffer, maint_io.buffer, length))) {
if (unlikely(copy_from_user(buffer,
(void __user *)(uintptr_t)maint_io.buffer, length))) {
ret = -EFAULT;
goto out;
}
@ -360,7 +364,7 @@ out:
*/
static int
rio_mport_create_outbound_mapping(struct mport_dev *md, struct file *filp,
u32 rioid, u64 raddr, u32 size,
u16 rioid, u64 raddr, u32 size,
dma_addr_t *paddr)
{
struct rio_mport *mport = md->mport;
@ -369,7 +373,7 @@ rio_mport_create_outbound_mapping(struct mport_dev *md, struct file *filp,
rmcd_debug(OBW, "did=%d ra=0x%llx sz=0x%x", rioid, raddr, size);
map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL);
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (map == NULL)
return -ENOMEM;
@ -394,7 +398,7 @@ err_map_outb:
static int
rio_mport_get_outbound_mapping(struct mport_dev *md, struct file *filp,
u32 rioid, u64 raddr, u32 size,
u16 rioid, u64 raddr, u32 size,
dma_addr_t *paddr)
{
struct rio_mport_mapping *map;
@ -433,7 +437,7 @@ static int rio_mport_obw_map(struct file *filp, void __user *arg)
dma_addr_t paddr;
int ret;
if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_mmap))))
if (unlikely(copy_from_user(&map, arg, sizeof(map))))
return -EFAULT;
rmcd_debug(OBW, "did=%d ra=0x%llx sz=0x%llx",
@ -448,7 +452,7 @@ static int rio_mport_obw_map(struct file *filp, void __user *arg)
map.handle = paddr;
if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_mmap))))
if (unlikely(copy_to_user(arg, &map, sizeof(map))))
return -EFAULT;
return 0;
}
@ -469,7 +473,7 @@ static int rio_mport_obw_free(struct file *filp, void __user *arg)
if (!md->mport->ops->unmap_outb)
return -EPROTONOSUPPORT;
if (copy_from_user(&handle, arg, sizeof(u64)))
if (copy_from_user(&handle, arg, sizeof(handle)))
return -EFAULT;
rmcd_debug(OBW, "h=0x%llx", handle);
@ -498,9 +502,9 @@ static int rio_mport_obw_free(struct file *filp, void __user *arg)
static int maint_hdid_set(struct mport_cdev_priv *priv, void __user *arg)
{
struct mport_dev *md = priv->md;
uint16_t hdid;
u16 hdid;
if (copy_from_user(&hdid, arg, sizeof(uint16_t)))
if (copy_from_user(&hdid, arg, sizeof(hdid)))
return -EFAULT;
md->mport->host_deviceid = hdid;
@ -520,9 +524,9 @@ static int maint_hdid_set(struct mport_cdev_priv *priv, void __user *arg)
static int maint_comptag_set(struct mport_cdev_priv *priv, void __user *arg)
{
struct mport_dev *md = priv->md;
uint32_t comptag;
u32 comptag;
if (copy_from_user(&comptag, arg, sizeof(uint32_t)))
if (copy_from_user(&comptag, arg, sizeof(comptag)))
return -EFAULT;
rio_local_write_config_32(md->mport, RIO_COMPONENT_TAG_CSR, comptag);
@ -837,7 +841,7 @@ err_out:
* @xfer: data transfer descriptor structure
*/
static int
rio_dma_transfer(struct file *filp, uint32_t transfer_mode,
rio_dma_transfer(struct file *filp, u32 transfer_mode,
enum rio_transfer_sync sync, enum dma_data_direction dir,
struct rio_transfer_io *xfer)
{
@ -875,7 +879,7 @@ rio_dma_transfer(struct file *filp, uint32_t transfer_mode,
unsigned long offset;
long pinned;
offset = (unsigned long)xfer->loc_addr & ~PAGE_MASK;
offset = (unsigned long)(uintptr_t)xfer->loc_addr & ~PAGE_MASK;
nr_pages = PAGE_ALIGN(xfer->length + offset) >> PAGE_SHIFT;
page_list = kmalloc_array(nr_pages,
@ -1015,19 +1019,20 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
if (unlikely(copy_from_user(&transaction, arg, sizeof(transaction))))
return -EFAULT;
if (transaction.count != 1)
if (transaction.count != 1) /* only single transfer for now */
return -EINVAL;
if ((transaction.transfer_mode &
priv->md->properties.transfer_mode) == 0)
return -ENODEV;
transfer = vmalloc(transaction.count * sizeof(struct rio_transfer_io));
transfer = vmalloc(transaction.count * sizeof(*transfer));
if (!transfer)
return -ENOMEM;
if (unlikely(copy_from_user(transfer, transaction.block,
transaction.count * sizeof(struct rio_transfer_io)))) {
if (unlikely(copy_from_user(transfer,
(void __user *)(uintptr_t)transaction.block,
transaction.count * sizeof(*transfer)))) {
ret = -EFAULT;
goto out_free;
}
@ -1038,8 +1043,9 @@ static int rio_mport_transfer_ioctl(struct file *filp, void __user *arg)
ret = rio_dma_transfer(filp, transaction.transfer_mode,
transaction.sync, dir, &transfer[i]);
if (unlikely(copy_to_user(transaction.block, transfer,
transaction.count * sizeof(struct rio_transfer_io))))
if (unlikely(copy_to_user((void __user *)(uintptr_t)transaction.block,
transfer,
transaction.count * sizeof(*transfer))))
ret = -EFAULT;
out_free:
@ -1129,11 +1135,11 @@ err_tmo:
}
static int rio_mport_create_dma_mapping(struct mport_dev *md, struct file *filp,
uint64_t size, struct rio_mport_mapping **mapping)
u64 size, struct rio_mport_mapping **mapping)
{
struct rio_mport_mapping *map;
map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL);
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (map == NULL)
return -ENOMEM;
@ -1165,7 +1171,7 @@ static int rio_mport_alloc_dma(struct file *filp, void __user *arg)
struct rio_mport_mapping *mapping = NULL;
int ret;
if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_dma_mem))))
if (unlikely(copy_from_user(&map, arg, sizeof(map))))
return -EFAULT;
ret = rio_mport_create_dma_mapping(md, filp, map.length, &mapping);
@ -1174,7 +1180,7 @@ static int rio_mport_alloc_dma(struct file *filp, void __user *arg)
map.dma_handle = mapping->phys_addr;
if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_dma_mem)))) {
if (unlikely(copy_to_user(arg, &map, sizeof(map)))) {
mutex_lock(&md->buf_mutex);
kref_put(&mapping->ref, mport_release_mapping);
mutex_unlock(&md->buf_mutex);
@ -1192,7 +1198,7 @@ static int rio_mport_free_dma(struct file *filp, void __user *arg)
int ret = -EFAULT;
struct rio_mport_mapping *map, *_map;
if (copy_from_user(&handle, arg, sizeof(u64)))
if (copy_from_user(&handle, arg, sizeof(handle)))
return -EFAULT;
rmcd_debug(EXIT, "filp=%p", filp);
@ -1242,14 +1248,18 @@ static int rio_mport_free_dma(struct file *filp, void __user *arg)
static int
rio_mport_create_inbound_mapping(struct mport_dev *md, struct file *filp,
u64 raddr, u32 size,
u64 raddr, u64 size,
struct rio_mport_mapping **mapping)
{
struct rio_mport *mport = md->mport;
struct rio_mport_mapping *map;
int ret;
map = kzalloc(sizeof(struct rio_mport_mapping), GFP_KERNEL);
/* rio_map_inb_region() accepts u32 size */
if (size > 0xffffffff)
return -EINVAL;
map = kzalloc(sizeof(*map), GFP_KERNEL);
if (map == NULL)
return -ENOMEM;
@ -1262,7 +1272,7 @@ rio_mport_create_inbound_mapping(struct mport_dev *md, struct file *filp,
if (raddr == RIO_MAP_ANY_ADDR)
raddr = map->phys_addr;
ret = rio_map_inb_region(mport, map->phys_addr, raddr, size, 0);
ret = rio_map_inb_region(mport, map->phys_addr, raddr, (u32)size, 0);
if (ret < 0)
goto err_map_inb;
@ -1288,7 +1298,7 @@ err_dma_alloc:
static int
rio_mport_get_inbound_mapping(struct mport_dev *md, struct file *filp,
u64 raddr, u32 size,
u64 raddr, u64 size,
struct rio_mport_mapping **mapping)
{
struct rio_mport_mapping *map;
@ -1331,7 +1341,7 @@ static int rio_mport_map_inbound(struct file *filp, void __user *arg)
if (!md->mport->ops->map_inb)
return -EPROTONOSUPPORT;
if (unlikely(copy_from_user(&map, arg, sizeof(struct rio_mmap))))
if (unlikely(copy_from_user(&map, arg, sizeof(map))))
return -EFAULT;
rmcd_debug(IBW, "%s filp=%p", dev_name(&priv->md->dev), filp);
@ -1344,7 +1354,7 @@ static int rio_mport_map_inbound(struct file *filp, void __user *arg)
map.handle = mapping->phys_addr;
map.rio_addr = mapping->rio_addr;
if (unlikely(copy_to_user(arg, &map, sizeof(struct rio_mmap)))) {
if (unlikely(copy_to_user(arg, &map, sizeof(map)))) {
/* Delete mapping if it was created by this request */
if (ret == 0 && mapping->filp == filp) {
mutex_lock(&md->buf_mutex);
@ -1375,7 +1385,7 @@ static int rio_mport_inbound_free(struct file *filp, void __user *arg)
if (!md->mport->ops->unmap_inb)
return -EPROTONOSUPPORT;
if (copy_from_user(&handle, arg, sizeof(u64)))
if (copy_from_user(&handle, arg, sizeof(handle)))
return -EFAULT;
mutex_lock(&md->buf_mutex);
@ -1401,7 +1411,7 @@ static int rio_mport_inbound_free(struct file *filp, void __user *arg)
static int maint_port_idx_get(struct mport_cdev_priv *priv, void __user *arg)
{
struct mport_dev *md = priv->md;
uint32_t port_idx = md->mport->index;
u32 port_idx = md->mport->index;
rmcd_debug(MPORT, "port_index=%d", port_idx);
@ -1451,7 +1461,7 @@ static void rio_mport_doorbell_handler(struct rio_mport *mport, void *dev_id,
handled = 0;
spin_lock(&data->db_lock);
list_for_each_entry(db_filter, &data->doorbells, data_node) {
if (((db_filter->filter.rioid == 0xffffffff ||
if (((db_filter->filter.rioid == RIO_INVALID_DESTID ||
db_filter->filter.rioid == src)) &&
info >= db_filter->filter.low &&
info <= db_filter->filter.high) {
@ -1525,6 +1535,9 @@ static int rio_mport_remove_db_filter(struct mport_cdev_priv *priv,
if (copy_from_user(&filter, arg, sizeof(filter)))
return -EFAULT;
if (filter.low > filter.high)
return -EINVAL;
spin_lock_irqsave(&priv->md->db_lock, flags);
list_for_each_entry(db_filter, &priv->db_filters, priv_node) {
if (db_filter->filter.rioid == filter.rioid &&
@ -1737,10 +1750,10 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
return -EEXIST;
}
size = sizeof(struct rio_dev);
size = sizeof(*rdev);
mport = md->mport;
destid = (u16)dev_info.destid;
hopcount = (u8)dev_info.hopcount;
destid = dev_info.destid;
hopcount = dev_info.hopcount;
if (rio_mport_read_config_32(mport, destid, hopcount,
RIO_PEF_CAR, &rval))
@ -1872,8 +1885,8 @@ static int rio_mport_del_riodev(struct mport_cdev_priv *priv, void __user *arg)
do {
rdev = rio_get_comptag(dev_info.comptag, rdev);
if (rdev && rdev->dev.parent == &mport->net->dev &&
rdev->destid == (u16)dev_info.destid &&
rdev->hopcount == (u8)dev_info.hopcount)
rdev->destid == dev_info.destid &&
rdev->hopcount == dev_info.hopcount)
break;
} while (rdev);
}
@ -2146,8 +2159,8 @@ static long mport_cdev_ioctl(struct file *filp,
return maint_port_idx_get(data, (void __user *)arg);
case RIO_MPORT_GET_PROPERTIES:
md->properties.hdid = md->mport->host_deviceid;
if (copy_to_user((void __user *)arg, &(data->md->properties),
sizeof(data->md->properties)))
if (copy_to_user((void __user *)arg, &(md->properties),
sizeof(md->properties)))
return -EFAULT;
return 0;
case RIO_ENABLE_DOORBELL_RANGE:
@ -2159,11 +2172,11 @@ static long mport_cdev_ioctl(struct file *filp,
case RIO_DISABLE_PORTWRITE_RANGE:
return rio_mport_remove_pw_filter(data, (void __user *)arg);
case RIO_SET_EVENT_MASK:
data->event_mask = arg;
data->event_mask = (u32)arg;
return 0;
case RIO_GET_EVENT_MASK:
if (copy_to_user((void __user *)arg, &data->event_mask,
sizeof(data->event_mask)))
sizeof(u32)))
return -EFAULT;
return 0;
case RIO_MAP_OUTBOUND:
@ -2374,7 +2387,7 @@ static ssize_t mport_write(struct file *filp, const char __user *buf,
return -EINVAL;
ret = rio_mport_send_doorbell(mport,
(u16)event.u.doorbell.rioid,
event.u.doorbell.rioid,
event.u.doorbell.payload);
if (ret < 0)
return ret;
@ -2421,7 +2434,7 @@ static struct mport_dev *mport_cdev_add(struct rio_mport *mport)
struct mport_dev *md;
struct rio_mport_attr attr;
md = kzalloc(sizeof(struct mport_dev), GFP_KERNEL);
md = kzalloc(sizeof(*md), GFP_KERNEL);
if (!md) {
rmcd_error("Unable allocate a device object");
return NULL;
@ -2470,7 +2483,7 @@ static struct mport_dev *mport_cdev_add(struct rio_mport *mport)
/* The transfer_mode property will be returned through the mport query
* interface
*/
#ifdef CONFIG_PPC /* for now: only on Freescale's SoCs */
#ifdef CONFIG_FSL_RIO /* for now: only on Freescale's SoCs */
md->properties.transfer_mode |= RIO_TRANSFER_MODE_MAPPED;
#else
md->properties.transfer_mode |= RIO_TRANSFER_MODE_TRANSFER;
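Several of these conversions follow one convention worth spelling out: user pointers cross the ioctl ABI as u64 so 32- and 64-bit userspace share a single layout, and are cast back only at the copy boundary. A minimal sketch with a hypothetical struct:

	struct demo_xfer {
		__u64 buffer;	/* user pointer carried as u64 */
		__u64 length;
	};

	static int demo_fetch(struct demo_xfer *io, void *kbuf)
	{
		if (copy_from_user(kbuf,
				   (void __user *)(uintptr_t)io->buffer,
				   io->length))
			return -EFAULT;
		return 0;
	}

The intermediate (uintptr_t) cast avoids a size-mismatch warning when the 64-bit value is narrowed to a 32-bit pointer.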

Some files were not shown because too many files have changed in this diff.