mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-25 13:43:55 +08:00
Commit Graph

811060 Commits

Author SHA1 Message Date
Peter Zijlstra
5d299eabea sched/fair: Add tmp_alone_branch assertion
The magic in list_add_leaf_cfs_rq() requires that at the end of
enqueue_task_fair():

  rq->tmp_alone_branch == &rq->leaf_cfs_rq_list

If this is violated, list integrity is compromised for list entries
and the tmp_alone_branch pointer might dangle.

Also, reflow list_add_leaf_cfs_rq() while there. This loses one
indentation level and generates a form that's convenient for the next
patch.
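
For illustration, the assertion boils down to something like the sketch
below (the helper name and the use of SCHED_WARN_ON() are illustrative
assumptions, not necessarily the exact diff):

    static inline void assert_list_leaf_cfs_rq(struct rq *rq)
    {
            SCHED_WARN_ON(rq->tmp_alone_branch != &rq->leaf_cfs_rq_list);
    }

    /* ... to be checked at the end of enqueue_task_fair() */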

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Andrea Parri
c546951d9c sched/core: Use READ_ONCE()/WRITE_ONCE() in move_queued_task()/task_rq_lock()
move_queued_task() synchronizes with task_rq_lock() as follows:

	move_queued_task()		task_rq_lock()

	[S] ->on_rq = MIGRATING		[L] rq = task_rq()
	WMB (__set_task_cpu())		ACQUIRE (rq->lock);
	[S] ->cpu = new_cpu		[L] ->on_rq

where "[L] rq = task_rq()" is ordered before "ACQUIRE (rq->lock)" by an
address dependency and, in turn, "ACQUIRE (rq->lock)" is ordered before
"[L] ->on_rq" by the ACQUIRE itself.

Use READ_ONCE() to load ->cpu in task_rq() (cf. task_cpu()) to honor
this address dependency.  Also, mark the accesses to ->cpu and ->on_rq
with READ_ONCE()/WRITE_ONCE() to comply with the LKMM.
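
As a sketch of the resulting pattern (illustrative only; field accesses
and helpers are simplified):

    /* move_queued_task(), writer side: */
    WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
    __set_task_cpu(p, new_cpu);            /* WRITE_ONCE(->cpu) after a WMB */

    /* task_rq_lock(), reader side: */
    rq = cpu_rq(READ_ONCE(p->cpu));        /* address dependency into rq */
    raw_spin_lock(&rq->lock);              /* ACQUIRE */
    on_rq = READ_ONCE(p->on_rq);           /* ordered after the ACQUIRE */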

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Paul E. McKenney <paulmck@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Link: https://lkml.kernel.org/r/20190121155240.27173-1-andrea.parri@amarulasolutions.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Hidetoshi Seto
1ca4fa3ab6 sched/debug: Initialize sd_sysctl_cpus if !CONFIG_CPUMASK_OFFSTACK
register_sched_domain_sysctl() copies the cpu_possible_mask into
sd_sysctl_cpus, but only if sd_sysctl_cpus hasn't already been
allocated (i.e., CONFIG_CPUMASK_OFFSTACK is set).  However, when
CONFIG_CPUMASK_OFFSTACK is not set, sd_sysctl_cpus is left
uninitialized (all zeroes) and the kernel may fail to initialize
sched_domain sysctl entries for all possible CPUs.

This is visible to the user when the kernel is booted with maxcpus=n, or
when ACPI tables have been modified to leave CPUs offline, by checking
for missing /proc/sys/kernel/sched_domain/cpu* entries.

Fix this by separating the allocation from the initialization, and by
adding a flag so that the possible-CPU entries are initialized only
while the system is booting.
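
The shape of the fix is roughly the following (a sketch; the flag name is
an illustrative assumption):

    static bool sd_sysctl_init_done;        /* illustrative flag name */

    void register_sched_domain_sysctl(void)
    {
    #ifdef CONFIG_CPUMASK_OFFSTACK
            /* the allocation is only needed for off-stack cpumasks */
            if (!sd_sysctl_cpus)
                    sd_sysctl_cpus = kzalloc(cpumask_size(), GFP_KERNEL);
    #endif
            /* ... but the initialization is needed in both cases, once */
            if (!sd_sysctl_init_done) {
                    cpumask_copy(sd_sysctl_cpus, cpu_possible_mask);
                    sd_sysctl_init_done = true;
            }
            ...
    }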

Tested-by: Syuuichirou Ishii <ishii.shuuichir@jp.fujitsu.com>
Tested-by: Tarumizu, Kohei <tarumizu.kohei@jp.fujitsu.com>
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Acked-by: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Masayoshi Mizuma <msys.mizuma@gmail.com>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190129151245.5073-1-msys.mizuma@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Vincent Guittot
10a35e6812 sched/pelt: Skip updating util_est when utilization is higher than CPU's capacity
util_est is mainly meant to be a lower bound for a task's utilization.
That's why task_util_est() returns the actual util_avg when it's higher
than the estimated utilization.

With the new invariance signal and without any special check on sample
collection, if a task is limited because of thermal capping, for
example, we could end up overestimating its utilization and thus
perhaps generating an unwanted frequency spike when the capping is
relaxed... and (even worse) it will take some more activations for the
estimated utilization to converge back to the actual utilization.

Since we cannot easily know if there is idle time on a CPU when a task
completes an activation with a utilization higher than the CPU capacity,
we skip the sampling when utilization is higher than the CPU's capacity.
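
Concretely, the check amounts to something like this sketch (exact helper
names may differ):

    /* util_est update path, sketch only: */
    if (task_util(p) > capacity_orig_of(cpu_of(rq_of(cfs_rq))))
            return;         /* can't tell if there was idle time: skip the sample */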

Suggested-by: Patrick Bellasi <patrick.bellasi@arm.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: pjt@google.com
Cc: pkondeti@codeaurora.org
Cc: quentin.perret@arm.com
Cc: rjw@rjwysocki.net
Cc: srinivas.pandruvada@linux.intel.com
Cc: thara.gopinath@linaro.org
Link: https://lkml.kernel.org/r/1548257214-13745-4-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Vincent Guittot
2312729688 sched/fair: Update scale invariance of PELT
The current implementation of load tracking invariance scales the
contribution with current frequency and uarch performance (only for
utilization) of the CPU. One main result of this formula is that the
figures are capped by the current capacity of the CPU. Another one is that
the load_avg is not invariant because it is not scaled with uarch.

The util_avg of a periodic task that runs r time slots every p time slots
varies in the range:

    U * (1-y^r)/(1-y^p) * y^i < Utilization < U * (1-y^r)/(1-y^p)

where U is the max util_avg value = SCHED_CAPACITY_SCALE

At a lower capacity, the range becomes:

    U * C * (1-y^r')/(1-y^p) * y^i' < Utilization <  U * C * (1-y^r')/(1-y^p)

where C reflects the compute capacity ratio between the current capacity
and the max capacity.

So C tries to compensate for changes in (1-y^r'), but it cannot be accurate.

Instead of scaling the contribution value of the PELT algorithm, we should
scale the running time. The PELT signal aims to track the amount of
computation of tasks and/or rqs, so it seems more correct to scale the
running time to reflect the effective amount of computation done since the
last update.

In order to be fully invariant, we need to apply the same amount of
running time and idle time whatever the current capacity. Because running
at lower capacity implies that the task will run longer, we have to ensure
that the same amount of idle time will be applied when the system becomes
idle and no idle time has been "stolen". But reaching the maximum
utilization value (SCHED_CAPACITY_SCALE) means that the task is seen as an
always-running task whatever the capacity of the CPU (even at max compute
capacity). In this case, we can discard this "stolen" idle time, which
becomes meaningless.

In order to achieve this time scaling, a new clock_pelt is created per rq.
The increase of this clock scales with the current capacity when something
is running on the rq and synchronizes with clock_task when the rq is idle.
With this mechanism, we ensure the same running and idle time whatever the
current capacity. This also makes it possible to simplify the PELT
algorithm by removing all references to uarch and frequency and applying
the same contribution to utilization and loads. Furthermore, the scaling
is done only once per clock update (update_rq_clock_task()) instead of
during each update of the sched_entities and cfs/rt/dl_rq of the rq, as in
the current implementation. This is interesting when cgroups are involved,
as shown in the results below:

On a hikey (octa-core Arm64 platform), with the performance cpufreq
governor and only the shallowest C-state enabled, to remove the variance
generated by those power features so that we only track the impact of the
PELT algorithm.

Each test runs 16 times:

	./perf bench sched pipe
	(higher is better)
	kernel	tip/sched/core     + patch
	        ops/seconds        ops/seconds         diff
	cgroup
	root    59652(+/- 0.18%)   59876(+/- 0.24%)    +0.38%
	level1  55608(+/- 0.27%)   55923(+/- 0.24%)    +0.57%
	level2  52115(+/- 0.29%)   52564(+/- 0.22%)    +0.86%

	hackbench -l 1000
	(lower is better)
	kernel	tip/sched/core     + patch
	        duration(sec)      duration(sec)        diff
	cgroup
	root    4.453(+/- 2.37%)   4.383(+/- 2.88%)     -1.57%
	level1  4.859(+/- 8.50%)   4.830(+/- 7.07%)     -0.60%
	level2  5.063(+/- 9.83%)   4.928(+/- 9.66%)     -2.66%

The responsiveness of PELT is also improved when the CPU is not running at
max capacity with this new algorithm. I have put below some examples of the
duration needed to reach some typical load values according to the capacity
of the CPU, with the current implementation and with this patch. These
values have been computed based on the geometric series and the half-period
value:

  Util (%)     max capacity  half capacity(mainline)  half capacity(w/ patch)
  972 (95%)    138ms         not reachable            276ms
  486 (47.5%)  30ms          138ms                     60ms
  256 (25%)    13ms           32ms                     26ms

On my hikey (octa-core Arm64 platform) with the schedutil governor, the
time to reach the max OPP when starting from a null utilization decreases
from 223ms with the current scale invariance down to 121ms with the new
algorithm.
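
To make the time scaling concrete, the per-rq clock update amounts to
something like the sketch below (cap_scale() is shown inline; the capacity
and frequency scale values are assumed to come from the usual arch hooks):

    #define cap_scale(v, s) ((v) * (s) >> SCHED_CAPACITY_SHIFT)

    /* sketch: scale elapsed time once, from update_rq_clock_task() */
    static void update_rq_clock_pelt(struct rq *rq, s64 delta)
    {
            if (unlikely(is_idle_task(rq->curr))) {
                    /* idle rq: let clock_pelt catch up with clock_task */
                    rq->clock_pelt = rq_clock_task(rq);
                    return;
            }

            /* running: scale wall time by CPU and frequency capacity */
            delta = cap_scale(delta, arch_scale_cpu_capacity(NULL, cpu_of(rq)));
            delta = cap_scale(delta, arch_scale_freq_capacity(cpu_of(rq)));
            rq->clock_pelt += delta;
    }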

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: patrick.bellasi@arm.com
Cc: pjt@google.com
Cc: pkondeti@codeaurora.org
Cc: quentin.perret@arm.com
Cc: rjw@rjwysocki.net
Cc: srinivas.pandruvada@linux.intel.com
Cc: thara.gopinath@linaro.org
Link: https://lkml.kernel.org/r/1548257214-13745-3-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Vincent Guittot
62478d9911 sched/fair: Move the rq_of() helper function
Move the rq_of() helper function so it can be used in pelt.c.

[ mingo: Improve readability while at it. ]

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: patrick.bellasi@arm.com
Cc: pjt@google.com
Cc: pkondeti@codeaurora.org
Cc: quentin.perret@arm.com
Cc: rjw@rjwysocki.net
Cc: srinivas.pandruvada@linux.intel.com
Cc: thara.gopinath@linaro.org
Link: https://lkml.kernel.org/r/1548257214-13745-2-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 09:13:21 +01:00
Elena Reshetova
f0b89d3958 sched/core: Convert task_struct.stack_refcount to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:

 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable task_struct.stack_refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

** Important note for maintainers:

Some functions from refcount_t API defined in lib/refcount.c
have different memory ordering guarantees than their atomic
counterparts.

The full comparison can be seen in
https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
be in a state to be merged into the documentation tree.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in
some rare cases it might matter.

Please double check that you don't have some undocumented
memory guarantees for this variable usage.

For the task_struct.stack_refcount it might make a difference
in the following places:

 - try_get_task_stack(): increment in refcount_inc_not_zero() only
   guarantees control dependency on success vs. fully ordered
   atomic counterpart
 - put_task_stack(): decrement in refcount_dec_and_test() only
   provides RELEASE ordering and control dependency on success
   vs. fully ordered atomic counterpart
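
The conversion itself follows the usual mechanical pattern, roughly
(illustrative diff, not the exact hunks):

    -       atomic_t        stack_refcount;
    +       refcount_t      stack_refcount;

    -       atomic_set(&tsk->stack_refcount, 1);
    +       refcount_set(&tsk->stack_refcount, 1);

    -       if (atomic_dec_and_test(&tsk->stack_refcount))
    +       if (refcount_dec_and_test(&tsk->stack_refcount))

    -       if (atomic_inc_not_zero(&tsk->stack_refcount))
    +       if (refcount_inc_not_zero(&tsk->stack_refcount))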

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk
Link: https://lkml.kernel.org/r/1547814450-18902-6-git-send-email-elena.reshetova@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 08:53:56 +01:00
Elena Reshetova
ec1d281923 sched/core: Convert task_struct.usage to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:

 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable task_struct.usage is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

** Important note for maintainers:

Some functions from refcount_t API defined in lib/refcount.c
have different memory ordering guarantees than their atomic
counterparts.

The full comparison can be seen in
https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
be in a state to be merged into the documentation tree.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in
some rare cases it might matter.

Please double check that you don't have some undocumented
memory guarantees for this variable usage.

For the task_struct.usage it might make a difference
in the following places:

 - put_task_struct(): decrement in refcount_dec_and_test() only
   provides RELEASE ordering and control dependency on success
   vs. fully ordered atomic counterpart

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk
Link: https://lkml.kernel.org/r/1547814450-18902-5-git-send-email-elena.reshetova@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 08:53:55 +01:00
Elena Reshetova
c45a779524 sched/fair: Convert numa_group.refcount to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:

 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable numa_group.refcount is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

** Important note for maintainers:

Some functions from refcount_t API defined in lib/refcount.c
have different memory ordering guarantees than their atomic
counterparts.

The full comparison can be seen in
https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
be in a state to be merged into the documentation tree.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in
some rare cases it might matter.

Please double check that you don't have some undocumented
memory guarantees for this variable usage.

For the numa_group.refcount it might make a difference
in the following places:

 - get_numa_group(): increment in refcount_inc_not_zero() only
   guarantees control dependency on success vs. fully ordered
   atomic counterpart
 - put_numa_group(): decrement in refcount_dec_and_test() only
   provides RELEASE ordering and control dependency on success
   vs. fully ordered atomic counterpart

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk
Link: https://lkml.kernel.org/r/1547814450-18902-4-git-send-email-elena.reshetova@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 08:53:54 +01:00
Elena Reshetova
60d4de3ff7 sched/core: Convert signal_struct.sigcnt to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:

 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable signal_struct.sigcnt is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

** Important note for maintainers:

Some functions from refcount_t API defined in lib/refcount.c
have different memory ordering guarantees than their atomic
counterparts.

The full comparison can be seen in
https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
be in a state to be merged into the documentation tree.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in
some rare cases it might matter.

Please double check that you don't have some undocumented
memory guarantees for this variable usage.

For the signal_struct.sigcnt it might make a difference
in the following places:

 - put_signal_struct(): decrement in refcount_dec_and_test() only
   provides RELEASE ordering and control dependency on success
   vs. fully ordered atomic counterpart

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk
Link: https://lkml.kernel.org/r/1547814450-18902-3-git-send-email-elena.reshetova@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 08:53:53 +01:00
Elena Reshetova
d036bda7d0 sched/core: Convert sighand_struct.count to refcount_t
atomic_t variables are currently used to implement reference
counters with the following properties:

 - counter is initialized to 1 using atomic_set()
 - a resource is freed upon counter reaching zero
 - once counter reaches zero, its further
   increments aren't allowed
 - counter schema uses basic atomic operations
   (set, inc, inc_not_zero, dec_and_test, etc.)

Such atomic variables should be converted to a newly provided
refcount_t type and API that prevents accidental counter overflows
and underflows. This is important since overflows and underflows
can lead to use-after-free situations and be exploitable.

The variable sighand_struct.count is used as a pure reference counter.
Convert it to refcount_t and fix up the operations.

** Important note for maintainers:

Some functions from refcount_t API defined in lib/refcount.c
have different memory ordering guarantees than their atomic
counterparts.

The full comparison can be seen in
https://lkml.org/lkml/2017/11/15/57 and it will hopefully soon
be in a state to be merged into the documentation tree.

Normally the differences should not matter since refcount_t provides
enough guarantees to satisfy the refcounting use cases, but in
some rare cases it might matter.

Please double check that you don't have some undocumented
memory guarantees for this variable usage.

For the sighand_struct.count it might make a difference
in the following places:

 - __cleanup_sighand(): decrement in refcount_dec_and_test() only
   provides RELEASE ordering and control dependency on success
   vs. fully ordered atomic counterpart

Suggested-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Elena Reshetova <elena.reshetova@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: David Windsor <dwindsor@gmail.com>
Reviewed-by: Hans Liljestrand <ishkamiel@gmail.com>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akpm@linux-foundation.org
Cc: viro@zeniv.linux.org.uk
Link: https://lkml.kernel.org/r/1547814450-18902-2-git-send-email-elena.reshetova@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-02-04 08:53:52 +01:00
Thomas Gleixner
15917dc028 sched: Remove stale PF_MUTEX_TESTER bit
The RTMUTEX tester was removed long ago but the PF bit stayed
around. Remove it and free up the space.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2019-01-29 20:14:28 +01:00
Vincent Guittot
46a745d905 sched/fair: Fix unnecessary increase of balance interval
In case of active balancing, we increase the balance interval to cover
pinned-task cases not covered by the all_pinned logic. Nevertheless, the
active migration triggered by asym packing should be treated as the normal
unbalanced case and reset the interval to its default value, otherwise
active migration for asym_packing can easily be delayed for hundreds of ms
because of this pinned-task detection mechanism.

The same happens for other conditions tested in need_active_balance(),
like a misfit task and the case where the capacity of src_cpu is reduced
compared to dst_cpu (see the comments in need_active_balance() for
details).

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: valentin.schneider@arm.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Vincent Guittot
4ad4e481bd sched/fair: Fix rounding bug for asym packing
When check_asym_packing() is triggered, the imbalance is set to:

  busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE

But busiest_stat.avg_load equals:

  sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity

These divisions can generate a rounding error that makes the imbalance
slightly lower than the weighted load of the cfs_rq.  But this is
enough to skip the rq in find_busiest_queue() and prevent asym
migration from happening.

Directly set imbalance to busiest's sgs->group_load to remove the
rounding.
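
As an illustration with made-up numbers, sgs->group_load = 1023 and
sgs->group_capacity = 1012 give:

    avg_load  = 1023 * 1024 / 1012 = 1035   (truncated)
    imbalance = 1035 * 1012 / 1024 = 1022   (truncated) < 1023

so the recomputed imbalance ends up just below the weighted load and the
rq is skipped.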

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: valentin.schneider@arm.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Vincent Guittot
a062d16449 sched/fair: Trigger asym_packing during idle load balance
Newly idle load balancing is not always triggered when a CPU becomes idle.
This prevents the scheduler from getting a chance to migrate the task
for asym packing.

Enable active migration during idle load balance too.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: valentin.schneider@arm.com
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Quentin Perret
81a930d3a6 sched/doc: Document Energy Aware Scheduling
Add some documentation detailing the main design points of EAS, as well
as a list of its dependencies.

Parts of this documentation are taken from Morten Rasmussen's original
EAS posting: https://lkml.org/lkml/2015/7/7/754

Co-authored-by: Morten Rasmussen <morten.rasmussen@arm.com>
Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: corbet@lwn.net
Cc: dietmar.eggemann@arm.com
Cc: patrick.bellasi@arm.com
Cc: rjw@rjwysocki.net
Link: https://lkml.kernel.org/r/20190110110546.8101-3-quentin.perret@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Quentin Perret
1017b48ccc PM/EM: Document the Energy Model framework
Introduce a documentation file summarizing the key design points and
APIs of the newly introduced Energy Model framework.

Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: corbet@lwn.net
Cc: dietmar.eggemann@arm.com
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Cc: qais.yousef@arm.com
Cc: rjw@rjwysocki.net
Link: https://lkml.kernel.org/r/20190110110546.8101-2-quentin.perret@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Peter Zijlstra
c0ad4aa4d8 sched/fair: Robustify CFS-bandwidth timer locking
Traditionally hrtimer callbacks were run with IRQs disabled, but with
the introduction of HRTIMER_MODE_SOFT it is possible they run from
SoftIRQ context, which does _NOT_ have IRQs disabled.

Allow for the CFS bandwidth timers (period_timer and slack_timer) to
be run from SoftIRQ context; this entails removing the assumption that
IRQs are already disabled from the locking.

While mainline doesn't strictly need this, -RT forces all timers not
explicitly marked with MODE_HARD into MODE_SOFT and trips over this.
And marking these timers as MODE_HARD doesn't make sense as they're
not required for RT operation and can potentially be quite expensive.
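
In practice that means the timer paths take cfs_b->lock with the irqsave
variants, along these lines (sketch only):

    -       raw_spin_lock(&cfs_b->lock);
    +       raw_spin_lock_irqsave(&cfs_b->lock, flags);
            ...
    -       raw_spin_unlock(&cfs_b->lock);
    +       raw_spin_unlock_irqrestore(&cfs_b->lock, flags);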

Reported-by: Tom Putzeys <tom.putzeys@be.atlascopco.com>
Tested-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190107125231.GE14122@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Peter Zijlstra
f8a696f25b sched/core: Give DCE a fighting chance
All that fancy new Energy-Aware scheduling foo is hidden behind a
static_key, which is awesome if you have the stuff enabled in your
config.

However, when you lack all the prerequisites it doesn't make any sense
to pretend we'll ever actually run this, so provide a little more clue
to the compiler so it can more aggressively delete the code.

   text    data     bss     dec     hex filename
  50297     976      96   51369    c8a9 defconfig-build/kernel/sched/fair.o
  49227     944      96   50267    c45b defconfig-build/kernel/sched/fair.o
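
The clue is essentially a compile-time constant 'false' when the
prerequisites are missing, something like this sketch (config guards
abbreviated):

    #if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
    static inline bool sched_energy_enabled(void)
    {
            return static_branch_unlikely(&sched_energy_present);
    }
    #else
    static inline bool sched_energy_enabled(void)
    {
            return false;   /* lets the compiler drop the EAS-only paths */
    }
    #endif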

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Quentin Perret
8d5d0cfb63 sched/topology: Introduce a sysctl for Energy Aware Scheduling
In its current state, Energy Aware Scheduling (EAS) starts automatically
on asymmetric platforms having an Energy Model (EM). However, there are
users who want to have an EM (for thermal management for example), but
don't want EAS with it.

In order to let users disable EAS explicitly, introduce a new sysctl
called 'sched_energy_aware'. It is enabled by default so that EAS can
start automatically on platforms where it makes sense. Flipping it to 0
rebuilds the scheduling domains and disables EAS.
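
The knob can be flipped at runtime via /proc/sys/kernel/sched_energy_aware;
a sketch of the sysctl table entry follows (field values and names as
assumed here, and the handler that actually rebuilds the domains is elided):

    {
            .procname       = "sched_energy_aware",
            .data           = &sysctl_sched_energy_aware,
            .maxlen         = sizeof(unsigned int),
            .mode           = 0644,
            .proc_handler   = sched_energy_aware_handler,
            .extra1         = &zero,
            .extra2         = &one,
    },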

Signed-off-by: Quentin Perret <quentin.perret@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: adharmap@codeaurora.org
Cc: chris.redpath@arm.com
Cc: currojerez@riseup.net
Cc: dietmar.eggemann@arm.com
Cc: edubezval@gmail.com
Cc: gregkh@linuxfoundation.org
Cc: javi.merino@kernel.org
Cc: joel@joelfernandes.org
Cc: juri.lelli@redhat.com
Cc: morten.rasmussen@arm.com
Cc: patrick.bellasi@arm.com
Cc: pkondeti@codeaurora.org
Cc: rjw@rjwysocki.net
Cc: skannan@codeaurora.org
Cc: smuckle@google.com
Cc: srinivas.pandruvada@linux.intel.com
Cc: thara.gopinath@linaro.org
Cc: tkjos@google.com
Cc: valentin.schneider@arm.com
Cc: vincent.guittot@linaro.org
Cc: viresh.kumar@linaro.org
Link: https://lkml.kernel.org/r/20181203095628.11858-11-quentin.perret@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Lukas Bulwahn
62a8ddc93a MAINTAINERS, sched: Drop PREEMPTIBLE KERNEL section entry
The PREEMPTIBLE KERNEL section entry seems quite outdated:

Robert Love is not actively maintaining the file anymore, nor has he been
a recorded contributor to the files in the PREEMPTIBLE KERNEL section for
the last few years.

The mailing list kpreempt-tech@lists.sourceforge.net does not exist
anymore; the website just points to some very old patches for v2.4/v2.5.

So, let's delete the PREEMPTIBLE KERNEL section entry and clean this up:

- Documentation/preempt-locking.txt is not modified much anyway, and the
  changes in that file are generally maintained by Jonathan Corbet. So,
  we do not need to explicitly mention Documentation/preempt-locking.txt
  in MAINTAINERS.

- include/linux/preempt.h is maintained by Peter and Ingo, so we simply
  add that file to the SCHEDULER section entry.

I got directed to this issue because I could not subscribe to the outdated
mailing list address, decided to investigate, and then cleaned this up.

Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Robert Love <rlove@rlove.org>
Cc: Robert Love <rml@tech9.net>
Cc: Robert Love <robert.w.love@intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20190112060613.7115-1-lukas.bulwahn@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2019-01-27 12:29:37 +01:00
Linus Torvalds
49a57857ae Linux 5.0-rc3 2019-01-21 13:14:44 +13:00
Linus Torvalds
1e556ba3b6 Fixes for pstore/ram
- Fix console ramoops to show the previous boot logs (Sai Prakash Ranjan)
 - Avoid allocation and leak of platform data
 -----BEGIN PGP SIGNATURE-----
 Comment: Kees Cook <kees@outflux.net>
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAlxE/QYWHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJsVXD/94pJsNHogOle3XnZJst/1cV4+R
 KTyQ6QQ1184lI5FPGJliV3veCuKnTY4Q7e5vCl2mzEvds+VRxu2yUm+LFxKVj04W
 yiRNv8Yn3+3eKxNufqLa9ySbsSZyIpoCKQ/20nbC5cNM3r+Gq12GUnDBixtGdl9V
 fB316emIaKR70Qvi9KlfwDTcjh+SWBR2u8L0YsmTaoJkjtaCJhekN3hfya6Wnjf5
 1sj1fCDzteda/Ld0gXoImHPvt0UcHPJDp1+ZhY0ja64QIuoWwDxsisp4X4nS8iFD
 oak9BWRDhRwSzPnyCu8BI3Hc4ayxcBYR/vmBS12LTkIJVdR1CaqfeDYvZexyX04n
 TYZnmPeZ+2R3wzA9aCLjUDEAYNi403sLqIvR99jlrOoiMo+5Bf05TvBG8lND2dQg
 A13854vg9ssxfiuEmvxJTg8SLtcFiqEtrzZhqaLVfuf4cqaN3o0Ed+A0gjZadwwD
 TPxiMh4YEZuw+qszZwyzi7uECKOvJIJeLCL8CApPvihCNFy2vKF5wj58ZizXXmoT
 Orq6DJ1ITx4DTkYIgtiTC9HKDuvNDZ/ncdb5QoqlpoTe01r8GQOzffZ+tD3xRTNc
 8+yAdK0/bTjLVNbFX/Q4dwagh0I8EQSrdw8CmmQs0lsCvfyYTfHz8uPeBQ7rrzeN
 2r1H7a/OKpL/duTkEw==
 =m6sX
 -----END PGP SIGNATURE-----

Merge tag 'pstore-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull pstore fixes from Kees Cook:

 - Fix console ramoops to show the previous boot logs (Sai Prakash
   Ranjan)

 - Avoid allocation and leak of platform data

* tag 'pstore-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  pstore/ram: Avoid allocation and leak of platform data
  pstore/ram: Fix console ramoops to show the previous boot logs
2019-01-21 13:12:03 +13:00
Linus Torvalds
dbcfc96193 Bug fixes for gcc-plugins
- Fix ARM per-task stack protector plugin under GCC 9 (Ard Biesheuvel)
 -----BEGIN PGP SIGNATURE-----
 Comment: Kees Cook <kees@outflux.net>
 
 iQJKBAABCgA0FiEEpcP2jyKd1g9yPm4TiXL039xtwCYFAlxE8ucWHGtlZXNjb29r
 QGNocm9taXVtLm9yZwAKCRCJcvTf3G3AJrpKEACiPNVPQv4up+0fFYzVjYgUej4g
 JItPgDwxx/BFTYocvCrqOk++wlduhyheesfSo/VL1u3rSQa9XwIFUvnRMI2befye
 Rs2HqUB5c5tQzVs9Qaqf7d0aMAW06zROC+osI5dcJqKts0nTkRdw0PNhnmKNrkou
 qxXHWUPR2ussagaLnqrJisEuwRgga2nyGqBEWJXPO/qE6llZB+OUfeJ+3VX7QFj6
 PRlenSGs+9nq8yRyKb6+FQJbzxpxAoeNNogRdlG/NMwNMMcs5j81hl3+V1LNQNGt
 hoTE1Wt/qOtwFU/wm2iNj8WKczS1PjDteBsNAuen8yquTKIrBsvu5+J6me6Uw2By
 b5uPs0e9zbLD6U29Y/dX2mzLSR5Rd09/Czv2C8rmC3gU1pK12Zbaq/sXRuDPQrtq
 hShbFxW+eHyl2q8oHmROkjsOJitx0vWW/oHibjcRGpoArl/Pj1Wcz/Jq6KM0FDIU
 PiwT94BEQhDTpssb/7EnHflDWRQ+jX1mN+KF3BjtEtYyNExuMXn5Ec0iWcSq04M7
 gg6nSfEta4P3Bgdr386nufd5qIC9M1gEYpNZTVXaKjMnKKFiGox72/Jo0zQbL9LT
 5hOOn+VN9PCjIMJ8se3lYoe2n1dZLpLzrk0fyRbZ1Ab4HURoGdJJ2PzoEmtOsH73
 TV8lZiS+M+qRQidOhg==
 =q4n8
 -----END PGP SIGNATURE-----

Merge tag 'gcc-plugins-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull gcc-plugins fixes from Kees Cook:
 "Fix ARM per-task stack protector plugin under GCC 9 (Ard Biesheuvel)"

* tag 'gcc-plugins-v5.0-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  gcc-plugins: arm_ssp_per_task_plugin: fix for GCC 9+
  gcc-plugins: arm_ssp_per_task_plugin: sign extend the SP mask
2019-01-21 13:07:03 +13:00
Linus Torvalds
7d0ae236ed Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Pull networking fixes from David Miller:

 1) Fix endless loop in nf_tables, from Phil Sutter.

 2) Fix cross namespace ip6_gre tunnel hash list corruption, from
    Olivier Matz.

 3) Don't be too strict in phy_start_aneg() otherwise we might not allow
    restarting auto negotiation. From Heiner Kallweit.

 4) Fix various KMSAN uninitialized value cases in tipc, from Ying Xue.

 5) Memory leak in act_tunnel_key, from Davide Caratti.

 6) Handle chip errata of mv88e6390 PHY, from Andrew Lunn.

 7) Remove linear SKB assumption in fou/fou6, from Eric Dumazet.

 8) Missing udplite rehash callbacks, from Alexey Kodanev.

 9) Log dirty pages properly in vhost, from Jason Wang.

10) Use consume_skb() in neigh_probe() as this is a normal free not a
    drop, from Yang Wei. Likewise in macvlan_process_broadcast().

11) Missing device_del() in mdiobus_register() error paths, from Thomas
    Petazzoni.

12) Fix checksum handling of short packets in mlx5, from Cong Wang.

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (96 commits)
  bpf: in __bpf_redirect_no_mac pull mac only if present
  virtio_net: bulk free tx skbs
  net: phy: phy driver features are mandatory
  isdn: avm: Fix string plus integer warning from Clang
  net/mlx5e: Fix cb_ident duplicate in indirect block register
  net/mlx5e: Fix wrong (zero) TX drop counter indication for representor
  net/mlx5e: Fix wrong error code return on FEC query failure
  net/mlx5e: Force CHECKSUM_UNNECESSARY for short ethernet frames
  tools: bpftool: Cleanup license mess
  bpf: fix inner map masking to prevent oob under speculation
  bpf: pull in pkt_sched.h header for tooling to fix bpftool build
  selftests: forwarding: Add a test case for externally learned FDB entries
  selftests: mlxsw: Test FDB offload indication
  mlxsw: spectrum_switchdev: Do not treat static FDB entries as sticky
  net: bridge: Mark FDB entries that were added by user as such
  mlxsw: spectrum_fid: Update dummy FID index
  mlxsw: pci: Return error on PCI reset timeout
  mlxsw: pci: Increase PCI SW reset timeout
  mlxsw: pci: Ring CQ's doorbell before RDQ's
  MAINTAINERS: update email addresses of liquidio driver maintainers
  ...
2019-01-21 12:52:31 +13:00
Kees Cook
5631e8576a pstore/ram: Avoid allocation and leak of platform data
Yue Hu noticed that when parsing device tree the allocated platform data
was never freed. Since it's not used beyond the function scope, this
switches to using a stack variable instead.

Reported-by: Yue Hu <huyue2@yulong.com>
Fixes: 35da60941e ("pstore/ram: add Device Tree bindings")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
2019-01-20 14:44:52 -08:00
Ard Biesheuvel
2c88c742d0 gcc-plugins: arm_ssp_per_task_plugin: fix for GCC 9+
GCC 9 reworks the way the references to the stack canary are
emitted, to prevent the value from being spilled to the stack
before the final comparison in the epilogue, defeating the
purpose, given that the spill slot is under the control of the
attacker that we are protecting ourselves from.

Since our canary value address is obtained without accessing
memory (as opposed to pre-v7 code that will obtain it from a
literal pool), it is unlikely (although not guaranteed) that
the compiler will spill the canary value in the same way, so
let's just disable this improvement when building with GCC 9+.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
2019-01-20 14:06:40 -08:00
Ard Biesheuvel
560706d5d2 gcc-plugins: arm_ssp_per_task_plugin: sign extend the SP mask
The ARM per-task stack protector GCC plugin hits an assert in
the compiler in some cases, due to the fact that the SP mask
expression is not sign-extended as it should be. So fix that.

Suggested-by: Kugan Vivekanandarajah <kugan.vivekanandarajah@linaro.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
2019-01-20 14:06:40 -08:00
Linus Torvalds
bb617b9b45 virtio, vhost: fixes, cleanups
fixes and cleanups all over the place
 
 Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
 -----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJcPTc7AAoJECgfDbjSjVRpOEgH/Ahdx7VMYJtFsdmoJKiwhB7M
 jRRi9R903V9H87vl1BXy6dutHw+WONJtm6FSZ1ayNWlVmUmWS6vci+IUErr2uDrv
 KSG+dJMQLlF7t1dnLRwlLazvGa4/58+u0J459uKPQ5ckqwV5wXPjUS5Z0xF3ldxM
 Twz6vhYRGKCUc10YZm/WmsjlLROgaNtRya10PzAGVmXPzbCpvJfiojKWJER+Eigq
 JxWynTCm/YvIk824Ls9cDBVkDvb8GPS3blVbFnusR+D3ktvX7vLDPOsErGn4umVS
 nUm3/WiQALB9fKer+SsgcEGVh+fa06KIITK+IBblULmrAIT3CJdJp70UJBjfdTM=
 =DCkE
 -----END PGP SIGNATURE-----

Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost

Pull virtio/vhost fixes and cleanups from Michael Tsirkin:
 "Fixes and cleanups all over the place"

* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost:
  vhost/scsi: Use copy_to_iter() to send control queue response
  vhost: return EINVAL if iovecs size does not match the message size
  virtio-balloon: tweak config_changed implementation
  virtio: don't allocate vqs when names[i] = NULL
  virtio_pci: use queue idx instead of array idx to set up the vq
  virtio: document virtio_config_ops restrictions
  virtio: fix virtio_config_ops description
2019-01-21 07:37:16 +13:00
Linus Torvalds
1be969f468 for-5.0-rc2-tag
-----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEE8rQSAMVO+zA4DBdWxWXV+ddtWDsFAlxElHsACgkQxWXV+ddt
 WDsF8Q/6A+l/Ku8D+xSmw+JGXGNroX1V62sxVYWqIgrkYNg6iSMjicqF3aN1bCbR
 LqvOQ5skerrKteNYIPbTTOD5Xp37ccinSeEWEF0ktFkeU1G6yJo7aRsnQlO1sduk
 2PKUOA1/PdeTBiOj1bej/1ybhtIW+d0MaoPtnUCMC8DD/ihfmU332+KC8VmRUYGZ
 4kT1DvKfBOSVz1UTl6OJdWo76crvjz0eGVnH1YG7DoFbpVzAbphHE7+aC/WkHQme
 X5Ux2NtYVMe/0IGAC7kJnj4jCZ1weAdlmvzzagjzGtWdhWgTxntiJ/FVJs9nO8Dm
 G/pVtD8RVjgMkPOWcfw5fdderrBJqjVGgl4VDrDLqjO9OTGNFJs+HgcPRi6Oli28
 sA+HG+U34YzSfKY0L9eAmpkNxMjWywBuXTQIAlMhHNZCL0vZ2K5EYa3dTp9OswAW
 IIcOh/LfZxiomvMvUqQWcRCy5y/b+cYjOjbHwkrw+ewd3IWXVLG8YLMyZI3vnHKu
 /f1xn6KCap9a1cS4LwyK6gzstEugn0MYmnmD/Jx8I1BJFBt55Q31ES6tPHgTmh/d
 QjveRjMkxNCql4h5Hq0+LiXoSoocBmsO0wrs2QrWSx4PBnJsvjySnWr/8GfAOj79
 BhnuQFxbr/BkNyBzvKrjoI+zZnrVm0cBU59lP6PzN75+kQTaIfs=
 =hGlM
 -----END PGP SIGNATURE-----

Merge tag 'for-5.0-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux

Pull btrfs fixes from David Sterba:
 "A handful of fixes (some of them in testing for a long time):

   - fix some test failures regarding cleanup after transaction abort

   - revert of a patch that could cause a deadlock

   - delayed iput fixes, which can help in ENOSPC situations when there's
     low space and a lot of data to write"

* tag 'for-5.0-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux:
  btrfs: wakeup cleaner thread when adding delayed iput
  btrfs: run delayed iputs before committing
  btrfs: wait on ordered extents on abort cleanup
  btrfs: handle delayed ref head accounting cleanup in abort
  Revert "btrfs: balance dirty metadata pages in btrfs_finish_ordered_io"
2019-01-21 07:35:26 +13:00
Linus Torvalds
315a6d850a include/linux/compiler*.h changes:
- A fix for OPTIMIZER_HIDE_VAR
     From Michael S. Tsirkin
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPjU5OPd5QIZ9jqqOGXyLc2htIW0FAlxEkZkACgkQGXyLc2ht
 IW2ksQ/8C2rfoTcKKeNxjpgV2XQ0HV6DsacEeQw+0VSBTzA2OPSiV61Dfm6/qywp
 Y7DyjrosteIOShZhxSdg6M04uDEtb9m8GaWQitW3ewfmbnXmRX5OB3CI5vFF1/nS
 LOj/Bd6FRqkIM6b0b7MWp/J0hQMnSNuElZp+yqlJyy1YuivfXN3vu8iDsHDhlyAs
 OY4SqNmmlaUtSlRaTgJsSt28AFJ4CSgJqziKZux17xzjrstXg1p9BhcnZVzcmjeY
 tcGFptbpUEmrcF2iqR8weaWkJSizgfsI60USrRZwLKM+i1NyOmnk9AWtSxOblNZZ
 0z4QslkZG1/7rtmHOn6qVGcWsic+AINbrzSeBReEg8G/f/P/XI7yRJmQAaQWqzOD
 ByEYoCp6U7gmQY6QiLLwq9d3VTHxV9d6PeC6gqEDM5ifrTIdOwNbL0MPvpb/UOlC
 1IC/RpHOqAwWKTaYvpoutXzw9kG/TXG/yvdphTsStxOSnXeEXntdwmd0CDLKG6sG
 y4xmEqU51KUoQ1UsX++dhxxxR4H7O6WcUmcFGXcrrhAD+x0N7Rd9g+lSDCpxW9yC
 sIzr2aaPpaZMD40gAHb4vlXR+MqJdIFAAJ4xI1oaU+zSuxkEUn05xR6OgVgAdMLp
 jT1SbI0XpbviakV6mwquAND7HOKWP/eZ7NLf6LlJIp0wrkBZRrQ=
 =2ce7
 -----END PGP SIGNATURE-----
mergetag object 99e309b6ed
 type commit
 tag clang-format-for-linus-v5.0-rc3
 tagger Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> 1547999145 +0100
 
 clang-format changes:
 
   - Update of the for_each macro list
     From Jason Gunthorpe
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCgAdFiEEPjU5OPd5QIZ9jqqOGXyLc2htIW0FAlxEmAYACgkQGXyLc2ht
 IW3AZQ//byEzF7HPogmwAQDA8cs6Rg2GSP3ohPsGavvqmd6AegXMqAfV9VNzFRBr
 tekhJuPufen6TjFkD+1Tlk+dvsXZXb/n/wYQWAJWm5SgOUfzh6Monprhe1JofLz0
 pQ5xIAKtbpsuS6TmjWttXMKWN7aTenNQIPyGXxZziKe653yLYdISuYKDqo9SbLQX
 QaXi911REjZHSUXdHrsmGyidci6HizJZkqCVB7zWrmB3+ygMTzo8x6HPTeIBOAdX
 zEHGDdKQEPKO7y6Jyh5GkCzpCKWqSgdgXsI16eyKsPYymkARaqgMYtCPHTBvZ3e1
 DkpCUg2BEMEDeftEFa9ysNOWppQTw9xDVuk6BO0T8YYeXnLlo9CWb6Dl7YRnoO63
 0nsdvmHRkDKP93Hs9Zn3kZRVvy1EgOeIkfD+gK6sJpibyzJZRFGAwC3ysP/ERDVx
 Lb25tdluWaxKZQwepqC472fiwX1V65YrLX66gUGfF5JIJqYDjeoOl+lgVb8L6Ped
 sjYKO8uf2D9ZPRpsXgecx9u+Fy94P0fPTEm76vo5z1HBMAldihrQnw1U9ZNsvjBr
 HiWIB6ccP/chDN+wtoI/lQGKgqjM6EYWJpts/NkPHvA1d0BUEPJ7/tHTFmUZ0c6z
 DxdcjX/g4Bu/rSyIJaeosdcKNgFm+maHWQX+L+YV9yE1uGTzdcE=
 =mM3e
 -----END PGP SIGNATURE-----

Merge tags 'compiler-attributes-for-linus-v5.0-rc3' and 'clang-format-for-linus-v5.0-rc3' of git://github.com/ojeda/linux

Pull misc clang fixes from Miguel Ojeda:

  - A fix for OPTIMIZER_HIDE_VAR from Michael S Tsirkin

  - Update clang-format with the latest for_each macro list from Jason
    Gunthorpe

* tag 'compiler-attributes-for-linus-v5.0-rc3' of git://github.com/ojeda/linux:
  include/linux/compiler*.h: fix OPTIMIZER_HIDE_VAR

* tag 'clang-format-for-linus-v5.0-rc3' of git://github.com/ojeda/linux:
  clang-format: Update .clang-format with the latest for_each macro list
2019-01-21 07:23:42 +13:00
Florian La Roche
fbfaf85190 fix int_sqrt64() for very large numbers
If an input number x for int_sqrt64() has the highest bit set, then
fls64(x) is 64.  (1UL << 64) is an overflow and breaks the algorithm.

Subtracting 1 is a better guess for the initial value of m anyway, and
that's also what is done implicitly in int_sqrt() [*].

[*] Note how int_sqrt() uses __fls() with two underscores, which already
    returns the proper raw bit number.

    In contrast, int_sqrt64() used fls64(), and that returns bit numbers
    illogically starting at 1, because of error handling for the "no
    bits set" case. Will points out that he bug probably is due to a
    copy-and-paste error from the regular int_sqrt() case.
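
In other words, the initial guess changes along these lines (sketch of the
one-line fix):

    -       m = 1ULL << (fls64(x) & ~1ULL);
    +       m = 1ULL << ((fls64(x) - 1) & ~1ULL);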

Signed-off-by: Florian La Roche <Florian.LaRoche@googlemail.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-21 07:20:18 +13:00
Will Deacon
6e693b3ffe x86: uaccess: Inhibit speculation past access_ok() in user_access_begin()
Commit 594cc251fd ("make 'user_access_begin()' do 'access_ok()'")
makes the access_ok() check part of the user_access_begin() preceding a
series of 'unsafe' accesses.  This has the desirable effect of ensuring
that all 'unsafe' accesses have been range-checked, without having to
pick through all of the callsites to verify whether the appropriate
checking has been made.

However, the consolidated range check does not inhibit speculation, so
it is still up to the caller to ensure that they are not susceptible to
any speculative side-channel attacks for user addresses that ultimately
fail the access_ok() check.

This is an oversight, so use __uaccess_begin_nospec() to ensure that
speculation is inhibited until the access_ok() check has passed.
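
The resulting helper looks roughly like this sketch of the x86 side:

    static __must_check inline bool
    user_access_begin(const void __user *ptr, size_t len)
    {
            if (unlikely(!access_ok(ptr, len)))
                    return 0;
            __uaccess_begin_nospec();       /* was __uaccess_begin() */
            return 1;
    }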

Reported-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-01-20 15:33:22 +12:00
Linus Torvalds
b0f3e768a8 arm64 fixes for -rc3
- Fix broken kpti page-table rewrite in bizarre KASLR configuration
 
 - Fix module loading with KASLR
 
 - Remove redundant definition of ARCH_SLAB_MINALIGN
 -----BEGIN PGP SIGNATURE-----
 
 iQEzBAABCgAdFiEEPxTL6PPUbjXGY88ct6xw3ITBYzQFAlxDkU8ACgkQt6xw3ITB
 YzRSeggAsvhxwa0Yg61A/s3tuaSO+kb6U6QXCVZSBw5F6tn3TPm7txoLlq+kUFoq
 gcQ5RFzoGaW27TQafWQHVWcwYVWHYAc4WqSLQBQDMDPRpA0WR7sx/WUaxPdBDHt1
 qLYHTKs68oTCdHMbvugNQhvBEt9s0qAQzrBk4exPhTLxkeWYELK4F2SpSgxzJun/
 K5Eg9qrl8XRCXO5TGyUT54MtZaF2utnopJOVupRpBpIjBrCY6BGSpZbWA6pcNlUr
 hftyXYMB8B4VGtKWAZp8mive2PFbzH/cj/rmxWDQRl0JUfnLw3lqXyHO8fe1e8vJ
 cZ6tW2shLhHzEM50OmAYnfaeTPqxcg==
 =EjUI
 -----END PGP SIGNATURE-----

Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 fixes from Will Deacon:
 "Three arm64 fixes for -rc3.

  We've plugged a couple of nasty issues involving KASLR-enabled
  kernels, and removed a redundant #define that was introduced as part
  of the KHWASAN fixes from akpm at -rc2.

   - Fix broken kpti page-table rewrite in bizarre KASLR configuration

   - Fix module loading with KASLR

   - Remove redundant definition of ARCH_SLAB_MINALIGN"

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  kasan, arm64: remove redundant ARCH_SLAB_MINALIGN define
  arm64: kaslr: ensure randomized quantities are clean to the PoC
  arm64: kpti: Update arm64_kernel_use_ng_mappings() when forced on
2019-01-20 15:27:59 +12:00
David S. Miller
6436408e81 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Daniel Borkmann says:

====================
pull-request: bpf 2019-01-20

The following pull-request contains BPF updates for your *net* tree.

The main changes are:

1) Fix a out-of-bounds access in __bpf_redirect_no_mac, from Willem.

2) Fix bpf_setsockopt to reset sock dst on SO_MARK changes, from Peter.

3) Fix map in map masking to prevent out-of-bounds access under
   speculative execution, from Daniel.

4) Fix bpf_setsockopt's SO_MAX_PACING_RATE to support TCP internal
   pacing, from Yuchung.

5) Fix json writer license in bpftool, from Thomas.

6) Fix AF_XDP to check if an actually queue exists during umem
   setup, from Krzysztof.

7) Several fixes to BPF stackmap's build id handling. Another fix
   for bpftool build to account for libbfd variations wrt linking
   requirements, from Stanislav.

8) Fix BPF samples build with clang by working around missing asm
   goto, from Yonghong.

9) Fix libbpf to retry program load on signal interrupt, from Lorenz.

10) Various minor compile warning fixes in BPF code, from Mathieu.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-19 16:38:12 -08:00
Willem de Bruijn
e7c87bd6cc bpf: in __bpf_redirect_no_mac pull mac only if present
Syzkaller was able to construct a packet of negative length by
redirecting from bpf_prog_test_run_skb with BPF_PROG_TYPE_LWT_XMIT:

    BUG: KASAN: slab-out-of-bounds in memcpy include/linux/string.h:345 [inline]
    BUG: KASAN: slab-out-of-bounds in skb_copy_from_linear_data include/linux/skbuff.h:3421 [inline]
    BUG: KASAN: slab-out-of-bounds in __pskb_copy_fclone+0x2dd/0xeb0 net/core/skbuff.c:1395
    Read of size 4294967282 at addr ffff8801d798009c by task syz-executor2/12942

    kasan_report.cold.9+0x242/0x309 mm/kasan/report.c:412
    check_memory_region_inline mm/kasan/kasan.c:260 [inline]
    check_memory_region+0x13e/0x1b0 mm/kasan/kasan.c:267
    memcpy+0x23/0x50 mm/kasan/kasan.c:302
    memcpy include/linux/string.h:345 [inline]
    skb_copy_from_linear_data include/linux/skbuff.h:3421 [inline]
    __pskb_copy_fclone+0x2dd/0xeb0 net/core/skbuff.c:1395
    __pskb_copy include/linux/skbuff.h:1053 [inline]
    pskb_copy include/linux/skbuff.h:2904 [inline]
    skb_realloc_headroom+0xe7/0x120 net/core/skbuff.c:1539
    ipip6_tunnel_xmit net/ipv6/sit.c:965 [inline]
    sit_tunnel_xmit+0xe1b/0x30d0 net/ipv6/sit.c:1029
    __netdev_start_xmit include/linux/netdevice.h:4325 [inline]
    netdev_start_xmit include/linux/netdevice.h:4334 [inline]
    xmit_one net/core/dev.c:3219 [inline]
    dev_hard_start_xmit+0x295/0xc90 net/core/dev.c:3235
    __dev_queue_xmit+0x2f0d/0x3950 net/core/dev.c:3805
    dev_queue_xmit+0x17/0x20 net/core/dev.c:3838
    __bpf_tx_skb net/core/filter.c:2016 [inline]
    __bpf_redirect_common net/core/filter.c:2054 [inline]
    __bpf_redirect+0x5cf/0xb20 net/core/filter.c:2061
    ____bpf_clone_redirect net/core/filter.c:2094 [inline]
    bpf_clone_redirect+0x2f6/0x490 net/core/filter.c:2066
    bpf_prog_41f2bcae09cd4ac3+0xb25/0x1000

The generated test constructs a packet with mac header, network
header, skb->data pointing to network header and skb->len 0.

Redirecting to a sit0 through __bpf_redirect_no_mac pulls the
mac length, even though skb->data already is at skb->network_header.
bpf_prog_test_run_skb has already pulled it as LWT_XMIT !is_l2.

Update the offset calculation to pull only if skb->data differs
from skb->network_header, which is not true in this case.

The test itself can be run only from commit 1cf1cae963 ("bpf:
introduce BPF_PROG_TEST_RUN command"), but the same type of packets
with skb at network header could already be built from lwt xmit hooks,
so this fix is more relevant to that commit.

Also set the mac header on redirect from LWT_XMIT, as even after this
change to __bpf_redirect_no_mac that field is expected to be set, but
is not yet in ip_finish_output2.
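
The offset update is essentially the following (sketch, simplified):

    /* __bpf_redirect_no_mac(), sketch: */
    unsigned int mlen = skb_network_offset(skb);

    /* only pull when data isn't already at the network header */
    if (mlen) {
            __skb_pull(skb, mlen);
            ...
    }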

Fixes: 3a0af8fd61 ("bpf: BPF for lightweight tunnel infrastructure")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
2019-01-20 01:11:48 +01:00
Michael S. Tsirkin
df133f3f96 virtio_net: bulk free tx skbs
Use napi_consume_skb() to get bulk free.  Note that napi_consume_skb is
safe to call in a non-napi context as long as the napi_budget flag is
correct.
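
A sketch of the call site (simplified, stats handling omitted):

    static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
    {
            unsigned int len;
            struct sk_buff *skb;

            while ((skb = virtqueue_get_buf(sq->vq, &len)) != NULL)
                    /* bulk free in napi context, dev_consume_skb_any() otherwise */
                    napi_consume_skb(skb, in_napi);
    }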

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-19 16:06:52 -08:00
Linus Torvalds
5d5c303ea0 A few MIPS fixes for 5.0:
- Fix IPI handling for Lantiq SoCs, which was broken by changes made
   back in v4.12.
 
 - Enable OF/DT serial support in ath79_defconfig to give us working
   serial by default.
 
 - Fix 64b builds for the Jazz platform.
 
 - Set up a struct device for the BCM47xx SoC to allow BCM47xx drivers to
   perform DMA again following the major DMA mapping changes made in
   v4.19.
 
 - Disable MSI on Cavium Octeon systems when the pcie_disable command
   line parameter introduced in v3.3 is used, in order to avoid
   inadvertently accessing PCIe controller registers despite the command
   line.
 
 - Fix a build failure for Cavium Octeon kernels with kexec enabled,
   introduced in v4.20.
 
 - Fix a regression in the behaviour of semctl/shmctl/msgctl IPC syscalls
   for kernels including n32 support but not o32 support caused by some
   cleanup in v3.19.
 -----BEGIN PGP SIGNATURE-----
 
 iIsEABYIADMWIQRgLjeFAZEXQzy86/s+p5+stXUA3QUCXEJhqRUccGF1bC5idXJ0
 b25AbWlwcy5jb20ACgkQPqefrLV1AN2aWwEA4ZExeZQi+g9oPNII/jd9wbLKU4Eq
 xjl/+NdzPVu+pP4A/AuG5hsEMFIgS2U0k2js7kNMHCzoV9Ky2m3kdbSNHvQI
 =AqoC
 -----END PGP SIGNATURE-----

Merge tag 'mips_fixes_5.0_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux

Pull MIPS fixes from Paul Burton:

 - Fix IPI handling for Lantiq SoCs, which was broken by changes made
   back in v4.12.

 - Enable OF/DT serial support in ath79_defconfig to give us working
   serial by default.

 - Fix 64b builds for the Jazz platform.

 - Set up a struct device for the BCM47xx SoC to allow BCM47xx drivers
   to perform DMA again following the major DMA mapping changes made in
   v4.19.

 - Disable MSI on Cavium Octeon systems when the pcie_disable command
   line parameter introduced in v3.3 is used, in order to avoid
   inadvertently accessing PCIe controller registers despite the command
   line.

 - Fix a build failure for Cavium Octeon kernels with kexec enabled,
   introduced in v4.20.

 - Fix a regression in the behaviour of semctl/shmctl/msgctl IPC
   syscalls for kernels including n32 support but not o32 support caused
   by some cleanup in v3.19.

* tag 'mips_fixes_5.0_2' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  MIPS: OCTEON: fix kexec support
  mips: fix n32 compat_ipc_parse_version
  Disable MSI also when pcie-octeon.pcie_disable on
  MIPS: BCM47XX: Setup struct device for the SoC
  MIPS: jazz: fix 64bit build
  MIPS: ath79: Enable OF serial ports in the default config
  MIPS: lantiq: Use CP0_LEGACY_COMPARE_IRQ
  MIPS: lantiq: Fix IPI interrupt handling
2019-01-20 10:33:18 +12:00
Linus Torvalds
6a0141a096 A single build fix for powerpc due to device_node.type removal
-----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCgAuFiEEktVUI4SxYhzZyEuo+vtdtY28YcMFAlxDUjkQHHJvYmhAa2Vy
 bmVsLm9yZwAKCRD6+121jbxhwzTBD/9MfQbCzF5IcgW+alp7fKqoFH5MQUkwEQgO
 TJ7TO/CgrnT/uDjUe90tfixsF7A4orpK8h8FWczLF22Ksnirqaxv4Gcmj0ItGQlD
 TeTzCV7DW6Kgcv6weQjH5PohB3XUKzHQe8/MdZCXnn/ikQpKK5bmQEV3Kxw1s4zM
 jHIwMGcDy+EKbcLPpq/lhLyCR7Ce3SrzCtrY41rw+tpgi450A72g0/sOf5onb/Xz
 +/akwRl8lIfcIl21+wMA4zyLOHVeqio+xvXSiYB31z82Pv2v4hGLxjTqBVfPLhpq
 1qJI1ua4v+424RLBNInuEf3+lnUkNizjT38INw0zw+E86TLtQYj1n/m2upoVsRE3
 ggLWVnxlWEIebDhpLySnKW9R58fYxXSq+qaP2jOKkGjzx+0HCZDVIjZwZx3C2Vft
 sMAVSNKd+Mche6GGGEhcPxmKnMIh6P2XyIo/60hv8JqYY6CwqxBGyCGQvCck38AA
 OySmMnS7q31pwiS3m6/RnlLiL6JmHeI/P5FRmPUak68Gbsp7dddcw6dIFUuOBsnF
 QQpuq99cG4r3e9Az5/A45KNrR5MyZVJHoZj0CmjLPRnQuJsPvpEk5BQi2rmH5jSG
 F/BPcFU9/+r65tDVYVmOaaHfqOWk9oeoxGR+e8ve/MpfTktid7to/dLv9smqs0UB
 XdHeh+jepg==
 =6H3h
 -----END PGP SIGNATURE-----

Merge tag 'devicetree-fixes-for-5.0-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull Devicetree fix from Rob Herring:
 "A single build fix for powerpc due to device_node.type removal"

* tag 'devicetree-fixes-for-5.0-2' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  powerpc: chrp: Use of_node_is_type to access device_type
2019-01-20 10:28:46 +12:00
Linus Torvalds
26caabbcd7 libnvdimm v5.0-rc3
* Fix driver initialization crash due to the inability to report an
   'error' state for a DIMM's security capability.
 * Build warning fix for little-endian ARM64 builds
 * Fix a potential race between the EDAC driver's usage of the NFIT
   SMBIOS id for a DIMM and the driver shutdown path.
 * A small collection of one-line benign cleanups for duplicate variable
   assignments, a duplicate header include and a mis-typed function
   argument.
 -----BEGIN PGP SIGNATURE-----
 
 iQIcBAABAgAGBQJcQ4vDAAoJEB7SkWpmfYgCwrYP/iCZZVWz7TlP8u73alEXtw4G
 gPPQHb+n0Y5DQOMZCu1jFfKvkZzcIq4Wp6277Ht6DEREzoobyswRM8G1EKOwbzUM
 J9D0Hm8XCy9Z6v3MpH/VASMowjQkClL/5xy8rlJdZMjprjDFqq3t4M/sBARsaeZv
 xldAcNdbAQWq6HEebLZXja/AF8Ex0o59Z/w3oJ1Ds98mGPjGPUejV9cnpgCOHGZc
 rfB9DZdEKf4CUEqZmlRI253yhR4iHtzKJQGMadQVhcLdO/Jjc8DKxozOgxQOuWQu
 SvAv6t+tymGzPCUJg+pFROwemn3mB2fh3XMaueEY1biCZcj67Lml0H/zTewezsI7
 CZ4ZUPypc9L50GIUYCyPkNuB7E6jXmMD6Vn7EEx1GwQTlZmRQ8WbTELLGGsNUfvI
 55TqT0IbcnUhOXqKbaYql1Y7Z/yEmiBI1f5VdB9zXQKZ66dcgfgfVqLfhw7HUmmu
 j6J3xAtvcYX8x2EA75fdfr5mKUyrr1UAYG6bkActll0mRIHK4u2+ME4E4wmhZidG
 PlZV0sIy6EMDTbTyFP1N0P9FsG4DmK6mTATXFn/CqYMoW3TTuAlsWxhdtGMzhpur
 See6W4yJ0pT3wmzzPcOgPI4erntb7dQirwDjclw4qj/DnUegPKxP8UxXzFBH3gFV
 /wYYqXWh0cXa/Jq9YX3L
 =HCIs
 -----END PGP SIGNATURE-----

Merge tag 'libnvdimm-fixes-5.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm

Pull libnvdimm fixes from Dan Williams:
 "A crash fix, a build warning fix, a miscellaneous small cleanups.

  In case anyone is looking for them, there was a regression caught by
  testing that caused two patches to be dropped from this update.  Those
  patches have been reworked and will soak for another week, then re-target
  5.0-rc4.

   - Fix driver initialization crash due to the inability to report an
     'error' state for a DIMM's security capability.

   - Build warning fix for little-endian ARM64 builds

   - Fix a potential race between the EDAC driver's usage of the NFIT
     SMBIOS id for a DIMM and the driver shutdown path.

   - A small collection of one-line benign cleanups for duplicate
     variable assignments, a duplicate header include and a mis-typed
     function argument"

* tag 'libnvdimm-fixes-5.0-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  libnvdimm/security: Fix nvdimm_security_state() state request selection
  acpi/nfit: Remove duplicate set nd_set in acpi_nfit_init_interleave_set()
  acpi/nfit: Fix race accessing memdev in nfit_get_smbios_id()
  libnvdimm/dimm: Fix security capability detection for non-Intel NVDIMMs
  nfit: Mark some functions as __maybe_unused
  ACPI/nfit: delete the function to_acpi_nfit_desc
  ACPI/nfit: delete the redundant header file
2019-01-20 10:24:30 +12:00
Linus Torvalds
f403d718eb linux-watchdog 5.0-rc-fixes tag
-----BEGIN PGP SIGNATURE-----
 Version: GnuPG v2.0.14 (GNU/Linux)
 
 iEYEABECAAYFAlxDCwIACgkQ+iyteGJfRsq2qwCgxfbjmuRPmmo9qfFF9/0TNLGY
 aJkAniMRSsJzw1BhfW4+7841jPAZm3EV
 =ZUhJ
 -----END PGP SIGNATURE-----

Merge tag 'linux-watchdog-5.0-rc-fixes' of git://www.linux-watchdog.org/linux-watchdog

Pull watchdog fixes from Wim Van Sebroeck:

 - mt7621_wdt/rt2880_wdt: Fix compilation problem

 - tqmx86: Fix a couple IS_ERR() vs NULL bugs

* tag 'linux-watchdog-5.0-rc-fixes' of git://www.linux-watchdog.org/linux-watchdog:
  watchdog: tqmx86: Fix a couple IS_ERR() vs NULL bugs
  watchdog: mt7621_wdt/rt2880_wdt: Fix compilation problem
2019-01-20 09:58:52 +12:00
Linus Torvalds
b0efca46b5 NFS client fixes for Linux 5.0
Stable bugfixes:
 - Fix TCP receive code on archs with flush_dcache_page()
 
 Other bugfixes:
 - Fix error code in rpcrdma_buffer_create()
 - Fix a double free in rpcrdma_send_ctxs_create()
 - Fix kernel BUG at kernel/cred.c:825
 - Fix unnecessary retry in nfs42_proc_copy_file_range()
 - Ensure rq_bytes_sent is reset before request transmission
 - Ensure we respect the RPCSEC_GSS sequence number limit
 - Address Kerberos performance/behavior regression
 -----BEGIN PGP SIGNATURE-----
 
 iQIzBAABCAAdFiEEnZ5MQTpR7cLU7KEp18tUv7ClQOsFAlxCH1kACgkQ18tUv7Cl
 QOs4rBAAymqyhUzNgap1TX/KezFxqii7CVMVabrA5eGN+ZXbSVAZkwy7BMZWwVIp
 tEvD7lxWtF11x7bQDw7Xz+ruBCjLdD0RQIFnlBpVKqsRy9oSRA4PsgSbuIFaw+gX
 Bun4Z0xmOCPF7knRv6gQonArEZfHeokIIN8AtSBtWVByaOrnZwgDkNTIub8akpUl
 FQlzgq7lTydVzNcju2ImBeubU7KgFEu0F2Zub5z/iR+F2Mx/bAju8Q4YeVlPyD8U
 QJoIBlXAvgK8LK4bZCh40zPeEt0TMWXnW7o0JHgVQ0g6VbT+hp17I7fz91xEazye
 qbjpIJIjv5daEv0REM8t5ZCZB3tEatVjb4EQWXp0gJYb0l5E3I/O+7MO44n4uMYx
 s3UTxzM6NjwCtlgmn4tYUj+vEIExQHUUnwOl02e5iEa7bqNNY75ehAhj5Rh7iQBH
 H4b+OVuqc608q87rNePdK1LRyh0/u1cDI1kDAQoIP2omlb5hJQGk0Nuz9G2BodIj
 rP0x7nV+ykOXZtr6TR+RvaksL1W39PzVKYA0aL+e2gbcv4YO+Oq1phvNKwRWPM4a
 g08r/kvifS5h6/Jq8Wmn83f1vAOX7Sf23RtEoj+t9hc4S4JbsV2iYK3PY3eWbSYE
 Oz0Vt4gvBBJ+0rHJ10BsQ7686OQkyMKpIlvmx6O5mWVlthovbJM=
 =6Nzz
 -----END PGP SIGNATURE-----

Merge tag 'nfs-for-5.0-2' of git://git.linux-nfs.org/projects/anna/linux-nfs

Pull NFS client fixes from Anna Schumaker:
 "These are mostly fixes for SUNRPC bugs, with a single v4.2
  copy_file_range() fix mixed in.

  Stable bugfixes:
   - Fix TCP receive code on archs with flush_dcache_page()

  Other bugfixes:
   - Fix error code in rpcrdma_buffer_create()
   - Fix a double free in rpcrdma_send_ctxs_create()
   - Fix kernel BUG at kernel/cred.c:825
   - Fix unnecessary retry in nfs42_proc_copy_file_range()
   - Ensure rq_bytes_sent is reset before request transmission
   - Ensure we respect the RPCSEC_GSS sequence number limit
   - Address Kerberos performance/behavior regression"

* tag 'nfs-for-5.0-2' of git://git.linux-nfs.org/projects/anna/linux-nfs:
  SUNRPC: Address Kerberos performance/behavior regression
  SUNRPC: Ensure we respect the RPCSEC_GSS sequence number limit
  SUNRPC: Ensure rq_bytes_sent is reset before request transmission
  NFSv4.2 fix unnecessary retry in nfs4_copy_file_range
  sunrpc: kernel BUG at kernel/cred.c:825!
  SUNRPC: Fix TCP receive code on archs with flush_dcache_page()
  xprtrdma: Double free in rpcrdma_sendctxs_create()
  xprtrdma: Fix error code in rpcrdma_buffer_create()
2019-01-20 09:27:38 +12:00
Linus Torvalds
4d5f6e0201 SCSI fixes on 20190118
A set of 17 fixes.  Most of these are minor or trivial.  The one fix
 that may be serious is the isci one: the bug can cause hba parameters
 to be set from uninitialized memory.  I don't think it's exploitable,
 but you never know.
 
 Signed-off-by: James E.J. Bottomley <jejb@linux.ibm.com>
 -----BEGIN PGP SIGNATURE-----
 
 iJwEABMIAEQWIQTnYEDbdso9F2cI+arnQslM7pishQUCXEKL0SYcamFtZXMuYm90
 dG9tbGV5QGhhbnNlbnBhcnRuZXJzaGlwLmNvbQAKCRDnQslM7pishVZpAQCwuPTk
 fqOt4v4hJ0oUHtEBsQK3VMXSdUvWdb5Lbn3WeQD/RFYTyNxcIF7ADSWw71b+IigT
 ejUrMzI8ig+nZ1jbFZ4=
 =BdS/
 -----END PGP SIGNATURE-----

Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "A set of 17 fixes. Most of these are minor or trivial.

  The one fix that may be serious is the isci one: the bug can cause hba
  parameters to be set from uninitialized memory. I don't think it's
  exploitable, but you never know"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
  scsi: cxgb4i: add wait_for_completion()
  scsi: qla1280: set 64bit coherent mask
  scsi: ufs: Fix geometry descriptor size
  scsi: megaraid_sas: Retry reads of outbound_intr_status reg
  scsi: qedi: Add ep_state for login completion on un-reachable targets
  scsi: ufs: Fix system suspend status
  scsi: qla2xxx: Use correct number of vectors for online CPUs
  scsi: hisi_sas: Set protection parameters prior to adding SCSI host
  scsi: tcmu: avoid cmd/qfull timers updated whenever a new cmd comes
  scsi: isci: initialize shost fully before calling scsi_add_host()
  scsi: lpfc: lpfc_sli: Mark expected switch fall-throughs
  scsi: smartpqi_init: fix boolean expression in pqi_device_remove_start
  scsi: core: Synchronize request queue PM status only on successful resume
  scsi: pm80xx: reduce indentation
  scsi: qla4xxx: check return code of qla4xxx_copy_from_fwddb_param
  scsi: megaraid_sas: correct an info message
  scsi: target/iscsi: fix error msg typo when create lio_qr_cache failed
  scsi: sd: Fix cache_type_store()
2019-01-20 09:15:04 +12:00
Linus Torvalds
0facb89245 for-linus-20190118
-----BEGIN PGP SIGNATURE-----
 
 iQJEBAABCAAuFiEEwPw5LcreJtl1+l5K99NY+ylx4KYFAlxCLxcQHGF4Ym9lQGtl
 cm5lbC5kawAKCRD301j7KXHgpmBID/0SpsLsnoeki37Wei0HzBZeRdFGIXFfg1XB
 +s66nJUL+igHIIwH+EO7Ct+vw8jfCWZfasI+ThGtkgRg88bqJO3HEbE1MQY8KlTh
 Dm9LRHH9Jp4OYbbag2RXammaD4XKi980fIyVTp9R0keszQroSUTXXS40fksszTtB
 K4LEBiAxGJgin60W9RgbbK93n/pw17U5Wu2WID6ZfVxZYVzpXl0iIQXUz1zmnpy5
 3aP7ORqJGnH7WQiV57LMpq5/QFBoKrfUsjtxpnJ+i2694sLYX8NCqnH82SUliSU+
 NN3Vxic1ozmZEADFKg4dfmgVqqyJaoVCRvsA5i5YjqG9m65G/S7/BmsaWbq5Ixf2
 N1KQr/36vkYGYWqtb2rbYRVf9XfbxWKp1wuSaX3N1Vh1/Ee180qD68+wpa/0mtkh
 2dXV6CQP9De4BXH6mxj3Yr7LI2a2r7KLYhULZCAvLMvDtsBrPyHAmRRgXxaLlKL8
 QMFYYO/1wjlzpGnn4IxP8sDMH6N2shtZHAmC2n9TM2gmk5tA0OmajJJ4uxNbtyhw
 +ZlDco3aw/x0v2onjlbTiPij2cReRgwN+tVaEvn0dj3uxaisCru60iYdnr9FX4nL
 g3NxNUJbUhl6YMwvNMI0GKfz1IILzjPgJIhDqSHvYhtCA1w0DMVXs3Qv4HXoVh3m
 ffdby6/Yqw==
 =5yJp
 -----END PGP SIGNATURE-----

Merge tag 'for-linus-20190118' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:

 - block size setting fixes for loop/nbd (Jan Kara)

 - md bio_alloc_mddev() cleanup (Marcos)

 - Ensure we don't lose the REQ_INTEGRITY flag (Ming)

 - Two NVMe fixes by way of Christoph:
    - Fix NVMe IRQ calculation (Ming)
    - Uninitialized variable in nvmet-tcp (Sagi)

 - BFQ comment fix (Paolo)

 - License cleanup for recently added blk-mq-debugfs-zoned (Thomas)

* tag 'for-linus-20190118' of git://git.kernel.dk/linux-block:
  block: Cleanup license notice
  nvme-pci: fix nvme_setup_irqs()
  nvmet-tcp: fix uninitialized variable access
  block: don't lose track of REQ_INTEGRITY flag
  blockdev: Fix livelocks on loop device
  nbd: Use set_blocksize() to set device blocksize
  md: Make bio_alloc_mddev use bio_alloc_bioset
  block, bfq: fix comments on __bfq_deactivate_entity
2019-01-20 09:12:50 +12:00
Jason Gunthorpe
99e309b6ed clang-format: Update .clang-format with the latest for_each macro list
Re-run the shell fragment that generated the original list. In particular
this adds the missing xarray related functions.

Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
2019-01-19 19:26:06 +01:00
Camelia Groza
3e64cf7a43 net: phy: phy driver features are mandatory
Since phy driver features became a link_mode bitmap, phy drivers that
don't have a list of features configured will cause the kernel to crash
when probed.

Prevent the phy driver from registering if the features field is missing.
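
A minimal sketch of the guard described above (its placement inside
phy_driver_register() and the exact message are assumptions, not the
verbatim diff):

    /* Sketch: refuse to register a phy driver whose mandatory
     * features bitmap was never filled in, instead of crashing
     * later when a device is probed.
     */
    if (WARN_ON(!new_driver->features)) {
            pr_err("%s: Driver features are missing\n", new_driver->name);
            return -EINVAL;
    }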

Fixes: 719655a149 ("net: phy: Replace phy driver features u32 with link_mode bitmap")
Reported-by: Scott Wood <oss@buserror.net>
Signed-off-by: Camelia Groza <camelia.groza@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-19 10:03:08 -08:00
Nathan Chancellor
7afa81c55f isdn: avm: Fix string plus integer warning from Clang
A recent commit in Clang expanded the -Wstring-plus-int warning, showing
some odd behavior in this file.

drivers/isdn/hardware/avm/b1.c:426:30: warning: adding 'int' to a string does not append to the string [-Wstring-plus-int]
                cinfo->version[j] = "\0\0" + 1;
                                    ~~~~~~~^~~
drivers/isdn/hardware/avm/b1.c:426:30: note: use array indexing to silence this warning
                cinfo->version[j] = "\0\0" + 1;
                                           ^
                                    &      [  ]
1 warning generated.

This is equivalent to just "\0". Nick pointed out that it is smarter to
use "" instead of "\0" because "" is used elsewhere in the kernel and
can be deduplicated at the linking stage.

Link: https://github.com/ClangBuiltLinux/linux/issues/309
Suggested-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-19 10:01:03 -08:00
Rob Herring
75a080cde0 powerpc: chrp: Use of_node_is_type to access device_type
Commit 8ce5f84157 ("of: Remove struct device_node.type pointer")
removed struct device_node.type pointer, but the conversion to use
of_node_is_type() accessor was missed in chrp_init_IRQ().
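
Illustrative sketch of the conversion involved (the device_type string and
surrounding code are assumptions, not the chrp_init_IRQ() hunk itself):

    /* Old pattern: no longer builds now that device_node.type is gone. */
    if (np->type && !strcmp(np->type, "open-pic"))
            pic_node = np;

    /* New pattern: the accessor also handles a NULL type safely. */
    if (of_node_is_type(np, "open-pic"))
            pic_node = np;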

Fixes: 8ce5f84157 ("of: Remove struct device_node.type pointer")
Reported-by: kbuild test robot <lkp@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Rob Herring <robh@kernel.org>
2019-01-19 10:35:42 -06:00
David S. Miller
8a7fa0c350 mlx5-fixes-2019-01-18
-----BEGIN PGP SIGNATURE-----
 
 iQEcBAABAgAGBQJcQmwjAAoJEEg/ir3gV/o+aBEIAIE8q3gEonMEPLnRoYSBlgUy
 ZBl7/51yzC8CVdsdO7hx5dyyiI+gYq10Jp6TVpqA2i/V8P871T/u6+xRhP3T7dOc
 Fq2YE70NdkCqFkpyc0QONzMC7ypwVqIEJrX7KnuLi5Ybsm9+tDia8AgC0NX/0H3U
 bRgfzpupfK0TiLgtBe7kqC2WTo+bPzu0cEUu7xjQqUgZZie5QPyzW/6cEtn/V75+
 A3btMNS5w0cfYK4mR4MMuc+UT4rSbsdyIZQrzICo75C2UnVMlojDf90+RuW4JFVw
 tuOMA1LCB5l3lphgEEEerV2dZvs0ZJEuoYoFUkfgu2QEFSOMhod71t8/rurih0s=
 =f23b
 -----END PGP SIGNATURE-----

Merge tag 'mlx5-fixes-2019-01-18' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux

Saeed Mahameed says:

====================
Mellanox, mlx5 fixes 2019-01-18

This series introduces some fixes to mlx5 driver.

Please pull and let me know if there is any problem.

For -stable v4.18
('net/mlx5e: Force CHECKSUM_UNNECESSARY for short ethernet frames')

The patch doesn't apply cleanly to 4.18.y, but it is very simple to
resolve. What should be the procedure here?
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
2019-01-18 18:23:23 -08:00
Eli Britstein
25f2d0e779 net/mlx5e: Fix cb_ident duplicate in indirect block register
Previously the identifier used for indirect block callback registry
and for block rule cb registry (when done via indirect blocks) was the
pointer to the tunnel netdev we were interested in receiving updates on.
This worked fine if a single PF existed that registered one callback for
the tunnel netdev of interest. However, if multiple PFs are in place then
the 2nd PF tries to register with the same tunnel netdev identifier. This
leads to EEXIST errors and/or incorrect cb deletions.

Prevent this conflict by using the rpriv pointer as the identifier for
netdev indirect block cb registry, allowing each PF to register a unique
callback per tunnel netdev. For block cb registry, the same PF may
register multiple cbs to the same block if using TC shared blocks.
Instead of the rpriv, use the pointer to the allocated indr_priv data as
the identifier here. This means that there can be a unique block callback
for each PF/tunnel netdev combo.
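
Conceptually (using a hypothetical registration helper and callback name,
not the exact TC/mlx5e API), the change boils down to which pointer is used
as the callback identifier:

    /* Before: the tunnel netdev itself was the identifier, so a second
     * PF registering for the same tunnel netdev collided (-EEXIST).
     */
    err = indr_block_cb_register(tunnel_dev, mlx5e_indr_cb, tunnel_dev);

    /* After: the per-PF rpriv (or, for block rules, the allocated
     * indr_priv) is the identifier, so each PF/tunnel netdev pair gets
     * its own callback registration.
     */
    err = indr_block_cb_register(tunnel_dev, mlx5e_indr_cb, rpriv);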

Fixes: f5bc2c5de1 ("net/mlx5e: Support TC indirect block notifications for eswitch uplink reprs")
Signed-off-by: Eli Britstein <elibr@mellanox.com>
Reviewed-by: Oz Shlomo <ozsh@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
2019-01-18 16:15:31 -08:00