mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-13 15:53:56 +08:00
Commit Graph

392460 Commits

Author SHA1 Message Date
Peter Zijlstra
147c5fc2ba sched/fair: Shrink sg_lb_stats and play memset games
We can shrink sg_lb_stats because rq::nr_running is an unsigned int
and CPU numbers are plain 'int'.

Before:
  sgs:        /* size: 72, cachelines: 2, members: 10 */
  sds:        /* size: 184, cachelines: 3, members: 7 */

After:
  sgs:        /* size: 56, cachelines: 1, members: 10 */
  sds:        /* size: 152, cachelines: 3, members: 7 */

Further, we can avoid clearing all of sds, since we do a total
clear/assignment of sg_stats in update_sg_lb_stats(), with the exception
of busiest_stat.avg_load, which is referenced in update_sd_pick_busiest().
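
A minimal sketch of both ideas, with stand-in layouts (field names follow
kernel/sched/fair.c of that era, but the exact members and the helper
shown here are illustrative rather than the literal patch):

  /* Per-group stats: counters that mirror rq::nr_running (unsigned int)
   * and CPU numbers (int) need not be unsigned long; narrowing them is
   * what drops sgs from two cachelines to one. */
  struct sg_lb_stats {
      unsigned long avg_load;
      unsigned long group_load;
      unsigned long load_per_task;
      unsigned long group_power;
      unsigned int sum_nr_running;    /* narrowed: rq::nr_running is uint */
      unsigned int group_capacity;
      unsigned int idle_cpus;
      unsigned int group_weight;
      int group_imb;                  /* small flags fit in plain int */
      int group_has_capacity;
  };

  struct sched_group;                 /* opaque in this sketch */

  struct sd_lb_stats {
      struct sched_group *busiest;
      struct sched_group *local;
      unsigned long total_load;
      unsigned long total_pwr;
      unsigned long avg_load;
      struct sg_lb_stats busiest_stat;
      struct sg_lb_stats local_stat;
  };

  /* "memset games": instead of memset(sds, 0, sizeof(*sds)), name only
   * what later code reads before writing.  local_stat is fully rewritten
   * by update_sg_lb_stats(), so only busiest_stat.avg_load -- read early
   * by update_sd_pick_busiest() -- needs explicit clearing. */
  static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
  {
      *sds = (struct sd_lb_stats){
          .busiest      = NULL,
          .local        = NULL,
          .total_load   = 0UL,
          .total_pwr    = 0UL,
          .busiest_stat = { .avg_load = 0UL },
      };
  }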

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-0klzmz9okll8wc0nsudguc9p@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-02 08:27:35 +02:00
Joonsoo Kim
56cf515b4b sched: Clean-up struct sd_lb_stat
There is no reason to maintain separate variables for this_group
and busiest_group in sd_lb_stat, except to save some space.
But this structure is always allocated on the stack, so this saving
isn't really beneficial [peterz: reducing stack space is good; in this
case readability increases enough that I think it's still beneficial].

This patch unifies these variables, so, IMO, readability is improved.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
[ Rename this to local -- avoids confusion between this_cpu and the C++ this pointer. ]
Reviewed-by: Paul Turner <pjt@google.com>
[ Lots of style edits, a few fixes and a rename. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375778203-31343-4-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-02 08:27:35 +02:00
Joonsoo Kim
23f0d2093c sched: Factor out code to should_we_balance()
Currently, checking whether this CPU is appropriate to balance is
embedded in update_sg_lb_stats(), even though this check has no direct
relationship to that function. There is not enough reason to place
this check in update_sg_lb_stats(), except that it saves one iteration
over sched_group_cpus.

In this patch, I factor this check out into a should_we_balance()
function, and before doing the actual work of load balancing, check via
should_we_balance() whether this CPU is appropriate to balance. If this
CPU is not a candidate for balancing, it quits the work immediately.

With this change, we save the cost of two memsets and can expect better
compiler optimization.

Below is the result of this patch.

 * Vanilla *
   text	   data	    bss	    dec	    hex	filename
  34499	   1136	    116	  35751	   8ba7	kernel/sched/fair.o

 * Patched *
   text	   data	    bss	    dec	    hex	filename
  34243	   1136	    116	  35495	   8aa7	kernel/sched/fair.o

In addition, rename @balance to @continue_balancing in order to represent
its purpose more clearly.
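
A hedged sketch of the resulting control flow (the struct below is a
stand-in; the real lb_env and load_balance() carry far more state, and
the real policy picks roughly the first idle CPU of the group, else the
first CPU, as the designated balancer):

  struct lb_env {
      int dst_cpu;              /* the CPU running this balance attempt */
      int group_balance_cpu;    /* stand-in: designated balancer of the group */
  };

  /* Only the designated CPU of the group gets to balance; every other
   * CPU quits before any statistics are gathered. */
  static int should_we_balance(const struct lb_env *env)
  {
      return env->dst_cpu == env->group_balance_cpu;
  }

  static int load_balance(struct lb_env *env, int *continue_balancing)
  {
      *continue_balancing = 1;

      if (!should_we_balance(env)) {
          *continue_balancing = 0;    /* renamed from @balance */
          return 0;                   /* no memsets, no stats walk, no moves */
      }

      /* ... find_busiest_group(), find_busiest_queue(), move tasks ... */
      return 0;
  }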

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
[ s/should_balance/continue_balancing/g ]
Reviewed-by: Paul Turner <pjt@google.com>
[ Made style changes and a fix in should_we_balance(). ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375778203-31343-3-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-02 08:27:34 +02:00
Joonsoo Kim
95a79b805b sched: Remove one division operation in find_busiest_queue()
Remove one division operation in find_busiest_queue() by using
crosswise multiplication:

	wl_i / power_i > wl_j / power_j  <=>
	wl_i * power_j > wl_j * power_i
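
The same comparison without the division, as a small stand-alone sketch
(assuming the weighted loads and cpu powers are small enough that the
products do not overflow 64 bits):

  #include <stdint.h>

  /*
   * wl_i / power_i > wl_j / power_j  <=>  wl_i * power_j > wl_j * power_i
   * holds because both power values are positive.
   */
  static int queue_i_is_busier(uint64_t wl_i, uint64_t power_i,
                               uint64_t wl_j, uint64_t power_j)
  {
      return wl_i * power_j > wl_j * power_i;
  }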

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
[ Expanded the changelog. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1375778203-31343-2-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-02 08:26:59 +02:00
Jiri Olsa
ae23bff1d7 perf: Prevent race in unthrottling code
The current throttling code triggers the WARN below via the following
workload (only hit on an AMD machine with 48 CPUs):

  # while [ 1 ]; do perf record perf bench sched messaging; done

  WARNING: at arch/x86/kernel/cpu/perf_event.c:1054 x86_pmu_start+0xc6/0x100()
  SNIP
  Call Trace:
   <IRQ>  [<ffffffff815f62d6>] dump_stack+0x19/0x1b
   [<ffffffff8105f531>] warn_slowpath_common+0x61/0x80
   [<ffffffff8105f60a>] warn_slowpath_null+0x1a/0x20
   [<ffffffff810213a6>] x86_pmu_start+0xc6/0x100
   [<ffffffff81129dd2>] perf_adjust_freq_unthr_context.part.75+0x182/0x1a0
   [<ffffffff8112a058>] perf_event_task_tick+0xc8/0xf0
   [<ffffffff81093221>] scheduler_tick+0xd1/0x140
   [<ffffffff81070176>] update_process_times+0x66/0x80
   [<ffffffff810b9565>] tick_sched_handle.isra.15+0x25/0x60
   [<ffffffff810b95e1>] tick_sched_timer+0x41/0x60
   [<ffffffff81087c24>] __run_hrtimer+0x74/0x1d0
   [<ffffffff810b95a0>] ? tick_sched_handle.isra.15+0x60/0x60
   [<ffffffff81088407>] hrtimer_interrupt+0xf7/0x240
   [<ffffffff81606829>] smp_apic_timer_interrupt+0x69/0x9c
   [<ffffffff8160569d>] apic_timer_interrupt+0x6d/0x80
   <EOI>  [<ffffffff81129f74>] ? __perf_event_task_sched_in+0x184/0x1a0
   [<ffffffff814dd937>] ? kfree_skbmem+0x37/0x90
   [<ffffffff815f2c47>] ? __slab_free+0x1ac/0x30f
   [<ffffffff8118143d>] ? kfree+0xfd/0x130
   [<ffffffff81181622>] kmem_cache_free+0x1b2/0x1d0
   [<ffffffff814dd937>] kfree_skbmem+0x37/0x90
   [<ffffffff814e03c4>] consume_skb+0x34/0x80
   [<ffffffff8158b057>] unix_stream_recvmsg+0x4e7/0x820
   [<ffffffff814d5546>] sock_aio_read.part.7+0x116/0x130
   [<ffffffff8112c10c>] ? __perf_sw_event+0x19c/0x1e0
   [<ffffffff814d5581>] sock_aio_read+0x21/0x30
   [<ffffffff8119a5d0>] do_sync_read+0x80/0xb0
   [<ffffffff8119ac85>] vfs_read+0x145/0x170
   [<ffffffff8119b699>] SyS_read+0x49/0xa0
   [<ffffffff810df516>] ? __audit_syscall_exit+0x1f6/0x2a0
   [<ffffffff81604a19>] system_call_fastpath+0x16/0x1b
  ---[ end trace 622b7e226c4a766a ]---

The reason is a race in the perf_event_task_tick() throttling code.
The race flow (simplified code):

  - perf_throttled_count is a per-CPU variable and is the
    CPU throttling flag, here starting at 0

  - perf_throttled_seq is the sequence/domain for the allowed
    count of interrupts within the tick; it gets increased
    each tick

    on a single CPU (CPU-bound event):

      ... workload

    perf_event_task_tick:
    |
    | T0    inc(perf_throttled_seq)
    | T1    needs_unthr = xchg(perf_throttled_count, 0) == 0
     tick gets interrupted:

            ... event gets throttled under new seq ...

      T2    last NMI comes, event is throttled - inc(perf_throttled_count)

     back to tick:
    | perf_adjust_freq_unthr_context:
    |
    | T3    unthrottling is skipped for the event (needs_unthr == 0)
    | T4    the event is stopped and started via freq adjustment
    |
    tick ends

      ... workload
      ... no sample is hit for event ...

    perf_event_task_tick:
    |
    | T5    needs_unthr = xchg(perf_throttled_count, 0) != 0 (from T2)
    | T6    unthrottling is done on event (interrupts == MAX_INTERRUPTS)
    |       event is already started (from T4) -> WARN

Fix this by not checking needs_unthr again, and thus checking all
events for unthrottling.
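
A hedged, self-contained sketch of the shape of the fix (all names below
are stand-ins, not the kernel's; the real logic lives in the tick-time
frequency/unthrottle pass over perf events):

  #define MAX_INTERRUPTS_SKETCH 64    /* stand-in for the kernel's limit */

  struct event_sketch {
      int interrupts;    /* NMI hits counted within the current sequence */
      int attr_freq;     /* event uses frequency-based sampling */
      int running;
  };

  /*
   * Before the fix, the unthrottle branch was additionally guarded by
   * "needs_unthr &&", so an event throttled by a late NMI (after the
   * xchg() of perf_throttled_count) was skipped here, then stopped and
   * started by the freq adjustment, and finally unthrottled on the next
   * tick while already started -> WARN in x86_pmu_start().
   */
  static void adjust_freq_unthr_sketch(struct event_sketch *ev, int n,
                                       int needs_unthr, int nr_freq)
  {
      int i;

      if (!nr_freq && !needs_unthr)
          return;

      for (i = 0; i < n; i++) {
          /* Fix: trust each event's own counter, not needs_unthr. */
          if (ev[i].interrupts == MAX_INTERRUPTS_SKETCH) {
              ev[i].interrupts = 0;
              ev[i].running = 1;    /* stand-in for ->start() */
          }
          /* ... frequency adjustment for attr_freq events ... */
      }
  }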

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Reported-by: Jan Stancek <jstancek@redhat.com>
Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1377355554-8934-1-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2013-09-02 08:13:24 +02:00
Al Viro
bd1c149aa9 Introduce [compat_]save_altstack_ex() to unbreak x86 SMAP
For performance reasons, when SMAP is in use, SMAP is left open for an
entire put_user_try { ... } put_user_catch(); block; however, calling
__put_user() in the middle of that block will close SMAP, as the
STAC..CLAC constructs intentionally do not nest.

Furthermore, using __put_user() rather than put_user_ex() here is bad
for performance.

Thus, introduce new [compat_]save_altstack_ex() helpers that replace
__[compat_]save_altstack() for x86, currently the only architecture
which supports put_user_try { ... } put_user_catch().
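
A hedged sketch of what such a helper can look like (mirroring the fields
that __save_altstack() already writes, but via put_user_ex(); treat this
as the shape of the helper, not the literal kernel macro):

  /*
   * put_user_ex() relies on the enclosing put_user_try { } put_user_catch()
   * block's single STAC/CLAC pair, so unlike a nested __put_user() it does
   * not close the SMAP window mid-block.
   */
  #define save_altstack_ex(uss, sp) do {                                \
          stack_t __user *__uss = (uss);                                \
          struct task_struct *t = current;                              \
          put_user_ex((void __user *)t->sas_ss_sp, &__uss->ss_sp);      \
          put_user_ex(sas_ss_flags(sp), &__uss->ss_flags);              \
          put_user_ex(t->sas_ss_size, &__uss->ss_size);                 \
  } while (0)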

Reported-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.8+
Link: http://lkml.kernel.org/n/tip-es5p6y64if71k8p5u08agv9n@git.kernel.org
2013-09-01 14:16:33 -07:00
H. Peter Anvin
7263dda41b x86, smap: Handle csum_partial_copy_*_user()
Add SMAP annotations to csum_partial_copy_to/from_user().  These
functions legitimately access user space and thus need to set the AC
flag.

TODO: add explicit checks that the side with the kernel space pointer
really points into kernel space.
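
A hedged sketch of the annotation pattern (the wrapper signature and the
inner copy/checksum routine are stand-ins; the point is only the
stac()/clac() bracket around the user-space access):

  /* Hypothetical stand-in for the asm copy+checksum routine. */
  static __wsum do_csum_and_copy(const void *src, void *dst, int len,
                                 __wsum sum, int *errp);

  static __wsum csum_and_copy_from_user_sketch(const void __user *src,
                                               void *dst, int len,
                                               __wsum sum, int *errp)
  {
      __wsum ret;

      stac();    /* set EFLAGS.AC: the user-space access below is legitimate */
      ret = do_csum_and_copy((__force const void *)src, dst, len, sum, errp);
      clac();    /* clear EFLAGS.AC again on the way out */
      return ret;
  }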

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-2aps0u00eer658fd5xyanan7@git.kernel.org
Cc: <stable@vger.kernel.org> # v3.7+
2013-09-01 14:09:48 -07:00
Mark Brown
a584862221 Merge remote-tracking branch 'regulator/topic/tps65912' into regulator-next 2013-09-01 13:50:23 +01:00
Mark Brown
2d31b15b86 Merge remote-tracking branch 'regulator/topic/ti-abb' into regulator-next 2013-09-01 13:50:22 +01:00
Mark Brown
04fcec88cd Merge remote-tracking branch 'regulator/topic/sec' into regulator-next 2013-09-01 13:50:21 +01:00
Mark Brown
a89f5c7598 Merge remote-tracking branch 'regulator/topic/ramp' into regulator-next 2013-09-01 13:50:20 +01:00
Mark Brown
09f2dd88ff Merge remote-tracking branch 'regulator/topic/pfuze100' into regulator-next 2013-09-01 13:50:18 +01:00
Mark Brown
39fe3b45d3 Merge remote-tracking branch 'regulator/topic/palmas' into regulator-next 2013-09-01 13:50:17 +01:00
Mark Brown
f27a5fb424 Merge remote-tracking branch 'regulator/topic/optional' into regulator-next 2013-09-01 13:50:17 +01:00
Mark Brown
6979380d85 Merge remote-tracking branch 'regulator/topic/max8660' into regulator-next 2013-09-01 13:50:16 +01:00
Mark Brown
3aba952706 Merge remote-tracking branch 'regulator/topic/lp8755' into regulator-next 2013-09-01 13:50:14 +01:00
Mark Brown
882e3dba4c Merge remote-tracking branch 'regulator/topic/lp872x' into regulator-next 2013-09-01 13:50:13 +01:00
Mark Brown
bca3523b22 Merge remote-tracking branch 'regulator/topic/linear-range' into regulator-next 2013-09-01 13:50:12 +01:00
Mark Brown
099c606224 Merge remote-tracking branch 'regulator/topic/kconfig' into regulator-next 2013-09-01 13:50:11 +01:00
Mark Brown
724d054490 Merge remote-tracking branch 'regulator/topic/helpers' into regulator-next 2013-09-01 13:50:09 +01:00
Mark Brown
446b4665e3 Merge remote-tracking branch 'regulator/topic/fan53555' into regulator-next 2013-09-01 13:50:08 +01:00
Mark Brown
62696579a0 Merge remote-tracking branch 'regulator/topic/da9063' into regulator-next 2013-09-01 13:50:07 +01:00
Mark Brown
1ad13028e5 Merge remote-tracking branch 'regulator/topic/core' into regulator-next 2013-09-01 13:50:06 +01:00
Mark Brown
5288be36cd Merge remote-tracking branch 'regulator/topic/as3711' into regulator-next 2013-09-01 13:50:05 +01:00
Mark Brown
28c37c9ce8 Merge remote-tracking branch 'regulator/topic/88pm800' into regulator-next 2013-09-01 13:50:04 +01:00
Mark Brown
5787392598 Merge remote-tracking branch 'spi/topic/txx9' into spi-next 2013-09-01 13:49:18 +01:00
Mark Brown
961722f57c Merge remote-tracking branch 'spi/topic/topcliff' into spi-next 2013-09-01 13:49:17 +01:00
Mark Brown
989e694647 Merge remote-tracking branch 'spi/topic/tle62x0' into spi-next 2013-09-01 13:49:15 +01:00
Mark Brown
58737f127a Merge remote-tracking branch 'spi/topic/tel62x0' into spi-next 2013-09-01 13:49:14 +01:00
Mark Brown
e7c57cc124 Merge remote-tracking branch 'spi/topic/tegra' into spi-next 2013-09-01 13:49:13 +01:00
Mark Brown
a3e412dcf5 Merge remote-tracking branch 'spi/topic/sirf' into spi-next 2013-09-01 13:49:12 +01:00
Mark Brown
368ce0bca4 Merge remote-tracking branch 'spi/topic/sh-msiof' into spi-next 2013-09-01 13:49:11 +01:00
Mark Brown
874915ed4b Merge remote-tracking branch 'spi/topic/sh-hspi' into spi-next 2013-09-01 13:49:10 +01:00
Mark Brown
2dc745b6ef Merge remote-tracking branch 'spi/topic/s3c64xx' into spi-next 2013-09-01 13:49:09 +01:00
Mark Brown
121a39661b Merge remote-tracking branch 'spi/topic/rspi' into spi-next 2013-09-01 13:49:08 +01:00
Mark Brown
278ac33bbd Merge remote-tracking branch 'spi/topic/quad' into spi-next 2013-09-01 13:49:07 +01:00
Mark Brown
85cac43132 Merge remote-tracking branch 'spi/topic/qspi' into spi-next 2013-09-01 13:49:06 +01:00
Mark Brown
793b3cb6ac Merge remote-tracking branch 'spi/topic/pxa' into spi-next 2013-09-01 13:49:05 +01:00
Mark Brown
720ed2d130 Merge remote-tracking branch 'spi/topic/pl022' into spi-next 2013-09-01 13:49:04 +01:00
Mark Brown
68aa4cb337 Merge remote-tracking branch 'spi/topic/pdata' into spi-next 2013-09-01 13:49:03 +01:00
Mark Brown
11c28cfc1e Merge remote-tracking branch 'spi/topic/orion' into spi-next 2013-09-01 13:49:02 +01:00
Mark Brown
4374f332d9 Merge remote-tracking branch 'spi/topic/omap-100k' into spi-next 2013-09-01 13:49:01 +01:00
Mark Brown
45bb5065e1 Merge remote-tracking branch 'spi/topic/octeon' into spi-next 2013-09-01 13:49:00 +01:00
Mark Brown
455765f02e Merge remote-tracking branch 'spi/topic/nuc900' into spi-next 2013-09-01 13:48:59 +01:00
Mark Brown
7c2fde9b1a Merge remote-tracking branch 'spi/topic/mxs' into spi-next 2013-09-01 13:48:59 +01:00
Mark Brown
9020b75467 Merge remote-tracking branch 'spi/topic/msglen' into spi-next 2013-09-01 13:48:58 +01:00
Mark Brown
84c86edab3 Merge remote-tracking branch 'spi/topic/mpc512x' into spi-next 2013-09-01 13:48:57 +01:00
Mark Brown
4faad26c27 Merge remote-tracking branch 'spi/topic/ioremap' into spi-next 2013-09-01 13:48:56 +01:00
Mark Brown
8e37befc5c Merge remote-tracking branch 'spi/topic/imx' into spi-next 2013-09-01 13:48:55 +01:00
Mark Brown
497cb4473a Merge remote-tracking branch 'spi/topic/ep93xx' into spi-next 2013-09-01 13:48:54 +01:00