mirror of https://github.com/edk2-porting/linux-next.git synced 2024-12-21 03:33:59 +08:00
Commit Graph

464159 Commits

Author SHA1 Message Date
Joe Perches
032a4c0f9a checkpatch: warn on unnecessary else after return or break
Using an else following a break or return can unnecessarily indent code
blocks.

ie:
	for (i = 0; i < 100; i++) {
		int foo = bar();
		if (foo < 1)
			break;
		else
			usleep(1);
	}

is generally better written as:

	for (i = 0; i < 100; i++) {
		int foo = bar();
		if (foo < 1)
			break;
		usleep(1);
	}

Warn when a bare else statement is preceded by a break or return
indented 1 tab more than the else.
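
The return form is analogous; a minimal illustration (hypothetical code,
not taken from the patch):

	if (foo < 1)
		return -EINVAL;
	else
		foo--;

is likewise better written without the else:

	if (foo < 1)
		return -EINVAL;
	foo--;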

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:27 -07:00
Joe Perches
ebfdc40969 checkpatch: attempt to find unnecessary 'out of memory' messages
Logging messages that show some type of "out of memory" error are
generally unnecessary as there is a generic message and a stack dump
done by the memory subsystem.

These messages generally increase kernel size without much added value.

Emit a warning on these types of messages.

This test looks for any inserted message function, then looks at the
previous line for an "if (!foo)" or "if (foo == NULL)" test, and then
looks at the preceding statement for an allocation function like "foo =
kmalloc()".

ie: this code matches:

	foo = kmalloc();
	if (foo == NULL) {
		printk("Out of memory\n");
		return -ENOMEM;
	}

This test is very crude and incomplete.

This test can miss quite a lot of OOM messages that do not have this
specific form.

ie: this code does not match:

	foo = kmalloc();
	if (!foo) {
		rtn = -ENOMEM;
		printk("Out of memory!\n");
		goto out;
	}

This test could also produce a false positive when the logging message
itself does not say anything about memory, but I did not find any false
positives in my limited testing.

spatch could be a better solution but correctness seems non-trivial for
that tool too.

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:27 -07:00
Rasmus Villemoes
74e7653190 lib: bitmap: add missing mask in bitmap_andnot
Apparently, bitmap_andnot is supposed to return whether the new bitmap
is empty.  But it didn't take potential garbage bits in the last word
into account.
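
The shape of the fix, sketched (mask the final partial word with the
kernel's BITMAP_LAST_WORD_MASK() so garbage bits cannot make an empty
result look non-empty; parameter names are illustrative, not the
verbatim patch):

	unsigned int k, lim = bits / BITS_PER_LONG;
	unsigned long result = 0;

	for (k = 0; k < lim; k++)
		result |= (dst[k] = bitmap1[k] & ~bitmap2[k]);
	if (bits % BITS_PER_LONG)
		result |= (dst[k] = bitmap1[k] & ~bitmap2[k] &
			   BITMAP_LAST_WORD_MASK(bits));
	return result != 0;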

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:27 -07:00
Rasmus Villemoes
7e5f97d192 lib: bitmap: add missing mask in bitmap_and
Apparently, bitmap_and is supposed to return whether the new bitmap is
empty.  But it didn't take potential garbage bits in the last word into
account.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:27 -07:00
Rasmus Villemoes
c5341ec890 lib: bitmap: add missing mask in bitmap_shift_right
There is no guarantee that *src does not contain garbage bits outside
the lower nbits, so we need to mask it before the shift-and-assign.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:27 -07:00
Rasmus Villemoes
2ac521d332 lib: bitmap: micro-optimize bitmap_allocate_region
__reg_op(..., REG_OP_ALLOC) always returns 0, so we might as well use that
and save an instruction.
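
A sketch of the change (return the result of the alloc op directly
instead of a literal 0):

	/* before */
	if (!__reg_op(bitmap, pos, order, REG_OP_ISFREE))
		return -EBUSY;
	__reg_op(bitmap, pos, order, REG_OP_ALLOC);
	return 0;

	/* after: REG_OP_ALLOC always yields 0 */
	if (!__reg_op(bitmap, pos, order, REG_OP_ISFREE))
		return -EBUSY;
	return __reg_op(bitmap, pos, order, REG_OP_ALLOC);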

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
9279d3286e lib: bitmap: change parameter of bitmap_*_region to unsigned
Changing the pos parameter of __reg_op to unsigned allows the compiler
to generate slightly smaller and simpler code.  Also update its callers
bitmap_*_region to receive and pass unsigned int.  The return types of
bitmap_find_free_region and bitmap_allocate_region are still int to
allow a negative error code to be returned.  An int is certainly capable
of representing any realistic return value.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
a855174878 lib: bitmap: fix typo in kerneldoc for bitmap_pos_to_ord
A few lines above, it was stated that positions for non-set bits are
mapped to -1, which is obviously also what the code does.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
bc5be18280 lib: bitmap: simplify bitmap_parselist
We want len to be the index of the first '\n', or the length of the
string if there is no newline.  This is a good example of the usefulness
of strchrnul().  Use that instead, thus eliminating a branch and a call
to strlen().
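
A sketch of the simplification (variable names illustrative):

	/* before: a branch plus a possible call to strlen() */
	nl = strchr(buf, '\n');
	len = nl ? nl - buf : strlen(buf);

	/* after: strchrnul() points at the '\n', or at the NUL if none */
	len = strchrnul(buf, '\n') - buf;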

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
154f5e38f3 lib: bitmap: make the start index of bitmap_clear unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "start" is non-negative.

Also, use the names "start" and "len" for the two parameters for
consistency with bitmap_set.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
fb5ac54263 lib: bitmap: make the start index of bitmap_set unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "start" is non-negative.

Also, use the names "start" and "len" for the two parameters in both
header file and implementation, instead of the previous mix.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
877d9f3b63 lib: bitmap: make nbits parameter of bitmap_weight unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

I didn't change the return type, since that might change the semantics
of some expression containing a call to bitmap_weight(). Certainly an
int is capable of holding the result.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
5be20213e8 lib: bitmap: make nbits parameter of bitmap_subset unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
6dfe9799c2 lib: bitmap: make nbits parameter of bitmap_intersects unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
2f9305eb31 lib: bitmap: make nbits parameter of bitmap_{and,or,xor,andnot} unsigned
This change is only for consistency with the changes to the other
bitmap_* functions; it doesn't change the size of the generated code:
inside BITS_TO_LONGS there is a sizeof(long), which causes bits to be
interpreted as unsigned anyway.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
65b4ee62c9 lib: bitmap: remove unnecessary mask from bitmap_complement
Since the extra bits are "don't care", there is no reason to mask the
last word to the used bits when complementing.  This shaves off yet a
few bytes.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
3d6684f4e6 lib: bitmap: make nbits parameter of bitmap_complement unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
5e06806931 lib: bitmap: make nbits parameter of bitmap_equal unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:26 -07:00
Rasmus Villemoes
8397927c80 lib: bitmap: make nbits parameter of bitmap_full unsigned
The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Rasmus Villemoes
0679cc4836 lib: bitmap: make nbits parameter of bitmap_empty unsigned
Many functions in lib/bitmap.c start with an expression such as lim =
bits/BITS_PER_LONG.  Since bits has type (signed) int, and since gcc
cannot know that it is in fact non-negative, it generates worse code
than it could.  These patches, mostly consisting of changing various
parameters to unsigned, give a slight overall code reduction:

  add/remove: 1/1 grow/shrink: 8/16 up/down: 251/-414 (-163)
  function                                     old     new   delta
  tick_device_uses_broadcast                   335     425     +90
  __irq_alloc_descs                            498     554     +56
  __bitmap_andnot                               73     115     +42
  __bitmap_and                                  70     101     +31
  bitmap_weight                                  -      11     +11
  copy_hugetlb_page_range                      752     762     +10
  follow_hugetlb_page                          846     854      +8
  hugetlb_init                                1415    1417      +2
  hugetlb_nrpages_setup                        130     131      +1
  hugetlb_add_hstate                           377     376      -1
  bitmap_allocate_region                        82      80      -2
  select_task_rq_fair                         2202    2191     -11
  hweight_long                                  66      55     -11
  __reg_op                                     230     219     -11
  dm_stats_message                            2849    2833     -16
  bitmap_parselist                              92      74     -18
  __bitmap_weight                              115      97     -18
  __bitmap_subset                              153     129     -24
  __bitmap_full                                128     104     -24
  __bitmap_empty                               120      96     -24
  bitmap_set                                   179     149     -30
  bitmap_clear                                 185     155     -30
  __bitmap_equal                               136     105     -31
  __bitmap_intersects                          148     108     -40
  __bitmap_complement                          109      67     -42
  tick_device_setup_broadcast_func.isra         81       -     -81

[The increases in __bitmap_and{,not} are due to bug fixes 17/18,18/18.
No idea why bitmap_weight suddenly appears.]

While 163 bytes treewide is insignificant, I believe the bitmap
functions are often called with locks held, so saving even a few cycles
might be worth it.

While making these changes, I found a few other things that might be
worth including.  16,17,18 are actual bug fixes.  The rest shouldn't
change the behaviour of any of the functions, provided no-one passed
negative nbits values.  If something should come up, it should be fairly
bisectable.

A few issues I thought about, but didn't know what to do with:

* Many of the functions misbehave if nbits is compile-time 0; the
  out-of-line functions generally handle 0 correctly.  bitmap_fill() is
  particularly bad, whether the 0 is known at compile time or not.  It
  would probably be nice to add detection of at least compile-time 0 and
  handle that appropriately.

* I didn't change __bitmap_shift_{left,right} to use unsigned because I
  want to fully understand why the algorithm works before making that
  change.  However, AFAICT, they behave correctly for all (positive) shift
  amounts.  This is not the case for the small_const_nbits versions.  If
  for example nbits = n = BITS_PER_LONG, the shift operators turn into
  no-ops (at least on x86), so one gets *dst = *src, whereas one would
  expect to get *dst = 0.  That difference in behaviour is somewhat
  annoying.

This patch (of 18):

The compiler can generate slightly smaller and simpler code when it
knows that "nbits" is non-negative.  Since no-one passes a negative
bit-count, this shouldn't affect the semantics.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Andrew Morton
d0da23b0de lib/list_sort.c: convert to pr_foo
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Rasmus Villemoes
61b3d6c48f lib: list_sort.c: Limit number of unused cmp callbacks
The helper merge_and_restore_back_links() makes sure to call the
caller's cmp function during the final ->prev pointer fixup, so that the
cmp function may call cond_resched().  However, if the cmp function does
not call cond_resched() at all, this is entirely redundant.  If it does,
doing at least two function calls for every two pointer assignments is a
bit excessive.  This patch limits the calls to once for every 256
iterations.
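
A sketch of the idea (a u8 counter wraps every 256 iterations; names
follow the description above, not necessarily the patch verbatim):

	u8 count = 0;

	/* inside the ->prev fixup loop */
	if (unlikely(!(++count)))	/* true once every 256 iterations */
		(*cmp)(priv, tail->next, tail->next);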

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Rasmus Villemoes
694123031d lib: list_sort_test(): simplify and harden cleanup
There is no reason to maintain the list structure while freeing the
debug elements.  Aside from the redundant pointer manipulations, it is
also inefficient from a locality-of-reference viewpoint, since they are
visited in a random order (wrt.  the order they were allocated).
Furthermore, if we jumped to exit: after detecting list corruption,
walking the corrupted links to free the elements would actually be
dangerous.

So just free the elements in the order they were allocated, using the
backing array elts.  Allocate that using kcalloc(), so that if
allocation of one of the debug elements fails, we just end up calling
kfree(NULL) for the trailing elements.
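
A sketch of that allocation and cleanup (TEST_LIST_LEN stands for the
test's element count; treat names as illustrative):

	elts = kcalloc(TEST_LIST_LEN, sizeof(*elts), GFP_KERNEL);
	if (!elts)
		return err;

	/* cleanup: free in allocation order; kfree(NULL) is a no-op for
	 * trailing slots whose allocation failed */
	for (i = 0; i < TEST_LIST_LEN; i++)
		kfree(elts[i]);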

Minor details: Use sizeof(*elts) instead of sizeof(void *), and return
err immediately when allocation of elts fails, to avoid introducing
another label just before the final return statement.

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Rasmus Villemoes
9d418dcc6d lib: list_sort_test(): add extra corruption check
Add a check to make sure that the prev pointer of the list head points
to the last element on the list.
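
A sketch of such a check (assuming cur points at the last visited
element when the walk ends):

	if (head.prev != cur) {
		pr_err("error: list head prev does not point to the last element\n");
		goto exit;
	}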

Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Rasmus Villemoes
27d555d101 lib: list_sort_test(): return -ENOMEM when allocation fails
Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Don Mullis <don.mullis@gmail.com>
Cc: Dave Chinner <david@fromorbit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Joe Perches
087face526 kernel.h: remove deprecated pack_hex_byte
It's been nearly 3 years now since commit 55036ba76b ("lib: rename
pack_hex_byte() to hex_byte_pack()") so it's time to remove this
deprecated and unused static inline.

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Fabian Frederick
129965a916 lib/test-kstrtox.c: use ARRAY_SIZE instead of sizeof/sizeof[0]
Use kernel.h definition.
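
The pattern of the change (array and helper names illustrative):

	/* before: open-coded element count */
	for (i = 0; i < sizeof(test1) / sizeof(test1[0]); i++)
		run_case(&test1[i]);

	/* after: kernel.h's ARRAY_SIZE() */
	for (i = 0; i < ARRAY_SIZE(test1); i++)
		run_case(&test1[i]);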

Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Mathias Krause
142cda5dbc lib/string_helpers.c: constify static arrays
Complement commit 68aecfb979 ("lib/string_helpers.c: make arrays
static") by making the arrays const -- not only pointing to const
strings.  This moves them out of the data section to the r/o data
section:

   text    data     bss     dec     hex filename
   1150     176       0    1326     52e lib/string_helpers.old.o
   1326       0       0    1326     52e lib/string_helpers.new.o
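
The pattern, sketched (array name and contents are illustrative of the
arrays in lib/string_helpers.c):

	/* before: the strings are const, but the array of pointers is
	 * not, so the array lands in .data */
	static const char *units_10[] = { "B", "kB", "MB", "GB" };

	/* after: the array itself is const too and moves to .rodata */
	static const char *const units_10[] = { "B", "kB", "MB", "GB" };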

Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
Gui Hecheng
e004f3c778 lib/cmdline.c: add size unit t/p/e to memparse
For modern filesystems such as btrfs, t/p/e size level operations are
common.  Add size unit t/p/e parsing to memparse.
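
A sketch of the new cases, extending memparse()'s existing fall-through
switch (the shift-by-10 chain is the idiom; details may differ from the
patch):

	case 'E':
	case 'e':
		ret <<= 10;
	case 'P':
	case 'p':
		ret <<= 10;
	case 'T':
	case 't':
		ret <<= 10;
		/* fall through to the existing G/M/K handling */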

Signed-off-by: Gui Hecheng <guihc.fnst@cn.fujitsu.com>
Acked-by: David Rientjes <rientjes@google.com>
Reviewed-by: Satoru Takeuchi <takeuchi_satoru@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
George Spelvin
428ac5fc05 libata: Use glob_match from lib/glob.c
The function may be useful for other drivers, so export it.  (Suggested
by Tejun Heo.)

Note that I inverted the return value of glob_match; returning true on
match seemed to make more sense.

Signed-off-by: George Spelvin <linux@horizon.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
George Spelvin
5f9be8248d lib/glob.c: add CONFIG_GLOB_SELFTEST
This was useful during development, and is retained for future
regression testing.

GCC appears to have no way to place string literals in a particular
section; adding __initconst to a char pointer leaves the string itself
in the default string section, where it will not be thrown away after
module load.

Thus all string constants are kept in explicitly declared and named
arrays.  Sorry this makes printk a bit harder to read.  At least the
tests are more compact.

Signed-off-by: George Spelvin <linux@horizon.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:25 -07:00
George Spelvin
b01250856b lib: add lib/glob.c
This is a helper function from drivers/ata/libata_core.c, where it is
used to blacklist particular device models.  It's being moved to lib/ so
other drivers may use it for the same purpose.

This implementation is non-recursive, so it is safe for the kernel stack.
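
Expected usage, sketched (helper name and pattern string are
illustrative; the libata blacklist uses entries of this shape):

	#include <linux/glob.h>

	/* returns true when the pattern matches the whole string */
	static bool model_is_blacklisted(const char *model)
	{
		return glob_match("WDC WD??00JB-00*", model);
	}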

[akpm@linux-foundation.org: fix sparse warning]
Signed-off-by: George Spelvin <linux@horizon.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Sergey Senozhatsky
62e7ca5280 zlib: clean up some dead code
Clean up unused `if 0'-ed functions, which have been dead since 2006
(commits 87c2ce3b93 ("lib/zlib*: cleanups") by Adrian Bunk and
4f3865fb57 ("zlib_inflate: Upgrade library code to a recent version")
by Richard Purdie):

 - zlib_deflateSetDictionary
 - zlib_deflateParams
 - zlib_deflateCopy
 - zlib_inflateSync
 - zlib_syncsearch
 - zlib_inflateSetDictionary
 - zlib_inflatePrime

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Ken Helias
0f9859ca92 klist: use same naming scheme as hlist for klist_add_after()
The name was changed from hlist_add_after() to hlist_add_behind() when
the order of arguments was adjusted to match that of klist_add_after().
The rename is necessary so that old code using the reversed argument
order breaks at compile time instead of misbehaving.

Make klist follow this naming scheme for consistency.

Signed-off-by: Ken Helias <kenhelias@firemail.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Ken Helias
1d023284c3 list: fix order of arguments for hlist_add_after(_rcu)
All other add functions for lists have the new item as first argument
and the position where it is added as second argument.  This was changed
for no good reason in this function, which makes using it unnecessarily
confusing.

The name was changed to hlist_add_behind() to cause unconverted code to
generate a compile error instead of using the wrong parameter order.
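
The signatures, sketched:

	/* old: the existing node first, the new node second */
	void hlist_add_after(struct hlist_node *n, struct hlist_node *next);

	/* new: new node first, position second, like the other add
	 * functions; the rename makes unconverted callers fail to build */
	void hlist_add_behind(struct hlist_node *n, struct hlist_node *prev);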

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Ken Helias <kenhelias@firemail.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Acked-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>	[intel driver bits]
Cc: Hugh Dickins <hughd@google.com>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Ken Helias
bc18dd335a list: make hlist_add_after() argument names match hlist_add_after_rcu()
The argument names for hlist_add_after() are poorly chosen because they
look the same as the ones for hlist_add_before() but have to be used
differently.

hlist_add_after_rcu() has made a better choice.

Signed-off-by: Ken Helias <kenhelias@firemail.de>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Neil Zhang
d25d9feced kernel/printk/printk.c: fix bool assignements
Fix coccinelle warnings.

Signed-off-by: Neil Zhang <zhangwm@marvell.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Jan Kara
5874af2003 printk: enable interrupts before calling console_trylock_for_printk()
We need interrupts disabled when calling console_trylock_for_printk()
only so that the cpu id we pass to can_use_console() remains valid (for
everything else console_sem provides all the exclusion we need, and
deadlocks on console_sem due to interrupts are impossible because we use
down_trylock()).  However, if we are rescheduled, we are guaranteed to
run on an online cpu, so we can just as easily get the cpu id in
can_use_console() itself.

We can lose a bit of performance when we enable interrupts in
vprintk_emit() and then disable them again in console_unlock() but OTOH
it can somewhat reduce interrupt latency caused by console_unlock().

We differ from (reverted) commit 939f04bec1 in that we avoid calling
console_unlock() from vprintk_emit() with lockdep enabled as that has
unveiled quite some bugs leading to system freezes during boot (e.g.
  https://lkml.org/lkml/2014/5/30/242,
  https://lkml.org/lkml/2014/6/28/521).

Signed-off-by: Jan Kara <jack@suse.cz>
Tested-by: Andreas Bombe <aeb@debian.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Alex Elder
249771b830 printk: miscellaneous cleanups
Some small cleanups to kernel/printk/printk.c.  None of them should
cause any change in behavior.

  - When CONFIG_PRINTK is defined, parenthesize the value of LOG_LINE_MAX.
  - When CONFIG_PRINTK is *not* defined, there is an extra LOG_LINE_MAX
    definition; delete it.
  - Pull an assignment out of a conditional expression in console_setup().
  - Use isdigit() in console_setup() rather than open coding it.
  - In update_console_cmdline(), drop a NUL-termination assignment;
    the strlcpy() call that precedes it guarantees it's not needed.
  - Simplify some logic in printk_timed_ratelimit().

Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Petr Mladek <pmladek@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Jan Kara <jack@suse.cz>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Alex Elder
e99aa46166 printk: use a clever macro
Use the IS_ENABLED() macro rather than #ifdef blocks to set certain
global values.
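
The pattern, for one such value (printk_time is a plausible example;
treat this as a sketch, not the exact hunk):

	/* before */
	#ifdef CONFIG_PRINTK_TIME
	static bool printk_time = 1;
	#else
	static bool printk_time = 0;
	#endif

	/* after */
	static bool printk_time = IS_ENABLED(CONFIG_PRINTK_TIME);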

Signed-off-by: Alex Elder <elder@linaro.org>
Acked-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Petr Mladek <pmladek@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Jan Kara <jack@suse.cz>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Alex Elder
0b90fec3b9 printk: fix some comments
Fix a few comments that don't accurately describe their corresponding
code, along with some minor typographical errors.

Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Petr Mladek <pmladek@suse.cz>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Jan Kara <jack@suse.cz>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Alex Elder
42a9dc0b3d printk: rename DEFAULT_MESSAGE_LOGLEVEL
Commit a8fe19ebfb ("kernel/printk: use symbolic defines for console
loglevels") makes consistent use of symbolic values for printk() log
levels.

The naming scheme used is different from the one used for
DEFAULT_MESSAGE_LOGLEVEL though.  Change that symbol name to be
MESSAGE_LOGLEVEL_DEFAULT for consistency.  And because the value of that
symbol comes from a similarly-named config option, rename
CONFIG_DEFAULT_MESSAGE_LOGLEVEL as well.

Signed-off-by: Alex Elder <elder@linaro.org>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Borislav Petkov <bp@suse.de>
Cc: Jan Kara <jack@suse.cz>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Alex Elder
e97e1267e9 printk: tweak do_syslog() to match comments
In do_syslog() there's a path used by kmsg_poll() and kmsg_read() that
only needs to know whether there's any data available to read (and not
its size).  These callers only check for non-zero return.  As a
shortcut, do_syslog() returns the difference between what has been
logged and what has been "seen."

The comments say that the "count of records" should be returned but it's
not.  Instead it returns (log_next_idx - syslog_idx), which is a
difference between buffer offsets--and the result could be negative.

The behavior is the same (it'll be zero or not in the same cases), but
the count of records is more meaningful and it matches what the comments
say.  So change the code to return that.
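
The change, sketched from the description above:

	/* before: difference of byte offsets into the ring buffer,
	 * which could even be negative */
	error = log_next_idx - syslog_idx;

	/* after: count of records not yet "seen", as the comments promise */
	error = log_next_seq - syslog_seq;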

Signed-off-by: Alex Elder <elder@linaro.org>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Jan Kara <jack@suse.cz>
Cc: Joe Perches <joe@perches.com>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:24 -07:00
Luis R. Rodriguez
23b2899f7f printk: allow increasing the ring buffer depending on the number of CPUs
The default size of the ring buffer is too small for machines with a
large number of CPUs under heavy load.  What ends up happening when
debugging is that the ring buffer wraps and chews up old messages,
making debugging impossible unless the size is passed as a kernel
parameter.  An idle system upon boot will on average spew out only about
one or two extra lines, but where this really matters is under heavy
load, and that will vary widely depending on the system and environment.

There are mechanisms to help increase the kernel ring buffer for tracing
through debugfs, and those interfaces even allow growing the kernel ring
buffer per CPU.  We also have a static value which can be passed upon
boot.  Relying on debugfs, however, is not ideal for production, and the
value passed upon bootup can only be used *after* an issue has crept up.
Instead of being reactive, this adds a proactive measure which lets you
scale the contribution you'd expect each CPU to make to the kernel ring
buffer under load, in the worst-case scenario.

We use num_possible_cpus() to avoid the complexity that dynamically
changing the ring buffer size at run time would introduce;
num_possible_cpus() gives us the upper limit on the possible number of
CPUs, so we avoid having to deal with hotplugging CPUs on and off.
This introduces the kernel configuration option LOG_CPU_MAX_BUF_SHIFT,
which specifies the maximum contribution per CPU to the kernel ring
buffer in the worst case before the ring buffer wraps; the size is
specified as a power of 2.  The total contribution made by the CPUs must
be greater than half of the default kernel ring buffer size (1 <<
LOG_BUF_SHIFT bytes) in order to trigger an increase upon bootup.  The
kernel ring buffer is increased to the next power of two that fits the
required minimum kernel ring buffer size plus the additional CPU
contribution.  For example, if LOG_BUF_SHIFT is 18 (256 KB) you'd
require at least 128 KB of contributions by other CPUs in order to
trigger an increase of the kernel ring buffer.  With a
LOG_CPU_MAX_BUF_SHIFT of 12 (4 KB) you'd require more than 64 possible
CPUs to trigger an increase.  If you had 128 possible CPUs the minimum
required kernel ring buffer size bumps to:

   ((1 << 18) + ((128 - 1) * (1 << 12))) / 1024 = 764 KB

Since we require the ring buffer to be a power of two the new required
size would be 1024 KB.

These CPU contributions are ignored when the "log_buf_len" kernel
parameter is used, as it forces the exact size of the ring buffer to an
expected power of two value.
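
A sketch of the sizing rule described above (helper and variable names
are illustrative):

	unsigned int cpu_extra = (num_possible_cpus() - 1) * __LOG_CPU_MAX_BUF_LEN;

	/* grow only if the worst-case CPU contribution exceeds half the
	 * default buffer and log_buf_len= was not given on the command line */
	if (cpu_extra > __LOG_BUF_LEN / 2 && !new_log_buf_len)
		log_buf_len_update(cpu_extra + __LOG_BUF_LEN);	/* rounds up to a power of 2 */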

[pmladek@suse.cz: fix build]
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Signed-off-by: Petr Mladek <pmladek@suse.cz>
Tested-by: Davidlohr Bueso <davidlohr@hp.com>
Tested-by: Petr Mladek <pmladek@suse.cz>
Reviewed-by: Davidlohr Bueso <davidlohr@hp.com>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Joe Perches <joe@perches.com>
Cc: Arun KS <arunks.linux@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Luis R. Rodriguez
f54051722e printk: make dynamic units clear for the kernel ring buffer
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Suggested-by: Davidlohr Bueso <davidlohr@hp.com>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Joe Perches <joe@perches.com>
Cc: Arun KS <arunks.linux@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Luis R. Rodriguez
c0a318a361 printk: move power of 2 practice of ring buffer size to a helper
In practice the power-of-2 size of the kernel ring buffer is purely
historical, not a requirement, especially now that we have LOG_ALIGN and
use it for both static and dynamic allocations.  It could have helped
with implicit alignment back in the day: even the dynamically sized ring
buffer was guaranteed to be aligned so long as CONFIG_LOG_BUF_SHIFT
produced an architecture-aligned __LOG_BUF_LEN, since log_buf_len=n was
allowed only if it was > __LOG_BUF_LEN and was always rounded up to the
next power of 2 with roundup_pow_of_two(), which is also architecture
aligned.  These assumptions of course relied heavily on
CONFIG_LOG_BUF_SHIFT producing an aligned value, but users can always
change that.

We now have precise alignment requirements set for the log buffer for
both static and dynamic allocations, but let's keep the old practice of
using powers of 2 for its size, which keeps the values predictable and
friendly to the allocators used for dynamic allocation.  We'll reuse
this later, so move it into a helper.
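
The helper, roughly as described (a sketch):

	static void __init log_buf_len_update(unsigned size)
	{
		if (size)
			size = roundup_pow_of_two(size);
		if (size > log_buf_len)
			new_log_buf_len = size;
	}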

Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Joe Perches <joe@perches.com>
Cc: Arun KS <arunks.linux@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Luis R. Rodriguez
7030017752 printk: make dynamic kernel ring buffer alignment explicit
We have to consider alignment for the ring buffer both for the default
static size and for when a dynamic allocation is made, i.e. when the
log_buf_len=n kernel parameter is passed to set the size to something
larger than the default set by the architecture through
CONFIG_LOG_BUF_SHIFT.

The default static kernel ring buffer can be aligned properly if
architectures set CONFIG_LOG_BUF_SHIFT properly; however, since we
provide ranges for the size, even a sensible aligned CONFIG_LOG_BUF_SHIFT
value can be reduced to a non-aligned one.  Commit 6ebb017de9 ("printk:
Fix alignment of buf causing crash on ARM EABI") by Andrew Lunn ensures
the static buffer is always aligned, with the alignment decided by the
compiler via __alignof__(struct log).

When log_buf_len=n is used we allocate the ring buffer dynamically.
Dynamic allocation varies: for the early allocation called before
setup_arch(), memblock_virt_alloc() requests page alignment, while for
the default kernel allocation memblock_virt_alloc_nopanic() requests no
special alignment, which in turn ends up aligning the allocation to
SMP_CACHE_BYTES, i.e. L1 cache aligned.

Since we already know the required alignment for the kernel ring buffer,
though, we can do better and request LOG_ALIGN alignment explicitly.
Do that, both to be safe and to make the dynamic allocation alignment
explicit.
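
The change for the non-early path, sketched:

	/* before: no explicit alignment requested */
	new_log_buf = memblock_virt_alloc_nopanic(new_log_buf_len, 0);

	/* after: request LOG_ALIGN explicitly */
	new_log_buf = memblock_virt_alloc_nopanic(new_log_buf_len, LOG_ALIGN);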

Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
Tested-by: Petr Mladek <pmladek@suse.cz>
Acked-by: Petr Mladek <pmladek@suse.cz>
Cc: Andrew Lunn <andrew@lunn.ch>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Joe Perches <joe@perches.com>
Cc: Arun KS <arunks.linux@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Davidlohr Bueso <davidlohr@hp.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Geoff Levand
90a856436d include/linux/byteorder/generic.h: minor comment fix
Signed-off-by: Geoff Levand <geoff@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Joe Perches
68be302963 fs.h, drivers/hwmon/asus_atk0110.c: fix DEFINE_SIMPLE_ATTRIBUTE semicolon definition and use
The DEFINE_SIMPLE_ATTRIBUTE macro should not end in a ';'.  Fix the one
use in the kernel tree that did not have a semicolon.

Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Luca Tettamanti <kronos.it@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00
Jiri Kosina
69102311a5 ./Makefile: tell gcc optimizer to never introduce new data races
We have been chasing a memory corruption bug, which turned out to be
caused by very old gcc (4.3.4), which happily turned conditional load
into a non-conditional one, and that broke correctness (the condition
was met only if lock was held) and corrupted memory.

This particular problem with that particular code did not happen when
newer gccs were used.  I've brought this up with our gcc folks, as I
wanted to make sure that this can't really happen again, and it turns
out it actually can.

Quoting Martin Jambor <mjambor@suse.cz>:
 "More current GCCs are more careful when it comes to replacing a
  conditional load with a non-conditional one, most notably they check
  that a store happens in each iteration of _a_ loop but they assume
  loops are executed.  They also perform a simple check whether the
  store cannot trap which currently passes only for non-const
  variables.  A simple testcase demonstrating it on an x86_64 is for
  example the following:

  $ cat cond_store.c

  #include <errno.h>
  #include <error.h>
  #include <sys/mman.h>

  int g_1 = 1;

  int g_2[1024] __attribute__((section ("safe_section"), aligned (4096)));

  int c = 4;

  int __attribute__ ((noinline))
  foo (void)
  {
    int l;
    for (l = 0; (l != 4); l++) {
      if (g_1)
        return l;
      for (g_2[0] = 0; (g_2[0] >= 26); ++g_2[0])
        ;
    }
    return 2;
  }

  int main (int argc, char* argv[])
  {
    if (mprotect (g_2, sizeof(g_2), PROT_READ) == -1)
      {
        int e = errno;
        error (e, e, "mprotect error %i", e);
      }
    foo ();
    __builtin_printf("OK\n");
    return 0;
  }
  /* EOF */
  $ ~/gcc/trunk/inst/bin/gcc cond_store.c -O2 --param allow-store-data-races=0
  $ ./a.out
  OK
  $ ~/gcc/trunk/inst/bin/gcc cond_store.c -O2 --param allow-store-data-races=1
  $ ./a.out
  Segmentation fault

  The testcase fails the same at least with 4.9, 4.8 and 4.7.  Therefore
  I would suggest building kernels with this parameter set to zero. I
  also agree with Jikos that the default should be changed for -O2.  I
  have run most of the SPEC 2k6 CPU benchmarks (gamess and dealII
  failed, at -O2, not sure why) compiled with and without this option
  and did not see any real difference between respective run-times"

Hopefully the default will be changed in newer gccs, but let's force it
for kernel builds so that we are on the safe side even when older gccs
are used.

The code in question was an out-of-tree printk-in-NMI (yeah, surprise
surprise, once again) patch written by Petr Mladek; let me quote his
comment from our internal bugzilla:

 "I have spent few days investigating inconsistent state of kernel ring buffer.
  It went out that it was caused by speculative store generated by
  gcc-4.3.4.

  The problem is in assembly generated for make_free_space(). The functions is
  called the following way:

  + vprintk_emit();
      + log = MAIN_LOG; // with logbuf_lock
         or
         log = NMI_LOG; // with nmi_logbuf_lock
         cont_add(log, ...);
          + cont_flush(log, ...);
              + log_store(log, ...);
                    + log_make_free_space(log, ...);

  If called with log = NMI_LOG then only the nmi_log_* global variables are
  safe to modify, but the generated code also stores into the (main_)log_*
  global variables:

  <log_make_free_space>:
         55                      push   %rbp
         89 f6                   mov    %esi,%esi

         48 8b 05 03 99 51 01    mov    0x1519903(%rip),%rax       # ffffffff82620868 <nmi_log_next_id>
         44 8b 1d ec 98 51 01    mov    0x15198ec(%rip),%r11d      # ffffffff82620858 <log_next_idx>
         8b 35 36 60 14 01       mov    0x1146036(%rip),%esi       # ffffffff8224cfa8 <log_buf_len>
         44 8b 35 33 60 14 01    mov    0x1146033(%rip),%r14d      # ffffffff8224cfac <nmi_log_buf_len>
         4c 8b 2d d0 98 51 01    mov    0x15198d0(%rip),%r13       # ffffffff82620850 <log_next_seq>
         4c 8b 25 11 61 14 01    mov    0x1146111(%rip),%r12       # ffffffff8224d098 <log_buf>
         49 89 c2                mov    %rax,%r10
         48 21 c2                and    %rax,%rdx
         48 8b 1d 0c 99 55 01    mov    0x155990c(%rip),%rbx       # ffffffff826608a0 <nmi_log_buf>
         49 c1 ea 20             shr    $0x20,%r10
         48 89 55 d0             mov    %rdx,-0x30(%rbp)
         44 29 de                sub    %r11d,%esi
         45 29 d6                sub    %r10d,%r14d
         4c 8b 0d 97 98 51 01    mov    0x1519897(%rip),%r9	# ffffffff82620840 <log_first_seq>
         eb 7e                   jmp    ffffffff81107029	<log_make_free_space+0xe9>
  [...]
         85 ff                   test   %edi,%edi                  # edi = 1 for NMI_LOG
         4c 89 e8                mov    %r13,%rax
         4c 89 ca                mov    %r9,%rdx
         74 0a                   je     ffffffff8110703d	<log_make_free_space+0xfd>
         8b 15 27 98 51 01       mov    0x1519827(%rip),%edx       # ffffffff82620860 <nmi_log_first_id>
         48 8b 45 d0             mov    -0x30(%rbp),%rax
         48 39 c2                cmp    %rax,%rdx                  # end of loop
         0f 84 da 00 00 00       je     ffffffff81107120 <log_make_free_space+0x1e0>
  [...]
         85 ff                   test   %edi,%edi                  # edi = 1 for NMI_LOG
         4c 89 0d 17 97 51 01    mov    %r9,0x1519717(%rip)        # ffffffff82620840 <log_first_seq>
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
                                 KABOOOM
         74 35                   je     ffffffff81107160		 <log_make_free_space+0x220>

  It stores log_first_seq when edi == NMI_LOG.  These instructions are also
  used when edi == MAIN_LOG, but the store is done speculatively before the
  condition is decided.  It is unsafe because we do not hold "logbuf_lock" in
  NMI context and some other process might modify "log_first_seq" in parallel"

I believe that the best course of action is both

 - building the kernel (and anything multi-threaded, I guess) with that
   optimization turned off
 - persuading the gcc folks to change the default for future releases

Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Martin Jambor <mjambor@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Marek Polacek <polacek@redhat.com>
Cc: Jakub Jelinek <jakub@redhat.com>
Cc: Steven Noonan <steven@uplinklabs.net>
Cc: Richard Biener <richard.guenther@gmail.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-08-06 18:01:23 -07:00