This patch introduces debugfs support to UBI. All the UBI stuff is kept in the
"ubi" debugfs directory, which contains per-UBI device "ubi/ubiX"
sub-directories with debugging files. The patch also creates the
"ubi/ubiX/chk_gen" and "ubi/ubiX/chk_io" knobs for switching general and I/O
extra checks on and off. And it removes the 'debug_chks' UBI module parameter.
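For illustration, here is a minimal sketch (not the actual UBI code) of how
per-device knobs like "ubi/ubiX/chk_gen" can be wired up through debugfs; the
structure and function names below are assumptions:

#include <linux/debugfs.h>
#include <linux/types.h>

/* Hypothetical per-device debugging state */
struct ubi_debug_info {
	u8 chk_gen;	/* general extra checks enabled */
	u8 chk_io;	/* I/O extra checks enabled */
};

static struct dentry *dfs_rootdir;	/* the top-level "ubi" directory */

static void ubi_debugfs_init_dev(const char *devname, struct ubi_debug_info *d)
{
	struct dentry *dir;

	if (!dfs_rootdir)
		dfs_rootdir = debugfs_create_dir("ubi", NULL);

	/* Per-device "ubi/ubiX" sub-directory */
	dir = debugfs_create_dir(devname, dfs_rootdir);

	/* Writing 0 or 1 to these files toggles the corresponding checks */
	debugfs_create_u8("chk_gen", 0600, dir, &d->chk_gen);
	debugfs_create_u8("chk_io", 0600, dir, &d->chk_io);
}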
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This is a minor preparatory patch which changes the
'paranoid_check_in_wl_tree()' function interface by adding the 'ubi' parameter,
which will be needed in the next patch.
While at it, add a "const" qualifier to the 'ubi' parameter of the
'paranoid_check_in_pq()' function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Fix checkpatch.pl errors and warnings:
* space before tab
* line over 80 characters
* include linux/ioctl.h instead of asm/ioctl.h
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Similarly to the debugging checks and messages, make the test modes
dynamically selectable via the "debug_tsts" module parameter or
via the "/sys/module/ubi/parameters/debug_tsts" sysfs file. This
is consistent with UBIFS as well.
And now, since all the Kconfig knobs became dynamic, we can remove
the Kconfig.debug file completely.
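As a rough sketch (the parameter name is taken from above, the variable name
is assumed), the dynamic selection boils down to an ordinary writable module
parameter:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Bitmask of enabled special test modes */
static unsigned int ubi_dbg_tst_flags;

/* Shows up as /sys/module/ubi/parameters/debug_tsts, writable at runtime */
module_param_named(debug_tsts, ubi_dbg_tst_flags, uint, 0644);
MODULE_PARM_DESC(debug_tsts, "Debug special test flags");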
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch adds the ability to dynamically switch UBI self-checks
on and off, instead of toggling them at compile time from the configuration
menu. This is much more flexible and consistent with UBIFS, and it
also simplifies the UBI Kconfig menu and the code.
This patch introduces two levels of self-checks - general, which
includes all self-checks which are relatively fast, and I/O, which
includes write-verify checks and erase-verify checks, which are
relatively slow and involve flash I/O.
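A hypothetical sketch of how the two levels are meant to be consumed (the flag
and helper names are assumptions, not the exact UBI interface):

/* Hypothetical flags, toggled at run time */
struct ubi_dbg_flags {
	unsigned int chk_gen:1;	/* fast, general self-checks */
	unsigned int chk_io:1;	/* slow checks involving flash I/O */
};

/* Example of guarding an expensive write-verify check with the I/O level */
static int dbg_verify_write(const struct ubi_dbg_flags *dbg,
			    int (*read_back_and_compare)(void))
{
	if (!dbg->chk_io)
		return 0;	/* checks disabled: skip the flash I/O */
	return read_back_and_compare();
}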
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Currently UBI erases all corrupted eraseblocks, irrespective of the nature
of the corruption: corruption due to power cuts vs. non-power-cut corruption.
The former case is OK, but the latter is not, because UBI may destroy
potentially important data.
With this patch, during scanning, when UBI hits a PEB with a corrupted VID
header, it checks whether this PEB contains only 0xFF data. If yes, it is
safe to erase this PEB and it is put on the 'erase' list. If not, this may
be important data and it is better to avoid erasing this PEB. Instead,
UBI puts it on the 'corr' list and moves it out of the pool of available
PEBs. IOW, UBI preserves this PEB.
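The "only 0xFF data" test can be sketched as follows (the helper name is an
assumption; the real code reads the PEB's data area into a buffer first):

#include <linux/types.h>

/* Return 1 if the buffer contains nothing but the erased-flash pattern */
static int buf_is_all_ff(const void *buf, int len)
{
	int i;
	const u8 *p = buf;

	for (i = 0; i < len; i++)
		if (p[i] != 0xFF)
			return 0;	/* real data present - preserve the PEB */
	return 1;			/* only 0xFF - safe to erase */
}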
Such corrupted PEBs lessen the amount of available PEBs, so the more of them
we accumulate, the fewer PEBs are available. The maximum number of preserved
non-power-cut corrupted PEBs is 8.
This patch is a response to a UBIFS problem where the reporter
(Matthew L. Creech <mlcreech@gmail.com>) observed that the UBIFS index points
to an unmapped LEB. The theory is that the corresponding PEB somehow got
corrupted and UBI wiped it. This patch (actually a series of patches)
tries to make sure such PEBs are preserved - this would make it easier
to analyze the corruption.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Currently UBI has one small flaw - when we read an EC or VID header but find
only 0xFF bytes, we return UBI_IO_FF and do not report whether we had bit-flips
or not. In the case of the VID header, the scanning code adds this PEB to the
free list, even though there were bit-flips.
Imagine the following situation: we start writing the VID header to a PEB and
have a power cut, so the PEB becomes unstable. When we scan and read the PEB,
we get a bit-flip. Currently, UBI would just ignore this and treat the PEB as
free. This patch changes UBI's behaviour so that it now schedules this PEB for
erasure.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
The 'UBI_IO_PEB_EMPTY' and 'UBI_IO_PEB_FREE' return codes are essentially the
same and mean that there are only 0xFF bytes instead of headers. Simplify
UBI a little by turning them into a single 'UBI_IO_FF' error code.
Also, stop maintaining comments in 'ubi_io_read_vid_hdr()' which are
almost identical to the comments in 'ubi_io_read_ec_hdr()'.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
When an erroneous PEB is scheduled for scrubbing, we end up with the
following oops:
[<c0162404>] (prot_queue_del+0x0/0x50) from [<c01635b4>] (ubi_wl_scrub_peb+0xec/0x13c)
[<c01634c8>] (ubi_wl_scrub_peb+0x0/0x13c) from [<c01603bc>] (ubi_eba_read_leb+0x200/0x428)
[<c01601bc>] (ubi_eba_read_leb+0x0/0x428) from [<c015e3c0>] (ubi_leb_read+0xe8/0x138)
[<c015e2d8>] (ubi_leb_read+0x0/0x138) from [<c00d6918>] (ubifs_start_scan+0x7c/0xf4)
[<c00d689c>] (ubifs_start_scan+0x0/0xf4) from [<c00e3650>] (ubifs_recover_leb+0x3c/0x730)
[<c00e3614>] (ubifs_recover_leb+0x0/0x730) from [<c00e444c>] (ubifs_recover_log_leb+0xc8/0x2dc)
[<c00e4384>] (ubifs_recover_log_leb+0x0/0x2dc) from [<c00d7c20>] (ubifs_replay_journal+0xb90/0x13a4)
[<c00d7090>] (ubifs_replay_journal+0x0/0x13a4) from [<c00cdd68>] (ubifs_fill_super+0xb84/0x1054)
[<c00cd1e4>] (ubifs_fill_super+0x0/0x1054) from [<c00ced04>] (ubifs_get_sb+0xc4/0x2ac)
[<c00cec40>] (ubifs_get_sb+0x0/0x2ac) from [<c007f04c>] (vfs_kern_mount+0x58/0x94)
[<c007eff4>] (vfs_kern_mount+0x0/0x94) from [<c007f0e8>] (do_kern_mount+0x40/0xe8)
[<c007f0a8>] (do_kern_mount+0x0/0xe8) from [<c0095628>] (do_new_mount+0x68/0x8c)
[<c00955c0>] (do_new_mount+0x0/0x8c) from [<c00957a8>] (do_mount+0x15c/0x1b8)
[<c009564c>] (do_mount+0x0/0x1b8) from [<c0095890>] (sys_mount+0x8c/0xd4)
[<c0095804>] (sys_mount+0x0/0xd4) from [<c0023c00>] (ret_fast_syscall+0x0/0x2c)
Kernel panic - not syncing: Fatal exception
The problem is that 'ubi_wl_scrub_peb()' does not expect that PEBs may
be in the erroneous tree, which is a bug. This patch fixes the bug by
adding the corresponding check to 'ubi_wl_scrub_peb()'. Now it will simply
ignore erroneous PEBs instead of causing an oops.
Reported-by: Matthieu CASTET <matthieu.castet@parrot.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
UBI debugging functions were a little bit over-engineered and
returned more error codes than needed, and the callers had to
do useless checks. Simplify the return codes.
Impact: only debugging code is affected, which means that for
non-developers this is a no-op patch.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
The 'paranoid_check_empty()' check is bogus, which is easily
seen on NOR flash, which has long erase cycles and may
easily end up with half-erased eraseblocks. In this case the
paranoid check fails. It is just wrong to assume that PEBs which
do not have EC headers always contain all 0xFF. Such an assumption
should not be made at the I/O level, which is quite low.
Thus, just kill the check.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch adds code which makes sure eraseblocks contain only 0xFF
bytes before UBI starts using them. The verification is done only when
debugging checks are enabled.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
'kmem_cache_free()' oopses if NULL is passed to it, and there is
one error-path place where UBI may call it with a NULL object.
This problem was pointed out by Adrian Hunter.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
When marking a PEB as bad, print how many PEBs are left reserved.
This is very useful information.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Print not only the PEB number, but also the LEB number and volume id,
which is very useful for bug hunting.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch improves UBI error handling. ATM UBI switches to
R/O mode when the WL worker fails to read the source PEB.
This means that the upper layers (e.g., UBIFS) have no
chance to unmap the erroneous PEB and fix the error.
This patch changes this behaviour and makes UBI put PEBs
like this into a separate RB-tree, thus preventing the
WL worker from hitting the same read errors again and
again.
But there is a 10% limit on the maximum amount of PEBs like this.
If there are too many of them, UBI switches to R/O mode.
Additionally, this patch teaches UBI not to panic and
switch to R/O mode if, after a PEB has been copied, the
target LEB cannot be read back. Instead, now UBI cancels
the operation and schedules the target PEB for torturing.
The error paths have been tested by injecting errors
into 'ubi_eba_copy_leb()'.
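The 10% limit can be illustrated roughly like this (the field names are
assumptions):

/* Hypothetical counters kept in the UBI device object */
struct ubi_err_stats {
	int erroneous_peb_count;	/* PEBs currently in the erroneous tree */
	int good_peb_count;		/* all usable PEBs on the device */
};

/* If more than ~10% of PEBs are erroneous, something is badly wrong */
static int too_many_erroneous(const struct ubi_err_stats *st)
{
	return st->erroneous_peb_count > st->good_peb_count / 10;
}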
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch fixes the error path in the WL worker - in some cases
UBI oopses when 'goto out_error' happens and e1 or e2 are NULL.
This patch also cleans up the error paths a little. And I have
tested nearly all error paths in the WL worker.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch is a clean-up and a preparation for the following
patches. It introduces constants for the return values of the
'ubi_eba_copy_leb()' function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
UBI has 2 RB-trees to implement PEB protection, which is too
much for simply preventing PEBs from being moved for some time.
This patch implements the protection using lists instead (a sketch
follows below). The benefits:
1. No need to allocate a protection entry on each PEB get.
2. No need to maintain balanced trees and walk them.
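A rough sketch of such a list-based protection queue (names and the queue
length are assumptions): each "tick" of the background thread releases one
slot of a small circular array of lists, so protecting a PEB is a single
list_add_tail() and no allocation or tree walking is needed.

#include <linux/list.h>

#define PROT_QUEUE_LEN 10	/* a PEB stays protected for this many ticks */

struct wl_entry_sketch {
	struct list_head u_list;	/* link in the protection queue */
	int pnum;			/* physical eraseblock number */
	int ec;				/* erase counter */
};

struct prot_queue_sketch {
	/* slots are assumed to be initialized with INIT_LIST_HEAD() */
	struct list_head pq[PROT_QUEUE_LEN];
	int pq_head;			/* slot that will be released next */
};

/* Put @e into the "tail" slot so it survives one full queue rotation */
static void prot_queue_add(struct prot_queue_sketch *q,
			   struct wl_entry_sketch *e)
{
	int tail = q->pq_head - 1;

	if (tail < 0)
		tail = PROT_QUEUE_LEN - 1;
	list_add_tail(&e->u_list, &q->pq[tail]);
}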
Signed-off-by: Xiaochuan-Xu <xiaochuan-xu@cqu.edu.cn>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This patch modifies 'struct ubi_wl_entry' and adds a union which
contains only one element so far. This is just a preparation
for further changes which will kill the protection tree and
make UBI use a list instead.
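A sketch of the resulting layout (member names are illustrative); the point is
that the RB-tree node and the future list head can share the same memory,
because an entry is never in a tree and in a list at the same time:

#include <linux/rbtree.h>

struct ubi_wl_entry_sketch {
	union {
		struct rb_node rb;	/* link in a wear-levelling RB-tree */
		/* later patches add a struct list_head here */
	} u;
	int ec;				/* erase counter */
	int pnum;			/* physical eraseblock number */
};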
Signed-off-by: Xiaochuan-Xu <xiaochuan-xu@cqu.edu.cn>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
When a PEB is moved and a write error happens, UBI switches
to R/O mode, which is wrong, because we are just copying the data
and may select a different PEB and retry. This patch
fixes the WL worker's behaviour.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Make sure the resources have not already been freed before
freeing them in the error path of the WL worker function.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
If ubi_thread() exits but kthread_should_stop() is not true,
then kthread_stop() will never return and the cleanup thread
will stay in "D" state forever.
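The standard way out of this (a sketch, not the exact fix) is to keep the
thread parked until kthread_should_stop() becomes true instead of returning
early:

#include <linux/kthread.h>
#include <linux/sched.h>

static int background_thread_sketch(void *unused)
{
	/* ... main loop decided to bail out because of fatal errors ... */

	/* Do not exit until the cleanup path has called kthread_stop() */
	while (!kthread_should_stop()) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}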
Signed-off-by: Vitaliy Gusev <vgusev@openvz.org>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
No functional changes, just tweak comments to make kernel-doc
work fine and stop complaining.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Just out of curiosity, I ran checkpatch.pl on the whole of UBI
and discovered there are quite a few stylistic issues.
Fix them.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
If bit-flips happen often, UBI prints too many messages. Lessen
the amount by only printing the messages when the PEB has been
scrubbed. Also, print torturing messages.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Hch asked not to use "unit" when referring to sub-systems, so let it be so.
Also make some other comment modifications.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
UBI already checks that @min_io_size is a power of 2 in io_init().
It is safe to use bit operations then.
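For illustration, with a power-of-2 @min_io_size the usual division/modulo
pairs collapse into shifts and masks (variable and function names assumed):

#include <linux/log2.h>

/* offset % min_io_size, valid because min_io_size is a power of 2 */
static inline int offset_in_min_io(int offset, int min_io_size)
{
	return offset & (min_io_size - 1);
}

/* offset / min_io_size, using the ilog2() shift of a power of 2 */
static inline int min_io_index(int offset, int min_io_size)
{
	return offset >> ilog2(min_io_size);
}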
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Old gcc complains:
CC drivers/mtd/ubi/wl.o
drivers/mtd/ubi/wl.c: In function 'wear_leveling_worker':
drivers/mtd/ubi/wl.c:746: warning: 'pe' may be used uninitialized in this function
CC drivers/mtd/ubi/scan.o
drivers/mtd/ubi/scan.c: In function 'ubi_scan':
drivers/mtd/ubi/scan.c:772: warning: 'ec' may be used uninitialized in this function
drivers/mtd/ubi/scan.c:772: note: 'ec' was declared here
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
The problem: NAND flashes have different amount of initial bad physical
eraseblocks (marked as bad by the manufacturer). For example, for 256MiB
Samsung OneNAND flash there might be from 0 to 40 bad initial eraseblocks,
which is about 2%. When UBI is used as the base system, one needs to know
the exact amount of good physical eraseblocks, because this number is
needed to create the UBI image which is put to the devices during
production. But this number is not known, which forces us to use the
minimum number of good physical eraseblocks. And UBI additionally
reserves some percentage of physical eraseblocks for bad block handling
(default is 1%), so we have 1-3% of PEBs reserved at the end, depending
on the amount of initial bad PEBs. But it is desired to always have
1% (or more, depending on the configuration).
Solution: this patch adds an "auto-resize" flag to the volume table.
The volume which has the "auto-resize" flag will automatically be re-sized
(enlarged) on the first UBI initialization. UBI clears the flag when
the volume is re-sized. Only one volume may have the "auto-resize" flag.
So, the production UBI image may have one volume with "auto-resize"
flag set, and its size is automatically adjusted on the first boot
of the device.
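Conceptually the on-flash change is just one flag bit in the per-volume record
of the volume table; a hedged sketch (the field and flag names below are
assumptions, not the exact UBI volume table layout):

#include <linux/types.h>
#include <asm/byteorder.h>

#define VTBL_AUTORESIZE_FLG 0x01	/* grow this volume on first attach */

struct vtbl_record_sketch {
	__be32 reserved_pebs;	/* volume size, in physical eraseblocks */
	__u8 flags;		/* VTBL_AUTORESIZE_FLG lives here */
	/* ... name, alignment, CRC, etc. ... */
} __attribute__((packed));

/* On first attach: give the volume all unused PEBs, then clear the flag */
static void autoresize_sketch(struct vtbl_record_sketch *rec, int avail_pebs)
{
	if (rec->flags & VTBL_AUTORESIZE_FLG) {
		rec->reserved_pebs =
			cpu_to_be32(be32_to_cpu(rec->reserved_pebs) + avail_pebs);
		rec->flags &= ~VTBL_AUTORESIZE_FLG;
		/* ... re-write the volume table on flash ... */
	}
}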
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Prepare the attach and detach functions to be used outside of
module initialization:
* detach function checks reference count before detaching
* it kills the background thread as well
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
This is one more step on the way to "removable" UBI devices. It
adds reference counting for UBI devices. Every time a volume on
this device is opened, the device's refcount is increased. It
is also increased if someone is reading any sysfs file of this
UBI device or of one of its volumes.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
The flush function should finish all the pending jobs. But if
somebody else is doing a job, this function should wait and let
it finish.
This patch uses an rw semaphore for synchronization - it
just looks quite convenient.
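The scheme can be sketched like this (names assumed): workers hold the
semaphore for reading while they run, and flush takes it for writing, so it
waits for whatever job is currently in flight.

#include <linux/rwsem.h>

static DECLARE_RWSEM(work_sem);

/* Called by whoever executes one pending job */
static void do_one_work_sketch(void)
{
	down_read(&work_sem);
	/* ... perform the job ... */
	up_read(&work_sem);
}

/* Called by the flush path after it has drained the job queue itself */
static void wait_for_running_work_sketch(void)
{
	/* Blocks until no job is being executed by anybody else */
	down_write(&work_sem);
	up_write(&work_sem);
}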
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
When the WL worker is moving an LEB, the volume might occasionally
go away. UBI does not handle these situations correctly.
This patch introduces a new mutex which serializes the wear-levelling
worker and the 'ubi_wl_put_peb()' function. Now, if one puts
an LEB, and its PEB is being moved, it will wait on the mutex.
And because we unmap all LEBs when removing volumes, this will make
the volume removal function wait while the LEB movement
finishes.
Below is an example of an oops which should be fixed by this patch:
Pid: 9167, comm: io_paral Not tainted (2.6.24-rc5-ubi-2.6.git #2)
EIP: 0060:[<f884a379>] EFLAGS: 00010246 CPU: 0
EIP is at prot_tree_del+0x2a/0x63 [ubi]
EAX: f39a90e0 EBX: 00000000 ECX: 00000000 EDX: 00000134
ESI: f39a90e0 EDI: f39a90e0 EBP: f2d55ddc ESP: f2d55dd4
DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Process io_paral (pid: 9167, ti=f2d54000 task=f72a8030 task.ti=f2d54000)
Stack: f39a95f8 ef6aae50 f2d55e08 f884a511 f88538e1 f884ecea 00000134 00000000
f39a9604 f39a95f0 efea8280 00000000 f39a90e0 f2d55e40 f8847261 f8850c3c
f884eaad 00000001 000000b9 00000134 00000172 000000b9 00000134 00000001
Call Trace:
[<c0105227>] show_trace_log_lvl+0x1a/0x30
[<c01052e2>] show_stack_log_lvl+0xa5/0xca
[<c01053d6>] show_registers+0xcf/0x21b
[<c0105648>] die+0x126/0x224
[<c0119a62>] do_page_fault+0x27f/0x60d
[<c037dd62>] error_code+0x72/0x78
[<f884a511>] ubi_wl_put_peb+0xf0/0x191 [ubi]
[<f8847261>] ubi_eba_unmap_leb+0xaf/0xcc [ubi]
[<f8843c21>] ubi_remove_volume+0x102/0x1e8 [ubi]
[<f8846077>] ubi_cdev_ioctl+0x22a/0x383 [ubi]
[<c017d768>] do_ioctl+0x68/0x71
[<c017d7c6>] vfs_ioctl+0x55/0x271
[<c017da15>] sys_ioctl+0x33/0x52
[<c0104152>] sysenter_past_esp+0x5f/0xa5
=======================
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Similarly to ltree_entry_slab, it makes more sense to create
and destroy the ubi_wl_entry slab on module initialization/exit.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
The task_struct->pid member is going to be deprecated, so start
using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
the kernel.
The first thing to start with is the pid, printed to dmesg - in
this case we may safely use task_pid_nr(). Besides, printks produce
more (much more) than half of all the explicit pid usage.
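The substitution itself is trivial; a small sketch:

#include <linux/kernel.h>
#include <linux/sched.h>

static void report_thread_pid_sketch(void)
{
	/* task_pid_nr() replaces direct current->pid access in printks */
	printk(KERN_INFO "background thread \"%s\" started, PID %d\n",
	       current->comm, task_pid_nr(current));
}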
[akpm@linux-foundation.org: git-drm went and changed lots of stuff]
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Cc: Dave Airlie <airlied@linux.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Similar reason as in the case of the previous patch: it causes
deadlocks if a filesystem with writeback support works on top
of UBI. So pre-allocate the needed buffers when attaching the MTD device.
We also need mutexes to protect the buffers, but they do not
cause much contention because they are used in the recovery, torture,
and WL copy routines, which are called seldom.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Use the GFP_NOFS flag when allocating memory on the I/O path, because otherwise
we may deadlock the filesystem which works on top of us. We observed
such deadlocks with UBIFS. Example:
VFS->FS takes a lock->UBI->kmalloc()->VFS writeback->FS takes the same
lock again.
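A minimal sketch of the change on an I/O-path allocation (the function name is
assumed):

#include <linux/slab.h>

static void *io_buf_alloc_sketch(size_t len)
{
	/*
	 * GFP_NOFS forbids the allocator from recursing into filesystem
	 * writeback; previously this was GFP_KERNEL, which could deadlock
	 * a filesystem running on top of UBI.
	 */
	return kmalloc(len, GFP_NOFS);
}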
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
I hit these situations and found that print messages were lacking. Add more
prints when erase problems occur.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Slab destructors were no longer supported after Christoph's
c59def9f22 change. They've been
BUGs for both slab and slub, and slob never supported them
either.
This rips out support for the dtor pointer from kmem_cache_create()
completely and fixes up every single callsite in the kernel (there were
about 224, not including the slab allocator definitions themselves,
or the documentation references).
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Do not switch to read-only mode in the case of -EINTR and some
other obvious cases. Switch to R/O mode only when we do not
know what the error is.
Reported-by: Vinit Agnihotri <vinit.agnihotri@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Kill UBI's homegrown endianness handling and replace it with
the standard kernel endianness handling.
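A sketch of the standard approach being switched to: on-media fields get
sparse-checked __be types and are accessed through cpu_to_be*()/be*_to_cpu()
(the structure below is illustrative, not the real UBI header layout):

#include <linux/types.h>
#include <asm/byteorder.h>

#define SKETCH_MAGIC 0x55424923	/* example magic, stored big-endian */

struct ec_hdr_sketch {
	__be32 magic;
	__be64 ec;		/* erase counter, big-endian on flash */
} __attribute__((packed));

static void fill_hdr_sketch(struct ec_hdr_sketch *hdr, u64 ec)
{
	hdr->magic = cpu_to_be32(SKETCH_MAGIC);
	hdr->ec = cpu_to_be64(ec);
}

static u64 read_ec_sketch(const struct ec_hdr_sketch *hdr)
{
	return be64_to_cpu(hdr->ec);
}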
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Currently, the freezer treats all tasks as freezable, except for the kernel
threads that explicitly set the PF_NOFREEZE flag for themselves. This
approach is problematic, since it requires every kernel thread to either
set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
care for the freezing of tasks at all.
It seems better to only require the kernel threads that want to or need to
be frozen to use some freezer-related code and to remove any
freezer-related code from the other (nonfreezable) kernel threads, which is
done in this patch.
The patch causes all kernel threads to be nonfreezable by default (ie. to
have PF_NOFREEZE set by default) and introduces the set_freezable()
function that should be called by the freezable kernel threads in order to
unset PF_NOFREEZE. It also makes all of the currently freezable kernel
threads call set_freezable(), so it shouldn't cause any (intentional)
change of behaviour to appear. Additionally, it updates documentation to
describe the freezing of tasks more accurately.
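For a kernel thread that does want to be frozen (such as a background worker),
the new contract looks roughly like this sketch:

#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/sched.h>

static int freezable_thread_sketch(void *unused)
{
	set_freezable();	/* opt in: clear PF_NOFREEZE for this thread */

	while (!kthread_should_stop()) {
		try_to_freeze();	/* honour a freeze request, if any */

		/* ... do one unit of work, then sleep ... */
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
	}
	return 0;
}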
[akpm@linux-foundation.org: build fixes]
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: Nigel Cunningham <nigel@nigel.suspend2.net>
Cc: Pavel Machek <pavel@ucw.cz>
Cc: Oleg Nesterov <oleg@tv-sign.ru>
Cc: Gautham R Shenoy <ego@in.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>