This comment line looks completely bogus.
It was introduced in:
commit d99383b00e
Author: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Date: Wed May 18 14:47:34 2011 +0300
UBI: change the interface of a debugging check function
Remove it.
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Both names 'total_read' and 'total_written' are actually used
as the number of bytes left to read and write.
Fix this confusion by renaming both to 'bytes_left'.
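As a rough illustration (a sketch only, not the actual driver code; names like
'total_len' and 'step' are made up here), the counter now reads naturally in the
copy loop:

	size_t bytes_left = total_len;

	while (bytes_left) {
		size_t chunk = min(bytes_left, step);
		/* ... read or write 'chunk' bytes ... */
		bytes_left -= chunk;
	}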
Signed-off-by: Ezequiel Garcia <elezegarcia@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
UBI reserves an LEB-sized buffer for various needs. We can use this buffer
while scanning, instead of allocating another one. This patch was originally
created by Jan Luebbe <jlu@pengutronix.de>, but then he dropped it and I picked
it up and tweaked it a little bit.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Use list_move_tail() instead of list_del() + list_add_tail().
dpatch engine is used to auto generate this patch.
(https://github.com/weiyj/dpatch)
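The transformation looks like this (a sketch; the list and entry names are
illustrative):

	-	list_del(&e->u.list);
	-	list_add_tail(&e->u.list, &ubi->free);
	+	list_move_tail(&e->u.list, &ubi->free);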
Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Acked-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Merge tag 'upstream-3.7-rc1-fastmap' of git://git.infradead.org/linux-ubi
Pull UBI fastmap changes from Artem Bityutskiy:
"This pull request contains the UBI fastmap support implemented by
Richard Weinberger from Linutronix. Fastmap is designed to address
UBI's slow scanning issues. Namely, it introduces a new on-flash
data-structure called "fastmap", which stores the information about
logical<->physical eraseblocks mappings. So now, to get this
information, UBI just reads the fastmap instead of doing a full scan.
More information can be found in Richard's announcement on LKML
(Subject: UBI: Fastmap request for inclusion (v19)):
http://thread.gmane.org/gmane.linux.kernel/1364922/focus=1369109
One thing I want to explicitly say is that fastmap did not have large
enough linux-next exposure. It is partially my fault - I did not
respond quickly enough. I _really_ apologize for this. But it had
good testing and is disabled by default, so I do not expect that
we'll break anything.
Fastmap is declared as experimental so far, and it is off by default.
We did declare that the on-flash format may be changed. The reason
for this is that no one has used it in real production so far, so there is
a high risk that something is missing. Besides, we do not have
user-space tools supporting fastmap so far.
Nevertheless, I suggest we merge this feature. Many people want UBI's
scanning bottleneck to be fixed and merging fastmap now should
accelerate its production use. The plan is to make it bullet-proof,
clean it up somewhat, and make it the default for UBI. I do not know
how many kernel releases it will take.
Basically, what I want to do for fastmap is something like what Linus
did for btrfs a few years ago."
* tag 'upstream-3.7-rc1-fastmap' of git://git.infradead.org/linux-ubi:
UBI: Wire-up fastmap
UBI: Add fastmap core
UBI: Add fastmap support to the WL sub-system
UBI: Add fastmap stuff to attach.c
UBI: Wire-up ->fm_sem
UBI: Add fastmap bits to build.c
UBI: Add self_check_eba()
UBI: Export next_sqnum()
UBI: Add fastmap stuff to ubi.h
UBI: Add fastmap on-flash data structures
Make fastmap known to Kconfig, UBI Makefile and MAINTAINERS.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
To make fastmap possible, the WL sub-system needs some changes,
mostly to support fastmap's pools.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Fastmap uses ->fm_sem to stop EBA changes while writing
a new fastmap.
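Roughly, the locking scheme is (a sketch, not the literal code):

	/* EBA table updates take ->fm_sem for reading ... */
	down_read(&ubi->fm_sem);
	vol->eba_tbl[lnum] = pnum;
	up_read(&ubi->fm_sem);

	/* ... while the fastmap writer takes it for writing, so no EBA
	 * change can happen while the new fastmap is being written */
	down_write(&ubi->fm_sem);
	/* collect the EBA state and write out the new fastmap */
	up_write(&ubi->fm_sem);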
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
self_check_eba() compares two ubi_attach_info objects.
Fastmap uses this function for self checks.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Fastmap needs next_sqnum(), rename it to ubi_next_sqnum()
and make it non-static.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This patch adds fastmap specific data structures to ubi.h.
It also moves struct ubi_work to ubi.h as it is now needed
by more than one C file.
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Add the on-flash data structures needed by fastmap
to ubi-media.h
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Merge tag 'upstream-3.7-rc1' of git://git.infradead.org/linux-ubi
Pull UBI changes from Artem Bityutskiy:
"The main change is the way we reserve eraseblocks for bad blocks
handling. We used to reserve 2% of the partition, but now we are more
aggressive and we reserve 2% of the entire chip, which is what
manufacturers actually specify in data sheets. We introduced an
option for users to override the default, though.
There are a couple of fixes as well, and a number of cleanups."
* tag 'upstream-3.7-rc1' of git://git.infradead.org/linux-ubi: (24 commits)
UBI: fix trivial typo 'it' => 'is'
UBI: load after mtd device drivers
UBI: print less
UBI: use pr_ helper instead of printk
UBI: comply with coding style
UBI: erase free PEB with bitflip in EC header
UBI: fix autoresize handling in R/O mode
UBI: add max_beb_per1024 to attach ioctl
UBI: allow specifying bad PEBs limit using module parameter
UBI: check max_beb_per1024 value in ubi_attach_mtd_dev
UBI: prepare for max_beb_per1024 module parameter addition
UBI: introduce MTD_PARAM_MAX_COUNT
UBI: separate bad_peb_limit in a function
arm: sam9_l9260_defconfig: correct CONFIG_MTD_UBI_BEB_LIMIT
UBI: use the whole MTD device size to get bad_peb_limit
mtd: mtdparts: introduce mtd_get_device_size
mtd: mark mtd_is_partition argument as constant
arm: sam9_l9260_defconfig: remove non-existing config option
UBI: kill CONFIG_MTD_UBI_BEB_RESERVE
UBI: limit amount of reserved eraseblocks for bad PEB handling
...
Use 'late_initcall()' in UBI to make sure it initializes after MTD drivers.
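In practice this amounts to a one-line change in build.c (sketched as a diff):

	-module_init(ubi_init);
	+late_initcall(ubi_init);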
Signed-off-by: Jiang Lu <lu.jiang@windriver.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
UBI was mistakenly using 'kfree()' instead of 'kmem_cache_free()' when
freeing "attach eraseblock" structures in vtbl.c. Thankfully, this happened
only when we were doing auto-format, so many systems were unaffected. However,
there are still many users affected.
Strangely, the system did not crash and nothing bad happened when
the SLUB memory allocator was used. However, in the case of SLOB we observed
a crash right away.
This problem was introduced in 2.6.39 by commit
"6c1e875 UBI: add slab cache for ubi_scan_leb objects"
A note for stable trees:
Because variables were renamed, this won't apply cleanly to older kernels.
Changing names like this should help:
1. ai -> si
2. aeb_slab_cache -> seb_slab_cache
3. new_aeb -> new_seb
Reported-by: Richard Genoud <richard.genoud@gmail.com>
Tested-by: Richard Genoud <richard.genoud@gmail.com>
Tested-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: stable@vger.kernel.org [v2.6.39+]
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
UBI currently prints a lot of information when it mounts a volume, which
bothers some people. Make it less chatty - print only important information
by default.
Get rid of 'dbg_msg()' macro completely.
Reported-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Without this patch, these PEBs are not scrubbed until we put data in them.
Bitflips can accumulate later and we can lose the EC header (but the VID header
should be intact and allow the data to be recovered).
Signed-off-by: Matthieu Castet <matthieu.castet@parrot.com>
Cc: stable@vger.kernel.org
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Currently UBI fails in autoresize when it is in R/O mode (e.g., because the
underlying MTD device is R/O). This patch fixes the issue - we just skip
autoresize and print a warning.
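A sketch of the intended behaviour (helper and message names approximate):

	if (ubi->autoresize_vol_id != -1) {
		if (ubi->ro_mode) {
			ubi_warn("skip auto-resize because of R/O mode");
		} else {
			err = autoresize(ubi, ubi->autoresize_vol_id);
			if (err)
				goto out_detach;
		}
	}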
Reported-by: Pali Rohár <pali.rohar@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This patch provides a possibility to set the "maximum expected number of
bad blocks per 1024 blocks" (max_beb_per1024) for each mtd device using
the UBI_IOCATT ioctl.
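Roughly, the attach request gains one new field (a sketch; the exact layout is
in ubi-user.h):

	struct ubi_attach_req {
		__s32 ubi_num;
		__s32 mtd_num;
		__s32 vid_hdr_offset;
		__s16 max_beb_per1024;	/* new: expected bad PEBs per 1024 PEBs */
		__s8 padding[10];
	};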
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This patch provides the possibility to adjust the "maximum expected number of
bad blocks per 1024 blocks" (max_beb_per1024) for each mtd device.
The majority of NAND devices have their max_beb_per1024 equal to 20, but
sometimes it's more.
Now, we can adjust that via a kernel parameter:
ubi.mtd=<name|num|path>[,<vid_hdr_offs>[,max_beb_per1024]]
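For example (illustrative values only), attaching mtd4 with the default VID
header offset and expecting up to 25 bad PEBs per 1024 PEBs:

	ubi.mtd=4,0,25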
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
max_beb_per1024 shouldn't be negative, and a 0 value will be treated as
the default value. For the upper bound, 768/1024 should be enough.
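The check boils down to something like this (a sketch; the default comes from
the Kconfig value):

	/* reject bogus values; 0 means "use the default" */
	if (max_beb_per1024 < 0 || max_beb_per1024 > 768)
		return -EINVAL;
	if (!max_beb_per1024)
		max_beb_per1024 = CONFIG_MTD_UBI_BEB_LIMIT;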
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This patch prepares the way for the addition of the max_beb_per1024 module
parameter. There's no functional change.
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Reviewed-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
No functional changes here, just to prepare for next patch.
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
On NAND flash devices, UBI reserves some physical eraseblocks (PEBs) for
bad block handling. Today, the number of reserved PEBs can only be set as a
percentage of the total number of PEBs in each MTD partition. For example, for a
NAND flash with 128KiB PEBs, 2 MTD partitions of 20MiB (mtd0) and 100MiB (mtd1),
and 2% reserved PEBs:
- the UBI device on mtd0 will have 2 PEBs reserved
- the UBI device on mtd1 will have 16 PEBs reserved
The problem with this behaviour is that NAND flash manufacturers give a
minimum number of valid blocks (NVB) during the endurance life of the
device, e.g.:
Parameter Symbol Min Max Unit Notes
--------------------------------------------------------------
Valid block number NVB 1004 1024 Blocks 1
From this number we can deduce the maximum number of bad PEBs that a device will
contain during its endurance life: a 128MiB NAND flash (1024 PEBs) will not have
more than 20 bad blocks during the flash endurance life.
But the manufacturer doesn't say where those bad blocks will appear, nor
whether they will be evenly distributed across the whole device (and I'm pretty
sure they won't be). So, according to the datasheets, we should reserve the
maximum number of bad PEBs for each UBI device (worst case scenario: 20 bad
blocks appear on the smallest MTD partition).
So this patch makes UBI use the whole MTD device size to calculate the maximum
number of expected bad eraseblocks.
The Kconfig option is now expressed per 1024 blocks, thus it can have a default
value of 20, which is *very* common for NAND devices.
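In other words, the limit is now computed from the whole device rather than
from each partition; roughly (a sketch using helpers from this series):

	device_size = mtd_get_device_size(ubi->mtd);
	device_pebs = mtd_div_by_eb(device_size, ubi->mtd);
	limit       = mult_frac(device_pebs, max_beb_per1024, 1024);
	/* e.g. a 1024-PEB chip with max_beb_per1024 = 20 gives a limit of
	 * 20 bad PEBs, no matter how the chip is partitioned */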
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
CONFIG_MTD_UBI_BEB_RESERVE and MIN_RESEVED_PEBS are no longer used,
since the amount of reserved eraseblocks for bad PEB handling is now
derived from 'ubi->bad_peb_limit' (ubi's maximum expected bad
eraseblocks).
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
The existing mechanism of reserving PEBs for bad PEB handling has two
flaws:
- It is calculated as a percentage of good PEBs instead of total PEBs.
- There's no limit on the amount of PEBs UBI reserves for future bad
eraseblock handling.
This patch changes the mechanism to overcome these flaws.
The desired level of PEBs reserved for bad PEB handling (beb_rsvd_level)
is set to the maximum expected bad eraseblocks (bad_peb_limit) minus the
existing number of bad eraseblocks (bad_peb_count).
The actual amount of PEBs reserved for bad PEB handling is usually set
to the desired level (but in some circumstances may be lower than the
desired level, e.g. when attaching to a device that has too few
available PEBs to satisfy the desired level).
If the device has too many bad PEBs (above the expected limit), then
the desired level and the actual amount of PEBs reserved
are set to zero. No PEBs will be set aside for future bad eraseblock
handling - even if some PEBs are made available (e.g. by shrinking a
volume).
If another PEB goes bad, and there are available PEBs, then the
eraseblock will be marked bad (consuming one available PEB). But if
there are no available PEBs, UBI will go into read-only mode.
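Schematically (a sketch of the new accounting, not the literal code):

	if (ubi->bad_peb_count >= ubi->bad_peb_limit) {
		/* already over the expected limit: reserve nothing */
		ubi->beb_rsvd_level = 0;
		ubi->beb_rsvd_pebs  = 0;
	} else {
		ubi->beb_rsvd_level = ubi->bad_peb_limit - ubi->bad_peb_count;
		/* may end up lower than desired if few PEBs are available */
		ubi->beb_rsvd_pebs  = min(ubi->beb_rsvd_level, ubi->avail_pebs);
	}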
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Introduce 'ubi->bad_peb_limit', which specifies an upper limit of PEBs
UBI expects to go bad. Currently, it is initialized to a fixed percentage
of total PEBs in the UBI device (configurable via CONFIG_MTD_UBI_BEB_LIMIT).
The 'bad_peb_limit' is intended to be used for calculating the amount of PEBs
UBI needs to reserve for bad eraseblock handling.
Artem: minor amendments.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
Currently, there are several locations where an attempt to reserve more
PEBs for bad PEB handling is made, with the same code being duplicated.
Harmonize it by introducing 'ubi_update_reserved()'.
Also, improve the debug message issued, making it more descriptive.
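The helper is small; roughly (a sketch):

	void ubi_update_reserved(struct ubi_device *ubi)
	{
		int need = ubi->beb_rsvd_level - ubi->beb_rsvd_pebs;

		if (need <= 0 || ubi->avail_pebs == 0)
			return;

		need = min_t(int, need, ubi->avail_pebs);
		ubi->avail_pebs -= need;
		ubi->rsvd_pebs += need;
		ubi->beb_rsvd_pebs += need;
		ubi_msg("reserved more %d PEBs for bad PEB handling", need);
	}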
Artem: amended the patch a little.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
The function name within the comment was not aligned with the actual
function name.
Signed-off-by: Shmulik Ladkani <shmulik.ladkani@gmail.com>
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
The current value (1%) is too low for real NAND devices; the huge
majority of devices have at most 2% bad blocks (SLC or MLC).
(Actually it's 20 blocks on a 1024-block device, 40 on 2048, ...)
Signed-off-by: Richard Genoud <richard.genoud@gmail.com>
Commit "e9b4cf2 UBI: fix debugfs-less systems support" fixed one
regression but introduced a different regression - the debugfs support is now always
compiled out. Root cause: IS_ENABLED() arguments should be used with the
CONFIG_* prefix.
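The fix is simply adding the prefix back (sketched as a diff):

	-	if (!IS_ENABLED(DEBUG_FS))
	+	if (!IS_ENABLED(CONFIG_DEBUG_FS))
			return 0;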
Signed-off-by: Brian Norris <computersforpeace@gmail.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Commit "62f38455 UBI: modify ubi_wl_flush function to clear work queue for a lnum"
takes the 'work_sem' semaphore in write mode for the entire loop, which is not
very good because it will block other workers for a potentially long time. We do
not need to have it in write mode - read mode is enough, and we do not need to
hold it over the entire loop. So this patch changes the locking: it takes
'work_sem' in read mode and pushes it down into the loop.
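Schematically, the locking moves from around the loop to inside it (a sketch):

	-	down_write(&ubi->work_sem);
	-	while (found) {
	-		/* pick and run matching work items */
	-	}
	-	up_write(&ubi->work_sem);
	+	while (found) {
	+		down_read(&ubi->work_sem);
	+		/* pick and run matching work items */
	+		up_read(&ubi->work_sem);
	+	}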
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Commit "aa44d1d UBI: remove Kconfig debugging option" broke UBI and it
refuses to initialize if debugfs (CONFIG_DEBUG_FS) is disabled. I incorrectly
assumed that the debugfs file creation functions would return success if debugfs
is disabled, but they actually return -ENODEV. This patch fixes the issue.
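The intended guard looks roughly like this (note the CONFIG_ prefix, which the
follow-up fix above restores):

	int ubi_debugfs_init(void)
	{
		if (!IS_ENABLED(CONFIG_DEBUG_FS))
			return 0;

		/* ... create the debugfs directory and files as before ... */
	}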
Reported-by: Paul Parsons <lost.distance@yahoo.com>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Tested-by: Paul Parsons <lost.distance@yahoo.com>
This patch modifies ubi_wl_flush to force the erasure of
particular volume id / logical eraseblock number pairs. Previous functionality
is preserved when passing UBI_ALL for both values. The locations where ubi_wl_flush
was called are changed appropriately: ubi_leb_erase only flushes for the
erased LEB, and ubi_create_volume only forces flushing for its volume id.
External code can call this new feature via the new function ubi_flush() added
to kapi.c, which simply passes through to ubi_wl_flush().
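The resulting interfaces look roughly like this (a sketch of the signatures):

	/* wl.c: flush only work matching the given volume/LEB;
	 * UBI_ALL for both keeps the old "flush everything" behaviour */
	int ubi_wl_flush(struct ubi_device *ubi, int vol_id, int lnum);

	/* kapi.c: thin wrapper exported to UBI users such as UBIFS */
	int ubi_flush(int ubi_num, int vol_id, int lnum);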
This was tested by disabling the call to do_work in the UBI background thread,
so work items remain queued unless explicitly removed. UBIFS was
changed to call ubifs_leb_change 50 times for four different LEBs. Then the
new function was called to clear the queue: passing wrong volume ids / lnums,
then correct ones, and finally UBI_ALL for both to ensure everything was
cleared. The work queue was dumped each time and the selective removal
of the particular LEB numbers was observed. Extra checks were enabled and
ubifs's integck was also run. Finally, the drive was repeatedly filled and
emptied to ensure that the queue was cleared normally.
Artem: amended the patch.
Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Joel will use it in his 'ubi_flush()' extension to specify all eraseblocks.
Also amend the comment for UBI_UNKNOWN - it is now used beyond the attach
info structure.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This is part of a multipart patch to allow UBI to force the erasure of
particular logical eraseblock numbers. In this patch, the volume id and LEB
number are added to the ubi_work data structure, and both are also passed as
parameters to schedule_erase() to set them appropriately. Whenever ubi_wl_put_peb
is called, the lnum is also passed so it can be forwarded to schedule_erase().
Later, a new ubi_sync_lnum will be added to immediately execute all work related
to that lnum.
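A sketch of the extended work item (field set approximate):

	struct ubi_work {
		struct list_head list;
		int (*func)(struct ubi_device *ubi, struct ubi_work *wrk, int cancel);
		/* the fields below are only relevant to erase works */
		struct ubi_wl_entry *e;
		int vol_id;	/* new: volume the LEB belonged to */
		int lnum;	/* new: logical eraseblock number */
		int torture;
	};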
This was tested by outputting the vol_id and lnum during the schedule of
erasure. The ubi thread was disabled and two ubifs file systems on separate
partitions repeatedly changed a small number of LEBs. The ubi module was re-added,
and all the erased LEBs, corresponding to the volumes, were added to the
schedule erase queue.
Artem: minor tweaks
Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This patch adds the volume id to struct ubi_ainf_peb when scanning the LEBs at
startup. PEBs added to the erase queue will now know their original LEB number
and volume id, if available; both will be -1 otherwise (for instance, if the VID
header is unreadable).
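A sketch of the change (remaining fields unchanged and omitted):

	struct ubi_ainf_peb {
		int ec;
		int pnum;
		int vol_id;	/* new: volume this PEB belonged to, or -1 (unknown) */
		int lnum;
		/* ... */
	};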
This was tested by creating a UBI device with 3 volumes and disabling the
ubi_thread's do_work functionality. The different UBI volumes were formatted
to UBIFS and had files created and erased. The UBI module was reloaded and
the list of LEBs added to the erase list was printed, confirming that the
volume ids and LEB numbers were appropriate.
Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Explicitly provide the first internal volume ID value in the comment for
UBI_INTERNAL_VOL_START. This allows developers who, while adding features
related to volume ids, observe unexpectedly large volume ids to grep
for the observed value in the source code and immediately find out that it is
expected behaviour.
Signed-off-by: Joel Reardon <reardonj@inf.ethz.ch>
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Finally, rename the scan.c file. Now adding fastmap support won't look that
hacky anymore.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
This file is small and it does not make sense to have it separate from where
everything else lives, so merge it with ubi.h.
Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>