path: root/drivers/mtd/ubi/fastmap-wl.c
Age  Commit message  Author  Files  Lines
2023-10-29  ubi: fastmap: Fix lapsed wear leveling for first 64 PEBs  Zhihao Cheng  1  -1/+1
The anchor PEB must be picked from the first 64 PEBs, so these PEBs can end up with erase counters much greater than those of other PEBs, especially when free space is nearly running out. ubi_update_fastmap() is called whenever the pool or wl_pool becomes empty, and the old anchor PEB is erased on every fastmap update. Given an UBI device with N PEBs whose free space is nearly exhausted, the pool is refilled with only 1 PEB each time ubi_update_fastmap() is invoked, so t = N / POOL_SIZE(=1) / 64 means that in the worst case the erase counters of the first 64 PEBs are, in theory, t times greater than those of the other PEBs.

After running fsstress for 24h, the erase counter statistics for two UBI devices are as follows (CONFIG_MTD_UBI_WL_THRESHOLD=128):

Device A (1024 PEBs, pool=50, wl_pool=25):
=========================================================
  from .. to           count      min      avg      max
---------------------------------------------------------
      0 ..      9:         0        0        0        0
     10 ..     99:         0        0        0        0
    100 ..    999:         0        0        0        0
   1000 ..   9999:         0        0        0        0
  10000 ..  99999:       960    29224    29282    29362
 100000 ..    inf:        64   117897   117934   117940
---------------------------------------------------------
 Total           :      1024    29224    34822   117940

Device B (8192 PEBs, pool=256, wl_pool=128):
=========================================================
  from .. to           count      min      avg      max
---------------------------------------------------------
      0 ..      9:         0        0        0        0
     10 ..     99:         0        0        0        0
    100 ..    999:         0        0        0        0
   1000 ..   9999:      8128     2253     2321     2387
  10000 ..  99999:        64    35387    35387    35388
 100000 ..    inf:         0        0        0        0
---------------------------------------------------------
 Total           :      8192     2253     2579    35388

The key point is to reduce the fastmap update frequency by enlarging POOL_SIZE, so let UBI reserve ubi->fm_pool.max_size PEBs during attach. Then POOL_SIZE becomes ubi->fm_pool.max_size/2 even when free space is running out. Given an UBI device with 8192 PEBs (16384/8192/4096 PEBs are common for large-capacity flash), t = 8192/128/64 = 1. A fastmap update happens when either the wl_pool or the pool is empty, so setting fm_pool_rsv_cnt to ubi->fm_pool.max_size keeps the wl_pool completely filled.

After pool reservation, running fsstress for 24h:

Device A (1024 PEBs, pool=50, wl_pool=25):
=========================================================
  from .. to           count      min      avg      max
---------------------------------------------------------
      0 ..      9:         0        0        0        0
     10 ..     99:         0        0        0        0
    100 ..    999:         0        0        0        0
   1000 ..   9999:         0        0        0        0
  10000 ..  99999:      1024    33801    33997    34056
 100000 ..    inf:         0        0        0        0
---------------------------------------------------------
 Total           :      1024    33801    33997    34056

Device B (8192 PEBs, pool=256, wl_pool=128):
=========================================================
  from .. to           count      min      avg      max
---------------------------------------------------------
      0 ..      9:         0        0        0        0
     10 ..     99:         0        0        0        0
    100 ..    999:         0        0        0        0
   1000 ..   9999:      8192     2205     2397     2460
  10000 ..  99999:         0        0        0        0
 100000 ..    inf:         0        0        0        0
---------------------------------------------------------
 Total           :      8192     2205     2397     2460

The difference in erase counters between the first 64 PEBs and the others now stays under WL_FREE_MAX_DIFF (2*UBI_WL_THRESHOLD = 2*128 = 256):
  Device A: 34056 - 33801 = 255
  Device B:  2460 -  2205 = 255

The next patch adds a switch to control whether UBI needs to reserve PEBs for filling the pool.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
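A minimal standalone C sketch of the arithmetic above (not kernel code; the constants are the example values from this message): it computes the worst-case wear ratio t = N / POOL_SIZE / 64 and checks the measured spread against WL_FREE_MAX_DIFF = 2 * CONFIG_MTD_UBI_WL_THRESHOLD.

    #include <stdio.h>

    /* Illustrative model only; values mirror "Device B" above. */
    #define UBI_WL_THRESHOLD   128                     /* CONFIG_MTD_UBI_WL_THRESHOLD */
    #define WL_FREE_MAX_DIFF   (2 * UBI_WL_THRESHOLD)  /* 256 */

    int main(void)
    {
        int pebs = 8192, pool_size = 128;      /* device size and wl_pool size */
        int t = pebs / pool_size / 64;         /* worst-case extra wear of first 64 PEBs */
        int max_ec = 2460, min_ec = 2205;      /* erase counters measured after the fix */

        printf("t = %d\n", t);                 /* prints 1 */
        printf("spread = %d, limit = %d -> %s\n",
               max_ec - min_ec, WL_FREE_MAX_DIFF,
               (max_ec - min_ec) < WL_FREE_MAX_DIFF ? "balanced" : "unbalanced");
        return 0;
    }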
2023-10-29  ubi: fastmap: Get wl PEB even ec beyonds the 'max' if free PEBs are run out  Zhihao Cheng  1  -10/+34
This is part 2 of fixing the cyclic reuse of single fastmap data PEBs. Consider the situation where there are four free PEBs for fm_anchor, pool, wl_pool and the fastmap data PEB, with erase counters 100, 100, 100 and 5096 (ubi->beb_rsvd_pebs is 0). The PEB with erase counter 5096 is always picked for fastmap data according to the implementation of find_wl_entry(); since the fastmap data PEB is never scheduled for wear leveling, we finally end up with two PEBs (the fm data ones) whose erase counters are far greater than those of the other PEBs.

Get a wl PEB even if its erase counter exceeds the 'max' in find_wl_entry() when free PEBs run out after filling the pools and fm data. The PEB with the biggest erase counter is then taken as the wl PEB, so it can be scheduled for wear leveling.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
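A hedged sketch of the selection rule described above (illustrative C over a plain array, not the actual rbtree-based find_wl_entry()): normally pick the largest erase counter still inside the window above the lowest one, but once free PEBs have run out, accept an entry beyond that 'max' so the high-ec PEB finally enters wear leveling.

    #include <stddef.h>

    struct wl_entry { int pnum; int ec; };

    static struct wl_entry *pick_free(struct wl_entry *e, size_t n,
                                      int diff, int free_pebs_run_out)
    {
        struct wl_entry *lowest = NULL, *in_window = NULL, *biggest = NULL;
        size_t i;

        for (i = 0; i < n; i++) {
            if (!lowest || e[i].ec < lowest->ec)
                lowest = &e[i];
            if (!biggest || e[i].ec > biggest->ec)
                biggest = &e[i];
        }
        if (!lowest)
            return NULL;

        /* Normal case: largest erase counter still below lowest->ec + diff. */
        for (i = 0; i < n; i++)
            if (e[i].ec < lowest->ec + diff &&
                (!in_window || e[i].ec > in_window->ec))
                in_window = &e[i];

        /* The fix: with free PEBs run out, take the biggest ec even if it
         * exceeds the 'max', so it can later be scheduled for wl. */
        if (free_pebs_run_out)
            return biggest;
        return in_window;
    }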
2023-10-28  ubi: fastmap: may_reserve_for_fm: Don't reserve PEB if fm_anchor exists  Zhihao Cheng  1  -1/+1
This is part 1 of fixing the cyclic reuse of single fastmap data PEBs. After running fsstress on UBIFS for a while, UBI (16384 blocks, fastmap takes 2 blocks) has one erase block (PEB 8031) with an erase counter far greater than any other PEB's:

=========================================================
  from .. to           count      min      avg      max
---------------------------------------------------------
      0 ..      9:         0        0        0        0
     10 ..     99:       532       84       92       99
    100 ..    999:     15787      100      147      229
   1000 ..   9999:        64     4699     4765     4826
  10000 ..  99999:         0        0        0        0
 100000 ..    inf:         1   272935   272935   272935
---------------------------------------------------------
 Total           :     16384       84      180   272935

Unlike fm_anchor, there are no candidate PEBs for the fastmap data area, so old fastmap data PEBs are reused after all free PEBs have been filled into the pool/wl_pool:

  ubi_update_fastmap
    for (i = 1; i < new_fm->used_blocks; i++)
      erase_block(ubi, old_fm->e[i]->pnum)
      new_fm->e[i] = old_fm->e[i]

According to the wear leveling algorithm, UBI selects one small-erase-counter PEB from ubi->used and one big-erase-counter PEB from the wl_pool; the reused fastmap data PEB is in neither of these trees. UBI won't schedule this PEB for wear leveling even if it is in ubi->used, because the wl algorithm expects used PEBs to have small erase counters.

Don't reserve a PEB for fastmap in may_reserve_for_fm() if fm_anchor already exists. Otherwise, when UBI is running out of free PEBs, the only free PEB (pnum < 64) would be skipped and fastmap data would keep being written to the same old PEB.

Fixes: dbb7d2a88d2a ("UBI: Add fastmap core")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
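A sketch of the changed decision, with assumed names (this models the policy, not the kernel's may_reserve_for_fm() itself): a low-numbered free PEB is held back for a future anchor only while no anchor exists yet, so the last free PEB is no longer skipped.

    #include <stdbool.h>

    #define UBI_FM_MAX_START 64   /* anchor candidates live in the first 64 PEBs */

    struct fm_state {
        bool fm_anchor_held;       /* a fastmap anchor PEB is already reserved */
    };

    /* Return true if this free PEB should be kept back for a future anchor. */
    static bool hold_back_for_fm_anchor(const struct fm_state *s, int pnum)
    {
        if (s->fm_anchor_held)
            return false;          /* part 1 of the fix: nothing to reserve */
        return pnum < UBI_FM_MAX_START;
    }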
2023-10-28  ubi: fastmap: Remove unneeded break condition while filling pools  Zhihao Cheng  1  -3/+2
Change the pool filling stop condition. Commit d09e9a2bddba ("ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty") reserved fastmap data PEBs after filling 1 PEB into the wl_pool. Now that wait_free_pebs_for_pool() ensures enough free PEBs before the pools are filled, there will still be at least 1 PEB in the pool and 1 PEB in the wl_pool after ubi_refill_pools().

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2023-10-28  ubi: fastmap: Wait until there are enough free PEBs before filling pools  Zhihao Cheng  1  -3/+50
Wait until there are enough free PEBs before filling the pool/wl_pool; sometimes the erase_worker is not scheduled in time, which causes two problems:
 A. Few PEBs get filled into the pool, which makes ubi_update_fastmap() be called frequently and leads to the first 64 PEBs being erased far more often than the other PEBs. Waiting for free PEBs before filling the pool therefore reduces the fastmap update frequency and prolongs flash service life.
 B. When space is nearly running out, ubi_refill_pools() cannot make sure the pool and wl_pool are filled with free PEBs, because the erase_worker is delayed.
After this patch is applied, there are guaranteed to be free PEBs in the pool after one call of ubi_update_fastmap().

Besides, this patch is a preparation for fixing the large erase counter of the fastmap data block and the lapsed wear leveling of the first 64 PEBs.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
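A userspace model of the waiting step, using a pthread condition variable and made-up names (the kernel uses its own wait and work-queue primitives; only the idea is shown): block until the erase workers have produced enough free PEBs, then refill the pools.

    #include <pthread.h>

    struct pools_model {
        pthread_mutex_t lock;
        pthread_cond_t  more_free;
        int free_count;            /* raised by erase workers */
        int wanted;                /* PEBs needed for pool + wl_pool */
    };

    static void wait_free_pebs_for_pool_model(struct pools_model *m)
    {
        pthread_mutex_lock(&m->lock);
        while (m->free_count < m->wanted)          /* wait for erase workers */
            pthread_cond_wait(&m->more_free, &m->lock);
        /* ... now refill pool and wl_pool from the free PEBs ... */
        pthread_mutex_unlock(&m->lock);
    }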
2023-10-28  ubi: fastmap: Use free pebs reserved for bad block handling  Zhihao Cheng  1  -11/+5
If new bad PEBs occur, UBI first consumes ubi->beb_rsvd_pebs, then ubi->avail_pebs, and finally becomes read-only when both are 0; the number of PEBs available for user volumes is therefore not affected. Besides, UBI keeps ubi->beb_rsvd_pebs free PEBs in reserve while filling the wl_pool or getting free PEBs, but ubi->avail_pebs is not reserved. So ubi->beb_rsvd_pebs and ubi->avail_pebs have nothing to do with the usage of free PEBs: UBI can use all free PEBs.

Commit 78d6d497a648 ("UBI: Move fastmap specific functions out of wl.c") already removed the beb_rsvd_pebs check while filling the pool. Now, don't reserve ubi->beb_rsvd_pebs while filling the wl_pool either. This fills more PEBs into the pool and also reduces the fastmap update frequency. Also remove the beb_rsvd_pebs check in ubi_wl_get_fm_peb().

Link: https://bugzilla.kernel.org/show_bug.cgi?id=217787
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2023-02-02  ubi: fastmap: Fix missed fm_anchor PEB in wear-leveling after disabling fastmap  Zhihao Cheng  1  -5/+7
After disabling fastmap (ubi->fm_disabled = 1), the fastmap is no longer updated and the fm_anchor PEB is never scheduled for erasing. Besides, the fm_anchor PEB may have the smallest erase count, yet it does not participate in wear leveling, so the difference in erase count between the fm_anchor PEB and the other PEBs keeps growing.

In which situation can fastmap be disabled? Initially we have an UBI image with fastmap. The image is then attached without the module parameter 'fm_autoconvert'; UBI falls back to full-scanning mode in some attach (e.g. because of a bad fastmap caused by a power cut), and fastmap stays disabled from then on.

Fix it by not getting an fm_anchor in ubi_refill_pools() if fastmap is disabled. Fetch a reproducer in [Link].

Link: https://bugzilla.kernel.org/show_bug.cgi?id=216341
Fixes: 4b68bf9a69d22d ("ubi: Select fastmap anchor PEBs considering ...")
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
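A sketch of the fix under assumed names (not the kernel function): only claim a fresh anchor PEB while fastmap is still enabled; with fastmap disabled the anchor is never rewritten or erased, so pinning a PEB here would exclude it from wear leveling forever.

    struct wl_entry { int pnum; int ec; };

    struct ubi_model {
        int fm_disabled;               /* fastmap turned off after a failed attach */
        struct wl_entry *fm_anchor;    /* reserved anchor PEB, or NULL */
    };

    /* Stub standing in for the real anchor selection; illustration only. */
    static struct wl_entry *claim_anchor_peb(struct ubi_model *ubi)
    {
        (void)ubi;
        return 0;
    }

    static void refill_pools_model(struct ubi_model *ubi)
    {
        if (!ubi->fm_disabled && !ubi->fm_anchor)
            ubi->fm_anchor = claim_anchor_peb(ubi);   /* skip when fm is disabled */
        /* ... then refill pool and wl_pool from the free tree ... */
    }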
2022-05-27  ubi: fastmap: Check wl_pool for free peb before wear leveling  Zhihao Cheng  1  -0/+52
UBI fetches a free PEB from the wl_pool during wear leveling, so UBI should check whether the wl_pool is empty before wear leveling. Otherwise, UBI misses wear leveling chances when free PEBs run out.

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2022-05-27  ubi: fastmap: Fix high cpu usage of ubi_bgt by making sure wl_pool not empty  Zhihao Cheng  1  -23/+46
There are at least 6 PEBs reserved on a UBI device:
 1. EBA_RESERVED_PEBS[1]
 2. WL_RESERVED_PEBS[1]
 3. UBI_LAYOUT_VOLUME_EBS[2]
 4. MIN_FASTMAP_RESERVED_PEBS[2]

When all UBI volumes take all their PEBs, there are 3 free PEBs left (EBA_RESERVED_PEBS + WL_RESERVED_PEBS + MIN_FASTMAP_RESERVED_PEBS - MIN_FASTMAP_TAKEN_PEBS[1]). Since commit f9c34bb529975fe ("ubi: Fix producing anchor PEBs") and commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") were applied, there is only 1 free PEB (3 - FASTMAP_ANCHOR_PEBS[1] - FASTMAP_NEXT_ANCHOR_PEBS[1]) left to fill the pool and wl_pool; after filling the pool, the wl_pool is always empty. So UBI can get stuck in an infinite loop:

  ubi_thread                  system_wq
  wear_leveling_worker
  <-------------------------------------------------------------+
    get_peb_for_wl                                               |
    // fm_wl_pool, used = size = 0                               |
    schedule_work(&ubi->fm_work)                                 |
                              update_fastmap_work_fn             |
                                ubi_update_fastmap               |
                                  ubi_refill_pools               |
                                  // ubi->free_count -           |
                                  //     ubi->beb_rsvd_pebs < 5  |
                                  // wl_pool is not filled with  |
                                  //     any PEBs                |
                                  schedule_erase(old_fm_anchor)  |
                                  ubi_ensure_anchor_pebs         |
                                    __schedule_ubi_work(         |
                                        wear_leveling_worker)    |
                              __erase_worker                     |
                                ensure_wear_leveling             |
                                  __schedule_ubi_work(           |
                                      wear_leveling_worker) -----+

which causes high CPU usage of ubi_bgt:

  top - 12:10:42 up 5 min,  2 users,  load average: 1.76, 0.68, 0.27
  Tasks: 123 total,   3 running,  54 sleeping,   0 stopped,   0 zombie
    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   1589 root      20   0       0      0      0 R  45.0   0.0   0:38.86 ubi_bgt0d
    319 root      20   0       0      0      0 I  15.2   0.0   0:15.29 kworker/0:3-eve
    371 root      20   0       0      0      0 I  14.9   0.0   0:12.85 kworker/3:3-eve
     20 root      20   0       0      0      0 I  11.3   0.0   0:05.33 kworker/1:0-eve
    202 root      20   0       0      0      0 I  11.3   0.0   0:04.93 kworker/2:3-eve

Commit 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs considering wear level rules") made three key changes:
 1) Choose the fastmap anchor when the most free PEBs are available.
 2) Enable anchor moves within the anchor area again, as this is useful for distributing wear.
 3) Introduce a candidate fm anchor and check this PEB's erase count during wear leveling. If the wear leveling limit is exceeded, use the used anchor-area PEB with the lowest erase count to replace it.

The anchor candidate can be removed; we can check the fm_anchor PEB's erase count during wear leveling instead. Fix it by:
 1) Removing 'fm_next_anchor' and checking 'fm_anchor' during wear leveling.
 2) Preferentially filling one free PEB into fm_wl_pool when ubi->free_count > ubi->beb_rsvd_pebs, and only then trying to reserve enough free count for the fastmap non-anchor PEBs.
Then there is at least 1 PEB in the pool and 1 PEB in the wl_pool after calling ubi_refill_pools() once all erase works are done.

Fetch a reproducer in [Link].

Fixes: 4b68bf9a69d22dd ("ubi: Select fastmap anchor PEBs ... rules")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=215407
Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
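The PEB budget described above, worked through in a small standalone C program (arithmetic only, with the constants named in this message; not kernel code):

    #include <stdio.h>

    int main(void)
    {
        /* Reservations named in the message above. */
        int eba_rsvd = 1, wl_rsvd = 1, layout_vol = 2, min_fm_rsvd = 2;
        int min_fm_taken = 1;                  /* fastmap PEBs actually in use */
        int fm_anchor = 1, fm_next_anchor = 1;

        int reserved = eba_rsvd + wl_rsvd + layout_vol + min_fm_rsvd;          /* 6 */
        int free_when_full = eba_rsvd + wl_rsvd + min_fm_rsvd - min_fm_taken;  /* 3 */
        int left_for_pools = free_when_full - fm_anchor - fm_next_anchor;      /* 1 */

        printf("reserved=%d free=%d left for pools=%d\n",
               reserved, free_when_full, left_for_pools);
        return 0;
    }

With only one PEB left for both pools, the wl_pool stays empty after every refill, which is exactly what keeps the loop in the diagram spinning.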
2020-08-03  ubi: fastmap: Free fastmap next anchor peb during detach  Zhihao Cheng  1  -0/+5
The ubi_wl_entry related to the fm_next_anchor PEB is not freed during detach, which causes a memory leak. Don't forget to release the fm_next_anchor PEB while detaching UBI from MTD when CONFIG_MTD_UBI_FASTMAP is enabled.

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Fixes: 4b68bf9a69d22d ("ubi: Select fastmap anchor PEBs considering...")
Signed-off-by: Richard Weinberger <richard@nod.at>
2020-06-02  ubi: Select fastmap anchor PEBs considering wear level rules  Arne Edholm  1  -15/+24
There is a risk that the fastmap anchor PEB alternates between just two PEBs: the current anchor and the previous anchor that was just deleted. As the fastmap pools get first pick of the free PEBs, the pools may leave no free PEBs to be selected as the new anchor, resulting in the two PEBs alternating. If the anchor PEBs get a high erase count, those PEBs will no longer be used by the pools but remain in ubi->free, further increasing the likelihood that they are picked as anchors. Getting stuck continuously using only a couple of PEBs results in uneven wear, eventually leading to failure.

To fix this:
 - Choose the fastmap anchor when the most free PEBs are available. This is during rebuilding of the fastmap pools, after the unused pool PEBs are added to ubi->free but before the pools are populated again from the free PEBs. Also reserve the second best PEB as a candidate for the next time the fastmap anchor is updated. If a better PEB is found at the next fastmap update, the candidate is made available for building the pools.
 - Enable anchor moves within the anchor area again, as this is useful for distributing wear.
 - The anchor candidate for the next fastmap update is the most suited free PEB. Check this PEB's erase count during wear leveling. If the wear leveling limit is exceeded, the PEB is considered unsuitable for now. As all other unused anchor-area PEBs should be even worse, free up the used anchor-area PEB with the lowest erase count.

Signed-off-by: Arne Edholm <arne.edholm@axis.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
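A plausible model of the selection step (illustrative only, not the exact kernel code): while ubi->free is at its fullest, take the lowest-erase-count PEB from the anchor area as the new anchor and keep the second best as the candidate for the next fastmap update.

    #include <stddef.h>

    #define UBI_FM_MAX_START 64   /* anchor area: the first 64 PEBs */

    struct wl_entry { int pnum; int ec; };

    static void pick_anchor_and_candidate(struct wl_entry *free_pebs, size_t n,
                                          struct wl_entry **anchor,
                                          struct wl_entry **candidate)
    {
        size_t i;

        *anchor = *candidate = NULL;
        for (i = 0; i < n; i++) {
            if (free_pebs[i].pnum >= UBI_FM_MAX_START)
                continue;                          /* anchor must be low-numbered */
            if (!*anchor || free_pebs[i].ec < (*anchor)->ec) {
                *candidate = *anchor;              /* old best becomes the candidate */
                *anchor = &free_pebs[i];
            } else if (!*candidate || free_pebs[i].ec < (*candidate)->ec) {
                *candidate = &free_pebs[i];
            }
        }
    }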
2020-03-31  ubi: fastmap: Free unused fastmap anchor peb during detach  Hou Tao  1  -2/+13
When CONFIG_MTD_UBI_FASTMAP is enabled, fm_anchor is assigned a free PEB during ubi_wl_init() or ubi_update_fastmap(). However, if fastmap is not used or is disabled on the MTD device, the ubi_wl_entry related to that PEB is not freed during detach. Fix it by freeing the unused fastmap anchor during detach.

Fixes: f9c34bb52997 ("ubi: Fix producing anchor PEBs")
Reported-by: syzbot+f317896aae32eb281a58@syzkaller.appspotmail.com
Reviewed-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
2019-11-18  ubi: Fix producing anchor PEBs  Sascha Hauer  1  -13/+18
When a new fastmap is about to be written, UBI must make sure it has a free block available for the fastmap anchor. For this, ubi_update_fastmap() calls ubi_ensure_anchor_pebs(). This stopped working with 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()"): since that commit the wear leveling code is blocked and can no longer produce free PEBs. UBI then, more often than not, falls back to writing the new fastmap anchor to the same block it was already on, which means the same erase block gets erased on every fastmap write and wears out quite fast.

As the locking prevents us from producing the anchor PEB when we actually need it, this patch changes the strategy for creating the anchor PEB. We no longer create it on demand right before writing a fastmap, but instead create an anchor PEB right after a fastmap has been written. This gives us enough time to produce a new anchor PEB before it is needed. To make sure we have an anchor PEB for the very first fastmap write, we call ubi_ensure_anchor_pebs() during initialisation as well.

Fixes: 2e8f08deabbc ("ubi: Fix races around ubi_refill_pools()")
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2019-09-15  ubi: ubi_wl_get_peb: Increase the number of attempts while getting PEB  Zhihao Cheng  1  -3/+3
Running the stress test io_paral (a pressure UBI test in mtd-utils) on a UBI device with few PEBs (fastmap enabled) may cause ENOSPC errors and make the UBI device read-only, even though there are still free PEBs on the device. This problem can easily be reproduced by performing the following steps on a 2-core machine:

  $ modprobe nandsim first_id_byte=0x20 second_id_byte=0x33 parts=80
  $ modprobe ubi mtd="0,0" fm_autoconvert
  $ ./io_paral /dev/ubi0

We may see the following output:

  [io_paral] update_volume():108: failed to write 380 bytes at offset 95920 of volume 2
  [io_paral] update_volume():109: update: 97088 bytes
  [io_paral] write_thread():227: function pwrite() failed with error 28 (No space left on device)
  [io_paral] write_thread():229: cannot write 15872 bytes to offs 31744, wrote -1

and in dmesg:

  ubi0 error: ubi_wl_get_peb [ubi]: Unable to get a free PEB from user WL pool
  ubi0 warning: ubi_eba_write_leb [ubi]: switch to read-only mode
  CPU: 0 PID: 2027 Comm: io_paral Not tainted 5.3.0-rc2-00001-g5986cd0 #9
  ubi0 warning: try_write_vid_and_data [ubi]: failed to write VID header to LEB 2:5, PEB 18
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-0-ga698c8995f-prebuilt.qemu.org 04/01/2014
  Call Trace:
   dump_stack+0x85/0xba
   ubi_eba_write_leb+0xa1e/0xa40 [ubi]
   vol_cdev_write+0x307/0x520 [ubi]
   vfs_write+0xfa/0x280
   ksys_pwrite64+0xc5/0xe0
   __x64_sys_pwrite64+0x22/0x30
   do_syscall_64+0xbf/0x440

In ubi_wl_get_peb(), the operation of filling the pool with free PEBs (ubi_update_fastmap) and fetching a free PEB from the pool is not atomic. After thread A fills the pool with free PEBs, the free PEBs may be taken away by thread B. When thread A checks the expression again, the condition is still unsatisfied, yet there may still be free PEBs on UBI that could be filled into the pool. This patch increases the number of attempts to obtain a PEB. An extreme case (no free PEBs left after creating the test volumes) has been tested 100 times on different types of machines; the biggest numbers of attempts observed are:

            x86_64   arm64
  2-core       4       4
  4-core       8       4
  8-core       4       4

Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
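A sketch of the retry policy (names and the retry bound are assumptions, not the kernel code): refilling the pool and taking a PEB out of it are not atomic, so a concurrent writer may drain the pool in between; giving up after a single retry returns -ENOSPC while free PEBs still exist, whereas a bounded retry loop absorbs the race.

    #include <errno.h>

    struct pool_model { int used; int size; int pebs[256]; };

    /* Stub for illustration; in the kernel this is ubi_update_fastmap()
     * refilling the pools from the free PEBs. */
    static void refill_pool_model(struct pool_model *p)
    {
        (void)p;
    }

    static int get_peb_model(struct pool_model *p)
    {
        int retries = 8;    /* the measurements above saw at most 8 attempts */

        while (retries--) {
            if (p->used < p->size)
                return p->pebs[p->used++];
            refill_pool_model(p);   /* may race with other getters */
        }
        return -ENOSPC;             /* only now fall back to read-only behaviour */
    }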
2019-06-05  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 286  Thomas Gleixner  1  -10/+1
Based on 1 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license as published by
  the free software foundation version 2 this program is distributed
  in the hope that it will be useful but without any warranty without
  even the implied warranty of merchantability or fitness for a
  particular purpose see the gnu general public license for more
  details

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 97 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190529141901.025053186@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2018-04-05  ubi: fastmap: Don't flush fastmap work on detach  Richard Weinberger  1  -1/+0
At this point UBI volumes have already been free()'ed and fastmap can no longer access these data structures.

Reported-by: Martin Townsend <mtownsend1973@gmail.com>
Fixes: 74cdaf24004a ("UBI: Fastmap: Fix memory leaks while closing the WL sub-system")
Cc: stable@vger.kernel.org
Signed-off-by: Richard Weinberger <richard@nod.at>
2018-01-18  ubi: Fastmap: Fix typo  Sascha Hauer  1  -1/+1
Fix misspelling of 'available' in a function name.

Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
2016-10-02  ubi: Fix races around ubi_refill_pools()  Richard Weinberger  1  -2/+4
When writing a new Fastmap, the first thing that happens is refilling the pools in memory. At this stage it is possible that new PEBs from the new pools already get claimed and written with data. If this happens before the new Fastmap data structure hits the flash and we face a power cut, the freshly written PEB will not be scanned and will go unnoticed.

Solve the issue by locking the pools until the Fastmap is written.

Cc: <stable@vger.kernel.org>
Fixes: dbb7d2a88d ("UBI: Add fastmap core")
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-10-04  ubi: fastmap: Implement produce_free_peb()  Richard Weinberger  1  -0/+29
If fastmap requests a free PEB for a pool while UBI is busy erasing PEBs, we need a function to wait for one. We can reuse produce_free_peb() from the non-fastmap WL code, but with different locking semantics.

Cc: stable@vger.kernel.org # 4.1.x-
Reported-and-tested-by: Jörg Krause <joerg.krause@embedded.rocks>
Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Remove is_fm_block()  Richard Weinberger  1  -19/+0
This function was added to fastmap at a very early stage to have paranoid assertions. With the current fastmap implementation this assert will never trigger, as fastmap PEBs are not seen by the WL sub-system. Remove it to save us some CPU cycles.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Add blank line after declarations  Richard Weinberger  1  -0/+1
Another checkpatch complaint:

  WARNING: Missing a blank line after declarations

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Remove else after return.  Richard Weinberger  1  -3/+3
checkpatch.pl complains:

  WARNING: else is not generally useful after a break or return

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Fastmap: Introduce may_reserve_for_fm()  Richard Weinberger  1  -0/+19
...and kill another #ifdef.

Signed-off-by: Richard Weinberger <richard@nod.at>
2015-03-27  UBI: Move fastmap specific functions out of wl.c  Richard Weinberger  1  -0/+361
Fastmap is tightly connected to the WL sub-system; many fastmap-specific functions live in wl.c. To get rid of most #ifdefs in wl.c, move these functions into a new file and include it into wl.c.

Signed-off-by: Richard Weinberger <richard@nod.at>