author     Javier González <jg@lightnvm.io>           2017-06-26 12:57:29 +0300
committer  Jens Axboe <axboe@kernel.dk>               2017-06-27 01:27:39 +0300
commit     588726d3ec68b66be2e2881d2b85060ff383078a (patch)
tree       3e0775b496dea2ea0cb676280e924df027400df5 /drivers/lightnvm/pblk-map.c
parent     ef5764946b1314e0aa1ab261493de6b9aa482ff9 (diff)
lightnvm: pblk: fail gracefully on irrec. error
Because user writes are decoupled from media writes by an intermediate
write buffer, irrecoverable media write errors cause pblk to stall: user
writes fill up the write buffer and end up in an infinite retry loop.
To let user writes fail gracefully, pblk must keep track of its own
internal state and prevent further writes from being placed into the
write buffer.
This patch implements a state machine that tracks internal errors and,
in case of failure, fails further user writes in a standard way.
Depending on the type of error, pblk does its best to persist buffered
writes (which have already been acknowledged) and to close down in a
graceful manner, so that data can be recovered by re-instantiating
pblk. The state machine also paves the way for a state-based FTL log.
Signed-off-by: Javier González <javier@cnexlabs.com>
Signed-off-by: Matias Bjørling <matias@cnexlabs.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
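
A minimal sketch of the state-machine idea described in the commit
message, using hypothetical names rather than the actual pblk symbols:
a per-instance state gates admission of user writes into the write
buffer, so an irrecoverable media error makes new writes fail
immediately instead of retrying forever, while already-acknowledged
buffered data can still be persisted during shutdown.

/* Illustrative sketch only; names and layout are made up, not pblk's. */
#include <errno.h>
#include <stddef.h>

enum ftl_state {
	FTL_RUNNING,		/* normal operation                        */
	FTL_RECOVERING,		/* flushing already-acked buffered writes  */
	FTL_STOPPED,		/* instance closed down, no new I/O        */
};

struct ftl {
	enum ftl_state state;
};

/* Entry point for a user write: check the internal state before the
 * data is copied into the write buffer.
 */
static int ftl_user_write(struct ftl *ftl, const void *buf, size_t len)
{
	if (ftl->state != FTL_RUNNING)
		return -EIO;	/* fail gracefully instead of stalling */

	/* ... copy buf[0..len) into the write buffer and ack the write ... */
	(void)buf;
	(void)len;
	return 0;
}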
Diffstat (limited to 'drivers/lightnvm/pblk-map.c')
-rw-r--r--   drivers/lightnvm/pblk-map.c   23
1 file changed, 17 insertions, 6 deletions
diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c
index 9942d9bc7b3a..a9be03cd07a8 100644
--- a/drivers/lightnvm/pblk-map.c
+++ b/drivers/lightnvm/pblk-map.c
@@ -62,9 +62,8 @@ static void pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
 
 	if (pblk_line_is_full(line)) {
 		struct pblk_line *prev_line = line;
-		line = pblk_line_replace_data(pblk);
-		if (!line)
-			return;
+
+		pblk_line_replace_data(pblk);
 		pblk_line_close_meta(pblk, prev_line);
 	}
 
@@ -106,10 +105,16 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
 		pblk_map_page_data(pblk, sentry + i, &rqd->ppa_list[i],
 					lun_bitmap, &meta_list[i], map_secs);
 
-		/* line can change after page map */
-		e_line = pblk_line_get_erase(pblk);
 		erase_lun = pblk_ppa_to_pos(geo, rqd->ppa_list[i]);
 
+		/* line can change after page map. We might also be writing the
+		 * last line.
+		 */
+		e_line = pblk_line_get_erase(pblk);
+		if (!e_line)
+			return pblk_map_rq(pblk, rqd, sentry, lun_bitmap,
+							valid_secs, i + min);
+
 		spin_lock(&e_line->lock);
 		if (!test_bit(erase_lun, e_line->erase_bitmap)) {
 			set_bit(erase_lun, e_line->erase_bitmap);
@@ -127,9 +132,15 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
 		spin_unlock(&e_line->lock);
 	}
 
-	e_line = pblk_line_get_erase(pblk);
 	d_line = pblk_line_get_data(pblk);
 
+	/* line can change after page map. We might also be writing the
+	 * last line.
+	 */
+	e_line = pblk_line_get_erase(pblk);
+	if (!e_line)
+		return;
+
 	/* Erase blocks that are bad in this line but might not be in next */
 	if (unlikely(ppa_empty(*erase_ppa)) &&
 			bitmap_weight(d_line->blk_bitmap, lm->blk_per_line)) {
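
The shape of the pblk_map_erase_rq() change above, as a simplified and
self-contained sketch (made-up types and helper names, not the pblk
API): re-read the erase line after every page map, since the current
data line may have been replaced, and fall back to a plain mapping of
the remaining sectors once no erase line exists, which is the case when
the very last line is being written.

/* Illustrative sketch only; types and helpers are hypothetical stubs. */
#include <stddef.h>

struct line { int id; };		/* stand-in for struct pblk_line */
struct wreq { size_t nr_secs; };	/* stand-in for struct nvm_rq    */

/* No-op stubs standing in for the pblk helpers, so the sketch compiles. */
static void map_page_data(struct wreq *rq, size_t sec) { (void)rq; (void)sec; }
static struct line *get_erase_line(void) { return NULL; }	/* may be NULL */
static void map_plain(struct wreq *rq, size_t from) { (void)rq; (void)from; }
static void mark_block_for_erase(struct line *l, size_t sec) { (void)l; (void)sec; }

static void map_erase_req(struct wreq *rq)
{
	for (size_t i = 0; i < rq->nr_secs; i++) {
		map_page_data(rq, i);		/* the current line may change here */

		struct line *e_line = get_erase_line();	/* re-read after the map */
		if (!e_line) {			/* writing the last line:           */
			map_plain(rq, i);	/* map the rest, skip erase work    */
			return;
		}
		mark_block_for_erase(e_line, i);
	}
}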