Age | Commit message | Author | Files | Lines (+/-)
2014-01-22mg_disk: Spelling s/finised/finished/Geert Uytterhoeven1-1/+1
Signed-off-by: Geert Uytterhoeven <geert+renesas@linux-m68k.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2014-01-22null_blk: Null pointer dereference problem in alloc_page_buffersRaghavendra K T1-0/+5
If we load the null_blk module with bs=8k we get the following oops: [ 3819.812190] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008 [ 3819.812387] IP: [<ffffffff81170aa5>] create_empty_buffers+0x28/0xaf [ 3819.812527] PGD 219244067 PUD 215a06067 PMD 0 [ 3819.812640] Oops: 0000 [#1] SMP [ 3819.812772] Modules linked in: null_blk(+) Fix that by resetting the block size to PAGE_SIZE if it is greater than PAGE_SIZE. Reported-by: Sumanth <sumantk2@linux.vnet.ibm.com> Signed-off-by: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com> Reviewed-by: Matias Bjorling <m@bjorling.me> Signed-off-by: Jens Axboe <axboe@kernel.dk>
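A minimal sketch of the clamping idea described above (variable names are illustrative, not necessarily the literal patch):

    /* Refuse block sizes larger than a page; alloc_page_buffers() cannot
     * create buffers bigger than the page that backs them. */
    if (bs > PAGE_SIZE) {
            pr_warn("null_blk: invalid block size, defaulting to PAGE_SIZE\n");
            bs = PAGE_SIZE;
    }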
2014-01-22mtip32xx: Correctly handle security locked conditionSam Bradshaw2-3/+15
If power is removed during a secure erase, the drive will end up in a security locked condition. This patch causes the driver to identify, log, and flag the security lock state. IOs are prevented from submission to the drive until the locked state is addressed with a secure erase. Bumped version number to reflect this capability. Signed-off-by: Sam Bradshaw <sbradshaw@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2014-01-22mtip32xx: Make SGL container per-command to eliminate high order dma allocationSam Bradshaw2-97/+149
The mtip32xx driver makes a high order dma memory allocation to store a command index table, some dedicated buffers, and a command header & SGL blob. This allocation can fail with a surprise insert under low & fragmented memory conditions. This patch breaks these regions up into separate low order allocations and increases the maximum number of segments a single command SGL can have. We wanted to allow at least 256 segments for 1 MB direct IO. Since the command header occupies the first 0x80 bytes of the SGL blob, that meant we needed two 4k pages to contain the header and SGL. The two pages allow up to 504 SGL segments. Signed-off-by: Sam Bradshaw <sbradshaw@micron.com> Signed-off-by: Asai Thambi S P <asamymuthupa@micron.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
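The 504-segment figure follows from the layout described above, assuming the usual 16-byte AHCI-style SGL entry:

    /* Two 4 KiB pages hold the command header plus the SGL entries.
     * The header occupies the first 0x80 (128) bytes, and each SGL
     * entry is 16 bytes (AHCI PRDT format, assumed here):
     *
     *   (2 * 4096 - 0x80) / 16 = 8064 / 16 = 504 segments
     */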
2014-01-22Merge branch 'for-jens' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/linux-block into for-3.14/driversJens Axboe2-10/+29
2014-01-22drivers/block/loop.c: fix comment typo in loop_config_discardOlaf Hering1-1/+1
Discard requests are ignored if the encryption is enabled for the given loop device. Update comment to match the code, and similar comments elsewhere in the file. Signed-off-by: Olaf Hering <olaf@aepfle.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2014-01-22drivers/block/cciss.c:cciss_init_one(): use proper errnosAndrew Morton1-2/+2
pci_driver.probe should return a meaningful errno, not -1. Cc: Jens Axboe <axboe@kernel.dk> Cc: Stephen M. Cameron <scameron@beardog.cce.hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
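As a generic illustration of the pattern (the helper name is hypothetical, not the literal cciss diff):

    int rc;

    rc = cciss_hw_init(pdev);       /* hypothetical init step */
    if (rc)
            return rc;              /* propagate -ENODEV, -ENOMEM, etc. - never -1 */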
2014-01-22drivers/block/paride/pg.c: underflow bug in pg_write()Dan Carpenter1-1/+1
The test here can underflow so we pass bogus lengths to the hardware. It's a static checker fix and I don't know the impact. Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
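A hedged illustration of this class of bug (names are assumptions, not the exact pg.c code): a signed length taken from userspace must be range-checked before it is used in size arithmetic.

    int dlen = hdr.dlen;                    /* signed length copied from userspace */

    if (dlen > PG_MAX_DATA)                 /* old-style check: misses negative values */
            return -EINVAL;

    if (dlen < 0 || dlen > PG_MAX_DATA)     /* safer: reject underflowing lengths too */
            return -EINVAL;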
2014-01-22drivers/block/sx8.c: remove unnecessary pci_set_drvdata()Jingoo Han1-1/+0
The driver core clears the driver data to NULL after device_release or on probe failure. Thus, it is not needed to manually clear the device driver data to NULL. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2014-01-22drivers/block/sx8.c: use module_pci_driver()Jingoo Han1-14/+1
Use module_pci_driver() macro which makes the code smaller and simpler. Signed-off-by: Jingoo Han <jg1.han@samsung.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
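For reference, the macro collapses the usual registration boilerplate; a sketch of the resulting shape (the sx8 identifiers below are best-effort assumptions):

    static struct pci_driver carm_driver = {
            .name           = "sx8",
            .id_table       = carm_pci_tbl,
            .probe          = carm_init_one,
            .remove         = carm_remove_one,
    };

    /* Replaces a hand-written module_init()/module_exit() pair that only
     * called pci_register_driver()/pci_unregister_driver(). */
    module_pci_driver(carm_driver);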
2014-01-17floppy: bail out in open() if drive is not responding to block0 readJiri Kosina2-10/+29
In case reading of block 0 during open() fails, it is not the right thing to let open() succeed. Fix this by introducing FD_OPEN_SHOULD_FAIL_BIT flag, and setting it in case the bio callback encounters an error while trying to read block 0. As a bonus, this works around certain broken userspace (blkid), which is not able to properly handle read()s returning IO errors. Hence be nice to those, and bail out during open() already; if block 0 is not readable, read()s are not going to provide any meaningful data anyway. Signed-off-by: Jiri Kosina <jkosina@suse.cz>
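A hedged sketch of the mechanism (names other than FD_OPEN_SHOULD_FAIL_BIT are assumptions):

    /* In the bio completion path for the block-0 probe read: */
    if (read_failed)                                    /* assumed error condition */
            set_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_flags);

    /* Later, in the open() path: */
    if (test_bit(FD_OPEN_SHOULD_FAIL_BIT, &drive_flags))
            return -EIO;    /* refuse the open instead of pretending all is well */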
2014-01-09bcache: Fix auxiliary search trees for key size > cacheline sizeKent Overstreet1-14/+14
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Don't return -EINTR when insert finishedKent Overstreet1-2/+4
We need to return -EINTR after a split because we invalidated iterators (and freed the btree node) - but if we were finished inserting, we don't want to redo the traversal. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Improve bucket_prio() calculationKent Overstreet2-3/+16
When deciding what order to reuse buckets we take into account both the bucket's priority (which indicates lru order) and also the amount of live data in that bucket. The way they were scaled together wasn't as correct as it could be... this patch improves and documents it. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Add bch_bkey_equal_header()Nicholas Swenson3-8/+11
Checks if two keys have equivalent header fields. (good enough for replacement or merging) Used in bch_bkey_try_merge, and replacing a key in the btree. Signed-off-by: Nicholas Swenson <nks@daterainc.com> Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: update bch_bkey_try_mergeNicholas Swenson3-16/+28
Added generic header checks to bch_bkey_try_merge, which then calls the bkey-specific function. Removed extraneous checks from bch_extent_merge. Signed-off-by: Nicholas Swenson <nks@daterainc.com>
2014-01-09bcache: Move insert_fixup() to btree_keys_opsKent Overstreet4-229/+257
Now handling overlapping extents/keys is a method that's specific to what the btree node contains. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Convert sorting to btree_keysKent Overstreet3-36/+33
More work to disentangle various code from struct btree Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Convert debug code to btree_keysKent Overstreet9-217/+264
More work to disentangle various code from struct btree Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Convert btree_iter to struct btree_keysKent Overstreet6-38/+41
More work to disentangle bset.c from struct btree Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Refactor bset_tree sysfs statsKent Overstreet3-47/+54
We're in the process of turning bset.c into library code, so none of the code in that file should know about struct cache_set or struct btree - so, move the btree traversal part of the stats code to sysfs.c. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Add bch_btree_keys_u64s_remaining()Kent Overstreet3-13/+31
Helper function to explicitly check how much space is free in a btree node Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Add struct btree_keysKent Overstreet9-264/+323
Soon, bset.c won't need to depend on struct btree. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Abstract out stuff needed for sortingKent Overstreet9-289/+423
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Rename/shuffle various code aroundKent Overstreet8-276/+341
More work to disentangle bset.c from the rest of the code: Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Add struct bset_sort_stateKent Overstreet6-49/+87
More disentangling bset.c from the rest of the bcache code - soon, the sorting routines won't have any dependencies on any outside structs. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Split out sort_extent_cmp()Kent Overstreet4-32/+73
Only use extent comparison for comparing extents, so we're not using START_KEY() on other key types (i.e. btree pointers) Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Bkey indexing renamingKent Overstreet7-53/+63
More refactoring: node() -> bset_bkey_idx(), end() -> bset_bkey_last(). Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Make bch_keylist_realloc() take u64s, not nptrsKent Overstreet4-16/+26
Getting away from KEY_PTRS and moving toward KEY_U64s - and getting rid of magic 2s. Also split out the part that checks against journal entry size so as to avoid a dependency on struct cache_set in bset.c. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Remove/fix some header dependenciesKent Overstreet3-24/+26
In the process of disentangling/libraryizing bset.c from the rest of the bcache code. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Use a mempool for mergesort temporary spaceKent Overstreet3-16/+8
It was effectively a single-element mempool before; it's slightly cleaner to just use a real mempool. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
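For context, the kernel's mempool API makes the "always keep one buffer in reserve" pattern explicit; a minimal sketch under assumed names, not the exact bcache code:

    #include <linux/mempool.h>

    /* One pre-allocated buffer of the given page order for the merge sort. */
    mempool_t *pool = mempool_create_page_pool(1, page_order);
    if (!pool)
            return -ENOMEM;

    void *scratch = mempool_alloc(pool, GFP_NOIO);  /* falls back to the reserve
                                                       instead of failing */
    /* ... sort into scratch ... */
    mempool_free(scratch, pool);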
2014-01-09bcache: Btree verify code improvementsKent Overstreet6-40/+83
Used this fixed code to find and fix the bug addressed by a4d885097b0ac0cd1337f171f2d4b83e946094d4. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: kill index()Kent Overstreet4-8/+24
That was a terrible name for a macro, add some better helpers to replace it. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Trivial error handling fixKent Overstreet1-1/+2
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache/md: Use raid stripe sizeKent Overstreet4-0/+12
Now that we've got code for raid5/6 stripe awareness, bcache just needs to know about the stripes and when writing partial stripes is expensive - we probably don't want to enable this optimization for raid1 or 10, even though they have stripes. So add a flag to queue_limits. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
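Assuming the field name introduced by this series, the shape is roughly the following sketch; bcache then only enables the full-stripe writeback optimization when the flag is set, leaving raid1/raid10 alone even though they also expose stripes.

    /* In raid5/6 queue setup (field name assumed): */
    q->limits.raid_partial_stripes_expensive = 1;

    /* Stripe width is advertised through the existing optimal-I/O hint,
     * which stacked drivers like bcache can read back (assumption): */
    blk_queue_io_opt(q, stripe_width_bytes);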
2014-01-09bcache: Do bkey_put() in btree_split() error pathKent Overstreet1-1/+4
This error path shouldn't have been hit in practice... and we've got reworked reserve code coming soon so that it shouldn't _ever_ be hit... but if we've got code for this error path it should be correct. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Rework allocator reservesKent Overstreet8-83/+105
We need a reserve for allocating buckets for new btree nodes - and now that we've got multiple btrees, it really needs to be per btree. This reworks the reserves so we've got separate freelists for each reserve instead of watermarks, which seems to make things a bit cleaner, and it adds some code so that btree_split() can make sure the reserve is available before it starts. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
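A hedged sketch of the resulting structure (the reserve names and allocator signature are assumptions): each consumer gets its own freelist, and allocation names the reserve it draws from instead of comparing against watermarks.

    enum alloc_reserve {
            RESERVE_BTREE,          /* buckets for new btree nodes  */
            RESERVE_PRIO,           /* prio/gen metadata writes     */
            RESERVE_MOVINGGC,       /* moving garbage collection    */
            RESERVE_NONE,           /* ordinary data writes         */
            RESERVE_NR,
    };

    /* e.g. btree_split() can check its reserve up front, then: */
    long bucket = bch_bucket_alloc(ca, RESERVE_BTREE, wait);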
2014-01-09bcache: kill closure locking codeKent Overstreet2-313/+123
Also flesh out the documentation a bit Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: kill closure locking usageKent Overstreet7-55/+98
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Zero less memoryKent Overstreet3-40/+41
Another minor performance optimization Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Don't touch bucket gen for dirty ptrsKent Overstreet2-2/+7
Unnecessary since a bucket that has dirty pointers pointing to it can never be invalidated - and skipping it is a measurable performance boost, since the bucket gen will usually be a cache miss. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Minor btree cache fixKent Overstreet1-7/+3
Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Performance fix for when journal entry is fullKent Overstreet1-5/+9
We were unnecessarily waiting on a journal write to complete when we just needed to start a journal write and start setting up the next one. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Minor journal fixKent Overstreet1-5/+14
The real fix is where we check the bytes we need against how much is remaining: we also need to check for a journal entry bigger than our buffer; we'll never write such an entry, and it would be bad if we tried to read one. Also improve the diagnostic messages. Signed-off-by: Kent Overstreet <kmo@daterainc.com>
2014-01-09bcache: Data corruption fixKent Overstreet1-4/+22
The code that handles overlapping extents that we've just read back in from disk was depending on the behaviour of the code that handles overlapping extents as we're inserting into a btree node in the case of an insert that forced an existing extent to be split: on insert, if we had to split we'd also insert a new extent to represent the top part of the old extent - and then that new extent would get written out. The code that read the extents back in thus did not bother with splitting extents - if it saw an extent that overlapped in the middle of an older extent, it would trim the old extent to only represent the bottom part, assuming that the original insert would've inserted a new extent to represent the top part. I still haven't figured out _how_ it can happen, but I'm now pretty convinced (and testing has confirmed) that there's some kind of an obscure corner case (probably involving extent merging, and multiple overwrites in different sets) that breaks this. The fix is to change the mergesort fixup code to split extents itself when required. Signed-off-by: Kent Overstreet <kmo@daterainc.com> Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
2014-01-08Merge branch 'for-3.14/core' into for-3.14/driversJens Axboe1314-7015/+15423
We need the updated code to make bcache easier to merge.
2014-01-03blk-mq: fix initializing request's start timeMing Lei1-0/+2
blk_rq_init() is called in the request's complete handler to reinitialize the request, so the start_time and start_time_ns members may become inaccurate when the request is allocated again later. The patch initializes the two members in blk_mq_rq_ctx_init() to fix the problem. Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
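A minimal sketch of the fix as described (a hedged approximation, not necessarily the literal diff): stamp the timestamps where the mq request is set up for a new I/O.

    /* In blk_mq_rq_ctx_init(), alongside the other per-request setup: */
    rq->start_time = jiffies;
    set_start_time_ns(rq);      /* no-op unless block cgroup timing is enabled */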
2014-01-03pktcdvd: fix error return codeJulia Lawall1-1/+3
Set the return variable to an error code as done elsewhere in the function. A simplified version of the semantic match that finds this problem is as follows: (http://coccinelle.lip6.fr/)
// <smpl>
(
if@p1 (\(ret < 0\|ret != 0\))
 { ... return ret; }
|
ret@p1 = 0
)
... when != ret = e1
    when != &ret
*if(...)
{
  ... when != ret = e2
      when forall
 return ret;
}
// </smpl>
Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: Jiri Kosina <jkosina@suse.cz>
2013-12-31block: blk-mq: don't export blk_mq_free_queue()Ming Lei4-2/+2
blk_mq_free_queue() is called from release handler of queue kobject, so it needn't be called from drivers. Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2013-12-31block: blk-mq: make blk_sync_queue support mqMing Lei2-2/+10
This patch moves synchronization on mq->delay_work from blk_mq_free_queue() to blk_sync_queue(), so that blk_sync_queue can work on mq. Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Ming Lei <tom.leiming@gmail.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
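A hedged sketch of what "blk_sync_queue works on mq" means here (the delayed-work member names are assumptions):

    void blk_sync_queue(struct request_queue *q)
    {
            del_timer_sync(&q->timeout);

            if (q->mq_ops) {
                    struct blk_mq_hw_ctx *hctx;
                    int i;

                    /* flush each hardware queue's delayed work instead of
                     * the legacy per-queue q->delay_work */
                    queue_for_each_hw_ctx(q, hctx, i)
                            cancel_delayed_work_sync(&hctx->delayed_work);
            } else {
                    cancel_delayed_work_sync(&q->delay_work);
            }
    }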