author     NeilBrown <neilb@suse.de>  2015-02-26 04:47:56 +0300
committer  NeilBrown <neilb@suse.de>  2015-04-22 01:00:43 +0300
commit     edbe83ab4c27ea6669eb57adb5ed7eaec1118ceb
tree       0bfa3622e7c297cd7fc2b42a56bc5006ff87bfdc
parent     5423399a84ee1d92d29d763029ed40e4905cf50f
md/raid5: allow the stripe_cache to grow and shrink.
The default setting of 256 stripe_heads is probably
much too small for many configurations, so it is best to make the
cache configure itself automatically.
Shrinking the cache under memory pressure is easy. The only
interesting part here is that we put a fairly high cost
('seeks') on shrinking the cache: the cost is greater than
just having to read more data, because shrinking also reduces parallelism.
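
As a rough sketch of how such a shrinker might be wired up (the field
and helper names below, such as drop_one_stripe(), min_nr_stripes and
the exact 'seeks' multiplier, are illustrative assumptions rather than
a quote of the patch):

    static unsigned long raid5_cache_count(struct shrinker *shrink,
                                           struct shrink_control *sc)
    {
            struct r5conf *conf = container_of(shrink, struct r5conf, shrinker);

            /* Only stripes above the configured minimum are reclaimable. */
            if (conf->max_nr_stripes <= conf->min_nr_stripes)
                    return 0;
            return conf->max_nr_stripes - conf->min_nr_stripes;
    }

    static unsigned long raid5_cache_scan(struct shrinker *shrink,
                                          struct shrink_control *sc)
    {
            struct r5conf *conf = container_of(shrink, struct r5conf, shrinker);
            unsigned long freed = 0;

            while (sc->nr_to_scan-- && drop_one_stripe(conf))
                    freed++;
            return freed;
    }

    /* At array setup: an inflated 'seeks' value tells the VM that
     * stripes are expensive to rebuild, so it reclaims page cache
     * in preference to tearing down stripe_heads. */
    conf->shrinker.count_objects = raid5_cache_count;
    conf->shrinker.scan_objects  = raid5_cache_scan;
    conf->shrinker.seeks = DEFAULT_SEEKS * conf->raid_disks * 4;
    register_shrinker(&conf->shrinker);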
Growing the cache on demand needs to be done carefully. If we allow
fast growth, that can upset memory balance as lots of dirty memory can
quickly turn into lots of memory queued in the stripe_cache.
It is important for the raid5 block device to appear congested to
allow write-throttling to work.
So we only add stripes slowly. We set a flag when an allocation
fails because all stripes are in use, allocate at a convenient
time when that flag is set, and don't allow it to be set again
until at least one stripe_head has been released for re-use.
This means that a spurt of requests will only cause one stripe_head
to be allocated, but a steady stream of requests will slowly
increase the cache size, until memory pressure pushes it back down again.
It could take hours to reach a steady state.
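
As a minimal sketch of that handshake (the bit names R5_ALLOC_MORE and
R5_DID_ALLOC and the helpers used here are assumptions for
illustration):

    /* Where a request finds every stripe_head busy: ask for growth,
     * but only if we have not already grown since the last release. */
    if (!test_bit(R5_DID_ALLOC, &conf->cache_state))
            set_bit(R5_ALLOC_MORE, &conf->cache_state);

    /* At a convenient time, e.g. from the raid5d main loop: */
    if (test_and_clear_bit(R5_ALLOC_MORE, &conf->cache_state)) {
            grow_one_stripe(conf, GFP_NOIO | __GFP_NOWARN);
            set_bit(R5_DID_ALLOC, &conf->cache_state);
    }

    /* When a stripe_head is released for re-use: permit another grow. */
    clear_bit(R5_DID_ALLOC, &conf->cache_state);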
The value written to, and displayed in, stripe_cache_size is
used as a minimum. The cache can grow above this and shrink back
down to it. The actual size is not directly visible, though it can
be deduced to some extent by watching stripe_cache_active.
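
In terms of code, the store side of stripe_cache_size might look
roughly like this, treating the written value as a floor (again, the
field and helper names are illustrative):

    static int raid5_set_cache_size(struct mddev *mddev, int size)
    {
            struct r5conf *conf = mddev->private;

            if (size <= 16 || size > 32768)
                    return -EINVAL;

            /* Record the new floor; the shrinker never goes below it. */
            conf->min_nr_stripes = size;

            /* Shrink immediately if we are currently above the floor... */
            while (size < conf->max_nr_stripes)
                    if (!drop_one_stripe(conf))
                            break;

            /* ...and grow up to it if we are currently below. */
            while (size > conf->max_nr_stripes)
                    if (!grow_one_stripe(conf, GFP_KERNEL))
                            break;

            return 0;
    }

Reading stripe_cache_size back then reports this minimum, while
stripe_cache_active continues to show how many stripe_heads are
actually in use.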
Signed-off-by: NeilBrown <neilb@suse.de>