author      Joern Engel <joern@logfs.org>                2014-09-17 00:23:12 +0400
committer   Nicholas Bellinger <nab@linux-iscsi.org>     2014-10-02 01:39:06 +0400
commit      33940d09937276cd3c81f2874faf43e37c2db0e2 (patch)
tree        2c3043e6902ee4e8e23b947f2e30e50967e99c5b /drivers/target/target_core_ua.c
parent      74ed7e62289dc6d388996d7c8f89c2e7e95b9657 (diff)
download    linux-33940d09937276cd3c81f2874faf43e37c2db0e2.tar.xz
target: encapsulate smp_mb__after_atomic()
The target code has a rather generous helping of smp_mb__after_atomic()
throughout the code base. Most atomic operations were followed by one
and none were preceded by smp_mb__before_atomic(), nor accompanied by a
comment explaining the need for a barrier.
Instead of trying to prove for every case whether or not it is needed,
this patch introduces atomic_inc_mb() and atomic_dec_mb(), which
explicitly include the memory barriers before and after the atomic
operation. For now they are defined in a target header, although they
could be of general use.
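For reference, the helpers amount to roughly the following; this is a sketch
based on the description above rather than a verbatim quote of the patch, so
the exact header and formatting may differ:

static inline void atomic_inc_mb(atomic_t *v)
{
	smp_mb__before_atomic();	/* order earlier accesses before the inc */
	atomic_inc(v);
	smp_mb__after_atomic();		/* order the inc before later accesses */
}

static inline void atomic_dec_mb(atomic_t *v)
{
	smp_mb__before_atomic();
	atomic_dec(v);
	smp_mb__after_atomic();
}

Callers then write atomic_inc_mb(&deve->ua_count) instead of the open-coded
atomic/barrier pair, as the diff below shows.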
Most of the existing atomic/mb combinations were replaced by the new
helpers. In a few cases the atomic was sandwiched in
spin_lock/spin_unlock and I simply removed the barrier.
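The two kinds of conversion look roughly like this (hypothetical excerpts for
illustration only; foo, count and lock are made-up names, not code from this
tree):

	/* plain atomic followed by a barrier: switch to the new helper */
-	atomic_inc(&foo->count);
-	smp_mb__after_atomic();
+	atomic_inc_mb(&foo->count);

	/* atomic already inside spin_lock/spin_unlock: drop the barrier */
 	spin_lock(&foo->lock);
 	atomic_dec(&foo->count);
-	smp_mb__after_atomic();
 	spin_unlock(&foo->lock);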
I suspect that in most cases the correct conversion would have been to
drop the barrier. I also suspect that a few cases exist where a) the
barrier was necessary and b) a second barrier before the atomic was also
necessary and is now supplied by this patch.
Signed-off-by: Joern Engel <joern@logfs.org>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Diffstat (limited to 'drivers/target/target_core_ua.c')
-rw-r--r--   drivers/target/target_core_ua.c   15
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/target/target_core_ua.c b/drivers/target/target_core_ua.c
index 101858e245b3..1738b1646988 100644
--- a/drivers/target/target_core_ua.c
+++ b/drivers/target/target_core_ua.c
@@ -161,8 +161,7 @@ int core_scsi3_ua_allocate(
 		spin_unlock(&deve->ua_lock);
 		spin_unlock_irq(&nacl->device_list_lock);
 
-		atomic_inc(&deve->ua_count);
-		smp_mb__after_atomic();
+		atomic_inc_mb(&deve->ua_count);
 		return 0;
 	}
 	list_add_tail(&ua->ua_nacl_list, &deve->ua_list);
@@ -174,8 +173,7 @@ int core_scsi3_ua_allocate(
 		nacl->se_tpg->se_tpg_tfo->get_fabric_name(), unpacked_lun,
 		asc, ascq);
 
-	atomic_inc(&deve->ua_count);
-	smp_mb__after_atomic();
+	atomic_inc_mb(&deve->ua_count);
 	return 0;
 }
 
@@ -189,8 +187,7 @@ void core_scsi3_ua_release_all(
 		list_del(&ua->ua_nacl_list);
 		kmem_cache_free(se_ua_cache, ua);
 
-		atomic_dec(&deve->ua_count);
-		smp_mb__after_atomic();
+		atomic_dec_mb(&deve->ua_count);
 	}
 	spin_unlock(&deve->ua_lock);
 }
@@ -250,8 +247,7 @@ void core_scsi3_ua_for_check_condition(
 		list_del(&ua->ua_nacl_list);
 		kmem_cache_free(se_ua_cache, ua);
 
-		atomic_dec(&deve->ua_count);
-		smp_mb__after_atomic();
+		atomic_dec_mb(&deve->ua_count);
 	}
 	spin_unlock(&deve->ua_lock);
 	spin_unlock_irq(&nacl->device_list_lock);
@@ -309,8 +305,7 @@ int core_scsi3_ua_clear_for_request_sense(
 		list_del(&ua->ua_nacl_list);
 		kmem_cache_free(se_ua_cache, ua);
 
-		atomic_dec(&deve->ua_count);
-		smp_mb__after_atomic();
+		atomic_dec_mb(&deve->ua_count);
 	}
 	spin_unlock(&deve->ua_lock);
 	spin_unlock_irq(&nacl->device_list_lock);