author    Vitaly Kuznetsov <vkuznets@redhat.com>  2015-05-29 21:18:02 +0300
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2015-06-01 00:38:21 +0300
commit    4e4bd36f97b1492f19b3329ac74ed313da13de34 (patch)
tree      6768a2857429f56f4dc6833aa81fcd2c2a37c216 /drivers
parent    6c4e5f9c9ff41ea997fd0f345b3b2b88c113eb68 (diff)
download  linux-4e4bd36f97b1492f19b3329ac74ed313da13de34.tar.xz
Drivers: hv: balloon: check if ha_region_mutex was acquired in MEM_CANCEL_ONLINE case
Memory notifiers are executed in sequential order, and when one of them fails by returning something other than NOTIFY_OK, the remainder of the notification chain is not executed. When a memory block is being onlined, online_pages() does memory_notify(MEM_GOING_ONLINE, ...), and if one of the notifiers in the chain fails we end up doing memory_notify(MEM_CANCEL_ONLINE, ...), so it is possible for a notifier to see MEM_CANCEL_ONLINE without having seen the corresponding MEM_GOING_ONLINE event.

E.g. when CONFIG_KASAN is enabled, kasan_mem_notifier() is used to prevent memory hotplug; it returns NOTIFY_BAD for all MEM_GOING_ONLINE events. As kasan_mem_notifier() comes before hv_memory_notifier() in the notification chain, we don't see the MEM_GOING_ONLINE event and do not take the ha_region_mutex. We do, however, see the MEM_CANCEL_ONLINE event and unconditionally try to release the lock, and the following is observed:

[ 110.850927] =====================================
[ 110.850927] [ BUG: bad unlock balance detected! ]
[ 110.850927] 4.1.0-rc3_bugxxxxxxx_test_xxxx #595 Not tainted
[ 110.850927] -------------------------------------
[ 110.850927] systemd-udevd/920 is trying to release lock (&dm_device.ha_region_mutex) at:
[ 110.850927] [<ffffffff81acda0e>] mutex_unlock+0xe/0x10
[ 110.850927] but there are no more locks to release!

At the same time, we can have the ha_region_mutex taken when we get the MEM_CANCEL_ONLINE event, in case one of the memory notifiers after hv_memory_notifier() in the chain failed, so we need to add the mutex_is_locked() check. In the MEM_ONLINE case we are always supposed to have the mutex locked.

Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: K. Y. Srinivasan <kys@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
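The interaction described above can be modeled outside the kernel. The following user-space sketch is illustrative only: kasan_like() and hv_like() are made-up stand-ins for kasan_mem_notifier() and hv_memory_notifier(), not kernel APIs. It shows how a chain that is aborted on MEM_GOING_ONLINE still delivers MEM_CANCEL_ONLINE to every callback, which is why the balloon driver can see a cancel event without ever having taken its mutex.

#include <stdbool.h>
#include <stdio.h>

enum mem_event { MEM_GOING_ONLINE, MEM_CANCEL_ONLINE };
enum notify_ret { NOTIFY_OK, NOTIFY_BAD };

/* Stand-in for kasan_mem_notifier(): vetoes every online attempt. */
static enum notify_ret kasan_like(enum mem_event ev)
{
	return ev == MEM_GOING_ONLINE ? NOTIFY_BAD : NOTIFY_OK;
}

static bool hv_took_mutex;

/* Stand-in for hv_memory_notifier(): "takes the mutex" on GOING_ONLINE. */
static enum notify_ret hv_like(enum mem_event ev)
{
	if (ev == MEM_GOING_ONLINE)
		hv_took_mutex = true;
	if (ev == MEM_CANCEL_ONLINE && !hv_took_mutex)
		printf("CANCEL_ONLINE without GOING_ONLINE: an unconditional unlock would be unbalanced\n");
	return NOTIFY_OK;
}

int main(void)
{
	enum notify_ret (*chain[])(enum mem_event) = { kasan_like, hv_like };
	int i, failed = 0;

	/* Like online_pages(): GOING_ONLINE stops at the first failing notifier... */
	for (i = 0; i < 2; i++) {
		if (chain[i](MEM_GOING_ONLINE) != NOTIFY_OK) {
			failed = 1;
			break;
		}
	}

	/* ...but CANCEL_ONLINE is then delivered to the whole chain. */
	if (failed) {
		for (i = 0; i < 2; i++)
			chain[i](MEM_CANCEL_ONLINE);
	}
	return 0;
}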
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/hv/hv_balloon.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index cb5b7dc9797f..8a725cd69ad7 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -567,7 +567,9 @@ static int hv_memory_notifier(struct notifier_block *nb, unsigned long val,
 	case MEM_ONLINE:
 		dm_device.num_pages_onlined += mem->nr_pages;
 	case MEM_CANCEL_ONLINE:
-		mutex_unlock(&dm_device.ha_region_mutex);
+		if (val == MEM_ONLINE ||
+		    mutex_is_locked(&dm_device.ha_region_mutex))
+			mutex_unlock(&dm_device.ha_region_mutex);
 		if (dm_device.ha_waiting) {
 			dm_device.ha_waiting = false;
 			complete(&dm_device.ol_waitevent);
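For readability, this is roughly how the affected cases read once the hunk is applied; the snippet is reconstructed from the diff above, the comments are not part of the patch, and it is truncated where the hunk ends.

	case MEM_ONLINE:
		dm_device.num_pages_onlined += mem->nr_pages;
		/* deliberate fall-through: MEM_ONLINE always holds the mutex */
	case MEM_CANCEL_ONLINE:
		/*
		 * MEM_CANCEL_ONLINE can arrive without a prior
		 * MEM_GOING_ONLINE (e.g. when kasan_mem_notifier() vetoes
		 * the online first), so only unlock when the mutex is held.
		 */
		if (val == MEM_ONLINE ||
		    mutex_is_locked(&dm_device.ha_region_mutex))
			mutex_unlock(&dm_device.ha_region_mutex);
		if (dm_device.ha_waiting) {
			dm_device.ha_waiting = false;
			complete(&dm_device.ol_waitevent);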