author	Joonsoo Kim <js1304@gmail.com>	2013-05-01 00:07:00 +0900
committer	Tejun Heo <tj@kernel.org>	2013-05-14 11:48:15 -0700
commit	8f174b1175a10903ade40f36eb6c896412877ca0 (patch)
tree	b6f1c9c7317ecc1092f582aca0160fdfb77c624e /kernel/workqueue.c
parent	d3251859168b0b12841e1b90d6d768ab478dc23d (diff)
workqueue: correct handling of the pool spin_lock
When we fail to mutex_trylock(), we release the pool spin_lock and do mutex_lock(). After that, we should regrab the pool spin_lock, but the regrab is missing in the current code. Correct it.

Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Diffstat (limited to 'kernel/workqueue.c')
-rw-r--r--	kernel/workqueue.c	1 +
1 file changed, 1 insertion(+), 0 deletions(-)
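For context, a condensed sketch of the locking order that this one-line fix restores in manage_workers() is shown below. It is not the full upstream function: the local declarations and the work done after the trylock fallback are abbreviated for illustration, and only the lock/unlock sequence mirrors the hunk that follows.

static bool manage_workers(struct worker *worker)
{
	struct worker_pool *pool = worker->pool;
	bool ret = false;

	/*
	 * Fast path: try to take manager_mutex while still holding
	 * pool->lock.  If that fails we must sleep on the mutex, and
	 * pool->lock may not be held across the sleep, so drop it ...
	 */
	if (unlikely(!mutex_trylock(&pool->manager_mutex))) {
		spin_unlock_irq(&pool->lock);
		mutex_lock(&pool->manager_mutex);
		/*
		 * ... and re-acquire it here (the line this patch adds),
		 * so the function still returns with pool->lock held as
		 * its callers expect.
		 */
		spin_lock_irq(&pool->lock);
		ret = true;
	}

	/* ... remainder of the manager work elided ... */
	return ret;
}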
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 1ae602809efb..286847b90225 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2059,6 +2059,7 @@ static bool manage_workers(struct worker *worker)
 	if (unlikely(!mutex_trylock(&pool->manager_mutex))) {
 		spin_unlock_irq(&pool->lock);
 		mutex_lock(&pool->manager_mutex);
+		spin_lock_irq(&pool->lock);
 		ret = true;
 	}