| author | Linus Torvalds <torvalds@linux-foundation.org> | 2008-05-11 16:04:48 -0700 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2008-05-11 16:04:48 -0700 |
| commit | c3921ab71507b108d51a0f1ee960f80cd668a93d (patch) | |
| tree | b1408b898a8b50f15ad4a0cf1f29e17cc0138485 /fs/locks.c | |
| parent | 9662369786b9d07fd46d65b0f9e3938a3e01a5d9 (diff) | |
Add new 'cond_resched_bkl()' helper function
It acts exactly like a regular 'cond_resched()', but will not get
optimized away when CONFIG_PREEMPT is set.
Normal kernel code is already preemptible in the presence of
CONFIG_PREEMPT, so cond_resched() is optimized away (see commit
02b67cc3ba36bdba351d6c3a00593f4ec550d9d3 "sched: do not do
cond_resched() when CONFIG_PREEMPT").
But when you want to conditionally reschedule while holding a lock, you
need to use "cond_resched_lock(lock)", and the new function is the BKL
equivalent of that.
Also make fs/locks.c use it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
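
For context, here is a minimal sketch of how cond_resched() and the new cond_resched_bkl() can differ when CONFIG_PREEMPT is set. It is reconstructed from the description above, not copied from the commit; _cond_resched() stands in for the scheduler's underlying "reschedule if needed" primitive.

```c
extern int _cond_resched(void);

#ifdef CONFIG_PREEMPT
/* Fully preemptible kernels do not need explicit reschedule points. */
static inline int cond_resched(void)
{
	return 0;
}
#else
static inline int cond_resched(void)
{
	return _cond_resched();
}
#endif

/*
 * Unlike cond_resched(), this is never compiled out: code holding the
 * BKL keeps an explicit point where the lock can be dropped across a
 * schedule, even on CONFIG_PREEMPT kernels.
 */
static inline int cond_resched_bkl(void)
{
	return _cond_resched();
}
```

The key point is that cond_resched_bkl() always calls through to the scheduler, so the deliberate yield in BKL-holding code is not silently turned into a no-op by CONFIG_PREEMPT.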
Diffstat (limited to 'fs/locks.c')
-rw-r--r-- | fs/locks.c | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/locks.c b/fs/locks.c
index 0ac6b92cb0b6..11dbf08651b7 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -773,7 +773,7 @@ static int flock_lock_file(struct file *filp, struct file_lock *request)
 	 * give it the opportunity to lock the file.
 	 */
 	if (found)
-		cond_resched();
+		cond_resched_bkl();
 
 find_conflict:
 	for_each_lock(inode, before) {
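
As a hedged usage sketch (not part of this commit), the pattern the fs/locks.c hunk relies on looks roughly like this: a loop doing work under the Big Kernel Lock calls cond_resched_bkl() so waiting tasks get a chance to run, even on CONFIG_PREEMPT kernels where plain cond_resched() compiles to nothing. The struct and function names below are invented for the illustration; lock_kernel()/unlock_kernel() and cond_resched_bkl() are the real interfaces of that era.

```c
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/smp_lock.h>	/* lock_kernel() / unlock_kernel() */

/* Hypothetical item type, invented for this example. */
struct my_item {
	struct list_head node;
	int value;
};

static void scan_under_bkl(struct list_head *items)
{
	struct my_item *item;

	lock_kernel();
	list_for_each_entry(item, items, node) {
		/* ... per-item work done while holding the BKL ... */

		/*
		 * Give other tasks a chance to run.  The BKL is dropped
		 * and re-acquired across a schedule, so rescheduling here
		 * is safe; a plain cond_resched() would be a no-op when
		 * CONFIG_PREEMPT is set and the yield would never happen.
		 */
		cond_resched_bkl();
	}
	unlock_kernel();
}
```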