author	Marc Kleine-Budde <mkl@pengutronix.de>	2019-10-16 10:28:33 +0200
committer	David S. Miller <davem@davemloft.net>	2019-10-17 15:33:03 -0400
commit	4eab421bc339e719af1b4b9560dd0cb97ce29b73 (patch)
tree	3ca671a812d9ad9f80af00e04ea9e899f7dc6a6c /net/sched/sch_generic.c
parent	ce753e66dcc37e19572a87f70585ec6537dede81 (diff)
net: sched: Avoid using yield() in a busy waiting loop
With threaded interrupts enabled, the interrupt thread runs as SCHED_RR
with priority 50. If a user application with a higher priority preempts
the interrupt thread and tries to shut down the network interface, it
will loop forever. The kernel will spin in the loop waiting for the
device to become idle, and the scheduler will never consider the
interrupt thread because its priority is lower.

Avoid the problem by sleeping for a jiffy, giving other tasks,
including the interrupt thread, a chance to run and make progress.

In the original thread it was suggested to use wait_event() to properly
wait for the state to occur. DaveM explained that this would require
adding expensive checks to the fast paths of packet processing.

Link: https://lkml.kernel.org/r/1393976987-23555-1-git-send-email-mkl@pengutronix.de
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
[bigeasy: Rewrite commit message, add comment, use schedule_timeout_uninterruptible()]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
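For illustration only (not part of the commit): the sketch below is a
minimal user-space analogue of the livelock described above. The thread
names, the priorities (60 vs. 50), and the 1 ms sleep standing in for
one jiffy are all hypothetical choices, not values from the kernel
source. Two SCHED_RR threads are pinned to one CPU; the high-priority
"waiter" busy-waits on a flag that only the low-priority "worker" can
set. sched_yield() only cedes the CPU to runnable tasks of equal or
higher priority, so the yield variant spins forever, while the sleeping
variant lets the worker run.

/*
 * Build: gcc -O2 -pthread demo.c -o demo
 * Run as root (SCHED_RR requires privileges); add -DUSE_YIELD to see
 * the livelock, omit it to see the sleep-based variant complete.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

static atomic_int done;

/* Pin the calling thread to CPU 0 so both threads compete for one CPU. */
static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void set_rr_prio(int prio)
{
	struct sched_param sp = { .sched_priority = prio };

	pthread_setschedparam(pthread_self(), SCHED_RR, &sp);
}

/* Stand-in for the interrupt thread: lower priority, does the work. */
static void *worker(void *arg)
{
	pin_to_cpu0();
	set_rr_prio(50);
	atomic_store(&done, 1);
	return NULL;
}

/* Stand-in for dev_deactivate_many(): higher priority, waits for it. */
static void *waiter(void *arg)
{
	struct timespec ts = { .tv_nsec = 1000000 }; /* ~1 ms, like a jiffy */

	pin_to_cpu0();
	set_rr_prio(60);
	while (!atomic_load(&done)) {
#ifdef USE_YIELD
		sched_yield();        /* livelock: no prio >= 60 task exists */
#else
		nanosleep(&ts, NULL); /* blocks, so the prio-50 worker runs */
#endif
	}
	printf("worker finished, waiter done\n");
	return NULL;
}

int main(void)
{
	pthread_t w, h;

	pthread_create(&h, NULL, waiter, NULL);
	pthread_create(&w, NULL, worker, NULL);
	pthread_join(h, NULL);
	pthread_join(w, NULL);
	return 0;
}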
Diffstat (limited to 'net/sched/sch_generic.c')
-rw-r--r--	net/sched/sch_generic.c | 9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 4c75dbabd343..ed5b0e9fd395 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -1212,8 +1212,13 @@ void dev_deactivate_many(struct list_head *head)
 	/* Wait for outstanding qdisc_run calls. */
 	list_for_each_entry(dev, head, close_list) {
-		while (some_qdisc_is_busy(dev))
-			yield();
+		while (some_qdisc_is_busy(dev)) {
+			/* wait_event() would avoid this sleep-loop but would
+			 * require expensive checks in the fast paths of packet
+			 * processing which isn't worth it.
+			 */
+			schedule_timeout_uninterruptible(1);
+		}
 		/* The new qdisc is assigned at this point so we can safely
 		 * unwind stale skb lists and qdisc statistics
 		 */
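For reference, schedule_timeout_uninterruptible() is a thin wrapper in
kernel/time/timer.c: it marks the task TASK_UNINTERRUPTIBLE, so the
one-jiffy sleep cannot be cut short by signals, and then hands off to
schedule_timeout().

/* Essence of the helper used by the fix (see kernel/time/timer.c). */
signed long __sched schedule_timeout_uninterruptible(signed long timeout)
{
	__set_current_state(TASK_UNINTERRUPTIBLE);
	return schedule_timeout(timeout);
}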