
Commit 3bcb846

Eric Dumazet authored and davem330 committed
net: get rid of spin_trylock() in net_tx_action()
Note: Tom Herbert posted almost the same patch 3 months back, but for
different reasons.

The reasons we want to get rid of this spin_trylock() are:

1) Under high qdisc pressure, the spin_trylock() has almost no chance
   to succeed.

2) We loop multiple times in the softirq handler, eventually reaching
   the max retry count (10), and we schedule ksoftirqd. Since we want
   to adhere more strictly to ksoftirqd being woken up in the future
   (https://lwn.net/Articles/687617/), better to avoid spurious wakeups.

3) Calls to __netif_reschedule() dirty the cache line containing
   q->next_sched, slowing down the owner of the qdisc.

4) RT kernels cannot use the spin_trylock() here.

With the help of busylock, we get the qdisc spinlock fast enough, and
the trylock trick brings only a performance penalty.

Depending on qdisc setup, I observed a gain of up to 19% in qdisc
performance (1016600 pps instead of 853400 pps, using prio+tbf+fq_codel).

("mpstat -I SCPU 1" is much happier now.)

Signed-off-by: Eric Dumazet <[email protected]>
Cc: Tom Herbert <[email protected]>
Acked-by: Tom Herbert <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
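
For illustration only, here is a hedged user-space analogy of point 1: under
contention, a trylock-and-retry strategy fails most of the time and pays for
every failure (much as the old net_tx_action() path had to requeue the qdisc
via __netif_reschedule()), whereas a plain blocking lock simply waits its
turn. Everything below (the contender() function, thread and iteration
counts, pthread mutexes standing in for the qdisc root lock) is invented for
this sketch and is not kernel code. Build with "gcc -std=c11 -pthread".

/* Hedged analogy only: pthread mutexes stand in for the qdisc root lock. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define NITERS   100000

static pthread_mutex_t root_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long trylock_failures;
static atomic_long work_done;

static void *contender(void *arg)
{
        int use_trylock = *(int *)arg;

        for (int i = 0; i < NITERS; i++) {
                if (use_trylock) {
                        /* Old-style path: keep retrying until the lock is
                         * free, roughly analogous to requeueing the qdisc
                         * and re-raising the softirq on trylock failure.
                         */
                        while (pthread_mutex_trylock(&root_lock) != 0)
                                atomic_fetch_add(&trylock_failures, 1);
                } else {
                        /* Patched path: simply wait for the lock. */
                        pthread_mutex_lock(&root_lock);
                }
                atomic_fetch_add(&work_done, 1);  /* stand-in for qdisc_run() */
                pthread_mutex_unlock(&root_lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t threads[NTHREADS];
        int use_trylock = 1;    /* set to 0 to model the patched behaviour */

        for (int i = 0; i < NTHREADS; i++)
                pthread_create(&threads[i], NULL, contender, &use_trylock);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(threads[i], NULL);

        printf("work done: %ld, trylock failures: %ld\n",
               atomic_load(&work_done), atomic_load(&trylock_failures));
        return 0;
}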
1 parent 8241a1e commit 3bcb846

File tree

1 file changed (+9 −17 lines)

net/core/dev.c

+9-17
@@ -2253,7 +2253,7 @@ int netif_get_num_default_rss_queues(void)
 }
 EXPORT_SYMBOL(netif_get_num_default_rss_queues);
 
-static inline void __netif_reschedule(struct Qdisc *q)
+static void __netif_reschedule(struct Qdisc *q)
 {
         struct softnet_data *sd;
         unsigned long flags;
@@ -3898,22 +3898,14 @@ static void net_tx_action(struct softirq_action *h)
                         head = head->next_sched;
 
                         root_lock = qdisc_lock(q);
-                        if (spin_trylock(root_lock)) {
-                                smp_mb__before_atomic();
-                                clear_bit(__QDISC_STATE_SCHED,
-                                          &q->state);
-                                qdisc_run(q);
-                                spin_unlock(root_lock);
-                        } else {
-                                if (!test_bit(__QDISC_STATE_DEACTIVATED,
-                                              &q->state)) {
-                                        __netif_reschedule(q);
-                                } else {
-                                        smp_mb__before_atomic();
-                                        clear_bit(__QDISC_STATE_SCHED,
-                                                  &q->state);
-                                }
-                        }
+                        spin_lock(root_lock);
+                        /* We need to make sure head->next_sched is read
+                         * before clearing __QDISC_STATE_SCHED
+                         */
+                        smp_mb__before_atomic();
+                        clear_bit(__QDISC_STATE_SCHED, &q->state);
+                        qdisc_run(q);
+                        spin_unlock(root_lock);
                 }
         }
 }
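
The comment added in net_tx_action() is about ordering: if
__QDISC_STATE_SCHED were cleared before head->next_sched had been read,
another CPU could immediately requeue the qdisc via __netif_schedule() and
overwrite ->next_sched, corrupting the local list walk. Below is a hedged
user-space sketch of that pattern using C11 atomics; qdisc_demo, drain() and
run_one() are invented names (not kernel APIs), and the explicit fence merely
stands in for smp_mb__before_atomic(), since clear_bit() by itself implies no
ordering.

/* Hedged sketch, user-space C11 atomics only; not kernel code. */
#include <stdatomic.h>
#include <stddef.h>

struct qdisc_demo {
        atomic_int scheduled;           /* stands in for __QDISC_STATE_SCHED */
        struct qdisc_demo *next;        /* stands in for q->next_sched */
};

static void run_one(struct qdisc_demo *q)       /* stands in for qdisc_run() */
{
        (void)q;
}

void drain(struct qdisc_demo *head)
{
        while (head) {
                struct qdisc_demo *q = head;

                /* Read the next pointer first ... */
                head = q->next;

                /* ... and order that read before the flag store, so a
                 * concurrent scheduler cannot requeue q and overwrite
                 * q->next while we still depend on the old value.
                 * (The kernel additionally holds the qdisc root lock
                 * around this section.)
                 */
                atomic_thread_fence(memory_order_seq_cst);
                atomic_store_explicit(&q->scheduled, 0, memory_order_relaxed);

                run_one(q);
        }
}

int main(void)
{
        struct qdisc_demo b = { .scheduled = 1, .next = NULL };
        struct qdisc_demo a = { .scheduled = 1, .next = &b };

        drain(&a);
        return 0;
}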
