
tcp: properly reset skb->truesize for tx recycling

TCP sendmsg() and sendpage() normally advance skb->data_len
and skb->truesize by the amount of payload added to the skb.
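
A minimal sketch of that accounting on the regular copy path, loosely
following the skb_copy_to_page_nocache() helper (the data copy, error
handling and memory-pressure checks are omitted):

	/* Regular copy path (sketch): payload and truesize advance in lockstep. */
	skb->len      += copy;
	skb->data_len += copy;      /* `copy` bytes of paged payload        */
	skb->truesize += copy;      /* memory accounting grows by the same  */
	sk->sk_wmem_queued += copy;
	sk_mem_charge(sk, copy);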

But sendmsg(fd, ..., MSG_ZEROCOPY) has to account for whole pages,
even if a single byte of payload is used in the page.

This means that we cannot assume skb->truesize can be adjusted
by subtracting skb->data_len. We must instead overwrite its value.

Otherwise skb->truesize stays too big and can hit the socket sndbuf limit,
especially if the skb is recycled multiple times :/
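
A worked example with illustrative numbers: suppose one MSG_ZEROCOPY send
pins a single payload byte in one 4 KB page.

	/* Illustrative numbers only (page-aligned zerocopy accounting assumed):
	 *
	 *   skb->data_len += 1;       one byte of payload
	 *   skb->truesize += 4096;    the whole page is accounted
	 *
	 * The old recycling code then did
	 *
	 *   skb->truesize -= skb->data_len;   gives back only 1 byte
	 *
	 * leaving ~4 KB of phantom truesize on the cached skb.  Each further
	 * recycle can add more, until the send-buffer accounting built on
	 * truesize (sk_wmem_queued vs. sk_sndbuf) wrongly reports the socket
	 * as full.
	 */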

Fixes: 472c2e07ee ("tcp: add one skb cache for tx")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Soheil Hassas Yeganeh <soheil@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
commit d7cc399e12
parent 0a9798c123
Authored by Eric Dumazet on 2019-04-19 16:02:03 -07:00, committed by David S. Miller

@@ -868,7 +868,7 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp,
 	if (likely(!size)) {
 		skb = sk->sk_tx_skb_cache;
 		if (skb && !skb_cloned(skb)) {
-			skb->truesize -= skb->data_len;
+			skb->truesize = SKB_TRUESIZE(skb_end_offset(skb));
 			sk->sk_tx_skb_cache = NULL;
 			pskb_trim(skb, 0);
 			INIT_LIST_HEAD(&skb->tcp_tsorted_anchor);
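
For reference, SKB_TRUESIZE() is defined in include/linux/skbuff.h (shown
simplified below); combined with skb_end_offset(), the size of the skb's
linear buffer, the new assignment resets truesize to what __alloc_skb()
would have computed for a linear-only skb, matching the skb's state right
after pskb_trim(skb, 0):

	/* include/linux/skbuff.h (simplified): truesize of a purely linear skb,
	 * i.e. its data buffer plus the sk_buff and skb_shared_info overhead.
	 */
	#define SKB_TRUESIZE(X) ((X) +						\
				 SKB_DATA_ALIGN(sizeof(struct sk_buff)) +	\
				 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))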