7582207b10
KASAN detected the following BUG:
BUG: KASAN: use-after-free in rtrs_clt_update_wc_stats+0x41/0x100 [rtrs_client]
Read of size 8 at addr ffff88bf2fb4adc0 by task swapper/0/0
CPU: 0 PID: 0 Comm: swapper/0 Tainted: G O 5.4.84-pserver #5.4.84-1+feature+linux+5.4.y+dbg+20201216.1319+b6b887b~deb10
Hardware name: Supermicro H8QG6/H8QG6, BIOS 3.00 09/04/2012
Call Trace:
<IRQ>
dump_stack+0x96/0xe0
print_address_description.constprop.4+0x1f/0x300
? irq_work_claim+0x2e/0x50
__kasan_report.cold.8+0x78/0x92
? rtrs_clt_update_wc_stats+0x41/0x100 [rtrs_client]
kasan_report+0x10/0x20
rtrs_clt_update_wc_stats+0x41/0x100 [rtrs_client]
rtrs_clt_rdma_done+0xb1/0x760 [rtrs_client]
? lockdep_hardirqs_on+0x1a8/0x290
? process_io_rsp+0xb0/0xb0 [rtrs_client]
? mlx4_ib_destroy_cq+0x100/0x100 [mlx4_ib]
? add_interrupt_randomness+0x1a2/0x340
__ib_process_cq+0x97/0x100 [ib_core]
ib_poll_handler+0x41/0xb0 [ib_core]
irq_poll_softirq+0xe0/0x260
__do_softirq+0x127/0x672
irq_exit+0xd1/0xe0
do_IRQ+0xa3/0x1d0
common_interrupt+0xf/0xf
</IRQ>
RIP: 0010:cpuidle_enter_state+0xea/0x780
Code: 31 ff e8 99 48 47 ff 80 7c 24 08 00 74 12 9c 58 f6 c4 02 0f 85 53 05 00 00 31 ff e8 b0 6f 53 ff e8 ab 4f 5e ff fb 8b 44 24 04 <85> c0 0f 89 f3 01 00 00 48 8d 7b 14 e8 65 1e 77 ff c7 43 14 00 00
RSP: 0018:ffffffffab007d58 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffca
RAX: 0000000000000002 RBX: ffff88b803d69800 RCX: ffffffffa91a8298
RDX: 0000000000000007 RSI: dffffc0000000000 RDI: ffffffffab021414
RBP: ffffffffab6329e0 R08: 0000000000000002 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000002
R13: 000000bf39d82466 R14: ffffffffab632aa0 R15: ffffffffab632ae0
? lockdep_hardirqs_on+0x1a8/0x290
? cpuidle_enter_state+0xe5/0x780
cpuidle_enter+0x3c/0x60
do_idle+0x2fb/0x390
? arch_cpu_idle_exit+0x40/0x40
? schedule+0x94/0x120
cpu_startup_entry+0x19/0x1b
start_kernel+0x5da/0x61b
? thread_stack_cache_init+0x6/0x6
? load_ucode_amd_bsp+0x6f/0xc4
? init_amd_microcode+0xa6/0xa6
? x86_family+0x5/0x20
? load_ucode_bsp+0x182/0x1fd
secondary_startup_64+0xa4/0xb0
Allocated by task 5730:
save_stack+0x19/0x80
__kasan_kmalloc.constprop.9+0xc1/0xd0
kmem_cache_alloc_trace+0x15b/0x350
alloc_sess+0xf4/0x570 [rtrs_client]
rtrs_clt_open+0x3b4/0x780 [rtrs_client]
find_and_get_or_create_sess+0x649/0x9d0 [rnbd_client]
rnbd_clt_map_device+0xd7/0xf50 [rnbd_client]
rnbd_clt_map_device_store+0x4ee/0x970 [rnbd_client]
kernfs_fop_write+0x141/0x240
vfs_write+0xf3/0x280
ksys_write+0xba/0x150
do_syscall_64+0x68/0x270
entry_SYSCALL_64_after_hwframe+0x49/0xbe
Freed by task 5822:
save_stack+0x19/0x80
__kasan_slab_free+0x125/0x170
kfree+0xe7/0x3f0
kobject_put+0xd3/0x240
rtrs_clt_destroy_sess_files+0x3f/0x60 [rtrs_client]
rtrs_clt_close+0x3c/0x80 [rtrs_client]
close_rtrs+0x45/0x80 [rnbd_client]
rnbd_client_exit+0x10f/0x2bd [rnbd_client]
__x64_sys_delete_module+0x27b/0x340
do_syscall_64+0x68/0x270
entry_SYSCALL_64_after_hwframe+0x49/0xbe
When rtrs_clt_close is triggered, it iterates over all the present
rtrs_clt_sess and triggers close on them. However, the call to
rtrs_clt_destroy_sess_files is done before rtrs_clt_close_conns. This
is incorrect, since during the initialization phase we allocate the
rtrs_clt_sess first and only then create the rtrs_clt_con for it.
If we free the rtrs_clt_sess structure before closing the rtrs_clt_con, an
inflight IO completion may still trigger rtrs_clt_rdma_done, which would
lead to the use-after-free reported above.
Hence close the rtrs_clt_con connections first, and only then trigger the
destruction of the session files.
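
In code terms, the fix amounts to reordering the two calls in the session
teardown loop. A minimal sketch of the intended order is shown below; the loop
shape and field names are assumptions for illustration, see rtrs-clt.c for the
actual code:

	/*
	 * Sketch only: close the connections of a session before tearing
	 * down its sysfs files, so no completion can run against freed
	 * memory. The exact list and field names are assumptions.
	 */
	list_for_each_entry_safe(sess, tmp, &clt->paths_list, s.entry) {
		rtrs_clt_close_conns(sess, true);        /* stop inflight IO first   */
		rtrs_clt_destroy_sess_files(sess, NULL); /* then remove sysfs files  */
		kobject_put(&sess->kobj);                /* and drop the reference   */
	}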
Fixes:
****************************
RDMA Transport (RTRS)
****************************

RTRS (RDMA Transport) is a reliable high speed transport library which
provides support to establish optimal number of connections between client
and server machines using RDMA (InfiniBand, RoCE, iWarp) transport. It is
optimized to transfer (read/write) IO blocks.

In its core interface it follows the BIO semantics of providing the
possibility to either write data from an sg list to the remote side or to
request ("read") data transfer from the remote side into a given sg list.

RTRS provides I/O fail-over and load-balancing capabilities by using
multipath I/O (see "add_path" and "mp_policy" configuration entries in
Documentation/ABI/testing/sysfs-class-rtrs-client).

RTRS is used by the RNBD (RDMA Network Block Device) modules.

==================
Transport protocol
==================

Overview
--------
An established connection between a client and a server is called an rtrs
session. A session is associated with a set of memory chunks reserved on the
server side for a given client for rdma transfer. A session consists of
multiple paths, each representing a separate physical link between client and
server. Those are used for load balancing and failover. Each path consists of
as many connections (QPs) as there are cpus on the client.

When processing an incoming write or read request, the rtrs client uses
memory chunks reserved for it on the server side. Their number, size and
addresses need to be exchanged between client and server during the
connection establishment phase. Apart from the memory related information,
the client needs to inform the server about the session name and to identify
each path and connection individually.

On an established session the client sends write or read messages to the
server. The server uses the immediate field to tell the client which request
is being acknowledged and for the errno. The client uses the immediate field
to tell the server which of the memory chunks has been accessed and at which
offset the message can be found.

Module parameter always_invalidate is introduced for the security problem
discussed in LPC RDMA MC 2019. When always_invalidate=Y, on the server side
we invalidate each rdma buffer before we hand it over to RNBD server and then
pass it to the block layer. A new rkey is generated and registered for the
buffer after it returns back from the block layer and RNBD server. The new
rkey is sent back to the client along with the IO result. This procedure is
the default behaviour of the driver. The invalidation and registration on
each IO causes a performance drop of up to 20%. A user of the driver may
choose to load the modules with this mechanism switched off
(always_invalidate=N), if they understand and can accept the risk of a
malicious client being able to corrupt memory of a server it is connected to.
This might be a reasonable option in a scenario where all the clients and all
the servers are located within a secure datacenter.

Connection establishment
------------------------

1. Client starts establishing connections belonging to a path of a session
one by one via attaching RTRS_MSG_CON_REQ messages to the rdma_connect
requests. Those include uuid of the session and uuid of the path to be
established. They are used by the server to find a persisting session/path or
to create a new one when necessary. The message also contains the protocol
version and magic for compatibility, total number of connections per session
(as many as cpus on the client), the id of the current connection and the
reconnect counter, which is used to resolve the situations where the client
is trying to reconnect a path while the server is still destroying the old
one.

2. Server accepts the connection requests one by one and attaches
RTRS_MSG_CON_RSP messages to the rdma_accept. Apart from magic and protocol
version, the messages include error code, queue depth supported by the server
(number of memory chunks which are going to be allocated for that session)
and the maximum size of one io; the RTRS_MSG_NEW_RKEY_F flag is set when
always_invalidate=Y.

3. After all connections of a path are established the client sends the
RTRS_MSG_INFO_REQ message to the server, containing the name of the session.
This message requests the address information from the server.

4. Server replies to the session info request message with RTRS_MSG_INFO_RSP,
which contains the addresses and keys of the RDMA buffers allocated for that
session.

5. Session becomes connected after all paths to be established are connected
(i.e. steps 1-4 finished for all paths requested for a session).

6. Server and client periodically exchange heartbeat messages (empty rdma
messages with an immediate field) which are used to detect a crash on the
remote side or a network outage in an absence of IO.

7. On any RDMA related error or in the case of a heartbeat timeout, the
corresponding path is disconnected, all the inflight IO are failed over to a
healthy path, if any, and the reconnect mechanism is triggered.

CLT                                     SRV
*for each connection belonging to a path and for each path:
RTRS_MSG_CON_REQ  ------------------->
                   <------------------- RTRS_MSG_CON_RSP
...
*after all connections are established:
RTRS_MSG_INFO_REQ ------------------->
                   <------------------- RTRS_MSG_INFO_RSP
*heartbeat is started from both sides:
                   -------------------> [RTRS_HB_MSG_IMM]
[RTRS_HB_MSG_ACK] <-------------------
[RTRS_HB_MSG_IMM]  <-------------------
                   -------------------> [RTRS_HB_MSG_ACK]
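
For illustration, the handshake information described in steps 1 and 2 could
be carried in private-data structures shaped roughly like the sketch below.
This is only a sketch derived from the text above; the struct and field
names, types and layout are assumptions, not the actual on-wire definitions
from rtrs-pri.h.

#include <linux/types.h>
#include <linux/uuid.h>

/* Illustrative only: hypothetical private data attached to rdma_connect(). */
struct example_con_req {            /* client -> server, per connection      */
	__le16	magic;              /* protocol magic for compatibility      */
	__le16	version;            /* protocol version                      */
	uuid_t	sess_uuid;          /* uuid of the session                   */
	uuid_t	path_uuid;          /* uuid of the path being established    */
	__le16	con_id;             /* id of the current connection          */
	__le16	con_num;            /* connections per session (== cpus)     */
	__le16	recon_cnt;          /* reconnect counter                     */
};

/* Illustrative only: hypothetical private data attached to rdma_accept(). */
struct example_con_rsp {            /* server -> client, per connection      */
	__le16	magic;
	__le16	version;
	__le16	errno_code;         /* error code, 0 on success              */
	__le16	queue_depth;        /* memory chunks reserved for the session*/
	__le32	max_io_size;        /* maximum size of one io                */
	__le32	flags;              /* e.g. RTRS_MSG_NEW_RKEY_F              */
};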
IO path
-------

* Write (always_invalidate=N) *

1. When processing a write request the client selects one of the memory
chunks on the server side and rdma writes there the user data, user header
and the RTRS_MSG_RDMA_WRITE message. Apart from the type (write), the message
only contains the size of the user header. The client tells the server which
chunk has been accessed and at what offset the RTRS_MSG_RDMA_WRITE can be
found by using the IMM field.

2. When confirming a write request the server sends an "empty" rdma message
with an immediate field. The 32 bit field is used to specify the outstanding
inflight IO and for the error code.

CLT                                                          SRV
usr_data + usr_hdr + rtrs_msg_rdma_write -----------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM] <----------------- (id + errno)

* Write (always_invalidate=Y) *

1. When processing a write request the client selects one of the memory
chunks on the server side and rdma writes there the user data, user header
and the RTRS_MSG_RDMA_WRITE message. Apart from the type (write), the message
only contains the size of the user header. The client tells the server which
chunk has been accessed and at what offset the RTRS_MSG_RDMA_WRITE can be
found by using the IMM field. The server first invalidates the rkey
associated with the memory chunk and, when that finishes, passes the IO to
the RNBD server module.

2. When confirming a write request the server sends an "empty" rdma message
with an immediate field. The 32 bit field is used to specify the outstanding
inflight IO and for the error code. The new rkey is sent back using a
SEND_WITH_IMM WR. When the client receives the new rkey message, it validates
the message, updates the rkey for the rbuffer and finishes the IO, then posts
the recv buffer back for later use.

CLT                                                          SRV
usr_data + usr_hdr + rtrs_msg_rdma_write -----------------> [RTRS_IO_REQ_IMM]
[RTRS_MSG_RKEY_RSP] <----------------- (RTRS_MSG_RKEY_RSP)
[RTRS_IO_RSP_IMM] <----------------- (id + errno)
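
The write path above relies on the 32 bit immediate field doing double duty:
chunk id plus offset on the request, IO id plus errno on the response. A
minimal sketch of how such a field could be packed and unpacked is shown
below; the bit split and helper names are assumptions made for this sketch,
not the encoding actually used by rtrs (see rtrs-pri.h).

#include <linux/types.h>

/* Assumed split for this sketch: high 8 bits = id, low 24 bits = payload. */
#define EX_IMM_ID_SHIFT		24
#define EX_IMM_PAYLOAD_MASK	((1U << EX_IMM_ID_SHIFT) - 1)

/* client -> server: which chunk was written and at which offset */
static inline u32 ex_req_imm(u32 chunk_id, u32 offset)
{
	return (chunk_id << EX_IMM_ID_SHIFT) | (offset & EX_IMM_PAYLOAD_MASK);
}

/* server -> client: which inflight IO is acknowledged and the errno */
static inline u32 ex_rsp_imm(u32 io_id, u32 err)
{
	return (io_id << EX_IMM_ID_SHIFT) | (err & EX_IMM_PAYLOAD_MASK);
}

static inline void ex_rsp_imm_unpack(u32 imm, u32 *io_id, u32 *err)
{
	*io_id = imm >> EX_IMM_ID_SHIFT;
	*err = imm & EX_IMM_PAYLOAD_MASK;
}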
* Read (always_invalidate=N) *

1. When processing a read request the client selects one of the memory chunks
on the server side and rdma writes there the user header and the
RTRS_MSG_RDMA_READ message. This message contains the type (read), size of
the user header, flags (specifying if memory invalidation is necessary) and
the list of addresses along with keys for the data to be read into.

2. When confirming a read request the server transfers the requested data
first, attaches an invalidation message if requested and finally an "empty"
rdma message with an immediate field. The 32 bit field is used to specify the
outstanding inflight IO and the error code.

CLT                                                          SRV
usr_hdr + rtrs_msg_rdma_read --------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM] <-------------- usr_data + (id + errno)
or in case client requested invalidation:
[RTRS_IO_RSP_IMM_W_INV] <-------------- usr_data + (INV) + (id + errno)

* Read (always_invalidate=Y) *

1. When processing a read request the client selects one of the memory chunks
on the server side and rdma writes there the user header and the
RTRS_MSG_RDMA_READ message. This message contains the type (read), size of
the user header, flags (specifying if memory invalidation is necessary) and
the list of addresses along with keys for the data to be read into. The
server first invalidates the rkey associated with the memory chunk and, when
that finishes, passes the IO to the RNBD server module.

2. When confirming a read request the server transfers the requested data
first, attaches an invalidation message if requested and finally an "empty"
rdma message with an immediate field. The 32 bit field is used to specify the
outstanding inflight IO and the error code. The new rkey is sent back using a
SEND_WITH_IMM WR. When the client receives the new rkey message, it validates
the message, updates the rkey for the rbuffer and finishes the IO, then posts
the recv buffer back for later use.

CLT                                                          SRV
usr_hdr + rtrs_msg_rdma_read --------------> [RTRS_IO_REQ_IMM]
[RTRS_IO_RSP_IMM] <-------------- usr_data + (id + errno)
[RTRS_MSG_RKEY_RSP] <----------------- (RTRS_MSG_RKEY_RSP)
or in case client requested invalidation:
[RTRS_IO_RSP_IMM_W_INV] <-------------- usr_data + (INV) + (id + errno)

=========================================
Contributors List (in alphabetical order)
=========================================
Danil Kipnis <danil.kipnis@profitbricks.com>
Fabian Holler <mail@fholler.de>
Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
Jack Wang <jinpu.wang@profitbricks.com>
Kleber Souza <kleber.souza@profitbricks.com>
Lutz Pogrell <lutz.pogrell@cloud.ionos.com>
Milind Dumbare <Milind.dumbare@gmail.com>
Roman Penyaev <roman.penyaev@profitbricks.com>