In the Linux kernel, the following vulnerability has been resolved:

mm: swap: fix race between free_swap_and_cache() and swapoff()

There was previously a theoretical window where swapoff() could run and tear down a swap_info_struct while a call to free_swap_and_cache() was running in another thread. This could cause, amongst other bad possibilities, swap_page_trans_huge_swapped() (called by free_swap_and_cache()) to access the freed memory for swap_map.

This is a theoretical problem and I haven't been able to provoke it from a test case. But there has been agreement based on code review that this is possible (see link below).

Fix it by using get_swap_device()/put_swap_device(), which will stall swapoff(). There was an extra check in _swap_info_get() to confirm that the swap entry was not free. This isn't present in get_swap_device() because it doesn't make sense in general due to the race between getting the reference and swapoff. So I've added an equivalent check directly in free_swap_and_cache().

Details of how to provoke one possible issue (thanks to David Hildenbrand for deriving this):

--8<-----

__swap_entry_free() might be the last user and result in "count == SWAP_HAS_CACHE".

swapoff->try_to_unuse() will stop as soon as si->inuse_pages==0.

So the question is: could someone reclaim the folio and turn si->inuse_pages==0, before we completed swap_page_trans_huge_swapped()?

Imagine the following: 2 MiB folio in the swapcache. Only 2 subpages are still referenced by swap entries.

Process 1 still references subpage 0 via swap entry.
Process 2 still references subpage 1 via swap entry.

Process 1 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE
[then, preempted in the hypervisor etc.]

Process 2 quits. Calls free_swap_and_cache().
-> count == SWAP_HAS_CACHE

Process 2 goes ahead, passes swap_page_trans_huge_swapped(), and calls __try_to_reclaim_swap().

__try_to_reclaim_swap()->folio_free_swap()->delete_from_swap_cache()->...
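In practice the fix amounts to wrapping the body of free_swap_and_cache() in a get_swap_device()/put_swap_device() pair and re-checking swap_map before using it. The following is a minimal sketch of the fixed function, reconstructed from the description above rather than copied from the patch; details such as the reclaim flags (TTRS_UNMAPPED | TTRS_FULL) and the exact placement of the WARN_ON()/data_race() check are assumptions:

	int free_swap_and_cache(swp_entry_t entry)
	{
		struct swap_info_struct *p;
		unsigned char count;

		if (non_swap_entry(entry))
			return 1;

		/* Take a reference on the swap device; this stalls swapoff(). */
		p = get_swap_device(entry);
		if (p) {
			/*
			 * get_swap_device() does not check that the entry is
			 * still allocated (the old _swap_info_get() path did),
			 * so re-check swap_map here before touching it.
			 */
			if (WARN_ON(data_race(!p->swap_map[swp_offset(entry)]))) {
				put_swap_device(p);
				return 0;
			}

			count = __swap_entry_free(p, entry);
			if (count == SWAP_HAS_CACHE &&
			    !swap_page_trans_huge_swapped(p, entry))
				__try_to_reclaim_swap(p, swp_offset(entry),
						      TTRS_UNMAPPED | TTRS_FULL);

			/* Drop the reference; swapoff() may now proceed. */
			put_swap_device(p);
		}
		return p != NULL;
	}

While the reference is held, swapoff() blocks before freeing swap_map, so swap_page_trans_huge_swapped() can no longer race with the teardown described above.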
Affected: Ubuntu Linux kernel packages, numerous versions spanning the 3.11 through 6.11 series (full version table truncated).

CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Exploitability: Attack Vector (AV): Local; Attack Complexity (AC): Low; Privileges Required (PR): Low; User Interaction (UI): None
Scope: Scope (S): Unchanged
Impact: Confidentiality (C): None; Integrity (I): None; Availability (A): High