mirror of https://mirrors.bfsu.edu.cn/git/linux.git (synced 2024-11-13 14:24:11 +08:00)
f85bea18f7
20 Commits
Author | SHA1 | Message | Date
---|---|---|---
Alice Ryhl
|
fc6e66f469 |
rust: add abstraction for struct page
Adds a new struct called `Page` that wraps a pointer to `struct page`. This struct is assumed to hold ownership over the page, so that Rust code can allocate and manage pages directly. The page type has various methods for reading and writing into the page. These methods will temporarily map the page to allow the operation. All of these methods use a helper that takes an offset and length, performs bounds checks, and returns a pointer to the given offset in the page. This patch only adds support for pages of order zero, as that is all Rust Binder needs. However, it is written to make it easy to add support for higher-order pages in the future. To do that, you would add a const generic parameter to `Page` that specifies the order. Most of the methods do not need to be adjusted, as the logic for dealing with mapping multiple pages at once can be isolated to just the `with_pointer_into_page` method. Rust Binder needs to manage pages directly as that is how transactions are delivered: Each process has an mmap'd region for incoming transactions. When an incoming transaction arrives, the Binder driver will choose a region in the mmap, allocate and map the relevant pages manually, and copy the incoming transaction directly into the page. This architecture allows the driver to copy transactions directly from the address space of one process to another, without an intermediate copy to a kernel buffer. This code is based on Wedson's page abstractions from the old rust branch, but it has been modified by Alice by removing the incomplete support for higher-order pages, by introducing the `with_*` helpers to consolidate the bounds checking logic into a single place, and various other changes. Co-developed-by: Wedson Almeida Filho <wedsonaf@gmail.com> Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com> Reviewed-by: Andreas Hindborg <a.hindborg@samsung.com> Reviewed-by: Trevor Gross <tmgross@umich.edu> Reviewed-by: Benno Lossin <benno.lossin@proton.me> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20240528-alice-mm-v7-4-78222c31b8f4@google.com [ Fixed typos and added a few intra-doc links. - Miguel ] Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
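A minimal sketch of the page API this commit describes, assuming the method names (`alloc_page`, `write_raw`, `read_raw`) and the `GFP_KERNEL` flags path suggested by the text; the exact signatures may differ:

```rust
// Sketch: method names and the flags path are taken from the description
// above and may not match the final API exactly.
use kernel::page::Page;
use kernel::prelude::*;

fn page_roundtrip() -> Result {
    // Allocate a single order-zero page.
    let page = Page::alloc_page(kernel::alloc::flags::GFP_KERNEL)?;

    let data = [1u8, 2, 3, 4];
    let mut out = [0u8; 4];

    // Both calls temporarily map the page and bounds-check offset + length;
    // they are unsafe because the raw buffers must be valid for `len` bytes.
    unsafe {
        page.write_raw(data.as_ptr(), 0, data.len())?;
        page.read_raw(out.as_mut_ptr(), 0, out.len())?;
    }

    assert_eq!(data, out);
    Ok(())
}
```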
Wedson Almeida Filho
|
1b580e7b9b |
rust: uaccess: add userspace pointers
A pointer to an area in userspace memory, which can be either read-only or read-write. All methods on this struct are safe: attempting to read or write on bad addresses (either out of the bound of the slice or unmapped addresses) will return `EFAULT`. Concurrent access, *including data races to/from userspace memory*, is permitted, because fundamentally another userspace thread/process could always be modifying memory at the same time (in the same way that userspace Rust's `std::io` permits data races with the contents of files on disk). In the presence of a race, the exact byte values read/written are unspecified but the operation is well-defined. Kernelspace code should validate its copy of data after completing a read, and not expect that multiple reads of the same address will return the same value. These APIs are designed to make it difficult to accidentally write TOCTOU bugs. Every time you read from a memory location, the pointer is advanced by the length so that you cannot use that reader to read the same memory location twice. Preventing double-fetches avoids TOCTOU bugs. This is accomplished by taking `self` by value to prevent obtaining multiple readers on a given `UserSlice`, and the readers only permitting forward reads. If double-fetching a memory location is necessary for some reason, then that is done by creating multiple readers to the same memory location. Constructing a `UserSlice` performs no checks on the provided address and length, it can safely be constructed inside a kernel thread with no current userspace process. Reads and writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map of the current process and enforce that the address range is within the user range (no additional calls to `access_ok` are needed). This code is based on something that was originally written by Wedson on the old rust branch. It was modified by Alice by removing the `IoBufferReader` and `IoBufferWriter` traits, and various other changes. Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com> Reviewed-by: Benno Lossin <benno.lossin@proton.me> Reviewed-by: Trevor Gross <tmgross@umich.edu> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Co-developed-by: Alice Ryhl <aliceryhl@google.com> Signed-off-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20240528-alice-mm-v7-1-78222c31b8f4@google.com [ Wrapped docs to 100 and added a few intra-doc links. - Miguel ] Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
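A minimal sketch of the single-pass reader pattern described above; `UserSlice::new`, `reader()`, and `read_slice()` are assumed names based on the commit text, and the user address is taken as a plain `usize`:

```rust
// Sketch: the reader consumes the `UserSlice` and only moves forward, so the
// same user bytes cannot be fetched twice through it (anti-TOCTOU).
use kernel::prelude::*;
use kernel::uaccess::UserSlice;

fn read_header(user_addr: usize, len: usize) -> Result<[u8; 8]> {
    let mut buf = [0u8; 8];
    let mut reader = UserSlice::new(user_addr, len).reader();
    // Fails with EFAULT if the range is not a valid, mapped user address.
    reader.read_slice(&mut buf)?;
    Ok(buf)
}
```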
Linus Torvalds
|
61307b7be4 |
Merge tag 'mm-stable-2024-05-17-19-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull mm updates from Andrew Morton: "The usual shower of singleton fixes and minor series all over MM, documented (hopefully adequately) in the respective changelogs. Notable series include:
- Lucas Stach has provided some page-mapping cleanup/consolidation/maintainability work in the series "mm/treewide: Remove pXd_huge() API".
- In the series "Allow migrate on protnone reference with MPOL_PREFERRED_MANY policy", Donet Tom has optimized mempolicy's MPOL_PREFERRED_MANY mode, yielding almost doubled performance in one test.
- In their series "Memory allocation profiling" Kent Overstreet and Suren Baghdasaryan have contributed a means of determining (via /proc/allocinfo) whereabouts in the kernel memory is being allocated: number of calls and amount of memory.
- Matthew Wilcox has provided the series "Various significant MM patches" which does a number of rather unrelated things, but in largely similar code sites.
- In his series "mm: page_alloc: freelist migratetype hygiene" Johannes Weiner has fixed the page allocator's handling of migratetype requests, with resulting improvements in compaction efficiency.
- In the series "make the hugetlb migration strategy consistent" Baolin Wang has fixed a hugetlb migration issue, which should improve hugetlb allocation reliability.
- Liu Shixin has hit an I/O meltdown caused by readahead in a memory-tight memcg. Addressed in the series "Fix I/O high when memory almost met memcg limit".
- In the series "mm/filemap: optimize folio adding and splitting" Kairui Song has optimized pagecache insertion, yielding ~10% performance improvement in one test.
- Baoquan He has cleaned up and consolidated the early zone initialization code in the series "mm/mm_init.c: refactor free_area_init_core()".
- Baoquan has also redone some MM initialization code in the series "mm/init: minor clean up and improvement".
- MM helper cleanups from Christoph Hellwig in his series "remove follow_pfn".
- More cleanups from Matthew Wilcox in the series "Various page->flags cleanups".
- Vlastimil Babka has contributed maintainability improvements in the series "memcg_kmem hooks refactoring".
- More folio conversions and cleanups in Matthew Wilcox's series: "Convert huge_zero_page to huge_zero_folio", "khugepaged folio conversions", "Remove page_idle and page_young wrappers", "Use folio APIs in procfs", "Clean up __folio_put()", "Some cleanups for memory-failure", "Remove page_mapping()", "More folio compat code removal".
- David Hildenbrand chipped in with "fs/proc/task_mmu: convert hugetlb functions to work on folios".
- Code consolidation and cleanup work related to GUP's handling of hugetlbs in Peter Xu's series "mm/gup: Unify hugetlb, part 2".
- Rick Edgecombe has developed some fixes to stack guard gaps in the series "Cover a guard gap corner case".
- Jinjiang Tu has fixed KSM's behaviour after a fork+exec in the series "mm/ksm: fix ksm exec support for prctl".
- Baolin Wang has implemented NUMA balancing for multi-size THPs. This is a simple first-cut implementation for now. The series is "support multi-size THP numa balancing".
- Cleanups to vma handling helper functions from Matthew Wilcox in the series "Unify vma_address and vma_pgoff_address".
- Some selftests maintenance work from Dev Jain in the series "selftests/mm: mremap_test: Optimizations and style fixes".
- Improvements to the swapping of multi-size THPs from Ryan Roberts in the series "Swap-out mTHP without splitting".
- Kefeng Wang has significantly optimized the handling of arm64's permission page faults in the series "arch/mm/fault: accelerate pagefault when badaccess" and "mm: remove arch's private VM_FAULT_BADMAP/BADACCESS".
- GUP cleanups from David Hildenbrand in "mm/gup: consistently call it GUP-fast".
- hugetlb fault code cleanups from Vishal Moola in "Hugetlb fault path to use struct vm_fault".
- selftests build fixes from John Hubbard in the series "Fix selftests/mm build without requiring "make headers"".
- Memory tiering fixes/improvements from Ho-Ren (Jack) Chuang in the series "Improved Memory Tier Creation for CPUless NUMA Nodes". Fixes the initialization code so that migration between different memory types works as intended.
- David Hildenbrand has improved follow_pte() and fixed an errant driver in the series "mm: follow_pte() improvements and acrn follow_pte() fixes".
- David also did some cleanup work on large folio mapcounts in his series "mm: mapcount for large folios + page_mapcount() cleanups".
- Folio conversions in KSM in Alex Shi's series "transfer page to folio in KSM".
- Barry Song has added some sysfs stats for monitoring multi-size THPs in the series "mm: add per-order mTHP alloc and swpout counters".
- Some zswap cleanups from Yosry Ahmed in the series "zswap same-filled and limit checking cleanups".
- Matthew Wilcox has been looking at buffer_head code and found the documentation to be lacking. The series is "Improve buffer head documentation".
- Multi-size THPs get more work, this time from Lance Yang. His series "mm/madvise: enhance lazyfreeing with mTHP in madvise_free" optimizes the freeing of these things.
- Kemeng Shi has added more userspace-visible writeback instrumentation in the series "Improve visibility of writeback".
- Kemeng Shi then sent some maintenance work on top in the series "Fix and cleanups to page-writeback".
- Matthew Wilcox reduces mmap_lock traffic in the anon vma code in the series "Improve anon_vma scalability for anon VMAs". Intel's test bot reported an improbable 3x improvement in one test.
- SeongJae Park adds some DAMON feature work in the series "mm/damon: add a DAMOS filter type for page granularity access recheck" and "selftests/damon: add DAMOS quota goal test".
- Also some maintenance work in the series "mm/damon/paddr: simplify page level access re-check for pageout" and "mm/damon: misc fixes and improvements".
- David Hildenbrand has disabled some known-to-fail selftests in the series "selftests: mm: cow: flag vmsplice() hugetlb tests as XFAIL".
- memcg metadata storage optimizations from Shakeel Butt in "memcg: reduce memory consumption by memcg stats".
- DAX fixes and maintenance work from Vishal Verma in the series "dax/bus.c: Fixups for dax-bus locking"."
* tag 'mm-stable-2024-05-17-19-19' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (426 commits) memcg, oom: cleanup unused memcg_oom_gfp_mask and memcg_oom_order selftests/mm: hugetlb_madv_vs_map: avoid test skipping by querying hugepage size at runtime mm/hugetlb: add missing VM_FAULT_SET_HINDEX in hugetlb_wp mm/hugetlb: add missing VM_FAULT_SET_HINDEX in hugetlb_fault selftests: cgroup: add tests to verify the zswap writeback path mm: memcg: make alloc_mem_cgroup_per_node_info() return bool mm/damon/core: fix return value from damos_wmark_metric_value mm: do not update memcg stats for NR_{FILE/SHMEM}_PMDMAPPED selftests: cgroup: remove redundant enabling of memory controller Docs/mm/damon/maintainer-profile: allow posting patches based on damon/next tree Docs/mm/damon/maintainer-profile: change the maintainer's timezone from PST to PT Docs/mm/damon/design: use a list for supported filters Docs/admin-guide/mm/damon/usage: fix wrong schemes effective quota update command Docs/admin-guide/mm/damon/usage: fix wrong example of DAMOS filter matching sysfs file selftests/damon: classify tests for functionalities and regressions selftests/damon/_damon_sysfs: use 'is' instead of '==' for 'None' selftests/damon/_damon_sysfs: find sysfs mount point from /proc/mounts selftests/damon/_damon_sysfs: check errors from nr_schemes file reads mm/damon/core: initialize ->esz_bp from damos_quota_init_priv() selftests/damon: add a test for DAMOS quota goal ... |
||
Thorsten Blum
|
84373132b8 |
rust: helpers: Fix grammar in comment
s/directly the bindings/the bindings directly/ Signed-off-by: Thorsten Blum <thorsten.blum@toblux.com> Reviewed-by: Trevor Gross <tmgross@umich.edu> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20240411205428.537700-1-thorsten.blum@toblux.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
Kent Overstreet
|
53ed0af496 |
rust: add a rust helper for krealloc()
Memory allocation profiling is turning krealloc() into a nontrivial macro - so for now, we need a helper for it. Until we have proper support on the rust side for memory allocation profiling this does mean that all Rust allocations will be accounted to the helper. Link: https://lkml.kernel.org/r/20240321163705.3067592-25-surenb@google.com Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Signed-off-by: Suren Baghdasaryan <surenb@google.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Acked-by: Miguel Ojeda <ojeda@kernel.org> Tested-by: Kees Cook <keescook@chromium.org> Cc: Alex Gaynor <alex.gaynor@gmail.com> Cc: Wedson Almeida Filho <wedsonaf@gmail.com> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Gary Guo <gary@garyguo.net> Cc: "Björn Roy Baron" <bjorn3_gh@protonmail.com> Cc: Benno Lossin <benno.lossin@proton.me> Cc: Andreas Hindborg <a.hindborg@samsung.com> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Christoph Lameter <cl@linux.com> Cc: Dennis Zhou <dennis@kernel.org> Cc: Pasha Tatashin <pasha.tatashin@soleen.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Tejun Heo <tj@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> |
||
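A rough sketch of the allocation path this helper exists for: the kernel's Rust allocator funnels allocations into `krealloc()` through the generated bindings (and, once `krealloc()` is a macro, through this C helper). This is not the in-tree `KernelAllocator`; it is an illustrative `GlobalAlloc` assuming `bindings::krealloc`, `bindings::kfree`, and `bindings::GFP_KERNEL` as available to the `kernel` crate:

```rust
// Illustrative GlobalAlloc, not the in-tree KernelAllocator.
use core::alloc::{GlobalAlloc, Layout};

struct SketchAllocator;

unsafe impl GlobalAlloc for SketchAllocator {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // krealloc(NULL, size, flags) behaves like kmalloc(size, flags).
        unsafe {
            bindings::krealloc(core::ptr::null(), layout.size(), bindings::GFP_KERNEL) as *mut u8
        }
    }

    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
        unsafe { bindings::kfree(ptr as *const core::ffi::c_void) }
    }
}
```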
Alice Ryhl
|
7324b88975 |
rust: workqueue: add helper for defining work_struct fields
The main challenge with defining `work_struct` fields is making sure that the function pointer stored in the `work_struct` is appropriate for the work item type it is embedded in. It needs to know the offset of the `work_struct` field being used (even if there are several!) so that it can do a `container_of`, and it needs to know the type of the work item so that it can call into the right user-provided code. All of this needs to happen in a way that provides a safe API to the user, so that users of the workqueue cannot mix up the function pointers. There are three important pieces that are relevant when doing this: * The pointer type. * The work item struct. This is what the pointer points at. * The `work_struct` field. This is a field of the work item struct. This patch introduces a separate trait for each piece. The pointer type is given a `WorkItemPointer` trait, which pointer types need to implement to be usable with the workqueue. This trait will be implemented for `Arc` and `Box` in a later patch in this patchset. Implementing this trait is unsafe because this is where the `container_of` operation happens, but user-code will not need to implement it themselves. The work item struct should then implement the `WorkItem` trait. This trait is where user-code specifies what they want to happen when a work item is executed. It also specifies what the correct pointer type is. Finally, to make the work item struct know the offset of its `work_struct` field, we use a trait called `HasWork<T, ID>`. If a type implements this trait, then the type declares that, at the given offset, there is a field of type `Work<T, ID>`. The trait is marked unsafe because the OFFSET constant must be correct, but we provide an `impl_has_work!` macro that can safely implement `HasWork<T>` on a type. The macro expands to something that only compiles if the specified field really has the type `Work<T>`. It is used like this: ``` struct MyWorkItem { work_field: Work<MyWorkItem, 1>, } impl_has_work! { impl HasWork<MyWorkItem, 1> for MyWorkItem { self.work_field } } ``` Note that since the `Work` type is annotated with an id, you can have several `work_struct` fields by using a different id for each one. Co-developed-by: Gary Guo <gary@garyguo.net> Signed-off-by: Gary Guo <gary@garyguo.net> Signed-off-by: Alice Ryhl <aliceryhl@google.com> Reviewed-by: Benno Lossin <benno.lossin@proton.me> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Reviewed-by: Andreas Hindborg <a.hindborg@samsung.com> Reviewed-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Tejun Heo <tj@kernel.org> |
||
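A sketch tying the three pieces together for a single work item type. The `impl_has_work!` usage mirrors the example in the commit text; the `WorkItem` trait shape (a `Pointer` associated type plus a `run` method taking that pointer) and the use of `Arc` as the pointer type are assumptions based on the description, since those impls land later in the series:

```rust
// Sketch only: trait shape and pointer type assumed from the commit text.
use kernel::prelude::*;
use kernel::sync::Arc;
use kernel::workqueue::{Work, WorkItem};

struct MyWorkItem {
    value: u32,
    work_field: Work<MyWorkItem>,
}

kernel::impl_has_work! {
    impl HasWork<MyWorkItem> for MyWorkItem { self.work_field }
}

impl WorkItem for MyWorkItem {
    // The pointer type that owns work items of this type while they are queued.
    type Pointer = Arc<MyWorkItem>;

    fn run(this: Arc<MyWorkItem>) {
        pr_info!("work item ran with value {}\n", this.value);
    }
}
```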
Linus Torvalds
|
a031fe8d1d |
Merge tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux
Pull rust updates from Miguel Ojeda: "In terms of lines, most changes this time are on the pinned-init API and infrastructure. While we have a Rust version upgrade, and thus a bunch of changes from the vendored 'alloc' crate as usual, this time those do not account for many lines.
Toolchain and infrastructure:
- Upgrade to Rust 1.71.1. This is the second such upgrade, which is a smaller jump compared to the last time. This version allows us to remove the '__rust_*' allocator functions -- the compiler now generates them as expected, thus now our 'KernelAllocator' is used. It also introduces the 'offset_of!' macro in the standard library (as an unstable feature) which we will need soon. So far, we were using a declarative macro as a prerequisite in some not-yet-landed patch series, which did not support sub-fields (i.e. nested structs): #[repr(C)] struct S { a: u16, b: (u8, u8), } assert_eq!(offset_of!(S, b.1), 3);
- Upgrade to bindgen 0.65.1. This is the first time we upgrade its version. Given it is a fairly big jump, it comes with a fair number of improvements/changes that affect us, such as a fix needed to support LLVM 16 as well as proper support for '__noreturn' C functions, which are now mapped to return the '!' type in Rust: void __noreturn f(void); // C pub fn f() -> !; // Rust
- 'scripts/rust_is_available.sh' improvements and fixes. This series takes care of all the issues known so far and adds a few new checks to cover for even more cases, plus adds some more help texts. All this together will hopefully make problematic setups easier to identify and to be solved by users building the kernel. In addition, it adds a test suite which covers all branches of the shell script, as well as tests for the issues found so far.
- Support rust-analyzer for out-of-tree modules too.
- Give 'cfg's to rust-analyzer for the 'core' and 'alloc' crates.
- Drop 'scripts/is_rust_module.sh' since it is not needed anymore.
Macros crate:
- New 'paste!' proc macro. This macro is a more flexible version of 'concat_idents!': it allows the resulting identifier to be used to declare new items and it allows to transform the identifiers before concatenating them, e.g. let x_1 = 42; paste!(let [<x _2>] = [<x _1>];); assert!(x_1 == x_2); The macro is then used for several of the pinned-init API changes in this pull.
Pinned-init API:
- Make '#[pin_data]' compatible with conditional compilation of fields, allowing to write code like: #[pin_data] pub struct Foo { #[cfg(CONFIG_BAR)] a: Bar, #[cfg(not(CONFIG_BAR))] a: Baz, }
- New '#[derive(Zeroable)]' proc macro for the 'Zeroable' trait, which allows 'unsafe' implementations for structs where every field implements the 'Zeroable' trait, e.g.: #[derive(Zeroable)] pub struct DriverData { id: i64, buf_ptr: *mut u8, len: usize, }
- Add '..Zeroable::zeroed()' syntax to the 'pin_init!' macro for zeroing all other fields, e.g.: pin_init!(Buf { buf: [1; 64], ..Zeroable::zeroed() });
- New '{,pin_}init_array_from_fn()' functions to create array initializers given a generator function, e.g.: let b: Box<[usize; 1_000]> = Box::init::<Error>( init_array_from_fn(|i| i) ).unwrap(); assert_eq!(b.len(), 1_000); assert_eq!(b[123], 123);
- New '{,pin_}chain' methods for '{,Pin}Init<T, E>' that allow to execute a closure on the value directly after initialization, e.g.: let foo = init!(Foo { buf <- init::zeroed() }).chain(|foo| { foo.setup(); Ok(()) });
- Support arbitrary paths in init macros, instead of just identifiers and generic types.
- Implement the 'Zeroable' trait for the 'UnsafeCell<T>' and 'Opaque<T>' types.
- Make initializer values inaccessible after initialization.
- Make guards in the init macros hygienic.
'allocator' module:
- Use 'krealloc_aligned()' in 'KernelAllocator::alloc' preventing misaligned allocations when the Rust 1.71.1 upgrade is applied later in this pull. The equivalent fix for the previous compiler version (where 'KernelAllocator' is not yet used) was merged into 6.5 already, which added the 'krealloc_aligned()' function used here.
- Implement 'KernelAllocator::{realloc, alloc_zeroed}' for performance, using 'krealloc_aligned()' too, which forwards the call to the C API.
'types' module:
- Make 'Opaque' be '!Unpin', removing the need to add a 'PhantomPinned' field to Rust structs that contain C structs which must not be moved.
- Make 'Opaque' use 'UnsafeCell' as the outer type, rather than inner.
Documentation:
- Suggest obtaining the source code of the Rust's 'core' library using the tarball instead of the repository.
MAINTAINERS:
- Andreas and Alice, from Samsung and Google respectively, are joining as reviewers of the "RUST" entry.
As well as a few other minor changes and cleanups"
* tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux: (42 commits) rust: init: update expanded macro explanation rust: init: add `{pin_}chain` functions to `{Pin}Init<T, E>` rust: init: make `PinInit<T, E>` a supertrait of `Init<T, E>` rust: init: implement `Zeroable` for `UnsafeCell<T>` and `Opaque<T>` rust: init: add support for arbitrary paths in init macros rust: init: add functions to create array initializers rust: init: add `..Zeroable::zeroed()` syntax for zeroing all missing fields rust: init: make initializer values inaccessible after initializing rust: init: wrap type checking struct initializers in a closure rust: init: make guards in the init macros hygienic rust: add derive macro for `Zeroable` rust: init: make `#[pin_data]` compatible with conditional compilation of fields rust: init: consolidate init macros docs: rust: clarify what 'rustup override' does docs: rust: update instructions for obtaining 'core' source docs: rust: add command line to rust-analyzer section scripts: generate_rust_analyzer: provide `cfg`s for `core` and `alloc` rust: bindgen: upgrade to 0.65.1 rust: enable `no_mangle_with_rust_abi` Clippy lint rust: upgrade to Rust 1.71.1 ... |
||
Aakash Sen Sharma
|
08ab786556 |
rust: bindgen: upgrade to 0.65.1
In LLVM 16, anonymous items may return names like `(unnamed union at ..)` rather than empty names [1], which breaks Rust-enabled builds because bindgen assumed an empty name instead of detecting them via `clang_Cursor_isAnonymous` [2]: $ make rustdoc LLVM=1 CLIPPY=1 -j$(nproc) RUSTC L rust/core.o BINDGEN rust/bindings/bindings_generated.rs BINDGEN rust/bindings/bindings_helpers_generated.rs BINDGEN rust/uapi/uapi_generated.rs thread 'main' panicked at '"ftrace_branch_data_union_(anonymous_at__/_/include/linux/compiler_types_h_146_2)" is not a valid Ident', .../proc-macro2-1.0.24/src/fallback.rs:693:9 ... thread 'main' panicked at '"ftrace_branch_data_union_(anonymous_at__/_/include/linux/compiler_types_h_146_2)" is not a valid Ident', .../proc-macro2-1.0.24/src/fallback.rs:693:9 ... This was fixed in bindgen 0.62.0. Therefore, upgrade bindgen to a more recent version, 0.65.1, to support LLVM 16. Since bindgen 0.58.0 changed the `--{white,black}list-*` flags to `--{allow,block}list-*` [3], update them on our side too. In addition, bindgen 0.61.0 moved its CLI utility into a binary crate called `bindgen-cli` [4]. Thus update the installation command in the Quick Start guide. Moreover, bindgen 0.61.0 changed the default functionality to bind `size_t` to `usize` [5] and added the `--no-size_t-is-usize` flag to not bind `size_t` as `usize`. Then bindgen 0.65.0 removed the `--size_t-is-usize` flag [6]. Thus stop passing the flag to bindgen. Finally, bindgen 0.61.0 added support for the `noreturn` attribute (in its different forms) [7]. Thus remove the infinite loop in our Rust panic handler after calling `BUG()`, since bindgen now correctly generates a `BUG()` binding that returns `!` instead of `()`. Link: |
||
Ariel Miculas
|
917b2e00b9 |
rust: helpers: sort includes alphabetically in rust/helpers.c
Sort the #include directives of rust/helpers.c alphabetically and add a comment specifying this. The reason for this is to improve readability and to be consistent with the other files with a similar approach within 'rust/'. Suggested-by: Miguel Ojeda <ojeda@kernel.org> Link: https://github.com/Rust-for-Linux/linux/issues/1003 Signed-off-by: Ariel Miculas <amiculas@cisco.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Reviewed-by: Gary Guo <gary@garyguo.net> Link: https://lore.kernel.org/r/20230426204923.16195-1-amiculas@cisco.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
Miguel Ojeda
|
a66d733da8 |
rust: support running Rust documentation tests as KUnit ones
Rust has documentation tests: these are typically examples of usage of any item (e.g. function, struct, module...). They are very convenient because they are just written alongside the documentation. For instance: /// Sums two numbers. /// /// ``` /// assert_eq!(mymod::f(10, 20), 30); /// ``` pub fn f(a: i32, b: i32) -> i32 { a + b } In userspace, the tests are collected and run via `rustdoc`. Using the tool as-is would be useful already, since it allows to compile-test most tests (thus enforcing they are kept in sync with the code they document) and run those that do not depend on in-kernel APIs. However, by transforming the tests into a KUnit test suite, they can also be run inside the kernel. Moreover, the tests get to be compiled as other Rust kernel objects instead of targeting userspace. On top of that, the integration with KUnit means the Rust support gets to reuse the existing testing facilities. For instance, the kernel log would look like: KTAP version 1 1..1 KTAP version 1 # Subtest: rust_doctests_kernel 1..59 # rust_doctest_kernel_build_assert_rs_0.location: rust/kernel/build_assert.rs:13 ok 1 rust_doctest_kernel_build_assert_rs_0 # rust_doctest_kernel_build_assert_rs_1.location: rust/kernel/build_assert.rs:56 ok 2 rust_doctest_kernel_build_assert_rs_1 # rust_doctest_kernel_init_rs_0.location: rust/kernel/init.rs:122 ok 3 rust_doctest_kernel_init_rs_0 ... # rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150 ok 59 rust_doctest_kernel_types_rs_2 # rust_doctests_kernel: pass:59 fail:0 skip:0 total:59 # Totals: pass:59 fail:0 skip:0 total:59 ok 1 rust_doctests_kernel Therefore, add support for running Rust documentation tests in KUnit. Some other notes about the current implementation and support follow. The transformation is performed by a couple scripts written as Rust hostprogs. Tests using the `?` operator are also supported as usual, e.g.: /// ``` /// # use kernel::{spawn_work_item, workqueue}; /// spawn_work_item!(workqueue::system(), || pr_info!("x"))?; /// # Ok::<(), Error>(()) /// ``` The tests are also compiled with Clippy under `CLIPPY=1`, just like normal code, thus also benefitting from extra linting. The names of the tests are currently automatically generated. This allows to reduce the burden for documentation writers, while keeping them fairly stable for bisection. This is an improvement over the `rustdoc`-generated names, which include the line number; but ideally we would like to get `rustdoc` to provide the Rust item path and a number (for multiple examples in a single documented Rust item). In order for developers to easily see from which original line a failed doctests came from, a KTAP diagnostic line is printed to the log, containing the location (file and line) of the original test (i.e. instead of the location in the generated Rust file): # rust_doctest_kernel_types_rs_2.location: rust/kernel/types.rs:150 This line follows the syntax for declaring test metadata in the proposed KTAP v2 spec [1], which may be used for the proposed KUnit test attributes API [2]. Thus hopefully this will make migration easier later on (suggested by David [3]). The original line in that test attribute is figured out by providing an anchor (suggested by Boqun [4]). The original file is found by walking the filesystem, checking directory prefixes to reduce the amount of combinations to check, and it is only done once per file. Ambiguities are detected and reported. 
A notable difference from KUnit C tests is that the Rust tests appear to assert using the usual `assert!` and `assert_eq!` macros from the Rust standard library (`core`). We provide a custom version that forwards the call to KUnit instead. Importantly, these macros do not require passing context, unlike the KUnit C ones (i.e. `struct kunit *`). This makes them easier to use, and readers of the documentation do not need to care about which testing framework is used. In addition, it may allow us to test third-party code more easily in the future. However, a current limitation is that KUnit does not support assertions in other tasks. Thus we presently simply print an error to the kernel log if an assertion actually failed. This should be revisited to properly fail the test, perhaps saving the context somewhere else, or letting KUnit handle it. Link: https://lore.kernel.org/lkml/20230420205734.1288498-1-rmoar@google.com/ [1] Link: https://lore.kernel.org/linux-kselftest/20230707210947.1208717-1-rmoar@google.com/ [2] Link: https://lore.kernel.org/rust-for-linux/CABVgOSkOLO-8v6kdAGpmYnZUb+LKOX0CtYCo-Bge7r_2YTuXDQ@mail.gmail.com/ [3] Link: https://lore.kernel.org/rust-for-linux/ZIps86MbJF%2FiGIzd@boqun-archlinux/ [4] Signed-off-by: Miguel Ojeda <ojeda@kernel.org> Reviewed-by: David Gow <davidgow@google.com> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org> |
||
Gary Guo
|
d2e3115d71 |
rust: error: impl Debug for Error with errname() integration
Integrate the `Error` type with `errname()` by providing a new `name()` method. Then, implement `Debug` for the type using the new method. [ Miguel: under `CONFIG_SYMBOLIC_ERRNAME=n`, `errname()` is a `static inline`, so added a helper to support that case, like we had in the `rust` branch. Also moved `#include` up and reworded commit message for clarity. ] Co-developed-by: Wedson Almeida Filho <walmeida@microsoft.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Co-developed-by: Sven Van Asbroeck <thesven73@gmail.com> Signed-off-by: Sven Van Asbroeck <thesven73@gmail.com> Signed-off-by: Gary Guo <gary@garyguo.net> Signed-off-by: Alice Ryhl <aliceryhl@google.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Link: https://lore.kernel.org/r/20230531174450.3733220-1-aliceryhl@google.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
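A small sketch of what the new `Debug` integration looks like from driver code; `kernel::error::code::EINVAL` is the usual error constant and the printed name depends on `CONFIG_SYMBOLIC_ERRNAME`:

```rust
// Sketch: with this commit, `Error` can be printed with `{:?}`. Under
// CONFIG_SYMBOLIC_ERRNAME=y the symbolic name (e.g. "EINVAL") is shown;
// otherwise errname() has no table and a fallback is printed instead.
use kernel::error::code;
use kernel::prelude::*;

fn report_failure() {
    let err = code::EINVAL;
    pr_err!("operation failed: {:?}\n", err);
}
```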
Wedson Almeida Filho
|
19096bce81 |
rust: sync: introduce CondVar
This is the traditional condition variable or monitor synchronisation primitive. It is implemented with C's `wait_queue_head_t`. It allows users to release a lock and go to sleep while guaranteeing that notifications won't be missed. This is achieved by enqueuing a wait entry before releasing the lock. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: Waiman Long <longman@redhat.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Link: https://lore.kernel.org/r/20230411054543.21278-12-wedsonaf@gmail.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
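A sketch of the classic wait/notify pattern the commit describes, written against the pin-init style used elsewhere in this series. The exact method names (`wait`, `notify_all`) and their interruptible/uninterruptible variants are assumptions based on the text:

```rust
// Sketch only: a flag protected by a Mutex, with a CondVar to sleep until
// it is set. Method names are assumed from the description above.
use kernel::prelude::*;
use kernel::sync::{CondVar, Mutex};

#[pin_data]
struct Flag {
    #[pin]
    ready: Mutex<bool>,
    #[pin]
    changed: CondVar,
}

impl Flag {
    fn new() -> impl PinInit<Self> {
        kernel::pin_init!(Self {
            ready <- kernel::new_mutex!(false),
            changed <- kernel::new_condvar!(),
        })
    }

    fn wait_ready(&self) {
        let mut guard = self.ready.lock();
        while !*guard {
            // The wait entry is enqueued before the mutex is released, so a
            // concurrent `set_ready()` notification cannot be missed.
            self.changed.wait(&mut guard);
        }
    }

    fn set_ready(&self) {
        *self.ready.lock() = true;
        self.changed.notify_all();
    }
}
```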
Wedson Almeida Filho
|
8da7a2b743 |
rust: introduce current
This allows Rust code to get a reference to the current task without having to increment the refcount, but still guaranteeing memory safety. Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Link: https://lore.kernel.org/r/20230411054543.21278-10-wedsonaf@gmail.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
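A sketch of borrowing the current task without touching its refcount, as described above; the `current!()` macro and the `pid()` accessor are assumed names:

```rust
// Sketch: the returned reference is only valid within the current context,
// which is why no refcount increment is needed.
use kernel::prelude::*;

fn log_current_pid() {
    let task = kernel::current!();
    pr_info!("running on behalf of pid {}\n", task.pid());
}
```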
Wedson Almeida Filho
|
313c4281bc |
rust: add basic Task
It is an abstraction for C's `struct task_struct`. It implements `AlwaysRefCounted`, so the refcount of the wrapped object is managed safely on the Rust side. Cc: Ingo Molnar <mingo@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Link: https://lore.kernel.org/r/20230411054543.21278-9-wedsonaf@gmail.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
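Because `Task` implements `AlwaysRefCounted`, a long-lived reference can be taken by converting a borrow into an `ARef<Task>`, which increments the refcount and releases it on drop. A minimal sketch, assuming the `ARef` type in `kernel::types`:

```rust
// Sketch: taking a counted reference so the task can outlive the current
// context. `ARef::from` increments the refcount; dropping the `ARef` puts it.
use kernel::task::Task;
use kernel::types::ARef;

fn retain_task(task: &Task) -> ARef<Task> {
    ARef::from(task)
}
```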
Wedson Almeida Filho
|
c6d917a498 |
rust: lock: introduce SpinLock
This is the `spinlock_t` lock backend and allows Rust code to use the kernel spinlock idiomatically. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: Waiman Long <longman@redhat.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Link: https://lore.kernel.org/r/20230419174426.132207-1-wedsonaf@gmail.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
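A sketch of a spinlock-protected counter in the pin-init style; `new_spinlock!` and the guard-based `lock()` API follow the pattern described above, with exact names assumed:

```rust
// Sketch only: constructor macro and guard API names assumed.
use kernel::prelude::*;
use kernel::sync::SpinLock;

#[pin_data]
struct Counter {
    #[pin]
    value: SpinLock<u64>,
}

impl Counter {
    fn new() -> impl PinInit<Self> {
        kernel::pin_init!(Self {
            value <- kernel::new_spinlock!(0),
        })
    }

    fn bump(&self) -> u64 {
        // The guard drops at the end of the scope, releasing the spinlock.
        let mut guard = self.value.lock();
        *guard += 1;
        *guard
    }
}
```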
Wedson Almeida Filho
|
6d20d629c6 |
rust: lock: introduce Mutex
This is the `struct mutex` lock backend and allows Rust code to use the kernel mutex idiomatically. Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Will Deacon <will@kernel.org> Cc: Waiman Long <longman@redhat.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Wedson Almeida Filho <walmeida@microsoft.com> Link: https://lore.kernel.org/r/20230411054543.21278-3-wedsonaf@gmail.com Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
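The same pattern with the sleepable `struct mutex` backend, mirroring the SpinLock sketch above; `new_mutex!` is an assumed name:

```rust
// Sketch only: this lock may sleep, so it must only be taken from process
// context; names assumed as in the SpinLock example.
use kernel::prelude::*;
use kernel::sync::Mutex;

#[pin_data]
struct Limits {
    #[pin]
    max_len: Mutex<usize>,
}

impl Limits {
    fn new() -> impl PinInit<Self> {
        kernel::pin_init!(Self {
            max_len <- kernel::new_mutex!(0),
        })
    }

    fn set_max_len(&self, len: usize) {
        *self.max_len.lock() = len;
    }
}
```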
Sven Van Asbroeck
|
752417b3f0 |
rust: error: Add a helper to convert a C ERR_PTR to a Result
Some kernel C API functions return a pointer which embeds an optional `errno`. Callers are supposed to check the returned pointer with `IS_ERR()` and if this returns `true`, retrieve the `errno` using `PTR_ERR()`. Create a Rust helper function to implement the Rust equivalent: transform a `*mut T` to `Result<*mut T>`. Lina: Imported from rust-for-linux/linux, with subsequent refactoring and contributions squashed in and attributed below. Renamed the function to from_err_ptr(). Co-developed-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Co-developed-by: Miguel Ojeda <ojeda@kernel.org> Signed-off-by: Miguel Ojeda <ojeda@kernel.org> Co-developed-by: Fox Chen <foxhlchen@gmail.com> Signed-off-by: Fox Chen <foxhlchen@gmail.com> Co-developed-by: Gary Guo <gary@garyguo.net> Signed-off-by: Gary Guo <gary@garyguo.net> Signed-off-by: Sven Van Asbroeck <thesven73@gmail.com> Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Asahi Lina <lina@asahilina.net> Link: https://lore.kernel.org/r/20230224-rust-error-v3-5-03779bddc02b@asahilina.net [ Add a removal of `#[allow(dead_code)]`. ] Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
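A sketch of wrapping a C API that returns a pointer-encoded errno; the C function here is purely hypothetical, and the example assumes the conversion is reachable as `kernel::error::from_err_ptr`:

```rust
// Sketch only: `sample_c_lookup` is a hypothetical C function that returns
// either a valid pointer or an ERR_PTR()-encoded errno.
use kernel::error::from_err_ptr;
use kernel::prelude::*;

extern "C" {
    fn sample_c_lookup(id: u32) -> *mut core::ffi::c_void;
}

fn lookup(id: u32) -> Result<*mut core::ffi::c_void> {
    // SAFETY: illustrative only; a real caller must uphold the C contract.
    let ptr = unsafe { sample_c_lookup(id) };
    // IS_ERR()/PTR_ERR() handling is folded into this one conversion.
    from_err_ptr(ptr)
}
```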
Asahi Lina
|
c7e20faa5f |
rust: error: Add Error::to_ptr()
This is the Rust equivalent to ERR_PTR(), for use in C callbacks. Marked as #[allow(dead_code)] for now, since it does not have any consumers yet. Reviewed-by: Martin Rodriguez Reboredo <yakoyoku@gmail.com> Signed-off-by: Asahi Lina <lina@asahilina.net> Reviewed-by: Gary Guo <gary@garyguo.net> Link: https://lore.kernel.org/r/20230224-rust-error-v3-2-03779bddc02b@asahilina.net Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
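The reverse direction, for C callbacks that must hand back an `ERR_PTR()`-style pointer; the callback signature is made up for illustration and the example assumes `to_ptr()` is callable from this context:

```rust
// Sketch only: a made-up C callback; the body is the Rust-side equivalent of
// `return ERR_PTR(-EINVAL);`.
use kernel::error::code;

unsafe extern "C" fn lookup_cb(arg: *mut core::ffi::c_void) -> *mut core::ffi::c_void {
    if arg.is_null() {
        return code::EINVAL.to_ptr();
    }
    arg
}
```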
Wedson Almeida Filho
|
9dc0436550 |
rust: sync: add Arc for ref-counted allocations
This is a basic implementation of `Arc` backed by C's `refcount_t`. It allows Rust code to idiomatically allocate memory that is ref-counted. Cc: Will Deacon <will@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Boqun Feng <boqun.feng@gmail.com> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Wedson Almeida Filho <wedsonaf@gmail.com> Reviewed-by: Alice Ryhl <aliceryhl@google.com> Reviewed-by: Gary Guo <gary@garyguo.net> Reviewed-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com> Acked-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
||
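A minimal sketch of the usage this enables: fallible allocation plus cheap refcount-bumping clones:

```rust
// Sketch: both handles refer to the same allocation, which is freed when the
// last one is dropped; `try_new` reports allocation failure as an error.
use kernel::prelude::*;
use kernel::sync::Arc;

struct Shared {
    id: u64,
}

fn share() -> Result<(Arc<Shared>, Arc<Shared>)> {
    let first = Arc::try_new(Shared { id: 1 })?;
    let second = first.clone();
    Ok((first, second))
}
```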
Miguel Ojeda
|
12f577216a |
rust: add C helpers
Introduces the source file that will contain forwarders to C macros and inlined functions. Initially this only contains a single helper, but will gain more as more functionality is added to the `kernel` crate in the future. Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Co-developed-by: Alex Gaynor <alex.gaynor@gmail.com> Signed-off-by: Alex Gaynor <alex.gaynor@gmail.com> Co-developed-by: Geoffrey Thomas <geofft@ldpreload.com> Signed-off-by: Geoffrey Thomas <geofft@ldpreload.com> Co-developed-by: Wedson Almeida Filho <wedsonaf@google.com> Signed-off-by: Wedson Almeida Filho <wedsonaf@google.com> Co-developed-by: Sven Van Asbroeck <thesven73@gmail.com> Signed-off-by: Sven Van Asbroeck <thesven73@gmail.com> Co-developed-by: Gary Guo <gary@garyguo.net> Signed-off-by: Gary Guo <gary@garyguo.net> Co-developed-by: Boqun Feng <boqun.feng@gmail.com> Signed-off-by: Boqun Feng <boqun.feng@gmail.com> Co-developed-by: Maciej Falkowski <m.falkowski@samsung.com> Signed-off-by: Maciej Falkowski <m.falkowski@samsung.com> Co-developed-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Wei Liu <wei.liu@kernel.org> Signed-off-by: Miguel Ojeda <ojeda@kernel.org> |
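A sketch of the forwarder pattern this file establishes, as seen from the Rust side: a C macro or `static inline` that Rust needs gets a small exported wrapper in `rust/helpers.c`, bindgen picks the wrapper up, and code in the `kernel` crate calls it through the generated `bindings` crate. The `ERR_PTR()` helper shown is illustrative, not the single helper this initial commit adds:

```rust
// Illustrative only: assumes a helper wrapping ERR_PTR() exists and is
// exposed to the `kernel` crate as `bindings::ERR_PTR`, with `bindings::EINVAL`
// generated from the C errno definitions.
fn einval_ptr() -> *mut core::ffi::c_void {
    // Equivalent of C's `ERR_PTR(-EINVAL)`; the errno is only encoded in the
    // pointer value, so there is nothing to allocate or free.
    unsafe { bindings::ERR_PTR(-(bindings::EINVAL as core::ffi::c_long)) }
}
```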