author    Linus Torvalds <torvalds@linux-foundation.org>    2023-08-29 08:19:46 -0700
committer Linus Torvalds <torvalds@linux-foundation.org>    2023-08-29 08:19:46 -0700
commit    a031fe8d1d32898582e36ccbffa9847d16f67aa2 (patch)
tree      bb097e00fcf06c92efffe95e48163e5658d527fc /rust/alloc/vec/mod.rs
parent    f2586d921cea4feeddd1cc5ee3495700540dba8f (diff)
parent    4af84c6a85c63bec24611e46bb3de2c0a6602a51 (diff)
Merge tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux
Pull rust updates from Miguel Ojeda:
 "In terms of lines, most changes this time are on the pinned-init API
  and infrastructure. While we have a Rust version upgrade, and thus a
  bunch of changes from the vendored 'alloc' crate as usual, this time
  those do not account for many lines.

  Toolchain and infrastructure:

   - Upgrade to Rust 1.71.1. This is the second such upgrade, which is a
     smaller jump compared to the last time.

     This version allows us to remove the '__rust_*' allocator functions
     -- the compiler now generates them as expected, thus now our
     'KernelAllocator' is used.

     It also introduces the 'offset_of!' macro in the standard library
     (as an unstable feature) which we will need soon. So far, we were
     using a declarative macro as a prerequisite in some not-yet-landed
     patch series, which did not support sub-fields (i.e. nested
     structs):

         #[repr(C)]
         struct S {
             a: u16,
             b: (u8, u8),
         }

         assert_eq!(offset_of!(S, b.1), 3);

   - Upgrade to bindgen 0.65.1. This is the first time we upgrade its
     version.

     Given it is a fairly big jump, it comes with a fair number of
     improvements/changes that affect us, such as a fix needed to
     support LLVM 16 as well as proper support for '__noreturn' C
     functions, which are now mapped to return the '!' type in Rust:

         void __noreturn f(void); // C
         pub fn f() -> !;         // Rust

   - 'scripts/rust_is_available.sh' improvements and fixes.

     This series takes care of all the issues known so far and adds a
     few new checks to cover for even more cases, plus adds some more
     help texts. All this together will hopefully make problematic
     setups easier to identify and to be solved by users building the
     kernel.

     In addition, it adds a test suite which covers all branches of the
     shell script, as well as tests for the issues found so far.

   - Support rust-analyzer for out-of-tree modules too.

   - Give 'cfg's to rust-analyzer for the 'core' and 'alloc' crates.

   - Drop 'scripts/is_rust_module.sh' since it is not needed anymore.

  Macros crate:

   - New 'paste!' proc macro.

     This macro is a more flexible version of 'concat_idents!': it
     allows the resulting identifier to be used to declare new items and
     it allows to transform the identifiers before concatenating them,
     e.g.

         let x_1 = 42;
         paste!(let [<x _2>] = [<x _1>];);
         assert!(x_1 == x_2);

     The macro is then used for several of the pinned-init API changes
     in this pull.

  Pinned-init API:

   - Make '#[pin_data]' compatible with conditional compilation of
     fields, allowing to write code like:

         #[pin_data]
         pub struct Foo {
             #[cfg(CONFIG_BAR)]
             a: Bar,
             #[cfg(not(CONFIG_BAR))]
             a: Baz,
         }

   - New '#[derive(Zeroable)]' proc macro for the 'Zeroable' trait,
     which allows 'unsafe' implementations for structs where every field
     implements the 'Zeroable' trait, e.g.:

         #[derive(Zeroable)]
         pub struct DriverData {
             id: i64,
             buf_ptr: *mut u8,
             len: usize,
         }

   - Add '..Zeroable::zeroed()' syntax to the 'pin_init!' macro for
     zeroing all other fields, e.g.:

         pin_init!(Buf {
             buf: [1; 64],
             ..Zeroable::zeroed()
         });

   - New '{,pin_}init_array_from_fn()' functions to create array
     initializers given a generator function, e.g.:

         let b: Box<[usize; 1_000]> = Box::init::<Error>(
             init_array_from_fn(|i| i)
         ).unwrap();

         assert_eq!(b.len(), 1_000);
         assert_eq!(b[123], 123);

   - New '{,pin_}chain' methods for '{,Pin}Init<T, E>' that allow to
     execute a closure on the value directly after initialization, e.g.:

         let foo = init!(Foo {
             buf <- init::zeroed()
         }).chain(|foo| {
             foo.setup();
             Ok(())
         });

   - Support arbitrary paths in init macros, instead of just identifiers
     and generic types.

   - Implement the 'Zeroable' trait for the 'UnsafeCell<T>' and
     'Opaque<T>' types.

   - Make initializer values inaccessible after initialization.

   - Make guards in the init macros hygienic.

  'allocator' module:

   - Use 'krealloc_aligned()' in 'KernelAllocator::alloc' preventing
     misaligned allocations when the Rust 1.71.1 upgrade is applied
     later in this pull.

     The equivalent fix for the previous compiler version (where
     'KernelAllocator' is not yet used) was merged into 6.5 already,
     which added the 'krealloc_aligned()' function used here.

   - Implement 'KernelAllocator::{realloc, alloc_zeroed}' for
     performance, using 'krealloc_aligned()' too, which forwards the
     call to the C API.

  'types' module:

   - Make 'Opaque' be '!Unpin', removing the need to add a
     'PhantomPinned' field to Rust structs that contain C structs which
     must not be moved.

   - Make 'Opaque' use 'UnsafeCell' as the outer type, rather than
     inner.

  Documentation:

   - Suggest obtaining the source code of the Rust's 'core' library
     using the tarball instead of the repository.

  MAINTAINERS:

   - Andreas and Alice, from Samsung and Google respectively, are
     joining as reviewers of the "RUST" entry.

  As well as a few other minor changes and cleanups"

* tag 'rust-6.6' of https://github.com/Rust-for-Linux/linux: (42 commits)
  rust: init: update expanded macro explanation
  rust: init: add `{pin_}chain` functions to `{Pin}Init<T, E>`
  rust: init: make `PinInit<T, E>` a supertrait of `Init<T, E>`
  rust: init: implement `Zeroable` for `UnsafeCell<T>` and `Opaque<T>`
  rust: init: add support for arbitrary paths in init macros
  rust: init: add functions to create array initializers
  rust: init: add `..Zeroable::zeroed()` syntax for zeroing all missing fields
  rust: init: make initializer values inaccessible after initializing
  rust: init: wrap type checking struct initializers in a closure
  rust: init: make guards in the init macros hygienic
  rust: add derive macro for `Zeroable`
  rust: init: make `#[pin_data]` compatible with conditional compilation of fields
  rust: init: consolidate init macros
  docs: rust: clarify what 'rustup override' does
  docs: rust: update instructions for obtaining 'core' source
  docs: rust: add command line to rust-analyzer section
  scripts: generate_rust_analyzer: provide `cfg`s for `core` and `alloc`
  rust: bindgen: upgrade to 0.65.1
  rust: enable `no_mangle_with_rust_abi` Clippy lint
  rust: upgrade to Rust 1.71.1
  ...
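A note on the documentation hunks in the diff below: the vendored 'alloc' update replaces exact-capacity assertions such as assert_eq!(vec.capacity(), 10) with lower-bound checks, because Vec::with_capacity(n) only guarantees room for *at least* n elements. A minimal standalone sketch of that guarantee (not taken from the patch itself):

    fn main() {
        let mut v: Vec<u32> = Vec::with_capacity(10);

        // `with_capacity(10)` guarantees space for at least 10 elements; the
        // allocator may hand back more, so only a lower bound can be asserted.
        assert!(v.capacity() >= 10);

        // Pushing up to the requested capacity never reallocates...
        for i in 0..10 {
            v.push(i);
        }
        assert_eq!(v.len(), 10);
        assert!(v.capacity() >= 10);

        // ...while pushing beyond it may.
        v.push(10);
        assert!(v.capacity() >= 11);
    }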
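Similarly, the clone_from hunk below switches Vec's Clone::clone_from to the slice-side SpecCloneIntoVec specialization; the user-visible contract is unchanged. A minimal sketch of that contract (the buffer-reuse observation is an implementation detail, shown only for illustration):

    fn main() {
        let src = vec![1, 2, 3];

        // Destination already owns a buffer large enough for `src`.
        let mut dst: Vec<i32> = Vec::with_capacity(16);
        dst.extend([7, 8, 9, 10]);
        let ptr_before = dst.as_ptr();

        // `clone_from` is equivalent to `dst = src.clone()`, but may reuse
        // the existing allocation instead of making a fresh one.
        dst.clone_from(&src);
        assert_eq!(dst, [1, 2, 3]);

        // Whether the old buffer was reused is not guaranteed; just report it.
        println!("buffer reused: {}", dst.as_ptr() == ptr_before);
    }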
Diffstat (limited to 'rust/alloc/vec/mod.rs')
-rw-r--r--  rust/alloc/vec/mod.rs  |  84
1 file changed, 19 insertions(+), 65 deletions(-)
diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs
index 94995913566b..05c70de0227e 100644
--- a/rust/alloc/vec/mod.rs
+++ b/rust/alloc/vec/mod.rs
@@ -58,13 +58,9 @@
#[cfg(not(no_global_oom_handling))]
use core::cmp;
use core::cmp::Ordering;
-use core::convert::TryFrom;
use core::fmt;
use core::hash::{Hash, Hasher};
-use core::intrinsics::assume;
use core::iter;
-#[cfg(not(no_global_oom_handling))]
-use core::iter::FromIterator;
use core::marker::PhantomData;
use core::mem::{self, ManuallyDrop, MaybeUninit, SizedTypeProperties};
use core::ops::{self, Index, IndexMut, Range, RangeBounds};
@@ -381,8 +377,8 @@ mod spec_extend;
/// Currently, `Vec` does not guarantee the order in which elements are dropped.
/// The order has changed in the past and may change again.
///
-/// [`get`]: ../../std/vec/struct.Vec.html#method.get
-/// [`get_mut`]: ../../std/vec/struct.Vec.html#method.get_mut
+/// [`get`]: slice::get
+/// [`get_mut`]: slice::get_mut
/// [`String`]: crate::string::String
/// [`&str`]: type@str
/// [`shrink_to_fit`]: Vec::shrink_to_fit
@@ -708,14 +704,14 @@ impl<T, A: Allocator> Vec<T, A> {
///
/// // The vector contains no items, even though it has capacity for more
/// assert_eq!(vec.len(), 0);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
///
/// // These are all done without reallocating...
/// for i in 0..10 {
/// vec.push(i);
/// }
/// assert_eq!(vec.len(), 10);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
///
/// // ...but this may make the vector reallocate
/// vec.push(11);
@@ -766,14 +762,14 @@ impl<T, A: Allocator> Vec<T, A> {
///
/// // The vector contains no items, even though it has capacity for more
/// assert_eq!(vec.len(), 0);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
///
/// // These are all done without reallocating...
/// for i in 0..10 {
/// vec.push(i);
/// }
/// assert_eq!(vec.len(), 10);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
///
/// // ...but this may make the vector reallocate
/// vec.push(11);
@@ -999,7 +995,7 @@ impl<T, A: Allocator> Vec<T, A> {
/// ```
/// let mut vec: Vec<i32> = Vec::with_capacity(10);
/// vec.push(42);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
/// ```
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
@@ -1150,7 +1146,7 @@ impl<T, A: Allocator> Vec<T, A> {
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
/// vec.shrink_to_fit();
/// assert!(vec.capacity() >= 3);
/// ```
@@ -1177,7 +1173,7 @@ impl<T, A: Allocator> Vec<T, A> {
/// ```
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
/// vec.shrink_to(4);
/// assert!(vec.capacity() >= 4);
/// vec.shrink_to(0);
@@ -1212,7 +1208,7 @@ impl<T, A: Allocator> Vec<T, A> {
/// let mut vec = Vec::with_capacity(10);
/// vec.extend([1, 2, 3]);
///
- /// assert_eq!(vec.capacity(), 10);
+ /// assert!(vec.capacity() >= 10);
/// let slice = vec.into_boxed_slice();
/// assert_eq!(slice.into_vec().capacity(), 3);
/// ```
@@ -1358,11 +1354,7 @@ impl<T, A: Allocator> Vec<T, A> {
pub fn as_ptr(&self) -> *const T {
// We shadow the slice method of the same name to avoid going through
// `deref`, which creates an intermediate reference.
- let ptr = self.buf.ptr();
- unsafe {
- assume(!ptr.is_null());
- }
- ptr
+ self.buf.ptr()
}
/// Returns an unsafe mutable pointer to the vector's buffer, or a dangling
@@ -1395,11 +1387,7 @@ impl<T, A: Allocator> Vec<T, A> {
pub fn as_mut_ptr(&mut self) -> *mut T {
// We shadow the slice method of the same name to avoid going through
// `deref_mut`, which creates an intermediate reference.
- let ptr = self.buf.ptr();
- unsafe {
- assume(!ptr.is_null());
- }
- ptr
+ self.buf.ptr()
}
/// Returns a reference to the underlying allocator.
@@ -2892,35 +2880,6 @@ impl<T, A: Allocator> ops::DerefMut for Vec<T, A> {
}
#[cfg(not(no_global_oom_handling))]
-trait SpecCloneFrom {
- fn clone_from(this: &mut Self, other: &Self);
-}
-
-#[cfg(not(no_global_oom_handling))]
-impl<T: Clone, A: Allocator> SpecCloneFrom for Vec<T, A> {
- default fn clone_from(this: &mut Self, other: &Self) {
- // drop anything that will not be overwritten
- this.truncate(other.len());
-
- // self.len <= other.len due to the truncate above, so the
- // slices here are always in-bounds.
- let (init, tail) = other.split_at(this.len());
-
- // reuse the contained values' allocations/resources.
- this.clone_from_slice(init);
- this.extend_from_slice(tail);
- }
-}
-
-#[cfg(not(no_global_oom_handling))]
-impl<T: Copy, A: Allocator> SpecCloneFrom for Vec<T, A> {
- fn clone_from(this: &mut Self, other: &Self) {
- this.clear();
- this.extend_from_slice(other);
- }
-}
-
-#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
#[cfg(not(test))]
@@ -2940,7 +2899,7 @@ impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
}
fn clone_from(&mut self, other: &Self) {
- SpecCloneFrom::clone_from(self, other)
+ crate::slice::SpecCloneIntoVec::clone_into(other.as_slice(), self);
}
}
@@ -2948,7 +2907,6 @@ impl<T: Clone, A: Allocator + Clone> Clone for Vec<T, A> {
/// as required by the `core::borrow::Borrow` implementation.
///
/// ```
-/// #![feature(build_hasher_simple_hash_one)]
/// use std::hash::BuildHasher;
///
/// let b = std::collections::hash_map::RandomState::new();
@@ -3330,7 +3288,7 @@ impl<'a, T: Copy + 'a, A: Allocator + 'a> Extend<&'a T> for Vec<T, A> {
}
}
-/// Implements comparison of vectors, [lexicographically](core::cmp::Ord#lexicographical-comparison).
+/// Implements comparison of vectors, [lexicographically](Ord#lexicographical-comparison).
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: PartialOrd, A: Allocator> PartialOrd for Vec<T, A> {
#[inline]
@@ -3342,7 +3300,7 @@ impl<T: PartialOrd, A: Allocator> PartialOrd for Vec<T, A> {
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Eq, A: Allocator> Eq for Vec<T, A> {}
-/// Implements ordering of vectors, [lexicographically](core::cmp::Ord#lexicographical-comparison).
+/// Implements ordering of vectors, [lexicographically](Ord#lexicographical-comparison).
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Ord, A: Allocator> Ord for Vec<T, A> {
#[inline]
@@ -3365,8 +3323,7 @@ unsafe impl<#[may_dangle] T, A: Allocator> Drop for Vec<T, A> {
}
#[stable(feature = "rust1", since = "1.0.0")]
-#[rustc_const_unstable(feature = "const_default_impls", issue = "87864")]
-impl<T> const Default for Vec<T> {
+impl<T> Default for Vec<T> {
/// Creates an empty `Vec<T>`.
///
/// The vector will not allocate until elements are pushed onto it.
@@ -3462,10 +3419,7 @@ impl<T, const N: usize> From<[T; N]> for Vec<T> {
/// ```
#[cfg(not(test))]
fn from(s: [T; N]) -> Vec<T> {
- <[T]>::into_vec(
- #[rustc_box]
- Box::new(s),
- )
+ <[T]>::into_vec(Box::new(s))
}
#[cfg(test)]
@@ -3490,8 +3444,8 @@ where
///
/// ```
/// # use std::borrow::Cow;
- /// let o: Cow<[i32]> = Cow::Owned(vec![1, 2, 3]);
- /// let b: Cow<[i32]> = Cow::Borrowed(&[1, 2, 3]);
+ /// let o: Cow<'_, [i32]> = Cow::Owned(vec![1, 2, 3]);
+ /// let b: Cow<'_, [i32]> = Cow::Borrowed(&[1, 2, 3]);
/// assert_eq!(Vec::from(o), Vec::from(b));
/// ```
fn from(s: Cow<'a, [T]>) -> Vec<T> {