mirror of https://github.com/torvalds/linux.git, synced 2025-12-07 11:56:58 +00:00
Merge tag 'rust-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ojeda/linux
Pull Rust updates from Miguel Ojeda:
"Toolchain and infrastructure:
- Add support for 'syn'.
Syn is a parsing library for parsing a stream of Rust tokens into a
syntax tree of Rust source code.
Currently this library is geared toward use in Rust procedural
macros, but contains some APIs that may be useful more generally.
'syn' allows us to greatly simplify writing complex macros such as
'pin-init' (Benno has already prepared the 'syn'-based version). We
will use it in the 'macros' crate too.
'syn' is the most downloaded Rust crate (according to crates.io),
and it is also used by the Rust compiler itself. While the amount
of code is substantial, there should not be many updates needed for
these crates, and even if there are, they should not be too big,
e.g. +7k -3k lines across the 3 crates in the last year.
'syn' requires two smaller dependencies: 'quote' and 'proc-macro2'.
I only modified their code to remove a third dependency
('unicode-ident') and to add the SPDX identifiers. The code can be
easily verified to exactly match upstream with the provided
scripts.
They are all licensed under "Apache-2.0 OR MIT", like the other
vendored 'alloc' crate we had for a while.
Please see the merge commit with the cover letter for more context.
- Allow 'unreachable_pub' and 'clippy::disallowed_names' for
doctests.
Examples (i.e. doctests) may want to do things like show public
items and use names such as 'foo'.
Nevertheless, we still try to keep examples as close to real code
as possible (this is part of why running Clippy on doctests is
important for us, e.g. for safety comments, which userspace Rust
does not support yet but we are stricter).
'kernel' crate:
- Replace our custom 'CStr' type with 'core::ffi::CStr'.
Using the standard library type reduces our custom code footprint,
and we retain needed custom functionality through an extension
trait and a new 'fmt!' macro which replaces the previous 'core'
import.
This started in 6.17 and continued in 6.18, and we finally land the
replacement now. This required quite some stamina from Tamir, who
split the changes in steps to prepare for the flag day change here.
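The extension-trait pattern used for this can be sketched in plain Rust. The trait and method names below are illustrative stand-ins, not the kernel's actual 'CStrExt' API:

```rust
use core::ffi::CStr;

// Hypothetical extension trait adding a kernel-style helper to the
// standard `core::ffi::CStr`; the names are illustrative, not the
// kernel's real `CStrExt`.
trait CStrExtSketch {
    /// Returns the string's bytes without the trailing NUL byte.
    fn bytes_no_nul(&self) -> &[u8];
}

impl CStrExtSketch for CStr {
    fn bytes_no_nul(&self) -> &[u8] {
        self.to_bytes()
    }
}

fn main() {
    // C string literals (Rust >= 1.77) produce `&CStr` directly.
    let s: &CStr = c"hello";
    assert_eq!(s.bytes_no_nul(), b"hello");
    println!("ok");
}
```

The point of the pattern is that downstream code keeps its familiar helper methods while the type itself is the standard-library one.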
- Replace 'kernel::c_str!' with C string literals.
C string literals, added in Rust 1.77, produce '&CStr's
(the 'core' one), so now we can write:
c"hi"
instead of:
c_str!("hi")
- Add 'num' module for numerical features.
It includes the 'Integer' trait, implemented for all primitive
integer types.
It also includes the 'Bounded' integer wrapping type: an integer
value that requires only the 'N' least significant bits of the
wrapped type to be encoded:
// An unsigned 8-bit integer, of which only the 4 LSBs are used.
let v = Bounded::<u8, 4>::new::<15>();
assert_eq!(v.get(), 15);
'Bounded' is useful to e.g. enforce guarantees when working with
bitfields that have an arbitrary number of bits.
Values can also be constructed from simple non-constant expressions
or, for more complex ones, validated at runtime.
'Bounded' also comes with comparison and arithmetic operations
(with both their backing type and other 'Bounded's with a
compatible backing type), casts to change the backing type,
extending/shrinking and infallible/fallible conversions from/to
primitives as applicable.
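The idea behind such a bounded wrapper can be shown with a minimal self-contained sketch; this is illustrative only, not the kernel's 'Bounded' API (which wraps any integer type and offers the richer operations described above):

```rust
// Minimal sketch of a `Bounded`-like wrapper: a `u8` value restricted
// to its `N` least significant bits.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Bounded<const N: u32>(u8);

impl<const N: u32> Bounded<N> {
    // Runtime-validated constructor: fails if the value needs more
    // than `N` bits.
    fn try_new(v: u8) -> Option<Self> {
        if N >= 8 || u32::from(v) < (1u32 << N) {
            Some(Self(v))
        } else {
            None
        }
    }

    fn get(self) -> u8 {
        self.0
    }
}

fn main() {
    // With 4 usable bits, 15 fits but 16 does not.
    let v = Bounded::<4>::try_new(15).unwrap();
    assert_eq!(v.get(), 15);
    assert!(Bounded::<4>::try_new(16).is_none());
    println!("ok");
}
```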
- 'rbtree' module: add immutable cursor ('Cursor').
It enables using just an immutable tree reference where
appropriate. The existing fully-featured mutable cursor is renamed
to 'CursorMut'.
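The split between an immutable and a mutable cursor can be illustrated with a small stand-alone sketch; a plain vector stands in for the red-black tree, and none of this is the kernel's actual 'rbtree' API:

```rust
// A read-only cursor obtainable from a shared reference; mutation
// would require a separate `CursorMut`-style type taking `&mut Tree`.
struct Tree {
    items: Vec<(u32, &'static str)>,
}

struct Cursor<'a> {
    tree: &'a Tree,
    idx: usize,
}

impl Tree {
    fn cursor_front(&self) -> Option<Cursor<'_>> {
        (!self.items.is_empty()).then(|| Cursor { tree: self, idx: 0 })
    }
}

impl<'a> Cursor<'a> {
    fn current(&self) -> (&'a u32, &'static str) {
        let (k, v) = &self.tree.items[self.idx];
        (k, *v)
    }

    fn move_next(self) -> Option<Cursor<'a>> {
        let idx = self.idx + 1;
        (idx < self.tree.items.len()).then(move || Cursor { tree: self.tree, idx })
    }
}

fn main() {
    let tree = Tree { items: vec![(1, "one"), (2, "two")] };
    // A shared reference is enough for read-only traversal.
    let c = tree.cursor_front().expect("non-empty");
    assert_eq!(*c.current().0, 1);
    let c = c.move_next().expect("has a second entry");
    assert_eq!(c.current().1, "two");
    println!("ok");
}
```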
kallsyms:
- Fix wrong "big" kernel symbol type read from procfs.
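The bug and its fix can be illustrated with a simplified stand-in for the kallsyms name table; the real code also indirects through 'kallsyms_token_index', and the tables below are made up:

```rust
// Each symbol is stored as a length (one byte, or two when the first
// byte's MSB marks a "big" symbol) followed by token indices; the
// symbol's type is the first character of its first token.
fn symbol_type(names: &[u8], mut off: usize, token_table: &[&str]) -> char {
    // The fix: a "big" symbol spends an extra byte on its length, so
    // skip it before reading the first token index.
    if names[off] & 0x80 != 0 {
        off += 1;
    }
    // `off` now points at the (low byte of the) length; the first
    // token index follows it.
    token_table[names[off + 1] as usize]
        .chars()
        .next()
        .unwrap()
}

fn main() {
    let tokens = ["t_local", "T_global"];
    // Normal symbol: [len, first_token_index, ...]
    assert_eq!(symbol_type(&[2, 1, 0], 0, &tokens), 'T');
    // "Big" symbol: [len_hi | 0x80, len_lo, first_token_index, ...]
    assert_eq!(symbol_type(&[0x81, 2, 0, 1], 0, &tokens), 't');
    println!("ok");
}
```

Without the MSB check, a "big" symbol's second length byte would be misread as the first token index, yielding the wrong type character.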
'pin-init' crate:
- A couple minor fixes (Benno asked me to pick these patches up for
him this cycle).
Documentation:
- Quick Start guide: add Debian 13 (Trixie).
Debian Stable is now able to build Linux, since Debian 13 (released
2025-08-09) packages Rust 1.85.0, which is recent enough.
We are planning to propose that the minimum supported Rust version
in Linux follows Debian Stable releases, with Debian 13 being the
first one we upgrade to, i.e. Rust 1.85.
MAINTAINERS:
- Add entry for the new 'num' module.
- Remove Alex as Rust maintainer: he hasn't had the time to
contribute for a few years now, so it is a no-op change in
practice.
And a few other cleanups and improvements"
* tag 'rust-6.19' of git://git.kernel.org/pub/scm/linux/kernel/git/ojeda/linux: (53 commits)
rust: macros: support `proc-macro2`, `quote` and `syn`
rust: syn: enable support in kbuild
rust: syn: add `README.md`
rust: syn: remove `unicode-ident` dependency
rust: syn: add SPDX License Identifiers
rust: syn: import crate
rust: quote: enable support in kbuild
rust: quote: add `README.md`
rust: quote: add SPDX License Identifiers
rust: quote: import crate
rust: proc-macro2: enable support in kbuild
rust: proc-macro2: add `README.md`
rust: proc-macro2: remove `unicode_ident` dependency
rust: proc-macro2: add SPDX License Identifiers
rust: proc-macro2: import crate
rust: kbuild: support using libraries in `rustc_procmacro`
rust: kbuild: support skipping flags in `rustc_test_library`
rust: kbuild: add proc macro library support
rust: kbuild: simplify `--cfg` handling
rust: kbuild: introduce `core-flags` and `core-skip_flags`
...
.gitignore (vendored), 1 change:

@@ -41,6 +41,7 @@
 *.o.*
 *.patch
 *.pyc
+*.rlib
 *.rmeta
 *.rpm
 *.rsi
Documentation/rust/quick-start.rst:

@@ -39,8 +39,8 @@ of the box, e.g.::
 
 Debian
 ******
 
-Debian Testing and Debian Unstable (Sid), outside of the freeze period, provide
-recent Rust releases and thus they should generally work out of the box, e.g.::
+Debian 13 (Trixie), as well as Testing and Debian Unstable (Sid) provide recent
+Rust releases and thus they should generally work out of the box, e.g.::
 
     apt install rustc rust-src bindgen rustfmt rust-clippy
MAINTAINERS:

@@ -22524,7 +22524,6 @@ F:	tools/verification/
 
 RUST
 M:	Miguel Ojeda <ojeda@kernel.org>
-M:	Alex Gaynor <alex.gaynor@gmail.com>
 R:	Boqun Feng <boqun.feng@gmail.com>
 R:	Gary Guo <gary@garyguo.net>
 R:	Björn Roy Baron <bjorn3_gh@protonmail.com>
@@ -22561,6 +22560,14 @@ T:	git https://github.com/Rust-for-Linux/linux.git alloc-next
 F:	rust/kernel/alloc.rs
 F:	rust/kernel/alloc/
 
+RUST [NUM]
+M:	Alexandre Courbot <acourbot@nvidia.com>
+R:	Yury Norov <yury.norov@gmail.com>
+L:	rust-for-linux@vger.kernel.org
+S:	Maintained
+F:	rust/kernel/num.rs
+F:	rust/kernel/num/
+
 RUST [PIN-INIT]
 M:	Benno Lossin <lossin@kernel.org>
 L:	rust-for-linux@vger.kernel.org
Makefile, 7 changes:

@@ -1830,10 +1830,17 @@ rusttest: prepare
	$(Q)$(MAKE) $(build)=rust $@
 
 # Formatting targets
 #
+# Generated files as well as vendored crates are skipped.
 PHONY += rustfmt rustfmtcheck
 
 rustfmt:
	$(Q)find $(srctree) $(RCS_FIND_IGNORE) \
+		\( \
+		-path $(srctree)/rust/proc-macro2 \
+		-o -path $(srctree)/rust/quote \
+		-o -path $(srctree)/rust/syn \
+		\) -prune -o \
		-type f -a -name '*.rs' -a ! -name '*generated*' -print \
		| xargs $(RUSTFMT) $(rustfmt_flags)
@@ -2,6 +2,7 @@
 
 // Copyright (C) 2025 Google LLC.
 
+use kernel::fmt;
 use kernel::prelude::*;
 
 use crate::defs::*;
@@ -76,8 +77,8 @@ impl From<kernel::alloc::AllocError> for BinderError {
     }
 }
 
-impl core::fmt::Debug for BinderError {
-    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
+impl fmt::Debug for BinderError {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         match self.reply {
             BR_FAILED_REPLY => match self.source.as_ref() {
                 Some(source) => f
@@ -331,7 +331,7 @@ impl Process {
             KVVec::with_capacity(8, GFP_KERNEL).unwrap_or_else(|_err| KVVec::new());
 
         let mut inner = self.lock_with_nodes();
-        let mut curr = inner.nodes.cursor_front();
+        let mut curr = inner.nodes.cursor_front_mut();
         while let Some(cursor) = curr {
             let (key, node) = cursor.current();
             let key = *key;
@@ -345,7 +345,7 @@ impl Process {
             // Find the node we were looking at and try again. If the set of nodes was changed,
             // then just proceed to the next node. This is ok because we don't guarantee the
             // inclusion of nodes that are added or removed in parallel with this operation.
-            curr = inner.nodes.cursor_lower_bound(&key);
+            curr = inner.nodes.cursor_lower_bound_mut(&key);
             continue;
         }
@@ -623,7 +623,7 @@ impl Process {
                 " ref {}: desc {} {}node {debug_id} s {strong} w {weak}",
                 r.debug_id,
                 r.handle,
-                if dead { "dead " } else { "" },
+                if dead { "dead " } else { "" }
             );
         }
     }
@@ -1320,7 +1320,7 @@ impl Process {
     {
         while let Some(node) = {
             let mut lock = self.inner.lock();
-            lock.nodes.cursor_front().map(|c| c.remove_current().1)
+            lock.nodes.cursor_front_mut().map(|c| c.remove_current().1)
         } {
             node.to_key_value().1.release();
         }
@@ -207,7 +207,7 @@ impl<T> TreeRangeAllocator<T> {
     }
 
     pub(crate) fn reservation_abort(&mut self, offset: usize) -> Result<FreedRange> {
-        let mut cursor = self.tree.cursor_lower_bound(&offset).ok_or_else(|| {
+        let mut cursor = self.tree.cursor_lower_bound_mut(&offset).ok_or_else(|| {
             pr_warn!(
                 "EINVAL from range_alloc.reservation_abort - offset: {}",
                 offset
@@ -61,7 +61,7 @@ impl BinderStats {
 
 mod strings {
     use core::str::from_utf8_unchecked;
-    use kernel::str::CStr;
+    use kernel::str::{CStr, CStrExt as _};
 
     extern "C" {
         static binder_command_strings: [*const u8; super::BC_COUNT];
@@ -72,7 +72,7 @@ mod strings {
         // SAFETY: Accessing `binder_command_strings` is always safe.
         let c_str_ptr = unsafe { binder_command_strings[i] };
         // SAFETY: The `binder_command_strings` array only contains nul-terminated strings.
-        let bytes = unsafe { CStr::from_char_ptr(c_str_ptr) }.as_bytes();
+        let bytes = unsafe { CStr::from_char_ptr(c_str_ptr) }.to_bytes();
         // SAFETY: The `binder_command_strings` array only contains strings with ascii-chars.
         unsafe { from_utf8_unchecked(bytes) }
     }
@@ -81,7 +81,7 @@ mod strings {
         // SAFETY: Accessing `binder_return_strings` is always safe.
         let c_str_ptr = unsafe { binder_return_strings[i] };
         // SAFETY: The `binder_command_strings` array only contains nul-terminated strings.
-        let bytes = unsafe { CStr::from_char_ptr(c_str_ptr) }.as_bytes();
+        let bytes = unsafe { CStr::from_char_ptr(c_str_ptr) }.to_bytes();
         // SAFETY: The `binder_command_strings` array only contains strings with ascii-chars.
         unsafe { from_utf8_unchecked(bytes) }
     }
@@ -1,12 +1,13 @@
 // SPDX-License-Identifier: GPL-2.0
 
 use super::{NullBlkDevice, THIS_MODULE};
-use core::fmt::{Display, Write};
 use kernel::{
     block::mq::gen_disk::{GenDisk, GenDiskBuilder},
-    c_str,
     configfs::{self, AttributeOperations},
-    configfs_attrs, new_mutex,
+    configfs_attrs,
+    fmt::{self, Write as _},
+    new_mutex,
     page::PAGE_SIZE,
     prelude::*,
     str::{kstrtobool_bytes, CString},
@@ -99,8 +100,8 @@ impl TryFrom<u8> for IRQMode {
     }
 }
 
-impl Display for IRQMode {
-    fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> core::fmt::Result {
+impl fmt::Display for IRQMode {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         match self {
             Self::None => f.write_str("0")?,
             Self::Soft => f.write_str("1")?,
@@ -103,8 +103,11 @@ static char kallsyms_get_symbol_type(unsigned int off)
 {
	/*
	 * Get just the first code, look it up in the token table,
-	 * and return the first char from this token.
+	 * and return the first char from this token. If MSB of length
+	 * is 1, it is a "big" symbol, so needs an additional byte.
	 */
+	if (kallsyms_names[off] & 0x80)
+		off++;
	return kallsyms_token_table[kallsyms_token_index[kallsyms_names[off + 1]]];
 }
rust/Makefile, 147 changes:

@@ -27,6 +27,8 @@ endif
 
 obj-$(CONFIG_RUST) += exports.o
 
+always-$(CONFIG_RUST) += libproc_macro2.rlib libquote.rlib libsyn.rlib
+
 always-$(CONFIG_RUST_KERNEL_DOCTESTS) += doctests_kernel_generated.rs
 always-$(CONFIG_RUST_KERNEL_DOCTESTS) += doctests_kernel_generated_kunit.c
 
@@ -60,11 +62,61 @@ rustdoc_test_quiet=--test-args -q
 rustdoc_test_kernel_quiet=>/dev/null
 endif
 
-core-cfgs = \
-    --cfg no_fp_fmt_parse
+cfgs-to-flags = $(patsubst %,--cfg='%',$1)
+
+core-cfgs := \
+    no_fp_fmt_parse
 
 core-edition := $(if $(call rustc-min-version,108700),2024,2021)
 
+core-skip_flags := \
+    --edition=2021 \
+    -Wunreachable_pub \
+    -Wrustdoc::unescaped_backticks
+
+core-flags := \
+    --edition=$(core-edition) \
+    $(call cfgs-to-flags,$(core-cfgs))
+
+proc_macro2-cfgs := \
+    feature="proc-macro" \
+    wrap_proc_macro \
+    $(if $(call rustc-min-version,108800),proc_macro_span_file proc_macro_span_location)
+
+# Stable since Rust 1.79.0: `feature(proc_macro_byte_character,proc_macro_c_str_literals)`.
+proc_macro2-flags := \
+    --cap-lints=allow \
+    -Zcrate-attr='feature(proc_macro_byte_character,proc_macro_c_str_literals)' \
+    $(call cfgs-to-flags,$(proc_macro2-cfgs))
+
+quote-cfgs := \
+    feature="proc-macro"
+
+quote-skip_flags := \
+    --edition=2021
+
+quote-flags := \
+    --edition=2018 \
+    --cap-lints=allow \
+    --extern proc_macro2 \
+    $(call cfgs-to-flags,$(quote-cfgs))
+
+# `extra-traits`, `fold` and `visit` may be enabled if needed.
+syn-cfgs := \
+    feature="clone-impls" \
+    feature="derive" \
+    feature="full" \
+    feature="parsing" \
+    feature="printing" \
+    feature="proc-macro" \
+    feature="visit-mut"
+
+syn-flags := \
+    --cap-lints=allow \
+    --extern proc_macro2 \
+    --extern quote \
+    $(call cfgs-to-flags,$(syn-cfgs))
+
 # `rustdoc` did not save the target modifiers, thus workaround for
 # the time being (https://github.com/rust-lang/rust/issues/144521).
 rustdoc_modifiers_workaround := $(if $(call rustc-min-version,108800),-Cunsafe-allow-abi-mismatch=fixed-x18)
 
@@ -117,16 +169,33 @@ rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \
	$(Q)for f in $(rustdoc_output)/static.files/rustdoc-*.css; do \
		echo ".logo-container > img { object-fit: contain; }" >> $$f; done
 
+rustdoc-proc_macro2: private rustdoc_host = yes
+rustdoc-proc_macro2: private rustc_target_flags = $(proc_macro2-flags)
+rustdoc-proc_macro2: $(src)/proc-macro2/lib.rs rustdoc-clean FORCE
+	+$(call if_changed,rustdoc)
+
+rustdoc-quote: private rustdoc_host = yes
+rustdoc-quote: private rustc_target_flags = $(quote-flags)
+rustdoc-quote: private skip_flags = $(quote-skip_flags)
+rustdoc-quote: $(src)/quote/lib.rs rustdoc-clean rustdoc-proc_macro2 FORCE
+	+$(call if_changed,rustdoc)
+
+rustdoc-syn: private rustdoc_host = yes
+rustdoc-syn: private rustc_target_flags = $(syn-flags)
+rustdoc-syn: $(src)/syn/lib.rs rustdoc-clean rustdoc-quote FORCE
+	+$(call if_changed,rustdoc)
+
 rustdoc-macros: private rustdoc_host = yes
 rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \
-    --extern proc_macro
-rustdoc-macros: $(src)/macros/lib.rs rustdoc-clean FORCE
+    --extern proc_macro --extern proc_macro2 --extern quote --extern syn
+rustdoc-macros: $(src)/macros/lib.rs rustdoc-clean rustdoc-proc_macro2 \
+    rustdoc-quote rustdoc-syn FORCE
	+$(call if_changed,rustdoc)
 
 # Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
 # not be needed -- see https://github.com/rust-lang/rust/pull/128307.
-rustdoc-core: private skip_flags = --edition=2021 -Wrustdoc::unescaped_backticks
-rustdoc-core: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+rustdoc-core: private skip_flags = $(core-skip_flags)
+rustdoc-core: private rustc_target_flags = $(core-flags)
 rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs rustdoc-clean FORCE
	+$(call if_changed,rustdoc)
 
@@ -170,8 +239,8 @@ rustdoc-clean: FORCE
 quiet_cmd_rustc_test_library = $(RUSTC_OR_CLIPPY_QUIET) TL $<
       cmd_rustc_test_library = \
	OBJTREE=$(abspath $(objtree)) \
-	$(RUSTC_OR_CLIPPY) $(rust_common_flags) \
-		@$(objtree)/include/generated/rustc_cfg $(rustc_target_flags) \
+	$(RUSTC_OR_CLIPPY) $(filter-out $(skip_flags),$(rust_common_flags) $(rustc_target_flags)) \
+		@$(objtree)/include/generated/rustc_cfg \
		--crate-type $(if $(rustc_test_library_proc),proc-macro,rlib) \
		--out-dir $(objtree)/$(obj)/test --cfg testlib \
		-L$(objtree)/$(obj)/test \
@@ -183,9 +252,24 @@ rusttestlib-build_error: $(src)/build_error.rs FORCE
 rusttestlib-ffi: $(src)/ffi.rs FORCE
	+$(call if_changed,rustc_test_library)
 
-rusttestlib-macros: private rustc_target_flags = --extern proc_macro
+rusttestlib-proc_macro2: private rustc_target_flags = $(proc_macro2-flags)
+rusttestlib-proc_macro2: $(src)/proc-macro2/lib.rs FORCE
+	+$(call if_changed,rustc_test_library)
+
+rusttestlib-quote: private skip_flags = $(quote-skip_flags)
+rusttestlib-quote: private rustc_target_flags = $(quote-flags)
+rusttestlib-quote: $(src)/quote/lib.rs rusttestlib-proc_macro2 FORCE
+	+$(call if_changed,rustc_test_library)
+
+rusttestlib-syn: private rustc_target_flags = $(syn-flags)
+rusttestlib-syn: $(src)/syn/lib.rs rusttestlib-quote FORCE
+	+$(call if_changed,rustc_test_library)
+
+rusttestlib-macros: private rustc_target_flags = --extern proc_macro \
+    --extern proc_macro2 --extern quote --extern syn
 rusttestlib-macros: private rustc_test_library_proc = yes
-rusttestlib-macros: $(src)/macros/lib.rs FORCE
+rusttestlib-macros: $(src)/macros/lib.rs \
+    rusttestlib-proc_macro2 rusttestlib-quote rusttestlib-syn FORCE
	+$(call if_changed,rustc_test_library)
 
 rusttestlib-pin_init_internal: private rustc_target_flags = --cfg kernel \
@@ -266,7 +350,8 @@ quiet_cmd_rustc_test = $(RUSTC_OR_CLIPPY_QUIET) T $<
 rusttest: rusttest-macros
 
 rusttest-macros: private rustc_target_flags = --extern proc_macro \
-	--extern macros --extern kernel --extern pin_init
+	--extern macros --extern kernel --extern pin_init \
+	--extern proc_macro2 --extern quote --extern syn
 rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro
 rusttest-macros: $(src)/macros/lib.rs \
	rusttestlib-macros rusttestlib-kernel rusttestlib-pin_init FORCE
@@ -419,18 +504,47 @@ $(obj)/exports_bindings_generated.h: $(obj)/bindings.o FORCE
 $(obj)/exports_kernel_generated.h: $(obj)/kernel.o FORCE
	$(call if_changed,exports)
 
+quiet_cmd_rustc_procmacrolibrary = $(RUSTC_OR_CLIPPY_QUIET) PL $@
+      cmd_rustc_procmacrolibrary = \
+	$(if $(skip_clippy),$(RUSTC),$(RUSTC_OR_CLIPPY)) \
+		$(filter-out $(skip_flags),$(rust_common_flags) $(rustc_target_flags)) \
+		--emit=dep-info,link --crate-type rlib -O \
+		--out-dir $(objtree)/$(obj) -L$(objtree)/$(obj) \
+		--crate-name $(patsubst lib%.rlib,%,$(notdir $@)) $<; \
+	mv $(objtree)/$(obj)/$(patsubst lib%.rlib,%,$(notdir $@)).d $(depfile); \
+	sed -i '/^\#/d' $(depfile)
+
+$(obj)/libproc_macro2.rlib: private skip_clippy = 1
+$(obj)/libproc_macro2.rlib: private rustc_target_flags = $(proc_macro2-flags)
+$(obj)/libproc_macro2.rlib: $(src)/proc-macro2/lib.rs FORCE
+	+$(call if_changed_dep,rustc_procmacrolibrary)
+
+$(obj)/libquote.rlib: private skip_clippy = 1
+$(obj)/libquote.rlib: private skip_flags = $(quote-skip_flags)
+$(obj)/libquote.rlib: private rustc_target_flags = $(quote-flags)
+$(obj)/libquote.rlib: $(src)/quote/lib.rs $(obj)/libproc_macro2.rlib FORCE
+	+$(call if_changed_dep,rustc_procmacrolibrary)
+
+$(obj)/libsyn.rlib: private skip_clippy = 1
+$(obj)/libsyn.rlib: private rustc_target_flags = $(syn-flags)
+$(obj)/libsyn.rlib: $(src)/syn/lib.rs $(obj)/libquote.rlib FORCE
+	+$(call if_changed_dep,rustc_procmacrolibrary)
+
 quiet_cmd_rustc_procmacro = $(RUSTC_OR_CLIPPY_QUIET) P $@
       cmd_rustc_procmacro = \
	$(RUSTC_OR_CLIPPY) $(rust_common_flags) $(rustc_target_flags) \
		-Clinker-flavor=gcc -Clinker=$(HOSTCC) \
		-Clink-args='$(call escsq,$(KBUILD_PROCMACROLDFLAGS))' \
		--emit=dep-info=$(depfile) --emit=link=$@ --extern proc_macro \
-		--crate-type proc-macro \
+		--crate-type proc-macro -L$(objtree)/$(obj) \
		--crate-name $(patsubst lib%.$(libmacros_extension),%,$(notdir $@)) \
		@$(objtree)/include/generated/rustc_cfg $<
 
 # Procedural macros can only be used with the `rustc` that compiled it.
-$(obj)/$(libmacros_name): $(src)/macros/lib.rs FORCE
+$(obj)/$(libmacros_name): private rustc_target_flags = \
+	--extern proc_macro2 --extern quote --extern syn
+$(obj)/$(libmacros_name): $(src)/macros/lib.rs $(obj)/libproc_macro2.rlib \
+	$(obj)/libquote.rlib $(obj)/libsyn.rlib FORCE
	+$(call if_changed_dep,rustc_procmacro)
 
 $(obj)/$(libpin_init_internal_name): private rustc_target_flags = --cfg kernel
 
@@ -453,6 +567,9 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
 rust-analyzer:
	$(Q)MAKEFLAGS= $(srctree)/scripts/generate_rust_analyzer.py \
		--cfgs='core=$(core-cfgs)' $(core-edition) \
+		--cfgs='proc_macro2=$(proc_macro2-cfgs)' \
+		--cfgs='quote=$(quote-cfgs)' \
+		--cfgs='syn=$(syn-cfgs)' \
		$(realpath $(srctree)) $(realpath $(objtree)) \
		$(rustc_sysroot) $(RUST_LIB_SRC) $(if $(KBUILD_EXTMOD),$(srcroot)) \
		> rust-project.json
 
@@ -508,9 +625,9 @@ $(obj)/helpers/helpers.o: $(src)/helpers/helpers.c $(recordmcount_source) FORCE
 $(obj)/exports.o: private skip_gendwarfksyms = 1
 
 $(obj)/core.o: private skip_clippy = 1
-$(obj)/core.o: private skip_flags = --edition=2021 -Wunreachable_pub
+$(obj)/core.o: private skip_flags = $(core-skip_flags)
 $(obj)/core.o: private rustc_objcopy = $(foreach sym,$(redirect-intrinsics),--redefine-sym $(sym)=__rust$(sym))
-$(obj)/core.o: private rustc_target_flags = --edition=$(core-edition) $(core-cfgs)
+$(obj)/core.o: private rustc_target_flags = $(core-flags)
 $(obj)/core.o: $(RUST_LIB_SRC)/core/src/lib.rs \
	$(wildcard $(objtree)/include/config/RUSTC_VERSION_TEXT) FORCE
	+$(call if_changed_rule,rustc_library)
@@ -46,3 +46,5 @@ alias! {
 }
 
 pub use core::ffi::c_void;
+
+pub use core::ffi::CStr;
@@ -2,14 +2,14 @@
 
 //! Errors for the [`Vec`] type.
 
-use kernel::fmt::{self, Debug, Formatter};
+use kernel::fmt;
 use kernel::prelude::*;
 
 /// Error type for [`Vec::push_within_capacity`].
 pub struct PushError<T>(pub T);
 
-impl<T> Debug for PushError<T> {
-    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
+impl<T> fmt::Debug for PushError<T> {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         write!(f, "Not enough capacity")
     }
 }
@@ -25,8 +25,8 @@ impl<T> From<PushError<T>> for Error {
 /// Error type for [`Vec::remove`].
 pub struct RemoveError;
 
-impl Debug for RemoveError {
-    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
+impl fmt::Debug for RemoveError {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         write!(f, "Index out of bounds")
     }
 }
@@ -45,8 +45,8 @@ pub enum InsertError<T> {
     OutOfCapacity(T),
 }
 
-impl<T> Debug for InsertError<T> {
-    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
+impl<T> fmt::Debug for InsertError<T> {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         match self {
             InsertError::IndexOutOfBounds(_) => write!(f, "Index out of bounds"),
             InsertError::OutOfCapacity(_) => write!(f, "Not enough capacity"),
@@ -136,7 +136,7 @@ mod common_clk {
     ///
     /// [`clk_get`]: https://docs.kernel.org/core-api/kernel-api.html#c.clk_get
     pub fn get(dev: &Device, name: Option<&CStr>) -> Result<Self> {
-        let con_id = name.map_or(ptr::null(), |n| n.as_ptr());
+        let con_id = name.map_or(ptr::null(), |n| n.as_char_ptr());
 
         // SAFETY: It is safe to call [`clk_get`] for a valid device pointer.
         //
@@ -304,7 +304,7 @@ mod common_clk {
     /// [`clk_get_optional`]:
     /// https://docs.kernel.org/core-api/kernel-api.html#c.clk_get_optional
     pub fn get(dev: &Device, name: Option<&CStr>) -> Result<Self> {
-        let con_id = name.map_or(ptr::null(), |n| n.as_ptr());
+        let con_id = name.map_or(ptr::null(), |n| n.as_char_ptr());
 
         // SAFETY: It is safe to call [`clk_get_optional`] for a valid device pointer.
         //
@@ -157,7 +157,7 @@ impl<Data> Subsystem<Data> {
         unsafe {
             bindings::config_group_init_type_name(
                 &mut (*place.get()).su_group,
-                name.as_ptr(),
+                name.as_char_ptr(),
                 item_type.as_ptr(),
             )
         };
@@ -8,12 +8,12 @@
 // When DebugFS is disabled, many parameters are dead. Linting for this isn't helpful.
 #![cfg_attr(not(CONFIG_DEBUG_FS), allow(unused_variables))]
 
+use crate::fmt;
 use crate::prelude::*;
 use crate::str::CStr;
 #[cfg(CONFIG_DEBUG_FS)]
 use crate::sync::Arc;
 use crate::uaccess::UserSliceReader;
-use core::fmt;
 use core::marker::PhantomData;
 use core::marker::PhantomPinned;
 #[cfg(CONFIG_DEBUG_FS)]
@@ -5,10 +5,9 @@
 //! than a trait implementation. If provided, it will override the trait implementation.
 
 use super::{Reader, Writer};
+use crate::fmt;
 use crate::prelude::*;
 use crate::uaccess::UserSliceReader;
-use core::fmt;
-use core::fmt::Formatter;
 use core::marker::PhantomData;
 use core::ops::Deref;
@@ -76,9 +75,9 @@ impl<D, F> Deref for FormatAdapter<D, F> {
 
 impl<D, F> Writer for FormatAdapter<D, F>
 where
-    F: Fn(&D, &mut Formatter<'_>) -> fmt::Result + 'static,
+    F: Fn(&D, &mut fmt::Formatter<'_>) -> fmt::Result + 'static,
 {
-    fn write(&self, fmt: &mut Formatter<'_>) -> fmt::Result {
+    fn write(&self, fmt: &mut fmt::Formatter<'_>) -> fmt::Result {
         // SAFETY: FormatAdapter<_, F> can only be constructed if F is inhabited
         let f: &F = unsafe { materialize_zst() };
         f(&self.inner, fmt)
@@ -3,7 +3,7 @@
 
 use crate::debugfs::file_ops::FileOps;
 use crate::ffi::c_void;
-use crate::str::CStr;
+use crate::str::{CStr, CStrExt as _};
 use crate::sync::Arc;
 use core::marker::PhantomData;
@@ -3,11 +3,11 @@
 
 use super::{Reader, Writer};
 use crate::debugfs::callback_adapters::Adapter;
+use crate::fmt;
 use crate::prelude::*;
 use crate::seq_file::SeqFile;
 use crate::seq_print;
 use crate::uaccess::UserSlice;
-use core::fmt::{Display, Formatter, Result};
 use core::marker::PhantomData;
 
 #[cfg(CONFIG_DEBUG_FS)]
@@ -65,8 +65,8 @@ impl<T> Deref for FileOps<T> {
 
 struct WriterAdapter<T>(T);
 
-impl<'a, T: Writer> Display for WriterAdapter<&'a T> {
-    fn fmt(&self, f: &mut Formatter<'_>) -> Result {
+impl<'a, T: Writer> fmt::Display for WriterAdapter<&'a T> {
+    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         self.0.write(f)
     }
 }
@@ -3,11 +3,11 @@
 
 //! Traits for rendering or updating values exported to DebugFS.
 
+use crate::fmt;
 use crate::prelude::*;
 use crate::sync::atomic::{Atomic, AtomicBasicOps, AtomicType, Relaxed};
 use crate::sync::Mutex;
 use crate::uaccess::UserSliceReader;
-use core::fmt::{self, Debug, Formatter};
 use core::str::FromStr;
 
 /// A trait for types that can be written into a string.
@@ -21,17 +21,17 @@ use core::str::FromStr;
 /// explicitly instead.
 pub trait Writer {
     /// Formats the value using the given formatter.
-    fn write(&self, f: &mut Formatter<'_>) -> fmt::Result;
+    fn write(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result;
 }
 
 impl<T: Writer> Writer for Mutex<T> {
-    fn write(&self, f: &mut Formatter<'_>) -> fmt::Result {
+    fn write(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         self.lock().write(f)
     }
 }
 
-impl<T: Debug> Writer for T {
-    fn write(&self, f: &mut Formatter<'_>) -> fmt::Result {
+impl<T: fmt::Debug> Writer for T {
+    fn write(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
         writeln!(f, "{self:?}")
     }
 }
@@ -13,6 +13,7 @@ use core::{marker::PhantomData, ptr};
 
 #[cfg(CONFIG_PRINTK)]
 use crate::c_str;
+use crate::str::CStrExt as _;
 
 pub mod property;
@@ -156,7 +156,9 @@ macro_rules! declare_drm_ioctls {
                             Some($cmd)
                         },
                         flags: $flags,
-                        name: $crate::c_str!(::core::stringify!($cmd)).as_char_ptr(),
+                        name: $crate::str::as_char_ptr_in_const_context(
+                            $crate::c_str!(::core::stringify!($cmd)),
+                        ),
                     }
                 ),*];
             ioctls
@@ -182,6 +182,8 @@ impl Error {
         if ptr.is_null() {
             None
         } else {
+            use crate::str::CStrExt as _;
+
             // SAFETY: The string returned by `errname` is static and `NUL`-terminated.
             Some(unsafe { CStr::from_char_ptr(ptr) })
         }
@@ -4,7 +4,14 @@
 //!
 //! C header: [`include/linux/firmware.h`](srctree/include/linux/firmware.h)
 
-use crate::{bindings, device::Device, error::Error, error::Result, ffi, str::CStr};
+use crate::{
+    bindings,
+    device::Device,
+    error::Error,
+    error::Result,
+    ffi,
+    str::{CStr, CStrExt as _},
+};
 use core::ptr::NonNull;
 
 /// # Invariants
@@ -44,13 +51,13 @@ impl FwFunc {
 /// # Examples
 ///
 /// ```no_run
-/// # use kernel::{c_str, device::Device, firmware::Firmware};
+/// # use kernel::{device::Device, firmware::Firmware};
 ///
 /// # fn no_run() -> Result<(), Error> {
 /// # // SAFETY: *NOT* safe, just for the example to get an `ARef<Device>` instance
 /// # let dev = unsafe { Device::get_device(core::ptr::null_mut()) };
 ///
-/// let fw = Firmware::request(c_str!("path/to/firmware.bin"), &dev)?;
+/// let fw = Firmware::request(c"path/to/firmware.bin", &dev)?;
 /// let blob = fw.data();
 ///
 /// # Ok(())
@@ -197,7 +204,7 @@ macro_rules! module_firmware {
     ($($builder:tt)*) => {
         const _: () = {
             const __MODULE_FIRMWARE_PREFIX: &'static $crate::str::CStr = if cfg!(MODULE) {
-                $crate::c_str!("")
+                c""
             } else {
                 <LocalModule as $crate::ModuleMetadata>::NAME
             };
@@ -4,4 +4,89 @@
|
||||
//!
|
||||
//! This module is intended to be used in place of `core::fmt` in kernel code.
|
||||
|
||||
pub use core::fmt::{Arguments, Debug, Display, Error, Formatter, Result, Write};
|
||||
pub use core::fmt::{Arguments, Debug, Error, Formatter, Result, Write};
|
||||
|
||||
/// Internal adapter used to allow implementations of formatting traits for foreign types.
///
/// It is inserted automatically by the [`fmt!`] macro and is not meant to be used directly.
///
/// [`fmt!`]: crate::prelude::fmt!
#[doc(hidden)]
pub struct Adapter<T>(pub T);

macro_rules! impl_fmt_adapter_forward {
    ($($trait:ident),* $(,)?) => {
        $(
            impl<T: $trait> $trait for Adapter<T> {
                fn fmt(&self, f: &mut Formatter<'_>) -> Result {
                    let Self(t) = self;
                    $trait::fmt(t, f)
                }
            }
        )*
    };
}

use core::fmt::{Binary, LowerExp, LowerHex, Octal, Pointer, UpperExp, UpperHex};
impl_fmt_adapter_forward!(Debug, LowerHex, UpperHex, Octal, Binary, Pointer, LowerExp, UpperExp);

/// A copy of [`core::fmt::Display`] that allows us to implement it for foreign types.
///
/// Types should implement this trait rather than [`core::fmt::Display`]. Together with the
/// [`Adapter`] type and [`fmt!`] macro, it allows for formatting foreign types (e.g. types from
/// core) which do not implement [`core::fmt::Display`] directly.
///
/// [`fmt!`]: crate::prelude::fmt!
pub trait Display {
    /// Same as [`core::fmt::Display::fmt`].
    fn fmt(&self, f: &mut Formatter<'_>) -> Result;
}

impl<T: ?Sized + Display> Display for &T {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result {
        Display::fmt(*self, f)
    }
}

impl<T: ?Sized + Display> core::fmt::Display for Adapter<&T> {
    fn fmt(&self, f: &mut Formatter<'_>) -> Result {
        let Self(t) = self;
        Display::fmt(t, f)
    }
}

macro_rules! impl_display_forward {
    ($(
        $( { $($generics:tt)* } )? $ty:ty $( { where $($where:tt)* } )?
    ),* $(,)?) => {
        $(
            impl$($($generics)*)? Display for $ty $(where $($where)*)? {
                fn fmt(&self, f: &mut Formatter<'_>) -> Result {
                    core::fmt::Display::fmt(self, f)
                }
            }
        )*
    };
}

impl_display_forward!(
    bool,
    char,
    core::panic::PanicInfo<'_>,
    Arguments<'_>,
    i128,
    i16,
    i32,
    i64,
    i8,
    isize,
    str,
    u128,
    u16,
    u32,
    u64,
    u8,
    usize,
    {<T: ?Sized>} crate::sync::Arc<T> {where crate::sync::Arc<T>: core::fmt::Display},
    {<T: ?Sized>} crate::sync::UniqueArc<T> {where crate::sync::UniqueArc<T>: core::fmt::Display},
);

@@ -30,7 +30,7 @@
//! ## General Examples
//!
//! ```rust
//! # #![expect(clippy::disallowed_names, clippy::undocumented_unsafe_blocks)]
//! # #![expect(clippy::undocumented_unsafe_blocks)]
//! use kernel::types::Opaque;
//! use pin_init::pin_init_from_closure;
//!
@@ -67,7 +67,6 @@
//! ```
//!
//! ```rust
//! # #![expect(unreachable_pub, clippy::disallowed_names)]
//! use kernel::{prelude::*, types::Opaque};
//! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin};
//! # mod bindings {

@@ -109,6 +109,7 @@ pub mod miscdevice;
pub mod mm;
#[cfg(CONFIG_NET)]
pub mod net;
pub mod num;
pub mod of;
#[cfg(CONFIG_PM_OPP)]
pub mod opp;

rust/kernel/num.rs (new file, 79 lines)
@@ -0,0 +1,79 @@
// SPDX-License-Identifier: GPL-2.0

//! Additional numerical features for the kernel.

use core::ops;

pub mod bounded;
pub use bounded::*;

/// Designates unsigned primitive types.
pub enum Unsigned {}

/// Designates signed primitive types.
pub enum Signed {}

/// Describes core properties of integer types.
pub trait Integer:
    Sized
    + Copy
    + Clone
    + PartialEq
    + Eq
    + PartialOrd
    + Ord
    + ops::Add<Output = Self>
    + ops::AddAssign
    + ops::Sub<Output = Self>
    + ops::SubAssign
    + ops::Mul<Output = Self>
    + ops::MulAssign
    + ops::Div<Output = Self>
    + ops::DivAssign
    + ops::Rem<Output = Self>
    + ops::RemAssign
    + ops::BitAnd<Output = Self>
    + ops::BitAndAssign
    + ops::BitOr<Output = Self>
    + ops::BitOrAssign
    + ops::BitXor<Output = Self>
    + ops::BitXorAssign
    + ops::Shl<u32, Output = Self>
    + ops::ShlAssign<u32>
    + ops::Shr<u32, Output = Self>
    + ops::ShrAssign<u32>
    + ops::Not
{
    /// Whether this type is [`Signed`] or [`Unsigned`].
    type Signedness;

    /// Number of bits used for value representation.
    const BITS: u32;
}

macro_rules! impl_integer {
    ($($type:ty: $signedness:ty), *) => {
        $(
            impl Integer for $type {
                type Signedness = $signedness;

                const BITS: u32 = <$type>::BITS;
            }
        )*
    };
}

impl_integer!(
    u8: Unsigned,
    u16: Unsigned,
    u32: Unsigned,
    u64: Unsigned,
    u128: Unsigned,
    usize: Unsigned,
    i8: Signed,
    i16: Signed,
    i32: Signed,
    i64: Signed,
    i128: Signed,
    isize: Signed
);
rust/kernel/num/bounded.rs (new file, 1058 lines)
File diff suppressed because it is too large
@@ -13,7 +13,7 @@ use crate::{
    cpumask::{Cpumask, CpumaskVar},
    device::Device,
    error::{code::*, from_err_ptr, from_result, to_result, Result, VTABLE_DEFAULT_ERROR},
    ffi::c_ulong,
    ffi::{c_char, c_ulong},
    prelude::*,
    str::CString,
    sync::aref::{ARef, AlwaysRefCounted},
@@ -88,12 +88,12 @@ use core::{marker::PhantomData, ptr};
use macros::vtable;

/// Creates a null-terminated slice of pointers to [`CString`]s.
fn to_c_str_array(names: &[CString]) -> Result<KVec<*const u8>> {
fn to_c_str_array(names: &[CString]) -> Result<KVec<*const c_char>> {
    // Allocate a null-terminated vector of pointers.
    let mut list = KVec::with_capacity(names.len() + 1, GFP_KERNEL)?;

    for name in names.iter() {
        list.push(name.as_ptr().cast(), GFP_KERNEL)?;
        list.push(name.as_char_ptr(), GFP_KERNEL)?;
    }

    list.push(ptr::null(), GFP_KERNEL)?;

@@ -4,8 +4,7 @@
//!
//! This module contains PCI class codes, Vendor IDs, and supporting types.

use crate::{bindings, error::code::EINVAL, error::Error, prelude::*};
use core::fmt;
use crate::{bindings, error::code::EINVAL, error::Error, fmt, prelude::*};

/// PCI device class codes.
///

@@ -19,13 +19,13 @@ pub use core::{

pub use ::ffi::{
    c_char, c_int, c_long, c_longlong, c_schar, c_short, c_uchar, c_uint, c_ulong, c_ulonglong,
    c_ushort, c_void,
    c_ushort, c_void, CStr,
};

pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec, Vec};

#[doc(no_inline)]
pub use macros::{export, kunit_tests, module, vtable};
pub use macros::{export, fmt, kunit_tests, module, vtable};

pub use pin_init::{init, pin_data, pin_init, pinned_drop, InPlaceWrite, Init, PinInit, Zeroable};

@@ -36,7 +36,6 @@ pub use super::{build_assert, build_error};
pub use super::dbg;
pub use super::{dev_alert, dev_crit, dev_dbg, dev_emerg, dev_err, dev_info, dev_notice, dev_warn};
pub use super::{pr_alert, pr_crit, pr_debug, pr_emerg, pr_err, pr_info, pr_notice, pr_warn};
pub use core::format_args as fmt;

pub use super::{try_init, try_pin_init};

@@ -44,7 +43,7 @@ pub use super::static_assert;

pub use super::error::{code::*, Error, Result};

pub use super::{str::CStr, ThisModule};
pub use super::{str::CStrExt as _, ThisModule};

pub use super::init::InPlaceInit;


@@ -2,7 +2,6 @@

//! Types and functions to work with pointers and addresses.

use core::fmt::Debug;
use core::mem::align_of;
use core::num::NonZero;


@@ -243,14 +243,44 @@ impl<K, V> RBTree<K, V> {
    }

    /// Returns a cursor over the tree nodes, starting with the smallest key.
    pub fn cursor_front(&mut self) -> Option<Cursor<'_, K, V>> {
    pub fn cursor_front_mut(&mut self) -> Option<CursorMut<'_, K, V>> {
        let root = addr_of_mut!(self.root);
        // SAFETY: `self.root` is always a valid root node
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_first(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            CursorMut {
                current,
                tree: self,
            }
        })
    }

    /// Returns an immutable cursor over the tree nodes, starting with the smallest key.
    pub fn cursor_front(&self) -> Option<Cursor<'_, K, V>> {
        let root = &raw const self.root;
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_first(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                _tree: PhantomData,
            }
        })
    }

    /// Returns a cursor over the tree nodes, starting with the largest key.
    pub fn cursor_back_mut(&mut self) -> Option<CursorMut<'_, K, V>> {
        let root = addr_of_mut!(self.root);
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_last(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            CursorMut {
                current,
                tree: self,
            }
@@ -258,16 +288,16 @@ impl<K, V> RBTree<K, V> {
    }

    /// Returns a cursor over the tree nodes, starting with the largest key.
    pub fn cursor_back(&mut self) -> Option<Cursor<'_, K, V>> {
        let root = addr_of_mut!(self.root);
        // SAFETY: `self.root` is always a valid root node
    pub fn cursor_back(&self) -> Option<Cursor<'_, K, V>> {
        let root = &raw const self.root;
        // SAFETY: `self.root` is always a valid root node.
        let current = unsafe { bindings::rb_last(root) };
        NonNull::new(current).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                tree: self,
                _tree: PhantomData,
            }
        })
    }
@@ -421,12 +451,47 @@ where
    /// If the given key exists, the cursor starts there.
    /// Otherwise it starts with the first larger key in sort order.
    /// If there is no larger key, it returns [`None`].
    pub fn cursor_lower_bound(&mut self, key: &K) -> Option<Cursor<'_, K, V>>
    pub fn cursor_lower_bound_mut(&mut self, key: &K) -> Option<CursorMut<'_, K, V>>
    where
        K: Ord,
    {
        let best = self.find_best_match(key)?;

        NonNull::new(best.as_ptr()).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            CursorMut {
                current,
                tree: self,
            }
        })
    }

    /// Returns a cursor over the tree nodes based on the given key.
    ///
    /// If the given key exists, the cursor starts there.
    /// Otherwise it starts with the first larger key in sort order.
    /// If there is no larger key, it returns [`None`].
    pub fn cursor_lower_bound(&self, key: &K) -> Option<Cursor<'_, K, V>>
    where
        K: Ord,
    {
        let best = self.find_best_match(key)?;

        NonNull::new(best.as_ptr()).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                _tree: PhantomData,
            }
        })
    }

    fn find_best_match(&self, key: &K) -> Option<NonNull<bindings::rb_node>> {
        let mut node = self.root.rb_node;
        let mut best_match: Option<NonNull<Node<K, V>>> = None;
        let mut best_key: Option<&K> = None;
        let mut best_links: Option<NonNull<bindings::rb_node>> = None;
        while !node.is_null() {
            // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in `self`
            // point to the links field of `Node<K, V>` objects.
@@ -439,42 +504,28 @@ where
            let right_child = unsafe { (*node).rb_right };
            match key.cmp(this_key) {
                Ordering::Equal => {
                    best_match = NonNull::new(this);
                    // SAFETY: `this` is a non-null node so it is valid by the type invariants.
                    best_links = Some(unsafe { NonNull::new_unchecked(&mut (*this).links) });
                    break;
                }
                Ordering::Greater => {
                    node = right_child;
                }
                Ordering::Less => {
                    let is_better_match = match best_match {
                    let is_better_match = match best_key {
                        None => true,
                        Some(best) => {
                            // SAFETY: `best` is a non-null node so it is valid by the type invariants.
                            let best_key = unsafe { &(*best.as_ptr()).key };
                            best_key > this_key
                        }
                        Some(best) => best > this_key,
                    };
                    if is_better_match {
                        best_match = NonNull::new(this);
                        best_key = Some(this_key);
                        // SAFETY: `this` is a non-null node so it is valid by the type invariants.
                        best_links = Some(unsafe { NonNull::new_unchecked(&mut (*this).links) });
                    }
                    node = left_child;
                }
            };
        }

        let best = best_match?;

        // SAFETY: `best` is a non-null node so it is valid by the type invariants.
        let links = unsafe { addr_of_mut!((*best.as_ptr()).links) };

        NonNull::new(links).map(|current| {
            // INVARIANT:
            // - `current` is a valid node in the [`RBTree`] pointed to by `self`.
            Cursor {
                current,
                tree: self,
            }
        })
        best_links
    }
}

@@ -507,7 +558,7 @@ impl<K, V> Drop for RBTree<K, V> {
    }
}

/// A bidirectional cursor over the tree nodes, sorted by key.
/// A bidirectional mutable cursor over the tree nodes, sorted by key.
///
/// # Examples
///
@@ -526,7 +577,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Get a cursor to the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut cursor = tree.cursor_front_mut().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
@@ -564,7 +615,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// let mut cursor = tree.cursor_back().unwrap();
/// let mut cursor = tree.cursor_back_mut().unwrap();
/// let current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
@@ -577,7 +628,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// use kernel::rbtree::RBTree;
///
/// let mut tree: RBTree<u16, u16> = RBTree::new();
/// assert!(tree.cursor_front().is_none());
/// assert!(tree.cursor_front_mut().is_none());
///
/// # Ok::<(), Error>(())
/// ```
@@ -628,7 +679,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Retrieve a cursor.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut cursor = tree.cursor_front_mut().unwrap();
///
/// // Get a mutable reference to the current value.
/// let (k, v) = cursor.current_mut();
@@ -655,7 +706,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Remove the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut cursor = tree.cursor_front_mut().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
/// cursor = cursor.remove_current().0.unwrap();
@@ -665,7 +716,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// assert_eq!(current, (&20, &200));
///
/// // Get a cursor to the last element, and remove it.
/// cursor = tree.cursor_back().unwrap();
/// cursor = tree.cursor_back_mut().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
@@ -694,7 +745,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Get a cursor to the first element.
/// let mut cursor = tree.cursor_front().unwrap();
/// let mut cursor = tree.cursor_front_mut().unwrap();
/// let mut current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
@@ -702,7 +753,7 @@ impl<K, V> Drop for RBTree<K, V> {
/// assert!(cursor.remove_prev().is_none());
///
/// // Get a cursor to the last element.
/// cursor = tree.cursor_back().unwrap();
/// cursor = tree.cursor_back_mut().unwrap();
/// current = cursor.current();
/// assert_eq!(current, (&30, &300));
///
@@ -726,18 +777,48 @@ impl<K, V> Drop for RBTree<K, V> {
///
/// # Invariants
/// - `current` points to a node that is in the same [`RBTree`] as `tree`.
pub struct Cursor<'a, K, V> {
pub struct CursorMut<'a, K, V> {
    tree: &'a mut RBTree<K, V>,
    current: NonNull<bindings::rb_node>,
}

// SAFETY: The [`Cursor`] has exclusive access to both `K` and `V`, so it is sufficient to require them to be `Send`.
// The cursor only gives out immutable references to the keys, but since it has exclusive access to those same
// keys, `Send` is sufficient. `Sync` would be okay, but it is more restrictive to the user.
unsafe impl<'a, K: Send, V: Send> Send for Cursor<'a, K, V> {}
/// A bidirectional immutable cursor over the tree nodes, sorted by key. This is a simpler
/// variant of [`CursorMut`] that provides read-only access.
///
/// # Examples
///
/// In the following example, we obtain a cursor to the first element in the tree.
/// The cursor allows us to iterate bidirectionally over key/value pairs in the tree.
///
/// ```
/// use kernel::{alloc::flags, rbtree::RBTree};
///
/// // Create a new tree.
/// let mut tree = RBTree::new();
///
/// // Insert three elements.
/// tree.try_create_and_insert(10, 100, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(20, 200, flags::GFP_KERNEL)?;
/// tree.try_create_and_insert(30, 300, flags::GFP_KERNEL)?;
///
/// // Get a cursor to the first element.
/// let cursor = tree.cursor_front().unwrap();
/// let current = cursor.current();
/// assert_eq!(current, (&10, &100));
///
/// # Ok::<(), Error>(())
/// ```
pub struct Cursor<'a, K, V> {
    _tree: PhantomData<&'a RBTree<K, V>>,
    current: NonNull<bindings::rb_node>,
}

// SAFETY: The [`Cursor`] gives out immutable references to K and mutable references to V,
// so it has the same thread safety requirements as mutable references.
// SAFETY: The immutable cursor gives out shared access to `K` and `V` so if `K` and `V` can be
// shared across threads, then it's safe to share the cursor.
unsafe impl<'a, K: Sync, V: Sync> Send for Cursor<'a, K, V> {}

// SAFETY: The immutable cursor gives out shared access to `K` and `V` so if `K` and `V` can be
// shared across threads, then it's safe to share the cursor.
unsafe impl<'a, K: Sync, V: Sync> Sync for Cursor<'a, K, V> {}

impl<'a, K, V> Cursor<'a, K, V> {
@@ -749,6 +830,75 @@ impl<'a, K, V> Cursor<'a, K, V> {
        unsafe { Self::to_key_value(self.current) }
    }

    /// # Safety
    ///
    /// - `node` must be a valid pointer to a node in an [`RBTree`].
    /// - The caller has immutable access to `node` for the duration of `'b`.
    unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
        // SAFETY: By the type invariant of `Self`, all non-null `rb_node` pointers stored in `self`
        // point to the links field of `Node<K, V>` objects.
        let this = unsafe { container_of!(node.as_ptr(), Node<K, V>, links) };
        // SAFETY: The passed `node` is the current node or a non-null neighbor,
        // thus `this` is valid by the type invariants.
        let k = unsafe { &(*this).key };
        // SAFETY: The passed `node` is the current node or a non-null neighbor,
        // thus `this` is valid by the type invariants.
        let v = unsafe { &(*this).value };
        (k, v)
    }

    /// Access the previous node without moving the cursor.
    pub fn peek_prev(&self) -> Option<(&K, &V)> {
        self.peek(Direction::Prev)
    }

    /// Access the next node without moving the cursor.
    pub fn peek_next(&self) -> Option<(&K, &V)> {
        self.peek(Direction::Next)
    }

    fn peek(&self, direction: Direction) -> Option<(&K, &V)> {
        self.get_neighbor_raw(direction).map(|neighbor| {
            // SAFETY:
            // - `neighbor` is a valid tree node.
            // - By the function signature, we have an immutable reference to `self`.
            unsafe { Self::to_key_value(neighbor) }
        })
    }

    fn get_neighbor_raw(&self, direction: Direction) -> Option<NonNull<bindings::rb_node>> {
        // SAFETY: `self.current` is valid by the type invariants.
        let neighbor = unsafe {
            match direction {
                Direction::Prev => bindings::rb_prev(self.current.as_ptr()),
                Direction::Next => bindings::rb_next(self.current.as_ptr()),
            }
        };

        NonNull::new(neighbor)
    }
}

// SAFETY: The [`CursorMut`] has exclusive access to both `K` and `V`, so it is sufficient to
// require them to be `Send`.
// The cursor only gives out immutable references to the keys, but since it has exclusive access to
// those same keys, `Send` is sufficient. `Sync` would be okay, but it is more restrictive to the
// user.
unsafe impl<'a, K: Send, V: Send> Send for CursorMut<'a, K, V> {}

// SAFETY: The [`CursorMut`] gives out immutable references to `K` and mutable references to `V`,
// so it has the same thread safety requirements as mutable references.
unsafe impl<'a, K: Sync, V: Sync> Sync for CursorMut<'a, K, V> {}

impl<'a, K, V> CursorMut<'a, K, V> {
    /// The current node.
    pub fn current(&self) -> (&K, &V) {
        // SAFETY:
        // - `self.current` is a valid node by the type invariants.
        // - We have an immutable reference by the function signature.
        unsafe { Self::to_key_value(self.current) }
    }

    /// The current node, with a mutable value.
    pub fn current_mut(&mut self) -> (&K, &mut V) {
        // SAFETY:
@@ -920,7 +1070,7 @@ impl<'a, K, V> Cursor<'a, K, V> {
    }
}

/// Direction for [`Cursor`] operations.
/// Direction for [`Cursor`] and [`CursorMut`] operations.
enum Direction {
    /// the node immediately before, in sort order
    Prev,

@@ -84,7 +84,7 @@ pub struct Error<State: RegulatorState> {
pub fn devm_enable(dev: &Device<Bound>, name: &CStr) -> Result {
    // SAFETY: `dev` is a valid and bound device, while `name` is a valid C
    // string.
    to_result(unsafe { bindings::devm_regulator_get_enable(dev.as_raw(), name.as_ptr()) })
    to_result(unsafe { bindings::devm_regulator_get_enable(dev.as_raw(), name.as_char_ptr()) })
}

/// Same as [`devm_enable`], but calls `devm_regulator_get_enable_optional`
@@ -102,7 +102,9 @@ pub fn devm_enable(dev: &Device<Bound>, name: &CStr) -> Result {
pub fn devm_enable_optional(dev: &Device<Bound>, name: &CStr) -> Result {
    // SAFETY: `dev` is a valid and bound device, while `name` is a valid C
    // string.
    to_result(unsafe { bindings::devm_regulator_get_enable_optional(dev.as_raw(), name.as_ptr()) })
    to_result(unsafe {
        bindings::devm_regulator_get_enable_optional(dev.as_raw(), name.as_char_ptr())
    })
}

/// A `struct regulator` abstraction.
@@ -266,9 +268,10 @@ impl<T: RegulatorState> Regulator<T> {
    }

    fn get_internal(dev: &Device, name: &CStr) -> Result<Regulator<T>> {
        // SAFETY: It is safe to call `regulator_get()`, on a device pointer
        // received from the C code.
        let inner = from_err_ptr(unsafe { bindings::regulator_get(dev.as_raw(), name.as_ptr()) })?;
        let inner =
            // SAFETY: It is safe to call `regulator_get()`, on a device pointer
            // received from the C code.
            from_err_ptr(unsafe { bindings::regulator_get(dev.as_raw(), name.as_char_ptr()) })?;

        // SAFETY: We can safely trust `inner` to be a pointer to a valid
        // regulator if `ERR_PTR` was not returned.

@@ -4,7 +4,7 @@
//!
//! C header: [`include/linux/seq_file.h`](srctree/include/linux/seq_file.h)

use crate::{bindings, c_str, fmt, types::NotThreadSafe, types::Opaque};
use crate::{bindings, c_str, fmt, str::CStrExt as _, types::NotThreadSafe, types::Opaque};

/// A utility for generating the contents of a seq file.
#[repr(transparent)]

@@ -10,9 +10,11 @@ use crate::{
};
use core::{
    marker::PhantomData,
    ops::{self, Deref, DerefMut, Index},
    ops::{Deref, DerefMut, Index},
};

pub use crate::prelude::CStr;

/// Byte string without UTF-8 validity guarantee.
#[repr(transparent)]
pub struct BStr([u8]);
@@ -186,58 +188,17 @@ macro_rules! b_str {
// - error[E0379]: functions in trait impls cannot be declared const
#[inline]
pub const fn as_char_ptr_in_const_context(c_str: &CStr) -> *const c_char {
    c_str.0.as_ptr()
    c_str.as_ptr().cast()
}

/// Possible errors when using conversion functions in [`CStr`].
#[derive(Debug, Clone, Copy)]
pub enum CStrConvertError {
    /// Supplied bytes contain an interior `NUL`.
    InteriorNul,
mod private {
    pub trait Sealed {}

    /// Supplied bytes are not terminated by `NUL`.
    NotNulTerminated,
    impl Sealed for super::CStr {}
}

impl From<CStrConvertError> for Error {
    #[inline]
    fn from(_: CStrConvertError) -> Error {
        EINVAL
    }
}

/// A string that is guaranteed to have exactly one `NUL` byte, which is at the
/// end.
///
/// Used for interoperability with kernel APIs that take C strings.
#[repr(transparent)]
pub struct CStr([u8]);

impl CStr {
    /// Returns the length of this string excluding `NUL`.
    #[inline]
    pub const fn len(&self) -> usize {
        self.len_with_nul() - 1
    }

    /// Returns the length of this string with `NUL`.
    #[inline]
    pub const fn len_with_nul(&self) -> usize {
        if self.0.is_empty() {
            // SAFETY: This is one of the invariants of `CStr`.
            // We add a `unreachable_unchecked` here to hint the optimizer that
            // the value returned from this function is non-zero.
            unsafe { core::hint::unreachable_unchecked() };
        }
        self.0.len()
    }

    /// Returns `true` if the string only includes `NUL`.
    #[inline]
    pub const fn is_empty(&self) -> bool {
        self.len() == 0
    }

/// Extensions to [`CStr`].
pub trait CStrExt: private::Sealed {
    /// Wraps a raw C string pointer.
    ///
    /// # Safety
@@ -245,54 +206,9 @@ impl CStr {
    /// `ptr` must be a valid pointer to a `NUL`-terminated C string, and it must
    /// last at least `'a`. When `CStr` is alive, the memory pointed by `ptr`
    /// must not be mutated.
    #[inline]
    pub unsafe fn from_char_ptr<'a>(ptr: *const c_char) -> &'a Self {
        // SAFETY: The safety precondition guarantees `ptr` is a valid pointer
        // to a `NUL`-terminated C string.
        let len = unsafe { bindings::strlen(ptr) } + 1;
        // SAFETY: Lifetime guaranteed by the safety precondition.
        let bytes = unsafe { core::slice::from_raw_parts(ptr.cast(), len) };
        // SAFETY: As `len` is returned by `strlen`, `bytes` does not contain interior `NUL`.
        // As we have added 1 to `len`, the last byte is known to be `NUL`.
        unsafe { Self::from_bytes_with_nul_unchecked(bytes) }
    }

    /// Creates a [`CStr`] from a `[u8]`.
    ///
    /// The provided slice must be `NUL`-terminated and must not contain any
    /// interior `NUL` bytes.
|
||||
pub const fn from_bytes_with_nul(bytes: &[u8]) -> Result<&Self, CStrConvertError> {
|
||||
if bytes.is_empty() {
|
||||
return Err(CStrConvertError::NotNulTerminated);
|
||||
}
|
||||
if bytes[bytes.len() - 1] != 0 {
|
||||
return Err(CStrConvertError::NotNulTerminated);
|
||||
}
|
||||
let mut i = 0;
|
||||
// `i + 1 < bytes.len()` allows LLVM to optimize away bounds checking,
|
||||
// while it couldn't optimize away bounds checks for `i < bytes.len() - 1`.
|
||||
while i + 1 < bytes.len() {
|
||||
if bytes[i] == 0 {
|
||||
return Err(CStrConvertError::InteriorNul);
|
||||
}
|
||||
i += 1;
|
||||
}
|
||||
// SAFETY: We just checked that all properties hold.
|
||||
Ok(unsafe { Self::from_bytes_with_nul_unchecked(bytes) })
|
||||
}

    /// Creates a [`CStr`] from a `[u8]` without performing any additional
    /// checks.
    ///
    /// # Safety
    ///
    /// `bytes` *must* end with a `NUL` byte, and should only have a single
    /// `NUL` byte (or the string will be truncated).
    #[inline]
    pub const unsafe fn from_bytes_with_nul_unchecked(bytes: &[u8]) -> &CStr {
        // SAFETY: Properties of `bytes` guaranteed by the safety precondition.
        unsafe { core::mem::transmute(bytes) }
    }
    // This function exists to paper over the fact that `CStr::from_ptr` takes a `*const
    // core::ffi::c_char` rather than a `*const crate::ffi::c_char`.
    unsafe fn from_char_ptr<'a>(ptr: *const c_char) -> &'a Self;

    /// Creates a mutable [`CStr`] from a `[u8]` without performing any
    /// additional checks.
@@ -301,99 +217,16 @@ impl CStr {
    ///
    /// `bytes` *must* end with a `NUL` byte, and should only have a single
    /// `NUL` byte (or the string will be truncated).
    #[inline]
    pub unsafe fn from_bytes_with_nul_unchecked_mut(bytes: &mut [u8]) -> &mut CStr {
        // SAFETY: Properties of `bytes` guaranteed by the safety precondition.
        unsafe { &mut *(core::ptr::from_mut(bytes) as *mut CStr) }
    }
    unsafe fn from_bytes_with_nul_unchecked_mut(bytes: &mut [u8]) -> &mut Self;

    /// Returns a C pointer to the string.
    ///
    /// Using this function in a const context is deprecated in favor of
    /// [`as_char_ptr_in_const_context`] in preparation for replacing `CStr` with `core::ffi::CStr`
    /// which does not have this method.
    #[inline]
    pub const fn as_char_ptr(&self) -> *const c_char {
        as_char_ptr_in_const_context(self)
    }

    /// Convert the string to a byte slice without the trailing `NUL` byte.
    #[inline]
    pub fn to_bytes(&self) -> &[u8] {
        &self.0[..self.len()]
    }

    /// Convert the string to a byte slice without the trailing `NUL` byte.
    ///
    /// This function is deprecated in favor of [`Self::to_bytes`] in preparation for replacing
    /// `CStr` with `core::ffi::CStr` which does not have this method.
    #[inline]
    pub fn as_bytes(&self) -> &[u8] {
        self.to_bytes()
    }

    /// Convert the string to a byte slice containing the trailing `NUL` byte.
    #[inline]
    pub const fn to_bytes_with_nul(&self) -> &[u8] {
        &self.0
    }

    /// Convert the string to a byte slice containing the trailing `NUL` byte.
    ///
    /// This function is deprecated in favor of [`Self::to_bytes_with_nul`] in preparation for
    /// replacing `CStr` with `core::ffi::CStr` which does not have this method.
    #[inline]
    pub const fn as_bytes_with_nul(&self) -> &[u8] {
        self.to_bytes_with_nul()
    }

    /// Yields a [`&str`] slice if the [`CStr`] contains valid UTF-8.
    ///
    /// If the contents of the [`CStr`] are valid UTF-8 data, this
    /// function will return the corresponding [`&str`] slice. Otherwise,
    /// it will return an error with details of where UTF-8 validation failed.
    ///
    /// # Examples
    ///
    /// ```
    /// # use kernel::str::CStr;
    /// let cstr = CStr::from_bytes_with_nul(b"foo\0")?;
    /// assert_eq!(cstr.to_str(), Ok("foo"));
    /// # Ok::<(), kernel::error::Error>(())
    /// ```
    #[inline]
    pub fn to_str(&self) -> Result<&str, core::str::Utf8Error> {
        core::str::from_utf8(self.as_bytes())
    }

    /// Unsafely convert this [`CStr`] into a [`&str`], without checking for
    /// valid UTF-8.
    ///
    /// # Safety
    ///
    /// The contents must be valid UTF-8.
    ///
    /// # Examples
    ///
    /// ```
    /// # use kernel::c_str;
    /// # use kernel::str::CStr;
    /// let bar = c_str!("ツ");
    /// // SAFETY: String literals are guaranteed to be valid UTF-8
    /// // by the Rust compiler.
    /// assert_eq!(unsafe { bar.as_str_unchecked() }, "ツ");
    /// ```
    #[inline]
    pub unsafe fn as_str_unchecked(&self) -> &str {
        // SAFETY: TODO.
        unsafe { core::str::from_utf8_unchecked(self.as_bytes()) }
    }
    // This function exists to paper over the fact that `CStr::as_ptr` returns a `*const
    // core::ffi::c_char` rather than a `*const crate::ffi::c_char`.
    fn as_char_ptr(&self) -> *const c_char;

    /// Convert this [`CStr`] into a [`CString`] by allocating memory and
    /// copying over the string data.
    pub fn to_cstring(&self) -> Result<CString, AllocError> {
        CString::try_from(self)
    }
    fn to_cstring(&self) -> Result<CString, AllocError>;

    /// Converts this [`CStr`] to its ASCII lower case equivalent in-place.
    ///
@@ -404,11 +237,7 @@ impl CStr {
    /// [`to_ascii_lowercase()`].
    ///
    /// [`to_ascii_lowercase()`]: #method.to_ascii_lowercase
    pub fn make_ascii_lowercase(&mut self) {
        // INVARIANT: This doesn't introduce or remove NUL bytes in the C
        // string.
        self.0.make_ascii_lowercase();
    }
    fn make_ascii_lowercase(&mut self);

    /// Converts this [`CStr`] to its ASCII upper case equivalent in-place.
    ///
@@ -419,11 +248,7 @@ impl CStr {
    /// [`to_ascii_uppercase()`].
    ///
    /// [`to_ascii_uppercase()`]: #method.to_ascii_uppercase
    pub fn make_ascii_uppercase(&mut self) {
        // INVARIANT: This doesn't introduce or remove NUL bytes in the C
        // string.
        self.0.make_ascii_uppercase();
    }
    fn make_ascii_uppercase(&mut self);

    /// Returns a copy of this [`CString`] where each character is mapped to its
    /// ASCII lower case equivalent.
@@ -434,13 +259,7 @@ impl CStr {
    /// To lowercase the value in-place, use [`make_ascii_lowercase`].
    ///
    /// [`make_ascii_lowercase`]: str::make_ascii_lowercase
    pub fn to_ascii_lowercase(&self) -> Result<CString, AllocError> {
        let mut s = self.to_cstring()?;

        s.make_ascii_lowercase();

        Ok(s)
    }
    fn to_ascii_lowercase(&self) -> Result<CString, AllocError>;

    /// Returns a copy of this [`CString`] where each character is mapped to its
    /// ASCII upper case equivalent.
@@ -451,28 +270,21 @@ impl CStr {
    /// To uppercase the value in-place, use [`make_ascii_uppercase`].
    ///
    /// [`make_ascii_uppercase`]: str::make_ascii_uppercase
    pub fn to_ascii_uppercase(&self) -> Result<CString, AllocError> {
        let mut s = self.to_cstring()?;

        s.make_ascii_uppercase();

        Ok(s)
    }
    fn to_ascii_uppercase(&self) -> Result<CString, AllocError>;
}

impl fmt::Display for CStr {
    /// Formats printable ASCII characters, escaping the rest.
    ///
    /// ```
    /// # use kernel::c_str;
    /// # use kernel::prelude::fmt;
    /// # use kernel::str::CStr;
    /// # use kernel::str::CString;
    /// let penguin = c_str!("🐧");
    /// let penguin = c"🐧";
    /// let s = CString::try_from_fmt(fmt!("{penguin}"))?;
    /// assert_eq!(s.to_bytes_with_nul(), "\\xf0\\x9f\\x90\\xa7\0".as_bytes());
    ///
    /// let ascii = c_str!("so \"cool\"");
    /// let ascii = c"so \"cool\"";
    /// let s = CString::try_from_fmt(fmt!("{ascii}"))?;
    /// assert_eq!(s.to_bytes_with_nul(), "so \"cool\"\0".as_bytes());
    /// # Ok::<(), kernel::error::Error>(())
@@ -490,98 +302,75 @@ impl fmt::Display for CStr {
    }
}

impl fmt::Debug for CStr {
    /// Formats printable ASCII characters with a double quote on either end, escaping the rest.
    ///
    /// ```
    /// # use kernel::c_str;
    /// # use kernel::prelude::fmt;
    /// # use kernel::str::CStr;
    /// # use kernel::str::CString;
    /// let penguin = c_str!("🐧");
    /// let s = CString::try_from_fmt(fmt!("{penguin:?}"))?;
    /// assert_eq!(s.as_bytes_with_nul(), "\"\\xf0\\x9f\\x90\\xa7\"\0".as_bytes());
    ///
    /// // Embedded double quotes are escaped.
    /// let ascii = c_str!("so \"cool\"");
    /// let s = CString::try_from_fmt(fmt!("{ascii:?}"))?;
    /// assert_eq!(s.as_bytes_with_nul(), "\"so \\\"cool\\\"\"\0".as_bytes());
    /// # Ok::<(), kernel::error::Error>(())
    /// ```
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("\"")?;
        for &c in self.as_bytes() {
            match c {
                // Printable characters.
                b'\"' => f.write_str("\\\"")?,
                0x20..=0x7e => f.write_char(c as char)?,
                _ => write!(f, "\\x{c:02x}")?,
            }
        }
        f.write_str("\"")
/// Converts a mutable C string to a mutable byte slice.
///
/// # Safety
///
/// The caller must ensure that the slice ends in a NUL byte and contains no other NUL bytes before
/// the borrow ends and the underlying [`CStr`] is used.
unsafe fn to_bytes_mut(s: &mut CStr) -> &mut [u8] {
    // SAFETY: the cast from `&CStr` to `&[u8]` is safe since `CStr` has the same layout as `&[u8]`
    // (this is technically not guaranteed, but we rely on it here). The pointer dereference is
    // safe since it comes from a mutable reference which is guaranteed to be valid for writes.
    unsafe { &mut *(core::ptr::from_mut(s) as *mut [u8]) }
}
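The escaping rule used by the `Display`/`Debug` impls above can be reproduced in userspace: printable ASCII (`0x20..=0x7e`) is emitted verbatim (with `"` additionally escaped for `Debug`), everything else becomes `\xNN`. `escape_bytes` is a hypothetical helper for illustration, not a kernel API:

```rust
// Userspace sketch of the byte-escaping loop in the Display/Debug impls.
use std::fmt::Write;

fn escape_bytes(bytes: &[u8], escape_quote: bool) -> String {
    let mut out = String::new();
    for &c in bytes {
        match c {
            // `Debug` escapes embedded double quotes.
            b'"' if escape_quote => out.push_str("\\\""),
            // Printable ASCII passes through unchanged.
            0x20..=0x7e => out.push(c as char),
            // Everything else is shown as a hex escape.
            _ => write!(out, "\\x{c:02x}").unwrap(),
        }
    }
    out
}

fn main() {
    // The penguin emoji is four UTF-8 bytes, none of them printable ASCII;
    // this matches the doctest expectation above.
    assert_eq!(escape_bytes("🐧".as_bytes(), false), "\\xf0\\x9f\\x90\\xa7");
    assert_eq!(escape_bytes(b"so \"cool\"", true), "so \\\"cool\\\"");
}
```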

impl CStrExt for CStr {
    #[inline]
    unsafe fn from_char_ptr<'a>(ptr: *const c_char) -> &'a Self {
        // SAFETY: The safety preconditions are the same as for `CStr::from_ptr`.
        unsafe { CStr::from_ptr(ptr.cast()) }
    }

    #[inline]
    unsafe fn from_bytes_with_nul_unchecked_mut(bytes: &mut [u8]) -> &mut Self {
        // SAFETY: the cast from `&[u8]` to `&CStr` is safe since the properties of `bytes` are
        // guaranteed by the safety precondition and `CStr` has the same layout as `&[u8]` (this is
        // technically not guaranteed, but we rely on it here). The pointer dereference is safe
        // since it comes from a mutable reference which is guaranteed to be valid for writes.
        unsafe { &mut *(core::ptr::from_mut(bytes) as *mut CStr) }
    }

    #[inline]
    fn as_char_ptr(&self) -> *const c_char {
        self.as_ptr().cast()
    }

    fn to_cstring(&self) -> Result<CString, AllocError> {
        CString::try_from(self)
    }

    fn make_ascii_lowercase(&mut self) {
        // SAFETY: This doesn't introduce or remove NUL bytes in the C string.
        unsafe { to_bytes_mut(self) }.make_ascii_lowercase();
    }

    fn make_ascii_uppercase(&mut self) {
        // SAFETY: This doesn't introduce or remove NUL bytes in the C string.
        unsafe { to_bytes_mut(self) }.make_ascii_uppercase();
    }

    fn to_ascii_lowercase(&self) -> Result<CString, AllocError> {
        let mut s = self.to_cstring()?;

        s.make_ascii_lowercase();

        Ok(s)
    }

    fn to_ascii_uppercase(&self) -> Result<CString, AllocError> {
        let mut s = self.to_cstring()?;

        s.make_ascii_uppercase();

        Ok(s)
    }
}

impl AsRef<BStr> for CStr {
    #[inline]
    fn as_ref(&self) -> &BStr {
        BStr::from_bytes(self.as_bytes())
    }
}

impl Deref for CStr {
    type Target = BStr;

    #[inline]
    fn deref(&self) -> &Self::Target {
        self.as_ref()
    }
}

impl Index<ops::RangeFrom<usize>> for CStr {
    type Output = CStr;

    #[inline]
    fn index(&self, index: ops::RangeFrom<usize>) -> &Self::Output {
        // Delegate bounds checking to slice.
        // Assign to _ to mute clippy's unnecessary operation warning.
        let _ = &self.as_bytes()[index.start..];
        // SAFETY: We just checked the bounds.
        unsafe { Self::from_bytes_with_nul_unchecked(&self.0[index.start..]) }
    }
}

impl Index<ops::RangeFull> for CStr {
    type Output = CStr;

    #[inline]
    fn index(&self, _index: ops::RangeFull) -> &Self::Output {
        self
    }
}

mod private {
    use core::ops;

    // Marker trait for index types that can be forwarded to `BStr`.
    pub trait CStrIndex {}

    impl CStrIndex for usize {}
    impl CStrIndex for ops::Range<usize> {}
    impl CStrIndex for ops::RangeInclusive<usize> {}
    impl CStrIndex for ops::RangeToInclusive<usize> {}
}

impl<Idx> Index<Idx> for CStr
where
    Idx: private::CStrIndex,
    BStr: Index<Idx>,
{
    type Output = <BStr as Index<Idx>>::Output;

    #[inline]
    fn index(&self, index: Idx) -> &Self::Output {
        &self.as_ref()[index]
        BStr::from_bytes(self.to_bytes())
    }
}
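The `private::CStrIndex` marker-trait pattern above restricts which index types are forwarded to the underlying `BStr`, so unsupported ranges fail at compile time. A minimal userspace sketch of the same pattern, with a toy `Wrapper` newtype standing in for `CStr` (names hypothetical):

```rust
// Sketch of the marker-trait indexing pattern: only index types with a
// `WrapperIndex` impl can be used on `Wrapper`.
use std::ops::Index;

struct Wrapper([u8]);

impl Wrapper {
    fn new(bytes: &[u8]) -> &Wrapper {
        // SAFETY: `Wrapper` is a newtype over `[u8]`; like the kernel's
        // `CStr`, this cast relies on the two having the same layout.
        unsafe { &*(bytes as *const [u8] as *const Wrapper) }
    }
}

mod private {
    // Marker trait for index types that can be forwarded to the slice.
    pub trait WrapperIndex {}

    impl WrapperIndex for usize {}
    impl WrapperIndex for std::ops::Range<usize> {}
    // Note: no impl for `RangeTo<usize>`, so `w[..3]` does not compile.
}

impl<Idx> Index<Idx> for Wrapper
where
    Idx: private::WrapperIndex,
    [u8]: Index<Idx>,
{
    type Output = <[u8] as Index<Idx>>::Output;

    fn index(&self, index: Idx) -> &Self::Output {
        &self.0[index]
    }
}

fn main() {
    let w = Wrapper::new(b"abcdef");
    assert_eq!(w[1], b'b');
    assert_eq!(&w[1..3], b"bc");
}
```

The private module makes the marker trait unnameable outside the crate, so downstream code cannot widen the set of accepted index types.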

@@ -612,6 +401,13 @@ macro_rules! c_str {
mod tests {
    use super::*;

    impl From<core::ffi::FromBytesWithNulError> for Error {
        #[inline]
        fn from(_: core::ffi::FromBytesWithNulError) -> Error {
            EINVAL
        }
    }

    macro_rules! format {
        ($($f:tt)*) => ({
            CString::try_from_fmt(fmt!($($f)*))?.to_str()?
@@ -634,40 +430,28 @@ mod tests {

    #[test]
    fn test_cstr_to_str() -> Result {
        let good_bytes = b"\xf0\x9f\xa6\x80\0";
        let checked_cstr = CStr::from_bytes_with_nul(good_bytes)?;
        let checked_str = checked_cstr.to_str()?;
        let cstr = c"\xf0\x9f\xa6\x80";
        let checked_str = cstr.to_str()?;
        assert_eq!(checked_str, "🦀");
        Ok(())
    }

    #[test]
    fn test_cstr_to_str_invalid_utf8() -> Result {
        let bad_bytes = b"\xc3\x28\0";
        let checked_cstr = CStr::from_bytes_with_nul(bad_bytes)?;
        assert!(checked_cstr.to_str().is_err());
        Ok(())
    }

    #[test]
    fn test_cstr_as_str_unchecked() -> Result {
        let good_bytes = b"\xf0\x9f\x90\xA7\0";
        let checked_cstr = CStr::from_bytes_with_nul(good_bytes)?;
        // SAFETY: The contents come from a string literal which contains valid UTF-8.
        let unchecked_str = unsafe { checked_cstr.as_str_unchecked() };
        assert_eq!(unchecked_str, "🐧");
        let cstr = c"\xc3\x28";
        assert!(cstr.to_str().is_err());
        Ok(())
    }

    #[test]
    fn test_cstr_display() -> Result {
        let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0")?;
        let hello_world = c"hello, world!";
        assert_eq!(format!("{hello_world}"), "hello, world!");
        let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0")?;
        let non_printables = c"\x01\x09\x0a";
        assert_eq!(format!("{non_printables}"), "\\x01\\x09\\x0a");
        let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0")?;
        let non_ascii = c"d\xe9j\xe0 vu";
        assert_eq!(format!("{non_ascii}"), "d\\xe9j\\xe0 vu");
        let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0")?;
        let good_bytes = c"\xf0\x9f\xa6\x80";
        assert_eq!(format!("{good_bytes}"), "\\xf0\\x9f\\xa6\\x80");
        Ok(())
    }
@@ -686,14 +470,12 @@ mod tests {

    #[test]
    fn test_cstr_debug() -> Result {
        let hello_world = CStr::from_bytes_with_nul(b"hello, world!\0")?;
        let hello_world = c"hello, world!";
        assert_eq!(format!("{hello_world:?}"), "\"hello, world!\"");
        let non_printables = CStr::from_bytes_with_nul(b"\x01\x09\x0a\0")?;
        assert_eq!(format!("{non_printables:?}"), "\"\\x01\\x09\\x0a\"");
        let non_ascii = CStr::from_bytes_with_nul(b"d\xe9j\xe0 vu\0")?;
        let non_printables = c"\x01\x09\x0a";
        assert_eq!(format!("{non_printables:?}"), "\"\\x01\\t\\n\"");
        let non_ascii = c"d\xe9j\xe0 vu";
        assert_eq!(format!("{non_ascii:?}"), "\"d\\xe9j\\xe0 vu\"");
        let good_bytes = CStr::from_bytes_with_nul(b"\xf0\x9f\xa6\x80\0")?;
        assert_eq!(format!("{good_bytes:?}"), "\"\\xf0\\x9f\\xa6\\x80\"");
        Ok(())
    }

@@ -941,43 +723,43 @@ unsafe fn kstrtobool_raw(string: *const u8) -> Result<bool> {
/// # Examples
///
/// ```
/// # use kernel::{c_str, str::kstrtobool};
/// # use kernel::str::kstrtobool;
///
/// // Lowercase
/// assert_eq!(kstrtobool(c_str!("true")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("tr")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("t")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("twrong")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("false")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("f")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("yes")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("no")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("on")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("off")), Ok(false));
/// assert_eq!(kstrtobool(c"true"), Ok(true));
/// assert_eq!(kstrtobool(c"tr"), Ok(true));
/// assert_eq!(kstrtobool(c"t"), Ok(true));
/// assert_eq!(kstrtobool(c"twrong"), Ok(true));
/// assert_eq!(kstrtobool(c"false"), Ok(false));
/// assert_eq!(kstrtobool(c"f"), Ok(false));
/// assert_eq!(kstrtobool(c"yes"), Ok(true));
/// assert_eq!(kstrtobool(c"no"), Ok(false));
/// assert_eq!(kstrtobool(c"on"), Ok(true));
/// assert_eq!(kstrtobool(c"off"), Ok(false));
///
/// // Camel case
/// assert_eq!(kstrtobool(c_str!("True")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("False")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("Yes")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("No")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("On")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("Off")), Ok(false));
/// assert_eq!(kstrtobool(c"True"), Ok(true));
/// assert_eq!(kstrtobool(c"False"), Ok(false));
/// assert_eq!(kstrtobool(c"Yes"), Ok(true));
/// assert_eq!(kstrtobool(c"No"), Ok(false));
/// assert_eq!(kstrtobool(c"On"), Ok(true));
/// assert_eq!(kstrtobool(c"Off"), Ok(false));
///
/// // All caps
/// assert_eq!(kstrtobool(c_str!("TRUE")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("FALSE")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("YES")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("NO")), Ok(false));
/// assert_eq!(kstrtobool(c_str!("ON")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("OFF")), Ok(false));
/// assert_eq!(kstrtobool(c"TRUE"), Ok(true));
/// assert_eq!(kstrtobool(c"FALSE"), Ok(false));
/// assert_eq!(kstrtobool(c"YES"), Ok(true));
/// assert_eq!(kstrtobool(c"NO"), Ok(false));
/// assert_eq!(kstrtobool(c"ON"), Ok(true));
/// assert_eq!(kstrtobool(c"OFF"), Ok(false));
///
/// // Numeric
/// assert_eq!(kstrtobool(c_str!("1")), Ok(true));
/// assert_eq!(kstrtobool(c_str!("0")), Ok(false));
/// assert_eq!(kstrtobool(c"1"), Ok(true));
/// assert_eq!(kstrtobool(c"0"), Ok(false));
///
/// // Invalid input
/// assert_eq!(kstrtobool(c_str!("invalid")), Err(EINVAL));
/// assert_eq!(kstrtobool(c_str!("2")), Err(EINVAL));
/// assert_eq!(kstrtobool(c"invalid"), Err(EINVAL));
/// assert_eq!(kstrtobool(c"2"), Err(EINVAL));
/// ```
pub fn kstrtobool(string: &CStr) -> Result<bool> {
    // SAFETY:

@@ -48,7 +48,6 @@ impl LockClassKey {
    ///
    /// # Examples
    /// ```
    /// # use kernel::c_str;
    /// # use kernel::alloc::KBox;
    /// # use kernel::types::ForeignOwnable;
    /// # use kernel::sync::{LockClassKey, SpinLock};
@@ -60,7 +59,7 @@ impl LockClassKey {
    /// {
    ///     stack_pin_init!(let num: SpinLock<u32> = SpinLock::new(
    ///         0,
    ///         c_str!("my_spinlock"),
    ///         c"my_spinlock",
    ///         // SAFETY: `key_ptr` is returned by the above `into_foreign()`, whose
    ///         // `from_foreign()` has not yet been called.
    ///         unsafe { <Pin<KBox<LockClassKey>> as ForeignOwnable>::borrow(key_ptr) }

@@ -8,7 +8,7 @@
use super::{lock::Backend, lock::Guard, LockClassKey};
use crate::{
    ffi::{c_int, c_long},
    str::CStr,
    str::{CStr, CStrExt as _},
    task::{
        MAX_SCHEDULE_TIMEOUT, TASK_FREEZABLE, TASK_INTERRUPTIBLE, TASK_NORMAL, TASK_UNINTERRUPTIBLE,
    },

@@ -7,7 +7,7 @@

use super::LockClassKey;
use crate::{
    str::CStr,
    str::{CStr, CStrExt as _},
    types::{NotThreadSafe, Opaque, ScopeGuard},
};
use core::{cell::UnsafeCell, marker::PhantomPinned, pin::Pin};

@@ -5,7 +5,7 @@
//! Support for defining statics containing locks.

use crate::{
    str::CStr,
    str::{CStr, CStrExt as _},
    sync::lock::{Backend, Guard, Lock},
    sync::{LockClassKey, LockedBy},
    types::Opaque,

@@ -289,7 +289,6 @@ impl<T, F: FnOnce(T)> Drop for ScopeGuard<T, F> {
/// # Examples
///
/// ```
/// # #![expect(unreachable_pub, clippy::disallowed_names)]
/// use kernel::types::Opaque;
/// # // Emulate a C struct binding which is from C, maybe uninitialized or not, only the C side
/// # // knows.

94
rust/macros/fmt.rs
Normal file
@@ -0,0 +1,94 @@
// SPDX-License-Identifier: GPL-2.0

use proc_macro::{Ident, TokenStream, TokenTree};
use std::collections::BTreeSet;

/// Please see [`crate::fmt`] for documentation.
pub(crate) fn fmt(input: TokenStream) -> TokenStream {
    let mut input = input.into_iter();

    let first_opt = input.next();
    let first_owned_str;
    let mut names = BTreeSet::new();
    let first_span = {
        let Some((mut first_str, first_span)) = (match first_opt.as_ref() {
            Some(TokenTree::Literal(first_lit)) => {
                first_owned_str = first_lit.to_string();
                Some(first_owned_str.as_str()).and_then(|first| {
                    let first = first.strip_prefix('"')?;
                    let first = first.strip_suffix('"')?;
                    Some((first, first_lit.span()))
                })
            }
            _ => None,
        }) else {
            return first_opt.into_iter().chain(input).collect();
        };

        // Parse `identifier`s from the format string.
        //
        // See https://doc.rust-lang.org/std/fmt/index.html#syntax.
        while let Some((_, rest)) = first_str.split_once('{') {
            first_str = rest;
            if let Some(rest) = first_str.strip_prefix('{') {
                first_str = rest;
                continue;
            }
            if let Some((name, rest)) = first_str.split_once('}') {
                first_str = rest;
                let name = name.split_once(':').map_or(name, |(name, _)| name);
                if !name.is_empty() && !name.chars().all(|c| c.is_ascii_digit()) {
                    names.insert(name);
                }
            }
        }
        first_span
    };

    let adapter = quote_spanned!(first_span => ::kernel::fmt::Adapter);

    let mut args = TokenStream::from_iter(first_opt);
    {
        let mut flush = |args: &mut TokenStream, current: &mut TokenStream| {
            let current = std::mem::take(current);
            if !current.is_empty() {
                let (lhs, rhs) = (|| {
                    let mut current = current.into_iter();
                    let mut acc = TokenStream::new();
                    while let Some(tt) = current.next() {
                        // Split on `=` only once to handle cases like `a = b = c`.
                        if matches!(&tt, TokenTree::Punct(p) if p.as_char() == '=') {
                            names.remove(acc.to_string().as_str());
                            // Include the `=` itself to keep the handling below uniform.
                            acc.extend([tt]);
                            return (Some(acc), current.collect::<TokenStream>());
                        }
                        acc.extend([tt]);
                    }
                    (None, acc)
                })();
                args.extend(quote_spanned!(first_span => #lhs #adapter(&#rhs)));
            }
        };

        let mut current = TokenStream::new();
        for tt in input {
            match &tt {
                TokenTree::Punct(p) if p.as_char() == ',' => {
                    flush(&mut args, &mut current);
                    &mut args
                }
                _ => &mut current,
            }
            .extend([tt]);
        }
        flush(&mut args, &mut current);
    }

    for name in names {
        let name = Ident::new(name, first_span);
        args.extend(quote_spanned!(first_span => , #name = #adapter(&#name)));
    }

    quote_spanned!(first_span => ::core::format_args!(#args))
}
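The format-string scan in `fmt` above can be reproduced in plain Rust: walk the string, skip `{{` escapes, and collect `{name}` / `{name:spec}` placeholders that are neither empty nor purely positional (all digits). `collect_names` is a hypothetical stand-in for the inline logic inside the macro:

```rust
// Userspace sketch of the placeholder-name scan used by the `fmt` proc macro.
use std::collections::BTreeSet;

fn collect_names(mut s: &str) -> BTreeSet<&str> {
    let mut names = BTreeSet::new();
    while let Some((_, rest)) = s.split_once('{') {
        s = rest;
        // `{{` is an escaped literal brace, not a placeholder.
        if let Some(rest) = s.strip_prefix('{') {
            s = rest;
            continue;
        }
        if let Some((name, rest)) = s.split_once('}') {
            s = rest;
            // Drop any `:spec` suffix and keep only named (non-numeric) args.
            let name = name.split_once(':').map_or(name, |(name, _)| name);
            if !name.is_empty() && !name.chars().all(|c| c.is_ascii_digit()) {
                names.insert(name);
            }
        }
    }
    names
}

fn main() {
    let names = collect_names("{foo} {0} {} {{literal}} {bar:?}");
    // `{0}` and `{}` are positional and `{{literal}}` is escaped, so only
    // the named arguments remain (BTreeSet keeps them sorted).
    assert_eq!(names.into_iter().collect::<Vec<_>>(), ["bar", "foo"]);
}
```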
@@ -15,6 +15,7 @@
mod quote;
mod concat_idents;
mod export;
mod fmt;
mod helpers;
mod kunit;
mod module;
@@ -201,6 +202,24 @@ pub fn export(attr: TokenStream, ts: TokenStream) -> TokenStream {
    export::export(attr, ts)
}

/// Like [`core::format_args!`], but automatically wraps arguments in [`kernel::fmt::Adapter`].
///
/// This macro allows generating `fmt::Arguments` while ensuring that each argument is wrapped with
/// `::kernel::fmt::Adapter`, which customizes formatting behavior for kernel logging.
///
/// Named arguments used in the format string (e.g. `{foo}`) are detected and resolved from local
/// bindings. All positional and named arguments are automatically wrapped.
///
/// This macro is an implementation detail of other kernel logging macros like [`pr_info!`] and
/// should not typically be used directly.
///
/// [`kernel::fmt::Adapter`]: ../kernel/fmt/struct.Adapter.html
/// [`pr_info!`]: ../kernel/macro.pr_info.html
#[proc_macro]
pub fn fmt(input: TokenStream) -> TokenStream {
    fmt::fmt(input)
}

/// Concatenate two identifiers.
///
/// This is useful in macros that need to declare or reference items with names

@@ -228,7 +228,7 @@ pub(crate) fn module(ts: TokenStream) -> TokenStream {
            type LocalModule = {type_};

            impl ::kernel::ModuleMetadata for {type_} {{
                const NAME: &'static ::kernel::str::CStr = ::kernel::c_str!(\"{name}\");
                const NAME: &'static ::kernel::str::CStr = c\"{name}\";
            }}

            // Double nested modules, since then nobody can access the public items inside.

@@ -48,6 +48,7 @@ macro_rules! quote_spanned {
    ($span:expr => $($tt:tt)*) => {{
        let mut tokens = ::proc_macro::TokenStream::new();
        {
            #[allow(unused_variables)]
            let span = $span;
            quote_spanned!(@proc tokens span $($tt)*);
        }
@@ -146,6 +147,12 @@ macro_rules! quote_spanned {
        )]);
        quote_spanned!(@proc $v $span $($tt)*);
    };
    (@proc $v:ident $span:ident & $($tt:tt)*) => {
        $v.extend([::proc_macro::TokenTree::Punct(
            ::proc_macro::Punct::new('&', ::proc_macro::Spacing::Alone),
        )]);
        quote_spanned!(@proc $v $span $($tt)*);
    };
    (@proc $v:ident $span:ident _ $($tt:tt)*) => {
        $v.extend([::proc_macro::TokenTree::Ident(
            ::proc_macro::Ident::new("_", $span),

@@ -9,7 +9,7 @@
> [!NOTE]
>
> This crate was originally named [`pinned-init`], but the migration to
> `pin-init` is not yet complete. The `legcay` branch contains the current
> `pin-init` is not yet complete. The `legacy` branch contains the current
> version of the `pinned-init` crate & the `main` branch already incorporates
> the rename to `pin-init`.
>

@@ -506,6 +506,8 @@ pub use ::paste::paste;
/// Creates a `unsafe impl<...> PinnedDrop for $type` block.
///
/// See [`PinnedDrop`] for more information.
///
/// [`PinnedDrop`]: crate::PinnedDrop
#[doc(hidden)]
#[macro_export]
macro_rules! __pinned_drop {

13
rust/proc-macro2/README.md
Normal file
@@ -0,0 +1,13 @@
# `proc-macro2`

These source files come from the Rust `proc-macro2` crate, version
1.0.101 (released 2025-08-16), hosted in the
<https://github.com/dtolnay/proc-macro2> repository, licensed under
"Apache-2.0 OR MIT" and only modified to add the SPDX license
identifiers and to remove the `unicode-ident` dependency.

For copyright details, please see:

https://github.com/dtolnay/proc-macro2/blob/1.0.101/README.md#license
https://github.com/dtolnay/proc-macro2/blob/1.0.101/LICENSE-APACHE
https://github.com/dtolnay/proc-macro2/blob/1.0.101/LICENSE-MIT
77
rust/proc-macro2/detection.rs
Normal file
@@ -0,0 +1,77 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use core::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;

static WORKS: AtomicUsize = AtomicUsize::new(0);
static INIT: Once = Once::new();

pub(crate) fn inside_proc_macro() -> bool {
    match WORKS.load(Ordering::Relaxed) {
        1 => return false,
        2 => return true,
        _ => {}
    }

    INIT.call_once(initialize);
    inside_proc_macro()
}
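The `inside_proc_macro` caching above encodes a tri-state in an `AtomicUsize` (0 = unknown, 1 = false, 2 = true), so subsequent calls skip the expensive probe, while `Once` guarantees the probe runs at most once. A standalone sketch of the same pattern, with a hypothetical `probe` standing in for `proc_macro::is_available()`:

```rust
// Sketch of the Once + AtomicUsize tri-state caching pattern.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Once;

static WORKS: AtomicUsize = AtomicUsize::new(0);
static INIT: Once = Once::new();

fn probe() -> bool {
    true // imagine an expensive runtime check here
}

fn inside_probe() -> bool {
    // Fast path: a previous call already stored the answer.
    match WORKS.load(Ordering::Relaxed) {
        1 => return false,
        2 => return true,
        _ => {}
    }
    INIT.call_once(|| {
        // Encode the boolean as 1 (false) or 2 (true), leaving 0 = unknown.
        WORKS.store(probe() as usize + 1, Ordering::Relaxed);
    });
    // Recurse once; the match above now hits the cached value.
    inside_probe()
}

fn main() {
    assert!(inside_probe());
    // Second call takes the fast path from the atomic.
    assert!(inside_probe());
    assert_eq!(WORKS.load(Ordering::Relaxed), 2);
}
```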

pub(crate) fn force_fallback() {
    WORKS.store(1, Ordering::Relaxed);
}

pub(crate) fn unforce_fallback() {
    initialize();
}

#[cfg(not(no_is_available))]
fn initialize() {
    let available = proc_macro::is_available();
    WORKS.store(available as usize + 1, Ordering::Relaxed);
}

// Swap in a null panic hook to avoid printing "thread panicked" to stderr,
// then use catch_unwind to determine whether the compiler's proc_macro is
// working. When proc-macro2 is used from outside of a procedural macro all
// of the proc_macro crate's APIs currently panic.
//
// The Once is to prevent the possibility of this ordering:
//
//     thread 1 calls take_hook, gets the user's original hook
//     thread 1 calls set_hook with the null hook
//     thread 2 calls take_hook, thinks null hook is the original hook
//     thread 2 calls set_hook with the null hook
//     thread 1 calls set_hook with the actual original hook
//     thread 2 calls set_hook with what it thinks is the original hook
//
// in which the user's hook has been lost.
//
// There is still a race condition where a panic in a different thread can
// happen during the interval that the user's original panic hook is
// unregistered such that their hook is incorrectly not called. This is
// sufficiently unlikely and less bad than printing panic messages to stderr
// on correct use of this crate. Maybe there is a libstd feature request
// here. For now, if a user needs to guarantee that this failure mode does
// not occur, they need to call e.g. `proc_macro2::Span::call_site()` from
// the main thread before launching any other threads.
#[cfg(no_is_available)]
fn initialize() {
    use std::panic::{self, PanicInfo};

    type PanicHook = dyn Fn(&PanicInfo) + Sync + Send + 'static;

    let null_hook: Box<PanicHook> = Box::new(|_panic_info| { /* ignore */ });
    let sanity_check = &*null_hook as *const PanicHook;
    let original_hook = panic::take_hook();
    panic::set_hook(null_hook);

    let works = panic::catch_unwind(proc_macro::Span::call_site).is_ok();
    WORKS.store(works as usize + 1, Ordering::Relaxed);

    let hopefully_null_hook = panic::take_hook();
    panic::set_hook(original_hook);
    if sanity_check != &*hopefully_null_hook {
        panic!("observed race condition in proc_macro2::inside_proc_macro");
    }
}
|
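The save/probe/restore dance described in the comment above can be sketched with std's panic-hook API alone. `probe` below is a hypothetical stand-in for `initialize`, minus the `WORKS` flag and the sanity check:

```rust
use std::panic;

// Hypothetical sketch of the pattern above: silence the panic hook, probe a
// closure with catch_unwind, then restore the caller's original hook.
fn probe<F: FnOnce() + panic::UnwindSafe>(f: F) -> bool {
    let original_hook = panic::take_hook();
    panic::set_hook(Box::new(|_panic_info| { /* ignore */ }));
    let works = panic::catch_unwind(f).is_ok();
    panic::set_hook(original_hook); // put the user's hook back
    works
}

fn main() {
    assert!(probe(|| {}));
    assert!(!probe(|| panic!("not working")));
}
```

Note that without the `Once` described above, two threads interleaving their take/set calls could lose the user's hook, which is exactly the ordering the comment spells out.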
153 rust/proc-macro2/extra.rs Normal file
@@ -0,0 +1,153 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

//! Items which do not have a correspondence to any API in the proc_macro crate,
//! but are necessary to include in proc-macro2.

use crate::fallback;
use crate::imp;
use crate::marker::{ProcMacroAutoTraits, MARKER};
use crate::Span;
use core::fmt::{self, Debug};

/// Invalidate any `proc_macro2::Span` that exist on the current thread.
///
/// The implementation of `Span` uses thread-local data structures and this
/// function clears them. Calling any method on a `Span` on the current thread
/// created prior to the invalidation will return incorrect values or crash.
///
/// This function is useful for programs that process more than 2<sup>32</sup>
/// bytes of Rust source code on the same thread. Just like rustc, proc-macro2
/// uses 32-bit source locations, and these wrap around when the total source
/// code processed by the same thread exceeds 2<sup>32</sup> bytes (4
/// gigabytes). After a wraparound, `Span` methods such as `source_text()` can
/// return wrong data.
///
/// # Example
///
/// As of late 2023, there is 200 GB of Rust code published on crates.io.
/// Looking at just the newest version of every crate, it is 16 GB of code. So a
/// workload that involves parsing it all would overflow a 32-bit source
/// location unless spans are being invalidated.
///
/// ```
/// use flate2::read::GzDecoder;
/// use std::ffi::OsStr;
/// use std::io::{BufReader, Read};
/// use std::str::FromStr;
/// use tar::Archive;
///
/// rayon::scope(|s| {
///     for krate in every_version_of_every_crate() {
///         s.spawn(move |_| {
///             proc_macro2::extra::invalidate_current_thread_spans();
///
///             let reader = BufReader::new(krate);
///             let tar = GzDecoder::new(reader);
///             let mut archive = Archive::new(tar);
///             for entry in archive.entries().unwrap() {
///                 let mut entry = entry.unwrap();
///                 let path = entry.path().unwrap();
///                 if path.extension() != Some(OsStr::new("rs")) {
///                     continue;
///                 }
///                 let mut content = String::new();
///                 entry.read_to_string(&mut content).unwrap();
///                 match proc_macro2::TokenStream::from_str(&content) {
///                     Ok(tokens) => {/* ... */},
///                     Err(_) => continue,
///                 }
///             }
///         });
///     }
/// });
/// #
/// # fn every_version_of_every_crate() -> Vec<std::fs::File> {
/// #     Vec::new()
/// # }
/// ```
///
/// # Panics
///
/// This function is not applicable to and will panic if called from a
/// procedural macro.
#[cfg(span_locations)]
#[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))]
pub fn invalidate_current_thread_spans() {
    crate::imp::invalidate_current_thread_spans();
}

/// An object that holds a [`Group`]'s `span_open()` and `span_close()` together
/// in a more compact representation than holding those 2 spans individually.
///
/// [`Group`]: crate::Group
#[derive(Copy, Clone)]
pub struct DelimSpan {
    inner: DelimSpanEnum,
    _marker: ProcMacroAutoTraits,
}

#[derive(Copy, Clone)]
enum DelimSpanEnum {
    #[cfg(wrap_proc_macro)]
    Compiler {
        join: proc_macro::Span,
        open: proc_macro::Span,
        close: proc_macro::Span,
    },
    Fallback(fallback::Span),
}

impl DelimSpan {
    pub(crate) fn new(group: &imp::Group) -> Self {
        #[cfg(wrap_proc_macro)]
        let inner = match group {
            imp::Group::Compiler(group) => DelimSpanEnum::Compiler {
                join: group.span(),
                open: group.span_open(),
                close: group.span_close(),
            },
            imp::Group::Fallback(group) => DelimSpanEnum::Fallback(group.span()),
        };

        #[cfg(not(wrap_proc_macro))]
        let inner = DelimSpanEnum::Fallback(group.span());

        DelimSpan {
            inner,
            _marker: MARKER,
        }
    }

    /// Returns a span covering the entire delimited group.
    pub fn join(&self) -> Span {
        match &self.inner {
            #[cfg(wrap_proc_macro)]
            DelimSpanEnum::Compiler { join, .. } => Span::_new(imp::Span::Compiler(*join)),
            DelimSpanEnum::Fallback(span) => Span::_new_fallback(*span),
        }
    }

    /// Returns a span for the opening punctuation of the group only.
    pub fn open(&self) -> Span {
        match &self.inner {
            #[cfg(wrap_proc_macro)]
            DelimSpanEnum::Compiler { open, .. } => Span::_new(imp::Span::Compiler(*open)),
            DelimSpanEnum::Fallback(span) => Span::_new_fallback(span.first_byte()),
        }
    }

    /// Returns a span for the closing punctuation of the group only.
    pub fn close(&self) -> Span {
        match &self.inner {
            #[cfg(wrap_proc_macro)]
            DelimSpanEnum::Compiler { close, .. } => Span::_new(imp::Span::Compiler(*close)),
            DelimSpanEnum::Fallback(span) => Span::_new_fallback(span.last_byte()),
        }
    }
}

impl Debug for DelimSpan {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        Debug::fmt(&self.join(), f)
    }
}
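In the fallback arm above, `open()` and `close()` are not stored separately; they are derived from a single stored span via `first_byte()`/`last_byte()`. A self-contained sketch of that idea, using a hypothetical `(lo, hi)` byte-range span in place of `fallback::Span`:

```rust
// Hypothetical byte-range span, standing in for fallback::Span.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct ByteSpan {
    lo: u32,
    hi: u32,
}

impl ByteSpan {
    // Span of the opening delimiter: the first byte of the group.
    fn first_byte(self) -> ByteSpan {
        ByteSpan { lo: self.lo, hi: self.lo + 1 }
    }

    // Span of the closing delimiter: the last byte of the group.
    fn last_byte(self) -> ByteSpan {
        ByteSpan { lo: self.hi - 1, hi: self.hi }
    }
}

fn main() {
    let group = ByteSpan { lo: 10, hi: 20 }; // e.g. a `(...)` group in a file
    assert_eq!(group.first_byte(), ByteSpan { lo: 10, hi: 11 });
    assert_eq!(group.last_byte(), ByteSpan { lo: 19, hi: 20 });
}
```

Storing one span instead of three is what makes `DelimSpan` "more compact" in the fallback case; only the compiler-backed case needs all three `proc_macro::Span` values.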
1258 rust/proc-macro2/fallback.rs Normal file
File diff suppressed because it is too large
1351 rust/proc-macro2/lib.rs Normal file
File diff suppressed because it is too large
31 rust/proc-macro2/location.rs Normal file
@@ -0,0 +1,31 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use core::cmp::Ordering;

/// A line-column pair representing the start or end of a `Span`.
///
/// This type is semver exempt and not exposed by default.
#[cfg_attr(docsrs, doc(cfg(feature = "span-locations")))]
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash)]
pub struct LineColumn {
    /// The 1-indexed line in the source file on which the span starts or ends
    /// (inclusive).
    pub line: usize,
    /// The 0-indexed column (in UTF-8 characters) in the source file on which
    /// the span starts or ends (inclusive).
    pub column: usize,
}

impl Ord for LineColumn {
    fn cmp(&self, other: &Self) -> Ordering {
        self.line
            .cmp(&other.line)
            .then(self.column.cmp(&other.column))
    }
}

impl PartialOrd for LineColumn {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
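The `Ord` impl above orders positions line-first, with the column only breaking ties on the same line. A quick stand-alone illustration, with `Pos` as a hypothetical copy of `LineColumn`:

```rust
use std::cmp::Ordering;

// Hypothetical mirror of LineColumn, to show its line-major ordering.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
struct Pos {
    line: usize,
    column: usize,
}

impl Ord for Pos {
    fn cmp(&self, other: &Self) -> Ordering {
        self.line
            .cmp(&other.line)
            .then(self.column.cmp(&other.column))
    }
}

impl PartialOrd for Pos {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    // An earlier line compares less, regardless of column.
    assert!(Pos { line: 2, column: 99 } < Pos { line: 3, column: 0 });
    // Same line: the column breaks the tie.
    assert!(Pos { line: 3, column: 1 } < Pos { line: 3, column: 2 });
}
```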
19 rust/proc-macro2/marker.rs Normal file
@@ -0,0 +1,19 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use alloc::rc::Rc;
use core::marker::PhantomData;
use core::panic::{RefUnwindSafe, UnwindSafe};

// Zero sized marker with the correct set of autotrait impls we want all proc
// macro types to have.
#[derive(Copy, Clone)]
#[cfg_attr(
    all(procmacro2_semver_exempt, any(not(wrap_proc_macro), super_unstable)),
    derive(PartialEq, Eq)
)]
pub(crate) struct ProcMacroAutoTraits(PhantomData<Rc<()>>);

pub(crate) const MARKER: ProcMacroAutoTraits = ProcMacroAutoTraits(PhantomData);

impl UnwindSafe for ProcMacroAutoTraits {}
impl RefUnwindSafe for ProcMacroAutoTraits {}
997 rust/proc-macro2/parse.rs Normal file
@@ -0,0 +1,997 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::fallback::{
    self, is_ident_continue, is_ident_start, Group, Ident, LexError, Literal, Span, TokenStream,
    TokenStreamBuilder,
};
use crate::{Delimiter, Punct, Spacing, TokenTree};
use core::char;
use core::str::{Bytes, CharIndices, Chars};

#[derive(Copy, Clone, Eq, PartialEq)]
pub(crate) struct Cursor<'a> {
    pub(crate) rest: &'a str,
    #[cfg(span_locations)]
    pub(crate) off: u32,
}

impl<'a> Cursor<'a> {
    pub(crate) fn advance(&self, bytes: usize) -> Cursor<'a> {
        let (_front, rest) = self.rest.split_at(bytes);
        Cursor {
            rest,
            #[cfg(span_locations)]
            off: self.off + _front.chars().count() as u32,
        }
    }

    pub(crate) fn starts_with(&self, s: &str) -> bool {
        self.rest.starts_with(s)
    }

    pub(crate) fn starts_with_char(&self, ch: char) -> bool {
        self.rest.starts_with(ch)
    }

    pub(crate) fn starts_with_fn<Pattern>(&self, f: Pattern) -> bool
    where
        Pattern: FnMut(char) -> bool,
    {
        self.rest.starts_with(f)
    }

    pub(crate) fn is_empty(&self) -> bool {
        self.rest.is_empty()
    }

    fn len(&self) -> usize {
        self.rest.len()
    }

    fn as_bytes(&self) -> &'a [u8] {
        self.rest.as_bytes()
    }

    fn bytes(&self) -> Bytes<'a> {
        self.rest.bytes()
    }

    fn chars(&self) -> Chars<'a> {
        self.rest.chars()
    }

    fn char_indices(&self) -> CharIndices<'a> {
        self.rest.char_indices()
    }

    fn parse(&self, tag: &str) -> Result<Cursor<'a>, Reject> {
        if self.starts_with(tag) {
            Ok(self.advance(tag.len()))
        } else {
            Err(Reject)
        }
    }
}

pub(crate) struct Reject;
type PResult<'a, O> = Result<(Cursor<'a>, O), Reject>;

fn skip_whitespace(input: Cursor) -> Cursor {
    let mut s = input;

    while !s.is_empty() {
        let byte = s.as_bytes()[0];
        if byte == b'/' {
            if s.starts_with("//")
                && (!s.starts_with("///") || s.starts_with("////"))
                && !s.starts_with("//!")
            {
                let (cursor, _) = take_until_newline_or_eof(s);
                s = cursor;
                continue;
            } else if s.starts_with("/**/") {
                s = s.advance(4);
                continue;
            } else if s.starts_with("/*")
                && (!s.starts_with("/**") || s.starts_with("/***"))
                && !s.starts_with("/*!")
            {
                match block_comment(s) {
                    Ok((rest, _)) => {
                        s = rest;
                        continue;
                    }
                    Err(Reject) => return s,
                }
            }
        }
        match byte {
            b' ' | 0x09..=0x0d => {
                s = s.advance(1);
                continue;
            }
            b if b.is_ascii() => {}
            _ => {
                let ch = s.chars().next().unwrap();
                if is_whitespace(ch) {
                    s = s.advance(ch.len_utf8());
                    continue;
                }
            }
        }
        return s;
    }
    s
}
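The condition in `skip_whitespace` above that decides which `//` comments are skippable (plain comments) versus kept as tokens (doc comments) can be isolated as a small predicate. `is_plain_line_comment` is a hypothetical helper restating that logic:

```rust
// `//` is a plain comment unless it is an outer doc comment (`///`, but
// `////...` is plain again) or an inner doc comment (`//!`).
fn is_plain_line_comment(s: &str) -> bool {
    s.starts_with("//")
        && (!s.starts_with("///") || s.starts_with("////"))
        && !s.starts_with("//!")
}

fn main() {
    assert!(is_plain_line_comment("// note"));
    assert!(is_plain_line_comment("//// horizontal rule"));
    assert!(!is_plain_line_comment("/// outer doc"));
    assert!(!is_plain_line_comment("//! inner doc"));
}
```

Doc comments are not whitespace to the lexer: `doc_comment` later in this file turns them into `#[doc = "..."]` attribute tokens, which is why `skip_whitespace` must leave them alone.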

fn block_comment(input: Cursor) -> PResult<&str> {
    if !input.starts_with("/*") {
        return Err(Reject);
    }

    let mut depth = 0usize;
    let bytes = input.as_bytes();
    let mut i = 0usize;
    let upper = bytes.len() - 1;

    while i < upper {
        if bytes[i] == b'/' && bytes[i + 1] == b'*' {
            depth += 1;
            i += 1; // eat '*'
        } else if bytes[i] == b'*' && bytes[i + 1] == b'/' {
            depth -= 1;
            if depth == 0 {
                return Ok((input.advance(i + 2), &input.rest[..i + 2]));
            }
            i += 1; // eat '/'
        }
        i += 1;
    }

    Err(Reject)
}

fn is_whitespace(ch: char) -> bool {
    // Rust treats left-to-right mark and right-to-left mark as whitespace
    ch.is_whitespace() || ch == '\u{200e}' || ch == '\u{200f}'
}

fn word_break(input: Cursor) -> Result<Cursor, Reject> {
    match input.chars().next() {
        Some(ch) if is_ident_continue(ch) => Err(Reject),
        Some(_) | None => Ok(input),
    }
}

// Rustc's representation of a macro expansion error in expression position or
// type position.
const ERROR: &str = "(/*ERROR*/)";

pub(crate) fn token_stream(mut input: Cursor) -> Result<TokenStream, LexError> {
    let mut trees = TokenStreamBuilder::new();
    let mut stack = Vec::new();

    loop {
        input = skip_whitespace(input);

        if let Ok((rest, ())) = doc_comment(input, &mut trees) {
            input = rest;
            continue;
        }

        #[cfg(span_locations)]
        let lo = input.off;

        let first = match input.bytes().next() {
            Some(first) => first,
            None => match stack.last() {
                None => return Ok(trees.build()),
                #[cfg(span_locations)]
                Some((lo, _frame)) => {
                    return Err(LexError {
                        span: Span { lo: *lo, hi: *lo },
                    })
                }
                #[cfg(not(span_locations))]
                Some(_frame) => return Err(LexError { span: Span {} }),
            },
        };

        if let Some(open_delimiter) = match first {
            b'(' if !input.starts_with(ERROR) => Some(Delimiter::Parenthesis),
            b'[' => Some(Delimiter::Bracket),
            b'{' => Some(Delimiter::Brace),
            _ => None,
        } {
            input = input.advance(1);
            let frame = (open_delimiter, trees);
            #[cfg(span_locations)]
            let frame = (lo, frame);
            stack.push(frame);
            trees = TokenStreamBuilder::new();
        } else if let Some(close_delimiter) = match first {
            b')' => Some(Delimiter::Parenthesis),
            b']' => Some(Delimiter::Bracket),
            b'}' => Some(Delimiter::Brace),
            _ => None,
        } {
            let frame = match stack.pop() {
                Some(frame) => frame,
                None => return Err(lex_error(input)),
            };
            #[cfg(span_locations)]
            let (lo, frame) = frame;
            let (open_delimiter, outer) = frame;
            if open_delimiter != close_delimiter {
                return Err(lex_error(input));
            }
            input = input.advance(1);
            let mut g = Group::new(open_delimiter, trees.build());
            g.set_span(Span {
                #[cfg(span_locations)]
                lo,
                #[cfg(span_locations)]
                hi: input.off,
            });
            trees = outer;
            trees.push_token_from_parser(TokenTree::Group(crate::Group::_new_fallback(g)));
        } else {
            let (rest, mut tt) = match leaf_token(input) {
                Ok((rest, tt)) => (rest, tt),
                Err(Reject) => return Err(lex_error(input)),
            };
            tt.set_span(crate::Span::_new_fallback(Span {
                #[cfg(span_locations)]
                lo,
                #[cfg(span_locations)]
                hi: rest.off,
            }));
            trees.push_token_from_parser(tt);
            input = rest;
        }
    }
}
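`block_comment` above terminates only at the `*/` that matches the opening `/*`, by tracking nesting depth. A stand-alone restatement over plain `&str` (a hypothetical helper returning the comment's byte length):

```rust
// Scan bytes, bumping depth on `/*` and dropping it on `*/`, so that nested
// block comments end only at the matching close.
fn block_comment_len(src: &str) -> Option<usize> {
    let bytes = src.as_bytes();
    if !src.starts_with("/*") {
        return None;
    }
    let mut depth = 0usize;
    let mut i = 0usize;
    while i + 1 < bytes.len() {
        if bytes[i] == b'/' && bytes[i + 1] == b'*' {
            depth += 1;
            i += 1; // eat '*'
        } else if bytes[i] == b'*' && bytes[i + 1] == b'/' {
            depth -= 1;
            if depth == 0 {
                return Some(i + 2);
            }
            i += 1; // eat '/'
        }
        i += 1;
    }
    None // unterminated comment
}

fn main() {
    // The inner `/* b */` does not end the outer comment.
    assert_eq!(block_comment_len("/* a /* b */ c */ rest"), Some(17));
    assert_eq!(block_comment_len("/* unterminated /*"), None);
}
```

The extra `i += 1` after each delimiter prevents `/*/` from being read as both an open and a close.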

fn lex_error(cursor: Cursor) -> LexError {
    #[cfg(not(span_locations))]
    let _ = cursor;
    LexError {
        span: Span {
            #[cfg(span_locations)]
            lo: cursor.off,
            #[cfg(span_locations)]
            hi: cursor.off,
        },
    }
}

fn leaf_token(input: Cursor) -> PResult<TokenTree> {
    if let Ok((input, l)) = literal(input) {
        // must be parsed before ident
        Ok((input, TokenTree::Literal(crate::Literal::_new_fallback(l))))
    } else if let Ok((input, p)) = punct(input) {
        Ok((input, TokenTree::Punct(p)))
    } else if let Ok((input, i)) = ident(input) {
        Ok((input, TokenTree::Ident(i)))
    } else if input.starts_with(ERROR) {
        let rest = input.advance(ERROR.len());
        let repr = crate::Literal::_new_fallback(Literal::_new(ERROR.to_owned()));
        Ok((rest, TokenTree::Literal(repr)))
    } else {
        Err(Reject)
    }
}

fn ident(input: Cursor) -> PResult<crate::Ident> {
    if [
        "r\"", "r#\"", "r##", "b\"", "b\'", "br\"", "br#", "c\"", "cr\"", "cr#",
    ]
    .iter()
    .any(|prefix| input.starts_with(prefix))
    {
        Err(Reject)
    } else {
        ident_any(input)
    }
}

fn ident_any(input: Cursor) -> PResult<crate::Ident> {
    let raw = input.starts_with("r#");
    let rest = input.advance((raw as usize) << 1);

    let (rest, sym) = ident_not_raw(rest)?;

    if !raw {
        let ident =
            crate::Ident::_new_fallback(Ident::new_unchecked(sym, fallback::Span::call_site()));
        return Ok((rest, ident));
    }

    match sym {
        "_" | "super" | "self" | "Self" | "crate" => return Err(Reject),
        _ => {}
    }

    let ident =
        crate::Ident::_new_fallback(Ident::new_raw_unchecked(sym, fallback::Span::call_site()));
    Ok((rest, ident))
}

fn ident_not_raw(input: Cursor) -> PResult<&str> {
    let mut chars = input.char_indices();

    match chars.next() {
        Some((_, ch)) if is_ident_start(ch) => {}
        _ => return Err(Reject),
    }

    let mut end = input.len();
    for (i, ch) in chars {
        if !is_ident_continue(ch) {
            end = i;
            break;
        }
    }

    Ok((input.advance(end), &input.rest[..end]))
}

pub(crate) fn literal(input: Cursor) -> PResult<Literal> {
    let rest = literal_nocapture(input)?;
    let end = input.len() - rest.len();
    Ok((rest, Literal::_new(input.rest[..end].to_string())))
}

fn literal_nocapture(input: Cursor) -> Result<Cursor, Reject> {
    if let Ok(ok) = string(input) {
        Ok(ok)
    } else if let Ok(ok) = byte_string(input) {
        Ok(ok)
    } else if let Ok(ok) = c_string(input) {
        Ok(ok)
    } else if let Ok(ok) = byte(input) {
        Ok(ok)
    } else if let Ok(ok) = character(input) {
        Ok(ok)
    } else if let Ok(ok) = float(input) {
        Ok(ok)
    } else if let Ok(ok) = int(input) {
        Ok(ok)
    } else {
        Err(Reject)
    }
}

fn literal_suffix(input: Cursor) -> Cursor {
    match ident_not_raw(input) {
        Ok((input, _)) => input,
        Err(Reject) => input,
    }
}

fn string(input: Cursor) -> Result<Cursor, Reject> {
    if let Ok(input) = input.parse("\"") {
        cooked_string(input)
    } else if let Ok(input) = input.parse("r") {
        raw_string(input)
    } else {
        Err(Reject)
    }
}
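`ident_any` above rejects a handful of names behind `r#` because rustc does not allow them as raw identifiers, while any ordinary keyword is fine. As a tiny predicate (a hypothetical helper restating that match arm):

```rust
// Path keywords and `_` cannot be raw identifiers (`r#self` is an error in
// Rust), while ordinary keywords like `match` or `fn` can be.
fn is_allowed_raw_ident(sym: &str) -> bool {
    !matches!(sym, "_" | "super" | "self" | "Self" | "crate")
}

fn main() {
    assert!(is_allowed_raw_ident("match"));
    assert!(is_allowed_raw_ident("fn"));
    assert!(!is_allowed_raw_ident("self"));
    assert!(!is_allowed_raw_ident("crate"));
}
```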

fn cooked_string(mut input: Cursor) -> Result<Cursor, Reject> {
    let mut chars = input.char_indices();

    while let Some((i, ch)) = chars.next() {
        match ch {
            '"' => {
                let input = input.advance(i + 1);
                return Ok(literal_suffix(input));
            }
            '\r' => match chars.next() {
                Some((_, '\n')) => {}
                _ => break,
            },
            '\\' => match chars.next() {
                Some((_, 'x')) => {
                    backslash_x_char(&mut chars)?;
                }
                Some((_, 'n' | 'r' | 't' | '\\' | '\'' | '"' | '0')) => {}
                Some((_, 'u')) => {
                    backslash_u(&mut chars)?;
                }
                Some((newline, ch @ ('\n' | '\r'))) => {
                    input = input.advance(newline + 1);
                    trailing_backslash(&mut input, ch as u8)?;
                    chars = input.char_indices();
                }
                _ => break,
            },
            _ch => {}
        }
    }
    Err(Reject)
}

fn raw_string(input: Cursor) -> Result<Cursor, Reject> {
    let (input, delimiter) = delimiter_of_raw_string(input)?;
    let mut bytes = input.bytes().enumerate();
    while let Some((i, byte)) = bytes.next() {
        match byte {
            b'"' if input.rest[i + 1..].starts_with(delimiter) => {
                let rest = input.advance(i + 1 + delimiter.len());
                return Ok(literal_suffix(rest));
            }
            b'\r' => match bytes.next() {
                Some((_, b'\n')) => {}
                _ => break,
            },
            _ => {}
        }
    }
    Err(Reject)
}

fn byte_string(input: Cursor) -> Result<Cursor, Reject> {
    if let Ok(input) = input.parse("b\"") {
        cooked_byte_string(input)
    } else if let Ok(input) = input.parse("br") {
        raw_byte_string(input)
    } else {
        Err(Reject)
    }
}

fn cooked_byte_string(mut input: Cursor) -> Result<Cursor, Reject> {
    let mut bytes = input.bytes().enumerate();
    while let Some((offset, b)) = bytes.next() {
        match b {
            b'"' => {
                let input = input.advance(offset + 1);
                return Ok(literal_suffix(input));
            }
            b'\r' => match bytes.next() {
                Some((_, b'\n')) => {}
                _ => break,
            },
            b'\\' => match bytes.next() {
                Some((_, b'x')) => {
                    backslash_x_byte(&mut bytes)?;
                }
                Some((_, b'n' | b'r' | b't' | b'\\' | b'0' | b'\'' | b'"')) => {}
                Some((newline, b @ (b'\n' | b'\r'))) => {
                    input = input.advance(newline + 1);
                    trailing_backslash(&mut input, b)?;
                    bytes = input.bytes().enumerate();
                }
                _ => break,
            },
            b if b.is_ascii() => {}
            _ => break,
        }
    }
    Err(Reject)
}

fn delimiter_of_raw_string(input: Cursor) -> PResult<&str> {
    for (i, byte) in input.bytes().enumerate() {
        match byte {
            b'"' => {
                if i > 255 {
                    // https://github.com/rust-lang/rust/pull/95251
                    return Err(Reject);
                }
                return Ok((input.advance(i + 1), &input.rest[..i]));
            }
            b'#' => {}
            _ => break,
        }
    }
    Err(Reject)
}

fn raw_byte_string(input: Cursor) -> Result<Cursor, Reject> {
    let (input, delimiter) = delimiter_of_raw_string(input)?;
    let mut bytes = input.bytes().enumerate();
    while let Some((i, byte)) = bytes.next() {
        match byte {
            b'"' if input.rest[i + 1..].starts_with(delimiter) => {
                let rest = input.advance(i + 1 + delimiter.len());
                return Ok(literal_suffix(rest));
            }
            b'\r' => match bytes.next() {
                Some((_, b'\n')) => {}
                _ => break,
            },
            other => {
                if !other.is_ascii() {
                    break;
                }
            }
        }
    }
    Err(Reject)
}

fn c_string(input: Cursor) -> Result<Cursor, Reject> {
    if let Ok(input) = input.parse("c\"") {
        cooked_c_string(input)
    } else if let Ok(input) = input.parse("cr") {
        raw_c_string(input)
    } else {
        Err(Reject)
    }
}

fn raw_c_string(input: Cursor) -> Result<Cursor, Reject> {
    let (input, delimiter) = delimiter_of_raw_string(input)?;
    let mut bytes = input.bytes().enumerate();
    while let Some((i, byte)) = bytes.next() {
        match byte {
            b'"' if input.rest[i + 1..].starts_with(delimiter) => {
                let rest = input.advance(i + 1 + delimiter.len());
                return Ok(literal_suffix(rest));
            }
            b'\r' => match bytes.next() {
                Some((_, b'\n')) => {}
                _ => break,
            },
            b'\0' => break,
            _ => {}
        }
    }
    Err(Reject)
}
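`delimiter_of_raw_string` above consumes the run of `#`s that opens a raw string and caps it at 255, matching the rust-lang/rust change linked in the comment. A self-contained restatement over plain `&str` (a hypothetical helper returning the `#` delimiter):

```rust
// Scan `#`s up to the opening quote of a raw string; a run longer than 255
// is rejected, as in rustc.
fn raw_delimiter(input: &str) -> Option<&str> {
    for (i, byte) in input.bytes().enumerate() {
        match byte {
            b'"' => {
                if i > 255 {
                    return None;
                }
                return Some(&input[..i]);
            }
            b'#' => {}
            _ => return None,
        }
    }
    None
}

fn main() {
    // As in `r##"..."##`, after the leading `r` has been consumed.
    assert_eq!(raw_delimiter("##\"hi\"##"), Some("##"));
    assert_eq!(raw_delimiter("\"hi\""), Some(""));
    assert_eq!(raw_delimiter("x"), None);
}
```

The raw-string scanners above then close only on a `"` followed by this exact delimiter, which is what lets `"` appear freely inside `r#"..."#`.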

fn cooked_c_string(mut input: Cursor) -> Result<Cursor, Reject> {
    let mut chars = input.char_indices();

    while let Some((i, ch)) = chars.next() {
        match ch {
            '"' => {
                let input = input.advance(i + 1);
                return Ok(literal_suffix(input));
            }
            '\r' => match chars.next() {
                Some((_, '\n')) => {}
                _ => break,
            },
            '\\' => match chars.next() {
                Some((_, 'x')) => {
                    backslash_x_nonzero(&mut chars)?;
                }
                Some((_, 'n' | 'r' | 't' | '\\' | '\'' | '"')) => {}
                Some((_, 'u')) => {
                    if backslash_u(&mut chars)? == '\0' {
                        break;
                    }
                }
                Some((newline, ch @ ('\n' | '\r'))) => {
                    input = input.advance(newline + 1);
                    trailing_backslash(&mut input, ch as u8)?;
                    chars = input.char_indices();
                }
                _ => break,
            },
            '\0' => break,
            _ch => {}
        }
    }
    Err(Reject)
}

fn byte(input: Cursor) -> Result<Cursor, Reject> {
    let input = input.parse("b'")?;
    let mut bytes = input.bytes().enumerate();
    let ok = match bytes.next().map(|(_, b)| b) {
        Some(b'\\') => match bytes.next().map(|(_, b)| b) {
            Some(b'x') => backslash_x_byte(&mut bytes).is_ok(),
            Some(b'n' | b'r' | b't' | b'\\' | b'0' | b'\'' | b'"') => true,
            _ => false,
        },
        b => b.is_some(),
    };
    if !ok {
        return Err(Reject);
    }
    let (offset, _) = bytes.next().ok_or(Reject)?;
    if !input.chars().as_str().is_char_boundary(offset) {
        return Err(Reject);
    }
    let input = input.advance(offset).parse("'")?;
    Ok(literal_suffix(input))
}

fn character(input: Cursor) -> Result<Cursor, Reject> {
    let input = input.parse("'")?;
    let mut chars = input.char_indices();
    let ok = match chars.next().map(|(_, ch)| ch) {
        Some('\\') => match chars.next().map(|(_, ch)| ch) {
            Some('x') => backslash_x_char(&mut chars).is_ok(),
            Some('u') => backslash_u(&mut chars).is_ok(),
            Some('n' | 'r' | 't' | '\\' | '0' | '\'' | '"') => true,
            _ => false,
        },
        ch => ch.is_some(),
    };
    if !ok {
        return Err(Reject);
    }
    let (idx, _) = chars.next().ok_or(Reject)?;
    let input = input.advance(idx).parse("'")?;
    Ok(literal_suffix(input))
}

macro_rules! next_ch {
    ($chars:ident @ $pat:pat) => {
        match $chars.next() {
            Some((_, ch)) => match ch {
                $pat => ch,
                _ => return Err(Reject),
            },
            None => return Err(Reject),
        }
    };
}

fn backslash_x_char<I>(chars: &mut I) -> Result<(), Reject>
where
    I: Iterator<Item = (usize, char)>,
{
    next_ch!(chars @ '0'..='7');
    next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F');
    Ok(())
}

fn backslash_x_byte<I>(chars: &mut I) -> Result<(), Reject>
where
    I: Iterator<Item = (usize, u8)>,
{
    next_ch!(chars @ b'0'..=b'9' | b'a'..=b'f' | b'A'..=b'F');
    next_ch!(chars @ b'0'..=b'9' | b'a'..=b'f' | b'A'..=b'F');
    Ok(())
}

fn backslash_x_nonzero<I>(chars: &mut I) -> Result<(), Reject>
where
    I: Iterator<Item = (usize, char)>,
{
    let first = next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F');
    let second = next_ch!(chars @ '0'..='9' | 'a'..='f' | 'A'..='F');
    if first == '0' && second == '0' {
        Err(Reject)
    } else {
        Ok(())
    }
}

fn backslash_u<I>(chars: &mut I) -> Result<char, Reject>
where
    I: Iterator<Item = (usize, char)>,
{
    next_ch!(chars @ '{');
    let mut value = 0;
    let mut len = 0;
    for (_, ch) in chars {
        let digit = match ch {
            '0'..='9' => ch as u8 - b'0',
            'a'..='f' => 10 + ch as u8 - b'a',
            'A'..='F' => 10 + ch as u8 - b'A',
            '_' if len > 0 => continue,
            '}' if len > 0 => return char::from_u32(value).ok_or(Reject),
            _ => break,
        };
        if len == 6 {
            break;
        }
        value *= 0x10;
        value += u32::from(digit);
        len += 1;
    }
    Err(Reject)
}

fn trailing_backslash(input: &mut Cursor, mut last: u8) -> Result<(), Reject> {
    let mut whitespace = input.bytes().enumerate();
    loop {
        if last == b'\r' && whitespace.next().map_or(true, |(_, b)| b != b'\n') {
            return Err(Reject);
        }
        match whitespace.next() {
            Some((_, b @ (b' ' | b'\t' | b'\n' | b'\r'))) => {
                last = b;
            }
            Some((offset, _)) => {
                *input = input.advance(offset);
                return Ok(());
            }
            None => return Err(Reject),
        }
    }
}
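`backslash_u` above accepts at most six hex digits inside `\u{...}`, allows `_` separators after the first digit, and validates the accumulated value through `char::from_u32`, which rejects surrogates. A stand-alone restatement over the brace payload (a hypothetical helper):

```rust
// Parse the contents between the braces of `\u{...}` into a char.
fn parse_unicode_escape(payload: &str) -> Option<char> {
    let mut value: u32 = 0;
    let mut len = 0;
    for ch in payload.chars() {
        let digit = match ch {
            '0'..='9' => ch as u32 - '0' as u32,
            'a'..='f' => 10 + ch as u32 - 'a' as u32,
            'A'..='F' => 10 + ch as u32 - 'A' as u32,
            '_' if len > 0 => continue, // separators allowed after a digit
            _ => return None,
        };
        if len == 6 {
            return None; // at most six hex digits
        }
        value = value * 0x10 + digit;
        len += 1;
    }
    if len == 0 {
        return None;
    }
    char::from_u32(value) // rejects surrogates and values past char::MAX
}

fn main() {
    assert_eq!(parse_unicode_escape("41"), Some('A'));
    assert_eq!(parse_unicode_escape("1_F600"), Some('\u{1F600}'));
    assert_eq!(parse_unicode_escape("D800"), None); // surrogate
    assert_eq!(parse_unicode_escape(""), None);
}
```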

fn float(input: Cursor) -> Result<Cursor, Reject> {
    let mut rest = float_digits(input)?;
    if let Some(ch) = rest.chars().next() {
        if is_ident_start(ch) {
            rest = ident_not_raw(rest)?.0;
        }
    }
    word_break(rest)
}

fn float_digits(input: Cursor) -> Result<Cursor, Reject> {
    let mut chars = input.chars().peekable();
    match chars.next() {
        Some(ch) if '0' <= ch && ch <= '9' => {}
        _ => return Err(Reject),
    }

    let mut len = 1;
    let mut has_dot = false;
    let mut has_exp = false;
    while let Some(&ch) = chars.peek() {
        match ch {
            '0'..='9' | '_' => {
                chars.next();
                len += 1;
            }
            '.' => {
                if has_dot {
                    break;
                }
                chars.next();
                if chars
                    .peek()
                    .map_or(false, |&ch| ch == '.' || is_ident_start(ch))
                {
                    return Err(Reject);
                }
                len += 1;
                has_dot = true;
            }
            'e' | 'E' => {
                chars.next();
                len += 1;
                has_exp = true;
                break;
            }
            _ => break,
        }
    }

    if !(has_dot || has_exp) {
        return Err(Reject);
    }

    if has_exp {
        let token_before_exp = if has_dot {
            Ok(input.advance(len - 1))
        } else {
            Err(Reject)
        };
        let mut has_sign = false;
        let mut has_exp_value = false;
        while let Some(&ch) = chars.peek() {
            match ch {
                '+' | '-' => {
                    if has_exp_value {
                        break;
                    }
                    if has_sign {
                        return token_before_exp;
                    }
                    chars.next();
                    len += 1;
                    has_sign = true;
                }
                '0'..='9' => {
                    chars.next();
                    len += 1;
                    has_exp_value = true;
                }
                '_' => {
                    chars.next();
                    len += 1;
                }
                _ => break,
            }
        }
        if !has_exp_value {
            return token_before_exp;
        }
    }

    Ok(input.advance(len))
}

fn int(input: Cursor) -> Result<Cursor, Reject> {
    let mut rest = digits(input)?;
    if let Some(ch) = rest.chars().next() {
        if is_ident_start(ch) {
            rest = ident_not_raw(rest)?.0;
        }
    }
    word_break(rest)
}

fn digits(mut input: Cursor) -> Result<Cursor, Reject> {
    let base = if input.starts_with("0x") {
        input = input.advance(2);
        16
    } else if input.starts_with("0o") {
        input = input.advance(2);
        8
    } else if input.starts_with("0b") {
        input = input.advance(2);
        2
    } else {
        10
    };

    let mut len = 0;
    let mut empty = true;
    for b in input.bytes() {
        match b {
            b'0'..=b'9' => {
                let digit = (b - b'0') as u64;
                if digit >= base {
                    return Err(Reject);
                }
            }
            b'a'..=b'f' => {
                let digit = 10 + (b - b'a') as u64;
                if digit >= base {
                    break;
                }
            }
            b'A'..=b'F' => {
                let digit = 10 + (b - b'A') as u64;
                if digit >= base {
                    break;
                }
            }
            b'_' => {
                if empty && base == 10 {
                    return Err(Reject);
                }
                len += 1;
                continue;
            }
            _ => break,
        }
        len += 1;
        empty = false;
    }
    if empty {
        Err(Reject)
    } else {
        Ok(input.advance(len))
    }
}
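`digits` above first strips an optional base prefix and then validates each digit against that radix. The prefix step on its own, as a hypothetical helper over plain `&str`:

```rust
// Map an integer literal's prefix to its radix and return the remaining
// digits: `0x` -> 16, `0o` -> 8, `0b` -> 2, otherwise decimal.
fn int_base(input: &str) -> (u32, &str) {
    if let Some(rest) = input.strip_prefix("0x") {
        (16, rest)
    } else if let Some(rest) = input.strip_prefix("0o") {
        (8, rest)
    } else if let Some(rest) = input.strip_prefix("0b") {
        (2, rest)
    } else {
        (10, input)
    }
}

fn main() {
    assert_eq!(int_base("0xff"), (16, "ff"));
    assert_eq!(int_base("0o77"), (8, "77"));
    assert_eq!(int_base("0b1010"), (2, "1010"));
    assert_eq!(int_base("42"), (10, "42"));
}
```

Note the asymmetry in `digits`: a decimal digit too large for the base (e.g. `0b2`) is a hard error, while an out-of-range hex letter merely ends the digit run so it can be picked up as a suffix.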

fn punct(input: Cursor) -> PResult<Punct> {
    let (rest, ch) = punct_char(input)?;
    if ch == '\'' {
        let (after_lifetime, _ident) = ident_any(rest)?;
        if after_lifetime.starts_with_char('\'')
            || (after_lifetime.starts_with_char('#') && !rest.starts_with("r#"))
        {
            Err(Reject)
        } else {
            Ok((rest, Punct::new('\'', Spacing::Joint)))
        }
    } else {
        let kind = match punct_char(rest) {
            Ok(_) => Spacing::Joint,
            Err(Reject) => Spacing::Alone,
        };
        Ok((rest, Punct::new(ch, kind)))
    }
}

fn punct_char(input: Cursor) -> PResult<char> {
    if input.starts_with("//") || input.starts_with("/*") {
        // Do not accept `/` of a comment as a punct.
        return Err(Reject);
    }

    let mut chars = input.chars();
    let first = match chars.next() {
        Some(ch) => ch,
        None => {
            return Err(Reject);
        }
    };
    let recognized = "~!@#$%^&*-=+|;:,<.>/?'";
    if recognized.contains(first) {
        Ok((input.advance(first.len_utf8()), first))
    } else {
        Err(Reject)
    }
}

fn doc_comment<'a>(input: Cursor<'a>, trees: &mut TokenStreamBuilder) -> PResult<'a, ()> {
    #[cfg(span_locations)]
    let lo = input.off;
    let (rest, (comment, inner)) = doc_comment_contents(input)?;
    let fallback_span = Span {
        #[cfg(span_locations)]
        lo,
        #[cfg(span_locations)]
        hi: rest.off,
    };
    let span = crate::Span::_new_fallback(fallback_span);

    let mut scan_for_bare_cr = comment;
    while let Some(cr) = scan_for_bare_cr.find('\r') {
        let rest = &scan_for_bare_cr[cr + 1..];
        if !rest.starts_with('\n') {
            return Err(Reject);
        }
        scan_for_bare_cr = rest;
    }

    let mut pound = Punct::new('#', Spacing::Alone);
    pound.set_span(span);
    trees.push_token_from_parser(TokenTree::Punct(pound));

    if inner {
        let mut bang = Punct::new('!', Spacing::Alone);
|
||||
bang.set_span(span);
|
||||
trees.push_token_from_parser(TokenTree::Punct(bang));
|
||||
}
|
||||
|
||||
let doc_ident = crate::Ident::_new_fallback(Ident::new_unchecked("doc", fallback_span));
|
||||
let mut equal = Punct::new('=', Spacing::Alone);
|
||||
equal.set_span(span);
|
||||
let mut literal = crate::Literal::_new_fallback(Literal::string(comment));
|
||||
literal.set_span(span);
|
||||
let mut bracketed = TokenStreamBuilder::with_capacity(3);
|
||||
bracketed.push_token_from_parser(TokenTree::Ident(doc_ident));
|
||||
bracketed.push_token_from_parser(TokenTree::Punct(equal));
|
||||
bracketed.push_token_from_parser(TokenTree::Literal(literal));
|
||||
let group = Group::new(Delimiter::Bracket, bracketed.build());
|
||||
let mut group = crate::Group::_new_fallback(group);
|
||||
group.set_span(span);
|
||||
trees.push_token_from_parser(TokenTree::Group(group));
|
||||
|
||||
Ok((rest, ()))
|
||||
}
|
||||
|
||||
fn doc_comment_contents(input: Cursor) -> PResult<(&str, bool)> {
|
||||
if input.starts_with("//!") {
|
||||
let input = input.advance(3);
|
||||
let (input, s) = take_until_newline_or_eof(input);
|
||||
Ok((input, (s, true)))
|
||||
} else if input.starts_with("/*!") {
|
||||
let (input, s) = block_comment(input)?;
|
||||
Ok((input, (&s[3..s.len() - 2], true)))
|
||||
} else if input.starts_with("///") {
|
||||
let input = input.advance(3);
|
||||
if input.starts_with_char('/') {
|
||||
return Err(Reject);
|
||||
}
|
||||
let (input, s) = take_until_newline_or_eof(input);
|
||||
Ok((input, (s, false)))
|
||||
} else if input.starts_with("/**") && !input.rest[3..].starts_with('*') {
|
||||
let (input, s) = block_comment(input)?;
|
||||
Ok((input, (&s[3..s.len() - 2], false)))
|
||||
} else {
|
||||
Err(Reject)
|
||||
}
|
||||
}
|
||||
|
||||
fn take_until_newline_or_eof(input: Cursor) -> (Cursor, &str) {
|
||||
let chars = input.char_indices();
|
||||
|
||||
for (i, ch) in chars {
|
||||
if ch == '\n' {
|
||||
return (input.advance(i), &input.rest[..i]);
|
||||
} else if ch == '\r' && input.rest[i + 1..].starts_with('\n') {
|
||||
return (input.advance(i + 1), &input.rest[..i]);
|
||||
}
|
||||
}
|
||||
|
||||
(input.advance(input.len()), input.rest)
|
||||
}
|
||||
12
rust/proc-macro2/probe.rs
Normal file
@@ -0,0 +1,12 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

#![allow(dead_code)]

#[cfg(proc_macro_span)]
pub(crate) mod proc_macro_span;

#[cfg(proc_macro_span_file)]
pub(crate) mod proc_macro_span_file;

#[cfg(proc_macro_span_location)]
pub(crate) mod proc_macro_span_location;
53
rust/proc-macro2/probe/proc_macro_span.rs
Normal file
@@ -0,0 +1,53 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

// This code exercises the surface area that we expect of Span's unstable API.
// If the current toolchain is able to compile it, then proc-macro2 is able to
// offer these APIs too.

#![cfg_attr(procmacro2_build_probe, feature(proc_macro_span))]

extern crate proc_macro;

use core::ops::{Range, RangeBounds};
use proc_macro::{Literal, Span};
use std::path::PathBuf;

pub fn byte_range(this: &Span) -> Range<usize> {
    this.byte_range()
}

pub fn start(this: &Span) -> Span {
    this.start()
}

pub fn end(this: &Span) -> Span {
    this.end()
}

pub fn line(this: &Span) -> usize {
    this.line()
}

pub fn column(this: &Span) -> usize {
    this.column()
}

pub fn file(this: &Span) -> String {
    this.file()
}

pub fn local_file(this: &Span) -> Option<PathBuf> {
    this.local_file()
}

pub fn join(this: &Span, other: Span) -> Option<Span> {
    this.join(other)
}

pub fn subspan<R: RangeBounds<usize>>(this: &Literal, range: R) -> Option<Span> {
    this.subspan(range)
}

// Include in sccache cache key.
#[cfg(procmacro2_build_probe)]
const _: Option<&str> = option_env!("RUSTC_BOOTSTRAP");
16
rust/proc-macro2/probe/proc_macro_span_file.rs
Normal file
@@ -0,0 +1,16 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

// The subset of Span's API stabilized in Rust 1.88.

extern crate proc_macro;

use proc_macro::Span;
use std::path::PathBuf;

pub fn file(this: &Span) -> String {
    this.file()
}

pub fn local_file(this: &Span) -> Option<PathBuf> {
    this.local_file()
}
23
rust/proc-macro2/probe/proc_macro_span_location.rs
Normal file
@@ -0,0 +1,23 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

// The subset of Span's API stabilized in Rust 1.88.

extern crate proc_macro;

use proc_macro::Span;

pub fn start(this: &Span) -> Span {
    this.start()
}

pub fn end(this: &Span) -> Span {
    this.end()
}

pub fn line(this: &Span) -> usize {
    this.line()
}

pub fn column(this: &Span) -> usize {
    this.column()
}
148
rust/proc-macro2/rcvec.rs
Normal file
@@ -0,0 +1,148 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use alloc::rc::Rc;
use alloc::vec;
use core::mem;
use core::panic::RefUnwindSafe;
use core::slice;

pub(crate) struct RcVec<T> {
    inner: Rc<Vec<T>>,
}

pub(crate) struct RcVecBuilder<T> {
    inner: Vec<T>,
}

pub(crate) struct RcVecMut<'a, T> {
    inner: &'a mut Vec<T>,
}

#[derive(Clone)]
pub(crate) struct RcVecIntoIter<T> {
    inner: vec::IntoIter<T>,
}

impl<T> RcVec<T> {
    pub(crate) fn is_empty(&self) -> bool {
        self.inner.is_empty()
    }

    pub(crate) fn len(&self) -> usize {
        self.inner.len()
    }

    pub(crate) fn iter(&self) -> slice::Iter<T> {
        self.inner.iter()
    }

    pub(crate) fn make_mut(&mut self) -> RcVecMut<T>
    where
        T: Clone,
    {
        RcVecMut {
            inner: Rc::make_mut(&mut self.inner),
        }
    }

    pub(crate) fn get_mut(&mut self) -> Option<RcVecMut<T>> {
        let inner = Rc::get_mut(&mut self.inner)?;
        Some(RcVecMut { inner })
    }

    pub(crate) fn make_owned(mut self) -> RcVecBuilder<T>
    where
        T: Clone,
    {
        let vec = if let Some(owned) = Rc::get_mut(&mut self.inner) {
            mem::take(owned)
        } else {
            Vec::clone(&self.inner)
        };
        RcVecBuilder { inner: vec }
    }
}

impl<T> RcVecBuilder<T> {
    pub(crate) fn new() -> Self {
        RcVecBuilder { inner: Vec::new() }
    }

    pub(crate) fn with_capacity(cap: usize) -> Self {
        RcVecBuilder {
            inner: Vec::with_capacity(cap),
        }
    }

    pub(crate) fn push(&mut self, element: T) {
        self.inner.push(element);
    }

    pub(crate) fn extend(&mut self, iter: impl IntoIterator<Item = T>) {
        self.inner.extend(iter);
    }

    pub(crate) fn as_mut(&mut self) -> RcVecMut<T> {
        RcVecMut {
            inner: &mut self.inner,
        }
    }

    pub(crate) fn build(self) -> RcVec<T> {
        RcVec {
            inner: Rc::new(self.inner),
        }
    }
}

impl<'a, T> RcVecMut<'a, T> {
    pub(crate) fn push(&mut self, element: T) {
        self.inner.push(element);
    }

    pub(crate) fn extend(&mut self, iter: impl IntoIterator<Item = T>) {
        self.inner.extend(iter);
    }

    pub(crate) fn as_mut(&mut self) -> RcVecMut<T> {
        RcVecMut { inner: self.inner }
    }

    pub(crate) fn take(self) -> RcVecBuilder<T> {
        let vec = mem::take(self.inner);
        RcVecBuilder { inner: vec }
    }
}

impl<T> Clone for RcVec<T> {
    fn clone(&self) -> Self {
        RcVec {
            inner: Rc::clone(&self.inner),
        }
    }
}

impl<T> IntoIterator for RcVecBuilder<T> {
    type Item = T;
    type IntoIter = RcVecIntoIter<T>;

    fn into_iter(self) -> Self::IntoIter {
        RcVecIntoIter {
            inner: self.inner.into_iter(),
        }
    }
}

impl<T> Iterator for RcVecIntoIter<T> {
    type Item = T;

    fn next(&mut self) -> Option<Self::Item> {
        self.inner.next()
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.inner.size_hint()
    }
}

impl<T> RefUnwindSafe for RcVec<T> where T: RefUnwindSafe {}
986
rust/proc-macro2/wrapper.rs
Normal file
@@ -0,0 +1,986 @@
|
||||
// SPDX-License-Identifier: Apache-2.0 OR MIT
|
||||
|
||||
use crate::detection::inside_proc_macro;
|
||||
use crate::fallback::{self, FromStr2 as _};
|
||||
#[cfg(span_locations)]
|
||||
use crate::location::LineColumn;
|
||||
#[cfg(proc_macro_span)]
|
||||
use crate::probe::proc_macro_span;
|
||||
#[cfg(all(span_locations, proc_macro_span_file))]
|
||||
use crate::probe::proc_macro_span_file;
|
||||
#[cfg(all(span_locations, proc_macro_span_location))]
|
||||
use crate::probe::proc_macro_span_location;
|
||||
use crate::{Delimiter, Punct, Spacing, TokenTree};
|
||||
use core::fmt::{self, Debug, Display};
|
||||
#[cfg(span_locations)]
|
||||
use core::ops::Range;
|
||||
use core::ops::RangeBounds;
|
||||
use std::ffi::CStr;
|
||||
#[cfg(span_locations)]
|
||||
use std::path::PathBuf;
|
||||
|
||||
#[derive(Clone)]
|
||||
pub(crate) enum TokenStream {
|
||||
Compiler(DeferredTokenStream),
|
||||
Fallback(fallback::TokenStream),
|
||||
}
|
||||
|
||||
// Work around https://github.com/rust-lang/rust/issues/65080.
|
||||
// In `impl Extend<TokenTree> for TokenStream` which is used heavily by quote,
|
||||
// we hold on to the appended tokens and do proc_macro::TokenStream::extend as
|
||||
// late as possible to batch together consecutive uses of the Extend impl.
|
||||
#[derive(Clone)]
|
||||
pub(crate) struct DeferredTokenStream {
|
||||
stream: proc_macro::TokenStream,
|
||||
extra: Vec<proc_macro::TokenTree>,
|
||||
}
|
||||
|
||||
pub(crate) enum LexError {
|
||||
Compiler(proc_macro::LexError),
|
||||
Fallback(fallback::LexError),
|
||||
|
||||
// Rustc was supposed to return a LexError, but it panicked instead.
|
||||
// https://github.com/rust-lang/rust/issues/58736
|
||||
CompilerPanic,
|
||||
}
|
||||
|
||||
#[cold]
|
||||
fn mismatch(line: u32) -> ! {
|
||||
#[cfg(procmacro2_backtrace)]
|
||||
{
|
||||
let backtrace = std::backtrace::Backtrace::force_capture();
|
||||
panic!("compiler/fallback mismatch L{}\n\n{}", line, backtrace)
|
||||
}
|
||||
#[cfg(not(procmacro2_backtrace))]
|
||||
{
|
||||
panic!("compiler/fallback mismatch L{}", line)
|
||||
}
|
||||
}
|
||||
|
||||
impl DeferredTokenStream {
|
||||
fn new(stream: proc_macro::TokenStream) -> Self {
|
||||
DeferredTokenStream {
|
||||
stream,
|
||||
extra: Vec::new(),
|
||||
}
|
||||
}
|
||||
|
||||
fn is_empty(&self) -> bool {
|
||||
self.stream.is_empty() && self.extra.is_empty()
|
||||
}
|
||||
|
||||
fn evaluate_now(&mut self) {
|
||||
// If-check provides a fast short circuit for the common case of `extra`
|
||||
// being empty, which saves a round trip over the proc macro bridge.
|
||||
// Improves macro expansion time in winrt by 6% in debug mode.
|
||||
if !self.extra.is_empty() {
|
||||
self.stream.extend(self.extra.drain(..));
|
||||
}
|
||||
}
|
||||
|
||||
fn into_token_stream(mut self) -> proc_macro::TokenStream {
|
||||
self.evaluate_now();
|
||||
self.stream
|
||||
}
|
||||
}
|
||||
|
||||
impl TokenStream {
|
||||
pub(crate) fn new() -> Self {
|
||||
if inside_proc_macro() {
|
||||
TokenStream::Compiler(DeferredTokenStream::new(proc_macro::TokenStream::new()))
|
||||
} else {
|
||||
TokenStream::Fallback(fallback::TokenStream::new())
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn from_str_checked(src: &str) -> Result<Self, LexError> {
|
||||
if inside_proc_macro() {
|
||||
Ok(TokenStream::Compiler(DeferredTokenStream::new(
|
||||
proc_macro::TokenStream::from_str_checked(src)?,
|
||||
)))
|
||||
} else {
|
||||
Ok(TokenStream::Fallback(
|
||||
fallback::TokenStream::from_str_checked(src)?,
|
||||
))
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn is_empty(&self) -> bool {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => tts.is_empty(),
|
||||
TokenStream::Fallback(tts) => tts.is_empty(),
|
||||
}
|
||||
}
|
||||
|
||||
fn unwrap_nightly(self) -> proc_macro::TokenStream {
|
||||
match self {
|
||||
TokenStream::Compiler(s) => s.into_token_stream(),
|
||||
TokenStream::Fallback(_) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
|
||||
fn unwrap_stable(self) -> fallback::TokenStream {
|
||||
match self {
|
||||
TokenStream::Compiler(_) => mismatch(line!()),
|
||||
TokenStream::Fallback(s) => s,
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Display for TokenStream {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => Display::fmt(&tts.clone().into_token_stream(), f),
|
||||
TokenStream::Fallback(tts) => Display::fmt(tts, f),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<proc_macro::TokenStream> for TokenStream {
|
||||
fn from(inner: proc_macro::TokenStream) -> Self {
|
||||
TokenStream::Compiler(DeferredTokenStream::new(inner))
|
||||
}
|
||||
}
|
||||
|
||||
impl From<TokenStream> for proc_macro::TokenStream {
|
||||
fn from(inner: TokenStream) -> Self {
|
||||
match inner {
|
||||
TokenStream::Compiler(inner) => inner.into_token_stream(),
|
||||
TokenStream::Fallback(inner) => {
|
||||
proc_macro::TokenStream::from_str_unchecked(&inner.to_string())
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<fallback::TokenStream> for TokenStream {
|
||||
fn from(inner: fallback::TokenStream) -> Self {
|
||||
TokenStream::Fallback(inner)
|
||||
}
|
||||
}
|
||||
|
||||
// Assumes inside_proc_macro().
|
||||
fn into_compiler_token(token: TokenTree) -> proc_macro::TokenTree {
|
||||
match token {
|
||||
TokenTree::Group(tt) => proc_macro::TokenTree::Group(tt.inner.unwrap_nightly()),
|
||||
TokenTree::Punct(tt) => {
|
||||
let spacing = match tt.spacing() {
|
||||
Spacing::Joint => proc_macro::Spacing::Joint,
|
||||
Spacing::Alone => proc_macro::Spacing::Alone,
|
||||
};
|
||||
let mut punct = proc_macro::Punct::new(tt.as_char(), spacing);
|
||||
punct.set_span(tt.span().inner.unwrap_nightly());
|
||||
proc_macro::TokenTree::Punct(punct)
|
||||
}
|
||||
TokenTree::Ident(tt) => proc_macro::TokenTree::Ident(tt.inner.unwrap_nightly()),
|
||||
TokenTree::Literal(tt) => proc_macro::TokenTree::Literal(tt.inner.unwrap_nightly()),
|
||||
}
|
||||
}
|
||||
|
||||
impl From<TokenTree> for TokenStream {
|
||||
fn from(token: TokenTree) -> Self {
|
||||
if inside_proc_macro() {
|
||||
TokenStream::Compiler(DeferredTokenStream::new(proc_macro::TokenStream::from(
|
||||
into_compiler_token(token),
|
||||
)))
|
||||
} else {
|
||||
TokenStream::Fallback(fallback::TokenStream::from(token))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl FromIterator<TokenTree> for TokenStream {
|
||||
fn from_iter<I: IntoIterator<Item = TokenTree>>(trees: I) -> Self {
|
||||
if inside_proc_macro() {
|
||||
TokenStream::Compiler(DeferredTokenStream::new(
|
||||
trees.into_iter().map(into_compiler_token).collect(),
|
||||
))
|
||||
} else {
|
||||
TokenStream::Fallback(trees.into_iter().collect())
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl FromIterator<TokenStream> for TokenStream {
|
||||
fn from_iter<I: IntoIterator<Item = TokenStream>>(streams: I) -> Self {
|
||||
let mut streams = streams.into_iter();
|
||||
match streams.next() {
|
||||
Some(TokenStream::Compiler(mut first)) => {
|
||||
first.evaluate_now();
|
||||
first.stream.extend(streams.map(|s| match s {
|
||||
TokenStream::Compiler(s) => s.into_token_stream(),
|
||||
TokenStream::Fallback(_) => mismatch(line!()),
|
||||
}));
|
||||
TokenStream::Compiler(first)
|
||||
}
|
||||
Some(TokenStream::Fallback(mut first)) => {
|
||||
first.extend(streams.map(|s| match s {
|
||||
TokenStream::Fallback(s) => s,
|
||||
TokenStream::Compiler(_) => mismatch(line!()),
|
||||
}));
|
||||
TokenStream::Fallback(first)
|
||||
}
|
||||
None => TokenStream::new(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Extend<TokenTree> for TokenStream {
|
||||
fn extend<I: IntoIterator<Item = TokenTree>>(&mut self, stream: I) {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => {
|
||||
// Here is the reason for DeferredTokenStream.
|
||||
for token in stream {
|
||||
tts.extra.push(into_compiler_token(token));
|
||||
}
|
||||
}
|
||||
TokenStream::Fallback(tts) => tts.extend(stream),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Extend<TokenStream> for TokenStream {
|
||||
fn extend<I: IntoIterator<Item = TokenStream>>(&mut self, streams: I) {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => {
|
||||
tts.evaluate_now();
|
||||
tts.stream
|
||||
.extend(streams.into_iter().map(TokenStream::unwrap_nightly));
|
||||
}
|
||||
TokenStream::Fallback(tts) => {
|
||||
tts.extend(streams.into_iter().map(TokenStream::unwrap_stable));
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Debug for TokenStream {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => Debug::fmt(&tts.clone().into_token_stream(), f),
|
||||
TokenStream::Fallback(tts) => Debug::fmt(tts, f),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl LexError {
|
||||
pub(crate) fn span(&self) -> Span {
|
||||
match self {
|
||||
LexError::Compiler(_) | LexError::CompilerPanic => Span::call_site(),
|
||||
LexError::Fallback(e) => Span::Fallback(e.span()),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<proc_macro::LexError> for LexError {
|
||||
fn from(e: proc_macro::LexError) -> Self {
|
||||
LexError::Compiler(e)
|
||||
}
|
||||
}
|
||||
|
||||
impl From<fallback::LexError> for LexError {
|
||||
fn from(e: fallback::LexError) -> Self {
|
||||
LexError::Fallback(e)
|
||||
}
|
||||
}
|
||||
|
||||
impl Debug for LexError {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
match self {
|
||||
LexError::Compiler(e) => Debug::fmt(e, f),
|
||||
LexError::Fallback(e) => Debug::fmt(e, f),
|
||||
LexError::CompilerPanic => {
|
||||
let fallback = fallback::LexError::call_site();
|
||||
Debug::fmt(&fallback, f)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Display for LexError {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
match self {
|
||||
LexError::Compiler(e) => Display::fmt(e, f),
|
||||
LexError::Fallback(e) => Display::fmt(e, f),
|
||||
LexError::CompilerPanic => {
|
||||
let fallback = fallback::LexError::call_site();
|
||||
Display::fmt(&fallback, f)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub(crate) enum TokenTreeIter {
|
||||
Compiler(proc_macro::token_stream::IntoIter),
|
||||
Fallback(fallback::TokenTreeIter),
|
||||
}
|
||||
|
||||
impl IntoIterator for TokenStream {
|
||||
type Item = TokenTree;
|
||||
type IntoIter = TokenTreeIter;
|
||||
|
||||
fn into_iter(self) -> TokenTreeIter {
|
||||
match self {
|
||||
TokenStream::Compiler(tts) => {
|
||||
TokenTreeIter::Compiler(tts.into_token_stream().into_iter())
|
||||
}
|
||||
TokenStream::Fallback(tts) => TokenTreeIter::Fallback(tts.into_iter()),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl Iterator for TokenTreeIter {
|
||||
type Item = TokenTree;
|
||||
|
||||
fn next(&mut self) -> Option<TokenTree> {
|
||||
let token = match self {
|
||||
TokenTreeIter::Compiler(iter) => iter.next()?,
|
||||
TokenTreeIter::Fallback(iter) => return iter.next(),
|
||||
};
|
||||
Some(match token {
|
||||
proc_macro::TokenTree::Group(tt) => {
|
||||
TokenTree::Group(crate::Group::_new(Group::Compiler(tt)))
|
||||
}
|
||||
proc_macro::TokenTree::Punct(tt) => {
|
||||
let spacing = match tt.spacing() {
|
||||
proc_macro::Spacing::Joint => Spacing::Joint,
|
||||
proc_macro::Spacing::Alone => Spacing::Alone,
|
||||
};
|
||||
let mut o = Punct::new(tt.as_char(), spacing);
|
||||
o.set_span(crate::Span::_new(Span::Compiler(tt.span())));
|
||||
TokenTree::Punct(o)
|
||||
}
|
||||
proc_macro::TokenTree::Ident(s) => {
|
||||
TokenTree::Ident(crate::Ident::_new(Ident::Compiler(s)))
|
||||
}
|
||||
proc_macro::TokenTree::Literal(l) => {
|
||||
TokenTree::Literal(crate::Literal::_new(Literal::Compiler(l)))
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
fn size_hint(&self) -> (usize, Option<usize>) {
|
||||
match self {
|
||||
TokenTreeIter::Compiler(tts) => tts.size_hint(),
|
||||
TokenTreeIter::Fallback(tts) => tts.size_hint(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Copy, Clone)]
|
||||
pub(crate) enum Span {
|
||||
Compiler(proc_macro::Span),
|
||||
Fallback(fallback::Span),
|
||||
}
|
||||
|
||||
impl Span {
|
||||
pub(crate) fn call_site() -> Self {
|
||||
if inside_proc_macro() {
|
||||
Span::Compiler(proc_macro::Span::call_site())
|
||||
} else {
|
||||
Span::Fallback(fallback::Span::call_site())
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn mixed_site() -> Self {
|
||||
if inside_proc_macro() {
|
||||
Span::Compiler(proc_macro::Span::mixed_site())
|
||||
} else {
|
||||
Span::Fallback(fallback::Span::mixed_site())
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(super_unstable)]
|
||||
pub(crate) fn def_site() -> Self {
|
||||
if inside_proc_macro() {
|
||||
Span::Compiler(proc_macro::Span::def_site())
|
||||
} else {
|
||||
Span::Fallback(fallback::Span::def_site())
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn resolved_at(&self, other: Span) -> Span {
|
||||
match (self, other) {
|
||||
(Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(a.resolved_at(b)),
|
||||
(Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.resolved_at(b)),
|
||||
(Span::Compiler(_), Span::Fallback(_)) => mismatch(line!()),
|
||||
(Span::Fallback(_), Span::Compiler(_)) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn located_at(&self, other: Span) -> Span {
|
||||
match (self, other) {
|
||||
(Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(a.located_at(b)),
|
||||
(Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.located_at(b)),
|
||||
(Span::Compiler(_), Span::Fallback(_)) => mismatch(line!()),
|
||||
(Span::Fallback(_), Span::Compiler(_)) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn unwrap(self) -> proc_macro::Span {
|
||||
match self {
|
||||
Span::Compiler(s) => s,
|
||||
Span::Fallback(_) => panic!("proc_macro::Span is only available in procedural macros"),
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(span_locations)]
|
||||
pub(crate) fn byte_range(&self) -> Range<usize> {
|
||||
match self {
|
||||
#[cfg(proc_macro_span)]
|
||||
Span::Compiler(s) => proc_macro_span::byte_range(s),
|
||||
#[cfg(not(proc_macro_span))]
|
||||
Span::Compiler(_) => 0..0,
|
||||
Span::Fallback(s) => s.byte_range(),
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(span_locations)]
|
||||
pub(crate) fn start(&self) -> LineColumn {
|
||||
match self {
|
||||
#[cfg(proc_macro_span_location)]
|
||||
Span::Compiler(s) => LineColumn {
|
||||
line: proc_macro_span_location::line(s),
|
||||
column: proc_macro_span_location::column(s).saturating_sub(1),
|
||||
},
|
||||
#[cfg(not(proc_macro_span_location))]
|
||||
Span::Compiler(_) => LineColumn { line: 0, column: 0 },
|
||||
Span::Fallback(s) => s.start(),
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(span_locations)]
|
||||
pub(crate) fn end(&self) -> LineColumn {
|
||||
match self {
|
||||
#[cfg(proc_macro_span_location)]
|
||||
Span::Compiler(s) => {
|
||||
let end = proc_macro_span_location::end(s);
|
||||
LineColumn {
|
||||
line: proc_macro_span_location::line(&end),
|
||||
column: proc_macro_span_location::column(&end).saturating_sub(1),
|
||||
}
|
||||
}
|
||||
#[cfg(not(proc_macro_span_location))]
|
||||
Span::Compiler(_) => LineColumn { line: 0, column: 0 },
|
||||
Span::Fallback(s) => s.end(),
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(span_locations)]
|
||||
pub(crate) fn file(&self) -> String {
|
||||
match self {
|
||||
#[cfg(proc_macro_span_file)]
|
||||
Span::Compiler(s) => proc_macro_span_file::file(s),
|
||||
#[cfg(not(proc_macro_span_file))]
|
||||
Span::Compiler(_) => "<token stream>".to_owned(),
|
||||
Span::Fallback(s) => s.file(),
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(span_locations)]
|
||||
pub(crate) fn local_file(&self) -> Option<PathBuf> {
|
||||
match self {
|
||||
#[cfg(proc_macro_span_file)]
|
||||
Span::Compiler(s) => proc_macro_span_file::local_file(s),
|
||||
#[cfg(not(proc_macro_span_file))]
|
||||
Span::Compiler(_) => None,
|
||||
Span::Fallback(s) => s.local_file(),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn join(&self, other: Span) -> Option<Span> {
|
||||
let ret = match (self, other) {
|
||||
#[cfg(proc_macro_span)]
|
||||
(Span::Compiler(a), Span::Compiler(b)) => Span::Compiler(proc_macro_span::join(a, b)?),
|
||||
(Span::Fallback(a), Span::Fallback(b)) => Span::Fallback(a.join(b)?),
|
||||
_ => return None,
|
||||
};
|
||||
Some(ret)
|
||||
}
|
||||
|
||||
#[cfg(super_unstable)]
|
||||
pub(crate) fn eq(&self, other: &Span) -> bool {
|
||||
match (self, other) {
|
||||
(Span::Compiler(a), Span::Compiler(b)) => a.eq(b),
|
||||
(Span::Fallback(a), Span::Fallback(b)) => a.eq(b),
|
||||
_ => false,
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn source_text(&self) -> Option<String> {
|
||||
match self {
|
||||
#[cfg(not(no_source_text))]
|
||||
Span::Compiler(s) => s.source_text(),
|
||||
#[cfg(no_source_text)]
|
||||
Span::Compiler(_) => None,
|
||||
Span::Fallback(s) => s.source_text(),
|
||||
}
|
||||
}
|
||||
|
||||
fn unwrap_nightly(self) -> proc_macro::Span {
|
||||
match self {
|
||||
Span::Compiler(s) => s,
|
||||
Span::Fallback(_) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<proc_macro::Span> for crate::Span {
|
||||
fn from(proc_span: proc_macro::Span) -> Self {
|
||||
crate::Span::_new(Span::Compiler(proc_span))
|
||||
}
|
||||
}
|
||||
|
||||
impl From<fallback::Span> for Span {
|
||||
fn from(inner: fallback::Span) -> Self {
|
||||
Span::Fallback(inner)
|
||||
}
|
||||
}
|
||||
|
||||
impl Debug for Span {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
match self {
|
||||
Span::Compiler(s) => Debug::fmt(s, f),
|
||||
Span::Fallback(s) => Debug::fmt(s, f),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn debug_span_field_if_nontrivial(debug: &mut fmt::DebugStruct, span: Span) {
|
||||
match span {
|
||||
Span::Compiler(s) => {
|
||||
debug.field("span", &s);
|
||||
}
|
||||
Span::Fallback(s) => fallback::debug_span_field_if_nontrivial(debug, s),
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Clone)]
|
||||
pub(crate) enum Group {
|
||||
Compiler(proc_macro::Group),
|
||||
Fallback(fallback::Group),
|
||||
}
|
||||
|
||||
impl Group {
|
||||
pub(crate) fn new(delimiter: Delimiter, stream: TokenStream) -> Self {
|
||||
match stream {
|
||||
TokenStream::Compiler(tts) => {
|
||||
let delimiter = match delimiter {
|
||||
Delimiter::Parenthesis => proc_macro::Delimiter::Parenthesis,
|
||||
Delimiter::Bracket => proc_macro::Delimiter::Bracket,
|
||||
Delimiter::Brace => proc_macro::Delimiter::Brace,
|
||||
Delimiter::None => proc_macro::Delimiter::None,
|
||||
};
|
||||
Group::Compiler(proc_macro::Group::new(delimiter, tts.into_token_stream()))
|
||||
}
|
||||
TokenStream::Fallback(stream) => {
|
||||
Group::Fallback(fallback::Group::new(delimiter, stream))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn delimiter(&self) -> Delimiter {
|
||||
match self {
|
||||
Group::Compiler(g) => match g.delimiter() {
|
||||
proc_macro::Delimiter::Parenthesis => Delimiter::Parenthesis,
|
||||
proc_macro::Delimiter::Bracket => Delimiter::Bracket,
|
||||
proc_macro::Delimiter::Brace => Delimiter::Brace,
|
||||
proc_macro::Delimiter::None => Delimiter::None,
|
||||
},
|
||||
Group::Fallback(g) => g.delimiter(),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn stream(&self) -> TokenStream {
|
||||
match self {
|
||||
Group::Compiler(g) => TokenStream::Compiler(DeferredTokenStream::new(g.stream())),
|
||||
Group::Fallback(g) => TokenStream::Fallback(g.stream()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn span(&self) -> Span {
|
||||
match self {
|
||||
Group::Compiler(g) => Span::Compiler(g.span()),
|
||||
Group::Fallback(g) => Span::Fallback(g.span()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn span_open(&self) -> Span {
|
||||
match self {
|
||||
Group::Compiler(g) => Span::Compiler(g.span_open()),
|
||||
Group::Fallback(g) => Span::Fallback(g.span_open()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn span_close(&self) -> Span {
|
||||
match self {
|
||||
Group::Compiler(g) => Span::Compiler(g.span_close()),
|
||||
Group::Fallback(g) => Span::Fallback(g.span_close()),
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn set_span(&mut self, span: Span) {
|
||||
match (self, span) {
|
||||
(Group::Compiler(g), Span::Compiler(s)) => g.set_span(s),
|
||||
(Group::Fallback(g), Span::Fallback(s)) => g.set_span(s),
|
||||
(Group::Compiler(_), Span::Fallback(_)) => mismatch(line!()),
|
||||
(Group::Fallback(_), Span::Compiler(_)) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
|
||||
fn unwrap_nightly(self) -> proc_macro::Group {
|
||||
match self {
|
||||
Group::Compiler(g) => g,
|
||||
Group::Fallback(_) => mismatch(line!()),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl From<fallback::Group> for Group {
    fn from(g: fallback::Group) -> Self {
        Group::Fallback(g)
    }
}

impl Display for Group {
    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Group::Compiler(group) => Display::fmt(group, formatter),
            Group::Fallback(group) => Display::fmt(group, formatter),
        }
    }
}

impl Debug for Group {
    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Group::Compiler(group) => Debug::fmt(group, formatter),
            Group::Fallback(group) => Debug::fmt(group, formatter),
        }
    }
}

#[derive(Clone)]
pub(crate) enum Ident {
    Compiler(proc_macro::Ident),
    Fallback(fallback::Ident),
}

impl Ident {
    #[track_caller]
    pub(crate) fn new_checked(string: &str, span: Span) -> Self {
        match span {
            Span::Compiler(s) => Ident::Compiler(proc_macro::Ident::new(string, s)),
            Span::Fallback(s) => Ident::Fallback(fallback::Ident::new_checked(string, s)),
        }
    }

    #[track_caller]
    pub(crate) fn new_raw_checked(string: &str, span: Span) -> Self {
        match span {
            Span::Compiler(s) => Ident::Compiler(proc_macro::Ident::new_raw(string, s)),
            Span::Fallback(s) => Ident::Fallback(fallback::Ident::new_raw_checked(string, s)),
        }
    }

    pub(crate) fn span(&self) -> Span {
        match self {
            Ident::Compiler(t) => Span::Compiler(t.span()),
            Ident::Fallback(t) => Span::Fallback(t.span()),
        }
    }

    pub(crate) fn set_span(&mut self, span: Span) {
        match (self, span) {
            (Ident::Compiler(t), Span::Compiler(s)) => t.set_span(s),
            (Ident::Fallback(t), Span::Fallback(s)) => t.set_span(s),
            (Ident::Compiler(_), Span::Fallback(_)) => mismatch(line!()),
            (Ident::Fallback(_), Span::Compiler(_)) => mismatch(line!()),
        }
    }

    fn unwrap_nightly(self) -> proc_macro::Ident {
        match self {
            Ident::Compiler(s) => s,
            Ident::Fallback(_) => mismatch(line!()),
        }
    }
}

impl From<fallback::Ident> for Ident {
    fn from(inner: fallback::Ident) -> Self {
        Ident::Fallback(inner)
    }
}

impl PartialEq for Ident {
    fn eq(&self, other: &Ident) -> bool {
        match (self, other) {
            (Ident::Compiler(t), Ident::Compiler(o)) => t.to_string() == o.to_string(),
            (Ident::Fallback(t), Ident::Fallback(o)) => t == o,
            (Ident::Compiler(_), Ident::Fallback(_)) => mismatch(line!()),
            (Ident::Fallback(_), Ident::Compiler(_)) => mismatch(line!()),
        }
    }
}

impl<T> PartialEq<T> for Ident
where
    T: ?Sized + AsRef<str>,
{
    fn eq(&self, other: &T) -> bool {
        let other = other.as_ref();
        match self {
            Ident::Compiler(t) => t.to_string() == other,
            Ident::Fallback(t) => t == other,
        }
    }
}

impl Display for Ident {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Ident::Compiler(t) => Display::fmt(t, f),
            Ident::Fallback(t) => Display::fmt(t, f),
        }
    }
}

impl Debug for Ident {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Ident::Compiler(t) => Debug::fmt(t, f),
            Ident::Fallback(t) => Debug::fmt(t, f),
        }
    }
}

#[derive(Clone)]
pub(crate) enum Literal {
    Compiler(proc_macro::Literal),
    Fallback(fallback::Literal),
}

macro_rules! suffixed_numbers {
    ($($name:ident => $kind:ident,)*) => ($(
        pub(crate) fn $name(n: $kind) -> Literal {
            if inside_proc_macro() {
                Literal::Compiler(proc_macro::Literal::$name(n))
            } else {
                Literal::Fallback(fallback::Literal::$name(n))
            }
        }
    )*)
}

macro_rules! unsuffixed_integers {
    ($($name:ident => $kind:ident,)*) => ($(
        pub(crate) fn $name(n: $kind) -> Literal {
            if inside_proc_macro() {
                Literal::Compiler(proc_macro::Literal::$name(n))
            } else {
                Literal::Fallback(fallback::Literal::$name(n))
            }
        }
    )*)
}

impl Literal {
    pub(crate) fn from_str_checked(repr: &str) -> Result<Self, LexError> {
        if inside_proc_macro() {
            let literal = proc_macro::Literal::from_str_checked(repr)?;
            Ok(Literal::Compiler(literal))
        } else {
            let literal = fallback::Literal::from_str_checked(repr)?;
            Ok(Literal::Fallback(literal))
        }
    }

    pub(crate) unsafe fn from_str_unchecked(repr: &str) -> Self {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::from_str_unchecked(repr))
        } else {
            Literal::Fallback(unsafe { fallback::Literal::from_str_unchecked(repr) })
        }
    }

    suffixed_numbers! {
        u8_suffixed => u8,
        u16_suffixed => u16,
        u32_suffixed => u32,
        u64_suffixed => u64,
        u128_suffixed => u128,
        usize_suffixed => usize,
        i8_suffixed => i8,
        i16_suffixed => i16,
        i32_suffixed => i32,
        i64_suffixed => i64,
        i128_suffixed => i128,
        isize_suffixed => isize,

        f32_suffixed => f32,
        f64_suffixed => f64,
    }

    unsuffixed_integers! {
        u8_unsuffixed => u8,
        u16_unsuffixed => u16,
        u32_unsuffixed => u32,
        u64_unsuffixed => u64,
        u128_unsuffixed => u128,
        usize_unsuffixed => usize,
        i8_unsuffixed => i8,
        i16_unsuffixed => i16,
        i32_unsuffixed => i32,
        i64_unsuffixed => i64,
        i128_unsuffixed => i128,
        isize_unsuffixed => isize,
    }

    pub(crate) fn f32_unsuffixed(f: f32) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::f32_unsuffixed(f))
        } else {
            Literal::Fallback(fallback::Literal::f32_unsuffixed(f))
        }
    }

    pub(crate) fn f64_unsuffixed(f: f64) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::f64_unsuffixed(f))
        } else {
            Literal::Fallback(fallback::Literal::f64_unsuffixed(f))
        }
    }

    pub(crate) fn string(string: &str) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::string(string))
        } else {
            Literal::Fallback(fallback::Literal::string(string))
        }
    }

    pub(crate) fn character(ch: char) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::character(ch))
        } else {
            Literal::Fallback(fallback::Literal::character(ch))
        }
    }

    pub(crate) fn byte_character(byte: u8) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler({
                #[cfg(not(no_literal_byte_character))]
                {
                    proc_macro::Literal::byte_character(byte)
                }

                #[cfg(no_literal_byte_character)]
                {
                    let fallback = fallback::Literal::byte_character(byte);
                    proc_macro::Literal::from_str_unchecked(&fallback.repr)
                }
            })
        } else {
            Literal::Fallback(fallback::Literal::byte_character(byte))
        }
    }

    pub(crate) fn byte_string(bytes: &[u8]) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler(proc_macro::Literal::byte_string(bytes))
        } else {
            Literal::Fallback(fallback::Literal::byte_string(bytes))
        }
    }

    pub(crate) fn c_string(string: &CStr) -> Literal {
        if inside_proc_macro() {
            Literal::Compiler({
                #[cfg(not(no_literal_c_string))]
                {
                    proc_macro::Literal::c_string(string)
                }

                #[cfg(no_literal_c_string)]
                {
                    let fallback = fallback::Literal::c_string(string);
                    proc_macro::Literal::from_str_unchecked(&fallback.repr)
                }
            })
        } else {
            Literal::Fallback(fallback::Literal::c_string(string))
        }
    }

    pub(crate) fn span(&self) -> Span {
        match self {
            Literal::Compiler(lit) => Span::Compiler(lit.span()),
            Literal::Fallback(lit) => Span::Fallback(lit.span()),
        }
    }

    pub(crate) fn set_span(&mut self, span: Span) {
        match (self, span) {
            (Literal::Compiler(lit), Span::Compiler(s)) => lit.set_span(s),
            (Literal::Fallback(lit), Span::Fallback(s)) => lit.set_span(s),
            (Literal::Compiler(_), Span::Fallback(_)) => mismatch(line!()),
            (Literal::Fallback(_), Span::Compiler(_)) => mismatch(line!()),
        }
    }

    pub(crate) fn subspan<R: RangeBounds<usize>>(&self, range: R) -> Option<Span> {
        match self {
            #[cfg(proc_macro_span)]
            Literal::Compiler(lit) => proc_macro_span::subspan(lit, range).map(Span::Compiler),
            #[cfg(not(proc_macro_span))]
            Literal::Compiler(_lit) => None,
            Literal::Fallback(lit) => lit.subspan(range).map(Span::Fallback),
        }
    }

    fn unwrap_nightly(self) -> proc_macro::Literal {
        match self {
            Literal::Compiler(s) => s,
            Literal::Fallback(_) => mismatch(line!()),
        }
    }
}

impl From<fallback::Literal> for Literal {
    fn from(s: fallback::Literal) -> Self {
        Literal::Fallback(s)
    }
}

impl Display for Literal {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Literal::Compiler(t) => Display::fmt(t, f),
            Literal::Fallback(t) => Display::fmt(t, f),
        }
    }
}

impl Debug for Literal {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            Literal::Compiler(t) => Debug::fmt(t, f),
            Literal::Fallback(t) => Debug::fmt(t, f),
        }
    }
}

#[cfg(span_locations)]
pub(crate) fn invalidate_current_thread_spans() {
    if inside_proc_macro() {
        panic!(
            "proc_macro2::extra::invalidate_current_thread_spans is not available in procedural macros"
        );
    } else {
        crate::fallback::invalidate_current_thread_spans();
    }
}
rust/quote/README.md | 12 (new file)
@@ -0,0 +1,12 @@
# `quote`

These source files come from the Rust `quote` crate, version 1.0.40
(released 2025-03-12), hosted in the <https://github.com/dtolnay/quote>
repository, licensed under "Apache-2.0 OR MIT" and only modified to add
the SPDX license identifiers.

For copyright details, please see:

https://github.com/dtolnay/quote/blob/1.0.40/README.md#license
https://github.com/dtolnay/quote/blob/1.0.40/LICENSE-APACHE
https://github.com/dtolnay/quote/blob/1.0.40/LICENSE-MIT
rust/quote/ext.rs | 112 (new file)
@@ -0,0 +1,112 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use super::ToTokens;
use core::iter;
use proc_macro2::{TokenStream, TokenTree};

/// TokenStream extension trait with methods for appending tokens.
///
/// This trait is sealed and cannot be implemented outside of the `quote` crate.
pub trait TokenStreamExt: private::Sealed {
    /// For use by `ToTokens` implementations.
    ///
    /// Appends the token specified to this list of tokens.
    fn append<U>(&mut self, token: U)
    where
        U: Into<TokenTree>;

    /// For use by `ToTokens` implementations.
    ///
    /// ```
    /// # use quote::{quote, TokenStreamExt, ToTokens};
    /// # use proc_macro2::TokenStream;
    /// #
    /// struct X;
    ///
    /// impl ToTokens for X {
    ///     fn to_tokens(&self, tokens: &mut TokenStream) {
    ///         tokens.append_all(&[true, false]);
    ///     }
    /// }
    ///
    /// let tokens = quote!(#X);
    /// assert_eq!(tokens.to_string(), "true false");
    /// ```
    fn append_all<I>(&mut self, iter: I)
    where
        I: IntoIterator,
        I::Item: ToTokens;

    /// For use by `ToTokens` implementations.
    ///
    /// Appends all of the items in the iterator `I`, separated by the tokens
    /// `U`.
    fn append_separated<I, U>(&mut self, iter: I, op: U)
    where
        I: IntoIterator,
        I::Item: ToTokens,
        U: ToTokens;

    /// For use by `ToTokens` implementations.
    ///
    /// Appends all tokens in the iterator `I`, appending `U` after each
    /// element, including after the last element of the iterator.
    fn append_terminated<I, U>(&mut self, iter: I, term: U)
    where
        I: IntoIterator,
        I::Item: ToTokens,
        U: ToTokens;
}

impl TokenStreamExt for TokenStream {
    fn append<U>(&mut self, token: U)
    where
        U: Into<TokenTree>,
    {
        self.extend(iter::once(token.into()));
    }

    fn append_all<I>(&mut self, iter: I)
    where
        I: IntoIterator,
        I::Item: ToTokens,
    {
        for token in iter {
            token.to_tokens(self);
        }
    }

    fn append_separated<I, U>(&mut self, iter: I, op: U)
    where
        I: IntoIterator,
        I::Item: ToTokens,
        U: ToTokens,
    {
        for (i, token) in iter.into_iter().enumerate() {
            if i > 0 {
                op.to_tokens(self);
            }
            token.to_tokens(self);
        }
    }

    fn append_terminated<I, U>(&mut self, iter: I, term: U)
    where
        I: IntoIterator,
        I::Item: ToTokens,
        U: ToTokens,
    {
        for token in iter {
            token.to_tokens(self);
            term.to_tokens(self);
        }
    }
}

mod private {
    use proc_macro2::TokenStream;

    pub trait Sealed {}

    impl Sealed for TokenStream {}
}
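The `private::Sealed` supertrait above is the standard sealed-trait pattern: downstream crates can call `TokenStreamExt` methods but cannot implement the trait for new types, because the supertrait lives in a private module. A minimal self-contained sketch of the same pattern (names here are illustrative, not part of `quote`):

```rust
// Sealed-trait pattern: `Ext` is callable everywhere, but only this
// module can implement it, because `private::Sealed` is unreachable
// from outside.
mod private {
    pub trait Sealed {}
    impl Sealed for String {}
}

pub trait Ext: private::Sealed {
    fn shout(&self) -> String;
}

impl Ext for String {
    fn shout(&self) -> String {
        self.to_uppercase()
    }
}

fn main() {
    // Works for the sealed-in type; other crates cannot add impls,
    // so the trait's method set can evolve without breaking anyone.
    println!("{}", String::from("quote").shout());
}
```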
rust/quote/format.rs | 170 (new file)
@@ -0,0 +1,170 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

/// Formatting macro for constructing `Ident`s.
///
/// <br>
///
/// # Syntax
///
/// Syntax is copied from the [`format!`] macro, supporting both positional and
/// named arguments.
///
/// Only a limited set of formatting traits are supported. The current mapping
/// of format types to traits is:
///
/// * `{}` ⇒ [`IdentFragment`]
/// * `{:o}` ⇒ [`Octal`](std::fmt::Octal)
/// * `{:x}` ⇒ [`LowerHex`](std::fmt::LowerHex)
/// * `{:X}` ⇒ [`UpperHex`](std::fmt::UpperHex)
/// * `{:b}` ⇒ [`Binary`](std::fmt::Binary)
///
/// See [`std::fmt`] for more information.
///
/// <br>
///
/// # IdentFragment
///
/// Unlike `format!`, this macro uses the [`IdentFragment`] formatting trait by
/// default. This trait is like `Display`, with a few differences:
///
/// * `IdentFragment` is only implemented for a limited set of types, such as
///   unsigned integers and strings.
/// * [`Ident`] arguments will have their `r#` prefixes stripped, if present.
///
/// [`IdentFragment`]: crate::IdentFragment
/// [`Ident`]: proc_macro2::Ident
///
/// <br>
///
/// # Hygiene
///
/// The [`Span`] of the first `Ident` argument is used as the span of the final
/// identifier, falling back to [`Span::call_site`] when no identifiers are
/// provided.
///
/// ```
/// # use quote::format_ident;
/// # let ident = format_ident!("Ident");
/// // If `ident` is an Ident, the span of `my_ident` will be inherited from it.
/// let my_ident = format_ident!("My{}{}", ident, "IsCool");
/// assert_eq!(my_ident, "MyIdentIsCool");
/// ```
///
/// Alternatively, the span can be overridden by passing the `span` named
/// argument.
///
/// ```
/// # use quote::format_ident;
/// # const IGNORE_TOKENS: &'static str = stringify! {
/// let my_span = /* ... */;
/// # };
/// # let my_span = proc_macro2::Span::call_site();
/// format_ident!("MyIdent", span = my_span);
/// ```
///
/// [`Span`]: proc_macro2::Span
/// [`Span::call_site`]: proc_macro2::Span::call_site
///
/// <p><br></p>
///
/// # Panics
///
/// This method will panic if the resulting formatted string is not a valid
/// identifier.
///
/// <br>
///
/// # Examples
///
/// Composing raw and non-raw identifiers:
/// ```
/// # use quote::format_ident;
/// let my_ident = format_ident!("My{}", "Ident");
/// assert_eq!(my_ident, "MyIdent");
///
/// let raw = format_ident!("r#Raw");
/// assert_eq!(raw, "r#Raw");
///
/// let my_ident_raw = format_ident!("{}Is{}", my_ident, raw);
/// assert_eq!(my_ident_raw, "MyIdentIsRaw");
/// ```
///
/// Integer formatting options:
/// ```
/// # use quote::format_ident;
/// let num: u32 = 10;
///
/// let decimal = format_ident!("Id_{}", num);
/// assert_eq!(decimal, "Id_10");
///
/// let octal = format_ident!("Id_{:o}", num);
/// assert_eq!(octal, "Id_12");
///
/// let binary = format_ident!("Id_{:b}", num);
/// assert_eq!(binary, "Id_1010");
///
/// let lower_hex = format_ident!("Id_{:x}", num);
/// assert_eq!(lower_hex, "Id_a");
///
/// let upper_hex = format_ident!("Id_{:X}", num);
/// assert_eq!(upper_hex, "Id_A");
/// ```
#[macro_export]
macro_rules! format_ident {
    ($fmt:expr) => {
        $crate::format_ident_impl!([
            $crate::__private::Option::None,
            $fmt
        ])
    };

    ($fmt:expr, $($rest:tt)*) => {
        $crate::format_ident_impl!([
            $crate::__private::Option::None,
            $fmt
        ] $($rest)*)
    };
}

#[macro_export]
#[doc(hidden)]
macro_rules! format_ident_impl {
    // Final state
    ([$span:expr, $($fmt:tt)*]) => {
        $crate::__private::mk_ident(
            &$crate::__private::format!($($fmt)*),
            $span,
        )
    };

    // Span argument
    ([$old:expr, $($fmt:tt)*] span = $span:expr) => {
        $crate::format_ident_impl!([$old, $($fmt)*] span = $span,)
    };
    ([$old:expr, $($fmt:tt)*] span = $span:expr, $($rest:tt)*) => {
        $crate::format_ident_impl!([
            $crate::__private::Option::Some::<$crate::__private::Span>($span),
            $($fmt)*
        ] $($rest)*)
    };

    // Named argument
    ([$span:expr, $($fmt:tt)*] $name:ident = $arg:expr) => {
        $crate::format_ident_impl!([$span, $($fmt)*] $name = $arg,)
    };
    ([$span:expr, $($fmt:tt)*] $name:ident = $arg:expr, $($rest:tt)*) => {
        match $crate::__private::IdentFragmentAdapter(&$arg) {
            arg => $crate::format_ident_impl!([$span.or(arg.span()), $($fmt)*, $name = arg] $($rest)*),
        }
    };

    // Positional argument
    ([$span:expr, $($fmt:tt)*] $arg:expr) => {
        $crate::format_ident_impl!([$span, $($fmt)*] $arg,)
    };
    ([$span:expr, $($fmt:tt)*] $arg:expr, $($rest:tt)*) => {
        match $crate::__private::IdentFragmentAdapter(&$arg) {
            arg => $crate::format_ident_impl!([$span.or(arg.span()), $($fmt)*, arg] $($rest)*),
        }
    };
}
rust/quote/ident_fragment.rs | 90 (new file)
@@ -0,0 +1,90 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use alloc::borrow::Cow;
use core::fmt;
use proc_macro2::{Ident, Span};

/// Specialized formatting trait used by `format_ident!`.
///
/// [`Ident`] arguments formatted using this trait will have their `r#` prefix
/// stripped, if present.
///
/// See [`format_ident!`] for more information.
///
/// [`format_ident!`]: crate::format_ident
pub trait IdentFragment {
    /// Format this value as an identifier fragment.
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result;

    /// Span associated with this `IdentFragment`.
    ///
    /// If non-`None`, may be inherited by formatted identifiers.
    fn span(&self) -> Option<Span> {
        None
    }
}

impl<T: IdentFragment + ?Sized> IdentFragment for &T {
    fn span(&self) -> Option<Span> {
        <T as IdentFragment>::span(*self)
    }

    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        IdentFragment::fmt(*self, f)
    }
}

impl<T: IdentFragment + ?Sized> IdentFragment for &mut T {
    fn span(&self) -> Option<Span> {
        <T as IdentFragment>::span(*self)
    }

    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        IdentFragment::fmt(*self, f)
    }
}

impl IdentFragment for Ident {
    fn span(&self) -> Option<Span> {
        Some(self.span())
    }

    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let id = self.to_string();
        if let Some(id) = id.strip_prefix("r#") {
            fmt::Display::fmt(id, f)
        } else {
            fmt::Display::fmt(&id[..], f)
        }
    }
}

impl<T> IdentFragment for Cow<'_, T>
where
    T: IdentFragment + ToOwned + ?Sized,
{
    fn span(&self) -> Option<Span> {
        T::span(self)
    }

    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        T::fmt(self, f)
    }
}

// Limited set of types which this is implemented for, as we want to avoid types
// which will often include non-identifier characters in their `Display` impl.
macro_rules! ident_fragment_display {
    ($($T:ty),*) => {
        $(
            impl IdentFragment for $T {
                fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
                    fmt::Display::fmt(self, f)
                }
            }
        )*
    };
}

ident_fragment_display!(bool, str, String, char);
ident_fragment_display!(u8, u16, u32, u64, u128, usize);
rust/quote/lib.rs | 1456 (new file)
(diff suppressed because it is too large)
rust/quote/runtime.rs | 494 (new file)
@@ -0,0 +1,494 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use self::get_span::{GetSpan, GetSpanBase, GetSpanInner};
use crate::{IdentFragment, ToTokens, TokenStreamExt};
use core::fmt;
use core::iter;
use core::ops::BitOr;
use proc_macro2::{Group, Ident, Punct, Spacing, TokenTree};

#[doc(hidden)]
pub use alloc::format;
#[doc(hidden)]
pub use core::option::Option;

#[doc(hidden)]
pub type Delimiter = proc_macro2::Delimiter;
#[doc(hidden)]
pub type Span = proc_macro2::Span;
#[doc(hidden)]
pub type TokenStream = proc_macro2::TokenStream;

#[doc(hidden)]
pub struct HasIterator; // True
#[doc(hidden)]
pub struct ThereIsNoIteratorInRepetition; // False

impl BitOr<ThereIsNoIteratorInRepetition> for ThereIsNoIteratorInRepetition {
    type Output = ThereIsNoIteratorInRepetition;
    fn bitor(self, _rhs: ThereIsNoIteratorInRepetition) -> ThereIsNoIteratorInRepetition {
        ThereIsNoIteratorInRepetition
    }
}

impl BitOr<ThereIsNoIteratorInRepetition> for HasIterator {
    type Output = HasIterator;
    fn bitor(self, _rhs: ThereIsNoIteratorInRepetition) -> HasIterator {
        HasIterator
    }
}

impl BitOr<HasIterator> for ThereIsNoIteratorInRepetition {
    type Output = HasIterator;
    fn bitor(self, _rhs: HasIterator) -> HasIterator {
        HasIterator
    }
}

impl BitOr<HasIterator> for HasIterator {
    type Output = HasIterator;
    fn bitor(self, _rhs: HasIterator) -> HasIterator {
        HasIterator
    }
}

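The four `BitOr` impls above encode a truth table over two zero-sized marker types, so `quote!` can OR together "does this repetition contain an iterator?" facts at compile time. A self-contained sketch of the same type-level OR trick (with illustrative names, not the ones `quote` uses):

```rust
use core::ops::BitOr;

// Two zero-sized types acting as type-level booleans.
#[derive(Debug, PartialEq)]
struct Yes; // true
#[derive(Debug, PartialEq)]
struct No; // false

// OR truth table encoded as BitOr impls: the output *type* is Yes
// unless both operands are No, so `a | b | c` computes the OR of
// three facts with zero runtime cost.
impl BitOr<No> for No {
    type Output = No;
    fn bitor(self, _rhs: No) -> No { No }
}
impl BitOr<Yes> for No {
    type Output = Yes;
    fn bitor(self, _rhs: Yes) -> Yes { Yes }
}
impl BitOr<No> for Yes {
    type Output = Yes;
    fn bitor(self, _rhs: No) -> Yes { Yes }
}
impl BitOr<Yes> for Yes {
    type Output = Yes;
    fn bitor(self, _rhs: Yes) -> Yes { Yes }
}

fn main() {
    // quote! folds one such marker per interpolated variable; if the
    // fold comes out `No`, a repetition without iterators is rejected
    // at compile time.
    assert_eq!(No | Yes, Yes);
    assert_eq!(No | No, No);
    println!("type-level OR ok");
}
```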
/// Extension traits used by the implementation of `quote!`. These are defined
/// in separate traits, rather than as a single trait due to ambiguity issues.
///
/// These traits expose a `quote_into_iter` method which should allow calling
/// whichever impl happens to be applicable. Calling that method repeatedly on
/// the returned value should be idempotent.
#[doc(hidden)]
pub mod ext {
    use super::RepInterp;
    use super::{HasIterator as HasIter, ThereIsNoIteratorInRepetition as DoesNotHaveIter};
    use crate::ToTokens;
    use alloc::collections::btree_set::{self, BTreeSet};
    use core::slice;

    /// Extension trait providing the `quote_into_iter` method on iterators.
    #[doc(hidden)]
    pub trait RepIteratorExt: Iterator + Sized {
        fn quote_into_iter(self) -> (Self, HasIter) {
            (self, HasIter)
        }
    }

    impl<T: Iterator> RepIteratorExt for T {}

    /// Extension trait providing the `quote_into_iter` method for
    /// non-iterable types. These types interpolate the same value in each
    /// iteration of the repetition.
    #[doc(hidden)]
    pub trait RepToTokensExt {
        /// Pretend to be an iterator for the purposes of `quote_into_iter`.
        /// This allows repeated calls to `quote_into_iter` to continue
        /// correctly returning DoesNotHaveIter.
        fn next(&self) -> Option<&Self> {
            Some(self)
        }

        fn quote_into_iter(&self) -> (&Self, DoesNotHaveIter) {
            (self, DoesNotHaveIter)
        }
    }

    impl<T: ToTokens + ?Sized> RepToTokensExt for T {}

    /// Extension trait providing the `quote_into_iter` method for types that
    /// can be referenced as an iterator.
    #[doc(hidden)]
    pub trait RepAsIteratorExt<'q> {
        type Iter: Iterator;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter);
    }

    impl<'q, T: RepAsIteratorExt<'q> + ?Sized> RepAsIteratorExt<'q> for &T {
        type Iter = T::Iter;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            <T as RepAsIteratorExt>::quote_into_iter(*self)
        }
    }

    impl<'q, T: RepAsIteratorExt<'q> + ?Sized> RepAsIteratorExt<'q> for &mut T {
        type Iter = T::Iter;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            <T as RepAsIteratorExt>::quote_into_iter(*self)
        }
    }

    impl<'q, T: 'q> RepAsIteratorExt<'q> for [T] {
        type Iter = slice::Iter<'q, T>;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            (self.iter(), HasIter)
        }
    }

    impl<'q, T: 'q, const N: usize> RepAsIteratorExt<'q> for [T; N] {
        type Iter = slice::Iter<'q, T>;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            (self.iter(), HasIter)
        }
    }

    impl<'q, T: 'q> RepAsIteratorExt<'q> for Vec<T> {
        type Iter = slice::Iter<'q, T>;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            (self.iter(), HasIter)
        }
    }

    impl<'q, T: 'q> RepAsIteratorExt<'q> for BTreeSet<T> {
        type Iter = btree_set::Iter<'q, T>;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            (self.iter(), HasIter)
        }
    }

    impl<'q, T: RepAsIteratorExt<'q>> RepAsIteratorExt<'q> for RepInterp<T> {
        type Iter = T::Iter;

        fn quote_into_iter(&'q self) -> (Self::Iter, HasIter) {
            self.0.quote_into_iter()
        }
    }
}

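The `ext` module above relies on method resolution picking whichever `quote_into_iter` impl applies: a by-value impl for real iterators, a by-reference fallback for plain `ToTokens` values. A self-contained sketch of that dispatch technique, with illustrative stand-ins for `RepIteratorExt` / `RepToTokensExt`:

```rust
// Two extension traits expose the same method name. Method resolution
// picks the by-value iterator impl when the receiver is an Iterator,
// and otherwise falls back to the by-reference impl. (Names here are
// illustrative; quote also returns marker types alongside the value.)
trait ByValue: Iterator + Sized {
    fn probe(self) -> &'static str {
        "has iterator"
    }
}
impl<T: Iterator> ByValue for T {}

trait ByRef {
    fn probe(&self) -> &'static str {
        "no iterator"
    }
}
// In quote this blanket-covers all ToTokens types; a single concrete
// impl keeps the sketch unambiguous.
impl ByRef for u32 {}

fn main() {
    // Ranges implement Iterator, so the by-value impl is chosen.
    assert_eq!((0..3).probe(), "has iterator");
    // u32 is not an Iterator, so only the by-ref impl applies.
    assert_eq!(7u32.probe(), "no iterator");
    println!("dispatch ok");
}
```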
// Helper type used within interpolations to allow for repeated binding names.
// Implements the relevant traits, and exports a dummy `next()` method.
#[derive(Copy, Clone)]
#[doc(hidden)]
pub struct RepInterp<T>(pub T);

impl<T> RepInterp<T> {
    // This method is intended to look like `Iterator::next`, and is called when
    // a name is bound multiple times, as the previous binding will shadow the
    // original `Iterator` object. This allows us to avoid advancing the
    // iterator multiple times per iteration.
    pub fn next(self) -> Option<T> {
        Some(self.0)
    }
}

impl<T: Iterator> Iterator for RepInterp<T> {
    type Item = T::Item;

    fn next(&mut self) -> Option<Self::Item> {
        self.0.next()
    }
}

impl<T: ToTokens> ToTokens for RepInterp<T> {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        self.0.to_tokens(tokens);
    }
}

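The shadowing trick `RepInterp` enables can be seen in isolation: after the first real `next()`, the name is rebound to a wrapper whose inherent `next()` just hands back a copy, so repeated uses of the same binding inside one repetition step never advance the iterator again. A minimal illustrative reimplementation:

```rust
// Illustrative stand-in for RepInterp: an inherent `next()` that looks
// like Iterator::next but never advances anything.
#[derive(Copy, Clone, Debug, PartialEq)]
struct Interp<T>(T);

impl<T> Interp<T> {
    fn next(self) -> Option<T> {
        Some(self.0)
    }
}

fn main() {
    let mut iter = [10, 20].into_iter();
    // First binding: really advances the iterator.
    let x = iter.next().unwrap();
    // Rebind under the wrapper: the new `x` shadows the old one, so any
    // further `next()` in this scope hits the copy, not the iterator.
    let x = Interp(x);
    assert_eq!(x.next(), Some(10));
    assert_eq!(x.next(), Some(10)); // still 10: nothing was advanced
    println!("shadowing ok");
}
```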
#[doc(hidden)]
#[inline]
pub fn get_span<T>(span: T) -> GetSpan<T> {
    GetSpan(GetSpanInner(GetSpanBase(span)))
}

mod get_span {
    use core::ops::Deref;
    use proc_macro2::extra::DelimSpan;
    use proc_macro2::Span;

    pub struct GetSpan<T>(pub(crate) GetSpanInner<T>);

    pub struct GetSpanInner<T>(pub(crate) GetSpanBase<T>);

    pub struct GetSpanBase<T>(pub(crate) T);

    impl GetSpan<Span> {
        #[inline]
        pub fn __into_span(self) -> Span {
            ((self.0).0).0
        }
    }

    impl GetSpanInner<DelimSpan> {
        #[inline]
        pub fn __into_span(&self) -> Span {
            (self.0).0.join()
        }
    }

    impl<T> GetSpanBase<T> {
        #[allow(clippy::unused_self)]
        pub fn __into_span(&self) -> T {
            unreachable!()
        }
    }

    impl<T> Deref for GetSpan<T> {
        type Target = GetSpanInner<T>;

        #[inline]
        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }

    impl<T> Deref for GetSpanInner<T> {
        type Target = GetSpanBase<T>;

        #[inline]
        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }
}

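The nested `GetSpan`/`GetSpanInner`/`GetSpanBase` wrappers above are a deref-based specialization ladder: the most specific type gets first crack at `__into_span`, and autoderef falls through to the generic base otherwise. A self-contained sketch of that technique under illustrative names:

```rust
// Deref-based specialization: an inherent method on the outer wrapper
// wins for a specific type; every other type autoderefs to the generic
// fallback on the inner wrapper.
use core::ops::Deref;

struct Outer<T>(Base<T>);
struct Base<T>(T);

impl Outer<u32> {
    // Most specific: chosen when T = u32, before any deref step.
    fn describe(&self) -> &'static str {
        "specialized for u32"
    }
}

impl<T> Base<T> {
    // Generic fallback, reached through Deref for all other T.
    fn describe(&self) -> &'static str {
        "generic fallback"
    }
}

impl<T> Deref for Outer<T> {
    type Target = Base<T>;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

fn main() {
    assert_eq!(Outer(Base(1u32)).describe(), "specialized for u32");
    assert_eq!(Outer(Base("s")).describe(), "generic fallback");
    println!("deref specialization ok");
}
```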
#[doc(hidden)]
pub fn push_group(tokens: &mut TokenStream, delimiter: Delimiter, inner: TokenStream) {
    tokens.append(Group::new(delimiter, inner));
}

#[doc(hidden)]
pub fn push_group_spanned(
    tokens: &mut TokenStream,
    span: Span,
    delimiter: Delimiter,
    inner: TokenStream,
) {
    let mut g = Group::new(delimiter, inner);
    g.set_span(span);
    tokens.append(g);
}

#[doc(hidden)]
pub fn parse(tokens: &mut TokenStream, s: &str) {
    let s: TokenStream = s.parse().expect("invalid token stream");
    tokens.extend(iter::once(s));
}

#[doc(hidden)]
pub fn parse_spanned(tokens: &mut TokenStream, span: Span, s: &str) {
    let s: TokenStream = s.parse().expect("invalid token stream");
    tokens.extend(s.into_iter().map(|t| respan_token_tree(t, span)));
}

// Token tree with every span replaced by the given one.
fn respan_token_tree(mut token: TokenTree, span: Span) -> TokenTree {
    match &mut token {
        TokenTree::Group(g) => {
            let stream = g
                .stream()
                .into_iter()
                .map(|token| respan_token_tree(token, span))
                .collect();
            *g = Group::new(g.delimiter(), stream);
            g.set_span(span);
        }
        other => other.set_span(span),
    }
    token
}

#[doc(hidden)]
pub fn push_ident(tokens: &mut TokenStream, s: &str) {
    let span = Span::call_site();
    push_ident_spanned(tokens, span, s);
}

#[doc(hidden)]
pub fn push_ident_spanned(tokens: &mut TokenStream, span: Span, s: &str) {
    tokens.append(ident_maybe_raw(s, span));
}

#[doc(hidden)]
pub fn push_lifetime(tokens: &mut TokenStream, lifetime: &str) {
    tokens.extend([
        TokenTree::Punct(Punct::new('\'', Spacing::Joint)),
        TokenTree::Ident(Ident::new(&lifetime[1..], Span::call_site())),
    ]);
}

#[doc(hidden)]
pub fn push_lifetime_spanned(tokens: &mut TokenStream, span: Span, lifetime: &str) {
    tokens.extend([
        TokenTree::Punct({
            let mut apostrophe = Punct::new('\'', Spacing::Joint);
            apostrophe.set_span(span);
            apostrophe
        }),
        TokenTree::Ident(Ident::new(&lifetime[1..], span)),
    ]);
}

macro_rules! push_punct {
    ($name:ident $spanned:ident $char1:tt) => {
        #[doc(hidden)]
        pub fn $name(tokens: &mut TokenStream) {
            tokens.append(Punct::new($char1, Spacing::Alone));
        }
        #[doc(hidden)]
        pub fn $spanned(tokens: &mut TokenStream, span: Span) {
            let mut punct = Punct::new($char1, Spacing::Alone);
            punct.set_span(span);
            tokens.append(punct);
        }
    };
    ($name:ident $spanned:ident $char1:tt $char2:tt) => {
        #[doc(hidden)]
        pub fn $name(tokens: &mut TokenStream) {
            tokens.append(Punct::new($char1, Spacing::Joint));
            tokens.append(Punct::new($char2, Spacing::Alone));
        }
        #[doc(hidden)]
        pub fn $spanned(tokens: &mut TokenStream, span: Span) {
            let mut punct = Punct::new($char1, Spacing::Joint);
            punct.set_span(span);
            tokens.append(punct);
|
||||
let mut punct = Punct::new($char2, Spacing::Alone);
|
||||
punct.set_span(span);
|
||||
tokens.append(punct);
|
||||
}
|
||||
};
|
||||
($name:ident $spanned:ident $char1:tt $char2:tt $char3:tt) => {
|
||||
#[doc(hidden)]
|
||||
pub fn $name(tokens: &mut TokenStream) {
|
||||
tokens.append(Punct::new($char1, Spacing::Joint));
|
||||
tokens.append(Punct::new($char2, Spacing::Joint));
|
||||
tokens.append(Punct::new($char3, Spacing::Alone));
|
||||
}
|
||||
#[doc(hidden)]
|
||||
pub fn $spanned(tokens: &mut TokenStream, span: Span) {
|
||||
let mut punct = Punct::new($char1, Spacing::Joint);
|
||||
punct.set_span(span);
|
||||
tokens.append(punct);
|
||||
let mut punct = Punct::new($char2, Spacing::Joint);
|
||||
punct.set_span(span);
|
||||
tokens.append(punct);
|
||||
let mut punct = Punct::new($char3, Spacing::Alone);
|
||||
punct.set_span(span);
|
||||
tokens.append(punct);
|
||||
}
|
||||
};
|
||||
}
|
||||
|
||||
push_punct!(push_add push_add_spanned '+');
|
||||
push_punct!(push_add_eq push_add_eq_spanned '+' '=');
|
||||
push_punct!(push_and push_and_spanned '&');
|
||||
push_punct!(push_and_and push_and_and_spanned '&' '&');
|
||||
push_punct!(push_and_eq push_and_eq_spanned '&' '=');
|
||||
push_punct!(push_at push_at_spanned '@');
|
||||
push_punct!(push_bang push_bang_spanned '!');
|
||||
push_punct!(push_caret push_caret_spanned '^');
|
||||
push_punct!(push_caret_eq push_caret_eq_spanned '^' '=');
|
||||
push_punct!(push_colon push_colon_spanned ':');
|
||||
push_punct!(push_colon2 push_colon2_spanned ':' ':');
|
||||
push_punct!(push_comma push_comma_spanned ',');
|
||||
push_punct!(push_div push_div_spanned '/');
|
||||
push_punct!(push_div_eq push_div_eq_spanned '/' '=');
|
||||
push_punct!(push_dot push_dot_spanned '.');
|
||||
push_punct!(push_dot2 push_dot2_spanned '.' '.');
|
||||
push_punct!(push_dot3 push_dot3_spanned '.' '.' '.');
|
||||
push_punct!(push_dot_dot_eq push_dot_dot_eq_spanned '.' '.' '=');
|
||||
push_punct!(push_eq push_eq_spanned '=');
|
||||
push_punct!(push_eq_eq push_eq_eq_spanned '=' '=');
|
||||
push_punct!(push_ge push_ge_spanned '>' '=');
|
||||
push_punct!(push_gt push_gt_spanned '>');
|
||||
push_punct!(push_le push_le_spanned '<' '=');
|
||||
push_punct!(push_lt push_lt_spanned '<');
|
||||
push_punct!(push_mul_eq push_mul_eq_spanned '*' '=');
|
||||
push_punct!(push_ne push_ne_spanned '!' '=');
|
||||
push_punct!(push_or push_or_spanned '|');
|
||||
push_punct!(push_or_eq push_or_eq_spanned '|' '=');
|
||||
push_punct!(push_or_or push_or_or_spanned '|' '|');
|
||||
push_punct!(push_pound push_pound_spanned '#');
|
||||
push_punct!(push_question push_question_spanned '?');
|
||||
push_punct!(push_rarrow push_rarrow_spanned '-' '>');
|
||||
push_punct!(push_larrow push_larrow_spanned '<' '-');
|
||||
push_punct!(push_rem push_rem_spanned '%');
|
||||
push_punct!(push_rem_eq push_rem_eq_spanned '%' '=');
|
||||
push_punct!(push_fat_arrow push_fat_arrow_spanned '=' '>');
|
||||
push_punct!(push_semi push_semi_spanned ';');
|
||||
push_punct!(push_shl push_shl_spanned '<' '<');
|
||||
push_punct!(push_shl_eq push_shl_eq_spanned '<' '<' '=');
|
||||
push_punct!(push_shr push_shr_spanned '>' '>');
|
||||
push_punct!(push_shr_eq push_shr_eq_spanned '>' '>' '=');
|
||||
push_punct!(push_star push_star_spanned '*');
|
||||
push_punct!(push_sub push_sub_spanned '-');
|
||||
push_punct!(push_sub_eq push_sub_eq_spanned '-' '=');
|
||||
|
||||
#[doc(hidden)]
|
||||
pub fn push_underscore(tokens: &mut TokenStream) {
|
||||
push_underscore_spanned(tokens, Span::call_site());
|
||||
}
|
||||
|
||||
#[doc(hidden)]
|
||||
pub fn push_underscore_spanned(tokens: &mut TokenStream, span: Span) {
|
||||
tokens.append(Ident::new("_", span));
|
||||
}
|
||||
|
||||
// Helper method for constructing identifiers from the `format_ident!` macro,
|
||||
// handling `r#` prefixes.
|
||||
#[doc(hidden)]
|
||||
pub fn mk_ident(id: &str, span: Option<Span>) -> Ident {
|
||||
let span = span.unwrap_or_else(Span::call_site);
|
||||
ident_maybe_raw(id, span)
|
||||
}
|
||||
|
||||
fn ident_maybe_raw(id: &str, span: Span) -> Ident {
|
||||
if let Some(id) = id.strip_prefix("r#") {
|
||||
Ident::new_raw(id, span)
|
||||
} else {
|
||||
Ident::new(id, span)
|
||||
}
|
||||
}
|
||||
|
||||
// Adapts from `IdentFragment` to `fmt::Display` for use by the `format_ident!`
|
||||
// macro, and exposes span information from these fragments.
|
||||
//
|
||||
// This struct also has forwarding implementations of the formatting traits
|
||||
// `Octal`, `LowerHex`, `UpperHex`, and `Binary` to allow for their use within
|
||||
// `format_ident!`.
|
||||
#[derive(Copy, Clone)]
|
||||
#[doc(hidden)]
|
||||
pub struct IdentFragmentAdapter<T: IdentFragment>(pub T);
|
||||
|
||||
impl<T: IdentFragment> IdentFragmentAdapter<T> {
|
||||
pub fn span(&self) -> Option<Span> {
|
||||
self.0.span()
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: IdentFragment> fmt::Display for IdentFragmentAdapter<T> {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
IdentFragment::fmt(&self.0, f)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: IdentFragment + fmt::Octal> fmt::Octal for IdentFragmentAdapter<T> {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
fmt::Octal::fmt(&self.0, f)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: IdentFragment + fmt::LowerHex> fmt::LowerHex for IdentFragmentAdapter<T> {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
fmt::LowerHex::fmt(&self.0, f)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: IdentFragment + fmt::UpperHex> fmt::UpperHex for IdentFragmentAdapter<T> {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
fmt::UpperHex::fmt(&self.0, f)
|
||||
}
|
||||
}
|
||||
|
||||
impl<T: IdentFragment + fmt::Binary> fmt::Binary for IdentFragmentAdapter<T> {
|
||||
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
|
||||
fmt::Binary::fmt(&self.0, f)
|
||||
}
|
||||
}
|
||||
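`ident_maybe_raw` above is the one place this runtime deals with raw identifiers: a leading `r#` selects `Ident::new_raw` instead of `Ident::new`. The prefix handling itself is plain string logic, and can be sketched with std only — `split_raw_prefix` is a hypothetical helper for illustration, not part of `quote`:

```rust
// Mirror of ident_maybe_raw's branch: split off the raw-identifier
// marker so a caller can choose Ident::new_raw vs Ident::new.
fn split_raw_prefix(id: &str) -> (&str, bool) {
    match id.strip_prefix("r#") {
        Some(rest) => (rest, true), // raw identifier, e.g. r#type
        None => (id, false),        // ordinary identifier
    }
}
```

This is why `format_ident!("r#{}", name)` round-trips raw identifiers: `mk_ident` funnels through the same branch.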
rust/quote/spanned.rs | 52 (new file)
@@ -0,0 +1,52 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::ToTokens;
use proc_macro2::extra::DelimSpan;
use proc_macro2::{Span, TokenStream};

// Not public API other than via the syn crate. Use syn::spanned::Spanned.
pub trait Spanned: private::Sealed {
    fn __span(&self) -> Span;
}

impl Spanned for Span {
    fn __span(&self) -> Span {
        *self
    }
}

impl Spanned for DelimSpan {
    fn __span(&self) -> Span {
        self.join()
    }
}

impl<T: ?Sized + ToTokens> Spanned for T {
    fn __span(&self) -> Span {
        join_spans(self.into_token_stream())
    }
}

fn join_spans(tokens: TokenStream) -> Span {
    let mut iter = tokens.into_iter().map(|tt| tt.span());

    let first = match iter.next() {
        Some(span) => span,
        None => return Span::call_site(),
    };

    iter.fold(None, |_prev, next| Some(next))
        .and_then(|last| first.join(last))
        .unwrap_or(first)
}

mod private {
    use crate::ToTokens;
    use proc_macro2::extra::DelimSpan;
    use proc_macro2::Span;

    pub trait Sealed {}
    impl Sealed for Span {}
    impl Sealed for DelimSpan {}
    impl<T: ?Sized + ToTokens> Sealed for T {}
}
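The `fold(None, |_prev, next| Some(next))` in `join_spans` is just a way to grab the *last* span without requiring a `DoubleEndedIterator`, after which the first and last spans are joined. The same shape can be sketched with std-only ranges standing in for spans — `join_ranges` is a hypothetical illustration, not `quote` API:

```rust
use std::ops::Range;

// Mimics join_spans: join the first and last ranges of a stream,
// falling back to a default when the stream is empty (the analogue
// of returning Span::call_site()).
fn join_ranges(spans: impl IntoIterator<Item = Range<usize>>) -> Range<usize> {
    let mut iter = spans.into_iter();
    let first = match iter.next() {
        Some(r) => r,
        None => return 0..0, // empty stream: default "span"
    };
    // fold keeps only the final element, like quote's join_spans
    match iter.fold(None, |_prev, next| Some(next)) {
        Some(last) => first.start..last.end,
        None => first, // single element: its own span is the join
    }
}
```

Note that the real `Span::join` can fail (it returns `Option<Span>`, e.g. across files), which is why `join_spans` falls back to `first` via `unwrap_or`.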
rust/quote/to_tokens.rs | 273 (new file)
@@ -0,0 +1,273 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use super::TokenStreamExt;
use alloc::borrow::Cow;
use alloc::rc::Rc;
use core::iter;
use proc_macro2::{Group, Ident, Literal, Punct, Span, TokenStream, TokenTree};
use std::ffi::{CStr, CString};

/// Types that can be interpolated inside a `quote!` invocation.
pub trait ToTokens {
    /// Write `self` to the given `TokenStream`.
    ///
    /// The token append methods provided by the [`TokenStreamExt`] extension
    /// trait may be useful for implementing `ToTokens`.
    ///
    /// # Example
    ///
    /// Example implementation for a struct representing Rust paths like
    /// `std::cmp::PartialEq`:
    ///
    /// ```
    /// use proc_macro2::{TokenTree, Spacing, Span, Punct, TokenStream};
    /// use quote::{TokenStreamExt, ToTokens};
    ///
    /// pub struct Path {
    ///     pub global: bool,
    ///     pub segments: Vec<PathSegment>,
    /// }
    ///
    /// impl ToTokens for Path {
    ///     fn to_tokens(&self, tokens: &mut TokenStream) {
    ///         for (i, segment) in self.segments.iter().enumerate() {
    ///             if i > 0 || self.global {
    ///                 // Double colon `::`
    ///                 tokens.append(Punct::new(':', Spacing::Joint));
    ///                 tokens.append(Punct::new(':', Spacing::Alone));
    ///             }
    ///             segment.to_tokens(tokens);
    ///         }
    ///     }
    /// }
    /// #
    /// # pub struct PathSegment;
    /// #
    /// # impl ToTokens for PathSegment {
    /// #     fn to_tokens(&self, tokens: &mut TokenStream) {
    /// #         unimplemented!()
    /// #     }
    /// # }
    /// ```
    fn to_tokens(&self, tokens: &mut TokenStream);

    /// Convert `self` directly into a `TokenStream` object.
    ///
    /// This method is implicitly implemented using `to_tokens`, and acts as a
    /// convenience method for consumers of the `ToTokens` trait.
    fn to_token_stream(&self) -> TokenStream {
        let mut tokens = TokenStream::new();
        self.to_tokens(&mut tokens);
        tokens
    }

    /// Convert `self` directly into a `TokenStream` object.
    ///
    /// This method is implicitly implemented using `to_tokens`, and acts as a
    /// convenience method for consumers of the `ToTokens` trait.
    fn into_token_stream(self) -> TokenStream
    where
        Self: Sized,
    {
        self.to_token_stream()
    }
}

impl<T: ?Sized + ToTokens> ToTokens for &T {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

impl<T: ?Sized + ToTokens> ToTokens for &mut T {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

impl<'a, T: ?Sized + ToOwned + ToTokens> ToTokens for Cow<'a, T> {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

impl<T: ?Sized + ToTokens> ToTokens for Box<T> {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

impl<T: ?Sized + ToTokens> ToTokens for Rc<T> {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        (**self).to_tokens(tokens);
    }
}

impl<T: ToTokens> ToTokens for Option<T> {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        if let Some(t) = self {
            t.to_tokens(tokens);
        }
    }
}

impl ToTokens for str {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::string(self));
    }
}

impl ToTokens for String {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        self.as_str().to_tokens(tokens);
    }
}

impl ToTokens for i8 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::i8_suffixed(*self));
    }
}

impl ToTokens for i16 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::i16_suffixed(*self));
    }
}

impl ToTokens for i32 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::i32_suffixed(*self));
    }
}

impl ToTokens for i64 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::i64_suffixed(*self));
    }
}

impl ToTokens for i128 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::i128_suffixed(*self));
    }
}

impl ToTokens for isize {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::isize_suffixed(*self));
    }
}

impl ToTokens for u8 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::u8_suffixed(*self));
    }
}

impl ToTokens for u16 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::u16_suffixed(*self));
    }
}

impl ToTokens for u32 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::u32_suffixed(*self));
    }
}

impl ToTokens for u64 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::u64_suffixed(*self));
    }
}

impl ToTokens for u128 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::u128_suffixed(*self));
    }
}

impl ToTokens for usize {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::usize_suffixed(*self));
    }
}

impl ToTokens for f32 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::f32_suffixed(*self));
    }
}

impl ToTokens for f64 {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::f64_suffixed(*self));
    }
}

impl ToTokens for char {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::character(*self));
    }
}

impl ToTokens for bool {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        let word = if *self { "true" } else { "false" };
        tokens.append(Ident::new(word, Span::call_site()));
    }
}

impl ToTokens for CStr {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::c_string(self));
    }
}

impl ToTokens for CString {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(Literal::c_string(self));
    }
}

impl ToTokens for Group {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(self.clone());
    }
}

impl ToTokens for Ident {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(self.clone());
    }
}

impl ToTokens for Punct {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(self.clone());
    }
}

impl ToTokens for Literal {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(self.clone());
    }
}

impl ToTokens for TokenTree {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.append(self.clone());
    }
}

impl ToTokens for TokenStream {
    fn to_tokens(&self, tokens: &mut TokenStream) {
        tokens.extend(iter::once(self.clone()));
    }

    fn into_token_stream(self) -> TokenStream {
        self
    }
}
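Two design choices in the impls above are worth noting: the primitive impls use *suffixed* literals (`Literal::i32_suffixed`), so an interpolated `i32` keeps its type in the output, and `Option<T>` appends nothing for `None`. A std-only sketch of both, with a hypothetical `ToSource` trait standing in for `ToTokens` and strings standing in for token streams:

```rust
use std::fmt::Write;

// Hypothetical stand-in for ToTokens: render values as Rust source text.
trait ToSource {
    fn to_source(&self, out: &mut String);
}

impl ToSource for i32 {
    fn to_source(&self, out: &mut String) {
        // Suffixed, like Literal::i32_suffixed: the emitted token pins the type.
        write!(out, "{}i32", self).unwrap();
    }
}

impl ToSource for bool {
    fn to_source(&self, out: &mut String) {
        // Like quote's bool impl: emit the keyword, not a literal.
        out.push_str(if *self { "true" } else { "false" });
    }
}

impl<T: ToSource> ToSource for Option<T> {
    // Like quote's Option impl: None contributes no tokens at all,
    // which lets `#opt` interpolate "something or nothing".
    fn to_source(&self, out: &mut String) {
        if let Some(t) = self {
            t.to_source(out);
        }
    }
}
```

The suffix matters in generated code: `quote! { #n }` for `let n: i32 = 1` expands to the token `1i32`, so type inference at the expansion site cannot silently change the literal's type.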
rust/syn/README.md | 13 (new file)
@@ -0,0 +1,13 @@
# `syn`

These source files come from the Rust `syn` crate, version 2.0.106
(released 2025-08-16), hosted in the <https://github.com/dtolnay/syn>
repository, licensed under "Apache-2.0 OR MIT" and only modified to add
the SPDX license identifiers and to remove the `unicode-ident`
dependency.

For copyright details, please see:

https://github.com/dtolnay/syn/blob/2.0.106/README.md#license
https://github.com/dtolnay/syn/blob/2.0.106/LICENSE-APACHE
https://github.com/dtolnay/syn/blob/2.0.106/LICENSE-MIT
rust/syn/attr.rs | 838 (new file)
@@ -0,0 +1,838 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT
|
||||
|
||||
#[cfg(feature = "parsing")]
|
||||
use crate::error::Error;
|
||||
#[cfg(feature = "parsing")]
|
||||
use crate::error::Result;
|
||||
use crate::expr::Expr;
|
||||
use crate::mac::MacroDelimiter;
|
||||
#[cfg(feature = "parsing")]
|
||||
use crate::meta::{self, ParseNestedMeta};
|
||||
#[cfg(feature = "parsing")]
|
||||
use crate::parse::{Parse, ParseStream, Parser};
|
||||
use crate::path::Path;
|
||||
use crate::token;
|
||||
use proc_macro2::TokenStream;
|
||||
#[cfg(feature = "printing")]
|
||||
use std::iter;
|
||||
#[cfg(feature = "printing")]
|
||||
use std::slice;
|
||||
|
||||
ast_struct! {
|
||||
/// An attribute, like `#[repr(transparent)]`.
|
||||
///
|
||||
/// <br>
|
||||
///
|
||||
/// # Syntax
|
||||
///
|
||||
/// Rust has six types of attributes.
|
||||
///
|
||||
/// - Outer attributes like `#[repr(transparent)]`. These appear outside or
|
||||
/// in front of the item they describe.
|
||||
///
|
||||
/// - Inner attributes like `#![feature(proc_macro)]`. These appear inside
|
||||
/// of the item they describe, usually a module.
|
||||
///
|
||||
/// - Outer one-line doc comments like `/// Example`.
|
||||
///
|
||||
/// - Inner one-line doc comments like `//! Please file an issue`.
|
||||
///
|
||||
/// - Outer documentation blocks `/** Example */`.
|
||||
///
|
||||
/// - Inner documentation blocks `/*! Please file an issue */`.
|
||||
///
|
||||
/// The `style` field of type `AttrStyle` distinguishes whether an attribute
|
||||
/// is outer or inner.
|
||||
///
|
||||
/// Every attribute has a `path` that indicates the intended interpretation
|
||||
/// of the rest of the attribute's contents. The path and the optional
|
||||
/// additional contents are represented together in the `meta` field of the
|
||||
/// attribute in three possible varieties:
|
||||
///
|
||||
/// - Meta::Path — attributes whose information content conveys just a
|
||||
/// path, for example the `#[test]` attribute.
|
||||
///
|
||||
/// - Meta::List — attributes that carry arbitrary tokens after the
|
||||
/// path, surrounded by a delimiter (parenthesis, bracket, or brace). For
|
||||
/// example `#[derive(Copy)]` or `#[precondition(x < 5)]`.
|
||||
///
|
||||
/// - Meta::NameValue — attributes with an `=` sign after the path,
|
||||
/// followed by a Rust expression. For example `#[path =
|
||||
/// "sys/windows.rs"]`.
|
||||
///
|
||||
/// All doc comments are represented in the NameValue style with a path of
|
||||
/// "doc", as this is how they are processed by the compiler and by
|
||||
/// `macro_rules!` macros.
|
||||
///
|
||||
/// ```text
|
||||
/// #[derive(Copy, Clone)]
|
||||
/// ~~~~~~Path
|
||||
/// ^^^^^^^^^^^^^^^^^^^Meta::List
|
||||
///
|
||||
/// #[path = "sys/windows.rs"]
|
||||
/// ~~~~Path
|
||||
/// ^^^^^^^^^^^^^^^^^^^^^^^Meta::NameValue
|
||||
///
|
||||
/// #[test]
|
||||
/// ^^^^Meta::Path
|
||||
/// ```
|
||||
///
|
||||
/// <br>
|
||||
///
|
||||
/// # Parsing from tokens to Attribute
|
||||
///
|
||||
/// This type does not implement the [`Parse`] trait and thus cannot be
|
||||
/// parsed directly by [`ParseStream::parse`]. Instead use
|
||||
/// [`ParseStream::call`] with one of the two parser functions
|
||||
/// [`Attribute::parse_outer`] or [`Attribute::parse_inner`] depending on
|
||||
/// which you intend to parse.
|
||||
///
|
||||
/// [`Parse`]: crate::parse::Parse
|
||||
/// [`ParseStream::parse`]: crate::parse::ParseBuffer::parse
|
||||
/// [`ParseStream::call`]: crate::parse::ParseBuffer::call
|
||||
///
|
||||
/// ```
|
||||
/// use syn::{Attribute, Ident, Result, Token};
|
||||
/// use syn::parse::{Parse, ParseStream};
|
||||
///
|
||||
/// // Parses a unit struct with attributes.
|
||||
/// //
|
||||
/// // #[path = "s.tmpl"]
|
||||
/// // struct S;
|
||||
/// struct UnitStruct {
|
||||
/// attrs: Vec<Attribute>,
|
||||
/// struct_token: Token![struct],
|
||||
/// name: Ident,
|
||||
/// semi_token: Token![;],
|
||||
/// }
|
||||
///
|
||||
/// impl Parse for UnitStruct {
|
||||
/// fn parse(input: ParseStream) -> Result<Self> {
|
||||
/// Ok(UnitStruct {
|
||||
/// attrs: input.call(Attribute::parse_outer)?,
|
||||
/// struct_token: input.parse()?,
|
||||
/// name: input.parse()?,
|
||||
/// semi_token: input.parse()?,
|
||||
/// })
|
||||
/// }
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// <p><br></p>
|
||||
///
|
||||
/// # Parsing from Attribute to structured arguments
|
||||
///
|
||||
/// The grammar of attributes in Rust is very flexible, which makes the
|
||||
/// syntax tree not that useful on its own. In particular, arguments of the
|
||||
/// `Meta::List` variety of attribute are held in an arbitrary `tokens:
|
||||
/// TokenStream`. Macros are expected to check the `path` of the attribute,
|
||||
/// decide whether they recognize it, and then parse the remaining tokens
|
||||
/// according to whatever grammar they wish to require for that kind of
|
||||
/// attribute. Use [`parse_args()`] to parse those tokens into the expected
|
||||
/// data structure.
|
||||
///
|
||||
/// [`parse_args()`]: Attribute::parse_args
|
||||
///
|
||||
/// <p><br></p>
|
||||
///
|
||||
/// # Doc comments
|
||||
///
|
||||
/// The compiler transforms doc comments, such as `/// comment` and `/*!
|
||||
/// comment */`, into attributes before macros are expanded. Each comment is
|
||||
/// expanded into an attribute of the form `#[doc = r"comment"]`.
|
||||
///
|
||||
/// As an example, the following `mod` items are expanded identically:
|
||||
///
|
||||
/// ```
|
||||
/// # use syn::{ItemMod, parse_quote};
|
||||
/// let doc: ItemMod = parse_quote! {
|
||||
/// /// Single line doc comments
|
||||
/// /// We write so many!
|
||||
/// /**
|
||||
/// * Multi-line comments...
|
||||
/// * May span many lines
|
||||
/// */
|
||||
/// mod example {
|
||||
/// //! Of course, they can be inner too
|
||||
/// /*! And fit in a single line */
|
||||
/// }
|
||||
/// };
|
||||
/// let attr: ItemMod = parse_quote! {
|
||||
/// #[doc = r" Single line doc comments"]
|
||||
/// #[doc = r" We write so many!"]
|
||||
/// #[doc = r"
|
||||
/// * Multi-line comments...
|
||||
/// * May span many lines
|
||||
/// "]
|
||||
/// mod example {
|
||||
/// #![doc = r" Of course, they can be inner too"]
|
||||
/// #![doc = r" And fit in a single line "]
|
||||
/// }
|
||||
/// };
|
||||
/// assert_eq!(doc, attr);
|
||||
/// ```
|
||||
#[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
|
||||
pub struct Attribute {
|
||||
pub pound_token: Token![#],
|
||||
pub style: AttrStyle,
|
||||
pub bracket_token: token::Bracket,
|
||||
pub meta: Meta,
|
||||
}
|
||||
}
|
||||
|
||||
impl Attribute {
|
||||
/// Returns the path that identifies the interpretation of this attribute.
|
||||
///
|
||||
/// For example this would return the `test` in `#[test]`, the `derive` in
|
||||
/// `#[derive(Copy)]`, and the `path` in `#[path = "sys/windows.rs"]`.
|
||||
pub fn path(&self) -> &Path {
|
||||
self.meta.path()
|
||||
}
|
||||
|
||||
/// Parse the arguments to the attribute as a syntax tree.
|
||||
///
|
||||
/// This is similar to pulling out the `TokenStream` from `Meta::List` and
|
||||
/// doing `syn::parse2::<T>(meta_list.tokens)`, except that using
|
||||
/// `parse_args` the error message has a more useful span when `tokens` is
|
||||
/// empty.
|
||||
///
|
||||
/// The surrounding delimiters are *not* included in the input to the
|
||||
/// parser.
|
||||
///
|
||||
/// ```text
|
||||
/// #[my_attr(value < 5)]
|
||||
/// ^^^^^^^^^ what gets parsed
|
||||
/// ```
|
||||
///
|
||||
/// # Example
|
||||
///
|
||||
/// ```
|
||||
/// use syn::{parse_quote, Attribute, Expr};
|
||||
///
|
||||
/// let attr: Attribute = parse_quote! {
|
||||
/// #[precondition(value < 5)]
|
||||
/// };
|
||||
///
|
||||
/// if attr.path().is_ident("precondition") {
|
||||
/// let precondition: Expr = attr.parse_args()?;
|
||||
/// // ...
|
||||
/// }
|
||||
/// # anyhow::Ok(())
|
||||
/// ```
|
||||
#[cfg(feature = "parsing")]
|
||||
#[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
|
||||
pub fn parse_args<T: Parse>(&self) -> Result<T> {
|
||||
self.parse_args_with(T::parse)
|
||||
}
|
||||
|
||||
/// Parse the arguments to the attribute using the given parser.
|
||||
///
|
||||
/// # Example
|
||||
///
|
||||
/// ```
|
||||
/// use syn::{parse_quote, Attribute};
|
||||
///
|
||||
/// let attr: Attribute = parse_quote! {
|
||||
/// #[inception { #[brrrrrrraaaaawwwwrwrrrmrmrmmrmrmmmmm] }]
|
||||
/// };
|
||||
///
|
||||
/// let bwom = attr.parse_args_with(Attribute::parse_outer)?;
|
||||
///
|
||||
/// // Attribute does not have a Parse impl, so we couldn't directly do:
|
||||
/// // let bwom: Attribute = attr.parse_args()?;
|
||||
/// # anyhow::Ok(())
|
||||
/// ```
|
||||
#[cfg(feature = "parsing")]
|
||||
#[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
|
||||
pub fn parse_args_with<F: Parser>(&self, parser: F) -> Result<F::Output> {
|
||||
match &self.meta {
|
||||
Meta::Path(path) => Err(crate::error::new2(
|
||||
path.segments.first().unwrap().ident.span(),
|
||||
path.segments.last().unwrap().ident.span(),
|
||||
format!(
|
||||
"expected attribute arguments in parentheses: {}[{}(...)]",
|
||||
parsing::DisplayAttrStyle(&self.style),
|
||||
parsing::DisplayPath(path),
|
||||
),
|
||||
)),
|
||||
Meta::NameValue(meta) => Err(Error::new(
|
||||
meta.eq_token.span,
|
||||
format_args!(
|
||||
"expected parentheses: {}[{}(...)]",
|
||||
parsing::DisplayAttrStyle(&self.style),
|
||||
parsing::DisplayPath(&meta.path),
|
||||
),
|
||||
)),
|
||||
Meta::List(meta) => meta.parse_args_with(parser),
|
||||
}
|
||||
}
|
||||
|
||||
/// Parse the arguments to the attribute, expecting it to follow the
|
||||
/// conventional structure used by most of Rust's built-in attributes.
|
||||
///
|
||||
/// The [*Meta Item Attribute Syntax*][syntax] section in the Rust reference
|
||||
/// explains the convention in more detail. Not all attributes follow this
|
||||
/// convention, so [`parse_args()`][Self::parse_args] is available if you
|
||||
/// need to parse arbitrarily goofy attribute syntax.
|
||||
///
|
||||
/// [syntax]: https://doc.rust-lang.org/reference/attributes.html#meta-item-attribute-syntax
|
||||
///
|
||||
/// # Example
|
||||
///
|
||||
/// We'll parse a struct, and then parse some of Rust's `#[repr]` attribute
|
||||
/// syntax.
|
||||
///
|
||||
/// ```
|
||||
/// use syn::{parenthesized, parse_quote, token, ItemStruct, LitInt};
|
||||
///
|
||||
/// let input: ItemStruct = parse_quote! {
|
||||
/// #[repr(C, align(4))]
|
||||
/// pub struct MyStruct(u16, u32);
|
||||
/// };
|
||||
///
|
||||
/// let mut repr_c = false;
|
||||
/// let mut repr_transparent = false;
|
||||
/// let mut repr_align = None::<usize>;
|
||||
/// let mut repr_packed = None::<usize>;
|
||||
/// for attr in &input.attrs {
|
||||
/// if attr.path().is_ident("repr") {
|
||||
/// attr.parse_nested_meta(|meta| {
|
||||
/// // #[repr(C)]
|
||||
/// if meta.path.is_ident("C") {
|
||||
/// repr_c = true;
|
||||
/// return Ok(());
|
||||
/// }
|
||||
///
|
||||
/// // #[repr(transparent)]
|
||||
/// if meta.path.is_ident("transparent") {
|
||||
/// repr_transparent = true;
|
||||
/// return Ok(());
|
||||
/// }
|
||||
///
|
||||
/// // #[repr(align(N))]
|
||||
/// if meta.path.is_ident("align") {
|
||||
/// let content;
|
||||
/// parenthesized!(content in meta.input);
|
||||
/// let lit: LitInt = content.parse()?;
|
||||
/// let n: usize = lit.base10_parse()?;
|
||||
/// repr_align = Some(n);
|
||||
/// return Ok(());
|
||||
/// }
|
||||
///
|
||||
/// // #[repr(packed)] or #[repr(packed(N))], omitted N means 1
|
||||
/// if meta.path.is_ident("packed") {
|
||||
    ///                 if meta.input.peek(token::Paren) {
    ///                     let content;
    ///                     parenthesized!(content in meta.input);
    ///                     let lit: LitInt = content.parse()?;
    ///                     let n: usize = lit.base10_parse()?;
    ///                     repr_packed = Some(n);
    ///                 } else {
    ///                     repr_packed = Some(1);
    ///                 }
    ///                 return Ok(());
    ///             }
    ///
    ///             Err(meta.error("unrecognized repr"))
    ///         })?;
    ///     }
    /// }
    /// # anyhow::Ok(())
    /// ```
    ///
    /// # Alternatives
    ///
    /// In some cases, for attributes which have nested layers of structured
    /// content, the following less flexible approach might be more convenient:
    ///
    /// ```
    /// # use syn::{parse_quote, ItemStruct};
    /// #
    /// # let input: ItemStruct = parse_quote! {
    /// #     #[repr(C, align(4))]
    /// #     pub struct MyStruct(u16, u32);
    /// # };
    /// #
    /// use syn::punctuated::Punctuated;
    /// use syn::{parenthesized, token, Error, LitInt, Meta, Token};
    ///
    /// let mut repr_c = false;
    /// let mut repr_transparent = false;
    /// let mut repr_align = None::<usize>;
    /// let mut repr_packed = None::<usize>;
    /// for attr in &input.attrs {
    ///     if attr.path().is_ident("repr") {
    ///         let nested = attr.parse_args_with(Punctuated::<Meta, Token![,]>::parse_terminated)?;
    ///         for meta in nested {
    ///             match meta {
    ///                 // #[repr(C)]
    ///                 Meta::Path(path) if path.is_ident("C") => {
    ///                     repr_c = true;
    ///                 }
    ///
    ///                 // #[repr(align(N))]
    ///                 Meta::List(meta) if meta.path.is_ident("align") => {
    ///                     let lit: LitInt = meta.parse_args()?;
    ///                     let n: usize = lit.base10_parse()?;
    ///                     repr_align = Some(n);
    ///                 }
    ///
    ///                 /* ... */
    ///
    ///                 _ => {
    ///                     return Err(Error::new_spanned(meta, "unrecognized repr"));
    ///                 }
    ///             }
    ///         }
    ///     }
    /// }
    /// # Ok(())
    /// ```
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_nested_meta(
        &self,
        logic: impl FnMut(ParseNestedMeta) -> Result<()>,
    ) -> Result<()> {
        self.parse_args_with(meta::parser(logic))
    }

    /// Parses zero or more outer attributes from the stream.
    ///
    /// # Example
    ///
    /// See
    /// [*Parsing from tokens to Attribute*](#parsing-from-tokens-to-attribute).
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_outer(input: ParseStream) -> Result<Vec<Self>> {
        let mut attrs = Vec::new();
        while input.peek(Token![#]) {
            attrs.push(input.call(parsing::single_parse_outer)?);
        }
        Ok(attrs)
    }

    /// Parses zero or more inner attributes from the stream.
    ///
    /// # Example
    ///
    /// See
    /// [*Parsing from tokens to Attribute*](#parsing-from-tokens-to-attribute).
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_inner(input: ParseStream) -> Result<Vec<Self>> {
        let mut attrs = Vec::new();
        parsing::parse_inner(input, &mut attrs)?;
        Ok(attrs)
    }
}

ast_enum! {
    /// Distinguishes between attributes that decorate an item and attributes
    /// that are contained within an item.
    ///
    /// # Outer attributes
    ///
    /// - `#[repr(transparent)]`
    /// - `/// # Example`
    /// - `/** Please file an issue */`
    ///
    /// # Inner attributes
    ///
    /// - `#![feature(proc_macro)]`
    /// - `//! # Example`
    /// - `/*! Please file an issue */`
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub enum AttrStyle {
        Outer,
        Inner(Token![!]),
    }
}

ast_enum! {
    /// Content of a compile-time structured attribute.
    ///
    /// ## Path
    ///
    /// A meta path is like the `test` in `#[test]`.
    ///
    /// ## List
    ///
    /// A meta list is like the `derive(Copy)` in `#[derive(Copy)]`.
    ///
    /// ## NameValue
    ///
    /// A name-value meta is like the `path = "..."` in `#[path =
    /// "sys/windows.rs"]`.
    ///
    /// # Syntax tree enum
    ///
    /// This type is a [syntax tree enum].
    ///
    /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub enum Meta {
        Path(Path),

        /// A structured list within an attribute, like `derive(Copy, Clone)`.
        List(MetaList),

        /// A name-value pair within an attribute, like `feature = "nightly"`.
        NameValue(MetaNameValue),
    }
}

ast_struct! {
    /// A structured list within an attribute, like `derive(Copy, Clone)`.
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub struct MetaList {
        pub path: Path,
        pub delimiter: MacroDelimiter,
        pub tokens: TokenStream,
    }
}

ast_struct! {
    /// A name-value pair within an attribute, like `feature = "nightly"`.
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub struct MetaNameValue {
        pub path: Path,
        pub eq_token: Token![=],
        pub value: Expr,
    }
}

impl Meta {
    /// Returns the path that begins this structured meta item.
    ///
    /// For example this would return the `test` in `#[test]`, the `derive` in
    /// `#[derive(Copy)]`, and the `path` in `#[path = "sys/windows.rs"]`.
    pub fn path(&self) -> &Path {
        match self {
            Meta::Path(path) => path,
            Meta::List(meta) => &meta.path,
            Meta::NameValue(meta) => &meta.path,
        }
    }

    /// Error if this is a `Meta::List` or `Meta::NameValue`.
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn require_path_only(&self) -> Result<&Path> {
        let error_span = match self {
            Meta::Path(path) => return Ok(path),
            Meta::List(meta) => meta.delimiter.span().open(),
            Meta::NameValue(meta) => meta.eq_token.span,
        };
        Err(Error::new(error_span, "unexpected token in attribute"))
    }

    /// Error if this is a `Meta::Path` or `Meta::NameValue`.
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn require_list(&self) -> Result<&MetaList> {
        match self {
            Meta::List(meta) => Ok(meta),
            Meta::Path(path) => Err(crate::error::new2(
                path.segments.first().unwrap().ident.span(),
                path.segments.last().unwrap().ident.span(),
                format!(
                    "expected attribute arguments in parentheses: `{}(...)`",
                    parsing::DisplayPath(path),
                ),
            )),
            Meta::NameValue(meta) => Err(Error::new(meta.eq_token.span, "expected `(`")),
        }
    }

    /// Error if this is a `Meta::Path` or `Meta::List`.
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn require_name_value(&self) -> Result<&MetaNameValue> {
        match self {
            Meta::NameValue(meta) => Ok(meta),
            Meta::Path(path) => Err(crate::error::new2(
                path.segments.first().unwrap().ident.span(),
                path.segments.last().unwrap().ident.span(),
                format!(
                    "expected a value for this attribute: `{} = ...`",
                    parsing::DisplayPath(path),
                ),
            )),
            Meta::List(meta) => Err(Error::new(meta.delimiter.span().open(), "expected `=`")),
        }
    }
}

impl MetaList {
    /// See [`Attribute::parse_args`].
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_args<T: Parse>(&self) -> Result<T> {
        self.parse_args_with(T::parse)
    }

    /// See [`Attribute::parse_args_with`].
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_args_with<F: Parser>(&self, parser: F) -> Result<F::Output> {
        let scope = self.delimiter.span().close();
        crate::parse::parse_scoped(parser, scope, self.tokens.clone())
    }

    /// See [`Attribute::parse_nested_meta`].
    #[cfg(feature = "parsing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    pub fn parse_nested_meta(
        &self,
        logic: impl FnMut(ParseNestedMeta) -> Result<()>,
    ) -> Result<()> {
        self.parse_args_with(meta::parser(logic))
    }
}

#[cfg(feature = "printing")]
pub(crate) trait FilterAttrs<'a> {
    type Ret: Iterator<Item = &'a Attribute>;

    fn outer(self) -> Self::Ret;
    #[cfg(feature = "full")]
    fn inner(self) -> Self::Ret;
}

#[cfg(feature = "printing")]
impl<'a> FilterAttrs<'a> for &'a [Attribute] {
    type Ret = iter::Filter<slice::Iter<'a, Attribute>, fn(&&Attribute) -> bool>;

    fn outer(self) -> Self::Ret {
        fn is_outer(attr: &&Attribute) -> bool {
            match attr.style {
                AttrStyle::Outer => true,
                AttrStyle::Inner(_) => false,
            }
        }
        self.iter().filter(is_outer)
    }

    #[cfg(feature = "full")]
    fn inner(self) -> Self::Ret {
        fn is_inner(attr: &&Attribute) -> bool {
            match attr.style {
                AttrStyle::Inner(_) => true,
                AttrStyle::Outer => false,
            }
        }
        self.iter().filter(is_inner)
    }
}

impl From<Path> for Meta {
    fn from(meta: Path) -> Meta {
        Meta::Path(meta)
    }
}

impl From<MetaList> for Meta {
    fn from(meta: MetaList) -> Meta {
        Meta::List(meta)
    }
}

impl From<MetaNameValue> for Meta {
    fn from(meta: MetaNameValue) -> Meta {
        Meta::NameValue(meta)
    }
}

#[cfg(feature = "parsing")]
pub(crate) mod parsing {
    use crate::attr::{AttrStyle, Attribute, Meta, MetaList, MetaNameValue};
    use crate::error::Result;
    use crate::expr::{Expr, ExprLit};
    use crate::lit::Lit;
    use crate::parse::discouraged::Speculative as _;
    use crate::parse::{Parse, ParseStream};
    use crate::path::Path;
    use crate::{mac, token};
    use proc_macro2::Ident;
    use std::fmt::{self, Display};

    pub(crate) fn parse_inner(input: ParseStream, attrs: &mut Vec<Attribute>) -> Result<()> {
        while input.peek(Token![#]) && input.peek2(Token![!]) {
            attrs.push(input.call(single_parse_inner)?);
        }
        Ok(())
    }

    pub(crate) fn single_parse_inner(input: ParseStream) -> Result<Attribute> {
        let content;
        Ok(Attribute {
            pound_token: input.parse()?,
            style: AttrStyle::Inner(input.parse()?),
            bracket_token: bracketed!(content in input),
            meta: content.parse()?,
        })
    }

    pub(crate) fn single_parse_outer(input: ParseStream) -> Result<Attribute> {
        let content;
        Ok(Attribute {
            pound_token: input.parse()?,
            style: AttrStyle::Outer,
            bracket_token: bracketed!(content in input),
            meta: content.parse()?,
        })
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for Meta {
        fn parse(input: ParseStream) -> Result<Self> {
            let path = parse_outermost_meta_path(input)?;
            parse_meta_after_path(path, input)
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for MetaList {
        fn parse(input: ParseStream) -> Result<Self> {
            let path = parse_outermost_meta_path(input)?;
            parse_meta_list_after_path(path, input)
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for MetaNameValue {
        fn parse(input: ParseStream) -> Result<Self> {
            let path = parse_outermost_meta_path(input)?;
            parse_meta_name_value_after_path(path, input)
        }
    }

    // Unlike meta::parse_meta_path which accepts arbitrary keywords in the path,
    // only the `unsafe` keyword is accepted as an attribute's outermost path.
    fn parse_outermost_meta_path(input: ParseStream) -> Result<Path> {
        if input.peek(Token![unsafe]) {
            let unsafe_token: Token![unsafe] = input.parse()?;
            Ok(Path::from(Ident::new("unsafe", unsafe_token.span)))
        } else {
            Path::parse_mod_style(input)
        }
    }

    pub(crate) fn parse_meta_after_path(path: Path, input: ParseStream) -> Result<Meta> {
        if input.peek(token::Paren) || input.peek(token::Bracket) || input.peek(token::Brace) {
            parse_meta_list_after_path(path, input).map(Meta::List)
        } else if input.peek(Token![=]) {
            parse_meta_name_value_after_path(path, input).map(Meta::NameValue)
        } else {
            Ok(Meta::Path(path))
        }
    }

    fn parse_meta_list_after_path(path: Path, input: ParseStream) -> Result<MetaList> {
        let (delimiter, tokens) = mac::parse_delimiter(input)?;
        Ok(MetaList {
            path,
            delimiter,
            tokens,
        })
    }

    fn parse_meta_name_value_after_path(path: Path, input: ParseStream) -> Result<MetaNameValue> {
        let eq_token: Token![=] = input.parse()?;
        let ahead = input.fork();
        let lit: Option<Lit> = ahead.parse()?;
        let value = if let (Some(lit), true) = (lit, ahead.is_empty()) {
            input.advance_to(&ahead);
            Expr::Lit(ExprLit {
                attrs: Vec::new(),
                lit,
            })
        } else if input.peek(Token![#]) && input.peek2(token::Bracket) {
            return Err(input.error("unexpected attribute inside of attribute"));
        } else {
            input.parse()?
        };
        Ok(MetaNameValue {
            path,
            eq_token,
            value,
        })
    }

    pub(super) struct DisplayAttrStyle<'a>(pub &'a AttrStyle);

    impl<'a> Display for DisplayAttrStyle<'a> {
        fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
            formatter.write_str(match self.0 {
                AttrStyle::Outer => "#",
                AttrStyle::Inner(_) => "#!",
            })
        }
    }

    pub(super) struct DisplayPath<'a>(pub &'a Path);

    impl<'a> Display for DisplayPath<'a> {
        fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
            for (i, segment) in self.0.segments.iter().enumerate() {
                if i > 0 || self.0.leading_colon.is_some() {
                    formatter.write_str("::")?;
                }
                write!(formatter, "{}", segment.ident)?;
            }
            Ok(())
        }
    }
}

#[cfg(feature = "printing")]
mod printing {
    use crate::attr::{AttrStyle, Attribute, Meta, MetaList, MetaNameValue};
    use crate::path;
    use crate::path::printing::PathStyle;
    use proc_macro2::TokenStream;
    use quote::ToTokens;

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for Attribute {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            self.pound_token.to_tokens(tokens);
            if let AttrStyle::Inner(b) = &self.style {
                b.to_tokens(tokens);
            }
            self.bracket_token.surround(tokens, |tokens| {
                self.meta.to_tokens(tokens);
            });
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for Meta {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            match self {
                Meta::Path(path) => path::printing::print_path(tokens, path, PathStyle::Mod),
                Meta::List(meta_list) => meta_list.to_tokens(tokens),
                Meta::NameValue(meta_name_value) => meta_name_value.to_tokens(tokens),
            }
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for MetaList {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            path::printing::print_path(tokens, &self.path, PathStyle::Mod);
            self.delimiter.surround(tokens, self.tokens.clone());
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for MetaNameValue {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            path::printing::print_path(tokens, &self.path, PathStyle::Mod);
            self.eq_token.to_tokens(tokens);
            self.value.to_tokens(tokens);
        }
    }
}
68
rust/syn/bigint.rs
Normal file
@@ -0,0 +1,68 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use std::ops::{AddAssign, MulAssign};

// For implementing base10_digits() accessor on LitInt.
pub(crate) struct BigInt {
    digits: Vec<u8>,
}

impl BigInt {
    pub(crate) fn new() -> Self {
        BigInt { digits: Vec::new() }
    }

    pub(crate) fn to_string(&self) -> String {
        let mut repr = String::with_capacity(self.digits.len());

        let mut has_nonzero = false;
        for digit in self.digits.iter().rev() {
            has_nonzero |= *digit != 0;
            if has_nonzero {
                repr.push((*digit + b'0') as char);
            }
        }

        if repr.is_empty() {
            repr.push('0');
        }

        repr
    }

    fn reserve_two_digits(&mut self) {
        let len = self.digits.len();
        let desired =
            len + !self.digits.ends_with(&[0, 0]) as usize + !self.digits.ends_with(&[0]) as usize;
        self.digits.resize(desired, 0);
    }
}

impl AddAssign<u8> for BigInt {
    // Assumes increment <16.
    fn add_assign(&mut self, mut increment: u8) {
        self.reserve_two_digits();

        let mut i = 0;
        while increment > 0 {
            let sum = self.digits[i] + increment;
            self.digits[i] = sum % 10;
            increment = sum / 10;
            i += 1;
        }
    }
}

impl MulAssign<u8> for BigInt {
    // Assumes base <=16.
    fn mul_assign(&mut self, base: u8) {
        self.reserve_two_digits();

        let mut carry = 0;
        for digit in &mut self.digits {
            let prod = *digit * base + carry;
            *digit = prod % 10;
            carry = prod / 10;
        }
    }
}
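`BigInt` above converts an arbitrarily long integer literal from bases 2, 8, 10, or 16 into a decimal string by keeping a little-endian vector of decimal digits and repeatedly applying `value = value * base + digit`. A minimal std-only sketch of the same multiply-accumulate technique (the free-function names `mul_add` and `to_decimal_string` are illustrative, not syn's API):

```rust
// Little-endian vector of decimal digits: digits[0] is the ones place.
// Computes value = value * base + increment, digit by digit with carries,
// mirroring BigInt's MulAssign/AddAssign pair in a single step.
fn mul_add(digits: &mut Vec<u8>, base: u8, increment: u8) {
    let mut carry = increment as u32;
    for digit in digits.iter_mut() {
        let prod = *digit as u32 * base as u32 + carry;
        *digit = (prod % 10) as u8;
        carry = prod / 10;
    }
    while carry > 0 {
        digits.push((carry % 10) as u8);
        carry /= 10;
    }
}

// Renders the digit vector most-significant-first, trimming leading zeros,
// like BigInt::to_string.
fn to_decimal_string(digits: &[u8]) -> String {
    let s: String = digits.iter().rev().map(|d| (d + b'0') as char).collect();
    let trimmed = s.trim_start_matches('0');
    if trimmed.is_empty() { "0".to_string() } else { trimmed.to_string() }
}

fn main() {
    // Convert hex "ff" to decimal: for each hex digit d, value = value * 16 + d.
    let mut digits = Vec::new();
    for ch in "ff".chars() {
        let d = ch.to_digit(16).unwrap() as u8;
        mul_add(&mut digits, 16, d);
    }
    assert_eq!(to_decimal_string(&digits), "255");
    println!("{}", to_decimal_string(&digits));
}
```

Unlike the sketch, the real `BigInt` avoids reallocating on every step by pre-growing the vector with `reserve_two_digits` (a base at most 16 and an increment below 16 can add at most two decimal digits per step).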
436
rust/syn/buffer.rs
Normal file
@@ -0,0 +1,436 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

//! A stably addressed token buffer supporting efficient traversal based on a
//! cheaply copyable cursor.

// This module is heavily commented as it contains most of the unsafe code in
// Syn, and caution should be used when editing it. The public-facing interface
// is 100% safe but the implementation is fragile internally.

use crate::Lifetime;
use proc_macro2::extra::DelimSpan;
use proc_macro2::{Delimiter, Group, Ident, Literal, Punct, Spacing, Span, TokenStream, TokenTree};
use std::cmp::Ordering;
use std::marker::PhantomData;
use std::ptr;

/// Internal type which is used instead of `TokenTree` to represent a token tree
/// within a `TokenBuffer`.
enum Entry {
    // Mimicking types from proc-macro.
    // Group entries contain the offset to the matching End entry.
    Group(Group, usize),
    Ident(Ident),
    Punct(Punct),
    Literal(Literal),
    // End entries contain the offset (negative) to the start of the buffer, and
    // offset (negative) to the matching Group entry.
    End(isize, isize),
}

/// A buffer that can be efficiently traversed multiple times, unlike
/// `TokenStream` which requires a deep copy in order to traverse more than
/// once.
pub struct TokenBuffer {
    // NOTE: Do not implement clone on this - while the current design could be
    // cloned, other designs which could be desirable may not be cloneable.
    entries: Box<[Entry]>,
}

impl TokenBuffer {
    fn recursive_new(entries: &mut Vec<Entry>, stream: TokenStream) {
        for tt in stream {
            match tt {
                TokenTree::Ident(ident) => entries.push(Entry::Ident(ident)),
                TokenTree::Punct(punct) => entries.push(Entry::Punct(punct)),
                TokenTree::Literal(literal) => entries.push(Entry::Literal(literal)),
                TokenTree::Group(group) => {
                    let group_start_index = entries.len();
                    entries.push(Entry::End(0, 0)); // we replace this below
                    Self::recursive_new(entries, group.stream());
                    let group_end_index = entries.len();
                    let group_offset = group_end_index - group_start_index;
                    entries.push(Entry::End(
                        -(group_end_index as isize),
                        -(group_offset as isize),
                    ));
                    entries[group_start_index] = Entry::Group(group, group_offset);
                }
            }
        }
    }

    /// Creates a `TokenBuffer` containing all the tokens from the input
    /// `proc_macro::TokenStream`.
    #[cfg(feature = "proc-macro")]
    #[cfg_attr(docsrs, doc(cfg(feature = "proc-macro")))]
    pub fn new(stream: proc_macro::TokenStream) -> Self {
        Self::new2(stream.into())
    }

    /// Creates a `TokenBuffer` containing all the tokens from the input
    /// `proc_macro2::TokenStream`.
    pub fn new2(stream: TokenStream) -> Self {
        let mut entries = Vec::new();
        Self::recursive_new(&mut entries, stream);
        entries.push(Entry::End(-(entries.len() as isize), 0));
        Self {
            entries: entries.into_boxed_slice(),
        }
    }

    /// Creates a cursor referencing the first token in the buffer and able to
    /// traverse until the end of the buffer.
    pub fn begin(&self) -> Cursor {
        let ptr = self.entries.as_ptr();
        unsafe { Cursor::create(ptr, ptr.add(self.entries.len() - 1)) }
    }
}

/// A cheaply copyable cursor into a `TokenBuffer`.
///
/// This cursor holds a shared reference into the immutable data which is used
/// internally to represent a `TokenStream`, and can be efficiently manipulated
/// and copied around.
///
/// An empty `Cursor` can be created directly, or one may create a `TokenBuffer`
/// object and get a cursor to its first token with `begin()`.
pub struct Cursor<'a> {
    // The current entry which the `Cursor` is pointing at.
    ptr: *const Entry,
    // This is the only `Entry::End` object which this cursor is allowed to
    // point at. All other `End` objects are skipped over in `Cursor::create`.
    scope: *const Entry,
    // Cursor is covariant in 'a. This field ensures that our pointers are still
    // valid.
    marker: PhantomData<&'a Entry>,
}

impl<'a> Cursor<'a> {
    /// Creates a cursor referencing a static empty TokenStream.
    pub fn empty() -> Self {
        // It's safe in this situation for us to put an `Entry` object in global
        // storage, despite it not actually being safe to send across threads
        // (`Ident` is a reference into a thread-local table). This is because
        // this entry never includes a `Ident` object.
        //
        // This wrapper struct allows us to break the rules and put a `Sync`
        // object in global storage.
        struct UnsafeSyncEntry(Entry);
        unsafe impl Sync for UnsafeSyncEntry {}
        static EMPTY_ENTRY: UnsafeSyncEntry = UnsafeSyncEntry(Entry::End(0, 0));

        Cursor {
            ptr: &EMPTY_ENTRY.0,
            scope: &EMPTY_ENTRY.0,
            marker: PhantomData,
        }
    }

    /// This create method intelligently exits non-explicitly-entered
    /// `None`-delimited scopes when the cursor reaches the end of them,
    /// allowing for them to be treated transparently.
    unsafe fn create(mut ptr: *const Entry, scope: *const Entry) -> Self {
        // NOTE: If we're looking at a `End`, we want to advance the cursor
        // past it, unless `ptr == scope`, which means that we're at the edge of
        // our cursor's scope. We should only have `ptr != scope` at the exit
        // from None-delimited groups entered with `ignore_none`.
        while let Entry::End(..) = unsafe { &*ptr } {
            if ptr::eq(ptr, scope) {
                break;
            }
            ptr = unsafe { ptr.add(1) };
        }

        Cursor {
            ptr,
            scope,
            marker: PhantomData,
        }
    }

    /// Get the current entry.
    fn entry(self) -> &'a Entry {
        unsafe { &*self.ptr }
    }

    /// Bump the cursor to point at the next token after the current one. This
    /// is undefined behavior if the cursor is currently looking at an
    /// `Entry::End`.
    ///
    /// If the cursor is looking at an `Entry::Group`, the bumped cursor will
    /// point at the first token in the group (with the same scope end).
    unsafe fn bump_ignore_group(self) -> Cursor<'a> {
        unsafe { Cursor::create(self.ptr.offset(1), self.scope) }
    }

    /// While the cursor is looking at a `None`-delimited group, move it to look
    /// at the first token inside instead. If the group is empty, this will move
    /// the cursor past the `None`-delimited group.
    ///
    /// WARNING: This mutates its argument.
    fn ignore_none(&mut self) {
        while let Entry::Group(group, _) = self.entry() {
            if group.delimiter() == Delimiter::None {
                unsafe { *self = self.bump_ignore_group() };
            } else {
                break;
            }
        }
    }

    /// Checks whether the cursor is currently pointing at the end of its valid
    /// scope.
    pub fn eof(self) -> bool {
        // We're at eof if we're at the end of our scope.
        ptr::eq(self.ptr, self.scope)
    }

    /// If the cursor is pointing at a `Ident`, returns it along with a cursor
    /// pointing at the next `TokenTree`.
    pub fn ident(mut self) -> Option<(Ident, Cursor<'a>)> {
        self.ignore_none();
        match self.entry() {
            Entry::Ident(ident) => Some((ident.clone(), unsafe { self.bump_ignore_group() })),
            _ => None,
        }
    }

    /// If the cursor is pointing at a `Punct`, returns it along with a cursor
    /// pointing at the next `TokenTree`.
    pub fn punct(mut self) -> Option<(Punct, Cursor<'a>)> {
        self.ignore_none();
        match self.entry() {
            Entry::Punct(punct) if punct.as_char() != '\'' => {
                Some((punct.clone(), unsafe { self.bump_ignore_group() }))
            }
            _ => None,
        }
    }

    /// If the cursor is pointing at a `Literal`, return it along with a cursor
    /// pointing at the next `TokenTree`.
    pub fn literal(mut self) -> Option<(Literal, Cursor<'a>)> {
        self.ignore_none();
        match self.entry() {
            Entry::Literal(literal) => Some((literal.clone(), unsafe { self.bump_ignore_group() })),
            _ => None,
        }
    }

    /// If the cursor is pointing at a `Lifetime`, returns it along with a
    /// cursor pointing at the next `TokenTree`.
    pub fn lifetime(mut self) -> Option<(Lifetime, Cursor<'a>)> {
        self.ignore_none();
        match self.entry() {
            Entry::Punct(punct) if punct.as_char() == '\'' && punct.spacing() == Spacing::Joint => {
                let next = unsafe { self.bump_ignore_group() };
                let (ident, rest) = next.ident()?;
                let lifetime = Lifetime {
                    apostrophe: punct.span(),
                    ident,
                };
                Some((lifetime, rest))
            }
            _ => None,
        }
    }

    /// If the cursor is pointing at a `Group` with the given delimiter, returns
    /// a cursor into that group and one pointing to the next `TokenTree`.
    pub fn group(mut self, delim: Delimiter) -> Option<(Cursor<'a>, DelimSpan, Cursor<'a>)> {
        // If we're not trying to enter a none-delimited group, we want to
        // ignore them. We have to make sure to _not_ ignore them when we want
        // to enter them, of course. For obvious reasons.
        if delim != Delimiter::None {
            self.ignore_none();
        }

        if let Entry::Group(group, end_offset) = self.entry() {
            if group.delimiter() == delim {
                let span = group.delim_span();
                let end_of_group = unsafe { self.ptr.add(*end_offset) };
                let inside_of_group = unsafe { Cursor::create(self.ptr.add(1), end_of_group) };
                let after_group = unsafe { Cursor::create(end_of_group, self.scope) };
                return Some((inside_of_group, span, after_group));
            }
        }

        None
    }

    /// If the cursor is pointing at a `Group`, returns a cursor into the group
    /// and one pointing to the next `TokenTree`.
    pub fn any_group(self) -> Option<(Cursor<'a>, Delimiter, DelimSpan, Cursor<'a>)> {
        if let Entry::Group(group, end_offset) = self.entry() {
            let delimiter = group.delimiter();
            let span = group.delim_span();
            let end_of_group = unsafe { self.ptr.add(*end_offset) };
            let inside_of_group = unsafe { Cursor::create(self.ptr.add(1), end_of_group) };
            let after_group = unsafe { Cursor::create(end_of_group, self.scope) };
            return Some((inside_of_group, delimiter, span, after_group));
        }

        None
    }

    pub(crate) fn any_group_token(self) -> Option<(Group, Cursor<'a>)> {
        if let Entry::Group(group, end_offset) = self.entry() {
            let end_of_group = unsafe { self.ptr.add(*end_offset) };
            let after_group = unsafe { Cursor::create(end_of_group, self.scope) };
            return Some((group.clone(), after_group));
        }

        None
    }

    /// Copies all remaining tokens visible from this cursor into a
    /// `TokenStream`.
    pub fn token_stream(self) -> TokenStream {
        let mut tts = Vec::new();
        let mut cursor = self;
        while let Some((tt, rest)) = cursor.token_tree() {
            tts.push(tt);
            cursor = rest;
        }
        tts.into_iter().collect()
    }

    /// If the cursor is pointing at a `TokenTree`, returns it along with a
    /// cursor pointing at the next `TokenTree`.
    ///
    /// Returns `None` if the cursor has reached the end of its stream.
    ///
    /// This method does not treat `None`-delimited groups as transparent, and
    /// will return a `Group(None, ..)` if the cursor is looking at one.
    pub fn token_tree(self) -> Option<(TokenTree, Cursor<'a>)> {
        let (tree, len) = match self.entry() {
            Entry::Group(group, end_offset) => (group.clone().into(), *end_offset),
            Entry::Literal(literal) => (literal.clone().into(), 1),
            Entry::Ident(ident) => (ident.clone().into(), 1),
            Entry::Punct(punct) => (punct.clone().into(), 1),
            Entry::End(..) => return None,
        };

        let rest = unsafe { Cursor::create(self.ptr.add(len), self.scope) };
        Some((tree, rest))
    }

    /// Returns the `Span` of the current token, or `Span::call_site()` if this
    /// cursor points to eof.
    pub fn span(mut self) -> Span {
        match self.entry() {
            Entry::Group(group, _) => group.span(),
            Entry::Literal(literal) => literal.span(),
            Entry::Ident(ident) => ident.span(),
            Entry::Punct(punct) => punct.span(),
            Entry::End(_, offset) => {
                self.ptr = unsafe { self.ptr.offset(*offset) };
                if let Entry::Group(group, _) = self.entry() {
                    group.span_close()
                } else {
                    Span::call_site()
                }
            }
        }
    }

    /// Returns the `Span` of the token immediately prior to the position of
    /// this cursor, or of the current token if there is no previous one.
    #[cfg(any(feature = "full", feature = "derive"))]
    pub(crate) fn prev_span(mut self) -> Span {
        if start_of_buffer(self) < self.ptr {
            self.ptr = unsafe { self.ptr.offset(-1) };
        }
        self.span()
    }

    /// Skip over the next token that is not a None-delimited group, without
    /// cloning it. Returns `None` if this cursor points to eof.
    ///
    /// This method treats `'lifetimes` as a single token.
    pub(crate) fn skip(mut self) -> Option<Cursor<'a>> {
        self.ignore_none();

        let len = match self.entry() {
            Entry::End(..) => return None,

            // Treat lifetimes as a single tt for the purposes of 'skip'.
            Entry::Punct(punct) if punct.as_char() == '\'' && punct.spacing() == Spacing::Joint => {
                match unsafe { &*self.ptr.add(1) } {
                    Entry::Ident(_) => 2,
                    _ => 1,
                }
            }

            Entry::Group(_, end_offset) => *end_offset,
            _ => 1,
        };

        Some(unsafe { Cursor::create(self.ptr.add(len), self.scope) })
    }

    pub(crate) fn scope_delimiter(self) -> Delimiter {
        match unsafe { &*self.scope } {
            Entry::End(_, offset) => match unsafe { &*self.scope.offset(*offset) } {
                Entry::Group(group, _) => group.delimiter(),
                _ => Delimiter::None,
            },
            _ => unreachable!(),
        }
    }
}

impl<'a> Copy for Cursor<'a> {}

impl<'a> Clone for Cursor<'a> {
    fn clone(&self) -> Self {
        *self
    }
}

impl<'a> Eq for Cursor<'a> {}

impl<'a> PartialEq for Cursor<'a> {
    fn eq(&self, other: &Self) -> bool {
        ptr::eq(self.ptr, other.ptr)
    }
}

impl<'a> PartialOrd for Cursor<'a> {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        if same_buffer(*self, *other) {
            Some(cmp_assuming_same_buffer(*self, *other))
        } else {
            None
        }
    }
}

pub(crate) fn same_scope(a: Cursor, b: Cursor) -> bool {
    ptr::eq(a.scope, b.scope)
}

pub(crate) fn same_buffer(a: Cursor, b: Cursor) -> bool {
    ptr::eq(start_of_buffer(a), start_of_buffer(b))
}

fn start_of_buffer(cursor: Cursor) -> *const Entry {
    unsafe {
        match &*cursor.scope {
            Entry::End(offset, _) => cursor.scope.offset(*offset),
            _ => unreachable!(),
        }
    }
}

pub(crate) fn cmp_assuming_same_buffer(a: Cursor, b: Cursor) -> Ordering {
    a.ptr.cmp(&b.ptr)
}

pub(crate) fn open_span_of_group(cursor: Cursor) -> Span {
    match cursor.entry() {
        Entry::Group(group, _) => group.span_open(),
        _ => cursor.span(),
    }
}
|
313 rust/syn/classify.rs Normal file
@@ -0,0 +1,313 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

#[cfg(feature = "full")]
use crate::expr::Expr;
#[cfg(any(feature = "printing", feature = "full"))]
use crate::generics::TypeParamBound;
#[cfg(any(feature = "printing", feature = "full"))]
use crate::path::{Path, PathArguments};
#[cfg(any(feature = "printing", feature = "full"))]
use crate::punctuated::Punctuated;
#[cfg(any(feature = "printing", feature = "full"))]
use crate::ty::{ReturnType, Type};
#[cfg(feature = "full")]
use proc_macro2::{Delimiter, TokenStream, TokenTree};
#[cfg(any(feature = "printing", feature = "full"))]
use std::ops::ControlFlow;

#[cfg(feature = "full")]
pub(crate) fn requires_semi_to_be_stmt(expr: &Expr) -> bool {
    match expr {
        Expr::Macro(expr) => !expr.mac.delimiter.is_brace(),
        _ => requires_comma_to_be_match_arm(expr),
    }
}

#[cfg(feature = "full")]
pub(crate) fn requires_comma_to_be_match_arm(expr: &Expr) -> bool {
    match expr {
        Expr::If(_)
        | Expr::Match(_)
        | Expr::Block(_) | Expr::Unsafe(_) // both under ExprKind::Block in rustc
        | Expr::While(_)
        | Expr::Loop(_)
        | Expr::ForLoop(_)
        | Expr::TryBlock(_)
        | Expr::Const(_) => false,

        Expr::Array(_)
        | Expr::Assign(_)
        | Expr::Async(_)
        | Expr::Await(_)
        | Expr::Binary(_)
        | Expr::Break(_)
        | Expr::Call(_)
        | Expr::Cast(_)
        | Expr::Closure(_)
        | Expr::Continue(_)
        | Expr::Field(_)
        | Expr::Group(_)
        | Expr::Index(_)
        | Expr::Infer(_)
        | Expr::Let(_)
        | Expr::Lit(_)
        | Expr::Macro(_)
        | Expr::MethodCall(_)
        | Expr::Paren(_)
        | Expr::Path(_)
        | Expr::Range(_)
        | Expr::RawAddr(_)
        | Expr::Reference(_)
        | Expr::Repeat(_)
        | Expr::Return(_)
        | Expr::Struct(_)
        | Expr::Try(_)
        | Expr::Tuple(_)
        | Expr::Unary(_)
        | Expr::Yield(_)
        | Expr::Verbatim(_) => true,
    }
}

#[cfg(feature = "printing")]
pub(crate) fn trailing_unparameterized_path(mut ty: &Type) -> bool {
    loop {
        match ty {
            Type::BareFn(t) => match &t.output {
                ReturnType::Default => return false,
                ReturnType::Type(_, ret) => ty = ret,
            },
            Type::ImplTrait(t) => match last_type_in_bounds(&t.bounds) {
                ControlFlow::Break(trailing_path) => return trailing_path,
                ControlFlow::Continue(t) => ty = t,
            },
            Type::Path(t) => match last_type_in_path(&t.path) {
                ControlFlow::Break(trailing_path) => return trailing_path,
                ControlFlow::Continue(t) => ty = t,
            },
            Type::Ptr(t) => ty = &t.elem,
            Type::Reference(t) => ty = &t.elem,
            Type::TraitObject(t) => match last_type_in_bounds(&t.bounds) {
                ControlFlow::Break(trailing_path) => return trailing_path,
                ControlFlow::Continue(t) => ty = t,
            },

            Type::Array(_)
            | Type::Group(_)
            | Type::Infer(_)
            | Type::Macro(_)
            | Type::Never(_)
            | Type::Paren(_)
            | Type::Slice(_)
            | Type::Tuple(_)
            | Type::Verbatim(_) => return false,
        }
    }

    fn last_type_in_path(path: &Path) -> ControlFlow<bool, &Type> {
        match &path.segments.last().unwrap().arguments {
            PathArguments::None => ControlFlow::Break(true),
            PathArguments::AngleBracketed(_) => ControlFlow::Break(false),
            PathArguments::Parenthesized(arg) => match &arg.output {
                ReturnType::Default => ControlFlow::Break(false),
                ReturnType::Type(_, ret) => ControlFlow::Continue(ret),
            },
        }
    }

    fn last_type_in_bounds(
        bounds: &Punctuated<TypeParamBound, Token![+]>,
    ) -> ControlFlow<bool, &Type> {
        match bounds.last().unwrap() {
            TypeParamBound::Trait(t) => last_type_in_path(&t.path),
            TypeParamBound::Lifetime(_)
            | TypeParamBound::PreciseCapture(_)
            | TypeParamBound::Verbatim(_) => ControlFlow::Break(false),
        }
    }
}

/// Whether the expression's first token is the label of a loop/block.
#[cfg(all(feature = "printing", feature = "full"))]
pub(crate) fn expr_leading_label(mut expr: &Expr) -> bool {
    loop {
        match expr {
            Expr::Block(e) => return e.label.is_some(),
            Expr::ForLoop(e) => return e.label.is_some(),
            Expr::Loop(e) => return e.label.is_some(),
            Expr::While(e) => return e.label.is_some(),

            Expr::Assign(e) => expr = &e.left,
            Expr::Await(e) => expr = &e.base,
            Expr::Binary(e) => expr = &e.left,
            Expr::Call(e) => expr = &e.func,
            Expr::Cast(e) => expr = &e.expr,
            Expr::Field(e) => expr = &e.base,
            Expr::Index(e) => expr = &e.expr,
            Expr::MethodCall(e) => expr = &e.receiver,
            Expr::Range(e) => match &e.start {
                Some(start) => expr = start,
                None => return false,
            },
            Expr::Try(e) => expr = &e.expr,

            Expr::Array(_)
            | Expr::Async(_)
            | Expr::Break(_)
            | Expr::Closure(_)
            | Expr::Const(_)
            | Expr::Continue(_)
            | Expr::Group(_)
            | Expr::If(_)
            | Expr::Infer(_)
            | Expr::Let(_)
            | Expr::Lit(_)
            | Expr::Macro(_)
            | Expr::Match(_)
            | Expr::Paren(_)
            | Expr::Path(_)
            | Expr::RawAddr(_)
            | Expr::Reference(_)
            | Expr::Repeat(_)
            | Expr::Return(_)
            | Expr::Struct(_)
            | Expr::TryBlock(_)
            | Expr::Tuple(_)
            | Expr::Unary(_)
            | Expr::Unsafe(_)
            | Expr::Verbatim(_)
            | Expr::Yield(_) => return false,
        }
    }
}

/// Whether the expression's last token is `}`.
#[cfg(feature = "full")]
pub(crate) fn expr_trailing_brace(mut expr: &Expr) -> bool {
    loop {
        match expr {
            Expr::Async(_)
            | Expr::Block(_)
            | Expr::Const(_)
            | Expr::ForLoop(_)
            | Expr::If(_)
            | Expr::Loop(_)
            | Expr::Match(_)
            | Expr::Struct(_)
            | Expr::TryBlock(_)
            | Expr::Unsafe(_)
            | Expr::While(_) => return true,

            Expr::Assign(e) => expr = &e.right,
            Expr::Binary(e) => expr = &e.right,
            Expr::Break(e) => match &e.expr {
                Some(e) => expr = e,
                None => return false,
            },
            Expr::Cast(e) => return type_trailing_brace(&e.ty),
            Expr::Closure(e) => expr = &e.body,
            Expr::Let(e) => expr = &e.expr,
            Expr::Macro(e) => return e.mac.delimiter.is_brace(),
            Expr::Range(e) => match &e.end {
                Some(end) => expr = end,
                None => return false,
            },
            Expr::RawAddr(e) => expr = &e.expr,
            Expr::Reference(e) => expr = &e.expr,
            Expr::Return(e) => match &e.expr {
                Some(e) => expr = e,
                None => return false,
            },
            Expr::Unary(e) => expr = &e.expr,
            Expr::Verbatim(e) => return tokens_trailing_brace(e),
            Expr::Yield(e) => match &e.expr {
                Some(e) => expr = e,
                None => return false,
            },

            Expr::Array(_)
            | Expr::Await(_)
            | Expr::Call(_)
            | Expr::Continue(_)
            | Expr::Field(_)
            | Expr::Group(_)
            | Expr::Index(_)
            | Expr::Infer(_)
            | Expr::Lit(_)
            | Expr::MethodCall(_)
            | Expr::Paren(_)
            | Expr::Path(_)
            | Expr::Repeat(_)
            | Expr::Try(_)
            | Expr::Tuple(_) => return false,
        }
    }

    fn type_trailing_brace(mut ty: &Type) -> bool {
        loop {
            match ty {
                Type::BareFn(t) => match &t.output {
                    ReturnType::Default => return false,
                    ReturnType::Type(_, ret) => ty = ret,
                },
                Type::ImplTrait(t) => match last_type_in_bounds(&t.bounds) {
                    ControlFlow::Break(trailing_brace) => return trailing_brace,
                    ControlFlow::Continue(t) => ty = t,
                },
                Type::Macro(t) => return t.mac.delimiter.is_brace(),
                Type::Path(t) => match last_type_in_path(&t.path) {
                    Some(t) => ty = t,
                    None => return false,
                },
                Type::Ptr(t) => ty = &t.elem,
                Type::Reference(t) => ty = &t.elem,
                Type::TraitObject(t) => match last_type_in_bounds(&t.bounds) {
                    ControlFlow::Break(trailing_brace) => return trailing_brace,
                    ControlFlow::Continue(t) => ty = t,
                },
                Type::Verbatim(t) => return tokens_trailing_brace(t),

                Type::Array(_)
                | Type::Group(_)
                | Type::Infer(_)
                | Type::Never(_)
                | Type::Paren(_)
                | Type::Slice(_)
                | Type::Tuple(_) => return false,
            }
        }
    }

    fn last_type_in_path(path: &Path) -> Option<&Type> {
        match &path.segments.last().unwrap().arguments {
            PathArguments::None | PathArguments::AngleBracketed(_) => None,
            PathArguments::Parenthesized(arg) => match &arg.output {
                ReturnType::Default => None,
                ReturnType::Type(_, ret) => Some(ret),
            },
        }
    }

    fn last_type_in_bounds(
        bounds: &Punctuated<TypeParamBound, Token![+]>,
    ) -> ControlFlow<bool, &Type> {
        match bounds.last().unwrap() {
            TypeParamBound::Trait(t) => match last_type_in_path(&t.path) {
                Some(t) => ControlFlow::Continue(t),
                None => ControlFlow::Break(false),
            },
            TypeParamBound::Lifetime(_) | TypeParamBound::PreciseCapture(_) => {
                ControlFlow::Break(false)
            }
            TypeParamBound::Verbatim(t) => ControlFlow::Break(tokens_trailing_brace(t)),
        }
    }

    fn tokens_trailing_brace(tokens: &TokenStream) -> bool {
        if let Some(TokenTree::Group(last)) = tokens.clone().into_iter().last() {
            last.delimiter() == Delimiter::Brace
        } else {
            false
        }
    }
}
262 rust/syn/custom_keyword.rs Normal file
@@ -0,0 +1,262 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

/// Define a type that supports parsing and printing a given identifier as if it
/// were a keyword.
///
/// # Usage
///
/// As a convention, it is recommended that this macro be invoked within a
/// module called `kw` or `keyword` and that the resulting parser be invoked
/// with a `kw::` or `keyword::` prefix.
///
/// ```
/// mod kw {
///     syn::custom_keyword!(whatever);
/// }
/// ```
///
/// The generated syntax tree node supports the following operations just like
/// any built-in keyword token.
///
/// - [Peeking] — `input.peek(kw::whatever)`
///
/// - [Parsing] — `input.parse::<kw::whatever>()?`
///
/// - [Printing] — `quote!( ... #whatever_token ... )`
///
/// - Construction from a [`Span`] — `let whatever_token = kw::whatever(sp)`
///
/// - Field access to its span — `let sp = whatever_token.span`
///
/// [Peeking]: crate::parse::ParseBuffer::peek
/// [Parsing]: crate::parse::ParseBuffer::parse
/// [Printing]: quote::ToTokens
/// [`Span`]: proc_macro2::Span
///
/// # Example
///
/// This example parses input that looks like `bool = true` or `str = "value"`.
/// The key must be either the identifier `bool` or the identifier `str`. If
/// `bool`, the value may be either `true` or `false`. If `str`, the value may
/// be any string literal.
///
/// The symbols `bool` and `str` are not reserved keywords in Rust so these are
/// not considered keywords in the `syn::token` module. Like any other
/// identifier that is not a keyword, these can be declared as custom keywords
/// by crates that need to use them as such.
///
/// ```
/// use syn::{LitBool, LitStr, Result, Token};
/// use syn::parse::{Parse, ParseStream};
///
/// mod kw {
///     syn::custom_keyword!(bool);
///     syn::custom_keyword!(str);
/// }
///
/// enum Argument {
///     Bool {
///         bool_token: kw::bool,
///         eq_token: Token![=],
///         value: LitBool,
///     },
///     Str {
///         str_token: kw::str,
///         eq_token: Token![=],
///         value: LitStr,
///     },
/// }
///
/// impl Parse for Argument {
///     fn parse(input: ParseStream) -> Result<Self> {
///         let lookahead = input.lookahead1();
///         if lookahead.peek(kw::bool) {
///             Ok(Argument::Bool {
///                 bool_token: input.parse::<kw::bool>()?,
///                 eq_token: input.parse()?,
///                 value: input.parse()?,
///             })
///         } else if lookahead.peek(kw::str) {
///             Ok(Argument::Str {
///                 str_token: input.parse::<kw::str>()?,
///                 eq_token: input.parse()?,
///                 value: input.parse()?,
///             })
///         } else {
///             Err(lookahead.error())
///         }
///     }
/// }
/// ```
#[macro_export]
macro_rules! custom_keyword {
    ($ident:ident) => {
        #[allow(non_camel_case_types)]
        pub struct $ident {
            #[allow(dead_code)]
            pub span: $crate::__private::Span,
        }

        #[doc(hidden)]
        #[allow(dead_code, non_snake_case)]
        pub fn $ident<__S: $crate::__private::IntoSpans<$crate::__private::Span>>(
            span: __S,
        ) -> $ident {
            $ident {
                span: $crate::__private::IntoSpans::into_spans(span),
            }
        }

        const _: () = {
            impl $crate::__private::Default for $ident {
                fn default() -> Self {
                    $ident {
                        span: $crate::__private::Span::call_site(),
                    }
                }
            }

            $crate::impl_parse_for_custom_keyword!($ident);
            $crate::impl_to_tokens_for_custom_keyword!($ident);
            $crate::impl_clone_for_custom_keyword!($ident);
            $crate::impl_extra_traits_for_custom_keyword!($ident);
        };
    };
}

// Not public API.
#[cfg(feature = "parsing")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_parse_for_custom_keyword {
    ($ident:ident) => {
        // For peek.
        impl $crate::__private::CustomToken for $ident {
            fn peek(cursor: $crate::buffer::Cursor) -> $crate::__private::bool {
                if let $crate::__private::Some((ident, _rest)) = cursor.ident() {
                    ident == $crate::__private::stringify!($ident)
                } else {
                    false
                }
            }

            fn display() -> &'static $crate::__private::str {
                $crate::__private::concat!("`", $crate::__private::stringify!($ident), "`")
            }
        }

        impl $crate::parse::Parse for $ident {
            fn parse(input: $crate::parse::ParseStream) -> $crate::parse::Result<$ident> {
                input.step(|cursor| {
                    if let $crate::__private::Some((ident, rest)) = cursor.ident() {
                        if ident == $crate::__private::stringify!($ident) {
                            return $crate::__private::Ok(($ident { span: ident.span() }, rest));
                        }
                    }
                    $crate::__private::Err(cursor.error($crate::__private::concat!(
                        "expected `",
                        $crate::__private::stringify!($ident),
                        "`",
                    )))
                })
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "parsing"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_parse_for_custom_keyword {
    ($ident:ident) => {};
}

// Not public API.
#[cfg(feature = "printing")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_to_tokens_for_custom_keyword {
    ($ident:ident) => {
        impl $crate::__private::ToTokens for $ident {
            fn to_tokens(&self, tokens: &mut $crate::__private::TokenStream2) {
                let ident = $crate::Ident::new($crate::__private::stringify!($ident), self.span);
                $crate::__private::TokenStreamExt::append(tokens, ident);
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "printing"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_to_tokens_for_custom_keyword {
    ($ident:ident) => {};
}

// Not public API.
#[cfg(feature = "clone-impls")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_clone_for_custom_keyword {
    ($ident:ident) => {
        impl $crate::__private::Copy for $ident {}

        #[allow(clippy::expl_impl_clone_on_copy)]
        impl $crate::__private::Clone for $ident {
            fn clone(&self) -> Self {
                *self
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "clone-impls"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_clone_for_custom_keyword {
    ($ident:ident) => {};
}

// Not public API.
#[cfg(feature = "extra-traits")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_extra_traits_for_custom_keyword {
    ($ident:ident) => {
        impl $crate::__private::Debug for $ident {
            fn fmt(&self, f: &mut $crate::__private::Formatter) -> $crate::__private::FmtResult {
                $crate::__private::Formatter::write_str(
                    f,
                    $crate::__private::concat!(
                        "Keyword [",
                        $crate::__private::stringify!($ident),
                        "]",
                    ),
                )
            }
        }

        impl $crate::__private::Eq for $ident {}

        impl $crate::__private::PartialEq for $ident {
            fn eq(&self, _other: &Self) -> $crate::__private::bool {
                true
            }
        }

        impl $crate::__private::Hash for $ident {
            fn hash<__H: $crate::__private::Hasher>(&self, _state: &mut __H) {}
        }
    };
}

// Not public API.
#[cfg(not(feature = "extra-traits"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_extra_traits_for_custom_keyword {
    ($ident:ident) => {};
}
306 rust/syn/custom_punctuation.rs Normal file
@@ -0,0 +1,306 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

/// Define a type that supports parsing and printing a multi-character symbol
/// as if it were a punctuation token.
///
/// # Usage
///
/// ```
/// syn::custom_punctuation!(LeftRightArrow, <=>);
/// ```
///
/// The generated syntax tree node supports the following operations just like
/// any built-in punctuation token.
///
/// - [Peeking] — `input.peek(LeftRightArrow)`
///
/// - [Parsing] — `input.parse::<LeftRightArrow>()?`
///
/// - [Printing] — `quote!( ... #lrarrow ... )`
///
/// - Construction from a [`Span`] — `let lrarrow = LeftRightArrow(sp)`
///
/// - Construction from multiple [`Span`] — `let lrarrow = LeftRightArrow([sp, sp, sp])`
///
/// - Field access to its spans — `let spans = lrarrow.spans`
///
/// [Peeking]: crate::parse::ParseBuffer::peek
/// [Parsing]: crate::parse::ParseBuffer::parse
/// [Printing]: quote::ToTokens
/// [`Span`]: proc_macro2::Span
///
/// # Example
///
/// ```
/// use proc_macro2::{TokenStream, TokenTree};
/// use syn::parse::{Parse, ParseStream, Peek, Result};
/// use syn::punctuated::Punctuated;
/// use syn::Expr;
///
/// syn::custom_punctuation!(PathSeparator, </>);
///
/// // expr </> expr </> expr ...
/// struct PathSegments {
///     segments: Punctuated<Expr, PathSeparator>,
/// }
///
/// impl Parse for PathSegments {
///     fn parse(input: ParseStream) -> Result<Self> {
///         let mut segments = Punctuated::new();
///
///         let first = parse_until(input, PathSeparator)?;
///         segments.push_value(syn::parse2(first)?);
///
///         while input.peek(PathSeparator) {
///             segments.push_punct(input.parse()?);
///
///             let next = parse_until(input, PathSeparator)?;
///             segments.push_value(syn::parse2(next)?);
///         }
///
///         Ok(PathSegments { segments })
///     }
/// }
///
/// fn parse_until<E: Peek>(input: ParseStream, end: E) -> Result<TokenStream> {
///     let mut tokens = TokenStream::new();
///     while !input.is_empty() && !input.peek(end) {
///         let next: TokenTree = input.parse()?;
///         tokens.extend(Some(next));
///     }
///     Ok(tokens)
/// }
///
/// fn main() {
///     let input = r#" a::b </> c::d::e "#;
///     let _: PathSegments = syn::parse_str(input).unwrap();
/// }
/// ```
#[macro_export]
macro_rules! custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {
        pub struct $ident {
            #[allow(dead_code)]
            pub spans: $crate::custom_punctuation_repr!($($tt)+),
        }

        #[doc(hidden)]
        #[allow(dead_code, non_snake_case)]
        pub fn $ident<__S: $crate::__private::IntoSpans<$crate::custom_punctuation_repr!($($tt)+)>>(
            spans: __S,
        ) -> $ident {
            let _validate_len = 0 $(+ $crate::custom_punctuation_len!(strict, $tt))*;
            $ident {
                spans: $crate::__private::IntoSpans::into_spans(spans)
            }
        }

        const _: () = {
            impl $crate::__private::Default for $ident {
                fn default() -> Self {
                    $ident($crate::__private::Span::call_site())
                }
            }

            $crate::impl_parse_for_custom_punctuation!($ident, $($tt)+);
            $crate::impl_to_tokens_for_custom_punctuation!($ident, $($tt)+);
            $crate::impl_clone_for_custom_punctuation!($ident, $($tt)+);
            $crate::impl_extra_traits_for_custom_punctuation!($ident, $($tt)+);
        };
    };
}

// Not public API.
#[cfg(feature = "parsing")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_parse_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {
        impl $crate::__private::CustomToken for $ident {
            fn peek(cursor: $crate::buffer::Cursor) -> $crate::__private::bool {
                $crate::__private::peek_punct(cursor, $crate::stringify_punct!($($tt)+))
            }

            fn display() -> &'static $crate::__private::str {
                $crate::__private::concat!("`", $crate::stringify_punct!($($tt)+), "`")
            }
        }

        impl $crate::parse::Parse for $ident {
            fn parse(input: $crate::parse::ParseStream) -> $crate::parse::Result<$ident> {
                let spans: $crate::custom_punctuation_repr!($($tt)+) =
                    $crate::__private::parse_punct(input, $crate::stringify_punct!($($tt)+))?;
                Ok($ident(spans))
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "parsing"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_parse_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {};
}

// Not public API.
#[cfg(feature = "printing")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_to_tokens_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {
        impl $crate::__private::ToTokens for $ident {
            fn to_tokens(&self, tokens: &mut $crate::__private::TokenStream2) {
                $crate::__private::print_punct($crate::stringify_punct!($($tt)+), &self.spans, tokens)
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "printing"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_to_tokens_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {};
}

// Not public API.
#[cfg(feature = "clone-impls")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_clone_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {
        impl $crate::__private::Copy for $ident {}

        #[allow(clippy::expl_impl_clone_on_copy)]
        impl $crate::__private::Clone for $ident {
            fn clone(&self) -> Self {
                *self
            }
        }
    };
}

// Not public API.
#[cfg(not(feature = "clone-impls"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_clone_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {};
}

// Not public API.
#[cfg(feature = "extra-traits")]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_extra_traits_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {
        impl $crate::__private::Debug for $ident {
            fn fmt(&self, f: &mut $crate::__private::Formatter) -> $crate::__private::FmtResult {
                $crate::__private::Formatter::write_str(f, $crate::__private::stringify!($ident))
            }
        }

        impl $crate::__private::Eq for $ident {}

        impl $crate::__private::PartialEq for $ident {
            fn eq(&self, _other: &Self) -> $crate::__private::bool {
                true
            }
        }

        impl $crate::__private::Hash for $ident {
            fn hash<__H: $crate::__private::Hasher>(&self, _state: &mut __H) {}
        }
    };
}

// Not public API.
#[cfg(not(feature = "extra-traits"))]
#[doc(hidden)]
#[macro_export]
macro_rules! impl_extra_traits_for_custom_punctuation {
    ($ident:ident, $($tt:tt)+) => {};
}

// Not public API.
#[doc(hidden)]
#[macro_export]
macro_rules! custom_punctuation_repr {
    ($($tt:tt)+) => {
        [$crate::__private::Span; 0 $(+ $crate::custom_punctuation_len!(lenient, $tt))+]
    };
}

// Not public API.
#[doc(hidden)]
#[macro_export]
#[rustfmt::skip]
macro_rules! custom_punctuation_len {
    ($mode:ident, &) => { 1 };
    ($mode:ident, &&) => { 2 };
    ($mode:ident, &=) => { 2 };
    ($mode:ident, @) => { 1 };
    ($mode:ident, ^) => { 1 };
    ($mode:ident, ^=) => { 2 };
    ($mode:ident, :) => { 1 };
    ($mode:ident, ,) => { 1 };
    ($mode:ident, $) => { 1 };
    ($mode:ident, .) => { 1 };
    ($mode:ident, ..) => { 2 };
    ($mode:ident, ...) => { 3 };
    ($mode:ident, ..=) => { 3 };
    ($mode:ident, =) => { 1 };
    ($mode:ident, ==) => { 2 };
    ($mode:ident, =>) => { 2 };
    ($mode:ident, >=) => { 2 };
    ($mode:ident, >) => { 1 };
    ($mode:ident, <-) => { 2 };
    ($mode:ident, <=) => { 2 };
    ($mode:ident, <) => { 1 };
    ($mode:ident, -) => { 1 };
    ($mode:ident, -=) => { 2 };
    ($mode:ident, !=) => { 2 };
    ($mode:ident, !) => { 1 };
    ($mode:ident, |) => { 1 };
    ($mode:ident, |=) => { 2 };
    ($mode:ident, ||) => { 2 };
    ($mode:ident, ::) => { 2 };
    ($mode:ident, %) => { 1 };
    ($mode:ident, %=) => { 2 };
    ($mode:ident, +) => { 1 };
    ($mode:ident, +=) => { 2 };
    ($mode:ident, #) => { 1 };
    ($mode:ident, ?) => { 1 };
    ($mode:ident, ->) => { 2 };
    ($mode:ident, ;) => { 1 };
    ($mode:ident, <<) => { 2 };
    ($mode:ident, <<=) => { 3 };
    ($mode:ident, >>) => { 2 };
    ($mode:ident, >>=) => { 3 };
    ($mode:ident, /) => { 1 };
    ($mode:ident, /=) => { 2 };
    ($mode:ident, *) => { 1 };
    ($mode:ident, *=) => { 2 };
    ($mode:ident, ~) => { 1 };
    (lenient, $tt:tt) => { 0 };
    (strict, $tt:tt) => {{ $crate::custom_punctuation_unexpected!($tt); 0 }};
}

// Not public API.
#[doc(hidden)]
#[macro_export]
macro_rules! custom_punctuation_unexpected {
    () => {};
}

// Not public API.
#[doc(hidden)]
#[macro_export]
macro_rules! stringify_punct {
    ($($tt:tt)+) => {
        $crate::__private::concat!($($crate::__private::stringify!($tt)),+)
    };
}
426 rust/syn/data.rs Normal file
@@ -0,0 +1,426 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::attr::Attribute;
use crate::expr::{Expr, Index, Member};
use crate::ident::Ident;
use crate::punctuated::{self, Punctuated};
use crate::restriction::{FieldMutability, Visibility};
use crate::token;
use crate::ty::Type;

ast_struct! {
    /// An enum variant.
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub struct Variant {
        pub attrs: Vec<Attribute>,

        /// Name of the variant.
        pub ident: Ident,

        /// Content stored in the variant.
        pub fields: Fields,

        /// Explicit discriminant: `Variant = 1`
        pub discriminant: Option<(Token![=], Expr)>,
    }
}

ast_enum_of_structs! {
    /// Data stored within an enum variant or struct.
    ///
    /// # Syntax tree enum
    ///
    /// This type is a [syntax tree enum].
    ///
    /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub enum Fields {
        /// Named fields of a struct or struct variant such as `Point { x: f64,
        /// y: f64 }`.
        Named(FieldsNamed),

        /// Unnamed fields of a tuple struct or tuple variant such as `Some(T)`.
        Unnamed(FieldsUnnamed),

        /// Unit struct or unit variant such as `None`.
        Unit,
    }
}

ast_struct! {
    /// Named fields of a struct or struct variant such as `Point { x: f64,
    /// y: f64 }`.
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub struct FieldsNamed {
        pub brace_token: token::Brace,
        pub named: Punctuated<Field, Token![,]>,
    }
}

ast_struct! {
    /// Unnamed fields of a tuple struct or tuple variant such as `Some(T)`.
    #[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
    pub struct FieldsUnnamed {
        pub paren_token: token::Paren,
        pub unnamed: Punctuated<Field, Token![,]>,
    }
}

impl Fields {
    /// Get an iterator over the borrowed [`Field`] items in this object. This
    /// iterator can be used to iterate over a named or unnamed struct or
    /// variant's fields uniformly.
    pub fn iter(&self) -> punctuated::Iter<Field> {
        match self {
            Fields::Unit => crate::punctuated::empty_punctuated_iter(),
            Fields::Named(f) => f.named.iter(),
            Fields::Unnamed(f) => f.unnamed.iter(),
        }
    }

    /// Get an iterator over the mutably borrowed [`Field`] items in this
    /// object. This iterator can be used to iterate over a named or unnamed
    /// struct or variant's fields uniformly.
    pub fn iter_mut(&mut self) -> punctuated::IterMut<Field> {
        match self {
            Fields::Unit => crate::punctuated::empty_punctuated_iter_mut(),
            Fields::Named(f) => f.named.iter_mut(),
            Fields::Unnamed(f) => f.unnamed.iter_mut(),
        }
    }

    /// Returns the number of fields.
    pub fn len(&self) -> usize {
        match self {
            Fields::Unit => 0,
            Fields::Named(f) => f.named.len(),
            Fields::Unnamed(f) => f.unnamed.len(),
        }
    }

    /// Returns `true` if there are zero fields.
    pub fn is_empty(&self) -> bool {
        match self {
            Fields::Unit => true,
            Fields::Named(f) => f.named.is_empty(),
            Fields::Unnamed(f) => f.unnamed.is_empty(),
        }
    }

    return_impl_trait! {
        /// Get an iterator over the fields of a struct or variant as [`Member`]s.
        /// This iterator can be used to iterate over a named or unnamed struct or
        /// variant's fields uniformly.
        ///
        /// # Example
        ///
        /// The following is a simplistic [`Clone`] derive for structs. (A more
        /// complete implementation would additionally want to infer trait bounds on
        /// the generic type parameters.)
        ///
        /// ```
        /// # use quote::quote;
        /// #
        /// fn derive_clone(input: &syn::ItemStruct) -> proc_macro2::TokenStream {
        ///     let ident = &input.ident;
        ///     let members = input.fields.members();
||||
/// let (impl_generics, ty_generics, where_clause) = input.generics.split_for_impl();
|
||||
/// quote! {
|
||||
/// impl #impl_generics Clone for #ident #ty_generics #where_clause {
|
||||
/// fn clone(&self) -> Self {
|
||||
/// Self {
|
||||
/// #(#members: self.#members.clone()),*
|
||||
/// }
|
||||
/// }
|
||||
/// }
|
||||
/// }
|
||||
/// }
|
||||
/// ```
|
||||
///
|
||||
/// For structs with named fields, it produces an expression like `Self { a:
|
||||
/// self.a.clone() }`. For structs with unnamed fields, `Self { 0:
|
||||
/// self.0.clone() }`. And for unit structs, `Self {}`.
|
||||
pub fn members(&self) -> impl Iterator<Item = Member> + Clone + '_ [Members] {
|
||||
Members {
|
||||
fields: self.iter(),
|
||||
index: 0,
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl IntoIterator for Fields {
|
||||
type Item = Field;
|
||||
type IntoIter = punctuated::IntoIter<Field>;
|
||||
|
||||
fn into_iter(self) -> Self::IntoIter {
|
||||
match self {
|
||||
Fields::Unit => Punctuated::<Field, ()>::new().into_iter(),
|
||||
Fields::Named(f) => f.named.into_iter(),
|
||||
Fields::Unnamed(f) => f.unnamed.into_iter(),
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> IntoIterator for &'a Fields {
|
||||
type Item = &'a Field;
|
||||
type IntoIter = punctuated::Iter<'a, Field>;
|
||||
|
||||
fn into_iter(self) -> Self::IntoIter {
|
||||
self.iter()
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> IntoIterator for &'a mut Fields {
|
||||
type Item = &'a mut Field;
|
||||
type IntoIter = punctuated::IterMut<'a, Field>;
|
||||
|
||||
fn into_iter(self) -> Self::IntoIter {
|
||||
self.iter_mut()
|
||||
}
|
||||
}
|
||||
|
||||
ast_struct! {
|
||||
/// A field of a struct or enum variant.
|
||||
#[cfg_attr(docsrs, doc(cfg(any(feature = "full", feature = "derive"))))]
|
||||
pub struct Field {
|
||||
pub attrs: Vec<Attribute>,
|
||||
|
||||
pub vis: Visibility,
|
||||
|
||||
pub mutability: FieldMutability,
|
||||
|
||||
/// Name of the field, if any.
|
||||
///
|
||||
/// Fields of tuple structs have no names.
|
||||
pub ident: Option<Ident>,
|
||||
|
||||
pub colon_token: Option<Token![:]>,
|
||||
|
||||
pub ty: Type,
|
||||
}
|
||||
}
|
||||
|
||||
pub struct Members<'a> {
|
||||
fields: punctuated::Iter<'a, Field>,
|
||||
index: u32,
|
||||
}
|
||||
|
||||
impl<'a> Iterator for Members<'a> {
|
||||
type Item = Member;
|
||||
|
||||
fn next(&mut self) -> Option<Self::Item> {
|
||||
let field = self.fields.next()?;
|
||||
let member = match &field.ident {
|
||||
Some(ident) => Member::Named(ident.clone()),
|
||||
None => {
|
||||
#[cfg(all(feature = "parsing", feature = "printing"))]
|
||||
let span = crate::spanned::Spanned::span(&field.ty);
|
||||
#[cfg(not(all(feature = "parsing", feature = "printing")))]
|
||||
let span = proc_macro2::Span::call_site();
|
||||
Member::Unnamed(Index {
|
||||
index: self.index,
|
||||
span,
|
||||
})
|
||||
}
|
||||
};
|
||||
self.index += 1;
|
||||
Some(member)
|
||||
}
|
||||
}
|
||||
|
||||
impl<'a> Clone for Members<'a> {
|
||||
fn clone(&self) -> Self {
|
||||
Members {
|
||||
fields: self.fields.clone(),
|
||||
index: self.index,
|
||||
}
|
||||
}
|
||||
}
|
||||
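The `Members` iterator above maps each field to either its name or its position. A hypothetical, std-only sketch (not part of syn) of the same pattern: walk fields in order, yield the field's name when it has one, otherwise yield the current position as an index. Like the real iterator, the index advances for every field, not only the unnamed ones.

```rust
// Std-only analogue of syn's Member: either a field name or a field index.
#[derive(Debug, Clone, PartialEq)]
enum Member {
    Named(String),
    Unnamed(u32),
}

// Each field is modeled as an optional identifier, as in syn's `Field::ident`.
fn members(fields: &[Option<&str>]) -> Vec<Member> {
    let mut index = 0u32;
    let mut out = Vec::new();
    for ident in fields {
        out.push(match ident {
            Some(name) => Member::Named(name.to_string()),
            None => Member::Unnamed(index),
        });
        // The index counts fields, mirroring `self.index += 1` in Members.
        index += 1;
    }
    out
}

fn main() {
    // Named struct `Point { x, y }`: every field yields Member::Named.
    assert_eq!(
        members(&[Some("x"), Some("y")]),
        [Member::Named("x".into()), Member::Named("y".into())]
    );
    // Tuple struct `Pair(A, B)`: fields yield ascending indices 0, 1.
    assert_eq!(
        members(&[None, None]),
        [Member::Unnamed(0), Member::Unnamed(1)]
    );
}
```

This is the shape that lets `derive_clone` in the doc example above interpolate `#members` uniformly: a `Member::Named` prints as `a` and a `Member::Unnamed` prints as `0`, so `self.#members.clone()` works for both struct kinds.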
#[cfg(feature = "parsing")]
pub(crate) mod parsing {
    use crate::attr::Attribute;
    use crate::data::{Field, Fields, FieldsNamed, FieldsUnnamed, Variant};
    use crate::error::Result;
    use crate::expr::Expr;
    use crate::ext::IdentExt as _;
    use crate::ident::Ident;
    #[cfg(not(feature = "full"))]
    use crate::parse::discouraged::Speculative as _;
    use crate::parse::{Parse, ParseStream};
    use crate::restriction::{FieldMutability, Visibility};
    #[cfg(not(feature = "full"))]
    use crate::scan_expr::scan_expr;
    use crate::token;
    use crate::ty::Type;
    use crate::verbatim;

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for Variant {
        fn parse(input: ParseStream) -> Result<Self> {
            let attrs = input.call(Attribute::parse_outer)?;
            let _visibility: Visibility = input.parse()?;
            let ident: Ident = input.parse()?;
            let fields = if input.peek(token::Brace) {
                Fields::Named(input.parse()?)
            } else if input.peek(token::Paren) {
                Fields::Unnamed(input.parse()?)
            } else {
                Fields::Unit
            };
            let discriminant = if input.peek(Token![=]) {
                let eq_token: Token![=] = input.parse()?;
                #[cfg(feature = "full")]
                let discriminant: Expr = input.parse()?;
                #[cfg(not(feature = "full"))]
                let discriminant = {
                    let begin = input.fork();
                    let ahead = input.fork();
                    let mut discriminant: Result<Expr> = ahead.parse();
                    if discriminant.is_ok() {
                        input.advance_to(&ahead);
                    } else if scan_expr(input).is_ok() {
                        discriminant = Ok(Expr::Verbatim(verbatim::between(&begin, input)));
                    }
                    discriminant?
                };
                Some((eq_token, discriminant))
            } else {
                None
            };
            Ok(Variant {
                attrs,
                ident,
                fields,
                discriminant,
            })
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for FieldsNamed {
        fn parse(input: ParseStream) -> Result<Self> {
            let content;
            Ok(FieldsNamed {
                brace_token: braced!(content in input),
                named: content.parse_terminated(Field::parse_named, Token![,])?,
            })
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for FieldsUnnamed {
        fn parse(input: ParseStream) -> Result<Self> {
            let content;
            Ok(FieldsUnnamed {
                paren_token: parenthesized!(content in input),
                unnamed: content.parse_terminated(Field::parse_unnamed, Token![,])?,
            })
        }
    }

    impl Field {
        /// Parses a named (braced struct) field.
        #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
        pub fn parse_named(input: ParseStream) -> Result<Self> {
            let attrs = input.call(Attribute::parse_outer)?;
            let vis: Visibility = input.parse()?;

            let unnamed_field = cfg!(feature = "full") && input.peek(Token![_]);
            let ident = if unnamed_field {
                input.call(Ident::parse_any)
            } else {
                input.parse()
            }?;

            let colon_token: Token![:] = input.parse()?;

            let ty: Type = if unnamed_field
                && (input.peek(Token![struct])
                    || input.peek(Token![union]) && input.peek2(token::Brace))
            {
                let begin = input.fork();
                input.call(Ident::parse_any)?;
                input.parse::<FieldsNamed>()?;
                Type::Verbatim(verbatim::between(&begin, input))
            } else {
                input.parse()?
            };

            Ok(Field {
                attrs,
                vis,
                mutability: FieldMutability::None,
                ident: Some(ident),
                colon_token: Some(colon_token),
                ty,
            })
        }

        /// Parses an unnamed (tuple struct) field.
        #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
        pub fn parse_unnamed(input: ParseStream) -> Result<Self> {
            Ok(Field {
                attrs: input.call(Attribute::parse_outer)?,
                vis: input.parse()?,
                mutability: FieldMutability::None,
                ident: None,
                colon_token: None,
                ty: input.parse()?,
            })
        }
    }
}

#[cfg(feature = "printing")]
mod printing {
    use crate::data::{Field, FieldsNamed, FieldsUnnamed, Variant};
    use crate::print::TokensOrDefault;
    use proc_macro2::TokenStream;
    use quote::{ToTokens, TokenStreamExt};

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for Variant {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            tokens.append_all(&self.attrs);
            self.ident.to_tokens(tokens);
            self.fields.to_tokens(tokens);
            if let Some((eq_token, disc)) = &self.discriminant {
                eq_token.to_tokens(tokens);
                disc.to_tokens(tokens);
            }
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for FieldsNamed {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            self.brace_token.surround(tokens, |tokens| {
                self.named.to_tokens(tokens);
            });
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for FieldsUnnamed {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            self.paren_token.surround(tokens, |tokens| {
                self.unnamed.to_tokens(tokens);
            });
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for Field {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            tokens.append_all(&self.attrs);
            self.vis.to_tokens(tokens);
            if let Some(ident) = &self.ident {
                ident.to_tokens(tokens);
                TokensOrDefault(&self.colon_token).to_tokens(tokens);
            }
            self.ty.to_tokens(tokens);
        }
    }
}
261	rust/syn/derive.rs	Normal file
@@ -0,0 +1,261 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::attr::Attribute;
use crate::data::{Fields, FieldsNamed, Variant};
use crate::generics::Generics;
use crate::ident::Ident;
use crate::punctuated::Punctuated;
use crate::restriction::Visibility;
use crate::token;

ast_struct! {
    /// Data structure sent to a `proc_macro_derive` macro.
    #[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
    pub struct DeriveInput {
        pub attrs: Vec<Attribute>,
        pub vis: Visibility,
        pub ident: Ident,
        pub generics: Generics,
        pub data: Data,
    }
}

ast_enum! {
    /// The storage of a struct, enum or union data structure.
    ///
    /// # Syntax tree enum
    ///
    /// This type is a [syntax tree enum].
    ///
    /// [syntax tree enum]: crate::expr::Expr#syntax-tree-enums
    #[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
    pub enum Data {
        Struct(DataStruct),
        Enum(DataEnum),
        Union(DataUnion),
    }
}

ast_struct! {
    /// A struct input to a `proc_macro_derive` macro.
    #[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
    pub struct DataStruct {
        pub struct_token: Token![struct],
        pub fields: Fields,
        pub semi_token: Option<Token![;]>,
    }
}

ast_struct! {
    /// An enum input to a `proc_macro_derive` macro.
    #[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
    pub struct DataEnum {
        pub enum_token: Token![enum],
        pub brace_token: token::Brace,
        pub variants: Punctuated<Variant, Token![,]>,
    }
}

ast_struct! {
    /// An untagged union input to a `proc_macro_derive` macro.
    #[cfg_attr(docsrs, doc(cfg(feature = "derive")))]
    pub struct DataUnion {
        pub union_token: Token![union],
        pub fields: FieldsNamed,
    }
}

#[cfg(feature = "parsing")]
pub(crate) mod parsing {
    use crate::attr::Attribute;
    use crate::data::{Fields, FieldsNamed, Variant};
    use crate::derive::{Data, DataEnum, DataStruct, DataUnion, DeriveInput};
    use crate::error::Result;
    use crate::generics::{Generics, WhereClause};
    use crate::ident::Ident;
    use crate::parse::{Parse, ParseStream};
    use crate::punctuated::Punctuated;
    use crate::restriction::Visibility;
    use crate::token;

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for DeriveInput {
        fn parse(input: ParseStream) -> Result<Self> {
            let attrs = input.call(Attribute::parse_outer)?;
            let vis = input.parse::<Visibility>()?;

            let lookahead = input.lookahead1();
            if lookahead.peek(Token![struct]) {
                let struct_token = input.parse::<Token![struct]>()?;
                let ident = input.parse::<Ident>()?;
                let generics = input.parse::<Generics>()?;
                let (where_clause, fields, semi) = data_struct(input)?;
                Ok(DeriveInput {
                    attrs,
                    vis,
                    ident,
                    generics: Generics {
                        where_clause,
                        ..generics
                    },
                    data: Data::Struct(DataStruct {
                        struct_token,
                        fields,
                        semi_token: semi,
                    }),
                })
            } else if lookahead.peek(Token![enum]) {
                let enum_token = input.parse::<Token![enum]>()?;
                let ident = input.parse::<Ident>()?;
                let generics = input.parse::<Generics>()?;
                let (where_clause, brace, variants) = data_enum(input)?;
                Ok(DeriveInput {
                    attrs,
                    vis,
                    ident,
                    generics: Generics {
                        where_clause,
                        ..generics
                    },
                    data: Data::Enum(DataEnum {
                        enum_token,
                        brace_token: brace,
                        variants,
                    }),
                })
            } else if lookahead.peek(Token![union]) {
                let union_token = input.parse::<Token![union]>()?;
                let ident = input.parse::<Ident>()?;
                let generics = input.parse::<Generics>()?;
                let (where_clause, fields) = data_union(input)?;
                Ok(DeriveInput {
                    attrs,
                    vis,
                    ident,
                    generics: Generics {
                        where_clause,
                        ..generics
                    },
                    data: Data::Union(DataUnion {
                        union_token,
                        fields,
                    }),
                })
            } else {
                Err(lookahead.error())
            }
        }
    }

    pub(crate) fn data_struct(
        input: ParseStream,
    ) -> Result<(Option<WhereClause>, Fields, Option<Token![;]>)> {
        let mut lookahead = input.lookahead1();
        let mut where_clause = None;
        if lookahead.peek(Token![where]) {
            where_clause = Some(input.parse()?);
            lookahead = input.lookahead1();
        }

        if where_clause.is_none() && lookahead.peek(token::Paren) {
            let fields = input.parse()?;

            lookahead = input.lookahead1();
            if lookahead.peek(Token![where]) {
                where_clause = Some(input.parse()?);
                lookahead = input.lookahead1();
            }

            if lookahead.peek(Token![;]) {
                let semi = input.parse()?;
                Ok((where_clause, Fields::Unnamed(fields), Some(semi)))
            } else {
                Err(lookahead.error())
            }
        } else if lookahead.peek(token::Brace) {
            let fields = input.parse()?;
            Ok((where_clause, Fields::Named(fields), None))
        } else if lookahead.peek(Token![;]) {
            let semi = input.parse()?;
            Ok((where_clause, Fields::Unit, Some(semi)))
        } else {
            Err(lookahead.error())
        }
    }

    pub(crate) fn data_enum(
        input: ParseStream,
    ) -> Result<(
        Option<WhereClause>,
        token::Brace,
        Punctuated<Variant, Token![,]>,
    )> {
        let where_clause = input.parse()?;

        let content;
        let brace = braced!(content in input);
        let variants = content.parse_terminated(Variant::parse, Token![,])?;

        Ok((where_clause, brace, variants))
    }

    pub(crate) fn data_union(input: ParseStream) -> Result<(Option<WhereClause>, FieldsNamed)> {
        let where_clause = input.parse()?;
        let fields = input.parse()?;
        Ok((where_clause, fields))
    }
}

#[cfg(feature = "printing")]
mod printing {
    use crate::attr::FilterAttrs;
    use crate::data::Fields;
    use crate::derive::{Data, DeriveInput};
    use crate::print::TokensOrDefault;
    use proc_macro2::TokenStream;
    use quote::ToTokens;

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for DeriveInput {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            for attr in self.attrs.outer() {
                attr.to_tokens(tokens);
            }
            self.vis.to_tokens(tokens);
            match &self.data {
                Data::Struct(d) => d.struct_token.to_tokens(tokens),
                Data::Enum(d) => d.enum_token.to_tokens(tokens),
                Data::Union(d) => d.union_token.to_tokens(tokens),
            }
            self.ident.to_tokens(tokens);
            self.generics.to_tokens(tokens);
            match &self.data {
                Data::Struct(data) => match &data.fields {
                    Fields::Named(fields) => {
                        self.generics.where_clause.to_tokens(tokens);
                        fields.to_tokens(tokens);
                    }
                    Fields::Unnamed(fields) => {
                        fields.to_tokens(tokens);
                        self.generics.where_clause.to_tokens(tokens);
                        TokensOrDefault(&data.semi_token).to_tokens(tokens);
                    }
                    Fields::Unit => {
                        self.generics.where_clause.to_tokens(tokens);
                        TokensOrDefault(&data.semi_token).to_tokens(tokens);
                    }
                },
                Data::Enum(data) => {
                    self.generics.where_clause.to_tokens(tokens);
                    data.brace_token.surround(tokens, |tokens| {
                        data.variants.to_tokens(tokens);
                    });
                }
                Data::Union(data) => {
                    self.generics.where_clause.to_tokens(tokens);
                    data.fields.to_tokens(tokens);
                }
            }
        }
    }
}
227	rust/syn/discouraged.rs	Normal file
@@ -0,0 +1,227 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

//! Extensions to the parsing API with niche applicability.

use crate::buffer::Cursor;
use crate::error::Result;
use crate::parse::{inner_unexpected, ParseBuffer, Unexpected};
use proc_macro2::extra::DelimSpan;
use proc_macro2::Delimiter;
use std::cell::Cell;
use std::mem;
use std::rc::Rc;

/// Extensions to the `ParseStream` API to support speculative parsing.
pub trait Speculative {
    /// Advance this parse stream to the position of a forked parse stream.
    ///
    /// This is the opposite operation to [`ParseStream::fork`]. You can fork a
    /// parse stream, perform some speculative parsing, then join the original
    /// stream to the fork to "commit" the parsing from the fork to the main
    /// stream.
    ///
    /// If you can avoid doing this, you should, as it limits the ability to
    /// generate useful errors. That said, it is often the only way to parse
    /// syntax of the form `A* B*` for arbitrary syntax `A` and `B`. The problem
    /// is that when the fork fails to parse an `A`, it's impossible to tell
    /// whether that was because of a syntax error and the user meant to provide
    /// an `A`, or that the `A`s are finished and it's time to start parsing
    /// `B`s. Use with care.
    ///
    /// Also note that if `A` is a subset of `B`, `A* B*` can be parsed by
    /// parsing `B*` and removing the leading members of `A` from the
    /// repetition, bypassing the need to involve the downsides associated with
    /// speculative parsing.
    ///
    /// [`ParseStream::fork`]: ParseBuffer::fork
    ///
    /// # Example
    ///
    /// There has been chatter about the possibility of making the colons in the
    /// turbofish syntax like `path::to::<T>` no longer required by accepting
    /// `path::to<T>` in expression position. Specifically, according to [RFC
    /// 2544], [`PathSegment`] parsing should always try to consume a following
    /// `<` token as the start of generic arguments, and reset to the `<` if
    /// that fails (e.g. the token is acting as a less-than operator).
    ///
    /// This is the exact kind of parsing behavior which requires the "fork,
    /// try, commit" behavior that [`ParseStream::fork`] discourages. With
    /// `advance_to`, we can avoid having to parse the speculatively parsed
    /// content a second time.
    ///
    /// This change in behavior can be implemented in syn by replacing just the
    /// `Parse` implementation for `PathSegment`:
    ///
    /// ```
    /// # use syn::ext::IdentExt;
    /// use syn::parse::discouraged::Speculative;
    /// # use syn::parse::{Parse, ParseStream};
    /// # use syn::{Ident, PathArguments, Result, Token};
    ///
    /// pub struct PathSegment {
    ///     pub ident: Ident,
    ///     pub arguments: PathArguments,
    /// }
    /// #
    /// # impl<T> From<T> for PathSegment
    /// # where
    /// #     T: Into<Ident>,
    /// # {
    /// #     fn from(ident: T) -> Self {
    /// #         PathSegment {
    /// #             ident: ident.into(),
    /// #             arguments: PathArguments::None,
    /// #         }
    /// #     }
    /// # }
    ///
    /// impl Parse for PathSegment {
    ///     fn parse(input: ParseStream) -> Result<Self> {
    ///         if input.peek(Token![super])
    ///             || input.peek(Token![self])
    ///             || input.peek(Token![Self])
    ///             || input.peek(Token![crate])
    ///         {
    ///             let ident = input.call(Ident::parse_any)?;
    ///             return Ok(PathSegment::from(ident));
    ///         }
    ///
    ///         let ident = input.parse()?;
    ///         if input.peek(Token![::]) && input.peek3(Token![<]) {
    ///             return Ok(PathSegment {
    ///                 ident,
    ///                 arguments: PathArguments::AngleBracketed(input.parse()?),
    ///             });
    ///         }
    ///         if input.peek(Token![<]) && !input.peek(Token![<=]) {
    ///             let fork = input.fork();
    ///             if let Ok(arguments) = fork.parse() {
    ///                 input.advance_to(&fork);
    ///                 return Ok(PathSegment {
    ///                     ident,
    ///                     arguments: PathArguments::AngleBracketed(arguments),
    ///                 });
    ///             }
    ///         }
    ///         Ok(PathSegment::from(ident))
    ///     }
    /// }
    ///
    /// # syn::parse_str::<PathSegment>("a<b,c>").unwrap();
    /// ```
    ///
    /// # Drawbacks
    ///
    /// The main drawback of this style of speculative parsing is in error
    /// presentation. Even if the lookahead is the "correct" parse, the error
    /// that is shown is that of the "fallback" parse. To use the same example
    /// as the turbofish above, take the following unfinished "turbofish":
    ///
    /// ```text
    /// let _ = f<&'a fn(), for<'a> serde::>();
    /// ```
    ///
    /// If this is parsed as generic arguments, we can provide the error message
    ///
    /// ```text
    /// error: expected identifier
    ///  --> src.rs:L:C
    ///   |
    /// L | let _ = f<&'a fn(), for<'a> serde::>();
    ///   |                                    ^
    /// ```
    ///
    /// but if parsed using the above speculative parsing, it falls back to
    /// assuming that the `<` is a less-than when it fails to parse the generic
    /// arguments, and tries to interpret the `&'a` as the start of a labelled
    /// loop, resulting in the much less helpful error
    ///
    /// ```text
    /// error: expected `:`
    ///  --> src.rs:L:C
    ///   |
    /// L | let _ = f<&'a fn(), for<'a> serde::>();
    ///   |            ^^
    /// ```
    ///
    /// This can be mitigated with various heuristics (two examples: show both
    /// forks' parse errors, or show the one that consumed more tokens), but
    /// when you can control the grammar, sticking to something that can be
    /// parsed LL(3) and without the LL(*) speculative parsing this makes
    /// possible, displaying reasonable errors becomes much more simple.
    ///
    /// [RFC 2544]: https://github.com/rust-lang/rfcs/pull/2544
    /// [`PathSegment`]: crate::PathSegment
    ///
    /// # Performance
    ///
    /// This method performs a cheap fixed amount of work that does not depend
    /// on how far apart the two streams are positioned.
    ///
    /// # Panics
    ///
    /// The forked stream in the argument of `advance_to` must have been
    /// obtained by forking `self`. Attempting to advance to any other stream
    /// will cause a panic.
    fn advance_to(&self, fork: &Self);
}

impl<'a> Speculative for ParseBuffer<'a> {
    fn advance_to(&self, fork: &Self) {
        if !crate::buffer::same_scope(self.cursor(), fork.cursor()) {
            panic!("fork was not derived from the advancing parse stream");
        }

        let (self_unexp, self_sp) = inner_unexpected(self);
        let (fork_unexp, fork_sp) = inner_unexpected(fork);
        if !Rc::ptr_eq(&self_unexp, &fork_unexp) {
            match (fork_sp, self_sp) {
                // Unexpected set on the fork, but not on `self`, copy it over.
                (Some((span, delimiter)), None) => {
                    self_unexp.set(Unexpected::Some(span, delimiter));
                }
                // Unexpected unset. Use chain to propagate errors from fork.
                (None, None) => {
                    fork_unexp.set(Unexpected::Chain(self_unexp));

                    // Ensure toplevel 'unexpected' tokens from the fork don't
                    // propagate up the chain by replacing the root `unexpected`
                    // pointer, only 'unexpected' tokens from existing group
                    // parsers should propagate.
                    fork.unexpected
                        .set(Some(Rc::new(Cell::new(Unexpected::None))));
                }
                // Unexpected has been set on `self`. No changes needed.
                (_, Some(_)) => {}
            }
        }

        // See comment on `cell` in the struct definition.
        self.cell
            .set(unsafe { mem::transmute::<Cursor, Cursor<'static>>(fork.cursor()) });
    }
}
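The "fork, try, commit" discipline that `advance_to` enables can be illustrated without syn at all. The following std-only sketch (all names hypothetical) models a parse stream as a position into a token slice: a fork copies the position, a failed speculative parse leaves the original stream untouched, and a successful one is committed by jumping the original stream to the fork's position.

```rust
// A toy parse stream: a slice of tokens plus a cursor position.
#[derive(Clone)]
struct Stream<'a> {
    tokens: &'a [&'a str],
    pos: usize,
}

impl<'a> Stream<'a> {
    // Forking copies the position; the fork parses independently.
    fn fork(&self) -> Stream<'a> {
        self.clone()
    }

    // Committing the fork: jump this stream to wherever the fork got to.
    fn advance_to(&mut self, fork: &Stream<'a>) {
        self.pos = fork.pos;
    }

    // Consume one expected token or fail without consuming anything.
    fn parse_keyword(&mut self, kw: &str) -> Result<(), String> {
        if self.tokens.get(self.pos) == Some(&kw) {
            self.pos += 1;
            Ok(())
        } else {
            Err(format!("expected `{kw}`"))
        }
    }
}

fn main() {
    let mut input = Stream { tokens: &["fn", "main"], pos: 0 };

    // Speculatively try `struct`; the failure only moves the fork, never
    // the real stream, so `input` is still at position 0 afterwards.
    let mut fork = input.fork();
    if fork.parse_keyword("struct").is_err() {
        assert_eq!(input.pos, 0);
    }

    // A successful speculative parse is committed with `advance_to`,
    // avoiding a second parse of the same tokens.
    let mut fork = input.fork();
    if fork.parse_keyword("fn").is_ok() {
        input.advance_to(&fork);
    }
    assert_eq!(input.pos, 1);
}
```

The real `ParseBuffer::advance_to` does the same position jump, plus the bookkeeping shown above for propagating "unexpected token" state between the two buffers, which a position-only model does not need.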
/// Extensions to the `ParseStream` API to support manipulating invisible
/// delimiters the same as if they were visible.
pub trait AnyDelimiter {
    /// Returns the delimiter, the span of the delimiter token, and the nested
    /// contents for further parsing.
    fn parse_any_delimiter(&self) -> Result<(Delimiter, DelimSpan, ParseBuffer)>;
}

impl<'a> AnyDelimiter for ParseBuffer<'a> {
    fn parse_any_delimiter(&self) -> Result<(Delimiter, DelimSpan, ParseBuffer)> {
        self.step(|cursor| {
            if let Some((content, delimiter, span, rest)) = cursor.any_group() {
                let scope = span.close();
                let nested = crate::parse::advance_step_cursor(cursor, content);
                let unexpected = crate::parse::get_unexpected(self);
                let content = crate::parse::new_parse_buffer(scope, nested, unexpected);
                Ok(((delimiter, span, content), rest))
            } else {
                Err(cursor.error("expected any delimiter"))
            }
        })
    }
}
60	rust/syn/drops.rs	Normal file
@@ -0,0 +1,60 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use std::iter;
use std::mem::ManuallyDrop;
use std::ops::{Deref, DerefMut};
use std::option;
use std::slice;

#[repr(transparent)]
pub(crate) struct NoDrop<T: ?Sized>(ManuallyDrop<T>);

impl<T> NoDrop<T> {
    pub(crate) fn new(value: T) -> Self
    where
        T: TrivialDrop,
    {
        NoDrop(ManuallyDrop::new(value))
    }
}

impl<T: ?Sized> Deref for NoDrop<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.0
    }
}

impl<T: ?Sized> DerefMut for NoDrop<T> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.0
    }
}

pub(crate) trait TrivialDrop {}

impl<T> TrivialDrop for iter::Empty<T> {}
impl<T> TrivialDrop for slice::Iter<'_, T> {}
impl<T> TrivialDrop for slice::IterMut<'_, T> {}
impl<T> TrivialDrop for option::IntoIter<&T> {}
impl<T> TrivialDrop for option::IntoIter<&mut T> {}

#[test]
fn test_needs_drop() {
    use std::mem::needs_drop;

    struct NeedsDrop;

    impl Drop for NeedsDrop {
        fn drop(&mut self) {}
    }

    assert!(needs_drop::<NeedsDrop>());

    // Test each of the types with a handwritten TrivialDrop impl above.
    assert!(!needs_drop::<iter::Empty<NeedsDrop>>());
    assert!(!needs_drop::<slice::Iter<NeedsDrop>>());
    assert!(!needs_drop::<slice::IterMut<NeedsDrop>>());
    assert!(!needs_drop::<option::IntoIter<&NeedsDrop>>());
    assert!(!needs_drop::<option::IntoIter<&mut NeedsDrop>>());
}
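The mechanism behind `NoDrop` can be demonstrated in isolation with the standard library alone: wrapping a type in `ManuallyDrop` erases its drop glue, so `std::mem::needs_drop` reports `false` for the wrapper even when the inner type has a `Drop` impl. A minimal sketch (the `Noisy` type is made up for illustration):

```rust
use std::mem::{needs_drop, ManuallyDrop};

// A type with a (no-op) destructor, so it has drop glue.
struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) {}
}

fn main() {
    // The bare type carries a destructor...
    assert!(needs_drop::<Noisy>());
    // ...but ManuallyDrop<T> never runs T's destructor, so the compiler
    // statically knows no drop glue is needed for the wrapper.
    assert!(!needs_drop::<ManuallyDrop<Noisy>>());
}
```

This is why `NoDrop::new` is bounded by `TrivialDrop`: the wrapper is only safe to use for types whose destructor was a no-op anyway, and the `test_needs_drop` test above pins down exactly which iterator types qualify.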
469	rust/syn/error.rs	Normal file
@@ -0,0 +1,469 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

#[cfg(feature = "parsing")]
use crate::buffer::Cursor;
use crate::thread::ThreadBound;
use proc_macro2::{
    Delimiter, Group, Ident, LexError, Literal, Punct, Spacing, Span, TokenStream, TokenTree,
};
#[cfg(feature = "printing")]
use quote::ToTokens;
use std::fmt::{self, Debug, Display};
use std::slice;
use std::vec;

/// The result of a Syn parser.
pub type Result<T> = std::result::Result<T, Error>;

/// Error returned when a Syn parser cannot parse the input tokens.
///
/// # Error reporting in proc macros
///
/// The correct way to report errors back to the compiler from a procedural
/// macro is by emitting an appropriately spanned invocation of
/// [`compile_error!`] in the generated code. This produces a better diagnostic
/// message than simply panicking the macro.
///
/// [`compile_error!`]: std::compile_error!
///
/// When parsing macro input, the [`parse_macro_input!`] macro handles the
/// conversion to `compile_error!` automatically.
///
/// [`parse_macro_input!`]: crate::parse_macro_input!
///
/// ```
/// # extern crate proc_macro;
/// #
/// use proc_macro::TokenStream;
/// use syn::parse::{Parse, ParseStream, Result};
/// use syn::{parse_macro_input, ItemFn};
///
/// # const IGNORE: &str = stringify! {
/// #[proc_macro_attribute]
/// # };
/// pub fn my_attr(args: TokenStream, input: TokenStream) -> TokenStream {
///     let args = parse_macro_input!(args as MyAttrArgs);
///     let input = parse_macro_input!(input as ItemFn);
///
///     /* ... */
///     # TokenStream::new()
/// }
///
/// struct MyAttrArgs {
///     # _k: [(); { stringify! {
///     ...
///     # }; 0 }]
/// }
///
/// impl Parse for MyAttrArgs {
///     fn parse(input: ParseStream) -> Result<Self> {
///         # stringify! {
///         ...
///         # };
///         # unimplemented!()
///     }
/// }
/// ```
///
/// For errors that arise later than the initial parsing stage, the
/// [`.to_compile_error()`] or [`.into_compile_error()`] methods can be used to
/// perform an explicit conversion to `compile_error!`.
///
/// [`.to_compile_error()`]: Error::to_compile_error
/// [`.into_compile_error()`]: Error::into_compile_error
///
/// ```
/// # extern crate proc_macro;
/// #
/// # use proc_macro::TokenStream;
/// # use syn::{parse_macro_input, DeriveInput};
/// #
/// # const IGNORE: &str = stringify! {
/// #[proc_macro_derive(MyDerive)]
/// # };
/// pub fn my_derive(input: TokenStream) -> TokenStream {
///     let input = parse_macro_input!(input as DeriveInput);
///
///     // fn(DeriveInput) -> syn::Result<proc_macro2::TokenStream>
///     expand::my_derive(input)
///         .unwrap_or_else(syn::Error::into_compile_error)
///         .into()
/// }
/// #
/// # mod expand {
/// #     use proc_macro2::TokenStream;
/// #     use syn::{DeriveInput, Result};
/// #
/// #     pub fn my_derive(input: DeriveInput) -> Result<TokenStream> {
/// #         unimplemented!()
/// #     }
/// # }
/// ```
pub struct Error {
    messages: Vec<ErrorMessage>,
}

struct ErrorMessage {
    // Span is implemented as an index into a thread-local interner to keep the
    // size small. It is not safe to access from a different thread. We want
    // errors to be Send and Sync to play nicely with ecosystem crates for error
    // handling, so pin the span we're given to its original thread and assume
    // it is Span::call_site if accessed from any other thread.
    span: ThreadBound<SpanRange>,
    message: String,
}

// Cannot use std::ops::Range<Span> because that does not implement Copy,
// whereas ThreadBound<T> requires a Copy impl as a way to ensure no Drop impls
// are involved.
struct SpanRange {
    start: Span,
    end: Span,
}

#[cfg(test)]
struct _Test
where
    Error: Send + Sync;

impl Error {
    /// Usually the [`ParseStream::error`] method will be used instead, which
    /// automatically uses the correct span from the current position of the
    /// parse stream.
    ///
    /// Use `Error::new` when the error needs to be triggered on some span other
    /// than where the parse stream is currently positioned.
    ///
    /// [`ParseStream::error`]: crate::parse::ParseBuffer::error
    ///
    /// # Example
    ///
    /// ```
    /// use syn::{Error, Ident, LitStr, Result, Token};
    /// use syn::parse::ParseStream;
    ///
    /// // Parses input that looks like `name = "string"` where the key must be
    /// // the identifier `name` and the value may be any string literal.
    /// // Returns the string literal.
    /// fn parse_name(input: ParseStream) -> Result<LitStr> {
    ///     let name_token: Ident = input.parse()?;
    ///     if name_token != "name" {
    ///         // Trigger an error not on the current position of the stream,
    ///         // but on the position of the unexpected identifier.
    ///         return Err(Error::new(name_token.span(), "expected `name`"));
    ///     }
    ///     input.parse::<Token![=]>()?;
    ///     let s: LitStr = input.parse()?;
    ///     Ok(s)
    /// }
    /// ```
    pub fn new<T: Display>(span: Span, message: T) -> Self {
        return new(span, message.to_string());

        fn new(span: Span, message: String) -> Error {
            Error {
                messages: vec![ErrorMessage {
                    span: ThreadBound::new(SpanRange {
                        start: span,
                        end: span,
                    }),
                    message,
                }],
            }
        }
    }

    /// Creates an error with the specified message spanning the given syntax
    /// tree node.
    ///
    /// Unlike the `Error::new` constructor, this constructor takes an argument
    /// `tokens` which is a syntax tree node. This allows the resulting `Error`
    /// to attempt to span all tokens inside of `tokens`. While you would
    /// typically be able to use the `Spanned` trait with the above `Error::new`
    /// constructor, implementation limitations today mean that
    /// `Error::new_spanned` may provide a higher-quality error message on
    /// stable Rust.
    ///
    /// When in doubt it's recommended to stick to `Error::new` (or
    /// `ParseStream::error`)!
    #[cfg(feature = "printing")]
    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    pub fn new_spanned<T: ToTokens, U: Display>(tokens: T, message: U) -> Self {
        return new_spanned(tokens.into_token_stream(), message.to_string());

        fn new_spanned(tokens: TokenStream, message: String) -> Error {
            let mut iter = tokens.into_iter();
            let start = iter.next().map_or_else(Span::call_site, |t| t.span());
            let end = iter.last().map_or(start, |t| t.span());
            Error {
                messages: vec![ErrorMessage {
                    span: ThreadBound::new(SpanRange { start, end }),
                    message,
                }],
            }
        }
    }

    /// The source location of the error.
    ///
    /// Spans are not thread-safe so this function returns `Span::call_site()`
    /// if called from a different thread than the one on which the `Error` was
    /// originally created.
    pub fn span(&self) -> Span {
        let SpanRange { start, end } = match self.messages[0].span.get() {
            Some(span) => *span,
            None => return Span::call_site(),
        };
        start.join(end).unwrap_or(start)
    }

    /// Render the error as an invocation of [`compile_error!`].
    ///
    /// The [`parse_macro_input!`] macro provides a convenient way to invoke
    /// this method correctly in a procedural macro.
    ///
    /// [`compile_error!`]: std::compile_error!
    /// [`parse_macro_input!`]: crate::parse_macro_input!
    pub fn to_compile_error(&self) -> TokenStream {
        self.messages
            .iter()
            .map(ErrorMessage::to_compile_error)
            .collect()
    }

    /// Render the error as an invocation of [`compile_error!`].
    ///
    /// [`compile_error!`]: std::compile_error!
    ///
    /// # Example
    ///
    /// ```
    /// # extern crate proc_macro;
    /// #
    /// use proc_macro::TokenStream;
    /// use syn::{parse_macro_input, DeriveInput, Error};
    ///
    /// # const _: &str = stringify! {
    /// #[proc_macro_derive(MyTrait)]
    /// # };
    /// pub fn derive_my_trait(input: TokenStream) -> TokenStream {
    ///     let input = parse_macro_input!(input as DeriveInput);
    ///     my_trait::expand(input)
    ///         .unwrap_or_else(Error::into_compile_error)
    ///         .into()
    /// }
    ///
    /// mod my_trait {
    ///     use proc_macro2::TokenStream;
    ///     use syn::{DeriveInput, Result};
    ///
    ///     pub(crate) fn expand(input: DeriveInput) -> Result<TokenStream> {
    ///         /* ... */
    ///         # unimplemented!()
    ///     }
    /// }
    /// ```
    pub fn into_compile_error(self) -> TokenStream {
        self.to_compile_error()
    }

    /// Add another error message to self such that when `to_compile_error()` is
    /// called, both errors will be emitted together.
    pub fn combine(&mut self, another: Error) {
        self.messages.extend(another.messages);
    }
}

impl ErrorMessage {
    fn to_compile_error(&self) -> TokenStream {
        let (start, end) = match self.span.get() {
            Some(range) => (range.start, range.end),
            None => (Span::call_site(), Span::call_site()),
        };

        // ::core::compile_error!($message)
        TokenStream::from_iter([
            TokenTree::Punct({
                let mut punct = Punct::new(':', Spacing::Joint);
                punct.set_span(start);
                punct
            }),
            TokenTree::Punct({
                let mut punct = Punct::new(':', Spacing::Alone);
                punct.set_span(start);
                punct
            }),
            TokenTree::Ident(Ident::new("core", start)),
            TokenTree::Punct({
                let mut punct = Punct::new(':', Spacing::Joint);
                punct.set_span(start);
                punct
            }),
            TokenTree::Punct({
                let mut punct = Punct::new(':', Spacing::Alone);
                punct.set_span(start);
                punct
            }),
            TokenTree::Ident(Ident::new("compile_error", start)),
            TokenTree::Punct({
                let mut punct = Punct::new('!', Spacing::Alone);
                punct.set_span(start);
                punct
            }),
            TokenTree::Group({
                let mut group = Group::new(Delimiter::Brace, {
                    TokenStream::from_iter([TokenTree::Literal({
                        let mut string = Literal::string(&self.message);
                        string.set_span(end);
                        string
                    })])
                });
                group.set_span(end);
                group
            }),
        ])
    }
}

#[cfg(feature = "parsing")]
pub(crate) fn new_at<T: Display>(scope: Span, cursor: Cursor, message: T) -> Error {
    if cursor.eof() {
        Error::new(scope, format!("unexpected end of input, {}", message))
    } else {
        let span = crate::buffer::open_span_of_group(cursor);
        Error::new(span, message)
    }
}

#[cfg(all(feature = "parsing", any(feature = "full", feature = "derive")))]
pub(crate) fn new2<T: Display>(start: Span, end: Span, message: T) -> Error {
    return new2(start, end, message.to_string());

    fn new2(start: Span, end: Span, message: String) -> Error {
        Error {
            messages: vec![ErrorMessage {
                span: ThreadBound::new(SpanRange { start, end }),
                message,
            }],
        }
    }
}

impl Debug for Error {
    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        if self.messages.len() == 1 {
            formatter
                .debug_tuple("Error")
                .field(&self.messages[0])
                .finish()
        } else {
            formatter
                .debug_tuple("Error")
                .field(&self.messages)
                .finish()
        }
    }
}

impl Debug for ErrorMessage {
    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        Debug::fmt(&self.message, formatter)
    }
}

impl Display for Error {
    fn fmt(&self, formatter: &mut fmt::Formatter) -> fmt::Result {
        formatter.write_str(&self.messages[0].message)
    }
}

impl Clone for Error {
    fn clone(&self) -> Self {
        Error {
            messages: self.messages.clone(),
        }
    }
}

impl Clone for ErrorMessage {
    fn clone(&self) -> Self {
        ErrorMessage {
            span: self.span,
            message: self.message.clone(),
        }
    }
}

impl Clone for SpanRange {
    fn clone(&self) -> Self {
        *self
    }
}

impl Copy for SpanRange {}

impl std::error::Error for Error {}

impl From<LexError> for Error {
    fn from(err: LexError) -> Self {
        Error::new(err.span(), err)
    }
}

impl IntoIterator for Error {
    type Item = Error;
    type IntoIter = IntoIter;

    fn into_iter(self) -> Self::IntoIter {
        IntoIter {
            messages: self.messages.into_iter(),
        }
    }
}

pub struct IntoIter {
    messages: vec::IntoIter<ErrorMessage>,
}

impl Iterator for IntoIter {
    type Item = Error;

    fn next(&mut self) -> Option<Self::Item> {
        Some(Error {
            messages: vec![self.messages.next()?],
        })
    }
}

impl<'a> IntoIterator for &'a Error {
    type Item = Error;
    type IntoIter = Iter<'a>;

    fn into_iter(self) -> Self::IntoIter {
        Iter {
            messages: self.messages.iter(),
        }
    }
}

pub struct Iter<'a> {
    messages: slice::Iter<'a, ErrorMessage>,
}

impl<'a> Iterator for Iter<'a> {
    type Item = Error;

    fn next(&mut self) -> Option<Self::Item> {
        Some(Error {
            messages: vec![self.messages.next()?.clone()],
        })
    }
}

impl Extend<Error> for Error {
    fn extend<T: IntoIterator<Item = Error>>(&mut self, iter: T) {
        for err in iter {
            self.combine(err);
        }
    }
}
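The multi-message design in `error.rs` above (a `Vec<ErrorMessage>` plus `combine`/`Extend`, with `Display` showing only the first message) lets a parser accumulate several diagnostics and emit them all at once. A minimal std-only sketch of that aggregation pattern; `MultiError` and its methods are illustrative stand-ins, not syn's actual API:

```rust
use std::fmt;

// Illustrative stand-in for syn::Error: one value carrying many messages.
#[derive(Clone, Debug)]
struct MultiError {
    messages: Vec<String>,
}

impl MultiError {
    fn new(message: impl fmt::Display) -> Self {
        MultiError { messages: vec![message.to_string()] }
    }

    // Mirrors the combine() idea: merge another error's messages into self.
    fn combine(&mut self, another: MultiError) {
        self.messages.extend(another.messages);
    }
}

// As in the code above, Display shows only the first message; the rest are
// retained for when every diagnostic is rendered (syn does this via
// compile_error! invocations, one per message).
impl fmt::Display for MultiError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        f.write_str(&self.messages[0])
    }
}

fn main() {
    let mut err = MultiError::new("expected `name`");
    err.combine(MultiError::new("unexpected token"));
    assert_eq!(err.to_string(), "expected `name`");
    assert_eq!(err.messages.len(), 2);
    println!("{} ({} messages)", err, err.messages.len());
}
```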
75	rust/syn/export.rs	Normal file
@@ -0,0 +1,75 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

#[doc(hidden)]
pub use std::clone::Clone;
#[doc(hidden)]
pub use std::cmp::{Eq, PartialEq};
#[doc(hidden)]
pub use std::concat;
#[doc(hidden)]
pub use std::default::Default;
#[doc(hidden)]
pub use std::fmt::Debug;
#[doc(hidden)]
pub use std::hash::{Hash, Hasher};
#[doc(hidden)]
pub use std::marker::Copy;
#[doc(hidden)]
pub use std::option::Option::{None, Some};
#[doc(hidden)]
pub use std::result::Result::{Err, Ok};
#[doc(hidden)]
pub use std::stringify;

#[doc(hidden)]
pub type Formatter<'a> = std::fmt::Formatter<'a>;
#[doc(hidden)]
pub type FmtResult = std::fmt::Result;

#[doc(hidden)]
pub type bool = std::primitive::bool;
#[doc(hidden)]
pub type str = std::primitive::str;

#[cfg(feature = "printing")]
#[doc(hidden)]
pub use quote;

#[doc(hidden)]
pub type Span = proc_macro2::Span;
#[doc(hidden)]
pub type TokenStream2 = proc_macro2::TokenStream;

#[cfg(feature = "parsing")]
#[doc(hidden)]
pub use crate::group::{parse_braces, parse_brackets, parse_parens};

#[doc(hidden)]
pub use crate::span::IntoSpans;

#[cfg(all(feature = "parsing", feature = "printing"))]
#[doc(hidden)]
pub use crate::parse_quote::parse as parse_quote;

#[cfg(feature = "parsing")]
#[doc(hidden)]
pub use crate::token::parsing::{peek_punct, punct as parse_punct};

#[cfg(feature = "printing")]
#[doc(hidden)]
pub use crate::token::printing::punct as print_punct;

#[cfg(feature = "parsing")]
#[doc(hidden)]
pub use crate::token::private::CustomToken;

#[cfg(feature = "proc-macro")]
#[doc(hidden)]
pub type TokenStream = proc_macro::TokenStream;

#[cfg(feature = "printing")]
#[doc(hidden)]
pub use quote::{ToTokens, TokenStreamExt};

#[doc(hidden)]
pub struct private(pub(crate) ());
4175	rust/syn/expr.rs	Normal file
File diff suppressed because it is too large
138	rust/syn/ext.rs	Normal file
@@ -0,0 +1,138 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

//! Extension traits to provide parsing methods on foreign types.

use crate::buffer::Cursor;
use crate::error::Result;
use crate::parse::ParseStream;
use crate::parse::Peek;
use crate::sealed::lookahead;
use crate::token::CustomToken;
use proc_macro2::Ident;

/// Additional methods for `Ident` not provided by proc-macro2 or libproc_macro.
///
/// This trait is sealed and cannot be implemented for types outside of Syn. It
/// is implemented only for `proc_macro2::Ident`.
pub trait IdentExt: Sized + private::Sealed {
    /// Parses any identifier including keywords.
    ///
    /// This is useful when parsing macro input which allows Rust keywords as
    /// identifiers.
    ///
    /// # Example
    ///
    /// ```
    /// use syn::{Error, Ident, Result, Token};
    /// use syn::ext::IdentExt;
    /// use syn::parse::ParseStream;
    ///
    /// mod kw {
    ///     syn::custom_keyword!(name);
    /// }
    ///
    /// // Parses input that looks like `name = NAME` where `NAME` can be
    /// // any identifier.
    /// //
    /// // Examples:
    /// //
    /// //     name = anything
    /// //     name = impl
    /// fn parse_dsl(input: ParseStream) -> Result<Ident> {
    ///     input.parse::<kw::name>()?;
    ///     input.parse::<Token![=]>()?;
    ///     let name = input.call(Ident::parse_any)?;
    ///     Ok(name)
    /// }
    /// ```
    fn parse_any(input: ParseStream) -> Result<Self>;

    /// Peeks any identifier including keywords. Usage:
    /// `input.peek(Ident::peek_any)`
    ///
    /// This is different from `input.peek(Ident)` which only returns true in
    /// the case of an ident which is not a Rust keyword.
    #[allow(non_upper_case_globals)]
    const peek_any: private::PeekFn = private::PeekFn;

    /// Strips the raw marker `r#`, if any, from the beginning of an ident.
    ///
    ///   - unraw(`x`) = `x`
    ///   - unraw(`move`) = `move`
    ///   - unraw(`r#move`) = `move`
    ///
    /// # Example
    ///
    /// In the case of interop with other languages like Python that have a
    /// different set of keywords than Rust, we might come across macro input
    /// that involves raw identifiers to refer to ordinary variables in the
    /// other language with a name that happens to be a Rust keyword.
    ///
    /// The function below appends an identifier from the caller's input onto a
    /// fixed prefix. Without using `unraw()`, this would tend to produce
    /// invalid identifiers like `__pyo3_get_r#move`.
    ///
    /// ```
    /// use proc_macro2::Span;
    /// use syn::Ident;
    /// use syn::ext::IdentExt;
    ///
    /// fn ident_for_getter(variable: &Ident) -> Ident {
    ///     let getter = format!("__pyo3_get_{}", variable.unraw());
    ///     Ident::new(&getter, Span::call_site())
    /// }
    /// ```
    fn unraw(&self) -> Ident;
}

impl IdentExt for Ident {
    fn parse_any(input: ParseStream) -> Result<Self> {
        input.step(|cursor| match cursor.ident() {
            Some((ident, rest)) => Ok((ident, rest)),
            None => Err(cursor.error("expected ident")),
        })
    }

    fn unraw(&self) -> Ident {
        let string = self.to_string();
        if let Some(string) = string.strip_prefix("r#") {
            Ident::new(string, self.span())
        } else {
            self.clone()
        }
    }
}

impl Peek for private::PeekFn {
    type Token = private::IdentAny;
}

impl CustomToken for private::IdentAny {
    fn peek(cursor: Cursor) -> bool {
        cursor.ident().is_some()
    }

    fn display() -> &'static str {
        "identifier"
    }
}

impl lookahead::Sealed for private::PeekFn {}

mod private {
    use proc_macro2::Ident;

    pub trait Sealed {}

    impl Sealed for Ident {}

    pub struct PeekFn;
    pub struct IdentAny;

    impl Copy for PeekFn {}
    impl Clone for PeekFn {
        fn clone(&self) -> Self {
            *self
        }
    }
}
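The `unraw` implementation in `ext.rs` above reduces to a `strip_prefix("r#")` on the textual form of the identifier. The same string-level logic, isolated as a std-only sketch (the helper name `unraw_str` is ours, for illustration; syn operates on `Ident` values, not `&str`):

```rust
// Strip the raw-identifier marker `r#`, if present, from an identifier's text.
// String-level sketch of the IdentExt::unraw logic shown above.
fn unraw_str(ident: &str) -> &str {
    ident.strip_prefix("r#").unwrap_or(ident)
}

fn main() {
    assert_eq!(unraw_str("x"), "x");
    assert_eq!(unraw_str("move"), "move");
    assert_eq!(unraw_str("r#move"), "move");

    // Useful when concatenating identifiers: avoids producing invalid names
    // like `__pyo3_get_r#move`.
    let getter = format!("__pyo3_get_{}", unraw_str("r#move"));
    assert_eq!(getter, "__pyo3_get_move");
    println!("{getter}");
}
```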
127	rust/syn/file.rs	Normal file
@@ -0,0 +1,127 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::attr::Attribute;
use crate::item::Item;

ast_struct! {
    /// A complete file of Rust source code.
    ///
    /// Typically `File` objects are created with [`parse_file`].
    ///
    /// [`parse_file`]: crate::parse_file
    ///
    /// # Example
    ///
    /// Parse a Rust source file into a `syn::File` and print out a debug
    /// representation of the syntax tree.
    ///
    /// ```
    /// use std::env;
    /// use std::fs;
    /// use std::process;
    ///
    /// fn main() {
    /// # }
    /// #
    /// # fn fake_main() {
    ///     let mut args = env::args();
    ///     let _ = args.next(); // executable name
    ///
    ///     let filename = match (args.next(), args.next()) {
    ///         (Some(filename), None) => filename,
    ///         _ => {
    ///             eprintln!("Usage: dump-syntax path/to/filename.rs");
    ///             process::exit(1);
    ///         }
    ///     };
    ///
    ///     let src = fs::read_to_string(&filename).expect("unable to read file");
    ///     let syntax = syn::parse_file(&src).expect("unable to parse file");
    ///
    ///     // Debug impl is available if Syn is built with "extra-traits" feature.
    ///     println!("{:#?}", syntax);
    /// }
    /// ```
    ///
    /// Running with its own source code as input, this program prints output
    /// that begins with:
    ///
    /// ```text
    /// File {
    ///     shebang: None,
    ///     attrs: [],
    ///     items: [
    ///         Use(
    ///             ItemUse {
    ///                 attrs: [],
    ///                 vis: Inherited,
    ///                 use_token: Use,
    ///                 leading_colon: None,
    ///                 tree: Path(
    ///                     UsePath {
    ///                         ident: Ident(
    ///                             std,
    ///                         ),
    ///                         colon2_token: Colon2,
    ///                         tree: Name(
    ///                             UseName {
    ///                                 ident: Ident(
    ///                                     env,
    ///                                 ),
    ///                             },
    ///                         ),
    ///                     },
    ///                 ),
    ///                 semi_token: Semi,
    ///             },
    ///         ),
    ///     ...
    /// ```
    #[cfg_attr(docsrs, doc(cfg(feature = "full")))]
    pub struct File {
        pub shebang: Option<String>,
        pub attrs: Vec<Attribute>,
        pub items: Vec<Item>,
    }
}

#[cfg(feature = "parsing")]
pub(crate) mod parsing {
    use crate::attr::Attribute;
    use crate::error::Result;
    use crate::file::File;
    use crate::parse::{Parse, ParseStream};

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for File {
        fn parse(input: ParseStream) -> Result<Self> {
            Ok(File {
                shebang: None,
                attrs: input.call(Attribute::parse_inner)?,
                items: {
                    let mut items = Vec::new();
                    while !input.is_empty() {
                        items.push(input.parse()?);
                    }
                    items
                },
            })
        }
    }
}

#[cfg(feature = "printing")]
mod printing {
    use crate::attr::FilterAttrs;
    use crate::file::File;
    use proc_macro2::TokenStream;
    use quote::{ToTokens, TokenStreamExt};

    #[cfg_attr(docsrs, doc(cfg(feature = "printing")))]
    impl ToTokens for File {
        fn to_tokens(&self, tokens: &mut TokenStream) {
            tokens.append_all(self.attrs.inner());
            tokens.append_all(&self.items);
        }
    }
}
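`File::parse` in `file.rs` above uses the standard syn repeat-until-empty idiom: keep calling `input.parse()` while `!input.is_empty()`, collecting each parsed item. The same loop shape over a plain token list, as a std-only sketch (the `Input` type here is an illustrative stand-in, not syn's `ParseStream`):

```rust
// Illustrative stand-in for a parse stream: a cursor over whitespace-split tokens.
struct Input<'a> {
    tokens: std::iter::Peekable<std::str::SplitWhitespace<'a>>,
}

impl<'a> Input<'a> {
    fn new(src: &'a str) -> Self {
        Input { tokens: src.split_whitespace().peekable() }
    }

    fn is_empty(&mut self) -> bool {
        self.tokens.peek().is_none()
    }

    fn parse(&mut self) -> Result<String, String> {
        self.tokens
            .next()
            .map(str::to_string)
            .ok_or_else(|| "unexpected end of input".to_string())
    }
}

fn main() {
    // Same shape as File::parse: collect items while the stream is non-empty.
    let mut input = Input::new("use std ;");
    let mut items = Vec::new();
    while !input.is_empty() {
        items.push(input.parse().unwrap());
    }
    assert_eq!(items, ["use", "std", ";"]);
    println!("{items:?}");
}
```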
775	rust/syn/fixup.rs	Normal file
@@ -0,0 +1,775 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT
|
||||
|
||||
use crate::classify;
|
||||
use crate::expr::Expr;
|
||||
#[cfg(feature = "full")]
|
||||
use crate::expr::{
|
||||
ExprBreak, ExprRange, ExprRawAddr, ExprReference, ExprReturn, ExprUnary, ExprYield,
|
||||
};
|
||||
use crate::precedence::Precedence;
|
||||
#[cfg(feature = "full")]
|
||||
use crate::ty::ReturnType;
|
||||
|
||||
pub(crate) struct FixupContext {
|
||||
#[cfg(feature = "full")]
|
||||
previous_operator: Precedence,
|
||||
#[cfg(feature = "full")]
|
||||
next_operator: Precedence,
|
||||
|
||||
// Print expression such that it can be parsed back as a statement
|
||||
// consisting of the original expression.
|
||||
//
|
||||
// The effect of this is for binary operators in statement position to set
|
||||
// `leftmost_subexpression_in_stmt` when printing their left-hand operand.
|
||||
//
|
||||
// (match x {}) - 1; // match needs parens when LHS of binary operator
|
||||
//
|
||||
// match x {}; // not when its own statement
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
stmt: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// (match x {}) - 1; // subexpression needs parens
|
||||
//
|
||||
// let _ = match x {} - 1; // no parens
|
||||
//
|
||||
// There are 3 distinguishable contexts in which `print_expr` might be
|
||||
// called with the expression `$match` as its argument, where `$match`
|
||||
// represents an expression of kind `ExprKind::Match`:
|
||||
//
|
||||
// - stmt=false leftmost_subexpression_in_stmt=false
|
||||
//
|
||||
// Example: `let _ = $match - 1;`
|
||||
//
|
||||
// No parentheses required.
|
||||
//
|
||||
// - stmt=false leftmost_subexpression_in_stmt=true
|
||||
//
|
||||
// Example: `$match - 1;`
|
||||
//
|
||||
// Must parenthesize `($match)`, otherwise parsing back the output as a
|
||||
// statement would terminate the statement after the closing brace of
|
||||
// the match, parsing `-1;` as a separate statement.
|
||||
//
|
||||
// - stmt=true leftmost_subexpression_in_stmt=false
|
||||
//
|
||||
// Example: `$match;`
|
||||
//
|
||||
// No parentheses required.
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_stmt: bool,
|
||||
|
||||
// Print expression such that it can be parsed as a match arm.
|
||||
//
|
||||
// This is almost equivalent to `stmt`, but the grammar diverges a tiny bit
|
||||
// between statements and match arms when it comes to braced macro calls.
|
||||
// Macro calls with brace delimiter terminate a statement without a
|
||||
// semicolon, but do not terminate a match-arm without comma.
|
||||
//
|
||||
// m! {} - 1; // two statements: a macro call followed by -1 literal
|
||||
//
|
||||
// match () {
|
||||
// _ => m! {} - 1, // binary subtraction operator
|
||||
// }
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
match_arm: bool,
|
||||
|
||||
// This is almost equivalent to `leftmost_subexpression_in_stmt`, other than
|
||||
// for braced macro calls.
|
||||
//
|
||||
// If we have `m! {} - 1` as an expression, the leftmost subexpression
|
||||
// `m! {}` will need to be parenthesized in the statement case but not the
|
||||
// match-arm case.
|
||||
//
|
||||
// (m! {}) - 1; // subexpression needs parens
|
||||
//
|
||||
// match () {
|
||||
// _ => m! {} - 1, // no parens
|
||||
// }
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_match_arm: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// if let _ = (Struct {}) {} // needs parens
|
||||
//
|
||||
// match () {
|
||||
// () if let _ = Struct {} => {} // no parens
|
||||
// }
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
condition: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// if break Struct {} == (break) {} // needs parens
|
||||
//
|
||||
// if break break == Struct {} {} // no parens
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
rightmost_subexpression_in_condition: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// if break ({ x }).field + 1 {} needs parens
|
||||
//
|
||||
// if break 1 + { x }.field {} // no parens
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_optional_operand: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// let _ = (return) - 1; // without paren, this would return -1
|
||||
//
|
||||
// let _ = return + 1; // no paren because '+' cannot begin expr
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
next_operator_can_begin_expr: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// let _ = 1 + return 1; // no parens if rightmost subexpression
|
||||
//
|
||||
// let _ = 1 + (return 1) + 1; // needs parens
|
||||
//
|
||||
#[cfg(feature = "full")]
|
||||
next_operator_can_continue_expr: bool,
|
||||
|
||||
// This is the difference between:
|
||||
//
|
||||
// let _ = x as u8 + T;
|
||||
//
|
||||
// let _ = (x as u8) < T;
|
||||
//
|
||||
// Without parens, the latter would want to parse `u8<T...` as a type.
|
||||
next_operator_can_begin_generics: bool,
|
||||
}
|
||||
|
||||
impl FixupContext {
|
||||
/// The default amount of fixing is minimal fixing. Fixups should be turned
|
||||
/// on in a targeted fashion where needed.
|
||||
pub const NONE: Self = FixupContext {
|
||||
#[cfg(feature = "full")]
|
||||
previous_operator: Precedence::MIN,
|
||||
#[cfg(feature = "full")]
|
||||
next_operator: Precedence::MIN,
|
||||
#[cfg(feature = "full")]
|
||||
stmt: false,
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_stmt: false,
|
||||
#[cfg(feature = "full")]
|
||||
match_arm: false,
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_match_arm: false,
|
||||
#[cfg(feature = "full")]
|
||||
condition: false,
|
||||
#[cfg(feature = "full")]
|
||||
rightmost_subexpression_in_condition: false,
|
||||
#[cfg(feature = "full")]
|
||||
leftmost_subexpression_in_optional_operand: false,
|
||||
#[cfg(feature = "full")]
|
||||
            next_operator_can_begin_expr: false,
            #[cfg(feature = "full")]
            next_operator_can_continue_expr: false,
            next_operator_can_begin_generics: false,
        };

    /// Create the initial fixup for printing an expression in statement
    /// position.
    #[cfg(feature = "full")]
    pub fn new_stmt() -> Self {
        FixupContext {
            stmt: true,
            ..FixupContext::NONE
        }
    }

    /// Create the initial fixup for printing an expression as the right-hand
    /// side of a match arm.
    #[cfg(feature = "full")]
    pub fn new_match_arm() -> Self {
        FixupContext {
            match_arm: true,
            ..FixupContext::NONE
        }
    }

    /// Create the initial fixup for printing an expression as the "condition"
    /// of an `if` or `while`. There are a few other positions which are
    /// grammatically equivalent and also use this, such as the iterator
    /// expression in `for` and the scrutinee in `match`.
    #[cfg(feature = "full")]
    pub fn new_condition() -> Self {
        FixupContext {
            condition: true,
            rightmost_subexpression_in_condition: true,
            ..FixupContext::NONE
        }
    }

    /// Transform this fixup into the one that should apply when printing the
    /// leftmost subexpression of the current expression.
    ///
    /// The leftmost subexpression is any subexpression that has the same first
    /// token as the current expression, but has a different last token.
    ///
    /// For example in `$a + $b` and `$a.method()`, the subexpression `$a` is a
    /// leftmost subexpression.
    ///
    /// Not every expression has a leftmost subexpression. For example neither
    /// `-$a` nor `[$a]` have one.
    pub fn leftmost_subexpression_with_operator(
        self,
        expr: &Expr,
        #[cfg(feature = "full")] next_operator_can_begin_expr: bool,
        next_operator_can_begin_generics: bool,
        #[cfg(feature = "full")] precedence: Precedence,
    ) -> (Precedence, Self) {
        let fixup = FixupContext {
            #[cfg(feature = "full")]
            next_operator: precedence,
            #[cfg(feature = "full")]
            stmt: false,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_stmt: self.stmt || self.leftmost_subexpression_in_stmt,
            #[cfg(feature = "full")]
            match_arm: false,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_match_arm: self.match_arm
                || self.leftmost_subexpression_in_match_arm,
            #[cfg(feature = "full")]
            rightmost_subexpression_in_condition: false,
            #[cfg(feature = "full")]
            next_operator_can_begin_expr,
            #[cfg(feature = "full")]
            next_operator_can_continue_expr: true,
            next_operator_can_begin_generics,
            ..self
        };

        (fixup.leftmost_subexpression_precedence(expr), fixup)
    }
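A concrete illustration (my own, not part of syn) of why a printer must track the precedence of a leftmost subexpression: when the subexpression's own operator binds more loosely than the operator that follows, omitting parentheses changes the parse.

```rust
// Hypothetical standalone demo (not from syn): `.pow` binds tighter than `+`,
// so the leftmost subexpression `x + y` of a method call needs parentheses,
// otherwise the printed tokens re-parse with a different meaning.
fn main() {
    let (x, y) = (2_i32, 3_i32);
    // Without parentheses only `y` is squared: 2 + 9.
    assert_eq!(x + y.pow(2), 11);
    // Parenthesizing the leftmost subexpression squares the sum: 5 * 5.
    assert_eq!((x + y).pow(2), 25);
}
```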

    /// Transform this fixup into the one that should apply when printing a
    /// leftmost subexpression followed by a `.` or `?` token, which confer
    /// different statement boundary rules compared to other leftmost
    /// subexpressions.
    pub fn leftmost_subexpression_with_dot(self, expr: &Expr) -> (Precedence, Self) {
        let fixup = FixupContext {
            #[cfg(feature = "full")]
            next_operator: Precedence::Unambiguous,
            #[cfg(feature = "full")]
            stmt: self.stmt || self.leftmost_subexpression_in_stmt,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_stmt: false,
            #[cfg(feature = "full")]
            match_arm: self.match_arm || self.leftmost_subexpression_in_match_arm,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_match_arm: false,
            #[cfg(feature = "full")]
            rightmost_subexpression_in_condition: false,
            #[cfg(feature = "full")]
            next_operator_can_begin_expr: false,
            #[cfg(feature = "full")]
            next_operator_can_continue_expr: true,
            next_operator_can_begin_generics: false,
            ..self
        };

        (fixup.leftmost_subexpression_precedence(expr), fixup)
    }

    fn leftmost_subexpression_precedence(self, expr: &Expr) -> Precedence {
        #[cfg(feature = "full")]
        if !self.next_operator_can_begin_expr || self.next_operator == Precedence::Range {
            if let Scan::Bailout = scan_right(expr, self, Precedence::MIN, 0, 0) {
                if scan_left(expr, self) {
                    return Precedence::Unambiguous;
                }
            }
        }

        self.precedence(expr)
    }

    /// Transform this fixup into the one that should apply when printing the
    /// rightmost subexpression of the current expression.
    ///
    /// The rightmost subexpression is any subexpression that has a different
    /// first token than the current expression, but has the same last token.
    ///
    /// For example in `$a + $b` and `-$b`, the subexpression `$b` is a
    /// rightmost subexpression.
    ///
    /// Not every expression has a rightmost subexpression. For example neither
    /// `[$b]` nor `$a.f($b)` have one.
    pub fn rightmost_subexpression(
        self,
        expr: &Expr,
        #[cfg(feature = "full")] precedence: Precedence,
    ) -> (Precedence, Self) {
        let fixup = self.rightmost_subexpression_fixup(
            #[cfg(feature = "full")]
            false,
            #[cfg(feature = "full")]
            false,
            #[cfg(feature = "full")]
            precedence,
        );
        (fixup.rightmost_subexpression_precedence(expr), fixup)
    }

    pub fn rightmost_subexpression_fixup(
        self,
        #[cfg(feature = "full")] reset_allow_struct: bool,
        #[cfg(feature = "full")] optional_operand: bool,
        #[cfg(feature = "full")] precedence: Precedence,
    ) -> Self {
        FixupContext {
            #[cfg(feature = "full")]
            previous_operator: precedence,
            #[cfg(feature = "full")]
            stmt: false,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_stmt: false,
            #[cfg(feature = "full")]
            match_arm: false,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_match_arm: false,
            #[cfg(feature = "full")]
            condition: self.condition && !reset_allow_struct,
            #[cfg(feature = "full")]
            leftmost_subexpression_in_optional_operand: self.condition && optional_operand,
            ..self
        }
    }

    pub fn rightmost_subexpression_precedence(self, expr: &Expr) -> Precedence {
        let default_prec = self.precedence(expr);

        #[cfg(feature = "full")]
        if match self.previous_operator {
            Precedence::Assign | Precedence::Let | Precedence::Prefix => {
                default_prec < self.previous_operator
            }
            _ => default_prec <= self.previous_operator,
        } && match self.next_operator {
            Precedence::Range | Precedence::Or | Precedence::And => true,
            _ => !self.next_operator_can_begin_expr,
        } {
            if let Scan::Bailout | Scan::Fail = scan_right(expr, self, self.previous_operator, 1, 0)
            {
                if scan_left(expr, self) {
                    return Precedence::Prefix;
                }
            }
        }

        default_prec
    }

    /// Determine whether parentheses are needed around the given expression to
    /// head off the early termination of a statement or condition.
    #[cfg(feature = "full")]
    pub fn parenthesize(self, expr: &Expr) -> bool {
        (self.leftmost_subexpression_in_stmt && !classify::requires_semi_to_be_stmt(expr))
            || ((self.stmt || self.leftmost_subexpression_in_stmt) && matches!(expr, Expr::Let(_)))
            || (self.leftmost_subexpression_in_match_arm
                && !classify::requires_comma_to_be_match_arm(expr))
            || (self.condition && matches!(expr, Expr::Struct(_)))
            || (self.rightmost_subexpression_in_condition
                && matches!(
                    expr,
                    Expr::Return(ExprReturn { expr: None, .. })
                        | Expr::Yield(ExprYield { expr: None, .. })
                ))
            || (self.rightmost_subexpression_in_condition
                && !self.condition
                && matches!(
                    expr,
                    Expr::Break(ExprBreak { expr: None, .. })
                        | Expr::Path(_)
                        | Expr::Range(ExprRange { end: None, .. })
                ))
            || (self.leftmost_subexpression_in_optional_operand
                && matches!(expr, Expr::Block(expr) if expr.attrs.is_empty() && expr.label.is_none()))
    }

    /// Determines the effective precedence of a subexpression. Some expressions
    /// have higher or lower precedence when adjacent to particular operators.
    fn precedence(self, expr: &Expr) -> Precedence {
        #[cfg(feature = "full")]
        if self.next_operator_can_begin_expr {
            // Decrease precedence of value-less jumps when followed by an
            // operator that would otherwise get interpreted as beginning a
            // value for the jump.
            if let Expr::Break(ExprBreak { expr: None, .. })
            | Expr::Return(ExprReturn { expr: None, .. })
            | Expr::Yield(ExprYield { expr: None, .. }) = expr
            {
                return Precedence::Jump;
            }
        }

        #[cfg(feature = "full")]
        if !self.next_operator_can_continue_expr {
            match expr {
                // Increase precedence of expressions that extend to the end of
                // current statement or group.
                Expr::Break(_)
                | Expr::Closure(_)
                | Expr::Let(_)
                | Expr::Return(_)
                | Expr::Yield(_) => {
                    return Precedence::Prefix;
                }
                Expr::Range(e) if e.start.is_none() => return Precedence::Prefix,
                _ => {}
            }
        }

        if self.next_operator_can_begin_generics {
            if let Expr::Cast(cast) = expr {
                if classify::trailing_unparameterized_path(&cast.ty) {
                    return Precedence::MIN;
                }
            }
        }

        Precedence::of(expr)
    }
}
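The statement-boundary guards above exist because, in statement position, a block-like expression ends the statement early. A minimal illustration (my own, not part of syn) of the ambiguity that `parenthesize` heads off:

```rust
// Hypothetical demo (not from syn): written bare in statement position,
// `match () { _ => 1 } - 1;` would parse as a match *statement* followed by
// the stray expression statement `-1;`. A printer that wants the subtraction
// to stay one expression must emit parentheses around the match.
fn main() {
    let v = (match () { _ => 1 }) - 1;
    assert_eq!(v, 0);
}
```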

impl Copy for FixupContext {}

impl Clone for FixupContext {
    fn clone(&self) -> Self {
        *self
    }
}

#[cfg(feature = "full")]
enum Scan {
    Fail,
    Bailout,
    Consume,
}

#[cfg(feature = "full")]
impl Copy for Scan {}

#[cfg(feature = "full")]
impl Clone for Scan {
    fn clone(&self) -> Self {
        *self
    }
}

#[cfg(feature = "full")]
impl PartialEq for Scan {
    fn eq(&self, other: &Self) -> bool {
        *self as u8 == *other as u8
    }
}

#[cfg(feature = "full")]
fn scan_left(expr: &Expr, fixup: FixupContext) -> bool {
    match expr {
        Expr::Assign(_) => fixup.previous_operator <= Precedence::Assign,
        Expr::Binary(e) => match Precedence::of_binop(&e.op) {
            Precedence::Assign => fixup.previous_operator <= Precedence::Assign,
            binop_prec => fixup.previous_operator < binop_prec,
        },
        Expr::Cast(_) => fixup.previous_operator < Precedence::Cast,
        Expr::Range(e) => e.start.is_none() || fixup.previous_operator < Precedence::Assign,
        _ => true,
    }
}

#[cfg(feature = "full")]
fn scan_right(
    expr: &Expr,
    fixup: FixupContext,
    precedence: Precedence,
    fail_offset: u8,
    bailout_offset: u8,
) -> Scan {
    let consume_by_precedence = if match precedence {
        Precedence::Assign | Precedence::Compare => precedence <= fixup.next_operator,
        _ => precedence < fixup.next_operator,
    } || fixup.next_operator == Precedence::MIN
    {
        Scan::Consume
    } else {
        Scan::Bailout
    };
    if fixup.parenthesize(expr) {
        return consume_by_precedence;
    }
    match expr {
        Expr::Assign(e) if e.attrs.is_empty() => {
            if match fixup.next_operator {
                Precedence::Unambiguous => fail_offset >= 2,
                _ => bailout_offset >= 1,
            } {
                return Scan::Consume;
            }
            let right_fixup = fixup.rightmost_subexpression_fixup(false, false, Precedence::Assign);
            let scan = scan_right(
                &e.right,
                right_fixup,
                Precedence::Assign,
                match fixup.next_operator {
                    Precedence::Unambiguous => fail_offset,
                    _ => 1,
                },
                1,
            );
            if let Scan::Bailout | Scan::Consume = scan {
                Scan::Consume
            } else if let Precedence::Unambiguous = fixup.next_operator {
                Scan::Fail
            } else {
                Scan::Bailout
            }
        }
        Expr::Binary(e) if e.attrs.is_empty() => {
            if match fixup.next_operator {
                Precedence::Unambiguous => {
                    fail_offset >= 2
                        && (consume_by_precedence == Scan::Consume || bailout_offset >= 1)
                }
                _ => bailout_offset >= 1,
            } {
                return Scan::Consume;
            }
            let binop_prec = Precedence::of_binop(&e.op);
            if binop_prec == Precedence::Compare && fixup.next_operator == Precedence::Compare {
                return Scan::Consume;
            }
            let right_fixup = fixup.rightmost_subexpression_fixup(false, false, binop_prec);
            let scan = scan_right(
                &e.right,
                right_fixup,
                binop_prec,
                match fixup.next_operator {
                    Precedence::Unambiguous => fail_offset,
                    _ => 1,
                },
                consume_by_precedence as u8 - Scan::Bailout as u8,
            );
            match scan {
                Scan::Fail => {}
                Scan::Bailout => return consume_by_precedence,
                Scan::Consume => return Scan::Consume,
            }
            let right_needs_group = binop_prec != Precedence::Assign
                && right_fixup.rightmost_subexpression_precedence(&e.right) <= binop_prec;
            if right_needs_group {
                consume_by_precedence
            } else if let (Scan::Fail, Precedence::Unambiguous) = (scan, fixup.next_operator) {
                Scan::Fail
            } else {
                Scan::Bailout
            }
        }
        Expr::RawAddr(ExprRawAddr { expr, .. })
        | Expr::Reference(ExprReference { expr, .. })
        | Expr::Unary(ExprUnary { expr, .. }) => {
            if match fixup.next_operator {
                Precedence::Unambiguous => {
                    fail_offset >= 2
                        && (consume_by_precedence == Scan::Consume || bailout_offset >= 1)
                }
                _ => bailout_offset >= 1,
            } {
                return Scan::Consume;
            }
            let right_fixup = fixup.rightmost_subexpression_fixup(false, false, Precedence::Prefix);
            let scan = scan_right(
                expr,
                right_fixup,
                precedence,
                match fixup.next_operator {
                    Precedence::Unambiguous => fail_offset,
                    _ => 1,
                },
                consume_by_precedence as u8 - Scan::Bailout as u8,
            );
            match scan {
                Scan::Fail => {}
                Scan::Bailout => return consume_by_precedence,
                Scan::Consume => return Scan::Consume,
            }
            if right_fixup.rightmost_subexpression_precedence(expr) < Precedence::Prefix {
                consume_by_precedence
            } else if let (Scan::Fail, Precedence::Unambiguous) = (scan, fixup.next_operator) {
                Scan::Fail
            } else {
                Scan::Bailout
            }
        }
        Expr::Range(e) if e.attrs.is_empty() => match &e.end {
            Some(end) => {
                if fail_offset >= 2 {
                    return Scan::Consume;
                }
                let right_fixup =
                    fixup.rightmost_subexpression_fixup(false, true, Precedence::Range);
                let scan = scan_right(
                    end,
                    right_fixup,
                    Precedence::Range,
                    fail_offset,
                    match fixup.next_operator {
                        Precedence::Assign | Precedence::Range => 0,
                        _ => 1,
                    },
                );
                if match (scan, fixup.next_operator) {
                    (Scan::Fail, _) => false,
                    (Scan::Bailout, Precedence::Assign | Precedence::Range) => false,
                    (Scan::Bailout | Scan::Consume, _) => true,
                } {
                    return Scan::Consume;
                }
                if right_fixup.rightmost_subexpression_precedence(end) <= Precedence::Range {
                    Scan::Consume
                } else {
                    Scan::Fail
                }
            }
            None => {
                if fixup.next_operator_can_begin_expr {
                    Scan::Consume
                } else {
                    Scan::Fail
                }
            }
        },
        Expr::Break(e) => match &e.expr {
            Some(value) => {
                if bailout_offset >= 1 || e.label.is_none() && classify::expr_leading_label(value) {
                    return Scan::Consume;
                }
                let right_fixup = fixup.rightmost_subexpression_fixup(true, true, Precedence::Jump);
                match scan_right(value, right_fixup, Precedence::Jump, 1, 1) {
                    Scan::Fail => Scan::Bailout,
                    Scan::Bailout | Scan::Consume => Scan::Consume,
                }
            }
            None => match fixup.next_operator {
                Precedence::Assign if precedence > Precedence::Assign => Scan::Fail,
                _ => Scan::Consume,
            },
        },
        Expr::Return(ExprReturn { expr, .. }) | Expr::Yield(ExprYield { expr, .. }) => match expr {
            Some(e) => {
                if bailout_offset >= 1 {
                    return Scan::Consume;
                }
                let right_fixup =
                    fixup.rightmost_subexpression_fixup(true, false, Precedence::Jump);
                match scan_right(e, right_fixup, Precedence::Jump, 1, 1) {
                    Scan::Fail => Scan::Bailout,
                    Scan::Bailout | Scan::Consume => Scan::Consume,
                }
            }
            None => match fixup.next_operator {
                Precedence::Assign if precedence > Precedence::Assign => Scan::Fail,
                _ => Scan::Consume,
            },
        },
        Expr::Closure(e) => {
            if matches!(e.output, ReturnType::Default)
                || matches!(&*e.body, Expr::Block(body) if body.attrs.is_empty() && body.label.is_none())
            {
                if bailout_offset >= 1 {
                    return Scan::Consume;
                }
                let right_fixup =
                    fixup.rightmost_subexpression_fixup(false, false, Precedence::Jump);
                match scan_right(&e.body, right_fixup, Precedence::Jump, 1, 1) {
                    Scan::Fail => Scan::Bailout,
                    Scan::Bailout | Scan::Consume => Scan::Consume,
                }
            } else {
                Scan::Consume
            }
        }
        Expr::Let(e) => {
            if bailout_offset >= 1 {
                return Scan::Consume;
            }
            let right_fixup = fixup.rightmost_subexpression_fixup(false, false, Precedence::Let);
            let scan = scan_right(
                &e.expr,
                right_fixup,
                Precedence::Let,
                1,
                if fixup.next_operator < Precedence::Let {
                    0
                } else {
                    1
                },
            );
            match scan {
                Scan::Fail | Scan::Bailout if fixup.next_operator < Precedence::Let => {
                    return Scan::Bailout;
                }
                Scan::Consume => return Scan::Consume,
                _ => {}
            }
            if right_fixup.rightmost_subexpression_precedence(&e.expr) < Precedence::Let {
                Scan::Consume
            } else if let Scan::Fail = scan {
                Scan::Bailout
            } else {
                Scan::Consume
            }
        }
        Expr::Array(_)
        | Expr::Assign(_)
        | Expr::Async(_)
        | Expr::Await(_)
        | Expr::Binary(_)
        | Expr::Block(_)
        | Expr::Call(_)
        | Expr::Cast(_)
        | Expr::Const(_)
        | Expr::Continue(_)
        | Expr::Field(_)
        | Expr::ForLoop(_)
        | Expr::Group(_)
        | Expr::If(_)
        | Expr::Index(_)
        | Expr::Infer(_)
        | Expr::Lit(_)
        | Expr::Loop(_)
        | Expr::Macro(_)
        | Expr::Match(_)
        | Expr::MethodCall(_)
        | Expr::Paren(_)
        | Expr::Path(_)
        | Expr::Range(_)
        | Expr::Repeat(_)
        | Expr::Struct(_)
        | Expr::Try(_)
        | Expr::TryBlock(_)
        | Expr::Tuple(_)
        | Expr::Unsafe(_)
        | Expr::Verbatim(_)
        | Expr::While(_) => match fixup.next_operator {
            Precedence::Assign | Precedence::Range if precedence == Precedence::Range => Scan::Fail,
            _ if precedence == Precedence::Let && fixup.next_operator < Precedence::Let => {
                Scan::Fail
            }
            _ => consume_by_precedence,
        },
    }
}
 2269  rust/syn/gen/clone.rs      (new file; diff suppressed because it is too large)
 3240  rust/syn/gen/debug.rs      (new file; diff suppressed because it is too large)
 2308  rust/syn/gen/eq.rs         (new file; diff suppressed because it is too large)
 3904  rust/syn/gen/fold.rs      (new file; diff suppressed because it is too large)
 2878  rust/syn/gen/hash.rs       (new file; diff suppressed because it is too large)
 3943  rust/syn/gen/visit.rs      (new file; diff suppressed because it is too large)
 3761  rust/syn/gen/visit_mut.rs  (new file; diff suppressed because it is too large)
 1479  rust/syn/generics.rs       (new file; diff suppressed because it is too large)
  293  rust/syn/group.rs          (new file)
@@ -0,0 +1,293 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

use crate::error::Result;
use crate::parse::ParseBuffer;
use crate::token;
use proc_macro2::extra::DelimSpan;
use proc_macro2::Delimiter;

// Not public API.
#[doc(hidden)]
pub struct Parens<'a> {
    #[doc(hidden)]
    pub token: token::Paren,
    #[doc(hidden)]
    pub content: ParseBuffer<'a>,
}

// Not public API.
#[doc(hidden)]
pub struct Braces<'a> {
    #[doc(hidden)]
    pub token: token::Brace,
    #[doc(hidden)]
    pub content: ParseBuffer<'a>,
}

// Not public API.
#[doc(hidden)]
pub struct Brackets<'a> {
    #[doc(hidden)]
    pub token: token::Bracket,
    #[doc(hidden)]
    pub content: ParseBuffer<'a>,
}

// Not public API.
#[cfg(any(feature = "full", feature = "derive"))]
#[doc(hidden)]
pub struct Group<'a> {
    #[doc(hidden)]
    pub token: token::Group,
    #[doc(hidden)]
    pub content: ParseBuffer<'a>,
}

// Not public API.
#[doc(hidden)]
pub fn parse_parens<'a>(input: &ParseBuffer<'a>) -> Result<Parens<'a>> {
    parse_delimited(input, Delimiter::Parenthesis).map(|(span, content)| Parens {
        token: token::Paren(span),
        content,
    })
}

// Not public API.
#[doc(hidden)]
pub fn parse_braces<'a>(input: &ParseBuffer<'a>) -> Result<Braces<'a>> {
    parse_delimited(input, Delimiter::Brace).map(|(span, content)| Braces {
        token: token::Brace(span),
        content,
    })
}

// Not public API.
#[doc(hidden)]
pub fn parse_brackets<'a>(input: &ParseBuffer<'a>) -> Result<Brackets<'a>> {
    parse_delimited(input, Delimiter::Bracket).map(|(span, content)| Brackets {
        token: token::Bracket(span),
        content,
    })
}

#[cfg(any(feature = "full", feature = "derive"))]
pub(crate) fn parse_group<'a>(input: &ParseBuffer<'a>) -> Result<Group<'a>> {
    parse_delimited(input, Delimiter::None).map(|(span, content)| Group {
        token: token::Group(span.join()),
        content,
    })
}

fn parse_delimited<'a>(
    input: &ParseBuffer<'a>,
    delimiter: Delimiter,
) -> Result<(DelimSpan, ParseBuffer<'a>)> {
    input.step(|cursor| {
        if let Some((content, span, rest)) = cursor.group(delimiter) {
            let scope = span.close();
            let nested = crate::parse::advance_step_cursor(cursor, content);
            let unexpected = crate::parse::get_unexpected(input);
            let content = crate::parse::new_parse_buffer(scope, nested, unexpected);
            Ok(((span, content), rest))
        } else {
            let message = match delimiter {
                Delimiter::Parenthesis => "expected parentheses",
                Delimiter::Brace => "expected curly braces",
                Delimiter::Bracket => "expected square brackets",
                Delimiter::None => "expected invisible group",
            };
            Err(cursor.error(message))
        }
    })
}

/// Parse a set of parentheses and expose their content to subsequent parsers.
///
/// # Example
///
/// ```
/// # use quote::quote;
/// #
/// use syn::{parenthesized, token, Ident, Result, Token, Type};
/// use syn::parse::{Parse, ParseStream};
/// use syn::punctuated::Punctuated;
///
/// // Parse a simplified tuple struct syntax like:
/// //
/// //     struct S(A, B);
/// struct TupleStruct {
///     struct_token: Token![struct],
///     ident: Ident,
///     paren_token: token::Paren,
///     fields: Punctuated<Type, Token![,]>,
///     semi_token: Token![;],
/// }
///
/// impl Parse for TupleStruct {
///     fn parse(input: ParseStream) -> Result<Self> {
///         let content;
///         Ok(TupleStruct {
///             struct_token: input.parse()?,
///             ident: input.parse()?,
///             paren_token: parenthesized!(content in input),
///             fields: content.parse_terminated(Type::parse, Token![,])?,
///             semi_token: input.parse()?,
///         })
///     }
/// }
/// #
/// # fn main() {
/// #     let input = quote! {
/// #         struct S(A, B);
/// #     };
/// #     syn::parse2::<TupleStruct>(input).unwrap();
/// # }
/// ```
#[macro_export]
#[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
macro_rules! parenthesized {
    ($content:ident in $cursor:expr) => {
        match $crate::__private::parse_parens(&$cursor) {
            $crate::__private::Ok(parens) => {
                $content = parens.content;
                parens.token
            }
            $crate::__private::Err(error) => {
                return $crate::__private::Err(error);
            }
        }
    };
}

/// Parse a set of curly braces and expose their content to subsequent parsers.
///
/// # Example
///
/// ```
/// # use quote::quote;
/// #
/// use syn::{braced, token, Ident, Result, Token, Type};
/// use syn::parse::{Parse, ParseStream};
/// use syn::punctuated::Punctuated;
///
/// // Parse a simplified struct syntax like:
/// //
/// //     struct S {
/// //         a: A,
/// //         b: B,
/// //     }
/// struct Struct {
///     struct_token: Token![struct],
///     ident: Ident,
///     brace_token: token::Brace,
///     fields: Punctuated<Field, Token![,]>,
/// }
///
/// struct Field {
///     name: Ident,
///     colon_token: Token![:],
///     ty: Type,
/// }
///
/// impl Parse for Struct {
///     fn parse(input: ParseStream) -> Result<Self> {
///         let content;
///         Ok(Struct {
///             struct_token: input.parse()?,
///             ident: input.parse()?,
///             brace_token: braced!(content in input),
///             fields: content.parse_terminated(Field::parse, Token![,])?,
///         })
///     }
/// }
///
/// impl Parse for Field {
///     fn parse(input: ParseStream) -> Result<Self> {
///         Ok(Field {
///             name: input.parse()?,
///             colon_token: input.parse()?,
///             ty: input.parse()?,
///         })
///     }
/// }
/// #
/// # fn main() {
/// #     let input = quote! {
/// #         struct S {
/// #             a: A,
/// #             b: B,
/// #         }
/// #     };
/// #     syn::parse2::<Struct>(input).unwrap();
/// # }
/// ```
#[macro_export]
#[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
macro_rules! braced {
    ($content:ident in $cursor:expr) => {
        match $crate::__private::parse_braces(&$cursor) {
            $crate::__private::Ok(braces) => {
                $content = braces.content;
                braces.token
            }
            $crate::__private::Err(error) => {
                return $crate::__private::Err(error);
            }
        }
    };
}

/// Parse a set of square brackets and expose their content to subsequent
/// parsers.
///
/// # Example
///
/// ```
/// # use quote::quote;
/// #
/// use proc_macro2::TokenStream;
/// use syn::{bracketed, token, Result, Token};
/// use syn::parse::{Parse, ParseStream};
///
/// // Parse an outer attribute like:
/// //
/// //     #[repr(C, packed)]
/// struct OuterAttribute {
///     pound_token: Token![#],
///     bracket_token: token::Bracket,
///     content: TokenStream,
/// }
///
/// impl Parse for OuterAttribute {
///     fn parse(input: ParseStream) -> Result<Self> {
///         let content;
///         Ok(OuterAttribute {
///             pound_token: input.parse()?,
///             bracket_token: bracketed!(content in input),
///             content: content.parse()?,
///         })
///     }
/// }
/// #
/// # fn main() {
/// #     let input = quote! {
/// #         #[repr(C, packed)]
/// #     };
/// #     syn::parse2::<OuterAttribute>(input).unwrap();
/// # }
/// ```
#[macro_export]
#[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
macro_rules! bracketed {
    ($content:ident in $cursor:expr) => {
        match $crate::__private::parse_brackets(&$cursor) {
            $crate::__private::Ok(brackets) => {
                $content = brackets.content;
                brackets.token
            }
            $crate::__private::Err(error) => {
                return $crate::__private::Err(error);
            }
        }
    };
}
  110  rust/syn/ident.rs          (new file)
@@ -0,0 +1,110 @@
// SPDX-License-Identifier: Apache-2.0 OR MIT

#[cfg(feature = "parsing")]
use crate::lookahead;

pub use proc_macro2::Ident;

#[cfg(feature = "parsing")]
pub_if_not_doc! {
    #[doc(hidden)]
    #[allow(non_snake_case)]
    pub fn Ident(marker: lookahead::TokenMarker) -> Ident {
        match marker {}
    }
}

macro_rules! ident_from_token {
    ($token:ident) => {
        impl From<Token![$token]> for Ident {
            fn from(token: Token![$token]) -> Ident {
                Ident::new(stringify!($token), token.span)
            }
        }
    };
}

ident_from_token!(self);
ident_from_token!(Self);
ident_from_token!(super);
ident_from_token!(crate);
ident_from_token!(extern);

impl From<Token![_]> for Ident {
    fn from(token: Token![_]) -> Ident {
        Ident::new("_", token.span)
    }
}

pub(crate) fn xid_ok(symbol: &str) -> bool {
    let mut chars = symbol.chars();
    let first = chars.next().unwrap();
    if !(first == '_' || first.is_ascii_alphabetic()) {
        return false;
    }
    for ch in chars {
        if !(ch == '_' || ch.is_ascii_alphanumeric()) {
            return false;
        }
    }
    true
}
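Note that this vendored `xid_ok` accepts only ASCII identifiers, consistent with the cover letter's point that the `unicode-ident` dependency was removed. A standalone sketch of the same check (my own, not the syn function), with the empty-string case handled explicitly rather than via `unwrap`:

```rust
// Sketch (not from syn): accept `_` or an ASCII letter as the first character,
// then `_` or ASCII alphanumerics for the rest; an empty string returns false
// instead of panicking.
fn is_ascii_ident(symbol: &str) -> bool {
    let mut chars = symbol.chars();
    match chars.next() {
        Some(first) if first == '_' || first.is_ascii_alphabetic() => {
            chars.all(|ch| ch == '_' || ch.is_ascii_alphanumeric())
        }
        _ => false,
    }
}

fn main() {
    assert!(is_ascii_ident("foo_bar1"));
    assert!(is_ascii_ident("_x"));
    assert!(!is_ascii_ident("1abc"));
    assert!(!is_ascii_ident(""));
    // Non-ASCII identifiers are rejected in this ASCII-only variant.
    assert!(!is_ascii_ident("été"));
}
```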

#[cfg(feature = "parsing")]
mod parsing {
    use crate::buffer::Cursor;
    use crate::error::Result;
    use crate::parse::{Parse, ParseStream};
    use crate::token::Token;
    use proc_macro2::Ident;

    fn accept_as_ident(ident: &Ident) -> bool {
        match ident.to_string().as_str() {
            "_" |
            // Based on https://doc.rust-lang.org/1.65.0/reference/keywords.html
            "abstract" | "as" | "async" | "await" | "become" | "box" | "break" |
            "const" | "continue" | "crate" | "do" | "dyn" | "else" | "enum" |
            "extern" | "false" | "final" | "fn" | "for" | "if" | "impl" | "in" |
            "let" | "loop" | "macro" | "match" | "mod" | "move" | "mut" |
            "override" | "priv" | "pub" | "ref" | "return" | "Self" | "self" |
            "static" | "struct" | "super" | "trait" | "true" | "try" | "type" |
            "typeof" | "unsafe" | "unsized" | "use" | "virtual" | "where" |
            "while" | "yield" => false,
            _ => true,
        }
    }

    #[cfg_attr(docsrs, doc(cfg(feature = "parsing")))]
    impl Parse for Ident {
        fn parse(input: ParseStream) -> Result<Self> {
            input.step(|cursor| {
                if let Some((ident, rest)) = cursor.ident() {
                    if accept_as_ident(&ident) {
                        Ok((ident, rest))
                    } else {
                        Err(cursor.error(format_args!(
                            "expected identifier, found keyword `{}`",
                            ident,
                        )))
                    }
                } else {
                    Err(cursor.error("expected identifier"))
                }
            })
        }
    }

    impl Token for Ident {
        fn peek(cursor: Cursor) -> bool {
            if let Some((ident, _rest)) = cursor.ident() {
                accept_as_ident(&ident)
            } else {
                false
            }
        }

        fn display() -> &'static str {
            "identifier"
        }
    }
}
 3492  rust/syn/item.rs           (new file; diff suppressed because it is too large)
 1013  rust/syn/lib.rs            (new file; diff suppressed because it is too large)
Some files were not shown because too many files have changed in this diff.