Would it make sense to provide a “share nothing” runtime implementation that can be injected at startup?
Isn’t this tokio::task::spawn_local?
compile rust 0.7 in scala
Not sure if there was another rewrite, but AFAIK (and the article agrees with me) rustc was originally written in OCaml.
Have you actually measured a performance impact from RefCell checks?
High or low level doesn’t matter. Mathematically it just makes more sense to use 0-based indexing https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html
It’s mentioned in footnote 6:
As an example, to make this work I’m assuming some kind of “true deref” trait that indicates that Deref yields a reference that remains valid even as the value being deref’d moves from place to place. We need a trait much like this for other reasons too.
It would only work for references that are stable after the value they reference is moved. Think for example of a &str you get from a String.
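To illustrate the stability property being described (a sketch, not from the original thread): a String’s character data lives on the heap, so moving the String value does not move the bytes a derived &str would point to — exactly what such a “true deref” trait would capture:

```rust
fn main() {
    let s = String::from("hello");
    // Pointer to the heap buffer that a &str derived via Deref would reference.
    let data_ptr = s.as_ptr();

    // Move the String value itself to a new place.
    let moved = s;

    // Only the (ptr, len, capacity) triple moved; the heap buffer did not.
    assert_eq!(moved.as_ptr(), data_ptr);
}
```

By contrast, a type that derefs into its own inline storage would not have this property, which is why a marker trait is needed to tell the two cases apart.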
They tested the same strings on that implementation
The code they were looking at was the one that wrote the table, but they were testing the one that read it (which was itself correct).
though judging by the recent comments someone’s found something.
Yeah, that’s me :)

The translation using an associated const also works when the const block uses generic parameters. For example:
fn require_zst<T>() {
const { assert!(std::mem::size_of::<T>() == 0) }
}
This can be written as:
fn require_zst<T>() {
struct Foo<T>(PhantomData<T>);
impl<T> Foo<T> {
const FOO: () = assert!(std::mem::size_of::<T>() == 0);
}
Foo::<T>::FOO
}
However it cannot be written as:
fn require_zst<T>() {
const FOO: () = assert!(std::mem::size_of::<T>() == 0);
FOO
}
Because const FOO: () is an item, it is only lexically scoped (i.e. visible) inside require_zst, but it does not inherit the function’s generics (so it cannot use T).
They tested the same strings on that implementation
The strings were the same, but not the implementation. They were testing the decoding of the strings, while the C function they were looking at was the one for encoding them. The decoding function was correct, but what it read didn’t match what the encoding one wrote.
though judging by the recent comments someone’s found something.
Yeah, that’s me :)
while a similar C implementation does not need this fix
No, that implementation also needs the fix. It’s just that it was never properly tested, so they thought it was working correctly.
It’s also extremely unlikely that you’d be running a bat script with untrusted arguments on Windows.
It happens in yt-dlp, which is where this was first reported https://github.com/yt-dlp/yt-dlp/security/advisories/GHSA-hjq6-52gw-2g7p
Very likely yes
Loop unrolling is not really the speedup, autovectorization is. Loop unrolling does often help with autovectorization, but it is not enough, especially with floating point numbers. The accumulation operation you’re doing needs to be associative, and floating point addition is not associative (i.e. (x + y) + z is not always equal to x + (y + z)). Hence autovectorizing the code would change the semantics, and the compiler is not allowed to do that.
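A minimal demonstration of that non-associativity (the values are chosen purely for illustration):

```rust
fn main() {
    let (x, y, z) = (0.1_f64, 0.2_f64, 0.3_f64);
    let left = (x + y) + z;
    let right = x + (y + z);
    // The two evaluation orders round differently, so the results differ.
    assert_ne!(left, right);
    println!("{left} != {right}");
}
```

This is why a reduction over f64 only autovectorizes if you opt into reassociation (e.g. via explicit chunked accumulators), since the reordered sum can produce a different result.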
If your goal is to just .await some future that requires the Tokio runtime, you can use the async-compat crate.
Another option is to compile dependencies with LLVM and optimizations, which will likely be done only once in the first clean build, and then compile your main binary with Cranelift, thus getting the juicy fast compile times without having to worry about the slow dependencies.
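On a nightly toolchain this split can be expressed in the Cargo profiles (a sketch; it assumes the rustc_codegen_cranelift component is installed and uses the unstable codegen-backend feature):

```toml
# .cargo/config.toml — opt in to the unstable profile option
[unstable]
codegen-backend = true

# Cargo.toml — Cranelift for your own crates...
[profile.dev]
codegen-backend = "cranelift"

# ...while dependencies keep LLVM with optimizations,
# so they are compiled once on the first clean build and then cached.
[profile.dev.package."*"]
codegen-backend = "llvm"
opt-level = 3
```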
My laptop has an Italian layout keyboard because it was a pain to find a reasonably priced one with the US layout. On Windows there’s no way to type the ` and ~ symbols without using Alt combinations, and on Linux you need to use a weird compose key. Also, square brackets require you to press Shift, and curly brackets require both Shift and Alt.
TBF everyone in school learns to start counting at 1, then they unlearn that in programming. There are also some objective reasons to prefer 0-based indexing https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/EWD831.html
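Dijkstra’s argument in the linked EWD831 is essentially about half-open ranges: with 0-based indexing, 0 ≤ i < N describes exactly the N valid indices, and the length of any subrange is a plain difference. A small illustration:

```rust
fn main() {
    let xs = [10, 20, 30, 40];

    // 0..xs.len() is exactly the set of valid indices: no +1/-1 adjustments.
    for i in 0..xs.len() {
        assert_eq!(xs[i], (i + 1) * 10);
    }

    // The length of a half-open subrange a..b is simply b - a.
    assert_eq!(xs[1..3].len(), 3 - 1);
}
```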
VSCodium is the open source version. VS Code is built from the same open source codebase, but with the addition of closed source telemetry and the extensions marketplace.
Luckily for me I’m not poor.
If you live in the USA you don’t suffer from the problem it solves, because you have ~5 IPv4 addresses per capita (totaling 41% of all IPv4 addresses), and likewise many European countries have ~2 per capita (although there are exceptions like Italy and Spain, which are a bit under 1 per capita). However many other countries don’t have such luxury: for example in India there’s one for every 36 people, which is obviously not enough, so they have to either use NAT everywhere or switch to IPv6.
No, macros can see only the tokens you give them. They have no notion of the fact that crate::functions is a module, that PluginFunction is a trait, or that functions_map is a variable. Not even the compiler may know that information when the macro is expanded.

If you really really want to do this, you can use something like inventory. Note that inventory uses a special linker section to do this, which some consider a hack. It’s also not supported on WASM, if you want to target that.