The code below benchmarks the `&` operation with the operands ordered so that lazy evaluation either would or would not help.
set.seed(1)
N <- 1e6
V <- runif(N)
v1 <- V > 0.1  # ~90% TRUE
v2 <- V > 0.4  # ~60% TRUE
v3 <- V > 0.9  # ~10% TRUE
mb_res_le <- microbenchmark::microbenchmark(
  times = 100, unit = "ms",
  v1 & v2 & v3, v3 & v2 & v1
)
ggplot2::autoplot(mb_res_le)
I understand the result from R 4.2.0, i.e. having more FALSEs on the LHS of `&` performs quicker because of lazy evaluation. But I don't get how it is the other way around for R 4.3.1 with the same comparison.
There are a few versions between 4.2.0 and 4.3.1, and the change could have been introduced anywhere in between, but I can't find anything in the release notes that would explain it.
I was surprised by the result from R 4.3.1, which is why I went back and tried earlier versions of R, whose performance differences I remember being able to interpret.
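(A side note on my premise, in case it matters: as far as I understand, only the scalar operator `&&` short-circuits in R; the vectorized `&` always evaluates both operands, so any element-wise speed difference would have to come from the internal C loop rather than from R-level lazy evaluation. A minimal check of which operator skips its right-hand side:)

```r
loud_true <- function() {
  message("RHS evaluated")  # side effect reveals whether evaluation happened
  TRUE
}

FALSE && loud_true()  # returns FALSE, prints nothing: `&&` short-circuits
FALSE &  loud_true()  # returns FALSE, but "RHS evaluated" is printed
```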
[updated plots after adding set.seed(1)]

Docker allows us to compare easily. I happen to have R 4.3.1 (current) and R 4.2.3 (the final release of the previous cycle) here. I simply added install.packages("microbenchmark") and printed the result summary. Then for R 4.3.1:
and for R 4.2.3:
and, modulo normal variation during benchmarking, they seem identical.
Modified code
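(The exact script isn't reproduced here; the following is my reconstruction of a self-contained version, assuming it mirrors the question's code with the plot replaced by a printed summary, so it can be run unchanged in both containers.)

```r
# Reconstructed benchmark script (assumption: same data and expressions as
# the question, printing the summary instead of plotting)
if (!requireNamespace("microbenchmark", quietly = TRUE)) {
  install.packages("microbenchmark")
}

set.seed(1)
N <- 1e6
V <- runif(N)
v1 <- V > 0.1
v2 <- V > 0.4
v3 <- V > 0.9

mb_res_le <- microbenchmark::microbenchmark(
  times = 100, unit = "ms",
  v1 & v2 & v3, v3 & v2 & v1
)
print(R.version.string)
print(summary(mb_res_le))
```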
Call
This is the one for 4.2.3, and 4.3.1 is obviously similar: