If you only have a few rules in a discrete event simulation this is not critical, but if you have many rules that can interfere with each other, you may want to track which rules are used and where.
- Does anybody know how to get the code below as fast as the original function?
- Are there better options than eval(parse(...))?
Here is a simple example which shows that I lose a factor of 100 in speed. Assume you run a simulation and one (of many) rules is: select the states with time less than 5:
> a <- rnorm(100, 50, 10)
> print(summary(microbenchmark::microbenchmark(a[a < 5], times = 1000L, unit = "us")))
expr min lq mean median uq max neval
a[a < 5] 0.76 1.14 1.266745 1.141 1.52 11.404 1000
myfun <- function(a0) {
  # parse() and eval() the rule text on every call
  return(eval(parse(text = myrule)))
}
> myrule <- "a < a0" # The rule could be read from a file.
> print(summary(microbenchmark::microbenchmark(a[myfun(5)], times = 1000L, unit = "us")))
expr min lq mean median uq max neval
a[myfun(5)] 137.61 140.271 145.6047 141.411 142.932 343.644 1000
Note: I don't think that I need an extra rete package which can do the bookkeeping efficiently. But if there are other opinions, let me know...
Let's profile this:
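One way to do this (my own sketch, not from the original answer) is base R's Rprof()/summaryRprof(); the loop count is arbitrary, just large enough to collect samples:

Rprof()                               # start the sampling profiler (writes Rprof.out)
for (i in 1:20000) x <- a[myfun(5)]   # exercise the slow version
Rprof(NULL)                           # stop profiling
summaryRprof()$by.self                # parse() dominates the self time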
Most of the time is spent in parse. We can confirm this with a benchmark (see the sketch below).
If reading the rules as text from a file is a hard requirement, I don't think there is a way to speed this up. Of course, you should not parse the same rule repeatedly, but I assume you know that.
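For instance, one could compare parsing alone against evaluating a rule that is already stored as a parsed expression (a sketch of mine, reusing a and myrule from above; myfun_expr is an illustrative name and timings will vary by machine):

myrule_expr <- quote(a < a0)          # the same rule, already parsed
myfun_expr <- function(a0) eval(myrule_expr)
print(summary(microbenchmark::microbenchmark(
  parse(text = myrule),               # parsing alone: this is where the time goes
  a[myfun_expr(5)],                   # evaluating the pre-parsed rule: no parse() at all
  times = 1000L, unit = "us")))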
Edit in response to a comment providing more explanation:
You should store your rules as quoted expressions (e.g., in a list, using saveRDS if you need them as a file); see the sketch below. For convenience, you could then make that list of expressions an S3 object and create a nice print method for it in order to get a better overview.
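A minimal sketch of what that could look like (the names rules, run_rule and the sim_rules class are my own, purely illustrative):

# Store rules as unevaluated expressions instead of character strings
rules <- list(
  lt_threshold = quote(a < a0),
  gt_mean      = quote(a > mean(a))
)
saveRDS(rules, "rules.rds")           # persist the rule set if it must live in a file
rules <- readRDS("rules.rds")         # quoted expressions survive the round trip

run_rule <- function(rule, a0) {
  eval(rule)                          # no parse() at run time
}
a <- rnorm(100, 50, 10)
a[run_rule(rules$lt_threshold, 5)]

# Optional: give the rule list a class and a print method for a quick overview
class(rules) <- "sim_rules"
print.sim_rules <- function(x, ...) {
  for (nm in names(x)) cat(nm, ": ", deparse(x[[nm]]), "\n", sep = "")
  invisible(x)
}
rules                                 # printed via print.sim_rules()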