| Title: | Accurate Timing Functions |
|---|---|
| Description: | Provides infrastructure to accurately measure and compare the execution time of R expressions. |
| Authors: | Olaf Mersmann [aut], Claudia Beleites [ctb], Rainer Hurling [ctb], Ari Friedman [ctb], Joshua M. Ulrich [cre] |
| Maintainer: | Joshua M. Ulrich <[email protected]> |
| License: | BSD_2_clause + file LICENSE |
| Version: | 1.5.0 |
| Built: | 2024-10-25 05:47:38 UTC |
| Source: | https://github.com/joshuaulrich/microbenchmark |
Uses ggplot2 to produce a more legible graph of microbenchmark timings.
autoplot.microbenchmark( object, ..., order = NULL, log = TRUE, unit = NULL, y_max = NULL )
| object | A microbenchmark object. |
| ... | Ignored. |
| order | Names of output column(s) to order the results. |
| log | If TRUE, the time axis will be on log scale. |
| unit | The unit to use for graph labels. |
| y_max | The upper limit of the y axis, in the unit automatically chosen for the time axis (defaults to the maximum value). |
A ggplot2 object.
Ari Friedman, Olaf Mersmann
if (requireNamespace("ggplot2", quietly = TRUE)) {
  tm <- microbenchmark(rchisq(100, 0),
                       rchisq(100, 1),
                       rchisq(100, 2),
                       rchisq(100, 3),
                       rchisq(100, 5),
                       times = 1000L)
  ggplot2::autoplot(tm)
  # add a custom title
  ggplot2::autoplot(tm) + ggplot2::ggtitle("my timings")
}
Boxplot of microbenchmark timings.
## S3 method for class 'microbenchmark' boxplot( x, unit = "t", log = TRUE, xlab, ylab, horizontal = FALSE, main = "microbenchmark timings", ... )
| x | A microbenchmark object. |
| unit | Unit in which the results should be plotted. |
| log | Should times be plotted on log scale? |
| xlab | X axis label. |
| ylab | Y axis label. |
| horizontal | Switch X and Y axes. |
| main | Plot title. |
| ... | Passed on to boxplot.formula. |
Olaf Mersmann
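A minimal usage sketch for this boxplot method (the benchmarked expressions and labels here are illustrative, not from the package documentation):

```r
library(microbenchmark)

# Benchmark two ways of summing a vector
x <- runif(1e4)
res <- microbenchmark(sum(x), Reduce(`+`, x), times = 50L)

# Horizontal boxplot on a log scale, with times shown in microseconds
boxplot(res, unit = "us", log = TRUE, horizontal = TRUE,
        main = "sum() vs. Reduce()")
```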
The current value of the most accurate timer of the platform is returned. This can be used as a time stamp for logging or similar purposes. Please note that there is no common reference, that is, the timer value cannot be converted to a date and time value.
get_nanotime()
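Since the timer has no common reference point, only differences between two values are meaningful. A sketch of using it as a relative time stamp (the timed expression is arbitrary):

```r
library(microbenchmark)

t0 <- get_nanotime()
invisible(sum(runif(1e5)))   # some work to time
t1 <- get_nanotime()

elapsed_ns <- t1 - t0        # elapsed time in nanoseconds
cat("elapsed:", elapsed_ns / 1e3, "microseconds\n")
```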
Olaf Mersmann
microbenchmark serves as a more accurate replacement of the often seen system.time(replicate(1000, expr)) expression. It tries hard to accurately measure only the time it takes to evaluate expr. To achieve this, the sub-millisecond (supposedly nanosecond) accurate timing functions most modern operating systems provide are used. Additionally, all evaluations of the expressions are done in C code to minimize any overhead.
microbenchmark( ..., list = NULL, times = 100L, unit = NULL, check = NULL, control = list(), setup = NULL )
| ... | Expressions to benchmark. |
| list | List of unevaluated expressions to benchmark. |
| times | Number of times to evaluate each expression. |
| unit | Default unit used in summary and print. |
| check | A function to check if the expressions are equal. By default NULL, which omits the check; the examples below also use the character shortcuts 'equal', 'identical', and 'equivalent'. |
| control | List of control arguments. See Details. |
| setup | An unevaluated expression to be run (untimed) before each benchmark expression. |
This function is only meant for micro-benchmarking small pieces of source code and to compare their relative performance characteristics. You should generally avoid benchmarking larger chunks of your code using this function. Instead, try using the R profiler to detect hot spots and consider rewriting them in C/C++ or FORTRAN.
The control list can contain the following entries:

order: the order in which the expressions are evaluated. "random" (the default) randomizes the execution order, "inorder" executes each expression in order, and "block" executes all repetitions of each expression as one block.

warmup: the number of iterations to run the timing code before evaluating the expressions in .... These warm-up iterations are used to estimate the timing overhead as well as to spin up the processor from any sleep or idle states it might be in. The default value is 2.
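The control entries above can be sketched as follows (the benchmarked expressions are arbitrary placeholders):

```r
library(microbenchmark)

# Run all repetitions of each expression as one block,
# after 10 warm-up iterations instead of the default 2
res <- microbenchmark(
  rnorm(100),
  runif(100),
  times = 20L,
  control = list(order = "block", warmup = 10)
)
summary(res)
```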
Object of class 'microbenchmark', a data frame with columns expr and time. expr contains the deparsed expression as passed to microbenchmark or the name of the argument if the expression was passed as a named argument. time is the measured execution time of the expression in nanoseconds. The order of the observations in the data frame is the order in which they were executed.
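Because the result is an ordinary data frame, the raw nanosecond timings can be inspected directly (a sketch; the expressions benchmarked here are arbitrary):

```r
library(microbenchmark)

res <- microbenchmark(sqrt(2), 2^0.5, times = 10L)

str(res)   # columns 'expr' (factor) and 'time' (numeric, nanoseconds)

# median time per expression, converted to microseconds
tapply(res$time, res$expr, median) / 1e3
```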
Depending on the underlying operating system, different methods are used for timing. On Windows the QueryPerformanceCounter interface is used to measure the time passed. For Linux the clock_gettime API is used, and on Solaris the gethrtime function. Finally, on macOS the undocumented mach_absolute_time function is used to avoid a dependency on the CoreServices Framework.
Before evaluating each expression times times, the overhead of calling the timing functions and the C function call overhead are estimated. This estimated overhead is subtracted from each measured evaluation time. Should the resulting timing be negative, a warning is thrown and the respective value is replaced by 0. If the timing is zero, a warning is raised.

Should all evaluations result in one of the two error conditions described above, an error is raised.
One platform on which the clock resolution is known to be too low to measure short runtimes with the required precision is Oracle® Solaris on some SPARC® hardware. Reports of other platforms with similar problems are welcome. Please contact the package maintainer.
Olaf Mersmann
print.microbenchmark to display and boxplot.microbenchmark or autoplot.microbenchmark to plot the results.
## Measure the time it takes to dispatch a simple function call
## compared to simply evaluating the constant NULL
f <- function() NULL
res <- microbenchmark(NULL, f(), times = 1000L)

## Print results:
print(res)

## Plot results:
boxplot(res)

## Pretty plot:
if (requireNamespace("ggplot2")) {
  ggplot2::autoplot(res)
}

## Example check usage
my_check <- function(values) {
  all(sapply(values[-1], function(x) identical(values[[1]], x)))
}

f <- function(a, b) 2 + 2
a <- 2
## Check passes
microbenchmark(2 + 2, 2 + a, f(2, a), f(2, 2), check = my_check)

## Not run:
a <- 3
## Check fails
microbenchmark(2 + 2, 2 + a, f(2, a), f(2, 2), check = my_check)
## End(Not run)

## Example setup usage
set.seed(21)
x <- rnorm(10)
microbenchmark(x, rnorm(10), check = my_check, setup = set.seed(21))

## Will fail without setup
## Not run:
microbenchmark(x, rnorm(10), check = my_check)
## End(Not run)

## using check
a <- 2
microbenchmark(2 + 2, 2 + a, sum(2, a), sum(2, 2), check = 'identical')
microbenchmark(2 + 2, 2 + a, sum(2, a), sum(2, 2), check = 'equal')
attr(a, 'abc') <- 123
microbenchmark(2 + 2, 2 + a, sum(2, a), sum(2, 2), check = 'equivalent')

## check='equal' will fail due to difference in attribute
## Not run:
microbenchmark(2 + 2, 2 + a, sum(2, a), sum(2, 2), check = 'equal')
## End(Not run)
This function is currently experimental. Its main use is to judge the quality of the underlying timer implementation of the operating system. The function measures the overhead of timing a C function call rounds times and returns all non-zero timings observed. This can be used to judge the granularity and resolution of the timing subsystem.
microtiming_precision(rounds = 100L, warmup = 2^18)
| rounds | Number of measurements used to estimate the precision. |
| warmup | Number of iterations used to warm up the CPU. |
A vector of observed non-zero timings.
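A sketch of inspecting the timer's granularity on the current platform:

```r
library(microbenchmark)

prec <- microtiming_precision()

summary(prec)   # distribution of the observed non-zero timings (nanoseconds)
min(prec)       # rough lower bound on durations the timer can resolve
```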
Olaf Mersmann
Print microbenchmark timings.
## S3 method for class 'microbenchmark' print(x, unit, order, signif, ...)
| x | An object of class microbenchmark. |
| unit | What unit to print the timings in. Default value taken from the option microbenchmark.unit (see the examples). |
| order | If present, order results according to this column of the output. |
| signif | If present, limit the number of significant digits shown. |
| ... | Passed to summary. |
The available units are nanoseconds ("ns"), microseconds ("us"), milliseconds ("ms"), seconds ("s"), evaluations per second ("eps"), and relative runtime compared to the best median time ("relative").
If the multcomp package is available, a statistical ranking is calculated and displayed in compact letter display form in the cld column.
Olaf Mersmann
boxplot.microbenchmark and autoplot.microbenchmark for plot methods.
a1 <- a2 <- a3 <- a4 <- numeric(0)
res <- microbenchmark(a1 <- c(a1, 1),
                      a2 <- append(a2, 1),
                      a3[length(a3) + 1] <- 1,
                      a4[[length(a4) + 1]] <- 1,
                      times = 100L)
print(res)

## Change default unit to relative runtime
options(microbenchmark.unit = "relative")
print(res)

## Change default unit to evaluations per second
options(microbenchmark.unit = "eps")
print(res)
Summarize microbenchmark timings.
## S3 method for class 'microbenchmark' summary(object, unit, ..., include_cld = TRUE)
| object | An object of class microbenchmark. |
| unit | What unit to print the timings in. If none is given, either the unit attribute of object or the option microbenchmark.unit is used. |
| ... | Ignored. |
| include_cld | Calculate the cld (compact letter display) column? |
A data.frame containing the aggregated results.

The available units are nanoseconds ("ns"), microseconds ("us"), milliseconds ("ms"), seconds ("s"), evaluations per second ("eps"), and relative runtime compared to the best median time ("relative").