API

Creating benchmarks

From the command line:

AirspeedVelocity.BenchPkg.benchpkg – Function
benchpkg package_name [-r --rev <arg>] [-o, --output-dir <arg>]
                      [-s, --script <arg>] [-e, --exeflags <arg>]
                      [-a, --add <arg>] [--tune]
                      [--url <arg>] [--path <arg>]
                      [--bench-on <arg>] [--nsamples-load-time <arg>]

Benchmark a package over a set of revisions.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma). Use dirty to benchmark the current state of the package at path (and not a git commit).
  • -o, --output-dir <arg>: Where to save the JSON results.
  • -s, --script <arg>: The benchmark script. Default: benchmark/benchmarks.jl downloaded from stable.
  • -e, --exeflags <arg>: CLI flags for Julia (default: none).
  • -a, --add <arg>: Extra packages needed (delimit by comma).
  • --url <arg>: URL of the package.
  • --path <arg>: Path of the package.
  • --bench-on <arg>: If the script is not set, this specifies the revision at which to download benchmark/benchmarks.jl from the package.
  • --nsamples-load-time <arg>: Number of samples to take when measuring load time of the package (default: 5). (This means starting a Julia process for each sample.)

Flags

  • --tune: Whether to run benchmarks with tuning (default: false).
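As a concrete illustration (the package name, branch names, and paths below are hypothetical), one might compare two revisions and store the results in a dedicated directory:

```shell
# Compare the default branch against a feature branch of a
# hypothetical package "MyPackage", saving JSON results to ./results:
benchpkg MyPackage --rev=main,my-feature-branch --output-dir=results

# Benchmark the current (uncommitted) state of a local checkout
# against its main branch:
benchpkg MyPackage --rev=main,dirty --path=/path/to/MyPackage
```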

Or, directly from Julia:

AirspeedVelocity.Utils.benchmark – Method
benchmark(package_name::String, rev::Union{String,Vector{String}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package identified by package_name and rev. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package_name::String: The name of the package for which to run the benchmarks.
  • rev::Union{String,Vector{String}}: The revision of the package for which to run the benchmarks. You can also pass a vector of revisions to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • url::Union{String,Nothing}=nothing: URL of the package.
  • path::Union{String,Nothing}=nothing: Path to the package.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
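For instance, a minimal sketch of calling this method from Julia (the package name and revisions are hypothetical):

```julia
using AirspeedVelocity.Utils: benchmark

# Benchmark two revisions of a hypothetical package; this writes
# results_MyPackage@v1.0.0.json and results_MyPackage@main.json
# into ./results.
benchmark("MyPackage", ["v1.0.0", "main"]; output_dir="results")
```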
AirspeedVelocity.Utils.benchmark – Method
benchmark(package_specs::Union{PackageSpec,Vector{PackageSpec}}; output_dir::String=".", script::Union{String,Nothing}=nothing, tune::Bool=false, exeflags::Cmd=``, extra_pkgs::Vector{String}=String[])

Run benchmarks for a given Julia package.

This function runs the benchmarks specified in the script for the package defined by the package_spec. If script is not provided, the function will use the default benchmark script located at {PACKAGE_SRC_DIR}/benchmark/benchmarks.jl.

The benchmarks are run using the SUITE variable defined in the benchmark script, which should be of type BenchmarkTools.BenchmarkGroup. The benchmarks can be run with or without tuning depending on the value of the tune argument.

The results of the benchmarks are saved to a JSON file named results_packagename@rev.json in the specified output_dir.

Arguments

  • package_specs::Union{PackageSpec,Vector{PackageSpec}}: The package specification containing information about the package for which to run the benchmarks. You can also pass a vector of package specifications to run benchmarks for multiple versions of a package.
  • output_dir::String=".": The directory where the benchmark results JSON file will be saved (default: current directory).
  • script::Union{String,Nothing}=nothing: The path to the benchmark script file. If not provided, the default script at {PACKAGE}/benchmark/benchmarks.jl will be used.
  • tune::Bool=false: Whether to run benchmarks with tuning (default: false).
  • exeflags::Cmd=``: Additional execution flags for running the benchmark script (default: empty).
  • extra_pkgs::Vector{String}=String[]: Additional packages to add to the benchmark environment.
  • benchmark_on::Union{String,Nothing}=nothing: If the benchmark script file is to be downloaded, this specifies the revision to use.
  • nsamples_load_time::Int=5: Number of samples to take for the time-to-load benchmark.
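Equivalently, revisions can be given as package specifications (a sketch assuming PackageSpec from Pkg; all names below are hypothetical):

```julia
using AirspeedVelocity.Utils: benchmark
using Pkg: PackageSpec

specs = [
    PackageSpec(; name="MyPackage", rev="v1.0.0"),
    PackageSpec(; name="MyPackage", rev="main"),
]
benchmark(specs; output_dir="results", tune=true)
```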

Loading and visualizing benchmarks

From the command line:

AirspeedVelocity.BenchPkgTable.benchpkgtable – Function
benchpkgtable package_name [-r --rev <arg>] [-i --input-dir <arg>]
                           [--ratio] [--mode <arg>]

Print a table of the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma).
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").

Flags

  • --ratio: Whether to include the ratio (default: false). Only applies when comparing two revisions.
  • --mode <arg>: Table mode(s). Valid values are "time" (default), to print the benchmark time, or "memory", to print the allocation and memory usage. Both modes can be passed, delimited by a comma.
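For example (package and revision names are hypothetical), to print both a time table and a memory table with the comparison ratio:

```shell
# Summarize results stored in ./results for two revisions:
benchpkgtable MyPackage --rev=v1.0.0,main --input-dir=results \
    --ratio --mode=time,memory
```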
AirspeedVelocity.BenchPkgPlot.benchpkgplot – Function
benchpkgplot package_name [-r --rev <arg>] [-i --input-dir <arg>]
                          [-o --output-dir <arg>] [-n --npart <arg>]
                          [--format <arg>]

Plot the benchmarks of a package as created with benchpkg.

Arguments

  • package_name: Name of the package.

Options

  • -r, --rev <arg>: Revisions to test (delimit by comma).
  • -i, --input-dir <arg>: Where the JSON results were saved (default: ".").
  • -o, --output-dir <arg>: Where to save the plots (default: ".").
  • -n, --npart <arg>: Max number of plots per page (default: 10).
  • --format <arg>: File type to save the plots as (default: "png").
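For example (names again hypothetical), to plot two revisions with at most 5 plots per page, saved as PDFs:

```shell
benchpkgplot MyPackage --rev=v1.0.0,main --input-dir=results \
    --output-dir=plots --npart=5 --format=pdf
```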
AirspeedVelocity.Utils.load_results – Method
load_results(specs::Vector{PackageSpec}; input_dir::String=".")

Load the results from JSON files for each PackageSpec in the specs vector. The function assumes that the JSON files are located in the input_dir directory and are named "results_{spec}.json", where spec is equal to PackageName@Rev.

The function returns a combined OrderedDict, to be input to the combined_plots function.

Arguments

  • specs::Vector{PackageSpec}: Vector of each package revision to be loaded (as PackageSpec).
  • input_dir::String=".": Directory where the results. Default is current directory.

Returns

  • OrderedDict{String,OrderedDict}: Combined results ready to be passed to the combined_plots function.
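A short sketch of loading previously saved results (hypothetical names, assuming PackageSpec from Pkg):

```julia
using AirspeedVelocity.Utils: load_results
using Pkg: PackageSpec

# Loads results_MyPackage@v1.0.0.json and results_MyPackage@main.json
# from the ./results directory.
specs = [
    PackageSpec(; name="MyPackage", rev="v1.0.0"),
    PackageSpec(; name="MyPackage", rev="main"),
]
combined_results = load_results(specs; input_dir="results")
```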
AirspeedVelocity.PlotUtils.combined_plots – Method
combined_plots(combined_results::OrderedDict; npart=10)

Create a combined plot of the results loaded from the load_results function. The function partitions the plots into smaller groups of size npart (defaults to 10) and combines the plots in each group vertically. It returns an array of combined plots.

Arguments

  • combined_results::OrderedDict: Data to be plotted, obtained from the load_results function.
  • npart::Int=10: Max plots to be combined in a single vertical group. Default is 10.

Returns

  • Array{Plots.Plot,1}: An array of combined Plots.jl plot objects, with each element representing a group of up to npart vertically stacked plots.
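For example, a sketch that partitions the loaded results into pages and saves each one (assuming the combined results came from load_results and that Plots.jl is available for saving):

```julia
using AirspeedVelocity.PlotUtils: combined_plots
using Plots: savefig  # assumption: Plots.jl provides the save step

# Partition into pages of up to 5 vertically stacked plots.
pages = combined_plots(combined_results; npart=5)
for (i, page) in enumerate(pages)
    savefig(page, "benchmarks_page_$(i).png")
end
```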
AirspeedVelocity.TableUtils.create_table – Method
create_table(combined_results::OrderedDict; kws...)

Create a markdown table of the results loaded from the load_results function. If there are two results for a given benchmark, the table will have an additional column for the comparison, with the first revision taken as the baseline.

The formatter keyword argument controls how each column value is generated. It defaults to TableUtils.format_time, which prints the median time ± the interquartile range. TableUtils.format_memory is also available, which prints the number of allocations and the allocated memory.

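A brief sketch, assuming combined_results was produced by load_results:

```julia
using AirspeedVelocity.TableUtils: create_table, format_memory

time_table = create_table(combined_results)  # median time ± IQR per benchmark
mem_table  = create_table(combined_results; formatter=format_memory)
println(time_table)
```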