API

EquationSearch

SymbolicRegression.EquationSearch - Method
EquationSearch(X, y[; kws...])

Perform a distributed equation search for functions f_i that describe the mapping f_i(X[:, j]) ≈ y[i, j]. Configure the search with SymbolicRegression.Options(...) and pass the result via the options keyword argument. Parallelism can be turned off with numprocs=0, which is useful for debugging and profiling.

Arguments

  • X::AbstractMatrix{T}: The input dataset to predict y from. The first dimension is features, the second dimension is rows.
  • y::Union{AbstractMatrix{T}, AbstractVector{T}}: The values to predict. The first dimension is the output feature to predict with each equation, and the second dimension is rows.
  • niterations::Int=10: The number of iterations to run the search for. More iterations generally improve the results.
  • weights::Union{AbstractMatrix{T}, AbstractVector{T}, Nothing}=nothing: Optionally weight the loss for each y by this value (same shape as y).
  • varMap::Union{Array{String, 1}, Nothing}=nothing: The names of each feature in X, which will be used during printing of equations.
  • options::Options=Options(): The options for the search, such as which operators to use, evolution hyperparameters, etc.
  • numprocs::Union{Int, Nothing}=nothing: The number of processes to use, if you want EquationSearch to set this up automatically. By default this is 4, but it can be any number (pick a number less than or equal to the number of available cores).
  • procs::Union{Array{Int, 1}, Nothing}=nothing: If you have set up a distributed run manually with procs = addprocs() and @everywhere, pass the procs to this keyword argument.
  • runtests::Bool=true: Whether to run (quick) tests before starting the search, to see if there will be any problems during the equation search related to the host environment.
  • saved_state::Union{StateType, Nothing}=nothing: If you have already run EquationSearch and want to resume it, pass the state here. To get this to work, you need to have stateReturn=true in the options, which will cause EquationSearch to return the state. Note that you cannot change the operators or dataset, but most other options should be changeable.

Returns

  • hallOfFame::HallOfFame: The best equations seen during the search. hallOfFame.members gives an array of PopMember objects, which have their tree (equation) stored in .tree. Their score (loss) is given in .score. The array of PopMember objects is enumerated by size from 1 to options.maxsize.
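
A minimal usage sketch (the synthetic dataset, the niterations value, and numprocs=0 are illustrative choices, not defaults):

    using SymbolicRegression

    X = randn(Float32, 2, 100)               # 2 features × 100 rows
    y = 2 .* cos.(X[1, :]) .+ X[2, :] .^ 2   # one target value per row

    options = SymbolicRegression.Options()   # default operators and hyperparameters
    hallOfFame = EquationSearch(X, y; niterations=5, options=options, numprocs=0)

The returned hallOfFame can then be passed to calculateParetoFrontier (documented below) to extract the best equation at each complexity.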

Options

SymbolicRegression.Options - Method
Options(;kws...)

Construct options for EquationSearch and other functions.

Arguments

  • binary_operators=(div, plus, mult): Tuple of binary operators to use. Each operator should be defined for two input scalars and one output scalar. All operators need to be defined over the entire real line (excluding infinity; infinite values are filtered out before they reach the operator). Thus, log should be replaced with log_abs, etc. For speed, define each operator so that it takes two reals of the same type as input and outputs the same type. For the SymbolicUtils simplification backend, you will also need to define a generic method of the operator so that it accepts arbitrary types.
  • unary_operators=(exp, cos): Same, but for unary operators (one input scalar, gives an output scalar).
  • constraints=nothing: Array of pairs specifying size constraints for each operator. The constraints for a binary operator should be a 2-tuple (e.g., (-1, -1)) and the constraints for a unary operator should be an Int. A size constraint is a limit to the size of the subtree in each argument of an operator. e.g., [(^)=>(-1, 3)] means that the ^ operator can have arbitrary size (-1) in its left argument, but a maximum size of 3 in its right argument. Default is no constraints.
  • batching=false: Whether to evolve based on small mini-batches of data, rather than the entire dataset.
  • batchSize=50: What batch size to use if using batching.
  • loss=L2DistLoss(): What loss function to use. This can be one of the losses listed below, or any other loss of type SupervisedLoss. You can also pass a function that takes a scalar target (left argument) and a scalar prediction (right argument) and returns a scalar; this is averaged over the predicted data (see the sketch after this list). If weights are supplied, your function should take a third argument for the weight scalar. Included losses:
    Regression: LPDistLoss{P}(), L1DistLoss(), L2DistLoss() (mean square), LogitDistLoss(), HuberLoss(d), L1EpsilonInsLoss(ϵ), L2EpsilonInsLoss(ϵ), PeriodicLoss(c), QuantileLoss(τ).
    Classification: ZeroOneLoss(), PerceptronLoss(), L1HingeLoss(), SmoothedL1HingeLoss(γ), ModifiedHuberLoss(), L2MarginLoss(), ExpLoss(), SigmoidLoss(), DWDMarginLoss(q).
  • npopulations=nothing: How many populations of equations to use. By default this is set equal to the number of cores.
  • npop=1000: How many equations in each population.
  • ncyclesperiteration=300: How many generations to consider per iteration.
  • ns=10: Number of equations in each subsample during regularized evolution.
  • topn=10: Number of equations to return to the host process, and to consider for the hall of fame.
  • alpha=0.100000f0: The probability of accepting an equation mutation during regularized evolution is given by exp(-delta_loss/(alpha * T)), where T goes from 1 to 0. Thus, an infinite alpha is the same as no annealing.
  • maxsize=20: Maximum size of equations during the search.
  • maxdepth=nothing: Maximum depth of equations during the search. By default, this is set equal to maxsize.
  • parsimony=0.000100f0: A multiplicative factor for how much complexity is punished.
  • useFrequency=false: Whether to use a parsimony that adapts to the relative proportion of equations at each complexity; this ensures that a balanced number of equations is considered at every complexity.
  • fast_cycle=false: Whether to thread over subsamples of equations during regularized evolution. Slightly improves performance, but is a different algorithm.
  • migration=true: Whether to migrate equations between processes.
  • hofMigration=true: Whether to migrate equations from the hall of fame to processes.
  • fractionReplaced=0.1f0: What fraction of each population to replace with migrated equations at the end of each cycle.
  • fractionReplacedHof=0.1f0: What fraction to replace with hall of fame equations at the end of each cycle.
  • shouldOptimizeConstants=true: Whether to use NelderMead optimization to periodically optimize constants in equations.
  • optimizer_nrestarts=3: How many different random starting positions to consider when using NelderMead optimization.
  • hofFile=nothing: What file to store equations to, as a backup.
  • perturbationFactor=1.000000f0: When mutating a constant, either multiply or divide by (1+perturbationFactor)^(rand()+1).
  • probNegate=0.01f0: Probability of negating a constant in the equation when mutating it.
  • mutationWeights=[10.000000, 1.000000, 1.000000, 3.000000, 3.000000, 0.010000, 1.000000, 1.000000]: Relative weights for each of the mutation operations.
  • annealing=true: Whether to use simulated annealing.
  • warmupMaxsize=0: Whether to slowly increase the maximum size from 5 up to maxsize. If nonzero, the value specifies how many cycles (populations*iterations) pass before the maximum size increases by 1.
  • verbosity=convert(Int, 1e9): Whether to print debugging statements or not.
  • bin_constraints=nothing: Size constraints for the binary operators only, in the same format as constraints.
  • una_constraints=nothing: Size constraints for the unary operators only, in the same format as constraints.
  • seed=nothing: What random seed to use. nothing uses no seed.
  • progress=false: Whether to use a progress bar output (verbosity will have no effect).
  • probPickFirst=1.0: The probability of choosing the best expression in a subsample during regularized evolution; for p = probPickFirst, expressions are picked with probabilities p, p(1-p), p(1-p)^2, and so on, in order of decreasing fitness.
  • earlyStopCondition=nothing: If a Float, stop early once the mean loss drops below this value. If a function, it should take (loss, complexity) as arguments and return true (stop) or false (continue).
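
A hedged sketch of a customized Options object. The operator set, the constraint on /, the npopulations value, and the custom loss are illustrative choices rather than defaults; depending on the package version, the named functions plus, mult, and div may need to be used instead of +, *, and /:

    using SymbolicRegression

    # Hypothetical custom loss: absolute error per sample (averaged internally over the data).
    my_loss(target, prediction) = abs(target - prediction)

    options = SymbolicRegression.Options(
        binary_operators=(+, *, /, -),
        unary_operators=(cos, exp),
        constraints=[(/) => (-1, 5)],   # denominator subtrees limited to size 5
        npopulations=8,
        loss=my_loss,
    )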

Printing and Evaluation

SymbolicRegression.evalTreeArray - Method
evalTreeArray(tree::Node, cX::AbstractMatrix{T}, options::Options)

Evaluate a binary tree (equation) over a given input data matrix. The options contain all of the operators used. This function fuses doublets and triplets of operations for lower memory usage.

Returns

  • (output, complete)::Tuple{AbstractVector{T}, Bool}: the result, which is a 1D array, as well as whether the evaluation completed successfully (true/false). A complete value of false means an Inf or NaN was encountered, and a large loss should be assigned to the equation.
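
A hedged sketch of evaluating one discovered equation on new data; hallOfFame and options are assumed to come from a previous EquationSearch run, and picking the last member is purely illustrative:

    member = hallOfFame.members[end]   # one PopMember from the hall of fame
    Xnew = randn(Float32, 2, 50)       # must have the same number of features as the training data

    output, complete = evalTreeArray(member.tree, Xnew, options)
    if !complete
        @warn "Evaluation hit an Inf or NaN; assign a large loss to this equation."
    end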

SymbolicUtils.jl interface

SymbolicRegression.node_to_symbolic - Method
node_to_symbolic(tree::Node, options::Options;
            varMap::Union{Array{String, 1}, Nothing}=nothing,
            evaluate_functions::Bool=false,
            index_functions::Bool=false)

The interface to SymbolicUtils.jl. Passing a tree to this function will generate a symbolic equation in SymbolicUtils.jl format.

Arguments

  • tree::Node: The equation to convert.
  • options::Options: Options, which contains the operators used in the equation.
  • varMap::Union{Array{String, 1}, Nothing}=nothing: What variable names to use for each feature. Default is [x1, x2, x3, ...].
  • evaluate_functions::Bool=false: Whether to evaluate the operators, or leave them as symbolic.
  • index_functions::Bool=false: Whether to generate special names for the operators, which then allows one to convert back to a Node format using symbolic_to_node.
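
A hedged sketch of converting a discovered equation into a SymbolicUtils.jl expression; member and options are assumed from the example above, and the variable names are illustrative:

    eqn = node_to_symbolic(member.tree, options;
                           varMap=["x1", "x2"],
                           index_functions=true)

With index_functions=true, the operators receive special names so that the expression can later be converted back to a Node with symbolic_to_node.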

Pareto frontier

SymbolicRegression.calculateParetoFrontier - Method
calculateParetoFrontier(X::AbstractMatrix{T}, y::AbstractVector{T},
                        hallOfFame::HallOfFame, options::Options;
                        weights=nothing, varMap=nothing) where {T<:Real}

Compute the dominating Pareto frontier for a given hallOfFame. This is the list of equations where each equation has a better loss than all simpler equations.
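
A hedged sketch of extracting and printing the frontier, assuming X, y, hallOfFame, and options from the earlier examples and that the returned list contains the dominating PopMember objects:

    dominating = calculateParetoFrontier(X, y, hallOfFame, options)
    for member in dominating
        println("score = ", member.score, "   equation: ",
                node_to_symbolic(member.tree, options))
    end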
