Types

Equations

Equations are specified as binary trees with the Node type, defined as follows:

DynamicExpressions.EquationModule.Node (Type)
Node{T} <: AbstractExpressionNode{T}

Node defines a symbolic expression stored in a binary tree. A single Node instance is one "node" of this tree, and has references to its children. By tracing through the children nodes, you can evaluate or print a given expression.

Fields

  • degree::UInt8: Degree of the node. 0 for constants, 1 for unary operators, 2 for binary operators.
  • constant::Bool: Whether the node is a constant.
  • val::T: Value of the node. If degree==0 and constant==true, this is the value of the constant, with a type specified by the overall type of the Node (e.g., Float64).
  • feature::UInt16: Index of the feature, in the case of a feature node. Only defined (and used) if degree == 0 && constant == false.
  • op::UInt8: If degree==1, the index of the operator in operators.unaops; if degree==2, the index of the operator in operators.binops. In other words, this is an enum of the operators, and is dependent on the specific OperatorEnum object. Only defined if degree >= 1.
  • l::Node{T}: Left child of the node. Only defined if degree >= 1. Same type as the parent node.
  • r::Node{T}: Right child of the node. Only defined if degree == 2. Same type as the parent node. This is to be passed as the right argument to the binary operator.

Constructors

Node([T]; val=nothing, feature=nothing, op=nothing, l=nothing, r=nothing, children=nothing, allocator=default_allocator)
Node{T}(; val=nothing, feature=nothing, op=nothing, l=nothing, r=nothing, children=nothing, allocator=default_allocator)

Create a new node in an expression tree. If T is not specified either as a type parameter or as the first argument, it will be inferred from the value of val, or from the children l and/or r. If it cannot be inferred from these, it defaults to Float32.

The children keyword can be used instead of l and r and should be a tuple of children. This is to permit the use of splatting in constructors.

You may also construct nodes via the convenience operators generated by creating an OperatorEnum.

You may also specify a memory allocator for the node, other than the default Node{T}(), via the allocator keyword argument.
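As a sketch of how these constructors compose (assuming a small OperatorEnum with + and * as binary operators and cos as a unary operator; the operator indices are positions in those tuples):

```julia
using DynamicExpressions

operators = OperatorEnum(; binary_operators=(+, *), unary_operators=(cos,))

x1 = Node{Float64}(; feature=1)        # feature node: x1
c3 = Node{Float64}(; val=3.0)          # constant node: 3.0
u  = Node{Float64}(; op=1, l=x1)       # cos(x1); op=1 indexes unary_operators
t  = Node{Float64}(; op=2, l=u, r=c3)  # cos(x1) * 3.0; op=2 indexes binary_operators

# The `children` keyword is equivalent to passing `l` and `r`:
t2 = Node{Float64}(; op=2, children=(u, c3))
```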

There are a variety of constructors for Node objects, including:

Node(; val::DATA_TYPE=nothing, feature::Integer=nothing)
Node(op::Int, l::Node)
Node(op::Int, l::Node, r::Node)
Node(var_string::String)

When you create an Options object, the operators passed are also re-defined for Node types. This allows you to use, e.g., t = Node(; feature=1) * 3f0 to create a tree, so long as * was specified as a binary operator. This works automatically for operators defined in Base, and you can also get this to work for user-defined operators by using @extend_operators:

SymbolicRegression.InterfaceDynamicExpressionsModule.@extend_operators (Macro)
@extend_operators options

Extends all operators defined in this options object to work on the AbstractExpressionNode type. While this is already done by default for operators defined in Base when you create an Options object with define_helper_functions=true, it does not apply to user-defined operators. To extend those, you must apply this macro to the operator enum in the same module where the operators are defined.

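For instance, a hypothetical user-defined binary operator my_op could be extended like this (a sketch; my_op and the option values are illustrative):

```julia
using SymbolicRegression

my_op(x, y) = x + 2y  # hypothetical user-defined binary operator

options = Options(; binary_operators=[+, *, my_op])
@extend_operators options

x1 = Node(; feature=1)        # defaults to Node{Float32}
tree = my_op(x1, x1 * 3.0f0)  # now works directly on nodes
```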

When using these node constructors, types will automatically be promoted. You can convert the type of a node using convert:

Base.convert (Method)
convert(::Type{<:AbstractExpressionNode{T1}}, n::AbstractExpressionNode{T2}) where {T1,T2}

Convert an AbstractExpressionNode{T2} to an AbstractExpressionNode{T1}. This will recursively convert all children nodes to AbstractExpressionNode{T1}, using convert(T1, tree.val) at constant nodes.

Arguments

  • ::Type{AbstractExpressionNode{T1}}: Type to convert to.
  • tree::AbstractExpressionNode{T2}: AbstractExpressionNode to convert.
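For example, converting a Float32 tree to Float64 might look like this (a sketch):

```julia
using DynamicExpressions

t32 = Node{Float32}(; op=1,
                    l=Node{Float32}(; val=1.5f0),
                    r=Node{Float32}(; feature=1))

# Recursively converts all children; constant values go through convert(Float64, val):
t64 = convert(Node{Float64}, t32)
```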

You can set a tree (in-place) with set_node!:

set_node!(tree::Node{T}, new_tree::Node{T}) where {T}

You can create a copy of a node with copy_node:

DynamicExpressions.EquationModule.copy_node (Method)
copy_node(tree::AbstractExpressionNode; break_sharing::Val=Val(false))

Copy a node, recursively copying all children nodes. This is more efficient than the built-in copy.

If break_sharing is set to Val(true), sharing in a tree will be ignored.
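A minimal sketch of copying a node and then mutating the copy independently:

```julia
using DynamicExpressions

tree = Node{Float64}(; val=2.0)
tree2 = copy_node(tree)
tree2.val = 4.0   # `tree.val` is unchanged
```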

Population

Groups of equations are given as a population, which is an array of trees tagged with a score, loss, and birthdate; these values are stored in each PopMember.

SymbolicRegression.PopulationModule.Population (Type)
Population(pop::Array{PopMember{T,L}, 1})

Create a population from a list of PopMembers.

Population(dataset::Dataset{T,L};
           population_size, nlength::Int=3, options::Options,
           nfeatures::Int)

Create a random population and score it on the dataset.

Population(X::AbstractMatrix{T}, y::AbstractVector{T};
           population_size, nlength::Int=3,
           options::Options, nfeatures::Int,
           loss_type::Type=Nothing)

Create a random population and score it on the dataset.

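Putting this together, a random population might be created as follows (a sketch; the keyword values are illustrative):

```julia
using SymbolicRegression

X = randn(Float32, 2, 100)   # (nfeatures, n)
y = 2 .* X[1, :]
options = Options(; binary_operators=[+, *])
dataset = Dataset(X, y)
pop = Population(dataset; population_size=20, options=options, nfeatures=2)
```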

Population members

SymbolicRegression.PopMemberModule.PopMember (Type)
PopMember(t::AbstractExpressionNode{T}, score::L, loss::L)

Create a population member with a birth date at the current time. The type of the Node may be different from the type of the score and loss.

Arguments

  • t::AbstractExpressionNode{T}: The tree for the population member.
  • score::L: The score (normalized to a baseline, and offset by a complexity penalty)
  • loss::L: The raw loss to assign.
PopMember(dataset::Dataset{T,L},
          t::AbstractExpressionNode{T}, options::Options)

Create a population member with a birth date at the current time. Automatically compute the score for this tree.

Arguments

  • dataset::Dataset{T,L}: The dataset to evaluate the tree on.
  • t::AbstractExpressionNode{T}: The tree for the population member.
  • options::Options: What options to use.
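A sketch of constructing a member from a tree and a dataset (values illustrative):

```julia
using SymbolicRegression

X = randn(Float64, 2, 32)
y = X[1, :] .+ 1.0
dataset = Dataset(X, y)
options = Options(; binary_operators=[+, *])

tree = Node(; val=1.0)                      # the constant solution, as a tree
member = PopMember(dataset, tree, options)  # score and loss computed automatically
```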

Hall of Fame

SymbolicRegression.HallOfFameModule.HallOfFame (Type)
HallOfFame{T<:DATA_TYPE,L<:LOSS_TYPE}

A list of the best members seen at any point during the search, stored in .members, with .members[c] being the best member seen at complexity c. To include only the members which have actually been set, you can index with .members[exists].

Fields

  • members::Array{PopMember{T,L},1}: List of the best members seen at any time, ordered by complexity, with .members[1] the member with complexity 1.
  • exists::Array{Bool,1}: Whether the member at the given complexity has been set.
SymbolicRegression.HallOfFameModule.HallOfFame (Method)
HallOfFame(options::Options, ::Type{T}, ::Type{L}) where {T<:DATA_TYPE,L<:LOSS_TYPE,N<:AbstractExpressionNode}

Create empty HallOfFame. The HallOfFame stores a list of PopMember objects in .members, which is enumerated by size (i.e., .members[1] is the constant solution). .exists is used to determine whether the particular member has been instantiated or not.

Arguments:

  • options: The Options for the search, including whether it is deterministic.
  • T: Type of Nodes to use in the population. e.g., Float64.
  • L: Type of loss to use in the population. e.g., Float64.

Dataset

SymbolicRegression.CoreModule.DatasetModule.Dataset (Type)
Dataset{T<:DATA_TYPE,L<:LOSS_TYPE}

Fields

  • X::AbstractMatrix{T}: The input features, with shape (nfeatures, n).
  • y::AbstractVector{T}: The desired output values, with shape (n,).
  • n::Int: The number of samples.
  • nfeatures::Int: The number of features.
  • weighted::Bool: Whether the dataset is non-uniformly weighted.
  • weights::Union{AbstractVector{T},Nothing}: If the dataset is weighted, these specify the per-sample weight (with shape (n,)).
  • extra::NamedTuple: Extra information to pass to a custom evaluation function. Since this is an arbitrary named tuple, you could pass any sort of dataset you wish to here.
  • avg_y: The average value of y (weighted, if weights are passed).
  • use_baseline: Whether to use a baseline loss. This will be set to false if the baseline loss is calculated to be Inf.
  • baseline_loss: The loss of a constant function which predicts the average value of y. This is loss-dependent and should be updated with update_baseline_loss!.
  • variable_names::Array{String,1}: The names of the features, with shape (nfeatures,).
  • display_variable_names::Array{String,1}: A version of variable_names but for printing to the terminal (e.g., with unicode versions).
  • y_variable_name::String: The name of the output variable.
  • X_units: Unit information of X. When used, this is a vector of DynamicQuantities.Quantity{<:Any,<:Dimensions} with shape (nfeatures,).
  • y_units: Unit information of y. When used, this is a single DynamicQuantities.Quantity{<:Any,<:Dimensions}.
  • X_sym_units: Unit information of X. When used, this is a vector of DynamicQuantities.Quantity{<:Any,<:SymbolicDimensions} with shape (nfeatures,).
  • y_sym_units: Unit information of y. When used, this is a single DynamicQuantities.Quantity{<:Any,<:SymbolicDimensions}.
SymbolicRegression.CoreModule.DatasetModule.Dataset (Method)
Dataset(X::AbstractMatrix{T},
        y::Union{AbstractVector{T},Nothing}=nothing,
        loss_type::Type=Nothing;
        weights::Union{AbstractVector{T}, Nothing}=nothing,
        variable_names::Union{Array{String, 1}, Nothing}=nothing,
        y_variable_name::Union{String,Nothing}=nothing,
        extra::NamedTuple=NamedTuple(),
        X_units::Union{AbstractVector, Nothing}=nothing,
        y_units=nothing,
) where {T<:DATA_TYPE}

Construct a dataset to pass between internal functions.

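For instance (a sketch; the variable names are illustrative):

```julia
using SymbolicRegression

X = randn(Float64, 3, 50)   # (nfeatures, n)
y = randn(Float64, 50)
dataset = Dataset(X, y;
                  weights=rand(Float64, 50),
                  variable_names=["a", "b", "c"])
dataset.nfeatures  # 3
```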