Custom Operators

Define your own differential operators when the built-ins (partial, laplacian, jacobian, directional) don't cover your use case.

Prerequisite: Operators & Type Hierarchy explains AbstractOperator{N}, rank semantics, and basis derivative functors.

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂, ∂², ∇²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
f(p) = sin(p[1]) * cos(p[2])
u = f.(x)
```

The @operator macro lets you write PDE operators in mathematical notation. It translates symbolic expressions into composable operator objects:

```julia
k = 4.0
op = @operator ∇² + k * f   # Helmholtz-type operator: Δ + k·I
helm = op(x)

# Verify against separate built-in operators
expected = laplacian(x)(u) .+ k .* u
maximum(abs, helm(u) .- expected)
# 1.822542117224657e-12
```

Recognized symbols

| Symbol | Meaning |
| --- | --- |
| `∇²`, `Δ` | Laplacian |
| `∂(dim)` | First partial derivative in dimension `dim` |
| `∂²(dim)` | Second partial derivative in dimension `dim` |
| `∇ ⋅ (κ * ∇)` | Diffusion operator (scalar or vector `κ`) |
| `c ⋅ ∇` | Advection operator (vector `c`) |
| `f`, `I` | Identity operator |
| Everything else | Scalar coefficient |

Standard arithmetic (+, -, *) and unary negation work as expected. Scalars can be literals, variables, or expressions like k^2 or c[1].
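As a quick hedged sketch of combining these symbols (assuming the macro accepts this composition; `ν` and `c` are illustrative names, not part of the API), an advection-diffusion operator ν Δu − c ⋅ ∇u can be composed from the table above:

```julia
using RadialBasisFunctions
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:200]
u = [sin(p[1]) * cos(p[2]) for p in x]

ν = 0.1                  # illustrative diffusivity (assumed name)
c = SVector(1.0, 0.5)    # illustrative advection velocity (assumed name)

# ν Δu − c ⋅ ∇u, composed from the recognized symbols above
ad = @operator ν * ∇² - c ⋅ ∇
op = ad(x)
v = op(u)   # apply the composed operator to samples of u
```

The `⋅` here is parsed symbolically by the macro, so no `LinearAlgebra` import is needed for the expression itself.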

For more @operator recipes — diffusion, anisotropic diffusion, advection-diffusion — see the PDE Operators Cookbook.

Understanding Rank

The rank is auto-inferred — you don't need to specify it. This table explains what each rank means:

| Rank 0 (scalar output) | Rank 1 (vector output) |
| --- | --- |
| Output has the same shape as the input | Output gains a spatial dimension |
| Single weight matrix `W` | Tuple of `D` weight matrices |
| Laplacian, partial derivative, directional derivative | Gradient, Jacobian, Hessian-like operators |

For AbstractOperator inputs (from @operator or algebra), the rank is encoded in the type parameter. For Function closures, it's inferred by probing: a tuple return means rank 1, a single callable means rank 0. You can still pass rank explicitly as an override if needed.

Hermite Boundary Conditions

Custom operators support Hermite interpolation via the hermite keyword, just like built-in operators:

```julia
op = my_ℒ(data; hermite=(
    is_boundary=is_boundary,
    bc=bcs,
    normals=normals
))
```

See the Boundary Conditions section of Getting Started for details on Hermite interpolation.

The Contract

Any AbstractOperator — including results from @operator — can be called directly with data points to build a RadialBasisOperator. Under the hood this calls RadialBasisOperator(op, data; kw...).

For raw closure-based operators, the `custom` function wraps a closure `ℒ` in a `Custom{N}` operator:

```julia
custom(data, ℒ)
```

The function `ℒ` must follow a three-layer structure:

  1. `ℒ` receives the basis instance — e.g., `PHS(3; poly_deg=2)`

  2. It returns a callable `(x, xᵢ) -> value` — this evaluates the operator applied to the basis function

  3. The value is the operator applied to the basis function centered at `xᵢ`, evaluated at `x`

This callable fills the right-hand side of the stencil system that determines the weights. For a rank-0 operator it returns a scalar; for rank-1 it returns a tuple of callables (one per spatial dimension).
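To make the contract concrete, here is a minimal sketch that rebuilds the Laplacian through `custom` and checks it against the built-in. It assumes the same random-point setup shown at the top of this page:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∇²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

# Layer 1: receive the basis; layers 2-3: ∇²(basis) is the (x, xᵢ) -> value callable
lap_custom = custom(x, basis -> ∇²(basis))

# Should agree with the built-in Laplacian to rounding error
err = maximum(abs, lap_custom(u) .- laplacian(x)(u))
```

Because `∇²(basis)` is a single built-in functor, no dual dispatch is needed here (see the note at the end of this page).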

Escape Hatch: Function Form

If @operator can't express your operator, you can pass a closure directly to custom(). This is a last resort — if you find yourself needing this, consider opening an issue so macro support can be added.

Rank-1 example

For a rank-1 operator (one that adds a trailing dimension), return a tuple of callables. The @operator macro currently only produces rank-0 operators, so rank-1 requires the function form:

```julia
# Custom gradient: tuple of ∂/∂x₁ and ∂/∂x₂ functors
custom_grad = custom(x, basis -> (∂(basis, 1), ∂(basis, 2)))

# Compare with built-in jacobian
builtin_jac = jacobian(x)
maximum(abs, custom_grad(u) .- builtin_jac(u))
# 0.0
```

Each element of the tuple produces one column of the output matrix.

Dual dispatch for composed functors

When you compose multiple functors with arithmetic inside a lambda, you need two methods — one for the RBF basis and one for MonomialBasis. The @operator macro handles this automatically, which is why it's preferred.

Why dual dispatch is needed: The system calls your operator function with both the RBF basis (e.g., `PHS(3)`) and a `MonomialBasis` (for polynomial augmentation). RBF functors like `∇²(basis)` return `(x, xᵢ) -> scalar`, but monomial functors return `(b, x) -> nothing` (an in-place buffer fill). Arithmetic on `nothing` fails.

```julia
# Two-method operator function (advanced — prefer @operator for this)
function helmholtz_op(basis)
    lap = ∇²(basis)
    (x, xc) -> lap(x, xc) + k * basis(x, xc)
end
function helmholtz_op(basis::MonomialBasis)
    lap = ∇²(basis)
    function (b, x)
        b .= lap(x) .+ k .* basis(x)
        return nothing
    end
end

helm3 = Custom{0}(helmholtz_op)(x)
maximum(abs, helm3(u) .- helm(u))
# 0.0
```

Note

Simple cases that return a single functor directly — like basis -> ∂(basis, 1) — don't need dual dispatch. The built-in functors already handle both basis types internally. Two methods are only needed when you compose multiple functors with arithmetic.
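As a sketch of such a single-functor case (reusing the setup from the top of the page), the built-in `∂` functor passes straight through `custom` with no second method:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

# One functor, no arithmetic: ∂(basis, 1) already handles both basis types
dx = custom(x, basis -> ∂(basis, 1))
dudx = dx(u)   # approximates ∂u/∂x₁ = cos(p[1]) * cos(p[2]) at each point
```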