# Custom Operators
Define your own differential operators when the built-ins (partial, laplacian, jacobian, directional) don't cover your use case.
Prerequisite: Operators & Type Hierarchy explains AbstractOperator{N}, rank semantics, and basis derivative functors.
```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂, ∂², ∇²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
f(p) = sin(p[1]) * cos(p[2])
u = f.(x)
```

## @operator Macro (Recommended)
The @operator macro lets you write PDE operators in mathematical notation. It translates symbolic expressions into composable operator objects:
```julia
k² = 4.0
op = @operator ∇² + k² * f
helm = op(x)

# Verify against separate built-in operators
expected = laplacian(x)(u) .+ k² .* u
maximum(abs, helm(u) .- expected)
```

```
1.822542117224657e-12
```

### Recognized symbols
| Symbol | Meaning |
|---|---|
| ∇², Δ | Laplacian |
| ∂(dim) | First partial derivative in dimension dim |
| ∂²(dim) | Second partial derivative in dimension dim |
| ∇ ⋅ (κ * ∇) | Diffusion operator (scalar or vector κ) |
| c ⋅ ∇ | Advection operator (vector c) |
| f, I | Identity operator |
| Everything else | Scalar coefficient |
Standard arithmetic (+, -, *) and unary negation work as expected. Scalars can be literals, variables, or expressions like k^2 or c[1].
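As a sketch of how these pieces combine (the names op2, a, adv, and v are illustrative, not from the package docs), an expression can mix several table entries at once — a first partial, a scaled second partial, and the identity:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂, ∂²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

# Mix recognized symbols: ∂(1) + scaled ∂²(2) minus the identity f
a = 2.0
op2 = @operator ∂(1) + a * ∂²(2) - f
adv = op2(x)   # build the operator on the point cloud
v = adv(u)     # rank 0: v has the same length as u
```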
For more @operator recipes — diffusion, anisotropic diffusion, advection-diffusion — see the PDE Operators Cookbook.
## Understanding Rank
The rank is auto-inferred — you don't need to specify it. This table explains what each rank means:
| Rank 0 (scalar output) | Rank 1 (vector output) |
|---|---|
| Output has same shape as input | Output gains a spatial dimension |
| Single weight matrix W | Tuple of D weight matrices |
| Laplacian, partial derivative, directional derivative | Gradient, Jacobian, Hessian-like operators |
For AbstractOperator inputs (from @operator or algebra), the rank is encoded in the type parameter. For Function closures, it's inferred by probing: a tuple return means rank 1, a single callable means rank 0. You can still pass rank explicitly as an override if needed.
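As a sketch of what the probe sees (closure names lap_like and grad_like are illustrative), a closure returning one callable is inferred as rank 0, while a tuple of callables is inferred as rank 1:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂, ∇²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

# Rank 0: single callable returned, so output matches input shape
lap_like = custom(x, basis -> ∇²(basis))

# Rank 1: tuple return (one functor per dimension), so the
# output gains a spatial dimension — one column per tuple element
grad_like = custom(x, basis -> (∂(basis, 1), ∂(basis, 2)))
```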
## Hermite Boundary Conditions
Custom operators support Hermite interpolation via the hermite keyword, just like built-in operators:
```julia
op = my_ℒ(data; hermite=(
    is_boundary=is_boundary,
    bc=bcs,
    normals=normals
))
```

See the Boundary Conditions section of Getting Started for details on Hermite interpolation.
## The Contract
Any AbstractOperator — including results from @operator — can be called directly with data points to build a RadialBasisOperator. Under the hood this calls RadialBasisOperator(op, data; kw...).
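For instance (a sketch; helm_a and helm_b are illustrative names), the call form and the explicit constructor should build the same operator:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∇²
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

k² = 4.0
op = @operator ∇² + k² * f

# op(x) is sugar for the explicit constructor call
helm_a = op(x)
helm_b = RadialBasisOperator(op, x)
maximum(abs, helm_a(u) .- helm_b(u))  # expect ≈ 0
```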
For raw closure-based operators, the custom function wraps a closure ℒ in a Custom{N} operator:

```julia
custom(data, ℒ)
```

The function ℒ must follow a three-layer structure:

1. ℒ receives the basis instance, e.g. PHS(3; poly_deg=2).
2. It returns a callable (x, xᵢ) -> value that evaluates the operator applied to the basis function.
3. The value is the operator acting on the basis function centered at xᵢ, evaluated at x.
This callable fills the right-hand side of the stencil system that determines the weights. For a rank-0 operator it returns a scalar; for rank-1 it returns a tuple of callables (one per spatial dimension).
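A minimal closure satisfying this contract (a sketch; ℒ and my_dx are illustrative names) simply returns one of the built-in basis-derivative functors:

```julia
using RadialBasisFunctions
using RadialBasisFunctions: ∂
using StaticArrays

x = [SVector{2}(rand(2)) for _ in 1:100]
u = [sin(p[1]) * cos(p[2]) for p in x]

# Layer 1: ℒ receives the basis.
# Layer 2: ∂(basis, 1) is the returned (x, xᵢ) -> value callable.
# Layer 3: the value is ∂φ/∂x₁ of the basis function centered at xᵢ.
ℒ = basis -> ∂(basis, 1)
my_dx = custom(x, ℒ)
my_dx(u)  # ≈ cos.(first.(x)) .* cos.(last.(x)), up to RBF approximation error
```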
## Escape Hatch: Function Form
If @operator can't express your operator, you can pass a closure directly to custom(). This is a last resort — if you find yourself needing this, consider opening an issue so macro support can be added.
### Rank-1 example
For a rank-1 operator (one that adds a trailing dimension), return a tuple of callables. The @operator macro currently only produces rank-0 operators, so rank-1 requires the function form:
```julia
# Custom gradient: tuple of ∂/∂x₁ and ∂/∂x₂
custom_grad = custom(x, basis -> (∂(basis, 1), ∂(basis, 2)))

# Compare with built-in jacobian
builtin_jac = jacobian(x)
maximum(abs, custom_grad(u) .- builtin_jac(u))
```

```
0.0
```

Each element of the tuple produces one column of the output matrix.
### Dual dispatch for composed functors
When you compose multiple functors with arithmetic inside a lambda, you need two methods — one for the RBF basis and one for MonomialBasis. The @operator macro handles this automatically, which is why it's preferred.
Why dual dispatch is needed: The system calls ℒ with both the RBF basis (e.g., PHS(3)) and a MonomialBasis (for polynomial augmentation). RBF functors like ∇²(basis) return (x, xᵢ) -> scalar, but monomial functors return (b, x) -> nothing (in-place buffer fill). Arithmetic on nothing fails.
```julia
# Two-method operator function (advanced — prefer @operator for this)
function helmholtz_op(basis)
    lap = ∇²(basis)
    (x, xc) -> lap(x, xc) + k² * basis(x, xc)
end

function helmholtz_op(basis::MonomialBasis)
    lap = ∇²(basis)
    function (b, x)
        b .= lap(x) .+ k² .* basis(x)
        return nothing
    end
end

helm3 = Custom{0}(helmholtz_op)(x)
maximum(abs, helm3(u) .- helm(u))
```

```
0.0
```

Note: Simple cases that return a single functor directly — like basis -> ∂(basis, 1) — don't need dual dispatch. The built-in functors already handle both basis types internally. Two methods are only needed when you compose multiple functors with arithmetic.