Workflow with Builder Kit
This walkthrough shows you how to host workflows on the Orchestration Layer - the backbone of Brahma's Builder Kit. We'll explore how the Orchestration Layer simplifies automation management and operations.
Orchestration Layer
Orchestration Layer is a workflow management layer built on top of the Execution Layer. It manages the complete end-to-end lifecycle of an automation. The SDK consists of a KeyManager, Persisted Stores, and a scheduler which is responsible for managing and triggering the workflow.
Every Automation is an independent activity and can be written in any language of choice. This allows a more scalable approach to building automations while abstracting redundant tasks such as implementing a log store, a reliable scheduling engine, and crucial components like key management.
API
While developers can choose to self-host the Orchestration Layer, doing so requires additional infrastructure provisioning. The API offers access to the hosted version of the Orchestration Layer, which manages and runs your account's existing automations, like the Morpho Yield Optimizer.
Getting Started
To set up a workflow, we first need to create a configuration, which creates certain entities required for the automation:
POST ${KERNEL_BASE}/v1/config
- `registryID`: the registry ID for which this config is created
- `relayerCount`: the number of addresses to generate, which the Brahma relayer uses to relay transactions
- `verifyPayload`: if set to `true`, the API expects an HMAC signature of the payload, to verify the integrity and authenticity of the payload before executing
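Putting these fields together, a request body for this endpoint might look like the following sketch; the values are illustrative, and the `source` and `destination` objects are described in the sections below:

```json
{
  "registryID": "<your-registry-id>",
  "relayerCount": 2,
  "verifyPayload": true,
  "source": { "type": "INTERVAL", "value": 30 },
  "destination": { "baseURL": "https://executor.com", "signature": "0x..." }
}
```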
Source Configuration
The `source` object defines when and how often the automation should be triggered:

- `type`: specifies the trigger type. Must be either `"INTERVAL"` or `"FILTER"`.
  - `"INTERVAL"`: indicates that the automation runs at regular intervals
  - `"FILTER"`: used for blockchain event-based triggers, where `value` contains an eth filter query
- `value`: specifies either the interval duration or the filter configuration.
  - For `INTERVAL` type: the interval between executions. Example: `30` means the automation runs every 30 seconds when type is `INTERVAL`.
  - For `FILTER` type: contains an eth event filter query.
Example filter query:
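The original example is not reproduced here; as an illustration, a standard eth filter query that triggers on ERC-20 `Transfer` events from a given contract could look like this (the address is a placeholder; the topic is the keccak-256 hash of `Transfer(address,address,uint256)`):

```json
{
  "address": "0x1f9840a85d5aF5bf1D1762F925BDADdC4201F984",
  "topics": [
    "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef"
  ]
}
```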
Destination Configuration
The `destination` object defines where and how the automation should execute:

- `baseURL`: the endpoint URL where the automation hooks will be called. Must be a valid HTTPS URL. Example: `"https://executor.com"`
The `signature` field contains an EIP-712 signature from the executor's address, generated by signing a structured message containing the destination fields.
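The exact typed-data layout is not specified here; as a rough, hypothetical sketch of EIP-712 typed data over the destination fields, it could take a shape like:

```json
{
  "types": {
    "Destination": [
      { "name": "baseURL", "type": "string" }
    ]
  },
  "primaryType": "Destination",
  "message": {
    "baseURL": "https://executor.com"
  }
}
```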
This would result in a response like the following:
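A sketch of the response shape, based on the fields described below (all values are placeholders):

```json
{
  "id": "cfg-1a2b3c",
  "registryID": "<your-registry-id>",
  "executorAddress": "0xA1b2...",
  "verifyingAddress": "0xC3d4...",
  "hmacKey": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
}
```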
- `id`: the ID of the created config
- `registryID`: the executor's ID for which this config is created
- `executorAddress`: if autosign is set to `true`, this address is the new signing address generated by the Orchestration Layer and is used to sign execution requests
- `verifyingAddress`: the address that will be used to attest generated API payloads; can be used for verification on the executor's end
- `hmacKey`: the secret key used for request verification
  - A 256-bit (32-byte) key for HMAC operations
  - Used to sign and verify API payloads between the executor and the automation service
  - Should be stored securely and never exposed
  - Can be used with standard HMAC implementations or secure key-management systems like HashiCorp Vault
If `verifyPayload` is set to `true`, the executor must also include an HMAC signature of the payload along with the response.
Once registered, this configuration automatically handles the complete lifecycle of subscriptions:
New subscriptions are scheduled based on the trigger config
Destination hooks are called when triggers fire
Monitors are automatically deregistered when subscriptions are cancelled
This automation of subscription management lets developers focus solely on writing their automation-specific hooks.
Execution Hook
Execution hooks are called whenever the defined trigger fires. The Orchestration Layer then sends an execution payload, to which the executor can respond with the callData to execute, or can choose to skip.
POST {destination}/kernel/trigger
This payload contains everything the executor needs to validate and generate an execution request:
- `params`: contains everything related to the subscription, the subaccount address, and the metadata given by the user
- `requestID`: the unique request ID generated for this request; it is always hashed and signed with the HMAC key so the receiver can verify it
- `trigger`: contains the type and the data that caused this request to trigger; if the type is `INTERVAL`, `data` is the UTC timestamp
- `signature`: contains the signed `requestID`
- `context`: fields that can be used to derive previous context
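A sketch of the trigger payload shape, assembled from the fields above (field values and the exact nesting inside `params` and `context` are placeholders):

```json
{
  "params": {
    "subAccountAddress": "0x...",
    "subscription": { "...": "..." },
    "metadata": { "...": "..." }
  },
  "requestID": "req-123",
  "trigger": { "type": "INTERVAL", "data": "2024-01-01T00:00:00Z" },
  "signature": "...",
  "context": { "...": "..." }
}
```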
Once the executor receives this request, it can respond with a signed executable, which will be relayed, or it can skip this trigger and wait for the next one.
To execute the callData, the response should be:
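An illustrative sketch of the execute response, based on the fields described below (the contents of `task` are placeholders):

```json
{
  "skip": false,
  "requestID": "req-123",
  "task": { "...": "..." },
  "signature": "..."
}
```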
Here, the payload is similar to the execute task request, with extra fields:

- `skip`: indicates whether to skip the execution or not
- `requestID`: the request ID to which this is a response
- `task`: similar to the execute automation payload, except the webhook
- `signature`: if `verifyPayload` is set to `true` in the executor config, the executor needs to send this signature, signed using the given HMAC key. The payload to be signed is: `${requestID}:${executorSignature}`
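As a sketch of how the `${requestID}:${executorSignature}` payload might be signed and verified with a standard HMAC implementation; this assumes the key is hex-encoded and HMAC-SHA256 is the digest algorithm, neither of which is specified above:

```python
import hmac
import hashlib

def sign_payload(hmac_key_hex: str, request_id: str, executor_signature: str) -> str:
    """Sign the string `${requestID}:${executorSignature}` with the config's HMAC key."""
    key = bytes.fromhex(hmac_key_hex.removeprefix("0x"))
    message = f"{request_id}:{executor_signature}".encode()
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify_payload(hmac_key_hex: str, request_id: str, executor_signature: str, signature: str) -> bool:
    """Constant-time comparison of a received signature against the expected one."""
    expected = sign_payload(hmac_key_hex, request_id, executor_signature)
    return hmac.compare_digest(expected, signature)

# Illustrative 32-byte key; in practice, use the `hmacKey` returned by /v1/config.
key_hex = "ab" * 32
sig = sign_payload(key_hex, "req-123", "0xdeadbeef")
assert verify_payload(key_hex, "req-123", "0xdeadbeef", sig)
```

The constant-time `hmac.compare_digest` avoids leaking information through timing when checking signatures on the receiving side.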
To skip execution, the payload would be:
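A sketch of the skip response, reusing the same fields (values are placeholders):

```json
{
  "skip": true,
  "requestID": "req-123",
  "signature": "..."
}
```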
Here, the payload used to generate the `signature` field would be `${requestID}:0x`.
Post-Execution Hook
Once the payload sent above has been executed, or has failed to be relayed or validated by the policy, a post-execution hook is triggered. This hook is optional: executors can omit handling this endpoint, and a 404 results in this step being skipped altogether.
POST {destination}/kernel/executed
- `signature`: contains the signed `requestID`
- `successful`: whether the execution was successful or not; if `false`, the `error` field contains the description
- `relayResponse`: contains the standard response, which can be used to get the output transaction hash
- `executionRequestID`: the original request ID of the execution
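A sketch of the post-execution payload shape, based on the fields above (values and the contents of `relayResponse` are placeholders):

```json
{
  "executionRequestID": "req-123",
  "successful": true,
  "relayResponse": { "...": "..." },
  "signature": "..."
}
```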
The executor should respond with a 200 status code, choosing one of two options:

NO BODY
Nothing is sent in response; this is just a webhook acknowledgement.

JSON BODY
To persist logs, which can be queried in the future, are stored in persistent storage, and are associated with this execution, send them in the following format:
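The original format is not reproduced here; as a purely hypothetical placeholder, a custom log body might carry arbitrary executor-defined key-value data, for example:

```json
{
  "note": "rebalanced position",
  "lastPrice": "1.0012"
}
```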
Fetching Execution Logs
Every time an execution happens or the trigger is called, irrespective of the on-chain execution, a log is created and maintained by the Orchestration Layer. This allows better visibility and can remove the need for the executor to manage state. Each log also contains the custom log sent by the executor after execution, so the executor can use it to perform a lookback, persist state, or even query all values since inception.
GET ${KERNEL_BASE}/v1/:sub_id/logs?offset=0&limit=10
This would result in an array of logs like the following:
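A sketch of the log array's shape, based on the lifecycle described above (field names and values are placeholders):

```json
[
  {
    "executionRequestID": "req-123",
    "trigger": { "type": "INTERVAL", "data": "2024-01-01T00:00:00Z" },
    "successful": true,
    "customLog": { "...": "..." }
  }
]
```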
Summary
Our rebalancing workflow would now look something like this with a hosted workflow.
The API simplifies and abstracts the redundant and hard-to-manage components, making it particularly effective for trigger-driven automations.
However, executors should carefully evaluate which approach works best for them depending on the complexity of their automation.
Approaches
Hosted Solution
Use the hosted API
Ideal for simpler automations
Quick to implement and maintain
Self-Hosted
Deploy and manage Orchestration Layer independently
Complete control over infrastructure
Ability to run unlimited automations
Freedom to use any programming language
Developers can also leverage the Orchestration Layer by hosting it themselves end-to-end and can build any number of automations. These automations can be polyglot in nature, since the core Orchestration Layer is not constrained by language and comes with the Vault-secured key manager.
The next section elaborates more on the API itself and how it can be set up to run automations.