# Workflow with Builder Kit

## Orchestration Layer

The Orchestration Layer is a workflow-management layer built on top of the Execution Layer. It manages the complete end-to-end lifecycle of an automation. The SDK consists of a KeyManager, Persisted Stores, and a scheduler responsible for managing and triggering workflows.

Every Automation is an independent activity and can be written in any language of choice. This allows a more scalable approach to building automations while abstracting redundant tasks such as implementing a log store, a reliable scheduling engine, and crucial components like key management.

## API

While developers can choose to self-host the Orchestration Layer, doing so requires additional infrastructure provisioning. The API offers access to the hosted version of the Orchestration Layer, which is responsible for managing and running your account's existing automations, such as the Morpho Yield Optimizer.

### Getting Started

To set up a Workflow, we first need to create a configuration, which creates the entities required for the automation:

**POST** `${KERNEL_BASE}/v1/config`

```json
{
  "registryID": "UUID",
  "source": {
    "type": "INTERVAL",
    "value": 30
  },
  "destination": {
    "baseURL": "https://executor.com"
  },
  "verifyPayload": true,
  "relayerCount": 1,
  "signature": "0x123..."  // EIP-712 signature from executor's address
}
```

`registryID`: the registryID for which this config is created

`relayerCount`: the number of addresses to generate, which will be used by the Brahma relayer to relay transactions

`verifyPayload`: if set to `true`, the API expects an HMAC signature of the payload, to verify its integrity and authenticity before executing

### Source Configuration

The `source` object defines when and how often the automation should be triggered:

* `type`: Specifies the trigger type
  * `"INTERVAL"`: Indicates that the automation runs at regular intervals
  * `"FILTER"`: Used for blockchain event-based triggers, where value contains an eth filter query
  * Value must be either: `"INTERVAL"` or `"FILTER"`
* `value`: Specifies either the interval duration or filter configuration
  * For `INTERVAL` type: represents minutes between executions
  * For `FILTER` type: contains an eth event filter query
  * Example: `30` means the automation runs every 30 minutes when type is `INTERVAL`
  * Example filter query:

```json
{
    "type": "FILTER",
    "value": {
        "chainId": 1,
        "address": "0x1234...",
        "topics": [
            "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
            null,
            "0x000000000000000000000000a4d77147b70a25d4ce4ebf4641f9538307587958"
        ],
        "fromBlock": "latest"
    }
}
```
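As a sketch of the rules above, the `source` object can be modeled and checked client-side before submitting the config. The type and function names here are illustrative, not part of the SDK, and the filter shape is reduced to the fields shown in the example:

```typescript
// Illustrative model of the `source` object described above.
type SourceConfig =
  | { type: 'INTERVAL'; value: number } // minutes between executions
  | {
      type: 'FILTER';
      value: { chainId: number; address: string; topics: (string | null)[]; fromBlock: string };
    };

function isValidSource(src: SourceConfig): boolean {
  if (src.type === 'INTERVAL') {
    // Interval must be a positive whole number of minutes
    return Number.isInteger(src.value) && src.value > 0;
  }
  // FILTER: require at minimum a chain id and a contract address
  return typeof src.value.chainId === 'number' && src.value.address.startsWith('0x');
}
```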

### Destination Configuration

The `destination` object defines where and how the automation should execute:

* `baseURL`: The endpoint URL where the automation hooks will be called
  * Must be a valid HTTPS URL
  * Example: `"https://executor.com"`

The `signature` field contains an EIP-712 signature from the executor's address. This signature is generated by signing a structured message containing the destination fields:

```typescript
// EIP-712 Domain Configuration
const domain = {
    name: 'Brahma Builder Kit',
    version: '1'
};

// EIP-712 Type Definition
const types = {
    KernelAction: [
        { name: 'destinationURL', type: 'string' },
        { name: 'verifyPayload', type: 'bool' },
        { name: 'action', type: 'string' }
    ]
};

// Message to be signed
const message = {
    destinationURL: "https://executor.com",
    verifyPayload: true,
    action: 'CREATE'
};

// Example signing (using ethers.js)
const signature = await signer._signTypedData(domain, types, message);

```

This results in a response like:

```json
{
    "id": "UUID",
    "registryID": "UUID",
    "executorAddress": "0x123",
    "hmacKey": "ed2cff7120f726311a0eb048c2d3af3ef3a411cc93bcf82883bc8988638dcae7",
    "relayerAddress": ["0x123"]
}
```

`id`: the id of the created config

`registryID`: the executor's id for which this config is created

`executorAddress`: if autosign is set to true, this address would be the new signing address generated by the orchestration layer and would be used to sign execution requests

`verifyingAddress`: the address which will be used to attest generated API payloads and can be used for verification on the executor's end

`hmacKey`: The secret key used for request verification

* 256-bit (32-byte) key for HMAC operations
* Used to sign and verify API payloads between executor and automation service
* Should be stored securely and never exposed
* Can be used with standard HMAC implementations or secure key management systems like HashiCorp Vault
* if `verifyPayload` is set to `true`, the executor must also include an HMAC signature of the payload along with its responses

Once registered, this configuration automatically handles the complete lifecycle of subscriptions:

* New subscriptions are scheduled based on the trigger config
* Destination hooks are called when triggers fire
* Monitors are automatically deregistered when subscriptions are cancelled

This automation of subscription management lets developers focus solely on writing their automation-specific hooks.

### Execution Hook

Execution hooks are called whenever the defined trigger fires. The Orchestration Layer then sends an execution payload, to which the executor can respond with the callData to execute, or choose to skip.

**POST** `{destination}/kernel/trigger`

```json
{
  "context":{
    "executionCount": 11931,
    "prevExecutionAt": "2024-10-22T08:09:30.39587404Z",
    "prevExecutionID": "a4a4b4ff-3b5d-4f90-b434-747d388e305a-2024-10-22T08:09:30Z"
  },
  "params": {
    "executorAddress": "0xd1f745f0d14918a2c1e31153f2492891e4526ea4",
    "subAccountAddress": "0x8a17fe295bf517fbc148f565e5d6a2fe4f930cba",
    "executorID": "1fcc089a-9faf-40e9-9ec8-3e34d2f4614f",
    "chainID": 8453,
    "subscription": {
      "chainId": 8453,
      "commitHash": "0xae0205e59170df2e64e1fb3ffbe65d83538aeae38e5e63d9b306dc2dd5862828",
      "createdAt": "2024-10-18T03:48:13.496342Z",
      "duration": 30,
      "feeAmount": "0",
      "feeToken": "0xaf88d065e77c8cc2239327c5edb3a432268e5831",
      "id": "49366ccf-1980-4288-86b1-7ffcfd889d35",
      "metadata": {
        "baseToken": "0x4200000000000000000000000000000000000006",
        "every": "30"
      },
      "registryId": "1fcc089a-9faf-40e9-9ec8-3e34d2f4614f",
      "status": 2,
      "subAccountAddress": "0x8a17fe295bf517fbc148f565e5d6a2fe4f930cba",
      "tokenInputs": {
        "0x4200000000000000000000000000000000000006": "800000000000000",
        "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913": "0"
      },
      "tokenLimits": {
        "0x4200000000000000000000000000000000000006": "0.0008",
        "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913": "0"
      }
    }
  },
  "requestID": "b4acb4ff-3b5d-4f90-b434-747d388e305a-2024-10-22T08:09:30Z",
  "signature": "8a9cf1b2e96b9e3230bce8f10ac4319986f019b8435c44e976e2833346dba158",
  "trigger" : {
    "type": "INTERVAL",
    "data": "2024-10-22T08:09:30.276713056Z"
  }
}
```

This payload contains everything the executor needs to validate the trigger and generate an execution request.

`params`: contains everything related to the subscription, the subaccount address, and the metadata given by the user

`requestID`: the unique requestID generated for this request; it is always hashed and signed with the HMAC key so the receiver can verify it

`trigger`: contains the type and data that caused this request to trigger; if type is `INTERVAL`, data is the UTC timestamp

`signature`: contains the signed requestID

`context`: fields that can be used to derive the previous execution's context

Once the executor receives this request, it can respond with a signed executable, which will be relayed, or it can skip this trigger and wait for the next one.

To execute the callData, the response should be:

```json
{
  "skip":false,
  "requestID": "[src-request-id]",
  "task": {
    "subaccount": "[subAccount-address]",
    "executor": "[executor-address]",
    "executorSignature": "[executor-signature]",
    "executable": {
      "callType": "[call-type]",
      "to": "[target-address]",
      "value": "[value]",
      "data": "[data]"
    }
  },
  "signature":""
}
```

Here, the payload is similar to the execute task request, with extra fields:

`skip`: to indicate whether to skip the execution or not

`requestID`: the request ID to which this is a response

`task`: similar to the execute automation payload, excluding the webhook

`signature`: if `verifyPayload` is set to `true` in the executor config, the executor needs to send this signature, which is generated using the given HMAC key

The payload to be signed is: `${requestID}:${executorSignature}`

```typescript
import crypto from 'crypto';

// Payload to sign: ${requestID}:${executorSignature}
const hmacKey = "your-hmac-key";
const payloadToSign = `${requestID}:${executorSignature}`;
const signature = crypto
    .createHmac('sha256', hmacKey)
    .update(payloadToSign)
    .digest('hex');

```

To skip the execution, the payload would be:

```json
{
  "skip": true,
  "requestID": "[src-request-id]",
  "task": null,
  "signature": ""
}
```

Here, the payload used to generate the `signature` field would be `${requestID}:0x`.

### Post-Execution Hook

Once the payload sent above has been executed, or has failed to be relayed or validated by the policy, a post-execution hook is triggered. This step is optional: executors can omit handling this endpoint, and a 404 results in this step being skipped altogether.

**POST** `{destination}/kernel/executed`

```json
{
  "requestID":"",
  "successful": true,
  "error":{
    "type":"",
    "message":""
  },
  "executionRequestID":"",
  "relayResponse":{
    "taskId": "[task-id]",
    "metadata": {
      "request": {
        "taskId": "[task-id]",
        "to": "[to]",
        "callData": "[call-data]",
        "requestedAt": 0,
        "timeout": 0,
        "signer": "[signer]",
        "chainID": "[chain-id]",
        "useSafeGasEstimate": false,
        "maxGasLimit": 0,
        "enableAccessList": false,
        "backendId": "[backend-id]",
        "webhook": "[webhook-url]"
      },
      "response": {
        "isSuccessful": false,
        "error": "[error]",
        "transactionHash": null
      }
    },
    "outputTransactionHash": null,
    "status": "[status]",
    "createdAt": "0001-01-01T00:00:00Z"
  },
  "signature":""
}
```

`signature`: contains the signed requestID

`successful`: whether the execution was successful; if `false`, the `error` field contains the description

`relayResponse`: contains the standard response which can be used to get the output transaction hash

`executionRequestID`: the original request ID of the execution

The executor should respond with a 200 status code, in one of two ways:

#### NO BODY

Nothing is sent in the response; this acts as a plain webhook.

#### JSON BODY

To persist logs that can be queried in the future, stored in persistent storage, and associated with this execution, send them in the following format:

```json
{
    "log": {} // any JSON value; this will be persisted and can be queried later
}
```
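As an illustration, a handler for this hook could decide between the two response options based on the outcome. The types and function names below are hypothetical, not part of the API:

```typescript
// Hypothetical shape of the post-execution payload, reduced to the
// fields used below.
interface PostExecutionPayload {
  requestID: string;
  successful: boolean;
  error?: { type: string; message: string };
}

// Decide what to send back from the /kernel/executed hook:
// either a `{ log: ... }` body to persist, or null for an empty 200.
function handleExecuted(payload: PostExecutionPayload): { log: object } | null {
  if (!payload.successful) {
    // Persist the failure reason so it can be queried from the logs API
    return { log: { status: 'failed', reason: payload.error?.message ?? 'unknown' } };
  }
  // Persist a minimal success marker; returning null would skip logging
  return { log: { status: 'ok', requestID: payload.requestID } };
}
```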

## Fetching Execution Logs

Every time an execution happens or the trigger is called, irrespective of on-chain execution, a log is created and maintained by the Orchestration Layer. This allows better visibility and potentially removes the need for the executor to manage state. Each log also contains the custom log sent by the executor after execution, so the executor can use this to perform a lookback, persist state, or even query all values since inception.

**GET**  `${KERNEL_BASE}/v1/:sub_id/logs?offset=0&limit=10`

This returns an array of logs like the following:

```json
{
  "data": [
      {
        "metadata": {
          "postExecutionState": {}, // sent back as part of post-execution webhook response
          "request": {}, // the execution request payload
          "response": {} // response from the executor
        },
        "message": "trigger",
        "createdAt": "0001-01-01T00:00:00Z",
        "subAccountAddress": "0x8a17fe295bf517fbc148f565e5d6a2fe4f930cba",
        "chainId": 8453,
        "id": "abcdef123456",
        "subId": "49366ccf-1980-4288-86b1-7ffcfd889d35",
        "outputTxHash": "0x0000000000000000000000000000000000000000000000000000000000000000"
      }
    ]
}
```
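Since the logs endpoint is paginated by `offset` and `limit`, a small helper can page through all entries. The HTTP call is injected so the sketch stays client-agnostic; `fetchPage` is a placeholder for a real request to the logs endpoint that returns one page's `data` array:

```typescript
// Page through all logs using the offset/limit query parameters.
// `fetchPage(offset, limit)` should return the `data` array of one page.
async function fetchAllLogs<T>(
  fetchPage: (offset: number, limit: number) => Promise<T[]>,
  limit = 10,
): Promise<T[]> {
  const all: T[] = [];
  for (let offset = 0; ; offset += limit) {
    const page = await fetchPage(offset, limit);
    all.push(...page);
    // A short page means we have reached the end
    if (page.length < limit) break;
  }
  return all;
}
```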

## Summary

With the hosted workflow, our rebalancing workflow now looks something like this:

<figure><img src="https://1982200391-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FrTh7VrjJ4wgoL7inqiR5%2Fuploads%2F8s8JHok1sBEroiP70XEJ%2Fimage.png?alt=media&#x26;token=ab808f5b-b98f-4182-9816-53692b3b970c" alt=""><figcaption></figcaption></figure>

API simplifies and abstracts the redundant and hard-to-manage components, making it particularly effective for trigger-driven automations.

However, executors should carefully evaluate which approach works best for them depending on the complexity of their automation.

### Approaches

1. **Hosted Solution**
   * Use the hosted API
   * Ideal for simpler automations
   * Quick to implement and maintain
2. **Self-Hosted**
   * Deploy and manage Orchestration Layer independently
   * Complete control over infrastructure
   * Ability to run unlimited automations
   * Freedom to use any programming language

Developers can leverage the Orchestration Layer by hosting it themselves end-to-end and building any number of automations. The setup can be polyglot in nature, since the core Orchestration Layer is not constrained to any language, and it comes with the vault-secured key manager.

The next section elaborates more on the API itself and how it can be set up to run automations.
