

Opni requires an S3 endpoint to store the AI models for the Drain and Inference services. This can be an endpoint for an external S3-compatible API, or Opni can deploy a SeaweedFS pod to serve the S3 API.


```yaml
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    internal: {}
```

Custom Resource Specs


S3Spec

| Field | Required | Type | Description |
| --- | --- | --- | --- |
| `internal` | No | InternalSpec | If set, will deploy an internal S3 endpoint to use |
| `external` | No | ExternalSpec | The reference to the external S3-compatible API to use |
| `nulogS3Bucket` | No | string | Name of the S3 bucket to use for the Nulog model. Defaults to `opni-nulog-models` |
| `drainS3Bucket` | No | string | Name of the S3 bucket to use for the Drain model. Defaults to `opni-drain-model` |
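For instance, an S3 spec that keeps the internal endpoint but overrides both default bucket names could look like the following (the bucket names are illustrative):

```yaml
spec:
  s3:
    # Deploy the internal SeaweedFS-backed endpoint
    internal: {}
    # Override the default bucket names
    nulogS3Bucket: my-nulog-models
    drainS3Bucket: my-drain-model
```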


InternalSpec

| Field | Required | Type | Description |
| --- | --- | --- | --- |
| `persistence` | No | PersistenceSpec | If set, SeaweedFS will be configured to use persistent storage |


PersistenceSpec

| Field | Required | Type | Description |
| --- | --- | --- | --- |
| `enabled` | No | bool | Whether persistent storage is enabled. Defaults to `false` |
| `storageClassName` | No | string | If persistent storage is enabled, the name of the StorageClass to use. If not set, the default StorageClass is used |
| `accessModes` | No | string array | An array of the access modes the volume supports |
| `request` | No | string | The size of the volume to request. Defaults to `10Gi` |
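Putting the PersistenceSpec fields together, a sketch of an internal endpoint backed by a 20Gi persistent volume might look like this (the StorageClass name is an illustrative assumption):

```yaml
spec:
  s3:
    internal:
      persistence:
        enabled: true
        # Assumed StorageClass name; use one available in your cluster
        storageClassName: longhorn
        accessModes:
          - ReadWriteOnce
        request: 20Gi
```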


ExternalSpec

| Field | Required | Type | Description |
| --- | --- | --- | --- |
| `endpoint` | Yes | string | The external S3 endpoint URL |
| `credentials` | Yes | SecretReference | Reference to a secret containing the S3 credentials. It must have `accessKey` and `secretKey` items |
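As a sketch, using an external endpoint involves creating a secret with the required `accessKey` and `secretKey` items and referencing it from the spec. The endpoint URL, secret name, and key values below are illustrative placeholders:

```yaml
# Secret holding the S3 credentials; must contain accessKey and secretKey items
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: opni
stringData:
  accessKey: <access-key>
  secretKey: <secret-key>
---
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    external:
      # Illustrative external S3-compatible endpoint
      endpoint: https://s3.example.com
      credentials:
        name: s3-credentials
```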