
# S3

Opni requires an S3 endpoint to store the AI models for the Drain and Inference services. This can be the endpoint of an external S3-compatible API, or Opni can deploy a SeaweedFS pod to serve the S3 API internally.

example.yaml

```yaml
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    internal: {}
```

## Custom Resource Specs

### S3Spec

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| internal | No | InternalSpec | If set, an internal S3 endpoint will be deployed and used |
| external | No | ExternalSpec | Reference to the external S3-compatible API to use |
| nulogS3Bucket | No | string | Name of the S3 bucket to use for the Nulog model. Defaults to `opni-nulog-models` |
| drainS3Bucket | No | string | Name of the S3 bucket to use for the Drain model. Defaults to `opni-drain-model` |
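For example, a cluster using the internal endpoint with custom bucket names might look like the following. The bucket names here are illustrative; both fields are optional and fall back to the defaults above:

```yaml
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    internal: {}
    nulogS3Bucket: my-nulog-models  # illustrative; defaults to opni-nulog-models
    drainS3Bucket: my-drain-model   # illustrative; defaults to opni-drain-model
```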

### InternalSpec

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| persistence | No | PersistenceSpec | If set, SeaweedFS will be configured to use persistent storage |

### PersistenceSpec

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| enabled | No | bool | Whether persistent storage is enabled. Defaults to `false` |
| storageClassName | No | string | If persistent storage is enabled, the name of the StorageClass to use. If not set, the default StorageClass is used |
| accessModes | No | string array | An array of the access modes the volume supports |
| request | No | string | The size of the volume to request. Defaults to `10Gi` |
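Putting these fields together, an internal endpoint backed by persistent storage might be configured as follows. The StorageClass name and request size are illustrative assumptions, not required values:

```yaml
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    internal:
      persistence:
        enabled: true
        storageClassName: local-path  # illustrative; omit to use the default StorageClass
        accessModes:
          - ReadWriteOnce
        request: 20Gi                 # illustrative; defaults to 10Gi
```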

### ExternalSpec

| Field | Required | Type | Description |
|-------|----------|------|-------------|
| endpoint | Yes | string | The external S3 endpoint URL |
| credentials | Yes | SecretReference | Reference to a secret containing the S3 credentials. It must have `accessKey` and `secretKey` items |
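As a sketch, an external endpoint could be referenced like this, assuming the `credentials` field takes a standard SecretReference identifying the secret by name. The secret name, endpoint URL, and credential values below are all illustrative; the secret must carry `accessKey` and `secretKey` items:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials      # illustrative name
  namespace: opni
stringData:
  accessKey: MYACCESSKEY    # illustrative value
  secretKey: MYSECRETKEY    # illustrative value
---
apiVersion: opni.io/v1beta1
kind: OpniCluster
metadata:
  name: example
  namespace: opni
spec:
  s3:
    external:
      endpoint: https://s3.example.com  # illustrative URL
      credentials:
        name: s3-credentials
```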