AWS S3 Sink
This page describes how to use the AWS S3 service with the IntegrationSink API for Eventing in OpenShift Serverless.
Note: OpenShift Serverless Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Developer Preview features, see access.redhat.com/support/offerings/devpreview/.
Amazon credentials
To connect to AWS, the IntegrationSink uses a Kubernetes Secret present in the namespace of the resource. You can create the Secret as follows:
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
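Equivalently, the Secret can be defined declaratively. The manifest below is a sketch: the key names aws.accessKey and aws.secretKey match the command above, while the placeholder values must be replaced with your own credentials.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
  namespace: <namespace>
type: Opaque
stringData:
  # Plain-text values; Kubernetes encodes stringData to base64 automatically.
  aws.accessKey: <accessKey>
  aws.secretKey: <secretKey>
```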
AWS S3 Sink Example
The following IntegrationSink sends data to an Amazon S3 bucket:
apiVersion: sinks.knative.dev/v1alpha1
kind: IntegrationSink
metadata:
  name: integration-sink-aws-s3
  namespace: knative-samples
spec:
  aws:
    s3:
      arn: "arn:aws:s3:::my-bucket"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
Inside the aws.s3 object, you define the bucket by its name or Amazon Resource Name (ARN), along with its region. The credentials for the AWS service are referenced from the my-secret Kubernetes Secret.
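As a sketch of how this sink might be wired into an event flow, the Trigger below delivers events from a Broker to the IntegrationSink by referencing it as the subscriber. The Broker name default and the Trigger name are assumptions for illustration.

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: s3-trigger  # hypothetical name
  namespace: knative-samples
spec:
  broker: default  # assumed Broker name
  subscriber:
    ref:
      apiVersion: sinks.knative.dev/v1alpha1
      kind: IntegrationSink
      name: integration-sink-aws-s3
```

With this in place, events that reach the Broker and match the Trigger's filter (here, all events) are forwarded to the S3 sink.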
For more details, see the documentation for the Apache Camel Kamelet aws-s3-sink.