diff --git a/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc b/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc
new file mode 100644
index 000000000..01a9c01a3
--- /dev/null
+++ b/docs/modules/ROOT/partials/aws-s3-event-based-source-description.adoc
@@ -0,0 +1,11 @@
+== AWS S3 Event Based Source Kamelet Description
+
+=== Authentication methods
+
+The basic method for authenticating to the AWS SQS Service is to specify an access key and a secret key.
+
+=== Required Setup
+
+To use this Kamelet you need to set up Eventbridge on your bucket and subscribe the Eventbridge bus to an SQS Queue.
+
+To do this, enable Eventbridge notification on your bucket and create a rule on the Eventbridge console that matches all events on the S3 bucket and points to the SQS Queue specified as a parameter in this Kamelet.
diff --git a/docs/modules/ROOT/partials/aws-s3-sink-description.adoc b/docs/modules/ROOT/partials/aws-s3-sink-description.adoc
new file mode 100644
index 000000000..dee86f3d5
--- /dev/null
+++ b/docs/modules/ROOT/partials/aws-s3-sink-description.adoc
@@ -0,0 +1,26 @@
+== AWS S3 Sink Kamelet Description
+
+=== Authentication methods
+
+In this Kamelet you can avoid using explicit static credentials by setting the `useDefaultCredentialsProvider` option to `true`.
+
+The Default Credentials Provider evaluates credentials in the following order:
+
+ - Java system properties - `aws.accessKeyId` and `aws.secretKey`.
+ - Environment variables - `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.
+ - Web Identity Token from AWS STS.
+ - The shared credentials and config files.
+ - Amazon ECS container credentials - loaded from Amazon ECS if the environment variable `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` is set.
+ - Amazon EC2 Instance profile credentials.
+
+You can also use the Profile Credentials Provider by setting the `useProfileCredentialsProvider` option to `true` and `profileCredentialsName` to the profile name.
+
+Only one of access key/secret key or the default credentials provider can be used.
+
+For more information, see the https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/credentials.html[AWS credentials documentation].
+
+=== Optional Headers
+
+In the header, you can optionally set the `file` / `ce-file` property to specify the name of the file to upload.
+
+If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
diff --git a/kamelets/aws-s3-event-based-source.kamelet.yaml b/kamelets/aws-s3-event-based-source.kamelet.yaml
index 396d3508c..a3c524785 100644
--- a/kamelets/aws-s3-event-based-source.kamelet.yaml
+++ b/kamelets/aws-s3-event-based-source.kamelet.yaml
@@ -33,15 +33,7 @@ metadata:
 spec:
   definition:
     title: AWS S3 Event Based Source
-    description: >-
-      Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
-
-      Access Key/Secret Key are the basic method for authenticating to the AWS
-      SQS Service.
-
-      To use this Kamelet you'll need to set up Eventbridge on your bucket and subscribe Eventbridge bus to an SQS Queue.
-
-      For doing this you'll need to enable Evenbridge notification on your bucket and creating a rule on Eventbridge console related to all the events on S3 bucket and pointing to the SQS Queue specified as parameter in this Kamelet.
+    description: Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
     required:
       - accessKey
       - secretKey
diff --git a/kamelets/aws-s3-sink.kamelet.yaml b/kamelets/aws-s3-sink.kamelet.yaml
index fd0a2a1dd..d73e2f6eb 100644
--- a/kamelets/aws-s3-sink.kamelet.yaml
+++ b/kamelets/aws-s3-sink.kamelet.yaml
@@ -31,16 +31,7 @@ metadata:
 spec:
   definition:
     title: "AWS S3 Sink"
-    description: |-
-      Upload data to an Amazon S3 Bucket.
-
-      The basic authentication method for the S3 service is to specify an access key and a secret key. These parameters are optional because the Kamelet provides a default credentials provider.
-
-      If you use the default credentials provider, the S3 client loads the credentials through this provider and doesn't use the basic authentication method.
-
-      In the header, you can optionally set the `file` / `ce-partition` property to specify the name of the file to upload.
-
-      If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
+    description: Upload data to an Amazon S3 Bucket.
     required:
       - bucketNameOrArn
       - region
diff --git a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
index 396d3508c..a3c524785 100644
--- a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
+++ b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-event-based-source.kamelet.yaml
@@ -33,15 +33,7 @@ metadata:
 spec:
   definition:
     title: AWS S3 Event Based Source
-    description: >-
-      Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
-
-      Access Key/Secret Key are the basic method for authenticating to the AWS
-      SQS Service.
-
-      To use this Kamelet you'll need to set up Eventbridge on your bucket and subscribe Eventbridge bus to an SQS Queue.
-
-      For doing this you'll need to enable Evenbridge notification on your bucket and creating a rule on Eventbridge console related to all the events on S3 bucket and pointing to the SQS Queue specified as parameter in this Kamelet.
+    description: Receive data from AWS SQS subscribed to Eventbridge Bus reporting events related to an S3 bucket or multiple buckets.
     required:
       - accessKey
       - secretKey
diff --git a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
index fd0a2a1dd..d73e2f6eb 100644
--- a/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
+++ b/library/camel-kamelets/src/main/resources/kamelets/aws-s3-sink.kamelet.yaml
@@ -31,16 +31,7 @@ metadata:
 spec:
   definition:
     title: "AWS S3 Sink"
-    description: |-
-      Upload data to an Amazon S3 Bucket.
-
-      The basic authentication method for the S3 service is to specify an access key and a secret key. These parameters are optional because the Kamelet provides a default credentials provider.
-
-      If you use the default credentials provider, the S3 client loads the credentials through this provider and doesn't use the basic authentication method.
-
-      In the header, you can optionally set the `file` / `ce-partition` property to specify the name of the file to upload.
-
-      If you do not set the property in the header, the Kamelet uses the exchange ID for the file name.
+    description: Upload data to an Amazon S3 Bucket.
     required:
       - bucketNameOrArn
      - region
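
To make the new sink documentation concrete, the default-credentials flow described in aws-s3-sink-description.adoc could be exercised with a binding along these lines. This is a minimal sketch, not part of the change set: it assumes the Pipe API (apiVersion camel.apache.org/v1) and the catalog's timer-source Kamelet as a data producer, and the Pipe name, bucket, and region are placeholders.

[source,yaml]
----
apiVersion: camel.apache.org/v1
kind: Pipe
metadata:
  name: timer-to-s3                 # placeholder name
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: timer-source            # produces a message periodically
    properties:
      message: Hello from the Kamelet catalog
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1
      name: aws-s3-sink
    properties:
      bucketNameOrArn: my-bucket              # placeholder bucket
      region: eu-west-1                       # placeholder region
      useDefaultCredentialsProvider: true     # no accessKey/secretKey set here
----

With `useDefaultCredentialsProvider` enabled, credentials are resolved through the chain listed in the new description (system properties, environment variables, STS web identity, shared files, ECS, EC2 instance profile) instead of static keys.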
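Similarly, the optional `file` / `ce-file` header documented for the sink can be set from a plain Camel route before calling the Kamelet. The sketch below uses the Camel YAML DSL and assumes a timer endpoint as the trigger; the file-name pattern, bucket, and region are illustrative only.

[source,yaml]
----
# Camel YAML DSL route: set the `file` header so the upload gets a custom name
- from:
    uri: "timer:s3-upload?period=10000"
    steps:
      - setBody:
          constant: "hello from the route"
      - setHeader:
          name: file
          expression:
            simple: "upload-${exchangeId}.txt"   # if omitted, the Kamelet falls back to the exchange ID
      - to: "kamelet:aws-s3-sink?bucketNameOrArn=my-bucket&region=eu-west-1&useDefaultCredentialsProvider=true"
----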