---

copyright:
  years: 2024
lastupdated: "2024-05-24"

keywords: DevSecOps, inventory model, inventory, IBM Cloud

subcollection: devsecops

---
{{site.data.keyword.attribute-definition-list}}
# Configuring {{site.data.keyword.cos_full_notm}} for evidence storage
{: #cd-devsecops-cos-bucket-evidence}
You can configure {{site.data.keyword.cos_full_notm}} (COS) buckets to store the evidence that is generated by the compliance checks that are integrated into the DevSecOps pipelines. Compliance evidence creates the audit trail that auditors look for during a compliance audit. One of the goals of DevSecOps is automated evidence generation and storage in auditable evidence lockers. For more information, see evidence locker.
{: shortdesc}
The compliance automation pipeline stores the following information in the COS bucket:
Task artifacts
:   Test results, scan results, or any saved output by tasks.

Task logs
:   After the pipeline runs, the logs for that run are sent to the evidence locker.

Evidence
:   Information about tasks and their result output, which can be either a failure or a success. For more information about the format of the evidence that is sent, see Evidence summary.
## Configuring the Cloud {{site.data.keyword.cos_short}} bucket
{: #cd-devsecops-cos-bucket-config}
A dedicated Cloud {{site.data.keyword.cos_short}} instance must be created before you set up a continuous integration or continuous deployment toolchain. The COS bucket in this instance is used for compliance-related storage because evidence lockers must be created in-boundary to your applications. Keeping the locker in-boundary helps to improve the resiliency of your pipeline. For more information, see Resiliency.
To configure your Cloud {{site.data.keyword.cos_short}} bucket to act as a compliance evidence locker as part of a continuous integration or continuous deployment pipeline, you can use the following information as a guide. The pipeline or toolchain template scripts do not set up the locker in Cloud {{site.data.keyword.cos_short}}.
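The following minimal sketch shows one way to create such a bucket yourself with the `ibm-cos-sdk` Python package; the endpoint, API key, service instance CRN, and bucket name are placeholder values that you replace with your own.

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholder values; replace with your own endpoint, credentials, and name.
COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"
COS_API_KEY = "<api-key>"
COS_INSTANCE_CRN = "<service-instance-crn>"
BUCKET_NAME = "my-compliance-evidence-locker"

# Low-level S3-style client against the chosen regional endpoint.
cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=COS_API_KEY,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT,
)

# Create the bucket that the toolchain later uses as its evidence locker.
cos.create_bucket(Bucket=BUCKET_NAME)
```
{: codeblock}

The later sketches in this topic reuse the `cos` client that is constructed here.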
## Cloud {{site.data.keyword.cos_short}} bucket content format
{: #cd-devsecops-cos-bucket-content-format}
Cloud {{site.data.keyword.cos_short}} object names consist of the following components:
{NAMESPACE}/{PIPELINE_RUN_ID}/{TYPE}/{FILE_NAME}_{HASH}
{: codeblock}
### Example
{: #cd-devsecops-cos-bucket-content-example}
ci/48decaa9-9042-498f-b58d-3577e0ac0158/evidences/build-vulnerability-advisor.json_362c06afa88b3f304878f0d0979e834f
ci/48decaa9-9042-498f-b58d-3577e0ac0158/artifacts/app-image-va-report.json_b3f30487f0d0979e834f362c06afaaa8
{: codeblock}
The name component {NAMESPACE}/{PIPELINE_RUN_ID}/{TYPE} is useful as a prefix when you're looking for collected data from a certain pipeline run.
{: tip}
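For example, a small helper like the following sketch can list everything that one pipeline run collected under a given type; it reuses the `cos` client from the setup sketch, and the bucket name, namespace, and run ID are placeholders.

```python
def list_run_objects(cos, bucket, namespace, pipeline_run_id, obj_type):
    """List the object keys that one pipeline run stored under a given type."""
    prefix = f"{namespace}/{pipeline_run_id}/{obj_type}/"
    response = cos.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in response.get("Contents", [])]

# Example: all evidence that one continuous integration run collected.
keys = list_run_objects(
    cos,
    bucket="my-compliance-evidence-locker",
    namespace="ci",
    pipeline_run_id="48decaa9-9042-498f-b58d-3577e0ac0158",
    obj_type="evidences",
)
print("\n".join(keys))
```
{: codeblock}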
## Bucket retention policy
{: #cd-devsecops-cos-bucket-retention}
You can set Cloud {{site.data.keyword.cos_short}} buckets to enforce a retention policy or period for uploaded objects, otherwise known as Immutable Object Storage. Immutable Object Storage preserves electronic records and maintains data integrity. Retention policies ensure that data is stored in a Write-Once-Read-Many (WORM), nonerasable, and nonrewritable manner. You cannot change or delete objects in protected buckets within the retention period, and you cannot delete protected buckets that still contain objects until the retention period is over. The policy is enforced until the end of the retention period and the removal of any legal holds.
It is recommended that teams set a retention policy on the buckets that are used as their evidence locker so that every object is stored for a minimum of 365 days.
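The following sketch shows one way to apply such a policy, assuming that your version of the `ibm-cos-sdk` Python package exposes the Immutable Object Storage protection-configuration extension; the bucket name and the minimum, default, and maximum retention periods are illustrative values only.

```python
# Apply a Write-Once-Read-Many (WORM) retention policy to the evidence locker.
# Assumes the ibm-cos-sdk client (`cos`, from the setup sketch) exposes the
# protection-configuration extension for Immutable Object Storage.
cos.put_bucket_protection_configuration(
    Bucket="my-compliance-evidence-locker",
    ProtectionConfiguration={
        "Status": "Retention",
        "MinimumRetention": {"Days": 365},  # objects are locked for at least a year
        "DefaultRetention": {"Days": 365},  # used when no period is given on upload
        "MaximumRetention": {"Days": 730},  # upper bound that can be requested
    },
)
```
{: codeblock}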
## Activity tracking and logging
{: #cd-devsecops-cos-bucket-log}
For buckets that are not configured for activity tracking, access to log data for Immutable Object Storage is available through support requests.
With {{site.data.keyword.at_full_notm}}, you can audit the requests that are made against a bucket and the objects it contains. You can review all of the {{site.data.keyword.cos_short}} events that are related to the evidence locker bucket on the {{site.data.keyword.at_full_notm}} web UI. You can also have all of the data that is collected in an instance of {{site.data.keyword.at_full_notm}} archived and written to a separate bucket.
## Bucket permissions
{: #cd-devsecops-cos-bucket-permissions}
Pipelines write objects (evidence, evidence summary, and artifacts) to buckets and read objects (evidence summary) from them. The tools do not alter or delete objects, and they do not create, update, or delete buckets.
Use the following access policies for the Cloud {{site.data.keyword.cos_short}} buckets; the sketch after the list illustrates the requests that each role covers:
- Reader. Required for the CD pipeline to check the bucket retention policy.
- Object Writer. Required for all CI, CD, and CC pipelines to persist new evidence into buckets.
When you use the Cloud {{site.data.keyword.cos_short}} bucket as the evidence storage, the recommended permissions are Reader and Object Writer. Permissions with higher privileges than necessary might be harmful.
{: note}
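To make the mapping concrete, the following sketch pairs each role with the kind of request that it covers, reusing the `cos` client from the earlier sketches; the object key and payload are placeholders, and the protection-configuration call again assumes that the IBM SDK extension is available.

```python
# Object Writer: persist a new piece of evidence under the run prefix.
cos.put_object(
    Bucket="my-compliance-evidence-locker",
    Key="ci/48decaa9-9042-498f-b58d-3577e0ac0158/evidences/"
        "build-vulnerability-advisor.json_362c06afa88b3f304878f0d0979e834f",
    Body=b'{"result": "success"}',  # illustrative payload only
)

# Reader: the continuous deployment pipeline checks the bucket retention
# policy, which is a read-only request against the bucket configuration.
policy = cos.get_bucket_protection_configuration(
    Bucket="my-compliance-evidence-locker"
)
# Response structure assumed; check the documentation for your SDK version.
print(policy["ProtectionConfiguration"])
```
{: codeblock}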
## Bucket storage classes and costs
{: #cd-devsecops-cos-bucket-classes}
Costs vary for teams with different setups and deployment frequencies. It is not recommended that you use the free tier for Cloud {{site.data.keyword.cos_short}} buckets because free-tier buckets cannot be configured to be immutable.
### Estimating requests and data volume
{: #cd-devsecops-cos-bucket-estimate}
If you are working with a reference continuous integration or continuous deployment pipeline that collects six evidence items in each run, a single continuous integration and continuous deployment run pair makes 37 Class A requests and six Class B requests.
- Continuous integration writes six logs, six artifacts, and six evidence items, which equals 18 PUT (Class A) requests.
- Continuous deployment reads six evidence items (six GET requests, Class B) and writes six evidence items, six logs, six artifacts, and a summary, which equals 19 PUT (Class A) requests.
With an average of five microservices (five x continuous integration) and four deployment regions (four x continuous deployment), one full deployment equals 166 Class A and 24 Class B requests.
With one full deployment per week (four per month), you can calculate 664 Class A and 96 Class B requests per month.
The amount of data that is collected varies by use case. With average sizes for evidence (1 kB), test artifacts (100 kB), and logs (15 kB), you can calculate approximately 0.01 GB of data that is created and transferred per month.
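The request arithmetic can be reproduced in a few lines of Python; the numbers below only restate the assumptions from this estimate (six evidence items per run, five microservices, four regions, and roughly four full deployments per month).

```python
# Class A (PUT) and Class B (GET) requests per single pipeline run.
ci_class_a = 6 + 6 + 6          # CI: logs + artifacts + evidence PUTs = 18
cd_class_a = 6 + 6 + 6 + 1      # CD: evidence + logs + artifacts + summary PUTs = 19
cd_class_b = 6                  # CD: evidence summary GETs

# One full deployment: five microservices (CI runs) and four regions (CD runs).
full_class_a = 5 * ci_class_a + 4 * cd_class_a   # 90 + 76 = 166
full_class_b = 4 * cd_class_b                    # 24

# Roughly four full deployments per month.
print(4 * full_class_a, 4 * full_class_b)        # 664 Class A, 96 Class B
```
{: codeblock}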
## Bucket resiliency
{: #cd-devsecops-cos-bucket-resiliency}
It is recommended that you use Cross-Region or Regional resiliency if the evidence needs to be kept in-boundary. For more information about these regions, see Endpoints and storage locations.
## Bucket names
{: #cd-devsecops-cos-bucket-name}
Cloud {{site.data.keyword.cos_short}} bucket names must be globally unique and DNS-compliant. Names must be 3 - 63 characters in length and can contain only lowercase letters, numbers, and dashes. Bucket names must begin and end with a lowercase letter or number. Names that resemble IP addresses are not allowed. Bucket names must be unique across the entire {{site.data.keyword.cos_full_notm}} system, and they cannot contain any personal information, such as any part of a name or address, financial or security accounts, or SSNs.
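As a rough illustration, a candidate name can be pre-checked locally before you create the bucket; the regular expression in the following sketch only approximates the rules that are listed above, and the {{site.data.keyword.cos_short}} API remains the final authority.

```python
import re

# Approximation of the naming rules: 3 - 63 characters, lowercase letters,
# numbers, and dashes, starting and ending with a letter or number, and not
# shaped like an IP address.
NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")
IP_RE = re.compile(r"^\d{1,3}(?:\.\d{1,3}){3}$")

def looks_like_valid_bucket_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name)) and not IP_RE.fullmatch(name)

print(looks_like_valid_bucket_name("my-compliance-evidence-locker"))  # True
print(looks_like_valid_bucket_name("Evidence_Locker"))                # False
```
{: codeblock}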
## Bucket endpoints
{: #cd-devsecops-cos-bucket-endpoint}
Use private endpoints for most requests that originate from within {{site.data.keyword.cloud}} and use public endpoints for most requests that originate from outside {{site.data.keyword.cloud}}. For more information, see Endpoint Types.
For pipelines that are running in the London region, use direct endpoints due to the pipeline-managed worker infrastructure there.
{: note}
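The endpoint choice only changes the `endpoint_url` that the client is constructed with; the following sketch shows the general host patterns, using `us-south` and `eu-gb` as example regions (substitute your own region and verify the exact hostnames for your account).

```python
# Typical Cloud Object Storage endpoint patterns; substitute your own region.
PUBLIC_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"
PRIVATE_ENDPOINT = "https://s3.private.us-south.cloud-object-storage.appdomain.cloud"
DIRECT_ENDPOINT = "https://s3.direct.eu-gb.cloud-object-storage.appdomain.cloud"

def pick_endpoint(runs_inside_ibm_cloud: bool, london_pipeline: bool) -> str:
    """Choose an endpoint per the guidance in this section."""
    if london_pipeline:
        return DIRECT_ENDPOINT
    return PRIVATE_ENDPOINT if runs_inside_ibm_cloud else PUBLIC_ENDPOINT

# Pass the chosen value as endpoint_url when you construct the client,
# as in the setup sketch earlier in this topic.
print(pick_endpoint(runs_inside_ibm_cloud=True, london_pipeline=False))
```
{: codeblock}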
## Best practices for configuring multiple applications
{: #best-practices-multiple-apps}
The following tips and best practices can help when you configure multiple applications on your CI toolchain, because that setup involves multiple properties and their information:

- Name the triggers by the application name followed by their purpose, so that people can easily filter on them in the dashboard. For example, `hello-world Git <branch> Trigger`.
- Maximize the usage of trigger properties in the manual triggers to reduce the need to insert input values. This also has the helpful effect of moving those values to the top of the properties list, so there's no need to search for the properties that need to change.