- Upgrade to Cumulus v18.5.2
- Add cumulus outputs to allow deployment of the `rds_lambda`, which is used as a plugin for PyLOT
- Upgrade to Cumulus v18.5.1
- NOTE: This version of Cumulus requires changes to the RDS database
- For the Serverless v2 RDS migration, please see the Cumulus instructions
- Upgrade to Cumulus v18.5.0
- NOTE: This version of Cumulus requires changes to the RDS database
- For the Serverless v2 RDS migration, please see the Cumulus instructions
- Upgrade to Cumulus v18.4.0
- Add 'Docker in Docker' functionality by giving the container access to the host's Docker engine. This requires running `make docker-in-docker-permissions`
- Upgrade to Cumulus v18.3.3
- Add a `.gitconfig` file to the Docker image to mark `/CIRRUS-core` and `/CIRRUS-DAAC` as safe directories
- Tag resources using the AWS provider-level `default_tags` configuration
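  For illustration, provider-level default tags look roughly like the sketch below; the tag keys and values are placeholders, not the exact tags CIRRUS applies:

  ```hcl
  # Sketch only; tag keys and values are placeholders
  provider "aws" {
    region = "us-west-2"

    default_tags {
      tags = {
        Deployment = "my-daac-cumulus-sbx" # hypothetical deployment tag
        ManagedBy  = "terraform"
      }
    }
  }
  ```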
- Update the Makefile so that `make image` can be run without having set any environment variables.
- Update core image to build from `aws/lambda/python:3.9` as the `python3` target
- Upgrade to Cumulus v18.3.1
- Added separate `urs_tea_client_id` and `urs_tea_client_password` that can be specified if these are different from the non-TEA versions of the variables.
- Added optional `ecs_include_docker_cleanup_cronjob` variable, defaulting to `false`.
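  To enable it, a hypothetical override in the appropriate CIRRUS-DAAC cumulus tfvars file would be:

  ```hcl
  # Hypothetical override; the variable defaults to false
  ecs_include_docker_cleanup_cronjob = true
  ```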
- Fixed the value of the output `report_granules_sns_topic_arn` to point to `module.cumulus.report_granules_sns_topic_arn` instead of `report_executions_sns_topic_arn`.
- Updated `aws_s3_object.bucket_map_yaml` so we only deploy this TEA bucket map when we don't provide a `bucket_map_key` from the `daac` module.
- Add a Makefile target to import the TEA lambda CloudWatch log group if getting a "The specified log group already exists" error: `make import-thin-egress-log`
- Update cumulus module to v18.3.1
- Fix typo: `thottled_queue_execution_limit` to `throttled_queue_execution_limit` in `cumulus/variables.tf`
- Update data-persistence module to v18.3.1
- Update Dockerfile:
- Fix `FromAsCasing` warning: 'as' and 'FROM' keywords' casing do not match (line 1)
- Fix `LegacyKeyValueFormat` warning: use `ENV key=value` instead of the legacy `ENV key value` format
- `NODE_VERSION="20.x"`
- `TERRAFORM_VERSION="1.9.2"`
- `AWS_CLI_VERSION="2.17.13"`
- Upgrade to `amazonlinux:2023` from `amazonlinux:2`
- Use `dnf` instead of `yum`
- Upgrade to Python 3.9.x from Python 3.8
- Update TEA module from 1.3.5 -> 2.0.1
- TEA - Breaking Changes (For full list of changes visit the TEA release page)
- The `/locate` endpoint now requires the full bucket name to be provided in the `bucket_name` query parameter. Previously it expected only the trailing part of the bucket name with the `BUCKET_NAME_PREFIX` stripped off.
- NOTE - POST UPGRADE this version of Cumulus requires changes to the dead-letter archive per the instructions found under CUMULUS-3617
- Update `DOCKER_TAG := v18.3.1.0` in the Makefile
- Upgrade to Cumulus v18.2.0
- NOTE this version of Cumulus requires changes to the RDS database per these instructions
- upgrade TEA to v1.3.5
- Update required Terraform version to `>= 1.5` in all CIRRUS modules, matching the requirements from the Cumulus application.
- Add `DAR=YES` tag to the Terraform state bucket created by `make tf`
- Replace deprecated use of Terraform `s3_bucket_object` with `s3_object`
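  For reference, the rename is a drop-in change; a minimal sketch (the resource label and arguments are illustrative):

  ```hcl
  # Previously (deprecated):
  #   resource "aws_s3_bucket_object" "bucket_map_yaml" { ... }
  # Now, with the same arguments:
  resource "aws_s3_object" "bucket_map_yaml" {
    bucket  = var.system_bucket            # hypothetical variable
    key     = "bucket_map.yaml"            # illustrative key
    content = file("bucket_map.yaml.tmpl") # illustrative content source
  }
  ```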
- expose the TEA lambda timeout value to allow for DAAC customization
- Add `--platform linux/amd64` to all Docker commands in the Makefile so `make image` and `make container-shell` work on Apple Silicon machines
- Remove terraform lockfiles to ensure easy upgrade to v18 and ratchet provider versions
- Capture and pass through CIRRUS-core and CIRRUS-DAAC versions
- Upgrade to Cumulus v18.0.0
- NOTE: If you are upgrading from a version of CIRRUS earlier than `v17.0.0.3` you may observe a CloudFormation error. Use the information in the `v17.0.0.3` entry below to resolve it
- This version of Cumulus uses Terraform v1.5.3; it's possible that DAAC Terraform code may need to be updated.
- Pass tags to the Thin Egress App module in the `cumulus` cirrus module
- Add `html_template_dir` variable to the cumulus module
- NOTE: ORNL observed a TEA CloudFormation error on the first run of `make cumulus` with this release. DO NOT rerun `make cumulus` until you resolve the problem per this document
- Adds the following outputs to the `cumulus` cirrus module, which were added to the Cumulus core module outputs in Cumulus v17:
  - `orca_recovery_adapter_task`
  - `orca_copy_to_archive_adapter_task`
- Update the core `cumulus` module to allow for an Orca module that provides configuration values via remote state for the following Cumulus module variables:
  - `orca_lambda_copy_to_archive_arn`
  - `orca_sfn_recovery_workflow_arn`
  - `orca_api_uri`
- Adds the following configuration variable:

  ```hcl
  variable "use_orca" {
    description = "If set to true, pull in remote state values from 'orca' module to configure cumulus core module for ORCA"
    type        = bool
    default     = false
  }
  ```
- Updates `cumulus` module behavior such that when `use_orca` is set to true, the module reads the CIRRUS-DAAC `orca` module's remote state via convention and uses the following remote state values to pass configuration values to the `cumulus` module:
  - `outputs.orca.orca_lambda_copy_to_archive_arn`
  - `outputs.orca.orca_sfn_recovery_workflow_arn`
  - `outputs.orca.orca_api_uri`
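  A minimal sketch of the kind of remote state lookup this implies is shown below; the backend bucket, key, and region are placeholders, not the exact convention the `cumulus` module uses to locate the `orca` state:

  ```hcl
  # Sketch only: bucket, key, and region are placeholders for the
  # conventional location of the CIRRUS-DAAC orca module state.
  data "terraform_remote_state" "orca" {
    count   = var.use_orca ? 1 : 0
    backend = "s3"
    config = {
      bucket = "my-daac-tf-state"       # hypothetical state bucket
      key    = "orca/terraform.tfstate" # hypothetical state key
      region = "us-west-2"              # hypothetical region
    }
  }

  # The values are then passed through to the Cumulus core module, e.g.
  # orca_lambda_copy_to_archive_arn = data.terraform_remote_state.orca[0].outputs.orca.orca_lambda_copy_to_archive_arn
  ```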
- Add `deploy_cumulus_distribution` variable to the cumulus module
- Update the build Dockerfile to create an entry in `/etc/passwd` for the user building the image. This allows the `setup_jwt_cookie.sh` script to be run inside the container.
- Update Node version to 16.x in the Dockerfile to support v12.0.1 of the Cumulus Dashboard
- Upgrade to Cumulus v17.0.0
- Upgrade terraform modules to use AWS provider version 5.0
- Remove data-migration1 from repo
- Upgrade to Cumulus v16.0.0
- BUG: This version of Cumulus has a bug for async operations. Issue CUMULUS-3382 has been opened to track this issue and it should be resolved in a future version of Cumulus.
- Note: When upgrading to Cumulus 16.0.0, `make cumulus` tries to delete 3 Lambda functions and their associated CloudWatch log groups. For some deployments it gets stuck in a cycle. The `scripts/cumulus-v16.0.0/delete_log_groups.sh` script deletes the log groups, which then allows `make cumulus` to complete:

  ```
  Error: Cycle: module.cumulus.module.archive.aws_lambda_function.granule_files_cache_updater (destroy), module.cumulus.module.archive.aws_cloudwatch_log_group.granule_files_cache_updater_logs (destroy)
  Error: Cycle: module.cumulus.module.archive.aws_lambda_function.publish_pdrs (destroy), module.cumulus.module.archive.aws_cloudwatch_log_group.publish_pdrs_logs (destroy)
  Error: Cycle: module.cumulus.module.archive.aws_lambda_function.publish_granules (destroy), module.cumulus.module.archive.aws_cloudwatch_log_group.publish_granules_logs (destroy)
  ```
- Add `html_template_dir` variable to the cumulus module
- Add `deploy_cumulus_distribution` variable to the cumulus module
- Update outputs to match cumulus module
- Add support for EMS Reporting SNS policy
- Add 'lzards' support to cumulus module
- Upgrade to TEA Release 1.3.3 which now handles URLs with non-standard characters like colons
- One additional change to the `tf` module to handle bucket versioning as required by the Terraform AWS provider version `>= 3.75.2`
- Add `lambda_memory_sizes` to cumulus module variables
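  A hypothetical override in a CIRRUS-DAAC cumulus tfvars file might look like the sketch below; the map key is a placeholder, use the Cumulus task names you actually need to tune:

  ```hcl
  # Hypothetical example; the key below is a placeholder for a Cumulus task name
  lambda_memory_sizes = {
    ExampleTaskLambda = 512
  }
  ```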
- Upgrade to Cumulus v15.0.3
- Per Cumulus v15.0.2 release notes, the new `default_log_retention_days` variable has been exposed in the Cumulus module to allow DAAC customization; the default is 30 days (the release notes name it incorrectly)
- Per Cumulus v15.0.0 release notes, all ECS tasks should be upgraded to use the `1.9.0` image
- Upgraded the Terraform AWS provider version to `>= 3.75.2` to support `nodejs16.x` Lambdas
- Upgrade to TEA Release 1.3.2, which upgrades dependencies and offers a new optional `s3credentials` endpoint. To make use of this feature set the `s3credentials_endpoint` variable to True. The feature is documented here. If using this feature you should disable the Cumulus equivalent `deploy_distribution_s3_credentials_endpoint` variable; testing would be required.
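  A hypothetical pair of overrides in the appropriate cumulus tfvars file, enabling the TEA endpoint and disabling the Cumulus equivalent as suggested above:

  ```hcl
  # Hypothetical overrides
  s3credentials_endpoint                      = true
  deploy_distribution_s3_credentials_endpoint = false
  ```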
- Add name and tags to the `background_job_queue_watcher` event rule - PR #145
- Upgrade to Cumulus v14.1.0
- Exposes the new `cloudwatch_log_retention_periods` variable as mentioned in the release notes in case a DAAC wants to modify the retention of any CloudWatch log groups.
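  A hypothetical override might look like the sketch below; the key name is a placeholder, see the Cumulus release notes for the exact log group keys:

  ```hcl
  # Hypothetical example; the key is a placeholder for a Cumulus log group key
  cloudwatch_log_retention_periods = {
    example_log_group_key = 30 # days
  }
  ```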
- Updated the Terraform `aws` provider in the `cumulus` and `data-persistence` modules to match those in the underlying Cumulus modules.
- Reminder: this version requires Cumulus Dashboard v12.0.0
- Also, any ECS tasks are required to use the `cumuluss/cumulus-ecs-task:1.8.0` Docker image. This requirement is listed in the Cumulus v11.1.8 breaking changes section.
- Upgrade to Cumulus v13.3.2
- Upgrade to TEA Release 1.1.1
- Allow CIRRUS Docker image to use Python 3.8
- By default `make image` builds a Docker image with Python 3.7. Now the `make image` target looks for an environment variable named `PYTHON_VER`. If this variable is set to `python38` it will build an image with Python 3.8. Here's an example:

  ```
  export PYTHON_VER=python38
  make image
  ```
- Bug fixed: the `background_job_queue` built by `make cumulus` did not get tagged correctly; this version corrects that problem.
- Upgrade to Cumulus v11.1.5
- Upgrade to Cumulus v11.1.4
- Note the instructions for creating the `files_granule_cumulus_id_index` in the release notes if you are continually ingesting data
- Upgrade to Cumulus v11.1.3
- Upgrade container packages
- Upgrade node to 14.x
- Add C++ compiler
- upgrade AWS CLI to V2
- Add `PREFIX` env variable as output from `env.sh`
- Add a catchall target to the Makefile for forwarding unknown commands. For instance, running `make foo` from CIRRUS-core will attempt to call `make foo` on CIRRUS-DAAC instead of erroring out immediately.
- Upgrade TEA to build 1.1.0. Note that if you're upgrading from a version of TEA older than 115 you must run the commands in `scripts/tea-115/iam_update_and_cache_clear.sh`
- Add `scripts/cumulus-v11.0.0/` for the "After the `cumulus` deployment" section of the Cumulus v11.0.0 release notes; see the release notes for full details and usage
- Upgrade to Cumulus v11.1.0
- see Cumulus v11.0.0 release notes for required migration steps for workflows and collection configurations, as well as lambda executions. If upgrading from CIRRUS v9.9.0.0 or an earlier version, see the v10.1.2.0 notes as well.
- Upgrade to Cumulus v10.1.2
- see Cumulus v10.0.0 release notes for required migration steps for workflows and collection configurations
- note that some lambdas and other workflow components may need to be updated for compatibility with the message format changes made in Cumulus v10.0.0, e.g., the dmrpp-generator must be upgraded to v3.3.0.beta
- Add `scripts/cumulus-v10.1.1/data-migration1.sh` to execute the `data-migration1` lambda, per the migration step note for Cumulus v10.1.1. It must be run after `data_migration1` and `data-persistence` are deployed, but before `cumulus` is deployed.
- Upgrade TEA to v1.0.2
- see Cumulus v10.0.0 release notes for required migration steps for workflows and collection configurations
- Upgrade to Cumulus v9.9.0
- Upgrade TEA to build 121
- Upgrade hashicorp/aws terraform to `~> v3.70.0`
- Pin hashicorp/archive terraform to `~> v2.2.0`
- Pin hashicorp/null terraform to `~> v2.1` consistently
- Upgrade TEA to build 118
- Upgrade to Cumulus v9.7.0
- Remove `rds_connection_heartbeat` variable due to a change in Cumulus v9.3.0
- add timeout variables introduced in Cumulus v9.5.0 to allow CIRRUS users to customize Lambda timeouts. Usage is documented here
- Upgrade TEA to build 115 to add Smarter In-Region control and Dynamic CORS Support plus CVE Remediation. As described in the release notes, it requires two migration steps to rebuild an IAM role and flush the IAM cache. These commands are in scripts/tea-115/iam_update_and_cache_clear.sh
- Add `use_cors` variable to allow CIRRUS users to use this new feature of TEA
- REMINDER: This release requires v7.0.0 of the Cumulus Dashboard
- Add a throttled queue to the cumulus deployment per the data cookbook instructions. The number of concurrent executions defaults to 5 and can be overridden via a `thottled_queue_execution_limit` variable in the appropriate CIRRUS-DAAC/cumulus/env.tfvars file
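  For example, a hypothetical override in an env.tfvars file (note the variable kept this misspelled name until it was corrected to `throttled_queue_execution_limit` in the v18.3.1 entry above):

  ```hcl
  # Hypothetical override of the throttled queue concurrency
  thottled_queue_execution_limit = 10
  ```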
- Added a `destroy-data-persistence` target to the Makefile, borrowing heavily from NSIDC's instructions for destroying the dynamo_db tables. Since the script exits as soon as the tables are marked for destruction (and not actually destroyed), it's possible `make destroy-data-persistence` might need to be executed multiple times to fully destroy the environment. Also need to run `make image` to add `jq` to it.
- Add GitHub Action configuration for TFLint
- Upgrade TEA to build 1.1.1 to resolve several CVEs
- Upgrade to Cumulus v9.2.0
- Version v9.0.1 has a number of migration instructions detailed here
- Per the migration instructions, this version of CIRRUS assumes that the DAAC's RDS creation is contained in the `rds` directory of that DAAC's `CIRRUS-DAAC` code. The database is created by running `make rds` with the appropriate parameters. `make plan-rds` can be run to check parameters.
- A new `make data-migration1` target has been created and can be used for that step of the migration. `make plan-data-migration1` can be run to check parameters.
- Scripts for the migration1 and migration2 lambda invocations can be found in the `scripts/cumulus-v9.2.0` directory
- While there are no specific migration instructions, the release notes for Version v9.1.0 should also be reviewed
- Added `MAKE_COMMAND` to `jenkins/Jenkinsfile` to allow specific cumulus modules to be deployed from Jenkins.
- A serverless RDS requires at least 2 subnets to be defined; CIRRUS had only been using one via data lookups like this:

  ```hcl
  data "aws_subnet_ids" "subnet_ids" {
    vpc_id = data.aws_vpc.application_vpcs.id
    tags = {
      Name = "Private application ${data.aws_region.current.name}a subnet"
    }
  }
  ```
- Subnets are now defined like this throughout CIRRUS:

  ```hcl
  data "aws_subnet_ids" "subnet_ids" {
    vpc_id = data.aws_vpc.application_vpcs.id
    filter {
      name = "tag:Name"
      values = ["Private application ${data.aws_region.current.name}a subnet",
                "Private application ${data.aws_region.current.name}b subnet"]
    }
  }
  ```
- ElasticSearch stacks with multiple subnets require at least 2 nodes, so the default number was raised to 2
- If using a serverless RDS make sure to set `rds_connection_heartbeat` to true in the cumulus module
- Upgrade to Cumulus v8.1.1
- Upgrade to Cumulus v8.1.0
- Cumulus v7.0.0 removed the `log2elasticsearch_lambda_function_arn` output from the cumulus module. Any workflows which expected it will need to be updated.
- Any workflows using DMRPP should upgrade to v2.1.0
- This version requires v6.0.0 of the Cumulus Dashboard
- The Cumulus team recommends upgrading your CIRRUS-core release to v6.0.0.0 across all your environments prior to upgrading to Cumulus 8.1.0.
- Upgrade your CIRRUS-core release to v5.0.1.3 across all your environments to ensure Terraform is upgraded to v0.13.6.
  The purpose of this update is to upgrade Terraform to v0.13.6 on existing deployments of Cumulus v5.0.1. Cumulus notes for upgrading Terraform are available here.
  This CIRRUS update takes care of running the `0.13upgrade` command across all modules. It resulted in the creation of a `versions.tf` file in each module and a syntax change to the `required_providers` section in the `main.tf` file in each module.
- Upgrade your CIRRUS-core release to v5.0.1.2 across all your environments.
- Per the Cumulus notes, apply any configuration across all environments.
- Review the changes in CIRRUS-DAAC. Add the `versions.tf` file and update the `required_providers` section of the `main.tf` file in your `daac` and `workflows` modules
- In CIRRUS-core run `make image` to create a new Docker `cirrus-core` image with Terraform 0.13.6.
- Run `make container-shell` - all the following commands are run from inside the container.
- Use the `source env.sh ...` command to set up your environment variables for the deployment you will be upgrading.
- For each module run `make plan-modulename` (`make plan-tf`, `make plan-daac`, etc). Only `make plan-tf` will succeed the first time. Even though unsuccessful, this step is necessary as it runs the `terraform init --reconfigure` mentioned in the Cumulus upgrade instructions.
- cd to the module directory and run the necessary `terraform state replace-provider` commands to resolve the issues noted in the `plan` failure.
- Run `make plan-module` again to confirm the issues are resolved
The `scripts/cumulus-v5.0.1-tf-upgrade/replace_tf_providers.sh` script has all the commands necessary to iterate over each module. BE WARNED: You may want to use this as more of a copy-paste guide rather than actually running it. It does work running end-to-end on my deployments but your mileage may vary. In particular, you may want to remove the `-auto-approve` switch from the `terraform state replace-provider` command so you have a chance to review the changes before accepting them.
- Once all `plans` run successfully you can then run `make modulename` for each module to complete the upgrade.
The process will need to be repeated for each deployment.
- Expose elasticsearch configuration parameters in both data-persistence and cumulus modules.
- Breaking change: the `Makefile` was updated to handle per-maturity data-persistence variables. To be consistent with how other `make` targets behave, the CIRRUS-DAAC `data-persistence/terraform.tfvars` file needs to exist for `make data-persistence` to work. The file can be empty, but must exist.
- Upgrade TEA to build 102
- Upgrade to Cumulus v5.0.1
- Upgrade to Cumulus V5.0.0
- Update includes the addition of the `egress_lambda_log_group` and `egress_lambda_log_subscription_filter` plus the removal of the `tea_stack_name` mentioned in the migration steps
- Cumulus 5.0.0 requires a one time reindex and change-index
- Update adds an `egress_lambda_log_retention_days` variable with a default of 30 to allow a DAAC to control the number of days to keep logs.
- Upgrade aws terraform provider to 3.19.x and ignore gsfc-ngap tags when deciding what components need to be rebuilt
- Upgrade to Cumulus V4.0.0
- change `cumulus_message_adapter_lambda_layer_arn` -> `cumulus_message_adapter_lambda_layer_version_arn` under the `cumulus` module in `cumulus/main.tf`
- change `thin_egress_app` module's source to `thin-egress-app/tea-terraform-build.100.zip` in `cumulus/thin_egress_app`
- add cumulus module output for new `update_granules_cmr_metadata_file_links` workflow lambda
- add `egress_api_gateway_log_subscription_filter` subscription filter in `cumulus/thin_egress.tf` per Cumulus upgrade instructions
- expose several ecs_cluster variables in `cumulus/main.tf` and `cumulus/variables.tf` for modification by CIRRUS users
- Upgrade to Cumulus V3.0.1
- Upgrade to Cumulus V3.0.0
- NOTE: Make sure to follow the upgrade instructions to prevent the deletion and recreation of your TEA API Gateway.
- change `Makefile` to add new `plan-*` targets for each step to allow running of `terraform plan` for each step (ex. `make plan-cumulus`)
- change `cumulus/main.tf` to support separation of TEA from Cumulus
- change `cumulus/outputs.tf` to update the TEA outputs needed by workflows
- change `cumulus/variables.tf` to add new variables and formatting
- change `data-persistence/main.tf` to update for cumulus 3.0.0
- add `cumulus/common.tf` with items which are needed by both cumulus and tea
- add `cumulus/thin_egress.tf` with the TEA tf module definition
- add `cumulus/thin-egress-app/bucket_map.yaml.tmpl`, the default TEA bucket map template
- add `scripts/cumulus-v3.0.0/move-tea-tf-state.sh`, which contains the commands mentioned in the TEA migration instructions (https://nasa.github.io/cumulus/docs/upgrade-notes/migrate_tea_standalone)
- The `make daac` step of this version of CIRRUS generates a new output (`bucket_map_key`). Look at the corresponding CIRRUS-DAAC to add that value to your `daac/outputs.tf` file and then run `make daac`
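  For illustration, the corresponding output block in `daac/outputs.tf` would look something like the sketch below; the value is a placeholder for wherever your daac module stores its bucket map:

  ```hcl
  # Sketch only; the value expression is a placeholder
  output "bucket_map_key" {
    value = "bucket_map/bucket_map.yaml" # hypothetical object key
  }
  ```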
- Where the TEA migration instructions mention `terraform plan`, use the new `make plan-cumulus` target to get the output mentioned
- Run `make daac` and `make data-persistence` prior to `make plan-cumulus`
- Normally you run all CIRRUS `make` commands from the root `CIRRUS-core` directory. All the `terraform state mv *` commands require being in the `CIRRUS-core/cumulus` directory. A script has been added to `scripts/cumulus-v3.0.0` with all the commands in one file. You may wish to run them from the script, or you may want to run them one at a time. Make sure you have a backup of your state per the Cumulus instructions.
- Upgrade to Cumulus V2.0.7
- Upgrade to Cumulus V2.0.6
- Upgrade to Cumulus V2.0.4 to upgrade TEA to build 88
- Upgrade to Cumulus V2.0.3 to fix syncgranule checksum and dashboard stats issues
- Upgrade to Cumulus V2.0.2 to fix delete granule bug
- add optional `bucket_map_key` variable to allow override of the default TEA `bucket_map`
- output cmr environment and hyrax-metadata-update task for use in workflows
- Upgrade to Cumulus v2.0.1.
- review Cumulus deployment instructions for version 2.0.0; there is a manual step. 2.0.1 only contains a bug fix
- Expose the EC2 instance type for the default Cumulus ECS cluster. Still defaults to `t3.medium`. Can be changed via any of the cumulus .tfvars files in CIRRUS-DAAC
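  A hypothetical override in a cumulus .tfvars file, assuming the exposed variable follows the Cumulus core name `ecs_cluster_instance_type`:

  ```hcl
  # Hypothetical override; variable name assumed from the Cumulus core module
  ecs_cluster_instance_type = "t3.large"
  ```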
- Upgrade to Cumulus v1.24.0.
- review Cumulus deployment instructions
- added two extra cumulus outputs which are needed for ECS based tasks
- Upgrade to Cumulus v1.23.2.
- review Cumulus deployment instructions
- Upgrade to Cumulus v1.22.1.
- review Cumulus deployment instructions
- Upgrade to Cumulus v1.21.0.
- review Cumulus deployment instructions
- Upgrade to Cumulus v1.20.0. There are several breaking changes in this release.
  - `cumulus/main.tf` added `deploy_to_ngap = true` per Cumulus deployment instructions
- Upgrade to Cumulus v1.19.0. There are several breaking changes in this release. Make sure to follow the deployment instructions.
  - `setup_jwt_cookie.sh` script added to create and deploy a TEA secret with the name of `${DEPLOY_NAME}-cumulus-${MATURITY}-jwt_secret_for_tea`
  - `cumulus/main.tf` updated to make use of the secret created by `setup_jwt_cookie.sh`
  - `cumulus/outputs.tf` updated to output `sf_sqs_report_task` rather than `sf_sns_report_task`
- Remove the deprecated TF state resources that are no longer needed.
- Upgrade to Cumulus v1.18.0. There should be no breaking changes from CIRRUS v1.17.0.0.
- CIRRUS' Makefile will now delegate to the DAAC repo for the following make targets:
  - `migrate-tf-state`: NEW -- see note below
  - `daac`
  - `workflows`

  Add these three targets to your DAAC Makefile. See the CIRRUS-DAAC repo for examples of each of these three targets.
- If you're currently using a previous version of CIRRUS, you'll need to migrate the Terraform state from the old backend AWS resources to new ones. You can do this by running this for each deployment / maturity combination that you've deployed:

  ```
  source env.sh ... # See README
  make migrate-tf-state
  ```

  You'll be prompted to migrate state from the old resources to the new. Simply respond with 'yes' to each of the four prompts and you'll be ready to go.
- For local development, CIRRUS no longer looks for secrets in the CIRRUS-core repo's `.secrets` directory. Instead, it relies on the secrets being configured as described in the CIRRUS-DAAC repo. Remove any local `.secrets` files and directory and see the CIRRUS-DAAC README for instructions on how to set up local development secrets.
- First official full release of CIRRUS
- Uses Cumulus v1.17.0
- Fix TF state resource names and add a Makefile target to migrate state from the old resources to the new one.
- Get the bucket config from the DAAC module (which needs to create it) and pass it to Cumulus.
- Set and export the `AWS_PROFILE` envvar in the `env.sh` script.
- Fix a stringification bug in the Jenkinsfile.
- Fix the extra 'retry' command if deploying Cumulus fails randomly the first time.
- Pass the ECS cluster instance AMI id to Cumulus.
- The Makefile now defers to the DAAC repo to run the `daac` and `workflows` targets. It does this by `cd`ing into the DAAC repo directory and simply executing `make daac` and `make workflows`. This means that the DAAC repo should have a Makefile with those two targets defined.
- Use the MATURITY as the value for Cumulus' `api_gateway_stage` and `distribution_api_gateway_stage`. This means the API gateway stage in each NGAP account corresponds with the MATURITY.
- Fix various Jenkinsfile parameter declarations, defaults, and descriptions.
- Include an example secrets TF variable file.
- Add output variables for all Cumulus tasks (lambdas) so they can be used in downstream TF modules.
- Add output variables for Cumulus' `lambda_processing_role_arn` and `no_ingress_all_egress` AWS security group.
- Turn off TF color output.
- The default Makefile target is now `all`. So running `make` and `make all` are equivalent.
- Lookup the correct NGAP VPC using the Name property.
- Initial CIRRUS release