Releases: nasa/cumulus-orca
v10.1.1
Release v10.1.1
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.1.0...v10.1.1
Migration Notes
- The user should update their `orca.tf`, `variables.tf`, and `terraform.tfvars` files with the new variables (see the example below). The following optional variables have been added:
  - `max_pool_connections`
  - `max_concurrency`
  - `lambda_log_retention_in_days`
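As an illustration, the `terraform.tfvars` additions might look like the following (values are illustrative; the matching declarations in `variables.tf` and wiring in `orca.tf` are also needed, as noted above):

```bash
# Append the new optional variables to terraform.tfvars (illustrative values).
cat >> terraform.tfvars <<'EOF'
max_pool_connections         = 10
max_concurrency              = 10
lambda_log_retention_in_days = 30
EOF
```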
Delete Log Groups
ORCA has added the capability to set a log retention period on ORCA Lambdas, e.g. 30 days, 60 days, or 90 days.
- Deployment Steps (a combined example follows this list)
  1. Run the script located at `bin/delete_log_groups.py`.
     - The existing log groups must be deleted before `terraform apply` is run, because they were created by AWS by default and their retention cannot be modified via Terraform.
  2. Set the `lambda_log_retention_in_days` variable to the number of days you would like the logs to be retained, e.g. `lambda_log_retention_in_days = 30`.
     - To have the logs never expire, the variable does not need to be set, since never expiring is the default. If you would still like to set it explicitly, use a value of 0, e.g. `lambda_log_retention_in_days = 0`.
  3. Once these steps are completed, a `terraform apply` can be executed.
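Taken together, a sketch of the sequence (whether `delete_log_groups.py` takes arguments is not specified here, so check the script's usage first):

```bash
# 1. Delete the AWS-created default log groups so Terraform can manage retention.
python3 bin/delete_log_groups.py
# 2. Opt in to a 30-day retention period (omit the variable to keep logs forever).
echo 'lambda_log_retention_in_days = 30' >> terraform.tfvars
# 3. Re-create the log groups under Terraform management.
terraform apply
```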
Added
- ORCA-904 - Added an integration test that verifies recovered objects are in the destination bucket.
- ORCA-907 - Added integration test for internal reconciliation at `integration_test/workflow_tests/test_packages/reconciliation` and updated documentation with new variables.
- LPCUMULUS-1474 - Added log groups with configurable retention periods in `modules/lambdas/main.tf`, with a variable to set the retention in days. Also added a script to delete the log groups AWS creates by default, since those cannot be modified by Terraform.
- ORCA-957 - Added outbound HTTPS security group rule in `modules/security_groups/main.tf` so the Internal Reconciliation Workflow can perform the S3 import successfully.
Changed
- ORCA-918 - Updated `copy_to_archive` and `copy_from_archive` lambdas to include two new optional ORCA variables, `max_pool_connections` and `max_concurrency`, that can be used to change the parallelism of the S3 copy operation.
- ORCA-958 - Upgraded flake8, isort, and black packages to the latest versions in ORCA code.
- ORCA-947 - Updated the `request_from_archive` lambda to include an optional ORCA variable, `max_pool_connections`, that can be used to change the parallelism of the S3 copy operation.
Removed
Fixed
- ORCA-939 - Fixed high-severity issues from the Snyk vulnerability report and upgraded docusaurus to v3.6.0.
v10.1.0
Release v10.1.0
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.0.1...v10.1.0
Added
- ORCA-905 - Added integration test for recovering a large file.
- ORCA-567 - Pinned build scripts to a specific version of pip to avoid future errors/issues caused by using the latest version of pip.
- ORCA-933 - Added dead letter queue for the Metadata SQS queue in `modules/sqs/main.tf`.
Changed
- ORCA-900 - Updated aws_lambda_powertools to the latest version to resolve errors users were experiencing in older versions. Updated boto3, as it is a dependency of aws_lambda_powertools.
- ORCA-927 - Updated archive architecture to include the metadata dead letter queue in `website/static/img/ORCA-Architecture-Archive-Container-Component-Updated.svg`.
- ORCA-850 - Updated `copy_to_archive` documentation to cover the additional S3 destination property functionality.
- ORCA-774 - Updated Lambdas and GraphQL to Python 3.10.
- ORCA-896 - Updated Bamboo files to use the `latest` tag on the `cumulus_orca` Docker image to resolve Bamboo jobs using old images.
- 530 - Added explicit `s3:GetObjectTagging` and `s3:PutObjectTagging` actions to the IAM `restore_object_role_policy`.
Fixed
- ORCA-822 - Fixed nodejs installation error in bamboo CI/CD ORCA distribution docker image.
- ORCA-810 - Fixed db_deploy unit test error in bamboo due to wheel installation during python 3.10 upgrade.
- ORCA-861 - Updated docusaurus to fix Snyk vulnerabilities.
- ORCA-862 - Updated docusaurus to v3.4.0.
- ORCA-890 - Fixed high-severity issues from the Snyk vulnerability report and upgraded docusaurus to v3.5.2.
- ORCA-902 - Upgraded bandit to version 1.7.9 to fix Snyk vulnerabilities.
- ORCA-937 - Updated get_current_archive_list Lambda to use the gql_tasks_role to resolve database errors when trying to S3 import in modules/lambdas/main.tf. Updated gql_tasks_role with needed permissions in modules/graphql_0/main.tf, as well as updated Secrets Manager permissions to allow the role to get DB secret in modules/secretsmanager/main.tf.
- ORCA-942 - Fixed npm tarball error found during ORCA website deployment.
Removed
- ORCA-933 - Removed S3 credential references that were causing errors in `tasks/get_current_archive_list/get_current_archive_list.py` and `tasks/get_current_archive_list/test/unit_tests/test_get_current_archive_list.py`.
v10.0.1
Release v10.0.1
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v10.0.0...v10.0.1
Added
- ORCA-920 - Fixed ORCA deployment failure for Cumulus when sharing an RDS cluster, caused by multiple IAM role association attempts. Added a new boolean variable, `deploy_rds_cluster_role_association`, which can be used to deploy multiple ORCA/Cumulus stacks sharing the same RDS cluster in the same account by overriding it to `false` for the second stack (see the example below).
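For example, the second stack sharing the cluster might set the following in its `terraform.tfvars` (a sketch; the first stack keeps the default of `true`):

```bash
# Only one stack per shared RDS cluster should create the IAM role association.
echo 'deploy_rds_cluster_role_association = false' >> terraform.tfvars
```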
v10.0.0
Release v10.0.0
Important information
This release is only compatible with Cumulus v18.5.0 and up.
- Full Change Comparison: v9.0.5...v10.0.0
Migration Notes
Remove the `s3_access_key` and `s3_secret_key` variables from your `orca.tf` file.
Post V2 Upgrade Comparison
Once the Aurora V1 database has been migrated/upgraded to Aurora V2, you can verify the data integrity of the ORCA database by deploying the EC2 comparison instance, which can be found at `modules/db_compare_instance/main.tf`.
- Deployment Steps
  1. Fill in the variables in `modules/db_compare_instance/scripts/db_config.sh` (see the sketch after this list):
     - archive_bucket - ORCA archive bucket name. IMPORTANT: use underscores in place of dashes, e.g. zrtest_orca_archive
     - v1_endpoint - Endpoint of the V1 cluster, e.g. orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v1_database - Database of the V1 cluster, e.g. orca_db
     - v1_user - Username for the V1 cluster, e.g. orcaV1_user
     - v1_password - Password for the V1 user, e.g. OrcaDBPass_4
     - v2_endpoint - Endpoint of the V2 cluster, e.g. orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com
     - v2_database - Database of the V2 cluster, e.g. orca_db2
     - v2_user - Username for the V2 cluster, e.g. orcaV2_user
     - v2_password - Password for the V2 user, e.g. OrcaDB2Pass_9
  2. `cd` to `modules/db_compare_instance`.
  3. Run `terraform init`.
  4. Run `terraform apply`.
  5. Once the instance is deployed, add an inbound rule to BOTH the V1 and V2 database security groups with the private IP of the EC2 instance.
     - The private IP of the instance can be found via the console, or via the AWS CLI by running:
       `aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" "Name=instance-id,Values=<INSTANCE_ID>" --query 'Reservations[*].Instances[*].[PrivateIpAddress]' --output text`
     - The inbound rule can be added via the AWS console, or via the AWS CLI by running:
       `aws ec2 authorize-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  6. Connect to the EC2 instance via the AWS console, or via the AWS CLI with:
     `aws ssm start-session --target <INSTANCE_ID>`
  7. Once connected, run `cd /home`.
  8. From the `/home` directory, run `sh db_compare.sh`.
  9. When the script completes, it will output two tables:
     - v1_cluster - Row counts for each table in the ORCA database on the V1 cluster.
     - v2_cluster - Row counts for each table in the ORCA database on the V2 cluster.
  10. Verify that the output for the V2 database matches that of the V1 database to ensure no data was lost during the migration.
  11. Once verified, destroy the EC2 instance by running `terraform destroy` (verify you are in the `modules/db_compare_instance` directory first).
  12. Remove the inbound rules added in step 5 from BOTH the V1 and V2 security groups, either in the AWS console or via the AWS CLI by running:
      `aws ec2 revoke-security-group-ingress --group-id <DB_SECURITY_GROUP_ID> --protocol tcp --port 5432 --cidr <INSTANCE_PRIVATE_IP>/32`
  13. Delete the V1 database.
  14. Remove the snapshot identifier from the Terraform (if applicable).
  15. In the AWS console, navigate to RDS -> Snapshots and delete the snapshot the V2 database was restored from.
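A filled-in `db_config.sh` might look like the following sketch, reusing the illustrative values from step 1 (the exact variable names and quoting should be checked against the shipped script):

```bash
# modules/db_compare_instance/scripts/db_config.sh -- illustrative values only
archive_bucket="zrtest_orca_archive"  # underscores in place of dashes
v1_endpoint="orcaV1.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v1_database="orca_db"
v1_user="orcaV1_user"
v1_password="OrcaDBPass_4"
v2_endpoint="orcaV2.cluster-c1xufm1sp0ux.us-west-2.rds.amazonaws.com"
v2_database="orca_db2"
v2_user="orcaV2_user"
v2_password="OrcaDB2Pass_9"
```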
Added
- ORCA-845 - Created IAM role for RDS S3 import needed for Aurora v2 upgrade.
- ORCA-792 - Added DB comparison script at `modules/db_compare_instance/scripts/db_compare.sh` for the temporary EC2 instance to compare databases post-migration.
- ORCA-868 - Added EC2 instance for DB comparison after migration under `modules/db_compare_instance/main.tf`.
Changed
- ORCA-832 - Modified psycopg2 installation to allow for SSL connections to the database.
- ORCA-795 - Modified Graphql task policy to allow for S3 imports.
- ORCA-797 - Removed S3 credential variables from the `deployment-with-cumulus.md` and `s3-credentials.md` documentation, since they are no longer used with the Aurora v2 DB.
- ORCA-873 - Modified build task script to copy schemas into a schema folder to resolve errors.
- ORCA-872 - Updated graphql version, modified the policy in `modules/iam/main.tf` to resolve errors, and added a DB role attachment to `modules/graphql_0/main.tf`.
- 530 - Added explicit `s3:GetObjectTagging` and `s3:PutObjectTagging` actions to the IAM `restore_object_role_policy`.
Deprecated
Removed
- ORCA-793 - Removed `s3_access_key` and `s3_secret_key` variables from Terraform.
- ORCA-795 - Removed `s3_access_key` and `s3_secret_key` variables from GraphQL code and from the get_current_archive_list task.
- ORCA-798 - Removed `s3_access_key` and `s3_secret_key` variables from integration tests.
- ORCA-783 - Removed `tasks/copy_to_archive_adapter` and `tasks/orca_recovery_adapter` as they are handled by Cumulus.
Fixed
- ORCA-835 - Fixed ORCA documentation bamboo CI/CD pipeline showing node package import errors.
- ORCA-864 - Updated ORCA archive bucket policy and IAM role to fix access denied error during backup/recovery process.
Security
- ORCA-851 - Updated bandit libraries to fix Snyk vulnerabilities.
v10.0.0-beta
Release v10.0.0-beta
v9.0.5
Release v9.0.5
Important information
This release is only compatible with Cumulus v18.x.x and up.
- Full Change Comparison: v9.0.4...v9.0.5
Migration Notes
If you are deploying ORCA for the first time or migrating from v6, no changes are needed.
If you are currently on v8 or v9, you already have the load balancer deployed, and you need to delete the load balancer target group before deploying this version. This is because Terraform cannot delete an existing load balancer target group that has a listener attached, and adding HTTPS to the target group requires replacing it. Once the target group is deleted, you should be able to deploy ORCA.
- From the AWS EC2 console, go to your load balancer named `<prefix>-gql-a` and select the `Listeners and rules` tab. Delete the rule.
- Delete your target group `<random_name>-gql-a`. The target group name has been randomized to avoid a Terraform resource error. (A CLI alternative for these two steps is sketched below.)
- Deploy ORCA.
If deployed correctly, the target group health checks should show as healthy.
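The listener rule and target group deletion (the first two steps above) can also be done from the AWS CLI, as a sketch (look up the ARNs first):

```bash
# Find the target group ARN (the name ends in -gql-a).
aws elbv2 describe-target-groups \
  --query "TargetGroups[?ends_with(TargetGroupName, '-gql-a')].TargetGroupArn" --output text
# Delete the listener rule, then the now-unreferenced target group.
aws elbv2 delete-rule --rule-arn <RULE_ARN>
aws elbv2 delete-target-group --target-group-arn <TARGET_GROUP_ARN>
```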
For the DR buckets, modify the bucket policy and remove the line that contains `"s3:x-amz-acl": "bucket-owner-full-control"`, as well as the comma before or after it.
Added
- ORCA-450 - Removed Access Control List (ACL) requirement and added BucketOwnerEnforced to ORCA bucket objects.
- ORCA-452 - Added a deny-non-SSL policy to the S3 buckets in `modules/dr_buckets/dr_buckets.tf` and `modules/dr_buckets_cloudformation/dr-buckets.yaml`.
Changed
- ORCA-441 - Updated policies for ORCA buckets and copy_to_archive to give them only the permissions needed to restrict unwanted/unintended actions.
- ORCA-746 - Enabled HTTPS listener in application load balancer for GraphQL server using AWS Certificate Manager.
- ORCA-828 - Added prefix to ORCA SNS topic names to avoid `object already exists` errors.
Security
- ORCA-821 - Fixed high-severity issues from the Snyk vulnerability report and upgraded docusaurus to v3.1.0.
v9.0.4
Release v9.0.4
Important information
This release is only compatible with Cumulus v18.x.x and up.
- Full Change Comparison: v9.0.3...v9.0.4
Migration Notes
- For users upgrading from ORCA v8.x.x to v9.x.x, follow the steps below before deploying:
  1. Run the Lambda deletion script at `bin/delete_lambda.py` with `python3`, which will delete all of the ORCA Lambdas with a provided prefix (see the sketch after this list). You can also delete them manually in the AWS console.
  2. Navigate to the AWS console and search for the Cumulus RDS security group.
  3. Remove the inbound rule with the source `PREFIX-vpc-ingress-all-egress` from the Cumulus RDS security group.
  4. Search for `PREFIX-vpc-ingress-all-egress` and delete the security group. NOTE: Because the Lambdas use ENIs, deleting the security groups may report that they are still associated with a Lambda that was deleted by the script. AWS may need a few minutes to fully disassociate the ENIs; if this error appears, wait a few minutes and then try again.
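A sketch of the script invocation (how the prefix is supplied, by argument, prompt, or environment, depends on the script, so check its usage first):

```bash
# Delete all ORCA Lambdas for this deployment; per ORCA-826 below, the
# script identifies the ORCA Lambdas by their tags.
python3 bin/delete_lambda.py
```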
Changed
- ORCA-826 - Changed `bin/delete_lambda.py` to delete ORCA Lambdas based on their tags.
- ORCA-827 - Changed ORCA API gateway stage name from `orca` to `orca_api` to avoid confusion in the URL path. The new ORCA execute API URL will be `https://<API_ID>.execute-api.<AWS_REGION>.amazonaws.com/orca_api`.
Fixed
- ORCA-827 Fixed API gateway URL not found issue seen in ORCA v9.0.3.
v9.0.4-beta
Release v9.0.4-beta
v9.0.3
Release v9.0.3
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v9.0.2...v9.0.3
Migration notes
If you are migrating from ORCA v8.x.x to this version, see the migration notes under v9.0.0.
Fixed
- ORCA-823 Fixed ORCA security group related deployment error seen in ORCA v9.0.2.
v9.0.2
Release v9.0.2
Important information
🔥 This release is only compatible with Cumulus v18.x.x and up 🔥
- Full Change Comparison: v9.0.1...v9.0.2
Added
- ORCA-366 Added unit test for shared libraries.
- ORCA-769 Added API Gateway Stage resource to `modules/api-gateway/main.tf`.
- ORCA-369 Added DR S3 bucket template to `modules/dr_buckets/dr_buckets.tf` and updated the S3 deployment documentation with steps.
Changed
- ORCA-784 Changed documentation to replace "restore" with "copy", matching the task's naming, and renamed `website/docs/operator/restore-to-orca.mdx` to `website/docs/operator/reingest-to-orca.mdx`.
- ORCA-724 Updated ORCA recovery documentation in `website/docs/operator/data-recovery.md` to include the recovery workflow process and its relevant inputs and outputs.
- ORCA-789 Updated `extract_filepaths_for_granule` to more flexibly match file-regex values to keys.
- ORCA-787 Modified the API gateway stage name in `modules/api-gateway/main.tf` to remove the extra "orca" from the data management URL path.
- ORCA-805 Changed the security group resource name in `modules/security_groups/main.tf` from `vpc_postgres_ingress_all_egress` to `vpc-postgres-ingress-all-egress` to resolve errors when upgrading from ORCA v8 to v9. Also removed the graphql_1 dependency on `module.orca_lambdas` in `modules/orca/main.tf`, since this module does not depend on the lambda module.
Removed
- ORCA-361 Removed hardcoded test values from `extract_file_paths_for_granule` unit tests.
- ORCA-710 Removed duplicate logging messages in `integration_test/workflow_tests/custom_logger.py`.
- ORCA-815 Removed steps for creating buckets using the NGAP form from the ORCA archive bucket documentation.
Fixed
- ORCA-811 Fixed the `cumulus_orca` Docker image by updating the nodejs installation process.
- ORCA-802 Fixed `extract_file_for_granule` documentation and schemas to include `collectionId` in the input.
- ORCA-785 Fixed checksum integrity issue in the ORCA documentation bamboo pipeline.
- ORCA-820 Updated bandit and moto libraries to fix some Snyk vulnerabilities.