
Releases: digital-asset/canton

canton v2.10.0

12 Feb 20:41
4e07aac

Release of Canton 2.10.0

Canton 2.10.0 has been released on February 12, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

Release 2.10.0 has three key enhancements.

First, the long-awaited Smart Contract Upgrade (SCU) feature is now Generally Available. It makes it possible to fix application bugs or extend Daml models without downtime or breaking Daml clients. It also eliminates the need to hardcode package IDs, which increases developer efficiency. The steps to enable this feature are described below. SCU was introduced as Beta in v2.9.1 and has since undergone significant hardening based on customer feedback, so it has transitioned to GA status.

Second, the new “Tri-State Vetting” feature complements SCU by allowing a DAR to be unvetted in production. This is helpful for disabling a DAR that has a bug in its business logic and then upgrading it without downtime.

Lastly, this is the first Daml Enterprise release to be designated a Long Term Support (LTS) release. An LTS release focuses on long-term stability and may forgo feature enhancements. It is supported for multiple years and can be upgraded to with full data continuity, but it does have some caveats:

  • Limited cross-version compatibility; in particular, no guaranteed support for old Canton protocol versions.
  • No long-term support for deprecated features.
  • Focus on supporting LTS versions of environment components (Java, databases, etc.).
  • Only Critical and High severity issues are guaranteed to be fixed; lower-priority fixes are possible but not mandatory.

What’s New

Introduction of protocol version 7

Background

This release introduces a new Canton protocol version (PV=7) and continues to support PV=5.

Specific Changes

Protocol version 6 has been marked as deleted and should not be used.
Protocol version 7 has been introduced as its stable replacement. There is also a new Daml-LF version (LF 1.17).
Some features have been deprecated, both because this is an LTS release and to enable SCU.

Impact and Migration

Please remember that since version 2.9, you must set the protocol version explicitly. In prior releases, the domain protocol version was set to the latest protocol version by default. To specify the protocol version for a domain:

myDomain {
  init.domain-parameters.protocol-version = 7
}

For a domain manager, the setting is:

domainManager {
  init.domain-parameters.protocol-version = 7
}

You can read more about protocol versions in the public docs. If you are unsure which protocol version to pick, use the latest one supported by your binary (see docs).

Please ensure all your environments use the same protocol version: you should not use one protocol version in your test environment and another one in production.

If a protocol version is not provided, then an error message like this will be generated:

ERROR c.d.c.CantonEnterpriseApp$ - CONFIG_VALIDATION_ERROR(8,0): Failed to validate the configuration due to: Protocol version is not defined for domain `mydomain`. Define protocol version at key `init.domain-parameters.protocol-version` …

Daml-LF versions 1.14, 1.15, and 1.17 are available in this release. LF 1.15 is the default Daml compiler setting. LF 1.16 has been deleted and should not be used. Use LF 1.17 to enable the SCU feature.

The compatibility matrix is:

  • PV5 is compatible with LF 1.14 and LF 1.15.
  • PV7 is compatible with LF 1.14, LF 1.15, and LF 1.17.
  • SCU is enabled only with the combination of PV7 and LF 1.17; any other configuration disables it.

Smart Contract Upgrading (SCU)

The SCU feature debuted as a Beta level feature in 2.9.1 and it is now GA. These release notes are similar to the 2.9.1 release notes with some updates.

Background

SCU allows Daml models (packages in DAR files) to be updated on Canton transparently, provided the guidelines for making changes are followed. For example, you can fix a Daml model bug by uploading a DAR containing the fixed package version. This feature requires LF 1.17 and Canton protocol version 7. The detailed documentation is available here, with the reference documentation available here. Please note that enabling this feature may require code updates for rarely used language idioms. For example, it requires that daml.yaml files set the package version and that the version increases as new versions are developed.

Details

This feature is well-suited for developing and rolling out incremental template updates. There are guidelines to ensure upgrade compatibility between DAR files. The compatibility is checked at compile time, DAR upload time, and runtime. This is to ensure data backwards compatibility and forward compatibility (subject to the guidelines being followed) so that DARs can be safely upgraded to new versions. It also prevents unexpected data loss if a runtime downgrade occurs (e.g., a ledger client is using template version 1.0.0 while the participant node has the newer version 1.1.0).

A general guideline is that additive model changes are allowed but items cannot be removed. A summary of the allowed changes in templates is:

  • A template can add new optional fields at the end of the list of fields;
  • A record datatype can add new optional fields at the end of the list of fields, and a variant/enum datatype can add new constructors at the end;
  • The ensure predicate can be changed and it is reevaluated at interpretation;
  • A choice signature can be changed by adding optional parameters at the end;
  • The controller of a choice can be changed;
  • The observer of a choice can be changed;
  • The body of a choice can be changed;
  • A new choice can be added to a template;
  • The implementation of an interface instance can be changed;

The Beta version of this feature allowed a new interface instance to be added to a template but this ability is not available in this GA release. Please consult the documentation for more information.
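
To make these guidelines concrete, here is a minimal Daml sketch of an upgrade-compatible template change (the module, template, and field names are hypothetical):

module Main where

-- Version 1.1.0 of a template: compared to version 1.0.0, it only appends
-- one new Optional field at the end, which is an allowed change under SCU.
template Asset
  with
    issuer : Party
    owner  : Party
    notes  : Optional Text -- new in v1.1.0; did not exist in v1.0.0
  where
    signatory issuer
    observer owner

Contracts created with version 1.0.0 remain readable by version 1.1.0, with notes interpreted as None.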

The package name associates a series of DAR versions, where the newest version is the default version to use. The package name and version (e.g., “version: 1.1.0”) are specified in the daml.yaml file. The package name is now part of the fully qualified name instead of the package ID. Internally, the package ID is still available and used at runtime, where the package name and version are resolved to a package ID. This allows backwards compatibility: there is flexibility in that the package ID can still be specified (prior approach) or the package name can be used (new approach). A side effect is that the package name provides a namespace scope, where modules, templates, and data belong to the namespace of a package.

To prevent unexpected behavior, this feature enforces that a DAR being uploaded to a participant node has a unique package name and version. This closes a loophole where the participant node (PN) allowed uploading multiple DARs with the same package name and version. For backward compatibility, this restriction applies only to packages compiled with LF >= 1.17. If LF <= 1.15 is used, there can be several packages with the same name and version, but this should be corrected because it will not be supported in the future.

Compilation support for smart contract upgrades is enabled by adding the following fields to the daml.yaml:

  • --target=1.17 (under build-options)
  • upgrades: <path to dar files of prior versions>

For additional type checking, use the dry-run option, which simulates the checks a PN will run during the upload step. The format of the command is daml ledger upload-dar --dry-run, which can be included as part of a CI/CD process.
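
For example, a CI/CD pipeline step might run the following (the host, port, and DAR path are placeholders):

daml ledger upload-dar --dry-run --host localhost --port 6865 .daml/dist/my-app-1.1.0.dar

If the upgrade checks fail, the command reports the validation error without uploading the DAR.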

The JSON API server is compatible with the smart contract upgrade feature by:

  • Supporting package names for commands and queries;
  • Allowing use of an optional packageIdSelectionPreference field to specify a preferred package ID to use. This allows the client to specify the package ID like in prior releases but it is not a best practice;
  • Requiring either a package ID or package name to be present to disambiguate the partially-qualified form of template/interface ids.

Previously, the JSON API supported partially qualified template IDs (i.e., simply <module>:<entity>) as an interactive convenience, which fails if there is more than one package with matching template names. Since this format was not supported for production use and will not work with smart contract upgrades, it is now unavailable.
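
As an illustration, a create request can reference a template by package name using the #<package-name> prefix instead of a package ID (all names in this sketch are hypothetical, and the party identifiers are abbreviated):

{
  "templateId": "#my-package:Main:Asset",
  "payload": {
    "issuer": "Alice::1220...",
    "owner": "Bob::1220...",
    "notes": null
  }
}

A client that needs to pin an exact package can additionally send the optional packageIdSelectionPreference field described above.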

The Java and TypeScript codegen allow the use of package name and package ID (if needed).

Impact and Migration

This feature is not enabled by default. There are four steps that are required to enable this feature:

  • Compile the Daml model(s) into LF 1.17 (LF 1.15 is the default). When using the daml build command to compile a Daml project, make sure the LF version is set to 1.17. To do this, set this field in the daml.yaml (see the combined daml.yaml sketch after this list):

    build-options:
    - --target=1.17
    

    Additionally, use the following field to enable compile time checking of LF 1.17 upgrade DARs:

    upgrades: <path to dar files of prior versions>
    
  • The Canton protocol version needs to be set to 7. (This is the default version for daml sandbox and daml start.) See here (the parameter protocolVersion) for domain parameter configuration.

  • Use the script library called daml-script-lts instead of the older daml-script.

  • Domain migration. For existing systems, the protocol version change requires a domain migration that is discussed in “Change the Canton Protocol Version.”
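
Putting the compiler-side settings together, a daml.yaml for an upgraded project might look like the following sketch (the project name, versions, and upgrades path are placeholders):

sdk-version: 2.10.0
name: my-app
version: 1.1.0
source: daml
dependencies:
  - daml-prim
  - daml-stdlib
  - daml-script-lts
build-options:
  - --target=1.17
upgrades: <path to dar files of prior versions>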

This feature is not compatible with some deprecated coding patterns which may require some code changes.

  1. Retroactive interface instances are not supported.
  2. Usi...

canton v2.9.6

24 Jan 08:17
c9e0b36

Release of Canton 2.9.6

Canton 2.9.6 has been released on January 24, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that fixes one high and four medium severity issues. Please update during the next maintenance window.

What’s New

Memory check during node startup

A memory check has been introduced when starting the node. This check compares the memory allocated to the container
with the -Xmx JVM option.
The goal is to ensure that the container has sufficient memory to run the application.
To configure the memory check behavior, add one of the following to your configuration:

canton.parameters.startup-memory-check-config = warn  // Default behavior: Logs a warning.
canton.parameters.startup-memory-check-config = crash // Terminates the node if the check fails.
canton.parameters.startup-memory-check-config = ignore // Skips the memory check entirely.

Minor Improvements

  • Two new metrics have been added that count the number of created and archived contracts observed by a participant.
    Contracts created as part of the standard Canton ping workflow are excluded from the tally:
    participant_daml_parallel_indexer_creates
    participant_daml_parallel_indexer_archivals
  • A participant will now crash in exceptional cases during transaction validation instead of remaining in a failed state.
  • Disabled the onboarding timeout for participants to support onboarding to domains with very large topology states
    without annoying warnings and timeouts.
  • Removed warnings about failing periodic acknowledgements during initial domain onboarding of participants.
  • Removed warnings about unhealthy sequencers during startup.

Bugfixes

(24-028, Medium): ACS export and party replication is broken after hard domain migration

Issue Description

The macros for the various steps for migrating a party look up domain parameters in the topology store, but don't filter
out irrelevant domains. This results in the macro throwing an error because it finds multiple domain parameters after a
hard domain migration, even though one of them comes from an inactive domain.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.11, 2.9.0-2.9.5

Impact

You cannot migrate a party to or from a participant that went through a hard domain migration.

Symptom

Calling repair.party_migration.step1_store_acs fails with the error "Found more than one (2) domain parameters set for
the given domain and time!".

Workaround

The workaround, which is not self-service, is to avoid calling the party migration macros and instead replicate what they do manually.

Likeliness

The issue consistently occurs when calling the party migration macros after a hard domain migration.

Recommendation

Upgrade the involved participant nodes to the next patch release: 2.8.12 or 2.9.6.

(24-029, Medium): Domain topology manager gets stuck on too large batches

Issue Description

An off-by-one check fails in the topology dispatcher of the domain manager:
batches are not limited to N but to N+1, while the check is for N.

Affected Deployments

Domain and Domain topology manager nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Topology transactions stop being propagated through the system.

Symptom

Participants cannot onboard to domains, parties do not appear on the domain, uploaded dars cannot be used.

Workaround

Restart domain topology manager.

Likeliness

Can happen under high topology management load which is rather unusual (adding thousands of parties at full speed).

Recommendation

Update during the next maintenance window.

(25-001, Medium): Newly onboarded participants may compute a wrong topology state during bootstrapping

Issue Description

When a participant is onboarded to a domain, the domain manager will send the topology state to the participant. The
topology state is split into batches of 100. If the state contains an add and a subsequent remove of a topology transaction,
and these two topology transactions are in the same batch (so less than 100 transactions apart), but the namespace certificate
or identifier delegation is in a previous batch, then the participant will miss the removal of the topology transaction.
In the common case, the namespace delegation is always followed by a subsequent add, but the problematic ordering can still occur.

Affected Deployments

Participant

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Depends on the type of topology transaction, but the result is a fork in the topology state, which in a rare but theoretically
possible case (observer nodes and participant using previously removed parties) might create a ledger fork, leading to
participants disconnecting from the domain.

Symptom

If the missed transaction was a mediator domain state, then the participant will fail to submit transactions whenever it
randomly selects the non-existent mediator.

Workaround

No workaround available. Manually repairing the topology state is likely possible, but not recommended.

Likeliness

Happens deterministically if the conditions are met, but the conditions are rare and require a specific sequence of
events with removal of topology state.

Recommendation

Upgrade before removing topology state (disabling parties, rolling keys) or onboarding a new participant to a domain
with a larger number of topology transactions that includes removals.

(25-002, Medium): Intermediate certificate renewal will delete topology state

Issue Description

A Canton node uses topology keys to sign topology transactions. The ultimate trust is tied to the root node key,
which by default is held by the node, but can be moved offline. In such a case, the node may use an intermediate
certificate to manage the topology state. In order to renew such intermediate certificates, the topology state needs
to be re-issued in 2.x, which can be done using the convenience function node.topology.all.renew(oldKey, newKey).
The convenience function contains an error: instead of renewing the topology state, it deletes topology transactions
of the types party-to-participant, mediator domain state, and participant domain state (the ones that contain the
replaceExisting flag).

Affected Deployments

Domain, Domain manager, Participant nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Some of the topology state will be removed after running this operation.

Symptom

Parties, participants and mediators will be missing after running the operation.

Workaround

Manually re-add the missing parties, participants and mediators.

Likeliness

Deterministic if the convenience function is used.

Recommendation

Upgrade before renewing intermediate certificates.

(25-003, High): Identifier delegation cannot be renewed

Issue Description

A Canton node uses topology keys to sign topology transactions. The ultimate trust is tied to the root node key,
which by default is held by the node, but can be moved offline. In such a case, the node may use an intermediate
certificate to manage the topology state. If such an intermediate certificate is used to sign an identifier delegation
(used as an intermediate certificate for a specific uid), then the identifier delegation cannot be renewed,
as the renewal operation will remove the old and the new certificate from the in-memory state. Unfortunately,
after a restart, the certificate could be loaded again, which can cause a ledger fork.

Affected Deployments

Domain, Domain manager, Participant nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

The topology state signed with a particular key authorized by an identifier delegation will be removed from the state,
and the key cannot be used to sign new transactions. After a restart of a node, the key would be loaded again, leading
to a possible ledger fork.

Symptom

Topology state missing after an intermediate certificate renewal, with a possible subsequent ledger fork after a restart.

Workaround

Theoretically issue a new identifier delegation for a new key and re-create the topology state. In practice, upgrade
all nodes before renewing intermediate certificates.

Likeliness

Deterministic if several intermediate certificates are used and one of them is rolled in the chain.

Recommendation

Update all nodes to a version with a fix before renewing intermediate certificates.

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.22 (Debian 12.22-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.18 (Debian 13.18-1.pgdg120+1), PostgreSQL 14.15 (Debian 14.15-1.pgdg120+1), PostgreSQL 15.10 (Debian 15.10-1.pgdg120+1)
Oracle 19.20.0

canton v2.8.12

24 Jan 08:40
c9e0b36

Release of Canton 2.8.12

Canton 2.8.12 has been released on January 24, 2025. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that fixes one high and five medium severity issues. Please update during the next maintenance window.

What’s New

Memory check during node startup

A memory check has been introduced when starting the node. This check compares the memory allocated to the container
with the -Xmx JVM option.
The goal is to ensure that the container has sufficient memory to run the application.
To configure the memory check behavior, add one of the following to your configuration:

canton.parameters.startup-memory-check-config = warn  // Default behavior: Logs a warning.
canton.parameters.startup-memory-check-config = crash // Terminates the node if the check fails.
canton.parameters.startup-memory-check-config = ignore // Skips the memory check entirely.

Minor Improvements

  • Fixed an issue preventing a participant from connecting to an old domain even if they support a common protocol version.
  • Fixed a minor issue where the validUntil time of the topology transaction results was incorrectly set to validFrom
    on the console client side.
  • Disabled the onboarding timeout for participants to support onboarding to domains with very large topology states
    without annoying warnings and timeouts.
  • Removed warnings about failing periodic acknowledgements during initial domain onboarding of participants.
  • Removed warnings about unhealthy sequencers during startup.

Bugfixes

(24-022, Medium): Participant replica does not clear package service cache

Issue Description

When a participant replica becomes active, it does not refresh the package dependency cache. If a vetting attempt is
made on the participant that fails because the package is not uploaded, the "missing package" response is cached.
If the package is then uploaded to another replica, and we switch to the original participant, this package service
cache will still record the package as nonexistent. When the package is used in a transaction, we will get a local
model conformance error as the transaction validator cannot find the package, whereas other parts of the participant
that don't use the package service can successfully locate it.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.11, 2.9.0-2.9.4

Impact

Replica crashes during transaction validation.

Symptom

Validating participant emits warning:

LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,a2b60642): Rejected transaction due to a failed model conformance check: UnvettedPackages

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

Restart recently active replica.

Likeliness

Likely to happen in any replicated participant setup with frequent vetting attempts and switches between active and
passive replicated participants between those vetting attempts.

Recommendation

Users are advised to upgrade to the next patch release during their maintenance window.

(24-028, Medium): ACS export and party replication is broken after hard domain migration

Issue Description

The macros for the various steps for migrating a party look up domain parameters in the topology store, but don't filter
out irrelevant domains. This results in the macro throwing an error because it finds multiple domain parameters after a
hard domain migration, even though one of them comes from an inactive domain.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.11, 2.9.0-2.9.5

Impact

You cannot migrate a party to or from a participant that went through a hard domain migration.

Symptom

Calling repair.party_migration.step1_store_acs fails with the error "Found more than one (2) domain parameters set for
the given domain and time!".

Workaround

The workaround, which is not self-service, is to avoid calling the party migration macros and instead replicate what they do manually.

Likeliness

The issue consistently occurs when calling the party migration macros after a hard domain migration.

Recommendation

Upgrade the involved participant nodes to the next patch release: 2.8.12 or 2.9.6.

(24-029, Medium): Domain topology manager gets stuck on too large batches

Issue Description

An off-by-one check fails in the topology dispatcher of the domain manager:
batches are not limited to N but to N+1, while the check is for N.

Affected Deployments

Domain and Domain topology manager nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Topology transactions stop being propagated through the system.

Symptom

Participants cannot onboard to domains, parties do not appear on the domain, uploaded dars cannot be used.

Workaround

Restart domain topology manager.

Likeliness

Can happen under high topology management load which is rather unusual (adding thousands of parties at full speed).

Recommendation

Update during the next maintenance window.

(25-001, Medium): Newly onboarded participants may compute a wrong topology state during bootstrapping

Issue Description

When a participant is onboarded to a domain, the domain manager will send the topology state to the participant. The
topology state is split into batches of 100. If the state contains an add and a subsequent remove of a topology transaction,
and these two topology transactions are in the same batch (so less than 100 transactions apart), but the namespace certificate
or identifier delegation is in a previous batch, then the participant will miss the removal of the topology transaction.
In the common case, the namespace delegation is always followed by a subsequent add, but the problematic ordering can still occur.

Affected Deployments

Participant

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Depends on the type of topology transaction, but the result is a fork in the topology state, which in a rare but theoretically
possible case (observer nodes and participant using previously removed parties) might create a ledger fork, leading to
participants disconnecting from the domain.

Symptom

If the missed transaction was a mediator domain state, then the participant will fail to submit transactions whenever it
randomly selects the non-existent mediator.

Workaround

No workaround available. Manually repairing the topology state is likely possible, but not recommended.

Likeliness

Happens deterministically if the conditions are met, but the conditions are rare and require a specific sequence of
events with removal of topology state.

Recommendation

Upgrade before removing topology state (disabling parties, rolling keys) or onboarding a new participant to a domain
with a larger number of topology transactions that includes removals.

(25-002, Medium): Intermediate certificate renewal will delete topology state

Issue Description

A Canton node uses topology keys to sign topology transactions. The ultimate trust is tied to the root node key,
which by default is held by the node, but can be moved offline. In such a case, the node may use an intermediate
certificate to manage the topology state. In order to renew such intermediate certificates, the topology state needs
to be re-issued in 2.x, which can be done using the convenience function node.topology.all.renew(oldKey, newKey).
The convenience function contains an error: instead of renewing the topology state, it deletes topology transactions
of the types party-to-participant, mediator domain state, and participant domain state (the ones that contain the
replaceExisting flag).

Affected Deployments

Domain, Domain manager, Participant nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

Some of the topology state will be removed after running this operation.

Symptom

Parties, participants and mediators will be missing after running the operation.

Workaround

Manually re-add the missing parties, participants and mediators.

Likeliness

Deterministic if the convenience function is used.

Recommendation

Upgrade before renewing intermediate certificates.

(25-003, High): Identifier delegation cannot be renewed

Issue Description

A Canton node uses topology keys to sign topology transactions. The ultimate trust is tied to the root node key,
which by default is held by the node, but can be moved offline. In such a case, the node may use an intermediate
certificate to manage the topology state. If such an intermediate certificate is used to sign an identifier delegation
(used as an intermediate certificate for a specific uid), then the identifier delegation cannot be renewed,
as the renewal operation will remove the old and the new certificate from the in-memory state. Unfortunately,
after a restart, the certificate could be loaded again, which can cause a ledger fork.

Affected Deployments

Domain, Domain manager, Participant nodes.

Affected Versions

All versions before 2.8, 2.8.0-2.8.11, 2.9.0-2.9.5

Impact

The topology state signed with a particular key authorized by an identifier delegation will be removed from the state,
and the key cannot be used to sign new transactions. After a restart of a node, the key would be loaded again, leading
to a possible ledger fork.

Symptom

Topology state missing after an intermediate certificate renewal, with a possible subsequent ledger f...


canton v2.10.0-rc2

21 Jan 10:55
08489e4

Release candidates such as 2.10.0-rc2 don't come with release notes

canton v2.10.0-rc1

18 Dec 12:32
b518331

Release candidates such as 2.10.0-rc1 don't come with release notes

canton v2.8.11

26 Nov 14:32
aead92e

Release of Canton 2.8.11

Canton 2.8.11 has been released on November 26, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that provides performance improvements and fixes minor bugs.

What’s New

Minor Improvements

  • Two new metrics have been added that count the number of created and archived contracts observed by a participant.
    Contracts created as part of the standard Canton ping workflow are excluded from the tally:
    participant_daml_parallel_indexer_creates
    participant_daml_parallel_indexer_archivals
  • Two more metrics have been added to the db storage metrics: exectime and load to capture the execution time and load
    of the database storage pool.
  • We added batch insertion to the single dimension event log to reduce the database load and improve performance.
  • We reduced latency on the sequencer for processing and sequencing events from other nodes.

Node's Exit on Fatal Failures

Since v2.8.4, when a node encounters a fatal failure that Canton cannot yet handle gracefully, the node exits/stops the process and relies on an external process or service monitor to restart the node's process.

Now a node also exits on a failed transition from a passive replica to an active replica, which may otherwise leave the node in an invalid state.

Crashing on fatal failures can be disabled by setting canton.parameters.exit-on-fatal-failures = false in the configuration.

Bugfixes

(24-027, Low): Bootstrap of the domain fails if the mediator or sequencer share the same key as the domain manager

Issue Description

Domain bootstrapping fails with a KeyAlreadyExists error when the signing key is shared between the mediator/sequencer
and the domain manager.

Impact

You cannot bootstrap a domain when the signing key is shared between the domain manager and mediator or sequencer nodes.

Symptom

After calling bootstrap_domain we get a KeyAlreadyExists error.

Workaround

Use different signing keys for the mediator, sequencer and the domain manager.

Likeliness

This issue consistently occurs whenever we attempt to bootstrap a domain where the domain manager's signing key is shared with the mediator or the sequencer.

Recommendation

Upgrade to 2.8.11 when affected by this limitation.

(24-025, Low): Commands for single key rotation for sequencer and mediator node fail

Issue Description

The current commands for single key rotation with sequencer and mediator nodes (rotate_node_key
and rotate_kms_node_key) fail because they do not have the necessary domain manager reference needed to find
the old key and export the new key.

Affected Deployments

Sequencer and mediator nodes

Affected Versions

All 2.3-2.7, 2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Key rotation for individual keys with sequencer or mediator nodes cannot be performed using the provided commands.

Symptom

Current single key rotation for sequencer and mediator, with commands rotate_node_key and
rotate_kms_node_key, fails with an IllegalStateException: key xyz does not exist.

Workaround

Use the domain manager to rotate a mediator or sequencer key, or use the rotate_node_keys command
with a domain manager reference to rotate all keys.

Likeliness

This issue consistently occurs when trying to rotate keys individually with sequencer or mediator nodes in
a distributed environment.

Recommendation

Upgrade to 2.8.11 when affected, and run the rotate_node_key and rotate_kms_node_key commands with a reference to the
domain topology manager to successfully perform the rotation.

(24-021, Medium): Participant replica fails to become active

Issue Description

A participant replica fails to become active under certain database network conditions. The previously active replica fails to fully transition to passive because database connection health checks are blocked, which in turn causes the other replica's transition to active to fail. Eventually the database health checks get unblocked and the first replica transitions to passive, but the other replica does not recover from its earlier failed activation, leaving both replicas passive.

Affected Deployments

Participant

Affected Versions

All 2.3-2.7
2.8.0-2.8.10
2.9.0-2.9.4

Impact

Both participant replicas remain passive and do not serve transactions.

Symptom

The transition to active failed on a participant due to maximum retries exhausted:

2024-09-02T07:08:56,178Z participant2 [c.d.c.r.DbStorageMulti:participant=participant1] [canton-env-ec-36] ERROR dd:[ ] c.d.c.r.DbStorageMulti:participant=participant1 tid:effa59a8f7ddec2e132079f2a4bd9885 - Failed to transition replica state
com.daml.timer.RetryStrategy$TooManyAttemptsException: Gave up trying after Some(3000) attempts and 300.701142545 seconds.

Workaround

Restart both replicas of the participant

Likeliness

Possible under specific database connection issues

Recommendation

Upgrade to the next patch release during regular maintenance window.

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 3, 4, 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.70+15-CA (build 11.0.22+7-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.22 (Debian 12.22-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.18 (Debian 13.18-1.pgdg120+1), PostgreSQL 14.15 (Debian 14.15-1.pgdg120+1), PostgreSQL 15.10 (Debian 15.10-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.5

23 Oct 08:54
c198fb6

Release of Canton 2.9.5

Canton 2.9.5 has been released on October 22, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release of Canton that fixes bugs, including two critical bugs that can corrupt the state of a participant node when retroactive interfaces or migrated contracts from protocol version 3 are used.

Bugfixes

(24-020, Critical): Participant crashes due to retroactive interface validation

Description

The view reinterpretation of an exercise of a retroactive interface may fail because the engine does not explicitly request the interface package. This can lead to a ledger fork as participants come to different conclusions.

Affected Deployments

Participant

Affected Versions

2.5, 2.6, 2.7, 2.8.0-2.8.9, 2.9.0-2.9.4

Impact

A participant crashes during transaction validation when using retroactive interfaces.

Symptom

Validating participant emits warning:

LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,571d2e8a): Rejected transaction due to a failed model conformance check: DAMLeError(
  Preprocessing(
    Lookup(
      NotFound(
        Package(

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

None

Likeliness

Very likely for all multi-participant setups that use retroactive interface instances.

Recommendation

Upgrade to 2.9.5

(24-024, Critical): Participant incorrectly handles unauthenticated contract IDs in PV5

Issue Description

Contracts created on participants running PV3 have an unauthenticated contract ID. When these participants are upgraded to PV5 without setting the allow-for-unauthenticated-contract-ids flag to true, any submitted transaction that uses such unauthenticated contract IDs will produce warnings during validation, but will also put the participants in an incorrect state. From then on, the participant will not output any ledger events anymore and will fail to reconnect to the domain.

Affected Deployments

Participant

Affected Versions

2.9.0-2.9.4

Impact

The participant is left in a failed state.

Symptom

Connecting to the domain fails with an internal error IllegalStateException: Cannot find event for sequenced in-flight submission.

The participant does not emit any ledger events any more.

Workaround

No workaround by clients possible. Support and engineering can try to fix the participants by modifying the participant's database tables.

Likeliness

Needs a submission request using a contract with unauthenticated contract ID. This can only happen for participants who have been migrated from using PV3 to PV5, and have not set the flag to allow unauthenticated contracts on all involved participants.

Recommendation

Upgrade during the next maintenance window to a version with the fix.
If an upgrade is not possible and old contracts from PV3 are used, enable the allow-for-unauthenticated-contract-ids flag on all the participants.

(24-026, High): Hard Synchronization Domain Migration fails to check for in-flight transactions

Issue Description

Since 2.9.0, the Hard Synchronization Domain Migration command repair.migrate_domain aborts when it detects in-flight submissions on the participant. However, it should also check for in-flight transactions.

Affected Deployments

Participant

Affected Versions

2.9.0-2.9.4

Impact

Performing a Hard Synchronization Domain Migration while there are still in-flight submissions and transactions may result in a ledger-fork.

Symptom

Ledger-fork after running the Hard Synchronization Domain Migration command repair.migrate_domain that may result in ACS commitment mismatches.

Workaround

Follow the documented steps, in particular ensure that there is no activity on all participants before proceeding with a Hard Synchronization Domain Migration.

Likeliness

The bug only manifests when the operator skips the documented Hard Synchronization Domain Migration step of ensuring that there is no activity on any participant, in combination with in-flight transactions still existing when the migration executes.

Recommendation

Upgrade to 2.9.5 to properly safeguard against running the Hard Synchronization Domain Migration command repair.migrate_domain while there are still in-flight submissions or transactions.

(24-021, Medium): Participant replica fails to become active

Issue Description

A participant replica fails to become active under certain database network conditions. The previously active replica fails to fully transition to passive because database connection health checks are blocked, which in turn causes the other replica's transition to active to fail. Eventually the database health checks get unblocked and the first replica transitions to passive, but the other replica does not recover from its earlier failed activation, leaving both replicas passive.

Affected Deployments

Participant

Affected Versions

All 2.3-2.7
2.8.0-2.8.10
2.9.0-2.9.4

Impact

Both participant replicas remain passive and do not serve transactions.

Symptom

The transition to active failed on a participant due to maximum retries exhausted:

2024-09-02T07:08:56,178Z participant2 [c.d.c.r.DbStorageMulti:participant=participant1] [canton-env-ec-36] ERROR dd:[ ] c.d.c.r.DbStorageMulti:participant=participant1 tid:effa59a8f7ddec2e132079f2a4bd9885 - Failed to transition replica state
com.daml.timer.RetryStrategy$TooManyAttemptsException: Gave up trying after Some(3000) attempts and 300.701142545 seconds.

Workaround

Restart both replicas of the participant

Likeliness

Possible under specific database connection issues

Recommendation

Upgrade to the next patch release during regular maintenance window.

(24-022, Medium): Participant replica does not clear package service cache

Issue Description

When a participant replica becomes active, it does not refresh its package service cache. If a vetting attempt is made on the participant that fails because the package is not uploaded, the "missing package" response is cached. If the package is then uploaded to another replica, and we switch to the original participant, this package service cache will still record the package as nonexistent. When the package is used in a transaction, we will get a local model conformance error as the transaction validator cannot find the package, whereas other parts of the participant that don't use the package service can successfully locate it.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Replica crashes during transaction validation.

Symptom

Validating participant emits warning:


LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,a2b60642): Rejected transaction due to a failed model conformance check: UnvettedPackages

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

Restart recently active replica

Likeliness

Likely to happen in any replicated participant setup with frequent vetting attempts and switches between active and passive replicated participants between those vetting attempts.

Recommendation

Users are advised to upgrade to the next patch release (2.9.5) during their maintenance window.

(24-023, Low): Participant fails to start if quickly acquiring and then losing DB connection during bootstrap

Issue Description

When a participant starts up and acquires the active lock, the participant replica initializes its storage and begins its bootstrap logic. If the replica loses the DB connection during the bootstrap logic, before it attempts to initialize its identity, bootstrapping is halted until the identity is initialized by another replica or the lock is re-acquired. When the lock is lost, the replica manager attempts to transition the participant state to passive, which assumes the participant has been fully initialized, which in this case it hasn't. Therefore the passive transition waits indefinitely.

Affected Deployments

Participant

Affected Versions

2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Replica gets stuck transitioning to passive state during bootstrap.

Symptom

Participant keeps emitting info logs as follows indefinitely

Replica state update to Passive has not completed after

Workaround

Restart the node

Likeliness

Exceptional, requires acquiring then losing the DB connection with a precise timing during bootstrap of the node.

Recommendation

Users are advised to upgrade to the next patch release (2.9.5) during their maintenance window.

(24-025, Low): Commands for single key rotation for sequencer and mediator node fail

Description

The current commands for single key rotation with sequencer and mediator nodes (rotate_node_key
and rotate_kms_node_key) fail because they do not have the necessary domain manager reference needed to find
the old key and export the new key.

Affected Deployments

Sequencer and mediator nodes

Affected Versions

All 2.3-2.7, 2.8.0-2.8.10, 2.9.0-2.9.4

Impact

Key rotation for individual keys with sequencer or mediator nodes c...


canton v2.8.10

16 Sep 12:35
37f1308

Release of Canton 2.8.10

Canton 2.8.10 has been released on September 16, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release that fixes a critical bug for retroactive interfaces.

Bugfixes

(24-020, Critical): Participant crashes due to retroactive interface validation

Description

The view reinterpretation of an exercise of a retroactive interface may fail because the engine does not explicitly request the interface package. This can lead to a ledger fork as participants come to different conclusions.

Affected Deployments

Participant

Affected Versions

2.5, 2.6, 2.7, 2.8.0-2.8.9

Impact

A participant crashes during transaction validation when using retroactive interfaces.

Symptom

Validating participant emits warning:


LOCAL_VERDICT_FAILED_MODEL_CONFORMANCE_CHECK(5,571d2e8a): Rejected transaction due to a failed model conformance check: DAMLeError(
  Preprocessing(
    Lookup(
      NotFound(
        Package(

And then emits an error:

An internal error has occurred.
java.lang.IllegalStateException: Mediator approved a request that we have locally rejected

Workaround

None

Likeliness

Very likely for all multi-participant setups that use retroactive interface instances.

Recommendation

Upgrade to 2.8.10

Compatibility

The following Canton protocol versions are supported:

Dependency Version
Canton protocol versions 3, 4, 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.70+15-CA (build 11.0.22+7-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.20 (Debian 12.20-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1), PostgreSQL 14.13 (Debian 14.13-1.pgdg120+1), PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.4

26 Aug 07:48
e0af6b1

Release of Canton 2.9.4

Canton 2.9.4 has been released on August 23, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

  • Protocol version 6 has had its status changed from "Beta" to "Unstable" due to a number of rare, but grave bugs in the new beta smart contract upgrading feature
  • Minor improvements around logging and DAR upload validation

What’s New

Protocol Version 6 Marked as Unstable

Background

In Daml 2.9 we released a smart contract upgrading feature in Beta. Underlying the feature are a new protocol version (6) and a new Daml-LF version (1.16), which were also released in Beta status.

Beta status is intended to designate features that do not yet have full backwards compatibility guarantees, or may still have some limitations, but are ready to be supported for select customers under an "initial availability" arrangement.

A number of rare but grave bugs in the new beta smart contract upgrading feature have been discovered during internal testing, and they will require breaking changes at the protocol level to fix. As a consequence, data continuity will be broken in the sense that smart contracts created on protocol version 6 in 2.9.1-2.9.4 will not be readable in future versions.

The 2.9 release as a whole is robust and functional. Only Beta features are affected.

Specific Changes

To prevent any accidental corruption of production, or even pre-production, systems, protocol version 6 has had its status changed from "Beta" to "Unstable" to clearly designate that it does not have appropriate guarantees.

Impact and Migration

Customers who are not using beta features or protocol version 6 can continue to use the 2.9 release. Customers using beta features are advised to move their testing of these features to the 2.10 release line.

To continue to use the beta features in 2.9.4 it will be necessary to enable support for unstable features.

See the user manual section on how to enable unsupported features to find out how this is done.

Minor Improvements

  • Fixed an issue preventing a participant from connecting to an old domain even if they support a common protocol version.
  • Startup errors due to TLS issues / misconfigurations are now correctly logged via the regular canton logging tooling instead of appearing only on stdout.
  • Added extra validation to prevent malformed DARs from being uploaded

Compatibility

The following Canton protocol and Ethereum sequencer contract versions are supported:

Dependency Version
Canton protocol versions 5

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.20 (Debian 12.20-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.16 (Debian 13.16-1.pgdg120+1), PostgreSQL 14.13 (Debian 14.13-1.pgdg120+1), PostgreSQL 15.8 (Debian 15.8-1.pgdg120+1)
Oracle 19.20.0

canton v2.9.3

22 Jul 09:57
558d88e

Release of Canton 2.9.3

Canton 2.9.3 has been released on July 22, 2024. You can download the Daml Open Source edition from the Daml Connect Github Release Section. The Enterprise edition is available on Artifactory.
Please also consult the full documentation of this release.

Summary

This is a maintenance release of Canton that fixes one high risk bug, which can
crash a participant node due to out of memory, and two low risk bugs.

Bugfixes

(24-017, High): Participants crash with an OutOfMemoryError

Description

The TaskScheduler keeps a huge number of tasks in a queue. The queue is newly introduced, so memory consumption (heap) is much higher than in previous versions. The queue size is proportional to the number of requests processed during the decision time.

Affected Deployments

Participant

Impact

Memory consumption is much higher than in previous Canton versions.

Symptom

The participant crashes with an OutOfMemoryError.

Workaround

Test the participant under load and increase the heap size accordingly. If possible, decrease the confirmation response timeout and the mediator reaction timeout.

Likeliness

High likelihood under high load and with large confirmation response and mediator reaction timeouts.

Recommendation

Upgrade to 2.9.3.

(24-018, Low): Participants log "ERROR: The check for missing ticks has failed unexpectedly"

Description

The TaskScheduler monitoring crashes and logs an Error.

Affected Deployments

Participant

Impact

The monitoring of the task scheduler crashes.

Symptom

You see an error in the logs: ERROR: The check for missing ticks has failed unexpectedly..

Workaround

If you need the monitoring to trouble-shoot missing ticks, restart the participant to restart the monitoring.

Likeliness

This will eventually occur on every system.

Recommendation

Ignore the message until upgrading to 2.9.3.

(24-015, Low): Pointwise flat transaction Ledger API queries can unexpectedly return TRANSACTION_NOT_FOUND

Description

When a party submits a command that has no events for contracts whose stakeholders are amongst the submitters, the resulting transaction cannot be queried by pointwise flat transaction Ledger API queries. This impacts the GetTransactionById, GetTransactionByEventId, and SubmitAndWaitForTransaction gRPC endpoints.

Affected Deployments

Participant

Impact

Users might perceive that a command was not successful even though it was.

Symptom

TRANSACTION_NOT_FOUND is returned on a query that is expected to succeed.

Workaround

Instead, query the transaction tree by transaction ID to get the transaction details.

Likeliness

Lower likelihood as commands usually have events whose contracts' stakeholders are amongst the submitting parties.

Recommendation

Users are advised to upgrade to the next patch release during their maintenance window.

Compatibility

The following Canton protocol and Ethereum sequencer contract versions are supported:

Dependency Version
Canton protocol versions 5, 6*

Canton has been tested against the following versions of its dependencies:

Dependency Version
Java Runtime OpenJDK 64-Bit Server VM Zulu11.72+19-CA (build 11.0.23+9-LTS, mixed mode)
Postgres Recommended: PostgreSQL 12.19 (Debian 12.19-1.pgdg120+1) – Also tested: PostgreSQL 11.16 (Debian 11.16-1.pgdg90+1), PostgreSQL 13.15 (Debian 13.15-1.pgdg120+1), PostgreSQL 14.12 (Debian 14.12-1.pgdg120+1), PostgreSQL 15.7 (Debian 15.7-1.pgdg120+1)
Oracle 19.20.0