+
+- Added a new Kafka [changefeed sink]({% link v24.2/changefeed-sinks.md %}) that uses the [`franz-go` library](https://github.com/twmb/franz-go) and CockroachDB's `batching_sink` implementation. The new Kafka sink can be enabled with the [`changefeed.new_kafka_sink_enabled`]({% link v24.2/cluster-settings.md %}) cluster setting, which is disabled by default. [#127899][#127899]
+- The v2 Kafka [changefeed sink]({% link v24.2/changefeed-sinks.md %}) now supports [Amazon Managed Streaming for Apache Kafka (MSK)](https://aws.amazon.com/msk/) IAM SASL authentication. [#127899][#127899]
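+
+For example, a minimal sketch of enabling the new sink and then creating a Kafka changefeed (the table name and broker address are placeholders):
+
+~~~ sql
+SET CLUSTER SETTING changefeed.new_kafka_sink_enabled = true;
+CREATE CHANGEFEED FOR TABLE orders INTO 'kafka://localhost:9092';
+~~~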
+
+
DB Console changes
+
+- The [Databases]({% link v24.2/ui-databases-page.md %}) and [Tables]({% link v24.2/ui-databases-page.md %}#tables-view) pages in the [DB Console]({% link v24.2/ui-overview.md %}) will show a loading state while loading information for databases and tables, including size and range counts. [#127696][#127696]
+- On the [Database details]({% link v24.2/ui-databases-page.md %}) page, the table name will no longer appear with quotes around the schema and table name. [#127770][#127770]
+
+
Bug fixes
+
+- Fixed a bug that caused a memory leak when executing SQL statements with comments, for example, `SELECT /* comment */ 1;`. Memory owned by a SQL session would continue to grow as these types of statements were executed. The memory would only be released when closing the [SQL session]({% link v24.2/show-sessions.md %}). This bug has been present since v23.1. [#127760][#127760]
+- Fixed a bug in [debug zip]({% link v24.2/cockroach-debug-zip.md %}) generation where an error was produced while fetching unstructured/malformed [logs]({% link v24.2/log-formats.md %}). [#127883][#127883]
+- Fixed small memory leaks that occurred during [changefeed creation]({% link v24.2/create-changefeed.md %}). [#127899][#127899]
+- Fixed a [known limitation]({% link v24.2/physical-cluster-replication-overview.md %}#known-limitations) in which [fast cutback]({% link v24.2/cutover-replication.md %}#cut-back-to-the-primary-cluster) could fail. Users can now protect data for the [default protection window]({% link v24.2/physical-cluster-replication-technical-overview.md %}) of 4 hours on both the primary and the standby clusters. [#127892][#127892]
+
+
+
+
Contributors
+
+This release includes 29 merged PRs by 21 authors.
+
+
+
+[#127696]: https://github.com/cockroachdb/cockroach/pull/127696
+[#127760]: https://github.com/cockroachdb/cockroach/pull/127760
+[#127770]: https://github.com/cockroachdb/cockroach/pull/127770
+[#127883]: https://github.com/cockroachdb/cockroach/pull/127883
+[#127892]: https://github.com/cockroachdb/cockroach/pull/127892
+[#127899]: https://github.com/cockroachdb/cockroach/pull/127899
From ec591c4c2625d544c6f390ea3adefa36fbf289aa Mon Sep 17 00:00:00 2001
From: Rich Loveland
Date: Thu, 8 Aug 2024 12:27:50 -0400
Subject: [PATCH 02/15] Stored computed columns support FKs, with limits
(#18792)
* Stored computed columns support FKs, with limits
Fixes DOC-10491
---
src/current/v24.2/computed-columns.md | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/current/v24.2/computed-columns.md b/src/current/v24.2/computed-columns.md
index 71f0f066e8f..15743202bba 100644
--- a/src/current/v24.2/computed-columns.md
+++ b/src/current/v24.2/computed-columns.md
@@ -30,6 +30,9 @@ Computed columns:
- Cannot be used to generate other computed columns.
- Behave like any other column, with the exception that they cannot be written to directly.
- Are mutually exclusive with [`DEFAULT`]({% link {{ page.version.version }}/default-value.md %}) and [`ON UPDATE`]({% link {{ page.version.version }}/create-table.md %}#on-update-expressions) expressions.
+- {% include_cached new-in.html version="v24.2" %} Can be used in [`FOREIGN KEY`]({% link {{ page.version.version }}/foreign-key.md %}) constraints, but are restricted to the following subset of supported options. This restriction is necessary because a referential action cannot change the value of a computed column. Refer to the example after this list.
+ - `ON UPDATE (NO ACTION|RESTRICT)`
+ - `ON DELETE (NO ACTION|RESTRICT|CASCADE)`
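+
+For example, the following minimal sketch (using hypothetical `orders` and `shipments` tables) shows a stored computed column used in a `FOREIGN KEY` constraint with the supported referential actions:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TABLE orders (order_id INT PRIMARY KEY);
+
+CREATE TABLE shipments (
+  shipment_id INT PRIMARY KEY,
+  raw_order_id INT,
+  -- Stored computed column used as the referencing column.
+  order_ref INT AS (raw_order_id) STORED,
+  -- ON UPDATE is limited to NO ACTION or RESTRICT; ON DELETE may also use CASCADE.
+  CONSTRAINT fk_order FOREIGN KEY (order_ref) REFERENCES orders (order_id)
+    ON DELETE CASCADE ON UPDATE RESTRICT
+);
+~~~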
Virtual computed columns:
From 7e5695d473f7a6fe9c046e77401b393c5936d6c2 Mon Sep 17 00:00:00 2001
From: Rich Loveland
Date: Thu, 8 Aug 2024 14:36:06 -0400
Subject: [PATCH 03/15] Update COMMENT ON docs for types (#18761)
* Add types to COMMENT ON docs for v24.2
Fixes DOC-10455, DOC-10572
---
src/current/v24.2/comment-on.md | 58 +++++++++++++++++++++++++++++++--
1 file changed, 56 insertions(+), 2 deletions(-)
diff --git a/src/current/v24.2/comment-on.md b/src/current/v24.2/comment-on.md
index 33e289714bc..8625702e1a6 100644
--- a/src/current/v24.2/comment-on.md
+++ b/src/current/v24.2/comment-on.md
@@ -1,11 +1,11 @@
---
title: COMMENT ON
-summary: The COMMENT ON statement associates comments to databases, tables, columns, or indexes.
+summary: The COMMENT ON statement associates comments to databases, tables, columns, indexes, or types.
toc: true
docs_area: reference.sql
---
-The `COMMENT ON` [statement]({% link {{ page.version.version }}/sql-statements.md %}) associates comments to [databases]({% link {{ page.version.version }}/create-database.md %}), [tables]({% link {{ page.version.version }}/create-table.md %}), [columns]({% link {{ page.version.version }}/alter-table.md %}#add-column), or [indexes]({% link {{ page.version.version }}/indexes.md %}).
+The `COMMENT ON` [statement]({% link {{ page.version.version }}/sql-statements.md %}) associates comments to [databases]({% link {{ page.version.version }}/create-database.md %}), [tables]({% link {{ page.version.version }}/create-table.md %}), [columns]({% link {{ page.version.version }}/alter-table.md %}#add-column), [indexes]({% link {{ page.version.version }}/indexes.md %}), or [types]({% link {{page.version.version}}/show-types.md %}).
{% include {{ page.version.version }}/misc/schema-change-stmt-note.md %}
@@ -25,6 +25,7 @@ The user must have the `CREATE` [privilege]({% link {{ page.version.version }}/s
------------|--------------
`database_name` | The name of the [database]({% link {{ page.version.version }}/create-database.md %}) on which you are commenting.
`schema_name` | The name of the [schema]({% link {{ page.version.version }}/create-schema.md %}) on which you are commenting.
+`type_name` | The name of the [type]({% link {{ page.version.version }}/show-types.md %}) on which you are commenting.
`table_name` | The name of the [table]({% link {{ page.version.version }}/create-table.md %}) on which you are commenting.
`column_name` | The name of the [column]({% link {{ page.version.version }}/alter-table.md %}#add-column) on which you are commenting.
`table_index_name` | The name of the [index]({% link {{ page.version.version }}/indexes.md %}) on which you are commenting.
@@ -180,6 +181,50 @@ To view column comments, use [`SHOW INDEXES ... WITH COMMENT`]({% link {{ page.v
(8 rows)
~~~
+### Add a comment to a type
+
+Issue a SQL statement to [create a type]({% link {{ page.version.version }}/create-type.md %}):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TYPE IF NOT EXISTS my_point AS (x FLOAT, y FLOAT, z FLOAT);
+~~~
+
+To view the type you just created, use [`SHOW TYPES`]({% link {{page.version.version}}/show-types.md %}):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SHOW TYPES;
+~~~
+
+~~~
+ schema | name | owner
+---------+----------+--------
+ public | my_point | root
+(1 row)
+~~~
+
+To add a comment on the type, use a statement like the following:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+COMMENT ON TYPE my_point IS '3D point';
+~~~
+
+To view all comments on types, make a [selection query]({% link {{page.version.version}}/select-clause.md %}) against the `system.comments` table:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT * FROM system.comments;
+~~~
+
+~~~
+ type | object_id | sub_id | comment
+-------+-----------+--------+-----------
+ 7 | 112 | 0 | 3D point
+(1 row)
+~~~
+
### Remove a comment from a database
To remove a comment from a database:
@@ -204,6 +249,15 @@ To remove a comment from a database:
(4 rows)
~~~
+### Remove a comment from a type
+
+To remove the comment from the type you created in the [preceding example](#add-a-comment-to-a-type), add a `NULL` comment:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+COMMENT ON TYPE my_point IS NULL;
+~~~
+
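+To confirm that the comment was removed, query the `system.comments` table again (a sketch; `type = 7` corresponds to type comments in the earlier output, and the row for `my_point` should no longer appear):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT * FROM system.comments WHERE type = 7;
+~~~
+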
## See also
- [`CREATE DATABASE`]({% link {{ page.version.version }}/create-database.md %})
From 5808e14157c88ff53e859a4224633476e4a8e5bd Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 15:09:13 -0400
Subject: [PATCH 04/15] add VECTOR doc (#18791)
* add VECTOR doc
---
.../v24.2/misc/enterprise-features.md | 1 +
.../_includes/v24.2/sidebar-data/sql.json | 6 ++
src/current/v24.2/data-types.md | 1 +
src/current/v24.2/vector.md | 94 +++++++++++++++++++
4 files changed, 102 insertions(+)
create mode 100644 src/current/v24.2/vector.md
diff --git a/src/current/_includes/v24.2/misc/enterprise-features.md b/src/current/_includes/v24.2/misc/enterprise-features.md
index 258370890be..3ed7f1b04fa 100644
--- a/src/current/_includes/v24.2/misc/enterprise-features.md
+++ b/src/current/_includes/v24.2/misc/enterprise-features.md
@@ -7,6 +7,7 @@ Feature | Description
[Multi-Region Capabilities]({% link {{ page.version.version }}/multiregion-overview.md %}) | Row-level control over where your data is stored to help you reduce read and write latency and meet regulatory requirements.
[PL/pgSQL]({% link {{ page.version.version }}/plpgsql.md %}) | Use a procedural language in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) and [stored procedures]({% link {{ page.version.version }}/stored-procedures.md %}) to improve performance and enable more complex queries.
[Node Map]({% link {{ page.version.version }}/enable-node-map.md %}) | Visualize the geographical distribution of a cluster by plotting its node localities on a world map.
+[`VECTOR` type]({% link {{ page.version.version }}/vector.md %}) | Represent data points in multi-dimensional space, using fixed-length arrays of floating-point numbers.
## Recovery and streaming
diff --git a/src/current/_includes/v24.2/sidebar-data/sql.json b/src/current/_includes/v24.2/sidebar-data/sql.json
index 9bb38aacbe9..4b3d88d428d 100644
--- a/src/current/_includes/v24.2/sidebar-data/sql.json
+++ b/src/current/_includes/v24.2/sidebar-data/sql.json
@@ -1015,6 +1015,12 @@
"urls": [
"/${VERSION}/uuid.html"
]
+ },
+ {
+ "title": "VECTOR",
+ "urls": [
+ "/${VERSION}/vector.html"
+ ]
}
]
},
diff --git a/src/current/v24.2/data-types.md b/src/current/v24.2/data-types.md
index 634b247591a..ab14f2bee51 100644
--- a/src/current/v24.2/data-types.md
+++ b/src/current/v24.2/data-types.md
@@ -33,6 +33,7 @@ Type | Description | Example
[`TSQUERY`]({% link {{ page.version.version }}/tsquery.md %}) | A list of lexemes and operators used in [full-text search]({% link {{ page.version.version }}/full-text-search.md %}). | `'list' & 'lexem' & 'oper' & 'use' & 'full' & 'text' & 'search'`
[`TSVECTOR`]({% link {{ page.version.version }}/tsvector.md %}) | A list of lexemes with optional integer positions and weights used in [full-text search]({% link {{ page.version.version }}/full-text-search.md %}). | `'full':13 'integ':7 'lexem':4 'list':2 'option':6 'posit':8 'search':15 'text':14 'use':11 'weight':10`
[`UUID`]({% link {{ page.version.version }}/uuid.md %}) | A 128-bit hexadecimal value. | `7f9c24e8-3b12-4fef-91e0-56a2d5a246ec`
+[`VECTOR`]({% link {{ page.version.version }}/vector.md %}) | A fixed-length array of floating-point numbers. | `[1.0, 0.0, 0.0]`
## Data type conversions and casts
diff --git a/src/current/v24.2/vector.md b/src/current/v24.2/vector.md
new file mode 100644
index 00000000000..a07e2c12d1e
--- /dev/null
+++ b/src/current/v24.2/vector.md
@@ -0,0 +1,94 @@
+---
+title: VECTOR
+summary: The VECTOR data type stores fixed-length arrays of floating-point numbers, which represent data points in multi-dimensional space.
+toc: true
+docs_area: reference.sql
+---
+
+{% include enterprise-feature.md %}
+
+{{site.data.alerts.callout_info}}
+{% include feature-phases/preview.md %}
+{{site.data.alerts.end}}
+
+The `VECTOR` data type stores fixed-length arrays of floating-point numbers, which represent data points in multi-dimensional space. Vector search is often used in AI applications such as Large Language Models (LLMs) that rely on vector representations.
+
+For details on valid `VECTOR` comparison operators, refer to [Syntax](#syntax). For the list of supported `VECTOR` functions, refer to [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions).
+
+{{site.data.alerts.callout_info}}
+`VECTOR` functionality is compatible with the [`pgvector`](https://github.com/pgvector/pgvector) extension for PostgreSQL. Vector indexing is **not** supported at this time.
+{{site.data.alerts.end}}
+
+## Syntax
+
+A `VECTOR` value is expressed as an [array]({% link {{ page.version.version }}/array.md %}) of [floating-point numbers]({% link {{ page.version.version }}/float.md %}). The array size corresponds to the number of `VECTOR` dimensions. For example, the following `VECTOR` has 3 dimensions:
+
+~~~
+[1.0, 0.0, 0.0]
+~~~
+
+You can specify the dimensions when defining a `VECTOR` column. This will enforce the number of dimensions in the column values. For example:
+
+~~~ sql
+ALTER TABLE foo ADD COLUMN bar VECTOR(3);
+~~~
+
+The following `VECTOR` comparison operators are valid:
+
+- `=` (equals). Compare vectors for equality in filtering and conditional queries.
+- `<>` (not equal to). Compare vectors for inequality in filtering and conditional queries.
+- `<->` (L2 distance). Calculate the Euclidean distance between two vectors, as used in [nearest neighbor search](https://en.wikipedia.org/wiki/Nearest_neighbor_search) and clustering algorithms.
+- `<#>` (negative inner product). Calculate the [inner product](https://en.wikipedia.org/wiki/Inner_product_space) of two vectors, as used in similarity searches where the inner product can represent the similarity score.
+- `<=>` (cosine distance). Calculate the [cosine distance](https://en.wikipedia.org/wiki/Cosine_similarity) between vectors, such as in text and image similarity measures where the orientation of vectors is more important than their magnitude.
+
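+For example, the following minimal sketch compares two `VECTOR` literals using these operators (the string-to-`VECTOR` casts are assumed to behave as in `pgvector`):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT '[1.0, 0.0, 0.0]'::VECTOR <-> '[0.0, 1.0, 0.0]'::VECTOR AS l2_distance,
+       '[1.0, 0.0, 0.0]'::VECTOR <#> '[0.0, 1.0, 0.0]'::VECTOR AS neg_inner_product,
+       '[1.0, 0.0, 0.0]'::VECTOR <=> '[0.0, 1.0, 0.0]'::VECTOR AS cosine_distance;
+~~~
+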
+## Size
+
+The size of a `VECTOR` value is variable, but it's recommended to keep values under 1 MB to ensure performance. Above that threshold, [write amplification]({% link {{ page.version.version }}/architecture/storage-layer.md %}#write-amplification) and other considerations may cause significant performance degradation.
+
+## Functions
+
+For the list of supported `VECTOR` functions, refer to [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions).
+
+## Example
+
+Create a table with a `VECTOR` column, specifying `3` dimensions:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+CREATE TABLE items (
+ category STRING,
+ vector VECTOR(3),
+ INDEX (category)
+);
+~~~
+
+Insert some sample data into the table:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+INSERT INTO items (category, vector) VALUES
+ ('electronics', '[1.0, 0.0, 0.0]'),
+ ('electronics', '[0.9, 0.1, 0.0]'),
+ ('furniture', '[0.0, 1.0, 0.0]'),
+ ('furniture', '[0.0, 0.9, 0.1]'),
+ ('clothing', '[0.0, 0.0, 1.0]');
+~~~
+
+Use the [`<->` operator](#syntax) to sort values in the `electronics` category by their similarity to `[1.0, 0.0, 0.0]`, based on Euclidean distance.
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT category, vector FROM items WHERE category = 'electronics' ORDER BY vector <-> '[1.0, 0.0, 0.0]' LIMIT 5;
+~~~
+
+~~~
+ category | vector
+--------------+--------------
+ electronics | [1,0,0]
+ electronics | [0.9,0.1,0]
+~~~
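+
+Similarly, a sketch using the [`<=>` operator](#syntax) to order the same rows by cosine distance from `[1.0, 0.0, 0.0]`:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SELECT category, vector FROM items WHERE category = 'electronics' ORDER BY vector <=> '[1.0, 0.0, 0.0]' LIMIT 5;
+~~~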
+
+## See also
+
+- [Functions and Operators]({% link {{ page.version.version }}/functions-and-operators.md %}#pgvector-functions)
+- [Data Types]({% link {{ page.version.version }}/data-types.md %})
\ No newline at end of file
From d65f8685128b4ae55932c934c43e60153280fbc6 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 15:16:51 -0400
Subject: [PATCH 05/15] document generic query plans (#18753)
* document generic query plans
---------
Co-authored-by: Florence Morris
---
.../v24.2/misc/enterprise-features.md | 1 +
.../_includes/v24.2/misc/session-vars.md | 1 +
src/current/v24.2/cost-based-optimizer.md | 75 ++++++++++++++++---
src/current/v24.2/explain-analyze.md | 5 ++
4 files changed, 70 insertions(+), 12 deletions(-)
diff --git a/src/current/_includes/v24.2/misc/enterprise-features.md b/src/current/_includes/v24.2/misc/enterprise-features.md
index 3ed7f1b04fa..765da40371a 100644
--- a/src/current/_includes/v24.2/misc/enterprise-features.md
+++ b/src/current/_includes/v24.2/misc/enterprise-features.md
@@ -7,6 +7,7 @@ Feature | Description
[Multi-Region Capabilities]({% link {{ page.version.version }}/multiregion-overview.md %}) | Row-level control over where your data is stored to help you reduce read and write latency and meet regulatory requirements.
[PL/pgSQL]({% link {{ page.version.version }}/plpgsql.md %}) | Use a procedural language in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) and [stored procedures]({% link {{ page.version.version }}/stored-procedures.md %}) to improve performance and enable more complex queries.
[Node Map]({% link {{ page.version.version }}/enable-node-map.md %}) | Visualize the geographical distribution of a cluster by plotting its node localities on a world map.
+[Generic query plans]({% link {{ page.version.version }}/cost-based-optimizer.md %}#query-plan-type) | Improve performance for prepared statements by enabling generic plans that eliminate most of the query latency attributed to planning.
[`VECTOR` type]({% link {{ page.version.version }}/vector.md %}) | Represent data points in multi-dimensional space, using fixed-length arrays of floating-point numbers.
## Recovery and streaming
diff --git a/src/current/_includes/v24.2/misc/session-vars.md b/src/current/_includes/v24.2/misc/session-vars.md
index c96060545d6..59b24a55a81 100644
--- a/src/current/_includes/v24.2/misc/session-vars.md
+++ b/src/current/_includes/v24.2/misc/session-vars.md
@@ -54,6 +54,7 @@
| `optimizer_use_multicol_stats` | If `on`, the optimizer uses collected multi-column statistics for cardinality estimation. | `on` | No | Yes |
| `optimizer_use_not_visible_indexes` | If `on`, the optimizer uses not visible indexes for planning. | `off` | No | Yes |
| `optimizer_use_virtual_computed_column_stats` | If `on`, the optimizer uses table statistics on [virtual computed columns]({% link {{ page.version.version }}/computed-columns.md %}#virtual-computed-columns). | `on` | Yes | Yes
+| `plan_cache_mode` | The type of plan that is cached in the [query plan cache]({% link {{ page.version.version }}/cost-based-optimizer.md %}#query-plan-cache): `auto`, `force_generic_plan`, or `force_custom_plan`. For more information, refer to [Query plan type]({% link {{ page.version.version }}/cost-based-optimizer.md %}#query-plan-type). | `force_custom_plan` | Yes | Yes |
| `plpgsql_use_strict_into` | If `on`, PL/pgSQL [`SELECT ... INTO` and `RETURNING ... INTO` statements]({% link {{ page.version.version }}/plpgsql.md %}#assign-a-result-to-a-variable) behave as though the `STRICT` option is specified. This causes the SQL statement to error if it does not return exactly one row. | `off` | Yes | Yes |
| `pg_trgm.similarity_threshold` | The threshold above which a [`%`]({% link {{ page.version.version }}/functions-and-operators.md %}#operators) string comparison returns `true`. The value must be between `0` and `1`. For more information, see [Trigram Indexes]({% link {{ page.version.version }}/trigram-indexes.md %}). | `0.3` | Yes | Yes |
| `prefer_lookup_joins_for_fks` | If `on`, the optimizer prefers [`lookup joins`]({% link {{ page.version.version }}/joins.md %}#lookup-joins) to [`merge joins`]({% link {{ page.version.version }}/joins.md %}#merge-joins) when performing [`foreign key`]({% link {{ page.version.version }}/foreign-key.md %}) checks. | `off` | Yes | Yes |
diff --git a/src/current/v24.2/cost-based-optimizer.md b/src/current/v24.2/cost-based-optimizer.md
index b6fb9b8e418..2c162789a70 100644
--- a/src/current/v24.2/cost-based-optimizer.md
+++ b/src/current/v24.2/cost-based-optimizer.md
@@ -277,27 +277,77 @@ Only tables with `ZONE` [survivability]({% link {{ page.version.version }}/multi
## Query plan cache
-CockroachDB uses a cache for the query plans generated by the optimizer. This can lead to faster query execution since the database can reuse a query plan that was previously calculated, rather than computing a new plan each time a query is executed.
+CockroachDB caches some of the query plans generated by the optimizer. The query plan cache is used for the following types of statements:
+
+- Prepared statements.
+- Non-prepared statements using identical constant values.
+
+Caching query plans leads to faster query execution: rather than generating a new plan each time a query is executed, CockroachDB reuses a query plan that was previously generated.
The query plan cache is enabled by default. To disable it, execute the following statement:
{% include_cached copy-clipboard.html %}
~~~ sql
-> SET CLUSTER SETTING sql.query_cache.enabled = false;
+SET CLUSTER SETTING sql.query_cache.enabled = false;
~~~
-Only the following statements use the plan cache:
+The following statements can use the plan cache: [`SELECT`]({% link {{ page.version.version }}/select-clause.md %}), [`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}), and [`DELETE`]({% link {{ page.version.version }}/delete.md %}).
-- [`SELECT`]({% link {{ page.version.version }}/select-clause.md %})
-- [`INSERT`]({% link {{ page.version.version }}/insert.md %})
-- [`UPDATE`]({% link {{ page.version.version }}/update.md %})
-- [`UPSERT`]({% link {{ page.version.version }}/upsert.md %})
-- [`DELETE`]({% link {{ page.version.version }}/delete.md %})
+Two types of plans can be cached: custom and generic. Refer to [Query plan type](#query-plan-type).
-The optimizer can use cached plans if they are:
+### Query plan type
-- Prepared statements.
-- Non-prepared statements using identical constant values.
+The following types of plans can be cached:
+
+- *Custom* query plans are generated for a given query structure and optimized for specific placeholder values, and are re-optimized on subsequent executions. By default, the optimizer uses custom plans.
+- {% include_cached new-in.html version="v24.2" %} *Generic* query plans are generated and optimized once without considering specific placeholder values, and are **not** regenerated on subsequent executions, unless the plan becomes stale due to [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) or new [table statistics](#table-statistics) and must be re-optimized. This approach eliminates most of the query latency attributed to planning.
+
+ Generic query plans require an [Enterprise license]({% link {{ page.version.version }}/enterprise-licensing.md %}).
+
+ {{site.data.alerts.callout_success}}
+ Generic query plans will only benefit workloads that use prepared statements, which are issued via explicit `PREPARE` statements or by client libraries using the [PostgreSQL extended wire protocol](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY). Generic query plans are most beneficial for queries with high planning times, such as queries with many [joins]({% link {{ page.version.version }}/joins.md %}). For more information on reducing planning time for such queries, refer to [Reduce planning time for queries with many joins](#reduce-planning-time-for-queries-with-many-joins).
+ {{site.data.alerts.end}}
+
+To change the type of plan that is cached, use the [`plan_cache_mode`]({% link {{ page.version.version }}/session-variables.md %}#plan-cache-mode) session setting. This setting applies when a statement is executed, not when it is prepared. Statements are therefore not associated with a specific query plan type when they are prepared.
+
+The following modes can be set:
+
+- `force_custom_plan` (default): Force the use of custom plans.
+- `force_generic_plan`: Force the use of generic plans.
+- `auto`: Automatically determine whether to use custom or generic query plans for prepared statements. Custom plans are used for the first five statement executions. Subsequent executions use a generic plan if its estimated cost is not significantly higher than the average cost of the preceding custom plans.
+
+{{site.data.alerts.callout_info}}
+Generic plans are always used for non-prepared statements that do not contain placeholders or [stable functions]({% link {{ page.version.version }}/functions-and-operators.md %}#function-volatility), regardless of the `plan_cache_mode` setting.
+{{site.data.alerts.end}}
+
+In some cases, generic query plans are less efficient than custom plans. For this reason, Cockroach Labs recommends setting `plan_cache_mode` to `auto` instead of `force_generic_plan`. Under the `auto` setting, the optimizer avoids bad generic plans by falling back to custom plans. For example:
+
+Set `plan_cache_mode` to `auto` at the session level:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET plan_cache_mode = auto;
+~~~
+
+At the [database level]({% link {{ page.version.version }}/alter-database.md %}#set-session-variable):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER DATABASE db SET plan_cache_mode = auto;
+~~~
+
+At the [role level]({% link {{ page.version.version }}/alter-role.md %}#set-default-session-variable-values-for-a-role):
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+ALTER ROLE db_user SET plan_cache_mode = auto;
+~~~
+
+To verify the plan type used by a query, check the [`EXPLAIN ANALYZE`]({% link {{ page.version.version }}/explain-analyze.md %}) output for the query.
+
+- If a generic query plan is optimized for the current execution, the `plan type` in the output is `generic, re-optimized`.
+- If a generic query plan is reused for the current execution without performing optimization, the `plan type` in the output is `generic, reused`.
+- If a custom query plan is used for the current execution, the `plan type` in the output is `custom`.
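+
+For example, a minimal sketch (assuming a hypothetical `users` table and a client that issues explicit `PREPARE`/`EXECUTE` statements) of checking the plan type for a prepared statement:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SET plan_cache_mode = auto;
+
+-- Prepare a statement with a placeholder so it is eligible for a generic plan.
+PREPARE find_user AS SELECT * FROM users WHERE id = $1;
+
+-- The EXPLAIN ANALYZE output includes a `plan type` field for this execution.
+EXPLAIN ANALYZE EXECUTE find_user(1);
+~~~
+
+Under the `auto` setting, the first several executions typically report `plan type: custom`; later executions may report `plan type: generic, reused` once a generic plan is in use.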
## Join reordering
@@ -309,7 +359,7 @@ To change this setting, which is controlled by the `reorder_joins_limit` [sessio
{% include_cached copy-clipboard.html %}
~~~ sql
-> SET reorder_joins_limit = 0;
+SET reorder_joins_limit = 0;
~~~
To disable this feature, set the variable to `0`. You can configure the default `reorder_joins_limit` session setting with the [cluster setting]({% link {{ page.version.version }}/cluster-settings.md %}) `sql.defaults.reorder_joins_limit`, which has a default value of `8`.
@@ -328,6 +378,7 @@ The cost-based optimizer explores multiple join orderings to find the lowest-cos
- To limit the size of the subtree that can be reordered, set the `reorder_joins_limit` [session variable]({% link {{ page.version.version }}/set-vars.md %}) to a lower value, for example:
+ {% include_cached copy-clipboard.html %}
~~~ sql
SET reorder_joins_limit = 2;
~~~
diff --git a/src/current/v24.2/explain-analyze.md b/src/current/v24.2/explain-analyze.md
index 68cb8e8c7d5..2ed8ddce396 100644
--- a/src/current/v24.2/explain-analyze.md
+++ b/src/current/v24.2/explain-analyze.md
@@ -212,6 +212,7 @@ EXPLAIN ANALYZE SELECT city, AVG(revenue) FROM rides GROUP BY city;
execution time: 8ms
distribution: full
vectorized: true
+ plan type: custom
rows decoded from KV: 500 (88 KiB, 1 gRPC calls)
cumulative time spent in KV: 6ms
maximum memory usage: 240 KiB
@@ -262,6 +263,7 @@ EXPLAIN ANALYZE SELECT * FROM vehicles JOIN rides ON rides.vehicle_id = vehicles
execution time: 5ms
distribution: local
vectorized: true
+ plan type: custom
rows decoded from KV: 515 (90 KiB, 2 gRPC calls)
cumulative time spent in KV: 4ms
maximum memory usage: 580 KiB
@@ -335,6 +337,7 @@ EXPLAIN ANALYZE (VERBOSE) SELECT city, AVG(revenue) FROM rides GROUP BY city;
execution time: 5ms
distribution: full
vectorized: true
+ plan type: custom
rows decoded from KV: 500 (88 KiB, 500 KVs, 1 gRPC calls)
cumulative time spent in KV: 4ms
maximum memory usage: 240 KiB
@@ -397,6 +400,7 @@ EXPLAIN ANALYZE (DISTSQL) SELECT city, AVG(revenue) FROM rides GROUP BY city;
execution time: 4ms
distribution: full
vectorized: true
+ plan type: custom
rows decoded from KV: 500 (88 KiB, 1 gRPC calls)
cumulative time spent in KV: 3ms
maximum memory usage: 240 KiB
@@ -475,6 +479,7 @@ EXPLAIN ANALYZE (REDACT) SELECT * FROM rides WHERE revenue > 90 ORDER BY revenue
execution time: 6ms
distribution: full
vectorized: true
+ plan type: custom
rows decoded from KV: 500 (88 KiB, 1 gRPC calls)
cumulative time spent in KV: 4ms
maximum memory usage: 280 KiB
From 2095ce5aa8f2329d9c3c705133c63eb221b122ed Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 15:50:31 -0400
Subject: [PATCH 06/15] molt fetch transformation rules (#18797)
* molt fetch transformation rules
---------
Co-authored-by: Jane Xing <53610260+ZhouXing19@users.noreply.github.com>
Co-authored-by: Florence Morris
---
src/current/molt/molt-fetch.md | 98 +++++++++++++++++++++++++++++++++-
1 file changed, 96 insertions(+), 2 deletions(-)
diff --git a/src/current/molt/molt-fetch.md b/src/current/molt/molt-fetch.md
index 5398a460b94..c9eef75f90a 100644
--- a/src/current/molt/molt-fetch.md
+++ b/src/current/molt/molt-fetch.md
@@ -216,6 +216,7 @@ To verify that your connections and configuration work properly, run MOLT Fetch
| `--table-exclusion-filter` | Exclude tables that match a specified [POSIX regular expression](https://wikipedia.org/wiki/Regular_expression).<br><br>This value **cannot** be set to `'.*'`, which would cause every table to be excluded.<br><br>**Default:** Empty string |
| `--table-filter` | Move tables that match a specified [POSIX regular expression](https://wikipedia.org/wiki/Regular_expression).<br><br>**Default:** `'.*'` |
| `--table-handling` | How tables are initialized on the target database (`'none'`/`'drop-on-target-and-recreate'`/`'truncate-if-exists'`). For details, see [Target table handling](#target-table-handling).<br><br>**Default:** `'none'` |
+| `--transformations-file` | Path to a JSON file that defines transformations to be performed on the target schema during the fetch process. Refer to [Transformations](#transformations). |
| `--type-map-file` | Path to a JSON file that contains explicit type mappings for automatic schema creation, when enabled with `--table-handling 'drop-on-target-and-recreate'`. For details on the JSON format and valid type mappings, see [type mapping](#type-mapping). |
| `--use-console-writer` | Use the console writer, which has cleaner log output but introduces more latency.<br><br>**Default:** `false` (log as structured JSON) |
| `--use-copy` | Use [`COPY FROM` mode](#fetch-mode) to move data. This makes tables queryable during data load, but is slower than `IMPORT INTO` mode. For details, see [Fetch mode](#fetch-mode). |
@@ -453,7 +454,7 @@ If [`'drop-on-target-and-recreate'`](#target-table-handling) is set, MOLT Fetch
| `BOOL`, `BOOLEAN` | [`BOOL`]({% link {{site.current_cloud_version}}/bool.md %}) |
| `ENUM` | [`ANY_ENUM`]({% link {{site.current_cloud_version}}/enum.md %}) |
-- To override the default mappings for automatic schema creation, you can map source to target CockroachDB types explicitly. These are specified using a JSON file and `--type-map-file`. The allowable custom mappings are valid CockroachDB aliases, casts, and the following mappings specific to MOLT Fetch and [Verify]({% link molt/molt-verify.md %}):
+- To override the default mappings for automatic schema creation, you can map source to target CockroachDB types explicitly. These are defined in the JSON file indicated by the `--type-map-file` flag. The allowable custom mappings are valid CockroachDB aliases, casts, and the following mappings specific to MOLT Fetch and [Verify]({% link molt/molt-verify.md %}):
- [`TIMESTAMP`]({% link {{site.current_cloud_version}}/timestamp.md %}) <> [`TIMESTAMPTZ`]({% link {{site.current_cloud_version}}/timestamp.md %})
- [`VARCHAR`]({% link {{site.current_cloud_version}}/string.md %}) <> [`UUID`]({% link {{site.current_cloud_version}}/uuid.md %})
@@ -470,7 +471,7 @@ If [`'drop-on-target-and-recreate'`](#target-table-handling) is set, MOLT Fetch
--type-map-file 'type-mappings.json'
~~~
-The JSON is formatted as follows:
+The following JSON example defines two type mappings:
~~~ json
[
@@ -500,6 +501,99 @@ The JSON is formatted as follows:
- `column` specifies the column that will use the custom type mapping in `type-kv`. If `*` is specified, then all columns in the `table` with the matching `source-type` are converted.
- `type-kv` specifies the `source-type` that maps to the target `crdb-type`.
+### Transformations
+
+You can define transformation rules to be performed on the target schema during the fetch process. These can be used to:
+
+- Map [computed columns]({% link {{ site.current_cloud_version }}/computed-columns.md %}) to a target schema.
+- Map [partitioned tables]({% link {{ site.current_cloud_version }}/partitioning.md %}) to a single target table.
+- Rename tables on the target schema.
+
+Transformation rules are defined in the JSON file indicated by the `--transformations-file` flag. For example:
+
+{% include_cached copy-clipboard.html %}
+~~~
+--transformations-file 'transformation-rules.json'
+~~~
+
+The following JSON example defines two transformation rules:
+
+~~~ json
+{
+ "transforms": [
+ {
+ "id": 1,
+ "resource_specifier": {
+ "schema": ".*",
+ "table": ".*"
+ },
+ "column_exclusion_opts": {
+ "add_computed_def": true,
+ "column": "^age$"
+ }
+ },
+ {
+ "id": 2,
+ "resource_specifier": {
+ "schema": "public",
+ "table": "charges_part.*"
+ },
+ "table_rename_opts": {
+ "value": "charges"
+ }
+ }
+ ]
+}
+~~~
+
+- `resource_specifier` configures the following options for transformation rules:
+ - `schema` specifies the schemas to be affected by the transformation rule, formatted as a POSIX regex string.
+ - `table` specifies the tables to be affected by the transformation rule, formatted as a POSIX regex string.
+- `column_exclusion_opts` configures the following options for column exclusions and computed columns:
+ - `column` specifies source columns to exclude from being mapped to regular columns on the target schema. It is formatted as a POSIX regex string.
+ - `add_computed_def`, when set to `true`, specifies that each matching `column` should be mapped to a [computed column]({% link {{ site.current_cloud_version }}/computed-columns.md %}) on the target schema. Instead of being moved from the source, the column data is generated on the target using [`ALTER TABLE ... ADD COLUMN`]({% link {{ site.current_cloud_version }}/alter-table.md %}#add-column) and the computed column definition from the source schema. This assumes that all matching columns are computed columns on the source.
+ {{site.data.alerts.callout_danger}}
+ Columns that match the `column` regex will **not** be moved to CockroachDB if `add_computed_def` is omitted or set to `false` (default), or if a matching column is a non-computed column.
+ {{site.data.alerts.end}}
+- `table_rename_opts` configures the following option for table renaming:
+ - `value` specifies the table name to which the matching `resource_specifier` is mapped. If only one source table matches `resource_specifier`, it is renamed to `table_rename_opts.value` on the target. If more than one table matches `resource_specifier` (i.e., an n-to-1 mapping), the fetch process assumes that all matching tables are [partitioned tables]({% link {{ site.current_cloud_version }}/partitioning.md %}) with the same schema, and moves their data to a table named `table_rename_opts.value` on the target. Otherwise, the process will error.
+
+ Additionally, in an n-to-1 mapping situation:
+
+ - Specify [`--use-copy`](#fetch-mode) or [`--direct-copy`](#direct-copy) mode for data movement. This is because the data from the source tables is loaded concurrently into the target table.
+ - Create the target table schema manually, and do **not** use [`--table-handling 'drop-on-target-and-recreate'`](#target-table-handling) for target table handling.
+
+The preceding JSON example therefore defines two rules:
+
+- Rule `1` maps all source `age` columns on the source database to [computed columns]({% link {{ site.current_cloud_version }}/computed-columns.md %}) on CockroachDB. This assumes that all matching `age` columns are defined as computed columns on the source.
+- Rule `2` maps all table names with prefix `charges_part` from the source database to a single `charges` table on CockroachDB (i.e., an n-to-1 mapping). This assumes that all matching `charges_part.*` tables have the same schema.
+
+Each rule is applied in the order it is defined. If two rules overlap, the later rule will override the earlier rule.
+
+To verify that the computed columns are being created:
+
+When running `molt fetch`, set `--logging 'debug'` and look for `ALTER TABLE ... ADD COLUMN` statements with the `STORED` or `VIRTUAL` keywords in the log output:
+
+~~~ json
+{"level":"debug","time":"2024-07-22T12:01:51-04:00","message":"running: ALTER TABLE IF EXISTS public.computed ADD COLUMN computed_col INT8 NOT NULL AS ((col1 + col2)) STORED"}
+~~~
+
+After running `molt fetch`, issue a `SHOW CREATE TABLE` statement on CockroachDB:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+SHOW CREATE TABLE computed;
+~~~
+
+~~~
+ table_name | create_statement
+-------------+-------------------------------------------------------------------
+ computed | CREATE TABLE public.computed (
+ ...
+ | computed_col INT8 NOT NULL AS (col1 + col2) STORED
+ | )
+~~~
+
### Fetch continuation
If MOLT Fetch fails while loading data into CockroachDB from intermediate files, it exits with an error message, fetch ID, and [continuation token](#list-active-continuation-tokens) for each table that failed to load on the target database. You can use this information to continue the process from the *continuation point* where it was interrupted. For an example, see [Continue fetch after encountering an error](#continue-fetch-after-encountering-an-error).
From 55649ce2ce05fbc79961593184d86ae799522f8a Mon Sep 17 00:00:00 2001
From: "Matt Linville (he/him)"
Date: Thu, 8 Aug 2024 13:32:23 -0700
Subject: [PATCH 07/15] [DOC-10894] Update docs for selecting a cluster version
(#18794)
* [DOC-10894] Update docs for selecting a cluster version
---
src/current/cockroachcloud/create-your-cluster.md | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)
diff --git a/src/current/cockroachcloud/create-your-cluster.md b/src/current/cockroachcloud/create-your-cluster.md
index ba8535cce8e..015d3408e68 100644
--- a/src/current/cockroachcloud/create-your-cluster.md
+++ b/src/current/cockroachcloud/create-your-cluster.md
@@ -153,20 +153,17 @@ The cluster is automatically given a randomly-generated name. If desired, change
## Step 8. Select the CockroachDB version
-When you create a new CockroachDB {{ site.data.products.dedicated }} cluster, it defaults to using the [latest CockroachDB {{ site.data.products.cloud }} production release]({% link releases/cloud.md %}) unless you select a release explicitly. Releases are rolled out gradually to CockroachDB {{ site.data.products.cloud }}. At any given time, you may be able to choose among two or more types of releases. In the list, releases are labeled according to their stability:
+When you create a new CockroachDB {{ site.data.products.dedicated }} cluster, it defaults to using the [latest CockroachDB {{ site.data.products.cloud }} production release]({% link releases/cloud.md %}) unless you select a release explicitly. Releases are rolled out gradually to CockroachDB {{ site.data.products.cloud }}. At any given time, you may be able to choose among multiple releases. In the list:
-- **Latest Stable**: The latest stable GA release is the default version and is suitable for production.
-- **Stable**: One or more stable releases may be listed at any given time. All listed releases that are not labeled **Pre-Production Preview** are stable releases suitable for production.
-- **Pre-Production Preview**: Prior to the GA release of a new CockroachDB major version, a series of Beta and Release Candidate (RC) releases may be made available for CockroachDB {{ site.data.products.dedicated }} as [Pre-Production Preview]({% link cockroachcloud/upgrade-policy.md %}#pre-production-preview-upgrades) releases. Pre-Production Preview releases are no longer available after the GA release of a major version.
+- **No label**: The latest patch of a Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}) that is not the most recent major version. A Regular release has full support for one year from its release date, after which a cluster must be [upgraded]({% link cockroachcloud/upgrade-policy.md %}) to maintain support.
+- **Latest**: The latest patch of the latest Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}). This is the default version for new clusters.
+- **Pre-Production Version**: A [Pre-Production Preview]({% link cockroachcloud/upgrade-policy.md %}#pre-production-preview-upgrades). Leading up to a new CockroachDB Regular [Production release]({% link cockroachcloud/upgrade-policy.md %}), a series of Beta and Release Candidate (RC) patches may be made available for CockroachDB {{ site.data.products.dedicated }} as Pre-Production Preview releases. Pre-Production Preview releases are not suitable for production environments. They are no longer available in CockroachDB {{ site.data.products.cloud }} for new clusters or upgrades after the new version is GA. When the GA release is available, a cluster running a Pre-Production Preview is automatically upgraded to the GA release and subsequent patches and is eligible for support.
{{site.data.alerts.callout_danger}}
Testing releases, including Pre-Production Preview releases, are provided for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.
{{site.data.alerts.end}}
-To select a version for your cluster:
-
-1. Under **Cluster Version**, click **More versions**.
-1. Select the cluster version from the **Cluster version** list.
+1. To choose a version for your cluster, select the cluster version from the **Cluster version** list.
After the cluster is created, patch releases within its major version are required and are applied automatically. If you install or upgrade to a Pre-Production Preview release, subsequent Pre-Production Preview patch releases, the GA release, and subsequent patches within the major version are applied automatically. To learn more, refer to the [CockroachDB Cloud Support and Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}).
From c887c17e435db0bada9e6dd07306b7669caf9d61 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 16:42:36 -0400
Subject: [PATCH 08/15] add missing known limitations (#18800)
* add missing known limitations
---------
Co-authored-by: Florence Morris
---
.../v24.1/known-limitations/plpgsql-limitations.md | 7 ++++++-
.../v24.1/known-limitations/read-committed-limitations.md | 2 +-
.../v24.1/known-limitations/routine-limitations.md | 4 +++-
.../v24.1/known-limitations/stored-proc-limitations.md | 3 ++-
.../_includes/v24.1/known-limitations/udf-limitations.md | 3 ++-
.../v24.2/known-limitations/plpgsql-limitations.md | 7 ++++++-
.../v24.2/known-limitations/read-committed-limitations.md | 4 ++--
.../v24.2/known-limitations/routine-limitations.md | 5 ++++-
.../v24.2/known-limitations/stored-proc-limitations.md | 3 ++-
.../_includes/v24.2/known-limitations/udf-limitations.md | 3 ++-
10 files changed, 30 insertions(+), 11 deletions(-)
diff --git a/src/current/_includes/v24.1/known-limitations/plpgsql-limitations.md b/src/current/_includes/v24.1/known-limitations/plpgsql-limitations.md
index 70b66dbd6a1..418cbfc8c66 100644
--- a/src/current/_includes/v24.1/known-limitations/plpgsql-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/plpgsql-limitations.md
@@ -17,4 +17,9 @@
- `NOT NULL` variable declarations are not supported. [#105243](https://github.com/cockroachdb/cockroach/issues/105243)
- Cursors opened in PL/pgSQL execute their queries on opening, affecting performance and resource usage. [#111479](https://github.com/cockroachdb/cockroach/issues/111479)
- Cursors in PL/pgSQL cannot be declared with arguments. [#117746](https://github.com/cockroachdb/cockroach/issues/117746)
-- `OPEN FOR EXECUTE` is not supported for opening cursors. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
\ No newline at end of file
+- `OPEN FOR EXECUTE` is not supported for opening cursors. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
+- The `print_strict_params` option is not supported in PL/pgSQL. [#123671](https://github.com/cockroachdb/cockroach/issues/123671)
+- The `FOUND` local variable, which checks whether a statement affected any rows, is not supported in PL/pgSQL. [#122306](https://github.com/cockroachdb/cockroach/issues/122306)
+- By default, when a PL/pgSQL variable conflicts with a column name, CockroachDB resolves the ambiguity by treating it as a column reference rather than a variable reference. This behavior differs from PostgreSQL, where an ambiguous column error is reported, and it is possible to change the `plpgsql.variable_conflict` setting in order to prefer either columns or variables. [#115680](https://github.com/cockroachdb/cockroach/issues/115680)
+- It is not possible to define a `RECORD`-returning PL/pgSQL function that returns different-typed expressions from different `RETURN` statements. CockroachDB requires a consistent return type for `RECORD`-returning functions. [#115384](https://github.com/cockroachdb/cockroach/issues/115384)
+- Variables cannot be declared with an associated collation using the `COLLATE` keyword. [#105245](https://github.com/cockroachdb/cockroach/issues/105245)
\ No newline at end of file
diff --git a/src/current/_includes/v24.1/known-limitations/read-committed-limitations.md b/src/current/_includes/v24.1/known-limitations/read-committed-limitations.md
index 5087e29ac00..c322ec2585c 100644
--- a/src/current/_includes/v24.1/known-limitations/read-committed-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/read-committed-limitations.md
@@ -1,5 +1,5 @@
- Schema changes (e.g., [`CREATE TABLE`]({% link {{ page.version.version }}/create-table.md %}), [`CREATE SCHEMA`]({% link {{ page.version.version }}/create-schema.md %}), [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %})) cannot be performed within explicit `READ COMMITTED` transactions, and will cause transactions to abort. As a workaround, [set the transaction's isolation level]({% link {{ page.version.version }}/read-committed.md %}#set-the-current-transaction-to-read-committed) to `SERIALIZABLE`. [#114778](https://github.com/cockroachdb/cockroach/issues/114778)
-- `READ COMMITTED` transactions performing `INSERT`, `UPDATE`, or `UPSERT` cannot access [`REGIONAL BY ROW`]({% link {{ page.version.version }}/table-localities.md %}#regional-by-row-tables) tables in which [`UNIQUE`]({% link {{ page.version.version }}/unique.md %}) and [`PRIMARY KEY`]({% link {{ page.version.version }}/primary-key.md %}) constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns.
+- `READ COMMITTED` transactions performing `INSERT`, `UPDATE`, or `UPSERT` cannot access [`REGIONAL BY ROW`]({% link {{ page.version.version }}/table-localities.md %}#regional-by-row-tables) tables in which [`UNIQUE`]({% link {{ page.version.version }}/unique.md %}) and [`PRIMARY KEY`]({% link {{ page.version.version }}/primary-key.md %}) constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns. [#110873](https://github.com/cockroachdb/cockroach/issues/110873)
- Multi-column-family checks during updates are not supported under `READ COMMITTED` isolation. [#112488](https://github.com/cockroachdb/cockroach/issues/112488)
- Because locks acquired by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks, [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}), and [`SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) are fully replicated under `READ COMMITTED` isolation, some queries experience a delay for Raft replication.
- [Foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks are not performed in parallel under `READ COMMITTED` isolation.
diff --git a/src/current/_includes/v24.1/known-limitations/routine-limitations.md b/src/current/_includes/v24.1/known-limitations/routine-limitations.md
index f122a76f25f..701ea79c75b 100644
--- a/src/current/_includes/v24.1/known-limitations/routine-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/routine-limitations.md
@@ -4,4 +4,6 @@
- Routines cannot be created with unnamed `INOUT` parameters. For example, `CREATE PROCEDURE p(INOUT INT) AS $$ BEGIN NULL; END; $$ LANGUAGE PLpgSQL;`. [#121251](https://github.com/cockroachdb/cockroach/issues/121251)
- Routines cannot be created if they return fewer columns than declared. For example, `CREATE FUNCTION f(OUT sum INT, INOUT a INT, INOUT b INT) LANGUAGE SQL AS $$ SELECT (a + b, b); $$;`. [#121247](https://github.com/cockroachdb/cockroach/issues/121247)
{% endif %}
-- DDL statements (e.g., `CREATE TABLE`, `CREATE INDEX`) are not allowed within UDFs or stored procedures. [#110080](https://github.com/cockroachdb/cockroach/issues/110080)
\ No newline at end of file
+- Routines cannot be created with an `OUT` parameter of type `RECORD`. [#123448](https://github.com/cockroachdb/cockroach/issues/123448)
+- DDL statements (e.g., `CREATE TABLE`, `CREATE INDEX`) are not allowed within UDFs or stored procedures. [#110080](https://github.com/cockroachdb/cockroach/issues/110080)
+- Polymorphic types cannot be cast to other types (e.g., `TEXT`) within routine parameters. [#123536](https://github.com/cockroachdb/cockroach/issues/123536)
\ No newline at end of file
diff --git a/src/current/_includes/v24.1/known-limitations/stored-proc-limitations.md b/src/current/_includes/v24.1/known-limitations/stored-proc-limitations.md
index ec0fa31c9b8..70d8cd71791 100644
--- a/src/current/_includes/v24.1/known-limitations/stored-proc-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/stored-proc-limitations.md
@@ -1,2 +1,3 @@
{% if page.name != "known-limitations.md" # New limitations in v24.1 %}
-{% endif %}
\ No newline at end of file
+{% endif %}
+- `COMMIT` and `ROLLBACK` statements are not supported within nested procedures. [#122266](https://github.com/cockroachdb/cockroach/issues/122266)
\ No newline at end of file
diff --git a/src/current/_includes/v24.1/known-limitations/udf-limitations.md b/src/current/_includes/v24.1/known-limitations/udf-limitations.md
index 7064acb8775..7555fde890a 100644
--- a/src/current/_includes/v24.1/known-limitations/udf-limitations.md
+++ b/src/current/_includes/v24.1/known-limitations/udf-limitations.md
@@ -6,4 +6,5 @@
- Views. [#87699](https://github.com/cockroachdb/cockroach/issues/87699)
- User-defined functions cannot call themselves recursively. [#93049](https://github.com/cockroachdb/cockroach/issues/93049)
- [Common table expressions]({% link {{ page.version.version }}/common-table-expressions.md %}) (CTE), recursive or non-recursive, are not supported in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) (UDF). That is, you cannot use a `WITH` clause in the body of a UDF. [#92961](https://github.com/cockroachdb/cockroach/issues/92961)
-- The `setval` function cannot be resolved when used inside UDF bodies. [#110860](https://github.com/cockroachdb/cockroach/issues/110860)
\ No newline at end of file
+- The `setval` function cannot be resolved when used inside UDF bodies. [#110860](https://github.com/cockroachdb/cockroach/issues/110860)
+- Casting subqueries to [user-defined types]({% link {{ page.version.version }}/create-type.md %}) in UDFs is not supported. [#108184](https://github.com/cockroachdb/cockroach/issues/108184)
\ No newline at end of file
diff --git a/src/current/_includes/v24.2/known-limitations/plpgsql-limitations.md b/src/current/_includes/v24.2/known-limitations/plpgsql-limitations.md
index 444943636c0..62e78c1dafd 100644
--- a/src/current/_includes/v24.2/known-limitations/plpgsql-limitations.md
+++ b/src/current/_includes/v24.2/known-limitations/plpgsql-limitations.md
@@ -17,4 +17,9 @@
- `NOT NULL` variable declarations are not supported. [#105243](https://github.com/cockroachdb/cockroach/issues/105243)
- Cursors opened in PL/pgSQL execute their queries on opening, affecting performance and resource usage. [#111479](https://github.com/cockroachdb/cockroach/issues/111479)
- Cursors in PL/pgSQL cannot be declared with arguments. [#117746](https://github.com/cockroachdb/cockroach/issues/117746)
-- `OPEN FOR EXECUTE` is not supported for opening cursors. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
\ No newline at end of file
+- `OPEN FOR EXECUTE` is not supported for opening cursors. [#117744](https://github.com/cockroachdb/cockroach/issues/117744)
+- The `print_strict_params` option is not supported in PL/pgSQL. [#123671](https://github.com/cockroachdb/cockroach/issues/123671)
+- The `FOUND` local variable, which checks whether a statement affected any rows, is not supported in PL/pgSQL. [#122306](https://github.com/cockroachdb/cockroach/issues/122306)
+- By default, when a PL/pgSQL variable conflicts with a column name, CockroachDB resolves the ambiguity by treating it as a column reference rather than a variable reference. This behavior differs from PostgreSQL, where an ambiguous column error is reported, and it is possible to change the `plpgsql.variable_conflict` setting in order to prefer either columns or variables. [#115680](https://github.com/cockroachdb/cockroach/issues/115680)
+- It is not possible to define a `RECORD`-returning PL/pgSQL function that returns different-typed expressions from different `RETURN` statements. CockroachDB requires a consistent return type for `RECORD`-returning functions. [#115384](https://github.com/cockroachdb/cockroach/issues/115384)
+- Variables cannot be declared with an associated collation using the `COLLATE` keyword. [#105245](https://github.com/cockroachdb/cockroach/issues/105245)
\ No newline at end of file
diff --git a/src/current/_includes/v24.2/known-limitations/read-committed-limitations.md b/src/current/_includes/v24.2/known-limitations/read-committed-limitations.md
index 5087e29ac00..947cf56814d 100644
--- a/src/current/_includes/v24.2/known-limitations/read-committed-limitations.md
+++ b/src/current/_includes/v24.2/known-limitations/read-committed-limitations.md
@@ -1,5 +1,5 @@
-- Schema changes (e.g., [`CREATE TABLE`]({% link {{ page.version.version }}/create-table.md %}), [`CREATE SCHEMA`]({% link {{ page.version.version }}/create-schema.md %}), [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %})) cannot be performed within explicit `READ COMMITTED` transactions, and will cause transactions to abort. As a workaround, [set the transaction's isolation level]({% link {{ page.version.version }}/read-committed.md %}#set-the-current-transaction-to-read-committed) to `SERIALIZABLE`. [#114778](https://github.com/cockroachdb/cockroach/issues/114778)
-- `READ COMMITTED` transactions performing `INSERT`, `UPDATE`, or `UPSERT` cannot access [`REGIONAL BY ROW`]({% link {{ page.version.version }}/table-localities.md %}#regional-by-row-tables) tables in which [`UNIQUE`]({% link {{ page.version.version }}/unique.md %}) and [`PRIMARY KEY`]({% link {{ page.version.version }}/primary-key.md %}) constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns.
+- Schema changes (e.g., [`CREATE TABLE`]({% link {{ page.version.version }}/create-table.md %}), [`CREATE SCHEMA`]({% link {{ page.version.version }}/create-schema.md %}), [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %})) cannot be performed within explicit `READ COMMITTED` transactions when the [`autocommit_before_ddl` session setting]({% link {{page.version.version}}/set-vars.md %}#autocommit-before-ddl) is set to `off`, and will cause transactions to abort. As a workaround, [set the transaction's isolation level]({% link {{ page.version.version }}/read-committed.md %}#set-the-current-transaction-to-read-committed) to `SERIALIZABLE`. [#114778](https://github.com/cockroachdb/cockroach/issues/114778)
+- `READ COMMITTED` transactions performing `INSERT`, `UPDATE`, or `UPSERT` cannot access [`REGIONAL BY ROW`]({% link {{ page.version.version }}/table-localities.md %}#regional-by-row-tables) tables in which [`UNIQUE`]({% link {{ page.version.version }}/unique.md %}) and [`PRIMARY KEY`]({% link {{ page.version.version }}/primary-key.md %}) constraints exist, the region is not included in the constraint, and the region cannot be computed from the constraint columns. [#110873](https://github.com/cockroachdb/cockroach/issues/110873)
- Multi-column-family checks during updates are not supported under `READ COMMITTED` isolation. [#112488](https://github.com/cockroachdb/cockroach/issues/112488)
- Because locks acquired by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks, [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}), and [`SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) are fully replicated under `READ COMMITTED` isolation, some queries experience a delay for Raft replication.
- [Foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks are not performed in parallel under `READ COMMITTED` isolation.
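A minimal sketch of the `SERIALIZABLE` workaround for the schema-change limitation listed above; the table and column are illustrative:

```sql
-- Run the DDL under SERIALIZABLE isolation instead of READ COMMITTED.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE INDEX ON accounts (email);   -- hypothetical table and column
COMMIT;
```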
diff --git a/src/current/_includes/v24.2/known-limitations/routine-limitations.md b/src/current/_includes/v24.2/known-limitations/routine-limitations.md
index 6a1394b55fa..4718c6c7abf 100644
--- a/src/current/_includes/v24.2/known-limitations/routine-limitations.md
+++ b/src/current/_includes/v24.2/known-limitations/routine-limitations.md
@@ -4,4 +4,7 @@
- Routines cannot be created if they reference temporary tables. [#121375](https://github.com/cockroachdb/cockroach/issues/121375)
- Routines cannot be created with unnamed `INOUT` parameters. For example, `CREATE PROCEDURE p(INOUT INT) AS $$ BEGIN NULL; END; $$ LANGUAGE PLpgSQL;`. [#121251](https://github.com/cockroachdb/cockroach/issues/121251)
- Routines cannot be created if they return fewer columns than declared. For example, `CREATE FUNCTION f(OUT sum INT, INOUT a INT, INOUT b INT) LANGUAGE SQL AS $$ SELECT (a + b, b); $$;`. [#121247](https://github.com/cockroachdb/cockroach/issues/121247)
-- DDL statements (e.g., `CREATE TABLE`, `CREATE INDEX`) are not allowed within UDFs or stored procedures. [#110080](https://github.com/cockroachdb/cockroach/issues/110080)
\ No newline at end of file
+- Routines cannot be created with an `OUT` parameter of type `RECORD`. [#123448](https://github.com/cockroachdb/cockroach/issues/123448)
+- DDL statements (e.g., `CREATE TABLE`, `CREATE INDEX`) are not allowed within UDFs or stored procedures. [#110080](https://github.com/cockroachdb/cockroach/issues/110080)
+- Polymorphic types cannot be cast to other types (e.g., `TEXT`) within routine parameters. [#123536](https://github.com/cockroachdb/cockroach/issues/123536)
+- Routine parameters and return types cannot be declared using the `ANYENUM` polymorphic type, which is able to match any [`ENUM`]({% link {{ page.version.version }}/enum.md %}) type. [#123048](https://github.com/cockroachdb/cockroach/issues/123048)
\ No newline at end of file
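As a sketch of the `OUT`-parameter limitation listed above, a definition along these lines is expected to be rejected (the name and body are illustrative):

```sql
-- Not supported: an OUT parameter declared with the RECORD type.
CREATE FUNCTION make_pair(OUT result RECORD) LANGUAGE SQL AS $$
  SELECT (1, 'a');
$$;
```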
diff --git a/src/current/_includes/v24.2/known-limitations/stored-proc-limitations.md b/src/current/_includes/v24.2/known-limitations/stored-proc-limitations.md
index db976be3c63..b2ba1b61562 100644
--- a/src/current/_includes/v24.2/known-limitations/stored-proc-limitations.md
+++ b/src/current/_includes/v24.2/known-limitations/stored-proc-limitations.md
@@ -1,2 +1,3 @@
{% if page.name != "known-limitations.md" # New limitations in v24.2 %}
-{% endif %}
\ No newline at end of file
+{% endif %}
+- `COMMIT` and `ROLLBACK` statements are not supported within nested procedures. [#122266](https://github.com/cockroachdb/cockroach/issues/122266)
\ No newline at end of file
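A sketch of the nested-procedure limitation above; the procedure names are illustrative, and the inner `COMMIT` is expected to error when reached through `CALL p_outer()`:

```sql
CREATE PROCEDURE p_inner() AS $$
BEGIN
  COMMIT;            -- transaction control inside a nested call is not supported
END;
$$ LANGUAGE PLpgSQL;

CREATE PROCEDURE p_outer() AS $$
BEGIN
  CALL p_inner();    -- nests p_inner, so its COMMIT is rejected
END;
$$ LANGUAGE PLpgSQL;

CALL p_outer();
```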
diff --git a/src/current/_includes/v24.2/known-limitations/udf-limitations.md b/src/current/_includes/v24.2/known-limitations/udf-limitations.md
index 8903180ded4..57011914407 100644
--- a/src/current/_includes/v24.2/known-limitations/udf-limitations.md
+++ b/src/current/_includes/v24.2/known-limitations/udf-limitations.md
@@ -6,4 +6,5 @@
- Views. [#87699](https://github.com/cockroachdb/cockroach/issues/87699)
- User-defined functions cannot call themselves recursively. [#93049](https://github.com/cockroachdb/cockroach/issues/93049)
- [Common table expressions]({% link {{ page.version.version }}/common-table-expressions.md %}) (CTE), recursive or non-recursive, are not supported in [user-defined functions]({% link {{ page.version.version }}/user-defined-functions.md %}) (UDF). That is, you cannot use a `WITH` clause in the body of a UDF. [#92961](https://github.com/cockroachdb/cockroach/issues/92961)
-- The `setval` function cannot be resolved when used inside UDF bodies. [#110860](https://github.com/cockroachdb/cockroach/issues/110860)
\ No newline at end of file
+- The `setval` function cannot be resolved when used inside UDF bodies. [#110860](https://github.com/cockroachdb/cockroach/issues/110860)
+- Casting subqueries to [user-defined types]({% link {{ page.version.version }}/create-type.md %}) in UDFs is not supported. [#108184](https://github.com/cockroachdb/cockroach/issues/108184)
\ No newline at end of file
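A sketch of the CTE limitation listed above; the body is illustrative, and the `WITH` clause is what causes the definition to be rejected:

```sql
CREATE FUNCTION one_from_cte() RETURNS INT LANGUAGE SQL AS $$
  WITH w AS (SELECT 1 AS x)   -- WITH clauses are not supported in UDF bodies
  SELECT x FROM w;
$$;
```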
From 49dae2687a080b314f6f068836e6789e9ce98670 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 16:47:58 -0400
Subject: [PATCH 09/15] generic query plans are in preview (#18805)
---
src/current/v24.2/cockroachdb-feature-availability.md | 4 ++++
src/current/v24.2/cost-based-optimizer.md | 2 +-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/src/current/v24.2/cockroachdb-feature-availability.md b/src/current/v24.2/cockroachdb-feature-availability.md
index a7bcedd54a4..47ae6d9d55f 100644
--- a/src/current/v24.2/cockroachdb-feature-availability.md
+++ b/src/current/v24.2/cockroachdb-feature-availability.md
@@ -46,6 +46,10 @@ Any feature made available in a phase prior to GA is provided without any warran
**The following features are in preview** and are subject to change. To share feedback and/or issues, contact [Support](https://support.cockroachlabs.com/hc).
{{site.data.alerts.end}}
+### Generic query plans
+
+[Generic query plans]({% link {{ page.version.version }}/cost-based-optimizer.md %}#query-plan-type) are generated and optimized once without considering specific placeholder values, and are not regenerated on subsequent executions, unless the plan becomes stale due to [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) or new [table statistics]({% link {{ page.version.version }}/cost-based-optimizer.md %}#table-statistics) and must be re-optimized. This approach eliminates most of the query latency attributed to planning.
+
### CockroachDB Cloud Folders
[Organizing CockroachDB {{ site.data.products.cloud }} clusters using folders]({% link cockroachcloud/folders.md %}) is in preview. Folders allow you to organize and manage access to your clusters according to your organization's requirements. For example, you can create top-level folders for each business unit in your organization, and within those folders, organize clusters by geographic location and then by level of maturity, such as production, staging, and testing.
diff --git a/src/current/v24.2/cost-based-optimizer.md b/src/current/v24.2/cost-based-optimizer.md
index 2c162789a70..01040b63996 100644
--- a/src/current/v24.2/cost-based-optimizer.md
+++ b/src/current/v24.2/cost-based-optimizer.md
@@ -302,7 +302,7 @@ The following types of plans can be cached:
- *Custom* query plans are generated for a given query structure and optimized for specific placeholder values, and are re-optimized on subsequent executions. By default, the optimizer uses custom plans.
- {% include_cached new-in.html version="v24.2" %} *Generic* query plans are generated and optimized once without considering specific placeholder values, and are **not** regenerated on subsequent executions, unless the plan becomes stale due to [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) or new [table statistics](#table-statistics) and must be re-optimized. This approach eliminates most of the query latency attributed to planning.
- Generic query plans require an [Enterprise license]({% link {{ page.version.version }}/enterprise-licensing.md %}).
+ Generic query plans require an [Enterprise license]({% link {{ page.version.version }}/enterprise-licensing.md %}). This feature is in [preview]({% link {{ page.version.version }}/cockroachdb-feature-availability.md %}) and is subject to change.
{{site.data.alerts.callout_success}}
Generic query plans will only benefit workloads that use prepared statements, which are issued via explicit `PREPARE` statements or by client libraries using the [PostgreSQL extended wire protocol](https://www.postgresql.org/docs/current/protocol-flow.html#PROTOCOL-FLOW-EXT-QUERY). Generic query plans are most beneficial for queries with high planning times, such as queries with many [joins]({% link {{ page.version.version }}/joins.md %}). For more information on reducing planning time for such queries, refer to [Reduce planning time for queries with many joins](#reduce-planning-time-for-queries-with-many-joins).
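As the callout above notes, generic plans apply only to prepared statements. A minimal sketch of an explicitly prepared statement that could benefit; the table and predicates are illustrative:

```sql
-- Prepared once, then executed with different placeholder values.
PREPARE find_orders AS
  SELECT * FROM orders WHERE customer_id = $1 AND status = $2;

EXECUTE find_orders(42, 'shipped');
EXECUTE find_orders(7, 'pending');
```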
From 3c1be1c266a277073a3e2e70777b82fd16e58f68 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 17:05:14 -0400
Subject: [PATCH 10/15] remove erroneous front-matter field (#18804)
---
src/current/v22.2/user-defined-functions.md | 1 -
src/current/v23.1/user-defined-functions.md | 1 -
src/current/v23.2/plpgsql.md | 1 -
src/current/v23.2/stored-procedures.md | 1 -
src/current/v23.2/user-defined-functions.md | 1 -
src/current/v24.1/plpgsql.md | 1 -
src/current/v24.1/stored-procedures.md | 1 -
src/current/v24.1/user-defined-functions.md | 1 -
src/current/v24.2/plpgsql.md | 1 -
src/current/v24.2/stored-procedures.md | 1 -
src/current/v24.2/user-defined-functions.md | 1 -
11 files changed, 11 deletions(-)
diff --git a/src/current/v22.2/user-defined-functions.md b/src/current/v22.2/user-defined-functions.md
index 2b622053857..c83d9afa79f 100644
--- a/src/current/v22.2/user-defined-functions.md
+++ b/src/current/v22.2/user-defined-functions.md
@@ -2,7 +2,6 @@
title: User-Defined Functions
summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v23.1/user-defined-functions.md b/src/current/v23.1/user-defined-functions.md
index 9dfe7a5e78b..a8396fed7c5 100644
--- a/src/current/v23.1/user-defined-functions.md
+++ b/src/current/v23.1/user-defined-functions.md
@@ -2,7 +2,6 @@
title: User-Defined Functions
summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v23.2/plpgsql.md b/src/current/v23.2/plpgsql.md
index 2d3cbc75312..811526bb334 100644
--- a/src/current/v23.2/plpgsql.md
+++ b/src/current/v23.2/plpgsql.md
@@ -2,7 +2,6 @@
title: PL/pgSQL
summary: PL/pgSQL is a procedural language that you can use within user-defined functions and stored procedures.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v23.2/stored-procedures.md b/src/current/v23.2/stored-procedures.md
index f7781db37c1..f4cec303a59 100644
--- a/src/current/v23.2/stored-procedures.md
+++ b/src/current/v23.2/stored-procedures.md
@@ -2,7 +2,6 @@
title: Stored Procedures
summary: A stored procedure consists of PL/pgSQL or SQL statements that can be issued with a single call.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v23.2/user-defined-functions.md b/src/current/v23.2/user-defined-functions.md
index 3504b849b82..fb0684166e3 100644
--- a/src/current/v23.2/user-defined-functions.md
+++ b/src/current/v23.2/user-defined-functions.md
@@ -2,7 +2,6 @@
title: User-Defined Functions
summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.1/plpgsql.md b/src/current/v24.1/plpgsql.md
index 0d028e19419..189fa85c0c4 100644
--- a/src/current/v24.1/plpgsql.md
+++ b/src/current/v24.1/plpgsql.md
@@ -2,7 +2,6 @@
title: PL/pgSQL
summary: PL/pgSQL is a procedural language that you can use within user-defined functions and stored procedures.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.1/stored-procedures.md b/src/current/v24.1/stored-procedures.md
index 58ab75a9995..753a650e783 100644
--- a/src/current/v24.1/stored-procedures.md
+++ b/src/current/v24.1/stored-procedures.md
@@ -2,7 +2,6 @@
title: Stored Procedures
summary: A stored procedure consists of PL/pgSQL or SQL statements that can be issued with a single call.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.1/user-defined-functions.md b/src/current/v24.1/user-defined-functions.md
index bbec370c7de..fbee3e34b91 100644
--- a/src/current/v24.1/user-defined-functions.md
+++ b/src/current/v24.1/user-defined-functions.md
@@ -2,7 +2,6 @@
title: User-Defined Functions
summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.2/plpgsql.md b/src/current/v24.2/plpgsql.md
index 0d028e19419..189fa85c0c4 100644
--- a/src/current/v24.2/plpgsql.md
+++ b/src/current/v24.2/plpgsql.md
@@ -2,7 +2,6 @@
title: PL/pgSQL
summary: PL/pgSQL is a procedural language that you can use within user-defined functions and stored procedures.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.2/stored-procedures.md b/src/current/v24.2/stored-procedures.md
index 58ab75a9995..753a650e783 100644
--- a/src/current/v24.2/stored-procedures.md
+++ b/src/current/v24.2/stored-procedures.md
@@ -2,7 +2,6 @@
title: Stored Procedures
summary: A stored procedure consists of PL/pgSQL or SQL statements that can be issued with a single call.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
diff --git a/src/current/v24.2/user-defined-functions.md b/src/current/v24.2/user-defined-functions.md
index bbec370c7de..fbee3e34b91 100644
--- a/src/current/v24.2/user-defined-functions.md
+++ b/src/current/v24.2/user-defined-functions.md
@@ -2,7 +2,6 @@
title: User-Defined Functions
summary: A user-defined function is a named function defined at the database level that can be called in queries and other contexts.
toc: true
-key: sql-expressions.html
docs_area: reference.sql
---
From 87c34322dd634d371dd1299359f52b2f9c9b9908 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 17:24:45 -0400
Subject: [PATCH 11/15] add VECTOR to feature availability page (#18806)
---
src/current/v24.2/cockroachdb-feature-availability.md | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/current/v24.2/cockroachdb-feature-availability.md b/src/current/v24.2/cockroachdb-feature-availability.md
index 47ae6d9d55f..241767ac582 100644
--- a/src/current/v24.2/cockroachdb-feature-availability.md
+++ b/src/current/v24.2/cockroachdb-feature-availability.md
@@ -50,6 +50,10 @@ Any feature made available in a phase prior to GA is provided without any warran
[Generic query plans]({% link {{ page.version.version }}/cost-based-optimizer.md %}#query-plan-type) are generated and optimized once without considering specific placeholder values, and are not regenerated on subsequent executions, unless the plan becomes stale due to [schema changes]({% link {{ page.version.version }}/online-schema-changes.md %}) or new [table statistics]({% link {{ page.version.version }}/cost-based-optimizer.md %}#table-statistics) and must be re-optimized. This approach eliminates most of the query latency attributed to planning.
+### Vector search
+
+The [`VECTOR`]({% link {{ page.version.version }}/vector.md %}) data type stores fixed-length arrays of floating-point numbers, which represent data points in multi-dimensional space. Vector search is often used in AI applications such as Large Language Models (LLMs) that rely on vector representations.
+
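A minimal sketch of the `VECTOR` type described above; the column width, values, and the pgvector-style `<->` distance operator are assumptions for illustration:

```sql
CREATE TABLE docs (
  id INT PRIMARY KEY,
  embedding VECTOR(3)                     -- three-dimensional embedding (illustrative width)
);

INSERT INTO docs VALUES (1, '[0.10, 0.85, 0.05]'), (2, '[0.90, 0.05, 0.05]');

-- Nearest row by L2 distance; the <-> operator is assumed from pgvector compatibility.
SELECT id FROM docs ORDER BY embedding <-> '[0.15, 0.80, 0.05]' LIMIT 1;
```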
### CockroachDB Cloud Folders
[Organizing CockroachDB {{ site.data.products.cloud }} clusters using folders]({% link cockroachcloud/folders.md %}) is in preview. Folders allow you to organize and manage access to your clusters according to your organization's requirements. For example, you can create top-level folders for each business unit in your organization, and within those folders, organize clusters by geographic location and then by level of maturity, such as production, staging, and testing.
From e1fd4bde96a3be4f3d6ceea0c2150ff011b7f265 Mon Sep 17 00:00:00 2001
From: Ryan Kuo <8740013+taroface@users.noreply.github.com>
Date: Thu, 8 Aug 2024 18:31:27 -0400
Subject: [PATCH 12/15] add MOLT 1.1.4 release notes (#18807)
---
src/current/releases/molt.md | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/src/current/releases/molt.md b/src/current/releases/molt.md
index 9bd037bf800..d75ed291635 100644
--- a/src/current/releases/molt.md
+++ b/src/current/releases/molt.md
@@ -36,7 +36,18 @@ For more information, refer to [Configuration]({% link molt/live-migration-servi
+## August 8, 2024
+
+MOLT Fetch/Verify 1.1.4 is [available](#installation).
+
+- Added a replication-only mode for Fetch that allows the user to run ongoing replication without schema creation or initial data load. This requires users to set `--mode replicator_only` and `--replicator-args` to specify the `defaultGTIDSet` ([MySQL](https://github.com/cockroachdb/replicator/wiki/MYLogical)) or `slotName` ([PostgreSQL](https://github.com/cockroachdb/replicator/wiki/PGLogical)).
+- Partitioned tables can now be mapped to renamed tables on the target database, using the Fetch [transformations framework]({% link molt/molt-fetch.md %}#transformations).
+- Added a new `--metrics-scrape-interval` flag that allows users to specify their Prometheus scrape interval; a sleep is applied at the end of the run so that the final metrics can be scraped.
+- Previously, there was a mismatch between the errors logged in log lines and those recorded in the exceptions table when an `IMPORT INTO` or `COPY FROM` operation failed due to a non-PostgreSQL error. Now, all errors lead to an exceptions table entry that allows the user to resume from a specific table's file.
+- Fixed a bug that prevented Fetch from properly determining a GTID when there are multiple `source_uuids`.
+
## July 31, 2024
+
MOLT Fetch/Verify 1.1.3 is [available](#installation).
- `'infinity'::timestamp` values can now be moved with Fetch.
@@ -49,8 +60,8 @@ MOLT Fetch/Verify 1.1.2 is [available](#installation).
- Fetch users can now specify columns to exclude from table migrations in order to migrate a subset of their data. This is supported in the schema creation, export, import, and direct copy phases.
- Fetch now automatically maps a partitioned table from a PostgreSQL source to the target CockroachDB schema.
-- Fetch now supports computed column mappings via a new transformations framework.
-- The new Fetch `--transformations-file` flag specifies a JSON file for schema/table/column transformations, which has validation utilities built in.
+- Fetch now supports column exclusions and computed column mappings via a new [transformations framework]({% link molt/molt-fetch.md %}#transformations).
+- The new Fetch [`--transformations-file`]({% link molt/molt-fetch.md %}#global-flags) flag specifies a JSON file for schema/table/column transformations, which has validation utilities built in.
## July 10, 2024
From 82c7036507e82d76b8af933ec9adef4b55e4bd37 Mon Sep 17 00:00:00 2001
From: "Matt Linville (he/him)"
Date: Fri, 9 Aug 2024 14:37:19 -0700
Subject: [PATCH 13/15] [DOC-10637] Update CV setting and metric references for
v24.2 (#18796)
* [DOC-10637] Update CV setting and metric references for v24.2
---
.../cluster-virtualization-metric-scopes.md | 246 +++++++++++++++---
.../cluster-virtualization-setting-scopes.md | 94 ++++---
2 files changed, 273 insertions(+), 67 deletions(-)
diff --git a/src/current/v24.2/cluster-virtualization-metric-scopes.md b/src/current/v24.2/cluster-virtualization-metric-scopes.md
index 779890abce0..b01bc30cf4f 100644
--- a/src/current/v24.2/cluster-virtualization-metric-scopes.md
+++ b/src/current/v24.2/cluster-virtualization-metric-scopes.md
@@ -17,10 +17,11 @@ When [cluster virtualization]({% link {{ page.version.version }}/cluster-virtual
- When a metric is scoped to the system virtual cluster, it is included only in the metrics for the system virtual cluster. These metrics provide information about the underlying CockroachDB cluster's performance. Refer to [Metrics scoped to the system virtual cluster](#metrics-scoped-to-the-system-virtual-cluster).
{% comment %}
-Src: cockroach gen metrics-list against v23.2.0-rc.2
+Src: cockroach gen metrics-list --format=csv against cockroach-v24.2.0-rc.1.darwin-10.9-amd64
+
Also saved in https://docs.google.com/spreadsheets/d/1HIalzAhwU0CEYzSuG2m1aXSJRpiIyQPJdt8SusHpJ_U/edit?usp=sharing
-(shared CRL-internal). There is a filter-view on the STORAGE column:
+(shared CRL-internal). Sort by Layer, then Metric. Paste into the correct section below.
APPLICATION: Scoped to a virtual cluster
STORAGE: Scoped to the system virtual cluster
@@ -32,11 +33,14 @@ SERVER: n/a
{% comment %}LAYER=APPLICATION{% endcomment %}
- `backup.last-failed-time.kms-inaccessible`
+- `build.timestamp`
- `changefeed.admit_latency`
- `changefeed.aggregator_progress`
- `changefeed.backfill_count`
- `changefeed.backfill_pending_ranges`
- `changefeed.batch_reduction_count`
+- `changefeed.buffer_entries_mem.acquired`
+- `changefeed.buffer_entries_mem.released`
- `changefeed.buffer_entries.allocated_mem`
- `changefeed.buffer_entries.flush`
- `changefeed.buffer_entries.in`
@@ -44,26 +48,26 @@ SERVER: n/a
- `changefeed.buffer_entries.out`
- `changefeed.buffer_entries.released`
- `changefeed.buffer_entries.resolved`
-- `changefeed.buffer_entries_mem.acquired`
-- `changefeed.buffer_entries_mem.released`
- `changefeed.buffer_pushback_nanos`
- `changefeed.bytes.messages_pushback_nanos`
- `changefeed.checkpoint_hist_nanos`
- `changefeed.checkpoint_progress`
- `changefeed.cloudstorage_buffered_bytes`
- `changefeed.commit_latency`
+- `changefeed.emitted_batch_sizes`
- `changefeed.emitted_bytes`
- `changefeed.emitted_messages`
- `changefeed.error_retries`
- `changefeed.failures`
- `changefeed.filtered_messages`
-- `changefeed.flush.messages_pushback_nanos`
- `changefeed.flush_hist_nanos`
+- `changefeed.flush.messages_pushback_nanos`
- `changefeed.flushed_bytes`
- `changefeed.flushes`
- `changefeed.forwarded_resolved_messages`
- `changefeed.frontier_updates`
- `changefeed.internal_retry_message_count`
+- `changefeed.kafka_throttling_hist_nanos`
- `changefeed.lagging_ranges`
- `changefeed.max_behind_nanos`
- `changefeed.message_size_hist`
@@ -71,7 +75,10 @@ SERVER: n/a
- `changefeed.nprocs_consume_event_nanos`
- `changefeed.nprocs_flush_nanos`
- `changefeed.nprocs_in_flight_count`
+- `changefeed.parallel_io_in_flight_keys`
+- `changefeed.parallel_io_pending_rows`
- `changefeed.parallel_io_queue_nanos`
+- `changefeed.parallel_io_result_queue_nanos`
- `changefeed.queue_time_nanos`
- `changefeed.running`
- `changefeed.schema_registry.registrations`
@@ -81,10 +88,22 @@ SERVER: n/a
- `changefeed.sink_batch_hist_nanos`
- `changefeed.sink_io_inflight`
- `changefeed.size_based_flushes`
+- `changefeed.usage.error_count`
+- `changefeed.usage.query_duration`
+- `changefeed.usage.table_bytes`
- `clock-offset.meannanos`
- `clock-offset.stddevnanos`
+- `cloud.conns_opened`
+- `cloud.conns_reused`
+- `cloud.listing_results`
+- `cloud.listings`
+- `cloud.open_readers`
+- `cloud.open_writers`
- `cloud.read_bytes`
+- `cloud.readers_opened`
+- `cloud.tls_handshakes`
- `cloud.write_bytes`
+- `cloud.writers_opened`
- `cluster.preserve-downgrade-option.last-updated`
- `distsender.batch_requests.cross_region.bytes`
- `distsender.batch_requests.cross_zone.bytes`
@@ -96,12 +115,22 @@ SERVER: n/a
- `distsender.batches.async.sent`
- `distsender.batches.async.throttled`
- `distsender.batches.partial`
+- `distsender.circuit_breaker.replicas.count`
+- `distsender.circuit_breaker.replicas.probes.failure`
+- `distsender.circuit_breaker.replicas.probes.running`
+- `distsender.circuit_breaker.replicas.probes.success`
+- `distsender.circuit_breaker.replicas.requests.cancelled`
+- `distsender.circuit_breaker.replicas.requests.rejected`
+- `distsender.circuit_breaker.replicas.tripped`
+- `distsender.circuit_breaker.replicas.tripped_events`
- `distsender.errors.inleasetransferbackoffs`
- `distsender.errors.notleaseholder`
- `distsender.rangefeed.catchup_ranges`
- `distsender.rangefeed.error_catchup_ranges`
+- `distsender.rangefeed.local_ranges`
- `distsender.rangefeed.restart_ranges`
- `distsender.rangefeed.retry.logical_ops_missing`
+- `distsender.rangefeed.retry.manual_range_split`
- `distsender.rangefeed.retry.no_leaseholder`
- `distsender.rangefeed.retry.node_not_found`
- `distsender.rangefeed.retry.raft_snapshot`
@@ -114,7 +143,6 @@ SERVER: n/a
- `distsender.rangefeed.retry.send`
- `distsender.rangefeed.retry.slow_consumer`
- `distsender.rangefeed.retry.store_not_found`
-- `distsender.rangefeed.retry.stuck`
- `distsender.rangefeed.total_ranges`
- `distsender.rangelookups`
- `distsender.rpc.addsstable.sent`
@@ -162,6 +190,7 @@ SERVER: n/a
- `distsender.rpc.err.notleaseholdererrtype`
- `distsender.rpc.err.oprequirestxnerrtype`
- `distsender.rpc.err.optimisticevalconflictserrtype`
+- `distsender.rpc.err.proxyfailederrtype`
- `distsender.rpc.err.raftgroupdeletederrtype`
- `distsender.rpc.err.rangefeedretryerrtype`
- `distsender.rpc.err.rangekeymismatcherrtype`
@@ -170,6 +199,7 @@ SERVER: n/a
- `distsender.rpc.err.refreshfailederrtype`
- `distsender.rpc.err.replicacorruptionerrtype`
- `distsender.rpc.err.replicatooolderrtype`
+- `distsender.rpc.err.replicaunavailableerrtype`
- `distsender.rpc.err.storenotfounderrtype`
- `distsender.rpc.err.transactionabortederrtype`
- `distsender.rpc.err.transactionpusherrtype`
@@ -188,9 +218,14 @@ SERVER: n/a
- `distsender.rpc.initput.sent`
- `distsender.rpc.isspanempty.sent`
- `distsender.rpc.leaseinfo.sent`
+- `distsender.rpc.linkexternalsstable.sent`
- `distsender.rpc.merge.sent`
- `distsender.rpc.migrate.sent`
- `distsender.rpc.probe.sent`
+- `distsender.rpc.proxy.err`
+- `distsender.rpc.proxy.forward.err`
+- `distsender.rpc.proxy.forward.sent`
+- `distsender.rpc.proxy.sent`
- `distsender.rpc.pushtxn.sent`
- `distsender.rpc.put.sent`
- `distsender.rpc.queryintent.sent`
@@ -215,6 +250,7 @@ SERVER: n/a
- `distsender.rpc.transferlease.sent`
- `distsender.rpc.truncatelog.sent`
- `distsender.rpc.writebatch.sent`
+- `distsender.slow.replicarpcs`
- `jobs.adopt_iterations`
- `jobs.auto_config_env_runner.currently_idle`
- `jobs.auto_config_env_runner.currently_paused`
@@ -349,6 +385,30 @@ SERVER: n/a
- `jobs.create_stats.resume_completed`
- `jobs.create_stats.resume_failed`
- `jobs.create_stats.resume_retry_error`
+- `jobs.history_retention.currently_idle`
+- `jobs.history_retention.currently_paused`
+- `jobs.history_retention.currently_running`
+- `jobs.history_retention.expired_pts_records`
+- `jobs.history_retention.fail_or_cancel_completed`
+- `jobs.history_retention.fail_or_cancel_failed`
+- `jobs.history_retention.fail_or_cancel_retry_error`
+- `jobs.history_retention.protected_age_sec`
+- `jobs.history_retention.protected_record_count`
+- `jobs.history_retention.resume_completed`
+- `jobs.history_retention.resume_failed`
+- `jobs.history_retention.resume_retry_error`
+- `jobs.import_rollback.currently_idle`
+- `jobs.import_rollback.currently_paused`
+- `jobs.import_rollback.currently_running`
+- `jobs.import_rollback.expired_pts_records`
+- `jobs.import_rollback.fail_or_cancel_completed`
+- `jobs.import_rollback.fail_or_cancel_failed`
+- `jobs.import_rollback.fail_or_cancel_retry_error`
+- `jobs.import_rollback.protected_age_sec`
+- `jobs.import_rollback.protected_record_count`
+- `jobs.import_rollback.resume_completed`
+- `jobs.import_rollback.resume_failed`
+- `jobs.import_rollback.resume_retry_error`
- `jobs.import.currently_idle`
- `jobs.import.currently_paused`
- `jobs.import.currently_running`
@@ -373,6 +433,18 @@ SERVER: n/a
- `jobs.key_visualizer.resume_completed`
- `jobs.key_visualizer.resume_failed`
- `jobs.key_visualizer.resume_retry_error`
+- `jobs.logical_replication.currently_idle`
+- `jobs.logical_replication.currently_paused`
+- `jobs.logical_replication.currently_running`
+- `jobs.logical_replication.expired_pts_records`
+- `jobs.logical_replication.fail_or_cancel_completed`
+- `jobs.logical_replication.fail_or_cancel_failed`
+- `jobs.logical_replication.fail_or_cancel_retry_error`
+- `jobs.logical_replication.protected_age_sec`
+- `jobs.logical_replication.protected_record_count`
+- `jobs.logical_replication.resume_completed`
+- `jobs.logical_replication.resume_failed`
+- `jobs.logical_replication.resume_retry_error`
- `jobs.metrics.task_failed`
- `jobs.migration.currently_idle`
- `jobs.migration.currently_paused`
@@ -480,18 +552,6 @@ SERVER: n/a
- `jobs.row_level_ttl.total_expired_rows`
- `jobs.row_level_ttl.total_rows`
- `jobs.running_non_idle`
-- `jobs.schema_change.currently_idle`
-- `jobs.schema_change.currently_paused`
-- `jobs.schema_change.currently_running`
-- `jobs.schema_change.expired_pts_records`
-- `jobs.schema_change.fail_or_cancel_completed`
-- `jobs.schema_change.fail_or_cancel_failed`
-- `jobs.schema_change.fail_or_cancel_retry_error`
-- `jobs.schema_change.protected_age_sec`
-- `jobs.schema_change.protected_record_count`
-- `jobs.schema_change.resume_completed`
-- `jobs.schema_change.resume_failed`
-- `jobs.schema_change.resume_retry_error`
- `jobs.schema_change_gc.currently_idle`
- `jobs.schema_change_gc.currently_paused`
- `jobs.schema_change_gc.currently_running`
@@ -504,6 +564,18 @@ SERVER: n/a
- `jobs.schema_change_gc.resume_completed`
- `jobs.schema_change_gc.resume_failed`
- `jobs.schema_change_gc.resume_retry_error`
+- `jobs.schema_change.currently_idle`
+- `jobs.schema_change.currently_paused`
+- `jobs.schema_change.currently_running`
+- `jobs.schema_change.expired_pts_records`
+- `jobs.schema_change.fail_or_cancel_completed`
+- `jobs.schema_change.fail_or_cancel_failed`
+- `jobs.schema_change.fail_or_cancel_retry_error`
+- `jobs.schema_change.protected_age_sec`
+- `jobs.schema_change.protected_record_count`
+- `jobs.schema_change.resume_completed`
+- `jobs.schema_change.resume_failed`
+- `jobs.schema_change.resume_retry_error`
- `jobs.typedesc_schema_change.currently_idle`
- `jobs.typedesc_schema_change.currently_paused`
- `jobs.typedesc_schema_change.currently_running`
@@ -520,6 +592,27 @@ SERVER: n/a
- `kv.protectedts.reconciliation.num_runs`
- `kv.protectedts.reconciliation.records_processed`
- `kv.protectedts.reconciliation.records_removed`
+- `logical_replication.batch_hist_nanos`
+- `logical_replication.checkpoint_events_ingested`
+- `logical_replication.commit_latency`
+- `logical_replication.events_dlqed`
+- `logical_replication.events_dlqed_age`
+- `logical_replication.events_dlqed_errtype`
+- `logical_replication.events_dlqed_space`
+- `logical_replication.events_ingested`
+- `logical_replication.events_initial_failure`
+- `logical_replication.events_initial_success`
+- `logical_replication.events_retry_failure`
+- `logical_replication.events_retry_success`
+- `logical_replication.flush_bytes`
+- `logical_replication.flush_hist_nanos`
+- `logical_replication.flush_row_count`
+- `logical_replication.logical_bytes`
+- `logical_replication.optimistic_insert_conflict_count`
+- `logical_replication.replan_count`
+- `logical_replication.replicated_time_seconds`
+- `logical_replication.retry_queue_bytes`
+- `logical_replication.retry_queue_events`
- `physical_replication.admit_latency`
- `physical_replication.commit_latency`
- `physical_replication.cutover_progress`
@@ -600,6 +693,7 @@ SERVER: n/a
- `sql.disk.distsql.spilled.bytes.read`
- `sql.disk.distsql.spilled.bytes.written`
- `sql.distsql.contended_queries.count`
+- `sql.distsql.cumulative_contention_nanos`
- `sql.distsql.dist_query_rerun_locally.count`
- `sql.distsql.dist_query_rerun_locally.failure_count`
- `sql.distsql.exec.latency`
@@ -651,6 +745,11 @@ SERVER: n/a
- `sql.insights.anomaly_detection.fingerprints`
- `sql.insights.anomaly_detection.memory`
- `sql.leases.active`
+- `sql.leases.expired`
+- `sql.leases.long_wait_for_no_version`
+- `sql.leases.long_wait_for_one_version`
+- `sql.leases.long_wait_for_two_version_invariant`
+- `sql.leases.waiting_to_expire`
- `sql.mem.bulk.current`
- `sql.mem.bulk.max`
- `sql.mem.conns.current`
@@ -689,6 +788,7 @@ SERVER: n/a
- `sql.pgwire_cancel.ignored`
- `sql.pgwire_cancel.successful`
- `sql.pgwire_cancel.total`
+- `sql.pgwire.pipeline.count`
- `sql.pre_serve.bytesin`
- `sql.pre_serve.bytesout`
- `sql.pre_serve.conn.failures`
@@ -723,11 +823,11 @@ SERVER: n/a
- `sql.savepoint.rollback.started.count.internal`
- `sql.savepoint.started.count`
- `sql.savepoint.started.count.internal`
-- `sql.schema.invalid_objects`
- `sql.schema_changer.permanent_errors`
- `sql.schema_changer.retry_errors`
- `sql.schema_changer.running`
- `sql.schema_changer.successes`
+- `sql.schema.invalid_objects`
- `sql.select.count`
- `sql.select.count.internal`
- `sql.select.started.count`
@@ -736,11 +836,16 @@ SERVER: n/a
- `sql.service.latency.internal`
- `sql.statements.active`
- `sql.statements.active.internal`
+- `sql.stats.activity.update.latency`
+- `sql.stats.activity.updates.failed`
+- `sql.stats.activity.updates.successful`
- `sql.stats.cleanup.rows_removed`
- `sql.stats.discarded.current`
-- `sql.stats.flush.count`
-- `sql.stats.flush.duration`
-- `sql.stats.flush.error`
+- `sql.stats.flush.done_signals.ignored`
+- `sql.stats.flush.fingerprint.count`
+- `sql.stats.flush.latency`
+- `sql.stats.flushes.failed`
+- `sql.stats.flushes.successful`
- `sql.stats.mem.current`
- `sql.stats.mem.max`
- `sql.stats.reported.mem.current`
@@ -768,6 +873,8 @@ SERVER: n/a
- `sql.txn.rollback.count.internal`
- `sql.txn.rollback.started.count`
- `sql.txn.rollback.started.count.internal`
+- `sql.txn.upgraded_iso_level.count`
+- `sql.txn.upgraded_iso_level.count.internal`
- `sql.txns.open`
- `sql.txns.open.internal`
- `sql.update.count`
@@ -780,14 +887,33 @@ SERVER: n/a
- `sqlliveness.sessions_deletion_runs`
- `sqlliveness.write_failures`
- `sqlliveness.write_successes`
+- `tenant.cost_client.blocked_requests`
+- `tenant.sql_usage.cross_region_network_ru`
+- `tenant.sql_usage.estimated_cpu_seconds`
+- `tenant.sql_usage.estimated_kv_cpu_seconds`
+- `tenant.sql_usage.estimated_replication_bytes`
+- `tenant.sql_usage.external_io_egress_bytes`
+- `tenant.sql_usage.external_io_ingress_bytes`
+- `tenant.sql_usage.kv_request_units`
+- `tenant.sql_usage.pgwire_egress_bytes`
+- `tenant.sql_usage.read_batches`
+- `tenant.sql_usage.read_bytes`
+- `tenant.sql_usage.read_requests`
+- `tenant.sql_usage.request_units`
+- `tenant.sql_usage.sql_pods_cpu_seconds`
+- `tenant.sql_usage.write_batches`
+- `tenant.sql_usage.write_bytes`
+- `tenant.sql_usage.write_requests`
- `txn.aborts`
- `txn.commit_waits`
- `txn.commits`
+- `txn.commits_read_only`
- `txn.commits1PC`
- `txn.condensed_intent_spans`
- `txn.condensed_intent_spans_gauge`
- `txn.condensed_intent_spans_rejected`
- `txn.durations`
+- `txn.inflight_locks_over_tracking_budget`
- `txn.parallelcommits`
- `txn.parallelcommits.auto_retries`
- `txn.refresh.auto_retries`
@@ -823,13 +949,14 @@ SERVER: n/a
- `admission.admitted.elastic-cpu`
- `admission.admitted.elastic-cpu.bulk-normal-pri`
- `admission.admitted.elastic-cpu.normal-pri`
+- `admission.admitted.elastic-stores`
+- `admission.admitted.elastic-stores.bulk-normal-pri`
+- `admission.admitted.elastic-stores.ttl-low-pri`
- `admission.admitted.kv`
- `admission.admitted.kv-stores`
-- `admission.admitted.kv-stores.bulk-normal-pri`
- `admission.admitted.kv-stores.high-pri`
- `admission.admitted.kv-stores.locking-normal-pri`
- `admission.admitted.kv-stores.normal-pri`
-- `admission.admitted.kv-stores.ttl-low-pri`
- `admission.admitted.kv.high-pri`
- `admission.admitted.kv.locking-normal-pri`
- `admission.admitted.kv.normal-pri`
@@ -857,13 +984,14 @@ SERVER: n/a
- `admission.errored.elastic-cpu`
- `admission.errored.elastic-cpu.bulk-normal-pri`
- `admission.errored.elastic-cpu.normal-pri`
+- `admission.errored.elastic-stores`
+- `admission.errored.elastic-stores.bulk-normal-pri`
+- `admission.errored.elastic-stores.ttl-low-pri`
- `admission.errored.kv`
- `admission.errored.kv-stores`
-- `admission.errored.kv-stores.bulk-normal-pri`
- `admission.errored.kv-stores.high-pri`
- `admission.errored.kv-stores.locking-normal-pri`
- `admission.errored.kv-stores.normal-pri`
-- `admission.errored.kv-stores.ttl-low-pri`
- `admission.errored.kv.high-pri`
- `admission.errored.kv.locking-normal-pri`
- `admission.errored.kv.normal-pri`
@@ -882,6 +1010,7 @@ SERVER: n/a
- `admission.granter.cpu_load_long_period_duration.kv`
- `admission.granter.cpu_load_short_period_duration.kv`
- `admission.granter.elastic_io_tokens_available.kv`
+- `admission.granter.elastic_io_tokens_exhausted_duration.kv`
- `admission.granter.io_tokens_available.kv`
- `admission.granter.io_tokens_bypassed.kv`
- `admission.granter.io_tokens_exhausted_duration.kv`
@@ -902,13 +1031,14 @@ SERVER: n/a
- `admission.requested.elastic-cpu`
- `admission.requested.elastic-cpu.bulk-normal-pri`
- `admission.requested.elastic-cpu.normal-pri`
+- `admission.requested.elastic-stores`
+- `admission.requested.elastic-stores.bulk-normal-pri`
+- `admission.requested.elastic-stores.ttl-low-pri`
- `admission.requested.kv`
- `admission.requested.kv-stores`
-- `admission.requested.kv-stores.bulk-normal-pri`
- `admission.requested.kv-stores.high-pri`
- `admission.requested.kv-stores.locking-normal-pri`
- `admission.requested.kv-stores.normal-pri`
-- `admission.requested.kv-stores.ttl-low-pri`
- `admission.requested.kv.high-pri`
- `admission.requested.kv.locking-normal-pri`
- `admission.requested.kv.normal-pri`
@@ -928,13 +1058,14 @@ SERVER: n/a
- `admission.wait_durations.elastic-cpu`
- `admission.wait_durations.elastic-cpu.bulk-normal-pri`
- `admission.wait_durations.elastic-cpu.normal-pri`
+- `admission.wait_durations.elastic-stores`
+- `admission.wait_durations.elastic-stores.bulk-normal-pri`
+- `admission.wait_durations.elastic-stores.ttl-low-pri`
- `admission.wait_durations.kv`
- `admission.wait_durations.kv-stores`
-- `admission.wait_durations.kv-stores.bulk-normal-pri`
- `admission.wait_durations.kv-stores.high-pri`
- `admission.wait_durations.kv-stores.locking-normal-pri`
- `admission.wait_durations.kv-stores.normal-pri`
-- `admission.wait_durations.kv-stores.ttl-low-pri`
- `admission.wait_durations.kv.high-pri`
- `admission.wait_durations.kv.locking-normal-pri`
- `admission.wait_durations.kv.normal-pri`
@@ -953,13 +1084,14 @@ SERVER: n/a
- `admission.wait_queue_length.elastic-cpu`
- `admission.wait_queue_length.elastic-cpu.bulk-normal-pri`
- `admission.wait_queue_length.elastic-cpu.normal-pri`
+- `admission.wait_queue_length.elastic-stores`
+- `admission.wait_queue_length.elastic-stores.bulk-normal-pri`
+- `admission.wait_queue_length.elastic-stores.ttl-low-pri`
- `admission.wait_queue_length.kv`
- `admission.wait_queue_length.kv-stores`
-- `admission.wait_queue_length.kv-stores.bulk-normal-pri`
- `admission.wait_queue_length.kv-stores.high-pri`
- `admission.wait_queue_length.kv-stores.locking-normal-pri`
- `admission.wait_queue_length.kv-stores.normal-pri`
-- `admission.wait_queue_length.kv-stores.ttl-low-pri`
- `admission.wait_queue_length.kv.high-pri`
- `admission.wait_queue_length.kv.locking-normal-pri`
- `admission.wait_queue_length.kv.normal-pri`
@@ -993,6 +1125,10 @@ SERVER: n/a
- `gcbytesage`
- `gossip.bytes.received`
- `gossip.bytes.sent`
+- `gossip.callbacks.pending`
+- `gossip.callbacks.pending_duration`
+- `gossip.callbacks.processed`
+- `gossip.callbacks.processing_duration`
- `gossip.connections.incoming`
- `gossip.connections.outgoing`
- `gossip.connections.refused`
@@ -1023,6 +1159,7 @@ SERVER: n/a
- `kv.closed_timestamp.max_behind_nanos`
- `kv.concurrency.avg_lock_hold_duration_nanos`
- `kv.concurrency.avg_lock_wait_duration_nanos`
+- `kv.concurrency.latch_conflict_wait_durations`
- `kv.concurrency.lock_wait_queue_waiters`
- `kv.concurrency.locks`
- `kv.concurrency.locks_with_wait_queues`
@@ -1058,6 +1195,8 @@ SERVER: n/a
- `kv.replica_read_batch_evaluate.latency`
- `kv.replica_read_batch_evaluate.without_interleaving_iter`
- `kv.replica_write_batch_evaluate.latency`
+- `kv.split.estimated_stats`
+- `kv.split.total_bytes_estimates`
- `kv.tenant_rate_limit.current_blocked`
- `kv.tenant_rate_limit.num_tenants`
- `kv.tenant_rate_limit.read_batches_admitted`
@@ -1156,6 +1295,11 @@ SERVER: n/a
- `queue.gc.process.failure`
- `queue.gc.process.success`
- `queue.gc.processingnanos`
+- `queue.lease.pending`
+- `queue.lease.process.failure`
+- `queue.lease.process.success`
+- `queue.lease.processingnanos`
+- `queue.lease.purgatory`
- `queue.merge.pending`
- `queue.merge.process.failure`
- `queue.merge.process.success`
@@ -1222,6 +1366,7 @@ SERVER: n/a
- `queue.tsmaintenance.process.failure`
- `queue.tsmaintenance.process.success`
- `queue.tsmaintenance.processingnanos`
+- `raft.commands.pending`
- `raft.commands.proposed`
- `raft.commands.reproposed.new-lai`
- `raft.commands.reproposed.unchanged`
@@ -1234,6 +1379,8 @@ SERVER: n/a
- `raft.entrycache.read_bytes`
- `raft.entrycache.size`
- `raft.heartbeats.pending`
+- `raft.loaded_entries.bytes`
+- `raft.loaded_entries.reserved.bytes`
- `raft.process.applycommitted.latency`
- `raft.process.commandcommit.latency`
- `raft.process.handleready.latency`
@@ -1265,6 +1412,7 @@ SERVER: n/a
- `raft.sent.bytes`
- `raft.sent.cross_region.bytes`
- `raft.sent.cross_zone.bytes`
+- `raft.storage.error`
- `raft.storage.read_bytes`
- `raft.ticks`
- `raft.timeoutcampaign`
@@ -1314,6 +1462,8 @@ SERVER: n/a
- `range.snapshots.sent-bytes`
- `range.snapshots.unknown.rcvd-bytes`
- `range.snapshots.unknown.sent-bytes`
+- `range.snapshots.upreplication.rcvd-bytes`
+- `range.snapshots.upreplication.sent-bytes`
- `range.splits`
- `rangekeybytes`
- `rangekeycount`
@@ -1390,6 +1540,7 @@ SERVER: n/a
- `rpc.method.initput.recv`
- `rpc.method.isspanempty.recv`
- `rpc.method.leaseinfo.recv`
+- `rpc.method.linkexternalsstable.recv`
- `rpc.method.merge.recv`
- `rpc.method.migrate.recv`
- `rpc.method.probe.recv`
@@ -1416,8 +1567,6 @@ SERVER: n/a
- `rpc.method.writebatch.recv`
- `rpc.streams.mux_rangefeed.active`
- `rpc.streams.mux_rangefeed.recv`
-- `rpc.streams.rangefeed.active`
-- `rpc.streams.rangefeed.recv`
- `spanconfig.kvsubscriber.oldest_protected_record_nanos`
- `spanconfig.kvsubscriber.protected_record_count`
- `spanconfig.kvsubscriber.update_behind_nanos`
@@ -1429,12 +1578,28 @@ SERVER: n/a
- `storage.batch-commit.sem-wait.duration`
- `storage.batch-commit.wal-queue-wait.duration`
- `storage.batch-commit.wal-rotation.duration`
+- `storage.block-load.active`
+- `storage.block-load.queued`
+- `storage.category-pebble-manifest.bytes-written`
+- `storage.category-pebble-wal.bytes-written`
+- `storage.category-unspecified.bytes-written`
- `storage.checkpoints`
- `storage.compactions.duration`
- `storage.compactions.keys.pinned.bytes`
- `storage.compactions.keys.pinned.count`
- `storage.disk-slow`
- `storage.disk-stalled`
+- `storage.disk.io.time`
+- `storage.disk.iopsinprogress`
+- `storage.disk.read-max.bytespersecond`
+- `storage.disk.read.bytes`
+- `storage.disk.read.count`
+- `storage.disk.read.time`
+- `storage.disk.weightedio.time`
+- `storage.disk.write-max.bytespersecond`
+- `storage.disk.write.bytes`
+- `storage.disk.write.count`
+- `storage.disk.write.time`
- `storage.flush.ingest.count`
- `storage.flush.ingest.table.bytes`
- `storage.flush.ingest.table.count`
@@ -1489,8 +1654,17 @@ SERVER: n/a
- `storage.shared-storage.write`
- `storage.single-delete.ineffectual`
- `storage.single-delete.invariant-violation`
+- `storage.sstable.compression.none.count`
+- `storage.sstable.compression.snappy.count`
+- `storage.sstable.compression.unknown.count`
+- `storage.sstable.compression.zstd.count`
+- `storage.sstable.zombie.bytes`
- `storage.wal.bytes_in`
- `storage.wal.bytes_written`
+- `storage.wal.failover.primary.duration`
+- `storage.wal.failover.secondary.duration`
+- `storage.wal.failover.switch.count`
+- `storage.wal.failover.write_and_sync.latency`
- `storage.wal.fsync.latency`
- `storage.write-stall-nanos`
- `storage.write-stalls`
@@ -1516,14 +1690,14 @@ SERVER: n/a
- `tscache.skl.pages`
- `tscache.skl.rotations`
- `txn.commit_waits.before_commit_trigger`
-- `txn.server_side.1PC.failure`
-- `txn.server_side.1PC.success`
- `txn.server_side_retry.read_evaluation.failure`
- `txn.server_side_retry.read_evaluation.success`
- `txn.server_side_retry.uncertainty_interval_error.failure`
- `txn.server_side_retry.uncertainty_interval_error.success`
- `txn.server_side_retry.write_evaluation.failure`
- `txn.server_side_retry.write_evaluation.success`
+- `txn.server_side.1PC.failure`
+- `txn.server_side.1PC.success`
- `txnrecovery.attempts.pending`
- `txnrecovery.attempts.total`
- `txnrecovery.failures`
diff --git a/src/current/v24.2/cluster-virtualization-setting-scopes.md b/src/current/v24.2/cluster-virtualization-setting-scopes.md
index 3f4245c117e..f6fcc6658b7 100644
--- a/src/current/v24.2/cluster-virtualization-setting-scopes.md
+++ b/src/current/v24.2/cluster-virtualization-setting-scopes.md
@@ -18,14 +18,14 @@ When [cluster virtualization]({% link {{ page.version.version }}/cluster-virtual
- When a cluster setting is system-visible, it can be set only from the system virtual cluster but can be queried from any virtual cluster. For example, a virtual cluster can query a system-visible cluster setting's value, such as `storage.max_sync_duration`, to help adapt to the CockroachDB cluster's configuration.
{% comment %}
-Src: cockroach gen settings-list --show-class --show-format against v23.2.0-rc.2
+Src: cockroach gen settings-list --show-class --show-format against cockroach-v24.2.0-rc.1.darwin-10.9-amd64
Also saved in https://docs.google.com/spreadsheets/d/1HIalzAhwU0CEYzSuG2m1aXSJRpiIyQPJdt8SusHpJ_U/edit?usp=sharing
-(shared CRL-internal). There is a filter-view on the Class column:
+(shared CRL-internal). Sort by the Class column, then Settings column, and paste into the correct section below.
application: Scoped to a virtual cluster
-system virtual cluster: Scoped to the system virtual cluster
-system visible: Can be set / modified only from the system virtual cluster, but can be viewed from a VC
+system-only: Scoped to the system virtual cluster
+system-visible: Can be set / modified only from the system virtual cluster, but can be viewed from a VC
{% endcomment %}
## Cluster settings scoped to a virtual cluster
@@ -45,8 +45,8 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `changefeed.aggregator.flush_jitter`
- `changefeed.backfill.concurrent_scan_requests`
- `changefeed.backfill.scan_request_size`
-- `changefeed.balance_range_distribution.enabled`
-- `changefeed.batch_reduction_retry.enabled`
+- `changefeed.batch_reduction_retry.enabled (alias: changefeed.batch_reduction_retry_enabled)`
+- `changefeed.default_range_distribution_strategy`
- `changefeed.event_consumer_worker_queue_size`
- `changefeed.event_consumer_workers`
- `changefeed.fast_gzip.enabled`
@@ -54,8 +54,8 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `changefeed.memory.per_changefeed_limit`
- `changefeed.min_highwater_advance`
- `changefeed.node_throttle_config`
-- `changefeed.protect_timestamp.max_age`
- `changefeed.protect_timestamp_interval`
+- `changefeed.protect_timestamp.max_age`
- `changefeed.schema_feed.read_with_priority_after`
- `changefeed.sink_io_workers`
- `cloudstorage.azure.concurrent_upload_buffers`
@@ -63,6 +63,7 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `cloudstorage.timeout`
- `cluster.auto_upgrade.enabled`
- `cluster.preserve_downgrade_option`
+- `debug.zip.redact_addresses.enabled`
- `diagnostics.forced_sql_stat_reset.interval`
- `diagnostics.reporting.enabled`
- `diagnostics.reporting.interval`
@@ -76,11 +77,21 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `feature.schema_change.enabled`
- `feature.stats.enabled`
- `jobs.retention_time`
+- `kv.dist_sender.circuit_breaker.cancellation.enabled`
+- `kv.dist_sender.circuit_breaker.cancellation.write_grace_period`
+- `kv.dist_sender.circuit_breaker.probe.interval`
+- `kv.dist_sender.circuit_breaker.probe.threshold`
+- `kv.dist_sender.circuit_breaker.probe.timeout`
+- `kv.dist_sender.circuit_breakers.mode`
- `kv.rangefeed.client.stream_startup_rate`
-- `kv.rangefeed.range_stuck_threshold`
- `kv.transaction.max_intents_bytes`
- `kv.transaction.max_refresh_spans_bytes`
+- `kv.transaction.randomized_anchor_key.enabled`
- `kv.transaction.reject_over_max_intents_budget.enabled`
+- `kv.transaction.write_pipelining.enabled (alias: kv.transaction.write_pipelining_enabled)`
+- `kv.transaction.write_pipelining.locking_reads.enabled`
+- `kv.transaction.write_pipelining.max_batch_size (alias: kv.transaction.write_pipelining_max_batch_size)`
+- `kv.transaction.write_pipelining.ranged_writes.enabled`
- `schedules.backup.gc_protection.enabled`
- `security.ocsp.mode`
- `security.ocsp.timeout`
@@ -89,7 +100,7 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `server.authentication_cache.enabled`
- `server.child_metrics.enabled`
- `server.client_cert_expiration_cache.capacity`
-- `server.clock.forward_jump_check.enabled`
+- `server.clock.forward_jump_check.enabled (alias: server.clock.forward_jump_check_enabled)`
- `server.clock.persist_upper_bound_interval`
- `server.eventlog.enabled`
- `server.eventlog.ttl`
@@ -101,20 +112,25 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `server.log_gc.max_deletions_per_cycle`
- `server.log_gc.period`
- `server.max_connections_per_gateway`
-- `server.oidc_authentication.autologin.enabled`
+- `server.max_open_transactions_per_gateway`
+- `server.oidc_authentication.autologin.enabled (alias: server.oidc_authentication.autologin)`
- `server.oidc_authentication.button_text`
- `server.oidc_authentication.claim_json_key`
- `server.oidc_authentication.client_id`
- `server.oidc_authentication.client_secret`
+- `server.oidc_authentication.client.timeout`
- `server.oidc_authentication.enabled`
- `server.oidc_authentication.principal_regex`
- `server.oidc_authentication.provider_url`
- `server.oidc_authentication.redirect_url`
- `server.oidc_authentication.scopes`
-- `server.shutdown.connections.timeout`
-- `server.shutdown.initial_wait`
-- `server.shutdown.jobs.timeout`
-- `server.shutdown.transactions.timeout`
+- `server.redact_sensitive_settings.enabled`
+- `server.shutdown.connections.timeout (alias: server.shutdown.connection_wait)`
+- `server.shutdown.initial_wait (alias: server.shutdown.drain_wait)`
+- `server.shutdown.jobs.timeout (alias: server.shutdown.jobs_wait)`
+- `server.shutdown.transactions.timeout (alias: server.shutdown.query_wait)`
+- `server.sql_tcp_keep_alive.count`
+- `server.sql_tcp_keep_alive.interval`
- `server.time_until_store_dead`
- `server.user_login.cert_password_method.auto_scram_promotion.enabled`
- `server.user_login.downgrade_scram_stored_passwords_to_bcrypt.enabled`
@@ -126,14 +142,17 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `server.user_login.timeout`
- `server.user_login.upgrade_bcrypt_stored_passwords_to_scram.enabled`
- `server.web_session.purge.ttl`
-- `server.web_session.timeout`
+- `server.web_session.timeout (alias: server.web_session_timeout)`
- `sql.auth.change_own_password.enabled`
+- `sql.auth.grant_option_for_owner.enabled`
+- `sql.auth.grant_option_inheritance.enabled`
- `sql.auth.public_schema_create_privilege.enabled`
- `sql.auth.resolve_membership_single_scan.enabled`
- `sql.closed_session_cache.capacity`
- `sql.closed_session_cache.time_to_live`
- `sql.contention.event_store.capacity`
- `sql.contention.event_store.duration_threshold`
+- `sql.contention.record_serialization_conflicts.enabled`
- `sql.contention.txn_id_cache.max_size`
- `sql.cross_db_fks.enabled`
- `sql.cross_db_sequence_owners.enabled`
@@ -184,12 +203,14 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `sql.guardrails.max_row_size_err`
- `sql.guardrails.max_row_size_log`
- `sql.hash_sharded_range_pre_split.max`
+- `sql.index_recommendation.drop_unused_duration`
- `sql.insights.anomaly_detection.enabled`
- `sql.insights.anomaly_detection.latency_threshold`
- `sql.insights.anomaly_detection.memory_limit`
- `sql.insights.execution_insights_capacity`
- `sql.insights.high_retry_count.threshold`
- `sql.insights.latency_threshold`
+- `sql.log.all_statements.enabled (alias: sql.trace.log_statement_execute)`
- `sql.log.slow_query.experimental_full_table_scans.enabled`
- `sql.log.slow_query.internal_queries.enabled`
- `sql.log.slow_query.latency_threshold`
@@ -200,7 +221,7 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `sql.metrics.max_mem_reported_txn_fingerprints`
- `sql.metrics.max_mem_stmt_fingerprints`
- `sql.metrics.max_mem_txn_fingerprints`
-- `sql.metrics.statement_details.dump_to_logs.enabled`
+- `sql.metrics.statement_details.dump_to_logs.enabled (alias: sql.metrics.statement_details.dump_to_logs)`
- `sql.metrics.statement_details.enabled`
- `sql.metrics.statement_details.gateway_node.enabled`
- `sql.metrics.statement_details.index_recommendation_collection.enabled`
@@ -213,7 +234,6 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `sql.multiregion.drop_primary_region.enabled`
- `sql.notices.enabled`
- `sql.optimizer.uniqueness_checks_for_gen_random_uuid.enabled`
-- `sql.show_ranges_deprecated_behavior.enabled`
- `sql.spatial.experimental_box2d_comparison_operators.enabled`
- `sql.stats.activity.persisted_rows.max`
- `sql.stats.automatic_collection.enabled`
@@ -223,6 +243,9 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `sql.stats.flush.enabled`
- `sql.stats.flush.interval`
- `sql.stats.forecasts.enabled`
+- `sql.stats.forecasts.max_decrease`
+- `sql.stats.forecasts.min_goodness_of_fit`
+- `sql.stats.forecasts.min_observations`
- `sql.stats.histogram_buckets.count`
- `sql.stats.histogram_collection.enabled`
- `sql.stats.histogram_samples.count`
@@ -232,27 +255,29 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `sql.stats.post_events.enabled`
- `sql.stats.response.max`
- `sql.stats.response.show_internal.enabled`
-- `sql.stats.system_tables.enabled`
- `sql.stats.system_tables_autostats.enabled`
+- `sql.stats.system_tables.enabled`
+- `sql.stats.virtual_computed_columns.enabled`
- `sql.telemetry.query_sampling.enabled`
- `sql.telemetry.query_sampling.internal.enabled`
- `sql.telemetry.query_sampling.max_event_frequency`
+- `sql.telemetry.query_sampling.mode`
+- `sql.telemetry.transaction_sampling.max_event_frequency`
+- `sql.telemetry.transaction_sampling.statement_events_per_transaction.max`
- `sql.temp_object_cleaner.cleanup_interval`
- `sql.temp_object_cleaner.wait_interval`
-- `sql.log.all_statements.enabled`
-- `sql.trace.session_eventlog.enabled`
- `sql.trace.stmt.enable_threshold`
- `sql.trace.txn.enable_threshold`
+- `sql.ttl.changefeed_replication.disabled`
- `sql.ttl.default_delete_batch_size`
- `sql.ttl.default_delete_rate_limit`
- `sql.ttl.default_select_batch_size`
- `sql.ttl.default_select_rate_limit`
- `sql.ttl.job.enabled`
-- `sql.txn.read_committed_isolation.enabled`
- `sql.txn_fingerprint_id_cache.capacity`
+- `sql.txn.read_committed_isolation.enabled`
- `storage.max_sync_duration.fatal.enabled`
-- `storage.value_blocks.enabled`
-- `trace.debug_http_endpoint.enabled`
+- `trace.debug_http_endpoint.enabled (alias: trace.debug.enable)`
- `trace.opentelemetry.collector`
- `trace.snapshot.rate`
- `trace.span_registry.enabled`
@@ -262,23 +287,24 @@ system visible: Can be set / modified only from the system virtual cluster, but
## Cluster settings scoped to the system virtual cluster
-{% comment %}Class=system virtual cluster{% endcomment %}
+{% comment %}Class=system-only{% endcomment %}
- `admission.disk_bandwidth_tokens.elastic.enabled`
- `admission.kv.enabled`
-- `physical_replication.consumer.minimum_flush_interval`
- `kv.allocator.lease_rebalance_threshold`
- `kv.allocator.load_based_lease_rebalancing.enabled`
- `kv.allocator.load_based_rebalancing`
-- `kv.allocator.load_based_rebalancing.objective`
- `kv.allocator.load_based_rebalancing_interval`
+- `kv.allocator.load_based_rebalancing.objective`
- `kv.allocator.qps_rebalance_threshold`
- `kv.allocator.range_rebalance_threshold`
- `kv.allocator.store_cpu_rebalance_threshold`
- `kv.bulk_io_write.max_rate`
- `kv.bulk_sst.max_allowed_overage`
+- `kv.lease_transfer_read_summary.global_budget`
+- `kv.lease_transfer_read_summary.local_budget`
- `kv.log_range_and_node_events.enabled`
-- `kv.range_split.by_load.enabled`
+- `kv.range_split.by_load.enabled (alias: kv.range_split.by_load_enabled)`
- `kv.range_split.load_cpu_threshold`
- `kv.range_split.load_qps_threshold`
- `kv.replica_circuit_breaker.slow_replication_threshold`
@@ -287,14 +313,16 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `kv.snapshot_rebalance.max_rate`
- `kv.snapshot_receiver.excise.enabled`
- `kvadmission.store.provisioned_bandwidth`
+- `physical_replication.consumer.minimum_flush_interval (alias: bulkio.stream_ingestion.minimum_flush_interval)`
- `server.consistency_check.max_rate`
- `server.rangelog.ttl`
-- `server.shutdown.lease_transfer_iteration.timeout`
+- `server.shutdown.lease_transfer_iteration.timeout (alias: server.shutdown.lease_transfer_wait)`
- `spanconfig.bounds.enabled`
-- `spanconfig.range_coalescing.system.enabled`
-- `spanconfig.range_coalescing.application.enabled`
+- `spanconfig.range_coalescing.application.enabled (alias: spanconfig.tenant_coalesce_adjacent.enabled)`
+- `spanconfig.range_coalescing.system.enabled (alias: spanconfig.storage_coalesce_adjacent.enabled)`
- `storage.experimental.eventually_file_only_snapshots.enabled`
- `storage.ingest_split.enabled`
+- `storage.wal_failover.unhealthy_op_threshold`
- `timeseries.storage.enabled`
## System-visible cluster settings
@@ -306,15 +334,19 @@ system visible: Can be set / modified only from the system virtual cluster, but
- `diagnostics.memory_monitoring_dumps.enabled`
- `enterprise.license`
- `kv.bulk_sst.target_size`
-- `kv.closed_timestamp.follower_reads.enabled`
+- `kv.closed_timestamp.follower_reads.enabled (alias: kv.closed_timestamp.follower_reads_enabled)`
- `kv.closed_timestamp.lead_for_global_reads_override`
- `kv.closed_timestamp.side_transport_interval`
- `kv.closed_timestamp.target_duration`
- `kv.protectedts.reconciliation.interval`
- `kv.rangefeed.closed_timestamp_refresh_interval`
- `kv.rangefeed.enabled`
+- `security.client_cert.subject_required.enabled`
- `sql.schema.telemetry.recurrence`
- `storage.max_sync_duration`
+- `storage.sstable.compression_algorithm`
+- `storage.sstable.compression_algorithm_backup_storage`
+- `storage.sstable.compression_algorithm_backup_transport`
- `timeseries.storage.resolution_10s.ttl`
- `timeseries.storage.resolution_30m.ttl`
From 5ad2b98acd316609cebff7db0b6a6b55f5d3c034 Mon Sep 17 00:00:00 2001
From: Mike Lewis <76072290+mikeCRL@users.noreply.github.com>
Date: Fri, 9 Aug 2024 19:58:04 -0400
Subject: [PATCH 14/15] Content updates for Innovation releases (#18801)
---------
Co-authored-by: Matt Linville
---
src/current/_config_cockroachdb.yml | 4 +-
.../_includes/releases/v20.1/v20.1.0.md | 2 +-
src/current/cockroachcloud/authorization.md | 2 +-
src/current/cockroachcloud/upgrade-policy.md | 94 ++++++----
.../cockroachcloud/upgrade-to-v23.2.md | 2 +-
.../cockroachcloud/upgrade-to-v24.1.md | 3 +-
.../cockroachcloud/upgrade-to-v24.2.md | 169 ++++++++++++++++++
src/current/releases/index.md | 147 +++++++++++----
.../releases/release-support-policy.md | 71 +++++---
9 files changed, 399 insertions(+), 95 deletions(-)
create mode 100644 src/current/cockroachcloud/upgrade-to-v24.2.md
diff --git a/src/current/_config_cockroachdb.yml b/src/current/_config_cockroachdb.yml
index 06587c7afed..f1eae8a8873 100644
--- a/src/current/_config_cockroachdb.yml
+++ b/src/current/_config_cockroachdb.yml
@@ -1,7 +1,7 @@
baseurl: /docs
-current_cloud_version: v24.1
+current_cloud_version: v24.2
destination: _site/docs
homepage_title: CockroachDB Docs
versions:
- stable: v24.1
+ stable: v24.2
dev: v24.2
diff --git a/src/current/_includes/releases/v20.1/v20.1.0.md b/src/current/_includes/releases/v20.1/v20.1.0.md
index 23e625c2ea7..f2bdcefba92 100644
--- a/src/current/_includes/releases/v20.1/v20.1.0.md
+++ b/src/current/_includes/releases/v20.1/v20.1.0.md
@@ -130,4 +130,4 @@ Docs | **"Hello World" Repos** | Added several language-specific [GitHub repos](
Docs | **Multi-Region Sample App and Tutorial** | Added a full-stack, multi-region sample application ([GitHub repo](https://github.com/cockroachlabs/movr-flask)) with an [accompanying tutorial](https://www.cockroachlabs.com/docs/v20.1/multi-region-overview) on building a multi-region application on a multi-region CockroachCloud cluster. Also added a [video demonstration](https://www.youtube.com/playlist?list=PL_QaflmEF2e8o2heLyIt5iDUTgJE3EPkp) as a YouTube playlist.
Docs | **Streaming Changefeeds to Snowflake Tutorial** | Added an [end-to-end tutorial](https://www.cockroachlabs.com/docs/cockroachcloud/stream-changefeed-to-snowflake-aws) on how to use an Enterprise changefeed to stream row-level changes from CockroachCloud to Snowflake, an online analytical processing (OLAP) database.
Docs | **Improved Backup/Restore Docs** | Updated the backup/restore docs to better separate [broadly applicable guidance and best practices](https://www.cockroachlabs.com/docs/v20.1/backup-and-restore) from more advanced topics.
-Docs | **Release Support Policy** | Added a page explaining Cockroach Labs' [policy for supporting major releases of CockroachDB]({% link releases/release-support-policy.md %}), including the phases of support that each major release moves through, the currently supported releases, and an explanation of the [naming scheme]({% link releases/index.md %}#release-naming) used for CockroachDB.
+Docs | **Release Support Policy** | Added a page explaining Cockroach Labs' [policy for supporting major releases of CockroachDB]({% link releases/release-support-policy.md %}), including the phases of support that each major release moves through, the currently supported releases, and an explanation of the [naming scheme]({% link releases/index.md %}#overview) used for CockroachDB.
diff --git a/src/current/cockroachcloud/authorization.md b/src/current/cockroachcloud/authorization.md
index a491f32487a..0ae5620a451 100644
--- a/src/current/cockroachcloud/authorization.md
+++ b/src/current/cockroachcloud/authorization.md
@@ -82,7 +82,7 @@ Cluster Operators can perform a variety of cluster functions:
- View a cluster's Jobs from the [Jobs page]({% link cockroachcloud/jobs-page.md %}).
- View a cluster's Metrics from the [Metrics page]({% link cockroachcloud/metrics-page.md %}).
- View a cluster's Insights from the [Insights page]({% link cockroachcloud/insights-page.md %}).
- - [Upgrade]({% link cockroachcloud/upgrade-to-v23.1.md %}#step-5-start-the-upgrade) a cluster's CRDB version.
+ - [Upgrade]({% link cockroachcloud/upgrade-to-{{site.current_cloud_version}}.md %}) a cluster's CockroachDB version.
- View a cluster's [PCI-readiness status (Dedicated Advanced clusters only)]({% link cockroachcloud/cluster-overview-page.md %}?filters=dedicated#pci-ready-dedicated-advanced).
- Send a test alert from the [Alerts Page]({% link cockroachcloud/alerts-page.md %}).
- Configure single sign-on (SSO) enforcement.
diff --git a/src/current/cockroachcloud/upgrade-policy.md b/src/current/cockroachcloud/upgrade-policy.md
index 3ac7718365d..93085809d44 100644
--- a/src/current/cockroachcloud/upgrade-policy.md
+++ b/src/current/cockroachcloud/upgrade-policy.md
@@ -5,64 +5,98 @@ toc: true
docs_area: manage
---
-This page describes the support and upgrade policy for clusters deployed in CockroachDB {{ site.data.products.cloud }}. For CockroachDB Self-Hosted, refer to the CockroachDB [Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
+This page describes the support and upgrade policy for clusters deployed in CockroachDB {{ site.data.products.cloud }}. For CockroachDB {{ site.data.products.core }}, refer to the CockroachDB [Release Support Policy]({% link releases/release-support-policy.md %}).
-Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (historically “1” or “2”, representing a biannual cycle), and `PP` indicates the patch release version. For example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1. For more details, refer to [Release Naming](https://cockroachlabs.com/docs/releases/index#release-naming).
+## CockroachDB Cloud Support Policy
-CockroachDB {{ site.data.products.cloud }} provides support for the latest major version of CockroachDB and the major version immediately preceding it.
+[Major versions]({% link releases/index.md %}) of CockroachDB are labeled either [Regular releases]({% link releases/index.md %}#major-releases) or [Innovation releases]({% link releases/index.md %}#major-releases).
+- **Regular releases** are supported for 12 months from their initial production release date.
+- **Innovation releases** are supported for 6 months from their initial production release date.
-CockroachDB Dedicated clusters are automatically upgraded to the latest patch of the cluster’s current major version of CockroachDB, but an account administrator must initiate an upgrade to a new major version.
+For each release type, the end date of this period is called End of Support (EOS).
-CockroachDB Serverless clusters are upgraded to the latest major version and each patch automatically.
+A cluster running an unsupported CockroachDB version is not eligible for Cockroach Labs’ [availability SLA](https://www.cockroachlabs.com/cloud-terms-and-conditions/cockroachcloud-technical-service-level-agreement/).
-{{site.data.alerts.callout_success}}
-Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.dedicated }} clusters can optionally be upgraded to a [Pre-Production Preview](#pre-production-preview-upgrades) release—a beta or release candidate (RC) testing release for testing and validation of that next major version. To learn more, refer to [Upgrade to v24.1 Pre-Production Preview]({% link cockroachcloud/upgrade-to-v24.1.md %}).
-{{site.data.alerts.end}}
+CockroachDB {{ site.data.products.serverless }} clusters will automatically be upgraded to the next major version while the current one is still supported, to prevent a Serverless cluster from reaching EOS.
-## Patch version upgrades
+A CockroachDB {{ site.data.products.dedicated }} cluster must be upgraded prior to its EOS date to maintain uninterrupted support and SLA guarantees.
+
+When a CockroachDB {{ site.data.products.dedicated }} cluster is nearing its EOS date, you will be reminded to upgrade the cluster at least 30 days before the EOS date to avoid losing support. {% capture who_can_upgrade %}A [Cluster Administrator]({% link cockroachcloud/authorization.md %}#cluster-administrator) can [upgrade a cluster]({% link cockroachcloud/upgrade-to-{{site.current_cloud_version}}.md %}) directly from the CockroachDB Cloud Console. An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) or [Folder Admin]({% link cockroachcloud/authorization.md %}#folder-admin) can grant the Cluster Administrator role.{% endcapture %}{{ who_can_upgrade }}
+
+### Currently supported versions
+
+Version | Release Type | Support period | Release date | EOS date
+:------:|:------------:|:--------------:|:------------:|:---------:
+v23.2 | Regular | 12 months | 2024-02-05 | 2025-02-05
+v24.1 | Regular | 12 months | 2024-05-20 | 2025-05-20
+v24.2 | Innovation | 6 months | 2024-08-12 | 2025-02-12
+
+For expected future versions, refer to [Upcoming releases]({% link releases/index.md %}#upcoming-releases).
-Patch version [releases](https://www.cockroachlabs.com/docs/releases), or "maintenance" releases, contain stable, backward-compatible improvements to the major versions of CockroachDB (for example, v23.1.12 and v23.1.13).
+### EOS versions
-For CockroachDB {{ site.data.products.dedicated }} clusters, [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can [set a weekly 6-hour maintenance window]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window) during which available patch upgrades will be applied. During the window, your cluster may experience restarts, degraded performance, and, for single-node clusters, downtime. Upgrades may not always be completed by the end of the window, and maintenance that is critical for security or stability may occur outside the window. Patch upgrades can also be [deferred for 60 days]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window). If no maintenance window is configured, CockroachDB {{ site.data.products.dedicated }} clusters will be automatically upgraded to the latest supported patch version as soon as it becomes available.
+Version | Release Type | Support period | Release date | EOS date
+:------:|:------------:|:--------------:|:------------:|:--------:
+v23.1 | Regular | 12 months | 2023-05-15 | 2024-05-15
-CockroachDB {{ site.data.products.serverless }} clusters are subject to automatic upgrades to the latest supported patch version.
+## Patch version upgrades
+
+A patch version [release]({% link releases/index.md %}), or "maintenance" release, contains stable, backward-compatible improvements to a major version of CockroachDB. For example, {{site.current_cloud_version}} is a patch release.
{{site.data.alerts.callout_danger}}
-Single-node clusters will experience some downtime during cluster maintenance.
+Single-node clusters will experience some downtime while the node is restarted during cluster maintenance, including patch version upgrades.
{{site.data.alerts.end}}
+
+## CockroachDB {{ site.data.products.dedicated }} patch upgrades and maintenance windows
+
+CockroachDB {{ site.data.products.dedicated }} clusters are automatically upgraded to the latest patch version release of the cluster’s current CockroachDB major version, but a major-version upgrade must be initiated by an Org Administrator.
+
+A [Cluster Administrator]({% link cockroachcloud/authorization.md %}#cluster-administrator) can [set a weekly 6-hour maintenance window]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window) for a CockroachDB {{ site.data.products.dedicated }} cluster. During the maintenance window, patch upgrades may be applied, and the cluster may experience restarts, degraded performance, and, for single-node clusters, downtime. Upgrades may not always be completed by the end of the window, and maintenance that is critical for security or stability may occur outside of the window. A patch upgrade can be [deferred for 60 days]({% link cockroachcloud/cluster-management.md %}#set-a-maintenance-window). If no maintenance window is configured, a CockroachDB {{ site.data.products.dedicated }} cluster will be upgraded automatically to the latest supported patch version soon after it becomes available.
+
+### CockroachDB {{ site.data.products.serverless }} automatic upgrades
+
+CockroachDB {{ site.data.products.serverless }} clusters are automatically upgraded to new patch versions, as well as new major versions.
+
## Major version upgrades
-Major version [releases](https://www.cockroachlabs.com/docs/releases) (for example, v23.1.0 and v23.2.0) contain new functionality and may include backward-incompatible changes to CockroachDB.
+Major version [releases](https://www.cockroachlabs.com/docs/releases) (for example, {{ site.current_cloud_version }}) contain new functionality and may include backward-incompatible changes to CockroachDB.
-Major version upgrades are automatic for CockroachDB {{ site.data.products.serverless }} clusters and opt-in for CockroachDB {{ site.data.products.dedicated }} clusters. An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) must initiate major version upgrades for CockroachDB {{ site.data.products.dedicated }} clusters. When a new major version is available, Admins will be able to [start an upgrade]({% link cockroachcloud/upgrade-to-v23.1.md %}) from the CockroachDB {{ site.data.products.cloud }} Console for clusters using CockroachDB {{ site.data.products.dedicated }}. When a major version upgrade is initiated for a cluster, it will upgrade to the latest patch version as well.
+Major version upgrades are automatic for CockroachDB {{ site.data.products.serverless }} clusters and must be initiated by an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) for CockroachDB {{ site.data.products.dedicated }} clusters. In CockroachDB {{ site.data.products.dedicated }}, major versions labeled Regular releases are all required upgrades, while Innovation releases are optional. Once a new major version is available, you can [start an upgrade]({% link cockroachcloud/upgrade-to-{{site.current_cloud_version}}.md %}) from the CockroachDB Cloud Console. The cluster will be upgraded to the latest patch release within that major version.
-### Pre-production preview upgrades
+### Innovation releases
-Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.cloud }} organizations can create new {{ site.data.products.dedicated }} clusters or upgrade existing clusters to a Pre-Production Preview release for testing and experimentation using a beta or release candidate (RC) of that next major version. Upgrading to a Pre-Production Preview is a major-version upgrade. After a cluster is upgraded to a Pre-Production Preview release, it is automatically upgraded to all subsequent releases within the same major version—including additional beta and RC releases, the GA release, and subsequent patch releases after GA, as [patch version upgrades](#patch-version-upgrades). To learn more, refer to [Upgrade to v23.2 Pre-Production Preview](https://cockroachlabs.com/docs/cockroachcloud/upgrade-to-v24.1).
+As of v24.2, Cockroach Labs releases a major version of CockroachDB once per quarter, alternating between releases classified as a [Regular release or an Innovation release]({% link releases/index.md %}#release-types). Regular releases provide a longer support period and a longer period between upgrades, while Innovation releases offer a shorter support period and faster access to new features.
-### Rollback support
+- Regular releases are not optional; they must be applied to CockroachDB {{ site.data.products.dedicated }} clusters and they are applied automatically to CockroachDB {{ site.data.products.serverless }} clusters. Regular releases are produced twice a year, alternating with Innovation releases. They are supported for one year. Upgrading CockroachDB {{ site.data.products.dedicated }} directly from one Regular release to the next Regular release, skipping the intervening Innovation release, is supported.
+- Innovation releases are optional and can be skipped for CockroachDB {{ site.data.products.dedicated }} clusters but are required for CockroachDB {{ site.data.products.serverless }}. Innovation releases are produced twice a year, alternating with Regular releases. An Innovation release is supported for 6 months, at which time a Dedicated cluster must be upgraded to the next Regular release. At a given time, only one Innovation release is typically supported. Upgrading CockroachDB {{ site.data.products.dedicated }} directly from one Innovation release to the next Innovation release is not supported.
-When upgrading a CockroachDB {{ site.data.products.dedicated }} cluster to a new major version, once all nodes are running the new version, you have approximately 72 hours before the upgrade is automatically finalized. During this window, if you see unexpected behavior, you can trigger a rollback to the previous major version from the CockroachDB {{ site.data.products.cloud }} Console.
+{{site.data.alerts.callout_info}}
+To opt out of Innovation releases entirely and hide them from your CockroachDB organization, contact Support.{{site.data.alerts.end}}
-To stop the upgrade and roll back to the latest patch release of the previous major version, click **Roll back** in the banner at the top of the CockroachDB Cloud Console, and then click **Roll back upgrade**.
+To summarize the available major-version upgrade paths for CockroachDB {{ site.data.products.dedicated }}:
-{{site.data.alerts.callout_danger}}
-If you choose to roll back a major version upgrade, your cluster will be rolled back to the latest patch release of the previous major version, which may differ from the patch release you were running before you initiated the upgrade.
-{{site.data.alerts.end}}
+- When your cluster is running a Regular release, you can select which of the next two major versions to upgrade to:
+ - The next version, an Innovation release.
+ - The Regular release that follows that Innovation release, when it is available.
+- When your cluster is running an Innovation release, you can upgrade only to the subsequent Regular release; you cannot upgrade directly to the next Innovation release, even if one is available.
+
+### Pre-production preview upgrades
-During rollback, nodes are reverted to that prior major version's latest patch one at a time, without interrupting the cluster's health and availability.
+Prior to the GA release of a major CockroachDB version, CockroachDB {{ site.data.products.cloud }} organizations can create new Dedicated clusters or upgrade existing clusters to a Pre-Production Preview release for testing and experimentation using a beta or release candidate (RC) of that next major version. Upgrading to a Pre-Production Preview is a major-version upgrade. After a cluster is upgraded to a Pre-Production Preview release, it is automatically upgraded to all subsequent releases within the same major version—including additional beta and RC releases, the GA release, and subsequent production patch releases as [patch version upgrades](#patch-version-upgrades). Upgrading to a Pre-Production Preview follows the same procedure as updating to a Production release. To learn more, refer to [Upgrade to {{ site.current_cloud_version }}]({% link cockroachcloud/upgrade-to-{{ site.current_cloud_version }}.md %}).
-If you notice problems after a major version upgrade has been finalized, it will not be possible to roll back via the CockroachDB {{ site.data.products.cloud }} Console. For assistance, [contact support](https://support.cockroachlabs.com/hc/requests/new).
+### Rollback support
-### End of Support for CockroachDB versions
+When upgrading a CockroachDB {{ site.data.products.dedicated }} cluster to a new major version, once all nodes are running the new version, the upgrade is finalized automatically in approximately 72 hours. During this window, if you see unexpected behavior, you can [trigger a rollback]({% link cockroachcloud/upgrade-to-{{ site.current_cloud_version }}.md %}#roll-back-the-upgrade) to the previous major version from the [CockroachDB {{ site.data.products.cloud }} Console](https://cockroachlabs.cloud).
-As CockroachDB releases new major versions, older versions reach their End of Support (EOS) on CockroachDB {{ site.data.products.cloud }}. A CockroachDB version reaches EOS when it is two major versions behind the latest version. For example, when CockroachDB v21.2 was released, CockroachDB v20.2 reached EOS.
+{{site.data.alerts.callout_info}}
+If you choose to roll back a major version upgrade, your cluster will be rolled back to the latest patch release of the previous major version, which may differ from the patch release you were running before you initiated the upgrade.
+{{site.data.alerts.end}}
-Clusters running unsupported CockroachDB versions are not eligible for our [availability SLA](https://www.cockroachlabs.com/cloud-terms-and-conditions/). Further downgrades in support may occur as per the [CockroachDB Release Support Policy](https://www.cockroachlabs.com/docs/releases/release-support-policy).
+During rollback, nodes are reverted one at a time to reduce the impact of the operation on the cluster's health and availability.
-If you are running a CockroachDB version nearing EOS, you will be reminded at minimum one month before that version’s EOS that your clusters must be upgraded by the EOS date to avoid losing support. A Org Administrator can [upgrade your cluster]({% link cockroachcloud/upgrade-to-v23.2.md %}) directly from the CockroachDB {{ site.data.products.cloud }} Console.
+If you notice problems after a major version upgrade has been finalized, it will not be possible to roll back via the CockroachDB {{ site.data.products.cloud }} Console. For assistance, [contact Support](https://support.cockroachlabs.com/hc/requests/new).
## Additional information
-For more details about the upgrade and finalization process, see [Upgrade to the Latest CockroachDB Version](https://cockroachlabs.com/docs/cockroachcloud/upgrade-to-v23.1).
+For more details about the upgrade and finalization process in CockroachDB, refer to the instructions on [upgrading to the latest CockroachDB version]({% link cockroachcloud/upgrade-to-{{site.current_cloud_version}}.md %}).
diff --git a/src/current/cockroachcloud/upgrade-to-v23.2.md b/src/current/cockroachcloud/upgrade-to-v23.2.md
index c12bf0aed67..e3f6e2dfa0f 100644
--- a/src/current/cockroachcloud/upgrade-to-v23.2.md
+++ b/src/current/cockroachcloud/upgrade-to-v23.2.md
@@ -16,7 +16,7 @@ pre_production_preview_version: v23.2.0-rc.2
[CockroachDB {{ page.pre_production_preview_version }}](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}#{{ page.pre_production_preview_version | replace: ".","-"}}) is available to CockroachDB {{ site.data.products.dedicated }} clusters as an opt-in upgrade for testing and experimentation.
{{site.data.alerts.callout_danger}}
-[Testing releases]({% link releases/index.md %}#release-naming) are not qualified for production environments and not eligible for support or uptime SLA commitments.
+[Testing releases]({% link releases/index.md %}#overview) are not qualified for production environments and not eligible for support or uptime SLA commitments.
{{site.data.alerts.end}}
An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can upgrade your CockroachDB {{ site.data.products.dedicated }} cluster from the CockroachDB {{ site.data.products.cloud }} Console. This page guides you through the process of upgrading.
diff --git a/src/current/cockroachcloud/upgrade-to-v24.1.md b/src/current/cockroachcloud/upgrade-to-v24.1.md
index 9116975078c..306b8fb17ea 100644
--- a/src/current/cockroachcloud/upgrade-to-v24.1.md
+++ b/src/current/cockroachcloud/upgrade-to-v24.1.md
@@ -15,7 +15,8 @@ pre_production_preview_version: v24.1.0-rc.1
{% if page.pre_production_preview == true %}
[CockroachDB {{ page.pre_production_preview_version }}](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}#{{ page.pre_production_preview_version | replace: ".","-"}}) is available to CockroachDB {{ site.data.products.dedicated }} clusters as an opt-in upgrade for testing and experimentation.
-{{site.data.alerts.callout_danger}} [Testing releases]({% link releases/index.md %}#release-naming) are not qualified for production environments and not eligible for support or uptime SLA commitments.
+{{site.data.alerts.callout_danger}}
+[Testing releases]({% link releases/index.md %}#overview) are not qualified for production environments and not eligible for support or uptime SLA commitments.
{{site.data.alerts.end}}
An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can upgrade your CockroachDB {{ site.data.products.dedicated }} cluster from the CockroachDB {{ site.data.products.cloud }} Console. This page shows how to upgrade a CockroachDB {{ site.data.products.dedicated }} cluster to {{ page.pre_production_preview_version }} for testing and experimentation.
diff --git a/src/current/cockroachcloud/upgrade-to-v24.2.md b/src/current/cockroachcloud/upgrade-to-v24.2.md
new file mode 100644
index 00000000000..d8f3face32f
--- /dev/null
+++ b/src/current/cockroachcloud/upgrade-to-v24.2.md
@@ -0,0 +1,169 @@
+---
+title: Upgrade to CockroachDB v24.2
+summary: Learn how to upgrade a cluster in CockroachDB Cloud to v24.2
+toc: true
+docs_area: manage
+page_version: v24.2
+prev_version: v24.1
+pre_production_preview: false
+pre_production_preview_version: v24.1.0-rc.1
+---
+
+{% capture previous_version_numeric %}{{ page.prev_version | remove_first: 'v' }}{% endcapture %}
+{% capture major_version_numeric %}{{ page.page_version | remove_first: 'v' }}{% endcapture %}
+
+{% if page.pre_production_preview == true %}
+[CockroachDB {{ page.pre_production_preview_version }}](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}#{{ page.pre_production_preview_version | replace: ".","-"}}) is available to CockroachDB {{ site.data.products.dedicated }} clusters as an opt-in upgrade for testing and experimentation.
+
+{{site.data.alerts.callout_danger}}
+[Testing releases]({% link releases/index.md %}#overview) are not qualified for production environments and not eligible for support or uptime SLA commitments.
+{{site.data.alerts.end}}
+
+An [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can upgrade your CockroachDB {{ site.data.products.dedicated }} cluster from the CockroachDB {{ site.data.products.cloud }} Console. This page shows how to upgrade a CockroachDB {{ site.data.products.dedicated }} cluster to {{ page.pre_production_preview_version }} for testing and experimentation.
+
+{{site.data.alerts.callout_success}}
+Upgrading from {{ page.prev_version }} to {{ page.pre_production_preview_version }} is a major-version upgrade. Upgrading a CockroachDB {{ site.data.products.dedicated }} cluster to a new major version is opt-in. Before proceeding, review the [CockroachDB {{ site.data.products.cloud }} Upgrade Policy](https://cockroachlabs.com/docs/cockroachcloud/upgrade-policy#pre-production-preview). After a cluster is upgraded to a Pre-Production Preview release, it is automatically upgraded to all subsequent releases within the same major version—including additional beta and RC releases, the GA release, and subsequent patch releases after GA, as patch version upgrades. To learn more, refer to [Patch Version Upgrades]({% link cockroachcloud/upgrade-policy.md %}#patch-version-upgrades).
+{{site.data.alerts.end}}
+
+{% else %}
+Now that [CockroachDB {{ page.page_version }}](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}) is available, an [Org Administrator]({% link cockroachcloud/authorization.md %}#org-administrator) can upgrade your CockroachDB {{ site.data.products.dedicated }} cluster from the CockroachDB {{ site.data.products.cloud }} Console. This page shows how to upgrade a cluster in CockroachDB {{ site.data.products.cloud }} to {{ page.page_version }}.
+
+{{site.data.alerts.callout_success}}
+Upgrading a CockroachDB {{ site.data.products.dedicated }} cluster to a new major version is opt-in. Before proceeding, review the [CockroachDB {{ site.data.products.cloud }} Upgrade Policy](https://cockroachlabs.com/docs/cockroachcloud/upgrade-policy).
+{{site.data.alerts.end}}
+
+If you upgrade to a Pre-Production Preview of {{ page.page_version }}, your cluster will be automatically upgraded to {{ page.page_version }}.0 upon its GA release.
+{% endif %}
+
+## Step 1. Verify that you can upgrade
+
+To upgrade to CockroachDB {{ page.page_version }}, you must be running {{ page.prev_version }}. If you are not running {{ page.prev_version }}, first [upgrade to {{ page.prev_version }}]({% link cockroachcloud/upgrade-to-{{ page.prev_version }}.md %}). Then return to this page and continue to [Step 2](#step-2-select-your-cluster-size).
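+
+As a quick check before you begin, you can confirm the version your cluster is currently running from a SQL shell. This is a minimal sketch; the version string shown in the comment is illustrative.
+
+~~~ sql
+-- Returns the full build version string, for example:
+-- CockroachDB CCL {{ page.prev_version }}.x (...)
+SELECT version();
+~~~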
+
+## Step 2. Select your cluster size
+
+The upgrade process depends on the number of nodes in your cluster. Select whether your cluster has multiple nodes or a single node:
+
+
+
+
+
+
+## Step 3. Understand the upgrade process
+
+
+In a multi-node cluster, the upgrade does not interrupt the cluster's overall health and availability. CockroachDB {{ site.data.products.cloud }} stops one node at a time and restarts it with the new version, waits a few minutes to observe the upgraded node's behavior, then moves on to the next node. This "rolling upgrade" takes approximately 4-5 minutes per node and is enabled by CockroachDB's [multi-active availability](https://www.cockroachlabs.com/docs/{{site.current_cloud_version}}/multi-active-availability) design.
+
+
+
+When you start the upgrade, the cluster will be unavailable for a few minutes while the node is stopped and restarted with {{ page.page_version }}.
+
+
+If you are upgrading from {{ page.prev_version }} to {{ page.page_version }}, the upgrade must be finalized. This is not required for subsequent patch upgrades. Approximately 72 hours after all nodes are running {{ page.page_version }}, the upgrade will be automatically [finalized]({% link {{ page.page_version }}/upgrade-cockroach-version.md %}#step-6-finish-the-upgrade). It's important to monitor your cluster and applications during this 72-hour window, so that you can [roll back the upgrade](#roll-back-the-upgrade) from the CockroachDB {{ site.data.products.cloud }} Console if you see [unexpected behavior according to key metrics]({% link {{ page.page_version }}/essential-metrics-dedicated.md %}) or if you experience application or database issues. Finalization enables certain [features and performance improvements introduced in {{ page.page_version }}](#expect-temporary-limitations). When finalization is complete, it is no longer possible to roll back to {{ page.prev_version }}.
+
+{{site.data.alerts.callout_info}}
+If you choose to roll back a major version upgrade, your cluster will be rolled back to the latest patch release of {{ page.prev_version }}, which may differ from the patch release you were running before you initiated the upgrade. To learn more, refer to [CockroachDB Cloud Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}).
+{{site.data.alerts.end}}
+
+When finalization begins, a series of migration jobs run to enable certain types of features and changes in the new major version that cannot be rolled back. These include changes to system schemas, indexes, and descriptors, and [enabling certain types of improvements and new features](#expect-temporary-limitations). Until the upgrade is finalized, these features and functions will not be available and the command `SHOW CLUSTER SETTING version` will return `{{ previous_version_numeric }}`.
+
+You can monitor the progress of the migration in the CockroachDB {{ site.data.products.cloud }} [**Jobs** page]({% link cockroachcloud/jobs-page.md %}). Migration jobs have names in the format `{{ major_version_numeric }}-{migration-id}`. If a migration job fails or stalls, Cockroach Labs can use the migration ID to help diagnose and troubleshoot the problem. Each major version has different migration jobs with different IDs.
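+
+If you prefer a SQL shell to the Console, a query along the following lines can surface these migration jobs. This is a sketch only; it assumes the version-prefixed migration name described above appears in the job description.
+
+~~~ sql
+-- Assumption: migration job descriptions begin with the major version number.
+SELECT job_id, description, status
+FROM [SHOW JOBS]
+WHERE description LIKE '{{ major_version_numeric }}-%';
+~~~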
+
+Finalization is complete when all migration jobs have completed. After migration is complete, the command `SHOW CLUSTER SETTING version` will return `{{ major_version_numeric }}`.
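+
+For example, you can check whether finalization has completed from a SQL shell:
+
+~~~ sql
+-- Returns {{ previous_version_numeric }} until finalization completes,
+-- and {{ major_version_numeric }} afterward.
+SHOW CLUSTER SETTING version;
+~~~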
+
+## Step 4. Prepare to upgrade
+
+Before starting the upgrade, complete the following steps.
+
+
+
+### Prepare for brief unavailability
+
+Your cluster will be unavailable while its single node is stopped and restarted with {{ page.page_version }}. Prepare your application for this brief downtime, typically a few minutes.
+
+The [**SQL Users**]({% link cockroachcloud/managing-access.md %}#create-a-sql-user) and [**Tools**]({% link cockroachcloud/tools-page.md %}) tabs in the CockroachDB {{ site.data.products.cloud }} Console will also be disabled during this time.
+
+
+
+### Review breaking changes
+
+{% comment %} Be careful with this logic and the page-level variable page_version {% endcomment %}
+{% assign rd = site.data.versions | where_exp: "rd", "rd.major_version == page.page_version" | first %}
+
+{% if page.pre_production_preview == true %}
+Review the backward-incompatible changes and deprecated features announced in each {{ page.page_version }} testing release. If any affect your applications, make the necessary changes before proceeding.
+{% else %}
+Review the backward-incompatible changes and deprecated features announced in the [{{ page.page_version }} release notes](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}).
+{% endif %}
+
+## Step 5. Start the upgrade
+
+To start the upgrade process:
+
+1. [Sign in](https://cockroachlabs.cloud/) to your CockroachDB {{ site.data.products.cloud }} account.
+
+1. In the **Clusters** list, select the cluster you want to upgrade.
+
+1. Select **Actions > Upgrade {% if page.pre_production_preview == true %}to Pre-Production Preview{% else %}major version{% endif %}**.
+
+1. In the **Upgrade your cluster** dialog, review the pre-upgrade message and then click **Upgrade {% if page.pre_production_preview == true %}to Pre-Production Preview{% else %}major version{% endif %}**.
+
+
+Your cluster will be upgraded one node at a time without interrupting the cluster's overall health and availability. This "rolling upgrade" will take approximately 4-5 minutes per node.
+
+
+
+Your single-node cluster will be unavailable for a few minutes while the node is stopped and restarted with CockroachDB {{ page.page_version }}.
+
+
+After it is started, an upgrade cannot be cancelled. Instead, you can wait for the upgrade to finish, then [roll it back](#roll-back-the-upgrade) for up to 72 hours, after which time it will be finalized and cannot be rolled back.
+
+## Step 6. Monitor the upgrade
+
+Once your cluster is running CockroachDB {{ page.page_version }}, you will have approximately 72 hours before the upgrade is automatically finalized. During this time, it is important to [monitor your applications](#monitor-your-application) and [expect temporary limitations](#expect-temporary-limitations).
+
+If you see unexpected behavior, you can [roll back](#roll-back-the-upgrade) to {{ page.prev_version }} during the 72-hour window.
+
+### Monitor your application
+
+Use the [DB Console]({% link cockroachcloud/tools-page.md %}) or your own tooling to monitor your application for any unexpected behavior.
+
+- If everything looks good, you can wait for the upgrade to automatically finalize or you can [manually trigger finalization](#finalize-the-upgrade).
+
+- If you see unexpected behavior, you can [roll back to the latest patch release of {{ page.prev_version }}](#roll-back-the-upgrade) during the 72-hour window.
+
+### Expect temporary limitations
+
+Most {{ page.page_version }} features can be used right away, but some will be enabled only after the upgrade has been finalized. Attempting to use these features before finalization will result in errors.
+
+For an expanded list of features included in {{ page.page_version }}, temporary limitations, backward-incompatible changes, and deprecated features, refer to the [{{ page.page_version }} release notes](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }}).
+
+### Roll back the upgrade
+
+If you see unexpected behavior, you can roll back the upgrade during the 72-hour window.
+
+To stop the upgrade and roll back to {{ page.prev_version }}, click **Roll back** in the banner at the top of the CockroachDB {{ site.data.products.cloud }} Console, and then click **Roll back upgrade**.
+
+
+During rollback, nodes will be reverted to the latest production patch release of {{ page.prev_version }} one at a time without interrupting the cluster's health and availability.
+
+
+
+Because your cluster contains a single node, the cluster will be briefly unavailable while the node is stopped and restarted with the latest production patch release of {{ page.prev_version }}. Be sure to [prepare for this brief unavailability](#prepare-for-brief-unavailability) before starting the rollback.
+
+
+## Step 7. Complete the upgrade
+
+If everything looks good, you can wait for the upgrade to automatically finalize, or you can manually finalize the upgrade to lift the [temporary limitations](#expect-temporary-limitations) on the cluster more quickly.
+
+### Finalize the upgrade
+
+The upgrade is automatically finalized after 72 hours.
+
+To manually finalize the upgrade, click **Finalize** in the banner at the top of the CockroachDB {{ site.data.products.cloud }} Console, and then click **Finalize upgrade**.
+
+After finalization, all [temporary limitations](#expect-temporary-limitations) will be lifted and all {{ page.page_version }} features will be available for use. However, it will no longer be possible to roll back to {{ page.prev_version }}. If you see unexpected behavior after the upgrade has been finalized, [contact support](https://support.cockroachlabs.com/hc/requests/new).
+
+## See also
+
+- [CockroachDB Cloud Upgrade Policy](https://cockroachlabs.com/docs/cockroachcloud/upgrade-policy)
+- [CockroachDB {{ page.page_version }} Release Notes](https://www.cockroachlabs.com/docs/releases/{{ page.page_version }})
diff --git a/src/current/releases/index.md b/src/current/releases/index.md
index 7469e1194e5..e42da71766f 100644
--- a/src/current/releases/index.md
+++ b/src/current/releases/index.md
@@ -1,11 +1,8 @@
---
-title: Releases
+title: CockroachDB Releases
summary: Information about CockroachDB releases with an index of available releases and their release notes and binaries.
toc: true
docs_area: releases
-toc_not_nested: true
-pre_production_preview: true
-pre_production_preview_version: v24.1.0-beta.1
---
{% comment %}Enable debug to print debug messages {% endcomment %}
@@ -17,21 +14,119 @@ of this file, block-level HTML is indented in relation to the other HTML, and bl
indented in relation to the other Liquid. Please try to keep the indentation consistent. Thank you!
{% endcomment %}
-After downloading a supported CockroachDB binary, learn how to [install CockroachDB](https://www.cockroachlabs.com/docs/stable/install-cockroachdb). Be sure to review Cockroach Labs' [Release Support Policy]({% link releases/release-support-policy.md %}).
+{% assign all_production_releases = site.data.releases | where: "release_type", "Production" | sort: "release_date" | reverse %}
+{% assign latest_full_production_version = all_production_releases | first %}
-- **Generally Available (GA)** releases (also known as Production releases) are qualified for production environments. These may have either a default GA support type or an extended LTS (Long-Term Support) designation. Refer to [Release Support Policy]({% link releases/release-support-policy.md %}) for more information.
-- **Testing** releases are intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments. Testing releases allow you to begin testing and validating the next major version of CockroachDB early.
-- **Experimental** binaries allow you to deploy and develop with CockroachDB on architectures that are not yet qualified for production use. Experimental binaries are not eligible for support or uptime SLA commitments, whether they are for testing releases or production releases.
+{% assign major_versions = all_production_releases | map: "major_version" | uniq | sort | reverse %}
+{% assign latest_major_version_with_production = major_versions | first %}
-For more details, refer to [Release Naming](#release-naming). For information about applicable software licenses, refer to [Licenses](#licenses).
+## Overview
+
+A new major version of CockroachDB is released quarterly. After a series of testing releases, each major version receives an initial production release, followed by a series of patch releases.
+
+Releases are named in the format `vYY.R.PP`, where `YY` indicates the year, `R` indicates the major release starting with `1` each year, and `PP` indicates the patch number, starting with `0`.
+
+For example, the latest production release is `{{ latest_full_production_version.release_name }}`, within major version [`{{ latest_major_version_with_production }}`]({% link releases/{{ latest_major_version_with_production }}.md %}).
+
+This page explains the types and naming of CockroachDB releases and provides access to the release notes and downloads for all CockroachDB [releases](#downloads).
+
+After choosing a version of CockroachDB, learn how to:
+
+- [Create a cluster in CockroachDB {{ site.data.products.cloud }}]({% link cockroachcloud/create-a-serverless-cluster.md %}).
+- [Upgrade a cluster in CockroachDB {{ site.data.products.cloud }}]({% link cockroachcloud/upgrade-to-{{site.current_cloud_version}}.md %}).
+- [Install CockroachDB {{ site.data.products.core }}]({% link {{site.current_cloud_version}}/install-cockroachdb.md %}).
+- [Upgrade a Self-Hosted cluster]({% link {{site.current_cloud_version}}/upgrade-cockroach-version.md %}).
+
+Be sure to review Cockroach Labs' [Release Support Policy]({% link releases/release-support-policy.md %}) and review information about applicable [software licenses](#licenses).
+
+### Release types
+
+#### Major releases
+
+As of 2024, every second major version is an **Innovation release**. For CockroachDB {{ site.data.products.core }} and CockroachDB {{ site.data.products.dedicated }}, these releases offer shorter support windows and can be skipped.
+
+All other major versions are **Regular releases**, which are required upgrades. These versions offer longer support periods, which, for self-hosted clusters, are further extended when a patch version is announced that begins their **LTS** (Long-Term Support) release series.
+
+For details on how this impacts support in CockroachDB {{ site.data.products.core }}, refer to [Release Support Policy]({% link releases/release-support-policy.md %}). For details on support per release type in CockroachDB Cloud, refer to [CockroachDB Cloud Support and Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}).
+
+| Major Release Type | Frequency | Required upgrade | LTS releases and extended support |
+| :---: | :---: | :---: | :---: |
+| Regular (e.g. v24.1) | 2x/year | on Dedicated, Serverless, Self-Hosted | Yes |
+| Innovation (e.g. v24.2) | 2x/year | on Serverless only | No* |
+\* This column does not apply to CockroachDB Serverless, where clusters are automatically upgraded when a new major version or a patch release is available, ensuring continuous support.
+
+For a given CockroachDB {{ site.data.products.core }} or Dedicated cluster, customers may choose to exclusively install or upgrade to Regular releases to benefit from longer testing and support lifecycles, or to also include Innovation releases and benefit from earlier access to new features. This choice does not apply to CockroachDB Serverless, where every major release is an automatic upgrade.
+
+CockroachDB v24.2 is an Innovation release and v24.3 is planned as a Regular release. Starting with v25.1, four major releases are expected per year, where every first and third release of the year is expected to be an Innovation release. For more details, refer to [Upcoming releases](#upcoming-releases).
+
+#### Patch releases
+
+A major version has two types of patch releases: a series of **testing releases** followed by a series of **production releases**. A major version’s initial production release is also known as its GA release.
+
+| Patch Release Type | Naming | Description |
+| :---: | :---: | :--- |
+| Production | vYY.R.0 - vYY.R.n (ex. v24.2.1) | Production releases are qualified for production environments. The type and duration of support for a production release may vary depending on the major release type, according to the Release Support Policy. |
+| Testing | vYY.R.0-alpha.N, vYY.R.0-beta.N, vYY.R.0-rc.N (ex. v24.2.0-rc.1) | Produced during development of a new major version, testing releases are intended for testing and experimentation only, and are not qualified for production environments or eligible for support or uptime SLA commitments. |
+
{{site.data.alerts.callout_danger}}
-In CockroachDB v22.2.x and above, a cluster that is upgraded to an alpha binary of CockroachDB or a binary that was manually built from the `master` branch cannot subsequently be upgraded to a production release.
+A cluster that is upgraded to an alpha binary of CockroachDB or a binary that was manually built from the `master` branch cannot subsequently be upgraded to a production release.
{{site.data.alerts.end}}
-## Staged release process
+### Staged release process
+
+As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for select CockroachDB Cloud organizations for two weeks before binaries are published for CockroachDB {{ site.data.products.core }} downloads.
+
+### Recent releases
+
+{% comment %} TODO: Automate by Aug 12, 2024, or update the morning of August 12, then automate soon after. {% endcomment %}
+| Version | Release Type | GA date | Latest patch release |
+| :---: | :---: | :---: | :---: |
+| [v24.2](#v24-2) | Innovation | 2024-08-12 | v24.2.0 |
+| [v24.1](#v24-1) | Regular | 2024-05-20 | v24.1.3 |
+| [v23.2](#v23-2) | Regular | 2024-02-05 | v23.2.9 (LTS) |
+| [v23.1](#v23-1) | Regular | 2023-05-15 | v23.1.24 (LTS) |
+
+### Upcoming releases
+
+The following releases and their descriptions represent proposed plans that are subject to change. Please contact your account representative with any questions.
+
+| Version | Release Type | Expected GA date |
+| :---: | :---: | :---: |
+| v24.3 | Regular | 2024-11-18 |
+| v25.1 | Innovation | 2025 Q1 |
+| v25.2 | Regular | 2025 Q2 |
+| v25.3 | Innovation | 2025 Q3 |
+| v25.4 | Regular | 2025 Q4 |
-As of 2024, CockroachDB is released under a staged delivery process. New releases are made available for select CockroachDB Cloud organizations for two weeks before binaries are published for CockroachDB Self-Hosted downloads.
+## Downloads
{{ experimental_js_warning }}
@@ -66,8 +161,7 @@ As of 2024, CockroachDB is released under a staged delivery process. New release
{% assign lts_patch = lts_patch_string | times: 1 %}{% comment %}Cast string to integer {% endcomment %}
{% endif %}
-
-## {{ v.major_version }}
+### {{ v.major_version }}
{% if DEBUG == true %}
has_lts_releases: {{ has_lts_releases }}
@@ -79,6 +173,11 @@ As of 2024, CockroachDB is released under a staged delivery process. New release
v.release_date: {{ v.release_date }}
v.initial_lts_release_date: {{ v.initial_lts_release_date }} {% endif %}
+{% if v.major_version == "v24.2" %}
+CockroachDB v24.2 is an [Innovation release](#major-releases), which is optional for CockroachDB {{ site.data.products.dedicated }} and CockroachDB {{ site.data.products.core }} clusters. For release support details, refer to [Major release types](#major-releases) before installing or upgrading. To learn what’s new in this release, refer to [Feature Highlights](https://www.cockroachlabs.com/docs/releases/v24.2.html).
+{% endif %}
+{% comment %}TODO: Link above to 24.2 Feature Highlights{% endcomment %}
+
@@ -111,14 +210,14 @@ As of 2024, CockroachDB is released under a staged delivery process. New release
{% assign v_docker_arm = false %}
{% for r in releases %}
- {% if r.docker.docker_arm == true %}
+ {% if r.docker.docker_arm == true %}
{% assign v_docker_arm = true %}
{% break %}
{% endif %}
{% endfor %}
{% if releases[0] %}
-### {{ s }} Releases
+#### {{ s }} Releases
@@ -434,22 +533,6 @@ macOS downloads are **experimental**. Experimental downloads are not yet qualifi
{% endfor %} {% comment %}for s in sections {% endcomment %}
{% endfor %} {% comment %}for v in versions{% endcomment %}
-## Release naming
-
-Cockroach Labs uses a three-component calendar versioning scheme to name CockroachDB [releases](https://cockroachlabs.com/docs/releases/index#production-releases). The format is `YY.R.PP`, where `YY` indicates the year, `R` indicates the release (historically “1” or “2”, representing a typical biannual cycle), and `PP` indicates the patch release version. Example: Version 23.1.0 (abbreviated v23.1.0). Leading up to a new major version's initial GA (Generally Available) release, multiple testing builds are produced, moving from Alpha to Beta to Release Candidate. CockroachDB began using this versioning scheme with v19.1.
-
-A major release is typically produced twice a year indicating major enhancements to product functionality. A change in the `YY.R` component denotes a major release.
-
-Patch releases are produced during the [support period]({% link releases/release-support-policy.md %}) for a major version to roll out critical bug and security fixes. A change in the `PP` component denotes a patch release.
-
-During development of a major version of CockroachDB, releases are produced according to the following patterns. Alpha, Beta, and Release Candidate releases are testing releases intended for testing and experimentation only, and are not qualified for production environments and not eligible for support or uptime SLA commitments.
-
-- Alpha releases are the earliest testing releases leading up to a major version's initial GA (generally available) release, and have `alpha` in the version name. Example: `v23.1.0-alpha.1`.
-- Beta releases are produced after the series of alpha releases leading up to a major version's initial GA release, and tend to be more stable and introduce fewer changes than alpha releases. They have `beta` in the version name. Example: `v23.1.0-beta.1`.
-- Release candidates are produced after the series of beta releases and are nearly identical to what will become the initial generally available (GA) release. Release candidates have `rc` in the version name. Example: `v23.1.0-rc.1`.
-- A major version's initial GA release is produced after the series of release candidates for a major version, and ends with `0`. Example: `v23.1.0`. GA releases are validated and suitable for production environments.
-- Patch (maintenance) releases are produced after a major version's GA release, and are numbered sequentially. Example: `v23.1.13`.
-
## Licenses
Unless otherwise noted, all binaries available on this page are variously licensed under the Business Source License 1.1 (BSL), the CockroachDB Community License (CCL), and other licenses specified in the source code. To determine whether BSL or CCL applies to a CockroachDB feature, refer to the [Licensing FAQs](https://www.cockroachlabs.com/docs/stable/licensing-faqs) page under Feature Licensing. The default license for any feature that is not listed is the CCL.
diff --git a/src/current/releases/release-support-policy.md b/src/current/releases/release-support-policy.md
index e897b9b4229..9d3c9e58d2d 100644
--- a/src/current/releases/release-support-policy.md
+++ b/src/current/releases/release-support-policy.md
@@ -2,6 +2,7 @@
title: Release Support Policy
summary: Learn about Cockroach Labs' policy for supporting major releases of CockroachDB.
toc: true
+toc_not_nested: true
docs_area: releases
---
@@ -9,45 +10,58 @@ docs_area: releases
{% assign versions = site.data.versions | where_exp: "versions", "versions.release_date <= today" | sort: "release_date" | reverse %} {% comment %} Get all versions (e.g., v21.2) sorted in reverse chronological order. {% endcomment %}
-This page explains Cockroach Labs' policy for supporting [production releases]({% link releases/index.md %}) of CockroachDB Self-Hosted. For clusters deployed in {{ site.data.products.cloud }}, refer to the [CockroachDB {{ site.data.products.cloud }} Support and Upgrade Policy](https://www.cockroachlabs.com/docs/cockroachcloud/upgrade-policy).
+This page explains Cockroach Labs' policy for supporting [production releases]({% link releases/index.md %}) of CockroachDB {{ site.data.products.core }}. For clusters deployed in {{ site.data.products.cloud }}, refer to the [CockroachDB {{ site.data.products.cloud }} Support and Upgrade Policy]({% link cockroachcloud/upgrade-policy.md %}).
-There are two support types: GA and LTS (Long-Term Support). Each patch release of CockroachDB is assigned one of these types. The default is GA, unless otherwise specified.
-
-Initially, a major release series has GA support. After the series demonstrates a continuously high level of stability and performance, new patch releases are designated as LTS releases, which provide extended support windows. Specifically, the distinction determines the time spans of a release’s support phases: Maintenance Support, Assistance Support, and EOL (End of Life).
+There are two major release types: [Regular and Innovation releases]({% link releases/index.md %}#release-types). Each has its own set of Support Types, which determine the duration of each [support phase](#support-phases).
## Support Phases
-- **Maintenance Support**: Cockroach Labs will produce regular patch releases that include critical security fixes and resolutions to problems identified by users.
-
-- **Assistance Support**: Immediately follows the Maintenance Support period. During this period, the following guidelines apply:
- - New enhancements will not be made to the major release.
+- **Maintenance Support**: Begins for a CockroachDB major version upon its [GA release]({% link releases/index.md %}#patch-releases). During this phase:
+ - Cockroach Labs will produce regular patch releases that include critical security fixes and resolutions to problems identified by users.
+ - Cockroach Labs may backport non-breaking enhancements produced for newer major versions.
+ - Cockroach Labs may direct customers to workarounds or other fixes applicable to a reported case.
+ - Cockroach Labs may recommend that customers [upgrade](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) to a later version of the product to resolve or further troubleshoot an issue.
+- **Assistance Support**: Immediately follows the Maintenance Support phase for Regular releases. Innovation releases do not have an Assistance Support phase. During this phase:
+ - Feature enhancements will no longer be made available to the major release.
- Cockroach Labs will continue to add critical security fixes to the major release in the form of patch releases.
- Patch releases for the purpose of resolving bugs or other errors may no longer be made to the major release.
- Cockroach Labs may direct customers to workarounds or other fixes applicable to the reported case.
- - Cockroach Labs may direct customers to [upgrade](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) to a later version of the product, to resolve or further troubleshoot an issue.
-
-- **End of Life (EOL)**: Following the assistance support period, Cockroach Labs will no longer provide any support for the release.
+ - Cockroach Labs may direct customers to [upgrade](https://www.cockroachlabs.com/docs/stable/upgrade-cockroach-version) to a later version of CockroachDB to resolve or further troubleshoot an issue.
+- **End of Life (EOL)**: The day that a major version’s final support phase ends is its EOL date. After a version reaches EOL, Cockroach Labs provides no further support for the release.
+ - A Regular release reaches EOL at the Assistance Support phase's end date.
+  - An Innovation release reaches EOL at the Maintenance Support phase's end date.
## Support Types
-* **GA Support**: The default support type for production releases, starting with the initial production release of a major version, followed by each subsequent patch release before LTS releases begin for that major version.
- * **Maintenance support ends**:
- * **365 days** **after** the day of the **first production release** of the major version (i.e. the ‘GA release,’ ending in .0).
- * **Assistance support ends**:
- * **180 days after** the **Maintenance Support end date** of the release.
- * Major versions prior to v23.1 will not have LTS releases.
-* **LTS (Long-Term Support)**: Conferred to an initial LTS maintenance release of a given major version and its subsequent maintenance releases. LTS provides extended support windows while also indicating our highest level of expected release stability and performance.
- * **Maintenance support ends**:
- * **365 days** **after** the day of the **first LTS release** of the major version.
- * **Assistance support ends**:
- * **365 days after** the **Maintenance Support end date** of the release.
+### Regular releases
+
+Initially, a Regular release series has GA Support. After the series demonstrates a continuously high level of stability and performance, new patch releases are designated as LTS releases, which provide extended windows for the Maintenance Support and Assistance Support [support phases](#support-phases) and therefore a later EOL (End of Life) date.
-## Current supported releases
+- **GA Support**: The default support type for production releases. It applies to the initial production release of a major version and to each subsequent patch release until LTS releases begin for that major version.
+ - **Maintenance support ends**:
+    - **365 days after** the day of the **first production release** of the major version (i.e., the ‘GA release,’ ending in .0).
+ - **Assistance support ends**:
+ - **180 days after** the **Maintenance Support end date** of the release.
+ - Major versions prior to v23.1 will not have LTS releases.
+- **LTS (Long-Term Support)**: Conferred on the initial LTS maintenance release of a given major version and on its subsequent maintenance releases. LTS provides extended support windows and indicates our highest level of expected release stability and performance.
+ - **Maintenance support ends**:
+    - **365 days after** the day of the **first LTS release** of the major version.
+ - **Assistance support ends**:
+ - **365 days after** the **Maintenance Support end date** of the release.
-As of v19.1, Cockroach Labs uses a three-component calendar versioning scheme. Prior releases use a different versioning scheme. For more details, see [Release Naming]({% link releases/index.md %}#release-naming).
+### Innovation releases
-Date format: YYYY-MM-DD
+Innovation releases do not have LTS releases.
+- **Innovation Support**:
+  - **Maintenance support ends**:
+ - **180 days after** the day of the **first production release** of the major version.
+
+Innovation releases are not eligible for Assistance Support, and reach EOL at the end of Maintenance Support.
+
+## Supported versions
+
+{% comment %}TODO: Bring in updated logic for Innovation{% endcomment %}
@@ -120,10 +134,11 @@ Date format: YYYY-MM-DD
* : This major version will receive LTS patch releases, which will be listed on an additional row, upon their availability.
+** : This major version is an optional Innovation release and will not receive LTS patch releases. Innovation releases reach EOL when Maintenance Support ends.
-## End-of-life (EOL) releases
+## End-of-life (EOL) versions
-The following releases are no longer supported.
+The following versions of CockroachDB are no longer supported.
@@ -182,3 +197,5 @@ The following releases are no longer supported.
{% endfor %} {% comment %} Display each EOL version, its release date, its maintenance support expiration date, and its assistance support expiration date, and its LTS maintenance and assistance support dates. Also include links to the latest hotfix version. {% endcomment %}
+
+** : This EOL major version is an optional Innovation release. Innovation releases do not receive LTS patch releases and reach EOL when Maintenance Support ends.
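
For anyone sanity-checking the windows added in the diff above, the support durations reduce to simple date arithmetic. Below is a minimal sketch in Python (not part of this docs change; the function name, argument names, and return shape are illustrative assumptions) of how the phase end dates fall out for Regular releases under GA and LTS support and for Innovation releases. Published dates in `_data/versions.csv` are set on calendar anniversaries, so they can differ from raw day counts by a day or so around leap years.

```python
from datetime import date, timedelta
from typing import Optional


def support_end_dates(ga_date: date,
                      release_type: str,
                      first_lts_date: Optional[date] = None) -> dict:
    """Sketch of the phase end dates described in the policy diff above.

    release_type is "regular" or "innovation"; first_lts_date is the date of
    the first LTS patch release of a Regular series, if one has shipped.
    """
    if release_type == "innovation":
        # Innovation: Maintenance Support only, ending 180 days after GA;
        # there is no Assistance Support phase, so EOL falls on the same day.
        maintenance_end = ga_date + timedelta(days=180)
        return {"maintenance_end": maintenance_end,
                "assistance_end": None,
                "eol": maintenance_end}

    # Regular series under GA Support: 365 days of Maintenance Support,
    # then 180 days of Assistance Support.
    maintenance_end = ga_date + timedelta(days=365)
    assistance_end = maintenance_end + timedelta(days=180)

    if first_lts_date is not None:
        # Once LTS patch releases begin, the windows restart from the first
        # LTS release: 365 days of Maintenance plus 365 days of Assistance.
        maintenance_end = first_lts_date + timedelta(days=365)
        assistance_end = maintenance_end + timedelta(days=365)

    return {"maintenance_end": maintenance_end,
            "assistance_end": assistance_end,
            "eol": assistance_end}


# Illustrative usage only; refer to _data/versions.csv for the published dates.
print(support_end_dates(date(2023, 5, 15), "regular",
                        first_lts_date=date(2023, 11, 13)))
print(support_end_dates(date(2024, 1, 1), "innovation"))
```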
From fe344cdeb1de27cf03f9c103ec3a7fd4058b81cd Mon Sep 17 00:00:00 2001
From: Matt Linville
Date: Fri, 9 Aug 2024 17:02:25 -0700
Subject: [PATCH 15/15] Re-enable testing date
---
src/current/_data/versions.csv | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/current/_data/versions.csv b/src/current/_data/versions.csv
index 24e13a5d442..aafd560d1e6 100644
--- a/src/current/_data/versions.csv
+++ b/src/current/_data/versions.csv
@@ -14,4 +14,4 @@ v22.2,2022-12-05,2023-12-05,2024-06-05,N/A,N/A,N/A,N/A,N/A,v22.1,release-22.2
v23.1,2023-05-15,2024-05-15,2024-11-15,23.1.11,23.1.12,2023-11-13,2024-11-13,2025-11-13,v22.2,release-23.1
v23.2,2024-02-05,2025-02-05,2025-08-05,23.2.6,23.2.7,2024-07-08,2025-07-08,2026-07-08,v23.1,release-23.2
v24.1,2024-05-20,2025-05-20,2025-11-20,N/A,N/A,N/A,N/A,N/A,v23.2,release-24.1
-v24.2,N/A,N/A,N/A,N/A,N/A,N/A,N/A,N/A,v24.1,master
+v24.2,2024-08-08,2025-08-08,N/A,N/A,N/A,N/A,N/A,N/A,v24.1,release-24.2
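
The one-line `versions.csv` change above is what makes v24.2 appear on the release pages: the Liquid assign earlier in this patch series filters `site.data.versions` to rows whose `release_date` is on or before the build date, so a row carrying `N/A` never renders. The following Python sketch is a rough equivalent of that filter outside Jekyll; the file path is the repo-relative one from the diff, and the only assumption is that the CSV has a header row with a `release_date` column (implied by the Liquid `where_exp`). How Liquid itself handles the `N/A` placeholder may differ; this sketch simply skips it.

```python
import csv
from datetime import date


def visible_versions(csv_path: str, today: date) -> list[dict]:
    """Rows the release pages would render, mirroring the Liquid filter
    where_exp: "versions.release_date <= today" and the reverse date sort."""
    kept = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            raw = row.get("release_date", "N/A")
            if raw == "N/A":
                # Unreleased versions (like the old v24.2 row) are skipped.
                continue
            if date.fromisoformat(raw) <= today:
                kept.append(row)
    # ISO dates sort correctly as strings; newest first, as on the page.
    return sorted(kept, key=lambda r: r["release_date"], reverse=True)


# With release_date = N/A, v24.2 never appeared; with 2024-08-08 it renders
# on any build dated 2024-08-08 or later.
for row in visible_versions("src/current/_data/versions.csv", date(2024, 8, 9)):
    print(row["release_date"], list(row.values())[0])  # first column is the version
```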