
Commit cb68868

Doc: Fix external links (#17288)
colleenmcginnis authored Mar 6, 2025
1 parent feb2b92 commit cb68868
Showing 31 changed files with 63 additions and 63 deletions.
docs/extend/codec-new-plugin.md (2 changes: 1 addition & 1 deletion)

@@ -305,7 +305,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
 Gemfiles allow Ruby’s Bundler to maintain the dependencies for your plugin. Currently, all we’ll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
 
 ::::{tip}
-See [Bundler’s Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler’s Gemfile page](http://bundler.io/gemfile.html) for more details.
 ::::

docs/extend/filter-new-plugin.md (2 changes: 1 addition & 1 deletion)

@@ -306,7 +306,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
 Gemfiles allow Ruby’s Bundler to maintain the dependencies for your plugin. Currently, all we’ll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
 
 ::::{tip}
-See [Bundler’s Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler’s Gemfile page](http://bundler.io/gemfile.html) for more details.
 ::::

docs/extend/input-new-plugin.md (2 changes: 1 addition & 1 deletion)

@@ -346,7 +346,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
 Gemfiles allow Ruby’s Bundler to maintain the dependencies for your plugin. Currently, all we’ll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
 
 ::::{tip}
-See [Bundler’s Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler’s Gemfile page](http://bundler.io/gemfile.html) for more details.
 ::::

docs/extend/output-new-plugin.md (2 changes: 1 addition & 1 deletion)

@@ -263,7 +263,7 @@ Deprecations are noted in the `logstash-deprecation.log` file in the `log` direc
 Gemfiles allow Ruby’s Bundler to maintain the dependencies for your plugin. Currently, all we’ll need is the Logstash gem, for testing, but if you require other gems, you should add them in here.
 
 ::::{tip}
-See [Bundler’s Gemfile page](http://bundler.io/gemfile.md) for more details.
+See [Bundler’s Gemfile page](http://bundler.io/gemfile.html) for more details.
 ::::

docs/reference/event-dependent-configuration.md (4 changes: 2 additions & 2 deletions)

@@ -59,7 +59,7 @@ output {
 
 Similarly, you can convert the UTC timestamp in the `@timestamp` field into a string.
 
-Instead of specifying a field name inside the curly braces, use the `%{{FORMAT}}` syntax where `FORMAT` is a [java time format](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.md#patterns).
+Instead of specifying a field name inside the curly braces, use the `%{{FORMAT}}` syntax where `FORMAT` is a [java time format](https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/time/format/DateTimeFormatter.html#patterns).
 
 For example, if you want to use the file output to write logs based on the event’s UTC date and hour and the `type` field:
 
@@ -72,7 +72,7 @@ output {
 ```
 
 ::::{note}
-The sprintf format continues to support [deprecated joda time format](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.md) strings as well using the `%{+FORMAT}` syntax. These formats are not directly interchangeable, and we advise you to begin using the more modern Java Time format.
+The sprintf format continues to support [deprecated joda time format](http://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html) strings as well using the `%{+FORMAT}` syntax. These formats are not directly interchangeable, and we advise you to begin using the more modern Java Time format.
 ::::

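For orientation, a minimal sketch of the `%{{FORMAT}}` syntax this hunk documents, in the spirit of the elided example (the `type` field and log path are assumptions for illustration):

```
output {
  file {
    # Java time format (preferred): double curly braces around the pattern.
    # The deprecated joda form would be "%{+yyyy.MM.dd.HH}"; the two pattern
    # languages are not directly interchangeable.
    path => "/var/log/%{type}.%{{yyyy.MM.dd.HH}}.log"
  }
}
```
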
docs/reference/getting-started-with-logstash.md (2 changes: 1 addition & 1 deletion)

@@ -23,7 +23,7 @@ This section includes the following topics:
 * Java 17 (default). Check out [Using JDK 17](#jdk17-upgrade) for settings info.
 * Java 21
 
-Use the [official Oracle distribution](http://www.oracle.com/technetwork/java/javase/downloads/index.md) or an open-source distribution, such as [OpenJDK](http://openjdk.java.net/). See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_jvm) for the official word on supported versions across releases.
+Use the [official Oracle distribution](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or an open-source distribution, such as [OpenJDK](http://openjdk.java.net/). See the [Elastic Support Matrix](https://www.elastic.co/support/matrix#matrix_jvm) for the official word on supported versions across releases.
 
 ::::{admonition} Bundled JDK
 :class: note

docs/reference/logging.md (2 changes: 1 addition & 1 deletion)

@@ -18,7 +18,7 @@ You can configure logging using the `log4j2.properties` file or the Logstash API
 
 ## Log4j2 configuration [log4j2]
 
-Logstash ships with a `log4j2.properties` file with out-of-the-box settings, including logging to console. You can modify this file to change the rotation policy, type, and other [log4j2 configuration](https://logging.apache.org/log4j/2.x/manual/configuration.md#Loggers).
+Logstash ships with a `log4j2.properties` file with out-of-the-box settings, including logging to console. You can modify this file to change the rotation policy, type, and other [log4j2 configuration](https://logging.apache.org/log4j/2.x/manual/configuration.html#Loggers).
 
 You must restart Logstash to apply any changes that you make to this file. Changes to `log4j2.properties` persist after Logstash is restarted.

docs/reference/plugins-filters-aggregate.md (4 changes: 2 additions & 2 deletions)

@@ -373,7 +373,7 @@ Available variables are:
 
 `event`: current Logstash event
 
-`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.md)
+`map`: aggregated map associated to `task_id`, containing key/value pairs. Data structure is a ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html)
 
 `map_meta`: meta informations associated to aggregate map. It allows to set a custom `timeout` or `inactivity_timeout`. It allows also to get `creation_timestamp`, `lastevent_timestamp` and `task_id`.
 
@@ -406,7 +406,7 @@ To create additional events during the code execution, to be emitted immediately
 }
 ```
 
-The parameter of the function `new_event_block.call` must be of type `LogStash::Event`. To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`. `LogStash::Event.new()` can receive a parameter of type ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.md) to initialize the new event fields.
+The parameter of the function `new_event_block.call` must be of type `LogStash::Event`. To create such an object, the constructor of the same class can be used: `LogStash::Event.new()`. `LogStash::Event.new()` can receive a parameter of type ruby [Hash](http://ruby-doc.org/core-1.9.1/Hash.html) to initialize the new event fields.
 
 
 ### `end_of_task` [plugins-filters-aggregate-end_of_task]

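A rough illustration of the `map` and `new_event_block` variables these hunks describe, as a filter fragment (the task ID and field names are invented for the sketch):

```
filter {
  aggregate {
    task_id => "%{transaction_id}"
    code => "
      # 'map' is the ruby Hash tied to this task_id
      map['duration'] ||= 0
      map['duration'] += event.get('step_duration').to_f
      # emit an additional event immediately, initialized from a ruby Hash
      new_event_block.call(LogStash::Event.new({ 'transaction_id' => event.get('transaction_id'), 'partial' => true }))
    "
  }
}
```
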
docs/reference/plugins-filters-date.md (6 changes: 3 additions & 3 deletions)

@@ -193,7 +193,7 @@ Z
 : Timezone offset structured as HH:mm (colon in between hour and minute offsets). Example: `-07:00`.
 
 ZZZ
-: Timezone identity. Example: `America/Los_Angeles`. Note: Valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.md).
+: Timezone identity. Example: `America/Los_Angeles`. Note: Valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html).
 
 
 z
@@ -227,7 +227,7 @@ E
 
 For non-formatting syntax, you’ll need to put single-quote characters around the value. For example, if you were parsing ISO8601 time, "2015-01-01T01:12:23" that little "T" isn’t a valid time format, and you want to say "literally, a T", your format would be this: "yyyy-MM-dd’T’HH:mm:ss"
 
-Other less common date units, such as era (G), century (C), am/pm (a), and # more, can be learned about on the [joda-time documentation](http://www.joda.org/joda-time/key_format.md).
+Other less common date units, such as era (G), century (C), am/pm (a), and # more, can be learned about on the [joda-time documentation](http://www.joda.org/joda-time/key_format.html).
 
 
 ### `tag_on_failure` [plugins-filters-date-tag_on_failure]
@@ -251,7 +251,7 @@ Store the matching timestamp into the given target field. If not provided, defa
 * Value type is [string](/reference/configuration-file-structure.md#string)
 * There is no default value for this setting.
 
-Specify a time zone canonical ID to be used for date parsing. The valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.md). This is useful in case the time zone cannot be extracted from the value, and is not the platform default. If this is not specified the platform default will be used. Canonical ID is good as it takes care of daylight saving time for you For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. This field can be dynamic and include parts of the event using the `%{{field}}` syntax
+Specify a time zone canonical ID to be used for date parsing. The valid IDs are listed on the [Joda.org available time zones page](http://joda-time.sourceforge.net/timezones.html). This is useful in case the time zone cannot be extracted from the value, and is not the platform default. If this is not specified the platform default will be used. Canonical ID is good as it takes care of daylight saving time for you For example, `America/Los_Angeles` or `Europe/Paris` are valid IDs. This field can be dynamic and include parts of the event using the `%{{field}}` syntax
 

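A minimal sketch of the `timezone` option in use with a match pattern built from the units above (the `logdate` field name and pattern are assumptions):

```
filter {
  date {
    match    => [ "logdate", "yyyy-MM-dd HH:mm:ss" ]
    # Canonical ID; handles daylight saving transitions automatically
    timezone => "America/Los_Angeles"
  }
}
```
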
docs/reference/plugins-filters-jdbc_static.md (2 changes: 1 addition & 1 deletion)

@@ -447,7 +447,7 @@ name
 : The name of the table to be created in the database.
 columns
-: An array of column specifications. Each column specification is an array of exactly two elements, for example `["ip", "varchar(15)"]`. The first element is the column name string. The second element is a string that is an [Apache Derby SQL type](https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.md). The string content is checked when the local lookup tables are built, not when the settings are validated. Therefore, any misspelled SQL type strings result in errors.
+: An array of column specifications. Each column specification is an array of exactly two elements, for example `["ip", "varchar(15)"]`. The first element is the column name string. The second element is a string that is an [Apache Derby SQL type](https://db.apache.org/derby/docs/10.14/ref/crefsqlj31068.html). The string content is checked when the local lookup tables are built, not when the settings are validated. Therefore, any misspelled SQL type strings result in errors.
 index_columns
 : An array of strings. Each string must be defined in the `columns` setting. The index name will be generated internally. Unique or sorted indexes are not supported.

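A fragment sketching how such a table definition might look inside `local_db_objects` (table and column names invented; a complete filter also needs loaders, lookups, and JDBC connection settings):

```
filter {
  jdbc_static {
    local_db_objects => [
      {
        name => "servers"
        index_columns => ["ip"]
        columns => [
          ["ip", "varchar(15)"],     # column name, then an Apache Derby SQL type
          ["descr", "varchar(255)"]
        ]
      }
    ]
    # loaders, local_lookups, and jdbc_* connection options omitted here
  }
}
```
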
docs/reference/plugins-inputs-cloudwatch.md (6 changes: 3 additions & 3 deletions)

@@ -214,7 +214,7 @@ The default, `900`, means check every 15 minutes. Setting this value too low (ge
 * Value type is [array](/reference/configuration-file-structure.md#array)
 * Default value is `["CPUUtilization", "DiskReadOps", "DiskWriteOps", "NetworkIn", "NetworkOut"]`
 
-Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specific. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.md) for the available metrics for other namespaces.
+Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specific. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html) for the available metrics for other namespaces.
 
 
 ### `namespace` [plugins-inputs-cloudwatch-namespace]
@@ -224,7 +224,7 @@ Specify the metrics to fetch for the namespace. The defaults are AWS/EC2 specifi
 
 If undefined, LogStash will complain, even if codec is unused. The service namespace of the metrics to fetch.
 
-The default is for the EC2 service. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.md) for valid values.
+The default is for the EC2 service. See [http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/aws-namespaces.html) for valid values.
 
 
 ### `period` [plugins-inputs-cloudwatch-period]
@@ -258,7 +258,7 @@ The AWS Region
 * Value type is [string](/reference/configuration-file-structure.md#string)
 * There is no default value for this setting.
 
-The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.md) for more information.
+The AWS IAM Role to assume, if any. This is used to generate temporary credentials, typically for cross-account access. See the [AssumeRole API documentation](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) for more information.
 
 
 ### `role_session_name` [plugins-inputs-cloudwatch-role_session_name]

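A minimal sketch combining the `namespace` and `metrics` options discussed in these hunks (the region, tag filter, and metric choice are illustrative only):

```
input {
  cloudwatch {
    namespace => "AWS/EC2"
    metrics   => [ "CPUUtilization" ]
    filters   => { "tag:Group" => "web-servers" }
    region    => "us-east-1"
  }
}
```
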
docs/reference/plugins-inputs-couchdb_changes.md (2 changes: 1 addition & 1 deletion)

@@ -20,7 +20,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
 
 ## Description [_description_12]
 
-This CouchDB input allows you to automatically stream events from the CouchDB [_changes](http://guide.couchdb.org/draft/notifications.md) URI. Moreover, any "future" changes will automatically be streamed as well making it easy to synchronize your CouchDB data with any target destination
+This CouchDB input allows you to automatically stream events from the CouchDB [_changes](http://guide.couchdb.org/draft/notifications.html) URI. Moreover, any "future" changes will automatically be streamed as well making it easy to synchronize your CouchDB data with any target destination
 
 ### Upsert and delete [_upsert_and_delete]

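For orientation, a bare-bones configuration for following a database's `_changes` feed might look like this (host and database name assumed):

```
input {
  couchdb_changes {
    host => "localhost"
    port => 5984
    db   => "my_database"
  }
}
```
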
docs/reference/plugins-inputs-jms.md (2 changes: 1 addition & 1 deletion)

@@ -22,7 +22,7 @@ For questions about the plugin, open a topic in the [Discuss](http://discuss.ela
 
 Read events from a Jms Broker. Supports both Jms Queues and Topics.
 
-For more information about Jms, see [https://javaee.github.io/tutorial/jms-concepts.html](https://javaee.github.io/tutorial/jms-concepts.md). For more information about the Ruby Gem used, see [http://github.com/reidmorrison/jruby-jms](http://github.com/reidmorrison/jruby-jms).
+For more information about Jms, see [https://javaee.github.io/tutorial/jms-concepts.html](https://javaee.github.io/tutorial/jms-concepts.html). For more information about the Ruby Gem used, see [http://github.com/reidmorrison/jruby-jms](http://github.com/reidmorrison/jruby-jms).
 
 JMS configurations can be done either entirely in the Logstash configuration file, or in a mixture of the Logstash configuration file, and a specified yaml file. Simple configurations that do not need to make calls to implementation specific methods on the connection factory can be specified entirely in the Logstash configuration, whereas more complex configurations, should also use the combination of yaml file and Logstash configuration.

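A rough sketch of the mixed style the hunk describes, with broker-specific settings delegated to a yaml file (the queue name, file path, and section name are invented):

```
input {
  jms {
    destination  => "events_queue"
    # broker-specific connection details live in the referenced yaml file
    yaml_file    => "/etc/logstash/jms.yml"
    yaml_section => "activemq"
  }
}
```
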
docs/reference/plugins-inputs-kafka.md (12 changes: 6 additions & 6 deletions)

@@ -48,9 +48,9 @@ Logstash instances by default form a single logical group to subscribe to Kafka
 
 Ideally you should have as many threads as the number of partitions for a perfect balance — more threads than partitions means that some threads will be idle
 
-For more information see [https://kafka.apache.org/38/documentation.html#theconsumer](https://kafka.apache.org/38/documentation.md#theconsumer)
+For more information see [https://kafka.apache.org/38/documentation.html#theconsumer](https://kafka.apache.org/38/documentation.html#theconsumer)
 
-Kafka consumer configuration: [https://kafka.apache.org/38/documentation.html#consumerconfigs](https://kafka.apache.org/38/documentation.md#consumerconfigs)
+Kafka consumer configuration: [https://kafka.apache.org/38/documentation.html#consumerconfigs](https://kafka.apache.org/38/documentation.html#consumerconfigs)
 
 
 ## Metadata fields [_metadata_fields]
@@ -62,7 +62,7 @@ The following metadata from Kafka broker are added under the `[@metadata]` field
 * `[@metadata][kafka][partition]`: Partition info for this message.
 * `[@metadata][kafka][offset]`: Original record offset for this message.
 * `[@metadata][kafka][key]`: Record key, if any.
-* `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at [https://kafka.apache.org/38/documentation.html#brokerconfigs](https://kafka.apache.org/38/documentation.md#brokerconfigs)
+* `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at [https://kafka.apache.org/38/documentation.html#brokerconfigs](https://kafka.apache.org/38/documentation.html#brokerconfigs)
 
 Metadata is only added to the event if the `decorate_events` option is set to `basic` or `extended` (it defaults to `none`).
 
@@ -384,7 +384,7 @@ Please note that specifying `jaas_path` and `kerberos_config` in the config file
 * Value type is [path](/reference/configuration-file-structure.md#path)
 * There is no default value for this setting.
 
-Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.md)
+Optional path to kerberos config file. This is krb5.conf style as detailed in [https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html](https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html)
 
 
 ### `key_deserializer_class` [plugins-inputs-kafka-key_deserializer_class]
@@ -439,7 +439,7 @@ The name of the partition assignment strategy that the client uses to distribute
 * `sticky`
 * `cooperative_sticky`
 
-These map to Kafka’s corresponding [`ConsumerPartitionAssignor`](https://kafka.apache.org/38/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.md) implementations.
+These map to Kafka’s corresponding [`ConsumerPartitionAssignor`](https://kafka.apache.org/38/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html) implementations.
 
 
 ### `poll_timeout_ms` [plugins-inputs-kafka-poll_timeout_ms]
@@ -581,7 +581,7 @@ The Kerberos principal name that Kafka broker runs as. This can be defined eithe
 * Value type is [string](/reference/configuration-file-structure.md#string)
 * Default value is `"GSSAPI"`
 
-[SASL mechanism](http://kafka.apache.org/documentation.md#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
+[SASL mechanism](http://kafka.apache.org/documentation.html#security_sasl) used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
 
 
 ### `schema_registry_key` [plugins-inputs-kafka-schema_registry_key]

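Pulling the threads-per-partition and `decorate_events` advice from these hunks into one illustrative input (broker address, topic, group, and thread count are assumptions for the sketch):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics            => ["app-logs"]
    group_id          => "logstash"
    consumer_threads  => 4        # ideally equal to the topic's partition count
    decorate_events   => "basic"  # populate the [@metadata][kafka][...] fields
    partition_assignment_strategy => "cooperative_sticky"
  }
}
```
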
(Diffs for the remaining changed files are not shown.)
