diff --git a/docs/integrations/web-servers/apache-tomcat.md b/docs/integrations/web-servers/apache-tomcat.md index 8bc9ae38a5..df92083507 100644 --- a/docs/integrations/web-servers/apache-tomcat.md +++ b/docs/integrations/web-servers/apache-tomcat.md @@ -19,24 +19,7 @@ Before installing the Sumo Logic app, Apache Tomcat must be set up and configure This section provides instructions for configuring log and metric collection for the Sumo Logic app for Apache Tomcat. Configuring log and metric collection for the Apache Tomcat app includes the following tasks. -### Step 1: Configure fields in Sumo Logic - -As part of the app installation process, the following fields will be created by default: -* `component` -* `environment` -* `webserver_system` -* `webserver_farm` -* `pod` - -Additionally, if you are using Apache Tomcat in the Kubernetes environment, the following additional fields will be created by default during the app installation process: -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_webserver_system` -* `pod_labels_webserver_farm` - -For information on setting up fields, see [Fields](/docs/manage/fields). - -### Step 2: Configure Collection for Apache Tomcat +### Configure collection for Apache Tomcat -The first service in the pipeline is Telegraf. Telegraf collects metrics from Apache Tomcat. Note that we’re running Telegraf in each pod we want to collect metrics from as a sidecar deployment, for example, Telegraf runs in the same pod as the containers it monitors. Telegraf uses the Apache Tomcat and Jolokia2 input plugin to obtain metrics. For simplicity, the diagram doesn’t show the input plugins. The injection of the Telegraf sidecar container is done by the Telegraf Operator. Prometheus pulls metrics from Telegraf and sends them to [Sumo Logic Distribution for OpenTelemetry Collector](https://github.com/SumoLogic/sumologic-otel-collector), which enriches metadata and sends metrics to Sumo Logic. 
+The first service in the pipeline is Telegraf. Telegraf collects metrics from Apache Tomcat. Note that we’re running Telegraf in each pod we want to collect metrics from as a sidecar deployment; that is, Telegraf runs in the same pod as the containers it monitors. Telegraf uses the Apache Tomcat and Jolokia2 input plugins to obtain metrics. For simplicity, the diagram doesn’t show the input plugins. The injection of the Telegraf sidecar container is done by the Telegraf Operator. Prometheus pulls metrics from Telegraf and sends them to [Sumo Logic Distribution for OpenTelemetry Collector](https://github.com/SumoLogic/sumologic-otel-collector), which enriches metadata and sends metrics to Sumo Logic. In the logs pipeline, Sumo Logic Distribution for OpenTelemetry Collector collects logs written to standard out and forwards them to another instance of Sumo Logic Distribution for OpenTelemetry Collector, which enriches metadata and sends logs to Sumo Logic. Follow the instructions below to set up metrics collection: [Step 1: Configure Metrics Collection](#step-1-configure-metrics-collection) -1. Setup Kubernetes Collection with the Telegraf operator. +1. Set up Kubernetes Collection with the Telegraf operator. 2. Add annotations on your Apache Tomcat pods. [Step 2: Configure Logs Collection](#step-2-configure-logs-collection) @@ -72,18 +55,18 @@ Follow the below instructions to set up metrics collection: It’s assumed that you are using the latest Helm chart version. If not, upgrade using the instructions [here](/docs/send-data/kubernetes). -#### Step 1: Configure Metrics Collection +### Step 1: Configure metrics collection This section explains the steps to collect Apache Tomcat metrics from a Kubernetes environment. In Kubernetes environments, we use the Telegraf Operator, which is packaged with our Kubernetes collection. You can learn more [here](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/telegraf-collection-architecture). 
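As a rough sketch of the sidecar pattern described above (the pod name, image tag, port, and Jolokia path here are illustrative placeholders, not required values), the Telegraf Operator injects the Telegraf sidecar based on pod annotations such as:

```yml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat
  annotations:
    # Read by the Telegraf Operator at admission time; it injects a
    # Telegraf sidecar container configured with the inputs below.
    telegraf.influxdata.com/class: sumologic-prometheus
    telegraf.influxdata.com/inputs: |+
      [[inputs.jolokia2_agent]]
        urls = ["http://localhost:8080/jolokia"]
    # Prometheus then scrapes the sidecar's exporter endpoint.
    prometheus.io/scrape: "true"
    prometheus.io/port: "9273"
spec:
  containers:
    - name: tomcat
      image: tomcat:9.0
```

The complete, required annotation set for this app is covered in the steps on this page.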
Follow the steps listed below to collect metrics from a Kubernetes environment: 1. [Set up Kubernetes Collection with the Telegraf Operator](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf). Ensure that you are monitoring your Kubernetes clusters with the Telegraf operator **enabled**. If you are not, then please follow [these instructions](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf) to do so. -2. Install jolokia on your Tomcat Pod to use the Jolokia Telegraf Input Plugin: +2. Install Jolokia on your Tomcat Pod to use the Jolokia Telegraf Input Plugin: * Download the latest version of the Jolokia war file from: [https://jolokia.org/download.html](https://jolokia.org/download.html). - * Rename the file from jolokia-war-X.X.X.war to jolokia.war - * Create a configMap **jolokia** from the binary file `kubectl create configmap jolokia --from-file=jolokia.jar` - * Create volume mount the jolokia.war file to `${TOMCAT_HOME}/webapps` + * Rename the file from jolokia-war-X.X.X.war to jolokia.war. + * Create a configMap **jolokia** from the binary file: `kubectl create configmap jolokia --from-file=jolokia.war`. + * Create a volume mount for the jolokia.war file at `${TOMCAT_HOME}/webapps`. ```yml spec: volumes: @@ -107,7 +90,7 @@ In Kubernetes environments, we use the Telegraf Operator, which is packaged with ``` - **Verification Step**: You can ssh to Tomcat pod and run following commands to make sure Telegraf (and Jolokia) is scraping metrics from your Tomcat Pod: + **Verification Step**: You can ssh to the Tomcat pod and run the following commands to make sure Telegraf (and Jolokia) is scraping metrics from your Tomcat Pod: ```bash curl localhost:9273/metrics ``` @@ -187,21 +170,21 @@ In Kubernetes environments, we use the Telegraf Operator, which is packaged with paths = ["hitCount","lookupCount"] tag_keys = ["context","host"] ``` - * `telegraf.influxdata.com/inputs`. 
This contains the required configuration for the Telegraf Tomcat Input plugin. Refer [to this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis) for more information on configuring the Tomcat input plugin for Telegraf. Note: As telegraf will be run as a sidecar, the host should always be localhost. + * `telegraf.influxdata.com/inputs`. This contains the required configuration for the Telegraf Tomcat Input plugin. Refer [to this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tomcat) for more information on configuring the Tomcat input plugin for Telegraf. Note: As Telegraf will be run as a sidecar, the host should always be localhost. * In the input plugins section, which is `[[inputs.Tomcat]]`: * `servers` - The URL to the Tomcat server. This can be a comma-separated list to connect to multiple Tomcat servers. Please see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tomcat) for more information on additional parameters for configuring the Tomcat input plugin for Telegraf. * In the tags section, which is `[inputs.Tomcat.tags]`: - * `environment`. This is the deployment environment where the Tomcat farm identified by the value of `servers` resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the Tomcat farm identified by the value of `servers` resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. * `webserver_farm` - Enter a name to identify this Tomcat farm. This farm name will be shown in the Sumo Logic dashboards. * In the input plugins section, which is `[[inputs.jolokia2_agent]]`: - * `urls` - The URL to the tomcat server. This can be a comma-separated list to connect to multiple tomcat servers. 
See [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) for more information on additional parameters for configuring the Tomcat input plugin for Telegraf. + * `urls` - The URL to the Tomcat server. This can be a comma-separated list to connect to multiple Tomcat servers. See [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) for more information on additional parameters for configuring the Tomcat input plugin for Telegraf. * In the tags section, which is `[inputs.jolokia2_agent.tags]`: - * `environment`. This is the deployment environment where the Tomcat farm identified by the value of servers resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the Tomcat farm identified by the value of servers resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. * `webserver_farm`. Enter a name to identify this Tomcat farm. This farm name will be shown in the Sumo Logic dashboards. * **Do not modify** additional values set by this configuration as they will cause the Sumo Logic apps to not function correctly. * `telegraf.influxdata.com/class: sumologic-prometheus`. This instructs the Telegraf operator what output to use. This should not be changed. * `prometheus.io/scrape: "true"`. This ensures our Prometheus will scrape the metrics. - * `prometheus.io/port: "9273"`. This tells prometheus what ports to scrape on. This should not be changed. + * `prometheus.io/port: "9273"`. This tells Prometheus what ports to scrape on. This should not be changed. * `telegraf.influxdata.com/inputs` * In the tags section, which is `[inputs.Tomcat.tags]` * `component: “webserver”` - This value is used by Sumo Logic apps to identify application components. @@ -213,13 +196,13 @@ In Kubernetes environments, we use the Telegraf Operator, which is packaged with 1. 
Sumo Logic Kubernetes collection will automatically start collecting metrics from the pods having the labels and annotations defined in the previous step. 1. Verify metrics in Sumo Logic. -#### Step 2: Configure Logs Collection +### Step 2: Configure logs collection This section explains the steps to collect Apache Tomcat logs from a Kubernetes environment. **(Recommended Method) Add labels on your Apache Tomcat pods to capture logs from standard output.** Follow the instructions below to capture Apache Tomcat logs from stdout on Kubernetes. -1. Apply following labels to the Apache Tomcat pods: +1. Apply the following labels to the Apache Tomcat pods: ```yaml environment: "prod_CHANGEME" component: "webserver" @@ -227,7 +210,7 @@ This section explains the steps to collect Apache Tomcat logs from a Kubernetes webserver_farm: "tomcat_prod__CHANGEME" ``` * Enter values for the following parameters (marked `CHANGEME` in the snippet above): - * `environment`. This is the deployment environment where the Tomcat farm identified by the value of servers resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the Tomcat farm identified by the value of servers resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. * `webserver_farm` - Enter a name to identify this Tomcat farm. This farm name will be shown in the Sumo Logic dashboards. * **Do not modify** additional values set by this configuration as they will cause the Sumo Logic apps to not function correctly. * `component: “webserver”` - This value is used by Sumo Logic apps to identify application components. @@ -256,7 +239,7 @@ This section explains the steps to collect Apache Tomcat logs from a Kubernetes 1. Sumo Logic Kubernetes collection will automatically start collecting logs from the pods having the annotations defined above. 1. Verify logs in Sumo Logic. -
**FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, we will have a Field Extraction Rule automatically created for Apache Tomcat Web Server Application Components named as **AppObservabilityApacheTomcatWebserverFER** +
**FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments are automatically prefixed with `pod_labels`. To normalize these fields for the app to work, a Field Extraction Rule named **AppObservabilityApacheTomcatWebserverFER** is automatically created for Apache Tomcat Web Server Application Components.
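For illustration, pod labels like the following (values here are placeholders) arrive in Sumo Logic as `pod_labels_`-prefixed fields, which this Field Extraction Rule maps back to the unprefixed field names the app queries:

```yml
metadata:
  labels:
    # Ingested as pod_labels_environment, pod_labels_component, etc.;
    # the FER normalizes them back to environment, component, and so on.
    environment: "prod"
    component: "webserver"
    webserver_system: "tomcat"
    webserver_farm: "tomcat_prod"
```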
@@ -272,7 +255,7 @@ This section provides instructions for configuring metrics collection for the Su 1. Configure Metrics Collection 1. Configure a Hosted Collector - 2. Configure an HTTP Logs and Metrics Source + 2. Configure an HTTP Logs and Metrics Source 3. Install Telegraf 4. Download and set up Jolokia on each Apache Tomcat node 5. Configure and start Telegraf @@ -280,20 +263,20 @@ This section provides instructions for configuring metrics collection for the Su 1. Configure logging in Apache Tomcat 2. Configure Sumo Logic Installed Collector -#### Step 1: Configure Metrics Collection +### Step 1: Configure metrics collection 1. **Configure a Hosted Collector**. To create a new Sumo Logic hosted collector, perform the steps in the [Create a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) section of the Sumo Logic documentation. 1. **Configure an HTTP Logs and Metrics Source**. Create a new HTTP Logs and Metrics Source in the hosted collector created above by following [these instructions](/docs/send-data/hosted-collectors/http-source/logs-metrics). Make a note of the **HTTP Source URL**. 1. **Install Telegraf**. Follow [these steps](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf) to install Telegraf. 1. **Download and set up Jolokia on each Apache Tomcat node**. As part of collecting metrics data with Telegraf, we will use the [Jolokia input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) to get data into Telegraf and the [Sumo Logic output plugin](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/sumologic) to send data to Sumo Logic. * Download the latest version of the Jolokia JVM-Agent from [Jolokia](https://jolokia.org/download.html). - * Rename downloaded Jar file to jolokia.jar. - * Save the file jolokia.jar on your apache tomcat server in `${TOMCAT_HOME}/webapps`. + * Rename the downloaded Jar file to jolokia.jar. 
+ * Save the file jolokia.jar on your Apache Tomcat server in `${TOMCAT_HOME}/webapps`. * Configure Apache Tomcat to use Jolokia. - * Add following to tomcat-users.xml + * Add the following to tomcat-users.xml: - * Start or Restart Apache Tomcat Service + * Start or restart the Apache Tomcat service. * Verify the Jolokia agent installation by curl-ing this URL: `http://:/jolokia/version`. ```bash curl -v -u username-CHANGEME:password-CHANGEME "http://APACHE_TOMCAT_SERVER_IP_ADDRESS:/jolokia/version" ``` @@ -478,12 +461,12 @@ Enter values for the following parameters (marked `CHANGEME` above): * In the input plugins section, which is `[[inputs.tomcat]]`: * `servers` - The URL to the Tomcat server. This can be a comma-separated list to connect to multiple Tomcat servers. See [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tomcat) for more information on additional parameters for configuring the Tomcat input plugin for Telegraf. * In the tags section, which is `[inputs.tomcat.tags]`: - * `environment`. This is the deployment environment where the Tomcat farm identified by the value of **servers** resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the Tomcat farm identified by the value of **servers** resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. * `webserver_farm` - Enter a name to identify this Tomcat farm. This farm name will be shown in the Sumo Logic dashboards. * In the input plugins section, which is `[[inputs.jolokia2_agent]]`: * `urls` - The URL to the Tomcat server. This can be a comma-separated list to connect to multiple Tomcat servers. See [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia2) for more information on additional parameters for configuring the Tomcat input plugin for Telegraf. 
* In the tags section, which is `[inputs.jolokia2_agent.tags]`: - * `environment`. This is the deployment environment where the Tomcat farm identified by the value of `servers` resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the Tomcat farm identified by the value of `servers` resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. * `webserver_farm` - Enter a name to identify this Tomcat farm. This farm name will be shown in the Sumo Logic dashboards. * In the output plugins section, which is `[[outputs.sumologic]]`: * `url` - This is the HTTP source URL created in step 3. See [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/configure-telegraf-output-plugin.md) for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin. @@ -492,16 +475,16 @@ Here’s an explanation for additional values set by this Telegraf configuration **Do not modify** the configuration, as it will cause the Sumo Logic apps to not function correctly. -* `data_format - “prometheus”` In the output plugins section, which is `[[outputs.sumologic]]`. Metrics are sent in the Prometheus format to Sumo Logic +* `data_format - “prometheus”` In the output plugins section, which is `[[outputs.sumologic]]`. Metrics are sent in the Prometheus format to Sumo Logic. * `component: “webserver”` - In the input plugins sections, which are `[[inputs.tomcat]]` and `[[inputs.jolokia2_agent]]` - This value is used by Sumo Logic apps to identify application components. -* `webserver_system: “tomcat”` - In the input plugins sections.In other words, this value identifies the webserver system +* `webserver_system: “tomcat”` - In the input plugins sections. In other words, this value identifies the web server system. 
* For all other parameters, see [this doc](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf) for more parameters that can be configured in the Telegraf agent globally. Once you have finalized your telegraf.conf file, you can start or reload the Telegraf service using instructions from [this doc](https://docs.influxdata.com/telegraf/v1.17/introduction/getting-started/#start-telegraf-service). At this point, Tomcat metrics should start flowing into Sumo Logic. -#### Step 2 Configure Logs Collection +### Step 2: Configure logs collection This section provides instructions for configuring log collection for Apache Tomcat running on a non-Kubernetes environment for the Sumo Logic App for Apache Tomcat. @@ -532,21 +515,21 @@ Log format description: [https://stackoverflow.com/questions/4468546/explanation * **Name.** (Required) * **Description.** (Optional) * **File Path (Required).** Enter the path to your error.log or access.log. The files are typically located in **/usr/share/tomcat/logs/***. If you're using a customized path, check the Tomcat.conf file for this information. - * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different host name - * **Source Category.** Enter any string to tag the output collected from this Source, such as **Tomcat/Logs**. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see[ Best Practices](/docs/send-data/best-practices).) + * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname. + * **Source Category.** Enter any string to tag the output collected from this Source, such as **Tomcat/Logs**. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see [Best Practices](/docs/send-data/best-practices).) 
* **Fields.** Set the following fields: ```sh * component = webserver * webserver_system = tomcat * webserver_farm = - * environment = #such as Dev, QA or Prod. + * environment = #such as Dev, QA, or Prod. ``` 1. Configure the **Advanced** section: * **Enable Timestamp Parsing.** Select Extract timestamp information from log file entries. * **Time Zone.** Choose the option, **Ignore time zone from log file and instead use**, and then select your Tomcat Server’s time zone. * **Timestamp Format.** The timestamp format is automatically detected. * **Encoding.** Select UTF-8 (Default). - * **Enable Multiline Processing.** Detect messages spanning multiple lines - * Infer Boundaries - Detect message boundaries automatically + * **Enable Multiline Processing.** Detect messages spanning multiple lines. + * Infer Boundaries - Detect message boundaries automatically. 1. Click **Save**. At this point, Tomcat logs should start flowing into Sumo Logic. @@ -556,50 +539,46 @@ At this point, Tomcat logs should start flowing into Sumo Logic. ## Installing the Apache Tomcat app -The Sumo Logic app for Apache Tomcat provides pre-configured Dashboards for Access, Catalina.out, and Garbage Collection logs. +import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; -Locate and install the app you need from the **App Catalog**. If you want to see a preview of the dashboards included with the app before installing, click **Preview Dashboards**. +<AppInstall2/> -1. From the **App Catalog**, search for and select the app. -2. Select the version of the service you're using and click **Add to Library**. :::note Version selection is not available for all apps. ::: -3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. - 2. **Data Source.** - * Choose **Enter a Custom Data Filter**, and enter a custom filter for Apache Tomcat webserver farm. 
Examples: - * For all Apache Tomcat webserver farms webserver_farm=* - * For a specific webserver farms: webserver_farm=tomcat.dev.01. - * Clusters within a specific environment: `webserver_farm=tomcat-1 and environment=prod`. (This assumes you have set the optional environment tag while configuring collection) -4. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. -5. Click **Add to Library**. +As part of the app installation process, the following fields will be created by default: +* `component` +* `environment` +* `webserver_system` +* `webserver_farm` +* `pod` -Once an app is installed, it will appear in your **Personal** folder, or other folder that you specified. From here, you can share it with your organization. +Additionally, if you are using Apache Tomcat in the Kubernetes environment, the following additional fields will be created by default during the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_webserver_system` +* `pod_labels_webserver_farm` -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. +For information on setting up fields, see [Fields](/docs/manage/fields). ## Viewing Apache Tomcat dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). 
-::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + +<ViewDashboards/> ### Overview -The **Apache Tomcat - Overview** dashboard provides a high-level view of the activity and health of Tomcat servers on your network. Dashboard panels display visual graphs and detailed information on visitor geographic locations, traffic volume and distribution, responses over time, as well as time comparisons for visitor locations and CPU, Memory. +The **Apache Tomcat - Overview** dashboard provides a high-level view of the activity and health of Tomcat servers on your network. Dashboard panels display visual graphs and detailed information on visitor geographic locations, traffic volume and distribution, responses over time, as well as time comparisons for visitor locations, CPU, and memory. Use this dashboard to: -* Analyze CPU, Memory and disk utilization. -* Analyze http request about status code. +* Analyze CPU, memory, and disk utilization. +* Analyze HTTP requests by status code. * Gain insights into network traffic for your Tomcat server. * Gain insights into originated traffic location by region. This can help you allocate computer resources to different regions according to their needs. -* Gain insights into Client, Server Responses on Tomcat Server. This helps you identify errors in Tomcat Server. +* Gain insights into client and server responses on the Tomcat server. This helps you identify errors in the Tomcat server. -#### Visitor Locations +### Visitor Locations The **Apache Tomcat - Visitor Locations** dashboard provides a high-level view of Tomcat visitor geographic locations both worldwide and in the United States. Dashboard panels also show graphic trends for visits by country over time and visits by US region over time. 
@@ -615,22 +594,22 @@ The **Apache Tomcat - Visitor Locations** dashboard provides a high-level view o The **Apache Tomcat - Visitor Traffic Insight** dashboard provides detailed information on the top documents accessed, top referrers, top search terms from popular search engines, and the media types served. - **Bytes Served.** Displays bytes served in a single chart on a timeline for the last 60 minutes. -- **HTTP Methods.** Shows the number of method over time in a pie chart on a timeline for the last 60 minutes. -- **Top 5 url.** Provides a list of the top 5 URL being accessed by your visitors in a bar chart for the 60 minutes. +- **HTTP Methods.** Shows the number of methods over time in a pie chart on a timeline for the last 60 minutes. +- **Top 5 url.** Provides a list of the top 5 URLs being accessed by your visitors in a bar chart for the last 60 minutes. - **Media Types Served.** Displays a list of file types being served in a pie chart for the last 60 minutes. -- **Top 5 Referrers.** Shows a list of the top 5 referring websites by URL in a bar chart for the 60 minutes. +- **Top 5 Referrers.** Shows a list of the top 5 referring websites by URL in a bar chart for the last 60 minutes. - **Top 10 Search Terms from Popular Search Engines.** Displays a list of the top 10 search terms and their count from search engines such as Google, Bing, and Yahoo in an aggregation table for the past hour. ### Web Server Operations -The **Apache Tomcat - Web Server Operations** Dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations and data for clients with high error rates, server errors over time, and non 200 response code status codes. Dashboard panels also show information on server error logs, error log levels, error responses by server, and the top URIs responsible for 404 responses. 
+The **Apache Tomcat - Web Server Operations** dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations, and data for clients with high error rates, server errors over time, and non-200 response status codes. Dashboard panels also show information on server error logs, error log levels, error responses by server, and the top URIs responsible for 404 responses. - **Non 200 Response Status Codes.** Displays the number of non-200 response status codes in a bar chart for the past hour. - **Client Locations - 4xx Errors.** Uses a geo lookup operation to display the location of clients with 4xx errors by IP address on a map of the world, which allows you to see a count of hits per location for the last hour. - **Server Errors Over Time.** Provides information on the type and number of server errors in a column chart on a line chart for the past hour. -- **Error Responses by Server.** Shows error responses and their distribution by server in a line chart for the past hour. +- **Error Responses by Server.** Shows error responses and their distribution by server in a line chart for the past hour. - **Top 5 Clients Cause 4xx Errors.** Displays a list of the top 5 clients that have 4xx errors in a bar chart for the past hour. - **Top 5 URIs Causing 404 Responses.** Provides a list of the top 5 URIs with 404 response types in a pie chart for the past hour. @@ -652,7 +631,7 @@ The **Apache Tomcat - Outlier Analysis** dashboard provides a high-level view of Use this dashboard to: -* Detect outliers in your infrastructure with Sumo Logic’s machine learning algorithm. +* Detect outliers in your infrastructure with Sumo Logic’s machine-learning algorithm. 
test @@ -661,7 +640,7 @@ Use this dashboard to: The **Apache Tomcat - Catalina** dashboard provides information about events such as the startup and shutdown of the Apache Tomcat application server, the deployment of new applications, or the failure of one or more subsystems. -- **Log Levels.** Displays log levels types (Info, Severe, and Warning) in a pie chart for the last 24 hours. +- **Log Levels.** Displays log level types (Info, Severe, and Warning) in a pie chart for the last 24 hours. - **Non-INFO Errors.** Shows the number and type of errors (Severe or Warning) in a stacked column chart on a timeline for the last 24 hours. - **Component Errors.** Provides information on errors by component in a pie chart for the last 24 hours. - **Errors by Component.** Displays Info level errors by component in a stacked column chart on a timeline for the last 24 hours. @@ -679,11 +658,11 @@ The **Apache Tomcat - Garbage Collector** dashboard provides information on the - **Top 10 Host - High GC Time.** Displays the top 10 hosts with high garbage collection operation time as a bar chart for the last 12 hours. - **Top 10 Hosts - Low Average JVM Up-Time.** Shows the top 10 hosts by low average JVM up-time as a bar chart for the last 12 hours. - **Total GC Operation Time.** Provides the total garbage collection operation time by timeslices of 15 minutes in a column chart on a timeline for the last 12 hours. -- **Total GC Operations.** Displays the total number of times Full-GC and Minor-GC collection processes are executed in timeslices of 15 minutes on in a stacked column chart on a timeline for the past 12 hours. +- **Total GC Operations.** Displays the total number of times Full-GC and Minor-GC collection processes are executed in timeslices of 15 minutes on a stacked column chart on a timeline for the past 12 hours. - **Heap.** Shows the total heap memory utilization just before garbage collection was executed vs. 
total heap memory utilization after garbage collection was executed, in a line chart on a timeline for the last 12 hours. -- **PS Young Gen**. PS Young Gen also refers to “New Space,” which is comprised of of Eden-Space and two Survivor-Spaces of identical size, usually called From and To. This panel shows Young Gen memory utilization just before garbage collection was executed vs. Young Gen memory utilization after garbage collection was executed. This part of the heap always gets garbage collected. -- **Par Old Gen.** Par Old Gen is also referred as “Tenured Space”. This panel shows Old Gen memory utilization just before garbage collection was executed vs. Old Gen memory utilization after garbage collection was executed. -- **PS Perm Gen.** PS Perm Gen is also referred as “Permanent Space”. This panel shows Perm Gen memory utilization just before garbage collection was executed vs. Perm Gen memory utilization after garbage collection was executed. +- **PS Young Gen**. PS Young Gen also refers to “New Space,” which is comprised of Eden-Space and two Survivor-Spaces of identical size, usually called From and To. This panel shows Young Gen memory utilization just before garbage collection was executed vs. Young Gen memory utilization after garbage collection was executed. This part of the heap always gets garbage collected. +- **Par Old Gen.** Par Old Gen is also referred to as “Tenured Space”. This panel shows Old Gen memory utilization just before garbage collection was executed vs. Old Gen memory utilization after garbage collection was executed. +- **PS Perm Gen.** PS Perm Gen is also referred to as “Permanent Space”. This panel shows Perm Gen memory utilization just before garbage collection was executed vs. Perm Gen memory utilization after garbage collection was executed. 
test @@ -698,41 +677,37 @@ Use this dashboard to: ### Connectors -The **Apache Tomcat - Connector** dashboard provides analyze receive requests, pass them to the correct web application, and send back the results through the Connector as dynamically generated content. +The **Apache Tomcat - Connector** dashboard analyzes received requests, passes them to the correct web application, and sends back the results through the Connector as dynamically generated content. test ### Memory -The **Apache Tomcat - Memory** dashboard provides a memory of your Apache Tomcat instance. Use this dashboard to understand detail Memory of your Apache Tomcat (s) deployed in your farm. This dashboard also provides login activities +The **Apache Tomcat - Memory** dashboard provides an overview of the memory usage of your Apache Tomcat instance. Use this dashboard to understand the detailed memory usage of the Apache Tomcat instance(s) deployed on your farm. This dashboard also provides login activities. Use this dashboard to: * Analyze Heap memory. -* Analyze percent memory used. +* Analyze the percent memory used. test ### MemoryPool -The **Apache Tomcat - MemoryPool** dashboard provides a memory of your JMX Apache Tomcat instance. Use this dashboard to understand detail Heap Memory of your JMX Apache Tomcat (s) deployed in your farm. +The **Apache Tomcat - MemoryPool** dashboard provides an overview of the memory pools of your JMX Apache Tomcat instance. Use this dashboard to understand the detailed Heap Memory usage of the JMX Apache Tomcat instance(s) deployed in your farm. test To help determine if the Apache Tomcat server is available and performing well, the [Sumo Logic monitors](/docs/alerts/monitors) are provided with out-of-box alerts. -## Installing Apache Tomcat monitors - -Sumo Logic provides pre-configured alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you proactively determine if an Apache Tomcat webserver farm is available and performing as expected. 
These monitors are based on metric and log data and include pre-set thresholds that reflect industry best practices and recommendations. For more information about individual alerts, refer to the [Apache Tomcat alerts](/docs/integrations/web-servers/apache-tomcat#apache-tomcat-alerts). +## Create monitors for Apache Tomcat import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Apache Tomcat alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: + ## Apache Tomcat Alerts - +
+Here are the alerts available for Apache Tomcat (click to expand). @@ -766,8 +741,9 @@ import CreateMonitors from '../../reuse/apps/create-monitors.md'; - +
Alert Name
Apache Tomcat - Error This alert fires when error count is greater than 0. This alert fires when the error count is greater than 0. > 0 0
+
diff --git a/docs/integrations/web-servers/apache.md b/docs/integrations/web-servers/apache.md index 17c561e591..78f4fcffb4 100644 --- a/docs/integrations/web-servers/apache.md +++ b/docs/integrations/web-servers/apache.md @@ -11,9 +11,9 @@ import TabItem from '@theme/TabItem'; Thumbnail icon -The Apache app is a unified logs and metrics app that helps you monitor the availability, performance, health and resource utilization of Apache web server farms. Preconfigured dashboards and searches provide visibility into your environment for real-time or historical analysis: visitor locations, visitor access types, traffic patterns, errors, web server operations, resource utilization and access from known malicious sources. +The Apache app is a unified logs and metrics app that helps you monitor the availability, performance, health, and resource utilization of Apache web server farms. Preconfigured dashboards and searches provide visibility into your environment for real-time or historical analysis: visitor locations, visitor access types, traffic patterns, errors, web server operations, resource utilization, and access from known malicious sources. -## Log types and Metrics +## Log types and metrics The Sumo Logic app for Apache assumes: * The [NCSA extended/combined log file format ](http://httpd.apache.org/docs/current/mod/mod_log_config.html) has been configured for Apache access logs and the default error log format for Apache Access logs and Apache Error logs. For a list of metrics that are collected and used by the app, see [Apache Metrics](#apache-metrics). @@ -112,27 +112,7 @@ The predefined searches in the Apache app are based on the Apache Access logs an ## Collecting logs and metrics for Apache -This section provides instructions for configuring log and metrics collection for the Sumo Logic app for Apache. 
- -### Step 1: Configure fields in Sumo Logic - -As part of the app installation process, the following fields will be created by default: -* `component` -* `environment` -* `webserver_system` -* `webserver_farm` - -Additionally, if you're using Apache in the Kubernetes environment, the following additional fields will be created by default during the app installation process: -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_webserver_system` -* `pod_labels_webserver_farm` - -For information on setting up fields, see [Fields](/docs/manage/fields). - -### Step 2: Configure Your Environment for Apache Logs and Metrics Collection - -Sumo Logic supports collection of logs and metrics data from Apache in both Kubernetes and non-Kubernetes environments. Please click on the appropriate link below based on the environment where your Apache farms are hosted. +Sumo Logic supports the collection of logs and metrics data from Apache in both Kubernetes and non-Kubernetes environments. Please click on the appropriate link below based on the environment where your Apache farms are hosted. " component: "webserver" @@ -259,11 +239,11 @@ We use the Telegraf Operator for Apache metrics collection and the Sumo Logic In This section provides instructions for configuring metrics collection for the Sumo Logic app for Apache. Follow the instructions to set up metrics collection for each server belonging to a Apache server farm: -#### Configure Metrics Collection from a Apache Server +### Configure metrics collection from an Apache server 1. **Configure Metrics in Apache**. Before you can configure Sumo Logic to ingest metrics, you must turn on [server-status](https://httpd.apache.org/docs/2.4/mod/mod_status.html) for Apache. For this, edit the Apache conf file (httpd.conf). 
* Uncomment this line if not already done in the httpd.conf: `LoadModule status_module libexec/apache2/mod_status.so` - * Add following lines in the httpd.conf after that + * Add the following lines in the httpd.conf after that: ```xml ExtendedStatus On @@ -336,8 +316,8 @@ This section provides instructions for configuring metrics collection for the Su At this point, Apache metrics should start flowing into Sumo Logic. -#### Configure Logs Collection from an Apache server -This section provides instructions for configuring collection of logs from Apache running on a non-Kubernetes environment. +### Configure logs collection from an Apache server +This section provides instructions for configuring the collection of logs from Apache running on a non-Kubernetes environment. Apache logs (access logs and error logs) are stored in log files. @@ -350,38 +330,38 @@ To configure the Apache log file(s), locate your local **httpd.conf** configurat For access logs, the following directive is to be noted: * CustomLog: access log file path and format (standard common and combined) -For error logs, following directives are to be noted: +For error logs, the following directives are to be noted: * ErrorLog: error log file path * LogLevel: to control the number of messages logged to the error_log 2. **Configure an Installed Collector**. To add an Installed collector, perform the steps as defined on the page [Configure an Installed Collector.](/docs/send-data/installed-collectors) -3. **Configure a Local File Source for Apache access logs**. To add a Local File Source for Apache access log do the following +3. **Configure a Local File Source for Apache access logs**. To add a Local File Source for the Apache access log, do the following: 1. Add a [Local File Source](/docs/send-data/installed-collectors/sources/local-file-source) in the installed collector configured in the previous step. 2. 
Configure the Local File Source fields as follows: * **Name.** (Required) * **Description.** (Optional) - * **File Path (Required).** Enter the path to your apache access logs. The files are typically located in `/var/log/apache2/access_log`. If you're using a customized path, check the httpd.conf file for this information. - * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different host name + * **File Path (Required).** Enter the path to your Apache access logs. The files are typically located in `/var/log/apache2/access_log`. If you're using a customized path, check the httpd.conf file for this information. + * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname. * **Source Category.** Enter any string to tag the output collected from this Source, such as **Prod/Apache/Access**. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see[ Best Practices](/docs/send-data/best-practices).) * **Fields**. Set the following fields. For more information on fields please see [this document](/docs/manage/fields): * `component = webserver` * `webserver_system = apache` * `webserver_farm = ` - * `environment = `, such as dev, qa or prod. + * `environment = `, such as dev, qa, or prod. * The values of `webserver_farm` and `environment` should be the same as they were configured in the Configure and start telegraf section. * **Configure the Advanced Options for Logs section:** * **Enable Timestamp Parsing.** Select Extract timestamp information from log file entries. - * **Time Zone.** Select Use time zone form log file, if none is detected use “Use Collector Default” + * **Time Zone.** Select Use time zone from the log file, if none is detected use “Use Collector Default” * **Timestamp Format.** Select Automatically detect the format. * **Encoding.** Select UTF-8 (Default). 
- * Apache Access logs are single-line logs, uncheck **Detect messages spanning multiple lines.** + * Apache Access logs are single-line logs, uncheck **Detect messages spanning multiple lines**. 3. Click **Save**. At this point, Apache access logs should start flowing into Sumo Logic. -4. **Configure a Local File Source for Apache error logs**. To add a Local File Source for Apache error log do the following +4. **Configure a Local File Source for Apache error logs**. To add a Local File Source for the Apache error log do the following: 1. Add a[ Local File Source](/docs/send-data/installed-collectors/sources/local-file-source) in the installed collector configured in the previous step. 2. Configure the Local File Source fields as follows: * **Name.** (Required) * **Description.** (Optional) * **File Path (Required).** Enter the path to your error_log. The files are typically located in `/var/log/apache2/error_log`. If you're using a customized path, check the httpd.conf file for this information. - * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different host name + * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname. * **Source Category.** Enter any string to tag the output collected from this Source, such as **Prod/Apache/Error**. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see[ Best Practices](/docs/send-data/best-practices).) * **Fields**. Set the following fields. For more information on fields please see [this document](/docs/manage/fields): ```sql @@ -393,7 +373,7 @@ For error logs, following directives are to be noted: * The values of `webserver_farm` and `environment` should be the same as they were configured in the Configure and start telegraf section. 
* **Configure the Advanced Options for Logs section:** * **Enable Timestamp Parsing.** Select Extract timestamp information from log file entries. - * **Time Zone.** Select Use time zone form log file, if none is detected use “Use Collector Default” + * **Time Zone.** Select Use time zone from the log file, if none is detected use “Use Collector Default” * **Timestamp Format.** Select Automatically detect the format. * **Encoding.** Select UTF-8 (Default). * Apache Error logs are multiline-line logs, Select **Detect messages spanning multiple lines** and **Boundary Regex: Expression to match message boundary**. @@ -408,34 +388,27 @@ For error logs, following directives are to be noted: ## Installing the Apache app -Now that you have set up logs and metric collections for Apache, you can install the Sumo Logic app for Apache to use the pre-configured Searches and dashboards. - -To install the app, do the following: -1. Locate and select the app you need from the **App Catalog**. -2. From the **App Catalog**, search for and select the app. If you want to see a preview of the dashboards included with the app before installing, click images in **Dashboard Preview** section. -3. Click **Add Integration**. -4. In **Setup Data** step you would see **Open Setup Doc** button with link to this document. Click **Next** to proceed. -5. In the **Configure Apache** step, complete the following fields. - * **Apache Log Source**. Choose **Enter a Custom Data Filter** and enter a custom filter. Examples: - * For all Apache web server farms: `webserver_system=apache webserver_farm=*` - * For a specific web server farm: `webserver_system=apache webserver_farm=apache.dev.01` - * Select location in the library (the default is the Personal folder in the library), or click **New Folder** to add a new folder. - * **Folder Name** You can retain the existing name, or enter a name of your choice for the app. -5. Click **Next**. 
+import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; -For more information, see the [Install the Apps from the Library](/docs/get-started/apps-integrations). + -Once an app is installed, it will appear in your **Personal** folder, or other folder that you specified. From here, you can share it with your organization. +As part of the app installation process, the following fields will be created by default: +* `component` +* `environment` +* `webserver_system` +* `webserver_farm` -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. +Additionally, if you're using Apache in the Kubernetes environment, the following additional fields will be created by default during the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_webserver_system` +* `pod_labels_webserver_farm` -## Viewing Apache dashboards +## Viewing Apache dashboards​ -This section provides descriptions of each of the app dashboards. +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: + ### Overview @@ -450,7 +423,7 @@ Use this dashboard to: test -### Error Log Analysis +### Error log analysis The **Apache - Error Log Analysis** dashboard provides a high-level view of error log levels, clients causing errors, critical error messages and trends. 
@@ -465,7 +438,7 @@ Use this dashboard to: ### Trends -The **Apache - Trends** dashboard provides trends around HTTP responses, server hits, visitor locations, traffic volume and distribution. +The **Apache - Trends** dashboard provides trends around HTTP responses, server hits, visitor locations, traffic volume, and distribution. Use this dashboard to: * Monitor trends and identify outliers. @@ -474,7 +447,7 @@ Use this dashboard to: ### Outlier Analysis -The **Apache - Outlier Analysis** dashboard helps you quickly identify outliers for key Apache metrics such bytes served, number of visitors, server errors, and client errors. +The **Apache - Outlier Analysis** dashboard helps you quickly identify outliers for key Apache metrics such as bytes served, number of visitors, server errors, and client errors. Use this dashboard to: * Automatically detect outliers in the operations of your Apache web servers and take corrective actions if needed. @@ -485,7 +458,7 @@ Use this dashboard to: The **Apache - Threat Intel** dashboard provides an at-a-glance view of incoming threats to your Apache servers based on known malicious IP addresses. -Dashboard panels show threat counts, geographic locations, actors, threat severity, URLS accessed. +Dashboard panels show threat counts, geographic locations, actors, threat severity, and URLS accessed. Use this dashboard to: * Identify threats from incoming traffic based on incoming client IP addresses and discover potential IOCs. @@ -497,7 +470,7 @@ Use this dashboard to: The **Apache - Visitor Locations** dashboard provides a high-level view of Apache visitor geographic locations both worldwide and in the United States. Use this dashboard to: -* Get insights into geographic locations of your user base. +* Get insights into the geographic locations of your user base. 
test @@ -524,7 +497,7 @@ Use this dashboard to: The **Apache - Web Server Operations** Dashboard provides an at-a-glance view of the operations of your Apache web servers. Dashboard panels show information on bots, geographic locations, errors and URLs. Use this dashboard to: -* Get insights into client locations, bots and response codes. +* Get insights into client locations, bots, and response codes. test @@ -544,7 +517,7 @@ The **Apache - Server Resource Utilization** dashboard shows the CPU resource ut Use this dashboard to: * Monitor CPU utilization and load on your Apache web servers. -* Monitor the number of worker and idle threads. +* Monitor the number of workers and idle threads. test @@ -559,24 +532,13 @@ Use this dashboard to: test -## Installing Apache monitors - -This section provides instructions for installing the Sumo Logic Monitors for Apache. These instructions assume you have already set up collection as described in the [Collecting Logs and Metrics for Apache](#collecting-logs-and-metrics-for-apache) page. - -Sumo Logic has provided a predefined set of alerts, which can be imported and available through [Sumo Logic monitors](/docs/alerts/monitors), to help you proactively monitor your Apache Web servers and farms. These monitors are built based on metrics and logs datasets and include pre-set thresholds based on industry best practices and recommendations. - -For details about individual alerts, see [Apache alerts](#apache-alerts). +## Create monitors for Apache app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Apache alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). 
-::: - -## Apache Alerts + -Sumo Logic provides out-of-the-box alerts available via [Sumo Logic monitors](/docs/alerts/monitors). These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. +## Apache alerts
Here are the alerts available for Apache (click to expand). diff --git a/docs/integrations/web-servers/haproxy.md b/docs/integrations/web-servers/haproxy.md index 520085a83e..ab4648411e 100644 --- a/docs/integrations/web-servers/haproxy.md +++ b/docs/integrations/web-servers/haproxy.md @@ -11,17 +11,17 @@ import TabItem from '@theme/TabItem'; Thumbnail icon -HAProxy is open source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications that spreads requests across multiple servers. +HAProxy is open-source software that provides a high-availability load balancer and proxy server for TCP- and HTTP-based applications, spreading requests across multiple servers. The Sumo Logic app for HAProxy is a unified logs and metrics app that helps you monitor the availability, performance, and health of your HAProxy cluster. Preconfigured dashboards provide insights into active servers, visitor locations, sessions, errors, response time, and throughput. ## HAProxy log types -The app supports Logs and Metrics from the open source version of HAProxy. The app is tested on the 2.3.9 version of HAProxy. +The app supports Logs and Metrics from the open-source version of HAProxy. The app is tested on the 2.3.9 version of HAProxy. The HAProxy logs are generated in files as configured in the configuration file /etc/haproxy/haproxy.cfg ([learn more](https://www.haproxy.com/blog/introduction-to-haproxy-logging/)). -The Sumo Logic app for HAProxy supports metrics generated by the [HAProxy plugin for Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy). The app assumes prometheus format Metrics. +The Sumo Logic app for HAProxy supports metrics generated by the [HAProxy plugin for Telegraf](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy). The app assumes Prometheus-format metrics. 
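As a quick illustration of the logging setup referenced above, a minimal haproxy.cfg stanza might look like the following. This is a sketch only; the syslog address, facility (`local2`), and log level are assumptions to adapt to your deployment.

```bash
# /etc/haproxy/haproxy.cfg -- minimal logging sketch (example values)
global
    # send logs to a local syslog listener on UDP 514, facility local2
    log 127.0.0.1:514 local2 info

defaults
    # reuse the log line defined in the global section
    log global
    mode http
    option httplog
```

The later collection steps in this guide assume logs arrive either via this syslog path or in a local file such as /var/log/haproxy.log.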
### Sample log messages @@ -77,24 +77,9 @@ This section provides instructions for configuring logs and metrics collection f Configuring log and metric collection for the HAProxy app includes the following tasks: -### Step 1: Configure fields in Sumo Logic +### Step 2: Configure collection for HAProxy -As part of the app installation process, the following fields will be created by default: - * `component` - * `environment` - * `proxy_system` - * `proxy_cluster` - * `pod` - -Additionally, if you're using HAProxy in the Kubernetes environment, the following additional fields will be created by default during the app installation process: - * `pod_labels_component` - * `pod_labels_environment` - * `pod_labels_proxy_system` - * `pod_labels_proxy_cluster` - -### Step 2: Configure Collection for HAProxy - -Sumo Logic supports collection of logs and metrics data from HAProxy in both Kubernetes and non-Kubernetes environments. +Sumo Logic supports the collection of logs and metrics data from HAProxy in both Kubernetes and non-Kubernetes environments. -We use the Telegraf operator for HAProxy metric collection and Sumo Logic Installed Collector for collecting HAProxy logs. The diagram below illustrates the components of the HAProxy collection in a **non-Kubernetes** environment. Telegraf runs on the same system as HAProxy, and uses the [HAProxy input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy) to obtain HAProxy metrics, and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from HAProxy on the other hand are sent to either a Sumo Logic Local File source or Syslog source. +We use the Telegraf operator for HAProxy metric collection and the Sumo Logic Installed Collector for collecting HAProxy logs. The diagram below illustrates the components of the HAProxy collection in a **non-Kubernetes** environment. 
Telegraf runs on the same system as HAProxy and uses the [HAProxy input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy) to obtain HAProxy metrics, and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from HAProxy, on the other hand, are sent to either a Sumo Logic Local File source or a Syslog source. Backend dashboard This section provides instructions for configuring metrics collection for the Sumo Logic app for HAProxy. -#### Configure Metrics Collection +### Configure metrics collection 1. Configure a Hosted Collector: To create a new Sumo Logic hosted collector, perform the steps in the[Create a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) section of the Sumo Logic documentation. 2. Configure an HTTP Logs and Metrics Source: Create a new HTTP Logs and Metrics Source in the hosted collector created above by following[ these instructions](/docs/send-data/hosted-collectors/http-source/logs-metrics). Make a note of the **HTTP Source URL**. @@ -243,7 +228,7 @@ Please enter values for the following parameters (marked `CHANGEME` above): * In the input plugins section, that is `[[inputs.haproxy]]`: * `servers` - The URL to the HAProxy server. This can be a comma-separated list to connect to multiple HAProxy servers. Please see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy) for more information on additional parameters for configuring the HAProxy input plugin for Telegraf. * In the tags section, `[inputs.haproxy.tags]`: - * `environment`. This is the deployment environment where the HAProxy server identified by the value of `servers` resides. For example: dev, prod or qa. While this value is optional we highly recommend setting it. + * `environment`. This is the deployment environment where the HAProxy server identified by the value of `servers` resides. For example: dev, prod, or qa. While this value is optional, we highly recommend setting it. 
* `proxy_cluster`. Enter a name to identify this HAProxy cluster. This cluster name will be shown in the Sumo Logic dashboards. * In the output plugins section, which is `[[outputs.sumologic]]`: * **`url`** - This is the HTTP source URL created in step 2. Please see [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/configure-telegraf-output-plugin.md) for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin. @@ -260,7 +245,7 @@ Once you have finalized your telegraf.conf file, you can start or reload the tel At this point, HAProxy metrics should start flowing into Sumo Logic. -#### Configure Logs Collection +### Configure logs collection This section provides instructions for configuring log collection for HAProxy running on a non-Kubernetes environment for the Sumo Logic app for HAProxy. @@ -274,22 +259,22 @@ Based on your infrastructure and networking setup, choose one of these methods t 2. Configure local log file or syslog collection 3. Configure a Collector 4. Configure a Source -5. Configure logging in HAProxy: Haproxy supports logging via following methods: syslog, local text log files and stdout. Haproxy logs have six levels of verbosity. To select a level, set loglevel to one of: +5. Configure logging in HAProxy: Haproxy supports logging via the following methods: Syslog, local text log files, and stdout. Haproxy logs have six levels of verbosity. To select a level, set loglevel to one of: * **emerg** - Errors such as running out of operating system file descriptors. 
- * **alert** - Some rare cases where something unexpected has happened, such as being unable to cache a response - * **info** - TCP connection and http request details and errors - * **err** - Errors such as being unable to parse a map file, being unable to parse the HAProxy configuration file, and when an operation on a stick table fails - * **warning** - Certain important, but non-critical, errors such as failing to set a request header or failing to connect to a DNS nameserver - * **notice** - Changes to a server’s state, such as being UP or DOWN or when a server is disabled. Other events at startup, such as starting proxies and loading modules are also included. Health check logging, if enabled, also uses this level) - * **debug** (a lot of information, useful for development/testing) + * **alert** - Some rare cases where something unexpected has happened, such as being unable to cache a response. + * **info** - TCP connection and HTTP request details and errors. + * **err** - Errors such as being unable to parse a map file, being unable to parse the HAProxy configuration file, and when an operation on a stick table fails. + * **warning** - Certain important, but non-critical, errors such as failing to set a request header or failing to connect to a DNS nameserver. + * **notice** - Changes to a server’s state, such as being UP or DOWN or when a server is disabled. Other events at startup, such as starting proxies and loading modules, are also included. Health check logging, if enabled, also uses this level. + * **debug** - A lot of information, useful for development/testing. All logging settings are located in [Haproxy.conf](https://www.haproxy.com/blog/introduction-to-haproxy-logging/). - For the dashboards to work properly, must set log format: + For the dashboards to work properly, you must set the log format: ```bash %ci:%cp\ [%tr]\ %ft\ %b/%s\ %TR/%Tw/%Tc/%Tr/%Ta\ %ST\ %B\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r ``` -6. 
Configure Haproxy log to a Local file or Syslog: +6. Configure Haproxy log to a Local file or Syslog: **Configuring HAProxy logs to stream via syslog (Recommended)** @@ -306,10 +291,10 @@ defaults The **log** directive instructs HAProxy to send logs to the Syslog server listening at 127.0.0.1:514. - The **log global** directive basically says, use the log line that was set in the **global** section. Putting a **log global** directive into the **defaults** section is equivalent to putting it into all of the subsequent proxy sections. + The **log global** directive basically says to use the log line that was set in the **global** section. Putting a **log global** directive into the **defaults** section is equivalent to putting it into all of the subsequent proxy sections. -Keep the **port(514)** handy as we will use it in next steps. +Keep the **port (514)** handy as we will use it in the next steps. **Configuring HAProxy logs to go to log files** @@ -322,12 +307,12 @@ Follow the steps below to enable HAProxy logs to go to log files: defaults log global ``` -1. By default, rsyslog doesn’t listen to any address. Uncomment or add following lines in **/etc/rsyslog.conf.** This will make rsyslog listen on UDP port 514 for all IP addresses. +1. By default, rsyslog doesn’t listen to any address. Uncomment or add the following lines in **/etc/rsyslog.conf.** This will make rsyslog listen on UDP port 514 for all IP addresses. ```bash $ModLoad imudp $UDPServerRun 514 ``` -1. Now create a **/etc/rsyslog.d/haproxy.conf** file containing below lines. +1. Now create a **/etc/rsyslog.d/haproxy.conf** file containing the following lines: ```bash local2.* /var/log/haproxy.log ``` @@ -352,7 +337,7 @@ Follow the steps below to enable HAProxy logs to go to log files: * `component = proxy` * `proxy_system = haproxy` * `proxy_cluster = ` - * `environment = `, such as Dev, QA or Prod. + * `environment = `, such as Dev, QA, or Prod. 3. 
Configure the **Advanced** section: * **Enable Timestamp Parsing.** Select Extract timestamp information from log file entries. * **Time Zone.** Choose the option, **Ignore time zone from log file and instead use**, and then select your HAProxy Server’s time zone. @@ -365,13 +350,13 @@ Follow the steps below to enable HAProxy logs to go to log files: * **Name.** (Required) * **Description.** (Optional) * **File Path (Required).** Enter the path to your error.log or access.log. The files are typically located in /var/log/haproxy*.log. If you're using a customized path, check the haproxy.conf file for this information. - * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different host name. + * **Source Host.** Sumo Logic uses the hostname assigned by the OS unless you enter a different hostname. * **Source Category.** Enter any string to tag the output collected from this Source, such as **Haproxy/Logs**. (The Source Category metadata field is a fundamental building block to organize and label Sources. For details, see[ Best Practices](/docs/send-data/best-practices).) * **Fields.** Set the following fields: * `component = proxy` * `proxy_system = haproxy` * `proxy_cluster = ` - * `environment = `, such as Dev, QA or Prod. + * `environment = `, such as Dev, QA, or Prod. 1. Configure the **Advanced** section: * **Enable Timestamp Parsing.** Select Extract timestamp information from log file entries. * **Time Zone.** Choose the option, **Ignore time zone from log file and instead use**, and then select your HAProxy Server’s time zone. @@ -394,45 +379,62 @@ component="proxy" proxy_cluster="" proxy_system="haproxy" Now that you have set up collection for HAProxy, you can install the HAProxy app to use the pre-configured searches and dashboard that provide insight into your data. 
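Before installing the app, you can verify that data is arriving by running a log search scoped to the fields configured above. This is a sketch only; substitute your own `proxy_cluster` value for the wildcard.

```sql
component="proxy" proxy_system="haproxy" proxy_cluster=*
| count by _sourceCategory, _sourceHost
```

A non-zero count per source confirms the Local File or Syslog source is forwarding HAProxy logs with the expected field values.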
-import AppInstall from '../../reuse/apps/app-install.md'; +import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; - + -## Viewing HAProxy Dashboards +As part of the app installation process, the following fields will be created by default: + * `component` + * `environment` + * `proxy_system` + * `proxy_cluster` + * `pod` + +Additionally, if you're using HAProxy in the Kubernetes environment, the following additional fields will be created by default during the app installation process: + * `pod_labels_component` + * `pod_labels_environment` + * `pod_labels_proxy_system` + * `pod_labels_proxy_cluster` + +## Viewing the HAProxy dashboards + +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + + ### Overview -The **HAProxy - Overview** dashboard provides an at-a-glance view of HAProxy Backend and Frontend HTTP error codes percentage, visitor location, URLs and Clients causing errors. +The **HAProxy - Overview** dashboard provides an at-a-glance view of HAProxy Backend and Frontend HTTP error codes percentage, visitor location, URLs, and Clients causing errors. -* Identify Frontend and Backend Sessions percentage usage to understand active sessions. This can help you increase the HAProxy session limit. +* Identify Frontend and Backend Session percentage usage to understand active sessions. This can help you increase the HAProxy session limit. * Gain insights into originated traffic location by region. This can help you allocate computer resources to different regions according to their needs. -* Gain insights into Client, Server Responses on HAProxy Server. This helps you identify errors in HAProxy Server. +* Gain insights into Client and Server Responses on the HAProxy Server. This helps you identify errors in the HAProxy Server. * Gain insights into Network traffic for the Frontend and Backend system of your HAProxy server. 
test ### Backend -The **HAProxy - Backend** dashboard provides an at-a-glance view for the number of backend active servers, backend weight, respond code from backend and throughput http. +The **HAProxy - Backend** dashboard provides an at-a-glance view of the number of active backend servers, backend weight, response codes from the backend, and HTTP throughput. Backend dashboard ### Frontend -The **HAProxy - Backend** dashboard provides an at-a-glance view detail of HAProxy Frontend. It provides information such as number request to frontend, number of error requests, and current session. +The **HAProxy - Frontend** dashboard provides a detailed at-a-glance view of the HAProxy Frontend. It provides information such as the number of requests to the frontend, the number of error requests, and the current sessions. test ### Server -The **HAProxy - Backend** dashboard provides an at-a-glance view detail of HAProxy Server. This dashboard helps you monitoring uptime, and error request by proxy. +The **HAProxy - Server** dashboard provides a detailed at-a-glance view of the HAProxy Server. This dashboard helps you monitor uptime and error requests by proxy. test ### Error Log Analysis -The **HAProxy - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. +The **HAProxy - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections, outliers, client requests, request trends, and request outliers. Use this dashboard to: * Track requests from clients. A request is a message asking for a resource, such as a page or an image.
@@ -446,14 +448,14 @@ Use this dashboard to: The **HAProxy - Outlier Analysis** dashboard provides a high-level view of HAProxy server outlier metrics for bytes served, number of visitors, and server errors. You can select the time interval over which outliers are aggregated, then hover the cursor over the graph to display detailed information for that point in time. Use this dashboard to: -* Detect outliers in your infrastructure with Sumo Logic’s machine learning algorithm. +* Detect outliers in your infrastructure with Sumo Logic’s machine-learning algorithm. * Identify outliers in incoming traffic and the number of errors encountered by your servers. test ### Threat Analysis -The **HAProxy - Threat Inte**l dashboard provides an at-a-glance view of threats to HAProxy servers on your network. Dashboard panels display the threat count over a selected time period, geographic locations where threats occurred, source breakdown, actors responsible for threats, severity, and a correlation of IP addresses, method, and status code of threats. +The **HAProxy - Threat Analysis** dashboard provides an at-a-glance view of threats to HAProxy servers on your network. Dashboard panels display the threat count over a selected time period, geographic locations where threats occurred, source breakdown, actors responsible for threats, severity, and a correlation of IP addresses, method, and status code of threats. Use this dashboard to: * Gain insights into threats in incoming traffic and discover potential IOCs. Incoming traffic requests are analyzed using Sumo Logic [threat intelligence](/docs/security/threat-intelligence/). @@ -462,7 +464,7 @@ Use this dashboard to: ### Trends -The **HAProxy - Trends** dashboard provides an at-a-glance view of traffic to HAProxy servers on your network. Dashboard panels display the traffic count over one day time period, locations where traffic trends for visits by country one days time.
+The **HAProxy - Trends** dashboard provides an at-a-glance view of traffic to HAProxy servers on your network. Dashboard panels display the traffic count over a one-day period and graphic trends for visits by country over time. test @@ -481,7 +483,7 @@ These insights can be useful for planning in which browsers, platforms, and oper The **HAProxy - Visitor Locations** dashboard provides a high-level view of HAProxy visitor geographic locations both worldwide and in the United States. Dashboard panels also show graphic trends for visits by country over time and visits by US region over time. Use this dashboard to: -* Gain insights into geographic locations of your user base. This is useful for resource planning in different regions across the globe. +* Gain insights into the geographic locations of your user base. This is useful for resource planning in different regions across the globe. test @@ -500,37 +502,33 @@ Use this dashboard to: The **HAProxy - Web Server Operations** dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations, and data for clients with high error rates, server errors over time, and non-200 response status codes. Dashboard panels also show information on server error logs, error log levels, error responses by a server, and the top URIs responsible for 404 responses. Use this dashboard to: -* Gain insights into Client, Server Responses on HAProxy Server. This helps you identify errors in HAProxy Server. -* To identify geo locations of all Client errors. This helps you identify client location causing errors and helps you to block client IPs. +* Gain insights into Client and Server responses on the HAProxy Server. This helps you identify errors in the HAProxy Server. +* Identify the geo-locations of all client errors. This helps you identify the client locations causing errors so you can block those client IPs.
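The Outlier Analysis dashboard above uses Sumo Logic's own machine-learning algorithm, which is not reproduced here. As a rough mental model only, a simplified rolling z-score check might look like this (the window, threshold, and data are invented):

```python
from statistics import mean, stdev

def find_outliers(series, window=5, threshold=3.0):
    """Flag points more than `threshold` standard deviations away from the
    mean of the preceding `window` points (simplified z-score check)."""
    outliers = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            outliers.append(i)
    return outliers

# Steady per-minute request counts with one spike at index 8.
requests = [100, 102, 98, 101, 99, 100, 103, 97, 500, 101]
print(find_outliers(requests))  # → [8]
```

The real dashboard additionally lets you choose the aggregation interval, which corresponds loosely to the window size here.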
test -## Installing the HAProxy monitors - -Sumo Logic has provided pre-packaged alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you proactively determine if a HAProxy cluster is available and performing as expected. These monitors are based on metric and log data and include pre-set thresholds that reflect industry best practices and recommendations. For more information about individual alerts, see [HAProxy alerts](#haproxy-alerts). +## Create monitors for the HAProxy app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the HAProxy alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: - - -## HAProxy Alerts + +## HAProxy alerts +
+Here are the alerts available for HAProxy (click to expand). | Alert Type (Metrics/Logs) | Alert Name | Alert Description | Trigger Type (Critical / Warning) | Alert Condition | Recover Condition | |:---|:---|:---|:---|:---|:---| -| Logs | HAProxy - Access from Highly Malicious Sources | This alert fires when an HAProxy is accessed from highly malicious IP addresses. | Critical | > 0 | < = 0 | +| Logs | HAProxy - Access from Highly Malicious Sources | This alert fires when an HAProxy server is accessed from highly malicious IP addresses. | Critical | > 0 | < = 0 | | Logs | HAProxy - High Client (HTTP 4xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 4xx. | Critical | > 0 | 0 | | Logs | HAProxy - High Server (HTTP 5xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 5xx. | Critical | > 0 | 0 | | Logs | HAProxy - Backend Error | This alert fires when we detect backend server errors. | Critical | >0 | < = 0 | | Logs | HAProxy - Backend Server Down | This alert fires when we detect a backend server for a given HAProxy server is down. | Critical | >0 | < = 0 | -| Metrics | HAProxy - High Active Backend Server Sessions | when the percent of backend server connections are high. | Warning | >80 | < = 80 | +| Metrics | HAProxy - High Active Backend Server Sessions | When the percentage of backend server connections is high. | Warning | >80 | < = 80 | | Metrics | HAProxy - Frontend Security Blocked Requests | HAProxy is blocking requests for security reasons | Warning | >10 | < = 10 | | Metrics | HAProxy - Has No Alive Backends | HAProxy has no alive active or backup backend servers | Critical | >0 | < = 0 | -| Metrics | HAProxy - Slow Response Time | the HAProxy response times are greater than one second. | Critical | >1 | < = 1 | +| Metrics | HAProxy - Slow Response Time | The HAProxy response times are greater than one second.
| Critical | >1 | < = 1 | | Metrics | HAProxy - Pending Requests | HAProxy requests are pending | Warning | >0 | < = 0 | | Metrics | HAProxy - Retry High | There is a high retry rate. | Warning | >0 | < = 0 | -| Metrics | HAProxy - High Server Connection Errors | there are too many connection errors to backend servers. | Warning | >100 | < = 100 | +| Metrics | HAProxy - High Server Connection Errors | There are too many connection errors to backend servers. | Warning | >100 | < = 100 | | Metrics | HAProxy - Server Healthcheck Failure | Server healthchecks are failing. | Warning | >0 | < = 0 | +
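Each row in the table pairs a trigger condition with a recover condition, which behaves like a threshold comparison with hysteresis: the monitor fires when the value crosses the trigger level and clears only once it returns to the recover level. A minimal sketch (the function name is illustrative; the thresholds are the >80 / <=80 backend-sessions values from the table):

```python
def evaluate_alert(value, trigger_above, recover_at_or_below, currently_firing):
    """Return the new firing state for a '> trigger / <= recover' alert."""
    if not currently_firing and value > trigger_above:
        return True          # crossed the trigger threshold: start firing
    if currently_firing and value <= recover_at_or_below:
        return False         # back at or below the recover threshold: clear
    return currently_firing  # otherwise keep the current state

# "HAProxy - High Active Backend Server Sessions": Warning at >80, recover at <=80.
state = evaluate_alert(85, trigger_above=80, recover_at_or_below=80, currently_firing=False)
print(state)  # → True
state = evaluate_alert(75, trigger_above=80, recover_at_or_below=80, currently_firing=state)
print(state)  # → False
```

The trigger and recover thresholds in the table are equal for most alerts, so there is no dead band; Sumo Logic monitors also apply a time window before firing, which this sketch omits.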
diff --git a/docs/integrations/web-servers/iis-10.md b/docs/integrations/web-servers/iis-10.md index 75d8a7483a..2f669a1941 100644 --- a/docs/integrations/web-servers/iis-10.md +++ b/docs/integrations/web-servers/iis-10.md @@ -23,33 +23,24 @@ IIS app and integration are supported only on Windows. This section provides instructions for configuring log and metric collection for the Sumo Logic app for IIS. -Sumo Logic supports the collection of logs and metrics data from IIS server in standalone environments. The process to set up collection is done through the following steps: +Sumo Logic supports the collection of logs and metrics data from the IIS server in standalone environments. The process to set up collection is done through the following steps: 1. [Configure Log Collection](#configure-log-collection) - * Enable Logging on IIS Server Side + * Enable Logging on the IIS Server Side * Log Types - * Set up Collector and Sources on Sumo Logic side + * Set up Collector and Sources on the Sumo Logic side * Set up local file source for IIS Access Logs * Set up local file source for IIS Error Logs * Set up Source for IIS Performance (Perfmon) Logs 2. [Configure Metrics Collection](#configure-metrics-collection) - * Configure an HTTP Logs and Metrics Source + * Configure HTTP Logs and Metrics Source * Configure a Hosted Collector * Install Telegraf * Configure Telegraf (telegraf.conf), and start it Collect Internet Information Services (IIS) Logs and Metrics for Standalone environments -Sumo Logic uses the Telegraf operator for IIS metric collection and the [Installed Collector](/docs/send-data/installed-collectors) for collecting IIS logs. The diagram below illustrates the components of the IIS collection in a standalone environment. Telegraf uses the [Windows Performance Counters Input Plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) to obtain IIS metrics and the Sumo Logic output plugin to send the metrics to Sumo Logic. 
Logs from IIS Server are collected by a [Local File Source](/docs/send-data/installed-collectors/sources/local-file-source). - -### Configure fields in Sumo Logic - -Following fields will be created automatically as a part of app installation process: -* `component` -* `environment` -* `webserver_system` -* `webserver_farm` -* `pod` +Sumo Logic uses the Telegraf operator for IIS metric collection and the [Installed Collector](/docs/send-data/installed-collectors) for collecting IIS logs. The diagram below illustrates the components of the IIS collection in a standalone environment. Telegraf uses the [Windows Performance Counters Input Plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) to obtain IIS metrics and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from the IIS Server are collected by a [Local File Source](/docs/send-data/installed-collectors/sources/local-file-source). ### Configure log collection @@ -68,36 +59,36 @@ This section provides instructions for configuring log collection for IIS runnin #Example c:\inetpub\logs\LogFiles\ ``` Within the folder, you will find subfolders for each site configured with IIS. The logs are stored in folders that follow a naming pattern like W3SVC1, W3SVC2, W3SVC3, etc. The number at the end of the folder name corresponds to your site ID. For example, W3SVC2 is for site ID 2. - * **IIS Access Logs (W3C default format)**. Sumo Logic expects logs in W3C format with the following fields. IIS allows you to choose fields to log in IIS access logs. To learn more about the various fields and their significance, see [Microsoft | W3C Logging](https://docs.microsoft.com/en-us/windows/desktop/http/w3c-logging). + * **IIS Access Logs (W3C default format)**. Sumo Logic expects logs in W3C format with the following fields. IIS allows you to choose which fields to log in IIS access logs.
To learn more about the various fields and their significance, see [Microsoft | W3C Logging](https://docs.microsoft.com/en-us/windows/desktop/http/w3c-logging). ``` #Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken ``` - * **HTTP Error Logs**. Sumo Logic expects Error logs in following format. For information on how to configure HTTP Error Logs, and for explanations on the various HTTP Error Log fields and their significance, see [Microsoft | Error logging in HTTP APIs](https://support.microsoft.com/en-us/help/820729/error-logging-in-http-apis). + * **HTTP Error Logs**. Sumo Logic expects Error logs in the following format. For information on how to configure HTTP Error Logs, and for explanations on the various HTTP Error Log fields and their significance, see [Microsoft | Error logging in HTTP APIs](https://support.microsoft.com/en-us/help/820729/error-logging-in-http-apis). ``` #Fields: date time c-ip c-port s-ip s-port protocol_version verb cookedurl_query protocol_status siteId Reason_Phrase Queue_Name ``` - * **Performance Logs**. These logs are output of Perfmon queries which will be configured at Installed Collector, "**Windows Performance**" Source. + * **Performance Logs**. These logs are the output of Perfmon queries which will be configured at Installed Collector, "**Windows Performance**" Source. -#### Enable logging on your IIS Server +### Enable logging on your IIS server If logging is not already enabled on your IIS Server, perform the following steps to enable it: 1. Open IIS Manager. 1. Select the site or server in the **Connections** pane, then double-click **Logging**. Enhanced logging is only available for site-level logging. If you select the server in the Connections pane, then the Custom Fields section of the W3C Logging Fields dialog is disabled. 1. In the Format field under Log File, select **W3C** and then click Select Fields. 
The IIS app works with the default field selection. -1. Select following fields, if not already selected. Sumo Logic expects these fields in IIS logs for the IIS app to work by default: +1. Select the following fields, if not already selected. Sumo Logic expects these fields in IIS logs for the IIS app to work by default: `date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-substatus sc-win32-status time-taken` For more information about IIS log format and log configuration, see [Microsoft | Enhanced Logging for IIS 8.5](https://docs.microsoft.com/en-us/iis/get-started/whats-new-in-iis-85/enhanced-logging-for-iis85). -#### Verify that log files are created +### Verify that log files are created Perform the following tasks to ensure that log files are being created: 1. Open a command-line window and change directories to `C:\inetpub\Logs\LogFiles`. This is the same path you will enter when you configure the Source to collect these files. 1. Under the \W3SVC1 directory, you should see one or more files with a .log extension. If the file is present, you can collect it. -#### Enable HTTP Error Logs on your Windows Server +### Enable HTTP error logs on your Windows server Perform the following task to enable HTTP Error Logs on your Windows Server that is hosting the IIS Server: @@ -107,11 +98,11 @@ Perform the following task to enable HTTP Error Logs on your Windows Server that C:\Windows\System32\LogFiles\HTTPERR ``` -#### Configure an Installed Collector +### Configure an installed collector If you have not already done so, install and configure an installed collector for Windows by following the [Install a Collector on Windows](/docs/send-data/installed-collectors/windows) documentation.
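Before configuring sources, you can confirm that a sampled access log line lines up with the W3C `#Fields` directive, since the directive names each space-separated column. A minimal sketch using the default field list from the steps above (the sample values are invented; IIS encodes spaces inside field values, such as the User-Agent, as `+`, so splitting on spaces is safe):

```python
FIELDS_DIRECTIVE = ("#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query "
                    "s-port cs-username c-ip cs(User-Agent) cs(Referer) "
                    "sc-status sc-substatus sc-win32-status time-taken")

def parse_w3c_line(fields_directive: str, line: str) -> dict:
    """Map W3C field names onto the space-separated values of one log entry."""
    names = fields_directive.split()[1:]  # drop the "#Fields:" prefix
    values = line.split()
    return dict(zip(names, values))

# Invented sample entry matching the 15-field directive above.
sample = ("2024-01-15 10:22:20 10.0.0.5 GET /index.htm - 80 - 203.0.113.7 "
          "Mozilla/5.0 - 200 0 0 15")
entry = parse_w3c_line(FIELDS_DIRECTIVE, sample)
print(entry["sc-status"])  # → 200
```

If the value count does not match the field count, the site's logging configuration has diverged from the default field selection.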
-#### Configure Source for IIS Access Logs +### Configure source for IIS Access logs This section demonstrates how to configure a Local File Source for IIS Access Logs, for use with an [Installed Collector](/docs/integrations/web-servers/iis-10). You may configure a [Remote File Source](/docs/send-data/installed-collectors/sources/remote-file-source), but the configuration is more complex. Sumo Logic recommends using a Local File Source whenever possible. To configure a local file source for IIS Access Logs, do the following: @@ -120,8 +111,8 @@ This section demonstrates how to configure a Local File Source for IIS Access Lo 1. **Name**. Required (for example, "IIS Access Logs") 2. **Description**. (Optional) 3. **File Path** (Required). `C:\inetpub\Logs\LogFiles\W3SVC*\*.log` - 4. **Collection start time**. Choose how far back you would like to begin collecting historical logs. For example, choose 7 days ago to being collecting logs with a last modified date within the last seven days. - 5. **Source Host**. Sumo Logic uses the hostname assigned by the operating system by default, but you can enter a different host name. + 4. **Collection start time**. Choose how far back you would like to begin collecting historical logs. For example, choose 7 days ago to begin collecting logs with the last modified date within the last seven days. + 5. **Source Host**. Sumo Logic uses the hostname assigned by the operating system by default, but you can enter a different hostname. 6. **Source Category** (Required). For example, Webserver/IIS/Access. 7. **Fields**. Set the following fields: * `component = webserver` @@ -131,15 +122,15 @@ This section demonstrates how to configure a Local File Source for IIS Access Lo 3. Configure the Advanced section: * **Timestamp Parsing Settings**. Make sure the setting matches the timezone on the log files. * **Enable Timestamp Parsing**. Select **Extract timestamp information from log file entries**. - * **Time Zone**. 
Select the option to **Use time zone from log file. If none is present use:** and set the timezone to **UTC**. + * **Time Zone**. Select the option to **Use time zone from the log file. If none is present use:** and set the timezone to **UTC**. * **Timestamp Format**. Select the option to **Automatically detect the format**. * **Encoding**. UTF-8 is the default, but you can choose another encoding format from the menu if your IIS logs are encoded differently. - * **Enable Multiline Processing**. Uncheck the box to **Detect messages spanning multiple lines**. Since IIS Access logs are single line log files, disabling this option will ensure that your messages are collected correctly. + * **Enable Multiline Processing**. Uncheck the box to **Detect messages spanning multiple lines**. Since IIS Access logs are single-line log files, disabling this option will ensure that your messages are collected correctly. 4. Click **Save**. After a few minutes, your new Source should be propagated down to the Collector and will begin submitting your IIS Access log files to the Sumo Logic service. -#### Configure Source for HTTP Error Logs +### Configure source for HTTP error logs This section demonstrates how to configure a Local File Source for HTTP Error Logs, for use with an [Installed Collector](/docs/integrations/web-servers/iis-10). To configure a local file source for HTTP Error Logs, do the following: @@ -148,8 +139,8 @@ This section demonstrates how to configure a Local File Source for HTTP Error Lo 1. **Name**. Required (for example, "HTTP Error Logs") 2. **Description**. (Optional) 3. **File Path** (Required). `C:\Windows\System32\LogFiles\HTTPERR\*.*` - 4. **Collection start time**. Choose how far back you would like to begin collecting historical logs. For example, choose 7 days ago to being collecting logs with a last modified date within the last seven days. - 5. **Source Host**. 
Sumo Logic uses the hostname assigned by the operating system by default, but you can enter a different host name. + 4. **Collection start time**. Choose how far back you would like to begin collecting historical logs. For example, choose 7 days ago to begin collecting logs with the last modified date within the last seven days. + 5. **Source Host**. Sumo Logic uses the hostname assigned by the operating system by default, but you can enter a different hostname. 6. **Source Category** (Required). For example, Webserver/IIS/Error. 7. **Fields**. Set the following fields: * `component = webserver` @@ -159,15 +150,15 @@ This section demonstrates how to configure a Local File Source for HTTP Error Lo 3. Configure the Advanced section settings: * **Timestamp Parsing Settings**. Make sure the setting matches the timezone on the log files. * **Enable Timestamp Parsing**. Select **Extract timestamp information from log file entries**. - * **Time Zone**. Select the option to **Use time zone from log file. If none is present use:** and set the timezone to **UTC**. + * **Time Zone**. Select the option to **Use time zone from the log file. If none is present use:** and set the timezone to **UTC**. * **Timestamp Format**. Select the option to **Automatically detect the format**. * **Encoding**. UTF-8 is the default, but you can choose another encoding format from the menu if your IIS logs are encoded differently. - * **Enable Multiline Processing**. Uncheck the box to **Detect messages spanning multiple lines**. Since IIS Error logs are single line log files, disabling this option will ensure that your messages are collected correctly. + * **Enable Multiline Processing**. Uncheck the box to **Detect messages spanning multiple lines**. Since IIS Error logs are single-line log files, disabling this option will ensure that your messages are collected correctly. 4. Click **Save**. 
After a few minutes, your new Source should be propagated down to the Collector and will begin submitting your IIS HTTP Error log files to the Sumo Logic service. -#### Configure Source for IIS Performance (Perfmon) Logs +### Configure source for IIS Performance (Perfmon) logs This section demonstrates how to configure a Windows Performance Source, for use with an [Installed Collector](/docs/integrations/web-servers/iis-10). Use the appropriate source for your environment: * [Local Windows Performance Monitor Log Source](/docs/send-data/installed-collectors/sources/local-windows-performance-monitor-log-source) (**recommended**) @@ -179,7 +170,7 @@ To configure a Source for IIS Performance Logs, do the following: 2. Configure the Local Windows Performance Source Fields as follows: * **Name**. Required (for example, "IIS Performance") * **Source Category** (Required). For example, Webserver/IIS/PerfCounter. - * **Frequency**. **Every Minute** (you may custom choose frequency) + * **Frequency**. **Every Minute** (you can choose a custom frequency) * **Description**. (Optional) * **Fields**. Set the following fields: * `component = webserver` @@ -198,7 +189,7 @@ To configure a Source for IIS Performance Logs, do the following: ### Configure metrics collection -#### Set up a Sumo Logic HTTP Source +### Set up a Sumo Logic HTTP source 1. **Configure a Hosted Collector for Metrics**. To create a new Sumo Logic hosted collector, perform the steps in the [Create a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) documentation. 2. **Configure an HTTP Logs & Metrics source**: @@ -210,13 +201,13 @@ To configure a Source for IIS Performance Logs, do the following: 3. Select **Save**. 4. Take note of the URL provided once you click **Save**. You can retrieve it again by selecting the **Show URL** next to the source on the Collection Management screen. -#### Set up Telegraf +### Set up Telegraf 1. **Install Telegraf if you haven’t already**.
Use the [following steps](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf) to install Telegraf. 2. **Configure and start Telegraf**. As part of collecting metrics data with Telegraf, we will use the [Windows Performance Counters Input Plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) to collect metrics and the [Sumo Logic output plugin](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/sumologic) to send data to Sumo Logic.
-Copy and paste this `telegraf.conf` file and modify for your environment (click to expand). +Copy and paste this `telegraf.conf` file and modify it for your environment (click to expand). ```sql [[inputs.win_perf_counters]] @@ -402,7 +393,7 @@ To configure a Source for IIS Performance Logs, do the following: * `webserver_farm`. Enter a name to identify this IIS Server farm. This farm name will be shown in our dashboards. Use “`default`” if none is present. * In the output plugins section, which is `[[outputs.sumologic]]`: * `URL`. This is the HTTP source URL created previously. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin. - * If you haven’t defined a farm in IIS Server, enter ‘**default**’ for `webserver_farm`. + * If you haven’t defined a farm in the IIS Server, enter ‘**default**’ for `webserver_farm`. * There are additional values set by the Telegraf configuration. We strongly advise against changing these values as it might cause the Sumo Logic app to not function correctly. * `data_format: “prometheus”`. In the output `[[outputs.sumologic]]` plugins section. Metrics are sent in the Prometheus format to Sumo Logic. * `component - “webserver”`. In the input `[[inputs.win_perf_counters]]` plugins section. This value is used by Sumo Logic apps to identify application components. @@ -414,42 +405,29 @@ At this point, Telegraf should start collecting the IIS Server metrics and forwa ## Installing the IIS app -This section demonstrates how to install the IIS app and assumes you have already set up the collection as described in [Collect Logs and Metrics for the IIS](#collecting-logs-and-metrics-for-the-iis-app). - -To install the app: +import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; -Locate and install the app you need from the **App Catalog**. If you want to see a preview of the dashboards included with the app before installing, click **Preview Dashboards**. + -1.
From the **App Catalog**, search for and select the app. -2. Select the version of the service you're using and click **Add to Library**. - :::note - Version selection is not available for all apps. - ::: -3. To install the app, complete the following fields. - 1. **App Name**. You can retain the existing name, or enter a name of your choice for the app. - 2. **Data Source**. Choose **Enter a Custom Data Filter**, and enter a custom IIS Server farm filter. Examples: - * For all IIS Server farms, `webserver_farm=*`. - * For a specific farm, `webserver_farm=iis.dev.01`. - * Farms within a specific environment, `webserver_farm=iis.dev.01` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection). -3. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. -4. Click **Add to Library**. - -Once an app is installed, it will appear in your **Personal** folder, or another folder that you specified. From here, you can share it with your organization. +The following fields will be created automatically as a part of the app installation process: +* `component` +* `environment` +* `webserver_system` +* `webserver_farm` +* `pod` -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. +## Viewing IIS dashboards -## Viewing IIS Dashboards +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. 
You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables). -::: + ### Overview -The **IIS - Overview** dashboard provides a high-level view of the performance and integrity of your Microsoft Internet Information Services (IIS) infrastructure. Dashboard panels display visual graphs and detailed information on IIS versions, platforms, and log formats. Panels also show visitor geographic locations, top app requests. OS platforms, response status, response times, and client and server errors. +The **IIS - Overview** dashboard provides a high-level view of the performance and integrity of your Microsoft Internet Information Services (IIS) infrastructure. Dashboard panels display visual graphs and detailed information on IIS versions, platforms, and log formats. Panels also show visitor geographic locations, top app requests, OS platforms, response status, response times, and client and server errors. Use this dashboard to: -* Get a high-level overview of sites, requests, connect, cache, data received and sent, queue, application pool, client location, client platforms, error and threats identified. +* Get a high-level overview of sites, requests, connect, cache, data received and sent, queue, application pool, client location, client platforms, errors, and threats identified. * Drill down to specific use cases by clicking on specific panels of interest. IIS-Overview @@ -459,7 +437,7 @@ Use this dashboard to: The **IIS - HTTP Error** dashboard provides detailed information on IIS error logging in HTTP. Dashboard panels show details on error events, top client and server IP addresses, top protocol versions, and protocol status. Panels also show information on top reason phrases and verbs associated with HTTP errors, as well as top request details by reason. Use this dashboard to: -* Monitor errors logged by HTTP.SYS.
The client request may be rejected by HTTP.SYS before it made it to an IIS worker process. In such cases the error is logged in the HTTPERR logs. +* Monitor errors logged by HTTP.SYS. The client request may be rejected by HTTP.SYS before it reaches an IIS worker process. In such cases, the error is logged in the HTTPERR logs. * Identify the reason for failure. Check if the request violated the HTTP protocol, or if there was a WAS/the application pool failure. * Correct the error identified to ensure a consistent and satisfactory user experience. @@ -490,7 +468,7 @@ Use this dashboard to: ### Threat Analysis -The **IIS - Threat Analysis** dashboard provides high-level views of threats throughout your IIS network. Dashboard panels display visual graphs and detailed information on Threats by Client IP, Threats by Actors, and Threat by Malicious Confidence. +The **IIS - Threat Analysis** dashboard provides high-level views of threats throughout your IIS network. Dashboard panels display visual graphs and detailed information on Threats by Client IP, Threats by Actors, and Threats by Malicious Confidence. Use this dashboard to: * Identify potential threats and indicators of compromise. @@ -515,9 +493,9 @@ The **IIS - Web Server Operations** dashboard provides visual graphs and detaile The **IIS - Requests Stats** dashboard provides visual graphs and statistics for requests made throughout your IIS infrastructure. Dashboard panels show the number of requests, request methods, request outliers, and requests by server. Panels also show details on GET, PUT, POST, and DELETE requests, as well as requests time compare and unique visitors outlier. Use this dashboard to: -* Monitor the load on your site for all requests, based on specific type of HTTP request and by server. This information allows you to efficiently allocate resources. +* Monitor the load on your site for all requests, based on the specific type of HTTP request and by server.
This information allows you to efficiently allocate resources. * Identify outliers in requests. -* Analyze request volume trends are against last 7 days to understand business fluctuations. +* Analyze request volume trends against the last 7 days to understand business fluctuations. * Identify how you are acquiring unique users with unique client outliers, and compare with positive and negative outliers. IIS-Requests-Stats @@ -542,7 +520,7 @@ The **IIS - Visitor Traffic Insights** Dashboard provides detailed information o ### Application Pool -The **IIS - Application Pool** dashboard provides a high-level view of Application Pool State, Information and Worker Process Metrics. +The **IIS - Application Pool** dashboard provides a high-level view of the Application Pool State, Information, and Worker Process Metrics. IIS-Application-Pool @@ -553,7 +531,7 @@ The **IIS - ASP.NET** dashboard provides a high-level view of the ASP.NET global Use this dashboard to: * Analyze State Server Sessions. -* Monitor Applications Information. +* Monitor application information. * Understand Request execution and wait time. IIS-ASP.NET @@ -568,7 +546,7 @@ Use this dashboard to monitor the following key metrics: * Errors * Cache * Requests Executing -* Requests in Application Queue +* Requests in the Application Queue * Pipeline Instance Count * Output Cache @@ -577,7 +555,7 @@ Use this dashboard to monitor the following key metrics: ### Cache Performance -The **IIS - Cache Performance** dashboard provides a high-level view of the the Web Service Cache Counters object includes cache counters specific to the World Wide Web Publishing Service. +The **IIS - Cache Performance** dashboard provides a high-level view of the Web Service Cache Counters object, which includes cache counters specific to the World Wide Web Publishing Service.
Use this dashboard to monitor the following key metrics: @@ -591,7 +569,7 @@ Use this dashboard to monitor the following key metrics: ### Web Service -The **IIS - Web Service** dashboard provides a high-level view of the Web Service object includes counters specific to the World Wide Web Publishing Service. +The **IIS - Web Service** dashboard provides a high-level view of the Web Service object, which includes counters specific to the World Wide Web Publishing Service. Use this dashboard to monitor the following key metrics: @@ -603,19 +581,16 @@ Use this dashboard to monitor the following key metrics: IIS-Web-Service - ## Installing IIS monitors import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the IIS alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: + -## Using IIS Alerts +## Using IIS alerts -Sumo Logic provides out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the IIS server is available and performing as expected. These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. They are as follows: +<details>
+Here are the alerts available for IISv10 (click to expand). | Alert Name | Alert Description | Trigger Type (Critical / Warning) | Alert Condition | Recover Condition | |:---|:---|:---|:---|:---| @@ -626,3 +601,4 @@ Sumo Logic provides out-of-the-box alerts available through [Sumo Logic monitors | IIS - Slow Response Time | This alert fires when the response time for a given IIS server is greater than one second. | Warning | > 0 | 0 | | IIS - ASP.NET Application Errors | This alert fires when we detect an error in the ASP.NET applications running on an IIS server. | Warning | >0 | < = 0 | | IIS - Blocked Async IO Requests | This alert fires when we detect that there are blocked async I/O requests on an IIS server. | Warning | >0 | < = 0 | +
diff --git a/docs/integrations/web-servers/nginx-ingress.md b/docs/integrations/web-servers/nginx-ingress.md index 007c7a9477..693d1e2fe9 100644 --- a/docs/integrations/web-servers/nginx-ingress.md +++ b/docs/integrations/web-servers/nginx-ingress.md @@ -14,11 +14,11 @@ The Nginx Ingress app is a unified logs and metrics app that helps you monitor t This app is tested with the following Nginx Ingress versions: * For Kubernetes environments: Nginx version 1.21.3 -## Log and Metrics Types +## Log and metrics types The Sumo Logic app for Nginx Ingress assumes the NCSA extended/combined log file format for Access logs and the default Nginx error log file format for error logs. -All [Dashboards](#viewing-nginx-ingress-dashboards) (except the Error logs Analysis dashboard) assume the Access log format. The Error logs Analysis Dashboard assumes both Access and Error log formats, so as to correlate information between the two. +All [Dashboards](#viewing-nginx-ingress-dashboards) (except the Error logs Analysis dashboard) assume the Access log format. The Error Logs Analysis Dashboard assumes both Access and Error log formats, so as to correlate information between the two. For more details on Nginx logs, see [Module ngx_http_log_module](http://nginx.org/en/docs/http/ngx_http_log_module.html). @@ -35,16 +35,7 @@ In the Kubernetes environment, we use our Sumo Logic Kubernetes collection. 
You Configuring log and metric collection for the Nginx Ingress app includes the following tasks: -### Step 1: Configure fields in Sumo Logic - -Additionally, if you're using Nginx Ingress in the Kubernetes environment, the following fields will be created automatically during the app installation process: - -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_webserver_system` -* `pod_labels_webserver_farm` - -### Step 2: Configure Nginx Ingress Logs and Metrics Collection +### Step 2: Configure Nginx Ingress logs and metrics collection Sumo Logic supports the collection of logs and metrics data from Nginx Ingress in Kubernetes environments. @@ -54,7 +45,7 @@ It’s assumed that you are using the latest helm chart version if not please up 1. Before you can configure Sumo Logic to ingest metrics, you must enable the Prometheus metrics in the Nginx Ingress controller and annotate the Nginx Ingress pods, so Prometheus can find the Nginx Ingress metrics. For instructions on Nginx Open Source, refer to [this documentation](https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/prometheus/). 2. Ensure you have deployed the [Sumologic-Kubernetes-Collection](https://github.com/SumoLogic/sumologic-kubernetes-collection), to send the logs and metrics to Sumologic. For more information on deploying Sumologic-Kubernetes-Collection, [visit here](/docs/send-data/kubernetes/install-helm-chart). Once deployed, logs will automatically be picked up and sent by default. Prometheus will scrape the Nginx Ingress pods, based on the annotations set in Step 1, for the metrics. Logs and Metrics will automatically be sent to the respective [Sumo Logic Distribution for OpenTelemetry Collector](https://github.com/SumoLogic/sumologic-otel-collector) instances, which consistently tag your logs and metrics, then forward them to your Sumo Logic org. -3. Apply following labels to the Nginx Ingress pod. +3. Apply the following labels to the Nginx Ingress pod. 
```sql environment="prod_CHANGEME" component="webserver" @@ -72,52 +63,38 @@ It’s assumed that you are using the latest helm chart version if not please up **FER to normalize the fields in Kubernetes environments**. Labels created in Kubernetes environments automatically are prefixed with `pod_labels`. To normalize these for our app to work, a Field Extraction Rule named **AppObservabilityNginxIngressWebserverFER** is automatically created for Nginx Application Components.
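As a sketch only, labels like those above sit under `metadata.labels` in the pod (or pod template) spec. `prod_CHANGEME` is the placeholder from the snippet above; the pod name, container name, and image below are hypothetical, and any remaining labels from the snippet follow the same `key: value` pattern:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ingress-controller   # hypothetical pod name
  labels:
    environment: "prod_CHANGEME"   # placeholder from the snippet above
    component: "webserver"
    # ...any remaining labels from the snippet above go here as key: value
spec:
  containers:
    - name: nginx-ingress          # hypothetical container name
      image: nginx/nginx-ingress   # hypothetical image reference
```

In a Deployment, the same labels go under `spec.template.metadata.labels` so that every pod the controller creates carries them.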
## Installing the Nginx Ingress app +import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; -This section demonstrates how to install the Nginx Ingress app. These instructions assume you have already set up the collection as described above. - -To install the app: - -Locate and install the app you need from the **App Catalog**. If you want to see a preview of the dashboards included with the app before installing, click **Preview Dashboards**. - -1. From the **App Catalog**, search for and select the app. -2. Select the version of the service you're using and click **Add to Library**. -3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. - 2. **Data Source.** - 3. Choose **Enter a Custom Data Filter**, and enter a custom Nginx Ingress farm filter. Examples: - 1. For all Nginx Ingress farms: `webserver_farm=*`. - 2. For a specific farm: `webserver_farm=nginx-ingress.dev.01`. - 3. Farms within a specific environment: `webserver_farm=nginx-ingress.dev.01` and `environment=prod` (This assumes you have set the optional environment tag while configuring collection). - 4. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. -4. Click **Add to Library**. + -Once an app is installed, it will appear in your **Personal** folder, or another folder that you specified. From here, you can share it with your organization. - -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. 
+Additionally, if you're using Nginx Ingress in the Kubernetes environment, the following fields will be created automatically during the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_webserver_system` +* `pod_labels_webserver_farm` ## Viewing Nginx Ingress dashboards +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: + ### Overview -The **Nginx Ingress - Overview** dashboard provides an at-a-glance view of the NGINX server access locations, error logs along with connection metrics. +The **Nginx Ingress - Overview** dashboard provides an at-a-glance view of the NGINX server access locations, error logs, and connection metrics. Use this dashboard to: * Gain insights into originated traffic location by region. This can help you allocate computer resources to different regions according to their needs. * Gain insights into your Nginx health using Critical Errors and Status of Nginx Server. -* Get insights into Active and dropped connection. +* Get insights into Active and dropped connections. Nginx-Overview ### Error Logs -The **Nginx Ingress - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. 
+The **Nginx Ingress - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections, outliers, client requests, request trends, and request outliers. -The Nginx Ingress - Error Logs Analysis Dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. +The Nginx Ingress - Error Logs Analysis Dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections, outliers, client requests, request trends, and request outliers. Use this dashboard to: @@ -144,7 +121,7 @@ The **Nginx Ingress - Outlier Analysis** dashboard provides a high-level view o Use this dashboard to: -* Detect outliers in your infrastructure with Sumo Logic’s machine learning algorithm. +* Detect outliers in your infrastructure with Sumo Logic’s machine-learning algorithm. * To identify outliers in incoming traffic and the number of errors encountered by your servers. You can use schedule searches to send alerts to yourself whenever there is an outlier detected by Sumo Logic. @@ -165,8 +142,8 @@ Use this dashboard to: The Nginx - Web Server Operations dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations, and data for clients with high error rates, server errors over time, and non 200 response code status codes. Dashboard panels also show information on server error logs, error log levels, error responses by a server, and the top URIs responsible for 404 responses. Use this dashboard to: -* Gain insights into Client, Server Responses on Nginx Server. 
This helps you identify errors in Nginx Server. -* To identify geo locations of all Client errors. This helps you identify client location causing errors and helps you to block client IPs. +* Gain insights into Client and Server Responses on the Nginx Server. This helps you identify errors in the Nginx Server. +* Identify geo-locations of all client errors. This helps you identify the client locations causing errors and block those client IPs. Nginx-Ingress-Web-Server-Operations @@ -185,7 +162,7 @@ Use this dashboard to: The **Nginx Ingress - Visitor Locations** dashboard provides a high-level view of Nginx visitor geographic locations both worldwide and in the United States. Dashboard panels also show graphic trends for visits by country over time and visits by US region over time. Use this dashboard to: -* Gain insights into geographic locations of your user base. This is useful for resource planning in different regions across the globe. +* Gain insights into the geographic locations of your user base. This is useful for resource planning in different regions across the globe. Nginx-Ingress-Visitor-Locations @@ -205,14 +182,14 @@ The **Nginx Ingress - Connections and Requests Metrics** dashboard provides insi Use this dashboard to: -* Gain information about active and dropped connections. This helps you identify the connection rejected by Nginx Server. -* Gain information about the total requests handled by Nginx Server per second. This helps you understand read, write requests on Nginx Server. +* Gain information about active and dropped connections. This helps you identify connections rejected by the Nginx Server. +* Gain information about the total requests handled by the Nginx Server per second. This helps you understand read and write requests on the Nginx Server.
Nginx-Ingress-Connections-and-Requests-Metrics ### Controller Metrics -The **Nginx Ingress - Ingress Controller Metrics** dashboard gives you insight on the status, reloads, failure of kubernetes Nginx ingress controller. +The **Nginx Ingress - Ingress Controller Metrics** dashboard gives you insight into the status, reloads, and failure of the Kubernetes Nginx ingress controller. Use this dashboard to: * Gain information about Nginx ingress Controller status and reloads. This helps you understand the availability of Nginx Ingress controllers. @@ -220,21 +197,16 @@ Use this dashboard to: Nginx-Ingress-Controller-Metrics -## Installing Nginx Ingress monitors +## Create monitors for Nginx Ingress app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -After [setting up collection](/docs/integrations/web-servers/nginx), you can proceed to installing the Nginx Ingress monitors, app, and view examples of each of dashboard. - -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Nginx Ingress alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: - -## Nginx Ingress Alerts + -Sumo Logic has provided out-of-the-box alerts available via [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Nginx server is available and performing as expected. These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. They are as follows: +## Nginx Ingress alerts +
+Here are the alerts available for Nginx Ingress (click to expand). | Alert Type (Metrics/Logs) | Alert Name | Alert Description | Trigger Type (Critical / Warning) | Alert Condition | Recover Condition | |:---|:---|:---|:---|:---|:---| | Logs | Nginx Ingress - Access from Highly Malicious Sources | This alert fires when an Nginx Ingress server is accessed from highly malicious IP addresses. | Critical | > 0 | < = 0 | @@ -242,3 +214,4 @@ Sumo Logic has provided out-of-the-box alerts available via [Sumo Logic monitors | Logs | Nginx Ingress - High Server (HTTP 5xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 5xx. | Critical | > 0 | 0 | | Logs | Nginx Ingress - Critical Error Messages | This alert fires when we detect critical error messages for a given Nginx Ingress server. | Critical | > 0 | 0 | | Metrics | Nginx Ingress - Dropped Connections | This alert fires when we detect dropped connections for a given Nginx Ingress server. | Critical | > 0 | 0 | +
diff --git a/docs/integrations/web-servers/nginx-plus-ingress.md b/docs/integrations/web-servers/nginx-plus-ingress.md index 5df3dae669..4e37b2c567 100644 --- a/docs/integrations/web-servers/nginx-plus-ingress.md +++ b/docs/integrations/web-servers/nginx-plus-ingress.md @@ -13,13 +13,13 @@ The Nginx Plus Ingress Controller for Kubernetes provides enterprise‑grade del This app supports Logs for Nginx Plus and Metrics for Nginx Plus Ingress Controller. ::: -The Nginx Plus Ingress app is a unified logs and metrics app that helps you monitor the availability, performance, health and resource utilization of your Nginx Plus Ingress web servers. Preconfigured dashboards and searches provide insight into server status, location zones, server zones, upstreams, resolvers, visitor locations, visitor access types, traffic patterns, errors, web server operations and access from known malicious sources. +The Nginx Plus Ingress app is a unified logs and metrics app that helps you monitor the availability, performance, health, and resource utilization of your Nginx Plus Ingress web servers. Preconfigured dashboards and searches provide insight into server status, location zones, server zones, upstreams, resolvers, visitor locations, visitor access types, traffic patterns, errors, web server operations, and access from known malicious sources. -## Log and Metrics Types +## Log and metrics types The Sumo Logic app for Nginx Plus Ingress assumes the NCSA extended/combined log file format for Access logs and the default Nginx error log file format for error logs. -All Dashboards (except the Error logs Analysis dashboard) assume the Access log format. The Error logs Analysis Dashboard assumes both Access and Error log formats, so as to correlate information between the two. For more details on Nginx logs, see [here](http://nginx.org/en/docs/http/ngx_http_log_module.html). +All Dashboards (except the Error Logs Analysis dashboard) assume the Access log format. 
The Error Logs Analysis Dashboard assumes both Access and Error log formats, to correlate information between the two. For more details on Nginx logs, see [here](http://nginx.org/en/docs/http/ngx_http_log_module.html). The Sumo Logic app for Nginx Plus Ingress assumes Prometheus format Metrics for Requests, Connections, and Ingress controller. For more details on Nginx Plus Ingress Metrics, see [here](https://docs.nginx.com/nginx-ingress-controller/logging-and-monitoring/prometheus/) @@ -78,7 +78,7 @@ Field Extraction Rules (FERs) tell Sumo Logic which fields to parse out automati Nginx assumes the NCSA extended/combined log file format for Access logs and the default Nginx Plus error log file format for error logs. -Both the parse expressions can be used for logs collected from Nginx Plus Server running on Local or container-based systems. +Both parse expressions can be used for logs collected from the Nginx Plus Server running on local or container-based systems. **FER for Access Logs** @@ -121,26 +121,25 @@ import AppInstall from '../../reuse/apps/app-install.md'; ## Viewing Nginx Plus Ingress Dashboards +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: + ### Overview -The **Nginx Plus Ingress - Overview** dashboard provides an at-a-glance view of the nginx plus server access locations, error logs along with connection metrics. +The **Nginx Plus Ingress - Overview** dashboard provides an at-a-glance view of the Nginx Plus server access locations, error logs, and connection metrics.
Use this dashboard to: * Gain insights into originated traffic location by region. This can help you allocate computer resources to different regions according to their needs. * Gain insights into your Nginx health using Critical Errors and Status of Nginx Server. -* Get insights into Active and dropped connection. +* Get insights into Active and dropped connections. Nginx Plus Ingress ### Error Logs Analysis -The **Nginx Plus Ingress - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. +The **Nginx Plus Ingress - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections, outliers, client requests, request trends, and request outliers. Use this dashboard to: * Track requests from clients. A request is a message asking for a resource, such as a page or an image. @@ -164,7 +163,7 @@ Use this dashboard to: The **Nginx Plus Ingress - Outlier Analysis** dashboard provides a high-level view of Nginx server outlier metrics for bytes served, number of visitors, and server errors. You can select the time interval over which outliers are aggregated, then hover the cursor over the graph to display detailed information for that point in time. Use this dashboard to: -* Detect outliers in your infrastructure with Sumo Logic’s machine learning algorithm. +* Detect outliers in your infrastructure with Sumo Logic’s machine-learning algorithm. * To identify outliers in incoming traffic and the number of errors encountered by your servers. You can use schedule searches to send alerts to yourself whenever there is an outlier detected by Sumo Logic. 
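The outlier panels above are driven by Sumo Logic's built-in outlier detection. Purely as an illustration of the underlying idea (not the actual algorithm, which operates on rolling windows of the time series), a simple standard-deviation rule flags points that sit far from the mean:

```python
# Illustration only: flag values more than k standard deviations from the
# mean. Sumo Logic's outlier operator is more sophisticated, but the
# intuition -- "far from the typical value" -- is the same.
from statistics import mean, stdev

def outliers(values, k=2.0):
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > k * s]

# A traffic spike stands out against otherwise steady request counts.
print(outliers([10, 12, 11, 13, 12, 11, 400]))  # [400]
```

Pairing a rule like this with a scheduled search is how the alerting described above works in practice: the search runs periodically and notifies you whenever the outlier set is non-empty.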
@@ -176,7 +175,7 @@ You can use schedule searches to send alerts to yourself whenever there is an ou The **Nginx Plus Ingress - Threat Intel** dashboard provides an at-a-glance view of threats to Nginx servers on your network. Dashboard panels display the threat count over a selected time period, geographic locations where threats occurred, source breakdown, actors responsible for threats, severity, and a correlation of IP addresses, method, and status code of threats. Use this dashboard to: -* To gain insights and understand threats in incoming traffic and discover potential IOCs. Incoming traffic requests are analyzed using theSumo Logic [threat intelligence](/docs/security/threat-intelligence/). +* Gain insights into threats in incoming traffic and discover potential IOCs. Incoming traffic requests are analyzed using the Sumo Logic [threat intelligence](/docs/security/threat-intelligence/). Nginx Plus Ingress @@ -185,8 +184,8 @@ The **Nginx Plus Ingress - Web Server Operations** dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations, and data for clients with high error rates, server errors over time, and non 200 response code status codes. Dashboard panels also show information on server error logs, error log levels, error responses by a server, and the top URIs responsible for 404 responses. Use this dashboard to: -* Gain insights into Client, Server Responses on Nginx Server. This helps you identify errors in Nginx Server. -* To identify geo locations of all Client errors. This helps you identify client location causing errors and helps you to block client IPs. +* Gain insights into Client and Server Responses on the Nginx Server. This helps you identify errors in the Nginx Server. +* Identify geo-locations of all client errors. This helps you identify the client locations causing errors and block those client IPs.
Nginx Plus Ingress @@ -206,7 +205,7 @@ These insights can be useful for planning in which browsers, platforms, and oper The **Nginx Plus Ingress - Visitor Locations** dashboard provides a high-level view of Nginx visitor geographic locations both worldwide and in the United States. Dashboard panels also show graphic trends for visits by country over time and visits by US region over time. Use this dashboard to: -* Gain insights into geographic locations of your user base. This is useful for resource planning in different regions across the globe. +* Gain insights into the geographic locations of your user base. This is useful for resource planning in different regions across the globe. Nginx Plus Ingress @@ -223,7 +222,7 @@ Use this dashboard to: ### Ingress Controller Metrics -The **Nginx Plus Ingress - Ingress Controller Metrics** dashboard provides you insight on the status, reloads, failure of kubernetes Nginx Plus ingress controller. +The **Nginx Plus Ingress - Ingress Controller Metrics** dashboard provides you insight into the status, reloads, and failure of the Kubernetes Nginx Plus ingress controller. Use this dashboard to: * Gain information about Nginx ingress Controller status and reloads. This helps you understand the availability of Nginx Ingress controllers. @@ -234,33 +233,33 @@ Use this dashboard to: ### HTTP Location Zones -The **Nginx Plus Ingress - HTTP Location Zones** metrics dashboard provides detailed statistics on the frontend performance, showing traffic speed, responses/requests count and various error responses. +The **Nginx Plus Ingress - HTTP Location Zones** metrics dashboard provides detailed statistics on the frontend performance, showing traffic speed, responses/requests count, and various error responses. Use this dashboard to: -* Gain information about Location http zones traffic: received and sent; speed, requires/responses amount, discarded traffic. 
-* Gain information about Location http zones error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about Location HTTP zones traffic: received and sent; speed, requests/responses amount, discarded traffic. +* Gain information about Location HTTP zones error responses: percentage of responses by the server, percentage of each type of error response. Nginx Plus Ingress ### HTTP Server Zones -The **Nginx Plus Ingress - HTTP Server Zones** metrics dashboard provides detailed statistics on the frontend performance, showing traffic speed, responses/requests count and various error responses. +The **Nginx Plus Ingress - HTTP Server Zones** metrics dashboard provides detailed statistics on the frontend performance, showing traffic speed, responses/requests count, and various error responses. Use this dashboard to: -* Gain information about Server http zones traffic: received and sent; speed, requires/responses amount, discarded traffic. -* Gain information about Server http zones error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about Server HTTP zones traffic: received and sent; speed, requests/responses amount, discarded traffic. +* Gain information about Server HTTP zones error responses: percentage of responses by the server, percentage of each type of error response. Nginx Plus Ingress ### HTTP Upstreams -The **Nginx Plus Ingress - HTTP Upstreams** metrics dashboard provides information about each upstream group for HTTP and HTTPS traffic, showing number of HTTP upstreams, servers, back-up servers, error responses and health monitoring. +The **Nginx Plus Ingress - HTTP Upstreams** metrics dashboard provides information about each upstream group for HTTP and HTTPS traffic, showing the number of HTTP upstreams, servers, backup servers, error responses, and health monitoring.
Use this dashboard to: -* Gain information about HTTP upstreams, servers and back-up servers. -* Gain information about HTTP upstreams traffic: received and sent; speed, requires/responses amount, downtime and response time. -* Gain information about HTTP upstreams error responses: percentage of responses by server, percentage of each type of error responses. -* Gain information about HTTP upstreams health monitoring. +* Gain information about HTTP upstreams, servers, and backup servers. +* Gain information about HTTP upstream traffic: received and sent; speed, requests/responses amount, downtime, and response time. +* Gain information about HTTP upstream error responses: percentage of responses by the server, percentage of each type of error response. +* Gain information about HTTP upstream health monitoring. Nginx Plus Ingress @@ -269,7 +268,7 @@ Use this dashboard to: The **Nginx Plus Ingress - Resolvers** metrics dashboard provides DNS server statistics of requests and responses per each DNS status zone. Use this dashboard to: -* Gain information about the total number of zones, responses and requests speed. +* Gain information about the total number of zones, responses, and request speed. * Gain information about error responses by each type of error. Nginx Plus Ingress @@ -277,13 +276,13 @@ Use this dashboard to: ### Nginx Plus Ingress- TCP/UDP Upstreams -The **Nginx Plus Ingress - TCP/UDP Upstreams** metrics dashboard provides information about each upstream group for TCP and UDP traffic, showing number of TCP and UDP upstreams, servers, back-up servers, error responses and health monitoring. +The **Nginx Plus Ingress - TCP/UDP Upstreams** metrics dashboard provides information about each upstream group for TCP and UDP traffic, showing the number of TCP and UDP upstreams, servers, backup servers, error responses, and health monitoring. Use this dashboard to: -* Gain information about TCP and UDP upstreams, servers and back-up servers.
-* Gain information about TCP and UDP upstreams traffic: received and sent; speed, requests/responses amount, downtime and response time. -* Gain information about TCP and UDP upstreams error responses: percentage of responses by server, percentage of each type of error responses. -* Gain information about TCP and UDP upstreams health monitoring. +* Gain information about TCP and UDP upstreams, servers, and backup servers. +* Gain information about TCP and UDP upstream traffic: received and sent; speed, requests/responses amount, downtime, and response time. +* Gain information about TCP and UDP upstream error responses: percentage of responses by the server, percentage of each type of error response. +* Gain information about TCP and UDP upstream health monitoring. Nginx Plus Ingress @@ -293,29 +292,25 @@ The **Nginx Plus Ingress - TCP/UDP Zones** metrics dashboard provides TCP and UD Use this dashboard to: * Gain information about TCP and UDP traffic: received and sent; speed, requires/responses amount, discarded traffic. -* Gain information about TCP and UDP error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about TCP and UDP error responses: percentage of responses by the server, percentage of each type of error response. Nginx Plus Ingress -## Installing Nginx Plus Ingress monitors +## Create monitors for Nginx Plus Ingress app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Nginx Plus Ingress alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). 
-::: - -## Nginx Plus Ingress Alerts - -Sumo Logic has provided out-of-the-box alerts available via [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Nginx server is available and performing as expected. These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. + -Sumo Logic provides the following out-of-the-box alerts: +## Nginx Plus Ingress alerts +
+Here are the alerts available for Nginx Plus Ingress (click to expand). | Alert Name | Alert Description | Alert Condition | Recover Condition | |:---|:---|:---|:---| | Nginx Plus Ingress - Dropped Connections | This alert fires when we detect dropped connections for a given Nginx Plus server. | > 0 | `<=`0 | | Nginx Plus Ingress - Critical Error Messages | This alert fires when we detect critical error messages for a given Nginx Plus server. | > 0 | `<=`0 | -| Nginx Plus Ingress - Access from Highly Malicious Sources | This alert fires when an Nginx is accessed from highly malicious IP addresses. | > 0 | `<=`0 | +| Nginx Plus Ingress - Access from Highly Malicious Sources | This alert fires when an Nginx server is accessed from highly malicious IP addresses. | > 0 | `<=`0 | | Nginx Plus Ingress - High Client (HTTP 4xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 4xx. | > 0 | `<=`0 | | Nginx Plus Ingress - High Server (HTTP 5xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 5xx. | > 0 | `<=`0 | +
diff --git a/docs/integrations/web-servers/nginx-plus.md b/docs/integrations/web-servers/nginx-plus.md index 7f9de9b775..4e3d353804 100644 --- a/docs/integrations/web-servers/nginx-plus.md +++ b/docs/integrations/web-servers/nginx-plus.md @@ -2,7 +2,7 @@ id: nginx-plus title: Nginx Plus sidebar_label: Nginx Plus -description: The Nginx Plus app is an unified logs and metrics app that helps you monitor the availability, performance, health and resource utilization of your Nginx Plus web servers. +description: The Nginx Plus app is a unified logs and metrics app that helps you monitor the availability, performance, health, and resource utilization of your Nginx Plus web servers. --- import useBaseUrl from '@docusaurus/useBaseUrl'; @@ -13,15 +13,15 @@ import TabItem from '@theme/TabItem'; The Sumo Logic app for Nginx Plus supports logs as well as Metrics for Nginx Plus, which is a web server that can be used as a reverse proxy, load balancer, mail proxy, and HTTP cache. -The Nginx Plus app is an unified logs and metrics app that helps you monitor the availability, performance, health and resource utilization of your Nginx Plus web servers. Preconfigured dashboards and searches provide insight into server status, location zones, server zones, upstreams, resolvers, visitor locations, visitor access types, traffic patterns, errors, web server operations and access from known malicious sources. +The Nginx Plus app is a unified logs and metrics app that helps you monitor the availability, performance, health, and resource utilization of your Nginx Plus web servers. Preconfigured dashboards and searches provide insight into server status, location zones, server zones, upstreams, resolvers, visitor locations, visitor access types, traffic patterns, errors, web server operations, and access from known malicious sources. 
-## Log and Metrics Types +## Log and metrics types The Sumo Logic app for Nginx Plus assumes the NCSA extended/combined log file format for Access logs and the default Nginx error log file format for error logs. -All Dashboards (except the Error logs Analysis dashboard) assume the Access log format. The Error logs Analysis Dashboard assumes both Access and Error log formats, so as to correlate information between the two. For more details on Nginx/NginxPlus logs, see [Module ngx_http_log_module](https://nginx.org/en/docs/http/ngx_http_log_module.html). +All dashboards (except the Error Logs Analysis dashboard) assume the Access log format. The Error Logs Analysis dashboard assumes both access and error log formats so that it can correlate information between the two. For more details on Nginx/NginxPlus logs, see [Module ngx_http_log_module](https://nginx.org/en/docs/http/ngx_http_log_module.html). -The Sumo Logic app for Nginx Plus assumes Prometheus format Metrics for Requests and Connections. For Nginx Plus Server metrics, API Module from Nginx Configuration is used. For more details on Nginx Plus Metrics, see [Module ngx_http_api_module](https://nginx.org/en/docs/http/ngx_http_api_module.html). +The Sumo Logic app for Nginx Plus assumes Prometheus format Metrics for Requests and Connections. For Nginx Plus Server metrics, the API Module from Nginx Configuration is used. For more details on Nginx Plus Metrics, see [Module ngx_http_api_module](https://nginx.org/en/docs/http/ngx_http_api_module.html). ### Sample log messages @@ -116,19 +116,19 @@ Prometheus pulls metrics from Telegraf and sends them to [Sumo Logic Distributio In the logs pipeline, Sumo Logic Distribution for OpenTelemetry Collector collects logs written to standard out and forwards them to another instance of Sumo Logic Distribution for OpenTelemetry Collector, which enriches metadata and sends logs to Sumo Logic. 
-#### Collect Logs for Nginx Plus in Kubernetes environment +### Collect logs for Nginx Plus in Kubernetes environment -Nginx Plus app supports the default access logs and error logs format. +The Nginx Plus app supports the default access logs and error logs format. 1. Before you can configure Sumo Logic to ingest logs, you must configure the logging of errors and processed requests in both Nginx Open Source and Nginx Plus. For instructions, refer to the [Configuring Logging documentation](https://docs.nginx.com/nginx/admin-guide/monitoring/logging/). 2. Use the Sumologic-Kubernetes-Collection, to send the logs to Sumologic. For more information, [visit](/docs/observability/kubernetes/collection-setup). -3. Identifying the logs metadata: For example, to get **Logs** data from the pod, you can use the following source `_sourceCategory = "kubernetes/default/nginx"` where `kubernetes` is Cluster name, `default` is Namespace, `nginx` is application. +3. Identify the logs metadata. For example, to get **Logs** data from the pod, you can use the following source: `_sourceCategory = "kubernetes/default/nginx"`, where `kubernetes` is the cluster name, `default` is the namespace, and `nginx` is the application. 4. To get log data from Nginx Pods - all nginx logs must be redirected to standard output “**stdout**” and standard error “**stderr**”. -#### Collect Metrics for Nginx Plus in Kubernetes environment +### Collect metrics for Nginx Plus in Kubernetes environment -Nginx Plus app supports the metrics for Nginx Plus. +The Nginx Plus app supports the metrics for Nginx Plus. The following steps assume you are collecting Nginx Plus metrics from a Kubernetes environment. In Kubernetes environments, we use the Telegraf Operator, which is packaged with our Kubernetes collection. You can learn more about this[ here](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/telegraf-collection-architecture). 
@@ -158,13 +158,13 @@ The following steps assume you are collecting Nginx Plus metrics from a Kubernet ### For Non-Kubernetes environments -We use the Telegraf operator for Nginx Plus metric collection and Sumo Logic Installed Collector for collecting Nginx Plus logs. The diagram below illustrates the components of the Nginx Plus collection in a non-Kubernetes environment.
nginxplus-nonk8s +We use the Telegraf operator for Nginx Plus metric collection and the Sumo Logic Installed Collector for collecting Nginx Plus logs. The diagram below illustrates the components of the Nginx Plus collection in a non-Kubernetes environment.
nginxplus-nonk8s -Telegraf runs on the same system as Nginx Plus, and uses the [Nginx Plus input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx_plus_api) to obtain Nginx Plus metrics, and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from Nginx on the other hand are sent to either a Sumo Logic Local File source. +Telegraf runs on the same system as Nginx Plus and uses the [Nginx Plus input plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx_plus_api) to obtain Nginx Plus metrics, and the Sumo Logic output plugin to send the metrics to Sumo Logic. Logs from Nginx Plus, on the other hand, are sent to a Sumo Logic Local File Source. -#### Collect Logs for Nginx Plus in Non-Kubernetes environment +### Collect logs for Nginx Plus in Non-Kubernetes environment -Nginx Plus app supports the default access logs and error logs format. +The Nginx Plus app supports the default access logs and error logs format. This section provides instructions for configuring log collection for the Sumo Logic app for Nginx Plus. Follow the instructions below to set up the Log collection. @@ -221,13 +221,13 @@ If you're using a service like Fluentd, or you would like to upload your logs ma
-#### Collect Metrics for Nginx Plus in Non-Kubernetes environment +### Collect metrics for Nginx Plus in Non-Kubernetes environment -Nginx Plus app supports the metrics for Nginx Plus. +The Nginx Plus app supports the metrics for Nginx Plus. This section provides instructions for configuring metrics collection for the Sumo Logic app for Nginx Plus. Follow the below instructions to set up the metric collection. -1. **Configure Metrics in Nginx Plus**. Before you can configure Sumo Logic to ingest metrics, you must enable API module to expose metrics in NGINX Plus. +1. **Configure Metrics in Nginx Plus**. Before you can configure Sumo Logic to ingest metrics, you must enable the API module to expose metrics in NGINX Plus. * The live activity monitoring data is generated by the [NGINX Plus API](https://nginx.org/en/docs/http/ngx_http_api_module.html). Visit [live activity monitoring](https://www.nginx.com/products/nginx/live-activity-monitoring/) to configure the API module. * Make a note of the URL where the API is exposed. It will match the format like[ http://localhost/api](https://127.0.0.1:8080/nginx_status). 2. **Configure a Hosted Collector**. To create a new Sumo Logic hosted collector, perform the steps in the[Create a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) section of the Sumo Logic documentation. @@ -251,10 +251,10 @@ This section provides instructions for configuring metrics collection for the Su ``` * `interval` - This is the frequency to send data to Sumo Logic, in this example, we will send the metrics every 60 seconds. Please refer to [this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/install-telegraf#configuring-telegraf) for more parameters that can be configured in the Telegraf agent globally. -* `urls` - The url to the Nginx Plus server with the API enabled. This can be a comma-separated list to connect to multiple Nginx Plus servers. 
Please refer [to this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx_plus_api) for more information on configuring the Nginx API input plugin for Telegraf. +* `urls` - The URL to the Nginx Plus server with the API enabled. This can be a comma-separated list to connect to multiple Nginx Plus servers. Please refer [to this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx_plus_api) for more information on configuring the Nginx API input plugin for Telegraf. * `url` - This is the HTTP source URL created in step 3. Please refer[ to this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/configure-telegraf-output-plugin.md) for more information on configuring the Sumo Logic Telegraf output plugin. * `data_format` - The format to use when sending data to Sumo Logic. Please refer[ to this doc](/docs/send-data/collect-from-other-data-sources/collect-metrics-telegraf/configure-telegraf-output-plugin.md) for more information on configuring the Sumo Logic Telegraf output plugin. -6. Once you have finalized your telegraf.conf file, you can run the following command to start telegraf. +6. Once you have finalized your telegraf.conf file, you can run the following command to start Telegraf. ```bash telegraf --config /path/to/telegraf.conf ``` @@ -262,13 +262,13 @@ This section provides instructions for configuring metrics collection for the Su
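Putting the fields above together, a minimal `telegraf.conf` might look like the following sketch. This is illustrative only: the API endpoint, HTTP source URL, and tag values are placeholders you must replace, and the tag names mirror the fields used elsewhere in this app's configuration.

```toml
# Minimal Telegraf sketch for Nginx Plus metrics (all values are placeholders).
[agent]
  ## Frequency at which metrics are collected and sent to Sumo Logic.
  interval = "60s"

[[inputs.nginx_plus_api]]
  ## URL(s) where the Nginx Plus API is exposed (see step 1).
  urls = ["http://localhost/api"]

  [inputs.nginx_plus_api.tags]
    environment = "prod_CHANGEME"
    component = "webserver"
    webserver_system = "nginx_plus"
    webserver_farm = "nginx_plus_farm_CHANGEME"

[[outputs.sumologic]]
  ## HTTP Logs & Metrics source URL created in the earlier step.
  url = "<URL_from_HTTP_source>"
  ## Metrics are sent to Sumo Logic in Prometheus format.
  data_format = "prometheus"
```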
-### Field Extraction Rules +### Field extraction rules Field Extraction Rules (FERs) tell Sumo Logic which fields to parse out automatically. For instructions, on creating them, see [Create a Field Extraction Rule](/docs/manage/field-extractions/create-field-extraction-rule). Nginx assumes the NCSA extended/combined log file format for Access logs and the default Nginx error log file format for error logs. -Both the parse expressions can be used for logs collected from Nginx Plus Server running on Local or container-based systems. +Both parse expressions can be used for logs collected from an Nginx Plus server running on local or container-based systems. For **FER for Access Logs**, use the following Parse Expression: @@ -298,15 +298,15 @@ import AppInstall from '../../reuse/apps/app-install.md'; -## Viewing Nginx Plus Dashboards +## Viewing Nginx Plus dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables.md). -::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + + ### Overview -The **Nginx Plus - Overview** dashboard provides an at-a-glance view of the Nginx Plus server access locations, error logs along with connection metrics. +The **Nginx Plus - Overview** dashboard provides an at-a-glance view of the Nginx Plus server access locations, error logs, and connection metrics. @@ -318,7 +318,7 @@ Use this dashboard to: ### Error Logs Analysis -The **Nginx Plus - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. 
The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. +The **Nginx Plus - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. Use this dashboard to: * Track requests from clients. A request is a message asking for a resource, such as a page or an image. @@ -333,7 +333,7 @@ The **Nginx Plus - Logs Timeline Analysis** dashboard provides a high-level view Use this dashboard to: -* Understand the traffic distribution across servers, provide insights for resource planning by analyzing data volume and bytes served. +* Understand the traffic distribution across servers, and provide insights for resource planning by analyzing data volume and bytes served. * Gain insights into originated traffic location by region. This can help you allocate compute resources to different regions according to their needs. tk @@ -368,8 +368,8 @@ Use this dashboard to: The **Nginx Plus - Web Server Operations** dashboard provides a high-level view combined with detailed information on the top ten bots, geographic locations, and data for clients with high error rates, server errors over time, and non 200 response code status codes. Dashboard panels also show information on server error logs, error log levels, error responses by a server, and the top URIs responsible for 404 responses. Use this dashboard to: -* Gain insights into Client, Server Responses on Nginx Server. This helps you identify errors in Nginx Server. -* To identify geo locations of all Client errors. This helps you identify client location causing errors and helps you to block client IPs. +* Gain insights into Client and Server Responses on the Nginx Server. 
This helps you identify errors in the Nginx Server. +* Identify geo-locations of all client errors. This helps you identify client locations causing errors and helps you block client IPs. tk @@ -390,7 +390,7 @@ These insights can be useful for planning in which browsers, platforms, and oper The **Nginx Plus - Visitor Locations** dashboard provides a high-level view of Nginx visitor geographic locations both worldwide and in the United States. Dashboard panels also show graphic trends for visits by country over time and visits by US region over time. Use this dashboard to: -* Gain insights into geographic locations of your user base. This is useful for resource planning in different regions across the globe. +* Gain insights into the geographic locations of your user base. This is useful for resource planning in different regions across the globe. tk @@ -411,7 +411,7 @@ Use this dashboard to: The **Nginx Plus - Caches** dashboard provides insight into cache states, cache hit rate, and cache disk usage over time. Use this dashboard to: -* Gain information about the number of caches used, how many of them are in active (hot) state and what is the hit rate of the cache. +* Gain information about the number of caches used, how many of them are in an active (hot) state, and what the cache hit rate is. * Gain information about how much disk space is used for cache. tk @@ -423,8 +423,8 @@ The **Nginx Plus - HTTP Location Zones** dashboard provides detailed statistics Use this dashboard to: -* Gain information about Location http zones traffic: received and sent; speed, requires/responses amount, discarded traffic. -* Gain information about Location http zones error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about Location HTTP zones traffic: received and sent; speed, requests/responses amount, discarded traffic. 
+* Gain information about Location HTTP zones error responses: percentage of responses by the server, percentage of each type of error response. tk @@ -435,22 +435,22 @@ The **Nginx Plus - HTTP Server Zones** dashboard provides detailed statistics on Use this dashboard to: -* Gain information about Server http zones traffic: received and sent; speed, requires/responses amount, discarded traffic. -* Gain information about Server http zones error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about Server HTTP zones traffic: received and sent; speed, requests/responses amount, discarded traffic. +* Gain information about Server HTTP zones error responses: percentage of responses by the server, percentage of each type of error response. tk ### HTTP Upstreams -The **Nginx Plus - HTTP Upstreams** dashboard provides information about each upstream group for HTTP and HTTPS traffic, showing number of HTTP upstreams, servers, back-up servers, error responses, and health monitoring. +The **Nginx Plus - HTTP Upstreams** dashboard provides information about each upstream group for HTTP and HTTPS traffic, showing the number of HTTP upstreams, servers, backup servers, error responses, and health monitoring. Use this dashboard to: -* Gain information about HTTP upstreams, servers and back-up servers. -* Gain information about HTTP upstreams traffic: received and sent; speed, requires/responses amount, downtime and response time. +* Gain information about HTTP upstreams, servers, and backup servers. +* Gain information about HTTP upstream traffic: received and sent; speed, requests/responses amount, downtime, and response time. 
+* Gain information about HTTP upstream error responses: percentage of responses by the server, percentage of each type of error response. +* Gain information about HTTP upstream health monitoring. tk @@ -461,21 +461,21 @@ The **Nginx Plus - Resolvers** dashboard provides DNS server statistics of reque Use this dashboard to: -* Gain information about the total number of zones, responses, and requests speed. +* Gain information about the total number of zones, responses, and request speed. * Gain information about error responses by each type of error. tk ### TCP/UDP Upstreams -The **Nginx Plus - TCP/UDP Upstreams** dashboard provides information about each upstream group for TCP and UDP traffic, showing number of TCP and UDP upstreams, servers, back-up servers, error responses, and health monitoring. +The **Nginx Plus - TCP/UDP Upstreams** dashboard provides information about each upstream group for TCP and UDP traffic, showing the number of TCP and UDP upstreams, servers, backup servers, error responses, and health monitoring. Use this dashboard to: -* Gain information about TCP and UDP upstreams, servers, and back-up servers. -* Gain information about TCP and UDP upstreams traffic: received and sent; speed, requests/responses amount, downtime, and response time. -* Gain information about TCP and UDP upstreams error responses: percentage of responses by server, percentage of each type of error responses. -* Gain information about TCP and UDP upstreams health monitoring. +* Gain information about TCP and UDP upstreams, servers, and backup servers. +* Gain information about TCP and UDP upstream traffic: received and sent; speed, requests/responses amount, downtime, and response time. +* Gain information about TCP and UDP upstream error responses: percentage of responses by the server, percentage of each type of error response. +* Gain information about TCP and UDP upstream health monitoring. 
tk @@ -487,27 +487,25 @@ The **Nginx Plus - TCP/UDP Zones** dashboard provides TCP and UDP status zones w Use this dashboard to: * Gain information about TCP and UDP traffic: received and sent; speed, requires/responses amount, discarded traffic. -* Gain information about TCP and UDP error responses: percentage of responses by server, percentage of each type of error responses. +* Gain information about TCP and UDP error responses: percentage of responses by the server, percentage of each type of error response. tk -## Installing Nginx Plus monitors +## Create monitors for Nginx Plus app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Nginx Plus alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: - -## Nginx Plus Alerts + -Sumo Logic has provided out-of-the-box alerts available via[ Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Nginx Plus server is available and performing as expected. These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. They are as follows: +## Nginx Plus alerts +
+Here are the alerts available for Nginx Plus (click to expand). | Name | Description | Alert Condition | Recover Condition | |:---|:---|:---|:---| | Nginx Plus - Dropped Connections | This alert fires when we detect dropped connections for a given Nginx Plus server. | > 0 | < = 0 | | Nginx Plus - Critical Error Messages | This alert fires when we detect critical error messages for a given Nginx Plus server. | > 0 | < = 0 | -| Nginx Plus - Access from Highly Malicious Sources | This alert fires when an Nginx Plus is accessed from highly malicious IP addresses. | > 0 | < = 0 | +| Nginx Plus - Access from Highly Malicious Sources | This alert fires when an Nginx Plus server is accessed from highly malicious IP addresses. | > 0 | < = 0 | | Nginx Plus - High Client (HTTP 4xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 4xx. | > 0 | < = 0 | | Nginx Plus - High Server (HTTP 5xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 5xx. | > 0 | < = 0 | +
diff --git a/docs/integrations/web-servers/nginx.md b/docs/integrations/web-servers/nginx.md index a5500910ad..d2966759d3 100644 --- a/docs/integrations/web-servers/nginx.md +++ b/docs/integrations/web-servers/nginx.md @@ -54,22 +54,7 @@ Learn to set up NGINX for non-Kubernetes Sources. This section provides instructions for configuring log and metric collection for the Sumo Logic app for Nginx. The following tasks are required: -### Step 1: Configure fields in Sumo Logic - -As part of the app installation process, the following fields will be created by default: -* `component` -* `environment` -* `webserver_system` -* `webserver_farm` -* `pod` - -Additionally, if you are using Nginx in the Kubernetes environment, the following additional fields will be created by default during the app installation process: -* `pod_labels_component` -* `pod_labels_environment` -* `pod_labels_webserver_system` -* `pod_labels_webserver_farm` - -### Step 2: Configure Nginx Logs and Metrics Collection +### Step 2: Configure Nginx logs and metrics collection Sumo Logic supports the collection of logs and metrics data from Nginx in both Kubernetes and non-Kubernetes environments. Please click on the appropriate links below based on the environment where your Nginx farms are hosted. @@ -97,7 +82,7 @@ In the logs pipeline, Sumo Logic Distribution for OpenTelemetry Collector collec It’s assumed that you are using the latest helm chart version. If not, upgrade using the instructions [here](/docs/send-data/kubernetes). ::: -#### Configure Metrics Collection +### Configure metrics collection This section explains the steps to collect Nginx metrics from a Kubernetes environment. @@ -137,7 +122,7 @@ Modifying these values will cause the Sumo Logic apps to not function correctly. * `telegraf.influxdata.com/class: sumologic-prometheus`. This instructs the Telegraf operator what output to use. This should not be changed. * `prometheus.io/scrape: "true"`. 
This ensures our Prometheus will scrape the metrics. -* `prometheus.io/port: "9273"`. This tells prometheus what ports to scrape on. This should not be changed. +* `prometheus.io/port: "9273"`. This tells Prometheus which port to scrape. This should not be changed. * `telegraf.influxdata.com/inputs` * In the tags section, that is `[inputs.nginx.tags]` * `component: “webserver”`: This value is used by Sumo Logic apps to identify application components. @@ -150,12 +135,12 @@ Modifying these values will cause the Sumo Logic apps to not function correctly. 4. Verify metrics in Sumo Logic. -#### Configure Logs Collection +#### Configure logs collection This section explains the steps to collect Nginx logs from a Kubernetes environment. 1. **(Recommended Method) Add labels on your Nginx pods to capture logs from standard output.** Make sure that the logs from Nginx are sent to stdout. Follow the instructions below to capture Nginx logs from stdout on Kubernetes. - 1. Apply following labels to the Nginx pod. + 1. Apply the following labels to the Nginx pod. ```sql labels: environment="prod_CHANGEME"
**Configure an Installed Collector.** If you have not already done so, install and configure an installed collector for Windows by [following the documentation](/docs/send-data/installed-collectors/windows). @@ -276,9 +261,9 @@ If you're using a service like Fluentd, or you would like to upload your logs ma
-#### Configure Metrics Collection +### Configure metrics collection -#### Set up a Sumo Logic HTTP Source +#### Set up a Sumo Logic HTTP source 1. **Configure a Hosted Collector for Metrics.** To create a new Sumo Logic hosted collector, perform the steps in the [Create a Hosted Collector](/docs/send-data/hosted-collectors/configure-hosted-collector) documentation. 2. **Configure an HTTP Logs & Metrics source**: @@ -317,10 +302,10 @@ Create or modify `telegraf.conf` and copy and paste the text below: Enter values for fields annotated with `` to the appropriate values. Do not include the brackets (`< >`) in your final configuration * Input plugins section, which is `[[inputs.nginx]]`: - * `urls` - An array of Nginx stub_status URI to gather stats. For more information on additional parameters to configure the Nginx input plugin for Telegraf see[ this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx#nginx-input-plugin). + * `urls` - An array of Nginx stub_status URIs to gather stats. For more information on additional parameters to configure the Nginx input plugin for Telegraf, see [this doc](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx#nginx-input-plugin). * In the tags section, which is `[inputs.nginx.tags]`: * `environment`. This is the deployment environment where the Nginx farm identified by the value of **servers** resides. For example; dev, prod, or QA. While this value is optional we highly recommend setting it. - * `webserver_farm` - Enter a name to identify this Nginx farm. This farm name will be shown in our dashboards. + * `webserver_farm` - Enter a name to identify this Nginx farm. This farm name will be shown on our dashboards. * In the output plugins section, which is `[[outputs.sumologic]]`: * **`URL`** - This is the HTTP source URL created previously. See this doc for more information on additional parameters for configuring the Sumo Logic Telegraf output plugin. 
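As a reference, the bullets above map onto a `telegraf.conf` roughly like the sketch below. The stub_status URL, HTTP source URL, and tag values are placeholders, not values from this document, and must be replaced with your own.

```toml
# Minimal Telegraf sketch for Nginx (stub_status) metrics; all values are placeholders.
[[inputs.nginx]]
  ## Array of Nginx stub_status URIs to gather stats from.
  urls = ["http://localhost/nginx_status"]

  [inputs.nginx.tags]
    environment = "prod_CHANGEME"
    component = "webserver"
    webserver_farm = "nginx_farm_CHANGEME"

[[outputs.sumologic]]
  ## HTTP Logs & Metrics source URL created previously.
  url = "<URL_from_HTTP_source>"
  ## Metrics are sent to Sumo Logic in Prometheus format.
  data_format = "prometheus"
```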
@@ -328,7 +313,7 @@ Here’s an explanation for additional values set by this Telegraf configuration If you haven’t defined a farm in Nginx, then enter `default` for `webserver_farm`. -There are additional values set by the Telegraf configuration. We recommend not to modify these values as they might cause the Sumo Logic app to not function correctly. +There are additional values set by the Telegraf configuration. We recommend not modifying these values, as doing so might prevent the Sumo Logic app from functioning correctly. * `data_format: “prometheus”`. In the output `[[outputs.sumologic]]` plugins section. Metrics are sent in the Prometheus format to Sumo Logic. * `Component - “webserver”` - In the input `[[inputs.nginx]]` plugins section. This value is used by Sumo Logic apps to identify application components. @@ -345,46 +330,43 @@ At this point, Telegraf should start collecting the Nginx metrics and forward th ## Installing the Nginx app -This section demonstrates how to install the Nginx app. +import AppInstall2 from '../../reuse/apps/app-install-sc-k8s.md'; -1. From the **App Catalog**, search for and select the Nginx app. -2. Select the version of the service you're using and click **Add to Library**. - :::note - Version selection is not available for all apps. - ::: -3. To install the app, complete the following fields. - 1. **App Name.** You can retain the existing name, or enter a name of your choice for the app. - 2. **Data Source.** Choose **Enter a Custom Data Filter**, and enter a custom Nginx farm filter. Examples: - 1. For all Nginx farms, `webserver_farm=*`. - 2. For a specific farm, `webserver_farm=nginx.dev.01`. - 3. Farms within a specific environment, `webserver_farm=nginx.dev.01` and `environment=prod`. (This assumes you have set the optional environment tag while configuring collection). -3. **Advanced**. Select the **Location in Library** (the default is the Personal folder in the library), or click **New Folder** to add a new folder. -4. 
Click **Add to Library**. + -Once an app is installed, it will appear in your **Personal** folder, or other folder that you specified. From here, you can share it with your organization. +As part of the app installation process, the following fields will be created by default: +* `component` +* `environment` +* `webserver_system` +* `webserver_farm` +* `pod` -Panels will start to fill automatically. It's important to note that each panel slowly fills with data matching the time range query and received since the panel was created. Results won't immediately be available, but with a bit of time, you'll see full graphs and maps. +Additionally, if you are using Nginx in a Kubernetes environment, the following fields will also be created by default during the app installation process: +* `pod_labels_component` +* `pod_labels_environment` +* `pod_labels_webserver_system` +* `pod_labels_webserver_farm` ## Viewing Nginx Dashboards -:::tip Filter with template variables -Template variables provide dynamic dashboards that can rescope data on the fly. As you apply variables to troubleshoot through your dashboard, you view dynamic changes to the data for a quicker resolution to the root cause. You can use template variables to drill down and examine the data on a granular level. For more information, see [Filter with template variables](/docs/dashboards/filter-template-variables). -::: +import ViewDashboards from '../../reuse/apps/view-dashboards.md'; + + ### Overview -The **Nginx - Overview** dashboard provides an at-a-glance view of the NGINX server access locations, error logs along with connection metrics. +The **Nginx - Overview** dashboard provides an at-a-glance view of the NGINX server access locations, error logs, and connection metrics. Use this dashboard to: * Gain insights into the regions where your traffic originates. This can help you allocate compute resources to different regions according to their needs. 
* Gain insights into your Nginx health using Critical Errors and Status of Nginx Server. -* Get insights into Active and dropped connection. +* Get insights into active and dropped connections. Nginx-Overview ### Error Logs -The **Nginx - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections and outliers, client requests, request trends, and request outliers. +The **Nginx - Error Logs Analysis** dashboard provides a high-level view of log level breakdowns, comparisons, and trends. The panels also show the geographic locations of clients and clients with critical messages, new connections, outliers, client requests, request trends, and request outliers. Use this dashboard to: * Track requests from clients. A request is a message asking for a resource, such as a page or an image. @@ -408,7 +390,7 @@ Use this dashboard to: The **Nginx - Outlier Analysis** dashboard provides a high-level view of Nginx server outlier metrics for bytes served, number of visitors, and server errors. You can select the time interval over which outliers are aggregated, then hover the cursor over the graph to display detailed information for that point in time. Use this dashboard to: -* Detect outliers in your infrastructure with Sumo Logic’s machine learning algorithm. +* Detect outliers in your infrastructure with Sumo Logic’s machine-learning algorithm. * Identify outliers in incoming traffic and in the number of errors encountered by your servers. You can use scheduled searches to send alerts to yourself whenever an outlier is detected by Sumo Logic. @@ -431,7 +413,7 @@ The **Nginx - Web Server Operations** dashboard provides a high-level view combi Use this dashboard to: -* Gain insights into Client, Server Responses on Nginx Server. This helps you identify errors in Nginx Server. 
+* Gain insights into Client and Server Responses on the Nginx Server. This helps you identify errors in the Nginx Server. * Identify the geo-locations of all client errors. This helps you identify the client locations causing errors so that you can block those client IPs. Nginx-WebServerOperations @@ -455,7 +437,7 @@ The **Nginx - Visitor Locations** dashboard provides a high-level view of Nginx Use this dashboard to: -* Gain insights into geographic locations of your user base. This is useful for resource planning in different regions across the globe. +* Gain insights into the geographic locations of your user base. This is useful for resource planning in different regions across the globe. Nginx-VisitorLocations @@ -476,30 +458,25 @@ The **Nginx - Connections and Requests Metrics** dashboard provides insight into Use this dashboard to: -* Gain information about active and dropped connections. This helps you identify the connection rejected by Nginx Server. -* Gain information about the total requests handled by Nginx Server per second. This helps you understand read, write requests on Nginx Server. +* Gain information about active and dropped connections. This helps you identify connections rejected by the Nginx server. +* Gain information about the total requests handled by the Nginx server per second. This helps you understand read and write requests on the Nginx server. Nginx-Connections-and-Requests -## Installing Nginx monitors +## Create monitors for Nginx app import CreateMonitors from '../../reuse/apps/create-monitors.md'; -:::note -- Ensure that you have [Manage Monitors role capability](/docs/manage/users-roles/roles/role-capabilities/#alerting) permissions to install the Nginx alerts. -- You can only enable the set number of alerts. For more information, refer to [Monitors](/docs/alerts/monitors/create-monitor). -::: - -To view the full list, see [Nginx](#nginx-alerts). 
- -## Nginx Alerts - -Sumo Logic has provided out-of-the-box alerts available through [Sumo Logic monitors](/docs/alerts/monitors) to help you quickly determine if the Nginx server is available and performing as expected. These alerts are built based on logs and metrics datasets and have preset thresholds based on industry best practices and recommendations. They are as follows: + +## Nginx alerts +
+Here are the alerts available for Nginx (click to expand). | Alert Type (Metrics/Logs) | Alert Name | Alert Description | Trigger Type (Critical / Warning) | Alert Condition | Recover Condition | |:---|:---|:---|:---|:---|:---| -| Logs | Nginx - Access from Highly Malicious Sources | This alert fires when an Nginx server is accessed from highly malicious IP addresses. | Critical | > 0 | < = 0 | +| Logs | Nginx - Access from Highly Malicious Sources | This alert fires when an Nginx server is accessed from highly malicious IP addresses. | Critical | > 0 | <= 0 | | Logs | Nginx - High Client (HTTP 4xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 4xx. | Critical | > 0 | 0 | | Logs | Nginx - High Server (HTTP 5xx) Error Rate | This alert fires when there are too many HTTP requests (>5%) with a response status of 5xx. | Critical | > 0 | 0 | | Logs | Nginx - Critical Error Messages | This alert fires when we detect critical error messages for a given Nginx server. | Critical | > 0 | 0 | | Metrics | Nginx - Dropped Connections | This alert fires when we detect dropped connections for a given Nginx server. | Critical | > 0 | 0 | +
diff --git a/docs/reuse/apps/app-collection-option-1.md b/docs/reuse/apps/app-collection-option-1.md index f33d68e93a..7a12519bd5 100644 --- a/docs/reuse/apps/app-collection-option-1.md +++ b/docs/reuse/apps/app-collection-option-1.md @@ -1,5 +1,7 @@ To set up collection and install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities depending upon the different content types part of the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -7,21 +9,19 @@ To set up collection and install the app, do the following: Sometimes this button says **Add Integration**. ::: 1. In the **Set Up Collection** section of your respective app, select **Create a new Collector**. - 1. **Collector Name**. Enter a Name to display for the Source in the Sumo Logic web application. The description is optional. + 1. **Collector Name**. Enter a Name to display the Source in the Sumo Logic web application. The description is optional. 1. **Timezone**. Set the default time zone when it is not extracted from the log timestamp. Time zone settings on Sources override a Collector time zone setting. - 1. (Optional) **Metadata**. Click the **+Add Metadata** link to add custom log [Metadata Fields](/docs/manage/fields). Define the fields you want to associate, each metadata field needs a name (key) and value. - * ![green check circle.png](/img/reuse/green-check-circle.png) A green circle with a check mark is shown when the field exists and is enabled in the Fields table schema. + 1. (Optional) **Metadata**. Click the **+Add Metadata** link to add a custom log [Metadata Fields](/docs/manage/fields). Define the fields you want to associate, each metadata field needs a name (key) and value. 
+ * ![green check circle.png](/img/reuse/green-check-circle.png) A green circle with a checkmark is shown when the field exists and is enabled in the Fields table schema. * ![orange exclamation point.png](/img/reuse/orange-exclamation-point.png) An orange triangle with an exclamation point is shown when the field doesn't exist, or is disabled, in the Fields table schema. In this case, an option to automatically add or enable the nonexistent fields to the Fields table schema is provided. If a field is sent to Sumo that does not exist in the Fields schema, or is disabled, it is ignored (this is known as being dropped). 1. Click **Next**. 1. Configure the source as specified in the `Info` box above, ensuring all required fields are included. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. -Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not immediately be available, but will update with full graphs and charts over time. +Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. 
diff --git a/docs/reuse/apps/app-collection-option-2.md b/docs/reuse/apps/app-collection-option-2.md index 2044e5b564..237afba8b8 100644 --- a/docs/reuse/apps/app-collection-option-2.md +++ b/docs/reuse/apps/app-collection-option-2.md @@ -1,5 +1,7 @@ -To setup source in the existing collector and install the app, do the following: - +To set up the source in the existing collector and install the app, do the following: +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -7,16 +9,14 @@ To setup source in the existing collector and install the app, do the following: Sometimes this button says **Add Integration**. ::: 1. In the **Set Up Collection** section of your respective app, select **Use an existing Collector**. -1. From the **Select Collector** dropdown, select the collector that you want to setup your source with and click **Next**. +1. From the **Select Collector** dropdown, select the collector that you want to set up your source with and click **Next**. 1. Configure the source as specified in the `Info` box above, ensuring all required fields are included. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. 
If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. -Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not immediately be available, but will update with full graphs and charts over time. +Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. diff --git a/docs/reuse/apps/app-collection-option-3.md b/docs/reuse/apps/app-collection-option-3.md index 72bcf58413..16b95ef43c 100644 --- a/docs/reuse/apps/app-collection-option-3.md +++ b/docs/reuse/apps/app-collection-option-3.md @@ -1,5 +1,7 @@ To skip collection and only install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -8,13 +10,11 @@ To skip collection and only install the app, do the following: ::: 1. In the **Set Up Collection** section of your respective app, select **Skip this step and use existing source** and click **Next**. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. 
- * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. -Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not immediately be available, but will update with full graphs and charts over time. +Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. diff --git a/docs/reuse/apps/app-install-only-k8s.md b/docs/reuse/apps/app-install-only-k8s.md new file mode 100644 index 0000000000..ed4d9cf4cd --- /dev/null +++ b/docs/reuse/apps/app-install-only-k8s.md @@ -0,0 +1,20 @@ +To install the app, do the following: +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: +1. Select **App Catalog**. +1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. +1. Click **Install App**. + :::note + Sometimes this button says **Add Integration**. + ::: +1. Click **Next** in the **Setup Data** section. +1. In the **Configure** section of your respective app, complete the following fields. + 1. **Is K8S deployment involved**. 
Specify if the resources being monitored are partially or fully deployed on Kubernetes (K8s). +1. Click **Next**. You will be redirected to the **Preview & Done** section. + +**Post-installation** + +Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. + +Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. diff --git a/docs/reuse/apps/app-install-sc-k8s.md b/docs/reuse/apps/app-install-sc-k8s.md new file mode 100644 index 0000000000..0f27fdff3f --- /dev/null +++ b/docs/reuse/apps/app-install-sc-k8s.md @@ -0,0 +1,21 @@ +To install the app, do the following: +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: +1. Select **App Catalog**. +1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. +1. Click **Install App**. + :::note + Sometimes this button says **Add Integration**. + ::: +1. Click **Next** in the **Setup Data** section. +1. In the **Configure** section of your respective app, complete the following fields. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. + 2. **Is K8S deployment involved**. Specify if the resources being monitored are partially or fully deployed on Kubernetes (K8s). +1. Click **Next**. You will be redirected to the **Preview & Done** section. + +**Post-installation** + +Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. 
+ +Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. diff --git a/docs/reuse/apps/app-install-v2.md b/docs/reuse/apps/app-install-v2.md index 01aa4c7c20..3025c09ec7 100644 --- a/docs/reuse/apps/app-install-v2.md +++ b/docs/reuse/apps/app-install-v2.md @@ -1,5 +1,7 @@ To install the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the 🔎 **Search Apps** field, run a search for your desired app, then select it. 1. Click **Install App**. @@ -8,13 +10,11 @@ To install the app, do the following: ::: 1. Click **Next** in the **Setup Data** section. 1. In the **Configure** section of your respective app, complete the following fields. - 1. **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom**, and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-installation** Once your app is installed, it will appear in your **Installed Apps** folder, and dashboard panels will start to fill automatically. -Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not immediately be available, but will update with full graphs and charts over time. 
+Each panel slowly fills with data matching the time range query and received since the panel was created. Results will not be available immediately, but the panels will update with full graphs and charts over time. diff --git a/docs/reuse/apps/app-update.md b/docs/reuse/apps/app-update.md index b3add69683..f74e790a15 100644 --- a/docs/reuse/apps/app-update.md +++ b/docs/reuse/apps/app-update.md @@ -1,19 +1,19 @@ To update the app, do the following: - +:::note + Next-Gen App: To install or update the app, you must be an account administrator or a user with Manage Apps, Manage Monitors, Manage Fields, Manage Metric Rules, and Manage Collectors capabilities, depending on the content types included in the app. +::: 1. Select **App Catalog**. 1. In the **Search Apps** field, search for and then select your app.
Optionally, you can identify apps that can be upgraded in the **Upgrade available** section. 1. To upgrade the app, select **Upgrade** from the **Manage** dropdown. 1. If the upgrade does not have any configuration or property changes, you will be redirected to the **Preview & Done** section. - 1. If the upgrade has any configuration or property changes, you will be redirected to **Setup Data** page. + 1. If the upgrade has any configuration or property changes, you will be redirected to the **Setup Data** page. 1. In the **Configure** section of your respective app, complete the following fields. - - **Key**. Select either of these options for the data source. - * Choose **Source Category** and select a source category from the list for **Default Value**. - * Choose **Custom** and enter a custom metadata field. Insert its value in **Default Value**. + 1. **Field Name**. If you already have collectors and sources set up, select the configured metadata field name (e.g., `_sourcecategory`) or specify other custom metadata (e.g., `_collector`) along with its metadata **Field Value**. 1. Click **Next**. You will be redirected to the **Preview & Done** section. **Post-update** -Your upgraded app will be installed in the **Installed Apps** folder, and dashboard panels will start to fill automatically. +Your upgraded app will be installed in the **Installed Apps** folder and dashboard panels will start to fill automatically. :::note See our [**Release Notes** changelog](/release-notes-service) for new updates in the app.