[BUG] Exception updating the admin password : /usr/share/opensearch/config/opensearch-security/internal_users.yml: Device or resource busy #3891

Closed
ruanyl opened this issue Dec 22, 2023 · 9 comments · Fixed by #4100
Labels: bug, triaged

ruanyl (Member) commented Dec 22, 2023

What is the bug?

Update (2023/12/23)

While debugging the issue in my stack, I found that commit 17748b9 is potentially backward incompatible.

In my OpenSearch setup (for testing purposes), I manage the security configuration in opensearch.yml and internal_users.yml myself (write permission on these files is removed), so I don't need the security plugin to set it up for me. But I DO need the security plugin to set up the demo certificate files.

Previously, with the old install_demo_configuration.sh, the demo certificates were set up before the admin user, so even if the script exited while setting up the admin user, the demo certificate files were still created.

Now, in the new implementation, it seems the admin user setup runs before the certificate setup, so if that step fails, the demo certificates won't be created:

securitySettingsConfigurer.configureSecuritySettings();
certificateGenerator.createDemoCertificates();
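
One way to preserve the old behaviour would be to generate the demo certificates before touching the admin user, so that a failure in the password step no longer blocks certificate creation. A minimal sketch of that reordering (hypothetical, not an actual patch):

// Hypothetical reordering: create the demo certificates first, so they exist
// even if the admin-password step fails afterwards.
certificateGenerator.createDemoCertificates();
securitySettingsConfigurer.configureSecuritySettings();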

Exception when running the security plugin with OpenSearch on the main branch (3.0.0):

OPENSEARCH_INITIAL_ADMIN_PASSWORD is set
DISABLE_INSTALL_DEMO_CONFIG is set to false

Suspicious log: Exception updating the admin password : /usr/share/opensearch/config/opensearch-security/internal_users.yml: Device or resource busy

OpenSearch install type: rpm/deb on Linux 5.10.173-154.642.amzn2.x86_64 amd64
OpenSearch config dir: /usr/share/opensearch/config/
OpenSearch config file: /usr/share/opensearch/config/opensearch.yml
OpenSearch bin dir: /usr/share/opensearch/bin/
OpenSearch plugins dir: /usr/share/opensearch/plugins/
OpenSearch lib dir: /usr/share/opensearch/lib/
Detected OpenSearch Version: 3.0.0
Detected OpenSearch Security Version: 3.0.0.0
Admin password set successfully.
Exception updating the admin password : /usr/share/opensearch/config/opensearch-security/internal_users.yml: Device or resource busy

Error stack traces:

[2023-12-22T10:24:38,513][ERROR][o.o.b.OpenSearchUncaughtExceptionHandler] [opensearch-cluster-master-1] uncaught exception in thread [main]
org.opensearch.bootstrap.StartupException: java.lang.IllegalStateException: failed to load plugin class [org.opensearch.security.OpenSearchSecurityPlugin]
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:184) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:171) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138) ~[opensearch-cli-3.0.0.jar:3.0.0]
	at org.opensearch.cli.Command.main(Command.java:101) ~[opensearch-cli-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:137) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:103) ~[opensearch-3.0.0.jar:3.0.0]
Caused by: java.lang.IllegalStateException: failed to load plugin class [org.opensearch.security.OpenSearchSecurityPlugin]
	at org.opensearch.plugins.PluginsService.loadPlugin(PluginsService.java:791) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundle(PluginsService.java:731) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundles(PluginsService.java:533) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.<init>(PluginsService.java:195) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:484) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:411) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-3.0.0.jar:3.0.0]
	... 6 more
Caused by: java.lang.reflect.InvocationTargetException
	at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:74) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
	at org.opensearch.plugins.PluginsService.loadPlugin(PluginsService.java:782) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundle(PluginsService.java:731) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundles(PluginsService.java:533) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.<init>(PluginsService.java:195) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:484) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:411) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-3.0.0.jar:3.0.0]
	... 6 more
uncaught exception in thread [main]
Caused by: org.opensearch.OpenSearchSecurityException: Error while initializing transport SSL layer from PEM: OpenSearchException[Unable to read /usr/share/opensearch/config/esnode.pem (/usr/share/opensearch/config/esnode.pem). Please make sure this files exists and is readable regarding to permissions. Property: plugins.security.ssl.transport.pemcert_filepath]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initTransportSSLConfig(DefaultSecurityKeyStore.java:495) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initSSLConfig(DefaultSecurityKeyStore.java:309) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.<init>(DefaultSecurityKeyStore.java:215) ~[?:?]
	at org.opensearch.security.ssl.OpenSearchSecuritySSLPlugin.<init>(OpenSearchSecuritySSLPlugin.java:235) ~[?:?]
	at org.opensearch.security.OpenSearchSecurityPlugin.<init>(OpenSearchSecurityPlugin.java:297) ~[?:?]
	at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
	at org.opensearch.plugins.PluginsService.loadPlugin(PluginsService.java:782) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundle(PluginsService.java:731) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundles(PluginsService.java:533) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.<init>(PluginsService.java:195) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:484) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:411) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-3.0.0.jar:3.0.0]
	... 6 more
Caused by: org.opensearch.OpenSearchException: Unable to read /usr/share/opensearch/config/esnode.pem (/usr/share/opensearch/config/esnode.pem). Please make sure this files exists and is readable regarding to permissions. Property: plugins.security.ssl.transport.pemcert_filepath
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.checkPath(DefaultSecurityKeyStore.java:1172) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.resolve(DefaultSecurityKeyStore.java:287) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initTransportSSLConfig(DefaultSecurityKeyStore.java:465) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initSSLConfig(DefaultSecurityKeyStore.java:309) ~[?:?]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.<init>(DefaultSecurityKeyStore.java:215) ~[?:?]
	at org.opensearch.security.ssl.OpenSearchSecuritySSLPlugin.<init>(OpenSearchSecuritySSLPlugin.java:235) ~[?:?]
	at org.opensearch.security.OpenSearchSecurityPlugin.<init>(OpenSearchSecurityPlugin.java:297) ~[?:?]
	at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502) ~[?:?]
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486) ~[?:?]
	at org.opensearch.plugins.PluginsService.loadPlugin(PluginsService.java:782) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundle(PluginsService.java:731) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.loadBundles(PluginsService.java:533) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.plugins.PluginsService.<init>(PluginsService.java:195) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:484) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.node.Node.<init>(Node.java:411) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404) ~[opensearch-3.0.0.jar:3.0.0]
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180) ~[opensearch-3.0.0.jar:3.0.0]
	... 6 more
java.lang.IllegalStateException: failed to load plugin class [org.opensearch.security.OpenSearchSecurityPlugin]
Likely root cause: OpenSearchException[Unable to read /usr/share/opensearch/config/esnode.pem (/usr/share/opensearch/config/esnode.pem). Please make sure this files exists and is readable regarding to permissions. Property: plugins.security.ssl.transport.pemcert_filepath]
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.checkPath(DefaultSecurityKeyStore.java:1172)
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.resolve(DefaultSecurityKeyStore.java:287)
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initTransportSSLConfig(DefaultSecurityKeyStore.java:465)
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.initSSLConfig(DefaultSecurityKeyStore.java:309)
	at org.opensearch.security.ssl.DefaultSecurityKeyStore.<init>(DefaultSecurityKeyStore.java:215)
	at org.opensearch.security.ssl.OpenSearchSecuritySSLPlugin.<init>(OpenSearchSecuritySSLPlugin.java:235)
	at org.opensearch.security.OpenSearchSecurityPlugin.<init>(OpenSearchSecurityPlugin.java:297)
	at java.base/jdk.internal.reflect.DirectConstructorHandleAccessor.newInstance(DirectConstructorHandleAccessor.java:62)
	at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:502)
	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:486)
	at org.opensearch.plugins.PluginsService.loadPlugin(PluginsService.java:782)
	at org.opensearch.plugins.PluginsService.loadBundle(PluginsService.java:731)
	at org.opensearch.plugins.PluginsService.loadBundles(PluginsService.java:533)
	at org.opensearch.plugins.PluginsService.<init>(PluginsService.java:195)
	at org.opensearch.node.Node.<init>(Node.java:484)
	at org.opensearch.node.Node.<init>(Node.java:411)
	at org.opensearch.bootstrap.Bootstrap$5.<init>(Bootstrap.java:242)
	at org.opensearch.bootstrap.Bootstrap.setup(Bootstrap.java:242)
	at org.opensearch.bootstrap.Bootstrap.init(Bootstrap.java:404)
	at org.opensearch.bootstrap.OpenSearch.init(OpenSearch.java:180)
	at org.opensearch.bootstrap.OpenSearch.execute(OpenSearch.java:171)
	at org.opensearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:104)
	at org.opensearch.cli.Command.mainWithoutErrorHandling(Command.java:138)
	at org.opensearch.cli.Command.main(Command.java:101)
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:137)
	at org.opensearch.bootstrap.OpenSearch.main(OpenSearch.java:103)
For complete error details, refer to the log at /usr/share/opensearch/logs/opensearch-cluster.log

How can one reproduce the bug?
Steps to reproduce the behavior:

  1. Run OpenSearch 3.0.0 (main branch) with the security plugin, OPENSEARCH_INITIAL_ADMIN_PASSWORD set and DISABLE_INSTALL_DEMO_CONFIG set to false.
  2. Make internal_users.yml unwritable for the demo setup (e.g. remove write permission or mount it read-only).
  3. Start the node and observe the "Exception updating the admin password" message followed by the SSL startup failure above.

What is the expected behavior?
The demo certificates should still be created even if updating the admin password fails, as was the case with the old install_demo_configuration.sh.

What is your host/environment?

  • OS: Linux 5.10.173-154.642.amzn2.x86_64 amd64 (rpm/deb install type)
  • Version: OpenSearch 3.0.0, Security plugin 3.0.0.0

@ruanyl ruanyl added bug Something isn't working untriaged Require the attention of the repository maintainers and may need to be prioritized labels Dec 22, 2023
@ruanyl ruanyl changed the title [BUG] [BUG] Exception updating the admin password : /usr/share/opensearch/config/opensearch-security/internal_users.yml: Device or resource busy Dec 22, 2023
stephen-crawford (Contributor) commented:

[Triage] Hi @ruanyl, thank you for filing this issue. Someone will follow up to try to diagnose this bug and make a fix if one is possible. Thank you for posting the logs and reproduction steps!

@stephen-crawford stephen-crawford added triaged Issues labeled as 'Triaged' have been reviewed and are deemed actionable. and removed untriaged Require the attention of the repository maintainers and may need to be prioritized labels Jan 8, 2024
prudhvigodithi (Member) commented Feb 21, 2024

@ruanyl I've seen the same issue with the Helm charts and with OpenSearch 2.12.0. One common cause is having multiple volumeMounts on the same path, especially in subdirectories, which can lead to the "device or resource busy" error. The new install_demo_configuration.sh updates the internal_users.yml file, hence the Device or resource busy error.

I was able to make this work by adding the following and creating internal_users.yml as a ConfigMap (kubectl create configmap internal-users --from-file=internal_users.yml=internal_users.yml):

extraVolumes:
  - name: internal-users-emptydir
    emptyDir: {}
  - name: internal-users
    configMap:
      name: internal-users
      items:
      - key: internal_users.yml
        path: internal_users.yml

extraVolumeMounts:
    - name: internal-users-emptydir
      mountPath: /usr/share/opensearch/config/opensearch-security/
      subPath: internal_users.yml
    - name: internal-users
      mountPath: /tmp/internal_users.yml
      subPath: internal_users.yml
   
extraInitContainers:
  - name: internal-users-init-container
    image: busybox
    command: ['sh', '-c', 'cp /tmp/internalusers/internal_users.yml /tmp/internal_users.yml']
    volumeMounts:
    - mountPath: /tmp/internal_users.yml
      subPath: internal_users.yml
      name: internal-users-emptydir
    - name: internal-users
      mountPath: /tmp/internalusers/internal_users.yml
      subPath: internal_users.yml

What it technically does is:

  1. The internal_users.yml is now created as a separate ConfigMap.
  2. The internal_users.yml is first mounted into a /tmp/ directory from the ConfigMap above.
  3. The internal_users.yml in /tmp/, which acts as a new mount, is later copied to /usr/share/opensearch/config/opensearch-security/.
  4. During startup the security index is created with the user data from internal_users.yml, avoiding the Device or resource busy error.

DarshitChanpura (Member) commented Feb 22, 2024

This error shows up because the demo setup requires an admin password to be supplied; the demo config then generates a hash and writes it to internal_users.yml. But since Docker has locked the file path with the mount for the custom internal_users.yml, the write throws this error. Currently, there is no way to use only the demo certificates and supply a custom internal_users.yml. This could be added as a separate option in the future and may require discussion among the maintainers.
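
One way this failure can be reproduced outside the plugin (a hypothetical sketch, assuming the updater replaces the file by writing a temporary copy and moving it over the original; the plugin's actual write path may differ): rename(2) returns EBUSY when the destination is a mount point, which is exactly what a per-file Docker/Kubernetes mount creates.

import java.io.IOException;
import java.nio.file.*;

public class ReplaceInternalUsers {
    public static void main(String[] args) throws IOException {
        Path target = Paths.get("/usr/share/opensearch/config/opensearch-security/internal_users.yml");
        // Write the updated content next to the target, then try to swap it in.
        Path tmp = Files.createTempFile(target.getParent(), "internal_users", ".tmp");
        Files.writeString(tmp, "yaml content with the new admin hash\n");
        try {
            // Fails with "Device or resource busy" when the target file is itself a bind mount.
            Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
        } catch (FileSystemException e) {
            System.err.println("Exception updating the admin password : " + e.getMessage());
        }
    }
}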

geckiss commented Mar 5, 2024

Wouldn't it be better to check whether internal_users.yml already contains a strong admin password before trying to write into it? Reading is compatible with a k8s mount. Right now, we can't upgrade to 2.12 even though we have already set a strong admin password (by mounting a custom internal_users.yml).

It seems to me it would be better to either read from the mounted internal_users.yml or read from an environment variable.

DarshitChanpura (Member) commented Mar 6, 2024

@geckiss We can do that, but it opens up the possibility of using any string as the admin user's password. Since the values are stored as hashes, there is no way to validate the strength of the updated password. I understand that this would solve the problem at hand, but the responsibility would then be on the user to supply a strong password.

However, if the use case is providing a custom internal_users.yml and using the demo setup tool solely for demo certificate generation, wouldn't it be better if certificate generation were a separate toggleable feature?

Update: Here's a change that could work with a custom internal_users.yml: https://github.com/DarshitChanpura/security/blob/d3b64419082b63cd020d82caf41773f6eac0ae74/src/main/java/org/opensearch/security/tools/democonfig/SecuritySettingsConfigurer.java#L133
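
A rough sketch of the kind of check such a change could make (hypothetical code; the constant and parsing below are illustrative, not the plugin's real API — see the linked branch for the actual implementation): skip rewriting internal_users.yml when the admin entry already carries a non-default hash.

import java.io.IOException;
import java.nio.file.*;
import java.util.regex.*;

public class AdminHashCheck {
    // Placeholder for the hash shipped in the demo internal_users.yml.
    static final String DEFAULT_ADMIN_HASH = "<bcrypt-hash-of-the-shipped-demo-password>";

    public static void main(String[] args) throws IOException {
        Path internalUsers = Paths.get(args[0]);
        String yaml = Files.readString(internalUsers);
        // Rough extraction of the admin hash; a real implementation would parse the YAML properly.
        Matcher m = Pattern.compile("admin:\\s*\\n\\s*hash:\\s*\"([^\"]+)\"").matcher(yaml);
        if (m.find() && !m.group(1).equals(DEFAULT_ADMIN_HASH)) {
            System.out.println("Custom admin hash detected; skipping the password update.");
        } else {
            System.out.println("Default or missing admin hash; the demo setup would update it.");
        }
    }
}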

fmr-disy commented Mar 15, 2024

I tried the workaround provided by @prudhvigodithi, but the problem is that the directory /usr/share/opensearch/config/opensearch-security/ ends up completely empty except for the internal_users.yml file.

To fix the issue, I opted to copy all the files from the OpenSearch image into the emptyDir volume:

extraVolumes:
  - name: security-emptydir
    emptyDir: {}
  - name: internal-users
    configMap:
      name: internal-users
      items:
      - key: internal_users.yml
        path: internal_users.yml

extraVolumeMounts:
    - name: security-emptydir
      mountPath: /usr/share/opensearch/config/opensearch-security/

extraInitContainers:
  - image: opensearchproject/opensearch:2.12.0
    name: init-security-dir
    command:
      - sh
      - -c
      - |
        cp /usr/share/opensearch/config/opensearch-security/* /tmp/emptydir/
        cp /tmp/internal_users.yml /tmp/emptydir/internal_users.yml
    volumeMounts:
    - name: security-emptydir
      mountPath: /tmp/emptydir
    - name: internal-users
      mountPath: /tmp/internal_users.yml
      subPath: internal_users.yml

prudhvigodithi (Member) commented Mar 15, 2024

Hey @fmr-disy, did you try setting dataComplete: false in your values file?
https://github.com/opensearch-project/helm-charts/blob/main/charts/opensearch/values.yaml#L352
Adding @bbarani @peterzhuamazon

fmr-disy commented:

> Hey @fmr-disy, did you try setting dataComplete: false in your values file? https://github.com/opensearch-project/helm-charts/blob/main/charts/opensearch/values.yaml#L352 Adding @bbarani @peterzhuamazon

Thanks for your quick reaction. I tested setting the parameter securityConfig.config.dataComplete to false and it makes no difference unless you add all the security configuration files in securityConfig.config.data. Please see here: https://github.com/opensearch-project/helm-charts/blob/0dfa6066e9a8fb5a4cf76ffcfcd02650e2ee6090/charts/opensearch/templates/statefulset.yaml#L476

prudhvigodithi (Member) commented:

Hey @fmr-disy, below are my internal_users.yml and values.yaml files, following the steps above in #3891 (comment). Can you please test again?

_meta:
  type: "internalusers"
  config_version: 2

# Define your internal users here

## Demo users

admin:
  hash: "$2y$12$ZPHBXj3AHQ3OBW0afObVT.G9bmEUguVrotnu3MsL/Q74WP.aNSjqi"
  reserved: false
  backend_roles:
  - "admin"
  description: "Demo admin user"


kibanaserver:
  hash: "$2y$12$fYudyhWT/xks1IZhko5Osuja3yrpt8NrQjXN.HdACfIs4uMy5TGKC"
  reserved: true
  description: "Demo OpenSearch Dashboards user"

kibanaro:
  hash: "$2y$12$O7ZpyD71zss2i.KafEA9GeRtKs8Ch2hSN8HtbcT1xCUwJPWVN7Y3u"
  reserved: false
  backend_roles:
  - "kibanauser"
  - "readall"
  attributes:
    attribute1: "value1"
    attribute2: "value2"
    attribute3: "value3"
  description: "Demo OpenSearch Dashboards read only user"

logstash:
  hash: "$2y$12$.iY36ILKEixrPF1ioV8in.vNlFVCW8wsy1Gf9m.mXlbT0U9QmvBK2"
  reserved: false
  backend_roles:
  - "logstash"
  description: "Demo logstash user"

readall:
  hash: "$2y$12$15huaUhaiNxHiP0JIhf.QeB9NVsB/nTiSytIT1DQJfXCEKKJaIy8C"
  reserved: false
  backend_roles:
  - "readall"
  description: "Demo readall user"

snapshotrestore:
  hash: "$2y$12$dyT.lMC1GdPwOVBSxfnS5ebGBH9S2.SkVI/F6D6ASy90oX8SdAxV."
  reserved: false
  backend_roles:
  - "snapshotrestore"
  description: "Demo snapshotrestore user"
---
clusterName: "opensearch-cluster"
nodeGroup: "master"

# If discovery.type in the opensearch configuration is set to "single-node",
# this should be set to "true"
# If "true", replicas will be forced to 1
singleNode: false

# The service that non master groups will try to connect to when joining the cluster
# This should be set to clusterName + "-" + nodeGroup for your master group
masterService: "opensearch-cluster-master"

# OpenSearch roles that will be applied to this nodeGroup
# These will be set as environment variable "node.roles". E.g. node.roles=master,ingest,data,remote_cluster_client
roles:
  - master
  - ingest
  - data
  - remote_cluster_client

replicas: 3

# if not set, falls back to parsing .Values.imageTag, then .Chart.appVersion.
majorVersion: ""

global:
  # Set if you want to change the default docker registry, e.g. a private one.
  dockerRegistry: ""

# Allows you to add any config files in {{ .Values.opensearchHome }}/config
opensearchHome: /usr/share/opensearch

# such as opensearch.yml and log4j2.properties
config:
  # # Values must be YAML literal style scalar / YAML multiline string.
  # # <filename>: |
  # #   <formatted-value(s)>
  # # log4j2.properties: |
  # #   status = error
  # #
  # #   appender.console.type = Console
  # #   appender.console.name = console
  # #   appender.console.layout.type = PatternLayout
  # #   appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] [%node_name]%marker %m%n
  # #
  # #   rootLogger.level = info
  # #   rootLogger.appenderRef.console.ref = console
  opensearch.yml: |
    cluster.name: opensearch-cluster

    # Bind to all interfaces because we don't know what IP address Docker will assign to us.
    network.host: 0.0.0.0

    # Setting network.host to a non-loopback address enables the annoying bootstrap checks. "Single-node" mode disables them again.
    # Implicitly done if ".singleNode" is set to "true".
    # discovery.type: single-node

    # Start OpenSearch Security Demo Configuration
    # WARNING: revise all the lines below before you go into production
    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: true
            pemcert_filepath: esnode.pem
            pemkey_filepath: esnode-key.pem
            pemtrustedcas_filepath: root-ca.pem
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=client,O=client,L=test,C=de
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]
    ######## End OpenSearch Security Demo Configuration ########
  # log4j2.properties:

# Extra environment variables to append to this nodeGroup
# This will be appended to the current 'env:' key. You can use any of the kubernetes env
# syntax here
extraEnvs: []
#  - name: MY_ENVIRONMENT_VAR
#    value: the_value_goes_here
# Chart version 2.18.0 and App Version OpenSearch 2.12.0 onwards a custom strong password needs to be provided in order to setup demo admin user.
# Cluster will not spin-up without this unless demo config install is disabled.
#  - name: OPENSEARCH_INITIAL_ADMIN_PASSWORD
#    value: <strong-password>

# Allows you to load environment variables from kubernetes secret or config map
envFrom: []
# - secretRef:
#     name: env-secret
# - configMapRef:
#     name: config-map

# A list of secrets and their paths to mount inside the pod
# This is useful for mounting certificates for security and for mounting
# the X-Pack license
secretMounts: []

hostAliases: []
# - ip: "127.0.0.1"
#   hostnames:
#   - "foo.local"
#   - "bar.local"

image:
  repository: "opensearchproject/opensearch"
  # override image tag, which is .Chart.AppVersion by default
  tag: ""
  pullPolicy: "IfNotPresent"

podAnnotations: {}
  # iam.amazonaws.com/role: es-cluster

# OpenSearch Statefulset annotations
openSearchAnnotations: {}

# additional labels
labels: {}

opensearchJavaOpts: "-Xmx512M -Xms512M"

resources:
  requests:
    cpu: "1000m"
    memory: "100Mi"

initResources: {}
#  limits:
#     cpu: "25m"
#     memory: "128Mi"
#  requests:
#     cpu: "25m"
#     memory: "128Mi"

sidecarResources: {}
#   limits:
#     cpu: "25m"
#     memory: "128Mi"
#   requests:
#     cpu: "25m"
#     memory: "128Mi"

networkHost: "0.0.0.0"

rbac:
  create: false
  serviceAccountAnnotations: {}
  serviceAccountName: ""
  # Controls whether or not the Service Account token is automatically mounted to /var/run/secrets/kubernetes.io/serviceaccount
  automountServiceAccountToken: false

podSecurityPolicy:
  create: false
  name: ""
  spec:
    privileged: true
    fsGroup:
      rule: RunAsAny
    runAsUser:
      rule: RunAsAny
    seLinux:
      rule: RunAsAny
    supplementalGroups:
      rule: RunAsAny
    volumes:
      - secret
      - configMap
      - persistentVolumeClaim
      - emptyDir

persistence:
  enabled: true
  # Set to false to disable the `fsgroup-volume` initContainer that will update permissions on the persistent disk.
  enableInitChown: true
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:
  labels:
    # Add default labels for the volumeClaimTemplate of the StatefulSet
    enabled: false
  # OpenSearch Persistent Volume Storage Class
  # If defined, storageClassName: <storageClass>
  # If set to "-", storageClassName: "", which disables dynamic provisioning
  # If undefined (the default) or set to null, no storageClassName spec is
  #   set, choosing the default provisioner.  (gp2 on AWS, standard on
  #   GKE, AWS & OpenStack)
  #
  # storageClass: "-"
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

extraVolumes:
  - name: internal-users-emptydir
    emptyDir: {}
  - name: internal-users
    configMap:
      name: internal-users
      items:
      - key: internal_users.yml
        path: internal_users.yml

extraVolumeMounts:
    - name: internal-users-emptydir
      mountPath: /usr/share/opensearch/config/opensearch-security/
      subPath: internal_users.yml
    - name: internal-users
      mountPath: /tmp/internal_users.yml
      subPath: internal_users.yml

extraContainers: []
  # - name: do-something
  #   image: busybox
  #   command: ['do', 'something']

extraInitContainers:
  - name: internal-users-init-container
    image: busybox
    command: ['sh', '-c', 'cp /tmp/internalusers/internal_users.yml /tmp/internal_users.yml']
    volumeMounts:
    - mountPath: /tmp/internal_users.yml
      subPath: internal_users.yml
      name: internal-users-emptydir
    - name: internal-users
      mountPath: /tmp/internalusers/internal_users.yml
      subPath: internal_users.yml
  #- name: do-somethings
  #  image: busybox
  #  command: ['cp', '/tmp/internal_users.yml', '/usr/share/opensearch/config/opensearch-security/internal_users.yml']

# This is the PriorityClass settings as defined in
# https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass
priorityClassName: ""

# By default this will make sure two pods don't end up on the same node
# Changing this to a region would allow you to spread pods across regions
antiAffinityTopologyKey: "kubernetes.io/hostname"

# Hard means that by default pods will only be scheduled if there are enough nodes for them
# and that they will never end up on the same node. Setting this to soft will do this "best effort".
# Setting this to custom will use what is passed into customAntiAffinity.
antiAffinity: "soft"

# Allows passing in custom anti-affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity
# Using this parameter requires setting antiAffinity to custom.
customAntiAffinity: {}

# This is the node affinity settings as defined in
# https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature
nodeAffinity: {}

# This is the pod affinity settings as defined in
# https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#types-of-inter-pod-affinity-and-anti-affinity
podAffinity: {}

# This is the pod topology spread constraints
# https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
topologySpreadConstraints: []

# The default is to deploy all pods serially. By setting this to parallel all pods are started at
# the same time when bootstrapping the cluster
podManagementPolicy: "Parallel"

# The environment variables injected by service links are not used, but can lead to slow OpenSearch boot times when
# there are many services in the current namespace.
# If you experience slow pod startups you probably want to set this to `false`.
enableServiceLinks: true

protocol: https
httpPort: 9200
transportPort: 9300
metricsPort: 9600
httpHostPort: ""
transportHostPort: ""


service:
  labels: {}
  labelsHeadless: {}
  headless:
    annotations: {}
  type: ClusterIP
  # The IP family and IP families options are to set the behaviour in a dual-stack environment
  # Omitting these values will let the service fall back to whatever the CNI dictates the defaults
  # should be
  #
  # ipFamilyPolicy: SingleStack
  # ipFamilies:
  # - IPv4
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
  metricsPortName: metrics
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  externalTrafficPolicy: ""

updateStrategy: RollingUpdate

# This is the max unavailable setting for the pod disruption budget
# The default value of 1 will make sure that kubernetes won't allow more than 1
# of your pods to be unavailable during maintenance
maxUnavailable: 1

podSecurityContext:
  fsGroup: 1000
  runAsUser: 1000

securityContext:
  capabilities:
    drop:
      - ALL
  # readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000

securityConfig:
  enabled: true
  path: "/usr/share/opensearch/config/opensearch-security"
  actionGroupsSecret:
  configSecret:
  internalUsersSecret:
  rolesSecret:
  rolesMappingSecret:
  tenantsSecret:
  # The following option simplifies securityConfig by using a single secret and
  # specifying the config files as keys in the secret instead of creating
  # different secrets for each config file.
  # Note that this is an alternative to the individual secret configuration
  # above and shouldn't be used if the above secrets are used.
  config:
    # There are multiple ways to define the configuration here:
    # * If you define anything under data, the chart will automatically create
    #   a secret and mount it. This is best option to choose if you want to override all the
    #   existing yml files at once.
    # * If you define securityConfigSecret, the chart will assume this secret is
    #   created externally and mount it. This is best option to choose if your intention is to
    #   only update a single yml file.
    # * It is an error to define both data and securityConfigSecret.
    securityConfigSecret: ""
    dataComplete: false
    data:
      config.yml: |
        _meta:
          type: "config"
          config_version: 2

        config:
          dynamic:
            # Set filtered_alias_mode to 'disallow' to forbid more than 2 filtered aliases per index
            # Set filtered_alias_mode to 'warn' to allow more than 2 filtered aliases per index but warns about it (default)
            # Set filtered_alias_mode to 'nowarn' to allow more than 2 filtered aliases per index silently
            #filtered_alias_mode: warn
            #do_not_fail_on_forbidden: false
            kibana:
              multitenancy_enabled: false
            #server_username: kibanaserver
            #index: '.kibana'
            http:
              anonymous_auth_enabled: true
              xff:
                enabled: false
                internalProxies: '192\.168\.0\.10|192\.168\.0\.11' # regex pattern
                #internalProxies: '.*' # trust all internal proxies, regex pattern
                #remoteIpHeader:  'x-forwarded-for'
                ###### see https://docs.oracle.com/javase/7/docs/api/java/util/regex/Pattern.html for regex help
                ###### more information about XFF https://en.wikipedia.org/wiki/X-Forwarded-For
                ###### and here https://tools.ietf.org/html/rfc7239
                ###### and https://tomcat.apache.org/tomcat-8.0-doc/config/valve.html#Remote_IP_Valve
            authc:
              kerberos_auth_domain:
                http_enabled: false
                transport_enabled: false
                order: 6
                http_authenticator:
                  type: kerberos
                  challenge: true
                  config:
                    # If true a lot of kerberos/security related debugging output will be logged to standard out
                    krb_debug: false
                    # If true then the realm will be stripped from the user name
                    strip_realm_from_principal: true
                authentication_backend:
                  type: noop
              basic_internal_auth_domain:
                description: "Authenticate via HTTP Basic against internal users database"
                http_enabled: true
                transport_enabled: true
                order: 4
                http_authenticator:
                  type: basic
                  challenge: true
                authentication_backend:
                  type: intern
              proxy_auth_domain:
                description: "Authenticate via proxy"
                http_enabled: false
                transport_enabled: false
                order: 3
                http_authenticator:
                  type: proxy
                  challenge: false
                  config:
                    user_header: "x-proxy-user"
                    roles_header: "x-proxy-roles"
                authentication_backend:
                  type: noop
              jwt_auth_domain:
                description: "Authenticate via Json Web Token"
                http_enabled: false
                transport_enabled: false
                order: 0
                http_authenticator:
                  type: jwt
                  challenge: false
                  config:
                    signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
                    jwt_header: "Authorization"
                    jwt_url_parameter: null
                    roles_key: null
                    subject_key: null
                authentication_backend:
                  type: noop
              clientcert_auth_domain:
                description: "Authenticate via SSL client certificates"
                http_enabled: false
                transport_enabled: false
                order: 2
                http_authenticator:
                  type: clientcert
                  config:
                    username_attribute: cn #optional, if omitted DN becomes username
                  challenge: false
                authentication_backend:
                  type: noop
              ldap:
                description: "Authenticate via LDAP or Active Directory"
                http_enabled: false
                transport_enabled: false
                order: 5
                http_authenticator:
                  type: basic
                  challenge: false
                authentication_backend:
                  # LDAP authentication backend (authenticate users against a LDAP or Active Directory)
                  type: ldap
                  config:
                    # enable ldaps
                    enable_ssl: false
                    # enable start tls, enable_ssl should be false
                    enable_start_tls: false
                    # send client certificate
                    enable_ssl_client_auth: false
                    # verify ldap hostname
                    verify_hostnames: true
                    hosts:
                    - localhost:8389
                    bind_dn: null
                    password: null
                    userbase: 'ou=people,dc=example,dc=com'
                    # Filter to search for users (currently in the whole subtree beneath userbase)
                    # {0} is substituted with the username
                    usersearch: '(sAMAccountName={0})'
                    # Use this attribute from the user as username (if not set then DN is used)
                    username_attribute: null
            authz:
              roles_from_myldap:
                description: "Authorize via LDAP or Active Directory"
                http_enabled: false
                transport_enabled: false
                authorization_backend:
                  # LDAP authorization backend (gather roles from a LDAP or Active Directory, you have to configure the above LDAP authentication backend settings too)
                  type: ldap
                  config:
                    # enable ldaps
                    enable_ssl: false
                    # enable start tls, enable_ssl should be false
                    enable_start_tls: false
                    # send client certificate
                    enable_ssl_client_auth: false
                    # verify ldap hostname
                    verify_hostnames: true
                    hosts:
                    - localhost:8389
                    bind_dn: null
                    password: null
                    rolebase: 'ou=groups,dc=example,dc=com'
                    # Filter to search for roles (currently in the whole subtree beneath rolebase)
                    # {0} is substituted with the DN of the user
                    # {1} is substituted with the username
                    # {2} is substituted with an attribute value from user's directory entry, of the authenticated user. Use userroleattribute to specify the name of the attribute
                    rolesearch: '(member={0})'
                    # Specify the name of the attribute which value should be substituted with {2} above
                    userroleattribute: null
                    # Roles as an attribute of the user entry
                    userrolename: disabled
                    #userrolename: memberOf
                    # The attribute in a role entry containing the name of that role, Default is "name".
                    # Can also be "dn" to use the full DN as rolename.
                    rolename: cn
                    # Resolve nested roles transitive (roles which are members of other roles and so on ...)
                    resolve_nested_roles: true
                    userbase: 'ou=people,dc=example,dc=com'
                    # Filter to search for users (currently in the whole subtree beneath userbase)
                    # {0} is substituted with the username
                    usersearch: '(uid={0})'
                    # Skip users matching a user name, a wildcard or a regex pattern
                    #skip_users:
                    #  - 'cn=Michael Jackson,ou*people,o=TEST'
                    #  - '/\S*/'
              roles_from_another_ldap:
                description: "Authorize via another Active Directory"
                http_enabled: false
                transport_enabled: false
                authorization_backend:
                  type: ldap
      roles.yml: |
        _meta:
          type: "roles"
          config_version: 2

        # Restrict users so they can only view visualization and dashboard on OpenSearchDashboards
        kibana_read_only:
          reserved: true

        # The security REST API access role is used to assign specific users access to change the security settings through the REST API.
        security_rest_api_access:
          reserved: true

        # Allows users to view monitors, destinations and alerts
        alerting_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/alerting/alerts/get'
            - 'cluster:admin/opendistro/alerting/destination/get'
            - 'cluster:admin/opendistro/alerting/destination/email_group/search'
            - 'cluster:admin/opendistro/alerting/destination/email_account/search'
            - 'cluster:admin/opendistro/alerting/monitor/get'
            - 'cluster:admin/opendistro/alerting/monitor/search'

        # Allows users to view and acknowledge alerts
        alerting_ack_alerts:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/alerting/alerts/*'

        # Allows users to use all alerting functionality
        alerting_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster_monitor'
            - 'cluster:admin/opendistro/alerting/*'
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - 'indices_monitor'
                - 'indices:admin/aliases/get'
                - 'indices:admin/mappings/get'

        # Allow users to read Anomaly Detection detectors and results
        anomaly_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/ad/detector/info'
            - 'cluster:admin/opendistro/ad/detector/search'
            - 'cluster:admin/opendistro/ad/detectors/get'
            - 'cluster:admin/opendistro/ad/result/search'
            - 'cluster:admin/opendistro/ad/tasks/search'
            - 'cluster:admin/opendistro/ad/detector/validate'
            - 'cluster:admin/opendistro/ad/result/topAnomalies'

        # Allows users to use all Anomaly Detection functionality
        anomaly_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster_monitor'
            - 'cluster:admin/opendistro/ad/*'
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - 'indices_monitor'
                - 'indices:admin/aliases/get'
                - 'indices:admin/mappings/get'

        # Allows users to read Notebooks
        notebooks_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/notebooks/list'
            - 'cluster:admin/opendistro/notebooks/get'

        # Allows users to all Notebooks functionality
        notebooks_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/notebooks/create'
            - 'cluster:admin/opendistro/notebooks/update'
            - 'cluster:admin/opendistro/notebooks/delete'
            - 'cluster:admin/opendistro/notebooks/get'
            - 'cluster:admin/opendistro/notebooks/list'

        # Allows users to read observability objects
        observability_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opensearch/observability/get'
            - 'cluster:admin/opensearch/ppl'

        # Allows users to all Observability functionality
        observability_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opensearch/observability/create'
            - 'cluster:admin/opensearch/observability/update'
            - 'cluster:admin/opensearch/observability/delete'
            - 'cluster:admin/opensearch/observability/get'

        # Allows users to read and download Reports
        reports_instances_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/reports/instance/list'
            - 'cluster:admin/opendistro/reports/instance/get'
            - 'cluster:admin/opendistro/reports/menu/download'

        # Allows users to read and download Reports and Report-definitions
        reports_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/reports/definition/get'
            - 'cluster:admin/opendistro/reports/definition/list'
            - 'cluster:admin/opendistro/reports/instance/list'
            - 'cluster:admin/opendistro/reports/instance/get'
            - 'cluster:admin/opendistro/reports/menu/download'

        # Allows users to all Reports functionality
        reports_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/reports/definition/create'
            - 'cluster:admin/opendistro/reports/definition/update'
            - 'cluster:admin/opendistro/reports/definition/on_demand'
            - 'cluster:admin/opendistro/reports/definition/delete'
            - 'cluster:admin/opendistro/reports/definition/get'
            - 'cluster:admin/opendistro/reports/definition/list'
            - 'cluster:admin/opendistro/reports/instance/list'
            - 'cluster:admin/opendistro/reports/instance/get'
            - 'cluster:admin/opendistro/reports/menu/download'

        # Allows users to use all asynchronous-search functionality
        asynchronous_search_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/asynchronous_search/*'
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - 'indices:data/read/search*'

        # Allows users to read stored asynchronous-search results
        asynchronous_search_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opendistro/asynchronous_search/get'

        # Allows user to use all index_management actions - ism policies, rollups, transforms
        index_management_full_access:
          reserved: true
          cluster_permissions:
            - "cluster:admin/opendistro/ism/*"
            - "cluster:admin/opendistro/rollup/*"
            - "cluster:admin/opendistro/transform/*"
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - 'indices:admin/opensearch/ism/*'

        # Allows users to use all cross cluster replication functionality at leader cluster
        cross_cluster_replication_leader_full_access:
          reserved: true
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - "indices:admin/plugins/replication/index/setup/validate"
                - "indices:data/read/plugins/replication/changes"
                - "indices:data/read/plugins/replication/file_chunk"

        # Allows users to use all cross cluster replication functionality at follower cluster
        cross_cluster_replication_follower_full_access:
          reserved: true
          cluster_permissions:
            - "cluster:admin/plugins/replication/autofollow/update"
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - "indices:admin/plugins/replication/index/setup/validate"
                - "indices:data/write/plugins/replication/changes"
                - "indices:admin/plugins/replication/index/start"
                - "indices:admin/plugins/replication/index/pause"
                - "indices:admin/plugins/replication/index/resume"
                - "indices:admin/plugins/replication/index/stop"
                - "indices:admin/plugins/replication/index/update"
                - "indices:admin/plugins/replication/index/status_check"

        # Allow users to read ML stats/models/tasks
        ml_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/openserach/ml/stats'
            - 'cluster:admin/opensearch/ml/models/get'
            - 'cluster:admin/opensearch/ml/models/search'
            - 'cluster:admin/opensearch/ml/tasks/get'
            - 'cluster:admin/opensearch/ml/tasks/search'

        # Allows users to read Notifications config/channels
        notifications_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opensearch/notifications/configs/get'
            - 'cluster:admin/opensearch/notifications/features'
            - 'cluster:admin/opensearch/notifications/channels/get'

        # Allows users to see snapshots, repositories, and snapshot management policies
        snapshot_management_read_access:
          reserved: true
          cluster_permissions:
            - 'cluster:admin/opensearch/snapshot_management/policy/get'
            - 'cluster:admin/opensearch/snapshot_management/policy/search'
            - 'cluster:admin/opensearch/snapshot_management/policy/explain'
            - 'cluster:admin/repository/get'
            - 'cluster:admin/snapshot/get'

        # Allows users to use all ML functionality
        ml_full_access:
          reserved: true
          cluster_permissions:
            - 'cluster_monitor'
            - 'cluster:admin/opensearch/ml/*'
          index_permissions:
            - index_patterns:
                - '*'
              allowed_actions:
                - 'indices_monitor'

        # Anonymous access role
        opendistro_security_anonymous_role:
          reserved: true
          cluster_permissions:
            # For using _cat/indices
            - 'cluster:monitor/state'
            - 'cluster:monitor/health'
            - 'cluster:monitor/nodes/info'
            # To enable read access to ISM
            - cluster:admin/opendistro/ism/policy/search
            - cluster:admin/opendistro/ism/managedindex/explain
            - cluster:admin/opendistro/rollup/search
            - cluster:admin/opendistro/transform/get_transforms
            # To enable read access to various ISM Jobs
            - cluster:admin/opendistro/rollup/explain
            - cluster:admin/opendistro/ism/policy/get
            - cluster:admin/opendistro/rollup/get
            - cluster:admin/opendistro/transform/explain
            - cluster:admin/opendistro/transform/get
            # To enable read access to security analytics
            - cluster:admin/opensearch/securityanalytics/rule/search
            - cluster:admin/opensearch/securityanalytics/detector/search
            - cluster:admin/opensearch/securityanalytics/findings/get
            - cluster:admin/opensearch/securityanalytics/alerts/get
            - cluster:admin/opensearch/securityanalytics/detector/get
            # For using sql join query
            - "indices:data/read/scroll*"
          index_permissions:
            - index_patterns:
              - ".kibana"
              - ".kibana-6"
              - ".kibana_*"
              - ".opensearch_dashboards"
              - ".opensearch_dashboards-6"
              - ".opensearch_dashboards_*"
              allowed_actions:
                - "read"
            - index_patterns:
              - ".tasks"
              - ".management-beats"
              - "*:.tasks"
              - "*:.management-beats"
              allowed_actions:
                - "read"
            - index_patterns:
              - 'opensearch_dashboards_sample_data_logs'
              - 'opensearch_dashboards_sample_data_flights'
              - 'opensearch_dashboards_sample_data_ecommerce'
              allowed_actions:
                - "read"
            - index_patterns:
              - '*'
              allowed_actions:
                - "read"
                - "indices:data/read/mget"
                - "indices:data/read/msearch"
                - "indices:data/read/mtv"
                - "indices:admin/get"
                - "indices:admin/aliases/exists*"
                - "indices:admin/aliases/get*"
                - "indices:admin/mappings/get"
                - "indices:data/read/scroll"
                - "indices:monitor/settings/get"
                - "indices:monitor/stats"
          tenant_permissions:
            - tenant_patterns:
              - '*'
              allowed_actions:
                - "kibana_all_read"
      roles_mapping.yml: |
        _meta:
          type: "rolesmapping"
          config_version: 2

        # Define your roles mapping here

        opendistro_security_anonymous_role:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        alerting_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        anomaly_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        notebooks_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        observability_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        reports_instances_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        reports_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        asynchronous_search_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        ml_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        notifications_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        snapshot_management_read_access:
          backend_roles:
          - "opendistro_security_anonymous_backendrole"

        ## Demo roles mapping

        all_access:
          reserved: false
          backend_roles:
          - "admin"
          description: "Maps admin to all_access"

        own_index:
          reserved: false
          users:
          - "*"
          description: "Allow full access to an index named like the username"

        logstash:
          reserved: false
          backend_roles:
          - "logstash"

        kibana_user:
          reserved: false
          backend_roles:
          - "kibanauser"
          description: "Maps kibanauser to kibana_user"

        readall:
          reserved: false
          backend_roles:
          - "readall"

        manage_snapshots:
          reserved: false
          backend_roles:
          - "snapshotrestore"

        kibana_server:
          reserved: true
          users:
          - "kibanaserver"

      action_groups.yml: |
        _meta:
          type: "actiongroups"
          config_version: 2
      tenants.yml: |
        _meta:
          type: "tenants"
          config_version: 2

        # Define your tenants here

        ## Demo tenants
        admin_tenant:
          reserved: false
          description: "Demo tenant for admin user"
      whitelist.yml: |
        config:
          enabled: false
          requests:
            /_cluster/settings:
              - GET
            /_cat/nodes:
              - GET
      nodes_dn.yml: |
        _meta:
          type: "nodesdn"
          config_version: 2
      audit.yml: |
        _meta:
          type: "audit"
          config_version: 2

        config:
          # enable/disable audit logging
          enabled: true

          audit:
            # Enable/disable REST API auditing
            enable_rest: true

            # Categories to exclude from REST API auditing
            disabled_rest_categories:
              - AUTHENTICATED
              - GRANTED_PRIVILEGES

            # Enable/disable Transport API auditing
            enable_transport: true

            # Categories to exclude from Transport API auditing
            disabled_transport_categories:
              - AUTHENTICATED
              - GRANTED_PRIVILEGES

            # Users to be excluded from auditing. Wildcard patterns are supported. Eg:
            # ignore_users: ["test-user", "employee-*"]
            ignore_users:
              - kibanaserver

            # Requests to be excluded from auditing. Wildcard patterns are supported. Eg:
            # ignore_requests: ["indices:data/read/*", "SearchRequest"]
            ignore_requests: []

            # Log individual operations in a bulk request
            resolve_bulk_requests: false

            # Include the body of the request (if available) for both REST and the transport layer
            log_request_body: true

            # Logs all indices affected by a request. Resolves aliases and wildcards/date patterns
            resolve_indices: true

            # Exclude sensitive headers from being included in the logs. Eg: Authorization
            exclude_sensitive_headers: true

          compliance:
            # enable/disable compliance
            enabled: true

            # Log updates to internal security changes
            internal_config: true

            # Log external config files for the node
            external_config: false

            # Log only metadata of the document for read events
            read_metadata_only: true

            # Map of indices and fields to monitor for read events. Wildcard patterns are supported for both index names and fields. Eg:
            # read_watched_fields: {
            #   "twitter": ["message"]
            #   "logs-*": ["id", "attr*"]
            # }
            read_watched_fields: {}

            # List of users to ignore for read events. Wildcard patterns are supported. Eg:
            # read_ignore_users: ["test-user", "employee-*"]
            read_ignore_users:
              - kibanaserver

            # Log only metadata of the document for write events
            write_metadata_only: true

            # Log only diffs for document updates
            write_log_diffs: false

            # List of indices to watch for write events. Wildcard patterns are supported
            # write_watched_indices: ["twitter", "logs-*"]
            write_watched_indices: []

            # List of users to ignore for write events. Wildcard patterns are supported. Eg:
            # write_ignore_users: ["test-user", "employee-*"]
            write_ignore_users:
              - kibanaserver

# How long to wait for OpenSearch to stop gracefully
terminationGracePeriod: 120

sysctlVmMaxMapCount: 262144

startupProbe:
  tcpSocket:
    port: 9200
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 3
  failureThreshold: 30
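# A note on the timing implied by the values above: the pod is allowed roughly
# initialDelaySeconds + failureThreshold * periodSeconds, i.e. 5 + 30 * 10 = 305
# seconds, to start answering TCP checks on port 9200 before the container is
# restarted.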

livenessProbe: {}
  # periodSeconds: 20
  # timeoutSeconds: 5
  # failureThreshold: 10
  # successThreshold: 1
  # initialDelaySeconds: 10
  # tcpSocket:
  #   port: 9200

readinessProbe:
  tcpSocket:
    port: 9200
  periodSeconds: 5
  timeoutSeconds: 3
  failureThreshold: 3

## Use an alternate scheduler.
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
schedulerName: ""

imagePullSecrets: []
nodeSelector: {}
tolerations: []

# Enabling this will publicly expose your OpenSearch instance.
# Only enable this if you have security enabled on your cluster.
ingress:
  enabled: false
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx

  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  ingressLabels: {}
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local
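  # A minimal sketch of an enabled ingress, expanding on the tls stub above
  # (the hostname and TLS secret name are illustrative assumptions, not
  # defaults of this chart):
  # enabled: true
  # ingressClassName: nginx
  # hosts:
  #   - opensearch.example.com
  # tls:
  #   - secretName: opensearch-example-tls
  #     hosts:
  #       - opensearch.example.com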

nameOverride: ""
fullnameOverride: ""

masterTerminationFix: false

opensearchLifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]

lifecycle: {}
  # preStop:
  #   exec:
  #     command: ["/bin/sh", "-c", "echo Hello from the preStop handler > /usr/share/message"]
  # postStart:
  #   exec:
  #     command:
  #       - bash
  #       - -c
  #       - |
  #         #!/bin/bash
  #         # Add a template to adjust number of shards/replicas
  #         TEMPLATE_NAME=my_template
  #         INDEX_PATTERN="logstash-*"
  #         SHARD_COUNT=8
  #         REPLICA_COUNT=1
  #         ES_URL=http://localhost:9200
  #         while [[ "$(curl -s -o /dev/null -w '%{http_code}\n' $ES_URL)" != "200" ]]; do sleep 1; done
  #         curl -XPUT "$ES_URL/_template/$TEMPLATE_NAME" -H 'Content-Type: application/json' -d'{"index_patterns":['\""$INDEX_PATTERN"\"'],"settings":{"number_of_shards":'$SHARD_COUNT',"number_of_replicas":'$REPLICA_COUNT'}}'

keystore: []
# To add secrets to the keystore:
#  - secretName: opensearch-encryption-key
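# A minimal sketch of how such a secret could be created (the secret name and
# keystore key are illustrative assumptions). Assuming the chart follows the
# usual pattern of turning every data key of the referenced secret into an
# OpenSearch keystore entry, something like:
#   kubectl create secret generic opensearch-encryption-key \
#     --from-literal=s3.client.default.access_key=<access-key>
# would make s3.client.default.access_key available in the keystore once the
# secret is listed under `keystore:` above.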

networkPolicy:
  create: false
  ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now.
  ## In order for a Pod to access OpenSearch, it needs to have the following label:
  ## {{ template "uname" . }}-client: "true"
  ## Example for default configuration to access HTTP port:
  ## opensearch-master-http-client: "true"
  ## Example for default configuration to access transport port:
  ## opensearch-master-transport-client: "true"

  http:
    enabled: false
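  # A minimal sketch of a client pod that would match the label selector
  # described above (pod name and image are illustrative assumptions):
  #   apiVersion: v1
  #   kind: Pod
  #   metadata:
  #     name: opensearch-client
  #     labels:
  #       opensearch-master-http-client: "true"
  #   spec:
  #     containers:
  #       - name: curl
  #         image: curlimages/curl
  #         command: ["sleep", "infinity"]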

# Deprecated
# Please use podSecurityContext.fsGroup above instead
fsGroup: ""

## Set optimal sysctls through securityContext. This requires privilege. Can be disabled if
## the system has already been preconfigured. (Ex: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html)
## Also see: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
sysctl:
  enabled: false
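  # For reference, enabling this is roughly equivalent to running the following
  # on each node (value taken from sysctlVmMaxMapCount above):
  #   sysctl -w vm.max_map_count=262144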

## Set optimal sysctls through a privileged initContainer.
sysctlInit:
  enabled: false
  # override image, which is busybox by default
  # image: busybox
  # override image tag, which is latest by default
  # imageTag:

## Enable to add third-party / custom plugins not offered in the default OpenSearch image.
plugins:
  enabled: false
  installList: []
  # - example-fake-plugin
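  # A hypothetical example (the plugin name is an illustrative assumption; any
  # plugin accepted by `opensearch-plugin install` should work here):
  # installList:
  #   - analysis-phonetic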

# -- Array of extra K8s manifests to deploy
extraObjects: []
  # - apiVersion: secrets-store.csi.x-k8s.io/v1
  #   kind: SecretProviderClass
  #   metadata:
  #     name: argocd-secrets-store
  #   spec:
  #     provider: aws
  #     parameters:
  #       objects: |
  #         - objectName: "argocd"
  #           objectType: "secretsmanager"
  #           jmesPath:
  #               - path: "client_id"
  #                 objectAlias: "client_id"
  #               - path: "client_secret"
  #                 objectAlias: "client_secret"
  #     secretObjects:
  #     - data:
  #       - key: client_id
  #         objectName: client_id
  #       - key: client_secret
  #         objectName: client_secret
  #       secretName: argocd-secrets-store
  #       type: Opaque
  #       labels:
  #         app.kubernetes.io/part-of: argocd
  # - |
  #    apiVersion: policy/v1
  #    kind: PodDisruptionBudget
  #    metadata:
  #      name: {{ template "opensearch.uname" . }}
  #      labels:
  #        {{- include "opensearch.labels" . | nindent 4 }}
  #    spec:
  #      minAvailable: 1
  #      selector:
  #        matchLabels:
  #          {{- include "opensearch.selectorLabels" . | nindent 6 }}
