From 33305d9e451b07b2cccd2a2400bb65eb824ae961 Mon Sep 17 00:00:00 2001
From: "v.oleynikov"
Date: Thu, 28 Mar 2024 20:11:26 +0300
Subject: [PATCH] fix

Signed-off-by: v.oleynikov
---
 docs/README.md    | 130 +++++++++++++++++-----------------------------
 docs/README_RU.md |  19 +++----
 2 files changed, 59 insertions(+), 90 deletions(-)

diff --git a/docs/README.md b/docs/README.md
index f82d2626..1320b5d5 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -4,22 +4,13 @@ description: "The sds-local-volume module: General Concepts and Principles."
 moduleStatus: experimental
 ---
 
-{{< alert level="warning" >}}
-The module is only guaranteed to work if [requirements](./readme.html#system-requirements-and-recommendations) are met.
-As for any other configurations, the module may work, but its smooth operation is not guaranteed.
-{{< /alert >}}
-
-This module manages replicated block storage based on `DRBD`. Currently, `LINSTOR` is used as a control-plane. The module allows you to create a `Storage Pool` in `LINSTOR` as well as a `StorageClass` in `Kubernetes` by creating [Kubernetes custom resources](./cr.html).
-To create a `Storage Pool`, you will need the `LVMVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/) module.
+This module manages local block storage based on `LVM`. The module allows you to create a `StorageClass` in `Kubernetes` by creating [Kubernetes custom resources](./cr.html).
+To create a `StorageClass`, you will need an `LVMVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/) module.
 
 > **Caution!** Before enabling the `sds-local-volume` module, you must enable the `sds-node-configurator` module.
->
-> **Caution!** The user is not allowed to configure the `LINSTOR` backend directly.
->
-> **Caution!** Data synchronization during volume replication is carried out in synchronous mode only, asynchronous mode is not supported.
-
-After you enable the `sds-local-volume` module in the Deckhouse configuration, your cluster will be automatically set to use the `LINSTOR` backend. You will only have to create [storage pools and StorageClasses](./usage.html#configuring-the-linstor-backend).
+
+After you enable the `sds-local-volume` module in the Deckhouse configuration, you will only have to create StorageClasses.
 
-> **Caution!** The user is not allowed to create a `StorageClass` for the replicated.csi.storage.deckhouse.io CSI driver.
+> **Caution!** The user is not allowed to create a `StorageClass` for the local.csi.storage.deckhouse.io CSI driver.
 
 Two modes are supported: LVM and LVMThin. Each mode has its advantages and disadvantages. Read [FAQ](./faq.html#what-is-difference-between-lvm-and-lvmthin) to learn more and compare them.
@@ -52,8 +43,7 @@ kubectl get mc sds-node-configurator -w
 ```
 
 - Enable the `sds-local-volume` module. Refer to the [configuration](./configuration.html) to learn more about module settings. In the example below, the module is launched with the default settings. This will result in the following actions across all cluster nodes:
-  - installation of the `DRBD` kernel module;
   - registration of the CSI driver;
   - launch of service pods for the `sds-local-volume` components.
 
 ```shell
@@ -74,7 +63,7 @@ EOF
 kubectl get mc sds-local-volume -w
 ```
 
-- Make sure that all pods in `d8-sds-local-volume` and `d8-sds-node-configurator` namespaces are `Running` or `Completed` and are running on all nodes where `DRBD` resources are intended to be used.
+- Make sure that all pods in `d8-sds-local-volume` and `d8-sds-node-configurator` namespaces are `Running` or `Completed` and are running on all nodes where `LVM` resources are intended to be used.
 
 ```shell
 kubectl -n d8-sds-local-volume get pod -owide -w
 kubectl -n d8-sds-node-configurator get pod -o wide -w
 ```
 
 ### Configuring storage on nodes
 
-You need to create `LVM` volume groups on the nodes using `LVMVolumeGroup` custom resources. As part of this quickstart guide, we will create a regular `Thick` storage. See [usage examples](./usage.html) to learn more about custom resources.
+You need to create `LVM` volume groups on the nodes using `LVMVolumeGroup` custom resources. As part of this quickstart guide, we will create `Thin` storage. See [usage examples](./usage.html) to learn more about custom resources.
 
 To configure the storage:
 
@@ -92,12 +81,13 @@ To configure the storage:
 
 ```shell
 kubectl get bd
 
-NAME                                           NODE       CONSUMABLE   SIZE   PATH
-dev-0a29d20f9640f3098934bca7325f3080d9b6ef74   worker-0   true         30Gi   /dev/vdd
-dev-457ab28d75c6e9c0dfd50febaac785c838f9bf97   worker-0   false        20Gi   /dev/vde
-dev-49ff548dfacba65d951d2886c6ffc25d345bb548   worker-1   true         35Gi   /dev/vde
-dev-75d455a9c59858cf2b571d196ffd9883f1349d2e   worker-2   true         35Gi   /dev/vdd
-dev-ecf886f85638ee6af563e5f848d2878abae1dcfd   worker-0   true         5Gi    /dev/vdb
+NAME                                           NODE       CONSUMABLE   SIZE           PATH
+dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa   worker-0   true         976762584Ki    /dev/nvme1n1
+dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd   worker-0   true         894006140416   /dev/nvme0n1p6
+dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0   worker-1   true         976762584Ki    /dev/nvme1n1
+dev-b103062f879a2349a9c5f054e0366594568de68d   worker-1   true         894006140416   /dev/nvme0n1p6
+dev-53d904f18b912187ac82de29af06a34d9ae23199   worker-2   true         976762584Ki    /dev/nvme1n1
+dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1   worker-2   true         894006140416   /dev/nvme0n1p6
 ```
@@ -112,8 +102,11 @@ metadata:
 spec:
   type: Local
   blockDeviceNames: # specify the names of the BlockDevice resources that are located on the target node and whose CONSUMABLE is set to true. Note that the node name is not specified anywhere since it is derived from BlockDevice resources.
-    - dev-0a29d20f9640f3098934bca7325f3080d9b6ef74
-    - dev-ecf886f85638ee6af563e5f848d2878abae1dcfd
+    - dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa
+    - dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd
+  thinPools:
+    - name: ssd-thin
+      size: 50Gi
   actualVGNameOnTheNode: "vg-1" # the name of the LVM VG to be created from the above block devices on the node
 EOF
 ```
@@ -137,7 +130,11 @@ metadata:
 spec:
   type: Local
   blockDeviceNames:
-    - dev-49ff548dfacba65d951d2886c6ffc25d345bb548
+    - dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0
+    - dev-b103062f879a2349a9c5f054e0366594568de68d
+  thinPools:
+    - name: ssd-thin
+      size: 50Gi
   actualVGNameOnTheNode: "vg-1"
 EOF
 ```
@@ -161,7 +158,11 @@ metadata:
 spec:
   type: Local
   blockDeviceNames:
-    - dev-75d455a9c59858cf2b571d196ffd9883f1349d2e
+    - dev-53d904f18b912187ac82de29af06a34d9ae23199
+    - dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1
+  thinPools:
+    - name: ssd-thin
+      size: 50Gi
   actualVGNameOnTheNode: "vg-1"
 EOF
 ```
@@ -174,86 +175,53 @@ kubectl get lvg vg-1-on-worker-2 -w
 ```
 
-- The resource becoming `Operational` means that an LVM VG named `vg-1` made up of the `/dev/vdd` block device has been created on the `worker-2` node.
+- The resource becoming `Operational` means that an LVM VG named `vg-1` made up of the `/dev/nvme1n1` and `/dev/nvme0n1p6` block devices has been created on the `worker-2` node.
 
-- Now that we have all the LVM VGs created on the nodes, create a [ReplicatedStoragePool](./cr.html#replicatedstoragepool) out of those VGs:
+- Create a [LocalStorageClass](./cr.html#localstorageclass) resource for a zone-free cluster (see [use cases](./layouts.html) for details on how zonal StorageClasses work):
 
+```yaml
+kubectl apply -f - <<EOF
+apiVersion: storage.deckhouse.io/v1alpha1
+kind: LocalStorageClass
+metadata:
+  name: local-storage-class
+spec:
+  lvm:
+    type: Thin
+    lvmVolumeGroups:
+      - name: vg-1-on-worker-0
+        thin:
+          poolName: ssd-thin
+      - name: vg-1-on-worker-1
+        thin:
+          poolName: ssd-thin
+      - name: vg-1-on-worker-2
+        thin:
+          poolName: ssd-thin
+  reclaimPolicy: Delete
+  volumeBindingMode: WaitForFirstConsumer
+EOF
+```
diff --git a/docs/README_RU.md b/docs/README_RU.md
--- a/docs/README_RU.md
+++ b/docs/README_RU.md
 > **Внимание!** Перед включением модуля `sds-local-volume` необходимо включить модуль `sds-node-configurator`.
+> После включения модуля `sds-local-volume` необходимо создать StorageClass'ы.
@@ -72,7 +72,7 @@ kubectl -n d8-sds-node-configurator get pod -o wide -w
 
 ### Настройка хранилища на узлах
 
-Необходимо на этих узлах создать группы томов `LVM` с помощью пользовательских ресурсов `LVMVolumeGroup`. В быстром старте будем создавать обычное `Thick` хранилище. Подробнее про пользовательские ресурсы и примеры их использования можно прочитать в [примерах использования](./usage.html).
+Необходимо на этих узлах создать группы томов `LVM` с помощью пользовательских ресурсов `LVMVolumeGroup`. В быстром старте будем создавать `Thin` хранилище. Подробнее про пользовательские ресурсы и примеры их использования можно прочитать в [примерах использования](./usage.html).
 
 Приступим к настройке хранилища:
@@ -81,12 +81,13 @@ kubectl -n d8-sds-node-configurator get pod -o wide -w
 
 ```shell
 kubectl get bd
 
-NAME                                           NODE       CONSUMABLE   SIZE   PATH
-dev-0a29d20f9640f3098934bca7325f3080d9b6ef74   worker-0   true         30Gi   /dev/vdd
-dev-457ab28d75c6e9c0dfd50febaac785c838f9bf97   worker-0   false        20Gi   /dev/vde
-dev-49ff548dfacba65d951d2886c6ffc25d345bb548   worker-1   true         35Gi   /dev/vde
-dev-75d455a9c59858cf2b571d196ffd9883f1349d2e   worker-2   true         35Gi   /dev/vdd
-dev-ecf886f85638ee6af563e5f848d2878abae1dcfd   worker-0   true         5Gi    /dev/vdb
+NAME                                           NODE       CONSUMABLE   SIZE           PATH
+dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa   worker-0   true         976762584Ki    /dev/nvme1n1
+dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd   worker-0   true         894006140416   /dev/nvme0n1p6
+dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0   worker-1   true         976762584Ki    /dev/nvme1n1
+dev-b103062f879a2349a9c5f054e0366594568de68d   worker-1   true         894006140416   /dev/nvme0n1p6
+dev-53d904f18b912187ac82de29af06a34d9ae23199   worker-2   true         976762584Ki    /dev/nvme1n1
+dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1   worker-2   true         894006140416   /dev/nvme0n1p6
 ```
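
Once the quickstart in the patched README is complete, the resulting StorageClass can be consumed like any other. The sketch below is illustrative rather than part of the patch: it assumes the `LocalStorageClass` above yields a StorageClass named `local-storage-class`, and the claim name, namespace, and requested size are arbitrary.

```yaml
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-local-pvc     # illustrative name, not from the patch
  namespace: default     # illustrative namespace
spec:
  accessModes:
    - ReadWriteOnce      # a local LVM volume is bound to a single node
  storageClassName: local-storage-class  # assumed name of the StorageClass produced above
  resources:
    requests:
      storage: 1Gi       # must fit into the 50Gi ssd-thin pools created above
EOF
```

Because local volumes are tied to one node, `ReadWriteOnce` is the natural access mode. If the StorageClass uses `volumeBindingMode: WaitForFirstConsumer`, the PVC stays `Pending` until a pod that mounts it is scheduled, at which point the volume is provisioned in the thin pool on that pod's node.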