
Commit: fix
Signed-off-by: v.oleynikov <vasily.oleynikov@flant.com>
duckhawk committed Mar 28, 2024
1 parent 187c9d2 commit 33305d9
Showing 2 changed files with 59 additions and 90 deletions.
docs/README.md: 49 additions & 81 deletions

---
description: "The sds-local-volume module: General Concepts and Principles."
moduleStatus: experimental
---

{{< alert level="warning" >}}
The module is only guaranteed to work if [requirements](./readme.html#system-requirements-and-recommendations) are met.
As for any other configurations, the module may work, but its smooth operation is not guaranteed.
{{< /alert >}}

This module manages local block storage based on `LVM`. The module allows you to create a `StorageClass` in `Kubernetes` by creating [Kubernetes custom resources](./cr.html).
To create a `StorageClass`, you will need an `LVMVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/) module.

> **Caution!** Before enabling the `sds-local-volume` module, you must enable the `sds-node-configurator` module.

After you enable the `sds-local-volume` module in the Deckhouse configuration, you will only have to create StorageClasses.

> **Caution!** The user is not allowed to create a `StorageClass` for the local.csi.storage.deckhouse.io CSI driver.

Two modes are supported: LVM and LVMThin.
Each mode has its advantages and disadvantages. Read [FAQ](./faq.html#what-is-difference-between-lvm-and-lvmthin) to learn more and compare them.
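
In short: with LVM (`Thick`), each volume is an ordinary logical volume that reserves its full size in the volume group up front; with LVMThin, volumes are thin-provisioned from a thin pool, which allows overcommit at the cost of monitoring free pool space. In the `LocalStorageClass` resources used later in this guide, the mode surfaces as the `spec.lvm.type` field; a minimal sketch of the two variants follows (the `Thick` fragment is an assumption by analogy, as only `Thin` appears in this guide):

```yaml
# LVM ("Thick") mode: ordinary LVs, full requested size reserved up front
spec:
  lvm:
    type: Thick
---
# LVMThin mode: thin-provisioned LVs allocated from a thin pool
spec:
  lvm:
    type: Thin
```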

- Make sure the `sds-node-configurator` module is enabled:

```shell
kubectl get mc sds-node-configurator -w
```

- Enable the `sds-local-volume` module. Refer to the [configuration](./configuration.html) to learn more about module settings. In the example below, the module is launched with the default settings. This will result in the following actions across all cluster nodes:
  - registration of the CSI driver;
  - launch of service pods for the `sds-local-volume` components.

```shell
# a minimal ModuleConfig that enables the module with its default settings
kubectl apply -f -<<EOF
apiVersion: deckhouse.io/v1alpha1
kind: ModuleConfig
metadata:
  name: sds-local-volume
spec:
  enabled: true
  version: 1
EOF

kubectl get mc sds-local-volume -w
```

- Make sure that all pods in `d8-sds-local-volume` and `d8-sds-node-configurator` namespaces are `Running` or `Completed` and are running on all nodes where `LVM` resources are intended to be used.

```shell
kubectl -n d8-sds-local-volume get pod -owide -w
kubectl -n d8-sds-node-configurator get pod -o wide -w
```

### Configuring storage on nodes

You need to create `LVM` volume groups on the nodes using `LVMVolumeGroup` custom resources. As part of this quickstart guide, we will create `Thin` storage. See [usage examples](./usage.html) to learn more about custom resources.

To configure the storage:

```shell
kubectl get bd

NAME                                           NODE       CONSUMABLE   SIZE           PATH
dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa   worker-0   false        976762584Ki    /dev/nvme1n1
dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd   worker-0   false        894006140416   /dev/nvme0n1p6
dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0   worker-1   false        976762584Ki    /dev/nvme1n1
dev-b103062f879a2349a9c5f054e0366594568de68d   worker-1   false        894006140416   /dev/nvme0n1p6
dev-53d904f18b912187ac82de29af06a34d9ae23199   worker-2   false        976762584Ki    /dev/nvme1n1
dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1   worker-2   false        894006140416   /dev/nvme0n1p6
```
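
To single out the devices that are free for use, a filter along these lines may help (a sketch, assuming the `CONSUMABLE` column is backed by the `status.consumable` field of the `BlockDevice` resource):

```shell
# print name, node, and path of consumable block devices
kubectl get bd -o json | jq -r \
  '.items[] | select(.status.consumable == true)
   | [.metadata.name, .status.nodeName, .status.path] | @tsv'
```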


- Create an `LVMVolumeGroup` resource for `worker-0`:

```yaml
kubectl apply -f -<<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: vg-1-on-worker-0
spec:
  type: Local
  blockDeviceNames: # specify the names of the BlockDevice resources that are located on the target node and whose CONSUMABLE is set to true. Note that the node name is not specified anywhere since it is derived from the BlockDevice resources.
    - dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa
    - dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd
  thinPools:
    - name: ssd-thin
      size: 50Gi
  actualVGNameOnTheNode: "vg-1" # the name of the LVM VG to be created from the above block devices on the node
EOF
```

- Create an `LVMVolumeGroup` resource for `worker-1`:

```yaml
kubectl apply -f -<<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: vg-1-on-worker-1
spec:
  type: Local
  blockDeviceNames:
    - dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0
    - dev-b103062f879a2349a9c5f054e0366594568de68d
  thinPools:
    - name: ssd-thin
      size: 50Gi
  actualVGNameOnTheNode: "vg-1"
EOF
```

- Create an `LVMVolumeGroup` resource for `worker-2`:

```yaml
kubectl apply -f -<<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LVMVolumeGroup
metadata:
  name: vg-1-on-worker-2
spec:
  type: Local
  blockDeviceNames:
    - dev-53d904f18b912187ac82de29af06a34d9ae23199
    - dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1
  thinPools:
    - name: ssd-thin
      size: 50Gi
  actualVGNameOnTheNode: "vg-1"
EOF
```

- Wait for the created resource to become `Operational`:

```shell
kubectl get lvg vg-1-on-worker-2 -w
```

- The resource becoming `Operational` means that an LVM VG named `vg-1` made up of the selected block devices has been created on the `worker-2` node.
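
As an optional cross-check (plain LVM tooling, not module-specific), the VG and its thin pool should now be visible on the node itself:

```shell
# run on worker-2
sudo vgs vg-1   # the volume group created by the resource
sudo lvs vg-1   # the ssd-thin thin pool shows up as an LV of the VG
```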

- Create a [LocalStorageClass](./cr.html#localstorageclass) resource:

```yaml
kubectl apply -f -<<EOF
apiVersion: storage.deckhouse.io/v1alpha1
kind: LocalStorageClass
metadata:
  name: local-storage-class
spec:
  isDefault: false
  lvm:
    lvmVolumeGroups: # Here, specify the names of the LVMVolumeGroup resources you created earlier
      - name: vg-1-on-worker-0
        thin:
          poolName: ssd-thin
      - name: vg-1-on-worker-1
        thin:
          poolName: ssd-thin
      - name: vg-1-on-worker-2
        thin:
          poolName: ssd-thin
    type: Thin
  reclaimPolicy: Delete
  volumeBindingMode: WaitForFirstConsumer
EOF
```

- Wait for the created `LocalStorageClass` resource to become `Created`:

```shell
kubectl get lsc local-storage-class -w
```

- Confirm that the corresponding `StorageClass` has been created:

```shell
kubectl get sc local-storage-class
```

- If a `StorageClass` named `local-storage-class` is shown, the configuration of the `sds-local-volume` module is complete. Users can now create PVs by specifying the `local-storage-class` StorageClass.
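
As a quick smoke test, a PersistentVolumeClaim can reference the new StorageClass (a sketch; the PVC name below is illustrative). Because the StorageClass uses `volumeBindingMode: WaitForFirstConsumer`, the PVC will remain `Pending` until a pod that uses it is scheduled:

```yaml
kubectl apply -f -<<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-pvc # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage-class
  resources:
    requests:
      storage: 1Gi
EOF
```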

## System requirements and recommendations

### Requirements
- Stock kernels shipped with the [supported distributions](https://deckhouse.io/documentation/v1/supported_versions.html#linux).
- A high-speed 10 Gbps network.
- Do not use other SDS (software-defined storage) solutions to provide disks to this SDS.

### Recommendations

- Avoid using RAID. The reasons are detailed in the [FAQ](./faq.html#why-is-it-not-recommended-to-use-raid-for-disks-that-are-used-by-the-sds-local-volume-module).

- Use local physical disks. The reasons are detailed in the [FAQ](./faq.html#why-do-you-recommend-using-local-disks-and-not-nas).
docs/README_RU.md: 10 additions & 9 deletions

---
description: "The sds-local-volume module: general concepts and principles."
moduleStatus: experimental
---

This module manages local block storage based on `LVM`. The module allows you to create a `StorageClass` in `Kubernetes` by creating [Kubernetes custom resources](./cr.html).
To create a `StorageClass`, you will need an `LVMVolumeGroup` configured on the cluster nodes. The `LVM` configuration is done by the [sds-node-configurator](../../sds-node-configurator/) module.

> **Caution!** Before enabling the `sds-local-volume` module, you must enable the `sds-node-configurator` module.

After you enable the `sds-local-volume` module, you will need to create StorageClasses.

- Make sure that all pods in the `d8-sds-local-volume` and `d8-sds-node-configurator` namespaces are `Running` or `Completed` and are running on all nodes where `LVM` resources are intended to be used:

```shell
kubectl -n d8-sds-node-configurator get pod -o wide -w
```

### Configuring storage on nodes

You need to create `LVM` volume groups on these nodes using `LVMVolumeGroup` custom resources. As part of this quickstart guide, we will create `Thin` storage. See [usage examples](./usage.html) to learn more about custom resources.

To configure the storage:

```shell
kubectl get bd

NAME                                           NODE       CONSUMABLE   SIZE           PATH
dev-ef4fb06b63d2c05fb6ee83008b55e486aa1161aa   worker-0   false        976762584Ki    /dev/nvme1n1
dev-0cfc0d07f353598e329d34f3821bed992c1ffbcd   worker-0   false        894006140416   /dev/nvme0n1p6
dev-7e4df1ddf2a1b05a79f9481cdf56d29891a9f9d0   worker-1   false        976762584Ki    /dev/nvme1n1
dev-b103062f879a2349a9c5f054e0366594568de68d   worker-1   false        894006140416   /dev/nvme0n1p6
dev-53d904f18b912187ac82de29af06a34d9ae23199   worker-2   false        976762584Ki    /dev/nvme1n1
dev-6c5abbd549100834c6b1668c8f89fb97872ee2b1   worker-2   false        894006140416   /dev/nvme0n1p6
```

