Added basic liveness- & readinessProbe for awx-web & awx-task containers #927

Closed
wants to merge 1 commit into from

Conversation

@moonrail commented May 19, 2022

Adds a basic livenessProbe & readinessProbe for the web & task containers.
See issue #926 for background on this.

Removes the env var AWX_SKIP_MIGRATIONS, as it was removed from the launch scripts in AWX 18.0.0 and is no longer used.

As discussed in this PR:
Adds an initContainer database-migration that applies migrations, or waits if migrations are already being applied. This way no launch script can interfere and no AWX pod can come online (and sit there unavailable) before migrations have been applied.
Removes the awx-operator-based DB migration handling from the installer, as the containers have already done that job by this point.
As a consequence, wait_timeout had to be increased in the installer for the AWX pods deployment, to account for the longer initial startup time when many/all migrations have to be applied.
I've also copied the CA volume mounts from the web & task containers to this new initContainer, as these may be required depending on the SSL certificate used on an (external) database host.
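
A minimal sketch of what the probes look like on the web container in the deployment template (path and port follow the review comments below; the other values are illustrative, not the exact values in the diff):

    readinessProbe:
      httpGet:
        path: /api/v2/ping/
        port: 8052
      timeoutSeconds: 2
    livenessProbe:
      httpGet:
        path: /api/v2/ping/
        port: 8052
      timeoutSeconds: 2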

@moonrail (Author)
Regarding the test failure:
Task "Create the awx.ansible.com/v1alpha1.AWX": Resource creation timed out

This does not look like a failure introduced by these changes, but rather like the test K8s cluster not being reachable.

@kdelee (Member) commented May 24, 2022

@rooftopcellist I like this. I know you had expressed concern about not seeing the migration screen. But I don't care about that, and I don't think most people would. In OCP I don't even think the route will become available until the service is up, and the service won't be up until the pods are ready. So the route will become available when the application is ready and available -- which is the desired state.

@kdelee (Member) commented May 24, 2022

@moonrail I don't agree that the failure is the tests not reaching the k8s cluster.

the test:

    - name: Create the awx.ansible.com/v1alpha1.AWX
      k8s:
        state: present
        namespace: '{{ namespace }}'
        definition: "{{ lookup('template', 'awx_cr_molecule.yml.j2') | from_yaml }}"
        wait: yes
        wait_timeout: 900
        wait_condition:
          type: Running
          reason: Successful
          status: "True"

the output:

 TASK [Create the awx.ansible.com/v1alpha1.AWX] *********************************
  task path: /home/runner/work/awx-operator/awx-operator/molecule/default/tasks/awx_test.yml:2
  redirecting (type: action) kubernetes.core.k8s to kubernetes.core.k8s_info
  redirecting (type: action) kubernetes.core.k8s to kubernetes.core.k8s_info
  <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: runner
  <127.0.0.1> EXEC /bin/sh -c 'echo ~runner && sleep 0'
  <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/runner/.ansible/tmp `"&& mkdir "` echo /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062 `" && echo ansible-tmp-1653055913.816676-8733-20931721321062="` echo /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062 `" ) && sleep 0'
  Using module file /home/runner/.ansible/collections/ansible_collections/kubernetes/core/plugins/modules/k8s.py
  <127.0.0.1> PUT /home/runner/.ansible/tmp/ansible-local-8725yojx4uon/tmpqexu8m9v TO /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062/AnsiballZ_k8s.py
  <127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062/ /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062/AnsiballZ_k8s.py && sleep 0'
  <127.0.0.1> EXEC /bin/sh -c '/opt/hostedtoolcache/Python/3.8.12/x64/bin/python /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062/AnsiballZ_k8s.py && sleep 0'
  <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/runner/.ansible/tmp/ansible-tmp-1653055913.816676-8733-20931721321062/ > /dev/null 2>&1 && sleep 0'
  fatal: [localhost]: FAILED! => {
      "changed": true,
      "duration": 902,
      "invocation": {
          "module_args": {
              "api_key": null,
              "api_version": "v1",
              "append_hash": false,
              "apply": false,
              "ca_cert": null,
              "client_cert": null,
              "client_key": null,
              "context": null,
              "delete_options": null,
              "force": false,
              "host": null,
              "kind": null,
              "kubeconfig": null,
              "merge_type": null,
              "name": null,
              "namespace": "osdk-test",
              "password": null,
              "persist_config": null,
              "proxy": null,
              "resource_definition": {
                  "apiVersion": "awx.ansible.com/v1beta1",
                  "kind": "AWX",
                  "metadata": {
                      "name": "example-awx",
                      "namespace": "osdk-test"
                  },
                  "spec": {
                      "ee_resource_requirements": {
                          "requests": {
                              "cpu": "200m",
                              "memory": "32M"
                          }
                      },
                      "ingress_annotations": "kubernetes.io/ingress.class: nginx\n",
                      "ingress_type": "ingress",
                      "postgres_init_container_resource_requirements": {},
                      "postgres_resource_requirements": {},
                      "redis_resource_requirements": {},
                      "task_resource_requirements": {
                          "requests": {
                              "cpu": "100m",
                              "memory": "32M"
                          }
                      },
                      "web_resource_requirements": {
                          "requests": {
                              "cpu": "100m",
                              "memory": "32M"
                          }
                      }
                  }
              },
              "src": null,
              "state": "present",
              "template": null,
              "username": null,
              "validate": null,
              "validate_certs": null,
              "wait": true,
              "wait_condition": {
                  "reason": "Successful",
                  "status": "True",
                  "type": "Running"
              },
              "wait_sleep": 5,
              "wait_timeout": 900
          }
      },
      "method": "create",
      "msg": "Resource creation timed out",
      "result": {
          "apiVersion": "awx.ansible.com/v1beta1",
          "kind": "AWX",
          "metadata": {
              "creationTimestamp": "2022-05-20T14:11:54Z",
              "generation": 1,
              "labels": {
                  "app.kubernetes.io/component": "awx",
                  "app.kubernetes.io/managed-by": "awx-operator",
                  "app.kubernetes.io/name": "example-awx",
                  "app.kubernetes.io/operator-version": "",
                  "app.kubernetes.io/part-of": "example-awx"
              },
              "managedFields": [
                  {
                      "apiVersion": "awx.ansible.com/v1beta1",
                      "fieldsType": "FieldsV1",
                      "fieldsV1": {
                          "f:status": {
                              ".": {},
                              "f:conditions": {}
                          }
                      },
                      "manager": "ansible-operator",
                      "operation": "Update",
                      "subresource": "status",
                      "time": "2022-05-20T14:11:54Z"
                  },
                  {
                      "apiVersion": "awx.ansible.com/v1beta1",
                      "fieldsType": "FieldsV1",
                      "fieldsV1": {
                          "f:metadata": {
                              "f:labels": {
                                  ".": {},
                                  "f:app.kubernetes.io/component": {},
                                  "f:app.kubernetes.io/managed-by": {},
                                  "f:app.kubernetes.io/name": {},
                                  "f:app.kubernetes.io/operator-version": {},
                                  "f:app.kubernetes.io/part-of": {}
                              }
                          },
                          "f:spec": {
                              ".": {},
                              "f:admin_user": {},
                              "f:create_preload_data": {},
                              "f:ee_resource_requirements": {
                                  ".": {},
                                  "f:requests": {
                                      ".": {},
                                      "f:cpu": {},
                                      "f:memory": {}
                                  }
                              },
                              "f:garbage_collect_secrets": {},
                              "f:image_pull_policy": {},
                              "f:ingress_annotations": {},
                              "f:ingress_type": {},
                              "f:loadbalancer_port": {},
                              "f:loadbalancer_protocol": {},
                              "f:nodeport_port": {},
                              "f:postgres_init_container_resource_requirements": {},
                              "f:postgres_resource_requirements": {},
                              "f:projects_persistence": {},
                              "f:projects_storage_access_mode": {},
                              "f:projects_storage_size": {},
                              "f:redis_resource_requirements": {},
                              "f:replicas": {},
                              "f:route_tls_termination_mechanism": {},
                              "f:task_privileged": {},
                              "f:task_resource_requirements": {
                                  ".": {},
                                  "f:requests": {
                                      ".": {},
                                      "f:cpu": {},
                                      "f:memory": {}
                                  }
                              },
                              "f:web_resource_requirements": {
                                  ".": {},
                                  "f:requests": {
                                      ".": {},
                                      "f:cpu": {},
                                      "f:memory": {}
                                  }
                              }
                          }
                      },
                      "manager": "OpenAPI-Generator",
                      "operation": "Update",
                      "time": "2022-05-20T14:11:57Z"
                  }
              ],
              "name": "example-awx",
              "namespace": "osdk-test",
              "resourceVersion": "2753",
              "uid": "863cc513-58ea-41e2-a4fc-2fc3cf7d7f2e"
          },
          "spec": {
              "admin_user": "admin",
              "create_preload_data": true,
              "ee_resource_requirements": {
                  "requests": {
                      "cpu": "200m",
                      "memory": "32M"
                  }
              },
              "garbage_collect_secrets": false,
              "image_pull_policy": "IfNotPresent",
              "ingress_annotations": "kubernetes.io/ingress.class: nginx\n",
              "ingress_type": "ingress",
              "loadbalancer_port": 80,
              "loadbalancer_protocol": "http",
              "nodeport_port": 30080,
              "postgres_init_container_resource_requirements": {},
              "postgres_resource_requirements": {},
              "projects_persistence": false,
              "projects_storage_access_mode": "ReadWriteMany",
              "projects_storage_size": "8Gi",
              "redis_resource_requirements": {},
              "replicas": 1,
              "route_tls_termination_mechanism": "Edge",
              "task_privileged": false,
              "task_resource_requirements": {
                  "requests": {
                      "cpu": "100m",
                      "memory": "32M"
                  }
              },
              "web_resource_requirements": {
                  "requests": {
                      "cpu": "100m",
                      "memory": "32M"
                  }
              }
          },
          "status": {
              "conditions": [
                  {
                      "lastTransitionTime": "2022-05-20T14:25:04Z",
                      "reason": "Failed",
                      "status": "False",
                      "type": "Failure"
                  },
                  {
                      "lastTransitionTime": "2022-05-20T14:25:04Z",
                      "reason": "Running",
                      "status": "True",
                      "type": "Running"
                  }
              ]
          }
      }
  }

The key part being:

          "status": {
              "conditions": [
                  {
                      "lastTransitionTime": "2022-05-20T14:25:04Z",
                      "reason": "Failed",
                      "status": "False",
                      "type": "Failure"
                  },
                  {
                      "lastTransitionTime": "2022-05-20T14:25:04Z",
                      "reason": "Running",
                      "status": "True",
                      "type": "Running"
                  }
              ]

I wonder if we need to put retries on that task. I'm not an expert on these molecule tests, so I may be wrong. My impression is that we expect it to not be running for a period of time and then to start running.

@rooftopcellist (Member)
Yes, @kdelee is right. These 3 conditions in the molecule test linked above (which CI runs) will not be met until the readiness probe succeeds. The probe will not succeed until migrations are complete, which takes significantly longer than just waiting for the pod to be in the running state (which is what it used to wait for).

    wait_condition:
      type: Running
      reason: Successful
      status: "True"

I would suggest raising the timeout to 1500s (25 min). Retry logic isn't needed; wait_condition takes care of this for us.
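
For reference, the adjusted molecule task would then look roughly like this (only wait_timeout changes):

    - name: Create the awx.ansible.com/v1alpha1.AWX
      k8s:
        state: present
        namespace: '{{ namespace }}'
        definition: "{{ lookup('template', 'awx_cr_molecule.yml.j2') | from_yaml }}"
        wait: yes
        wait_timeout: 1500
        wait_condition:
          type: Running
          reason: Successful
          status: "True"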

@moonrail (Author)
Ah, now I understand the molecule testing setup - I did not fully grasp it before.
Increased wait_timeout to 1500s as suggested.

@moonrail (Author)
@rooftopcellist @kdelee

Initial DB migrations take about 5 min on my local minikube setup and on our K8s test environment.
The duration of DB migrations can vary depending on the setup, so it could be considerably longer.
I had tested these changes before opening this PR on our setup with an external DB that had no pending migrations, so I didn't notice how much time they require.

The currently proposed livenessProbe would restart the container after about 95 seconds, so the deployment cannot succeed when migrations are required.

The readinessProbe, however, is not problematic as it is, but I had misunderstood failureThreshold as being absolute. As per the docs, readinessProbes are executed in a loop until a container is ready, not only once:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes

I think the current migration execution & waiting on container startup cannot be kept as-is when AWX is run in K8s.

With the current AWX startup scripts, a livenessProbe cannot be safely implemented without waiting an unreasonably long time before even probing (what is safe - 15 min, 25 min, 40 min? That cannot be the solution).

Startup-Scripts:
Web: https://github.com/ansible/awx/blob/21.0.0/tools/ansible/roles/dockerfile/files/launch_awx.sh
Task: https://github.com/ansible/awx/blob/21.0.0/tools/ansible/roles/dockerfile/files/launch_awx_task.sh

Both use: https://github.com/ansible/awx/blob/21.0.0/tools/ansible/roles/dockerfile/files/wait-for-migrations

If migrations were only applied within an initContainer, liveness- & readinessProbes would only become relevant once migrations had been applied successfully, as the actual container would not be started until the initContainer succeeded.

An initContainer would have to be added to task & web, as both can run migrations on their startup and both are started simultaneously.
The command of the initContainer could be awx-manage migrate || /usr/bin/wait-for-migrations.
I'm currently not sure whether Django returns a non-zero return code when another migration is already running, as the docs do not mention it: https://docs.djangoproject.com/en/4.0/ref/django-admin/#django-admin-migrate
The command example above assumes it does return non-zero on blocking migrations.
It does return 0 when no migrations are required.

Waiting for migration completion is already done on startup of web & task, so every initContainer run after the first one could exit without waiting for migrations to complete.
But I'd suggest using the wait-for-migrations script instead (as seen in the example command above), as then livenessProbes would not have to account for migrations at all.
Everything would be done by the time the actual containers are started.
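
A rough sketch of such an initContainer for both the web and task pod templates (the image variable is illustrative; the CA volume mounts mentioned above would be added here as well if needed):

    initContainers:
      - name: database-migration
        image: '{{ _image }}'  # illustrative - reuse whatever image the web/task containers use
        command:
          - sh
          - -c
          - awx-manage migrate || /usr/bin/wait-for-migrations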

What do you think?

@kdelee (Member) commented May 26, 2022

@moonrail your idea of running migrations in an init container sounds good to me. @shanemcd and @Zokormazo may be interested in this suggestion.

It sounds like you would be interested in implementing that in this PR so that the readiness/liveness probes can work sanely?

@kdelee (Member) commented May 26, 2022

A really cool thing to do would be, while the migrations init container is running, to have another container serving that "migrations are running" screen that the service points to. Then once the actual app is up (readiness checks pass), we patch the service to point to the controller API. I know this contradicts what I said the other day about "not caring" if I see the migration screen... I'm still trying to figure out the best compromise to give human users feedback that the app is still "installing" (i.e. not usable yet, but ostensibly healthy) and give automation the right feedback that the app is not yet ready.
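
A very rough sketch of what that Service patch could look like from the operator side (the service name and selector label here are hypothetical):

    - name: Point the service at the real AWX web pods once readiness checks pass
      k8s:
        state: patched
        kind: Service
        name: '{{ ansible_operator_meta.name }}-service'   # hypothetical name
        namespace: '{{ ansible_operator_meta.namespace }}'
        definition:
          spec:
            selector:
              app.kubernetes.io/name: '{{ ansible_operator_meta.name }}-web'   # hypothetical label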

@moonrail (Author)
@kdelee
Yes, I could implement my suggestion, but your idea of showing a migration screen exceeds my knowledge of AWX, so I'm not sure I'd be able to implement it as desired.

@kdelee (Member) commented May 31, 2022

@moonrail I think just doing the init container + the readiness/liveness checks is a great start and the best way to begin. If it is desirable to show some "awx is starting" screen before those pass, that seems like a distinct request.

@moonrail (Author) commented Jun 1, 2022

@kdelee
Sounds good, I'll try to get it added to this PR this week.

@rooftopcellist (Member)
@moonrail I agree with your idea to run the migrations in an initContainer; I think that is a great approach. FYI, the CI failure was resource-related and was fixed here. Rebasing against devel will pull in the fix.

@moonrail (Author) commented Jun 3, 2022

@kdelee @rooftopcellist
Pushed the discussed changes and a little cleanup of the awx-operator-based migration handling.
Updated this PR's description with an explanation of these changes.

@kdelee (Member) commented Jun 21, 2022

@Zokormazo any way we can get a downstream test of this change?

@djyasin (Member) commented Jul 1, 2022

@moonrail Hello! We are working on merging this PR. The branch will need to be rebased prior to that. Would you mind running that rebase on your end? If not, we can certainly work toward an alternative resolution. Thank you for your time!

…accept connections, Added basic liveness- & readinessProbe for web & task containers

Fixes #926
@moonrail (Author) commented Jul 4, 2022

Hi @djyasin,
the branch is now rebased against devel.
I did not run the tests again, but the changes look the same as before, even though some of the Ansible DB-migration code this PR removes has moved from roles/installer/tasks/main.yml to roles/installer/tasks/installer.yml.

@djyasin (Member) commented Jul 5, 2022

Thank you so much @moonrail!! I will try running it through tests on our end now that this has been rebased.

@shanemcd (Member) commented Jul 5, 2022

I'm a little worried about this change. What happens when scaling the deployment to more than 1? This is going to cause the migrations to run every time a new pod is created, right?

@moonrail (Author) commented Jul 5, 2022

@shanemcd
Correct. Any pod launch will run the database-migration initContainer.
Django checks for required migrations before doing anything and exits gracefully with rc 0 when none are pending.
From my tests this seems to work perfectly fine in Django.

When you have e.g. 3 replicas and you upgrade AWX, you'll get a 4th pod that runs the migrations on the database.
The current pods will continue to run until the new pod is ready (and therefore the migrations have been applied) and are then replaced as well by K8s.
So yes, there will be running instances while migrations are being applied when upgrading AWX on a running cluster.
I know of other Django apps, e.g. NetBox, that are handled this way as well when run in K8s. Depending on the migrations there can be errors while upgrading to a new version, but I'd argue that an upgrade of a database and application to a new version is fairly expected not to be 100% seamless.
In the case of AWX, upgrading is even more "problematic" when jobs are running.
That is why on our end an AWX upgrade includes draining (disabling control instances in AWX), waiting for all jobs in instance groups to complete and only then upgrading AWX.

@kdelee (Member) commented Jul 5, 2022

Re: draining AWX before upgrade being problematic if jobs are running:
Yes! This is a problem. The introduction of a "maintenance mode" that control nodes could be placed in to drain them before an upgrade has been discussed. Right now it is left to savvy administrators to figure out (as you have).

To @shanemcd's concern: what about a fresh install with replicas: 2 or greater? Is there a race condition over which pod's init container decides there are migrations to run? Is there a way to ensure that only one pod gets the "lock" on running migrations when both init containers reach that stage at the same time?

@moonrail (Author) commented Jul 6, 2022

@kdelee
This should be no problem.
We certainly do not experience problems with a similar setup for other Django apps in K8s.
Django runs migrations in transactions by default, so PostgreSQL ensures that only one migration runs at a time; the potential race condition (the case where Django checks for migrations, does not see any running and then tries to run them) is therefore handled safely.
The initContainer's startup command awx-manage migrate || /usr/bin/wait-for-migrations will cause pods that do not hold the "migration lock" to wait for running migrations to finish.

    @@ -77,6 +77,7 @@
         apply: yes
         definition: "{{ lookup('template', 'deployment.yaml.j2') }}"
         wait: yes
    +    wait_timeout: 600
Member:

Please remove this. I feel like it's more likely to cause problems than help anything.

Author:

The default is 120s:
https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_module.html#parameter-wait_timeout
The initial migration of the AWX database takes longer than 120s, so subsequent tasks would run without the AWX pods being ready.
I cannot guess how long it takes in individual environments, as it depends on circumstances such as throughput, I/O and so on.

Member:

Given that it may be hard to guess, I wonder if we want to make this a variable we could set.
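
Something along these lines, for example (the variable name and default here are placeholders, with the default living in the role's defaults):

    apply: yes
    definition: "{{ lookup('template', 'deployment.yaml.j2') }}"
    wait: yes
    wait_timeout: "{{ deployment_wait_timeout | default(600) }}"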

    httpGet:
      path: /api/v2/ping/
      port: 8052
    timeoutSeconds: 2
Contributor:

Shouldn't this be configurable from outside, with predefined default values?

Contributor:

#1188 - something like that.
That way the user will have more control over the deployment if needed.
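
For example, the probe settings could be templated from role variables with sane defaults (the variable names here are placeholders, not necessarily the ones used in #1188):

    readinessProbe:
      httpGet:
        path: /api/v2/ping/
        port: 8052
      initialDelaySeconds: "{{ web_readiness_initial_delay | default(20) }}"
      periodSeconds: "{{ web_readiness_period | default(10) }}"
      failureThreshold: "{{ web_readiness_failure_threshold | default(3) }}"
      timeoutSeconds: "{{ web_readiness_timeout | default(2) }}"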

Author:

@erz4
I agree - it just was not the aim of my PR.
I wanted this PR to be the basic minimum to add this functionality ASAP, as every change to the deployment causes downtimes & job failures.

But since this PR is still open after nearly 8 months now, that obviously did not go as desired and the problem still persists.

If your PR is merged, this one can be closed. We just need these checks.

Contributor:

@moonrail
Did you test it locally with molecule?

My probes are failing in the docker-desktop env but working on a k8s cluster.

Author:

I've tested with molecule, but also in our test environment (K8s).
Both ran successfully on the last try (several months ago).

Contributor:

Yup, mine is failing in the migration phase as mentioned above.

So what is the status now? Why is this PR stuck?

@erz4 (Contributor) commented Jan 22, 2023

@rooftopcellist @kdelee

[quoting @moonrail's earlier comment above about running migrations in an initContainer]

We discussed this internally and think that a startup probe will do the job for that case.

Something like this for web & task:

          startupProbe:
            exec:
              # Succeed only once "awx-manage showmigrations" no longer lists unapplied ("[ ]") migrations.
              command: ["/bin/sh", "-c", "! /usr/bin/awx-manage showmigrations | grep -q '\\[ \\]'"]
            initialDelaySeconds: 5
            periodSeconds: 3
            failureThreshold: 900
            successThreshold: 1
            timeoutSeconds: 2

@TheRealHaoLiu self-assigned this Jan 25, 2023
@fciava commented Apr 4, 2023

Hello!
Any news on this? It would be a nice addition.
Thanks

@Toothwitch

As this is still an open issue: is there any foreseeable activity/progress on this?
I'd like to see this basic feature implemented.

@moonrail (Author) commented Mar 5, 2024

Yeah, no, this is not working. There seems to be no real interest on the maintainers' side to implement this, so I'll not bother anymore and will close this PR, as I am no longer willing to make any changes to it.

Who knows why this basic feature is not desired, maybe to reserve it for other downstream channels, but whatever.

@moonrail closed this Mar 5, 2024