Describe the bug
When a worker fails, a task will move into the archived state without retrying if asynq.MaxRetries(0) is set on the task. Setting it to any positive value allows that many worker failures: e.g. setting asynq.MaxRetries(3) allows the task to be picked up again up to 3 times when worker failures are the cause.
Environment (please complete the following information):
OS: linux
asynq package version: 0.25.1
Redis version: 7.4.2
To Reproduce
Steps to reproduce the behavior (Code snippets if applicable):
1. create a task with asynq.MaxRetries(0)
2. run the task
3. kill the worker before the task completes (e.g. ctrl + c)
4. check the task state in redis: hgetall "asynq:{default}:t:8abc3be7-2f6f-4ac4-a610-1c0a23188d96"
5. start the worker back up
6. after the lease expires, the task will move to the archived state (can check with hgetall)
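The enqueue side of the steps above can be sketched as follows. This is a minimal sketch, not the exact reproduction code: the Redis address, task type name, and payload are assumptions for illustration.

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// Assumes a local Redis at the default address.
	client := asynq.NewClient(asynq.RedisClientOpt{Addr: "localhost:6379"})
	defer client.Close()

	// Task type and payload are hypothetical placeholders.
	task := asynq.NewTask("example:task", []byte(`{"id": 1}`))

	// With MaxRetries(0), a worker crash mid-task archives the task after
	// its lease expires instead of re-queueing it (the bug described above).
	info, err := client.Enqueue(task, asynq.MaxRetries(0))
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("enqueued task id=%s queue=%s", info.ID, info.Queue)
}
```

Killing the worker process while this task is active, then restarting it, should reproduce the archived state once the lease expires.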
Expected behavior
Tasks are picked back up after a worker failure; the retry count is incremented only when a task returns an error or panics.
Screenshots
output from redis cli (payloads and job name redacted)
Additional context
For now I am bumping the max retry value to work around this, but I was expecting worker failures not to impact the retry count.