Describe the bug
When you create a scheduler with asynq.NewScheduler, register a scheduled task, and start it, everything works fine. However, when the same code is deployed in a distributed manner (i.e., multiple instances are started), the same scheduled task is registered multiple times. The issue doesn't occur when only one instance is running.
Environment (please complete the following information):
OS: [e.g. Linux]
asynq package version: v0.25.1
Redis/Valkey version: 7.2.5
To Reproduce
Steps to reproduce the behavior:
Create a scheduler with asynq.NewScheduler in your application.
Register a scheduled task.
Start multiple instances of the application in a distributed environment.
Observe that the same task is registered multiple times across instances.
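The steps above can be sketched as a minimal Go program. This is an illustration, not the reporter's actual code: the task type "report:generate", the cron spec, and the Redis address are assumptions, and running it requires a reachable Redis and the asynq module. Each replica that runs this same binary registers its own copy of the entry:

```go
package main

import (
	"log"

	"github.com/hibiken/asynq"
)

func main() {
	// Every replica executes this identical code, so every replica
	// ends up with its own scheduler registering the same entry.
	scheduler := asynq.NewScheduler(
		asynq.RedisClientOpt{Addr: "localhost:6379"}, // assumed address
		nil,
	)

	// "report:generate" is a hypothetical task type for illustration.
	task := asynq.NewTask("report:generate", nil)
	entryID, err := scheduler.Register("*/5 * * * *", task)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("registered entry %s", entryID)

	// With N replicas, N independent schedulers each enqueue the task
	// on every tick, producing N copies per interval.
	if err := scheduler.Run(); err != nil {
		log.Fatal(err)
	}
}
```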
Expected behavior
The scheduled task should only be registered once, regardless of the number of instances running in a distributed environment.
Screenshots
N/A
Additional context
This issue only arises when multiple instances of the application are running simultaneously. The task should be registered only once across all instances to avoid duplication.
Two instances: [screenshot not included]
I also believe that when multiple instances are running, the scheduled task should not be registered repeatedly, no matter which instance registers it, and it should only be stopped when the last instance shuts down. I think this approach matches what running multiple replicas requires.
guilinonline changed the title from "[BUG] Description of the bug" to "[BUG] The issue of registering and destroying scheduled tasks when running multiple replicas" on Jan 22, 2025.
I think I understand now. By adding a TaskID to the scheduler.Register options, even when multiple instances are running, only one copy of the scheduled task will produce messages to the queue, and the load will be balanced across the instances. Is my understanding correct?
Reading through the code - I think that might help a little bit, but it can still cause duplicates in a flow like:
Scheduler A: publishes task for instant K
Worker: completes task very quickly
Scheduler B: publishes task for same instant K <- there is no guarantee that the publishing for a certain instant will be synchronized across schedulers
It can also cause problems if the task wasn't completed in time before a second one is published:
Scheduler A: publishes task for instant K
Worker: busy, does not process
Scheduler A: publishes task for instant K+1 <- Not actually enqueued since ID exists already.
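Both failure modes above can be sketched with a toy model of ID-based deduplication. The in-memory set below stands in for the broker's uniqueness check on task IDs (all names are illustrative, and this deliberately omits Redis so it is self-contained):

```go
package main

import "fmt"

// pending stands in for the broker's set of task IDs currently
// enqueued or in progress: an enqueue with an existing ID is
// rejected, and completing a task frees its ID.
type pending map[string]bool

// enqueue reports whether the task was accepted (ID not yet present).
func (p pending) enqueue(id string) bool {
	if p[id] {
		return false // duplicate ID, rejected
	}
	p[id] = true
	return true
}

// complete frees the task's ID, as finishing a unique task does.
func (p pending) complete(id string) { delete(p, id) }

func main() {
	// Race 1: the worker completes quickly, so a second scheduler's
	// enqueue for the same instant K is accepted again.
	p := pending{}
	p.enqueue("report")        // scheduler A, instant K
	p.complete("report")       // worker finishes fast
	dup := p.enqueue("report") // scheduler B, same instant K
	fmt.Println(dup)           // true: the duplicate slipped through

	// Race 2: the worker is busy, so the enqueue for instant K+1 is
	// rejected even though it is legitimate new work.
	q := pending{}
	q.enqueue("report")         // instant K; worker never picks it up
	lost := !q.enqueue("report") // instant K+1
	fmt.Println(lost)           // true: K+1 was silently dropped
}
```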
Perhaps the correct solution would be to have only one scheduler instance publish tasks to the queue? You can acquire a lock (over Redis, too) to synchronize between the instances.