One process is the producer for a Bull queue (JavaScript), and three processes are consumers of the same queue.
The data queued by the producer are paired and in chronological order.
Each data object looks like this: {order:'1-1'}, {order:'1-2'}, {order:'2-1'}, {order:'2-2'}, ...
{order:'1-1'} and {order:'1-2'} are a pair, and each item is enqueued in chronological order.
They should also be dequeued chronologically.
If they are dequeued as {order:'1-2'}, {order:'1-1'}, the result is useless.
When I run a single consumer process, the dequeued data are chronological.
But when multiple processes consume the queue, the dequeued data are not in chronological order.
It seems the data dequeued by multiple processes are not FIFO; they come out of order.
So I tried to limit the number of processes that can access the queue at a time.
I thought the concurrency setting could be used for this:
const queue = new Queue('test', {
    redis: {
        host: 'localhost',
        port: 6379
    },
    settings: {
        concurrency: 1
    }
});
But the result is the same as before: the paired data are not output in chronological order.
When I checked the queue status, the active count was 3.
That means three jobs were active across the three processes, while my goal is for only one process to access the queue at a time.
I think the concurrency setting is not working, or I am applying it incorrectly.
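For reference, in Bull the concurrency value is normally passed as the first argument to queue.process(), not via queue settings, and it applies per process: three consumer processes each running at concurrency 1 can still have three jobs active overall, which matches the active count of 3. Here is a sketch (the bull package and a local Redis server are assumed; startConsumer is a hypothetical helper and is not invoked here):

```javascript
// Job handler: pure logic, shown separately so it can be unit-tested.
function handleJob(job) {
  // job.data.order is e.g. '1-1'; return it so results can be inspected
  return Promise.resolve(job.data.order);
}

// Consumer setup; requires the 'bull' package and a local Redis server,
// so it is only invoked in a real deployment.
function startConsumer() {
  const Queue = require('bull'); // assumes bull is installed
  const queue = new Queue('test', {
    redis: { host: 'localhost', port: 6379 }
  });
  // concurrency = 1: THIS process handles at most one job at a time,
  // but other consumer processes still pull jobs in parallel.
  queue.process(1, handleJob);
  return queue;
}
```

Note that even with concurrency 1 everywhere, Bull makes no cross-process ordering promise for job completion.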
Here is a summary of my questions:
In a Bull queue (Redis), are dequeued data not in chronological order when multiple processes consume the queue?
Or is it fundamentally impossible to keep chronological order across multiple processes, because each process's processing time differs, so output will be out of order even though the jobs are read from the queue in chronological order?
If so, is there any solution that preserves chronological order?
How do I configure Bull so that only one process can access the queue at a time?
According to my investigation, a Bull queue does not guarantee chronological order.
A single process and multiple processes give the same result.
If you want chronological order, use a Redis list instead of a Bull queue.
It works well with both a single process and multiple processes.
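A minimal sketch of the list-based approach, assuming an ioredis-style client is passed in (the Redis calls are defined but not executed here); the isChronological helper shows the ordering check on the '1-1', '1-2', ... keys:

```javascript
// Producer side: LPUSH pushes to the head, BRPOP pops from the tail,
// so together the list behaves as a FIFO queue.
async function produce(redis, items) {
  for (const item of items) {
    await redis.lpush('test:list', JSON.stringify(item));
  }
}

// Consumer side: blocks until an item is available. Each BRPOP is
// atomic, but with multiple consumer processes the interleaving of
// their results is still not guaranteed.
async function consumeOne(redis) {
  const [, raw] = await redis.brpop('test:list', 0);
  return JSON.parse(raw);
}

// Pure helper: verify that dequeued keys like '1-1', '1-2', '2-1', ...
// came out in chronological order.
function isChronological(orders) {
  const keys = orders.map(o => o.split('-').map(Number));
  for (let i = 1; i < keys.length; i++) {
    const [pa, sa] = keys[i - 1];
    const [pb, sb] = keys[i];
    if (pb < pa || (pb === pa && sb <= sa)) return false;
  }
  return true;
}
```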
The current situation is that I have an application that scales horizontally with one SQL database. Periodically, a background process runs, but I only want one invocation of this background process running at a time. I have tried to accomplish this by using a database row and locking, but I am stuck. The requirement is that only one batch job should complete successfully per day.
Currently I have a table called lock which has three columns: timestamp, lock_id, status. Status is an enum with three values: 0 = not running, 1 = running, 2 = completed.
The issue is that if a batch job fails and status is equal to 0, how can I make sure that only one background process will retry? How do I guarantee that only one background process is running in the retry scenario?
In an ideal world, I would do a SELECT statement that checks the STATUS in the locking table; if status = 0, meaning not running, then start the background job and change status to 1 = running. However, if all horizontally scaled processes do this at the same time, is it guaranteed that only one executes?
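A separate SELECT followed by an UPDATE is indeed racy; the standard fix is to do the check and the state change in one conditional UPDATE, so the database's row lock decides the race and exactly one process sees an affected row. Here is a sketch, assuming a generic SQL client with a query(sql, params) method returning { rowCount } (the fakeDb stand-in is for illustration only):

```javascript
// One atomic statement: only a row still in status 0 can be claimed.
const CLAIM_SQL =
  'UPDATE lock SET status = 1, timestamp = CURRENT_TIMESTAMP ' +
  'WHERE lock_id = $1 AND status = 0';

async function tryAcquireLock(db, lockId) {
  const result = await db.query(CLAIM_SQL, [lockId]);
  // rowCount 1: we won the race; 0: someone else already set status = 1.
  return result.rowCount === 1;
}

// Tiny in-memory stand-in for the database, for illustration only.
function fakeDb(initialStatus) {
  let status = initialStatus;
  return {
    async query() {
      if (status === 0) { status = 1; return { rowCount: 1 }; }
      return { rowCount: 0 };
    }
  };
}
```

Whichever process gets rowCount 1 runs the batch job; everyone else backs off.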
Thanks!
Technologies available: Autosys, Informatica, Unix scripting, Database (available via informatica)
How our batch currently works: file watchers look for a file called "control.txt", which gets deleted when a feed starts processing. It is recreated once processing completes, which allows one of the waiting "control" Autosys jobs to pick up the control file and begin processing data feeds one by one.
However, the system has grown large, some feeds have become more important than others, and we're looking at ways to improve our scheduler to prioritize some feeds over others.
With the current design, where a single file decides when the next feed runs, that can't be done, and I haven't been able to come up with a simple solution to make it happen.
Example:
1. Feed A is processing
2. Feed B, Feed C, Feed X, Feed F come in while Feed A is processing
3. Need to ensure that Feed B is processed next, even though C, X, F are ready.
4. C, X, and F have a lower priority than A and B, but share the same priority among themselves and can be processed in any order
A very interesting question. One thing I can think of is an extra Autosys job with a shell script that copies the files in a certain order. Like:
Create an input folder, e.g. StageFolder
Let's call your current Autosys input folder the InputFolder
Have Autosys monitor StageFolder and, every minute, run OrderedFileCopyScript.sh for any file present
OrderedFileCopyScript.sh should copy one file from StageFolder to InputFolder, in the desired order, only if InputFolder is empty
I hope I made myself clear.
I oppose the use of Autosys for this requirement! Wrong tool!
I don't know all the details, but consider an application with the usual reference tables.
In this case you should make use of a feed reference table that includes relative priorities.
I would suggest creating (or reusing) a table to be loaded by the successor job of the file watcher:
1) The table contains the unprocessed files with their corresponding priorities; use this table to process the files based on priority.
2) Remove/archive the entries once done.
3) Have another job do this, running like a daemon with start_times/run_window.
This gives the flexibility to deal with changes in priorities and keeps the overall design simple.
I like this article: http://technet.microsoft.com/en-us/library/dd576261(v=sql.100).aspx because of the RECEIVE TOP (10000) into a table variable. Processing a table variable with 10000 messages would give me a giant boost in performance.
receive top (10000) message_type_name, message_body, conversation_handle
from MySSBLabTestQueue
into #receive
From my reading, RECEIVE provides messages for a single conversation_handle. I have 200+ stores, all sending messages with the same message type and contract to the same server. Can I implement the server so it gets all the messages from these stores in a single call to RECEIVE?
Thanks
A target can consolidate multiple conversations into a few conversation groups using MOVE CONVERSATION. RECEIVE restricts the result set to one single conversation group, so moving many individual conversations into a single group can produce the bigger result sets you desire.
For the record, initiators can also consolidate conversations using MOVE CONVERSATION; there is nothing role-specific here. But initiators can additionally use the RELATED_CONVERSATION_GROUP clause of BEGIN DIALOG to start the conversation directly in the desired group, achieving consolidation and thus bigger result sets without having to use MOVE. This is useful because you can simply reverse the roles in the app: instead of the stores starting the dialogs with the central server, have the central server start the dialogs with each store (thus reversing the roles), and the central server can start the dialogs in as few conversation groups as it likes, even one. This removes the need to issue MOVE CONVERSATION.
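To make the two options concrete, here are the T-SQL statements involved, held as strings an application (e.g. via a SQL Server driver) could execute; the service, contract, and variable names are hypothetical:

```javascript
// Target-side consolidation: move an accepted conversation into a
// shared group, so one RECEIVE can drain messages from many stores.
const moveConversationSql = `
  MOVE CONVERSATION @handle TO @group_id;
`;

// Initiator-side consolidation: start the dialog directly in the
// desired group, so no MOVE is needed afterwards.
const beginDialogSql = `
  BEGIN DIALOG @handle
    FROM SERVICE [CentralService]
    TO SERVICE 'StoreService'
    ON CONTRACT [StoreContract]
    WITH RELATED_CONVERSATION_GROUP = @group_id, ENCRYPTION = OFF;
`;
```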
Redis supports pub/sub.
How can I have a client retrieve the last value and subscribe to changes, in such a way that I don't miss messages?
Here is the problem with GET + SUBSCRIBE:
1. I GET the last value from Redis.
2. I SUBSCRIBE to changes.
3. Before I store the last value from step 1, I receive an update, and therefore update my cache with that update.
4. I naively proceed to store the value from step 1, overwriting the value from step 3.
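One common fix, sketched under the assumption that the publisher attaches a monotonically increasing version to each value (stored alongside the key and included in every published message): subscribe first, then GET, and never let an older version overwrite a newer one. That makes the interleaving in steps 1-4 harmless:

```javascript
// Version-aware cache: apply() is called both for pub/sub messages and
// for the initial GET result, in whatever order they happen to arrive.
function makeCache() {
  let value;
  let version = -1;
  return {
    apply(v, ver) {
      // Only a strictly newer version may replace the current value,
      // so a stale GET result can never clobber a fresher update.
      if (ver > version) { value = v; version = ver; }
      return { value, version };
    },
    get value() { return value; },
    get version() { return version; }
  };
}
```

Usage: SUBSCRIBE, route every message through cache.apply(msg.value, msg.version), then GET the stored value+version and pass it through the same apply().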
Node.js & Redis:
I have a LIST (users:waiting) storing a queue of users waiting to join games.
I have a SORTED SET (games:waiting) of games waiting for users. This is updated by the servers every 30s with a new date; that way I can ensure that if a server crashes, its game is no longer used. If the server is running and its game fills up, it removes the game from the sorted set.
Each game has a SET (game:id:users) containing the users that are in it. Each game can accept no more than 6 players.
Multiple servers are using BRPOP to pick up users from the LIST (users:waiting).
Once a server has a user id, it gets the waiting game ids, then runs SCARD on each game's game:id:users SET. If the result is less than 6, it adds the user to the set.
The problem:
If multiple servers are doing this at once, we could end up with more than 6 users being added to a set. For example, if one server runs SCARD and, immediately after, another runs SADD, the count in the set will have increased but the first server won't know.
Is there any way of preventing this?
You need transactions, which Redis supports: http://redis.io/topics/transactions
In your case in particular, you want to pay attention to the WATCH command: http://redis.io/topics/transactions#cas
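A sketch of that WATCH/MULTI pattern, assuming an ioredis-style client: WATCH makes EXEC return null if game:id:users changed between the SCARD and the EXEC, so the "check then add" pair becomes effectively atomic, and on conflict we simply retry. The fakeRedis stand-in below mimics WATCH semantics in memory, for illustration only:

```javascript
async function addUserIfRoom(redis, gameId, userId, maxPlayers = 6) {
  const key = `game:${gameId}:users`;
  for (let attempt = 0; attempt < 5; attempt++) {
    await redis.watch(key);               // invalidate tx if key changes
    const count = await redis.scard(key); // current number of players
    if (count >= maxPlayers) {
      await redis.unwatch();
      return false;                       // game is full
    }
    const result = await redis.multi().sadd(key, userId).exec();
    if (result !== null) return true;     // EXEC succeeded, user added
    // result === null: another server modified the set; retry
  }
  return false; // gave up after repeated conflicts
}

// In-memory stand-in honoring WATCH semantics (single-client only).
function fakeRedis() {
  const sets = new Map();
  const versions = new Map();
  let watched = null;
  const get = k => sets.get(k) || sets.set(k, new Set()).get(k);
  const ver = k => versions.get(k) || 0;
  return {
    async watch(k) { watched = { key: k, version: ver(k) }; },
    async unwatch() { watched = null; },
    async scard(k) { return get(k).size; },
    async sadd(k, v) { get(k).add(v); versions.set(k, ver(k) + 1); },
    multi() {
      const self = this;
      const ops = [];
      return {
        sadd(k, v) { ops.push([k, v]); return this; },
        async exec() {
          if (watched && ver(watched.key) !== watched.version) {
            watched = null;
            return null; // key changed since WATCH: abort
          }
          watched = null;
          for (const [k, v] of ops) await self.sadd(k, v);
          return ops.map(() => [null, 1]);
        }
      };
    }
  };
}
```

A Lua script run via EVAL would achieve the same atomicity without the retry loop.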