I have a system A (generating a message every second) connected via WSO2 EI to a system B (a REST API).
System B does not need that much data, and it would be enough to send only every 60th message (a kind of sampling, so to speak).
Could you please tell me how to send only every x-th message in WSO2 EI?
Either time-based (e.g. every 60 seconds) or count-based (every 100th message)?
Thanks a lot!
Not sure, but maybe the following will work for you:
Store a counter value somewhere (e.g. in a database)
When system A generates a message, read that value from the store (database)
Increase that value and store it again (e.g. in the database)
Check if counter modulo 60 = 0
If so, send to system B; otherwise do nothing
This should be possible using the DB mediators (DBLookup / DB Report), the script mediator and the filter mediator.
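For the count-based variant, the modulo check itself is trivial; here is a minimal sketch of the logic in Python (the in-memory counter stands in for the database-backed one, and send_to_system_b is a hypothetical forwarding call):

counter = 0  # in WSO2 EI this value would live in the database

def should_forward(x=60):
    """Return True for every x-th message (count-based sampling)."""
    global counter
    counter += 1          # read, increment, store, as in the steps above
    return counter % x == 0

# in the message flow:
# if should_forward(60):
#     send_to_system_b(message)   # hypothetical forwarding call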
Hope that helps.
Whenever I receive a message from a queue, I want to be able to know somehow what the sequence number of the message is. This sequence number should be equal to the number of messages that were sent to the queue from the moment it was created, plus 1. So the first message sent to the queue would have sequenceNumber = 1, the second message would have sequenceNumber = 2, etc. So basically it should work similarly to a database id sequence that is incremented every time we insert a new row.
Does RabbitMQ have some kind of mechanism to support that?
Of course, I could use a database for that, but I need to avoid DB usage if that's possible.
Any help would be really appreciated,
Thanks
I've got a quick, simple question.
Assume that if the server receives 10 messages from a user within 10 minutes, the server sends a push email.
At first I thought it was very simple using Redis:
incr("foo"); expire("foo", 60*10)
and in Java, handle the occurrence count like below (jedis.get returns a String, so it has to be parsed):
if (Integer.parseInt(jedis.get("foo")) >= 10) { sendEmail(); jedis.del("foo"); }
But imagine the user sends one message in the first minute and 8 messages in the 10th minute.
Then the key expires, and the user sends 3 more messages in the next minute.
The Redis key will be created again with the value 3, which will not trigger sendEmail(), even though the user actually sent 11 messages within 2 minutes.
We're going to use Redis, and we don't want to store receive-time values in it.
Is there any solution?
So, there are two ways of solving this: one optimizes for space and the other for speed (though really the speed difference should be marginal).
Optimizing for Space:
Keep up to 9 different counters, foo1 ... foo9. Basically, we keep one counter for each of the up to 9 messages that can arrive before we email the user, and let each one expire as it hits the 10-minute mark. This works like a circular queue. Now do this (in Python for simplicity, assuming we have a connection to Redis called r):
new_created = False
for i in range(1, 10):
    var_name = 'foo%d' % i
    # Create the first missing counter with a 10-minute expiry
    # (at most one new counter per incoming message).
    if not (new_created or r.exists(var_name)):
        r.set(var_name, 0)
        r.expire(var_name, 600)
        new_created = True
    if not r.exists(var_name):
        continue
    # INCR returns the new value, so it can be tested directly.
    if r.incr(var_name) >= 10:
        send_email(user)
        r.delete(var_name)
If you go with this approach, put the above logic in a Lua script instead of the example Python, and it should be quite fast. Since you'll be storing at most 9 counters per user, it will also be quite space efficient.
Optimizing for Speed:
Keep one Redis Sorted Set per user. Every time a user sends a message, add an entry to their sorted set with a score equal to the timestamp and an arbitrary (unique) member. Then just do a ZCOUNT(now - 10 minutes, now) and send an email if the result is at least 10. Then ZREMRANGEBYSCORE(-inf, now - 10 minutes) to trim entries older than the window. I know you said you didn't want to keep timestamps in Redis, but IMO this is a better solution, and you're going to have to hold some variant of timestamps somewhere no matter what.
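For reference, here is a minimal sketch of the sorted-set approach in Python with redis-py (the key names, the send_email stub and the unique-member scheme are assumptions; trimming old entries first lets ZCARD stand in for the ZCOUNT):

import time
import redis

r = redis.Redis()

def send_email(user_id):
    print('alerting', user_id)                       # stub for the real notification

def on_message(user_id, window=600, threshold=10):
    key = 'msg_times:%s' % user_id
    now = time.time()
    member = '%f:%d' % (now, r.incr(key + ':seq'))   # members must be unique
    r.zadd(key, {member: now})                       # score = timestamp
    r.zremrangebyscore(key, '-inf', now - window)    # drop entries older than 10 min
    if r.zcard(key) >= threshold:                    # everything left is recent
        send_email(user_id)
        r.delete(key)                                # reset the window after alerting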
Personally, I'd go with the latter approach because the space difference is probably not that big and the code can be done quickly in pure Redis, but it's up to you.
I’m here with another question this time.
I have an application which was built to move data from one database to another. It also deals with validation and comparison between the databases. When we start moving the data from source to destination it takes a while, as it always deals with thousands of records. We use a WCF service and SQL Server on the server side, and WPF on the client side.
Now I have a requirement to notify the user of the time it is going to take, based on the number of records in the source database (which is eventually what I am going to create in the destination database), right before the user starts this movement process.
Now my real question: what is the best way to do this and get an estimated time out of it?
Thanks, your help is appreciated.
If your estimates are going to be updated during the upload process, you can take the time already spent, divide it by the number of processed records, and multiply by the number of remaining records. This will give you a continuously updating estimate of the remaining time:
TimeSpan spent = DateTime.Now - startTime;
// TimeSpan division is not available on the .NET Framework, so work in ticks:
TimeSpan remaining = TimeSpan.FromTicks(
    spent.Ticks / numberOfProcessedRecords * numberOfRemainingRecords);
Or - at least I think the correct term is stateful. I've got a WCF service returning lots of data to me. So much data, in fact, that I'm exceeding maxReceivedMessageSize - and the program crashes.
I've come to realize that I need to split the calls to the DB. Instead of retrieving 5000 rows at once, I need to get rows 1-200, remember the id of row number 200, get the next 200 rows starting from that id, and so on.
Does anyone know how to do this? Is stateful (as in 'the opposite of stateless') the correct way to go? And how would I proceed? Could someone point me to an example?
You do not need a stateful service in your scenario. Stateful services are better avoided, especially when you would have to keep 5000 rows of state there.
The client should specify how much data it needs. So it could be a method GetRows(index, amount), where index is the start index and amount is the number of rows to get beginning from that index.
The client should also be able to ask the service about the state of the data. For example, for those 5000 rows you could have a method GetRowsState(index, amount) that returns the last-updated time for the rows; when the time you receive is later than (or different from) the one on the client, call GetRows again to refresh the client's data.
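To illustrate the paging pattern itself (the language and names here are just for the sketch; GetRows(index, amount) would be the real WCF operation):

def get_rows(data, index, amount):
    """Stand-in for the WCF operation GetRows(index, amount)."""
    return data[index:index + amount]

rows_on_server = list(range(5000))   # pretend server-side result set
index, batch = 0, 200
while True:
    page = get_rows(rows_on_server, index, batch)
    if not page:
        break                        # no more rows
    # ... process the page on the client ...
    index += len(page)               # the client remembers where it stopped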
I'm starting to use Redis, and I've run into the following problem.
I have a bunch of objects, let's say Messages in my system. Each time a new User connects, I do the following:
INCR some global variable, let's say g_message_id, and save INCR's return value (the current value of g_message_id).
LPUSH the new message (including the id and the actual message) into a list.
Other clients use the value of g_message_id to check if there are any new messages to get.
Problem is, one client could INCR the g_message_id, but not have time to LPUSH the message before another client tries to read it, assuming that there is a new message.
In other words, I'm looking for a way to do the equivalent of adding rows in SQL, and having an auto-incremented index to work with.
Notes:
I can't use the list indexes, since I often have to delete parts of the list, which invalidates them.
My situation in reality is a bit more complex, this is a simpler version.
Current solution:
The best solution I've come up with, and what I plan to do, is to use WATCH and transactions to perform an "autoincrement" myself.
But this is such a common use case in Redis that I'm surprised there is no existing answer for it, so I'm worried I'm doing something wrong.
If I'm reading correctly, you are using g_message_id both as an id sequence and as a flag to indicate new message(s) are available. One option is to split this into two variables: one to assign message identifiers and the other as a flag to signal to clients that a new message is available.
Clients can then compare the current / prior value of g_new_message_flag to know when new messages are available:
> INCR g_message_id
(integer) 123
# construct the message with id=123 in code
> MULTI
OK
> INCR g_new_message_flag
QUEUED
> LPUSH g_msg_queue "{\"id\": 123, \"msg\": \"hey\"}"
QUEUED
> EXEC
A possible alternative, if your clients can support it: you might want to look into the Redis publish/subscribe commands, e.g. clients could publish notifications of new messages and subscribe to one or more message channels to receive them. You could keep g_msg_queue as a backlog of the last N messages for new clients, if necessary.
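A minimal publish/subscribe sketch with redis-py (the channel name is an assumption):

import redis

# Publisher side: announce each new message id on a channel.
redis.Redis().publish('new_messages', '123')

# Subscriber side (typically a separate process); listen() blocks.
p = redis.Redis().pubsub()
p.subscribe('new_messages')
for event in p.listen():
    if event['type'] == 'message':
        print('new message id:', event['data'])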
Update based on comment: If you want each client to detect there are available messages, pop all that are available, and zero out the list, one option is to use a transaction to read the list:
# assuming the message queue contains "123", "456", "789"..
# a client detects there are new messages, then runs this:
> WATCH g_msg_queue
OK
> MULTI
OK
> LRANGE g_msg_queue 0 100000
QUEUED
> DEL g_msg_queue
QUEUED
> EXEC
1) 1) "789"
   2) "456"
   3) "123"
2) (integer) 1
Update 2: Given the new information, here's what I would do:
Have your writer clients use RPUSH to append new messages to the list. This lets the reader clients start at 0 and iterate forward over the list to get new messages.
Readers only need to remember the index of the last message they fetched from the list.
Readers watch g_new_message_flag to know when to fetch from the list.
Each reader client will then use LRANGE list start stop to fetch the new messages. Suppose a reader client has seen a total of 5 messages; it would run LRANGE g_msg_queue 5 14 to get the next 10 messages. Suppose 3 are returned, so it remembers the index 8. You can make the range as large as you want, and can walk through the list in small batches.
The reaper client should set a WATCH on the list and delete it inside a transaction; the EXEC will abort if another client modifies the list concurrently (note that WATCH detects writes, not plain reads).
When a reader client tries LRANGE and gets 0 messages it can assume the list has been truncated and reset its index to 0.
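A hedged sketch of the reader side in Python with redis-py (the key name matches the thread; the batch size is arbitrary):

import redis

r = redis.Redis()

def fetch_new(last_index, batch=10):
    """Return (new_messages, updated_index) for a reader client."""
    msgs = r.lrange('g_msg_queue', last_index, last_index + batch - 1)
    if not msgs and last_index > 0:
        return [], 0                 # list was truncated; start over at 0
    return msgs, last_index + len(msgs)

# messages, index = fetch_new(5)     # e.g. after having seen 5 messages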
Do you really need unique sequential IDs? You can use UUIDs for uniqueness and timestamps to check for new messages. If you keep the clocks on all your servers properly synchronized, then timestamps with one-second resolution should work just fine.
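For instance, a UUID plus a coarse timestamp needs no central coordination at all:

import time
import uuid

msg_id = uuid.uuid4().hex        # globally unique, no central counter needed
created_at = int(time.time())    # one-second resolution, as suggested above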
If you really do need unique sequential IDs, then you'll probably have to set up a Flickr-style ticket server to properly manage the central list of IDs. This would, essentially, move your g_message_id into a database with proper transaction handling.
You can simulate auto-incrementing a unique key for new rows by using DBSIZE to get the current number of keys and adding 1 in your code. Be aware, though, that this read-then-write sequence is not atomic: two clients can read the same DBSIZE concurrently and end up with the same key, so an atomic INCR counter (as used above) is the safer way to generate ids.