Can the target of a conversation receive messages from different initiators using the same conversation?

I like this article: http://technet.microsoft.com/en-us/library/dd576261(v=sql.100).aspx because it shows a RECEIVE TOP (10000) into a table variable. Processing a table variable with 10000 messages would give me a giant boost in performance.
receive top (10000) message_type_name, message_body, conversation_handle
from MySSBLabTestQueue
into #receive
From reading, the receive provides messages given a single conversation_handle. I have 200+ stores all sending messages with the same message type and contract to the same server. Can I implement the server to get all the messages from these stores on a single call to receive?
Thanks

A target can consolidate multiple conversations into a few conversation groups using MOVE CONVERSATION. RECEIVE restricts the result set to a single conversation group, so moving many individual conversations into one group yields the bigger result sets you're after.
For the record, initiators can also consolidate conversations using MOVE CONVERSATION; there is nothing role-specific here. But initiators can additionally use the RELATED_CONVERSATION_GROUP clause of BEGIN DIALOG to start the conversation directly in the desired group, achieving consolidation, and thus bigger result sets, without having to use MOVE. This is useful because you can simply reverse the roles in the app: instead of the stores starting the dialogs with the central server, have the central server start the dialogs with each store, and the central server can start those dialogs in as few conversation groups as it likes, even one. That removes the need to issue MOVE CONVERSATION at all.
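For illustration, a minimal sketch of the target-side consolidation; the variable names and the policy of moving each newly accepted conversation into one well-known group are assumptions, not something the linked article prescribes:
-- Sketch only: @ch is the handle of a newly accepted conversation,
-- @cgid an existing group on the same queue that we consolidate into.
DECLARE @ch   UNIQUEIDENTIFIER;
DECLARE @cgid UNIQUEIDENTIFIER;

-- Consolidate: move each new conversation into the shared group as it arrives.
MOVE CONVERSATION @ch TO @cgid;

-- One RECEIVE now drains messages from every conversation in that group.
DECLARE @receive TABLE (
    message_type_name   NVARCHAR(256),
    message_body        VARBINARY(MAX),
    conversation_handle UNIQUEIDENTIFIER);

RECEIVE TOP (10000)
       message_type_name, message_body, conversation_handle
FROM MySSBLabTestQueue
INTO @receive;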

How to set KOFAX KTM Server global variable value which will be initialized in Batch open, updated in SeparateCurrentPage & used in BatchClose?

I am trying to count a specific barcode value from Project.Document_SeparateCurrentPage and use it in Batch_Close to check whether the count is greater than 1; if it is, the batch should be sent to a specific queue with a specific priority. I used a global variable in the KTM Project Script to hold the count, initialized to 0 in Batch_Open. It worked fine in unit testing, but our automation team found that out of 20 similar batches, a few were sent to the special queue even though they used only one barcode (that queue should only be used when the count is greater than one).
I googled and found that KTM Server script events do not allow sharing information across different processes (https://docshield.kofax.com/KTM/en_US/6.4.0-uuxag78yhr/help/SCRIPT/ScriptDocumentation/c_ServerScriptEvents.html). I then tried to use a batch field to hold the barcode count, but I was unable to update its value from the Project.Document_SeparateCurrentPage function using pXRootFolder.Fields.ItemByName("BatchFieldName").Text = "GreaterThanOne". The logs show that the batch reads the first page three times and then errors out.
Any links would help. Thanks in advance.
As you mentioned, the different phases of batch/document processing can execute in different processes, so global variables initialized in one event won’t necessarily be available in others. Ideally you should only use global variables if their content can be set from Application_InitializeScript or Application_InitializeBatch, because these events occur in each separate process. As you’ve found out, you shouldn’t use a global variable for your use case, because Document_SeparateCurrentPage and Batch_Close for one batch may occur in different processes, just as the same process will likely execute those events for multiple batches.
Also, you cannot set batch fields from document level events for a related reason: any number of separate processes could be processing documents of a batch in parallel, so batch level data is read-only to document events. It is a bit unintuitive, but separation is a document level event even though it seems like it is acting on the whole batch. (The three times you saw is just an error retry mechanism.)
If it meets your needs, the simplest answer might be to use a barcode locator as part of normal extraction (not just separation), and assign to a field if needed. While you cannot set batch fields from document events, you can read document data from batch events. So instead of trying to track something like a count over the course of document events, just make sure whatever data you need is saved at a document level. Then in a Batch_Close you can iterate the documents and count/calculate whatever you need. (In your case maybe the number of locator alternatives for the barcode locator, across each document.)
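For illustration, a hedged sketch of such a Batch_Close aggregation; the document field name "BarcodeCount" and the routing placeholder are assumptions, not part of your project:
' Sketch: aggregate a per-document field when the batch closes.
' Reading document data from a batch event is allowed (it is the
' reverse direction, batch data from document events, that is read-only).
Private Sub Batch_Close(ByVal pXRootFolder As CASCADELib.CscXFolder, ByVal CloseMode As CASCADELib.CscBatchCloseMode)
   Dim i As Long
   Dim total As Long
   For i = 0 To pXRootFolder.DocInfos.Count - 1
      Dim xdoc As CASCADELib.CscXDocument
      Set xdoc = pXRootFolder.DocInfos.ItemByIndex(i).XDocument
      ' "BarcodeCount" is a hypothetical document field filled during extraction.
      total = total + Val(xdoc.Fields.ItemByName("BarcodeCount").Text)
   Next i
   If total > 1 Then
      ' route the batch to the specific queue / set the priority here
   End If
End Sub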

ABAP Program to notify Users X amount of days before user account will be disabled

I'm currently learning ABAP and trying to make an enhancement, but I've gotten stuck on how to build on top of existing code. I have a program that runs periodically via a background job and disables user accounts after X days of inactivity (in this case 90 days, based on USR02~TRDAT).
I want to add an enhancement that notifies users via their e-mail address (matching usr02~bname to usr21~bname, then using usr21~persnumber and usr21~addrnumber to look up adr6~smtp_addr, which gives the usr02~bname -> adr6~smtp_addr relationship) when their last logon date is 30, 15, 7, 5, 3, and 1 day away from the 90-day inactivity threshold, with a link to the SAP system to help them reactivate the account with ease.
I'm beginning to think that an enhancement might not be a good idea, and that I should rather create a new program and schedule it as a daily background job. Any guidance or information would be greatly appreciated...
Extract
CLASS cl_inactive_users_reader DEFINITION.
  PUBLIC SECTION.
    TYPES:
      BEGIN OF ts_inactive_user,
        user_name          TYPE syst_uname,
        days_of_inactivity TYPE int1,
      END OF ts_inactive_user.
    TYPES tt_inactive_users TYPE STANDARD TABLE OF ts_inactive_user WITH EMPTY KEY.
    CLASS-METHODS read_inactive_users
      IMPORTING
        min_days_of_inactivity TYPE int1
      RETURNING
        VALUE(result)          TYPE tt_inactive_users.
ENDCLASS.
Then refactor
REPORT block_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 90 ).

LOOP AT inactive_users INTO DATA(inactive_user).
  " block user
ENDLOOP.
And add
REPORT warn_inactive_users.

DATA(inactive_users) = cl_inactive_users_reader=>read_inactive_users( 60 ).

LOOP AT inactive_users INTO DATA(inactive_user).
  CASE inactive_user-days_of_inactivity.
    " choose urgency
  ENDCASE.
  " send e-mail
ENDLOOP.
and run both reports daily.
Don't create a big ball of mud by squeezing new features into existing code.
From SAP wiki:
The enhancement concept allows you to add your own functionality to SAP's standard business applications without having to modify the original applications. To modify the standard SAP behavior as per customer requirements, we can use enhancement framework.
As per your description, this doesn't sound like a use case for an enhancement. It isn't an intervention in an existing process. The original process and your new requirement are two different processes with one shared logical part: the selection of users by days of inactivity. The two shouldn't rely on each other.
Structurally I think it is best to have a separate program for computing which e-mails need to be sent and when, and a separate program for actually sending them.
I would copy your original program to a new one and modify it a little, so that instead of disabling a user it records a row into some table for each user: 1) the e-mail address, 2) the date when to send, 3) how many days are left (30, 15, 7, etc.), 4) a status flag indicating whether the e-mail was sent. Initially you could even have one such job per period (30, 15, 7, etc.) and pass the period as a parameter (which you use inside instead of 90).
This program you run daily as a job and it populates that table with e-mail "tasks" of what needs to be sent today. It just adds new lines, so lines from yesterday should stay in there.
The second program should just read that table, send the actual e-mails, and update the statuses (a hedged sketch of this mailer follows the list below). You run that program daily as well.
This way you have:
overview: just check the table to see what's going on
control: if the e-mailer dies or hangs, you can restart it and it will continue where it left off; with statuses you avoid sending duplicate mails
you can avoid sending outdated e-mails by having the mailer ignore all tasks older than, say, 2 days
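To illustrate the mailer half, a minimal sketch using the standard BCS mail classes; the report name, the task structure, and its fields are assumptions mirroring the table described above:
" Sketch: send one e-mail per open task, then flag it as sent.
REPORT send_inactivity_mails.

" Hypothetical task line, mirroring the task table described above.
TYPES: BEGIN OF ts_task,
         smtp_addr TYPE adr6-smtp_addr,
         days_left TYPE int1,
       END OF ts_task.
DATA lt_tasks TYPE STANDARD TABLE OF ts_task WITH EMPTY KEY.

" ... fill lt_tasks from the task table: today's unsent rows only ...

LOOP AT lt_tasks INTO DATA(ls_task).
  TRY.
      DATA(lo_request) = cl_bcs=>create_persistent( ).
      DATA(lo_doc) = cl_document_bcs=>create_document(
        i_type    = 'RAW'
        i_text    = VALUE #( ( line = |Your account will be locked in { ls_task-days_left } day(s).| ) )
        i_subject = 'Your account will be disabled soon' ).
      lo_request->set_document( lo_doc ).
      lo_request->add_recipient( cl_cam_address_bcs=>create_internet_address( ls_task-smtp_addr ) ).
      lo_request->send( ).
      " ... update the task status to 'sent' here ...
    CATCH cx_bcs INTO DATA(lx_bcs).
      " log the error and continue with the next task
  ENDTRY.
ENDLOOP.

COMMIT WORK.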
I want to clarify your confusion about the use of enhancements:
You would want to use an enhancement when 'something' happens or is about to happen in the system and you want to change that standard behavior.
That something, let's call it an event or process, could for example be an order being placed, a certain user logging on to the system, or a material that has been or is about to be changed.
The change could be notifying another system of the order, or checking the logged-on user with additional checks, for example his GUI version, and warning him/her if it is not up to date.
Ask yourself: what process in the system does the execution of your program or code depend on? Does anything need to happen before the program is executed? No, only time elapsing.
Even if you had found an enhancement you would want to use, it wouldn't help: if the process containing that enhancement never ran within the 90 days, your mails would never be sent, because the enhancement would never be called.
edit: That being said, if by 'enhancement' you mean building on your existing program instead of creating a new one, that would simply be the wrong terminology: in the SAP universe 'enhancement' means something else.
I would extend the functionality of your existing program, since you already compute how many days are left and you would have only one job to maintain.

Which datastructure should I use in Redis for a notification system?

I am trying to make a notification system with Redis rather than using MySQL which is what I use for the rest of the system. The reason for this is that I don't really need to save that much data so it can be saved in memory and I want it to be lightweight and fast.
The notifications will be kept temporarily. What I mean by that is that I do not want to save all notifications, only something like the 50 latest unseen notifications for each user. So the first thing I thought of was a linked list with a capped length of 50.
I would need to save this information for the notification:
postId
commentId
type
time
userId
username
image
So perhaps a JSON serialized string like this:
{"postId":1,"commentId":10,"type":1,"time":1462960058,"userId":2,"username":"Alexander","image":"ntfpRrgx.png"}
The notifications would be output like this on the client side:
Alexander commented on your post.
Alexander replied to your comment.
Where the type determines what kind of notification it is. I can handle the type checks client side and output the notification format accordingly. But here is the part I am having difficulty with.
1) I need to be able to save the notifications in an ordered way so that I know which notification is newest.
2) I need to be able to know when a notification has been seen, so that it is no longer counted as unseen.
3) I need to have a count of unseen notifications that I can show to the user. And If the user clicks on a notification, I need to mark that as a seen notification and decrement the count of unseen notifications.
4) I need to be able to mark all notifications as marked seen if the user wishes to do that.
5) I need to be able to get a subset of the notifications, whether seen or unseen, like an offset and limit on MySQL. For example, the user sees the newest 5 notifications, but he could click a next button and see the next 5, and the next 5 and so on.
I have no idea how to do all of this on Redis.
The key for the list or set could be user:1:notification. I know a list is sorted, and we can add and remove from the head and tail. But how do I achieve all these points?
1: You can use Redis sorted set (zset) operations, using the timestamp as the score and the event id (or the entire event JSON) as the member.
ZADD my-set-key timestamp event-id
Then to get a page of the newest items you use the ZREVRANGE command. If you choose to use the event id as the member, you need an additional structure to store the event fields; I would recommend a hash per event (HSET event-id field value).
2: You can remove an item by member (event-id)
ZREM my-set-key event-id
3: Assuming your zset only keeps unseen, then you can use ZCARD to get size of the set
ZCARD my-set-key
4: You can remove an entire set in one shot using
DEL my-set-key
5: You can paginate using zrange/zrevrange:
ZREVRANGE my-set-key start stop
If you need to keep both seen and unseen items, then you need an extra zset to which you only add members and from which you don't remove them once they are seen.
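Putting the pieces together, a hedged sketch of one possible key layout; the key names and the all/unseen split are assumptions:
# Add a new notification to both the "all" and "unseen" sets; store its fields in a hash.
ZADD user:1:notifications:all 1462960058 event:42
ZADD user:1:notifications:unseen 1462960058 event:42
HMSET event:42 postId 1 commentId 10 type 1 time 1462960058 userId 2 username Alexander image ntfpRrgx.png

# Unseen badge count:
ZCARD user:1:notifications:unseen

# Newest 5, then the next 5 (offset/limit paging):
ZREVRANGE user:1:notifications:all 0 4
ZREVRANGE user:1:notifications:all 5 9

# Mark one as seen; mark all as seen:
ZREM user:1:notifications:unseen event:42
DEL user:1:notifications:unseen

# Keep only the 50 newest notifications:
ZREMRANGEBYRANK user:1:notifications:all 0 -51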

SADD only if SCARD below a value

Node.js & Redis:
I have a LIST (users:waiting) storing a queue of users waiting to join games.
I have a SORTED SET (games:waiting) of games waiting for users. This is updated by the servers every 30s with a new date, so I can ensure that if a server crashes, its game is no longer used. If the server is running and the game fills up, it removes itself from the sorted set.
Each game has a SET (game:id:users) containing the users that are in it. Each game can accept no more than 6 players.
Multiple servers are using BRPOP to pick up users from the LIST (users:waiting).
Once a server has a user id, it gets the waiting games ids, then proceeds to run SCARD on their game:id:users SET. If the result of this is less than 6, it adds them to the set.
The problem:
If multiple servers are doing this at once, we could end up with more than 6 users being added to a set at a time. For example if one server requests SCARD and immediately after another runs SADD, the number in the set will have increased but the first server won't know.
Is there any way of preventing this?
You need transactions, which Redis supports: http://redis.io/topics/transactions
In your case in particular, you want to pay attention to the WATCH command: http://redis.io/topics/transactions#cas
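For illustration, a hedged sketch of the check-and-add done with WATCH; the game id and member value are made up:
WATCH game:42:users            # watch the set before reading it
SCARD game:42:users            # suppose this returns 5; do the "< 6" check client-side
MULTI
SADD game:42:users user:123
EXEC                           # returns nil if game:42:users changed after WATCH; retry then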

WCF data services - Limiting related objects returned based on critera

I have an object graph consisting of a base employee object, and a set of related message objects.
I am able to return the employee objects based on search criteria on the employee properties (eg team) etc. However, if I expand on the messages, I get the full collection of messages back. I would like to be able to either take the top n messages (i.e. restrict to 10 most recent) or ideally use a date range on the message objects to limit how many are brought back.
So far I have not been able to figure out a way of doing this:
I get an error if I attempt to filter on properties of the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1'").
Attempting to use Top on the message related object doesn't work either.
I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long and the request fails because the URL is too long (it might be possible to set up a paging mechanism with this approach)...
Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from starting on the message side and working backwards.
Outside of making multiple calls does anyone know of a method to accomplish this with wcf data services?
Thanks!
M.
Looks like the only real way of doing this is in fact to reverse the direction of the query.
So instead of starting from the Employee, I go from the Message side. You can filter back on the employee properties and restrict the Messages collection. It's not ideal, as it means iterating the collection on return to re-center it on the employee for what I am attempting to do, but it works. The async nature of Silverlight and the rich client at least means that although an extra iteration is required, it still appears reasonably fast.
Another interesting thing to note: the current version of OData/WCF Data Services does not support querying on properties of inherited classes, so I had to move the start/end date properties up to the base class in order to be able to restrict my search on them.
http://Site/Service.svc/Messages()?$filter=Employee/OfficeName eq 'Toronto' and (year(StartDate) eq 2010 and month(StartDate) ge 9)
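For reference, the same reversed query with the employee expanded and paging applied might look roughly like this; the property names are carried over from above, and $orderby/$top support depends on your service version:
http://Site/Service.svc/Messages()?$filter=Employee/OfficeName eq 'Toronto' and (year(StartDate) eq 2010 and month(StartDate) ge 9)&$expand=Employee&$orderby=StartDate desc&$top=10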