Replication/save conflicts of documents - lotus-domino

I have two servers, let's call them A and B. On A, I have order documents, and B is a replica of A (A replicates to B every minute). On B, I have a Java agent which is scheduled every 5 minutes; it sends a document to a website and also sets a flag in a field of that document. I now often get a save/replication conflict on server A for the particular document that was accessed by server B, because others are also editing the same document on server A. How can this problem be solved?

If the documents on A are created using a form, enable "Merge conflicts" in the form properties. If the documents are created with an agent, add the reserved field doc.~$ConflictAction = "1".
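For a Java agent like the one on server B, here is a minimal sketch of setting the reserved field before saving (the agent skeleton and the unprocessed-documents loop are assumptions, not your actual code):

import lotus.domino.*;

public class JavaAgent extends AgentBase {
    public void NotesMain() {
        try {
            Session session = getSession();
            AgentContext agentContext = session.getAgentContext();
            DocumentCollection col = agentContext.getUnprocessedDocuments();
            Document doc = col.getFirstDocument();
            while (doc != null) {
                // ... send the document to the website and set your flag field ...
                // "1" tells Domino to merge non-conflicting field changes
                // instead of creating a replication/save conflict document
                doc.replaceItemValue("$ConflictAction", "1");
                doc.save(true, false); // force save, no response document
                doc = col.getNextDocument(doc);
            }
        } catch (NotesException e) {
            e.printStackTrace();
        }
    }
}

Note that merging only helps when the concurrent edits touch different fields; if two replicas change the same field, Domino will still record a conflict.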

What is the use of Table Delivery Class in SAP Data Dictionary?

I want to see the difference between Delivery Class 'A' and 'C'. Class C is supposed to be for data entered only by the customer, but how can I see that in code?
I created two tables, one with delivery class 'A' and one with 'C', and added data to both with ABAP code. I expected that I would not be able to add data to the table created with class C, but both work the same.
For A Type:
DATA wa_ogr LIKE ZSGT_DELIVCLS2.
wa_ogr-ogrenci_no = 1.
wa_ogr-ogrenci_adi = 'Seher'.
INSERT ZSGT_DELIVCLS2 FROM wa_ogr.
For C Type:
DATA wa_ogr2 LIKE ZSGT_DELIVERYCLS.
wa_ogr2-ogrenci_no = 1.
wa_ogr2-ogrenci_adi = 'Seher'.
INSERT ZSGT_DELIVERYCLS FROM wa_ogr2.
The data is inserted without problems when I check in the debugger.
Is there a live demo where I can see the working logic of C? Can you describe Delivery Class C better?
Tables with delivery class C are not "customer" tables, they are "customizing" tables. "Customizing" is SAP-speak for configuration settings. They are supposed to contain system-wide or client-wide settings which are set in the development system and then transported into the production system using a customizing transport. Whether that is actually the case depends on the setting you choose when generating a maintenance dialog with transaction SE54: it is possible to have customizing tables which are meant to be maintained directly in the production system without a transport request.
Tables with delivery class A are supposed to contain application data: data which is created and updated by applications as part of their everyday routine business processes. There should usually be no reason to transport that data (although you can do so by manually adding the table name and keys to a transport request). Those applications can be SAP standard applications, customer-developed applications, or both.
There is also delivery class L, which is meant for short-lived temporary data, as well as the classes G, E, S and W, which should only be used by SAP on tables they created.
But from the perspective of an ABAP program, there is no difference between these settings. Any ABAP keywords which read or write database tables work the same way regardless of delivery class.
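For example, reusing the two tables from the question (table and field names as given there), the same reads behave identically regardless of delivery class:

DATA ls_a TYPE zsgt_delivcls2.
DATA ls_c TYPE zsgt_deliverycls.

* Delivery class plays no role here: both statements are
* compiled and executed in exactly the same way.
SELECT SINGLE * FROM zsgt_delivcls2 INTO ls_a WHERE ogrenci_no = 1.
SELECT SINGLE * FROM zsgt_deliverycls INTO ls_c WHERE ogrenci_no = 1.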
But there are some SAP standard tools which treat these tables differently. An important one is the client copy:
Data in delivery class C tables will always be copied.
Data in delivery class A tables is only copied when desired (it's a setting in the copy profile). You switch it off to create an empty client with all the settings of an existing client or to synchronize customizing settings between two existing clients without overwriting any of the application data. You switch it on if you want to create a copy of your application data, for example if you want a backup or want to perform a destructive test on real data.
Data in delivery class L tables doesn't get copied.
For more information on delivery classes, check the documentation.

Cosmos DB where condition by external document

I have the following document structure (omitting all fields with an underscore prefix, like _self):
{
"id": "c5055e2b-efb2-4c86-907d-a0beb1dca4dc",
"Name": "John Johnson",
"partitionKey": "0ecdb989-01c6-4f11-9fd2-3e1dcc1c8cb9",
"FKToBeDeleted": "FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted",
}
As you can see, there is a field named FKToBeDeleted which I use to mark the document. It has to be a reference, because my app can run into a concurrency problem: the first app GETs the document and processes it, the second app updates the document during that processing, and the first app never sees the change. Re-downloading and re-updating the huge document is RU-consuming, so I wanted to reduce the cost. So I created a separate document for this flag.
{
"id": "FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted",
"partitionKey": "0ecdb989-01c6-4f11-9fd2-3e1dcc1c8cb9",
"ToBeDeleted": false,
}
Now there is a problem, because my front-end app must not display any documents that are marked ToBeDeleted. This slightly cheats the user: I only mark the document as deleted and actually delete it later.
The question is what the SQL query should look like. Previously it was the following query, because r.ToBeDeleted was a boolean:
SELECT r.id, r.Name, r.AddedAt, r._ts
FROM ROOT r
WHERE
(NOT(r.ToBeDeleted))
ORDER BY r.AddedAt desc
Now FKToBeDeleted is only a reference to another document whose id is stored in r.FKToBeDeleted, so I tried some nested SELECTs, but they didn't work.
Any suggestions on the right way to achieve this?
EDIT (clarification)
Let's take the following situation.
There are two apps (you can also treat them as threads) which use the same Cosmos DB instance.
STEP 1 - processing of some data starts; the database document is needed, so the app GETs it (only the ToBeDeleted field is of interest here).
STEP 2 - the user wants to remove the processed item because he is no longer interested in its results; the database document is required here too, so there is another GET.
STEP 3 - the soft-delete job finishes and the database document is updated, setting the field to true.
STEP 4 - the common-flow processing ends and the document is updated at the end. BUT Application 2 downloaded it before STEP 3, so it overwrites what Application 1 did, which is bad.
So I made a solution for that.
The steps are the same, but instead of updating the same document, I update a referenced document, so I no longer have the problem of overwriting data.
Now the problem is how to write a SQL query that joins the two documents, so that the FK id is resolved to the value of the ToBeDeleted field in the referenced document.
According to this article there is no way to join two documents, which of course does not help me at all, but it does close the topic.
The JOIN keyword exists in the language, but it is used to "unfold" nested containers; there is no way to join different documents.
Perhaps you can use a subquery instead of a JOIN:
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-subquery#mimic-join-with-external-reference-data
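Following that "external reference data" pattern, one workaround (a sketch using the example ids from above, not tested against your data) is to first collect the ids of the flag documents that are marked deleted, then inline that list as a literal array in the main query:

-- query 1: ids of flag documents already marked as deleted
SELECT VALUE f.id
FROM ROOT f
WHERE f.ToBeDeleted = true

-- query 2: the application inlines the result of query 1 as a literal array
SELECT r.id, r.Name, r.AddedAt, r._ts
FROM ROOT r
WHERE NOT ARRAY_CONTAINS(["FK_c5055e2b-efb2-4c86-907d-a0beb1dca4dc_ToBeDeleted"], r.FKToBeDeleted)
ORDER BY r.AddedAt DESC

The extra round trip costs some RUs, but only the small flag documents are read, which is far cheaper than re-reading the large documents.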

Summarize browsing records for a specific URL and user into one record by date in SQL Server

I'm facing a problem with my report. The report calculates how many times specific URLs (the company's applications) were viewed/opened, ignoring user data.
What I need is for the query to count the views while taking into account that a user might not only open the application but also browse within it (i.e. filter something, while it is still the same application). The data shows the same user opening the same application minutes or seconds apart: every click, filter, or page refresh creates a new entry record, which is misleading, because the report counts each individual record as one time the application was opened. The applications (which are the same in every country) are used in different countries, so the log data sits on different servers.
There are 4 tables from different servers which hold the applications' log entry data (URLs), and they have to be merged into one table with the summarized log entry data.
A small piece of one table with the data:
A small piece of the second table, just to show that the only difference between the tables is litintranet vs. wokintranet:
There you can see that for the LogApp IFP the same user browsed within seconds. It should have only one record (just for opening the app), but it has 3, because the user probably filtered something or refreshed the page.
I need a query that summarizes this information and inserts the new, reduced records into a new table. That new table will then serve as the correct data source for the reports.
The output should look like this:
How can the summarizing be done?
Thank you for your help
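For what it's worth, here is a sketch of one common approach (gaps and islands with LAG, SQL Server 2012+); the table and column names are assumptions, since the table definitions aren't shown:

-- assumes the 4 source tables were combined (e.g. UNION ALL over linked servers)
-- into dbo.CombinedLogEntries(UserId, LogApp, EntryTime)
;WITH all_logs AS (
    SELECT UserId, LogApp, EntryTime,
           LAG(EntryTime) OVER (PARTITION BY UserId, LogApp
                                ORDER BY EntryTime) AS PrevEntry
    FROM dbo.CombinedLogEntries
)
INSERT INTO dbo.SummarizedLogEntries (UserId, LogApp, EntryTime)
SELECT UserId, LogApp, EntryTime
FROM all_logs
-- keep a row only when it starts a new visit: the user's first hit ever,
-- or a hit more than 30 minutes after their previous one on the same app
WHERE PrevEntry IS NULL
   OR DATEDIFF(MINUTE, PrevEntry, EntryTime) > 30;

The 30-minute threshold is arbitrary; choose whatever gap should count as a new "open" of the application.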

SQL: is updating a record thread safe?

I am working on a server that accesses a database. It is possible for multiple people to access the same record. Will each request wait in line, or will they all try to modify that record at the same time?
Example:
I have an image, and the database will keep track of how many "likes" that image has.
UPDATE `images` SET `image_likes` = `image_likes` + 1 WHERE `image_id` = 0;
Assuming that specific image has 0 "likes" and 3 people "like" that image at the same time, would those 3 requests be processed properly, resulting in the image having 3 likes? Or is there a chance the record could be corrupted, or at the very least be incorrect, perhaps showing only 2 "likes"?
My Database uses the MyISAM engine and I am using it through GoDaddy.
Thank you
PHP by itself is not thread safe, but MySQL is; in this case MySQL will handle the concurrency and you will get 3 likes. A single UPDATE statement like yours is atomic, so concurrent increments cannot lose updates. Unless there is another operation involved, this should not be a problem.
You can try it out by calling that script from the console multiple times to see what happens.
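To see why the single-statement form matters, compare it with a read-modify-write done by the application, which is not safe (values are illustrative):

-- safe: the increment happens inside one atomic statement,
-- so three concurrent likes always end at 3
UPDATE `images` SET `image_likes` = `image_likes` + 1 WHERE `image_id` = 0;

-- NOT safe: two clients can both read 2 and both write 3,
-- losing one of the likes
SELECT `image_likes` FROM `images` WHERE `image_id` = 0;
UPDATE `images` SET `image_likes` = 3 WHERE `image_id` = 0;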

SSRS Data-Driven Subscription [based on static Subscription table] Not Picking Up Changes Made to Subscription Table

I have a .RDL report which I designed in BIDS and deployed to my report server. The report asks for three parameters before viewing: Year, Month, and Customer ID. The report works great and does exactly what it is supposed to do.
While I used to run each report individually because there were only 2-3 customers, there are now 30+ customers who receive the report, so I wanted to switch to a more automated fulfillment method. After doing some research, it appears that using Report Manager to create a "Data Driven Subscription" (DDS) with the "Windows File Share" option gives me the capabilities I need.
As part of creating the DDS, I created a table called [Subscription] which is a table containing one row for each customer receiving the report and has the following columns:
Year
Month
CustomerID
FileName
FileLocation
Overwrite
Format
...so through the DDS Wizard in Report Manager, I was able to successfully set up a Data Driven Subscription (linked to various columns in the [Subscription] table) which creates a new report for each customer in the [Subscription] table, saves it [overwriting if necessary] as a PDF in a location of my choosing (specified in [Subscription].[FileLocation] for each row), and runs every minute (I plan on changing the frequency to once a week eventually).
This works flawlessly, giving me a new set of 30 reports in the directory of my choosing, with each report having a name I assigned in the FileName column of my table. Exactly what I was looking for.
HERE'S THE PROBLEM: When I update the FileLocation or FileName (or anything, really) in the [Subscription] table, the subscription doesn't pick up the changes right away; sometimes it doesn't pick them up at all. For example, I updated the [FileName] column for one customer from Report_711622 to SpecialReport_711622, so that the output file for that customer should be named SpecialReport_711622 while all the other reports keep the name Report_XXXXX (no Special prefix). But the file name of the report for customer 711622 remains the same!
It's almost as if the job only sees what it needs to do once a day and does not go back and reference the [Subscription] table until I leave for the night; when I come back in the morning, it has picked up the change.
Since I am about to scale this process out to a larger customer base using a different report, I need to be able to make edits to the [Subscription] table and have them picked up by the Data Driven Subscription immediately (or at least at a fixed interval that I can adjust, so that I know 100% when the change will be picked up).
Does anyone know what's causing the lag? How do I change it so that updates to the [Subscription] table get picked up regularly? I'm also having issues creating new DDSs for other reports (following the exact process outlined above): I've created the subscriptions to run every minute, and it says they are running and the number of outputs matches the number of customers with 0 errors, but there are no files in the directory I specified (or anywhere else I've looked, for that matter).
Any help would be greatly appreciated!
I think the answer lies in the mechanism SSRS uses. There are a few places where "lag" can occur.
The subscription is in fact a SQL Agent job which creates a record in the Event table. This table is a queue that SSRS polls for scheduled tasks.
There is a small amount of time between the moment the subscription creates the Event record and the moment SSRS reads it and starts creating the dataset for your DDS. Creating the DDS dataset takes some time, too. During this time the subscription is in the Pending state, and if you change anything in the data, the subscription will still use the old data as report parameters. So you will not notice your change until the next scheduled run.
Which brings me to the following: if a subscription is still running when the next schedule kicks in (likely, since yours runs every minute), the engine will not execute it but will wait for the next subscription schedule, and so on. That is another source of lag, and a cause of missing reports for a given scheduled minute. The subscription processes reports sequentially, one row of your DDS recordset at a time; again, this takes time. You can see this in the subscription window when it says "# of # processed".
I suggest you look at the Event table in the ReportServer database during an execution. The ExecutionLog views (there are 3) may also be interesting: a scheduled run shows up with RequestType = 1 and generates one record for each report, so you can see the exact timing and parameters of each report run by the subscription. You may be able to extract the data you need to resolve your other issues.
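As a starting point for that inspection (a sketch assuming the default ReportServer catalog name; run it while the subscription is firing):

USE ReportServer;

-- events still waiting in the queue (the subscription shows as Pending here)
SELECT EventType, TimeEntered, ProcessStart
FROM dbo.Event;

-- recent executions; RequestType = 1 marks subscription-driven runs (see above)
SELECT TOP (50) TimeStart, TimeEnd, Parameters, Status, Format
FROM dbo.ExecutionLog
WHERE RequestType = 1
ORDER BY TimeStart DESC;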
EDIT: Here is a more elaborate guide to DDS data and events
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
Could this "Double-Hop" problem be the source of my issues? I'm so stuck on this one!
The Double-Hop Problem - MSDN Knowledgecast