Update a field in Redis

I'm developing an application that makes heavy requests to the database, so my solution is to keep a cached version of the data in Redis.
My question is: how do I update a specific field in a stored document, in my case to increment NBRView by 1?
Thanks.
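If the cached document is stored as a Redis hash (one field per attribute), the field can be incremented atomically with HINCRBY, without rewriting the whole document. A minimal sketch using the Jedis client, where the key name and field values are only illustrative:

```java
import redis.clients.jedis.Jedis;

public class IncrementViewCount {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "article:1001";              // hypothetical cache key

            // Cache the document as a hash, one field per attribute
            jedis.hset(key, "title", "My article");
            jedis.hset(key, "NBRView", "0");

            // Atomically increment the NBRView field by 1
            long views = jedis.hincrBy(key, "NBRView", 1);
            System.out.println("NBRView is now " + views);
        }
    }
}
```

If the document is cached as a single serialized JSON string instead, core Redis has no per-field update: you would re-SET the whole value, keep the counter in its own key and use INCR, or rely on the RedisJSON module's JSON.NUMINCRBY.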

Related

Azure Data Factory - Rerun Failed Pipeline Against Azure SQL Table With Differential Date Filter

I am using ADF to keep an Azure SQL DB in sync with an on-prem DB. The on-prem DB is read only and the direction is one-way, from the Azure SQL DB to the on-prem DB.
My source table in the Azure SQL Cloud DB is quite large (tens of millions of rows), so I have the pipeline set to use an UPSERT (a merge; I am trying to create a differential merge). I am using a filter on the source table, and the Filter Query has a WHERE condition that looks like this:
[HistoryDate] >= '#{formatDateTime(pipeline().parameters.windowStart, 'yyyy-MM-dd HH:mm' )}'
AND [HistoryDate] < '#{formatDateTime(pipeline().parameters.windowEnd, 'yyyy-MM-dd HH:mm' )}'
The HistoryDate column is auto-maintained in the source table with a getUTCDate() type approach. New records will always get a higher value and be included in the WHERE condition.
This works well, but here is my question: I am testing on my local machine before deploying to the client. When I am not working, my laptop hibernates and the pipeline rightfully fails because my local SQL instance is "offline" during that run. When I move this to production, hibernation should not be an issue, but what happens if the client's connection is temporarily lost (i.e., the client loses internet for a time)? Because my pipeline has a WHERE condition on the source to reduce the upsert to a practical number of rows, any failure would result in the loss of any data created during that 5-minute window.
A failed pipeline can be rerun, but by then the run time would be different, and I would effectively miss the block of records that would have been picked up had the pipeline run on time: pipeline().parameters.windowStart and pipeline().parameters.windowEnd will now be different.
As an FYI, I have this running every 5 minutes to keep the local copy in sync as close to real-time as possible.
Am I approaching this correctly? I'm sure others have this scenario and it's likely I am missing something obvious. :-)
Thanks...
Sorry to answer my own question, but to potentially help others in the future: it turns out there is a better way to deal with this.
ADF offers a "Metadata-driven Copy Task" utility/wizard on the home screen that creates a pipeline. When I used it, it offered a "Delta Load" option for tables, which takes a "watermark". The watermark is a column such as an incrementing IDENTITY column, an increasing date, or a timestamp. At the end of the wizard, it lets you download a script that builds a control table and a corresponding stored procedure that maintains the value of each parameter after each run. For example, if I wanted my delta load to be based on an IDENTITY column, it stores the maximum value seen by a particular pipeline run. The next time a run happens (trigger), it uses that stored value (minus 1) as the MIN and the current MAX of the IDENTITY column to get the records added since the last run.
I was going to approach things this way, but it seems like ADF already does this heavy lifting for us. :-)
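For anyone wiring this up by hand instead of using the wizard, the pattern the generated table and stored procedure implement is roughly the following. This is a minimal JDBC sketch with hypothetical table and column names, not the actual script ADF produces:

```java
import java.sql.*;

public class DeltaLoadSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:sqlserver://localhost;databaseName=Sync;encrypt=false"; // illustrative
        try (Connection con = DriverManager.getConnection(url, "user", "password")) {

            // 1. Read the watermark committed by the previous run
            long lastId = 0;
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT WatermarkValue FROM dbo.WatermarkTable WHERE TableName = 'dbo.History'")) {
                if (rs.next()) {
                    lastId = rs.getLong(1);
                }
            }

            // 2. Copy only the rows added since that watermark
            long newMax = lastId;
            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT Id, Payload FROM dbo.History WHERE Id > ?")) {
                ps.setLong(1, lastId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        newMax = Math.max(newMax, rs.getLong("Id"));
                        // ... upsert the row into the target here ...
                    }
                }
            }

            // 3. Persist the new watermark so a failed or late run resumes from here
            try (PreparedStatement ps = con.prepareStatement(
                    "UPDATE dbo.WatermarkTable SET WatermarkValue = ? WHERE TableName = 'dbo.History'")) {
                ps.setLong(1, newMax);
                ps.executeUpdate();
            }
        }
    }
}
```

Because the watermark is persisted only after a successful copy, rather than derived from the trigger time, a rerun of a failed window resumes from the last committed value instead of skipping the records created while the target was unreachable.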

How to performance test an Update endpoint in JMeter while automatically updating row value in payload

I have an Update entity endpoint in my .NET Core Microservice API that needs to be tested for performance. For all other endpoints, I am able to store the ID in a CSV file and load it before processing; however, for the update I want to reuse the values in the CSV, which requires updating and keeping track of the Row Version attribute for each ID.
I will be testing using 100 Users and 100 Orders, so I will need to match every user to one order so they don't try updating the same entity.
Steps:
Read CSV with ID and current row version
Call Update endpoint on the ID and row version, read in new Row Version from response body
Store the new row version and the ID within JMeter to reuse in the test
Call Update endpoint on the ID and new row version
The problem with storing these inside the CSV is that JMeter would be reading and writing the same file. I am looking for a way to use a Java-like collection inside my script so I don't have to read from and write to a file.
The dictionary would look like {'q28937-3423572903485-324875', rowVersion: 42}
Add a Post Processor as a child of your HTTP request (the first update) to extract the new ID and rowVersion.
Then, in the next update, use the JMeter variables ${ID} and ${rowVersion}, which hold the new values you extracted with the Post Processor.
Note that variables are not shared between threads; from the JMeter user manual best practices (16.13):
Variables are local to a thread; a variable set in one thread cannot be read in another. This is by design.
Also check:
the "Using RegEx (Regular Expression Extractor) with JMeter" guide,
the "Using CSV DATA SET CONFIG" guide, and
the CSV Data Set Config section of the JMeter User Manual.
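If you do at some point need the ID → rowVersion pairs to be visible across threads (the "Java-like collection" from the question), one option is a shared structure such as a ConcurrentHashMap kept in a small helper class on JMeter's classpath (or stashed in props) and called from JSR223 elements. A rough sketch with illustrative names:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared store of entity ID -> latest rowVersion, safe to call from multiple threads.
public class RowVersionStore {
    private static final Map<String, Long> VERSIONS = new ConcurrentHashMap<>();

    // Record the rowVersion returned by the Update endpoint
    public static void put(String id, long rowVersion) {
        VERSIONS.put(id, rowVersion);
    }

    // Fetch the latest rowVersion before issuing the next Update call
    public static Long get(String id) {
        return VERSIONS.get(id);
    }
}
```

With 100 users each owning exactly one order, though, plain per-thread variables as described above are usually enough and avoid shared state altogether.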

Mule Anypoint timestamp flowVar does not filter payload by LastModifiedDate

I'm trying to create a data sync using MuleSoft so that Db1 is checked for any updates based on LastModifiedDate, and if there are any, the updates are applied to Db2.
I've got the flow working to the point where, when it is first started, the data is copied from Db1 to Db2. After that, however, the flow keeps constantly updating the records in Db2. (Below is my flow diagram.)
I've tried to set up recordVars in the message enricher (in Batch_Step) to see if records exist and route them accordingly in the Choice (in Batch_Step1).
I've also enabled a watermark in the Poll for the timestamp, but nothing is preventing the constant updating of already-inserted records.
Below are screenshots of my configs: watermark setup, Db1 query, BatchStep accept expression, message enricher, and choice setup.
Add LastModifiedDate to the SELECT statement from Db1 so the watermark will be able to access the field payload.LastModifiedDate.
Also, check the query in your Db2 batch step: it may always be returning results, which would mean payload.size > 0 on every run and explain the constant updates.

Database watcher for Mule

How do I trigger a Mule application when the value of a row in a database gets updated?
Thanks in advance.
It depends on how you define whether a row has been updated. However, a good starting point is Poll with watermarks.
Poll allows you to poll a resource such as a database connector with a particular SQL SELECT query, and watermarks allow you to store tracking info such as the last 'id' processed or the 'lastupdated' column of a database, for example.
Some links with examples:
http://www.mulesoft.org/documentation/display/current/Poll+Reference#PollReference-PollingforUpdatesusingWatermarks
http://blogs.mulesoft.org/data-synchronizing-made-easy-with-mule-watermarks/

Data Replication - SQL Server 2008

Good day
I needed to create data replication between two databases. I created the Local Publication with one table for testing purposes. I then created the Local Subscription and it worked 100%; I tested it and the data gets updated. I then started adding more tables to the Local Publication, and I noticed that the new tables did not pull through to the new database via the Local Subscription. Do I need to create a new Subscription for the updates? Do I need to delete the current Subscription, or is there another way to just update the current Subscription?
Thanks
Ruan
I got this description from this article: http://www.mssqltips.com/sqlservertip/2502/limit-snapshot-size-when-adding-new-article-to-sql-server-replication/
You must start the Snapshot Agent, but check that already-replicated tables are not marked for reinitialization, because in that case data from the old tables will be transferred once more.