How can I perform an upsert (replace) operation in Redis, pipelined? - redis

Upsert (Replace)
Update If Exists
Insert If Not Exists
(Using Primary Key as Pipelined)

What do you mean by "update if exists"? The standard Redis write commands (SET, MSET, HSET, etc.) will update (overwrite) an existing key if the key already exists, or insert a new key if it doesn't.
Sounds like you are asking for the default behavior.
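Since SET already has upsert semantics, "pipelined" only means batching the round-trips. A minimal sketch using the third-party redis-py package (the helper name and keys are made up for illustration):

```python
# Sketch: pipelined upsert in Redis. SET is already an upsert (it
# overwrites an existing key and creates a missing one); the pipeline
# just batches many SETs into a single network round-trip.
def pipelined_upsert(client, mapping):
    """client: a redis-py Redis instance (anything with .pipeline())."""
    pipe = client.pipeline(transaction=False)
    for key, value in mapping.items():
        pipe.set(key, value)  # insert-or-overwrite, never an error
    return pipe.execute()     # one round-trip, one result per SET

# Against a live server, with the redis-py package installed:
#   import redis
#   r = redis.Redis(host="localhost", port=6379)
#   pipelined_upsert(r, {"user:1": "alice", "user:2": "bob"})
```

Note `transaction=False`: a plain pipeline is just batching; pass `transaction=True` (the redis-py default) if you also want the batch wrapped in MULTI/EXEC.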

Note that Redis supports other data structures as well, for example Sets and Sorted Sets. The SET command works for string values only, as it expects a string key and a string value.

Related

Repeatable write to SQL Sink using Azure Data Factory is failing

Scenario: I'm copying data from Azure Table Storage to an Azure SQL DB using an upsert stored procedure like this:
CREATE PROCEDURE [dbo].[upsertCustomer] @customerTransaction dbo.CustomerTransaction READONLY
AS
BEGIN
    MERGE customerTransactionstable WITH (HOLDLOCK) AS target_sqldb
    USING @customerTransaction AS source_tblstg
    ON (target_sqldb.customerReferenceId = source_tblstg.customerReferenceId AND
        target_sqldb.Timestamp = source_tblstg.Timestamp)
    WHEN MATCHED THEN
        UPDATE SET
            AccountId = source_tblstg.AccountId,
            TransactionId = source_tblstg.TransactionId,
            CustomerName = source_tblstg.CustomerName
    WHEN NOT MATCHED THEN
        INSERT (
            AccountId,
            TransactionId,
            CustomerName,
            CustomerReferenceId,
            Timestamp
        )
        VALUES (
            source_tblstg.AccountId,
            source_tblstg.TransactionId,
            source_tblstg.CustomerName,
            source_tblstg.CustomerReferenceId,
            source_tblstg.Timestamp
        );
END
GO
where customerReferenceId & Timestamp constitute the composite key for the CustomerTransactionstable
However, when I update the rows in my source (Azure Table) and rerun the Azure Data Factory pipeline, I see this error:
"ErrorCode=FailedDbOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=A database operation failed with the following error: 'Violation of PRIMARY KEY constraint 'PK_CustomerTransactionstable'. Cannot insert duplicate key in object 'dbo.CustomerTransactionstable'. The duplicate key value is (Dec 31 1990 12:49AM, ABCDEFGHIGK).\r\nThe statement has been terminated.',Source=.Net SqlClient Data Provider,SqlErrorNumber=2627,Class=14,ErrorCode=-2146232060,State=1,Errors=[{Class=14,Number=2627,State=1,Message=Violation of PRIMARY KEY constraint 'PK_CustomerTransactionstable'"
Now, I have verified that there's only one row in both the source and sink with a matching primary key, the only difference is that some columns in my source row have been updated.
This link in the Azure documentation discusses repeatable copying; however, I don't want to delete rows for a time range from my destination before inserting any data, nor do I have the ability to add a new sliceIdentifierColumn to my existing table or make any schema change.
Questions:
Is there something wrong with my upsert logic? If yes, is there a better way to do upsert to Azure SQL DBs?
If I choose to use a SQL cleanup script, is there a way to delete only those rows from my Sink that match my primary key?
Edit:
This has now been resolved.
Solution:
The primary key violation will only occur if it's trying to insert a record which already has a matching primary key. In my case, although there was just one record in the sink, the condition on which the merge was being done wasn't being satisfied due to a mismatch between datetime and datetimeoffset fields.
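The failure mode is easy to reproduce outside SQL: a timestamp carrying offset information never compares equal to its naive counterpart, so an equality-based merge condition on such a pair silently matches nothing and falls through to the insert branch. A Python analogy (types chosen purely for illustration):

```python
# Analogy for the datetime vs datetimeoffset mismatch: an offset-aware
# timestamp never compares equal to a naive one, so an equality-based
# merge condition on the pair matches nothing and the MERGE inserts.
from datetime import datetime, timezone

naive = datetime(1990, 12, 31, 0, 49)        # like SQL datetime
aware = naive.replace(tzinfo=timezone.utc)   # like datetimeoffset

print(naive == aware)  # False: the "WHEN MATCHED" branch never fires
```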
Have you tried using ADF Mapping Data Flows instead of coding it through a stored procedure? It can make upserts to SQL much easier.
With an Alter Row transformation, you can perform Upsert, Update, Delete, Insert via UI settings and picking a PK: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-alter-row
You would still just need a Copy Activity prior to your Data Flow activity to copy the data from Table Storage. Put it in a Blob folder and then Data Flow can read the Source from there.

mule returning primary key on successful insert

Hi there, I'm new to Mule and I need a pointer on how to process records. I'm trying to perform an operation where I insert a new record into one table and, if the record is inserted successfully, obtain the primary key and insert it into another table where the primary key is part of the foreign key.
I don't know which connector or component to use to check if an insert was successful so that I can insert the primary key into another table.
My primary key is a uuid generated as a variable. I tried returning the GUID from SQL Server using the following documentation, but it didn't work. Any help or pointers on either question would be appreciated.
https://doctorjw.wordpress.com/2015/10/01/mule-and-getting-the-generated-id-of-a-newly-inserted-row/
If you want a DB-generated id, you can use two DB blocks, saving a variable between them:
1st DB block: generate a unique id through a sequence, for example:
select GENERAID_ESB.nextval from dual
Save it in a variable (Session or Flow, depending on the scope you need):
#[payload.get(0).nextval]
2nd DB block: insert your record with the saved unique id, for example:
INSERT INTO ESB_TABLE VALUES(#[sessionVars.'idTable'],
#[message.outboundProperties.'yourInformation'])
I hope this helps.
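Outside Mule, the same two-step pattern (insert the parent, capture the generated key, use it as the child's foreign key) can be sketched with Python's built-in sqlite3; the table and column names here are invented for illustration:

```python
# Sketch: insert a parent row, capture its generated primary key, and
# use it as the foreign key of a child row. sqlite3 stands in for the
# two-DB-block Mule flow; table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         customer_id INTEGER REFERENCES customer(id),
                         item TEXT);
""")

cur = conn.execute("INSERT INTO customer (name) VALUES (?)", ("alice",))
new_id = cur.lastrowid  # generated key, like SELECT seq.nextval / @@IDENTITY

conn.execute("INSERT INTO orders (customer_id, item) VALUES (?, ?)",
             (new_id, "book"))

row = conn.execute("SELECT customer_id FROM orders").fetchone()
print(row[0] == new_id)  # True: the child row points at the new parent
```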

Copy database with data and foreign keys without identity insert on

Scenario:
I have a set of test data that needs to be deployed to our build server daily (our build server database is first overwritten with the current live database, and has all data over a month old removed).
This test data has foreign key references within it which need to stay.
I can't simply switch on IDENTITY_INSERT as the primary keys may clash with data that is already in the database (because we aren't starting from a blank database).
The test data needs to be regenerated fairly regularly, so the thought of going through the deploy script and fudging the id columns to be something outlandish (a negative number, for instance), then changing the related foreign key columns to match every time we regenerate the data, doesn't thrill me.
Ideally I would like to know if there is a tool which can scan a database, pick up the foreign key constraints and generate the insert scripts accordingly, something like:
INSERT INTO MyTable VALUES('TEST','TEST');
DECLARE @Id INT;
SET @Id = (SELECT @@IDENTITY);
INSERT INTO MyRelatedTable VALUES(@Id,'TEST');
It sounds like you want an ETL process that copes with the change in ids. Since you're using SQL Server, look at the OUTPUT clause: use it to build temporary tables that map each "old" id to its "new" id for each primary key, then use that map to fix up the foreign keys when migrating the "child" tables.
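That id-mapping approach can be sketched with Python's built-in sqlite3, where lastrowid plays the role of SQL Server's OUTPUT clause and the schema is invented for illustration:

```python
# Sketch: copy parent rows into a table whose identity values may clash
# with existing data, recording an old-id -> new-id map, then remap the
# child rows' foreign keys through that map. (sqlite3 stand-in for SQL
# Server's OUTPUT clause; invented schema.)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src_parent (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE src_child  (parent_id INTEGER, note TEXT);
    CREATE TABLE dst_parent (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT);
    CREATE TABLE dst_child  (parent_id INTEGER, note TEXT);
    INSERT INTO src_parent VALUES (1, 'a'), (2, 'b');
    INSERT INTO src_child  VALUES (1, 'x'), (2, 'y');
    -- pre-existing destination row: new ids must not clash with it
    INSERT INTO dst_parent (name) VALUES ('existing');
""")

id_map = {}
for old_id, name in conn.execute("SELECT id, name FROM src_parent").fetchall():
    cur = conn.execute("INSERT INTO dst_parent (name) VALUES (?)", (name,))
    id_map[old_id] = cur.lastrowid  # freshly generated identity value

for parent_id, note in conn.execute("SELECT parent_id, note FROM src_child").fetchall():
    conn.execute("INSERT INTO dst_child VALUES (?, ?)", (id_map[parent_id], note))

print(id_map)  # -> {1: 2, 2: 3}
```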

SQLite, SQL: Using UPDATE or INSERT accordingly

Basically, I want to insert if a given entry (by its primary key id) doesn't exist, and otherwise update if it does. What might be the best way to do this?
Does SQLite not have a REPLACE command? Or ON CONFLICT REPLACE?
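It has both, and since version 3.24 SQLite also supports the more precise INSERT ... ON CONFLICT ... DO UPDATE upsert, which updates the row in place instead of deleting and re-inserting it as INSERT OR REPLACE does. A quick demonstration via Python's built-in sqlite3 (table name made up):

```python
# SQLite upsert: INSERT ... ON CONFLICT ... DO UPDATE (SQLite >= 3.24).
# The second execute hits the existing primary key and takes the
# DO UPDATE branch instead of failing or replacing the row.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, value TEXT)")

sql = """
    INSERT INTO entry (id, value) VALUES (?, ?)
    ON CONFLICT(id) DO UPDATE SET value = excluded.value
"""
conn.execute(sql, (1, "first"))   # id 1 absent  -> insert
conn.execute(sql, (1, "second"))  # id 1 present -> update

print(conn.execute("SELECT value FROM entry WHERE id = 1").fetchone()[0])
# -> second
```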

alter mysqldump file before import

I have a mysqldump file created from an earlier version of a product that can't be imported into a new version of the product, since the db structure has changed slightly (mainly altering a column that was NOT NULL DEFAULT 0 to UNIQUE KEY DEFAULT NULL).
If I just import the old dump file, it will error out since the column that has default values of 0 now breaks the UNIQUE constraint.
It would be easy enough to either manually alter the mysqldump file, or import into a temp table and change it, then copy to the new table. However, is there a way to do this programmatically, so it will be repeatable and not manual? (This will need to happen for many instances of this product.)
I'm thinking something like disabling key constraints for the import, then setting all values that = 0 to NULL, then re-enabling the key constraints?
Is this possible? Any help appreciated.
Yes:
SET UNIQUE_CHECKS=0;        -- turns off unique key constraints
SET FOREIGN_KEY_CHECKS=0;   -- turns off foreign key constraints
Import the file.
Update the 0 values to NULL.
SET UNIQUE_CHECKS=1;        -- turns unique key constraints back on
SET FOREIGN_KEY_CHECKS=1;   -- turns foreign key constraints back on
You could just use sed and modify the dump file in an automated, repeatable way:
sed 's/NOT NULL DEFAULT 0/UNIQUE KEY DEFAULT NULL/g' dump.sql > dump_fixed.sql
or something like that.
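If sed isn't available, or you want the step inside an existing deployment script, the same repeatable rewrite is a few lines of Python; the pattern and replacement below mirror the sed one-liner and are specific to this particular schema change:

```python
# Sketch: rewrite a mysqldump file programmatically, mirroring the sed
# one-liner. The pattern/replacement are specific to this schema change.
import re

def fix_dump(text):
    """Replace the old column definition with the new one everywhere."""
    return re.sub(r"NOT NULL DEFAULT 0", "UNIQUE KEY DEFAULT NULL", text)

line = "`code` int(11) NOT NULL DEFAULT 0,"
print(fix_dump(line))
# -> `code` int(11) UNIQUE KEY DEFAULT NULL,

# For a real dump file, read/transform/write:
#   with open("dump.sql") as f:
#       fixed = fix_dump(f.read())
#   with open("dump_fixed.sql", "w") as f:
#       f.write(fixed)
```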