how does upsert with external id work? - api

How does upsert work in the Salesforce API?
I believe it checks whether a record with the given external ID already exists. If a matching record is found, it is updated; otherwise a new record is created.
Is this correct?
I am receiving the following error:
Upsert failed. First exception on row 1; first error: DUPLICATE_EXTERNAL_ID, Asset Tag: more than one record found for external id field: [a11M0000000CwJqIAK, a11M0000000CwJvIAK]: [Asset_Tag__c]
I have a list with items, and there are no duplicate Asset_Tag values.
System.debug('LstItem Asset_Tag__c: ' + LstItem);
upsert LstItem Asset_Tag__c;
From the debug log
LstItem Asset_Tag__c(Item_c__c:{Scanned_By__c=005M0000000IlxyIAC, Asset_Tag__c=12149, Status__c=Active, Scan_Location__c=001M0000008GzJXIA0, Last_Scan_Date__c=2011-12-17 06:08:47}, Item_c__c:{Scanned_By__c=005M0000000IlxyIAC, Asset_Tag__c=23157, Status__c=Active, Scan_Location__c=001M0000008GzJXIA0, Last_Scan_Date__c=2011-12-17 08:26:14})
What can I do to resolve this issue?

The error message indicates that, based on the External Id value you provided, there were two matching records. In this case, the system does not know which one it should update, so it fails.
If you take a look at /a11M0000000CwJqIAK and /a11M0000000CwJvIAK, they will have the same value in the external ID field. You may want to consider de-duplicating the records for this object.
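If it helps, a quick way to find the offending duplicates is an aggregate SOQL query run from the Developer Console's Query Editor. This is only a sketch; the Item_c__c object name is taken from the debug output above, so adjust it to your schema. It lists every external-id value that occurs on more than one record:
SELECT Asset_Tag__c, COUNT(Id) dupes
FROM Item_c__c
GROUP BY Asset_Tag__c
HAVING COUNT(Id) > 1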

Related

Unable to implement SCD2

I was working on SCD Type 2 in IICS and was unable to implement it fully; some scenarios were not being fulfilled. I am finding it very difficult to cover all possible scenarios. Below is the flow:
Src
---> Lkp (on src.id = tgt.id)
---> Expression: flag = IIF(ISNULL(tgt.surrogatekey), 'Insert', IIF(ISNOTNULL(tgt.surrogatekey) AND MD5(other_non_key_cols) <> tgt.md5, 'Update'))
---> On flag = 'Insert': insert into the target (works fine).
---> On flag = 'Update': I pass the updates to two instances of the same target table. In one, the changed record is inserted as a new row; in the other, I update the previously stored row so that tgt_end_date = lkp_start_date and active_ind becomes 'N'.
This works on the first run, but when I receive new updates containing the same records again (duplicates), or when I simply rerun the mapping, duplicates are inserted into the target table. The end-date handling also becomes unstable: when I insert multiple changes of the same record, all active flags end up as 'Y', whereas they should all be 'N' except the latest one in every run. Could anyone please help with this, even in SQL if you can interpret it that way?
If I've understood your question correctly, in each run you have multiple source records that match a single record in your target.
If this is the case, then you need to process your data so that you have a single source record per target record before you put the data through the SCD2 process.
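For reference, here is a minimal sketch of that approach in PostgreSQL-flavoured SQL. The table and column names (src, dim_tgt, row_hash, load_date, active_ind) are assumptions, not your actual schema:
-- 1) Keep only the latest source row per business key, so duplicates
--    in a run (or a rerun) cannot produce a second identical version.
CREATE TEMP TABLE src_dedup AS
SELECT DISTINCT ON (id)
       id, name, row_hash, load_date
FROM src
ORDER BY id, load_date DESC;

-- 2) Close the currently active target version when the hash changed.
UPDATE dim_tgt t
SET end_date   = s.load_date,
    active_ind = 'N'
FROM src_dedup s
WHERE t.id = s.id
  AND t.active_ind = 'Y'
  AND t.row_hash <> s.row_hash;

-- 3) Insert a new active version only where no active row remains,
--    i.e. for brand-new keys and for rows just closed in step 2;
--    rerunning with unchanged data inserts nothing.
INSERT INTO dim_tgt (id, name, row_hash, start_date, end_date, active_ind)
SELECT s.id, s.name, s.row_hash, s.load_date, NULL, 'Y'
FROM src_dedup s
LEFT JOIN dim_tgt t
       ON t.id = s.id AND t.active_ind = 'Y'
WHERE t.id IS NULL;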

BigQuery: return nested results without flattening and without using a table

It is possible to return nested results (RECORD type) if the noflatten_results flag is specified, but is it possible to just view them on screen without writing them to a table first?
For example, here is a simple user table (my actual table is large: 400+ columns with multiple levels of nesting):
ID,
name: {first, last}
I want to retrieve a particular user's record and display it in my application, so my query is:
SELECT * FROM dataset.user WHERE id=423421 limit 1
Is it possible to return the result directly?
You should write your output to a "temp" table with the noflatten_results option (and set a respective expiration so the table is purged after it is used) and serve your client out of this temp table, all on the fly.
Keep in mind that no matter how small the "temp" table is, if you query it (in the second step above) you will be billed for at least 10 MB, so you are better off using the Tabledata.list API in this step (https://cloud.google.com/bigquery/docs/reference/v2/tabledata/list), which is free!
So if you try to fetch multiple repeated fields, it will fail in the interface/BQ console with the error:
Error: Cannot output multiple independently repeated fields at the same time.
and the way to get past this error is to FLATTEN your output.
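For example, in legacy SQL you can flatten one repeated field at a time. This is only a sketch: addresses is a hypothetical repeated RECORD field, since the schema above only shows the nested name field:
SELECT id, name.first, addresses.city
FROM FLATTEN([dataset.user], addresses)
WHERE id = 423421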

How to implement a key lookup for a generated keys table in Pentaho Kettle

I just started to use Pentaho Kettle for integration. Seems great so far, quite intuitive compared to Talend, which I was also investigating.
I am trying to migrate some customers without their keys, so all I have is their email addresses.
The customer may already exist in the database, so what I need to do is:
If the customer exists, add its id to the imported field and continue.
But if the customer doesn't exist I need to get the next Hibernate key from the table Hibernate_Sequences and set it as the id.
But I don't want to always allocate a key, so I want to conditionally execute a step to allocate the next key.
So what I want to do is execute, in the flow, the db procedure that allocates the next key and returns it, but only if there's no value in id from the "lookup id" step.
Is this possible?
Just posting my updated flow - so the answer was to use a filter rows component which splits the data on true/false. I really had trouble getting the id out of the database stored proc because of a bug, so I had to use decimal and then convert back to integer (which I also couldn't figure out how to do, so used a javascript component).
Yes, it is. Per the official documentation (I kept only the relevant part): "Lookup values are added as new fields onto the stream". So you just need to add a "Filter rows" step from the Flow section and check whether the "id" field, which should have been added in the "Existing Id Lookup" step, has a value.
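For completeness, the allocation itself is usually just the classic Hibernate table-generator pattern. A hedged SQL sketch, assuming the common sequence_name / next_val column layout (check your actual Hibernate_Sequences schema); run both statements in one transaction so concurrent allocations cannot interleave:
-- Reserve the next key for the 'customer' sequence
UPDATE Hibernate_Sequences
   SET next_val = next_val + 1
 WHERE sequence_name = 'customer';

-- Read back the key that was just reserved
SELECT next_val - 1 AS allocated_id
  FROM Hibernate_Sequences
 WHERE sequence_name = 'customer';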

How do I not waste Generator values when using them server side with Firebird?

Check this simple piece of code that uses a generator to create unique primary keys in a Firebird table:
CREATE OR ALTER TRIGGER ON_BEFOREINSERT_PK_BOOKING_ITEM FOR BOOKING_ITEM
BEFORE INSERT POSITION 0
AS
BEGIN
  IF ((NEW.booking_item_id IS NULL) OR (NEW.booking_item_id = 0)) THEN
  BEGIN
    SELECT GEN_ID(LastIdBookingItem, 1) FROM RDB$DATABASE INTO :NEW.booking_item_id;
  END
END!
This trigger grabs, increments, and then assigns a generated value for the booking item id, thus creating an auto-incremented key for the BOOKING_ITEM table. The trigger even checks that the booking item id has not already been assigned a value.
The problem is the auto-incremented value will be lost (wasted) if, for some reason, the BOOKING_ITEM record cannot be posted.
I have a couple of ideas on how to avoid this wasting but have concerns about each one. Here they are:
Decrement the counter if a posting error occurs. Within the trigger I set up a try-except block (do try-except blocks even exist in Firebird PSQL?) and run a SELECT GEN_ID(LastIdBookingItem, -1) FROM RDB$DATABASE on post exceptions. Would this work? What if another transaction sneaks in and increments the generator before I decrement it? That would really mess things up.
Use a Temporary Id. Set the id to some unique temp value that I change to the generator value I want in an AFTER INSERT trigger. This method feels somewhat contrived and requires a way to ensure that the temp id is unique. But what if the booking_item_id was supplied client side; how would I distinguish that from a temp id? Plus, I need another trigger.
Use Transaction Control. This is like option 1, except instead of using the try-except block to reset the generator, I start a transaction and then roll it back if the record fails to post. I don't know the syntax for using transaction control, and I thought I read somewhere that SAVEPOINT/SET TRANSACTION is not allowed in PSQL. Plus, the rollback would have to happen in the AFTER INSERT trigger, so once again I need another trigger.
Surely this is an issue for any Firebird developer that wants to use Generators. Any other ideas? Is there something I'm missing?
Sequences are outside transaction control, and meddling with them to get 'gapless' numbers will only cause troubles because another transaction could increment the sequence as well concurrently, leading to gaps+duplicates instead of no gaps:
start: generator value = 1
T1: increment: value is 2
T2: increment: value is 3
T1: 'rollback', decrement: value is 2 (and not 1 as you expect)
T3: increment: value is 3 => duplicate value
Sequences should primarily be used for generating artificial primary keys, and you shouldn't care about the existence of gaps: it doesn't matter as long as the number uniquely identifies the record.
If you need an auditable sequence of numbers, and the requirement is that there are no gaps, then you shouldn't use a database sequence to generate it. You could use a sequence to assign numbers only after creating and committing the invoice itself (so that it is certain to be persisted); an invoice without a number is simply not final yet. However, even here there is a window of opportunity for a gap, e.g. if an error or other failure occurs between assigning the invoice number and committing.
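As a rough illustration of that assign-late idea, here is a hedged Firebird sketch; the invoices table, invoice_number column, and InvoiceNumberSeq generator are assumed names, and :id stands for the already-committed invoice's key. Run this in its own short transaction:
-- Give a finalised invoice its audit number only if it has none yet;
-- commit immediately afterwards to shrink the failure window.
UPDATE invoices
   SET invoice_number = GEN_ID(InvoiceNumberSeq, 1)
 WHERE invoice_id = :id
   AND invoice_number IS NULL;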
Another way might be to explicitly create a zero-invoice (marked as cancelled/number lost) with the gap numbers, so that the auditor knows what happened to that invoice.
Depending on local law and regulations, you shouldn't 're-use' or recycle lost numbers as that might be construed as fraud.
You might find other ideas in "An Auditable Series of Numbers". This also contains a Delphi project using IBObjects, but the document itself describes the problem and possible solutions pretty well.
What if, instead of using generators, you create a table with as many columns as there are generators, giving each column the name of a generator? Something like:
create table generators
(
  invoiceNumber integer default 0 not null,
  customerId integer default 0 not null
  -- ... other generators ...
)
Now you have a table where you can increment an invoice number using SQL inside a transaction, something like:
-- pseudo-code; the UPDATE row-locks the generators row until commit
begin transaction;
update generators set invoiceNumber = invoiceNumber + 1 returning invoiceNumber;
insert into invoices (...) values (...);
commit;
If anything goes wrong, the transaction is rolled back, together with the new invoice number, so I think there would be no more gaps in the sequence. Because the UPDATE locks the generators row until commit, concurrent transactions updating it will wait (or get a lock conflict), so they cannot grab the same number.
Enio

Integrity error when trying to save a supplier invoice

I built a module that increases stock on supplier invoices. It works fine on the development server, but when I put it on the production server I get this error. How can I correct it?
Integrity Error
The operation cannot be completed, probably due to the following: - deletion: you may be trying to delete a record while other records still reference it - creation/update: a mandatory field is not correctly set
[object with reference: Purchase Order Line - purchase.order.line]
An Integrity Error in OpenERP occurs for one of two reasons (mentioned in the description of the error):
When you create or update a record, a mandatory field is not correctly set or is not filled. Which field? One of the fields in the object mentioned; in your case, purchase.order.line.
When you delete a record, the record you want to delete is still referenced by another record through a field that the Python code marks as mandatory (required).
My guess: since you get the error when you create/update a record (a Purchase Order), you are probably creating/updating the order lines, and one of the mandatory fields in the order lines is empty.