Issue raising an error in after trigger - error-handling

I'm having an issue raising an error in an after trigger, and I don't see any reason why I can raise an error one way, but not the other. Let me give you an example.
The following trigger will fail and raise the following error:
Error:Apex trigger tstTrigger2 caused an unexpected exception, contact
your administrator: tstTrigger2 : execution of AfterUpdate caused by:
System.FinalException: SObject row does not allow errors:
Trigger.tstTrigger2 : line 19, column 1
trigger tstTrigger2 on Account (after update)
{
    Set<Id> accountIds = Trigger.newMap.keySet();
    List<Account> accountsToProcess = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
    for (Account act : accountsToProcess)
    {
        act.addError('doesn\'t work');
    }
}
However, raising an error this way works. Note that there is only ever one record in the key set, at least in this test scenario.
trigger tstTrigger on Account (after update)
{
    Set<Id> accountIds = Trigger.newMap.keySet();
    List<Account> accountsToProcess = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
    Trigger.new[0].addError('However, this works?');
}
Any explanation of why the first one fails and the second one doesn't would be greatly appreciated. Also, if you could point me to the best way to implement this so that it's bulkified, that would be great. Thanks!

addError() doesn't roll back your insert; it just prevents further execution, so the data is never committed and the error is surfaced in the UI.
By doing this
Trigger.new[0].addError('However, this works?');
You're simply throwing an error on the first record in the list, thereby stopping any further processing.
Something like this will solve your first code snippet:
trigger tstTrigger2 on Account (after update)
{
    Map<Id, Account> accountMap = Trigger.newMap;
    for (Id accountId : accountMap.keySet())
    {
        accountMap.get(accountId).addError('doesn\'t work');
    }
}
You were querying the account records back out of the database; those queried rows are separate sObject instances from the trigger context, and the runtime won't allow errors to be flagged on them. addError() only works on the records in Trigger.new and Trigger.newMap.
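And since Trigger.new is already a list of those trigger-context records, you can also skip the map and iterate it directly; a minimal sketch:
trigger tstTrigger2 on Account (after update)
{
    // Trigger.new holds the trigger-context records, so addError() is allowed here.
    for (Account act : Trigger.new)
    {
        act.addError('doesn\'t work');
    }
}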

Related

Is there a Denodo 8 VQL function or line of VQL for throwing an error in a VDP scheduler job?

My goal is to load a cache when there is new data available. Data is loaded into the source table once a day but at an unpredictable time.
I've been trying to set up a data-availability trigger VDP scheduler job as described in this Denodo community post:
https://community.denodo.com/answers/question/details?questionId=9060g0000004FOtAAM&title=Run+Scheduler+Job+Based+on+Value+from+a+Query
The post describes creating a scheduler job that fails whenever the condition is not satisfied. The only way I've found to force an error on certain conditions is to use (1/0), and for some reason this doesn't always work. I was wondering if there is a way to do this with a function, as in normal SQL, but I couldn't find anything in the Denodo documentation.
This is what my code currently looks like:
--Trigger job
SELECT CASE
         WHEN (data_in_cache = current_data)
         THEN 1 % 0
         ELSE 1
       END
FROM database.table;
The cache job waits for the trigger job to be successful so the cache will only load when the data in the cache is outdated. This doesn't always work even though I feel it should.
Hoping someone has a function or line of VQL to make a Denodo VDP scheduler job result in an error.
This would be easy to do by creating a custom function that just throws an exception when executed. It doesn't need to be a plain Exception; you could create your own exception class so it stands out in the error trace. In any case, it could be something like this...
// Annotations from the Denodo custom-element SDK.
import com.denodo.common.custom.annotations.CustomElement;
import com.denodo.common.custom.annotations.CustomElementType;
import com.denodo.common.custom.annotations.CustomExecutor;

@CustomElement(type = CustomElementType.VDPFUNCTION, name = "ERROR_SAMPLE_FUNCTION")
public class ErrorSampleVdpFunction {
    @CustomExecutor
    public CustomArrayValue errorSampleFunction() throws Exception {
        // Always throws, so any job evaluating this function fails.
        throw new Exception("This is an error");
    }
}
So you will use it like:
--Trigger job
SELECT CASE
         WHEN (data_in_cache = current_data)
         THEN errorSampleFunction()
         ELSE 1
       END
FROM database.table;
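Note that the class has to be compiled against the Denodo SDK and imported into Virtual DataPort as an extension (jar file) before the function can be called from VQL; the exact import steps depend on your Denodo version.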

DbUpdateConcurrencyException on inserting a new row in SQL Server using EF Core (expected to affect 1 row(s) but actually affected 0 row(s))

I am trying to insert data into one table using EF Core 5 with the repository pattern and unit of work.
Code sample:
var stateData = new State
{
    StateId = state.StateId,
    Action = state.Action,
    Event = state.Event,
    ExecutedOn = DateTime.Now
};
_unitOfWork.GetRepository<State>().Add(stateData);
var result = _unitOfWork.Commit();
The GetRepository method, used to get the respective repo:
public IRepository<TEntity> GetRepository<TEntity>() where TEntity : class
{
    return (IRepository<TEntity>)GetOrAddRepository(typeof(TEntity),
        new Repository<TEntity>(Context));
}
Commit method:
public int Commit()
{
    return Context.SaveChanges();
}
I am trying to insert data into the State table, which has Id as the primary key and an identity column. The other columns are StateId, Action, Event, and ExecutedOn (data type: datetime2).
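For reference, a hypothetical entity matching that description (property types are assumptions inferred from the column list):
// Hypothetical sketch of the State entity described above; types are assumptions.
public class State
{
    public int Id { get; set; }              // primary key, identity
    public string StateId { get; set; }
    public string Action { get; set; }
    public string Event { get; set; }
    public DateTime ExecutedOn { get; set; } // datetime2 column
}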
The application runs on multiple nodes, so there will be multiple insert requests at the same time from different nodes, but with different data.
I am frequently getting DbUpdateConcurrencyException while inserting state records into the DB. Sometimes it works, but most of the time I get DbUpdateConcurrencyException with the message "Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=527962 for information on understanding and handling optimistic concurrency exceptions".
There is no update operation, but I am still getting a concurrency exception.
I have tried all the other solutions on similar questions, but no luck.
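For reference, the docs linked from the error message suggest catching the exception and inspecting the affected entries; a minimal sketch of that pattern, which has not resolved the underlying issue here:
try
{
    _unitOfWork.Commit();
}
catch (DbUpdateConcurrencyException ex)
{
    // Inspect the entries EF considers stale before deciding how to recover.
    foreach (var entry in ex.Entries)
    {
        Console.WriteLine($"Concurrency conflict on entity {entry.Metadata.Name}");
    }
    throw;
}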

Snowflake Query Killed: "SQL execution canceled"

I've got a Talend job with a couple of dataflows running in parallel against a Snowflake database. An UPDATE statement against Table A is causing an update on Table B to fail with the following error:
Transaction 'uuid-of-transaction', id 'a-very-long-integer-id', is being committed, SQL execution canceled.
Call END_OPERATION(999,'String1','String2','String3','String4','Success','0')
UPDATE TableB
SET BATCH_KEY = 1234,
    LOAD_DT = current_timestamp::timestamp_ntz,
    KEY_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col1))),
    ROW_HASH = MD5(TO_VARCHAR(ARRAY_CONSTRUCT(col2, col3)))
WHERE BATCH_KEY = -1 OR BATCH_KEY IS NULL;
The code for END_OPERATION is here:
var cmd = "CALL END_OPERATION(:1,:2,:3,:4,:5,:6,null);";
try {
    snowflake.execute({
        sqlText: cmd,
        binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS]
            .map(function (param) { return param === undefined ? null : param })
    });
    return "Succeeded.";
}
catch (err) {
    return "Failed: " + err;
}

var cmd = "UPDATE TableA SET OPERATION_STATUS=:6, END_DT=current_timestamp, ROW_COUNT=IFNULL(:7,ROW_COUNT) WHERE BATCH_KEY=:1 AND ENTITY_NAME=:2 AND LAYER_NAME=:3 AND SRC=:4 AND OPERATION_NAME=:5";
try {
    snowflake.execute({
        sqlText: cmd,
        binds: [BATCH_KEY, ENTITY, LAYER, SRC, OPERATION, OPERATION_STATUS, ROW_COUNT]
            .map(function (param) { return param === undefined ? null : param })
    });
    return "Succeeded.";
}
catch (err) {
    return "Failed: " + err;
}
I'm failing to understand why the UPDATE statement against TableB is getting killed. It's getting killed nearly immediately.
Here we need to review the flow of all SQL statements coming from the Talend job within the same session in which the failing SQL command is run, as well as all the statements coming from the other parallel job.
From the Query History we can get the session ID of the session. In the History section of the Snowflake UI we can then search on that session ID, which will list all the commands run through this particular session.
We can review all the commands in chronological order by sorting on the start_date column and try to observe the sequence of SQL statements.
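If you prefer SQL over the UI, something along these lines should list a session's statements (the session id value is a placeholder):
-- List the statements run in one session, oldest first.
SELECT query_id, query_text, start_time, execution_status
FROM TABLE(information_schema.query_history_by_session(
       session_id => 1234567890,
       result_limit => 1000))
ORDER BY start_time;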
Your point is indeed valid that an update on TableA should not affect an update on TableB. However, after reviewing all the statements of both sessions (you mention the Talend job is running a couple of dataflows in parallel), we may come across a SQL statement in one session that took a lock on TableB before the UPDATE command was submitted against it from the other session.
Another thing to review here is how the transaction is managed by the workflow. Within the same list of SQL queries in that session, check for any statement that sets the AUTOCOMMIT parameter at the session level. If AUTOCOMMIT is set to FALSE at the start of the session, the session will not release any of its table locks until an explicit COMMIT is submitted.
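A quick way to check both things from a worksheet, using standard Snowflake commands:
-- Is autocommit overridden for the session?
SHOW PARAMETERS LIKE 'AUTOCOMMIT' IN SESSION;

-- Which transactions currently hold or are waiting on locks?
SHOW LOCKS;
SHOW TRANSACTIONS;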
Since the situation here sounds a bit unusual and complex, we may have to dig a little deeper into the execution logs of both queries, and for that it may be necessary to contact Snowflake support.

apex addError - remove default error message

I have added addError to my record as follows.
v.addError('My Error message');
But I get an error message like the following.
I don't want the default part "Error: Invalid Data. Review all error messages to correct your data".
I tried adding the error to a field instead of to the whole record.
But that implies the error is specific to what the user entered in that field, which is not what I want; my error is record-specific.
In a nutshell, based on some condition I want to stop a record from being inserted. This is to be added to the before insert of my trigger.
Please help.
Try this code for a field-level error message:
trigger AccountTrigger on Account (before insert) {
    for (Account accIns : Trigger.new) {
        if (accIns.Rating == 'Hot') {
            accIns.Rating.addError('My Error message');
        }
    }
}

SQLExecDirect failed but SQLGetDiagRec has no data

I'm trying to set up some useful error handling in a program that uses ODBC. According to the documentation, if SQLExecDirect returns SQL_ERROR, I should be able to call SQLGetDiagRec to get the SQLSTATE and possibly some messages, but in my tests, when I call SQLGetDiagRec right after getting an error from SQLExecDirect, I get SQL_NO_DATA returned and no information.
Code:
result = SQLExecDirect(hstmt, <SQL Statement>, SQL_NTS);
if(result == SQL_ERROR)
{
SQLSMALLINT msg_len = 0;
SQLCHAR sql_state[6], message[256];
SQLINTEGER native_error = 0;
result = SQLGetDiagRec(SQL_HANDLE_DBC, hDbc, 1, sql_state, &native_error, message, countof(message), &msg_len);
// Here 'result' is SQL_NO_DATA
....
}
It works in other cases, just not for SQLExecDirect for some reason. I'm also aware that one should cycle through the SQLGetDiagRec results, but if the very first call returns SQL_NO_DATA, according to the documentation that means there are no further records.
The specific error that I was testing it with was requesting a non-existent table.
Is there anything else that I need to do in order to obtain at least an error code, or do the diagnostics not work for errors that result from incorrect SQL requests?
When you call SQLGetDiagRec, pass SQL_HANDLE_STMT and your statement handle (hstmt in your example). That should return errors associated with that specific statement.
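A minimal sketch of the corrected call, reusing the variables from the question:
SQLSMALLINT msg_len = 0;
SQLCHAR sql_state[6], message[256];
SQLINTEGER native_error = 0;

/* Query diagnostics on the statement handle that failed, not the connection. */
result = SQLGetDiagRec(SQL_HANDLE_STMT, hstmt, 1, sql_state, &native_error,
                       message, (SQLSMALLINT)sizeof(message), &msg_len);
/* For a non-existent table, sql_state is typically "42S02"
   (base table or view not found). */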