Hangfire job fires but does not execute, yet its status is successful

There is a particular job that I execute with Hangfire; it fires but does not actually run, and it still returns success. It does not show any exception, it just moves straight to Succeeded. What could be the issue?
Below is the call:
ResultsHelper.ImportTermScores(
"C:\HostingSpaces\wwwroot\uploads\JSS 1 J 1st.xlsx",
5,
1,
"2014JSS1J",
52,
"Admin");

Related

LoadRunner - exiting login transaction on failure and performing log off

I'm running a LoadRunner test. Upon a failure at login, or at any other transaction, the script has to fail that transaction and execute the log-off portion of the script.
Note: I have put a text check in place, and using the text check count in an if condition I end the transaction with a fail status when it fails. I also need a way to perform the log off at the point where that if condition fails.
Can anyone share an example of executing the log off when the text check fails?
It depends upon your language choice.
Assuming you have the default language of C with your HTTP virtual user, simply implement a logout function which contains your logout code, and call that function upon failure of your condition. A "return 1;" inside of that if/then conditional will also start a new iteration immediately, "return 0;" goes to a new iteration with the pacing respected, and "return -1;" kills the virtual user altogether. A sketch of this pattern follows.
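A minimal sketch for a C-based HTTP Vuser; the function name logoff(), the URLs, the "Welcome" text, and the login_ok parameter are illustrative assumptions, not taken from the original script:

int logoff()
{
    // Hypothetical log-off step; replace the URL with the real log-off request.
    lr_start_transaction("logoff");
    web_url("logout", "URL=http://example.com/logout", LAST);
    lr_end_transaction("logoff", LR_AUTO);
    return 0;
}

Action()
{
    // Register the text check before the login request; SaveCount stores the
    // number of matches in the {login_ok} parameter.
    web_reg_find("Text=Welcome", "SaveCount=login_ok", LAST);

    lr_start_transaction("login");
    web_url("login", "URL=http://example.com/login", LAST);

    if (atoi(lr_eval_string("{login_ok}")) == 0) {
        lr_end_transaction("login", LR_FAIL);
        logoff();       // perform the log off before abandoning the iteration
        return 1;       // start a new iteration immediately
    }

    lr_end_transaction("login", LR_PASS);
    return 0;
}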

Is there a Denodo 8 VQL function or line of VQL for throwing an error in a VDP scheduler job?

My goal is to load a cache when there is new data available. Data is loaded into the source table once a day but at an unpredictable time.
I've been trying to set up a data availability trigger VDP scheduler job as described in this Denodo community post:
https://community.denodo.com/answers/question/details?questionId=9060g0000004FOtAAM&title=Run+Scheduler+Job+Based+on+Value+from+a+Query
The post describes creating a scheduler job that fails whenever the condition is not satisfied. Now, the only way I've found to force an error under certain conditions is to use something like (1/0), and this doesn't always work for some reason. I was wondering if there is a way to do this with a function, as in normal SQL, but I couldn't find anything in the Denodo documentation.
This is what my code currently looks like:
--Trigger job
SELECT CASE
         WHEN (data_in_cache = current_data)
         THEN 1 % 0
         ELSE 1
       END
FROM database.table;
The cache job waits for the trigger job to be successful so the cache will only load when the data in the cache is outdated. This doesn't always work even though I feel it should.
Hoping someone has a function or line of VQL to make Denodo scheduler VDP job result in an error.
This would be easy to do by creating a custom function that, when executed, simply throws an exception. It doesn't need to be a plain Exception; you could create your own exception type so that it shows up in the error trace. In any case, it could be something like this:
@CustomElement(type = CustomElementType.VDPFUNCTION, name = "ERROR_SAMPLE_FUNCTION")
public class ErrorSampleVdpFunction {

    @CustomExecutor
    public CustomArrayValue errorSampleFunction() throws Exception {
        throw new Exception("This is an error");
    }
}
So you will use it like:
--Trigger job
SELECT CASE
         WHEN (data_in_cache = current_data)
         THEN errorSampleFunction()
         ELSE 1
       END
FROM database.table;

Snowflake: a working procedure is not successfully executed when called from a scheduled task

I have a procedure on Snowflake that runs fully, with no errors at all, when I call it directly:
call ADD_MONTHLY_OBSERVATION_VALUES('@test_azure_blob_stg/Monthly_Report.csv', 'GENERIC_CSV_FORMAT');
I wanted to wrap this command in a task and schedule it for a specific time, like so:
CREATE OR REPLACE TASK ADD_MONTHLY_OBSERVATION_VALUES_TASK
WAREHOUSE = 'DEV_WH'
TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
//SCHEDULE = 'USING CRON 0 6-7 * * SUN,MON,TUE,WED,THU Asia/Dacca'
//Schedule for each minute
SCHEDULE = 'USING CRON * * * * * UTC'
AS
call ADD_MONTHLY_OBSERVATION_VALUES('@test_azure_blob_stg/Monthly_Report.csv', 'GENERIC_CSV_FORMAT');
Then I resumed the task:
ALTER TASK ADD_MONTHLY_OBSERVATION_VALUES_TASK RESUME;
When I checked the history of the task:
STATE: SUCCEEDED
ERROR_CODE: NULL
ERROR_MESSAGE: NULL
QUERY_START_TIME: 2021-01-27 16:00:06.198 -0800
COMPLETED_TIME: 2021-01-27 16:00:24.902 -0800
RETURN_VALUE: NULL
Actually, the procedure returns the string DONE when everything has been successfully added/updated.
When running:
show tasks;
The task's state is started.
Why is the procedure not being executed when it is called from the task?
There was new data uploaded to the staged file from Azure, so the procedure should detect the new submissions and start the insert process.
If you run the procedure outside the task, you have to use the task owner role to ensure correct testing. If your procedure works outside the task with the task owner's privileges, it should also work within the task.
So... I assume there is a problem with your access/rights management. SQL statements executed by the task can only operate on Snowflake objects on which the task owner role has the required privileges, so you have to grant the privileges on the objects used within your procedure to your task owner role, for example along the lines sketched below.
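A rough sketch of the kind of grants involved, assuming the task owner role is called TASK_OWNER_ROLE and the target table is OBSERVATION_VALUES; the role, database, schema and table names are placeholders, while the warehouse, stage and procedure names come from the question:

-- Placeholder role/object names; adjust to your environment.
GRANT USAGE ON DATABASE MY_DB TO ROLE TASK_OWNER_ROLE;
GRANT USAGE ON SCHEMA MY_DB.MY_SCHEMA TO ROLE TASK_OWNER_ROLE;
GRANT USAGE ON WAREHOUSE DEV_WH TO ROLE TASK_OWNER_ROLE;
GRANT USAGE ON PROCEDURE MY_DB.MY_SCHEMA.ADD_MONTHLY_OBSERVATION_VALUES(VARCHAR, VARCHAR) TO ROLE TASK_OWNER_ROLE;
GRANT READ ON STAGE MY_DB.MY_SCHEMA.TEST_AZURE_BLOB_STG TO ROLE TASK_OWNER_ROLE;
GRANT SELECT, INSERT, UPDATE ON TABLE MY_DB.MY_SCHEMA.OBSERVATION_VALUES TO ROLE TASK_OWNER_ROLE;
GRANT EXECUTE TASK ON ACCOUNT TO ROLE TASK_OWNER_ROLE;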
See more here: https://docs.snowflake.com/en/sql-reference/sql/create-task.html

Spring Batch JMSItemReader giving duplicate data in session transacted mode

I have a Spring Batch job which has a single step. I am using JmsItemReader, where the JmsTemplate is session transacted, and my writer just performs some business logic. Whenever an exception occurs and the retries are exhausted, the chunk is automatically retried item by item with a size of 1.
I am defining the step like this:
stepBuilderFactory.get("step")
.<String, String> chunk(10)
.reader(reader())
.processor(processor)
.writer(writer)
.faultTolerant()
.processorNonTransactional()
.retry(SomeException.class)
.retryLimit(2)
.backOffPolicy(backOffPolicy)
.skip(SomeException.class)
.skipLimit(Integer.MAX_VALUE)
.build();
The issue I am facing is something like this:
Input is : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Items in batch 1, 2, 3, 4, 5
Exception occurs in writer
Retries 2 times and the retries are exhausted
Now it will try the items one by one, like this:
item - 1 - Error
item - 2 - Success
item - 3 - Error
item - 4 - Error
item - 5 - Success
As errors occurred, items 1, 3, 4 are skipped and 2, 5 are successfully processed
Here is the issue: next I should get 6, 7, 8, 9, 10 as the batch for processing, but I am getting 1, 2, 3, 4, 5 as the batch again, and it keeps executing infinitely.
Note: it works fine when sessionTransacted is false, but in that case messages are not rolled back to the ActiveMQ queue when an exception occurs.
Any help is appreciated.
I think this is valid behavior. Since the transaction is rolled back and the messages are not removed from the queue, the same messages are available for the next read. And because your skip limit is Integer.MAX_VALUE, it will retry nearly indefinitely. I believe you need to configure a dead letter queue for the queue you are reading from, so that after a certain number of redeliveries a corrupt/invalid message is moved to the DLQ for manual intervention instead of being redelivered to the listener again. A broker-side sketch of that redelivery configuration follows.
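A minimal sketch of a redelivery policy on the consumer's connection factory, assuming an ActiveMQ broker at tcp://localhost:61616 (the broker URL and class name are assumptions); once the redelivery limit is exceeded, the broker moves the message to its default dead letter queue (ActiveMQ.DLQ) instead of handing it back to the reader:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;
import org.springframework.jms.core.JmsTemplate;

public class JmsConfigSketch {

    public JmsTemplate sessionTransactedJmsTemplate() {
        // Broker URL is an assumption for illustration.
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Limit redeliveries so a poison message ends up in the dead letter
        // queue instead of being re-read forever after each rollback.
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setMaximumRedeliveries(3);
        policy.setInitialRedeliveryDelay(1000L);
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);

        JmsTemplate jmsTemplate = new JmsTemplate(factory);
        jmsTemplate.setSessionTransacted(true);
        jmsTemplate.setReceiveTimeout(1000L);
        return jmsTemplate;
    }
}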

Doctrine deadlock with ORM updates

I'm trying to figure out what is causing deadlocks in my Symfony 2 application. I'm running a cronjob that does batch-updates on a fairly large dataset and one part of it causes this error:
Doctrine\DBAL\DBALException: An exception occurred while executing
'UPDATE SpotEvent SET ts = ?, current = ? WHERE id = ?' with params
["2015-12-28 00:35:27", 1, 39316]: SQLSTATE[40P01]: Deadlock
detected: 7 ERROR: deadlock detected DETAIL: Process 32030 waits for
ShareLock on transaction 2130787; blocked by process 32029. Process
32029 waits for ShareLock on transaction 2130786; blocked by process
32030. HINT: See server log for query details. CONTEXT: while updating tuple (105,68) in relation "spotevent" (uncaught exception)
at
/home/maf/symfony/vendor/doctrine/dbal/lib/Doctrine/DBAL/DBALException.php
line 91 while running console command
The code causing it is basically this:
check event
if (already in database) {
    update timestamp
} else {
    create new
}
From what I see in the error, the first branch causes the deadlock, but from what I read about deadlocks, the second should be more likely. In any case I don't understand why I have a deadlock at all.
I should say I am running this job in 6 parallel processes. However, there is no overlap between them (i.e. job 1 is checking ids 1-200, job 2 ids 201-400, etc.)
I'm using PostgreSQL as the database backend. My "check event" step is done using DQL, everything else is pure ORM.
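For reference, the circular wait named in that error message is the classic result of two transactions locking the same pair of rows in opposite order. A hypothetical reconstruction of that pattern (the ids are invented, not the real job's data) looks like this:

-- Session 1
BEGIN;
UPDATE SpotEvent SET ts = now(), current = 1 WHERE id = 100;

-- Session 2
BEGIN;
UPDATE SpotEvent SET ts = now(), current = 1 WHERE id = 200;

-- Session 1 (blocks: row 200 is already locked by session 2)
UPDATE SpotEvent SET ts = now(), current = 1 WHERE id = 200;

-- Session 2 (blocks on row 100 -> PostgreSQL aborts one transaction
-- with SQLSTATE 40P01 "deadlock detected")
UPDATE SpotEvent SET ts = now(), current = 1 WHERE id = 100;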