Static Hangfire RecurringJob methods in LINQPad are not behaving - hangfire

I have a script in LINQPad that looks like this:
var serverMode = EnvironmentType.EWPROD;
var jobToSchedule = JobType.ABC;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);

Action<string, string, XElement> createOrReplaceJob =
    (jobName, cronExpression, inputPackage) =>
    {
        RecurringJob.RemoveIfExists(jobName);
        RecurringJob.AddOrUpdate(
            jobName,
            () => new BTR.Evolution.Hangfire.Schedulers.JobInvoker().Invoke(
                jobName,
                inputPackage,
                null,
                JobCancellationToken.Null),
            cronExpression, TimeZoneInfo.Local);
    };

// pseudo code to prepare inputPackage for client ABC...
createOrReplaceJob("ABC.CustomReport.SurveyResults", "0 2 * * *", inputPackage);

JobStorage.Current.GetConnection().GetRecurringJobs()
    .Where(j => j.Id.StartsWith(jobToSchedule.ToString()))
    .Dump("Scheduled Jobs");
I have to schedule in both QA and PROD. To do that, I toggle the serverMode variable and run the script once for EWPROD and once for EWQA. This all worked fine until recently; unfortunately, I don't know exactly when it changed, because I don't always have to run in both environments.
I did purchase/install LINQPad 7 two days ago to look at some C# 10 features and I'm not sure if that affected it.
But here is the problem/flow:
Run it for EWQA and everything works.
Run it for EWPROD and the script (Hangfire components) seem to run in a mix of QA and PROD.
When I'm running it the 'second time' in EWPROD I've confirmed:
The hangfireCs (connection string) is right (pointing to PROD) and it is assigned to JobStorage.Current
The query at the end of the script, JobStorage.Current.GetConnection().GetRecurringJobs() uses the right connection.
The RecurringJob.* methods inside the createOrReplaceJob Action use the connection from the previous run (i.e. EWQA). If I monitor my QA Hangfire db, I see the job removed and added.
Temporary workaround:
Run it for EWQA and everything works.
Restart LINQPad or use the 'Cancel and Reset All Queries' command
Run it for EWPROD and now everything works.
So I'm at a loss as to where the issue might lie. I feel like my upgrade/install of LINQPad 7 might be causing problems, but I'm not sure if there is a different way to make the RecurringJob.* static methods use the 'updated' connection string.
Any ideas on why the restart or reset is now needed?
LINQPad - 5.44.02
Hangfire.Core - 1.7.17
Hangfire.SqlServer - 1.7.17

This is caused by your script (or a library that you call) caching something statically, and not cleaning up between executions.
Either clear/dispose objects when you're done (e.g., JobStorage.Current?) or tell LINQPad not to re-use the process between executions, by adding Util.NewProcess=true; to your script.
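A minimal sketch of the second suggestion, assuming the rest of the script stays as posted in the question:

// Tell LINQPad not to re-use this process between executions, so Hangfire's
// static JobStorage.Current (and anything else cached statically) cannot
// leak from an EWQA run into a subsequent EWPROD run.
Util.NewProcess = true;

var serverMode = EnvironmentType.EWPROD;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);
// ... rest of the script as above ...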

Related

Run "apoc.periodic.commit" inside "apoc.periodic.repeat"

I would like to run, every x seconds, a script that uses apoc.periodic.commit. The natural way for me would be to put it inside apoc.periodic.repeat.
But if I do something like this:
CALL apoc.periodic.repeat(
  "test",
  "
    CALL apoc.periodic.commit(
      '
        CREATE (a:TEST)
        WITH a LIMIT 1
        RETURN 0;
      '
    )
  ",
  1
);
Neo4j starts executing it... but never finishes (it doesn't insert the TEST node). And if I stop the periodic job and register a new one (one without apoc.periodic.commit that I know works), the new one doesn't run either... I need to restart Neo4j for it to work again.
Any suggestion on what I am doing wrong?

DB doesn't respond after N calls

My web app calls (via MyBatis) the DB (SQL Server 2008) on WebSphere. Many times it blocks after N calls (N selects) and the web app doesn't respond. Then I have to restart WebSphere.
for (Integer idMovimento : idMovimenti)
{
    try {
        Movimenti tmpMov = selectByPrimaryKeyForMovimentiRAC(idMovimento);
        scartInputParam.setIdMovimento(idMovimento);
        List<Scarto> tmpSc = movimentiDAO.selectScarti(scartInputParam);
        // ...
    } catch (Exception e) {
        // ...
    }
}
It blocks in the last select every time.
Any suggestion?
Do I have to configure WAS, or are there other tricks?
And can I retry the query after some time?
SOLVED
I changed my query by adding a join to the select; this way I avoid calling the database N times and put all the results into a list. Then I worked with the list in Java without any needless calls.

Elasticsearch throws 'ElasticsearchIllegalStateException' part way through tests

I've got a large Groovy application with a lot of JUnit integration tests (256), most of which use 'com.github.tlrx.elasticsearch-test', version '1.2.1', to run Elasticsearch locally.
Part way through running all of the test classes, every test that uses Elasticsearch starts throwing an 'ElasticsearchIllegalStateException' with the message 'Failed to obtain node lock, is the following location writable?: [./target/elasticsearch-test/data/cluster-test-kiml42s-MacBook-Pro.local]'.
If I run any of these classes alone, it works fine.
This is my initialising code, run in all the @Before methods:
esSetup = new EsSetup();
CreateIndex createIndex = createIndex(index)
for (int i = 0; i < types.size(); i++) {
    createIndex.withMapping(types[i], fromClassPath(mappings[i]))
}
esSetup.execute(deleteAll(), createIndex)
client = esSetup.client()
And this is my teardown code, run in the @After methods:
client.admin().indices().prepareDelete(index).get()
This problem doesn't seem to happen on our build server, so it's only annoying and inconvenient, not a serious problem, but any help would be most appreciated. Thanks.
This problem seems to have been caused by leaving the test nodes active while JUnit ran through all the tests - eventually it stopped being able to create new nodes. The solution is to call esSetup.terminate() in the @After to destroy the nodes at the end of each test.
Here's an example of it being used correctly: https://gist.github.com/tlrx/4117854

Getting "Can not convert the given object to query." with ColdFusion ORM

This is happening intermittently (usually at start up). I get the above error message when executing the following code.
var arr = ORMExecuteQuery( "FROM priority WHERE active = 1 ORDER BY sortOrder" );
var qry = entityToQuery( arr );
The first line executes fine, but the second line blows up. The solution is to run ormreload();
The problem keeps coming up in an unpredictable way, though, even when no changes have been made to the beans or gateways that are using ORM. It is completely unpredictable and impossible to replicate on purpose. Is there something else that can mess with the Hibernate mappings that could cause this type of problem?
Other info that may be pertinent:
This is a MURA plugin based on a recent version of FW/1.
ormreload() is a persistent fix (until it fails again)
My current solution is to put ormreload() in the setupApplication() method of application.cfc
I just want to understand better what could be causing this problem.

"This SqlTransaction has completed; it is no longer usable."... configuration error?

I've been working on this for about a day and a half now, and searched numerous blogs and help articles on the Web. I found several questions on SO related to this error, but I didn't think they quite applied to my situation (or in some cases, unfortunately, I couldn't understand them well enough to implement :P). I'm not sure I can describe this well enough for help... but here goes:
We have a .NET app to track our resources. There's an export function to copy a resource to the time tracking system and the billing system; this accesses a stored procedure that links to the time and billing databases.
I recently moved the billing system database to a new server (original server: Server 2003 SP2, SQL 2005; new server: Server 2008 R2, SQL 2008 R2). I have a Linked Server set up that points to the 2008 databases. I updated the stored procedure to point to the 2008 server, and then I got an error about MSDTC and RPC (http://www.safnet.com/writing/tech/archives/2007/06/server_myserver.html). I enabled 'rpc/rpc out' on the Linked Server and set MSDTC to allow Network Access (something like this: http://www.sqlwebpedia.com/content/msdtc-troubleshooting).
Now I'm getting the above, when I try to run the export function: "This SqlTransaction has completed; it is no longer usable." What seems odd to me is that when I just run the stored procedure (from SSMS), it says it completes successfully.
Has anyone seen this before? Have I missed something in the configuration? I keep going over the same pages, and the only thing I found was that I didn't reboot after making the MSDTC changes (mentioned in here: http://social.msdn.microsoft.com/forums/en-US/adodotnetdataproviders/thread/7172223f-acbe-4472-8cdf-feec80fd2e64/).
I can post part or all of the stored procedure, if it would help... please let me know.
I believe this error message is due to a "zombie transaction".
Look for possible areas where the transaction is being committed twice (or rolled back twice, or rolled back and committed, etc.). Does the .NET code commit the transaction after the SP has already committed it? Does the .NET code roll it back on encountering an error, then attempt to roll it back again in a catch (or finally) clause?
It's possible an error condition was never being hit on the old server, and thus the faulty "double rollback" code was never hit. Maybe now you have a situation where there is some configuration error on the new server, and now the faulty code is getting hit via exception handling.
Can you debug into the error code? Do you have a stack trace?
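As a purely hypothetical illustration of the pattern to look for (the stored procedure name and connection handling below are invented, not taken from the question):

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    var transaction = connection.BeginTransaction();
    try
    {
        var command = new SqlCommand("dbo.ExportResource", connection, transaction)
        {
            CommandType = CommandType.StoredProcedure
        };
        command.ExecuteNonQuery();   // suppose the SP commits (or rolls back) internally
        transaction.Commit();        // second completion -> "This SqlTransaction has completed..."
    }
    catch
    {
        transaction.Rollback();      // same error if the transaction was already rolled back
        throw;
    }
}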
I had this recently after refactoring in a new connection manager. A new routine accepted a transaction so it could be run as part of a batch; the problem was with a using block:
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    using (transaction.Connection)
    {
        using (transaction)
        {
            return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
        }
    }
}
It looks as though the outer using was closing the underlying connection, thus any attempt to commit or roll back the transaction threw up the message "This SqlTransaction has completed; it is no longer usable."
I removed the usings, added a covering test, and the problem went away.
public IEnumerable<T> Query<T>(IDbTransaction transaction, string command, dynamic param = null)
{
    return transaction.Connection.Query<T>(command, new DynamicParameters(param), transaction, commandType: CommandType.StoredProcedure);
}
Check for anything that might be closing the connection while inside the context of a transaction.
Had the exact same problem and just could not find the right solution.
Hope this helps somebody.
I have a .NET Core 3.1 WebApi with EF Core. Upon receiving multiple calls at the same time, the application was trying to add and save changes to the database at the same time.
In my case the problem was that the table the data would be saved in did not have a primary key set.
Somehow EF Core missed, when the migration was run from the application, that the Id in the model was supposed to be a primary key.
I found the problem by opening the SQL Profiler and seeing that all transactions were successfully submitted to the database (from the application) but only one new row was created. The profiler also showed that some type of deadlock was happening, but I couldn't see much more in the trace logs of the profiler.
On further inspection I noticed that the primary key identifier was missing on the column "Id".
The exceptions I got from my application were:
"This SqlTransaction has completed; it is no longer usable."
and/or
"An exception has been raised that is likely due to a transient failure. Consider enabling transient error resiliency by adding 'EnableRetryOnFailure()' to the 'UseSqlServer' call."
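For reference, a rough sketch with hypothetical model and context names; the real fix here was the missing primary key, and EnableRetryOnFailure() only addresses the transient-failure message:

using Microsoft.EntityFrameworkCore;
using System.ComponentModel.DataAnnotations;

public class Item
{
    [Key]                // make the primary key explicit so the migration creates it
    public int Id { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Item> Items { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options) =>
        options.UseSqlServer(
            "<connection string>",
            sql => sql.EnableRetryOnFailure());   // the option suggested by the second exception
}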
I had the same problem. This error occurs because of connection pooling. When two or more users access the system, connection pooling reuses a connection, and the transaction along with it. If the first user executes a commit or rollback, the transaction is no longer usable.
I recently ran across a similar situation. To debug in any VS IDE version, open the exception settings from Debug (Ctrl + D, E), check all the checkboxes in the "Thrown" column, and run the application in debug mode. I realized that one of the tables was not imported properly into the new database, so an internal SQL exception was killing the connection, which results in this error.
The gist of the story is: if previously working code returns this error on a new database, it could be a missing-database-schema issue, which you can confirm with the debugging tip above.
Hope It Helps,
HydTechie
Also check for any long-running processes executed from your .NET app against the DB. For example, you may be calling a stored procedure or query which does not have enough time to finish, which can show up in your logs as:
"Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
"This SqlTransaction has completed; it is no longer usable."
Check the command timeout settings.
Try to run a trace (profiler) and see what is happening on the DB side...
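As a minimal sketch, if the long-running call is plain ADO.NET, the command timeout can be raised like this (the procedure name and timeout value are hypothetical):

using (var connection = new SqlConnection("<connection string>"))
using (var command = new SqlCommand("dbo.LongRunningExport", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.CommandTimeout = 300;   // seconds; the default is 30
    connection.Open();
    command.ExecuteNonQuery();
}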
In my case the problem was that one of the queries included in the transaction was raising an exception, and even though the exception was "gracefully" handled, it still managed to roll back the entire transaction.
My pseudo-code was like:
var transaction = connection.BeginTransaction();
for (all the lines in a file)
{
    try
    {
        InsertLineInTable(); // INSERT statement might fail and throw an exception
    }
    catch
    {
        // notify the user about the error on line x and continue
    }
}

// Commit and Rollback will fail if one of the queries
// in InsertLineInTable threw an exception
if (CheckTableForErrors())
{
    transaction.Commit();
}
else
{
    transaction.Rollback();
}
Here is a way to detect a zombie transaction:
SqlTransaction trans = connection.BeginTransaction();

// some db calls here

if (trans.Connection != null) // detecting a zombie transaction
{
    trans.Commit();
}
Decompiling the SqlTransaction class, you will see the following
public SqlConnection Connection
{
    get
    {
        if (this.IsZombied)
            return (SqlConnection) null;
        return this._connection;
    }
}
I noticed that if the connection is closed, the transaction becomes a zombie and thus cannot Commit.
In my case, it was because I had the Commit() inside a finally block, while the connection was in the try block. This arrangement caused the connection to be disposed and garbage collected before the commit ran. The solution was to put Commit() inside the try block instead.
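Roughly, the arrangement that avoids the problem looks like this (a sketch, not the original code; the connection string is a placeholder):

using (var connection = new SqlConnection("<connection string>"))
{
    connection.Open();
    var transaction = connection.BeginTransaction();
    try
    {
        // ... database work using the transaction ...
        transaction.Commit();      // commit while the connection is still alive
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
    // nothing in a finally block touches the transaction
}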
For what it's worth, I've run into this on what was previously working code. I had added SELECT statements in a trigger for debug testing and forgot to remove them. Entity Framework / MVC doesn't play nicely when other stuff is output to the "grid". Make sure to check for any rogue queries and remove them.
In my case, I had some code that needed to execute after committing the transaction, in the same try-catch block. One piece of that code threw an error, and the try block handed the error over to its catch block, which contains the transaction rollback.
That will show a similar error. For example, look at the code structure below:
SqlTransaction trans = null;
try
{
    trans = Con.BeginTransaction();
    // your codes
    trans.Commit();
    // your codes having errors
}
catch (Exception ex)
{
    trans.Rollback(); // transaction roll back
    // error message
}
finally
{
    // connection close
}
Hope it will help someone :)