WebLogic JDBC Exception: XAER_RMERR - WebLogic 9.x

Has anyone ever come across this kind of exception in WebLogic?
[JDBCExceptionReporter] : Unexpected exception while enlisting XAConnection java.sql.SQLException: XA error: XAResource.XAER_RMERR start() failed on resource 'datasource/tx/olddata': XAER_RMERR : A resource manager error has occured in the transaction branch
We have an application that uses the WebLogic transaction manager to manage transactions. The error happens when the application server takes data from a middleware server: once the data is received, the application server tries to insert it into the database server, but when the transaction is about to be committed it fails with the error above.
Does anyone have any idea what might trigger this kind of error?

This happens when you fire a query across a DB link using an XA-driver datasource. You can try a separate non-XA-driver datasource for the queries that go across the DB link.
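As a rough sketch of what that second datasource could look like in a WebLogic 9.x JDBC module (assuming an Oracle database, since DB links suggest Oracle; the datasource name, JNDI name and connection details below are invented, and the essential parts are the non-XA driver class and the global-transactions-protocol setting):

<jdbc-data-source>
  <name>olddata-nonxa</name>
  <jdbc-driver-params>
    <!-- plain thin driver instead of oracle.jdbc.xa.client.OracleXADataSource -->
    <driver-name>oracle.jdbc.OracleDriver</driver-name>
    <url>jdbc:oracle:thin:@dbhost:1521:ORCL</url>
  </jdbc-driver-params>
  <jdbc-data-source-params>
    <jndi-name>datasource/notx/olddata</jndi-name>
    <!-- keep this datasource out of the global (XA) transaction -->
    <global-transactions-protocol>None</global-transactions-protocol>
  </jdbc-data-source-params>
</jdbc-data-source>

Point the DB-link queries at this JNDI name and keep the XA datasource for the work that genuinely needs the global transaction.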

Related

SSAS: Errors in the metadata file - error trying to update a row in system table 'DataSource'

A few days ago we started getting an error while processing one of our cubes. The cube is processed by a job, which returns the error:
The following system error occurred: Invalid data. Failed to decrypt
sensitive data. Possibly the encryption key does not match or is
inaccessible due to improper service account change. The current
operation was canceled because another operation on the transaction
failed. (Microsoft.AnalysisServices.Core)
In this instance we have other cubes working correctly.
We verified that this cube is the only one that does not have credentials. We have already tried to add the credentials, both by refreshing them and via a script. The first approach completes without errors but nothing changes; with the second approach we get the error:
Failed to encrypt sensitive data. Possibly the encryption key is
inaccessible due to improper service account change. An error occurred
while trying to update a row in a system table 'DataSource' in the
metadata database.
Has anyone had a similar error?
Thanks in advance.

Avoid Deadlock During SSRS Reports Deployment

I wonder if anyone has any suggestions or experience with the same scenario.
We have one server that we use for our SSRS reports. We deploy to multiple folders in SSRS, i.e. Site_1, Site_2, Site_3 ... Site_26.
To each site we deploy roughly 800+ reports. The reports are the same for Site_1 through Site_26 (except when we skip a site).
We use Azure DevOps with the PowerShell ReportingServicesTools module to deploy the reports.
When we start the deployment, several sites fail due to a deadlock, with the error below. The report name and process ID are random and never the same:
##[error]Failed to create item Report.rdl : Failed to create catalog item C:\azagent\A9_work\r5\a\SSRS Reports\Reports\Report.rdl : Exception calling "CreateCatalogItem" with "7" argument(s): "System.Web.Services.Protocols.SoapException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> Microsoft.ReportingServices.Diagnostics.Utilities.ReportServerStorageException: An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. ---> System.Data.SqlClient.SqlException: Transaction (Process ID 100) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
The error is not related to low disk space etc.; we have tested this to death, and it occurs even with just two sites on a monster server. The error is a transaction deadlock.
The only way we can deploy the reports successfully is to deploy them sequentially, one site after the other. However, due to time constraints and business requirements this is not an option.
We have run all the PSSDiag captures etc. and found that the error occurs in the stored procedure "FindObjectsNonRecursive".
We nearly resolved it by adding the (NOLOCK) hint, but that turned out to be only temporary and we are back where we were. Microsoft advised that they would not change the procedure; 18 months down the line they have still not been able to give us a fix or a solution.
I would appreciate feedback from anyone who has run into this problem and overcome it.
Thank you for your time.
Did you try retrying, as the error message suggests? Deadlocks are timing-dependent, so a retried deployment should eventually succeed.

Cannot start/stop cache within lock or transaction with DataStorageConfiguration

I have one server node and one client node. In DataStorageConfiguration, persistence is enabled.
I restarted my server node and am trying to perform operations on the cache, but I get the exception below. It only happens when I use DataStorageConfiguration.
Caused by: class org.apache.ignite.IgniteException: Cannot start/stop cache within lock or transaction. [cacheName=, operation=dynamicStartCache]
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.checkEmptyTransactionsEx(GridCacheProcessor.java:4879)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3460)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3404)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.publicJCache(GridCacheProcessor.java:4416)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.publicJCache(GridCacheProcessor.java:4387)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.checkProxyIsValid(GatewayProtectedCacheProxy.java:1602)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.onEnter(GatewayProtectedCacheProxy.java:1619)
at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:853)
Can you please help?
Ignite Cache Reconnection Issue (Cache is stopped)
I have referred to the link above, listened for the reconnect event, and then called ignite.getOrCreateCache(spaCacheName);
I suspect you should not call e.g. getOrCreateCache within an already-started transaction. Create all of your caches before starting a transaction; the error message is pretty self-explanatory.
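As a minimal sketch of that ordering, shown in C# via the Ignite.NET API to stay consistent with the other code on this page (the Java API is analogous, and the cache name is invented):

using System;
using Apache.Ignite.Core;
using Apache.Ignite.Core.Cache.Configuration;

class CreateCacheBeforeTx
{
    static void Main()
    {
        using (var ignite = Ignition.Start())
        {
            // Start or obtain every cache you need BEFORE any lock or
            // transaction begins; dynamicStartCache is rejected inside one.
            var cache = ignite.GetOrCreateCache<int, string>(
                new CacheConfiguration("spaCache")
                {
                    AtomicityMode = CacheAtomicityMode.Transactional
                });

            using (var tx = ignite.GetTransactions().TxStart())
            {
                cache.Put(1, "value"); // only cache operations inside the tx
                tx.Commit();
            }
        }
    }
}

The same ordering applies after a client reconnect: handle the reconnect event, re-obtain the caches, and only then open new transactions.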

Error 40 and SqlAzureExecutionStrategy

I have a Service Fabric service (guest executable) that uses Entity Framework Core to talk to SQL Azure.
From time to time I see the following error:
A network-related or instance-specific error occurred while establishing a connection
to SQL Server. The server was not found or was not accessible. Verify that the
instance name is correct and that SQL Server is configured to allow remote connections.
(provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
It seems transient, as numerous database transactions complete without errors. It happens more when a node is busy.
I've added code at startup that calls EnableRetryOnFailure to set the SqlServerRetryingExecutionStrategy:
services.AddEntityFrameworkSqlServer()
    .AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(_configuration.GetConnectionString("MyDbConnection"),
            o => o.EnableRetryOnFailure()));
One major caveat: at the moment I'm losing context, so I don't know what data was being updated/inserted, and therefore whether the operation was eventually successful or not.
A couple of questions:
1. From the transient-detection code it doesn't look like error 40 is caught, but my understanding is that error 40 may actually stand in for another error (unclear). Is that correct?
2. Is this really a transient issue, or does it mean I have another problem?
3. Without additional logging (working on it), do we know whether the retry strategy logs the error but still retries, and may in fact have succeeded?
4. If this is a transient error but it's not caught in the default execution strategy, why not? And what would be the unintended consequences of subclassing the SqlAzureExecutionStrategy to include this error?
I've seen this question: Sql Connection Error 40 Under Load, and it feels familiar, but he seems to have resolved it by tuning his database, which I will look at doing; here, though, I'm trying to make my code more resilient in the face of database issues.
Certain versions of EF Core cache the query or requests when the time span between two database transactions is very small, so update your packages to make sure you are using the most recent version.
Query: Threading issues cause NullReferenceException in SimpleNullableDependentKeyValueFactory #5456
Check these other links:
https://github.com/aspnet/EntityFramework/issues/5456
https://github.com/aspnet/Security/issues/739
https://github.com/aspnet/EntityFramework/issues/6347
https://github.com/aspnet/EntityFramework/issues/4120
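On question 4 specifically: rather than subclassing SqlAzureExecutionStrategy, EnableRetryOnFailure has an overload that accepts additional error numbers to treat as transient. A sketch, assuming error 40 really is what you want to retry (which is exactly what question 1 casts doubt on):

services.AddEntityFrameworkSqlServer()
    .AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(_configuration.GetConnectionString("MyDbConnection"),
            o => o.EnableRetryOnFailure(
                maxRetryCount: 6,
                maxRetryDelay: TimeSpan.FromSeconds(30),
                errorNumbersToAdd: new[] { 40 })));  // also retry on error 40

The retrying strategy re-runs the whole failed operation, so the usual caveat about retrying non-idempotent writes applies here as well.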

How to make Dapper resilient for SqlAzure?

I recently found out Entity Framework has a very easy way to make connections resilient in SQL Azure. Is there a recommended way to accomplish the same in Dapper?
The fastest way to protect against connection issues in C# against Azure is the Microsoft Transient Fault Handling Application Block.
For example, the code below retries up to 3 times, with 1-second intervals in between, when attempting to open a connection to a Windows Azure SQL Database:
var retryStrategy = new FixedInterval(3, TimeSpan.FromSeconds(1));
var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(retryStrategy);
retryPolicy.ExecuteAction(() => myConnection.Open());
FixedInterval is the back-off policy: it will try, wait 1 second, try again, and so on, until it has tried 3 times.
SqlDatabaseTransientErrorDetectionStrategy simply checks the exception thrown: if it is a connection exception that should be retried, it tells the RetryPolicy to execute the action again. If it is not, the action is not re-executed and the original exception is thrown as normal.
As for when you should use it with Dapper: you can safely retry opening connections and read operations; however, be wary of retrying write operations, as there is a risk of duplicating inserts, deleting a row twice, etc.
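For example, wrapping a Dapper read in the same policy might look like the sketch below (the Order class, table and query are invented; note the namespace differs between the older standalone Topaz package, Microsoft.Practices.TransientFaultHandling, and Enterprise Library 6 as used here):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;
using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class ResilientDapper
{
    public static IEnumerable<Order> GetOrders(string connectionString, int customerId)
    {
        var retryPolicy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
            new FixedInterval(3, TimeSpan.FromSeconds(1)));

        using (var connection = new SqlConnection(connectionString))
        {
            // Opening the connection and reading are both safe to retry.
            retryPolicy.ExecuteAction(() => connection.Open());

            // Dapper buffers results by default, so the list survives
            // disposing the connection; the generic ExecuteAction overload
            // returns the query result.
            return retryPolicy.ExecuteAction(() =>
                connection.Query<Order>(
                    "SELECT Id, Total FROM Orders WHERE CustomerId = @customerId",
                    new { customerId }));
        }
    }
}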
More detail is available in the block's documentation, and the library can be found as a NuGet package which includes the detection strategies for Windows Azure.