Database Sync failing after a day - azure-sql-database

We set up database sync between two databases on the same server. It worked fine yesterday and then stopped working today. Thinking it might be a connection limit, I tried killing connections to the database and stopping the web apps that are connected to it. I also reset the username and password after verifying the connection details are correct.
This is the error we're getting:
Database re-provisioning failed with the exception "The current operation could not be completed because the database is not provisioned for sync or you do not have permissions to the sync configuration tables." For more information, provide tracing ID 'b4b76a8c-38ae-4b48-ad08-6c07933c23c1' to customer support.

The error log indicates that the previous provisioning operation had not completed yet, so you were not able to re-provision the sync group at the same time.
May I know whether you are still experiencing this problem? If so, could you please provide the latest error log? I'll update the answer then.

Related

The concurrent snapshot for publication xxx is not available because it has not been fully generated

I'm having trouble running replication on SQL Server 2019. In Replication Monitor, in the Distributor To Subscriber History section, I get this message:
The concurrent snapshot for publication xxx is not available because
it has not been fully generated or the Log Reader Agent is not running
to activate it. If the generation of the concurrent snapshot was
interrupted, the Snapshot Agent for the pub.
Is this message the reason the replication I'm setting up isn't running? I have tried various fixes I found on the internet, and nothing worked.
Does anyone have a solution?

Redis StackExchange Client - Frequently receiving "Timeout exceptions", "Redis connection exception", "No connection available to service"

I am frequently getting the errors mentioned below; the DLL version used in the project is 1.0.488.0:
System.TimeoutException: Timeout performing GET
StackExchange.Redis.RedisConnectionException: No connection is available to service this operation: GET
No connection is available to service this operation: EXISTS
Can anyone help me figure out what the issue might be?
Have also created an issue on StackExchange's GitHub repo for the same.
It looks like your connection broke. When it did, any commands already sent to Redis would have timed out on the client application, even though they could still have executed on the server. If you upgrade to a later version of the StackExchange.Redis client, you will get richer diagnostic information about the state of the thread pool, CPU, etc. on the client application side at the time of the failure.
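The question uses the .NET StackExchange.Redis client, but the general client-side pattern is language-agnostic: when a connection breaks, only idempotent reads (GET, EXISTS) are safe to retry blindly, because a command that timed out on the client may still have executed on the server. A minimal Python sketch of that pattern, using a hypothetical `ConnectionLost` exception and `flaky_get` function as stand-ins for the real client library:

```python
import time

class ConnectionLost(Exception):
    """Stand-in for the client library's connection/timeout errors."""

def retry_idempotent(op, attempts=3, backoff=0.05):
    """Retry an idempotent operation (e.g. GET/EXISTS) after a broken connection.

    Only idempotent reads should be retried this way: a write that timed out
    client-side may already have taken effect on the Redis server.
    """
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except ConnectionLost:
            if attempt == attempts:
                raise  # out of attempts; surface the error to the caller
            time.sleep(backoff * attempt)  # simple linear backoff

# Hypothetical flaky GET: fails twice, then succeeds.
calls = {"n": 0}
def flaky_get():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionLost("No connection is available to service this operation: GET")
    return b"value"

print(retry_idempotent(flaky_get))  # prints b'value' after two retries
```

This is a sketch of the retry idea only; the real fix in the answer above (upgrading the client for better diagnostics, and investigating why connections break) still applies.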

Google Cloud SQL Second generation -> "Aborted connection"

I am running a Java + JPA/Hibernate application on App Engine. I switched my database from a first-generation Google Cloud SQL instance to a second-generation one, and now I get a lot of these errors:
2017-05-20T22:49:53.533247Z 2235 [Note] Aborted connection 2235 to db:
'mydb' user: 'root' host: 'cloudsqlproxy~myip'
(Got an error reading communication packets)
As far as I can tell, most of these errors occur during database requests inside task queue tasks.
This did not happen with the first generation. How can this be avoided?
The "Aborted connection nnnn to db:" message is triggered when an existing connection is terminated improperly. Most aborted connections happen because the connection was not terminated correctly, or because of a networking problem between the server and the client, as described in Google's documentation.
I advise you to follow Google's documentation on managing Cloud SQL connections, paying particular attention to the "connection pools" section and, of course, the "opening and closing connections" section.
"Managing database connections" covers closing connections properly. However, in my case the error still occurs when I use a GCP Cloud Function to connect to GCP Cloud SQL.
A Google Groups thread says that unless you use NullPool or dispose of the engine explicitly, the error message will always occur; it also advises against calling engine.dispose().
So I wonder: what is the best way to release connection-pool resources without generating error messages on Cloud SQL?
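For short-lived runtimes like Cloud Functions, the "open and close connections properly" advice usually amounts to: open the connection when the invocation needs it, and guarantee it is closed before the invocation ends, so the server never sees an improperly terminated connection when the runtime is frozen or killed. A minimal Python sketch of that per-invocation pattern, using sqlite3 purely as a stand-in for a real Cloud SQL driver (the names `DB_PATH` and `handle_request` are hypothetical):

```python
import sqlite3
from contextlib import closing

DB_PATH = ":memory:"  # stand-in for a real Cloud SQL connection target

def handle_request(query, params=()):
    """Per-invocation pattern: open a connection, use it, always close it.

    contextlib.closing guarantees conn.close() runs even on error, and the
    inner `with conn:` commits on success / rolls back on exception.
    """
    with closing(sqlite3.connect(DB_PATH)) as conn:
        with conn:
            return conn.execute(query, params).fetchall()

rows = handle_request("SELECT 1 + 1")
```

Whether per-invocation connections or a small pool is better depends on invocation rate; the pattern above trades connection reuse for a clean shutdown on every call.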

Is it required to start MSDTC service on database server along with web server? Also should it be running on mirroring server too?

My project supports nested transactions, so we have the MSDTC service running on the web server as well as on the database server. The project is working fine. However, we have database mirroring established on the database server, so whenever failover happens, site pages that use nested transactions throw an error:
The operation is not valid for the state of the transaction.
We have the MSDTC service running on the mirroring database server too. Please suggest what should be done to overcome this problem.
In the default DTC setup, it is the DTC of the server that initiates the transactions (the web server in your case) that coordinates them. When the first database server goes down, it rolls back its current transaction and notifies the transaction coordinator, which is why you get the error: the web server cannot commit the transaction because at least one participant has voted for a rollback.
I don't think you can get around that. What your web server should do is retry the complete transaction. The database calls would then be handled by the mirror server and should succeed.
That is at least my opinion. I'm no authority on distributed transactions, nor on database clusters with automatic failover...
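The "retry the complete transaction" advice can be sketched as a small retry loop: the whole unit of work is re-executed from the top, never just the failed statement, because the coordinator has already rolled the earlier attempt back. A hedged Python sketch, with `TransientTransactionError` and `unit_of_work` as hypothetical stand-ins for the real error and business logic:

```python
class TransientTransactionError(Exception):
    """Stand-in for 'The operation is not valid for the state of the transaction.'"""

def run_with_retry(unit_of_work, attempts=3):
    """Re-run the COMPLETE transactional unit of work after a failover.

    Each retry starts a fresh transaction; partial results of the failed
    attempt were already rolled back by the coordinator.
    """
    for attempt in range(1, attempts + 1):
        try:
            return unit_of_work()
        except TransientTransactionError:
            if attempt == attempts:
                raise  # failover did not settle in time; give up

# Hypothetical unit of work: fails once (mid-failover), then commits.
state = {"failures_left": 1, "committed": 0}
def unit_of_work():
    if state["failures_left"] > 0:
        state["failures_left"] -= 1
        raise TransientTransactionError()
    state["committed"] += 1
    return "committed"

print(run_with_retry(unit_of_work))  # prints committed
```

Only retry when the unit of work is safe to repeat end-to-end; if it has external side effects (emails, payments), those need their own idempotency handling.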

ORA-07445 access violation

I get this error when running a large query on Oracle. Any advice?
I'm using PL/SQL on Oracle 10.2.
I have noticed that the error is related to a view that is based on many tables: when I select from this view with a specific parameter in a WHERE condition, I get the error. When I checked the logs I found this:
ORA-07445: access violation
So it is due to something in the view. I have full rights on the tables that I'm creating the view from. And I'm not using any network; the database is on my machine.
Thanks.
From the useful oerr command:
$ oerr ora 3113
03113, 00000, "end-of-file on communication channel"
// *Cause: The connection between Client and Server process was broken.
// *Action: There was a communication error that requires further investigation.
// First, check for network problems and review the SQL*Net setup.
// Also, look in the alert.log file for any errors. Finally, test to
// see whether the server process is dead and whether a trace file
// was generated at failure time.
So the likeliest causes:
The server process you were connected to crashed.
A network problem broke your connection.
Someone manually killed the process on the server you were connected to.
When the server process you were connected to crashed, it threw an ORA-07445. That error and ORA-00600 are relatively famous Oracle errors. They're functionally unhandled exceptions: ORA-00600 is an unhandled exception in the Oracle code, whereas ORA-07445 is a fatal signal from the OS, generally raised because Oracle did something the OS didn't approve of, so the OS killed the Oracle process.
Oracle's support site (http://metalink.oracle.com) has an online troubleshooter for these errors -- search within metalink for document 600.1, and enter the appropriate information from the log file and you might receive some useful troubleshooting information.
This usually happens when something is killed at the database server OS level, but it is a fairly generic error. In my specific world, I'll see this in an application server log on machine A if the database server on machine B is shut down. In your case, your desktop is losing communication with your DBMS. Your "large query" may be getting killed at the process level if some administrator or automated process identifies it as a resource hog (i.e., you have a Cartesian product).
To be clear, this is very likely something you're doing wrong as the client, not a bug in your server or in Oracle itself.
UPDATE, since you provided additional details: because the DB is running on your machine, I would bet that your query is running out of RAM to support both client and server operations.