ORA-07445 access violation - sql

I get this error when running a large query on Oracle. Any advice?
I'm using PL/SQL on Oracle 10.2.
I have noticed that the error comes from a view that is based upon many tables: when I select from this view with a WHERE condition on a specific parameter, I get that error. When I checked the logs I found this:
ORA-07445 access violation
So it is due to something in the view. I have full rights on the tables I'm creating the view from, and I'm not using any network; the database is on my machine.
Thanks.

Your client most likely saw the companion error ORA-03113 when the server process died. From the useful oerr command:
$ oerr ora 3113
03113, 00000, "end-of-file on communication channel"
// *Cause: The connection between Client and Server process was broken.
// *Action: There was a communication error that requires further investigation.
// First, check for network problems and review the SQL*Net setup.
// Also, look in the alert.log file for any errors. Finally, test to
// see whether the server process is dead and whether a trace file
// was generated at failure time.
So the likeliest causes:
The server process you were connected to crashed.
A network problem broke your connection.
Someone manually killed the process on the server you were connected to.
When the server process you were connected to crashed, it threw an ORA-07445. That error and ORA-00600 are two of the more famous Oracle errors. They are functionally unhandled exceptions: ORA-00600 is an unhandled exception in Oracle's own code, whereas ORA-07445 is a fatal signal from the OS, generally raised because Oracle did something the OS didn't approve of, so the OS killed the Oracle process.
Oracle's support site (http://metalink.oracle.com) has an online troubleshooter for these errors: search within Metalink for document 600.1, enter the appropriate information from the log file, and you may receive some useful troubleshooting information.
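On the client side, a crash of the dedicated server process usually surfaces as a broken connection (ORA-03113) rather than as the ORA-07445 itself. As a rough illustration, here is a minimal JDBC sketch, with a placeholder URL, credentials, and view name, showing where such a failure lands in client code:

    import java.sql.*;

    public class BrokenConnectionDemo {
        // Placeholder connection details -- not from the question.
        static final String URL = "jdbc:oracle:thin:@//localhost:1521/orcl";

        public static void main(String[] args) {
            try (Connection con = DriverManager.getConnection(URL, "scott", "tiger");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM my_view WHERE id = 1")) {
                rs.next();
                System.out.println("rows: " + rs.getLong(1));
            } catch (SQLRecoverableException e) {
                // ORA-03113 and similar "connection died" errors arrive here: the
                // server process is gone, so check the alert.log for the matching
                // ORA-07445 and its trace file.
                System.err.println("connection lost: " + e.getMessage());
            } catch (SQLException e) {
                System.err.println("query failed: " + e.getMessage());
            }
        }
    }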

This usually happens when something is killed at the database server's OS level, but it is a fairly generic error. In my own environment, I'll see this in an application server log on machine A if the database server on machine B is shut down. In your case, your desktop is losing communication with your DBMS. Your 'large query' may be getting killed at the process level if some administrator or automated process identifies it as a resource hog (e.g. it contains a Cartesian product).
To be clear, this is very likely something you're doing wrong as the client, not a bug in your server or in Oracle itself.
UPDATE, since you provided additional details: because the database is running on your machine, I would bet that your query is running out of RAM to support both the client and server operations.
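One quick way to sanity-check the memory theory is to see how much memory the instance is allowed to use. A minimal JDBC sketch (URL and credentials are placeholders) that reads the relevant 10.2 parameters:

    import java.sql.*;

    public class MemoryParamCheck {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details -- adjust for your local instance.
            String url = "jdbc:oracle:thin:@//localhost:1521/orcl";
            try (Connection con = DriverManager.getConnection(url, "scott", "tiger");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name, value FROM v$parameter " +
                     "WHERE name IN ('sga_max_size', 'pga_aggregate_target')");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Values are in bytes; compare their sum against physical RAM.
                    System.out.println(rs.getString("name") + " = " + rs.getString("value"));
                }
            }
        }
    }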

Related

SSIS package gets fatal error while reading the input stream from the network

Problem
When executing a data-heavy SSIS package that inserts data from a database in EnvironmentA1 into a database in EnvironmentB1, I get the following error:
A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 10060, output error: 0)
Context Information
EnvironmentA1 - virtual machine in local data center, running SQL Server 2017
EnvironmentB1 - virtual machine in Azure, running SQL Server 2017
The package is executed from the SSIS Catalog, scheduled daily by SQL Agent. Very occasionally it succeeds, but by now it is generally expected to fail every time it runs, at a different step each time.
What really baffles me about this is that if I run the same package interactively in Visual Studio, using the exact same connection strings and the same security context for both the EnvironmentA1 and EnvironmentB1 connection managers, it succeeds every time without any issues. Visual Studio itself is installed elsewhere, in EnvironmentC1.
This is what example entries in the SQL Error Log on EnvironmentB1 look like around the time of failure:
Error messages from the SSIS Catalog execution report:
Everything above, together with the research I have done, suggests that this is a network-related issue. The common suggestion was to disable any TCP-offloading-related features, which I did in both environments, but that didn't make any difference.
Additionally, for testing purposes, I disabled the following features in the NIC configuration on each environment:
EnvironmentA1:
Receive-Side Scaling State
Large Send Offload V2 IPv4
Large Send Offload V2 IPv6
TCP Checksum Offload IPv4
TCP Checksum Offload IPv6
EnvironmentB1:
Receive-Side Scaling
Large Send Offload Version 2 IPv4
Large Send Offload Version 2 IPv6
TCP Checksum Offload IPv4
TCP Checksum Offload IPv6
IPSec Offload
Also of note: there are other SSIS packages that interact with the same two environments and have never produced a similar error, but they either deal with an insignificant amount of data or push it in the opposite direction (EnvironmentB1 to EnvironmentA1).
As a temporary measure I have also tried deploying the package to the SSIS Catalog of EnvironmentA2 (the development version of EnvironmentA1) and scheduling execution with the production connection strings, but it hits the exact same issue; the only guaranteed way to run the package successfully remains running it via Visual Studio.
If anyone could at least point me in the right direction for diagnosing this issue, that would be greatly appreciated. Please let me know if I can add any other information for context.
Your third SSIS error states that the connection was forcibly closed by the remote host.
That suggests firewall or network-filtering issues. Check with your network team whether that could be the case.
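If the network team needs evidence, one crude way to test whether something in the path is killing long-lived connections is to hold an idle TCP connection open to the database port and watch how it dies. A rough sketch, where the host name and port are assumptions:

    import java.io.IOException;
    import java.net.Socket;

    public class IdleConnectionProbe {
        public static void main(String[] args) throws IOException {
            // Assumed host/port for EnvironmentB1's SQL Server listener.
            try (Socket s = new Socket("environmentb1.example.com", 1433)) {
                s.setKeepAlive(true);
                long start = System.currentTimeMillis();
                // Block on a read; a firewall reset or idle-timeout kill will
                // surface as an IOException or an end-of-stream (-1).
                int b = s.getInputStream().read();
                long secs = (System.currentTimeMillis() - start) / 1000;
                System.out.println("read returned " + b + " after " + secs + "s");
            }
        }
    }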

Google Cloud SQL Second generation -> "Aborted connection"

I am running a Java + JPA/Hibernate application on App Engine and switched my database from a first-generation Google Cloud SQL instance to the second generation, and now I get a lot of these errors:
2017-05-20T22:49:53.533247Z 2235 [Note] Aborted connection 2235 to db:
'mydb' user: 'root' host: 'cloudsqlproxy~myip'
(Got an error reading communication packets)
As far as I can tell, most of these errors occur during database requests inside task queue tasks.
This did not happen with the first generation. How can this be avoided?
Τhe "Aborted connection nnnn to db:" message is triggered when an existing connection is terminated improperly as described by Google’s documentation. Most of the aborted connections happen because of the termination of your connection was not correct or because of networking problem between the server and the client, as described in the documentation here.
I advise you to follow the Google’s documentation about managing Cloud SQL connections, emphasizing on the “connection pools” section and of course the “opening and closing connection” section.
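For a Java application, the "connection pools" guidance usually boils down to a small, bounded pool whose connections are recycled before the server or the proxy kills them. A minimal sketch using HikariCP; the pool sizes, timeouts, URL, and credentials below are illustrative assumptions, not values from Google's documentation:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PooledDataSource {
        public static HikariDataSource create() {
            HikariConfig cfg = new HikariConfig();
            cfg.setJdbcUrl("jdbc:mysql://127.0.0.1:3306/mydb"); // e.g. via the Cloud SQL proxy
            cfg.setUsername("root");
            cfg.setPassword("secret");
            cfg.setMaximumPoolSize(5);        // stay well under the instance's connection limit
            cfg.setConnectionTimeout(10_000); // ms to wait when borrowing a connection
            cfg.setIdleTimeout(60_000);       // retire idle connections promptly
            cfg.setMaxLifetime(30 * 60_000);  // recycle before any server-side timeout fires
            return new HikariDataSource(cfg);
        }
    }

The setting that matters most for the "Aborted connection" noise is maxLifetime: if the pool retires connections before any server-side timeout does, far fewer connections are torn down improperly.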
"Managing database connections" talks about closing connections properly. However, in my case the error still occurs when I use a GCP Cloud Function to connect to GCP Cloud SQL.
A Google group post says that unless you use NullPool or dispose of the engine explicitly, the error message will always occur; at the same time, it does not suggest using engine.dispose().
So I wonder: what is the best way to release the connection pool's resources without generating error messages on Cloud SQL?
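That follow-up is about SQLAlchemy, but the underlying question, when to tear the pool down, is the same in any language. In Java terms, the analogue of disposing the engine is closing the pool exactly once, at process shutdown, rather than per request; a sketch, reusing the hypothetical PooledDataSource from above:

    import com.zaxxer.hikari.HikariDataSource;

    public class PoolLifecycle {
        public static void main(String[] args) {
            HikariDataSource ds = PooledDataSource.create();
            // Close the pool once, when the process exits, so its connections
            // are terminated cleanly instead of being abandoned mid-stream.
            Runtime.getRuntime().addShutdownHook(new Thread(ds::close));
            // ... serve requests, borrowing connections from ds ...
        }
    }

In genuinely short-lived environments such as Cloud Functions, a shutdown hook may never run, so not pooling at all (one connection per invocation, closed in a finally block) is often the simpler fix, which matches the NullPool suggestion.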

Database Sync failing after a day

We set up database sync between two databases on the same server. It worked fine yesterday and then stopped working today. I tried killing connections to the database and stopping the web apps that are connected to it, thinking maybe it was a connection limit. I also reset the username and password after verifying the connection details are correct.
This is the error we're getting:
Database re-provisioning failed with the exception "The current operation could not be completed because the database is not provisioned for sync or you do not have permissions to the sync configuration tables." For more information, provide tracing ID ‘b4b76a8c-38ae-4b48-ad08-6c07933c23c1’ to customer support.
The error log indicates that the previous provisioning-related operation had not completed yet, so you were not able to re-provision the sync group at the same time.
May I know whether you are still experiencing this problem? If so, could you please provide the latest error log? I'll update the answer then.

Cannot create DB2 index, getting SQL30081N error

Trying to create an index (and run some long queries) on DB2 v9.1 and failing with the following error message:
SQL30081N (A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "". Communication function detecting the
error...")
I have tried to follow IBM's advice about setting QUERYTIMEOUTINTERVAL=0 (http://www-01.ibm.com/support/docview.wss?rs=71&uid=swg21164785), but it did not take.
Any ideas? Queries and commands seem to time out after about 15 minutes.
You can rule out any network interference by running the DDL and SQL locally on the server. By using nohup on UNIX or schtasks on Windows, you can start a DB2 job that will run to completion even if the database server loses all network connectivity.
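If the index really must be built from a client, it is also worth making sure no client-side timeout is in play. A minimal JDBC sketch, where the host, database, credentials, and DDL are placeholders:

    import java.sql.*;

    public class CreateIndexNoTimeout {
        public static void main(String[] args) throws SQLException {
            // Placeholder connection details for a DB2 v9.1 server.
            String url = "jdbc:db2://dbserver:50000/MYDB";
            try (Connection con = DriverManager.getConnection(url, "db2inst1", "secret");
                 Statement st = con.createStatement()) {
                st.setQueryTimeout(0); // 0 = no client-side limit (the JDBC default)
                st.execute("CREATE INDEX my_idx ON my_table (my_col)");
            }
        }
    }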
It seems like a network error; probably your client machine is losing its connection to the server. Are you on an unstable network connection, for example a VPN over the Internet?

Connection Problems With Mirrored Database

The setup is as follows:
A C++ client connects via OLEDB/SQL Native Client to a SQL Server 2005 database located on another machine. The server is set up with mirroring (automatic failover), with a synchronized partner on a second server and a witness on a third.
Occasionally (once every couple of days), our application seizes up: it appears to attempt to establish a database connection, and rather than simply failing with OLEDB throwing a connection failure, it just gets "stuck" (we have a timeout on the connection, but it never fires). 24 to 36 hours later we'll get an error:
TCP Provider: An existing connection was forcibly closed by the remote host.
And things will continue on with lots of these errors until our app eventually needs to be restarted. We can't really figure out what condition could be causing this behavior or what we can do about it.
In preliminary research, I've seen some related problems that were solved by setting the Connection Lifetime connection string property to something non-zero.
Does anyone have any thoughts on what might be going on here?