Background:
I'm working with a C++ program that uses ODBC via SQL Native Client to connect to a SQL Server 2000 database.
Problem:
My connections are abstracted into an object which opens a connection when the object is instantiated and closes the connection when the object is destroyed. I can see that the objects are being destroyed: their destructors are firing, and inside these destructors SQLDisconnect( ConnHandle ) is called, followed by SQLFreeHandle( SQL_HANDLE_DBC, ConnHandle ). However, watching the connection count using sp_who2 or Performance Monitor shows the connection count climbing relentlessly, despite these objects being destroyed.
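For reference, here is a minimal sketch of the wrapper pattern described above; the class and member names are illustrative rather than the actual code, and error checking is omitted:

    // Illustrative RAII wrapper: connect on construction, disconnect on destruction.
    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    class ScopedConnection
    {
    public:
        ScopedConnection(SQLHENV env, SQLCHAR* connStr)
        {
            SQLAllocHandle(SQL_HANDLE_DBC, env, &ConnHandle);
            SQLDriverConnect(ConnHandle, NULL, connStr, SQL_NTS,
                             NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
        }

        ~ScopedConnection()
        {
            SQLDisconnect(ConnHandle);                 // should end the server session...
            SQLFreeHandle(SQL_HANDLE_DBC, ConnHandle); // ...and release the handle
        }

    private:
        SQLHDBC ConnHandle;
    };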
This hasn't proven problematic until now, while executing a chain of functions that runs long enough to create several thousand of these objects and, with them, several thousand connections.
Question:
Has anyone seen anything like this before? What might be causing it? My initial Google searches haven't proven very fruitful!
EDIT:
I have verified that SQLDisconnect is returning without error.
Connection pooling is off. In fact, when I attempt to enable it using SQLSetEnvAttr, my application crashes on the second call to SQLDriverConnect.
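(For reference, in case the crash stems from where the attribute is set: pooling is enabled process-wide, on a null handle, before any environment handle is allocated. A hedged sketch of the documented sequence, error checking omitted:)

    // Enable ODBC connection pooling at process level, BEFORE SQLAllocHandle
    // is called for the environment.
    SQLSetEnvAttr(NULL, SQL_ATTR_CONNECTION_POOLING,
                  (SQLPOINTER)SQL_CP_ONE_PER_DRIVER, SQL_IS_INTEGER);

    SQLHENV env;
    SQLAllocHandle(SQL_HANDLE_ENV, NULL, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLSetEnvAttr(env, SQL_ATTR_CP_MATCH, (SQLPOINTER)SQL_CP_STRICT_MATCH, 0);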
Check that you are not using connection pooling. If it is turned on, it will cache opened connections for some (configurable) time.
If you are not using connection pooling, then check the return value of SQLDisconnect(). A transaction may still be executing or rolling back, which won't let SQLDisconnect() release your connection.
MSDN has more details on how to check for SQLDisconnect errors.
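A sketch of that check, using the ConnHandle from the question; SQLSTATE 25000 (invalid transaction state) is what SQLDisconnect reports when an open transaction is blocking the disconnect:

    // Check SQLDisconnect and dump the first diagnostic record on failure.
    // Assumes <windows.h>, <sql.h>, <sqlext.h>, and <stdio.h> are included.
    SQLRETURN rc = SQLDisconnect(ConnHandle);
    if (!SQL_SUCCEEDED(rc))
    {
        SQLCHAR state[6], msg[SQL_MAX_MESSAGE_LENGTH];
        SQLINTEGER nativeErr;
        SQLSMALLINT msgLen;
        SQLGetDiagRec(SQL_HANDLE_DBC, ConnHandle, 1, state, &nativeErr,
                      msg, (SQLSMALLINT)sizeof(msg), &msgLen);
        fprintf(stderr, "SQLDisconnect failed: [%s] %s\n", state, msg);
    }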
I believe I have seen the same issue in an application that uses MFC and ODBC rather than the SQL Native Client API directly. Occasionally my application hangs on shutdown; the stack trace is:
sqlncli!CCriticalSectionNT::Enter
sqlncli!SQLFreeStmt
sqlncli!SQLFreeConnect
sqlncli!SQLFreeHandle
odbc32!UnloadDriver
odbc32!FreeDbc
odbc32!DestroyIDbc
odbc32!FreeIdbc
odbc32!SQLFreeConnect
mfc42!CDatabase::Close
mfc42!CDatabase::Free
mfc42!CDatabase::~CDatabase
Try as I might, I cannot see anything that might cause such a hang. I'd be grateful if anyone can suggest a solution. It seems others have seen similar issues online, but to date I haven't found a fix.
Since your stack trace has no frames below the destructor, can we assume that the CDatabase is a global variable? Possibly in a DLL?
We found your exact symptoms when attempting to disconnect from SQL Server within the destructor of a global variable.
Using the MDAC ODBC drivers works successfully.
Moving the code out of the destructor works successfully.
It seems to be something to do with SQL Native Client not liking being called from inside DllMain.
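A hedged sketch of that second workaround, with illustrative names: give the wrapper an explicit Close() and call it from normal shutdown code, so that by the time the global's destructor runs under the loader lock in DllMain there is nothing left to do.

    // Illustrative only: disconnect before static destruction, not during it.
    class Connection
    {
    public:
        void Close()
        {
            if (ConnHandle != SQL_NULL_HDBC)
            {
                SQLDisconnect(ConnHandle);
                SQLFreeHandle(SQL_HANDLE_DBC, ConnHandle);
                ConnHandle = SQL_NULL_HDBC;
            }
        }

        ~Connection() { } // intentionally empty: Close() has already run

    private:
        SQLHDBC ConnHandle = SQL_NULL_HDBC;
    };

    Connection g_conn; // a global in a DLL: its destructor fires inside DllMain

    // In ordinary shutdown code, before the DLL unloads:
    //     g_conn.Close();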
I have an old Access solution that opens a connection and inserts some rows into a SQL database. I had to make a very small change that had nothing to do with that connection. After the change I compacted the database.
The solution works normally for me, but the person who normally runs it is getting the run-time error in the subject. I had another user try it and they got the same error. It seems I am now the only one who can run this, yet I changed nothing in the code or the Access db as a whole that I can see explaining this.
The button they press that triggers the error:
Sets a variable as ADODB.Connection
Defines the connection string
ERROR TIME: Then it runs a function that executes a stored procedure using that connection. The error occurs at the top of that function, when it attempts CreateObject("ADODB.Command").
Run-time error '429': ActiveX component can't create object
Can anyone offer anything as to why this user who could previously work with it suddenly can't, despite the same security, same machine, etc.?
Sigh... Thanks for the responses. It turns out to be our security software, combined with the fact that I moved the solution to a different location. Out of sight, out of mind: I forget that I have some exemptions set up for myself, so I didn't see how it could be that. I just need to learn to check that stuff first.
I am working with a Java program where I need to call a native library that in turn calls another DLL, which contacts a remote "Amadeus" (airline-related) service.
Problem statement:
The JNI DLL creates a session every time it contacts the remote service and closes the session after completing its intended task. It's very similar to the JDBC approach, with no connection pooling involved.
Now it seems that the session is not actually being closed properly, which at the server end eventually results in a max-session exception and any further connection requests being refused.
The issue appears to lie at the DLL-to-remote-service end, because on the Java side the connection is properly closed via the native call.
Workaround identified:
Restarting the application fixes the issue because it forces all open connections to close.
We don't have access to alter the DLL for the time being, so we are wondering whether we could get the same effect as restarting the JVM without actually restarting it, by reconnecting (unloading and reloading) the DLL.
We think that if we can unload the DLL, all memory allocated to it would be cleared, which would force the open sessions at the server end to be closed/collected.
Question:
Can we really unload and then reload the dll without restarting the jvm?
Thanks in advance.
You can unload it. It's probably not safe to do so, however.
Assuming you're running on a Windows platform, without knowing the details of the DLL, it's not possible to safely unload the library. Per the documentation for the FreeLibrary function:
Use caution when calling FreeLibrary with a handle returned by GetModuleHandle. The GetModuleHandle function does not increment a module's reference count, so passing this handle to FreeLibrary can cause a module to be unloaded prematurely.
Since you won't have direct access to the handle used to load the library, nor to all the references into it, you can't know whether all references to the library have been released.
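To make the reference-count hazard concrete, a sketch (the DLL name is hypothetical):

    #include <windows.h>

    void sketch_unload_hazard()
    {
        // GetModuleHandle does NOT take a reference on the module...
        HMODULE h = GetModuleHandleA("amadeus_bridge.dll"); // hypothetical name
        if (h != NULL)
            FreeLibrary(h); // ...so this can unmap a module the JVM still uses -> crash

        // The safe pairing: each LoadLibrary takes a reference that the
        // matching FreeLibrary releases.
        HMODULE h2 = LoadLibraryA("amadeus_bridge.dll");
        if (h2 != NULL)
        {
            // ... resolve and call entry points via GetProcAddress ...
            FreeLibrary(h2);
        }
    }

Note also that the JVM holds its own reference from System.loadLibrary and, as far as I know, releases it only when the owning ClassLoader is garbage collected, which is part of why a full process restart is the only reliable reset.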
Also, unloading the library may not solve your problem. Ending the entire process works because it causes everything the process uses to be released or closed (for the most part; there are always exceptions). Simply unloading one library from the address space doesn't do that. Unloading the library might do what you want, if the library is designed to work that way. Given that it apparently doesn't work properly when used normally, I'd say the odds of it working properly when used abnormally aren't very good; unloading it while it's in use may well cause even worse problems.
You need to actually solve the problem - not simply try things and see if they work.
After a whole day tracking down a memory leak in my VB.NET project, I have traced the cause to a bug in FileMaker's ODBC driver!
To reproduce, you'll need a database you can connect to (mine is hosted on FileMaker Server Advanced 11.0.3, but you can also host it locally) and the ODBC driver registered/installed on the PC (I tested versions 11.3 and 12.0, and the latest 12.2).
Start a new VB.NET WinForms project, add a button to the form and paste this code onto the button's click event:
Using cn_FM As New Odbc.OdbcConnection("DRIVER={FileMaker ODBC};SERVER=192.168.1.xxx;UID=admin;PWD=admin;DATABASE=test;")
    cn_FM.Open()
End Using
All this code does is open a connection to a FileMaker database. However, if you watch the memory usage in Windows Task Manager while repeatedly clicking the button you just made, you can easily see that cn_FM is not being disposed of properly, because the handle count keeps increasing! I tried forcing garbage collection, but this did nothing, so I assume it's a problem with the driver itself.
Oh, and I tested connecting to a SQL database in the same way, and as you would expect, there was no handle leakage...
Can anyone confirm this is correct?
Edit: I tried various ways of opening and closing the connection, as well as actually querying the database for something inside the Using block. I also tried hosting the fp7 file locally, but still no go :(
FileMaker's ODBC drivers are horrible and they admit it. You'll also find that your CPU spikes to nearly 100% for every query you hit the FM server with. I've been griping at them about it for years.
Their "solution" was to introduce External SQL Sources, but that requires you to go the other direction. You can bind your VB database to FileMaker and then access the data just like actual FileMaker data. This will allow you to create scripts on the FM server to sync whatever tables you need to sync with your VB database.
It's not ideal, but that's going to be your best bet to get something together with good performance.
I got around this problem by making a persistent connection (declare and open it once and leave it open). But I need to check that it's still open each time I want to use it, for example:
Public Sub CheckOpen(ByRef cn As Odbc.OdbcConnection)
    ' If the connection has dropped or broken, reset it.
    If cn.State <> System.Data.ConnectionState.Open Then
        cn.Close() ' safe to call even when not open; clears a Broken state
        cn.Open()
    End If
End Sub
If you have multiple FM database files then this may mean you need to have one connection for each file.
Side note: FileMaker's xdbc_listener.exe process running on FMSA is also leaky. We have noticed a pattern where it crashes once it reaches just under 2 GB of memory usage, so keep in mind that the process may need regular restarting.
Just a little background: I am using Access 2010 to create forms and VBA code in an Access 2003 format database. For some reason, Access 2007 format databases always corrupt on me when I make changes and save them with a particular group of objects, but that's for another discussion.
When writing VBA code in this Access 2003 database, any time my code breaks (via breakpoint or an unhandled error) and I make a correction, Access tells me that it can't save back to the database because another user has it open. However, I am the only user working on the database; this is a local copy of the database and it's sitting on my desktop.
The LDB file can't be deleted because Access is using it. When I first load the database and open the LDB in a text or hex editor, I see my machine name and "Admin". After a break, I see that plus a duplicate entry, but this time "admin" has a lower-case "a".
Closing the database and reopening it fixes the problem but makes it needlessly cumbersome to debug my code. Anyone else encounter this issue and/or have a fix for it?
It might be helpful to know what your code is doing when this happens. Certainly that's not normal behavior. For instance, are you opening another database with New Access.Application? Are you using ADO or DAO to access records in the database with a connection string?
There are no external connections to the database at all.
It may not matter whether there are external connections to the database if you are using a connection string to connect to the open database; I'm not sure, but that may itself be seen as an external connection. You may want to use CurrentDb for DAO, or CurrentProject.Connection as your ActiveConnection for any ADO queries.
I am assuming that this problem persists through reboots; but for the sake of argument, try closing out Access and going to the task manager to make sure you have no other instances of MSAccess.exe running. You might even try closing all Office products and/or making sure that Access is the only Office product running. I have seen some weird conflicts between Microsoft Communicator and Outlook; so it's not entirely out of the question for Access to have issues with another MS product.
You may also want to check the size of the database to make sure it hasn't exceeded 2 GB. That causes the infamous "Invalid parameter" error; perhaps it might be causing this as well.
With no other details about how your program works, we may only be able to offer generic advice like this.
I have discovered a way to cause the problem discussed above (and thereby to correct it). It turns out that if you create a database object and set it to the current database, you get this problem.
That is,
Dim cdb As Database
Set cdb = CurrentDb
From this point on, you're cooked.
Instead, figure a way around this by possibly using currentdb directly or not using it at all.
This worked for me.
In your VBA, try checking that all your open connections to the database are closed. As long as a connection is open, the LDB file will be there.
Same symptom of not being able to save form or code mods after the application had started, and I found a workaround today! In the startup of my first form of the app, I had issued a "DAO.DBEngine.SetOption dbMaxLocksPerFile, 20000". Commenting out this statement removed the problem. I did no further testing, but FYI, the DBEngine call was before any reference or attempt to use CurrentDb(). Also, the current default on my Access 16 install is 9,500.
I thought I might answer here, since I stumbled upon this question while having a similar issue. Essentially, it boiled down to this: I could either edit forms, VBA, etc. or edit information in the local database (which I'm using as a cache) with currentDB. I also have a backend database, but the locking was clearly on the frontend database.
The solution ended up being weird, but stupidly simple. When the frontend starts up, I have it immediately create a connection to the backend using OpenRecordset (and similarly to you, that backend was still on my own computer for testing purposes). I tried temporarily disabling that code, and suddenly it wasn't an issue anymore. It turns out that once I call CurrentDb, I can then call OpenRecordset to open the connection to the backend without the problem recurring.
Tl;dr: if you're calling OpenRecordset somewhere in your code to connect to a backend, be sure to call something like Set db = CurrentDb beforehand; then everything works. (That is, probably until I publish this answer and Access decides it doesn't want to anymore.)
Why this fixed it is beyond me, someone with more knowledge can maybe answer that.
The solution:
Options > Current Database > enable "Track name AutoCorrect info"
UPDATE
As Mathias notes below, this exact problem has been reported and resolved here:
ASP.NET-MVC (IIS6) Error on high traffic: Specified cast is not valid
ORIGINAL POST
This may be too specific a debugging issue to be posted here, but I'm posting it anyway in the hopes that it produces a solution that others find useful.
I have a web application that operates under moderate load, maybe 5 requests per second. It has some older code talking to SQL Server via ADO.NET + DataReaders, and has been using this same technique for at least five years without problem. It also has some newer code using LINQ to SQL. Both techniques use the same connection string to maximize connection pool reuse.
Recently I'm experiencing a very weird behavior described by these symptoms:
Everything will work perfectly for about a day, then suddenly every call (or nearly every call) to the data layer (both ADO.NET and LINQ) returns data that cannot be parsed by my code -- I'll get exceptions like "Unable to cast object of type 'System.Int32' to type 'System.String'." or "Sequence contains no elements" or "IndexOutOfRangeException" or "Invalid attempt to call Read when reader is closed".
Interestingly, I never get exceptions from SqlCommand.ExecuteReader() or DataReader.Read() -- the exceptions only occur when I try to parse the IDataRecord that is returned.
I am able to fix the problem temporarily by restarting either SQL Server or IIS. After a few hours it comes back again.
I've tried monitoring the number of connections in the connection pool and it never goes above 3 or so. Certainly never above 100.
I'm not getting anything in the event log that indicates any problem with Sql or IIS.
The drive has 9 GB of empty space.
I suspected bad RAM, but the server is using registered ECC DIMMs.
I have other applications using ADO.NET that work fine and never exhibit the problem.
When the problem is occurring I can call the exact same stored procedures via Management Studio and they return the correct, expected results.
Here is my pattern for ADO.NET access:
using (var dbConn = Database.Connection) // gets already-open connection
{
    var cmd = new SqlCommand("GetData", dbConn);
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@id", id); // SQL Server parameters use the "@" prefix
    SomeDataObject dataObject = null;
    using (var dr = cmd.ExecuteReader(CommandBehavior.CloseConnection | CommandBehavior.SingleRow))
    {
        if (dr.Read())
            dataObject = new SomeDataObject(dr);
    } // disposing the reader also closes the connection (CloseConnection behavior)
    return dataObject;
}
Theory: Is it possible that the combination of ADO.NET in one part of the code and LINQ in another part of the code, both using the same connections from the connection pool, is having some weird side-effect?
Question: Are there any debugging steps I should be trying? Any events logs or performance metrics that might help?
20+ open connections on 5 hits/second is a red flag to me. We have close to 100 hits/sec and hover around 10 connections.
What about memory use? Is it high?
I suspect you're having problems with releasing resources. I'm still getting my feet wet with LINQ to SQL, and I too have a long positive experience with ADO.NET. I wonder if you're missing a pattern with LINQ to SQL that cleans up connections, etc.
Try this - can you isolate the ADO.NET code from LINQ in the application? If you ONLY make ADO.NET calls, what happens to memory, connection count, etc? Then add in the LINQ stuff and see how it affects it.
Resource problems seem to 'start up late' because they take a while to accumulate.
UPDATE
I found someone on SO who apparently has the same issue:
ASP.NET-MVC (IIS6) Error on high traffic: Specified cast is not valid.
It is explained in the answer from ATLE.
ORIGINAL POST
I have seen issues in a LINQ-to-SQL application under load. I used MVC Storefront from Rob Connery, so I guess a lot of people use this kind of application layout.
The application worked perfectly under light load, but there were strange errors that sounded like the one you describe under medium load.
I suspected that it was an issue with where the db-context was stored.
In my case it was easy to reproduce: I used JMeter with 5 threads, each making a couple of requests per second (20 total, I guess). I really needed the load to originate from multiple threads.
So my advice is: try to reproduce the error in development by creating some load with JMeter (not good for ASP.NET, but fine for ASP.NET MVC) or Application Center Test.