SQLCODE=-514 SQLSTATE=26501 occurred when I finished the rebind operation - sql

I want to make sure the new procedure is valid and that DB2 does not keep answering queries from its cache, so I have to rebind the database (db2rbind command). Then I deploy the application on WebSphere. But when I log in to the application, this error occurs:
The cursor "SQL_CURSN200C4" is not in a prepared state..SQLCODE=-514 SQLSTATE=26501,DRIVER=3.65.97
Furthermore, the weirdest thing is that the error occurred only once. It has never occurred again since then, and the application runs very well. I'm curious about how it occurs and why it occurred only that one time.
PS: my DB2 version is 10.1 Enterprise Server Edition.
The SQL that the error stack points to is very simple, something like:
select * from table where 1=1 and field_name="123" with ur

Unless you configure otherwise (statementCacheSize=0) or manually use setPoolable(false) in your application, WebSphere Application Server data sources cache and reuse PreparedStatements. A rebind can cause statements in the cache to become invalid. Fortunately, WebSphere Application Server has built-in knowledge of the -514 error code and will purge the bad statement from the cache in response to an occurrence of this error, so that the invalidated prepared statement does not continue to be reused and cause additional errors to the application. You might be running into this situation, which could explain how the error occurs just once after the rebind.
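
If you want to rule the statement cache in or out while testing, JDBC lets you opt a single statement out of pooling. Here is a minimal sketch, assuming a WebSphere data source looked up under a made-up JNDI name and a placeholder query (neither is from the original post); setPoolable(false) asks the data source not to return that statement to its cache:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class RebindSafeQuery {
        public static void main(String[] args) throws Exception {
            // Hypothetical JNDI name; use whatever your WebSphere data source is bound to.
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/myDataSource");

            try (Connection conn = ds.getConnection();
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT * FROM MYSCHEMA.MYTABLE WHERE FIELD_NAME = ? WITH UR")) {

                // Opt this statement out of the data source's statement cache so a
                // later rebind cannot hand back an invalidated prepared statement.
                ps.setPoolable(false);

                ps.setString(1, "123");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }

The heavier alternative mentioned above, statementCacheSize=0 on the data source, turns the cache off for every statement, so it trades away the caching benefit entirely just to avoid one transient -514 after a rebind.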

Related

Tomcat server SQL exception

I have an app that runs on a Tomcat server. I use IntelliJ on my machine and run it from there when I do tests.
It ran fine many times, then suddenly the server stopped coming up properly, and I see the following in the log:
com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#1: BasicResourcePool$AcquireTask: com.mchange.v2.resourcepool.BasicResourcePool$AcquireTask#1ab64513 -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (30). Last acquisition attempt exception:
java.sql.SQLException: Unexpected exception encountered during query.
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1073)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:987)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:982)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:927)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2664)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2568)
at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1557)
at com.mysql.jdbc.ConnectionImpl.loadServerVariables(ConnectionImpl.java:3868)
at com.mysql.jdbc.ConnectionImpl.initializePropsFromServer(ConnectionImpl.java:3407)
at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2384)
at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2153)
at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:792)
at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:47)
at sun.reflect.GeneratedConstructorAccessor38.newInstance(Unknown Source)
I have no clue what happened, since I did not change anything related to JDBC or SQL. I tried upgrading the Kotlin version and reverted it right away, but I don't know whether that has anything to do with the exception, or how I can solve it.
Checking the connection to the database from the IntelliJ DB pane connects successfully.
I would be thankful if someone has a clue about what could have gone wrong.

Capture Data State on Error at runtime - SQL Server

I have a question regarding error handling practices within SQL Server.
What I would like to accomplish is easy error re-creation. I have a very active SQL Server installation with constantly changing data in the tables I am interested in. It is modeling an active warehouse environment.
I've already built a generic error handler for all the stored procedures on this installation in order to track errors and log specifics about the cause of the error such as:
calling line (this gives the EXEC statement of the stored procedure as well as input variables)
error_message
error_state
error_number
error_line
etc.
What I am missing is reproducibility. Even if I were to run the same statement just a few minutes after being notified that an error occurred, I cannot be sure that my results would be the same due to the underlying data changing.
I would like to capture the state of the data on the database when the error occurred.
This could be something like a database image that I could then import into a clean SQL Server installation and execute the erring line in order to perfectly capture what was happening on the database the moment the error occurred.
Due to the nature of needing to capture this at runtime, I would prefer a light-weight solution. Perhaps only capturing the tables relevant to the failing statement.
Does anyone know if this is possible or has been done before? It is really only critical to try and suss out logical errors. It wouldn't be necessary for something like a deadlock.
I would ultimately turn these data subsets into XML or JSON and include them in the error log when appropriate.
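
Purely as an illustration of that last idea (capture the relevant rows and attach them as JSON to the error log), here is a hedged sketch of doing it from the calling application with JDBC rather than inside SQL Server; the table, stored procedure, and connection string are invented placeholders, not anything from the question:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;

    public class ErrorStateCapture {

        // Snapshot the rows the failing call was working with and render them as a
        // small JSON array for inclusion in the error log (table name is invented).
        static String captureRowsAsJson(Connection conn, String orderId) throws SQLException {
            StringBuilder json = new StringBuilder("[");
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT * FROM dbo.WarehouseOrders WHERE OrderId = ?")) {
                ps.setString(1, orderId);
                try (ResultSet rs = ps.executeQuery()) {
                    ResultSetMetaData md = rs.getMetaData();
                    boolean firstRow = true;
                    while (rs.next()) {
                        if (!firstRow) json.append(",");
                        firstRow = false;
                        json.append("{");
                        for (int i = 1; i <= md.getColumnCount(); i++) {
                            if (i > 1) json.append(",");
                            Object value = rs.getObject(i);
                            json.append('"').append(md.getColumnLabel(i)).append("\":\"")
                                .append(String.valueOf(value).replace("\"", "\\\""))
                                .append('"');
                        }
                        json.append("}");
                    }
                }
            }
            return json.append("]").toString();
        }

        public static void main(String[] args) throws Exception {
            // Placeholder connection string; assumes the Microsoft JDBC driver is on the classpath.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:sqlserver://localhost;databaseName=Warehouse;user=app;password=secret")) {
                try (PreparedStatement ps = conn.prepareStatement("EXEC dbo.usp_PickOrder ?")) {
                    ps.setString(1, "ORD-42");
                    ps.execute();
                } catch (SQLException e) {
                    // Capture the relevant rows as close to the moment of failure as possible,
                    // then log them alongside the usual error details.
                    String snapshot = captureRowsAsJson(conn, "ORD-42");
                    System.err.println("Error " + e.getErrorCode() + ": " + e.getMessage());
                    System.err.println("Data state: " + snapshot);
                }
            }
        }
    }

A CATCH-block version of the same idea inside the stored procedure (for example with FOR XML, or FOR JSON on SQL Server 2016 and later) would capture the state even closer to the failure, at the cost of doing the serialization on the database server.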

Facing errors during SQL deployment in an Azure DevOps pipeline

I am using a SQL DACPAC type of deployment in a release pipeline of Azure DevOps, but I'm getting the error below. I have no idea about SQL. Any suggestions?
Publishing to database 'database_name' on server 'Server_name'.
Initializing deployment (Start)
*** The column [dbo].[xxxx].[yyyy] is being dropped, data loss could
occur.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
*** If this deployment is executed, changes to [dbo].[xxx2] might
introduce run-time errors in [dbo].[yyyy2].
Analyzing deployment plan (Complete)
Updating database (Start)
An error occurred while the batch was being executed.
Updating database (Failed)
I agree with Michael's point.
The column [***] is being dropped, data loss could occur.
and
If this deployment is executed, changes to [] might introduce
run-time errors in [].
These are both expected and are raised as safety checks. I assume you made some changes to your database, and the tooling cannot be sure whether they would break anything on the target database, so it blocks the deployment because the server can't determine whether the changes are safe.
The first solution is to set /p:BlockOnPossibleDataLoss=false.
The default value of BlockOnPossibleDataLoss is true, which stops the deployment if possible data loss is detected; false tells SqlPackage.exe to ignore it.
So go to the task and enter the above argument into Additional SqlPackage.exe Arguments:
The second solution is to pass
/p:TreatVerificationErrorsAsWarnings=true
Note: use the second solution only if the first one does not work for you.
Setting TreatVerificationErrorsAsWarnings=true treats verification errors as warnings so you get a complete list of issues, and it bypasses the behavior where the publish action stops at the first error.
See this doc for more on the publish action.

SQLCode -991 when trying to read from DB2 table

I've created and compiled a program in Cobol, but when trying to run and test it with a JCL job, I'm getting this error when reading the output. (The program compiles and the job runs without errors themselves)
SQLCODE = -991, ERROR: CALL ATTACH WAS UNABLE TO ESTABLISH AN IMPLICIT CONNECT OR OPEN TO DB2. RC1=0008 RC2=00F30034
SQLSTATE = 57015
Now I don't understand why this error occurs. The DB2 database is up and running, and I can access it myself. I can't find an error in my program code either. Googling it sadly doesn't give me a clear solution; all I can find is that the issue lies either with the compile job for the program, the JCL to run it, or DB2 itself.
Have you done a bind, and did it work? The error indicates the plan does not exist or is not authorized.
You need to talk to people at your site about the compile/bind process and who authorises plans.
If you do not know about the mainframe Cobol/DB2 compile process, try reading this.
Basically --->

Cobol DB2 Program ---+----> Cobol program ----> Compile -----> Executable
                     |      with no SQL                        but calls Plan
                     |
                     +----> DBRM (SQL) -------> Bind --------> DB2 Plan
It is the plan that needs the DB2 authorisation to run the SQL.
You might be able to authorise the plan yourself, or you might need to see the DBAs.
With DB2 Cobol, there is a co-compiler (formerly a pre-compiler) that strips out the SQL and creates the DBRM (basically a special SQL procedure).
It is the bind that processes the DBRM (SQL) and creates the DB2 access plans.
This may seem long-winded compared to Java etc., but there are some advantages:
The SQL is processed ahead of time, not while the program is running.
You can check the DB2 access path at any time, before or after execution; useful for analysing performance issues.
The same DB2 access paths are used from one run to the next, which leads to fairly predictable run times.

SQL Native Client ODBC application not disconnecting after SQLDisconnect and not pooling?

Background:
I'm working with a program coded in C++ which uses ODBC on SQL Native Client to establish connections to interact with a SQL Server 2000 database.
Problem:
My connections are abstracted into an object which opens a connection when the object is instantiated and closes the connection when the object is destroyed. I can see that the objects are being destroyed: their destructors are firing, and inside those destructors SQLDisconnect( ConnHandle ) is called, followed by SQLFreeHandle( SQL_HANDLE_DBC, ConnHandle ). However, watching the connection count with sp_Who2 or the SQL Performance Monitor shows the count climbing steadily, despite these connections being destroyed.
This hadn't proven problematic until we executed a chain of functions that runs long enough to create several thousand of these objects and, with them, several thousand connections.
Question:
Has anyone seen anything like this before? What might be causing this? My initial google searches haven't proven very fruitful!
EDIT:
I have verified that SQLDisconnect is returning without error.
Connection pooling is off. In fact, when I attempt to enable it using SQLSetEnvAttr, my application crashes on the second call to SQLDriverConnect.
Check that you are not using connection pooling. If it is turned on, it will cache opened connections for some (configurable) time.
If you are not using connection pooling, then you must check the return value of SQLDisconnect(). You may have a transaction still executing or rolling back that won't let SQLDisconnect() release your connection.
There are more details on how to check for SQLDisconnect errors on MSDN.
I believe I have seen the same issue in an application that uses MFC and ODBC, rather than the SQL Native Client API directly. Occasionally my application hangs on shutdown; the stack trace is:
sqlncli!CCriticalSectionNT::Enter
sqlncli!SQLFreeStmt
sqlncli!SQLFreeConnect
sqlncli!SQLFreeHandle
odbc32!UnloadDriver
odbc32!FreeDbc
odbc32!DestroyIDbc
odbc32!FreeIdbc
odbc32!SQLFreeConnect
mfc42!CDatabase::Close
mfc42!CDatabase::Free
mfc42!CDatabase::~CDatabase
Try as I might, I cannot see anything that might cause such a hang. I'd be grateful if anyone can suggest a solution. It seems others have seen similar issues online, but to date I haven't found any solution.
Since your stack trace has no bottom, can we assume that the CDatabase is a global variable?
Possibly in a DLL?
We saw your exact symptoms when attempting to disconnect from SQL Server from within the destructor of a global variable.
Using the MDAC ODBC drivers works successfully.
Moving the code out of the destructor also works successfully.
It seems to be something to do with SQL Native Client not liking being called from inside DllMain.