Issue with HSQL in-memory database, LockAcquisitionException - locking

The HSQL in-memory database throws CannotAcquireLockException when used multiple times across test cases.
We are using:
jdbc:hsqldb:mem:batchsqldb;sql.syntax_pgs=true
This throws CannotAcquireLockException intermittently; it happens about half of the time. I have also tried hsqldb.lock_file=false in the connection URL, but that did not work.
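One workaround worth trying (an assumption on my part, not something verified against this setup): HSQLDB shares a mem: database of a given name across the whole JVM, so tests that all reuse the name batchsqldb can contend for the same locks. Giving each test context its own database name avoids the sharing entirely; a minimal sketch, using HSQLDB's default SA credentials:

import java.sql.Connection;
import java.sql.DriverManager;

// Hypothetical sketch: a unique in-memory database per test run, so test
// classes stop sharing (and locking) the same "batchsqldb" instance.
String url = "jdbc:hsqldb:mem:batchsqldb_" + System.nanoTime()
        + ";sql.syntax_pgs=true";
Connection con = DriverManager.getConnection(url, "SA", "");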

Related

JDBC - SQLException: The cursor has been previously released and is unavailable

I have a working Java program that has been in production for a long time now. It updates a few thousand records in the database twice a day.
After New Year, when the database took a hit (a lot of processing happens on the 1st) and I updated the other parts of the code to a new version (the whole process consists of 5 programs run together in an Eclipse project; this one is the 3rd of the 5, and I did not change it even a little bit), I get an SQLException:
The cursor has been previously released and is unavailable
Where does the exception happen?
While iterating the ResultSet; it doesn't matter how many rows it has already read (it can happen on the 3rd row, or on the 2000th).
The ResultSet is created on a connection that was used before and is read-only.
The ResultSet is created on a newly created Statement object.
Updates on the same table are done, transaction-wise, through a separate write-only connection.
This example is probably not reproducible.
Database: IBM Informix Dynamic Server Version 14.10.FC7
Eclipse version: 2021-12 (4.22)
Java version: 1.8.0_131
JDBC driver version: 4.50.1
readCon = DriverManager.getConnection(url, user, passwd);
writeCon = DriverManager.getConnection(url, user, passwd);
Statement st = readCon.createStatement();
ResultSet rs = st.executeQuery("SELECT ... FROM table_X ...");
while (rs.next()) {
    // the commit does not happen if no transaction has begun
    writeCon.<commit transaction, begin transaction>
    writeCon.UpdateUsingPreparedStatement("UPDATE table_X ...");
}
...
NOTE: The program runs smoothly, without any problems, when the process is re-run starting from this program (from step 3).
What did I learn from trying to search for a solution?
I didn't find much on the Internet; the only suggested fix was to update the JDBC driver to 4.50.1 (which I am using right now).
In almost all cases where I have seen this, it comes down to 2 types of problems.
Concurrent use of the same JDBC objects. This is almost always the case. The program has threads, and the threads are reusing Connection or Statement objects. This often blows up on you in highly concurrent environments: you end up operating on the same internal statement id, and one thread then closes the statement/cursor being used by another thread, so it looks like the statement closed on you suddenly.
As you say, you do have 2 connections, but make absolutely sure nothing is shared among threads. I've debugged a number of customer applications whose authors thought they had proper separation but in fact did end up sharing some objects among threads. Turning on SQLIDEBUG, or instructing the driver to dump its protocol tracing events, will show who sent the close on the statement. Support teams can help with this analysis. Usually when I do this, I find the close was sent by another thread right in the middle of the work you really wanted done.
Much more rarely, some other issue will cause the cursor to get closed, but in those cases you would see very obvious prior exceptions from the JDBC driver and/or server before you hit this statement-already-closed error. For example, you may have hit the known problem "WARN - Failed to getImportedKeys The cursor has been previously released and is unavailable", which upgrading the driver does fix.
My guess is you have connection objects shared among threads in a way that 99% of the time doesn't clash, but when the system gets really busy, that 1% shows up and causes the issue you are seeing.
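If you want to rule out sharing in code rather than by inspection, a minimal sketch is to hand every thread its own connection through a ThreadLocal (the URL and credentials below are placeholders, not taken from the original program):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class PerThreadConnections {
    private static final String URL = "jdbc:informix-sqli://host:9088/db:INFORMIXSERVER=srv"; // placeholder
    private static final String USER = "user";     // placeholder
    private static final String PASSWD = "passwd"; // placeholder

    // Each thread lazily opens its own Connection, so no other thread can
    // close a Statement or cursor out from under it.
    private static final ThreadLocal<Connection> CON = ThreadLocal.withInitial(() -> {
        try {
            return DriverManager.getConnection(URL, USER, PASSWD);
        } catch (SQLException e) {
            throw new IllegalStateException("could not open per-thread connection", e);
        }
    });

    private PerThreadConnections() { }

    public static Connection get() {
        return CON.get();
    }
}

Statements and ResultSets created from PerThreadConnections.get() then never cross threads, which removes the "another thread closed my cursor" failure mode by construction.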

IBM i SQL - dump plan cache

I run a heavy query on IBM i. The first time it takes a long time; subsequent runs are much faster. It seems to be creating a temporary index. How can I remove this index so that I can re-test as if it were the first run?
Use the Visual Explain (VE) tool in the Run SQL Scripts component of ACS to see the differences between runs.
If indeed the issue is a system maintained temporary index (MTI), you can track it down via the schema's tooling in ACS and delete it if you so desire.
However, an MTI only gets deleted by the system when the system reboots (IPL).
So if you are seeing differences without rebooting the server, I suspect the differences are caused by pseudo-closing. Once the DB sees the same query a few times (3 is the default), instead of hard closing its cursors, it will pseudo-close them.
Again, VE will show "hard opens" and "pseudo opens".
To get the pseudo-closed cursors to hard close, simply disconnect and reconnect.
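In JDBC terms that is just closing and reopening the connection; a minimal sketch, with the connection details left as parameters rather than assumed:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class HardCloseByReconnect {
    // Disconnecting hard-closes the pseudo-closed cursors held for this
    // connection; reconnecting then lets you re-test from a clean state.
    public static Connection reconnect(Connection con, String url,
                                       String user, String passwd) throws SQLException {
        con.close();
        return DriverManager.getConnection(url, user, passwd);
    }
}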

How to forcefully stop long postgres query under heavy load?

I am working on a Rails app with Postgres on Ubuntu. Unfortunately for me, this legacy app uses some heavyweight stored procedures in the db. What's more, the db is quite large (5GB) and my computer is not particularly fast. Every now and then, if I pass some bad parameters from my code to the db, my computer becomes so slow that I cannot even get to the console to kill the postgres process. I assume this is due to some very expensive db query. My only solution is to hard reset my laptop. So my question is: is there a way to forcefully kill a long-running query? Or perhaps, is there a way to limit the CPU and RAM the db is allowed to use, so that I still have some resources left to go and manually restart postgres?
You can set a maximum time for statements to take with the statement_timeout configuration option:
Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server from the client. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off.
You can set this option in a variety of ways, such as in postgresql.conf for everyone, per session with the SET command, or even per database or per role. More information on setting options is in the documentation.
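The app here is Rails, but the SET command is the same from any client; a sketch in plain JDBC for illustration (the connection details, the 30000 ms value, and the procedure name are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class StatementTimeoutDemo {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", "user", "passwd"); // placeholders
        try (Statement st = con.createStatement()) {
            // abort anything in this session that runs longer than 30 s
            st.execute("SET statement_timeout = 30000");
            st.execute("SELECT heavy_stored_procedure()"); // hypothetical heavy call
        }
    }
}

The per-database and per-role variants (ALTER DATABASE ... SET statement_timeout and ALTER ROLE ... SET statement_timeout) persist across sessions, so a runaway query gets aborted even when you forget to set the timeout by hand.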

SqlDataAdapter.Fill suddenly taking a long time

I have an application with a central DataTier that can execute a query into a DataTable using a SqlDataAdapter. None of this code has changed, but now all queries are taking at least 10x as long to execute, even when returning a single record.
The only difference is that I have been using the app in a VM, but the issue started midway through using the application; i.e., the speed issue did not manifest itself from the start of using the VM, but rather halfway through.
Has anyone else had an issue with the SqlDataAdapter taking a long time to fill for no apparent reason? Executing the query in Management Studio, it runs in less than a second.
Firewalls are disabled
OK, after another half day wasted on this, it seems to be an issue related to networking on the Virtual PC.
I have seen a massive improvement by changing the network adapter in the VM to Shared NAT, and I no longer experience the long delay when populating data tables.
It's obviously having an issue resolving the SQL Server.
For anyone else who stumbles across this post, the change that fixed it is the one above: switching the VM's network adapter to Shared NAT.

mysql slow on first query, then fast for related queries

I have been struggling with a problem that only happens when the database has been idle for a period of time with respect to the data being queried. The first query will be extremely slow, on the order of 30 seconds, and then related queries will be fast, around 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the MySQL variables tmp_table_size and max_heap_table_size to larger values had no effect except to create the temp tables in memory.
I do not think this is related to the query itself, as it is well indexed, and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in determining the cause of this, or a way to reset the offending cache, so I can troubleshoot the issue.
Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads, which are mostly what databases see.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, and then accessing them is fast. Or something similar.
This is what I'd expect.
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps), and very small or short-lived tables). If you don't do this, then the engines have to fight for RAM.
Assuming all your tables are InnoDB, make the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into RAM, so once it's "warmed up", the "most used" 12G of your database will be in RAM, where accessing it is nice and fast.
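As a sketch, that sizing is a single line in my.cnf (the 12G figure assumes a dedicated server with roughly 16 GB of physical RAM, which is where the 75% guideline lands):

[mysqld]
# ~75% of physical RAM on an assumed dedicated 16 GB box
innodb_buffer_pool_size = 12G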
Some users of MySQL tend to "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) until they add them into their production pool. This avoids the extreme slowness seen while the cache is cold. For example, YouTube does this (or at least it used to; Google bought them and they may now use Google-fu).
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very similar, and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful for others.
Clear the query history! Following the process described at "MySql workbench query history ( last executed query / queries ) i.e. create / alter table, select, insert update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right-click on a Date, and select Delete All Logs.
The issue I was experiencing was a "slow first query", in that it would take a few seconds to load the results even though the duration/fetch times were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behavior actually changed), but the results now loaded instantly rather than after a delay of a few seconds.
Is anything else running on your mysql server? My thought is that maybe after the first query, your table is still cached in memory. Once it's idle, another process is causing it to be de-cached. Just a guess though.
How much memory do you have, and what else is running?
I had an SSIS package that was timing out. The query was very simple, from a single MySQL table, but it sometimes returned a lot of records and would sometimes take a few minutes to execute initially, then only a few milliseconds afterwards if I queried it again. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall, I tried performing an initial query first, very simple and returning only a few rows. Since it was so simple, it executed fast and put the table into the cache for faster querying. In the next step of the package I would do the more complex query which returned the large data set that kept timing out. Problem solved: all tables loaded. I may start doing this on a regular basis; the complex queries execute much faster after doing a simple query first.
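The package itself isn't reproducible here, but the same warm-up idea in plain JDBC looks roughly like this (the table and column names are placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class WarmUpThenQuery {
    static void run(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            // cheap warm-up query first: per the trick above, this puts the
            // table into the cache before the expensive query runs
            st.executeQuery("SELECT id FROM big_table LIMIT 1").close();
            // the heavy query that used to time out now hits a warm cache
            try (ResultSet rs = st.executeQuery("SELECT * FROM big_table")) {
                while (rs.next()) {
                    // process each row here
                }
            }
        }
    }
}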
Try comparing the output of "vmstat 1" on the Linux command line when running the query after an idle period versus when you re-run it and get results fast. Specifically, check the "bi" column (that's the KB read from disk per second).
You may find the operating system is caching the disk blocks in the fast case (and thus a low "bi" figure), but not in the slow case (and hence a large "bi" figure).
You might also find that vmstat shows high or low CPU usage in either case. If it's low when fast, and disk throughput is also low, then your system may still be returning a cached query, even though you've indicated the relevant config value is set to zero. Perhaps check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES and confirm.
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Try checking the MySQL Performance Blog site for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. This was a connection problem, not a MySQL instance problem; e.g., if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours, and then execute any query, it runs slow, while at the same time processes on the server, or a new instance of the Browser, will select from the same tables instantly.
Adding skip-host-cache and skip-name-resolve to the my.ini file solved this problem.
I don't know why that is. The reason I tried it: MySQL 5.1 without those options was slow to establish connections from other networks (e.g. the server is 192.168.1.100; 192.168.1.101 connects fast, 192.168.2.100 connects slowly). MySQL 5.6 didn't have such a problem to start with, so we didn't add these to my.ini initially.
UPD: That actually solved only half the cases. Setting wait_timeout to the maximum integer fixed the other half. Maybe now I can even remove skip-host-cache and skip-name-resolve and it won't slow down in 100% of the cases.
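For reference, a sketch of the my.ini changes described above (the option names are the real mysqld ones; the wait_timeout ceiling differs by platform, so treat the value as illustrative):

[mysqld]
skip-host-cache
skip-name-resolve
# "maximum integer" per the update above; the actual maximum varies by platform
wait_timeout = 31536000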