Hello, I'm using HikariCP to access the database via JDBC.
I want to cancel/abort the query if it takes too long, so I added the properties for it.
I run a query that takes about 30 seconds and set the timeout to 1, but it does not seem to work.
The query runs like before.
val config: HikariConfig = HikariConfig()
config.jdbcUrl = "$url;encrypt=true;trustServerCertificate=true;"
config.username = username
config.password = password
config.addDataSourceProperty("cachePrepStmts", "true")
config.addDataSourceProperty("prepStmtCacheSize", "250")
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048")
config.addDataSourceProperty("queryTimeout", "1")
config.addDataSourceProperty("cancelQueryTimeout", "1")
I'm using the latest JDBC driver (11.2.1.jre17) and HikariCP 5.0.1.
I also tried to put it in the URL directly, but it still has no effect.
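The URL variant looked roughly like this (illustrative reconstruction; queryTimeout and cancelQueryTimeout appended as plain connection-string properties):
$url;encrypt=true;trustServerCertificate=true;queryTimeout=1;cancelQueryTimeout=1;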
Any thoughts on what I'm doing wrong or where I should look?
I'm working on a script that will tell me the running status (running/stopped) of a desired application on a remote desktop (Windows Server).
My code works fine until I log in to the remote desktop (using "Remote Desktop Connection"). If I close the connection without logging off, it continues to work fine, but as soon as I log off there, it stops working. One thing I noticed: even after logging off, when I run the command over the SSH client, it still gives a successful acknowledgement.
I do get the desired output when a remote desktop connection to that server is open from any other computer on the network.
All of the following code and output is from after I have logged off from the Remote Desktop Connection.
string runCommand = "wmic process call create \"TestClient.exe\"";
SshCommand command = ssh.RunCommand(runCommand);
string myData = command.Result;
After this, myData will contain:
Executing (Win32_Process)->Create()
Method execution successful.
Out Parameters:
instance of __PARAMETERS
{
ProcessId = [some numeric pid]; // when I am logged off, this field is missing (the problem case)
ReturnValue = [some numeric];
};
But after doing this, when I check the status of the test client with the following code...
string runCommand = "wmic process where \"name='TestClient.exe'\" get ProcessID, ExecutablePath";
SshCommand command = ssh.RunCommand(runCommand);
string myData = command.Result;
But there is no running app listed in myData!
I connect the SSH client as follows:
string pass = "password";
PasswordAuthenticationMethod PasswordConnection = new PasswordAuthenticationMethod("user_name", pass);
KeyboardInteractiveAuthenticationMethod KeyboardInteractive = new KeyboardInteractiveAuthenticationMethod("user_name");
ConnectionInfo connectionInfo = new ConnectionInfo(serverIP, port, "user_name", PasswordConnection, KeyboardInteractive);
SshClient ssh = new SshClient(connectionInfo);
if (!ssh.IsConnected)
ssh.Connect();
Yeah, it was a very simple problem to solve, but easy to miss. I had made the test client (the one whose running status I was checking) run through Task Scheduler and forgot to tick the checkbox that keeps it running even after logoff. After checking that checkbox, all is fine.
I am executing some long-running queries using the BigQuery Java client.
I construct a BigQuery job and execute it like this:
val queryRequest = new QueryRequest().setQuery(query)
val queryJob = client.jobs().query(ProjectId, queryRequest)
queryJob.execute()
The problem I am facing is that, for the same query, the client returns before the job is complete, i.e. the number of rows in the result is zero.
I tried printing the response, and it shows:
{"jobComplete":false,"jobReference":{"jobId":"job_bTLRGrw5_xR26i9Li3a9EQvuA6c","projectId":"analytics-production"},"kind":"bigquery#queryResponse"}
From that I can see that the job is not complete. Then why did the client return before the job was complete?
While building the client, I use an HttpRequestInitializer, and in the initialize method I provide the timeout parameters:
override def initialize(request: HttpRequest): Unit = {
request.setConnectTimeout(...)
request.setReadTimeout(...)
}
I tried giving high values for the timeout, like 240 seconds, but no luck. The behavior is still the same; it fails intermittently.
Make sure you set the timeout on the BigQuery request body, and not on the HTTP object.
val queryRequest = new QueryRequest().setQuery(query).setTimeoutMs(10000) //10 seconds
The param is timeoutMs. This is documented here: https://cloud.google.com/bigquery/docs/reference/v2/jobs/query
Please also read the docs regarding this field: "How long to wait for the query to complete, in milliseconds, before the request times out and returns. Note that this is only a timeout for the request, not the query. If the query takes longer to run than the timeout value, the call returns without any results and with the 'jobComplete' flag set to false. You can call GetQueryResults() to wait for the query to complete and read the results. The default value is 10000 milliseconds (10 seconds)."
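For illustration, a rough sketch of that wait-and-read loop with the Java client (the snippets above are Scala, but the API is the same; client, ProjectId and queryRequest are assumed to be set up as in the question, and exception handling is elided):
// Rough sketch: issue the query, then poll getQueryResults() until the job completes.
// QueryResponse and GetQueryResultsResponse come from com.google.api.services.bigquery.model.
QueryResponse response = client.jobs().query(ProjectId, queryRequest).execute();
String jobId = response.getJobReference().getJobId();
GetQueryResultsResponse results = client.jobs().getQueryResults(ProjectId, jobId).execute();
while (!Boolean.TRUE.equals(results.getJobComplete())) {
    Thread.sleep(1000); // simple fixed delay; tune or back off as needed
    results = client.jobs().getQueryResults(ProjectId, jobId).execute();
}
// results.getRows() now holds the result rows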
More about Synchronous queries here
https://cloud.google.com/bigquery/querying-data#syncqueries
We have the following code:
var db = new CoreEntityDB();
var abc = new abcDB();
var connection = new DataStore(db.ConnectionStrings.First(p => p.Name == "Abc").Value, DataStore.Server.SqlServer);
var projects = new List<abc_Employees>();
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
The project is failing on the following line:
projects.AddRange(abc.Database.SqlQuery<abc_Employees>("usp_ABC_EmployeeSys"));
And the error says: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding"
Everything was working fine a few days ago, and now, nothing. Nothing has changed either, as far as the code or the SQL stored proc goes.
Anyone else experienced this before?
Did you try to run the SP independently to see if that's the bottleneck?
Is it the command that is timing out?
You can increase the command timeout using:
((IObjectContextAdapter)abc).ObjectContext.CommandTimeout = 180;
You should take a look at your stored procedure. The default timeout is 30 seconds so it looks like it is taking longer for the stored procedure to return results. Increasing the timeout is just treating the symptoms.
I'm using hsqldb to create cached tables and indexed tables.
The data being stored arrives at a pretty high frequency, so I need to use a connection pool.
Also, because there is a lot of data, I do not call checkpoint on every commit, but rather expect the data to be flushed after 50,000 rows are inserted.
The thing is that I can see the .data file growing, but when I connect with an hsqldb client I don't see the tables or the data.
So I ran 2 simple tests: one inserted a single row and one inserted 60,000 rows into a new table. In both cases I couldn't see the result in any hsqldb client.
(Note that I use shutdown=true)
When I add a checkpoint after each commit, it solves the problem.
Also, if I specify in the connection string to use the log, it solves the problem (though I don't want the log in production). Not using a pooled connection also solved the problem, and lastly, so did using the pooled data source and explicitly closing it before shutdown.
So I guess that some connections in the connection pool are not being closed, preventing the db from somehow committing the changes and making them available to the client. But then, why couldn't I see the result even with 60,000 rows?
I also would expect the pool to be closed automatically...
What am I doing wrong? What is happening behind the scenes?
The code to get the data source looks like this:
Class.forName("org.hsqldb.jdbcDriver");
String url = "jdbc:hsqldb:" + m_dbRoot + dbName + "/db" + ";hsqldb.log_data=false;shutdown=true;hsqldb.nio_data_file=false";
ConnectionFactory connectionFactory = new DriverManagerConnectionFactory(url, user, password);
GenericObjectPool connectionPool = new GenericObjectPool();
KeyedObjectPoolFactory stmtPool = new GenericKeyedObjectPoolFactory(null);
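// note: in DBCP 1.x the PoolableConnectionFactory constructor registers itself with connectionPool, which is why the instance on the next line is not assigned to a variable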
new PoolableConnectionFactory(connectionFactory, connectionPool, stmtPool, null, false, true);
DataSource ds = new PoolingDataSource(connectionPool);
And I'm using this pooled data source to create a table:
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
String script = String.format("CREATE CACHED TABLE IF NOT EXISTS %s (id %s NOT NULL, entity %s NOT NULL, PRIMARY KEY (id));", m_tableName, m_idGenerator.getIdType(), TABLE_ENTITY_TYPE);
st.execute(script);
st.close();
c.close();
And to insert rows:
Connection c = m_dataSource.getConnection();
c.setAutoCommit(false);
PreparedStatement stmt = c.prepareStatement(m_sqlInsert);
stmt.setObject(1, id);
stmt.setBinaryStream(2, Serializer.Helper.serialize(m_serializer, entity));
stmt.executeUpdate();
c.commit();
stmt.close();
c.close();
So the above seems to add data, but it cannot be seen.
When I explicitly called
connectionPool.close();
Then, and only then, could I see the result.
I also tried to use JDBCDataSource and it worked as well.
So what is going on? And what is the right way to do this?
Your method of accessing the database from outside your application process is simply wrong.
Only one Java process is supposed to connect to a file: database.
In order to achieve your aim, launch an HSQLDB server within your application, using exactly the same JDBC URL. Then connect to this server from the external client.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/listeners-chapt.html#lsc_app_start
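A minimal sketch of that, reusing the path pieces from your code (the port and exact settings here are just placeholders):
// Start an in-process HSQLDB server so external clients can connect while the app runs.
// m_dbRoot and dbName are the same variables used to build the JDBC URL in the question.
org.hsqldb.server.Server server = new org.hsqldb.server.Server();
server.setDatabaseName(0, dbName);
server.setDatabasePath(0, "file:" + m_dbRoot + dbName + "/db;hsqldb.log_data=false");
server.setPort(9001); // example port
server.start();
// An external client can then connect with: jdbc:hsqldb:hsql://localhost:9001/<dbName>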
Update: The OP commented that the external client was used after the application had stopped. Because you have turned the log off with hsqldb.log_data=false, nothing is persisted permanently. You need to perform an explicit CHECKPOINT or SHUTDOWN when your application completes its work. You cannot rely on shutdown=true at all, even without connection pooling.
See the Guide:
http://www.hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations
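As a rough sketch, an explicit shutdown before your application exits (using the data source and pool from your code) could look like this:
// With hsqldb.log_data=false nothing is persisted permanently until a CHECKPOINT or
// SHUTDOWN runs, so execute one explicitly before the process ends.
Connection c = m_dataSource.getConnection();
Statement st = c.createStatement();
st.execute("SHUTDOWN"); // or "CHECKPOINT" to persist without shutting the database down
st.close();
c.close();
connectionPool.close(); // and close the pool itself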
I started to use Dapper.NET a while ago for performance reasons, and because I really like the named parameters feature compared to just running "ExecuteQuery" in LINQ to SQL.
It works great for most queries, but I get some really weird timeouts from time to time. The strangest thing is that this timeout only happens when the SQL is executed via Dapper. If I take the executed query copied from the profiler and just run it in Management Studio, it's fast and works perfectly. And it's not just a temporary issue: the query consistently times out via Dapper and consistently works fine in Management Studio.
exec sp_executesql N'SELECT Item.Name,dbo.PlatformTextAndUrlName(Item.ItemId) As PlatformString,dbo.MetaString(Item.ItemId) As MetaTagString, Item.StartPageRank,Item.ItemRecentViewCount,
NAME_SRCH.RANK as NameRank,
DESC_SRCH.RANK As DescRank,
ALIAS_SRCH.RANK as AliasRank,
Item.itemrecentviewcount,
(COALESCE(ALIAS_SRCH.RANK, 0)) + (COALESCE(NAME_SRCH.RANK, 0)) + (COALESCE(DESC_SRCH.RANK, 0) / 20) + Item.itemrecentviewcount / 4 + ((CASE WHEN altrank > 60 THEN 60 ELSE altrank END) * 4) As SuperRank
FROM dbo.Item
INNER JOIN dbo.License on Item.LicenseId = License.LicenseId
LEFT JOIN dbo.Icon on Item.ItemId = Icon.ItemId
LEFT OUTER JOIN FREETEXTTABLE(dbo.Item, name, @SearchString) NAME_SRCH ON
Item.ItemId = NAME_SRCH.[KEY]
LEFT OUTER JOIN FREETEXTTABLE(dbo.Item, namealiases, @SearchString) ALIAS_SRCH ON
Item.ItemId = ALIAS_SRCH.[KEY]
INNER JOIN FREETEXTTABLE(dbo.Item, *, @SearchString) DESC_SRCH ON
Item.ItemId = DESC_SRCH.[KEY]
ORDER BY SuperRank DESC OFFSET @Skip ROWS FETCH NEXT @Count ROWS ONLY',N'@Count int,@SearchString nvarchar(4000),@Skip int',@Count=12,@SearchString=N'box,com',@Skip=0
That is the query I copy-pasted from SQL Profiler. I execute it like this in my code:
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["Conn"].ToString())) {
connection.Open();
var items = connection.Query<MainItemForList>(query, new { SearchString = searchString, PlatformId = platformId, _LicenseFilter = licenseFilter, Skip = skip, Count = count }, buffered: false);
return items.ToList();
}
I have no idea where to start here. I suppose there must be something going on with Dapper, since it works fine when I just execute the query myself.
As you can see in this screenshot, this is the same query executed via code first and then via Management Studio.
I can also add that this only happens (I think) when I have two or more words or a "stop" character in the search string. So it may have something to do with the full-text search, but I can't figure out how to debug it since it works perfectly from Management Studio.
And to make matters even worse, it works fine on my localhost with an almost identical database, both from code and from Management Studio.
Dapper is nothing more than a utility wrapper over ADO.NET; it does not change how ADO.NET operates. It sounds to me like the problem here is "works in SSMS, fails in ADO.NET". This is not unique; it is pretty common to find this occasionally. Likely candidates:
SET options: these have different defaults in ADO.NET and can impact performance, especially if you have things like calculated + persisted + indexed columns. If the SET options aren't compatible, it can decide it can't use the stored value (hence not the index) and instead table-scan and recompute. There are other similar scenarios.
System load / transaction isolation level / blocking: running something in SSMS does not reproduce the entire system load at that moment in time.
Cached query plans: sometimes a duff plan gets cached and used; running from SSMS will usually force a new plan, which will naturally be tuned for the parameters you are using in your test. Update all your index stats etc., and consider adding the OPTIMIZE FOR query hint.
In ADO.NET the default value for CommandTimeout is 30 seconds; in Management Studio it is infinite. Adjust the command timeout when calling Query<>, see below.
var param = new { SearchString = searchString, PlatformId = platformId, _LicenseFilter = licenseFilter, Skip = skip, Count = count };
var queryTimeoutInSeconds = 120;
using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["Conn"].ToString()))
{
connection.Open();
var items = connection.Query<MainItemForList>(query, param, commandTimeout: queryTimeoutInSeconds, buffered: false);
return items.ToList();
}
See also
SqlCommand.CommandTimeout Property on MSDN
For Dapper, the default timeout is 30 seconds, but we can increase it in this way. Here we are increasing the timeout to 240 seconds (4 minutes).
public DataTable GetReport(bool isDepot, string fetchById)
{
int? queryTimeoutInSeconds = 240;
using (IDbConnection _connection = DapperConnection)
{
var parameters = new DynamicParameters();
parameters.Add("#IsDepot", isDepot);
parameters.Add("#FetchById", fetchById);
var res = this.ExecuteSP<dynamic>(SPNames.SSP_GetSEPReport, parameters, queryTimeoutInSeconds);
return ToDataTable(res);
}
}
In the repository layer, we can call our custom ExecuteSP method for stored procedures with the additional parameter "queryTimeoutInSeconds".
And below is the "ExecuteSP" method for Dapper:
public virtual IEnumerable<TEntity> ExecuteSP<TEntity>(string spName, object parameters = null, int? parameterForTimeout = null)
{
using (IDbConnection _connection = DapperConnection)
{
_connection.Open();
return _connection.Query<TEntity>(spName, parameters, commandTimeout: parameterForTimeout, commandType: CommandType.StoredProcedure);
}
}
It could be a matter of the command timeout in Dapper. Here's an example of how to adjust it:
Setting Command Timeout in Dapper