How to increase the timeout value for DataGrip's connection to Google BigQuery? - sql

I currently connect JetBrains' DataGrip IDE to Google BigQuery to run my queries. However, I get the following error: [Simba][BigQueryJDBCDriver](100034) The job has timed out on the server. Try increasing the timeout value. This of course happens when I run a query that takes some time to execute.
I can execute queries that take a short amount of time to complete so the connection does work.
I looked at this question (SQL Workbench/J and BigQuery) but I still did not fully understand how to change the timeout value.
The error is seen below in this screenshot:

Setting the timeout through the driver properties also works:
Data Source Properties | Advanced | Timeout : 3600

Please open the data source properties and add this to the very end of the connection URL: ;Timeout=3600; (note that it is case sensitive). Increase the value until the error goes away.
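For illustration, the full URL with the parameter appended might look like this (a sketch; the project ID is a placeholder, and everything before Timeout should stay exactly as DataGrip generated it):
jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=my-project;OAuthType=1;Timeout=3600;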

Related

BigQuery: The job encountered an error during execution

I've had this query in BigQuery that I have been updating every day for the last few months. It's been fine: some occasional errors, but retrying has solved the problem.
But for the last few days I have been getting the error: The job encountered an error during execution. Retrying the job may solve the problem.
The error description says that it's an external error, so how can I fix that?
I have been retrying (with rather long pauses in between), but I still get the error.
JobID example: bquxjob_152ced5d_169917f0145
Does anyone have any idea what's going on? Are there any data/time limitations I might be encountering (but why only in the last few days then)?
You can use GCP Stackdriver to monitor your BigQuery process using this URL.
Interesting information you can find there includes the queryTime heatmap and the slot usage, which might help you understand your problem better.
On the subject of external table usage, you can use Google Transfer (see this link for details) to schedule a repeated transfer from CSV to a BigQuery table.
The image below shows how to get to the transfer setup page from the web UI.
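For comparison, a one-off version of the same CSV-to-table load can be done from the bq command line (a sketch; the dataset, table, and bucket names are placeholders, and the target table is assumed to exist already):
bq load --source_format=CSV --skip_leading_rows=1 mydataset.mytable gs://mybucket/data.csv
The transfer service essentially automates repeating a load like this on a schedule.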
I encountered this dreadfully useless error in a scheduled query. It was working great, and then one day it stopped working entirely and has been failing ever since, without any other explanation. Stackdriver (now "Logs Explorer") showed nothing more enlightening:
jobStatus: {
  errorResult: {
    code: 14
    message: "Error encountered during execution. Retrying may solve the problem."
  }
  errors: [
    0: {
      code: 14
      message: "Error encountered during execution. Retrying may solve the problem."
    }
  ]
  jobState: "DONE"
}
Figuring out the actual issue takes a long time, because scheduled queries start slowly since they use BATCH priority. What I found in my case was that the partitioned table and the "Partition field" setting in the scheduled query were the culprit. I dropped the table and removed the partition field, and voilà, the thing works again (although far from ideal, since I need partitioning).
I hope this helps someone else running up against this useless error, but in any case, I hope the good folks working on BigQuery find a better error to bubble up.
I ran into this problem when replacing the contents of a partitioned table. Two retries did not help. When I removed the --range_partitioning flag from the command, the update was processed correctly, and the table remained partitioned.
So there seems to be an issue with updates to partitioned tables, and when that is the cause, these errors might not benefit from a retry. I don't know whether there are other causes of this error.
This kind of issue probably has a lot to do with BigQuery quota errors: https://cloud.google.com/bigquery/docs/troubleshoot-quotas#ts-number-column-partition-quota, as mentioned by other answers, such as the 4,000 partitions-per-table quota.
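To check whether a table is anywhere near that quota, you can count its partitions (a sketch; mydataset and mytable are placeholders):
SELECT table_name, COUNT(partition_id) AS partition_count
FROM mydataset.INFORMATION_SCHEMA.PARTITIONS
WHERE table_name = 'mytable'
GROUP BY table_name;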

Simple queries take very long

When I execute a query for the first time in DBeaver it can take up to 10-15 seconds to display the result. In SQLDeveloper those queries only take a fraction of that time.
For example:
Simple "select column1 from table1" statement
DBeaver: 2006ms,
SQLDeveloper: 306ms
Example 2 (the other way around, so there's no server-side caching):
Simple "select column1 from table2" statement
SQLDeveloper: 252ms,
DBeaver: 1933ms
DBeaver's status box says:
Fetch resultset
Discover attribute column1
Find attribute column1
Late bind attribute column1
2, 3 and 4 use most of the query execution time.
I'm using Oracle 11g, SQLDeveloper 4.1.1.19, and DBeaver 3.5.8.
See http://dbeaver.jkiss.org/forum/viewtopic.php?f=2&t=1870
What could be the cause?
DBeaver looks up some metadata related to objects in your query.
On an Oracle DB, it queries catalog tables such as
SYS.ALL_ALL_TABLES / SYS.ALL_OBJECTS - only once after connecting, for the first query you execute
SYS.ALL_TAB_COLS / SYS.ALL_INDEXES / SYS.ALL_CONSTRAINTS / ... - I believe each time you query a table you have not used before.
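As an illustrative sketch of that kind of lookup (the exact statements DBeaver issues vary by version; the schema and table names are placeholders):
SELECT column_name, data_type, data_length
FROM SYS.ALL_TAB_COLS
WHERE owner = 'MYSCHEMA' AND table_name = 'TABLE1';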
Version 3.6.10 introduced an option to enable/disable a hint used in those queries. Disabling the hint made a huge difference for me. The option is in the Oracle Properties tab of the connection edit dialog. Have a look at issue 360 on DBeaver's GitHub for more info.
The best way to get insight is to perform a database trace.
Run the query a few times first to eliminate caching effects.
Then repeat the following steps in both IDEs (a consolidated script follows this answer):
activate the trace
ALTER SESSION SET tracefile_identifier = test_IDE_xxxx;
alter session set events '10046 trace name context forever, level 12'; /* binds + waits */
Provide the xxxx part to identify the test; you will see this string as part of the trace file name.
Use level 12 to see the wait events and bind variables.
run the query
close the connection
This is important so that you do not trace other activity.
Examine the two trace files to see:
what statements were performed
what number of rows was fetched
what time was elapsed in DB
for the rest of the time the client (IDE) is responsible
This should give you enough evidence to determine whether one IDE behaves differently from the other, or whether the DB statements issued are simply different.
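For convenience, here is the whole trace session as one script (a sketch; the identifier value is arbitrary, and v$diag_info is available from Oracle 11g onward):
-- tag the trace file so it is easy to find
ALTER SESSION SET tracefile_identifier = 'test_IDE_dbeaver';
-- level 12 = binds + waits
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- run the query under test here, then switch tracing off
ALTER SESSION SET events '10046 trace name context off';
-- locate the resulting trace file
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
The raw trace file can then be summarized with the tkprof utility.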

PostgreSQL job scheduler using pgAgent giving error with update query

I have created a job scheduler using pgAgent in PostgreSQL:
What I did is shown in the screenshots.
I created it like this to update a name field in my database at a certain time. But when I check it, it is failing.
The failed status is as follows:
What did I do wrong? How can I correct it?
I also faced exactly the same problem. By trial and error, I changed the Connection Type from Local to Remote and gave the following connection string
user=some_user password=some_password host=localhost port=5432 dbname=some_database
in the properties of the Step, and it worked. So the trick is to treat even the local server as a remote server.

SQL error:8152, but not over max?

I'm part of a team writing an ERP using Seam and JBoss, and on one of my pages, I keep getting SQL error 8152 whenever I try to input something. SQL error 8152, for those of you who don't know, occurs when you try to insert a value that exceeds a column's maximum length.
I've double-checked my entity and the database, and their maximum length limits are the same (nvarchar(50)). In addition, I'm pretty sure we're not using audit tables. I then put System.out.println(""); all over the place, and found that the error was happening between these two println(s):
System.out.println("Flushing");
entityManager.flush();
System.out.println("Flushing complete");
which is part of a method that processes all changes to the table. But I'm pretty new to programming and am not sure what's going on.
Any help would be appreciated, thanks in advance, Jeff.
P.S. Code on request, but I didn't post it because there is a lot of it all over the place.
I would verify the SQL that is being executed when the flush() is performed. That way you can see the length of your data and verify that it is too big as shown by the DB error.
If you are using Hibernate, you can output SQL to the console. You don't say what your DB is, but if it's SQL Server you can use the profiler to see what SQL is being executed.
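If it does turn out to be SQL Server, a quick check on the data itself can confirm the diagnosis (a sketch; the_table and the_column are hypothetical stand-ins for the names in your entity):
SELECT the_column, LEN(the_column) AS actual_length
FROM the_table
WHERE LEN(the_column) > 50;
Any rows returned would not fit in an nvarchar(50) column.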

All of a Sudden, SQL Server Timeout

We have a legacy VB.NET application that was working for years.
But all of a sudden it stopped working yesterday and gives a SQL Server timeout.
Most parts of the application give a timeout error; one part, for example, is the code below:
command2 = New SqlCommand("select * from Acc order by AccDate,AccNo,AccSeq", SBSConnection2)
reader2 = command2.ExecuteReader()
If reader2.HasRows() Then
While reader2.Read()
If IndiAccNo <> reader2("AccNo") Then
CAccNo = CAccNo + 1
CAccSeq = 10001
IndiAccNo = reader2("AccNo")
Else
CAccSeq = CAccSeq + 1
End If
command3 = New SqlCommand("update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq", SBSConnection3)
command3.Parameters.Add("@AccNo", SqlDbType.Int).Value = reader2("AccNo")
command3.Parameters.Add("@AccSeq", SqlDbType.Int).Value = reader2("AccSeq")
command3.Parameters.Add("@NewAccNo", SqlDbType.Int).Value = CAccNo
command3.Parameters.Add("@NewAccSeq", SqlDbType.Int).Value = CAccSeq
command3.ExecuteNonQuery()
End While
End If
It was working, and now it times out in command3.ExecuteNonQuery().
Any ideas?
~~~~~~~~~~~
Some information:
Nothing has changed on the network, and the app uses a local database.
The main issue is that it doesn't work anymore even in the development environment.
I'll state the obvious - something changed. It could be an upgrade that isn't having the desired effect - it could be a network component going south - it could be a flakey disk - it could be many things - but something in the access path has changed. What other problem indications are you seeing, including problems not directly related to this application? Where is the database stored (local disk, network storage box, written by angels on the head of a pin, other)? Has your system administrator "helped" or "improved" things somehow? The code has not worn out - something else has happened.
Is it possible that this query has been getting slower over time and has now just exceeded the default timeout?
How many records are in the Acc table, and are there indexes on AccNo and AccSeq?
Also, what version of SQL Server are you using?
How long since you updated statistics and rebuilt indexes? (A sketch of both follows this answer.)
How much has your data grown? Queries that work fine for small datasets can be bad for large ones.
Are you getting locking issues? Have you checked Activity Monitor to see if there are locks when the timeout occurs?
Have you run Profiler to grab the query that is timing out and then run it directly on the server? Is it faster then? It could also be a network issue in moving the information from the database server to the application. That would at least tell you whether it is a SQL Server issue or a network issue.
And like Bob Jarvis said, what has recently changed on the server? Has something changed in the database structure itself? Has someone added a trigger?
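Concretely, the statistics refresh, index rebuild, and lock check mentioned above look something like this on SQL Server (a sketch; Acc is the table from the question):
-- refresh optimizer statistics for the table
UPDATE STATISTICS Acc WITH FULLSCAN;
-- rebuild all indexes on the table
ALTER INDEX ALL ON Acc REBUILD;
-- look for lock requests that are waiting while the timeout occurs
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_status = 'WAIT';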
I would suggest that there is a lock on one of the records that you are trying to update, or there are transactions that haven't been completed.
I know this is not part of your question, but after seeing your sample code I have to make this comment: is there any chance you could change your method of executing SQL on your database? It is bad on so many levels.
Perhaps you should set the CommandTimeout property to a higher value?
Doing so allows your command to wait a little longer for the underlying database to respond. As I see it, perhaps you are not allowing enough time for your database engine to finish what is required before you create another command to perform your update.
Keep in mind that the SqlDataReader keeps the SELECT running while feeding the in-memory objects. Then, while still reading, your code issues updates against the same table, which the database engine cannot complete within the time your SqlCommand allows, so it times out.
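For example, on the code from the question this would be a single line before calling ExecuteNonQuery (a sketch; 120 seconds is an arbitrary value):
command3.CommandTimeout = 120 ' seconds to wait before timing out; the default is 30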
Any chance of quotes being part of the strings you are passing to the queries?
Any chance of date-dependent queries where a special condition no longer matches?
Have you tested the obvious?
Have you run the "update Acc Set AccNo=@NewAccNo,AccSeq=@NewAccSeq where AccNo=@AccNo and AccSeq=@AccSeq" query directly in SQL Server Management Studio? (Please replace the parameters with some hard-coded values.)
Have you run the same test on a colleague's PC?
Can we make sure that the SqlConnection is working fine? It could be that the SQL login credentials have changed and the connection is timing out. It would probably be more helpful if you posted the exact error message here.
You can rewrite the update as a single query. This will run much faster than the original row-by-row loop.
UPDATE subquery
SET AccNo = NewAccNo, AccSeq = NewAccSeq
FROM
(SELECT AccNo, AccSeq,
DENSE_RANK() OVER (ORDER BY AccNo) NewAccNo,
ROW_NUMBER() OVER (PARTITION BY AccNo ORDER BY AccDate, AccSeq)
+ 10000 NewAccSeq
FROM Acc) subquery
After HLGEM's suggestions, I would check the data and make sure it is okay. In cases like this, 95% of the time it is the data.
Make sure the disk is defragmented. Yes, I know, but it does make a difference. Not with the built-in defragmenter, but with one that defragments and optimizes, like PerfectDisk.
This may be a bit of a long shot, but if your entire application has stopped working, have you run out of space for the transaction log in your database? Either it's been specified to an absolute size, and that has been reached, or your disk is just full.
Maybe your tables now contain more data, and the SqlConnection.ConnectionTimeout value defined in your config file is too small, so it is no longer enough for your queries to execute.
You can also try optimizing your queries and rebuilding indexes.