How can I get instrumentation in MonetDB? I need details like the planning time and the execution time for each MAL statement in MonetDB. Is there a command for it?
The planning time is reported in the Default branch as part of the EXPLAIN.
For all branches you can use sys.querylog_catalog() to see compilation times.
The timing of individual MAL instructions can be observed with the TRACE modifier.
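A minimal sketch of how these pieces fit together (my_table is just a placeholder for one of your own tables):

EXPLAIN SELECT count(*) FROM my_table;   -- show the MAL plan that would be executed
TRACE   SELECT count(*) FROM my_table;   -- run the query and report per-MAL-instruction timings
CALL sys.querylog_enable();              -- start collecting query log statistics
SELECT * FROM sys.querylog_catalog();    -- inspect the logged queries and their compilation times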
For more information see: https://www.monetdb.org/Documentation/Cookbooks/SQLrecipes/QueryTiming
Related
I currently connect JetBrains' DataGrip IDE to Google BigQuery to run my queries. However, I get the following error: [Simba][BigQueryJDBCDriver](100034) The job has timed out on the server. Try increasing the timeout value. This of course happens when I run a query that may take some time to execute.
I can execute queries that take a short amount of time to complete so the connection does work.
I looked at this question (SQL Workbench/J and BigQuery), but I still did not fully understand how to change the timeout value.
This works well also:
Datasource Properties | Advanced | Timeout : 3600
Please open up the data source properties and add this to the very end of the connection URL: ;Timeout=3600; (note that it is case-sensitive). Try increasing the value until the error is gone.
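For illustration only, the resulting Simba JDBC URL might end up looking something like the line below; the project ID and OAuth settings are placeholders for whatever your existing URL already contains:

jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=my-project;OAuthType=1;Timeout=3600;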
I have a SQL file running many queries. I want to see the accumulated run time of all the queries. I know that if I turn on timing by calling
\timing
query 1;
query 2;
query 3;
...
query n;
at the beginning of the script, it will show the time it takes for each query to run. However, I need the accumulated result of all the queries, without having to add them up manually.
Is there a systematic way? If not, how can I fetch the interim times and put them in a variable?
pg_stat_statements is a module that provides a means for tracking execution statistics.
First, add pg_stat_statements to shared_preload_libraries in the
postgresql.conf file. To know where this .conf file exists in your
filesystem, run show config_file;
shared_preload_libraries = 'pg_stat_statements'
Restart Postgres database
Create the extension
CREATE EXTENSION pg_stat_statements;
Now, the module provides a View, pg_stat_statements, which helps you to analyze various query execution metrics.
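For instance, you can pull the most expensive statements straight out of the view (a sketch; on PostgreSQL 13 and later the timing columns are named total_exec_time/mean_exec_time instead of total_time/mean_time):

select query, calls, total_time, rows
from pg_stat_statements
order by total_time desc
limit 5;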
Reset the statistics collected so far before running your queries.
SELECT pg_stat_statements_reset();
Now, execute your script file containing queries.
\i script_file.sql
You now have timing statistics for all the queries that were executed. To get the total time taken, simply run
select sum(total_time) from pg_stat_statements
where query !~* 'pg_stat_statements';
The time you get is in milliseconds, which can be converted to the desired format using Postgres interval and timestamp functions.
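For example, a minimal sketch that reports the total as an interval instead of raw milliseconds (again assuming the pre-13 column name total_time):

select sum(total_time) * interval '1 millisecond' as total_runtime
from pg_stat_statements
where query !~* 'pg_stat_statements';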
If you want to time the whole script, on Linux or macOS you can use the time utility to launch it.
The measurement in this case is a bit more than the sum of the raw query times, because it includes some overhead of starting and running the psql command. On my system this overhead is around 20ms.
$ time psql < script.sql
…
real 0m0.117s
user 0m0.008s
sys 0m0.007s
The real value is the time it took to execute the whole script, including the aforementioned overhead.
The approach in this answer is a crude, simple client-side way to measure the runtime of the overall script. It is not suitable for measuring millisecond-precision server-side execution times, but it may still be sufficient for many use cases.
The solution by Kaushik Nayak is a far more precise method to time executions directly on the server. It also provides much more insight into the execution (e.g. query-level times).
I've had this query in BigQuery that I have been updating every day for the last few months. It's been fine - some occasional errors, but retrying has solved the problem.
But for the last few days I have been getting the error: The job encountered an error during execution. Retrying the job may solve the problem.
The error description says that it's an external error, so how can I fix that?
I have been retrying (with rather long pauses in between), but I still get the error.
JobID example: bquxjob_152ced5d_169917f0145
Does anyone have any idea what's going on? Are there any data or time limitations I might be hitting (and if so, why only in the last few days)?
You can use GCP Stackdriver to monitor your BigQuery jobs.
Among other things, you can find there the queryTime heatmap and the slot usage, which might help you understand your problem better.
On the subject of external table usage, you can use Google transfer (the BigQuery Data Transfer Service) to schedule a repeated transfer from CSV files into a BigQuery table.
I encountered this dreadfully useless error in a scheduled query. It was working great and then one day it stopped working at all and has been failing ever since without any other explanation. The StackDriver (now "Logs Explorer") showed nothing more enlightening:
jobStatus: {
errorResult: {
code: 14
message: "Error encountered during execution. Retrying may solve the problem."
}
errors: [
0: {
code: 14
message: "Error encountered during execution. Retrying may solve the problem."
}
]
jobState: "DONE"
}
Figuring out the actual issue takes a long time because scheduled queries start slowly since they use BATCH priority. What I found in my case was that the partitioned table and "Partition field" setting in the scheduled query was the culprit. I dropped the table and removed the partition field and voila the thing works again (although far from ideal since I need partitioning).
I hope this helps someone else running up against that useless error but in any case, I hope the good folks working on BigQuery find a better error to bubble up.
I ran into this problem when replacing the contents of a partitioned table. Two retries did not help. When I removed the --range_partitioning from the command the update was processed correctly. The table remained partitioned.
So there seems to be an issue about updates to partitioned tables, and when that is the cause these errors might not benefit from retry. I don't know whether there are other causes of this error.
This kind of issue probably has a lot to do with BigQuery quota errors: https://cloud.google.com/bigquery/docs/troubleshoot-quotas#ts-number-column-partition-quota, as mentioned in other answers, such as the 4,000-partitions-per-table quota.
If I write a query in Apache Hive, it executes a MapReduce job behind the scenes, but how can I run only a map job in Hive?
Thanks
Certain optimized queries do in fact only require a map phase. You may provide a MAPJOIN hint in Hive to achieve this; it is recommended for small secondary tables:
SELECT /*+ MAPJOIN(...) */ * FROM ...
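A hedged sketch with hypothetical tables orders (large) and statuses (small), where the hint asks Hive to load the small table into memory and complete the join in the mappers:

SELECT /*+ MAPJOIN(s) */ o.order_id, s.status_name
FROM orders o
JOIN statuses s ON o.status_id = s.id;

On recent Hive versions, setting hive.auto.convert.join=true lets Hive convert eligible joins to map joins automatically, without the hint.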
This question was asked to me in an interview; I didn't know the answer at the time, but I figured it out later on.
The following query runs a map-only job. Simply selecting column values runs a map-only job, so we don't need a reducer for this scenario.
select id,salary from tableA;
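For contrast, a sketch assuming tableA also had a department column (hypothetical here): adding aggregation or ordering forces a reduce phase, so a query like the following would no longer be map-only:

select department, sum(salary)
from tableA
group by department;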
On our Oracle server (10g), we sometimes get an ORA-4030 error.
ORA-04030: out of process memory when trying to allocate nn bytes
We understand it is related to memory sizing, and we are trying some memory settings.
Other than this, we wanted to know:
(1) whether any specific SQL query patterns can cause this kind of error, and
(2) whether any Oracle SQL query tuning can be applied to avoid it.
Your replies will help.
Thanks in advance.
1) Sorts, DISTINCT, GROUP BY, and hash joins are the operations most likely to give you this error.
2) What OS do you use? On Linux you can see what resource limits apply to your users with ulimit -a.
You should increase the per-process memory available for the PGA.
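For example, with automatic PGA memory management (a sketch only; the 2G figure is purely illustrative and must be sized for your server and workload):

-- check the current target
SELECT name, value FROM v$parameter WHERE name = 'pga_aggregate_target';
-- raise it (illustrative value)
ALTER SYSTEM SET pga_aggregate_target = 2G SCOPE = BOTH;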
Regards
One thing that could be contributing to the error is not freeing cursors. In .NET, a SQLStatement corresponds to a database cursor. Make sure the applications are closing (and disposing of) the SQL statements they use.