Measuring the average elapsed time for SQL code running in Google BigQuery - sql

As BigQuery is a shared resource, it is possible to get different timings when running the same code. One option I always use is to turn off caching in Query Settings (Cache preference), so queries are not served from cache. The problem with this setting is that if you refresh the browser or leave it idle, that Cache Preference box gets ticked again.
Anyhow, I had a discussion with some developers who are optimizing the code. In a nutshell, they take the slow-running code, run it 5 times and take the average, then after optimization run the code another 5 times to get an average value for the optimized SQL. The details are not clear to me. However, my preference would be (all in the BQ console):
Create a user session
Turn off SQL caching
In the BQ console, paste the slow-running code
In the same session, paste the optimized code
Run both pieces of code (separated by ";")
This will ensure that any systematics (BQ busy/overloaded, slow connection, etc.) affect BOTH SQL pieces equally and the systematics cancel out. In my opinion one only needs to run it once, since caching is turned off as well. Doesn't running 5 times to get an average look excessive and superfluous?
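Once both statements have finished, their wall-clock times can also be read from the jobs metadata rather than a stopwatch. A minimal sketch (assuming the project runs in the US region; the 1-hour window is arbitrary):

-- Compare the elapsed time of the recent runs; cache_hit should be
-- false for both, since caching was turned off.
SELECT
  job_id,
  cache_hit,
  TIMESTAMP_DIFF(end_time, start_time, MILLISECOND) AS elapsed_ms,
  total_slot_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)
ORDER BY creation_time DESC;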
Appreciate any suggestions/feedback
Thanks

Measuring the time is one way; the other way to see whether the query has been optimized is understanding the query plan and whether slots are used effectively.
I've been working with BigQuery for more than 6 years, and I have never used the approach you describe. In BigQuery what actually matters is reducing cost, and that can be done by iteratively rewriting the query and using partitioning/clustering/materialized views and caching/temporary tables.
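For example, a single partitioning/clustering change often cuts scanned bytes far more than any timing exercise will show; a sketch with hypothetical table and column names:

-- Queries filtering on event_ts / customer_id now scan (and bill)
-- only the matching partitions and clusters.
CREATE TABLE my_dataset.events_optimized
PARTITION BY DATE(event_ts)
CLUSTER BY customer_id
AS SELECT * FROM my_dataset.events_raw;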

Related

Which one is best, Cached or Non-Cached Google BigQuery in a C# application

I have developed a C# application that reads data from Google BigQuery using the .NET client library.
Query:
SELECT SUM(Salary), COUNT(Employee_ID) FROM Employee_Details
If I use a non-cached query (JobConfig.UseCacheQuery=false) in the job configuration, I am able to get the result in ~6 seconds.
If I use a cached query (JobConfig.UseCacheQuery=true) in the job configuration, I am able to get the same result in ~2 seconds.
Which is the best way to use Google BigQuery, cached or non-cached? (Cached query execution time is faster than non-cached.)
Are there any drawbacks to cached queries? Kindly clarify this.
If you run a BigQuery query twice in a row, the query cache will allow the second query invocation to simply return the same results that the first query already computed, without actually running the query again. You get your results faster, and you don't get charged for it.
The query cache is a simple way to prevent customers from overspending by repeating the same query, which sometimes happens in automated environments.
Query caching is on by default, and I would recommend leaving it enabled unless you have a specific reason to disable it. One reason you might disable caching is if you are doing performance testing and want to actually run the query to see how long it takes. But those scenarios are rare.
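If you want to verify which invocations were actually served from cache, a rough sketch against the jobs metadata (the "region-us" qualifier is an assumption about where the project runs):

-- Cache hits show cache_hit = TRUE and bill 0 bytes.
SELECT
  job_id,
  cache_hit,
  total_bytes_billed,
  TIMESTAMP_DIFF(end_time, start_time, MILLISECOND) AS elapsed_ms
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY);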
Read more here:
https://cloud.google.com/bigquery/querying-data#querycaching

Get ALL SQL queries executed in the last several hours in Oracle

I need a way to collect all queries executed in an Oracle DB (Oracle Database 11g) in the last several hours, regardless of how fast the queries are (this will be used to calculate SQL coverage after running tests against my application, which has several points where queries are executed).
I cannot use views like V$SQL since there's no guarantee that a query will remain there long enough. It seems I could use DBA_HIST_SQLTEXT, but I didn't find a way to filter out queries executed before the current test run.
So my question is: what table could I use to get absolutely all queries executed in the given period of time (up to 2 hours) and what DB configuration should I adjust to reach my goal?
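For reference, filtering DBA_HIST_SQLTEXT by snapshot window would look roughly like the sketch below (it needs the Diagnostics Pack license, and AWR only keeps "top" statements, which is exactly the limitation at issue here):

-- Statements captured in AWR snapshots from the last 2 hours.
SELECT DISTINCT t.sql_id, t.sql_text
FROM dba_hist_sqltext t
JOIN dba_hist_sqlstat s ON s.sql_id = t.sql_id AND s.dbid = t.dbid
JOIN dba_hist_snapshot n ON n.snap_id = s.snap_id AND n.dbid = s.dbid
WHERE n.begin_interval_time >= SYSTIMESTAMP - INTERVAL '2' HOUR;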
"I need the query itself since I need to learn which queries out of all queries coded in the application are executed during the test"
The simplest way of capturing all the queries executed would be to enable Fine-Grained Auditing on all your database tables. You could use the data dictionary to generate policies for every application table (a sketch follows below).
Note that even when writing to an OS file, such a number of policies will have a high impact on the database and will increase the time it takes to run the tests. Therefore you should only enable these policies to assess your test coverage, and disable them for other test runs.
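A rough sketch of generating those policies (APP_OWNER is a hypothetical schema name, and the generated policy names must stay within Oracle's identifier length limit):

BEGIN
  FOR t IN (SELECT owner, table_name
            FROM all_tables
            WHERE owner = 'APP_OWNER') LOOP
    DBMS_FGA.ADD_POLICY(
      object_schema   => t.owner,
      object_name     => t.table_name,
      policy_name     => 'COV_' || t.table_name,   -- keep under the name-length limit
      statement_types => 'SELECT,INSERT,UPDATE,DELETE');
  END LOOP;
END;
/

The captured statements can then be read back from DBA_FGA_AUDIT_TRAIL (SQL_TEXT column), and DBMS_FGA.DROP_POLICY removes the policies once the coverage run is done.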
In the end I went with what Justin Cave suggested and instrumented the Oracle JDBC driver to collect every executed query in a Set, then dumped them all into a file after running the tests.

Tableau 8.1 taking a long time to display report

I have a stored procedure as a source connection in Tableau 8.1. It takes a long time (about 1 min) to fetch and display 40000 records (there are no bar charts, pie charts, etc.).
The stored proc selects 40000 records with some 6-7 table joins.
However, the same stored procedure executes and displays the records in SQL Server Management Studio within 3 seconds.
SQL Server Profiler shows some 45000 inserts into a Tableau temp table, which take a long time. The log file also shows that the inserts account for a high percentage of the time, while the execution of the stored proc itself takes only about 4-5 seconds. Is this the problem? Any suggestion on how to overcome this issue?
Regards
Gautam S
A few places to start:
First, check the Tableau log file in your Tableau repository directory after trying to access your data. There will be a lot of information in there, but you should be able to see the actual SQL that Tableau sends to your database -- and that may give you some clues about what is taking so long. If nothing else, you can cut and paste the SQL into your database tools and try to isolate the problem SQL without Tableau in the mix.
If the log file doesn't give you an idea about how to restructure your system to avoid the long query, then send it along with info about your schema to Tableau support. They may be able to help.
Simplify whatever you can to reduce the problem to its core, get rid of everything in your visualization but a total, and then slowly build it back up to see what causes the behavior. For example, make a test version and remove one table at a time from your query to see what causes the problem.
Avoid using quick filters if you see performance problems (or minimize them). Nice feature, but it comes with a performance cost.
Try the Tableau performance monitoring (record and analysis) features
Work with a smaller data set during testing so you can more quickly experiment with different approaches
Try replacing your stored procedure with a view; that's usually better if at all possible (a minimal sketch follows this list).
Add indexes to speed up the joins
If there is no way around the long operation and if updates are infrequent, make a Tableau extract so that you only pay that cost periodically
If none of these things help, cut the problem down to its simplest version and post a schema and the problem SQL. Otherwise, people can only give you generic advice.
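On the view suggestion: a minimal sketch with hypothetical table names. A view lets Tableau push its filters into the query instead of materializing the procedure's full 40000-row result first:

CREATE VIEW dbo.vw_report_data AS
SELECT e.employee_id, e.employee_name, d.dept_name
FROM dbo.employees e
JOIN dbo.departments d ON d.dept_id = e.dept_id;

-- And index the join keys if they aren't already covered:
CREATE INDEX ix_employees_dept_id ON dbo.employees (dept_id);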

What data load on my DB should I expect if I get more users?

Currently, as a single user, it takes 260ms for a certain query to run from start to finish.
What will happen if I have 1000 queries sent at the same time? Should I expect the same query to take ~4 minutes (260ms * 1000)?
It is not possible to make predictions without any knowledge of the situation. A number of factors will affect this time:
Resources available to the server (if it is able to hold data in memory, things run quicker than if disk is being accessed)
What is involved in the query (e.g. a repeated query will usually execute quicker the second time around, assuming the underlying data has not changed)
What other bottlenecks are in the system (e.g. if the webserver and database server are on the same system, the two processes will be fighting for available resource under heavy load)
The only way to properly answer this question is to perform load testing on your application.

mysql slow on first query, then fast for related queries

I have been struggling with a problem that only happens when the database has been idle for a period of time for the data queried. The first query will be extremely slow, on the order of 30 seconds, and then related queries will be fast, like 0.1 seconds. I am assuming this is related to caching, but I have been unable to find the cause of it.
Changing the MySQL variables tmp_table_size and max_heap_table_size to a larger size had no effect except to create the temp tables in memory.
I do not think this is related to the query itself as it is well indexed and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in trying to determine the cause of this or a way to reset the offending cache so I can troubleshoot the issue.
Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads, which is mostly what databases see.
It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.
This is what I'd expect.
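You can watch this happen on MySQL 5.6+; a quick sketch, run before and after the "cold" query (database_pages grows as table and index pages are pulled into the pool):

SELECT pool_id, pool_size, free_buffers, database_pages
FROM information_schema.INNODB_BUFFER_POOL_STATS;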
Ideally, use the same engine for all tables (exceptions: system tables, temporary tables (perhaps) and very small or short-lived tables). If you don't do this, they have to fight for RAM.
Assuming all your tables are InnoDB, make the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
Then you will be able to fit around 12G of your database into RAM, so once it's "warmed up", the "most used" 12G of your database will be in RAM, where accessing it is nice and fast.
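A sketch of checking and raising it (the value is in bytes; online resizing needs MySQL 5.7+, older versions require setting innodb_buffer_pool_size in my.cnf and restarting):

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SET GLOBAL innodb_buffer_pool_size = 12 * 1024 * 1024 * 1024;  -- roughly 12G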
Some MySQL users "warm up" production servers following a restart by sending them queries copied from another machine for a while (these will be replication slaves) before adding them to the production pool. This avoids the extreme slowness seen while the cache is cold. For example, YouTube does this (or at least it used to; Google bought them and they may now use Google-fu).
MySQL Workbench:
The below isn't 100% related to this SO question, but the symptoms are very similar, and this is the first SO result when searching for "mysql workbench slow" or related terms, so hopefully it's useful to others.
Clear the query history! Following the process at "MySql workbench query history (last executed query / queries) i.e. create / alter table, select, insert update queries" to clear MySQL Workbench's query history really sped up the program for me.
In summary: change the Output pane to History Output, right-click on a date and select Delete All Logs.
The issue I was experiencing was a "slow first query", in that it would take a few seconds to load the results even though the duration/fetch were well under 1 second. After clearing my query history, the duration/fetch times stayed the same (well under 1 second, so no DB behavior actually changed), but the results now loaded instantly rather than after a few seconds' delay.
Is anything else running on your MySQL server? My thought is that maybe after the first query, your table is still cached in memory. Once it's idle, another process causes it to be de-cached. Just a guess, though.
How much memory do you have, and what else is running?
I had an SSIS package that was timing out. The query was very simple, from a single MySQL table, but it sometimes returned a lot of records and would sometimes take a few minutes to execute initially, then only a few milliseconds afterwards if I queried it again. We were stuck with the ADO connection, which meant it would time out after 30 seconds, so about half the databases we were trying to load were failing.
After beating my head against the wall, I tried performing an initial query first: very simple and only returning a few rows. Since it was so simple, it executed fast and put the table in the cache for faster querying. In the next step of the package I would run the more complex query that returned the large data set that kept timing out. Problem solved: all tables loaded. I may start doing this on a regular basis; the complex queries execute much faster after a simple query first.
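The actual queries aren't shown above, so here is a hypothetical sketch of the pattern: a cheap priming query first, then the heavy one.

-- Step 1 (warm-up): trivial query that touches the same table.
SELECT id FROM staging_orders LIMIT 10;

-- Step 2: the heavy query now finds much of what it needs already cached.
SELECT o.*, c.customer_name
FROM staging_orders o
JOIN customers c ON c.customer_id = o.customer_id;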
Try comparing the output of "vmstat 1" on the Linux command line when running the query after a period of time, vs. when you re-run it and get results fast. Specifically check the "bi" column (that's the KB read from disk per second).
You may find the operating system is caching the disk blocks in the fast case (and thus a low "bi" figure), but not in the slow case (and hence a large "bi" figure).
You might also find that vmstat shows high/low CPU usage in either case. If it's low when fast, and disk throughput is also low, then your system may still be returning a cached query, even though you've indicated the relevant config value is set to zero. Perhaps check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES to confirm.
innodb_buffer_pool_size may also be set high (it should be...), which would cache the blocks even before the OS can return them.
You might also find that "key_buffer" is set high - this would cache the keys in the indexes, which could make your select blindingly fast.
Check the MySQL Performance Blog site for lots of useful info.
I had an issue where MySQL 5.6 was slow on the first query after an idle period. This was a connection problem, not a MySQL instance problem; e.g. if you run MySQL Query Browser, execute "select * from some_queue", leave it alone for a couple of hours, then execute any query, it runs slow, while at the same time processes on the server or a new instance of the Browser will select from the same tables instantly.
Adding skip-host-cache and skip-name-resolve to the my.ini file solved this problem.
I don't know why that is. Why I tried it: MySQL 5.1 without those options was slow establishing connections from other networks (e.g. the server is 192.168.1.100; 192.168.1.101 connects fast, 192.168.2.100 connects slow). MySQL 5.6 didn't have that problem to start with, so we didn't add these to my.ini initially.
UPD: That solved half the cases, actually. Setting wait_timeout to the maximum integer fixed the other half. Maybe I can even remove skip-host-cache and skip-name-resolve now and it won't slow down in 100% of the cases.
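For reference, a sketch of the wait_timeout change (the value is in seconds and the permitted maximum is platform-dependent; skip-host-cache and skip-name-resolve still have to go in my.ini, as they are not settable via SQL):

SHOW VARIABLES LIKE 'wait_timeout';
SET GLOBAL wait_timeout = 31536000;  -- one year: effectively never drop idle connections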