Increase Tableau Online query execution time - google-bigquery

Is there any way to change the query execution time on Tableau Online? I have to extract 100 M records from Google BigQuery, which will take around 3 hours on Tableau Online, but my process stops after 2 hours with the error
com.tableausoftware.nativeapi.dll.TableauCancelException: Operation canceled
which is the query execution time limit.

My #1 suggestion is to make sure your BigQuery tables and queries are optimized as much as possible. Your question doesn't give details on your schema design, query, or partitioning/clustering, but that is the first place to start. Look at the query plan and timeline to get an idea of where the bottlenecks are in the query execution.
My #2 suggestion is to look at what aggregations your visualizations are doing and see if you can offload that aggregation to the database instead. That will reduce the amount of data needed in your extract file.
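As an illustration, here is a minimal sketch of offloading aggregation into BigQuery; the project, dataset, table, and column names are all placeholders:

```sql
-- Hypothetical pre-aggregation: materialize a daily summary so the Tableau
-- extract pulls a few thousand summarized rows instead of 100 M raw ones.
CREATE OR REPLACE TABLE `my_project.my_dataset.daily_sales_summary`
PARTITION BY sale_date AS
SELECT
  sale_date,
  region,
  COUNT(*)    AS order_count,
  SUM(amount) AS total_amount
FROM `my_project.my_dataset.sales_raw`
GROUP BY sale_date, region;
```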
Tableau has an excellent Knowledge Base article on this topic, found at https://kb.tableau.com/articles/issue/error-operation-cancelled-query-time-resource-limit-7200-refreshing-tableau-online-extract. Much of my answer below is copy/pasted from the KB article.
Here are some options that the KB article gives. Option 3 is easy to do and can make a noticeable difference. Option 2 may be easy to implement, depending on your schema design.
Option 1
Use live connections instead of extracts. See Keep Data Fresh for identifying which cloud-based data sources are capable of connecting live on Tableau Online.
Option 2
Use incremental extracts.
Option 3
Decrease the size of the data extract. Do this by hiding unused columns and adding filtering to the extract (see the custom SQL sketch after this list).
Option 4
Refresh the published extract using Tableau Bridge, the Command Line Utility, or manually using Tableau Desktop.
Option 5
Try scheduling extract refreshes at different times of day, or at night.
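For Option 3, a minimal custom SQL sketch along these lines keeps only the columns the workbook actually uses and filters to a recent window; the table and column names are placeholders:

```sql
-- Hypothetical extract query: select only the needed columns, drop old rows.
SELECT
  order_id,
  sale_date,
  region,
  amount
FROM `my_project.my_dataset.sales_raw`
WHERE sale_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY);
```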

Related

How to increase performance of Azure Data Factory Pipeline with Integration Runtime

I would like to increase the performance of our pipelines.
The pipelines currently run from an integration runtime.
I am running a single copy activity on tables held in our source, which is a SQL database. Tables contain just under a million rows, with about 15 columns.
Currently, the time it takes to copy a table from source to sink (ADLS) is approximately 20 mins.
Is there a way to increase the DIU to increase performance?
My current copy settings are as follows: [screenshot of copy activity settings]
I'm thinking that if I made some changes to these settings I could improve performance, but I have never played around with them before, so any suggestions are most welcome.
The activity details for a pipeline run are as follows: [screenshot of pipeline run details]
My linked service is an Azure Synapse linked service: [screenshot of linked service]
From the output window, we can see that almost all the wait time was "Time to first byte", which means your SQL server is slow to reply. It takes ~22 minutes for fewer than 90K rows, so changes on the ADF side will not help.
If your query is a simple "select * from table", then maybe your SQL server is low on resources. You can check that in your database portal in Azure. Try to add more resources and see if copy times improve.
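One way to check that from SQL itself, assuming an Azure SQL database, is the sys.dm_db_resource_stats DMV, which reports utilization in 15-second slices:

```sql
-- Recent resource utilization for this Azure SQL database; sustained values
-- near 100% suggest the copy is starved for CPU or I/O.
SELECT TOP (20)
  end_time,
  avg_cpu_percent,
  avg_data_io_percent,
  avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```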
If this is a query over a view or another complicated query, maybe it needs some improvement (indexes, better code). You can test that by writing the query result to a table in your SQL database, using that table as the Data Factory source, and seeing if this improves copy time.
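A minimal sketch of that test, with placeholder object names:

```sql
-- Materialize the view's result into a plain table, then point the copy
-- activity's source at the snapshot instead of the view.
SELECT *
INTO dbo.MyView_Snapshot
FROM dbo.MyView;

-- Optionally index the snapshot to speed subsequent reads (assumes an Id column).
CREATE CLUSTERED INDEX IX_MyView_Snapshot_Id
    ON dbo.MyView_Snapshot (Id);
```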
Quick check: are the Azure SQL database and the storage account in the same region? Also, I see that your copy activity has parallelism set to 1; you can play with that number and see if it helps.
To learn how to set up parallelism, please read here: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-performance-features#parallel-copy
Please see the snapshot below. [screenshot of the parallel copy setting]

Measuring the average elapsed time for SQL code running in Google BigQuery

As BigQuery is a shared resource, it is possible to get different timings when running the same code. One option that I always use is to turn off caching in Query Settings (Cache preference), so queries will not be cached. The problem with this setting is that if you refresh the browser or leave it idle, that Cache preference box will be ticked again.
Anyhow, I had a discussion with some developers who are optimizing the code. In a nutshell, they take slow-running code, run it 5 times, and take the average; then, after optimization, they run the code again 5 times to get an average value for the optimized SQL. The details are not clear to me. However, my preference would be (all in the BQ console):
create a user session
turn off SQL caching
paste the slow-running code in the BQ console
paste the optimized code in the same session
run both queries (separated by ";")
This will ensure that any systematics, like BQ being busy/overloaded or a slow connection, affect BOTH pieces of SQL equally, so the systematics cancel out. In my opinion, one only needs to run it once, as caching is turned off as well. Doesn't running it 5 times to get an average look excessive and superfluous?
Appreciate any suggestions/feedback.
Thanks
Measuring the time is one way; the other way to see whether the query has been optimized is to understand the query plan and whether slots are used effectively.
I've been working with BigQuery for more than 6 years, and I have never used what you describe. In BigQuery, what actually matters is reducing cost, and that can be done by iteratively rewriting the query and using partitioning/clustering, materialized views, and caching/temporary tables.
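If you do want wall-clock numbers without copying timings by hand, one option is to read them back from the INFORMATION_SCHEMA jobs views after running each variant a few times; the region qualifier and look-back window below are assumptions to adjust for your project:

```sql
-- Average elapsed time, slot usage, and bytes processed per distinct query
-- text over the last day, ignoring cache hits.
SELECT
  query,
  COUNT(*) AS runs,
  AVG(TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_elapsed_ms,
  AVG(total_slot_ms) AS avg_slot_ms,
  AVG(total_bytes_processed) AS avg_bytes_processed
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE job_type = 'QUERY'
  AND cache_hit = FALSE
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY query
ORDER BY avg_slot_ms DESC;
```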

Allowing many users to view stale BigQuery data query results concurrently

If I have a BigQuery dataset with data that I would like to make available to 1,000 people (where each of these people would only be allowed to view their subset of the data, and it is OK for them to view a 24-hour-stale version of their data), how can I do this without exceeding the 50-concurrent-queries limit?
The BigQuery documentation mentions that 50 concurrent queries are permitted, which give on-the-spot accurate data. I would surpass that limit if all users needed on-the-spot accurate data, which they don't.
The documentation also mentions that batch jobs are permitted and that results can be saved into destination tables, which I hope could allow a reliable solution for my scenario; but I am having difficulty finding information on how reliably or frequently those batch jobs can be expected to run, and whether someone querying results that exist in those destination tables counts toward the 50-concurrent-queries limit.
Any advice appreciated.
Without knowing the specifics of your situation, and depending on how much data is in the output, I would suggest putting your own cache in front of BigQuery.
This sounds like a dashboarding/reporting solution, so I assume there is a large amount of data going in and a relatively small amount coming out (per user).
Run one query per day with a batch script to generate your output (grouped by user), then export it to GCS. You can then break it up into multiple flat files (or just read it into memory on your frontend). Each user hits your frontend; you determine which part of the output to serve up to them and respond.
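A minimal sketch of the daily batch step using BigQuery's EXPORT DATA statement; the bucket, table, and column names are placeholders:

```sql
-- Export the per-user results as sharded JSON files in GCS, once per day.
EXPORT DATA OPTIONS (
  uri = 'gs://my-bucket/daily/user_report_*.json',
  format = 'JSON',
  overwrite = true
) AS
SELECT
  user_id,
  metric_a,
  metric_b
FROM `my_project.my_dataset.daily_results`;
```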
This should be relatively cheap if you can work off the cached data and it is small enough that handling the BigQuery output isn't too much additional processing.
Google Cloud Functions might be an easy way to handle this, if you don't want the extra work of setting up a new VM to host your frontend.

BigQuery is too slow

I am just starting with BigQuery; my DB is small (10K rows, 1 table) and my queries are simple counts and group-bys.
It takes an average of 3-4 sec per request, but sometimes it jumps to 10 and even 15 sec.
I am querying from an Amazon Linux server in Ireland using the bq tool.
Is it possible to get results faster (under 1 sec) so I can present my web pages faster?
1) BigQuery is a highly scalable database before being a "super fast" database. It's designed to process HUGE amounts of data by distributing the processing among several different machines, using a technique named Dremel. Because it's designed to use several machines and parallel processing, you should expect super-scalability with good performance.
2) BigQuery is an asset when you want to analyze billions of rows.
For example: analyzing all the Wikipedia revisions in 5-10 seconds isn't bad, is it? But even a much smaller table would take about the same time, even if it has only 10k rows.
3) Under this size, you'll be better off using more traditional data storage solutions such as Cloud SQL or the App Engine Datastore. If you want to keep SQL capability, Cloud SQL is the best guess.
Sybase IQ is often installed as a single database and it doesn't use Dremel. That said, it's going to be faster than BigQuery in many scenarios... as designed.
4) Certainly the performance differs from a dedicated environment. You can get your own dedicated environment for $20K a month.
That's the expected behaviour. In BigQuery you are using shared infrastructure, so depending on the load at the moment you will get better or worse response times. In fact, batch queries (those not needing interactivity) are encouraged and rewarded by not counting toward your quota.
You typically don't use BigQuery as the main database to serve data in your web application. Depending on what you want to do, BigQuery can be your Big Data storage, and you should have another intermediate store where you keep computed results to display to your users. Or maybe in your use case you don't really need BigQuery and there is a better solution.
In any case, you are not going to be able to avoid a wait of a few seconds (even if you go Premium, you get more guarantees about the service, but in no case a service fast enough to be the main backend for a webapp).

Tableau 8.1 taking a long time to display report

I have a stored procedure as the source connection in Tableau 8.1. It takes a long time (about 1 min) to fetch and display 40,000 records (there are no bar charts, pie charts, etc.).
What the stored proc does is select 40,000 records with some 6-7 table joins.
However, the same stored procedure executes and displays the records in SQL Server Management Studio within 3 seconds.
After using SQL Server Profiler, it shows that some 45,000 inserts into a Tableau temp table occur, which takes a long time. The log file also shows that the inserts account for a high percentage of the time, while the execution of the stored proc itself takes only about 4-5 seconds. Is this the problem? Any suggestion on how to overcome this issue?
Regards
Gautam S
A few places to start:
First, check the Tableau log file in your Tableau repository directory after trying to access your data. There will be a lot of information in there, but you should be able to see the actual SQL that Tableau sends to your database, and that may give you some clues about what it is doing that is taking so long. If nothing else, you can cut and paste the SQL into your database tools and try to isolate the problem SQL without Tableau in the mix.
If the log file doesn't give you an idea about how to restructure your system to avoid the long query, then send it along with info about your schema to Tableau support. They may be able to help.
Simplify whatever you can to reduce the problem to its core, get rid of everything in your visualization but a total, and then slowly build it back up to see what causes the behavior. For example, make a test version and remove one table at a time from your query to see what causes the problem.
Avoid using quick filters if you see performance problems (or minimize them). They are a nice feature, but they come with a performance cost.
Try Tableau's performance monitoring (recording and analysis) features.
Work with a smaller data set during testing so you can more quickly experiment with different approaches
Try replacing your stored procedure with a view; that's usually better if at all possible (see the sketch after this list).
Add indexes to speed up the joins.
If there is no way around the long operation and if updates are infrequent, make a Tableau extract so that you only pay that cost periodically
If none of these things helps, cut the problem down to its simplest version and post the schema and the problem SQL. Otherwise, people can only give you generic advice.
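As a sketch of the view suggestion above, assuming the stored procedure is essentially one SELECT over 6-7 joined tables (all object names here are placeholders):

```sql
-- Expose the proc's SELECT as a view so Tableau can query it directly and
-- push filters down, instead of bulk-inserting all 40,000 rows into a temp
-- table first.
CREATE VIEW dbo.vw_ReportData AS
SELECT
    o.OrderId,
    o.OrderDate,
    c.CustomerName,
    p.ProductName,
    o.Amount
FROM dbo.Orders    AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
JOIN dbo.Products  AS p ON p.ProductId  = o.ProductId;
```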