Does anyone know how to deactivate the 10-row limit in table preview? - google-bigquery

I recently changed my view to the editor in BigQuery and noticed that there is a '10 row limit' inside 'Records' when I try to preview a table in BigQuery.
(Screenshot: 10 row limit)
Does anyone know how to change the limit?
Thanks!

Related

Custom Pagination in datatable

I have a web application in which I get data from my database and show it in a datatable. I am facing an issue because the data I am fetching has too many rows (200,000). So when I run a query like select * from table_name; my application gets stuck.
Is there a way to handle this problem with JavaScript?
I tried pagination, but I cannot figure out how I would do that, since the datatable creates pagination only for data that is already rendered.
Is there a way I can run my query with pagination at the backend?
I came across the same problem when working with MongoDB and AngularJS. I used server-side paging. Since you have a huge number of records, you can try the same approach.
Assume you are displaying 25 records per page.
Backend:
Get the total count of the records using a COUNT query.
Use select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} to query a limited set of records based on the page number (a concrete sketch of both queries is given after these steps).
Frontend:
Instead of using the datatable, display the data in a plain HTML table.
Define buttons for the next page and the previous page.
Define a global variable in the controller/js file for pageNumber.
Increment pageNumber by 1 when the next-page button is clicked and decrement it by 1 when the previous-page button is pressed.
Use the result of the COUNT query to put an upper limit on the pageNumber variable (if there are 200 records, the limit will be 200/25 = 8 pages).
So basically select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} will limit the number of records to 25. When req.query.pageNumber = 1, it will skip the first 25 records and send the next 25. Similarly, if req.query.pageNumber = 2, it will skip the first 2*25 records and send records 51-75.
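To make the arithmetic concrete, here is a rough sketch of the two backend queries for this approach, assuming a MySQL table named table_name and 25 records per page (the names are placeholders, and pages are counted from 0 for the first page):

-- Total record count, used to work out the last valid page number (ceil(total / 25) pages).
SELECT COUNT(*) FROM table_name;

-- Page 0: OFFSET 0, returns rows 1-25.
SELECT * FROM table_name LIMIT 25 OFFSET 0;

-- Page 2: OFFSET 2 * 25 = 50, returns rows 51-75.
SELECT * FROM table_name LIMIT 25 OFFSET 50;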
There are two ways to handle this.
First way - handle paging on the client side: get all the data from the database and apply custom paging.
Second way - handle paging on the server side: on every page change, call the database and get only the records for that page size.
You can use the LIMIT and OFFSET clauses for pagination in MySQL. I understand that fetching 200,000 rows at a time slows performance down. You mention that you have to use JavaScript for this; to be clear, JavaScript on the frontend alone is not going to help you. But since you have a web application, if that application runs on Node (as the server) then I can suggest an approach that can help a lot.
Use two variables, named var_pageNo and var_limit. Now use a raw MySQL query such as:
select * from <tbl_name> LIMIT var_limit OFFSET (var_pageNo * var_limit);
Write your code around this query, replacing the variables with your desired values. This will improve performance and fetch only the data within your specified limit.
Hope this helps.
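For illustration, a rough sketch of how those two variables could be supplied without string concatenation, using MySQL prepared-statement placeholders (tbl_name and the values are placeholders; with var_pageNo = 3 and var_limit = 25 the offset is 75):

-- Plain SQL does not accept expressions directly in LIMIT/OFFSET,
-- but prepared-statement parameters are allowed there.
PREPARE page_stmt FROM 'SELECT * FROM tbl_name LIMIT ? OFFSET ?';
SET @lim = 25, @off = 75;   -- var_limit and var_pageNo * var_limit
EXECUTE page_stmt USING @lim, @off;
DEALLOCATE PREPARE page_stmt;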

Pulling a 33,000-record recordset took LESS execution time than using Count() in the SQL. How is that possible?

Thanks in advance for putting up with me.
Pulling a 33,000-record recordset from the database took LESS execution time than using Count() in the SQL and just grabbing 20 rows.
How is that possible?
A bit more detail:
Before, we were grabbing the entire recordset yet only displaying 20 rows of it on a page at a time for pagination. That was cringeworthy and wasteful, so I redesigned the page to only grab 20 rows at a time and to simply use an index variable to grab the next page, and so on.
All well and good, but that lacked a record count, which our people needed.
So after the record query, I added (what I thought would be) a quick query just on the index of the table, using the Count(index) function in SQL.
A side-by-side comparison of the original page and my new page indicates my new page takes roughly 10% longer to execute than the original! I was flabbergasted; I thought for sure it would be lightning fast, way faster than the original.
Any thoughts on why and what I might do to remedy that?
Is it because the script has to run two queries, regardless of the data retrieved?
Update:
Here is the SQL.
(Table names and field names are fictionalized in this post for security, but the structure is the same as the real page).
The main recordset select query contains:
SELECT TOP 21
    roster_id, roster_pplid, roster_pplemailid, roster_emailid, roster_firstname,
    roster_lastname, roster_since, roster_pplsubscrid, roster_firstppldone, roster_pmtcurrent,
    roster_emailverified, roster_active, roster_selfcanceled, roster_deactreason
FROM roster
WHERE roster_siteid = 22
  AND roster_isdeleted = false
ORDER BY roster_id DESC
The record count query contains:
SELECT COUNT(roster_id)
FROM roster
WHERE roster_siteid = 22
  AND roster_isdeleted = false
The first query runs, then the second. The second is always built dynamically with the same matching WHERE filter.
I think I know why it is slower: I'm using GetRows to grab the recordset in the new page, and I was not using that in the old page. That seems to be the slowdown. But I have to use it; otherwise I cannot stop at the 21st record.
Nick.McDermaid: The SQL shown is selecting the TOP 21 rows; that is how it is grabbing just 20 rows (row 21 is only there to populate the index for the "Next" page link).
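For reference, a minimal sketch of the keyed next-page query that redesign implies, assuming the "index variable" is the roster_id of row 21 from the previous page (the <last_seen_roster_id> placeholder is hypothetical, not from the original post):

SELECT TOP 21
    roster_id, roster_firstname, roster_lastname
FROM roster
WHERE roster_siteid = 22
  AND roster_isdeleted = false
  AND roster_id < <last_seen_roster_id>   -- carried over from row 21 of the previous page
ORDER BY roster_id DESC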

Shifting window in a Google BigQuery dataset

I have 30 daily sharded tables in BigQuery, from Nov 1 to Nov 30, 2016.
Each of these tables follow the naming convention of "sample_datamart_YYYYMMDD".
Each of these daily tables have a field called timestampServer.
My goal is to advance the data by 24 hours at 00:00:00 UTC every day, so that the data is kept current without me having to copy the tables.
Is there any way to :
1) do a calculation on the field timestampServer so that it gets updated every 24 hours?
2) and at the same time rename the table name from sample_datamart_20161130 to sample_datamart_20161201?
I've read the other posts and I think those are more about aggregations over a 30-day window. My objective is not to do any aggregations. I just want to move the whole dataset forward by 24 hours, so that when I search for the last day there will always be data there.
Does anyone know if Google Cloud Datasets: Update would be able to perform these tasks?
https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/update#try-it
Thanks very much for any guidance.
As for #2 - how to rename the table from sample_datamart_20161130 to sample_datamart_20161201:
This can be achieved by copying the table to a new table and then deleting the original table.
There is zero extra cost, as a copy job is free of charge.
The table can be copied with the Jobs: Insert API using a copy configuration, and then the original table can be deleted using the Tables: Delete API.
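As a rough sketch, the same copy-then-delete step can also be expressed in today's standard SQL DDL, assuming a dataset named my_dataset and that the CREATE TABLE ... COPY statement is available in your project (the answer above uses the REST APIs instead):

-- Copy the daily shard to the new name, then drop the old one.
CREATE TABLE my_dataset.sample_datamart_20161201
COPY my_dataset.sample_datamart_20161130;

DROP TABLE my_dataset.sample_datamart_20161130;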
Just wanted to note that the above answer directly addresses only your (second) question. Somehow I feel you may be going in the wrong direction. If you describe in more detail what you are trying to achieve (as opposed to how you think you will implement it), we might be able to provide better help. If you do go that way, I would recommend posting it as a separate question :o)

Why does SSRS take so long to execute in design view while building a report?

I have two datasets in my SSRS report: the first table contains 12,000 records and the second one 26,000, with 40 columns in each table.
While building the report, every time I go to preview it takes forever to display.
Is there any way to avoid that, so that I at least don't spend so much time building this report?
Thank you in advance.
Add a dummy parameter to limit your dataset, or just change your select to SELECT TOP 100 while building the report.
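A minimal sketch of that idea, assuming SQL Server and a hypothetical @RowLimit report parameter (the table and column names are placeholders, not from the original report):

-- While designing, set @RowLimit to 100; raise it (or remove the TOP) for the deployed report.
SELECT TOP (@RowLimit)
    order_id, customer_name, order_date
FROM dbo.Orders
ORDER BY order_date DESC;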
#vercelli's answer is a good one. In addition, you can change your cache options in the designer (for all result sets, including parameters) so that the queries are not rerun each time.
This is really useful. A couple of tips for you:
1. I don't recommend caching until you are happy with your dataset results.
2. If you are using the cache and you want to do a quick refresh, the data is stored in a ".data" file in the same location as your .rdl. You can delete this file to query the database again if required.

Mystery: SQL query hangs with DataLength() criteria

I have a table called Photos. It has a little over 3000 rows and includes an image type field called Photo.
This query runs instantaneously:
Select PhotoFileName, DATALENGTH(Photo)
From Photos
Order by DATALENGTH(Photo)
This query hangs intermittently (sometimes takes several minutes to complete, then after completing once, runs instantaneously).
Select PhotoFileName, DATALENGTH(Photo)
From Photos
Where DATALENGTH(Photo)>0
Same with this query:
Select PhotoFileName, DATALENGTH(Photo)
From Photos
Where Photo is not NULL
What could possibly be going on?
I'm not sure why you're seeing this problem, but I think you could resolve it by adding a computed column to the table to hold the length of the photo, perhaps paired with an index on that new column.
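A minimal sketch of that suggestion for SQL Server, assuming the Photo column is (or can be converted to) varbinary(max) so it can be referenced from a computed column; the PhotoLength column and index name are hypothetical:

-- Persisted computed column holding the photo size in bytes.
ALTER TABLE Photos ADD PhotoLength AS DATALENGTH(Photo) PERSISTED;

-- Index it so filters such as PhotoLength > 0 can seek instead of reading every blob.
CREATE INDEX IX_Photos_PhotoLength ON Photos (PhotoLength);

Queries can then filter and sort on PhotoLength rather than computing DATALENGTH(Photo) at run time.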