Workaround for UDF memory leak - google-bigquery

Apparently there is a memory leak in BigQuery's UDFs. We run a simple UDF over a small table (3,000 rows, 5 MB) and it fails. If we run the same UDF over the first half of the table concatenated with the second half of the table (in the same query), then it works! Namely, this fails:
SELECT blah myUDF(SELECT id, data FROM table)
while this succeeds:
SELECT blah myUDF(SELECT id, data FROM table ORDER BY id LIMIT 1500), myUDF(SELECT id, data FROM table ORDER BY id DESC LIMIT 1500)
The question is: how do we work around this issue? Is there a way to dynamically split a table into multiple parts, each with a predefined, equal number of rows? Say 1,000 rows at a time? (The sample table has 3,000 rows, but we want this to succeed on larger tables, and if we split a 6,000-row table in half, the UDF will fail again on each half.) Any solution must (a) NOT use ORDER BY, because it has a 65,000-row limitation; and (b) use a single combined query (otherwise the solution may be too slow; plus every combined table is charged at a minimum of 10 MB, so if we have to split a 1,000,000-row table into 1,000 rows at a time we will automatically be charged for 10 GB. Times 1,000 tables = 10 TB. This stuff adds up quickly.)
Any ideas?
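One direction that might work for splitting without ORDER BY (an assumption on my part, not something confirmed in this thread) is to hash the id into a fixed number of buckets, so each UDF call only sees roughly 1/N of the rows. A rough sketch in legacy BigQuery SQL, following the shorthand shape used in the question; the bucket count of 3 is illustrative and the split is only approximately even:
SELECT blah
  myUDF(SELECT id, data FROM table WHERE ABS(HASH(id)) % 3 = 0),
  myUDF(SELECT id, data FROM table WHERE ABS(HASH(id)) % 3 = 1),
  myUDF(SELECT id, data FROM table WHERE ABS(HASH(id)) % 3 = 2)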

This issue was related to limits we had on the size of the UDF code. It looks like V8's optimize+recompile pass of the UDF code generates a data segment that was bigger than our limits, but this was only happening when the UDF ran over a "sufficient" number of rows. I'm meeting with the V8 team this week to dig into the details further.
In the meantime, we've rolled out a fix to update the max data segment size. I've verified that this fixes several other queries that were failing for the same reason.
Could you please retry your query? I'm afraid I can't easily get to your code resources in GCS.

Related

How wide can a PostgreSQL table be before I run into issues?

According to the documented limits, Postgres supports up to 1,600 columns per table.
https://www.postgresql.org/docs/current/limits.html
I understand that it's bad practice to have so many columns but what are the consequences of approaching this limit?
For example, will a table with 200 columns perform fine in an application? How can you tell when you're approaching too many columns for a given table?
The hard limit is that a table row has to fit inside a single 8kB block.
The "soft limits" you encounter with many columns are
writing the SELECT list becomes more and more annoying (never, ever, use SELECT *)
each UPDATE has to write a large row version, so lots of data churn
extracting the 603rd column from a row requires skipping the previous 602 columns, which is a performance hit
it is plain annoying if the output of \d is 50 pages long
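As a rough way to answer the "how can you tell" part of the question: you can check how wide your tables already are from the system catalogs. A minimal sketch using information_schema (the schema name and the 150-column threshold are arbitrary examples):
SELECT table_name, count(*) AS column_count
FROM information_schema.columns
WHERE table_schema = 'public'
GROUP BY table_name
HAVING count(*) > 150
ORDER BY column_count DESC;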

Redshift performance difference between CTAS and select count

I have query A, which mostly left joins several different tables.
When I do:
select count(1) from (
A
);
the query returns the count in approximately 40 seconds. The count is not big, at around 2.8M rows.
However, when I do:
create table tbl as A;
where A is the same query, it takes approximately 2 hours to complete. Query A returns 14 columns (not many) and all the tables used in the query are:
Vacuumed;
Analyzed;
Distributed across all nodes (DISTSTYLE ALL);
Encoded/Compressed (except on their sortkeys).
Any ideas on what I should look at?
When using CREATE TABLE AS (CTAS), a new table is created. This involves copying all 2.8 million rows of data. You didn't state the size of your table, but this could conceivably involve a lot of data movement.
CTAS does not copy the DISTKEY or SORTKEY. The CREATE TABLE AS documentation says that the default DISTSTYLE is EVEN. Therefore, the CTAS operation would also have involved redistributing the data amongst nodes. Since the source tables were DISTSTYLE ALL, at least the data was available on every node for distribution, so this shouldn't have been too bad.
If your original table DDL included compression, then these settings would probably have been copied across. If the DDL did not specify compression, then the copy to the new table might have triggered the automatic compression analysis, which involves loading 100,000 rows, choosing a compression type for each column, dropping that data and then starting the load again. This could consume some time.
Finally, it comes down to the complexity of Query A. It is possible that Redshift was able to optimize the count by reading very little data from disk, because it realized that very few columns (or perhaps none) needed to be read to produce the count. This really depends upon the contents of that query.
It could simply be that you've got a very complex query that takes a long time to process, and that work wasn't needed for the count. If the query involves many JOIN and WHERE clauses, it could be optimized by wise use of DISTKEY and SORTKEY values.
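If the redistribution or the automatic compression analysis turns out to be the culprit, one thing worth trying (a sketch with placeholder names, not something verified against your cluster) is to specify the table attributes explicitly in the CTAS rather than taking the defaults:
CREATE TABLE tbl
DISTSTYLE ALL
SORTKEY (some_sort_column)  -- placeholder; match the sort key of the source tables
AS
A;  -- your original query A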
CREATE TABLE AS writes all the data returned by the query to disk; the count query does not, and that explains the difference. Writing all the rows is a more expensive operation than just computing a row count.

Optimizing sql queries for deleting duplicates in Monetdb

I have a market data table with >100,000,000 rows, and I need to find and remove duplicates where the symbol and totvol columns match but the serial_no is different.
I have tried the query below both on a single table and using a copy of the table for reference, but unfortunately it takes up an enormous amount of heap space (>100G and counting, sometimes filling the hard drive to the brim and crashing my database) and time (>30 mins), and it brings my server to a crawl (60-95% CPU usage on 32 cores!), which is unacceptable. Is there an efficient way to write this query to optimize the SQL execution if I want to run something like this regularly?
Normally I would partition the table somehow, since duplicates are for the most part inserted adjacent or near each other, but since MonetDB is a column-store database, partitioning this way also takes up a lot of heap space. The only helpful thing I have found to reduce the heap is creating an entirely new table with a subset of the data (i.e. split alphabetically by symbol), which results in smaller column BAT files, and then running the operation on the small table. Is there any way I can keep the large table intact and write a query which operates on maybe 1,000,000 rows at a time?
The query:
delete from print_11_11
where exists (select a.serial_no
              from print_11_11 as a, print_11_11 as b
              where a.symbol = b.symbol
                and a.totvol = b.totvol
                and a.serial_no > b.serial_no)
Some example data: rows 2 and 3 are duplicates of one another, and rows 4-7 are all duplicates by my criteria. Note that exseq may be the same or different; it does not matter which exseq value we keep when removing duplicates:
serial_no  ttime  symbol  vol   totvol  exseq
0          80017  T       200   200     133813
855        80017  T       42    242     133813
867        80017  T       42    242     136690
868        80210  T       100   342     136690
876        80211  T       100   442     136690
877        80211  T       100   442     136696
882        80211  T       100   442     136737
883        80213  T       2928  3370    136737
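One pattern that is often cheaper than a self-join delete on a column store (offered as a sketch, not tested on your data; the new table name is made up) is to rebuild the table keeping one row per (symbol, totvol) group, then swap it in for the original:
CREATE TABLE print_11_11_dedup AS
SELECT p.*
FROM print_11_11 AS p,
     (SELECT symbol, totvol, MIN(serial_no) AS keep_serial
      FROM print_11_11
      GROUP BY symbol, totvol) AS k
WHERE p.symbol = k.symbol
  AND p.totvol = k.totvol
  AND p.serial_no = k.keep_serial
WITH DATA;
-- after checking the row counts, drop print_11_11 and rename print_11_11_dedup in its place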

Large Denormalized Table Optimization

I have a single large denormalized table that mirrors the layout of a fixed-length flat file that is loaded yearly: 112 columns and 400,000 records. I have a unique clustered index on the 3 columns that make up the WHERE clause of the query that is run most often against this table. Index fragmentation is 0.01. Performance of that query is good, sub-second. However, returning all the records takes almost 2 minutes. The execution plan shows 100% of the cost is on a Clustered Index Scan (not a seek).
There are no queries that require a join (due to the denorm). The table is used for reporting. All fields are type nvarchar (of the length of the field in the data file).
Beyond normalizing the table, what else can I do to improve performance?
Try paginating the query. You can split the results into, let's say, groups of 100 rows. That way, your users will see the results pretty quickly. Also, if they don't need to see all the data every time they view the results, it will greatly cut down the amount of data retrieved.
Beyond this, adding parameters to the query that filter the data will reduce the amount of data returned.
This post is a good way to get started with pagination: SQL Pagination Query with order by
Just replace the "50" and "100" in the answer to use page variables and you're good to go.
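For reference, a minimal sketch of that kind of paging on SQL Server 2012 or later (the table name, ordering columns, and page size are placeholders, not from the question):
DECLARE @PageNumber int = 1, @PageSize int = 100;
SELECT *
FROM flat_file_data                     -- placeholder name for the denormalized table
ORDER BY key_col1, key_col2, key_col3   -- the three clustered-index columns
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;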
Here are three ideas. First, if you don't need nvarchar, switch these to varchar. That will halve the storage requirement and should make things go faster.
Second, be sure that the lengths of the fields are less than nvarchar(4000)/varchar(8000). Anything larger causes the values to be stored on a separate page, increasing retrieval time.
Third, you don't say how you are retrieving the data. If you are bringing it back into another tool, such as Excel, or through ODBC, there may be other performance bottlenecks.
In the end, though, you are retrieving a large amount of data, so you should expect the time to be much longer than for retrieving just a handful of rows.
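A hedged example of the nvarchar-to-varchar switch from the first point (the table and column names and the length are made up; check the data for non-ASCII characters before converting):
ALTER TABLE flat_file_data ALTER COLUMN some_field varchar(50) NULL;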
When you ask for all rows, you'll always get a scan.
400,000 rows X 112 columns X 17 bytes per column is 761,600,000 bytes. (I pulled 17 out of thin air.) Taking two minutes to move 3/4 of a gig across the network isn't bad. That's roughly the throughput of my server's scheduled backup to disk.
Do you have money for a faster network?

Appropriate query and indexes for a logging table in SQL

Assume a table named 'log' that contains a huge number of records.
The application usually retrieves data by simple SQL:
SELECT *
FROM log
WHERE logLevel=2 AND (creationData BETWEEN ? AND ?)
logLevel and creationData have indexes, but the number of records makes it take longer to retrieve data.
How do we fix this?
Look at your execution plan / "EXPLAIN PLAN" result. If you are retrieving large amounts of data then there is very little that you can do to improve performance. You could try changing your SELECT statement to only include the columns you are interested in; however, it won't change the number of logical reads you are doing, so I suspect it will have only a negligible effect on performance.
If you are only retrieving small numbers of records then an index on LogLevel and an index on CreationDate should do the trick.
UPDATE: SQL Server is mostly geared around querying small subsets of massive databases (e.g. returning a single customer record out of a database of millions). It's not really geared up for returning truly large data sets. If the amount of data that you are returning is genuinely large then there is only a certain amount that you will be able to do, and so I'd have to ask:
What is it that you are actually trying to achieve?
If you are displaying log messages to a user, then they are only going to be interested in a small subset at a time, and so you might also want to look into efficient methods of paging SQL data - if you are only returning even say 500 or so records at a time it should still be very fast.
If you are trying to do some sort of statistical analysis then you might want to replicate your data into a data store more suited to statistical analysis. (Not sure which, however; that isn't my area of expertise.)
1: Never use Select *
2: make sure your indexes are correct, and your statistics are up-to-date
3: (Optional) If you find you're not looking at log data past a certain time (in my experience, if it happened more than a week ago, I'm probably not going to need the log for it), set up a job to archive that to some backup and then remove the unused records. That will keep the table size down, reducing the amount of time it takes to search the table. (A rough sketch of such an archive job follows this answer.)
Depending on what kind of SQL database you're using, you might look into horizontal partitioning. Oftentimes, this can be done entirely on the database side of things, so you won't need to change your code.
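A minimal sketch of the archive-and-trim job from point 3, assuming SQL Server date functions and a hypothetical log_archive table with the same structure as log:
INSERT INTO log_archive
SELECT * FROM log
WHERE creationData < DATEADD(week, -1, GETDATE());

DELETE FROM log
WHERE creationData < DATEADD(week, -1, GETDATE());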
Do you need all columns? First step should be to select only those you actually need to retrieve.
Another aspect is what you do with the data after it arrives in your application (populate a data set / read it sequentially / ?).
There can be some potential for improvement on the side of the processing application.
You should answer yourself these questions:
Do you need to hold all the returned data in memory at once? How much memory do you allocate per row on the retrieving side? How much memory do you need at once? Can you reuse some memory?
A couple of things
Do you need all the columns? People usually do SELECT * because they are too lazy to list the 5 columns they need out of the 15 that the table has.
Get more RAM; the more RAM you have, the more data can live in cache, which is 1000 times faster than reading from disk.
For me there are two things that you can do,
Partition the table horizontally based on the date column
Use the concept of pre-aggregation.
Pre-aggregation:
In preagg you would have a "logs" table, "logs_temp" table, a "logs_summary" table and a "logs_archive" table. The structure of logs and logs_temp table is identical. The flow of application would be in this way, all logs are logged in the logs table, then every hour a cron job runs that does the following things:
a. Copy the data from the logs table to "logs_temp" table and empty the logs table. This can be done using the Shadow Table trick.
b. Aggregate the logs for that particular hour from the logs_temp table
c. Save the aggregated results in the summary table
d. Copy the records from the logs_temp table to the logs_archive table and then empty the logs_temp table.
This way results are pre-aggregated in the summary table.
Whenever you wish to select the result, you would select it from the summary table.
This way the selects are very fast, because the number of records is far smaller, as the data has been pre-aggregated per hour. You could even increase the threshold from an hour to a day. It all depends on your needs. (A rough SQL sketch of this flow follows at the end of this answer.)
Now the inserts will be fast too, because the logs table holds only the data for the last hour, so index maintenance on insert takes far less time than it would on a very large data set.
You can read more about Shadow Table trick here
I employed the pre-aggregation method on a news website built on WordPress. I had to develop a plugin for the news website that would show recently popular (popular during the last 3 days) news items, and there are around 100K hits per day, and this pre-aggregation thing has really helped us a lot. The query time came down from more than 2 seconds to under a second. I intend on making the plugin publicly available soon.
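A minimal sketch of the hourly rollup in steps a-d above. The table names follow the answer; the column list, the hour-truncation function, and the summary schema are all assumptions:
-- step a: move the last hour's rows aside (the shadow-table swap is simplified here)
INSERT INTO logs_temp SELECT * FROM logs;
DELETE FROM logs;

-- steps b and c: aggregate per hour and per log level into the summary table
INSERT INTO logs_summary (log_hour, logLevel, entries)
SELECT DATE_TRUNC('hour', creationDate), logLevel, COUNT(*)   -- DATE_TRUNC is PostgreSQL syntax; use your database's equivalent
FROM logs_temp
GROUP BY DATE_TRUNC('hour', creationDate), logLevel;

-- step d: keep the raw rows, then empty the temp table
INSERT INTO logs_archive SELECT * FROM logs_temp;
DELETE FROM logs_temp;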
As per other answers, do not use 'select *' unless you really need all the fields.
logLevel and creationData have indexes
You need a single index with both values; the order you put them in will affect performance, but assuming you have a small number of possible logLevel values (and the data is not skewed), you'll get better performance putting creationData first.
Note that, optimally, an index will reduce the cost of a query to log(N), i.e. it will still get slower as the number of records increases.
C.
I really hope that by creationData you mean creationDate.
First of all, it is not enough to have indexes on logLevel and creationData. If you have 2 separate indexes, Oracle will only be able to use 1.
What you need is a single index on both fields:
CREATE INDEX i_log_1 ON log (creationData, logLevel);
Note that I put creationData first. This way, if you only put that field in the WHERE clause, it will still be able to use the index. (Filtering on just the date seems a more likely scenario than filtering on just the log level.)
Then, make sure the table is populated with data (as much data as you will use in production) and refresh the statistics on the table.
If the table is large (at least a few hundred thousand rows), use the following code to refresh the statistics:
DECLARE
  l_ownname          VARCHAR2(255) := 'owner'; -- Owner (schema) of table to analyze
  l_tabname          VARCHAR2(255) := 'log';   -- Table to analyze
  l_estimate_percent NUMBER(3)     := 5;       -- Percentage of rows to estimate (NULL means compute)
BEGIN
  dbms_stats.gather_table_stats (
    ownname          => l_ownname,
    tabname          => l_tabname,
    estimate_percent => l_estimate_percent,
    method_opt       => 'FOR ALL INDEXED COLUMNS',
    cascade          => TRUE
  );
END;
Otherwise, if the table is small, use
ANALYZE TABLE log COMPUTE STATISTICS FOR ALL INDEXED COLUMNS;
Additionally, if the table grows large, you should consider partitioning it by range on the creationDate column (a sketch follows the links below). See these links for the details:
Oracle Documentation: Range Partitioning
OraFAQ: Range partitions
How to Create and Manage Partition Tables in Oracle
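A minimal sketch of the range partitioning suggested above, in Oracle syntax; the column types, partition names, and date boundaries are illustrative only:
CREATE TABLE log (
  logLevel     NUMBER,
  creationDate DATE,
  message      VARCHAR2(4000)
)
PARTITION BY RANGE (creationDate) (
  PARTITION log_2011_q1 VALUES LESS THAN (TO_DATE('2011-04-01', 'YYYY-MM-DD')),
  PARTITION log_2011_q2 VALUES LESS THAN (TO_DATE('2011-07-01', 'YYYY-MM-DD')),
  PARTITION log_future  VALUES LESS THAN (MAXVALUE)
);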