I'm using U-SQL to select all objects which are inside one or more of the shapes. The code works but is really slow. Is there some way to make it more performant?
#rs1 =
SELECT DISTINCT aisdata.imo,
portshape.unlocode
FROM #lsaisdata AS aisdata
CROSS JOIN
#portsshape AS portshape
WHERE Geometry.STMPolyFromText(new SqlChars(portshape.shape.Replace("Z", "").ToCharArray()), 0).STContains(Geometry.Point(aisdata.lon, aisdata.lat, 0)).IsTrue;
Added more information about my issue:
I've registered Microsoft.SqlServer.Types.dll and SqlServerSpatial130.dll to be able to use spatial functions in U-SQL
I'm running my job in Data Lake Analytics using two AUs. Initially I used 10 AUs, but the Diagnostics tab stated that the job was 8 AUs over-allocated and max useful AUs was 2.
The job takes about 27 minutes to run with the UDT code below, and the cross join takes almost all of this time
The input is one CSV file (66 MB) and one WKT file (2.4 MB)
I'm using Visual Studio 2015 with Azure Data Lake Tools v2.2.5000.0
I tried encapsulating some of the spatial code in UDTs and that improved the performance to 27 minutes:
#rs1 =
SELECT DISTINCT aisdata.imo,
portshape.unlocode
FROM #lsaisdata AS aisdata
CROSS JOIN
#portsshape AS portshape
WHERE portshape.geoShape.GeoObject.STContains(SpatialUSQLApp.CustomFunctions.GetGeoPoint(aisdata.lon, aisdata.lat).GeoObject).IsTrue;
First, a CROSS JOIN will always explode your data into an N×M result. Depending on the number of rows, this can make the job very expensive and can also make it hard to estimate the correct degree of parallelism.
Secondly, the spatial join you do is an expensive operation. For example, if you used SQL Server 2012's spatial capabilities (2016 has a native implementation of the types that may be a bit faster), I assume you would see similar performance behavior. Most of the time you need a spatial index to get better performance. U-SQL does not support spatial indices, but you could probably approximate the same behavior with an abstraction (such as tessellating the objects and checking whether their cells overlap) that provides a faster pre-filter/join before you apply the exact test to weed out the false positives.
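For illustration, here is a rough sketch of such a pre-filter, assuming you can add bounding-box columns to the port shapes (minLon, maxLon, minLat, maxLat are hypothetical names, e.g. computed once from each shape's envelope); the rowsets use the standard U-SQL @ prefix:
// Cheap range check first: only point/shape pairs whose coordinates fall
// inside the shape's bounding box survive to the expensive geometry test.
@candidates =
SELECT aisdata.imo,
aisdata.lon,
aisdata.lat,
portshape.unlocode,
portshape.geoShape
FROM @lsaisdata AS aisdata
CROSS JOIN @portsshape AS portshape
WHERE aisdata.lon >= portshape.minLon AND aisdata.lon <= portshape.maxLon
AND aisdata.lat >= portshape.minLat AND aisdata.lat <= portshape.maxLat;

// Exact containment test runs only on the (much smaller) candidate set.
@rs1 =
SELECT DISTINCT c.imo,
c.unlocode
FROM @candidates AS c
WHERE c.geoShape.GeoObject.STContains(SpatialUSQLApp.CustomFunctions.GetGeoPoint(c.lon, c.lat).GeoObject).IsTrue;
The cross join is still there, but the expensive STContains call now only runs on points that fall inside a shape's bounding box, which is typically a small fraction of the N×M pairs.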
Related
I'm setting up a crude data warehouse for my company. I've successfully pulled contact, company, deal and association data from our CRM into BigQuery, but when I join these together into a master table for analysis via our BI platform, I continually get the error:
Query exceeded resource limits. This query used 22602 CPU seconds but would charge only 40M Analysis bytes. This exceeds the ratio supported by the on-demand pricing model. Please consider moving this workload to the flat-rate reservation pricing model, which does not have this limit. 22602 CPU seconds were used, and this query must use less than 10200 CPU seconds.
As such, I'm looking to optimise my query. I've already removed all GROUP BY and ORDER BY clauses, and have tried using WHERE clauses to do additional filtering, but this seems illogical to me as it would add processing demands.
My current query is:
SELECT
coy.company_id,
cont.contact_id,
deals.deal_id,
{another 52 fields}
FROM `{contacts}` AS cont
LEFT JOIN `{assoc-contact}` AS ac
ON cont.contact_id = ac.to_id
LEFT JOIN `{companies}` AS coy
ON CAST(ac.from_id AS int64) = coy.company_id
LEFT JOIN `{assoc-deal}` AS ad
ON coy.company_id = CAST(ad.from_id AS int64)
LEFT JOIN `{deals}` AS deals
ON ad.to_id = deals.deal_id;
FYI, {assoc-contact} and {assoc-deal} are both separate views I created from the associations table to make it easier to associate those tables with the companies table.
It should also be noted that this query has occasionally run successfully, so I know it works; it just fails about 90% of the time because the query is so big.
TL;DR
Check your join keys. 99% of the time the cause of the problem is a combinatoric explosion.
I can't know for sure, since I don't have access to the underlying data, but I will describe a general investigation method that, in my experience, has found the root cause every time.
Long Answer
Investigation method
Say you are joining two tables
SELECT
cols
FROM L
JOIN R ON L.c1 = R.c1 AND L.c2 = R.c2
and you run into this error. The first thing you should do is check for duplicates in both tables.
SELECT
c1, c2, COUNT(1) as nb
FROM L
GROUP BY c1, c2
ORDER BY nb DESC
Do the same for every table involved in the join.
I bet you will find that your join keys are duplicated. BigQuery is very scalable, so in my experience this error happens when you have a join key that repeats more than 100,000 times in both tables. That means that after the join you will end up with 100,000² = 10 billion rows!
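If the counts confirm that a key repeats on both sides and the duplicates are not meaningful, one possible fix (just a sketch, reusing the generic L/R/c1/c2 names from the example above) is to keep a single row per join key before joining, so the join cannot fan out:
WITH L_dedup AS (
SELECT *
FROM L
WHERE TRUE
QUALIFY ROW_NUMBER() OVER (PARTITION BY c1, c2 ORDER BY c1) = 1
)
SELECT
cols
FROM L_dedup
JOIN R ON L_dedup.c1 = R.c1 AND L_dedup.c2 = R.c2
Which row to keep (and whether dropping duplicates is valid at all) depends on your data, so always look at the duplicate counts first.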
Why BigQuery gives this error
In my experience, this error message means that your query does too much computation relative to the size of its inputs.
No wonder you're getting this if you end up with 10 billion rows after joining tables with a few million rows each.
BigQuery's on-demand pricing model is based on the amount of data read from your tables. This means people could try to abuse it by, say, running CPU-intensive computations while reading tiny datasets. To give an extreme example, imagine someone writes a JavaScript UDF to mine Bitcoin and runs it on BigQuery:
SELECT MINE_BITCOIN_UDF()
The query will be billed $0 because it doesn't read anything, but will consume hours of Google's CPU. Of course they had to do something about this.
So this ratio exists to make sure that users don't do anything sketchy by burning hours of CPU while processing only a few MB of input.
Other MPP platforms with a different pricing model (e.g. Azure Synapse, which charges based on the bytes processed rather than read, as BigQuery does) might have run the query without complaining, and then billed you for 10 TB of processing on that 40 MB table.
P.S.: Sorry for the late and long answer, it's probably too late for the person who asked, but hopefully it will help whoever runs into that error.
I have a curious question, and as my name suggests I am a novice, so please bear with me. Oh, and hi to you all; I have learned so much using this site already.
I have an MSSQL database for customers in which I am trying to track their status on a daily basis, with various attributes recorded in several tables. These are then joined together using a data table to create a master table, which yields approximately 600 million rows.
As you can imagine, querying this beast on a middling server (Intel i5, OS on an SSD, 2 TB 7,200 RPM HDD, SQL Server 2017 Standard) is really slow. I was using Google BigQuery, but that got expensive very quickly. I have implemented indexes, which have somewhat sped up the process, but it is still not fast enough. A simple SELECT DISTINCT on customer ID for a given attribute still takes 12 minutes on average for a first run.
The whole point of having a daily view is to let something like Tableau or Qlik connect to a single table, so the end user can create reports by just dragging in the required columns. I have thought of taking the main query that creates the master table and parameterizing it, but visualization tools aren't great at passing many variables.
This is a snippet of the table: there are approximately 300,000 customers, and a row per day is created for customers who joined between 2010 and 2017. They fall off the list if they leave.
My questions are:
1) Should I even bother creating a flat file, or should I just parameterize the query?
2) Are there any techniques I can use, aside from setting the smallest data type for each column, to keep the database size to a minimum?
3) There are in fact over a hundred attribute columns; many of them, once set to either 0 or 1, seldom change. Is there another way to store these and save space?
4) What types of indexes should I have on the master table if many of the attributes are binary?
Any ideas would be gratefully received.
I am running a Spark SQL query over a huge amount of data (approx. 50 million records). Because of the volume, the query becomes slow in the cluster, taking a long time (20 minutes) to process the entire dataset. I am using an inner join and a left join inside the query. How can I improve the performance?
Since you are performing join operations and the data size is huge, chances are a lot of shuffling and I/O is involved. If you are not using Kryo serialization, your code will be using the default Java serialization; switch to Kryo serialization, as it gives better performance.
It also depends on how you are storing your data in HDFS. If it is sitting in plain files, try creating Hive tables on top of it; Hive provides many optimisation techniques:
a. Partitioning and bucketing: partitioning speeds up queries since you don't have to scan the entire table when reading data, and bucketing speeds up join operations.
b. Map-side joins: the smaller table is loaded into memory and the join is performed in the mapper itself, which speeds up the query (see the sketch below).
Apache Spark can take advantage of Hive through HiveContext.
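As a rough illustration only (the table, columns, and bucket count below are made-up placeholders, not taken from the question), a partitioned and bucketed Hive table could be declared like this, with map-join conversion switched on so that small tables get broadcast:
-- Hypothetical fact table: partitioned by date, bucketed on the join key.
CREATE TABLE orders_bucketed (
order_id BIGINT,
customer_id BIGINT,
amount DOUBLE
)
PARTITIONED BY (order_date STRING)
CLUSTERED BY (customer_id) INTO 64 BUCKETS
STORED AS ORC;

-- Let Hive convert joins against small tables into map-side (broadcast) joins.
SET hive.auto.convert.join=true;
SET hive.optimize.bucketmapjoin=true;
Bucketing only helps the join if both sides are bucketed on the same key with compatible bucket counts, so check that before relying on it.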
I have a large volume of data, and I'm looking to efficiently (i.e., using a relatively small Spark cluster) perform COUNT and DISTINCT operations on one of the columns.
If I do what seems obvious, ie load the data into a dataframe:
df = spark.read.format("CSV").load("s3://somebucket/loadsofcsvdata/*").toDF()
df.createOrReplaceTempView("someview")
and then attempt to run a query:
domains = sqlContext.sql("""SELECT domain, COUNT(id) FROM someview GROUP BY domain""")
domains.show(1000)
my cluster just crashes and burns - throwing out of memory exceptions or otherwise hanging/crashing/not completing the operation.
I'm guessing that somewhere along the way there's some sort of join that blows one of the executors' memory?
What's the ideal method for performing an operation like this when the source data is at massive scale but the result isn't (the list of domains in the above query is relatively short and should easily fit in memory)?
Related info is available in this question: What should be the optimal value for spark.sql.shuffle.partitions or how do we increase partitions when using Spark SQL?
I would suggest tuning your executor settings. In particular, setting the following parameters correctly can provide a dramatic improvement in performance.
spark.executor.instances
spark.executor.memory
spark.yarn.executor.memoryOverhead
spark.executor.cores
In your case, I would also suggest tuning the number of partitions; in particular, bump the following parameter up from its default of 200 to a higher value, as required:
spark.sql.shuffle.partitions
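For example (1000 here is only an illustrative value; the right number depends on your data volume and cluster size), the shuffle partition count can be changed per session directly from Spark SQL:
-- Number of partitions used when shuffling for joins and aggregations.
-- The default is 200; 1000 is just a placeholder to tune for your workload.
SET spark.sql.shuffle.partitions = 1000;
The executor settings listed above (instances, memory, memory overhead, cores) are normally supplied at submit time, e.g. via spark-submit --conf, rather than inside a SQL session.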
We have a batch analytical SQL job – run once daily – that reads data from 2 source tables held in a powerful RDBMS. The source tables are huge (>100 TB) but have fewer than 10 fields combined.
The question I have is: can the 2 source tables be held in compressed and indexed flat files so that the entire operation is much faster, saves on storage, and can be run on a low-spec server? Also, can we run SQL-like queries against these compressed and indexed flat files? Any pointers on how to go about doing this would be extremely helpful.
Most optimization strategies optimize either speed or size, and trade one off against the other. In general, RDBMS solutions optimize for speed, at the expense of size - for instance, by creating an index, you take up more space, and in return you get faster data access.
So your desire to optimize for both speed AND size is unlikely to be fulfilled - you almost certainly have to trade one against the other.
Secondly, if you want to execute "sql-like" queries, I'm pretty sure that an RDBMS is the best solution - especially with huge data sets.
It may be the case that the underlying data lends itself to a specific optimization - for instance, if you can create a custom indexing scheme that packs attributes into integer bitmasks and then uses Boolean operators on those integers to access the data, you may be able to beat the performance of an RDBMS index.
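As a purely illustrative sketch (the table, the flags column, and the bit assignments are invented for this example), the bitmask idea can look like this in SQL, with each Boolean attribute packed into one bit of an integer column:
-- Hypothetical layout: bit 0 = is_active, bit 1 = is_premium, bit 2 = has_churned.
-- Filtering on the packed integer touches one small column instead of many.
SELECT customer_id
FROM customer_flags
WHERE (flags & 3) = 3  -- bits 0 and 1 both set: active AND premium
AND (flags & 4) = 0;   -- bit 2 clear: not churned
Whether this actually beats a regular RDBMS index depends entirely on your data and query patterns, as noted above.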