I have parent/child data in Excel which gets loaded into a 3rd-party system running MS SQL Server. The data represents a (hopefully) directed acyclic graph. 3rd party means I don't have a completely free hand in the schema. The Excel data is a concatenation of other files, and the possibility exists that in the cross-references between the various files someone has caused a loop - i.e. X is a child of Y (X->Y), then elsewhere (Y->A->B->X). I can write VB, VBA, etc. on the Excel side or on the SQL Server db. The Excel file is almost 30k rows, so I'm worried about a combinatorial explosion as the data is set to grow, and some techniques like creating a table with all the paths might be pretty unwieldy. I'm thinking of simply writing a program that, for each root, does a tree traversal to each leaf and flags it if the depth gets greater than some nominal value.
Better suggestions or pointers to previous discussion welcomed.
You can use a recursive CTE to detect loops:
with prev as (
    select RowId, 1 as GenerationsRemoved
    from YourTable
    union all
    select YourTable.RowId, prev.GenerationsRemoved + 1
    from prev
    inner join YourTable
        on YourTable.ParentRowId = prev.RowId
        and prev.GenerationsRemoved < 55
)
select *
from prev
where GenerationsRemoved > 50
This does require you to specify a maximum recursion level: here the CTE walks up to 55 generations, and the final select flags as erroneous any row that appears more than 50 generations deep, which in a legitimate tree of this size should never happen.
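If you also want to see which rows actually form the loop, a variant of the same idea (a sketch only, using the same RowId/ParentRowId columns as above) is to carry the visited path along and stop a branch as soon as a row reappears in its own path. Starting from every row is heavier than starting only from the roots, so on 30k rows you may want to restrict the anchor member to root rows.
with paths as (
    select RowId,
           cast(',' + cast(RowId as varchar(max)) + ',' as varchar(max)) as VisitedPath,
           0 as IsCycle
    from YourTable
    union all
    select c.RowId,
           p.VisitedPath + cast(c.RowId as varchar(max)) + ',',
           case when p.VisitedPath like '%,' + cast(c.RowId as varchar(max)) + ',%'
                then 1 else 0 end
    from paths p
    inner join YourTable c on c.ParentRowId = p.RowId
    where p.IsCycle = 0          -- stop extending a branch once a repeat is found
)
select RowId, VisitedPath        -- each row here closes a cycle; the path shows the loop
from paths
where IsCycle = 1
option (maxrecursion 0);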
I have a PowerBI report that shows metrics and visuals for a large amount of quote data extracted via an API, roughly 400k records a week. These quotes only contain latitude and longitude points for location, but stakeholders need to slice views by our service areas. We have a fact table of areas with IDs and geography polygons that I am able to reference.
Currently, the report uses a gnarly custom SQL query that pulls this data from the transactional database, transforms it, and finds the nearest area through a cross apply method.
Here's an example of the code:
-- step 1 : get quotes from the first table
SELECT Col1, Col2...
INTO #AllQuotes
FROM Quotes1
LEFT JOIN (FactTables)
INNER JOIN([filtering self join])
WHERE expression
-- Step 2 : insert quotes from a separate table into our first temp table to get a table with all quote data
INSERT INTO #AllQuotes
SELECT Col1, Col2
FROM Quotes2
LEFT JOIN(Fact Tables)
INNER JOIN([filtering self join])
WHERE expression
-- Step 3 : Use CROSS APPLY to check the distance of every quote from every area, only selecting the shortest distance
SELECT *
FROM (SELECT *
FROM #AllQuotes as t
CROSS APPLY (SELECT TOP 1 a.AreaName,
a.AreaPoly.STDistance(geography::STGeomFromText('POINT('+ cast(t.PickLongitudeTemp as VARCHAR(20)) +' '+ cast(t.PickLatitudeTemp as VARCHAR(20)) +')', 4326).MakeValid()) AS 'DistanceToZone'
FROM Area as a
WHERE (a.AreaPoly.STIsValid() = 1)
AND (a.AreaPoly.STDistance(geography::STGeomFromText('POINT('+ cast(t.PickLongitudeTemp as VARCHAR(20)) +' '+ cast(t.PickLatitudeTemp as VARCHAR(20)) +')', 4326).MakeValid()) IS NOT NULL)
ORDER BY a.AreaPoly.STDistance(geography::STGeomFromText('POINT('+ cast(t.PickLongitudeTemp as VARCHAR(20)) +' '+ cast(t.PickLatitudeTemp as VARCHAR(20)) +')', 4326).MakeValid()) ASC) AS t2 ) AS llz;
This is obviously very computationally expensive and makes the PowerBI mashup engine work in overdrive. We are starting to have CPU-load issues on our database due to the poorly optimized data load. PowerBI rebuilds its data model on every refresh, and its query engine is not the strongest with complex queries; combined with the large amount of data, this quickly becomes a real stability issue.
Our database schema isn't conducive to efficient analytics queries: no transformation happens as data is loaded, apart from a process that hits a maps API to associate addresses with lat/longs. To produce reports of any value, I need to perform a lot of transformations within the query or within the loading process. This isn't the best thing to do, I know, but it's what I got working and it provides value.
I decided to try to move the query into something server side so that PowerBI only needed to load an already transformed and prepped dataset. With views I was able to get a dataset of all of my quotes and their lat/longs.
Now how would I go about running step 3? I have a few ideas:
Use a nested view
Refactor every temp table into a monolith of CTEs that then get transformed by a final view
Research a new method for solving a Lat/Long to Polygon matching problem.
I would like to have a final table that PowerBI can import with a simple SELECT * FROM #AllQuotes, so that the mashup engine has less work to do constructing the data model. This would also let me implement incremental refresh and import only a day's worth of data as time goes on, rather than the full dataset.
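For reference, a minimal sketch of what that pre-prepped, server-side dataset could look like (the table, view, and column names below are assumptions, not part of the actual schema): a permanent table populated by a scheduled job, which PowerBI then imports with a plain SELECT and can filter by date for incremental refresh.
-- Hypothetical permanent staging table for the prepped quotes (all names assumed)
CREATE TABLE dbo.PreparedQuotes (
    QuoteId         INT          NOT NULL,
    QuoteDate       DATE         NOT NULL,   -- incremental refresh filters on this
    NearestAreaID   INT          NULL,
    NearestAreaName VARCHAR(100) NULL
    /* ...any other columns the report needs... */
);

-- Scheduled load (e.g. a nightly SQL Agent job): append only the latest day's quotes
INSERT INTO dbo.PreparedQuotes (QuoteId, QuoteDate, NearestAreaID, NearestAreaName)
SELECT q.QuoteId, q.QuoteDate, q.NearestAreaID, a.AreaName
FROM dbo.AllQuotesView AS q          -- the view that already returns transformed quotes (assumed name)
LEFT JOIN Area AS a ON a.AreaID = q.NearestAreaID
WHERE q.QuoteDate = CAST(DATEADD(day, -1, GETDATE()) AS DATE);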
Any solutions or ideas on how to match Lat/Long points to a list of geography Polygons in a PBI friendly way would be greatly appreciated.
Can't say I'm a spatial expert, but I don't think you are really using your index. STDistance has to run against every combination of quote/area and then sort to find the smallest distance, so you need to reduce the number of areas each quote is compared against.
If you review your data, I'd guess you'll find something like 30% of quotes are within 5,000 meters of an area, and 80% are within 10,000 meters.
With that in mind, I think we can add some queries to find those close matches first. This should be able to use your spatial indexes efficiently, since it first filters down to only close matches, reducing the number of times you have to calculate the distance from a quote to each area.
Conceptual Code Approach: First Find Quick Matches within Predefined Distance(s)
/*First identify matches within a small distance, like 5,000 meters*/
UPDATE A
SET NearestAreaID = C.AreaID
FROM #Quote AS A
CROSS APPLY (SELECT QuoteGeogPoint = geography::Point(A.PickLatitudeTemp, A.PickLongitudeTemp, 4326)) AS B
CROSS APPLY (SELECT TOP(1) DTA.AreaID
             FROM Area AS DTA
             /*STBuffer creates a circle of 5,000 meters around the quote location;
               STIntersects matches only areas that intersect that circle*/
             WHERE B.QuoteGeogPoint.STBuffer(5000).STIntersects(DTA.AreaPoly) = 1
             ORDER BY DTA.AreaPoly.STDistance(B.QuoteGeogPoint)
            ) AS C
/*Could run the above query again for a medium distance, say 10,000 meters*/
UPDATE A
SET NearestAreaID = C.AreaID
FROM #Quote AS A
CROSS APPLY (SELECT QuoteGeogPoint = geography::Point(A.PickLatitudeTemp, A.PickLongitudeTemp, 4326)) AS B
CROSS APPLY (SELECT TOP(1) DTA.AreaID
             FROM Area AS DTA
             WHERE B.QuoteGeogPoint.STBuffer(10000).STIntersects(DTA.AreaPoly) = 1
             ORDER BY DTA.AreaPoly.STDistance(B.QuoteGeogPoint)
            ) AS C
WHERE A.NearestAreaID IS NULL /*No match found yet*/
Match Quotes Regardless of Area Distance
Once you've found the easy matches, use this script (your current step 3) to clean up any stragglers
/*Find matches for any quotes that didn't match within the defined distances*/
UPDATE A
SET NearestAreaID = C.AreaID
FROM #Quote AS A
CROSS APPLY (SELECT QuoteGeogPoint = geography::Point(A.PickLatitudeTemp, A.PickLongitudeTemp, 4326)) AS B
CROSS APPLY (SELECT TOP(1) DTA.AreaID
             FROM Area AS DTA
             ORDER BY DTA.AreaPoly.STDistance(B.QuoteGeogPoint)
            ) AS C
WHERE A.NearestAreaID IS NULL /*No match already found*/
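For the STBuffer/STIntersects filter above to prune candidates cheaply, Area.AreaPoly needs a spatial index. A minimal sketch (index name assumed; the table must already have a clustered primary key for a spatial index to be allowed):
CREATE SPATIAL INDEX IX_Area_AreaPoly
    ON Area (AreaPoly)
    USING GEOGRAPHY_AUTO_GRID;   /* SQL Server 2012+; older versions use GEOGRAPHY_GRID with explicit grid settings */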
I have a column in my tables called 'data' with JSONs in it like below:
{"tt":"452.95","records":[{"r":"IN184366","t":"812812819910","s":"129.37","d":"982.7","c":"83"},{"r":"IN183714","t":"8028028029093","s":"33.9","d":"892","c":"38"}]}
I have written code to unnest it into separate columns like tr, r, and s.
Below is the code
with raw as (
    SELECT json_extract_path_text(B.Data, 'records', true) as items
    FROM tableB as B
    WHERE B.date::timestamp between
          to_timestamp('2019-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND
          to_timestamp('2022-12-31 23:59:59','YYYY-MM-DD HH24:MI:SS')
    UNION ALL
    SELECT json_extract_path_text(C.Data, 'records', true) as items
    FROM tableC as C
    WHERE C.date-5 between
          to_timestamp('2019-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND
          to_timestamp('2022-12-31 23:59:59','YYYY-MM-DD HH24:MI:SS')
),
numbers as (
    SELECT ROW_NUMBER() OVER (ORDER BY TRUE)::integer - 1 as ordinal
    FROM <any_random_table> limit 1000
),
joined as (
    select raw.*,
           json_array_length(raw.items, true) as number_of_items,
           json_extract_array_element_text(raw.items, numbers.ordinal::int, true) as item
    from raw
    cross join numbers
    where numbers.ordinal < json_array_length(raw.items, true)
),
parsed as (
    SELECT J.*,
           json_extract_path_text(J.item, 'tr', true) as tr,
           json_extract_path_text(J.item, 'r', true) as r,
           json_extract_path_text(J.item, 's', true)::float8 as s
    from joined J
)
select * from parsed
The above code works when there is a small number of records, but it takes more than a day to run, CPU utilization (in Redshift) reaches 100%, and disk usage also hits 100% when I set the date range to cover the last two years or the number of records is otherwise large.
Can anyone please suggest an alternative way to unnest JSON objects like this in Redshift?
My query plan is saying:
Nested Loop Join in the query plan - review the join predicates to avoid Cartesian products
Goal: To Unnest without using any cross joins
Input: data column having JSON
"tt":"452.95","records":[{"r":"IN184366","t":"812812819910","s":"129.37","d":"982.7","c":"83"},{"r":"IN183714","t":"8028028029093","s":"33.9","d":"892","c":"38"}]}
Output should be, for example, the tr, r, and s columns from the above JSON.
You want to unnest up to 1,000 JSON records stored in a JSON array, but the nested loop join is taking too long.
The root issue is likely your data model. You have stored structured records (called "records") inside a semi-structured text element (JSON), within a column of a structured columnar database. You want to perform some operation on these buried records that you haven't described, but here's the problem: columnar databases are optimized for read-centric analytic queries, while expanding these JSON internal records into Redshift rows (records) is fundamentally a write operation. This works against the optimizations of the database.
The size of this expanded data is also large compared to the disk storage on your cluster, which is why the disks are filling up. Your CPUs are likely spinning unpacking the JSONs and managing overloaded disk and memory capacity. At the edge of filling its disks, Redshift shifts to a mode that optimizes disk-space utilization at the expense of execution speed. A larger cluster may give you significantly faster execution if you can avoid this effect, but that will cost money you may not have budgeted - not an ideal solution.
One area that would improve the speed of your query is not carrying all the data along. You keep raw.* and J.* all through the query, but it is not clear you need them. Since part of the issue is data size during execution, and that execution includes a loop join, you are making the execution much harder than it needs to be by carrying all this data (including the original JSONs).
The best way out of this situation is to change your data model and expand these JSON internal records into Redshift records on ingestion. JSON data is fine for seldom-used information, or for information that is only needed at the end of a query where the data is small. Needing the expanded JSON at the input end of the query, for such a large amount of data, is not a good use case for JSON in Redshift. Each of these "records" inside the JSON is a record and needs to be stored as such if you need to work across them as query input.
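A minimal sketch of what that could look like (table and column names are assumptions): each element of "records" becomes its own row at load time, so queries never have to unnest JSON at all.
-- Hypothetical target table, populated during ingestion instead of keeping raw JSON
CREATE TABLE quote_records (
    source_id    BIGINT,         -- key back to the originating row in tableB / tableC
    record_date  TIMESTAMP,
    tt           DECIMAL(12,2),
    r            VARCHAR(32),
    t            VARCHAR(32),
    s            FLOAT8,
    d            FLOAT8,
    c            INT
);

-- Downstream queries then become plain scans/filters, which is what Redshift is optimized for
SELECT r, s
FROM quote_records
WHERE record_date BETWEEN '2019-01-01' AND '2022-12-31';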
Now you want to know if there is some slick way to get around this issue in your case, and the answer is "unlikely, but maybe". Can you describe how you are using the final values in your query (t, r, and s)? If you are just using some aspect of this data (a max value, a sum, ...) then there may be a way to get to the answer without the large nested loop join. But if you need all the values, then there is no other way to get them, AFAIK. A description of what comes next in the data process could open up such an opportunity.
I'm trying to query my database to pull only duplicate/old data to write to a scratch section in Excel (using a macro passing SQL to the DB).
For now, I'm testing in Access alone to filter out only the old data.
First, I'm trying to filter my database by a specified WorkOrder, RunNumber, and Row.
The code below is meant to filter by WorkOrder, RunNumber, and Row, but SQL doesn't like it when I tack on the second AND clause, so it currently isn't working.
SELECT *
FROM DataPoints
WHERE (((DataPoints.[WorkOrder])=[WO2]) AND ((DataPoints.[RunNumber])=6) AND ((DataPoints.[Row]=1)
Once I figure that portion out....
Then, if there is only one entry with the specified WorkOrder, RunNumber, and Row, I want to filter it out (it's not needed in the scratch section, because its data is already written to the main section of my report).
If there are two or more entries with that criteria (WO, RN, and Row), then I want to filter out the newest entry based on RunDate and RunTime, and keep all of the older entries.
For instance, in the clip below, the only item remaining in my filtered query would be the top entry with the timestamp 11:47:00 AM.
(screenshot of the sample data omitted)
Are there any recommended commands to accomplish this? Any ideas are helpful. Thank you.
I would suggest something along the lines of the following:
select t.*
from datapoints t
where t.workorder = [WO2]
  and t.runnumber = 6
  and t.row = 1
  and exists
      (
          select 1
          from datapoints u
          where u.workorder = t.workorder
            and u.runnumber = t.runnumber
            and u.row = t.row
            and (u.rundate > t.rundate or (u.rundate = t.rundate and u.runtime > t.runtime))
      )
Here, if the correlated subquery within the where clause finds a record with the same workorder, runnumber and row, but with either a later rundate or the same rundate and a later runtime, then the record is returned by the main query.
You need two more )'s at the end of your code snippet. Or you can delete the parentheses completely in this example; MS Access will add them back in as it deems necessary.
MS Access SQL can be tricky, as it is not standards compliant and either doesn't allow for very complex queries or needs an ugly workaround, like a parenthesis-nesting nightmare when trying to join more than two tables.
For these reasons, I suggest using multiple Access queries to produce your results.
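As a rough sketch of that multi-query route (the saved-query name qryNewestRun is an assumption, as is the assumption that RunDate and RunTime are Date/Time fields that can be added together): a first query finds the newest timestamp per WorkOrder/RunNumber/Row, and a second keeps only rows older than it, which also drops single-entry groups automatically since their only row is the newest.
Query 1, saved as qryNewestRun:
SELECT WorkOrder, RunNumber, [Row], MAX(RunDate + RunTime) AS NewestStamp
FROM DataPoints
GROUP BY WorkOrder, RunNumber, [Row];
Query 2, the final result:
SELECT d.*
FROM DataPoints AS d
INNER JOIN qryNewestRun AS n
    ON (d.WorkOrder = n.WorkOrder)
   AND (d.RunNumber = n.RunNumber)
   AND (d.[Row] = n.[Row])
WHERE (d.RunDate + d.RunTime) < n.NewestStamp;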
I have a table like this, which contains links:
key_a key_b
--------------
a b
b c
g h
a g
c a
f g
It's not really tidy, and there is infinite recursion...
key_a = parent
key_b = child
I need a query which will recompose the groups and assign a number to each hierarchical group (parent + direct children + indirect children):
key_a key_b nb_group
--------------------------
a b 1
a g 1
b c 1
**c a** 1
f g 2
g h 2
The **c a** link is the one responsible for the infinite loop, because we have A-B-C-A. I simply want to show that link once, as in the table above.
Any ideas?
Thanks in advance
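For anyone who wants to reproduce this, here is a minimal setup of the sample data (the table name links is an assumption; the answers below use their own table and column names, which map onto these):
CREATE TABLE links (key_a VARCHAR(10), key_b VARCHAR(10));
INSERT INTO links (key_a, key_b) VALUES
    ('a','b'), ('b','c'), ('g','h'), ('a','g'), ('c','a'), ('f','g');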
The problem is that you aren't really dealing with strict hierarchies; you're dealing with directed graphs, where some graphs have cycles. Notice that your nbgroup #1 doesn't have any canonical root-- it could be a, b, or c due to the cyclic reference from c-a.
The basic way of dealing with this is to think in terms of graph techniques, not recursion. In fact, an iterative approach (not using a CTE) is the only solution I can think of in SQL. The basic approach is explained here.
Here is a SQL Fiddle with a solution that addresses both the cycles and the shared-leaf case. Notice it uses iteration (with a failsafe to prevent runaway processes) and table variables to operate; I don't think there's any getting around this. Note also the changed sample data (a-g changed to a-h; explained below).
If you dig into the SQL you'll notice that I changed some key things from the solution given in the link. That solution was dealing with undirected edges, whereas your edges are directed (if you used undirected edges the entire sample set is a single component because of the a-g connection).
This gets to the heart of why I changed a-g to a-h in my sample data. Your specification of the problem is straightforward if only leaf nodes are shared; that's the specification I coded to. In this case, a-h and g-h can both get bundled off to their proper components with no problem, because we're concerned about reachability from parents (even given cycles).
However, when you have shared branches, it's not clear what you want to show. Consider the a-g link: given this, g-h could exist in either component (a-g-h or f-g-h). You put it in the second, but it could have been in the first instead, right? This ambiguity is why I didn't try to address it in this solution.
Edit: To be clear, in my solution above, if shared branches ARE encountered, it treats the whole set as a single component. That's not what you described above, but it will have to be changed after the problem is clarified. Hopefully this gets you close.
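For reference, here is a bare-bones sketch of the iterative label-propagation technique in its generic, undirected form, using the links sample table sketched under the question. Note that on undirected edges the whole sample collapses into a single component because of the a-g connection, exactly as described above, so the directed-edge and shared-branch adjustments would still need to be layered on. Each node starts in its own group and repeatedly adopts the smallest group label among its neighbours until nothing changes.
DECLARE @labels TABLE (node VARCHAR(10) PRIMARY KEY, grp VARCHAR(10));

/* every node starts as its own group */
INSERT INTO @labels (node, grp)
SELECT node, node
FROM (SELECT key_a AS node FROM links
      UNION
      SELECT key_b FROM links) AS n;

DECLARE @changed INT = 1, @safety INT = 0;

WHILE @changed > 0 AND @safety < 100   /* failsafe against runaway loops */
BEGIN
    /* pull each node's group down to the smallest group among its neighbours */
    UPDATE l
    SET grp = m.min_grp
    FROM @labels AS l
    JOIN (SELECT e.node, MIN(nb.grp) AS min_grp
          FROM (SELECT key_a AS node, key_b AS neighbour FROM links
                UNION ALL
                SELECT key_b, key_a FROM links) AS e
          JOIN @labels AS nb ON nb.node = e.neighbour
          GROUP BY e.node) AS m
        ON m.node = l.node
    WHERE m.min_grp < l.grp;

    SET @changed = @@ROWCOUNT;
    SET @safety = @safety + 1;
END;

/* number the groups and attach them back to the original links */
SELECT k.key_a, k.key_b, DENSE_RANK() OVER (ORDER BY l.grp) AS nb_group
FROM links AS k
JOIN @labels AS l ON l.node = k.key_a
ORDER BY nb_group, k.key_a, k.key_b;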
You should use a recursive query. In the first part we select all records which are top-level nodes (have no parents) and use ROW_NUMBER() to assign them group ID numbers. Then in the recursive part we add their children one by one and reuse the parents' group ID numbers.
with CTE as
(
    select t1.parent, t1.child,
           ROW_NUMBER() over (order by t1.parent) rn
    from t t1
    where not exists (select 1 from t where child = t1.parent)
    union all
    select t.parent, t.child, CTE.rn
    from t
    join CTE on t.parent = CTE.child
)
select * from CTE
order by rn, parent
SQLFiddle demo
Painful problem of graph walking using recursive CTEs. This is the problem of finding connected subgraphs in a graph. The challenge with using recursive CTEs is preventing unwarranted recursion -- that is, infinite loops. In SQL Server, that typically means storing the visited nodes in a string.
The idea is to get a list of all pairs of nodes that are connected (a node is considered connected to itself). Then take the minimum from the list of connected nodes and use it as an id for the connected subgraph.
The other idea is to walk the graph in both directions from a node. This ensures that all possible nodes are visited. The following query accomplishes this:
with fullt as (
select keyA, keyB
from t
union
select keyB, keyA
from t
),
CTE as (
select t.keyA, t.keyB, t.keyB as last, 1 as level,
','+cast(keyA as varchar(max))+','+cast(keyB as varchar(max))+',' as path
from fullt t
union all
select cte.keyA, cte.keyB,
(case when t.keyA = cte.last then t.keyB else t.keyA
end) as last,
1 + level,
cte.path+t.keyB+','
from fullt t join
CTE
on t.keyA = CTE.last or
t.keyB = cte.keyA
where cte.path not like '%,'+t.keyB+',%'
) -- select * from cte where 'g' in (keyA, keyB)
select t.keyA, t.keyB,
dense_rank() over (order by min(cte.Last)) as grp,
min(cte.Last)
from t join
CTE
on (t.keyA = CTE.keyA and t.keyB = cte.keyB) or
(t.keyA = CTE.keyB and t.keyB = cte.keyA)
where cte.path like '%,'+t.keyA+',%' or
cte.path like '%,'+t.keyB+',%'
group by t.id, t.keyA, t.keyB
order by t.id;
The SQLFiddle is here.
You might want to look into common table expressions (CTEs).