I have a table, called V (as seen in the screenshot below). How would I find all the rows with a given value in either the IN or OUT columns?
For example, finding all rows with "#10:0" in IN or OUT below.
My best attempt is
SELECT FROM V WHERE ???(OUT OR IN) = '#10:0'
but I don't know what should be in place of the ???.
In your case you have a defined edge #rid, so it would be better to start your query from E rather than V.
That way you can use the inV(), outV() and bothV() functions.
Examples:
1) Getting the IN vertex (#12:1)
select expand(inV()) from #10:0
2) Getting the OUT vertex (#12:0)
select expand(outV()) from #10:0
3) Getting both vertices connected to #10:0 (#12:0 and #12:1)
select expand(bothV()) from #10:0
Hope it helps
Based on your query, I guess:
SELECT FROM V WHERE OUT = '#10:0' OR IN = '#10:0'
but the screenshot really hurts my eyes
Is there any way within a Snowflake SQL query to view which tables are being queried the most, as well as which columns? I want to know what data is of most value to my users and I'm not sure how to do this programmatically. Any thoughts are appreciated - thank you!
2021 update
The new ACCESS_HISTORY view has this information (in preview right now, enterprise edition).
For example, if you want to find the most used columns:
select obj.value:objectName::string objName
, col.value:columnName::string colName
, count(*) uses
, min(query_start_time) since
, max(query_start_time) until
from snowflake.account_usage.access_history
, table(flatten(direct_objects_accessed)) obj
, table(flatten(obj.value:columns)) col
group by 1, 2
order by uses desc
Ref: https://docs.snowflake.com/en/sql-reference/account-usage/access_history.html
2020 answer
The best I found (for now):
For any given query, you can find which tables are scanned by looking at the plan generated for it:
SELECT *, "objects"
FROM TABLE(EXPLAIN_JSON(SYSTEM$EXPLAIN_PLAN_JSON('SELECT * FROM a.b.any_table_or_view')))
WHERE "operation"='TableScan'
You can also find all of your previously run queries:
select QUERY_TEXT
from table(information_schema.query_history())
So the natural next step would be to combine both - but that's not straightforward, as you'll get an error like:
SQL compilation error: argument 1 to function EXPLAIN_JSON needs to be constant, found 'SYSTEM$EXPLAIN_PLAN_JSON('SELECT * FROM a.b.c')'
The solution is to combine the queries from query_history() with SYSTEM$EXPLAIN_PLAN_JSON outside of SQL (to make the strings constant), and then you will be able to find the most queried tables; a rough sketch follows.
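A rough sketch of that combination, assuming you splice each query text into SYSTEM$EXPLAIN_PLAN_JSON as a constant from client code and collect the results into a staging table (scanned_objects is a hypothetical name, not a Snowflake object):
-- Step 1: list the query texts you want to analyze.
select query_id, query_text
from table(information_schema.query_history());
-- Step 2: repeated once per query by client code, with the text spliced in as a constant
-- (scanned_objects is a hypothetical staging table you create yourself).
insert into scanned_objects (query_id, object_name)
select 'the-query-id-goes-here', "objects"
from table(explain_json(system$explain_plan_json('SELECT * FROM a.b.any_table_or_view')))
where "operation" = 'TableScan';
-- Step 3: rank the most scanned tables.
select object_name, count(*) as scans
from scanned_objects
group by object_name
order by scans desc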
Quite new to SQL, and looking for help on what I'm doing wrong.
With the code below, I'm getting the error "cannot access field value on a value with type array<struct> at [1:30]".
The audience size value comes from the dataset public_campaigns, whereas the engagement rate comes from the dataset public_instagram_channels.
I think the dataset that's causing the issue here is public_campaigns.
Thanks in advance for your help!
SELECT creator_audience_size.value, AVG(engagement_rate/1000000) AS avgER
FROM `public_instagram_channels` AS pic
JOIN `public_campaigns`AS pc
ON pic.id=pc.id
GROUP BY creator_audience_size.value
This is to do with one of the columns using REPEATED mode.
In Google BigQuery you have to use UNNEST on these repeated columns to get their individual values in the result set.
It's unclear from what you've posted which column is the repeated one - looking at the table definitions for public_instagram_channels and public_campaigns will reveal this: look for the word REPEATED in the Mode column of each table definition.
Once you've found it, include UNNEST in your query, as per this untested example:
SELECT creator_audience_size.value, AVG(engagement_rate/1000000) AS avgER
FROM `public_instagram_channels` AS pic,
UNNEST(`column_name`) AS whatever_you_want
JOIN `public_campaigns` AS pc ON pic.id = pc.id
GROUP BY creator_audience_size.value
I am working on a small project for an online databases course and I was wondering if you could help me out with a problem I am having.
I have a web page that searches a movie database and retrieves specific columns using a movie initial input field, a number input field, and a code field. These will all be converted to strings and used as user input for the query.
Below is what I tried before:
select A.CD, A.INIT, A.NBR, A.STN, A.ST, A.CRET_ID, A.CMNT, A.DT
from MOVIE_ONE A
where A.INIT = :init
AND A.CD = :cd
AND A.NBR = :num
The way the page must search is in three different cases:
(initial and number)
(code)
(initial and number and code)
The cases have to be independent, so if certain fields are empty but a certain case is fulfilled, the search goes through. It also must be done in one query. I am stuck on how to implement the cases.
The parameters in the query are taken from the Java parameters in the method found in an SQLJ file.
If you could possibly provide some aid on how I can go about this problem, I'd greatly appreciate it!
Consider wrapping the equality expressions in NVL (synonymous with COALESCE) so that if a parameter input is blank, the corresponding column is checked against itself. Also, be sure to kick the a-b-c table aliasing habit.
SELECT m.CD, m.INIT, m.NBR, m.STN, m.ST, m.CRET_ID, m.CMNT, m.DT
FROM MOVIE_ONE m
WHERE m.INIT = NVL(:init, m.INIT)
AND m.CD = NVL(:cd, m.CD)
AND m.NBR = COALESCE(:num, m.NBR)
To demonstrate, consider the DB2 fiddles below, where each case can be checked by adjusting the VALUES CTE parameters, all running on the same exact data.
Case 1
WITH
i(init) AS (VALUES('db2')),
c(cd) AS (VALUES(NULL)),
n(num) AS (VALUES(53)),
cte AS
...
Case 2
WITH
i(init) AS (VALUES(NULL)),
c(cd) AS (VALUES(2018)),
n(num) AS (VALUES(NULL)),
cte AS
...
Case 3
WITH
i(init) AS (VALUES('db2')),
c(cd) AS (VALUES(2018)),
n(num) AS (VALUES(53)),
cte AS
...
However, do be aware that the fiddle runs different SQL due to the nature of the data (i.e., doubles and dates), but the query reflects the same concept, with NVL matching expressions on both sides.
SELECT *
FROM cte, i, c, n
WHERE cte.mytype = NVL(i.init, cte.mytype)
AND YEAR(CAST(cte.mydate AS date)) = NVL(c.cd, YEAR(CAST(cte.mydate AS date)))
AND ROUND(cte.mynum, 0) = NVL(n.num, ROUND(cte.mynum, 0));
I performed STIntersects using two tables and the intersections of points onto a given polygon. I have converted all the tables to use geometry columns. I am having a problem writing the query for this. I am trying to look for the points that did not intersect.
These are my two tables:
PO_Database = contains the points
POLY_Database = Polygon of interest
This is my script:
SELECT GEOM
FROM [dbo].[PO_Database] as PO
JOIN [dbo].[POLY_Database] as p ON hwy.GEOM.STIntersects(p.NEATCELL) = 1
I tried changing the value from 1 to 0, but I get repeating values of the geometry when the query is run with 0. How do I write the query to give me the names of the points that did not intersect with the polygon? Also, is there a way to check whether the intersects were done right?
If you get repeating values, you probably have multiple rows in the POLY_Database table. If you want to find the points that do not intersect any of those polygons, try this query:
SELECT GEOM
FROM [dbo].[PO_Database] as PO
WHERE NOT EXISTS (
SELECT * FROM [dbo].[POLY_Database] as p
WHERE PO.GEOM.STIntersects(p.NEATCELL) = 1
)
I have a big table (1M rows) with the following columns:
source, dest, distance.
Each row defines a link (from A to B).
I need to find the distance between a pair of nodes using another node.
An example:
If I want to find the distance between A and B,
and I find a node x with links:
x -> A
x -> B
I can add these distances and have the distance between A and B.
My question:
How can I find all the nodes (such as x) and get their distances to A and B?
My purpose is to select the minimum distance.
P.S.: A and B are just one connection (I need to do this for 100K connections).
Thanks!
As Andomar said, you'll need the Dijkstra's algorithm, here's a link to that algorithm in T-SQL: T-SQL Dijkstra's Algorithm
Assuming you want to get the path from A to B with many intermediate steps, it is impossible to do it in plain SQL for an indefinite number of steps. Simply put, it lacks the expressive power; see http://en.wikipedia.org/wiki/Expressive_power#Expressive_power_in_database_theory . As Andomar said, load the data into a process and use Dijkstra's algorithm.
This sounds like the traveling salesman problem.
From a SQL syntax standpoint: CONNECT BY PRIOR would build the tree you're after, using START WITH and a limit on the number of layers it can traverse; however, doing so will not guarantee the minimum. A rough sketch follows.
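As a rough illustration, an Oracle-style sketch of that CONNECT BY idea, assuming the link table is named links(source, dest, distance) as in the question; it enumerates paths starting from 'A' at most two links deep and does not by itself pick the minimum:
-- Walk links starting at 'A', at most two layers deep, showing each traversed path.
SELECT SYS_CONNECT_BY_PATH(dest, ' -> ') AS path,
       LEVEL AS hops
FROM links
START WITH source = 'A'
CONNECT BY PRIOR dest = source
       AND LEVEL <= 2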
I may get downvoted for this, but I find this an interesting problem. I wish that this could be a more open discussion, as I think I could learn a lot from this.
It seems like it should be possible to achieve this by doing multiple select statements - something like SELECT id FROM mytable WHERE source="A" ORDER BY distance ASC LIMIT 1. Wrapping something like this in a while loop, and replacing "A" with an id variable, would do the trick, no?
For example (A is source, B is final destination):
DECLARE var_id as INT
WHILE var_id != 'B'
BEGIN
SELECT id INTO var_id FROM mytable WHERE source="A" ORDER BY distance ASC LIMIT 1
SELECT var_id
END
Wouldn't something like this work? (The code is sloppy, but the idea seems sound.) Comments are more than welcome.
Join the table to itself with destination joined to source. Add the distances from the two links. Insert that as a new link with the left side's source, the right side's destination, and the total distance, if that combination isn't already in the table. If it is already in the table but with a longer total distance, update the existing row with the shorter distance (see the sketch after this paragraph).
Repeat this until you get no new links added to the table and no updates with a shorter distance. Your table then contains a link for every reachable combination of source and destination with the minimum distance between them. It would be interesting to see how many repetitions this would take.
This will not track the intermediate path between source and destination; it only provides the shortest distance.
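A rough T-SQL-style sketch of one pass of that idea, assuming the link table is named links(source, dest, distance) as described in the question; you would re-run the pair of statements until neither affects any rows:
-- Add missing combined links (shortest new total per source/dest pair).
INSERT INTO links (source, dest, distance)
SELECT a.source, b.dest, MIN(a.distance + b.distance)
FROM links AS a
JOIN links AS b ON a.dest = b.source
WHERE a.source <> b.dest
  AND NOT EXISTS (SELECT 1 FROM links AS c
                  WHERE c.source = a.source AND c.dest = b.dest)
GROUP BY a.source, b.dest;
-- Shorten existing links when a two-step route beats the stored distance.
UPDATE c
SET distance = x.best
FROM links AS c
JOIN (SELECT a.source, b.dest, MIN(a.distance + b.distance) AS best
      FROM links AS a
      JOIN links AS b ON a.dest = b.source
      GROUP BY a.source, b.dest) AS x
  ON x.source = c.source AND x.dest = c.dest
WHERE x.best < c.distance;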
IIUC this should do it, but I'm not sure whether it's really viable (performance-wise) due to the large number of rows involved and the self-join:
SELECT
t1.src AS A,
t1.dest AS x,
t2.dest AS B,
t1.distance + t2.distance AS total_distance
FROM
big_table AS t1
INNER JOIN
big_table AS t2 ON t1.dest = t2.src
WHERE
t1.src = 'insert source (A) here' AND
t2.dest = 'insert destination (B) here'
ORDER BY
total_distance ASC
LIMIT
1
The above snippet will work for the case in which you have two rows in the form A->x and x->B, but not for other combinations (e.g. A->x and B->x). Extending it to cover all four combinations should be trivial, e.g. create a view that duplicates each row and swaps src and dest, as sketched below.
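A minimal sketch of that view, assuming the table and columns are named big_table(src, dest, distance) as in the snippet above:
-- Expose every link in both directions so the self-join above also covers
-- combinations such as A->x plus B->x.
CREATE VIEW big_table_both_ways AS
SELECT src, dest, distance FROM big_table
UNION ALL
SELECT dest AS src, src AS dest, distance FROM big_table;
Querying big_table_both_ways in place of big_table in the self-join then handles all four combinations.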