How can you filter Snowflake EXPLAIN USING TABULAR output when it's embedded in the TABLE function? Can you filter it with anything? - sql

I have a table named Posts I would like to count and profile in Snowflake using the current Snowsight UI.
When I return the results via EXPLAIN USING TABULAR, I am able to return the set with the combination of the TABLE, RESULT_SCAN, and LAST_QUERY_ID functions, but any predicate, filter, or column reference seems to fail.
Is there a valid way to do this in Snowflake with the TABLE function, or is there another way to query the output of EXPLAIN USING TABULAR?
-- Works
EXPLAIN using TABULAR SELECT COUNT(*) from Posts;
-- Works
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t;
-- Does not work
SELECT t.* FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) as t where operation = 'GlobalStats';
-- invalid identifier 'OPERATION'; the column does not seem to be recognized.
I tried the third example and expected the predicate to apply to the function output. I don't understand why the filter works on some TABLE() results and not others.

You need to double quote the column name:
where "operation" =
From the documentation:
Note that because the output column names from the DESC USER command were generated in lowercase, the commands use delimited identifier notation (double quotes) around the column names in the query to ensure that the column names in the query match the column names in the output that was scanned.
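Applied to the example above, a sketch of the working filtered query (using the lowercase, double-quoted column name from the error message) looks like this:
EXPLAIN USING TABULAR SELECT COUNT(*) FROM Posts;
-- Filter the scanned result; the lowercase column name must be double quoted
SELECT t.*
FROM TABLE(RESULT_SCAN(LAST_QUERY_ID())) AS t
WHERE t."operation" = 'GlobalStats';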

Related

Snowflake tables with TO, FROM as column names

It looks like we've loaded some Snowflake tables via ELT with "TO" and "FROM" as column names, and they are both reserved keywords in practically any SQL tool.
Whenever I run a query for specifically those columns, there's always an error. How do I fix it apart from changing the column names? I don't want to change the column names, because the ELT process always happens from MongoDB via log-based replication (Stitch Data).
select * works perfectly, and all other columns work too. Just "to" and "from" are the issue. Should those never be used as column names?
select to, from from table limit 10 ; // tested [to, "to", 'to'] - none work
Error: SQL compilation error: error line 1 at position 7 invalid identifier '"to"'
Any ideas how to fix this apart from changing the columns at the source or in Snowflake?
Snowflake uses the standard double quotes to escape identifiers. However, when identifiers are escaped, the case of the letters matters, so these are not the same:
select "to"
select "To"
select "TO"
You need to choose the one that is correct for your column names.
In addition, spaces matter, so these are not the same:
select "to "
select " to"
select "to"
That is, what looks like to might be something else. You need to know what that is to escape the name properly.
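One way to see the exact stored names (a sketch, assuming the table was created with the unquoted name t, which Snowflake stores in uppercase) is to query the information schema:
-- Shows the exact case and any leading/trailing spaces in the stored column names
select '"' || column_name || '"' as quoted_name
from information_schema.columns
where table_name = 'T';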
If you can't figure them out, there is a trick to create a view to give the table reasonable names. Something like this:
create view v_t (to_date, from_date, . . .) as
select *
from t;
You need to be sure to include all the column names in the table in the column name list, in the same order as they are in the table. Then you can use the view with reasonable names.

Inserting new rows into table-1 based on constraints defined on table-2 and table-3

I want to append new rows to table-1 (d:\dl) based on the equality constraint lower(rdl.subdir) = lower(tr.n1), where rdl and tr would be prospective aliases for the f:\rdl and f:\tr tables respectively.
I get a "function name is missing )." message when running the following command in VFP9:
INSERT INTO d:\dl SELECT * FROM f:\rdl WHERE (select LOWER(subdir)FROM f:\rdl in (select LOWER(n1) FROM f:\tr))
I am using the in syntax, instead of the alias-based equality statement lower(rdl.subdir) = lower(tr.n1), because I do not know where to define aliases within this command.
In general, the best way to get something like this working is to first make the query work and give you the results you want, and then use it in INSERT.
In general, in SQL commands you assign aliases by putting them after the table name, with or without the keyword AS. In this case, you don't need aliases because the ones you want are the same as the table names and that's the default.
If what you're showing is your exact code and you're running it in VFP, the first problem is that you're missing the continuation character between lines.
You're definitely doing too much work, too. Try this:
INSERT INTO d:\dl ;
SELECT * ;
FROM f:\rdl ;
JOIN f:\tr ;
ON LOWER(rdl.subdir) = LOWER(tr.n1)
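Following the advice about making the query work first, you can sanity-check the SELECT on its own (same join, no INSERT) before running the full command:
SELECT * ;
FROM f:\rdl ;
JOIN f:\tr ;
ON LOWER(rdl.subdir) = LOWER(tr.n1)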

SQL - just view the description for explanation

I would like to ask if it is possible to do this:
For example, the search string is '009' (consider the digits as a string).
Is it possible to have a query that will return any occurrences of this in the database, not considering the order?
For this example it would return:
'009'
'090'
'900'
given these exist in the database. Thanks!
Use the LIKE operator.
For example:
SELECT Marks FROM Report WHERE Marks LIKE '%009%' OR Marks LIKE '%090%' OR Marks LIKE '%900%'
Split the string into individual characters, select all rows containing the first character and put them in a temporary table, then select all rows from the temporary table that contain the second character and put these in a temporary table, then select all rows from that temporary table that contain the third character.
Of course, there are probably many ways to optimize this, but I see no reason why it would not be possible to make a query like that work.
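Collapsed into a single query, that chain of filters amounts to something like the sketch below (table and column names reused from the LIKE answer above; note that, like the description, it also matches longer values that merely contain those characters):
-- Keep only rows containing each character of the search string '009'
SELECT Marks
FROM Report
WHERE Marks LIKE '%0%'   -- first character
  AND Marks LIKE '%0%'   -- second character (a duplicate, so the same test)
  AND Marks LIKE '%9%';  -- third character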
It cannot be achieved in a straightforward way, because there is no function that sorts the characters of a value the way lower() and upper() normalize case.
But there are workarounds, for example:
Suppose you are running the query against COL A. Maintain another column SORTED_A where, at the application level, you keep the sorted value of COL A.
Then when you execute the query, sort the search token and run the select with the sorted search token matched against the SORTED_A column.
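A sketch of that workaround (table and column names are hypothetical; the application keeps SORTED_A in sync whenever COL A changes):
-- Search token '090', sorted by character, becomes '009'
SELECT A
FROM my_table
WHERE SORTED_A = '009';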

Is it possible in SQL to return a first row that has attribute names and then values

I am wondering if there is a way to write the SQL so that it would return a result as usual, but with the first row also containing the attribute names.
To explain what I mean:
say you have a table "test" which has 2 attributes "id" and "name":
id name
1 nik
2 tst
query:
SELECT * FROM test;
produces:
1 nik
2 tst
but what I want it to return is this:
id name
1 nik
2 tst
Is this possible?
Edit: I am using PostgreSQL
You cannot return the names and the actual column values in a single result unless you give up on the real datatypes (which is probably not what you want).
Your example mixes character data and numeric data in the id column and Postgres will (rightfully) refuse to return such a result set.
Edit:
I tested the "union" solution given e.g. by JNK and it fails (as expected) on Postgres, Oracle, and SQL Server, precisely because of the non-matching datatypes. MySQL follows its usual habit of not throwing errors and simply converts everything to characters.
Extremely generic answer since you don't provide an RDBMS:
SELECT id, name FROM (
    SELECT 'id' as 'id', 'name' as 'name', 1 as 'Rank'
    UNION ALL
    SELECT *, 2 as 'Rank' FROM test
) as X
ORDER BY [RANK]
EDIT
Thanks to Martin for pointing out the need for the ORDER BY
Assuming you are on SQL Server, you can get the column names of a specific table by using this query:
select column_name 'Column Name', data_type 'Data Type'
from information_schema.columns
where table_name = 'putYourTableNameHere'
Then, you'll have to UNION your things together.
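A sketch of that UNION for the example table test (the header literals here are hard-coded, but they are exactly the values you would read from information_schema.columns; id is cast to varchar so both branches share a datatype, and a rank column keeps the header row first):
SELECT id, name FROM (
    SELECT 'id' AS id, 'name' AS name, 0 AS rk
    UNION ALL
    SELECT CAST(id AS varchar(20)), name, 1 FROM test
) AS x
ORDER BY rk;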
I agree with OMG Ponies above; the way to get this metadata is usually from the interface you use.
For example, the Perl DBI module has a method fetchrow_hashref, where the columns of the returned row are returned as an associative array (hash) whose keys are the column names.
print $ref->{'name'}; # would print nik or tst
Update:
I had forgotten to add that some of these interface layers have a method that returns the column names, so you could use that instead of adding the names into your result set.
The DBI attribute you'd use is $sth->{NAME}; for example, $sth->{NAME}->[0] returns the first column name.
It depends on your tools / technique, but here are a couple of options:
If you're using SSMS (SQL Server) and want to copy/paste your results with the column headers:
Query Window --> Right-click --> Results --> Grid or Text --> check the 'Include column headers in the result set' option
If you're using Sql-Server, you can query meta-tables (sys.columns, etc.)
If you're using an ASP.NET databound control, you usually have access to methods or properties (sqldatareader.getname(i), etc.)
Anyway -- just depends on the layer you're trying to get the names from -- if these above don't help, then edit / re-tag your question so we can focus on whatever tool you're wanting to use to do this.
EDIT for PostgreSQL
If you're using PostgreSQL, you can query meta-tables (information_schema.columns, etc.)
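For instance, a minimal sketch against the example table test (information_schema.columns is standard and works the same way in PostgreSQL):
-- List the column names (and types) of the example table
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'test';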

Postgresql function returns composite - how do I access composite values as separate columns?

I have a PostgreSQL function which returns a composite type defined as (location TEXT, id INT). When I run "SELECT myfunc()", my output is a single column of type text, formatted as:
("locationdata", myid)
This is pretty awful. Is there a way to select my composite so that I get 2 columns back - a TEXT column, and an INT column?
Use:
SELECT *
FROM myfunc()
You can read more about the functionality in this article.
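If you'd prefer to keep the call in the select list, another option (standard PostgreSQL syntax; be aware that older versions may evaluate the function once per referenced column with this form) is to expand the composite with (...).*:
-- Expand the composite returned by myfunc() into separate columns
SELECT (myfunc()).*;
-- Or reference the fields explicitly
SELECT (myfunc()).location, (myfunc()).id;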
Answer has already been accepted, but I thought I'd throw this in:
It may help to think about the type of the data and where those types fit into an overall query. SQL queries can return essentially three types:
A single scalar value
A list of values
A table of values
(Of course, a list is just a one-column table, and a scalar is just a one-value list.)
When you look at the types, you see that an SQL SELECT query has the following template:
SELECT scalar(s)
FROM table
WHERE boolean-scalar
If your function or subquery is returning a table, it belongs in the FROM clause. If it returns a list, it could go in the FROM clause or it could be used with the IN operator as part of the WHERE clause. If it returns a scalar, it can go in the SELECT clause, the FROM clause, or in a boolean predicate in the WHERE clause.
That's an incomplete view of SELECT queries, but I've found it helps to figure out where my subqueries should go.
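To make that concrete, here are hypothetical examples of each placement (the orders and customers tables are invented purely for illustration; only myfunc() comes from the question):
-- Table-valued: a set-returning function or subquery belongs in FROM
SELECT * FROM myfunc();
-- List-valued: a one-column subquery used with IN in the WHERE clause
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE active);
-- Scalar-valued: a single-value subquery used in the SELECT list
SELECT o.id,
       (SELECT max(order_date) FROM orders) AS latest_order_date
FROM orders o;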