How to create table from result of client side function \crosstabview - sql

In psql I can get the pivot of a table using the client-side command \crosstabview, but I can't figure out how to store the result in a table. I tried CREATE TABLE foo AS ... \crosstabview, but it stores just the input to the command and not the output.

\crosstabview is a psql slash command which takes the results of the actual query (everything up until \crosstabview) and reformats it. As such, the output you see is generated within the psql client for display only, and cannot be used in other SQL operations.
To create a table based on the results of a crosstab query, you'll need the tablefunc extension.
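For illustration, a minimal sketch using tablefunc might look like this (the source query, table names, and output column list are hypothetical and must match your data):

-- Hypothetical example: pivot source_table and persist the pivoted result.
CREATE EXTENSION IF NOT EXISTS tablefunc;

CREATE TABLE foo AS
SELECT *
FROM crosstab(
    'SELECT row_name, category, value FROM source_table ORDER BY 1, 2'
) AS ct(row_name text, cat_a int, cat_b int, cat_c int);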

Related

Get query used to create a table

We use Snowflake at work to store data, and for one of the tables, I don't have the SQL query used to create the table. Is there a way to see the query used to make that table?
I tried using the following
get_ddl('table', 'db.table', true)
but the output it gives me doesn't include any information about the SQL query that was used. How do I get that in Snowflake?
If get_ddl() is not enough, you may use INFORMATION_SCHEMA.
To get more information you have two options:
Use the QUERY_HISTORY() table functions: https://docs.snowflake.com/en/sql-reference/functions/query_history.html
Use the QUERY_HISTORY view: https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html
If you use the functions/view above and filter the records by QUERY_TEXT, you may get more information about the exact SQL that was used to create your table.
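For example, a hedged sketch of searching the ACCOUNT_USAGE view for the statement that created a hypothetical table MY_TABLE:

-- Illustrative only: the QUERY_TEXT filter is a crude pattern match,
-- and ACCOUNT_USAGE views have some ingestion latency.
SELECT query_text, start_time, user_name
FROM snowflake.account_usage.query_history
WHERE query_text ILIKE '%CREATE%TABLE%MY_TABLE%'
ORDER BY start_time DESC;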

Adding today's date in the table name when using CREATE TABLE in standard SQL GBQ

I am quite new to GBQ and any help is appreciated.
I have a query below:
#Standard SQL
create or replace table `xxx.xxx.applications`
as select * from `yyy.yyy.applications`
What I need to do is add today's date at the end of the table name so it is something like xxx.xxx.applications_<todays date>,
basically keep the applications name but append the date to the end of it.
I am writing a procedure to create a table every time it runs, but I need to add the date for audit purposes every time I create the table (as a backup).
I searched everywhere and can't find the exact answer. Is this possible in the Query Editor, since I need to store this as a proc?
Thanks in advance
BigQuery doesn't support dynamic SQL at the moment, which means that this kind of construction is not possible.
Currently BigQuery supports Parameterized Queries, but it's not possible to use parameters to dynamically change the source table's name, as you can see in the provided link.
BigQuery supports query parameters to help prevent SQL injection when queries are constructed using user input. This feature is only available with standard SQL syntax. Query parameters can be used as substitutes for arbitrary expressions. Parameters cannot be used as substitutes for identifiers, column names, table names, or other parts of the query.
If you need to build a query based on some variable's value, I suggest that you use some script in SHELL, Python or any other programming language to create the SQL statement and then execute it using the bq command.
Another approach could be using the BigQuery client library in some of the supported languages instead of the bq command.
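For illustration, the statement that such a script would generate and submit might look like this (the date suffix and project/dataset names are hypothetical):

-- The script substitutes the current date into the table name before submitting the query.
CREATE OR REPLACE TABLE `xxx.xxx.applications_20240101`
AS SELECT * FROM `yyy.yyy.applications`;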

Get a Big Query script to output a table

I am using the new scripting feature of Big Query to declare a variable and then am using that variable in a standard SQL query.
The structure of the query is :
DECLARE {name of variable} {data type};
SET {name of variable} = {value};
(A SQL QUERY THEN FOLLOWS USING THE ABOVE VARIABLE)
I understand that this is now a script and no longer a typical query, and thus when I run it, it runs as a sequence of executable tasks. But is there any way in the script to explicitly state that I only want to output the resulting table of the SQL query, as opposed to both the result of declaring the variable and the SQL query?
[Screenshot: what BQ outputs]
Depending on how you "capture" the output: if you are sending a query from Python/Java/CLI, then the last SELECT statement in the script is the only output that you receive with the API.
Please also note that each "output" that you see comes with a cost/bytes billed, which is another reason for them to be visible at all times.
Update:
If you need to capture the output of SELECT statement to a table, depending on your intention, you may use:
CREATE OR REPLACE TABLE <your_destination_table> AS SELECT ...
or
INSERT INTO <your_destination_table> SELECT ...
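Putting it together, a minimal sketch of a script whose only persisted output is the table produced by the final SELECT (project, dataset, and column names are hypothetical):

-- The DECLARE/SET statements produce no result set of interest;
-- the SELECT result is captured in a table via CREATE OR REPLACE TABLE.
DECLARE cutoff_date DATE;
SET cutoff_date = DATE '2020-01-01';

CREATE OR REPLACE TABLE `my_project.my_dataset.filtered_orders` AS
SELECT *
FROM `my_project.my_dataset.orders`
WHERE order_date >= cutoff_date;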

Call a SQL function in Nifi using ExecuteSQL or another processor

I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy files to insert flowfiles that are created later on.
Is there a way of calling this function via a nifi-processor?
By using ExecuteSQL I always get errors like "unable to execute SQL select query" or "the column "ab" was not found" when using select ab.functionname() (ab is the login name of the DB).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no possible way of calling this function, is there another way to create #flowfiles dummy files to reserve this place for them in the DB so that no one else can insert or use these ids (not autoincrement, because that is not possible) while the flowfiles are getting processed?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
It should look like: INSERT INTO table (id, nr) VALUES (max(id)+1, anynumber) for every flowfile; unfortunately ExecuteSQL is not able to do this.
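For illustration only, the intended reservation insert would look roughly like this as a single SQL Server statement (table and column names are hypothetical; the locking hints are one common way to make the max(id)+1 pattern safe under concurrency):

-- Hypothetical sketch: reserve the next id atomically in one statement.
INSERT INTO dbo.my_table (id, nr)
SELECT COALESCE(MAX(id), 0) + 1, 42
FROM dbo.my_table WITH (UPDLOCK, HOLDLOCK);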
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, for ExecuteProcess just call the sqlplus command. Choose the command type "sqlplus". In the command arguments set something like: user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can write your own processor :) Of course it's more difficult and often unnecessary.

Check whether a field exists in SQLite without fetching all fields

I am writing a database abstraction layer that also abstracts some of the different query types. One of them is called "field_exists" - its purpose should be pretty self-explanatory.
And I want to implement that for SQLite.
The problem I am having is that I need to use one query that either returns a row confirming that the field exists or none if it doesn't. Thus, I cannot use the PRAGMA approach.
So, what query can I use to check whether a field exists in SQLite, that fulfills the above criteria?
EDIT: I should add that the query needs to be able to run in PHP code (using PDO).
Also, the query should look something like this (which only works with MySQL):
SHOW COLUMNS FROM table LIKE 'field'
Trying to select a field that doesn't exist will raise an exception; you can then catch it and return nothing.
Use the .schema TABLENAME command. It will tell you the command that was issued to create the table. For more info check out the SQLite command shell documentation.
If you don't have access to the sqlite command line, you can always query the sqlite_master table. Let's say you want to know the command used to create the table MyTable. You'd issue this:
select sql from sqlite_master where name='MyTable';
This then gives you the sql command that was used to create the table. Then just grep through that output and see if the column you're looking for is in the command used to create the table.
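As a rough single-query sketch of that idea (a crude string match against the stored CREATE statement; 'MyTable' and 'field' are placeholders):

-- Returns a row only if the column name appears in the table's CREATE statement.
SELECT name
FROM sqlite_master
WHERE name = 'MyTable'
  AND sql LIKE '%field%';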
UPDATE 2:
Actually, better than the SQL I posted above, you can use this:
PRAGMA table_info(table_name)
This will show you all the columns in a given table along with their types and other info.
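If the SQLite build is recent enough (3.16+), the same pragma is also exposed as a table-valued function, so it can be run as an ordinary row-returning query (for example through PDO). A minimal sketch with placeholder names:

-- Returns one row if the column 'field' exists in 'MyTable', and no rows otherwise.
SELECT name
FROM pragma_table_info('MyTable')
WHERE name = 'field';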