Check whether a field exists in SQLite without fetching all fields - sql

I am writing a database abstraction layer that also abstracts some of the different query types. One of them is called "field_exists" - its purpose should be pretty self-explanatory.
And I want to implement that for SQLite.
The problem I am having is that I need to use one query that either returns a row confirming that the field exists or none if it doesn't. Thus, I cannot use the PRAGMA approach.
So, what query can I use to check whether a field exists in SQLite, that fulfills the above criteria?
EDIT: I should add that the query needs to be able to run in PHP code (using PDO).
Also, the query should look something like this (which only works with MySQL):
SHOW COLUMNS FROM table LIKE 'field'

Trying to select a field that doesn't exist will raise an exception, which you can catch and then return nothing.
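For example, a minimal probe query along these lines (the table and field names are placeholders) either succeeds or raises the exception you would catch from PDO, without fetching any rows:

-- succeeds if the column exists, errors out otherwise; LIMIT 0 keeps the result set empty
SELECT my_field FROM my_table LIMIT 0;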

Use the .schema TABLENAME command. It will tell you the command that was issued to create the table. For more info, check out the SQLite command shell documentation.
If you don't have access to the sqlite command line, you can always query the sqlite_master table. Let's say you want to know the command used to create the table MyTable. You'd issue this:
select sql from sqlite_master where name='MyTable';
This gives you the SQL command that was used to create the table. Then just grep through that output and see if the column you're looking for appears in it.
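If you want to do that check inside a single query instead of in application code, a rough sketch looks like this (the table and column names are placeholders, and a plain LIKE match can give false positives when the column name happens to appear as a substring of something else in the CREATE statement):

select 1 from sqlite_master where name = 'MyTable' and sql like '%MyColumn%';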
UPDATE 2:
Actually, better than the SQL I posted above, you can use this:
PRAGMA table_info(table_name)
This will show you all the columns in a given table along with their types and other info.
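For example (the table name is a placeholder):

PRAGMA table_info(MyTable);

If I recall correctly, SQLite 3.16.0 and later also expose this pragma as a table-valued function, which gives exactly the one-row-or-nothing behaviour the question asks for (again with placeholder names):

select 1 from pragma_table_info('MyTable') where name = 'MyColumn';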

Related

Get query used to create a table

We use Snowflake at work to store data, and for one of the tables I don't have the SQL query used to create the table. Is there a way to see the query used to make that table?
I tried using the following
get_ddl('table', 'db.table', true)
but the output it gives doesn't contain any information about the SQL query that was used. How do I get that in Snowflake?
If get_ddl() is not enough, you may use INFORMATION_SCHEMA.
To get more information you have 2 options:
Use the QUERY_HISTORY() table functions: https://docs.snowflake.com/en/sql-reference/functions/query_history.html
Use the QUERY_HISTORY view: https://docs.snowflake.com/en/sql-reference/account-usage/query_history.html
If you use the functions/view above and filter all the records by QUERY_TEXT, you may get more information about the exact SQL that was used to create your table.
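A sketch of that kind of filter, assuming the INFORMATION_SCHEMA table function and a placeholder table name (note that the table function only covers a limited retention window; the Account Usage view goes back further):

select query_text, start_time
from table(information_schema.query_history())
where query_text ilike '%create%table%MyTable%'
order by start_time desc;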

ADO SQL query create column if it doesn't exist

I have a query for a report based on an MS Access database (as the program project file). The tables in this database get updated with new fields periodically as new features are added.
We need to be able to support old and new versions of the file for our report, so we need to know if there is a way to insert a field into the SQL SELECT query if it does not already exist. (Note: we do not want to create ALTER TABLE type statements, as the field only needs to be added into the result set, not into the table permanently.)
I know you can do something like "" AS [FieldName], but that only applies when you know the field doesn't exist and need to create a blank spot for it (such as when a unioned table does have that field). In this case, the table might have the field so I want to use it if it does, but if it doesn't I want to have it still exist in the query results with a default value.
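For illustration, the placeholder trick mentioned above looks something like this in Access SQL (the names are made up):

SELECT ExistingField, "" AS [MissingField] FROM SomeTable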
Any help would be appreciated. (I also know you can force the user to update the file, but that option was stated as "only last resort".)
Thanks,
Chris

Adding today's date to the table name when using the CREATE TABLE function in standard SQL (GBQ)

I am quite new to GBQ and any help is appreciated.
I have a query below:
#Standard SQL
create or replace table `xxx.xxx.applications`
as select * from `yyy.yyy.applications`
What I need to do is add today's date at the end of the table name, so it is something like xxx.xxx.applications_<today's date>,
basically create a table named applications but with the date appended at the end of the name.
I am writing a procedure to create a table every time it runs, but need to add the date every time I create the table, for audit purposes (as a backup).
I searched everywhere and can't find the exact answer. Is this possible in the Query Editor, as I need to store this as a proc?
Thanks in advance
BigQuery doesn't support dynamic SQL at the moment, which means that this kind of construction is not possible.
Currently BigQuery supports Parameterized Queries, but it's not possible to use parameters to dynamically change the source table's name, as you can see in the provided link.
BigQuery supports query parameters to help prevent SQL injection when queries are constructed using user input. This feature is only available with standard SQL syntax. Query parameters can be used as substitutes for arbitrary expressions. Parameters cannot be used as substitutes for identifiers, column names, table names, or other parts of the query.
If you need to build a query based on some variable's value, I suggest that you use a script in shell, Python or any other programming language to create the SQL statement and then execute it using the bq command.
Another approach could be using the BigQuery client library in one of the supported languages instead of the bq command.
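Either way, the idea is that the script or client code builds the final statement as a string with the date already inlined, for example (the date suffix below is just an illustration):

#Standard SQL
create or replace table `xxx.xxx.applications_20230101`
as select * from `yyy.yyy.applications`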

Call a SQL function in NiFi using ExecuteSQL or another processor

I am currently using a function in SQL Server to get the max value of a certain column. I need this value to generate a specific number of dummy files to insert flowfiles that are created later on.
Is there a way of calling this function via a NiFi processor?
When using ExecuteSQL I always get errors like "unable to execute SQL select query" or "the column 'ab' was not found" when using select ab.functionname() (ab is the login name of the DB).
In SQL Server I can just use select ab.functionname() and get the desired results.
If there is no possible way of calling this function, is there another way to create #flowfiles dummy files to reserve a place for them in the DB, so that no one else can insert or use these IDs (not autoincrement, because that is not possible) while the flowfiles are being processed?
I tried using $flowfile.count and the Counter processor, but this did not solve the problem.
It should look like INSERT INTO table (id,nr) values (max(id)+1,anynumber) for every flowfile; unfortunately ExecuteSQL is not able to do this.
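For reference, a runnable form of that per-flowfile insert would look roughly like this in SQL Server (the table name and the literal number are placeholders, and MAX(id)+1 carries the usual concurrency and empty-table caveats):

INSERT INTO my_table (id, nr)
SELECT MAX(id) + 1, 42 FROM my_table;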
I think this conversation can help you:
https://community.hortonworks.com/questions/26170/does-executesql-processor-allow-to-execute-stored.html
Gist:
You can use ExecuteScript or ExecuteProcess to call an appropriate script. For example, for ExecuteProcess just call the sqlplus command. Choose command type "sqlplus". In the command arguments, set something like: user_id/password@dbname @"script_path/someScript.sql". In someScript.sql you put something like:
execute spname(param)
You can write your own processor :) Of course that's more difficult and often unnecessary.

Why doesn't this specific syntax work for upserting?

I'm using SQL Server 2005 and I want to synchronize two tables which have the same definition but exist in different databases. MERGE INTO only exists in 2008 and I'd prefer a syntax where I don't have to specify columns in the UPDATE. So I stumbled upon various posts using the following syntax:
UPDATE Destination FROM (Source INTERSECT Destination)
INSERT INTO Destination FROM (Source EXCEPT Destination)
But when I try to execute it I get:
Incorrect syntax near the keyword 'FROM'.
How can I get this working? I have multiple tables which I need to synchronize and I don't want to specify all the columns in every statement.
Thanks for any hint!
According to Books Online, the update command requires the set keyword, and it must come before the optional from keyword. The insert command doesn't have a standalone from keyword; the from only exists as part of a select statement, either as a derived table source or within a common table expression.
The link you reference is not showing valid SQL Server 2005 syntax.
"How can I get this working? I have multiple tables which I need to synchronize and I don't want to specify all the columns in every statement."
For the update, you must specify all the columns. For the insert, if the source and destination have the same structure then you can use insert into TARGET_TABLE_NAME select * from SOURCE_TABLE_NAME, BUT that is not recommended for production code: if the source or destination changes, the statement would break. If source and destination differ, then you must specify columns on at least one side of the insert.
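As a sketch of the insert half only (reusing the Source/Destination names from the question and assuming identical column layouts, with the same fragility caveat as above), the EXCEPT form can be written as an ordinary INSERT ... SELECT:

INSERT INTO Destination
SELECT * FROM Source
EXCEPT
SELECT * FROM Destination;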
I'm sorry if this doesn't answer your question, but assuming the whole reason for this is in the interest of saving time, can't you just right-click the source table and generate the INSERT script, then right-click the destination table and generate a blank SELECT script, then combine the two? This will only work if a kill-and-fill is acceptable in your environment.