How can I explode SQL array into separate columns?

I have a table with a column called "table_name" of type string and a column called "keys" of type array. I need to write an SQL procedure/queries/something that generates a table named after each "table_name" value, with each element of the "keys" array becoming a column name in that table. Any thoughts on how to begin this?
For example,
table_name | keys
-----------+--------------
table_1    | ['A','B','C']
table_2    | ['D','E']
would generate table_1 with columns:
A | B | C
and table_2 with columns:
D | E
Thanks!

You'll need two steps:
1. Dynamically generate the CREATE TABLE statement from this data.
2. Execute that statement.
Step 1 will look something like:
SELECT 'CREATE TABLE ' || table_name || ' (' || array_to_string(keys, ' INT, ') || ' INT);' AS create_table_statement
FROM yourtable;
Each output row will contain a single column holding your CREATE TABLE statement. Note that I assumed a data type (INT) since types aren't present in your table or array.
Step 2 is up to you. Copy/paste into a client and execute, or do it in a proc or an outside scripting language.
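For the example rows above, step 1 would emit statements such as CREATE TABLE table_1 (A INT, B INT, C INT);. If you happen to be on PostgreSQL (which array_to_string suggests), a minimal sketch combining both steps in one anonymous block could look like the following; yourtable and the INT column type are assumptions carried over from the query above:

DO $$
DECLARE
    stmt text;
BEGIN
    -- Step 1: build one CREATE TABLE statement per row
    FOR stmt IN
        SELECT 'CREATE TABLE ' || table_name || ' ('
               || array_to_string(keys, ' INT, ') || ' INT);'
        FROM yourtable
    LOOP
        -- Step 2: execute the generated statement
        EXECUTE stmt;
    END LOOP;
END $$;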

Related

Adding new columns to a table in Azure Data Factory

I have a CSV file in blob storage with the following format:
Column, DataType
Acc_ID, int
firstname, nvarchar(500)
lastname, nvarchar(500)
I am trying to read this file in Data Factory, loop through the column names, and check whether these columns already exist in the destination table; if not, I want to create the missing columns in the SQL table.
I know that we can use the following SQL query to create columns that do not exist.
IF NOT EXISTS (
    SELECT *
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'contact_info' AND COLUMN_NAME = 'acc_id'
)
BEGIN
    ALTER TABLE contact_info
    ADD acc_id int NULL
END;
But I am not sure if we can read the CSV file and pass the column names from the CSV file to the above SQL query in a data factory pipeline. Any suggestions for this please?
You can create a column if it does not exist by using the Pre-copy script in the Copy data activity.
• Table columns before executing the pipeline.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'contact_info'
• Source file:
ADF pipeline:
Using the Lookup activity, get the list of columns and datatypes by connecting the source dataset to the source file.
Output of lookup activity:
Connect the lookup output to the ForEach activity to loop all the values from the lookup.
@activity('Lookup1').output.value
Add a Copy data activity inside the ForEach activity and connect its source to the SQL table. Select Query instead of Table in the Use query property, and write a query that returns no rows, since this copy activity is used only to add a column to the table if it does not exist.
select * from dbo.contact_info where 1= 2
In the Copy data activity sink, connect the sink dataset to the SQL table, and in the Pre-copy script write your query to add a new column. Here, use the current ForEach item's values (Column, DataType) instead of hardcoding them, as below.
@{concat('IF NOT EXISTS ( SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ','''','contact_info','''',' AND COLUMN_NAME = ','''',item().Column,'''',') ALTER TABLE contact_info ADD ',item().Column,' ', item().DataType,' NULL')}
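For the first row of the source file (Acc_ID, int), that expression expands to the following T-SQL, which is what the Pre-copy script actually runs:

IF NOT EXISTS ( SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'contact_info' AND COLUMN_NAME = 'Acc_ID') ALTER TABLE contact_info ADD Acc_ID int NULL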
When the pipeline is executed, the ForEach loop iterates over all the values from the lookup output and creates each new column in the table if it does not already exist.
Columns in the table after the pipeline is executed:

DROP TABLE by CONCAT table name with VALUE from another SELECT [SQLite]

I was wondering how I can drop a table whose name is built by concatenating a prefix with a value selected from another table.
This is what I am trying to figure out:
DROP TABLE SELECT 'table' || (select value from IncrementTable)
So basically table name is table6 for example.
Goal is: eg.. DROP TABLE table6
You can't do this directly. Table and column names have to be known when the statement is byte-compiled; they can't be generated at runtime. You have to figure out the table name and generate the appropriate statement string in the program using the database, and execute it.
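In other words: run one query to build the statement text, then have your program execute that text as a second statement. A minimal sketch, assuming IncrementTable currently holds the value 6:

SELECT 'DROP TABLE table' || value AS cmd FROM IncrementTable;
-- returns the text: DROP TABLE table6
DROP TABLE table6;  -- the statement your program then executes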

Hive - Create Table statement with 'select query' and 'fields terminated by' commands

I want to create a table in Hive using a select statement which takes a subset of data from another table. I used the following query to do so:
create table sample_db.out_table as
select * from sample_db.in_table where country = 'Canada';
When I looked into the HDFS location of this table, there were no field separators.
But I need to create a table with filtered data from another table along with a field separator. For example, I am trying to do something like:
create table sample_db.out_table as
select * from sample_db.in_table where country = 'Canada'
ROW FORMAT SERDE
FIELDS TERMINATED BY '|';
This is not working though. I know the alternate way is to create a table structure with field names and the "FIELDS TERMINATED BY '|'" command and then load the data.
But is there any other way to combine the two into a single query that enables me to create a table with filtered data from another table and also with a field separator?
Put ROW FORMAT DELIMITED ... in front of the AS SELECT.
Do it like this (change the query to match yours):
hive> CREATE TABLE ttt row format delimited fields terminated by '|' AS select *,count(1) from t1 group by id ,name ;
Query ID = root_20180702153737_37802c0e-525a-4b00-b8ec-9fac4a6d895b
here is the result
[root@hadoop1 ~]# hadoop fs -cat /user/hive/warehouse/ttt/**
2|\N|1
3|\N|1
4|\N|1
As you can see in the documentation, when using the CTAS (Create Table As Select) statement, the ROW FORMAT clause (in fact, all the settings related to the new table) goes before the SELECT statement.
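Applied to the query from the question, that ordering looks like this (a sketch reusing the original tables and filter):

create table sample_db.out_table
row format delimited fields terminated by '|'
as
select * from sample_db.in_table where country = 'Canada';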

How can I flip rows from one table into columns of a second table?

How can I take a row value from table FormField and create a column in table Registrant? Somehow it also needs to be smart enough to know if the column already exists.
Essentially I am flipping rows from the 1st table into columns on the 2nd table.
What I have right now was entered manually. However, I need to code this so it happens automatically. Would this be a trigger, or how would I properly do this?
FormField table contains column ColumnName, one example of a row being FirstName.
Registrant table contains columns that should correspond, such as column FirstName
It needs to be "fool-proof": if someone else enters FirstName into ColumnName, it shouldn't try to add another FirstName column. This also means it should reformat the string to work as a proper column name (proper case, no spaces, etc.).
This will generate ALTER TABLE commands for missing fields. "Fool-proofing" should be done beforehand:
SELECT 'ALTER TABLE [Registrant] ADD [' + [ColumnName] + '] NVARCHAR(MAX);'
FROM [FormField] f
LEFT JOIN syscolumns c ON OBJECT_NAME(c.id) = 'Registrant'
AND c.Name = f.[ColumnName]
WHERE c.id IS NULL
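If you want the generated commands executed automatically rather than copied into a client, a minimal sketch (assuming SQL Server, which syscolumns suggests) concatenates them into one batch and runs it with sp_executesql:

DECLARE @sql NVARCHAR(MAX) = N'';

-- Collect every missing column's ALTER TABLE command into one batch
SELECT @sql = @sql + 'ALTER TABLE [Registrant] ADD [' + f.[ColumnName] + '] NVARCHAR(MAX);'
FROM [FormField] f
LEFT JOIN syscolumns c ON OBJECT_NAME(c.id) = 'Registrant'
    AND c.Name = f.[ColumnName]
WHERE c.id IS NULL;

-- Run the batch (a no-op if @sql is empty)
EXEC sp_executesql @sql;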

How to get table from value in another table

I have a table with the names of other tables. How can I create a query that selects from the tables named by those values?
For example, if my table has the values:
tables
------
users
users2
users3
I want to create a dynamic SQL function that takes the values (i.e. the table names) from the reference table and then runs select * from each named table.
If you just want a one-time script, you can run the query:
select 'select * from ' || table_name || ';' cmd
from user_tables;
And then run the output.
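For the three example table names above, that query produces one runnable statement per row:

select * from users;
select * from users2;
select * from users3;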