Presto: create table from another table with comma-separated values inside double quotes - Hive

How do I create a table in Presto from a SELECT query when one of the columns itself contains comma-separated values?
For example, the SELECT statement returns values that look like:

Column1  column2  column3
1        2        abc,xyz
Below is the CREATE statement for the table:
CREATE TABLE resultTable
WITH (format = 'TEXTFILE', textfile_field_separator = ',')
AS
SELECT Column1, column2, column3 FROM sourceTable
but this ends up misplacing fields whose values themselves contain commas, because the embedded commas are treated as field separators.
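One workaround (a sketch, assuming your Presto/Trino version's Hive connector supports the CSV storage format with the `csv_separator` and `csv_quote` table properties) is to write the table as quoted CSV, so embedded commas are protected by the quote character instead of being read as separators:

```sql
-- Sketch: assumes the Hive connector's CSV format properties are available.
-- Note that the Hive connector's CSV format typically requires all columns
-- to be VARCHAR, so non-string columns may need an explicit CAST.
CREATE TABLE resultTable
WITH (
    format = 'CSV',
    csv_separator = ',',
    csv_quote = '"'
)
AS
SELECT CAST(Column1 AS varchar) AS Column1,
       CAST(column2 AS varchar) AS column2,
       column3
FROM sourceTable;
```

With this, a value such as abc,xyz is stored as "abc,xyz" and read back as a single field.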

Related

Get column names as a comma-separated string / list

I am doing some inserts into my tables where all fields but a few need to be inserted or updated, like here.
Is there a quick way to get the columns of a table returned as a string, so I can copy them into my SQL query?
For example, I have a table:
create table myTable1
(
column1 integer not null,
column2 integer not null ,
column3 integer not null,
column4 integer not null,
column5 integer not null,
...a lot more fields...
)
Later I want to insert something from another table, myTable2, that has the same fields (don't question my ways or why I have two identical tables):
INSERT INTO myTable SELECT column1, column2, column3, column4, ...
FROM table_source
But the tables have so many fields that it is cumbersome to write them down manually, and it would be faster to have a string from which I can just delete the column names I don't need. Is there a query that outputs "column1, column2, column3, column4, ..." so I don't have to write that myself and can copy it into my query?
Found the answer quickly.
SELECT table_catalog, string_agg(column_name, ', ')
FROM information_schema.columns
WHERE table_schema = 'mySchema'
  AND table_name = 'myTable'
GROUP BY 1;
This query does the trick for me.
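If you are on SQL Server rather than Postgres, the same trick works with the built-in STRING_AGG (a sketch, assuming SQL Server 2017 or later, where STRING_AGG is available):

```sql
-- Sketch: SQL Server 2017+ variant of the Postgres string_agg query.
SELECT STRING_AGG(COLUMN_NAME, ', ')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'mySchema'
  AND TABLE_NAME = 'myTable';
```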

Insert data into table with values inside quotes of another select statement

I am trying to insert data into multiple columns of a table with specific values. However, one varchar column needs its value built from another table's SELECT statement, placed inside quotes.
I need the value placed inside the string without the sub-select being interpreted as part of the string literal. Below is my query:
INSERT INTO Table1
(column1,
column2,
column3,
column4)
VALUES (484640,
4,
1,
'<HTML><Head></Head><Body>Upload results</Body></HTML>')
Currently your SELECT query is treated as a plain string because it is enclosed in single quotes: the query text itself, not its result, would be inserted into the column.
Here is the correct way:
INSERT INTO Table1
(column1,
column2,
column3,
column4)
SELECT DISTINCT 484640,
4,
1,
'<HTML><Head></Head><Body><a href="cgroup-histstatus.aspx?Staging_PKID='
+ cast(column_1 as varchar(50))
+ '">Upload results</a></Body></HTML>'
FROM Table_2
If you want the value of column_1 itself to be enclosed in single quotes, replace '+ column_1 +' with '''+ column_1 +''' (doubling a single quote escapes it inside a string literal).
This works even when the SELECT query returns more than one record.
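To make the quoting tip concrete, a single quote inside a SQL string literal is escaped by doubling it. A sketch of the variant that wraps the casted column_1 value in quotes:

```sql
-- Sketch: the doubled quotes ('') each produce one literal single quote
-- around the casted column_1 value in the generated HTML.
SELECT '<a href="cgroup-histstatus.aspx?Staging_PKID='''
       + cast(column_1 as varchar(50))
       + '''">Upload results</a>'
FROM Table_2
```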

How to Match a string against patterns in Oracle SQL

I have a slightly different scenario. One column in an Oracle table stores patterns; another column stores a unique id.
Now I have to match those patterns against a string, find out which patterns match it, and pick out the matched patterns along with their ids.
Can anybody guide me on how to do this efficiently?
Sample Data
Table 1
-------
Column1 Column2
1 AB%
2 A%
3 %c%
Now suppose a string like ABC arrives (take it as an item number: it gets inserted into the DB, and then a trigger fires that does the rest of the job, as in the sample below).
Table 2
---------
Column1 Column2
ABC AB%,A%
or, more efficiently (and desired), Table 2 would look like:
Table 2(desired)
---------
Column1 Column2
ABC 1,2
This is the desired result.
In Oracle 11g you could use the listagg function
and a simple BEFORE INSERT OR UPDATE trigger:
Sample data:
create table table1 (column1 number(3), column2 varchar2(10));
insert into table1 values (1, 'AB%');
insert into table1 values (2, 'A%');
insert into table1 values (3, '%c%');
create table table2 (column1 varchar2(10), column2 varchar2(500));
Trigger:
create or replace trigger tg_table2_ins
before insert or update of column1 on table2 for each row
declare
v_list table2.column2%type;
begin
select listagg(t1.column1, ', ') within group (order by t1.column1)
into v_list from table1 t1
where :new.column1 like t1.column2;
:new.column2 := v_list;
end tg_table2_ins;
Test:
insert into table2 (column1) values ('ABC');
insert into table2 (column1) values ('Oracle');
insert into table2 (column1) values ('XYZ');
insert into table2 (column1) values ('Ascii');
select * from table2;
COLUMN1 COLUMN2
---------- ----------
ABC 1, 2
Oracle 3
XYZ
Ascii 2, 3
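If you only need the lookup and not the trigger, the same matching can be expressed as a plain query (a sketch using the listagg aggregation against the table1 sample data created above):

```sql
-- Sketch: aggregate the ids of all patterns that match each input string.
SELECT t2.column1,
       listagg(t1.column1, ', ') WITHIN GROUP (ORDER BY t1.column1) AS matching_ids
FROM table2 t2
LEFT JOIN table1 t1
       ON t2.column1 LIKE t1.column2
GROUP BY t2.column1;
```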

Find rows whose comma-separated values match those of another row in the same table - SQL

I have a table which contains comma-separated values, something like:
id locs
1 a,s,d,f
2 s,d,f,a
3 d,s,a,f
4 d,f,g,a
5 a,s,e
6 f,d
I need the output 1,2,3,6 in SQL Server when I take the comma-separated string of id 1.
That is, I take the locs of id 1, split it on the commas, and then want all the ids whose locs contain only those separated values.
Note: I know I shouldn't keep comma-separated values in a table, but that's how it is.
Hope I was clear with my question.
declare @tb table (id int, locs varchar(50))
insert into @tb values(1, 'a,s,d,f'),
(2,'s,d,f,a'),
(3,'d,s,a,f'),
(4,'d,f,g,a'),
(5,'a,s,e'),
(6,'f,d')
declare @cta varchar(20)='s,d,f,a'
;with cte0(id,col2)
as
(
    -- split each locs string into one row per value via an XML trick
    select id, t.c.value('.','varchar(max)') as col2
    from (
        select id, x = cast('<t>' + replace(locs,',','</t><t>') + '</t>' as xml)
        from @tb
    ) a
    cross apply x.nodes('/t') t(c)
)
select distinct id from cte0
where @cta like '%'+col2+'%'
  and id not in (select distinct id from cte0 where @cta not like '%'+col2+'%')
If I understand correctly, you need to return the id of every row that has at least one of the comma-separated values from the locs column of the row you selected. Since this is poor database design, there can only be an ugly solution to this problem.
Start by creating a user-defined function that splits a comma-separated string into a table. There are many ways to do it; this is the first one Google found.
DECLARE @Values varchar(max)
SELECT @Values = Locs
FROM Table WHERE Id = @Id

SELECT Id
FROM Table INNER JOIN dbo.Split(@Values) SplitedString
ON (',' + Locs + ',' LIKE '%,' + SplitedString.s + ',%')
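On SQL Server 2016 or later, the user-defined split function can be replaced with the built-in STRING_SPLIT. A self-contained sketch of the subset logic (matching the desired output 1, 2, 3, 6):

```sql
-- Sketch, assuming SQL Server 2016+ (STRING_SPLIT is built in).
DECLARE @tb table (id int, locs varchar(50));
INSERT INTO @tb VALUES
    (1,'a,s,d,f'),(2,'s,d,f,a'),(3,'d,s,a,f'),
    (4,'d,f,g,a'),(5,'a,s,e'),(6,'f,d');

DECLARE @cta varchar(20) = 'a,s,d,f';  -- locs of id 1

-- An id qualifies when every one of its locs values appears in @cta.
SELECT t.id
FROM @tb t
CROSS APPLY STRING_SPLIT(t.locs, ',') s
GROUP BY t.id
HAVING COUNT(*) = SUM(CASE WHEN ',' + @cta + ',' LIKE '%,' + s.value + ',%'
                           THEN 1 ELSE 0 END)
ORDER BY t.id;
```

The delimiters added around @cta and each split value avoid false matches on substrings (e.g. 'a' matching inside 'ab').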

Teradata SQL - Syntax to spool varchar for CASE OF DATA

I have a VARCHAR(10) column where values can be stored as 'Abxxx' and 'abxxx'.
When grouping on this field (Column1), they are all returned as the uppercase form 'Abxxx', even when there are lowercase 'abxxx' values in the data.
What syntax can I use to return the distinct case-sensitive values in separate rows in the spool?
I have read-only access.
SELECT
Column1, COUNT(unique_id)
FROM
Table
GROUP BY
Column1
Desired result:
Abxxx     345
abxxx    5678
Using (CASESPECIFIC) will get the result you want:
Select Column1 (CASESPECIFIC), count(unique_id)
FROM Table
GROUP BY Column1 (CASESPECIFIC)
Be sure to make the table column case specific, see below.
-- Create a table with one column case-insensitive, another column case-sensitive
CREATE TABLE cities
(
name VARCHAR(80) NOT CASESPECIFIC,
name2 VARCHAR(80) CASESPECIFIC
);
-- Insert a row
INSERT INTO cities VALUES ('San Diego', 'San Diego');
-- Name column is case-insensitive
SELECT * FROM cities WHERE name = 'SAN DIEGO';
-- Output: San Diego
-- Name2 column is case-sensitive
SELECT * FROM cities WHERE name2 = 'SAN DIEGO';
-- No rows found