I have a table which contains around 1000 columns. When I use
Select *
from Table
it returns the entire record. But I want only a limited set of the columns.
col1 | col2 | col3 | col4 | col5 | ... | col1000
I just need the data from col5 through col1000.
You have to write all the columns that you need in the select:
select col5, col6, ......... ,col1000 from table
There is no shortcut way around it; select * means all the columns of your table.
If you really want to avoid typing each column name, one way is to use a dynamic query. For example, in SQL Server you can write the dynamic query like the following.
DECLARE @selstmt AS NVARCHAR(max);
SET @selstmt = 'select ' + Stuff((SELECT ', ' + Quotename(c.NAME)
        FROM sys.columns c
        JOIN sys.tables t ON c.object_id = t.object_id
        WHERE t.NAME = 'yourtablename'
          AND c.NAME NOT IN ('col1', 'col2', 'col3', 'col4')
        FOR xml path(''), type).value('.', 'NVARCHAR(MAX)'), 1, 2, '');
SET @selstmt = @selstmt + ' from yourtablename';
EXEC sp_executesql @selstmt;
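The same schema-driven approach can be sketched outside T-SQL. Below is a minimal Python/SQLite illustration (the table `t` and its columns are made up for the demo): it reads the column list from the catalog, drops the unwanted columns, and builds the SELECT string, just as the dynamic query above does with sys.columns.

```python
import sqlite3

# Demo table with a few columns; in the question this would have ~1000.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1, col2, col3, col4, col5, col6)")
conn.execute("INSERT INTO t VALUES (1, 2, 3, 4, 5, 6)")

# Read the column names from the schema and exclude the unwanted ones.
skip = {"col1", "col2", "col3", "col4"}
cols = [row[1] for row in conn.execute("PRAGMA table_info(t)")
        if row[1] not in skip]

# Build and run the generated SELECT, quoting each column name.
query = "SELECT {} FROM t".format(", ".join('"%s"' % c for c in cols))
print(query)                           # SELECT "col5", "col6" FROM t
print(conn.execute(query).fetchone())  # (5, 6)
```

The key point in both versions is that the column list comes from the catalog at run time, so you never type the names by hand.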
Specify the column names that you want to select instead of using the * operator.
SELECT * will return all columns of the table.
SELECT col5, col6, col7, ..., col1000 (col5 up to col1000) will return only the specified columns.
There actually is one easy way in SSMS.
SELECT * FROM TableA
Select this text and press Ctrl+Shift+Q.
Then you have all the columns and can easily remove a few.
You have to write all columns.
SELECT col1, col2, col3 from table;
But...
Tested on MySQL
Since you have so many columns, you can run a query that builds the list of desired columns from the table:
SELECT GROUP_CONCAT(COLUMN_NAME SEPARATOR ', ')
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'schemaName' AND TABLE_NAME = 'table'
AND COLUMN_NAME IN ('col1', 'col2', 'col3');
# or, to exclude columns:
# AND COLUMN_NAME NOT IN ('col1', 'col2', 'col3');
And build the query with the resulting columns.
GROUP_CONCAT outputs the values in a single row separated by commas, so you can use them to query the table directly:
SELECT
col1,col2,col3
FROM
table_name
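A minimal sketch of the same idea in Python with SQLite (the `songs` table and its columns are invented for the demo): `pragma_table_info` plays the role of information_schema.COLUMNS, and group_concat builds the comma-separated column list you would paste into the real query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE songs (id, name, album, time, price)")

# group_concat over the schema rows produces the ready-made column list,
# excluding the columns we don't want (here just 'id').
(col_list,) = conn.execute(
    "SELECT group_concat(name, ', ') FROM pragma_table_info('songs') "
    "WHERE name NOT IN ('id')"
).fetchone()
print(col_list)  # e.g. name, album, time, price
```

As with the MySQL version, the output is one string you can drop straight into `SELECT ... FROM songs`.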
So I have a table called 'SongsMetadata' in my database with 6 columns as shown below (approx. 70k records). It contains all song-related information.
It is slightly different from a regular database table. The 'File_name' column contains .csv file names. Those files are the actual tables, and the values in front of them are the columns in that csv file.
So for the '1001186_1_7562755270480253254.csv' record in the SongsMetadata table, '1001186_1_7562755270480253254' is the table name and its columns are ' ', 'name', 'album', 'time', 'price' (these tables contain a lot of garbage values).
My goal is to compare all the tables (in this case .csv files) to get all the common column names and their counts. I already have a solution for getting common column names and counts for normal tables here; each table is compared with every other table. However, I'm not sure how I can achieve the same with the .csv tables.
The expected output is:
1001186_1_7562755270480253254.csv & 1001186_0_5503858345485431752.csv |  , name, price| 3 #common columns count
1001186_0_5503858345485431752.csv & 99524146_0_3894874701785592836.csv |  , name, price| 3
and so on...
Any suggestions are appreciated.
The following solution shows how to treat your existing table so that the wanted matching can occur efficiently. This requires an unpivot, although the effect of an unpivot is achieved using cross apply with values, which is a simple and efficient method. After that the "matching" is shown, followed by an alternative query for details you may also find useful. Lastly the new table is displayed just to help visualize what it is.
See a live demo at SQL Fiddle.
Small Sample:
CREATE TABLE SongsMetadata
([file_name] varchar(7), [col1] varchar(6), [col2] varchar(6), [col3] varchar(6), [col4] varchar(6))
;
INSERT INTO SongsMetadata
([file_name], [col1], [col2], [col3], [col4])
VALUES
('abc.csv', ' ', 'name', 'price', 'artist'),
('def.csv', 'name', ' ', ' ', 'price')
;
UNPIVOT Query
This query moves the column information into a normalized structure to enable the subsequent matching to occur. It is vital to the overall solution. As an added bonus you can mark some column names as "bad" (most likely garbage data) so that they can be ignored later, e.g.:
select
file_name, column_number, column_name
, case when column_name IN (' ', 'other-unwanted') then 0 else 1 end as col_is_good
into SongsMetadataUpivot
from (
select file_name, column_number, column_name
from SongsMetadata
cross apply (
values
(1, col1)
, (2, col2)
, (3, col3)
, (4, col4)
) ca (column_number, column_name)
) d
;
Query 1:
This is the "matching logic" provided at http://rextester.com/TLQ28814 but applied to the unpivoted songs data, AND it has the ability to exclude column names you simply don't want to consider (col_is_good).
with fmatch as (
select
l.file_name + ' & ' + r.file_name AS comparing_files
, l.column_name
from SongsMetadataUpivot l
inner join SongsMetadataUpivot r on l.column_name = r.column_name
and l.file_name < r.file_name
and r.col_is_good = 1
where l.col_is_good = 1
)
select --* from fmatch
f.comparing_files
, STUFF((
SELECT
N', ' + column_name
FROM fmatch c
WHERE f.comparing_files = c.comparing_files
order by c.column_name
FOR xml PATH (''), TYPE
)
.value('text()[1]', 'nvarchar(max)'), 1, 2, N'') as columns
, count(*) as num_col_matches
from fmatch f
group by f.comparing_files
Results:
| comparing_files | columns | num_col_matches |
|-------------------|-------------|-----------------|
| abc.csv & def.csv | name, price | 2 |
Query 2:
This will simply allow production of the column lists, in name order, together with their respective column positions in each file.
SELECT
file_name, ca.*
from SongsMetadata f
cross apply (
select
STUFF((
SELECT
N', ' + column_name
FROM SongsMetadataUpivot c
WHERE f.file_name = c.file_name
AND c.col_is_good = 1
ORDER BY column_name
FOR xml PATH (''), TYPE
)
.value('text()[1]', 'nvarchar(max)'), 1, 2, N'')
, STUFF((
SELECT
N', ' + cast(column_number as nvarchar)
FROM SongsMetadataUpivot c
WHERE f.file_name = c.file_name
AND c.col_is_good = 1
ORDER BY column_name
FOR xml PATH (''), TYPE
)
.value('text()[1]', 'nvarchar(max)'), 1, 2, N'')
) ca (column_names, col_numbers)
Results:
| file_name | column_names | col_numbers |
|-----------|---------------------|-------------|
| abc.csv | artist, name, price | 4, 2, 3 |
| def.csv | name, price | 1, 4 |
Query 3:
So that you may visualize the "unpivoted" data; the overall solution requires this step.
select * from SongsMetadataUpivot
Results:
| file_name | column_number | column_name | col_is_good |
|-----------|---------------|-------------|-------------|
| abc.csv | 1 | | 0 |
| abc.csv | 2 | name | 1 |
| abc.csv | 3 | price | 1 |
| abc.csv | 4 | artist | 1 |
| def.csv | 1 | name | 1 |
| def.csv | 2 | | 0 |
| def.csv | 3 | | 0 |
| def.csv | 4 | price | 1 |
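For comparison, the pairwise matching itself is small enough to sketch in plain Python (file names and columns are the demo values from above): each pair of files is intersected on its "good" column names, mirroring the self-join on SongsMetadataUpivot.

```python
import itertools

# Demo data: file name -> column names, as in the small sample above.
files = {
    "abc.csv": [" ", "name", "price", "artist"],
    "def.csv": ["name", " ", " ", "price"],
}
bad = {" "}  # column names to ignore, like col_is_good = 0

# Compare every file with every other file and intersect their columns.
pairs = {}
for left, right in itertools.combinations(sorted(files), 2):
    common = sorted((set(files[left]) & set(files[right])) - bad)
    pairs[(left, right)] = common
    print("%s & %s | %s | %d" % (left, right, ", ".join(common), len(common)))
```

For the two demo files this prints `abc.csv & def.csv | name, price | 2`, matching Query 1's result above.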
The following is the table structure:
+----+------+--------+--------+
| ID | NAME | FIELD1 | FIELD2 |
+----+------+--------+--------+
| 1  | val  | 123    | 321    |
| 2  | val2 | 234    | 212    |
+----+------+--------+--------+
I need to get the following result:
+----+------+--------+--------+
| ID | NAME | FIELDS | VALUES |
+----+------+--------+--------+
| 1  | val  | FIELD1 | 123    |
| 1  | val  | FIELD2 | 321    |
| 2  | val2 | FIELD1 | 234    |
| 2  | val2 | FIELD2 | 212    |
+----+------+--------+--------+
How do I write this query? I can get column names from INFORMATION_SCHEMA.COLUMNS, but how do I join the table with INFORMATION_SCHEMA.COLUMNS? And how can I rotate part of the table?
As a living example, here is my table (the screenshot shows only several fields, but the table contains many more). I wrote the following query:
Select p.GUID, p.myvalues, p.Fields
from myTable gz
unpivot( [myvalues] for Fields in ([area], [davlplastmax])) p
But this query doesn't return null values.
I also want to get the columns from INFORMATION_SCHEMA.COLUMNS and paste them into ([area], [davlplastmax]).
For example:
unpivot( [values] for Fields in (
SELECT [MyDb].INFORMATION_SCHEMA.COLUMNS.COLUMN_NAME
FROM [MyDb].INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'MyTable'
))
Unpivot?
select u.id, u.name, u.fields, u.[values]
from MyTable t
unpivot
(
    [values]
    for fields in (Field1, Field2)
) u;
You can use unpivot as below:
Select * from #data
unpivot( [values] for Fields in ([Field1],[Field2])) p
Output as below:
+----+------+--------+--------+
| Id | Name | values | Fields |
+----+------+--------+--------+
| 1 | val | 123 | Field1 |
| 1 | val | 321 | Field2 |
| 2 | val2 | 234 | Field1 |
| 2 | val2 | 212 | Field2 |
+----+------+--------+--------+
You can use a dynamic query as below to get the columns from INFORMATION_SCHEMA:
Declare #cols1 varchar(max)
Declare #query nvarchar(max)
Select #cols1 = stuff((select ','+QuoteName(Column_Name) from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'TestData'
and COLUMN_NAME not in ('Id','Name') for xml path('')),1,1,'')
Select #query = ' Select * from
(Select * from #data )a
unpivot( [values] for Fields in (' + #cols1+ ')) p '
Exec sp_executeSql #query
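If you want to see the unpivot mechanics without a SQL Server instance, here is a small Python/SQLite sketch (SQLite has no UNPIVOT operator, so UNION ALL emulates it, which is effectively what UNPIVOT does; the table and data mirror the question's example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE d (id, name, field1, field2)")
conn.executemany("INSERT INTO d VALUES (?,?,?,?)",
                 [(1, "val", 123, 321), (2, "val2", 234, 212)])

# One SELECT per unpivoted column, glued together with UNION ALL.
rows = conn.execute("""
    SELECT id, name, 'FIELD1' AS fields, field1 AS "values" FROM d
    UNION ALL
    SELECT id, name, 'FIELD2', field2 FROM d
    ORDER BY id, fields
""").fetchall()
for r in rows:
    print(r)
```

Each source row yields one output row per unpivoted column, which is the same shape as the UNPIVOT result above.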
I have a table from which I want to get column values into a single record, separated by commas.
Use stuff() with the select ... for xml path ('') method of string concatenation:
select col1, col2, col3 = stuff(
(
select ','+i.col3
from t as i
where i.col1 = t.col1
for xml path (''), type).value('.','nvarchar(max)')
,1,1,'')
from t
group by col1, col2
rextester demo: http://rextester.com/QXH88855
returns:
+------+------+-------------+
| col1 | col2 | col3 |
+------+------+-------------+
| 1 | roy | a,f,g,h |
| 2 | sam | h,k,l |
| 3 | joe | q,w,e,r,t,y |
+------+------+-------------+
If you are on SQL Server 2017 or later, or Azure SQL Database, you can use STRING_AGG:
SELECT col1, col2, STRING_AGG(col3, ',') from yourtable
GROUP BY col1, col2
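As a rough cross-check of what STUFF / STRING_AGG produce, SQLite's group_concat does the same per-group concatenation. The demo data below mirrors the rextester example (note SQLite does not formally guarantee the order of values inside each group):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col1, col2, col3)")
conn.executemany("INSERT INTO t VALUES (?,?,?)", [
    (1, "roy", "a"), (1, "roy", "f"), (1, "roy", "g"), (1, "roy", "h"),
    (2, "sam", "h"), (2, "sam", "k"), (2, "sam", "l"),
])

# group_concat collapses col3 per (col1, col2) group into one string.
rows = conn.execute(
    "SELECT col1, col2, group_concat(col3, ',') FROM t "
    "GROUP BY col1, col2 ORDER BY col1"
).fetchall()
for r in rows:
    print(r)
```

This yields one row per group with the comma-joined values, the same shape as the STUFF/FOR XML PATH result above.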
Formatting the output really should be done in the program that receives the data. You can export from SQL Server Management Studio to CSV by selecting "output to file" on the toolbar and configuring comma-delimited output.
If you really need to combine the columns into a comma separated single value:
SELECT CAST(col1 AS NVARCHAR(100)) + N',' + CAST(col2 AS NVARCHAR(100)) + N',' + CAST(col3 AS NVARCHAR(100))
FROM table
I have many rows for the same data, but each row has different columns populated. I want to merge all these rows into a single row. I have tried group by, but the dataset has 1000 columns. Any suggestions?
SELECT TOP 100 *
FROM [Hilltopsamplerfix].[dbo].[tempHilltopWaterQualityExtractPivot]
WHERE [SiteName] = 'site' AND [RunDate] = 'xxx'
Example of the data
+-------+---------+---------+---------+
| Site  | column1 | column2 | column3 |
+-------+---------+---------+---------+
| SITE1 | NULL    | NULL    | 76      |
| SITE1 | NULL    | 23      | NULL    |
| SITE1 | NULL    | NULL    | NULL    |
+-------+---------+---------+---------+
Desired output:
+-------+---------+---------+---------+
| Site  | column1 | column2 | column3 |
+-------+---------+---------+---------+
| SITE1 | NULL    | 23      | 76      |
+-------+---------+---------+---------+
You need to use group by:
select site, max(column1) as column1, max(column2) as column2, . . .
from [Hilltopsamplerfix].[dbo].[tempHilltopWaterQualityExtractPivot]
group by site;
You can get the columns in the table from information_schema.columns and construct the logic in the SQL or Excel. For example
select ' max(' + column_name + ') as ' + column_name + ', '
from information_schema.columns
where table_name = 'tempHilltopWaterQualityExtractPivot' and
column_name <> 'site';
Then copy the results into the query window.
Try the SQL below, based on Gordon's answer:
Declare @Cols Varchar(Max)
select @Cols = coalesce(@Cols + ',', '') + ' max(' + column_name + ') as ' + column_name
from information_schema.columns
where table_name = 'tempHilltopWaterQualityExtractPivot' and Column_Name <> 'site'
Declare @Query Varchar(max)
SET @Query = 'Select Site, ' + @Cols + ' From tempHilltopWaterQualityExtractPivot Group by Site'
EXEC(@Query)
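The same generate-then-execute trick can be sketched in Python with SQLite (the `pivot` table and its data are made up to match the example above): read the column names from the schema, emit `max(col) as col` for each, and run the collapsed query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pivot (site, column1, column2, column3)")
conn.executemany("INSERT INTO pivot VALUES (?,?,?,?)", [
    ("SITE1", None, None, 76),
    ("SITE1", None, 23, None),
    ("SITE1", None, None, None),
])

# Build "max(col) as col" for every column except the grouping key.
cols = [row[1] for row in conn.execute("PRAGMA table_info(pivot)")
        if row[1] != "site"]
agg = ", ".join("max(%s) as %s" % (c, c) for c in cols)
query = "SELECT site, %s FROM pivot GROUP BY site" % agg

# MAX ignores NULLs, so the sparse rows collapse into one row per site.
print(conn.execute(query).fetchone())  # ('SITE1', None, 23, 76)
```

Because MAX skips NULLs, each column picks up whichever row had it populated, which is exactly why the group-by approach answers the question.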
Is there any way to map the first table to the second table with an SQL query or, if too complicated, a PL/SQL block?
Original
--------------------------------------
| col1 | col2 | col3 | col4 |
--------------------------------------
| key | case 1 | case 2 | case 3 |
| value1 | v1c1 | v1c2 | v1c3 |
| value2 | v2c1 | v2c2 | v2c3 |
--------------------------------------
Target
-----------------------------
| key | case | result |
-----------------------------
| value1 | case 1 | v1c1 |
| value1 | case 2 | v1c2 |
| value1 | case 3 | v1c3 |
| value2 | case 1 | v2c1 |
| value2 | case 2 | v2c2 |
| value2 | case 3 | v2c3 |
-----------------------------
The original table can have a variable number of columns, and 'key' is a hardcoded string that is always in column 1 of the original table. No other row has 'key' in column 1, so this row is a unique pivot row.
Thank you
If dynamic SQL is allowed, then it is possible to fulfil all your requirements using one query:
SELECT col1 as "key"
,extractvalue(dbms_xmlgen.getXMLType('select "' || tc.Column_Name ||
'" as v from Original where col1 = ''key''')
,'/ROWSET/ROW/V') "case"
,extractvalue(dbms_xmlgen.getXMLType('select "' || tc.Column_Name ||
'" as v from Original where col1 = ''' ||
replace(col1, '''', '''''') || '''')
,'/ROWSET/ROW/V') "result"
FROM Original
,(SELECT Column_Name
FROM All_Tab_Columns tc
WHERE tc.Owner = 'YOURSCHEMA'
and tc.Table_Name = 'ORIGINAL'
and Column_Name != 'COL1'
ORDER BY tc.COLUMN_ID) tc
WHERE col1 != 'key'
ORDER BY "key"
,"case"
Some more details as requested:
dbms_xmlgen.getXMLType returns an XmlType instance which is basically the result of the supplied query string as XML.
The format is ROWSET for the root node and ROW for each row. Every column will be an element as well.
The two selects that I am creating each return only one value, and to make things easier I gave them a column alias "V" so that I know which value to pick from the XML.
extractValue is a function that returns the result of an XPath expression from an XmlType.
'/ROWSET/ROW/V' returns the first V node, from the first ROW node that resides under the root node ROWSET.
<ROWSET><ROW><V>Abc</V></ROW></ROWSET>
The original table can have a variable
number of columns
Really?
The straightforward way is to select and union the parts you want.
select col1 as key, 'case1' as "case", col2 as result
from test
where col1 <> 'key'
union all
select col1 as key, 'case2' as "case", col3 as result
from test
where col1 <> 'key'
union all
select col1 as key, 'case3' as "case", col4 as result
from test
where col1 <> 'key'
Straightforward, but not dynamic.
Later . . .
Based on your comment . . . although I don't think it's necessary.
select col1 as key, (select col2 from test where col1='key') as "case", col2 as result
from test
where col1 <> 'key'
union all
select col1 as key, (select col3 from test where col1='key') as "case", col3 as result
from test
where col1 <> 'key'
union all
select col1 as key, (select col4 from test where col1='key') as "case", col4 as result
from test
where col1 <> 'key'
Oracle 11 also supports UNPIVOT, which I haven't used.
I don't know which parts can change, but this should be a start for you. If the column names can change (key, case 1, etc.) you will have to have another query to get the correct column names. If you have questions feel free to ask:
declare
v_query VARCHAR2(5000);
v_case VARCHAR2(255);
v_colcount PLS_INTEGER;
begin
-- Get number of columns
select count(*)
INTO v_colcount
from user_tab_columns
where table_name = 'T1';
-- Build case statement to get correct value for result column
v_case := 'case';
for i in 1 .. v_colcount-1
loop
v_case := v_case||' when rn = '||to_char(i)||' then col'||to_char(i+1);
end loop;
v_case := v_case||' end result';
-- Build final query
v_query := 'select col1 key, ''case ''||rn "case", '||v_case||'
from t1
cross join (
select rownum rn
from dual
connect by level <= '||to_char(v_colcount-1)||'
) cj
where col1 <> ''key''
order by key, "case"';
-- Display query (would probably be replaced with an insert using execute immediate)
dbms_output.put_line(v_query);
end;
This produces the following query (which assumes your original table is called t1):
select col1 key, 'case '||rn "case", case when rn = 1 then col2 when rn = 2 then col3 when rn = 3 then col4 end result
from t1
cross join (
select rownum rn
from dual
connect by level <= 3
) cj
where col1 <> 'key'
order by key, "case"
Try this:
with data as
(select level l from dual connect by level <= 3)
select col1,
'case' || l as "case",
decode(l,1,col2,2,col3,3,col4) as "values"
from myTable, data
order by 1,2;
Cheers
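The row-generator technique in the last two answers can be sketched in SQLite with a recursive CTE standing in for `connect by level` (the table `t1` and its values mirror the question's example): cross join the table with numbers 1..3 and pick the matching column per row, like the decode() version above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (col1, col2, col3, col4)")
conn.executemany("INSERT INTO t1 VALUES (?,?,?,?)", [
    ("value1", "v1c1", "v1c2", "v1c3"),
    ("value2", "v2c1", "v2c2", "v2c3"),
])

# nums generates rows 1..3; each number selects a different column,
# turning one wide row into three narrow rows.
rows = conn.execute("""
    WITH RECURSIVE nums(l) AS (
        SELECT 1 UNION ALL SELECT l + 1 FROM nums WHERE l < 3
    )
    SELECT col1,
           'case ' || l,
           CASE l WHEN 1 THEN col2 WHEN 2 THEN col3 ELSE col4 END
    FROM t1 CROSS JOIN nums
    ORDER BY 1, 2
""").fetchall()
for r in rows:
    print(r)
```

Each of the 2 source rows produces 3 output rows, matching the Target table in the question.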