I don't know how to union all tables with dynamic SQL.
The issue is that I'm inserting a number of tables into the database, all having the same structure: a single varchar column named [Line]. I don't know in advance how many tables there will be; it depends on the project. But I want to automate the process in SQL.
I'm using this query to find those tables; additionally I'm adding a [RowNum] that can serve as an ID for each table:
SELECT
ROW_NUMBER() OVER (ORDER BY Name) AS [RowNum],
[Name] AS [Name]
INTO #all_tables_with_ids
FROM #all_tables
This query returns:
RowNum | Name
------------------------
1 | Table 1
2 | Table 2
3 | Table 3
4 | Table 4
I would like to merge all the tables together. I tried writing an INSERT INTO inside a WHILE loop, but it didn't work, and I figured out that I need dynamic SQL.
Can you suggest something? I tried to find examples, but all of them fail because the list of tables is not known up front, so it needs to be built dynamically as well.
Demo here:
create table #test
(
RowNum int,
Name varchar(100)
)

insert into #test
select 1, quotename('table1')
union all
select 2, quotename('table2')

declare @sql nvarchar(max)
set @sql = 'select somecol from tbl union all '

declare @sql1 nvarchar(max)

-- substitute each table name into the template, one fragment per table
;with cte
as
(
select @sql as ql, name, rplc
from
#test t1
cross apply
(select replace(@sql, 'tbl', name) as rplc from #test t2 where t1.rownum = t2.rownum) b
)
-- concatenate all fragments into one statement
select @sql1 = stuff(
    (select '' + rplc
     from cte
     for xml path('')
    ), 1, 0, '')

-- strip the trailing ' union all'
set @sql1 = substring(@sql1, 1, len(@sql1) - 10)

print @sql1
--exec(@sql1)
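On SQL Server 2017 or later, the same statement can be built without the FOR XML trick. A minimal sketch, assuming that version is available:
-- build 'select somecol from [table1] union all select somecol from [table2]'
declare @sql2 nvarchar(max)

-- convert keeps the aggregate in nvarchar(max) so long table lists don't truncate
select @sql2 = string_agg(convert(nvarchar(max), 'select somecol from ' + Name), ' union all ')
from #test

print @sql2
--exec(@sql2)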
Related
I am trying to select all distinct values from all tables that start with a specific name, like 'logs_2020_12_01', 'logs_2021_01_02', etc. Selecting all tables with this specific name is straightforward:
SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'logs_%';
The select I want for one individual table is:
SELECT DISTINCT batch FROM logs_2021_01_27;
but I cannot find a way to combine it to make the selection from all the tables. I tried a couple of things, but they do not work, for example:
SELECT DISTINCT batch FROM (SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'logs_%')
any ideas?
thanks
What about using dynamic SQL? Store your table information in a temp table with an id column set to identity.
CREATE TABLE #temp ---identity column will be used to iterate
(
    id INT IDENTITY,
    TableName VARCHAR(20)
)

INSERT INTO #temp
SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'logs_%';
-- choose your own results with where conditions

DECLARE @SQL VARCHAR(MAX)
DECLARE @Count INT = 1
DECLARE @Table VARCHAR(20)

WHILE @Count <= (SELECT COUNT(*) FROM #temp)
BEGIN
    SELECT @Table = TableName FROM #temp WHERE id = @Count
    SELECT @SQL = 'SELECT DISTINCT(batch) FROM ' + @Table
    PRINT @SQL
    SET @Count = @Count + 1
END
Once the printed result looks good, change the PRINT to EXEC(@SQL). Thanks.
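If a single combined result set is the goal, a variant of the same idea (a sketch, reusing the #temp table above) concatenates the statements with UNION instead of printing them one by one:
-- build one UNION query instead of N separate statements
DECLARE @AllSQL VARCHAR(MAX) = ''

SELECT @AllSQL = @AllSQL + 'SELECT DISTINCT batch FROM ' + TableName + ' UNION '
FROM #temp
ORDER BY id

-- strip the trailing ' UNION' (LEN ignores the trailing space)
SET @AllSQL = LEFT(@AllSQL, LEN(@AllSQL) - 6)

PRINT @AllSQL
--EXEC(@AllSQL)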
SQLite does not support dynamic sql.
You have to select the column batch from each of the tables and combine them with UNION so that duplicates are removed:
SELECT batch FROM logs_2020_12_01 UNION
SELECT batch FROM logs_2020_12_02 UNION
......................................
SELECT batch FROM logs_2020_12_30 UNION
SELECT batch FROM logs_2020_12_31
If you don't know the full names of the tables, you can get them with this statement:
SELECT name
FROM sqlite_master
WHERE type = 'table' AND name LIKE 'logs/_%' ESCAPE '/'
and then use a programming language to construct a SELECT statement with UNION to get the results that you want.
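Since the statement itself has to be assembled outside SQLite, one option (a sketch using SQLite's built-in group_concat; the stmt alias is made up here) is to let SQLite generate the text of the UNION query for you, then execute that text from your host language:
-- generates the full 'SELECT ... UNION SELECT ...' statement as a single string
SELECT group_concat('SELECT batch FROM ' || name, ' UNION ') AS stmt
FROM sqlite_master
WHERE type = 'table' AND name LIKE 'logs/_%' ESCAPE '/';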
I have two tables with the following structure (shown in two images, not reproduced here), and I want results like this from one query:
CUSTOMER_CODE | CUSTOMER_NAME | LINE 1 |LINE 2 | LINE 3
We have to pivot the first table, but I'm not sure how.
Please advise.
Thanks
Here is a dynamic conditional aggregation. There were no table names in the question, so Table1 relates to Image1.
Declare @SQL varchar(max)=''

Select @SQL = @SQL+',[Line '+cast([Line#] as varchar(25))+']=max(case when [Line#]='+cast([Line#] as varchar(25))+' then EMail else '''' end)'
 From (Select Distinct [Line#] from Table1) A
 Order By [Line#]

Select @SQL='
Select A.Customer_Code
      ,B.Customer_Branch_Name'+@SQL+'
 From Table1 A
 Join Table2 B
   on A.Customer_Code=B.Customer_Branch
 Group By A.Customer_Code,B.Customer_Branch_Name'

Exec(@SQL)
Returns the pivoted result (shown as a screenshot in the original answer).
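For illustration, assuming the distinct [Line#] values are 1, 2 and 3, the statement the code above builds and executes would look like this (a reconstruction, not output from a real run):
Select A.Customer_Code
      ,B.Customer_Branch_Name
      ,[Line 1]=max(case when [Line#]=1 then EMail else '' end)
      ,[Line 2]=max(case when [Line#]=2 then EMail else '' end)
      ,[Line 3]=max(case when [Line#]=3 then EMail else '' end)
 From Table1 A
 Join Table2 B
   on A.Customer_Code=B.Customer_Branch
 Group By A.Customer_Code,B.Customer_Branch_Name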
I have a SQL table with some values and a lot of filter columns:
ID | Name | Filter1 | Filter2 | Filter3 | Filter4 ... and so on...
As of now the filters are set as int, and I am running a query like the following to get the data required:
select Name
from tblABC
where Filter1=1 and Filter2 = 7 and Filter3 = 33 ... and so on...
My issue is that I want a filter column to hold multiple numbers, e.g. row 3 will have the numbers 8 and 13 in its Filter1 cell, so that a query for either 8 or 13 returns the same result.
That is, I want both of the queries below to return the same result:
select... where Filter1=8
select... where Filter1=13
How can this be done? I tried converting the Filter columns to nvarchar and entering data as .8.13., where '.' was used as a separator. After this, running a query like select... where Filter1 LIKE '%.8.%' works for me. But there are 12 Filter columns, and when such string searches run over large volumes, won't the query be slow? What would be a more efficient way of doing this?
I am using Microsoft SQL 2014
A more efficient way, hmm. Separating the filters out of tblABC would be my suggested way to go; even if it's not the most efficient option, it makes up for it in maintainability (and it is certainly more efficient than using LIKE with wildcards).
tblABC
ID | Name
---+----------
1  | Somename
2  | Othername

tblABCFilter
ID | AbcID | Filter
---+-------+-------
1  | 1     | 8
2  | 1     | 13
3  | 1     | 33
4  | 2     | 5
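A minimal DDL sketch for this layout (the names mirror the tables above; the column types are my assumptions):
CREATE TABLE tblABC (
    ID int IDENTITY PRIMARY KEY,
    Name varchar(100) NOT NULL
);

CREATE TABLE tblABCFilter (
    ID int IDENTITY PRIMARY KEY,
    AbcID int NOT NULL REFERENCES tblABC (ID),  -- one row per filter value
    Filter int NOT NULL
);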
How you query this data depends on your required output of course. One way is to just use the following:
SELECT tblABC.Name FROM tblABC
INNER JOIN tblABCFilter ON tblABC.ID = tblABCFilter.AbcID
WHERE tblABCFilter.Filter = 33
This will return every Name with a Filter of 33.
If you want to query for several Filters:
SELECT tblABC.Name FROM tblABC
INNER JOIN tblABCFilter ON tblABC.ID = tblABCFilter.AbcID
WHERE tblABCFilter.Filter IN (33,7)
This will return every Name with a Filter of either 33 or 7.
I have created a small example fiddle.
I'm going to post a solution I use: a split function (there are a lot of SQL Server split functions all over the internet).
You can take this one as an example:
CREATE FUNCTION [dbo].[SplitString]
(
    @List NVARCHAR(MAX),
    @Delim VARCHAR(255)
)
RETURNS TABLE
AS
    RETURN ( SELECT [Value] FROM
      (
        SELECT
          [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number],
                    CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number])))
        FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name)
              FROM sys.all_objects) AS x
        WHERE Number <= LEN(@List)
          AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim
      ) AS y
    );
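A quick usage sketch, to show the shape of the output (each delimited piece comes back as one row in a [Value] column):
-- returns two rows: '8' and '13'
SELECT [Value] FROM dbo.SplitString('8,13', ',');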
and run your query like this (assuming @DatatoFilter holds the comma-separated values):
select Name
from tblABC
where Filter1 IN (SELECT [Value] FROM dbo.SplitString(@DatatoFilter, ','))
  and Filter2 IN (SELECT [Value] FROM dbo.SplitString(@DatatoFilter, ','))
--and so on.
If you have hundreds of thousands of records it may not perform very well, but it should work.
My personal approach would be a stored procedure and temp tables. Create a temp table for each set of values you want to use as a filter:
SELECT *
INTO #Filter1
FROM dbo.SplitString(@DatatoFilter, ',')

SELECT *
INTO #Filter2
FROM dbo.SplitString(@DatatoFilter, ',')
then the final select
SELECT * FROM yourtable
WHERE Filter1 IN (SELECT DISTINCT [Value] FROM #Filter1) and
      Filter2 IN (SELECT DISTINCT [Value] FROM #Filter2)
I don't think it makes any big difference from the first query, but it is easier to read.
Another solution you can try is to convert the columns to XML. It's better than converting the columns to VARCHAR; you can use .exist() to get only the records matching your criteria. Something like this:
DECLARE @table1 TABLE
(
    [ID] int, [Name] varchar(9), Filter1 XML
)

INSERT INTO @table1
    ([ID], [Name], Filter1)
VALUES
    (1, 'Somename', '<Filter>8</Filter>'),
    (2, 'Othername', '<Filter>8</Filter><Filter>13</Filter>'),
    (3, 'Thirdname', '<Filter>25</Filter>')

DECLARE @FilterValue INT = 8

SELECT Filter1.query('/Filter'), *
FROM @table1
WHERE Filter1.exist('/Filter[. = sql:variable("@FilterValue")]') = 1
EDIT
You can even use a single XML column to store all 12 of your filters, so one Filter XML column holds all your filters and their multiple values.
DECLARE @table1 TABLE
(
    [ID] int, [Name] varchar(9), Filter XML
)

INSERT INTO @table1
    ([ID], [Name], Filter)
VALUES
    (1, 'Somename', '<Filter ID = "1"><FilterVal>8</FilterVal></Filter><Filter ID = "2"><FilterVal>3</FilterVal><FilterVal>12</FilterVal></Filter>'),
    (2, 'Othername', '<Filter ID = "1"><FilterVal>8</FilterVal><FilterVal>13</FilterVal></Filter><Filter ID = "2"><FilterVal>8</FilterVal><FilterVal>13</FilterVal></Filter>'),
    (3, 'Thirdname', '<Filter ID = "2"><FilterVal>12</FilterVal><FilterVal>25</FilterVal></Filter><Filter ID = "3"><FilterVal>33</FilterVal></Filter>')

DECLARE @Filter1Value INT = 8
DECLARE @Filter2Value INT = 12

SELECT *
FROM @table1
WHERE Filter.exist('/Filter[@ID = 1]/FilterVal[. = sql:variable("@Filter1Value")]') = 1
  AND Filter.exist('/Filter[@ID = 2]/FilterVal[. = sql:variable("@Filter2Value")]') = 1
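If these lookups run at volume, an XML index may help .exist() performance. A minimal sketch (my own suggestion, not from the original answer; it requires a real table with a clustered primary key rather than a table variable, and the names here are hypothetical):
CREATE TABLE dbo.tblABC_Xml (
    ID int NOT NULL PRIMARY KEY,  -- clustered PK is required for XML indexes
    Name varchar(9) NOT NULL,
    Filter xml NULL
);

CREATE PRIMARY XML INDEX IX_tblABC_Xml_Filter
    ON dbo.tblABC_Xml (Filter);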
Let's assume two tables:
TableA holds various data measurements from a variety of stations.
TableB holds metadata, about the columns used in TableA.
TableA has:
stationID int not null, pk
entryDate datetime not null, pk
waterTemp float null,
waterLevel float null ...etc
TableB has:
id int not null, pk, autoincrement
colname varchar(50),
unit varchar(50) ....etc
So for example, one line of data from tableA reads:
1 | 2013-01-01 00:00 | 2.4 | 3.5
two lines from tableB read:
1| waterTemp | celcius
2| waterLevel | meters
This is a simplified example. In truth, TableA might hold close to 20 different data columns, and TableB has close to 10 metadata columns.
I am trying to design a view which will output the results like this:
StationID | entryDate | water temperature | water level |
1 | 2013-01-01 00:00 | 2.4 celcius | 3.5 meters |
So two questions:
1. Other than specifying subselects from TableB ("...where colname='XXX'") for each column, which seems horribly inefficient (not to mention... manual :P), is there a way to get the result I mentioned earlier with an automatic match on colname?
2. I have a hunch that this might be bad database design. Is it so? If yes, what would be a more optimal design? (Bear in mind the complexity of the data structure I mentioned earlier.)
Dynamic SQL with PIVOT is the answer. It is dirty in terms of debugging, and hard for a new developer to understand, but it will give you the result you expected.
Check the query below.
In it we need to prepare two things dynamically: one is the list of columns in the result set, and the second is the list of values that will appear in the PIVOT query. Notice that in the result I do not have NULL values for Column3, Column5 and Column6.
SET NOCOUNT ON

IF OBJECT_ID('TableA','u') IS NOT NULL
    DROP TABLE TableA
GO
CREATE TABLE TableA
(
    stationID int not null IDENTITY (1,1)
    ,entryDate datetime not null
    ,waterTemp float null
    ,waterLevel float NULL
    ,Column3 INT NULL
    ,Column4 BIGINT NULL
    ,Column5 FLOAT NULL
    ,Column6 FLOAT NULL
)
GO
IF OBJECT_ID('TableB','u') IS NOT NULL
    DROP TABLE TableB
GO
CREATE TABLE TableB
(
    id int not null IDENTITY(1,1)
    ,colname varchar(50) NOT NULL
    ,unit varchar(50) NOT NULL
)

INSERT INTO TableA( entryDate ,waterTemp ,waterLevel,Column4)
SELECT '2013-01-01',2.4,3.5,101

INSERT INTO TableB( colname, unit )
SELECT 'WaterTemp','celcius'
UNION ALL SELECT 'waterLevel','meters'
UNION ALL SELECT 'Column3','unit3'
UNION ALL SELECT 'Column4','unit4'
UNION ALL SELECT 'Column5','unit5'
UNION ALL SELECT 'Column6','unit6'

DECLARE @pvtInColumnList NVARCHAR(4000)=''
       ,@SelectColumnList NVARCHAR(4000)=''
       ,@SQL NVARCHAR(MAX)=''

----getting the list of column names to be used in the PIVOT IN list
SELECT @pvtInColumnList = CASE WHEN @pvtInColumnList=N'' THEN N'' ELSE @pvtInColumnList + N',' END
                          + N'['+ colname + N']'
FROM TableB
--PRINT @pvtInColumnList

----lt and rt are table aliases used in the subsequent join.
SELECT @SelectColumnList = CASE WHEN @SelectColumnList = N'' THEN N'' ELSE @SelectColumnList + N',' END
       + N'CAST(lt.'+sc.name + N' AS NVARCHAR(MAX)) + SPACE(2) + rt.' + sc.name + N' AS ' + sc.name
FROM sys.objects so
JOIN sys.columns sc
    ON so.object_id=sc.object_id AND so.name='TableA' AND so.type='u'
JOIN TableB tbl
    ON tbl.colname=sc.name
JOIN sys.types st
    ON st.system_type_id=sc.system_type_id
ORDER BY sc.name

IF @SelectColumnList <> '' SET @SelectColumnList = N','+@SelectColumnList
--PRINT @SelectColumnList

----preparing the final SQL to be executed
SELECT @SQL = N'
SELECT
    --this is a fixed column list
    lt.stationID
    ,lt.entryDate
    '
    --dynamic column list
    + @SelectColumnList + N'
FROM TableA lt,
    (
        SELECT * FROM
        (
            SELECT colname,unit
            FROM TableB
        )p
        PIVOT
        ( MAX(p.unit) FOR p.colname IN ( '+ @pvtInColumnList +N' ) )q
    )rt
'
PRINT @SQL
EXECUTE sp_executesql @SQL
Here is the result (shown as a screenshot in the original answer).
ANSWER to your second question:
The design above gives neither performance nor flexibility: if the user wants to add new metadata (a column and unit), that cannot be done without changing the definition of TableA.
If we are OK with writing dynamic SQL to give the user flexibility, we can redesign TableA as below; there is nothing to change in TableB. I would convert TableA into a key-value-pair table. Notice that stationID is no longer an IDENTITY; instead, for a given stationID there will be N rows, where N is the number of columns supplying values for that stationID. With this design, if the user adds a new column and unit to TableB tomorrow, it just adds a new row in TableA; no table-definition change is required.
SET NOCOUNT ON
IF OBJECT_ID('TableA_New','u') IS NOT NULL
DROP TABLE TableA_New
GO
CREATE TABLE TableA_New
(
rowID INT NOT NULL IDENTITY (1,1)
,stationID int not null
,entryDate datetime not null
,ColumnID INT
,Columnvalue NVARCHAR(MAX)
)
GO
IF OBJECT_ID('TableB_New','u') IS NOT NULL
DROP TABLE TableB_New
GO
CREATE TABLE TableB_New
(
id int not null IDENTITY(1,1)
,colname varchar(50) NOT NULL
,unit varchar(50) NOT NULL
)
GO
INSERT INTO TableB_New(colname,unit)
SELECT 'WaterTemp','celcius'
UNION ALL SELECT 'waterLevel','meters'
UNION ALL SELECT 'Column3','unit3'
UNION ALL SELECT 'Column4','unit4'
UNION ALL SELECT 'Column5','unit5'
UNION ALL SELECT 'Column6','unit6'
INSERT INTO TableA_New (stationID,entrydate,ColumnID,Columnvalue)
SELECT 1,'2013-01-01',1,2.4
UNION ALL SELECT 1,'2013-01-01',2,3.5
UNION ALL SELECT 1,'2013-01-01',4,101
UNION ALL SELECT 2,'2012-01-01',1,3.6
UNION ALL SELECT 2,'2012-01-01',2,9.9
UNION ALL SELECT 2,'2012-01-01',4,104
SELECT * FROM TableA_New
SELECT * FROM TableB_New
SELECT *
FROM
(
SELECT lt.stationID,lt.entryDate,rt.Colname,lt.Columnvalue + SPACE(3) + rt.Unit AS ColValue
FROM TableA_New lt
JOIN TableB_new rt
ON lt.ColumnID=rt.ID
)t1
PIVOT
(MAX(ColValue) FOR Colname IN ([WaterTemp],[waterLevel],[Column3],[Column4],[Column5],[Column6]))pvt
See the result below (shown as a screenshot in the original answer).
I would design this database like the following:
A table MEASUREMENT_DATAPOINT that contains the measured data points. It would have the columns ID, measurement_id, value, unit, name.
One entry would be 1, 1, 2.4, 'celcius', 'water temperature'.
A table MEASUREMENTS that contains the data of the measurement itself. Columns: ID, station_ID, entry_date.
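A minimal DDL sketch of that design (the names follow the answer; the column types are my assumptions):
CREATE TABLE MEASUREMENTS (
    ID int NOT NULL PRIMARY KEY,
    station_ID int NOT NULL,
    entry_date datetime NOT NULL
);

CREATE TABLE MEASUREMENT_DATAPOINT (
    ID int NOT NULL PRIMARY KEY,
    measurement_id int NOT NULL REFERENCES MEASUREMENTS (ID),
    value float NULL,
    unit varchar(50) NULL,
    name varchar(50) NULL
);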
You might want to look into the MS-SQL function called PIVOT/UNPIVOT
http://technet.microsoft.com/en-us/library/ms177410(v=sql.105).aspx
you can take column names and have them in rows or vice versa using this command.
Once you have the column name in the column itself you can join that column from tableA to tableB. Then unpivot to get your data back the way you want it. (caveat I may be swapping the use of pivot and unpivot :))
Word to the wise though, if you are working with large tables, pivot is not the fastest of operations.
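A rough UNPIVOT sketch of that idea (assuming the TableA/TableB definitions from the question; only two measurement columns shown):
-- turn TableA's measurement columns into rows so colname can join to TableB
SELECT u.stationID, u.entryDate, u.colname,
       CAST(u.value AS varchar(20)) + ' ' + b.unit AS labelled_value
FROM (SELECT stationID, entryDate, waterTemp, waterLevel FROM TableA) a
UNPIVOT (value FOR colname IN (waterTemp, waterLevel)) AS u
JOIN TableB b ON b.colname = u.colname;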
I think you would have to flip it to a row per metric. Looking at your design above:
1 | 2013-01-01 00:00 | 2.4 | 3.5
How do I know which row in TableB applies to which value?
I would try something like this:
Table B:
Metric_Key | Metric
-----------+---------------------
1          | WaterLevel in Meters
2          | Temp in Celcius
...

Table A:
StationID | entrydate        | Metric_Key | Value
----------+------------------+------------+------
1         | 2013-01-01 00:00 | 1          | 2.4
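A sketch of how readings would come back out of that layout (table and column names as above):
-- one labelled reading per row; pivot later if a columnar view is needed
SELECT a.StationID, a.entrydate, b.Metric, a.Value
FROM TableA a
JOIN TableB b ON b.Metric_Key = a.Metric_Key;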
I'm working on a database which has the following table:
id location
1 Singapore
2 Vancouver
3 Egypt
4 Tibet
5 Crete
6 Monaco
My question is, how can I produce a query from this which would result in column names like the following without writing them into the query:
Query result:
Singapore , Vancouver, Egypt, Tibet, ...
< values >
how can I produce a query which would result in column names like the
following without writing them into the query:
Even with crosstab() (from the tablefunc extension), you have to spell out the column names.
Except if you create a dedicated C function for your query. The tablefunc extension provides a framework for this; output columns (the list of countries) have to be stable, though. I wrote up a "tutorial" for a similar case a few days ago:
PostgreSQL row to columns
The alternative is to use CASE statements like this:
SELECT sum(CASE WHEN t.id = 1 THEN o.ct END) AS "Singapore"
, sum(CASE WHEN t.id = 2 THEN o.ct END) AS "Vancouver"
, sum(CASE WHEN t.id = 3 THEN o.ct END) AS "Egypt"
-- more?
FROM tbl t
JOIN (
SELECT id, count(*) AS ct
FROM other_tbl
GROUP BY id
) o USING (id);
ELSE NULL is optional in a CASE expression. The manual:
If the ELSE clause is omitted and no condition is true, the result is null.
Basics for both techniques:
PostgreSQL Crosstab Query
You could do this with some really messy dynamic SQL, but I wouldn't recommend it.
However, you could produce something like the output below; let me know if that structure is acceptable and I will post some SQL.
Location | Count
---------+------
Singapore| 1
Vancouver| 0
Egypt | 2
Tibet | 1
Crete | 3
Monaco | 0
drop table #yourtable;
create table #yourtable(id int, location varchar(25));
insert into #yourtable values
('1','Singapore'),
('2','Vancouver'),
('3','Egypt'),
('4','Tibet'),
('5','Crete'),
('6','Monaco');
drop table #temp;
create table #temp( col1 int );
Declare @Script as Varchar(8000);
Declare @Script_prepare as Varchar(8000);

Set @Script_prepare = 'Alter table #temp Add [?] varchar(100);'
Set @Script = ''

Select
    @Script = @Script + Replace(@Script_prepare, '?', [location])
From
    #yourtable
Where
    [id] is not null

Exec (@Script);
ALTER TABLE #temp DROP COLUMN col1 ;
select * from #temp;
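To check what the loop produced, a quick sketch (querying tempdb's catalog, since #temp is a temp table):
-- lists one row per generated column: Singapore, Vancouver, Egypt, ...
SELECT name AS column_name
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID('tempdb..#temp');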