I have a database, say MasterDB, which holds a list of database names in a table tbl_B. Each database name is identified by an ID.
The structure of tbl_B is like the following:
tbl_B
ID | DB_Name
-------------
1 | DelhiDB
2 | MumbaiDB
There are actual databases with these names, i.e. DelhiDB and MumbaiDB, and each of them has a table named tbl_C which holds some data, e.g.:
tbl_C for Delhi
custIDDelhi | custNameDelhi | CustPhoneDelhi |
----------------------------------------------
1 | John | 123456 |
2 | Monika | 789945 |
Please note that the column names can be different in each of these databases.
Also note that DelhiDB and MumbaiDB are separate databases, each having a table named tbl_C.
I want to create a table called tblCustomer_Dictionary in MasterDB
with data something like this:
ColumnName | DataBaseName | DataBaseID | db_ColumnNamme
-----------------------------------------------------------
CustomerID | DelhiDB | 1 | custIDDelhi
CustomerName | DelhiDB | 1 | custNameDelhi
CustomerPhone | DelhiDB | 1 | CustPhoneDelhi
CustomerID | MumbaiDB | 2 | custIDMumbai
CustomerName | MumbaiDB | 2 | custNameMumbai
CustomerPhone | MumbaiDB | 2 | CustPhoneMumbai
I don't want any customer data here, just a list of column names from both databases along with the database name and ID.
The column ColumnName in the above table is the generic name I am giving to the column db_ColumnNamme.
I have used 2 databases and 3 columns for simplicity, but there can be N databases, each having a table with the same name (tbl_C here) and a fixed number of columns.
Let me know in the comments if anything needs clarification.
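For reference, here is a minimal sketch of the MasterDB objects described above (the data types are my assumptions, not something stated in the question):

CREATE TABLE tbl_B (
    ID      INT PRIMARY KEY,
    DB_Name SYSNAME
)

CREATE TABLE tblCustomer_Dictionary (
    ColumnName     SYSNAME,   -- generic name, e.g. CustomerID
    DataBaseName   SYSNAME,   -- e.g. DelhiDB
    DataBaseID     INT,       -- matches tbl_B.ID
    db_ColumnNamme SYSNAME    -- actual column name in that database's tbl_C
)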
If I understood your question correctly, then the solution below is what you are looking for. Let me know if it works for you.
DECLARE @tblDatabaseName AS TABLE (Id INT, dbName VARCHAR(100))
--DROP TABLE #tmpRecord
INSERT INTO @tblDatabaseName (Id, dbName) VALUES (1,'DelhiDB'), (2,'MumbaiDB')
DECLARE @SQL AS VARCHAR(8000)
DECLARE @Id INT
DECLARE @dbName AS VARCHAR(100)
CREATE TABLE #tmpRecord (
columnName VARCHAR(128), DBID INT, DatabaseName VARCHAR(100))
DECLARE cur_Traverse CURSOR FOR SELECT Id, dbName FROM @tblDatabaseName
OPEN cur_Traverse
FETCH NEXT FROM cur_Traverse INTO @Id, @dbName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQL = 'INSERT INTO #tmpRecord (ColumnName, DbId, DatabaseName)
SELECT name, ' + CONVERT(VARCHAR(10), @Id) + ' AS DBID, ''' + @dbName + ''' as dbname'
+ ' FROM ' + @dbName + '.sys.all_columns s
WHERE object_Id = (SELECT TOP(1) object_Id FROM ' + @dbName + '.sys.all_objects WHERE name=''tbl_C'')'
PRINT @SQL
EXECUTE (@SQL)
FETCH NEXT FROM cur_Traverse INTO @Id, @dbName
END
CLOSE cur_Traverse
DEALLOCATE cur_Traverse
SELECT * FROM #tmpRecord
You appear to want:
select t.colsname as ColumnName,
b.db_name as DataBaseName,
b.id as DataBaseID,
t.cols as db_ColumnNamme
from tbl_C c
cross apply (values ('custID', 'CustomerID'), ('custName', 'CustomerName'),
('CustPhone', 'CustomerPhone')
) t (cols, colsname)
inner join tbl_B b on b.id = c.custID;
I want to compare values from the first table InputStrings with values in the second table StringConstraints in SQL.
InputStrings
+-----------+------------+-----------+
| Name | Address | City |
+-----------+------------+-----------+
| abcabcabc | xyxyxyxy | qweqweqwe |
| abbcabc | xyxxyxy | qweqwe |
| abccabc | xyxyxyxyxy | qwweqwe |
+-----------+------------+-----------+
StringConstraints
+---------+-----------+-----------+
| colName | minlength | maxlength |
+---------+-----------+-----------+
| Name | 2 | 20 |
| Address | 4 | 10 |
| City | 5 | 10 |
+---------+-----------+-----------+
I want to check if the length of the values in the Name column is between 2 and 20; length of values in the Address column is between 4 and 10; and length of values in the City column is between 5 and 10.
There are 68 rows like this in my table, and InputStrings has 40 columns. I can't write a check for each and every one by hand.
Can anyone help me make a generalized solution to compare the values?
I'm new to the database area.
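For anyone who wants to reproduce this, here is a minimal setup matching the sample data above (the data types are assumptions):

CREATE TABLE InputStrings (Name VARCHAR(50), Address VARCHAR(50), City VARCHAR(50))
INSERT INTO InputStrings VALUES
    ('abcabcabc', 'xyxyxyxy',   'qweqweqwe'),
    ('abbcabc',   'xyxxyxy',    'qweqwe'),
    ('abccabc',   'xyxyxyxyxy', 'qwweqwe')

CREATE TABLE StringConstraints (colName VARCHAR(128), minlength INT, maxlength INT)
INSERT INTO StringConstraints VALUES
    ('Name', 2, 20),
    ('Address', 4, 10),
    ('City', 5, 10)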
This will return the bad records:
Select *
from InputStrings
where (len(Name) not between (select minlength from StringConstraints where colName = 'Name')
and
(select maxlength from StringConstraints where colName = 'Name'))
or
(len(Address) not between (select minlength from StringConstraints where colName = 'Address')
and
(select maxlength from StringConstraints where colName = 'Address'))
or
(len(City) not between (select minlength from StringConstraints where colName = 'City')
and
(select maxlength from StringConstraints where colName = 'City'))
This is a way to create the same thing dynamically. Please keep in mind that Stack Overflow is not a coding service.
declare @sql varchar(max) =''
,@Name varchar(50)
,@min int
,@max int
,@counter int =1;
declare csr cursor
for
select colName, minlength, [maxlength] from (values ('Name',2,20),('Address',4,10),('City',5,10)) a(colName,minlength,[maxlength])
open csr
fetch next from csr
into @Name,@min,@max
set @sql = 'select * from inputstrings where '
while @@FETCH_STATUS = 0
begin
if(@counter =1 ) set @sql = @sql + '(len('+ @Name+') not between ' + cast(@min as varchar(5)) + ' and ' + cast(@max as varchar(5)) + ') '
else set @sql = @sql + 'or (len('+ @Name+') not between ' + cast(@min as varchar(5)) + ' and ' + cast(@max as varchar(5)) + ') '
set @counter=@counter +1
fetch next from csr
into @Name,@min,@max
end
close csr
deallocate csr
print @sql
exec(@sql)
You didn't state your DBMS, so this is a solution for Postgres:
select *
from (
select i.id,
t.*,
c.minlength,
c.maxlength,
length(t.value) between c.minlength and c.maxlength as is_valid
from inputstrings i
cross join lateral jsonb_each_text(to_jsonb(i) - 'id') as t(colname, value)
join stringconstraints c on lower(c.colname) = lower(t.colname)
) t
where not is_valid
order by id;
This first turns each row from the table inputstrings into key/value pairs and the result of that is joined to the stringconstraints table. From there it's easy to validate the column values based on the constraints. This is independent of the number of columns in inputstrings. The result will be one row per column value that violates the constraints.
For the following setup:
create table inputstrings (id integer, name text, address text, city text);
insert into inputstrings
values
(1, 'Name OK','Some Address that is too long','City Name OK'),
(2, 'N', 'Address OK', 'Cty'),
(3, 'Good Name', 'Good Address', 'Good City');
create table stringconstraints (colname text, minlength int, maxlength int);
insert into stringconstraints
values
('Name', 2, 20),
('Address', 4, 12),
('City', 5, 15);
The query returns this result:
id | colname | value | minlength | maxlength | is_valid
---+---------+-------------------------------+-----------+-----------+---------
1 | address | Some Address that is too long | 4 | 12 | false
2 | name | N | 2 | 20 | false
2 | city | Cty | 5 | 15 | false
I added the id column so that it is possible to match an invalid column value to the actual source row.
Online example: http://rextester.com/VVG55773
Second example with more columns: http://rextester.com/QCKG53573 (note the query hasn't changed)
I have a procedure with one parameter, let's say @AssetID int.
I want to select a column value from another table, then use that value as the parameter for this procedure.
My stored procedure looks something like this; the table has already been filtered with a WHERE clause on the @AssetID parameter:
declare @inspectyear as nvarchar(max), @calc as nvarchar(max), @query as nvarchar(max);
set @inspectyear = STUFF((select distinct ',' + quotename(InspectYear) from ##t2 c
for XML path(''), type).value('.','NVARCHAR(MAX)'),1,1,'')
select @calc = ', ' + quotename(Max(InspectYear)) + ' - ' + quotename(Max(InspectYear)-2)
+ ' as Calc1, ' + quotename(Max(InspectYear)) + ' - ' + quotename(min(InspectYear))
+ ' as Calc2' from #t2;
set @query =
';with data as
(
select inspectyear,
partno, Pos, number
from #t2
unpivot
(
number
for Pos in ([Pos1], [Pos2], [Pos3], [Pos4])
) unpvt
)
select * ' + @calc + ' into ##temp
from data
pivot
(
sum(number)
for inspectyear in (' + @inspectyear + ')
) pvt
order by partno';
exec sp_executesql @query = @query;
select * from ##temp;
drop table ##temp;
So I need to create another procedure, for instance:
create procedure spExecmyProc
as
begin
exec spMyProc @AssetID -- <-- the parameter taken from the other table
end
go
The parameter value is taken from another table.
Is it possible to do that? The result should be a single result set.
So far, this is what I did. It works, but it does not produce a single result set: it creates one result set per @AssetID value:
declare @AssetID int;
declare cur CURSOR FOR
select distinct AssetID from myTable
open cur
fetch next from cur into @AssetID
while @@FETCH_STATUS = 0
begin
exec mySPName @AssetID
fetch next from cur into @AssetID
end
close cur
DEALLOCATE cur
Thank you.
I'm not 100% sure I understand what you're trying to achieve, but if you want to be able to run some code on each value of AssetID in myTable, returning just one result for each input value, I think you could use a scalar-valued function. Let's pretend, for simplicity, that the purpose of your original stored procedure was just to increment the AssetId value by 1; your function could be created like this:
CREATE FUNCTION fnMyFunction (@AssetId INT)
RETURNS INT
AS
BEGIN
DECLARE @return INT
SET @return = @AssetId + 1
RETURN @return
END
If you then have some values in a table:
CREATE TABLE Assets (
AssetId INT
)
INSERT INTO Assets
SELECT 1
UNION ALL
SELECT 2
UNION ALL
SELECT 3
UNION ALL
SELECT 5
UNION ALL
SELECT 7
UNION ALL
SELECT 5
You can call your function on each value you return:
SELECT AssetId,
dbo.fnMyFunction(AssetId) AS AssetIdPlus1
FROM Assets
Which gives these results for my super simple dataset defined above:
/------------------------\
| AssetId | AssetIdPlus1 |
|---------+--------------|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 5 | 6 |
| 7 | 8 |
| 5 | 6 |
\------------------------/
If you just want to get the result for each unique value of AssetId in your table, then just return the DISTINCT results:
SELECT DISTINCT
AssetId,
dbo.fnMyFunction(AssetId)
FROM Assets
which would give these results for the same dataset above (with just one row for AssetId = 5):
/------------------------\
| AssetId | AssetIdPlus1 |
|---------+--------------|
| 1 | 2 |
| 2 | 3 |
| 3 | 4 |
| 5 | 6 |
| 7 | 8 |
\------------------------/
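A different route, if the existing stored procedure has to stay a procedure: its output can be collected into one table with INSERT ... EXEC, so the batch ends with a single combined result set. This is only a sketch; the column list of #combined must match whatever mySPName actually returns (the columns below are placeholders), and INSERT ... EXEC cannot be nested, so it only works if mySPName does not itself use INSERT ... EXEC.

CREATE TABLE #combined (SomeColumn INT, AnotherColumn VARCHAR(100))  -- placeholder columns

DECLARE @AssetID INT
DECLARE cur CURSOR FOR SELECT DISTINCT AssetID FROM myTable
OPEN cur
FETCH NEXT FROM cur INTO @AssetID
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #combined
    EXEC mySPName @AssetID            -- each call now lands in the same table
    FETCH NEXT FROM cur INTO @AssetID
END
CLOSE cur
DEALLOCATE cur

SELECT * FROM #combined               -- one result set for all AssetID values
DROP TABLE #combined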
Here's a sample of my table
[myTable]
id | random1 | random2 | random3 | random4
1 | 123 | 5357 | 10 | 642
2 | 423 | 34 | 20 | 531
3 | 9487 | 234 | 30 | 975
4 | 34 | 123 | 40 | 864
Here's my current query, but it isn't working like I'd expect it to:
SELECT
cols.*,
(SELECT SUM(cols.column_name) FROM myTable t)
FROM
(SELECT
table_name::text, column_name::text
FROM
information_schema.columns
where
table_name = 'myTable') as cols
I'm getting the error: function sum(text) does not exist, which makes sense. I'm pretty sure MySQL is messy enough to allow a reference like that, but I don't know how to do this in Postgres.
What I'd really like to have is an end result somewhere along the lines of...
table_name | column_name | sum
myTable | id | 10
myTable | random1 | 10067
myTable | random2 | 5748
myTable | random3 | 100
myTable | random4 | 3012
I want to take this query a lot further, but I'm getting really hung up on being able to reference the column name.
SQL queries are static. They select columns that are known in advance from tables that are known in advance. You cannot make them look up table names and columns from the database dictionary and then magically glue those names into themselves.
What you can do: Write a program (Java, C#, PHP, whatever you like) doing the following:
Send a query to the DBMS to find the column names for the table you are interested in.
Build a SQL query string with the column names got.
Send this query to the DBMS.
declare @tableName varchar(255) = 'myTable' --Change this to the table name you want
/*create table and column name dataSet and insert values*/
if object_id('tempdb.dbo.#objectSet') is not null
drop table #objectSet
create table #objectSet
(table_name varchar(256),
columnID int,
column_name varchar(256),
[sum] int)
insert into #objectSet
(table_name,
columnID,
column_name)
select O.name table_name,
C.column_id columnID,
C.name column_name
from sys.all_objects O
join sys.all_columns C
on O.object_id = C.object_id
join sys.types T
on C.user_type_id = T.user_type_id
where O.object_id = object_id(@tableName)
and T.name in ('int', 'tinyint', 'smallint', 'bigint') --Columns with Aggregatable datatypes only, all other columns will be excluded from the set.
/*Create loop variables for each column*/
declare @SQL as varchar(4000),
@counter int = 1,
@maxCount int
select @maxCount = SQ.maxCount
from ( select count(*) maxCount
from #objectSet OS) SQ
/*Run loop, updating each column as it goes*/
while @counter <= @maxCount
begin
select @SQL = 'update OS set OS.[sum] = SQ.[sum] from #objectSet OS join (select sum(DS.' + OS.column_name + ') [sum] from ' + @tableName + ' DS) SQ on OS.column_name = ''' + OS.column_name + ''''
from #objectSet OS
where OS.columnID = @counter
exec (@SQL)
select @counter += 1
end
/*Display Results*/
select OS.table_name,
OS.column_name,
OS.[sum]
from #objectSet OS
Using system object tables, some dynamic T-SQL, and a loop should do it.
Is it possible to write a statement that selects a column from a table and converts the results to a string?
Ideally I would want to have comma separated values.
For example, say that the SELECT statement looks something like
SELECT column
FROM table
WHERE column<10
and the result is a column with values
|column|
--------
| 1 |
| 3 |
| 5 |
| 9 |
I want as a result the string "1, 3, 5, 9"
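For anyone trying out the answers below, here is a minimal setup matching the example (the table and column names t/col are placeholders; some answers use their own names such as YourTable/col1):

CREATE TABLE t (col INT)
INSERT INTO t (col) VALUES (1), (3), (5), (9)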
You can do it like this:
Fiddle demo
declare @results varchar(500)
select @results = coalesce(@results + ',', '') + convert(varchar(12),col)
from t
order by col
select @results as results
| RESULTS |
-----------
| 1,3,5,9 |
There is a new method in SQL Server 2017:
SELECT STRING_AGG (column, ',') AS column FROM Table;
that will produce 1,3,5,9 for you
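If the order of the values matters, STRING_AGG also accepts a WITHIN GROUP clause (a sketch against the t/col setup above):

SELECT STRING_AGG(col, ', ') WITHIN GROUP (ORDER BY col) AS results
FROM t
WHERE col < 10;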
select stuff(list,1,1,'')
from (
select ',' + cast(col1 as varchar(16)) as [text()]
from YourTable
for xml path('')
) as Sub(list)
Example at SQL Fiddle.
SELECT CAST(<COLUMN Name> AS VARCHAR(3)) + ','
FROM <TABLE Name>
FOR XML PATH('')
The current accepted answer doesn't work for multiple groupings.
Try this when you need to build the aggregated list separately for each category of rows.
Suppose I have the following data:
+---------+-----------+
| column1 | column2 |
+---------+-----------+
| cat | Felon |
| cat | Purz |
| dog | Fido |
| dog | Beethoven |
| dog | Buddy |
| bird | Tweety |
+---------+-----------+
And I want this as my output:
+------+----------------------+
| type | names |
+------+----------------------+
| cat | Felon,Purz |
| dog | Fido,Beethoven,Buddy |
| bird | Tweety |
+------+----------------------+
(If you're following along:
create table #column_to_list (column1 varchar(30), column2 varchar(30))
insert into #column_to_list
values
('cat','Felon'),
('cat','Purz'),
('dog','Fido'),
('dog','Beethoven'),
('dog','Buddy'),
('bird','Tweety')
)
Now – I don’t want to go into all the syntax, but as you can see, this does the initial trick for us:
select ',' + cast(column2 as varchar(255)) as [text()]
from #column_to_list sub
where column1 = 'dog'
for xml path('')
--Using "as [text()]" here is specific to the “for XML” line after our where clause and we can’t give a name to our selection, hence the weird column_name
output:
+------------------------------------------+
| XML_F52E2B61-18A1-11d1-B105-00805F49916B |
+------------------------------------------+
| ,Fido,Beethoven,Buddy |
+------------------------------------------+
You can see it's limited: it covers just one grouping (where column1 = 'dog'), it leaves a comma at the front, and the column gets an odd auto-generated name.
So, first let's handle the leading comma using the 'stuff' function and name our column stuff_list:
select stuff([list],1,1,'') as stuff_list
from (select ',' + cast(column2 as varchar(255)) as [text()]
from #column_to_list sub
where column1 = 'dog'
for xml path('')
) sub_query([list])
--"sub_query([list])" just names our column as '[list]' so we can refer to it in the stuff function.
Output:
+----------------------+
| stuff_list |
+----------------------+
| Fido,Beethoven,Buddy |
+----------------------+
Finally let’s just mush this into a select statement, noting the reference to the top_query alias defining which column1 we want (on the 5th line here):
select top_query.column1,
(select stuff([list],1,1,'') as stuff_list
from (select ',' + cast(column2 as varchar(255)) as [text()]
from #column_to_list sub
where sub.column1 = top_query.column1
for xml path('')
) sub_query([list])
) as pet_list
from #column_to_list top_query
group by column1
order by column1
output:
+---------+----------------------+
| column1 | pet_list |
+---------+----------------------+
| bird | Tweety |
| cat | Felon,Purz |
| dog | Fido,Beethoven,Buddy |
+---------+----------------------+
And we’re done.
You can read more here:
FOR XML PATH in SQL server and [text()]
https://learn.microsoft.com/en-us/sql/relational-databases/xml/use-path-mode-with-for-xml?view=sql-server-2017
https://www.codeproject.com/Articles/691102/String-Aggregation-in-the-World-of-SQL-Server
This is a stab at creating a reusable column-to-comma-separated-string solution. In this case, I only want strings that have values; I do not want empty strings or nulls.
First I create a user defined type that is a one column table.
-- ================================
-- Create User-defined Table Type
-- ================================
USE [RSINET.MVC]
GO
-- Create the data type
CREATE TYPE [dbo].[SingleVarcharColumn] AS TABLE
(
data NVARCHAR(max)
)
GO
The real purpose of the type is to simplify creating a scalar function to put the column into comma separated values.
-- ================================================
-- Template generated from Template Explorer using:
-- Create Scalar Function (New Menu).SQL
--
-- Use the Specify Values for Template Parameters
-- command (Ctrl-Shift-M) to fill in the parameter
-- values below.
--
-- This block of comments will not be included in
-- the definition of the function.
-- ================================================
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: Rob Peterson
-- Create date: 8-26-2015
-- Description: This will take a single varchar column and convert it to
-- comma separated values.
-- =============================================
CREATE FUNCTION fnGetCommaSeparatedString
(
-- Add the parameters for the function here
@column AS [dbo].[SingleVarcharColumn] READONLY
)
RETURNS VARCHAR(max)
AS
BEGIN
-- Declare the return variable here
DECLARE @result VARCHAR(MAX)
DECLARE @current VARCHAR(MAX)
DECLARE @counter INT
DECLARE @c CURSOR
SET @result = ''
SET @counter = 0
-- Add the T-SQL statements to compute the return value here
SET @c = CURSOR FAST_FORWARD
FOR SELECT COALESCE(data,'') FROM @column
OPEN @c
FETCH NEXT FROM @c
INTO @current
WHILE @@FETCH_STATUS = 0
BEGIN
IF @result <> '' AND @current <> '' SET @result = @result + ',' + @current
IF @result = '' AND @current <> '' SET @result = @current
FETCH NEXT FROM @c
INTO @current
END
CLOSE @c
DEALLOCATE @c
-- Return the result of the function
RETURN @result
END
GO
Now, to use this, I insert the column I want to convert to a comma-separated string into a variable of the SingleVarcharColumn type.
DECLARE @s as SingleVarcharColumn
INSERT INTO @s VALUES ('rob')
INSERT INTO @s VALUES ('paul')
INSERT INTO @s VALUES ('james')
INSERT INTO @s VALUES (null)
INSERT INTO @s
SELECT iClientID FROM [dbo].tClient
SELECT [dbo].fnGetCommaSeparatedString(@s)
To get results like this.
rob,paul,james,1,9,10,11,12,13,14,15,16,18,19,23,26,27,28,29,30,31,32,34,35,36,37,38,39,40,41,42,44,45,46,47,48,49,50,52,53,54,56,57,59,60,61,62,63,64,65,66,67,68,69,70,71,72,74,75,76,77,78,81,82,83,84,87,88,90,91,92,93,94,98,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,120,121,122,123,124,125,126,127,128,129,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159
I made the data column in my SingleVarcharColumn type an NVARCHAR(MAX), which may hurt performance, but flexibility was what I was looking for and it runs fast enough for my purposes. It would probably be faster as a VARCHAR with a fixed, smaller width, but I have not tested that.
ALTER PROCEDURE [dbo].[spConvertir_CampoACadena]( @nomb_tabla varchar(30),
@campo_tabla varchar(30),
@delimitador varchar(5),
@respuesta varchar(max) OUTPUT
)
AS
DECLARE @query nvarchar(1000),
@cadena varchar(500)
BEGIN
SET @query = N'SELECT @cadena = COALESCE(@cadena + '''+ @delimitador +''', '''') + '+ @campo_tabla + ' FROM '+ @nomb_tabla
--select @query
-- EXEC() runs in its own scope, so use sp_executesql with an OUTPUT parameter to get @cadena back
EXEC sp_executesql @query, N'@cadena varchar(500) OUTPUT', @cadena = @cadena OUTPUT
SET @respuesta = @cadena
END
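An example call (a sketch; the tClient / iClientID names reuse the example from the answer above, so replace them with your own table and column):

DECLARE @lista varchar(max)
EXEC [dbo].[spConvertir_CampoACadena]
    @nomb_tabla  = 'tClient',
    @campo_tabla = 'iClientID',
    @delimitador = ',',
    @respuesta   = @lista OUTPUT
SELECT @lista AS resultado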
You can use the following method:
select
STUFF(
(
select ', ' + CONVERT(varchar(10), ID) FROM @temp
where ID<50
group by ID for xml path('')
), 1, 2, '') as IDs
Implementation:
Declare @temp Table(
ID int
)
insert into @temp
(ID)
values
(1)
insert into @temp
(ID)
values
(3)
insert into @temp
(ID)
values
(5)
insert into @temp
(ID)
values
(9)
select
STUFF(
(
select ', ' + CONVERT(varchar(10), ID) FROM @temp
where ID<50
group by ID for xml path('')
), 1, 2, '') as IDs
Result will be:
1, 3, 5, 9
An easy variation that I found and like:
SELECT STUFF((
select ','+ name
from tblUsers
FOR XML PATH('')
)
,1,1,'') AS names
names
---------
mari,joan,carls
---------
In Oracle and DB2 you can use the LISTAGG function, e.g.:
SELECT LISTAGG(column_name, ',') WITHIN GROUP (ORDER BY column_name) FROM table_name;
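For grouped output, LISTAGG can be combined with GROUP BY (a sketch in Oracle syntax, assuming a Pets table like the one in the next answer):

SELECT type, LISTAGG(name, ',') WITHIN GROUP (ORDER BY name) AS names
FROM Pets
GROUP BY type;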
In MySQL, the simplest way of doing this is GROUP_CONCAT:
SELECT GROUP_CONCAT(Column) FROM table
+------+----------------------+
| type | name                 |
+------+----------------------+
| cat | Felon |
| cat | Purz |
| dog | Fido |
| dog | Beethoven |
| dog | Buddy |
| bird | Tweety |
+------+----------------------+
select group_concat(name) from Pets
group by type
Here you can easily get the answer with a single SQL statement, and by using GROUP BY you can split the result based on that column's value. You can also use your own custom separator for the values (see the sketch after the result below).
Result:
+------+----------------------+
| type | names |
+------+----------------------+
| cat | Felon,Purz |
| dog | Fido,Beethoven,Buddy |
| bird | Tweety |
+------+----------------------+
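As mentioned above, GROUP_CONCAT also takes a custom separator, and the order of the values can be controlled too (a sketch against the same Pets table):

SELECT type, GROUP_CONCAT(name ORDER BY name SEPARATOR '; ') AS names
FROM Pets
GROUP BY type;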