SQL - Concatenate all columns from any table

I'm using triggers to audit table changes. Right now I capture the individual column changes in the following:
DECLARE @statement VARCHAR(MAX)
SELECT @statement =
'Col1: ' + CAST(ISNULL(Col1, '') AS VARCHAR) + ', Col2: ' + CAST(ISNULL(Col2, '') AS VARCHAR) + ', Col3: ' + CAST(ISNULL(Col3, '') AS VARCHAR)
FROM INSERTED;
The problem is, I need to tweak the column names for every table/trigger that I want to audit against. Is there a way I can build @statement independently of the table, using a more generic approach?
cheers
David

What you need to do is build a memory table using the following query and then loop through it to produce the SQL statement you want:
select column_name from information_schema.columns
where table_name like 'tName'
order by ordinal_position
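For example, a minimal sketch (not from the original post) that generates the concatenation expression for a hypothetical table named 'MyTable'; the output is the expression you would paste into the trigger body:
DECLARE @expr NVARCHAR(MAX);
SELECT @expr = COALESCE(@expr + ' + '', '' + ', '')
       + '''' + COLUMN_NAME + ': '' + CAST(ISNULL(' + QUOTENAME(COLUMN_NAME) + ', '''') AS VARCHAR(MAX))'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'   -- hypothetical; substitute the audited table's name
ORDER BY ORDINAL_POSITION;
SELECT @expr;   -- copy the generated expression into the trigger's SELECT ... FROM INSERTED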
However, I am not sure this would be the right thing to do for an audit. How are you going to pull the data back later? Say in one of your releases you happen to drop a column: what will happen then? How will you know which column held which data?

Related

Find rows containing delimited words within nvarchar parameter

I have a procedure that selects an offset of rows from a table:
SELECT * --table contains ID and Name columns
FROM Names
ORDER BY ID
OFFSET @Start ROWS
FETCH NEXT @Length ROWS ONLY
In addition to the @Start and @Length parameters, the procedure also receives a @SearchValue NVARCHAR(255) parameter. @SearchValue contains a string of values delimited by a space, for example '1 ik mi' or 'Li 3'.
What I need is to query every record containing all of those values. So, if @SearchValue is '1 ik mi', it should return any records that contain all three values: '1', 'mi', and 'ik'. Another way to understand this is by going here, searching the table (try searching 00 eer 7), and observing the filtered results.
I have the freedom to change the delimiter or run some function (in C#, in my case) that could format an array of those words.
Below are our FAILED attempts (we didn't try implementing it with OFFSET yet):
Select ID, Name
From Names
Where Cast(ID as nvarchar(255)) in (Select value from string_split(@SearchValue, ' ')) AND
Name in (Select value from string_split(@SearchValue, ' '))
SELECT ID, Name
FROM Names
WHERE @SearchValue LIKE '% ' + CAST(ID AS nvarchar(20)) + ' %' AND
@SearchValue LIKE '% ' + Name + ' %';
We used Microsoft docs on string_split for the ideas above.
Tomorrow, I will try to implement this solution, but I'm wondering if there's another way to do this in case that one doesn't work. Thank you!
Your best bet will be to use a FULL TEXT index. This is what they're built for.
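For illustration (not in the original answer), a full-text setup against the Names table from the question might look like the sketch below; note that CONTAINS matches word prefixes rather than arbitrary substrings, and the catalog and index names here (ftNamesCatalog, PK_Names) are assumptions:
CREATE FULLTEXT CATALOG ftNamesCatalog AS DEFAULT;    -- assumed catalog name
CREATE FULLTEXT INDEX ON dbo.Names ([Name])
    KEY INDEX PK_Names;                               -- assumes a unique index PK_Names on ID
GO
-- every token must appear (as a word prefix) in Name
SELECT ID, [Name]
FROM dbo.Names
WHERE CONTAINS([Name], '"1*" AND "ik*" AND "mi*"');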
Having said that, you can work around it... BUT! You're going to be building a query to do it. You can either build the query in C# and fire it at the database, or build it in the database. However, you're never going to be able to optimise the query very well, because users being users will fire all sorts of garbage into your search that you'll need to watch out for, which is obviously a topic for another discussion.
The solution below makes use of sp_executesql, so you're going to have to watch out for SQL injection (before someone else picks apart this whole answer just to point out SQL injection):
DROP TABLE IF EXISTS #Cities;
CREATE TABLE #Cities(id INTEGER IDENTITY PRIMARY KEY, [Name] VARCHAR(100));
INSERT INTO #Cities ([Name]) VALUES
('Cooktown'),
('South Suzanne'),
('Newcastle'),
('Leeds'),
('Podunk'),
('Udaipur'),
('Delhi'),
('Murmansk');
DECLARE @SearchValue VARCHAR(20) = 'ur an rm';
DECLARE @query NVARCHAR(1000);
SELECT @query = COALESCE(@query + '%'' AND [Name] LIKE ''%', '') + value
FROM (Select value from string_split(@SearchValue, ' ')) a;
SELECT @query = 'SELECT * FROM #Cities WHERE [Name] LIKE ''%' + @query + '%''';
EXEC sp_executesql @query;
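As an aside (not part of the original answer), the same all-tokens requirement can be written without dynamic SQL, assuming SQL Server 2016+ for STRING_SPLIT and reusing the #Cities table above:
DECLARE @SearchValue VARCHAR(20) = 'ur an rm';
-- a row qualifies only if no token fails to match the Name
SELECT c.id, c.[Name]
FROM #Cities AS c
WHERE NOT EXISTS (
    SELECT 1
    FROM STRING_SPLIT(@SearchValue, ' ') AS s
    WHERE c.[Name] NOT LIKE '%' + s.value + '%'
);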

SSMS - MS SQL Server query option set ON/OFF to display all columns in Shortdate format?

In SSMS, for MS SQL Server 2008 or newer versions, is there a general query option or something like that, something to set ON or OFF before launching the query, in order to view all DATE columns as Shortdate (only date, without time)?
Something like SET ANSI_NULLS { ON | OFF } ?
Because I often use 'select * from table' or similar approaches, and the tables have many columns with the DATE columns in different places, I don't want to check every time where these columns are and explicitly use CONVERT or CAST on just those columns to display them properly.
Thank you for any suggestion.
Yeah, I would solve such a situation from the interface end only.
Also, regarding this:
Because I often use 'select * from table', or different approaches
that in itself is bad practice; you can't always rely on your own ad hoc approaches.
Nonetheless, in SQL we can do something like this:
USE AdventureWorks2012
GO
--proc parameters
DECLARE @tablename VARCHAR(50) = 'Employee'
DECLARE @table_schema VARCHAR(50) = 'HumanResources'
--local variables
DECLARE @Columnname VARCHAR(max) = ''
DECLARE @Sql VARCHAR(max) = ''
--build the select list, casting date/datetime columns to date and keeping their column names
SELECT @Columnname = @Columnname + CASE
WHEN DATA_TYPE = 'date'
OR DATA_TYPE = 'datetime'
THEN 'cast(' + QUOTENAME(COLUMN_NAME) + ' as date) as ' + QUOTENAME(COLUMN_NAME)
ELSE QUOTENAME(COLUMN_NAME)
END + ','
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @tablename
AND TABLE_SCHEMA = @table_schema
ORDER BY ORDINAL_POSITION
--strip the trailing comma
SET @Columnname = STUFF(@Columnname, len(@Columnname), 1, '')
--PRINT @Columnname
SET @Sql = 'select ' + @Columnname + ' from ' + @table_schema + '.' + @tablename
--PRINT @Sql
EXEC (@Sql)
It can be further improved as per your requirements. Also, please use sp_executesql, and you can customize the CASE condition as needed.
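Following that suggestion, a sketch of the execution step with sp_executesql (same variable names as above, but @Sql would need to be declared NVARCHAR(MAX), since sp_executesql only accepts a Unicode string):
-- assumes: DECLARE @Sql NVARCHAR(MAX) = N'' instead of VARCHAR(max)
SET @Sql = N'select ' + @Columnname + N' from '
         + QUOTENAME(@table_schema) + N'.' + QUOTENAME(@tablename)
EXEC sp_executesql @Sql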
There is no "magic" display format button or function in SSMS no. When you execute a query, SSMS will display that column in the format that is appropriate for that data type; for a datetime field that will include the time.
If you don't want to include the time, then you have to either CAST or CONVERT the individual column(s), or format the data appropriately in your presentation layer (for example, if you're using Excel then dd/MM/yyyy may be appropriate).
If all your columns have '00:00:00.000' at the end of their value, the problem isn't the display format, it's your data type choice. Clearly, the problem isn't that SSMS is returning a time for a datetime column, it's that you've declared a column as a datetime when it should have been a date. You can change the data type of a column using ALTER TABLE. For example:
USE Sandbox;
Go
CREATE TABLE TestTable (ID smallint IDENTITY(1,1), DateColumn datetime);
INSERT INTO TestTable (DateColumn)
VALUES ('20180201'),('20180202'),('20180203'),('20180204'),('20180205');
SELECT *
FROM TestTable;
GO
ALTER TABLE TestTable ALTER COLUMN DateColumn date;
GO
SELECT *
FROM TestTable;
GO
DROP TABLE TestTable;
TL;DR: SSMS displays data in an appropriate format for the data you have. If you don't like it, you have to supply an alternate format for it to display for each appropriate column. If the issue is your data, change the data type.
Edit: I wanted to add a little more to this.
This question is very much akin to also asking "I would like to be able to run queries where decimals only return the integer part of the value. Can this be done automagically?". So, the value 9.1 would return 9, but also, the value 9.999999999 would return 9.
Now, I realise that you "might" be thinking "Numbers aren't anything like dates", but really, they are. At the end of the day (especially in data), a date is just a number (hell, a datetime in SQL Server is stored as the number of days after 1900-01-01, with the time as the decimal part of that number, so 43136.75 is actually 2018-02-07 18:00:00.000).
Now that we're talking in numbers, does it seem like a good idea to you to have all your decimals returned as their FLOOR value? I imagine the answer is "no". Imagine if you were doing some kind of accounting and only summing the values of transactions using the FLOOR value. You could be losing thousands (or more) of £/$/€.
Think of the old example of the people who stole money from payments which contained values of less than a penny. The amount they stole was huge, yet not one individual theft had a value >= $0.01. The same principle applies here; precision is very important, and if your column has that precision it should be there for a reason.
The same is true for dates. If you are storing times with dates, and the time isn't relevant for that specific query, change your query; having a setting to ignore times (or decimal points) is, in all honesty, just a bad idea.
I don't think that there is an option like this in SSMS. The best thing I can come up with is to create views of the tables; this way you can do a
select convert(date, <date column>)
one time and they will appear as just dates in the views.
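For example, a minimal sketch assuming a hypothetical dbo.Orders table with an ID column and an OrderDate datetime column:
CREATE VIEW dbo.vOrders
AS
SELECT ID,
       CONVERT(date, OrderDate) AS OrderDate   -- time portion dropped in the view
FROM dbo.Orders;
GO
-- SELECT * FROM dbo.vOrders now shows OrderDate as a plain date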

sum range of columns SQL

I'm using SQL Server 2016. Suppose I have columns sales1, sales2,..,sales100 in a dataset, along with some other column, say name. I want to write a SQL query like
SELECT name, SUM(sales1), SUM(sales2),..., SUM(sales100)
FROM dataset
GROUP BY name
but I don't believe this is valid SQL syntax. Is there a shorthand way to do this?
There is no shorthand, but SSMS provides a simple way to do this.
From Object Explorer expand your table and drag the column folder to a query window.
It will give you a csv list of columns.
Oh, but that is not what you wanted!
Replace ',' with '), SUM(' and with minor tweaking you can have the desired string.
Or you could try this:
DECLARE @SQL VARCHAR(MAX) = ''
SELECT @SQL += 'SUM(' + column_name + '), '
FROM information_Schema.COLUMNS
WHERE table_name = 'mytable'
AND column_name LIKE 'sales%'
ORDER BY ORDINAL_POSITION
SELECT @SQL
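If you want to go one step further and run it directly, a sketch along the same lines (still assuming a table named mytable with a name column, as in the snippet above) trims the trailing comma and executes the statement:
DECLARE @SQL NVARCHAR(MAX) = N''
SELECT @SQL += N'SUM(' + QUOTENAME(column_name) + N') AS ' + QUOTENAME(column_name) + N', '
FROM information_schema.columns
WHERE table_name = 'mytable'
AND column_name LIKE 'sales%'
ORDER BY ORDINAL_POSITION
-- strip the trailing ', ' and wrap in the full grouped query
SET @SQL = N'SELECT name, ' + LEFT(@SQL, LEN(@SQL) - 1) + N' FROM mytable GROUP BY name'
EXEC sp_executesql @SQL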

Dynamic SQL w/ Loop Over All Columns in a Table

I recently was pulled off of an ASP.net conversion project at my new job to help with a rather slow, mundane, but desperate task another department is handling. Basically, they are using a simple SQL script on every column of every table in every database (it's horrible) to generate a count of all of the distinct records on each table for each column. My SQL experience is limited and my dynamic SQL experience is zero, more or less, but since I have not been given permissions yet to even access this particular database I went to work attempting to formulate a more automated query to perform this task, testing on a database I do have access to.
In short, I ran into some issues and I was hoping someone might be able to help me fill in the blanks. It'll save this department an estimated month or more of time if something more automated can be utilized.
These are the two scripts I was given and told to run on each column. The first one was for any non-bit/boolean column and also for non-datetime columns. The second was to be used for any datetime column.
select columnName, count(*) qty
from tableName
group by columnName
order by qty desc
select year(a.columnName), count(*) qty
from tableName a
group by year(a.columnName)
order by qty desc
Doing this thousands of times doesn't seem like a lot of fun to me, so here is more or less some pseudo-code that I came up with that I think could solve the issue; I will point out which areas I am fuzzy on.
declare @sql nvarchar(2500)
set @sql = 'the first part(s) of statement'
[pseudo-pseudo] Get "List" of All Column Names in Table (I do not believe there is a Collection datatype in SQL code, but you get the idea)
[pseudo-pseudo] Loop Through "List" of Column Names
(I know this dot notation wouldn't work, but I would like to perform something similar to this)
IF ColumnName.DataType LIKE 'date%'
set @sql = @sql + ' something'
IF ColumnName.DataType = bit
set @sql = @sql + ' something else' --actually it'd be preferable to skip bit/boolean datatypes if possible as these aren't necessary for the reports being created by these queries
ELSE
set @sql = @sql + ' something other than something else'
set @sql = @sql + ' ending part of statement'
EXEC(@sql)
So to summarize, for simplicity's sake I'd like to let the user plug the table's name into a variable at the start of the query:
declare @tableName nvarchar(50)
set @tableName = 'TABLENAME' --Enter Query's Table Name Here
Based on this, the code will loop through every column of that table, checking for datatype. If the datatype is a datetime (or other date like datatype), the "year" code would be added to the dynamic SQL. If it is anything else (except bit/boolean), then it will add the default logic to the dynamic SQL code.
Again, for simplicity's sake (even if it is bad practice), I figure the end result will be a dynamic SQL statement with multiple selects, one for each column in the table. Then the user would simply copy the output to Excel (which they are doing right now anyway). I know this isn't the perfect solution, so I am open to suggestions, but since time is of the essence and my experience with dynamic SQL is close to null, I thought a somewhat quick and dirty approach would be tolerable in this case.
I do apologize for my very haphazard preparation with this question but I do hope someone out there might be able to steer me in the right direction.
Thanks so much for your time, I certainly appreciate it.
Here's an example working through all the suggestions in the comments.
declare @sql nvarchar(max);
declare stat_cursor cursor local fast_forward for
select
case when x.name not in ('date', 'datetime2', 'smalldatetime', 'datetime') then
N'select
' + quotename(s.name, '''') + ' as schema_name,
' + quotename(t.name, '''') + ' as table_name,
' + quotename(c.name) + ' as column_name,
count(*) qty
from
' + quotename(s.name) + '.' + quotename(t.name) + '
group by
' + quotename(c.name) + '
order by
qty desc;'
else
N'select
' + quotename(s.name, '''') + ' as schema_name,
' + quotename(t.name, '''') + ' as table_name,
year(' + quotename(c.name) + ') as column_name,
count(*) qty
from
' + quotename(s.name) + '.' + quotename(t.name) + '
group by
year(' + quotename(c.name) + ')
order by
qty desc;'
end
from
sys.schemas s
inner join
sys.tables t
on s.schema_id = t.schema_id
inner join
sys.columns c
on c.object_id = t.object_id
inner join
sys.types x
on c.system_type_id = x.user_type_id
where
x.name not in (
'geometry',
'geography',
'hierarchyid',
'xml',
'timestamp',
'bit',
'image',
'text',
'ntext'
);
open stat_cursor;
fetch next from stat_cursor into @sql;
while @@fetch_status = 0
begin
exec sp_executesql @sql;
fetch next from stat_cursor into @sql;
end;
close stat_cursor;
deallocate stat_cursor;
Example SQLFiddle (note this only shows the first iteration through the cursor; not sure if this is a limitation of SQLFiddle or a bug).
I'd probably stash the results into a separate database if I were doing this. Also, I'd probably put the SQL-building bits into user-defined functions for maintainability (the slow bit will be running the queries, so there's no point optimizing how they're generated).
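For instance (an assumption, not something the answer spells out), stashing the results could be an INSERT ... EXEC into a staging table; the third column is sql_variant because the generated queries return either column values or a year, and (n)varchar(max) columns would need separate handling since sql_variant cannot hold max types:
CREATE TABLE dbo.ColumnValueStats (
    schema_name    sysname,
    table_name     sysname,
    distinct_value sql_variant,   -- the grouped value (or the year, for date columns)
    qty            int
);
-- inside the cursor loop, replace the plain EXEC with:
INSERT INTO dbo.ColumnValueStats (schema_name, table_name, distinct_value, qty)
EXEC sp_executesql @sql;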

SQL Server 2008: create trigger across all tables in db

Using SQL Server 2008, I've created a database where every table has a datetime column called "CreatedDt". What I'd like to do is create a trigger for each table so that when a value is inserted, the CreatedDt column is populated with the current date and time.
If you'll pardon my pseudocode, what I'm after is the T-SQL equivalent of:
foreach (Table in MyDatabase)
{
create trigger CreatedDtTrigger
{
on insert createddt = datetime.now;
}
}
If anyone would care to help out, I'd greatly appreciate it. Thanks!
As @EricZ says, the best thing to do is bind a default to the column. Here's how you'd add one to every table using a cursor and dynamic SQL:
declare @table sysname, @cmd nvarchar(max)
declare c cursor for
select name from sys.tables where is_ms_shipped = 0 order by name
open c; fetch next from c into @table
while @@fetch_status = 0
begin
set @cmd = 'ALTER TABLE ' + quotename(@table) + ' ADD CONSTRAINT DF_' + @table + '_CreatedDt DEFAULT GETDATE() FOR CreatedDt'
exec sp_executesql @cmd
fetch next from c into @table
end
close c; deallocate c
No need to go for cursors. Just copy the result of the query below and execute it.
select distinct 'ALTER TABLE '+ t.name +
' ADD CONSTRAINT DF_'+t.name+'_crdt DEFAULT getdate() FOR '+ c.name
from sys.tables t
inner join sys.columns c on t.object_id=c.object_id
where c.name like '%your column name%'
Here's another method:
DECLARE @SQL nvarchar(max);
SELECT @SQL = Coalesce(@SQL + '
', '')
+ 'ALTER TABLE ' + QuoteName(T.TABLE_SCHEMA) + '.' + QuoteName(T.TABLE_NAME)
+ ' ADD CONSTRAINT ' + QuoteName('DF_'
+ CASE WHEN T.TABLE_SCHEMA <> 'dbo' THEN T.Table_Schema + '_' ELSE '' END
+ C.COLUMN_NAME) + ' DEFAULT (GetDate()) FOR ' + QuoteName(C.COLUMN_NAME)
+ ';'
FROM
INFORMATION_SCHEMA.TABLES T
INNER JOIN INFORMATION_SCHEMA.COLUMNS C
ON T.TABLE_SCHEMA = C.TABLE_SCHEMA
AND T.TABLE_NAME = C.TABLE_NAME
WHERE
C.COLUMN_NAME = 'CreatedDt'
;
EXEC (@SQL);
This yields, and runs, a series of statements similar to the following:
ALTER TABLE [schema].[TableName] -- (line break added)
ADD CONSTRAINT [DF_schema_TableName] DEFAULT (GetDate()) FOR [ColumnName];
Some notes:
This uses the INFORMATION_SCHEMA views. It is best practice to use these where possible instead of the system tables because they are guaranteed to not change between versions of SQL Server (and moreover are supported on many DBMSes, so all things being equal it's best to use standards-compliant/portable code).
In a database with a case-sensitive default collation, one MUST use upper case for the INFORMATION_SCHEMA view names and column names.
When generating scripts it's important to pay attention to schema names and proper escaping (using QuoteName). Not doing so will break someone's system some day.
I think it is best practice to put the DEFAULT expression inside parentheses. While no error is received without them in this case, with them, nothing will break if GetDate() is ever given parameters and/or changed to a more complex expression.
If you decide that column defaults are not going to work for you, then the triggers you imagined are still possible. But it will take some serious work to manage whether the trigger already exists and alter or create it appropriately, JOIN to the inserted pseudo-table inside the trigger, and do it based on the full list of primary key columns for the table (if they exist; if they don't, then you're out of luck). It is quite possible, but extremely difficult: you could end up with nested, nested, nested dynamic SQL. I have one such automated object-creating script that contains 13 quote marks in a row...
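For a single table, such a trigger might look like the sketch below (assuming a hypothetical dbo.MyTable with an ID primary key and the CreatedDt column); generating one of these per table is where the nested dynamic SQL comes in:
CREATE TRIGGER trg_MyTable_CreatedDt
ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- stamp the rows that were just inserted, matched on the primary key
    UPDATE t
    SET    t.CreatedDt = GETDATE()
    FROM   dbo.MyTable AS t
    INNER JOIN inserted AS i
            ON i.ID = t.ID;
END;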