I'm struggling to create a query that will return all information from a table (like SELECT *), but omits the column(s) that are auto-incrementing (identity) columns.
The reason is that I'm displaying all data (using SELECT *, because I don't always know which columns are available) in a grid-view control, then opening the table up to allow updates to be carried out. However, this also opens up the auto-increment column(s) for editing, which prevents the update query from working.
So far I have found the sys.columns.is_identity column, which seems like it would help in some fashion; I'm just not sure how I could use it with a dynamic SELECT.
It should be noted that the columns are not always known, hence I use SELECT * to retrieve the initial required data.
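For reference, this is roughly the kind of metadata check I had in mind (the table name here is just a placeholder):
SELECT c.name, c.is_identity
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('dbo.table_name')  -- placeholder table name
  AND c.is_identity = 1;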
As you mentioned, the only way to do this is with sys.columns and a dynamic query:
DECLARE @col_list VARCHAR(8000)

-- Build a comma-separated list of every non-identity column
SET @col_list = (SELECT ',' + QUOTENAME(c.name)
                 FROM sys.columns c
                 JOIN sys.objects o
                     ON c.object_id = o.object_id
                 WHERE o.name = 'table_name'
                     AND c.is_identity <> 1
                 ORDER BY c.column_id
                 FOR XML PATH(''))

-- Strip the leading comma, then run the dynamic SELECT
SET @col_list = STUFF(@col_list, 1, 1, '')

EXEC('select ' + @col_list + ' from table_name')
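On SQL Server 2017 or later, STRING_AGG can replace the FOR XML PATH trick; a minimal sketch, assuming the same placeholder table name:
DECLARE @col_list NVARCHAR(MAX);

SELECT @col_list = STRING_AGG(CAST(QUOTENAME(c.name) AS NVARCHAR(MAX)), ',')
                   WITHIN GROUP (ORDER BY c.column_id)
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('table_name')  -- placeholder table name
  AND c.is_identity = 0;

EXEC('select ' + @col_list + ' from table_name');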
I need help searching our database. The problem is that we need to find all tables with the column name "sysmodified" and see whether there are any entries before a specific date (25-Sep-2019).
I tried to find the answer on Google and Stack Overflow, but I either get an answer for how to get the results before 25-Sep within one table (Example 1), or for how to get all tables which have this column name (Example 2).
Using the code I have so far (see below), we know that there are 325 tables which contain the column name "sysmodified". I could manually use Example 1 to get my information, but I was hoping for a way to get the results that I need with just one query.
This is what I have so far:
USE [database2]
GO
SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%sysmodified%'
ORDER BY schema_name, table_name;
However, if I try to add anything like sysmodified < '20190925', I get errors:
WHERE c.name LIKE '%sysmodified%'
AND t.sysmodified < '20190925'
or with this approach, which also results in errors:
SELECT t.name AS table_name, sysmodified,
based on the following (but I cannot add 325 column names to the FROM?):
SELECT
title,
primary_author,
published_date
FROM
books
WHERE
title LIKE 'The%'
Hopefully someone can help me with an approach to tackle this problem. We use Microsoft SQL Server Management Studio 17 (if that is relevant).
This is fairly simple dynamic SQL to put together. It should produce the results you are looking for, as I understand your requirements.
declare @SQL nvarchar(MAX) = ''
select @SQL = @SQL + 'select distinct TableName = ''' + object_name(object_id) + ''' from ' + quotename(object_name(object_id)) + ' where ' + quotename(c.name) + ' < ''20190925'' UNION ALL '
from sys.columns c
where name like '%sysmodified%'
set @SQL = left(@SQL, len(@SQL) - 10) --removes the final UNION ALL
select @SQL
--once you are comfortable that the dynamic sql is correct just uncomment the next line
--exec sp_executesql @SQL
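If some of those 325 tables live outside the dbo schema (or the column also appears in views you want to skip), a slightly tightened sketch that joins sys.tables and sys.schemas for schema qualification could look like this, with the same hard-coded cutoff date:
declare @SQL nvarchar(MAX) = ''

select @SQL = @SQL
    + 'select distinct TableName = ''' + s.name + '.' + t.name + ''''
    + ' from ' + quotename(s.name) + '.' + quotename(t.name)
    + ' where ' + quotename(c.name) + ' < ''20190925'' UNION ALL '
from sys.tables t
join sys.schemas s on s.schema_id = t.schema_id
join sys.columns c on c.object_id = t.object_id
where c.name like '%sysmodified%'

set @SQL = left(@SQL, len(@SQL) - 10) --removes the final UNION ALL

select @SQL --inspect the generated SQL first
--exec sp_executesql @SQL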
The comment from GMB is right. The fastest way I can think of to answer this is dynamic SQL.
I would build a query that loops through, or creates a UNION SELECT statement across, all tables that have that column. Something like:
(skeleton)
DECLARE @N_SQL NVARCHAR(MAX)
-- Find all tables that have a column like '%sysmodified%'
-- Build a dynamic query (union style) from the above, e.g.:
SET @N_SQL = ''
SELECT @N_SQL = @N_SQL + 'UNION SELECT ''' + [SCHEMA] + '.' + [TABLENAME] + ''' AS TABLENAME FROM ' + [SCHEMA] + '.' + [TABLENAME] + ' WHERE ' + [COLUMN] + ' >= ''<DATE>'''
SELECT @N_SQL --just to see what that string looks like
SET @N_SQL = RIGHT(@N_SQL, LEN(@N_SQL) - 5) --trimming out the leading "UNION"
EXEC SP_EXECUTESQL @N_SQL
So, the above might work. It might need a bit of clean-up, but it's a skeleton idea.
Based on questions like SQL to find the number of distinct values in a column and https://gis.stackexchange.com/questions/330932/get-line-length-using-sql-in-qgis, I see we can get a count and list of unique values using SQL, but I can't see anything on how to do this without knowing the name of the field.
Is it possible in SQL for QGIS, which only allows these commands? I found this option for another flavor: https://dataedo.com/kb/query/sql-server/list-table-columns-in-database
In MapBasic I have used the following, but would like to do this in SQL...
'Get Column Name list
dim x as integer
dim sColName as string
dim aColName as Alias
For x=1 to TableInfo(temptable, TAB_INFO_NCOLS)
sColName = ColumnInfo(temptable, "col"+str$(x), COL_INFO_NAME)
if (sColName not in ("GID","GID_New")) then
aColName = sColName
Select aColName, count(*) from temptable group by aColName into "g_"+sColName
Browse * from "g_"+sColName
Export "g_"+sColName Into WFolder+RSelection.col2+"_"+sColName+".csv" Type "ASCII" Delimiter "," CharSet "WindowsLatin1" Titles
End If
Next
I guess in SQL we would use something like http://www.sqlservertutorial.net/sql-server-basics/sql-server-select-distinct/ but how can I tell it to use every column in the table without knowing/specifying the names?
UPDATE
If I run
SELECT DISTINCT * FROM Drainage_Lines_Clip;
I just get whole (distinct) rows back, not a per-column summary.
But I need something like the following without having to specify the column name (Ref): the result of running UNIQUE on a Google Sheet of the data, except with counts.
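For a single, known column the per-column query is easy enough; roughly like this, with "StreamType" standing in for a real field name:
SELECT 'StreamType' AS column_name,
       "StreamType" AS value,
       COUNT(*) AS unique_count
FROM Drainage_Lines_Clip
GROUP BY "StreamType";
What I can't work out is how to do that for every column without naming each one.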
So this answer is based upon dynamic SQL. You'll get people saying "don't use it, it's dangerous", but they're the kind of people who think the best access to a system for users is none. Anyway, be aware of the security risks of SQL injection when using dynamic SQL; I'll leave that part up to you.
The code below goes off to the sys.columns catalog view and grabs all of the column names in the table, then a SQL statement is constructed to count all of the values in each column of your target table.
DECLARE @ReturnVar NVARCHAR(MAX);

-- Build one SELECT ... GROUP BY per column, glued together with UNION ALL
SELECT @ReturnVar = COALESCE(@ReturnVar + ' UNION ALL ', '') + 'SELECT ''' + c.[name] + ''' [ColumnName], CAST(' + c.[name] + ' AS VARCHAR(MAX)) [ColumnValue], CAST(COUNT(1) AS VARCHAR(MAX)) [Count] FROM dbo.Drainage_Lines_Clip GROUP BY CAST(' + c.[name] + ' AS VARCHAR(MAX))'
FROM sys.columns c
INNER JOIN sys.objects o ON o.object_id = c.object_id
INNER JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE o.[name] = 'Drainage_Lines_Clip'
AND s.[name] = 'dbo'
AND c.[name] != 'GID_New';
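-- Optional extra step (not in the original answer): inspect the generated
-- statement before executing it.
SELECT @ReturnVar;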
EXEC sp_executesql @ReturnVar;
I ended up having to use a combination of PyQGIS and SQL to get what I needed.
layer = qgis.utils.iface.activeLayer()

fields = []   # List of fields
Lquery = []   # List of queries to join together with a Union All statement
Cquery = []   # Combined query parts

for field in layer.fields():
    if field.name() not in ('GID', 'GID_New'):
        fields.append(field.name())
        query = "Select '{0}' as 'Column', {0} as 'Value', count(*) as 'Unique' from {1} group by {0}".format(field.name(), layer.name())
        Lquery.append(query)
    else:
        print(field.name())
        # query = "Select {0}, count(*) from {1} group by {0} order by 2 Desc".format(field.name(), layer.name())

for L in Lquery:
    Cquery.append(L + ' Union All ')

query = ''.join(map(str, Cquery))
query = query[:-11] + ' Order by Column'   # strip the trailing ' Union All '

vlayer = QgsVectorLayer("?query={}".format(query), 'counts_' + layer.name(), "virtual")
QgsProject.instance().addMapLayer(vlayer)
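To make the generated virtual-layer SQL concrete, for two made-up field names (StreamType and Material) the combined query ends up looking roughly like this:
Select 'StreamType' as 'Column', StreamType as 'Value', count(*) as 'Unique' from Drainage_Lines_Clip group by StreamType Union All
Select 'Material' as 'Column', Material as 'Value', count(*) as 'Unique' from Drainage_Lines_Clip group by Material Order by Column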
I found this query here on Stack Overflow, which was very helpful for pulling all table names and corresponding columns from a Microsoft SQL Server Enterprise Edition (64-bit) 10.50.4286 SP2 database.
SELECT o.Name, c.Name
FROM sys.columns c
JOIN sys.objects o ON o.object_id = c.object_id
WHERE o.type = 'U'
ORDER BY o.Name, c.Name
It produces a result set with two columns, where each row has the table name in column 1 and a corresponding column name in column 2.
What I really want, however, is one column for each table name, with that table's columns listed below it.
I've already started doing this manually in Excel, but with over 5,000 rows returned it would be really nice if there were a way to format the results in the query itself. Thanks in advance!
As everyone is telling you, this is an un-SQL-y thing to do. Your resultset will have an arbitrary number of columns (equal to the number of user tables in your database, which could be huge). Since the resultset must be rectangular, it will have as many rows as the maximum number of columns in any of your tables, so many of the values will be NULL.
That said, a straightforward dynamic PIVOT gets you what you want:
DECLARE @columns nvarchar(max);
DECLARE @sql nvarchar(max);

SET @columns = STUFF ( (
    SELECT '],[' + t.name
    FROM sys.tables t
    WHERE t.type = 'U'
    FOR XML PATH('') ), 1, 2, '')
    + ']';

SET @sql = '
SELECT ' + @columns + '
FROM
(
    SELECT t.Name tName
         , c.Name cName
         , ROW_NUMBER() OVER (PARTITION BY t.Name ORDER BY c.Name) rn
    FROM sys.columns c
    JOIN sys.tables t ON t.object_id = c.object_id
    WHERE t.type = ''U''
) raw
PIVOT (MAX(cName) FOR tName IN ( ' + @columns + ' ))
AS pvt;
';

EXECUTE(@sql);
This is what it produces on my master database:
spt_fallback_db spt_fallback_dev spt_fallback_usg spt_monitor MSreplication_options
------------------- ------------------- ------------------- --------------- ----------------------
dbid high dbid connections install_failures
name low lstart cpu_busy major_version
status name segmap idle minor_version
version phyname sizepg io_busy optname
xdttm_ins status vstart lastrun revision
xdttm_last_ins_upd xdttm_ins xdttm_ins pack_errors value
xfallback_dbid xdttm_last_ins_upd xdttm_last_ins_upd pack_received NULL
xserver_name xfallback_drive xfallback_vstart pack_sent NULL
NULL xfallback_low xserver_name total_errors NULL
NULL xserver_name NULL total_read NULL
NULL NULL NULL total_write NULL
(11 row(s) affected)
It might be easiest to do something like this:
Build a comma-separated list per table using FOR XML PATH (see the sketch below).
Then copy that result to Excel and use Text to Columns (on the Data tab) to create separate columns from the items.
Use Copy + Paste Special -> Transpose to turn the rows into columns.
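A sketch of that first step, using the same catalog views as the query above (one row per table, with its columns comma-separated):
SELECT t.name AS TableName,
       STUFF((SELECT ', ' + c.name
              FROM sys.columns c
              WHERE c.object_id = t.object_id
              ORDER BY c.column_id
              FOR XML PATH('')), 1, 2, '') AS Columns
FROM sys.tables t
ORDER BY t.name;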
I have a column LastUpdate in all tables of my database and I want to say "on insert or update, set LastUpdate = getdate()".
I can do this with a trigger, but I find it hard to write hundreds of triggers, one for each table of the database.
- How do I dynamically create a trigger that affects all tables?
- How do I dynamically create triggers for each table?
It is not possible to have a single trigger that fires when any table is updated.
You could, however, generate the required SQL dynamically; the following:
SELECT N'
CREATE TRIGGER trg_' + t.Name + '_Update ON ' + ObjectName + '
AFTER UPDATE
AS
BEGIN
UPDATE t
SET LastUpdate = GETDATE()
FROM ' + o.ObjectName + ' AS t
INNER JOIN inserted AS i
ON ' +
STUFF((SELECT ' AND t.' + QUOTENAME(c.Name) + ' = i.' + QUOTENAME(c.Name)
FROM sys.index_columns AS ic
INNER JOIN sys.columns AS c
ON c.object_id = ic.object_id
AND c.column_id = ic.column_id
WHERE ic.object_id = t.object_id
AND ic.index_id = ix.index_id
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 4, '') + ';
END;
GO'
FROM sys.tables AS t
INNER JOIN sys.indexes AS ix
ON ix.object_id = t.object_id
AND ix.is_primary_key = 1
CROSS APPLY (SELECT QUOTENAME(OBJECT_SCHEMA_NAME(t.object_id)) + '.' + QUOTENAME(t.name)) o (ObjectName)
WHERE EXISTS
( SELECT 1
FROM sys.columns AS c
WHERE c.Name = 'LastUpdate'
AND c.object_id = t.object_id
);
Generates SQL for each table with a LastUpdate column along the lines of:
CREATE TRIGGER trg_TableName_Update ON [dbo].[TableName]
AFTER UPDATE
AS
BEGIN
UPDATE t
SET LastUpdate = GETDATE()
FROM [dbo].[TableName] AS t
INNER JOIN inserted AS i
ON t.[PrimaryKey] = i.[PrimaryKey];
END;
GO
This relies on each table having a primary key, to get the join from the inserted pseudo-table back to the table being updated.
You can either copy and paste the results and execute them (I would recommend this way, so you can at least check the generated SQL), or build it into a cursor and execute it using sp_executesql. I would recommend the former, i.e. use this to save a bit of time, but still check each trigger before actually creating it.
I personally think last-modified columns are a flawed concept; they always feel to me like storing annoyingly little information. If you really care about data changes then track them properly with an audit table (or temporal tables, or Change Tracking). Firstly, knowing when something was changed, but not what it was changed from or who changed it, is probably more annoying than not knowing at all; secondly, it overwrites all previous changes, and what makes the latest change more important than all those that have gone before?
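For completeness, a minimal sketch of the temporal-table alternative mentioned above (SQL Server 2016+); the table and column names here are made up purely for illustration:
CREATE TABLE dbo.Widget
(
    WidgetId  int           NOT NULL PRIMARY KEY,
    Name      nvarchar(100) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.WidgetHistory));

-- Every update is versioned automatically: prior row versions land in
-- dbo.WidgetHistory, so you get the "when", the "what it changed from",
-- and no triggers to maintain.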
I'm using SQL Server 2005 and need to write a select statement that selects all of the rows within the table, but only for columns of certain types. In my case I want all columns except those of type xml, text, ntext, image, or non-binary CLR user-defined types.
That is the question. If you want to know why I'm doing this, read on. I'm using EXCEPT to identify differences between each table in two databases, similar to what is outlined in this question: SQL compare data from two tables. I don't understand why INTERSECT was suggested in the comment, so I'm using UNION ALL instead.
The code I'm using is:
(select *, 'table a first' as where_missing from A EXCEPT select *,'table a first' as where_missing from B)
union all
(select *,'table b first' as where_missing from B EXCEPT select *, 'table b first' as where_missing from A)
Some of the tables contain column types which don't work with EXCEPT, so I don't want to select those columns. I can get this information from information_schema.columns, but is there a nice way that I can then use it in my example above in place of the "*"?
Thanks!
I don't think there is a way to do it other than using dynamic SQL. First, build the column list:
declare @columns nvarchar(max);

select @columns = stuff(
    (select ', ' + c.name
     from sys.tables t
     join sys.columns c on t.object_id = c.object_id
     join sys.types ty on c.user_type_id = ty.user_type_id -- user_type_id avoids duplicate matches for alias types
     where t.name = 'A'
       and ty.name not in ('text', 'ntext', 'xml', 'image')
     order by c.column_id -- keep the table's column order so A and B line up in the EXCEPT
     for xml path(''))
    , 1, 2, '');
Then run the query:
declare @sql nvarchar(max);
set @sql = 'select ' + @columns + ', ''table a first'' as where_missing from A';
exec (@sql);
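And a sketch of feeding the same @columns list into your EXCEPT / UNION ALL comparison, assuming A and B have identical structures so the generated list is valid for both:
declare @cmp nvarchar(max);
set @cmp = '
(select ' + @columns + ', ''table a first'' as where_missing from A
 except
 select ' + @columns + ', ''table a first'' as where_missing from B)
union all
(select ' + @columns + ', ''table b first'' as where_missing from B
 except
 select ' + @columns + ', ''table b first'' as where_missing from A)';
exec (@cmp);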