I need to query all databases, tables, columns, and number of rows for each table from a server.
The following code almost does what I need, except that it only covers a single database. I need this output with an additional column for the database name, I need it to run against all databases instead of just a single named one, and I also need the number of records for each table.
USE [temp_db];
SELECT
OBJECT_SCHEMA_NAME(T.[object_id],DB_ID()) AS [Schema],
T.[name] AS [table_name], AC.[name] AS [column_name],
TY.[name] AS system_data_type, AC.[max_length],
AC.[precision], AC.[scale], AC.[is_nullable], AC.[is_ansi_padded]
FROM
sys.[tables] AS T
INNER JOIN
sys.[all_columns] AC ON T.[object_id] = AC.[object_id]
INNER JOIN
sys.[types] TY ON AC.[system_type_id] = TY.[system_type_id]
AND AC.[user_type_id] = TY.[user_type_id]
WHERE
T.[is_ms_shipped] = 0
ORDER BY
T.[name], AC.[column_id]
Current output:
Schema|table_name|column_name|system_data_type|max_length|precision|scale|is_nullable|is_ansi_padded
I need the output to be:
db_name|table_name|column_name|system_data_type|num_records
My error is in this part: (select min(sc.name) from so.name). How do I solve it?
In the select I am getting the table and column name; at the same time I want to get the min value of the column from the table. Is that possible?
select so.name table_name , sc.name Column_name,(select min(sc.name) from so.name )
from sysindexes si, syscolumns sc, sysobjects so
where si.indid < 2 -- 0 = if a table. 1 = if a clustered index on an allpages-locked table. >1 = if a nonclustered index or a clustered index on a data-only-locked table.
and so.type = 'U' --U – user table
and sc.status & 128 = 128 --(value 128) – indicates an identity column.
and so.id = sc.id
and so.id = si.id
So the problem is that you are basically trying to write dynamic code, where you select a column based on a table name taken from a system table.
The problem is that SQL Server does not know that the 'so.name' you are referencing is a table (furthermore, sysobjects also contains procedures and functions).
Instead of that, you should do an inner join between sys.columns and sys.tables based on object_id.
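For completeness, here is a rough sketch (mine, not from the answer) of the dynamic SQL this implies: build one SELECT MIN(...) statement per identity column from the catalog views and execute the whole batch. It assumes plain table and column names with no embedded quotes:
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'SELECT ''' + t.name + N''' AS table_name, '''
    + c.name + N''' AS column_name, MIN('
    + QUOTENAME(c.name) + N') AS min_value FROM '
    + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
    + N';' + CHAR(10)
FROM sys.tables AS t
INNER JOIN sys.columns AS c ON c.object_id = t.object_id
WHERE c.is_identity = 1;   -- same set of columns as the status & 128 filter on syscolumns

EXEC sys.sp_executesql @sql;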
Currently, I have a query that uses the sys tables to return all the tables and column names where the column is a specific custom type, the query looks like this:
select
schemas.name, obj.name, col.name
from
sys.objects obj
inner join
sys.columns col on col.object_id = obj.object_id
inner join
sys.types types on types.user_type_id = col.user_type_id
inner join
sys.schemas schemas on obj.schema_id = schemas.schema_id
where
types.name = 'myCustomType'
However, the security of the SQL Server database is being changed and we can no longer query the sys tables. How else can I query for this information?
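One possible workaround, assuming the INFORMATION_SCHEMA views are still accessible in your environment (they expose much of the same metadata, though they are built on the catalog views and honour the same metadata-visibility rules, so this may or may not get past the new restrictions):
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DOMAIN_NAME = 'myCustomType';   -- DOMAIN_NAME reports the alias (user-defined) type of the column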
I have a ProductInventory table with over 100 fields. Many of them need some user guidance to be understood. I'm looking for a way to add tooltips to the form without hardcoding the text.
I want to use MS SQL table field property DESCRIPTION as the source for user tooltips during web forms data entry.
Generally, my descriptions are for other db admins, but I was wondering if, with slightly more thoughtful and friendly descriptions, I could make dual use of this extended SQL field property.
Is it possible to retrieve this field property with a datatable/dataset query?
Example:
FieldName: ProductID
Value: [string]
FieldDescription: “This is a description for the end user of what the ProductID field is used for”
I know we can get the field schema for a specific field by…
DECLARE @Result int;
SELECT @Result = count(1)
FROM ::fn_listextendedproperty (N'MS_Description', N'Schema', 'dbo',
N'Table', '[Your Table Name]',
N'Column', '[Your Field Name]')
Is this possible without a “per field” query on each field?
Perhaps run two queries. One to pull records and one to pull schema.
(i.e. retrieve entire table schema and do a loop to find matching field name.)
Any thoughts?
You can get those descriptions back via query for the whole table.
This is almost entirely lifted from Phil Factor on SQL Server Central. My modifications are only the extra join condition p.name = 'MS_Description' and the where clause.
SELECT SCHEMA_NAME(tbl.schema_id) AS [Table_Schema],
tbl.name AS [Table_Name],
clmns.name AS [Column_Name],
p.name AS [Name],
CAST(p.value AS SQL_VARIANT) AS [Value]
FROM sys.tables AS tbl
INNER JOIN sys.all_columns AS clmns ON clmns.OBJECT_ID=tbl.OBJECT_ID
INNER JOIN sys.extended_properties AS p ON p.major_id=clmns.OBJECT_ID
AND p.minor_id=clmns.column_id
AND p.class= 1
AND p.name = 'MS_Description'
WHERE tbl.name = 'Your Table Name'
ORDER BY [Table_Schema] ASC,
[Table_Name] ASC,
[Column_ID] ASC,
[Name] ASC
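As an aside, the fn_listextendedproperty approach from the question can also return every column of a table in a single call if you pass NULL as the column name, for example:
SELECT objname AS column_name, value AS description
FROM ::fn_listextendedproperty (N'MS_Description', N'Schema', N'dbo',
                                N'Table', N'Your Table Name',
                                N'Column', NULL);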
try
select
s.name [Schema],
ao.[name] [Table],
ac.[name] [Column],
t.[name] [Type],
p.[value] [Description]
from sys.all_columns ac
inner join sys.types t on ac.system_type_id = t.system_type_id and ac.user_type_id = t.user_type_id
inner join sys.all_objects ao on ao.object_id = ac.object_id
inner join sys.schemas s on s.schema_id = ao.schema_id
LEFT OUTER JOIN sys.extended_properties p ON p.major_id = ac.object_id AND p.minor_id = ac.column_id AND p.class = 1 AND p.name = 'MS_Description'
where ao.type = 'u'
order by s.name, ao.name, ac.name
The filter ao.type = 'u' returns all user tables; if you want only one table, add ao.name = 'table_name' to the where clause.
I have database A which contains a table (CoreTables) that stores a list of active tables within database B that the organization's users are sending data to.
I would like to be able to have a set-based query that can output a list of only those tables within CoreTables that are populated with data.
Dynamically, I normally would do something like:
For each row in CoreTables
Get the table name
If table is empty
Do nothing
Else
Print table name
Is there a way to do this without a cursor or other dynamic methods? Thanks for any assistance...
Probably the most efficient option is:
SELECT c.name
FROM dbo.CoreTables AS c
WHERE EXISTS
(
SELECT 1
FROM sys.partitions
WHERE index_id IN (0,1)
AND rows > 0
AND [object_id] = OBJECT_ID(c.name)
);
Just note that the counts in sys.sysindexes, sys.partitions and sys.dm_db_partition_stats are not guaranteed to be completely in sync due to in-flight transactions.
While you could just run this query in the context of the database, you could do this for a different database as follows (again assuming that CoreTables does not include schema in the name):
SELECT c.name
FROM DatabaseA.dbo.CoreTables AS c
WHERE EXISTS
(
SELECT 1
FROM DatabaseB.sys.partitions AS p
INNER JOIN DatabaseB.sys.tables AS t
ON p.[object_id] = t.object_id
WHERE t.name = c.name
AND p.rows > 0
);
If you need to do this for multiple databases that all contain the same schema (or at least overlapping schema that you're capturing in aggregate in a central CoreTables table), you might want to construct a view, such as:
CREATE VIEW dbo.CoreTableCounts
AS
SELECT db = 'DatabaseB', t.name, rows = MAX(p.rows)
FROM DatabaseB.sys.partitions AS p
INNER JOIN DatabaseB.sys.tables AS t
ON p.[object_id] = t.[object_id]
INNER JOIN DatabaseA.dbo.CoreTables AS ct
ON t.name = ct.name
WHERE p.index_id IN (0,1)
GROUP BY t.name
UNION ALL
SELECT db = 'DatabaseC', t.name, rows = MAX(p.rows)
FROM DatabaseC.sys.partitions AS p
INNER JOIN DatabaseC.sys.tables AS t
ON p.[object_id] = t.[object_id]
INNER JOIN DatabaseA.dbo.CoreTables AS ct
ON t.name = ct.name
WHERE p.index_id IN (0,1)
GROUP BY t.name
-- ...
GO
Now your query isn't going to be quite as efficient, but it doesn't need to hard-code database names as object prefixes; instead it can be:
SELECT name
FROM dbo.CoreTableCounts
WHERE db = 'DatabaseB'
AND rows > 0;
If that is painful to execute you could create a view for each database instead.
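A per-database view, for example, might look like this (still using the illustrative DatabaseA/DatabaseB names from above):
CREATE VIEW dbo.CoreTableCountsB
AS
SELECT t.name, rows = MAX(p.rows)
FROM DatabaseB.sys.partitions AS p
INNER JOIN DatabaseB.sys.tables AS t
ON p.[object_id] = t.[object_id]
INNER JOIN DatabaseA.dbo.CoreTables AS ct
ON t.name = ct.name
WHERE p.index_id IN (0,1)
GROUP BY t.name;
GO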
In SQL Server, you can do something like:
SELECT o.name, st.row_count
FROM sys.dm_db_partition_stats st join
sys.objects o
on st.object_id = o.object_id
WHERE index_id < 2 and st.row_count > 0
By the way, this specifically does not use OBJECT_ID() or OBJECT_NAME() because these are evaluated in the current database. The above code continues to work for another database, using 3-part naming. This version also takes into account multiple partitions:
SELECT o.name, sum(st.row_count)
FROM <dbname>.sys.dm_db_partition_stats st join
<dbname>.sys.objects o
on st.object_id = o.object_id
WHERE index_id < 2
group by o.name
having sum(st.row_count) > 0
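If you want to tie this back to the CoreTables list from the question, a sketch along the same lines (the DatabaseA/DatabaseB names are just placeholders, as in the other answer) could be:
SELECT c.name
FROM DatabaseA.dbo.CoreTables AS c
WHERE EXISTS
(
    SELECT 1
    FROM DatabaseB.sys.dm_db_partition_stats AS st
    INNER JOIN DatabaseB.sys.objects AS o
            ON st.[object_id] = o.[object_id]
    WHERE o.name = c.name
      AND st.index_id < 2
      AND st.row_count > 0
);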
something like this?
foreach (System.Data.DataTable dt in yourDataSet.Tables)
{
if (dt.Rows.Count != 0) { PrintYourTableName(dt.TableName); }
}
This is a way you can do it, that relies on system tables, so be AWARE it may not always work in future versions of SQL. With that strong caveat in mind.
select distinct OBJECT_NAME(id) as tabName,rowcnt
from sys.sysindexes si
join sys.objects so on si.id = so.object_id
where indid=1 and so.type='U'
You would add to the where clause the tables you are interested in, plus rowcnt > 0 to keep only the populated ones.
I followed this article:
http://www.mssqltips.com/sqlservertip/1796/creating-a-table-with-horizontal-partitioning-in-sql-server/
Which in essence does the following:
Creates a database with three filegroups, call them A, B, and C
Creates a partition scheme, mapping to the three filegroups
Creates table - SalesArchival, using the partition scheme
Inserts a few rows into the table, split over the filegroups.
I'd like to perform a query like this (excuse my pseudo-code)
select * from SalesArchival
where data in filegroup('A')
Is there a way of doing this, or if not, how do I go about it.
What I want to accomplish is to have a batch run every day that moves data older than 90 days to a different file group, and perform my front end queries only on the 'current' file group.
To get at a specific filegroup, you'll always want to utilize partition elimination in your predicates to ensure minimal records get read. This is very important if you are to get any benefits from partitioning.
For archival, I think you're looking for how to split and merge ranges. You should always keep the first and last partitions empty, but this should give you an idea of how to use partitions for archiving. FYI, moving data from one filegroup to another is very resource intensive. Additionally, the results will be slightly different if you use a RANGE RIGHT partition function. Since you are doing partitioning, hopefully you've read up on best practices.
DO NOT RUN ON PRODUCTION. THIS IS ONLY AN EXAMPLE TO LEARN FROM.
This example assumes you have 4 filegroups (FG1,FG2,FG3, & [PRIMARY]) defined.
IF EXISTS(SELECT NULL FROM sys.tables WHERE name = 'PartitionTest')
DROP TABLE PartitionTest;
IF EXISTS(SELECT NULL FROM sys.partition_schemes WHERE name = 'PS')
DROP PARTITION SCHEME PS;
IF EXISTS(SELECT NULL FROM sys.partition_functions WHERE name = 'PF')
DROP PARTITION FUNCTION PF;
CREATE PARTITION FUNCTION PF (datetime) AS RANGE LEFT FOR VALUES ('2012-02-05', '2012-05-10','2013-01-01');
CREATE PARTITION SCHEME PS AS PARTITION PF TO (FG1,FG2,FG3,[PRIMARY]);
CREATE TABLE PartitionTest( Id int identity(1,1), DT datetime) ON PS(DT);
INSERT PartitionTest (DT)
SELECT '2012-02-05' --FG1
UNION ALL
SELECT '2012-02-06' --FG2(This is the one 90 days old to archive into FG1)
UNION ALL
SELECT '2012-02-07' --FG2
UNION ALL
SELECT '2012-05-05' --FG2 (This represents a record entered recently)
Check the filegroup associated with each record:
SELECT O.name TableName, fg.name FileGroup, ps.name PartitionScheme,pf.name PartitionFunction, ISNULL(prv.value,'Undefined') RangeValue,p.rows
FROM sys.objects O
INNER JOIN sys.partitions p on P.object_id = O.object_id
INNER JOIN sys.indexes i on p.object_id = i.object_id and p.index_id = i.index_id
INNER JOIN sys.data_spaces ds on i.data_space_id = ds.data_space_id
INNER JOIN sys.partition_schemes ps on ds.data_space_id = ps.data_space_id
INNER JOIN sys.partition_functions pf on ps.function_id = pf.function_id
LEFT OUTER JOIN sys.partition_range_values prv on prv.function_id = ps.function_id and p.partition_number = prv.boundary_id
INNER JOIN sys.allocation_units au on p.hobt_id = au.container_id
INNER JOIN sys.filegroups fg ON au.data_space_id = fg.data_space_id
WHERE o.name = 'PartitionTest' AND i.type IN (0,1) --Remove nonclustereds. 0 for heap, 1 for BTree
ORDER BY O.name, fg.name, prv.value
This proves that 2012-02-05 is in FG1 while the rest are in FG2.
In order to archive, your first instinct is to move the data. When partitioning, though, you actually have to slide the partition function range value.
Now let's move 2012-02-06 (90 days or older in your case) into FG1:
--Move 2012-02-06 from FG2 to FG1
ALTER PARTITION SCHEME PS NEXT USED FG1;
ALTER PARTITION FUNCTION PF() SPLIT RANGE ('2012-02-06');
Rerun the filegroup query to verify that 2012-02-06 got moved into FG1.
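The other half of the split/merge pattern mentioned above is MERGE RANGE, which removes a boundary and collapses the two adjacent partitions into one. A hedged example against the same PF function (test on a throwaway copy first, since merging can move data between filegroups):
--Removes the '2012-02-05' boundary; the two partitions on either side of it become one.
ALTER PARTITION FUNCTION PF() MERGE RANGE ('2012-02-05');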
$PARTITION (Transact-SQL) should do what you want.
Run the following to see the size of each partition and its ID:
USE AdventureWorks2012;
GO
SELECT $PARTITION.TransactionRangePF1(TransactionDate) AS Partition,
COUNT(*) AS [COUNT] FROM Production.TransactionHistory
GROUP BY $PARTITION.TransactionRangePF1(TransactionDate)
ORDER BY Partition ;
GO
and the following should give you the data from a given partition ID:
SELECT * FROM Production.TransactionHistory
WHERE $PARTITION.TransactionRangePF1(TransactionDate) = 5 ;
No. You need to use the exact condition that you use in your partition function, which is probably something like:
where keyCol between 3 and 7