I have two tables, which I have simplified below for clarity. One stores data values while the other defines the units and type of data. Some tests have one result, others may have more (my actual table has Result1 through Result10):
Table 'Tests':

ID      Result1   Result2   TestType (FK to TestTypes.Type)
------  --------  --------  -------------------------------
1001    50        29        1
1002    90.9      NULL      2
1003    12.4      NULL      2
1004    20.2      30        1

Table 'TestTypes':

Type   TestName      Result1Name   Result1Unit   Result2Name   Result2Unit   ..........
-----  ------------  ------------  ------------  ------------  ------------  ----------
1      Temp Calib.   Temperature   F             Variance      %
2      Clarity       Turbidity     CU            NULL          NULL
I would like to use the ResultXName as the column alias when I join the two tables. In other words, if a user wants to see all Type 1 'Temp Calib' tests, the data would be formatted as follows:
Temperature Variance
------------ -----------
50 F 10.1%
20.2 F 4.4%
Or if they look at Type 2, which only uses 1 result and should ignore the NULL:
Turbidity
----------
90.9 CU
12.4 CU
I have had some success in combining the two columns of the tables:
SELECT CONCAT(Result1, ' ', ISNULL(Result1Unit, ''))
FROM Tests
INNER JOIN TestTypes ON Tests.TestType = TestTypes.Type
But I cannot figure out how to use the TestName as the new column alias. This is what I've been trying using a subquery, but it seems subqueries are not allowed in the AS clause:
SELECT CONCAT(Result1, ' ', ISNULL(Result1Unit, '')) AS (SELECT TOP(1) Result1Name FROM TestTypes WHERE Type = 1)
FROM Tests
INNER JOIN TestTypes ON Tests.TestType = TestTypes.Type
Is there a different method I can use? Or do I need to restructure my data to achieve this? I am using MSSQL.
Yes, this can be fully automated by constructing a dynamic SQL string carefully. The key points of this solution, with references to the sections in the code below, are:
Count the Result columns (section 1).
Get the new column name from ResultXName by using sp_executesql with an OUTPUT parameter (section 2-1).
Append the SELECT clause for the new column (section 2-2).
N.B.1. Although a dynamic result schema is usually considered bad design, sometimes people are simply ordered to build one, so I will not question the adequacy of the requirement here.
N.B.2. Mind the security risk of executing arbitrary strings (SQL injection). Additional filtering or quoting may be required depending on your use case.
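For example, one minimal precaution (my own suggestion, not part of the solution below) is to escape the looked-up name with QUOTENAME before splicing it into the dynamic string, rather than wrapping it in hand-written brackets:
-- Hedged sketch: escape a looked-up column name before using it as an alias.
-- QUOTENAME wraps the value in [ ] and doubles any embedded ] characters.
declare @colname sysname;
select @colname = Result1Name from [TestTypes] where [Type] = 1;
print 'cast(L.Result1 as varchar(10)) as ' + QUOTENAME(@colname); -- cast(L.Result1 as varchar(10)) as [Temperature]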
Test Dataset
use [testdb];
GO
if OBJECT_ID('testdb..Tests') is not null
drop table testdb..Tests;
create table [Tests] (
[ID] int,
Result1 float,
Result2 float,
TestType int
)
insert into [Tests]([ID], Result1, Result2, TestType)
values (1001,50,29,1),
(1002,90.9,NULL,2),
(1003,12.4,NULL,2),
(1004,20.2,30,1);
if OBJECT_ID('testdb..TestTypes') is not null
drop table testdb..TestTypes;
create table [TestTypes] (
[Type] int,
TestName varchar(50),
Result1Name varchar(50),
Result1Unit varchar(50),
Result2Name varchar(50),
Result2Unit varchar(50)
)
insert into [TestTypes]([Type], TestName, Result1Name, Result1Unit, Result2Name, Result2Unit)
values (1,'Temp Calib.','Temperature','F','Variance','%'),
(2,'Clarity','Turbidity','CU',NULL,NULL);
--select * from [Tests];
--select * from [TestTypes];
Solution
/* Input Parameter */
declare @type_no int = 1;

/* 1. determine the number of Results */
declare @n int;

-- If there are hundreds of Result columns, use a lookup like the one in (2-1) instead
select @n = LEN(COALESCE(LEFT(Result1Name,1),''))
          + LEN(COALESCE(LEFT(Result2Name,1),''))
FROM [TestTypes]
where [Type] = @type_no;

/* 2. build dynamic query string */

-- cast type number as string
declare @s_type varchar(10) = cast(@type_no as varchar(10));

-- sql query string
declare @sql nvarchar(max) = '';
declare @sql_colname nvarchar(max) = '';

-- loop variables
declare @i int = 1;            -- loop index
declare @s varchar(10);        -- stringified @i
declare @colname varchar(max); -- new column name

set @sql += '
select
L.[ID]';

-- add columns one by one
while @i <= @n begin
    set @s = cast(@i as varchar(10));

    -- (2-1) find the new column name
    set @sql_colname = N'select @colname = Result' + @s + 'Name
from [TestTypes]
where [Type] = ' + @s_type;

    exec sp_executesql
        @stmt = @sql_colname,
        @params = N'@colname varchar(max) OUTPUT',
        @colname = @colname OUTPUT;

    -- (2-2) sql clause of the new column
    set @sql += ',
cast(L.Result' + @s + ' as varchar(10)) + '' '' + R.Result' + @s + 'Unit as [' + @colname + ']';

    -- next Result
    set @i += 1;
end

set @sql += '
into [ans]
from [Tests] as L
inner join [TestTypes] as R
on L.TestType = R.Type
where R.[Type] = ' + @s_type;

/* execute */
print @sql; -- check the query string

if OBJECT_ID('testdb..ans') is not null
    drop table testdb..ans;
exec sp_sqlexec @sql;

/* show */
select * from [ans];
Result (type = 1)
| ID | Temperature | Variance |
|------|-------------|----------|
| 1001 | 50 F | 29 % |
| 1004 | 20.2 F | 30 % |
/* the query string */
select
L.[ID],
cast(L.Result1 as varchar(10)) + ' ' + R.Result1Unit as [Temperature],
cast(L.Result2 as varchar(10)) + ' ' + R.Result2Unit as [Variance]
into [ans]
from [Tests] as L
inner join [TestTypes] as R
on L.TestType = R.Type
where R.[Type] = 1
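For comparison, with @type_no = 2 the counting step gives @n = 1 (Result2Name is NULL), so the printed string comes out roughly as follows, matching the single-column Turbidity output the question asks for:
select
L.[ID],
cast(L.Result1 as varchar(10)) + ' ' + R.Result1Unit as [Turbidity]
into [ans]
from [Tests] as L
inner join [TestTypes] as R
on L.TestType = R.Type
where R.[Type] = 2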
Tested on SQL Server 2017 (latest Linux Docker image) on Debian 10.
Related
I have a DB with 50 tables that share the same structure (same column names and types), each with a clustered index on the Created Date column. Each of these tables has around 100,000 rows, and I need to pull some columns from all of them.
select * from customerNY
created date | Name | Age | Gender
__________________________________
25-Jan-2016 | Chris| 25 | M
27-Jan-2016 | John | 24 | M
30-Jan-2016 | June | 34 | F
select * from customerFL
created date | Name | Age | Gender
__________________________________
25-Jan-2016 | Matt | 44 | M
27-Jan-2016 | Rose | 24 | F
30-Jan-2016 | Bane | 34 | M
The above is an example of the tables in the DB. I need SQL that pulls all the data quickly. Currently I am using UNION ALL, but the report takes a long time to complete. Is there another way to pull the data without writing out the UNION ALL, something like:
select Name, Age, Gender from [:customerNY:customerFL:]
Out of context: Can I pull in the table name in the result?
Thanks for any help. I've been putting my mind to this but I can't find a way to do it quicker.
This dynamic SQL approach should meet your criteria: it selects the table names from the schema and builds a single SELECT statement at runtime to execute. To satisfy the UNION ALL requirement, each SELECT is prefixed with UNION ALL, and STUFF then removes the leading one.
DECLARE @SQL AS VARCHAR(MAX)
SET @SQL = ''

SELECT @SQL = @SQL + 'UNION ALL SELECT Name, Age, Gender FROM ' + TABLE_SCHEMA + '.[' + TABLE_NAME + ']' + CHAR(13)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'Customer%'

SELECT @SQL = STUFF(@SQL, 1, 10, '') -- strip the leading 'UNION ALL '

EXEC (@SQL)
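As for the side question about pulling the table name into the result: the generated SELECT list can include the table name as a literal column. This is just a small variation on the statement above (the SourceTable alias is an example name I picked):
DECLARE @SQL AS VARCHAR(MAX) = ''

SELECT @SQL = @SQL + 'UNION ALL SELECT ''' + TABLE_NAME + ''' AS SourceTable, Name, Age, Gender FROM '
            + TABLE_SCHEMA + '.[' + TABLE_NAME + ']' + CHAR(13)
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE 'Customer%'

SELECT @SQL = STUFF(@SQL, 1, 10, '') -- strip the leading 'UNION ALL '
EXEC (@SQL)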
However, I do not recommend relying on this; you should do what people have suggested in the comments and restructure your data.
Memory-optimising the test tables below gave a 7x speed increase compared to the same data in regular tables. The sample is 50 tables of 100,000 rows each. Please only run this on a test server, as it creates filegroups, tables, etc.:
USE [master]
GO
ALTER DATABASE [myDB] ADD FILEGROUP [MemOptData] CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE [myDB] ADD FILE ( NAME = N'Mem', FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA' ) TO FILEGROUP [MemOptData] --Change path for your version
GO
USE [myDB]
GO
set nocount on
declare @loop1 int = 1
declare @loop2 int = 1
declare @NoTables int = 50
declare @noRows int = 100000
declare @sql nvarchar(max)
while @loop1 <= @NoTables
begin
    set @sql = 'create table [MemCustomer' + cast(@loop1 as nvarchar(6)) + '] ([ID] [int] IDENTITY(1,1) NOT NULL, [Created Date] date, [Name] varchar(20), [Age] int, Gender char(1), CONSTRAINT [PK_Customer' + cast(@loop1 as nvarchar(6)) + '] PRIMARY KEY NONCLUSTERED
    (
        [ID] ASC
    )) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)'
    exec(@sql)
    while @loop2 <= @noRows
    begin
        set @sql = 'insert into [MemCustomer' + cast(@loop1 as nvarchar(6)) + '] ([Created Date], [Name], [Age], [Gender]) values (DATEADD(DAY, ROUND(((20) * RAND()), 0), DATEADD(day, 10, ''2018-06-01'')), (select top 1 [name] from (values(''bill''),(''steve''),(''jack''),(''roger''),(''paul''),(''ozzy''),(''tom''),(''brian''),(''norm'')) n([name]) order by newid()), FLOOR(RAND()*(85-18+1))+18, iif(FLOOR(RAND()*(2))+1 = 1, ''M'', ''F''))'
        --print @sql
        exec(@sql)
        set @loop2 = @loop2 + 1
    end
    set @loop2 = 1
    set @loop1 = @loop1 + 1
end
;with cte as (
Select * from MemCustomer1
UNION
Select * from MemCustomer2
UNION
...
UNION
Select * from MemCustomer50
)
select * from cte where [name] = 'tom' and age = 27 and gender = 'F'
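If restructuring is an option, one conventional way to avoid repeating the UNION ALL in every report query is to define it once in a view. The sketch below uses the sample table and column names from the question and is only an illustration of the idea, not something benchmarked here:
CREATE VIEW dbo.AllCustomers
AS
SELECT 'customerNY' AS SourceTable, [created date], [Name], [Age], [Gender] FROM dbo.customerNY
UNION ALL
SELECT 'customerFL', [created date], [Name], [Age], [Gender] FROM dbo.customerFL
-- ... one SELECT per remaining table ...
GO

-- Queries then read from a single object:
SELECT [Name], [Age], [Gender] FROM dbo.AllCustomers WHERE [Name] = 'June';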
I need to export data from a non-normalized database where there are multiple columns to a new normalized database.
One example is the Products table, which has 30 boolean columns (ValidSize1, ValidSize2, etc.), and every record has a foreign key which points to a Sizes table where there are 30 columns with the size codes (XS, S, M, etc.). In order to get the valid sizes for a product I have to scan both tables and take the value SizeCodeX from the Sizes table only if ValidSizeX on the product is true. Something like this:
Products Table
--------------
ProductCode <PK>
Description
SizesTableCode <FK>
ValidSize1
ValidSize2
[...]
ValidSize30
Sizes Table
-----------
SizesTableCode <PK>
SizeCode1
SizeCode2
[...]
SizeCode30
For now I am using a "template" query which I repeat 30 times:
SELECT
Products.Code,
Sizes.SizesTableCode, -- I need this code because different codes can have same size codes
Sizes.Size_1
FROM Products
INNER JOIN Sizes
ON Sizes.SizesTableCode = Products.SizesTableCode
WHERE Sizes.Size_1 IS NOT NULL
AND Products.ValidSize_1 = 1
I am just putting this query inside a loop and I replace the "_1" with the loop index:
DECLARE @counter INT, @max INT, @sql VARCHAR(MAX);

SET @counter = 1;
SET @max = 30;
SET @sql = '';

WHILE (@counter <= @max)
BEGIN
    SET @sql = @sql + ('[...]'); -- Here goes my query with dynamic indexes
    IF @counter < @max
        SET @sql = @sql + ' UNION ';
    SET @counter = @counter + 1;
END

INSERT INTO DestDb.ProductsSizes EXEC(@sql); -- Insert statement
GO
Is there a better, cleaner or faster method to do this? I am using SQL Server and I can only use SQL/TSQL.
You can prepare a dynamic query using the sys.syscolumns view to get all the values in a row:
DECLARE @SqlStmt VARCHAR(MAX)
SET @SqlStmt = ''

SELECT @SqlStmt = @SqlStmt + 'SELECT ''' + name + ''' AS [column] UNION ALL '
FROM sys.syscolumns WITH (READUNCOMMITTED)
WHERE OBJECT_ID('dbo.Products') = id AND ([name] LIKE 'SizeCode%' OR [name] LIKE 'ProductCode%')

-- strip the trailing 'UNION ALL '
IF REVERSE(@SqlStmt) LIKE REVERSE('UNION ALL ') + '%'
    SET @SqlStmt = LEFT(@SqlStmt, LEN(@SqlStmt) - LEN('UNION ALL '))

PRINT (@SqlStmt)
Well, it seems that a "clean" (and much faster!) solution is the UNPIVOT operator.
I found a very good example here:
http://pratchev.blogspot.it/2009/02/unpivoting-multiple-columns.html
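In case it helps, here is a rough sketch of the unpivot idea applied to the schema above, written with CROSS APPLY (VALUES ...) rather than the UNPIVOT operator itself, since I don't have the article's exact code. Only three of the 30 column pairs are shown; the rest follow the same pattern:
SELECT
    p.ProductCode,
    s.SizesTableCode, -- needed because different codes can have the same size codes
    x.SizeCode
FROM Products AS p
INNER JOIN Sizes AS s
    ON s.SizesTableCode = p.SizesTableCode
CROSS APPLY (VALUES
    (p.ValidSize1, s.SizeCode1),
    (p.ValidSize2, s.SizeCode2),
    (p.ValidSize3, s.SizeCode3) -- ... continue up to ValidSize30 / SizeCode30
) AS x(IsValid, SizeCode)
WHERE x.IsValid = 1
  AND x.SizeCode IS NOT NULL;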
How can I determine the space used by a table variable without using DATALENGTH on all columns?
eg:
DECLARE @T TABLE
(
a bigint,
b bigint,
c int,
d varchar(max)
)
insert into @T select 1,2,3, 'abc123'
exec sp_spaceused @T
Trying to work out how much memory a Table variable consumes when running a stored procedure.
I know in this example I can go:
SELECT DATALENGTH(a) + DATALENGTH(b) + DATALENGTH(c) + DATALENGTH(d)
But is there any other way other than doing DATALENGTH on all table columns?
The metadata for table variables is pretty much the same as for other types of tables, so you can determine the space used by looking in various system views in tempdb.
The main obstacle is that the table variable will be given an auto-generated name such as #3D7E1B63, and I'm not sure if there is a straightforward way of determining its object_id.
The code below uses the undocumented %%physloc%% function (requires SQL Server 2008+) to determine a data page belonging to the table variable, then DBCC PAGE to get the associated object_id. It then executes code copied directly from the sp_spaceused procedure to return the results.
DECLARE @T TABLE
(
a bigint,
b bigint,
c int,
d varchar(max)
)
insert into @T select 1,2,3, 'abc123'

DECLARE @DynSQL nvarchar(100)

SELECT TOP (1) @DynSQL = 'DBCC PAGE(2,' +
                         CAST(file_id AS VARCHAR) + ',' +
                         CAST(page_id AS VARCHAR) + ',1) WITH TABLERESULTS'
FROM @T
CROSS APPLY sys.fn_PhysLocCracker(%%physloc%%)

DECLARE @DBCCPage TABLE (
[ParentObject] [varchar](100) NULL,
[Object] [varchar](100) NULL,
[Field] [varchar](100) NULL,
[VALUE] [varchar](100) NULL
)

INSERT INTO @DBCCPage
EXEC (@DynSQL)

DECLARE @id int

SELECT @id = VALUE
FROM @DBCCPage
WHERE Field = 'Metadata: ObjectId'

EXEC sp_executesql N'
USE tempdb
declare @type character(2) -- The object type.
       ,@pages bigint      -- Working variable for size calc.
       ,@dbname sysname
       ,@dbsize bigint
       ,@logsize bigint
       ,@reservedpages bigint
       ,@usedpages bigint
       ,@rowCount bigint
/*
** Now calculate the summary data.
* Note that LOB Data and Row-overflow Data are counted as Data Pages.
*/
SELECT
    @reservedpages = SUM (reserved_page_count),
    @usedpages = SUM (used_page_count),
    @pages = SUM (
        CASE
            WHEN (index_id < 2) THEN (in_row_data_page_count + lob_used_page_count + row_overflow_used_page_count)
            ELSE lob_used_page_count + row_overflow_used_page_count
        END
    ),
    @rowCount = SUM (
        CASE
            WHEN (index_id < 2) THEN row_count
            ELSE 0
        END
    )
FROM sys.dm_db_partition_stats
WHERE object_id = @id;
/*
** Check if table has XML Indexes or Fulltext Indexes which use internal tables tied to this table
*/
IF (SELECT count(*) FROM sys.internal_tables WHERE parent_id = @id AND internal_type IN (202,204,211,212,213,214,215,216)) > 0
BEGIN
/*
** Now calculate the summary data. Row counts in these internal tables don''t
** contribute towards row count of original table.
*/
SELECT
    @reservedpages = @reservedpages + sum(reserved_page_count),
    @usedpages = @usedpages + sum(used_page_count)
FROM sys.dm_db_partition_stats p, sys.internal_tables it
WHERE it.parent_id = @id AND it.internal_type IN (202,204,211,212,213,214,215,216) AND p.object_id = it.object_id;
END
SELECT
    name = OBJECT_NAME (@id),
    rows = convert (char(11), @rowCount),
    reserved = LTRIM (STR (@reservedpages * 8, 15, 0) + '' KB''),
    data = LTRIM (STR (@pages * 8, 15, 0) + '' KB''),
    index_size = LTRIM (STR ((CASE WHEN @usedpages > @pages THEN (@usedpages - @pages) ELSE 0 END) * 8, 15, 0) + '' KB''),
    unused = LTRIM (STR ((CASE WHEN @reservedpages > @usedpages THEN (@reservedpages - @usedpages) ELSE 0 END) * 8, 15, 0) + '' KB'')
', N'@id int', @id = @id
Returns
name rows reserved data index_size unused
------------------------------ ----------- ------------------ ------------------ ------------------ ------------------
#451F3D2B 1 16 KB 8 KB 8 KB 0 KB
I have a query that returns the people in a certain household; however, the individuals show up in two separate rows. What I want to do is merge those rows into one.
SELECT dbo.households.id, dbo.individuals.firstname, dbo.individuals.lastname
FROM dbo.households INNER JOIN
dbo.individuals ON dbo.households.id = dbo.individuals.householdID
WHERE (dbo.households.id = 10017)
Current results:
ID | First Name | Last Name |
1 | Test | Test1 |
1 | ABC | ABC1 |
Desired results:
ID | First Name | Last Name |ID1| First Name1| Last Name1|
1 | Test | Test1 |1 | ABC | ABC1 |
However, if there are 3 people then it would need to merge all 3, and so on.
Depending on the response to the question I asked above, below is a simple script that compiles the names into a string and then outputs it (I don't have access to a syntax validator right now, so forgive any errors):
DECLARE
    @CNT INT,
    @R_MAX INT,
    @H_ID INT,
    @R_FIRST VARCHAR(250),
    @R_LAST VARCHAR(250),
    @R_NAMES VARCHAR(MAX)

SET @H_ID = 10017;  -- household to report on (value taken from the question)
SET @CNT = 1;       -- Counter (ROW_NUMBER starts at 1)
SET @R_NAMES = 'Names: ';

SELECT @R_MAX = COUNT(*) FROM dbo.individuals a WHERE a.householdID = @H_ID; --Get total number of individuals
PRINT(@R_MAX); --Output # of matching rows

--Loop through table to get individuals
WHILE @CNT <= @R_MAX
BEGIN
    --Pick the @CNT-th individual (a variable cannot be assigned inside a derived table,
    --so the assignment happens in the outer SELECT)
    SELECT @R_FIRST = RN.firstname,
           @R_LAST  = RN.lastname
    FROM (SELECT b.firstname,
                 b.lastname,
                 ROW_NUMBER() OVER (ORDER BY b.lastname, b.firstname) AS Row
          FROM dbo.households a
          INNER JOIN dbo.individuals b ON a.id = b.householdID
          WHERE a.id = @H_ID) AS RN
    WHERE RN.Row = @CNT;

    SET @R_NAMES = @R_NAMES + @R_FIRST + ' ' + @R_LAST + '; '; --Add individual's name to name string
    PRINT(CAST(@CNT AS VARCHAR) + ':' + @R_NAMES);
    SET @CNT = @CNT + 1; --Increase counter
END

PRINT(@R_NAMES); --Output the individuals
Provided you're using SQL Server 2005 or up, you might be able to use FOR XML PATH('') to concatenate the strings.
This should do what you want without having to do manual loops:
edit: fixed up SQL to actually work (now I have access to SQL)
SELECT households.id,
STUFF(
(
SELECT '; ' + [firstname] + '|' + lastname AS [text()]
FROM individuals
WHERE individuals.householdID = households.id
FOR XML PATH('')
)
, 1, 2, '' ) -- remove the first '; ' from the string
AS [name]
FROM dbo.households
WHERE (households.id = 10017)
This is pretty close to the format of data that you wanted.
It converts the rows to XML (without any actual XML markup, due to PATH('')) and then joins the concatenated string back to the household row.
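As an aside, if you happen to be on SQL Server 2017 or later (beyond the 2005 baseline assumed above), STRING_AGG expresses the same idea without the XML trick. A sketch against the question's tables:
SELECT households.id,
       STRING_AGG(individuals.firstname + '|' + individuals.lastname, '; ') AS [name]
FROM dbo.households
INNER JOIN dbo.individuals
    ON individuals.householdID = households.id
WHERE households.id = 10017
GROUP BY households.id;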
I have a SQL Server 2005 database that stores data for multiple users. Each table that contains user-owned data has a column called OwnerID that identifies the owner; most but not all tables have this column.
I want to be able to count number of rows 'owned' by a user in each table. In other words, I want a query that returns the names of each table that contains an OwnerID column, and counts the number of rows in each table that match a given OwnerID value.
I can return just the names of the matching tables using this query:
SELECT OBJECT_NAME(object_id) [Table] FROM sys.columns
WHERE name = 'OwnerID' ORDER BY OBJECT_NAME(object_id);
That query returns a list of table names like this:
+---------+
| Table |
+---------+
| Alpha |
| Beta |
| Gamma |
| ... |
+---------+
But is it possible to write a query that can also count the number of rows in each table that match a given OwnerID? ie:
+---------+------------+
| Table | RowCount |
+---------+------------+
| Alpha | 2042 |
| Beta | 49 |
| Gamma | 740 |
| ... | ... |
+---------+------------+
Note: The list of table names needs to be returned dynamically, it is not suitable to hard-code table names into this query.
Edit: the answer...
(I can't edit your answers yet but I can edit my own question so I'm putting it here...)
Damien_The_Unbeliever had essentially the correct answer, but SQL Server doesn't allow string concatenation inside an EXEC argument, so I had to build the query string before the EXEC. The final query is as follows:
DECLARE @OwnerID int;
SET @OwnerID = 1;

DECLARE @ForEachSQL varchar(100);
SET @ForEachSQL = 'INSERT INTO #t(TableName,RowsOwned) SELECT ''?'', COUNT(*) FROM ? WHERE OwnerID = ' + CONVERT(varchar(11), @OwnerID);

CREATE TABLE #t(TableName sysname, RowsOwned int);

EXEC sp_MSforeachtable @ForEachSQL,
     @whereAnd = 'AND o.id IN (SELECT id FROM syscolumns where name=''OwnerID'')';

SELECT * FROM #t ORDER BY TableName;
DROP TABLE #t;
You can use sp_MSForeachtable, and the @whereand parameter, to specify a filter so you're only working against tables with an OwnerID column. Create a temp table, and populate that for each matching table. Something like:
create table #t(tablename sysname, Cnt int)
exec sp_MSforeachtable 'insert into #t(tablename,Cnt) select ''?'',COUNT(*) from ?', @whereAnd = 'and o.id in (select id from syscolumns where name=''OwnerID'')'
select * from #t
Two major caveats to mention - first is that sp_MSforeachtable is "undocumented", so you use it at your own risk - it could be suddenly removed from SQL Server by any kind of servicing, or in the next release.
The second is that having a dynamic schema is usually a sign that something else has gone wrong in modelling - possibly attribute splitting (where sales for January and February are given different tables, even though they're logically the same thing and should appear in the same table, possibly with an additional column to distinguish them).
And, of course, you wanted to filter based on a particular OwnerID, so the query would be more like:
'insert into #t(tablename,Cnt) select ''?'',COUNT(*) from ? where OwnerID=' + @OwnerID
(Assuming @OwnerID is the owner sought, and is an int)
This gets the info from sysindexes. It can be slightly out of date, but it will give you a rough count.
SELECT
[TableName] = so.name,
[RowCount] = MAX(si.rows)
FROM
sysobjects so,
sysindexes si
WHERE
so.xtype = 'U'
AND
si.id = OBJECT_ID(so.name)
GROUP BY
so.name
ORDER BY
2 DESC
If you needed it to be 100% right then you could use the undocumented feature sp_MSForEachTable
DECLARE @SQL VARCHAR(255)
SET @SQL = 'DBCC UPDATEUSAGE (' + DB_NAME() + ')'
EXEC(@SQL)
CREATE TABLE #foo
(
tablename VARCHAR(255),
rc INT
)
INSERT #foo
EXEC sp_msForEachTable
'SELECT PARSENAME(''?'', 1),
COUNT(*) FROM ?'
SELECT tablename, rc
FROM #foo
ORDER BY rc DESC
DROP TABLE #foo
You can use this:
DECLARE @nSQL NVARCHAR(MAX)

SELECT @nSQL = COALESCE(@nSQL + 'UNION ALL ' + CHAR(10), '')
             + 'SELECT ''' + TABLE_NAME + ''' AS TableName, COUNT(*) FROM ' + QUOTENAME(TABLE_NAME) + CHAR(10)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'OwnerID'

-- This will PRINT out the dynamically generated SQL statement. Just replace this with EXECUTE(@nSQL) when you are happy to run it.
PRINT @nSQL
Update: To search for a specific OwnerId:
DECLARE @nSQL NVARCHAR(MAX)
DECLARE @OwnerId INTEGER
SET @OwnerId = 1

SELECT @nSQL = COALESCE(@nSQL + 'UNION ALL ' + CHAR(10), '')
             + 'SELECT ''' + TABLE_NAME + ''' AS TableName, COUNT(*) FROM ' + QUOTENAME(TABLE_NAME) + ' WHERE OwnerId = @OwnerId' + CHAR(10)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'OwnerID'

EXECUTE sp_executesql @nSQL, N'@OwnerId INTEGER', @OwnerId
SELECT
O.ID,
O.NAME,
I.ROWCNT
FROM SYSOBJECTS O
INNER JOIN SYSINDEXES I
ON O.ID = I.ID
WHERE O.UID = 5
AND O.XTYPE = 'U'
AND I.STATUS = 0
Try using this query; it will give you the id of the table, the table name, and the number of rows for that table.
UID = 5 means I want to check a particular schema which has id = 5. You can check a schema id using SELECT SCHEMA_ID('<schema name>');
XTYPE = 'U' means user-defined tables only.
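For what it's worth, on newer versions the same rough row count can also be read from sys.partitions (or sys.dm_db_partition_stats, which the sp_spaceused logic earlier on this page already relies on). A minimal sketch, not part of the original answer:
SELECT  s.name AS SchemaName,
        t.name AS TableName,
        SUM(p.rows) AS [RowCount]
FROM sys.tables AS t
INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
INNER JOIN sys.partitions AS p ON p.object_id = t.object_id
WHERE p.index_id IN (0, 1) -- heap or clustered index only, so rows are not double-counted
GROUP BY s.name, t.name
ORDER BY [RowCount] DESC;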