For a bit of database-sanity checking code, I'd like to determine whether a particular object_id corresponds to an empty table.
Is there some way to (for instance) select count(*) from magic_operator(my_object_id) or similar?
I'd strongly prefer a pure-SQL solution that can run on MS SQL Server 2008.
You can get a rough idea from
SELECT SUM(rows)
FROM sys.partitions p
WHERE p.index_id < 2 AND p.object_id = @my_object_id
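For example, a quick sketch that turns this approximate count into a yes/no flag; the table name used to look up the object id is just an assumed example:
DECLARE @my_object_id int
SET @my_object_id = OBJECT_ID(N'dbo.SomeTable')  -- assumed table name

SELECT CASE WHEN SUM(p.rows) = 0 THEN 1 ELSE 0 END AS IsProbablyEmpty
FROM sys.partitions p
WHERE p.index_id < 2
AND p.object_id = @my_object_id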
If you want guaranteed accuracy you would need to construct and execute a dynamic SQL string containing the two-part object name. An example is below, though depending on how you are using this you may prefer to use sp_executesql and return the result as an output parameter instead (a sketch of that variant follows the example).
DECLARE @DynSQL nvarchar(max) =
N'SELECT CASE WHEN EXISTS(SELECT * FROM ' +
QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + '.' +
QUOTENAME(OBJECT_NAME(@my_object_id)) +
') THEN 0 ELSE 1 END AS IsEmpty'
EXECUTE (@DynSQL)
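A rough sketch of the sp_executesql variant mentioned above, which returns the flag through an output parameter instead of a result set (as in the snippet above, @my_object_id is assumed to be supplied by the caller):

DECLARE @DynSQL nvarchar(max), @IsEmpty bit

SET @DynSQL =
    N'SELECT @IsEmpty = CASE WHEN EXISTS(SELECT * FROM ' +
    QUOTENAME(OBJECT_SCHEMA_NAME(@my_object_id)) + N'.' +
    QUOTENAME(OBJECT_NAME(@my_object_id)) +
    N') THEN 0 ELSE 1 END'

EXEC sp_executesql @DynSQL,
     N'@IsEmpty bit OUTPUT',
     @IsEmpty = @IsEmpty OUTPUT

SELECT @IsEmpty AS IsEmpty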
Well, it depends on what you consider pure SQL.
I've come up with the following solution. It is written purely in T-SQL, but it uses a dynamically built query.
-- Using variables just for better readability.
DECLARE @Name NVARCHAR(4000)
DECLARE @Schema NVARCHAR(4000)
DECLARE @Query NVARCHAR(4000)
-- Get the relevant data
SET @Schema = QUOTENAME(OBJECT_SCHEMA_NAME(613577224))
SET @Name = QUOTENAME(OBJECT_NAME(613577224))
-- Build query taking into consideration the schema and possible poor object naming
SET @Query = 'SELECT COUNT(*) FROM ' + @Schema + '.' + @Name
-- execute it.
EXEC(@Query)
EDIT
The changes take into account the possible faulty cases described in the comments.
I've broken the pieces out into variables because that is a convenient approach for me. Cheers.
Related
I am trying to store the result of an SQL query into a variable. The query simply detects the data type of a column, so the returned result is a single varchar.
SET @SQL =
'declare @@x varchar(max) SET @@x = (select DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE Table_name = ' +char(39)+@TabName+char(39) +
' AND column_name = ' +char(39)+@colName+char(39) + ')'
EXECUTE (@SQL)
Anything within the 'SET declaration' cannot access any variables outside of it and vice versa, so I am stuck on how to store the results of this query in a varchar variable to be accessed by other parts of the stored procedure.
You don't need a dynamic query to achieve what you want; the query below will give the same result as yours.
declare @x varchar(max)
declare @tableName varchar(100), @ColumnName varchar(50)
set @tableName = 'Employee'
set @ColumnName = 'ID'
select @x = DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
where
Table_Name = @tableName
and column_name = @ColumnName
select @x
All user-defined variables in T-SQL have private, local scope only. They cannot be seen by any other execution context, not even nested ones (unlike #temp tables, which can be seen by nested scopes). Using "@@" to try to trick it into making a global variable doesn't work.
If you want to execute dynamic SQL and return information there are several ways to do it:
Use sp_executesql and make one of the parameters an OUTPUT parameter (recommended for single values).
Make a #Temp table before calling the dynamic SQL and then have the dynamic SQL write to the same #Temp table (recommended for multiple values/rows).
Use the INSERT..EXEC statement to execute your dynamic SQL, which returns its information as the output of a SELECT statement; if the INSERT table has the same format as the dynamic SQL's SELECT output, the rows will be inserted into your table (see the sketch after this list).
If you want to return only an integer value, you can do this through the RETURN statement in dynamic SQL, and receive it via @val = EXEC('...').
Use the session context-info buffer (not recommended).
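A rough sketch of the INSERT..EXEC option; the table variable, column name, and sample query here are assumptions for illustration, not from the original post:

-- Capture the result set of dynamic SQL by inserting it into a table
-- you control, then query that table afterwards.
DECLARE @sql nvarchar(max)
SET @sql = N'SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ''Employee'''

DECLARE @results TABLE (DataType nvarchar(128))

INSERT INTO @results (DataType)
EXEC (@sql)

SELECT DataType FROM @results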
However, as others have pointed out, you shouldn't actually need dynamic SQL for what you are showing us here. You can do just this with:
SET @x = ( SELECT DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE Table_name = @TabName
AND column_name = @colName )
You may want to consider using the sp_executesql stored procedure for dynamic sql.
The following link provides a good usage example of sp_executesql procedure with output parameters:
http://support.microsoft.com/kb/262499
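For example, a minimal sketch of that pattern applied to the data-type lookup above; the sample values assigned to @TabName and @colName are assumptions:

DECLARE @SQL nvarchar(max), @x varchar(max)
DECLARE @TabName sysname, @colName sysname
SET @TabName = 'Employee'   -- assumed sample value
SET @colName = 'ID'         -- assumed sample value

SET @SQL = N'SELECT @x = DATA_TYPE
             FROM INFORMATION_SCHEMA.COLUMNS
             WHERE TABLE_NAME = @TabName AND COLUMN_NAME = @colName'

EXEC sp_executesql @SQL,
     N'@TabName sysname, @colName sysname, @x varchar(max) OUTPUT',
     @TabName = @TabName, @colName = @colName, @x = @x OUTPUT

SELECT @x AS ColumnDataType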
I have a table in SQL Server 2008 which contains custom validation criteria in the form of expressions stored as text, e.g.
StagingTableID  CustomValidation
--------------------------------------------------------------------------
1               LEN([mobile])<=30
3               [Internal/External] IN ('Internal','External')
3               ([Internal/External] <> 'Internal') OR (LEN([Contact Name])<=100)
...
I am interested in determining whether all rows in a table pass the conditional statement. For this purpose I am writing a validation stored procedure which checks whether all values in a given field in a given table meet the given condition(s). SQL is not my forte, so after reading this question, this is my first stab at the problem:
EXEC sp_executesql N'SELECT @passed = 0 WHERE EXISTS (' +
N'SELECT * FROM (' +
N'SELECT CASE WHEN ' + @CustomValidationExpr + N' THEN 1 ' +
N'ELSE 0 END AS ConditionalTest ' +
N'FROM ' + @StagingTableName +
N')t ' +
N'WHERE t.ConditionalTest = 0)'
,N'@passed BIT OUTPUT'
,@passed = @PassedCustomValidation OUTPUT
However, I'm not sure if the nested queries can be re-written as one, or if there is an entirely better way for testing for validity of all rows in this scenario?
Thanks in advance!
You should be able to reduce by at least one subquery like this:
EXEC sp_executesql N'SELECT @passed = 0 WHERE EXISTS (' +
N'SELECT 1 FROM ' + @StagingTableName +
N' WHERE NOT(' + @CustomValidationExpr + N'))'
,N'@passed BIT OUTPUT'
,@passed = @PassedCustomValidation OUTPUT
Before we answer the original question, have you looked into implementing constraints? This will prevent bad data from entering your database in the first place. Or is the point that these must be dynamically set in the application?
ALTER TABLE StagingTable
WITH CHECK ADD CONSTRAINT [StagingTable$MobileValidLength]
CHECK (LEN([mobile])<=30)
GO
ALTER TABLE StagingTable
WITH CHECK ADD CONSTRAINT [StagingTable$InternalExternalValid]
CHECK ([Internal/External] IN ('Internal','External'))
GO
--etc...
You need to concatenate the expressions together. I agree with @PinnyM that a where clause is easier for full table validation. However, the next question will be how to identify which rows fail which tests. I'll wait for you to ask that question before answering it (ask it as a separate question and not as an edit to this one).
To create the where clause, something like this:
declare @WhereClause nvarchar(max);
select @WhereClause = (select CustomValidation+' and '
from Validations v
for xml path ('')
) + '1=1'
select @WhereClause = replace(replace(@WhereClause, '&lt;', '<'), '&gt;', '>')
This strange construct, with the for xml path('') and the double select, is the most convenient way to concatenate values in SQL Server.
Also, put together your query before doing the sp_executesql call. It gives you more flexibility:
declare @sql nvarchar(max);
select @sql = '
select @passed = count(*)
from '+@StagingTableName+'
where '+@WhereClause
That gives the number of rows that pass all the validation tests. The where clause for the failures is:
declare @WhereClause nvarchar(max);
select @WhereClause = (select 'not '+CustomValidation+' or '
from Validations v
for xml path ('')
) + '1=0'
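And a short sketch of using that failure clause to pull back the offending rows (the same assumed variables as above):

declare @sql nvarchar(max);
select @sql = '
select *
from '+@StagingTableName+'
where '+@WhereClause

exec sp_executesql @sql;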
I need to create a search for a java app I'm building where users can search through a SQL database based on the table they're currently viewing and a search term they provide. At first I was going to do something simple like this:
SELECT * FROM <table name> WHERE CAST((SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table name>')
AS VARCHAR) LIKE '%<search term>%'
but that subquery returns more than one result, so then I tried to make a procedure to loop through all the columns in a given table and put any relevant fields in a results table, like this:
CREATE PROC sp_search
@tblname VARCHAR(4000),
@term VARCHAR(4000)
AS
SET nocount on
SELECT COLUMN_NAME
INTO #tempcolumns
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @tblname
ALTER TABLE #tempcolumns
ADD printed BIT,
num SMALLINT IDENTITY
UPDATE #tempcolumns
SET printed = 0
DECLARE @colname VARCHAR(4000),
@num SMALLINT
WHILE EXISTS(SELECT MIN(num) FROM #tempcolumns WHERE printed = 0)
BEGIN
SELECT @num = MIN(num)
FROM #tempcolumns
WHERE printed = 0
SELECT @colname = COLUMN_NAME
FROM #tempcolumns
WHERE num = @num
SELECT * INTO #results FROM @tblname WHERE CAST(@colname AS VARCHAR)
LIKE '%' + @term + '%' --this is where I'm having trouble
UPDATE #tempcolumns
SET printed = 1
WHERE @num = num
END
SELECT * FROM #results
GO
This has two problems: first, it gets stuck in an infinite loop somehow, and second, I can't select anything from @tblname. I tried using dynamic SQL as well, but I don't know how to get results from that or if that's even possible.
This is for an assignment I'm doing at college and I've gotten this far after hours of trying to figure it out. Is there any way to do what I want to do?
You need to only search columns that actually contain strings, not all columns in a table (which may include integers, dates, GUIDs, etc).
You shouldn't need a #temp table (and certainly not a ##temp table) at all.
You need to use dynamic SQL (though I'm not sure if this has been part of your curriculum so far).
I find it beneficial to follow a few simple conventions, all of which you've violated:
use PROCEDURE not PROC - it's not a "prock," it's a "stored procedure."
use dbo. (or alternate schema) prefix when referencing any object.
wrap your procedure body in BEGIN/END.
use vowels liberally. Are you saving that many keystrokes, never mind time, saying @tblname instead of @tablename or @table_name? I'm not fighting for a specific convention but saving characters at the cost of readability lost its charm in the 70s.
don't use the sp_ prefix for stored procedures - this prefix has special meaning in SQL Server. Name the procedure for what it does. It doesn't need a prefix, just like we know they're tables even without a tbl prefix. If you really need a prefix there, use another one like usp_ or proc_ but I personally don't feel that prefix gives you any information you don't already have.
since table names are stored using Unicode (and some of your columns might be too), your parameters should be NVARCHAR, not VARCHAR. And identifiers are capped at 128 characters, so there is no reason to support > 257 characters for @tablename.
terminate statements with semi-colons.
use the catalog views instead of INFORMATION_SCHEMA - though the latter is what your professor may have taught and might expect.
CREATE PROCEDURE dbo.SearchTable
@tablename NVARCHAR(257),
@term NVARCHAR(4000)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'SELECT * FROM ' + @tablename + ' WHERE 1 = 0';
SELECT @sql = @sql + '
OR ' + c.name + ' LIKE ''%' + REPLACE(@term, '''', '''''') + '%'''
FROM
sys.all_columns AS c
INNER JOIN
sys.types AS t
ON c.system_type_id = t.system_type_id
AND c.user_type_id = t.user_type_id
WHERE
c.[object_id] = OBJECT_ID(@tablename)
AND t.name IN (N'sysname', N'char', N'nchar',
N'varchar', N'nvarchar', N'text', N'ntext');
PRINT @sql;
-- EXEC sp_executesql @sql;
END
GO
When you're happy that it's outputting the SELECT query you're after, comment out the PRINT and uncomment the EXEC.
You get into an infinite loop because EXISTS(SELECT MIN(num) FROM #tempcolumns WHERE printed = 0) will always return a row even if there are no matches (an aggregate with no GROUP BY produces exactly one row, possibly NULL) - you need to use EXISTS (SELECT * ...) instead.
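A self-contained sketch of the corrected loop skeleton; the table and column names follow the question, the sample rows are made up, and the per-column work is elided:

CREATE TABLE #tempcolumns (num SMALLINT IDENTITY, COLUMN_NAME SYSNAME, printed BIT)
INSERT #tempcolumns (COLUMN_NAME, printed) VALUES ('first_column', 0)
INSERT #tempcolumns (COLUMN_NAME, printed) VALUES ('second_column', 0)

DECLARE @num SMALLINT

-- A plain SELECT returns no rows once every column is marked printed,
-- so EXISTS becomes false and the loop terminates.
WHILE EXISTS (SELECT * FROM #tempcolumns WHERE printed = 0)
BEGIN
    SELECT @num = MIN(num) FROM #tempcolumns WHERE printed = 0
    -- ... do the per-column work here ...
    UPDATE #tempcolumns SET printed = 1 WHERE num = @num
END

DROP TABLE #tempcolumns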
To use dynamic SQL, you need to build up a string (varchar) of the SQL statement you want to run, then you call it with EXEC.
e.g.:
declare @s varchar(max)
select @s = 'SELECT * FROM mytable '
Exec (@s)
I'm doing a basic query where I'm doing
Select Lower(column-name) from table
Now if I want to do a lower case on more than one column, I would need to do a
Select Lower(col1), Lower(col2) from table
I wanted to know if it's possible to do lowercase function application on all columns.
Something like
Select Lower(*) from table
This is not a valid statement when I try it on sqlite3, and I'm guessing it's the same for other vendors too. Has anybody been able to do this via a different approach? PL/SQL or T-SQL maybe.
No. It isn't possible to fetch all columns in lower case all at once.
I concur that there is no way to do this without mentioning each column. But since your consuming application will presumably have to loop through all columns and rows anyway, I suggest that is the right place to transform this data...
While it's not possible to use a built-in SQL construct to achieve this, a small SQL script can serve the purpose. Consider the script given below:
declare @number_of_columns int
declare @counter int
declare @query nvarchar(max)
declare @column_name nvarchar(max)
set @query = 'select '
set @counter = 1
select @number_of_columns = count(ordinal_position) from information_schema.columns where table_name = '<your table name>'
while(@counter <= @number_of_columns)
begin
select @column_name = column_name from information_schema.columns
where table_name = '<your table name>' and ordinal_position = @counter
if(@counter < @number_of_columns)
begin
Set @query = @query + 'lower(' + (@column_name) + '),'
end
else
begin
Set @query = @query + 'lower(' + (@column_name) + ')'
end
set @counter = @counter + 1
end
set @query = @query + ' from <your table name>'
select @query
EXEC sp_executesql @query
Note that the above script will not work as-is for the 'ntext' data type, since an explicit conversion to varchar is required before 'lower' can be applied, and applying 'lower' to data types like 'int' or 'datetime' is unnecessary. So modify the script by applying additional conditions on the data types of the columns, as sketched below.
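For instance, a sketch of the same idea with the column list restricted to string types and built in a single pass instead of the counter loop; the table name is a placeholder you would substitute:

declare @query nvarchar(max)

-- Build "lower(col1), lower(col2), ..." for string-typed columns only
select @query = stuff((
    select ', lower(' + quotename(column_name) + ')'
    from information_schema.columns
    where table_name = '<your table name>'
    and data_type in ('char', 'nchar', 'varchar', 'nvarchar')
    order by ordinal_position
    for xml path('')
    ), 1, 2, '')

set @query = 'select ' + @query + ' from <your table name>'
EXEC sp_executesql @query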
I hope to find a way to get the value in the Nth column of a dataset.
Thus, for N = 6 I want
SELECT (Column6Value) from MyTable where MyTable.RowID = 14
Is there a way to do this in TSQL as implemented in SQL Server 2005? Thanks.
You should be able to join with the system catalog (Information_Schema.Columns) to get the column number.
This works:
create table test (a int, b int, c int)
insert test values(1,2,3)
declare @column_number int
set @column_number = 2
declare @query varchar(8000)
select @query = COLUMN_NAME from information_Schema.Columns
where TABLE_NAME = 'test' and ORDINAL_POSITION = @column_number
set @query = 'select ' + @query + ' from test'
exec(@query)
But why you would ever do something like this is beyond me - what problem are you trying to solve?
Not sure if you're at liberty to redesign the table, but if the ordinal position of the column is significant, your data is not normalized and you're going to have to jump through lots of hoops for many common tasks.
Instead of having table MyTable with Column1... ColumnN you'd have a child table of those values you formerly stored in Column1...ColumnN each in their own row.
For those times when you really need those values in a single row, you could then do a PIVOT: Link
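For illustration, a small sketch of that normalized shape and the pivot back out; the child table and column names here are assumptions, not from the question:

-- Assumed normalized child table: one row per (RowID, ColumnPosition, Value)
-- instead of Column1..ColumnN on MyTable.
SELECT Value                -- "the value in the 6th column" becomes a plain lookup
FROM dbo.MyTableValue
WHERE RowID = 14 AND ColumnPosition = 6

-- And when a single wide row really is needed, pivot the values back out:
SELECT RowID, [1] AS Column1Value, [2] AS Column2Value, [6] AS Column6Value
FROM (SELECT RowID, ColumnPosition, Value FROM dbo.MyTableValue) AS src
PIVOT (MAX(Value) FOR ColumnPosition IN ([1], [2], [6])) AS p
WHERE RowID = 14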
Edit: My suggestion is somewhat moot. Ash clarified that it's "de-normalization by design, it's a pivot model where each row can contain one of any four data types." Yeah, that kind of design can be cumbersome when you normalize it.
If you know the range of n you could use a case statement
Select Case when @n = 1 then Column1Value when @n = 2 then Column2Value end
from MyTable
As far as I know there is no dynamic way to replace a column (or table) in a select statement without resorting to dynamic SQL (in which case you should probably refactor anyway).
Implementation of @Mike Sharek's answer.
Declare @columnName varchar(255),
@tablename varchar(255), @columnNumber int, @SQL nvarchar(4000)
Set @tablename = 'MyTable'
Set @columnNumber = 6
Select @columnName = Column_Name from Information_Schema.columns
where Ordinal_position = @columnNumber and Table_Name = @tablename
Set @SQL = 'select ' + @columnName + ' from ' + @tablename + ' where RowID=14'
Exec sp_executesql @SQL
I agree with Sambo - why are you trying to do this? If you are calling the code from C# or VB, it's much easier to grab the 6th column from a resultset.