How to map a column if it exists, otherwise empty - NHibernate

I'm trying to map a property to a Formula that selects the column's value if the column exists, otherwise a default value.
I tried the following
mapper.Map(x => x.GroupId).Formula("(select case when exists (select * from INFORMATION_SCHEMA.COLUMNS SYS_COLS_TBL WHERE SYS_COLS_TBL.TABLE_NAME ='Azure' AND SYS_COLS_TBL.COLUMN_NAME = 'GroupId') then this_.GroupId else '' end)");
I get an error
SqlException: Invalid column name 'GroupId'.

This SQL statement will return all tables, which do have name 'Azure'
select * from INFORMATION_SCHEMA.COLUMNS SYS_COLS_TBL
WHERE SYS_COLS_TBL.TABLE_NAME ='Azure'
AND SYS_COLS_TBL.COLUMN_NAME = 'GroupId'
but there could be, e.g., two such tables in different schemas. If that happens, we could be querying dbo.Azure while SomeOtherSchema.Azure is the one that has the GroupId column. This will fix it:
select * from INFORMATION_SCHEMA.COLUMNS SYS_COLS_TBL
WHERE SYS_COLS_TBL.TABLE_NAME ='Azure'
AND SYS_COLS_TBL.COLUMN_NAME = 'GroupId'
AND SYS_COLS_TBL.TABLE_SCHEMA = 'dbo'
Also, to support more complex querying (JOIN associations), remove the this_. prefix:
instead of:
then this_.GroupId else '' end)
use:
then GroupId else '' end)
NHibernate will provide the proper alias based on the context.
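Putting both suggestions together, the Formula string would read roughly like this (a sketch only; the dbo schema and the empty-string default are assumptions carried over from the question, and the then GroupId branch still requires the column to actually exist when the statement runs):
(select case when exists (select * from INFORMATION_SCHEMA.COLUMNS SYS_COLS_TBL
WHERE SYS_COLS_TBL.TABLE_NAME = 'Azure'
AND SYS_COLS_TBL.COLUMN_NAME = 'GroupId'
AND SYS_COLS_TBL.TABLE_SCHEMA = 'dbo')
then GroupId else '' end)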

The SQL statement requires that all referenced tables and columns exist, which is why you're getting the Invalid column name 'GroupId' error.
The usual way to do something like that would be to use dynamic SQL (sp_executesql @sql):
DECLARE @sql NVARCHAR(MAX) = '
SELECT '+
(CASE WHEN EXISTS (SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND --assuming the schema is dbo
TABLE_NAME = 'Azure' AND
COLUMN_NAME = 'GroupId'
)
then 'GroupId'
else 'NULL as GroupId'
end) + '
FROM Azure';
exec sp_executesql @sql
But this would not work, because you are already in the context of an SQL statement and it is not possible to inject dynamic SQL there.
To solve the problem there are a few solutions:
Correct solution: create the missing columns in the database.
Painful solution: map the object to a stored procedure which will execute dynamic SQL similar to the above.
Insanely painful solution: Map your possible missing columns as normal. Before building the factory - introspect your model and either remove mappings for the columns missing or change their mapping to formulas. This would be similar to Configuration.ValidateSchema method of NHibernate, but instead of throwing errors - you'd need to remove these columns from the mapping.

Related

SQL joining huge tables by excluding just one column in select statement [duplicate]

I'm trying to use a select statement to get all of the columns from a certain MySQL table except one. Is there a simple way to do this?
EDIT: There are 53 columns in this table (NOT MY DESIGN)
Actually there is a way, you need to have permissions of course for doing this ...
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '<columns_to_omit>,', '') FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table>' AND TABLE_SCHEMA = '<database>'), ' FROM <table>');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
Replacing <table>, <database> and <columns_to_omit>
(Do not try this on a big table, the result might be... surprising !)
TEMPORARY TABLE
DROP TABLE IF EXISTS temp_tb;
CREATE TEMPORARY TABLE ENGINE=MEMORY temp_tb SELECT * FROM orig_tb;
ALTER TABLE temp_tb DROP col_a, DROP col_f,DROP col_z; #// MySQL
SELECT * FROM temp_tb;
DROP syntax may vary for databases @Denis Rozhnev
Would a View work better in this case?
CREATE VIEW vwTable
as
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
You can do:
SELECT column1, column2, column4 FROM table WHERE whatever
without getting column3, though perhaps you were looking for a more general solution?
If you are looking to exclude the value of a field, e.g. for security concerns / sensitive info, you can retrieve that column as null.
e.g.
SELECT *, NULL AS salary FROM users
To the best of my knowledge, there isn't. You can do something like:
SELECT col1, col2, col3, col4 FROM tbl
and manually choose the columns you want. However, if you want a lot of columns, then you might just want to do a:
SELECT * FROM tbl
and just ignore what you don't want.
In your particular case, I would suggest:
SELECT * FROM tbl
unless you only want a few columns. If you only want four columns, then:
SELECT col3, col6, col45, col 52 FROM tbl
would be fine, but if you want 50 columns, then any code that makes the query would become (too?) difficult to read.
While trying the solutions by @Mahomedalid and @Junaid I found a problem, so I thought of sharing it. If the column name has spaces or hyphens, like check-in, then the query will fail. The simple workaround is to use backticks around column names. The modified query is below:
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(CONCAT("`", COLUMN_NAME, "`")) FROM
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
If the column that you didn't want to select had a massive amount of data in it, and you didn't want to include it due to speed issues and you select the other columns often, I would suggest that you create a new table with the one field that you don't usually select with a key to the original table and remove the field from the original table. Join the tables when that extra field is actually required.
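A minimal sketch of that idea (the table and column names here are made up for illustration):
-- move the rarely-needed big column into its own table keyed to the original
CREATE TABLE users_bio (user_id INT PRIMARY KEY, bio TEXT);
-- everyday queries stay narrow and fast
SELECT * FROM users;
-- join only when the big column is actually required
SELECT u.*, b.bio
FROM users u
LEFT JOIN users_bio b ON b.user_id = u.user_id;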
You could use DESCRIBE my_table and use the results of that to generate the SELECT statement dynamically.
My main problem is the many columns I get when joining tables. While this is not the answer to your question (how to select all but certain columns from one table), I think it is worth mentioning that you can specify table.* to get all columns from a particular table, instead of just specifying *.
Here is an example of how this could be very useful:
select users.*, phone.meta_value as phone, zipcode.meta_value as zipcode
from users
left join user_meta as phone
on ( (users.user_id = phone.user_id) AND (phone.meta_key = 'phone') )
left join user_meta as zipcode
on ( (users.user_id = zipcode.user_id) AND (zipcode.meta_key = 'zipcode') )
The result is all the columns from the users table, and two additional columns which were joined from the meta table.
I liked the answer from @Mahomedalid, besides the fact informed in a comment by @Bill Karwin. The possible problem raised by @Jan Koritak is true; I faced it, but I found a trick for it and just want to share it here for anyone facing the issue.
We can replace the REPLACE function with a WHERE clause in the sub-query of the prepared statement, like this:
Using my table and column name
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
So, this is going to exclude only the field id but not company_id
Yes, though it can be high I/O depending on the table. Here is a workaround I found for it.
SELECT *
INTO #temp
FROM table
ALTER TABLE #temp DROP COLUMN column_name
SELECT *
FROM #temp
It is good practice to specify the columns that you are querying even if you query all the columns.
So I would suggest you write the name of each column in the statement (excluding the one you don't want).
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
I agree with the "simple" solution of listing all the columns, but this can be burdensome, and typos can cause lots of wasted time. I use a function "getTableColumns" to retrieve the names of my columns suitable for pasting into a query. Then all I need to do is to delete those I don't want.
CREATE FUNCTION `getTableColumns`(tablename varchar(100))
RETURNS varchar(5000) CHARSET latin1
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE res VARCHAR(5000) DEFAULT "";
DECLARE col VARCHAR(200);
DECLARE cur1 CURSOR FOR
select COLUMN_NAME from information_schema.columns
where TABLE_NAME=tablename AND TABLE_SCHEMA="yourdatabase" ORDER BY ORDINAL_POSITION;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
REPEAT
FETCH cur1 INTO col;
IF NOT done THEN
set res = CONCAT(res,IF(LENGTH(res)>0,",",""),col);
END IF;
UNTIL done END REPEAT;
CLOSE cur1;
RETURN res;
END
Your result returns a comma delimited string, for example...
col1,col2,col3,col4,...col53
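For example, with the function above installed (and "yourdatabase" edited to your real schema name), a call like this returns the paste-able list for a hypothetical users table:
SELECT getTableColumns('users');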
I agree that it isn't sufficient to Select * if the one column you don't need, as mentioned elsewhere, is a BLOB; you don't want that overhead to creep in.
I would create a view with the required data, then you can Select * in comfort --if the database software supports them. Else, put the huge data in another table.
At first I thought you could use regular expressions, but as I've been reading the MYSQL docs it seems you can't. If I were you I would use another language (such as PHP) to generate a list of columns you want to get, store it as a string and then use that to generate the SQL.
Based on @Mahomedalid's answer, I have done some improvements to support "select all columns except some in mysql"
SET @database = 'database_name';
SET @tablename = 'table_name';
SET @cols2delete = 'col1,col2,col3';
SET @sql = CONCAT(
'SELECT ',
(
SELECT GROUP_CONCAT( IF(FIND_IN_SET(COLUMN_NAME, @cols2delete), NULL, COLUMN_NAME ) )
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @tablename AND TABLE_SCHEMA = @database
),
' FROM ',
@tablename);
SELECT @sql;
If you do have a lot of columns, use this SQL to change group_concat_max_len:
SET @@group_concat_max_len = 2048;
I agree with @Mahomedalid's answer, but I didn't want to do something like a prepared statement and I didn't want to type all the fields, so what I had was a silly solution.
Go to the table in phpmyadmin->sql->select, it dumps the query: copy, replace and done! :)
While I agree with Thomas' answer (+1 ;)), I'd like to add the caveat that I'll assume the column that you don't want contains hardly any data. If it contains enormous amounts of text, xml or binary blobs, then take the time to select each column individually. Your performance will suffer otherwise. Cheers!
Just do
SELECT * FROM table WHERE whatever
Then drop the column in your favourite programming language: PHP
while (($data = mysql_fetch_array($result, MYSQL_ASSOC)) !== FALSE) {
unset($data["id"]);
foreach ($data as $k => $v) {
echo"$v,";
}
}
The answer posted by Mahomedalid has a small problem:
The code inside the REPLACE function was replacing "<columns_to_delete>," with "". This replacement has a problem if the field to remove is the last one in the concatenated string, because the last one doesn't have a trailing comma "," and so is not removed from the string.
My proposal:
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME),
'<columns_to_delete>', '\'FIELD_REMOVED\'')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '<table>'
AND TABLE_SCHEMA = '<database>'), ' FROM <table>');
Replacing <table>, <database> and <columns_to_delete>
The column removed is replaced by the string "FIELD_REMOVED" in my case this works because I was trying to safe memory. (The field I was removing is a BLOB of around 1MB)
You can use SQL to generate SQL if you like and evaluate the SQL it produces. This is a general solution as it extracts the column names from the information schema. Here is an example from the Unix command line.
Substituting
MYSQL with your mysql command
TABLE with the table name
EXCLUDEDFIELD with excluded field name
echo $(echo 'select concat("select ", group_concat(column_name) , " from TABLE") from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1) | MYSQL
You will really only need to extract the column names in this way once, to construct the column list excluding that column, and then just use the query you have constructed.
So something like:
column_list=$(echo 'select group_concat(column_name) from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1)
Now you can reuse the $column_list string in queries you construct.
I wanted this too so I created a function instead.
public function getColsExcept($table,$remove){
$res =mysql_query("SHOW COLUMNS FROM $table");
while($arr = mysql_fetch_assoc($res)){
$cols[] = $arr['Field'];
}
if(is_array($remove)){
$newCols = array_diff($cols,$remove);
return "`".implode("`,`",$newCols)."`";
}else{
$length = count($cols);
for($i=0;$i<$length;$i++){
if($cols[$i] == $remove)
unset($cols[$i]);
}
return "`".implode("`,`",$cols)."`";
}
}
So how it works is that you enter the table, then a column you don't want, or several columns as an array: array("id","name","whatevercolumn")
So in select you could use it like this:
mysql_query("SELECT ".$db->getColsExcept('table',array('id','bigtextcolumn'))." FROM table");
or
mysql_query("SELECT ".$db->getColsExcept('table','bigtextcolumn')." FROM table");
Maybe I have a solution to the discrepancy Jan Koritak pointed out:
SELECT CONCAT('SELECT ',
( SELECT GROUP_CONCAT(t.col)
FROM
(
SELECT CASE
WHEN COLUMN_NAME = 'eid' THEN NULL
ELSE COLUMN_NAME
END AS col
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
) t
WHERE t.col IS NOT NULL) ,
' FROM employee' );
Table :
SELECT table_name,column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
================================
table_name column_name
employee eid
employee name_eid
employee sal
================================
Query Result:
'SELECT name_eid,sal FROM employee'
I use this work around although it may be "Off topic" - using mysql workbench and the query builder -
Open the columns view
Shift-select all the columns you want in your query (in your case all but one, which is what I do)
Right click and select send to SQL Editor-> name short.
Now you have the list and you can then copy paste the query to where ever.
If it's always the same one column, then you can create a view that doesn't have it in it.
Otherwise, no I don't think so.
I would like to add another point of view in order to solve this problem, specially if you have a small number of columns to remove.
You could use a DB tool like MySQL Workbench in order to generate the select statement for you, so you just have to manually remove those columns for the generated statement and copy it to your SQL script.
In MySQL Workbench the way to generate it is:
Right click on the table -> send to Sql Editor -> Select All Statement.
The accepted answer has several shortcomings.
It fails where the table or column names require backticks
It fails if the column you want to omit is last in the list
It requires listing the table name twice (once for the select and another for the query text) which is redundant and unnecessary
It can potentially return column names in the wrong order
All of these issues can be overcome by simply including backticks in the SEPARATOR for your GROUP_CONCAT and using a WHERE condition instead of REPLACE(). For my purposes (and I imagine many others') I wanted the column names returned in the same order that they appear in the table itself. To achieve this, here we use an explicit ORDER BY clause inside of the GROUP_CONCAT() function:
SELECT CONCAT(
'SELECT `',
GROUP_CONCAT(COLUMN_NAME ORDER BY `ORDINAL_POSITION` SEPARATOR '`,`'),
'` FROM `',
`TABLE_SCHEMA`,
'`.`',
TABLE_NAME,
'`;'
)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE `TABLE_SCHEMA` = 'my_database'
AND `TABLE_NAME` = 'my_table'
AND `COLUMN_NAME` != 'column_to_omit';
I have a suggestion but not a solution.
If some of your columns have a larger data sets then you should try with following
SELECT *, LEFT(col1, 0) AS col1, LEFT(col2, 0) as col2 FROM table
If you use MySQL Workbench you can right-click your table and click Send to SQL editor and then Select All Statement. This will create a statement where all fields are listed, like this:
SELECT `purchase_history`.`id`,
`purchase_history`.`user_id`,
`purchase_history`.`deleted_at`
FROM `fs_normal_run_2`.`purchase_history`;
SELECT * FROM fs_normal_run_2.purchase_history;
Now you can just remove those that you don't want.

Trouble Getting Columns Names to Variable in SSIS Execute SQL Task

I'm attempting to validate some column headings before the import of a monthly data set. I've set up an Execute SQL Task that's supposed to retrieve the column headings of the prior month's table and store it in Header_Row as a single string with the field names separated by commas. The query runs just fine in SQL Server, but when running in SSIS, it throws the following error:
"The type of the value (Empty) being assigned to variable 'User:Header_Row' differs from the current variable type (String)."
1) Does this mean that I'm not getting anything back from my query?
2) Is there another method I should be using in SSIS to get the query results I'm looking for?
3) Is there an issue with me using the variable reference in my query as a portion of a string? I think the answer is yes, but would like to confirm, as my variable was still empty after changing this.
Original Query:
SELECT DISTINCT
STUFF((
SELECT
',' + COLUMN_NAME
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS aa
WHERE
TABLE_NAME = 'dt_table_?'
ORDER BY
aa.ORDINAL_POSITION
FOR
XML PATH('')
), 1, 1, '') AS Fields
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS a;
EDIT: After changing the variable to cover the full table name, I have a new error saying "The value type (__ComObject) can only be converted to variables of the type Object."
Final Query:
SELECT DISTINCT
CAST(STUFF((
SELECT
',' + COLUMN_NAME
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS aa
WHERE
TABLE_NAME = ?
ORDER BY
aa.ORDINAL_POSITION
FOR
XML PATH('')
), 1, 1, '') As varchar(8000)) AS Fields
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS a;
You are attempting to parameterize your query. Proper query parameterization is useful for avoiding SQL Injection attacks and the like.
Your query is looking for a TABLE_NAME that is literally 'dt_table_?' That's probably not what you want.
For laziness, I'd just rewrite it as
DECLARE @tname sysname = 'dt_table_' + ?;
SELECT DISTINCT
STUFF((
SELECT
',' + COLUMN_NAME
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS aa
WHERE
TABLE_NAME = @tname
ORDER BY
aa.ORDINAL_POSITION
FOR
XML PATH('')
), 1, 1, '') AS Fields
FROM
db_Analytics.INFORMATION_SCHEMA.COLUMNS a;
If that's not working, you might need to use an Expression to build out the query.
I'm really pretty sure that this is your problem:
TABLE_NAME = 'dt_table_?'
I'm guessing this is an attempt to parameterize the query, but having the question mark inside the single-quote will cause the question mark to be taken literally.
Try like this instead:
TABLE_NAME = ?
And when you populate the variable that you use as the parameter value, include the 'dt_table_' part in the value of the variable.
EDIT:
Also in your ResultSet assignment, try changing "Fields" to "0" in the Result Name column.
There are two issues with the query above:
1) The query in the task was not properly parameterized. I fixed this by putting the full name of the prior month's table into the variable.
2) The default length of the result was MAX, which was causing an issue when SSIS would try to put it into my variable, Header_Row. I fixed this by casting the result of the query as varchar(8000).
Thanks for the help everyone.

How can a column be renamed in results when getting columns from INFORMATION_SCHEMA?

So I have this:
CREATE PROCEDURE getBattingColumnNames
AS
SELECT COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Batting'
AND COLUMN_NAME NOT IN ('playerID','yearID','stint','teamID','lgID', 'G_batting','GIDP','G_old');
GO
And it works great. I get all the column names that I want; in C# I use this to populate a drop-down with the column names. However, one of my column names, "Doub", I would like to change. So, playing around with it, I tried:
SELECT COLUMN_NAME.Doub AS 'DB'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE (TABLE_NAME = 'Batting')
and a variation of that, and the error is that the multi-part identifier could not be bound. How can I change that column name in this query?
You could use a case to translate the column name:
select case COLUMN_NAME
when 'Doub' then 'DB'
else COLUMN_NAME
end
from ...
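Applied to the procedure in the question, the full query might look like this (same table name and exclusion list as above):
SELECT CASE COLUMN_NAME
WHEN 'Doub' THEN 'DB'
ELSE COLUMN_NAME
END AS COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Batting'
AND COLUMN_NAME NOT IN ('playerID','yearID','stint','teamID','lgID','G_batting','GIDP','G_old');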

How do I return the SQL data types from my query?

I've a SQL query that queries an enormous (as in, hundreds of views/tables with hard-to-read names like CMM-CPP-FAP-ADD) database that I don't need nor want to understand. The result of this query needs to be stored in a staging table to feed a report.
I need to create the staging table, but with hundreds of views/tables to dig through to find the data types that are being represented here, I have to wonder if there's a better way to construct this table.
Can anyone advise how I would use any of the SQL Server 2008 tools to divine the source data types in my SQL 2000 database?
As a general example, I want to know from a query like:
SELECT Auth_First_Name, Auth_Last_Name, Auth_Favorite_Number
FROM Authors
Instead of the actual results, I want to know that:
Auth_First_Name is char(25)
Auth_Last_Name is char(50)
Auth_Favorite_Number is int
I'm not interested in constraints, I really just want to know the data types.
select * from information_schema.columns
could get you started.
For SQL Server 2012 and above: If you place the query into a string then you can get the result set data types like so:
DECLARE @query nvarchar(max) = 'select 12.1 / 10.1 AS [Column1]';
EXEC sp_describe_first_result_set @query, null, 0;
You could also insert the results (or top 10 results) into a temp table and get the columns from the temp table (as long as the column names are all different).
SELECT TOP 10 *
INTO #TempTable
FROM <DataSource>
Then use:
EXEC tempdb.dbo.sp_help N'#TempTable';
or
SELECT *
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#TempTable');
Extrapolated from Aaron's answer here.
You can also use...
SQL_VARIANT_PROPERTY()
...in cases where you don't have direct access to the metadata (e.g. a linked server query perhaps?).
SQL_VARIANT_PROPERTY (Transact-SQL)
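A quick sketch of that approach, reusing the Authors columns from the question (property names such as 'BaseType' and 'MaxLength' are documented properties of SQL_VARIANT_PROPERTY):
SELECT TOP 1
SQL_VARIANT_PROPERTY(CAST(Auth_First_Name AS sql_variant), 'BaseType')  AS first_name_type,
SQL_VARIANT_PROPERTY(CAST(Auth_First_Name AS sql_variant), 'MaxLength') AS first_name_max_length,
SQL_VARIANT_PROPERTY(CAST(Auth_Favorite_Number AS sql_variant), 'BaseType') AS favorite_number_type
FROM Authors;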
In SQL Server 2005 and beyond you are better off using the catalog views (sys.columns) as opposed to INFORMATION_SCHEMA. Unless portability to other platforms is important. Just keep in mind that the INFORMATION_SCHEMA views won't change and so they will progressively be lacking information on new features etc. in successive versions of SQL Server.
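For instance (a sketch; the Authors table comes from the question):
SELECT c.name AS column_name,
       TYPE_NAME(c.user_type_id) AS data_type,
       c.max_length,
       c.[precision],
       c.[scale],
       c.is_nullable
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.Authors');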
There MUST be an easier way to do this... Lo and behold, there is...!
"sp_describe_first_result_set" is your friend!
Now I do realise the question was asked specifically for SQL Server 2000, but I was looking for a similar solution for later versions and discovered some native support in SQL to achieve this.
In SQL Server 2012 onwards cf. "sp_describe_first_result_set" - Link to BOL
I had already implemented a solution using a technique similar to @Trisped's above and ripped it out to implement the native SQL Server implementation.
In case you're not on SQL Server 2012 or Azure SQL Database yet, here's the stored proc I created for pre-2012 era databases:
CREATE PROCEDURE [fn].[GetQueryResultMetadata]
@queryText VARCHAR(MAX)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
--SET NOCOUNT ON;
PRINT @queryText;
DECLARE
@sqlToExec NVARCHAR(MAX) =
'SELECT TOP 1 * INTO #QueryMetadata FROM ('
+
@queryText
+
') T;'
+ '
SELECT
C.Name [ColumnName],
TP.Name [ColumnType],
C.max_length [MaxLength],
C.[precision] [Precision],
C.[scale] [Scale],
C.[is_nullable] IsNullable
FROM
tempdb.sys.columns C
INNER JOIN
tempdb.sys.types TP
ON
TP.system_type_id = C.system_type_id
AND
-- exclude custom types
TP.system_type_id = TP.user_type_id
WHERE
[object_id] = OBJECT_ID(N''tempdb..#QueryMetadata'');
'
EXEC sp_executesql @sqlToExec
END
SELECT COLUMN_NAME,
DATA_TYPE,
CHARACTER_MAXIMUM_LENGTH
FROM information_schema.columns
WHERE TABLE_NAME = 'YOUR_TABLE_NAME'
You can use column aliases for better-looking output.
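For example, the same query with assumed alias names:
SELECT COLUMN_NAME              AS [Column],
       DATA_TYPE                AS [Data Type],
       CHARACTER_MAXIMUM_LENGTH AS [Max Length]
FROM information_schema.columns
WHERE TABLE_NAME = 'YOUR_TABLE_NAME'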
Can you get away with recreating the staging table from scratch every time the query is executed? If so you could use SELECT ... INTO syntax and let SQL Server worry about creating the table using the correct column types etc.
SELECT *
INTO your_staging_table
FROM enormous_collection_of_views_tables_etc
This will give you everything column property related.
SELECT * INTO TMP1
FROM ( SELECT TOP 1 /* rest of your query expression here */ ) AS src;
SELECT o.name AS obj_name, TYPE_NAME(c.user_type_id) AS type_name, c.*
FROM sys.objects AS o
JOIN sys.columns AS c ON o.object_id = c.object_id
WHERE o.name = 'TMP1';
DROP TABLE TMP1;
select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='yourTable';
sp_describe_first_result_set
will help to identify the datatypes of query by analyzing datatypes of first resultset of query
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql?view=sql-server-2017
I use a simple case statement to render results I can use in technical specification documents. This example does not contain every condition you will run into with a database, but it gives you a good template to work with.
SELECT
TABLE_NAME AS 'Table Name',
COLUMN_NAME AS 'Column Name',
CASE WHEN DATA_TYPE LIKE '%char'
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, CHARACTER_MAXIMUM_LENGTH) + ')'
WHEN DATA_TYPE IN ('bit', 'int', 'smallint', 'date')
THEN DATA_TYPE
WHEN DATA_TYPE = 'datetime'
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, DATETIME_PRECISION) + ')'
WHEN DATA_TYPE = 'float'
THEN DATA_TYPE
WHEN DATA_TYPE IN ('numeric', 'money')
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, NUMERIC_PRECISION) + ', ' + CONVERT(VARCHAR, NUMERIC_PRECISION_RADIX) + ')'
END AS 'Data Type',
CASE WHEN IS_NULLABLE = 'NO'
THEN 'NOT NULL'
ELSE 'NULL'
END AS 'PK/LK/NOT NULL'
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY
TABLE_NAME, ORDINAL_POSITION
Checking data types.
The first way to check data types for a SQL Server database is a query against the SYS schema. The query below uses the COLUMNS and TYPES tables:
SELECT C.NAME AS COLUMN_NAME,
TYPE_NAME(C.USER_TYPE_ID) AS DATA_TYPE,
C.IS_NULLABLE,
C.MAX_LENGTH,
C.PRECISION,
C.SCALE
FROM SYS.COLUMNS C
JOIN SYS.TYPES T
ON C.USER_TYPE_ID=T.USER_TYPE_ID
WHERE C.OBJECT_ID=OBJECT_ID('your_table_name');
In this way, you can find data types of columns.
This easy query returns a bit data type. You can use this technique for other data types:
select CAST(0 AS BIT) AS OK
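The same trick works for other types; each CAST fixes the column's data type in the result set (these particular types are just examples):
SELECT CAST(0 AS INT)          AS an_int,
       CAST('' AS VARCHAR(25)) AS a_varchar_25,
       CAST(GETDATE() AS DATETIME) AS a_datetime;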

SELECT * EXCEPT

Is there any RDBMS that implements something like SELECT * EXCEPT? What I'm after is getting all of the fields except a specific TEXT/BLOB field, and I'd like to just select everything else.
Almost daily I complain to my coworkers that someone should implement this... It's terribly annoying that it doesn't exist.
Edit: I understand everyone's concern for SELECT *. I know the risks associated with SELECT *. However, this, at least in my situation, would not be used for any Production level code, or even Development level code; strictly for debugging, when I need to see all of the values easily.
As I've stated in some of the comments, where I work is strictly a commandline shop, doing everything over ssh. This makes it difficult to use any gui tools (external connections to the database aren't allowed), etc etc.
Thanks for the suggestions though.
As others have said, it is not a good idea to do this in a query because it is prone to issues when someone changes the table structure in the future. However, there is a way to do this... and I can't believe I'm actually suggesting this, but in the spirit of answering the ACTUAL question...
Do it with dynamic SQL... this does all the columns except the "description" column. You could easily turn this into a function or stored proc.
declare @sql varchar(8000),
@table_id int,
@col_id int
set @sql = 'select '
select @table_id = id from sysobjects where name = 'MY_Table'
select @col_id = min(colid) from syscolumns where id = @table_id and name <> 'description'
while (@col_id is not null) begin
select @sql = @sql + name from syscolumns where id = @table_id and colid = @col_id
select @col_id = min(colid) from syscolumns where id = @table_id and colid > @col_id and name <> 'description'
if (@col_id is not null) set @sql = @sql + ','
print @sql
end
set @sql = @sql + ' from MY_table'
exec (@sql)
Create a view on the table which doesn't include the blob columns
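A minimal sketch of that (table and column names are placeholders):
CREATE VIEW my_table_no_blobs AS
SELECT id, name, created_at   -- every column except the TEXT/BLOB one
FROM my_table;
-- then, for debugging:
SELECT * FROM my_table_no_blobs;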
Is there any RDBMS that implements something like SELECT * EXCEPT?
Yes, Google Big Query implements SELECT * EXCEPT:
A SELECT * EXCEPT statement specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
WITH orders AS(
SELECT 5 as order_id,
"sprocket" as item_name,
200 as quantity
)
SELECT * EXCEPT (order_id)
FROM orders;
Output:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| sprocket | 200 |
+-----------+----------+
EDIT:
H2 database also supports SELECT * EXCEPT (col1, col2, ...) syntax.
Wildcard expression
A wildcard expression in a SELECT statement. A wildcard expression represents all visible columns. Some columns can be excluded with optional EXCEPT clause.
EDIT 2:
Hive supports: REGEX Column Specification
A SELECT statement can take regex-based column specification in Hive releases prior to 0.13.0, or in 0.13.0 and later releases if the configuration property hive.support.quoted.identifiers is set to none.
The following query selects all columns except ds and hr.
SELECT `(ds|hr)?+.+` FROM sales
EDIT 3:
Snowflake also now supports: SELECT * EXCEPT (and a RENAME option equivalent to REPLACE in BigQuery)
EXCLUDE col_name EXCLUDE (col_name, col_name, ...)
When you select all columns (SELECT *), specifies the columns that should be excluded from the results.
RENAME col_name AS col_alias RENAME (col_name AS col_alias, col_name AS col_alias, ...)
When you select all columns (SELECT *), specifies the column aliases that should be used in the results.
and so does Databricks SQL (since Runtime 11.0)
star_clause
[ { table_name | view_name } . ] * [ except_clause ]
except_clause
EXCEPT ( { column_name | field_name } [, ...] )
and also DuckDB
-- select all columns except the city column from the addresses table
SELECT * EXCLUDE (city) FROM addresses;
-- select all columns from the addresses table, but replace city with LOWER(city)
SELECT * REPLACE (LOWER(city) AS city) FROM addresses;
-- select all columns matching the given regex from the table
SELECT COLUMNS('number\d+') FROM addresses;
DB2 allows for this. Columns have an attribute/specifier of Hidden.
From the syscolumns documentation
HIDDEN
CHAR(1) NOT NULL WITH DEFAULT 'N'
Indicates whether the column is implicitly hidden:
P Partially hidden. The column is implicitly hidden from SELECT *.
N Not hidden. The column is visible to all SQL statements.
Create table documentation As part of creating your column, you would specify the IMPLICITLY HIDDEN modifier
An example DDL from Implicitly Hidden Columns follows
CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
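With that DDL, SELECT * omits the hidden column, while naming it explicitly still returns it (behavior as documented for IMPLICITLY HIDDEN):
SELECT * FROM T1;           -- returns C1 and C3 only
SELECT C1, C2, C3 FROM T1;  -- C2 is returned when referenced explicitly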
Whether this capability is such a deal maker to drive the adoption of DB2 is left as an exercise to future readers.
Is there any RDBMS that implements something like SELECT * EXCEPT
Yes! The truly relational language Tutorial D allows projection to be expressed in terms of the attributes to be removed instead of the ones to be kept e.g.
my_relvar { ALL BUT description }
In fact, its equivalent to SQL's SELECT * is { ALL BUT }.
Your proposal for SQL is a worthy one, but I heard it has already been put to the SQL standards committee by the users' group and rejected by the vendors' group :(
It has also been explicitly requested for SQL Server but the request was closed as 'won't fix'.
Yes, finally there is :) SQL Standard 2016 defines Polymorphic Table Functions
SQL:2016 introduces polymorphic table functions (PTF) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF need to declare the returned columns in advance.
PTFs as described by SQL:2016 are not yet available in any tested database. Interested readers may refer to the free technical report "Polymorphic table functions in SQL" released by ISO. The following are some of the examples discussed in the report:
CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns
Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber) -- me: no more hardcoded strings :)
TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows
Oracle 18c implements this mechanism. 18c Skip_col Polymorphic Table Function Example Oracle Live SQL and Skip_col Polymorphic Table Function Example
This example shows how to skip data based on name/specific datatype:
CREATE PACKAGE skip_col_pkg AS
-- OVERLOAD 1: Skip by name
FUNCTION skip_col(tab TABLE, col columns)
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t;
-- OVERLOAD 2: Skip by type --
FUNCTION skip_col(tab TABLE,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t;
END skip_col_pkg;
and body:
CREATE PACKAGE BODY skip_col_pkg AS
/* OVERLOAD 1: Skip by name
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_name
*
* PARAMETERS:
* tab - The input table
* col - The name of the columns to drop from the output
*
* DESCRIPTION:
* This PTF removes all the input columns listed in col from the output
* of the PTF.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t
AS
new_cols dbms_tf.columns_new_t;
col_id PLS_INTEGER := 1;
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
FOR j IN 1 .. col.count() LOOP
tab.column(i).pass_through := tab.column(i).description.name != col(j);
EXIT WHEN NOT tab.column(i).pass_through;
END LOOP;
END LOOP;
RETURN NULL;
END;
/* OVERLOAD 2: Skip by type
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_type
*
* PARAMETERS:
* tab - Input table
* type_name - A string representing the type of columns to skip
* flip - 'False' [default] => Match columns with given type_name
* otherwise => Ignore columns with given type_name
*
* DESCRIPTION:
* This PTF removes the given type of columns from the given table.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t
AS
typ CONSTANT VARCHAR2(1024) := upper(trim(type_name));
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
tab.column(i).pass_through :=
CASE upper(substr(flip,1,1))
WHEN 'F' THEN dbms_tf.column_type_name(tab.column(i).description)
!=typ
ELSE dbms_tf.column_type_name(tab.column(i).description)
=typ
END /* case */;
END LOOP;
RETURN NULL;
END;
END skip_col_pkg;
And sample usage:
-- skip number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number');
-- only number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number', flip => 'True')
-- skip defined columns
SELECT *
FROM skip_col_pkg.skip_col(scott.emp, columns(comm, hiredate, mgr))
WHERE deptno = 20;
I highly recommend reading the entire example (creating standalone functions instead of package calls).
You could easily overload skip method for example: skip columns that does not start/end with specific prefix/suffix.
db<>fiddle demo
Related: How to Dynamically Change the Columns in a SQL Query By Chris Saxon
Stay away from SELECT *, you are setting yourself for trouble. Always specify exactly which columns you want. It is in fact quite refreshing that the "feature" you are asking for doesn't exist.
I believe the rationale for it not existing is that the author of a query should (for performance sake) only request what they're going to look at/need (and therefore know what columns to specify) -- if someone adds a couple more blobs in the future, you'd be pulling back potentially large fields you're not going to need.
Temp table option here, just drop the columns not required and select * from the altered temp table.
/* Get the data into a temp table */
SELECT * INTO #TempTable
FROM
table
/* Drop the columns that are not needed */
ALTER TABLE #TempTable
DROP COLUMN [columnname]
SELECT * from #TempTable
declare @sql nvarchar(max),
@table char(10)
set @sql = 'select '
set @table = 'table_name'
SELECT @sql = @sql + '[' + COLUMN_NAME + '],'
FROM INFORMATION_SCHEMA.Columns
WHERE TABLE_NAME = @table
and COLUMN_NAME <> 'omitted_column_name'
SET @sql = substring(@sql,1,len(@sql)-1) + ' from ' + @table
EXEC (@sql);
I needed something like what @Glen asks for, easing my life with HASHBYTES().
My inspiration was @Jasmine's and @Zerubbabel's answers. In my case I have different schemas, so the same table name appears more than once in sys.objects. As this may help someone with the same scenario, here it goes:
ALTER PROCEDURE [dbo].[_getLineExceptCol]
@table SYSNAME,
@schema SYSNAME,
@LineId int,
@exception VARCHAR(500)
AS
DECLARE @SQL NVARCHAR(MAX)
BEGIN
SET NOCOUNT ON;
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name
FROM sys.columns
WHERE name <> @exception
AND object_id = (SELECT object_id FROM sys.objects
WHERE name LIKE @table
AND schema_id = (SELECT schema_id FROM sys.schemas WHERE name LIKE @schema))
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @schema + '.' + @table + ' WHERE Id = ' + CAST(@LineId AS nvarchar(50))
EXEC(@SQL)
END
GO
It's an old question, but I hope this answer can still be helpful to others. It can also be modified to add more than one except fields. This can be very handy if you want to unpivot a table with many columns.
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name FROM sys.columns WHERE name <> 'colName' AND object_id = (SELECT id FROM sysobjects WHERE name = 'tblName')
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + 'tblName'
EXEC sp_executesql @SQL
Stored Procedure:
usp_SelectAllExcept 'tblname', 'colname'
ALTER PROCEDURE [dbo].[usp_SelectAllExcept]
(
@tblName SYSNAME
,@exception VARCHAR(500)
)
AS
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name from sys.columns where name <> @exception and object_id = (Select id from sysobjects where name = @tblName)
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @tblName
EXEC sp_executesql @SQL
For the sake of completeness, this is possible in DremelSQL dialect, doing something like:
WITH orders AS
(SELECT 5 as order_id,
"foobar12" as item_name,
800 as quantity)
SELECT * EXCEPT (order_id)
FROM orders;
Result:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| foobar12 | 800 |
+-----------+----------+
There also seems to be another way to do it here without Dremel.
Your question was about what RDBMS supports the * EXCEPT (...) syntax, so perhaps, looking at the jOOQ manual page for * EXCEPT can be useful in the future, as that page will keep track of new dialects supporting the syntax.
Currently (mid 2022), among the jOOQ supported RDBMS, at least BigQuery, H2, and Snowflake support the syntax natively. The others need to emulate it by listing the columns explicitly:
-- ACCESS, ASE, AURORA_MYSQL, AURORA_POSTGRES, COCKROACHDB, DB2, DERBY, EXASOL,
-- FIREBIRD, HANA, HSQLDB, INFORMIX, MARIADB, MEMSQL, MYSQL, ORACLE, POSTGRES,
-- REDSHIFT, SQLDATAWAREHOUSE, SQLITE, SQLSERVER, SYBASE, TERADATA, VERTICA,
-- YUGABYTEDB
SELECT LANGUAGE.CD, LANGUAGE.DESCRIPTION
FROM LANGUAGE
-- BIGQUERY, H2
SELECT * EXCEPT (ID)
FROM LANGUAGE
-- SNOWFLAKE
SELECT * EXCLUDE (ID)
FROM LANGUAGE
Disclaimer: I work for the company behind jOOQ
As others are saying: SELECT * is a bad idea.
Some reasons:
Get only what you need (anything more is a waste)
Indexing (index what you need and you can get it more quickly). If you ask for a bunch of non-indexed columns too, your query plans will suffer.