How to describe a table in SQL Server 2008? - sql

I want to describe a table in SQL Server 2008 like what we can do with the DESC command in Oracle.
I have the table [EX].[dbo].[EMP_MAST] that I want to describe, but it does not work.
Error shown:
The object 'EMP_MAST' does not exist in database 'master' or is
invalid for this operation.

You can use sp_columns, a system stored procedure for describing a table.
exec sp_columns TableName
You can also use sp_help.
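For the table in the question, a minimal sketch (assuming you first switch to the EX database, since the error message shows the session is still in master):
USE EX;
GO
EXEC sp_columns EMP_MAST;     -- pass just the table name, not [EX].[dbo].[EMP_MAST]
EXEC sp_help 'dbo.EMP_MAST';  -- sp_help accepts a schema-qualified name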

According to this documentation:
DESC MY_TABLE
is equivalent to
SELECT column_name "Name",
       nullable "Null?",
       concat(concat(concat(data_type, '('), data_length), ')') "Type"
FROM user_tab_columns
WHERE table_name = 'TABLE_NAME_TO_DESCRIBE';
I've roughly translated that to the SQL Server equivalent for you - just make sure you're running it on the EX database.
SELECT column_name AS [name],
IS_NULLABLE AS [null?],
DATA_TYPE + COALESCE('(' + CASE WHEN CHARACTER_MAXIMUM_LENGTH = -1
THEN 'Max'
ELSE CAST(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(5))
END + ')', '') AS [type]
FROM INFORMATION_SCHEMA.Columns
WHERE table_name = 'EMP_MAST'

The built-in sp_help procedure is SQL Server's closest thing to Oracle's DESC command, IMHO:
sp_help MyTable
Use
sp_help "[SchemaName].[TableName]"
or
sp_help "[DatabaseName].[SchemaName].[TableName]"
in case you need to qualify the table name further.

You can use a keyboard shortcut to get a description/detailed information about a table in SQL Server 2008.
Follow these steps:
Write the table name,
select it, and press Alt + F1.
It will show detailed information about the table, such as:
1) Table created date,
2) Column descriptions,
3) Identity,
4) Indexes,
5) Constraints,
6) References, etc.

Maybe this can help:
Use MyTest
Go
select * from information_schema.COLUMNS where TABLE_NAME='employee'
-- where MyTest is the database name and employee is the table name

I like the answer that attempts to do the translation; however, the code doesn't handle columns that are not of a VARCHAR type, such as BIGINT or DATETIME. I needed something similar today, so I took the time to modify it more to my liking. It is also now encapsulated in a function, which is the closest thing I could find to just typing describe the way Oracle handles it. I may still be missing a few data types in my CASE statement, but this works for everything I tried it on. It also orders by ordinal position. This could easily be expanded to include primary key columns as well.
CREATE FUNCTION dbo.describe (@TABLENAME varchar(50))
returns table
as
RETURN
(
SELECT TOP 1000 column_name AS [ColumnName],
IS_NULLABLE AS [IsNullable],
DATA_TYPE + '(' + CASE
WHEN DATA_TYPE = 'varchar' or DATA_TYPE = 'char' THEN
CASE
WHEN Cast(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(5)) = -1 THEN 'Max'
ELSE Cast(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(5))
END
WHEN DATA_TYPE = 'decimal' or DATA_TYPE = 'numeric' THEN
Cast(NUMERIC_PRECISION AS VARCHAR(5))+', '+Cast(NUMERIC_SCALE AS VARCHAR(5))
WHEN DATA_TYPE = 'bigint' or DATA_TYPE = 'int' THEN
Cast(NUMERIC_PRECISION AS VARCHAR(5))
ELSE ''
END + ')' AS [DataType]
FROM INFORMATION_SCHEMA.Columns
WHERE table_name = @TABLENAME
order by ordinal_Position
);
GO
Once you create the function, here is a sample table that I used:
create table dbo.yourtable
(columna bigint,
columnb int,
columnc datetime,
columnd varchar(100),
columne char(10),
columnf bit,
columng numeric(10,2),
columnh decimal(10,2)
)
Then you can execute it as follows (note the schema prefix, which is required when calling a user-defined function):
select * from dbo.describe('yourtable')
It returns the following
ColumnName  IsNullable  DataType
columna     NO          bigint(19)
columnb     NO          int(10)
columnc     NO          datetime()
columnd     NO          varchar(100)
columne     NO          char(10)
columnf     NO          bit()
columng     NO          numeric(10, 2)
columnh     NO          decimal(10, 2)
Hope this helps someone.

As a variation of Bridge's answer (I don't yet have enough rep to comment, and didn't feel right about editing that answer), here is a version that works better for me.
SELECT column_name AS [Name],
IS_NULLABLE AS [Null?],
DATA_TYPE + CASE
WHEN CHARACTER_MAXIMUM_LENGTH IS NULL THEN ''
WHEN CHARACTER_MAXIMUM_LENGTH > 99999 THEN ''
ELSE '(' + Cast(CHARACTER_MAXIMUM_LENGTH AS VARCHAR(5)) + ')'
END AS [Type]
FROM INFORMATION_SCHEMA.Columns
WHERE table_name = 'table_name'
Notable changes:
Works for types without a length. For an int column, I was seeing NULL for the type, because the length was NULL and it wiped out the whole Type column. So don't print any length component (or parentheses) in that case.
Changed the check from comparing the CAST of the length against -1 to checking the actual length. I was getting an error because the CASE resulted in '*' rather than -1. It seems to make more sense to perform an arithmetic check rather than rely on an overflow from the CAST.
Don't print the length when it is very long (arbitrarily, more than 5 digits).

Just run the line below:
exec sp_help [table_name]

Related

Filter columns before calculating SUM (Error: Error converting data type varchar to int)

I need to calculate Min/Max/etc. values for columns in a table, so I want to filter the columns from INFORMATION_SCHEMA to get only the numeric ones.
The following query returns the error Error converting data type varchar to int, and I don't know how to fix it.
SELECT @vQuery =
'SELECT '''+TABLE_NAME+''' AS TableName
, '''+COLUMN_NAME+''' AS ColumnName
, '''+DATA_TYPE+''' AS DataType
, MIN(TRY_CAST(TRY_CAST(['+COLUMN_NAME+'] AS VARCHAR(MAX)) AS NUMERIC(30,4))) AS MinValue
, MAX(TRY_CAST(TRY_CAST(['+COLUMN_NAME+'] AS VARCHAR(MAX)) AS NUMERIC(30,4))) AS MaxValue
, AVG(TRY_CAST(TRY_CAST(['+COLUMN_NAME+'] AS VARCHAR(MAX)) AS NUMERIC(30,4))) AS AvgValue
, STDEV(TRY_CAST(TRY_CAST(['+COLUMN_NAME+'] AS VARCHAR(MAX)) AS NUMERIC(30,4))) AS StandardDeviation
, SUM(TRY_CAST(TRY_CAST(['+COLUMN_NAME+'] AS VARCHAR(MAX)) AS NUMERIC(30,4))) AS TotalSum
FROM '+QUOTENAME(TABLE_SCHEMA)+'.'+QUOTENAME(TABLE_NAME)+';'+ CHAR(10)
FROM
(SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = @Schema
AND TABLE_NAME = @Table
AND DATA_TYPE IN ('BIGINT','NUMERIC','SMALLINT','DECIMAL','SMALLMONEY','INTEGER','INT','TINYINT','MONEY','FLOAT','REAL')) t
This is a bit long for a comment:
Your query appears to work here.
If you were not using the try_() functions, then the problem might occur when you run the query but not when you generate it. And the issue is exponential notation. For instance:
select convert(varchar(255), convert(float, 1234556678990.0))
returns:
'1.23456e+012'
And this value cannot be converted to a numeric value.
I see no advantage to converting to a string before converting to a numeric, so I would drop those conversions. However, with try_cast() this should not be an issue.
In fact, you are generating separate queries for each type. I don't think any conversion is warranted. The queries can return different types if they want. You would need to be concerned about types if you connected the queries using UNION ALL.
Note: You should also use QUOTENAME() on the column names. It can protect against SQL injection, but that seems like a very unlikely problem when you are selecting from INFORMATION_SCHEMA tables (someone would have to create very curious table/column names). However, it also handles non-conforming column names, such as those with spaces.
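A tiny illustration of what QUOTENAME() does with an awkward column name (the name here is made up):
DECLARE @col SYSNAME = N'Total Amount';   -- hypothetical column name containing a space
SELECT QUOTENAME(@col) AS quoted_column;  -- returns [Total Amount]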

How to assign a column to a dynamic variable

One column in one table of my database, say Table A and Column A, can be either of NUMERIC type or VARCHAR type. The data type is decided dynamically and then the table gets created.
I need to create a dynamic variable (@Dynamic) that should check the data type of this column and assign a different column (column B or column C) to it accordingly, i.e.
If column A is NVARCHAR, assign column B to @Dynamic
If column A is NUMERIC, assign column C to @Dynamic
I have to do this in both SQL Server and Oracle.
Any help to write a function for this would be greatly appreciated.
In SQL Server you can check the column data type:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'Table A'
AND COLUMN_NAME = 'Column A'
In Oracle:
SELECT data_type
FROM user_tab_columns
WHERE table_name = 'Table A'
AND column_name = 'Column A';
SQL_VARIANT is a dynamic type in SQL Server. The equivalent in Oracle would be the ANYDATA type.
But if you are using dynamic SQL, why don't you store the data as NVARCHAR? It is big enough for the converted numeric values as well, and you can use it directly in your dynamic SQL statement.
Actually this was a generic question and did not need any sample data to answer.
The approach I am following is:
I wrote a function
CREATE FUNCTION [dbo].[GET_DATA_TYPE] (@input VARCHAR(255)) -- the parameter needs a length; a bare VARCHAR defaults to one character
RETURNS VARCHAR(255) AS
BEGIN
DECLARE @l_data_type VARCHAR(255)
SELECT @l_data_type = DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = @input AND
COLUMN_NAME = 'AssetID'
RETURN @l_data_type
END
I would call this function in my stored procedures as
SELECT @dataType = dbo.GET_DATA_TYPE(@input);
I declared another variable, @FinalType:
IF @dataType = 'numeric'
    SET @FinalType = 'columnC';
IF @dataType = 'nvarchar'
    SET @FinalType = 'columnD';
Then I'll use this @FinalType variable in all my dynamic SQL.
Is there any other, more efficient way to do this?
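One possible simplification (a sketch; 'TableA', 'AssetID', and the columnC/columnD names are the assumed placeholders from above) folds the lookup and the IF logic into a single statement:
DECLARE @FinalType SYSNAME;
SELECT @FinalType = CASE DATA_TYPE WHEN 'numeric'  THEN 'columnC'
                                   WHEN 'nvarchar' THEN 'columnD'
                    END
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TableA'
  AND COLUMN_NAME = 'AssetID';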

SQL Server: get default value of a column

I execute a select to get the structure of a table.
I want to get info about the columns, like the name, whether it's nullable, or whether it's a primary key.
I do something like this:
....sys.columns c...
c.precision,
c.scale,
c.is_nullable as isnullable,
c.default_object_id as columndefault,
c.is_computed as iscomputed,
But for the default value I get the id, something like 454545454, while I want to get the value "xxxx". What is the table to search, or what function converts that id to the value?
Thanks
You can do this (I've done a SELECT * just so you can see all the info available):
SELECT *
FROM INFORMATION_SCHEMA.COLUMNS
WHERE....
This includes a "COLUMN_DEFAULT" column in the resultset.
Use
Select * From INFORMATION_SCHEMA.COLUMNS
There is a column called COLUMN_DEFAULT.
The property you want is called "cdefault".
http://sql-server-performance.com/Community/forums/p/20588/114944.aspx
'bills' is an example table
select
COLUMN_DEFAULT --default
,IS_NULLABLE -- is nullable
,NUMERIC_PRECISION --number of digits (binary or decimal depending on radix)
,NUMERIC_PRECISION_RADIX --base (radix) in which the precision is expressed (2 or 10)
,NUMERIC_SCALE --number of digits to right of decimal point
,COLUMNPROPERTY(OBJECT_ID('bills'),COLUMN_NAME,'Iscomputed') AS ISCOMPUTED --is computed
from INFORMATION_SCHEMA.columns where TABLE_name='bills'
select * from INFORMATION_SCHEMA.TABLE_CONSTRAINTS where TABLE_NAME='bills' and CONSTRAINT_TYPE='PRIMARY KEY'
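If you want to resolve the default_object_id from sys.columns directly, one option is a join to sys.default_constraints (a sketch; 'dbo.bills' is just the example table used above):
SELECT c.name        AS column_name,
       dc.definition AS column_default  -- e.g. ('xxxx')
FROM sys.columns c
LEFT JOIN sys.default_constraints dc
       ON dc.object_id = c.default_object_id
WHERE c.object_id = OBJECT_ID('dbo.bills');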

How do I return the SQL data types from my query?

I have a SQL query that queries an enormous database (as in, hundreds of views/tables with hard-to-read names like CMM-CPP-FAP-ADD) that I don't need nor want to understand. The result of this query needs to be stored in a staging table to feed a report.
I need to create the staging table, but with hundreds of views/tables to dig through to find the data types that are being represented here, I have to wonder if there's a better way to construct this table.
Can anyone advise how I would use any of the SQL Server 2008 tools to divine the source data types in my SQL 2000 database?
As a general example, I want to know from a query like:
SELECT Auth_First_Name, Auth_Last_Name, Auth_Favorite_Number
FROM Authors
Instead of the actual results, I want to know that:
Auth_First_Name is char(25)
Auth_Last_Name is char(50)
Auth_Favorite_Number is int
I'm not interested in constraints, I really just want to know the data types.
select * from information_schema.columns
could get you started.
For SQL Server 2012 and above: If you place the query into a string then you can get the result set data types like so:
DECLARE @query nvarchar(max) = 'select 12.1 / 10.1 AS [Column1]';
EXEC sp_describe_first_result_set @query, null, 0;
You could also insert the results (or top 10 results) into a temp table and get the columns from the temp table (as long as the column names are all different).
SELECT TOP 10 *
INTO #TempTable
FROM <DataSource>
Then use:
EXEC tempdb.dbo.sp_help N'#TempTable';
or
SELECT *
FROM tempdb.sys.columns
WHERE [object_id] = OBJECT_ID(N'tempdb..#TempTable');
Extrapolated from Aaron's answer here.
You can also use...
SQL_VARIANT_PROPERTY()
...in cases where you don't have direct access to the metadata (e.g. a linked server query perhaps?).
SQL_VARIANT_PROPERTY (Transact-SQL)
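For example, a sketch against the question's hypothetical Authors table (the CAST to sql_variant is needed because the function only accepts a sql_variant value):
SELECT SQL_VARIANT_PROPERTY(CAST(Auth_Favorite_Number AS sql_variant), 'BaseType')  AS base_type,
       SQL_VARIANT_PROPERTY(CAST(Auth_Favorite_Number AS sql_variant), 'Precision') AS [precision],
       SQL_VARIANT_PROPERTY(CAST(Auth_Favorite_Number AS sql_variant), 'MaxLength') AS max_length
FROM Authors;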
In SQL Server 2005 and beyond you are better off using the catalog views (sys.columns) as opposed to INFORMATION_SCHEMA. Unless portability to other platforms is important. Just keep in mind that the INFORMATION_SCHEMA views won't change and so they will progressively be lacking information on new features etc. in successive versions of SQL Server.
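For instance, a sketch of the catalog-view approach, again using the question's hypothetical Authors table:
SELECT c.name,
       TYPE_NAME(c.user_type_id) AS type_name,
       c.max_length,
       c.[precision],
       c.[scale]
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(N'dbo.Authors');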
There MUST be an easier way to do this... Lo and behold, there is!
"sp_describe_first_result_set" is your friend!
Now I do realise the question was asked specifically for SQL Server 2000, but I was looking for a similar solution for later versions and discovered some native support in SQL to achieve this.
In SQL Server 2012 onwards cf. "sp_describe_first_result_set" - Link to BOL
I had already implemented a solution using a technique similar to @Trisped's above and ripped it out to implement the native SQL Server implementation.
In case you're not on SQL Server 2012 or Azure SQL Database yet, here's the stored proc I created for pre-2012 era databases:
CREATE PROCEDURE [fn].[GetQueryResultMetadata]
@queryText VARCHAR(MAX)
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
--SET NOCOUNT ON;
PRINT @queryText;
DECLARE
@sqlToExec NVARCHAR(MAX) =
'SELECT TOP 1 * INTO #QueryMetadata FROM ('
+
@queryText
+
') T;'
+ '
SELECT
C.Name [ColumnName],
TP.Name [ColumnType],
C.max_length [MaxLength],
C.[precision] [Precision],
C.[scale] [Scale],
C.[is_nullable] IsNullable
FROM
tempdb.sys.columns C
INNER JOIN
tempdb.sys.types TP
ON
TP.system_type_id = C.system_type_id
AND
-- exclude custom types
TP.system_type_id = TP.user_type_id
WHERE
[object_id] = OBJECT_ID(N''tempdb..#QueryMetadata'');
'
EXEC sp_executesql @sqlToExec
END
SELECT COLUMN_NAME,
DATA_TYPE,
CHARACTER_MAXIMUM_LENGTH
FROM information_schema.columns
WHERE TABLE_NAME = 'YOUR_TABLE_NAME'
You can use column aliases for better-looking output.
Can you get away with recreating the staging table from scratch every time the query is executed? If so you could use SELECT ... INTO syntax and let SQL Server worry about creating the table using the correct column types etc.
SELECT *
INTO your_staging_table
FROM enormous_collection_of_views_tables_etc
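If you only want the staging table's structure without copying any rows, a common variant of the same trick is to add a false predicate (a sketch):
SELECT *
INTO your_staging_table
FROM enormous_collection_of_views_tables_etc
WHERE 1 = 0;  -- creates the table with the source column types but no data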
This will give you everything column property related.
SELECT * INTO TMP1
FROM ( SELECT TOP 1 /* rest of your query expression here */ ) AS T;
SELECT o.name AS obj_name, TYPE_NAME(c.user_type_id) AS type_name, c.*
FROM sys.objects AS o
JOIN sys.columns AS c ON o.object_id = c.object_id
WHERE o.name = 'TMP1';
DROP TABLE TMP1;
select COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='yourTable';
sp_describe_first_result_set
helps identify the data types of a query by analyzing the data types of its first result set:
https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql?view=sql-server-2017
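Applied to the query from the question, that would look something like this (a sketch; Authors and its columns are the hypothetical names used above):
EXEC sp_describe_first_result_set
     N'SELECT Auth_First_Name, Auth_Last_Name, Auth_Favorite_Number FROM Authors';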
I use a simple CASE statement to render results I can use in technical specification documents. This example does not cover every condition you will run into in a database, but it gives you a good template to work with.
SELECT
TABLE_NAME AS 'Table Name',
COLUMN_NAME AS 'Column Name',
CASE WHEN DATA_TYPE LIKE '%char'
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, CHARACTER_MAXIMUM_LENGTH) + ')'
WHEN DATA_TYPE IN ('bit', 'int', 'smallint', 'date')
THEN DATA_TYPE
WHEN DATA_TYPE = 'datetime'
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, DATETIME_PRECISION) + ')'
WHEN DATA_TYPE = 'float'
THEN DATA_TYPE
WHEN DATA_TYPE IN ('numeric', 'money')
THEN DATA_TYPE + '(' + CONVERT(VARCHAR, NUMERIC_PRECISION) + ', ' + CONVERT(VARCHAR, NUMERIC_SCALE) + ')'
END AS 'Data Type',
CASE WHEN IS_NULLABLE = 'NO'
THEN 'NOT NULL'
ELSE 'NULL'
END AS 'PK/LK/NOT NULL'
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY
TABLE_NAME, ORDINAL_POSITION
Checking data types.
The first way to check data types for a SQL Server database is a query against the sys schema catalog views. The query below uses the COLUMNS and TYPES views:
SELECT C.NAME AS COLUMN_NAME,
TYPE_NAME(C.USER_TYPE_ID) AS DATA_TYPE,
C.IS_NULLABLE,
C.MAX_LENGTH,
C.PRECISION,
C.SCALE
FROM SYS.COLUMNS C
JOIN SYS.TYPES T
ON C.USER_TYPE_ID=T.USER_TYPE_ID
WHERE C.OBJECT_ID=OBJECT_ID('your_table_name');
In this way, you can find data types of columns.
This simple query returns a value of data type bit. You can use this technique for other data types:
select CAST(0 AS BIT) AS OK

SELECT * EXCEPT

Is there any RDBMS that implements something like SELECT * EXCEPT? What I'm after is getting all of the fields except a specific TEXT/BLOB field, and I'd like to just select everything else.
Almost daily I complain to my coworkers that someone should implement this... It's terribly annoying that it doesn't exist.
Edit: I understand everyone's concern for SELECT *. I know the risks associated with SELECT *. However, this, at least in my situation, would not be used for any production-level code, or even development-level code; it's strictly for debugging, when I need to see all of the values easily.
As I've stated in some of the comments, where I work is strictly a command-line shop, doing everything over ssh. This makes it difficult to use any GUI tools (external connections to the database aren't allowed), etc.
Thanks for the suggestions though.
As others have said, it is not a good idea to do this in a query because it is prone to issues when someone changes the table structure in the future. However, there is a way to do this... and I can't believe I'm actually suggesting this, but in the spirit of answering the ACTUAL question...
Do it with dynamic SQL... this does all the columns except the "description" column. You could easily turn this into a function or stored proc.
declare @sql varchar(8000),
        @table_id int,
        @col_id int
set @sql = 'select '
select @table_id = id from sysobjects where name = 'MY_Table'
select @col_id = min(colid) from syscolumns where id = @table_id and name <> 'description'
while (@col_id is not null) begin
    select @sql = @sql + name from syscolumns where id = @table_id and colid = @col_id
    select @col_id = min(colid) from syscolumns where id = @table_id and colid > @col_id and name <> 'description'
    if (@col_id is not null) set @sql = @sql + ','
    print @sql
end
set @sql = @sql + ' from MY_table'
exec (@sql)
Create a view on the table which doesn't include the blob columns
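A minimal sketch of that idea (the table and column names are made up):
CREATE VIEW dbo.MyTable_NoBlob
AS
SELECT id, name, created_at  -- every column except the large TEXT/BLOB one
FROM dbo.MyTable;
GO
-- then, for debugging:
SELECT * FROM dbo.MyTable_NoBlob;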
Is there any RDBMS that implements something like SELECT * EXCEPT?
Yes, Google Big Query implements SELECT * EXCEPT:
A SELECT * EXCEPT statement specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
WITH orders AS(
SELECT 5 as order_id,
"sprocket" as item_name,
200 as quantity
)
SELECT * EXCEPT (order_id)
FROM orders;
Output:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| sprocket  | 200      |
+-----------+----------+
EDIT:
H2 database also supports SELECT * EXCEPT (col1, col2, ...) syntax.
Wildcard expression
A wildcard expression in a SELECT statement. A wildcard expression represents all visible columns. Some columns can be excluded with the optional EXCEPT clause.
EDIT 2:
Hive supports: REGEX Column Specification
A SELECT statement can take regex-based column specification in Hive releases prior to 0.13.0, or in 0.13.0 and later releases if the configuration property hive.support.quoted.identifiers is set to none.
The following query selects all columns except ds and hr.
SELECT `(ds|hr)?+.+` FROM sales
EDIT 3:
Snowflake also now supports: SELECT * EXCEPT (and a RENAME option equivalent to REPLACE in BigQuery)
EXCLUDE col_name EXCLUDE (col_name, col_name, ...)
When you select all columns (SELECT *), specifies the columns that should be excluded from the results.
RENAME col_name AS col_alias RENAME (col_name AS col_alias, col_name AS col_alias, ...)
When you select all columns (SELECT *), specifies the column aliases that should be used in the results.
and so does Databricks SQL (since Runtime 11.0)
star_clause
[ { table_name | view_name } . ] * [ except_clause ]
except_clause
EXCEPT ( { column_name | field_name } [, ...] )
and also DuckDB
-- select all columns except the city column from the addresses table
SELECT * EXCLUDE (city) FROM addresses;
-- select all columns from the addresses table, but replace city with LOWER(city)
SELECT * REPLACE (LOWER(city) AS city) FROM addresses;
-- select all columns matching the given regex from the table
SELECT COLUMNS('number\d+') FROM addresses;
DB2 allows for this. Columns have an attribute/specifier of Hidden.
From the syscolumns documentation
HIDDEN
CHAR(1) NOT NULL WITH DEFAULT 'N'
Indicates whether the column is implicitly hidden:
P Partially hidden. The column is implicitly hidden from SELECT *.
N Not hidden. The column is visible to all SQL statements.
Create table documentation: as part of creating your column, you would specify the IMPLICITLY HIDDEN modifier.
An example DDL from Implicitly Hidden Columns follows
CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
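Based on the documented semantics of IMPLICITLY HIDDEN, usage would look like this (a sketch against the T1 table above):
SELECT * FROM T1;           -- returns only C1 and C3; C2 is hidden from SELECT *
SELECT C1, C2, C3 FROM T1;  -- C2 is returned because it is referenced explicitly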
Whether this capability is such a deal maker to drive the adoption of DB2 is left as an exercise to future readers.
Is there any RDBMS that implements something like SELECT * EXCEPT
Yes! The truly relational language Tutorial D allows projection to be expressed in terms of the attributes to be removed instead of the ones to be kept e.g.
my_relvar { ALL BUT description }
In fact, its equivalent to SQL's SELECT * is { ALL BUT }.
Your proposal for SQL is a worthy one, but I heard it has already been put to the SQL standards committee by the users' group and rejected by the vendors' group :(
It has also been explicitly requested for SQL Server but the request was closed as 'won't fix'.
Yes, finally there is :) SQL Standard 2016 defines Polymorphic Table Functions
SQL:2016 introduces polymorphic table functions (PTF) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF need to declare the returned columns in advance.
PTFs as described by SQL:2016 are not yet available in any tested database. Interested readers may refer to the free technical report “Polymorphic table functions in SQL” released by ISO. The following are some of the examples discussed in the report:
CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns
Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber) -- me: no more hardcoded strings :)
TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows
Oracle 18c implements this mechanism: see the 18c Skip_col Polymorphic Table Function example on Oracle Live SQL, and the Skip_col Polymorphic Table Function example.
This example shows how to skip columns based on a name or a specific data type:
CREATE PACKAGE skip_col_pkg AS
-- OVERLOAD 1: Skip by name
FUNCTION skip_col(tab TABLE, col columns)
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t;
-- OVERLOAD 2: Skip by type --
FUNCTION skip_col(tab TABLE,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t;
END skip_col_pkg;
and body:
CREATE PACKAGE BODY skip_col_pkg AS
/* OVERLOAD 1: Skip by name
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_name
*
* PARAMETERS:
* tab - The input table
* col - The name of the columns to drop from the output
*
* DESCRIPTION:
* This PTF removes all the input columns listed in col from the output
* of the PTF.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t
AS
new_cols dbms_tf.columns_new_t;
col_id PLS_INTEGER := 1;
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
FOR j IN 1 .. col.count() LOOP
tab.column(i).pass_through := tab.column(i).description.name != col(j);
EXIT WHEN NOT tab.column(i).pass_through;
END LOOP;
END LOOP;
RETURN NULL;
END;
/* OVERLOAD 2: Skip by type
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_type
*
* PARAMETERS:
* tab - Input table
* type_name - A string representing the type of columns to skip
* flip - 'False' [default] => Match columns with given type_name
* otherwise => Ignore columns with given type_name
*
* DESCRIPTION:
* This PTF removes the given type of columns from the given table.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t
AS
typ CONSTANT VARCHAR2(1024) := upper(trim(type_name));
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
tab.column(i).pass_through :=
CASE upper(substr(flip,1,1))
WHEN 'F' THEN dbms_tf.column_type_name(tab.column(i).description)
!=typ
ELSE dbms_tf.column_type_name(tab.column(i).description)
=typ
END /* case */;
END LOOP;
RETURN NULL;
END;
END skip_col_pkg;
And sample usage:
-- skip number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number');
-- only number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number', flip => 'True')
-- skip defined columns
SELECT *
FROM skip_col_pkg.skip_col(scott.emp, columns(comm, hiredate, mgr))
WHERE deptno = 20;
I highly recommend reading the entire example (creating standalone functions instead of package calls).
You could easily overload the skip method, for example: skip columns that do not start/end with a specific prefix/suffix.
db<>fiddle demo
Related: How to Dynamically Change the Columns in a SQL Query By Chris Saxon
Stay away from SELECT *; you are setting yourself up for trouble. Always specify exactly which columns you want. It is in fact quite refreshing that the "feature" you are asking for doesn't exist.
I believe the rationale for it not existing is that the author of a query should (for performance's sake) only request what they're going to look at/need (and therefore know which columns to specify) -- if someone adds a couple more blobs in the future, you'd be pulling back potentially large fields you're not going to need.
A temp table option here: just drop the columns that are not required and SELECT * from the altered temp table.
/* Get the data into a temp table */
SELECT * INTO #TempTable
FROM
table
/* Drop the columns that are not needed */
ALTER TABLE #TempTable
DROP COLUMN [columnname]
SELECT * from #TempTable
declare @sql nvarchar(max),
        @table sysname
set @sql = 'select '
set @table = 'table_name'
SELECT @sql = @sql + '[' + COLUMN_NAME + '],'
FROM INFORMATION_SCHEMA.Columns
WHERE TABLE_NAME = @table
and COLUMN_NAME <> 'omitted_column_name'
SET @sql = substring(@sql,1,len(@sql)-1) + ' from ' + @table
EXEC (@sql);
I needed something like what @Glen asks for, to ease my life with HASHBYTES().
My inspiration was @Jasmine's and @Zerubbabel's answers. In my case I have different schemas, so the same table name appears more than once in sys.objects. As this may help someone with the same scenario, here it goes:
ALTER PROCEDURE [dbo].[_getLineExceptCol]
@table SYSNAME,
@schema SYSNAME,
@LineId int,
@exception VARCHAR(500)
AS
DECLARE @SQL NVARCHAR(MAX)
BEGIN
SET NOCOUNT ON;
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name
FROM sys.columns
WHERE name <> @exception
AND object_id = (SELECT object_id FROM sys.objects
WHERE name LIKE @table
AND schema_id = (SELECT schema_id FROM sys.schemas WHERE name LIKE @schema))
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @schema + '.' + @table + ' WHERE Id = ' + CAST(@LineId AS nvarchar(50))
EXEC(@SQL)
END
GO
It's an old question, but I hope this answer can still be helpful to others. It can also be modified to exclude more than one field (see the sketch below). This can be very handy if you want to unpivot a table with many columns.
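For example, one possible way to exclude several columns at once (a sketch, assuming SQL Server 2016+ for STRING_SPLIT; the table and column names are hypothetical):
DECLARE @SQL NVARCHAR(MAX), @exceptions VARCHAR(500) = 'colA,colB';  -- hypothetical column list
SELECT @SQL = COALESCE(@SQL + ', ', '') + QUOTENAME(name)
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.tblName')                           -- hypothetical table
  AND name NOT IN (SELECT LTRIM(value) FROM STRING_SPLIT(@exceptions, ','));
SELECT @SQL = 'SELECT ' + @SQL + ' FROM dbo.tblName';
EXEC sp_executesql @SQL;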
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name FROM sys.columns WHERE name <> 'colName' AND object_id = (SELECT id FROM sysobjects WHERE name = 'tblName')
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + 'tblName'
EXEC sp_executesql @SQL
Stored Procedure:
usp_SelectAllExcept 'tblname', 'colname'
ALTER PROCEDURE [dbo].[usp_SelectAllExcept]
(
@tblName SYSNAME
,@exception VARCHAR(500)
)
AS
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name from sys.columns where name <> @exception and object_id = (Select id from sysobjects where name = @tblName)
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @tblName
EXEC sp_executesql @SQL
For the sake of completeness, this is possible in DremelSQL dialect, doing something like:
WITH orders AS
(SELECT 5 as order_id,
"foobar12" as item_name,
800 as quantity)
SELECT * EXCEPT (order_id)
FROM orders;
Result:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| foobar12  | 800      |
+-----------+----------+
There also seems to be another way to do it here without Dremel.
Your question was about which RDBMS supports the * EXCEPT (...) syntax, so perhaps looking at the jOOQ manual page for * EXCEPT can be useful in the future, as that page will keep track of new dialects supporting the syntax.
Currently (mid 2022), among the jOOQ supported RDBMS, at least BigQuery, H2, and Snowflake support the syntax natively. The others need to emulate it by listing the columns explicitly:
-- ACCESS, ASE, AURORA_MYSQL, AURORA_POSTGRES, COCKROACHDB, DB2, DERBY, EXASOL,
-- FIREBIRD, HANA, HSQLDB, INFORMIX, MARIADB, MEMSQL, MYSQL, ORACLE, POSTGRES,
-- REDSHIFT, SQLDATAWAREHOUSE, SQLITE, SQLSERVER, SYBASE, TERADATA, VERTICA,
-- YUGABYTEDB
SELECT LANGUAGE.CD, LANGUAGE.DESCRIPTION
FROM LANGUAGE
-- BIGQUERY, H2
SELECT * EXCEPT (ID)
FROM LANGUAGE
-- SNOWFLAKE
SELECT * EXCLUDE (ID)
FROM LANGUAGE
Disclaimer: I work for the company behind jOOQ
As others are saying: SELECT * is a bad idea.
Some reasons:
Get only what you need (anything more is a waste).
Indexing: index what you need and you can get it more quickly. If you ask for a bunch of non-indexed columns too, your query plans will suffer.