Suppose I have a row of data stored such as the following:
| Col 1 | Col 2 | Col 3  |
|-------|-------|--------|
| Foo   | Bar   | Foobar |
How might I concatenate this into a single string, such as the below?
Foo-Bar-Foobar
The column headings (and number of column headings) in this table will not be known, so selecting by column name is not an option(?).
Please note that I am not trying to concatenate a list of values in a column; I am trying to concatenate the values stored in one single row. I would also prefer to avoid using pivots, as I will be working with large sets of data and do not want to take the hit to performance.
In such cases I really adore the mighty abilities of XML in dealing with generic sets:
SELECT STUFF(b.query('
for $element in ./*
return
<x>;{$element/text()}</x>
').value('.','nvarchar(max)'),1,1,'')
FROM
(
SELECT TOP 3 * FROM sys.objects o FOR XML PATH('row'),ELEMENTS XSINIL,TYPE
) A(a)
CROSS APPLY a.nodes('/row') B(b);
The result
sysrscols;3;4;0;S ;SYSTEM_TABLE;2017-08-22T19:38:02.860;2017-08-22T19:38:02.867;1;0;0
sysrowsets;5;4;0;S ;SYSTEM_TABLE;2009-04-13T12:59:05.513;2017-08-22T19:38:03.197;1;0;0
sysclones;6;4;0;S ;SYSTEM_TABLE;2017-08-22T19:38:03.113;2017-08-22T19:38:03.120;1;0;0
Remarks
Some things to mention:
I use ; as the delimiter, since - might break with values containing hyphens (e.g. DATE values).
I use TOP 3 from sys.objects to create an easy-cheesy stand-alone sample.
Thanks to Zohar Peled I added ELEMENTS XSINIL to force the engine not to omit NULL values.
UPDATE: Create JSON in pre-2016 versions
You can try this to create a JSON string in versions before 2016:
SELECT '{'
+ STUFF(b.query('
for $element in ./*
return
<x>,"{local-name($element)}":"{$element/text()}"</x>
').value('.','nvarchar(max)'),1,1,'')
+ '}'
FROM
(
SELECT TOP 3 * FROM sys.objects o FOR XML PATH('row'),TYPE
) A(a)
CROSS APPLY a.nodes('/row') B(b);
The result
{"name":"sysrscols","object_id":"3","schema_id":"4","parent_object_id":"0","type":"S ","type_desc":"SYSTEM_TABLE","create_date":"2017-08-22T19:38:02.860","modify_date":"2017-08-22T19:38:02.867","is_ms_shipped":"1","is_published":"0","is_schema_published":"0"}
{"name":"sysrowsets","object_id":"5","schema_id":"4","parent_object_id":"0","type":"S ","type_desc":"SYSTEM_TABLE","create_date":"2009-04-13T12:59:05.513","modify_date":"2017-08-22T19:38:03.197","is_ms_shipped":"1","is_published":"0","is_schema_published":"0"}
{"name":"sysclones","object_id":"6","schema_id":"4","parent_object_id":"0","type":"S ","type_desc":"SYSTEM_TABLE","create_date":"2017-08-22T19:38:03.113","modify_date":"2017-08-22T19:38:03.120","is_ms_shipped":"1","is_published":"0","is_schema_published":"0"}
Hint
You might add ELEMENTS XSINIL to this query as well. It depends on whether you'd like NULLs to simply be missing, or whether you want them included as "SomeColumn":"".
I use UnitE and this is what I would use to select the columns dynamically from the person table.
INFORMATION_SCHEMA.COLUMNS stores the column list for the table and the SELECT statement is built around that.
Declare @Columns NVARCHAR(MAX)
Declare @Table varchar(15) = 'capd_person'
SELECT @Columns=COALESCE(@Columns + ',', '') + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE (TABLE_NAME=@Table)
EXEC('SELECT DISTINCT ' + @Columns + ' FROM ' + @Table)
You would need to change the EXEC command to suit your needs, using CONCAT as described before.
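For example, a minimal sketch of that adaptation (reusing the capd_person table name from above; the '-' delimiter, QUOTENAME and the ORDER BY are my additions, not from the original answer):

Declare @Columns NVARCHAR(MAX)
Declare @Table varchar(128) = 'capd_person'

-- Build "[col1],'-',[col2],'-',[col3]" so CONCAT joins the row values with hyphens.
-- Note that CONCAT treats NULLs as empty strings.
SELECT @Columns = COALESCE(@Columns + ',''-'',', '') + QUOTENAME(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @Table
ORDER BY ORDINAL_POSITION

EXEC('SELECT CONCAT(' + @Columns + ') FROM ' + QUOTENAME(@Table))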
Simply do a SELECT CONCAT(col1,col2,col3) FROM table
However, if you wish to make it neat, use:
SELECT CONCAT(col1,'-',col2,'-',col3) FROM table
Find more help in the CONCAT documentation.
An improved version of JonTout's answer:
Declare @Columns NVARCHAR(MAX)
Declare @Table varchar(15) = 'TableName'
SELECT @Columns=COALESCE(@Columns + '+', '') +'CONVERT(varchar(max),ISNULL('+ COLUMN_NAME+',''''))+''-'''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE (TABLE_NAME=@Table)
EXEC('SELECT ' + @Columns + ' FROM ' + @Table)
Related
So let's say 'table' has two columns that act as a GUID - the ID column and msrepl_tran_version. Our original programmer did not know that our replication created this column and included it in a comparison, which has resulted in almost 20,000 records being put into this table, of which only 1,588 are ACTUALLY unique, and it's causing long load times.
I'm looking for a way to exclude the ID and replication columns from a select distinct without having to list every single column in the table. I'm going to have to select from the record set multiple times to fix this (there are other tables affected, and the query is going to be ridiculous), so I don't want my code to be messy if I can help it.
Is there a way to accomplish this without listing all of the other columns?
Select distinct {* except ID, msrepl_tran_version} from table
Other than the following (where COL_1 is the ID column and COL_K is the replication GUID):
Select distinct COL_2, ..., COL_K-1, COL_K+1, ..., COL_N from table
After more searching, I found the answer:
SELECT * INTO #temp FROM table
ALTER TABLE #temp DROP COLUMN id
ALTER TABLE #temp DROP COLUMN msrepl_tran_version
SELECT DISTINCT * FROM #temp
This works for what I need. Thanks for the answers guys!
Absolutely, 100% not possible, there is no subtract columns instruction.
It can't be done in the spirit of the OP's initial question. However, it can be done with dynamic sql:
--Dynamically build list of column names.
DECLARE @ColNames NVARCHAR(MAX) = ''
SELECT @ColNames = @ColNames + '[' + c.COLUMN_NAME + '],'
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE c.TABLE_SCHEMA = 'dbo'
AND c.TABLE_NAME = 'YourTable'
--Exclude these.
AND c.COLUMN_NAME NOT IN ('ID', 'msrepl_tran_version')
--Keep original column order for appearance, convenience.
ORDER BY c.ORDINAL_POSITION
--Remove trailing comma.
SET @ColNames = LEFT(@ColNames, LEN(@ColNames) - 1)
--Verify query
PRINT ('SELECT DISTINCT ' + @ColNames + ' FROM [dbo].[YourTable]')
--Uncomment when ready to proceed.
--EXEC ('SELECT DISTINCT ' + @ColNames + ' FROM [dbo].[YourTable]')
One additional note: since you need to select from the record set multiple times and potentially join to other tables, you can use the above to create a view on the table. This should make your code fairly clean.
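A rough sketch of that view idea, reusing the @ColNames variable built above (the view name is hypothetical):

--Create the view once; later queries and joins can then stay short.
EXEC ('CREATE VIEW [dbo].[vw_YourTable_Distinct] AS SELECT DISTINCT ' + @ColNames + ' FROM [dbo].[YourTable]')

--From here on:
SELECT * FROM [dbo].[vw_YourTable_Distinct]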
I have data that is in 192 separate columns. They are column pairs, so that there is data for a 15 minute time slice and a quality control number. The way it is set up now, each row represents a single day. I would like to insert the data into another table with fewer columns (Date, ReadTime, QualityControlNumber, Reading, ...).
I started by trying a while loop like this, but using a variable to change the column header doesn't seem possible.
Should I nest while loops to increment the column headers, or is there another trick I should be using?
Code tried:
Declare @count varchar(10),
@QC varchar(10),
@Interval varchar(10)
set @count = 1
set @QC = 'QC#' + @count
set @Interval = 'Interval#' + @count
While (@count<97)
BEGIN
insert into Data_DATEstr (Number,[ReadDate],TimeInterval,QCReading,IntervalReading,ConversionFactor)
select [Number], [Start Date], @count, ['QC#'+@count], [Interval# +@count] ,[Conversion Factor]
from table
where [Number] = '103850581'
and [Start Date] = '060112'
set @count = (@count+1)
END
There is no need to use a while loop for this; you want an UNPIVOT. If you are trying to convert 192 separate columns to rows, then you will definitely want to use the UNPIVOT function. This can be written using dynamic SQL so you will not have to code all of the fields:
DECLARE @cols AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = stuff((select ','+quotename(C.name)
from sys.columns as C
where C.object_id = object_id('test') and
C.name like 'col%'
for xml path('')), 1, 1, '')
set @query = 'SELECT QualityControlNumber, replace(col, ''col'', '''') as col, value
from test
unpivot
(
value
for col in (' + @cols + ')
) p '
execute(@query)
See SQL Fiddle with Demo
Using dynamic SQL for this will get the list of fields that you want to transpose when the query is executed. This also prevents you having to manually code for 192 separate columns.
Although your question is not entirely clear, it sounds like you need some form of operation similar to UNPIVOT. This lets you rotate columns into rows. I'd suggest reading up on it and seeing if it will work for you -- even if it doesn't, it might suggest an approach that would work.
The command that you want is unpivot. Something like:
select Date, ReadTime, QualityControlNumber, Reading
from t
unpivot (Reading for date in (day1, . . . , dayn)) as unpvt
You will find that you won't really get the date, but instead the original column name. You can fix this by putting the unpivot query in a subquery (or CTE) and then using string manipulations and cast to convert the column name to a date.
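A minimal sketch of that clean-up step (the column names day20120601, day20120602 and the table name t are placeholders, not from the original question):

-- Unpivot in a CTE, then strip the 'day' prefix from the original column name
-- and cast the remaining yyyymmdd string to a date.
with unpvt as (
    select Reading, col
    from t
    unpivot (Reading for col in (day20120601, day20120602)) as u
)
select cast(replace(col, 'day', '') as date) as ReadDate, Reading
from unpvt;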
I need to select 90 columns out of 107 columns from my table.
Is it possible to write select * except(column1, column2, ...) from table, or is there any other way to get specific columns only, or do I need to write all 90 columns in the select statement?
You could generate the column list:
select name + ', '
from sys.columns
where object_id = object_id('YourTable')
and name not in ('column1', 'column2')
It's possible to do this on the fly with dynamic SQL:
declare @columns varchar(max)
select @columns = case when @columns is null then '' else @columns + ', ' end +
quotename(name)
from sys.columns
where object_id = object_id('YourTable')
and name not in ('column1', 'column2')
declare @query varchar(max)
set @query = 'select ' + @columns + ' from YourTable'
exec (@query)
No, there's no way of doing * EXCEPT some columns. SELECT * itself should rarely, if ever, be used outside of EXISTS tests.
If you're using SSMS, you can drag the "columns" folder (under a table) from the Object Explorer into a query window, and it will insert all of the column names (so you can then go through them and remove the 17 you don't want)
There is no way in SQL to do select everything EXCEPT col1, col2 etc.
The only way to do this is to have your application handle this, and generate the sql query dynamically.
You could potentially do some dynamic sql for this, but it seems like overkill. Also it's generally considered poor practice to use SELECT *... much less SELECT * but not col3, col4, col5 since you won't get consistent results in the case of table changes.
Just use SSMS to script out a select statement and delete the columns you don't need. It should be simple.
No - you need to write all the columns you need. You might create a view for that, so your actual statement could use select * (but then you have to list all columns in the view).
Since you should never be using select *, why is this a problem? Just drag the columns over from the Object Explorer and delete the ones you don't want.
We all know that to select all columns from a table, we can use
SELECT * FROM tableA
Is there a way to exclude column(s) from a table without specifying all the columns?
SELECT * [except columnA] FROM tableA
The only way that I know is to manually specify all the columns and exclude the unwanted column. This is really time consuming, so I'm looking for ways to save time and effort on this, as well as on future maintenance should the table have more or fewer columns.
You can try it this way:
/* Get the data into a temp table */
SELECT * INTO #TempTable
FROM YourTable
/* Drop the columns that are not needed */
ALTER TABLE #TempTable
DROP COLUMN ColumnToDrop
/* Get results and drop temp table */
SELECT * FROM #TempTable
DROP TABLE #TempTable
No.
Maintenance-light best practice is to specify only the required columns.
At least 2 reasons:
This makes your contract between client and database stable. Same data, every time
Performance, covering indexes
Edit (July 2011):
If you drag from Object Explorer the Columns node for a table, it puts a CSV list of columns in the Query Window for you which achieves one of your goals
If you don't want to write each column name manually, you can use Script Table As by right-clicking the table or view in SSMS.
You will then get the whole select query in a new Query Editor window; remove the unwanted columns from it.
Done
The automated way to do this in SQL (SQL Server) is:
declare @cols varchar(max), @query varchar(max);
SELECT @cols = STUFF
(
(
SELECT DISTINCT '], [' + name
FROM sys.columns
where object_id = (
select top 1 object_id from sys.objects
where name = 'MyTable'
)
and name not in ('ColumnIDontWant1', 'ColumnIDontWant2')
FOR XML PATH('')
), 1, 2, ''
) + ']';
SELECT @query = 'select ' + @cols + ' from MyTable';
EXEC (@query);
A modern SQL dialect like BigQuery proposes an excellent solution.
SELECT * EXCEPT(ColumnNameX, [ColumnNameY, ...])
FROM TableA
This is a very powerful SQL syntax to avoid a long list of columns that need to be updated all the time due to table column name changes. And this functionality is missing in the current SQL Server implementation, which is a pity. Hopefully, one day, Microsoft Azure will be more data scientist-friendly.
Data scientists like to have a quick option to shorten a query and remove some columns (due to duplication or any other reason).
https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#select-modifiers
You could create a view that has the columns you wish to select, then you can just select * from the view...
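For example (all names here are hypothetical):

-- The view lists the wanted columns once; callers can then use SELECT *.
CREATE VIEW dbo.vw_TableA_NoColumnA AS
    SELECT columnB, columnC
    FROM dbo.TableA;
GO

SELECT * FROM dbo.vw_TableA_NoColumnA;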
Yes it's possible (but not recommended).
CREATE TABLE contact (contactid int, name varchar(100), dob datetime)
INSERT INTO contact SELECT 1, 'Joe', '1974-01-01'
DECLARE @columns varchar(8000)
SELECT @columns = ISNULL(@columns + ', ','') + QUOTENAME(column_name)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'contact' AND COLUMN_NAME <> 'dob'
ORDER BY ORDINAL_POSITION
EXEC ('SELECT ' + @columns + ' FROM contact')
Explanation of the code:
Declare a variable to store a comma separated list of column names. This defaults to NULL.
Use a system view to determine the names of the columns in our table.
Use SELECT @variable = @variable + ... FROM to concatenate the column names. This type of SELECT does not return a result set. This is perhaps undocumented behaviour but works in every version of SQL Server. As an alternative you could use SET @variable = (SELECT ... FOR XML PATH('')) to concatenate strings (a sketch of that alternative follows this list).
Use the ISNULL function to prepend a comma only if this is not the first column name.
Use the QUOTENAME function to support spaces and punctuation in column names.
Use the WHERE clause to hide columns we don't want to see.
Use EXEC (@variable), also known as dynamic SQL, to resolve the column names at runtime. This is needed because we don't know the column names at compile time.
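A sketch of that FOR XML PATH alternative, reusing the same contact table and again excluding dob:

DECLARE @columns varchar(8000)

-- Concatenate ', [col]' per column, then strip the leading ', ' with STUFF.
SET @columns = STUFF((
    SELECT ', ' + QUOTENAME(COLUMN_NAME)
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'contact' AND COLUMN_NAME <> 'dob'
    ORDER BY ORDINAL_POSITION
    FOR XML PATH('')), 1, 2, '')

EXEC ('SELECT ' + @columns + ' FROM contact')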
Like the others have said there is no way to do this, but if you're using Sql Server a trick that I use is to change the output to comma separated, then do
select top 1 * from table
and cut the whole list of columns from the output window. Then you can choose which columns you want without having to type them all in.
Basically, you cannot do what you would like - but you can get the right tools to help you out making things a bit easier.
If you look at Red-Gate's SQL Prompt, you can type "SELECT * FROM MyTable", and then move the cursor back after the "*", and hit <TAB> to expand the list of fields, and remove those few fields you don't need.
It's not a perfect solution - but a darn good one! :-) Too bad MS SQL Server Management Studio's Intellisense still isn't intelligent enough to offer this feature.......
Marc
DECLARE @SQL VARCHAR(max), @TableName sysname = 'YourTableName'
SELECT @SQL = COALESCE(@SQL + ', ', '') + Name
FROM sys.columns
WHERE OBJECT_ID = OBJECT_ID(@TableName)
AND name NOT IN ('Not This', 'Or that');
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @TableName
EXEC (@SQL)
UPDATE:
You can also create a stored procedure to take care of this task if you use it more often.
In this example I have used the built-in STRING_SPLIT(), which is available on SQL Server 2016+,
but if you need it, there are plenty of examples on SO of how to create it manually.
CREATE PROCEDURE [usp_select_without]
@schema_name sysname = N'dbo',
@table_name sysname,
@list_of_columns_excluded nvarchar(max),
@separator nchar(1) = N','
AS
BEGIN
DECLARE
@SQL nvarchar(max),
@full_table_name nvarchar(max) = CONCAT(@schema_name, N'.', @table_name);
SELECT @SQL = COALESCE(@SQL + ', ', '') + QUOTENAME([Name])
FROM sys.columns sc
LEFT JOIN STRING_SPLIT(@list_of_columns_excluded, @separator) ss ON sc.[name] = ss.[value]
WHERE sc.OBJECT_ID = OBJECT_ID(@full_table_name, N'u')
AND ss.[value] IS NULL;
SELECT @SQL = N'SELECT ' + @SQL + N' FROM ' + @full_table_name;
EXEC(@SQL)
END
And then just:
EXEC [usp_select_without]
@table_name = N'Test_Table',
@list_of_columns_excluded = N'ID, Date, Name';
No, there is no way to do this. Maybe you can create custom views, if that's feasible in your situation.
EDIT: Maybe, if your DB supports execution of dynamic SQL, you could write an SP, pass it the columns you don't want to see, and let it create the query dynamically and return the result to you. I think this is doable in SQL Server at least.
If you are using SQL Server Management Studio then do as follows:
Type in your desired table's name and select it
Press Alt+F1
The output shows the columns in the table.
Select the desired columns
Copy & paste those in your select query
Fire the query.
Enjoy.
If you want to exclude a sensitive column such as a password, for example, I do this to hide the value:
SELECT *, '' AS password FROM tableName;
The best way to solve this is using a view: you can create a view with the required columns and retrieve data from it.
Example:
mysql> SELECT * FROM calls;
+----+------------+---------+
| id | date | user_id |
+----+------------+---------+
| 1 | 2016-06-22 | 1 |
| 2 | 2016-06-22 | NULL |
| 3 | 2016-06-22 | NULL |
| 4 | 2016-06-23 | 2 |
| 5 | 2016-06-23 | 1 |
| 6 | 2016-06-23 | 1 |
| 7 | 2016-06-23 | NULL |
+----+------------+---------+
7 rows in set (0.06 sec)
mysql> CREATE VIEW C_VIEW AS
-> SELECT id,date from calls;
Query OK, 0 rows affected (0.20 sec)
mysql> select * from C_VIEW;
+----+------------+
| id | date |
+----+------------+
| 1 | 2016-06-22 |
| 2 | 2016-06-22 |
| 3 | 2016-06-22 |
| 4 | 2016-06-23 |
| 5 | 2016-06-23 |
| 6 | 2016-06-23 |
| 7 | 2016-06-23 |
+----+------------+
7 rows in set (0.00 sec)
In summary you cannot do it, but I disagree with all of the comments above; there are scenarios where you can legitimately use a *.
When you create a nested query in order to select a specific range out of a whole list (such as paging), why in the world would you want to specify each column in the outer select statement when you have already done it in the inner one?
In SQL Management Studio you can expand the columns in Object Explorer, then drag the Columns tree item into a query window to get a comma separated list of columns.
If we are talking about procedures, this trick works to generate a new query and EXECUTE IMMEDIATE it:
SELECT LISTAGG((column_name), ', ') WITHIN GROUP (ORDER BY column_id)
INTO var_list_of_columns
FROM ALL_TAB_COLUMNS
WHERE table_name = 'PUT_HERE_YOUR_TABLE'
AND column_name NOT IN ('dont_want_this_column','neither_this_one','etc_column');
PostgreSQL has a way of doing it.
Please refer to:
http://www.postgresonline.com/journal/archives/41-How-to-SELECT-ALL-EXCEPT-some-columns-in-a-table.html
The Information Schema Hack Way
SELECT 'SELECT ' || array_to_string(ARRAY(SELECT 'o' || '.' || c.column_name
FROM information_schema.columns As c
WHERE table_name = 'officepark'
AND c.column_name NOT IN('officeparkid', 'contractor')
), ',') || ' FROM officepark As o' As sqlstmt
The above, for my particular example table, generates an SQL statement that looks like this:
SELECT o.officepark,o.owner,o.squarefootage
FROM officepark As o
Is there a way to exclude column(s) from a table without specifying
all the columns?
Using declarative SQL in the usual way, no.
I think your proposed syntax is worthy and good. In fact, the relational database language 'Tutorial D' has a very similar syntax where the keywords ALL BUT are followed by a set of attributes (columns).
However, SQL's SELECT * already gets a lot of flak (@Guffa's answer here is a typical objection), so I don't think SELECT ALL BUT will get into the SQL Standard anytime soon.
I think the best 'work around' is to create a VIEW with only the columns you desire then SELECT * FROM ThatView.
I do not know of any database that supports this (SQL Server, MySQL, Oracle, PostgreSQL). It is definitely not part of the SQL standards so I think you have to specify only the columns you want.
You could of course build your SQL statement dynamically and have the server execute it. But this opens up the possibility for SQL injection..
Right click table in Object Explorer, Select top 1000 rows
It'll list all columns and not *. Then remove the unwanted column(s). Should be much faster than typing it yourself.
Then when you feel this is a bit too much work, get Red Gate's SQL Prompt, and type ssf from tbl, go to the * and click tab again.
I know this is a little old, but I had just run into the same issue and was looking for an answer. Then I had a senior developer show me a very simple trick.
If you are using the management studio query editor, expand the database, then expand the table that you are selecting from so that you can see the columns folder.
In your select statement, just highlight the referenced columns folder above and drag and drop it into the query window. It will paste all of the columns of the table, then just simply remove the identity column from the list of columns...
A colleague advised a good alternative:
Do a SELECT INTO in your preceding query (where you generate or get the data from) into a table (which you will delete when done). This will create the structure for you.
Do a Script As CREATE to a new query window.
Remove the unwanted columns. Format the remaining columns into a one-liner and paste that as your column list.
Delete the table you created.
Done...
This helped us a lot.
Actually, Snowflake just released EXCLUDE, so now you'd just:
SELECT * EXCLUDE (columnA, columnB, ...) FROM tableA
Well, it is a common best practice to specify which columns you want, instead of just specifying *. So you should just state which fields you want your select to return.
That's what I often use for this case:
declare @colnames varchar(max)=''
select @colnames=@colnames+','+name from syscolumns where object_id('tablename')=id and name not in ('column3','column4')
SET @colnames=RIGHT(@colnames,LEN(@colnames)-1)
@colnames now looks like column1,column2,column5
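You could then feed it into dynamic SQL, e.g. (a sketch; tablename is a placeholder):

declare @sql nvarchar(max) = 'select ' + @colnames + ' from tablename'
exec (@sql)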
I did it like this and it works just fine (version 5.5.41):
# prepare column list using info from a table of choice
SET @dyn_columns = (SELECT REPLACE(
GROUP_CONCAT(`COLUMN_NAME`), ',column_name_to_remove','')
FROM `INFORMATION_SCHEMA`.`COLUMNS` WHERE
`TABLE_SCHEMA`='database_name' AND `TABLE_NAME`='table_name');
# set sql command using prepared columns
SET @sql = CONCAT("SELECT ", @dyn_columns, " FROM table_name");
# prepare and execute
PREPARE statement FROM @sql;
EXECUTE statement;
Sometimes the same program must handle different database structures. So I could not use a column list in the program to avoid errors in select statements.
* gives me all the optional fields. I check if the fields exist in the data table before use. This is my reason for using * in select.
This is how I handle excluded fields:
Dim da As New SqlDataAdapter("select * from table", cn)
da.FillSchema(dt, SchemaType.Source)
Dim fieldlist As String = ""
For Each DC As DataColumn In dt.Columns
    If DC.ColumnName.ToLower <> excludefield Then
        fieldlist = fieldlist & DC.ColumnName & ","
    End If
Next
In Hive Sql you can do this:
set hive.support.quoted.identifiers=none;
select
`(unwanted_col1|unwanted_col2|unwanted_col3)?+.+`
from database.table
This gives you the remaining columns.
The proposed answer (stored procedure) from BartoszX didn't work for me when using a view instead of a real table.
Credit for the idea and the code below (except for my fix) belongs to BartoszX.
In order that this works for tables as well as for views, use the following code:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[select_without]
@schema_name sysname = N'dbo',
@table_name sysname,
@list_of_columns_excluded nvarchar(max),
@separator nchar(1) = N','
AS
BEGIN
DECLARE
@SQL nvarchar(max),
@full_table_name nvarchar(max) = CONCAT(@schema_name, N'.', @table_name);
SELECT @SQL = COALESCE(@SQL + ', ', '') + QUOTENAME([Name])
FROM sys.columns sc
LEFT JOIN STRING_SPLIT(@list_of_columns_excluded, @separator) ss ON sc.[name] = ss.[value]
WHERE sc.OBJECT_ID = OBJECT_ID(@full_table_name)
AND ss.[value] IS NULL;
SELECT @SQL = N'SELECT ' + @SQL + N' FROM ' + @full_table_name;
EXEC(@SQL)
END
GO
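Usage is the same as for the original procedure, e.g. (the view name is hypothetical):

EXEC [dbo].[select_without]
@table_name = N'Test_View',
@list_of_columns_excluded = N'ID, Date, Name';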
Is there any RDBMS that implements something like SELECT * EXCEPT? What I'm after is getting all of the fields except a specific TEXT/BLOB field, and I'd like to just select everything else.
Almost daily I complain to my coworkers that someone should implement this... It's terribly annoying that it doesn't exist.
Edit: I understand everyone's concern for SELECT *. I know the risks associated with SELECT *. However, this, at least in my situation, would not be used for any Production level code, or even Development level code; strictly for debugging, when I need to see all of the values easily.
As I've stated in some of the comments, where I work is strictly a commandline shop, doing everything over ssh. This makes it difficult to use any gui tools (external connections to the database aren't allowed), etc etc.
Thanks for the suggestions though.
As others have said, it is not a good idea to do this in a query because it is prone to issues when someone changes the table structure in the future. However, there is a way to do this... and I can't believe I'm actually suggesting this, but in the spirit of answering the ACTUAL question...
Do it with dynamic SQL... this does all the columns except the "description" column. You could easily turn this into a function or stored proc.
declare @sql varchar(8000),
@table_id int,
@col_id int
set @sql = 'select '
select @table_id = id from sysobjects where name = 'MY_Table'
select @col_id = min(colid) from syscolumns where id = @table_id and name <> 'description'
while (@col_id is not null) begin
select @sql = @sql + name from syscolumns where id = @table_id and colid = @col_id
select @col_id = min(colid) from syscolumns where id = @table_id and colid > @col_id and name <> 'description'
if (@col_id is not null) set @sql = @sql + ','
print @sql
end
set @sql = @sql + ' from MY_table'
exec (@sql)
Create a view on the table which doesn't include the blob columns
Is there any RDBMS that implements something like SELECT * EXCEPT?
Yes, Google Big Query implements SELECT * EXCEPT:
A SELECT * EXCEPT statement specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
WITH orders AS(
SELECT 5 as order_id,
"sprocket" as item_name,
200 as quantity
)
SELECT * EXCEPT (order_id)
FROM orders;
Output:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| sprocket | 200 |
+-----------+----------+
EDIT:
H2 database also supports SELECT * EXCEPT (col1, col2, ...) syntax.
Wildcard expression
A wildcard expression in a SELECT statement. A wildcard expression represents all visible columns. Some columns can be excluded with optional EXCEPT clause.
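For example, in H2 (table and column names are hypothetical):

SELECT * EXCEPT (DESCRIPTION) FROM LANGUAGE;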
EDIT 2:
Hive supports: REGEX Column Specification
A SELECT statement can take regex-based column specification in Hive releases prior to 0.13.0, or in 0.13.0 and later releases if the configuration property hive.support.quoted.identifiers is set to none.
The following query selects all columns except ds and hr.
SELECT `(ds|hr)?+.+` FROM sales
EDIT 3:
Snowflake also now supports excluding columns from SELECT * via EXCLUDE (and a RENAME option equivalent to REPLACE in BigQuery):
EXCLUDE col_name EXCLUDE (col_name, col_name, ...)
When you select all columns (SELECT *), specifies the columns that should be excluded from the results.
RENAME col_name AS col_alias RENAME (col_name AS col_alias, col_name AS col_alias, ...)
When you select all columns (SELECT *), specifies the column aliases that should be used in the results.
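For example, in Snowflake (names hypothetical):

-- Drop order_id and rename item_name in one statement:
SELECT * EXCLUDE (order_id) RENAME (item_name AS product_name) FROM orders;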
and so does Databricks SQL (since Runtime 11.0)
star_clause
[ { table_name | view_name } . ] * [ except_clause ]
except_clause
EXCEPT ( { column_name | field_name } [, ...] )
and also DuckDB
-- select all columns except the city column from the addresses table
SELECT * EXCLUDE (city) FROM addresses;
-- select all columns from the addresses table, but replace city with LOWER(city)
SELECT * REPLACE (LOWER(city) AS city) FROM addresses;
-- select all columns matching the given regex from the table
SELECT COLUMNS('number\d+') FROM addresses;
DB2 allows for this. Columns have an attribute/specifier of Hidden.
From the syscolumns documentation
HIDDEN
CHAR(1) NOT NULL WITH DEFAULT 'N'
Indicates whether the column is implicitly hidden:
P Partially hidden. The column is implicitly hidden from SELECT *.
N Not hidden. The column is visible to all SQL statements.
From the CREATE TABLE documentation: as part of creating your column, you would specify the IMPLICITLY HIDDEN modifier.
An example DDL from Implicitly Hidden Columns follows
CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
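A quick illustration of the effect of that DDL:

SELECT * FROM T1;          -- returns only C1 and C3; C2 is implicitly hidden
SELECT C1, C2, C3 FROM T1; -- returns all three columns when C2 is named explicitly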
Whether this capability is such a deal maker to drive the adoption of DB2 is left as an exercise to future readers.
Is there any RDBMS that implements something like SELECT * EXCEPT
Yes! The truly relational language Tutorial D allows projection to be expressed in terms of the attributes to be removed instead of the ones to be kept e.g.
my_relvar { ALL BUT description }
In fact, its equivalent to SQL's SELECT * is { ALL BUT }.
Your proposal for SQL is a worthy one but I heard it has already been put to the SQL standard's committee by the users' group and rejected by the vendor's group :(
It has also been explicitly requested for SQL Server but the request was closed as 'won't fix'.
Yes, finally there is :) SQL Standard 2016 defines Polymorphic Table Functions
SQL:2016 introduces polymorphic table functions (PTF) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF need to declare the returned columns in advance.
PTFs as described by SQL:2016 are not yet available in any tested database. Interested readers may refer to the free technical report "Polymorphic table functions in SQL" released by ISO. The following are some of the examples discussed in the report:
CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns
Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber) -- me: no more hardcoded strings :)
TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows
Oracle 18c implements this mechanism; see the 18c Skip_col Polymorphic Table Function Example on Oracle Live SQL and the Skip_col Polymorphic Table Function Example.
This example shows how to skip data based on name/specific datatype:
CREATE PACKAGE skip_col_pkg AS
-- OVERLOAD 1: Skip by name
FUNCTION skip_col(tab TABLE, col columns)
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t;
-- OVERLOAD 2: Skip by type --
FUNCTION skip_col(tab TABLE,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t;
END skip_col_pkg;
and body:
CREATE PACKAGE BODY skip_col_pkg AS
/* OVERLOAD 1: Skip by name
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_name
*
* PARAMETERS:
* tab - The input table
* col - The name of the columns to drop from the output
*
* DESCRIPTION:
* This PTF removes all the input columns listed in col from the output
* of the PTF.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t
AS
new_cols dbms_tf.columns_new_t;
col_id PLS_INTEGER := 1;
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
FOR j IN 1 .. col.count() LOOP
tab.column(i).pass_through := tab.column(i).description.name != col(j);
EXIT WHEN NOT tab.column(i).pass_through;
END LOOP;
END LOOP;
RETURN NULL;
END;
/* OVERLOAD 2: Skip by type
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_type
*
* PARAMETERS:
* tab - Input table
* type_name - A string representing the type of columns to skip
* flip - 'False' [default] => Match columns with given type_name
* otherwise => Ignore columns with given type_name
*
* DESCRIPTION:
* This PTF removes the given type of columns from the given table.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t
AS
typ CONSTANT VARCHAR2(1024) := upper(trim(type_name));
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
tab.column(i).pass_through :=
CASE upper(substr(flip,1,1))
WHEN 'F' THEN dbms_tf.column_type_name(tab.column(i).description)
!=typ
ELSE dbms_tf.column_type_name(tab.column(i).description)
=typ
END /* case */;
END LOOP;
RETURN NULL;
END;
END skip_col_pkg;
And sample usage:
-- skip number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number');
-- only number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number', flip => 'True')
-- skip defined columns
SELECT *
FROM skip_col_pkg.skip_col(scott.emp, columns(comm, hiredate, mgr))
WHERE deptno = 20;
I highly recommend reading the entire example (creating standalone functions instead of package calls).
You could easily overload the skip method, for example to skip columns that do not start/end with a specific prefix/suffix.
db<>fiddle demo
Related: How to Dynamically Change the Columns in a SQL Query By Chris Saxon
Stay away from SELECT *; you are setting yourself up for trouble. Always specify exactly which columns you want. It is in fact quite refreshing that the "feature" you are asking for doesn't exist.
I believe the rationale for it not existing is that the author of a query should (for performance's sake) only request what they're going to look at/need (and therefore know which columns to specify) -- if someone adds a couple more blobs in the future, you'd be pulling back potentially large fields you're not going to need.
Temp table option here, just drop the columns not required and select * from the altered temp table.
/* Get the data into a temp table */
SELECT * INTO #TempTable
FROM
table
/* Drop the columns that are not needed */
ALTER TABLE #TempTable
DROP COLUMN [columnname]
SELECT * from #TempTable
declare @sql nvarchar(max),
@table char(10)
set @sql = 'select '
set @table = 'table_name'
SELECT @sql = @sql + '[' + COLUMN_NAME + '],'
FROM INFORMATION_SCHEMA.Columns
WHERE TABLE_NAME = @table
and COLUMN_NAME <> 'omitted_column_name'
SET @sql = substring(@sql,1,len(@sql)-1) + ' from ' + @table
EXEC (@sql);
I needed something like what @Glen asked for, to ease my life with HASHBYTES().
My inspiration was the answers from @Jasmine and @Zerubbabel. In my case I have different schemas, so the same table name appears more than once in sys.objects. As this may help someone with the same scenario, here it goes:
ALTER PROCEDURE [dbo].[_getLineExceptCol]
@table SYSNAME,
@schema SYSNAME,
@LineId int,
@exception VARCHAR(500)
AS
DECLARE @SQL NVARCHAR(MAX)
BEGIN
SET NOCOUNT ON;
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name
FROM sys.columns
WHERE name <> @exception
AND object_id = (SELECT object_id FROM sys.objects
WHERE name LIKE @table
AND schema_id = (SELECT schema_id FROM sys.schemas WHERE name LIKE @schema))
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @schema + '.' + @table + ' WHERE Id = ' + CAST(@LineId AS nvarchar(50))
EXEC(@SQL)
END
GO
It's an old question, but I hope this answer can still be helpful to others. It can also be modified to exclude more than one field. This can be very handy if you want to unpivot a table with many columns.
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name FROM sys.columns WHERE name <> 'colName' AND object_id = (SELECT id FROM sysobjects WHERE name = 'tblName')
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + 'tblName'
EXEC sp_executesql @SQL
Stored Procedure:
usp_SelectAllExcept 'tblname', 'colname'
ALTER PROCEDURE [dbo].[usp_SelectAllExcept]
(
@tblName SYSNAME
,@exception VARCHAR(500)
)
AS
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name from sys.columns where name <> @exception and object_id = (Select id from sysobjects where name = @tblName)
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @tblName
EXEC sp_executesql @SQL
For the sake of completeness, this is possible in DremelSQL dialect, doing something like:
WITH orders AS
(SELECT 5 as order_id,
"foobar12" as item_name,
800 as quantity)
SELECT * EXCEPT (order_id)
FROM orders;
Result:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| foobar12 | 800 |
+-----------+----------+
There also seems to be another way to do it here without Dremel.
Your question was about what RDBMS supports the * EXCEPT (...) syntax, so perhaps, looking at the jOOQ manual page for * EXCEPT can be useful in the future, as that page will keep track of new dialects supporting the syntax.
Currently (mid 2022), among the jOOQ supported RDBMS, at least BigQuery, H2, and Snowflake support the syntax natively. The others need to emulate it by listing the columns explicitly:
-- ACCESS, ASE, AURORA_MYSQL, AURORA_POSTGRES, COCKROACHDB, DB2, DERBY, EXASOL,
-- FIREBIRD, HANA, HSQLDB, INFORMIX, MARIADB, MEMSQL, MYSQL, ORACLE, POSTGRES,
-- REDSHIFT, SQLDATAWAREHOUSE, SQLITE, SQLSERVER, SYBASE, TERADATA, VERTICA,
-- YUGABYTEDB
SELECT LANGUAGE.CD, LANGUAGE.DESCRIPTION
FROM LANGUAGE
-- BIGQUERY, H2
SELECT * EXCEPT (ID)
FROM LANGUAGE
-- SNOWFLAKE
SELECT * EXCLUDE (ID)
FROM LANGUAGE
Disclaimer: I work for the company behind jOOQ
As others are saying: SELECT * is a bad idea.
Some reasons:
Get only what you need (anything more is a waste)
Indexing (index what you need and you can get it more quickly; if you ask for a bunch of non-indexed columns too, your query plans will suffer). A small sketch of this point follows.
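A sketch of the covering-index point (table, column, and index names are hypothetical):

-- The index covers this query, so it can be answered without touching the base table:
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (CustomerId) INCLUDE (OrderDate, Total);

SELECT CustomerId, OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;

-- SELECT * would instead force a lookup into the base table for every other column.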