Seeking to arrange columns in a table via SQL but retain all of the other columns - sql

All,
I'm working on an AS400 and using IBM's SQL400 DB2 SQL. I have a table with 100 columns (i.e., fields) and I need to arrange the table such that several of the columns are arranged from left to right, while the remaining columns follow the arranged ones. For example:
MyDataTable
LastName1 SSN2 FirstName3 Address4 Sales5 FirstVisit6 TimeofVisit7 +93 more columns
I need to arrange the columns/fields to look as follows:
FirstName3 LastName1 SSN2 FirstVisit6 TimeofVisit7 Address4 Sales5 Remaining 93 columns
I'm not interested in GROUP BY or ORDER BY, as I don't want the data sorted within the columns/fields; I want to arrange the columns themselves. Additionally, I'm trying to avoid writing a SELECT of 100+ columns/fields. In essence, I have a handful of columns/fields I need to place left to right in a table, and I want the remaining fields to be listed in their original place. What is the most efficient way to achieve this in SQL?

I question the need for this, there's usually an opportunity to re-order the columns before presentation in the UI layer.
Unless you're just dealing with ad-hoc queries/extracts. But even there, the Run SQL Scripts component of IBM ACS will allow you to drag & drop the columns into a new order while looking at the results.
In any case, if you're ok with duplicated columns, then @smoore4's suggestion of just selecting the ones you're interested in and then all of them is the quickest solution.
SELECT t.LastName1, t.SSN2, t.FirstName3, t.Address4, t.Sales5,
t.FirstVisit6, t.TimeofVisit7, t.*
FROM MyDataTable t
Otherwise you are going to need to list the columns in the order you want. In order to save some typing, take a look at the QSYS2.SYSCOLUMNS table
select system_column_name concat ', '
from qsys2.syscolumns syscolumns
where system_table_name = 'MYTABLE'
and system_table_schema='MYLIB'
You can copy & paste the list of columns and reorder them for use in your original statement.
Lastly, note that SELECT * is generally a bad idea in any kind of production code. You may find the SQL statements I posted in this answer of some use for building lists of columns.
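For ad-hoc use, that catalog-driven approach can also be scripted. The sketch below is illustrative only: it uses Python with SQLite's PRAGMA table_info as a stand-in for QSYS2.SYSCOLUMNS (on DB2 for i you would query SYSCOLUMNS instead), and the table and column names come from the question.

```python
import sqlite3

def reordered_select(conn, table, first_cols):
    """Build a SELECT that lists first_cols first, then the remaining
    columns in their original catalog order (no duplicates)."""
    # PRAGMA table_info returns one row per column, in definition order;
    # row[1] is the column name.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    rest = [c for c in cols if c not in first_cols]
    return f"SELECT {', '.join(list(first_cols) + rest)} FROM {table}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MyDataTable "
             "(LastName1, SSN2, FirstName3, Address4, Sales5)")
sql = reordered_select(conn, "MyDataTable", ["FirstName3", "LastName1", "SSN2"])
print(sql)
```

The same generate-then-run idea works with any catalog: you pay the typing cost once in the generator, not per query.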

To avoid having to type all the columns in a large table, I use a DB2 function named listagg(). It looks like this:
select listagg(column_name, ', ') within group (order by ordinal_position)
from qsys2.syscolumns
where table_schema = 'library name'
and table_name = 'table name'
Just make sure you type your library and table names in all upper case. It will give you a comma-separated string of column names. If the string is longer than 4000 characters (which can happen if your table has a lot of fields), you can tell the function to return a larger field by casting the column larger, like this:
select listagg(cast(column_name as varchar(8000)), ', ') within group (order by ordinal_position)
from qsys2.syscolumns
where table_schema = 'library name'
and table_name = 'table name'
This will produce a varchar(8000) result field. You can safely cast it all the way up to 32740, but if you do that, there can be no other columns on the row since the max row length without large objects is 32740.
Note: this is the SQL column name. To get the 10-character system name, use trim(system_column_name) instead of column_name. The trim is important here: system_column_name is defined as char(10) vs. varchar(128), and will include trailing spaces unless they are trimmed off.

You are looking to rearrange the columns in a table?
Here is an SQL function named alterTable_addColumn_likeColumn that will add a column to a table with the same data definition as a column in a reference table. The idea is that you can use this function in a SELECT from SYSCOLUMNS to add all the columns from the table you want to rearrange to a new table.
The alterTable_addColumn_likeColumn SQL function:
/* add a column to a table with the same data defn as the like */
/* column. */
CREATE OR REPLACE function alterTable_addColumn_likeColumn(
inSchema char(10),
inTableName char(80),
inColumnName char(80),
inLikeSchema char(10),
inLikeTable char(80),
inLikeColumn char(80))
returns char(80)
language sql
specific core0066f
MODIFIES SQL DATA
SET OPTION datfmt = *ISO, DLYPRP = *YES, DBGVIEW = *SOURCE,
USRPRF = *OWNER, DYNUSRPRF = *OWNER
BEGIN
DECLARE VMSG CHAR(80) ;
DECLARE VCMDSTR CHAR(256) ;
declare vSqlCode decimal(5,0) default 0 ;
declare vSqlState char(5) default ' ' ;
declare vErrText char(256) default ' ' ;
declare sqlCode int default 0 ;
declare SqlState char(5) default ' ' ;
declare vDataType char(10) default ' ' ;
declare vLength decimal(5,0) default 0 ;
declare vDprc decimal(1,0) default 0 ;
declare vNullable char(1) default ' ' ;
declare vHasDefault char(1) default ' ' ;
declare vColumnDefault char(80) default ' ' ;
declare vColumnHeading char(80) default ' ' ;
declare vStmt char(2000) default ' ' ;
declare dataDefn varchar(80) default ' ' ;
declare vNotNull varchar(80) default '' ;
declare vDataDefn varchar(80) default '' ;
declare vDefault varchar(256) default '' ;
declare errmsg char(256) default ' ' ;
DECLARE CONTINUE HANDLER FOR SQLEXCEPTION
begin
SET vSqlCode = SQLCODE ;
SET vSqlState = SQLstate ;
get diagnostics exception 1 vErrText = message_text ;
end ;
/* check for add column already exist. */
select a.length
into vLength
from qsys2/syscolumns a
where a.table_schema = inSchema and a.table_name = inTableName
and a.column_name = inColumnName ;
if sqlcode = 0 then
return trim(inColumnName) || ' exists' ;
end if ;
/* read data defn of like column. */
select a.data_type,
decimal(a.length,5,0) length,
decimal(coalesce(numeric_scale,0),3,0) dprc,
a.is_nullable, a.has_default,
char(a.column_default,50) column_default, a.column_heading
into vDataType, vLength, vDprc,
vNullable, vHasDefault,
vColumnDefault, vColumnHeading
from qsys2/syscolumns a
where a.table_schema = inLikeSchema and a.table_name = inLikeTable
and a.column_name = inLikeColumn ;
/* data defn of the column. */
if vDataType = 'CHAR' or vDataType = 'VARCHAR' then
set vDataDefn = trim(vDataType) || '(' || trim(char(vLength)) || ')' ;
elseif vDataType in ('DATE','TIME','TIMESTAMP') then
set vDataDefn = trim(vDataType) ;
else
set vDataDefn = trim(vDataType) || '(' || trim(char(vLength)) ||
',' || trim(char(vDprc)) || ')' ;
end if;
/* is nullable. */
if vNullable = 'N' then
set vNotNull = 'not null ' ;
else
set vNotNull = '' ;
end if ;
/* default value. */
if vHasDefault = 'Y' then
set vDefault = 'default ' || trim(vColumnDefault) ;
end if ;
set vStmt = 'alter table ' || trim(inSchema) || '/' ||
trim(inTableName) || ' ' ||
'add column ' || trim(inColumnName) || ' ' ||
vDataDefn || ' ' || vNotNull ||
' ' || vDefault ;
prepare s1 from vStmt ;
execute s1 ;
return trim(inColumnName) || ' added' ;
END
To use the function, start with a new table:
create table qgpl/steve18 (
CREATETIME timestamp default current_timestamp )
Run the select * from syscolumns query, specifying the alterTable_addColumn_likeColumn function as one of the select columns. This runs the function against every row selected from the SYSCOLUMNS table; the end result is to have columns added to the definition of the target table.
select char(a.column_name,20) colname, a.data_type,
decimal(a.length,5,0) length,
decimal(coalesce(numeric_scale,0),3,0) dprc,
a.system_column_name,
altertable_addColumn_likeColumn( 'QGPL','STEVE18',
a.column_name, a.table_schema, a.table_name,
a.column_name) addcol
from qsys2/syscolumns a
where a.table_schema = 'MYLIB' and a.table_name = 'CUSMS'

Related

How to find a value in all tables on firebird?

How to search for a value in all tables on Firebird?
Knowing a value, I need to find all tables, columns in which it occurs.
Can someone help me, please? I have no clue where to start.
I'm using Firebird 3.0.
There is no built-in way to do that; you will need to explicitly query all (relevant) columns of all tables to achieve this. Taking inspiration from the non-working code by kamil in Find tables, columns with specific value, you could do something like:
execute block
returns (
table_name varchar(63),
column_name varchar(63))
as
declare search_value varchar(30) = 'John';
declare has_result boolean;
begin
for select trim(r.rdb$relation_name), trim(f.rdb$field_name)
from rdb$relation_fields f
join rdb$relations r on f.rdb$relation_name = r.rdb$relation_name
and r.rdb$view_blr is null
and (r.rdb$system_flag is null or r.rdb$system_flag = 0)
order by r.rdb$relation_name, f.rdb$field_position
into :table_name, :column_name
do
begin
execute statement ('select exists(select * from "' || table_name || '" where "' || column_name || '" = ?) from rdb$database') (search_value)
into has_result;
if (has_result) then
suspend;
when any do
begin
/* value not comparable with varchar, skip */
end
end
end
This identifies which table + column is equal to search_value (but you can of course use a different condition than =, e.g. containing ? if you want to check for columns that contain search_value).
The above could be further refined by only selecting columns of an appropriate type, etc. And of course, the varchar(30) might not be suitable or sufficient for all situations.
You could also change this to a stored procedure, e.g. by changing the header to
create procedure search_all(search_value varchar(30))
returns (
table_name varchar(63),
column_name varchar(63))
as
declare has_result boolean;
begin
-- ... rest of code above
You can then execute it with:
select * from search_all('John')
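The same loop-over-the-catalog idea isn't Firebird-specific. Below is a rough Python illustration using SQLite's sqlite_master in place of RDB$RELATIONS, purely to show the shape of the algorithm; the table and data are invented:

```python
import sqlite3

def search_all(conn, value):
    """Yield (table, column) pairs where some row's column equals value."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for tbl in tables:
        for row in conn.execute(f"PRAGMA table_info({tbl})"):
            col = row[1]
            try:
                hit = conn.execute(
                    f'SELECT EXISTS(SELECT * FROM "{tbl}" WHERE "{col}" = ?)',
                    (value,)).fetchone()[0]
            except sqlite3.Error:
                continue  # value not comparable with this column, skip
            if hit:
                yield (tbl, col)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (first TEXT, last TEXT)")
conn.execute("INSERT INTO people VALUES ('John', 'Doe')")
hits = list(search_all(conn, 'John'))
print(hits)
```

As with the Firebird version, this runs one probe query per column, so it is only suitable for occasional use, not production code paths.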

Need an SQL script to update a new column with the values concatenated from 3 columns of the same table

I need to prepare an SQL script to be given to my production support team to run in production. We have created a new column in the DB2 table. This column should be filled by concatenating 3 column values from the same row of the same table.
To give some history: all the reason text entered in the front end is inserted into the request table by unstringing it into 3 columns. Since those had limited length, we created a new column with increased length, and going forward all inserts will go into the new column. But we need to move all the existing data in the old 3 columns to this new one. So this SQL update is just a one-time exercise.
Table: tab_request
We added the new column to have an increased character length and to align with other table nomenclature. Now I need the script to update the column reasontext as below.
update ... set reasontext = field1 || ... should be your DML script. Use the coalesce() function to convert those NULL values to ''.
Update table1
set reasontext =
(coalesce(reason_1, '') || ' ' || coalesce(reason_2,'') || ' ' || coalesce(reason_3,''))
update tab_request set reasontext=CONCAT(reason_1,' ',reason_2,' ',reason_3)
If you want to avoid unnecessary spaces -- in the event that some reasons are NULL -- then you can use trim() with coalesce():
update table1
set reasontext = trim(coalesce(' ' || reason_1, '') ||
coalesce(' ' || reason_2, '') ||
coalesce(' ' || reason_3, '')
);
This is equivalent to concat_ws() available in some databases.
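The NULL-handling difference is easy to check. This is a small illustrative demo in Python/SQLite (SQLite happens to share the || concatenation and coalesce()/trim() behavior used here; the table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab_request "
             "(reason_1, reason_2, reason_3, reasontext)")
conn.execute("INSERT INTO tab_request VALUES ('late', NULL, 'approved', NULL)")

# ' ' || NULL yields NULL, so coalesce() drops that whole piece,
# and the outer trim() removes the leading space of the first kept piece.
conn.execute("""
    UPDATE tab_request
       SET reasontext = trim(coalesce(' ' || reason_1, '') ||
                             coalesce(' ' || reason_2, '') ||
                             coalesce(' ' || reason_3, ''))
""")
result = conn.execute("SELECT reasontext FROM tab_request").fetchone()[0]
print(result)  # late approved
```

With a NULL middle reason this yields 'late approved' with a single space, whereas joining all three pieces with fixed ' ' separators would leave a double space.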

Dynamic SQL where condition with values from another table

I want to build a dynamic SQL query where I can use data from another table as where condition. Let's assume I have two tables: one table with financial data and the other one with conditions. They look something like this:
Table sales
c006 mesocomp c048 c020 c021
----- ---------- ------- ----- ----
120 01TA MICROSOFT 2 239
and a condition table with the following data:
dimension operator wert_db
--------- -------- -------
sales.c006 < 700
sales.c048 not like 'MIC%'
sales.c021 in (203,206)
I want to select all data from sales with the conditions stated in the condition table. So I have an SQL Query as follows:
SELECT *
FROM sales
WHERE sales.c006 < 700
AND sales.c048 NOT LIKE 'MIC%'
AND sales.c021 IN (203, 206)
Since you've posted no attempt to solve or research this yourself, I'll point you in a direction to get you started.
Your question already mentions using Dynamic SQL, so I assume you know at least what that is. You're going to populate a string variable, starting with 'SELECT * FROM Sales '.
You can use the STUFF...FOR XML PATH technique to assemble the conditions rows into a WHERE clause.
One change to the linked example is that you'll need to concatenate dimension, operator and wert_db into one artificial column in the innermost SELECT. Also instead of separating with a comma, you'll separate with ' AND '. And change the parameters of the STUFF function to take off the length of ' AND ' instead of the length of a comma.
DECLARE @tblSales TABLE
(
c006 VARCHAR(10),
mesocomp VARCHAR(100),
c048 VARCHAR(100),
c020 VARCHAR(100),
c021 VARCHAR(100)
)
INSERT INTO @tblSales(c006, mesocomp, c048, c020, c021)
VALUES(120,'01Ta','Microsoft','2','239')
SELECT * FROM @tblSales
DECLARE @tblCondition TABLE
(
Id INT,
dimension VARCHAR(100),
operator VARCHAR(10),
wert_db VARCHAR(100)
)
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c006','<','700')
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c048','not like','''MIC%''')
INSERT INTO @tblCondition(Id, dimension, operator, wert_db) VALUES(1,'sales.c021','in','(203,206)')
DECLARE @whereCondition VARCHAR(400)
SELECT @whereCondition = COALESCE(@whereCondition + ' ', '') + dimension + ' ' + operator + ' ' + wert_db + ' AND '
FROM @tblCondition
SET @whereCondition = SUBSTRING(@whereCondition,0, LEN(@whereCondition) - 3)
PRINT @whereCondition
DECLARE @sql VARCHAR(4000)
SET @sql = 'SELECT * FROM @tblSales Where ' + @whereCondition
PRINT @sql
EXEC(@sql)
--please use real tables so you will get everything working.
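The assemble-a-WHERE-clause idea isn't tied to SQL Server; it can be scripted in any host language. Below is a hedged Python/SQLite sketch using the question's tables. Note that the sample row is actually excluded by its own conditions, since 'MICROSOFT' matches LIKE 'MIC%':

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (c006 INT, mesocomp TEXT, c048 TEXT, c020 INT, c021 INT);
INSERT INTO sales VALUES (120, '01TA', 'MICROSOFT', 2, 239);
CREATE TABLE condition (dimension TEXT, operator TEXT, wert_db TEXT);
INSERT INTO condition VALUES
  ('sales.c006', '<',        '700'),
  ('sales.c048', 'not like', '''MIC%'''),
  ('sales.c021', 'in',       '(203,206)');
""")

# Glue each condition row into "dimension operator value", joined by AND.
where = " AND ".join(
    f"{dim} {op} {val}"
    for dim, op, val in conn.execute(
        "SELECT dimension, operator, wert_db FROM condition"))
sql = f"SELECT * FROM sales WHERE {where}"
print(sql)
rows = conn.execute(sql).fetchall()
```

Since the operator and value come straight from a table, this is wide open to SQL injection; only use it where the condition table is fully trusted.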

Update single column found in multiple tables

I have the same column in multiple tables in my database. I need to update every table that contains that column where the value is equal to 'xxxx'. There's a very similar stack question here which is close to what I'm looking for - I just need to add another condition in my WHERE statement. I'm not sure how to include it in the query as I keep getting syntax errors.
SELECT 'UPDATE ' + TABLE_NAME + ' SET customer= ''NewCustomerValue'' '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'customer'
The part I'm having problems with is how to include the below line in the 'WHERE' statement.
AND customer='xxxx'
Try like this
SELECT 'UPDATE ' + TABLE_NAME + ' SET customer= ''NewCustomerValue'' where customer=''xxxx'''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'customer'
try this:
' AND customer=''xxxx'' ' --(two ' inside a string = ')
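The generate-one-UPDATE-per-table pattern can be sketched generically. This Python/SQLite illustration enumerates tables via sqlite_master (standing in for INFORMATION_SCHEMA.COLUMNS); the table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders   (id INT, customer TEXT);
CREATE TABLE invoices (id INT, customer TEXT);
CREATE TABLE audit    (id INT, note TEXT);
""")

stmts = []
for (tbl,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({tbl})")]
    if 'customer' in cols:
        # Both the SET and the extra WHERE filter go into the generated text.
        stmts.append(f"UPDATE {tbl} SET customer = 'NewCustomerValue' "
                     f"WHERE customer = 'xxxx'")
for s in stmts:
    print(s)
```

As in the answers above, the output is a batch of statements to review and then run, not something to execute blindly.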

SELECT * EXCEPT

Is there any RDBMS that implements something like SELECT * EXCEPT? What I'm after is getting all of the fields except a specific TEXT/BLOB field, and I'd like to just select everything else.
Almost daily I complain to my coworkers that someone should implement this... It's terribly annoying that it doesn't exist.
Edit: I understand everyone's concern for SELECT *. I know the risks associated with SELECT *. However, this, at least in my situation, would not be used for any Production level code, or even Development level code; strictly for debugging, when I need to see all of the values easily.
As I've stated in some of the comments, where I work is strictly a command-line shop, doing everything over ssh. This makes it difficult to use any GUI tools (external connections to the database aren't allowed), etc.
Thanks for the suggestions though.
As others have said, it is not a good idea to do this in a query because it is prone to issues when someone changes the table structure in the future. However, there is a way to do this... and I can't believe I'm actually suggesting this, but in the spirit of answering the ACTUAL question...
Do it with dynamic SQL... this does all the columns except the "description" column. You could easily turn this into a function or stored proc.
declare @sql varchar(8000),
@table_id int,
@col_id int
set @sql = 'select '
select @table_id = id from sysobjects where name = 'MY_Table'
select @col_id = min(colid) from syscolumns where id = @table_id and name <> 'description'
while (@col_id is not null) begin
select @sql = @sql + name from syscolumns where id = @table_id and colid = @col_id
select @col_id = min(colid) from syscolumns where id = @table_id and colid > @col_id and name <> 'description'
if (@col_id is not null) set @sql = @sql + ','
print @sql
end
set @sql = @sql + ' from MY_table'
exec (@sql)
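The dynamic-SQL idea isn't SQL Server-specific; any catalog works. Here is a minimal Python/SQLite sketch of the same "all columns except one" trick, with PRAGMA table_info standing in for syscolumns and the names invented:

```python
import sqlite3

def select_all_except(conn, table, excluded):
    """Build 'SELECT <all columns except excluded> FROM table' from the catalog."""
    cols = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")
            if r[1] not in excluded]
    return f"SELECT {', '.join(cols)} FROM {table}"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE MY_Table (id INTEGER, name TEXT, description TEXT)")
print(select_all_except(conn, "MY_Table", {"description"}))
```

Like the T-SQL version, this re-reads the catalog each time, so it stays correct when columns are added later, which is exactly the fragility that hand-listed columns avoid at the cost of typing.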
Create a view on the table which doesn't include the blob columns
Is there any RDBMS that implements something like SELECT * EXCEPT?
Yes, Google BigQuery implements SELECT * EXCEPT:
A SELECT * EXCEPT statement specifies the names of one or more columns to exclude from the result. All matching column names are omitted from the output.
WITH orders AS(
SELECT 5 as order_id,
"sprocket" as item_name,
200 as quantity
)
SELECT * EXCEPT (order_id)
FROM orders;
Output:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| sprocket | 200 |
+-----------+----------+
EDIT:
H2 database also supports SELECT * EXCEPT (col1, col2, ...) syntax.
Wildcard expression
A wildcard expression in a SELECT statement. A wildcard expression represents all visible columns. Some columns can be excluded with optional EXCEPT clause.
EDIT 2:
Hive supports: REGEX Column Specification
A SELECT statement can take regex-based column specification in Hive releases prior to 0.13.0, or in 0.13.0 and later releases if the configuration property hive.support.quoted.identifiers is set to none.
The following query selects all columns except ds and hr.
SELECT `(ds|hr)?+.+` FROM sales
EDIT 3:
Snowflake also now supports this via SELECT * EXCLUDE (and a RENAME option equivalent to REPLACE in BigQuery):
EXCLUDE col_name EXCLUDE (col_name, col_name, ...)
When you select all columns (SELECT *), specifies the columns that should be excluded from the results.
RENAME col_name AS col_alias RENAME (col_name AS col_alias, col_name AS col_alias, ...)
When you select all columns (SELECT *), specifies the column aliases that should be used in the results.
and so does Databricks SQL (since Runtime 11.0)
star_clause
[ { table_name | view_name } . ] * [ except_clause ]
except_clause
EXCEPT ( { column_name | field_name } [, ...] )
and also DuckDB
-- select all columns except the city column from the addresses table
SELECT * EXCLUDE (city) FROM addresses;
-- select all columns from the addresses table, but replace city with LOWER(city)
SELECT * REPLACE (LOWER(city) AS city) FROM addresses;
-- select all columns matching the given regex from the table
SELECT COLUMNS('number\d+') FROM addresses;
DB2 allows for this. Columns have an attribute/specifier of Hidden.
From the syscolumns documentation
HIDDEN
CHAR(1) NOT NULL WITH DEFAULT 'N'
Indicates whether the column is implicitly hidden:
P Partially hidden. The column is implicitly hidden from SELECT *.
N Not hidden. The column is visible to all SQL statements.
See the CREATE TABLE documentation: as part of creating your column, you would specify the IMPLICITLY HIDDEN modifier.
An example DDL from Implicitly Hidden Columns follows
CREATE TABLE T1
(C1 SMALLINT NOT NULL,
C2 CHAR(10) IMPLICITLY HIDDEN,
C3 TIMESTAMP)
IN DB.TS;
Whether this capability is such a deal maker to drive the adoption of DB2 is left as an exercise to future readers.
Is there any RDBMS that implements something like SELECT * EXCEPT
Yes! The truly relational language Tutorial D allows projection to be expressed in terms of the attributes to be removed instead of the ones to be kept e.g.
my_relvar { ALL BUT description }
In fact, its equivalent to SQL's SELECT * is { ALL BUT }.
Your proposal for SQL is a worthy one, but I heard it has already been put to the SQL standards committee by the users' group and rejected by the vendors' group :(
It has also been explicitly requested for SQL Server, but the request was closed as 'won't fix'.
Yes, finally there is :) SQL Standard 2016 defines Polymorphic Table Functions
SQL:2016 introduces polymorphic table functions (PTF) that don't need to specify the result type upfront. Instead, they can provide a describe component procedure that determines the return type at run time. Neither the author of the PTF nor the user of the PTF need to declare the returned columns in advance.
PTFs as described by SQL:2016 are not yet available in any tested database. Interested readers may refer to the free technical report "Polymorphic table functions in SQL" released by ISO. The following are some of the examples discussed in the report:
CSVreader, which reads the header line of a CSV file to determine the number and names of the return columns
Pivot (actually unpivot), which turns column groups into rows (example: phonetype, phonenumber) -- me: no more hardcoded strings :)
TopNplus, which passes through N rows per partition and one extra row with the totals of the remaining rows
Oracle 18c implements this mechanism; see the Skip_col Polymorphic Table Function example on Oracle Live SQL.
This example shows how to skip data based on name/specific datatype:
CREATE PACKAGE skip_col_pkg AS
-- OVERLOAD 1: Skip by name
FUNCTION skip_col(tab TABLE, col columns)
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t;
-- OVERLOAD 2: Skip by type --
FUNCTION skip_col(tab TABLE,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN TABLE PIPELINED ROW POLYMORPHIC USING skip_col_pkg;
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t;
END skip_col_pkg;
and body:
CREATE PACKAGE BODY skip_col_pkg AS
/* OVERLOAD 1: Skip by name
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_name
*
* PARAMETERS:
* tab - The input table
* col - The name of the columns to drop from the output
*
* DESCRIPTION:
* This PTF removes all the input columns listed in col from the output
* of the PTF.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
col dbms_tf.columns_t)
RETURN dbms_tf.describe_t
AS
new_cols dbms_tf.columns_new_t;
col_id PLS_INTEGER := 1;
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
FOR j IN 1 .. col.count() LOOP
tab.column(i).pass_through := tab.column(i).description.name != col(j);
EXIT WHEN NOT tab.column(i).pass_through;
END LOOP;
END LOOP;
RETURN NULL;
END;
/* OVERLOAD 2: Skip by type
* NAME: skip_col_pkg.skip_col
* ALIAS: skip_col_by_type
*
* PARAMETERS:
* tab - Input table
* type_name - A string representing the type of columns to skip
* flip - 'False' [default] => Match columns with given type_name
* otherwise => Ignore columns with given type_name
*
* DESCRIPTION:
* This PTF removes the given type of columns from the given table.
*/
FUNCTION describe(tab IN OUT dbms_tf.table_t,
type_name VARCHAR2,
flip VARCHAR2 DEFAULT 'False')
RETURN dbms_tf.describe_t
AS
typ CONSTANT VARCHAR2(1024) := upper(trim(type_name));
BEGIN
FOR i IN 1 .. tab.column.count() LOOP
tab.column(i).pass_through :=
CASE upper(substr(flip,1,1))
WHEN 'F' THEN dbms_tf.column_type_name(tab.column(i).description)
!=typ
ELSE dbms_tf.column_type_name(tab.column(i).description)
=typ
END /* case */;
END LOOP;
RETURN NULL;
END;
END skip_col_pkg;
And sample usage:
-- skip number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number');
-- only number cols
SELECT * FROM skip_col_pkg.skip_col(scott.dept, 'number', flip => 'True')
-- skip defined columns
SELECT *
FROM skip_col_pkg.skip_col(scott.emp, columns(comm, hiredate, mgr))
WHERE deptno = 20;
I highly recommend reading the entire example (creating standalone functions instead of package calls).
You could easily overload the skip method, for example: skip columns that do not start/end with a specific prefix/suffix.
db<>fiddle demo
Related: How to Dynamically Change the Columns in a SQL Query By Chris Saxon
Stay away from SELECT *, you are setting yourself for trouble. Always specify exactly which columns you want. It is in fact quite refreshing that the "feature" you are asking for doesn't exist.
I believe the rationale for it not existing is that the author of a query should (for performance sake) only request what they're going to look at/need (and therefore know what columns to specify) -- if someone adds a couple more blobs in the future, you'd be pulling back potentially large fields you're not going to need.
Temp table option here, just drop the columns not required and select * from the altered temp table.
/* Get the data into a temp table */
SELECT * INTO #TempTable
FROM
table
/* Drop the columns that are not needed */
ALTER TABLE #TempTable
DROP COLUMN [columnname]
SELECT * from #TempTable
declare @sql nvarchar(max),
@table char(10)
set @sql = 'select '
set @table = 'table_name'
SELECT @sql = @sql + '[' + COLUMN_NAME + '],'
FROM INFORMATION_SCHEMA.Columns
WHERE TABLE_NAME = @table
and COLUMN_NAME <> 'omitted_column_name'
SET @sql = substring(@sql,1,len(@sql)-1) + ' from ' + @table
EXEC (@sql);
I needed something like what @Glen asks for, to ease my life with HASHBYTES().
My inspiration was @Jasmine's and @Zerubbabel's answers. In my case I have different schemas, so the same table name appears more than once in sys.objects. As this may help someone with the same scenario, here it goes:
ALTER PROCEDURE [dbo].[_getLineExceptCol]
@table SYSNAME,
@schema SYSNAME,
@LineId int,
@exception VARCHAR(500)
AS
DECLARE @SQL NVARCHAR(MAX)
BEGIN
SET NOCOUNT ON;
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name
FROM sys.columns
WHERE name <> @exception
AND object_id = (SELECT object_id FROM sys.objects
WHERE name LIKE @table
AND schema_id = (SELECT schema_id FROM sys.schemas WHERE name LIKE @schema))
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @schema + '.' + @table + ' WHERE Id = ' + CAST(@LineId AS nvarchar(50))
EXEC(@SQL)
END
GO
It's an old question, but I hope this answer can still be helpful to others. It can also be modified to exclude more than one field. This can be very handy if you want to unpivot a table with many columns.
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name FROM sys.columns WHERE name <> 'colName' AND object_id = (SELECT id FROM sysobjects WHERE name = 'tblName')
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + 'tblName'
EXEC sp_executesql @SQL
Stored Procedure:
usp_SelectAllExcept 'tblname', 'colname'
ALTER PROCEDURE [dbo].[usp_SelectAllExcept]
(
@tblName SYSNAME
,@exception VARCHAR(500)
)
AS
DECLARE @SQL NVARCHAR(MAX)
SELECT @SQL = COALESCE(@SQL + ', ', ' ' ) + name from sys.columns where name <> @exception and object_id = (Select id from sysobjects where name = @tblName)
SELECT @SQL = 'SELECT ' + @SQL + ' FROM ' + @tblName
EXEC sp_executesql @SQL
For the sake of completeness, this is possible in DremelSQL dialect, doing something like:
WITH orders AS
(SELECT 5 as order_id,
"foobar12" as item_name,
800 as quantity)
SELECT * EXCEPT (order_id)
FROM orders;
Result:
+-----------+----------+
| item_name | quantity |
+-----------+----------+
| foobar12 | 800 |
+-----------+----------+
There also seems to be another way to do it here without Dremel.
Your question was about what RDBMS supports the * EXCEPT (...) syntax, so perhaps looking at the jOOQ manual page for * EXCEPT can be useful in the future, as that page will keep track of new dialects supporting the syntax.
Currently (mid 2022), among the jOOQ supported RDBMS, at least BigQuery, H2, and Snowflake support the syntax natively. The others need to emulate it by listing the columns explicitly:
-- ACCESS, ASE, AURORA_MYSQL, AURORA_POSTGRES, COCKROACHDB, DB2, DERBY, EXASOL,
-- FIREBIRD, HANA, HSQLDB, INFORMIX, MARIADB, MEMSQL, MYSQL, ORACLE, POSTGRES,
-- REDSHIFT, SQLDATAWAREHOUSE, SQLITE, SQLSERVER, SYBASE, TERADATA, VERTICA,
-- YUGABYTEDB
SELECT LANGUAGE.CD, LANGUAGE.DESCRIPTION
FROM LANGUAGE
-- BIGQUERY, H2
SELECT * EXCEPT (ID)
FROM LANGUAGE
-- SNOWFLAKE
SELECT * EXCLUDE (ID)
FROM LANGUAGE
Disclaimer: I work for the company behind jOOQ
As others are saying: SELECT * is a bad idea.
Some reasons:
Get only what you need (anything more is a waste).
Indexing: index what you need and you can get it more quickly. If you ask for a bunch of non-indexed columns too, your query plans will suffer.