PostgreSQL: change order of columns in query - sql

I have a huge query with about 30 columns.
I ordered the query with:
Select *
From
.
.
.
order by id,status
Now I want the result to present the columns in a certain way: the id column first, followed by the status column, and then all the rest.
Is there a way to do that without manually listing all 30 column names in the select? Something like: Select id, status, REST

This generates a statement that selects every column except the ones you exclude; copy the result and run it:
SELECT 'SELECT id, status, ' || array_to_string(ARRAY(
    SELECT 'o.' || c.column_name
    FROM information_schema.columns AS c
    WHERE c.table_name = 'officepark'
      AND c.column_name NOT IN ('id', 'status')
), ', ') || ' FROM officepark AS o' AS sqlstmt;
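For a hypothetical table officepark(id, status, name, zip), the generated string would look along these lines:

```sql
-- Sketch of the generated statement (column names are illustrative);
-- id and status come first, the remaining columns follow.
SELECT id, status, o.name, o.zip FROM officepark AS o
```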

A "select *" returns the columns in the order in which they were declared when the table was created. If you want them returned in a particular order, create the table with that order.

If you have to do it repeatedly, you could create a new table:
CREATE TABLE foo AS
SELECT id, status, mydesiredorder
Or just a view. Don't forget to move the indexes, constraints, and foreign keys. If you only need to do it once, specifying the 30 columns would have been faster than asking here.
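For the repeated-use case, a view fixes the column order once without copying any data; a minimal sketch (the table name and the columns after status are illustrative):

```sql
-- Sketch: list the columns once, in the desired order, in a view.
-- "officepark" and the trailing columns are hypothetical names.
CREATE VIEW officepark_ordered AS
SELECT id, status, name, city, zip  -- ...and the remaining columns
FROM officepark;

-- Callers then get the desired column order automatically:
-- SELECT * FROM officepark_ordered ORDER BY id, status;
```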

Related

Required to create an empty table by joining multiple tables in Oracle DB

I got an error while creating an empty table by joining two tables.
I have two tables, tab1 and tab2, and there are many common column names in both tables.
I tried this:
create table tab3 as
select * from tab1 a, tab2 b where a.id = b.id and 1=2;
This gave ORA-00957: duplicate column name. As I mentioned above, there are many common column names between these two tables. Preparing a create table statement by writing out around 500 column names one by one would take a lot of time. Please help me out here.
The simple answer is, don't use *. Or is that the whole point, to avoid writing five lines of column names?
One way to avoid these conflicts, but that assumes that you are joining on all columns with the same name in both tables and on no other columns, is to do something like
create table new_table as
select *
from table_a natural join table_b
where null is not null
;
(As you can guess, as an aside, I prefer null is not null to 1 = 2; the parser seems to prefer it too, as it will rewrite 1 = 2 as null is not null anyway.)
Will you need to control the order of the columns in the new table? If you do, you will need to write them out completely in the select clause, regardless of which join syntax you choose to use.
That's an interesting question.
The only idea I have to offer is to let another query compose the query you need:
select 'select ' || listagg(col_name, ', ') within group (order by 1)
       || ' from tab1 a, tab2 b where a.id = b.id and 1=2'
from (select 'a.' || column_name col_name from user_tab_cols where table_name = 'TAB1'
      union all
      select 'b.' || column_name from user_tab_cols where table_name = 'TAB2')
Be aware that in the subqueries you need to specify the table names in upper case, since that is how Oracle stores them in the dictionary views.
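Note that the composed statement would still hit ORA-00957, since a.id and b.id keep the same column name. One way to sidestep that is to alias every column with a table prefix while generating the list; a rough sketch (table names as in the question):

```sql
-- Sketch: prefix each generated column with an alias so the
-- created table ends up with no duplicate column names.
select 'create table tab3 as select '
       || listagg(col_expr, ', ') within group (order by col_expr)
       || ' from tab1 a, tab2 b where a.id = b.id and 1 = 2'
from (select 'a.' || column_name || ' a_' || column_name as col_expr
      from user_tab_cols where table_name = 'TAB1'
      union all
      select 'b.' || column_name || ' b_' || column_name
      from user_tab_cols where table_name = 'TAB2');
```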

Is there a way to execute a query on a database schema instead of a table

Thanks for reading my post. In our organisation we use an IBM DB2 database with multiple schemas, each of which has its own tables, procedures, views, etc. We would like to find a quick way to query one of these schemas based on the 'changed_by' field, which exists in every table of the schema.
One of our users had write access on our database. We want an overview of exactly which tables he has updated in the past days. It is too much work to query every table of the schema individually.
The schema name is S_ORDER_SUMM, the schema contains 182 tables.
Something like this is what we need:
select (ALL TABLES) from S_ORDER_SUMM
where CHANGED_BY = 'Our_User'
Any help would be highly appreciated.
SELECT
-- 'UNION ALL ' ||
'SELECT ''' || T.TABNAME || ''' FROM SYSIBM.SYSDUMMY1 '
||'WHERE EXISTS (SELECT 1 FROM "' || T.TABSCHEMA || '"."' || T.TABNAME || '" '
||'WHERE CHANGE_DATE > CURRENT TIMESTAMP - 2 DAY AND CHANGED_BY=''Our_User'')'
FROM SYSCAT.TABLES T
JOIN SYSCAT.COLUMNS C ON C.TABSCHEMA=T.TABSCHEMA AND C.TABNAME=T.TABNAME
WHERE T.TABSCHEMA='S_ORDER_SUMM' AND T.TYPE='T'
AND C.COLNAME IN ('CHANGE_DATE', 'CHANGED_BY')
GROUP BY T.TABSCHEMA, T.TABNAME
HAVING COUNT(1)=2;
The query above returns a list of SELECT statements on every table of schema S_ORDER_SUMM containing both CHANGE_DATE and CHANGED_BY columns.
It's a series of statements like the following (one statement per line in reality; I've formatted this one just for demo):
SELECT 'MYTABLE'
FROM SYSIBM.SYSDUMMY1
WHERE EXISTS
(
SELECT 1
FROM "S_ORDER_SUMM"."MYTABLE"
WHERE CHANGE_DATE > CURRENT TIMESTAMP - 2 DAY AND CHANGED_BY='Our_User'
)
If you save the output to a file, for example, you can run it as a script afterwards.
You may also generate a single statement for all tables: uncomment the commented-out line and wrap the output into a final SELECT statement manually.
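With the UNION ALL line uncommented, the combined output (after stripping the leading UNION ALL from the first generated line) runs as one statement, roughly like this (table names are illustrative):

```sql
-- Sketch of the single combined statement over two hypothetical tables.
SELECT 'TABLE_A' FROM SYSIBM.SYSDUMMY1
WHERE EXISTS (SELECT 1 FROM "S_ORDER_SUMM"."TABLE_A"
              WHERE CHANGE_DATE > CURRENT TIMESTAMP - 2 DAY
                AND CHANGED_BY = 'Our_User')
UNION ALL
SELECT 'TABLE_B' FROM SYSIBM.SYSDUMMY1
WHERE EXISTS (SELECT 1 FROM "S_ORDER_SUMM"."TABLE_B"
              WHERE CHANGE_DATE > CURRENT TIMESTAMP - 2 DAY
                AND CHANGED_BY = 'Our_User');
```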

How to get several records searching on the whole database

My question is: is it possible to search all the columns of the whole database, not just specific tables, using three criteria, "Criteria1; Criteria2; Criteria3", combined with OR rather than AND? For example, in a database called "Bank", if any one of the criteria matches, I want back all the columns matching it, and the output should also provide the "account_id" or "customer_id" from the same table.
How do I proceed in this case?
It is possible, but you probably don't want to do it. Anyway, you could write a stored procedure that finds all tables that contain the columns you want:
select distinct table_name from user_tab_cols utc
where exists (select * from user_tab_cols where table_name = utc.table_name
and column_name = 'ACCOUNT_ID')
and exists (select * from user_tab_cols where table_name = utc.table_name
and column_name = 'CUSTOMER_ID');
Given the tables you could run a query where you append table name and your criteria:
execute immediate 'select account_id, customer_id from agreement where '
|| your_criteria_here;
A bit messy and inefficient, and you should treat this as pseudo-code. However, if you really want to do this for an ad-hoc query, it should point you in the right direction!
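The two steps can be combined in a PL/SQL block that loops over the matching tables and probes each one with dynamic SQL; a rough sketch (the criteria and the idea of counting matches are placeholders, not part of the original answer):

```sql
-- Sketch: loop over tables that have both columns and probe each one.
-- "criteria1 or criteria2 or criteria3" is a placeholder predicate.
declare
  v_count number;
begin
  for t in (select distinct utc.table_name
            from user_tab_cols utc
            where exists (select * from user_tab_cols
                          where table_name = utc.table_name
                            and column_name = 'ACCOUNT_ID')
              and exists (select * from user_tab_cols
                          where table_name = utc.table_name
                            and column_name = 'CUSTOMER_ID'))
  loop
    execute immediate
      'select count(*) from ' || t.table_name ||
      ' where criteria1 or criteria2 or criteria3'  -- placeholder criteria
      into v_count;
    if v_count > 0 then
      dbms_output.put_line(t.table_name || ': ' || v_count || ' matching rows');
    end if;
  end loop;
end;
/
```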

concat all columns in one row

Hi, I need to concatenate all the columns of my table.
I have this query: select * from table1; this table contains 400 fields.
I cannot write select column1 ||','|| column2 ||','|| ... from table1 by hand.
Can someone help me concatenate all the columns, starting from select * from table1?
Thank you.
In Oracle (and similarly in other DBMSs) you could use the system tables and do this in two steps.
Assuming you want to combine all the columns into 1 column for X rows...
STEP 1:
SELECT LISTAGG(column_Name, '|| Chr(44)||') --this char(44) adds a comma
within group (order by column_ID) as Fields
--Order by column_Id ensures they are in the same order as defined in db.
FROM all_tab_Cols
WHERE table_name = 'YOURTABLE'
and owner = 'YOUROWNER'
--Perhaps exclude system columns
and Virtual_Column = 'NO'
STEP 2:
Copy the results into a new SQL statement and execute.
That would look something like Field1|| Chr(44)||Field2|| Chr(44)||Field3
SELECT <results>
FROM YOURTABLE;
which results in a comma-separated list of values in 1 column for all rows of YOURTABLE.
If the length of all the columns (along with the commas, spaces and ||) would exceed the 4000 characters allowed, we can use a CLOB instead through the use of XML objects.
Replace step 1 with:
SELECT RTRIM(XMLAGG(XMLELEMENT(Column_ID,Column_Name,'|| Chr(44)||').extract('//text()') order by Column_ID).GetClobVal(),'|| Chr(44)||') fields
FROM all_tab_Cols
WHERE table_name = 'YOURTABLENAME'
and owner = 'YOUROWNER'
--Perhaps exclude system columns
and Virtual_Column = 'NO';
Syntax for the above is attributed to this Oracle thread, but updated for your needs.

excluding duplicate fields in a join

I have a dataset I'm doing analysis on. It turns out it can easily be enriched with demographic and community data which vastly improves the analytical results.
In order to do this I'm joining in demographic and community data before doing analysis. I need to exclude some fields from my core sample set, so my join looks something like this:
select sampledata.c1,
sampledata.c2,
demographics.*,
community.*
from sampledata
join demographics using (zip)
join community using (fips)
This gets me multiple zip or fips columns in the output which my analysis engine can't deal with. I can't specify each field by hand - the enrichment tables result in hundreds of columns in the end.
I could do select *, but then I'd have all the columns from my sample data which I don't want.
How can I join in my enrichment data without duplicating fields, whilst still selecting the columns I want from my sample table?
One thought I had, was if postgres (my database) could fully qualify each column in the output (like sample.c1, demographics.c1, etc) I would be perfectly happy with this.
There is no column exclusion syntax in SQL, there is only column inclusion syntax (via the * operator for all columns, or listing the column names explicitly).
Generate list of only columns you want
However, you could generate the SQL statement with its hundreds of column names, minus the few duplicate columns you do not want, using schema tables and some built-in functions of your database.
SELECT
'SELECT sampledata.c1, sampledata.c2, ' || ARRAY_TO_STRING(ARRAY(
SELECT 'demographics' || '.' || column_name
FROM information_schema.columns
WHERE table_name = 'demographics'
AND column_name NOT IN ('zip')
UNION ALL
SELECT 'community' || '.' || column_name
FROM information_schema.columns
WHERE table_name = 'community'
AND column_name NOT IN ('fips')
), ',') || ' FROM sampledata JOIN demographics USING (zip) JOIN community USING (fips)'
AS statement
This only prints the statement; it does not execute it. Copy the result and run it.
If you want to both generate and run the statement dynamically in one go, then you may read up on how to run dynamic SQL in the PostgreSQL documentation.
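As a sketch of that dynamic route, PL/pgSQL can build the statement and hand the rows back through a cursor; the function name and pattern below are illustrative, not part of the original answer:

```sql
-- Sketch: build the generated statement in PL/pgSQL and open a
-- refcursor for it, so the column list never has to be typed out.
CREATE OR REPLACE FUNCTION enriched_sample(INOUT c refcursor)
LANGUAGE plpgsql AS $$
DECLARE
  stmt text;
BEGIN
  SELECT 'SELECT sampledata.c1, sampledata.c2, ' || array_to_string(ARRAY(
           SELECT 'demographics.' || column_name
           FROM information_schema.columns
           WHERE table_name = 'demographics' AND column_name NOT IN ('zip')
           UNION ALL
           SELECT 'community.' || column_name
           FROM information_schema.columns
           WHERE table_name = 'community' AND column_name NOT IN ('fips')
         ), ', ')
         || ' FROM sampledata JOIN demographics USING (zip)'
         || ' JOIN community USING (fips)'
  INTO stmt;
  OPEN c FOR EXECUTE stmt;
END;
$$;

-- Usage (inside a transaction):
-- BEGIN;
-- SELECT enriched_sample('cur');
-- FETCH ALL FROM cur;
-- COMMIT;
```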
Prepend column names with table name
Alternately, this generates a select list of all the columns, including those with duplicate data, but then aliases them to include the table name of each column as well.
SELECT
'SELECT ' || ARRAY_TO_STRING(ARRAY(
SELECT table_name || '.' || column_name || ' AS ' || table_name || '_' || column_name
FROM information_schema.columns
WHERE table_name in ('sampledata', 'demographics', 'community')
), ',') || ' FROM sampledata JOIN demographics USING (zip) JOIN community USING (fips)'
AS statement
Again, this only generates the statement. If you want to both generate and run the statement dynamically, then you'll need to brush up on dynamic SQL execution for your database, otherwise just copy and run the result.
If you really want a dot separator in the column aliases, then you'll have to use double-quoted aliases such as SELECT table_name || '.' || column_name || ' AS "' || table_name || '.' || column_name || '"'. However, double-quoted aliases can cause extra complications (case-sensitivity, etc.), so I used the underscore character instead to separate the table name from the column name within the alias; the aliases can then be treated like regular column names everywhere else.