Aliasing multiple columns using an expression in Oracle SQL Developer - sql

In Oracle SQL Developer, is it possible to alias multiple column names as part of a SELECT statement using an expression (as opposed to manually specifying the aliases for each column)?
Specifically, I have a mapping table that stores the task-relevant subset of columns from a large data table. Each entry in the mapping table ties a data table column name to a human readable description. I want to select the data table columns listed in the mapping table and display them with the mapping table descriptions as the column headers, but WITHOUT manually typing in the column names and their human-readable aliases one-by-one. Is this possible?
The closest I've found to an answer online is this SO question which suggests what I want to do is NOT possible: Oracle rename columns from select automatically?
But, that question is from 2010. I'm hoping the situation has changed. Thank you for your help.

This still cannot be done with 100% native SQL. These overly-dynamic situations are usually best avoided; a little extra typing is generally better than adding complicated code.
If you truly have an exceptional case and are willing to pay the price, there is a way to do this. It doesn't use 100% native SQL, but it could be considered "pure" SQL since it uses the Oracle Data Cartridge framework to extend the database.
You can use my open source project Method4 to run dynamic SQL in SQL. Follow the Github steps to download and install the objects. The code is painfully complicated but luckily you won't need to understand most of it. Only the simple changes below are necessary to get started on customizing column names.
Method4 Changes
Create a variable to hold the new column name. Add it to the declaration section of the function ODCITableDescribe, on line 12 of the file METHOD4_OT.TPB.
v_new_column_name varchar2(32);
Create a SQL statement to map the old column to the new column. Add this to line 31, where it will be run for each column.
--Get mapped column name if it exists. If none, use the existing name.
select nvl(max(target_column_name), r_sql.description(i).col_name)
into v_new_column_name
from column_names
where source_column_name = r_sql.description(i).col_name;
Change line 42 to refer to the new variable name:
substr(v_new_column_name, 1, 30),
Mapping Table
drop table column_names;
create table column_names
(
source_column_name varchar2(30),
target_column_name varchar2(30),
constraint column_names_pk primary key(source_column_name)
);
insert into column_names values('A1234', 'BETTER_COLUMN_NAME');
insert into column_names values('B4321', 'Case sensitive column name.');
Query Example
Now the column names from any query can magically change to whatever values you want. And this doesn't simply use text replacement; the columns from a SELECT * will also change.
SQL> select * from table(method4.query('select 1 a1234, 2 b4321, 3 c from dual'));
BETTER_COLUMN_NAME Case sensitive column name.          C
------------------ --------------------------- ----------
                 1                           2          3
Warnings
Oracle SQL is horrendously complicated and any attempt to build a layer on top of it has many potential problems. For example, performance will certainly be slower. Although I've created many unit tests, I'm sure there are some weird data types that won't work correctly. (But if you find any, please create a GitHub issue so I can fix it.)
In my experience, when people ask for this type of dynamic behavior, it's usually not worth the cost. Sometimes a little extra typing is the best solution.
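If full query-time automation isn't strictly required, a lighter-weight middle ground (a sketch, not part of Method4) is to generate the aliased SELECT text from the mapping table and then run the generated statement yourself. The sketch below assumes the column_names table above and a hypothetical data table called data_table:
--Build the aliased SELECT text (col as "Human readable name", ...) from the mapping table.
select 'select '
       || listagg(source_column_name || ' as "' || target_column_name || '"', ', ')
          within group (order by source_column_name)
       || ' from data_table' as generated_sql
from column_names;
The generated text can then be copied into a SQL Developer worksheet and executed as a normal query.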

Related

How to copy data from one table to another with nested required fields in repeatable objects

I'm trying to copy data from one table to another. The schemas are identical except that the source table has fields as nullable when they were meant to be required. BigQuery is complaining that the fields are null. I'm 99% certain the issue is that in many entries the repeatable fields are absent, which causes no issues when inserting into the table using our normal process.
The table I'm copying from used to have the exact same schema, but accidentally lost the required fields when recreating the table with a different partitioning scheme.
From what I can tell, there is no way to change the fields from nullable to required in an existing table. It looks to me like you must create a new table then use a select query to copy data.
I tried enabling "Allow large results" and unchecking "flatten results" but I'm seeing the same issue. The write preference is "append to table".
(Note: see edit below as I am incorrect here - it is a data issue)
I tried building a query to better confirm my theory (and not just that the records exist but are null), but I'm struggling to build one. I can definitely see in the preview that having some of the repeated fields be null is a real use case, so I would presume that translates to the nested required fields also being null. We have a backup of the table from before it was converted to the new partitioning, and it has the same required schema as the table I'm trying to copy into. A simple select count(*) where this.nested.required.field is null in legacy SQL on the backup indicates that there are quite a few rows that fit this criterion.
SQL used to select for insert:
select * from my_table
Edit:
The query used when making the partition change on the table was also setting certain fields to a null value. It appears that somehow the select query created objects with all fields null rather than simply a null object. I used a conditional to set a nested object to either null or pick its existing value. Still investigating, but at this point I think what I'm attempting to do is normally supported, based on playing with some toy tables/queries.
When trying to copy from one table to another, and using SELECT AS STRUCT, run a null check like this:
IF(foo.bar IS NULL, NULL, (SELECT AS STRUCT foo.bar.* REPLACE(...)))
This prevents null nested structures from turning into structures full of null values.
To repair it via a select statement, use a conditional check against a value that is required, like this:
IF (bar.req is null, null, bar)
Of course a real query is more complicated than that. The good news is that the repair query should look similar to the original query that messed up the format.
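To make that concrete, here is a minimal sketch of such a repair query, assuming a hypothetical table broken_table with a nested record bar whose field req is meant to be required (all names are placeholders):
-- Hypothetical repair: collapse structs whose fields are all NULL back into a
-- NULL struct by checking a field that should never be NULL in valid data.
SELECT
  * REPLACE (
    IF(bar.req IS NULL, NULL, bar) AS bar
  )
FROM broken_table
The idea is that a NULL struct satisfies a nullable-record schema, whereas a struct full of NULL required fields does not.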

SQLite ALTER TABLE - add column between existing columns?

If I have a table with columns a, b, c and later I do an ALTER TABLE command to add a new column "d", is it possible to add it between a and b, for example, and not at the end?
I heard that the position of the columns affects performance.
It's not possible to add a column between two existing columns with an ALTER TABLE statement in SQLite. This works as designed.
The new column is always appended to the end of the list of existing
columns.
As far as I know, MySQL is the only SQL(-ish) DBMS that lets you determine the placement of new columns.
To add a column at a specific position within a table row, use FIRST
or AFTER col_name. The default is to add the column last. You can also
use FIRST and AFTER in CHANGE or MODIFY operations to reorder columns
within a table.
But this isn't a feature I'd use regularly, so "as far as I know" isn't really very far.
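For reference, the MySQL syntax described in that quote looks like this (table and column names are just illustrative):
-- MySQL only: add column d and place it between a and b.
ALTER TABLE t ADD COLUMN d INT AFTER a;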
With every SQL platform I've seen, the only way to do this is to drop the table and re-create it.
However, I question whether the position of the column affects performance... In what way would it? What operations are you doing that you think it will make a difference to?
I will also note that dropping the table and recreating it is often not a heavy lift. Making a backup of a table and restoring that table is easy on all major platforms so scripting a backup - drop - create - restore is an easy task for a competent DBA.
In fact I've done so often when users ask -- but I always find it a little silly. The reason most often given is that the tool of choice behaves nicer when the columns are created in a certain order. (This was also @Jarad's reason below.) So this is a good lesson for tool makers: make your tool able to reorder columns (and remember it between runs) -- then everyone is happy.
I use the DB.compileStatement:
sql = DB.compileStatement("INSERT INTO tableX VALUES (?,?,?);");
sql.bindString(1,"value for column 1");
sql.bindString(2,"value for column 2");
sql.bindString(3,"value for column 3");
sql.executeUpdateDelete();
So there will be a big difference if the order of the columns is not correct.
Unfortunately, adding columns at a specific position is not possible using ALTER TABLE, at least not in SQLite (in MySQL it is possible). The workaround is recreating the table (and backing up and restoring the data).
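A minimal sketch of that recreate-the-table workaround in SQLite, assuming a table t(a, b, c) and a new column d that should end up between a and b:
-- Recreate the table with the desired column order and copy the data across.
BEGIN;
CREATE TABLE t_new (a, d, b, c);                     -- desired column order
INSERT INTO t_new (a, b, c) SELECT a, b, c FROM t;   -- d is NULL for existing rows
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;
COMMIT;
-- Any indexes, triggers or views on t must be recreated as well.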

SQL INSERT without specifying columns. What happens?

Was looking through the beloved W3Schools and found this page and actually learned something interesting. I didn't know you could call an insert command without specifying which columns the values map to. For example:
INSERT INTO table_name
VALUES (value1, value2, value3,...)
Pulling from my hazy memory, I seem to remember the SQL prof mentioning that you have to treat fields as if they are not in any particular order (although there is on the RDB side, but it's not guaranteed).
My question is, how does the server know which values get assigned to which fields? I would test this myself, but I am not going to use a production server to do so, and that is all I have access to at the moment.
If this is technology-specific, I am working on PostgreSQL. How is this particular syntax even useful?
Your prof was right - you should name the columns explicitly before naming the values.
In this case though the values will be inserted in the order that they appear in the table definition.
The problem with this is that if that order changes, or columns are removed or added (even if they are nullable), then the insert will break.
In terms of its usefulness, not that much in production code. If you're hand coding a quick insert then it might just help save you typing all the column names out.
They get inserted into the fields in the order they are in the table definition.
So if your table has fields (a,b,c), a=value1, b=value2, c=value3.
Your professor was right, this is lazy, and liable to break. But useful for a quick and dirty lazy insert.
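A small sketch of that positional behaviour (the table and values are made up):
-- A column-less INSERT relies on the declared order; explicit columns do not.
CREATE TABLE t (a integer, b integer, c integer);
INSERT INTO t VALUES (1, 2, 3);            -- a = 1, b = 2, c = 3
INSERT INTO t (c, a, b) VALUES (3, 1, 2);  -- same row, but robust to reordering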
I cannot resist putting an "RTFM" here.
The PostgreSQL manual details what happens in the chapter on INSERT:
The target column names can be listed in any order. If no list of
column names is given at all, the default is all the columns of the
table in their declared order; or the first N column names, if there
are only N columns supplied by the VALUES clause or query. The values
supplied by the VALUES clause or query are associated with the
explicit or implicit column list left-to-right.
Bold emphasis mine.
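In other words (a made-up PostgreSQL example), supplying fewer values than there are columns fills the leading columns and leaves the rest at their defaults:
CREATE TABLE demo (a integer, b integer, c integer);
INSERT INTO demo VALUES (1, 2);  -- a = 1, b = 2, c takes its default (NULL here)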
The values are just added in the same order as the columns appear in the table. It's useful in situations where you don't know the names of the columns you're working with but know what data needs to go in. It's generally not a good idea to do this though as it of course breaks if the order of the columns change or new columns are inserted in the middle.
That syntax works without specifying columns only if you provide the same number of values as there are columns. The second, more important thing is that columns in a SQL table are always in the same order, which depends on your table definition. The only thing that has no innate order in a SQL table is the rows.
When the table is created, each column gets an order number in the system catalog, so each value is inserted according to that order: the first value goes to the first column, and so on.
In SQL Server, the system table syscolumns maintains this order. PostgreSQL should have something similar.
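For example, in SQL Server the declared order can be inspected through the newer catalog view sys.columns (a sketch; dbo.t is a placeholder table name):
-- column_id reflects the order a column-less INSERT will use.
SELECT name, column_id
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.t')
ORDER BY column_id;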

Dynamic SQL examples

I have lately learned what dynamic SQL is, and one of its most interesting features to me is that we can use dynamic column names and table names. But I cannot think of useful real-life examples. The only one that came to my mind is a statistical table.
Let's say that we have a table with name, type and created_data. Then we want a result whose columns are the years taken from the created_data column, with one row per type showing the number of names created in each year. (sorry for my English)
What other useful real-life examples are there of using dynamic SQL with column and table names as parameters? How do you use it?
Thanks for any suggestions and help :)
regards
Gabe
Edit:
Thanks for the replies. I am particularly interested in examples that do not involve administrative tasks, database conversion or something like that; I am looking for examples where the code in, for example, Java would be more complicated than using dynamic SQL in, say, a stored procedure.
An example of dynamic SQL is to fix a broken schema and make it more usable.
For example if you have hundreds of users and someone originally decided to create a new table for each user, you might want to redesign the database to have only one table. Then you'd need to migrate all the existing data to this new system.
You can query the information schema for table names with a certain naming pattern or containing certain columns, then use dynamic SQL to select all the data from each of those tables and put it into a single table.
INSERT INTO users (name, col1, col2)
SELECT 'foo', col1, col2 FROM user_foo
UNION ALL
SELECT 'bar', col1, col2 FROM user_bar
UNION ALL
...
Then hopefully after doing this once you will never need to touch dynamic SQL again.
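A sketch of how such a statement could be generated rather than typed by hand (MySQL flavour, since the information schema is mentioned; the user_% naming pattern and the column list are assumptions):
-- Build the INSERT ... SELECT ... UNION ALL statement from the catalog.
-- group_concat_max_len may need raising if there are many tables.
SELECT CONCAT(
         'INSERT INTO users (name, col1, col2)\n',
         GROUP_CONCAT(
           CONCAT('SELECT ''', SUBSTRING(table_name, 6), ''', col1, col2 FROM ', table_name)
           SEPARATOR '\nUNION ALL\n'
         )
       ) AS generated_sql
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name LIKE 'user\_%';
The result is a statement like the one above, ready to be reviewed and executed once.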
Long, long ago I worked with an application where users had their own tables in a common database.
Imagine that each user can create their own table in the database from the UI. To get access to the data in these tables, the developer needs to use dynamic SQL.
I once had to write an Excel import where the Excel sheet was not like a CSV file but laid out like a matrix. So I had to deal with an unknown number of columns for 3 temporary tables (columns, rows, "infield"). The rows were also a short form of tree. Sounds weird, but it was fun to do.
In SQL Server there was no chance to handle this without dynamic SQL.
Another example from a situation I recently came up against: a MySQL database of about 250 tables, all using the MyISAM engine, with no database design schema, chart or other explanation at all - well, except the not-so-helpful table and column names.
To plan for the conversion to InnoDB and find possible foreign keys, we either had to manually check all queries (and the conditions used in JOIN and WHERE clauses) created from the web frontend code, or write a script that uses dynamic SQL to check all combinations of columns with compatible data types and compare the data stored in those column combinations (and then manually accept or reject the candidates).
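As an illustration of that second approach, the candidate column pairs can be pulled from information_schema before any comparison queries are generated (a rough sketch; the schema name and the assumption that key columns end in _id are placeholders):
-- List column pairs with matching data types as foreign-key candidates.
SELECT a.table_name, a.column_name,
       b.table_name AS ref_table, b.column_name AS ref_column
FROM information_schema.columns a
JOIN information_schema.columns b
  ON a.data_type = b.data_type
 AND a.table_name <> b.table_name
WHERE a.table_schema = 'my_schema'
  AND b.table_schema = 'my_schema'
  AND a.column_name LIKE '%\_id';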

Oracle SQL dynamically select a column

I have multiple tables which all have the same structure -- except a couple of them have one column misnamed. I would like a SQL statement that would allow the user to select that misnamed column using the correct name (there are only 2 possible names for the column: the correct one and the wrong one). I was thinking I could have the query first look at the all_tab_columns view to look up the table and decide which spelling of that column the table has, in order to retrieve the data...
I understand the difficulty of renaming/altering existing production tables, but it seems like the best solution to this problem is simply to update the misnamed tables with the correct column name. Is there a reason (beyond extra work) that this is infeasible?
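If renaming does turn out to be acceptable, it is a one-line change per affected table (names are placeholders):
-- Rename the misspelled column in place.
ALTER TABLE some_table RENAME COLUMN wrong_name TO correct_name;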