ISDATE Function in Oracle

I am developing a web application that gets its data from an Oracle DB. The select statements are created dynamically. What I want is that whenever I select a date field in a table, it should be returned as a string in the format dd.mm.yyyy.
What I need is basically a function like isdate(COLUMN_NAME, true_stmt, false_stmt):
SELECT ISDATE(FirstColumn, to_char(FirstColumn, 'dd.mm.yyyy'), FirstColumn)
FROM ANYTABLE
Is there a way to do this?

You can check what the data type is for that column using the data dictionary, and combine multiple versions of the same query with UNION ALL to handle whatever data type it might be.
For example let's say you had this table:
create table tbl_char (dt varchar2(10));
insert into tbl_char values ('01.03.2013');
And then ran:
select to_char(dt, 'dd.mm.yyyy')
from tbl_char
where exists (select 'x'
from all_tab_cols
where table_name = 'TBL_CHAR'
and column_name = 'DT'
and data_type = 'DATE')
union all
select dt
from tbl_char
where exists (select 'x'
from all_tab_cols
where table_name = 'TBL_CHAR'
and column_name = 'DT'
and data_type = 'VARCHAR2')
You would get one row, "01.03.2013", as output, because only the 2nd query actually ran. The first would have returned an error if not for the filter resulting from the EXISTS subquery. Now, if we were to change that varchar field over to a date, we would get exactly the same output, only the result would technically be from the first query. The second would run and return no rows.
sql fiddle: http://sqlfiddle.com/#!4/0001d/1/0
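If the table and column are not known in advance, a possible extension of the same data-dictionary idea is to generate the select list itself (a sketch, not from the answer above; LISTAGG needs Oracle 11gR2 or later, and ANYTABLE is a placeholder):
SELECT LISTAGG(
         CASE
           WHEN data_type = 'DATE'
             THEN 'to_char(' || column_name || ', ''dd.mm.yyyy'') AS ' || column_name
           ELSE column_name
         END, ', ') WITHIN GROUP (ORDER BY column_id) AS select_list
FROM all_tab_cols
WHERE table_name = 'ANYTABLE';
The resulting string can then be spliced into the dynamically created SELECT that the application builds.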

Related

Oracle SQL using variable in Select or For-Loop

I need to verify data across several tables. In essence, I want to write a loop over the statement below for all of the fields in a given table.
sql> select fld1, count(*)
from table1
group by fld1
;
I'm thinking that I need to create at least two variables. The first variable would be populated by prompting for the table name.
The second variable would be based on the result of:
select column_name from user_tab_col_statistics where table_name = table_variable
Should I also create a temp table and select into that?
As per my understanding, you can store the values in a temporary table:
1. Fetch the table name and column name into variables Var_Table_name and Var_Col.
2. For each column, run select Var_Col, count(*) into Var1, Var2 from Var_Table group by Var_Col; (since table and column names cannot be bind variables, this has to be built with dynamic SQL).
3. Create a temporary table and insert the values Var_Table_name, Var_Col, Var1 and Var2 into it.
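A minimal sketch of that temp-table idea, with assumed names (an Oracle global temporary table, plus one INSERT ... SELECT per column built with dynamic SQL):
CREATE GLOBAL TEMPORARY TABLE tmp_col_counts (
  table_name  VARCHAR2(128),
  column_name VARCHAR2(128),
  col_value   VARCHAR2(4000),
  cnt         NUMBER
) ON COMMIT PRESERVE ROWS;
-- e.g. for column FLD1 of TABLE1:
INSERT INTO tmp_col_counts (table_name, column_name, col_value, cnt)
SELECT 'TABLE1', 'FLD1', TO_CHAR(fld1), COUNT(*)
FROM table1
GROUP BY fld1;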
PL/SQL does not prompt. SQL*Plus, on the other hand, will prompt for substitution variables. See the following example.
MPOWEL01> @stack
MPOWEL01>
MPOWEL01> select table_name, column_name from user_tab_col_statistics where table_name = upper('&tbl_nm')
2 order by column_name;
Enter value for tbl_nm: marktest
old 1: select table_name, column_name from user_tab_col_statistics where table_name = upper('&tbl_nm')
new 1: select table_name, column_name from user_tab_col_statistics where table_name = upper('marktest')
TABLE_NAME COLUMN_NAME
------------------------------ ------------------------------
MARKTEST FLD1
MARKTEST FLD2
MARKTEST FLD3
MARKTEST FLD4
MPOWEL01>
SET VERIFY OFF will eliminate the old/new substitution message lines from the output.
In PL/SQL you either need to SELECT INTO variables or use a cursor.
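For example, a hedged cursor-plus-dynamic-SQL sketch (the table name MARKTEST is just an example; run with SERVEROUTPUT ON to see the generated statements):
DECLARE
  v_sql VARCHAR2(4000);
BEGIN
  FOR c IN (SELECT column_name
            FROM user_tab_columns
            WHERE table_name = 'MARKTEST'
            ORDER BY column_id)
  LOOP
    v_sql := 'SELECT ' || c.column_name || ', COUNT(*) FROM MARKTEST GROUP BY ' || c.column_name;
    DBMS_OUTPUT.PUT_LINE(v_sql);
    -- EXECUTE IMMEDIATE v_sql ...;  -- or OPEN a ref cursor FOR v_sql and fetch the rows
  END LOOP;
END;
/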
I think you might just be able to use SQL to generate the SELECT statements you want to run, but what exactly do you mean by 'verify data'? Verify how? Using what standard?

SQL joining huge tables by excluding just one column in select statement

I'm trying to use a select statement to get all of the columns from a certain MySQL table except one. Is there a simple way to do this?
EDIT: There are 53 columns in this table (NOT MY DESIGN)
Actually there is a way; you of course need to have permissions to do this...
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME), '<columns_to_omit>,', '') FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '<table>' AND TABLE_SCHEMA = '<database>'), ' FROM <table>');
PREPARE stmt1 FROM @sql;
EXECUTE stmt1;
Replacing <table>, <database> and <columns_to_omit>
(Do not try this on a big table, the result might be... surprising !)
TEMPORARY TABLE
DROP TABLE IF EXISTS temp_tb;
CREATE TEMPORARY TABLE temp_tb ENGINE=MEMORY SELECT * FROM orig_tb;
ALTER TABLE temp_tb DROP col_a, DROP col_f, DROP col_z; # MySQL
SELECT * FROM temp_tb;
DROP syntax may vary between databases (@Denis Rozhnev).
Would a View work better in this case?
CREATE VIEW vwTable
as
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
You can do:
SELECT column1, column2, column4 FROM table WHERE whatever
without getting column3, though perhaps you were looking for a more general solution?
If you are looking to exclude the value of a field, e.g. for security concerns / sensitive info, you can retrieve that column as null.
e.g.
SELECT *, NULL AS salary FROM users
To the best of my knowledge, there isn't. You can do something like:
SELECT col1, col2, col3, col4 FROM tbl
and manually choose the columns you want. However, if you want a lot of columns, then you might just want to do a:
SELECT * FROM tbl
and just ignore what you don't want.
In your particular case, I would suggest:
SELECT * FROM tbl
unless you only want a few columns. If you only want four columns, then:
SELECT col3, col6, col45, col52 FROM tbl
would be fine, but if you want 50 columns, then any code that makes the query would become (too?) difficult to read.
While trying the solutions by @Mahomedalid and @Junaid I found a problem, so I thought I'd share it. If a column name contains spaces or hyphens, such as check-in, the query will fail. The simple workaround is to wrap the column names in backticks. The modified query is below.
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(CONCAT("`", COLUMN_NAME, "`")) FROM
INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
If the column that you don't want to select has a massive amount of data in it, you don't want to include it for speed reasons, and you select the other columns often, I would suggest that you create a new table holding just that one field, keyed to the original table, and remove the field from the original table. Join the tables only when that extra field is actually required.
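A sketch of that split-table design (all table and column names here are made up):
-- Move the rarely needed big column into its own table, keyed to the original.
CREATE TABLE tbl_bigcol (
  id INT PRIMARY KEY,
  big_column LONGTEXT,
  FOREIGN KEY (id) REFERENCES tbl (id)
);
-- Day-to-day queries read only the slim table:
SELECT * FROM tbl;
-- Join in the big column only when it is actually required:
SELECT t.*, b.big_column
FROM tbl t
JOIN tbl_bigcol b ON b.id = t.id;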
You could use DESCRIBE my_table and use the results of that to generate the SELECT statement dynamically.
My main problem is the many columns I get when joining tables. While this is not the answer to your question (how to select all but certain columns from one table), I think it is worth mentioning that you can specify tablename.* to get all the columns from a particular table, instead of just specifying *.
Here is an example of how this could be very useful:
select users.*, phone.meta_value as phone, zipcode.meta_value as zipcode
from users
left join user_meta as phone
on ( (users.user_id = phone.user_id) AND (phone.meta_key = 'phone') )
left join user_meta as zipcode
on ( (users.user_id = zipcode.user_id) AND (zipcode.meta_key = 'zipcode') )
The result is all the columns from the users table, and two additional columns which were joined from the meta table.
I liked the answer from @Mahomedalid, along with the caveat noted in the comment from @Bill Karwin. The possible problem raised by @Jan Koritak is real; I ran into it, but I found a trick for it and just want to share it here for anyone facing the issue.
We can replace the REPLACE function with a WHERE clause in the subquery of the prepared statement, like this:
Using my table and column name
SET @SQL = CONCAT('SELECT ', (SELECT GROUP_CONCAT(COLUMN_NAME) FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'users' AND COLUMN_NAME NOT IN ('id')), ' FROM users');
PREPARE stmt1 FROM @SQL;
EXECUTE stmt1;
So, this is going to exclude only the field id but not company_id
Yes, though it can be high I/O depending on the table. Here is a workaround I found for it (this uses SQL Server-style temporary tables):
SELECT *
INTO #temp
FROM table
ALTER TABLE #temp DROP COLUMN column_name
SELECT *
FROM #temp
It is good practice to specify the columns that you are querying even if you query all the columns.
So I would suggest you write the name of each column in the statement (excluding the one you don't want).
SELECT
col1
, col2
, col3
, col..
, col53
FROM table
I agree with the "simple" solution of listing all the columns, but this can be burdensome, and typos can cause lots of wasted time. I use a function "getTableColumns" to retrieve the names of my columns suitable for pasting into a query. Then all I need to do is to delete those I don't want.
CREATE FUNCTION `getTableColumns`(tablename varchar(100))
RETURNS varchar(5000) CHARSET latin1
BEGIN
DECLARE done INT DEFAULT 0;
DECLARE res VARCHAR(5000) DEFAULT "";
DECLARE col VARCHAR(200);
DECLARE cur1 CURSOR FOR
select COLUMN_NAME from information_schema.columns
where TABLE_NAME=tablename AND TABLE_SCHEMA="yourdatabase" ORDER BY ORDINAL_POSITION;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
OPEN cur1;
REPEAT
FETCH cur1 INTO col;
IF NOT done THEN
set res = CONCAT(res,IF(LENGTH(res)>0,",",""),col);
END IF;
UNTIL done END REPEAT;
CLOSE cur1;
RETURN res;
END
Your result returns a comma delimited string, for example...
col1,col2,col3,col4,...col53
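Assuming the function compiles in your schema, a call looks like this ('mytable' is a placeholder):
SELECT getTableColumns('mytable');
Paste the returned list into your query and delete the columns you don't want.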
I agree that it isn't sufficient just to SELECT *: if the one column you don't need is a BLOB, as mentioned elsewhere, you don't want that overhead creeping in.
I would create a view with the required data; then you can SELECT * in comfort, if the database software supports views. Otherwise, put the huge data in another table.
At first I thought you could use regular expressions, but as I've been reading the MySQL docs it seems you can't. If I were you I would use another language (such as PHP) to generate a list of columns you want to get, store it as a string and then use that to generate the SQL.
Based on @Mahomedalid's answer, I have made some improvements to support "select all columns except some in MySQL":
SET @database = 'database_name';
SET @tablename = 'table_name';
SET @cols2delete = 'col1,col2,col3';
SET @sql = CONCAT(
'SELECT ',
(
SELECT GROUP_CONCAT( IF(FIND_IN_SET(COLUMN_NAME, @cols2delete), NULL, COLUMN_NAME ) )
FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @tablename AND TABLE_SCHEMA = @database
),
' FROM ',
@tablename);
SELECT @sql;
If you do have a lot of columns, use this SQL to change group_concat_max_len:
SET @@group_concat_max_len = 2048;
I agree with @Mahomedalid's answer, but I didn't want to do something like a prepared statement and I didn't want to type all the fields, so what I had was a silly solution.
Go to the table in phpmyadmin->sql->select, it dumps the query: copy, replace and done! :)
While I agree with Thomas' answer (+1 ;)), I'd like to add the caveat that I'll assume the column that you don't want contains hardly any data. If it contains enormous amounts of text, xml or binary blobs, then take the time to select each column individually. Your performance will suffer otherwise. Cheers!
Just do
SELECT * FROM table WHERE whatever
Then drop the column in your favourite programming language, e.g. PHP:
while (($data = mysql_fetch_array($result, MYSQL_ASSOC)) !== FALSE) {
unset($data["id"]);
foreach ($data as $k => $v) {
echo"$v,";
}
}
The answer posted by Mahomedalid has a small problem:
The REPLACE call replaces "<columns_to_delete>," with "", which fails when the field to remove is the last one in the concatenated string, because the last column is not followed by a comma and therefore is not removed from the string.
My proposal:
SET @sql = CONCAT('SELECT ', (SELECT REPLACE(GROUP_CONCAT(COLUMN_NAME),
'<columns_to_delete>', '\'FIELD_REMOVED\'')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = '<table>'
AND TABLE_SCHEMA = '<database>'), ' FROM <table>');
Replacing <table>, <database> and <columns_to_delete>.
The removed column is replaced by the string "FIELD_REMOVED". In my case this works because I was trying to save memory (the field I was removing is a BLOB of around 1 MB).
You can use SQL to generate SQL if you like and evaluate the SQL it produces. This is a general solution as it extracts the column names from the information schema. Here is an example from the Unix command line.
Substituting
MYSQL with your mysql command
TABLE with the table name
EXCLUDEDFIELD with the excluded field name
echo $(echo 'select concat("select ", group_concat(column_name) , " from TABLE") from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1) | MYSQL
You really only need to extract the column names this way once, to construct the column list excluding that column, and then just use the query you have constructed.
So something like:
column_list=$(echo 'select group_concat(column_name) from information_schema.columns where table_name="TABLE" and column_name != "EXCLUDEDFIELD" group by "t"' | MYSQL | tail -n 1)
Now you can reuse the $column_list string in queries you construct.
I wanted this too so I created a function instead.
public function getColsExcept($table,$remove){
$res =mysql_query("SHOW COLUMNS FROM $table");
while($arr = mysql_fetch_assoc($res)){
$cols[] = $arr['Field'];
}
if(is_array($remove)){
$newCols = array_diff($cols,$remove);
return "`".implode("`,`",$newCols)."`";
}else{
$length = count($cols);
for($i=0;$i<$length;$i++){
if($cols[$i] == $remove)
unset($cols[$i]);
}
return "`".implode("`,`",$cols)."`";
}
}
So how it works is that you enter the table, then a column you don't want, or several as an array: array("id","name","whatevercolumn")
So in select you could use it like this:
mysql_query("SELECT ".$db->getColsExcept('table',array('id','bigtextcolumn'))." FROM table");
or
mysql_query("SELECT ".$db->getColsExcept('table','bigtextcolumn')." FROM table");
Maybe I have a solution to the discrepancy Jan Koritak pointed out:
SELECT CONCAT('SELECT ',
( SELECT GROUP_CONCAT(t.col)
FROM
(
SELECT CASE
WHEN COLUMN_NAME = 'eid' THEN NULL
ELSE COLUMN_NAME
END AS col
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
) t
WHERE t.col IS NOT NULL) ,
' FROM employee' );
Table :
SELECT table_name,column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'employee' AND TABLE_SCHEMA = 'test'
================================
table_name column_name
employee eid
employee name_eid
employee sal
================================
Query Result:
'SELECT name_eid,sal FROM employee'
I use this workaround, although it may be "off topic": using MySQL Workbench and the query builder.
Open the columns view
Shift-select all the columns you want in your query (in your case all but one, which is what I do)
Right click and select Send to SQL Editor -> Name (Short).
Now you have the list, and you can then copy-paste the query wherever you need it.
If it's always the same one column, then you can create a view that doesn't have it in it.
Otherwise, no I don't think so.
I would like to add another point of view in order to solve this problem, especially if you have a small number of columns to remove.
You could use a DB tool like MySQL Workbench in order to generate the select statement for you, so you just have to manually remove those columns for the generated statement and copy it to your SQL script.
In MySQL Workbench the way to generate it is:
Right click on the table -> Send to SQL Editor -> Select All Statement.
The accepted answer has several shortcomings.
It fails where the table or column names requires backticks
It fails if the column you want to omit is last in the list
It requires listing the table name twice (once for the select and another for the query text) which is redundant and unnecessary
It can potentially return column names in the wrong order
All of these issues can be overcome by simply including backticks in the SEPARATOR for your GROUP_CONCAT and using a WHERE condition instead of REPLACE(). For my purposes (and I imagine many others') I wanted the column names returned in the same order that they appear in the table itself. To achieve this, here we use an explicit ORDER BY clause inside of the GROUP_CONCAT() function:
SELECT CONCAT(
'SELECT `',
GROUP_CONCAT(COLUMN_NAME ORDER BY `ORDINAL_POSITION` SEPARATOR '`,`'),
'` FROM `',
`TABLE_SCHEMA`,
'`.`',
TABLE_NAME,
'`;'
)
FROM INFORMATION_SCHEMA.COLUMNS
WHERE `TABLE_SCHEMA` = 'my_database'
AND `TABLE_NAME` = 'my_table'
AND `COLUMN_NAME` != 'column_to_omit';
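If you want to actually run the generated statement rather than copy it by hand, one possible follow-up (an assumption, not part of the answer above) is to capture it into a user variable and execute it as a prepared statement:
SELECT CONCAT(
    'SELECT `',
    GROUP_CONCAT(COLUMN_NAME ORDER BY ORDINAL_POSITION SEPARATOR '`,`'),
    '` FROM `my_database`.`my_table`'
  )
INTO @sql
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'my_database'
  AND TABLE_NAME = 'my_table'
  AND COLUMN_NAME != 'column_to_omit';
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;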
I have a suggestion but not a solution.
If some of your columns have larger data sets, then you should try the following:
SELECT *, LEFT(col1, 0) AS col1, LEFT(col2, 0) as col2 FROM table
If you use MySQL Workbench you can right-click your table, click Send to SQL Editor and then Select All Statement. This will create a statement where all fields are listed, like this:
SELECT `purchase_history`.`id`,
`purchase_history`.`user_id`,
`purchase_history`.`deleted_at`
FROM `fs_normal_run_2`.`purchase_history`;
SELECT * FROM fs_normal_run_2.purchase_history;
Now you can just remove those that you don't want.

SQL - conditionally set column values to NULL

I have a table, some_table, which has a number of columns; some of them have an invalid value in some rows that needs to be transformed into NULL.
I cannot use the statement below, because mutating the original table is not allowed by my permissions, and it would also need to be repeated for all column names.
UPDATE some_table SET column_name = NULL WHERE column_name = 'invalid value';
So it needs to be a 'SELECT' operation to create a new table with invalid values converted to NULL. Is there a quick way to do this?
Updating with an answer from @Jonny below:
NULLIF is a good option. However, is there a way to apply it to all columns rather than having to do it for each column separately? Sometimes the number of columns is pretty huge.
You could use NULLIF.
Have a look at 9.16.3. NULLIF:
https://www.postgresql.org/docs/current/static/functions-conditional.html
SELECT NULLIF(column_name, 'invalid value')
FROM some_table
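For the "all columns" follow-up, a possible extension (a sketch, not from the answer above; PostgreSQL syntax, with some_table and 'invalid value' as placeholders) is to generate the full NULLIF select list from the catalog and then run the generated statement:
SELECT 'SELECT '
       || string_agg(format('NULLIF(%I, ''invalid value'') AS %I', column_name, column_name),
                     ', ' ORDER BY ordinal_position)
       || ' FROM some_table;'
FROM information_schema.columns
WHERE table_name = 'some_table';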
How about something like:
INSERT INTO some_table2 (column_name, ...) SELECT * FROM some_table WHERE column_name <> 'invalid value';
INSERT INTO some_table2 (column_name, ...) SELECT null, ... FROM some_table WHERE column_name = 'invalid value';

Dynamic UPDATE statement to update values in columns returned by a previous SELECT

In essence, what I want to do is:
find all tables and their columns that match a specific query,
update values in these columns.
So say I have something like
SELECT COLUMN_NAME, TABLE_NAME, TABLE_SCHEMA
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
(
TABLE_SCHEMA = 'PUBLIC'
) AND (
COLUMN_NAME LIKE '%SOMETHING%'
OR COLUMN_NAME LIKE '%SOMETHINGELSE%'
) AND (
DATA_TYPE = 'BIGINT' OR
DATA_TYPE = 'TINYINT' OR
DATA_TYPE = 'SMALLINT' OR
DATA_TYPE = 'INTEGER'
)
Or for Oracle something like:
SELECT COLUMN_NAME, TABLE_NAME
FROM USER_TAB_COLS
WHERE
(
COLUMN_NAME LIKE '%SOMETHING%'
OR COLUMN_NAME LIKE '%SOMETHINGELSE%'
) AND
DATA_TYPE IN ('NUMBER')
I want to then do an UPDATE on all resulting columns similar to:
UPDATE _RESULTING_COLUMN_NAMES_HERE_THEORETICALLY_
SET
_SINGLE_COLUMN_NAME_ = _SOME_NEW_VALUE_
WHERE _SINGLE_COLUMN_NAME_ = _SOME_OLD_VALUE_;
Well obviously that does not work or even exist, but I hope you understand what I want to achieve.
I could see a way where you generate an UPDATE statement for each matching table from the SELECT resultset, but I don't really see how to achieve this.
To make things more fun, I'd need to do that for a list of old_value to new_value transformations.
Any ideas are welcome.
I am trying to have this work on HSQLDB and Oracle as my 2 requirements, but supporting additional platforms would be a pretty good bonus.
Anytime you think you need to use dynamic SQL, you should stop, take a step back and see if there's another way to do it, or if you REALLY need to do what you're doing.
I'd probably seriously question your base "requirement" of:
updating all columns for all tables matching some string, and of type integer (or variations thereof).
Something still smells "funny" ... I'd be VERY careful about what you're doing - make sure you know what the results are going to be, test test test .. and TEST again ... on a DEV box somewhere ...
That said, anytime I need to resort to dynamic SQL, I have found the simplest way is to start with a "template":
So in your case, the final UPDATE you want to fire is as you put it:
UPDATE _RESULTING_COLUMN_NAMES_HERE_THEORETICALLY_
SET
_SINGLE_COLUMN_NAME_ = _SOME_NEW_VALUE_
WHERE _SINGLE_COLUMN_NAME_ = _SOME_OLD_VALUE_;
Ok, I'd probably re-write that as a string now, and start a query using the WITH clause:
WITH w_template AS ( select
rtrim(q'[ UPDATE _RESULTING_COLUMN_NAMES_HERE_THEORETICALLY_ ]')||CHR(10)||
rtrim(q'[ SET ]')||CHR(10)||
rtrim(q'[ _SINGLE_COLUMN_NAME_ = _SOME_NEW_VALUE_ ]')||CHR(10)||
rtrim(q'[ WHERE _SINGLE_COLUMN_NAME_ = _SOME_OLD_VALUE_; ]')
template from dual
)
Note I haven't changed anything in your query (yet). All I did was wrap some "q'[" and "]'" around it ... an rtrim, a CHR(10) and put it in a WITH clause.
1) q'[ some string ]' is an alternate way to do a string. The advantage of it is you can have single quotes inside that string without any real issue:
ie q'[ some 'string' ]' works just fine ... prints " some 'string' "
2) RTRIM - I left spaces at end of line in there as cosmetic so it's easier for us to read. However, due to length restrictions of strings, those spaces can grow that string really big, really fast with a larger query. So RTRIM is a habit I've gotten into. Keep the cosmetic spaces, but tell Oracle not to use them ;) they're just for us.
3) CHR(10) - cosmetic only - you can leave this off if you want. I like it as if you want to dump the query during testing, you can easily read the query and see what it built.
Next we'll change the names of your dynamic values there so we can more easily spot them and substitute them:
WITH w_template AS ( select
rtrim(q'[ UPDATE <table_name> ]')||CHR(10)||
rtrim(q'[ SET ]')||CHR(10)||
rtrim(q'[ <col_name> = <col_new_val> ]')||CHR(10)||
rtrim(q'[ WHERE <col_name> = <col_old_val>; ]')
template from dual
)
All I did was create easily identified "strings" that I'll use to substitute values in later.
Note that if your columns were strings, you might need quotes in there: <col_name> = '<col_new_val>'
but seems you're dealing with integer data .. so I think we're ok ...
Now we need to pull your data ... so we go back to your original query:
SELECT COLUMN_NAME, TABLE_NAME
FROM USER_TAB_COLS
WHERE
(
COLUMN_NAME LIKE '%SOMETHING%'
OR COLUMN_NAME LIKE '%SOMETHINGELSE%'
) AND
DATA_TYPE IN ('NUMBER')
Hmm, I'll have to trust you in your query there, I'm not sure that'll run on Oracle, but you know your query better than I do ;) So I'll trust your query "as is" for this example - as long as it picks out the data you want, and includes the table name, column name, and the before/after values you want (which it currently doesn't) we're ok.
So all we need to do is tack those two together ... we'll do this:
WITH w_template AS ( select
rtrim(q'[ UPDATE <table_name> ]')||CHR(10)||
rtrim(q'[ SET ]')||CHR(10)||
rtrim(q'[ <col_name> = <col_new_val> ]')||CHR(10)||
rtrim(q'[ WHERE <col_name> = <col_old_val>; ]')
template from dual
),
w_data AS (
SELECT COLUMN_NAME, TABLE_NAME
FROM USER_TAB_COLS
WHERE
(
COLUMN_NAME LIKE '%SOMETHING%'
OR COLUMN_NAME LIKE '%SOMETHINGELSE%'
) AND
DATA_TYPE IN ('NUMBER')
)
Then we just need to add the final query, using REPLACE to substitute values ..
(note: not sure where you get "some_new_value" and "some_old_value" from ??? you'll have to join that into your query .. )
WITH w_template AS ( select
rtrim(q'[ UPDATE <table_name> ]')||CHR(10)||
rtrim(q'[ SET ]')||CHR(10)||
rtrim(q'[ <col_name> = <col_new_val> ]')||CHR(10)||
rtrim(q'[ WHERE <col_name> = <col_old_val>; ]')
template from dual
),
w_data AS (
SELECT COLUMN_NAME, TABLE_NAME
FROM USER_TAB_COLS
WHERE
(
COLUMN_NAME LIKE '%SOMETHING%'
OR COLUMN_NAME LIKE '%SOMETHINGELSE%'
) AND
DATA_TYPE IN ('NUMBER')
)
SELECT REPLACE ( REPLACE ( REPLACE ( REPLACE (
wt.template, '<table_name>',
wd.table_name ),
'<col_name>', wd.column_name ),
'<col_new_val>', ??? ),
'<col_old_val>', ??? ) query
FROM w_template wt,
w_data wd
I left ??? there for the old / new values, since you didn't indicate where they'd come from ??
but if you run that, it should spit out some update statements .. ;)
Once you're comfortable with those, pushing them through execute immediate is the easy work.
Again, I would advise being cautious with this approach; it is OK for a one-off migration or suchlike, but it is not advised for a daily job running on a regular basis. ;)
find all tables and their columns that match a specific query,
update values in these columns.
With HSQLDB, it is not possible to do this with just SQL. You need to write a short Java program to list the required table names and their column names, then construct an UPDATE statement per table and execute it.
With Oracle, you could write the same program in PL/SQL. But the Java language solution is compatible with both database engines.
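For the Oracle side, a hedged PL/SQL sketch of that per-table loop (the LIKE patterns and the old/new values are placeholders):
BEGIN
  FOR c IN (SELECT table_name, column_name
            FROM user_tab_cols
            WHERE (column_name LIKE '%SOMETHING%' OR column_name LIKE '%SOMETHINGELSE%')
              AND data_type = 'NUMBER')
  LOOP
    EXECUTE IMMEDIATE
      'UPDATE ' || c.table_name ||
      ' SET ' || c.column_name || ' = :new_val' ||
      ' WHERE ' || c.column_name || ' = :old_val'
      USING 42, 41;  -- placeholder new and old values
  END LOOP;
END;
/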

Informix: Select null problem

Using Informix, I've created a temporary table which I am trying to populate from a select statement. After this, I want to do an update to populate more fields in the temporary table.
So I'm doing something like;
create temp table _results (group_ser int, item_ser int, restype char(4));
insert into _results (group_ser, item_ser, restype)
select
group_ser, item_ser, null
from
sometable
But you can't select null.
For example;
select first 1 current from systables
works but
select first 1 null from systables
fails!
(Don't get me started on why I can't just do a SQL Server-style "select current" with no table specified!)
You don't have to write a stored procedure; you simply have to tell IDS what type the NULL is. Assuming you are not using IDS 7.31 (which does not support any cast notation), you can write:
SELECT NULL::INTEGER FROM dual;
SELECT CAST(NULL AS INTEGER) FROM dual;
And, if you don't have dual as a table (you probably don't), you can do one of a few things:
CREATE SYNONYM dual FOR sysmaster:"informix".sysdual;
The 'sysdual' table was added relatively recently (IDS 11.10, IIRC), so if you are using an older version, it won't exist. The following works with any version of IDS - it's what I use.
-- #(#)$Id: dual.sql,v 2.1 2004/11/01 18:16:32 jleffler Exp $
-- Create table DUAL - structurally equivalent to Oracle's similarly named table.
-- It contains one row of data.
CREATE TABLE dual
(
dummy CHAR(1) DEFAULT 'x' NOT NULL CHECK (dummy = 'x') PRIMARY KEY
) EXTENT SIZE 8 NEXT SIZE 8;
INSERT INTO dual VALUES('x');
REVOKE ALL ON dual FROM PUBLIC;
GRANT SELECT ON dual TO PUBLIC;
Idiomatically, if you are going to SELECT from Systables to get a single row, you should include 'WHERE tabid = 1'; this is the entry for Systables itself, and if it is missing, the fact that your SELECT statement doesn't return any data is the least of your troubles. (I've never seen that as an error, though.)
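For the original temp-table INSERT from the question, the same cast notation can go straight into the select list; a minimal sketch, assuming a version of IDS that supports the cast syntax:
insert into _results (group_ser, item_ser, restype)
select group_ser, item_ser, NULL::CHAR(4)
from sometable;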
This page says the reason you can't do that is that "NULL" doesn't have a type. So, the workaround is to create a sproc that simply returns NULL in the type you want.
That sounds like a pretty bad solution to me though. Maybe you could create a variable in your script, set it to null, then select that variable instead? Something like this:
DEFINE dummy INT;
LET dummy = NULL;
SELECT group_ser, item_ser, dummy
FROM sometable
SELECT group_ser, item_ser, replace(null,null) as my_null_column
FROM sometable
or you can use nvl(null,null) to return a null for your select statement.
Is there any reason to go for an actual table? I have been using
select blah from table(set{1})
select blah from table(set{1})
is nice when you are using a 10.x database. This statement doesn't touch the database; the number of read/write operations is equal to 0. But when you're using 11.x, it will cost you at least 4500 buffer reads, because this version of Informix creates this table in memory and executes the query against it.
select to_date(null) from table;
This works when I want to get a date with a null value.
You can use this expression (''+1) on the SELECT list, instead of null keyword. It evaluates to NULL value of type DECIMAL(2,0).
This (''+1.0001) evaluates to DECIMAL(16,4). And so on.
If you want DATE type use DATE(''+1) to get null value of type DATE.
(''+1)||' ' evaluates to an empty string of type VARCHAR(1).
To obtain NULL value of type VARCHAR(1) use this expression:
DATE(''+1)||' '
Works in 9.x and 11.x.