In Oracle SQL*Plus, the COPY command always converts the target table's CHAR/VARCHAR2 columns to BYTE length semantics when creating the table and inserting the records, and the resulting length mismatch means I'm not able to copy the data back to my original tables.
I tried the setting below in my session and in sqlnet.ora, but it didn't help:
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
connected.
SQL> DESC DDD1;
Name Null? Type
----------------------------------------- -------- ----------------------------
ROW_ID NOT NULL VARCHAR2(15 CHAR)
PRIV_FLG NOT NULL CHAR(1 CHAR)
SQL> COPY TO USER/PASS#DB CREATE DDD12 USING SELECT * FROM DDD1;
Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
Table DDD12 created.
5036 rows selected from DEFAULT HOST connection.
5036 rows inserted into DDD12.
5036 rows committed into DDD12 at USER/PASS#DB.
SQL> DESC DDD12;
Name Null? Type
----------------------------------------- -------- ----------------------------
ROW_ID NOT NULL VARCHAR2(60)
PRIV_FLG NOT NULL CHAR(4)
The copy should be an exact replica of the DDD1 table; instead the lengths have been expanded to byte semantics (15 and 1 characters became 60 and 4 bytes, i.e. four bytes per character, which is consistent with an AL32UTF8 character set).
The documentation for the COPY command says:
The COPY command is not being enhanced to handle datatypes or features introduced with, or after Oracle8i. The COPY command is likely to be made obsolete in a future release.
Character length semantics were introduced in Oracle 9i:
Oracle9i introduces character semantics. It is useful for defining the storage ...
So while it's maybe a bit disappointing, it perhaps isn't surprising that COPY doesn't honour that 'new' feature; and it isn't something they're likely to add in now. COPY creates a new connection/session to the remote DB, and the ALTER SESSION would have to be re-issued in that new session after the connection is made. As that isn't happening, the session defaults to the target database's initialisation parameter value, which is BYTE.
It's perhaps slightly more surprising that SQL Developer does apparently re-issue that alter session. Perhaps more usefully for you, if you want to use a command-line tool for this instead of a GUI, so does SQLcl. You'll have to use one of those rather than SQL*Plus if you want to utilise that command and preserve the source semantics.
Another option may be to start from your current target database, set the session parameter there, and copy from the current source database instead. Or, if you can't do that, you could perhaps have a database trigger that issues the ALTER SESSION, but that's probably getting a bit too complicated now...
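The first option would look something like this, run while connected to the target database (a sketch; here USER/PASS#DB stands for the source connection, and the table names are taken from the question):
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
-- COPY FROM pulls the rows from the remote source; the table is created
-- by the local session, which now uses CHAR semantics for new columns.
COPY FROM USER/PASS#DB CREATE DDD12 USING SELECT * FROM DDD1;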
It's also worth noting MoS document ID 2914.1 "Common SQL*Plus Questions and Answers" which says:
Q: What is the purpose of the COPY command ?
A: Copy command in SQL*Plus is useful when you want to transfer data between Oracle and NON-Oracle databases. If you want to copy Oracle table to another Oracle table, use "create table .. as select .." or "create table, insert into .. select .." statements. Columns may lose precision if you copy from Oracle rdbms to Oracle rdbms.
While that doesn't really give an alternative for copy to another database, you could use a database link; or export/import.
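A database link version might look like this (a sketch; the link name src_link is an assumption, and CTAS should carry over the source columns' CHAR semantics):
-- Create a link to the source database, then copy structure and data.
CREATE DATABASE LINK src_link CONNECT TO user IDENTIFIED BY pass USING 'DB';
CREATE TABLE DDD12 AS SELECT * FROM DDD1@src_link;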
I've created a DB2 SQL script that populates a static table and then does a rename to swap out the live table with the newly updated one. It's a fairly large SQL script, so I'm only including the area that I'm getting an error on.
I'm getting the error: "[IBM][CLI Driver][DB2/NT64] SQL0104N An unexpected token "RENAME" was found following "D_HOLIDAY_LOG_OLD; ". Expected tokens may include: "TRUNCATE". LINE NUMBER=382. SQLSTATE=42601".
I suspect it's a syntax issue with the RENAME commands. If I need to add the whole query, I can. Thanks in advance.
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
DYNAMIC RESULT SETS 1
BEGIN
COMMIT;
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG immediate;
DROP TABLE NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE;
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG;
END#
This is frequently asked.
As you are using static SQL in an SQL PL stored procedure, you must follow the documented rules for blocks of Compound SQL (Compiled) statements.
One of those rules is that static SQL has a restricted set of statements that can appear in such a block of code.
For example, with current versions of Db2-LUW, you cannot use any of the following statically (including RENAME TABLE):
ALTER, CONNECT, CREATE, DESCRIBE, DISCONNECT, DROP, FLUSH EVENT MONITOR, FREE LOCATOR, GRANT, REFRESH TABLE, RELEASE (connection only), RENAME TABLE, RENAME TABLESPACE, REVOKE, SET CONNECTION, SET INTEGRITY, SET PASSTHRU, SET SERVER OPTION, TRANSFER OWNERSHIP
Other Db2 platforms (z/OS, IBM i) might have different restrictions, but the same principle applies.
To achieve what you need you can use dynamic SQL instead of Static-SQL (as long as you understand the implications).
In other words, instead of writing:
RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD;
you could instead use:
execute immediate('RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD');
or equivalent.
You can also use two statements, one to PREPARE and the other to EXECUTE, whichever suits the design. Refer to the documentation for EXECUTE IMMEDIATE.
The same is true for other statements that your version of Db2 disallows in static compound-SQL (compiled) blocks (for example, DROP or CREATE).
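Put together, the procedure body might look like this (a sketch based on the code in the question; only the statements that static SQL PL disallows are made dynamic):
CREATE OR REPLACE PROCEDURE NSD_HOLIDAY_LOG_SPROC()
LANGUAGE SQL
SPECIFIC SP_NSD_HOLIDAY_LOG_SPROC
BEGIN
COMMIT;
-- TRUNCATE is allowed statically, but must start its own unit of work,
-- hence the COMMIT above.
TRUNCATE TABLE TMWIN.NSD_HOLIDAY_LOG IMMEDIATE;
-- DROP and RENAME TABLE are not allowed statically, so run them dynamically.
EXECUTE IMMEDIATE 'DROP TABLE NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_LIVE TO NSD_HOLIDAY_LOG_OLD';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG TO NSD_HOLIDAY_LOG_LIVE';
EXECUTE IMMEDIATE 'RENAME TABLE TMWIN.NSD_HOLIDAY_LOG_OLD TO NSD_HOLIDAY_LOG';
END#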
I would like to keep the data already saved in a varchar(32) table column, and convert that column to BLOB, in a Firebird database.
The software I am using is IBExpert.
If it is possible, how do I do that?
Let's consider you have table TEST with one column NAME:
create table test (name varchar(32));
insert into test values('test1');
insert into test values('test2');
insert into test values('test3');
commit;
select * from test;
It is possible to change the column from varchar to BLOB by the following script:
alter table test add name_blob blob;
update test set name_blob = name;
commit;
alter table test drop name;
alter table test alter column name_blob to name;
commit;
select * from test;
Specifically in IBExpert, this is easy to do with Firebird 2.1 and Firebird 2.5 via direct modification of the system tables (this "obsolete" method was prohibited in Firebird 3, but nothing was introduced to replace it).
This works in both directions, VARCHAR to BLOB and BLOB to VARCHAR.
You have to have a DOMAIN (that is, a named data type) of BLOB type in the database; IBExpert will then issue the command below, which Firebird 2.x executes, provided you set Firebird 2.x in the database options.
If you don't have IBExpert, then you have to issue the following commands yourself:
CREATE DOMAIN T_TEXT_BLOB AS -- prerequisite
BLOB SUB_TYPE 1;
update RDB$RELATION_FIELDS -- the very action
set RDB$FIELD_SOURCE = 'T_TEXT_BLOB'
/* the name of the blob domain to be set as the field's new data type */
where RDB$FIELD_NAME = '.....' /* name of the column */
and RDB$RELATION_NAME = '....'; /* name of the table */
commit;
See also:
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref25-ddl-domn.html
https://firebirdsql.org/file/documentation/reference_manuals/fblangref25-en/html/fblangref-appx04-relfields.html
When you create a column, or change its datatype, without explicitly naming a type (like plain varchar(10)), Firebird under the hood creates an automatically-managed one-per-column domain with a name like RDB$12345. So while you can target those too, an explicit named domain is more practical and safer.
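You can check which domain currently backs a column with a query like this (a sketch, using the TEST table from Maxim's answer):
-- Shows each column of TEST and its backing domain, whether an explicit
-- one or an auto-generated RDB$NNN domain.
select RDB$FIELD_NAME, RDB$FIELD_SOURCE
from RDB$RELATION_FIELDS
where RDB$RELATION_NAME = 'TEST';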
This method, however, fails on Firebird 3, where you do have to copy the whole table, as shown in Maxim's answer above.
http://tracker.firebirdsql.org/browse/CORE-6052
The FB devs warn about "bugs" and say "it never worked properly", but refuse to give details.
UPDATE: I finally managed to reproduce the bug mentioned in the tracker.
At least in Firebird 2.1, reference counting is broken with regard to BLOB payloads.
So the trick seems to be to immediately re-write the implicitly created BLOBs as explicit ones, tricking Firebird into thinking we supplied new content for all the BLOB values.
Assuming the names from Maxim's answer above...
To trigger and demonstrate the bug take the "vanilla" VarChar database, apply the conversion described above, and issue the following commands:
update test set name = name -- trying to force data conversion, but instead triggering the Firebird reference counting bug
Then any command that tries to actually read the field contents, such as
select cast( name as VarChar(200) ) from test
gets shot down with the infamous "invalid BLOB ID" Firebird error.
To work around the bug we must prevent Firebird from doing its (broken) reference counting. So we must do a fake update that invokes the expression evaluator, so that the Firebird optimizer loses track of the value sources and fails to realize the data was not really changed.
update test set name = '' || name || '' -- really forcing data over-writing and conversion, bypassing BLOB reference counting
Now
select cast( name as VarChar(200) ) from test
works like a charm (though if 200 is too short for your strings you would get an "overflow" error instead).
The update can use any other expression that invokes the expression evaluator, for example update test set name = cast( name as VarChar( NNN ) ) - but then you would need to devise a large enough NNN for each specific column of each specific table. Concatenation with the empty string is universal and does the job, at least on Firebird 2.1.
The above applies to Firebird 2.1.7 Win32. I did not manage to trigger "invalid BLOB ID" with Firebird 2.5.8 Win64 - it "just worked".
At least when using a single-connection schema update script, which is anyway the intended way to do database upgrades.
Maybe if I did schema upgrades while users were simultaneously working actively, FB 2.5 would break too - I don't know.
Whether to use this risky shortcut, disregarding the FB developers' hints, or the "official" approach of Maxim's answer - possibly dismantling and then re-creating half the database because of "dependencies" on the field to be dropped - is up to the reader.
In HSQLDB v2.3.1 there is a CREATE TYPE clause for defining UDTs. But there appears to be no ALTER TYPE clause, as far as the docs are concerned (and the db returns an unexpected token error if I try this).
Is it possible to amend/drop a UDT in HSQLDB? What would be the best practice, if for example I originally created
create type CURRENCY_ID as char(3)
because I decide I'm going to use ISO codes. But then I actually decide that I'm going to store the codes as integers instead. What is the best way to modify the schema in my db? (this is a synthetic example, obviously I wouldn't use integers in this case).
I guess I might do
alter table inventory alter column ccy set data type int
drop type CURRENCY_ID
create type CURRENCY_ID as int
alter table inventory alter column ccy set data type CURRENCY_ID
but is there a better way to do this?
After trying various methods, I ended up writing a script to edit the *.script file of the database directly. It's a plain text file with SQL commands that recreates the DB programmatically. In detail:
open db, shutdown compact
Edit the script file: replace the type definition, e.g. create type XXX as int to create type XXX as char(4)
For each table, replace the insert into table XXX values (i,...) with insert into table XXX values('str',...). This was done with a script that had the mappings from the old (int) value into the new (char) value.
In my particular case, I was changing a primary key, so I had to remove the identity directive from the create table statement, and also remove a line that had an alter table XXX alter column YYY restart sequence 123.
save and close script file, open db, shutdown compact
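For reference, the shutdown steps are plain SQL statements issued on the open connection:
-- Closes the database and rewrites the *.script file as a compact,
-- plain-text sequence of CREATE and INSERT statements.
SHUTDOWN COMPACT;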
This isn't great, but it worked. Advantages:
Ability to re-define UDT.
Ability to map the table values programmatically.
Method is generic and can be used for other schema changes, beside UDTs.
Cons
No checking that schema is consistent (although it does throw up errors if it can't read the script).
Dangerous when treating the file as plain text, e.g. what if I have a VARCHAR column with newlines in it? When I parse the script file and write it back, this would need to be escaped.
Not sure if this works with non-memory DBs, i.e. those that don't have only a *.script file when shut down.
Probably not efficient for large DBs. My DB was small ~ 1MB.
We are using SQL Server 2008. We have an existing database and needed to add a new column to one of the tables, which has only 2700 rows, but one of its columns is of type VARCHAR(8000). When I try to add the new column (CHAR(1) NULL) using the ALTER TABLE command, it takes too much time! It had run for 5 minutes and was still going, so I stopped it.
Below is the command I was using to try to add the new column:
ALTER TABLE myTable Add ColumnName CHAR(1) NULL
Can someone help me understand how SQL Server handles the ALTER TABLE command? What happens exactly?
Why does it take so much time to add a new column?
EDIT:
What is the effect of table size on the ALTER command?
Altering a table requires a schema lock. Many other operations require the same lock too. After all, it wouldn't make sense to add a column halfway through a select statement.
So a likely explanation is that a process had the table locked for 5 minutes. The ALTER then has to wait until it gets the lock itself.
You can see blocked processes, and the blocking process, from the Activity Monitor in SQL Server Management Studio.
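You can also check for blocking from a query window (a sketch using the standard dynamic management views; requires the VIEW SERVER STATE permission):
-- Lists requests that are currently blocked and the sessions blocking them.
SELECT session_id, blocking_session_id, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;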
Well, one thing to bear in mind is that you were adding a new fixed-length column to the table. The way rows are structured in storage, all fixed-length columns are placed before all of the variable-length columns in each row. So every row would have had to be updated in storage to make this change.
If, in turn, this caused the number of rows which could be stored on each page to change, a great many new allocations may have been required.
That being said, for the number of rows indicated, I wouldn't have thought it should take 5 minutes - unless, as Andomar indicated, there was some lock contention also involved.
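If you want to see whether the ALTER actually rewrote the table on disk, you could compare its space usage before and after (a sketch using the built-in procedure; myTable is the table from the question):
-- Reports row count plus reserved, data and index sizes for the table.
EXEC sp_spaceused 'myTable';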
I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timeStamp in the same table...
Why doesn't oracle flag this as being an issue when I create the table?
Here is the Sequence of commands I enter:
Creating the Table:
CREATE TABLE myTable
(id NUMBER PRIMARY KEY,
field1 TIMESTAMP(6),
timeStamp NUMBER
);
Creating the Sequence:
CREATE SEQUENCE test_sequence
START WITH 1
INCREMENT BY 1;
Creating the trigger:
CREATE OR REPLACE TRIGGER test_trigger
BEFORE INSERT
ON myTable
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
END;
/
Here is the error message I get:
ORA-06552: PL/SQL: Compilation unit analysis terminated
ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed
Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name.
As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger...
CLARIFICATION
I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create the trigger and not when I created the table; I would have at least expected a warning.
That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting. Maybe that's just because I'm using the express version, though.
If this is a bug in Oracle, how would one who doesn't have a support contract go about reporting it? I'm just playing around with the express version because I have to migrate some code from MySQL to Oracle.
There is a note on Metalink about this (227615.1); extract below:
# symptom: Creating Trigger fails
# symptom: Compiling a procedure fails
# symptom: ORA-06552: PL/SQL: %s
# symptom: ORA-06553: PLS-%s: %s
# symptom: PLS-320: the declaration of the type of this expression is incomplete or malformed
# cause: One of the tables being referenced was created with a column name that is one of the datatypes (reserved key word). Even though the field is not referenced in the PL/SQL SQL statements, this error will still be produced.
fix:
Workaround:
1. Rename the column to a non-reserved word.
2. Create a view and alias the column to a different name.
TIMESTAMP is not listed in the Oracle docs as a reserved word (which is surprising).
It is listed in the V$RESERVED_WORDS data dictionary view, but its RESERVED flag is set to 'N'.
It might be a bug in the trigger processing. I would say this is a good one for Oracle support.
You've hinted at the answer yourself. You're using timestamp as a column name, but it's also a keyword. Change the column name to something else (e.g. xtimestamp) and the trigger compiles.
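For example (xtimestamp is just an illustrative name):
CREATE TABLE myTable
(id NUMBER PRIMARY KEY,
field1 TIMESTAMP(6),
xtimestamp NUMBER
);
-- test_trigger from the question now compiles against this table.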
Well, I'm not totally sure about it, but I think this happens because the SQL code used to manipulate and access database objects is interpreted by an interpreter different from the one used to interpret PL/SQL code.
Bear in mind that SQL and PL/SQL are different things, and so they are processed differently. So I think there is an error in one of the interpreters; I'm just not sure which one it is.
Instead of having Oracle maintain a view, use EXECUTE IMMEDIATE (i.e. if 'Rename the column to a non-reserved word' is not an option).
You can execute via EXECUTE IMMEDIATE. It's not the nicest way, but it works and avoids the column rename.
In my case renaming the column would have been chaotic.
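A sketch of what that might look like in the trigger (untested; it assumes deferring the sequence fetch to run time is enough to dodge the compile-time PLS-320 error):
CREATE OR REPLACE TRIGGER test_trigger
BEFORE INSERT ON myTable
FOR EACH ROW
BEGIN
-- The sequence fetch is parsed at run time rather than compile time.
EXECUTE IMMEDIATE 'SELECT test_sequence.nextval FROM dual' INTO :NEW.ID;
END;
/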