I have an Oracle table with two columns, both using the NUMBER data type. When I enter any number starting with 0, the leading 0 is removed, so my solution is to change the data type to VARCHAR2. I have a script that:
creates a temp table with a VARCHAR2 column and a primary key
copies the old table
drops the old table
renames the temp table to the old table name
However, I'm facing an issue: when copying the table, any data that was truncated before stays that way. Is there any way I can add a 0 at the start of the old data? Below is the script I have created.
/* create a new table named temp */
CREATE TABLE TEMP_TABLE
(
IMEISV_PREFIX VARCHAR2(8),
IMEI_FLAG NUMBER(2),
CONSTRAINT IMEIV_PK PRIMARY KEY (IMEISV_PREFIX)
);
/* copy everything from the old table to the new temp table */
INSERT INTO TEMP_TABLE
SELECT * FROM REF_IMEISV_PREFIX;
/* Delete the original table */
DROP TABLE REF_IMEISV_PREFIX;
/* Rename the temp table to the original table */
RENAME TEMP_TABLE TO REF_IMEISV_PREFIX;
No, there is not. When Oracle saves the data to the database, it saves it in the format in effect at that time; all other information is discarded. There is no way to restore the historic data.
When you stored the data to the database before, say you did this:
insert into tableX (anumber) values ('01');
In fact it does:
insert into tableX (anumber) values (to_number('01'));
So the leading zero is lost from the very beginning. (Note that the example is actually a bad habit: never rely on implicit casting in the database; always hand over the data in the right data type!)
If you need to show the leading zero, your problem is an interface problem, not a database problem. You can format your output with as many leading zeros as you want.
If the data is a number, leave it as is.
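The formatting-at-the-interface point can be sketched in Oracle SQL; the 8-digit width below is an assumption based on the question's VARCHAR2(8) column:

```sql
-- Pad the stored NUMBER to 8 digits for display only;
-- the FM modifier suppresses the leading sign space TO_CHAR adds.
SELECT TO_CHAR(imeisv_prefix, 'FM00000000') AS padded_prefix
FROM   ref_imeisv_prefix;

-- Equivalent with LPAD on the string representation:
SELECT LPAD(TO_CHAR(imeisv_prefix), 8, '0') AS padded_prefix
FROM   ref_imeisv_prefix;
```

The underlying column stays NUMBER; only the query output carries the zeros.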
We have a table which is having a LONG datatype column. A procedure is used (from front end application) to insert data into this table which is also having input parameter as LONG.
Now due to some issues with LONG column values we need to switch from LONG TO CLOB. This needs to be performed on production database.
Sample :
Table Name : TEST_REC_TAB
This table contains several million records.
Can I proceed with the steps below?
Create a new table as below; the LONG column will be created as CLOB in the new table.
create table TEST_REC_TAB_BKP as select E_ID ,to_lob(EMAIL_BODY) EMAIL_BODY from TEST_REC_TAB;
Rename the TEST_REC_TAB table to some different name.
alter table TEST_REC_TAB RENAME TO TEST_REC_TAB_TEMP;
Rename backup table to original. (to use bkp table as original table)
alter table TEST_REC_TAB_BKP RENAME TO TEST_REC_TAB;
Set CLOB column in new table as not null;
alter table TEST_REC_TAB modify email_body not null;
Finally, we will change the procedure's LONG input parameter to CLOB.
Will there be any issue with this approach? Kindly suggest if there is a better way to achieve this.
OR
Can we directly alter the main table column from LONG to CLOB?
It can be done directly, eg
SQL> create table t ( x int, y long );
Table created.
SQL> insert into t
2 values (1,'xxx');
1 row created.
SQL> commit;
Commit complete.
SQL> alter table t modify y clob;
Table altered.
but it's an expensive operation and could mean an extended time that the table is out of commission. Check out DBMS_REDEFINITION as a nice way of basically automating the process you described above, whilst keeping access to the table during 99% of the exercise.
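A minimal DBMS_REDEFINITION sketch for the same LONG-to-CLOB conversion; the schema name SCOTT, the interim table definition, and the assumption of a primary key on TEST_REC_TAB are all placeholders to adapt:

```sql
-- Interim table already has the target CLOB type
CREATE TABLE test_rec_tab_int (
  e_id       NUMBER PRIMARY KEY,
  email_body CLOB
);

BEGIN
  -- Check that online redefinition (by primary key) is possible
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'TEST_REC_TAB');

  -- Start copying; TO_LOB in the column mapping converts LONG to CLOB
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => 'SCOTT',
    orig_table  => 'TEST_REC_TAB',
    int_table   => 'TEST_REC_TAB_INT',
    col_mapping => 'e_id e_id, TO_LOB(email_body) email_body');

  -- Swap the tables; the original stays queryable until this brief final step
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'TEST_REC_TAB', 'TEST_REC_TAB_INT');
END;
/
```

Between start and finish you would normally also call COPY_TABLE_DEPENDENTS to carry over indexes, grants, and triggers.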
For an erroneous situation, we've had to park a bunch of DB rows, which we now want to put back into a table. The problem is one of the columns, RowVersion, which is of type TIMESTAMP. Inserting the old values does not seem to be possible, so it seems these would need to be re-generated. I am still researching what the impact would be (seems none/low).
However, I was thinking of a workaround that does keep the original row values intact. My idea was as follows, in 1 locked transaction:
-- Sample table with only UNIQUEIDENTIFIER Id & TIMESTAMP RowVersion
ALTER TABLE Test
ALTER COLUMN RowVersion VARBINARY(8);
INSERT INTO Test (
Id,
RowVersion
)
VALUES (
'd261ff28-6279-4e81-93f4-825c9ca689bd',
0x000000000000CF0A
);
ALTER TABLE Test
ALTER COLUMN RowVersion TIMESTAMP;
This results in the error Cannot alter column 'RowVersion' because it is 'timestamp'. Is there any workaround for this? Or, to avoid the XY problem: is there any way of inserting rows with a specific RowVersion value AND keeping the column as a timestamp?
How can I alter the type of a column in an existing table in MonetDB? According to the documentation the code should be something like
ALTER TABLE <tablename> ALTER COLUMN <columnname> SET ...
but then I am basically lost, because I do not know which SQL standard MonetDB follows here, and I get a syntax error. If this statement is not possible, I would be grateful for a workaround that is not too slow for large tables (on the order of 10^9 records).
Note: I ran into this problem while doing some bulk data imports from csv files into a table in my database. One of the columns is of type INT but the values in the file at some point exceed the INT limit of 2^31-1 (yes, the table is big) and so the transaction aborts. After I found out the reason for this failure, I wanted to change it to BIGINT but all versions of SQL code I tried failed.
This is currently not supported. However, there is a workaround:
Example table for this workaround; say we want to change the type of column b from integer to double:
create table a(b integer);
insert into a values(42);
Create a temporary column:
alter table a add column b2 double;
Set the data in the temporary column to the original data:
update a set b2 = b;
Remove the original column:
alter table a drop column b;
Re-create the original column with the new type:
alter table a add column b double;
Move the data from the temporary column to the new column:
update a set b = b2;
Drop the temporary column:
alter table a drop column b2;
Profit
Note that this will change the ordering of columns if there are more than one. However, this is only a cosmetic issue.
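Putting the steps together, the whole workaround for the example table is:

```sql
-- Change column b of table a from integer to double,
-- since MonetDB lacks a direct ALTER COLUMN ... SET TYPE.
alter table a add column b2 double;  -- temporary column with the target type
update a set b2 = b;                 -- integer values cast implicitly to double
alter table a drop column b;         -- remove the old integer column
alter table a add column b double;   -- re-create b with the new type (appended last)
update a set b = b2;                 -- move the data back
alter table a drop column b2;        -- drop the temporary column
```

On a table with 10^9 records, the two UPDATE steps rewrite the whole column and will dominate the runtime.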
I have a table that is being used by an old program, clarion. In clarion when you add a table, it makes some kind of dictionary that maps the table. To create this mapping, the columns must be in order. Clarion doesn't read it by name, but it does read it by order and size.
I have to 'alter' a column in table. However, that column needs to be calculated by a function:
ALTER FUNCTION uf_gifValue (@aID int)
RETURNS varchar(8)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @returnValue varchar(8)
    SELECT @returnValue = ISNULL(C.columnName, '')
    FROM
        dbo.TableA AS A  -- this is the old table
        LEFT JOIN dbo.TableB AS B
            ON A.BId = B.BId
        LEFT JOIN dbo.TableC AS C
            ON B.CId = C.CId
    WHERE
        A.AId = @aID
    RETURN ISNULL(@returnValue, '')
END
GO
The table that is affected have schema something like:
Table
{
column1
.
.
.
.
AffectedColumn
.
.
.
.
AId int identity(1,1) PRIMARY KEY}
First I tried to make the function, make the temp table, drop the constraints on the old table and make a temp table with the calculated field. Then, I raised the constraints on the temp table and inserted from the old table to the temp table. Lastly, I dropped the old table and renamed the temp table to the old table name. This method did not work because the old table is referenced by a function and cannot be dropped.
Next, I tried to make a function that does nothing and returns ''. I made a temp table with the calculated field and dropped the constraints of the old table. Then, I raised constraints on the temp table and inserted from the old table to the temporary table, dropped the old table and renamed the temp table to the old table name. Then, I altered the function so that it returns the proper value.
This issue I had with this method is that I cannot alter calculated column or function while it is being referenced.
The last thing that I tried was to drop the constraints on the old table, make the function, add the calculated column, and add copies of the columns that come after the affected one in order to preserve column order. Then, I dropped the columns between the affected column and the new columns, including the old column. However, this did not work because I cannot drop AId, since it comes after the affected column.
Please Note: The value of the column is being calculated from other tables than the table containing the column.
Is there a way that I will be able to alter this column to contain the value as calculated by my function?
Why don't you rename the table and add a view named like the old table, returning everything plus the result of the function evaluated in real time? This kind of simple view is updatable.
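A sketch of that approach in T-SQL; the table name MyTable and its column list are assumptions from the question, and the view's column order must match what Clarion expects:

```sql
-- Move the physical table out of the way
EXEC sp_rename 'dbo.MyTable', 'MyTable_base';
GO

-- Re-create the old name as a view; the affected column is now
-- computed per row by the function instead of being stored.
CREATE VIEW dbo.MyTable
AS
SELECT t.column1,
       dbo.uf_gifValue(t.AId) AS AffectedColumn,
       t.AId
FROM dbo.MyTable_base AS t;
```

Reads through the view always see the current function result; writes to the base columns still go through, while the computed column is read-only.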
I'm trying to setup temporary tables for unit-testing purposes. So far I managed to create a temporary table which copies the structure of an existing table:
CREATE TEMP TABLE t_mytable (LIKE mytable INCLUDING DEFAULTS);
But this lacks the data from the original table. I can copy the data into the temporary table by using a CREATE TABLE AS statement instead:
CREATE TEMP TABLE t_mytable AS SELECT * FROM mytable;
But then the structure of t_mytable will not be identical, e.g. column sizes and default values are different. Is there a single statement which copies everything?
Another problem with the first query using LIKE is that the key column still references the SEQUENCE of the original table, and thus increments it on insertion. Is there an easy way to create the new table with its own sequence, or will I have to set up a new sequence by hand?
I'm using the following code to do it:
CREATE TABLE t_mytable (LIKE mytable INCLUDING ALL);
ALTER TABLE t_mytable ALTER id DROP DEFAULT;
CREATE SEQUENCE t_mytable_id_seq;
INSERT INTO t_mytable SELECT * FROM mytable;
SELECT setval('t_mytable_id_seq', (SELECT max(id) FROM t_mytable), true);
ALTER TABLE t_mytable ALTER id SET DEFAULT nextval('t_mytable_id_seq');
ALTER SEQUENCE t_mytable_id_seq OWNED BY t_mytable.id;
Postgres 10 or later
Postgres 10 introduced IDENTITY columns conforming to the SQL standard (with minor extensions). The ID column of your table would look something like:
id integer PRIMARY KEY GENERATED BY DEFAULT AS IDENTITY
Syntax in the manual.
Using this instead of a traditional serial column avoids your problem with sequences. IDENTITY columns use exclusive, dedicated sequences automatically, even when the specification is copied with LIKE. The manual:
Any identity specifications of copied column definitions will only be
copied if INCLUDING IDENTITY is specified. A new sequence is created
for each identity column of the new table, separate from the sequences
associated with the old table.
And:
INCLUDING ALL is an abbreviated form of INCLUDING DEFAULTS INCLUDING IDENTITY INCLUDING CONSTRAINTS INCLUDING INDEXES INCLUDING STORAGE INCLUDING COMMENTS.
The solution is simpler now:
CREATE TEMP TABLE t_mytable (LIKE mytable INCLUDING ALL);
INSERT INTO t_mytable TABLE mytable;
SELECT setval(pg_get_serial_sequence('t_mytable', 'id'), max(id)) FROM t_mytable;
As demonstrated, you can still use setval() to set the sequence's current value; a single SELECT does the trick. pg_get_serial_sequence() gets the name of the sequence.
Related:
How to reset postgres' primary key sequence when it falls out of sync?
Is there a shortcut for SELECT * FROM?
Creating a PostgreSQL sequence to a field (which is not the ID of the record)
Original (old) answer
You can take the create script from a database dump or a GUI like pgAdmin (which reverse-engineers database object creation scripts), create an identical copy (with separate sequence for the serial column), and then run:
INSERT INTO new_tbl
SELECT * FROM old_tbl;
The copy cannot be 100% identical if both tables reside in the same schema. Obviously, the table name has to be different. Index names would conflict, too. Retrieving serial numbers from the same sequence would probably not be in your best interest, either. So you have to (at least) adjust the names.
Placing the copy in a different schema avoids all of these conflicts. When you create a temporary table from a regular table, as you demonstrated, that's automatically the case, since temp tables reside in their own temporary schema.
Or look at Francisco's answer for DDL code to copy directly.