We have a table with a LONG column. A procedure (called from the front-end application) is used to insert data into this table, and its input parameter is also declared as LONG.
Now, due to some issues with the LONG column values, we need to switch from LONG to CLOB. This has to be performed on the production database.
Sample:
Table name: TEST_REC_TAB
This table contains several million records.
Can I proceed with the steps below?
Create a new table using the statement below; the LONG column will be created as CLOB in the new table.
create table TEST_REC_TAB_BKP as select E_ID, to_lob(EMAIL_BODY) EMAIL_BODY from TEST_REC_TAB;
Rename the TEST_REC_TAB table to a different name.
alter table TEST_REC_TAB RENAME TO TEST_REC_TAB_TEMP;
Rename the backup table to the original name (so the backup table is used as the original table).
alter table TEST_REC_TAB_BKP RENAME TO TEST_REC_TAB;
Set the CLOB column in the new table to NOT NULL:
alter table TEST_REC_TAB modify email_body not null;
Further, we will change the LONG parameter to CLOB in the procedure.
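For illustration, a minimal sketch of that parameter change, assuming a hypothetical procedure INS_TEST_REC that simply inserts one row (the real procedure name and logic are not shown in the question):
create or replace procedure ins_test_rec (
    p_e_id       in test_rec_tab.e_id%type,
    p_email_body in clob   -- was previously declared as LONG
) as
begin
    insert into test_rec_tab (e_id, email_body)
    values (p_e_id, p_email_body);
end;
/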
Will there be any issue with this approach? Kindly suggest if there is any better way to achieve this.
OR
Can we directly alter the main table column from LONG to CLOB?
It can be done directly, e.g.
SQL> create table t ( x int, y long );
Table created.
SQL> insert into t
2 values (1,'xxx');
1 row created.
SQL> commit;
Commit complete.
SQL> alter table t modify y clob;
Table altered.
but it's an expensive operation and could mean an extended time that the table is out of commission. Check out DBMS_REDEFINITION as a nice way of basically automating the process you described above, whilst keeping access to the table during 99% of the exercise.
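For reference, a rough sketch of what the online redefinition route might look like for the table above (the schema name TEST_OWNER and the interim table name are assumptions; this variant needs a usable primary key on the table):
-- interim table with the target CLOB definition
create table test_rec_tab_int (
    e_id       number primary key,
    email_body clob
);

begin
  -- verify the table can be redefined online using its primary key
  dbms_redefinition.can_redef_table('TEST_OWNER', 'TEST_REC_TAB',
                                    dbms_redefinition.cons_use_pk);

  -- start redefinition, mapping the LONG column through TO_LOB
  dbms_redefinition.start_redef_table(
      uname        => 'TEST_OWNER',
      orig_table   => 'TEST_REC_TAB',
      int_table    => 'TEST_REC_TAB_INT',
      col_mapping  => 'E_ID E_ID, TO_LOB(EMAIL_BODY) EMAIL_BODY',
      options_flag => dbms_redefinition.cons_use_pk);

  -- optionally copy indexes, constraints, grants and triggers here
  -- with dbms_redefinition.copy_table_dependents

  -- swap the structures; TEST_REC_TAB keeps its name but gets the CLOB column
  dbms_redefinition.finish_redef_table('TEST_OWNER', 'TEST_REC_TAB', 'TEST_REC_TAB_INT');
end;
/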
Related
How can I change DATA TYPE of a column from number to varchar2 without deleting the table data?
You can't.
You can, however, create a new column with the new data type, migrate the data, drop the old column, and rename the new column. Something like
ALTER TABLE table_name
ADD( new_column_name varchar2(10) );
UPDATE table_name
SET new_column_name = to_char(old_column_name, <<some format>>);
ALTER TABLE table_name
DROP COLUMN old_column_name;
ALTER TABLE table_name
RENAME COLUMN new_column_name TO old_column_name;
If you have code that depends on the position of the column in the table (which you really shouldn't have), you could rename the table and create a view on the table with the original name of the table that exposes the columns in the order your code expects until you can fix that buggy code.
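If you do end up needing that workaround, a minimal sketch under assumed names (table_name, old_column_name and the other column names are placeholders):
-- keep the physical data under a renamed table
ALTER TABLE table_name RENAME TO table_name_base;

-- expose the columns, under the original name, in the order the legacy code expects
CREATE VIEW table_name AS
  SELECT col1, col2, old_column_name, col3
  FROM table_name_base;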
You have to first deal with the existing rows before you modify the column DATA TYPE.
You could do the following steps:
Add the new column with a new name.
Update the new column from old column.
Drop the old column.
Rename the new column with the old column name.
For example,
alter table t add (col_new varchar2(50));
update t set col_new = to_char(col_old);
alter table t drop column col_old cascade constraints;
alter table t rename column col_new to col_old;
Make sure you re-create any required indexes which you had.
You could also try the CTAS approach, i.e. create table as select. But the above is safe and preferable.
The most efficient way is probably to do a CREATE TABLE ... AS SELECT (CTAS).
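A hedged sketch of that CTAS route (all names and the target type are placeholders; constraints, indexes and grants have to be re-created afterwards):
CREATE TABLE table_name_new AS
  SELECT to_char(old_column_name) AS old_column_name,
         other_column
  FROM   table_name;

-- swap the tables, then re-create constraints, indexes and grants
ALTER TABLE table_name RENAME TO table_name_old;
ALTER TABLE table_name_new RENAME TO table_name;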
alter table table_name modify (column_name VARCHAR2(255));
Since we can't change the data type of a column that already has values, the approach I followed is as below.
Say the column whose type you want to change is 'A'; this can be done with SQL Developer.
First, sort the table data by another column (e.g. datetime).
Next, copy the values of column 'A' and paste them into an Excel file.
Delete the values of column 'A' and commit.
Change the data type and commit.
Sort the table data again by the previously used column (e.g. datetime).
Then paste the copied data back from Excel and commit.
I'm processing a big Hive table (more than 500 billion records).
The processing is too slow and I would like to make it faster.
I think that by adding partitions, the process could be more efficient.
Can anybody tell me how I can do that?
Note that my table already exists.
My table :
create table T(
nom string,
prenom string,
...
date string)
I want to partition the table on the date field.
Thanks
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE table_name PARTITION(date)
SELECT column1, column2, ..., date FROM table_name;
Note:
In the INSERT statement for a partitioned table, make sure the partition columns come last in the SELECT clause.
You have to restructure the table. Here are the steps:
1. Make sure no other process is writing to the table.
2. Create a new external table that uses partitioning.
3. Insert into the new table by selecting from the old table.
4. Drop the new (external) table; only the table definition is dropped, the data stays where it is.
5. Drop the old table.
6. Create a table with the original name, pointing to the location from step 2.
7. Run the repair command to fix all the partition metadata.
Alternative to steps 4, 5, 6 and 7:
4. Create a table with the original name by running SHOW CREATE TABLE on the new table and replacing the table name with the original one.
5. Run the LOAD DATA INPATH command to move the files under the partitions to the corresponding partitions of the new table.
6. Drop the external table that was created.
Both approaches achieve the restructuring with a single insert/MapReduce job; a sketch of the first one follows below.
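A rough HiveQL sketch of the first approach, assuming the table T from the question (only the nom, prenom and date columns are shown) and a hypothetical warehouse location:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- step 2: new external table, partitioned by date
CREATE EXTERNAL TABLE T_part (
    nom string,
    prenom string
)
PARTITIONED BY (`date` string)
LOCATION '/user/hive/warehouse/t_part';

-- step 3: copy the data; the partition column goes last in the SELECT
INSERT OVERWRITE TABLE T_part PARTITION(`date`)
SELECT nom, prenom, `date` FROM T;

-- steps 4 to 6: drop the external table (the files stay), drop the old table,
-- and re-create the original name over the same location
DROP TABLE T_part;
DROP TABLE T;
CREATE TABLE T (
    nom string,
    prenom string
)
PARTITIONED BY (`date` string)
LOCATION '/user/hive/warehouse/t_part';

-- step 7: rebuild the partition metadata
MSCK REPAIR TABLE T;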
How can I alter the type of a column in an existing table in MonetDB? According to the documentation the code should be something like
ALTER TABLE <tablename> ALTER COLUMN <columnname> SET ...
but then I am basically lost because I do not know which standard the SQL used by MonetDB follows here and I get a syntax error. If this statement is not possible I would be grateful for a workaround that is not too slow for large (order of 10^9 records) tables.
Note: I ran into this problem while doing some bulk data imports from csv files into a table in my database. One of the columns is of type INT but the values in the file at some point exceed the INT limit of 2^31-1 (yes, the table is big) and so the transaction aborts. After I found out the reason for this failure, I wanted to change it to BIGINT but all versions of SQL code I tried failed.
This is currently not supported. However, there is a workaround:
Example table; say we want to change the type of column b from integer to double:
create table a(b integer);
insert into a values(42);
Create a temporary column:
alter table a add column b2 double;
Set the data in the temporary column to the original data:
update a set b2=b;
Remove the original column:
alter table a drop column b;
Re-create the original column with the new type:
alter table a add column b double;
Move the data from the temporary column to the new column:
update a set b=b2;
Drop the temporary column:
alter table a drop column b2;
Profit
Note that this will change the ordering of columns if there are more than one. However, this is only a cosmetic issue.
I have a BLOB field that contains images of 1-2 MBs. I want to create a new table that has only 2 fields - a primary key with a reference ID and the BLOB, then replace the field that holds the BLOB with a field that holds the reference ID for the same BLOB in the new table.
I don't know SQL very well; I'm not even sure if this is possible using only SQL. Or do I need to write a C++/Python program to do it?
Note: I'm using SQLite for the database, and since they don't enforce types for the fields, I don't even need to create a new field, I can just replace the BLOB with the ID.
You have to copy all blobs to the other table, and update the old field with the ID of the new row.
The latter is possible with last_insert_rowid(), but only for a single row, so you have to use a mechanism that does the inserts and updates step by step.
This can be done with a trigger (and a dummy view, so that the UPDATE that triggers the copying does not actually update a table):
CREATE TABLE NewTable (ID INTEGER PRIMARY KEY, Blob);
CREATE TEMPORARY VIEW OldTable_view AS
SELECT * FROM OldTable;
CREATE TEMPORARY TRIGGER move_blob
INSTEAD OF UPDATE ON OldTable_view
BEGIN
INSERT INTO NewTable(Blob)
SELECT Blob FROM OldTable WHERE ID = OLD.ID;
UPDATE OldTable
SET Blob = last_insert_rowid()
WHERE ID = OLD.ID;
END;
UPDATE OldTable_view SET ID = ID;
DROP VIEW OldTable_view;
I have an Oracle table with 2 columns, both using the NUMBER data type. When I enter any number starting with 0, it removes the 0, so the solution is to change the data type to VARCHAR2. I have a script that:
creates a temp table with VARCHAR2 and a primary key
copies the old table
drops the old table
renames the temp table to the old table
However, I'm facing an issue: when copying the table, any data that was truncated before remains that way. Is there any way I can add a 0 at the start of the old data? Below is the script I have created.
/* create a new table named temp */
CREATE TABLE TEMP_TABLE
(
IMEISV_PREFIX VARCHAR2(8),
IMEI_FLAG NUMBER(2),
CONSTRAINT IMEIV_PK PRIMARY KEY (IMEISV_PREFIX)
);
/* copy everything from the old table to the new temp table */
INSERT INTO TEMP_TABLE
SELECT * FROM REF_IMEISV_PREFIX;
/* Delete the original table */
DROP TABLE REF_IMEISV_PREFIX;
/* Rename the temp table to the original table */
RENAME TEMP_TABLE TO REF_IMEISV_PREFIX;
No, there is not. When Oracle saves the data to the database, it saves it in the format used at that time. All other information is removed. There is no way to restore historic data.
In fact, when you stored the data in the database before, let's say you did this:
insert into tableX (anumber) values ('01');
In fact it does:
insert into tableX (anumber) values (to_number('01'));
So it is lost from the very beginning. (Note that the example is actually a bad habit! You should never rely on implicit casting in the database; always hand over the data in the right data type!)
If you need to show that leading zero, your problem is an interface problem, not a database problem. You can format your output to show however many leading zeros you want.
If the data is a number, leave it as is.
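For example, a small sketch of formatting on output instead of changing the column type (assuming the prefix should always display as 8 digits; the format mask is only an illustration):
-- pad the numeric value with leading zeros only when displaying it
SELECT TO_CHAR(imeisv_prefix, 'FM00000000') AS imeisv_prefix_display
FROM   ref_imeisv_prefix;

-- LPAD works as well (the number is implicitly converted to a string first)
SELECT LPAD(imeisv_prefix, 8, '0') AS imeisv_prefix_display
FROM   ref_imeisv_prefix;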