oracle sql precision, scale, insert, calculate and drop - sql

table = mytable
temp col = tempcol
col = mycol
The table currently contains 5000 rows with various values from 99999.99999 down to 0.00001.
I need to keep the data, so I want a script that: creates a temp column, rounds the values to (7,3), sets mycol to NULL, modifies the column from NUMBER(10,5) to NUMBER(7,3), returns the data to mycol, and drops the temp column. Job done.
So far I have:
SELECT mycol
INTO tempcol
FROM mytable
update mytable set mycol = null
alter table mytable modify mycol number (7,3)
SELECT tempcol
INTO mycol
FROM mytable
drop tempcol
Can you please fill in the missing gaps or direct me to a solution?

Well, first of all, a NUMBER(10,5) can store values up to 99999.99999, while a NUMBER(7,3) tops out at 9999.999, so you will potentially encounter conversion errors. You probably want to change the column into a NUMBER(8,3), which keeps the five integer digits.
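To see the difference concretely, a quick illustration with literal values (a sketch, not your actual data):
-- 12345.6789 fits in NUMBER(10,5) but overflows NUMBER(7,3):
SELECT CAST(12345.6789 AS NUMBER(7,3)) FROM dual;  -- raises ORA-01438
-- with NUMBER(8,3) the same value is simply rounded:
SELECT CAST(12345.6789 AS NUMBER(8,3)) FROM dual;  -- 12345.679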
Now your plan seems sound: you cannot reduce the precision or the scale of a column while there is data in that column, so you will have to store the data in a temporary column. I would do it like this:
SQL> CREATE TABLE mytable (mycol NUMBER(10,5));
Table created
SQL> /* populate table */
2 INSERT INTO mytable
3 (SELECT dbms_random.value(0, 1e10)/1e5
4 FROM dual CONNECT BY LEVEL <= 1e3);
1000 rows inserted
SQL> /* new temp column */
2 ALTER TABLE mytable ADD (tempcol NUMBER(8,3));
Table altered
SQL> /* copy data to temp */
2 UPDATE mytable
3 SET tempcol = mycol,
4 mycol = NULL;
1000 rows updated
SQL> ALTER TABLE mytable MODIFY (mycol NUMBER(8,3));
Table altered
SQL> UPDATE mytable
2 SET mycol = tempcol;
1000 rows updated
SQL> /* cleaning */
2 ALTER TABLE mytable DROP COLUMN tempcol;
Table altered
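If you want to double-check afterwards, the data dictionary shows the new precision and scale (just an optional sanity check, not part of the migration itself):
SELECT column_name, data_precision, data_scale
FROM user_tab_columns
WHERE table_name = 'MYTABLE';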

Related

Copy data from one column (bigint) to another column (bigint[]) in postgres sql?

I need to copy all data from one column, customerId, to a new column (customerIds, which is in a different format) in the same table. The existing column customerId is of type bigint, and I need to copy its data into customerIds, whose data type is bigint[].
Is there any way to do this in PostgreSQL? I know how to copy data from one column to another when both columns have the same format, but I'm not sure how to do this when the new column is an array.
For the same table, when the columns have the same format, this works:
UPDATE table_name
SET customerId = customerIds
As you have only one id per customer, you can simply update the first element of the array:
CREATE TABLE table_name (customerId BIGINT, customerIds BIGINT[]);
INSERT INTO table_name VALUES(1);
INSERT INTO table_name VALUES(2);
INSERT INTO table_name VALUES(3);
INSERT INTO table_name VALUES(4);
INSERT INTO table_name VALUES(5);
UPDATE table_name SET customerIds[1] = customerId ;
CREATE TABLE
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
INSERT 0 1
UPDATE 5
SELECT * FROM table_name;
customerid | customerids
-----------+-------------
         1 | {1}
         2 | {2}
         3 | {3}
         4 | {4}
         5 | {5}
SELECT 5
fiddle
Maybe this helps:
with cte as (SELECT 4::BIGINT AS num)
SELECT array_agg(cte.num) FROM cte;
For your problem, something along these lines:
UPDATE table_name
SET customerIds = array_agg(customerId)
This answer is just an idea; it is not tested.
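For what it's worth, a tested form of the same idea uses the ARRAY constructor instead, since an aggregate function cannot appear directly in an UPDATE SET clause (a minimal sketch, assuming the table from the question):
-- wrap the scalar customerId into a one-element array
UPDATE table_name
SET customerIds = ARRAY[customerId];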

How to update a column's value using row number in Teradata

I want to update a column's value in this way:
new value = old value + row_number() * 1000
For row_number() I want to order by the old value, but I haven't found a solution.
Sample data:
column
1
3
5
After the update query it should be:
column
1001
2003
3005
CREATE VOLATILE TABLE test, NO FALLBACK
(MyCol SMALLINT NOT NULL)
PRIMARY INDEX (MyCol)
ON COMMIT PRESERVE ROWS;
INSERT INTO test VALUES (1);
INSERT INTO test VALUES (3);
INSERT INTO test VALUES (5);
SELECT MyCol FROM test;
UPDATE test
FROM (SELECT MyCol
, ROW_NUMBER() OVER (ORDER BY MyCol) AS RowNum_
FROM test) DT1
SET MyCol = test.MyCol + (DT1.RowNum_ * 1000)
WHERE test.MyCol = DT1.MyCol;
SELECT MyCol FROM TEST;
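With the three sample rows above, that final SELECT should return the values from the question's expected output: 1001, 2003 and 3005 (1 + 1*1000, 3 + 2*1000 and 5 + 3*1000).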

Couldn't create temp table from select query if result empty

I want to create a temp table from a select query (my table has many columns, so I don't want to create the temp table manually).
I use the following query:
SELECT * INTO #TempTable
FROM MyTable
WHERE ...
If this query returns no rows, it won't create #TempTable. Hence, I cannot use #TempTable for the next queries.
Is there a way to resolve this?
If the query in the code you posted:
SELECT *
INTO TempTable
FROM MyTable WHERE ...
returns no rows, it will still create an empty TempTable; it just won't fill it with any data, because no rows matched the WHERE clause. It will at least create TempTable with the same structure as MyTable, and it will be empty.
For example this:
SELECT * INTO TempTable FROM MyTable WHERE 1 <> 1;
Will always create an empty table TempTable with the same structure as MyTable since the predicate 1 <> 1 is always false.
However, you can also declare a table variable like so:
DECLARE @Temp TABLE(Field1 int, ...);
This happens because you are dynamically creating and populating the temporary table rather than creating it explicitly. In such a scenario, you should check for the existence of the temp table at the beginning, before you create it.
Try this:
IF OBJECT_ID('tempdb..#TempTable') IS NOT NULL
BEGIN
DROP TABLE #TempTable
END
SELECT * INTO #TempTable FROM MyTable
Select * From #TempTable
Your query:
SELECT * INTO #TempTable
FROM MyTable
WHERE ...
will create an empty table even if the SELECT returns no rows.
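A quick way to verify this behaviour (a sketch, assuming MyTable exists):
SELECT * INTO #TempTable FROM MyTable WHERE 1 = 0;  -- matches no rows
SELECT COUNT(*) FROM #TempTable;                    -- succeeds and returns 0
DROP TABLE #TempTable;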

Loop through rows and add a number for a column for each of them automatically in SQL Server

I have a table with over 500 rows and a column called ID of datatype INT. Currently the values are all NULL.
What I want to achieve is to populate the ID column with an incremental number for each row, say 1, 2, 3, 4, ..., 500 etc.
Please help me with any idea of how to achieve this with a SQL script.
Using ROW_NUMBER in a CTE is one way (sketched after this example), but here's an alternative: create a new id1 column as int identity(1,1), copy it over to id, then drop id1:
-- sample table
create table myTable(id int, value varchar(100));
-- populate 10 rows with just the value column
insert into myTable(value)
select top 10 'some data'
from sys.messages;
go
-- now populate id with sequential integers
alter table myTable add id1 int identity(1,1)
go
update myTable set id=id1;
go
alter table myTable drop column id1;
go
select * from myTable
Result:
id value
----------- -------------
1 some data
2 some data
3 some data
4 some data
5 some data
6 some data
7 some data
8 some data
9 some data
10 some data
While you could also drop and recreate ID as an identity, it would lose its ordinal position, hence the temporary id1 column.
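For reference, a minimal sketch of the ROW_NUMBER-in-a-CTE approach mentioned at the top of this answer (assuming the same myTable; any deterministic ORDER BY will do):
;WITH numbered AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY value) AS rn
    FROM myTable
)
UPDATE numbered SET id = rn;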
-- create one temporary table
CREATE TABLE Tmp
(
ID int IDENTITY(1, 1) NOT NULL,
field(s) datatype NULL
)
-- suppose your old table name is tbl; now pull the data across
-- ID will be auto-incremented here
-- don't select ID here, as it is NULL in the old table
INSERT INTO Tmp (field(s))
SELECT
field(s)
FROM tbl
-- drop the current table
DROP TABLE tbl
-- rename the temp table to the current name
EXEC sp_rename 'Tmp', 'tbl'
-- the renamed table keeps the IDENTITY property on ID
Good luck
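A concrete version of that pattern, reusing the myTable/value names from the first answer (hypothetical names; adjust the column list to your own table):
CREATE TABLE Tmp (ID int IDENTITY(1, 1) NOT NULL, value varchar(100) NULL);
INSERT INTO Tmp (value)
SELECT value FROM myTable;      -- ID is generated automatically
DROP TABLE myTable;
EXEC sp_rename 'Tmp', 'myTable';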

Using Merge statement inside a cursor

We have a requirement to populate a master table consisting of columns from a set of 20 different tables.
I have written a stored procedure to join some of the tables that return the largest number of columns, and to hold the result in a cursor.
Now I am using a FOR loop to iterate through the cursor records so I can insert them into the master table.
How can I use a MERGE statement inside the cursor FOR loop, so I can check whether I need to update an existing row or insert a new row, depending on whether the record already exists?
Any ideas on whether we can use a MERGE statement inside a cursor FOR loop? Any examples?
You can do a MERGE by selecting the cursor's data from DUAL. For example:
Create a source and destination table with some data
SQL> create table src ( col1 number, col2 varchar2(10) );
Table created.
SQL> create table dest( col1 number, col2 varchar2(10) );
Table created.
SQL> insert into src values( 1, 'A' );
1 row created.
SQL> insert into src values( 2, 'B' );
1 row created.
SQL> insert into dest values( 1, 'C' );
1 row created.
SQL> commit;
Commit complete.
Run the merge
SQL> ed
Wrote file afiedt.buf
1 begin
2 for x in (select * from src)
3 loop
4 merge into dest
5 using( select x.col1 col1, x.col2 col2
6 from dual ) src
7 on( src.col1 = dest.col1 )
8 when matched then
9 update set col2 = src.col2
10 when not matched then
11 insert( col1, col2 )
12 values( src.col1, src.col2 );
13 end loop;
14* end;
SQL> /
PL/SQL procedure successfully completed.
And verify that the merge did what we wanted. Row 1 was updated and row 2 was inserted.
SQL> select * from dest;
COL1 COL2
---------- ----------
1 A
2 B
However, it generally wouldn't make much sense to structure the code this way. You'd usually be better off putting the query that you'd use to open the cursor directly into the MERGE statement, so that rather than selecting one row of data from DUAL, you're selecting all the data you want to merge from all the tables you're merging from. Of course, it may make sense to create a view for this query that the MERGE statement can query, in order to keep the MERGE statement readable.
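A minimal sketch of that direct approach, reusing the src and dest tables from the example above (your real join over the 20 tables would go in the USING clause):
MERGE INTO dest
USING (SELECT col1, col2 FROM src) s
ON (s.col1 = dest.col1)
WHEN MATCHED THEN
    UPDATE SET col2 = s.col2
WHEN NOT MATCHED THEN
    INSERT (col1, col2)
    VALUES (s.col1, s.col2);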