Oracle: insert after updating x columns of the selected data - SQL

I need to achieve the following in my project:
I have to select 10 rows from a table with 5 columns and then insert that data back into the same table, after modifying 2 of the columns in the rows retrieved by the select query. What query will achieve this?
Example (10 rows):
Data in column1 is 'zz','zz','zz','zz','zz','zz','zz','zz','zz','zz'.
Data in column2 is 'ClassA','ClassB','ClassC','ClassD','ClassE','ClassA','ClassB','ClassC','ClassD','ClassE'
In pseudocode:
INSERT INTO tableT (SELECT * FROM tableT (update column1='yy', column2=append '_tt' to the existing data in that column))
After running the query we have 20 records: 10 old and 10 new.
The data in the 10 new records will be:
column1 is 'yy','yy','yy','yy','yy','yy','yy','yy','yy','yy'
data in column2 is 'ClassA_tt','ClassB_tt','ClassC_tt','ClassD_tt','ClassE_tt','ClassA_tt','ClassB_tt','ClassC_tt','ClassD_tt','ClassE_tt'
and the data in the remaining 3 columns will stay the same.
Please guide me in framing the query.

It depends on the size of your query and on the way you want the data to be updated, but for your example you could use:
insert into table (column1,column2)
select decode(column2,'1','yy','2','zz',null) col1, col2 from table;
EDIT:
After you changed your question I don't understand what you want to do at all. Please explain this:
data in column2 is abc,bcd,dce,xyz,etc
because I don't get the pattern.
EDIT 2:
OK. Here we go:
INSERT INTO <table_name> (col1, col2, col3, col4, col5)
SELECT col1, col2, col3, col4, col5
FROM (
  SELECT 'yy' AS col1, (col2 || '_tt') AS col2, col3, col4, col5, rownum r_num
  FROM <table_name>
)
WHERE r_num <= 10;
You didn't specify which 10 rows you want. This will select and change the first 10 rows returned by the inner select query.
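If a particular 10 rows are wanted, an ORDER BY inside the inner query makes the choice deterministic. A rough sketch (the ordering column here is only an assumption):
INSERT INTO <table_name> (col1, col2, col3, col4, col5)
SELECT 'yy', col2 || '_tt', col3, col4, col5
FROM (
  SELECT t.*, ROW_NUMBER() OVER (ORDER BY col2) AS r_num  -- order by whichever column defines the "first" 10
  FROM <table_name> t
)
WHERE r_num <= 10;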

Related

How to duplicate records, modify and add them to same table

I have a question and hopefully you can help me out. :)
What I have is a table like this:
ID  Col1      Col2      ReverseID
1   Number 1  Number A
2   Number 2  Number B
3   Number 3  Number C
What I want to achieve is:
Create duplicate of every record with switched columns and add them to original table
Add the ID of the duplicate to ReverseID column of original record and vice-versa
So the new table should look like:
ID  Col1      Col2      ReverseID
1   Number 1  Number A  4
2   Number 2  Number B  5
3   Number 3  Number C  6
4   Number A  Number 1  1
5   Number B  Number 2  2
6   Number C  Number 3  3
What I've done so far was working with temporary table:
SELECT * INTO #tbl
FROM myTable;

UPDATE #tbl
SET Col1 = Col2,
    Col2 = Col1,
    ReverseID = ID;

INSERT INTO DUPLICATEtable (Col1, Col2, ReverseID)
SELECT Col1, Col2, ReverseID
FROM #tbl;
In this example code I used a secondary table just for making sure I do not compromise the original data records.
I think I could skip the SET-part and change the columns in the last SELECT statement to achieve the same, but I am not sure.
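For reference, the swapped-column variant I have in mind would look roughly like this (same table and column names as above, untested):
INSERT INTO DUPLICATEtable (Col1, Col2, ReverseID)
SELECT Col2, Col1, ID  -- columns swapped, original ID carried over as ReverseID
FROM myTable;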
Anyway, with the temp-table code above I am ending up at:
ID  Col1      Col2      ReverseID
1   Number 1  Number A
2   Number 2  Number B
3   Number 3  Number C
4   Number A  Number 1  1
5   Number B  Number 2  2
6   Number C  Number 3  3
So the question remains: How do I get the ReverseIDs correctly added to original records?
As my SQL knowledge is pretty limited, I am almost sure this is not the simplest way of doing things, so I hope you can enlighten me and lead me to a more elegant solution.
Thank you in advance!
br
mrt
Edit:
I'll try to illustrate my initial problem, so this post gets long. ;)
First of all: my frontend does not allow any SQL statements; I have to work with classes, attributes, and relations.
First root cause:
Instances of a class B (B1, B2, B3, ...) are linked together in a class Relation; these are many-to-many relations within the same class. My frontend does not allow join tables, so that's a workaround.
Say a user adds a relation with B1 as the first side (I just call it 'left') and B2 as the second side (right):
Navigating from B1, there will be two relations showing up (FK_Left, FK_Right), but only one of them will contain a value (let's say FK_Left).
Navigating from B2, the value will only be listed in the other relation (FK_Right).
So from the user's side there are always two relations displayed, but whether the data is found behind relation_left or relation_right depends on how the record was entered.
That's not practical from a usability standpoint.
If I had all records with their vice-versa partners, I could just hide one of the relations and the user would see all the information behind one relation, regardless of how it was entered.
Second root cause:
The frontend provides some matrix view, which gets the relation class as input and displays left partners in columns and right partners in rows.
Let's say I want to see all instances of A in columns and their partners in rows; this is only possible if all relations regarding the instances of A are entered the same way, e.g. all A-instances as the left partner.
The matrix view shall be freely filterable regarding rows and columns, so if I had duplicate relations I could filter on any of the partners in rows and columns.
Sorry for the long text; I hope that made my situation a bit clearer.
I would suggest just using a view instead of trying to create and maintain two copies of the same data. Then you just select from the view instead of the base table.
create view MyReversedDataView as
select ID
, Col1
, Col2
from MyTable
UNION ALL
select ID
, Col2
, Col1
from MyTable
The trick to this kind of thing is to start with a SELECT that gets the data you need. In this case you need a resultset with Col1, Col2, reverseid.
SELECT Col2 Col1, Col1 Col2, ID reverseid
INTO #tmp FROM myTable;
Convince yourself it's correct -- swapped column values etc.
Then do this:
INSERT INTO myTable (Col1, col2, reverseid)
SELECT Col1, Col2, reverseid FROM #tmp;
If you're doing this from a GUI like SSMS, don't forget to DROP TABLE #tmp;
BUT, you can get the same result with a pure query, without duplicating rows. Why do it this way?
You save the wasted space for the reversed rows.
The reversed rows are always up to date, even if you forget to run the process that reverses and inserts them into the table.
There's no consistency problem if you insert or delete rows from the table.
Here's how you might do this.
SELECT Col1, Col2, null reverseid FROM myTable
UNION ALL
SELECT Col2 Col1, Col1 Col2, ID reverseid FROM myTable;
You can even make it into a view and use it as if it were a table going forward.
CREATE VIEW myTableWithReversals AS
SELECT Col1, Col2, null reverseid FROM myTable
UNION ALL
SELECT Col2 Col1, Col1 Col2, ID reverseid FROM myTable;
Then you can say SELECT * FROM myTableWithReversals WHERE Col1 = 'value' etc.
Let me assume that the id column is auto-incremented. Then, you can do this in two steps:
insert into myTable (Col1, Col2, reverseid)
select col2, col1, id
from myTable t
order by id; -- ensures that they go in in the right order
This inserts the new rows with the right reverseid. Now we have to update the original rows:
update t
set reverseid = tr.id
from myTable t join
myTable tr
on tr.reverseid = t.id;
Note that no temporary tables are needed.

Create column with values from multiple columns in SQL

I want to add a column to a table that includes the value from one of two columns, depending on which column contains the value.
For instance,
SELECT
concat("Selecting Col1 or 2", cast("Col1" OR "Col2" as string)) AS relevantinfo,
FROM table
I do not know much SQL and I know this query does not work. Is this even possible to do?
Col1  Col2
1
      4
3
      4
5

FINAL RESULT

Col1  Col2  relevantinfo
1           1
      4     4
3           3
      4     4
5           5
You can use the COALESCE function, which will return the first non-null value in the list.
SELECT col1, col2, COALESCE(col1, col2) AS col3
FROM t1;
Working example: http://sqlfiddle.com/#!9/05a83/1
I wouldn't alter the table structure to add redundant information that can be retrieved with a simple query.
I would rather use this kind of query with IFNULL/ISNULL(ColumnName, 'value to use if the column is null'):
-- This will work only if there can't be a value in both columns at the same time
-- MySQL
SELECT CONCAT(IFNULL(Col1,''), IFNULL(Col2,'')) AS relevantinfo FROM Table
-- SQL Server
SELECT CONCAT(ISNULL(Col1,''), ISNULL(Col2,'')) AS relevantinfo FROM Table
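If you prefer something engine-neutral, a plain CASE expression should behave much like COALESCE here (column names as in the question, t1 as a placeholder table name):
SELECT Col1, Col2,
       CASE WHEN Col1 IS NOT NULL THEN Col1 ELSE Col2 END AS relevantinfo
FROM t1;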

SQL selection from one of four columns?

I have a database table with four columns:
Column 1 | Column 2 | Column 3 | Column 4
The value in each of those four columns will either be a price, such as $3.99, or the value 'Listed'.
I would like to know the best SQL query to find the following:
Get all rows where all four columns equal 'Listed'
Get all rows where some of the columns have prices and some have the value 'Listed'
Try this query!
When all four columns equal 'Listed':
SELECT *
FROM [table name]
WHERE Column1='Listed'
  AND Column2='Listed'
  AND Column3='Listed'
  AND Column4='Listed'
When some of the columns have prices and some have the value 'Listed':
SELECT *
FROM [table name]
WHERE (Column1='Listed' OR Column2='Listed' OR Column3='Listed' OR Column4='Listed')
  AND (Column1 LIKE '$%.%' OR Column2 LIKE '$%.%' OR Column3 LIKE '$%.%' OR Column4 LIKE '$%.%')
These are actually fairly simple queries.
Select where all 4 columns are listed.
SELECT * FROM {table}
WHERE Column1='Listed' AND Column2='Listed' AND Column3='Listed' AND Column4='Listed'
Select where at least one column is listed and at least one column is a price (numeric).
SELECT * FROM {table}
WHERE (Column1='Listed' OR Column2='Listed' OR Column3='Listed' OR Column4='Listed')
AND (ISNUMERIC(Column1) OR ISNUMERIC(Column2) OR ISNUMERIC(Column3) OR ISNUMERIC(Column4))
Something like these should give you what you want; no guarantees though, since different database engines use slightly different SQL.
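For instance, ISNUMERIC is SQL Server specific. Since every value is said to be either a price or 'Listed', a portable rough equivalent is simply to test against 'Listed' (placeholder table name, assuming no NULLs):
SELECT * FROM {table}
WHERE (Column1='Listed' OR Column2='Listed' OR Column3='Listed' OR Column4='Listed')
  AND (Column1<>'Listed' OR Column2<>'Listed' OR Column3<>'Listed' OR Column4<>'Listed')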
I think you should do something like this:
SELECT * FROM `testing` WHERE Col1='Listed' AND Col2='Listed' AND Col3='Listed' AND Col4='Listed'
And for the "OR" statement:
SELECT * FROM `testing` WHERE Col1='Listed' OR Col2='Listed' OR Col3='Listed' OR Col4='Listed'
Of course you'll need to replace Col1, Col2, Col3 and Col4 with your column names. Also change the table name.

SQL Server inconsistent results over 2 columns using = and <>

I am trying to replace a manual process with a SQL Server (2012) based automated one. Prior to doing this, I need to analyse the data in question over time to produce some data quality measures/statistics.
Part of this entails comparing the values in two columns. I need to count where they match and where they do not so I can prove my varied stats tally. This should be simple but seems not to be.
Basically, I have a table containing two columns both of which are defined identically as type INT with null values permitted.
SELECT * FROM TABLE
WHERE COLUMN1 is NULL
returns zero rows
SELECT * FROM TABLE
WHERE COLUMN2 is NULL
also returns zero rows.
SELECT COUNT(*) FROM TABLE
returns 3780
and
SELECT * FROM TABLE
returns 3780 rows.
So I have established that there are 3780 rows in my table and that there are no NULL values in the columns I am interested in.
SELECT * FROM TABLE
WHERE COLUMN1=COLUMN2
returns zero rows as expected.
Therefore, in a table of 3780 rows with no NULL values in the columns being compared, I expect the following SQL
SELECT * FROM TABLE
WHERE COLUMN1<>COLUMN2
or in desperation
SELECT * FROM TABLE
WHERE NOT (COLUMN1=COLUMN2)
to return 3780 rows but it doesn't. It returns 3709!
I have tried SELECT * instead of SELECT COUNT(*) in case NULL values in some other columns were having an impact, but this made no difference; I still got 3709 rows.
Also, there are negative values in 73 rows for COLUMN1 - is this what causes the issue (though 73 + 3709 = 3782, not 3780, my number of rows)?
What is a better way of proving the values in these numeric columns never match?
Update 09/09/2016: At Lamak's suggestion below I isolated the 71 missing rows and found that in each one, COLUMN1 is NULL and COLUMN2 = -99. So the issue is NULL values, but why doesn't
SELECT * FROM TABLE WHERE COLUMN1 is NULL
pick them up? Here is the information in Information Schema Views and System Views:
ORDINAL_POSITION  COLUMN_NAME  DATA_TYPE  CHARACTER_MAXIMUM_LENGTH  IS_NULLABLE
1                 ID           int        NULL                      NO
..                ..           ..         ..                        ..
7                 COLUMN1      int        NULL                      YES
8                 COLUMN2      int        NULL                      YES

CONSTRAINT_NAME
PK__TABLE___...

name             type_desc  is_unique  is_primary_key
PK__TABLE___...  CLUSTERED  1          1
Suspect the CHARACTER_MAXIMUM_LENGTH of NULL must be the issue?
You can find the counts with the queries below, using a left join.
--To find COLUMN1=COLUMN2 Count
--------------------------------
SELECT COUNT(T1.ID)
FROM TABLE T1
LEFT JOIN TABLE T2 ON T1.COLUMN1=T2.COLUMN2
WHERE t2.id is not null
--To find COLUMN1<>COLUMN2 Count
--------------------------------
SELECT COUNT(T1.ID)
FROM TABLE T1
LEFT JOIN TABLE T2 ON T1.COLUMN1=T2.COLUMN2
WHERE t2.id is null
Through the exhaustive comment chain above, with all help gratefully received, I suspect this to be a problem with the data types in the table creation script for the columns in question. I have no explanation, from an SQL code point of view, as to why the "is NULL" check did not pick up the NULL values.
I was able to identify the 71 rows that were not being picked up as expected by using an "except".
i.e. I flipped the SQL that was missing 71 rows, namely:
SELECT * FROM TABLE WHERE COLUMN1 <> COLUMN2
through an except:
SELECT * FROM TABLE
EXCEPT
SELECT * FROM TABLE WHERE COLUMN1 <> COLUMN2
Through that I could see that COLUMN1 was always NULL in the missing 71 rows - even though the "is NULL" was not picking them up for me when I ran
SELECT * FROM TABLE WHERE COLUMN1 IS NULL
which returned zero rows.
Regarding the comparison of values stored in the columns, as my data volumes are low (3780 records), I am just forcing the issue by using ISNULL to map NULLs to 9999 (a numeric value I know my data will never contain) to make it work.
SELECT * FROM TABLE
WHERE ISNULL(COLUMN1, 9999) <> COLUMN2
I then get the 3780 rows as expected. It's not ideal but it'll have to do and is more or less appropriate as there are null values in there so they have to be handled.
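An alternative that avoids a sentinel value is to spell out the NULL cases explicitly; a rough sketch (same placeholder table and column names as above):
SELECT * FROM TABLE
WHERE COLUMN1 <> COLUMN2
   OR (COLUMN1 IS NULL AND COLUMN2 IS NOT NULL)
   OR (COLUMN1 IS NOT NULL AND COLUMN2 IS NULL);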
Also, using Bertrand's tip above I could view the table creation script, and the columns were definitely set up as INT.

comparing 2 consecutive rows in a recordset

Currently, I have this objective to meet: I need to query the database for certain results, and after doing so I need to compare the records.
For example: the query returns 10 rows; I then need to compare row 1 with row 2, row 2 with row 3, row 3 with row 4 ... row 9 with row 10.
The final result I wish to have is 10 or fewer rows.
I currently have one approach: I do this within a function and have variables called "previous" and "current". In a loop I always compare previous and current, which I populate from the record set using a cursor.
After I get each row of the filtered result, I insert it into a physical temporary table.
After all the results are in this temporary table, I query it, open a cursor over the result, and return the cursor.
The problem is: how can I avoid using a temporary table? I've searched online about using nested tables, but somehow I just could not get it working.
How can I replace the temp table with something else? Or is there another approach I can use to compare a row's columns with those of other rows?
EDIT
Sorry, maybe I was not clear with my question. Here is a sample of the result I am trying to achieve.
TABLE X
Column  A    B    C   D
        100  300  99  T1
        100  300  98  T2
        100  300  97  T3
        100  100  97  T4
        100  300  97  T5
        101  11   11  T6
Column A is the primary key of the base table; it appears multiple times here because table X is an audit table that keeps track of all changes. Column D acts as the timestamp for each record.
For my query, I am only interested in columns A, B and D. After the query I would like to get the result below:
Column  A    B    D
        100  300  T1
        100  100  T4
        100  300  T5
        101  11   T6
I think analytics might do what you want:
select col1, col2, lag(col1) over (order by col1, col2) LASTROWVALUE
from table1
This way, LASTROWVALUE will contain the value of col1 from the previous row, which you can directly compare to the col1 of the current row.
See this URL for more info: http://www.orafaq.com/node/55
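Applied to the example in the question, a rough sketch (assuming the columns are actually named A, B, C, D, the table is X, and D sorts chronologically) keeps only the rows where A or B changed from the previous row:
SELECT a, b, d
FROM (
  SELECT a, b, d,
         LAG(a) OVER (ORDER BY d) AS prev_a,
         LAG(b) OVER (ORDER BY d) AS prev_b
  FROM x
)
WHERE prev_a IS NULL  -- the first row is always kept
   OR a <> prev_a
   OR b <> prev_b;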
SELECT ROW_NUMBER() OVER (ORDER BY <Some column name>) rn,
       Column1, <Some column name>, CompareColumn,
       LAG(CompareColumn) OVER (ORDER BY <Some column name>) PreviousValue,
       LEAD(CompareColumn) OVER (ORDER BY <Some column name>) NextValue,
       CASE
         WHEN CompareColumn != LEAD(CompareColumn) OVER (ORDER BY <Some column name>)
           THEN CompareColumn || '-->' || LEAD(CompareColumn) OVER (ORDER BY <Some column name>)
         WHEN CompareColumn = LAG(CompareColumn) OVER (ORDER BY <Some column name>)
           THEN 'NO CHANGE'
         ELSE 'false'
       END
FROM <table name>
You can use this logic in a loop to change behaviour.
Hi, it's not very clear what exactly you want to accomplish. But maybe you can fetch the results of the original query into a PL/SQL collection and use that for your comparison.
What exactly are you doing the row comparison for? Are you looking to eliminate duplicates, or are you transforming the data into another form and then returning that?
To eliminate duplicates, look to use GROUP BY or DISTINCT functionality in your SELECT.
If you are iterating over the initial data and transforming it in some way then it is hard to do without a temporary table - but what exactly is your problem with the temp table? If you are concerned about the performance of a cursor, then maybe you could do one outer SELECT that compares the results of two inner SELECTs - but the trick is that the second SELECT is offset by one row, so you achieve the requirement of comparing row 1 against row 2, etc., as in the rough sketch below.
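A rough sketch of that offset idea, joining each row to the previous one by row number (table and column names assumed from the question's example):
WITH numbered AS (
  SELECT a, b, d, ROW_NUMBER() OVER (ORDER BY d) AS rn
  FROM x
)
SELECT cur.a, cur.b, cur.d
FROM numbered cur
LEFT JOIN numbered prev
  ON prev.rn = cur.rn - 1
WHERE prev.rn IS NULL  -- first row has nothing to compare against
   OR cur.a <> prev.a
   OR cur.b <> prev.b;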
I think you are complicating things with the temp table.
It can be done using a cursor and 2 temporary variables.
Here is the pseudo code:
declare
  cursor my_cursor is select xyz from xyz;
  v_temp_a my_cursor%rowtype;  -- current row
  v_temp_b my_cursor%rowtype;  -- previous row
  i number;
begin
  i := 1;
  for my_row in my_cursor loop
    if i = 1 then
      v_temp_a := my_row;
    else
      v_temp_b := v_temp_a;
      v_temp_a := my_row;
      /* at this point v_temp_b has the previous row and v_temp_a has the current row;
         compare them and apply whatever logic you want */
    end if;
    i := i + 1;
  end loop;
end;