I have been having trouble solving what seems to me a complex aspect of PL/SQL. I have a table1 with a list of table names and a table2 with backups of those tables.
Table1

id  name  max_rows  date
1   a     100       2018-10-06
2   b     100       2018-10-06
3   c     100       2018-10-06
Table2

id  name_bck  id_fk  date_created  close_date
1   a_bck     1
2   b_bck     2
3   c_bck     3
So the idea is for me to insert rows from a into a_bck until a_bck reaches its limit (max_rows); then I would create a new a_bck2 and close the old backup (update its close date in table2).
The problem I'm having is that I can't access the columns of table a through my table1: I want to insert data from a into a_bck where a's date column is earlier than sysdate.
So I created two cursors to go through both tables and return the data I need to perform operations on.
cursor c1 is
  select id, name, max_rows
  from table1
  where date < sysdate;

cursor c2(p_id table1.id%type) is
  select id_fk, name_bck
  from table2
  where id_fk = p_id
    and close_date is null;
Then I would loop through and fetch the data related to table1 and table2:

fetch c1 into id, n_tab, n_rows;
fetch c2 into id_fk, n_tab2;
I would like some help on how to dynamically access the columns of the tables listed in table1. I have tried to summarize this as best I can; if anyone could show me a small example of how to implement it, that would be great.
PS: I can't use partitions.
Thank you in advance.
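Since the table names are only known at run time, the usual approach is native dynamic SQL. Below is a minimal sketch using EXECUTE IMMEDIATE, assuming the column names above; the v_count variable, the _bck2 naming, and the quoted "DATE" column (DATE is a reserved word in Oracle) are assumptions, not tested code:

declare
  v_count pls_integer;
begin
  -- loop over the tables registered in table1
  for t in (select id, name, max_rows from table1 where "DATE" < sysdate) loop
    -- find the open (not yet closed) backup for this table
    for b in (select id, name_bck
              from table2
              where id_fk = t.id
                and close_date is null) loop
      -- the backup table's name is only known at run time, so dynamic SQL is required
      execute immediate 'select count(*) from ' || b.name_bck into v_count;

      if v_count < t.max_rows then
        -- copy the rows whose date column is in the past
        execute immediate 'insert into ' || b.name_bck ||
                          ' select * from ' || t.name ||
                          ' where "DATE" < sysdate';
      else
        -- close the full backup and start an empty new one
        update table2 set close_date = sysdate where id = b.id;
        execute immediate 'create table ' || t.name || '_bck2' ||
                          ' as select * from ' || t.name || ' where 1 = 0';
      end if;
    end loop;
  end loop;
end;
/

Because the names are concatenated into the SQL text, it is advisable to sanitize them, for example with dbms_assert.simple_sql_name.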
I am trying to check whether the values from Table1 exist in Table2.
The thing is that the values are comma-separated in Table1:
Table 1

ID  TXT
1   129(a),P24
2   P112
3   P24,XX
4   135(a),135(b)

Table 2

ID
P24
P112
P129(a)
135(a)
135(b)
The following only works if the complete cell value exists in both tables:
SELECT Table1.ID, Table1.TXT
FROM Table1 LEFT JOIN Table2 ON Table1.[TXT] = Table2.[ID]
WHERE (((Table2.ID) Is Null));
My question is: is there a way to check each comma-separated value and return those that do not exist in Table 2? In the above example, the value XX should end up in the result.
I'm not sure why you store your data that way (which is bad practice, as sos mentioned above), but you need to mimic a temp table like in SQL Server:
1. Select from table1 and create different TXT rows per ID (see the sketch after this answer).
2. Insert the results from step 1 into table3.
3. Select from table3 and join it to table2.
4. Delete table3.
Table3, the temp table:

ID  TXT
1   129(a)
1   P24
2   P112
3   P24
3   XX
4   135(a)
4   135(b)
Here is some explanation: MS Access database (2010) how to create temporary table/procedure/view from Query Designer
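In Access itself the splitting step needs VBA or a helper numbers table, but as an illustration of the same temp-table idea in SQL Server syntax (assuming SQL Server 2016+ for STRING_SPLIT):

SELECT t1.ID, s.value AS TXT
INTO #table3
FROM Table1 AS t1
CROSS APPLY STRING_SPLIT(t1.TXT, ',') AS s;

-- values that have no counterpart in Table2, e.g. XX
SELECT t3.ID, t3.TXT
FROM #table3 AS t3
LEFT JOIN Table2 AS t2 ON t3.TXT = t2.ID
WHERE t2.ID IS NULL;

DROP TABLE #table3;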
Problem
I have a situation with two tables in which I would like the entries from table 2 (let's call it table_2) to be matched up with the entries in table 1 (table_1) such that no duplicate rows of table_2 are used in the match-up.
Discussion
Specifically, in this case there are datetime stamps in each table (the field is utcdatetime). For each row in table_1, I want to find the row in table_2 that has the closest utcdatetime to the table_1 utcdatetime, such that the table_2 utcdatetime is older than the table_1 utcdatetime and within 30 minutes of it. Here is the catch: I do not want any repeats. If a row in table_2 gets gobbled up in a match on an earlier row in table_1, then I do not want it considered for a match later.
This has currently been implemented in a Python routine, but it is slow to iterate over all of the rows in table 1 as it is large. I thought I was there with a single SQL statement, but I found that my current SQL results in duplicate table 2 rows in the output data.
I would recommend using a nested select to get whatever results you're looking for.
For instance:
select *
from person p
where p.name_first = 'SCCJS'
  and not exists (select 'x'
                  from person p2
                  where p2.person_id != p.person_id
                    and p2.name_first = 'SCCJS'
                    and p2.name_last = 'SC')
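Applied to the matching problem itself, here is a hedged sketch with window functions and PostgreSQL-style interval syntax, using the table and column names from the question. Note the tie-breaking rule is an assumption: when two table_1 rows claim the same table_2 row, the losing table_1 row is simply dropped rather than re-matched to its next-closest candidate (a full greedy re-match generally needs a procedural loop, as in the existing Python routine):

WITH ranked AS (
    -- for each table_1 row, rank the earlier table_2 rows within 30 minutes by closeness
    SELECT t1.id AS t1_id,
           t2.id AS t2_id,
           ROW_NUMBER() OVER (PARTITION BY t1.id
                              ORDER BY t2.utcdatetime DESC) AS best_for_t1
    FROM table_1 t1
    JOIN table_2 t2
      ON t2.utcdatetime <  t1.utcdatetime
     AND t2.utcdatetime >= t1.utcdatetime - INTERVAL '30 minutes'
),
deduped AS (
    -- if several table_1 rows claim the same table_2 row, keep only one claim
    SELECT t1_id, t2_id,
           ROW_NUMBER() OVER (PARTITION BY t2_id ORDER BY t1_id) AS claim_rank
    FROM ranked
    WHERE best_for_t1 = 1
)
SELECT t1_id, t2_id
FROM deduped
WHERE claim_rank = 1;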
I'm creating a new table using PostgreSQL, but I need to get a parameter from another table as an input.
This is the table I have (I called table_1):
id column_1
1 100
2 100
3 100
4 100
5 100
I want to create a new table, but only using ids that are higher than the highest id from the table above (table_1). Something like this:
insert into table_new
select id, column_1 from table_old
where id > (max(id) from table_1)
How can I do this? I tried searching, but I got to several posts like https://community.powerbi.com/t5/Desktop/M-Query-Create-a-table-using-input-from-another-table/td-p/209923, Take one table as input and output using another table BigQuery and sql query needs input from another table, which are not exactly what I need.
Just use where id > (select max(id) from table_1).
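In full, applied to the statement from the question:

insert into table_new
select id, column_1
from table_old
where id > (select max(id) from table_1);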
I would like to insert records into the table below (structure of the table with example data). I have to use T-SQL to achieve this:
MasterCategoryID  MasterCategoryDesc  SubCategoryDesc  SubCategoryID
1                 Housing             Elderly          4
1                 Housing             Adult            5
1                 Housing             Child            6
2                 Car                 Engine           7
2                 Car                 Engine           7
2                 Car                 Window           8
3                 Shop                owner            9
So, for example, if I enter a new record with MasterCategoryDesc = 'Town', it will insert '4' into MasterCategoryID along with the respective SubCategoryDesc and SubCategoryID.
Can I simplify this question by removing the SubCategoryDesc and SubCategoryID columns? How can I achieve this now with just the two columns MasterCategoryID and MasterCategoryDesc?
INSERT INTO Table1
    ([MasterCategoryID], [MasterCategoryDesc], [SubCategoryDesc], [SubCategoryID])
SELECT TOP 1
    CASE WHEN 'Town' NOT IN (SELECT [MasterCategoryDesc] FROM Table1)
         THEN (SELECT MAX([MasterCategoryID]) + 1 FROM Table1)
         ELSE (SELECT TOP 1 [MasterCategoryID] FROM Table1
               WHERE [MasterCategoryDesc] = 'Town')
    END AS [MasterCategoryID],
    'Town'  AS [MasterCategoryDesc],
    'owner' AS [SubCategoryDesc],
    CASE WHEN 'owner' NOT IN (SELECT [SubCategoryDesc] FROM Table1)
         THEN (SELECT MAX([SubCategoryID]) + 1 FROM Table1)
         ELSE (SELECT TOP 1 [SubCategoryID] FROM Table1
               WHERE [SubCategoryDesc] = 'owner')
    END AS [SubCategoryID]
FROM Table1
SQL FIDDLE
If you want, I can create a stored procedure too. But you said you wanted T-SQL.
This will take three steps, preferably in a single stored procedure; make sure it runs within a transaction (see the sketch after these steps).
a) Check if the MasterCategoryDesc you are trying to insert already exists. If so, take its ID. If not, find the highest MasterCategoryID, increase by one, and save it to a variable.
b) The same with SubCategoryDesc and SubCategoryID.
c) Insert the new record with the two variables you created in steps a and b.
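A minimal sketch of such a procedure, assuming the Table1 columns from the question (the procedure and parameter names are made up):

CREATE PROCEDURE dbo.AddCategory
    @MasterDesc NVARCHAR(100),
    @SubDesc    NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    DECLARE @MasterID INT, @SubID INT;

    -- a) reuse the existing master ID if the description exists, else allocate the next one
    SELECT @MasterID = MasterCategoryID
    FROM Table1
    WHERE MasterCategoryDesc = @MasterDesc;
    IF @MasterID IS NULL
        SELECT @MasterID = ISNULL(MAX(MasterCategoryID), 0) + 1 FROM Table1;

    -- b) the same for the subcategory
    SELECT @SubID = SubCategoryID
    FROM Table1
    WHERE SubCategoryDesc = @SubDesc;
    IF @SubID IS NULL
        SELECT @SubID = ISNULL(MAX(SubCategoryID), 0) + 1 FROM Table1;

    -- c) insert the new record with the two variables from steps a and b
    INSERT INTO Table1
        (MasterCategoryID, MasterCategoryDesc, SubCategoryDesc, SubCategoryID)
    VALUES (@MasterID, @MasterDesc, @SubDesc, @SubID);

    COMMIT TRANSACTION;
END;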
Create a table for the MasterCategory and a table for the SubCategory. Make an ___ID column for each one that is identity (1,1). When loading, insert new rows for nonexistent values and then look up existing values for the INSERT.
Messing around with finding the Max and looking up data in the existing table is, in my opinion, a recipe for failure.
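A minimal sketch of that design, with assumed names for the lookup table; the WHERE NOT EXISTS insert is one way to load only nonexistent values:

CREATE TABLE MasterCategory (
    MasterCategoryID   INT IDENTITY(1,1) PRIMARY KEY,
    MasterCategoryDesc NVARCHAR(100) NOT NULL UNIQUE
);

-- insert a new row only when the description does not exist yet
INSERT INTO MasterCategory (MasterCategoryDesc)
SELECT 'Town'
WHERE NOT EXISTS (SELECT 1 FROM MasterCategory
                  WHERE MasterCategoryDesc = 'Town');

-- look up the (new or existing) ID for the final INSERT
SELECT MasterCategoryID
FROM MasterCategory
WHERE MasterCategoryDesc = 'Town';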
Using SQL Server 2000.
I want to find the duplicate records in the table.
Table1

ID   Transaction  Value
001  020102       10
001  020103       20
001  020102       10   (duplicate record)
002  020102       10
002  020103       20
002  020102       10   (duplicate record)
...
...
Transaction and Value can repeat for different IDs, but not for the same ID...
Expected Output

Duplicate records are...

ID   Transaction  Value
001  020102       10
002  020102       10
...
...
How can I write a query to view the duplicate records?
You can use:

SELECT ID, [Transaction], Value
FROM Table1
GROUP BY ID, [Transaction], Value
HAVING COUNT(ID) > 1
SELECT ID, [Transaction], Value, COUNT(ID)
FROM Table1
GROUP BY ID, [Transaction], Value
HAVING COUNT(ID) > 1

This query will show you how many times each row is repeated alongside each entry. If you don't need that, you can simply remove the COUNT(ID) column from the SELECT clause.
Self join (with additional PK or Timestamp or...)
I can see that people have provided solutions with grouping, but no one has provided the self-join solution. The only problem is that you need some other row descriptor that is unique for each record, be it a primary key, a timestamp, or anything else. Supposing the unique column's name is Uniq, this would be the solution:
select distinct r1.ID, r1.[Transaction], r1.Value
from Records r1
join Records r2
  on r2.ID = r1.ID
 and r2.[Transaction] = r1.[Transaction]
 and r2.Value = r1.Value
 and r2.Uniq != r1.Uniq
The last join condition makes it possible to not join each row to itself, only to the other duplicates.
To find out which one works best for you, compare their execution plans and run some tests.
You can do this:

SELECT ID, [Transaction], Value
FROM [Table]
GROUP BY ID, [Transaction], Value
HAVING COUNT(*) > 1
To delete the duplicates, if you have no primary key then you need to select the distinct values into a separate table, delete everything from this one, then copy the distinct records back:
SELECT ID, [Transaction], Value
INTO #tmpDeduped
FROM [Table]
GROUP BY ID, [Transaction], Value

DELETE FROM [Table]

INSERT INTO [Table]
SELECT * FROM #tmpDeduped