I have two tables, A and B. A has a column b_id which acts as a foreign key reference for a many-to-one relationship.
So is there any performance difference in Oracle when executing a query like
select A.* from A, B where A.b_id=B.ID and B.ID=? -- auto-generated by hibernate
and
select * from A where b_id = ? -- Created manually
UPDATE: I need data from table A only.
There will certainly be a difference between the two queries: the first one reads from two tables, while the second one queries just a single table.
Even though you don't return any columns from table B in the first query, its data is still read to evaluate the join condition (which is not the case in the second query).
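To see the actual difference on your data, you can compare the execution plans Oracle produces for both statements. A minimal sketch, using a literal 1 in place of the bind placeholder:
EXPLAIN PLAN FOR select A.* from A, B where A.b_id=B.ID and B.ID=1;
SELECT * FROM table(DBMS_XPLAN.DISPLAY);
EXPLAIN PLAN FOR select * from A where b_id = 1;
SELECT * FROM table(DBMS_XPLAN.DISPLAY);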
Prologue:
I have two tables in two different databases, and one is an updated version of the other. For example, imagine that one year ago I duplicated table 1 in the new db (say, as table 2), and from then on I worked on table 2, never updating table 1.
I would like to compare the two tables to get the differences that have accumulated in this period of time (the tables have preserved the same structure, so the comparison is meaningful).
My plan was to create a third table, copy both table 1 and table 2 into it, and then count the number of repetitions of every entry.
In my opinion, this, combined with a new attribute that records for every entry which table it came from, would do the job.
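For illustration, a minimal sketch of that idea, assuming the tables are called table1 and table2 and share a key column id (the merged table is created without any key constraint, so duplicates are allowed):
CREATE TABLE merged AS
SELECT t1.*, 'table1' AS source_table FROM table1 t1
UNION ALL
SELECT t2.*, 'table2' AS source_table FROM table2 t2;

SELECT id, COUNT(*) AS repetitions
FROM merged
GROUP BY id;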
Problem:
When copying the two tables into the third table, I get the (obvious) error that there are duplicate key values in a unique or primary key constraint.
How could I bypass the error, or how could I do the same job better? Any idea is appreciated.
Something like this should do what you want if A and B have the same structure; otherwise, just select and rename the columns you want to compare:
SELECT *
FROM B
WHERE NOT EXISTS (
    -- correlate on the columns you want to compare ("id" and "col" are placeholders)
    SELECT 1 FROM A WHERE A.id = B.id AND A.col = B.col
)
If NOT EXISTS doesn't work in your DBMS, you could also use a left outer join, comparing the rows' column values and keeping only the rows with no match:
SELECT A.*
FROM A
LEFT OUTER JOIN B
    ON A.col = B.col AND ...
WHERE B.col IS NULL
I have two tables A(i,j,k) and B(m,n).
I want to update the 'm' column of table B by taking sum(j) from table A. Is it possible to do this in Vertica?
The following code can be used in Teradata, but does Vertica have this kind of flexibility?
UPDATE B FROM (SELECT SUM(j) AS m FROM A) a1 SET m = a1.m;
The Teradata SQL syntax won't work with Vertica, but the following query should do the same thing:
UPDATE B SET m = (SELECT SUM(j) FROM A);
Depending on the size of your tables, this may not be an efficient way to update data. Vertica is a WORM (write once, read many times) store, and is not optimized for updates or deletes.
An alternative way would be to first move the data in the target table to another intermediate (but not temporary) table, then write a join query using that table to produce the desired result and export it back into the target table, and finally drop the intermediate table. Of course, this assumes you have partitioned your table in a way suitable for your update logic.
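A minimal sketch of that approach, assuming B has exactly the columns (m, n) and using a made-up staging table name:
CREATE TABLE B_staging AS SELECT * FROM B;  -- move the current rows aside
TRUNCATE TABLE B;
INSERT INTO B                               -- rebuild B with the new m computed from A
SELECT (SELECT SUM(j) FROM A) AS m, s.n
FROM B_staging s;
DROP TABLE B_staging;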
Say I have four tables.
Table 1:
PK_Column_a
Table 2:
PK_Column_c
FK_Column_a
Table 3:
FK_Column_c
FK_Column_e
PK_c,e
Table 4:
PK_Column_e
If I now want to write a SQL query that will select
table1.Column_a, table2.column_c, table4.Column_e
and I wish to connect them where their foreign keys point (e.g. where table1.Column_a = table2.Column_a).
Do I need to include table 3 in my "FROM" statement, or can I connect table 2 and table 4 without joining them through table 3?
I believe the answer is yes, you would need to join to Table-3, because otherwise you won't be able to bring in data from Table-4. (There's no other way to relate the data in Table-4 to the data in Table-1 or Table-2.)
You have to join through Table-3; otherwise you will generate a cross join and the data won't be valid. Put simply, every row from tables 1 and 2 would be paired with every row from Table-4.
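For illustration, a sketch of the full join path through Table-3, using the column names from the question (the table names Table1 through Table4 are assumptions):
SELECT t1.Column_a, t2.Column_c, t4.Column_e
FROM Table1 t1
JOIN Table2 t2 ON t2.Column_a = t1.Column_a
JOIN Table3 t3 ON t3.Column_c = t2.Column_c
JOIN Table4 t4 ON t4.Column_e = t3.Column_e;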
I managed to delete 4,000 rows from a table in my 129,000-row production database (Postgres 9.4 on Heroku), but only identified the problem a few days later.
I have a backup from before the loss, but only want to selectively restore the missing rows back to the table, preserving their id's. (A complete restore is not an option as new data has since been added to the table.)
Into a local testing database I have imported the backed-up table as articles_backups, alongside the actual articles table. I want to find all the rows in articles_backups that are missing from articles and then copy these to a new table articles_restores that I will then restore to the production database, back into the articles table (preserving record id's).
This query successfully returns all the id's of the deleted records:
select articles_backups.id
from articles_backups
left outer join articles on (articles_backups.id = articles.id)
where articles.id is null
But I have not been able to copy the result to a new table. I have unsuccessfully tried:
select *
into articles_restores
from articles_backups
left outer join articles on (articles_backups.id = articles.id)
where articles.id is null;
Which gives:
ERROR: column "id" specified more than once
Basically your query with LEFT JOIN / IS NULL does what you are after:
Select rows which are not present in other table
You get the error because you select all columns from both tables, and there is an id column in both. It's not possible to create a new table with duplicate column names, and it's not what you want to begin with. Only select columns from articles_backups:
CREATE TABLE articles_restores AS
SELECT ab.*
FROM articles_backups ab
LEFT JOIN articles a USING (id)
WHERE a.id IS NULL;
While at it, I simplified your query syntax with table aliases. The USING clause is just a convenience for shorter code. It folds the two id columns into one, but all other columns are still in there twice if you SELECT *.
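For illustration, a minimal sketch of the difference:
-- With ON, SELECT * returns ab.id and a.id separately; with USING, they fold into one id column:
SELECT * FROM articles_backups ab JOIN articles a ON ab.id = a.id;
SELECT * FROM articles_backups ab JOIN articles a USING (id);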
Use CREATE TABLE AS. SELECT INTO is also defined by the SQL standard and implemented in Postgres, but its use is discouraged. It's used in PL/pgSQL functions for a different purpose. Details:
Creating temporary tables in SQL
You could use an EXCEPT to retrieve all the rows from articles_backups that are different from articles
(assuming both tables have the same columns in the same order).
You could also create a temp table with this info to make your repair statements easier:
create table temp_articles as
select * from articles_backups
except
select * from articles
Step 1 - update rows from 'articles_backups' present in articles.
This step needs attention... you will have to establish a rule to choose between the data present in articles and the data present in temp_articles.
UPDATE articles a
SET col1 = b.col1,  -- note: Postgres does not allow the alias "a." on SET targets
    col2 = b.col2,
    (... other columns ...)
FROM (SELECT * FROM temp_articles) AS b
WHERE a.id = b.id AND /* your rule for data to be (or not) updated goes here */
Step 2 - insert rows from 'articles_backups' not present in articles (your deleted records):
insert into articles
select * from temp_articles where id not in (select id from articles)
Let us know if you need more help.
I have two tables with identical schemas in two databases. The two databases are on different servers at different locations. Data can be inserted and updated in either of the two tables. The requirement is to keep the two tables in sync so that both always have the up-to-date information.
The primary key column will always be unique in either table.
How can I achieve this via SSIS?
Kindly guide.
You can achieve it with 2 Script Tasks. In the first one:
-- what exists in A and not in B
SELECT * INTO DB1.temp.TBL_A_except FROM
(
    SELECT pk FROM DB1.schema1.TBL_A
    EXCEPT
    SELECT pk FROM DB2.schema2.TBL_B
) AS d;  -- SQL Server requires an alias on the derived table

-- what exists in B and not in A
SELECT * INTO DB2.temp.TBL_B_except FROM
(
    SELECT pk FROM DB2.schema2.TBL_B
    EXCEPT
    SELECT pk FROM DB1.schema1.TBL_A
) AS d;
Second one:
-- the staging tables hold only the missing pks, so join back to pick up full rows
INSERT INTO DB2.schema2.TBL_B
SELECT a.* FROM DB1.schema1.TBL_A AS a
JOIN DB1.temp.TBL_A_except AS e ON e.pk = a.pk;

INSERT INTO DB1.schema1.TBL_A
SELECT b.* FROM DB2.schema2.TBL_B AS b
JOIN DB2.temp.TBL_B_except AS e ON e.pk = b.pk;

DROP TABLE DB1.temp.TBL_A_except;
DROP TABLE DB2.temp.TBL_B_except;
If you really want to achieve this with SSIS transformation techniques, I'd use two data flows with two Cache Connection Managers serving as temp tables 1 and 2: the first data flow saves the data into the caches, the second loads it from the caches into the tables.
or
Two data flows. Source -> Lookup -> Destination.
Implement the Lookup to check the second table for the existence of the PK. If, for a record in Tbl_A, there is no such PK in Tbl_B, it means you have to insert this row into Tbl_B; the No Match Output directs the row to the Destination.
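For reference, a T-SQL sketch of what that Lookup pattern implements for one direction (object names reuse the earlier snippets; pk is assumed to be the key column):
-- insert rows from TBL_A whose pk has no match in TBL_B
INSERT INTO DB2.schema2.TBL_B
SELECT a.*
FROM DB1.schema1.TBL_A AS a
WHERE NOT EXISTS (SELECT 1 FROM DB2.schema2.TBL_B AS b WHERE b.pk = a.pk);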