How to get results from a SQL job without running it every time - sql

I am new to SSC. My scenario is that I have created tables A, B, and C which are related to one another.
Whenever I need data from these three tables I always have to join them to get results, which is a bit time-consuming to do every time.
Because of this I created a table 'R' and a procedure to update its contents. In this procedure I join all the tables (A, B, and C) and store the result in table R.
To load the results into this table I created a SQL job which runs once daily. However, there is a problem: sometimes I want results from the A, B, and C tables that include records inserted recently (before R has been refreshed).
Is there any solution that lets me get up-to-date results from the R table every time, without having to run the SQL job constantly to update it?
Additional Information
My desired solution is that any time I need data, table R is queried, not the joined tables A, B, and C. Your solution must take this into account.
Thank you.

Instead of running a procedure to constantly update table 'R', create a database view. This view would join A, B, and C together.
Then, any time you need to query A, B, and C, instead of risking getting stale data by querying table R, you would query the view.
I don't know your database schema, so I don't know what fields to join tables A, B, and C on, but it might look something like this:
CREATE VIEW V1
AS
SELECT * FROM A INNER JOIN B ON A.X = B.X INNER JOIN C ON B.Y = C.Y;
To query the view, you would use a SELECT statement just as you would with a table:
SELECT * FROM V1;
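If the three-way join itself is what is slow, a further option on SQL Server is an indexed view, which physically materializes the join and is kept current automatically. This is only a sketch: the column names and the uniqueness of (X, Y) are assumptions, and indexed views carry extra requirements (SCHEMABINDING, two-part table names, no SELECT *, specific SET options).
CREATE VIEW dbo.V1_Materialized
WITH SCHEMABINDING
AS
SELECT A.X, A.ColA, B.Y, B.ColB, C.ColC   -- hypothetical column names
FROM dbo.A
INNER JOIN dbo.B ON A.X = B.X
INNER JOIN dbo.C ON B.Y = C.Y;
GO
-- The unique clustered index is what materializes the view on disk.
CREATE UNIQUE CLUSTERED INDEX IX_V1_Materialized ON dbo.V1_Materialized (X, Y);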

Add a 'timex' (timestamp) column to your R table.
That way, at any time you can tell how fresh the data is and fetch your latest result set.
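A minimal sketch of that idea, assuming SQL Server and a DATETIME column (the sync procedure would stamp each row as it loads it):
-- Add a refresh timestamp to R.
ALTER TABLE R ADD timex DATETIME;

-- At query time you can check how fresh R is, or pull only the latest refresh:
SELECT *
FROM R
WHERE timex = (SELECT MAX(timex) FROM R);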

Based on feedback from the OP that table 'R' must always be the table queried (is this homework?), I suppose the only solution would be to place a trigger on each of the tables 'A', 'B', and 'C' so that whenever any of these tables changes, the combined contents are automatically refreshed into table 'R'.
Though inefficient, at least this is better than running a stored procedure on a fixed schedule, for example every 5 minutes.
CREATE PROCEDURE [usp_SyncR]
AS
BEGIN
    SET NOCOUNT ON;

    -- Rebuild R from the joined source tables and stamp the refresh time.
    TRUNCATE TABLE [R];

    INSERT INTO [R]
    SELECT A.*, B.*, C.*, GETUTCDATE() AS [UpdatedOn]
    FROM A
    INNER JOIN B ON A.X = B.X
    INNER JOIN C ON B.Y = C.Y;
END
GO
CREATE TRIGGER [trg_A_Sync_R]
ON [A]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    EXEC [usp_SyncR];
END
GO
CREATE TRIGGER [trg_B_Sync_R]
ON [B]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    EXEC [usp_SyncR];
END
GO
CREATE TRIGGER [trg_C_Sync_R]
ON [C]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    EXEC [usp_SyncR];
END

Related

BigQuery: Select column if it exists, else put NULL?

I am updating a daily dashboard. Assume every day I have two tables, TODAY and BEFORE_TODAY. What I have been doing daily is something like:
SELECT a, b FROM TODAY
UNION ALL
SELECT a,b FROM BEFORE_TODAY;
The TODAY table is generated daily and appended to all the data before it. Now I need to add a new column, say c, and in order to UNION ALL the two, that column needs to be available on BEFORE_TODAY as well.
How can I add a conditional check on BEFORE_TODAY so that if it has a c column I use it, and otherwise use NULL in its place?
Something is wrong with your data model. You should be putting the data into separate partitions of the same table. Then you would have no problems. You could just query the master table for the partitions you want. The column would appear in the history, with a NULL value.
That is the right solution. That said, you can create a hacked workaround, assuming that your tables have a primary key. For instance, if (a, b) is unique on each row, you could do:
select t.a, t.b, t.c
from today t
union all
select b.a, b.b,
       (select c  -- deliberately not qualified
        from before_today b2
        where b2.a = b.a and b2.b = b.b
       ) as c
from before_today b cross join
     (select null as c) c;
If before_today has no column c, then the unqualified c inside the subquery resolves to the outer c.c, which is NULL. If the column does exist, it resolves to b2.c, so the subquery returns the real value.
Although you can combine dynamic SQL and INFORMATION_SCHEMA to write a script that achieves what you describe, that doesn't seem to be the right thing to do.
As part of your data model planning, you should add column c to BEFORE_TODAY ahead of time. On existing rows the newly added column will simply hold a NULL value. Then you can add the column to TODAY and reference column c as normal.
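A minimal sketch of that approach in BigQuery (assuming c is a STRING; use the real type for your data, and qualify the table names with their dataset if needed):
-- Backfill the schema once; existing rows get NULL in c.
ALTER TABLE BEFORE_TODAY ADD COLUMN c STRING;

-- After TODAY also carries c, the daily query stays a plain UNION ALL.
SELECT a, b, c FROM TODAY
UNION ALL
SELECT a, b, c FROM BEFORE_TODAY;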

How can I copy records and all its related records from one database to another database using the index of the highest parent table?

The database that I'm using is Informix; the version is 9.4.
I have a scenario where I'm trying to migrate some specific records from one database to another. Here's an example of what I'm trying to do.
Let's say I have three tables A, B, C in database D1. I need to copy some records from these three tables to database D2.
The relations between A, B, and C are as follows:
A - parent with primary key a1
B - child to A with primary key b1 and the reference key a1
C - child to B with primary key c1 and the reference key b1
I want to move some records from database D1 with a specific condition a1 = 'something'. Along with A, I need to copy records from B and C that are related to A directly (A<->B) or indirectly (A<->C through B).
What is the easiest and most reliable way to copy the data?
FYI. This is a one time job, not a continuous one.
On the face of it, if the volume of data to be transferred is small enough, then you could use:
BEGIN WORK;
INSERT INTO D2:A
SELECT * FROM A WHERE a1 = 'something';
INSERT INTO D2:B
SELECT B.* FROM B JOIN A ON B.a1 = A.a1
WHERE A.a1 = 'something';
INSERT INTO D2:C
SELECT C.*
FROM C
JOIN B ON C.b1 = B.b1
JOIN A ON B.a1 = A.a1
WHERE A.a1 = 'something';
COMMIT WORK;
It might be possible to simplify things if the condition on A is really as simple as a1 = 'something' so that there is only one record from A to transfer (since a1 is the primary key of A).
BEGIN WORK;
INSERT INTO D2:A
SELECT * FROM A WHERE A.a1 = 'something';
INSERT INTO D2:B SELECT B.* FROM B
WHERE B.a1 = 'something';
INSERT INTO D2:C
SELECT C.*
FROM C
JOIN B ON C.b1 = B.b1
WHERE B.a1 = 'something';
COMMIT WORK;
This avoids joins back to table A.
If the volume of data makes this preposterous, you're probably stuck with something like unloading and reloading the data. You'd be wise to lock the tables in share mode while unloading them.
What volume makes the triple-insert operation preposterous? That's hard to answer, but if the transferred data requires more logical log space than you've got on the server running D2, then you've got problems. Whether it is then best to split the transactions or whether to go for unload/reload is hard to decide. On the whole, unload/reload is probably better if the space required is too large.
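If you do go the unload/reload route, a rough sketch using DB-Access's UNLOAD and LOAD statements might look like the following (the file names are illustrative, and this assumes the script is run through dbaccess, since UNLOAD and LOAD are DB-Access statements rather than general SQL):
-- Connected to D1: hold share locks while unloading the related rows.
DATABASE D1;
BEGIN WORK;
LOCK TABLE A IN SHARE MODE;
LOCK TABLE B IN SHARE MODE;
LOCK TABLE C IN SHARE MODE;
UNLOAD TO 'a.unl' SELECT * FROM A WHERE a1 = 'something';
UNLOAD TO 'b.unl' SELECT * FROM B WHERE a1 = 'something';
UNLOAD TO 'c.unl'
    SELECT C.* FROM C JOIN B ON C.b1 = B.b1 WHERE B.a1 = 'something';
COMMIT WORK;

-- Connected to D2: reload the files.
DATABASE D2;
LOAD FROM 'a.unl' INSERT INTO A;
LOAD FROM 'b.unl' INSERT INTO B;
LOAD FROM 'c.unl' INSERT INTO C;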

SQL SELECT query where the IDs were already found

I have 2 tables:
Table A has 3 columns (for example) with opportunity sales header data:
OPP_ID, CLOSE_DTTM, STAGE
Table B has 3 columns with the individual line items for the Opportunities:
OPP_LINE_ID, OPP_ID, AMOUNT_USD
I have a select statement that correctly parses through Table A and returns a list of Opportunities. What I would like to do is, without joining the data, to have a SELECT statement that will get data from Table B but only for the OPP_IDs that were found in my first query.
The result should be 2 views/resultset (one for each select query) and not just 1 combined view where Table B is joined to Table A.
The reason I want to keep them separate is that I will have to perform a few manipulations on the result from table B, and I don't want the result from table A affected.
A subquery is all you need. To get table B's rows for only the OPP_IDs returned by your first query, repeat that query's filter inside an IN subquery:
SELECT OPP_LINE_ID, OPP_ID, AMOUNT_USD
FROM B
WHERE OPP_ID IN (SELECT OPP_ID FROM A WHERE ...)  -- same conditions as your first query
Presuming you're using this in some client-side data access library that represents B's data as a two-dimensional collection, and you want to manipulate it without having A's data present in that collection:
Identify the records in A:
SELECT * FROM a WHERE somecolumn = 'somevalue'
Identify the records in B that relate to A, but don't return A's data:
SELECT b.* FROM a JOIN b ON a.opp_id = b.opp_id WHERE a.somecolumn = 'somevalue'
Just because JOIN is used doesn't mean your end-consuming program has to know about A's data. You could also use IN, like the other answer does; internally the optimizer will usually rewrite the two forms into the same plan anyway.
I tend to use exists for this type of query:
select b.*
from b
where exists (select 1 from a where a.opp_id = b.opp_id);
If you want two result sets, you need to run two queries. It is unclear what the second query should be; perhaps it is just the first query on A.
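As a rough sketch of the two-query approach (the STAGE filter is only a placeholder for whatever your first query actually does):
-- Result set 1: opportunity headers.
SELECT OPP_ID, CLOSE_DTTM, STAGE
FROM A
WHERE STAGE = 'Closed Won';

-- Result set 2: only the line items for those same opportunities.
SELECT B.OPP_LINE_ID, B.OPP_ID, B.AMOUNT_USD
FROM B
WHERE EXISTS (
    SELECT 1
    FROM A
    WHERE A.OPP_ID = B.OPP_ID
      AND A.STAGE = 'Closed Won'
);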

How to populate query with result from different table?

I have two tables, table A and table B. Table A has a column that is a reference to the primary key of table B. I want to run a select query on table A and then populate the column that refers to B with all of the data from that row of B.
SELECT * from A a LEFT JOIN B b ON a."b_id" = b."id" WHERE ...
That gives a result with each row containing all of the columns of A and all of the columns of B. It is a confusing mess to figure out which column is from which table. I want to be able to do something like:
row.A."column name"
row.B."column name"
I don't want to have to rename every single column using AS. There must be a better way to do this.
Not 100% sure what you're asking, but here is what I think you're after.
You want a way to show only table B's values? If so, you could do:
SELECT B.*
FROM A
JOIN B
ON A.b_id = B.id
That will only get you the B columns and data. If you also want A's columns but visually separated from B's, you could insert a dummy divider column:
SELECT B.*,'|' AS ['|'], A.*
FROM A
JOIN B
ON A.b_id = B.id
Hopefully this is helpful, if not to you then maybe to another reader.

SQL Server Table Lock during bulk insert

Below is the sample query; consider A, B, and C to be the source tables being joined into Target.
INSERT INTO Target (Col1, Col2, Col3, Col4)   ----------------Statement#1
SELECT A.Col1, B.Col2, A.Col3, C.Col4         ----------------Statement#2
FROM A WITH (NOLOCK)
INNER JOIN B WITH (NOLOCK) ON A.Id = B.ID
LEFT JOIN C WITH (NOLOCK) ON C.Id = B.ID
WHERE A.Id = 11
At which stage will the lock (an exclusive lock?) be applied to the target table, and how is SQL Server going to execute the query? I assumed two steps:
1. The result is fetched from tables A, B, and C based on the joins and the WHERE clause.
2. Once the result is ready, the data is inserted into the target table, and at that point the lock is applied to the table.
So is the table locked only when the actual data is written to the page, and not during the select, even though it is an INSERT INTO ... SELECT?
Those two steps are the logical steps of query execution. What SQL Server actually does at the physical level is another story. For this statement:
INSERT INTO Target (Col1, Col2, Col3, Col4)   ----------------Statement#1
SELECT A.Col1, B.Col2, A.Col3, C.Col4         ----------------Statement#2
FROM A WITH (NOLOCK)
INNER JOIN B WITH (NOLOCK) ON A.Id = B.ID
LEFT JOIN C WITH (NOLOCK) ON C.Id = B.ID
WHERE A.Id = 11
it takes, for every output record (see the SELECT clause), an X lock on a RID or a KEY within the target table (RID for a heap, KEY for a clustered index) and inserts that record. These steps are repeated for every output record. So it does not read all records from the source tables first and only then start inserting records into the target table. Because of the NOLOCK table hint on the source tables, it takes only Sch-S (schema stability) locks on them.
If you want to take an X lock on the whole target table, then you could use:
INSERT INTO Target WITH(TABLOCKX) (Col1,Col2,Col3,Col4)
SELECT ...
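If you want to observe this yourself, one rough sketch (assuming you have VIEW SERVER STATE permission; the session id is a placeholder) is to run the INSERT inside an open transaction in one session and inspect sys.dm_tran_locks from a second session:
-- Session 2: summarize the locks held by the inserting session (replace 55 with its SPID).
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 55
GROUP BY resource_type, request_mode, request_status;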
If you want minimally logged inserts then please read this article.
Did you specify a "Table Lock" hint? If you want row-level locking, set "Table Lock" to off.
Or check this link; it may help you:
http://technet.microsoft.com/en-us/library/ms180876(v=sql.105).aspx