I have been doing a few exercises for an exam. They don't come with answers, so I'm just looking for clarification. The task is to find the errors in the table and to explain, for each error, which 2PL principle it violates.
Here is what I have come up with:
T4 at step 14 cannot proceed, because T1 has not released the lock it holds.
T4 at steps 15 and 16 fails for the same reason: T4 does not hold the lock, since T1 still does.
T4 at steps 5 and 6 cannot proceed, because T2 hasn't released its lock.
Is this correct?
Thanks
I'm trying to see if there are any rows in table A which I've missed in table B.
For this, I'm using the following query:
SELECT t1.cusa
FROM patch t1
LEFT JOIN trophy t2
ON t2.titleid = t1.titleid
WHERE t2.titleid IS NULL
The query worked before, but now that the trophy table has nearly 200,000 rows, it's extremely slow. I waited five minutes for it to execute, and it eventually timed out.
Is there any way to speed this query up?
Adding indexes on titleid in both tables (but especially on trophy) is the quickest way to get better performance; 200K rows is nothing for SQL Server.
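For example, something like this (the index names are illustrative; adjust to your schema):
-- Nonclustered indexes on the join column speed up the anti-join lookup
CREATE INDEX IX_patch_titleid ON patch (titleid);
CREATE INDEX IX_trophy_titleid ON trophy (titleid);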
Try this and it might perform a bit better!
SELECT t1.cusa
FROM patch t1
WHERE NOT EXISTS (SELECT 1
                  FROM trophy t2
                  WHERE t2.titleid = t1.titleid);
I'm getting what appears to be a bizarre bug with Oracle 11.2.0.2.0. Everything tells me that this must be a bug, but I haven't been able to find any mention of it, and it's a common enough use case that you would think it would have been found and fixed long before now. Things work fine in the RDS version of Oracle (12-something), which lends credence to my opinion that it must be a bug.
I've been able to reproduce the issue with nothing but selects from dual, so no tables need to be added to test this. Consider the following SQL:
select t2.*
from (
select 1 as id from "DUAL"
) t1
left outer join (
select 2 as id, 10 as foo from "DUAL"
) t2 on t1.id = t2.id
left outer join (
select
1 as id,
-- case when 1 = 1 then 10 else 7 end as foo,
2 as bar
from "DUAL"
) t3 on t1.id = t3.id
It produces exactly the results you'd expect:
id foo
-- ---
null null
Since the first outer join finds no match (there is no row in t2 with an id of 1), the columns are null, as they should be. Notice I'm not even using the results of the second join. But if I uncomment the case expression, the results change!
id foo
-- ---
2 10
Here are some things I've observed so far:
The results change if there are one or more additional outer joins in the query that also contain a case expression.
It doesn't matter whether the second outer join matches or not.
Only literals in t2's select list appear to have the issue. That is, if the select statement were something like select id, 10 as foo from "MY_TABLE", id would be null as you'd expect, but foo would still be 10.
There are a couple of ways you can answer this question:
If this is actually not a bug, can you explain why this is occurring and cite documentation explaining the behavior?
If this is a bug, can you link to references to the bug and note which version of 11g it was resolved in (if any)?
What I don't want are "this must be a bug because it violates such-and-such spec" answers. I already know this and I already completely agree with you. I need to know why it's happening (if it's not a bug) or where/when it was fixed if it is a bug.
If you have a version of Oracle where both versions of the query work properly, please leave a comment with the version number. It may be useful to find out what bug this was and when it was fixed.
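One experiment that might be worth trying while reproducing this, sketched here only (NO_MERGE is a standard Oracle hint, but I have not confirmed that it changes the result on 11.2.0.2.0), is to block view merging, since wrong results from merged inline views are a known class of optimizer bugs:
select /*+ NO_MERGE(t2) NO_MERGE(t3) */ t2.*
from (
select 1 as id from "DUAL"
) t1
left outer join (
select 2 as id, 10 as foo from "DUAL"
) t2 on t1.id = t2.id
left outer join (
select
1 as id,
case when 1 = 1 then 10 else 7 end as foo,
2 as bar
from "DUAL"
) t3 on t1.id = t3.id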
I want to update the primary key in SQL Server. I executed three insert statements in my table, and the primary key column now looks like this:
Id NUM
-------
1 T1
2 T2
3 T3
7 T4
8 T5
9 T6
13 T7
14 T8
15 T9
16 T10
I want to update the column Id to get this:
Id NUM
-------
1 T1
2 T2
3 T3
4 T4
5 T5
6 T6
7 T7
8 T8
9 T9
10 T10
Can someone please guide me on how to resolve this?
Thanks in advance.
Don't do it! Remember the purpose of primary keys: they are non-NULL keys that uniquely identify each row in a table. They serve multiple uses. In particular, they are used for foreign key references, and in SQL Server they are (by default) the clustered index, which determines how the data is physically ordered.
The identity column provides an increasing sequence of numbers, balancing the objective of an increasing number with performance. As a result, gaps appear for various reasons, but particularly due to deletes, failed inserts, and performance optimizations in a parallel environment.
In general, the aesthetics of gapless numbers are less important than the functionality provided by the keys -- and gaps have basically no impact on performance.
And, in particular, changing primary keys can be quite expensive:
The data on the pages needs to be re-sorted for the clustered index. This is true even when the ordering does not change.
Foreign keys have to be updated, if you have cascading updates set on the constraints.
Foreign keys are invalidated -- a really bad thing -- if you happen not to have the proper foreign key definitions.
And, even if you do go through the trouble of doing this, gaps are going to appear in the future, due to deletes, failed inserts, and database optimizations.
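As a quick illustration of that last point, here is a minimal sketch (the temp table and names are mine, purely illustrative) showing a failed insert burning an identity value:
-- Identity values are consumed even when an insert fails,
-- so gaps appear on their own
CREATE TABLE #demo (Id int IDENTITY(1,1) PRIMARY KEY, NUM varchar(10) NOT NULL);

INSERT INTO #demo (NUM) VALUES ('T1'), ('T2');

BEGIN TRY
    INSERT INTO #demo (NUM) VALUES (NULL);  -- violates NOT NULL and fails
END TRY
BEGIN CATCH
    PRINT 'insert failed, but the identity value was still consumed';
END CATCH;

INSERT INTO #demo (NUM) VALUES ('T3');

SELECT Id, NUM FROM #demo;  -- Ids are 1, 2, 4: the value 3 was burned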
Use row_number() to generate the new sequence. You need to order by NUM, ignoring the leading character T:
UPDATE t
SET Id = rn
FROM (
    SELECT Id, NUM,
           -- strip the leading 'T' and sort numerically, so T2 comes before T10
           rn = row_number() OVER (ORDER BY convert(int,
                                   substring(NUM, 2, len(NUM) - 1)))
    FROM yourtable
) t
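If you want to sanity-check the new numbering before touching the keys, the same derived table works as a plain SELECT; stuff() here is just an equivalent way to drop the leading T:
-- Preview old vs. new ids without updating anything
SELECT Id AS old_id,
       row_number() OVER (ORDER BY convert(int, stuff(NUM, 1, 1, ''))) AS new_id,
       NUM
FROM yourtable
ORDER BY new_id;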
Unfortunately, my ability to query has outgrown my knowledge of SQL optimization, so I am hoping someone will help a young analyst by looking at this atrocious execution plan and providing some wisdom as to how I could speed it up. I've read a few threads about spooling, but they were mostly discussions about whether an Eager Table Spool is good or bad, and the answer is always "it depends".
My execution plan looks like it's Spooling and Sorting the same #Temp Table multiple times, and it's eating up a lot of execution cost.
My understanding of a Table Spool is that it is temporary storage to be used later, but if the data is already stored for later use, why would it spool the same data over and over again? My query doesn't require any ordering so why would it need to sort the same #TempTable/Spool multiple times?
I'm so new to this, I can't figure out how to attach the entire execution plan, so I attached an image of the bottom half of it.
Help me, experienced analysts. You're my only hope.
A Little Context.
I currently have a transaction table that tracks all changes made to a lead in my CRM, and I am attempting to create a new table from this data to speed up reporting.
I am pulling data from this transaction table and flagging the first action, first user, and other firsts of a lead by using ROW_NUMBER(). I am then inserting every "first" into a #Temp table, as I know I am going to use this data multiple times.
SELECT
    ID,
    Action,
    ROW_NUMBER() OVER (PARTITION BY ID, Action ORDER BY DATE) AS ActionNum,
    ROW_NUMBER() OVER (PARTITION BY ID, Actor ORDER BY DATE) AS UserNum
INTO #Temp
FROM [Table]  -- placeholder name for the transaction table
;
I am then left joining this #Temp table many times (10 times, actually). I have tried multiple other ways of solving this issue, but using ROW_NUMBER() multiple times seems like the best solution.
SELECT
    *
FROM #Temp T1
LEFT JOIN #Temp T2
    ON T2.ID = T1.ID AND T2.Action = 'A2' AND T2.ActionNum = 1
LEFT JOIN #Temp T3
    ON T3.ID = T1.ID AND T3.Action = 'A3' AND T3.ActionNum = 1
LEFT JOIN #Temp T4
    ON T4.ID = T1.ID AND T4.UserNum = 1
WHERE T1.Action = 'A1'
  AND T1.ActionNum = 1
I've looked into creating a clustered index on the #Temp table, but I must not be doing it right, because it didn't change anything about my execution plan.
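In case it helps, this is the kind of index I mean (the key columns are a guess based on the join predicates above):
-- Clustered index keyed to match the self-join predicates
CREATE CLUSTERED INDEX IX_Temp ON #Temp (ID, Action, ActionNum);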
Thanks in advance for all your help! Any good reading material is also greatly appreciated!
Best,
Austin
I have a fairly generic select query. When I select the top 1245 results of a particular result set, it runs in under a second, as expected. However, if I run it for 1246, it runs continuously, as if in an infinite loop. I've checked the formatting of rows 1245 and 1246, and the data appears completely fine. I can also run the same query on a separate group of users numbering over 2,300, which again runs almost instantly, which makes me think it's not a memory issue.
As a quick example of the query formatting:
SELECT TOP 1246 a.id,
(SELECT TOP 1 col_1 FROM table_1 t INNER JOIN table_2 c ON t.id=c.id WHERE t.id=a.id) AS [columnAlias]
FROM table_3 a
Open to any ideas on troubleshooting.
If I can provide anything else that might help, just ask.
The difference in performance is probably due to changes in the execution plan. You might want to check that statistics are up-to-date.
Second, the TOP 1 in your subquery has no ORDER BY, so which row it returns is arbitrary. If the correlation on a.id isn't actually needed, you might as well compute the single value once and move it to the from clause:
SELECT TOP 1246 a.id, x.col_1 AS [columnAlias]
FROM table_3 a CROSS JOIN
     (SELECT TOP 1 col_1 FROM table_1 t INNER JOIN table_2 c ON t.id = c.id) x;
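If the correlation on a.id is intentional, an alternative sketch is OUTER APPLY with an explicit ORDER BY, so the per-row TOP 1 is deterministic (I'm assuming col_1 lives in table_1, and the ordering column is a stand-in; pick whatever defines "top" for you):
SELECT TOP 1246 a.id, x.col_1 AS [columnAlias]
FROM table_3 a
OUTER APPLY (SELECT TOP 1 t.col_1
             FROM table_1 t
             INNER JOIN table_2 c ON t.id = c.id
             WHERE t.id = a.id
             ORDER BY t.col_1) x;  -- stand-in ordering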
Finally, if you have some other intention with your query, you should ask another question rather than revising this one, since revising it could invalidate this answer.
"Running continuously on an infinite loop" when you get to a specific record makes me suspicious that you are encountering a deadlocking situation. There isn't anything obvious in what you posted that would cause it, so I'd suspect there is a context where this is running, e.g. it's part of a several step transaction, that could be the cause.