Query seems to still be executing even though it is FINISHED in Impala - hive

I am running a query that reaches its limit at around 8 minutes and shows as FINISHED in the query details in CM, yet it does not actually stop until around 19 minutes. Why is this happening? I saw the information below in the query details:
First row fetched: 7.9m (75ms)
Last row fetched: 8.0m (7.21s)
Released admission control resources: 19.5m (11.6m)
Unregister query: 19.5m (103ms)
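One plausible explanation (an assumption based on the timeline above, not something the profile alone proves): all rows were fetched by 8.0m, but the client kept the query handle open, so Impala held the admission control resources until the query was finally closed and unregistered at 19.5m. If that is the case, a sketch of a mitigation is to set an idle query timeout in the session, so a fully fetched query that nobody closes gets cancelled and releases its resources sooner:

-- a sketch, assuming the client is holding the query open after the last fetch;
-- QUERY_TIMEOUT_S cancels a query once it has been idle this many seconds (0 = never)
SET QUERY_TIMEOUT_S=60;

SELECT * FROM my_table LIMIT 100;  -- my_table is a hypothetical placeholder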

Related

BigQuery LegacySQL invalid snapshot in time using special decorator @0

I'm running BigQuery legacy SQL, and using the special decorator @0 gives an error on any table:
Invalid snapshot time 1570001838355 for table
upbeat-stratum-242175:my_dataset.my_data. Cannot read before
1570001838359
Running it again only changes the values to the current timestamp, but the two timestamps in the error always differ by ~4 seconds.
Also, this happens regardless of the table I run it against.
https://cloud.google.com/bigquery/table-decorators
@0 seems broken right now. Could you use @-604700000 instead?
The calculation is 3600000 * 24 * 7 = 604800000, with a little cut off the end.
It lets you time travel back to 7 days ago, which is practically the same as @0 (when it works).
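For reference, a relative decorator goes inside the legacy SQL table reference itself; a minimal sketch using the table named in the question, where @-604700000 asks for a snapshot from just under 7 days ago (the offset is in milliseconds):

SELECT *
FROM [upbeat-stratum-242175:my_dataset.my_data@-604700000]
LIMIT 10;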

SQL SentryOne Plan Explorer, What is Duration?

I've just started using the SentryOne Plan Explorer to help tune my SQL Server queries, and I have a question I can't seem to find an answer for: what is Duration?
I would think it's the total time it took for the query to run. However, every query I am testing runs much longer in real time than what ends up showing under Duration.
Below is a screenshot of what I'm seeing. Watching the query run takes over 2 minutes, but the final Duration ends up being .770?
Thanks for any insight!
This is the answer provided by SentryOne:
While a query is running, we show clock time on the status bar. However, at the end, we sum up the total duration, in milliseconds, as reported by the trace rows we collected. We subtract duration from any trace rows that are discarded (e.g. events that don't generate plans, like WAITFOR).
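A minimal illustration of that subtraction (a hypothetical batch, not taken from the question): the WAITFOR below dominates the wall-clock time, but because WAITFOR generates no plan, its trace row is discarded and Duration would reflect only the SELECT.

-- wall clock for this batch is a bit over 2 minutes
WAITFOR DELAY '00:02:00';          -- discarded: generates no plan
SELECT COUNT(*) FROM sys.objects;  -- only this statement's duration is summed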

SQL Server Timeout based on locking?

I've got a SQL query that normally runs in about 0.5 seconds, and now, as our server gets slightly busier, it is timing out. It involves a join on a table that is getting updated every couple of seconds as a matter of course.
That is, the EmailDetails table gets updated by a mail service to notify me of reads, clicks, etc., and that table gets updated every couple of seconds as those notifications come through. I'm wondering if the join below is timing out because those updates are locking the table and blocking my SQL join.
If so, any suggestions how to avoid my timeout would be appreciated.
SELECT TOP 25
    dbo.EmailDetails.Id,
    dbo.EmailDetails.AttendeesId,
    dbo.EmailDetails.Subject,
    dbo.EmailDetails.BodyText,
    dbo.EmailDetails.EmailFrom,
    dbo.EmailDetails.EmailTo,
    dbo.EmailDetails.EmailDetailsGuid,
    dbo.Attendees.UserFirstName,
    dbo.Attendees.UserLastName,
    dbo.Attendees.OptInTechJobKeyWords,
    dbo.Attendees.UserZipCode,
    dbo.EmailDetails.EmailDetailsTopicId,
    dbo.EmailDetails.EmailSendStatus,
    dbo.EmailDetails.TextTo
FROM
    dbo.EmailDetails
LEFT OUTER JOIN
    dbo.Attendees ON (dbo.EmailDetails.AttendeesId = dbo.Attendees.Id)
WHERE
    (dbo.EmailDetails.EmailSendStatus = 'NEEDTOSEND' OR
     dbo.EmailDetails.EmailSendStatus = 'NEEDTOTEXT')
    AND
    dbo.EmailDetails.EmailDetailsTopicId IS NOT NULL
ORDER BY
    dbo.EmailDetails.EmailSendPriority,
    dbo.EmailDetails.Id DESC
Adding Execution Plan:
https://dl.dropbox.com/s/bo6atz8bqv68t0i/emaildetails.sqlplan?dl=0
It takes 0.5 seconds on my fast MacBook, but on my real server with magnetic media it takes 8 seconds.
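If those frequent updates are indeed blocking the join, one common mitigation (a sketch under that assumption; MyEmailDb is a placeholder name) is to turn on read committed snapshot isolation, so the SELECT reads the last committed row versions instead of waiting on the updaters' locks:

-- a sketch, assuming writer-vs-reader blocking is the cause; MyEmailDb is hypothetical
ALTER DATABASE MyEmailDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

An index keyed on EmailSendStatus could also shorten the scan, which narrows the window in which the SELECT and the updates collide.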

Low Oracle Trigger Performance after 10 hours

We have an AFTER trigger defined on INSERT on a table. Everything works fine for about 10 hours, but after that, the INSERT query starts taking minutes. We are inserting around 100 rows per second. On the Oracle side we have seen that redo logs, undo segment extension, and write waits take up most of the time during the problem period. The undo segment extension limit is not being reached, yet undo segment extension is still taking time, and the undo tablespace is fine during that period.
So is this the result of the redo logs?
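One way to test that hypothesis (a diagnostic sketch; interpretation depends on your workload) is to check whether log-file waits dominate the instance-wide wait statistics during the slow period:

-- a diagnostic sketch: waits like 'log file switch (checkpoint incomplete)'
-- typically point at redo logs that are too small or too few
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'log file%'
ORDER  BY time_waited DESC;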

Sporadic Execution Times for Query in SQL Server 2008

I have been running some speed tests on a query where I insert 10,000 records into a table that has millions of records (over 24 million). The query (below) will not insert duplicate records.
MERGE INTO [dbo].[tbl1] AS tbl
USING (SELECT col2, col3, MAX(col4) col4, MAX(col5) col5, MAX(col6) col6
       FROM #tmp
       GROUP BY col2, col3) AS src
ON (tbl.col2 = src.col2 AND tbl.col3 = src.col3)
WHEN NOT MATCHED THEN
    INSERT (col2, col3, col4, col5, col6)
    VALUES (src.col2, src.col3, src.col4, src.col5, src.col6);
The execution times of the above query are sporadic; ranging anywhere from 0:02 seconds to 2:00 minutes.
I am running these tests within SQL Server Management Studio via a script that creates the 10,000 rows of data (in the #tmp table) and then fires the MERGE query above. The point being: the exact same script is executed for each test that I run.
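A sketch of the per-test timing wrapper that description implies (the #tmp population is elided; only the MERGE is being measured):

-- a sketch of one test iteration; #tmp is assumed to already hold the 10,000 rows
DECLARE @t0 datetime2 = SYSUTCDATETIME();

-- ... the MERGE statement from above runs here ...

SELECT DATEDIFF(MILLISECOND, @t0, SYSUTCDATETIME()) AS elapsed_ms;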
The execution times bounce around from seconds to minutes as in:
Test #1: 0:10 seconds
Test #2: 1:13 minutes
Test #3: 0:02 seconds
Test #4: 1:56 minutes
Test #5: 0:05 seconds
Test #6: 1:22 minutes
One metric that I find interesting is that the seconds/minutes alternating sequence is relatively consistent - i.e., every other test, the result is in seconds.
Can you give me any clues as to what may be causing this query to have such sporadic execution times?
I wish I could say what the cause of the sporadic execution times was, but I can say what I did to work around the problem...
I created a new database and target table and added 25 million records to the target table. Then I ran my original tests on the new database/table by repeatedly inserting 10k records into the target table. The results were consistent execution times of approx. 0:07 seconds (for each 10k insert).
For kicks I did the exact same testing on a machine that has twice as much CPU/memory as my dev laptop. The results were consistent execution times of 0:00 seconds (It's time for a new dev machine ;))
I dislike not discovering the cause of the problem, but in this case I'm going to have to call it good and move on. Hopefully, someday, a StackO die-hard can update this question with a good answer.
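For any die-hard who does pick this up, one starting point (a diagnostic sketch, not a diagnosis) would be to snapshot the server's top waits right after a fast run and again after a slow run, then compare:

-- a diagnostic sketch: compare top waits after a fast run vs. after a slow run
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM   sys.dm_os_wait_stats
ORDER  BY wait_time_ms DESC;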