Trial column in Client Statistics - SQL

I started attaching Client Statistics to examine my queries. There are "Trial x" columns; what do they mean?
I use SQL Server 2014.

The Trial X columns record your query executions: a new Trial column is created each time you execute the query, and at most the last 10 trials are shown. The average values across trials appear in the Average column.
MSDN says it “Displays information about the query execution grouped into categories. When Include Client Statistics is selected from the Query menu, a Client Statistics window is displayed upon query execution. Statistics from successive query executions are listed along with the average values. Select Reset Client Statistics from the Query menu to reset the average.”

Every time you run a query with Include Client Statistics enabled, SQL Server keeps a run history ("Trial n"). This lets you easily compare recent execution statistics. The history is generally useful only if you are running the same basic query repeatedly while making changes that may affect performance. The up and down arrows indicate improvement or regression compared to the previous run.
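For instance (dbo.Orders below is a hypothetical table, purely for illustration):

-- Enable Query > Include Client Statistics in SSMS (Shift+Alt+S), then
-- run the same query before and after a change (e.g., adding an index):
SELECT COUNT(*)
FROM dbo.Orders                      -- hypothetical table
WHERE OrderDate >= '2014-01-01';
-- Each execution adds a "Trial n" column (up to 10 are kept); the arrows
-- in the Client Statistics tab mark each metric as better or worse than
-- the previous trial.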

SQL query for inventory management

I hope I can explain the problem I'm having trouble with.
I have to write a stepwise methodology, using pseudocode or SQL, to auto-generate a list of products/items with low stock or near expiry from the inventory database. The list must be updated at 12 a.m. daily.
I tried this
CREATE EVENT IF NOT EXISTS update_table
ON SCHEDULE EVERY 1 DAY STARTS '2022-05-22 00:00:00'
ON COMPLETION PRESERVE ENABLE
DO
  SELECT inventory.products
  FROM inventory
  WHERE inventory.stocks < inventory.required_stocks;
Your stated requirement is to run some sort of report very soon after the beginning of each calendar day.
The next question you must answer is this: what will you do with that report? Will you simply drop it into a "low_stock" table someplace in your database? Will you format it into an email message and send it to your purchasing department? It will be difficult to write "pseudocode" for your requirement without first analyzing the overall business process you intend to enhance.
Various RDBMSs have ways of doing scheduled things at particular times of day. You've shown the EVENT setup provided by MariaDB / MySQL. SQL Server has its "Jobs" system (SQL Server Agent). PostgreSQL has the pg_cron extension.
The thing is, you can't just run SELECT statements from within these scheduled database actions: the result sets have no place to go in that context. You can do CREATE TABLE midnight_run AS SELECT whatever ... to place the results in a table, but then the results are still just sitting in another table.
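For instance, a minimal MariaDB / MySQL sketch that writes the results into a low_stock table instead of trying to SELECT them (the low_stock table and the column names are assumptions based on the question):

CREATE EVENT IF NOT EXISTS update_low_stock
ON SCHEDULE EVERY 1 DAY STARTS '2022-05-22 00:02:00'  -- a couple of minutes past midnight; see the tip below
ON COMPLETION PRESERVE ENABLE
DO
  INSERT INTO low_stock (product)   -- assumed table; create it beforehand
  SELECT products
  FROM inventory
  WHERE stocks < required_stocks;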
If you want to get the results out of the DBMS, you'll need a UNIXish cron job or a Windowsish scheduled task running an appropriate application at midnight each day.
Pro tip: Do your best to avoid scheduling stuff for precisely midnight. Many things run then. If you wait until a couple of minutes after the hour, your code is less likely to contend with other midnight code.
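If you're on SQL Server instead, the Agent equivalent is roughly this (all job, database, and table names here are made up for illustration):

USE msdb;
EXEC dbo.sp_add_job         @job_name = N'LowStockReport';
EXEC dbo.sp_add_jobstep     @job_name = N'LowStockReport',
                            @step_name = N'Refresh low_stock',
                            @subsystem = N'TSQL',
                            @database_name = N'Inventory',
                            @command = N'INSERT INTO low_stock (product)
                                         SELECT products FROM inventory
                                         WHERE stocks < required_stocks;';
EXEC dbo.sp_add_jobschedule @job_name = N'LowStockReport',
                            @name = N'DailyJustAfterMidnight',
                            @freq_type = 4,               -- daily
                            @freq_interval = 1,
                            @active_start_time = 000200;  -- 00:02:00, per the tip above
EXEC dbo.sp_add_jobserver   @job_name = N'LowStockReport';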

SQL Method for Cascading Workload Based on Rank and Available Hours

Recently I created an automated production scheduling tool in Excel that assigns a rank to items being produced in the same process, and then uses that rank in combination with the workload to create a schedule.
It functions exactly the way it is intended to, but due to the large amount of data, and because it is Excel, performance is very slow, which is why I am looking to move the calculations over to SQL.
The general logic is like this:
-Always produce everything from the first day before the second day
-Always produce items from an earlier rank before items from a later rank
You can see how this plays out in the image below: the line has 21.5 available hours today, so items are produced on day 1 until the total reaches 21.5, with the remainder carried over to day 2, and so on.
I was able to do this in Excel using lengthy position-based formulas, but I am trying to think of a way to get the same result in SQL without having to rely on looking at the row above.
I am not sure how to convey something like 'subtract from the available time the production time of higher-priority items produced on the same day'.
I apologize if the question is unclear, but any advice would be appreciated.
Image of Production Hours Cascading by Priority and Day
Example of Position-Based Formula
Thanks to shawnt00, who put me on the right track. Ultimately I had to modify the CASE statements a bit to go off the cumulative total instead, but I was able to get the desired results using a SUM() OVER (PARTITION BY ... ORDER BY ...) expression.
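For anyone landing here later, a minimal sketch of that cumulative-total approach (the production_items table, its columns, and the 21.5-hour daily capacity are assumptions based on the question):

-- Assumed table: production_items(item, priority_rank, hours)
WITH cumulative AS (
    SELECT item,
           priority_rank,
           hours,
           -- Running total in priority order; no reference to "the row
           -- above" is needed.
           SUM(hours) OVER (ORDER BY priority_rank
                            ROWS UNBOUNDED PRECEDING) AS running_hours
    FROM production_items
)
SELECT item,
       priority_rank,
       hours,
       running_hours,
       -- With 21.5 available hours per day, anything whose running total
       -- fits within n * 21.5 lands on day n.
       CEILING(running_hours / 21.5) AS production_day
FROM cumulative
ORDER BY priority_rank;

If several production lines are scheduled at once, adding PARTITION BY on the line column to the OVER clauses keeps each line's running total separate.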

CDC LSNs: queries return different minimum value

I have CDC enabled on a table, and I'm trying to get the minimum LSN for that table to use in an ETL job. However, when I run this query
select sys.fn_cdc_get_min_lsn('dbo_Table')
I get a different result to this query
select min(__$start_lsn) from cdc.dbo_Table_CT
Shouldn't these queries return the same values? If they don't, why not? And how can I get them back in sync?
The first query:
select sys.fn_cdc_get_min_lsn('dbo_Table')
Invokes a system function that returns the lowest POSSIBLE lsn for the capture instance. This value is set when the cleanup function runs. It is recorded in, and queried from, cdc.change_tables.
The second query:
select min(__$start_lsn) from cdc.dbo_Table_CT
Looks at the actual capture instance and returns the lowest ACTUAL lsn for the instance. This value is set when the first actual change to the instance is logged after the cleanup function runs. It is recorded in, and queried from, cdc.dbo_Table_CT.
They're unlikely to tie out, statistically speaking. For the purposes of an ETL job, the call to the system function will likely be quicker, and it is a more accurate reflection of when the current set of change records started being accumulated.
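As an illustration, the usual ETL pattern built on that function (the capture-instance name dbo_Table comes from the question; the CDC functions are standard):

-- Bound the extraction window with the capture instance's valid LSN range.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_Table');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

-- One row per change between the two LSNs; 'all' returns update rows
-- with their new values only.
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Table(@from_lsn, @to_lsn, 'all');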

Dynamically filtering large query result for presentation in SSRS

We have a system that records data captured from field equipment every minute to a SQL Server DB. This data is used for a number of purposes, one of which is charting in reports via SSRS.
The issue is that with such a high volume of data, when a report is run for a period of, say, 3 months, the volume of data returned causes excessive report rendering times.
I've been thinking of dynamically reducing the amount of data returned based on the start and end periods chosen. Something along the lines of a sliding scale: based on the duration between the start and end period, I apply different levels of filtering, so that more filtering occurs for larger periods and less or none for smaller periods.
There is still a need to be able to produce higher resolution (as in more data points returned) reports for troubleshooting purposes.
For example:
Scenario 1:
User is executing a report for a period of 3 months. The result set returned by the query is reduced for performance reasons without adversely affecting what the user wants to see (the chart is still representative of the changes over time).
Scenario 2:
User executes the report for a period of 1 hour, in order to look for potential indicator(s) of problems with field devices while troubleshooting the system. For this short time period, no filtering is applied.
My first thought was to use a modulo operation on the primary key of the data (which is an identity field), whereby the divisor is chosen depending on the difference between the start and end dates.
For example, something like: if the difference between the start and end dates for the report execution period is 5 weeks, choose a divisor of 5 and apply a mod to the PK, selecting rows where the result equals zero.
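In rough T-SQL, that idea might look like the sketch below (the Readings table, its columns, and the divisor breakpoints are all hypothetical; @StartDate and @EndDate would come from the report parameters):

DECLARE @days int = DATEDIFF(day, @StartDate, @EndDate);

-- Let the divisor grow with the size of the reporting window.
DECLARE @divisor int = CASE
    WHEN @days <= 1  THEN 1   -- troubleshooting window: keep every row
    WHEN @days <= 7  THEN 5
    WHEN @days <= 35 THEN 15
    ELSE 60
END;

SELECT Id, ReadingTime, Value
FROM Readings
WHERE ReadingTime >= @StartDate
  AND ReadingTime <  @EndDate
  AND Id % @divisor = 0;      -- keeps roughly 1 row in every @divisor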
I would love to get feedback as to whether this sounds like a valid approach or whether there is a better way to do this.
Thanks.

Business Objects (webi) stuck forever on retrieving data for specific dates

I am very new to BO Web Intelligence.
I am running a very simple query: retrieve Sales Amount for dates between 2012 and 2013. For just this simple query, when I run it, BO crashes or gets stuck on the "please wait" window. Why is this happening? If I select only 3 or 4 days, say between Jan 1st 2012 and Jan 4th 2012, it runs fine. Is there anything I am doing wrong on my end? This is in production.
I also wanted to point out that I have tried limiting my data set to a specific region, etc.
Too many unknowns to successfully assist. I presume this is Webi 3.1.x or 4.1.x, that Sales Amount is a measure, and that your query includes a date object with no time component. If you only have these two objects in your results pane and have the region in your conditions, then pulling something like [Date] and Sum([Sales Amt]) should not take long to execute. As the previous poster suggests, try executing the SQL in a tool like SQL*Developer or Management Studio, depending on where the source database / OLAP data is stored.
Even though you've limited the data to a "region", this may still be too much data. Try selecting a smaller result set of 100 rows, or changing the "retrieve duplicate rows" option on the query panel.
If possible, post the query from your report, using generic object names.