Create Hourly Average query from Logger table - sql

I use a logger table with timestamp, point id, and value fields.
The control system adds a record to the logger table each time a point id's value changes.
Here's an example of the SELECT query of the logger table:
I want to run a query on that table and get back 1-hour average values for several tags (point ids - for example, a 1-hour average for point_id=LT_174, a 1-hour average for point_id=AT_INK, and so on).
I would like to present the results as a pivot table.
Can I do that? And if it's impossible to get all the requested tags together, how can I run the same query for one tag? (I use VB.Net as the platform for running this query, so I could build the result by looping over the requested tags and calculating one tag at a time.)
I'll be happy to get ideas and suggestions for this problem.
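Not part of the question, but a minimal sketch of the kind of query being asked about, assuming a SQL Server style database and a logger table with columns log_time, point_id and value (the real table and column names aren't shown above). Conditional aggregation returns one pivoted row per hour with one averaged column per tag:

-- Hypothetical names: logger, log_time, point_id, value; DATEADD/DATEDIFF truncate to the hour (T-SQL)
SELECT
    DATEADD(hour, DATEDIFF(hour, 0, log_time), 0) AS hour_start,
    AVG(CASE WHEN point_id = 'LT_174' THEN value END) AS avg_LT_174,
    AVG(CASE WHEN point_id = 'AT_INK' THEN value END) AS avg_AT_INK
FROM logger
WHERE point_id IN ('LT_174', 'AT_INK')
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, log_time), 0)
ORDER BY hour_start;

Note that because the logger only stores change events, this is an average of the logged values in each hour, not a time-weighted average of the signal.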

Related

GCP Bigquery - query empty values from a record type value

I'm trying to query all resources that have empty records in a specific column, but I'm unable to make it work. Here's the query that I'm using:
SELECT
service.description,
project.labels,
cost AS cost
FROM
`xxxxxx.xxxxx.xxxx.xxxx`
WHERE
service.description = 'BigQuery' ;
Here's the results:
As you can see, I'm getting everything with that query, but as mentioned, I'm looking to get only the resources with empty records, for example records 229, 230, and so on.
It's worth mentioning that the schema for the column I'm trying to query is:
project.labels RECORD REPEATED
I mention the above because I tried several combinations of WHERE, but everything ended in an error.
To identify an empty repeated record, you can use ARRAY_LENGTH in the WHERE clause, as in the example below:
WHERE ARRAY_LENGTH(project.labels) = 0
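Applied to the query above, the complete filter might look like this (table name elided as in the question):

SELECT
  service.description,
  project.labels,
  cost AS cost
FROM
  `xxxxxx.xxxxx.xxxx.xxxx`
WHERE
  service.description = 'BigQuery'
  AND ARRAY_LENGTH(project.labels) = 0;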

Create and define functions to apply to different segments/filters that exist (Instead of doing one very long SQL query)

I am trying to come up with some arithmetic calculations for some survey data. I want to do these calculations for a number of segments and want to figure out how to do it without writing numerous SELECT statements.
This is what I have so far:
FACT table. This table holds survey data at a respondent level - for example, if a survey had 10 questions, this table will have 11 columns: a column to identify the respondent_ID and 10 other columns to hold the responses to those questions.
DIMENSION table. This table holds the segments we want to view the survey data by at a respondent level - for example, if we want to view survey responses by membership_status and age_bracket, this table will have 3 columns: a column to identify the respondent_ID, and two columns to identify membership_status and age_bracket.
OUTPUT.
I want to get aggregate calculations that summarize the responses to the survey overall and to each question. I also want to be able to get this information for every segment that exists in the DIMENSION table.
I can do the query below, however I'll need to do this for every segment:
SELECT
COUNT(DISTINCT(CASE WHEN f.QUESTION_1 IN ('8', '9', '10') THEN f.RESPONDENT_ID END))*1.0 / COUNT(DISTINCT(CASE WHEN f.QUESTION_1 IS NOT NULL THEN f.RESPONDENT_ID END))*1.0 AS CSAT_1
FROM FACT f
JOIN DIMENSION d ON f.RESPONDENT_ID = d.RESPONDENT_ID
WHERE d.MEMBERSHIP_STATUS = 'ACTIVE'
The calculation above gives us something called a top 3 box. That is just one calculation; I will need to do many of them. Additionally, every calculation will need to be done for each segment. To get a calculation for nonactive members, I would need to run another query with d.MEMBERSHIP_STATUS = 'INACTIVE', and I would need to run yet another query with no filter to get the overall calculation.
Is there a way I could store all the arithmetic calculations needed for my output as functions (maybe in a temp table or something)? My thought is that it would be better to define the functions somewhere, and then, when I need to calculate the output, somehow call them to do all the calculations I need and return them for every segment I have.
I can't fully envision how to get there, or whether this is even a good solution, so guidance and detailed SQL code would be extremely helpful. Examples please!
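One approach worth sketching (not from the question): instead of filtering to a single segment per query, group by the segment columns so every segment value comes back in one result set. On databases that support GROUPING SETS, the unfiltered overall row can be produced in the same pass. The d.AGE_BRACKET column name is an assumption taken from the description of the DIMENSION table:

SELECT
    d.MEMBERSHIP_STATUS,
    d.AGE_BRACKET,                    -- assumed column name from the description above
    COUNT(DISTINCT CASE WHEN f.QUESTION_1 IN ('8', '9', '10') THEN f.RESPONDENT_ID END) * 1.0
      / COUNT(DISTINCT CASE WHEN f.QUESTION_1 IS NOT NULL THEN f.RESPONDENT_ID END) AS CSAT_1
FROM FACT f
JOIN DIMENSION d ON f.RESPONDENT_ID = d.RESPONDENT_ID
GROUP BY GROUPING SETS (
    (d.MEMBERSHIP_STATUS),            -- one row per membership status (ACTIVE, INACTIVE, ...)
    (d.AGE_BRACKET),                  -- one row per age bracket
    ()                                -- one overall row, no filter
);

Each additional top-box style metric then becomes one more expression in the same SELECT list rather than another query.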

Microsoft Access query to retrieve random transactions with seemingly impossible criteria

I was asked to assist with developing a report to retrieve a 25% sample of random transactions within a specific date range. I am not a programmer but I was able to devise the following fairly quickly:
SELECT TOP 25 PERCENT account.CID, account.ACCT, account.NAME, log.DATE, log.action_txt, log.field_nm, log.from_data, log.to_data, log.tran_id, log.init
FROM account INNER JOIN log ON account.ACCT = log.ACCT
GROUP BY account.CID, account.ACCT, account.NAME, log.DATE, log.action_txt, log.field_nm, log.from_data, log.to_data, log.tran_id, log.init
HAVING (((log.DATE) Between #2/7/2018# And #6/15/2018#) AND ((log.action_txt)="mod" Or (log.action_txt)="del") AND ((log.init)="J1X"))
ORDER BY log.tran_dt
This returns 25% of the records within the date range. Each record row is unique but each account number potentially has multiple records on each day. In some cases the records have the same date and tran_id as well.
Upon further discussion with the requester, he actually wants to see all of the transactions for 25% of the accounts that have activity on each day within the date range. Thus if there were 100 accounts on 3/1/2018 with records in this table, he wants to see all of the transactions for 25 of those accounts; if there were 60 accounts on 3/2/2018 with records in this table, he wants to see all of the transactions for 15 of those accounts; and so on.
I was thinking that an Access module would work best in this scenario as I believe there are multiple parts to this. I figured that I need a function to loop through the date range and for each day:
1. Count the account numbers only one time
2. Return all of the transactions for 25% of the total accounts
But as I mentioned, I am not a programmer and I am exhausted from searching possible solutions for the many parts.
I think the key to your question is that you only really need a pseudo-random selection of results for your report, so you can force the random number generator to reorder your results based on a value in the record and the current time.
Something like this should work - I assume your action_txt field is a text field, so I take the length of each value and combine it with the current date/time to create a pseudo-random number that can be sorted on.
All I really do is change your ORDER BY line.
See if this works for you:
SELECT TOP 25 PERCENT
account.CID, account.ACCT, account.NAME, log.DATE, log.action_txt, log.field_nm, log.from_data,
log.to_data, log.tran_id, log.init
FROM account
INNER JOIN log ON account.ACCT = log.ACCT
GROUP BY account.CID, account.ACCT, account.NAME, log.DATE, log.action_txt, log.field_nm, log.from_data, log.to_data, log.tran_id, log.init
HAVING (((log.DATE) Between #2/7/2018# And #6/15/2018#) AND ((log.action_txt)="mod" Or (log.action_txt)="del") AND ((log.init)="J1X"))
ORDER BY Rnd(CLng(Now()*Len(log.action_txt))-(Now()*Len(log.action_txt)));
Modified from a similar idea in another StackOverflow question and response.
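For the revised requirement (all transactions for 25% of the accounts that have activity on each day), a hedged sketch along these lines might work in Access; the table and field names follow the question, and the subquery's ORDER BY would need a Rnd-based expression like the one above to make each day's pick random rather than alphabetical:

SELECT l.*
FROM log AS l
WHERE l.DATE Between #2/7/2018# And #6/15/2018#
AND l.ACCT In
    (SELECT TOP 25 PERCENT l2.ACCT
     FROM log AS l2
     WHERE l2.DATE = l.DATE
     GROUP BY l2.ACCT
     ORDER BY l2.ACCT);

For each transaction date in the range, the correlated subquery lists the distinct accounts active that day and keeps the top 25 percent of them, so the outer query returns all of that day's transactions for those accounts.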

SQL - How to insert a subquery into a query in order to retrieve unique values

I am writing reports using Report Builder 3, and I need some help with an SQL query to get unique values.
Using the following sample data:
I need to be able to get one single value for feeBudRec returned for each feeRef. The value of feeBudRec is always the same for each individual feeRef (e.g. every data row for feeRef LR01 will have a feeBudRec of 1177).
The reason I need a single feeBudRec value for each feeRef is that I need to be able to total the feeBudRec values for each feeRef in a feePin (e.g. for feePin LEE, I need to total the feeBudRec values for LR01 and PS01, which should be 1177 + 1957 to get a total of 3134; but if I don't have unique values for feeBudRec, it will add the values for each row, which would bring back a total of 11756 for the 8 LEE rows).
My experience with writing SQL queries is very limited, but from searching the internet, it looks like I'll need to put a subquery into my SQL query in order to get a single unique feeBudRec figure for each feeRef, and that a subquery that gets a minimum feeBudRec value for each feeRef should work for me.
Based on examples I've found, I think the following subquery should work:
SELECT a.feeRef, a.feeBudRec
FROM (
SELECT uvw_EarnerInfo.feeRef, Min(uvw_EarnerInfo.feeBudRec) as AvailableTime
FROM uvw_EarnerInfo
GROUP BY
uvw_EarnerInfo.feeRef
) as x INNER JOIN uvw_EarnerInfo as a ON a.feeRef = x.feeRef AND a.feeBudRec = x.AvailableTime;
The problem is that I have no idea how to insert that subquery into the query I'm using to produce the report (as follows):
SELECT
uvw_EarnerInfo.feeRef
,uvw_EarnerInfo.PersonName
,uvw_EarnerInfo.PersonSurname
,uvw_EarnerInfo.feePin
,uvw_RB_TimeLedger.TimeDate
,uvw_RB_TimeLedger.matRef
,uvw_RB_TimeLedger.TimeTypeCode
,uvw_RB_TimeLedger.TimeCharge
,uvw_RB_TimeLedger.TimeElapsed
,uvw_WoffTimeByTime.WoffMins
,uvw_WoffTimeByTime.WoffCharge
,uvw_EarnerInfo.feeBudRec
,uvw_EarnerInfo.personOccupation
FROM
uvw_RB_TimeLedger
LEFT OUTER JOIN uvw_WoffTimeByTime
ON uvw_RB_TimeLedger.TimeId = uvw_WoffTimeByTime.TimeId
RIGHT OUTER JOIN uvw_EarnerInfo
ON uvw_EarnerInfo.feeRef = uvw_RB_TimeLedger.feeRef
WHERE
uvw_RB_TimeLedger.TimeDate >= #TimeDate
AND uvw_RB_TimeLedger.TimeDate <= #TimeDate2
If that subquery will get the correct results, can anyone please help me with inserting it into my report query? Otherwise, can anyone let me know what I will need to do to get a unique feeBudRec value for each feeRef?
It depends on the exact schema, but assuming uvw_EarnerInfo lists the Pin, Ref, and Rec without duplicates, try adding an extra column (after personOccupation) on the end of your query, such as:
feeBudRecSum = (Select SUM(FeeBudRec) From uvw_EarnerInfo x
where x.feePin = uvw_EarnerInfo.feePin
Group By x.FeePin)
Note that you would not Sum these values in your report. This column should have the total you are looking for.
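For reference, a hedged sketch of how that extra column might slot into the report query from the question (SELECT list abbreviated; the "column = expression" alias form assumes a T-SQL style data source, and the #TimeDate parameters are kept as written in the question):

SELECT
    uvw_EarnerInfo.feeRef
    -- ... the other report columns as before ...
    ,uvw_EarnerInfo.feeBudRec
    ,uvw_EarnerInfo.personOccupation
    ,feeBudRecSum = (Select SUM(FeeBudRec) From uvw_EarnerInfo x
                     where x.feePin = uvw_EarnerInfo.feePin
                     Group By x.FeePin)
FROM
    uvw_RB_TimeLedger
    LEFT OUTER JOIN uvw_WoffTimeByTime
        ON uvw_RB_TimeLedger.TimeId = uvw_WoffTimeByTime.TimeId
    RIGHT OUTER JOIN uvw_EarnerInfo
        ON uvw_EarnerInfo.feeRef = uvw_RB_TimeLedger.feeRef
WHERE
    uvw_RB_TimeLedger.TimeDate >= #TimeDate
    AND uvw_RB_TimeLedger.TimeDate <= #TimeDate2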
The key to Report Builder is to get your query correct from the outset and let the wizard then structure your report for you. It takes all the hard work out of structuring your report manually.
I haven't used Report Builder for a while now, but in the query builder of the report, which displays a graphical representation of your query, you should be able to drag and drop columns in and out of the query set. Dragging a column upwards and out of the box (showing your columns) has the effect of causing your report to break on that column.
If you restructure this way you will probably have to run the report generator again to regenerate the report and restructure it.
Once you are happy with the structure you can then begin to add the summary columns.

Group by run when there is no run number in data (was Show how changing the length of a production run affects time-to-build)

It would seem that there is a much simpler way to state the problem. Please see Edit 2, following the sample table.
I have a number of different products on a production line. I have the date that each product entered production. Each product has two identifiers: item number and serial number. I have the total number of labour hours for each product by item number and by serial number (i.e. I can tell you how many hours went into each object that was manufactured and what the average build time is for each kind of object).
I want to determine how (if) varying the length of production runs affects the average time it takes to build a product (item number). A production run is the sequential production of multiple serial numbers for a single item number. We have historical records going back several years with production runs varying in length from 1 to 30.
I think to achieve this, I need to be able to assign a 'run id'. To me, that means building a query that sorts by start date and calculates a new unique value at each change in item number. If I knew how to do that, I could solve the rest of the problem on my own.
So that suggests a series of related questions:
Am I thinking about this the right way?
If I am on the right track, how do I generate those run id values? Calculate and store is an option, although I have a (misguided?) preference for direct queries. I know exactly how I would generate the run numbers in Excel, but I have a (misguided?) preference to do this in the database.
If I'm not on the right track, where might I find that track? :)
Edit:
Table structure (simplified) with sample data:
AutoID  Item    Serial  StartDate   Hours  RunID (proposed calculation)
1       Legend  1234    2010-06-06  10     1
3       Legend  1235    2010-06-07  9      1
2       Legend  1237    2010-06-08  8      1
4       Apex    1236    2010-06-09  12     2
5       Apex    1240    2010-06-10  11     2
6       Legend  1239    2010-06-11  10     3
7       Legend  1238    2010-06-12  8      3
I have shown that start date, serial, and autoID are mutually unrelated. I have shown the expectation that labour goes down as the run length increases (but this is a 'fact' only via received wisdom, not data analysis). I have shown what I envision as the heart of the solution, that being a RunID that reflects sequential builds of a single item. I know that if I could get that runID, I could group by run to get counts, averages, totals, max, min, etc. In addition, I could do something like hours/ to get percentage change from the start of the run. At that point I could graph the trends associated with different run lengths either globally across all items or on a per item basis. (At least I think I could do all that. I might have to muck about a bit, but I think I could get it done.)
Edit 2: This problem would appear to be: how do I get the 'starting' member (earliest start date) of each run when I don't already have a runID? (The runID shown in the sample table does not exist and I was originally suggesting that being able to calculate runID was a potentially viable solution.)
AutoID  Item
1       Legend
4       Apex
6       Legend
I'm assuming that having learned how to find the first member of each run that I would then be able to use what I've learned to find the last member of each run and then use those two results to get all other members of each run.
Edit 3: my version of a query that uses the AutoID of the first item in a run as the RunID for all units in a run. This was built entirely from samples and direction provided by Simon, who has the accepted answer. Using this as the basis for grouping by run, I can produce a variety of run statistics.
SELECT first_product_of_run.AutoID AS runID, run_sibling.AutoID AS itemID, run_sibling.Item, run_sibling.Serial, run_sibling.StartDate, run_sibling.Hours
FROM (SELECT first_of_run.AutoID, first_of_run.Item, first_of_run.Serial, first_of_run.StartDate, first_of_run.Hours
FROM dbo.production AS first_of_run LEFT OUTER JOIN
dbo.production AS earlier_in_run ON first_of_run.AutoID - 1 = earlier_in_run.AutoID AND
first_of_run.Item = earlier_in_run.Item
WHERE (earlier_in_run.AutoID IS NULL)) AS first_product_of_run LEFT OUTER JOIN
dbo.production AS run_sibling ON first_product_of_run.Item = run_sibling.Item AND first_product_of_run.AutoID <> run_sibling.AutoID AND
first_product_of_run.StartDate < run_sibling.StartDate LEFT OUTER JOIN
dbo.production AS product_between ON first_product_of_run.Item <> product_between.Item AND
first_product_of_run.StartDate < product_between.StartDate AND product_between.StartDate < run_sibling.StartDate
WHERE (product_between.AutoID IS NULL)
Could you describe your table structure some more? If the "date that each product entered production" is a full timestamp, or if there is a sequential identifier across products, you can write queries to identify the first and last products of a run. From that, you can assign IDs to the runs or calculate their lengths.
Edit:
Once you've identified 1, 4, and 6 as the starts of runs, you can use this query to find the other IDs in each run:
select first_product_of_run.AutoID, run_sibling.AutoID
from first_product_of_run
left join production run_sibling on first_product_of_run.Item = run_sibling.Item
and first_product_of_run.AutoID <> run_sibling.AutoID
and first_product_of_run.StartDate < run_sibling.StartDate
left join production product_between on first_product_of_run.Item <> product_between.Item
and first_product_of_run.StartDate < product_between.StartDate
and product_between.StartDate < run_sibling.StartDate
where product_between.AutoID is null
first_product_of_run can be a temp table, table variable, or sub-query that you used to find the start of a run. The key is the where product_between.AutoID is null. That restricts the results to only pairs where no different items were produced between them.
Edit 2, here's how to get the first of each run:
select first_of_run.AutoID
from
(
select product.AutoID, product.Item, MAX(previous_product.StartDate) as PreviousDate
from production product
left join production previous_product on product.AutoID <> previous_product.AutoID
and product.StartDate > previous_product.StartDate
group by product.AutoID, product.Item
) first_of_run
left join production earlier_in_run
on first_of_run.PreviousDate = earlier_in_run.StartDate
and first_of_run.Item = earlier_in_run.Item
where earlier_in_run.AutoID is null
It's not pretty, and will break if StartDate is not unique. The query could be simplified by adding a sequential and unique identifier with no gaps. In fact, that step will probably be necessary if StartDate is not unique. Here's how it would look:
select first_of_run.AutoID
from production first_of_run
left join production earlier_in_run
on (first_of_run.Sequence - 1) = earlier_in_run.Sequence
and first_of_run.Item = earlier_in_run.Item
where earlier_in_run.AutoID is null
Using outer joins to find where things aren't still twists my brain, but it's a very powerful technique.
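Not from the answer above, but since the Edit 3 query targets dbo.production on what looks like SQL Server: if window functions are available, the run grouping can also be generated directly with the "gaps and islands" trick, where the difference between a global row number and a per-item row number is constant within each run:

SELECT
    AutoID, Item, Serial, StartDate, Hours,
    MIN(AutoID) OVER (PARTITION BY Item, grp) AS RunID  -- smallest AutoID in the run as a label (1, 4, 6 for the sample data)
FROM (
    SELECT p.AutoID, p.Item, p.Serial, p.StartDate, p.Hours,
           ROW_NUMBER() OVER (ORDER BY p.StartDate)
             - ROW_NUMBER() OVER (PARTITION BY p.Item ORDER BY p.StartDate) AS grp
    FROM dbo.production AS p
) AS runs
ORDER BY StartDate;

Grouping by RunID then gives run length, total hours, average hours, and the other per-run statistics described in the question.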