SQL statement that retrieves formulas for two different dates in db

All,
I have three tables in total. The first table, 'rollup1', contains the number of views and the number of clicks for a campaign, as well as a one-up number for the day field (the largest number in the column represents the current date). A second table, 'rollup2', contains the earnings for the campaign; it also contains the same one-up number for the day field. The third table, 'campaigns', contains the IDs/names for the campaigns. campaigns.id = rollup1.id = rollup2.id and rollup1.day = rollup2.day.
I want to generate an SQL query that lists the campaign id, the name, a calculated value for yesterday, and the same calculated value for today. The calculated value I'm looking for is (earnings/clicks)*1000.
The results will look like:
id | name | yesterday | today
a | Campaign1 | $0.05 | $0.010
I think I can use CASE statements, but I can't seem to get it right. Here's what I have so far. It calculates the value for yesterday, but not the one for today, and I need them side by side.
SELECT campaigns.id, campaigns.name, rollup1.views, rollup1.clicks, rollup2.costs,
       ROUND((rollup2.costs / rollup1.views) * 1000, 2) AS yesterday
FROM campaigns, rollup1, rollup2
WHERE campaigns.id = rollup1.campaign_id
  AND campaigns.id = rollup2.campaign_id
  AND rollup1.dayperiod = rollup2.dayperiod
  AND rollup1.dayperiod = (SELECT MAX(rollup1.dayperiod) - 1 FROM rollup1)
Thanks for any help you can provide.
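One way to get both values side by side is conditional aggregation over the last two dayperiods. The following is only a sketch, assuming the column names from the query above (campaign_id, dayperiod, views, clicks, costs) and keeping that query's (costs/views)*1000 expression; swap in earnings/clicks if the formula from the description is the intended one:
-- Sketch only: d.today is the current (largest) dayperiod, d.today - 1 is yesterday.
SELECT c.id,
       c.name,
       ROUND(SUM(CASE WHEN r1.dayperiod = d.today - 1 THEN r2.costs END)
           / SUM(CASE WHEN r1.dayperiod = d.today - 1 THEN r1.views END) * 1000, 2) AS yesterday,
       ROUND(SUM(CASE WHEN r1.dayperiod = d.today THEN r2.costs END)
           / SUM(CASE WHEN r1.dayperiod = d.today THEN r1.views END) * 1000, 2) AS today
FROM campaigns c
JOIN rollup1 r1 ON r1.campaign_id = c.id
JOIN rollup2 r2 ON r2.campaign_id = c.id AND r2.dayperiod = r1.dayperiod
CROSS JOIN (SELECT MAX(dayperiod) AS today FROM rollup1) d
WHERE r1.dayperiod >= d.today - 1
GROUP BY c.id, c.name;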

Related

SQL query to get the data as per the input

Let's say I am passing the input as input=2021-01-21,CGT to an SQL query. CGT is the common keyword in the database, but the dates keep changing. I want the records which contain CGT for all of the other dates except the date mentioned in the input parameter.
Please don't answer with "retrieve all the records that contain CGT and filter them out in Excel", as the records for that particular date are huge in number. I want the other dates, which are fewer in count and can be handled.
Example query:
select records from tablename where var_name = 'input';
Based on your question, I assume that this is what you want.
Given:
| input |
---------------
2021-01-21,CGT
2021-01-22,CGT
2021-01-23,CGT
2021-01-25,CGT
2021-01-26,CGT
2021-01-27,CGT
2021-01-28,CGT
If you specify '2021-01-23' you expect to get:
| input |
---------------
2021-01-21,CGT
2021-01-22,CGT
2021-01-25,CGT
2021-01-26,CGT
2021-01-27,CGT
2021-01-28,CGT
You did not specify the database you are using. However, the concept should be similar regardless of the database platform.
SELECT *
FROM sample
WHERE SUBSTRING_INDEX(input,',',1) <> '2021-01-23'
[MySQL]
In [DB2], given that you wish to pass in the input string and extract
the date, do the following:
SELECT *
FROM sample
WHERE SUBSTRING(input,1, (LOCATE(',',input) - 1)) <> '2021-01-23'
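If you want to avoid hard-coding the date, one option (a sketch, assuming MySQL and a hypothetical session variable @input holding the raw parameter) is to split both the keyword and the date out of the input string:
SET @input := '2021-01-21,CGT';

SELECT *
FROM sample
-- keep rows with the same keyword (CGT) ...
WHERE SUBSTRING_INDEX(input, ',', -1) = SUBSTRING_INDEX(@input, ',', -1)
-- ... but drop the date that was passed in
  AND SUBSTRING_INDEX(input, ',', 1) <> SUBSTRING_INDEX(@input, ',', 1);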

Calculate data in a second column using data from the first one

I need to create a SQL query which calculates some data.
For instance, I have the following SQL query:
SELECT SUM(AMOUNT) FROM FIRMS WHERE FIRM_ID IN(....) GROUP BY FIRM;
which produces data like this:
28,740,573
30,849,923
25,665,724
43,223,313
34,334,534
35,102,286
38,556,820
19,384,871
Now, in a second column, I need to show the ratio between each entry and the sum of all entries, like this:
28,740,573 | 0.1123
30,849,923 | 0.1206
25,665,724 | 0.1003
43,223,313 | 0.1689
34,334,534 | 0.1342
35,102,286 | 0.1372
38,556,820 | 0.1507
19,384,871 | 0.0758
For instance, the sum of all entries in the first column above is 255,858,044, so the second cell of the first row is 28,740,573 / 255,858,044 = 0.1123, and likewise for every other row in the result.
How can I do that?
UPD: Thanks @a_horse_with_no_name, I forgot to mention the DBMS. It's Oracle.
Most databases now support the ANSI standard window functions. So, you can do:
SELECT SUM(AMOUNT),
       SUM(AMOUNT) / SUM(SUM(AMOUNT)) OVER () AS ratio
FROM FIRMS
WHERE FIRM_ID IN (....)
GROUP BY FIRM;
Note: Some databases do integer division. So, if AMOUNT is an integer, you need to convert it to a non-integer number in those databases. One easy method is to multiply by 1.0.
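For example, the same query with that workaround applied (a sketch; Oracle's NUMBER type is not affected, but engines such as SQL Server or PostgreSQL truncate when both operands are integers):
SELECT SUM(AMOUNT),
       -- multiplying by 1.0 forces decimal division when AMOUNT is an integer type
       SUM(AMOUNT * 1.0) / SUM(SUM(AMOUNT * 1.0)) OVER () AS ratio
FROM FIRMS
WHERE FIRM_ID IN (....)
GROUP BY FIRM;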

MS Access SQL updating in sequence

I have a table that provides a point-in-time snapshot with the following headings:
| Cust# | Last Trans. | Charge | Quantity |
Every month, I will receive a file from a third party with transactions that will add new cust# or change existing customer information. I am having problems when there are multiple updates to the same Cust# in one month.
For example:
processing the following transaction file:
should yield the following snapshot table:
It may not be the best method, but now I have 3 separate queries to handle NEW, CHANGE and CANCEL. There are no problems with NEW and CANCEL.
Here's how my CHANGE query is set up:
UPDATE snp
INNER JOIN tr
ON snp.[Cust#] = tr.[Cust#]
SET
snp.[Last Trans] = tr.Transaction,
snp.Charge = snp.Charge + tr.Charge,
snp.Quantity = tr.Quantity
WHERE tr.Trans='CHANGE'
Note that Charge is incremental and Quantity is not. Updating Charge is working as expected, but Quantity is not. I do not necessarily get the latest quantity.
How do I ensure that, if there are multiple changes to one customer, the Quantity value taken is from the latest CHANGE row (i.e. the max ID for that Cust#)?
SELECT * FROM tr
WHERE ID IN (SELECT MAX(ID)
             FROM tr
             GROUP BY [Cust#])
The inner query would give you all customers' max ID. You can filter the cust# based on your change criteria. The outer query would give you all the details of that row. You can then use those values in your queries.
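To make the CHANGE update itself take only the latest row per customer, one Access-flavoured option is a domain aggregate in the WHERE clause. This is only a sketch for the Quantity part (the Charge increment can stay in the existing query, with Quantity removed from it), and it assumes tr.ID is an AutoNumber and Cust# is a text field:
UPDATE snp
INNER JOIN tr
ON snp.[Cust#] = tr.[Cust#]
SET
snp.Quantity = tr.Quantity
WHERE tr.Trans = 'CHANGE'
AND tr.ID = DMax("ID", "tr", "[Cust#]='" & tr.[Cust#] & "' AND Trans='CHANGE'")
DMax() returns the highest ID among that customer's CHANGE rows, so only the latest Quantity is written.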

Multi-Row Per Record SQL Statement

I'm not sure this is possible but my manager wants me to do it...
Using the below picture as a reference, is it possible to retrieve a group of records, where each record has 2 rows of columns?
So the columns Number, Incident Number, Vendor Number, Customer Name, Customer Location, Status, Opened and Updated would be part of the first row, and the Work Notes column would be a second row that spans the width of the report. Each record would have two rows. Is this possible with a GROUP BY statement?
Record 1
Row 1 = Number, Incident Number, Vendor Number, Customer Name, Customer Location, Status, Opened and Updated
Row 2 = Work Notes
Record 2
Row 1 = Number, Incident Number, Vendor Number, Customer Name, Customer Location, Status, Opened and Updated
Row 2 = Work Notes
Record n
...
I don't think that's possible with the built-in report engine. You'll need to export the data and format it using something else.
You could have something similar to what you want on short description (list report, group by short description), but you can't group by work notes so that's out.
One thing to note is that work_notes is not actually a field on the table; it is of type journal_input, which means it's really just a gateway to the actual underlying data model. "Modifying" work_notes actually just inserts a row into sys_journal_field.
sys_journal_field is the table which stores the work notes you're looking for. Given a sys_id of an incident record, this URL will give you all journal field entries for that particular record:
/sys_journal_field_list.do?sysparm_query=name=task^element_id=<YOUR_SYS_ID>
You will notice this includes ALL journal fields (comments + work_notes + anything else), so if you just wanted work notes, you could simply add a query against element thusly:
/sys_journal_field_list.do?sysparm_query=name=task^element=work_notes^element_id=<YOUR_SYS_ID>
What this means for you!
While you can't separate a physical row into multiple logical rows in the UI, in the case of journal fields you can join your target table against the sys_journal_field table using a Database View. This deviates from your goal in that you wouldn't get a single row for all work notes, but rather an additional row for each matched work note.
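You don't write the Database View as raw SQL (it is configured in the ServiceNow UI), but conceptually the join it produces looks roughly like the sketch below, using the element_id/name/element columns from the URL queries above plus a value column (an assumption here) holding the note text:
-- Conceptual sketch only; in practice this join is defined as a Database View in the UI.
SELECT inc.number,
       inc.short_description,
       j.value AS work_note          -- assumed column holding the journal text
FROM incident inc
LEFT JOIN sys_journal_field j
       ON j.element_id = inc.sys_id  -- ties the journal entry to the incident
      AND j.name = 'task'
      AND j.element = 'work_notes';  -- work notes only, as in the filtered URL above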
Given an incident INC123 with 3 work notes, your report against the Database View would look kind of like this:
Row 1: INC123 | markmilly | This is a test incident |
Row 2: INC123 | | | Work note #1
Row 3: INC123 | | | Work note #2
Row 4: INC123 | | | Work note #3

Cumulative average number of records created for specific day of week or date range

Yeah, so I'm filling out a requirements document for a new client project and they're asking for growth trends and performance expectations calculated from existing data within our database.
The best source of data for something like this would be our logs table as we pretty much log every single transaction that occurs within our application.
Now, here's the issue: I don't have a whole lot of experience with MySQL when it comes to calculating cumulative sums and running averages. I've thrown together the following query, which kind of makes sense to me, but it just keeps locking up the command console. The thing takes forever to execute, and there are only 80k records in the test sample.
So, given the following basic table structure:
id | action | date_created
1 | 'merp' | 2007-06-20 17:17:00
2 | 'foo' | 2007-06-21 09:54:48
3 | 'bar' | 2007-06-21 12:47:30
... thousands of records ...
3545 | 'stab' | 2007-07-05 11:28:36
How would I go about calculating the average number of records created for each given day of the week?
day_of_week | average_records_created
1 | 234
2 | 23
3 | 5
4 | 67
5 | 234
6 | 12
7 | 36
I have the following query which makes me want to murderdeathkill myself by casting my body down an elevator shaft... and onto some bullets:
SELECT
DISTINCT(DAYOFWEEK(DATE(t1.datetime_entry))) AS t1.day_of_week,
AVG((SELECT COUNT(*) FROM VMS_LOGS t2 WHERE DAYOFWEEK(DATE(t2.date_time_entry)) = t1.day_of_week)) AS average_records_created
FROM VMS_LOGS t1
GROUP BY t1.day_of_week;
Halps? Please, don't make me cut myself again. :'(
How far back do you need to go when sampling this information? This solution works as long as it's less than a year.
Because day of week and week number are constant for a record, create a companion table that has the ID, WeekNumber, and DayOfWeek. Whenever you want to run this statistic, just generate the "missing" records from your master table.
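For example (a sketch, assuming MySQL, the VMS_LOGS table from the question, and its datetime_entry column; note that WEEK() restarts its numbering each year, which is why this only holds for under a year of data):
-- Hypothetical companion table keyed 1:1 to the log table.
CREATE TABLE MyCompanionTable (
    id         INT PRIMARY KEY,
    WeekNumber INT,
    DayOfWeek  INT
);

-- Generate the "missing" records from the master table before reporting.
INSERT INTO MyCompanionTable (id, WeekNumber, DayOfWeek)
SELECT l.id, WEEK(l.datetime_entry), DAYOFWEEK(l.datetime_entry)
FROM VMS_LOGS l
LEFT JOIN MyCompanionTable c ON c.id = l.id
WHERE c.id IS NULL;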
Then, your report can be something along the lines of:
select
DayOfWeek
, count(*)/count(distinct(WeekNumber)) as Average
from
MyCompanionTable
group by
DayOfWeek
Of course if the table is too large, then you can instead pre-summarize the data on a daily basis and just use that, and add in "today's" data from your master table when running the report.
I rewrote your query as:
SELECT x.day_of_week,
AVG(x.count) 'average_records_created'
FROM (SELECT DAYOFWEEK(t.datetime_entry) 'day_of_week',
COUNT(*) 'count'
FROM VMS_LOGS t
GROUP BY DAYOFWEEK(t.datetime_entry)) x
GROUP BY x.day_of_week
The reason your query takes so long is the inner select: for each of the 80,000 rows, it runs another query that scans all 80,000 rows, which is essentially 6,400,000,000 row comparisons. With a query like this, your best solution may be to develop a timed reporting system, where the user receives an email when the query is done and the report is constructed, or the user logs in and checks the report afterwards.
Even with the optimization written by OMG Ponies (below), you are still looking at around the same number of queries.
SELECT x.day_of_week,
AVG(x.count) 'average_records_created'
FROM (SELECT DAYOFWEEK(t.datetime_entry) 'day_of_week',
COUNT(*) 'count'
FROM VMS_LOGS t
GROUP BY DAYOFWEEK(t.datetime_entry)) x
GROUP BY x.day_of_week
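If a per-calendar-day average is what's wanted (rather than a total per weekday), a variation, again only a sketch assuming MySQL and the datetime_entry column, is to count per date first and then average those daily counts:
SELECT d.day_of_week,
       AVG(d.daily_count) AS average_records_created
FROM (SELECT DAYOFWEEK(t.datetime_entry) AS day_of_week,
             COUNT(*)                    AS daily_count   -- one row per calendar day
      FROM VMS_LOGS t
      GROUP BY DATE(t.datetime_entry), DAYOFWEEK(t.datetime_entry)) d
GROUP BY d.day_of_week;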