Keeping track of web application usage in an SQL Database (Need Suggestions)

I have a system that will allow users to send messages to their clients. I want to limit each user to a set number of messages per day/week/month (depending on subscription). What is a good way to tally messages for the time period and reset the tally once the time period restarts? I would like to incorporate some sort of SQL job to reset the tally.

Each time you send a set of messages, log the date and the number of messages sent. Your application can then sum the message count fields, grouping by the day, the week of year, or the month of the date to enforce limits. The WHERE clause used to limit the user to a specific number would use a message limit, start date, and stop date from the user profile table or some global settings table.
In the MySQL dialect, you would write something like this:
select
    users.id,
    (users.msg_limit - subq.msgs_used) as msgs_available
from users
inner join (select
        ifnull(sum(msg_log.cnt), 0) as msgs_used -- 0 instead of NULL when nothing was sent yet
    from msg_log
    where weekofyear(msg_log.date) = weekofyear(now())
      and msg_log.user = :user_id_param) as subq
where users.id = :user_id_param; -- restrict the result to the same user as the subquery
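And for the logging side described above, a minimal sketch (table and column names match the query; :messages_sent is an assumed parameter for the size of the batch just sent):
insert into msg_log (user, date, cnt)
values (:user_id_param, now(), :messages_sent);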

table limits (id, client_id, limit_days, msg_count);
Assume the limit periods start at midnight, on Sunday, and on the 1st of the month, respectively.
New clients get three records, one each for limit_days of 1, 7, and 31.
A cron job that runs at midnight resets the counters (here @DayLimit etc. stand for the subscription's limits):
update limits set msg_count = @DayLimit where limit_days = 1;
update limits set msg_count = @WeekLimit where limit_days = 7 and dayofweek(current_date) = 1; -- Sunday
update limits set msg_count = @MonthLimit where limit_days = 31 and day(current_date) = 1; -- 1st of the month
Sending a message is allowed if (select min(msg_count) from limits where client_id = '$client_id') > 0;
then send the message and run: update limits set msg_count = msg_count - 1 where client_id = '$client_id';
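A sketch of seeding the three records for a new client mentioned above (the @...Limit variables are assumptions matching the reset job):
insert into limits (client_id, limit_days, msg_count)
values ('$client_id', 1, @DayLimit),
       ('$client_id', 7, @WeekLimit),
       ('$client_id', 31, @MonthLimit);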

Related

Build a query that returns only sessions that have only errors?

I have a table with session event names. Each session can have 3 different types of events.
Some sessions have only error-type events, and I need to identify them by getting a list of those sessions.
I tried the following code:
SELECT
test.SessionId, SS.RequestId
FROM
(SELECT DISTINCT
SSE.SessionId,
SSE.type,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId, SSE.type) AS total_XSESIONID_TYPE,
COUNT(SSE.SessionId) OVER (ORDER BY SSE.SessionId) AS total_XSESIONID
FROM
[CMstg].SessionEvents SSE
-- WHERE SSE.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
) AS test
WHERE
test.total_XSESIONID_TYPE = test.total_XSESIONID
AND test.type = 'Errors'
-- AND test.SessionId IN ('fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb' )
Each session can have more than one type, and I need to count only the sessions whose only type is 'Errors'. I don't want to include sessions that have additional types of events in the count.
While running the first query I'm getting a count of 3 error events per session, but when running the whole procedure the number is multiplied up to 90?
Sample table:
sessionID                              type
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb   Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb   Errors
fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb   Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76   NonError
00c896a0-dccc-41bf-8dff-a5cd6856bb76   Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76   Errors
00c896a0-dccc-41bf-8dff-a5cd6856bb76   Errors
In this case I should get
sessionid = fa3ed523-60f9-4af0-a85f-1dec9e9d2cdb
Please advise - hope this is clearer now, thanks!
It's been a long time but I think something like this should get you the desired results:
SELECT SessionId
FROM <TableName> -- replace with the actual table name
GROUP BY SessionId
HAVING COUNT(*) = COUNT(CASE WHEN type = 'Errors' THEN 1 END)
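The inflated numbers in the original query most likely come from COUNT(...) OVER (ORDER BY ...), which produces running totals rather than per-session totals. As an alternative, a sketch against the [CMstg].SessionEvents table from the question (this assumes type is never NULL):
SELECT SSE.SessionId
FROM [CMstg].SessionEvents SSE
GROUP BY SSE.SessionId
HAVING MIN(SSE.type) = 'Errors' AND MAX(SSE.type) = 'Errors' -- every event in the session is an error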
And a pro tip: When asking sql-server questions, it's best to follow these guidelines
SELECT *
FROM NameOfDataBase
WHERE type != 'errors'
Is that what you wanted to do?

Unable to figure out accurate query to get timestamps for my report

I am trying to write a report based on data from users logging into a session.
I need to be able to get the full session time, from when the first person joins the meeting to when the last person leaves.
When someone joins a meeting, it is logged as "Initialize-Load Video chat Window".
There are 2 ways to close the meeting, but only one of them is logged:
- There is an "End Chat" button that the user can use, and that is logged as "Video Chat-End Chat".
- If the user does not use that button and just exits the program/browser, the database does not log that, so I would like to use the last logged element in the logType column.
I would like it to look like this below:
This is my query:
select vl.originalChatSessionID,
CONVERT(DATE, min(vl.ReceivedDateTime)) as VideoDate,
--CONVERT(TIME, min(vl.ReceivedDateTime)) as StartTime,
min(vl.ReceivedDateTime) as StartTime2,
--CONVERT(TIME, max(vl.ReceivedDateTime)) as EndTime,
max(vl.ReceivedDateTime) as EndTime2,
DATEDIFF(MINUTE, min(vl.ReceivedDateTime), max(vl.ReceivedDateTime)) as SessionLength
from iclickphrDxvideolog vl
--inner join iclickphrDxVideoHistory vh
-- on vl.originalChatSessionID = vh.meetingid
-- and vl.applicationUserID = vh.applicationUserID
where originalChatSessionID = #MeetingSessionID
--and (vl.logType = 'Initialize-Load Video chat Window' or vl.logType = 'Video Chat-End Chat')
group by originalChatSessionID
The problem is that I am grabbing the first and last logged elements in the logType column, and I know that. If I uncomment the part of the where clause that says --and (vl.logType = 'Initialize-Load Video chat Window' or vl.logType = 'Video Chat-End Chat'), then I do not have an issue with the start time of the session, but I have a big issue with the end time of the session.
Below is a picture of how the raw data looks:
I've assumed that originalChatSessionID defines a unique session. If not, you will have to change the PARTITION BY clause to mirror the column or columns that make it unique.
This also assumes that ReceivedDateTime is a datetime datatype.
SELECT DISTINCT
vl.originalChatSessionID,
VideoDate = MIN(vl.ReceivedDateTime) OVER(PARTITION BY originalChatSessionID),
StartTime = MIN(ISNULL(sc.StartChat, vl.ReceivedDateTime)) OVER(PARTITION BY originalChatSessionID),
EndTime = MAX(ISNULL(ec.EndChat, vl.ReceivedDateTime)) OVER(PARTITION BY originalChatSessionID),
SessionLength = DATEDIFF(
minute,
MIN(ISNULL(sc.StartChat, vl.ReceivedDateTime)) OVER(PARTITION BY originalChatSessionID),
MAX(ISNULL(ec.EndChat, vl.ReceivedDateTime)) OVER(PARTITION BY originalChatSessionID)
)
FROM iclickphrDxvideolog vl
LEFT JOIN (
SELECT originalChatSessionID, StartChat = MIN(ReceivedDateTime)
FROM iclickphrDxvideolog
WHERE logType = 'Initialize-Load Video chat Window'
GROUP BY originalChatSessionID
) sc ON vl.originalChatSessionID = sc.originalChatSessionID
LEFT JOIN (
SELECT originalChatSessionID, EndChat = MAX(ReceivedDateTime)
FROM iclickphrDxvideolog
WHERE logType = 'Video Chat-End Chat'
GROUP BY originalChatSessionID
) ec ON vl.originalChatSessionID = ec.originalChatSessionID
This is untested as I don't have time to recreate your dataset. If you need more help, I suggest you post a script to recreate your sample data so people can use it to test against.
The above uses two subqueries, one to get the first instance of 'Initialize-Load Video chat Window' and one to get the last instance of 'Video Chat-End Chat'. I have LEFT JOINed to these so they will return NULL values if nothing is found. In the main part of the query, I've used ISNULL() so that if the start record is not found, the earliest record for the session is used, and if the end record is not found, the last record for the session is used.
Note that there is no grouping, but I have used DISTINCT to get a similar result. This SQL statement does not have a WHERE clause, so you could create a view from it and then simply use that view with a WHERE clause for your report.
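For example, a minimal sketch of that view approach (the view name vVideoSessionTimes is an assumption, and the body is shortened here; in practice the full SELECT above would be the view body):
CREATE VIEW dbo.vVideoSessionTimes AS
SELECT DISTINCT
    vl.originalChatSessionID,
    StartTime = MIN(vl.ReceivedDateTime) OVER (PARTITION BY vl.originalChatSessionID),
    EndTime   = MAX(vl.ReceivedDateTime) OVER (PARTITION BY vl.originalChatSessionID)
FROM iclickphrDxvideolog vl;
GO
-- the report then just filters the view (#MeetingSessionID as in the question's query):
SELECT * FROM dbo.vVideoSessionTimes WHERE originalChatSessionID = #MeetingSessionID;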

Processing of late events in SA (Stream Analytics)

I was doing a test in which I generated data that was 30 days old.
When it was sent to the SA job, all of that input was dropped, but per the settings in the event ordering blade I was expecting that all of it would be passed through.
Part of the job query contains:
---------------all incoming events storage query
SELECT stream.*
INTO [iot-predict-SA2-ColdStorage]
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
so my expectation is to have everything that was pushed to the SA job end up in blob storage.
When I sent events that were only 5 hours old, the input was marked as late (expected) and processed.
Per the screenshot, the first marked area shows the outdated events as input with no output (red); the second part shows the late events being processed.
The full query:
WITH AlertsBasedOnMin
AS (
SELECT stream.SensorGuid
,stream.Value
,stream.SensorName
,ref.AggregationTypeFlag
,ref.MinThreshold AS threshold
,ref.Count
,CASE
WHEN (ref.MinThreshold > stream.Value)
THEN 1
ELSE 0
END AS isAlert
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
JOIN [iot-predict-SA2-referenceBlob] ref ON ref.SensorGuid = stream.SensorGuid
WHERE ref.AggregationTypeFlag = 8
)
,AlertsBasedOnMax
AS (
SELECT stream.SensorGuid
,stream.Value
,stream.SensorName
,ref.AggregationTypeFlag
,ref.MaxThreshold AS threshold
,ref.Count
,CASE
WHEN (ref.MaxThreshold < stream.Value)
THEN 1
ELSE 0
END AS isAlert
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
JOIN [iot-predict-SA2-referenceBlob] ref ON ref.SensorGuid = stream.SensorGuid
WHERE ref.AggregationTypeFlag = 16
)
,alertMinMaxUnion
AS (
SELECT *
FROM AlertsBasedOnMin
UNION ALL
SELECT *
FROM AlertsBasedOnMax
)
,alertMimMaxComputed
AS (
SELECT SUM(alertMinMaxUnion.isAlert) AS EventCount
,alertMinMaxUnion.SensorGuid AS SensorGuid
,alertMinMaxUnion.SensorName
FROM alertMinMaxUnion
GROUP BY HoppingWindow(Duration(minute, 1), Hop(second, 30))
,alertMinMaxUnion.SensorGuid
,alertMinMaxUnion.Count
,alertMinMaxUnion.AggregationTypeFlag
,alertMinMaxUnion.SensorName
HAVING SUM(alertMinMaxUnion.isAlert) > alertMinMaxUnion.Count
)
,alertsMimMaxComputedMergedWithReference
AS (
SELECT System.TIMESTAMP [TimeStampUtc]
,computed.EventCount
,0 AS SumValue
,0 AS AvgValue
,0 AS StdDevValue
,computed.SensorGuid
,computed.SensorName
,ref.MinThreshold
,ref.MaxThreshold
,ref.TimeFrameInSeconds
,ref.Count
,ref.GatewayGuid
,ref.SensorType
,ref.AggregationType
,ref.AggregationTypeFlag
,ref.EmailList
,ref.PhoneNumberList
FROM alertMimMaxComputed computed
JOIN [iot-predict-SA2-referenceBlob] ref ON ref.SensorGuid = computed.SensorGuid
)
,alertsAggregatedByFunction
AS (
SELECT Count(1) AS eventCount
,stream.SensorGuid AS SensorGuid
,stream.SensorName
,ref.[Count] AS TriggerThreshold
,SUM(stream.Value) AS SumValue
,AVG(stream.Value) AS AvgValue
,STDEV(stream.Value) AS StdDevValue
,ref.AggregationTypeFlag AS flag
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
JOIN [iot-predict-SA2-referenceBlob] ref ON ref.SensorGuid = stream.SensorGuid
GROUP BY HoppingWindow(Duration(minute, 1), Hop(second, 30))
,ref.AggregationTypeFlag
,ref.[Count]
,ref.MaxThreshold
,ref.MinThreshold
,stream.SensorGuid
,stream.SensorName
HAVING
--as this is alert then this factor will be relevant to all of the aggregated queries
Count(1) >= ref.[Count]
AND (
--average
(
ref.AggregationTypeFlag = 1
AND (
AVG(stream.Value) >= ref.MaxThreshold
OR AVG(stream.Value) <= ref.MinThreshold
)
)
--sum
OR (
ref.AggregationTypeFlag = 2
AND (
SUM(stream.Value) >= ref.MaxThreshold
OR Sum(stream.Value) <= ref.MinThreshold
)
)
--stdev
OR (
ref.AggregationTypeFlag = 4
AND (
STDEV(stream.Value) >= ref.MaxThreshold
OR STDEV(stream.Value) <= ref.MinThreshold
)
)
)
)
,alertsAggregatedByFunctionMergedWithReference
AS (
SELECT System.TIMESTAMP [TimeStampUtc]
,0 AS EventCount
,computed.SumValue
,computed.AvgValue
,computed.StdDevValue
,computed.SensorGuid
,computed.SensorName
,ref.MinThreshold
,ref.MaxThreshold
,ref.TimeFrameInSeconds
,ref.Count
,ref.GatewayGuid
,ref.SensorType
,ref.AggregationType
,ref.AggregationTypeFlag
,ref.EmailList
,ref.PhoneNumberList
FROM alertsAggregatedByFunction computed
JOIN [iot-predict-SA2-referenceBlob] ref ON ref.SensorGuid = computed.SensorGuid
)
,allAlertsUnioned
AS (
SELECT *
FROM alertsAggregatedByFunctionMergedWithReference
UNION ALL
SELECT *
FROM alertsMimMaxComputedMergedWithReference
)
---------------alerts storage query
SELECT *
INTO [iot-predict-SA2-Alerts-ColdStorage]
FROM allAlertsUnioned
---------------alerts to alert events query
SELECT *
INTO [iot-predict-SA2-Alerts-EventStream]
FROM allAlertsUnioned
---------------alerts to stream query
SELECT *
INTO [iot-predict-SA2-TSI-EventStream]
FROM allAlertsUnioned
---------------all incoming events storage query
SELECT stream.*
INTO [iot-predict-SA2-ColdStorage]
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
---------------all incoming events to time insights query
SELECT stream.*
INTO [iot-predict-SA2-TSI-AlertStream]
FROM [iot-predict-SA2-input] stream TIMESTAMP BY stream.UtcTime
Since you are using TIMESTAMP BY, the Stream Analytics job's event ordering settings take effect. Please check your job's "event ordering" settings, specifically these two:
Events that arrive late -- the late arrival limit, between 0 seconds and 21 days.
Handling other events -- the error handling policy: drop the event, or adjust the application time to the system clock time.
I guess that, most likely, your late arrival limit was more than 5 hours, which is why those 5-hour-old events could be processed.
You may already have figured out from the above that a Stream Analytics job can only process "old" events up to 21 days late. To work around this limitation, you can consider one of the options below:
Remove TIMESTAMP BY; then all your windowing aggregates will use the enqueue time. This might generate incorrect results depending on your query logic.
Select "adjust" as the error handling policy. Again, this might generate incorrect results depending on your query logic.
Shift the application time (stream.UtcTime) to a more recent time using the DATEADD() function, for example TIMESTAMP BY DATEADD(day, 10, UtcTime). This works well when this is a one-time task and you know the time range of your events; see the sketch after this list.
Use a batch job (outside Stream Analytics) to process the data that is 30 days old.
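For instance, a minimal sketch of the DATEADD option applied to the cold-storage query from the question (the 30-day shift matches the age of the test data and is an assumption):
SELECT stream.*
INTO [iot-predict-SA2-ColdStorage]
FROM [iot-predict-SA2-input] stream TIMESTAMP BY DATEADD(day, 30, stream.UtcTime)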
After a chat with the guys from MS, it emerged that my test needed an extra step.
To have late events processed regardless of the late event settings, the job has to be started in such a way that the late events count as sent after the job's start time; in this particular case, we have to start the SA job using a custom start date and set it to 30 days ago.

Weave rows representing email messages into send & reply conversation threads

I have (two) tables of SENT and RECEIVED email messages exchanged between patients and their doctors within an app. I need to group these rows into conversation threads exactly the way you would expect to see them in your email inbox, but with the following difference:
Here, “thread” encompasses all back-and-forth exchanges between the same 2 users. Thus, each single unique pair of communicating users constitutes 1 and only 1 thread.
The following proof-of-concept code successfully creates a notion of “thread” for a single instance where I know the specific patient and doctor user IDs. The parts I can’t figure out are:
(1) how to accomplish this when I'm pulling multiple patients and doctors from tables, and
(2) how to sort the resulting threads by initiating date.
SELECT send.MessageContent, send.SentDatetime, rec.ReadDatetime, other_stuff
FROM MessageSend send
INNER JOIN MessageReceive rec
ON send.MessageId = rec.MessageId
WHERE
( send.UserIdSender = 123
OR rec.UserIdReceiver = 123 )
AND
(send.UserIdSender = 456
OR rec.UserIdReceiver = 456)
If MessageId is unique per conversation, you can order the messages using the sent and read date-times.
If you want to filter for a particular doctor or patient, you can include it in the WHERE clause.
SELECT send.MessageContent, send.SentDatetime, rec.ReadDatetime, other_stuff
FROM MessageSend send
INNER JOIN MessageReceive rec
ON send.MessageId = rec.MessageId
ORDER BY send.MessageId,send.SentDatetime, rec.ReadDatetime
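To address parts (1) and (2) directly, a hedged sketch (T-SQL assumed): derive a canonical pair key so that every unique pair of communicating users forms exactly one thread, then sort threads by the date of their first message. This assumes each received row's UserIdReceiver is the counterpart of the sending user:
WITH msgs AS (
    SELECT send.MessageContent, send.SentDatetime, rec.ReadDatetime,
           -- canonical pair key: smaller user id first, so both directions land in one thread
           CASE WHEN send.UserIdSender < rec.UserIdReceiver
                THEN CONCAT(send.UserIdSender, '-', rec.UserIdReceiver)
                ELSE CONCAT(rec.UserIdReceiver, '-', send.UserIdSender)
           END AS ThreadKey
    FROM MessageSend send
    INNER JOIN MessageReceive rec ON send.MessageId = rec.MessageId
)
SELECT m.*,
       MIN(m.SentDatetime) OVER (PARTITION BY m.ThreadKey) AS ThreadStarted
FROM msgs m
ORDER BY ThreadStarted, m.ThreadKey, m.SentDatetime; -- threads by initiating date, messages in order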

Need some help in creating a query in SQL?

I have the following 2 tables:
Ticket(ID, Problem, Status,Priority, LoggedTime,CustomerID*, ProductID*);
TicketUpdate(ID,Message, UpdateTime,TickedID*,StaffID*);
Here is a question to be answered:
Close all support tickets which have not been updated for at least 24 hours. This will be records that have received at least one update from a staff member and no further updates from the customer (or staff member) for at least 24 hours.
My query is:
UPDATE Ticket SET Status = 'closed' FROM TicketUpdate
WHERE(LoggedTime - MAX(UpdateTime))> 24
AND Ticket.ID = TicketUpdate.TicketID;
When I run this query on MySQL it says that "<" does not exist.
Can you tell me whether my query is right for finding the records which have not been updated for at least 24 hours, and if it is, what should I use instead of "<"?
... records that have received at least one update from a staff member and
no further updates from the customer (or staff member) for at least 24
hours.
So, effectively, the last update must have been done by a staff member and be older than 24 hours. That covers it all.
(BTW, you have a typo: TickedID -> I use ticketid here.)
UPDATE ticket t
SET status = 'closed'
FROM (
SELECT DISTINCT ON (1)
ticketid
,first_value(updatetime) OVER w AS last_up
,first_value(staffid) OVER w AS staffid
FROM ticketupdate
-- you could join back to ticket here and eliminate 'closed' ids right away
WINDOW w AS (PARTITION BY ticketid ORDER BY updateTime DESC)
) tu
WHERE tu.ticketid = t.id
AND tu.last_up < (now()::timestamp - interval '24 hours')
AND tu.staffid > 1 -- whatever signifies "update from a staff member"
AND t.status IS DISTINCT FROM 'closed'; -- to avoid pointless updates
Note that PostgreSQL folds identifiers to lower case if not double-quoted. I advise staying away from mixed-case identifiers to begin with.
If you are working with PostgreSQL, then something like this should work; the original form fails because an aggregate such as MAX() cannot be used directly in a WHERE clause:
UPDATE Ticket SET Status = 'closed'
FROM (SELECT TicketID, MAX(UpdateTime) AS last_up FROM TicketUpdate GROUP BY TicketID) tu
WHERE tu.TicketID = Ticket.ID AND tu.last_up < now() - interval '24 hours';
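Since the question actually mentions running the query on MySQL, a hedged MySQL sketch of the same idea (untested; the staff-member condition from the answer above is omitted for brevity):
UPDATE Ticket t
JOIN (SELECT TicketID, MAX(UpdateTime) AS last_up
      FROM TicketUpdate
      GROUP BY TicketID) tu ON tu.TicketID = t.ID
SET t.Status = 'closed'
WHERE tu.last_up < NOW() - INTERVAL 24 HOUR -- not updated for at least 24 hours
  AND t.Status <> 'closed'; -- avoid pointless updates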