Round/CEIL in SSAS calculated measure

I want to convert seconds into minutes at the user level.
For Example:
If a user made 2 calls of 20 seconds each, my calculation converts them to two calls of 60 s = 120 s.
But I want 20 + 20 = 40 s, rounded up to 60 s.
Can anyone help me with this?
I have used the MDX below:
CASE [USAGETYPE].[USAGETYPE]
WHEN [USAGETYPE].[USAGETYPE].&[VOICE] THEN (ROUND(([Measures].[RATEDVOLUME]/60))*60)
WHEN [USAGETYPE].[USAGETYPE].&[ROAMING VOICE] THEN (ROUND(([Measures].[RATEDVOLUME]/60))*60)
END
But it converts 20 seconds to 0 minutes.
Can anyone tell me what the equivalent of the ceiling function is in MDX?

I suggest you create a base measure, hide it and then create a calculated measure that does the rounding of the aggregated value.
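For illustration, a minimal cube-script sketch of that approach, assuming the hidden base measure keeps the question's name [RATEDVOLUME] (the new member name here is made up, and Round could be swapped for the ceiling trick in the next answer):
CREATE MEMBER CURRENTCUBE.[Measures].[Rated Volume Rounded]
AS
    // rounds the aggregated seconds to whole minutes, expressed back in seconds
    Round([Measures].[RATEDVOLUME] / 60) * 60,
VISIBLE = 1;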

Do you want to convert:
20s -> 1m;
40s -> 1m;
60s -> 1m;
80s -> 2m;
?
Then try the reverse int:
-int(-[Measures].[RATEDVOLUME]/60)
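Put together with the question's CASE, a hedged sketch (same dimension and measure names as above; Int is the VBA function that SSAS exposes to MDX, and the trailing *60 converts the rounded-up minutes back to seconds):
CASE [USAGETYPE].[USAGETYPE]
WHEN [USAGETYPE].[USAGETYPE].&[VOICE]
    THEN -Int(-([Measures].[RATEDVOLUME] / 60)) * 60   // 40 s -> 60 s, 80 s -> 120 s
WHEN [USAGETYPE].[USAGETYPE].&[ROAMING VOICE]
    THEN -Int(-([Measures].[RATEDVOLUME] / 60)) * 60
ELSE [Measures].[RATEDVOLUME]
END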

toHour and date_trunc functions do not work accurately in ClickHouse

I want to split a time field into a before-hour part and an after-hour part, and based on the definition of
date_trunc I am using this code:
date_trunc('hour', time_field)
input: 2022-07-31 20:00:23.000
output: 2022-07-31 19:30:00.000
Why does this code change 20:00:23.000 to 19:30:00.000?
My second question is about toHour. I am using the code below:
toHour(time_field)
input: 2022-07-31 19:30:00.000
output : 0
It should be 19; why 0?
And when I use:
formatDateTime(time_field, '%Y-%m-%d-%H')
input : 2015-10-18 21:40:13.000
output : 2015-10-19-01
What's the matter with these functions? Do I need to convert the time to another timezone?
Thanks to my teammates, this issue is finally resolved! I am sharing the answer for anyone who has the same issue. I use DBeaver, so it was a client-side issue: I had to set use_server_time_zone in DBeaver to make it right.
See https://github.com/ClickHouse/clickhouse-jdbc/issues/604.
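If you would rather not depend on the client setting, another option is to pin the timezone in the query itself. A hedged sketch, assuming the column is a DateTime/DateTime64 (toTimeZone, date_trunc, toHour and formatDateTime are standard ClickHouse functions; the table and column names just mirror the question):
SELECT
    date_trunc('hour', toTimeZone(time_field, 'UTC'))             AS hour_start,   -- truncation done in UTC
    toHour(toTimeZone(time_field, 'UTC'))                         AS hour_of_day,  -- 0-23 in UTC
    formatDateTime(toTimeZone(time_field, 'UTC'), '%Y-%m-%d-%H')  AS formatted     -- same timezone for formatting
FROM my_table   -- placeholder table name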

Is there a way to select/list which points are spatially related to the start point / end point of line-type entities using Oracle Spatial?

I have two sets of spatially related entities that represent elements of a sewer system:
Pipes [VETROCOCOLECTOR] and Manholes [VECAMARANORMAL].
Each set of entities has, in total, about 20 000 records.
(Globally it corresponds to a number of independent, unconnected systems: imagine sewer systems for different cities in the same state.)
My aim is to get some attributes from the manholes for any given pipe using SQL (Oracle Spatial). It's of critical importance to flag/list which manholes are located Upstream / Downstream for further hydraulic studies.
I'm using Oracle 11g Release 11.2.0.3.0 – 64-bit Production.
My code is as follows:
SELECT /*+ ORDERED */
B.IPID AS PIPE_ID, B.CODIGO_INSAAR AS PIPE_CODE,
'VECAMARANORMAL' as POINT_TYPE, A.IPID AS MANHOLE_ID, A.CODIGO_INSAAR AS MANHOLE_CODE, 'Upstream' as POSITION
FROM VECAMARANORMAL A, VETROCOCOLECTOR B
WHERE SDO_RELATE(A.Geometry, ST_StartPoint(B.Geometry), 'mask=ANYINTERACT') = 'TRUE' and B.IPID in (50110129,50110130,50085788)
UNION
SELECT /*+ ORDERED */
B.IPID AS PIPE_ID, B.CODIGO_INSAAR AS PIPE_CODE,
'VECAMARANORMAL' as POINT_TYPE, A.IPID AS MANHOLE_ID, A.CODIGO_INSAAR AS MANHOLE_CODE, 'Downstream' as POSITION
FROM VECAMARANORMAL A, VETROCOCOLECTOR B
WHERE SDO_RELATE(A.Geometry, ST_Endpoint(B.Geometry), 'mask=ANYINTERACT') = 'TRUE' and B.IPID in (50110129,50110130,50085788);
The first part of the union spatially relates a manhole to the start point of a pipe, and the second to the end point, using the functions ST_StartPoint and ST_Endpoint, respectively (functions by Simon Greener).
Here are the results of my small sample:
PIPE_ID    PIPE_CODE     POINT_TYPE      MANHOLE_ID  MANHOLE_CODE  POSITION
50085788   490 - 480     VECAMARANORMAL  50109999    490           Upstream
50085788   490 - 480     VECAMARANORMAL  50110000    480           Downstream
50110129   480.10 - 480  VECAMARANORMAL  50109998    480.1         Upstream
50110129   480.10 - 480  VECAMARANORMAL  50110000    480           Downstream
50110130   480 - 470     VECAMARANORMAL  50110000    480           Upstream
50110130   480 - 470     VECAMARANORMAL  50110001    470           Downstream
I’m able to do this for a small number (3) of selected pipes (although it takes around 50 seconds).
I would like to do the same for some selected systems and, ideally, for the overall network.
Obviously there’s something I need to change so that this can perform at scale, in an acceptable period of time.
Please advise me on how to do it.
Thanks a lot in advance and take care.
Best regards,
Pedro

How to get the time elapsed in seconds between two timestamps in DQL?

I'm using Symfony and Doctrine. I'd like to get the time elapsed between two timestamps. Here is a portion of my query (both a.date and q.date are type: timestamp):
$qb->select('a.date - q.date AS elapsed_time');
This gives a numerical result, but I can't tell what the units are. 9 seconds gave me 49, and 60 seconds gave me 99; I can't make sense of that.
I tried this too:
$qb->select('DATE_DIFF(a.date, q.date) AS elapsed_time');
This works, but gives the result in days. I really need minutes or seconds.
Use UNIX_TIMESTAMP instead. Try this:
$qb->select('(UNIX_TIMESTAMP(a.date) - UNIX_TIMESTAMP(q.date)) AS elapsed_time');
You need to use the DATE_FORMAT function.
$qb->select("DATE_FORMAT(DATE_DIFF(a.date, q.date), '%s') AS elapsed_time");
Try this. I don't have a MySQL console right now, but I think it should work.
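Note that UNIX_TIMESTAMP and DATE_FORMAT are MySQL functions, not part of Doctrine's built-in DQL, so they normally have to be registered as custom DQL functions before either query above will parse (DATE_DIFF is built in). A minimal sketch, assuming Symfony's DoctrineBundle configuration and the beberlei/DoctrineExtensions package (adjust the classes to whatever implementation you actually install):
doctrine:
    orm:
        dql:
            numeric_functions:
                # lets DQL parse UNIX_TIMESTAMP(...)
                unix_timestamp: DoctrineExtensions\Query\Mysql\UnixTimestamp
            string_functions:
                # lets DQL parse DATE_FORMAT(...)
                date_format: DoctrineExtensions\Query\Mysql\DateFormat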

Query a specific time-range and alert at specific time of the day

I need to run a rule at 2 am, querying logs from 0 to 2 am, and alert if matches are found.
So far all the rules I have created are frequency rules, but I don't know how to achieve the specific time range for the query and a specific time for the alert. Can someone please help?
(I guess the ANY type could let me add my time range as part of the filter... but then how can I run the rule at 2 am every day?)
now is taken from the server's time:
filter:
- range:
    "#timestamp":
      "from": "now-2h"
      "to": "now"
In UTC:
filter:
- range:
    "#timestamp":
      gte: "now/d+0h"
      lt: "now/d+2h"
If you want your alert to be effective for specific hours only, you can create an enhancement that drops the alert if the current time doesn't match your needs; see the sketch below.
Check https://elastalert.readthedocs.io/en/latest/recipes/adding_enhancements.html
Regards
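For illustration, a minimal enhancement sketch, assuming ElastAlert's documented BaseEnhancement/DropMatchException API; the module path and class name are placeholders:
from datetime import datetime
from elastalert.enhancements import BaseEnhancement, DropMatchException

class OnlyAroundTwoAM(BaseEnhancement):
    # Drop any match unless the rule is being evaluated during the 02:00-02:59 hour.
    def process(self, match):
        if datetime.now().hour != 2:
            raise DropMatchException()
The rule would then reference it with something like:
match_enhancements:
- "elastalert_modules.my_enhancements.OnlyAroundTwoAM"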

Splunk - Merging Associated Events

I have a script which sends individual events into Splunk; each event is essentially a report on an HTTP request, either GET or POST. The event contains a number of fields, but two key ones are StepName and Timing:
StepName will be a title for the HTTP request, e.g. PostLogin
Timing will be an int value of the milliseconds taken by the HTTP request
I'm writing a report which shows the average time taken for each step over the last 15 minutes. However, from an end user's point of view, some steps are part of one process, e.g.
Step1 - GetLoginPage
Step2 - PostLoginPage
Step3 - ProcessUserDetails
Step4 - GetHomePage
In this case Step2 and Step3 would be one process for an end user, so I'd like to be able to report on them as if they were one step, so the following:
GetLoginPage 50
PostLoginPage 100
ProcessUserDetails 250
GetHomePage 80
would become
GetLoginPage 50
PostLoginPage 350
GetHomePage 80
I can use a replace on the StepName so that I have:
GetLoginPage 50
PostLoginPage 100
PostLoginPage 250
GetHomePage 80
How can I then merge these results so it summates the two PostLoginPage steps and then gives me an average over the time period for the three individual steps?
Note each step has a field called TransactionGUID which associates a group of steps for the same execution.
If you post your question over at http://splunk-base.splunk.com/answers/, you'll have access to a greater audience of Splunk expertise, and I will attempt to answer your question there.
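For what it's worth, a rough SPL sketch of one way to do the merge described above, assuming the events carry StepName, Timing and TransactionGUID as stated (the index/sourcetype filter at the start is a placeholder):
index=main sourcetype=http_step_report earliest=-15m
| eval StepName=replace(StepName, "ProcessUserDetails", "PostLoginPage")
| stats sum(Timing) AS StepTiming BY TransactionGUID, StepName
| stats avg(StepTiming) AS AvgTiming BY StepName
The first stats sums the renamed steps within each transaction, and the second averages those per-transaction totals across the 15-minute window.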