I am trying to put a limit on the reservoir pressure so it doesn't go above a certain amount, let's say 4000 psia.
So during injection, as production declines, the pressure will increase; once it hits that value, the run should terminate.
Would you please suggest a way to do that?
Best Regards
Someone helped me out with the answer, so I hope it will help someone else:
ACTIONX
-- action name, followed by the maximum number of times it may be triggered
ACT1 4060 /
-- condition: field average reservoir pressure exceeds 4060 psia
FPR > 4060 /
/
-- keyword(s) run when the condition is met; END terminates the simulation
END
ENDACTIO
I have this issue where I record a daily entry for all users in my system (several thousand, even 100,000+). These entries have 3 main features: "date", "file_count" and "user_id".
date        file_count  user_id
2021-09-28  200         5
2021-09-28  10          7
2021-09-29  210         5
2021-09-29  50          7
Where I am in doubt is how to run an anomaly detection algorithm efficiently on all these users.
My goal is to be able to report whether a user has some abnormal behavior each day.
In this example, user 7 should be flagged as an anomaly because the file_count is suddenly 5x higher than "normal".
My idea was firstly to create a model for each user but since there are so many users this might not be feasible.
Could you explain how to do this in an efficient manner, if you know an algorithm that could solve this problem?
Any help is greatly appreciated!
Many articles on anomaly detection in audit data can be found on the Internet.
One simple article with many examples/approaches is available in the original (Czech) language here: https://blog.root.cz/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/ or translated using Google here: https://blog-root-cz.translate.goog/trpaslikuv-blog/detekce-anomalii-v-auditnich-zaznamech-casove-rady/?_x_tr_sl=auto&_x_tr_tl=en&_x_tr_hl=sk&_x_tr_pto=wapp
PS: Clustering (a clustering-based unsupervised approach) can be the way to go when you are looking for a simple algorithm.
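Clustering aside, one cheap per-user baseline that scales to many users is to compare each day's count against that user's own history, so no per-user model has to be trained. A minimal sketch (the 3x ratio threshold and function name are my assumptions; the column layout matches the question):

```python
from collections import defaultdict

def flag_anomalies(entries, ratio_threshold=3.0):
    """Flag users whose latest file_count deviates strongly from
    their own historical mean.  `entries` is an iterable of
    (date, file_count, user_id) tuples; dates sort lexicographically.
    Returns the set of user_ids flagged on their latest entry."""
    history = defaultdict(list)
    for date, count, user in sorted(entries):
        history[user].append(count)

    flagged = set()
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # no baseline yet for this user
        baseline = sum(counts[:-1]) / len(counts[:-1])
        if baseline > 0 and counts[-1] / baseline >= ratio_threshold:
            flagged.add(user)
    return flagged

rows = [
    ("2021-09-28", 200, 5),
    ("2021-09-28", 10, 7),
    ("2021-09-29", 210, 5),
    ("2021-09-29", 50, 7),
]
print(flag_anomalies(rows))  # user 7: 50 vs. baseline 10 is 5x
```

One pass over the data and a small running state per user, so it stays linear in the number of entries even at 100,000+ users; a rolling window or z-score can replace the plain mean without changing the structure.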
Using INFO CPU command on Redis, I get the following values back (among other values):
used_cpu_sys:688.80
used_cpu_user:622.75
Based on my understanding, the value indicates the CPU time (expressed in seconds) accumulated since the launch of the Redis instance, as reported by the getrusage() call (source).
What I need to do is calculate the % CPU utilization based on these values. I looked extensively for an approach to do so but unfortunately couldn't find a way.
So my questions are:
Can we actually calculate the % CPU utilization based on these 2 values? If the answer is yes, then I would appreciate some pointers in that direction.
Do we need some extra data points for this calculation? If the answer is yes, I would appreciate if someone can tell me what those data points would be.
P.S. If this question should belong to Server Fault, please let me know and I will post it there (I wasn't 100% sure if it belongs here or there).
You need to read the values twice, calculate the delta, and divide by the time elapsed between the two reads. That gives you the CPU usage in % for that interval.
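As a sketch of that delta calculation (the helper function and the sample numbers are mine, not part of Redis; the dict keys match the INFO CPU field names from the question):

```python
def cpu_percent(before, after, elapsed_seconds):
    """Percentage CPU utilization between two INFO CPU snapshots.
    `before`/`after` are dicts holding used_cpu_sys and used_cpu_user
    (cumulative seconds, as reported by Redis)."""
    delta = (after["used_cpu_sys"] - before["used_cpu_sys"]
             + after["used_cpu_user"] - before["used_cpu_user"])
    return 100.0 * delta / elapsed_seconds

# Usage with the redis-py client would look roughly like:
#   a, t0 = r.info("cpu"), time.time()
#   time.sleep(5)
#   b, t1 = r.info("cpu"), time.time()
#   print(cpu_percent(a, b, t1 - t0))

sample0 = {"used_cpu_sys": 688.80, "used_cpu_user": 622.75}
sample1 = {"used_cpu_sys": 689.30, "used_cpu_user": 623.25}
print(cpu_percent(sample0, sample1, 10.0))  # ~10.0: 1 s of CPU over 10 s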
So if you run the following, it clearly shows which parts of SQL Server are using the most memory.
SELECT SUM(pages_in_bytes) AS [Bytes Used], type
FROM sys.dm_os_memory_objects
GROUP BY type
ORDER BY [Bytes Used] DESC;
GO
Does anyone have an article anywhere (I have searched the net!) explaining what each 'type' actually is? Specifically MEMOBJ_SOSWORKER.
Cheers, any help appreciated.
I had a similar question many years back.
The answer/comments pointed to a great reference by Paul S. Randal.
SQL Server has many stats that can safely be ignored; if you include every wait timer available, it can skew the average.
There is also a lot of info about the SOS Worker here if you scroll down to the S's.
I have read some of the previous posts on formatting/normalising input data for a Neural Network, but cannot find something that addresses my queries specifically. I apologise for the long post.
I am attempting to build a radial basis function network for analysing horse racing data. I realise that this has been done before, but the data that I have is "special" and I have a keen interest in racing/sportsbetting/programming so would like to give it a shot!
Whilst I think I understand the principles for the RBFN itself, I am having some trouble understanding the normalisation/formatting/scaling of the input data so that it is presented in a "sensible manner" for the network, and I am not sure how I should formulate the output target values.
For example, in my data I look at the "Class change", which compares the class of race that the horse is running in now compared to the race before, and can have a value between -5 and +5. I expect that I need to rescale these to between -1 and +1 (right?!), but I have noticed that many more runners have a class change of 1, 0 or -1 than any other value, so I am worried about "over-representation". It is not possible to gather more data for the higher/lower class changes because that's just 'the way the data comes'. Would it be best to use the data as-is after scaling, or should I trim extreme values, or something else?
Similarly, there are "continuous" inputs - like the "Days Since Last Run". It can have a value between 1 and about 1000, but values in the range of 10-40 vastly dominate. I was going to scale these values to be between 0 and 1, but even if I trim the most extreme values before scaling, I am still going to have a huge representation of a certain range - is this going to cause me an issue? How are problems like this usually dealt with?
Finally, I am having trouble understanding how to present the "target" values for training to the network. My existing results data has the "win/lose" (0 or 1?) and the odds at which the runner won or lost. If I just use the "win/lose", it treats all wins and losses the same when really they're not: I would be quite happy with a network that ignored all the small winners but was highly profitable from picking 10-1 shots. Similarly, a network could be forgiven for "losing" on a 20-1 shot, but losing a bet at 2/5 would be a bad loss. I considered making the results (+1 * odds) for a winner and (-1 / odds) for a loser to capture the issue above, but this will mean that my results are not a continuous function, as there will be a "discontinuity" between short-price winners and short-price losers.
Should I have two outputs to cover this - one for bet/no bet, and another for "stake"?
I am sorry for the flood of questions and the long post, but this would really help me set off on the right track.
Thank you for any help anyone can offer me!
Kind regards,
Paul
The documentation that came with your RBFN is a good starting point to answer some of these questions.
Trimming data, aka "clamping" or "winsorizing", is something I use for similar data. For example, "days since last run" for a horse could be anything from just one day to several years, but tends to centre in the region of 20 to 30 days. Some experts use a figure of, say, 63 days to indicate a "spell", so you could have an indicator variable like "> 63 = 1, else 0", for example. One approach is to look at the outliers, say the upper or lower 5% of any variable, and clamp these.
If you use odds/dividends anywhere, make sure you use the probabilities, i.e. 1/(odds+1), and a useful idea is to normalize these to sum to 100%.
The odds or parimutuel prices tend to swamp other predictors, so one technique is to develop separate models: one for the market variables (the market model) and another for the non-market variables (often called the "fundamental" model).
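A rough sketch of the two ideas above, clamping at the tails and converting odds to normalized probabilities (function names and the 5%/95% percentile choice are my assumptions, not something prescribed):

```python
def clamp(values, lo_pct=5, hi_pct=95):
    """Winsorize: squeeze all values into the [5th, 95th] percentile
    range, so extreme 'days since last run' outliers stop dominating."""
    s = sorted(values)
    lo = s[int(len(s) * lo_pct / 100)]
    hi = s[int(len(s) * hi_pct / 100) - 1]
    return [min(max(v, lo), hi) for v in values]

def implied_probabilities(odds):
    """Convert fractional odds to probabilities via 1/(odds+1),
    then normalize so the whole field sums to 100%."""
    raw = [1.0 / (o + 1.0) for o in odds]
    total = sum(raw)
    return [100.0 * p / total for p in raw]

# e.g. a three-runner field at evens, 3-1 and 9-1:
print(implied_probabilities([1.0, 3.0, 9.0]))
```

Scaling the clamped values to [0, 1] afterwards then maps the clamp limits, rather than the raw extremes, to the ends of the range.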
I'm using Portaudio in order to record the sound in a .raw file but I would like to start the recording only when there is a sound and stop it when there is a silence.
Is there a way to do this with Portaudio?
If not, do you have any idea about how I could do it?
Thanks in advance!
Portaudio cannot do what you need. The solution you are looking for is called VOX (voice-operated switch). Search the Internet for "vox algorithm" and you'll find lots of implementations. I'm sure there are even libraries that will calculate it for you. I usually just take the RMS of the signal buffer and compare it to a predetermined threshold. If you don't convert the signal level to dB, you will probably be working with values in the range of 0.01 to 0.05. In dB you should be working in the -50 to -30 range.
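A minimal sketch of that RMS-vs-threshold check, run per buffer as PortAudio delivers them (the -40 dB threshold and the sample values are illustrative assumptions):

```python
import math

def rms(buffer):
    """Root-mean-square level of a float sample buffer (range -1..1)."""
    return math.sqrt(sum(s * s for s in buffer) / len(buffer))

def is_voice(buffer, threshold_db=-40.0):
    """Simple VOX gate: compare the buffer's RMS level, in dB,
    against a fixed threshold in the -50 to -30 range noted above."""
    level = rms(buffer)
    if level == 0.0:
        return False  # digital silence, avoid log10(0)
    return 20.0 * math.log10(level) > threshold_db

silence = [0.0001] * 512   # about -80 dB
speech = [0.05] * 512      # about -26 dB
print(is_voice(silence), is_voice(speech))  # False True
```

In practice you would start writing the .raw file when `is_voice` turns True and stop after it has stayed False for some hang-time, so the recording doesn't chop off between words.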