I'm using Prometheus to detect pauses in inbound traffic to an API endpoint. The query for this seems to work nicely: alert if there is no change in HTTP requests for a given endpoint over a one-hour period:
sum by(path, custom_identifier, kubernetes_namespace) (
  delta(http_request_duration_seconds_count{
    status_code="201",
    method="POST",
    path="/api/v1/path/to/endpoint"}[1h])
) == 0
What I want, though, is for this alert to resolve immediately once traffic resumes. As currently written, it won't resolve until traffic has been flowing again for the same time period (1h).
Is this a thing that Prometheus can do? Is there a different method of accomplishing this goal?
Yes. Keep the range short (1m) and move the one-hour requirement into the "for:" clause. The expression goes pending as soon as the 1m delta hits zero, the alert fires only after it has stayed zero for an hour, and it resolves on the next evaluation once traffic resumes:
- alert: AlertName
  expr: |
    sum by(path, custom_identifier, kubernetes_namespace) (
      delta(http_request_duration_seconds_count{
        status_code="201",
        method="POST",
        path="/api/v1/path/to/endpoint"}[1m])) == 0
  for: 1h  # fires after 1h of silence; resolves within ~1 min once traffic resumes
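A side note: the _count series of a histogram is a counter, and for counters increase() (which also handles counter resets) is generally a better fit than delta(), which is intended for gauges. A minimal variant of the expression above, assuming the 1m range always spans at least two scrapes:
sum by(path, custom_identifier, kubernetes_namespace) (
  increase(http_request_duration_seconds_count{
    status_code="201",
    method="POST",
    path="/api/v1/path/to/endpoint"}[1m])
) == 0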
I have 4 transaction controllers named
Trans_api1
__Http Request
Trans_api2
__Http Request
Trans_api3
__Http Request
Trans_api4
__Http Request
that contain HTTP Requests. However, when I run my test plan, I want them to run in numerical order, but they run randomly. How do I fix the order so they run from 1 to 4?
Each JMeter thread (virtual user) runs Samplers sequentially, from top to bottom, so you don't need to do anything: your requests are already executed in order. If you run your test with 1 user, you will see that the requests are executed sequentially.
If you're seeing some "mess", most probably it's caused by concurrency, like:
1st user starts 1st request
2nd user starts 1st request
1st user starts 2nd request
2nd user starts 2nd request
etc.
You can see this for yourself if you add the ${__threadNum} function as a prefix or postfix to your request (or Transaction Controller) label, and optionally the ${__groovy(vars.getIteration(),)} function to display the current loop number.
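For example, an illustrative label combining the two functions above would make every result line show which user and which iteration produced it:
Trans_api1 - user ${__threadNum} - iteration ${__groovy(vars.getIteration(),)}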
[Screenshots: with 1 user; with 2 users; with 2 users and 2 iterations]
As evidenced by the above images, each user executes samplers sequentially on each iteration; the apparent "inconsistencies" are a misleading artifact of concurrency.
See the Apache JMeter Functions - An Introduction article to get familiar with the concept of JMeter Functions.
Maybe the problem should be solved in another way.
I am using a Grafana SingleStat pane with "Delta" activated and the dashboard shows me "today so far".
Prometheus metric: sum(request_duration_count{...})
Problem:
I have a metric counting requests. Between 03:00 and 06:00 an automated test triggers my service and the metric is incremented by value x. (I set a Grafana annotation at the starting point.)
I want to get a single stat request count without the test requests.
Advanced Problem:
Nagios checks every y minutes and also increments the counter.
How can I remove these "test-counts"?
Any ideas?
Apache (ec2)                Client (ELB)
     |                           |
     |---------[1.] FIN--------->|
     |                           |
     |<------[2.] FIN+ACK--------|
     |                           |
     |------------ACK----------->|
     |                           |
With Wireshark I'd like to extract only the "[1.] FIN" packet from the figure above, which is emitted from the server's port 80 and which "initiates" the FIN sequence.
I've tried a filter:
tcp.flags.fin && tcp.srcport==80
but this filter also matches the extra "[2.] FIN+ACK" packets.
How can I match only the [1.] packet, i.e. the FIN that initiates the close sequence?
Background:
I'm struggling to get rid of 504 errors with an AWS ELB and EC2 (Apache), where the "FIN - FIN/ACK - ACK" sequence is initiated by the backend Apache side. In such an environment, a FIN sequence initiated by the ELB is the ideal, as the official AWS documentation says: http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html
Following https://aws.amazon.com/jp/premiumsupport/knowledge-center/504-error-classic/, I've tried replacing the MPM (event -> worker) and disabling TCP_DEFER_ACCEPT, which slightly reduced the 504 errors; however, the situation is not much improved.
I think the key is to find what causes Apache to initiate the active-close sequence, so as a first step I'm trying to extract the initiating FIN packets sent by Apache among the up to 512 parallel connections between the ELB and EC2 (Apache).
tcp.flags.fin == 1 && tcp.flags.ack == 0
A filter such as tcp.flags.fin only checks for the presence of the field; to test a field for a specific value, a comparison is needed. (That is also why a filter like "tcp" works to find TCP packets.)
A match on the FIN flag alone does not exclude other flags being set or not set, so a comparison is needed for each flag that should be part of the filter.
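Combined with the port condition from the question, the full display filter becomes:
tcp.srcport == 80 && tcp.flags.fin == 1 && tcp.flags.ack == 0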
The explanation is probably simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) against another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API running on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and the target always responds; I am not dropping any requests. Network bandwidth between the machines is 1G. What am I missing here?
100 users doesn't necessarily mean 100 requests per second; in fact, that is highly unlikely.
According to the JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter gets a response from the server in 1 second, you will get 100 requests/second; if the response time is 2 seconds, throughput drops to 50 requests/second; at 4 seconds, 25 requests/second; and so on.
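As a rough rule of thumb (ignoring ramp-up, think time and JMeter's own overhead):
throughput (req/s) ≈ active threads / average response time (s)
Inverting it with your numbers: 100 threads producing 7 req/s implies roughly 100 / 7 ≈ 14 seconds per request somewhere along the path.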
Also, JMeter configuration matters. If you don't provide enough loops, you may run into a situation where some threads have already finished and some have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming this should generate 100 req/sec)
Your actual load = 7 req/sec = 7 × 3600 = 25,200 requests/hour
Per-thread throughput = 25,200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 ≈ 14.3 secs
This means JMeter is actually sending one request per thread every ~14.3 secs, i.e. 100 requests every ~14.3 secs.
Now analyze the transaction timers in your JMeter summary report to find out where the remaining ~13 secs are being spent.
Possible issues are:
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High request send time (indicates network or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as time to first byte. I am not sure about the machine you are running your tests from, but if your worker supports curl, use it to break a single request down into its timing components, as below.
echo 'request payload for POST' \
| curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' \
  -d @- -s \
  -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' \
  http://mytest.test.com
If the above output indicates no such issues, then the time is being spent within JMeter itself; you should tune your JMeter implementation, for example by preferring JSR223 elements over Beanshell.
In MT4 there is a stage/state after we switch from AccountA to AccountB: the connection is established and init() and start() are triggered by MT4, but the "blinnnggg" sound, which signals that all historical/outstanding trades have been loaded from the server, has not yet played.
Switch account > establish connection > trigger init()/start() events > start downloading outstanding/historical trades > complete download (issue the "blinnnggg" sound).
I need to know (in MQL4) when all trades have finished downloading from the trade server, so that I can tell whether the account is truly empty or is still downloading history from the trade server.
Any pointer will be appreciated. I've explored IsTradeAllowed(), IsContextBusy() and IsConnected(). All of these are in a "normal" state and the init() and start() events all fire fine, but I cannot figure out whether the history/outstanding trade lists have finished downloading.
UPDATE: The workaround I finally implemented uses OrdersHistoryTotal(). Apparently this number is ZERO (0) while the order history is downloading, and it is NEVER zero afterwards (because of the initial deposit), so I ended up using it as a "flag".
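A minimal MQL4 sketch of that flag (assuming, as observed above, that OrdersHistoryTotal() is 0 only while the history is still downloading; meant for an EA or Script, since Sleep() is unavailable in indicators):
// Workaround sketch: OrdersHistoryTotal() reportedly stays 0 while the
// order history is downloading and is never 0 afterwards (the initial
// deposit always appears in history once the download has completed).
bool IsHistorySynchronized()
  {
   return(OrdersHistoryTotal() > 0);   // 0 => assume still downloading
  }

void WaitForHistory()
  {
   while(!IsHistorySynchronized() && !IsStopped())
      Sleep(250);                      // poll; Sleep() works in EAs/Scripts only
  }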
Observation
As the problem was posted, there seems to be no such "integrated" method in the MT4 Terminal.
IsTradeAllowed() reflects an administrative state of the account/access to the execution of the Trading Services { IsTradeAllowed | !IsTradeAllowed }
IsConnected() reflects a technical state of the visibility / login credentials / connection used upon an attempt to setup/maintain an online connection between a localhost <-> Server { IsConnected() | !IsConnected() }
init() {...} is a one-stop setup facility that is called once an MT4 programme { ExpertAdvisor | Script | TechnicalIndicator } is launched on a localhost machine. This facility is strongly advised to be non-blocking and non-re-entrant. A change from user account_A to another user account_B is typically (via the MT4 configuration options) a reason to stop execution of a previously loaded MQL4 code (be it an EA / a Script / a Technical Indicator).
start() {...} is an event-handler facility that waits endlessly for the next occurrence of an FX-Market Event (propagated down the line by the Broker's MT4-Server automation), announced via the established connection down to the MT4-Terminal process running on the localhost machine.
A Workaround Solution
As understood, the problem may be detected and handled indirectly.
While the MT4 platform seems to have no direct method to distinguish between a complete / incomplete refresh of the list of { current | historical } trades, let me propose a method of indirect detection.
Try to launch a "signal" trade (a pending order, placed geometrically far away in the PriceDOMAIN from the current Ask/Bid levels).
Once this trade is end-to-end registered (Server-side acknowledged), the local side has confirmation of a valid state of the db.POOL.
By making this a request/response pattern between the localhost and MT4-Server processes, the localhost int init(){...} / int start(){...} functionality can thus detect the moment when both sides have a synchronised state of the records in the db.POOL.
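A hedged sketch of that probe in MQL4 (all names and parameters illustrative): place a pending order far above the market, treat the server's acknowledgement as evidence that the local order pool is synchronised, then remove the probe.
// Probe sketch: a Server-acknowledged pending order implies the
// db.POOL records are synchronised between localhost and MT4-Server.
bool ProbeServerSync()
  {
   double px = NormalizeDouble(Ask * 2.0, Digits);  // far above the market
   int    tk = OrderSend(Symbol(), OP_SELLLIMIT,    // pending "signal" trade
                         0.01, px, 3, 0, 0,
                         "sync-probe");             // illustrative label
   if(tk < 0)
      return(false);                                // no acknowledgement yet
   OrderDelete(tk);                                 // clean up the probe
   return(true);
  }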