Micrometer gauge metric updates value slowly? - httprequest

I'm using a Micrometer gauge metric to monitor Http_max_response_time in a Vert.x service (metrics configured with Prometheus).
When testing, I sent a request with a 3-second timeout at 13:15:16 and the gauge returned the right value for Http_max_response_time (3s). After that, no further request with a 3-second timeout was sent to the server, yet the gauge kept returning Http_max_response_time = 3 seconds until 13:17:51, and only then updated Http_max_response_time to a value below 3s. I think it needs to update more frequently.
My questions here:
How long does the gauge metric keep its current value before it updates to a new one?
What logic does the Http_max_response_time gauge execute? Does it just update a global value and return it whenever there is an observation?
If my question is not clear, please comment and I will add more detail.
Thanks in advance.

Updated:
Vertx-micrometer-metrics uses a Timer metric for response time, with a TimeWindowMax to track the highest value.
Max for basic Timer implementations such as CumulativeTimer and StepTimer is a time window max (TimeWindowMax). This means that its value is the maximum value during a time window. If no new values are recorded for the length of the time window, the max is reset to 0 as a new time window starts. The time window size will be the step size of the meter registry unless expiry in DistributionStatisticConfig is explicitly set to another value. The reason why a time window max is used is to capture max latency in a subsequent interval after heavy resource pressure triggers the latency and prevents metrics from being published.
So we can change the default expiry in DistributionStatisticConfig to a smaller value as needed.
Here is my code to change the TimeWindowMax of metrics whose names contain responseTime to 5 seconds:
registry.config().meterFilter(
    new MeterFilter() {
        @Override
        public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
            if (id.getName().contains("responseTime")) {
                // Override only the expiry; merge() keeps the rest of the existing config.
                return DistributionStatisticConfig.builder()
                        .expiry(Duration.ofSeconds(5))
                        .build()
                        .merge(config);
            }
            return config;
        }
    });
And it worked.
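Alternatively (an untested sketch, not part of my actual fix): since the time window defaults to the registry's step, shrinking the step itself also works, if you accept more frequent rollover for all windowed statistics:

// Sketch: a PrometheusMeterRegistry with a smaller step; 10s is an example value.
PrometheusMeterRegistry registry = new PrometheusMeterRegistry(new PrometheusConfig() {
    @Override
    public String get(String key) {
        return null; // accept defaults for everything else
    }

    @Override
    public Duration step() {
        return Duration.ofSeconds(10); // default is 1 minute
    }
});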

Related

Increase delay and load over time in jMeter

I'm trying to get an increasing delay/pause and load over time in JMeter load testing while keeping the sequence constant. For example -
Initially - 10 samples (of 2 GET requests - a,b,a,b,a,b...)
Then after 10 samples, a delay/pause of 10 secs and then 20 samples (a,b,a,b,a,b...)
After the 20 samples, another delay/pause of 20 secs; then 30 samples (a,b,a,b,a,b...)
And so on.
Constraints here being -
Getting the exact number of samples
Getting the desired delay
The order of requests should be maintained
The Critical Section Controller helps with maintaining the order of threads, but only in a normal Thread Group. So if I try the Ultimate Thread Group to get the desired variable delay and load, the order and number of samples go haywire.
I've tried the following-
Run test group consecutively
Flow control action
Throughput controller
Module controller
Interleave controller
Synchronizing timer (with and without flow control)
Add think times to children
Is there any way to get this output in JMeter? Or should I just opt for scripting?
Add User Defined Variables and set the following variables there:
samples=10
delay=10
Add a Thread Group and specify the required number of threads and iterations
Add a Loop Controller under the Thread Group and set "Loop Count" to ${samples}. Put your requests under the Loop Controller
Add a JSR223 Sampler after the Loop Controller and put the following code into the "Script" area:
// Pause for the current delay (seconds -> milliseconds)
def delay = vars.get('delay') as long
sleep(delay * 1000)

// Increase the delay by 10 seconds for the next batch
def new_delay = delay + 10
vars.put('delay', new_delay as String)

// Increase the number of samples by 10 for the next batch
def samples = vars.get('samples') as int
def new_samples = samples + 10
vars.put('samples', new_samples as String)

How to read data from BigQuery periodically in Apache Beam?

I want to read data from BigQuery periodically in Beam; my test code is below:
pipeline.apply("Generate Sequence",
GenerateSequence.from(0).withRate(1, Duration.standardMinutes(2)))
.apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
.apply("Read from BQ", new ReadBQ())
.apply("Convert Row",
MapElements.into(TypeDescriptor.of(MyData.class)).via(MyData::fromTableRow))
.apply("Map TableRow", ParDo.of(new MapTableRowV1()))
;
static class ReadBQ extends PTransform<PCollection<Long>, PCollection<TableRow>> {
    @Override
    public PCollection<TableRow> expand(PCollection<Long> input) {
        BigQueryIO.TypedRead<TableRow> rows = BigQueryIO.readTableRows()
                .fromQuery("select * from project.dataset.table limit 10")
                .usingStandardSql();
        return rows.expand(input.getPipeline().begin());
    }
}
static class MapTableRowV1 extends DoFn<MyData, Void> { // element type matches the "Convert Row" step
    @ProcessElement
    public void processElement(ProcessContext pc) {
        LOG.info("String of mydata is " + pc.element().toString());
    }
}
Since BigQueryIO.TypedRead starts from PBegin, one trick is done in ReadBQ via rows.expand(input.getPipeline().begin()). However, this job does NOT run every two minutes. How can I read data from BigQuery periodically?
Look at using Looping Timers; that provides the right pattern.
As written, your code will only fire once after the sequence is built. For fixed windows to trigger, you need input values arriving in the window. For example, have the pipeline read from a Pub/Sub input and have an agent push an event into the topic/subscription every 2 minutes, as sketched below.
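A minimal sketch of that idea (the subscription name is a placeholder, and ReadBQ's input type would change from Long to String):

// Sketch: let Pub/Sub messages, pushed every 2 minutes, drive the fixed windows.
pipeline
    .apply("Read ticks", PubsubIO.readStrings()
        .fromSubscription("projects/my-project/subscriptions/every-2-min"))
    .apply(Window.into(FixedWindows.of(Duration.standardMinutes(2))))
    .apply("Read from BQ", new ReadBQ()); // adjust ReadBQ to accept PCollection<String>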
I am going to assume that you are running in streaming mode here; however, another way to do this would be to use a batch job and run it every 2 minutes from Composer. The reason being that if your job is idle for effectively 90 seconds (2 minutes minus processing time), your streaming job is wasting resources.
One other note: look at thinning down the column selection in your BigQuery SQL (to save time and money). Look at using a filter in your SQL to pick up a partition or cluster, and at using a @timestamp filter to only scan records that are new in the last N. This could give you better control over how you deal with latency and variability at the db level.
As you have mentioned in the question, BigQueryIO read transforms start with PBegin, which puts them at the start of the graph. In order to achieve what you are looking for, you will need to make use of the BigQuery client libraries directly within a DoFn.
For an example of this, have a look at this transform.
Using a normal DoFn for this will be OK for small amounts of data, but for a large amount of data you will want to look at implementing that logic in an SDF (splittable DoFn).
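A rough sketch of that DoFn approach (the class name, query, and String output are illustrative assumptions, not the linked transform):

// Sketch: query BigQuery directly on each GenerateSequence tick, using the
// com.google.cloud.bigquery client library.
static class PeriodicBQReadFn extends DoFn<Long, String> {
    private transient BigQuery client;

    @Setup
    public void setup() {
        client = BigQueryOptions.getDefaultInstance().getService();
    }

    @ProcessElement
    public void processElement(ProcessContext c) throws InterruptedException {
        // A timestamp filter in the query would keep each tick from rescanning old rows.
        QueryJobConfiguration query = QueryJobConfiguration
                .newBuilder("select * from project.dataset.table limit 10")
                .setUseLegacySql(false)
                .build();
        for (FieldValueList row : client.query(query).iterateAll()) {
            c.output(row.toString()); // replace with real row-to-object mapping
        }
    }
}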

Testing Flink window

I have a simple Flink application which sums up events with the same id and timestamp within the last minute:
DataStream<String> input = env
.addSource(consumerProps)
.uid("app");
DataStream<Pixel> pixels = input.map(record -> mapper.readValue(record, Pixel.class));
pixels
.keyBy("id", "timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(dynamoDBSink);
env.execute(jobName);
I am trying to test this application with the approach recommended in the documentation. I have also looked at this stackoverflow question, but adding the sink didn't help.
I do have a @ClassRule in my test class, as recommended. The function looks like this:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(2);
CollectSink.values.clear();
Pixel testPixel1 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel2 = Pixel.builder().id(2).timestampRoundedToMinutes("202002261220").constant(1).build();
Pixel testPixel3 = Pixel.builder().id(1).timestampRoundedToMinutes("202002261219").constant(1).build();
Pixel testPixel4 = Pixel.builder().id(3).timestampRoundedToMinutes("202002261220").constant(1).build();
env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
.keyBy("id","timestampRoundedToMinutes")
.timeWindow(Time.minutes(1))
.sum("constant")
.addSink(new CollectSink());
JobExecutionResult result = env.execute("AggregationTest");
assertNotEquals(0, CollectSink.values.size());
CollectSink is copied from documentation.
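(For reference, the CollectSink in the Flink testing documentation is roughly the following, with the element type adapted to this pipeline:)

// Test sink from the docs: collects results into a static, shared list.
private static class CollectSink implements SinkFunction<Pixel> {
    public static final List<Pixel> values = Collections.synchronizedList(new ArrayList<>());

    @Override
    public void invoke(Pixel value, SinkFunction.Context context) {
        values.add(value);
    }
}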
What am I doing wrong? Also, is there a simple way to test the application with embedded Kafka?
Thanks!
The reason your test is failing is that the window is never triggered; the job runs to completion before the window can reach the end of its allotted time.
The reason for this has to do with the way you are working with time. By specifying
.keyBy("id","timestampRoundedToMinutes")
you are arranging for all the events for the same id and with timestamps within the same minute to be in the same window. But because you are using processing time windowing (rather than event time windowing), your windows won't close until the time of day when the test is running crosses over the boundary from one minute to the next. With only four events to process, your job is highly unlikely to run long enough for this to happen.
What you should do instead is something more like this: set the time characteristic to event time, and provide a timestamp extractor and watermark assigner. Note that by doing this, there's no need to key by the timestamp rounded to minute boundaries -- that's part of what event time windows do anyway. This also fixes the test: when a bounded input is exhausted, Flink sends a final watermark through the pipeline, which closes any open event-time windows before the job finishes.
public static void main(String[] args) throws Exception {
    ...
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

    env.fromElements(testPixel1, testPixel2, testPixel3, testPixel4)
        .assignTimestampsAndWatermarks(new TimestampsAndWatermarks())
        .keyBy("id")
        .timeWindow(Time.minutes(1))
        .sum("constant")
        .addSink(new CollectSink());

    env.execute();
}

// Event stands in for the stream's element type (Pixel here), assuming it
// carries an epoch-millis timestamp field.
private static class TimestampsAndWatermarks extends BoundedOutOfOrdernessTimestampExtractor<Event> {
    public TimestampsAndWatermarks() {
        super(Time.milliseconds(0)); // example bound; allow more to handle out-of-order data
    }

    @Override
    public long extractTimestamp(Event event) {
        return event.timestamp;
    }
}
See the documentation and the tutorials for more about event time, watermarks, and windowing.

When Elm Html.Program calls subscriptions

I've found a possible answer to this question in a Google Group, but I'd like to know if it's correct and, if it is, add a follow-up question.
The answer there is
Every time the global update function in your app runs for any reason,
the global subscriptions object is reevaluated as well, and effect
managers receive the new list of current subscriptions
If subscriptions gets called any time the model changes, what is the effect on subscriptions such as Time.every second (taken from the Time Effect Elm Guide)? Does that mean the timer gets reset when the model changes? What if it was Time.every minute - if the model changes 20 seconds after it starts, will it fire in 60 - 20 = 40 seconds, or will it fire in 1 minute?
You can check when update and subscriptions are called by adding a Debug.log statement to each. The subscriptions function is called first at the start (since the messages which will be sent to update may depend on it) and also after each call to update.
The timer interval seems to be unaffected by subsequent calls to subscriptions. For example, if you use the Elm clock example, change the subscription to
Time.every (10*Time.second) Tick
and add a button to the view which resets the model value to 0, you will see that the tick still takes place at regular 10s intervals, no matter when you click the button.
TL;DR: It will fire in 1 minute, unless you turn your subscription off and on during the first minute
Every time your update runs, the subscriptions function will run too.
The subscriptions function essentially is a list of things you want your app to subscribe to.
In the example you have a subscription that generates a Tick message every 60 seconds.
The behavior you may expect is:
T= 0s: The first time subscriptions runs, you start your subscription to "receive Tick message every 60 seconds".
T= between 0 and 60s: it doesn't matter how often your update function runs; subscriptions will be re-run each time, but as long as your particular subscription to the Tick remains ON, the timer is unaffected.
T= 60s: You receive a Tick message from your subscription, which in turn will fire update to be called.
T= 60s: subscriptions will run again (because of previous call to update)
What could be interesting is what happens if the subscription to Tick is canceled along the way and then reinstated:
T= 0: subscription to Tick
T= 20s: suppose something changes in the model, causing the subscription to Tick to be canceled
T= 40s: some other change in the model causes the subscription to Tick to be turned on again
T= 100s: Tick message is fired, and passed to update function
T= 100s: subscriptions will run again

API throttle RateLimit-Remaining never resets every minute

I have an API created with Laravel 5.2. I am using throttle for rate limiting. In my routes file, I have set the following:
Route::group(['middleware' => ['throttle:60,1'], 'prefix' => 'api/v1'], function () {
    //
});
As I understand Laravel throttling, the above will limit the route to 60 requests per minute. I have an application querying the route every 10 seconds, i.e. 6 requests per minute, which satisfies the throttle comfortably.
The problem is, my queries work until I have made 60 requests, regardless of time; after 60 requests the API responds with 429 Too Many Requests and the header Retry-After: 60. As I understand it, X-RateLimit-Remaining should reset every minute, but it seems it is never reset until it reaches 0; only after it becomes zero does it wait 60 seconds and then reset.
Am I doing anything wrong here?