Update Micronaut metric without a controller or a job - Kotlin

I'm working on a Micronaut project, using Kotlin, and I want to use Micrometer to create a metric for the number of entries I have in the database.
My question is: does the update of this metric have to rely on a cron job? What I want is for the metric to be updated automatically when the metrics endpoint is requested (e.g. /metrics/name), and the response returned. However, I don't want a route in a controller dedicated to it. Is there any way of doing this?
Regards
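For what it's worth, this is exactly the behavior a Micrometer gauge provides: its value function is re-evaluated every time the meter is sampled, i.e. whenever the metrics endpoint is scraped, so neither a controller route nor a cron job is needed. A minimal Kotlin sketch, assuming a hypothetical EntryRepository with a count() method:

import io.micrometer.core.instrument.MeterRegistry
import jakarta.inject.Singleton // javax.inject on older Micronaut versions

// Hypothetical data-access abstraction; swap in your real repository.
interface EntryRepository {
    fun count(): Long
}

@Singleton
class EntryCountMetric(registry: MeterRegistry, private val repository: EntryRepository) {
    init {
        // The lambda runs on every sample of the gauge, so the metric
        // reflects the current row count on each scrape.
        registry.gauge("entries.count", repository) { it.count().toDouble() }
    }
}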

Related

REST bulk endpoint for fetching data

We (the producer of the API) have an endpoint
/users/:{id}/name
which is used to retrieve the name for user 'id'.
Now as a consumer I want to display the list of names for users like:
user1: id1, name1
user2: id2, name2
where I have the ids as input.
In such a case, should I make separate calls to the API for each user (here the list size depends on UI pagination, e.g. 50) to fetch the information, or create/ask the producer to create a bulk endpoint like:
POST /users/name
body: { ids: [] }
If the latter, am I not losing the REST principle here by fetching information using POST rather than GET? If the former, am I not putting too much network overhead on the system?
Also, since this seems to be a very common use case: if we choose the POST method, is there really a need for the GET endpoint, since the POST endpoint can handle a single user as well?
Which approach should be chosen?
A GET API call should be used for fetching data. Since the browser knows GET calls are idempotent, it can handle some situations on its own, such as retrying a failed call.
Since REST calls are lightweight, we tend to overuse them with repeated calls. In your case, since you want all the id-to-name mappings at once, one call is sufficient. Alternatively, add an aggregator endpoint in a backend API gateway to reduce network traffic and keep the repeated calls close to the actual service.
Keeping GET /users/:{id}/name alongside this is also not a bad idea; it helps abstract business functionality, and a particular scenario may only allow fetching a single user.
Also, using GET /users/name with pagination and returning the full list of names is overly complex for a single lookup, so keeping both is fine.
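To make that preferred shape concrete, here is a purely illustrative sketch of the bulk contract in Kotlin with Micronaut (the framework from the first question); the NamesRequest/UserName types and the UserNameLookup interface are hypothetical:

import io.micronaut.http.annotation.Body
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Post

// Hypothetical request/response shapes for the bulk lookup.
data class NamesRequest(val ids: List<Long>)
data class UserName(val id: Long, val name: String)

// Assumed lookup abstraction; back it with your real data source.
interface UserNameLookup {
    fun nameFor(id: Long): String
}

@Controller("/users")
class UserNamesController(private val users: UserNameLookup) {

    // POST /users/name with body {"ids": [1, 2, 3]}
    // returns [{"id": 1, "name": "..."}, ...]
    @Post("/name")
    fun names(@Body request: NamesRequest): List<UserName> =
        request.ids.map { UserName(it, users.nameFor(it)) }
}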

How To: Save JMeterVariable values to InfluxDB with the sample results

I'd like to store some JMeterVariables together with the sample results in InfluxDB, using a BackendListenerClient for InfluxDB (I am using the package rocks.nt.apm.jmeter to get the raw results).
My current test logs in as a random customer, requests some random entities, and logs out. Most of the results fall within a normal range; I'd like to zoom in on certain extreme sample results and find out which customer / requested entity they belong to. We have seen in the past that we can find performance issues with specific configurations this way.
I store the customer and entity ID in a variable. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but that property stores the variables in the SampleEvent, which is not accessible in the BackendListener.
I could use the thread name or sample label to store the vars, but I saw the CSV writer can actually write the variable values from the event, which is a much nicer solution.
Looking forward to your thoughts.
Best regards, Spud
You got it right - the Backend Listener is not customizable in terms of fine-shaping the data you're sending to Influx.
Alas.
However, there's a Swiss Army knife always available in JMeter: the JSR223 components.
The JSR223 Listener, in your case.
The InfluxDB line protocol is as simple as could be, and HTTP/REST libraries are in abundance (Apache HTTP should already be included with standard JMeter, to my recollection, so no additional jars are needed) - just pick it all up, form your time series as you like, toss it towards your InfluxDB REST endpoint, and the job's done.
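JSR223 scripts are usually written in Groovy, but the idea is language-agnostic; here is a sketch in Kotlin using only the JDK's HTTP support, assuming an InfluxDB 1.x /write endpoint and a database named 'jmeter'. In a real JSR223 Listener, the customer/entity values would come from vars.get(...) and the elapsed time from prev.getTime():

import java.net.HttpURLConnection
import java.net.URL

fun writeSample(customerId: String, entityId: String, elapsedMs: Long) {
    // InfluxDB 1.x line protocol: measurement,tag=... field=... timestamp(ns)
    val line = "requests,customer=$customerId,entity=$entityId " +
        "elapsed=${elapsedMs}i ${System.currentTimeMillis() * 1_000_000}"

    val conn = URL("http://localhost:8086/write?db=jmeter")
        .openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.outputStream.use { it.write(line.toByteArray()) }

    // InfluxDB 1.x answers 204 No Content on a successful write.
    check(conn.responseCode == 204) { "Influx write failed: ${conn.responseCode}" }
}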

API design for machine learning models

We are using the Django REST Framework to build an API web service that contains many already-trained machine learning models. Some models can predict with a batch_size of 1, i.e. one image at a time. Others need a history of data (timelines) to be able to predict/forecast. Usually these timelines can hardly fit in, and be passed as, a request parameter. Given that, we want to give the requester the ability to request predictions by either:
sending the data (small batches) to predict as a parameter, or
passing a database id/reference as a parameter, in which case the API will query the database and do the predictions.
So the question is: what would be the best API design for identifying which approach the requester chose? Some considered approaches:
Add /db to the path of the endpoint, e.g. POST models/<X>/db. The problem with this approach is that two endpoints are generated for each model.
Add a db parameter as a boolean to each request. The problem with this approach is that it adds additional overhead to each request just to check which approach was chosen. It also makes the code less readable.
A global setting per requester, fixed when they sign up for the API token. The problem is that this restricts the requester to one mode, which is not convenient.
What would be the best approach for this case?
The fact that you currently have more than one source would cause me to seriously consider abstracting the "source" component as much as possible, to allow all manner of sources. For example, suppose future users would like to pull data out of MongoDB instead of whatever DB you are currently using? Or from some other storage structure? Or from a third party? Or, or, or....
In any case, the question is now: "how much do they all have in common, and what should they all implement?"
class Source(object):
    def __get_batch__(self, batch_size=1):
        raise NotImplementedError()  # each source needs to implement this on its own

# http_library.POST_endpoint("/db")
class DBSource(Source):
    def __init__(self, post_data):
        if post_data["table"] in ["data1", "data2"]:
            self.table = post_data["table"]
        else:
            raise Exception("Must use a predefined table to prevent SQL injection")

    def __get_batch__(self, batch_size=1):
        return sql_library.query("SELECT * FROM {} LIMIT ?".format(self.table), batch_size)

# http_library.POST_endpoint("/local")
class LocalSource(Source):
    def __init__(self, post_data):
        self.data = post_data["data"]
        self.i = 0  # cursor into the posted data

    def __get_batch__(self, batch_size=1):
        batch = self.data[self.i:self.i + batch_size]  # slice, not tuple index
        self.i += batch_size
        return batch
This is just an example. However, if a fixed part of your path designates "the source", then you have left yourself open to scale this indefinitely.
Add /db to the path of the endpoint, e.g. POST models/<X>/db. The problem with this approach is that two endpoints are generated for each model.
Inevitable. DRY out common code to sub-methods.
Add a db parameter as a boolean to each request. The problem with this approach is that it adds additional overhead to each request just to check which approach was chosen. It also makes the code less readable.
There won't be any additional overhead (that's what your underlying framework does to match a URL to a function/method anyway). However, these are two separate functionalities; I would keep them separate, so I prefer the first approach.
A global setting per requester, fixed when they sign up for the API token. The problem is that this restricts the requester to one mode, which is not convenient.
Yikes! Unless you provide a UI letting the user select a preference and apply it globally (I don't think any UX designer will agree to that).
That being said, the API design should be driven by asking who masters (or owns) the data. If it's the application, and the user already knows the ID of the entity, then you shouldn't be asking the user for the data.
If it's the user, and the data won't fit in a POST body, then I would say a real-time API may not be the right solution; think about message queues / pub-sub based systems.
If you need a hybrid solution, as you asked in the question, then I would prefer the first approach.

Ember change normalizeResponse based on queried model

I'm using a second datastore with my Ember app, so I can communicate with a separate external API. I have no control over this API.
With a DS.JSONSerializer I can add some missing properties like id:
normalizeResponse(store, primaryModelClass, payload, id, requestType) {
  if (requestType == 'query') {
    payload.forEach(function(el, index) {
      payload[index].id = index;
    });
  }
  // hand the patched payload back to the default normalization
  return this._super(store, primaryModelClass, payload, id, requestType);
}
Now I can do different tricks for each requestType. But every response is parsed this way, and sometimes a response from one particular request needs to be parsed differently.
So what I am trying to do is change the normalizeResponse functionality for each different request path (mapped to a fake model using pathForType in an adapter for this store). But the argument store is always the same (obviously), and the argument primaryModelClass is always "unknown mixin" - not sure if this can be of any help.
How can I find out what model was requested? With this information I could do a switch() in normalizeResponse.
Is there a different way to achieve my goal that does not require me to make a separate adapter for every path/model?
There are over a dozen normalize functions available. Something should work for what I am trying to achieve.
I think this is a great example of a use case for not using Ember Data.
Assuming that you have models A, B, C that are all working great with Ember Data, leave those alone.
I'd create a separate service and make raw requests to that different endpoint. So you'd replace this.store.query('thing', {args}) with a separate service that uses ember-ajax (or ember-fetch or whatever). If you need to, you can use that service to hold the data (Ember Data is just a service anyway), or you can create models and push them into the store manually.
Without knowing more about your exact situation, it's hard to give specific code/advice, but I'd just avoid this problem and write your own custom service.
You can use primaryModelClass.modelName to identify which model was requested and switch on it inside normalizeResponse.

How do multiple versions of a REST API share the same data model?

There is a ton of documentation on academic theory and best practices for managing the versioning of RESTful web services; however, I have not seen much discussion on how multiple versions of a REST API interact with the same data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive schema change to a database table that forces you to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes introduced are incremental, this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and answer all queries to it with a 301 and the new location.
2. If 1. is impossible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for some time, and the previous version should finally be turned off.
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible, keeping in mind that running two versions at the same time is just temporary. And as I wrote earlier, the DB model should never be exposed directly to the clients - this will help you avoid a lot of problems.
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit the saving of a field, return an unchecked exception, or do some kind of computational logic that helps the v1 API stay compatible with the shared datasource. Then, implement a better, cleaner, more idealistic implementation in v2.
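As an illustration of that adapter idea, here is a hedged Kotlin sketch (Micronaut-style annotations; the UserStore interface and the fullName/firstName split are hypothetical) where v1 keeps its old contract by computing the legacy fields from the shared data source:

import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get

// Shared current model: suppose v2 merged the old firstName/lastName
// columns into a single fullName field.
data class UserRecord(val id: Long, val fullName: String)

interface UserStore {
    fun find(id: Long): UserRecord
}

// v2 exposes the current model as-is.
@Controller("/v2/users")
class UsersV2(private val store: UserStore) {
    @Get("/{id}")
    fun get(id: Long): UserRecord = store.find(id)
}

// v1 keeps its old response shape by adapting the shared record.
data class UserV1(val id: Long, val firstName: String, val lastName: String)

@Controller("/v1/users")
class UsersV1(private val store: UserStore) {
    @Get("/{id}")
    fun get(id: Long): UserV1 {
        val record = store.find(id)
        val parts = record.fullName.split(" ", limit = 2)
        // Default the last name when the shared data has no second part.
        return UserV1(record.id, parts[0], parts.getOrElse(1) { "" })
    }
}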
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, you have this request and response:
request POST: a, b, c, d
res: {a, b, c+d}
and you are going to add 'e', fetched from the database, to your response.
If nothing in the current client versions depends on 'e', you can add it within your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent clients from breaking.
Changes to the request inputs work the same way.