I'd like to store some JMeterVariables together with the SampleResults in InfluxDB using a BackendListenerClient for InfluxDB (I am using the rocks.nt.apm.jmeter package to get the raw results).
My current test logs in as a random customer, requests some random entities, and logs out. Most of the results fall within a normal range, but I'd like to zoom in on certain extreme sample results and find out which customer / requested entity they belong to. We have seen in the past that we can find performance issues with specific configurations this way.
I store the customer and entity ID in a variable. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but that property stores the variables in the SampleEvent, which is not accessible in the BackendListener.
I could use the thread name or sample label to store the vars, but I saw the CSV writer can actually write the var values from the event, which would be a much nicer solution.
Looking forward to your thoughts,
Best regards, Spud
You've got it right - the Backend Listener is not customizable in terms of fine-shaping the data you send to InfluxDB.
Alas.
However, there's a Swiss Army Knife always available in JMeter: the JSR223 components.
The JSR223 listener, in your case.
The InfluxDB line protocol is as simple as it gets, and HTTP/REST libraries are in abundance (Apache HttpClient is already bundled with standard JMeter, to my recollection, so no additional jars are needed) - just pick it up, form your time series as you like, post it to your InfluxDB REST endpoint, and the job's done.
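A minimal sketch of such a JSR223 Listener (Groovy engine, Java-style syntax), assuming InfluxDB 1.x on localhost with a database called jmeter and variables named customerId / entityId - all of those are assumptions to adjust to your setup:

import java.nio.charset.StandardCharsets;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

// JMeterVariables ARE accessible here via the 'vars' shorthand; 'prev' is the SampleResult.
String customer = vars.get("customerId");
String entity = vars.get("entityId");

// InfluxDB line protocol: measurement,tag=value field=value timestamp(ns)
// (escape spaces/commas in tag values for real data)
String line = "requests,label=" + prev.getSampleLabel()
        + ",customer=" + customer
        + ",entity=" + entity
        + " elapsed=" + prev.getTime()
        + " " + (prev.getTimeStamp() * 1000000L);

CloseableHttpClient client = HttpClients.createDefault();
try {
    HttpPost post = new HttpPost("http://localhost:8086/write?db=jmeter");  // assumed endpoint
    post.setEntity(new StringEntity(line, StandardCharsets.UTF_8));
    client.execute(post).close();
} finally {
    client.close();
}

In real use you would reuse the HTTP client and batch several lines per request rather than firing one POST per sample.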
I am trying to set up an integration/API test suite with Karate and am considering using Karate Netty for mocking required services. For the test setup, the system under test A (a Spring Boot app) is started up completely. The Karate tests are then executed by a Maven test run against this instance.
Service A depends on multiple other services; these need to be mocked away for the tests. To do so, my idea was to configure a running Karate Netty standalone instance as an HTTP proxy (done via JVM args of service A).
Now my idea was to create one test feature file: xyz-test.feature
And the required mocks for this file are defined in an associated mock feature file: xyz-mock.feature
(The test scenarios are rather complex and the responses of the external services could vary)
This means for a full test run I need to load up a couple of mock feature files. So:
What is the matching strategy for multiple mock feature files? Which scenario wins, so to speak?
Is there any way to ensure, that the right mock file is used for the associated test file?
(Clearly I can reconfigure the running standalone instance and advise it to use xyz-mock.feature next.
But this would stop me from using parallel execution for my API tests, right?)
I already thought about reusing the Correlation-Id, which I can send in for each test and then match against in the mock file (it is also sent to all called services). But:
Is there a way to define a global matcher per mock file?
It sounds like you need only one mock file. You could boot 2 on different ports if you wanted, but there is no way to "merge" them into one port - if that is what you were looking for.
In my experience, you will be able to have a single mock take care of all your edge cases. This is because Karate's approach is unconventional: you pretty much write a stateful server. But by keeping variables in memory and using some clever JSON-path, you can simulate CRUD with very few lines of code: https://github.com/intuit/karate/tree/master/karate-netty#background
You can use only one at a time, by design
Given the above limitation, here's an interesting idea: add something like an extra pathMatches('/__test/reset') scenario that cleans up your state and sets the Background variables back to things like * def cats = []. Now in each feature, just call the special "reset" URL at the start. The good thing is that Karate is thread-safe. Another idea, as you said, is to maintain two or three different variables and use some logic to "route" based on a header - again, very easy IMO. Use a map of maps, e.g.:
def data = { cats1: {}, cats2: {}, cats3: {} }
And you can get the header, e.g. if it is mode: cats1
* def mode = karate.get('requestHeaders.mode[0]')
* def cats = data[mode]
Not sure if this answers your question, but if the last Scenario has an "empty" description, it is a "catch all" and can in theory delegate to another server (or mock): https://github.com/intuit/karate/tree/develop/karate-netty#proxy-mode
Your question is a little confusing, so you may have to edit and re-word it if I haven't understood.
EDIT: using multiple mock files should be possible in 1.1.0 onwards: https://github.com/intuit/karate/issues/1566
Could you explain the usage of Modules with a select query?
For example, if I write (as shown on this page: https://cumulocity.com/guides/users-guide/administration/):
select * from MeasurementCreated
Is it useful for getting real-time notifications by subscribing to the related channel? Is the module reachable from an AngularJS module? Can this module be used in other CEL statements?
Just selecting data without putting it into another stream can make sense if you want to make this data available via a real-time channel to some external application (which could of course be an AngularJS app).
Take a look at this section in the docs: http://cumulocity.com/guides/reference/real-time-statements/#notifications
This particular example, though, does not make a lot of sense, because raw measurement data is already provided on a real-time channel:
http://www.cumulocity.com/guides/reference/measurements/#notifications
As for the second part of the question:
Yes, it is possible to communicate with other modules within your tenant.
For example, you can declare a stream in module a and it will be available in module b.
There is a ton of documentation on academic theory and best practices for managing versioning of RESTful Web Services; however, I have not seen much discussion of how multiple REST API versions interact with the same data.
I'd like to see various architectural strategies or documentation on how to handle hosting multiple versions of your app that rely on the same data pool.
For instance, suppose you make a destructive change at the database level to a table that forces you to increment your major API version to v2.
Now at any given time, users could be interacting with the v1 web service and the v2 web service at the same time and creating data that is visible and editable by both services. How should this be handled?
Most changes introduced to an API affect the content of the response; as long as the changes are incremental this is not a very big problem (note: you should never expose the exact DB model directly to the clients).
When you make a destructive/significant change to the DB model and a new version of the API is introduced, there are two options:
1. Turn the previous version off and answer all requests to it with a 301 and the new location.
2. If 1. is not possible, you need to maintain both the previous and the current version of the API. Since this can be time- and money-consuming, it should be done only for a limited period, after which the previous version is turned off.
What about the DB model? When two versions of the API are active at the same time, I'd try to keep the DB model as consistent as possible - bearing in mind that running two versions at the same time is only temporary. But as I wrote earlier, the DB model should never be exposed directly to the clients - this helps you avoid a lot of problems.
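As an illustration of that last point, a minimal sketch (class and field names are hypothetical) where each API version maps the shared entity to its own DTO, so the table can change without breaking the published contracts:

// Shared persistence model - free to change between releases.
class CustomerEntity {
    Long id;
    String fullName;   // replaced the old firstName/lastName columns
    String email;
}

// The v1 contract stays frozen even though the table changed.
class CustomerV1Dto {
    String firstName;
    String lastName;

    static CustomerV1Dto from(CustomerEntity e) {
        CustomerV1Dto dto = new CustomerV1Dto();
        // Best-effort split of the new column to keep v1 clients working.
        String[] parts = e.fullName.split(" ", 2);
        dto.firstName = parts[0];
        dto.lastName = parts.length > 1 ? parts[1] : "";
        return dto;
    }
}

// The v2 contract exposes the new shape directly.
class CustomerV2Dto {
    String fullName;

    static CustomerV2Dto from(CustomerEntity e) {
        CustomerV2Dto dto = new CustomerV2Dto();
        dto.fullName = e.fullName;
        return dto;
    }
}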
I have given this a little thought...
One solution may be this:
Just because the v1 API should not change doesn't mean the underlying implementation cannot change. You can modify the v1 implementation code to set a default value, omit the saving of a field, return an unchecked exception, or do some kind of computational logic that keeps the v1 API compatible with the shared datasource. Then, implement a better, cleaner, more idealistic implementation in v2.
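A minimal sketch of that idea (names are hypothetical): the v1 service keeps its old input shape and fills in a default for a column that only v2 clients know about.

// Column added for v2; v1 clients never send it.
class Order {
    String productId;
    int quantity;
    String currency;
}

interface OrderRepository {
    Order save(Order order);   // hypothetical shared persistence layer
}

class OrderServiceV1 {
    private final OrderRepository repository;

    OrderServiceV1(OrderRepository repository) {
        this.repository = repository;
    }

    Order create(String productId, int quantity) {
        Order order = new Order();
        order.productId = productId;
        order.quantity = quantity;
        order.currency = "USD";   // default so rows created via v1 stay valid for v2 readers
        return repository.save(order);
    }
}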
When you are going to change anything in your API structure that can change the response, you must increase your API version.
For example, say you have this request and response:
request post: a, b, c, d
res: {a,b,c+d}
and you are going to add 'e', fetched from the database, to your response.
If none of the current client versions depend on 'e', you can add it within your current API version.
But if your new changes alter the existing responses, for example:
res: {a+e, b, c+d}
you must increase the API version number to prevent clients from breaking.
Changes to the request inputs work the same way.
What's the best strategy to use when writing JMeter tests against a web application where the values of certain query-string and POST variables are going to change for each run?
A quick, common example:
1. You go to a web page.
2. Enter some information into a form.
3. Click Save.
4. Behind the scenes, a new record is entered in the database.
5. You want to edit the record you just entered, so you go to another web page. Behind the scenes it's passing the page a parameter with the database ID of the row you just created.
When you're running step 5 of the above test, the page parameter/Database ID is going to change each time.
The workflow/strategy I'm currently using is:
1. Record a test using the above actions.
2. Make a note of each place where a query string variable may change from run to run.
3. Use an XPath or Regular Expression Extractor to pull the value out of a response and into a JMeter variable.
4. Replace all appropriate instances of the hard-coded parameter with the above variable.
This works and can be automated to an extent. However, it can get tedious, is error prone, and fragile. Is there a better/commonly accepted way of handling this situation? (Or is this why most people just use JMeter to play back logs? (-;)
Sounds to me like you're on the right track. The best that can be achieved by JMeter is to extract page variables with a Regular Expression or XPath post-processor. However, you're absolutely correct that this is not a scalable solution, and it becomes increasingly tricky to maintain or grow.
If you've reached this point then you may want to consider a tool which is more specialised for this sort of problem. Have a look at a web testing tool such as Watir; it will automatically handle changing post parameters, though you would still need to extract parameters if you need to do a database update. Using Watir allows for better code reuse, making the problem less painful.
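If you do stay in JMeter, a JSR223 PostProcessor is a scripted alternative to the built-in extractors for the trickier cases. A minimal sketch (Groovy engine, Java-style syntax), assuming the save response exposes the new row ID in a form like id=12345 - the pattern and the recordId variable name are assumptions:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// 'prev' is the SampleResult of the Save request; 'vars' holds the JMeter variables.
String body = prev.getResponseDataAsString();
Matcher m = Pattern.compile("id=(\\d+)").matcher(body);
if (m.find()) {
    vars.put("recordId", m.group(1));   // later samplers reference it as ${recordId}
} else {
    prev.setSuccessful(false);          // fail fast instead of replaying a stale ID
}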
We have had great success testing similar scenarios with JMeter by storing parameters in JMeter variables within a JDBC assertion. We then do our HTTP GET/POST and use a BSF Assertion with JavaScript to do complex validation of the response. Hope it helps.
Let me begin with an illustrative example (assume the implementation is in a statically typed language such as Java or C#).
Assume that you are building a content management system (CMS) or something similar. The data is hierarchically organised into Folders. Each folder has a collection of children; a child may be a Page or a Folder. All items are stored within a root folder. No cycles are allowed. We have an acyclic graph.
The system will have a remote API and instances of Folder and Page must be serialized / de-serialized across the network. With a typical implementation of folder, in which a folder's children are a List, serialization of the root node would send the entire graph. This is unacceptable for obvious reasons.
I am interested to hear how people have solved this problem in the past.
I have two potential suggestions:
Navigation by query: Change the domain model so that the Folder class contains only a list of IDs for each child. To access a child we must query for it. Serialisation is now trivial since the graph ends at a well-defined point. The major downside is that we lose type safety - the ID could be for something other than a folder/page.
Stop and re-attach: During serialization, stop whenever we detect a reference to a folder or page and send the ID instead. When de-serializing, we must then look up the corresponding object for each ID and re-attach it at the relevant position in the nascent object.
I don't know what kind of API you are trying to build, but your suggestion #1 sounds like it is close to what is recommended for REST style services and APIs. Basically, a Folder object would contain a list of URLs to its children.
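A minimal sketch of what such a representation could look like (field, path and helper names are hypothetical):

import java.util.List;

// What the serialized Folder resource carries: links instead of nested children.
class FolderResource {
    String id;
    String name;
    List<String> childUrls;   // e.g. "/api/folders/42", "/api/pages/7"
}

// Client side: the graph is only traversed one level per request.
class FolderClient {
    private final HttpFetcher fetcher;   // hypothetical HTTP helper

    FolderClient(HttpFetcher fetcher) { this.fetcher = fetcher; }

    FolderResource fetch(String url) {
        return fetcher.getJson(url, FolderResource.class);
    }
}

interface HttpFetcher {
    <T> T getJson(String url, Class<T> type);
}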
The Navigation by query solution was used for NFS. Reading through your question, it looks to me as if you're trying to implement a kind of file system yourself.
If you're looking specifically into sending objects over the network, there is always CORBA. Aside from that, there are DCOM and the newer WCF. But wait, there is more, like RMI. Furthermore, there are Web Services. I'll stop here.
Suppose you model the whole tree with every element being a Node, and specialisations of Node being Folder and, umm, Leaf. You have a "root" Node. Nodes have the methods
canHaveChildren()
getChildren()
Leaf nodes have the obvious behaviours (they never even need to hit the network).
A Folder's getChildren() fetches the next set of nodes.
I did devise a system with Restful services along these lines. Seemed to be reasonably easy to program to.
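A minimal sketch of that shape (the RemoteApi client and its fetchChildren call are hypothetical placeholders for the RESTful service):

import java.util.Collections;
import java.util.List;

interface Node {
    boolean canHaveChildren();
    List<Node> getChildren();
}

class Leaf implements Node {
    public boolean canHaveChildren() { return false; }
    public List<Node> getChildren() { return Collections.emptyList(); }  // never hits the network
}

class Folder implements Node {
    private final String id;
    private final RemoteApi api;   // hypothetical client for the remote service

    Folder(String id, RemoteApi api) { this.id = id; this.api = api; }

    public boolean canHaveChildren() { return true; }
    public List<Node> getChildren() { return api.fetchChildren(id); }  // one request per folder expansion
}

interface RemoteApi {
    List<Node> fetchChildren(String folderId);
}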
I would not do it with the Navigation by query method, simply because I would like to stick with a domain model where folders contain folders or pages.
Customizing the serialization might also be tricky, bug-prone and difficult to change/understand.
I would suggest that you introduce an object like FolderBrowser in your model which takes an id and gives you a list of the folder's contents. That will make your service operations simpler.
Cheers,
Unmesh
The classical solution is probably to use a proxy pattern, where some of the graph is sent over the network and some of the folders are replaced by proxies that do not have their lists of children populated until they are queried. A round trip to the server takes a significant amount of time, and it will probably result in too many requests if all folders are proxies (this would yield a new request each time the contents of a folder are inspected), so you want a trade-off between the size of each chunk of data and the number of server requests needed in a typical scenario. This is of course application-specific, but sending the contents of all child folders down to, for instance, depth 2 might be a useful strategy...
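A minimal sketch of such a proxy (names are hypothetical; the depth-2 prefetch mentioned above is left out for brevity):

import java.util.List;

interface Folder {
    String getName();
    List<Folder> getChildren();
}

// Sent over the wire in place of a fully populated folder.
class FolderProxy implements Folder {
    private final String id;
    private final String name;
    private final FolderService service;   // hypothetical remote lookup
    private List<Folder> children;         // loaded on first access, then cached

    FolderProxy(String id, String name, FolderService service) {
        this.id = id;
        this.name = name;
        this.service = service;
    }

    public String getName() { return name; }

    public List<Folder> getChildren() {
        if (children == null) {
            children = service.loadChildren(id);   // one round trip, then cached
        }
        return children;
    }
}

interface FolderService {
    List<Folder> loadChildren(String folderId);
}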
Long story short: What will probably work best is your solution #1 with the exception that you want to send more than one folder at a time because of the overhead of a round trip to the server...