How to query rabbitmq_exporter

I am trying to use a rather popular Docker image which, from my understanding, uses Prometheus to scrape data from RabbitMQ. This assumption seems confirmed, as the /metrics endpoint gives me exactly the data I would expect from Prometheus in this context.
My problem is that the usual queries to Prometheus yield unexpected results. If I query /api/v1/query?query=rabbitmq_queue_memory, for example, I would expect to receive data about the queue memory. Building requests like this works according to the Prometheus documentation and also works on a plain Prometheus server. The metric also does exist. However, all I receive is a response with status code 200 and an HTML body:
<html>
<head>
<title>RabbitMQ Exporter</title>
</head>
<body>
<h1>RabbitMQ Exporter</h1>
<p><a href='/metrics'>Metrics</a></p>
</body>
</html>
It also does not matter whether I actually make a correct query; the same result appears for /apasdfasdfasfsi/v1/query?query=rabbitmq_queue_memory
Any ideas how to properly query data here? Since this image is rather popular and I cannot find any related issues anywhere (except my own), I assume it does work and I am simply doing something wrong.

It looks as if you're querying your RabbitMQ exporter rather than Prometheus. I don't know whether you already have a Prometheus instance; if not, you'll need to start one, point it at your exporter's /metrics endpoint, and then query that Prometheus instance with /api/v1/query?query=rabbitmq_queue_memory.
All the exporter does is produce the /metrics output you see. Prometheus (properly configured) will then scrape that endpoint periodically, build a time series for each metric from its values across time, and you can query Prometheus for said time series or aggregations thereof.
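For concreteness, here is what the query against Prometheus itself (not the exporter) could look like, sketched in TypeScript. The localhost:9090 address and the "queue" label name are assumptions; the only prerequisite is a scrape job in prometheus.yml pointing at the exporter's /metrics.

// Query Prometheus (not the exporter) over its HTTP API, assuming a Prometheus
// server on localhost:9090 that has a scrape job pointed at the exporter's
// /metrics endpoint. The host/port and the "queue" label are assumptions.
async function queryQueueMemory(): Promise<void> {
  const res = await fetch(
    "http://localhost:9090/api/v1/query?query=" +
      encodeURIComponent("rabbitmq_queue_memory")
  );
  const body = await res.json();
  // Instant queries come back as { status: "success", data: { result: [...] } }
  for (const sample of body.data.result) {
    console.log(sample.metric.queue, sample.value); // value is [timestamp, "<bytes>"]
  }
}

queryQueueMemory();

The same /api/v1/query path against the exporter's port will only ever return the HTML landing page you pasted, which is exactly the symptom in the question.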

Related

How To: Save JMeterVariable values to InfluxDB with the sample results

I'd like to store some JMeterVariables together with the sample results in InfluxDB, using a BackendListenerClient for InfluxDB (I am using the rocks.nt.apm.jmeter package to get the raw results).
My current test logs in as a random customer, requests some random entities, and logs out. Most of the results fall within a normal range; I'd like to zoom in on certain extreme sample results and find out which customer / requested entity they belong to. We have seen in the past that we can find performance issues with specific configurations this way.
I store the customer and entity ID in a variable. My issue is that the JMeterVariables are not accessible from the BackendListenerClient. I looked at the sample_variables property, but that property stores the variables in the SampleEvent, which is not accessible in the BackendListener.
I could use the thread name or sample label to store the vars, but I saw that the CSV writer can actually write the variable values from the event, which is a much nicer solution.
Looking forward to your thoughts,
Best regards, Spud
You got it right - the Backend Listener is not customizable in terms of fine-shaping the data you're sending to Influx.
Alas.
However, there's a Swiss Army Knife always available in JMeter: the JSR223 components.
The JSR223 listener, in your case.
The InfluxDB line protocol is as simple as can be, and HTTP/REST libraries are in abundance (Apache HttpClient is already included with standard JMeter, to my recollection, so no additional jars are needed) - just pick it all up, form your time series as you like, toss it towards your InfluxDB REST endpoint, and the job's done.
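To give a feel for how little is involved, here is the shape of such a write, sketched in TypeScript purely for illustration - inside the JSR223 Listener itself you would build the same payload in Groovy and send it with the bundled HTTP client. The host, database, measurement and tag names below are made-up examples.

// Minimal sketch of an InfluxDB 1.x line-protocol write.
// Host, database, measurement and tag names are hypothetical.
async function writeSample(
  label: string,
  customerId: string,
  entityId: string,
  elapsedMs: number
): Promise<void> {
  // Line protocol: measurement,tag=value,... field=value,... timestamp
  // (tag values containing spaces or commas would need escaping, omitted here)
  const line =
    `samples,label=${label},customer=${customerId},entity=${entityId} ` +
    `elapsed=${elapsedMs} ${Date.now()}000000`; // ms epoch padded to nanoseconds

  const res = await fetch("http://localhost:8086/write?db=jmeter&precision=ns", {
    method: "POST",
    body: line,
  });
  if (res.status !== 204) {
    // InfluxDB 1.x answers 204 No Content on a successful write
    throw new Error(`InfluxDB write failed: ${res.status}`);
  }
}

Because you build the line yourself, you are free to attach the customer and entity IDs as tags, which is exactly the fine-shaping the stock Backend Listener won't let you do.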

Why don't apollo-client's GraphQL queries appear in Chrome's XHR Network filter?

I'm using apollo-client, and I've just noticed that my GraphQL queries don't appear on the list of Network calls when the XHR filter is active.
How is this possible? GQL is just a set of semantics on top of regular old HTTP, right? It's not like a JS library can introduce a whole new networking capability.
In the first image below, you see me filtering for requests with "gra" in them; two appear: the OPTIONS call, and then the POST (which is the real meat). In the second image, I additionally filter by XHRs; the POST is gone.
The "XHR" filter says it captures "XHR and Fetch". The only alternative I can think of would be dynamically adding <script> tags to the document, and I very much doubt that's how apollo-client is managing things.
I don't know what the "json" Type is. The docs for the DevTools don't mention that type.
I think the reason for this is that Chrome's fetch capability is not simply sugar on top of the age-old XMLHttpRequest capability, and the filter option in the DevTools (which says "XHR") simply isn't designed as an umbrella "asynchronous request" filter. I hope they change that, especially since the Type column in the Network panel can still be used to differentiate between the two.
If I had to speculate, I'd say the behavior we observe flows from the way the Network tab hooks into the underlying networking code, and fetch calls don't travel the relevant portion of the codepath that XHR does. As supporting evidence, I offer this SO answer, which highlights functional differences between XHR & fetch that might readily be explained by their being entirely separate code. (It's also possible some of these differences are a result of the fetch spec explicitly calling for behavior different from XHR.)
Finally, I'd add that we probably only noticed this with Apollo's GQL tools because apollo-client was created in a post-XHR world, and may be one of the first libraries we worked with that prefers fetch over XHR rather than using an XHR polyfill.
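If you want to see the difference for yourself, the sketch below (TypeScript, against a hypothetical /graphql endpoint) issues the same POST twice, once through XMLHttpRequest and once through fetch; only the former should appear under the XHR filter, which matches what the question describes.

// Same POST issued two ways, against a hypothetical /graphql endpoint.
const body = JSON.stringify({ query: "{ __typename }" });

// 1. Classic XMLHttpRequest - shows up under the DevTools "XHR" filter
const xhr = new XMLHttpRequest();
xhr.open("POST", "/graphql");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(body);

// 2. fetch - what apollo-client prefers; this is the request that drops out
//    when the XHR filter is active (it appeared as Type "json" in the question)
fetch("/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body,
});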

http://dbpedia.org/sparql endpoint not reliable?

Sometimes a query works, sometimes it doesn't. I sometimes get "Virtuoso S1T00 Error SR171: Transaction timed out" (no timeout is set, or a big timeout is set, so that is not the problem; there is another problem behind it that I am not aware of), or simply a browser HTTP 500 error page.
Sometimes it works from a new browser window in IE, and sometimes it doesn't work from Firefox.
What is going on with dbpedia sparql endpoint? Is there some caching or something that I am not aware of?
The DBpedia query service is kindly provided for free, and does tend to get (ab)used by many users. If you need something you can rely on, I'd suggest setting up your own instance (IIRC there are EC2 instances for that purpose).
It's a shame that the error messages tend to be so random.
Because of its large dataset, the public DBpedia endpoint is under very heavy load and won't always produce proper results. If you need better results, try setting up ARQ for SPARQL queries on your local machine; it will give a better outcome.
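If you have to keep using the public endpoint for now, a small mitigation is to request JSON results and retry transient 5xx responses. A minimal sketch in TypeScript, using the standard Virtuoso query and format parameters (the retry policy itself is only an illustration):

// Query the public endpoint for SPARQL JSON results, retrying transient 5xx
// responses with a simple backoff.
async function dbpediaQuery(sparql: string, retries = 3): Promise<unknown> {
  const url =
    "https://dbpedia.org/sparql?query=" + encodeURIComponent(sparql) +
    "&format=" + encodeURIComponent("application/sparql-results+json");

  for (let attempt = 0; attempt <= retries; attempt++) {
    const res = await fetch(url);
    if (res.ok) return res.json();
    if (res.status < 500) break;                                   // client error: retrying won't help
    await new Promise((r) => setTimeout(r, 1000 * (attempt + 1))); // back off before the next try
  }
  throw new Error("DBpedia query failed after retries");
}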

How to poll for updates with JSONP?

I have a Web server that updates its data once per minute, and want to make that data available to clients of all types. In order to reduce bandwidth, I set up the PHP script to support conditional GETs, using IF-MODIFIED-SINCE and/or IF-NONE-MATCH. The idea is that clients can poll every 30 seconds and thereby be sure that they won't miss anything, but also won't get duplicate data.
That all works great for most types of clients, and I've verified that it works with clients that support the standard HTTP conditional GET semantics.
But it doesn't work with JavaScript because JSONP inserts a <script> tag into the DOM and lets the browser handle things--and there's no support (at least, none that I know of) for conditional GETs in <script> tags.
So I modified my PHP script to support passing an etag value. The returned data contains an etag value that's unique for that minute. When the JavaScript client receives data from the server, it saves the etag value so it can use that value in subsequent requests. The request takes the form:
http://api.mydomain.com/script.php?fmt=json&callback=jscallback&etag=ab79bc65e
If the etag of the data doesn't match the passed etag, then I send the new data.
This all works well and was surprisingly easy to code up using jQuery. My dilemma, though, is what to do if the etag matches. I see two choices:
Return an HTTP 304 (Not Modified)
Return an HTTP 200 (OK), but with the returned data containing just the header information (modified date, etag, etc.) and no actual data items.
If I do the first, then the JavaScript client code is greatly simplified. The browser seems to work just fine if it gets a 304 response to an injected <script> tag. But ... something bothers me about this solution. I don't know what it is, but it seems like I'm depending on behavior that could be browser-specific. Some browser might decide to report an error if it gets a 304.
Doing the second would require a little bit more work on the server, slightly more bandwidth, and would require the clients to check the data to see if the data was updated. It's more work for everybody, but it seems cleaner.
So, to my question. If you were writing a JavaScript client to get this data, which would you prefer? A silent failure that never calls your "success" callback? Or a "success" return that has no data (beyond status) in it? A third option?
Absent any discussion from others, I went with my gut here and implemented the second option. The web server returns an HTTP 200, and the data contains a "Not Modified" status code along with header information, but no records. That makes the JavaScript just slightly more complicated, but prevents me from depending on undocumented behavior.
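For anyone implementing the same scheme, here is roughly what the client side of option 2 could look like, sketched in TypeScript. The payload fields (status, etag, items) and the handling logic are hypothetical, since the actual response format isn't shown above; only the URL shape comes from the question.

// JSONP polling client for option 2. The callback name and URL shape come from
// the question; the payload fields (status, etag, items) are hypothetical.
let lastEtag = "";

function jscallback(data: { status: string; etag: string; items?: unknown[] }): void {
  lastEtag = data.etag;
  if (data.status === "Not Modified" || !data.items) {
    return; // nothing new this time around
  }
  // ... process data.items ...
}
(window as any).jscallback = jscallback; // make sure the injected script can reach it

function poll(): void {
  const script = document.createElement("script");
  script.src =
    "http://api.mydomain.com/script.php?fmt=json&callback=jscallback&etag=" +
    encodeURIComponent(lastEtag);
  script.onload = () => script.remove(); // clean up the tag once it has run
  document.head.appendChild(script);
}

setInterval(poll, 30_000); // poll every 30 seconds, as described above

The extra work over option 1 is a single status check in the callback, which seems a fair price for not relying on how each browser treats a 304 on an injected script tag.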

WCF Paged Results & Data Export

I've walked into a project that is using a WCF service for the data tier. Currently, when data is needed for a grid, all rows are returned, the results are bound to the grid, and the dataset is stuffed into a session variable for paging/sorting/rebinding. We've already hit a max message size problem, so I'm thinking it's time to convert from fetch-and-cache to fetching only the current page.
At face value this seems easy enough, but there's a small catch. The user is allowed to export the entire result set at any point. This means that for grid viewing purposes fetching the current page is fine, but when they want to do an export, I still need to make a call for all the data.
This puts me back into the max message size issue. What is the recommended approach for this type of setup?
We are currently using the wsHttpBinding...
Thanks for any assistance.
I think the recommended approach for large files is to use WCF streaming. I'm not sure of the exact details for your scenario, but you could take a look at this as a starting point:
http://msdn.microsoft.com/en-us/library/ms789010.aspx
I would probably do something like this in your case:
create a service with a "paged" GetData() method, where you specify the page index and the page size as additional parameters. This should give you a nice, clean interface for "regular" use, and it should not hit the maxMessageSize limits.
create a second service (or method) that sends all the data - ideally, you could bundle it up into a ZIP file or something on the server before sending it. If that ZIP file is still too large, you might want to check out WCF streaming for handling large files, as Andy already pointed out.
The maxMessageSize limit is in place for a good reason: to avoid denial-of-service attacks where a WCF service gets flooded with large messages and is brought to its knees. If you can, always keep that in mind and don't just jack up maxMessageSize to 2 GB - it might come back to bite you :-)