The RabbitMQ Management plugin auto-refreshes the page at selectable intervals of 5, 30 or 300 seconds.
I want it to refresh every 1 second. Is that possible?
It's a late reply :P but for the sake of documentation I am answering this.
You can try a quick hack with the management plugin.
Steps:
Unzip the management plugin:
cd /usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/plugins
unzip rabbitmq_management-3.7.4.ez
cd rabbitmq_management-3.7.4
vim priv/www/js/tmpl/layout.ejs
[...]
<option value="5000">Refresh every 5 seconds</option>
<option value="10000">Refresh every 10 seconds</option>
<option value="30000">Refresh every 30 seconds</option>
<option value="">Do not refresh</option>
[...]
Edit with appropriate values; in your case add an option whose value is 1000 (the values are in milliseconds), i.e. <option value="1000">Refresh every 1 second</option>.
Move the old plugin out of the way:
cd /usr/lib/rabbitmq/lib/rabbitmq_server-3.7.4/plugins/
mv rabbitmq_management-3.7.4.ez /myhome/rabbitmq_management-3.7.4.ez
Zip the plugin directory again:
zip -r rabbitmq_management-3.7.4.ez rabbitmq_management-3.7.4
Restart the rabbitmq service.
-Rahul N.
I don't think that the 5 seconds can be changed. I noticed that the message count returned by the API - http://username:password@rabbitmq_server:15672/api/queues/%2f/your_queue_name - also updates only every 5 seconds, even if you keep sending requests to it.
I don't know why anyone would ever need 1-second resolution; maybe you should use the Management HTTP API rather than the web interface?
But yes, technically it is possible. You may change one of the option values to 1000 on the page and select it, or use a browser extension such as a userscript to do that for you on every page visit. The other way is to get into the management plugin and make it return an extra option in the select, as described in the answer above.
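If the Management HTTP API route works for you, here is a minimal sketch of a 1-second poller in Python (the host, credentials and queue name are placeholders based on the comment above; adjust for your setup):

# Poll the RabbitMQ Management HTTP API once per second and print a queue's
# message count. Host, credentials and queue name are placeholders.
import time
import requests

BASE_URL = "http://localhost:15672"
AUTH = ("guest", "guest")  # management user credentials
QUEUE_URL = BASE_URL + "/api/queues/%2f/your_queue_name"  # %2f is the default vhost "/"

while True:
    resp = requests.get(QUEUE_URL, auth=AUTH)
    resp.raise_for_status()
    print(resp.json().get("messages"), "messages in queue")
    time.sleep(1)  # 1-second "refresh"

Note that, as mentioned in the comment above, the counts themselves may still only be recalculated every few seconds on the server side.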
Is there a way to check if Elasticsearch has finished processing my request?
I want to perform integration tests for my application checking if a record can be found after insertion.
For example if I make a following request:
POST /_all/_bulk
{
"update":{
"_id":419,
"_index":"popc",
"_type":"offers"
}
}
{
"doc":{
"id":"419",
"author":"foo bar",
"number":"642-00419"
},
"doc_as_upsert":true
}
If I check immediately after that, the test fails, because it takes some time for Elasticsearch to complete my request. If I sleep for 1 second before the assertion it works most of the time, but not always. I could extend the sleep time to e.g. 3 seconds, but that makes the tests very slow, hence my question.
I have tried using the cat pending tasks and pending cluster tasks endpoints, but the responses are always empty.
If any of this is relevant, I'm using Elasticsearch 5.4, Laravel Scout 3.0.5 and tamayo/laravel-scout-elastic 3.0.3
I found this PR: https://github.com/elastic/elasticsearch/pull/17986
You can use refresh: wait_for and Elasticsearch will only respond once your data is available for search.
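For illustration, here is a rough sketch of sending the bulk update from the question with refresh=wait_for using Python's requests library (the host/port are assumed to be a local node; adjust as needed):

# Bulk update with ?refresh=wait_for: Elasticsearch answers only once the
# change is visible to search. Host/port are assumptions for this sketch.
import json
import requests

action = {"update": {"_id": 419, "_index": "popc", "_type": "offers"}}
source = {"doc": {"id": "419", "author": "foo bar", "number": "642-00419"},
          "doc_as_upsert": True}

# The bulk API expects newline-delimited JSON: one line per action/source part.
body = "\n".join(json.dumps(part) for part in (action, source)) + "\n"

resp = requests.post(
    "http://localhost:9200/_bulk?refresh=wait_for",
    data=body,
    headers={"Content-Type": "application/x-ndjson"},
)
print(resp.json())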
You can wait for the response; when you receive the response to the update request, it's done (and you won't see it in pending or current tasks). I think the problem you're having is probably with the refresh interval (see dynamic index settings): indexed documents are not available for search right away, and the refresh interval is the (maximum) amount of time before they become available. You can change this setting to whatever makes sense for your use case, or use it to work out how long you need to sleep before searching in the integration tests.
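If you do want to tune it, the refresh interval is a dynamic index setting; a quick sketch of changing it (the index name "popc" is taken from the question, the host is an assumption):

# Change the refresh interval of an index (dynamic setting). "1s" is the
# default; raise or lower it to suit your use case.
import requests

resp = requests.put(
    "http://localhost:9200/popc/_settings",
    json={"index": {"refresh_interval": "1s"}},
)
print(resp.json())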
If you want to see in-progress tasks, you can use the tasks API.
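For completeness, a one-off sketch of listing in-progress tasks (again assuming a local node):

# List tasks currently running on the cluster.
import requests

print(requests.get("http://localhost:9200/_tasks?detailed=true").json())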
I'm trying to get my Google PageSpeed Insights rating to be decent, but there are some external files that I would want to be cached as well. Does anyone know what would be the best way to deal with this?
https://s.swiftypecdn.com/cc.js (5 minutes)
https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js (60 minutes)
https://pagead2.googlesyndication.com/pagead/osd.js (60 minutes)
https://www.google-analytics.com/plugins/ua/linkid.js (60 minutes)
https://hey.hellobar.com/…d5837892514411fd16abbb3f71f0d400607f8f0b (2 hours)
https://www.google-analytics.com/analytics.js (2 hours)
Copy them to your server and serve them locally or from a CDN, with different browser cache settings. Update the GA scripts periodically with a cronjob or something similar.
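A minimal sketch of such a refresh job (the target directory and the script list are assumptions; schedule it with cron, e.g. hourly):

# Download fresh copies of third-party scripts into the local web root so
# they can be served with your own cache headers. Paths are assumptions.
import pathlib
import requests

SCRIPTS = {
    "analytics.js": "https://www.google-analytics.com/analytics.js",
    "adsbygoogle.js": "https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js",
}
TARGET_DIR = pathlib.Path("/var/www/html/static/vendor")

TARGET_DIR.mkdir(parents=True, exist_ok=True)
for name, url in SCRIPTS.items():
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    (TARGET_DIR / name).write_bytes(resp.content)

Point your pages at the local copies and let cron keep them fresh.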
On WordPress there are plugins that can do that for you, like Above The Fold; they call this feature JavaScript localization.
On the other hand, I use the Google PageSpeed Module on the server and its MapProxyDomain directive in combination with the alternative async tracking snippet. That seems most elegant to me.
This should be enough for you to start solving your problem.
Set Cache-Control on external resources?
You can't control the headers sent from a server that you don't control.
In other words, either host a copy yourself or there's nothing you can do about it.
Thanks
There is no solution for those files. If they come from a CDN, like the Bootstrap CDN, you can copy those files locally onto your host, but if the requests are generated at runtime then there is nothing you can do about it. :)
You can make your own cache
Place the files in the browser's localStorage (after the first time they arrive from the distant server) and next time you can serve them from the local copy. This way, you store things right where they're needed - the only thing to be careful with is updating them: you need a way to replace these files when it's time for a new version.
If you don't want to do this from scratch, here are some Javascript libraries:
https://www.sitepoint.com/9-javascript-libraries-working-with-local-storage/
Check out lscache for example; it looks super practical:
lscache.set('greeting', 'Hello World!', 2); // 2 minute expiration
lscache.get('greeting'); // returns "Hello World!"
lscache.remove('greeting'); // remove
lscache.flush(); // flush the entire cache
lscache.flushExpired(); // flush only expired items
I was tasked with creating a health check for our production site. It is a .NET MVC web application. There are a lot of dependencies and therefore points of failure e.g. a document repository, Java Web services, Site Minder policy server etc.
Management wants us to be the first to know if ever any point fails. Currently we are playing catch up if a problem arises, because it is the client that informs us. I have written a suite of simple Selenium WebDriver based integration tests that test the sign-in and a few light operations, e.g. retrieving documents via the document API. I am happy with the result but need to be able to run them in a loop and notify IT when any of them fails.
We have a TFS build server but I'm not sure if it is the right tool for the job. I don't want to continuously build the tests, just run them. Also it looks like I can't define a build schedule more frequently than on a daily basis.
I would appreciate any ideas on how best to achieve this. Thanks in advance.
What you want to do is called a suite of "Smoke Tests". Smoke Tests are basically very short and sweet, independent tests that test various pieces of the app to make sure it's production ready, just as you say.
I am unfamiliar with TFS, but I'm sure the information I can provide you will be useful and transferable.
When you say "I don't want to build the tests, just run them": any CI that you use needs to build them in order to run them. Basically, "building" equates to "compiling"; in order for your CI to actually run the tests, it needs to compile them first.
As far as running them goes, if the TFS build system has any use whatsoever, it will have a periodic build option. In Jenkins, I can specify a cron schedule. For example:
0 0 * * *
means "run at 00:00 every day (midnight)"
or,
30 5 * * 1-5
which means "run at 5:30 every weekday"
Since you are making Smoke Tests, it's important to remember to keep them short and sweet. Smoke tests should test one thing at a time. For example (a rough sketch of one follows the list below):
testLogin()
testLogout()
testAddSomething()
testRemoveSomething()
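As a rough illustration only, testLogin() might look something like this with Selenium's Python bindings (the URL, element locators and credentials are made-up placeholders, not your application's real ones):

# One focused smoke test: sign in and assert the dashboard loads.
# URL, locators and credentials are placeholders for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("healthcheck-user")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "sign-in").click()
        # Keep the assertion small and specific: did we land on the dashboard?
        assert "Dashboard" in driver.title
    finally:
        driver.quit()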
A web application health check is a very important feature. Smoke tests can be very useful in working out whether your website is running or not, and they can be automated to run at intervals to give you a notification that there is something wrong with your site, preferably before the customer notices.
However, where smoke tests fall short is that they only tell you that the website does not work; they do not tell you why. That is because you are making external calls as the client would, so you cannot see the internals of the application, i.e. is the database down, is it a network issue, is it disk space, is a remote endpoint not functioning correctly?
Now, some of these things should be identifiable from other monitoring, and you should definitely have an error log, but sometimes you want to hear it from the horse's mouth, and the best thing that can tell you how your application is behaving is the application itself. That is why a number of applications have a baked-in health check that can be called on demand.
Health Check as a Service
The health check services I have implemented in the past are all very similar and they do the following:
Expose an endpoint that can be called on demand, e.g. /api/healthcheck. Normally this is private and not accessible externally.
It returns a JSON response containing (sketched in code after this list):
the overall state
the host that returned the result (if behind a load balancer)
The application version
A set of subsystem states (these will indicate which component is not performing)
The service should be resilient, any exception thrown whilst checking should still end with a health check result being returned.
Some sort of aggregate that can combine a number of health check endpoints into one view
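Purely to illustrate the shape of such a response (the framework, the endpoint path and the subsystem checks below are assumptions for the sketch, not part of the original service), here is a minimal Python/Flask version:

# Minimal health check endpoint: overall state, host, application version
# and a set of subsystem states. Flask and the checks are illustrative.
import socket
from flask import Flask, jsonify

app = Flask(__name__)

def check_database():
    # Replace with a real connectivity check; trivially True for the sketch.
    return True

SUBSYSTEM_CHECKS = {"database": check_database}

@app.route("/api/healthcheck")
def healthcheck():
    results = {}
    for name, check in SUBSYSTEM_CHECKS.items():
        try:
            results[name] = "Good" if check() else "Bad"
        except Exception:
            # Stay resilient: an exception in a check still yields a result.
            results[name] = "Bad"
    overall = "Good" if all(v == "Good" for v in results.values()) else "Bad"
    return jsonify({
        "state": overall,
        "host": socket.gethostname(),
        "version": "1.0.0",  # application version placeholder
        "subsystems": results,
    })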
Here is one I made earlier
After doing this a number of times I have started a library to take care of the main wire-up of the health check and expose it as a service. Feel free to use it as an example or use the NuGet packages.
https://github.com/bronumski/HealthNet
https://www.nuget.org/packages/HealthNet.WebApi
https://www.nuget.org/packages/HealthNet.Owin
https://www.nuget.org/packages/HealthNet.Nancy
I have a Jenkins project that performs a sanity check on a couple of independent documents. The check result is written in JUnit XML format.
When one document test fails, the entire build fails. Jenkins can easily be configured to send an email to the committer in this situation. But I want to notify committers only when a new test fails or a previously failing test is fixed by the commit. They are not interested in failed tests for documents they have not edited. The email should contain only information about changes in tests, not the full test report. Is it possible to send this kind of notification with any currently available Jenkins plugins? What would be the simplest way to achieve this?
I had the same question today. I wanted to configure Jenkins sending notifications only when new tests fail.
What I did was to install email-ext plugin.
There you can find a special trigger called Regression ("An email will be sent any time there is a regression. A build is considered to regress whenever it has more failures than the previous build.").
Regarding fixed tests, there is the Improvement trigger ("An email will be sent any time there is an improvement. A build is considered to have improved whenever it has fewer failures than the previous build.").
I guess that this is what you are looking for.
Hope it helps
There's the email-ext plugin. I don't think it does exactly what you want (e.g. sending only emails to committers who have changed a file that is responsible for a failure). You might be able to work around that/extend the plugin though.
Also have a look at the new Emailer, which talks about new email functionality in core Hudson that is based on the aforementioned plugin.
I was wondering if there was a way to do this in Hudson (or with any of the various plugins). My IDEAL scenario:
I want to trigger a build of a job through a REST-like API, and I want that call to return me a job ID. Afterwards, I would like to poll this ID to see its status. When it is done, I would like to see the status and the build number.
Now, since I can't seem to get that working, here is my current solution that I have yet to implement:
When you do a REST call to trigger a build, it's not very RESTful. It simply returns HTML, and I would have to do some parsing to get the job ID. Alternatively, I can do a REST call for the history listing all the builds, and the latest one would be the one I just built. Once I have that, I can poll the console output for the output of the build.
Anyone know a way I can implement my "ideal" solution?
Yes, you can use the Hudson Remote API for this (as @Dan mentioned). Specifically, you need to configure your job to accept remote triggers (Job Configuration -> Build Triggers -> Trigger builds remotely) and then you can fire off a build with a simple HTTP GET to the right URL.
(You may need to jump through a couple additional hoops if your Hudson requires authentication.)
I'm able to start a Hudson job with wget:
wget --auth-no-challenge --http-user=test --http-password=test "http://localhost:8080/job/My job/build?token=test"
This returns a bunch of HTML with a build number #20 that you could parse. The build number can then be used to query whether the job is done / successful.
You can examine the Hudson Remote API right from your browser for most of the Hudson web pages that you normally access by appending /api (or /api/xml to see the actual XML output), e.g. http://your-hudson/job/My job/api/.
Update: I see from your question that you probably know much of what I wrote. It is worth exploring the built-in Hudson API documentation a bit. I just discovered this tidbit that might help.
You can get the build number of the latest build (as plain text) from the URL: http://your-hudson/job/My job/lastBuild/buildNumber
Once you have the build number, I think polling for the job status is straightforward once you understand the API.
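Putting that together, a rough trigger-then-poll sketch against the remote API (the base URL, job name and token are placeholders, authentication is omitted, and I am assuming the build JSON exposes the usual building/result fields):

# Trigger a build via the remote API, then poll the latest build's JSON
# until it finishes. Base URL, job name and token are placeholders.
import time
import requests

BASE = "http://localhost:8080"
JOB = "My job"

# Fire the build (the job must have "Trigger builds remotely" enabled).
requests.get(BASE + "/job/" + JOB + "/build", params={"token": "test"})

# Latest build number, returned as plain text.
number = requests.get(BASE + "/job/" + JOB + "/lastBuild/buildNumber").text.strip()

# Poll that build until it is no longer running, then print its result.
while True:
    info = requests.get(BASE + "/job/" + JOB + "/" + number + "/api/json").json()
    if not info["building"]:
        print("Build #" + number + " finished with result " + str(info["result"]))
        break
    time.sleep(5)

Note, as the comment below points out, lastBuild is not guaranteed to be the build you just triggered; in practice you may need to record the previous build number first and wait for a higher one to appear.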
And what if you don't want the latest build number, but the build number of the build that was triggered by hitting the build URL?
As far as I can tell, hitting that URL returns a 302 that redirects you to the job's main page, with no indication whatsoever of the build number of the build you triggered.