I need a JIRA plugin to manage queues of issues and notify issue watchers when the position of an issue in the queue changes. We are on the old JIRA 4.2, so I need a solution that works with that version.
Queue management is pretty much the same as backlog management: I need to order the issues manually in a UI (preferably with drag and drop). I can have many queues, though.
Another important requirement is that issue watchers get notified when the issue changes its position in the queue. The latter can happen either because someone adds another issue ahead of it, or because an issue ahead of it is taken out.
I know there is an Agile Plugin for JIRA, but I don't know whether it can do what I want. It's hard to tell by reading the white papers, and I do not want to lose time on trying. Maybe there are people who have already implemented a similar setup with JIRA.
Thanks a lot for your inputs.
There is no solution for JIRA 4.2. There was a solution for JIRA 5.1, developed by the mail.ru team. However, that queue management plugin did not have notification functionality.
So we ended up upgrading to JIRA 5.1 and extending the plugin to notify stakeholders of queue changes: whenever an issue's position in the queue changes, its watchers are notified. This automation significantly reduced the product owner's workload in coordinating priorities among the parties.
Here is the source code: https://github.com/ITAttractor/jira-ticket-queue-plugin
We’ve been using the Tokbox platform for several months now with a Javascript web-client as well as an Android phone client, where sessions and connections are managed by a Python server. While integration and bring-up went well on both ends (client and server), we continue to encounter problems with the in-session audio and video experience.
Sessions are always routed and always between two participants only, with much use of a collaborative editor.
The in-session experience is like a coin toss: we never know how it’s going to go, and that’s becoming a business threat.
Web-Client: A/V Resources
The most common problem is the acquisition of audio and/or video: at the beginning of a session, one or the other participant may have problems hearing or seeing the other. Allocating a new connection to establish new streams does not fix it, nor does restarting the browser.
Question: What’s the recommended way to detect possible resource locks (e.g. does another application hog the camera/microphone)?
Web-Client: Network
Bandwidth and packet loss are a challenge, for example this inspector graph:
Audio and video of both participants are all over the place, and while we cannot control the network connections, the web client should at least be able to reliably report useful information.
Question: Other than continuous connection monitoring with getStats() and maybe the experimental navigator.connection property, how can the web-client monitor network connectivity?
Pre-Call Test
We recommend that customers run a pre-call test, and we have implemented one on our site as well. However, the results of that test often do not reflect the in-session connectivity. Worse, a pre-call test may report bandwidth too low for video while Skype works just fine.
Question: How can that be?
I'm a member of the TokBox development team. I remember you reported an issue with the Python SDK, thanks for that!
Web-Client: A/V Resources
Most acquisition issues are detected by the JS SDK and if they aren't then we'd really like to hear about it! Please report reproduction steps or affected session IDs to TokBox support (referencing this StackOverflow question): https://support.tokbox.com/hc/en-us/requests/new
Most acquisition errors appear as OT_HARDWARE_UNAVAILABLE or OT_MEDIA_ERR_ABORTED errors. Are you detecting and surfacing these errors to your users? There is also the special OT_CHROME_MICROPHONE_ACQUISITION_ERROR error which is due to a known issue with Chrome that has been mostly fixed since Chrome 63 (see https://bugs.chromium.org/p/webrtc/issues/detail?id=4799).
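For reference, here is a minimal sketch of detecting and surfacing those errors, assuming the standard OT.initPublisher completion handler from the OpenTok.js SDK; showBanner and the element id are placeholders I made up for the example, and the exact set of error names should be checked against the SDK docs.

```typescript
declare const OT: any; // opentok.js, loaded via script tag

// Placeholder UI helper -- replace with however your app surfaces errors to users.
const showBanner = (msg: string): void => { alert(msg); };

const publisher = OT.initPublisher('publisher-div', { insertMode: 'append' }, (err: any) => {
  if (!err) { return; }
  switch (err.name) {
    case 'OT_HARDWARE_UNAVAILABLE':
      showBanner('Your camera or microphone appears to be in use by another application.');
      break;
    case 'OT_MEDIA_ERR_ABORTED':
      showBanner('Media capture was aborted - please reload the page and try again.');
      break;
    case 'OT_CHROME_MICROPHONE_ACQUISITION_ERROR':
      showBanner('Chrome could not acquire the microphone - restarting Chrome usually helps.');
      break;
    default:
      showBanner('Could not start publishing: ' + err.name);
  }
  // Also log err.name plus the session id to your backend so it can be reported to support.
});

// The publisher also emits 'accessDenied' if the user blocks camera/microphone access.
publisher.on('accessDenied', () => showBanner('Camera/microphone access was denied.'));
```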
Web-Client: Network
Unfortunately this is one of the more difficult issues to troubleshoot. Yes, Subscriber#getStats() is the best tool we have at our disposal; it is a wrapper around the native RTCPeerConnection#getStats() function. However, we don't have much control over the values returned by the native function, and if you think our SDK is returning incorrect values compared with RTCPeerConnection#getStats() itself, then please let us know!
It would be worthwhile confirming whether the issue is reproducible in all browsers or only a particular one. If you have detailed data regarding the inaccuracy of the native RTCPeerConnection#getStats() function then we could work together to report it to the browser vendor(s).
Fortunately we have just released the new Publisher#getStats() function which lets you get the publisher side of the stats. This should help you narrow down the cause of a connectivity issue to either a publisher or subscriber side. Please let us know if this helps with tracking down these issues.
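As a rough illustration of using the two together (the property names below follow the getStats documentation and should be treated as assumptions to verify against your SDK version):

```typescript
declare const subscriber: any; // an OT.Subscriber returned by session.subscribe()
declare const publisher: any;  // an OT.Publisher returned by OT.initPublisher()

// Poll both sides periodically to see whether loss sits on the publish or subscribe leg.
// Note: the counters are cumulative, so diff against the previous sample if you want a
// windowed loss ratio rather than a lifetime one.
setInterval(() => {
  subscriber.getStats((err: any, stats: any) => {
    if (err || !stats) { return; }
    const v = stats.video;
    const loss = v.packetsLost / ((v.packetsLost + v.packetsReceived) || 1);
    console.log('subscriber video loss:', (loss * 100).toFixed(1) + '%');
  });

  publisher.getStats((err: any, statsArray: any[]) => {
    if (err || !statsArray) { return; }
    // In a routed session this is typically a single entry (publisher -> media server).
    statsArray.forEach((entry) => {
      const v = entry.stats.video;
      const loss = v.packetsLost / ((v.packetsLost + v.packetsSent) || 1);
      console.log('publisher video loss:', (loss * 100).toFixed(1) + '%');
    });
  });
}, 5000);
```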
Pre-Call Test
Again, these tests are based on Subscriber#getStats() which in turn are based on RTCPeerConnection#getStats(), the accuracy of which is out of our hands, but we'd love any reproduction steps to either fix a bug in our client SDK or report a bug to the browser vendors.
Just to confirm though, when you say you've implemented a pre-call test in your site, did you use the official JavaScript network test module? https://github.com/opentok/opentok-network-test-js This is actually what's used by the TokBox pre-call test.
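For anyone else following along, a rough sketch of wiring that module up (the credentials are placeholders and must point at a dedicated test session; check the module's README for the exact result shapes):

```typescript
import NetworkTest from 'opentok-network-test-js';

declare const OT: any; // opentok.js, loaded separately

// Sketch only: apiKey/sessionId/token are placeholders for a separate test session.
const networkTest = new NetworkTest(OT, {
  apiKey: 'YOUR_API_KEY',
  sessionId: 'TEST_SESSION_ID',
  token: 'TEST_SESSION_TOKEN',
});

networkTest.testConnectivity()
  .then((connectivityResults: any) => {
    console.log('connectivity:', connectivityResults);
    // testQuality runs a short publish/subscribe loop and reports interim stats periodically.
    return networkTest.testQuality((stats: any) => console.log('interim stats:', stats));
  })
  .then((qualityResults: any) => console.log('quality:', qualityResults))
  .catch((error: any) => console.error('network test failed:', error));
```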
@Aiham, thanks for responding. I've been looking at the new Publisher#getStats() you linked to (thank you!), so we too can give our users some way of visibly seeing the network conditions that might be affecting the quality of their call (and who's causing it). However, it seems as though bytes/packets sent go up sharply as the number of subscribers increases, even though we're in a routed session.
Am I wrong to expect the Publisher#getStats() statistics to stay fairly stable regardless of how many subscribers are receiving that stream in a routed session? I expected the nature of a routed call to mean the stream is sent once to the OpenTok Media Servers, and the publisher statistics would end there.
We are using Akka.Net and in some cases we need actors to communicate reliably while preserving order over a message queue (e.g. Oracle Advanced Queues or WebSphere MQ, though any message queuing system such as RabbitMQ would work).
We have various requirements why we are using the message queue, so the question isn't if we should be using this with Akka, the question is how.
How would we go about connecting the queue up to Akka so that it is as seamless as possible?
Is a custom Mailbox the route to go down? Do we need to write a custom IMessageQueue implementation? Or maybe we need a custom router? Are there any specific tests we can run to be sure our Mailbox/IMessageQueue works well with Akka.Net?
EDIT:
Should we maybe be looking to implement a custom Transport?
Can any pointers be offered on where to start?
In general, implementing a custom mailbox on top of some reliable queue is not a feasible solution - it has actually already been tried on the Akka JVM side, and it failed to live up to the hopes placed in it.
One of the basic reasons is usually a misunderstanding of the underlying idea - when people talk about reliable delivery (which MQ systems offer), what they really mean is reliable processing. What if your messages have been sent with a 100% delivery ratio, but the receiving actor/node ultimately crashed while processing them? From the mailbox's point of view everything went smoothly...
For this reason, the usual way to go is a dedicated actor - or a hierarchy of them - working as a gateway to the external messaging system. This way you can not only send messages to it, but also mark them as received after an explicit acknowledgement that processing completed successfully. One example is akka-rabbitmq (written in Scala).
I have been developing an app which processes many WSAPI and LBAPI requests which take an extended period of time to complete. In the event that certain parameters are changed, these requests become irrelevant and canceling them would be the best thing to do, in an effort to clear up the network queue for the new set of requests that need to take place.
I have searched the docs of both APIs and haven't been able to find any way included in the SDK to cancel these requests. I'm wondering if there might be a way to do this manually, or if there is a function I might be missing.
Thanks!
You haven't found it because it doesn't exist in Ext. :-) We have run into similar things in the past but haven't had a critical need to build this into the framework yet.
The best info I've found on this problem is this post which describes how to augment stores to support canceling outstanding loads:
http://www.mattgoldspink.co.uk/2013/02/03/ext-js-cancel-a-load-on-an-ext-data-store/
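The gist of that approach, roughly sketched (the names inFlight and loadData are made up for this example; Ext.Ajax.request, Ext.Ajax.isLoading and Ext.Ajax.abort are standard Ext JS 4 calls):

```typescript
declare const Ext: any; // Ext JS 4 / Rally App SDK global

// Track in-flight requests by purpose so a newer request can cancel the older one.
const inFlight: Record<string, any> = {};

function loadData(key: string, url: string, onSuccess: (data: any) => void): void {
  // Abort the previous request for this key if it is still outstanding.
  if (inFlight[key] && Ext.Ajax.isLoading(inFlight[key])) {
    Ext.Ajax.abort(inFlight[key]);
  }
  inFlight[key] = Ext.Ajax.request({
    url: url,
    success: (response: any) => {
      delete inFlight[key];
      onSuccess(Ext.decode(response.responseText));
    },
    failure: () => { delete inFlight[key]; } // aborted requests end up here too
  });
}
```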
Using NServiceBus 3.3 with Raven for subscription persistence.
I'm creating a prototype application that will consume messages from a publisher in our test environment. The application will only be used for a few weeks, at which point it may be (essentially) thrown away in its current form.
I don't want the publisher to continue to send messages to the outbound queue for this subscriber. In effect, I want its existence to be completely removed from the system.
How would I go about removing all knowledge of this subscriber from the system?
To do this you need to manage subscriptions manually.
Have a look at the PubSub sample, specifically this file. You also need to tell the bus not to autosubscribe; the sample should provide you with all the code required to do this.
The link to the PubSub article is broken. Here is the new link: https://github.com/Particular/NServiceBus.Msmq.Samples/tree/master/PubSub
I am a newbie to real-time application development and am trying to wrap my head around the myriad options out there. I have read as many blog posts, notes and essays as people have been kind enough to share. Yet, a simple problem seems unanswered in my tiny brain. I thought a number of other people might have the same issues, so I might as well sign up and post here on SO. Here goes:
I am building a tiny real-time app which is asynchronous chat + another fun feature. I boiled my choices down to the following two options:
LAMP + RabbitMQ
Node.JS + Redis + Pub-Sub
I believe that I get the basics to start learning and building this out. However, my (seriously n00b) questions are:
How do I communicate with the end user, i.e. client to/from server, in both of those? Would that be simple JavaScript long/infinite polling?
Of the two, which might be more efficient to build out and manage from a single Slice (assuming 100 - 1,000 users)?
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
I hope this isn't a crazy question and won't get flamed right away. Would love some constructive feedback, love this community!
Thank you.
Architecturally, both of your choices are the same as storing data in an Oracle database server for another application to retrieve.
Both the RabbitMQ and the Redis solution require your apps to connect to an intermediary server that handles the data communications. Redis is most like Oracle, because it can be used simply as a persistent database with a network API. But RabbitMQ is a little different because the MQ Broker is not really responsible for persisting data. If you configure it right and use the right options when publishing a message, then RabbitMQ will actually persist the data for you but you can't get the data out except as part of the normal message queueing process. In other words, RabbitMQ is for communicating messages and only offers persistence as a way of recovering from network problems or system crashes.
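For example (using Node's amqplib purely as an illustration; any AMQP client exposes the same two options), persistence only applies when the queue is declared durable and the message is published as persistent:

```typescript
import amqp from 'amqplib';

async function publishPersistent(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();

  // 1. The queue definition must be durable so it survives a broker restart.
  await channel.assertQueue('chat-events', { durable: true });

  // 2. Each message must be marked persistent so the broker writes it to disk.
  channel.sendToQueue('chat-events', Buffer.from('hello'), { persistent: true });

  await channel.close();
  await conn.close();
}
```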
I would suggest using RabbitMQ and whatever programming languages you are already familiar with. Since the M in LAMP is usually interpreted as MySQL, this means that you would either not use MySQL at all, or only use it for long term storage of data, not for the realtime communications.
The RabbitMQ site has a huge amount of documentation about building apps with AMQP. I suggest that after you install RabbitMQ, you read through the docs for rabbitmqctl and then create a vhost to experiment in. That way it is easy to clean up your experiments without resetting everything. I also suggest using only topic exchanges because you can emulate the behavior of direct and fanout exchanges by using wildcards in the routing_key.
Remember, you only publish messages to exchanges, and you only receive messages from queues. The exchange is responsible for pattern matching the message's routing_key to the queue's binding_key to determine which queues should receive a copy of the message. It is worthwhile learning the whole AMQP model even if you only plan to send messages to one queue with the same name as the routing_key.
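A small sketch of that model with a topic exchange and a wildcard binding (again using Node's amqplib as an illustration; the exchange name and routing keys are placeholders):

```typescript
import amqp from 'amqplib';

async function topicDemo(): Promise<void> {
  const conn = await amqp.connect('amqp://localhost');
  const channel = await conn.createChannel();

  // Publishers only ever talk to the exchange, never to a queue directly.
  await channel.assertExchange('chat', 'topic', { durable: false });

  // An exclusive, server-named queue for this consumer, bound with a wildcard:
  // it receives chat.room1, chat.room2, ... but not, say, presence.room1.
  const { queue } = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(queue, 'chat', 'chat.*');

  await channel.consume(queue, (msg) => {
    if (msg) {
      console.log(msg.fields.routingKey, msg.content.toString());
      channel.ack(msg);
    }
  });

  channel.publish('chat', 'chat.room1', Buffer.from('hello room1'));
}
```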
If you are building your client in the browser, and you want to build a prototype, then you should consider just using XHR today, and then move to something like Kamaloka-js which is a pure Javascript implementation of AMQP (the AMQ Protocol) which is the standard protocol used to communicate to a RabbitMQ message broker. In other words, build it with what you know today, and then speed it up later which something (AMQP) that has a long term future in your toolbox.
Should I just build everything out with jQuery in the 'old school' paradigm and then identify which stack might make more sense? Just so that I can get the product fleshed out as a prototype and then 'optimize' it. Or is writing in one over the other more than mere optimization? ( I feel so, but I am not 100% on this personally )
This is usually called RAD (rapid application design/development) and it is what I would recommend right now. It lets you build a proof of concept that you can work from later to get where you want to go.
As for how to talk to the clients from the server, and vice versa, have you read up at all on WebSockets?
Given the choice between LAMP or event based programming, for what you're suggesting, I would tell you to go with the event based programming, so nodejs. But that's just one man's opinion.
Well,
LAMP - Apache creates a new process for every request. RabbitMQ can be useful, with many features.
Node.js - uses a single process to handle all requests asynchronously with the help of an event loop, so there is no extra process-creation overhead as with Apache.
For asynchronous chat application,
socket.io + Node.js + Redis pub-sub is the best stack.
I have already implemented real-time notifications using this stack.
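In case it is useful, here is a minimal sketch of that stack (channel names and the port are placeholders; the Redis client API shown is the modern promise-based one, so adjust for older client versions):

```typescript
import { createServer } from 'http';
import { Server } from 'socket.io';
import { createClient } from 'redis';

async function main(): Promise<void> {
  const httpServer = createServer();
  const io = new Server(httpServer);

  const pub = createClient();   // publishing connection
  const sub = pub.duplicate();  // Redis pub/sub needs a dedicated subscriber connection
  await Promise.all([pub.connect(), sub.connect()]);

  // Anything published to the "chat" channel in Redis is fanned out to every socket,
  // which also keeps multiple Node processes in sync.
  await sub.subscribe('chat', (message) => {
    io.emit('chat', message);
  });

  // Messages from a browser go back through Redis rather than straight to io.emit.
  io.on('connection', (socket) => {
    socket.on('chat', (message: string) => {
      void pub.publish('chat', message);
    });
  });

  httpServer.listen(3000);
}

main();
```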