How to use Xodus blobs in a multi-threaded scenario? And should they be closed?

First question:
The documentation states that you should not close the InputStream that you retrieve via the getBlob method. The javadoc of this method states that you should close it. Who is right?
Second one:
I'm using Xodus in an "asynchronous environment", where blob streaming is suspended and resumed in a cooperative-multitasking style using callbacks and backpressure detection (in my specific case, Vert.x write queues mixed with drain handlers). So, while I'm never accessing a blob's InputStream from different threads at the same time, I may access it from different threads in time slots that are guaranteed to be isolated from each other in time. Is this safe?
In other words: The documentation told me that "Concurrent access" to the same blob is not possible - does this mean different Threads at the same time, or different Threads at any time?
Thank you so much for any help!

You should not close the input stream, as the documentation states. I've fixed the javadoc, thanks for noticing.
"Concurrent access" is meant as an access to a single instance of InputStream from different threads at the same time. Cooperative multitasking should work fine if the access to the stream is really successive and the happens-before order is kept.

Two "start" needed in the same lane in BPMN 1.2

I know that in BPMN there is just one "start event" per pool. In my case I have a pool that can begin either when a message is caught or because the actor decides to start it of his own accord.
How can I model that? I'm not sure I can use an event-based exclusive XOR.
Maybe a complex gateway?
As stated in many best practice how-tos, it is NOT RECOMMENDED to use multiple start events in a pool. BPMN specification 1.2 contains this note too:
9.3.2:
... It is RECOMMENDED that this feature be used sparingly and that the modeler be aware that other readers of the Diagram may have difficulty understanding the intent of the Diagram. ...
On the other hand, the common rule for the case of an omitted start event is:
If the Start Event is not used, then all Flow Objects that do not have an incoming Sequence Flow SHALL be instantiated when the Process is instantiated.
I assume this is fair enough for the case of a manual process start too. Even if the process has only a message start event, it will be started correctly, because a Message Start Event is a regular flow object with no incoming sequence flow and thus complies with the rule above.
However, if you want to be 100% sure the process will go the way you want, then the Event-Based Exclusive Gateway (available since version 1.1) is your choice. Placing it before multiple different start events will make the process choose one of them to start with.
Further explanation can be found in this blog.
Unlimited process instances
If you don't mind that during execution of your process the pool could be used multiple times (e.g. started once by a message and 3 times by an actor), then you can simply use multiple start events (the BPMN 1.2 PDF spec, 9.3.2, page 37, allows this):
Single instance
If you can allow only a single run of the pool, you might have to instantiate it manually at the start of your execution and then decide whether and when to use it. Here is an example of how this can be done:
The Event-Based Gateway (Spec 9.5.2.4) will "decide" what to do with your pool:
If the actor decides to start, or a message comes from the main pool, some actions will take place;
If the process is "sure" that the additional pool will not be required, a signal is cast to terminate its instance.

Philips Hue command limitation

First of all, I'm developing my own C# library for controlling Philips Hue, which means I'm not using the official SDK. (I'm guessing that the SDK would make sure you won't have any problems.)
I'm a little confused about the limitation on the Core concepts page of the API, which states:
We can’t send commands to the lights too fast. If you stick to around 10 commands per second to the /lights resource as maximum you should be fine. For /groups commands you should keep to a maximum of 1 per second.
I intend to respect this limitation, but does the limitation still apply when you are performing GET requests on the /lights resource, or is it only for sending actual commands with PUT requests to /lights/<id>/state that change the state of the light? Same question goes for the /groups resource.
Also, is it even possible to damage anything by sending too many requests, or will it just take longer to get all the responses?
Edit:
My overall question is: How should I understand the API limitation?
A more specific sub-question is: Should I wait 100 ms before sending another /lights command, relative to when I received a response, or relative to when I sent the previous command?
Another sub-question is: Should I consider this limitation only when using PUT requests on e.g. /lights/<id>/state, or on all request types (GET/PUT/POST/DELETE)?
I don't know if anything has changed in firmware updates, but I have discovered that the bridge might not be as simple as you would think, and that the API description isn't very clear.
I've done a little testing while running firmware 01009914.
The bridge seems to have some kind of queue for incoming commands. I sent {"bri":254} to a group 9 times, followed by 1 final command of {"bri":1}. From the first command until the light actually dims takes roughly 3-4 seconds. Each time I sent a command, the bridge replied almost instantly with a success token.
I did the same small tests sending other commands, 10 of each JSON object:
{"bri":254} 3-4 seconds
{"on":true, "bri":254} 6-7 seconds
{"on":true, "bri":254, "alert":"none", "effect":"none"} 12-13 seconds
This suggests that each attribute change takes the bridge roughly 0.3 seconds to handle.
I will claim that for each attribute we change, the bridge takes about 300 ms to finish, and that the command limitation should be understood as: as long as you stick to changing one attribute of a group per second, you should be fine.
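A minimal sketch of that reading of the limit (C++ here, although the question concerns a C# library; sendGroupCommand is a hypothetical stand-in for whatever issues the PUT to /groups/<id>/action):

#include <chrono>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stub - a real implementation would PUT the JSON body
// to /groups/<id>/action on the bridge.
void sendGroupCommand(const std::string& json) {
    std::cout << "PUT " << json << std::endl;
}

int main() {
    // One single-attribute change per group per second, per the claim above.
    std::vector<std::string> commands = {
        "{\"on\":true}", "{\"bri\":254}", "{\"bri\":1}"
    };
    for (const std::string& cmd : commands) {
        sendGroupCommand(cmd);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}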
Note: I only tried with one group consisting of three lights. I don't know whether the bridge actually has a queue of incoming commands, and if it does, I don't know the limit on the number of items in it.
Edit:
Now we have some official clarification of the Hue System Performance.
I'm fairly certain that the 10 commands per second is a guideline to prevent failure of the Bridge, and is a technical limitation of the hardware. Any more than that and you're apt to overload the bridge. I believe this applies to commands as well as requests.
Both approaches are reasonable. For laziness' sake, you could simply wait 100 ms before sending the next command, but I would only rely on this method if you don't plan on any other interactions with the bridge.
I consider this limitation on all request types.
You won't damage anything by sending commands too fast. However, if you do, the bridge might become unresponsive and/or some messages may be ignored.
When it comes to the bridge, the way I think of it is that the bridge is more or less single-threaded, so it works best if you make sure you don't send the next command before the previous one has returned.
In practice we've found that this works much better than waiting a fixed time between requests. In fact, you can send commands pretty much as fast as you want, as long as you wait for the previous one to finish.
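For example (a sketch using libcurl's blocking easy interface; the bridge address, username, and light id below are placeholders):

#include <curl/curl.h>
#include <string>
#include <vector>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl) return 1;

    // Placeholders - substitute your bridge IP, API username and light id.
    const std::string url = "http://192.168.1.2/api/your-username/lights/1/state";

    std::vector<std::string> bodies = {"{\"on\":true}", "{\"bri\":254}", "{\"bri\":1}"};
    for (const std::string& body : bodies) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());
        // curl_easy_perform blocks until the bridge answers, so the next
        // command is never sent before the previous one has returned.
        if (curl_easy_perform(curl) != CURLE_OK) break;
    }

    curl_easy_cleanup(curl);
    curl_global_cleanup();
}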
When you send a command to the bridge, the bridge then has to send it to the lamps over Zigbee. Since it's a mesh network, in some cases the message has to make a couple of hops from lamp to lamp before it reaches the target. Depending on how many lamps you have and how many hops the signal needs to take, this can take a while. Also, some messages may randomly take much longer than others.
In general the system is not designed to handle very fast changes, but if you keep the above in mind you can make many cool effects :)

blocking call on two Queues?

I have an algorithm (a task in VxWorks) that reads data from multiple queues in order to manage priorities accordingly. Now, the msgQReceive() function can be set to WAIT_FOREVER, which makes it a blocking call until something is available to receive and process. But how can I do this with multiple queues? Currently I check in a while(1) loop whether any of the queues has contents and receive them if so, but if nothing is there, my algorithm just spins and spins and eats CPU resources for nothing. What is the best way to prevent this?
You should be able to use VxWorks events coupled with a Message Queue.
See the msgQEvStart function and the Kernel Programmer's Guide, section 7.9.
This is akin to using select() for I/O operations.
You do a blocking eventReceive, which returns a bitmask indicating which queue has content, and then do a non-blocking msgQReceive to retrieve the data.
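A sketch of the pattern (C-style, written against the classic VxWorks kernel API; double-check the option names and event bits against your version's eventLib.h):

#include <vxWorks.h>
#include <msgQLib.h>
#include <eventLib.h>

#define EVT_HI  VXEV01   /* event bit raised by the high-priority queue */
#define EVT_LO  VXEV02   /* event bit raised by the low-priority queue  */

void consumerTask(MSG_Q_ID qHi, MSG_Q_ID qLo)
{
    char   buf[128];
    UINT32 received;

    /* Ask each queue to raise its event bit whenever it has content. */
    msgQEvStart(qHi, EVT_HI, EVENTS_OPTIONS_NONE);
    msgQEvStart(qLo, EVT_LO, EVENTS_OPTIONS_NONE);

    for (;;)
    {
        /* Block here instead of spinning - wakes when either queue fills. */
        eventReceive(EVT_HI | EVT_LO, EVENTS_WAIT_ANY, WAIT_FOREVER, &received);

        /* Drain with non-blocking receives, high-priority queue first. */
        if (received & EVT_HI)
            while (msgQReceive(qHi, buf, sizeof(buf), NO_WAIT) != ERROR)
                ;   /* process high-priority message */

        if (received & EVT_LO)
            while (msgQReceive(qLo, buf, sizeof(buf), NO_WAIT) != ERROR)
                ;   /* process low-priority message */
    }
}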
Or you can look at "How can a task wait on multiple vxworks Queues?", which I wrote a while ago.
As already mentioned, you could use events. Alternatively, if you can use a pipe instead of a msgQ, you could potentially use select().
As another alternative, perhaps consider having multiple tasks, each servicing a single msgQ.

blocked requests in io_service

I have implemented a client-server program using the boost::asio library.
In my implementation there are times when io_service.run() blocks indefinitely. If I pass another request to the io_service, the blocked call begins to execute normally.
Is there any way to see what are the pending requests inside the io_service queue ?
I have not used a work object to block the run call!
There is no official way to query the io_service for all pending requests. However, there are a few techniques to debug the problem:
Boost 1.47 introduced handler tracking. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, an identifier, and the operation type, to the standard error stream.
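For example (a minimal sketch; in real builds the define is usually passed as a compiler flag so that it precedes every Asio include):

#define BOOST_ASIO_ENABLE_HANDLER_TRACKING  // must come before any Asio header
#include <boost/asio.hpp>

int main() {
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io, boost::posix_time::seconds(1));
    timer.async_wait([](const boost::system::error_code&) {});
    io.run();  // handler-tracking lines for the async_wait appear on stderr
}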
Attach a debugger and dig through the layers to find and examine the operation queues. This answer covers both understanding handler tracking and using a debugger to examine an operation queue of the epoll_reactor.
Finally, if you believe it is a bug, then it may be worth updating to the latest version or checking the revision history for relevant changes. Regardless, describing the problem in more detail may allow others to help identify the source of the problem and potential solutions.
Now I have spent a few hours reading and experimenting (I need more boost::asio functionality for work as well), and it turns out: kind of.
But it is not as straightforward or readable as one might hope.
Under the hood (well, under the outermost hood) the io_service has a bunch of other services registered, which do the work that the async_ operations of their respective areas require.
These are the "Services" described in the reference.
Now, sadly, the services stay registered whether there is work to do or not. For example, if your io_service has a UDP socket, it will still have all the corresponding services, even if the socket itself is inactive.
But you can ask your io_service which services it has. Let's say you want to know whether your io_service, called m_io_service, has a UDP datagram_socket_service. Then you can call something like:
if (boost::asio::has_service<boost::asio::datagram_socket_service<boost::asio::ip::udp> >(m_io_service))
{
    // whatever
}
That alone does not help a lot, because it will be true no matter whether the socket is active or not. But once you know that you have the service, you can get a reference to it by using use_service instead of has_service, with the same elegant amount of <>.
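Something like this (a sketch against the Boost 1.54-era API; has_service and use_service are real, the rest is just scaffolding):

#include <boost/asio.hpp>
#include <iostream>

typedef boost::asio::datagram_socket_service<boost::asio::ip::udp> udp_service;

int main() {
    boost::asio::io_service m_io_service;
    boost::asio::ip::udp::socket socket(m_io_service);  // registers the service

    if (boost::asio::has_service<udp_service>(m_io_service))
    {
        // use_service returns a reference (and would add the service if absent).
        udp_service& svc = boost::asio::use_service<udp_service>(m_io_service);
        (void)svc;  // ...inspect the service here
        std::cout << "datagram_socket_service is registered" << std::endl;
    }
}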
And now you can inspect the service to see what it is up to. Sadly, it will not tell you what the names of the outstanding handlers are (probably partly because it does not know them), but if it is a socket service, you can get its implementation_type and with that check whether the socket currently is_open, or find its local_endpoint as well as its remote_endpoint.
In the case of a deadline_timer_service you can, among other things, find out when it expires_at.
See the reference for more information on what each service is and is not willing to tell you.
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference.html
This information should then hopefully allow you to determine which async_ operation did not return.
And if not, at the very least you can cancel any unexpectedly active services.

How to run two tasks independently

I am building an application on Mac OS X (10.6). In this application, I have one screen where the user provides input, which is saved as a plist in a local folder. This plist file then needs to be transferred to a server using an HTTP POST. There should be a check for server connectivity; if the connection fails, the files remain in the local folder. After a certain time interval, the server connection is checked again and, if available, all the files stored in the local folder are sent one by one.
Basically, the GUI application will run continuously to get input from the user, and in another thread there should be a check for server connectivity and the sending of the files.
So my question is: what might be a good approach to solve this problem? It would be great if anyone could share some sample code.
Thanks,
Barun
There are several approaches to threading in Objective-C! The easiest strategy is NSOperationQueue. Subclass NSOperation to handle your HTTP request, optionally set a completion block if you need to be notified when it's done, add an instance of it to an NSOperationQueue object, and you're good to go. Set up an NSTimer to reschedule the upload if it fails the first time. You can use NSURLConnection to handle the web requests. Note that NSURLConnection can make connections asynchronously or synchronously (blocking). Since your NSOperation subclass already runs in a separate thread, you probably want to use the blocking method (if you don't, you have to create a concurrent NSOperation subclass, which is a lot more work).
You can also use Grand Central Dispatch's API, detach a new thread running a method you specify, or use plain old C threads (I wouldn't recommend the last two, but it's good to mention them). As a bonus, NSOperationQueue and Grand Central Dispatch both know "what's right" when you have multiple operations running at once, and will scale the number of threads to fit the number of cores in the user's computer to obtain the best performance.
Check the docs for NSOperationQueue, NSOperation, and NSURLConnection. The guides and example projects will have all the source code you need to get you started in the right direction.