Can more than one application claim an interface in libusb?

I am working on a hardware/software application where a device connected via USB does some off-board processing of data. The application is meant to be open in multiple instances at once, and which device needs which data is identified by an in-stream parameter. My question is: can more than one application claim an interface? My first implementation used WinUSB, but I quickly realized that it limits me to only one instance. The libusb documentation claims that this limitation is removed in their driver.
My concern is that, because I intend to have far more than 8 instances running, having only the 8 allotted interfaces will not be sufficient. If I cannot, in fact, claim an interface more than once, is there a way to have the applications call a shared library that claims the interface and manages and routes traffic between the applications?

As far as I know, you can only have one handle open to a device in either implementation.
I think you are on the right track in terms of how to handle this problem. The way I have done something like this in the past is to create a service that runs in the background. This service is launched by the first instance of the application and keeps a reference count of its clients. Each subsequent instance of the application increments the reference count, and whenever a client application closes, the count is decremented. When the last application closes, the service can close too.
The service would have the job of opening the device and reading all data into a buffer. From there you can either put smarts into the service to process the data and load it into separate shared buffers, each accessible by one of your client application instances, or you could simply make one huge buffer available to everyone (but this is a riskier solution).
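A rough sketch of such a broker service in C# follows. The pipe name, the framing, and the actual device I/O (libusb/WinUSB) are placeholders you would fill in; the point is the single owner of the device, the reference count, and the routing.

using System;
using System.IO.Pipes;
using System.Threading;

// Sketch only: pipe name, framing, and the libusb/WinUSB device reads are placeholders.
class DeviceBroker
{
    static int clientCount; // reference count of attached application instances

    static void Main()
    {
        // Open the USB device here, exactly once, before accepting clients (omitted).
        while (true)
        {
            var pipe = new NamedPipeServerStream(
                "MyDeviceBroker", PipeDirection.InOut,
                NamedPipeServerStream.MaxAllowedServerInstances);
            pipe.WaitForConnection();
            Interlocked.Increment(ref clientCount);
            ThreadPool.QueueUserWorkItem(_ => Serve(pipe));
        }
    }

    static void Serve(NamedPipeServerStream pipe)
    {
        try
        {
            // Read the client's in-stream parameter, then copy only the frames
            // that match it from the device buffer to this client's pipe (omitted).
        }
        finally
        {
            pipe.Dispose();
            // Last client gone: close the device and let the broker exit.
            if (Interlocked.Decrement(ref clientCount) == 0)
                Environment.Exit(0);
        }
    }
}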

Related

Transferring a Back-to-back call, how to provide status to client

In our application we have a back-to-back connection between an operator (client) and a caller, via a UCMA application we built. Now we want to transfer the caller to another operator or number.
The transfer is attended, so we want to keep the call in the client at least until the transfer is completed.
The client application tells the UCMA application to do the transfer, so the server performs the transfer on the leg from UCMA to the caller. In this scenario the leg from UCMA to the client application remains intact, but we want to receive information about the transfer so that we can show its status in the client application. If the transfer fails, that should also be clear to the operator (the call should be on hold for the duration of the transfer, and remain on hold even after a failed transfer).
What is the correct way to do this in UCMA?
It's hard to give you advice as there are multiple ways to do what you want depending on what you need to achieve.
I think the main problem is that, because you are doing the transfer in the middle, you can't tell the Lync Client to go on hold. Because of this, you can only put the call on hold from the point of view of the UCMA application. This means you will have to provide your own UI to unhold the call if the transfer fails, perhaps in your own client application's GUI.
What you could do is write an application that controls the Lync Client via the Lync Client SDK. With that in place, you could remote-control the Lync Client to do the transfer, and that way you get the standard Lync Client failed-transfer UI. If you do this, though, what is the point of the UCMA application?
If you have to do it from the UCMA point of view, you could:
Provide the UI in your own client application (not nice, I would think), including controlling the hold status on a failure.
Use a Lync Client SDK controlled Lync Client to put the call on hold; that way the standard Lync Client handles unholding on failure, and you only need to worry about displaying the failed transfer. Maybe show something in your client application, or send an in-call IM from the UCMA application?
See if the Lync Client supports the BroadWorks extensions (specifically the Remote Control Hold event package). If it does, you can remotely put the call on hold, though most likely it doesn't :(

Reuse WCF server instance between operations, without concurrency

How can I make the WCF server instance (the instance of the class in the .svc.cs / .svc.vb file) stay alive between requests?
It's a stateless, read-only type of service: I'm fine with different clients reusing the same instance. However, it's not thread-safe: I don't want two threads to execute a method on this instance concurrently.
Ideally, what I'm looking for is that WCF manages a "worker pool" of these instances. Say, 10. New request comes in: fetch an instance, handle the request. Request over, go back to the pool. Already 10 concurrent requests running? Pause the 11th until a new worker is free.
What I /don't/ want is per-client sessions. Startup for these instances is expensive, I don't want to do that every time a new client connects.
Another thing I don't want: dealing with this client-side. This is not the responsibility of the client, which should know nothing about the implementation of the server. And I can't always control that.
I'm getting a bit lost in unfamiliar terminology from the MSDN docs. I have a lot working, but this pool system I just can't seem to get right.
Do I have to create a static pool and manage it myself?
Thanks
PS: A source of confusion for me is that almost everything I find on this points toward the configuration of the bindings, like basicHttp or wsHttp. But that doesn't sound right: this should be at a higher level, unrelated to the binding; it's about how the worker instances are managed. Or not?
If you have a WCF service that centralizes business logic, provides or controls access to a "single" backend resource (e.g. a data file or network socket), or otherwise contains some type of shared resource, then you most likely need to implement a singleton.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
In general, use a singleton object if it maps well to a natural singleton in the application domain. A singleton implies the singleton has some valuable state that you want to share across multiple clients. The problem is that when multiple clients connect to the singleton, they may all do so concurrently on multiple worker threads. The singleton must synchronize access to its state to avoid state corruption. This in turn means that only one client at a time can access the singleton. This may degrade responsiveness and availability to the point that the singleton is unusable as the system grows.
The singleton service is the ultimate shareable service, which has both pros (as indicated above) and cons (as implied in your question, you have to manage thread safety). When a service is configured as a singleton, all clients get connected to the same single well-known instance independently of each other, regardless of which endpoint of the service they connect to. The singleton service lives forever, and is only disposed of once the host shuts down. The singleton is created exactly once when the host is created.
https://msdn.microsoft.com/en-us/magazine/cc163590.aspx
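If a single serialized instance is acceptable for a read-only service like yours, you can also let WCF do the synchronization by pairing InstanceContextMode.Single with ConcurrencyMode.Single: calls queue up and run one at a time on the one instance. A minimal sketch (the contract and the ExpensiveIndex type are invented for illustration):

using System.ServiceModel;

// Illustrative contract; substitute your actual service interface.
[ServiceContract]
public interface IReadOnlyLookup
{
    [OperationContract]
    string Lookup(string key);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class ReadOnlyLookupService : IReadOnlyLookup
{
    // Built once, when the singleton is created at host startup.
    private readonly ExpensiveIndex index = new ExpensiveIndex();

    public string Lookup(string key)
    {
        // No locking needed: WCF serializes the calls to this instance.
        return index.Find(key);
    }
}

// Stand-in for whatever is expensive to initialize.
public class ExpensiveIndex
{
    public string Find(string key) { return "value for " + key; }
}

The trade-off is the one described above: only one call executes at a time, so this only scales as far as that single instance can keep up.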

Making a thread-unsafe DLL call in BizTalk Orchestration (or only running one Orchestration at a time)

I have an issue with a 3rd party DLL, which is NOT thread-safe, but which I need to call within an orchestration.
I'm making the DLL call within an Expression shape. The same DLL is called in a number of different orchestrations.
The problem I have is that for a series of incoming messages, BizTalk will run multiple orchestrations (or multiple instances of an orchestration) in parallel - which leads to exceptions within the DLL.
Is there any way around this, given that refactoring the DLL isn't an option? Or is there a way to throttle BizTalk so that it only runs one orchestration at any one time? (I've seen some hacks restricting the worker pool to the number of processors, but this doesn't seem to help. We can't downgrade to a single-core machine!)
I would rather find a way of keeping the DLL happy (though I can't think how) than throttle BizTalk - but if there is a way to throttle, that would be an acceptable short-term solution whilst we discuss it with the 3rd party (who are a large organisation and really should know better!).
Even on a single core machine, BizTalk will run concurrent orchestrations.
You could throttle the orchestration by implementing the singleton pattern in the orchestration.
You do this by creating a loop in the orchestration and having two receive shapes, one before the start of the loop and one inside the loop.
Both these receive shapes are bound to the same inbound logical port.
You create a correlation set which specifies something like BTS.MessageType and set the first receive shape to initiate the correlation and the second receive to follow the correlation.
As long as the loop does not end you can guarantee that any message of a certain type will always be processed by the same instance of the orchestration.
However, using singletons is a design decision which comes with drawbacks. For example, throughput suffers, and you have to ensure that your singleton cannot suspend, else it will create a block for all subsequent messages.
Hope this helps.

Need help with WCF design

I have been tasked with creating a set of web services. We are a Microsoft shop, so I will be using WCF for this project. There is an interesting design consideration that I haven't been able to figure out a solution for yet. I'll try to explain it with an example:
My WCF service exposes a method named Foo().
10 different users call Foo() at roughly the same time.
I have 5 special resources called R1, R2, R3, R4, and R5. We don't really need to know what the resource is, other than the fact that a particular resource can only be in use by one caller at a time.
Foo() is responsible for performing an action using one of these special resources. So, in a round-robin fashion, Foo() needs to find a resource that is not in use. If no resources are available, it must wait for one to be freed up.
At first, this seems like an easy task. I could maybe create a singleton that keeps track of which resources are currently in use. The big problem is the fact that I need this solution to be viable in a web farm scenario.
I'm sure there is a good solution to this problem, but I've just never run across this scenario before. I need some sort of resource tracker / provider that can be shared between multiple WCF hosts.
Any ideas from the architects out there would be greatly appreciated!
Create another central service which only the web services know about. This service takes on the role of the resource manager.
All of the web services in the farm will communicate with this central service to query for resource availability and to "check out" and "check in" resources.
You could track the resource usage in a database table, which all the servers on the farm could access.
Each resource would have a record in the database, with fields that indicate whether (or since when) it is in use. You could get fancy and implement a timeout feature as well.
For troubleshooting purposes, you could also record who is using the resource.
If you record when each resource is being used (in another table), you would be able to verify that your round-robin is functioning as you expect, decide whether you should add more copies of the resource, etc.
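As a rough sketch of the check-out/check-in idea (the Resources table and its columns are invented; adapt them to your schema), a single atomic UPDATE can both pick a free resource and mark it as taken, which is what stops two servers in the farm from grabbing the same one:

using System;
using System.Data.SqlClient;

// Assumes a hypothetical Resources(ResourceId int, InUseBy nvarchar, CheckedOutUtc datetime2) table.
static class ResourcePool
{
    public static int? CheckOut(string connectionString, string clientId)
    {
        // Finds one free resource and claims it in a single statement.
        const string sql = @"
            UPDATE TOP (1) Resources
            SET InUseBy = @client, CheckedOutUtc = SYSUTCDATETIME()
            OUTPUT inserted.ResourceId
            WHERE InUseBy IS NULL;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@client", clientId);
            conn.Open();
            object id = cmd.ExecuteScalar();      // null when every resource is busy
            return id == null ? (int?)null : (int)id;
        }
    }

    public static void CheckIn(string connectionString, int resourceId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Resources SET InUseBy = NULL, CheckedOutUtc = NULL WHERE ResourceId = @id",
            conn))
        {
            cmd.Parameters.AddWithValue("@id", resourceId);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

A caller that gets null back waits and retries; the timeout feature mentioned above would be a periodic sweep that resets rows whose CheckedOutUtc is older than some limit.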
There are any number of approaches to solving this problem, each with its own costs and benefits. For example:
Using MSMQ to queue all requests: worker processes pull messages from the queue, pass them to Rn, and post responses back to a response queue for Foo to dispatch to the appropriate caller.
Using an in-memory or locally persisted message dispatcher to send the next request to an on-box service (e.g. via Named Pipes for perf) based upon some distribution algorithm of your choice.
etc.
Alas, you don't indicate whether your requests have to survive power outages, whether they need to be transactionally aware, whether the callers are also WCF, how quickly these calls will be received, how long it takes for Rn to return results, etc.
Whichever solution you choose, I strongly encourage you to split your call to Foo() into a RequestFoo() and GetFooResponse() pair or implement a WCF callback hosted by the caller to receive results asynchronously.
If you do NOT do this, you're likely to find that your entire system will grind to a halt the second traffic exceeds your resources' ability to satisfy the workload.
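The callback variant of that split might look roughly like this (all names are illustrative, and for brevity the sketch completes the request inline, where in reality a worker would call back once a resource has produced the result):

using System;
using System.ServiceModel;

// Illustrative contracts only; not a prescription for your actual API.
public interface IFooCallback
{
    [OperationContract(IsOneWay = true)]
    void OnFooCompleted(Guid requestId, string result);
}

[ServiceContract(CallbackContract = typeof(IFooCallback))]
public interface IFooService
{
    [OperationContract(IsOneWay = true)]
    void RequestFoo(Guid requestId, string payload);
}

public class FooService : IFooService
{
    public void RequestFoo(Guid requestId, string payload)
    {
        // Capture the caller's callback channel; hand it, with the request,
        // to the queue / resource manager. Whichever worker eventually runs
        // the request pushes the result back through it.
        IFooCallback callback = OperationContext.Current.GetCallbackChannel<IFooCallback>();
        callback.OnFooCompleted(requestId, "result for " + payload);
    }
}

Note that a callback contract needs a duplex-capable binding such as netTcpBinding or wsDualHttpBinding.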

In COM, how can I get notified when a client dies?

I've got a COM solution with a client (an EXE) and a server (a service EXE). When the client is killed, or dies abnormally, I'd like to know about it in the server. Unfortunately, it appears that COM doesn't call Release when this happens.
How can I tell that a COM client has died, so that I can clean up for it?
Update: Answers so far require that the server have a pointer to the client, and call a method on it periodically (ping). I'd rather not do this, because:
I don't currently have a callback pointer (server->client), and I'd prefer to avoid introducing one, unless I really have to.
This still won't call Release on the object that represents the client endpoint, which means that I'll have to maintain the other client-specific resources separately, and hold a weak pointer from the "endpoint" to the other resources. I can't really call Release myself, in case COM decides to do it later.
Further to the second: Is there any way, in COM, to kill the client stub, in lieu of calling Release? Can I get hold of the stub manager for my object and interface and tell it to do cleanup?
Killing a process is a rather extreme event, so neither CORBA, nor COM/DCOM, nor Java's RMI has an explicit way to handle it. Instead you can implement a very simple 'ping' callback. It can be, for example, timer-based or run on an occasional basis.
You could also think about a third EXE that acts as a monitor for your client and notifies the server (the service) when the client dies.
The simplest solution is for the server to run a PING test on a timer.
In a multi-threaded apartment setup this can run on a background thread.
The test should call from the server to the client with a call that is guaranteed to reach the client if it is alive, e.g. a QueryInterface call on a client object.
Unexpected failures can be treated as an indication that the client is dead.
The server will need to manage the list of clients it pings intelligently and should ensure that the ping logic itself doesn't keep the client alive.
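Sketched in C# (the shape is the same in native C++: a timer, a cheap call on each client pointer, and an RPC failure treated as a dead client); the interface, its GUID, and the cleanup below are placeholders for whatever your server already holds:

using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;
using System.Threading;

// Stand-in for the per-client COM interface the server already has a pointer to;
// the GUID is made up for the sketch.
[ComImport, Guid("6DBB0D4C-1111-4B6A-9C53-2A0C5B7F0001"),
 InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
interface IClientObject
{
    void Ping(); // any cheap call will do; a completed round trip proves the client is alive
}

class ClientMonitor
{
    private readonly List<IClientObject> clients = new List<IClientObject>();
    private Timer pingTimer;

    public void Start()
    {
        // Ping every 30 seconds; tune to how quickly dead clients must be noticed.
        pingTimer = new Timer(_ => PingAll(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void PingAll()
    {
        lock (clients)
        {
            for (int i = clients.Count - 1; i >= 0; i--)
            {
                try
                {
                    clients[i].Ping(); // fails with an RPC error if the client process is gone
                }
                catch (COMException) // e.g. 0x800706BA, RPC_S_SERVER_UNAVAILABLE
                {
                    ReleaseClientResources(clients[i]);
                    clients.RemoveAt(i);
                }
            }
        }
    }

    private void ReleaseClientResources(IClientObject dead)
    {
        // Free whatever server-side state was held on behalf of this client.
    }
}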