Reusing client in a WCF service in .NET 3.5

I understand that ClientBase<> has built-in caching of the channel factories, but that was only added in a later version of .NET (4.5).
Now I have a simple sample project to see whether the same client is reused or a new one is created for every call, and to compare the time taken. Below are some code snippets:
[ServiceContract]
public interface ISoapService
{
    [OperationContract]
    string GetData(int value);
}
Client Code:
private static void NonCachedVersion()
{
    Console.WriteLine("Non-Cached Version");
    for (int i = 0; i < 5; i++)
    {
        var watch = Stopwatch.StartNew();
        using (var client = new SoapServiceReference.SoapServiceClient())
        {
            client.GetData(5);
            watch.Stop();
        }
        Console.WriteLine("{0}: {1}ms", i, watch.ElapsedMilliseconds);
    }
}
private static void CachedVersion()
{
    Console.WriteLine("Cached Version");
    using (var client = new SoapServiceReference.SoapServiceClient())
    {
        for (int i = 0; i < 5; i++)
        {
            var watch = Stopwatch.StartNew();
            client.GetData(5);
            watch.Stop();
            Console.WriteLine("{0}: {1}ms", i, watch.ElapsedMilliseconds);
        }
        client.Close();
    }
}
Calling them:
static void Main(string[] args)
{
    NonCachedVersion();
    Console.WriteLine();
    CachedVersion();
    Console.ReadLine();
}
And here's the output:
Non-Cached Version
0: 2030ms
1: 22ms
2: 19ms
3: 21ms
4: 18ms
Cached Version
0: 19ms
1: 7ms
2: 7ms
3: 7ms
4: 8ms
Also, if I call the cached version first and then the non-cached version, the timings are as follows:
Cached Version
0: 2275ms
1: 13ms
2: 8ms
3: 8ms
4: 8ms
Non-Cached Version
0: 19ms
1: 18ms
2: 18ms
3: 18ms
4: 19ms
Thus, it is essentially the first call that is expensive; the rest seem to be utilizing some sort of cache.
So, a few things:
For the non-cached version - even though the client is disposed after every call, the next client creation is very fast compared to the first call. Is the channel factory being cached despite the dispose?
For the cached version - the time per call is less than in the non-cached version because the non-cached version spends that extra time creating the client object on each iteration. Correct?
The product is compiled for .NET 3.5 - is channel factory caching enabled in that version? I see that the CacheSetting property was added only in .NET 4.5.
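One workaround I'm considering is caching the ChannelFactory myself instead of relying on ClientBase<> behavior - a minimal sketch, which should work on .NET 3.5; the endpoint configuration name "soapEndpoint" is an assumption, not from my actual config:
public static class SoapClientCache
{
    // Building the ChannelFactory is the expensive part (reflection over the
    // contract, binding setup), so create it once and reuse it.
    private static readonly ChannelFactory<ISoapService> Factory =
        new ChannelFactory<ISoapService>("soapEndpoint");

    public static string GetData(int value)
    {
        // Channels created from a cached factory are cheap.
        ISoapService channel = Factory.CreateChannel();
        try
        {
            return channel.GetData(value);
        }
        finally
        {
            var co = (ICommunicationObject)channel;
            if (co.State == CommunicationState.Faulted)
                co.Abort();
            else
                co.Close();
        }
    }
}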


Repast: Query() method runs greatly slower than manual iteration

Recently I found a big problem with the Repast query() method: it is somehow significantly slower than the simple manual iteration approach for getting a specific agent set. Take, for example, a package object querying for all hubs with the same "hub_code". I tested both the query and the manual iteration approaches:
public void arrival_QueryApproach() {
    try {
        if (this.getArr_time() == this.getGs().getTick()) {
            Query<Object> hub_query = new PropertyEquals<Object>(context, "hub_code", this.getSrc());
            for (Object o : hub_query.query()) {
                if (o instanceof Hub) {
                    ((Hub) o).getDepature_queue().add(this);
                    this.setStatus(3);
                    this.setCurrent_hub(this.getSrc());
                    break;
                }
            }
        }
    }
    catch (Exception e) {
        System.out.println("No hub identified: " + this.getSrc());
    }
}
public void arrival_ManualApproach() {
    try {
        if (this.getArr_time() == this.getGs().getTick()) {
            for (Hub o : gs.getHub_list()) {
                if (o.getHub_code().equals(this.getSrc())) {
                    ((Hub) o).getDepature_queue().add(this);
                    this.setStatus(3);
                    this.setCurrent_hub(this.getSrc());
                    break;
                }
            }
        }
    }
    catch (Exception e) {
        System.out.println("No hub identified: " + this.getSrc());
    }
}
The execution speed is dramatically different. There are 50,000 package and 350 hub objects in my model. It takes on average 1 minute and 40 seconds to run 1600 ticks when using the built-in query function, but only 5 seconds when using the manual iteration approach. What causes this dramatic difference, and why does query() run so slowly? Logically it should run much faster.
Another issue associated with the query methods is that PropertyGreaterThanEquals and PropertyLessThanEquals run much slower than PropertyEquals. Below is another simple example, querying for a suitable dock for a truck to unload goods.
public void match_dock() {
    // Query<Object> pre_fit = new PropertyGreaterThanEquals(context, "unload_speed", 240);
    // Query<Object> pre_fit = new PropertyLessThanEquals(context, "unload_speed", 240);
    Query<Object> pre_fit = new PropertyEquals(context, "unload_speed", 240);
    for (Object o : pre_fit.query()) {
        if (o instanceof Dock) {
            System.out.println("this dock's id is: " + ((Dock) o).getId());
        }
    }
}
There are only 3 dock and 17 truck objects in the model. It takes less than one second to run a total of 1920 ticks using PropertyEquals; however, it takes more than 1 minute to run the same 1920 ticks using PropertyGreaterThanEquals or PropertyLessThanEquals. Do I again have to loop through all the objects (docks) and do the greater-than comparison manually? This seems to be another issue significantly affecting the model's execution speed.
I am using java version "11.0.1" 2018-10-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.1+13-LTS)
My Eclipse compiler level is 10. Installed JREs (default): JDK 11.
Thanks for any helpful advice.

In dotnet core how can I ensure only one copy of my application is running?

In the past I have done something like this
private static bool AlreadyRunning()
{
    var processes = Process.GetProcesses();
    var currentProc = Process.GetCurrentProcess();
    logger.Info($"Current process: {currentProc.ProcessName}");
    foreach (var process in processes)
    {
        if (currentProc.ProcessName == process.ProcessName && currentProc.Id != process.Id)
        {
            logger.Info($"Another instance of this process is already running: {process.Id}");
            return true;
        }
    }
    return false;
}
Which has worked well. In the new dotnet core world everything has a process name of dotnet, so I can only run one dotnet app at a time! Not quite what I want :D
Is there an ideal way of doing this in dotnet core? I see a mutex suggested, but I am not sure I understand the possible downsides or error states when running on systems other than Windows.
.NET Core now supports global named mutexes. From the description of the PR that added that functionality:
On systems that support thread process-shared robust recursive mutexes, they will be used
On other systems, file locks are used. File locks, unfortunately, don't have a timeout in the blocking wait call, and I didn't find any other sync object with a timed wait with the necessary properties, so polling is done for timed waits.
Also, there is a useful note in the "Named mutex not supported on Unix" issue about the mutex name that should be used:
By default, names have session scope and sessions are more granular on Unix (each terminal gets its own session). Try adding a "Global" prefix to the name minus the quotes.
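For example, a minimal sketch of the single-instance check with a machine-wide name (the mutex name here is an arbitrary placeholder, not a value from the question):
class SingleInstance
{
    static void Main()
    {
        // "Global\" makes the mutex machine-wide rather than session-scoped,
        // which matters on Unix where each terminal gets its own session.
        bool createdNew;
        using (var mutex = new Mutex(true, @"Global\MyApp-single-instance", out createdNew))
        {
            if (!createdNew)
            {
                Console.WriteLine("Another instance is already running.");
                return;
            }
            // ... run the application here ...
            mutex.ReleaseMutex();
        }
    }
}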
In the end I used a mutex and it seeeeeems okay.
I grabbed the code from here
What is a good pattern for using a Global Mutex in C#?
The version of core I am using does not seem to have some of the security settings stuff so I just deleted it. I am sure it will be fine. (new Mutex only takes 3 parameters)
private static void Main(string[] args)
{
    LogManager.Configuration = new XmlLoggingConfiguration("nlog.config");
    logger = LogManager.GetLogger("console");
    logger.Info("Trying to start");
    const string mutexId = @"Global\{{guid-guid-guid-guid-guid}}";
    bool createdNew;
    using (var mutex = new Mutex(false, mutexId, out createdNew))
    {
        var hasHandle = false;
        try
        {
            try
            {
                hasHandle = mutex.WaitOne(5000, false);
                if (!hasHandle)
                {
                    logger.Error("Timeout waiting for exclusive access");
                    throw new TimeoutException("Timeout waiting for exclusive access");
                }
            }
            catch (AbandonedMutexException)
            {
                hasHandle = true;
            }
            // Perform your work here.
            PerformWorkHere();
        }
        finally
        {
            if (hasHandle)
            {
                mutex.ReleaseMutex();
            }
        }
    }
}

Choosing a WCF Instance Management Mode

We are designing a WCF web service on IIS with the following characteristics:
Object creation is relatively heavy (takes about 500 ms) because it involves writing a file
No state needs to be preserved once an object is created
Each call from a client takes on average 150 - 200 ms. This call involves sending a UDP request to another server and receiving a response.
We expect about 30 simultaneous clients. It may grow to 50 clients. In the worst-case scenario (50 clients), we will need a pool of 10 instances of the object to handle this load.
Which of the three instance management modes (PerCall, PerSession, Single) would you recommend, and why? Which mode enables us to manage a pool of available objects that would do the work?
Out of the box, WCF does not support a service object pool; you need a custom IInstanceProvider (a rough sketch follows below). In this scenario the WCF context mode defines when WCF requests a new object from the IInstanceProvider, and the IInstanceProvider's behavior manages the pool. Setting the service to either PerCall or PerSession could make sense depending on the usage.
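Here is a minimal, hypothetical sketch of such a pooled provider - the type name is made up, and a real pool would also cap its size and dispose surplus instances:
using System;
using System.Collections.Concurrent;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// WCF calls GetInstance when it needs a service object and ReleaseInstance
// when it is done with it, so the pool decides whether to hand back a
// recycled instance or build a new one.
public class PooledInstanceProvider : IInstanceProvider
{
    private readonly ConcurrentBag<object> _pool = new ConcurrentBag<object>();
    private readonly Func<object> _factory;

    public PooledInstanceProvider(Func<object> factory)
    {
        _factory = factory;
    }

    public object GetInstance(InstanceContext instanceContext, Message message)
    {
        // Reuse a pooled instance if one is available; otherwise create one.
        object instance;
        return _pool.TryTake(out instance) ? instance : _factory();
    }

    public object GetInstance(InstanceContext instanceContext)
    {
        return GetInstance(instanceContext, null);
    }

    public void ReleaseInstance(InstanceContext instanceContext, object instance)
    {
        // Return the instance to the pool instead of discarding it.
        _pool.Add(instance);
    }
}
You would still need an IServiceBehavior or IContractBehavior to install the provider on each DispatchRuntime.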
If you are using a Dependency Injection container in your implementation, such as Castle Windsor, StructureMap, or MS Enterprise Library's Unity, then you can use the container's existing IInstanceProvider with a pooled lifestyle. All of those containers are reasonable (although I don't personally have much experience with having them manage object pools).
My personal choice of container is Castle Windsor. In that case you would use Windsor's WCF Integration Facility and a pooled lifestyle.
Here's a quick test console program that uses the Castle.Facilities.WcfIntegration NuGet package.
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.Threading.Tasks;

namespace WCFWindsorPooledService
{
    [ServiceContract]
    public interface ISimple
    {
        [OperationContract]
        void Default(string s);
    }

    public class SimpleService : ISimple
    {
        private static int CurrentIndex = 0;
        private int myIndex;

        public SimpleService()
        {
            // Output which instance handled the call.
            myIndex = System.Threading.Interlocked.Increment(ref CurrentIndex);
        }

        public void Default(string s)
        {
            Console.WriteLine("Called #" + myIndex);
            System.Threading.Thread.Sleep(5);
        }
    }

    class PooledService
    {
        static void Main(string[] args)
        {
            Console.WriteLine("\n\n" + System.Reflection.MethodInfo.GetCurrentMethod().DeclaringType.Name);

            // Define mapping of interfaces to implementation types in the Windsor container.
            IWindsorContainer container = new WindsorContainer();
            container.AddFacility<WcfFacility>()
                .Register(Component.For<SimpleService>().LifeStyle.PooledWithSize(2, 5));

            var host = new Castle.Facilities.WcfIntegration.DefaultServiceHostFactory()
                .CreateServiceHost(typeof(SimpleService).AssemblyQualifiedName,
                    new Uri[] { new Uri("http://localhost/Simple") });
            host.Open();

            ChannelFactory<ISimple> factory = new ChannelFactory<ISimple>(host.Description.Endpoints[0]);
            List<Task> tasks = new List<Task>();
            for (int i = 0; i < 20; i++)
            {
                tasks.Add(Task.Run(() =>
                {
                    ISimple proxy = factory.CreateChannel();
                    proxy.Default("Hello");
                    ((ICommunicationObject)proxy).Shutdown();
                }));
            }
            Task.WaitAll(tasks.ToArray());

            ((ICommunicationObject)host).Shutdown();
            container.Dispose();
        }
    }

    public static class Extensions
    {
        static public void Shutdown(this ICommunicationObject obj)
        {
            try
            {
                obj.Close();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Shutdown exception: {0}", ex.Message);
                obj.Abort();
            }
        }
    }
}
I'm not going to pretend to understand all the rules of how Castle manages a pool, but a pool is clearly being used. The output is:
PooledService
Called #1
Called #5
Called #2
Called #3
Called #4
Called #6
Called #7
Called #8
Called #7
Called #4
Called #2
Called #5
Called #1
Called #10
Called #6
Called #9
Called #4
Called #7
Called #1
Called #9

usbManager openDevice call fails after several hundred successful attempts

I'm using the UsbManager class to manage USB host mode on my Android 4.1.1 machine.
All seems to work quite well for a few hundred transactions, until (after ~900 transactions) opening the device fails, returning null without an exception.
Using a profiler, it doesn't seem to be a matter of a memory leak.
This is how I initialize the communication from my main activity (doing this once):
public class MainTestActivity extends Activity {

    private BroadcastReceiver m_UsbReceiver = null;
    private PendingIntent mPermissionIntent = null;
    UsbManager m_manager = null;
    DeviceFactory m_factory = null;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        mPermissionIntent = PendingIntent.getBroadcast(this, 0, new Intent(ACTION_USB_PERMISSION), 0);
        IntentFilter filter = new IntentFilter(ACTION_USB_PERMISSION);
        filter.addAction(UsbManager.ACTION_USB_DEVICE_DETACHED);

        m_UsbReceiver = new BroadcastReceiver() {
            public void onReceive(Context context, Intent intent) {
                String action = intent.getAction();
                if (UsbManager.ACTION_USB_DEVICE_DETACHED.equals(action)) {
                    UsbDevice device = (UsbDevice) intent.getParcelableExtra(UsbManager.EXTRA_DEVICE);
                    if (device != null) {
                        // Call your method that cleans up and closes communication with the device.
                        Log.v("BroadcastReceiver", "Device Detached");
                    }
                }
            }
        };
        registerReceiver(m_UsbReceiver, filter);

        m_manager = (UsbManager) getSystemService(Context.USB_SERVICE);
        m_factory = new DeviceFactory(this, mPermissionIntent);
    }
And this is the code of my test:
ArrayList<DeviceInterface> devList = m_factory.getDevicesList();
if (devList.size() > 0) {
    DeviceInterface devIf = devList.get(0);
    UsbDeviceConnection connection;
    try
    {
        connection = m_manager.openDevice(m_device);
    }
    catch (Exception e)
    {
        return null;
    }
The test works OK for 900 to 1000 calls; after this, the following call returns null (without an exception):
UsbDeviceConnection connection;
try
{
    connection = m_manager.openDevice(m_device);
}
You might just be running out of file handles; a typical limit is 1024 open files per process.
Try calling close() on the UsbDeviceConnection; see the documentation.
The UsbDeviceConnection object has allocated system resources - e.g. a file descriptor - which will be released only on garbage collection. In this case you run out of resources before you run out of memory, which means the garbage collector has not been invoked yet.
I had openDevice() fail on repeated runs on Android 4.0 even though I open the device only once in my code. I had some exit paths that did not close the resources, and I had assumed the OS would free them on process termination.
However, there seems to be some issue with the release of resources on process termination - I used to have issues even after terminating and launching a fresh process.
I finally ensured release of resources on every exit path, and that made the problem go away.

Recursive Bus.Send() within a Handler (Transactions, Threading, Tasks)

I have a handler similar to the following, which essentially responds to a command and sends a whole bunch of commands to a different queue.
public void Handle(ISomeCommand message)
{
    int i = 0;
    while (i < 10000)
    {
        var command = Bus.CreateInstance<IAnotherCommand>();
        command.Id = i;
        Bus.Send("target.queue#d1555", command);
        i++;
    }
}
The issue with this block is that until the loop is fully completed, none of the messages appear in the target queue or in the outgoing queue. Can someone help me understand this behavior?
Also, if I use Tasks to send messages within the handler as below, the messages appear immediately. So, two questions on this:
What's the explanation on Task based Sends to go through immediately?
Are there are any ramifications on using Tasks with in message handlers?
public void Handle(ISomeCommand message)
{
    int i = 0;
    while (i < 10000)
    {
        System.Threading.ThreadPool.QueueUserWorkItem((args) =>
        {
            var command = Bus.CreateInstance<IAnotherCommand>();
            command.Id = i;
            Bus.Send("target.queue#d1555", command);
            i++;
        });
    }
}
Your time is much appreciated!
First question: picking a message from a queue, running all the registered message handlers for it, AND any other transactional action (like writing new messages or writing against a database) is performed in ONE transaction - either it all completes or none of it does. So what you are seeing is the expected behaviour: picking a message from the queue, handling ISomeCommand, and writing 10000 new IAnotherCommands is either done completely or not at all. To avoid this behaviour you can do one of the following:
Configure your NServiceBus endpoint to not be transactional
public class EndpointConfig : IConfigureThisEndpoint, AsA_Publisher, IWantCustomInitialization
{
    public void Init()
    {
        Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
            .IsTransactional(false)
            .UnicastBus();
    }
}
Wrap the sending of IAnotherCommand in a transaction scope that suppresses the ambient transaction.
public void Handle(ISomeCommand message)
{
    using (new TransactionScope(TransactionScopeOption.Suppress))
    {
        int i = 0;
        while (i < 10000)
        {
            var command = Bus.CreateInstance<IAnotherCommand>();
            command.Id = i;
            Bus.Send("target.queue#d1555", command);
            i++;
        }
    }
}
Issue the Bus.Send on another thread, by either starting a new thread yourself, using System.Threading.ThreadPool.QueueUserWorkItem, or using the Task classes. This works because an ambient transaction is not automatically carried over to a new thread.
Second question: the ramification of using Tasks, or any of the other methods I mentioned, is that you have no transactional guarantee for the whole thing.
How do you handle the case where you have generated 5000 IAnotherCommands and the power suddenly goes out?
If you use 2) or 3), the original ISomeCommand will not complete and will be retried automatically by NServiceBus when you start up the endpoint again. End result: 5000 + 10000 IAnotherCommands.
If you use 1), you will lose the original ISomeCommand completely and end up with only the 5000 IAnotherCommands.
Using the recommended transactional way, the initial 5000 IAnotherCommands would be discarded, and the original ISomeCommand comes back on the queue and is retried when the endpoint starts up again. Net result: 10000 IAnotherCommands.
If memory serves, NServiceBus wraps the calls to the message handlers in a TransactionScope if the transaction option is used, and TransactionScope needs some help to be cross-thread friendly:
TransactionScope and multi-threading
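The usual pattern from that question is a dependent transaction, which lets the worker thread join the handler's ambient transaction rather than suppress it - a rough sketch, not NServiceBus-specific:
// On the handler thread, clone the ambient transaction. BlockCommitUntilComplete
// keeps the original transaction from committing until the clone is completed.
DependentTransaction dependent =
    Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

System.Threading.ThreadPool.QueueUserWorkItem(state =>
{
    var dt = (DependentTransaction)state;
    try
    {
        // Make the clone the ambient transaction on the worker thread.
        using (var scope = new TransactionScope(dt))
        {
            // ... transactional work here, e.g. Bus.Send(...) ...
            scope.Complete();
        }
    }
    finally
    {
        dt.Complete(); // lets the original transaction commit
    }
}, dependent);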
If you are trying to reduce overhead, you can also bundle your messages. The signature for the send is Bus.Send(IMessage[] messages). If you can guarantee that you don't blow past the size limit for MSMQ, then you could Send() all the messages at once. If the size limit is an issue, you can chunk them up as sketched below, or use the Databus.
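A sketch of chunked sends, assuming the array overload above; the batch size is an arbitrary example to tune against the MSMQ size limit:
public void Handle(ISomeCommand message)
{
    const int batchSize = 500; // arbitrary; tune against the MSMQ message size limit
    var batch = new List<IAnotherCommand>(batchSize);
    for (int i = 0; i < 10000; i++)
    {
        var command = Bus.CreateInstance<IAnotherCommand>();
        command.Id = i;
        batch.Add(command);
        if (batch.Count == batchSize)
        {
            // One Send() per chunk instead of one per message.
            Bus.Send("target.queue#d1555", batch.ToArray());
            batch.Clear();
        }
    }
    if (batch.Count > 0)
    {
        Bus.Send("target.queue#d1555", batch.ToArray());
    }
}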