How to add distributed tracing to websockets properly?
As each WebSocket connection is long-lived, a lot of messages end up getting traced under one activity. Is there any way to split that? (See the link below for reference.)
It looks like there is an issue filed on the .NET Core GitHub repository related to this, but while it is being fixed we want an interim workaround. Any suggestions on how to handle this?
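For context, the kind of stopgap we are considering looks roughly like the sketch below: start a fresh Activity per received message so each message is traced as its own operation instead of under the connection's long-lived activity. This is only a rough sketch; TracedSocketHandler and ProcessMessageAsync are placeholders, not our real code.

    using System;
    using System.Diagnostics;
    using System.Net.WebSockets;
    using System.Threading;
    using System.Threading.Tasks;

    public class TracedSocketHandler
    {
        public async Task ReceiveLoopAsync(WebSocket socket, CancellationToken token)
        {
            var buffer = new byte[4096];
            while (socket.State == WebSocketState.Open && !token.IsCancellationRequested)
            {
                var result = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), token);
                if (result.MessageType == WebSocketMessageType.Close)
                    break;

                // One Activity per message: it becomes a child of whatever
                // Activity.Current is (e.g. the connection-level activity).
                var messageActivity = new Activity("WebSocketMessage").Start();
                try
                {
                    await ProcessMessageAsync(buffer, result.Count);
                }
                finally
                {
                    messageActivity.Stop();
                }
            }
        }

        // Placeholder for the application's actual message handling.
        private Task ProcessMessageAsync(byte[] buffer, int count) => Task.CompletedTask;
    }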
I am experimenting with the new version of NServiceBus. I found the following step-by-step sample on the Particular site.
https://docs.particular.net/samples/step-by-step/
Can anyone tell me how to configure MSMQ as the transport? Here is my scenario:
The client creates a message.
The client message should be stored in MSMQ.
A server application running on the same machine subscribes to the message.
The server handler gets the message from MSMQ and processes it further, i.e. stores it in the DB or sends it to another web service.
Retry processing the message if it does not work the first time.
After 3 retries, send the message to the error queue.
How do I configure this sample to use MSMQ for my scenario?
Helpful information to include
Product name: NServiceBus.Core
Version: 6.3.4
Did you know that we have released a LearningTransport and LearningPersistence just for purposes like these? Have a look at them here.
Having said that, the transport swapping should be rather seamless, so even if you have set up a small PoC using this transport/persistence, you can switch to MSMQ or another production-ready transport/persistence when you go live.
Again, as stated on the documentation page and as the name suggests, this is not for use in production.
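For reference, a rough sketch of what that swap could look like against the version 6 API is below; the endpoint name, queue name, and retry counts are placeholders for your scenario rather than anything taken from the sample.

    using System.Threading.Tasks;
    using NServiceBus;

    class Program
    {
        static async Task AsyncMain()
        {
            var endpointConfiguration = new EndpointConfiguration("Samples.StepByStep.Server");

            // For a local PoC you could start with the LearningTransport:
            // endpointConfiguration.UseTransport<LearningTransport>();

            // For the MSMQ scenario, swap in the MSMQ transport instead.
            endpointConfiguration.UseTransport<MsmqTransport>();

            // MSMQ pub/sub also needs a subscription persistence, e.g.:
            // endpointConfiguration.UsePersistence<InMemoryPersistence>();

            // Retry a failed message 3 times before giving up...
            var recoverability = endpointConfiguration.Recoverability();
            recoverability.Immediate(immediate => immediate.NumberOfRetries(3));
            recoverability.Delayed(delayed => delayed.NumberOfRetries(0));

            // ...then move it to the error queue.
            endpointConfiguration.SendFailedMessagesTo("error");

            var endpointInstance = await Endpoint.Start(endpointConfiguration)
                .ConfigureAwait(false);

            // ... run the endpoint, then shut it down when done.
            await endpointInstance.Stop().ConfigureAwait(false);
        }
    }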
I would recommend you walk through this:
https://docs.particular.net/tutorials/intro-to-nservicebus/
It will answer your questions, and future ones you may have.
We have been playing around with Brave (the Java implementation of Zipkin) and successfully added tracing for REST and database calls. We would like to also add RabbitMQ to the tracing and would like some thoughts from anyone who may have had similar experiences that they could share.
We have tried to find some material online but can't seem to find an interceptor we could add to our RabbitMQ implementation. Can you recommend anything?
Thanks in advance.
The best way to ask for a feature is via GitHub issues.
To add a new transport such as RabbitMQ, you'd have to affect Brave (the reporter) and Zipkin (the collector):
https://github.com/openzipkin/zipkin/issues
https://github.com/openzipkin/brave/issues
We have an issue with a Windows service which uses NServiceBus. At some random moment, NServiceBus stops processing messages and sends them directly to the error queue, and I have to restart the service. After the restart, the messages that arrive in the input queue are handled and everything gets back to normal. If we re-drop the messages that went to the error queue, they are processed successfully without any issue.
We are using log4net to audit the message flow and store it in the DB. The NServiceBus handler stops logging to log4net; after we restart the Windows service (NServiceBus), it starts to log again. We are NOT able to reproduce this issue in the development environment. We suspect this could be an NServiceBus memory leak, but we don't know how to confirm that or resolve it.
We are planning to move this Windows service (NServiceBus) to a different server on a trial-and-error basis. Has anyone ever faced this issue and resolved it? Please help us resolve it, as it is causing trouble in the production environment.
NServiceBus version we are using: 2.0.0.1329
The message queue and the Windows service are on the same machine.
I believe you're running on a version of NServiceBus that is about 5 years old and is no longer supported. While I could give you the standard recommendation of upgrading to a more current release, it could very well be that some of the configuration APIs that you're using have been made obsolete so you may need to make some modifications there and/or in the app.configs.
I'm sorry to say that there probably isn't a better solution for you at this time.
In general, I'd suggest trying to track the NServiceBus releases somewhat more closely. If you're within 6-12 months of the current release, you should generally be in good shape.
I work on a few .NET web apps that use Redis heavily for caching along with ServiceStack's Redis client. In all cases I've got Redis running on the same machine. I've used both BasicRedisClientManager and PooledRedisClientManager (always implemented as singletons) and have had some issues with both approaches.
With BasicRedisClientManager, things would work fine for a while, but eventually Redis would start refusing connections. Using netstat we discovered that thousands of TCP connections to the default Redis port were hanging around in TIME_WAIT status.
We then switched to PooledRedisClientManager, which seemed to fix the problem immediately. However, not long after, we started noticing occasional CPU spikes that we narrowed down to thread waiting (System.Threading.Monitor.Wait calls) caused by PooledRedisClientManager.GetClient.
In code, we use a get-in-get-out approach (using ServiceStack's handy ExecAs shortcuts) so in general connections are acquired very frequently but held as briefly as possible.
We get a modest amount of traffic but we're no StackExchange, and I can't help but think the ServiceStack client is up to the job and we're just doing something wrong. Is PooledRedisClientManager the correct approach here? Would it be advisable to simply increase the pool size? Or is that likely just masking a problem with our code?
Just looking for general guidance here, I don't have specific code I need help with at this point. Thanks in advance.
Are you absolutely sure all Redis connections are being disposed?
With ServiceStack, the Redis property on Service and ViewPageBase (if you're using ServiceStack Razor) does dispose itself, but any time you request a connection from the pool yourself you must dispose of it yourself.
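As a minimal illustration of that get-in-get-out pattern with explicit disposal (the host string and key below are only placeholders):

    using ServiceStack.Redis;

    public class CacheExample
    {
        // One pooled manager for the whole app, registered as a singleton.
        // "localhost:6379" is just a placeholder host.
        private static readonly IRedisClientsManager RedisManager =
            new PooledRedisClientManager("localhost:6379");

        public void SaveValue(string key, string value)
        {
            // Any client you resolve from the pool yourself must be disposed,
            // or the pool will eventually run out of connections.
            using (var redis = RedisManager.GetClient())
            {
                redis.Set(key, value);
            }
        }
    }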
However, despite this, we recently had issues with our pool being exhausted of all connections, too. One of my colleagues discovered that there wasn't proper clean-up for Razor pages and made a pull request here. This means that correct disposal on Razor pages has only existed since ServiceStack v4.0.21. I have not checked whether that fix has been back-ported to the v3 branch.
My colleague also added a TrackingRedisClientsManager that may help you track down the improper disposal. See here.
You can also check the stats of a PooledRedisClientManager by using this helper method. We threw it on a little Razor page to check the stats when we feel it appropriate, but you could write better code around this to monitor the pool health of specific nodes, too.
I have a WCF application with a couple thousand clients connecting to a pair of services running under IIS. What I've noticed is that some of these clients get into a hung state, and I'm trying to reproduce this.
When this problem was first noticed, I had not modified the throttling configuration and the services were set to ConcurrencyMode.Single. One thing I noticed was that an IISReset on the server caused many clients to hang. Yet pulling this same stunt on the client running against IIS on my local machine doesn't seem to cause the problem.
I caught this only once in the wild, but didn't have debugging enabled at the time. The symptom I witnessed was that the client appeared to be trying to open a connection to the web server, but did not succeed. While monitoring with Fiddler, I saw no attempt to reach the service endpoint. Obviously that makes me suspect the client proxy.
I have a very solid hunch as to what's happening -- namely I've been using "Close()" instead of "Abort()" when the service throws an exception, which I believe is causing the channels to become corrupted. But considering the effort to get a new version out there, I need to reproduce this problem by causing a client on my own machine to hang before I can start making changes to the code.
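For context, the defensive pattern I believe we should have been using looks roughly like the sketch below; the contract and factory here are placeholders, not our actual proxy code.

    using System;
    using System.ServiceModel;

    // Placeholder contract standing in for the real service.
    [ServiceContract]
    public interface ISomeService
    {
        [OperationContract]
        void DoWork();
    }

    public static class ServiceCaller
    {
        public static void Call(ChannelFactory<ISomeService> factory)
        {
            var channel = factory.CreateChannel();
            var comm = (ICommunicationObject)channel;
            try
            {
                channel.DoWork();
                comm.Close();   // safe only while the channel is not faulted
            }
            catch (CommunicationException)
            {
                comm.Abort();   // a faulted channel must be aborted, not closed
                throw;
            }
            catch (TimeoutException)
            {
                comm.Abort();
                throw;
            }
        }
    }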
Where should I start?
Thanks in advance,
roufamatic
Have you got any logging turned on? This could help in diagnosing the problem. It can be done completely in config, so no need to build a new version. Use the Service Configuration Editor tool to set it all up. The Visual Studio 2008 Training Kit has a good tutorial on how to use logging and the log viewer.
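For reference, the trace source that the editor sets up ends up looking roughly like this in the app's config (the listener name and log path below are just examples):

    <system.diagnostics>
      <sources>
        <source name="System.ServiceModel"
                switchValue="Information, ActivityTracing"
                propagateActivity="true">
          <listeners>
            <add name="wcfTrace"
                 type="System.Diagnostics.XmlWriterTraceListener"
                 initializeData="c:\logs\WcfTraces.svclog" />
          </listeners>
        </source>
      </sources>
    </system.diagnostics>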
I suppose this was too vague a question, though I was mostly curious what people might suggest. As it turns out, there was a nontrivial difference between my workstation and a production environment that, once resolved, allowed me to see the problem. In this case, somehow using Fiddler to watch the traffic actually prevented the error from occurring! Now to ask another question.