Azure diagnostics with WCF hosted in Azure - wcf

Could someone please confirm whether Azure Diagnostics is possible for WCF hosted in Azure?
I followed this guide: http://blogs.msdn.com/b/sajid/archive/2010/04/30/basic-azure-diagnostics.aspx
After calling Trace.WriteLine("Information", "testing"),
I was expecting a WAD table to show up in Azure storage, but it isn't appearing.
Thanks

What was your transfer period and filter level, and how long did you wait for the data to appear? Do you have the Azure diagnostics trace listener in your config file (or added programmatically)? Nothing will appear if you are not using the listener.
Diagnostics data is aggregated locally by default and will never appear in your storage account unless you actually initiate a transfer with the correct filter level (Undefined transfers everything). There are billing reasons why that is the case, by the way: transfers cost you money in transactions and storage.
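To make that concrete, here is roughly what the setup looked like in the pre-1.3 SDK that the linked blog post targets (assembly version and the connection-string setting name may differ in your project; treat this as a sketch, not the current API). First, the listener in web.config, without which Trace.WriteLine goes nowhere:

```xml
<system.diagnostics>
  <trace>
    <listeners>
      <!-- Routes System.Diagnostics.Trace output into Windows Azure Diagnostics -->
      <add name="AzureDiagnostics"
           type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </listeners>
  </trace>
</system.diagnostics>
```

Then a scheduled transfer in the role's OnStart, since locally buffered logs only reach table storage once a transfer runs:

```csharp
// Sketch for the pre-1.3 Azure SDK: push buffered trace logs to storage every minute.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Undefined; // transfer everything

// The setting name here is the conventional one; use whatever your role defines.
DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
```

With both in place, Trace.WriteLine output should land in the WADLogsTable after the next scheduled transfer.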

This blog post is about 18 months old, and there have been some breaking changes to Windows Azure Diagnostics since then from an SDK perspective. Please refer to the latest SDK documentation or these blog posts:
http://convective.wordpress.com/2010/12/01/configuration-changes-to-windows-azure-diagnostics-in-azure-sdk-v1-3/
http://blog.bareweb.eu/2011/03/implementing-azure-diagnostics-with-sdk-v1-4/

Related

What does loopback health really check?

I added the component @loopback/health to my LoopBack 4 server, but I don't understand what it uses to decide that my server is up. I searched https://loopback.io/doc/en/lb4/Health.html#add-custom-live-and-ready-checks and Google, but I can't find any information about how it works.
Thanks for any insight!
Without configuring any additional custom checks, @loopback/health only configures a startup check that keeps track of when the REST server (which is a LifeCycleObserver) is started and shut down. This is useful for infrastructure with existing tooling that consumes health endpoints (e.g. Kubernetes, Cloud Foundry), or if the LoopBack 4 project does more beyond running a REST server.
It is still an experimental package, and there are intentions to expand its scope to encompass other LifeCycleObservers of the LoopBack 4 app, such as DataSources.

Gridgain console load balance

I have a three-node GridGain cluster and am also running the GridGain Web Console agent and Web Console on all three nodes. It is all hosted on Windows Server.
I would like to load balance my Web Console. The problem is I don't know how to share the user registration database, which it stores in a work directory. Can I use an external database to store all that information so that my cluster uses the same database?
There is a problem with the Web Console Agent as well. How do I share the tokens stored in default.properties?
There is no definitive guide on how to create a cluster for web console for high availability.
Can someone please guide me on how can I form a cluster for a Web console sharing its user store and tokens?
Thanks
If you are looking for multi-cluster support, take a look at the documentation:
https://www.gridgain.com/docs/web-console/latest/multi-cluster-support
If you are looking for agent fault tolerance: just start several agents. The first agent will process all messages; the others will be in hot-standby mode.
If you are looking for connection fault tolerance between the agent and the cluster (if the cluster node that serves as the agent's connection point fails, Web Console will lose its connection to the cluster), just specify several node addresses as a comma-separated list for the "node-uri" parameter (in default.properties or as a command-line argument).
For example:
node-uri=http://192.168.0.1:8080,http://192.168.0.2:8080,http://192.168.0.3:8080
Hope this helps.

Azure Web apps are getting very slow

I have an Azure web app which is connected to my mobile app. Sometimes my Azure app is slow. Why does it get slow sometimes?
I am getting this issue intermittently; please check the image.
502 means Bad Gateway: the front end of the Azure web app infrastructure wasn't able to communicate with the process serving your application. This can happen for a variety of reasons. The common ones include:
application timeouts
deadlocks in the application
application restarts
I would recommend enabling the following logging to collect some data and investigate:
Web Server Logging (use this to check the time-taken field)
Failed Request Tracing (this will help you determine which module is taking the time)
Detailed Error Messages (this will provide the exact error)
There is another option to investigate your app: browse to the Diagnose and solve problems section for your app and follow the instructions there. See the screenshot below:
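If you prefer to switch those logs on from the command line rather than the portal, the Azure CLI can enable them. A sketch, assuming the Azure CLI is installed and you are logged in; the resource group and app name here are placeholders for your own:

```shell
# Enable web server logging, detailed error messages, and failed request tracing
az webapp log config \
  --resource-group myResourceGroup \
  --name my-web-app \
  --web-server-logging filesystem \
  --detailed-error-messages true \
  --failed-request-tracing true

# Stream the logs live while reproducing the slowness
az webapp log tail --resource-group myResourceGroup --name my-web-app
```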

Upload text logs to MVC 4 web server every second

I have a web server implemented using .NET MVC 4. Clients connected to this web server perform some operations and upload live logs to it using the WebClient.UploadString method. Logs are sent from client to server in chunks of 2,500 characters at a time.
Things work fine with 2-3 clients uploading logs. However, when more than 3 clients try to upload logs simultaneously, they start receiving "HTTP 500 Internal Server Error".
I might have to scale up and add more slaves, but that will make the situation worse.
I want to implement Jenkins-like live logging, where logs from slaves are updated live.
Please suggest a better, scalable solution to this problem.
Have you considered looking into SignalR?
It can be used for anything from instant messaging to stock tickers. I have implemented both a chat box and a custom system that sends off messages, does calculations, and then passes the results back down to the client. It is very reliable, there are some nice tutorials, and I think it's awesome.
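As a rough sketch of how that could look for live logs (assuming ASP.NET SignalR 2.x; the hub name and client method here are made up for illustration, not an existing API): each slave pushes lines to a hub, and dashboard pages subscribe to the slave they are watching via a group, so the server broadcasts instead of each viewer polling.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Hypothetical hub: viewers join a group named after the slave they watch,
// and uploaders broadcast each log line to that group.
public class LiveLogHub : Hub
{
    // Called by a dashboard page to start receiving a slave's log lines.
    public Task WatchSlave(string slaveId)
    {
        return Groups.Add(Context.ConnectionId, slaveId);
    }

    // Called by a slave (or the upload endpoint) for each new log line.
    public void UploadLogLine(string slaveId, string line)
    {
        // Invokes a client-side "appendLogLine" handler on every watcher.
        Clients.Group(slaveId).appendLogLine(line);
    }
}
```

Because SignalR multiplexes many clients over persistent connections, it tends to cope with concurrent writers far better than one HTTP POST per 2,500-character chunk.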

WCF tracing in Azure production

How do I set WCF tracing in Azure (production environment) so that I'll have logging of all WCF errors?
Can't you use Windows Azure Diagnostics for this purpose? Once it is properly configured, your trace logs will be available in a Windows Azure Storage account that you have specified in your code. More information about Windows Azure Diagnostics can be found here: https://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics/.
Just like Guarav said, you can simply use the Azure diagnostics to log all errors to your storage account (there's a good read in the MSDN Magazine: Take Control of Logging and Tracing in Windows Azure).
Now, I personally don't like 'flat' logging when working with WCF. I find it very important to be able to trace through activities. That's why, for Azure projects where I use WCF, I don't use the normal diagnostics.
I use a trick documented by Christian Weyer where I log to a classic *.svclog file and have those files shipped to my storage account. Then I use the CloudBerry Storage Explorer to view those logs, activities included. This works by creating a custom XmlWriterTraceListener that writes to a local resource, which is then shipped to your storage account.
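For reference, this is the standard WCF tracing configuration that the trick builds on; the listener type and file path below are illustrative, and Weyer's approach swaps in a custom XmlWriterTraceListener subclass that resolves the path to a Windows Azure local resource instead of a fixed disk path:

```xml
<system.diagnostics>
  <sources>
    <!-- ActivityTracing plus propagateActivity is what makes activities traceable end to end -->
    <source name="System.ServiceModel"
            switchValue="Warning, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <!-- Replace with your custom XmlWriterTraceListener subclass that writes
             into a local resource directory shipped to blob storage -->
        <add name="svclog"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\service.svclog" />
      </listeners>
    </source>
  </sources>
  <trace autoflush="true" />
</system.diagnostics>
```

The resulting *.svclog files open in the Service Trace Viewer tool, which groups traces by activity.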