Severity of Stackdriver logs always INFO for .NET Core app deployed to GKE - asp.net-core

I have deployed my ASP.NET Core application to GKE and I am now seeing output logged in Stackdriver. However, for some reason all of the log entries have a severity of INFO. It doesn't matter whether it's an exception log (which should be ERROR) or something else: everything is logged as INFO.
How can I instruct the Stackdriver logger to tag log entries from a .NET Core application with the appropriate severity levels?

Well, first things first: that configuration should be done inside the logging layer of your .NET application.
Sometimes the application logs carry a string (and sometimes none at all), such as stderr or stdout, that Stackdriver reads as the severity. You could instead add a "severity" field to your log entries with the proper value, and that way Stackdriver would read the severity you specify; check the accepted values here.
Alternatively, this can be handled in the GKE cluster with Fluentd; you can refer to this documentation for that. This could also work.
In general, your log entries are either missing the severity field or carrying other strings that Stackdriver does not recognize.
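As a minimal sketch of the structured-logging approach (assuming .NET Core 3.0+ for System.Text.Json, and writing JSON directly to stdout rather than going through a logging provider), the important part is emitting one JSON object per line with a "severity" field the GKE logging agent can pick up:
using System;
using System.Text.Json;

static class StackdriverConsoleLog
{
    // Accepted severity values include DEBUG, INFO, WARNING, ERROR and CRITICAL.
    public static void Write(string severity, string message)
    {
        // One JSON object per line on stdout; the logging agent reads "severity".
        var entry = new { severity, message, time = DateTime.UtcNow };
        Console.WriteLine(JsonSerializer.Serialize(entry));
    }
}
Usage would be, for example, StackdriverConsoleLog.Write("ERROR", ex.ToString()) from an exception handler.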

Related

Google Cloud Platform - Catch and log errors and automatically terminate VM

I am running a workflow on an n1-ultramem-40 instance that will run for several days. If an error occurs, I would like to catch and log the error, be notified, and automatically terminate the virtual machine. Could I use Stackdriver and gcloud logging to achieve this? How could I automatically terminate the VM using these tools? Thanks!
Let's break the puzzle into two parts. The first is logging an error to Stackdriver and the second is performing an external action automatically when such an error is detected.
Stackdriver provides a wide variety of language bindings and package integrations that result in log messages being written. You could include such API calls in the application that detects the error. If you don't have access to the application's source code but it logs to an external file instead, you could use the Stackdriver agents to monitor the log files and relay the log messages to Stackdriver.
Once the error messages are being sent to Stackdriver, the next task is to create a Stackdriver log export (a sink). This is the act of defining a "filter" that matches the specific log entry message(s) you are interested in acting upon. Associated with this export definition and filter is a Pub/Sub topic; a Pub/Sub message is then written to that topic whenever a matching Stackdriver log entry is made.
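For example (the project, topic and sink names below are placeholders), such a sink can be created with gcloud roughly like this:
gcloud pubsub topics create vm-error-topic
gcloud logging sinks create vm-error-sink \
  pubsub.googleapis.com/projects/my-project/topics/vm-error-topic \
  --log-filter='severity>=ERROR AND resource.type="gce_instance"'
Note that the sink's writer identity also needs permission to publish to the topic.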
Finally, we now have our trigger to perform your action. We could use a Cloud Function triggered by the Pub/Sub message to execute arbitrary API logic; this could be code that performs an API request to GCP to terminate the VM.
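As a sketch of that last step (assuming the Google.Cloud.Compute.V1 client library; the project, zone and instance names are placeholders, and the Pub/Sub trigger wiring is omitted), the stop call itself is short:
using Google.Cloud.Compute.V1;

// Invoked from whatever receives the Pub/Sub message, e.g. a Cloud Function.
var instances = InstancesClient.Create();
var stopOperation = instances.Stop("my-project", "us-central1-a", "workflow-vm");
stopOperation.PollUntilCompleted(); // wait until the instance has actually stopped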

Stackdriver Trace not showing traces from Zipkin by Express API correctly

In our cluster, we have set up a Zipkin collector for Stackdriver Trace (like this) so we can trace our apps.
I am running the simple JavaScript web example that is offered. It works correctly when I configure the app to send the traces to the collector that is running in the cluster (in recorder.js).
However, when I want to inspect the traces in Stackdriver Trace, something seems to be going wrong:
The HTTP Method column is empty, and the URI column seems to show the HTTP method. How can I make these columns display the correct information?
Let me know if I need to add more information.

re-configuring a worklight application with analytics

After redeploying a Worklight application, some of the analytics configuration was lost, and I'm trying to configure Worklight with analytics again.
The dashboard shows "No data available" for the time after the deployment, although old records from before the deployment are still displayed, so the database itself was not affected.
I set the wl.analytics.logs.forward property to "true" in worklight.properties,
and I set wl.analytics.url to something like:
https://myserver:port/analytics/data
The dashboard is at
https://myserver:port/analytics/console
which is the URL for the analytics server.
However, if I open the data URL in a browser I get something like:
Error 404: java.io.FileNotFoundException: SRVE0190E: File not found: /data
I checked SystemOut.log and SystemErr.log (the WAS logs) and did not see any errors there.
Does anybody know which XML file I need to check in order to validate that the analytics configuration is OK? How could I troubleshoot this problem? Are there other logs I could check?
In the list of settings you gave, I do not see any for the username and password. Try setting:
wl.analytics.password=admin
wl.analytics.username=admin
It would be useful to see a Wireshark trace to confirm whether or not you are getting 403s. The analytics data uploader generally has a small amount of protection in front of it, and you have the option to keep or remove it.
@patbarron is correct about the multiple WAR files, though. You need to send your analytics data to the /analytics-service context: the analytics-service WAR handles all of the data processing, querying, etc., while the analytics WAR only serves the console UI.
When testing, it might be beneficial to lower wl.analytics.queue and wl.analytics.queue.size; those values control how much data is collected on the MobileFirst runtime server before it is sent. Data is collected at the runtime server and then sent to the analytics server, and generally the larger these values are, the longer it will take for data to be sent. They are good to set for production.
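Putting those pieces together, a hedged worklight.properties sketch might look like the following (hostname, port, credentials and queue values are placeholders to adjust for your environment, and the small queue values are only for testing):
wl.analytics.logs.forward=true
wl.analytics.url=https://myserver:port/analytics-service/data
wl.analytics.username=admin
wl.analytics.password=admin
wl.analytics.queue=1
wl.analytics.queue.size=1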

Read-only web console access in ActiveMQ

I'm using ActiveMQ 5.10 and would like to create a user that has read-only access through the web console.
Red Hat published this article, mentioning that it's not really read only due to a bug in ActiveMQ.
According to the bug report AMQ-4567, the bug is fixed as of ActiveMQ 5.9. However, I'm not seeing it work appropriately.
I have tried a number of different configurations, with the most recent being two separate JAAS implementations, one for Jetty and one for ActiveMQ. The relevant property files are excerpted below.
I can mostly log in to the web console using the "system" user, but the guest user doesn't work at all. The application user (appuser) doesn't need access to the web console.
My authN/authZ needs are pretty trivial: one admin user, one application account, and one read-only monitoring account.
Is there any good way to get this working with a recent version of ActiveMQ (>= 5.9.0)?
groups.properties
admins=system
users=appuser,admin
guests=guest
users.properties
system={password redacted}
appuser=appuser
guest=guest
jetty-realm.properties
system: MD5:46cf1b5451345f5176cd70713e0c9e07,user,admin
guest: guest,guest
As an aside, I used the Jetty tutorial and the Rundeck instructions to figure out the jetty-realm.properties file and chapter 6 of ActiveMQ in Action to work out the ActiveMQ JAAS.
I was finally able to get what I wanted by deploying the web console to an external Tomcat instance. I assume that when it runs out of process, it can't bypass security and so has to use whatever credentials you provide. In this case, I gave the Tomcat instance the read-only JMX user credentials.
It's not great, as there is no security-trimmed UI: you can still attempt to create new destinations, delete destinations, etc., and when you try with a read-only user you get an error. That gets a "D" for UX, but a "B" for security.
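For reference, the read-only JMX credentials mentioned above come from the standard Java JMX access/password file pair; a minimal sketch (the usernames and passwords are placeholders, and the exact file locations depend on how JMX authentication is enabled for your broker) looks like this:
# jmx.access
admin readwrite
monitor readonly
# jmx.password
admin {admin-password}
monitor {monitor-password}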

Want to deploy WCF web service on Azure platform

I want to deploy my WCF web service on the Azure platform.
I have created a storage account for my website, created a cloud service, and uploaded my package file and config file to the staging environment.
But after uploading, the message displayed is
'Your staging deployment is starting. Hang on, the page will refresh once the deployment begins.'
I have been waiting for 2-3 hours and am not getting the desired output.
Am I doing this correctly, or is there anything that I forgot?
Please help!
Most likely there is a problem in your code or the packaging that is causing the role to continuously restart. This is a fairly common problem, but there are a lot of possible causes (a missing assembly reference, an uncaught exception, the Run() method exiting, a startup task failing, or many other things). You need to gather more information to know exactly what the problem is and how to fix it.
There are many threads here on SO about this topic. There's also a Microsoft post discussing how to diagnose this type of issue. Those are good places to start.
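To illustrate the Run() case specifically, here is a minimal sketch assuming the classic cloud service role model (Microsoft.WindowsAzure.ServiceRuntime): Run() must block for the lifetime of the role, because returning from it, or letting an exception escape, causes the instance to recycle and can leave the portal stuck at "starting".
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        try
        {
            while (true)
            {
                // Periodic background work goes here; the key point is that
                // this method never returns while the role is healthy.
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
        catch (Exception ex)
        {
            // Log the failure before the role recycles so the cause shows up
            // in the diagnostics logs instead of the role silently restarting.
            Trace.TraceError(ex.ToString());
            throw;
        }
    }
}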