ASP.NET 5 (Core 1) logging confusion - wanting to only see "debug" messages

I'm trying to get started using logging in an ASP.NET Core 1.0 application, but this is my first time using any sort of logging, and the built-in solution is causing me some confusion.
In my Startup.cs, I initialize logging as I've seen in the sample applications:
log.AddConsole(Configuration.GetSection("Logging"));
log.AddDebug();
This works fine; my Logging section in the config file is defined as follows:
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Debug",
      "System": "Debug",
      "Microsoft": "Debug"
    }
  }
}
This all works fine, but the problem is that I see everything output to the screen, like this:
That information is not bad, of course, but it's a bit verbose and clutters up the command line. I'd really like to only see errors and debug-level messages, such as when I call them like this:
Logger.LogDebug("debug info");
But I'm very unclear about how to go about this.
Is there any way to achieve this level of tuning?
Update
After working with it more: if I create only a console logger with LogLevel.Error, I can get the immediate result I want, but then I lose any information from the other levels. Can the non-relevant information (LogLevel.Information and lower) be sent to another place, like the Output console in Visual Studio?

To answer your question: you can implement an ILogger and then log what you want in its public void Log method. Inside that method you can switch on the LogLevel and write it to the console, write it to a DB, only write certain things, etc. The rough steps would be:
Create a class that implements ILogger. If you want to manually handle every log level and do something different with each, you can do it there (as long as you haven't set the LogLevel too high in the startup). Here you can write to a DB, write to the console, send an email, whatever you want.
Create a class that implements ILoggerProvider and fill in its methods (CreateLogger is where you'll instantiate the ILogger you created in step 1). A sketch of both classes follows these steps.
Create an extension method on ILoggerFactory that looks something like this:
public static ILoggerFactory AddMyLogger(this ILoggerFactory loggerFactory, IHttpContextAccessor httpContextAccessor)
{
    loggerFactory.AddProvider(new AppsLoggerProvider(httpContextAccessor));
    return loggerFactory;
}
In the Configure method of Startup, add something like this (what you pass down the line is up to you):
// Create our logger and set the LogLevel
loggerFactory.AddMyLogger(httpContextAccessor);
loggerFactory.MinimumLevel = LogLevel.Verbose;
Note: the ILoggerFactory was injected into the Configure method (ILoggerFactory loggerFactory).
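For steps 1 and 2, here is a minimal sketch of what the two classes could look like. It is written against the released Microsoft.Extensions.Logging 1.0 interfaces (the RC-era signatures used elsewhere in this answer differ slightly), and the filtering and output choices are just placeholders, not the library's own code:

using System;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;

public class AppsLogger : ILogger
{
    private readonly string _categoryName;
    private readonly IHttpContextAccessor _httpContextAccessor;

    public AppsLogger(string categoryName, IHttpContextAccessor httpContextAccessor)
    {
        _categoryName = categoryName;
        _httpContextAccessor = httpContextAccessor;
    }

    // Decide here which levels this logger handles at all.
    public bool IsEnabled(LogLevel logLevel) => logLevel >= LogLevel.Debug;

    public IDisposable BeginScope<TState>(TState state) => null;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
        Exception exception, Func<TState, Exception, string> formatter)
    {
        if (!IsEnabled(logLevel))
            return;

        string message = formatter(state, exception);
        switch (logLevel)
        {
            case LogLevel.Error:
            case LogLevel.Critical:
                // This is where you could write to a DB, send an email, etc.
                Console.Error.WriteLine("[{0}] {1}: {2}", logLevel, _categoryName, message);
                break;
            default:
                Console.WriteLine("[{0}] {1}: {2}", logLevel, _categoryName, message);
                break;
        }
    }
}

public class AppsLoggerProvider : ILoggerProvider
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public AppsLoggerProvider(IHttpContextAccessor httpContextAccessor)
    {
        _httpContextAccessor = httpContextAccessor;
    }

    public ILogger CreateLogger(string categoryName)
        => new AppsLogger(categoryName, _httpContextAccessor);

    public void Dispose() { }
}

The IsEnabled check is where you can decide per-logger which levels to handle, independently of whatever minimum level the factory is configured with.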

ASP.NET Core defines the following six levels of logging verbosity:
Trace – For the most detailed messages, containing possibly sensitive information. Should never be enabled in production.
Debug – For interactive investigation during development: Useful for debugging but without long term value.
Information – For tracking the general flow of the application.
Warning – For abnormal events in the application, including errors and exceptions which are handled and as such do not impact the application's execution, but which could be a sign of potential problems.
Error – For actual failures which cause the current activity to fail, though they leave the application in a recoverable state, so other activities will not be impacted.
Critical – For failures at the application level which leave the application in an unrecoverable state and impact further execution.
These levels are sorted by verbosity, from very verbose to very quiet but with important consequences. This also means that Information is considered less verbose than Debug.
This means that when you configure a logger to show logging of the Debug verbosity, you always include the less verbose log levels as well, so you will also see Information. The only way to get rid of those Information log entries is to set the verbosity to Warning or higher.
The amount of logging on the Information level was chosen deliberately by the ASP.NET Core team to make sure that important parts are visible. It might seem very verbose to you, but it's entirely by design. In general, it's more useful to log too much than too little, so in case you actually need to access the logs, you have enough context information to make them actually useful.
If you want to hide that information, you could for example set the log level for Microsoft.AspNet.Hosting to Warning. Or alternatively, set the log level to Warning in general, but set it to Debug for your application’s namespaces to only see your own logging output.
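For example, assuming your own code lives under a hypothetical MyApp namespace, the "Logging" section from the question could be changed to:

{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Warning",
      "MyApp": "Debug"
    }
  }
}

The console logger configured via AddConsole(Configuration.GetSection("Logging")) matches category names by prefix, so everything outside your own namespaces is silenced below Warning while your own LogDebug calls still show up.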
You could also log to files and utilize a log file viewing utility (e.g. TailBlazer) to access the logs while using filters to focus on the parts you are interested in.

Related

Why should I use ILogger or Serilog?

As the question suggests: why should I use ILogger or Serilog, or any other third-party logger for that matter? I have absolutely nothing against either, or any third-party loggers. I've traditionally always rolled my own; I don't find it difficult or time-consuming, and more often than not I've just reused/rehashed something I've written before.
I'm not looking to bash anything. But from what I've read, they pretty much do what my logger does, and that's just simple messages and timestamps so I can see what's going on. I capture this:
Date/Time
Message or error message
Module that added the log (name)
Logged in user (if my app has users)
I'm just looking to see what the community thinks.
For Serilog, I've recently been through a series of articles that explain it fairly well: https://ranjeet.dev/All-about-structured-logging-introduction/
The main reason is to leverage structured logging, which consists of adding machine-friendly metadata on top of classical human-friendly string-based logging. When you end up with a big system in production, this is a game changer, because going through gigabytes of text files would be very unproductive. Then, on top of that, you get a very well-thought-out framework with add-ins and integrations to easily manipulate your logs.
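To make the difference concrete, here is a minimal sketch using Serilog's console sink (the order and elapsed values are made-up placeholders):

using System;
using Serilog;

class Program
{
    static void Main()
    {
        // Requires the Serilog and Serilog.Sinks.Console packages.
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .CreateLogger();

        int orderId = 42;
        double elapsedMs = 12.3;

        // OrderId and Elapsed are captured as named properties a log server
        // can index and query, not just text baked into the message string.
        Log.Information("Processed order {OrderId} in {Elapsed} ms", orderId, elapsedMs);

        Log.CloseAndFlush();
    }
}

With a sink that stores structured events (e.g. Seq or Elasticsearch), you can then query on OrderId directly instead of grepping text files.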

Is it possible to write a plugin for Glimpse's existing SQL tab?

I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This method takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example:
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab, but unfortunately it means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the out-of-the-box SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult, and once done, it should just work moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand:
Log command start - here
Log command error - here
Log command end - here
DbConnection:
Log connection open - here
Log connection closed - here
DbTransaction:
Log started - here
Log committed - here
Log rollback - here
Other:
Command row count - here. Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well.
Now that you have an idea of what messages we are expecting and how we generate them: as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the Broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages into this class, but I didn't think there would be too many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. That way your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this is really dependent on where in the page lifecycle you currently are. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items instead. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts the lifetime of a given request. You could keep a list in there: still create the message objects as you are doing, but put them into that list instead of pushing them through the broker.
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, pull the list out of the context, iterate through it, and push each message into the message broker (accessed the same way you have been).
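A rough sketch of what that module could look like; the Items key, the List<object> shape, and the dynamic dispatch into Publish are my assumptions based on the description above, not Glimpse's documented usage:

using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Framework;

public class GlimpseMessageFlushModule : IHttpModule
{
    private const string ItemsKey = "PendingGlimpseMessages"; // hypothetical key

    public void Init(HttpApplication context)
    {
        context.PostLogRequest += (sender, e) =>
        {
            var messages = HttpContext.Current.Items[ItemsKey] as List<object>;
            var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
            if (messages == null || broker == null)
                return; // broker is null when Glimpse is turned off

            foreach (var message in messages)
                broker.Publish((dynamic)message); // dynamic so the runtime message type drives subscriber matching
        };
    }

    public void Dispose() { }
}

Remember to register the module in web.config so it actually runs.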

How to get the tcmid of the currently logged-in user in Tridion?

private void Subscribe()
{
    EventSystem.Subscribe<User, LoadEventArgs>(GetInfo, EventPhases.Initiated);
}

public void GetInfo(User user, LoadEventArgs args, EventPhases phase)
{
    TcmUri id = user.Id;
    string name = user.Title;

    Console.WriteLine(id.ToString());
    Console.WriteLine(name);
}
I wrote the above code and added the assembly in the config file on the Tridion server, but no console window appears when a user logs in.
The event you were initially subscribing to is the processed phase of any identifiable object with any of its actions; that will trigger on basically every transaction happening in the SDL Tridion CMS, so it won't give you any indication of when a user logs in (it's basically everything which happens, all the time).
Probably one of the first things which happens after a user logs in is that their user info and application data are read. So what you should try is something along these lines:
private void Subscribe()
{
    EventSystem.Subscribe<User, LoadEventArgs>(GetInfo, EventPhases.Initiated);
}

public void GetInfo(User user, LoadEventArgs args, EventPhases phase)
{
    TcmUri id = user.Id;
    string name = user.Title;
}
But do keep in mind that this will also be triggered by other actions: things like viewing history, checking publish transactions, and possibly a lot more. I don't know how you can distinguish this action as being part of a user login, since there isn't an event triggered specifically for that.
You might want to check whether you can find anything specific to a login in the LoadEventArgs, for instance in its ContextVariables, EventStack, FormerLoadState or LoadFlags.
Edit:
Please note that the Event System runs inside the SDL Tridion core, so you won't ever see a console window pop up from anywhere. If you want to log information, you can include the following using statement:
using Tridion.Logging;
You will need to add a reference to Tridion.Logging.dll, which you can find in your ..\Tridion\bin\client directory. Then you can use the following logging statement in your code:
Logger.Write("message", "name", LoggingCategory.General, TraceEventType.Information);
You will find the message back in your Tridion Event log (provided you have set the logging level to show information messages too).
But probably the best option here is to just debug your event system, so you can directly inspect your objects when the event is triggered. Here you can find a nice blog article about how to set up debugging of your event system.
If you want to get the TCM URI of the current user, you can do so in a number of ways.
I would recommend one of these:
Using the Core Service, call GetCurrentUser and read the Id property.
Using TOM.NET, read the User.Id property of the current Session.
It looks like you want #2 in this case as your code is in the event system.
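For option 2, the shape would be something like the sketch below. Take it as an approximation: the exact namespaces and the disposability of Session should be verified against your TOM.NET version, and inside the event handler you already receive the User directly, so this is mainly useful elsewhere in core code:

using Tridion.ContentManager;
using Tridion.ContentManager.Security;

// A new Session represents the identity of the current CMS user.
using (Session session = new Session())
{
    User currentUser = session.User;
    TcmUri id = currentUser.Id;       // the TCM URI you are after
    string name = currentUser.Title;
}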

Why is Mage_Persistent breaking /api/wsdl?soap

I get the following error within Magento CE 1.6.1.0
Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/dev/env/var/www/user/dev/wdcastaging/lib/Zend/Controller/Response/Abstract.php:586) in /home/dev/env/var/www/user/dev/wdcastaging/app/code/core/Mage/Core/Model/Session/Abstract/Varien.php on line 119
when accessing /api/soap/?wsdl
Apparently, a session_start() is being attempted after the entire contents of the WSDL file have already been output, resulting in the error.
Why is Magento attempting to start a session after outputting all the datums? I'm glad you asked. It looks like controller_front_send_response_after is being hooked by Mage_Persistent in order to call synchronizePersistentInfo(), which in turn ends up causing that session_start() to fire.
The interesting thing is that this wasn't always happening; initially the WSDL loaded just fine for me. I racked my brains to try and see what customization might have been made to our install to cause this, but the tracing I've done seems to indicate that this is all happening entirely inside of core.
We have also experienced a tiny bit of (completely unrelated) strangeness with Mage_Persistent which makes me a little more willing to throw my hands up at this point and SO it.
I've done a bit of searching on SO and have found some questions related to the whole "headers already sent" thing in general, but not this specific case.
Any thoughts?
Oh, and the temporary workaround I have in place is simply disabling Mage_Persistent via the persistent/options/enable config data. I also did a little bit of digging as to whether it might be possible to observe an event in order to disable this module only for the WSDL controller (since that seems to be the only one having problems), but it looks like that module relies exclusively on this config flag to determine its enabled status.
UPDATE: Bug has been reported: http://www.magentocommerce.com/bug-tracking/issue?issue=13370
I'd report this as a bug to the Magento team. The Magento API controllers all route through standard Magento action controller objects, and all these objects inherit from the Mage_Api_Controller_Action class. This class has a preDispatch method:
class Mage_Api_Controller_Action extends Mage_Core_Controller_Front_Action
{
    public function preDispatch()
    {
        $this->getLayout()->setArea('adminhtml');
        Mage::app()->setCurrentStore('admin');
        $this->setFlag('', self::FLAG_NO_START_SESSION, 1); // Do not start standart session
        parent::preDispatch();
        return $this;
    }
    //...
}
which includes setting a flag to ensure normal session handling doesn't start for API methods.
$this->setFlag('', self::FLAG_NO_START_SESSION, 1);
So, it sounds like there's code in synchronizePersistentInfo that assumes the existence of a session object, and when it uses it, the session is initialized, resulting in the error you've seen. Normally this isn't a problem, as every other controller has initialized a session at this point, but the API controllers explicitly turn it off.
As far as fixes go, your best bet (and probably the quick answer you'll get from Magento support) will be to disable the persistent cart feature for the default configuration setting, but then enable it for the specific stores that need it. This will let carts persist on those stores while leaving the API, which runs under the default configuration scope, unaffected.
Coming up with a fix on your own is going to be uncharted territory, and I can't think of a way to do it that isn't terribly hacky/unstable. The most straightforward way would be a class rewrite of synchronizePersistentInfo that calls its parent method unless you've detected that this is an API request.
This answer is not meant to replace the existing answer. But I wanted to drop some code in here in case someone runs into this issue, and comments don't really allow for code formatting.
I went with a simple local code pool override of Mage_Persistent_Model_Observer_Session to exit out of the function for any URL routes that are within /api/*.
I'm not expecting this fix to need to be very long-lived or upgrade-friendly, because I'm expecting them to fix this in the next release or so.
public function synchronizePersistentInfo(Varien_Event_Observer $observer)
{
    ...
    if ($request->getRouteName() == 'api') {
        return;
    }
    ...
}

Logging status of application to console window

I am currently refactoring an application that prints its status to the console window. At the moment I am doing something like this:
Console.Write("Print some status.....");
//some code
Console.WriteLine("Done!");
Now while this works fine, all the logic is hidden between Console.WriteLine calls, which I find makes it very hard to read.
I don't know if there is a better way of doing this, but I just wanted to ask and see if anyone has come up with a better/cleaner way of printing application status to the console.
Any ideas?
Take a look at Log4Net. It handles everything, but it might be overkill for your app, no idea. However, knowing Log4Net will likely help you down the road someday, so maybe this is a good chance to learn it.
I second using Log4Net. It is pretty easy to use without invoking the difficult parts; just do the following:
In your application's Main() method, call:
log4net.Config.BasicConfigurator.Configure(new log4net.Appender.ConsoleAppender());
That sets up a basic Console logger that logs all messages to stdout.
In the class that needs logging, create a new ILog like so:
private static readonly log4net.ILog log = log4net.LogManager.GetLogger(typeof (MyClass));
Then, in the method that needs logging, call:
log.Debug("Print Some status ...");
Once you have all of this set up and working, look through the Log4Net documentation on how to set up more useful logging. You can do a lot of different types of logging without changing the logging calls in your code at all.
Why not use a Logger object that writes errors to a text file? You could come up with some "priority" error messages, such as: Logger.print(new priority("important"), "blabla");
This way, you could find in your file the exact time and all the messages you want.
If you absolutely want the console, you could use the priority on the console too, so it would only print what you tell the logger to print, such as network errors, etc.
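A minimal sketch of that idea; the Priority enum, the Logger name, and the threshold behavior are all hypothetical, just to make the shape concrete:

using System;
using System.IO;

public enum Priority { Important = 0, Normal = 1, Verbose = 2 }

public class Logger
{
    private readonly string _path;
    private readonly Priority _consoleThreshold;

    public Logger(string path, Priority consoleThreshold)
    {
        _path = path;
        _consoleThreshold = consoleThreshold;
    }

    public void Print(Priority priority, string message)
    {
        string line = string.Format("{0:u} [{1}] {2}", DateTime.Now, priority, message);

        // Everything always goes to the file...
        File.AppendAllText(_path, line + Environment.NewLine);

        // ...but only messages at or above the threshold hit the console.
        if (priority <= _consoleThreshold)
            Console.WriteLine(line);
    }
}

Usage would then look like: new Logger("app.log", Priority.Important).Print(Priority.Important, "network error");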