I want to log information from the Spider in some IActionResult to an individual file named with the date.
For example:
C:\20210317spider.txt
C:\20210318spider.txt
C:\20210319spider.txt
I don't want to use a general call such as Logger.LogInformation that logs all information into one shared file.
How can I achieve this? Thank you.
You can use the ${shortdate} layout renderer inside the fileName layout of the NLog FileTarget:
<nlog>
  <targets>
    <target type="file" name="spiderFile" fileName="C:/${shortdate}spider.txt" />
  </targets>
  <rules>
    <logger name="Spider" writeTo="spiderFile" final="true" />
  </rules>
</nlog>
See also the NLog tutorial and the list of available layout renderers that can be used in an NLog Layout.
Then you can do this with Microsoft ILoggerFactory:
public class MyClass
{
    private readonly ILogger _spiderLogger;

    public MyClass(ILoggerFactory loggerFactory)
    {
        _spiderLogger = loggerFactory.CreateLogger("Spider");
        _spiderLogger.LogInformation("Hello little spider");
    }
}
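If NLog isn't already wired into the Microsoft logging pipeline, a minimal sketch of that wiring (assuming the NLog.Web.AspNetCore package; the same pattern appears in a later question below) is:
// Minimal sketch, assuming the NLog.Web.AspNetCore package is installed.
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders(); // drop the default providers
            logging.SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Trace); // let the NLog rules do the filtering
        })
        .UseNLog(); // route ILogger calls to the NLog targets above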
I have Serilog parameters defined in web.config as below:
<appSettings>
  <add key="serilog:minimum-level" value="Debug" />
  <add key="serilog:using:RollingFile" value="Serilog.Sinks.RollingFile" />
  <add key="serilog:write-to:File.rollingInterval" value="Day"/>
  <add key="serilog:write-to:RollingFile.pathFormat" value="c:\temp\log.txt" />
</appSettings>
In my WCF service, the code is as below:
public class CalculateService : ICalculateService
{
    private Logger _logger;

    public CalculateService()
    {
        _logger = new LoggerConfiguration()
            .ReadFrom.AppSettings()
            .CreateLogger();
        _logger.Debug("calling constructor");
    }
}
When I call this service from the console application, a separate log file is created each time. I want to initialize the logger only once and create a new log file each day.
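One way to do that (a sketch, not from the original post) is to make the logger a static field, so that ReadFrom.AppSettings() runs only once per process and the rolling-file sink keeps writing to the current day's file:
// Sketch only: a single static logger shared by every service instance.
public class CalculateService : ICalculateService
{
    // Initialized once per AppDomain, so the configuration is read a single time
    // and the sink keeps appending to the same per-day file.
    private static readonly Logger _logger = new LoggerConfiguration()
        .ReadFrom.AppSettings()
        .CreateLogger();

    public CalculateService()
    {
        _logger.Debug("calling constructor");
    }
}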
I'm trying to remove the Server response header from an Azure Web App (with an ASP.NET Core application).
After many attempts at changing the web.config and removing the header in app code using middleware, Microsoft doesn't give up and still sets the response header to Server: Microsoft-IIS/10.0 :)
The problem appears only when I access the server over http (not https). The response code from the server is 301, and this is the only response that has the Server header.
Checking the logs, I was not able to find any request to http://, and perhaps this is why I'm not able to remove the header: the request is not processed by my application code.
One solution I'm considering is to disable the Azure HTTPS Only setting and do the redirect to https in my code (I tested it and it works - the Server header is removed).
Is there another workaround that doesn't involve disabling the HTTPS Only option?
Here is what I tried
Startup.cs
public void Configure(IApplicationBuilder app)
{
    app.Use(async (context, next) =>
    {
        context.Response.Headers.Add("server", string.Empty);
        await next();
    });

    app.UseHttpsRedirection();
}
web.config
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.web>
    <httpRuntime enableVersionHeader="false" />
    <!-- Removes the ASP.NET version header. -->
  </system.web>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="Server" />
        <remove name="X-Powered-By" />
      </customHeaders>
      <redirectHeaders>
        <clear />
      </redirectHeaders>
    </httpProtocol>
    <security>
      <requestFiltering removeServerHeader="true" />
      <!-- Removes the Server header in IIS 10 or later and also in Azure Web Apps -->
    </security>
    <rewrite>
      <outboundRules>
        <rule name="Change Server Header"> <!-- if you're not removing it completely -->
          <match serverVariable="RESPONSE_Server" pattern=".+" />
          <action type="Rewrite" value="Unknown" />
        </rule>
      </outboundRules>
    </rewrite>
  </system.webServer>
</configuration>
UPDATE
When an http:// URL is requested, IIS processes it before any application code runs, so we can't control that response from code; we can only change it on the server itself, for example with scripts or tools. On Azure we have no way to operate the machine directly like a physical server, so after some exploration I suggest using Azure Front Door to deal with this problem. Hiding the server information behind a proxy should be a better approach.
After my test the server information is hidden; you can refer to this document. We can see from the picture that there is no 301 redirect request and no server information in the other requests.
PREVIOUS
You need to modify the Global.asax.cs and Web.config files in your program.
In Global.asax.cs:
public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);

        MvcHandler.DisableMvcResponseHeader = true;
        PreSendRequestHeaders += Application_PreSendRequestHeaders;
    }

    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        //HttpContext.Current.Response.Headers.Remove("Server");
        HttpContext.Current.Response.Headers.Set("Server", "N/A");
    }
}
And in Web.config:
<system.webServer>
  <modules runAllManagedModulesForAllRequests="true">
  </modules>
  <httpProtocol>
    <customHeaders>
      <remove name="X-Powered-By" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
Then you can deploy your app. After the above code changes, requests to the API or to static resources will show that the Server header has been modified; of course, it can also be removed entirely with Remove.
You can also handle specific cases based on the HTTP status code:
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    //HttpContext.Current.Response.Headers.Remove("Server");
    int statusCode = HttpContext.Current.Response.StatusCode;
    // Inspect statusCode here if you only want to change the header for
    // particular responses, e.g. an HTTP 301 redirect.
    HttpContext.Current.Response.Headers.Set("Server", "N/A");
}
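For example, a minimal sketch of that idea, only overriding the header for redirect responses (the 301/302 check is illustrative and not part of the original answer):
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    int statusCode = HttpContext.Current.Response.StatusCode;

    // Illustrative: only rewrite the Server header for redirect responses.
    if (statusCode == 301 || statusCode == 302)
    {
        HttpContext.Current.Response.Headers.Set("Server", "N/A");
    }
}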
Whenever an error occurs within our API, we end up getting multiple emails for a single error. Based on the log messages, we can see that these other emails seem to be generated because various Microsoft libraries call something like _logger.LogError, in addition to our own _logger.LogError call that happens when we handle the error.
For example, when there is a database timeout, we see 4 emails from these different classes:
Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware
Microsoft.EntityFrameworkCore.Database.Command
Microsoft.EntityFrameworkCore.Query
Web.Controllers.ErrorController (Our own exception handler; this is
the only one we want to see)
The last one is the only one that has our own error formatting with helpful information in it such as current user, etc. The others just contain a stack trace, which is already within our own formatted email.
We can't figure out for sure where these other log messages are coming from, but the most likely explanation I can think of is that Microsoft's libraries are calling _logger.LogError() internally, and our NLog configuration is handling every LogError call instead of just our own.
How can we prevent these other log statements from being logged and especially emailed to us?
This is our setup in Program.cs:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Trace);
        })
        .UseNLog();
You can filter this both in .NET Core (because you're using the Microsoft.Extensions.Logging integration, and Microsoft sends its messages to that) and in NLog.
Configure in .NET Core
Modify your config, e.g. appsettings.json. For example, allow at least Error for all Microsoft.* and Warning for Microsoft.EntityFrameworkCore.*:
{
  "Logging": {
    "IncludeScopes": false,
    "LogLevel": {
      "Default": "Trace",
      "Microsoft": "Error",
      "Microsoft.EntityFrameworkCore": "Warning"
    }
  }
}
Note that this isn't NLog-specific, so you can't use the NLog level names; see the possible level names.
Read more about this approach here.
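The same category filters can also be applied in code (a sketch based on the Program.cs setup above; the AddFilter calls are the only addition):
// Sketch: the same filters applied in code rather than in appsettings.json.
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(Microsoft.Extensions.Logging.LogLevel.Trace);
            logging.AddFilter("Microsoft", Microsoft.Extensions.Logging.LogLevel.Error);
            logging.AddFilter("Microsoft.EntityFrameworkCore", Microsoft.Extensions.Logging.LogLevel.Warning);
        })
        .UseNLog();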
Configure NLog
Or you can do the filtering in the NLog config.
In NLog.config, edit the <rules>. The rules are processed from top to bottom.
You can filter on the logger namespace with the name attribute (wildcards * are allowed).
Without a writeTo attribute, the matching logs are discarded.
If a rule has final="true" and matches an event, the following rules are not processed for that event.
For example:
<rules>
  <!--All logs, including from Microsoft-->
  <logger name="*" minlevel="Trace" writeTo="allfile" />
  <!--Skip non-critical Microsoft logs and so log only own logs-->
  <logger name="Microsoft.*" maxlevel="Info" final="true" /> <!-- BlackHole without writeTo -->
  <logger name="*" minlevel="Trace" writeTo="ownFile-web" />
</rules>
You could read about the nlog.config rules here.
The same approach is also possible from code; see here and the sketch below.
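A rough sketch of the same rules built from code instead of NLog.config (the target names match the XML above, but the file paths are placeholders):
// Sketch only: equivalent filtering configured programmatically.
var config = new NLog.Config.LoggingConfiguration();

var allFile = new NLog.Targets.FileTarget("allfile") { FileName = "all.log" };
var ownFile = new NLog.Targets.FileTarget("ownFile-web") { FileName = "own.log" };

// All logs, including from Microsoft.
config.AddRule(NLog.LogLevel.Trace, NLog.LogLevel.Fatal, allFile);

// Skip non-critical Microsoft logs: a final rule whose target discards everything.
var skipMicrosoft = new NLog.Config.LoggingRule("Microsoft.*", new NLog.Targets.NullTarget());
skipMicrosoft.EnableLoggingForLevels(NLog.LogLevel.Trace, NLog.LogLevel.Info);
skipMicrosoft.Final = true;
config.LoggingRules.Add(skipMicrosoft);

// Everything that is left goes to our own file.
config.AddRule(NLog.LogLevel.Trace, NLog.LogLevel.Fatal, ownFile);

NLog.LogManager.Configuration = config;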
I have been able to read the properties from a table in the database as described here: Reading mule config from database.
However, I am not able to apply these properties in the flow configs, nor to access them as outbound properties in the Java processor classes through the MuleEventContext's message.
Update: below is my flow XML code
<flow name="push-data">
<poll doc:name="Push Poll">
<fixed-frequency-scheduler frequency="${push.data.poll.frequency}" timeUnit="MINUTES" />
<set-property propertyName="tempFilePath" value="${temp.csv.file.path}" doc:name="Property"/>
<component class="com.reports.processors.PushDataProcessor" doc:name="PushDataProcessor"/>
<logger message="worked!!!" level="INFO" doc:name="Logger"/>
<exception-strategy ref="push-report-data_Catch_Exception_Strategy" doc:name="Reference Exception Strategy"/>
</flow>
I am trying to set the properties "push.data.poll.frequency" and "temp.csv.file.path". Earlier, these properties existed in the "mule-app.properties" file.
So, my question is: how do I make the properties loaded from the database available to the flow? Please keep in mind that I have already loaded the properties from the database as described in the link above. I just want the flow to resolve these properties rather than taking them from mule-app.properties.
EDIT: To add some more information,
I am using a class with the @Configuration annotation. The class, as described in the link above, loads the properties from the database. Below is the source code.
@Configuration(name="databasePropertiesProvider")
@Component
public class DatabasePropertiesProvider {

    @Autowired(required = true)
    private MyService myService;

    @Bean
    public Properties getProperties() throws Exception {
        Properties properties = new Properties();
        // get properties from the database
        Map<String, String> propertiesMap = myService.getMuleAppPropertiesFromDB();
        if (null != propertiesMap && !CollectionUtils.isEmpty(propertiesMap))
            properties.putAll(propertiesMap);
        return properties;
    }

    @Bean
    public static PropertySourcesPlaceholderConfigurer placeHolderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
}
But this class runs after the app is initialized. Previously, I had configured the PropertySourcesPlaceholderConfigurer in the XML config with DatabasePropertiesProvider as the factory bean. But since DatabasePropertiesProvider depends on the MyService class, and that dependency was not getting resolved because the MyService bean was not initialized in the container before the property configuration, I had to make some changes to DatabasePropertiesProvider (the version above) so that it runs after app initialization.
But now, the problem is that I am unable to access those properties that are loaded from the database.
UPDATE 2: I found a solution. Apparently I was trying to autowire the @Service MyService into the databasePropertiesProvider class. The autowiring was failing with null, which is why I made further modifications to the databasePropertiesProvider class so that it runs after the app is initialized.
Now that I look at it, I realize that I don't need to go through all the service and repository layers to reach the database. I moved the query execution code from the repository class into the databasePropertiesProvider class, and now the properties are loaded at initialization time and the flows can resolve them without any changes.
Thanks for all your help guys. Made me do a lot of thinking.
Regards,
Zulfiqar
I found a solution. Apparently I was trying to autowire the @Service MyService into the databasePropertiesProvider class. The autowiring was failing with null, which is why I made further modifications to the databasePropertiesProvider class so that it runs after the app is initialized.
Now that I look at it, I realize that I don't need to go through all the service and repository layers to reach the database. I moved the query execution code from the repository class into the databasePropertiesProvider class, and now the properties are loaded at initialization time and the flows can resolve them without any changes.
The whole code looks like this
XML Config:-
<bean class="net.intigral.reports.provider.properties.DatabasePropertiesProvider" id="databasePropertiesProvider">
<property name="entityManager" ref="entityManager" />
</bean>
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
<property name="properties">
<bean factory-bean="databasePropertiesProvider" factory-method="getProperties" />
</property>
</bean>
Java Code:-
public class DatabasePropertiesProvider {

    EntityManager entityManager;

    public Properties getProperties() throws Exception {
        Properties properties = new Properties();
        // get properties from the database
        Map<String, String> propertiesMap = getMuleAppPropertiesFromDB();
        if (null != propertiesMap && !CollectionUtilsIntg.isEmpty(propertiesMap))
            properties.putAll(propertiesMap);
        return properties;
    }

    public EntityManager getEntityManager() {
        return entityManager;
    }

    public void setEntityManager(EntityManager entityManager) {
        this.entityManager = entityManager;
    }

    @SuppressWarnings("unchecked")
    private Map<String, String> getMuleAppPropertiesFromDB() {
        Map<String, String> collect = null;
        String query = "select key, value from MuleAppProps muleAppProps";
        List<Object[]> results = entityManager.createQuery(query).getResultList();
        if (CollectionUtilsIntg.isNotEmpty(results)) {
            collect = results.stream().collect(Collectors.toMap(o -> (String) o[0], o -> (String) o[1]));
        }
        return collect;
    }
}
Now I am able to load the properties in the flows the same way I used to load them from mule-app.properties.
Let your database contain the property values as key/value pairs (for example file.name and file.path, as used below).
You can refer to the simple example below for reading the values from the database:
<spring:beans>
  <spring:bean id="dataSource" name="myCon" class="org.enhydra.jdbc.standard.StandardDataSource">
    <spring:property name="url" value="jdbc:sqlserver://YourIpAddress\\SQLEXPRESS:1433;databaseName=YourDB;user=sa;password=yourDBPassword" />
    <spring:property name="driverName" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
  </spring:bean>
  <!-- Required to connect to datasource -->
  <spring:bean name="PropertyPlaceholderConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <spring:property name="properties" ref="CommonsConfigurationFactoryBean" />
  </spring:bean>
  <spring:bean name="CommonsConfigurationFactoryBean"
    class="org.springmodules.commons.configuration.CommonsConfigurationFactoryBean">
    <spring:constructor-arg ref="DatabaseConfiguration" />
  </spring:bean>
  <spring:bean name="DatabaseConfiguration" class="org.apache.commons.configuration.DatabaseConfiguration">
    <spring:constructor-arg type="javax.sql.DataSource" ref="dataSource" />
    <spring:constructor-arg index="1" value="YourTableName" />
    <spring:constructor-arg index="2" value="Key" />
    <spring:constructor-arg index="3" value="Value" />
  </spring:bean>
</spring:beans>

<db:generic-config name="Database_Configuration" dataSource-ref="dataSource" doc:name="Generic Database Configuration" />
<http:listener-config name="HTTP_Listener_Configuration" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration" />

<flow name="firstflow" processingStrategy="synchronous">
  <http:listener config-ref="HTTP_Listener_Configuration" path="/test" doc:name="HTTP" />
  <set-payload value="File name ${file.name} File path ${file.path}" doc:name="Set Payload" />
</flow>
You need to add commons-configuration.jar, spring.jar, and spring-modules-jakarta-commons.jar to your classpath.
If you want to access the property values in a Java class, you can inject them using a Spring property in the init-method of a Spring bean.
Refer to: http://www.codeproject.com/Articles/28893/Loading-Application-Properties-from-a-Database
I have a small set of ServiceStack REST services that is using NLog 2.1 (from NuGet) for logging.
My test server is running:
Windows 7
IIS 7.5
.NET 4.5
NLog config:
<nlog throwExceptions="true" internalLogToConsole="true" internalLogLevel="Debug" xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="c" xsi:type="Console" />
    <target name="f1" xsi:type="File" fileName="C:\logs\test.log" />
  </targets>
  <rules>
    <logger name="*" writeTo="c,f1" />
  </rules>
</nlog>
My NLog config is exceedingly simple... just trying to get it working...
and in this configuration, everything works fine. NLog creates the log files correctly.
On my DEVELOPMENT machine, I am using:
Windows 7
Xamarin Studio / XSP4
Mono 3.2.3
Here is my Application_Start...
protected void Application_Start() {
    LogManager.LogFactory = new NLogFactory();
    ILog log = LogManager.GetLogger(typeof(Global));
    log.Info("Application_Start called");

    try {
        new AppHost().Init();
    } catch (Exception e) {
        log.Error("Exception caught initializing AppHost");
    }
}
In this configuration, my service's AppHost().Init() throws an exception as ServiceStack is registering my services in ServiceController.cs. I believe that part is irrelevant except that it is the first time something is logged outside of Application_Start (because both of the calls in Application_Start work... the log.info before the exception and the log.error after the exception).
Here is the exception that is shown:
The most relevant bit is that there was a System.NotImplementedException thrown at NLog.Internal.FileAppenders.BaseFileAppender.WindowsCreateFile (System.String fileName, Boolean allowConcurrentWrite).
I have found a workaround (in the accepted answer below). Hopefully anyone else who runs into this will quickly come upon this solution.
Some digging around on Google led me to this NLog pull request:
Avoid Win32-specific file functions in Mono where parts not implemented.
It appears that this change tries to use the preprocessor to NOT call WindowsCreateFile at all. However, for some reason, this still executes.
So I then went to check the newest version of BaseFileAppender.cs in the NLog GitHub repository to make sure someone didn't at some later point break this again.
#if !NET_CF && !SILVERLIGHT && !MONO
    try
    {
        if (!this.CreateFileParameters.ForceManaged && PlatformDetector.IsDesktopWin32)
        {
            return this.WindowsCreateFile(this.FileName, allowConcurrentWrite);
        }
    }
    catch (SecurityException)
    {
        InternalLogger.Debug("Could not use native Windows create file, falling back to managed filestream");
    }
#endif
Hmmm... it's still there. What gives? Why doesn't MONO seem to be defined by the preprocessor, thus allowing this block to execute? I'm not sure. Before I started down that path of investigation, I noticed another change...
if (!this.CreateFileParameters.ForceManaged && ...
So after tracing that ForceManaged boolean back to its origin, it appeared that I could set forceManaged="true" on my FileTarget declaration in my NLog config. Could it really be that simple?!
<nlog throwExceptions="true" internalLogToConsole="true" internalLogLevel="Debug" xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="c" xsi:type="Console" />
    <target name="f1" xsi:type="File" forceManaged="true" fileName="C:\logs\test.log" />
  </targets>
  <rules>
    <logger name="*" writeTo="c,f1" />
  </rules>
</nlog>
Once that change was made, everything worked... the call to WindowsCreateFile that was throwing the exception was skipped & a managed filestream was used instead, which works great. Hallelujah!
Lastly, I could not find the forceManaged flag anywhere in the NLog documentation. I would likely have found this solution sooner if it had been documented.