NLog SMTP failover - config

I have NLog configured to log errors to a Mail target using our corporate SMTP server:
<target name="email" xsi:type="Mail"
from="aaa#aaa.cz"
to="bbb#bbb.cz"
subject="KIT - ${logger}"
body="${message} ${exception:format=tostring}"
smtpServer="ccc.ddd.cz"
smtpPort="25"
smtpUserName="abc"
smtpPassword="abc" />
Everything works fine until the SMTP server is down. I would like to configure NLog to use a secondary SMTP server when the primary one is not available - a kind of SMTP failover.
Any ideas how to configure it in NLog? Is it possible to achieve this with NLog?

From the NLog FallbackGroup documentation on GitHub:
<targets>
  <target xsi:type="FallbackGroup" name="String" returnToFirstOnSuccess="Boolean">
    <target xsi:type="wrappedTargetType" ... />
    <target xsi:type="wrappedTargetType" ... />
    ...
    <target xsi:type="wrappedTargetType" ... />
  </target>
</targets>
List your targets in the order you want NLog to attempt them. Don't forget to set name="String" (in your case "email") and returnToFirstOnSuccess="Boolean". The latter is usually "true", but maybe not for you, depending on why you have to fail over. If the outage is usually just a transient problem, switching back to the primary makes sense. If the primary server tends to go down for extended periods, you may want to set it to false, so that a successful log via the secondary server doesn't make NLog switch back to the primary each time, only to end up falling back again.
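Applied to the configuration from the question, a sketch might look like this (the secondary smtpServer value and its credentials are placeholders for whatever your backup server is):
<targets>
  <target name="email" xsi:type="FallbackGroup" returnToFirstOnSuccess="true">
    <!-- primary SMTP server -->
    <target xsi:type="Mail"
            from="aaa@aaa.cz"
            to="bbb@bbb.cz"
            subject="KIT - ${logger}"
            body="${message} ${exception:format=tostring}"
            smtpServer="ccc.ddd.cz"
            smtpPort="25"
            smtpUserName="abc"
            smtpPassword="abc" />
    <!-- secondary SMTP server, used only when the primary fails -->
    <target xsi:type="Mail"
            from="aaa@aaa.cz"
            to="bbb@bbb.cz"
            subject="KIT - ${logger}"
            body="${message} ${exception:format=tostring}"
            smtpServer="backup.ddd.cz"
            smtpPort="25"
            smtpUserName="abc"
            smtpPassword="abc" />
  </target>
</targets>
Your logging rules keep pointing at "email"; only the target definition changes.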

Related

Elasticsearch logging with NLog fails in ASP.NET Core API

I am running some tests logging to an Elasticsearch instance using NLog in our API. The Elasticsearch instance is running inside Docker. If the API is executed under IIS Express I can log to Elasticsearch without a problem and I can see the "logstash" index created, but if I run the API inside a Docker container the logs never reach Elasticsearch and the index is never created.
My NLog config:
<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      throwConfigExceptions="true"
      internalLogLevel="info"
      internalLogFile="c:\temp\internal-nlog-AspNetCore3.txt">
  <extensions>
    <add assembly="NLog.Targets.ElasticSearch"/>
  </extensions>
  <targets>
    <target name="ElasticSearch" xsi:type="BufferingWrapper" flushTimeout="5000">
      <target xsi:type="ElasticSearch"/>
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="ElasticSearch" />
    <logger name="Microsoft.*" maxlevel="Info" final="true" />
  </rules>
</nlog>
And in my appsettings.json:
"ElasticsearchUrl": "http://192.168.0.9:9200",
Perhaps I'm missing something or I'm not understanding the interaction between the containers.
(1) Your question doesn't provide any details about the configuration of the two containers (one running your app, one running Elasticsearch).
I have an example of logging to Elasticsearch, configured with Kibana to view the results. It uses a different logger provider (Essential.LoggerProvider.Elasticsearch), but it has a docker-compose file that shows the connection between Elasticsearch and Kibana: https://github.com/sgryphon/essential-logging/tree/master/examples/HelloElasticsearch
# Docker Compose file for E-K stack
# Run with:
# docker-compose up -d
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
    ...
    networks:
      - elastic-network
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:7.6.1
    ...
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    networks:
      - elastic-network
networks:
  elastic-network:
    driver: bridge
The relevant parts show setting up a network bridge between the two Docker containers, and then the connection between them.
While "http://192.168.0.9:9200" might be the correct connection from outside (your IIS) into Elasticsearch, you would have to check whether that is how your API container sees the Elasticsearch container; for example, in the compose file above Kibana sees Elasticsearch as "http://elasticsearch:9200".
You would need to update the question with details of your Docker configuration, e.g. the command lines you run to start the containers or a docker-compose file, to work out why they can't see each other.
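For illustration only, a minimal sketch of what a combined compose file could look like if both containers are started from the same docker-compose.yml (the api service name, build context and ports are made-up placeholders, not taken from the question):
version: '3.7'
services:
  api:
    build: .            # your ASP.NET Core API image
    ports:
      - "5000:80"
    networks:
      - elastic-network
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.6.1
    environment:
      - discovery.type=single-node   # single-node dev setup
    ports:
      - "9200:9200"
    networks:
      - elastic-network
networks:
  elastic-network:
    driver: bridge
With a layout like this the API container would reach Elasticsearch as http://elasticsearch:9200 (the service name), so the ElasticsearchUrl setting would need to point there rather than at a host IP.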
(2) You might want to check that it really is working from IIS, as it seems unusual that NLog would create an index "logstash-" ... normally Logstash would create that index and NLog should create its own, e.g. log4net creates index "log-", Essential.LoggerProvider.Elasticsearch uses "dotnet-", etc.
Disclaimer: I am the author of Essential.LoggerProvider.Elasticsearch

Get NuGet to pass a client certificate to a private ProGet server using SSL

I have a ProGet server that currently uses SSL and requires a client certificate in order to communicate with it. We would like to be able to use this server directly from the command line or within the Visual Studio package manager.
When accessed via a browser there are no issues with viewing the repository. When using nuget.exe on the command line the result is 403 Forbidden. I have used Fiddler to monitor the request, and it shows that the server is asking for a client certificate; Fiddler allows you to inject the required certificate, and the NuGet request is then successful.
Is it possible to provide a client certificate when using NuGet:
nuget install PackageName -Source https://myhost -Cert ???
Or, with a setup like this, are we going to have to fall back to using an API key to gain access?
Are we able to provide the certificate when using Visual Studio?
Starting from NuGet 5.7.2 you can use the client-cert feature.
Configuration example:
<configuration>
  ...
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
    <add key="Contoso" value="https://contoso.com/packages/" />
    <add key="Example" value="https://example.com/bar/packages/" />
  </packageSources>
  ...
  <clientCertificates>
    <storeCert packageSource="Contoso"
               storeLocation="currentUser"
               storeName="my"
               findBy="thumbprint"
               findValue="4894671ae5aa84840cc1079e89e82d426bc24ec6" />
    <fileCert packageSource="Example"
              path=".\certificate.pfx"
              password="..." />
    <fileCert packageSource="Bar"
              path=".\certificate.pfx"
              clearTextPassword="..." />
  </clientCertificates>
  ...
</configuration>
You can also use the nuget client-certs CLI command for configuration.
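For example, registering the store certificate from the config above could look roughly like this (run nuget help client-certs to confirm the exact options available in your NuGet version):
nuget client-certs add -PackageSource Contoso -StoreLocation currentUser -StoreName my -FindBy thumbprint -FindValue 4894671ae5aa84840cc1079e89e82d426bc24ec6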
Some years later, I realised I never posted the answer to this issue. In order to get NuGet to use the certificate, it had to be added to the Credential Manager in Windows as a certificate-based credential. NuGet then automatically picked it up when communicating with a matching URL.

How to set up Membrane reverse proxy for 2 hosts with common authentication?

I have the following requirement. Please advise on how to set up the proxies.xml properly.
localhost/... user authentication is required from the root level down (basically the user needs to be authenticated once to access the whole website, which includes the two subsystems below)
localhost/subsys1/... all requests under this url should go to host1:8081
localhost/subsys2/... all requests under this url should go to host2:8082
I tried to set up the proxies.xml this way, but it doesn't seem to work.
<router>
  <serviceProxy port="80">
    <path>/</path>
    <basicAuthentication>
      <user name="guest" password="guest"/>
    </basicAuthentication>
  </serviceProxy>
  <serviceProxy port="80">
    <path>/subsys1</path>
    <target host="host1" port="8081"/>
  </serviceProxy>
  <serviceProxy port="80">
    <path>/subsys2</path>
    <target host="host2" port="8082"/>
  </serviceProxy>
</router>
Thanks,
Denny
I think the basic auth should be placed on the two proxied services, as they will do the auth part. If you are trying to do it globally, I haven't tried it that way and I am not sure it can be configured as such.
http://www.membrane-soa.org/service-proxy-doc/4.2/interceptors/examples.htm. What I also found useful is that Membrane Service Proxy is built on Spring :-)
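For example, a sketch of proxies.xml with the interceptor repeated on each proxied path, reusing the hosts and the guest user from the question (untested, just to illustrate the placement):
<router>
  <serviceProxy port="80">
    <path>/subsys1</path>
    <basicAuthentication>
      <user name="guest" password="guest"/>
    </basicAuthentication>
    <target host="host1" port="8081"/>
  </serviceProxy>
  <serviceProxy port="80">
    <path>/subsys2</path>
    <basicAuthentication>
      <user name="guest" password="guest"/>
    </basicAuthentication>
    <target host="host2" port="8082"/>
  </serviceProxy>
</router>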

Glassfish JMS queue with HornetQ: Store locally and Forward remotely

I need some precise steps (with reference to the GlassFish docs) for the following scenario:
How to create JMS queues to support "store locally and forward remotely". The remote system is HornetQ.
The remote connectivity should support SSL and user/password authentication
It should support automatic retry and configuration of the number of retries.
In case of any failure, it should be possible to select the JMS messages and resend them in bulk.
I already went through some of the GlassFish docs, but this needs to be further validated by the experts.
A simple scenario that is still not working: "send a JMS message to sourceQueue and have the JMS bridge service transfer it to targetQueue". Here are the configurations:
A. domain.xml (extract)
<jms-service default-jms-host="default_JMS_host" type="EMBEDDED">
  <jms-host host="localhost" name="default_JMS_host" lazy-init="false">
    <property name="imq.bridge.bridge1.type" value="jms"></property>
    <property name="imq.bridge.bridge1.xmlurl" value="file:///C:/TEMP/bridge.xml"></property>
    <property name="imq.bridge.bridge1.autostart" value="true"></property>
    <property name="imq.bridge.bridge1.logfile.limit" value="0"></property>
    <property name="imq.bridge.bridge1.logfile.count" value="1"></property>
    <property name="imq.bridge.enabled" value="true"></property>
    <property name="imq.bridge.admin.user" value="admin"></property>
    <property name="imq.bridge.admin.password" value="admin"></property>
    <property name="imq.bridge.activelist" value="bridge1"></property>
  </jms-host>
</jms-service>
B. bridge.xml (bridge configuration)
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE jmsbridge SYSTEM "sun_jmsbridge_1_0.dtd">
<jmsbridge name="bridge1">
<link name="link1">
<enabled ="true"></enabled>
<source connection-factory-ref=”jms/__defaultConnectionFactory" destination-ref="sourceQueue"></source>
<target connection-factory-ref="jms/__defaultConnectionFactory" destination-ref="targetQueue "></target>
</link>
<connection-factory ref-name="jms/__defaultConnectionFactory"/>
<connection-factory ref-name="jms/__defaultConnectionFactory"/>
<destination ref-name="sourceQueue" type="queue" lookup-name="sourceQueue"/>
<destination ref-name="targetQueue" type="queue" lookup-name="targetQueue"/>
</jmsbridge>
GlassFish deploys the GlassFish JMS server. If you want to talk to HornetQ you need to use the HornetQ libraries and the proper API (either core or JMS) to talk to the HornetQ server.
If you need XA integration through MDBs then you will need to deploy the resource adapter and do the proper recovery integration. Look at the GlassFish docs on how to deploy an external resource adapter, but that's an area that nobody at Red Hat has tested yet, and given that GlassFish is being discontinued I doubt it will happen any time soon.
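For reference, deploying an external resource adapter on GlassFish generally follows this pattern (the .rar file name, pool name and JNDI name below are placeholders, and the actual HornetQ adapter artifact depends on your HornetQ version):
# deploy the resource adapter archive
asadmin deploy hornetq-ra.rar
# create a connector connection pool backed by the adapter
asadmin create-connector-connection-pool --raname hornetq-ra --connectiondefinition javax.jms.ConnectionFactory hornetqPool
# expose the pool through JNDI so MDBs and application code can look it up
asadmin create-connector-resource --poolname hornetqPool jms/hornetqCF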
Another option is to deploy the JMS bridge within JBoss / HornetQ, where any message sent on GlassFish JMS would be consumed on HornetQ through the bridging process.

NLog in WCF Service

Can I use NLog in a WCF Service? I am trying to but cannot get it to work.
First I set up a simple configuration in a Windows Forms application to check that I was setting up correctly and this wrote the log file fine (I am writing to a network location using name and not IP address).
I then did exactly the same thing in the WCF Service. It did not work.
To check permissions I then added some code to use a TextWriter.
TextWriter tw = new StreamWriter(fileName);
tw.WriteLine(DateTime.Now);
tw.Close();
This worked OK so I know I can write to the location.
Check that your NLog.config file is in the same directory as your .svc file and NOT the Bin directory.
If you've just added the config file to the WCF project and then published it, you will probably find your config file has been copied to the bin directory, which is why NLog can't find it. Move it up a level, then restart the website hosting the service (to make sure the change is picked up).
This had me stumped for a while this morning!
Put your NLog config in the web.config file. Like so:
<?xml version="1.0"?>
<configuration>
<configSections>
<section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>
</configSections>
. . . (lots of web stuff)
<nlog>
<targets>
<target name="file" xsi:type="File" fileName="${basedir}/logs/nlog.log"/>
</targets>
<rules>
<logger name="*" minlevel="Trace" writeTo="file" />
</rules>
</nlog>
</configuration>
See my comment to your original question for how to turn on NLog's internal logging.
To turn on NLog's internal logging, modify the top of your NLog config to look like this:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.mono2.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Trace"
      internalLogFile="nlog_log.log">
The key parts are internalLogLevel and internalLogFile.
You can also set internalLogToConsole to true or false to direct the internal logging to the console.
There is another setting, throwExceptions, that tells NLog whether or not to throw exceptions. Ordinarily, this is set to false once logging is successfully configured and working. You can set it to true to help determine if your problem is due to an NLog error.
So, if you had all of those options enabled, the top of your NLog configuration might look like this:
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.mono2.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      autoReload="true"
      internalLogLevel="Trace"
      internalLogFile="nlog_log.log"
      internalLogToConsole="true"
      throwExceptions="true">
My first guess is that NLog is not finding the config information. Are you using an external config file (NLog.config) or "inline" configuration (in your app.config or web.config)? In your project, are your config file(s) marked (in Properties) as Copy Always?
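If it is an external NLog.config, "Copy Always" in the file's Properties corresponds to an entry along these lines in the project file (a sketch; the item type may be Content rather than None depending on how the file was added):
<ItemGroup>
  <None Include="NLog.config">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </None>
</ItemGroup>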