Serilog & Elasticsearch with ASP.NET Core in Docker not working - asp.net-core

I have installed both Elasticsearch and Kibana 8.6.1.
I am using Serilog.AspNetCore 6.1, Serilog.Enrichers.Environment 2.2, and Serilog.Sinks.Elasticsearch 9.0.
I have configured my application with Serilog to log data to Elasticsearch:
configuration
    .Enrich.FromLogContext()
    .Enrich.WithMachineName()
    .WriteTo.Debug()
    .WriteTo.Console()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri(elasticConfigurationSettings.Uri))
    {
        IndexFormat = $"applogs-{context.HostingEnvironment.ApplicationName?.ToLower().Replace(".", "-")}-{context.HostingEnvironment.EnvironmentName?.ToLower().Replace(".", "-")}-{DateTime.UtcNow:yyyy-MM}",
        TypeName = null,
        AutoRegisterTemplate = true
    })
    .Enrich.WithProperty("Environment", context.HostingEnvironment.EnvironmentName ?? "Environment Missing")
    .Enrich.WithProperty("Application", context.HostingEnvironment.ApplicationName ?? "Application Unknown")
    .ReadFrom.Configuration(context.Configuration);

SelfLog.Enable(Console.Error);
Below are my Serilog settings in the appsettings.json file:
"Serilog": {
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Information",
"System": "Warning"
}
}
},
"ElasticConfiguration": {
"Uri": "http://localhost:9200"
}
I have an HTTP handler that has logging statements:
logger.LogInformation("Sending request to {Url}", request.RequestUri);
logger.LogInformation("Received a success response from {Url}", response.RequestMessage.RequestUri);
but I never see any data in Kibana. Any help would be appreciated.
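For completeness, the handler is shaped roughly like this (simplified sketch; the class name and wiring are illustrative, only the two log statements above are verbatim):

public class LoggingHandler : DelegatingHandler
{
    private readonly ILogger<LoggingHandler> logger;

    public LoggingHandler(ILogger<LoggingHandler> logger) => this.logger = logger;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Log the outgoing request, forward it, then log the response.
        logger.LogInformation("Sending request to {Url}", request.RequestUri);

        var response = await base.SendAsync(request, cancellationToken);

        logger.LogInformation("Received a success response from {Url}", response.RequestMessage.RequestUri);
        return response;
    }
}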

You have to network your containers with each other. You can do this by adding a Container Orchestrator and making your project a multi-container app. See more information about it in this link.

I was able to get things working.
I changed
"ElasticConfiguration": {
"Uri": "http://localhost:9200"
}
to
"ElasticConfiguration": {
"Uri": "http://elasticsearch:9200"
}
since this is the name of the container within Docker.
I updated my docker-compose file to change a few settings for Elasticsearch 8.6.1:
- ES_JAVA_OPTS=-Xms512m -Xmx512m
- xpack.security.enabled=false
- discovery.type=single-node
- ELASTIC_CLIENT_APIVERSIONING=1
I updated my Kibana settings as well
- ELASTICSEARCH_HOSTS=http://elasticsearch:9200
In addition, I came across this SO post discussing Serilog and ES8, which helped me update my common logging library
I am now able to launch Kibana and see my logs.
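For anyone else hitting this, the relevant parts of my docker-compose file ended up looking roughly like the sketch below (service names, images, and ports are illustrative; the environment entries are the ones listed above). The important part is that both services sit on the same compose network, so the app and Kibana can reach Elasticsearch by its service name:

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.1
    environment:
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - xpack.security.enabled=false
      - discovery.type=single-node
      - ELASTIC_CLIENT_APIVERSIONING=1
    ports:
      - "9200:9200"

  kibana:
    image: docker.elastic.co/kibana/kibana:8.6.1
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch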

Related

How to change values in appSettings.json file based on environment

I am building an API in a .NET Core project and have to store some key/value pairs in the appSettings.json file for the JWT token, for example the valid issuer and valid audience. For development I have my appSettings.json file as below:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "jwt": {
    "JwtIssuer": "https://localhost:44332/",
    "JwtAudience": "https://localhost:44332/",
    "JwtSecretKey": "my super secure key"
  },
  "AllowedHosts": "*"
}
This is fine for the development environment, but when I have to deploy to, say, Azure, do I need to change these URLs manually in the appSettings.json file, or is there a more efficient way to manage them so they get updated automatically based on the environment?
By default, ConfigurationBuilder looks for an appsettings.<EnvironmentName>.json file, based on the environment you are running in: when you run under IIS Express you are in Development, and when you deploy your application the environment is Production. That is why you need an appsettings.Production.json.
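As a rough sketch of what CreateDefaultBuilder wires up for you behind the scenes (simplified; not code you normally need to add yourself), the environment-specific file is layered on top of the base file, so appsettings.Production.json only needs the values you want to override:

.ConfigureAppConfiguration((hostingContext, config) =>
{
    var env = hostingContext.HostingEnvironment;

    // Base settings first, then environment-specific overrides layered on top.
    config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
          .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true)
          .AddEnvironmentVariables();
})

The environment name itself comes from the ASPNETCORE_ENVIRONMENT environment variable, which you can set per deployment (for example in the Azure App Service configuration) instead of editing the JSON files by hand.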

How to make the MS Identity SPA example work for organization and personal accounts

I am following this link to make my own project.
My project has
Reactjs
Asp.Net Core Web Api
to access a user's emails from Hotmail. The Reactjs app and the Web API live on different servers.
The example is almost what I need, but it only accepts accounts within the same organization, not personal accounts.
I thought I only needed to change the Tenant ID from a specific ID to "common" in the related configurations and it would work.
I also registered the supported account types on Azure for both the Web API and the Reactjs app as
AnyOrg + Personal Account
Here is the config for the SPA:
{
  "auth": {
    "clientId": "28xxx12",
    "authority": "https://login.microsoftonline.com/common",
    "validateAuthority": true,
    "redirectUri": "http://localhost:4200",
    "postLogoutRedirectUri": "https://localhost:44321/signout-oidc",
    "navigateToLoginRequestUrl": true
  },
  "cache": {
    "cacheLocation": "localStorage"
  },
  "scopes": {
    "loginRequest": [ "openid", "profile", "Mail.Read", "offline_access", "user.read" ]
  },
  "resources": {
    "todoListApi": {
      "resourceUri": "https://localhost:44351/api/todolist/",
      "resourceScope": "https://papayee008.onmicrosoft.com/papayee008/access_as_user"
    }
  }
}
Here is the config for the Web API:
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "papayee008.onmicrosoft.com",
    "TenantId": "common",
    "ClientId": "28xxx12"
  },
  "https_port": 44351,
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}
But when I try to log in with my personal account, I get the error shown in the screenshot.
The simplest solution would be to use the same app/client ID for both the SPA and the API. That is, instead of making separate app registrations for each component (as is suggested in the readme), simply make one app registration where you combine the steps for "Register the service app (TodoListAPI)" and "Register the client app (TodoListSPA)" in the readme of the sample repository.
EDIT: judging by the last screenshot, it might also be the case that your changes to "supported account types" in the AAD portal haven't taken effect yet. There is usually a delay of a few seconds, and if you tried to log in with a personal account during that time, the issue in the screenshot would be expected.

How do I make the API Management service logger deploy after the Application Insights resource?

I am trying to make the following ARM template deploy an APIM service logger; however, the service logger starts to deploy before the App Insights resource and fails. The App Insights resource is in a separate template. I added a dependsOn statement and thought that would do the job, but that didn't work either. The code below does work if App Insights is already deployed.
Does anyone have any pointers?
{
  "type": "Microsoft.ApiManagement/service/loggers",
  "name": "[concat(variables('apiManagementInstanceName'), '/', parameters('appInsightsName'))]",
  "apiVersion": "2018-01-01",
  "properties": {
    "loggerType": "applicationInsights",
    "description": "Logger resources to APIM",
    "credentials": {
      "instrumentationKey": "[reference(resourceId('Microsoft.Insights/components', parameters('appInsightsName')), '2015-05-01').InstrumentationKey]"
    }
  },
  "dependsOn": [
    "[resourceId('microsoft.insights/components', parameters('appInsightsName'))]"
  ]
}
I also tried depending on both the APIM instance and App Insights:
"dependsOn": [
//"[resourceId('Microsoft.ApiManagement/service', variables('apiManagementInstanceName'))]"
"[resourceId('microsoft.insights/components', parameters('appInsightsName'))]"
],
You can use linked templates to reference another template file and define dependencies on it: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates#linked-template
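A rough sketch of that approach (the names, template URIs, and API versions below are illustrative): deploy both templates as nested Microsoft.Resources/deployments resources from a parent template, and make the logger deployment depend on the App Insights deployment so it cannot start first:

{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "appInsightsDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('appInsightsTemplateUri')]",
      "contentVersion": "1.0.0.0"
    }
  }
},
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "apimLoggerDeployment",
  "dependsOn": [
    "[resourceId('Microsoft.Resources/deployments', 'appInsightsDeployment')]"
  ],
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('apimLoggerTemplateUri')]",
      "contentVersion": "1.0.0.0"
    }
  }
}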

ASP.NET Core publishing to Azure (Staging)

VS 2019
ASP.NET Core 3.1
I have developed a Web App locally and now I am ready to deploy to an Azure Staging environment.
My Web App was originally .NET (not Core) and I had no problems deploying it.
How do I tell the deployment process to use the "Staging" environment?
My launchSettings.json contains the following:
{
  "$schema": "http://json.schemastore.org/launchsettings.json",
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:59000",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
I have an appSettings.Staging.json pointing to the Staging database...
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Warning"
    }
  },
  "ConnectionStrings": {
    "DbConnection": "Data Source=myapp.database.windows.net;Initial Catalog=MyAppCoreStaging;user id=myappstepadmin;password=mypassword;MultipleActiveResultSets=True"
  }
}
But I am not sure how to tell it to use Staging when I deploy.
At the moment when I deploy, the browser starts up on the page and I get:
HTTP Error 500.30 - ANCM In-Process Start Failure
Common solutions to this issue:
The application failed to start
The application started but then stopped
The application started but threw an exception during startup
Troubleshooting steps:
Check the system event log for error messages
Enable logging the application process' stdout messages
Attach a debugger to the application process and inspect
For more information visit: https://go.microsoft.com/fwlink/?LinkID=2028265
Is there something I need to configure on Azure to use the Staging?
Since you are deploying to Azure and did not mention a CI/CD pipeline as your publishing method, I assume you are using the publishing profiles provided by the Azure portal directly in Visual Studio.
In the Publish dialog, click on Edit -> settings -> Configuration and select Stage
In your Program.cs, in CreateWebHostBuilder, you can make the appsettings file that gets loaded depend on your solution configuration (the example below uses the WebHost/IWebHostBuilder pattern, which still works on ASP.NET Core 3.x; the generic host's IHostBuilder has an equivalent UseEnvironment extension):
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseEnvironment(Environment);
where Environment can be a property with preprocessor directives:
public static string Environment
{
    get
    {
        string environmentName;
#if DEBUG
        environmentName = "development";
#elif STAGE
        environmentName = "staging";
#elif RELEASE
        environmentName = "production";
#else
        environmentName = "production"; // fallback so the variable is always assigned
#endif
        return environmentName;
    }
}
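Note that the STAGE and RELEASE symbols are not defined by default; for the directives above to pick the right branch, you need to define the conditional compilation symbols per build configuration, for example in the .csproj (a sketch, assuming you have added a Stage configuration to the solution):

<PropertyGroup Condition="'$(Configuration)' == 'Stage'">
  <DefineConstants>$(DefineConstants);STAGE</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
  <DefineConstants>$(DefineConstants);RELEASE</DefineConstants>
</PropertyGroup>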
If you use a build pipeline, you should look at this.
steps:
- task: DotNetCoreCLI@2
  inputs:
    command: 'publish'
    publishWebProjects: true
    arguments: '-o $(build.artifactstagingdirectory) /p:EnvironmentName=Staging'
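For what it's worth, when the Web SDK publishes with an environment name set (via the /p:EnvironmentName=Staging argument above, or an <EnvironmentName> property in the publish profile), the generated web.config sets ASPNETCORE_ENVIRONMENT for the ASP.NET Core Module, roughly like this (illustrative output; the assembly name is a placeholder):

<aspNetCore processPath="dotnet" arguments=".\MyWebApp.dll" stdoutLogEnabled="false" hostingModel="inprocess">
  <environmentVariables>
    <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Staging" />
  </environmentVariables>
</aspNetCore>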

Steeltoe Serilog Dynamic Logger not working in .net core 2.2 app

I was trying to use the new Steeltoe Serilog Dynamic Logger https://steeltoe.io/docs/steeltoe-logging/#2-0-serilog-dynamic-logger in my .NET Core 2.2 application. I used version 2.3.0 of the Steeltoe.Extensions.Logging.SerilogDynamicLogger package. In my Program.cs, I have the code below:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .ConfigureAppConfiguration((hostContext, configApp) =>
        {
            configApp.AddCloudFoundry();
            configApp.AddConfigServer();
        })
        .UseSerilog((hostingContext, loggerConfiguration) => loggerConfiguration
            .ReadFrom.Configuration(hostingContext.Configuration)
            .WriteTo.Trace())
        .ConfigureLogging((builderContext, loggingBuilder) =>
        {
            loggingBuilder.ClearProviders();
            loggingBuilder.AddConfiguration(builderContext.Configuration.GetSection("Logging"));
            // Add Serilog Dynamic Logger
            loggingBuilder.AddSerilogDynamicConsole();
        });
In the above block, first of all I don't know why
loggingBuilder.AddConfiguration(builderContext.Configuration.GetSection("Logging"));
is required, because it is meant for configuring the Microsoft ILogger and Serilog does not recommend such a setting. Anyway, I have both sections in my appsettings.json:
"Logging": {
"IncludeScopes": false,
"LogLevel": {
"Default": "Warning",
"System": "Warning",
"Microsoft": "Warning"
}
},
"Serilog": {
"MinimumLevel": {
"Default": "Information"
},
"WriteTo": [
{
"Name": "Console",
"Args": {
"formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
}
},
{
"Name": "Trace",
"Args": {
"formatter": "Serilog.Formatting.Compact.CompactJsonFormatter, Serilog.Formatting.Compact"
}
}
],
"Enrich": [ "FromLogContext" ]
},
After deploying to PCF, when I click Configure Logging Levels I see only 1/1 under Filter Loggers, and changing the Default logger does not actually control the log levels. I am using PCF 2.4. Any thoughts on why it is not working would be helpful.
I tested the sample at https://github.com/SteeltoeOSS/Samples/tree/master/Management/src/AspDotNetCore/CloudFoundry with 2.3.0 (it is currently at 2.3.0-rc2, which is identical). It is working for me with CF 2.6. Can you try deploying the sample in your environment and make sure the Logging endpoint looks as expected?
In your CLI, run cf logs <sample app name> | grep Test. Now adjust the Cloudfoundry.Controllers logging level and visit the home page; you should see a difference in the verbosity of the logs. Hopefully with this you can compare and see where your app/configuration is different.
➜ CloudFoundry git:(2.x) ✗ cf logs actuator | grep Test
2019-08-28T12:51:17.67-0400 [APP/PROC/WEB/0] OUT Test Critical message
2019-08-28T12:51:17.67-0400 [APP/PROC/WEB/0] OUT Test Error message
2019-08-28T12:51:17.67-0400 [APP/PROC/WEB/0] OUT Test Warning message
2019-08-28T12:51:17.67-0400 [APP/PROC/WEB/0] OUT Test Informational message
2019-08-28T12:51:17.67-0400 [APP/PROC/WEB/0] OUT Test Debug message
----- after adjusting ------
2019-08-28T12:52:16.29-0400 [APP/PROC/WEB/0] OUT Test Critical message
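Those "Test ..." lines come from log statements in the sample's controller, roughly of this shape (a sketch; the sample's actual code may differ slightly):

// One message per level, so changing the dynamic level visibly changes the output.
_logger.LogCritical("Test Critical message");
_logger.LogError("Test Error message");
_logger.LogWarning("Test Warning message");
_logger.LogInformation("Test Informational message");
_logger.LogDebug("Test Debug message");

After raising the minimum level for that logger, only the more severe messages keep appearing, which is what the before/after output above shows.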