Consider a wizard-generated ASP.NET Core project (.NET 6). Add the Google.Cloud.Diagnostics.AspNetCore3 NuGet package and services.AddGoogleDiagnosticsForAspNetCore() to Startup.cs. Let the GOOGLE_APPLICATION_CREDENTIALS environment variable point to the path of your service account JSON.
Somewhere in the app (e.g. a controller) add the following:
_logger.LogDebug("Nope");
_logger.LogInformation("Yeah");
Google Cloud Logs Explorer shows only the "Yeah" (no specific filters). My appsettings.json looks like:
"Logging": {
"LogLevel": {
"Default": "Debug",
"System": "Information",
"Microsoft": "Information"
}
}
As far as I understand, "Default": "Debug" should apply everywhere a more specific configuration is missing.
Why am I not seeing "Nope" being logged? Is there anything obvious that I'm missing? It's worth mentioning that both the Visual Studio Debug output and the console output show both Nope/Yeah as expected.
Short answer: Google.Cloud.Diagnostics.AspNetCore3 does not use appsettings.json (at least for now), and one must set log levels explicitly.
Now to the long answer, with working code after that.
To add Google Diagnostics to our project we have three overloads of ...AddGoogleDiagnosticsForAspNetCore(...) available, and also ...AddGoogle(...) for wiring up just the one service we need, such as the logging service. (What comes before the ... depends on the .NET version; examples are at the end.)
1- In a GCP environment, the parameterless ...AddGoogleDiagnosticsForAspNetCore() overload is used to set defaults for the diagnostics. Service details are fetched from GCP.
2- In a GCP environment, with the ...AddGoogleDiagnosticsForAspNetCore( AspNetCoreTraceOptions, LoggingServiceOptions, ErrorReportingServiceOptions ) overload we can set three kinds of options: ASP.NET tracing, the logging service, and the error reporting service.
For this use case, if we want only the logging service, we can use either positional arguments (null, new LoggingServiceOptions{...}, null) (the last null is not required) or named arguments (loggingOptions: new LoggingServiceOptions{...}).
There is a lot that can be set in LoggingServiceOptions{...}, but just for log-level purposes the following suffices: new LoggingServiceOptions{ Options = LoggingOptions.Create(logLevel: LogLevel.Debug) }, as sketched below.
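A minimal sketch of that second overload, assuming the app runs in a GCP environment so the project ID and service details are auto-detected:
// 2nd overload, named-argument form: only the logging service is configured.
services.AddGoogleDiagnosticsForAspNetCore(
    loggingOptions: new LoggingServiceOptions
    {
        Options = LoggingOptions.Create(logLevel: LogLevel.Debug)
    });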
Now we come to the important one. Although the documentation covers it well enough implicitly, it is not made explicitly clear that the next overload sets plain options directly, not service options.
3- Although not explicitly documented as such, this overload is for cases outside GCP, or where GCP cannot be detected properly (not sure how!?): AddGoogleDiagnosticsForAspNetCore( projectId, serviceName, serviceVersion, TraceOptions, LoggingOptions, ErrorReportingOptions ). It may look similar to the 2nd overload at first, but it does not take the service options types.
When one sees the Project ID was not provided and could not be autodetected message with the 1st or 2nd overload, the project ID has to be provided as a parameter, which immediately switches the call to this 3rd overload.
In this case, if we want only the logging service, it has to be used in the form (projectId, null, null, null, LoggingOptions..., null) with positional arguments (the last null is not required) or (projectId: "some ID", loggingOptions: LoggingOptions...) with named arguments.
LoggingOptions... is simply LoggingOptions.Create(logLevel: LogLevel.Debug) to set the log level.
4- Instead of supplying these details while adding Google Diagnostics to the services, we can add the logging options when configuring logging: ...AddGoogle( new LoggingServiceOptions{...} ). In this use we need to provide a project ID in it: new LoggingServiceOptions{ ProjectId = "some ID", Options = LoggingOptions.Create(logLevel: LogLevel.Debug) }.
Filling in the ...
.NET 6 started using top-level statements, so in Program.cs we have the following:
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGoogleDiagnosticsForAspNetCore(
    projectId: "some ID",
    loggingOptions: LoggingOptions.Create(logLevel: LogLevel.Debug)
);

// or

var builder = WebApplication.CreateBuilder(args);
builder.Logging.AddGoogle(
    new LoggingServiceOptions {
        ProjectId = "some ID",
        Options = LoggingOptions.Create(logLevel: LogLevel.Debug)
    }
);
Since the OP mentions Startup.cs, the project uses the older hosting style, so these are the required parts for that:
// inside ConfigureServices
services.AddGoogleDiagnosticsForAspNetCore(
    projectId: "some ID",
    loggingOptions: LoggingOptions.Create(logLevel: LogLevel.Debug)
);

// or

// before using "UseStartup"
.ConfigureLogging(
    builder => builder.AddGoogle(
        new LoggingServiceOptions {
            ProjectId = "some ID",
            Options = LoggingOptions.Create(logLevel: LogLevel.Debug)
        }
    )
)
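For context, here is a sketch of how that ConfigureLogging fragment could sit in a classic Program.cs; the generic-host layout around it is my assumption, not part of the original answer:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            // ConfigureLogging is applied before UseStartup, as noted above.
            webBuilder
                .ConfigureLogging(builder => builder.AddGoogle(
                    new LoggingServiceOptions {
                        ProjectId = "some ID",
                        Options = LoggingOptions.Create(logLevel: LogLevel.Debug)
                    }))
                .UseStartup<Startup>();
        });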
Extra
We can also read the values from the configuration file (top-level statement format):
var builder = WebApplication.CreateBuilder(args);
var config = builder.Configuration;
builder.Services.AddGoogleDiagnosticsForAspNetCore(
    projectId: config["GCP:ID"],
    loggingOptions: LoggingOptions.Create(
        logLevel: Enum.Parse<LogLevel>(config["GCP:Logging:LogLevel:Default"])));
and add a GCP section to appsettings.json:
"GCP":{
"ID":"some ID",
"Logging":{
"LogLevel":{
"Default":"Debug"
}
}
}
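One caveat, which is my own addition rather than part of the original answer: Enum.Parse throws if the key is missing or misspelled, so a defensive variant could fall back to Information, the package default mentioned in the next answer:
// Defensive parse: fall back to Information (the package default) if the key is absent.
var levelText = config["GCP:Logging:LogLevel:Default"];
var level = Enum.TryParse<LogLevel>(levelText, out var parsed) ? parsed : LogLevel.Information;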
I've downloaded the package mentioned (it's open source) and checked the default logging-options creation: as the source shows, the default logLevel is Information.
And as you go through the implementation, there is no sign of the level being read from the config; it is simply passed from the options you may specify in code:
(Source-code screenshots omitted: the initial invocation, the service provider registration, the creation of the logging provider and options, the creation of the options, and the creation of the logger, probably invoked somewhere internally by ASP.NET.)
The simple answer is that the Google package doesn't read anything from appsettings.json by default.
You can set the logging level by using LoggingOptions (other options omitted for brevity):
builder.Services.AddGoogleDiagnosticsForAspNetCore(
    loggingOptions: new Google.Cloud.Diagnostics.Common.LoggingServiceOptions
    {
        // ... Other required options, e.g. projectId
        Options = Google.Cloud.Diagnostics.Common.LoggingOptions.Create(
            logLevel: LogLevel.Debug
            // ... Other necessary options
        ),
    });
Related
I'm trying to add ssh-keys to my Google Cloud project at the project level with terraform:
resource "google_compute_project_metadata_item" "oslogin" {
project = "${google_project_services.myproject.project}"
key = "enable-oslogin"
value = "false"
}
resource "google_compute_project_metadata_item" "block-project-ssh-keys" {
project = "${google_project_services.myproject.project}"
key = "block-project-ssh-keys"
value = "false"
}
resource "google_compute_project_metadata_item" "ssh-keys" {
key = "ssh-keys"
value = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user#computer.local"
depends_on = [
"google_project_services.myproject",
]
}
I tried all combinations of the two metadata flags oslogin and block-project-ssh-keys, which always get set without issues. But the ssh keys never appear in GCP's web GUI, let alone the authorized_keys file. I even tried adding the depends_on to make sure the project exists before the keys are added, but that didn't help either.
Yet, Terraform says:
google_compute_project_metadata_item.ssh-keys: Creation complete after 8s (ID: ssh-keys)
Adding the exact same key manually in the web GUI works fine. At this point I believe I have tried everything and read all the first-page Google results for 'terraform gcp add ssh key' and similar queries... I'm at my wits' end.
The issue was that the ssh key was being added to a different project.
I started with Google's tutorial on GCP/Terraform. It first creates a generic project with the gcloud tool, then creates accounts using that generic project. This is necessary because you need a user for terraform to run against their API. It then creates a new project facilitating those users with terraform each time you apply. The generic project created with gcloud is not touched after the initial creation.
If you omit the project parameter from the google_compute_project_metadata_item.ssh-keys resource, it uses the generic project and adds the ssh keys there - at least it did in my case.
Solution: explicitly add the project parameter to the metadata item resource to make sure the keys are added to the right project.
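Applied to the resource from the question, the fix is the one added project line, mirroring the reference the other two resources already use:
resource "google_compute_project_metadata_item" "ssh-keys" {
  project = "${google_project_services.myproject.project}"
  key     = "ssh-keys"
  value   = "user:ssh-rsa myverylongpublickeythatireplacewithtexthereforobviousreasons user@computer.local"
}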
Let's say I have applicationA, which has 3 property files:
-> applicationA
- datasource.properties
- security.properties
- jms.properties
How do I move all properties to a spring cloud config server and keep them separate?
As of today I have configured the config server so that it only reads ONE property file, as this seems to be the standard way. The file the config server picks up is resolved via spring.application.name. In my case it will only read ONE file with this name:
-> applicationA.properties
How can I add the other files to be resolved by the config server?
This is not possible in the way you requested. Spring Cloud Config Server uses NativeEnvironmentRepository, which is:
Simple implementation of {@link EnvironmentRepository} that uses a SpringApplication and configuration files located through the normal protocols. The resulting Environment is composed of property sources located using the application name as the config file stem (spring.config.name) and the environment name as a Spring profile.
See: https://github.com/spring-cloud/spring-cloud-config/blob/master/spring-cloud-config-server/src/main/java/org/springframework/cloud/config/server/environment/NativeEnvironmentRepository.java
So basically, every time a client requests properties from the Config Server, it creates a ConfigurableApplicationContext using SpringApplicationBuilder, launched with the following configuration property:
String config = application;
if (!config.startsWith("application")) {
    config = "application," + config;
}
list.add("--spring.config.name=" + config);
So the only possible names for property files are application.properties (or .yml) and the name of the config client application that is requesting configuration - in your case, applicationA.properties.
But you can "cheat".
In the config server configuration you can add a property like this:
spring:
  cloud:
    config:
      server:
        git:
          search-paths: '{application}, {application}/your-subdirectory'
In this case the Config Server will search for the same property file names, but in several directories, so you can use subdirectories to keep your properties separate.
So with the configuration above you will be able to load configuration from:
applicationA/application.properties
applicationA/your-subdirectory/application.properties
This can be done.
You need to create your own EnvironmentRepository, which loads your property files.
org.springframework.cloud.config.server.support.AbstractScmAccessor#getSearchLocations searches for the property files to load:
for (String prof : profiles) {
    for (String app : apps) {
        String value = location;
        if (app != null) {
            value = value.replace("{application}", app);
        }
        if (prof != null) {
            value = value.replace("{profile}", prof);
        }
        if (label != null) {
            value = value.replace("{label}", label);
        }
        if (!value.endsWith("/")) {
            value = value + "/";
        }
        output.addAll(matchingDirectories(dir, value));
    }
}
There you could add custom code that reads the required property files.
The above code matches exactly the behaviour described in the Spring docs.
The NativeEnvironmentRepository does NOT access Git/SCM in any way, so you should use JGitEnvironmentRepository as the base for your own implementation.
As @nmyk pointed out, NativeEnvironmentRepository boots a mini app in order to collect the properties, providing it with - so to speak - "hardcoded" {appname}.* and application.* supported property file names. (@Stefan Isele - prefabware.com: JGitEnvironmentRepository ends up using NativeEnvironmentRepository as well, for that matter.)
I have issued a pull request for spring-cloud-config-server 1.4.x that supports defining additional file names through a spring.cloud.config.server.searchNames environment property, in the same sense one can for a single Spring Boot app, as defined in the Externalized Configuration / Application Property Files section of the documentation, using the spring.config.name environment property. I hope they review it soon, since many seem to have asked about this feature on Stack Overflow, and surely many more search for it and read the currently advised solutions.
It is worth mentioning that many people advise "abusing" the profile feature to achieve this, which is bad practice in my humble opinion, as I describe in this answer; a sketch of that workaround follows below.
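For completeness, here is a sketch of that profile-based workaround, shown only to name the technique (the file names are illustrative, and it is the approach advised against above): the client requests one profile per concern, and the server serves the matching {application}-{profile}.properties files.
# Client-side bootstrap.yml
spring:
  application:
    name: applicationA
  cloud:
    config:
      profile: datasource,security,jms

# The server then serves, in addition to applicationA.properties:
#   applicationA-datasource.properties
#   applicationA-security.properties
#   applicationA-jms.properties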
I'm using RavenDB Server and Client 3.5.0, and I have tried to get UniqueConstraint to work, without success.
The simple case:
using Raven.Client.UniqueConstraints;
public class User
{
    public string Id { get; set; }

    [UniqueConstraint]
    public string Email { get; set; }
}
The documentation says:
Drop the Raven.Bundles.UniqueConstraints assembly in the Plugins directory.
I did it via NuGet: Install-Package RavenDB.Bundles.UniqueConstraints -Version 3.5.0, and then pasted the binary Raven.Bundles.UniqueConstraints.dll into a Plugins folder that I created myself in Raven's root directory.
After saving a User document I get this in the metadata:
"Ensure-Unique-Constraints": [
{
"Name": "Email",
"CaseInsensitive": false
}
]
All seems to work, but I can still save documents with the same email.
UniqueConstraintCheckResult<User> checkResult = session.CheckForUniqueConstraints(user);
// returns whether its constraints are available
if (checkResult.ConstraintsAreFree())
{
    session.Store(user);
    session.SaveChanges();
}
I checked this link, RavenDB UniqueConstraint doesn't seem to work, and this one, https://groups.google.com/forum/#!searchin/ravendb/unique|sort:relevance/ravendb/KzO-eIf9vV0/NJyJ4DNniFUJ, and many others where people have the same problem without a solution. In some cases they said they were manually checking whether the property already exists in the database as a workaround.
The documentation also says:
To activate unique constraints server-wide, simply add Unique
Constraints to Raven/ActiveBundles configuration in the global
configuration file, or setup a new database with the unique
constraints bundle turned on using API or the Studio
but with no clue how to do that. I did some searching and found a possible way:
In the Studio, select the database and go to Settings -> Database settings, where I found this config:
{
    "Id": "TestRaven",
    "Settings": {
        "Raven/DataDir": "~\\TestRaven"
    },
    "SecuredSettings": {},
    "Disabled": false
}
and I tried adding this config:
"Settings": {
"Raven/DataDir": "~\\TestRaven",
"Raven/ActiveBundles": "UniqueConstraints"
}
Then I got an error when trying to save it. The message said something like "the database is already created and bundles cannot be modified or added", and suggested adding the line "Raven-Temp-Allow-Bundles-Change": true, after which I was able to save the settings with the UniqueConstraints bundle configuration.
So far I think I have met every requirement the documentation describes. The last one is:
Any bundle which is not added to ActiveBundles list, will not be
active, even if the relevant assembly is in the Plugins directory.
The only place I found a bundle list is when creating a new database in the Studio, but the list is not editable there; it is just information about what is already enabled.
The documentation lists a lot of requirements but just doesn't tell us how to fulfill them - super smart - so we have to guess. I got this far, but guess what? It is still not working!
My question is: does UniqueConstraints really work in RavenDB? Has anyone gotten it working?
If yes, could you please tell me how?
Thank you in advance!
[Edited]
I forgot to mention that I added the following line:
store.Listeners.RegisterListener(new UniqueConstraintsStoreListener());
I also tried with version 3.5.1.
The issue is that the specified bundle name is incorrect, so it won't be active on the server side. Use "Unique Constraints" instead of "UniqueConstraints" in the "Raven/ActiveBundles" settings option.
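Applied to the settings snippet from the question, the corrected configuration would look like this (a sketch based on the question's own values):
"Settings": {
    "Raven/DataDir": "~\\TestRaven",
    "Raven/ActiveBundles": "Unique Constraints"
}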
I'm trying to determine the best approach for executing business logic in a push adapter. I've run the example PushAdapter (Module_07_04_nativeAPIForiOSPush) successfully from my local environment, but adding WL.Server.setActiveUser() throws an error.
I'm running the demo PushAdapter adapter locally in Worklight Studio (6.0.0.201309171829), with the following added as the first line of the adapter:
WL.Server.setActiveUser("PushAppRealm",userId);
...
I deployed the adapter change, ran it with the same params, and got this error in the Worklight console:
Can't find method com.worklight.integration.js.JavaScriptIntegrationLibraryImplementation.setUserIdentity(string,string). (/integration.js#36)
FWLSE0101E: Caused by: [project Module_07_04_nativeAPIForiOSPush]null
The adapter runs without any problems without this line. I'm trying to set the active user because I next want to get the user's preferences, to decide in business logic whether to create the notification. Is there another approach?
I've also run this in a new workspace (after applying Fix Pack 1 to Worklight Studio 6), but with the same result.
My questions are: 1) why am I getting this error? and 2) is this a valid approach?
Thanks.
var userIdentity = {
    userId: "userid",
    displayName: "userid",
    attributes: {
        foo: "bar"
    }
};
WL.Server.setActiveUser("PushAppRealm", userIdentity);
This should work. However, for this sample you should not be explicitly setting the user identity. The method WL.Server.setActiveUser() is used to set the user identity in the case of adapter-based authentication; this sample uses form-based authentication.
I'm building a C# application which uses plug-ins. The application must guarantee to the user that plug-ins cannot do whatever they want on the user's machine, and that they have fewer privileges than the application itself (for example, the application can access its own log files, whereas plug-ins cannot).
I considered three alternatives.
Using System.AddIn. I tried this alternative first, because it seemed much more powerful, but I'm really disappointed by the need to modify the same code seven times in seven different projects each time I want to change something. Besides, there are a huge number of problems to solve even for a simple Hello World application.
Using System.Activator.CreateInstance(assemblyName, typeName). This is what I used in the previous version of the application. I can't use it anymore, because it does not provide a way to restrict permissions.
Using System.Activator.CreateInstance(AppDomain domain, [...]). That's what I'm trying to implement now, but it seems the only way to do that is to go through ObjectHandle, which requires serialization for every class used. However, the plug-ins contain WPF UserControls, which are not serializable.
So is there a way to create plug-ins containing UserControls or other non-serializable objects and to execute those plug-ins with a custom PermissionSet?
One thing you could do is set the current AppDomain's policy level to a restricted permission set and add evidence markers to restrict based on strong name or location. The easiest would probably be to require that plugins live in a specific directory and give that directory a restrictive policy.
e.g.
using System;
using System.Reflection;
using System.Security;
using System.Security.Permissions;
using System.Security.Policy;

public static void SetRestrictedLevel(Uri path)
{
    PolicyLevel appDomainLevel = PolicyLevel.CreateAppDomainLevel();

    // Create simple root policy, normally with FullTrust
    PolicyStatement fullPolicy = new PolicyStatement(appDomainLevel.GetNamedPermissionSet("FullTrust"));
    UnionCodeGroup policyRoot = new UnionCodeGroup(new AllMembershipCondition(), fullPolicy);

    // Build restricted permission set
    PermissionSet permSet = new PermissionSet(PermissionState.None);
    permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    PolicyStatement permissions = new PolicyStatement(permSet, PolicyStatementAttribute.Exclusive);
    policyRoot.AddChild(new UnionCodeGroup(new UrlMembershipCondition(path.ToString()), permissions));

    appDomainLevel.RootCodeGroup = policyRoot;
    AppDomain.CurrentDomain.SetAppDomainPolicy(appDomainLevel);
}

static void RunPlugin()
{
    try
    {
        SetRestrictedLevel(new Uri("file:///c:/plugins/*"));
        Assembly a = Assembly.LoadFrom("file:///c:/plugins/ClassLibrary.dll");
        Type t = a.GetType("ClassLibrary.TestClass");
        /* Will throw an exception */
        t.InvokeMember("DoSomething", BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Static,
            null, null, null);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
Of course this isn't rigorously tested, and CAS policy is notoriously complex, so there is always a risk that this code might allow some things to bypass the policy. YMMV :)