Access application context properties during a native QuarkusIntegrationTest

In my Quarkus project I use io.quarkus:quarkus-keycloak-admin-client for testing. This extension provides properties such as
quarkus.keycloak.admin-client.username
quarkus.keycloak.admin-client.password
I use those properties in a test utility, fetching them through the ConfigProvider interface:
var adminUser = ConfigProvider.getConfig().getValue(KEYCLOAK_ADMIN_USER_PROPERTY, String.class);
var adminPassword = ConfigProvider.getConfig().getValue(KEYCLOAK_ADMIN_PASSWORD_PROPERTY, String.class);
This works for a regular QuarkusTest.
When running a native QuarkusIntegrationTest (which simply extends the plain QuarkusTest), these properties cannot be found, leading to an exception.
Most probably this is because they only live in the application's context, and during the native test phase I only have access to DevServices properties, not application context properties. How can I access application context properties during a native test?
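For illustration, here is a minimal sketch of the kind of workaround that should work, assuming the values are either declared explicitly in application.properties (so the test classpath can see them) or a fallback default is acceptable (the "admin" fallback below is an assumption, not something the extension guarantees):

import org.eclipse.microprofile.config.ConfigProvider;

// Sketch: the integration-test JVM cannot see the application's resolved
// config, so read the property only if it is visible on the test classpath
// and fall back to an explicitly known value otherwise.
var config = ConfigProvider.getConfig();
var adminUser = config
        .getOptionalValue("quarkus.keycloak.admin-client.username", String.class)
        .orElse("admin"); // assumption: the value the dev service was started with
var adminPassword = config
        .getOptionalValue("quarkus.keycloak.admin-client.password", String.class)
        .orElse("admin"); // assumption: same caveat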

Related

Can I create an object from the DI container/Lamar in .NET 6.0 minimal hosting, preserving singletons?

We have migrated from a Windows .NET Framework 4.7 application to .NET 6.0. Lamar was added for dependency injection. We are trying to finalize a refactor to the latest "one-file" Program.cs but are getting an unexpected System.ObjectDisposedException: 'Cannot access a disposed object'. In all cases the error occurs against a Func<T> during object creation.
All our tests run correctly using the same environment, except that to start the tests we (a) create the DI container and (b) use the container to create an object that loads the singletons (from MongoDB):
Container = new Container(registry);
var start = Container.GetInstance<HomeService>();
In Program.cs we configure the container, but we never see it created or get to access it inside Program.cs. Instead we create HomeService from IServiceProvider during the first use of a controller. Here we were trying to limit the lifecycle scope during creation:
using (var scope = _container.CreateScope())
{
    scope.ServiceProvider.GetService<INewHomeService>();
}
For tests we use the same loading steps, except for adding controllers/MVC, of course (i.e. NOT using builder.Services.AddControllers(); and builder.Services.AddMvc() for (integration) testing).
We have tried a lot of different things, like creating our object independently of the startup, but that did not align the singletons. We can get the functionality by using static instead, but then we lose dynamic change access.
There are some great tips, like Resolving instances with ASP.NET Core DI from within ConfigureServices and https://andrewlock.net/exploring-dotnet-6-part-10-new-dependency-injection-features-in-dotnet-6/, but I can't find a specific example of getting hold of the live container just after its initial creation.
Is it possible that the issue is simply a difference in lifecycle management in the new .NET DI implementation? As this is configuration at the composition root, if we can configure it the same way as in our testing approach, that should solve our problem. Other solutions welcome!
The problem 'Cannot access a disposed object' was caused by a lifecycle mismatch between a retained context and the controller access. The code retained a handle on a state object, which in turn held a handle on a Func<T> factory. As we had not registered the Func<T> as anything, it was transient during the controller graph creation, and so was disposed when the controller request ended.
To solve it, we first tried registering ALL of the Func<T> factories, per How to use Func<T> in built-in dependency injection, which was a large task as we had a few factories throughout an old codebase.
The better solution was to create a factory in the composition root and use an injected IServiceProvider (or, with Lamar, an IContainer). This is a simple workaround.
As for our creation concern: creating our object after the startup process completes works correctly, as a lazy validation on the first controller access.

Accessing HTTP headers in GraphQL v17

Currently I am using a custom context object in my GraphQL application. It is built via a class that extends GraphQLServletContextBuilder. In version 17, the use of the context object has been deprecated. Our app uses the custom context to access a variable in the HTTP headers and make it available to resolvers via the DataFetchingEnvironment argument passed to resolver functions.
I cannot seem to find out how to get access to HTTP headers in a resolver function other than through a custom context object. We're using the built-in servlet that is part of the GraphQL Kickstart package. The only way I've seen referenced so far to get anything into the new context is by setting it via the ExecutionInput call, and that call is buried in their servlet.
The example I've seen:
var executionInput = ExecutionInput.newExecutionInput()
        .query(query)
        .variables(variables)
        .graphQLContext(Map.of(CustomContext.class, context))
        .build();
I don't necessarily need a custom context object (especially since it's eventually going away), I just need to know how to get access to an HTTP header inside a resolver function if a custom context is no longer the way to do it going forward.
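For reference, once a value is in the new GraphQLContext map (as in the snippet above), a data fetcher can read it back along these lines. This is only a sketch: the "X-Custom-Header" key, and the assumption that whatever builds the context stored the header value under it, are illustrative:

import graphql.GraphQLContext;
import graphql.schema.DataFetcher;

// Sketch: graphql-java 17 exposes the context map directly on the
// DataFetchingEnvironment, so no custom context class is required.
DataFetcher<String> myHeaderAwareFetcher = environment -> {
    GraphQLContext ctx = environment.getGraphQlContext();
    // Assumption: the servlet's context builder put the raw header
    // value into the map under this key.
    String headerValue = ctx.get("X-Custom-Header");
    return headerValue;
};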

How to generate a workflow at runtime with Elsa Workflows

With the Elsa workflow designer it is possible to define a workflow and publish it; you can also create a workflow programmatically by implementing the IWorkflow interface.
I need to create a workflow programmatically at runtime, save it to the database, and run it occasionally.
In my ASP.NET Core project's controller I resolve IWorkflowBuilder as a dependency, build a workflow with the WorkflowBuilder, and get back a WorkflowBlueprint object, but I don't know how to store it or how to run it.
I also have the Elsa dashboard in my project and use Entity Framework persistence for it.
Is there a way to convert a WorkflowBlueprint to a WorkflowDefinition, or to generate a WorkflowDefinition from scratch programmatically?
Does anyone have any ideas?
Although it might theoretically be possible to store an IWorkflow implementation in the database, there are some caveats that make this tricky, to say the least. Here is why:
A workflow definition created by the designer consists purely of a list of activities and the connections between them. Because of that, everything is easily serialized into JSON and stored in the database.
However, when you write a C# class, you can do fancier things, such as configuring activities using C# lambda expressions and implementing "inline" activity code. When you try to serialize this to JSON, these C# expressions will be serialized using just their type names.
Although there might be ways to store a programmatic workflow in the database, perhaps even by storing a compiled assembly in the DB, I don't think it's worth the trouble, because there are better ways.
You said that you need a programmatic workflow that you only run sometimes.
To achieve that, you do not need to store a workflow in the database.
The way Elsa works is that all workflow sources are converted into a thing called a Workflow Blueprint.
A workflow blueprint represents an executable workflow, carrying all the details the workflow invoker needs to run it.
There are different "sources" to establish these workflow blueprints by means of classes that implement IWorkflowProvider, of which there are three:
Programmatic Workflow Provider
Database Workflow Provider
Blob Storage Workflow Provider
The programmatic provider is what turns IWorkflow implementations into workflow blueprints, while the database provider turns workflow definitions into blueprints. The blob storage provider is similar, except it turns JSON files into blueprints.
The bottom line is that the origin of a workflow blueprint doesn't matter for the workflow engine.
All workflow blueprints are accessed through a service called the workflow registry, which you can use to load & execute a given workflow.
For example, if you have a programmatic workflow called MyWorkflow, you can execute it whenever you want like this:
public class MyWorkflow : IWorkflow
{
    public void Build(IWorkflowBuilder builder)
    {
        builder.WriteLine("Hello World!");
    }
}

[ApiController]
[Route("my-workflow")]
public class MyWorkflowController : Controller
{
    private readonly IWorkflowRegistry _workflowRegistry;
    private readonly IStartsWorkflow _workflowStarter;

    public MyWorkflowController(IWorkflowRegistry workflowRegistry, IStartsWorkflow workflowStarter)
    {
        _workflowRegistry = workflowRegistry;
        _workflowStarter = workflowStarter;
    }

    [HttpGet("run")]
    public async Task<IActionResult> RunMyWorkflow(CancellationToken cancellationToken)
    {
        // 1. Get my workflow blueprint.
        var myWorkflowBlueprint = (await _workflowRegistry.GetWorkflowAsync<MyWorkflow>(cancellationToken))!;

        // 2. Run the workflow.
        await _workflowStarter.StartWorkflowAsync(myWorkflowBlueprint, cancellationToken: cancellationToken);

        return Ok();
    }
}
Invoking this controller will execute MyWorkflow.
As you can see, there is no need to store the workflow in the database to be able to execute it on demand. Even if you did store the workflow in the database, the code would be the same, provided the name of the workflow remains "MyWorkflow". Under the covers, GetWorkflowAsync<TWorkflow> is simply an extension method that uses the type name to find the workflow by name. If you wanted to load a workflow by name for which no workflow class is defined, you would simply use FindByNameAsync, or FindAsync if all you have is a workflow definition ID.

Java - is the Byte Buddy agent capable of "fully" redefining a class?

Is the Byte Buddy agent capable of overcoming the Attach API restrictions, e.g. adding new method definitions or changing static variables? I can see that the redefineClasses method is being called from the AgentBuilder, but I am not sure whether it is subject to the same restrictions as the Attach API.
I am trying to understand whether I can do the following:
1) Load the agent jar using an application class loader, e.g. ParallelWebappClassLoader. My application is a servlet webapp, and at runtime it uses this class loader to load all application classes.
2) Fully redefine my classes, i.e. any method additions/updates and static/local variable changes/updates/additions.
I do have an agent that currently works within the Attach API restrictions, but I am struggling to delegate the class loading from the system class loader to the application one.
Many Thanks,
This is a restriction of the Java virtual machine you are running. Byte Buddy is capable of "fully redefining" a class using its API, but most VMs will reject such changes. Have a look at the Dynamic Code Evolution VM for being able to apply such changes.
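For illustration, here is a minimal sketch of a redefinition through Byte Buddy's API (the Greeter class and its method are made-up names). On a standard HotSpot VM this succeeds only because it merely swaps a method body; adding methods or fields the same way is rejected with an UnsupportedOperationException:

import static net.bytebuddy.matcher.ElementMatchers.named;

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.agent.ByteBuddyAgent;
import net.bytebuddy.dynamic.loading.ClassReloadingStrategy;
import net.bytebuddy.implementation.FixedValue;

class Greeter {
    String greet() { return "hello"; }
}

public class RedefineDemo {
    public static void main(String[] args) {
        // Attach a Java agent to the running VM to obtain an Instrumentation instance.
        ByteBuddyAgent.install();

        // Replace the body of greet(); this stays within the standard
        // redefinition limits (no new methods or fields are added).
        new ByteBuddy()
                .redefine(Greeter.class)
                .method(named("greet"))
                .intercept(FixedValue.value("redefined"))
                .make()
                .load(Greeter.class.getClassLoader(), ClassReloadingStrategy.fromInstalledAgent());

        System.out.println(new Greeter().greet()); // prints "redefined"
    }
}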

Dynamics CRM 2013: Inheritance security rules violated while overriding member

I created a quite simple plugin for Dynamics CRM 2013 that should populate some attributes based on some other attribute values.
The following error message occurs when querying data:
Unexpected exception from plug-in (Execute): Foobar.IsoCountry.Plugins.PreAddressCreateUpdate: System.TypeLoadException: Inheritance security rules violated while overriding member: 'Microsoft.Crm.Services.Utility.DeviceRegistrationFailedException.GetObjectData(System.Runtime.Serialization.SerializationInfo, System.Runtime.Serialization.StreamingContext)'. Security accessibility of the overriding method must match the security accessibility of the method being overriden.
The code is quite simple:
var context = localContext.PluginExecutionContext;
var orgServiceSystem = localContext.OrganizationServiceAsSystem;
var target = this.GetTargetEntity(context).ToEntity<Account>();

using (var xrm = new XrmContext(localContext.OrganizationServiceAsCallingUser))
{
    var list = from account in xrm.AccountSet
               where account.Name.StartsWith("foobar")
               select account;
    // ...
}
I am using the latest SDK version (6.1.1) and targeting Dynamics CRM Online (the Spring Wave update is installed).
The only thing that might be a bit special is the fact that I am using ILMerge to combine multiple DLLs into my plugin.dll.
Solved: I used a dedicated Visual Studio project for the data access (repository pattern), and part of this project was a class (BaseXrmFactory) that used the DeviceIdManager class ("DeviceIdManager.cs", part of the SDK).
The class was used to create new instances of the org service:
...
private ClientCredentials GetDeviceCredentials()
{
    return Microsoft.Crm.Services.Utility.DeviceIdManager.LoadOrRegisterDevice();
}
...
Via ILMerge, this project was included in my plugin DLL that was deployed to CRM. Once I removed the DeviceIdManager class, the plugin executed as expected :)
I do not fully understand why this was the problem, because BaseXrmFactory was NOT invoked as part of the plugin execution.
I had two plugins, the first with isolation mode Sandbox and the second with isolation mode None. I set all isolation modes to None, and after that everything worked correctly.