Ktor equivalent of "Context" from project Reactor? - ktor

In Project Reactor there is the concept of a Context, a key-value store that can be shared across components. We use it in some of our projects to manage the correlationId. API reference: https://projectreactor.io/docs/core/release/api/reactor/util/context/Context.html
I am wondering whether Ktor has a similar concept. I want a way to manage shared values such as a correlationId throughout the application, which I can then pull from when making a client request.

Kotlin has the concept of a CoroutineContext, which is similar to what you need: an indexed, map-like structure carried with every coroutine.
It is described (somewhat tersely) in the official documentation: the coroutines guide's section on coroutine context and dispatchers, and the API reference for kotlin.coroutines.CoroutineContext.
And this is how it for instance can be used for tracing: https://github.com/Shinigami072/OpenTracing-Kotlin-Coroutine-Integration/blob/master/coroutine-tracing-api-core/src/main/kotlin/ActiveSpan.kt
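For a correlationId specifically, you can define your own CoroutineContext element and read it wherever you build the client request. A minimal sketch, assuming kotlinx.coroutines is on the classpath; the CorrelationId element and the currentCorrelationId() helper are hypothetical names, not ktor API:

import kotlin.coroutines.AbstractCoroutineContextElement
import kotlin.coroutines.CoroutineContext
import kotlin.coroutines.coroutineContext
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

// Hypothetical element that carries a correlationId along with the coroutine.
class CorrelationId(val id: String) : AbstractCoroutineContextElement(Key) {
    companion object Key : CoroutineContext.Key<CorrelationId>
}

// Looks up the correlationId in the current coroutine context, if any.
suspend fun currentCorrelationId(): String? = coroutineContext[CorrelationId]?.id

fun main() = runBlocking {
    withContext(CorrelationId("abc-123")) {
        // e.g. read currentCorrelationId() here and add it as a header on a client request
        println(currentCorrelationId()) // prints abc-123
    }
}

Anything launched inside withContext inherits the element, so a ktor client call made there can pull the id out of coroutineContext when building the request.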

Related

Akka Remote shared classes

I have two different Java 8 projects that will live on different servers and which will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
private int foo;
private String bar;
// Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a 3rd project, a library/jar for holding the shared messages (such as Fizzbuzz and others) and then pull that library in to both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns:
Akka-remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for actor props and the messages sent (including all their nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
However, the default config uses default Java serialization, which is known to be quite inefficient - so you might want to switch to protobuf, kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library - either a dedicated one or part of the "shared models" library that you mentioned in the question - depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you allow some personal opinion, I would suggest trying protobuf first - it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source akka-cluster apps with kryo serialization in production), but it has a few quirks with regard to collection/map handling.
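To make that concrete, the wiring usually ends up in application.conf. A rough sketch only - the message class name is hypothetical, and akka.remote.serialization.ProtobufSerializer assumes the bound classes are protobuf-generated messages:

akka {
  actor {
    serializers {
      # serializer id -> fully qualified serializer class
      proto = "akka.remote.serialization.ProtobufSerializer"
    }
    serialization-bindings {
      # message class -> serializer id
      "com.example.messages.Fizzbuzz" = proto
    }
  }
}

If you stick with default Java serialization instead, the only requirement is that the shared message classes implement java.io.Serializable.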

Multi-platform InputStream Alternative in Kotlin?

I’m looking for a multi-platform alternative to input streams. My concrete task is to fetch an encrypted file from a remote server via https and decrypt it on demand.
In Java land I would implement an InputStream that proxies the reads to the input stream from the HTTPS library. How can I do the same in Kotlin targeting multiple platforms?
I see ktor returns a ByteReadChannel, but I don't know which functions to use with it.
I’m lost and don’t know where to start. Thanks for your help in advance.
If the framework you are using does not provide you with a full-fledged InputStream implementation, the only option left is to write your own. Much like what the ktor developers did: ByteReadChannel is just an abstraction of "reading bytes from a channel".
This abstraction lives in the common part and allows you to write application and business logic around it.
The key to making this work in a Kotlin Multiplatform project is that the actual implementation needs to be provided in the platform-specific parts. The JVM-specific code of the ktor project actually has an implementation that uses InputStream: InputStream.toByteReadChannel.
You certainly don't have to do it like the ktor project and model everything from byte channels up to file representations. If you want to leverage Kotlin framework classes, Sequences might be handy. It could look something like this:
// in common
interface FileFetcher {
    fun fetch(): Sequence<Byte>
}

expect fun fileFetcher(source: String): FileFetcher

// in jvm
class JvmFileFetcher(val input: java.io.InputStream) : FileFetcher {
    override fun fetch(): Sequence<Byte> = input.readBytes().asSequence()
}

actual fun fileFetcher(source: String): FileFetcher {
    val input = java.net.URL(source).openStream()
    return JvmFileFetcher(input)
}
You would define an interface FileFetcher along with a factory function fileFetcher in the common part. By using the expect keyword on the fileFetcher function, you require platform-specific implementations for all target platforms you define. Use the FileFetcher interface in the common part to implement your logic (decrypting file contents, etc.). See the documentation for Sequence for how to work with it.
Then implement the factory function for each platform and mark it with the actual keyword. You will also need to write platform-specific implementations of FileFetcher; my example shows a JVM version of the FileFetcher interface.
The example is of course very basic and you probably would not want to do it exactly like this (at least some buffering would be needed, I guess). Also, within the JVM part you could easily leverage your favorite networking/HTTP library.
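For illustration, common code could consume the fetcher like this - a sketch only, with the decrypt step left as a caller-supplied function since the question doesn't name a crypto library:

// In common code: buffer the fetched bytes and hand them to a decrypt function.
fun loadDecrypted(source: String, decrypt: (ByteArray) -> ByteArray): ByteArray {
    val fetched = fileFetcher(source).fetch().toList().toByteArray()
    return decrypt(fetched)
}

Because loadDecrypted only touches the FileFetcher interface, it compiles in the common source set and picks up the JVM (or other platform) implementation at build time.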

The plugin design pattern explained (as described by Martin Fowler)

I am trying to understand and exercise the plugin pattern, as explained by Martin Fowler.
I can understand in which way it makes use of the separated interface pattern, and that it requires a factory to provide the right implementation of the interface, based on the currently used environment (test, prod, dev, etc). But:
How exactly does the factory read the environment values and decide which object (implementing the IdGenerator interface) to create?
Is the factory a dependency of the domain object (DomainObject)?
Thank you very much.
The goal of the Plugin pattern is to provide centralized, runtime configuration of which implementation to use, promoting modularity and scalability. The criteria that determine which implementation to select can be the environment, or anything else, like account type, user group, etc. The factory is just one way to create the desired plugin instance based on the selection criteria.
Implementation Selection Criteria
How your factory reads the selection criteria (environment state) depends on your implementation; see the sketch after this list. Some common approaches are:
Command-Line Argument, for example, CLI calls from different CI/CD pipeline stages can pass a dev/staging/production argument
YAML Config Files could be deserialized into an object or parsed
Class Annotations to tag each implementation with an environment
Feature Flags, e.g. SaaS like Launch Darkly
Dependency Injection framework like Spring IoC
Product Line Engineering software like Big Lever
REST Endpoint, e.g. http://localhost/test/order can create a test order object without notifying any customers
HTTP Request Parameter, such as a field in the header or body
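As a concrete illustration of the first approach, a factory keyed on an environment variable could look roughly like this (Kotlin, with hypothetical names - the point is only that the selection criterion is read in one place):

interface IdGenerator {
    fun nextId(): String
}

class UuidIdGenerator : IdGenerator {
    override fun nextId() = java.util.UUID.randomUUID().toString()
}

class FixedIdGenerator : IdGenerator {
    override fun nextId() = "test-id" // deterministic ids for tests
}

object PluginFactory {
    // Reads the APP_ENV environment variable and picks an implementation.
    fun idGenerator(): IdGenerator =
        when (System.getenv("APP_ENV") ?: "prod") {
            "test" -> FixedIdGenerator()
            else -> UuidIdGenerator()
        }
}

The same when block could just as well switch on a YAML value, a feature flag, or a request parameter.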
Dependency on Factory
Since the DomainObject calls the factory to create an object with the desired implementation, the factory will be a dependency of the domain object. That being said, the modern approach is to use a dependency injection (DI) library (Guice, Dagger) or a framework with built-in DI (Spring DI, .Net Core). In these cases, there still is a dependency on the DI library or framework, but not explicitly on any factory class.
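For instance, with plain constructor injection the domain object never sees the factory at all - a sketch continuing the hypothetical IdGenerator example above:

// The domain object depends only on the interface; the composition root
// (or a DI container) decides which plugin implementation to pass in.
class DomainObject(private val idGenerator: IdGenerator) {
    fun create(): String = idGenerator.nextId()
}

fun main() {
    val domainObject = DomainObject(PluginFactory.idGenerator())
    println(domainObject.create())
}

With a DI framework, the PluginFactory call would be replaced by a binding registered once at startup.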
Note: The Plugin design pattern described on pp.499-503 of PEAA was written by Rice and Foemmel, not Martin Fowler.
You will want to get the full PDF of "Patterns of Enterprise Application Architecture". What is visible on Fowler's site is basically the first half-page of each chapter :)
What is being described is basically an expanded version of the idea behind polymorphism.
I don't think "plugin" can actually be described as a "pattern". It's more like the result of other design choices.
What you have are... emm... "packages", where the main class in each of them implements a third-party interface. Each of those packages also has its internal dependencies (other classes or even other libraries), which are used for some specific task. Each package has its own configuration (which might be added through DIC config), and each of them gets "registered" in your main application.
The mention of a factory is almost a red herring, because these days that functionality would be applied using a DIC.

How to wire up WCF Service Application, Unity and AutoMapper

I have been playing around the last couple of days with different solutions for mapping DTO's to entities for a VS2013, EF6, WCF Service App project.
It is a fairly large project that is currently undergoing a major refactoring to bring the legacy code under test (as well as port the ORM from OpenAccess to EF6).
To be honest I had never used AutoMapper before, but I really liked what I saw, so I set out to test it in a demo app - and I am a bit ashamed that I have been unable to achieve a working solution after hours of tinkering and Googling. Here is a breakdown of the project:
WCF Service Application template based project (.svc file w/code behind).
Using Unity 3.x for my IoC container and thus creating my own ServiceHostFactory inheriting from UnityServiceHostFactory.
Using current AutoMapper nuget package.
DTO's and DAL are in two separate libraries as expected, both of which are referenced by the service app project.
My goal is simple (I think): Wire up and create all of my maps in my composition root and inject the necessary objects (using my DI container) into the class that has domain knowledge of the DTO's and a reference to my DAL library. Anyone that needs a transformation would therefore only need to reference the transformation library.
The problem: Well, there are a couple of them...
1) I cannot find a working example of AutoMapper in Unity anywhere. The code snippet that is referenced many times across the web for registering AutoMapper in Unity (see below) references a Configuration class that doesn't seem to exist anymore and I cannot find any documentation on its deprecation:
container.RegisterType<AutoMapper.Configuration, AutoMapper.Configuration>(
        new PerThreadLifetimeManager(),
        new InjectionConstructor(typeof(ITypeMapFactory), AutoMapper.Mappers.MapperRegistry.AllMappers()))
    .RegisterType<ITypeMapFactory, TypeMapFactory>()
    .RegisterType<IConfiguration, AutoMapper.Configuration>()
    .RegisterType<IConfigurationProvider, AutoMapper.Configuration>()
    .RegisterType<IMappingEngine, MappingEngine>();
2) Where to create the maps themselves... I assume I could perform this operation right in my ServiceHostFactory, but is that the correct place? There is a Bootstrapper project out there but I have not gone down that road (yet) and would like to avoid it if possible.
3) Other than the obviously necessary reference to AutoMapper in the DTO lib, what would I be injecting at instantiation (the configuration object, assuming IConfiguration or IConfigurationProvider), and which class do I inject into the constructor of the WCF service to gain access to the necessary objects?
I know #3 is a little vague but since I cannot get AutoMapper bound in my Unity container, I cannot test/trial/error to figure out the other issues.
Any pointers would be greatly appreciated.
UPDATE
So I now have a working solution that is testing correctly but would still like to get confirmation that I am following any established best practices.
First off, the Unity container registration for AutoMapper (as of 11/13/2013) v3.x looks like this:
container
    .RegisterType<ConfigurationStore, ConfigurationStore>(
        new ContainerControlledLifetimeManager(),
        new InjectionConstructor(typeof(ITypeMapFactory), MapperRegistry.AllMappers()))
    .RegisterType<IConfigurationProvider, ConfigurationStore>()
    .RegisterType<IConfiguration, ConfigurationStore>()
    .RegisterType<IMappingEngine, MappingEngine>()
    .RegisterType<ITypeMapFactory, TypeMapFactory>();
Right after all of my container registrations, I created and am calling a RegisterMaps() method inside of ConfigureContainer(). I created a test mapping that does both an auto mapping for like-named properties and a custom mapping. I did this in my demo app for two reasons primarily:
I don't yet know AutoMapper well enough - in a WCF app hosted in IIS and injected with Unity - to fully understand its behavior. I do not seem to have to inject any kind of configuration object into the library that does the transformations, and I am still reading through the source to understand its implementation.
As I understand it, there is a caching mechanism at play, and if a mapping is not found in the cache it will be created on the fly. While this is great in theory, the only way I could then test the mappings created in my composition root was to do some sort of custom mapping and then call Mapper.Map in the library that performs mapping and returns the DTO.
All of that blathering aside, here is what I was able to accomplish.
WCF Service App (composition root) injects all of the necessary objects including my DtoConversionMapper instance.
The project is made up of the WCF Service App (comp root), DtoLib, DalLib, ContractsLib (interfaces).
In my ServiceFactoryHost I am able to create mappings, including custom mappings (i.e. map unlike named properties between my DTO and EF 6 entity).
The DtoConversionMapper class lives in the DtoLib library and exposes methods like this: IExampleDto GetExampleDto(ExampleEntity entity);
Any library with a reference to the DtoLib can convert back and forth, including the Service App where the vast majority of these calls will take place.
Any guiding advice would be greatly appreciated but I do have a working demo now that I can test things out with while I work through this large refactoring.
Final Update
I changed the demo project just a little by adding another library (MappingLib) and moved all of my DTO conversions and mappings to it in a static method. While I still call the static method in my composition root after the Unity container is initialized, this gives me the added flexibility of being able to call that same map creation method in my NUnit unit test libraries, effectively eliminating any duplication of code surrounding auto mapper and makes it very testable.

Profiler lib for wcf + postsharp

We need to add a new profiling feature to our WCF application, for logging where time is spent in the application. I'm looking at PostSharp for a convention-driven approach to applying the logging and need some input on how to actually log it. I've already created a custom class for logging purposes, using Stopwatch, and can log the steps through the layers of my WCF application. However, I'm wondering if there's a thread-safe alternative library I could use in conjunction with PostSharp for this purpose. I've come across MiniProfiler, but it seems to be intended mainly for ASP.NET MVC applications. Any other frameworks I should consider, or should I just use my custom class?
Thanks
I did something like that in the past using an IClientMessageInspector implemented on a custom IEndpointBehavior.
Depending on what kind of logging you want, this might just do the trick. There's an example in the following link
IClientMessageInspector Interface
PostSharp itself is thread-safe. The aspects that you write may be thread-unsafe if poorly written, but there's always a way to make them thread-safe.
If you're using OnMethodBoundaryAspect and need to pass something from OnEntry to OnSuccess, store the initial stopwatch value in MethodExecutionArgs.MethodExecutionTag.