Pattern like Bridge but can add primitive method? - oop

Suppose two abstract classes:
Log:
property:timestamp;
property:message;
LogFormatter:
(String *)formatlog:(Log)log;
This looks like a Bridge: Log behaves like the Abstraction and LogFormatter like the Implementor. In my opinion, a Bridge can't add a primitive method to the Implementor. But I want to dynamically add properties to Log in the future and use a subclass of LogFormatter to format them, and that would break the Liskov Substitution Principle.
Does anyone have any suggestions?

If Log has the properties timestamp and message, I think you perhaps mean LogEntry.
The Bridge pattern is used when you want to decouple an abstraction from its implementation so that the two can vary independently. I don't think that is your case. If you simply want to format a LogEntry as a String, just pass the LogEntry as a parameter to a LogFormatter method that returns a String.
If you want to dynamically add properties to the log in the future and use a subclass of LogFormatter to format them, I would suggest the Decorator or Strategy pattern, depending on what you really mean.
Edit:
The following pseudo code is an example of how one can implement this with a decorator to add a property, such as a thread ID, to the log entry:
Log {
    timestamp
    message
    threadId
}
interface LogFormatter {
    String format(Log log);
}
// default implementation
DefaultLogFormatter implements LogFormatter {
    String format(Log log) {
        return log.getTimestamp() + log.getMessage();
    }
}
// decorator that appends the thread ID
ThreadIdLogFormatter implements LogFormatter {
    LogFormatter formatter;
    ThreadIdLogFormatter(LogFormatter formatter) {
        this.formatter = formatter;
    }
    String format(Log log) {
        String threadId = log.getThreadId();
        return formatter.format(log) + " (Thread " + threadId + ")";
    }
}
LogFormatter formatterDefault = new DefaultLogFormatter();
LogFormatter formatterThreadId = new ThreadIdLogFormatter(formatterDefault);
Log log = new Log("message");
// 2016-11-28 09:22:07.055 INFO message
String logEntryDefault = formatterDefault.format(log);
// 2016-11-28 09:22:07.055 INFO message (Thread 11)
String logEntryThreadid = formatterThreadId.format(log);
Personally I would just define an abstract LogFormatter (or interface) and implement LogFormatter subclasses that format a Log entry into some form of text (line, XML, JSON, ...). A design pattern is nice, but perhaps not really necessary in this case.
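For example, a minimal Kotlin sketch of that plain-subclassing approach (the class names here are only illustrative):
data class Log(val timestamp: String, val message: String)
interface LogFormatter {
    fun format(log: Log): String
}
class LineLogFormatter : LogFormatter {
    override fun format(log: Log) = "${log.timestamp} ${log.message}"
}
class JsonLogFormatter : LogFormatter {
    // naive JSON rendering; a real implementation would escape the fields
    override fun format(log: Log) =
        """{"timestamp":"${log.timestamp}","message":"${log.message}"}"""
}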

How best to return a single value of different types from function

I have a function that returns either an error message (String) or a Firestore DocumentReference. I was planning to use a class containing both and testing if the error message is non-null to detect an error and if not then the reference is valid. I thought that was far too verbose however, and then thought it may be neater to return a var. Returning a var is not allowed however. Therefore I return a dynamic and test if result is String to detect an error.
I.e.
dynamic varResult = insertDoc(_sCollection, dataRec.toJson());
if (varResult is String) {
  // an error message was returned
}
Then after checking for compliance, I read the following from one of the gurus:
"It is bad style to explicitly mark a function as returning Dynamic (or var, or Any or whatever you choose to call it). It is very rare that you need to be aware of it (only when instantiating a generic with multiple type arguments where some are known and some are not)."
I'm quite happy using dynamic for the return value if that is appropriate, but generally I try to comply with best practice. I am also very aware of bloated software and I go to extremes to avoid it. That is why I didn't want to use a Class for the return value.
What is the best way to handle the above situation where the return type could be a String or alternatively some other object, in this case a Firestore DocumentReference (emphasis on very compact code)?
One option would be to create an abstract state class. Something like this:
abstract class DocumentInsertionState {
const DocumentInsertionState();
}
class DocumentInsertionError extends DocumentInsertionState {
final String message;
const DocumentInsertionError(this.message);
}
class DocumentInsertionSuccess<T> extends DocumentInsertionState {
final T object;
const DocumentInsertionSuccess(this.object);
}
class Test {
void doSomething() {
final state = insertDoc();
if (state is DocumentInsertionError) {
}
}
DocumentInsertionState insertDoc() {
try {
return DocumentInsertionSuccess("It worked");
} catch (e) {
return DocumentInsertionError(e.toString());
}
}
}
Full example here: https://github.com/ReactiveX/rxdart/tree/master/example/flutter/github_search

Companion object with extension function in kotlin?

I would like to have an extension function that uses a logger from kotlin-logging and constants from a companion object.
My function:
fun String.toFoo(): Foo {
    logger.debug { "Mapping [$this] to Foo" }
    if (MY_CONST == this) {
        ...
    }
}
The question is where I should put val logger = KotlinLogging.logger {} and MY_CONST, since I cannot use a companion object with an extension function.
If you just want your logger to be a singleton, you can make an object that contains an instance of the logger and reach it from there.
object LoggerSingleton {
    val logger = KotlinLogging.logger {}
}
Then in your extension function:
fun String.toFoo(): Foo {
    LoggerSingleton.logger.debug { "Mapping [$this] to Foo" }
    if (MY_CONST == this) {
        ...
    }
}
Since an object in Kotlin is guaranteed to have only one instance, you won't have a different logger for each use of toFoo.
EDIT
To keep the desired logger name, pass it explicitly, like so:
object StringLoggerSingleton {
    val logger = KotlinLogging.logger("String")
}
I do not know what you want to accomplish with your logger, but I will show you what I have done before ;-)
Usually I put extension functions in their own file, named after what the function is actually extending (e.g. StringExtensionFunctions), or, if the function is more related to its purpose and perhaps only available when certain dependencies are present, after that purpose (e.g. JsoupExtensionFunctions, containing String.toJsoupHtml(), File.toJsoupXml(), etc.).
If I then need constants I just place them within that file, e.g. by just writing something like:
private const val MY_CONST = "my_const_value"
No surrounding class, no surrounding object.
Regarding the logger: as loggers are usually tied to a certain name/class, I usually put a logger inside every (important) class or associate loggers with specific names, so I am not completely sure what your intent is here. If it's OK for you that the logger reports the container of your extension functions (maybe StringExtensionFunctions.kt) as its name, then you can also put a logger val inside that file, similar to what I showed with MY_CONST.
If your intention was rather to reuse the caller's logger, that might not work so easily (the easiest would then probably be to pass it to the function, but usually you do not want that), and other mechanisms may not really be worth it ;-)
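If that works for you, such a file could look roughly like this (a minimal sketch; Foo and the mapping logic are placeholders, and KotlinLogging comes from the kotlin-logging library):
// StringExtensionFunctions.kt
import mu.KotlinLogging

// file-level logger and constant, no surrounding class or object
private val logger = KotlinLogging.logger {}
private const val MY_CONST = "my_const_value"

data class Foo(val value: String)

fun String.toFoo(): Foo {
    logger.debug { "Mapping [$this] to Foo" }
    return if (this == MY_CONST) Foo("default") else Foo(this)
}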

Validation Data Class Parameters Kotlin

If I am modeling my value objects using Kotlin data classes, what is the best way to handle validation? It seems like the init block is the only logical place, since it executes after the primary constructor.
data class EmailAddress(val address: String) {
    init {
        if (address.isEmpty() || !address.matches(Regex("^[a-zA-Z0-9]+@[a-zA-Z0-9]+(\\.[a-zA-Z]{2,})$"))) {
            throw IllegalArgumentException("$address is not a valid email address")
        }
    }
}
Example using JSR-303:
The downside to this is that it requires load-time weaving.
@Configurable
data class EmailAddress(@Email val address: String) {
    @Autowired
    lateinit var validator: Validator
    init {
        validator.validate(this)
    }
}
It seems unreasonable to me to have object creation validation anywhere else but in the class constructor. This is the place responsible for the creation, so that is the place where the rules which define what is and isn't a valid instance should be. From a maintenance perspective it also makes sense to me as it would be the place where I would look for such rules if I had to guess.
I did make a comment, but I thought I would share my approach to validation instead.
First, I think it is a mistake to perform validation on instantiation. This will make the boundary between deserialization and handing over to your controllers messy. Also, to me, if you are sticking to a clean architecture, validation is part of your core logic, and you should ensure with tests on your core logic that it is happening.
So, to tackle this how I wish, I first define my own core validation API. Pure Kotlin. No frameworks or libraries. Keep it clean.
interface Validatable {
    /**
     * @throws [ValidationErrorException]
     */
    fun validate()
}

class ValidationErrorException(
    val errors: List<ValidationError>
) : Exception() {
    /**
     * Convenience method for getting a data object from the Exception.
     */
    fun toValidationErrors() = ValidationErrors(errors)
}

/**
 * Data object to represent the data of an Exception. Convenient for serialization.
 */
data class ValidationErrors(
    val errors: List<ValidationError>
)

data class ValidationError(
    val path: String,
    val message: String
)
Then I have framework-specific implementations, for example a javax.validation implementation:
open class ValidatableJavax : Validatable {
    companion object {
        val validator = Validation.buildDefaultValidatorFactory().validator!!
    }

    override fun validate() {
        val violations = validator.validate(this)
        val errors = violations.map {
            ValidationError(it.propertyPath.toString(), it.message)
        }.toMutableList()
        if (errors.isNotEmpty()) {
            throw ValidationErrorException(errors = errors)
        }
    }
}
The only problem with this is that the javax annotations don't play so well with Kotlin data classes, but here is an example of a class with validation:
import java.math.BigDecimal
import javax.validation.constraints.Positive

class MyObject(
    myNumber: BigDecimal
) : ValidatableJavax() {
    @get:Positive(message = "Must be positive")
    val myNumber: BigDecimal = myNumber
}
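Called like this (a brief usage sketch, assuming a JSR-303 provider such as Hibernate Validator is on the classpath):
import java.math.BigDecimal

fun main() {
    try {
        MyObject(BigDecimal(-1)).validate()
    } catch (e: ValidationErrorException) {
        // prints something like [ValidationError(path=myNumber, message=Must be positive)]
        println(e.toValidationErrors().errors)
    }
}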
Actually, it looks like validation is not a responsibility of data classes; the word data speaks for itself: they are used for data storage.
So if you would like to validate a data class, it makes perfect sense to put @get: validation annotations on the constructor arguments and to validate outside of the data class, in the class responsible for construction.
Your second option is not to use a data class: just use a simple class and implement the whole logic in the constructor, passing the validator there.
Also, if you use the Spring Framework, you can make this class a bean with prototype scope, but chances are it will be absolutely uncomfortable to work with that kind of spaghetti code :)
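A rough sketch of that first option (EmailAddressFactory is a made-up name, and this assumes a JSR-303 implementation such as Hibernate Validator on the classpath):
import javax.validation.Validation
import javax.validation.constraints.Email

data class EmailAddress(@get:Email val address: String)

// validation happens outside the data class, in the object responsible for construction
object EmailAddressFactory {
    private val validator = Validation.buildDefaultValidatorFactory().validator

    fun create(address: String): EmailAddress {
        val candidate = EmailAddress(address)
        val violations = validator.validate(candidate)
        require(violations.isEmpty()) { violations.joinToString { it.message } }
        return candidate
    }
}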
I disagree with the following statement:
Seems like the init block is the only logical place since it executes after the primary constructor.
Validation should not be done at construction time, because sometimes you need intermediate steps before getting a valid object, and it does not work well with Spring MVC, for example.
Maybe use a specific interface (as suggested in a previous answer) with a method dedicated to executing validation.
For the validation framework, I personally use valiktor, as I found it a lot less cumbersome than JSR-303.
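For illustration, a minimal sketch using valiktor's DSL (function names taken from the valiktor docs; the validation lives in a dedicated method rather than in init, in line with the point above):
import org.valiktor.functions.isEmail
import org.valiktor.functions.isNotBlank
import org.valiktor.validate

data class EmailAddress(val address: String) {
    // throws org.valiktor.ConstraintViolationException when a rule is violated
    fun validateEmail() = validate(this) {
        validate(EmailAddress::address).isNotBlank().isEmail()
    }
}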

Type hinting v duck typing

Using the following simple example (coded in PHP):
public function doSomething(Registry $registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}

public function doSomething($registry)
{
    $object = $registry->getData('object_key');
    if ($object) {
        // use the object to do something
    }
}
What are the benefits of either approach?
Both will ultimately fail, just at different points:
The first example will fail if an object not of type Registry is passed, and the second will fail if the object passed does not implement a getData method.
How do you choose when to use either approach?
These are two different design approaches. The responsibility falls on the developer(s) to make sure that neither method fails.
Type hinting is a more robust approach while duck typing gives you more flexibility.

How can I simplify my deserialization framework?

I have a Serialization interface which is designed to encapsulate the differences between XML/JSON/binary serialization for my application. It looks something like this:
interface Serialization {
    bool isObject();
    int opApply(int delegate(string member, Serialization value) del); // iterate object
    ...
    int toInt();     // this part is ugly, but without template member overloading, I can't
    long toLong();   // figure out any way to apply generics here, so all basic types
    ...              // have a toType primitive
    string toString();
}
class JSONSerialization : Serialization {
    private JSON json;
    ...
    long toLong() {
        enforce(json.type == JSON_TYPE.NUMBER, SerializationException.IncorrectType);
        return cast(long) json.toNumber();
    }
    ...
}
So, what I then set up is a set of templates for registering type deserializers and calling them:
...
registerTypeDeserializer!Vec3(delegate Vec3(Serialization s) {
    return Vec3(s[0].toFloat, s[1].toFloat, s[2].toFloat);
});
...
auto v = parseJSON("some file").deserialize!Vec3;
...
registerTypeDeserializer!Light(delegate Light(Serialization s) {
    return new Light(s["intensity"].toFloat, s["position"].deserialize!Vec3);
});
This works well for structs and simple classes, and with the new parameter identifier tuple and parameter default value tuple I should even be able to add automatic deserializer generation. However, I don't really like the inconsistency between basic and user defined types, and more importantly, complex types have to rely on global state to acquire references:
static MaterialLibrary materials;
registerTypeDeserializer!Model(delegate Model(Serialization s) {
    return new Model(materials.borrow(s["material"].toString), ...);
});
That's where it really falls apart. Because I can't (without a proliferation of register deserializer functions) pass other parameters to the deserializer, I'm having difficulty avoiding ugly global factories. I've thought about eliminating the deserialize template, and requiring a deserialize function (which could accept multiple parameters) for each user defined type, but that seems like a lot of work for e.g. POD structs.
So, how can I simplify this design, and hopefully avoid tons of boilerplate deserializers, while still allowing me to inject object factories appropriately, instead of assigning them globally?
Basic types can be read using readf / formattedRead, so you can create a wrapper function that uses formattedRead if possible and otherwise uses a static function from the desired type to read the value. Something like this:
import std.format : formattedRead;
import std.stdio : readf;

auto _readFrom(T)(string s) {
    static if (__traits(compiles, readf("", cast(T*) null))) {
        T result;
        formattedRead(s, "%s", &result);
        return result;
    } else {
        return T.readFrom(s);
    }
}