Let's suppose I need to save text from my application into a file, while allowing the user to select from more than one format (.pdf, .docx, .txt, ...).
A first approach could be:
if (extension == ".pdf")
    ExportToPdf(file);
else if (extension == ".txt")
    ExportToTxt(file);
...
but I usually encapsulate the above like this:
abstract class Writer
{
    public abstract bool CanWriteTo(string file);
    public abstract void Write(string text, string file);
}
class WritersHandler
{
    List<Writer> _writers = ... // All writers here

    public void Write(string text, string file)
    {
        foreach (var writer in _writers)
        {
            if (writer.CanWriteTo(file))
            {
                writer.Write(text, file);
                return;
            }
        }
        throw new Exception("...");
    }
}
Using it, if I need to support a new extension/format, all I have to do is create a new class that inherits from Writer, implement its CanWriteTo(..) and Write(..) methods, and add the new writer to the list in WritersHandler (perhaps via an Add(Writer w) method, or manually, but that's not the point now).
I also use this in other situations.
My question is:
What's the name of this pattern? (maybe it's a modification of a pattern, don't know).
It's the Chain of Responsibility pattern.
It basically defines a chain of processing objects, where the supplied command is passed to the next processing object if the current one can't handle it.
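For comparison, here is a minimal sketch of the textbook form of the pattern, with explicit successor links (written in Java since the idea is language-agnostic; the handler names are illustrative, not from the question):

```java
// Hedged sketch of a canonical Chain of Responsibility: each handler holds
// a reference to its successor and forwards requests it can't serve.
abstract class ExportHandler {
    private ExportHandler next;

    ExportHandler setNext(ExportHandler next) {
        this.next = next;
        return next;
    }

    void handle(String text, String file) {
        if (canHandle(file)) {
            export(text, file);
        } else if (next != null) {
            next.handle(text, file);  // pass the request down the chain
        } else {
            throw new IllegalArgumentException("No handler for " + file);
        }
    }

    abstract boolean canHandle(String file);
    abstract void export(String text, String file);
}

class TxtHandler extends ExportHandler {
    boolean canHandle(String file) { return file.endsWith(".txt"); }
    void export(String text, String file) { System.out.println("txt: " + file); }
}

class PdfHandler extends ExportHandler {
    boolean canHandle(String file) { return file.endsWith(".pdf"); }
    void export(String text, String file) { System.out.println("pdf: " + file); }
}
```

The WritersHandler loop in the question plays the same role as the successor links here; iterating over a list of handlers is just a flattened way of walking the chain.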
I would do it a bit differently than you.
The main difference would be the way of storing handlers and picking the right one.
In fact I think that Chain of Responsibility is a bad choice here. Moreover, iterating through the list of handlers may be time-consuming if there are many of them; a dictionary provides O(1) writer retrieval.
If I had to guess, I'd say my version of the pattern is called Strategy.
abstract class Writer
{
    public abstract string SupportedExtension { get; }
    public abstract void Write(string text, string file);
}
class WritersHandler
{
    Dictionary<string, Writer> _writersByExtension = ... // All writers here

    public void Init(IEnumerable<Writer> writers)
    {
        foreach (var writer in writers)
        {
            _writersByExtension.Add(writer.SupportedExtension, writer);
        }
    }

    public void Write(string text, string file)
    {
        Writer w;
        if (!_writersByExtension.TryGetValue(GetExtension(file), out w))
        {
            throw new Exception("...");
        }
        w.Write(text, file);
    }
}
Normally, the .Dump() extension method in LINQPad shows XNode and its derived class instances as a rendered XML fragment. Sometimes while developing code I would prefer to see the actual properties of the object, in the same table form that is dumped for other types: a table showing the Name, Value, FirstAttribute and whatnot properties of the node and their .ToString() values, or interactively expandable collections of sub-objects. In short, as if XNode were not handled specially at all.
I am working around this by dumping individual properties, but this is tedious.
This answer suggests writing a custom extension code to achieve a similar effect for another type, namely IEnumerable, but it seems a narrower and rarer case than that which I am dealing with.
Is there an out-of-the box way to do what I want?
LINQPad supports customizing Dump for types. Using some extension methods, you can convert the types to ExpandoObjects and then they will be output with properties.
In My Extensions, after the MyExtensions class, add a top level method:
static object ToDump(object obj) {
if (obj is XObject x)
return x.ToExpando();
else
return obj;
}
In the MyExtensions class, add the following extension methods. I already had the object->Dictionary methods for converting to anonymous objects, so I used those, but you could combine them to create a single ToExpando on object:
public static ExpandoObject ToExpando(this object obj) => obj.ToDictionary().ToExpando();
public static IDictionary<string, object> ToDictionary(this object obj) {
if (obj is IDictionary<string, object> id)
return id;
else {
var dictAnsObj = new Dictionary<string, object>();
foreach (var prop in obj.GetType().GetPropertiesOrFields()) {
try {
dictAnsObj.Add(prop.Name, prop.GetValue(obj));
}
catch (Exception ex) {
dictAnsObj.Add(prop.Name, ex);
}
}
return dictAnsObj;
}
}
public static ExpandoObject ToExpando(this IDictionary<string, object> objDict) {
var e = new ExpandoObject();
var di = (IDictionary<string, object>)e;
foreach (var kvp in objDict)
di.Add(kvp);
return e;
}
You will also need this Type extension:
// ***
// *** Type Extensions
// ***
public static List<MemberInfo> GetPropertiesOrFields(this Type t, BindingFlags bf = BindingFlags.Public | BindingFlags.Instance) =>
t.GetMembers(bf).Where(mi => mi.MemberType == MemberTypes.Field || mi.MemberType == MemberTypes.Property).ToList();
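The ToDump hook itself is LINQPad-specific, but the reflect-into-a-dictionary idea it builds on is language-agnostic. As a rough sketch of the same trick in Java (public fields only; property getters and the ExpandoObject step have no direct analogue here, and the Sample class is just for illustration):

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal Java analogue of the ToDictionary method above: reflect over
// public fields and collect name -> value pairs so any generic renderer
// can display them as a table.
class ObjectInspector {
    static Map<String, Object> toMap(Object obj) {
        Map<String, Object> map = new LinkedHashMap<>();
        for (Field f : obj.getClass().getFields()) {
            try {
                map.put(f.getName(), f.get(obj));
            } catch (IllegalAccessException e) {
                map.put(f.getName(), e);  // mirror the C# version: store the error as the value
            }
        }
        return map;
    }
}

// Illustrative type to inspect.
class Sample {
    public int x = 3;
    public String name = "node";
}
```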
If you are okay with just displaying the top level object in class format, you could just use this extension method when you need to:
public static T DumpAs<T, NewT>(this T obj, Func<T, NewT> castFn, string description = null) {
if (description != null)
castFn(obj).Dump(description);
else
castFn(obj).Dump();
return obj;
}
For example,
XElement xn;
xn.DumpAs(x => x.ToExpando());
Otherwise, you will have to comment out the ToDump method or do something tricky with fluent methods to turn it on and off.
This answer depends on the previous answer, but extends it to handle dumping XObjects as classes when desired with an alternative extension method and ToDump method. It uses the same extensions as my previous answer otherwise.
In the MyExtensions class, add a new type of dump and a bool to track status:
public static bool bDumpAsClass = false;
public static object DumpAsClass(this object input, string descr = null) {
bDumpAsClass = true;
if (descr != null)
input.Dump(descr);
else
input.Dump();
bDumpAsClass = false;
return input;
}
Outside the MyExtensions class, add a ToDump method that uses the bool:
static object ToDump(object obj) {
if (MyExtensions.bDumpAsClass) {
if (obj is XObject x)
return x.ToExpando();
}
return obj;
}
Then you can just use DumpAsClass instead of Dump when you want to dump an XObject or descendant as a class, expanding any members as well.
Obviously you could expand the types handled when bDumpAsClass is true.
I would like to ask whether the decorator pattern suits my needs here, or whether there is another way to make my software design better.
Previously I had a device that was always on. In the code below, that is the Device class. Now, to conserve battery life, I need to turn it off and on again around each operation. I created a DeviceWithOnOffDecorator class, using the decorator pattern, which I think helped a lot in avoiding modifications to the Device class. But with On and Off wrapped around every operation, I feel that the code doesn't conform to the DRY principle.
namespace Decorator
{
interface IDevice
{
byte[] GetData();
void SendData();
}
class Device : IDevice
{
public byte[] GetData() {return new byte[] {1,2,3 }; }
public void SendData() {Console.WriteLine("Sending Data"); }
}
// new requirement, the device needs to be turned on and turned off
// after each operation to save some Battery Power
class DeviceWithOnOffDecorator:IDevice
{
IDevice mIdevice;
public DeviceWithOnOffDecorator(IDevice d)
{
this.mIdevice = d;
Off();
}
void Off() { Console.WriteLine("Off");}
void On() { Console.WriteLine("On"); }
public byte[] GetData()
{
On();
var b = mIdevice.GetData();
Off();
return b;
}
public void SendData()
{
On();
mIdevice.SendData();
Off();
}
}
class Program
{
static void Main(string[] args)
{
Device device = new Device();
DeviceWithOnOffDecorator devicewithOnOff = new DeviceWithOnOffDecorator(device);
IDevice iDevice = devicewithOnOff;
var data = iDevice.GetData();
iDevice.SendData();
}
}
}
In this example I have only two operations, GetData and SendData, but in the actual software there are many operations involved, and I need to enclose each of them with On and Off:
void AnotherOperation1()
{
On();
// do all stuffs here
Off();
}
byte AnotherOperation2()
{
On();
byte b;
// do all stuffs here
Off();
return b;
}
Enclosing each function with On and Off feels repetitive. Is there a way to improve this?
Edit: Also, the original code is in C++; I just wrote it in C# here to show the problem more clearly.
Decorator won't suit this purpose, since you are not adding a responsibility dynamically. What you need to do instead is intercept each request and execute the On() and Off() methods before and after the actual invocation. For that purpose, write a Proxy that wraps the underlying instance and performs the interception there, leaving your original type as it is.
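To illustrate the interception idea, here is a rough sketch using Java's dynamic proxies (the question's code is C#/C++, so this is a sketch of the technique rather than a drop-in replacement, and the class names are mine). The On/Off logic is written once, in invoke(), no matter how many operations the interface has:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Device interface analogous to the question's IDevice.
interface Device {
    byte[] getData();
    void sendData();
}

class PowerManagingHandler implements InvocationHandler {
    private final Device target;

    PowerManagingHandler(Device target) { this.target = target; }

    // Every interface call is funneled through invoke(), so the On/Off
    // pair appears once instead of once per operation.
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("On");
        try {
            return method.invoke(target, args);
        } finally {
            System.out.println("Off");  // runs even if the operation throws
        }
    }

    static Device wrap(Device d) {
        return (Device) Proxy.newProxyInstance(
                Device.class.getClassLoader(),
                new Class<?>[] { Device.class },
                new PowerManagingHandler(d));
    }
}
```

In C++, where no such reflection facility exists, the equivalent move is an "execute-around" helper that takes the operation as a callable and brackets it with On()/Off().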
I was trying to create my own custom analyzer and tokenizer classes in Lucene. I followed mostly the instructions here:
http://www.citrine.io/blog/2015/2/14/building-a-custom-analyzer-in-lucene
And I updated as needed (in Lucene's newer versions the Reader is stored in "input")
However I get an exception:
TokenStream contract violation: reset()/close() call missing, reset() called multiple times, or subclass does not call super.reset(). Please see Javadocs of TokenStream class for more information about the correct consuming workflow.
What could be the reason for this? I gather that calling reset/close is not my job at all, but should be done by the analyzer.
Here's my custom analyzer class:
public class MyAnalyzer extends Analyzer {
protected TokenStreamComponents createComponents(String FieldName){
// TODO Auto-generated method stub
return new TokenStreamComponents(new MyTokenizer());
}
}
And my custom tokenizer class:
public class MyTokenizer extends Tokenizer {
protected CharTermAttribute charTermAttribute =
addAttribute(CharTermAttribute.class);
public MyTokenizer() {
char[] buffer = new char[1024];
int numChars;
StringBuilder stringBuilder = new StringBuilder();
try {
while ((numChars =
this.input.read(buffer, 0, buffer.length)) != -1) {
stringBuilder.append(buffer, 0, numChars);
}
}
catch (IOException e) {
throw new RuntimeException(e);
}
String StringToTokenize = stringBuilder.toString();
Terms=Tokenize(StringToTokenize);
}
public boolean incrementToken() throws IOException {
if(CurrentTerm>=Terms.length)
return false;
this.charTermAttribute.setEmpty();
this.charTermAttribute.append(Terms[CurrentTerm]);
CurrentTerm++;
return true;
}
static String[] Tokenize(String StringToTokenize){
//Here I process the string and create an array of terms.
//I tested this method and it works ok
//In case it's relevant: I parse the string into terms in the constructor.
//Then in incrementToken I simply iterate over the Terms array and
//submit them one at a time.
return Processed;
}
public void reset() throws IOException {
super.reset();
Terms=null;
CurrentTerm=0;
};
String[] Terms;
int CurrentTerm;
}
When I traced the exception, I saw that the problem was with input.read - it seems that there is nothing inside input (or rather, there is an ILLEGAL_STATE_READER in it). I don't understand it.
You are reading from the input stream in your Tokenizer constructor, before it is reset.
The problem here, I think, is that you are handling the input as a String, instead of as a Stream. The intent is for you to efficiently read from the stream in the incrementToken method, rather than to load the whole stream into a String and pre-process a big ol' list of tokens at the beginning.
It is possible to go this route, though. Just move all the logic currently in the constructor into your reset method instead (after the super.reset() call).
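For illustration, here is a Lucene-free sketch of that consuming contract using only the standard library (SketchTokenizer is a made-up class, not a Lucene type): nothing touches the input until reset(), and each incrementToken()-style call hands out one term.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Hedged sketch of the workflow the answer describes: the constructor reads
// nothing; all consumption of the input happens after reset(), mirroring
// where the question's constructor logic should move to.
class SketchTokenizer {
    private Reader input;
    private String[] terms;
    private int current;

    // In Lucene, the framework sets the reader for you before reset().
    void setReader(Reader r) { input = r; }

    // Analogue of reset(): only now is it legal to touch the input.
    void reset() {
        try {
            StringBuilder sb = new StringBuilder();
            BufferedReader br = new BufferedReader(input);
            String line;
            while ((line = br.readLine()) != null) sb.append(line).append(' ');
            terms = sb.toString().trim().split("\\s+");
            current = 0;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Analogue of incrementToken(): emit the next term, or null when exhausted
    // (real Lucene returns false and writes into a CharTermAttribute instead).
    String nextToken() {
        return current < terms.length ? terms[current++] : null;
    }
}
```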
I was wondering if anybody find a way to stub/mock a logic inside a lambda without making the lambda's visibility?
public List<Item> processFile(String fileName) {
// do some magic..
Function<String, List<String>> reader = (fileName) -> {
List<String> items = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader(fileName))) {
String output;
while ((output = br.readLine()) != null) {
items.add(output);
}
} catch (IOException e) {
e.printStackTrace();
}
return items;
};
List<String> lines = reader.apply("file.csv");
// do some more magic..
}
I would say the rule is that if a lambda expression is so complex that you feel the need to mock out bits of it, it's probably too complex. It should be broken into smaller pieces that are composed together, or perhaps the model needs to be adjusted to make it more amenable to composition.
I will say that Andrey Chaschev's answer which suggests parameterizing a dependency is a good one and probably is applicable in some situations. So, +1 for that. One could continue this process and break down the processing into smaller pieces, like so:
public List<Item> processFile(
String fileName,
Function<String, BufferedReader> toReader,
Function<BufferedReader, List<String>> toStringList,
Function<List<String>, List<Item>> toItemList)
{
List<String> lines = null;
try (BufferedReader br = toReader.apply(fileName)) {
lines = toStringList.apply(br);
} catch (IOException ioe) { /* ... */ }
return toItemList.apply(lines);
}
A couple of observations on this, though. First, it doesn't work as written, since the various lambdas throw pesky IOExceptions, which are checked, and the Function type isn't declared to throw that exception. Second, the lambdas you have to pass to this function are monstrous. Even though it doesn't compile (because of the checked exceptions), I wrote it out:
void processAnActualFile() {
List<Item> items = processFile(
"file.csv",
fname -> new BufferedReader(new FileReader(fname)),
// ERROR: uncaught IOException
br -> {
List<String> result = new ArrayList<>();
String line;
while ((line = br.readLine()) != null) {
result.add(line);
}
return result;
}, // ERROR: uncaught IOException
stringList -> {
List<Item> result = new ArrayList<>();
for (String line : stringList) {
result.add(new Item(line));
}
return result;
});
}
Ugh! I think I've discovered new code smell:
If you have to write a for-loop or while-loop inside a lambda, you're doing something wrong.
A few things are going on here. First, the I/O library is composed of different implementation pieces (InputStream, Reader, BufferedReader) that are tightly coupled, and it's not useful to try to break them apart. Indeed, the library has evolved so that there are convenience utilities (such as NIO's Files.readAllLines) that handle a bunch of the legwork for you.
The more significant point is that designing functions that pass aggregates (lists) of values among themselves, and composing these functions, is really the wrong way to go. It leads every function to have to write a loop inside of it. What we really want to do is write functions that each operate on a single value, and then let the new Streams library in Java 8 take care of the aggregation for us.
The key function to extract here is the code described by the comment "do some more magic", which converts List<String> into List<Item>. We want to extract the computation that converts one String into an Item, like this:
class Item {
static Item fromString(String s) {
// do a little bit of magic
}
}
Once you have this, then you can let the Streams and NIO libraries do a bunch of the work for you:
public List<Item> processFile(String fileName) {
try (Stream<String> lines = Files.lines(Paths.get(fileName))) {
return lines.map(Item::fromString)
.collect(Collectors.toList());
} catch (IOException ioe) {
ioe.printStackTrace();
return Collections.emptyList();
}
}
(Note that more than half of this short method is for dealing with the IOException.)
Now if you want to do some unit testing, what you really need to test is that little bit of magic. So you wrap it into a different stream pipeline, like this:
void testItemCreation() {
List<Item> result =
Arrays.asList("first", "second", "third")
.stream()
.map(Item::fromString)
.collect(Collectors.toList());
// make assertions over result
}
(Actually, even this isn't quite right. You'd want to write unit tests for converting a single line into a single Item. But maybe you have some test data somewhere, so you'd convert it to a list of items this way, and then make global assertions over the relationship of the resulting items in the list.)
I've wandered pretty far from your original question of how to break apart a lambda. Please forgive me for indulging myself.
The lambda in the original example is pretty unfortunate since the Java I/O libraries are quite cumbersome, and there are new APIs in the NIO library that turn the example into a one-liner.
Still, the lesson here is that instead of composing functions that process aggregates, compose functions that process individual values, and let streams handle the aggregation. This way, instead of testing by mocking out bits of a complex lambda, you can test by plugging together stream pipelines in different ways.
I'm not sure if that's what you're asking, but you could extract a lambda from the lambda, i.e. move it to another class, or keep it as is and pass it in as a parameter. In the example below I mock the reader creation:
public static void processFile(String fileName, Function<String, BufferedReader> readerSupplier) {
// do some magic..
Function<String, List<String>> reader = (name) -> {
List<String> items = new ArrayList<>();
try(BufferedReader br = readerSupplier.apply(name)){
String output;
while ((output = br.readLine()) != null) {
items.add(output);
}
} catch (IOException e) {
e.printStackTrace();
}
return items;
};
List<String> lines = reader.apply(fileName);
// do some more magic..
}
public static void main(String[] args) {
// mocked call
processFile("file.csv", name -> new BufferedReader(new StringReader("line1\nline2\n")));
//original call
processFile("1.csv", name -> {
try {
return new BufferedReader(new FileReader(name));
} catch (FileNotFoundException e) {
throw new RuntimeException(e);
}
});
}
I've read this answer (How to store complex objects into hadoop Hbase?) regarding the storing of string arrays with HBase.
There it is suggested to use the ArrayWritable class to serialize the array. With WritableUtils.toByteArray(Writable... writables) I get a byte[] which I can store in HBase.
When I then try to retrieve the rows again, I get a byte[] which I somehow have to transform back into an ArrayWritable.
But I can't find a way to do this. Maybe you know an answer, or am I doing something fundamentally wrong in serializing my String[]?
You may apply the following method to get back the ArrayWritable (taken from my earlier answer; see here).
public static <T extends Writable> T asWritable(byte[] bytes, Class<T> clazz)
throws IOException {
T result = null;
DataInputStream dataIn = null;
try {
result = clazz.newInstance();
ByteArrayInputStream in = new ByteArrayInputStream(bytes);
dataIn = new DataInputStream(in);
result.readFields(dataIn);
}
catch (InstantiationException e) {
// should not happen
assert false;
}
catch (IllegalAccessException e) {
// should not happen
assert false;
}
finally {
IOUtils.closeQuietly(dataIn);
}
return result;
}
This method just deserializes the byte array to the correct object type, based on the provided class type token.
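For readers without Hadoop on the classpath, the same round trip can be sketched with plain java.io streams; Hadoop's Writable contract has the same write(DataOutput)/readFields(DataInput) shape, just with its own types (StringArrayPayload is an illustrative stand-in, not a Hadoop class):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Self-contained sketch of the serialize/deserialize round trip the answer
// performs: write a length-prefixed string array to bytes, then rebuild the
// object from those bytes, exactly as asWritable() does via readFields().
class StringArrayPayload {
    String[] values = new String[0];

    void write(DataOutputStream out) throws IOException {
        out.writeInt(values.length);
        for (String v : values) out.writeUTF(v);
    }

    void readFields(DataInputStream in) throws IOException {
        values = new String[in.readInt()];
        for (int i = 0; i < values.length; i++) values[i] = in.readUTF();
    }

    // Analogue of WritableUtils.toByteArray: this byte[] is what would go
    // into the HBase cell.
    static byte[] toBytes(StringArrayPayload p) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            p.write(new DataOutputStream(bos));
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);  // cannot happen with in-memory streams
        }
    }

    // Analogue of asWritable: rebuild the object from the stored bytes.
    static StringArrayPayload fromBytes(byte[] bytes) {
        try {
            StringArrayPayload p = new StringArrayPayload();
            p.readFields(new DataInputStream(new ByteArrayInputStream(bytes)));
            return p;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```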
E.g:
Let's assume you have a custom ArrayWritable:
public class TextArrayWritable extends ArrayWritable {
public TextArrayWritable() {
super(Text.class);
}
}
Now you issue a single HBase get:
...
Get get = new Get(row);
Result result = htable.get(get);
byte[] value = result.getValue(family, qualifier);
TextArrayWritable tawReturned = asWritable(value, TextArrayWritable.class);
Text[] texts = (Text[]) tawReturned.toArray();
for (Text t : texts) {
System.out.print(t + " ");
}
...
Note:
You may have already found the readCompressedStringArray() and writeCompressedStringArray() methods in WritableUtils,
which seem suitable if you have your own String-array-backed Writable class.
Before using them, I'd warn you that they can cause a serious performance hit due to
the overhead of gzip compression/decompression.