Does this saving/loading pattern have a name?

There's a variable-persistence concept I have implemented multiple times:
// Standard initialization
boolean save = true;
Map<String, Object> dataHolder;

// Variables to persist
int number = 10;
String text = "I'm saved";

// Use the variables in various ways in the project
void useVariables() { ... number ... text ... }

// Save the variables into a data structure, e.g. to write them to a file
public Map<String, Object> getVariables()
{
    Map<String, Object> data = new LinkedHashMap<String, Object>();
    persist(data);
    return data;
}

// Load the variables from the data structure
public void setVariables(Map<String, Object> data)
{
    persist(data);
}

void persist(Map<String, Object> data)
{
    // If the given data structure is empty, the variables should be saved into it;
    // otherwise they should be loaded from it
    save = data.isEmpty();
    dataHolder = data;
    number = handleVariable("theNumber", number);
    text = handleVariable("theText", text);
    ...
}

@SuppressWarnings("unchecked")
private <T> T handleVariable(String name, T value)
{
    if (save)
        dataHolder.put(name, value);     // Saving: just add to the data structure
    else
        return (T) dataHolder.get(name); // Loading: read and return from the data structure
    return value;                        // Saving: return the given value unchanged
}
The main benefit of this approach is that there is only a single place in the code where newly added variables have to be mentioned during development, and it's one simple line per variable.
Of course, you can move the handleVariable() function to a different class which also contains the "save" and "dataHolder" fields, so they won't clutter the main application (see the sketch below).
Additionally, you could pass meta-information required for persisting the data structure to a file (or similar) by storing, for each variable, a small wrapper class that contains this information plus the value, instead of the value itself.
Performance could be improved by keeping track of the order of the variables (in another data structure, the first time persist() runs) and backing "dataHolder" with an array instead of a lookup-based map, i.e. using an index instead of a name string.
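For illustration, here is a minimal sketch of that separate-class variant; the class name VariableStore and its method names are my own invention, not part of the original code:
// Hypothetical helper that owns the "save" flag and the data holder,
// so the main application only needs one handle() call per variable.
import java.util.LinkedHashMap;
import java.util.Map;

public class VariableStore
{
    private boolean save = true;
    private Map<String, Object> dataHolder = new LinkedHashMap<String, Object>();

    // Begin a save or load pass; an empty map means "save into it"
    public void begin(Map<String, Object> data)
    {
        save = data.isEmpty();
        dataHolder = data;
    }

    @SuppressWarnings("unchecked")
    public <T> T handle(String name, T value)
    {
        if (save)
        {
            dataHolder.put(name, value);
            return value;
        }
        return (T) dataHolder.get(name);
    }
}
The persist() function would then call store.begin(data) once and, per variable, something like number = store.handle("theNumber", number).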
However, now that I have to document this for the first time, I wondered whether this function-reuse principle has a name.
Does anyone recognize this idea?
Thank you very much!

Related

Materialized view to use different Serde

Version used: Kafka 3.1.1, Confluent 7.1.0, Avro 1.11.0
I'm creating a REST controller which is "searching" for AVRO objects in a topic. The objects in the topic are serialized using SpecificAvroSerde<>. Each topic has two AVRO schemas assigned: one for the key (with several fields of various types) and one for the value (multiple fields and types).
I've done this several times whereby I'm consuming the topic into a KTable and then materializing it. There is only one pair of serdes involved and the serialized format is the same for both the topic and the materialized view (RocksDB). The REST controller can then look up the store and either perform a get with a key or do a range scan between two keys. This all works as expected.
private final static String TOPIC_NAME = "input-topic";
private final static String VIEW_NAME = "materialized-view";
private final SpecificAvroSerde<ProductXrefKey> productXrefKeySerde = new SpecificAvroSerde<>();
private final SpecificAvroSerde<ProductXref> productXrefSerde = new SpecificAvroSerde<>();
final Map<String, Object> props = this.kafkaProperties.buildStreamsProperties();
productXrefKeySerde.configure(props, true);
productXrefSerde.configure(props, false);
KTable<ProductXrefKey, ProductXref> productXrefTable = builder
.table(TOPIC_NAME, Consumed.with(productXrefKeySerde, productXrefSerde),
Materialized.<ProductXrefKey, ProductXref, KeyValueStore<Bytes, byte[]>>as(VIEW_NAME)
.withKeySerde(productXrefKeySerde)
.withValueSerde(productXrefSerde));
<…>
final ReadOnlyKeyValueStore<ProductXrefKey, ProductXref> store =
streamsBuilderFactoryBean.getKafkaStreams().store(fromNameAndType(VIEW_NAME, keyValueStore()));
try (KeyValueIterator<ProductXrefKey, ProductXref> range = store.range(fromKey, toKey)) {
if (range != null) {
range.forEachRemaining(kv -> {
<…>
});
} else {
log.info("Could not find {} in local ReadOnlyKeyValueStore {}", fromKey, viewName);
}
}
I now want to change this to use a prefix scan instead. Since the key contains multiple fields and there is no way to serialize only the first part (i.e. the first few fields) of the key, I need a specialized serializer. This also means I have to use a different serializer for the materialized view itself (SpecificAvroSerde puts the magic byte and schema ID at the beginning of the byte array), as otherwise the serialized output for the prefix and the key in the materialized view can't be compared. Hence I created a specialised Serde which serializes the key with the same logic as used for serializing the prefix, but omitting the fields not required for the scan (i.e. omitting the last field). The above code now looks like this:
private final static String TOPIC_NAME = "input-topic";
private final static String VIEW_NAME = "materialized-view";
private final SpecificAvroSerde<ProductXrefKey> productXrefKeySerde = new SpecificAvroSerde<>();
private final SpecificAvroSerde<ProductXref> productXrefSerde = new SpecificAvroSerde<>();
private final SpecificAvroSerde<ProductXrefKey> materializedProductXrefKeySerde = new ProductXrefKeySerde();
// for the value part we can still use the standard serde as no change in serialization logic is needed
private final SpecificAvroSerde<ProductXref> materializedProductXrefSerde = new SpecificAvroSerde<>();
// telling the serializer to cut off the last field
private final SpecificAvroSerde<ProductXrefKey> prefixScanProductXrefSerde = new ProductXrefKeySerde(true);
final Map<String, Object> props = this.kafkaProperties.buildStreamsProperties();
productXrefKeySerde.configure(props, true);
productXrefSerde.configure(props, false);
KTable<ProductXrefKey, ProductXref> productXrefTable = builder
.table(TOPIC_NAME, Consumed.with(productXrefKeySerde, productXrefSerde),
Materialized.<ProductXrefKey, ProductXref, KeyValueStore<Bytes, byte[]>>as(VIEW_NAME)
.withKeySerde(materializedProductXrefKeySerde)
.withValueSerde(materializedProductXrefSerde));
<…>
final ReadOnlyKeyValueStore<ProductXrefKey, ProductXref> store =
streamsBuilderFactoryBean.getKafkaStreams().store(fromNameAndType(VIEW_NAME, keyValueStore()));
try (KeyValueIterator<ProductXrefKey, ProductXref> range = store.prefixScan(prefixKey, prefixScanProductXrefSerde.serializer())) {
if (range != null) {
range.forEachRemaining(kv -> {
<…>
});
} else {
log.info("Could not find {} in local ReadOnlyKeyValueStore {}", prefixKey, viewName);
}
}
My assumption was that the topic gets deserialized using the SpecificAvroSerde and then gets serialized for the view using my ProductXrefKeySerde. The problem is that the content in the materialized view is still serialized using the same logic as in the original topic. It appears that my serializer is never used while the topic is processed and stored in the materialized view. I can verify that on the file system as well and see that the keys in the RocksDB files are serialized with the magic byte and schema ID, and hence prefixScan won't be able to find anything.
How can I change the serialization format for the materialized view?
Or is there a better way for serializing a prefix AVRO object?
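For context, here is a rough sketch of the kind of prefix-aware key serializer described above. It is an illustration only: the field names (getProductId(), getVariant()), the constructor flag, and the plain string encoding are assumptions, and the real ProductXrefKeySerde extends SpecificAvroSerde rather than implementing Serializer directly.
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Serializer;

public class ProductXrefKeyPrefixSerializer implements Serializer<ProductXrefKey> {

    // When true, the last key field is omitted so the output can be used as a scan prefix
    private final boolean cutOffLastField;

    public ProductXrefKeyPrefixSerializer(boolean cutOffLastField) {
        this.cutOffLastField = cutOffLastField;
    }

    @Override
    public byte[] serialize(String topic, ProductXrefKey key) {
        if (key == null) {
            return null;
        }
        // No Confluent magic byte or schema ID here, just a stable, comparable encoding
        StringBuilder sb = new StringBuilder();
        sb.append(key.getProductId());
        if (!cutOffLastField) {
            sb.append('|').append(key.getVariant());
        }
        return sb.toString().getBytes(StandardCharsets.UTF_8);
    }
}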
It appears that there is some optimization happening which avoids the deserialization/serialization round trip if the KTable is materialized directly. I've changed the logic so that it consumes the topic as a KStream and then creates the KTable from it (toTable(...)):
KTable<ProductXrefKey, ProductXref> productXrefTable = builder
    .stream(TOPIC_NAME, Consumed.with(productXrefKeySerde, productXrefSerde))
    .toTable(Materialized.<ProductXrefKey, ProductXref, KeyValueStore<Bytes, byte[]>>as(VIEW_NAME)
        .withKeySerde(materializedProductXrefKeySerde)
        .withValueSerde(materializedProductXrefSerde));
With this small change, data now gets deserialized (using SpecificAvroSerde<>) and serialized again using the provided ProductXrefKeySerde. Now also the prefix scan works and returns the records as expected.

Graph traversal name to graph name mapping

Is there any API using which I can get graphTraversalName to graphName mapping defined in the script?
I am using the below messy code but it's error-prone if both graphs are using the same underlying storage.
Map<String, String> graphTraversalToNameMap = new ConcurrentHashMap<String, String>();
Map<String, String> graphNameTraversalMap = new HashMap<String, String>();

Iterator<String> traversalSourceIterator = graphManager.getTraversalSourceNames().iterator();
while (traversalSourceIterator.hasNext()) {
    String traversalSource = traversalSourceIterator.next();
    String currentGraphString = ((GraphTraversalSource) graphManager.getAsBindings().get(traversalSource)).getGraph().toString();
    graphNameTraversalMap.put(currentGraphString, traversalSource);
}

Iterator<String> graphNamesIterator = graphManager.getGraphNames().iterator();
while (graphNamesIterator.hasNext()) {
    String graphName = graphNamesIterator.next();
    String currentGraphString = graphManager.getGraph(graphName).toString();
    String traversalSource = graphNameTraversalMap.get(currentGraphString);
    graphTraversalToNameMap.put(traversalSource, graphName);
}
Does gremlinExecutor.getScriptEngineManager().getBindings().entrySet() provide an order guarantee? If so, I could iterate over it and populate my map.
Is there any API using which I can get graphTraversalName to graphName mapping defined in the script?
No. They share the same namespace in Gremlin Server so the relationship gets lost programmatically. You would need to do something like what you are doing but I wouldn't rely on toString() of a Graph for equality. Perhaps use the Graph instance itself? Although that might not work either depending on your situation and what you want for equality as you could have two different Graph configurations pointed at the same data and want to resolve those as the same graph. I'm also not sure that any approach will work generally for all graph systems. Anyway, I think I'd experiment with using Map<Graph, String> graphTraversalToNameMap for your case and see how that goes.
Does gremlinExecutor.getScriptEngineManager().getBindings().entrySet() provide order guarantee?
No, as it is backed by a ConcurrentHashMap. You would have to provide your own order.
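As a rough sketch of the "use the Graph instance itself" idea above (whether instance identity is the right notion of equality depends on your setup), the intermediate map can be keyed by the Graph rather than by its toString() value:
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.tinkerpop.gremlin.server.GraphManager;
import org.apache.tinkerpop.gremlin.structure.Graph;

public final class GraphNameResolver {
    // Builds traversal-source-name -> graph-name, keyed internally by the Graph instance
    public static Map<String, String> resolve(GraphManager graphManager) {
        Map<Graph, String> graphToTraversalSource = new HashMap<>();
        for (String traversalSource : graphManager.getTraversalSourceNames()) {
            graphToTraversalSource.put(
                graphManager.getTraversalSource(traversalSource).getGraph(), traversalSource);
        }
        Map<String, String> graphTraversalToNameMap = new ConcurrentHashMap<>();
        for (String graphName : graphManager.getGraphNames()) {
            String traversalSource = graphToTraversalSource.get(graphManager.getGraph(graphName));
            if (traversalSource != null) {
                graphTraversalToNameMap.put(traversalSource, graphName);
            }
        }
        return graphTraversalToNameMap;
    }
}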
Underlying storage details can be obtained from the configuration object and can be used for the mapping, sample code:
public class GraphTraversalMappingUtil {

    private static final Map<String, String> graphTraversalToNameMap = new ConcurrentHashMap<String, String>();

    public static void populateGraphTraversalToNameMapping(GraphManager graphManager) {
        if (graphTraversalToNameMap.size() != 0) {
            return;
        }

        Iterator<String> traversalSourceIterator = graphManager.getTraversalSourceNames().iterator();
        Map<StorageBackendKey, String> storageKeyToTraversalMap = new HashMap<StorageBackendKey, String>();
        while (traversalSourceIterator.hasNext()) {
            String traversalSource = traversalSourceIterator.next();
            StorageBackendKey key = new StorageBackendKey(
                graphManager.getTraversalSource(traversalSource).getGraph().configuration());
            storageKeyToTraversalMap.put(key, traversalSource);
        }

        Iterator<String> graphNamesIterator = graphManager.getGraphNames().iterator();
        while (graphNamesIterator.hasNext()) {
            String graphName = graphNamesIterator.next();
            StorageBackendKey key = new StorageBackendKey(
                graphManager.getGraph(graphName).configuration());
            graphTraversalToNameMap.put(storageKeyToTraversalMap.get(key), graphName);
        }
    }
}
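The StorageBackendKey class is not shown in the snippet above; a rough sketch of what it could look like follows. The configuration property names used for equality ("storage.backend", "storage.directory") are assumptions for illustration only, see the pastebin link below for the full version.
// Hypothetical sketch: equality is based on storage-related configuration entries,
// so two Graph objects backed by the same storage compare equal.
// (Configuration is org.apache.commons.configuration.Configuration in older
// TinkerPop versions, configuration2 in newer ones.)
public class StorageBackendKey {

    private final String storageBackend;
    private final String storageDirectory;

    public StorageBackendKey(Configuration configuration) {
        this.storageBackend = configuration.getString("storage.backend", "");
        this.storageDirectory = configuration.getString("storage.directory", "");
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) {
            return true;
        }
        if (!(other instanceof StorageBackendKey)) {
            return false;
        }
        StorageBackendKey that = (StorageBackendKey) other;
        return storageBackend.equals(that.storageBackend)
            && storageDirectory.equals(that.storageDirectory);
    }

    @Override
    public int hashCode() {
        return 31 * storageBackend.hashCode() + storageDirectory.hashCode();
    }
}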
For full code, refer: https://pastebin.com/7m8hi53p

Java 8 map with Map.get nullPointer Optimization

public class StartObject {
    private Something something;
    private Set<ObjectThatMatters> objectThatMattersSet;
}

public class Something {
    private Set<SomeObject> someObjectSet;
}

public class SomeObject {
    private AnotherObject anotherObject;
}

public class AnotherObject {
    private Set<ObjectThatMatters> objectThatMattersSet;
}

public class ObjectThatMatters {
    private Long id;
}
private void someMethod(StartObject startObject) {
    Map<Long, ObjectThatMatters> objectThatMattersMap = startObject.getSomething()
        .getSomeObject().stream()
        .map(SomeObject::getAnotherObject)
        .flatMap(anotherObject -> anotherObject.getObjectThatMattersSet().stream())
        .collect(Collectors.toMap(ObjectThatMatters::getId, Function.identity()));

    Set<ObjectThatMatters> dbObjectThatMatters = new HashSet<>();
    try {
        dbObjectThatMatters.addAll(startObject.getObjectThatMatters().stream()
            .map(objectThatMatters -> objectThatMattersMap.get(objectThatMatters.getId()))
            .collect(Collectors.toSet()));
    } catch (NullPointerException e) {
        throw new SomeCustomException();
    }
    startObject.setObjectThatMattersSet(dbObjectThatMatters);
}
Given a StartObject that contains a set of ObjectThatMatters
And a Something that contains the database structure already fetched filled with all valid ObjectThatMatters.
When I want to swap the StartObject set of ObjectThatMatters to the valid corresponding db objects that only exist in the scope of the Something
Then I compare the set of ObjectThatMatters on the StartObject
And replace every one of them with the valid ObjectThatMatters inside the Something object
And if some ObjectThatMatters doesn't have a valid counterpart, I throw a SomeCustomException
This someMethod seems pretty horrible; how can I make it more readable?
I already tried to change the try/catch to an Optional, but that doesn't actually help.
I used a Map instead of a List with List.contains because of performance; was this a good idea? The total number of ObjectThatMatters will usually be 500.
I'm not allowed to change the structure of the other classes, and I'm only showing the fields that affect this method, not every field, since they are extremely rich objects.
You don’t need a mapping step at all. The first operation, which produces a Map, can be used to produce the desired Set in the first place. Since there might be more objects than you are interested in, you may perform a filter operation.
So first, collect the IDs of the desired objects into a set, then collect the corresponding db objects, filtering by the Set of IDs. You can verify whether all IDs have been found, by comparing the resulting Set’s size with the ID Set’s size.
private void someMethod(StartObject startObject) {
Set<Long> id = startObject.getObjectThatMatters().stream()
.map(ObjectThatMatters::getId).collect(Collectors.toSet());
HashSet<ObjectThatMatters> objectThatMattersSet =
startObject.getSomething().getSomeObject().stream()
.flatMap(so -> so.getAnotherObject().getObjectThatMattersSet().stream())
.filter(obj -> id.contains(obj.getId()))
.collect(Collectors.toCollection(HashSet::new));
if(objectThatMattersSet.size() != id.size())
throw new SomeCustomException();
startObject.setObjectThatMattersSet(objectThatMattersSet);
}
This code produces a HashSet; if this is not a requirement, you can just use Collectors.toSet() to get an arbitrary Set implementation.
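For example, under that relaxed requirement the collect step of the code above could simply become:
Set<ObjectThatMatters> objectThatMattersSet =
    startObject.getSomething().getSomeObject().stream()
        .flatMap(so -> so.getAnotherObject().getObjectThatMattersSet().stream())
        .filter(obj -> id.contains(obj.getId()))
        .collect(Collectors.toSet());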
It’s even easy to find out which IDs were missing:
private void someMethod(StartObject startObject) {
Set<Long> id = startObject.getObjectThatMatters().stream()
.map(ObjectThatMatters::getId)
.collect(Collectors.toCollection(HashSet::new));// ensure mutable Set
HashSet<ObjectThatMatters> objectThatMattersSet =
startObject.getSomething().getSomeObject().stream()
.flatMap(so -> so.getAnotherObject().getObjectThatMattersSet().stream())
.filter(obj -> id.contains(obj.getId()))
.collect(Collectors.toCollection(HashSet::new));
if(objectThatMattersSet.size() != id.size()) {
objectThatMattersSet.stream().map(ObjectThatMatters::getId).forEach(id::remove);
throw new SomeCustomException("The following IDs were not found: "+id);
}
startObject.setObjectThatMattersSet(objectThatMattersSet);
}

Use MEF to compose parts but postpone the creation of the parts

As explained in these questions I'm trying to build an application that consists of a host and multiple task processing clients. With some help I have figured out how to discover and serialize part definitions so that I could store those definitions without having to have the actual runtime type loaded.
The next step I want to achieve (or next two steps really) is that I want to split the composition of parts from the actual creation and connection of the objects (represented by those parts). So if I have a set of parts then I would like to be able to do the following thing (in pseudo-code):
public sealed class Host
{
public CreationScript Compose()
{
CreationScript result;
var container = new DelayLoadCompositionContainer(
s => result = s);
container.Compose();
return result;
}
public static void Main()
{
var script = Compose();
// Send the script to the client application
SendToClient(script);
}
}
// Lives inside other application
public sealed class Client
{
public void Load(CreationScript script)
{
var container = new ScriptLoader(script);
container.Load();
}
public static void Main(string scriptText)
{
var script = new CreationScript(scriptText);
Load(script);
}
}
So that way I can compose the parts in the host application, but actually load the code and execute it in the client application. The goal is to put all the smarts of deciding what to load in one location (the host) while the actual work can be done anywhere (by the clients).
Essentially what I'm looking for is some way of getting the ComposablePart graph that MEF implicitly creates.
Now my question is if there are any bits in MEF that would allow me to implement this kind of behaviour? I suspect that the provider model may help me with this but that is a rather large and complex part of MEF so any guidelines would be helpful.
From lots of investigation it seems that it is not possible to separate the composition process from the instantiation process in MEF, so I have had to create my own approach for this problem. The solution assumes that the scanning of plugins results in the type, import and export data being stored somehow.
In order to compose parts you need to keep track of each part instance and how it is connected to other part instances. The simplest way to do this is to make use of a graph data structure that keeps track of which import is connected to which export.
public sealed class CompositionCollection
{
private readonly Dictionary<PartId, PartDefinition> m_Parts;
private readonly Graph<PartId, PartEdge> m_PartConnections;
public PartId Add(PartDefinition definition)
{
var id = new PartId();
m_Parts.Add(id, definition);
m_PartConnections.AddVertex(id);
return id;
}
public void Connect(
PartId importingPart,
MyImportDefinition import,
PartId exportingPart,
MyExportDefinition export)
{
// Assume that edges point from the export to the import
m_PartConnections.AddEdge(
new PartEdge(
exportingPart,
export,
importingPart,
import));
}
}
Note that before connecting two parts it is necessary to check whether the import can be connected to the export. Normally MEF does that for us, but in this case we'll need to do it ourselves. An example of how to approach that is:
public bool Accepts(
MyImportDefinition importDefinition,
MyExportDefinition exportDefinition)
{
if (!string.Equals(
importDefinition.ContractName,
exportDefinition.ContractName,
StringComparison.OrdinalIgnoreCase))
{
return false;
}
// Determine what the actual type is we're importing. MEF provides us with
// that information through the RequiredTypeIdentity property. We'll
// get the type identity first (e.g. System.String)
var importRequiredType = importDefinition.RequiredTypeIdentity;
// Once we have the type identity we need to get the type information
// (still in serialized format of course)
var importRequiredTypeDef =
m_Repository.TypeByIdentity(importRequiredType);
// Now find the type we're exporting
var exportType = ExportedType(exportDefinition);
if (AvailableTypeMatchesRequiredType(importRequiredType, exportType))
{
return true;
}
// The import and export can't directly be mapped so maybe the import is a
// special case. Try those
Func<TypeIdentity, TypeDefinition> toDefinition =
t => m_Repository.TypeByIdentity(t);
if (ImportIsCollection(importRequiredTypeDef, toDefinition)
&& ExportMatchesCollectionImport(
importRequiredType,
exportType,
toDefinition))
{
return true;
}
if (ImportIsLazy(importRequiredTypeDef, toDefinition)
&& ExportMatchesLazyImport(importRequiredType, exportType))
{
return true;
}
if (ImportIsFunc(importRequiredTypeDef, toDefinition)
&& ExportMatchesFuncImport(
importRequiredType,
exportType,
exportDefinition))
{
return true;
}
if (ImportIsAction(importRequiredTypeDef, toDefinition)
&& ExportMatchesActionImport(importRequiredType, exportDefinition))
{
return true;
}
return false;
}
Note that the special cases (like IEnumerable<T>, Lazy<T> etc.) require determining if the importing type is based on a generic type which can be a bit tricky.
Once all the composition information is stored it is possible to do the instantiation of the parts at any point in time because all the required information is available. Instantiation requires a generous helping of reflection combined with the use of the trusty Activator class and will be left as an exercise to the reader.

How to add options for Analyze in Apache Lucene?

Lucene has Analyzers that basically tokenize and filter the corpus when indexing. Operations include converting tokens to lowercase, stemming, removing stopwords, etc.
I'm running an experiment where I want to try all possible combinations of analysis operations: stemming only, stopping only, stemming and stopping, ...
In total, there are 36 combinations that I want to try.
How can I easily and gracefully do this?
I know that I can extend the Analyzer class and implement the tokenStream() function to create my own Analyzer:
public class MyAnalyzer extends Analyzer
{
    public TokenStream tokenStream(String field, final Reader reader) {
        return new NameFilter(
            new CaseNumberFilter(
                new StopFilter(
                    new LowerCaseFilter(
                        new StandardFilter(
                            new StandardTokenizer(reader)
                        )
                    ),
                    StopAnalyzer.ENGLISH_STOP_WORDS
                )
            )
        );
    }
}
What I'd like to do is write one such class, which can somehow take boolean values for each of the possible operations (doStopping, doStemming, etc.). I don't want to have to write 36 different Analyzer classes that each perform one of the 36 combinations. What makes it difficult is the way the filters are all combined together in their constructors.
Any ideas on how to do this gracefully?
EDIT: By "gracefully", I mean that I can easily create a new Analyzer in some sort of loop:
analyzer = new MyAnalyzer(doStemming, doStopping, ...)
where doStemming and doStopping change with each loop iteration.
Solr solves this problem by using Tokenizer and TokenFilter factories. You could do the same, for example:
public interface TokenizerFactory {
Tokenizer newTokenizer(Reader reader);
}
public interface TokenFilterFactory {
TokenFilter newTokenFilter(TokenStream source);
}
public class ConfigurableAnalyzer {
private final TokenizerFactory tokenizerFactory;
private final List<TokenFilterFactory> tokenFilterFactories;
public ConfigurableAnalyzer(TokenizerFactory tokenizerFactory, TokenFilterFactory... tokenFilterFactories) {
this.tokenizerFactory = tokenizerFactory;
this.tokenFilterFactories = Arrays.asList(tokenFilterFactories);
}
public TokenStream tokenStream(String field, Reader source) {
TokenStream sink = tokenizerFactory.newTokenizer(source);
for (TokenFilterFactory tokenFilterFactory : tokenFilterFactories) {
sink = tokenFilterFactory.newTokenFilter(sink);
}
return sink;
}
}
This way, you can configure your analyzer by passing a factory for one tokenizer and 0 to n filters as constructor arguments.
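For instance, one of the 36 combinations could then be wired up like this (a sketch using Java 8 lambdas; it reuses the filter classes that appear elsewhere in this question and assumes a STOP_WORDS array, the text to analyze, and the usual Lucene imports are in scope):
// Lower-casing tokenizer + stop-word removal + Porter stemming
ConfigurableAnalyzer analyzer = new ConfigurableAnalyzer(
    reader -> new LowerCaseTokenizer(reader),
    source -> new StopFilter(true, source, StopFilter.makeStopSet(STOP_WORDS)),
    source -> new PorterStemFilter(source)
);
TokenStream tokens = analyzer.tokenStream("contents", new StringReader(text));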
Add some class variables to the custom Analyzer class which can be easily set and unset on the fly. Then, in the tokenStream() function, use those variables to determine which filters to perform.
public class MyAnalyzer extends Analyzer {
    private Set customStopSet;
    public static final String[] STOP_WORDS = ...;

    private boolean doStemming = false;
    private boolean doStopping = false;

    public MyAnalyzer() {
        super();
        customStopSet = StopFilter.makeStopSet(STOP_WORDS);
    }

    public void setDoStemming(boolean val) {
        this.doStemming = val;
    }

    public void setDoStopping(boolean val) {
        this.doStopping = val;
    }

    public TokenStream tokenStream(String fieldName, Reader reader) {
        // First, convert to lower case
        TokenStream out = new LowerCaseTokenizer(reader);
        if (this.doStopping) {
            out = new StopFilter(true, out, customStopSet);
        }
        if (this.doStemming) {
            out = new PorterStemFilter(out);
        }
        return out;
    }
}
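With those setters in place, the loop from the question's edit could look roughly like this (a sketch; the indexing call is a hypothetical placeholder):
MyAnalyzer analyzer = new MyAnalyzer();
for (boolean doStemming : new boolean[] { false, true }) {
    for (boolean doStopping : new boolean[] { false, true }) {
        analyzer.setDoStemming(doStemming);
        analyzer.setDoStopping(doStopping);
        // run the experiment for this combination, e.g. rebuild the index
        // runExperiment(analyzer);  // hypothetical helper
    }
}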
There is one gotcha: LowerCaseTokenizer takes as input the reader variable, and returns a TokenStream. This is fine for the following filters (StopFilter, PorterStemFilter), because they take TokenStreams as input and return them as output, and so we can chain them together nicely. However, this means you can't have a filter before the LowerCaseTokenizer that returns a TokenStream. In my case, I wanted to split camelCase words into parts, and this has to be done before converting to lower case. My solution was to perform the splitting manually in the custom Indexer class, so by the time MyAnalyzer sees the text, it has already been split.
(I have also added a boolean flag to my custom Indexer class, so now both can work based solely on flags.)
Is there a better answer?