I need to implement composite keys in Hyperledger Fabric so that I can have a unique key based on the attributes put into the ledger. The function CreateCompositeKey(objectType string, attributes []string) (string, error)
takes in an objectType string and a slice of attribute strings. I couldn't find any examples of this online. How are the relevant attributes to be made into the composite key passed, and in what way is the output given?
So is the way composite keys should be used to make a key first and then push it to the blockchain with PutState(key string, value []byte) error, where the key in PutState is the output of CreateCompositeKey? If not, then how are composite keys to be used?
Similarly in
GetStateByPartialCompositeKey(objectType string, keys []string) (StateQueryIteratorInterface, error)
How are the keys we want to query by passed to the function? And what are the output data types "StateQueryIteratorInterface" and "HistoryQueryIteratorInterface"?
I am fairly new to programming and have no prior knowledge of databases, so I'm getting confused by really basic things. I'd really appreciate some help!
In Hyperledger Fabric there is an example chaincode which shows how to use composite keys, check it out: Marbles
Basically, it's almost as you said:
key, err := stub.CreateCompositeKey(index, []string{key1, key2, key3})
// skipped
stub.PutState(key, value)
The function simply creates a key by combining the attributes into a single string. Its application is where we need to store multiple instances of one type on the ledger. The keys of those instances are constructed from a combination of attributes, for example "Order" + ID, yielding ["Order1", "Order2", ...].
This function comes in handy when you intend to search for assets based on components of the key in range queries.
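For instance, here is a sketch of the full round trip, assuming the standard Go shim API; the "Order" object type and the attribute values are illustrative:

// Assumes: import "github.com/hyperledger/fabric/core/chaincode/shim"
// (the exact import path varies between Fabric versions).
func storeAndQueryOrders(stub shim.ChaincodeStubInterface, owner string) error {
	// Build the key from an object type plus the attribute values, in order.
	key, err := stub.CreateCompositeKey("Order", []string{owner, "order1"})
	if err != nil {
		return err
	}
	// The composite key is an ordinary string, so PutState accepts it directly.
	if err := stub.PutState(key, []byte(`{"status":"open"}`)); err != nil {
		return err
	}

	// Query by a prefix of the attributes: here, all orders of this owner.
	iter, err := stub.GetStateByPartialCompositeKey("Order", []string{owner})
	if err != nil {
		return err
	}
	defer iter.Close()

	for iter.HasNext() {
		kv, err := iter.Next() // kv.Key is the composite key, kv.Value the stored bytes
		if err != nil {
			return err
		}
		_ = kv // process the result here
	}
	return nil
}

StateQueryIteratorInterface is just this kind of HasNext/Next/Close cursor over the matching key/value pairs; HistoryQueryIteratorInterface is the analogous cursor over a key's history.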
CreateCompositeKey in the shim constructs a composite key (indeed, a unique key) based on a combination of several attributes.
The inverse function is SplitCompositeKey, which splits the composite key back into its attributes:
func SplitCompositeKey(compositeKey string) (string, []string, error)
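In the shim interface this is available as a method on the stub. A minimal sketch, with illustrative values, assuming key was built with stub.CreateCompositeKey("Order", []string{"alice", "order1"}):

objectType, attributes, err := stub.SplitCompositeKey(key)
// objectType == "Order", attributes == []string{"alice", "order1"}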
The 'TestTradeWorkflow_Agreement' function in this code is also useful for understanding the whole process:
For the Java implementation this did not come out clearly in the documentation/examples; after a little digging around, you can use 'compositeKey.toString()' as the composite key.
Example below:
final String compositeKey = stub.createCompositeKey("my-key-part-1", "my-key-part-2").toString();
stub.putStringState(compositeKey, myJSONString); // use this
stub.putState(compositeKey, myJSONString.getBytes()); // or this
I need a key-value store (e.g. a Map or a custom class) which only allows keys from a previously defined set, e.g. only the keys ["apple", "orange"]. Is there anything like this built into Kotlin? Otherwise, how could one do this? Maybe like the following code?
class KeyValueStore(val allowedKeys: List<String>) {
    private val map = mutableMapOf<String, Any>()

    fun add(key: String, value: Any) {
        if (!allowedKeys.contains(key))
            throw Exception("key $key not allowed")
        map.put(key, value)
    }

    // code for reading keys, like get(key: String) and getKeys()
}
The best solution for your problem would be to use an enum, which provides exactly the functionality that you're looking for. According to the docs, you can declare an enum like so:
enum class AllowedKeys {
    APPLE, ORANGE
}
then, you could declare the keys with your enum!
Since the keys are known at compile time, you could simply use an enum instead of String as the keys of a regular Map:
enum class Fruit {
    APPLE, ORANGE
}
val fruitMap = mutableMapOf<Fruit, String>()
Instead of Any, use whatever type you actually need for your values; otherwise the map isn't convenient to use.
If the types of the values depend on the key (a heterogeneous map), then I would first seriously consider using a regular class with your "keys" as properties. You can access the list of properties via reflection if necessary.
Another option is to define a generic key class, so the get function returns a type that depends on the type parameter of the key (see how CoroutineContext works in Kotlin coroutines).
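A minimal sketch of that generic-key idea (all names here are illustrative; this is not a standard library API):

class Key<T>(val name: String)

class TypedStore {
    private val map = mutableMapOf<Key<*>, Any?>()

    operator fun <T> set(key: Key<T>, value: T) {
        map[key] = value
    }

    @Suppress("UNCHECKED_CAST")
    operator fun <T> get(key: Key<T>): T? = map[key] as T?
}

val nameKey = Key<String>("name")
val countKey = Key<Int>("count")

fun main() {
    val store = TypedStore()
    store[nameKey] = "apple"
    store[countKey] = 3
    val n: String? = store[nameKey] // the key's type parameter fixes the value type
}

The unchecked cast is safe as long as values only enter through set, which is the same trick CoroutineContext-style stores rely on.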
For reference, it's possible to do this if you don't know the set of keys until runtime. But it involves writing quite a bit of code; I don't think there's an easy way.
(I wrote my own Map class for this. We needed a massive number of these maps in memory, each with the same 2 or 3 keys, so I ended up writing a Map implementation pretty much from scratch: it used a passed-in array of keys — so all maps could share the same key array — and a private array of values, the same size. The code was quite long, but pretty simple. Most operations meant scanning the list of keys to find the right index, so the theoretic performance was dire; but since the list was always extremely short, it performed really well in practice. And it saved GBs of memory compared to using HashMap. I don't think I have the code any more, and it'd be far too long to post here, but I hope the idea is interesting.)
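If it helps, here is a rough sketch of that idea (not the original code; names are illustrative):

class SharedKeyMap<V>(private val keys: Array<String>) {
    // one small value array per map; the key array is shared between instances
    private val values = arrayOfNulls<Any>(keys.size)

    operator fun get(key: String): V? {
        val i = keys.indexOf(key) // linear scan, cheap for 2-3 keys
        @Suppress("UNCHECKED_CAST")
        return if (i >= 0) values[i] as V? else null
    }

    operator fun set(key: String, value: V) {
        val i = keys.indexOf(key)
        require(i >= 0) { "key $key not allowed" }
        values[i] = value
    }
}

Thousands of instances can then share a single arrayOf("apple", "orange"), which is where the memory saving comes from.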
I need some clarification; maybe someone can help me with the pinned and null entities and .from(). I understand that I need to use .fromUnfiltered() to get them in the stream. But what about .fromUniquePair(): are entities propagated down the stream if they are null and pinned? Similarly, if I use .fromUnfiltered() with a .join(), will the join() take the null and pinned entities in the second class?
Thank you!
Pinning has no effect on Constraint Streams - both from() and join() will always include pinned entities. Let's therefore concentrate on the uninitialized entities.
The thing to understand about fromUniquePair(Something.class) is that it is a shorthand for the following:
from(Something.class)
    .join(Something.class, ...) // Replace "..." with joiners to get unique pairs.
Therefore, both the left and the right will retrieve only initialized entities. If you want unique pairs including uninitialized entities, you will have to give up the shorthand and use a nested stream:
fromUnfiltered(Something.class)
    .join(fromUnfiltered(Something.class), ...) // Replace "..." with the same joiners as above.
Extensible records were one of Elm's most amazing features, but since v0.16 adding and removing fields is no longer available. This puts me in an awkward position.
Consider an example. I want to give a name to a random thing t, and extensible records provide me a perfect tool for this:
type alias Named t = { t | name: String }
"Okay," says the compiler. Now I need a constructor, i.e. a function that equips a thing with a specified name:
equip : String -> t -> Named t
equip name thing = { thing | name = name } -- Oops! Type mismatch
Compilation fails, because the { thing | name = ... } syntax assumes thing to be a record with a name field, but the type system can't assure this. In fact, with Named t I've tried to express the opposite: t should be a record type without its own name field, and the function adds this field to the record. Either way, field addition is necessary to implement the equip function.
So it seems impossible to write equip in a polymorphic manner, but that's probably not such a big deal. After all, any time I'm going to give a name to some concrete thing I can do it by hand. Much worse, the inverse function extract : Named t -> t (which erases the name of a named thing) requires a field removal mechanism, and thus is not implementable either:
extract : Named t -> t
extract thing = thing -- Error: No implicit upcast
It would be an extremely important function, because I have tons of routines that accept old-fashioned unnamed things, and I need a way to use them with named things. Of course, massive refactoring of those functions is not an acceptable solution.
At last, after this long introduction, let me state my questions:
Does modern Elm provide some substitute for the old deprecated field addition/removal syntax?
If not, is there some built-in function like equip and extract above? For every custom extensible record type, I would like to have a polymorphic analyzer (a function that extracts its base part) and a polymorphic constructor (a function that combines the base part with the additive part and produces the record).
Negative answers for both (1) and (2) would force me to implement Named t in a more traditional way:
type Named t = Named String t
In this case, I can't see the purpose of extensible records. Is there a positive use case, a scenario in which extensible records play a critical role?
Type { t | name : String } means a record that has a name field. It does not extend the t type but, rather, extends the compiler’s knowledge about t itself.
So in fact the type of equip is String -> { t | name : String } -> { t | name : String }.
What is more, as you noticed, Elm no longer supports adding fields to records, so even if the type system allowed what you want, you still could not do it. The { thing | name = name } syntax only supports updating records of type { t | name : String }.
Similarly, there is no support for deleting fields from a record.
If you really need types from which you can add or remove fields, you can use Dict. The other options are either writing the transformers manually, or creating and using a code generator (this was the recommended solution for JSON decoding boilerplate for a while).
And regarding extensible records: Elm does not really support the “extensible” part much any more. The only remaining part is the { t | name : u } -> u projection, so perhaps they should be called just scoped records. The Elm docs themselves acknowledge that the extensibility is not very useful at the moment.
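That projection is still handy for constraining inputs, i.e. "any record that has at least a name field". A minimal sketch:

describe : { t | name : String } -> String
describe thing =
    "This is " ++ thing.name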
You could just wrap the t type with a name, but it wouldn't make a big difference compared to the approach with a custom type:
type alias Named t = { val: t, name: String }
equip : String -> t -> Named t
equip name thing = { val = thing, name = name }
extract : Named t -> t
extract thing = thing.val
Is there a positive use case, a scenario in which extensible records play critical role?
Yes, they are useful when your application Model grows too large and you face the question of how to scale out your application. Extensible records let you slice up the model in arbitrary ways, without committing to particular slices long term. If you sliced it up by splitting it into several smaller nested records, you would be committed to that particular arrangement - which might tend to lead to nested TEA and the 'out message' pattern; usually a bad design choice.
Instead, use extensible records to describe slices of the model, and group functions that operate over particular slices into their own modules. If you later need to work across different areas of the model, you can create a new extensible record for that.
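A minimal sketch of the idea (the field names are illustrative):

type alias Model =
    { name : String
    , email : String
    , cartItems : List String
    }

-- the slice that user-related functions depend on
type alias UserSlice t =
    { t | name : String, email : String }

describeUser : UserSlice t -> String
describeUser m =
    m.name ++ " <" ++ m.email ++ ">"

-- the full Model satisfies the slice without committing to any nesting
greeting : Model -> String
greeting model =
    describeUser model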
It's described by Richard Feldman in his Scaling Elm Apps talk:
https://www.youtube.com/watch?v=DoA4Txr4GUs&ab_channel=ElmEurope
I agree that extensible records can seem a bit useless in Elm, but it is a very good thing they exist: they solve the scaling issue in the best way.
We have several aggregate roots that have two primary means of identification:
an integer "key", which is used as a primary key in the database (and as a foreign key by referencing aggregates) and internally within the application, and is not accessible via the public web API.
a string-based "id", which also uniquely identifies the aggregate root and is accessible via the public web API.
There are several reasons for having an integer-based private identifier and a string-based public identifier; for example, the database performs better (8-byte integers as opposed to variable-length strings) and the public identifiers are difficult to guess.
However, the classes internally reference each other using the integer-based identifiers, and if an integer-based identifier is 0, this signifies that the object hasn't yet been stored in the database. This creates a problem, in that entities are not able to reference other aggregate roots until after they have been saved.
How does one get around this problem, or is there a flaw in my understanding of persistence ignorance?
EDIT regarding string-based identifiers
The string-based identifiers are generated by the repository, which is connected to a PostgreSQL database and ensures that a new identifier does not clash with anything currently in the database. For example:
class Customer {
    public function __construct($customerKey, $customerId, $name) {
        $this->customerKey = $customerKey;
        $this->customerId = $customerId;
        $this->name = $name;
    }
}
function test(Repository $repository, UnitOfWork $unitOfWork) {
    $customer = new Customer(0, $repository->generateCustomerId(), "John Doe");
    // $customer->customerKey == 0
    $unitOfWork->saveCustomer($customer);
    // $customer->customerKey != 0
}
I assume that the same concept could be used to create an entity with an integer-based key of non-0, and the Unit of Work could use the fact that it doesn't exist in the database as a reason to INSERT rather than UPDATE. The test() function above would then become:
function test(Repository $repository, UnitOfWork $unitOfWork) {
    $customer = new Customer($repository->generateCustomerKey(), $repository->generateCustomerId(), "John Doe");
    // $customer->customerKey != 0
    $unitOfWork->saveCustomer($customer);
    // $customer->customerKey still != 0
}
However, given the above, errors may occur if the Unit of Work does not save the database objects in the correct order. Is the way to get around this to ensure that the Unit of Work saves entities in the correct order?
I hope the above edit clarifies my situation.
It's a good approach to look at Aggregates as consistency boundaries. In other words, two different aggregates have separate lifecycles and you should refrain from tying their fates together inside the same transaction. From that axiom you can safely state that no aggregate A will ever have an ID of 0 when looked at from another aggregate B's perspective, because either the transaction that creates A has not finished yet and it is not visible by B, or it has completed and A has an ID.
Regarding the double identity, I'd rather have the string ID generated by the language than by the database, because I suppose coming up with a unique ID would imply a transaction, possibly across multiple tables. Languages can usually generate unique strings with good entropy.
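For example, a sketch of generating the public identifier in the application layer (the CustomerId class and the hex format are illustrative choices, not a prescribed design):

final class CustomerId {
    public static function generate(): string {
        // 16 random bytes -> 32 hex characters: effectively unique and hard to guess
        return bin2hex(random_bytes(16));
    }
}

$customer = new Customer(0, CustomerId::generate(), "John Doe");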
I need to generate an Id for a child object of my document. What is the current syntax for generating a document key?
session.Advanced.Conventions.GenerateDocumentKey(document) is not there anymore. I've found the _documentSession.Advanced.DocumentStore.Conventions.GenerateDocumentKey method, but its signature is weird: I am okay with the default key generation algorithm; I just want to pass an object and receive an Id.
The default implementation of GenerateDocumentKey is to get the "dynamic tag name" for the class and append a slash. For example, class Foo would turn into Foos/, which then goes through the HiLoKeyGenerator so that ids can be assigned on the client side without having to consult the server each time.
If you really want this behavior, you could try to use the HiLoKeyGenerator on your own, but have you considered something simpler? I don't know what your model is, but if the child thing is fully owned by the containing document (which it should be, to be in the same document) then you have several much easier options:
Just use the index within the collection
Keep an int NextChildThingId property on the document and increment it every time you add a ChildThing (see the sketch after this list)
Just use a Guid, although those are no fun to read, type, look at, compare, or speak to someone over the phone.
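For example, option 2 could look like this (a sketch only; Order and OrderLine are illustrative names, not part of the RavenDB API):

using System.Collections.Generic;

public class OrderLine
{
    public int Id { get; set; }
    public string Product { get; set; }
}

public class Order
{
    public string Id { get; set; } // assigned by RavenDB, e.g. "orders/1"
    public int NextChildThingId { get; set; } // counter stored inside the document
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();

    public void AddLine(OrderLine line)
    {
        line.Id = NextChildThingId++; // unique within this document only
        Lines.Add(line);
    }
}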