Hyperledger private data - dynamic access

Is it possible to dynamically set access to private data in Hyperledger Fabric 1.4? Instead of having to list, in the collections file, the organizations that can access a particular collection, is it possible to grant access through chaincode?

I had to do some research on this myself: since Fabric v1.4 it is possible to dynamically add organizations (and thus their peers) to a private data collection. Private data reconciliation ensures that private data written to the collection before a peer joined is delivered to that newly added peer.
In more detail: with the collections file you specify an initial policy for the collection. The endorsement policy for individual keys in the collection can later be updated through the chaincode stub function SetPrivateDataValidationParameter; after such an update, new private data key-value pairs are validated according to the new endorsement policy.
Additionally, if you want to update the collection definition itself, you can specify a new collections file when upgrading the chaincode. The collection definition determines which organizations' peers are allowed to see the data, so changing that membership requires a chaincode upgrade.
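For reference, this is roughly what a collection definition looks like in Fabric 1.4; it is passed via the --collections-config flag when instantiating or upgrading the chaincode, and the policy field is what controls which organizations are members (the collection name and MSP IDs below are illustrative):

[
  {
    "name": "collectionPrivateDetails",
    "policy": "OR('Org1MSP.member', 'Org2MSP.member')",
    "requiredPeerCount": 1,
    "maxPeerCount": 2,
    "blockToLive": 0,
    "memberOnlyRead": true
  }
]

Adding, say, 'Org3MSP.member' to the policy and upgrading the chaincode with the new file is what triggers the reconciliation described above for Org3's peers.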

Implementing an RMW operation in Redis

I would like to maintain comma-separated lists of entries of the form <ip>:<app>, indexed by an account ID. There would be one such list for each user, keyed by their account ID, with the number of users in the millions. This is mainly to track which server in a cluster a user of a certain application is connected to.
Since all servers are written in Java, with Redisson I'm currently doing:
RSet<String> set = client.getSet(accountKey);
and then I can modify the set using some typical Java container APIs supported by Redisson. I basically need three types of updates to these comma separated lists:
Client connects to a new application = append
Client reconnects with existing application to new endpoint = modify
Client disconnects = remove
A new connection would require a change to a field like:
1.1.1.1:foo,2.2.2.2:bar -> 1.1.1.1:foo,2.2.2.2:bar,3.3.3.3:baz
A reconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 3.3.3.3:foo,2.2.2.2:bar
A disconnect would require an update like:
1.1.1.1:foo,2.2.2.2:bar -> 2.2.2.2:bar
As mentioned the fields would be keyed by the account ID of the user.
My question is the following: Without using Redisson how can I implement this "directly" on top of Redis commands? The goal is to allow rewriting certain components in a language different than Java. The cluster handles close to a million requests per second.
I'm actually quite curious how Redisson implements an RSet under the hood, and I haven't had time to dig into it. I guess one option would be to use Lua, but I've never used it with Redis. Any ideas how to efficiently implement these operations on top of Redis in a manner that is easily supported by multiple languages, i.e. without relying on a specific library?
Having actually thought about the problem properly, this can be solved directly with a Redis hash (HSET), where <app> is the field name, the value is the endpoint IP, and the key is the user's account ID.
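A minimal sketch using the Jedis client (just one example; the underlying HSET/HDEL/HGETALL commands map one-to-one to any Redis client in any language, and the key and field names here are illustrative):

import java.util.Map;
import redis.clients.jedis.Jedis;

public class ConnectionTracker {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "account:12345"; // one hash per user account

            // Connect: HSET account:12345 foo 1.1.1.1
            jedis.hset(key, "foo", "1.1.1.1");
            jedis.hset(key, "bar", "2.2.2.2");

            // Reconnect to a new endpoint: the same HSET simply overwrites the field
            jedis.hset(key, "foo", "3.3.3.3");

            // Disconnect: HDEL account:12345 bar
            jedis.hdel(key, "bar");

            // Read back all <app> -> <ip> pairs: HGETALL account:12345
            Map<String, String> apps = jedis.hgetAll(key);
            System.out.println(apps); // {foo=3.3.3.3}
        }
    }
}

Each of the three updates is a single, atomic server-side command, so there is no read-modify-write race and no need for Lua scripting or any library-specific data structure.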

How to do a REST API update the right way?

I am developing a REST API update method for the user profile resource user/profile, and I am unsure which HTTP method I should use. The update contains some required attributes, so it looks more like a PUT request, where the client has to fill in all attributes. But then how can the set of attributes be extended in the future? If I decide to add a new attribute, it will automatically be cleared, because clients do not send it yet.
But what if this new attribute has a default value or is set by another route?
Can I use PUT without requiring a fixed set of attributes, keeping the old data when a value isn't included in the request? Or how is this normally done?
HTTP is an application whose application domain is the transfer of documents over a network -- Webber, 2011.
PUT is the appropriate method to use when "saving" a new version of a document onto a web server.
how can the set of attributes be extended in the future?
You design your schemas to be forward and backward compatible; in practice, what this means is that you can add new optional elements with reasonable default values. When you need to add a new required element, you change the name of the schema.
You'll find prior art in this topic by searching XML literature for must ignore.
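As a made-up illustration of that kind of forward-compatible evolution: an old client that follows a must-ignore rule keeps working against the newer representation, and the server fills in the new optional field when the old client omits it.

Version 1 representation sent by an old client:

{
  "name": "Alice",
  "email": "alice@example.com"
}

Version 2 representation stored by the server, with a new optional attribute and a reasonable default:

{
  "name": "Alice",
  "email": "alice@example.com",
  "locale": "en-US"
}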
You understand correctly: PUT is for complete replacement, so values that you don't include would be lost.
Instead, use the PATCH method, which is for making partial updates. You can update only the properties you include values for.
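For illustration, assuming a JSON representation of the hypothetical user/profile resource, a partial update with JSON Merge Patch (RFC 7396) only carries the fields being changed:

PATCH /user/profile HTTP/1.1
Content-Type: application/merge-patch+json

{
  "displayName": "Alice"
}

A PUT, by contrast, is expected to carry the complete replacement representation, so any attribute the client omits is effectively removed.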

S3 notification when a file is overwritten or deleted

We store our log files on S3, and to meet PCI requirements we have to be notified when someone tampers with them.
How can I be notified every time a PUT request replaces an existing object, or whenever an existing object is deleted? The alert should not fire if a new object is created, unless it replaces an existing one.
S3 does not currently provide overwrite notifications. Deletion notifications were added after the initial launch of the notification feature and can notify you when an object is deleted, but they do not notify you when an object is implicitly replaced by an overwrite.
However, S3 does have functionality to accomplish what you need, in a way that seems superior to what you are contemplating: object versioning and multi-factor authentication for deletion, both discussed here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html
With versioning enabled on the bucket, an overwrite of a file doesn't remove the old version of the file. Instead, each version of the file is identified by an opaque version ID string assigned by S3.
If someone overwrites a file, you would then have two versions of the same file in the bucket -- the original one and the new one -- so you not only have evidence of tampering, you also have the original file, undisturbed. Any object with more than one version in the bucket has, by definition, been overwritten at some point.
If you also enable Multi-Factor Authentication (MFA) Delete, then none of the versions of any object can be removed without access to the hardware or virtual MFA device.
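A minimal sketch of turning versioning on with the AWS SDK for Java (v1); the bucket name, key, and version ID are illustrative, and MFA Delete additionally has to be enabled by the bucket owner's root credentials together with an MFA token:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketVersioningConfiguration;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.SetBucketVersioningConfigurationRequest;

public class EnableVersioning {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Enable versioning: from now on, overwrites create new versions
        // instead of replacing the object in place.
        BucketVersioningConfiguration config = new BucketVersioningConfiguration()
                .withStatus(BucketVersioningConfiguration.ENABLED);
        s3.setBucketVersioningConfiguration(
                new SetBucketVersioningConfigurationRequest("my-log-bucket", config));

        // Version-unaware reads keep working: without a version ID you simply
        // get the newest version of the object.
        S3Object latest = s3.getObject(new GetObjectRequest("my-log-bucket", "logs/app.log"));

        // Version-aware reads can pin a specific (untampered) version by its ID.
        S3Object original = s3.getObject(
                new GetObjectRequest("my-log-bucket", "logs/app.log", "3HL4kqtJlcpXrof3vjVBH40Nr8X8gdRQBpUMLUo"));
    }
}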
As a developer of third-party AWS utilities, tools, and libraries (I'm not affiliated with Amazon), I am highly impressed by Amazon's implementation of object versioning in S3, because it works in such a way that client utilities that are unaware of versioning, or of the fact that versioning is enabled on the bucket, should not be affected in any way. This means you should be able to activate versioning on a bucket without changing anything in your existing code. For example:
fetching an object without an accompanying version id in the request simply fetches the newest version of the object
objects in versioned buckets aren't really deleted unless you explicitly delete a particular version; however, you can still "delete an object," and get the expected response back. Subsequently fetching the "deleted" object without specifying an accompanying version id still returns a 404 Not Found, as in the non-versioned environment, with the addition of an unobtrusive x-amz-delete-marker: header included in the response to indicate that the "latest version" of the object is in fact a delete marker placeholder. The individual versions of the "deleted" object remain accessible to version-aware code, unless purged.
other operations that are unrelated to versioning, which work on non-versioned buckets, continue to work the same way they did before versioning was enabled on the bucket.
But, again... with code that is version-aware, including the AWS console (two new buttons appear when you're looking at a versioned bucket -- you can choose to view it with a versioning-aware console view or versioning-unaware console view) you can iterate through the different versions of an object and fetch any version that has not been permanently removed... but preventing unauthorized removal of objects is the point of MFA delete.
Additionally, of course, there's bucket logging, which is typically only delayed by a few minutes from real-time and could be used to detect unusual activity... the history of which would be preserved by the bucket versioning.

IBM Worklight - JSONStore logic to refresh data from the server and be able to work offline

Currently the JSONStore API provides a load() method whose documentation says:
"This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection."
This means that if you want to avoid duplicates by calling load() on an already populated collection, you need to empty or drop the collection beforehand. But if you want to be able to keep the elements you already have in the collection, in case connectivity is lost and your application goes into offline mode, you also need to keep track of these existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place to manage both offline availability of the data and the ability to refresh it at any time. It is not obvious how to handle all the failure cases by nesting the JS code, due to the promises...
Thanks for your advices!
One approach to achieve this (a rough sketch follows the steps):
Use enhance to create your own load method (e.g. loadAndOverwrite). You should have access to all the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc.) -- you will probably use those variables in the invokeProcedure step below.
Call push to make sure there are no local changes.
Call invokeProcedure to get data, all the variables you need should be provided in the context of enhance.
Find if the document already exists and then remove it. Use {push: false} so JSONStore won't track that change.
Use add to add the new/updated document. Use {push: false} so JSONStore won't track that change.
Alternatively, if the document exists you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
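A rough JavaScript sketch of those steps, assuming the promise-style JSONStore and WL.Client APIs from Worklight 6.x; the collection, adapter, procedure, and search-field names are illustrative and error handling is omitted:

WL.JSONStore.get('people').enhance('loadAndOverwrite', function () {

    var collection = this; // 'this' is the JSONStore collection instance

    // 2. Push pending local changes first so nothing tracked is lost.
    return collection.push()

        // 3. Get fresh data from the adapter.
        .then(function () {
            return WL.Client.invokeProcedure({
                adapter: 'peopleAdapter',
                procedure: 'getPerson',
                parameters: []
            });
        })

        // 4 + 5. Remove/replace the existing copy and store the fresh one,
        // using {push: false} so JSONStore does not mark these writes as dirty.
        .then(function (response) {
            // The exact shape of invocationResult depends on your adapter.
            var fresh = response.invocationResult.person;

            return collection.find({id: fresh.id}, {exact: true})
                .then(function (matches) {
                    if (matches.length > 0) {
                        // Document already exists: update it in place, untracked.
                        matches[0].json = fresh;
                        return collection.replace(matches[0], {push: false});
                    }
                    // Otherwise add it as a new document, untracked.
                    return collection.add(fresh, {push: false});
                });
        });
});

// Usage: refreshes the collection from the adapter without creating duplicates.
WL.JSONStore.get('people').loadAndOverwrite();

If the adapter returns an array of records, the same find-then-replace-or-add logic would be applied per record.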
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.

How to find the Transport Request with my custom objects?

I've copied two Function Modules QM06_SEND_PAPER_STEP2 and QM06_FM_TASK_CLAIM_SEND_PAPER to similar Z* Function Modules. I've put these FMs into a ZQM06 Function Group which was created by another developer.
I want to use Transaction SCC1 to move my developments from one client to another. In transaction SE01 Transport Organizer I don't find the names of my 2 function modules anywhere.
How can I find out which change request contains my work?
I copied the FM in order to modify functionality and I know FMs are client independent.
Function modules, like other ABAP workbench entities, are client-independent. That is, you do not need to copy them between clients on the same instance.
However, you can find the transport request that contains your changes by going to transaction SE37, entering the name of your function module, and then choosing Utilities -> Versions -> Version Management from the menu.
Provided you did not put the changes into a local package (like $TMP), the system will have asked you for a transport request when you saved or activated your changes. If the function group is already part of a modifiable transport request, it will instead have created a new task for your user under that request, and that task will contain your changes. To check the package, use Goto -> Object Directory Entry from the menu in SE37.
Function modules are often added to transports under the function group name, especially if they're new.
The easiest way to find the transport is to go to SE37, display the function module, and then go to Version Management.
The answer from mydoghasworms is correct. Alternatively, you can use transaction SE03 -> Search for Objects in Requests/Tasks (top of the transaction screen), check the box next to "R3TR FUGR", and type in your function group name.