Clarifying behavior of unset - gun

I wanted to clarify my understanding of unset(), or rather the behavior I am observing. I understand that if I call unset() it replaces the value with a null (per the "deleting data" docs in GUN). So this is what I would like to confirm, assuming you've called unset():
1) When you call once() or on() it returns null for nodes which have been unset()
2) When you call Gun.obj.empty(table, '_') it returns false
I also tried setting the value of my set to null, e.g.
get('mylist').put(null)
which worked! I wanted to empty my set. However, the next time I added a new node, my original set along with all of the original nodes was restored. I ended up writing the following to empty my set:
this.context.once().map().once(data => {
  if (!data) { return; }          // skip entries that are already tombstoned (null)
  let key = data["_"]["#"];       // the node's soul (unique id) lives under the '_' metadata
  let node = this.context.get(key);
  if (node) {
    this.context.unset(node);
  }
});

I believe that .unset tombstones (null) an item in the table, but not the table itself. Just so others are aware: .unset is community maintained inside of the GUN repo, and not maintained by me - so I may be wrong about its behavior OR the extension may be out of date.
Gun.obj.empty({}) is just a utility to check if an object is empty or not. The 2nd parameter lets you pass it a property to ignore (like '_' which every GUN node has on it in the JS implementation).
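For example, a quick empty-check along those lines might look like this (just a sketch, assuming a gun instance and a 'mylist' key):

// sketch: treat a node as empty if it has no properties other than its '_' metadata
gun.get('mylist').once(node => {
  if (!node || Gun.obj.empty(node, '_')) {
    console.log('mylist has no data');
  }
});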
You are correct, though, that gun.get('list').put(null) should "clear out" the list, such that when you go to save data to it again (or call .set(item) to add an item) this should force/cause GUN to generate a new list/table/set/collection.
It is wrong behavior for it to "resurrect" the old list (unless another peer was simultaneously re-writing to the old list at the same parent context), so this should be considered a bug that needs to be fixed. (Probably due to the old list ID being cached and not cleared from memory properly.)
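So the intended usage is simply the following (just a sketch, assuming a gun instance):

gun.get('mylist').put(null);                    // tombstone the whole list
gun.get('mylist').set({ name: 'first item' });  // should cause GUN to generate a fresh set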

SHOW KEYS in Aerospike?

I'm new to Aerospike and am probably missing something fundamental, but I'm trying to see an enumeration of the Keys in a Set (I'm purposefully avoiding the word "list" because it's a datatype).
For example,
To see all the Namespaces, the docs say to use SHOW NAMESPACES
To see all the Sets, we can use SHOW SETS
If I want to see all the unique Keys in a Set ... what command can I use?
It seems like one can use client.scan() ... but that seems like a super heavy way to get just the key (since it fetches all the bin data as well).
Any recommendations are appreciated! As of right now, I'm thinking of inserting (deleting) into (from) a meta-record.
Thank you #pgupta for pointing me in the right direction.
This actually has two parts:
In order to retrieve the original keys from the server, one must, during put() calls, set the policy to save the key value server-side (otherwise, it seems only a digest/hash is stored?).
Here's an example in Python:
aerospike_client.put(key, {'bin': 'value'}, policy={'key': aerospike.POLICY_KEY_SEND})
Then (adapting Aerospike's own documentation), you perform a scan with the policy set to not return the bin data. From this, you can extract the keys:
Example:
keys = []
scan = client.scan('namespace', 'set')
scan_opts = {'concurrent': True, 'nobins': True, 'priority': aerospike.SCAN_PRIORITY_MEDIUM}
# each result is a (key, metadata, bins) tuple; the key tuple itself is
# (namespace, set, primary key, digest), so index 2 is the original key
for (key, metadata, bins) in scan.results(policy=scan_opts):
    keys.append(key[2])
The need to iterate over the result still seems a little clunky to me; I still think that using a 'master-key' record to store a list of all the other keys will be more performant in my case, since that way I can make a single get() call to the Aerospike server to retrieve the list.
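If it helps anyone, here is a rough sketch of that 'master-key' idea, assuming one meta-record whose 'keys' bin holds a list of every primary key in the set (all names here are illustrative, not from the docs):

import aerospike

MASTER_KEY = ('namespace', 'set', 'ALL_KEYS')  # illustrative meta-record key

def put_with_tracking(client, key_tuple, bins):
    # write the record itself, sending the original key to the server
    client.put(key_tuple, bins, policy={'key': aerospike.POLICY_KEY_SEND})
    # append its primary key to the meta-record's 'keys' list bin
    client.list_append(MASTER_KEY, 'keys', key_tuple[2])

def all_keys(client):
    # a single get() returns the whole list of tracked keys
    (_, _, bins) = client.get(MASTER_KEY)
    return bins.get('keys', [])

Deletions would need a matching list removal, and concurrent writers would need some care, so treat this only as a starting point.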
You can choose not to bring the data back by setting includeBinData in ScanPolicy to false.

How to keep initial value of a field after POST is done

I have an Oracle Forms app. There is a form with a date field and I need to keep its initial value (from when the form is loaded) to compare it with the actual value in the field. There is also a button on the form that posts changes.
I've tried to store the initial value in a global variable and check whether it has changed, and I've also tried simply checking that :system.record_status != 'QUERY' to track whether the date was modified.
The problem is that the moment the button is pressed and the POST is done, the values of all global variables become null, so I can't compare the initial value with the new one; :system.record_status also becomes 'QUERY' again, so I can no longer see whether the user modified something.
How can I keep the initial values, or otherwise track that data was changed, regardless of whether the user posts the changes or not?
This:
The problem is that the moment the button is pressed and the POST is done, the values of all global variables become null, so I can't compare the initial value with the new one
doesn't work that way. POST (if you refer to the POST built-in), and COMMIT for that matter, doesn't clear global variables. Explicitly setting them to NULL does, so check the form to see whether you've done that somewhere in your code. How? Run the form in debug mode, trace its execution and see what's going on.
Another thing that might be going wrong is that a global variable's datatype is CHAR, so if you plan to compare it to a value of a different datatype, you should perform a conversion. As it is a date value, consider applying the TO_DATE function to the global variable with the appropriate format mask.
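For instance, a rough sketch (trigger, block, item and format mask names here are only illustrative):

-- e.g. in a WHEN-NEW-FORM-INSTANCE or POST-QUERY trigger: remember the initial value
:GLOBAL.initial_date := TO_CHAR(:MYBLOCK.MY_DATE, 'DD.MM.YYYY');

-- later, e.g. in the button's WHEN-BUTTON-PRESSED trigger: convert back and compare
IF TO_DATE(:GLOBAL.initial_date, 'DD.MM.YYYY') <> :MYBLOCK.MY_DATE THEN
   -- the date item was modified after the form was loaded
   NULL;
END IF;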
This will work:
IF Get_Item_Property(:SYSTEM.CURSOR_ITEM, UPDATE_COLUMN) = 'TRUE' THEN
   -- the current item was updated; copy its original (database) value back into it
   Copy(Get_Item_Property(:SYSTEM.CURSOR_ITEM, DATABASE_VALUE), :SYSTEM.CURSOR_ITEM);
END IF;

Flows disappearing when deleting a single flow

In my ODL code, I have recently noticed that when uninstalling flows, I get unexpected behavior. The scenario goes something like this:
A bunch of flows are installed across multiple tables
I delete a flow by using the same NodeId, TableId and FlowId that I used when creating it. For reference, I use SalFlowService's addFlow and removeFlow methods.
I execute ovs-ofctl dump-flows and notice that ALL flows on the given node and given table are deleted. For reference, the flowId I use is something like "routing-rename-src-0.0.0.0-to-123.123.123.0".
It appears to me that ODL somehow completely fails at recognizing the FlowId, and defaults to deleting all flows on the given table. No error messages are sent from OpenFlow, and no errors are logged in ODL.
The thing is, I am definitely using the same FlowId object.
Now, I am a bit confused about what could be going wrong, but I have an idea; it's just that there's conflicting evidence online, and since I haven't worked on OpenFlowPlugin, I can't quite tell myself.
Flows are or tend to be posted using integers for flowIds, in the REST request paths.
In ODL code such as l2switch, flowIDs can be strings. This makes certain debugging easier to parse through.
Now, this is pretty strange. Are we using integers, or strings, or can ODL make a conversion between integer and strings by a mapping mechanism of sorts? Either way, I get unexpected behavior. Interestingly, the code I linked to does not do deletion... so maybe it's more of a hack in this case?
EDIT : I have now tried to rename my IDs as mere numbers, or as "PluginName" + "-" + number, and uninstallation still seems to fail. The problem is now that I just can't uninstall a flow rule without uninstalling the entire table with it...
EDIT 2 : This issue allowed me to understand that the flow id is not necessarily used to remove the flow. I came up with the following procedure to delete flows, in a way that doesn't cause all flows on the table to get deleted:
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
flowIdentity.context.salFlowService.removeFlow(builder.build());
The key difference from my previous code was that I had not been using a Flow object to initialize the input builder. In this form, my methods for adding and removing flows are identical. As long as I preserve the Flow object after adding the flow, I can delete that flow later, and the tables will not be wiped.
But there is an exception. On table 0, I have installed two different table-change rules with identical actions, but different priorities. The matches are slightly different (one defines an in-port, the other doesn't). When I delete the most generic (and lowest priority) rule, the other one gets deleted also.
I don't understand why this happens. Even if I try setting the priority in the input builder, this still happens. Hrm.
As I wrote in my second edit, this post suggests that flow deletion does not work explicitly based on the ID, but rather on the fields that are defined in the input builder of the method. I haven't tested this, but I suspect that if the flow reference is omitted from the builder, the defined fields will be used to delete all matching rules, which could mean accidentally deleting all flows if the wrong fields are set.
Given the following code to add flows:
final AddFlowInputBuilder builder = new AddFlowInputBuilder(flow);
builder.setNode(new NodeRef(nodeId));
builder.setFlowRef(new FlowRef(flowPath));
builder.setFlowTable(new FlowTableRef(tableId));
builder.setPriority(flow.getPriority());
flowIdentity.context.salFlowService.addFlow(builder.build());
The following code to remove flows works as expected (using the SAME Flow object):
final RemoveFlowInputBuilder builder = new RemoveFlowInputBuilder(flow);
builder.setNode(new NodeRef(flowLocation.nodeIdentifier));
builder.setFlowRef(new FlowRef(flowLocation.flowPath));
builder.setFlowTable(new FlowTableRef(flowLocation.tableIdentifier));
builder.setPriority(flow.getPriority());
// strict removal: avoids deleting other similar rules on the same table (see note below)
builder.setStrict(Boolean.TRUE);
flowIdentity.context.salFlowService.removeFlow(builder.build());
Without "strict" set to true, this can cause unexpected deletion of similar rules on the same table. I am unsure of the way flows are matched on deletion, with or without strict, but this much I can confirm.

SSIS Logging of OnVariableValueChanged with variable value

I am trying to log all changes in variable values in an SSIS package generated with BIML.
I managed to create an event handler that writes a row every time a variable changes its value.
When I log, I use a parameter whose value I set to "System.VariableValue". I pass this parameter (together with VariableName and PackageName) to a stored proc and write it into a log table.
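For reference, the kind of logging target being described would look roughly like this (a sketch; the procedure, table and column names are mine, not from the package, and the Execute SQL Task maps System::PackageName, System::VariableName and System::VariableValue to the parameters):

CREATE TABLE dbo.VariableChangeLog (
    LoggedAt      DATETIME2     NOT NULL DEFAULT SYSDATETIME(),
    PackageName   NVARCHAR(260) NOT NULL,
    VariableName  NVARCHAR(128) NOT NULL,
    VariableValue NVARCHAR(MAX) NULL
);
GO
CREATE PROCEDURE dbo.LogVariableChange
    @PackageName NVARCHAR(260), @VariableName NVARCHAR(128), @VariableValue NVARCHAR(MAX)
AS
INSERT INTO dbo.VariableChangeLog (PackageName, VariableName, VariableValue)
VALUES (@PackageName, @VariableName, @VariableValue);
GO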
My problem is that often (but NOT always) the parameter does not seem to have any value. I see a new line in the DB log table, which means the event is correctly raised and handled, BUT the parameter appears to be empty.
The strangest thing is that sometimes values are logged correctly, but not always, not for the same variables, and not for the same packages; it happens in a quite random fashion.
Could the problem be that several variables change value almost at the same time (some contention on the DB)? I doubt it, because the row itself does get written to the DB. I even tried writing, as the value, something like 'New value = ' + ?, that is, appending the parameter value to a fixed string. The fixed part gets written correctly but... no value.
The name of the variable that changed value is always written correctly.
Any idea what this could be due to?
As a workaround I tried to use the ready-made logging facility of SSIS, but in that case the message column of the SYSSSISLOG table only shows the name of the variable that changed, not its new value.
Thanks
You could use an Event Handler to do that. Go to the Variables pane, open Variable Grid Options, and check Raise event when variable value changes; one more option, Raise Change Event, should then appear for the variables. It defaults to False, so change it to True for the variables whose changes you need to track (log). Then put a logging task into the OnVariableValueChanged event handler.
UPDATE
The new line could mean that the value of the parameter has been reset, and that the new value is most likely blank or whitespace; still, that is recognized as a value change.
If you are not sure when that happens, you could set a breakpoint on a certain task and add a watch window to see how the value changes, or whether it goes blank in the middle of your process.

How to identify Drive ID?

The new Google Drive Android API has 2 types of string IDs available, the 'resource' ID and the 'encoded' ID.
'encoded' id from DriveId.encodeToString()
"DriveId:CAESHDBCMW1RVVcyYUZKZmRhakIzMDBVbXMYjAUgssy8yYFRTTNKRU55"
'resource' id from DriveId.getResourceId()
"UW2aFJfdajB3M3JENy00Ums0B1mQ"
In the process I end up with a string that can contain either one of them (the result of some timing issues). My question is:
If I need to 'parse' the string in order to identify the type, is there a characteristic I can rely on? For instance:
'encoded' id will always start with 'DriveId:' substring
'resource' id will have some length limit
can I abuse error return from 'decodeFromString()'?
or should I prefix (pre-pend) the string with my own tag? What would be the minimal 'safe' tag (i.e. what will never appear at the beginning of these IDs)?
Please point me in the right direction so I don't have to re-do it with the next release.
I have run into yet another issue that should be mentioned here so others don't waste time falling into the same pit. The 'resourceID' can be ported and will remain unique for the object it identifies, whereas the 'encodedID' has only 'device' scope. This means that you CAN'T transfer an 'encodedID' to another device (even with the same account) and try to retrieve the file/folder with it. So I assume it is unique to a Google Play Services instance.
Please do not rely on any formatting of either ID type. These are subject to change without notice.
If you need to use both, and track the differences between them you should have your own method of doing so within your app.
Really, you should probably always just store the encoded ID, since this one is always guaranteed to be present, and if it contains a resourceId, it's easy to get back out.
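For example (a sketch; "storedString" is just whatever you persisted earlier):

import com.google.android.gms.drive.DriveId;

// sketch: persist the encoded ID, and pull the resource ID back out when needed
DriveId driveId = DriveId.decodeFromString(storedString);
String encodedId = driveId.encodeToString();   // always available, safe to store locally
String resourceId = driveId.getResourceId();   // may still be null until the item has synced server-side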