I have a protobuf message:
message Sample {
  string field1 = 1;
  string field2 = 2;
  string field3 = 3;
}
These messages are stored in the datastore in binary format.
If I remove one of the defined fields from the above message, will it cause any issues when deserializing the messages already stored in the datastore?
No. Removing fields is fine, although you might want to mark it reserved so that nobody reuses it in an incompatible way. New code with old data (with the field) will silently ignore it; old code with new data will just load without the field populated, since everything in proto3 is implicitly optional. This was more of a problem in proto2, when required was a thing. Another option is to leave the field but mark it with [deprecated = true] - it'll still exist and be populated, but some tools will mark the member with the platform-specific obsolete markers for that language/framework.
Adding and removing fields will not cause safety issues, as mentioned in Marc's answer. The only safety measure you should care about is marking the removed field as reserved. This ensures that no one accidentally reuses the same field number in the future.
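For example, after removing field3 the message could look like this (a sketch using the standard proto3 reserved and deprecated syntax):

message Sample {
  reserved 3;          // the field number can never be reused
  reserved "field3";   // optionally reserve the name as well
  string field1 = 1;
  string field2 = 2;
}

// Alternative: keep the field but flag it as deprecated
message SampleKeepingField {
  string field1 = 1;
  string field2 = 2;
  string field3 = 3 [deprecated = true];
}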
I wanted to clarify my understanding of unset(), or rather the behavior I am observing. I understand that if I call unset(), it replaces the value with a null (per the documentation on deleting data in Gun). So this is what I would like to confirm, assuming you've called unset():
1) When you call once() or on(), it returns null for nodes that have been unset()
2) When you call Gun.obj.empty(table, '_'), it returns false
I also tried setting the value of my set to null, e.g.
get('mylist').put(null)
Which worked! I wanted to empty my set. However, the next time I added a new node, my original set along with all of its original nodes was restored. I ended up writing the following to empty my set:
this.context.once().map().once(data => {
  // "#" inside the "_" metadata is the node's soul (its unique id)
  let key = data["_"]["#"];
  let node = this.context.get(key);
  if (node) {
    this.context.unset(node);
  }
});
I believe that .unset tombstones (nulls) an item in the table, but not the table itself. Just so others are aware: .unset is community maintained inside of the GUN repo, and not maintained by me - so I may be wrong about its behavior OR the extension may be out of date.
Gun.obj.empty({}) is just a utility to check if an object is empty or not. The 2nd parameter lets you pass it a property to ignore (like '_' which every GUN node has on it in the JS implementation).
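For illustration, a sketch based on that description (values are made up):

Gun.obj.empty({}, '_')               // true: no properties at all
Gun.obj.empty({ _: {} }, '_')        // true: only the ignored '_' metadata
Gun.obj.empty({ _: {}, a: 1 }, '_')  // false: 'a' is real data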
You are correct though, that gun.get('list').put(null) should "clear out" the list, such that when you go to save data to it again (or call .set(item) to add an item) this should force/cause GUN to generate a new list/table/set/collection.
It is wrong behavior for it to "resurrect" the old list (unless another peer was simultaneously re-writing to the old list at the same parent context), so this should be considered a bug that needs to be fixed. (Probably due to the old list ID being cached and not cleared from memory properly.)
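A minimal sketch of the expected (non-buggy) behavior, assuming a gun instance (names are illustrative):

var gun = Gun();
var list = gun.get('mylist');
list.put(null);            // tombstone the whole list node
list.set({ name: 'new' }); // should generate a fresh list, not resurrect the old items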
I'm currently trying to perform a dynamic lossless assignment in an ABAP 7.0 SP26 environment.
Background:
I want to read in a CSV file and move it into an internal structure without any data loss. Therefore, I declared the field-symbols:
<lfs_field> TYPE any which represents a structure component
<lfs_element> TYPE string which holds a csv value
Approach:
My current "solution" is this (lo_field is an element description of <lfs_field>):
IF STRLEN( <lfs_element> ) > lo_field->output_length.
  RAISE EXCEPTION TYPE cx_sy_conversion_data_loss.
ENDIF.
I don't know precisely how it works, but it seems to catch the most obvious cases.
Attempts:
MOVE EXACT <lfs_element> TO <lfs_field>.
...gives me...
Unable to interpret "EXACT". Possible causes: Incorrect spelling or comma error
...while...
COMPUTE EXACT <lfs_field> = <lfs_element>.
...results in...
Incorrect statement: "=" missing .
As the ABAP version is too old, I also cannot use EXACT #( ... ).
Example:
In this case I'm using normal variables. Let's just pretend they are field-symbols:
DATA: lw_element TYPE string VALUE '10121212212.1256',
      lw_field   TYPE p DECIMALS 2.

lw_field = lw_element.
* lw_field now contains 10121212212.13, without any notice of the precision loss
So, how would I do a perfectly valid lossless assignment with field-symbols?
I don't see an easy way around that. I guess that's why they introduced MOVE EXACT in the first place.
Note that output_length is not a clean solution. For example, string always has output_length 0, but will of course be able to hold a CHAR3 with output_length 3.
Three ideas for how you could approach your question:
Parse and compare types. Parse the source field to detect format and length, e.g. "character-like", "60 places". Then get an element descriptor for the target field and check whether the source fits into the target. Don't think it makes sense to start collecting the possibly large CASEs for this here. If you have access to a newer ABAP, you could try generating a large test data set there and use it to reverse-engineer the compatibility rules from MOVE EXACT.
Back-and-forth conversion. Move the value from source to target and back and see whether it changes. If it changes, the fields aren't compatible. This is imprecise, as some formats will change although the values remain the same; for example, -42 could change to 42-, although this is the same in ABAP. (A short sketch follows below.)
To-longer conversion. Move the field from source to target. Then construct a slightly longer version of target, and move source also there. If the two targets are identical, the fields are compatible. This fails at the boundaries, i.e. if it's not possible to construct a slightly-longer version, e.g. because the maximum number of decimal places of a P field is reached.
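To illustrate the second idea, a quick sketch reusing the lw_* variables from the question (imprecise, as noted, since the back-converted string may differ only in formatting):

DATA lv_back TYPE string.

lw_field = lw_element.  " forward assignment, may silently round
lv_back = lw_field.     " convert back to a string
IF lv_back <> lw_element.
  " values differ: possible data loss (or merely a formatting difference)
  RAISE EXCEPTION TYPE cx_sy_conversion_data_loss.
ENDIF.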
And here is the third idea in code:

DATA target TYPE char3.
DATA source TYPE string VALUE `123.5`.

* Build a descriptor for a slightly longer version of the target's type
DATA(lo_target) = CAST cl_abap_elemdescr( cl_abap_elemdescr=>describe_by_data( target ) ).
DATA(lo_longer) = cl_abap_elemdescr=>get_by_kind(
    p_type_kind = lo_target->type_kind
    p_length    = lo_target->length + 1
    p_decimals  = lo_target->decimals + 1 ).

DATA lv_longer TYPE REF TO data.
CREATE DATA lv_longer TYPE HANDLE lo_longer.
ASSIGN lv_longer->* TO FIELD-SYMBOL(<longer>).

* Convert the source into both the target and its longer twin
<longer> = source.
target = source.

IF <longer> = target.
  WRITE `Fits`.
ELSE.
  WRITE `Doesn't fit, ` && target && ` is different from ` && <longer>.
ENDIF.
There have been a few answered questions regarding ICE03 (string overflow) for CustomActionData, but I cannot seem to conclude the correct (or accepted) practice for getting around this issue.
My current resolution was to reduce the length of the key-value pair by keeping both the key and property names short, i.e. going from:
<CustomAction Id="MyCustomActionData"
Property="MyCustomActionCA"
Value="myKeyName1=[SOME_PROPERTY_NAME];myKeyName2=[SOME_DESCRIPTIVE_PROPNAME]"/>
to:
<CustomAction Id="MyCustomActionData"
Property="MyCustomActionCA"
Value="k1=[K1];k2=[K2]"/>
But I feel that I'm just sweeping the problem under the rug, and sooner or later I'll encounter it again (also, this is based on the assumptions in my additional question below).
The more obvious solution is to re-evaluate and re-design so that the least amount of data needs to be passed down to the C# custom action (the classic "why would you want to declare a method that takes 20 parameters?" question from code reviewers). Obviously, in most languages today we can redesign the API to pass an object (a class, struct, etc., depending on the language) that self-contains what it needs, but how does one go about that for inter-process calls? I've seen JSON RPC messages with reasonably large payloads, and I usually wonder whether somebody kept patching legacy code by adding more and more until it got bloated, rather than sitting down to re-design it, which is not always possible on an "11th hour" deadline that just has to be fixed in the shortest time allowed.
Perhaps the solution is to create an XML file and use util:XmlFile to search and replace the key-value pairs before calling the custom action, pass the filename of the altered XML as CustomActionData, and then have the C# custom action deserialize it and treat it as objects. But that too feels a little clunky (it may also confuse the next developer who takes over this task in the future), not to mention that if it were passwords, we'd want to keep them out of an XML file and keep them as a Property with Hidden="yes"...
So my question is: what are the clean/elegant solutions, patterns, or practices for passing CustomActionData that may exceed the table column size?
If I may also ask an additional, somewhat related question: I am assuming that the linker (light) warning LGHT1076 is based on the length of the value (i.e. "keyA=[A];keyB=[B]") being too long, so if I chose very short property and key names, it would most likely not trigger this warning. But from what I understand, the table column size is 255 characters (please correct me if I'm wrong), thus at run time, if the property value is longer than the column size, it can cause issues (or be truncated)?
The solution I use is to create multiple properties and then concatenate them at the end into a single property, like this:
<CustomAction Id="SetSqlProperties"
Property="SqlProperties"
Value="SQL_LOGIN_ID=[SQL_LOGIN_ID];SQL_PASSWORD=[SQL_PASSWORD];
SQL_AUTH_TYPE=[SQL_AUTH_TYPE];SQL_SERVERS=[SQL_SERVERS]" />
<CustomAction Id="SetServerProperties"
Property="ServerProperties"
Value="Domain=[DOMAIN];ComputerName=[COMPUTER_NAME];
FullServerName=[FULLCOMPUTERNAME];Version=[ProductVersion];
ServerType=[SERVER_TYPE];SrvMode=[SrvMode]" />
<CustomAction Id="SetPropertiesConfigReplace"
Property="ConfigReplace"
Value="InstallFolder=[INSTALLFOLDER];[ServerProperties];[SqlProperties]" />
In this example I would use the property [ConfigReplace], which contains all values for SQL Server and the local server.
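On the C# side, a sketch of reading the concatenated data back, assuming a WiX DTF managed custom action (Microsoft.Deployment.WindowsInstaller) whose deferred action is named after the ConfigReplace property; names are illustrative, not the poster's code:

using Microsoft.Deployment.WindowsInstaller;

public class CustomActions
{
    [CustomAction]
    public static ActionResult ConfigReplace(Session session)
    {
        // For deferred actions, DTF parses the "key1=value1;key2=value2"
        // CustomActionData string into a dictionary-like object.
        CustomActionData data = session.CustomActionData;
        string installFolder = data["InstallFolder"];
        string domain = data["Domain"];
        // ... apply the values ...
        return ActionResult.Success;
    }
}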
About ICE03, you can find this in the documentation:
The string's length is greater than the column width specified by the column definition. Note that the installer does not internally limit the column width to the specified value. See Column Definition Format.
MSDN
The new Google Drive Android API has 2 types of string IDs available, the 'resource' ID and the 'encoded' ID.
'encoded' id from DriveId.encodeToString()
"DriveId:CAESHDBCMW1RVVcyYUZKZmRhakIzMDBVbXMYjAUgssy8yYFRTTNKRU55"
'resource' id from DriveId.getResourceId()
"UW2aFJfdajB3M3JENy00Ums0B1mQ"
In the process I end up with a string that can contain either one of them (the result of some timing issues). My question is:
If I need to 'parse' the string in order to identify the type, is there a characteristic I can rely on? For instance:
'encoded' id will always start with 'DriveId:' substring
'resource' id will have some length limit
can I abuse the error return from 'decodeFromString()'?
or should I prepend my own tag to the string? What would be the minimal 'safe' tag (i.e. what will never appear at the beginning of these IDs)?
Please point me in the right direction so I don't have to re-do it with the next release.
I have run into yet another issue that should be mentioned here so others don't waste time falling into the same pit. The 'resourceID' can be ported and will remain unique for the object it identifies, whereas the 'encodedID' has only 'device' scope. This means that you CANNOT transfer an 'encodedID' to another device (even with the same account) and retrieve a file/folder with it. So I assume it is unique to a Google Play Services instance.
Please do not rely on any formatting of either ID type. These are subject to change without notice.
If you need to use both, and track the differences between them you should have your own method of doing so within your app.
Really, you should probably always just store the encoded ID, since this one is always guaranteed to be present, and if it contains a resourceId, it's easy to get back out.
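A sketch of that round trip (the variable name is illustrative; decodeFromString() throws on input that is not a valid encoded DriveId):

DriveId driveId = DriveId.decodeFromString(storedEncodedId);
String resourceId = driveId.getResourceId(); // may be null until the object has been synced to the server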
I am finding that whenever I load an object from my database, it immediately appears as dirty.
I found some code that will let me see if the object is dirty here: http://nhforge.org/wikis/howtonh/finding-dirty-properties-in-nhibernate.aspx
var x = session.Get<MyRecord>(123);
var dirtyEntity = session.IsDirtyEntity(x);
dirtyEntity always evaluates to true for entities of this class.
Looking into it a bit more, I think I've found the root of the problem. I have a property which is mapped onto an nchar(15) column in SQL Server. The value in the DB has trailing spaces, but the value appearing on the object has been trimmed. So the following returns true:
var x = session.Get<MyRecord>(123);
var dirtyProperty = session.IsDirtyProperty(x, "Status");
Does anyone know how I can get NHibernate to say that "Status OK" and "Status OK " are equivalent, and that the entity is not dirty?
Why don't you use varchar as the data type for that column in the database? This will solve your issue as well as prevent the waste of space going on in the database.
This may be solvable by using an IUserType for the property. There is a TrimString example in UNHAddins; you can read about its usage here.
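For illustration, a minimal sketch of such an IUserType (the exact NullSafeGet/NullSafeSet signatures vary between NHibernate versions; UNHAddins' TrimString does essentially this):

using System;
using System.Data;
using NHibernate;
using NHibernate.SqlTypes;
using NHibernate.UserTypes;

public class TrimmedString : IUserType
{
    public SqlType[] SqlTypes { get { return new[] { new SqlType(DbType.String) }; } }
    public Type ReturnedType { get { return typeof(string); } }
    public bool IsMutable { get { return false; } }

    // Treat "Status OK" and "Status OK   " as equal so the entity is not flagged dirty
    bool IUserType.Equals(object x, object y)
    {
        if (ReferenceEquals(x, y)) return true;
        if (x == null || y == null) return false;
        return ((string)x).TrimEnd() == ((string)y).TrimEnd();
    }

    public int GetHashCode(object x) { return ((string)x).TrimEnd().GetHashCode(); }

    public object NullSafeGet(IDataReader rs, string[] names, object owner)
    {
        var value = (string)NHibernateUtil.String.NullSafeGet(rs, names[0]);
        return value == null ? null : value.TrimEnd(); // strip the padding added by nchar columns
    }

    public void NullSafeSet(IDbCommand cmd, object value, int index)
    {
        NHibernateUtil.String.NullSafeSet(cmd, value == null ? null : ((string)value).TrimEnd(), index);
    }

    public object DeepCopy(object value) { return value; } // strings are immutable
    public object Replace(object original, object target, object owner) { return original; }
    public object Assemble(object cached, object owner) { return cached; }
    public object Disassemble(object value) { return value; }
}

The Status property would then be mapped with this custom type (e.g. type="TrimmedString, YourAssembly" in the mapping file; the assembly name is illustrative) instead of the default string type.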