Google Protocol Buffers - serialization

I'm trying to parse a very big message (approximately 25 fields) and serialize it. The fields always appear in the same order in the message, and I numbered them accordingly in the .proto file. Is there a way to set the fields by their tag value (the number in the .proto file)?
Thanks,
Chem.

MyMessage myMessage;  // your concrete generated type; google::protobuf::Message itself is abstract
const google::protobuf::Descriptor * myDescriptor = myMessage.GetDescriptor();
const google::protobuf::FieldDescriptor * myField = myDescriptor->FindFieldByNumber(9001);
const google::protobuf::Reflection * myReflection = myMessage.GetReflection();
myReflection->SetInt32(&myMessage, myField, 7);
Obviously you'll need to change the field number, the type of the field, and the value you want to set.
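If you end up setting all ~25 fields this way, you can also walk the descriptor in declaration order instead of looking up each tag individually. A minimal sketch, assuming int32 fields (MyMessage again stands in for your concrete generated type):

// Iterate fields in .proto declaration order via the descriptor.
MyMessage myMessage;
const google::protobuf::Descriptor * descriptor = myMessage.GetDescriptor();
const google::protobuf::Reflection * reflection = myMessage.GetReflection();
for (int i = 0; i < descriptor->field_count(); ++i)
{
    const google::protobuf::FieldDescriptor * field = descriptor->field(i);
    // Dispatch on the field's C++ type; only int32 is shown here.
    if (field->cpp_type() == google::protobuf::FieldDescriptor::CPPTYPE_INT32)
    {
        reflection->SetInt32(&myMessage, field, 7);  // your parsed value here
    }
}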


READ TABLE WITH TABLE KEY does not find record

I'm trying to use the class /ui5/cl_json_parser for parsing a JSON string.
The following code snippet reproduces the problem:
REPORT ztest_json_parse.

DATA: input       TYPE string,
      output      TYPE string,
      json_parser TYPE REF TO /ui5/cl_json_parser.

input = '{"address":[{"street":"Road","number":"545"},{"street":"Avenue","number":"15"}]}'.

CREATE OBJECT json_parser.
json_parser->parse( input ).
json_parser->print( ).

output = json_parser->value( path = '/address/1/street' ).
WRITE output.
The print method shows the correct parsed JSON string, but the output variable is always empty.
I have traced the code down to the method VALUE of the class /UI5/CL_JSON_PARSER, at line 15, which contains:
read table m_entries into l_entry with table key parent = l_parent name = l_name.
In the debugger, I can see that l_parent = '/address/1' and l_name = 'street', and that the internal table m_entries contains a record with parent = '/address/1' and name = 'street'. Nevertheless, the READ statement always returns sy-subrc = 4 and does not find anything.
Can anyone help?
First: do not use the /ui5/cl_json_parser class; it is intended for internal use ONLY and has no reliable documentation.
Second, here is a sample of how you can fetch the street value from the first element of your JSON:
DATA(o_json) = cl_abap_codepage=>convert_to( '{"address":[{"street":"Road","number":"545"},{"street":"Avenue","number":"15"}]}' ).
DATA(o_reader) = cl_sxml_string_reader=>create( o_json ).

TRY.
    DATA(o_node) = o_reader->read_next_node( ).
    WHILE o_node IS BOUND.
      " Each JSON member is rendered as an XML element with a "name" attribute
      DATA(op) = CAST if_sxml_open_element( o_node ).
      LOOP AT op->get_attributes( ) ASSIGNING FIELD-SYMBOL(<a>).
        DATA(attr) = <a>->get_value( ).
      ENDLOOP.
      IF attr <> 'street'.
        o_node = o_reader->read_next_node( ).
      ELSE.
        " The node after the "street" element is its value node
        DATA(val) = CAST if_sxml_value_node( o_reader->read_next_node( ) ).
        WRITE: '/address/1/street =>', val->get_value( ).
        EXIT.
      ENDIF.
    ENDWHILE.
  CATCH cx_root INTO DATA(e_txt).
ENDTRY.
As far as I know, there is no class in ABAP that allows fetching single JSON attributes like XPath.
I certainly agree with Suncatcher on avoiding the UI5 JSON parser.
If you don't control/know the structure of the source data, Suncatcher's answer is good.
However, if you know the basic structure of the source JSON (and you must, if you plan to access the first address row, field name STREET), AND you can have the source provided using uppercase variable names, then you can use the so-called identity transformation.
TYPES: BEGIN OF ty_addr,
         street TYPE string,
         number TYPE string,
       END OF ty_addr.
" DEFAULT KEY makes the table type concrete so it can be used in DATA declarations
TYPES ty_addr_t TYPE STANDARD TABLE OF ty_addr WITH DEFAULT KEY.

DATA: input   TYPE string,
      ls_addr TYPE ty_addr,
      lt_addr TYPE ty_addr_t.

input = '{"ADDRESS":[{"STREET":"Road","NUMBER":"545"},{"STREET":"Avenue","NUMBER":"15"}]}'.

CALL TRANSFORMATION id SOURCE XML input
                       RESULT address = lt_addr.

READ TABLE lt_addr INDEX 1 INTO ls_addr.
WRITE ls_addr-street.

Force FsCheck to generate NonEmptyString for discriminated union fields of type string

I'm trying to achieve the following behaviour with FsCheck: I'd like to create a generator that will generate an instance of the MyUnion type with every string field being non-null/non-empty.
type MyNestedUnion =
    | X of string
    | Y of int * string

type MyUnion =
    | A of int * int * string * string
    | B of MyNestedUnion
My 'real' type is much larger/deeper than MyUnion, and FsCheck is able to generate an instance without any problem, but the string fields of the union cases are sometimes empty. (For example, it might generate B (Y (123, "")).)
Perhaps there's some obvious way of combining FsCheck's NonEmptyString and its support for generating arbitrary union types that I'm missing?
Any tips/pointers in the right direction greatly appreciated.
Thanks!
This goes against the grain of property-based testing (in that you explicitly prevent valid test cases from being generated), but you could wire up the non-empty string generator to be used for all strings:
type Alt =
    static member NonEmptyString () : Arbitrary<string> =
        Arb.Default.NonEmptyString()
        |> Arb.convert
            (fun (nes : NonEmptyString) -> nes.Get)
            NonEmptyString.NonEmptyString

Arb.register<Alt>()

let g = Arb.generate<MyUnion>
Gen.sample 1 10 g
Note that you'd need to re-register the default generator after the test since the mappings are global.
A more by-the-book solution would be to use the default derived generator and then filter out values that contain invalid strings (i.e., use ==>), but you might find that infeasible for particularly deeply nested types.
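For illustration, a rough sketch of that filtering approach (the predicate is written out by hand here, and myPropertyUnderTest is a placeholder for whatever you are actually testing):

open System
open FsCheck

// Holds only when every string field in the union is non-null/non-empty.
let hasNoEmptyStrings (u : MyUnion) =
    match u with
    | A (_, _, s1, s2) -> not (String.IsNullOrEmpty s1) && not (String.IsNullOrEmpty s2)
    | B (X s) -> not (String.IsNullOrEmpty s)
    | B (Y (_, s)) -> not (String.IsNullOrEmpty s)

// Placeholder for your real assertion.
let myPropertyUnderTest (u : MyUnion) = true

// ==> discards generated values that fail the precondition instead of testing them.
let prop (u : MyUnion) = hasNoEmptyStrings u ==> lazy (myPropertyUnderTest u)

Check.Quick prop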

U-SQL: extracting data from a JSON array

I have browsed the web and this forum for how to extract data from a JSON file, but my script does not work.
I have a problem extracting the list of objects in rates. Can someone please help? I cannot find the fault.
{"table":"C","no":"195/C/NBP/2016","tradingDate":"2016-10-06","effectiveDate":"2016-10-07","rates":
[
{"currency":"dolar amerykański","code":"USD","bid":3.8011,"ask":3.8779},
{"currency":"dolar australijski","code":"AUD","bid":2.8768,"ask":2.935},
{"currency":"dolar kanadyjski","code":"CAD","bid":2.8759,"ask":2.9339},
{"currency":"euro","code":"EUR","bid":4.2493,"ask":4.3351},
{"currency":"forint (Węgry)","code":"HUF","bid":0.013927,"ask":0.014209},
{"currency":"frank szwajcarski","code":"CHF","bid":3.8822,"ask":3.9606},
{"currency":"funt szterling","code":"GBP","bid":4.8053,"ask":4.9023},
{"currency":"jen (Japonia)","code":"JPY","bid":0.036558,"ask":0.037296},
{"currency":"korona czeska","code":"CZK","bid":0.1573,"ask":0.1605},
{"currency":"korona duńska","code":"DKK","bid":0.571,"ask":0.5826},
{"currency":"korona norweska","code":"NOK","bid":0.473,"ask":0.4826},
{"currency":"korona szwedzka","code":"SEK","bid":0.4408,"ask":0.4498},
{"currency":"SDR (MFW)","code":"XDR","bid":5.3142,"ask":5.4216}
],
"EventProcessedUtcTime":"2016-10-09T10:48:41.6338718Z","PartitionId":1,"EventEnqueuedUtcTime":"2016-10-09T10:48:42.6170000Z"}
This is my U-SQL script:
@trial =
    EXTRACT jsonString string
    FROM @"adl://kamilsepin.azuredatalakestore.net/ExchangeRates/2016/10/09/10_0_c60d8b8895b047c896ce67d19df3cdb2.json"
    USING Extractors.Text(delimiter:'\b', quoting:false);

@json =
    SELECT Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(jsonString) AS rec
    FROM @trial;

@columnized =
    SELECT
        rec["table"] AS table,
        rec["no"] AS no,
        rec["tradingDate"] AS tradingDate,
        rec["effectiveDate"] AS effectiveDate,
        rec["rates"] AS rates
    FROM @json;

@rateslist =
    SELECT
        table, no, tradingDate, effectiveDate,
        Microsoft.Analytics.Samples.Formats.Json.JsonFunctions.JsonTuple(rates) AS recl
    FROM @columnized;

@selectrates =
    SELECT
        recl["currency"] AS currency,
        recl["code"] AS code,
        recl["bid"] AS bid,
        recl["ask"] AS ask
    FROM @rateslist;

OUTPUT @selectrates
TO "adl://kamilsepin.azuredatalakestore.net/datastreamanalitics/ExchangeRates.tsv"
USING Outputters.Tsv();
You need to look at the structure of your JSON and identify what path inside it you want to map to correlated rows. In your case, you are really only interested in the array in rates, where you want one row per array item.
Thus, you use the JsonExtractor with a JSONPath expression that gives you one row per array element (e.g., rates[*]) and then project each of its fields.
Here is the code (with slightly changed paths):
REFERENCE ASSEMBLY JSONBlog.[Newtonsoft.Json];
REFERENCE ASSEMBLY JSONBlog.[Microsoft.Analytics.Samples.Formats];

@selectrates =
    EXTRACT currency string, code string, bid decimal, ask decimal
    FROM @"/Temp/rates.json"
    USING new Microsoft.Analytics.Samples.Formats.Json.JsonExtractor("rates[*]");

OUTPUT @selectrates
TO "/Temp/ExchangeRates.tsv"
USING Outputters.Tsv();

Error programmatically copying data to an SPFieldChoice field in SharePoint 2010

I am new to SharePoint. I have a custom field type derived from SPFieldChoice, and my field allows users to select multiple values. I have a requirement to replace some old custom columns with the new column and copy the data from the old column to the new one. The old column also allows users to select multiple values by ticking checkboxes. I have the following code to copy the data to the new field:
foreach (SPListItem item in list.Items)
{
    if (item[oldField.Title] == null)
    {
        item[newFld.Title] = string.Empty;
        item.Update();
    }
    else
    {
        string[] itemvalues = item[oldField.Title].ToString().Split(new string[] { ";#" }, StringSplitOptions.None);
        StringBuilder multiLookupValues = new StringBuilder();
        multiLookupValues.Append(";#");
        for (int cnt = 0; cnt < (itemvalues.Length) / 2; cnt++)
        {
            multiLookupValues.Append(itemvalues[(cnt * 2) + 1].ToString() + ";#");
        }
        item[newFld.Title] = multiLookupValues.ToString();
        item.SystemUpdate(false);
    }
}
This code works fine as long as the length of the resulting StringBuilder is less than 255 characters, but when the length is greater than 255 I get the following exception:
Invalid choice Value. A choice field contains invalid data. Please check the value and try again.
Is there any other way of copying data to an SPFieldChoice? How can I resolve this problem? Please help me.
Do the update multiple times so that the string doesn't exceed the limit (i.e., value +=). However, if the problem is that the value can't be longer than 255 characters, you have to consider how you are doing the choices. If it exceeds the length and updating the value multiple times doesn't work (and a site column will have the same limitation), you can do the next best thing:
1) Create a new list that will hold the choices
2) Change the destination field to be a lookup
3) Update accordingly for each item (picking up the ID from the lookup field) - see the sketch below
There's no limit to this.
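A rough sketch of step 3 (SPFieldLookupValueCollection is the standard way to set a multi-value lookup; the web variable, the "Choices" list name, and the Title match are placeholders for your own setup):

// Hypothetical: map each old choice text to its item in the new choices
// list and store the result as a multi-value lookup.
SPList choicesList = web.Lists["Choices"];
string[] oldValues = item[oldField.Title].ToString()
    .Split(new string[] { ";#" }, StringSplitOptions.RemoveEmptyEntries);
SPFieldLookupValueCollection lookupValues = new SPFieldLookupValueCollection();
foreach (string choice in oldValues)
{
    foreach (SPListItem candidate in choicesList.Items)
    {
        if ((string)candidate["Title"] == choice)
        {
            // The lookup value carries the ID picked up from the choices list
            lookupValues.Add(new SPFieldLookupValue(candidate.ID, choice));
            break;
        }
    }
}
item[newFld.Title] = lookupValues;
item.SystemUpdate(false);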
David Sterling
david_sterling@sterling-consulting.com
www.sterling-consulting.com

Struggling with SQL BCP (uniqueidentifier, default values on columns...)

EDIT: My only pending issue is c) (True and False in the file, bit in the database; I can't change either the file or the database schema, as there are hundreds of terabytes I can't touch).
The system receives a file (hundreds of thousands of them, actually) in a certain format. The relevant facts:
a) The first field is a uniqueidentifier (more on this later)
b) In the database, the table's first 4 columns are generated by the database (they are related to dates), meaning that those 4 values are not found in the files (all the rest are, and are in order, even if they are always represented as text or are empty)
c) Bit values are represented as False/True in the file.
So, the issue for a) is that in the text file I receive as input, the uniqueidentifier is wrapped in brackets. When I tried to generate the format file with the format nul option of the bcp command-line tool, it made the field a SQLCHAR of 37 characters (which makes no sense to me, since it should be either 36 or 38).
Row separator is "+++\r\n", column separator is "©®©".
How would I go about generating the format files? I've been stuck on this for some time; I've never used bcp before, and the errors I get don't tell me much ("Unexpected EOF encountered in BCP data-file").
Am I supposed to specify all the columns in the format file, or just the ones I want to read from the files I receive?
Thanks!
NOTE: I can't provide the SQL schema since it's for the company I work for. But it's pretty much: smalldatetime, tinyint, tinyint, tinyint (these four are generated by the db), uniqueidentifier, chars, chars, more varchars, some bits, more varchars, some nvarchars. ALL values, except for those generated by the db, accept null.
My current problem is with skipping the first 4 columns.
http://msdn.microsoft.com/en-us/library/ms179250(v=SQL.105).aspx
I followed that guide, but somehow it's not working. Here are the changes (I'm just hard-changing column names to keep the project private, even if it sounds stupid).
This is the one generated with bcp (with format nul -c); note that I put it as a link because it's not that short:
http://pastebin.com/4UkpPp1n
The second one, which is supposed to do the same but ignore the first 4 columns, is in the next pastebin:
http://pastebin.com/Lqj6XSbW
Yet it is not working. The error is "Error = [Microsoft][SQL Native Client]The number of fields provided for bcp operation is less than the number of columns on the server.", which is exactly what all of this was supposed to solve.
Any help will be greatly appreciated.
I'd create a new table with a CHAR(38) column for the GUID. Import your data into this staging table, then translate it with CAST(SUBSTRING(GUID, 2, 36) AS UNIQUEIDENTIFIER) to move the staging data into your permanent table. This approach also works well for dates in odd formats, numbers with currency symbols, or generally any kind of poorly formatted input.
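For instance (hypothetical table and column names):

-- Staging table: the bracketed GUID arrives as plain text
CREATE TABLE dbo.Staging (GuidText CHAR(38) NULL, SomeData VARCHAR(100) NULL);

-- After bcp loads dbo.Staging, strip the brackets while copying across
INSERT INTO dbo.Permanent (Id, SomeData)
SELECT CAST(SUBSTRING(GuidText, 2, 36) AS UNIQUEIDENTIFIER), SomeData
FROM dbo.Staging;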
BCP format files are a little touchy, but fundamentally they aren't too complicated. If that part continues to give you trouble, one option is to import the whole row as a single VARCHAR(1000) field, then split it up within SQL - if you're comfortable with SQL text processing, that is.
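For reference, a minimal non-XML format file for a layout like yours might look like this (hypothetical server column names; the file fields map straight to server columns 5-7, so the four database-generated columns are never mentioned and must be nullable or have defaults):

10.0
3
1   SQLCHAR   0   38    "©®©"       5   GuidColumn    ""
2   SQLCHAR   0   100   "©®©"       6   SomeVarchar   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   5     "+++\r\n"   7   SomeBit       ""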
Alternately, if you are familiar with some other programming language, like Perl or C#, you can create a script to pre-process your inputs into a more friendly form, like tab-delimited. If you're not familiar with some other programming language, I suggest you pick one and get started! SQL is a great language, but sometimes you need a different tool; it's not great for text processing.
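For example, a small pre-processor along those lines (file names are placeholders; it uses your "©®©" field and "+++\r\n" row separators, unwraps the bracketed GUID in the first field, and rewrites True/False as 1/0 so the bit columns load cleanly):

using System;
using System.IO;

class PreProcess
{
    static void Main()
    {
        string raw = File.ReadAllText("input.txt");
        using (StreamWriter output = new StreamWriter("output.tsv"))
        {
            foreach (string row in raw.Split(new string[] { "+++\r\n" }, StringSplitOptions.RemoveEmptyEntries))
            {
                string[] fields = row.Split(new string[] { "©®©" }, StringSplitOptions.None);
                for (int i = 0; i < fields.Length; i++)
                {
                    string f = fields[i];
                    if (i == 0) f = f.Trim('{', '}');  // first field is the bracketed uniqueidentifier
                    if (f == "True") f = "1";          // bit columns arrive as True/False
                    else if (f == "False") f = "0";
                    fields[i] = f;
                }
                output.WriteLine(string.Join("\t", fields));
            }
        }
    }
}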
If you're familiar with C#, here's my code to generate a format file. No one gets to make fun of my Whitesmiths indentation :P
private static string CreateFormatFile(string filePath, SqlConnection connection, string tableName, string[] sourceFields, string[] destFields, string fieldDelimiter, string fieldQuote)
    {
    string formatFilePath = filePath + ".fmt";
    StreamWriter formatFile = null;
    SqlDataReader data = null;
    try
        {
        // Load the metadata for the destination table, so we can look up fields' ordinal positions
        SqlCommand command = new SqlCommand("SELECT TOP 0 * FROM " + tableName, connection);
        data = command.ExecuteReader(CommandBehavior.SchemaOnly);
        DataTable schema = data.GetSchemaTable();
        Dictionary<string, Tuple<int, int>> metadataByField = new Dictionary<string, Tuple<int, int>>();
        foreach (DataRow row in schema.Rows)
            {
            string fieldName = (string)row["ColumnName"];
            int ordinal = (int)row["ColumnOrdinal"] + 1;
            int maxLength = (int)row["ColumnSize"];
            metadataByField.Add(fieldName, new Tuple<int, int>(ordinal, maxLength));
            }
        // Begin the file, including its header rows
        formatFile = File.CreateText(formatFilePath);
        formatFile.WriteLine("10.0");
        formatFile.WriteLine(sourceFields.Length);
        // Certain strings need to be escaped to use them in a format file
        string fieldQuoteEscaped = fieldQuote == "\"" ? "\\\"" : fieldQuote;
        string fieldDelimiterEscaped = fieldDelimiter == "\t" ? "\\t" : fieldDelimiter;
        // Write a row for each source field, defining its metadata and destination field
        for (int i = 1; i <= sourceFields.Length; i++)
            {
            // Each line contains (separated by tabs): the line number, the source type, the prefix length, the field length, the delimiter, the destination field number, the destination field name, and the collation set
            string prefixLen = i != 1 || fieldQuote == null ? "0" : fieldQuote.Length.ToString();
            string fieldLen;
            string delimiter = i < sourceFields.Length ? fieldQuoteEscaped + fieldDelimiterEscaped + fieldQuoteEscaped : fieldQuoteEscaped + @"\r\n";
            string destOrdinal;
            string destField = destFields[i - 1];
            string collation;
            if (destField == null)
                {
                // If a field is not being imported, use ordinal position zero and a placeholder name
                destOrdinal = "0";
                fieldLen = "32000";
                destField = "DUMMY";
                collation = "\"\"";
                }
            else
                {
                Tuple<int, int> metadata;
                if (metadataByField.TryGetValue(destField, out metadata) == false) throw new ApplicationException("Could not find field \"" + destField + "\" in table \"" + tableName + "\".");
                destOrdinal = metadata.Item1.ToString();
                fieldLen = metadata.Item2.ToString();
                collation = "SQL_Latin1_General_CP1_CI_AS";
                }
            string line = String.Join("\t", i, "SQLCHAR", prefixLen, fieldLen, '"' + delimiter + '"', destOrdinal, destField, collation);
            formatFile.WriteLine(line);
            }
        return formatFilePath;
        }
    finally
        {
        if (data != null) data.Close();
        if (formatFile != null) formatFile.Close();
        }
    }
There was some reason I didn't use a using block for the data reader at the time.
It seems that BCP cannot interpret True and False as bit values. It's better to either go with SSIS or first replace the contents of the text file (it is not a good idea to create views or anything like that; that is just more overhead).