How are the fields delimited in a serialized byte stream? - serialization

I was wondering how the fields of a serialized object are delimited in a byte stream.
Is there some kind of binary flag separating them, or is the length of each field defined at the beginning (or is another delimitation technique used that I didn't think of)?
Thanks!

Serialization is the process of converting nested data (an object) into a flat stream. There are many ways to do that, and each has a corresponding specification. If you want details, tell us which serialization format you are interested in and let's look for its docs.
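For example, one common technique is to length-prefix each field rather than separate fields with a delimiter byte. Here is a minimal Objective-C sketch of just that one technique; it is illustrative only and does not follow any particular format's wire layout:

```objc
#import <Foundation/Foundation.h>

// Append one field as a 4-byte big-endian length prefix followed by
// the field's raw bytes. With a length prefix, no separator byte is
// needed, so a field may contain any byte value.
static void AppendField(NSMutableData *stream, NSData *field) {
    uint32_t length = CFSwapInt32HostToBig((uint32_t)field.length);
    [stream appendBytes:&length length:sizeof(length)];
    [stream appendData:field];
}

int main(void) {
    @autoreleasepool {
        NSMutableData *stream = [NSMutableData data];
        AppendField(stream, [@"Alice" dataUsingEncoding:NSUTF8StringEncoding]);
        AppendField(stream, [@"42" dataUsingEncoding:NSUTF8StringEncoding]);
        // A reader consumes 4 bytes, learns the next field is 5 bytes
        // long, reads it, and repeats; no in-band delimiter is needed.
        NSLog(@"%@", stream);
    }
    return 0;
}
```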

Alternative to case statements when changing a lot of numeric controls

I'm pretty new to LabVIEW, but I do have experience in other programming languages like Python and C++. The code I'm going to ask about works, but there was a lot of manual work involved in putting it together. Basically I read from a text file and change control values based on values in the text file; in this case it's 40 values.
I have set it up to pull from a text file and split the string by commas. Then I loop through all the values and set each indicator to the corresponding value. I had to create 40 separate case statements to achieve this. I'm sure there is a better way of doing this. Does anyone have any suggestions?
The following improvements could be made (in addition to those suggested by sweber):
If the file contains just data, without a "label - value" format, then you could read it in CSV (comma-separated values) format and actually read just the first row.
Currently, you set values based on order. In that case, you could create references to all the indicators, build them into an array in the proper order, and assign values to the indicators in a For Loop via the Value property node.
Overall, I agree with sweber that if it is key-value data, it is better to use either the JSON format or the .ini file format, both of which support such a structure.
Let's start with some optimization:
It seems your data file contains nothing more than 40 numbers. You can wire a 1D DBL array to the default input of the string-to-array VI, and you will get just a 1D array out. No need for a 2D array.
Second, there is no need to convert the FOR loop index to a string; the CASE structure accepts integers, too.
Now, about your question: the simplest solution is to display the values as an array, just as they come from the string-to-array VI.
But I guess each value has a special meaning, and you would like to display its name/description somehow. In this case, create a cluster with 40 values, edit their labels as you like, and make sure their order in the cluster is the same as the order of the values in the file.
Then, wire the 1D array of values to this cluster via an array-to-cluster VI.
If you plan to use the text file to store and load the values, converting the cluster data to JSON and vice versa might be something for you, as it transports the labels of the cluster into the file, too. (However, changing labels then becomes an issue.)

How to store a set of strings in Redshift?

I need to store an array of strings as a table column in Redshift and then check whether this array contains some string.
Since Redshift doesn't support array types, I started looking for ways around this. The only thing I came up with is to encode the array as a pipe-separated string, first escaping any pipes within the elements of the array. Looking up an element would then be done with regexps.
While this solution seems viable, it requires some pre- and post-processing. Maybe you can recommend some alternatives?
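For reference, here is a minimal sketch of the encoding step described above, in Objective-C (the question doesn't name a client language, so this is purely illustrative):

```objc
#import <Foundation/Foundation.h>

// Escape the escape character first, then the delimiter, then join.
// The backslash convention is an assumption; any reserved character
// you are willing to escape inside elements would do.
static NSString *EncodeStringSet(NSArray<NSString *> *items) {
    NSMutableArray<NSString *> *escaped = [NSMutableArray array];
    for (NSString *item in items) {
        NSString *e = [item stringByReplacingOccurrencesOfString:@"\\"
                                                      withString:@"\\\\"];
        e = [e stringByReplacingOccurrencesOfString:@"|"
                                         withString:@"\\|"];
        [escaped addObject:e];
    }
    return [escaped componentsJoinedByString:@"|"];
}
// EncodeStringSet(@[ @"a", @"b|c" ]) yields a|b\|c
```

Membership tests on the Redshift side would then have to match the stored column with a pattern that respects the escapes, which is exactly the pre- and post-processing overhead the question mentions.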

Merge multiple CSV files with different header titles in Objective-C

I have multiple CSV files with different header titles, and I want to merge all of them and keep the combined header titles.
I think I can convert each CSV to an array, then compare the header titles across all the files, then merge the CSV files. However, it seems this would take huge processing time because of all the loops involved. Could you help if you have any faster solution?
Example:
file1.csv
No,Series,Product,A,B,C,D,E
1, AAA, XX, a1,b1,c1,d1,e1
file2.csv
No,Series,Product,A,C,D,B,E,F,G
1, AAB, XX, a1,c1,d1,b1,e1,f1,g1
file3.csv
No,Series,Product,A,A1,A2,C,D,B1,B,E
1, AAC, XX, a1,a11,a21,c1,d1,b11,b1,e1
My expected merge file is:
merge.csv
No,Series,Product,A,A1,A2,B,B1,C,D,E,F,G
1, AAA, XX, a1,0,0,b1,0,c1,d1,e1,0,0
1, AAB, XX, a1,0,0,b1,0,c1,d1,e1,f1,g1
1, AAC, XX, a1,a11,a21,b1,b11,c1,d1,e1,0,0
"Data which not available in column will show as "0" or "NA",etc.
From your comment it seems you have no code yet, but you think your sketch will be slow; it sounds like you are optimising prematurely. Code your algorithm, test it, and if it's slow use Instruments to see where the time is being spent, then look at optimisation.
That said some suggestions:
You need to decide if you are supporting general CSV files, where field values may contain commas, newlines or double quotes; or simple CSV files, where none of those characters is present in a field. See section 2 of Common Format and MIME Type for Comma-Separated Values (CSV) Files (RFC 4180) for what you need to parse to support general CSV files, and remember you need to output values using the same convention. If you stick with simple CSV files then NSString's componentsSeparatedByString: and NSArray's componentsJoinedByString: may be all you need to parse and output respectively; a sketch of the simple case follows.
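For the simple case, the whole parse/output round trip is two calls; a minimal sketch:

```objc
// Simple CSV only: this assumes no field contains commas, quotes,
// or newlines. The general RFC 4180 case needs a real parser.
NSString *row = @"No,Series,Product,A,B,C,D,E";
NSArray<NSString *> *fields = [row componentsSeparatedByString:@","];
NSString *rejoined = [fields componentsJoinedByString:@","];
```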
Consider first iterating over your files reading just the header rows, parse those, and produce the merged list of headers. You will need to preserve the order of the headers so you can pair them up with the data rows, which makes arrays your container of choice here. You may choose to use sets in the merging process, but the final merged list of headers should also be an array, in the order you wish them to appear in the merged file. You can use these arrays of headers directly in the dictionary methods below.
Using a dictionary as in your outline is one approach. In this case look at NSDictionary's dictionaryWithObjects:forKeys: for building the dictionary from the parsed header and record. For outputting the dictionary look at objectsForKeys:notFoundMarker: together with the merged list of headers. This supports missing keys, and you supply the value to insert. For standard CSVs the missing value is empty (i.e. two adjacent commas in the text), but you can use NA or 0 as you suggest; see the sketch below.
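Here is a minimal sketch of that step; the header and value arrays are made up for illustration:

```objc
// Pair one parsed data row with its own file's headers.
NSArray<NSString *> *fileHeaders = @[ @"No", @"Series", @"Product", @"A", @"B" ];
NSArray<NSString *> *rowValues   = @[ @"1", @"AAA", @"XX", @"a1", @"b1" ];
NSDictionary *record = [NSDictionary dictionaryWithObjects:rowValues
                                                   forKeys:fileHeaders];

// Read the values back out in the merged header order; columns this
// file lacks come back as the notFoundMarker, here @"0".
NSArray<NSString *> *mergedHeaders =
    @[ @"No", @"Series", @"Product", @"A", @"A1", @"B" ];
NSArray *outputValues = [record objectsForKeys:mergedHeaders
                                notFoundMarker:@"0"];
NSString *outputRow = [outputValues componentsJoinedByString:@","];
// outputRow is @"1,AAA,XX,a1,0,b1"
```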
You can process each file in turn, a row at a time: read, parse, make the dictionary, get an array of values back from the dictionary with the missing value in the appropriate places, combine, write. You never need to hold a complete file in memory at any time.
If, after implementing your code using dictionaries to handle the missing columns easily, you find it is too slow, you can then look at optimising. Instead of breaking each input data row into fields and recombining them with the missing columns added, you might consider doing direct string replacement operations on the text of the data row, adding extra delimiters as needed; e.g. if column four is missing you can change the third comma into two commas to insert the missing column.
If after designing your algorithm and coding it you hit problems you can ask a new question, include your algorithm and code, a link back to this question so people can follow the history, and explain what your issue is. Someone will undoubtedly help you take the next step.
HTH

Constructing an ascii message in a string datatype

I need to construct a message to send to a serial device that is 1024 ASCII characters. If I construct this message in a regular string data type will this work? I assume the data will be wrong because strings are formatted in Unicode. How could I go about doing something like this?
First of all, strings are not designed to handle binary data, so you should not use them when you need access to raw binary data.
The most suitable candidate for this is a byte array. It will ensure that the data is stored exactly as you want it.
For writing to the serial port, you should use binary streams. A binary stream takes a byte array and can write it to any writable device.
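The question doesn't name a platform, so here is a purely illustrative sketch in Objective-C (matching the other examples in this document) of the key step, encoding the string as one byte per character:

```objc
#import <Foundation/Foundation.h>

// Build a 1024-character message; padding with spaces is an assumed
// policy, not something from the question.
NSString *message = [@"HELLO" stringByPaddingToLength:1024
                                           withString:@" "
                                      startingAtIndex:0];
// Encode as ASCII: one byte per character. The result is nil if any
// character falls outside ASCII; otherwise 1024 chars -> 1024 bytes.
NSData *bytes = [message dataUsingEncoding:NSASCIIStringEncoding
                      allowLossyConversion:NO];
// Write 'bytes' (not the NSString) to the serial stream.
```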

What MySQL datatype & attributes should be used to store large amounts of html formatted data?

I'm setting up a database using PHPMyAdmin and many fields will be large chunks of HTML.
What MySQL datatype & attributes should be used for fields that store large amounts of HTML data?
TEXT, MEDIUMTEXT, or LONGTEXT
I would recommend against storing large chunks of HTML (or other text data) in a database. I find that it's often far more useful to store files as files, and put a filename in the database instead.
On the other hand, if you're doing everything through phpMyAdmin, you may not have that option available to you.
You really should start with the documentation; then, if you have questions based on the data types you find there, ask for clarification. It helps to understand what the datatypes are before asking the question. Documentation here:
http://dev.mysql.com/doc/refman/5.4/en/data-types.html
That said, take a closer look at TEXT and BLOB. TEXT will store a large body of textual information (probably a good choice), whereas BLOB is designed for binary data. This makes a difference for the query functions and the data types they operate on.
I think you can store HTML in a simple TEXT field. If your HTML is more than 64 KB, you can use MEDIUMTEXT instead.
See also Storage Requirements for String Types for more details about maximum length of stored value.
Also remember that characters in Unicode can require more than 1 byte to store.