I have string data which I need to parse into a dictionary object. Here is my code:
NSString *barcode = [NSString stringWithString:@"{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [ 54900012445, 499030000003, 00000005662 ] }"];
NSLog(@"%@", [barcode objectFromJSONString]);
In the log I get a NULL result, but if I pass only one value in Coupons, I get the result. How do I get all three values?
00000005662 might not be a proper integer number as it's prefixed by zeroes (which means it's octal, IIRC). Try removing them.
Cyrille is right, here is the authoritative answer:
The application/json Media Type for JavaScript Object Notation (JSON): 2.4 Numbers
The representation of numbers is similar to that used in most programming languages. A number contains an integer component that may be prefixed with an optional minus sign, which may be followed by a fraction part and/or an exponent part.
Octal and hex forms are not allowed. Leading zeros are not allowed.
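For what it's worth, a strict parser rejects that payload precisely because of the leading zeros. Here is a minimal Swift sketch (using Foundation's JSONSerialization instead of JSONKit's objectFromJSONString, purely for illustration) contrasting the original number with a cleaned-up one:

import Foundation

// Hypothetical payloads: the first keeps the leading zeros, the second drops them.
let invalid = "{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [54900012445, 499030000003, 00000005662]}"
let fixed   = "{\"OTP\": 24923313, \"Person\": 100000000000112, \"Coupons\": [54900012445, 499030000003, 5662]}"

for json in [invalid, fixed] {
    do {
        let object = try JSONSerialization.jsonObject(with: Data(json.utf8))
        print("parsed:", object)
    } catch {
        print("failed:", error.localizedDescription)   // the leading-zero variant ends up here
    }
}

If the zero padding is significant (e.g. a fixed-width coupon code), the alternative is to send the value as a string ("00000005662") rather than a number.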
I'm working with an API that returns an array of objects. I can get all the keys, but two of them have numbers as keys, and I cannot get those; it gives me an error.
I really don't know why I cannot get those keys.
Is there something different because they are numbers?
BTW I'm using axios.
If you're using dot notation, you should switch to bracket notation to access properties that start with a number.
The code below uses dot notation; it throws an error:
const test = {"1h" : "test value"};
console.log(test.1h); // error
Why:
In the object.property syntax, the property must be a valid JavaScript identifier.
An identifier is a sequence of characters in the code that identifies a variable, function, or property.
In JavaScript, identifiers are case-sensitive and can contain Unicode letters, $, _, and digits (0-9), but may not start with a digit.
The code below uses bracket notation; it works fine:
const test = {"1h" : "test value"};
console.log(test["1h"]); // works
Why:
In the object[property_name] syntax, the property_name is just a string or Symbol. So it can be any string, including '1foo', '!bar!', or even ' ' (a space).
Check out the documentation here.
I am working on some legacy code at the moment and have come across the following:
FooString = String.Format("{0:####0.000000}", FooDouble)
My question is: is the format string here, ####0.000000, any different from simply 0.000000?
I'm trying to generalize the return type of the function that sets FooDouble, and I'm checking to make sure I don't break existing functionality, hence trying to work out what the # adds here.
I've run a couple of tests in a toy program and couldn't see how the result was any different, but maybe there's something I'm missing?
From MSDN
The "#" custom format specifier serves as a digit-placeholder symbol.
If the value that is being formatted has a digit in the position where
the "#" symbol appears in the format string, that digit is copied to
the result string. Otherwise, nothing is stored in that position in
the result string.
Note that this specifier never displays a zero that
is not a significant digit, even if zero is the only digit in the
string. It will display zero only if it is a significant digit in the
number that is being displayed.
Because you use one 0 before the decimal separator (the 0. in 0.000000), both formats should return the same result.
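As a cross-check outside of C#: ICU-style number patterns (shown here through Foundation's NumberFormatter in Swift) use the same # (optional digit) and 0 (required digit) placeholders, and a quick sketch like this prints identical output for both patterns:

import Foundation

let formatter = NumberFormatter()
formatter.locale = Locale(identifier: "en_US_POSIX")   // fixed locale so the output is deterministic
let value = NSNumber(value: 1234.5678)                  // arbitrary sample value

for pattern in ["####0.000000", "0.000000"] {
    formatter.positiveFormat = pattern
    // Extra integer digits are printed whether or not # placeholders are present,
    // so both patterns yield "1234.567800".
    print(pattern, "->", formatter.string(from: value) ?? "nil")
}

This is not the .NET formatter, of course, but it illustrates why the # placeholders in front of the 0 make no difference here.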
I am using JSON-RPC over TCP. The problem is that I could not find any JSON parser capable of parsing multiple JSON objects correctly, and it would be relatively hard to split them, since there is no delimiter used.
Does anyone know how I could handle, for example, this:
{"foo":false, "bar: true, "baz": "cool"}{"ba
Somehow I need to split it so I end up with just the first, complete JSON object. The remaining string needs to stay in the buffer until I have enough data to parse it properly.
XBMC's JSON-RPC doc does give a hint:
As such, your client needs to be able to deal with this, eg. by counting and matching curly braces ({}).
Update: As Jody Hagins pointed out, beware of curly braces inside JSON strings when using this approach.
Another possible and probably much better solution would be using a streaming JSON parser like yajl (or its Objective-C wrapper yajl-objc). You can feed the parser with data until it says the current object is done and then restart parsing.
@ePirat, if someone just concatenates multiple JSON dictionaries without delimiters, they should be shot.
For parsing: JSONSerialization parses NSData, which could come in any encoding. Fortunately, if you have multiple JSON dictionaries concatenated, they are quite easy to take apart. All you need to do is look at the bytes and check for the characters \, ", { and }.
If you find a { then increase the counter for "open brackets".
If you find a } then decrease the counter for "open brackets". If the counter is at zero, you've found the end of a dictionary.
If you find a ", then repeatedly look at the next character. If the next character is a " then skip it and go to the normal processing (you've found the end of a string). If the next character is a \ then skip that character and the following character. If the next character is anything else, skip it.
If you reach the end of the data, then your JSON data is incomplete. You could remember which state you were in (count of open brackets, whether you are parsing a string, and if parsing a string whether you just encountered a backslash character) and continue right where you left off.
No need to convert the NSData to a string until you've separated it into dictionaries. If you suspect that you might be given UTF-16 or UTF-32, check whether bytes 0, 1, 2 or 1, 2, 3 are zero (UTF-32), then check whether bytes 0 and 2 or 1 and 3 are zero (UTF-16). But in that case, if the server sends non-standard JSON in UTF-16 or UTF-32, change "the person responsible should be shot" to "the person responsible must be shot".
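Here is a rough Swift sketch of that scanner, just to make the state machine concrete. It assumes UTF-8 input and top-level dictionaries only, and splitFirstJSONObject is a made-up helper name:

import Foundation

// Returns the bytes of the first complete top-level {...} object plus the
// leftover bytes that should stay in the buffer, or nil if the object is
// still incomplete.
func splitFirstJSONObject(from buffer: Data) -> (object: Data, rest: Data)? {
    var depth = 0          // count of open braces
    var inString = false   // are we inside a "..." literal?
    var escaped = false    // did we just see a backslash inside a string?

    for (offset, byte) in buffer.enumerated() {
        if inString {
            if escaped {
                escaped = false                        // skip the character after a backslash
            } else if byte == UInt8(ascii: "\\") {
                escaped = true
            } else if byte == UInt8(ascii: "\"") {
                inString = false                       // end of the string literal
            }
            continue                                   // braces inside strings are ignored
        }
        switch byte {
        case UInt8(ascii: "\""): inString = true
        case UInt8(ascii: "{"):  depth += 1
        case UInt8(ascii: "}"):
            depth -= 1
            if depth == 0 {                            // end of a dictionary found
                let count = offset + 1
                return (Data(buffer.prefix(count)), Data(buffer.dropFirst(count)))
            }
        default: break
        }
    }
    return nil                                         // incomplete: keep buffering
}

Each complete chunk can then be handed to JSONSerialization.jsonObject(with:), and the rest goes back into the receive buffer; the sketch simply rescans from the start of the buffer instead of resuming from a saved state as described above.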
I want to get the Unicode code point for a given Unicode character in Objective-C. The NSString documentation says it uses UTF-16 internally, and states:
The NSString class has two primitive methods—length and characterAtIndex:—that provide the basis for all other methods in its interface. The length method returns the total number of Unicode characters in the string. characterAtIndex: gives access to each character in the string by index, with index values starting at 0.
That seems to assume the characterAtIndex: method is Unicode aware. However, it returns a unichar, which is a 16-bit unsigned integer type.
- (unichar)characterAtIndex:(NSUInteger)index
The questions are:
Q1: How does it represent Unicode code points above U+FFFF?
Q2: If Q1 makes sense, is there a method to get the Unicode code point for a given Unicode character in Objective-C?
Thx.
The short answer to "Q1: How does it represent Unicode code points above U+FFFF?" is: you need to be UTF16 aware and correctly handle Surrogate Code Points. The info and links below should give you pointers and example code that allow you to do this.
The NSString documentation is correct. However, while you said that NSString uses UTF-16 internally, it's more accurate to say that the public / abstract interface for NSString is UTF16 based. The difference is that this leaves the internal representation of a string a private implementation detail, but the public methods such as characterAtIndex: and length are always in UTF16.
The reason for this is it tends to strike the best balance between older ASCII-centric and Unicode aware strings, largely due to the fact that Unicode is a strict superset of ASCII (ASCII uses 7 bits, for 128 characters, which are mapped to the first 128 Unicode Code Points).
To represent Unicode Code Points that are > U+FFFF, which obviously exceeds what can be represented in a single UTF16 Code Unit, UTF16 uses special Surrogate Code Points to form a Surrogate Pair, which when combined together form a Unicode Code Point > U+FFFF. You can find details about this at:
Unicode UTF FAQ - What are surrogates?
Unicode UTF FAQ - What’s the algorithm to convert from UTF-16 to character codes?
Although the official Unicode UTF FAQ - How do I write a UTF converter? now recommends the use of International Components for Unicode, it used to recommend some code officially sanctioned and maintained by Unicode. Although no longer directly available from Unicode.org, you can still find copies of the "no longer official" example code in various open-source projects: ConvertUTF.c and ConvertUTF.h. If you need to roll your own, I'd strongly recommend examining this code first, as it is well tested.
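To make the surrogate arithmetic concrete, here is a minimal Swift sketch; the same math applies to the unichar values returned by characterAtIndex: in Objective-C (U+1F600 is just an arbitrary example above U+FFFF):

import Foundation

let text = "A\u{1F600}B"              // one BMP character, one above U+FFFF, one BMP character
let units = Array(text.utf16)         // the UTF-16 code units NSString exposes

var i = 0
while i < units.count {
    let unit = units[i]
    if unit >= 0xD800, unit <= 0xDBFF, i + 1 < units.count,
       units[i + 1] >= 0xDC00, units[i + 1] <= 0xDFFF {
        // High surrogate + low surrogate -> code point above U+FFFF
        let high = UInt32(unit), low = UInt32(units[i + 1])
        let codePoint = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)
        print(String(format: "U+%X (from a surrogate pair)", codePoint))
        i += 2
    } else {
        print(String(format: "U+%04X", UInt32(unit)))
        i += 1
    }
}
// In Swift you can simply iterate text.unicodeScalars; in Objective-C the
// CoreFoundation call CFStringGetLongCharacterForSurrogatePair() does the pairing for you.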
From the documentation of length:
The number returned includes the individual characters of composed character sequences, so you cannot use this method to determine if a string will be visible when printed or how long it will appear.
From this, I would infer that any characters above U+FFFF would be counted as two characters and would be encoded as a Surrogate Pair (see the relevant entry at http://unicode.org/glossary/).
If you have a UTF-32 encoded string with the character you wish to convert, you could create a new NSString with initWithBytesNoCopy:length:encoding:freeWhenDone: and use the result of that to determine how the character is encoded in UTF-16, but if you're going to be doing much heavy Unicode processing, your best bet is probably to get familiar with ICU (http://site.icu-project.org/).
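A quick Swift check of that inference (U+10400 is an arbitrary character above U+FFFF):

import Foundation

let s = "\u{10400}" as NSString        // DESERET CAPITAL LETTER LONG I
print(s.length)                        // 2: the character occupies a surrogate pair
print(String(format: "0x%X 0x%X", s.character(at: 0), s.character(at: 1)))
// 0xD801 0xDC00: the high and low surrogates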
What are the drawbacks of the String type in ABAP? When should you use it, and when not?
An example: I have a text field that should hold values ranging from 0 to 12 characters. Is it better to use a string or a char(12)?
Thanks!
A string is stored as a dynamic array of characters while a char is statically allocated.
Some of the downsides of strings include:
Overhead - because they are dynamic the length must be stored in addition to the actual string.
The substring and offset operators don't work with strings.
Strings cannot be turned into translatable text elements.
So to answer your question, strings should only be used for fairly long values with a wide range of lengths where the additional overhead is negligible relative to the potential wasted space of a static char(x) variable.
I think CHAR is best because you are 100% sure that the field only has to hold 0-12 characters.
string is a variable-length data type, while for char you have to define the length.
Type C (a text field of alphanumeric characters) and type X (a hexadecimal string) have initial values (X'0 … 0' in the hexadecimal case).
To avoid the initial value, and to use the actual length, the C type is used.
Strings are good when:
The text length will be variable.
Spaces are part of the string (trailing spaces in CHAR fields are lost)
You pass them around a lot (when STRING variable metadata is less than char field size)
You need to get the STRING length often. It is more optimal than with CHAR fields.
CHAR fields are good:
If they are small, they are fast (less than around 32 chars on unicode systems)
CHAR field literals using (') quotes instead of (`) can be made into translatable texts.
Things to remember:
All variables have metadata, but strings also have an internal pointer to the string data, which could add up to 64 bytes to memory consumption. Something to keep in mind.
When assigning a literal text to a variable, try to match the literal type to the variable type. Use 'test' for CHAR and `test` for STRING. This is usually slightly faster.
String variable:
A string is a variable-length data type which is used to store data of any length. Variable-length fields are used because they save space.
A string can store any number of characters. Memory is allocated at runtime (dynamic memory allocation), sized to fit the string. Strings cannot be declared using PARAMETERS, because their memory allocation is dynamic.
But in your case you already know the maximum length of the field (0-12 characters), so the CHAR type is the best choice here. A STRING type is generally used for variable-length data or long values.
Read more