Is there a better/shorter way of creating a byte array from constant hex values than the version below?
byteArrayOf(0xA1.toByte(), 0x2E.toByte(), 0x38.toByte(), 0xD4.toByte(), 0x89.toByte(), 0xC3.toByte())
I tried putting 0xA1 without .toByte(), but I get a compile error saying the integer literal does not conform to the expected type Byte. Plain integers work fine, but I prefer hex form since my source is a hex string. Any hints would be greatly appreciated. Thanks!
As an option, you can create a simple function:
fun byteArrayOfInts(vararg ints: Int) = ByteArray(ints.size) { pos -> ints[pos].toByte() }
and use it:
val arr = byteArrayOfInts(0xA1, 0x2E, 0x38, 0xD4, 0x89, 0xC3)
If all your bytes were less than or equal to 0x7F, you could put them directly:
byteArrayOf(0x2E, 0x38)
If you need to use bytes greater than 0x7F, you can use unsigned literals to make a UByteArray and then convert it back into a ByteArray:
ubyteArrayOf(0xA1U, 0x2EU, 0x38U, 0xD4U, 0x89U, 0xC3U).toByteArray()
I think it's a lot better than appending .toByte() to every element, and there's no need to define a custom function either.
However, Kotlin's unsigned types are an experimental feature, so you may have some trouble with warnings.
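For reference, a minimal sketch of the opt-in, assuming a Kotlin version where the unsigned-array API still requires it (the function name is just illustrative):
@OptIn(ExperimentalUnsignedTypes::class)
fun constantBytes(): ByteArray =  // illustrative wrapper name
    ubyteArrayOf(0xA1U, 0x2EU, 0x38U, 0xD4U, 0x89U, 0xC3U).toByteArray()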
The issue is that bytes in Kotlin are signed, which means they can only represent values in the [-128, 127] range. You can test this by creating a ByteArray like this:
val limits = byteArrayOf(-0x81, -0x80, -0x79, 0x00, 0x79, 0x80)
Only the first and last values will produce an error, because they are out of the valid range by 1.
This is the same behaviour as in Java, and the solution will probably be to use a larger number type if your values don't fit in a Byte (or offset them by 128, etc).
Side note: if you print the contents of the array you've created with the toByte() calls, you'll see that the values larger than 127 have flipped over to negative numbers:
val bytes = byteArrayOf(0xA1.toByte(), 0x2E.toByte(), 0x38.toByte(), 0xD4.toByte(), 0x89.toByte(), 0xC3.toByte())
println(bytes.joinToString()) // -95, 46, 56, -44, -119, -61
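If you need the unsigned values back (for display, say), a common trick is masking with 0xFF; a minimal sketch:
println(bytes.joinToString { (it.toInt() and 0xFF).toString(16) }) // a1, 2e, 38, d4, 89, c3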
I just do:
val bytes = listOf(0xa1, 0x2e, 0x38, 0xd4, 0x89, 0xc3)
.map { it.toByte() }
.toByteArray()
I believe FFFFFFFF is -1 because of two's complement.
I tried to convert a hex string into an Int, but I got an error.
Here is the code I have tried.
Code:
// Extension functions
val Int.asByteArray get() =
    byteArrayOf(
        (this shr 24).toByte(),
        (this shr 16).toByte(),
        (this shr 8).toByte(),
        this.toByte())

// Assumed helper, not shown in the original post: hex-format a ByteArray
val ByteArray.asHexUpper get() = joinToString("") { "%02X".format(it) }

val Int.asHex get() = this.asByteArray.asHexUpper

// Main code
fun main() {
    println((-1).asHex)
    println("FFFFFFFF".toInt(16))
}
Result
FFFFFFFF
Exception in thread "main" java.lang.NumberFormatException: For input string: "FFFFFFFF"
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Integer.parseInt(Integer.java:652)
at java.base/java.lang.Integer.parseInt(Integer.java:770)
Is this intended, or is this an error?
If this is intended, then what should I do?
Your code can't parse 4 full bytes (32 bits) into a signed integer, since a signed Int only holds 31 value bits; one bit is reserved for the sign.
A solution would be to parse it into an unsigned integer (as Some random IT boy stated) and then convert the (32-bit) UInt to a (32-bit) Int:
fun main() {
val u = "FFFFFFFF".toUInt(16) //16 as it is base 16
println(u) //prints "4294967295"
val v = u.toInt()
println(v) //prints "-1"
}
Use this with caution, as it will not work when your data is not 32 bits long: you cannot parse FFFF and expect it to be -1, as it would be parsed as 0000FFFF, which is equal to 65535.
For data lengths of 1, 2, or 8 bytes, look into the data types Byte, Short, or Long and their respective functions:
val u = "FFFFFFFF".toULong(16)
println(u) //prints 4294967295
val v = u.toLong()
println(v) //still prints 4294967295
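For instance, a 16-bit value would go through UShort/Short instead; a minimal sketch:
val s = "FFFF".toUShort(16) // 65535 as UShort
println(s.toShort())        // prints -1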
It's -1 because the first bit of the 4-byte memory block decides the sign of the number. In this case, since it's all 1s, it's a negative number, and the value of the block is interpreted via two's complement because we're dealing with a signed integer.
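A small sketch on the JVM that makes the bit pattern visible:
println(Integer.toBinaryString(-1)) // 32 ones: 11111111111111111111111111111111
println((-1).toUInt())              // 4294967295: the same bits read as unsigned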
The problem you're facing is not Kotlin specific: check out this question from StackOverflow.
Java also has trouble parsing such a string into an Integer, but a Long has no trouble, because it has 4 more bytes to spare. In fact, if you write the value literal 0xFFFFFFFF, Kotlin treats it as a Long:
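val x = 0xFFFFFFFF // doesn't fit in an Int, so Kotlin infers Long
println(x)         // 4294967295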
A quick fix for this could be to use UInt, the unsigned counterpart of Int:
"FFFFFFFF".toUInt(16) // 4294967295
Or just use a Long:
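println("FFFFFFFF".toLong(16))         // 4294967295
println("FFFFFFFF".toLong(16).toInt()) // -1 after truncating back to 32 bits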
In the playground at https://kotlinlang.org/:
fun main(args: Array<String>) {
    val list = listOf(1, 2, 3)
    for (x in list) {
        print(x.toChar())
    }
}
This is just an illustration of the challenge I am facing in some code I am writing, which is supposed to add elements to a Char list from some of the elements of an Int list. The code above produces the following result. Thank you for the assistance in advance.
Result: (no visible output)
The code you posted produces not the chars denoting the digits, but the chars whose values are equal to those Ints (i.e. 0x01, 0x02, 0x03), which are non-printing control characters.
If you need to print the ints, then use either print(x) or print("$x") or print(x.toString()).
If you want to get the chars denoting the digits, you can do that as '0' + x, given x in 0..9.
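A minimal sketch of that approach, assuming every element is in 0..9:
fun main() {
    val ints = listOf(1, 2, 3)
    val chars = ints.map { '0' + it } // Char + Int shifts the code point
    chars.forEach { print(it) }       // prints 123
}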
int first[] = {1, 4};
int second[] = {2, 3, 7};
arrayOfCPointers[0] = first;
arrayOfCPointers[1] = second;
NSLog(#"size of %lu", sizeof(arrayOfCPointers[0]) / sizeof(int));
I want to have an array of sub-arrays, where each sub-array is a different size, but I need to be able to find out the size of each sub-array.
The log keeps printing 1.
You need to store the size somewhere. The language does not do so for bare C arrays. All you have is the address of the first element.
I'd write a wrapper class or struct to hold the array and its metadata (like length).
typedef struct tag_arrayholder
{
    int* pArray;
    int iLen;
} ArrayHolder;
int first[] = {1, 4};
ArrayHolder holderFirst;
holderFirst.pArray = first;
holderFirst.iLen = sizeof(first) / sizeof(int);
arrayOfCPointers[0] = holderFirst;
NSLog(#"size of %lu", arrayOfCPointers[0].iLen);
Or, as trojanfoe said, store a special value marking the last position (exactly the approach zero-terminated strings use).
The "sizeof" instruction could be used to know the amount of bytes used by the array, but it works only with static array, with dynamics one it returns the pointer size. So with static array you could use this formula : sizeof(tab)/sizeof(tab[0]) to know the size of your array because the first part give you the tab size in bytes and the second the size of an element, so the result is your amount of element in your array ! But with a dynamic array the only way is to store the size somewhere or place a "sentinal value" at the end of your array and write a loop which count elements for you !
(Sorry for my English i'm french :/)
The NSLog statement prints 1 because the expression divides the size of the first element of the array, a single pointer (the same size as an int on a 32-bit platform), by the size of an int.
So what you currently have is this:
NSLog(#"size of %lu", sizeof(arrayOfCPointers[0]) / sizeof(int));
If you remove the array brackets, you'll get the value you're looking for:
NSLog(#"size of %lu", sizeof(arrayOfCPointers) / sizeof(int));
As other answers have pointed out, this won't work if you pass the array to another method or function, since all that's passed in that case is an address. The only reason the above works is because the array's definition is in the local scope, so the compiler can use the type information to compute the size.
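To illustrate, a minimal sketch of how the size information is lost across a function call (illustrative names):
#include <stdio.h>

/* the parameter decays to int*, so sizeof reports pointer size here */
static void report(int arr[]) {
    printf("inside function: %zu\n", sizeof(arr) / sizeof(int));
}

int main(void) {
    int values[] = {2, 3, 7};
    printf("in defining scope: %zu\n", sizeof(values) / sizeof(int)); /* 3 */
    report(values); /* pointer size / int size, not 3 */
    return 0;
}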
I need to take a char[] array and copy its contents to another, but I fail every time.
It works using this format:
char array[] = { 0x00, 0x00, 0x00 };
However, when I try to do this:
char array[] = char new_array[];
it fails, even though the new_array is just like the original.
Any help would be kindly appreciated.
Thanks
To copy at runtime, the usual C method is to use the strncpy or memcpy functions.
If you want two char arrays initialized to the same constant initializer at compile time, you're probably stuck with using #define:
#define ARRAY_INIT { 0x00, 0x00, 0x00 }
char array[] = ARRAY_INIT;
char new_array[] = ARRAY_INIT;
Thing is, this is rarely done because there's usually a better implementation.
EDIT: Okay, so you want to copy arrays at runtime. This is done with memcpy, part of <string.h> (of all places).
If I'm reading you right, you have initial conditions like so:
char array[] = { 0x00, 0x00, 0x00 };
char new_array[] = { 0x01, 0x00, 0xFF };
Then you do something that changes the arrays' contents, and afterwards you want to set array to match new_array. That's just this:
memcpy(array, new_array, sizeof(array));
/*      ^      ^          ^
        |      |          +--- size in bytes
        |      +-------------- source array
        +--------------------- destination array
*/
The library writers chose to order the arguments with the destination first because that's the same order as in assignment: destination = source.
There is no language-level built-in means to copy arrays in C, Objective-C, or C++ with primitive arrays like this. C++ encourages people to use std::vector, and Objective-C encourages the use of NSArray.
I'm still not sure of exactly what you want, though.
I'm learning cocos2d (an OpenGL wrapper for Objective-C on iPhone), and now, playing with sprites, I have found this in an example:
enum {
easySprite = 0x0000000a,
mediumSprite = 0x0000000b,
hardSprite = 0x0000000c,
backButton = 0x0000000d,
magneticSprite = 0x0000000e,
magneticSprite2 = 0x0000000f
};
...
-(id) init
{...
// second sprite
TSprite *med = [TSprite spriteWithFile:@"butonB.png"]; // blue
[med SetCanTrack:YES];
[self addChild: med z:1 tag:mediumSprite];
med.position=ccp(299,230);
[TSprite track:med];
So the variable defined in the enum is used as the tag of the created sprite object, but I don't understand:
why give the tags hexadecimal values?
why use the enum without a name?
The enums I knew in Objective-C and C look like this:
typedef enum {
JPG,
PNG,
GIF,
PVR
} kImageType;
Thanks!
Usually, when you create an enum, you want to use it as a type (for variables, method parameters, etc.).
In this case, it's just a way to declare integer constants. Since they don't want to use the enum as a type, the name is not necessary.
Edit:
Hexadecimal numbers are commonly used when the integer is a binary mask. You won't see operators like +, -, *, / used with such a number; you'll see bitwise operators (~, &, |, ^).
Every digit in a hexadecimal number represents 4 bits. The whole number is a 32-bit integer, and by writing it in hexadecimal in this case, you are saying that you are using only the last four bits and that the other bits may be used for something else. This wouldn't be obvious from a decimal number.
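For illustration, a minimal C sketch of hex constants used as bit masks (the flag names are made up):
enum {
    kFlagA = 0x01, /* binary 0001 */
    kFlagB = 0x02, /* binary 0010 */
    kFlagC = 0x04  /* binary 0100 */
};
int flags = kFlagA | kFlagC;      /* set bits A and C */
int hasB = (flags & kFlagB) != 0; /* 0: B is not set */
int hasC = (flags & kFlagC) != 0; /* 1: C is set */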
Enum members are automatically assigned values, incrementing from 0, but you can assign your own.
If you don't specify any values, they start from 0, as in:
typedef enum {
JPG,
PNG,
GIF,
PVR
} kImageType;
But you could assign them values:
typedef enum {
JPG = 0,
PNG = 1,
GIF = 2,
PVR = 3
} kImageType;
or even
typedef enum {
JPG = 100,
PNG = 0x01,
GIF = 100,
PVR = 0xff
} kImageType;
Anything you want; repeated values are fine as well.
I'm not sure why they were given those specific values, but they might have some meaning related to how they're used.
Well, you seem to be working off a terrible example. :)
At least as far as enums are concerned. It's up to you to define the actual value of an enum entry, but there's no gain in using hex numbers here, and in particular there's no point in having the hex values start at a through f (10 to 15). The example will also work with this enum:
enum {
easySprite = 10,
mediumSprite,
hardSprite,
backButton,
magneticSprite,
magneticSprite2
};
And unless there's some point in having the enumeration start with value 10, it will probably work without specifying any concrete values.