Objective-C: How to correctly initialize char[]?

I need to take a char[] array and copy its values to another, but I fail every time.
It works using this format:
char array[] = { 0x00, 0x00, 0x00 };
However, when I try to do this:
char array[] = char new_array[];
it fails, even though the new_array is just like the original.
Any help would be kindly appreciated.
Thanks

To copy at runtime, the usual C method is to use the strncpy or memcpy functions.
If you want two char arrays initialized to the same constant initializer at compile time, you're probably stuck with using #define:
#define ARRAY_INIT { 0x00, 0x00, 0x00 }
char array[] = ARRAY_INIT;
char new_array[] = ARRAY_INIT;
Thing is, this is rarely done because there's usually a better implementation.
EDIT: Okay, so you want to copy arrays at runtime. This is done with memcpy, part of <string.h> (of all places).
If I'm reading you right, you have initial conditions like so:
char array[] = { 0x00, 0x00, 0x00 };
char new_array[] = { 0x01, 0x00, 0xFF };
Then you do something, changing the arrays' contents, and after it's done, you want to set array to match new_array. That's just this:
memcpy(array, new_array, sizeof(array));
/*     ^      ^          ^
       |      |          +--- size in bytes
       |      +---------- source array
       +----------------- destination array
*/
The library writers chose to order the arguments with the destination first because that's the same order as in assignment: destination = source.
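Putting it together, a minimal runnable sketch with the same initial conditions (it assumes the only goal is for array to end up matching new_array):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char array[]     = { 0x00, 0x00, 0x00 };
    char new_array[] = { 0x01, 0x00, 0xFF };

    /* ... whatever modifies the arrays happens here ... */

    memcpy(array, new_array, sizeof(array));   /* array now holds 0x01, 0x00, 0xFF */

    for (size_t i = 0; i < sizeof(array); i++)
        printf("%02x ", (unsigned char)array[i]);   /* prints: 01 00 ff */
    printf("\n");
    return 0;
}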
There is no built-in, language-level way to copy primitive arrays like this in C, Objective-C, or C++. C++ encourages people to use std::vector, and Objective-C encourages the use of NSArray.
I'm still not sure of exactly what you want, though.

Related

Creating ByteArray in Kotlin

Is there a better/shorter way of creating a byte array from constant hex values than the version below?
byteArrayOf(0xA1.toByte(), 0x2E.toByte(), 0x38.toByte(), 0xD4.toByte(), 0x89.toByte(), 0xC3.toByte())
I tried putting 0xA1 without .toByte(), but I get a syntax error saying the integer literal does not conform to the expected type Byte. Plain integers are fine, but I prefer hex form since my source is a hex string. Any hints would be greatly appreciated. Thanks!
As an option, you can create a simple function:
fun byteArrayOfInts(vararg ints: Int) = ByteArray(ints.size) { pos -> ints[pos].toByte() }
and use it
val arr = byteArrayOfInts(0xA1, 0x2E, 0x38, 0xD4, 0x89, 0xC3)
If all your bytes were less than or equal to 0x7F, you could put them directly:
byteArrayOf(0x2E, 0x38)
If you need to use bytes greater than 0x7F, you can use unsigned literals to make a UByteArray and then convert it back into a ByteArray:
ubyteArrayOf(0xA1U, 0x2EU, 0x38U, 0xD4U, 0x89U, 0xC3U).toByteArray()
I think this is a lot better than appending .toByte() to every element, and there's no need to define a custom function either.
However, Kotlin's unsigned types are an experimental feature, so you may have some trouble with warnings.
The issue is that bytes in Kotlin are signed, which means they can only represent values in the [-128, 127] range. You can test this by creating a ByteArray like this:
val limits = byteArrayOf(-0x81, -0x80, -0x79, 0x00, 0x79, 0x80)
Only the first and last values will produce an error, because they are out of the valid range by 1.
This is the same behaviour as in Java, and the solution will probably be to use a larger number type if your values don't fit in a Byte (or offset them by 128, etc).
Side note: if you print the contents of the array you've created with the toByte() calls, you'll see that your values larger than 127 have flipped over to negative numbers:
val bytes = byteArrayOf(0xA1.toByte(), 0x2E.toByte(), 0x38.toByte(), 0xD4.toByte(), 0x89.toByte(), 0xC3.toByte())
println(bytes.joinToString()) // -95, 46, 56, -44, -119, -61
I just do:
val bytes = listOf(0xa1, 0x2e, 0x38, 0xd4, 0x89, 0xc3)
.map { it.toByte() }
.toByteArray()

Can you have an Objective-C enum with duplicate values?

I'm a beginner in Objective-C. I've encountered the following typedef enum in Apple's docs.
typedef enum NSUnderlineStyle : NSInteger {
    NSUnderlineStyleNone = 0x00,
    NSUnderlineStyleSingle = 0x01,
    NSUnderlineStyleThick = 0x02,
    NSUnderlineStyleDouble = 0x09,
    NSUnderlinePatternSolid = 0x0000,
    NSUnderlinePatternDot = 0x0100,
    NSUnderlinePatternDash = 0x0200,
    NSUnderlinePatternDashDot = 0x0300,
    NSUnderlinePatternDashDotDot = 0x0400,
    NSUnderlineByWord = 0x8000
} NSUnderlineStyle;
Aren't the values for
NSUnderlineStyleNone = 0x00,
NSUnderlinePatternSolid = 0x0000,
the same hex 0? How is it possible to differentiate the two values?
Thanks in advance.
While Apple included them in the same enum definition, there are three distinct sets of values being defined: the first is the line style, the second is the pattern, and the third is a single option flag (ByWord).
When you specify underline options, you choose at most one value from each set and OR them together. Defining a style and a pattern with the same value of zero simply means that the default is no underline, and that if only a style is chosen, the pattern defaults to a solid line.
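For example, here is a small illustrative sketch of building and unpacking one combined value (the masks are mine for illustration, not Apple-defined constants):
// One value from the style set OR'd with one value from the pattern set.
NSInteger underline = NSUnderlineStyleSingle | NSUnderlinePatternDash;     // 0x01 | 0x0200 = 0x0201

// Illustrative masks for pulling the parts back out:
NSInteger style   = underline & 0x000F;                      // NSUnderlineStyleSingle
NSInteger pattern = underline & 0x0700;                      // NSUnderlinePatternDash
BOOL byWord       = (underline & NSUnderlineByWord) != 0;    // NO here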

sem_open - valgrind complains about uninitialised bytes

I have a trivial program:
#include <fcntl.h>        /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char sname[] = "xxx";
    sem_t *pSemaphor;

    if ((pSemaphor = sem_open(sname, O_CREAT, 0644, 0)) == SEM_FAILED) {
        perror("semaphore initialization");
        exit(1);
    }
    sem_unlink(sname);
    sem_close(pSemaphor);
}
When I run it under valgrind, I get the following error:
==12702== Syscall param write(buf) points to uninitialised byte(s)
==12702== at 0x4E457A0: __write_nocancel (syscall-template.S:81)
==12702== by 0x4E446FC: sem_open (sem_open.c:245)
==12702== by 0x4007D0: main (test.cpp:15)
==12702== Address 0xfff00023c is on thread 1's stack
==12702== in frame #1, created by sem_open (sem_open.c:139)
The code was extracted from a bigger project where it ran successfully for years, but now it is causing a segmentation fault.
The valgrind error from my example is the same as seen in the bigger project, but there it causes a crash, which my small example doesn't.
I see this with glibc 2.27-5 on Debian. In my case I only open the semaphores right at the start of a long-running program and it seems harmless so far - just annoying.
Looking at the code for sem_open.c which is available at:
https://code.woboq.org/userspace/glibc/nptl/sem_open.c.html
It seems that valgrind is complaining about the line (270 as I look now):
if (TEMP_FAILURE_RETRY (__libc_write (fd, &sem.initsem, sizeof (sem_t)))
== sizeof (sem_t)
However, sem.initsem is properly initialised earlier in a fairly baroque manner: first by explicitly setting fields in sem.newsem (part of the union), and then, once that is done, by a call to memset (lines 226-228):
/* Initialize the remaining bytes as well. */
memset ((char *) &sem.initsem + sizeof (struct new_sem), '\0',
sizeof (sem_t) - sizeof (struct new_sem));
I think this particular bit of shenanigans is all quite optimal, but we need to make sure that all of the fields of new_sem have actually been initialised. We find the definition in https://code.woboq.org/userspace/glibc/sysdeps/nptl/internaltypes.h.html, and it is this wonderful creation:
struct new_sem
{
#if __HAVE_64B_ATOMICS
/* The data field holds both value (in the least-significant 32 bits) and
   nwaiters. */
# if __BYTE_ORDER == __LITTLE_ENDIAN
# define SEM_VALUE_OFFSET 0
# elif __BYTE_ORDER == __BIG_ENDIAN
# define SEM_VALUE_OFFSET 1
# else
# error Unsupported byte order.
# endif
# define SEM_NWAITERS_SHIFT 32
# define SEM_VALUE_MASK (~(unsigned int)0)
uint64_t data;
int private;
int pad;
#else
# define SEM_VALUE_SHIFT 1
# define SEM_NWAITERS_MASK ((unsigned int)1)
unsigned int value;
int private;
int pad;
unsigned int nwaiters;
#endif
};
So if we __HAVE_64B_ATOMICS then the structure has a data field which contains both the value and the nwaiters, otherwise these are separate fields.
In the initialisation of sem.newsem we can see that these are initialised correctly, as follows:
#if __HAVE_64B_ATOMICS
sem.newsem.data = value;
#else
sem.newsem.value = value << SEM_VALUE_SHIFT;
sem.newsem.nwaiters = 0;
#endif
/* pad is used as a mutex on pre-v9 sparc and ignored otherwise. */
sem.newsem.pad = 0;
/* This always is a shared semaphore. */
sem.newsem.private = FUTEX_SHARED;
I'm doing all of this on a 64-bit system, so I think that valgrind is complaining about the initialisation of the 64-bit sem.newsem.data with a 32-bit value since from:
value = va_arg (ap, unsigned int);
We can see that value is defined simply as an unsigned int, which will usually still be 32 bits even on a 64-bit system (see "What should be the sizeof(int) on a 64-bit machine?"), but that should just be an implicit conversion to 64 bits when it is assigned.
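For what it's worth, that implicit conversion zero-extends, so all eight bytes of the destination are written; a tiny sketch of just the conversion (not glibc's actual code):
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    unsigned int value = 5;   /* 32-bit, as pulled out of the va_list */
    uint64_t data = value;    /* implicit conversion zero-extends to all 64 bits */

    printf("%016" PRIx64 "\n", data);   /* 0000000000000005 - no byte left unset */
    return 0;
}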
So I think this is not a bug - just valgrind getting confused.

Union struct: printing struct member of type uint32_t skips two bytes and prints wrong value

I need help with a union and struct combination. I'm receiving a byte stream that consists of various packets, so I'm putting the byte data into a union and accessing the needed data via struct members. The problem is with the uint32_t member: the read skips two of its bytes and shows the wrong value when accessed via the member. Here's the full demo code:
PacketUtils.h
#include <stdint.h>
typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;

typedef union {
    uint8_t *bytes;       // stores raw bytes
    PacketData *packet;
} Packet;
// Puts bytes into predefined struct
void getPacketFromBytes(void *bytes, Packet *packetRef);
PacketUtils.c
#include <stdio.h>
#include "PacketUtils.h"

void getPacketFromBytes(void *bytes, Packet *packetRef)
{
    uint8_t *rawBytes = (uint8_t *)bytes;
    packetRef->bytes = rawBytes;
}
Calling code:
// sample byte data
uint8_t packetBytes[] = {0x11, 0x02, 0x01, 0x01, 0x01, 0x03, 0xbb, 0xbd};
Packet packetRef;
getPacketFromBytes(packetBytes, &packetRef);
printf("%x\n", packetRef.packet->startSymbol); // good - prints 0x11
printf("%x\n", packetRef.packet->packetType); // good - prints 0x02
printf("%x\n", packetRef.packet->deviceId); // bad - prints bd bb 03 01
printf("%x\n", packetRef.packet->packetCRC); // bad - prints 36 80 (some next values in memory)
Everything is OK when the PacketData struct consists only of uint8_t or uint16_t members - then the print shows the correct values. However, printing deviceId of type uint32_t skips two bytes (0x01 0x01) and grabs the last four bytes. Printing packetCRC prints values outside the given byte array - two values further along in memory, like packetBytes[12] and packetBytes[13]. I can't figure out why it skips two bytes...
The problem is due to the fields being padded out to their default alignment on your platform. On most modern architectures, 32-bit values are accessed most efficiently when they sit at a 32-bit-aligned address.
In gcc you can avoid this by using a special attribute to indicate that the structure is "packed". See here:
http://gcc.gnu.org/onlinedocs/gcc-3.3.6/gcc/Type-Attributes.html
So the struct definition would look something like this:
typedef struct __attribute__((packed)) {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;
The 32-bit number will only ever be placed on a 4-byte boundary. If you move it to the start of your struct, it may just work as you want.
Processors are usually optimised to fetch data at multiples of the datum size - 4 bytes for 32-bit, 8 bytes for 64-bit - and the compiler knows this, so it inserts gaps into data structures to make sure the processor can fetch the data efficiently.
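A quick way to see the padding for yourself (a sketch; exact sizes depend on your platform's ABI, but 12 and 4 are typical for this layout):
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint32_t deviceId;
    uint16_t packetCRC;
} PacketData;

int main(void)
{
    /* Typically prints 12 and 4: two padding bytes are inserted after packetType
       so deviceId starts on a 4-byte boundary, and two more at the end so the
       struct's size is a multiple of its alignment. */
    printf("sizeof(PacketData) = %zu\n", sizeof(PacketData));
    printf("offsetof(deviceId) = %zu\n", offsetof(PacketData, deviceId));
    return 0;
}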
If you don't want to deal with the padding and can't move the data structure around, you could define
typedef struct {
    uint8_t startSymbol;
    uint8_t packetType;
    uint16_t deviceIdLow;
    uint16_t deviceIdHigh;
    uint16_t packetCRC;
} PacketData;
and then just write
uint32_t deviceID = packetRef.packet->deviceIdLow | ((uint32_t)packetRef.packet->deviceIdHigh << 16);

NSSwapInt from byte array

I'm trying to implement a function that will read a 32-bit int stored with a different endianness from a byte array (which is a char* in my code). It was suggested that I use NSSwapInt, but I'm clueless about how to go about it. Could anyone show me a snippet?
Thanks in advance!
Here's a short example:
unsigned char bytes[] = { 0x00, 0x00, 0x01, 0x02 };
int intData = *((int *)bytes);
int reverseData = NSSwapInt(intData);
NSLog(#"integer:%d", intData);
NSLog(#"bytes:%08x", intData);
NSLog(#"reverse integer: %d", reverseData);
NSLog(#"reverse bytes: %08x", reverseData);
The output will be:
integer:33619968
bytes:02010000
reverse integer: 258
reverse bytes: 00000102
As mentioned in the docs,
Swaps the bytes of inv and returns the resulting value. Bytes are swapped from each low-order position to the corresponding high-order position and vice versa. For example, if the bytes of inv are numbered from 1 to 4, this function swaps bytes 1 and 4, and bytes 2 and 3.
There are also NSSwapShort and NSSwapLongLong.
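A quick sketch of those counterparts, which work the same way (the literals are just examples):
unsigned short s      = NSSwapShort(0x0102);                    // 0x0201
unsigned long long ll = NSSwapLongLong(0x0102030405060708ULL);  // 0x0807060504030201
NSLog(@"%04x %016llx", s, ll);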
There is a potential for a data misalignment exception if you solve this problem using integer pointers - some architectures require 32-bit values to be at addresses that are multiples of 2 or 4 bytes. The ARM architecture used by the iPhone et al. may raise an exception in this case, but I have no iOS device handy to test whether it does.
A safe way to do this which will never throw any misalignment exceptions is to assemble the integer directly:
int32_t bytes2int(const unsigned char *b)
{
    uint32_t i;
    i = (uint32_t)b[0] | (uint32_t)b[1] << 8 | (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24; // little-endian, or
    i = (uint32_t)b[3] | (uint32_t)b[2] << 8 | (uint32_t)b[1] << 16 | (uint32_t)b[0] << 24; // big-endian (pick one)
    return (int32_t)i;
}
You can pass this any byte pointer and it will assemble 4 bytes to make a 32-bit int. You can extend the idea to 64-bit integers if required.
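For instance, a small usage sketch (it assumes you keep only the line in bytes2int that matches your data's byte order):
unsigned char buf[] = { 0x00, 0x00, 0x01, 0x02 };
int32_t value = bytes2int(buf);
printf("%08x\n", (unsigned int)value);   /* 02010000 with the little-endian line,
                                            00000102 with the big-endian line */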