I've been trying to send packets to a Minecraft server from my custom Cocoa application (written in Objective-C, of course). I am a little confused as to how to do that, though. I did it in Java, and that was very easy. Doing it in Objective-C, however, is proving to be a bit more challenging.
This is the code that I am using:
- (void)handshake
{
PacketHandshake *packet = [PacketHandshake packetString:[NSString stringWithFormat:@"%@;%@:%i", username, IP, PORT]];
[packet writeData:dataOut];
}
Which calls:
- (void)writeData:(NSOutputStream *)dataOut
{
[super writeData:dataOut]; //Writes the "header" which is a char with the value of 0x02 (char packetID = 0x02)
NSUInteger len = [string lengthOfBytesUsingEncoding:NSUTF16BigEndianStringEncoding]; //Getting the length of the string, I guess?
NSData *data = [string dataUsingEncoding:NSUTF16BigEndianStringEncoding]; //Getting string bytes?
[dataOut write:(uint8_t*)len maxLength:2]; //Send the length?
[dataOut write:[data bytes] maxLength:[data length]]; //Send the actual string?
}
I have established a successful connection to the server beforehand, but I don't really know whether or not I am sending the packets correctly. Could somebody please explain how I should send various data types and objects (int, byte/char, short, double, NSString, BOOL/bool)?
Also, is there any specific or universal way to send packets like the ones required by Minecraft?
OK, I guess the question is now: how do data types, mainly strings, relate between Java and Objective-C?
Any help is appreciated, thank you!
Nobody knows?
Maybe you're running into a network/host byte order problem? I know very little about Minecraft, but I note that it's mentioned here that shorts in the Minecraft protocol use network byte order, which is big-endian (all other data types are one byte long, so endianness is not relevant).
All x86 machines use little-endian.
I don't know whether your PacketHandshake class is converting the data before sending it; if not, you could use the C library functions ntohs() and htons(), for which you'd need to include sys/types.h and netinet/in.h.
The link also mentions that strings are a 64-byte array of standard ASCII chars, padded with 0x20s. You can get the ASCII value out of an NSString by calling [string UTF8String], which returns a const char*, i.e. your standard C string ending with a 0x00, and then pad it. But if it just works in Java, then maybe you don't need to.
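If it helps, here's a minimal sketch of writing a length-prefixed string in network byte order (assuming the protocol wants a big-endian uint16_t character count followed by the UTF-16BE bytes; I haven't verified that against the Minecraft spec). Note that your original code casts len itself to a pointer; write:maxLength: needs the address of a buffer holding the value:

#include <netinet/in.h> // for htons()

NSData *payload = [string dataUsingEncoding:NSUTF16BigEndianStringEncoding];
uint16_t charCount = (uint16_t)[string length]; // characters, not bytes
uint16_t beCount = htons(charCount);            // host to network (big-endian) order
[dataOut write:(const uint8_t *)&beCount maxLength:sizeof(beCount)];
[dataOut write:[payload bytes] maxLength:[payload length]];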
I'm facing a problem that I don't understand. Before I begin to explain it: even though I worked on a Swift project this year that used some Objective-C, I am new to this language and its concepts.
So, here is my problem: I want to access the bytes of an NSData object. I know there are several ways to do so:
[data bytes];
data.bytes;
[data getBytes:dest length:[data length]];
But none of these methods gives me the same value that the console shows when I run po [data bytes].
Can you explain why this happens? I don't really understand what I'm missing.
Thanks.
data and data.bytes are two totally different types. data is an instance of NSData, while data.bytes is a raw pointer (const void *). When you call po in the debugger (short for "print object"), it calls -description on anything that inherits from NSObject, or just prints the value if it does not.
In this case, since data is an NSData (which has -description), if you po data, it calls [data description] and prints the result of that out; since NSData knows how to nicely format its contents, it will print nicely.
However, since data.bytes is a const void *, there is no way for the debugger to know how to print it (a void * can point to anything; how to interpret it is totally up to you), so it just prints out the pointer value itself.
If you want to print the data from the debugger directly, you can tell it how to interpret the pointer and print it out. If you know that the data blob is n bytes long, you can run the following command:
p/x *(uint8_t (*)[<n>])data.bytes
where <n> is replaced with the literal length of the data (e.g. uint8_t (*)[8]). *(uint8_t (*)[<n>])data.bytes tells the debugger to reinterpret data.bytes as an array of n bytes (giving it the length so it knows how much data to read from memory) while p/x tells it to print the hex values of the bytes it finds.
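If you'd rather check the bytes from code instead of the debugger, a simple sketch using only standard Foundation calls:

const uint8_t *bytes = data.bytes;
NSMutableString *hex = [NSMutableString string];
for (NSUInteger i = 0; i < data.length; i++) {
    [hex appendFormat:@"%02x ", bytes[i]]; // one hex pair per byte
}
NSLog(@"%@", hex); // the same bytes that `po data` formats for you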
I'm trying to read in the first four bytes of a file. I know that this works correctly with the following C code:
FILE *file = fopen(URL.path.UTF8String, "rb");
uint data;
fread(&data, 4, 1, file);
NSLog(#"%u", data);
This prints out: 205
I'm trying to find the equivalent way of doing this in Objective-C/with Cocoa functions. I've tried a number of things. I feel like the following is close:
NSFileHandle *fileHandle = [NSFileHandle fileHandleForReadingFromURL:URL error:nil];
NSData *data2 = [fileHandle readDataOfLength:4];
NSLog(#"%#", data2);
NSLog(#"%u", (uint)data2.bytes);
This prints out: <cd000000>
and: 1703552
As expected, the first four bytes of the file are indeed CD000000.
I'm assuming there's one of two things causing the difference (or both):
fread is not counting the 0s following the CD. I've confirmed this by reading in only 1 byte with the fileHandle, but sometimes this number will extend beyond one byte, so I can't restrict it like that. Do I need to manually check that the bytes coming in aren't 00?
This has something to do with endianness. I have tried a number of functions such as CFSwapInt32BigToHost but have not been able to get back the right value. It would be great if anyone could enlighten me as to how endianness works and affects this.
You are not dereferencing the data.
NSLog(#"%u", (uint)data2.bytes); // wrong
The "quick hack" version is like this:
NSLog(#"%u", *(uint *) data2.bytes); // hack
A more robust solution requires copying to a variable somewhere to get the alignment right (misaligned access is fine on some platforms, but not all):
uint value;
[data2 getBytes:&value length:sizeof(value)];
NSLog(@"%u", value);
Another solution is to explicitly read the data byte-by-byte, which is most portable, has no alignment issues on any platform, and has no byte-order issues on any platform:
const unsigned char *p = data2.bytes;
uint value = (unsigned) p[0] | ((unsigned) p[1] << 8) |
             ((unsigned) p[2] << 16) | ((unsigned) p[3] << 24); // assembles the value little-endian
NSLog(@"%u", value);
As you can see, there are good reasons why we avoid putting binary data in files ourselves, and leave it to libraries or use text formats.
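One more note, since CFSwapInt32BigToHost came up in the question: the file evidently stores the value little-endian (CD 00 00 00 reads back as 205), so the portable swap would be little-to-host, which is a no-op on x86. A sketch, reusing data2 from above:

uint32_t value;
[data2 getBytes:&value length:sizeof(value)];
value = CFSwapInt32LittleToHost(value); // no-op on x86; swaps on big-endian hosts
NSLog(@"%u", value); // 205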
This can't be an issue with byte order, because fread() is working correctly. The fread() function and the -readDataOfLength: method will both give you the same result: a chunk of bytes.
You attempt to reinterpret a sequence of 4 bytes as an unsigned int. This is not guaranteed to work on all platforms; it works only if sizeof(unsigned int) equals 4, and only if the byte order is the same for reading and writing.
Furthermore, you are not printing the scalars correctly with NSLog.
fread() in binary mode won't do anything to your data; you'll get the bytes exactly as they are in the file.
It's absolutely byte ordering that is causing this, but I don't know anything about Apple's Objective-C APIs. I don't even understand why you don't need pointer accesses to the data2 object (why isn't data2.bytes failing, and data2->bytes needed?).
Also, the documentation for NSData doesn't say anything about byte order that I could find.
I'm using CocoaAsyncSocket to send Google Protocol Buffers data (using http://code.google.com/p/metasyntactic/wiki/ProtocolBuffers) to a Java server. This all works fine, BUT for messages (protoToSend) larger than 128 bytes I'm running into issues: the Java server cannot read the message length correctly, I think because I'm sending the wrong length from Objective-C.
I currently send the data as follows:
AsyncSocket *socket;
- (void)sendProtoToServer:(RequestMessage *)protoToSend {
NSData *d = [protoToSend data];
int s = [protoToSend serializedSize];
NSData *size = [NSData dataWithBytes:&s length:1];
[socket writeData:size withTimeout:TIME_OUT tag:100];
[socket writeData:d withTimeout:TIME_OUT tag:101];
}
Any ideas?
Thanks in advance
The length is little-endian varint encoded, presumably, meaning it is sent in chunks of 7 bits with the MSB used as a continuation bit. If the MSB is set, you need to process the next byte (and so on) to get the combined length, then use bitwise shifts to combine the 7-bit groups.
For all numbers < 128, this looks identical to a single raw byte, which is why your code works for small messages.
See here for the spec on decoding base-128 varints.
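In case it helps, here's a minimal sketch of encoding a base-128 varint in C, to use in place of the single length byte (encodeVarint is an illustrative helper, not part of any protobuf library):

// Writes value as a base-128 varint into out (up to 5 bytes for 32 bits).
// Returns the number of bytes written.
static size_t encodeVarint(uint32_t value, uint8_t *out) {
    size_t i = 0;
    while (value >= 0x80) {
        out[i++] = (uint8_t)(value & 0x7F) | 0x80; // 7 data bits, MSB = "more follows"
        value >>= 7;
    }
    out[i++] = (uint8_t)value; // final byte, MSB clear
    return i;
}

// Then, instead of dataWithBytes:&s length:1 :
uint8_t buf[5];
size_t n = encodeVarint((uint32_t)[protoToSend serializedSize], buf);
NSData *size = [NSData dataWithBytes:buf length:n];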
I need to put a short and an integer at the beginning of a message that I am sending to a Java server. The server expects to read a short (message ID) and then an integer (message length). I've read on Stack Overflow that NSMutableData is similar to Java's ByteBuffer.
I am trying to pack the message into an NSMutableData object and then send it.
This is what I have, but it is not working:
NSMutableData *data = [NSMutableData dataWithLength:(sizeof(short) + sizeof(int))];
short msg_id = 2;
int length = 198;
[data appendBytes:&msg_id length:sizeof(short)];
[data appendBytes:&length length:sizeof(int)];
send(sock, data, 6, 0);
The server is using a Java ByteBuffer to read the received data, so the bytes coming in are:
32,120,31,0,2,0
which is invalid. The correct values, so that ByteBuffer can read them with .getShort() and .getInt(), would be:
0,2,0,0,0,-66
You're basically putting stuff into the NSData object correctly, but you're not using it with the send function correctly. First off, as dreamlax suggests, use NSMutableData's dataWithCapacity: initializer, which reserves capacity without prefilling zeroed bytes.
Your data pointer is a pointer to an Objective-C (NSData) object, not the actual raw byte buffer. The send function is a classic UNIX-y C function and doesn't know anything about Objective-C objects; it expects a pointer to the actual bytes:
send(sock, [data bytes], [data length], 0);
Also, FWIW, note that endianness matters here if you're expecting to recover the multibyte fields on the server. Consider using htonl() and htons() on the int and short values before putting them in the NSData buffer, assuming the server expects "network" byte order for its packet format (though maybe you control that).
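Putting both fixes together, a sketch (assuming the Java side reads a big-endian short then a big-endian int, which is ByteBuffer's default byte order):

#include <arpa/inet.h> // htons/htonl

short msg_id = htons(2);  // stored in memory as 00 02
int length = htonl(198);  // stored in memory as 00 00 00 c6
NSMutableData *data = [NSMutableData dataWithCapacity:sizeof(msg_id) + sizeof(length)];
[data appendBytes:&msg_id length:sizeof(msg_id)];
[data appendBytes:&length length:sizeof(length)];
send(sock, [data bytes], [data length], 0); // pass the raw bytes, not the object pointer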
I think your use of dataWithLength: will give you an NSMutableData object with 6 bytes all initialised to 0, but then you append 6 more bytes with actual values (so you'll end up with 12 bytes all up). I'm assuming here that short is 2 bytes and int is 4. I believe you want dataWithCapacity:, which merely hints how much memory to reserve for the data you are packing.
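To see the difference concretely, compare (the byte values here are just for illustration):

NSMutableData *zeroed = [NSMutableData dataWithLength:6];     // six zero bytes already present
[zeroed appendBytes:"\x00\x02" length:2];                     // now 8 bytes total
NSMutableData *reserved = [NSMutableData dataWithCapacity:6]; // empty, capacity merely hinted
[reserved appendBytes:"\x00\x02" length:2];                   // now 2 bytes total
NSLog(@"%@ / %@", zeroed, reserved); // e.g. <00000000 00000002> / <0002>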
As quixoto has pointed out, you need to use the bytes method, which returns a pointer to the first byte of the actual data. The length method will return the number of bytes you have.
Another thing you need to watch out for is endianness. The position of the most significant byte is dependent on the underlying architecture.
I'm working with Objective-C and I need to add ints from an NSArray to an NSMutableData object (I'm preparing the data to send over a connection). If I wrap the ints in NSNumber and then add them to NSMutableData, how would I find out how many bytes the NSNumber's int occupies? Would it be possible to use sizeof(), since according to the Apple documentation, "NSNumber is a subclass of NSValue that offers a value as any C scalar (numeric) type"?
Example:
NSNumber *numero = [[NSNumber alloc] initWithInt:5];
NSMutableData *data = [[NSMutableData alloc] initWithCapacity:0];
[data appendBytes:numero length:sizeof(numero)];
numero is not a numeric value; it is a pointer to an object representing a numeric value. What you are trying to do won't work: the size will always be the size of a pointer (4 on 32-bit platforms and 8 on 64-bit), and you will append some garbage pointer value to your data rather than the number.
Even if you were to try to dereference it, you cannot directly access the bytes backing an NSNumber and expect it to work. What is going on is an internal implementation detail, and may vary from release to release, or even between different configurations of the same release (32 bit vs 64 bit, iPhone vs Mac OS X, arm vs i386 vs PPC). Just packing up the bytes and sending them over the wire may result in something that does not deserialize properly on the other side, even if you managed to get to the actual data.
You really need to come up with an encoding of an integer you can put into your data and then pack and unpack the NSNumbers into that. Something like:
NSNumber *myNumber = ...; // (get a value somehow)
int32_t myInteger = (int32_t)[myNumber integerValue]; // get the integer value out of the number
int32_t networkInteger = htonl(myInteger); // convert the integer to network (big-endian) byte order
[data appendBytes:&networkInteger length:sizeof(networkInteger)]; // stuff it into the data
On the receiving side you then grab out the integer and recreate an NSNumber with numberWithInteger: after using ntohl to convert it to native host format.
It may require a bit more work if you are trying to send minimal representations, etc.
The other option is to use an NSCoder subclass and tell the NSNumber to encode itself using your coder, since that will be platform neutral, but it may be overkill for what you are trying to do.
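For reference, the receiving side might look something like this (receivedData is a placeholder for whatever NSData you pulled off the wire):

int32_t networkInteger;
[receivedData getBytes:&networkInteger length:sizeof(networkInteger)];
int32_t myInteger = ntohl(networkInteger); // network (big-endian) to host order
NSNumber *myNumber = [NSNumber numberWithInteger:myInteger];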
First, NSNumber *numero is "a pointer to an NSNumber type", and the NSNumber type is an Objective-C object. In general, unless specifically stated somewhere in the documentation, the rule of thumb in object-oriented programming is that "the internal details of how an object chooses to represent its internal state are private to the object's implementation, and should be treated as a black box." Again, unless the documentation says you can do otherwise, you can't assume that NSNumber is using a C primitive type of int to store the int value you gave it.
The following is a rough approximation of what's going on behind the scenes when you call appendBytes: with numero:
// Illustrative only: a fake stand-in for NSNumber's private layout.
typedef struct {
    Class isa;
    double dbl;
    long long ll;
} NSNumber;

NSNumber *numero = malloc(sizeof(NSNumber));
memset(numero, 0, sizeof(NSNumber));
numero->isa = objc_getClass("NSNumber");

void *bytes = malloc(1024);
memcpy(bytes, numero, sizeof(numero)); // sizeof(numero) == sizeof(void *), NOT sizeof(NSNumber)
This makes it a bit clearer that what you're appending to the NSMutableData object data is the first four (or eight) bytes of whatever numero is pointing to (which, for an Objective-C object, is always isa, the object's class). I suspect what you "wanted" to do was copy the pointer to the instantiated object (the value of numero), in which case you should have used &numero. This is a problem if you're using GC, as the buffer used by NSMutableData is not scanned (i.e., the GC system will no longer "see" the object and will reclaim it, which is pretty much a guarantee of a random crash at some later point).
It's hopefully obvious that even if you put the pointer to the instantiated NSNumber object into data, that pointer only has meaning within the process that created it. A pointer to that object is even less meaningful if you send it to another computer: the receiving computer has no (practical, trivial) way to read the memory that the pointer points to in the sending computer.
Since you seem to be having problems with this part of the process, let me make a recommendation that will save you countless hours of debugging some extremely difficult implementation bugs you're bound to run into:
Abandon this entire idea of trying to send raw binary data between machines and just send simple ASCII/UTF-8 formatted information between them.
If you think this is somehow going to be slow or inefficient, then let me recommend that you bring everything up using a simplified ASCII/UTF-8 stringified version first. Trust me, debugging raw binary data is no fun, and the ability to just NSLog(@"I got: %@", dataString) is worth its weight in gold when you're debugging your inevitable problems. Then, once everything has gelled, and you're confident that you don't need to make any more changes to what it is you need to exchange, "port" (for lack of a better word) that implementation to a binary-only version if, and only if, profiling with Shark.app identifies it as a problem area.
As a point of reference, these days I can scp a file between machines and saturate a gigabit link with the transfer. scp probably has to do about five thousand times as much processing per byte to compress and encrypt the data as this simple stringification, all while transferring 80 MB/sec. Yet on modern hardware this is barely enough to budge the CPU meter running in my menu bar.