How to treat a float as an integer at the bit level in AssemblyScript

I need to implement some C code like the one below:
float number = 1.5f;
long i = * ( long * ) &number;
This is not about converting the value from float to integer; the data needs to be modified at the bit level.

Just use the reinterpret built-in function:
let num32: f32 = 1.5;
let num64: f64 = 2.5;
let uint32 = reinterpret<u32>(num32);
// uint32 <- 0x3fc00000
let uint64 = reinterpret<u64>(num64);
// uint64 <- 0x4004000000000000
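Since the question is about modifying the data at the bit level, you can operate on the reinterpreted integer and convert back the same way. A minimal sketch (flipping the sign bit is just an example of a bit-level edit):
let bits: u32 = reinterpret<u32>(<f32>1.5);  // 0x3fc00000
bits ^= <u32>0x80000000;                     // toggle the sign bit
let modified: f32 = reinterpret<f32>(bits);  // -1.5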

Related

Example for length-prefix framing for protocol buffers

I'm currently working on a protocol buffer system for transporting large messages of up to 6 MB. My concern is that I misinterpreted the following post (https://eli.thegreenplace.net/2011/08/02/length-prefix-framing-for-protocol-buffers). My understanding of that post is:
message GeometryInTime
{
    uint32 vecLength = 1;
    message Vector3d
    {
        optional double x = 1;
        optional double y = 2;
        optional double z = 3;
    }
    uint32 timeStampLength = 1;
    message Timestamp
    {
        optional int64 seconds = 1;
        optional uint32 nanos = 2;
    }
}
Is that a valid implementation of the length-prefixed system? Does it work for repeated fields? Does the length refer to the serialized or the unserialized length (I'm confusing myself with that)? Does this work for partial message deserialization?
Edit:
message Vector3d
{
    optional double x = 1;
    optional double y = 2;
    optional double z = 3;
}
message Timestamp
{
    optional int64 seconds = 1;
    optional uint32 nanos = 2;
}
message GeometryInTime
{
    uint32 vecLength = 1;
    optional Vector3d vector = 2;
    uint32 timeStampLength = 3;
    optional Timestamp timestamp = 4;
}
An embedded message is just a definition, not a usage. Right now GeometryInTime contains only the lengths.
In terms of embedding sub-messages, there are two formats: length-prefixed and grouped (start/end tokens; this option is basically deprecated now). When using length-prefixed encoding, the library deals with everything - the length will always be "varint" encoded.
The only time a custom length-prefix approach is relevant is for the root message, as part of a framing protocol. In that scenario the library has nothing to do with it, so no amount of changes to the message will make any difference: you need to handle the frame data (the length prefix etc., in whatever format) outside of the serializer.
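To make that concrete, here is a minimal sketch of framing a root message outside the serializer (a fixed 4-byte little-endian prefix is an assumption here; a varint prefix works the same way, and the length is always the serialized byte count, not the in-memory size):
// `payload` is the already-serialized message bytes; the framing knows nothing about its contents.
function frame(payload: Uint8Array): Uint8Array {
  const framed = new Uint8Array(4 + payload.length);
  new DataView(framed.buffer).setUint32(0, payload.length, true); // write the serialized length first
  framed.set(payload, 4);
  return framed;
}
function unframe(framed: Uint8Array): Uint8Array {
  const view = new DataView(framed.buffer, framed.byteOffset, framed.byteLength);
  const length = view.getUint32(0, true);  // read the prefix
  return framed.subarray(4, 4 + length);   // then exactly that many payload bytes
}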

Converting Byte to UInt8 (Swift 3)

I have the following code written in Objective-C:
int port1 = SERVER_DEVICE_PORT;
int port2 = SERVER_DEVICE_PORT>>8;
Byte port1Byte[1] = {port1};
Byte port2Byte[1] = {port2};
NSData *port1Data = [[NSData alloc]initWithBytes: port1Byte length: sizeof(port1Byte)];
NSData *port2Data = [[NSData alloc]initWithBytes: port2Byte length: sizeof(port2Byte)];
I have converted it to Swift 3 like so:
let port1: Int = Int(SERVER_DEVICE_PORT)
let port2: Int = Int(SERVER_DEVICE_PORT) >> 8
let port1Bytes: [UInt8] = [UInt8(port1)]
let port2Bytes: [UInt8] = [UInt8(port2)]
let port1Data = NSData(bytes: port1Bytes, length: port1)
let port2Data = NSData(bytes: port2Bytes, length: port2)
However, with this code I am receiving an error.
How can this be fixed?
The easiest way in Swift 3 to get the two lowest bytes from a 32-bit value is:
var SERVER_DEVICE_PORT : Int32 = 55056
let data = Data(buffer: UnsafeBufferPointer(start: &SERVER_DEVICE_PORT, count: 1))
// or let data = Data(bytes: &SERVER_DEVICE_PORT, count: 2)
let port1Data = data[0]
let port2Data = data[1]
print(port1Data, port2Data)
This results in UInt8 values; to get Data values, use
let port1Data = Data([data[0]])
let port2Data = Data([data[1]])
If – for some reason – the 32-bit value is big-endian (most significant byte at the smallest address), then port1Data = data[3] and port2Data = data[2].
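The same two bytes can also be extracted with plain shifts and masks, independent of how the integer is laid out in memory; a small sketch (TypeScript here, purely for illustration):
const SERVER_DEVICE_PORT = 55056;               // 0xd710
const port1 = SERVER_DEVICE_PORT & 0xff;        // low byte: 0x10 = 16
const port2 = (SERVER_DEVICE_PORT >> 8) & 0xff; // next byte: 0xd7 = 215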

What is the canonical way to create network packets in Objective C?

Coming from Python, I could do something like this:
values = (1, 'ab', 2.7)
s = struct.Struct('I 2s f')
packet = s.pack(*values)
I can pack arbitrary types together very simply in Python. What is the standard way to do it in Objective-C?
Using a C struct is the normal approach. For example:
typedef struct {
    int a;
    char foo[2];
    float b;
} MyPacket;
This defines a type holding an int, 2 characters, and a float. You can then interpret the struct as a byte array for writing:
MyPacket p = {.a = 2, .b = 2.7};
p.foo[0] = 'a';
p.foo[1] = 'b';
char *toWrite = (char *)&p; // a buffer of size sizeof(p)
Not a very clear question, but maybe you're looking for a packed struct?
struct __attribute__((packed)) NetworkPacket {
    int integer;
    char character;
};
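For comparison with the Python struct.Struct('I 2s f') snippet in the question, the same layout can also be packed by hand; a rough sketch (TypeScript, purely for illustration; the unpadded offsets and little-endian byte order are assumptions):
const buffer = new ArrayBuffer(10);      // 4 (uint32) + 2 (chars) + 4 (float32) bytes, no padding
const view = new DataView(buffer);
view.setUint32(0, 1, true);              // 'I'  -> uint32, little-endian
view.setUint8(4, 'a'.charCodeAt(0));     // '2s' -> two raw bytes
view.setUint8(5, 'b'.charCodeAt(0));
view.setFloat32(6, 2.7, true);           // 'f'  -> float32, little-endian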

Convert float to int in Objective-C

How can I convert a float to int while rounding up to the next integer? For example, 1.00001 would go to 2 and 1.9999 would go to 2.
float myFloat = 3.333f;
// for the nearest integer rounded up (3.333 -> 4):
int result = (int)ceilf(myFloat);
// for the nearest integer (3.4999 -> 3, 3.5 -> 4):
int result = (int)roundf(myFloat);
// for the nearest integer rounded down (3.999 -> 3):
int result = (int)floorf(myFloat);
// for plain truncation (when you don't care about rounding):
int result = (int)myFloat;
Use the ceil function:
int intValue = (int)ceil(yourValue);
You can use the following C functions to get integer values from different data types.
extern float ceilf(float);
extern double ceil(double);
extern long double ceill(long double);
These functions take and return float, double, and long double respectively; their job is to compute the ceiling (or floor) of the argument, as described at http://en.wikipedia.org/wiki/Floor_and_ceiling_functions.
You can then cast the return value to the desired type:
int intVariable = (int)ceilf(floatValueToConvert);
Hope it is helpful.
If we have a float value like 13.123 and want to convert it to an integer like 13:
Code
float floatnumber = 13.123; // you can also use CGFloat instead of float
NSLog(@"%.f", floatnumber); // prints the rounded value to the console

Converting System::Uint^ to unsigned long

I have a function in my C static library which takes a pointer value as an argument.
Now I am writing a C++/CLI wrapper for it, which in turn will be used from C# code.
long function_C( PULONG pulsize, PULONG pulcount );
Wrapper function (C++/CLI):
long function_Managed( System::Uint^ size, System::Uint^ pulcount );
I am calling function_C from function_Managed. Now I am facing a problem converting System::Uint^ to PULONG.
My questions are:
1. Is this the correct way to do this?
2. If it is, how do I convert System::Uint^ to PULONG?
long function_C(PULONG pulsize, PULONG pulcount);

int function_Managed(unsigned% size, unsigned% count)
{
    // Copy the tracking references into native locals so their addresses can be taken.
    unsigned long lsize = size, lcount = count;
    long const ret = function_C(&lsize, &lcount);
    // Copy any values updated by the native call back to the managed references.
    size = lsize, count = lcount;
    return ret;
}
To C# code, function_Managed will have this signature:
int function_Managed(ref uint size, ref uint count)
See here for more info. Summarized below:
unsigned int k = *safe_cast<System::UInt32^>(x);