I am trying to convert Java byte[] to Objective-C

I'm trying to convert this Java code to Objective-C. I want to produce the same value as in Java.
byte[] data = new byte[8];
long value = 165500000;
for (int i = 8; i-- > 0; value >>= 8) {
    data[i] = (byte) value;
}
The code below is my Objective-C conversion, but the data comes out different from Java's.
How do I get the same data as in Java?
long time = 165500000;
char cData[8];
for (int i = 8; i-- > 0; time >>= 8) {
    cData[i] = (char)time;
}
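
For what it's worth, two portability differences between Java and C are worth ruling out here (my observation, not from the thread): Java's long is always 64 bits, while C's long can be 32 bits on a 32-bit target; and Java's byte is always signed, while plain char is unsigned on ARM. A minimal sketch using fixed-width types, matching the Java semantics exactly (the fillBytes wrapper is only there so the fragment compiles on its own):

#include <stdint.h>

void fillBytes(void) {
    int64_t value = 165500000;   // Java's long is always 64-bit
    int8_t data[8];              // Java's byte is always a signed 8-bit type
    for (int i = 8; i-- > 0; value >>= 8) {
        data[i] = (int8_t)value; // low byte of value, big-endian placement
    }
}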

Related

heap corruption when using pin_ptr to copy from native code to managed code

I am trying to copy unsigned shorts from native code to managed code, but I get heap corruption when calling memcpy.
INPUT: unsigned short* input
OUTPUT: array<unsigned short> output
I have the following code; if I set testDataSize to 100 then I don't see the corruption.
Could someone please shed some light?
Thanks,
typedef unsigned short uns16;

// DLL Entry Point
void main()
{
    int testDataSize = 600;
    int frSize = testDataSize / 2;
    for (int j = 0; j < 1; j++)
    {
        uns16* input;
        array<uns16>^ output1;
        array<uns16>^ output2;
        input = new uns16(frSize);
        output1 = gcnew array<uns16>(frSize);
        output2 = gcnew array<uns16>(frSize);
        // initialize
        for (int i = 0; i < frSize; i++)
        {
            input[i] = i;
        }
        // test 1
        Stopwatch^ sw1 = Stopwatch::StartNew();
        //-------------------------------------------------------------------
        array<short>^ frameDataSigned = gcnew array<short>(frSize);
        Marshal::Copy(IntPtr((void*)(input)), frameDataSigned, 0, frameDataSigned->Length);
        System::Buffer::BlockCopy(frameDataSigned, 0, output1, 0, (Int32)(frSize) * 2);
        //-------------------------------------------------------------------
        auto res1 = sw1->ElapsedTicks;
        // test 2
        Stopwatch^ sw2 = Stopwatch::StartNew();
        //-------------------------------------------------------------------
        cli::pin_ptr<uns16> pinnedManagedData = &output2[0];
        memcpy(pinnedManagedData, (void*)(input), frSize * sizeof(uns16));
        //-------------------------------------------------------------------
        auto res2 = sw2->ElapsedTicks;
        ....
int frSize = 300;
input = new uns16(frSize);
This doesn't allocate an array. It allocates a single uns16 and sets its value to frSize (300). You need to use square brackets to allocate an array:
input = new uns16[frSize];
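
For completeness, a minimal illustration of the difference between the two forms (the delete lines are my addition; the snippet in the question never frees input):

uns16* one  = new uns16(frSize);  // ONE uns16, value-initialized to frSize
uns16* many = new uns16[frSize];  // frSize uns16 elements, uninitialized
delete one;                       // single-object form
delete[] many;                    // the array form must be freed with delete[]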

Arduino convert constant char to unsigned long

I'd like to know how to convert a const char array to an unsigned long variable!
Here is the problem:
I have a value such as "0x20DF10EF"; if I convert it to long, I get back 551489775.
What I want is to get back 0x20DF10EF!
Hope I've explained my problem well enough!
Best regards, D.Tibe!
---- Edit ----
while (O != 'I') {
    if (reciver.decode(&results)) {
        CMD[i] = "0x" + String(results.value, HEX);
        CMD[i].toUpperCase();
        Val[0] = CMD[i].c_str();
        //Vil[0] = CMD[i].c_str();
        //for(int i = 0; i < sizeof(Val[0])-1 ;i++)
        //{
        //}
        Byte = String(results.bits, DEC);
        delay(1000);
        O = 'I';
        reciver.resume();
    }
}
This is my code!
I have to convert my Val[0] (which is a const char) to an unsigned long variable.
Like I said before, I'll have a value like 0x20DF10EF in my const char, and I want to get exactly the same value in my unsigned long variable. So:
Val[0] will be equal to 0x20DF10EF, and I want the same value back in the unsigned long variable, like this:
unsigned long Var will be equal to 0x20DF10EF
If I understood correctly, you want to parse a const char * string containing a hex number and put it into a variable.
If this is correct, there are two ways: using the sscanf function or converting it by hand.
Method 1:
unsigned long result;
if (sscanf(Val[0], "0x%lx", &result) != 1)
{
    Serial.println("Val[0] is not a valid hex value");
}
Method 2:
unsigned long result = 0;
byte i;
for (i = 2; i < strlen(Val[0]); i++)
{
    if ((Val[0][i] >= '0') && (Val[0][i] <= '9'))
    {
        result = (result << 4) + Val[0][i] - '0';
    }
    else if ((Val[0][i] >= 'A') && (Val[0][i] <= 'F'))
    {
        result = (result << 4) + 10 + Val[0][i] - 'A';
    }
    else if ((Val[0][i] >= 'a') && (Val[0][i] <= 'f'))
    {
        result = (result << 4) + 10 + Val[0][i] - 'a';
    }
    else
    {
        Serial.println("Val[0] is not a valid hex value");
        break;
    }
}
By the way, adding 0x in front of the string is useless for this conversion. If you can, remove it, and then replace "0x%lx" with "%lx" in the sscanf solution, or i = 2 with i = 0 in the hand-made one.
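
A third option, for what it's worth (my addition, not part of the original answer): the avr-libc that Arduino builds on also provides strtoul from <stdlib.h>, which parses base-16 input in one call and accepts an optional leading "0x" by itself:

#include <stdlib.h>

unsigned long result = strtoul(Val[0], NULL, 16); // base 16; a "0x" prefix is fine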

Swift RC4 vs. Objective-C RC4 Performance

I have been trying to rewrite an RC4 algorithm from Objective-C to Swift, to test out Apple's (now old) claims about it running a lot faster.
However, there must be something I am doing horribly wrong, judging by the times I am getting.
This is the objective c code:
+ (NSString*)Rc4:(NSString*)aInput key:(NSString*)aKey {
    NSMutableArray *iS = [[NSMutableArray alloc] initWithCapacity:256];
    NSMutableArray *iK = [[NSMutableArray alloc] initWithCapacity:256];
    for (int i = 0; i < 256; i++) {
        [iS addObject:[NSNumber numberWithInt:i]];
    }
    for (short i = 0; i < 256; i++) {
        UniChar c = [aKey characterAtIndex:i % aKey.length];
        [iK addObject:[NSNumber numberWithChar:c]];
    }
    int j = 2;
    for (int i = 0; i < 255; i++) {
        int is = [[iS objectAtIndex:i] intValue];
        UniChar ik = (UniChar)[[iK objectAtIndex:i] charValue];
        j = (j + is + ik) % 256;
        NSNumber *temp = [iS objectAtIndex:i];
        [iS replaceObjectAtIndex:i withObject:[iS objectAtIndex:j]];
        [iS replaceObjectAtIndex:j withObject:temp];
    }
    int i = 0;
    j = 0;
    NSString *result = aInput;
    for (short x = 0; x < [aInput length]; x++) {
        i = (i + 1) % 256;
        int is = [[iS objectAtIndex:i] intValue];
        j = (j + is) % 256;
        int is_i = [[iS objectAtIndex:i] intValue];
        int is_j = [[iS objectAtIndex:j] intValue];
        int t = (is_i + is_j) % 256;
        int iY = [[iS objectAtIndex:t] intValue];
        UniChar ch = (UniChar)[aInput characterAtIndex:x];
        UniChar ch_y = ch ^ iY;
        //NSLog(ch);
        //NSLog(iY);
        result = [result stringByReplacingCharactersInRange:NSMakeRange(x, 1)
                                                 withString:[NSString stringWithCharacters:&ch_y length:1]];
    }
    [iS release];
    [iK release];
    return result;
}
This runs pretty fast; compiling with -O3 I get times of:
100 runs: 0.006 seconds
with key: 6f7e2a3d744a3b5859725f412f (128-bit)
and input: "MySecretCodeToBeEncryptionSoNobodySeesIt"
This is my attempt to implement it in the same way using Swift:
extension String {
    subscript (i: Int) -> String {
        return String(Array(self)[i])
    }
}

extension Character {
    func unicodeValue() -> UInt32 {
        for s in String(self).unicodeScalars {
            return s.value
        }
        return 0
    }
}

func Rc4(input: String, key: String) -> String {
    var iS = Array(count: 256, repeatedValue: 0)
    var iK = Array(count: 256, repeatedValue: "")
    var keyLength = countElements(key)
    for var i = 0; i < 256; i++ {
        iS[i] = i;
    }
    for var i = 0; i < 256; i++ {
        var c = key[i % keyLength]
        iK[i] = c;
    }
    var j = 2
    for var i = 0; i < 255; i++ {
        var iss = iS[i]
        var ik = iK[i]
        // transform string to int
        var ik_x: Character = Character(ik)
        var ikk_xx = Int(ik_x.unicodeValue())
        j = (j + iss + ikk_xx) % 256;
        var temp = iS[i]
        iS[i] = iS[j]
        iS[j] = temp
    }
    var i = 0
    j = 0
    var result = input
    var eles = countElements(input)
    for var x = 0; x < eles; x++ {
        i = (i + 1) % 256
        var iss = iS[i]
        j = (j + iss) % 256
        var is_i = iS[i]
        var is_j = iS[j]
        var t = (is_i + is_j) % 256
        var iY = iS[t]
        var ch = (input[x])
        var ch_x: Character = Character(ch)
        var ch_xx = Int(ch_x.unicodeValue())
        var ch_y = ch_xx ^ iY
        var start = advance(result.startIndex, x)
        var end = advance(start, 1);
        let range = Range(start: start, end: end)
        var maybestring = String(UnicodeScalar(ch_y))
        result = result.stringByReplacingCharactersInRange(range, withString: maybestring)
    }
    return result;
}
I have tried to implement it so that it mirrors the Objective-C version as closely as possible.
This, however, gives me these horrible times using -O:
100 runs: 0.5 seconds
EDIT
The code should now run in Xcode 6.1 using the extension methods I posted.
I run it from terminal like this:
xcrun swiftc -O Swift.swift -o swift
where Swift.swift is my file, and swift is my executable
Usually, claims of speed don't really apply to encryption algorithms; they apply more to what I'd call "business logic". Functions on bits, bytes, and 16/32/64-bit words are usually difficult to optimize: encryption algorithms are deliberately designed as dense operations on these data structures, with relatively few choices that can be optimized away.
Take Java, for instance. Although far faster than most interpreted languages, it really doesn't compare well with C/C++, let alone with assembly-optimized encryption algorithms. The same goes for most relatively small algebraic problems.
To make things faster you should at least use explicit numeric types for your numbers.
After extensive testing of the code, I have narrowed down what is making my times ultra slow.
If I comment out the code below, so that the iK array just keeps its initial values, I go from a runtime of 5 seconds to 1 second, which is a significant improvement.
for var i = 0; i < 256; i++ {
    var c = key[i % keyLength]
    iK[i] = c;
}
The problem is with this part:
var c = key[i % keyLength]
There is no characterAtIndex(int) method in Swift, so as a workaround I get the character at an index using my extension:
extension String {
    subscript (i: Int) -> String {
        return String(Array(self)[i])
    }
}
But essentially it is the same as this:
var c = Array(key)[i % keyLength]
Instead of the O(1) (constant-time) indexing this operation has in Objective-C, we are paying O(n) per lookup.
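
The general fix is to pay the O(n) conversion once, before the loop; in Swift that means building Array(key) a single time and indexing the resulting array inside the loop. For comparison, here is the same key-schedule step in C++, where string indexing is O(1) to begin with (a sketch of the idea, using the key from the question; buildKeyTable is a name I made up):

#include <cstdint>
#include <string>
#include <vector>

// Builds the 256-entry iK table from the key. operator[] on std::string
// is O(1), so the whole loop is O(256) rather than O(256 * n).
std::vector<uint8_t> buildKeyTable(const std::string& key) {
    std::vector<uint8_t> iK(256);
    const size_t keyLength = key.size();
    for (int i = 0; i < 256; i++) {
        iK[i] = static_cast<uint8_t>(key[i % keyLength]);
    }
    return iK;
}

// usage: auto iK = buildKeyTable("6f7e2a3d744a3b5859725f412f");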

Second call of FT_Load_Char crashes (FreeType)

I have a big problem with the FreeType library.
I use FT_Load_Char to get the width of a text to draw.
On the first call everything works, but on the second call (for the next text to draw) the application crashes.
I use Borland C++ and the FreeType library (version 2.4.9).
To be more clear, this is my code:
currentLength = 0;
for (unsigned long j = 0; j < Text.size(); j++)
{
    FT_Error error = FT_Load_Char(m_Face, Text[j], FT_LOAD_RENDER);
    if (error)
    {
        continue; /* ignore errors */
        return 0;
    }
    int translateX = m_Face->glyph->advance.x >> 6;
    currentLength = currentLength + translateX;
}
if (currentLength > maxLength)
    maxLength = currentLength;
FT_Glyph Glyphe;
FT_Get_Glyph(m_Face->glyph, &Glyphe);
FT_Done_Glyph(Glyphe);
FT_Done_Face(m_Face);
return maxLength;
What's wrong with my code?!
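
One visible problem, for what it's worth (an observation, not a confirmed fix): FT_Done_Face at the bottom destroys the face, so the second call into this function uses m_Face after it has been freed. Below is a minimal sketch of the measuring loop with the face left alive; FT_Done_Face would then be called once, when the font is no longer needed (textWidth is a name I made up):

#include <string>
#include <ft2build.h>
#include FT_FREETYPE_H

// Measures the advance width of a string without tearing down the face.
int textWidth(FT_Face face, const std::string& text)
{
    int width = 0;
    for (std::string::size_type j = 0; j < text.size(); j++) {
        if (FT_Load_Char(face, text[j], FT_LOAD_RENDER))
            continue;                         // ignore glyphs that fail to load
        width += face->glyph->advance.x >> 6; // advance.x is 26.6 fixed point
    }
    return width;
}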

Constructing a bitmask / bitwise packet

I have been wanting to experiment with the Axon project, with an iOS app connecting over a TCP connection. Towards the end of the doc, the protocol is explained as follows:
The wire protocol is simple and very much zeromq-like, where <length> is a BE 24-bit unsigned integer representing a maximum length of roughly ~16 MB. The meta data byte is currently only used to store the codec; for example, "json" is simply 1, and in turn JSON messages received on the client end will be automatically decoded for you by selecting this same codec.
With the diagram
octet: 0 1 2 3 <length>
+------+------+------+------+------------------...
| meta | <length> | data ...
+------+------+------+------+------------------...
I have had experience working with binary protocols creating a packet such as:
NSUInteger INT_32_LENGTH = sizeof(uint32_t);
uint32_t length = [data length]; // data is an NSData object
NSMutableData *packetData = [NSMutableData dataWithCapacity:length + (INT_32_LENGTH * 2)];
[packetData appendBytes:&requestType length:INT_32_LENGTH];
[packetData appendBytes:&length length:INT_32_LENGTH];
[packetData appendData:data];
So my question is: how would you create the data packet for the Axon request? I assume it involves some bit shifting, which I am not too clued up on.
Allocate one array of char or unsigned char with size == packet_size.
Declare constants:
const int metaFieldPos = 0;
const int sizeofMetaField = sizeof(char);
const int lengthPos = metaFieldPos + sizeofMetaField;
const int sizeofLengthField = sizeof(char) * 3;
const int dataPos = lengthPos + sizeofLengthField;
Once you have the data and can recognize the beginning of the packet, you can use the constants above to navigate via pointers.
Maybe these functions will help you (they use Qt, but you can easily translate them to the library that you use):
static const int bytesIn24Bits = 3; // these two constants were not defined in the
static const int bitsInByte = 8;    // original answer; the values are implied by the code

quint32 Convert::uint32_to_uint24(const quint32 value){
    return value & (quint32)(0x00FFFFFFu);
}

qint32 Convert::int32_to_uint24(const qint32 value){
    return value & (qint32)(0x00FFFFFF);
}

quint32 Convert::bytes_to_uint24(const char* from){
    quint32 result = 0;
    quint8 shift = 0;
    for (int i = 0; i < bytesIn24Bits; i++) {
        result |= static_cast<quint32>(*reinterpret_cast<const quint8 *>(from + i)) << shift;
        shift += bitsInByte;
    }
    return result;
}

void Convert::uint32_to_uint24Bytes(const quint32 value, char* from){
    quint8 shift = 0;
    for (int i = 0; i < bytesIn24Bits; i++) {
        const quint32 buf = (value >> shift) & 0xFFu;
        *(from + i) = *reinterpret_cast<const char *>(&buf);
        shift += bitsInByte;
    }
}

QByteArray Convert::uint32_to_uint24QByteArray(const quint32 value){
    QByteArray bytes;
    bytes.resize(sizeof(value));
    *reinterpret_cast<quint32 *>(bytes.data()) = value;
    bytes.chop(1);
    return bytes;
}
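
To answer the original question directly with the fields from the diagram, here is a minimal sketch (my own, in plain C++ rather than NSMutableData; makeFrame is a made-up name): one meta byte, then the payload length as a big-endian 24-bit integer, then the payload itself.

#include <cstdint>
#include <vector>

// Builds an Axon-style frame: | meta | 24-bit BE length | data ... |
// meta = 1 would mean "json" per the protocol description above.
std::vector<uint8_t> makeFrame(uint8_t meta, const std::vector<uint8_t>& data)
{
    const uint32_t length = static_cast<uint32_t>(data.size()); // must be < 2^24
    std::vector<uint8_t> frame;
    frame.reserve(4 + data.size());
    frame.push_back(meta);
    frame.push_back((length >> 16) & 0xFF); // big-endian: most significant byte first
    frame.push_back((length >> 8) & 0xFF);
    frame.push_back(length & 0xFF);
    frame.insert(frame.end(), data.begin(), data.end());
    return frame;
}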