Dataset's TBytes column and SQL VarBinary field combination

select convert(varbinary(8), 1) in MS SQL Server produces the output: 0x00000001
When the above query is assigned to a dataset in Delphi and the field value is accessed, we get the byte array [1, 0, 0, 0], so Bytes[0] contains 1.
When I use IntToHex() on this byte array, the result is "01000000".
Why is IntToHex considering it in reverse order?

I think you forgot to include a reference to the code where you're calling IntToHex on a TBytes array. It's from the answer to your previous question, how to convert a byte array to its hex representation in Delphi.
In my answer, I forgot to account for the fact that a pointer to an array of bytes holds the bytes in big-endian order, while IntToHex (and everything else on x86) expects them in little-endian order. The solution is to swap them around. I used this function:
function Swap32(value: Integer): Integer;
asm
  bswap eax  // reverse the byte order of the 32-bit value in EAX
end;
In the meantime, I fixed my answer to account for that.

This seems to be a little-endian/big-endian problem. Just reverse the byte array before converting, or reorder the hex-digit pairs in the string returned by IntToHex. Another way would be to assemble the value yourself:
myInt := Bytes[0];
Inc(myInt, Bytes[1] shl 8);
Inc(myInt, Bytes[2] shl 16);
Inc(myInt, Bytes[3] shl 24);
Also be careful with the sign. Is the SQL value signed or unsigned? The Delphi data type should match it (Integer/Longint is signed, Longword/Cardinal is unsigned; see the Delphi Help).
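To see why the sign matters, here is a minimal C sketch (C is used for illustration only; Delphi's Integer vs. Cardinal behave the same way). The same four bytes produce different values depending on the declared signedness:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint8_t bytes[4] = { 0xFF, 0xFF, 0xFF, 0xFF };
    int32_t  asSigned;   /* like Delphi's Integer  */
    uint32_t asUnsigned; /* like Delphi's Cardinal */
    memcpy(&asSigned, bytes, sizeof asSigned);
    memcpy(&asUnsigned, bytes, sizeof asUnsigned);
    printf("%d %u\n", asSigned, asUnsigned);  /* -1 4294967295 */
    return 0;
}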

Because the x86 CPU is little-endian: it stores the bytes of a multi-byte number least-significant byte first. You'll need to swap the byte order to get the right value.
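A minimal C sketch of that fix (C for illustration; the Delphi answers above do the equivalent): dumping the bytes in memory order gives the reversed-looking string, while assembling them least-significant-first recovers the value.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The byte array as the dataset delivers it: [1, 0, 0, 0]. */
    uint8_t bytes[4] = { 1, 0, 0, 0 };

    /* A hex dump in memory order looks reversed: 01000000. */
    for (int i = 0; i < 4; i++)
        printf("%02X", bytes[i]);
    printf("\n");

    /* Assembling least-significant byte first recovers the value. */
    uint32_t value = (uint32_t)bytes[0]
                   | (uint32_t)bytes[1] << 8
                   | (uint32_t)bytes[2] << 16
                   | (uint32_t)bytes[3] << 24;
    printf("%08X\n", value);  /* 00000001 */
    return 0;
}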


How to read 8-byte integers in GMS 2.x?

I need to read 8-byte integers from a stream. I could not find any documentation on how to read 8-byte integers in DM. It would be something similar to a long long integer.
Is there a trick for streaming 8-byte integers from a file in GMS 2.x?
We can use the "Stream" object to read/import data of various kinds. Please refer to DM Help > Scripting > File Input and Output.
Other examples can also be found in the DM-Script-Database:
Read-Ser (http://donation.tugraz.at/dm/source_codes/127)
JEMS_.ems file reader (http://donation.tugraz.at/dm/source_codes/108)
Hope this helps.
I used the following (stupid) method to do so:
number readint32(object s) {
    number stream_byte_order = 2
    number result = 0
    TagGroup tg = NewTagGroup();
    tg.TagGroupSetTagAsLong("SInt32_0", 0)
    TagGroupReadTagDataFromStream(tg, "SInt32_0", s, stream_byte_order);
    tg.TagGroupGetTagAsLong("SInt32_0", result)
    return result
}
number readint64(object s) {
    // new for reading 8-byte integers in TIA ver > 3.7
    // DM automatically converts the result to floating point, so precision
    // is lost once the high 4 bytes are nonzero
    number result = readint32(s) + (readint32(s) * 4294967296)
    // 4294967296 is 2^32 (0x100000000), the weight of the high 4 bytes
    return result
}
It works for reading .ser files smaller than 2 GB, but not for larger files. I still have not figured that out...
Edit 09-04-2016:
Now I have a solution to the data-offset problem in .ser files:
Void b_readint64(object s, number &lo, number &hi) {
    // new for reading 8-byte (64-bit) integers in TIA ver > 3.7
    // read the low and high 4-byte sections individually, then use them
    // together with the StreamSetPos32Signed / StreamSetPos64 functions below
    lo = b_readint32(s)
    hi = b_readint32(s)
}
Void StreamSetPos32Signed(object s, number base, number lo) {
    if (lo > 0) StreamSetPos(s, base, lo)
    else StreamSetPos(s, base, 4294967296 + lo)  // undo signed 32-bit wrap-around
}
Void StreamSetPos64(object s, number base, number lo, number hi) {
    if (hi != 0) {
        StreamSetPos(s, base, 0)
        // step forward 4 GB for each unit of the high half
        for (number i = 0; i < hi; i++) StreamSetPos(s, 1, 4294967296)
        StreamSetPos32Signed(s, 1, lo)
    } else StreamSetPos32Signed(s, base, lo)
}
BTW, I just uploaded this upgraded script to
http://portal.tugraz.at/portal/page/portal/felmi/DM-Script/DM-Script-Database
There is nothing like an 8-byte integer in DigitalMicrograph. You can use streaming to read two successive 4-byte sections as integers (see the answer above) and then display them as binary using binary() or hexadecimal using hex(), but you will have to do the maths for the "meaning" of the 8-byte integer yourself (it is stored as a real number). You can use the bitwise operators & | ^ when needed.
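For comparison, here is the same lo/hi combination in C, which does have a true 64-bit integer type (a sketch for illustration; the function name combine64 is mine, and DM's double-like number is only exact up to 2^53):

#include <stdio.h>
#include <stdint.h>

/* Combine two 4-byte halves (low half read first, as in a
 * little-endian stream) into one 64-bit value. */
static uint64_t combine64(uint32_t lo, uint32_t hi)
{
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    /* A 3 GB offset: too large for a signed 32-bit integer. */
    uint32_t lo = 0xC0000000u, hi = 0x00000000u;
    printf("%llu\n", (unsigned long long)combine64(lo, hi));  /* 3221225472 */
    return 0;
}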

Does the "C" code algorithm in RFC1071 work well on big-endian machine?

As described in RFC1071, when the byte count is odd, the checksum should be computed as if an extra zero byte were appended to the last byte.
But in the "C" code algorithm, only the last byte is added:
That code does work on a little-endian machine, where the 16-bit word [Z,0] equals Z, but I think there's a problem on a big-endian one, where [Z,0] equals Z*256.
So I wonder: does the example "C" code in RFC1071 only work on little-endian machines?
------------- Added later -------------
There's one more example in RFC1071, "breaking the sum into two groups". We can take the data addr[] = {0x00, 0x01, 0xf2} as an example. Here, "standard" means the situation described in formula [2], while "C-code" means the C code algorithm.
In the "standard" situation, the final sum is 0x0001 + 0xf200 = 0xf201 regardless of endianness, since there's no endian issue with the abstract form [Z,0] after the "Swap". But it matters in the "C-code" situation, because 0xf2 is always added as the low byte, whether on big-endian or little-endian: a little-endian machine computes 0x0100 + 0x00f2 = 0x01f2, which swaps back to the correct 0xf201 when stored, while a big-endian machine computes 0x0001 + 0x00f2 = 0x00f3, which does not.
Thus, the checksum varies with endianness for the same data (addr and count).
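To make the difference concrete, here is a small self-contained C sketch (the names rfc1071_sum and big_endian are mine) that simulates the RFC's loop under each byte order:

#include <stdio.h>
#include <stdint.h>

/* Simulate the RFC 1071 loop with an explicit choice of byte order
 * for the 16-bit loads, so both behaviours can be seen on any host. */
static uint32_t rfc1071_sum(const uint8_t *addr, int count, int big_endian)
{
    uint32_t sum = 0;
    while (count > 1) {
        sum += big_endian ? (uint16_t)((addr[0] << 8) | addr[1])
                          : (uint16_t)((addr[1] << 8) | addr[0]);
        addr += 2;
        count -= 2;
    }
    if (count > 0)        /* odd byte added as low-order, as in the RFC */
        sum += *addr;
    while (sum >> 16)     /* fold the carries back into 16 bits */
        sum = (sum & 0xffff) + (sum >> 16);
    return sum;
}

int main(void)
{
    const uint8_t data[] = { 0x00, 0x01, 0xf2 };
    /* Little-endian host: prints 01f2, which becomes the correct f201
     * once the sum is stored to memory and read in network order. */
    printf("little-endian sum: %04x\n", rfc1071_sum(data, 3, 0));
    /* Big-endian host: prints 00f3, which is not a byte swap of 01f2. */
    printf("big-endian sum:    %04x\n", rfc1071_sum(data, 3, 1));
    return 0;
}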
I think you're right. The code in the RFC adds the last byte in as low-order, regardless of whether it runs on a little-endian or big-endian machine.
In code examples on the web, we can see that special care is taken with the last byte:
https://github.com/sjaeckel/wireshark/blob/master/epan/in_cksum.c
and in
http://www.opensource.apple.com/source/tcpdump/tcpdump-23/tcpdump/print-ip.c
it does this:
if (nleft == 1)
    sum += htons(*(u_char *)w << 8);
Which means that this text in the RFC is incorrect:
Therefore, the sum may be calculated in exactly the same way
regardless of the byte order ("big-endian" or "little-endian")
of the underlaying hardware. For example, assume a "little-
endian" machine summing data that is stored in memory in network
("big-endian") order. Fetching each 16-bit word will swap
bytes, resulting in the sum; however, storing the result
back into memory will swap the sum back into network byte order.
The following code in place of the original odd byte handling is portable (i.e. will work on both big- and little-endian machines), and doesn't depend on an external function:
if (count > 0)
{
    char buf2[2] = {*addr, 0};       /* pad the odd byte with a zero */
    sum += *(unsigned short *)buf2;
}
(Assumes addr is char * or const char *).

NSInteger to byte array in reverse order

I've been banging my head for the last couple of hours with what seemed to be a very easy task.
My app is communicating with a server over TCP/IP. The protocol requires that the first 4 bytes of each request be the length of the stream, in reverse order. For example, if the length is 13, I need to supply (decimal) {0,0,0,13}; if it's 300, I need to supply {0,0,1,44}. Then the actual data follows.
Apparently this is very straightforward to do in Java, and also in VB (e.g. BitConverter.GetBytes(sendString.Length).Reverse().ToArray()). But in Objective-C I just couldn't make it work; I've tried all sorts of conversions between NSString/NSData/NSArray, with no luck.
Thanks in advance!
The server is asking for the data in big-endian order (most significant byte first). Big-endian is the standard network byte order for Internet protocols (including IP, TCP, UDP, DNS, and lots more). It happens that you're compiling for a little-endian platform, so you need to swap the bytes.
However, you should not rely on being on a little-endian platform. Instead, you should make your code independent of the local (host) byte order, using the Core Foundation byte-swapping functions.
Specifically, you should use CFSwapInt32HostToBig to convert your 4-byte int to big-endian order. On a little-endian platform, this rearranges the bytes. On a big-endian platform, this does nothing.
Similarly, you should use CFSwapInt32BigToHost to convert the 4-byte ints you receive from the server to your host byte order.
Alternatively, you can use the standard POSIX byte-swapping functions. The htonl function stands for host-to-network-long, and converts a 32-bit int from host order to network (big-endian) order. The ntohl function converts a 32-bit int from network to host order. (Back when these functions were created, some popular operating systems had 16-bit ints and 32-bit longs. Can you believe it?)
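For example, here is a minimal C sketch of building the 4-byte big-endian length prefix with htonl (the header buffer and names are illustrative, not part of the protocol in the question):

#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t length = 300;               /* the length to send      */
    uint32_t beLength = htonl(length);   /* network-byte-order copy */

    uint8_t header[4];
    memcpy(header, &beLength, sizeof beLength);

    /* Prints 00 00 01 2C regardless of the host's byte order. */
    printf("%02X %02X %02X %02X\n",
           header[0], header[1], header[2], header[3]);
    return 0;
}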
NSInteger a = 300; // 13;
char *aa = (char *)&a;  // the raw bytes of a (little-endian on current Apple hardware)
Byte b[] = {0, 0, 0, 0};
// copy the low four bytes in reverse to get big-endian order
memcpy(&b[0], &aa[3], 1);
memcpy(&b[1], &aa[2], 1);
memcpy(&b[2], &aa[1], 1);
memcpy(&b[3], &aa[0], 1);
As indicated in the accepted answer for the duplicate question, Foundation provides functions for byte swapping. In this case, since you're dealing with a long, you probably want NSSwapLong.

How can I do a bitwise-AND operation in VB.NET?

I want to perform a bitwise-AND operation in VB.NET, taking a Short (16-bit) variable and ANDing it with '0000000011111111' (thereby retaining only the least-significant byte / 8 least-significant bits).
How can I do it?
0000000011111111 represented as a VB hex literal is &HFF (or &H00FF if you want to be explicit), and the ordinary And operator is a bitwise operator in VB.NET. So to mask off the top byte of a Short you'd write:
shortVal = shortVal And &HFF
For more creative ways of getting a binary constant into VB, see: VB.NET Assigning a binary constant
Use the And operator, and write the literal in hexadecimal (easy conversion from binary):
theShort = theShort And &h00ff
If what you are actually trying to do is to divide the short into bytes, there is a built in method for that:
Dim bytes As Byte() = BitConverter.GetBytes(theShort)
Now you have an array with two bytes.
result = YourVar And &B0000000011111111 ' binary literals require VB 2017 (VB 15) or later; use &HFF on older versions

Extract first two digits of hex (UInt32 *) and convert to int

I have a bunch of hex values stored as UInt32*
2009-08-25 17:09:25.597 Particle[1211:20b] 68000000
2009-08-25 17:09:25.598 Particle[1211:20b] A9000000
2009-08-25 17:09:25.598 Particle[1211:20b] 99000000
When I convert them to int as-is, they're insane values, when they should be from 0-255, I think. I think I just need to extract the first two hex digits. How do I do this? I tried dividing by 1000000, but I don't think that works in hex.
Since you're expecting < 255 for each value and only the highest byte is set in the sample data you posted, it looks like your endianness is mixed up - you loaded a big endian number then interpreted it as little endian, or vice versa, causing the order of bytes to be in the wrong order.
For example, suppose we had the number 104 stored in 32-bits on a big endian machine. In memory, the bytes would be: 00 00 00 68. If you loaded this into memory on a little endian machine, those bytes would be interpreted as 68000000.
Where did you get the numbers from? Do you need to convert them to machine byte order?
Objective-C is essentially C with extra stuff on top. Your usual bit-shift operations (my_int >> 24 or whatever) should work.
This absolutely sounds like an endianness issue. Whether or not it is, simple bit shifting should do the job:
uint32_t saneValue = insaneValue >> 24;
Dividing by 0x1000000 should work (that is, by 16^6 = 2^24, not 10^6). That's the same as shifting the bits right by 24 (I don't know ObjC syntax, sorry).
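A quick C check that the two forms agree for unsigned values:

#include <assert.h>
#include <stdint.h>

int main(void)
{
    uint32_t v = 0x68000000u;
    /* Dividing by 0x1000000 (= 2^24) equals shifting right by 24
     * bits for unsigned values; both yield 0x68 here. */
    assert(v / 0x1000000 == v >> 24);
    return 0;
}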
Try using the function NSSwapInt(), e.g.
int x = 0x12345678;
x = NSSwapInt(x);
NSLog(@"%x", x);
This should print "78563412".