How to convert GPS Longitude and latitude from hex - gps

I am trying to convert GPS data from a GPS tracking device. The company provided a protocol manual, but it's not clear. I was able to decode most of the data from the packets I received from the device. The communication is over TCP/IP. I am having a problem decoding the hex value of the longitude and latitude. Here is an example from the manual:
Example: 22º32.7658’=(22X60+32.7658)X3000=40582974, then converted into a hexadecimal number
40582974(Decimal)= 26B3F3E(Hexadecimal)
at last the value is 0x02 0x6B 0x3F 0x3E.
I would like to know how to reverse from the hexadecimal to longitude and latitude. The device will send 26B3F3E. I want to know the process of getting 22º32.7658.
This protocol applies to GT06 and Heacent 908.

1. Store the four bytes in unsigned 32-bit variables: v1 = 0x02, v2 = 0x6B, v3 = 0x3F, v4 = 0x3E.
2. Compute (v1 << 24) | (v2 << 16) | (v3 << 8) | v4. This yields 0x026B3F3E, which is 40582974 decimal.
3. Convert this to a float and divide it by 30,000.0 (the manual's 3,000 is an error). This gives you 1352.7658.
4. Divide by 60 and chop to an integer. This gives you the 22.
5. Multiply the number you got in step 4 by 60 and subtract it from the number you got in step 3. This gives you 1352.7658 - 22*60, or 32.7658.
There's your answer: 22º32.7658'.
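A minimal C sketch of that decode, assuming the four bytes arrive most-significant-byte first, as in the example above:

#include <stdio.h>
#include <stdint.h>

/* Decode four GT06-style coordinate bytes into degrees and minutes.
   Assumes big-endian byte order (0x02 0x6B 0x3F 0x3E -> 40582974). */
static void decode_coord(uint8_t b1, uint8_t b2, uint8_t b3, uint8_t b4,
                         unsigned *degrees, double *minutes)
{
    uint32_t raw = ((uint32_t)b1 << 24) | ((uint32_t)b2 << 16) |
                   ((uint32_t)b3 << 8)  |  (uint32_t)b4;
    double total_minutes = raw / 30000.0;       /* 40582974 -> 1352.7658 */
    *degrees = (unsigned)(total_minutes / 60);  /* 22 */
    *minutes = total_minutes - *degrees * 60.0; /* 32.7658 */
}

int main(void)
{
    unsigned deg;
    double min;
    decode_coord(0x02, 0x6B, 0x3F, 0x3E, &deg, &min);
    printf("%u deg %.4f min\n", deg, min); /* prints: 22 deg 32.7658 min */
    return 0;
}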

Related

What is the bias exponent?

My question concerns the IEEE 754 standard.
I'm going through the steps necessary to convert denary numbers into floating point numbers (IEEE 754 standard), but I don't understand the purpose of determining the biased exponent. I can't get my head around that step, what exactly it is, and why it's done.
Could anyone explain what this is? Please keep in mind that I have just started a computer science conversion masters, so I won't completely understand certain choices of terminology!
If you think it's too long to explain, please point me in the right direction!
The exponent in an IEEE 32-bit floating point number is 8 bits.
In most cases where we want to store a signed quantity in an 8-bit field, we use a signed representation: 0x80 is -128, 0xFF is -1, 0x00 is 0, and so on up to 0x7F, which is 127.
But that's not how the exponent is represented. The exponent field is treated as if it were an unsigned 8-bit number that is 127 too large. Look at the unsigned value in the exponent field and subtract 127 to get the actual value. So 0x00 represents -127 and 0x7F represents 0.
For 64-bit floats, the field is 11 bits, with a bias of 1023, but it works the same.
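A quick C sketch that shows the bias in action, assuming a 32-bit IEEE 754 float:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 6.0f;                  /* 6.0 = 1.5 * 2^2, so the real exponent is 2 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the bit pattern */

    uint32_t exp_field = (bits >> 23) & 0xFF;  /* 8-bit biased exponent field */
    int real_exp = (int)exp_field - 127;       /* remove the bias */

    printf("biased exponent field: %u\n", (unsigned)exp_field);  /* 129 */
    printf("actual exponent:       %d\n", real_exp);             /* 2 */
    return 0;
}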
A floating point number could be represented (but is not) as a sign bit s, an exponent field e, and a mantissa field m, where e is a signed integer and m is an unsigned fraction stored in an integer field. The value of that number would then be computed as (-1)^s • 2^e • m. But this would not allow representing important special cases.
Note that one could increase the exponent by n and shift the mantissa right by n (or decrease and shift left) without changing the value of the number. This allows nearly all numbers to have their exponent adjusted so that the mantissa starts with a 1 (one exception is of course 0, a special FP number). If one does so, one has normalized FP numbers, and since the mantissa now always starts with a 1, the leading 1 does not have to be stored in memory; the saved bit is used to increase the precision of the FP number. Thus, it is not the mantissa m that is stored but a mantissa field mf.
But how is 0 represented now? And what about FP numbers that already have the maximum or minimum exponent field and, due to their normalization, cannot be made larger or smaller? And what about "not a number" values (NaNs), which are, e.g., the result of 0/0?
Here comes the idea of a biased exponent: if half of the maximum exponent field value is added to the exponent, one gets the biased exponent be. To compute the value of the FP number, this bias of course has to be subtracted again. But all normalized FP numbers now have 0 < be < (all 1). Therefore the special biased exponents 0 and (all 1) can be reserved for special purposes.
be = 0, mf = 0: Exact 0.
be = 0, mf ≠ 0: A denormalized number, i.e. mf is the real mantissa that does not have a leading 1.
be = (all 1), mf = 0: Infinity
be = (all 1), mf ≠ 0: Not a number (NaN)
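A small C sketch that classifies a float along exactly those lines, assuming the 32-bit IEEE 754 layout (1 sign bit, 8 exponent bits, 23 mantissa bits):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Classify a float by its biased exponent (be) and mantissa field (mf). */
static const char *classify(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    uint32_t be = (bits >> 23) & 0xFF;  /* biased exponent, 8 bits */
    uint32_t mf = bits & 0x007FFFFF;    /* mantissa field, 23 bits */

    if (be == 0)    return mf == 0 ? "zero"     : "denormalized";
    if (be == 0xFF) return mf == 0 ? "infinity" : "NaN";
    return "normalized";
}

int main(void)
{
    float zero = 0.0f;
    printf("%s\n", classify(0.0f));        /* zero         */
    printf("%s\n", classify(1.5f));        /* normalized   */
    printf("%s\n", classify(1e-45f));      /* denormalized */
    printf("%s\n", classify(1.0f / zero)); /* infinity     */
    printf("%s\n", classify(zero / zero)); /* NaN          */
    return 0;
}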

How to write integer value "60" in 16bit binary, 32bit binary & 64bit binary

How to write integer value "60" in other binary formats?
8bit binary code of 60 = 111100
16bit binary code of 60 = ?
32bit binary code of 60 = ?
64bit binary code of 60 = ?
Is it 111100000000 for 16-bit binary?
Why does the 8-bit binary code contain only 6 bits instead of 8?
I googled but wasn't able to find the answers. Please provide answers, as I'm still a beginner in this area.
Imagine you're writing the decimal value 60. You can write it using 2 digits, 4 digits or 8 digits:
1. 60
2. 0060
3. 00000060
In our decimal notation, the most significant digits are to the left, so increasing the number of digits for representation, without changing the value, means just adding zeros to the left.
Now, in most binary representations, this would be the same. The decimal 60 needs only 6 bits to represent it, so an 8bit or 16bit representation would be the same, except for the left-padding of zeros:
1. 00111100
2. 00000000 00111100
Note: Some OSs, software, hardware or storage devices might have different endianness, which means they might store 16-bit values with the least significant byte first, then the most significant byte. Binary notation is still MSB-on-the-left, as above, but reading the memory of such little-endian devices will show that any 16-bit chunk is internally reversed:
1. 00111100 - 8bit - still the same.
2. 00111100 00000000 - 16bit, bytes are flipped.
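A small C sketch that prints the zero-padded binary forms and then looks at how the bytes of the 16-bit value actually sit in memory on the machine it runs on:

#include <stdio.h>
#include <stdint.h>

/* Print a value as a zero-padded binary string of the given width. */
static void print_binary(uint64_t value, int bits)
{
    for (int i = bits - 1; i >= 0; i--)
        putchar(((value >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void)
{
    uint16_t v = 60;

    print_binary(v, 8);   /* 00111100 */
    print_binary(v, 16);  /* 0000000000111100 */
    print_binary(v, 32);  /* same value, 26 leading zeros */

    /* Byte order in memory depends on the machine's endianness. */
    const uint8_t *p = (const uint8_t *)&v;
    printf("bytes in memory: %02X %02X\n", p[0], p[1]);
    /* little-endian machine: 3C 00, big-endian machine: 00 3C */
    return 0;
}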
Every number has exactly one binary representation;
on a 16/32/64-bit system, 111100 (= 60) would just look the same with more 0s added in front of the number (normally not shown).
So on 16 bit it would be 0000000000111100
32 bit - 00000000000000000000000000111100
and so on
For storage, endianness matters ... otherwise zeros are always prefixed up to the bit width, so 60 would be...
8bit: 00111100
16bit: 0000000000111100

anyone know 10-bit raw rgb? about omnivision

I'm using an OmniVision OV5620.
http://electronics123.net/amazon/datasheet/OV5620_CLCC_DS%20(1.3).pdf
This is the datasheet.
There you can see the output format: 10-bit digital RGB raw data.
First, I know that RGB raw data is a Bayer array.
So does 10-bit RGB mean each channel has a 1024-step scale, with a range of 0~1023?
Or is it 8-bit RGB per channel, with the leftover LSBs forming a fifth pixel's data?
Please refer to the image.
Which is correct?
They pack every four adjacent 10-bit pixels (0..1023) of the line into 5 sequential bytes, where each of the first 4 bytes contains the MSB part of the pixel, and the 5th byte contains LSBs of all four pixels packed together into one byte.
This is a convenient format because if you want to convert it to RGB8 you can simply ignore that fifth byte.
Also, each displayed line begins with a packet header (PH) byte and terminates with a packet footer (PF) byte, and the whole frame begins with a frame start (FS) byte and terminates with a frame end (FE) byte.
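A hedged C sketch of that unpacking; the assumption here is that the fifth byte carries the two LSBs of each of the four preceding pixels, with pixel 0's LSBs in the lowest two bits (check the exact bit assignment against the datasheet):

#include <stdint.h>

/* Unpack one group of 5 bytes into four 10-bit raw pixel values (0..1023).
   Assumed layout: bytes 0..3 hold the upper 8 bits of pixels 0..3, and
   byte 4 holds the 2 LSBs of each pixel, pixel 0 in bits [1:0], pixel 1
   in bits [3:2], and so on. */
static void unpack_raw10(const uint8_t group[5], uint16_t pixels[4])
{
    for (int i = 0; i < 4; i++) {
        uint16_t msb = group[i];                    /* upper 8 bits */
        uint16_t lsb = (group[4] >> (2 * i)) & 0x3; /* lower 2 bits */
        pixels[i] = (uint16_t)((msb << 2) | lsb);   /* full 10 bits */
    }
}

Converting to 8 bits per channel is then just dropping the LSBs, i.e. keeping the first four bytes of each group and ignoring the fifth, as described above.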

Bit-shifting audio samples from Float32 to SInt16 results in severe clipping

I'm new to iOS and its C underpinnings, but not to programming in general. My dilemma is this: I'm implementing an echo effect in a complex AudioUnits based application. The application needs reverb, echo, and compression, among other things. However, the echo only works right when I use a particular AudioStreamBasicDescription format for the audio samples generated in my app. This format, however, doesn't work with the other AudioUnits.
While there are other ways to solve this problem, fixing the bit-twiddling in the echo algorithm might be the most straightforward approach.
The AudioStreamBasicDescription that works with echo has an mFormatFlags of kAudioFormatFlagsAudioUnitCanonical. Its specifics are:
AudioUnit Stream Format (ECHO works, NO AUDIO UNITS)
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116 = kAudioFormatFlagsAudioUnitCanonical
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
Set ASBD on input
Set ASBD on output
au SampleRate rate: 0.000000, 2 channels, 12 formatflags, 1819304813 mFormatID, 16 bits per channel
The stream format that works with AudioUnits is the same except for the mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved -- Its specifics are:
AudioUnit Stream Format (NO ECHO, AUDIO UNITS WORK)
Sample Rate: 44100
Format ID: lpcm
Format Flags: 41
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 2
Bits per Channel: 32
Set ASBD on input
Set ASBD on output
au SampleRate rate: 44100.000000, 2 channels, 41 formatflags, 1819304813 mFormatID, 32 bits per channel
In order to create the echo effect I use two functions that bit-shift sample data into SInt16 space and back. As I said, this works for the kAudioFormatFlagsAudioUnitCanonical format, but not the other. When it fails, the sounds are clipped and distorted, but they are there. I think this indicates that the difference between these two formats is how the data is arranged in the Float32.
// convert sample vector from fixed point 8.24 to SInt16
void fixedPointToSInt16( SInt32 * source, SInt16 * target, int length ) {
    int i;
    for(i = 0;i < length; i++ ) {
        target[i] = (SInt16) (source[i] >> 9);
        //target[i] *= 0.003;
    }
}
As you can see, I tried modifying the amplitude of the samples to get rid of the clipping -- clearly that didn't work.
// convert sample vector from SInt16 to fixed point 8.24
void SInt16ToFixedPoint( SInt16 * source, SInt32 * target, int length ) {
    int i;
    for(i = 0;i < length; i++ ) {
        target[i] = (SInt32) (source[i] << 9);
        if(source[i] < 0) {
            target[i] |= 0xFF000000;
        }
        else {
            target[i] &= 0x00FFFFFF;
        }
    }
}
If I can determine the difference between kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved and kAudioFormatFlagsAudioUnitCanonical, then I can modify the above methods accordingly. But I'm not sure how to figure that out. The CoreAudio documentation is enigmatic, but from what I've read there, and gleaned from the CoreAudioTypes.h file, both mFormatFlags values refer to the same fixed point 8.24 format. Clearly something is different, but I can't figure out what.
Thanks for reading through this long question, and thanks in advance for any insight you can provide.
kAudioFormatFlagIsFloat means that the buffer contains floating point values. If mBitsPerChannel is 32 then you are dealing with float data (also called Float32), and if it is 64 you are dealing with double data.
kAudioFormatFlagsNativeEndian refers to the fact that the data in the buffer matches the endianness of the processor, so you don't have to worry about byte swapping.
kAudioFormatFlagIsPacked means that every bit in the data is significant. For example, if you store 24 bit audio data in 32 bits, this flag will not be set.
kAudioFormatFlagIsNonInterleaved means that each individual buffer consists of one channel of data. It is common for audio data to be interleaved, with the samples alternating between L and R channels: LRLRLRLR. For DSP applications it is often easier to deinterleave the data and work on one channel at a time.
I think in your case the error is that you are treating floating point data as fixed point. Float data is generally scaled to the interval [-1, +1). To convert float to SInt16 you need to multiply each sample by the maximum 16-bit value (1u << 15, 32768) and then clip to the interval [-32768, 32767].
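A minimal sketch of that float-to-SInt16 conversion, styled after the routines above. The function names are my own, and the typedefs stand in for the CoreAudio/MacTypes definitions so the snippet is self-contained:

#include <stdint.h>

typedef float   Float32;
typedef int16_t SInt16;

// convert sample vector from Float32 (range [-1, +1)) to SInt16, with clipping
void floatToSInt16( Float32 * source, SInt16 * target, int length ) {
    int i;
    for(i = 0; i < length; i++ ) {
        float scaled = source[i] * 32768.0f;          // scale to 16-bit range
        if( scaled >  32767.0f ) scaled =  32767.0f;  // clip positive overflow
        if( scaled < -32768.0f ) scaled = -32768.0f;  // clip negative overflow
        target[i] = (SInt16) scaled;
    }
}

// convert sample vector from SInt16 back to Float32
void SInt16ToFloat( SInt16 * source, Float32 * target, int length ) {
    int i;
    for(i = 0; i < length; i++ ) {
        target[i] = source[i] / 32768.0f;             // back to [-1, +1)
    }
}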

How can I encode four unsigned bytes (0-255) to a float and back again using HLSL?

I am facing a task where one of my HLSL shaders requires multiple texture lookups per pixel. My 2D textures are fixed to 256*256, so two bytes should be sufficient to address any given texel given this constraint. My idea is then to put two xy-coordinates in each float, giving me eight xy-coordinates in pixel space when packed in a Vector4 format image. These eight coordinates are then used to sample one or more other textures.
The reason for doing this is to save graphics memory and an attempt to optimize processing time, since then I don't require multiple texture lookups.
By the way: Does anyone know if encoding/decoding 16 bytes from/to 4 floats using 1 sampling is slower than 4 samplings with unencoded data?
Edit: This is for Shader Model 3
If you are targeting SM 4.0-SM 5.0 you can use Binary Casts and Bitwise operations:
uint4 myUInt4 = asuint(myFloat4);
uint x0 = myUInt4.x & 0x000000FF; //extract two xy-coordinates (x0,y0), (x1,y1)
uint y0 = (myUInt4.x & 0x0000FF00) >> 8;
uint x1 = (myUInt4.x & 0x00FF0000) >> 16;
uint y1 = (myUInt4.x & 0xFF000000) >> 24;
//repeat operation for .y .z and .w coordinates
For previous Shader Models, I think it could be more complicated since it depends on FP precision.
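To illustrate the precision point for the SM3 case: a 32-bit float has a 24-bit significand, so two 8-bit coordinates (16 bits) can be stored in one float exactly and recovered with floor/mod arithmetic, while four full bytes (32 bits) cannot. Here is a C sketch of that arithmetic; the same floor-and-subtract steps map onto SM3's floor() and frac():

#include <math.h>
#include <stdio.h>

/* Pack two 8-bit pixel coordinates (0..255) into one float; the result
   is exact because 16 bits fit within a float's 24-bit significand. */
static float pack_xy(unsigned x, unsigned y)
{
    return (float)(x * 256 + y);
}

/* Decode with the same floor/mod arithmetic an SM3 shader would use. */
static void unpack_xy(float packed, unsigned *x, unsigned *y)
{
    *x = (unsigned)floorf(packed / 256.0f);
    *y = (unsigned)(packed - (float)(*x) * 256.0f);
}

int main(void)
{
    unsigned x, y;
    unpack_xy(pack_xy(200, 13), &x, &y);
    printf("%u %u\n", x, y);  /* 200 13 */
    return 0;
}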