How to read 8-byte integers in GMS 2.x? - dm-script

I need to read 8-byte integers from a stream. I could not find any documentation on how to read 8-byte integers in DM; it would be something similar to a long long integer.
Is there a trick to stream 8-byte integers from a file in GMS 2.x?

We can use the "Stream" object to read/import data of various kinds. Please refer to DM Help > Scripting > File Input and Output.
Other examples can also be found in the DM-Script-Database:
Read-Ser (http://donation.tugraz.at/dm/source_codes/127)
JEMS_.ems file reader (http://donation.tugraz.at/dm/source_codes/108)
Hope this helps.

I used the following (stupid) method to do so:
number readint32(object s){
    // Read the next 4 bytes of the stream as a signed 32-bit integer
    // by streaming them into a temporary tag and reading the tag back.
    number stream_byte_order = 2
    number result = 0
    TagGroup tg = NewTagGroup()
    tg.TagGroupSetTagAsLong( "SInt32_0", 0 )
    TagGroupReadTagDataFromStream( tg, "SInt32_0", s, stream_byte_order )
    tg.TagGroupGetTagAsLong( "SInt32_0", result )
    return result
}
number readint64(object s){
    // New for reading 8-byte integers in TIA ver > 3.7.
    // DM's number type is floating point, so the result silently
    // loses integer precision once it grows beyond 2^53.
    number lo = readint32(s)        // low 4 bytes (stored first)
    number hi = readint32(s)        // high 4 bytes
    return lo + hi * 4294967296     // 4294967296 = 2^32 = 0x100000000
}
It works for reading .ser files < 2 GB, but not for larger files. I still have not figured that part out...
Edit (09-04-2016):
Now I have a solution to the data-offset problem in .ser files. Here it is:
Void b_readint64(object s, number &lo, number &hi){
    // New for reading 8-byte (64-bit) integers in TIA ver > 3.7.
    // Read the low and high 4-byte halves individually and use them
    // later with the StreamSetPos32Signed / StreamSetPos64 functions.
    lo = b_readint32(s)
    hi = b_readint32(s)
}
Void StreamSetPos32Signed(object s, number base, number lo){
    // 'lo' was read as a signed 32-bit value; a negative value really
    // means an unsigned offset >= 2 GB, so add back 2^32.
    if (lo >= 0) StreamSetPos(s, base, lo)
    else StreamSetPos(s, base, 4294967296 + lo)
}
Void StreamSetPos64(object s, number base, number lo, number hi){
    if (hi != 0){
        // Seek in 4 GB steps for the high half, then add the low half.
        StreamSetPos(s, base, 0)
        for (number i = 0; i < hi; i++) StreamSetPos(s, 1, 4294967296)
        StreamSetPos32Signed(s, 1, lo)
    } else StreamSetPos32Signed(s, base, lo)
}
BTW, I just uploaded this upgraded script to
http://portal.tugraz.at/portal/page/portal/felmi/DM-Script/DM-Script-Database

There is nothing like an 8-byte integer in DigitalMicrograph. You can use streaming to read two successive 4-byte sections as integers (see the answer above) and then display them as binary using binary() or as hexadecimal using hex(), but you will have to do the maths for the "meaning" of the 8-byte integer yourself (storing it as a real number). You can use the binary operators & | ^ for bitwise arithmetic where needed.
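For illustration, here is that two-halves arithmetic written out in C++ (not DM script; the sample values are made up). It also shows why a floating-point number, which is what DM's number type amounts to, represents such a value exactly only up to 2^53:

#include <cstdint>
#include <cstdio>

int main() {
    uint32_t lo = 0x89ABCDEF;   // low 4 bytes, as read from the stream first
    uint32_t hi = 0x01234567;   // high 4 bytes
    // Exact 64-bit combination, which DM cannot hold in a single number:
    uint64_t exact = ((uint64_t)hi << 32) | lo;
    // DM-style arithmetic on a double; exact only while the value < 2^53:
    double approx = (double)hi * 4294967296.0 + (double)lo;
    printf("exact:  %llu\n", (unsigned long long)exact);
    printf("approx: %.0f\n", approx);   // differs in the low digits here
    return 0;
}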

Related

Using plus equals operator with bytes

The code below gives me the following error.
Error: Type mismatch: inferred type is kotlin.Int but kotlin.Byte was expected
var temp: Byte = 0
var temp2: Byte = 1
temp += temp2
Is there any way around this in Kotlin, or am I not allowed to use the += or -= operators with Byte? Is plus-equals overloaded for Long and Int but not Byte and Short?
According to the Kotlin docs, a Byte's plus/minus operations with another Byte produce an Int. While that may seem weird, try adding a Byte of value 127 to another Byte of value 127 ;) the sum no longer fits in a Byte.
I think they made it this way on purpose. If you are certain that your result is still within Byte bounds, just convert back with toByte() at the end of the calculation, e.g. temp = (temp + temp2).toByte().

How to calculate crc32 in psi/si packet

We are working on sending UDP packets carrying PSI/SI; we are developing a PSI/SI generator.
But we are stuck on the CRC32 checksum: we are not able to compute it correctly. I tried a few pieces of code from the internet; they produce some checksum, but it doesn't match the checksum Wireshark shows.
We have a Wireshark dump of working PSI/SI packets with correct checksums.
Can anyone help me calculate the checksum for PSI/SI?
Regards,
vipul
While developing a DVB-S head station and manipulating SI data, I ran into the same problem. The solution is to read ISO/IEC 13818-1 exactly and use the right algorithm.
ISO/IEC 13818-1 specifies that the beginning of a PSI table section is indicated by a pointer field in the same transport stream packet payload. This means there is always a pointer field in front of the section data, and this pointer field must not be included in the checksum calculation. The first byte of the pointer field is the length of the field data. In most cases there is no field data, and you simply find a zero byte in front of your section data, which starts with the table id of the section. Do not take this zero into the checksum calculation.
MPEG's CRC-32 is a cyclic, unreflected redundancy check that starts with 0xFFFFFFFF and takes the highest bits first. The magic value is 0x04C11DB7, which is easily derived from the polynomial specified in ISO/IEC 13818-1 Annex B by assigning each bit to the corresponding polynomial exponent.
Putting all together you have this simple code to calculate the checksum:
uint calcCrc32(byte[] sectionData, int sectionDataLength)
{
    uint crc32 = 0xffffffff;
    // Skip the pointer field: its first byte holds the length of the
    // field data, so the section itself starts at index 1 + sectionData[0].
    for (int i = 1 + sectionData[0]; i < sectionDataLength; i++)
    {
        byte b = sectionData[i];
        for (int bit = 0; bit < 8; bit++)
        {
            // XOR the top bit of the CRC with the top bit of the data
            if ((crc32 >= 0x80000000) != (b >= 0x80))
                crc32 = (crc32 << 1) ^ 0x04C11DB7;
            else
                crc32 = (crc32 << 1);
            b <<= 1;
        }
    }
    return crc32;
}
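For completeness, here is the same algorithm as a self-contained C++ function (my naming, not from the standard); the byte-at-a-time form below is equivalent to the bit test above. A handy property of this CRC: running it over a complete section including its trailing CRC-32 bytes yields 0, which makes validation easy.

#include <cstdint>
#include <cstdio>

// MPEG CRC-32: unreflected, initial value 0xFFFFFFFF, polynomial 0x04C11DB7,
// no final XOR. Pass the section bytes without the pointer field.
uint32_t mpegCrc32(const uint8_t* data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)data[i] << 24;   // feed the next byte, MSB first
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u
                                      : (crc << 1);
    }
    return crc;
}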

File (.wav) duration while writing PCM data #16KBps

I am writing silent PCM data to a file at 16 KBps. The file is in .wav format. For this I have the following code:
#define DEFAULT_BITRATE 16000
long LibGsmManaged::addSilence()
{
    char silenceBuf[DEFAULT_BITRATE];
    if (fout) {
        for (int i = 0; i < DEFAULT_BITRATE; i++) {
            silenceBuf[i] = '\0';
        }
        fwrite(silenceBuf, sizeof(silenceBuf), 1, fout);
    }
    return ftell(fout);
}
Updated:
Here is how I write the header
void LibGsmManaged::write_wave_header()
{
    if (fout) {
        fwrite("RIFF", 4, 1, fout);
        total_length_pos = ftell(fout);
        write_int32(0);        // RIFF chunk size, placeholder
        fwrite("WAVE", 4, 1, fout);
        fwrite("fmt ", 4, 1, fout);
        write_int32(16);       // fmt chunk size
        write_int16(1);        // audio format: 1 = PCM
        write_int16(1);        // channels: mono
        write_int32(8000);     // sample rate
        write_int32(16000);    // byte rate (8000 samples * 2 bytes)
        write_int16(2);        // block align
        write_int16(16);       // bits per sample
        fwrite("data", 4, 1, fout);
        data_length_pos = ftell(fout);
        write_int32(0);        // data chunk size, placeholder
    }
    else {
        std::cout << "File pointer not correctly initialized";
    }
}
void LibGsmManaged::write_int32(int value)
{
    if (fout) {
        fwrite((const char*)&value, sizeof(value), 1, fout);
    }
    else {
        std::cout << "File pointer not correctly initialized";
    }
}
I run this code on my iOS device using an NSTimer with a 1.0 s interval. So AFAIK, if I run this for 60 s, I should get a file.wav that shows 60 s as its duration when played (again AFAIK). But in an actual test it displays almost double that duration, i.e. approx. 2 min. I have also verified that when I change DEFAULT_BITRATE to 8000, the file duration is almost correct.
I am unable to identify what is going on here. Am I missing something here? I hope my code is not wrong.
What you're trying to do (write your own WAV files) should be totally doable. That's the good news. However, I'm a bit confused about your exact parameters and constraints, as are many others in the comments, which is why they have been trying to flesh out the details.
You want to write raw, uncompressed, silent PCM to a WAV file. Okay. How wide does the PCM data need to be? You are creating an array of chars that you are writing to the file. A char is an 8-bit byte. Is that what you want? If so, then you need to use a silent center point of 0x80 (128). 8-bit PCM in WAV files is unsigned, i.e., 0..255, and 128 is silent.
If you intend to store silent 16-bit data, that will be signed data, so the center point (between -32768 and 32767) is 0. Also, it will be stored in little endian byte format. But since it's silence (all 0s), that doesn't matter.
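In code, silent buffers for the two widths look like this (a standalone C++ sketch, not the asker's class; buffer sizes are arbitrary):

#include <cstring>

void fillSilentBuffers() {
    // 8-bit WAV PCM is unsigned (0..255): silence is the midpoint 0x80.
    unsigned char silence8[8000];
    memset(silence8, 0x80, sizeof(silence8));

    // 16-bit WAV PCM is signed: silence is 0, so an all-zero byte buffer
    // is already correct, and byte order does not matter for zeros.
    short silence16[8000];
    memset(silence16, 0, sizeof(silence16));
}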
The title of your question indicates (and the first sentence reiterates) that you want to write data at 16 kbps. Are you sure you want raw 16 kbps audio? That's 16 kiloBITs per second, or 16000 bits per second. Depending on whether you are writing 8- or 16-bit PCM samples, that only allows for 2000 or 1000 Hz audio, which is probably not what you want. Did you mean 16 kHz audio? 16 kHz audio translates to 16000 audio samples per second, which more closely aligns with your code. Then again, your code mentions GSM (LibGsmManaged), so maybe you are looking for 16 kbps audio. But I'll assume we're proceeding along the raw PCM route.
Do you know in advance how many seconds of audio you need to write? That makes this process really easy. As you may have noticed, the WAV header needs length information in a few spots. You either write it in advance (if you know the values) or fill it in later (if you are writing an indeterminate amount).
Let's assume you are writing 2 seconds of raw, monophonic, 16000 Hz, 16-bit PCM to a WAV file. The center point is 0x0000.
WAV writing process:
1. Write 'RIFF'
2. Write the 32-bit file size, which will be 36 (header size - first 8 bytes) + 64000 (see step 12 about that number)
3. Write 'WAVEfmt ' (with space)
4. Write the 32-bit format header size (16)
5. Write the 16-bit audio format (1, indicating raw PCM audio)
6. Write the 16-bit channel count (1, because it's monophonic)
7. Write the 32-bit sample rate (number of audio samples per second = 16000)
8. Write the 32-bit byte rate (number of bytes per second = 32000)
9. Write the 16-bit block alignment (2 bytes per sample * 1 channel = 2)
10. Write the 16-bit bits per sample (16)
11. Write 'data'
12. Write the 32-bit length of the audio payload data (16000 samples/second * 2 bytes/sample * 2 seconds = 64000 bytes)
13. Write 64000 bytes, all 0 values
If you need to write a dynamic amount of audio data, leave the length field from steps 2 and 12 as 0, then seek back after you're done writing and fill those in. I'm not convinced that your original code was writing the length fields correctly. Some playback software might ignore those, others might not, so you could have gotten varying results.
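As a sanity check, here is a minimal standalone C++ program following the steps above (the file name and helper are mine, not the asker's LibGsmManaged code):

#include <cstdint>
#include <cstdio>

// Write one little-endian value of the given byte width.
static void writeLE(FILE* f, uint32_t v, int bytes) {
    for (int i = 0; i < bytes; i++) fputc((v >> (8 * i)) & 0xFF, f);
}

int main() {
    const uint32_t sampleRate = 16000, seconds = 2;
    const uint32_t blockAlign = 2;                       // 1 channel * 16 bits
    const uint32_t byteRate = sampleRate * blockAlign;   // 32000
    const uint32_t dataSize = byteRate * seconds;        // 64000

    FILE* f = fopen("silence.wav", "wb");
    if (!f) return 1;
    fwrite("RIFF", 4, 1, f);
    writeLE(f, 36 + dataSize, 4);   // step 2: file size minus first 8 bytes
    fwrite("WAVEfmt ", 8, 1, f);    // step 3
    writeLE(f, 16, 4);              // step 4: fmt chunk size
    writeLE(f, 1, 2);               // step 5: PCM
    writeLE(f, 1, 2);               // step 6: mono
    writeLE(f, sampleRate, 4);      // step 7
    writeLE(f, byteRate, 4);        // step 8
    writeLE(f, blockAlign, 2);      // step 9
    writeLE(f, 16, 2);              // step 10: bits per sample
    fwrite("data", 4, 1, f);        // step 11
    writeLE(f, dataSize, 4);        // step 12
    for (uint32_t i = 0; i < dataSize; i++) fputc(0, f); // step 13: silence
    fclose(f);
    return 0;
}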
Hope that helps! If you know Python, here's another question I answered which describes how to write a WAV file using Python's struct library (I referred to that code fragment a lot while writing the steps above).

Dealing with Int64 value with Booksleeve

I have a question about Marc Gravell's BookSleeve library.
I am trying to understand how BookSleeve handles Int64 values (I actually have billions of long values in Redis).
I used reflection to understand the Set long value overrides.
// BookSleeve.RedisMessage
protected static void WriteUnified(Stream stream, long value)
{
    if (value >= 0L && value <= 99L)
    {
        int i = (int)value;
        if (i <= 9)
        {
            // single digit: pre-built "$1\r\n" prefix, then the digit
            stream.Write(RedisMessage.oneByteIntegerPrefix, 0, RedisMessage.oneByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i));
        }
        else
        {
            // two digits: pre-built "$2\r\n" prefix, then both digits
            stream.Write(RedisMessage.twoByteIntegerPrefix, 0, RedisMessage.twoByteIntegerPrefix.Length);
            stream.WriteByte((byte)(48 + i / 10));
            stream.WriteByte((byte)(48 + i % 10));
        }
    }
    else
    {
        // general case: ASCII-encode the decimal representation
        byte[] bytes = Encoding.ASCII.GetBytes(value.ToString());
        stream.WriteByte(36);   // '$'
        RedisMessage.WriteRaw(stream, (long)bytes.Length);
        stream.Write(bytes, 0, bytes.Length);
    }
    stream.Write(RedisMessage.Crlf, 0, 2);
}
I don't understand why, with more than two digits, the Int64 is encoded as ASCII.
Why not use byte[]? I know I can use the byte[] overrides to do this, but I just want to understand this implementation in order to optimize mine. There may be a relationship with the Redis storage.
Thank you in advance, Marc :)
P.S.: I'm still very enthusiastic about your next major version, in which I can use long keys instead of string keys.
It writes it in ASCII because that is what the redis protocol demands.
If you look carefully, it is always encoded as ASCII - but for the most common cases (0-9, 10-99) I've special-cased it, as these are very simple results:
x => $1\r\nX\r\n
xy => $2\r\nXY\r\n
where x and y are the digits of a number in the range 0-99, and X and Y are those digits (as numbers) offset by 48 ('0') - so decimal 17 becomes the byte sequence (in hex):
24-32-0D-0A-31-37-0D-0A
Of course, that could also be achieved simply by writing each digit sequentially, offsetting each digit value by 48 ('0'), and handling the negative sign - I guess the answer there is simply "because I coded it the simple but obviously correct way". Consider the value -123, which is encoded as $4\r\n-123\r\n (hey, don't look at me - I didn't design the protocol). It is slightly awkward because you need to calculate the buffer length first, then write that buffer length, then write the value - remembering to write in the order 100s, 10s, 1s (which is much harder than writing the other way around).
Perfectly willing to revisit it - simply: it works.
Of course, it becomes trivial if you have a scratch buffer available - you just write it in the simple order, then reverse the portion of the scratch buffer. I'll check to see if one is available (and if not, it wouldn't be unreasonable to add one).
I should also clarify: there is also the integer type, which would encode -123 as :-123\r\n - however, from memory there are a lot of places this simply does not work.
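To make the wire format concrete, here is a small C++ sketch (my code, not BookSleeve's) that produces the bulk-string encoding described above:

#include <cstdint>
#include <cstdio>
#include <string>

// Encode an integer as a Redis bulk string: $<len>\r\n<digits>\r\n
std::string encodeBulkInteger(int64_t value) {
    std::string digits = std::to_string(value);   // also handles the '-' sign
    return "$" + std::to_string(digits.size()) + "\r\n" + digits + "\r\n";
}

int main() {
    printf("%s", encodeBulkInteger(17).c_str());    // $2\r\n17\r\n
    printf("%s", encodeBulkInteger(-123).c_str());  // $4\r\n-123\r\n
    return 0;
}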

Converting 16Bit PCM Values into -1 to 1 values

This is my first post, so I hope someone can help!
I am reading in audio data (in Core Audio) using the AudioFileReadPackets function. This code is working correctly and loads the 16-bit PCM values into a buffer.
The first sample value is 65491 (there is silence at the beginning of this audio). I understand that this is an unsigned integer, so my question is: how do I convert this value to the range -1 to 1?
Currently I am dividing the sample value by 32768.0 into a float variable, like so...
for (UInt32 i = 0; i < packetCount; i++) {
    sample = *(audioData + i);
    // turn it into the range -1.0 .. 1.0
    monoFloatDataLeft[i] = (float)sample / 32768.0;
}
However, for the sample given above (as an example), this results in an output of 1.998626709, which is not zero (as it should be for silence).
That said, when I look at a sample much later in the file, whose value I know to be around the 0.3 mark, the algorithm produces 0.311584473, which I believe is correct.
So why are the first samples not being read as zero, as I know them to be?
You need to reinterpret your unsigned data as signed (two's complement) first, so it is 0-centered: a raw value of 32768 or above actually represents a negative sample, so subtract 65536 from it (or simply cast to a signed 16-bit type) before dividing by 32768.0. Your first sample, 65491, then becomes -45, i.e. about -0.00137: the near-silence you expected.
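A minimal sketch of that conversion in C++ (the sample values are taken from the question):

#include <cstdint>
#include <cstdio>

int main() {
    uint16_t raw[] = { 65491, 0, 9830 };   // 65491 is the asker's first sample
    const int n = sizeof(raw) / sizeof(raw[0]);
    for (int i = 0; i < n; i++) {
        // Reinterpret the unsigned bit pattern as signed two's complement,
        // then scale into -1.0 .. 1.0.
        int16_t s = (int16_t)raw[i];
        float v = s / 32768.0f;
        printf("%u -> %f\n", (unsigned)raw[i], v);   // 65491 -> -0.001373
    }
    return 0;
}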