Reading SWF Header with Objective-C

I am trying to read the header of an SWF file using NSData.
According to the SWF format specification, I need to access the movie's width and height by reading bits, not bytes, and I couldn't find a way to do that in Obj-C. The spec describes it like this:
Bytes 9 through ?: here a RECT (the bounds of the movie) is stored. It must be read in binary form. First of all, we transform the first byte to binary: "01100000"
The first 5 bits will tell us the size in bits of each stored value: "01100" = 12
So, we have 4 fields of 12 bits = 48 bits
48 bits + 5 bits (header of RECT) = 53 bits
Fill to complete bytes with zeroes until we reach a multiple of 8: 53 bits + 3 alignment bits = 56 bits (this RECT is 7 bytes long, 7 * 8 = 56)
I use this formula to determine all this stuff:
Where do I start?

Objective-C is a superset of C: you can use C code alongside Objective-C with no issues.
Thus, you could use a C-based library like libming to read bytes from your SWF file.
If you need to shuffle bytes into an NSData object, look into the +dataWithBytes:length: class method.

Start by looking for code with a compatible license that already does what you want. C libraries can be used from Obj-C code simply by linking them in (or arranging for them to be dynamically linked in) and then calling their functions.
Failing that, start by looking at the Binary Data Programming Guide for Cocoa and the NSData Class Reference. You'd want to pull out the bytes that contain the bits you're interested in, then use bit-masking techniques to extract the bits you care about. You might find the BitTst(), BitSet(), and BitClr() functions and their friends useful, if they're still there in Snow Leopard; I'm not sure whether they ended up in the démodé parts of Carbon or not. There are also the POSIX setbit(), clrbit(), isset(), and isclr() macros defined in <sys/param.h>. Then, finally, there are the C bitwise operators: ^, |, &, ~, <<, and >>.
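If it helps, here is a minimal sketch of that bit extraction in plain C (the helper names are mine, not from any library). It assumes buf points at an uncompressed SWF ("FWS" signature; a "CWS" file is zlib-compressed from byte 8 onward and must be inflated first), and it skips the sign extension that the spec's signed RECT fields strictly call for, since a movie's xmin/ymin are normally 0:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { const uint8_t *bytes; size_t bitpos; } BitReader;

/* SWF packs values MSB-first, so pull one bit at a time. */
static uint32_t read_bits(BitReader *r, unsigned count) {
    uint32_t value = 0;
    while (count--) {
        size_t byte = r->bitpos >> 3;
        unsigned bit = 7 - (r->bitpos & 7);
        value = (value << 1) | ((r->bytes[byte] >> bit) & 1u);
        r->bitpos++;
    }
    return value;
}

void print_swf_stage_size(const uint8_t *buf) {
    BitReader r = { buf + 8, 0 };        /* the RECT starts at byte 8 */
    uint32_t nbits = read_bits(&r, 5);   /* e.g. "01100" = 12         */
    uint32_t xmin  = read_bits(&r, nbits);
    uint32_t xmax  = read_bits(&r, nbits);
    uint32_t ymin  = read_bits(&r, nbits);
    uint32_t ymax  = read_bits(&r, nbits);
    /* RECT coordinates are in twips: 20 twips = 1 pixel. */
    printf("width: %u px, height: %u px\n",
           (xmax - xmin) / 20, (ymax - ymin) / 20);
}

From Objective-C you could load the file into an NSData and hand [data bytes] to a helper like this.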


Compact-u16 - what is the purpose of this?

I was doing some research over the weekend on some blockchain dev on the Solana blockchain and came across a construct called Compact-u16. The definition of this in the documentation says the following: "A compact-u16 is a multi-byte encoding of 16 bits. The first byte contains the lower 7 bits of the value in its lower 7 bits. If the value is above 0x7f, the high bit is set and the next 7 bits of the value are placed into the lower 7 bits of a second byte. If the value is above 0x3fff, the high bit is set and the remaining 2 bits of the value are placed into the lower 2 bits of a third byte.".
I have been coding for 30+ years. Maybe I'm just old school on this, but why is there a construct to store 16 bits of data in 3 bytes? This is just vastly inefficient from my standpoint. Is there a reason for this? On further research, I found a doc related to assembly instruction pointers, which referenced 7 instruction pointers that are useful for caching values when context switching in and out of the processor stack. But this construct is used for a web app platform. Literally, there is no reason I have been able to find that justifies using 3 bytes to store 16 bits of data. If the developers wanted an elegant bit-mapping solution to compress space, why not just use a semaphore? Why create a brand-new construct that increases the storage requirements for the data by 33%?
What am I missing?
I had some similar confusion when reading the compact-u16 description. Based on the code for parsing them in the solana Python module, I believe they're doing something conceptually similar to UTF-8 and storing the number in 1-3 bytes depending on its size.
Basically instead of each byte having 8 bits of a number, it has 7 bits of the number and a flag (the most significant bit) that indicates whether the number continues in the next byte. For the largest numbers they need an extra byte, but for numbers less than 128 they need only one byte. Since Solana seems to use these for storing the length of arrays, if it's common that the length of the arrays is less than 128 then they will end up with fewer total bytes to transfer across all transactions.
Some examples I worked out for myself:
hex | compact-u16
--------+------------
0x0000 | [0x00]
0x0001 | [0x01]
0x007f | [0x7f]
0x0080 | [0x80 0x01]
0x3fff | [0xff 0x7f]
0x4000 | [0x80 0x80 0x01]
0xc000 | [0x80 0x80 0x03]
0xffff | [0xff 0xff 0x03]
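To make that concrete, here is a sketch of the encode and decode in C. The function names are mine, and this is not Solana's actual implementation, just the same 7-bits-plus-continuation-flag idea; it reproduces the table above:

#include <stddef.h>
#include <stdint.h>

/* Returns the number of bytes written (1-3). */
size_t encode_compact_u16(uint16_t value, uint8_t out[3]) {
    size_t n = 0;
    for (;;) {
        uint8_t byte = value & 0x7f;    /* low 7 bits of what's left */
        value >>= 7;
        if (value) {
            out[n++] = byte | 0x80;     /* continuation bit set */
        } else {
            out[n++] = byte;            /* final byte, high bit clear */
            return n;
        }
    }
}

/* Returns the number of bytes consumed; stores the result in *value. */
size_t decode_compact_u16(const uint8_t *in, uint16_t *value) {
    *value = 0;
    size_t n = 0;
    unsigned shift = 0;
    uint8_t byte;
    do {
        byte = in[n++];
        *value |= (uint16_t)(byte & 0x7f) << shift;
        shift += 7;
    } while ((byte & 0x80) && n < 3);
    return n;
}

For example, encode_compact_u16(0x0080, out) writes [0x80 0x01] and returns 2, matching the fifth row of the table.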

How to drive a DotStar strip from C on a Raspberry Pi

I am trying to figure out how to drive a DotStar strip by calling write(handle, datap, len) on an SPI handle, from C, on a Raspberry Pi. I'm not quite clear on how to lay out the data.
Looking at https://cdn-shop.adafruit.com/datasheets/APA102.pdf#page=3 makes me think you start with 4 bytes of 0, then a string of coded LED values (4 bytes per LED), and then 4 bytes of 1s. But that cannot be right: the final 4 bytes of 1s would be indistinguishable from a request to set an LED to full-brightness white. So how could that terminate the data?
Insight welcome. Yes, I know there's a Python library out there for this, but I'm coding in C or C++.
After much digging, I found the answer here:
https://cpldcpu.wordpress.com/2014/11/30/understanding-the-apa102-superled/
The end frame is more complex than the spec suggests (the spec is only correct if your string has 32 LEDs), and you must always specify values for all LEDs in your string.
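For what it's worth, here is a sketch in C of how the buffer can be laid out following that article. The function name is mine, and it assumes spi_fd is an already-opened and configured spidev file descriptor. The key change from the datasheet is the end frame: at least num_leds/2 bits of 1s, not a fixed 4 bytes:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int write_dotstar(int spi_fd, const uint8_t (*rgb)[3], size_t num_leds,
                  uint8_t brightness /* 0-31 */) {
    size_t end_len = (num_leds + 15) / 16;   /* >= num_leds/2 bits, in bytes */
    size_t len = 4 + 4 * num_leds + end_len;
    uint8_t *buf = malloc(len);
    if (!buf) return -1;

    memset(buf, 0x00, 4);                    /* start frame: 32 zero bits */
    for (size_t i = 0; i < num_leds; i++) {
        uint8_t *f = buf + 4 + 4 * i;
        f[0] = 0xE0 | (brightness & 0x1F);   /* 111 + 5-bit global brightness */
        f[1] = rgb[i][2];                    /* blue  */
        f[2] = rgb[i][1];                    /* green */
        f[3] = rgb[i][0];                    /* red   */
    }
    /* End frame: extra clock pulses so the data for the last LEDs is
       fully shifted out (1-bits, per the article above). */
    memset(buf + 4 + 4 * num_leds, 0xFF, end_len);

    ssize_t rc = write(spi_fd, buf, len);
    free(buf);
    return rc == (ssize_t)len ? 0 : -1;
}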

Deploying a TinyML model that I created via Google Colab

When I compile the code (on Arduino) I get the following error:
8 bytes lost due to alignment. To avoid this loss, please make sure the tensor_arena is 16 bytes aligned.
constexpr int tensorArenaSize = 8 * 1024;
byte tensorArena[tensorArenaSize];
Can someone help me fix this problem?
For reasons unbeknownst to me, the compiler wants to make sure your large byte array is 16-byte aligned. Because of variables already declared above the two lines you included, it needs to "move forward" the large array by 8 bytes, to make it start at an address that is on a 16-byte boundary. To fix the error (to me this should just be a warning), either add a dummy 8-byte variable before your large array, or move 8 bytes' worth of variables from before the large array to after it. In the first case you just lose 8 bytes of variable space.
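If your toolchain supports C11/C++11 alignment specifiers (current Arduino toolchains do), an alternative to juggling dummy variables is to request the alignment on the array itself; in your sketch that would be alignas(16) byte tensorArena[tensorArenaSize];. A standalone C illustration of the same idea:

#include <stdalign.h>   /* alignas in C11 */
#include <stdint.h>
#include <stdio.h>

enum { tensorArenaSize = 8 * 1024 };

/* Pin the arena to a 16-byte boundary so no bytes are lost
   to realignment at runtime. */
alignas(16) static uint8_t tensorArena[tensorArenaSize];

int main(void) {
    /* The address modulo 16 prints 0. */
    printf("arena address mod 16 = %u\n",
           (unsigned)((uintptr_t)tensorArena % 16));
    return 0;
}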

Creating a WAV file with an arbitrary bits per sample value?

Do WAV files allow any arbitrary number of bitsPerSample?
I have failed to get it to work with anything less than 8. For one thing, I am not sure how to define the blockAlign.
Dim ss As New Speech.Synthesis.SpeechSynthesizer
Dim info As New Speech.AudioFormat.SpeechAudioFormatInfo(AudioFormat.EncodingFormat.Pcm, 5000, 4, 1, 2500, 1, Nothing) ' FAILS
ss.SetOutputToWaveFile("TEST4bit.wav", info)
ss.Speak("I am 4 bit.")
My.Computer.Audio.Play("TEST4bit.wav")
AFAIK no, a 4-bit PCM format is undefined; it wouldn't make much sense to have only 16 volume levels of audio, and the quality would be horrible.
While technically possible, I know of no decent software (e.g. WaveLab) that supports it; your very own player could, though.
Formula: blockAlign = channels * (bitsPerSample / 8)
So for mono 4-bit it would be: blockAlign = 1 * ((double)4 / 8) = 0.5
Note that the cast to double is necessary so the integer division does not end up as 0.
But if you look at the blockAlign definition below, it really does not make much sense to have an alignment of 0.5 bytes: one would have to work at the bit level, which is painful and useless, because at this quality uncompressed PCM would just sound horrible anyway:
wBlockAlign: The block alignment (in bytes) of the waveform data. Playback software needs to process a multiple of wBlockAlign bytes of data at a time, so the value of wBlockAlign can be used for buffer alignment.
Reference:
http://www-mmsp.ece.mcgill.ca/Documents/AudioFormats/WAVE/Docs/riffmci.pdf page 59
Workaround:
If you really need 4 bits per sample, switch to an ADPCM format (e.g. IMA ADPCM, which stores 4 bits per sample).
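A quick C check of the arithmetic above, showing why the cast matters and why the result could not be expressed in the header anyway:

#include <stdio.h>

int main(void) {
    int channels = 1, bitsPerSample = 4;
    /* Integer math truncates 4/8 to 0, which is why the cast is needed: */
    int    asInt    = channels * (bitsPerSample / 8);           /* 0   */
    double asDouble = channels * ((double)bitsPerSample / 8);   /* 0.5 */
    printf("int: %d, double: %.1f\n", asInt, asDouble);
    /* wBlockAlign is an integer field in the fmt chunk, so a half-byte
       alignment simply cannot be represented; 8-bit mono PCM gives
       blockAlign = 1, the smallest workable value. */
    return 0;
}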

Structure Packing

I'm currently learning C# and my first project (as a learning experiment) is to create a DBF reader. I'm having some difficulty understanding "packing" according to this: http://www.developerfusion.com/pix/articleimages/dec05/structs1.jpg
If I specified a packing of 2, wouldn't all structure elements begin on a 2-byte boundary, and if I specified a packing of 4, wouldn't all structure elements begin on a 4-byte boundary, and also consume a minimum of 4 bytes each?
For instance, a byte element would be placed on a 4 byte boundary, and the element following it (in a sequential layout) would be located on the next 4-byte boundary (losing 3 bytes to padding)?
In the image shown, in the "pack=4" it shows a byte that is on a 2 byte boundary, following a short.
If I understand the picture correctly, a pack size of n means that a variable cannot be stored straddling two packs of length n. In other words, the bytes that compose a variable cannot cross a pack boundary. This only holds when the size of the variable is less than or equal to the size of a pack.
Let's take pack = 4 as an example. Here we can safely store a byte and a short in one pack, because together they require 3 bytes of memory. But since only one byte is left in the pack, storing an int requires one byte of padding first, because what's left in the pack is too little to hold the whole int.
I hope the explanation makes sense.
Looking at the picture again, I think it would be clearer if all data were aligned to the same side of a pack, either the bottom or the top. That would make it easier to see what's going on.
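Since the question is about C#, the knob there is [StructLayout(LayoutKind.Sequential, Pack = 4)], but the same layout is easiest to inspect in C, where #pragma pack and offsetof make the padding visible. A sketch matching the picture's pack = 4 case (a short, then a byte, then an int):

#include <stddef.h>
#include <stdio.h>

#pragma pack(push, 4)           /* cap member alignment at 4 bytes */
struct Example {
    unsigned short s;           /* offset 0 */
    unsigned char  b;           /* offset 2: a 2-byte boundary, as in the image */
    int            i;           /* offset 4, after 1 byte of padding */
};
#pragma pack(pop)

int main(void) {
    printf("s=%zu b=%zu i=%zu size=%zu\n",
           offsetof(struct Example, s), offsetof(struct Example, b),
           offsetof(struct Example, i), sizeof(struct Example));
    /* Typically prints: s=0 b=2 i=4 size=8 */
    return 0;
}

Note that the byte does not consume 4 bytes: pack sets a maximum alignment for each member, it does not pad every member up to the pack size.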