Missing chunks when creating file with AudioQueue - objective-c

So a .wav file has a few standard chunks. In most of the files I work with, the "RIFF" chunk is first, then a "fmt " chunk, then the "DATA" chunk. When recording with AVAudioRecorder, those chunks are created (although an extra "FLLR" chunk is inserted before the "DATA" chunk).
When creating a file with AudioQueue, those standard chunks aren't created. Instead, AudioQueue creates, in order, "caff", "desc", "lpcm", "free", and "data" chunks.
What's going on? Aren't the "RIFF" and "fmt " chunks required? How does one force the inclusion of those chunks?
I'm creating a file by:
AudioFileCreateWithURL(URL, kAudioFileCAFType, &inputDataFormat, kAudioFileFlags_EraseFile, &AudioFile);
with inputDataFormat being an AudioStreamBasicDescription with a full complement of properties.
So how does one write, at least, the "RIFF" and "fmt " chunks with AudioQueue?
Thanks.

I'm creating a file by:
AudioFileCreateWithURL(URL, kAudioFileCAFType, &inputDataFormat, kAudioFileFlags_EraseFile, &AudioFile);
Let this be an example of the value of showing one's code in one's question. :-)
kAudioFileCAFType specifies a Core Audio Format (CAF) file, not a WAV file. Try kAudioFileWAVEType instead.

Related

OpenVMS: Extracting an RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years; it was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one here has experience with or knowledge of the record structures, etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt was to try Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT /FDL=nnnn.FDL and changing the organization from Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but I had lots of libraries to help....
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla....
Another plan is to write a C application to read the file and output it as strings... or DCL too... it doesn't have to be quick...
Any ideas?
As mentioned before, the simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to Windows.
You can also use an 'inline' FDL file to generate a 'unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY, that is, because this really only works well for a text-only source .DAT file.
There should be no CR, LF, FF or NUL characters in the source or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP then you will need to find a record definition somewhere which maps bytes to integers or floating points or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and they are often stored in the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC dictionaries.
To use such a definition you likely need a program that just reads, follows the 'map', and writes/prints as text; a rough sketch of that idea follows below. Normally that's not too hard, notably not when you can find an example program on the server to tweak.
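Purely for illustration, here is what such a mapping program can boil down to, written in Python rather than COBOL or C, and assuming the records have already been CONVERTed to a plain fixed-length sequential file and transferred in binary. The record layout below is completely made up; the real field sizes and types have to come from the actual record definition.

import struct

# Hypothetical layout: 4-byte little-endian integer key, 20-byte text field, 8-byte float.
# Replace this with the layout taken from the real COBOL/BASIC definition.
RECORD = struct.Struct("<i20sd")

with open("test.dat", "rb") as src, open("test.txt", "w") as out:
    while True:
        raw = src.read(RECORD.size)
        if len(raw) < RECORD.size:
            break                      # stop at end of file (or a short trailing record)
        key, text, value = RECORD.unpack(raw)
        out.write("%d,%s,%g\n" % (key, text.decode("ascii", "replace").rstrip(), value))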
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write and use F$EXTRACT to select the chunks you like.
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

How can I read many large .7z files containing many CSV files?

I have many .7z files, each containing many large CSV files (more than 1 GB each). How can I read these in Python (especially into pandas and Dask data frames)? Should I change the compression format to something else?
I believe you should be able to open the file using
import lzma
import pandas as pd

with lzma.open("myfile.7z", "r") as f:
    df = pd.read_csv(f, ...)
This is, strictly speaking, meant for the xz file format, but may work for 7z also. If not, you will need to use libarchive.
For use with Dask, you can do the above for each file with dask.delayed.
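A rough sketch of that delayed approach (assuming the archives really do decompress with lzma as above; the glob pattern and the load_one helper are just illustrative names):

import glob
import lzma

import dask.dataframe as dd
import pandas as pd
from dask import delayed

def load_one(path):
    # Decompress one file and parse it with pandas inside the delayed task.
    with lzma.open(path, "rt") as f:
        return pd.read_csv(f)

# One delayed partition per compressed file, stitched into a single Dask frame.
parts = [delayed(load_one)(p) for p in glob.glob("myfiles.*.7z")]
df = dd.from_delayed(parts)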
dd.read_csv directly also allows you to specify storage_options={'compression': 'xz'}; however, random access within a file is likely to be inefficient at best, so you should add blocksize=None to force one partition per file:
df = dd.read_csv('myfiles.*.7z', storage_options={'compression': 'xz'},
                 blocksize=None)

Compressing StringIO data to read with pandas?

I have been using pandas pd.read_sql_query to read a decent chunk of data into memory each day in order to process it (adding columns, calculations, etc. to about 1 GB of data). This has caused my computer to freeze a few times, though, so today I tried using psql to create a .csv file. I then compressed that file (.xz) and read it with pandas.
Overall, it was a lot smoother, and it made me think about automating the process. Is it possible to skip saving a .csv.xz file and instead copy the data directly into memory while still compressing it (ideally)?
from io import StringIO

buf = StringIO()
from_curs = from_conn.cursor()
from_curs.copy_expert("COPY table where row_date = '2016-10-17' TO STDOUT WITH CSV HEADER", buf)
# is it possible to compress this?
buf.seek(0)
# read the buf with pandas to process it
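One possible in-memory variant, sketched under a couple of assumptions (from_conn and the date filter come from the question above; the COPY statement is wrapped in a SELECT because COPY ... TO STDOUT does not accept a bare WHERE clause), is to compress the COPY output with lzma and only decompress it again when pandas parses it:

import lzma
from io import BytesIO, StringIO

import pandas as pd

buf = StringIO()
from_curs = from_conn.cursor()
from_curs.copy_expert(
    "COPY (SELECT * FROM table WHERE row_date = '2016-10-17') "
    "TO STDOUT WITH CSV HEADER",
    buf)

# Keep only the compressed bytes; drop the large uncompressed text buffer.
compressed = lzma.compress(buf.getvalue().encode("utf-8"))
del buf

# Decompress on demand when pandas needs to parse it.
df = pd.read_csv(BytesIO(lzma.decompress(compressed)))

Whether this actually helps depends on the peak usage: the CSV text is decompressed again at read time, so the saving is mainly in what sits around in memory between the COPY and the read_csv.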

How does numpy handle mmap's over npz files?

I have a case where I would like to open a compressed numpy file using mmap mode, but can't seem to find any documentation about how it will work under the covers. For example, will it decompress the archive in memory and then mmap it? Will it decompress on the fly?
The documentation is absent for that configuration.
The short answer, based on looking at the code, is that archiving and compression, whether using np.savez or gzip, is not compatible with accessing files in mmap_mode. It's not just a matter of how it is done, but whether it can be done at all.
Relevant bits in the np.load function:

elif isinstance(file, gzip.GzipFile):
    fid = seek_gzip_factory(file)
...
if magic.startswith(_ZIP_PREFIX):
    # zip-file (assume .npz)
    # Transfer file ownership to NpzFile
    tmp = own_fid
    own_fid = False
    return NpzFile(fid, own_fid=tmp)
...
if mmap_mode:
    return format.open_memmap(file, mode=mmap_mode)
Look at np.lib.npyio.NpzFile. An npz file is a ZIP archive of .npy files. It loads a dictionary-like object, and only loads the individual variables (arrays) when you access them (e.g. obj[key]). There's no provision in its code for opening those individual files in mmap_mode.
It's pretty obvious that a file created with np.savez cannot be accessed as a mmap. The ZIP archiving and compression is not the same as the gzip compression addressed earlier in np.load.
But what of a single array saved with np.save and then gzipped? Note that format.open_memmap is called with file, not fid (which might be a gzip file).
More details on open_memmap in np.lib.npyio.format. Its first test is that file must be a string, not an existing file fid. It ends up delegating the work to np.memmap. I don't see any provision in that function for gzip.
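A small experiment illustrates the difference (assuming current NumPy follows the code paths quoted above; file names are arbitrary):

import numpy as np

a = np.arange(10)

# A plain .npy file can be memory-mapped directly.
np.save("a.npy", a)
m = np.load("a.npy", mmap_mode="r")   # returns a numpy.memmap

# An .npz archive is handed to NpzFile before mmap_mode is ever looked at,
# so each array comes back as an ordinary in-memory ndarray when indexed.
np.savez_compressed("a.npz", a=a)
z = np.load("a.npz", mmap_mode="r")
b = z["a"]                            # ordinary ndarray, not a memmap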

How to best and most efficiently split files in VB.NET

Say I have a 5 GB file. I want to split it in the following way.
The first 100 MB goes into one file.
The rest goes to some reserve file.
I do not want to use a ReadAllLines kind of function because it's too slow for large files.
I do not want to read the whole file into memory. I want the program to handle only a medium-sized chunk of data at a time.
You may use the BinaryReader class and its ReadBytes method to read the file in chunks.
Dim chunk() As Byte
Using br As New IO.BinaryReader(IO.File.OpenRead(inputPath)) ' inputPath: the large source file
    chunk = br.ReadBytes(1024) ' reads up to 1 KB per call; keep calling until it returns an empty array
End Using