Compress m4a files created on the iPhone before uploading them to the server - iOS 7

I have merged two streams of CAF files into one stereo file in m4a/CAF format.
The properties of the file are the following:
44100 Hz, 16-bit stereo, 256 kbps
For a 31-second file I get 667 KB.
What can I do to reduce the size of this file after the fact?
Can I convert it to a single channel (mono)? Can I reduce the sample size, or something like that?
I tried several sample applications out there, but none of them gave me a good solution.
Do you have any idea?
Using this command line on the Mac worked, but I don't know how to do the same on the iPhone:
sudo afconvert -d aac -f 'caff' -b 32768 call_record.m4a test_32.caf

Normally you'd use the ExtAudioFile API to do the conversion. To reduce the size you could convert to a compressed format like AAC. See some sample code here: https://developer.apple.com/library/ios/samplecode/iPhoneExtAudioFileConvertTest/Introduction/Intro.html

Related

OpenVMS: Extracting an RMS indexed file to Windows as a sequential flat file

I haven't used OpenVMS for 20+ years. It was my first OS. I've been asked if it is possible to copy the data from RMS files on an OpenVMS server to Windows as a text file, so that it's readable.
No one has experience or knowledge of the record structures etc.
The files are xyz.DAT and are relative files. I'm hoping the .DAT files are fixed length.
My first attempt would be to try to use Datatrieve (DTR), but I get an error that the image isn't loaded.
I thought it might be as easy as using CONVERT /FDL=nnnn.FDL and changing the organization from Relative to Sequential, but the file still seems to be unreadable.
Is there an easy way to stream an RMS indexed file to a flat ASCII file?
I used to use COBOL and C to access the data in the past, but had lots of libraries to help....
I've noticed some solutions may use ODBC to connect, but I'm not sure what I can or cannot install on the server.
I can FTP to the server using FileZilla....
Another plan is writing a C application to read a file and output it as strings... or DCL too... it doesn't have to be quick...
Any ideas?
As mentioned before, the simple solution MIGHT be to just use: $ TYPE/OUT=test.TXT test.DAT
This will handle Relative and Indexed files alike.
It is much the same as $ CONVERT /FDL=NL: test.DAT test.TXT
Both will just read records from the source and transfer the bytes, byte for byte, to the records in a sequential file.
FTP in ASCII mode will transfer that nicely to Windows.
You can also use an 'inline' FDL file to generate a 'Unix' LF file like:
$ conv /fdl="record; format stream_lf" test.DAT test.TXT
Or CR-LF file using:
$ conv /fdl="record; format stream" test.DAT test.TXT
Both can be transferred in binary or ASCII mode with FTP.
MOSTLY, that is, because this really only works well for a text-only source .DAT file.
There should be no CR, LF, FF or NUL characters in the source data, or things will break.
As 'habo' points out, use DUMP /RECORD=COUNT=3 to see how 'readable' the source data is.
If you spot 'binary' data using DUMP, then you will need to find a record definition somewhere which maps bytes to integers, floating points, or dates as needed.
These definitions can be COBOL LIB files or BASIC MAPs, and are often stored IN the CDD (Common Data Dictionary) or indeed in DATATRIEVE .DIC dictionaries.
To use such a definition you likely need a program that just reads records following the 'map' and writes/prints them as text; a sketch follows below. Normally that's not too hard - notably not when you can find an example program on the server to tweak.
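For illustration, here is a minimal Python sketch of such a "read following the map" program, assuming fixed-length records already transferred to the target machine in binary mode; the 32-byte layout is invented, so substitute the real field map from the COBOL LIB / CDD definition:

import struct

RECORD_LEN = 32                     # assumed fixed record length
# Hypothetical map: int32 + 20-byte text field + float64,
# assuming little-endian IEEE values rather than VAX floats.
LAYOUT = struct.Struct("<i20sd")

with open("test.DAT", "rb") as src, open("test.TXT", "w") as dst:
    while True:
        rec = src.read(RECORD_LEN)
        if len(rec) < RECORD_LEN:
            break                   # end of file (or a trailing partial record)
        num, text, value = LAYOUT.unpack(rec)
        dst.write("%d,%s,%g\n" % (num, text.decode("ascii", "replace").rstrip(), value))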
If it is just one or two 'suspect' byte ranges, then you can create a DCL loop to read and write and use F$EXTRACT to select the chunks you like.
If you want further help, kindly describe in words what kind of data is expected and perhaps provide the output from DUMP for 3 or 5 rows.
Good luck!
Hein.

Batch extract Hex colour from images to file

I have around 10k images, and I need to get the hex colour of each one. I can obviously do this manually with PS or other tools, but I'm looking for a solution that would ideally:
Run against a folder full of JPG images.
Extract the Hex from dead center of the image.
Output the result to a text file, ideally a CSV, containing the file name and the resulting Hex code on each row.
Can anyone suggest something that will save my sanity please? Cheers!
I would suggest ImageMagick which is installed on most Linux distros and is available for OSX (via homebrew) and Windows.
So, just at the command-line, in a directory full of JPG images, you could run this:
convert *.jpg -gravity center -crop 1x1+0+0 -format "%f,%[fx:int(mean.r*255)],%[fx:int(mean.g*255)],%[fx:int(mean.b*255)]\n" info:
Sample Output
a.png,127,0,128
b.jpg,127,0,129
b.png,255,0,0
Notes:
If you have more files in a directory than your shell can glob, you may be better off letting ImageMagick do the globbing internally, rather than using the shell, with:
convert '*.jpg' ...
If your files are large, you may be better off doing them one at a time in a loop, rather than loading them all into memory:
for f in *.jpg; do convert "$f" ....... ; done
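Note that the command above prints decimal RGB values rather than hex. If you'd rather have exactly the CSV of file name and hex code the question asks for, here is a minimal Python sketch using Pillow (the folder layout and the colours.csv output name are just examples):

import glob
from PIL import Image

with open("colours.csv", "w") as out:
    for path in sorted(glob.glob("*.jpg")):
        img = Image.open(path).convert("RGB")      # force RGB so getpixel returns (r, g, b)
        w, h = img.size
        r, g, b = img.getpixel((w // 2, h // 2))   # dead-centre pixel
        out.write("%s,#%02X%02X%02X\n" % (path, r, g, b))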

read video file from mongodb with pymongo

I have a large video file stored in MongoDB gridFS.
I would like to read it and write it on my disk.
I can find the file in the database with:
file = grid_fs.find_one({"filename":'file_in_database.cin'})
I get back a GridOut object: gridfs.grid_file.GridOut at 0xa7b7be0.
I try to write the file on my disk with:
with open('file_from_database.cin', 'w') as f:
    f.write(file.read())
I get the file written, but the size of the one downloaded from the database is slightly different from the original size of the file:
05/15/2015 09:09 AM 65,585,808 file_from_database.cin
08/01/2007 01:08 PM 65,585,800 Original_file.cin
I checked the file in the database and the md5 field is the same as the original, so the problem must be during the download or writing.
I'm using Windows 7 64-bit and the 64-bit Anaconda distribution for Python 2.7.
Any help would be appreciated.
Update
I tried the same code with a JPEG image and I get the same problem: the image is stored correctly in the database, but when I fetch it and write it to disk the size is slightly different and I cannot read it.
03/20/2015 02:36 PM 5,422,339 original_image.JPG
05/15/2015 02:44 PM 5,438,750 image_from_database.JPG
Am I making some simple mistake reading the GridOut and writing to the disk?
Interestingly, if I open the image with:
PIL.Image.open(file)
I can read the image fine. Any idea?
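The byte counts above are consistent with newline translation: opening the destination with mode 'w' on Windows expands every \n byte in the binary stream to \r\n. Opening the file in binary mode should fix it; a minimal sketch:

file = grid_fs.find_one({"filename": 'file_in_database.cin'})
with open('file_from_database.cin', 'wb') as f:   # 'wb': no text-mode newline translation
    f.write(file.read())

That would also explain why PIL.Image.open(file) works: the bytes coming straight out of GridFS are intact, and only the text-mode write to disk corrupts them.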

How to preserve formatting while dumping a man page into a text file

Seems like this should be a pretty simple one, but I can't figure it out (obviously).
When I open a terminal window and run, for example, the command man ffmpeg, the output I see within the terminal looks like this:
FFMPEG(1) FFMPEG(1)
NAME
ffmpeg - ffmpeg video converter
SYNOPSIS
ffmpeg [global_options] {[input_file_options] -i input_file} ...
{[output_file_options] output_file} ...
DESCRIPTION
ffmpeg is a very fast video and audio converter that can also grab from
a live audio/video source. It can also convert between arbitrary sample
rates and resize video on the fly with a high quality polyphase filter.
... which is how I would expect it to look. However, when I try to dump that info into a text file using the command man ffmpeg > man_ffmpeg.txt the results I get look like this:
FFMPEG(1) FFMPEG(1)
NNAAMMEE
ffmpeg - ffmpeg video converter
SSYYNNOOPPSSIISS
ffmpeg [_g_l_o_b_a_l___o_p_t_i_o_n_s] {[_i_n_p_u_t___f_i_l_e___o_p_t_i_o_n_s] -i _i_n_p_u_t___f_i_l_e} ...
{[_o_u_t_p_u_t___f_i_l_e___o_p_t_i_o_n_s] _o_u_t_p_u_t___f_i_l_e} ...
DDEESSCCRRIIPPTTIIOONN
ffmpeg is a very fast video and audio converter that can also grab from
a live audio/video source. It can also convert between arbitrary sample
rates and resize video on the fly with a high quality polyphase filter.
All I want to do is have what I would normally see inside the terminal dumped into a text file, but obviously I'm doing something wrong. What's the best way to do this?
man output contains a lot of hidden backspace characters: bold text is rendered as character, backspace, same character again (hence doublings like NNAAMMEE), and underlining as underscore, backspace, character. The following command strips those overstrike sequences:
man ffmpeg | col -b > man_ffmpeg.txt
Now you get crystal clear plain text output.
Source:
man man
...
To get a plain text version of a man page, without backspaces and underscores, try
# man foo | col -b > foo.mantxt
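For what it's worth, the same cleanup can be done in a few lines of Python, which makes the overstrike mechanism explicit; a sketch, assuming Python 3.7+ and a man that still emits overstrikes when piped:

import re
import subprocess

# Capture the raw man output, then drop every character that is
# immediately followed by a backspace (the nroff overstrike trick).
raw = subprocess.run(["man", "ffmpeg"], capture_output=True, text=True).stdout
plain = re.sub(".\b", "", raw)   # "\b" here is a literal backspace (0x08), not a word boundary
with open("man_ffmpeg.txt", "w") as f:
    f.write(plain)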

How to create large PDF files (10MB, 50MB, 100MB, 200MB, 500MB, 1GB, etc.) for testing purposes?

I tried this:
for ((i=1; i<=10; i++)); do convert 100MB.pdf 10MB.pdf 100MB.pdf; done
to create a 100 MB file, but it very quickly ran out of RAM.
The simplest tool: use pdftk (or pdftk.exe, if you are on Windows):
pdftk 10_MB.pdf 100_MB.pdf cat output 110_MB.pdf
This will be a valid PDF. Download pdftk here.
Update: if you want really large (and valid!), non-optimized PDFs, use this command:
pdftk 100MB.pdf 100MB.pdf 100MB.pdf 100MB.pdf 100MB.pdf cat output 500_MB.pdf
or even (if you are on Linux, Unix or Mac OS X):
pdftk $(for i in $(seq 1 100); do echo -n "100MB.pdf "; done) cat output 10_GB.pdf
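If you would rather script the concatenation, a hedged Python sketch with PyPDF2's PdfMerger does the same job (assumes pip install PyPDF2; the file names are the examples from above):

from PyPDF2 import PdfMerger

# Append the same source PDF several times and write one big, valid PDF.
merger = PdfMerger()
for _ in range(5):              # five copies of a 100 MB file -> roughly 500 MB
    merger.append("100MB.pdf")
merger.write("500_MB.pdf")
merger.close()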
Windows: fsutil
Usage:
fsutil file createnew [filename].[extension] [# of bytes]
Source: https://www.windows-commandline.com/how-to-create-large-dummy-file/
Linux: fallocate
Usage:
fallocate -l 10G [filename].[extension]
Source: Quickly create a large file on a Linux system?
Note that fsutil, fallocate, and mkfile (below) allocate zero-filled bytes, so the resulting file has the requested size and a .pdf extension but is not a valid PDF.
For those using macOS, mkfile might be a good alternative to fallocate or dd:
mkfile 100m some100mfile.pdf
Reference: https://stackoverflow.com/a/33478049/711401
According to http://www.maketecheasier.com/combine-multiple-pdf-files-with-pdftk/ the command should be:
pdftk file1.pdf file2.pdf file3.pdf cat output newfile.pdf
Note that you should download the Windows version of pdftk.
I had problems using pdftk with the cat parameter; I had better success with output.
The following command worked for me:
pdftk file_1.pdf file_1.pdf file_1.pdf file_1.pdf cat output.pdf
Using cat produced the following error:
Error: Unexpected text in page range end, here:
output.pdf
Exiting.
Acceptable keywords, for example: "even" or "odd".
To rotate pages, use: "north" "south" "east"
"west" "left" "right" or "down"
Errors encountered. No output created.
Done. Input errors, so no output created.
http://www.pdflabs.com/docs/pdftk-cli-examples/.
I created a 172 MB PDF in no time at all.
If you want a really big valid PDF file, then:
take the biggest valid PDF you can find
merge copies of it with a tool like PDF24Creator
It worked for me to create a big file (140 MB) after some minutes.
Under Linux there is pdfunite (part of poppler), which can concatenate copies of the same PDF file to produce one large PDF file:
pdfunite in.pdf in.pdf in.pdf out.pdf
See the man page.
Partly it depends on what you are trying to increase the size of... number of pages, number of images, size of a single image. In my experience, the vast bulk (90%+) of any given 'large' PDF file will be the images.
You could try using a pro product like Adobe InDesign to quickly build a large project and export it as a PDF.
Adobe Acrobat Pro has built-in tools to optimize PDF files; you could try using those tools to 'un-optimize' your file. :)
One possibility, if you are familiar with the PDF format:
Create a simple PDF with one page (the page should be contained within one object)
Copy that object multiple times
Add references to the copied objects to the page catalog
Fix the xref table
You get a valid document of arbitrary size, and the entire file will be processed by a reader.
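A hedged Python sketch of those steps, building a valid PDF of roughly the requested size from scratch; the page layout, padding scheme, and all names below are invented for illustration:

def build_pdf(path, n_pages, pad_bytes=4096):
    # Objects: 1 = Catalog, 2 = Pages, then one (Page, content stream) pair per page.
    bodies = ["<< /Type /Catalog /Pages 2 0 R >>", None]   # slot 2 (Pages) filled in below
    page_ids = []
    for k in range(n_pages):
        page_id, stream_id = 3 + 2 * k, 4 + 2 * k
        page_ids.append(page_id)
        bodies.append("<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
                      "/Contents %d 0 R >>" % stream_id)
        pad = "% " + "x" * pad_bytes + "\n"                # comment padding inside the stream
        bodies.append("<< /Length %d >>\nstream\n%sendstream" % (len(pad), pad))
    kids = " ".join("%d 0 R" % i for i in page_ids)
    bodies[1] = "<< /Type /Pages /Kids [%s] /Count %d >>" % (kids, n_pages)

    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for i, body in enumerate(bodies, start=1):             # write objects, recording offsets
        offsets.append(len(out))
        out += ("%d 0 obj\n%s\nendobj\n" % (i, body)).encode("latin-1")
    xref_pos = len(out)                                    # now fix up the xref table
    out += ("xref\n0 %d\n" % (len(bodies) + 1)).encode("latin-1")
    out += b"0000000000 65535 f \n"
    for off in offsets:
        out += ("%010d 00000 n \n" % off).encode("latin-1")  # each entry exactly 20 bytes
    out += ("trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(bodies) + 1, xref_pos)).encode("latin-1")
    with open(path, "wb") as f:
        f.write(bytes(out))

build_pdf("100MB.pdf", n_pages=25000)   # ~25000 pages x ~4 KB of padding each -> ~100 MB

Since the sketch builds the whole file in memory, it is only practical up to a few hundred MB as written.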
Have you tried using cat to combine the files?
cat 10MB.pdf 10MB.pdf > 20MB.pdf
That should result in a 20 MB file, although note that it is generally not a valid PDF that readers can open reliably, since the embedded cross-reference offsets no longer match; it is fine when only the size on disk matters.