How to tell from a media file whether it's video or audio? [duplicate] - amazon-s3

This question already has answers here:
Using ffprobe to check if file is audio or video only
(5 answers)
Closed 1 year ago.
I have some files - audio and video files - stored in an AWS S3 bucket, in various containers, and I would like to send them to AWS MediaConvert (MC). My issue is that I need to know whether a file is audio or video, because I need to configure MC accordingly. The main app on the server is PHP-based, but that detail may not matter much.
I have ffmpeg, and I already use ffprobe to get the file duration, however I can't tell from the returned data whether it's a video or an audio-only file. In another question someone said that maybe I can get this info using ffmpeg, but as far as I know that requires me to download the file from S3 to the server.
There is also an S3 call that I can use to get the headers of a specific object, which include the MIME type of the file; however, this sometimes returns audio/mpeg for video files, which then configures MC incorrectly.
I could also check the error message from the first MC job creation, which tells me about the issue, and then resend the file with the correct configuration, but then the job is marked as failed with an error, which doesn't feel right to do.
So my question is: what would be the best approach to tell whether a media file is video or audio-only? It's possible that ffmpeg or ffprobe can do this remotely, but I don't know the tools deeply enough to be sure it can only work the way I described.

You could use
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4
The above command outputs the input file's (here: sample.mp4) format parameters along with details of its audio and video streams. You can see a sample output here.
What you're interested in is .streams[].codec_type, to figure out whether the input file contains only audio, only video, or both kinds of streams. You can use jq for that, for example:
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4 | jq .streams[].codec_type
"video"
"audio"
or parse the output JSON in any other way.
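If you don't want to download the object to the server first, one option (a sketch, not a drop-in solution) is to probe it over HTTPS: ffprobe accepts URLs as input, so you can point it at a presigned S3 URL and check whether any stream has codec_type equal to "video". The sketch below uses Python with boto3 just to keep it short; the bucket/key names are placeholders, and in a PHP app the same idea works with the AWS SDK's presigned requests plus shell_exec and json_decode.
# Sketch: classify an S3 object as video or audio-only without downloading it.
# Assumes ffprobe is on PATH and was built with HTTPS support; bucket/key are placeholders.
import json
import subprocess

import boto3


def is_video(bucket, key):
    # A presigned URL lets ffprobe read the object over HTTPS directly.
    url = boto3.client("s3").generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
    )
    probe = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", url],
        capture_output=True, text=True, check=True,
    )
    streams = json.loads(probe.stdout).get("streams", [])
    # Treat the object as video if any stream is a video stream.
    return any(s.get("codec_type") == "video" for s in streams)


# Hypothetical bucket/key, for illustration only.
print("video" if is_video("my-bucket", "media/sample.mp4") else "audio")
One caveat: some audio files with embedded cover art expose an extra video stream whose disposition marks it as an attached picture, so you may want to ignore streams where disposition.attached_pic is 1 before deciding the file is a video.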

Related

JSZip reports missing bytes when reading back previously uploaded zip file

I am working on a webapp where the user provides an image/text file sequence. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP's move_uploaded_file to move it to the desired location after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports Corrupted zip: missing 210 bytes. My PHP code for squirting back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file)) {
    ob_clean();               // discard anything already in the output buffer
    readfile($file);          // stream the ZIP file to the client
    http_response_code(200);
    die();
} else {
    http_response_code(399);  // custom code: tells the client to build the resource locally
}
where the 399 code is interpreted in my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I use the result text (on an HTTP response of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end but I am unable to figure out what that might be.
What we set out to do
Attempt to grab a server-side ZIP file from JavaScript
If it does not exist send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource
If it does exist send back that ZIP file
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an http_response_code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At this point to get the ZIP "directly" simply follow the instructions in this tutorial on MDN.

How to enable the experimental AAC encoder for VLC and record AAC sound from the microphone?

I managed to record MP3 with VLC 2.1.5 on Mac OS X 10.9.2 by using this command:
./VLC -vvv qtsound://AppleHDAEngineInput:1B,0,1,0:1 --sout "#transcode{acodec=mp3,ab=128}:standard{access=file,mux=mp3,dst=~/Desktop/Recording.mp3}"
However, I need to record AAC audio, and every time I use the AAC settings the file is 203 bytes and broken; probably only the header gets written. Some mux/filetype combinations produce 0-byte files or don't produce any file at all.
I used this command:
./VLC -vvv qtsound://AppleHDAEngineInput:1B,0,1,0:1 --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=ts,dst=~/Desktop/Recording.mp4}"
Any command that works and records AAC audio with VLC from the Terminal would be greatly appreciated. Thanks!
Update:
I managed to get it started with this command:
./VLC -vvv qtsound://Internal\ microphone --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=~/Desktop/Recording.mp4}"
But when it tries to encode, it reports this:
[aac @ 0x10309e000] The encoder 'aac' is experimental but experimental
codecs are not enabled, add '-strict -2' if you want to use it.
[0x100469c30] avcodec encoder error: cannot open encoder
So it looks like I should add this
-strict -2
parameter to the command to fix it. Unfortunately this parameter is for ffmpeg and VLC does not recognize it. Do you have any idea how to enable the experimental AAC encoder for VLC?
I had something similar. I have no idea if this is doing anything, but it did make the error go away. Basically I tried to pass params to ffmpeg:
#transcode{aenc=ffmpeg{strict=-2},vcodec=mp4v,vb=1024,acodec=mp4a}
Hope this gives you some ideas
Another way this seems to work is with the --sout-avcodec-strict switch to the vlc command itself. To modify the original example:
./VLC -vvv qtsound://Internal\ microphone --sout-avcodec-strict -2 --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=~/Desktop/Recording.mp4}"
FYI, on Windows systems, one needs to use an equals sign for option values (not a space), so it would be --sout-avcodec-strict=-2 instead there.
I don't know what the difference is between these two forms, or indeed, if there is one. I suspect the sout-avcodec-strict form might be more robust in the face of future changes to the codec library interface, but given that it's a library-specific option in the first place, I'm not sure that matters.
My info was from:
https://forum.videolan.org/viewtopic.php?t=111801
Hope this helps.
Yes, huge thanks to all the tech dudes here! I would just add that getting VLC's MP4 transcode working is also a great solution when Windows Movie Maker rejects (= "is not indexed and cannot be imported") all your perfectly functional .asf / .wmv / .avi files. Install the free K-Lite codec pack to fundamentally broaden Movie Maker's import abilities; access these by switching to "All files" in the import dialogue; then stare in amazement as an (MP4) video actually loads!
For the graphically oriented, I've compiled screenshots of the various boxes, including that all-important strictness = -2. To use, Control R (=Convert...); browse to input file; [build, save &] load profile; browse to output location & supply name; finally hit Start (green ringed).
My setup is Windows 8.1, Movie Maker 6, K-Lite Codec Pack 12.0.5 (Full), VLC 2.2.1
Peace+love, AK
VLC makes mp4, now with sound!

Watson speech to text in Node-RED crashes Node-RED application

I am using Node-RED and I am trying to store Text to Speech data in a Cloudant database. That works fine, and I can also get it out in msg.payload.speech, but when I feed it into Speech to Text, my whole app crashes with this error:
ERR Dropped log message: message too long (>64K without a newline)
so it seems that the Speech To Text node cannot handle large messages. It also seems that Text to Speech makes a very long string regardless of what you inject. One word or a whole paragraph does not make any difference.
Is there a way to get around this issue in Node-RED?
What happens if you split the audio that you feed to the STT service into smaller chunks? Does that work? How much audio are you trying to feed?
If you give us more details about what you are trying to accomplish then we should be able to help.
Can you also explain the problem that you are experiencing with TTS, what do you mean with "Text to Speech makes a very long string regardless of what you inject"?
thank you
Thanks for your reply.
What I basically want to do is use the S2T node in Node-RED. I have put a .wav file in a Cloudant database, and when I feed this .wav file into the S2T node, the app crashes. I used several ways to get the speech into the database: 1. via the Text to Speech node, 2. by manually adding the .wav file to the database.
When I look in Cloudant, it is one long line of characters, so I placed the wave file on different lines, which did not help. Then I split up the wave file into smaller chunks; this did not work either, probably because the wave file loses its structure.
The next thing I tried was using a FLAC file, which is also supported by T2S and S2T and is compressed (roughly a factor of 10), so it would be less than 64K. But I got the message that only wav files are supported. Then I looked in the code of the S2T node and found out that only wav is supported (the Watson S2T service in Bluemix supports more audio formats).
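For what it's worth, the reason byte-splitting breaks the file is that each piece ends up without a valid RIFF/WAV header. A rough sketch of splitting a WAV into smaller, still-valid WAV files (shown in Python with its standard wave module; the file name and chunk length are just placeholders):
# Sketch: split a WAV file into smaller, still-valid WAV files.
# Cutting the raw byte stream breaks the RIFF/WAV header, which is why
# arbitrary chunking produced unusable audio; each piece needs its own header.
import wave


def split_wav(path, seconds_per_chunk=30):
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * seconds_per_chunk
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = f"{path}.part{index}.wav"
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)    # copy sample rate, channels, sample width
                dst.writeframes(frames)  # write this slice with a proper header
            index += 1


split_wav("recording.wav")  # hypothetical file name
Each piece written this way carries its own header, so a speech-to-text service can process it independently; whether the Node-RED S2T node accepts pieces of that size is a separate question.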

Base64 encode very large files in objective C to upload file in sharepoint

I have a requirement where the user can upload files present in the app to SharePoint via the same app.
I tried using the http://schemas.microsoft.com/sharepoint/soap/CopyIntoItems method of SharePoint, but it needs the file in base64-encoded format, embedded into the body of the SOAP request. My code crashed on the device when I tried to convert even a 30 MB file to a base64-encoded string, while the same code executed just fine on the simulator.
Is there any alternative way to upload files (like file streaming, etc.) to SharePoint? I may have to upload files of up to 500 MB. Is there a more efficient library to convert NSData into a base64-encoded string for large files?
Should I read the file in chunks, convert each chunk into a base64-encoded string, and upload the file once the complete file is converted? Any other approaches?
First off, your code probably crashed because it ran out of memory. I would do a loop where I read chunks, convert them, and push them to an open socket. This probably means that you need to go to a lower level than NSURLConnection; I have tried to search for NSURLConnection and chunked upload without much success.
Some seem to suggest using ASIHttp, but looking at the homepage it seems abandoned by the developer, so I can't recommend that.
AFNetworking looks really good; it has blocks support, and I can see from the example on the first page how it could be used for you. Look at the streaming request example. Basically, create an NSInputStream that you push chunked data to and use it in an AFHTTPURLConnectionOperation.
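To illustrate the chunking idea itself (independent of Objective-C), here is a small sketch in Python: as long as each chunk you read is a multiple of 3 bytes, its base64 output can be concatenated or streamed directly, so the whole file never has to sit in memory. The file names and chunk size are arbitrary.
# Sketch of chunked base64 encoding (Python for brevity; the same logic
# applies with NSData/NSInputStream in Objective-C).
import base64

CHUNK_SIZE = 3 * 64 * 1024  # multiple of 3, so no padding appears mid-stream


def encode_in_chunks(in_path, out_path):
    with open(in_path, "rb") as src, open(out_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            dst.write(base64.b64encode(chunk))  # append this chunk's base64 text


encode_in_chunks("upload.bin", "upload.b64")  # hypothetical file names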

Decoding base64 for a large file in Objective-C

I am consuming a web service that streams a PDF file to my iOS device. I use SOAP messages to interact with the web service and NSXMLParser:foundCharacters() once the stream is complete, and I want to get the content of the PDF file from the streamed XML file created in the first step. The data I get is encoded in base64, and I have the methods to decode the content back. For a small file the easiest approach is to read/collect all the content with NSXMLParser:foundCharacters from the streamed file and call the base64-decoding method when I get the whole data in parser:didEndElement
(the above approach works fine; I tested it for this case and produced the correct PDF file out of it).
Now my question is: what is the best approach (optimizing memory/speed) to read/decode/write to produce the final PDF from a big streamed file?
Is there any code available, or any thoughts on how to accomplish this in Objective-C?
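As a language-neutral illustration of the incremental approach, here is a short Python sketch: base64 text can be decoded in slices as long as each slice's length is a multiple of 4 characters (and contains no line breaks), so the decoded PDF can be appended to disk piece by piece instead of being built in memory. The file names are placeholders.
# Sketch of incremental base64 decoding (Python for brevity; the same idea
# applies to NSData in Objective-C). Assumes the encoded text has no line
# breaks; strip them first if it does.
import base64

CHUNK_SIZE = 4 * 64 * 1024  # multiple of 4 base64 characters


def decode_in_chunks(b64_path, pdf_path):
    with open(b64_path, "rb") as src, open(pdf_path, "wb") as dst:
        while True:
            chunk = src.read(CHUNK_SIZE)
            if not chunk:
                break
            dst.write(base64.b64decode(chunk))  # append the decoded bytes


decode_in_chunks("streamed.b64", "document.pdf")  # hypothetical file names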