How to enable the experimental AAC encoder for VLC and record AAC sound from the microphone?

I managed to record mp3 with VLC 2.1.5 on Mac OS X 10.9.2 using this command:
./VLC -vvv qtsound://AppleHDAEngineInput:1B,0,1,0:1 --sout "#transcode{acodec=mp3,ab=128}:standard{access=file,mux=mp3,dst=~/Desktop/Recording.mp3}"
However, I need to record AAC audio, and every time I use the AAC settings the resulting file is 203 bytes and broken; probably only the header gets written. Some mux/filetype combinations produce 0-byte files or no file at all.
I used this command:
./VLC -vvv qtsound://AppleHDAEngineInput:1B,0,1,0:1 --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=ts,dst=~/Desktop/Recording.mp4}"
Any command that works and records AAC audio with VLC from the Terminal would be greatly appreciated. Thanks!
Update:
I managed to get it started with this command:
./VLC -vvv qtsound://Internal\ microphone --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=~/Desktop/Recording.mp4}"
But when it tries to encode, it reports this:
[aac # 0x10309e000] The encoder 'aac' is experimental but experimental
codecs are not enabled, add '-strict -2' if you want to use it.
[0x100469c30] avcodec encoder error: cannot open encoder
So it looks like I should add this
-strict -2
parameter to the command to fix it. Unfortunately, that parameter belongs to ffmpeg, and VLC does not recognize it. Do you have any idea how to enable the experimental AAC encoder in VLC?

I had something similar. I have no idea whether this is actually doing anything, but it did make the error go away. Basically, I tried to pass the parameter through to ffmpeg:
#transcode{aenc=ffmpeg{strict=-2},vcodec=mp4v,vb=1024,acodec=mp4a}
Hope this gives you some ideas.

Another way that seems to work is the --sout-avcodec-strict switch on the vlc command itself. To modify the original example:
./VLC -vvv qtsound://Internal\ microphone --sout-avcodec-strict -2 --sout "#transcode{acodec=mp4a,ab=128}:standard{access=file,mux=mp4,dst=~/Desktop/Recording.mp4}"
FYI, on Windows systems, one needs to use an equals sign for option values (not a space), so it would be --sout-avcodec-strict=-2 instead there.
I don't know what the difference is between these two forms, or indeed, if there is one. I suspect the sout-avcodec-strict form might be more robust in the face of future changes to the codec library interface, but given that it's a library-specific option in the first place, I'm not sure that matters.
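For anyone driving this from a script rather than the Terminal, here is a minimal Python sketch of assembling the same invocation. The VLC binary path, qtsound device name, and output path are placeholders for my setup, and build_vlc_record_cmd is my own helper, not a VLC API:

```python
# Sketch: assemble the VLC command line that records AAC audio from the
# microphone, enabling the experimental encoder via --sout-avcodec-strict -2.
# The binary path, device name, and destination below are assumptions;
# substitute your own.
import subprocess

def build_vlc_record_cmd(vlc="/Applications/VLC.app/Contents/MacOS/VLC",
                         device="qtsound://Internal microphone",
                         dst="~/Desktop/Recording.mp4",
                         bitrate=128):
    # The sout chain mirrors the working command from this thread.
    sout = ("#transcode{acodec=mp4a,ab=%d}:standard{access=file,mux=mp4,dst=%s}"
            % (bitrate, dst))
    return [vlc, "-vvv", device,
            "--sout-avcodec-strict", "-2",
            "--sout", sout]

cmd = build_vlc_record_cmd()
# subprocess.run(cmd)  # uncomment to actually start recording
```

Passing the flag and its value as separate argv entries sidesteps the space-vs-equals question, since no shell parsing is involved.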
My info was from:
https://forum.videolan.org/viewtopic.php?t=111801
Hope this helps.

Yes, huge thanks to all the tech dudes here! I would just add that getting VLC's mp4 transcode working is also a great solution when Windows Movie Maker rejects (= "is not indexed and cannot be imported") all your perfectly functional .asf / .wmv / .avi files. Install the free K-Lite codec pack to fundamentally broaden Movie Maker's import abilities; access these by switching to "All files" in the import dialogue; then stare in amazement as an (mp4) video actually loads!
For the graphically oriented, I've compiled screenshots of the various boxes, including that all-important strictness = -2. To use: Ctrl+R (= Convert...); browse to the input file; [build, save &] load the profile; browse to the output location and supply a name; finally hit Start (green ringed).
My setup is Windows 8.1, Movie Maker 6, K-Lite Codec Pack 12.0.5 (Full), VLC 2.2.1
Peace+love, AK
VLC makes mp4, now with sound!

How to tell from a media file if it's a video or an audio? [duplicate]

I have some files, audio and video, stored in an AWS S3 bucket; they can have different containers, and I would like to send them to AWS MediaConvert (MC). My issue is that I need to know whether a file is audio or video, because I need to configure MC accordingly. The main app on the server is PHP based, but that may not be relevant here.
I have ffmpeg, and I already use ffprobe to get the file duration; however, I can't tell from the returned data whether it's a video or an audio-only file. In another question someone suggested that I might get this information with ffmpeg, but as far as I know that requires downloading the file from S3 to the server.
There is also an S3 function I can use to get the header of a specific object, which contains the file's MIME type; however, this sometimes returns audio/mpeg for video files, which then configures MC incorrectly.
I could also check the error message from the first MC job creation, which tells me about the issue, and resend the file with the correct configuration, but then the job is marked as failed with an error, which doesn't feel right.
So my question is: what would be the best approach to tell from a media file whether it is video or audio-only? It's possible that ffmpeg or ffprobe can do this remotely; I just don't know the tools deeply enough to be sure.
You could use
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4
The above command outputs the input file's (here: sample.mp4) parameters along with details of its audio and video streams. You can see the sample output here.
What you're interested in is .streams[].codec_type, to figure out whether the input file contains only audio, only video, or both. You can use jq for that, for example:
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4 | jq .streams[].codec_type
"video"
"audio"
or parse the output JSON in any other way.
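If you'd rather do the check from application code (the question mentions a PHP backend; the same idea in Python is sketched below), you can classify the JSON that ffprobe -print_format json -show_streams emits. The sample JSON string here is hand-written to match that shape, not captured from a real run:

```python
import json

def classify(ffprobe_json: str) -> str:
    """Return 'video', 'audio', or 'unknown' from ffprobe JSON output."""
    data = json.loads(ffprobe_json)
    types = {s.get("codec_type") for s in data.get("streams", [])}
    if "video" in types:
        return "video"    # at least one video stream present
    if "audio" in types:
        return "audio"    # audio-only file
    return "unknown"      # no recognizable streams

# Example with JSON shaped like ffprobe's output:
sample = '{"streams": [{"codec_type": "video"}, {"codec_type": "audio"}]}'
print(classify(sample))  # video
```

Note that ffprobe can also read HTTP(S) URLs directly (for example a presigned S3 URL), so you may not need to download the whole object to the server first.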

How to run baremetal application on Altera Cyclone V SoC using HPS loading from SD card

I have a Terasic DE1-SoC board and I want to run a simple LED-blinking baremetal application using the HPS.
I've studied the HPS technical reference, the HPS Boot guide, and the SoC EDS guide, and followed all the instructions to run my app.
Here's a brief list of my steps.
Create a system in QSYS with the HPS component and some PIOs (for the on-board LEDs and buttons)
Run bsp-editor to create a preloader. I want to boot from SD card, so I check the BOOT_FROM_SDMMC option. I also want my app to be on the FAT partition (check FAT_SUPPORT, FAT_BOOT_PARTITION 1, FAT_LOAD_PAYLOAD_NAME test.img (or should it be test.bin?)). Uncheck WATCHDOG_ENABLE. I want to watch the preloader boot over the serial connection, so I check SERIAL_SUPPORT. Other settings are left unchanged.
Type "make" in SOC EDS 15.0 command shell to obtain preloader-mkpimage.bin
Write preloader-mkpimage.bin to the SD card (which has a FAT32 partition and a custom A2 partition) using alt-boot-disk-util.
Create a simple app in DS-5 and compile it to obtain test.axf; convert it to test.bin using the fromelf command, and then to test.img with the mkimage tool.
Drag-and-drop test.img onto the SD card's FAT32 partition.
Insert the card into the board and power up.
Load the .sof onto the board (my FPGA-HPS system).
Reset the HPS by pressing the reset button.
So the problem is that nothing appears on the serial connection, so I think the preloader isn't working at all; resetting the HPS doesn't help.
I don't even see "<0>" in the terminal, just a blank page.
I've tried the prebuilt Linux images provided by Terasic, and they work 100%.
I do everything according to the Altera docs, but nothing works.
Even the official example altera-socfpga-hardwarelib-unhosted isn't working. I've tried using the preloader from that example, but in the terminal window I see only the "<0>" symbol when I reset the HPS.
By the way, the app works only in the DS-5 debugger, with the SD card inserted and the .sof loaded on the board. I don't clearly understand the meaning of the .scat file or how to write one. I also don't understand what load address and entry point I should use with the mkimage tool when converting test.bin to test.img (should I do this conversion at all, or specify test.bin in the BSP settings?). In my opinion the main problem is an incorrect preloader. What is wrong with it?
Please help!
UPD: After editing the HPS part in QSYS (a UART module was added), re-generating the system, and re-building the preloader (other settings unchanged), I started receiving correct messages in the terminal about successful loading of the U-Boot SPL. But I still can't run test.img. Working on it.
UPD2: OK, I solved the problem; now it's working fine.

Watson speech to text in Node-RED crashes Node-RED application

I am using Node-RED and trying to store text-to-speech data in a Cloudant database. That works fine, and I can also get it out in msg.payload.speech, but when I feed it into Speech To Text, my whole app crashes with this error:
ERR Dropped log message: message too long (>64K without a newline)
so it seems that the Speech To Text node cannot handle large messages. It also seems that Text To Speech produces a very long string regardless of what you inject; one word or a whole paragraph makes no difference.
Is there a way to get around this issue in Node-RED?
What happens if you split the audio that you feed to the STT service into smaller chunks? Does that work? How much audio are you trying to feed?
If you give us more details about what you are trying to accomplish then we should be able to help.
Can you also explain the problem you are experiencing with TTS? What do you mean by "Text to Speech makes a very long string regardless of what you inject"?
thank you
Thanks for your reaction.
What I basically want to do is use the S2T node in Node-RED. I have put a .wav file in a Cloudant database, and when I feed this .wav file into the S2T node, the app crashes. I used several ways to get the speech into the database: 1. via the Text To Speech node; 2. adding the .wav file to the database manually.
When I look in Cloudant, it is one long line of characters, so I placed the wave file on different lines, which did not help; then I split the wave file into smaller chunks, which did not work either, probably because the wave file loses its structure.
The next thing I tried was a FLAC file, which is also supported by T2S and S2T and is compressed (roughly a factor of 10), so it would be less than 64K. But I got a message that only wav files are supported. Then I looked in the code of the S2T node and found that only wav is supported (the Watson S2T service in Bluemix supports more audio formats).
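For what it's worth, cutting the byte stream directly will always break playback, because only the first piece keeps the WAV header. If chunking is the way around the 64K limit, each chunk needs its own valid header. A sketch of that idea using Python's standard wave module (Node-RED itself would use a JavaScript function node, and the 5-second chunk length is an arbitrary assumption):

```python
import io
import wave

def split_wav(data: bytes, seconds_per_chunk: int = 5) -> list:
    """Split WAV bytes into smaller standalone WAV files,
    re-writing a valid header for every chunk."""
    chunks = []
    with wave.open(io.BytesIO(data), "rb") as src:
        params = src.getparams()
        frames_per_chunk = params.framerate * seconds_per_chunk
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            buf = io.BytesIO()
            with wave.open(buf, "wb") as dst:
                dst.setparams(params)    # same format as the source
                dst.writeframes(frames)  # this chunk's audio only
            chunks.append(buf.getvalue())
    return chunks
```

Each element of the returned list is a complete, playable WAV file, so a service that accepts only wav should accept every chunk.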

How to grab video sequence using basler ip camera in opencv 2.1?

I could not stream from the IP camera because it returns null.
It shows this warning:
Error opening file c:\user\vp\ocv\opencv\src\highgui\cvcap_ffmpeg.cpp:452
hope someone can help me with it.
You cannot retrieve image data from IP cameras directly with OpenCV. You need to access the raw data through the Pylon API (get the char* data) and then copy it to iplimage->imageData.
Then you can grab every frame with OpenCV; but don't use ffmpeg, try Video for Windows (VFW), which is easier to use and gives fewer problems with OpenCV.
The error you're getting is due to ffmpeg failing to find the codec you selected, or to an incorrect configuration.

How do I record video to a local disk in AIR?

I'm trying to record a webcam's video and audio to an FLV file stored on the user's local hard disk. I have a version of this code working which uses NetConnection and NetStream to stream the video over a network to an FMS (Red5) server, but I'd like to be able to store the video locally for low-bandwidth/flaky-network situations. I'm using Flex 3.2 and AIR 1.5, so I don't believe there are any sandbox restrictions preventing this.
Things I've seen:
FileStream - Allows reading/writing local files, but has no .attachCamera and .attachAudio methods for creating an FLV.
flvrecorder - Produces screen grabs from the webcam and creates its own FLV file. Doesn't support audio. License prohibits commercial use.
SimpleFLVWriter.as - Similar to flvrecorder without the weird license. Doesn't support audio.
This Stack Overflow post - Demonstrates playback of a video from local disk using a NetConnection/NetStream.
Given that I already have a version which uses NetStream to stream to the server, I thought the last was most promising and put together this demo application. The code compiles and runs without errors, but I don't get an FLV file on disk when the stop button is clicked.
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml">
<mx:Script>
    <![CDATA[
        private var _diskStream:NetStream;
        private var _diskConn:NetConnection;
        private var _camera:Camera;
        private var _mic:Microphone;

        public function cmdStart_Click():void {
            _camera = Camera.getCamera();
            _camera.setQuality(144000, 85);
            _camera.setMode(320, 240, 15);
            _camera.setKeyFrameInterval(60);
            _mic = Microphone.getMicrophone();
            videoDisplay.attachCamera(_camera);

            _diskConn = new NetConnection();
            _diskConn.connect(null);
            _diskStream = new NetStream(_diskConn);
            _diskStream.client = this;
            _diskStream.attachCamera(_camera);
            _diskStream.attachAudio(_mic);
            _diskStream.publish("file://c:/test.flv", "record");
        }

        public function cmdStop_Click():void {
            _diskStream.close();
            videoDisplay.close();
        }
    ]]>
</mx:Script>
<mx:VideoDisplay x="10" y="10" width="320" height="240" id="videoDisplay" />
<mx:Button x="10" y="258" label="Start" click="cmdStart_Click()" id="cmdStart"/>
<mx:Button x="73" y="258" label="Stop" id="cmdStop" click="cmdStop_Click()"/>
</mx:WindowedApplication>
It seems to me that either something is wrong with the above code which prevents it from working, or NetStream just can't be used this way to record video.
What I'd like to know is: a) what (if anything) is wrong with the code above? b) If NetStream doesn't support recording to disk, are there any alternatives which capture audio AND video to a file on the user's local hard disk?
Thanks in advance!
It is not possible to stream video directly to the local disk without using some streaming service like Windows Media Encoder, Red5, or Adobe's media server.
I have tried all the samples on the internet, with no solution to date.
look at this link for another possibility:
http://www.zeropointnine.com/blog/updated-flv-encoder-alchem/
My solution was to embed Red5 into AIR.
Sharing with you my article
http://mydevrecords.blogspot.com/2012/01/local-recording-in-adobe-air-using-red5.html
In general, the solution is to embed the free media server Red5 into AIR as an asset, so the server is present in the AIR application folder. Then, through NativeProcess, you can run Red5 and have an instance of it in memory. As a result, you can have local video recording without any network issues.
I am also trying to do the same thing, but I have been told from the developers of avchat.net that it is not possible to do this with AIR at the moment. If you do find out how to do it, I would love to know!
I also found this link, not sure how helpful it is http://www.zeropointnine.com/blog/webcam-dvr-for-apollo/
Well, I just think that letting it connect to nothing (null) doesn't work. I've already tried connecting to localhost, but that didn't work either. I don't think this is even possible; streaming video works only with Flash Media Server or Red5, not locally. Maybe you could install Red5 on your PC?
Sadly, camera video support in Flash is very poor. The stream is raw, so the issue is that you have to encode to FLV yourself, and doing that in real time takes a very fast computer. First-generation concepts would write raw bitmaps to a file (or serialize an array), and a second pass would convert the file to an FLV. Basically, you have to poll the camera, save each frame as a bitmap, and stack them in an array. This is very limited and could not do audio; it was also very hard to get above 5-10 fps.
The gent at Zero Point Nine came up with a new version, and you're on the right path. Look at the new FLV recorder. I spent a lot of time working with this but never quite got it to work for my needs (two cameras); I just could not get the FPS I needed. But it might work for you. It was much faster than the original method.
The only other working option I know of is to have Red5 save the video and download it back to the app.