I'm playing around with the CoreAudio CAPlayThrough example provided by Apple. I'm not doing anything fancy, just attempting to get my guitar to pass through an audio interface (M-Audio Fast Track Pro) to my computer and then back out the interface into my headphones. I'm getting some audio to pass through, but the quality is terrible. I have the sample rate set to 48k on the input and output. Is there something I'm missing? I suspect it may be an issue with the bit rate, but I'm not sure how to change that. Any guesses as to what may be causing this quality issue?
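A format mismatch between the AUHAL units and the device is a common cause of this kind of distortion, so it is worth printing the stream format CAPlayThrough actually ends up with. Below is a minimal sketch in C of how that could look, assuming an already-created input AudioUnit called `inputUnit` (the name and the 48 kHz target are only for illustration):

```c
#include <AudioUnit/AudioUnit.h>
#include <stdio.h>

/* Inspect the format on the output scope of the AUHAL's input element
   (element 1), which is where the captured samples come from, and force
   the sample rate to 48 kHz if it differs. Formats are normally set
   before AudioUnitInitialize, so re-initialize the unit afterwards. */
static void CheckInputFormat(AudioUnit inputUnit)
{
    AudioStreamBasicDescription asbd = {0};
    UInt32 size = sizeof(asbd);

    OSStatus err = AudioUnitGetProperty(inputUnit,
                                        kAudioUnitProperty_StreamFormat,
                                        kAudioUnitScope_Output,
                                        1 /* input element */,
                                        &asbd, &size);
    if (err != noErr) {
        fprintf(stderr, "get stream format failed: %d\n", (int)err);
        return;
    }

    printf("rate=%.0f Hz  channels=%u  bits=%u\n",
           asbd.mSampleRate,
           (unsigned)asbd.mChannelsPerFrame,
           (unsigned)asbd.mBitsPerChannel);

    if (asbd.mSampleRate != 48000.0) {
        asbd.mSampleRate = 48000.0;
        AudioUnitSetProperty(inputUnit,
                             kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output,
                             1, &asbd, sizeof(asbd));
    }
}
```

If the formats already line up, dropped buffers (an I/O buffer size that is too small) are the other usual suspect for crackly play-through.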
I'm not sure if it's allowed to ask these questions here, but it seems important for us web developers (even a bad dev like me :p ).
The question is about video export settings in Premiere. I'm looking to make a ~30-second background video like the ones Airbnb or PayPal use. Yesterday I checked PayPal's video and it's only 10-15 MB for more than a minute. How did they do that?
Obviously you want a low average bit rate. Things that can help with that are: keep the resolution low (you can scale it up a bit on the client); use H.264 High Profile (for the H.264 version); use 2-pass encoding; use variable bit rate. You can try increasing the GOP length too.
I assume there's no audio, so that shouldn't be an issue. (Can't remember if Adobe has an option for no sound track, but you can set the audio to a very low bit rate, or post-process it with ffmpeg or something to remove the audio track.)
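If Premiere doesn't expose a "no audio" option in the export dialog, stripping the track afterwards with ffmpeg is a one-liner that copies the video stream untouched (file names here are just placeholders):

```sh
ffmpeg -i background.mp4 -c:v copy -an background-noaudio.mp4
```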
If you have any control over the video content, you can try to keep it compressible. For example, avoid video with lots of detail or rapid motion. You might be able to selectively blur parts in a way that doesn't look bad. If it doesn't move too fast, you might be able to decrease the frame rate.
If you really want to optimize, you'll probably need to experiment a lot.
I want to read the PC's music output and get basic information (beat/tone/...) about the song being played (then flash the lights accordingly, etc.). Can NAudio be used for this purpose, and are there any samples? Sorry for the overly general question at this stage.
TIA
-d
You can access your PC's output using WasapiLoopbackCapture. However, NAudio does not include a beat detection algorithm, so you'd need to find one yourself. There is an FFT class, though, which could be used to determine which frequencies are present.
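NAudio itself is C#, but the usual starting point for simple beat detection is language-agnostic: compare the energy of the most recent block of samples against a running average over roughly the last second, and flag a beat when the current block jumps well above that average. A rough sketch of the idea in plain C (block size, history length and the 1.4 threshold are only illustrative):

```c
#include <stddef.h>

/* Very naive energy-based beat detector: returns 1 when the energy of the
   current block is markedly higher than the recent average. `history` holds
   the energies of previous blocks (e.g. ~43 blocks of 1024 samples is about
   one second at 44.1 kHz) and is used as a ring buffer. */
int DetectBeat(const float *block, size_t blockLen,
               float *history, size_t historyLen, size_t *historyPos)
{
    /* Instantaneous energy of this block. */
    float energy = 0.0f;
    for (size_t i = 0; i < blockLen; i++)
        energy += block[i] * block[i];

    /* Average energy over the stored history. */
    float avg = 0.0f;
    for (size_t i = 0; i < historyLen; i++)
        avg += history[i];
    avg /= (float)historyLen;

    /* Store the current energy in the ring buffer. */
    history[*historyPos] = energy;
    *historyPos = (*historyPos + 1) % historyLen;

    /* The 1.4 factor is a typical starting threshold; tune to taste. */
    return avg > 0.0f && energy > 1.4f * avg;
}
```

Feeding it the samples that WasapiLoopbackCapture delivers (converted to mono float) would be the NAudio-specific part.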
I am writing a simple application for streaming video over the network, using an approach slightly different from the ordinary "H.264 over RTP" one (I am using my own codecs).
To achieve this, I need the raw frames and raw audio samples that QTMovie implicitly sends to QTMovieView when playing back a movie.
The most common way to retrieve raw video frames is to use a VisualContext - and then, using a display link callback, I "generate" a CVPixelBufferRef from this VisualContext. So I am getting frames at a frequency synchronized with my current refresh rate (not that I need this synchronization - I only need a "stream" of frames that I can transmit over the network - but the Core Video Programming Guide and most Apple samples related to video promote this approach).
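For reference, the display-link half of that approach boils down to roughly the sketch below, in plain C. `visualContext` is assumed to be a QTVisualContextRef created with QTPixelBufferContextCreate and attached to the movie with SetMovieVisualContext; the callback would be registered with CVDisplayLinkSetOutputCallback and started with CVDisplayLinkStart (these QuickTime calls are deprecated and 32-bit only):

```c
#include <QuickTime/QuickTime.h>   /* QTVisualContext* (deprecated, 32-bit) */
#include <CoreVideo/CoreVideo.h>   /* CVDisplayLink, CVPixelBuffer */

/* Display link callback: pull a frame out of the visual context whenever a
   new one is available and hand it to the sender. `displayLinkContext` is
   assumed to be the QTVisualContextRef. */
static CVReturn FrameCallback(CVDisplayLinkRef displayLink,
                              const CVTimeStamp *inNow,
                              const CVTimeStamp *inOutputTime,
                              CVOptionFlags flagsIn,
                              CVOptionFlags *flagsOut,
                              void *displayLinkContext)
{
    QTVisualContextRef visualContext = (QTVisualContextRef)displayLinkContext;

    if (QTVisualContextIsNewImageAvailable(visualContext, inOutputTime)) {
        CVImageBufferRef frame = NULL;
        if (QTVisualContextCopyImageForTime(visualContext, NULL,
                                            inOutputTime, &frame) == noErr
            && frame != NULL) {
            /* With a pixel-buffer context this is a CVPixelBufferRef:
               lock it, read the pixel data, push it to the encoder/network. */
            CVPixelBufferRef pixels = (CVPixelBufferRef)frame;
            CVPixelBufferLockBaseAddress(pixels, kCVPixelBufferLock_ReadOnly);
            /* ... encode / send CVPixelBufferGetBaseAddress(pixels) ... */
            CVPixelBufferUnlockBaseAddress(pixels, kCVPixelBufferLock_ReadOnly);
            CVBufferRelease(frame);
        }
    }

    /* Give the visual context a chance to do its housekeeping. */
    QTVisualContextTask(visualContext);
    return kCVReturnSuccess;
}
```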
The first problem I have faced is that when I attach a VisualContext to a QTMovie, the picture can no longer be rendered onto the QTMovieView. I don't know why this happens (I guess it's related to the idea of a GWorld and the rendering being "detached" from it when I attach the VisualContext). OK, at least I have frames, which I could render onto a simple NSView (though this sounds wrong and performance-unfriendly. Am I doing it right?)
As for the sound, I have no idea what to do. I need to get raw samples of sound as the movie is being played (ideally something similar to what QTCaptureDecompressedAudioOutput returns in its callback).
I have prepared myself to delve into the deprecated Carbon QuickTime APIs if there is no other way. But I don't even know where to start. Should I use the same Core Video display link and periodically retrieve sound somehow? Should I get a QTDataReference and locate the sound frames manually?
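On the audio side, the classic (also deprecated, 32-bit only) QuickTime route is the Movie Audio Extraction API, which pulls decoded PCM straight out of the movie without involving the display link at all. A rough sketch in C, assuming the underlying Movie has been obtained from the QTMovie (QTKit exposes it via -quickTimeMovie); the 44.1 kHz interleaved stereo float format is just an illustrative choice:

```c
#include <QuickTime/QuickTime.h>        /* MovieAudioExtraction* (deprecated) */
#include <CoreAudio/CoreAudioTypes.h>   /* AudioStreamBasicDescription */

/* Pull up to `frameCount` frames of decoded PCM from `movie` into `buffer`
   (interleaved 32-bit float stereo). Returns the extraction status. */
static OSStatus ExtractSomeAudio(Movie movie, float *buffer, UInt32 frameCount)
{
    MovieAudioExtractionRef session = NULL;
    OSStatus err = MovieAudioExtractionBegin(movie, 0, &session);
    if (err != noErr) return err;

    /* Ask for interleaved 32-bit float stereo at 44.1 kHz; by default the
       session hands back non-interleaved float at the movie's sample rate. */
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
    asbd.mChannelsPerFrame = 2;
    asbd.mBitsPerChannel   = 32;
    asbd.mBytesPerFrame    = 2 * sizeof(float);
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    MovieAudioExtractionSetProperty(session,
        kQTPropertyClass_MovieAudioExtraction_Audio,
        kQTMovieAudioExtractionAudioPropertyID_AudioStreamBasicDescription,
        sizeof(asbd), &asbd);

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 2;
    bufferList.mBuffers[0].mDataByteSize   = (UInt32)(frameCount * 2 * sizeof(float));
    bufferList.mBuffers[0].mData           = buffer;

    UInt32 flags  = 0;
    UInt32 frames = frameCount;
    err = MovieAudioExtractionFillBuffer(session, &frames, &bufferList, &flags);
    /* `frames` now holds the frames actually delivered; the
       kQTMovieAudioExtractionComplete bit in `flags` signals end of movie. */

    MovieAudioExtractionEnd(session);
    return err;
}
```

Extraction runs independently of playback, so the pacing of the outgoing audio stream would be under your control rather than tied to the display link.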
I am actually a beginner at programming video and audio services. If you could share some experience, I would REALLY appreciate any ideas you could share with me :)
Thank you,
James
New to Mac OS X, familiar with Windows. Windows has DirectShow, a good number of built-in filters, COM programming, and GraphEdit for very fast prototyping and snooping on the graphs you've constructed in code.
I'm now about to go to the Mac to work with cameras, webcams, microphones, color spaces, files, splitting, synchronization, rendering, file reading, file saving, and many of the things I've come to take for granted with DirectShow when putting together applications for live performance. On the Mac side, so far I've found ... nothing! Either I don't know where to look, or I'm having the toughest time squaring the Mac's reputation for ease of handling media with a coherent programmatic ability to get in there and start messin' with media-manipulatin' building blocks.
I've seen some weak suggestions to use gstreamer or some library for QT, but I can't bring myself to believe that this is the Apple way to go. And I've come across some QuickTime documentation, but I'm not looking to do transitions, sprites, broadcasting, ...
Having a brain trained on DirectShow means I don't even know how Apple thinks about providing DirectShow-like functionality. That means I don't know the right keywords and don't even know where to look. Books? Bought a few. Now I might be able to write some code that can edit your sister's wedding video (if I can't make decent headway on this topic I may next be asking what that'd be worth to you), but for identifying what filters are available and how to string them together ... nothing. Suggestions?
Video handling is going through a huge transition on the Mac at the moment. QuickTime is very old, but also big and powerful, so it's been undergoing an incremental replacement process for the past 5 years or so.
That said, QTKit is the QuickTime subset (capture, playback, format conversion and basic video editing) which is supported going forward. The legacy QuickTime APIs are still there for the moment, and probably will remain at least until its major features are available elsewhere, but are 32-bit only. For some involved video stuff you may end up needing to use it in places.
At the moment, iOS is ahead of the Mac because it could start from scratch with AV Foundation. The future of the Mac media frameworks will probably either be AV Foundation directly (with QTKit being a lightweight shim over the top) or an extension of QTKit that looks very similar.
For audio there's Core Audio which is on Mac and iOS and isn't going away any time soon. It's quite powerful but somewhat obtuse in places. Luckily online support is very good; the mailing list is an essential resource.
For filters and frame-level processing you've got Core Video as someone else mentioned, as well as Core Image. For motion graphics there's Quartz Composer which includes a graphical editor and a plugin architecture to add your own patches. For programmatic procedural animation and easily mixing rendering models (OpenGL, Quartz, video, etc.) there's Core Animation.
In addition to all of these, of course there's no reason you can't use open source libraries where the built-in stuff doesn't do what you want.
To address your comment below:
In QuickTime (and QTKit), individual data types like audio and video are represented as tracks. It may not be immediately clear that QuickTime can open audio as well as video file formats. A common way to combine audio and video would be:
Create a QTMovie with your video file.
Create a QTMovie with your audio file.
Take the QTTrack object representing the audio and add it to the QTMovie with the video in it.
Flatten the movie, so it doesn't simply contain a reference to the other movie but actually contains the audio data.
Write the movie to disk.
Here's an example from Blender. You'll see how the A/V muxing is done in the end_qt function. There's also some use of Core Audio in there (AudioConverter*). (There's some classic QuickTime export code in quicktime_export.c but it doesn't seem to do audio.)
I'm looking into writing an app that runs as a background process and detects when an app (say, Safari) is playing audio. I can use NSWorkspace to get the process IDs of the currently running applications, but I'm at a loss when it comes to detecting what those processes are doing. I assume that there is a way to listen in on a process and detect what public messages the objects are sending. I apologize for my ignorance on the subject.
Has anyone attempted anything like this or are aware of any resources that can help?
I don't think that your "answer" is an answer at all...
and there IS an answer (which is not "42")
Your best bet for doing this would be to write a pass-through audio output device, much like Soundflower, actually. Your audio output device would then load the actual (physical) audio output device and pass the audio data along to it directly (after first having a look at the audio stream, of course!). Then you only need to convince your users to configure your audio device as the default audio output device so that the majority of applications which play sound will use it automatically. And voilà...
Your audio processing function will probably just do a quick RMS on the buffer before passing it along to the actual output device. When the audio power crosses a certain threshold (probably something like -54 dB with Apple audio hardware), you know that some app is making sound.
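The RMS-and-threshold check is only a few lines; a minimal sketch in C over float samples, with the -54 dB figure from above passed in as the threshold:

```c
#include <math.h>
#include <stddef.h>

/* Return 1 if the buffer's RMS level (in dBFS) crosses the threshold. */
int BufferHasSound(const float *samples, size_t count, double thresholdDb)
{
    if (count == 0) return 0;

    double sumOfSquares = 0.0;
    for (size_t i = 0; i < count; i++)
        sumOfSquares += (double)samples[i] * samples[i];

    double rms = sqrt(sumOfSquares / (double)count);
    double db  = 20.0 * log10(rms + 1e-12);   /* epsilon avoids log10(0) */

    return db > thresholdDb;                  /* e.g. thresholdDb = -54.0 */
}
```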
|K<
Soundflower is an open-source project that allows Mac OS X applications to pass audio to each other. It almost certainly does something similar to what you describe.
I've been informed on another thread that while this is possible, it is an extremely advanced technique and not recommended. It would involve using Application Enhancer (APE) and is considered a not 'nice' thing to do. Looks like that app idea is destined for the big recycling bin in the sky :)