So I am building a small application that has a keyboard and a few other buttons that trigger audio samples.
For this application, there are several pre-recorded audio tracks (drums, vocals, guitar) which can be muted/unmuted. I have that part working fine with AVAudioPlayer.

But, as most of you know, AVAudioPlayer is a little slow and has some latency if it's assigned to, say, triggering a small audio sample of a drum hit or a synth. So I implemented System Sound Services to play the short sound samples. It is working fine as far as the latency between hitting the button and the sound playing, but I have a slight problem: when the sample is hit twice in quick succession, you hear a small popping sound, which is expected because the first playback gets cut off when the button is hit the second time.

I would like to solve this by detecting whether a sample is playing; if it is, set the volume to 0, stop playback, and then play the sample again. But System Sound Services unfortunately doesn't have this functionality built in. AVAudioPlayer does, but it is too slow. I know there are Core Audio, Audio Queue Services, and OpenAL, but these all seem way too complex for what I need to be doing. I don't need to do audio processing of any kind.

Does anyone have suggestions for an audio framework that doesn't require writing 100 lines of code just to play a short audio clip? Everything seems to be pointing me toward spending weeks learning Core Audio/Audio Queues/OpenAL, and that just seems like a waste of my time for what I am working on.
You can use an AUSampler for the sampler. Granted, it is more complex than System Sounds. The AUSampler is a system-supplied AudioUnit, so you'll have a little programmatic AU configuration to do, but the hard parts are handled for you.

You can use AudioFile and ExtAudioFile for reading and creating audio files. In the case of the AUSampler, it knows how to load samples (in a subset of the available formats), so you won't even need to write the file I/O parts.

For more complex audio, you will likely need to come to grips with working with audio streams yourself.
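To give a feel for how much configuration that is, here is a minimal sketch (assuming an iOS AUGraph feeding a RemoteIO output; error handling omitted, and the sample-loading property, kAUSamplerProperty_LoadAudioFiles with a CFArray of file URLs, is from memory, so double-check it against AudioUnitProperties.h):

    #import <AudioToolbox/AudioToolbox.h>

    AUGraph graph;
    AUNode samplerNode, outputNode;
    AudioUnit samplerUnit;

    NewAUGraph(&graph);

    AudioComponentDescription samplerDesc = {0};
    samplerDesc.componentType = kAudioUnitType_MusicDevice;
    samplerDesc.componentSubType = kAudioUnitSubType_Sampler;
    samplerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AUGraphAddNode(graph, &samplerDesc, &samplerNode);

    AudioComponentDescription outputDesc = {0};
    outputDesc.componentType = kAudioUnitType_Output;
    outputDesc.componentSubType = kAudioUnitSubType_RemoteIO;
    outputDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AUGraphAddNode(graph, &outputDesc, &outputNode);

    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, samplerNode, 0, outputNode, 0);
    AUGraphNodeInfo(graph, samplerNode, NULL, &samplerUnit);
    AUGraphInitialize(graph);
    AUGraphStart(graph);

    // Point the sampler at a drum hit (hypothetical file name).
    NSURL *hit = [[NSBundle mainBundle] URLForResource:@"kick" withExtension:@"caf"];
    NSArray *fileURLs = [NSArray arrayWithObject:hit];
    CFArrayRef cfFiles = (CFArrayRef)fileURLs;              // use __bridge under ARC
    AudioUnitSetProperty(samplerUnit, kAUSamplerProperty_LoadAudioFiles,
                         kAudioUnitScope_Global, 0, &cfFiles, sizeof(cfFiles));

    // Trigger the sample with a MIDI note-on; latency is much lower than AVAudioPlayer.
    MusicDeviceMIDIEvent(samplerUnit, 0x90, 60, 127, 0);

Because the sampler is polyphonic, retriggering a note shouldn't have to cut off the previous hit, which is what causes the pop with System Sound Services.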
You don't mention whether your audio tracks are compressed or not, but if they are short you might try using .wav or another uncompressed format. AVAudioPlayer will not have to take the time to start decompressing them. There is still a short delay, but it is not as long as with an MP3.

Beyond that, I think buckling down and learning one of the lower-level frameworks is going to be what you need to do.
Here is a nice easy intro to OpenAL, if you like that:
http://benbritten.com/2008/11/06/openal-sound-on-the-iphone/
EDIT: Also, you are calling prepareToPlay before calling play, correct?
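For reference, a minimal sketch of that pattern (the file name is hypothetical):

    #import <AVFoundation/AVFoundation.h>

    NSURL *url = [[NSBundle mainBundle] URLForResource:@"snare" withExtension:@"wav"];
    NSError *error = nil;
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];

    // Preload the buffers up front so the later -play call doesn't pay the setup cost.
    [player prepareToPlay];

    // ...later, on the button tap:
    [player play];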
My solution was as follows. I create three AVAudioPlayers with the same URL.

When I am about to start the sound, I check the isPlaying property to determine which AVAudioPlayer to use.

The sound fades out via performSelector:withObject:afterDelay:. While the sound is fading out, the user can hit the keyboard again and the second AVAudioPlayer takes over, and so on.

The number of AVAudioPlayers you need depends on how fast you fade out the sound and how fast you want the user to be able to play.
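A rough sketch of that rotation (the class name, player count, and fade step are just illustrative):

    #import <AVFoundation/AVFoundation.h>

    @interface SamplePlayer : NSObject
    @property (nonatomic, strong) NSArray *players;   // small pool of AVAudioPlayers for one sample
    @end

    @implementation SamplePlayer

    - (id)initWithURL:(NSURL *)url
    {
        if ((self = [super init])) {
            NSMutableArray *pool = [NSMutableArray array];
            for (int i = 0; i < 3; i++) {
                AVAudioPlayer *p = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
                [p prepareToPlay];
                [pool addObject:p];
            }
            _players = pool;
        }
        return self;
    }

    - (void)trigger
    {
        // Pick the first player that is not busy; if all are busy, reuse the first one.
        AVAudioPlayer *freePlayer = nil;
        for (AVAudioPlayer *p in self.players) {
            if (![p isPlaying]) { freePlayer = p; break; }
        }
        if (freePlayer == nil) freePlayer = [self.players objectAtIndex:0];

        freePlayer.volume = 1.0;
        freePlayer.currentTime = 0;
        [freePlayer play];
    }

    - (void)fadeOut:(AVAudioPlayer *)p
    {
        // Step the volume down; repeat via performSelector:withObject:afterDelay: until silent.
        if (p.volume > 0.05) {
            p.volume -= 0.05;
            [self performSelector:@selector(fadeOut:) withObject:p afterDelay:0.02];
        } else {
            [p stop];
            p.volume = 1.0;
        }
    }

    @end

Three players and a 20 ms fade step are guesses; tune both until rapid retriggers stop popping.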
Related
I implemented an app that does real-time broadcasting of music from one iPhone to another, based on Ray Wenderlich's tutorial about GKSession and Matt Gallagher's tutorial on audio streaming.

Everything worked perfectly... until we decided to replace the poker game UI with one of our own. The result is that suddenly the networking throughput drops dramatically. Below is a profiler snapshot of the server.

Here is a snapshot of the client of the original app,

and here is a snapshot of the client of the app with the updated UI (the host is the same as the old one):

One thing to keep in mind is that we didn't just change the UI; we also changed a bit of the networking code, which is what I believe is slowing things down (I did a lot of performance testing on the UI, eliminating all the bells and whistles, and I got the same slowdown).

Any ideas? Some suggested that keeping a GKSession broadcasting its availability slows things down a bit; I made sure that's not the case in my app.
Update:
After looking at the network analysis (using Instruments), it seems that there is a lot of network activity by some unknown process. Is there a way to identify that unknown process?

This is the screenshot for the good app:

and here is the screenshot for the bad app:

Notice the difference between the two: one uses a lot more network activity than the other. Ideas?
It turns out I had two different objects pointing to the same GKSession instance variable; for some reason that slowed things down. The frustrating part is that GKSession is so opaque that any debugging is pretty much guesstimation. Lesson learned: I'll just use Bonjour directly next time.
I have started working on a new Xcode project, a game to be exact. I will be adding what you might call sprites to the screen quite frequently, and the image representing each one will be one of a total of 3. When I start adding these images programmatically to the view controller's view, the app starts lagging once I reach a number that is still fairly low compared to many other games out there (maybe 5-10). I was wondering if it has to do with caching? I see you can cache images in Cocos2D, which I just started learning, to reduce the time it takes to render the images on-screen. How do I go about this in Xcode?

IN SHORT: How do I "cache" images, or allow Xcode to rapidly draw images, to prevent lag when drawing multiple images?

Thanks in advance.
JBJ
Xcode is the IDE and development environment; it's not the operating system, which is where any caching would really be happening.

UIImage does do some kind of caching (here is a related question that talks about this), but if you're going to be using cocos2d you should rely more upon whatever your game framework provides versus what the OS provides.

You should rely on a proper API (like cocos2d, since you are talking about it) to develop games, not on UIKit classes, which are not meant to be used this way. Why should caching be supported in something that is meant for layouts and interfaces rather than real-time rendering?
I agree with Jack that you should probably just use Cocos2D. But if you want to do it yourself, you should use the imageNamed: method of UIImage to load the images, because it takes care of caching automatically, and you should use UIImageView to display the images, because Apple has put a lot of effort into optimizing UIImageView.
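A quick sketch of that combination (the image name is made up):

    // imageNamed: keeps decoded images in a system-managed cache, so repeated
    // lookups of the same name are cheap; imageWithContentsOfFile: does not cache.
    UIImage *sprite = [UIImage imageNamed:@"enemy.png"];

    UIImageView *spriteView = [[UIImageView alloc] initWithImage:sprite];
    spriteView.frame = CGRectMake(100, 100, sprite.size.width, sprite.size.height);
    [self.view addSubview:spriteView];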
I'm working on a pretty complicated app right now, but I just got a really good, niche-market idea for an AR game for iPhone. I would love to get some preliminary research done on whether or not it is worth the effort. I only have a few days (about 4) in which to code this. Is this a realistic timeline for what I'm trying to accomplish?

While I'm pretty familiar with CMDeviceMotion, and can get location updates from GPS, there are 4 features that I think may take a colossal amount of work:
1) Working with the camera in real time to draw augmented reality controls. Are there any good tutorials on how to overlay a view on top of a live camera feed? (A rough sketch of what I have in mind is below, after this list.)
2) Making the app work when GPS reception is spotty. It seems that some apps know how to keep updating the location from the last known fix based on the accelerometer/gyroscope. Where would I start on this front?

3) The networking component. I'm very new to multiplayer games. I have a website that can run PHP. Should I abandon my networking idea until I get a proper web server? Or is there some way I can run this peer-to-peer over 3G without a base station?

4) Google Maps integration for fast updates. Does this take a lot of effort?
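For 1), this is the kind of thing I was picturing: a bare-bones sketch using UIImagePickerController's cameraOverlayView (the overlay contents are just placeholders):

    #import <UIKit/UIKit.h>

    // Presented from a view controller; assumes a device with a camera.
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.showsCameraControls = NO;          // hide the default shutter UI

    // Any UIView can sit on top of the live feed and hold the AR controls.
    UIView *overlay = [[UIView alloc] initWithFrame:[UIScreen mainScreen].bounds];
    // ...add buttons, labels, drawing, etc. to `overlay` here...
    picker.cameraOverlayView = overlay;

    [self presentViewController:picker animated:YES completion:nil];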
I'm sorry if any of these questions are too broad and vague. I'm very excited about this idea, but would like to know what I'm dealing with before spending time on the app and realizing that I'm dealing with a monumental task!
I think you are dealing with a monumental task (especially the multiplayer part, where you'll encounter issues like lag/timing).
For the augmented reality part of your project, you can take a look at the mixare augmented reality engine. It's free and open-source software, and the code is available on GitHub: https://github.com/mixare/
Be aware that if you base your code upon mixare, you'll have to release your app under the same GPLv3 license as mixare.
Good luck with your project!
HTH,
Daniele
I am writing a simple application for streaming video over the network, using an approach slightly different from the ordinary "H.264 over RTP" one (I am using my own codecs).

To achieve this, I need the raw frames and raw audio samples that QTMovie implicitly sends to QTMovieView when playing back a movie.

The most common way to retrieve raw video frames is to use a VisualContext: using a display link callback, I "generate" a CVPixelBufferRef from this VisualContext. So I am getting frames at a frequency that is synchronized with my current refresh rate (not that I need this synchronization; I only need a "stream" of frames that I can transmit over the network, but the Core Video Programming Guide and most Apple samples related to video promote this approach).
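Roughly, my setup looks like this (a simplified sketch; error handling and cleanup omitted, and it leans on the old QuickTime C calls):

    #import <QTKit/QTKit.h>
    #import <QuickTime/QuickTime.h>
    #import <CoreVideo/CoreVideo.h>

    // Display link callback: pull a frame from the visual context whenever one is ready.
    static CVReturn FrameCallback(CVDisplayLinkRef link, const CVTimeStamp *now,
                                  const CVTimeStamp *outputTime, CVOptionFlags flagsIn,
                                  CVOptionFlags *flagsOut, void *context)
    {
        QTVisualContextRef vc = (QTVisualContextRef)context;
        if (QTVisualContextIsNewImageAvailable(vc, outputTime)) {
            CVImageBufferRef frame = NULL;
            if (QTVisualContextCopyImageForTime(vc, kCFAllocatorDefault, outputTime, &frame) == noErr) {
                // ...hand the CVPixelBufferRef to the encoder / network code here...
                CVBufferRelease(frame);
            }
        }
        QTVisualContextTask(vc);   // give the context time for housekeeping
        return kCVReturnSuccess;
    }

    // visualContext (QTVisualContextRef) and displayLink (CVDisplayLinkRef) are ivars.
    - (void)startGrabbingFramesFromMovie:(QTMovie *)movie
    {
        // Create a pixel-buffer visual context and attach it to the movie
        // (this is the point at which QTMovieView stops getting the picture).
        NSDictionary *pixelAttrs = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                               forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];
        NSDictionary *ctxAttrs = [NSDictionary dictionaryWithObject:pixelAttrs
                                                              forKey:(NSString *)kQTVisualContextPixelBufferAttributesKey];
        QTPixelBufferContextCreate(kCFAllocatorDefault, (CFDictionaryRef)ctxAttrs, &visualContext);
        SetMovieVisualContext([movie quickTimeMovie], visualContext);

        CVDisplayLinkCreateWithActiveCGDisplays(&displayLink);
        CVDisplayLinkSetOutputCallback(displayLink, FrameCallback, visualContext);
        CVDisplayLinkStart(displayLink);
    }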
The first problem I have faced is that when I attach a VisualContext to a QTMovie, the picture can no longer be rendered onto the QTMovieView. I don't know why this happens (I guess it's related to the idea of the GWorld and the rendering being "detached" from it when I attach the VisualContext). OK, at least I have the frames, which I could render onto a simple NSView (though this sounds wrong and performance-unfriendly; am I doing it right?).

As for the sound, I have no idea what to do. I need to get the raw audio samples as the movie is being played (ideally, something similar to what QTCaptureDecompressedAudioOutput returns in its callback).

I have prepared myself to delve into the deprecated Carbon QuickTime APIs if there is no other way, but I don't even know where to start. Should I use the same Core Video display link and periodically retrieve sound somehow? Should I get a QTDataReference and locate the sound frames manually?

I am actually a beginner at programming video and audio services. If you have experience here, I would really appreciate any ideas you could share with me :)
Thank you,
James
I'm looking into writing an app that runs as a background process and detects when an app (say, Safari) is playing audio. I can use NSWorkspace to get the process IDs of the currently running applications, but I'm at a loss when it comes to detecting what those processes are doing. I assume there is a way to listen in on a process and detect what public messages the objects are sending. I apologize for my ignorance on the subject.
Has anyone attempted anything like this or are aware of any resources that can help?
I don't think that your "answer" is an answer at all...
and there IS an answer (which is not "42")
Your best bet for doing this would be to write a pass-through audio output device, much like Soundflower, actually. Your audio output device would then load the actual (physical) audio output device and pass the audio data along to it directly (after first having a look at the audio stream, of course!). Then you only need to convince your users to configure your audio device as the default audio output device, so that the majority of applications that play sound will use it automatically. And voila...

Your audio processing function will probably just do a quick RMS on the buffer before passing it along to the actual output device. When the audio power crosses a certain threshold (probably something like -54 dB with Apple audio hardware), you know that some app is making sound.
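A sketch of that check (the float sample format and the exact threshold are assumptions):

    #import <Foundation/Foundation.h>
    #include <math.h>

    // Returns YES if the RMS level of a buffer of float samples crosses a threshold in dBFS.
    static BOOL BufferHasAudio(const float *samples, int count, float thresholdDb)
    {
        double sumOfSquares = 0.0;
        for (int i = 0; i < count; i++) {
            sumOfSquares += samples[i] * samples[i];
        }
        double rms = sqrt(sumOfSquares / count);
        double db  = 20.0 * log10(rms + 1e-12);   // avoid log10(0) on a silent buffer
        return db > thresholdDb;                  // e.g. thresholdDb = -54.0
    }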
|K<
SoundFlower is an open-source project that allows Mac OS X applications to pass audio to each other. It almost certainly does something similar to what you describe.
I've been informed on another thread that while this is possible, it is an extremely advanced technique and not recommended. It would involve using Application Enhancer (APE) and is not considered a "nice" thing to do. Looks like that app idea is destined for the big recycling bin in the sky :)