Does anyone know a faster JPEG compression library for development in Objective-C for iOS/OS X? I'm trying to stream a series of images between devices, and the built-in routine isn't fast enough. Basically it would look like video on the receiving end, but the source would be images, and I want as little latency as possible. Or is there something better I can use than compressing each frame as a JPEG?
Try iResize for Mac.
I've used it for a while now and am really impressed with it so far! Definitely a good program.
http://www.macupdate.com/app/mac/13039/iresize
I'm not sure if it's allowed to ask these questions here, but it seems important for us web developers (even bad devs like me :p ).
The question is about export settings for videos in Premiere. I'm looking to make a 30-second background video like Airbnb's or PayPal's. Yesterday I checked PayPal's video and it's only 10-15 MB for more than a minute. How did they do that?
Obviously you want a low average bit rate. Things that can help with that are: keep the resolution low (you can scale it up a bit on the client); use H.264 High Profile (for the H.264 version); use 2-pass encoding; use variable bit rate. You can try increasing the GOP length too.
I assume there's no audio, so that shouldn't be an issue. (Can't remember if Adobe has an option for no sound track, but you can set the audio to a very low bit rate, or post-process it with ffmpeg or something to remove the audio track.)
If you have any control over the video content, you can try to keep it compressible. For example, avoid video with lots of detail or rapid motion. You might be able to selectively blur parts in a way that doesn't look bad. If it doesn't move too fast, you might be able to decrease the frame rate.
If you really want to optimize, you'll probably need to experiment a lot.
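If it helps to see those knobs in one place, here is a minimal sketch of a two-pass, low-bit-rate H.264 encode with the audio track stripped, driven through ffmpeg from Java. The file names and the 800k bit rate are placeholders to experiment with, and it assumes ffmpeg is on the PATH:

    import java.io.IOException;
    import java.util.Arrays;
    import java.util.List;

    public class TwoPassEncode {

        // Runs one ffmpeg command and waits for it to finish.
        private static void run(List<String> cmd) throws IOException, InterruptedException {
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            if (p.waitFor() != 0) {
                throw new IOException("ffmpeg failed: " + cmd);
            }
        }

        public static void main(String[] args) throws Exception {
            String input = "background.mov";   // placeholder: the file exported from Premiere
            String output = "background.mp4";

            // Pass 1: analysis only, no real output (use NUL instead of /dev/null on Windows).
            run(Arrays.asList("ffmpeg", "-y", "-i", input,
                    "-c:v", "libx264", "-profile:v", "high", "-b:v", "800k",
                    "-pass", "1", "-an", "-f", "mp4", "/dev/null"));

            // Pass 2: the actual encode, using the stats from pass 1; -an drops the audio track.
            run(Arrays.asList("ffmpeg", "-y", "-i", input,
                    "-c:v", "libx264", "-profile:v", "high", "-b:v", "800k",
                    "-pass", "2", "-an", output));
        }
    }

You can of course do the same two passes straight from Premiere's exporter; the point is just that average bit rate, profile, two-pass encoding and dropping the audio are the settings that move the file size the most.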
If a file is in the process of being written, and I try to access it at the same time (say a log file that is written to every 10 milliseconds and I try to read it), will I damage or disturb the writing process?
Specifically I'm asking about video files: if I start a recording process (using Windows Media Encoder), I would like to monitor the file at the same time to see whether it is blank (black pixels everywhere) or real content is being recorded.
Sorry if my question is a newbie one, but I really really need to be sure about that.
Thanks in advance
In general you can certainly read files as they are being written, without corrupting their content. However:
It is possible to face an issue if your recording medium cannot deal with the combined data rate of both reading and writing. This can be a problem especially with slow-ish USB flash drives.
It is possible to face an issue on hard drives too, if the combination of reading and writing exceeds the rate of random seeks that the hard drive can handle. This can happen more easily on older drives (e.g. IDE) when dealing with HD video.
The end result is that if you have a real-time writer process, such as a TV recorder, it may be forced to drop some of the data - in the case of video a few frames.
Modern systems have quite fast disk subsystems, reasonably good I/O schedulers and large enough RAM capacities to allow for extensive data caching, which makes it quite unlikely that a single writer/reader combination would saturate the disk subsystem, unless you are doing something unusual like recording several video streams at once.
Keep in mind however, that:
The disk subsystem can also be saturated by unrelated processes reading/writing other files from the same drive.
If you are encoding video, you might also lose frames if something draws enough CPU resources that the encoding process is no longer able to keep up with the real-time requirements. Depending on the video file, test-playing it might be just enough to do that - at least HD reproduction can be quite demanding. So, watch your CPU load and experiment before relying on it to record your favourite show :-)
EDIT:
If you are among the lucky ones that have SSD drives, seeks and data rate should normally be a non-issue. That leaves the CPU - you'd be surprised how easy it is to push it to the limit.
Above all, you should experiment to find out the limits of your system for each particular application. That way you won't have any nasty surprises...
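To make the "reading while it is being written" part concrete, here is a minimal Java sketch that polls a growing file and only reads the bytes appended since the last pass. The file name and poll interval are placeholders; opening the file read-only does not disturb the writer, but whether the partial video is already decodable at any given moment depends on the container format, not on this loop:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class TailGrowingFile {
        public static void main(String[] args) throws IOException, InterruptedException {
            String path = "capture.wmv";   // placeholder: the file being recorded
            long offset = 0;
            byte[] buffer = new byte[64 * 1024];

            while (true) {
                // Open read-only so the writer is not blocked or modified.
                try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
                    file.seek(offset);
                    int read;
                    while ((read = file.read(buffer)) > 0) {
                        offset += read;
                        // Inspect buffer[0..read) here, e.g. hand it to a decoder
                        // and check whether the frames are all black.
                    }
                }
                Thread.sleep(500);   // poll twice a second; tune as needed
            }
        }
    }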
Is there any way to compress the video data while capturing from the camera? There is a huge difference in size between video captured with the camera and video taken from the photo library. I want to reduce the memory used when taking video from the camera. Is there any way?
Thanks
I filed a bug report with Apple on this matter; you could do the same - it seems the more reports from developers, the faster they fix things up.
No matter what videoQuality level you set on the UIImagePickerController, it always defaults to High when recording from the camera. Videos chosen from the user library respect your choice and compress really well with the hardware H.264 encoder present on the 3GS and up.
You can use FFmpeg to get video directly from the camera, compress it and store it in a file.
Also, FFmpeg is a standalone console application, and it doesn't need any other DLLs.
Of course, this isn't Objective-C, but it can be very useful in your case.
So I want to stream the audio from a mic using NAudio and then pass that stream to WCF, which a Silverlight app can consume to broadcast the live audio. I want the latency to be as low as possible.
Any suggestions? Or if someone has already done it, please point me to the source. Thanks in advance.
What you are asking is certainly possible, but it will be a fair amount of work to do.
NAudio can handle capturing the microphone audio.
At the Silverlight end you can play custom audio formats (in this case PCM) using a custom media element streaming source. See this one: http://code.msdn.microsoft.com/wavmss
I suspect latency would not be very good. You can reduce it by keeping the buffer sizes small. Also bear in mind that WAV is not a very efficient format to be sending over the network.
To keep latency as low as possible, you should use the netTcpBinding and stream your audio in binary format. I would use a MemoryStream for this and play with the buffer size to figure out what gives the best performance. Also, try different audio formats for best performance. This also depends on the audio quality you expect.
I have a multimedia application that, among other things, converts video using FFmpeg. Video conversion being the pain that it is, I have in my test suites some tests that check our ability to convert various video formats, with emphasis on sample videos known not to work.
A common problem we've noticed from users is that some videos end up with their audio desynched after being processed, and I am looking for a way to check this in my tests.
Extracting the audio portion of the resulting videos is not a problem.
My best idea so far would be to check the offset of the first non-silence at both the beginning and end and compare each between the two videos, but I'm hoping someone smart has a better idea.
The application language/environment is Java, but since this is for testing, I'm free to use any toolset.
The basic problem is likely that the video and audio are different lengths. Extract the audio and test its length vs. the video length. If they are significantly different (more than maybe .05 sec, I'm not really sure what is detectable as "off"), then there's a problem.
To fix it, re-encode the audio to match the video length, and then put the audio and video back into a container format.
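As a starting point, here is a minimal Java sketch of that length comparison, shelling out to ffprobe. The file name and the 0.05 s tolerance are placeholders, and it assumes ffprobe is on the PATH and reports a numeric duration for both streams:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class AvSyncCheck {

        // Duration in seconds of one stream ("v:0" for video, "a:0" for audio).
        private static double streamDuration(String file, String stream) throws IOException, InterruptedException {
            Process p = new ProcessBuilder("ffprobe", "-v", "error",
                    "-select_streams", stream,
                    "-show_entries", "stream=duration",
                    "-of", "default=noprint_wrappers=1:nokey=1",
                    file).start();
            try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String line = r.readLine();
                p.waitFor();
                return Double.parseDouble(line.trim());
            }
        }

        public static void main(String[] args) throws Exception {
            String converted = "output.mp4";   // placeholder: the file produced by the conversion under test
            double video = streamDuration(converted, "v:0");
            double audio = streamDuration(converted, "a:0");
            double drift = Math.abs(video - audio);
            System.out.printf("video %.3f s, audio %.3f s, drift %.3f s%n", video, audio, drift);
            if (drift > 0.05) {
                throw new AssertionError("audio and video lengths differ by " + drift + " s");
            }
        }
    }

This only catches the "different lengths" case; a constant offset with equal lengths would still need something like the non-silence check you describe.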