Providing data as needed for QTMovie (Objective-C)

I understand that if I wanted to provide a QTMovie with data from an arbitrary source as it is needed I probably have to deal with the QTDataReference class, but unfortunately my experience with anything similar is limited to an audio data callback with Audio Buffers on the iPhone.
If I initialized the QTDataReference with some NSMutableData, how would I know when it needs more data? And how would I "clear" already-played movie data and provide it again when the user seeks back (which I want them to be able to do)?
Basically the movie data I want to provide would in the end come from a set of files (which are really just one movie file split up), which become available sequentially during playback. This part is crucial.
Anybody who gets me going in the right direction can get beta access to the first Mac OS X Usenet movie streamer ;)

You likely can't do this using QTKit. Start with the QuickTime Overview and go from there.
The first idea that occurs to me is to create a QuickTime reference movie that directs QuickTime to look for its media in the files you expect to download (or even directly at their remote URLs), then sic QuickTime Player/QTMovieView on it and hope that QuickTime's support for progressive download patches over any rough spots.
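I haven't tried this for your split-file case, but for reference, pointing QTKit at a remote data reference looks roughly like this (the URL is only a placeholder):

#import <QTKit/QTKit.h>

// Placeholder URL; in practice this would point at a reference movie or at
// the first downloaded segment.
NSURL *remoteURL = [NSURL URLWithString:@"http://example.com/movie-part1.mov"];
QTDataReference *dataRef = [QTDataReference dataReferenceWithReferenceToURL:remoteURL];

NSDictionary *attributes = [NSDictionary dictionaryWithObjectsAndKeys:
                            dataRef, QTMovieDataReferenceAttribute,
                            [NSNumber numberWithBool:YES], QTMovieOpenAsyncOKAttribute, // allow progressive loading
                            nil];

NSError *error = nil;
QTMovie *movie = [[QTMovie alloc] initWithAttributes:attributes error:&error];
if (movie == nil) {
    NSLog(@"Could not open movie: %@", error);
}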

Embedded ASS subtitle track in streamed video?

We'd like to build a small specialized clone of the ill-fated popcorn-time project, that is to say a node-webkit frontend for peerflix. The videos we'd like to play are MKV files with embedded ASS subtitle tracks, and we can't seem to get the embedded subtitles to show up: VLC shows them nicely, but HTML5 video players in WebKit-based apps don't, not even in Google Chrome (so it's not a matter of Chromium's reduced codec support).
Now, I'm a bit out of my depth here, and I don't really know much about these things, but it seems to me that the media engine underneath WebKit simply ignores the ASS subtitle track. Is it because it's ASS? Is it a matter of codecs somehow? Or is it, after all, an HTML5 thing? The HTML5 video "living standard" mentions that "captions can be provided, either embedded in the video stream or as external files using the track element", so the feature is at least planned, though I realize the implementation may be lacking. However, given that node-webkit uses ffmpeg as the underlying engine, it seems strange to me that the subtitles are not picked up at all.
Could someone more knowledgeable please advise us on the problem? Also, is there anything we could do about it?
Extracting the subtitles beforehand is not an option, though I have been playing with the idea of extracting the subtitles on the fly, and feeding that stream back to the player - I had some modest success with this, and it looks like it could be done with some effort, but I'm really out of my depth here, and the whole idea is pretty contrived anyway.
However, I find it improbable that nobody has run into this problem before, hence this question: is there any way to show embedded (ASS) subtitle tracks in a streamed video in node-webkit?
Not sure if this will help, but according to this page node-webkit doesn't ship with codecs for patented media formats. They do have a few suggestions on that page, one of which is to compile your own node-webkit.
You could try using Popcorn Time's ffmpegsumo file, which is what I used when I needed MP3 support and Chrome's version didn't work. I don't know whether it supports the ASS subtitle format, though (considering its use, I would think it has to).
Note: I would have posted this as a comment, but unfortunately I don't have commenting privileges yet. A couple of upvotes sure would be nice ;)

How can I capture shift-command-3/4 in Cocoa [duplicate]

I have an image application and I want to release it where unregistered users can view the files but can't save them until they've registered.
I'm looking for a way to prevent the user from using the built in screenshot functionality so I don't have to watermark the images. How might I accomplish this?
-- Edit Below --
I decided to watermark the images. I had been trying to avoid watermarking since the images are stereoscopic, but I'm rather happy with how the watermark looks now. I put a logo in the corner and offset it enough on each image so that it appears in the foreground.
Whether people agree with it in practice or not, my question is still valid. Apple's DVD Player hides the video in its screenshots, which doesn't altogether stop the user from taking screenshots but accomplishes my original goal.
I would still very much like to know how to do this (the DVD Player way).
Based on a symbols search through DVD Player, it likely uses the private API CGSSetWindowCaptureExcludeShape. Richard Heard has been kind enough to reverse engineer it and wrap it for easy use.
Being private, it may stop working (or have already stopped working) at any time.
But ultimately the answer to your question is "yes, but not in any publicly documented way". Some other takeaways from this lengthy thread are:
Asking this question inevitably excites a lot of myopic moral outrage.
Given there's no public method, reverse engineering DVD Player is a useful path to pursue.
A request to Apple DTS might be the only reliable method to find an answer.
DVD Player does this (the user can still take the screenshot, but the player window doesn't appear in it), so I'm sure there's a way. Maybe setting the window's sharing type to NSWindowSharingNone?
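Untested, but the NSWindowSharingNone suggestion amounts to a single call on the window showing the protected content (imageWindow here is whatever NSWindow that is); whether the system screenshot shortcut honours it may depend on the OS version:

// Prevent other processes (including the screen capture machinery) from
// reading this window's contents. NSWindowSharingNone is public API.
[imageWindow setSharingType:NSWindowSharingNone];
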
One option that is very user-hostile is to change the folder in which screen captures are stored to a throwaway (/dev/null-style) directory by changing the com.apple.screencapture setting.
A huge downside of this is that you might mess up the user's settings and not be able to restore them if your application doesn't exit cleanly.
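For completeness (and with all the caveats above), the mechanism is just the defaults system; something like this, where the target folder is an arbitrary example and the user's original value would have to be saved first so it can be restored:

// Redirect where the system writes screenshots (hostile; remember to save
// and restore the user's original setting). SystemUIServer has to be
// restarted before the change takes effect.
NSArray *writeArgs = [NSArray arrayWithObjects:
                      @"write", @"com.apple.screencapture", @"location", @"/tmp/hidden-shots", nil];
NSTask *writeTask = [NSTask launchedTaskWithLaunchPath:@"/usr/bin/defaults" arguments:writeArgs];
[writeTask waitUntilExit];
[NSTask launchedTaskWithLaunchPath:@"/usr/bin/killall"
                         arguments:[NSArray arrayWithObject:@"SystemUIServer"]];
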
Another option is to keep track of which files are created in the screen capture location, see if they match the screenshot naming pattern, and then remove them.
This method is still quite hostile though.
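Instead of matching file names, you could lean on Spotlight: screenshots get tagged with the kMDItemIsScreenCapture attribute, so an NSMetadataQuery can report them as they appear. A rough sketch (what you do with the file once you see it is up to you):

#import <Cocoa/Cocoa.h>
#import <CoreServices/CoreServices.h>

// Watch for files that Spotlight has tagged as screen captures.
NSMetadataQuery *query = [[NSMetadataQuery alloc] init];
[query setSearchScopes:[NSArray arrayWithObject:NSMetadataQueryLocalComputerScope]];
[query setPredicate:[NSPredicate predicateWithFormat:@"kMDItemIsScreenCapture == 1"]];

[[NSNotificationCenter defaultCenter] addObserverForName:NSMetadataQueryDidUpdateNotification
                                                  object:query
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification *note) {
    for (NSMetadataItem *item in [query results]) {
        NSString *path = [item valueForAttribute:(NSString *)kMDItemPath];
        NSLog(@"Screenshot appeared at %@", path);
        // Deleting it here would be the hostile variant described above.
    }
}];
[query startQuery];
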
I also investigated whether it was possible to kill the process that handles screen capture; unfortunately that process, SystemUIServer, just relaunches after being killed.
SystemUIServer seems to refuse to take screenshots while DVD Player is playing a DVD. I have no idea how the DVD playback detection works, but it might be a lead for preventing screenshots.
Links
Technical details about Screenshots in Mac OS X
com.apple.screencapture details
ScreenCapture.strings - List of error messages from ScreenCapture
Disclaimer before people start ranting: I have a legitimate reason to solve this problem, but I won't use the com.apple.screencapture -> /dev/null method due to its downsides.
You could try to run your application fullscreen and then capture all the keystrokes. But please listen to siride.
No; that's a system feature.

Playing MIDI files in VB.NET

How can I play MIDI files with VB.NET? I tried using WAV files, but they are too big. Any help?
Look at this article; I used it before and it works:
http://www.codeproject.com/Articles/8506/Simple-VB-NET-MIDI-Wave-Play-Class
Just copy and paste that code into your project, create a variable that holds your MIDI file, and call the play method.
You can also try this (not sure about it):
http://support.microsoft.com/kb/141756
Using MIDI files can be a good idea with regard to size, but IMO a horrible idea when it comes to the actual sound (or lack thereof :) ). Some users who make music will have a better synthesizer set up or connected to their system, but expecting users to fire up their MIDI instruments and so forth just to listen to a MIDI track in your software is a bad idea if it's unexpected.
Most users, though, are stuck with Microsoft's built-in wavetable synth, which is a torture instrument (pun intended) and should probably not be used ;)
Why not consider compressing your wave data instead, using MP3 or another excellent codec such as AAC or Ogg Vorbis?
This will reduce the data to roughly a tenth of its original size or less, and unless you are providing a whole album, that should be manageable.
You can find various ways to do this, from simple approaches such as this one using the Media Player, to more low-level ones such as this one, which decodes the MP3 file.
Also take a look at SlimDX.

Is there a video on writing to a plist in iOS?

I'm not saying I'm lazy (I am), but I want to read what is in the plist, and then, when the user does something, append that info to the existing plist and write it back, so that the info is not lost.
Now that's the type of thing I would think there'd be a video on.
Thus, my question is twofold:
a) Where's a good video (iTunes, Stanford University, iOS App Programming Guide) on writing to a plist using file handlers? (gulp)
b) How can I do a search and restrict the media to videos (so I can find videos on these subjects in the future)?
I'm especially looking for how to handle the notorious first time, and how to make sure I don't overwrite good data by wrongly thinking it's empty.
There are plenty of such tutorials on YouTube. Have a look at this one:
iPhone Programming - Reading/Writing .plist Files
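In the meantime, the basic read-modify-write cycle is short enough to sketch here, assuming the plist lives in the Documents directory (the file name and key are just examples):

NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                      NSUserDomainMask, YES) objectAtIndex:0];
NSString *plistPath = [docs stringByAppendingPathComponent:@"UserData.plist"];

// The notorious first time: if nothing has been written yet, start with an
// empty dictionary rather than failing or clobbering anything.
NSMutableDictionary *data = [[NSMutableDictionary alloc] initWithContentsOfFile:plistPath];
if (data == nil) {
    data = [NSMutableDictionary dictionary];
}

// Append the new information and write the whole thing back atomically.
[data setObject:[NSDate date] forKey:@"lastOpened"];
if (![data writeToFile:plistPath atomically:YES]) {
    NSLog(@"Could not write plist to %@", plistPath);
}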

Technique to identify a video in iOS camera roll

I'm trying to solve a specific problem (but this could benefit others) which from googling around doesn't seem to have a definitive solution. I think there are probably several partial solutions out there, I'd like to find the best of those (or a combination) that does the trick most of the time.
My specific example is: users in my app can send videos to each other and I'm going to allow them to save videos they have received to their camera roll. I would like to prevent them from forwarding the video on to others. I don't need to identify a particular video, just that it was originally saved from my app.
I have achieved a pretty good solution for images by saving some EXIF metadata that I can use to identify that the image was saved from my app and reject any attempts to forward it on, but the same solution doesn't work for videos.
I'm open to any ideas. So far I've seen suggested:
Using ALAssetRepresentation in some way to save a filename and then compare it when reading in, but I've read that upgrading iOS wipes these names out
Saving metadata. Not possible.
MD5. I suspect iOS would modify the video in some way on saving which would invalidate this.
I've had a thought about appending a frame or two to the start of the video, perhaps an image which is a solid block of colour, magenta for example. Then when reading in, get the first frame, do some kind of processing to identify this. Is this practical or even possible?
What are your thoughts on these, and/or can you suggest anything better?
Thanks!
Steven
There are two approaches you could try. Both only work under iOS 5.
1) Save the URL returned by [ALAssetRepresentation url]. Under iOS 5 this URL contains a Core Data objectID and should be persistent (see the sketch below).
2) Use the customMetadata property of ALAsset to append custom info to any asset you saved yourself.
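Untested sketch of option 1, assuming your app is the one writing the video to the camera roll (videoFileURL and the NSUserDefaults key are placeholders):

#import <AssetsLibrary/AssetsLibrary.h>

ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeVideoAtPathToSavedPhotosAlbum:videoFileURL  // local file your app just saved
                            completionBlock:^(NSURL *assetURL, NSError *error) {
    if (assetURL == nil) {
        return;
    }
    // Look the asset back up and remember its representation URL so that a
    // later "save to camera roll" request can be recognised as one of ours.
    [library assetForURL:assetURL resultBlock:^(ALAsset *asset) {
        NSString *repURL = [[[asset defaultRepresentation] url] absoluteString];
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        NSMutableArray *known = [[defaults arrayForKey:@"KnownVideoURLs"] mutableCopy];
        if (known == nil) {
            known = [NSMutableArray array];
        }
        [known addObject:repURL];
        [defaults setObject:known forKey:@"KnownVideoURLs"];
    } failureBlock:^(NSError *err) {
        NSLog(@"Could not load asset: %@", err);
    }];
}];
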
Cheers,
Hendrik