I have an image application and I want to release it so that unregistered users can view the files but can't save until they've registered.
I'm looking for a way to prevent the user from using the built-in screenshot functionality so I don't have to watermark the images. How might I accomplish this?
-- Edit Below --
I decided to watermark the images. I had been trying to avoid watermarking since the images are stereoscopic but I'm rather happy about how the watermark looks now. I put a logo in the corner and offset it enough on each image so it appears in the foreground.
Whether people agree with it in practice or not, my question is still valid. Apple's DVD Player hides the video in its screenshots, which doesn't altogether stop the user from taking screenshots but accomplishes my original goal.
I would still very much like to know how to do this. (the DVD player way)
Based on a symbol search through DVD Player, it likely uses the private API CGSSetWindowCaptureExcludeShape. Richard Heard has been kind enough to reverse engineer it and wrap it for easy use.
Being private, it may stop working (or have already stopped working) at any time.
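For reference, here's roughly what calling it might look like. This is a hypothetical sketch only: the CGS symbols are private and undocumented, and the signatures below are assumptions taken from reverse-engineered headers.

    #import <ApplicationServices/ApplicationServices.h>

    // Private CoreGraphics Services symbols (assumed signatures).
    typedef int CGSConnectionID;
    typedef void *CGSRegionRef;
    extern CGSConnectionID CGSMainConnectionID(void);
    extern CGError CGSNewRegionWithRect(const CGRect *rect, CGSRegionRef *region);
    extern CGError CGSSetWindowCaptureExcludeShape(CGSConnectionID cid,
                                                   CGWindowID wid,
                                                   CGSRegionRef shape);

    // Exclude a window's entire bounds from screen captures.
    void ExcludeWindowFromCapture(CGWindowID wid, CGRect bounds) {
        CGSRegionRef region = NULL;
        if (CGSNewRegionWithRect(&bounds, &region) == kCGErrorSuccess) {
            CGSSetWindowCaptureExcludeShape(CGSMainConnectionID(), wid, region);
        }
    }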
But ultimately the answer to your question is "yes, but not in any publicly documented way". Some other takeaways from this lengthy thread are:
Asking this question inevitably excites a lot of myopic moral outrage.
Given there's no public method, reverse engineering DVD Player is a useful path to pursue.
A request to Apple DTS might be the only reliable method to find an answer.
DVD Player does this (the user can still take the screenshot, but the player window doesn't appear in it), so I'm sure there's a way. Maybe setting the window's sharing type to NSWindowSharingNone?
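If that pans out, it's a one-liner on the documented side of the fence. A minimal sketch, assuming `window` is the NSWindow whose contents should stay out of captures (whether the system screenshot shortcut honors it is worth testing):

    #import <Cocoa/Cocoa.h>

    // NSWindowSharingNone prevents the window's contents from being
    // read by other processes, including capture services.
    [window setSharingType:NSWindowSharingNone];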
One option, though a very user-hostile one, is to change the folder in which screen captures are stored to a /dev/null-style directory by changing the com.apple.screencapture setting.
A huge downside of this is that you might mess up the user's settings and not be able to restore them if your application doesn't exit cleanly.
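For completeness, a sketch of flipping that preference programmatically. The CFPreferences calls are documented, but the key name and whether SystemUIServer picks the change up without being restarted are assumptions:

    #import <CoreFoundation/CoreFoundation.h>

    // Point the screencapture save location at a throwaway directory.
    // Restore the old value on exit -- if you crash first, the user is
    // stuck with this setting (the downside described above).
    CFPreferencesSetValue(CFSTR("location"), CFSTR("/tmp/screenshot-sink"),
                          CFSTR("com.apple.screencapture"),
                          kCFPreferencesCurrentUser, kCFPreferencesAnyHost);
    CFPreferencesSynchronize(CFSTR("com.apple.screencapture"),
                             kCFPreferencesCurrentUser, kCFPreferencesAnyHost);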
Another option is to keep track of which files are created in the screen capture location, check whether they match the screenshot naming pattern, and then remove them (a sketch follows below).
This method is still quite hostile, though.
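Here is what that could look like with a dispatch vnode source; the folder and the "Screen Shot" prefix are assumptions that vary by OS version, locale, and user settings:

    #import <Foundation/Foundation.h>
    #include <fcntl.h>

    // Watch the capture folder and delete anything matching the default
    // screenshot name pattern. Keep a strong reference to the returned
    // source, or the watcher dies with it.
    dispatch_source_t WatchCaptureFolder(NSString *folder) {
        int fd = open(folder.fileSystemRepresentation, O_EVTONLY);
        dispatch_source_t src = dispatch_source_create(
            DISPATCH_SOURCE_TYPE_VNODE, fd, DISPATCH_VNODE_WRITE,
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
        dispatch_source_set_event_handler(src, ^{
            NSFileManager *fm = [NSFileManager defaultManager];
            for (NSString *name in [fm contentsOfDirectoryAtPath:folder error:NULL]) {
                if ([name hasPrefix:@"Screen Shot"]) { // assumed pattern
                    [fm removeItemAtPath:[folder stringByAppendingPathComponent:name]
                                   error:NULL];
                }
            }
        });
        dispatch_source_set_cancel_handler(src, ^{ close(fd); });
        dispatch_resume(src);
        return src;
    }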
I also investigated whether it was possible to kill the process that handles the screen capture; unfortunately that process, SystemUIServer, just relaunches after being killed.
SystemUIServer seems to refuse to take screenshots while DVD Player is playing a DVD. I have no idea how the DVD playback detection works, but it might be a lead for preventing screenshots.
Links
Technical details about Screenshots in Mac OS X
com.apple.screencapture details
ScreenCapture.strings - List of error messages from ScreenCapture
Disclaimer before people start ranting: I have a legit reason to solve this problem, but won't use the com.apple.screencapture -> /dev/null method due to its downsides.
You could try to run your application fullscreen and then capture all the keystrokes. But please listen to siride.
No; that's a system feature.
Related
So I was wondering: is there a way I can program an AI that reads something (mostly numbers, in a font and a particular area of the screen that I will specify) and then performs some clicks on the screen according to what it read? The data (numbers) will change constantly, and the AI will have to watch for these changes and act accordingly. I am not asking exactly how to do that; I am asking whether it is possible, and if so, which approach I should take (for example Python or something else) and where to start.
You need an OCR library such as OpenCV to recognize digits. The rest should be regular programming.
It is quite likely that your operating system doesn't allow you access to parts of the screen that are not owned by your application, so you are either blocked at this point, or you are restricted to parts of the screen owned by your application. (If I enter my details into my banking app on screen, I definitely don't want another app to be able to read them.)
Next, you'd need to find a way to read the pixels on the screen programmatically. That will differ greatly from OS to OS, so it is very unlikely to be built into your language's standard library. You might be able to interface with whatever is available on your OS, or find a library that does it for you. This will give you an image, made of pixels.
Then you need some OCR software to read the text. AI doesn't seem to be involved in any of this.
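To make the pixel-reading step concrete, here is a minimal macOS sketch using the documented CGWindowList API; every other OS needs entirely different calls, and the capture may be blocked or require permission, as noted above:

    #import <ApplicationServices/ApplicationServices.h>

    // Capture a rectangle of the screen into a CGImage that can then be
    // handed to an OCR library. `region` is in global display coordinates.
    CGImageRef CaptureScreenRegion(CGRect region) {
        return CGWindowListCreateImage(region,
                                       kCGWindowListOptionOnScreenOnly,
                                       kCGNullWindowID,
                                       kCGWindowImageDefault);
    }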
For my program, I need to be able to discriminate between users performing some action with a trackpad gesture and performing it with the corresponding hotkey. Typically, I need to know when users show the desktop, and whether they did it with the associated hotkey or the associated gesture. The same goes for switching spaces, etc.
Basically, I need this for showing Notification Center, application windows, show desktop, show Dashboard, etc. Being able to handle hot corners would even be a plus.
So far I was hoping to use global monitors for events with NSAnyEventMask and do some light reverse engineering to figure out what type the "Mission Control open" event is, but this was not a success. In fact, NSAnyEventMask does not seem to work at all, as my method is never called (while it is with other masks such as key down or mouse move).
I also had a look at the accessibility features, hoping I could add a relevant AXObserver notification, but did not find anything either. I guess this is not surprising, since the accessibility API provides a description of basic graphical components such as menus, windows, etc.; virtual spaces and Notification Center are therefore not described by it.
Finally, a CGEventTap does not seem to handle these events: when I use the function keys for showing the desktop, the only events my tap sees are the corresponding key down and key up events.
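For anyone wanting to reproduce that attempt, a listen-only tap over all event types looks roughly like this (documented CGEventTap API; as described, the function keys only surface as plain key down/up here):

    #import <ApplicationServices/ApplicationServices.h>

    // Pass every event through untouched, logging its type.
    static CGEventRef TapCallback(CGEventTapProxy proxy, CGEventType type,
                                  CGEventRef event, void *refcon) {
        fprintf(stderr, "event type: %d\n", (int)type);
        return event;
    }

    void InstallTap(void) {
        CFMachPortRef tap = CGEventTapCreate(kCGSessionEventTap,
                                             kCGHeadInsertEventTap,
                                             kCGEventTapOptionListenOnly,
                                             kCGEventMaskForAllEvents,
                                             TapCallback, NULL);
        CFRunLoopSourceRef src =
            CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
    }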
I suspect a few possible outcomes.
(1) I have tried valiantly, but it is simply not possible to handle these events... I seriously doubt this: first, I am far from being an amazing programmer, especially in Cocoa, and second, Apple has proven that it is possible to access lots of events programmatically, and I believe in the power of their APIs.
(2) I have tried the good methods, but failed because of side factors. It is likely.
(3) other methods could help me to handle these events globally and programmatically (private API?).
Thanks a lot for your help,
Kind regards,
Just saw this, but this is caused by an error in Apple's implementation of NSAnyEventMask. The docs describe NSAnyEventMask as 0xffffffffU, yet the implementation defines it as NSUIntegerMax, which is 0xffffffffffffffffU. This is possibly due to the transition from 32-bit to 64-bit machines, which changed NSUInteger from unsigned int to unsigned long. Replacing NSAnyEventMask with 0xffffffffU fixes the problem. I've already reported this as a bug to Apple in the hope they'll fix it.
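A minimal sketch of the workaround: install the global monitor with the 32-bit literal instead of the constant, then inspect what arrives:

    #import <Cocoa/Cocoa.h>

    // 0xffffffffU in place of the (broken) NSAnyEventMask constant.
    [NSEvent addGlobalMonitorForEventsMatchingMask:0xffffffffU
                                           handler:^(NSEvent *event) {
        // Whether gesture / Mission Control events show up here at all
        // still has to be determined empirically.
        NSLog(@"event type: %lu", (unsigned long)event.type);
    }];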
I'm trying to solve a specific problem (but this could benefit others) which from googling around doesn't seem to have a definitive solution. I think there are probably several partial solutions out there, I'd like to find the best of those (or a combination) that does the trick most of the time.
My specific example is: users in my app can send videos to each other and I'm going to allow them to save videos they have received to their camera roll. I would like to prevent them from forwarding the video on to others. I don't need to identify a particular video, just that it was originally saved from my app.
I have achieved a pretty good solution for images by saving some EXIF metadata that I can use to identify that the image was saved from my app and reject any attempts to forward it on, but the same solution doesn't work for videos.
I'm open to any ideas. So far I've seen suggested:
Using ALAssetRepresentation in some way to save a filename and then compare it when reading in, but I've read that upgrading iOS wipes these names out
Saving metadata. Not possible.
MD5. I suspect iOS would modify the video in some way on saving which would invalidate this.
I've had a thought about prepending a frame or two to the start of the video, perhaps an image which is a solid block of colour, magenta for example. Then when reading the video in, grab the first frame and do some kind of processing to identify this. Is this practical or even possible?
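For what it's worth, the read-back half of that idea is cheap to prototype. A sketch, assuming a solid magenta marker frame (the colour and tolerance are illustrative, and iOS re-encoding may smear the frame, so test against real saved output):

    #import <AVFoundation/AVFoundation.h>

    // Grab the first frame and test whether it is (roughly) solid magenta.
    static BOOL FirstFrameIsMarker(NSURL *videoURL) {
        AVAsset *asset = [AVAsset assetWithURL:videoURL];
        AVAssetImageGenerator *gen =
            [[AVAssetImageGenerator alloc] initWithAsset:asset];
        CGImageRef frame = [gen copyCGImageAtTime:kCMTimeZero
                                       actualTime:NULL error:NULL];
        if (!frame) return NO;

        // Scale the frame down to one pixel; the result is its average colour.
        uint8_t px[4] = {0};
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(px, 1, 1, 8, 4, space,
                                                 kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, 1, 1), frame);
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
        CGImageRelease(frame);

        // Magenta: full red and blue, no green (tolerance ~20).
        return px[0] > 235 && px[1] < 20 && px[2] > 235;
    }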
What are your thoughts on these, and/or can you suggest anything better?
Thanks!
Steven
There are two approaches you could try. Both only work under iOS 5.
1) Save the url returned by [ALAssetRepresentation url]. Under iOS 5 this URL contains a CoreData objectID and should be persistent.
2) Use the customMetadata property of ALAsset to append custom info to any asset you saved yourself.
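A minimal sketch of approach 1, assuming iOS 5 and the AssetsLibrary framework; `videoURL` here stands in for the local file your app just received:

    #import <AssetsLibrary/AssetsLibrary.h>

    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    [library writeVideoAtPathToSavedPhotosAlbum:videoURL
                                completionBlock:^(NSURL *assetURL, NSError *error) {
        if (!error) {
            // Persist assetURL somewhere durable; later, refuse to forward
            // any asset whose [representation url] matches a stored value.
            [[NSUserDefaults standardUserDefaults]
                setObject:assetURL.absoluteString forKey:@"savedFromMyApp"];
        }
    }];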
Cheers,
Hendrik
We're in the process of developing a desktop application which needs to record the user's screen once they click a button. I read a tutorial about Adobe AIR, which says it is easy to do with AIR: http://www.adobe.com/devnet/air/flex/articles/air_screenrecording.html
But our preference is Titanium, as we've explored it a little bit. So I want to know: is that even possible? If yes, how can we get started?
There's also an interesting solution which uses Java applet for recording, as demonstrated here: http://www.screencast-o-matic.com/create?step=info&sid=default&itype=choose
But again, we're not sure about Java and would like to know how it can be done, or if it's even possible to run a Java applet in Titanium.
When you say "record screen", I'm assuming you mean video. Correct?
The only way to do this in Titanium Desktop right now is to take a bunch of screenshots and string them together (encoding would probably need to be done server-side).
Depending on how long your videos need to be, this probably won't work for you. I'm also not confident in how quickly you could capture screenshots, and if it would have a high enough frame rate to be usable.
Past that, a module could be developed for Desktop to support some native APIs to record video. That's not something I see on the horizon, though.
I hope this helps, albeit a rather dismal answer. -Dawson
The stage in the Flash CS4 authoring environment is a running SWF. That's what makes things like the 3D and Bone tools work in the IDE.
Is it possible to access that SWF? I suspect the immediate answer would be no, perhaps because that would raise security issues and cause lots of developers to crash the IDE every 5 minutes :).
That said, I don't expect this to be a straightforward process, but I guess there should be a way to access it.
Any thoughts?
I can only tell you how components work on the stage, where we've attempted the type of access you talk about.
I suspect that at their core, the 3d and bone tools are implemented using component-like tech to display the "live" stage instance. In general this would involve a compiled instance of a live preview swf that is placed on the stage. It is misleading to think of the stage as a single player. Each component preview runs in its own sandbox that, as far as I can tell, has no means of communication with other component previews on the IDE stage. There is no common storage location.
Of course, if you were in charge of the preview SWF (as is the case with a component), you could try LocalConnection to chat, but the previews you want to penetrate are closed. I suspect if you dig hard enough, you'd find the bone/3D preview hidden in the installation folders (perhaps in a SWC; ik.swc looks interesting) and might be able to hack about at it with a decompiler, but straight out of the box, I'm not sure there's a solution to what you ask.