OSX: Hook file read event - api

I have a particular file I want to monitor for file read attempts by all applications on OSX. I'd like to be able to interrupt the requests so I could decide which applications have permission to read the file and which don't (by querying the user, or checking a cache of user responses). Is this possible with the OSX API? If not, is it even possible to get a list of which applications or processes do read a file?

I'm not saying there's no way to do this, but what @Jonathan is talking about isn't it.
That API is for tracking the creation, change, and destruction of files. Notably, it's used by things like Spotlight to watch activity on the filesystem for new, interesting files.
But, wisely, reading isn't one of the events it tracks.
And even if reading WAS tracked, it would still be the wrong mechanism, as it's an after-the-fact notification system, not something inline with the call itself.
I seriously doubt what you want is possible the way you describe it.
With Access Control Lists, you can limit access at the user level (Fred can read the file, but Bob can not). This is a setting on the file itself. But there's no mechanism to allow Bob's App1 to read a file while Bob's App2 can not, since there's really no formal notion of "application identity" beyond the command being executed, or whatever the program "says" its name is (both of which can be spoofed by someone motivated enough).
However, feel free to crawl the Darwin sources -- no doubt the answer is buried in there somewhere near the open(2) call.
EDIT, regarding comment.
What are you trying to do? What's the overall context?
Another thing that you may want to try is to use FUSE.
FUSE is a utility that lets you have "user space filesystems". People use FUSE for many purposes, like reading NTFS volumes or mounting remote systems over SSH.
They have a simple example that gives you a skeleton you can fill in for your purposes.
For most of the operations, you'll simply defer to the system. However, for OPEN you would add your logic. Then you could point your FUSE utility at a directory and "mount it", and all of the files below that directory would get your new behavior.
I'm still not sure how you will identify apps by name, but if it's not a real "security" issue, just local control, I imagine you can come up with something. Activity Monitor shows app names, so they must be available, and FUSE will be running within the process space (I think), rather than through some external mechanism.
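To make that concrete, here is a minimal, hedged sketch of what the OPEN hook could look like with the libfuse/macFUSE 2.x high-level API. The backing_root path and is_app_allowed() policy check are made-up placeholders; fuse_get_context() supplies the calling process's PID, and proc_pidpath() from OS X's libproc turns that into the caller's executable path. This is a sketch of the idea, not a complete filesystem (directory listing, writes, etc. are omitted).

```c
/*
 * Sketch of a FUSE "open gate" (libfuse/macFUSE 2.x): a read-only passthrough
 * that consults a policy hook before allowing open(2) on protected files.
 * backing_root and is_app_allowed() are hypothetical placeholders.
 *
 * Build (roughly): cc gate.c -o gate $(pkg-config fuse --cflags --libs)
 */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <libproc.h>           /* proc_pidpath(), OS X only */
#include <sys/proc_info.h>     /* PROC_PIDPATHINFO_MAXSIZE */

static const char *backing_root = "/Users/me/protected";  /* real files live here */

/* Hypothetical policy check: prompt the user or consult a cache of answers. */
static int is_app_allowed(const char *app_path, const char *file)
{
    fprintf(stderr, "open of %s requested by %s\n", file, app_path);
    return 1;  /* allow everything in this sketch */
}

static void full_path(char out[PATH_MAX], const char *path)
{
    snprintf(out, PATH_MAX, "%s%s", backing_root, path);
}

static int gate_getattr(const char *path, struct stat *st)
{
    char real[PATH_MAX];
    full_path(real, path);
    return lstat(real, st) == -1 ? -errno : 0;
}

static int gate_open(const char *path, struct fuse_file_info *fi)
{
    /* FUSE tells us which process is calling; libproc maps pid -> binary. */
    pid_t caller = fuse_get_context()->pid;
    char app[PROC_PIDPATHINFO_MAXSIZE] = "unknown";
    proc_pidpath(caller, app, sizeof(app));

    if (!is_app_allowed(app, path))
        return -EACCES;                    /* deny the open outright */

    char real[PATH_MAX];
    full_path(real, path);
    int fd = open(real, fi->flags);
    if (fd == -1)
        return -errno;
    fi->fh = fd;
    return 0;
}

static int gate_read(const char *path, char *buf, size_t size, off_t off,
                     struct fuse_file_info *fi)
{
    ssize_t n = pread((int)fi->fh, buf, size, off);
    return n == -1 ? -errno : (int)n;
}

static int gate_release(const char *path, struct fuse_file_info *fi)
{
    close((int)fi->fh);
    return 0;
}

static struct fuse_operations gate_ops = {
    .getattr = gate_getattr,
    .open    = gate_open,
    .read    = gate_read,
    .release = gate_release,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &gate_ops, NULL);
}
```

Mounted over a directory, anything opened through the mount point passes through gate_open() first; as noted below, this only protects access through the mount, not the real files underneath it.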
All that said, I think FUSE is your best bet, but it's probably not appropriate if you want this to work on "any file" with no preparation by the user (such as installing FUSE). If you wanted to cover "any file", your FUSE filesystem would need to be mounted at the root, and then you'd simply have a full "clone" of the filesystem, with the files under the normal root unprotected while those under your new FUSE root are protected. So if someone chose NOT to go through your FUSE mount, the real file would still be readily available to them at its actual location.

If not, is it even possible to get a list of which applications or processes do read a file?
The command-line tool fs_usage allows you to monitor filesystem activity, including reads. Run it as root; the -f filesys option restricts the output to filesystem-related calls, and you can pass a PID or command name to watch a specific process.

The writings of Amit Singh should come in very handy. He explored the API behind filesystem events a few years ago, and provided a sample tool that allows you to intercept FS events. It's open source!
If I remember his conclusion correctly, there isn't an official API, but you can use Apple's tools to achieve what you want.

How do small teams do secure backups of source code?

First of all, I don't mean version control such as git.
I do use git locally but, I'm trying to determine the best way to do back-ups of source code (as well as other app assets) in case of hardware failure or such.
I was thinking I could set up a script to tar my project folders, and encrypt them with gpg. I would then save the encrypted tar to external hard drives and to 1 or more off-site locations using a service such as amazon drive or dropbox.
Currently, I'm a sole developer so my thinking was that this method should be okay. But I wanted to get some input to make sure I'm doing this the best/most reliable way possible.
If there is a better approach to this that may be more applicable to small teams, then please let me know, as I'm more than happy to do the extra work implementing the approach.
There are many ways of doing that.
But if you always work locally and need something simple, you might look at running a script whenever a specific USB device is plugged in.
That way, a simple tar backup script would run whenever you plug in your backup HDD.
Take a look at udev rules in Linux.
udev is a generic device manager running as a daemon on a Linux system and listening (via a netlink socket) to uevents the kernel sends out if a new device is initialized or a device is removed from the system. The udev package comes with an extensive set of rules that match against exported values of the event and properties of the discovered device. A matching rule will possibly name and create a device node and run configured programs to set up and configure the device.
Take a look at these posts:
- https://unix.stackexchange.com/questions/65891/how-to-execute-a-shellscript-when-i-plug-in-a-usb-device
- https://askubuntu.com/questions/401390/running-a-script-on-connecting-usb-device
If you plan to go further (extending the team, or even just keeping your code around for a while), in other words if you want to be professional, I would go with a scalable and reliable tool designed for this: use a real backup and restore tool, not scripts. A lot of people and small (and even not-so-small) companies go the script route and end up in trouble with maintenance, scalability, updates, and so on.
There are plenty of backup & restore tools for different purposes, platforms, prices and so on. https://en.wikipedia.org/wiki/List_of_backup_software would be a good start :)
Cheers
Werlan

Exchanging work before accurev promote

My colleague and I are participating in a huge project hosted in AccuRev. We've already created our own workspaces backed by a stream (let's call it zzz-stream) which is used by many other participants, not only by us.
The point is that we want to exchange our work between our workspaces, make some changes, exchange again, etc. BEFORE making the changes accessible to others. In other words, we don't want to propagate our changes until they are stable and tested, but we do want to be able to work on them together.
My idea was to create a new stream (yyy-stream) backed by zzz-stream, and then change our workspaces to be backed by yyy-stream. But unfortunately I have no rights to create streams.
My second idea was to use a workspace as the backing stream, but that doesn't work because AccuRev can't use a workspace as a backing stream.
Is there any solution for our problem?
UPD: I accepted Brad's answer as the most detailed. However, AccuRev is too heavy and sluggish to be used effectively, so I actually prefer to use Git for internal needs on top of the AccuRev workspace (see "AccuRev externally, git internally").
Your idea of creating the yyy-stream is the EXACT right way to do it. The other options are decent workarounds for one-off situations, but creating the extra stream is simple and is fully leveraging AccuRev's capabilities.
That being said, I understand that your admins have stream creation locked down. They of course want control, but they should also be maximizing developer productivity rather than forcing workarounds like this. My guess is they have stream creation locked down to a particular group, enforced by the server-admin trigger. One common thing I have seen other large sites do is:
- allow streams to be freely created off of a list of acceptable streams (easy to do in the trigger)
- enforce naming rules on the stream creation. This is important to admins in large sites to keep things organized. Again, this is very easy to enforce via the server-admin trigger.
Bottom line, if this is a common situation, work with the admins to allow this capability as per the above. If they have any questions, they are more than welcome to contact AccuRev and we will help them out.
Your idea on using another stream for you and your peer is a good one and is commonly called a collaboration stream. If your site has stream creation locked down, you would need to work with your AccuRev administrator to make that happen.
Another option is for you and the other developer to pull the keeps from the other workspace into your own workspace. This relies on both of you being diligent about doing keeps; you can then look at the history of the other developer's workspace to find the keep operation, right-click that transaction and select Send to Workspace. The destination workspace must be your own.
A third option (more for a situation where you are in your workspace and know exactly which file you want to grab the other user's changes for) is to bring up the version browser for the file: right-click and select History/Browse Versions, look for the other workspace, highlight the version in that workspace, right-click and select Send to Workspace. This will check out that version into your workspace.
This is similar to the change palette suggestion, but quicker if you're looking to do this on a file-by-file basis.
Another idea is to use a different version control system (e.g. git or svn) on top of the AccuRev workspace to exchange the changes and keep our history separate from zzz-stream (similar to "AccuRev externally, git internally"). Only the changed files should be added to the other VCS, not the whole project. Some merge problems can occur, though.

win8 store app access local storage

I am developing a Win8 Store app which allows users to download different types of files from an online learning platform and store them locally. I am also considering a feature to help users organize these downloaded files by placing them in different folders (based on course name, etc.).
I was using the Documents Library previously. But for every type of file that the user could download, I need to add a file type association, which does not make a lot of sense since my app isn't really meant to open such files. So which local storage should my app use?
Many thanks in advance.
Kaizhi
Access to storage from Windows Store apps is quite restricted, especially for the DocumentsLibrary.
As you have noticed, you need to declare a file type association for every file type you want to read from or write to in the DocumentsLibrary. This means your app needs to handle file activations for these types in a meaningful way, which your app probably should not do.
But even if you jump through this hoop, there is another one that is not documented on the MSDN page of the DocumentsLibrary, but "hidden" in a lengthy page about app capability declarations: According to the current rules, you are not allowed to use the DocumentsLibrary for anything but offline access to SkyDrive! Bummer...
So what's left?
You can use SkyDrive or another cloud storage to put files in a well known place (which might or might not be somewhere on the hard disk). This is probably both overkill and undesirable in your case.
Or you save the files in the local app storage, provide your own in-app file browser and open the files with their default app. Seems viable to me.
Or, maybe, you can do something with share contracts or other contracts. I don't know much about these yet, but I doubt that they are helpful in your situation.
And that's it...
(Based on my current experience. No guarantee of correctness or completeness.)

Prevent multiple copies of a file on OS X

I have a file somewhere on the hard drive and I would like to make sure it is only accessed by a particular program and not:
- backed up by Time Machine
- copied by the Versions feature of OS X 10.7
- in any other way copied by the system, unless the user explicitly does so, e.g. by copying it to another directory.
Is it possible to do this programmatically in Objective-C or C?
As far as I know, using CSBackupSetItemExcluded should be enough - you'll need to link against the CoreServices framework to access this. This takes care of Time Machine and Versions. I'm not aware of any other cases where the system will automatically copy the file unless explicitly done by the user.
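As a rough illustration of that call (a sketch under assumptions, not a drop-in solution; the file path is made up), marking a file excluded from Time Machine looks roughly like this in C, linked against CoreServices:

```c
/*
 * Sketch: exclude a single file from Time Machine backups using the
 * CSBackupSetItemExcluded() call mentioned above. The path is hypothetical.
 * Build (roughly): cc exclude.c -framework CoreServices -o exclude
 */
#include <CoreServices/CoreServices.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *path = "/Users/me/private/secret.dat";   /* hypothetical file */
    CFURLRef url = CFURLCreateFromFileSystemRepresentation(
        kCFAllocatorDefault, (const UInt8 *)path, strlen(path), false);

    /* exclude = true; excludeByPath = false attaches the exclusion to the
       item itself (it follows the file if it moves). Passing true instead
       excludes whatever lives at that path, which requires admin rights. */
    OSStatus err = CSBackupSetItemExcluded(url, true, false);
    if (err != noErr)
        fprintf(stderr, "CSBackupSetItemExcluded failed: %d\n", (int)err);

    CFRelease(url);
    return err == noErr ? 0 : 1;
}
```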
It is impossible to make sure with absolute certainty that only a particular program can access a local file on a user's computer. This is because all possible methods can be bypassed if the user is savvy enough.
A common (though complicated) way of doing this is by encrypting the file with a key that is provided by a web server. In order to acquire the key and unlock the file, the program has to contact the web server, authenticate, and then use the key to decrypt the file. If you change the keys often and tie them to the user, it becomes difficult for an attacker to bypass. The remaining attack is to dump the process memory while the file sits in memory unencrypted and access it that way; that's tough, but doable. This method stops all but the most sophisticated attackers. Much PDF and other document DRM is implemented this way (Amazon assigns a key to each device and install, but otherwise it's the same idea).
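To sketch what that decrypt-in-memory step could look like in C with CommonCrypto (this is only an illustration under assumptions: fetch_key_from_server() is a hypothetical stand-in for the authenticated key retrieval, and AES-256-CBC with PKCS#7 padding is just one reasonable choice):

```c
/*
 * Sketch of the "decrypt in memory with a server-supplied key" scheme using
 * CommonCrypto on OS X. fetch_key_from_server() is hypothetical; a real
 * implementation would authenticate the user and fetch the key over TLS.
 */
#include <CommonCrypto/CommonCryptor.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stub: obtain the per-user AES-256 key after authenticating. */
static int fetch_key_from_server(uint8_t key[kCCKeySizeAES256])
{
    (void)key;
    return -1;  /* not implemented in this sketch */
}

/* Decrypt an AES-256-CBC ciphertext entirely in memory; never write the
   plaintext to disk. Caller frees *plain_out on success. */
static int decrypt_in_memory(const uint8_t *cipher, size_t cipher_len,
                             const uint8_t iv[kCCBlockSizeAES128],
                             uint8_t **plain_out, size_t *plain_len)
{
    uint8_t key[kCCKeySizeAES256];
    if (fetch_key_from_server(key) != 0)
        return -1;

    uint8_t *plain = malloc(cipher_len + kCCBlockSizeAES128);
    if (plain == NULL) {
        memset(key, 0, sizeof(key));
        return -1;
    }

    size_t moved = 0;
    CCCryptorStatus st = CCCrypt(kCCDecrypt, kCCAlgorithmAES128,
                                 kCCOptionPKCS7Padding,
                                 key, sizeof(key), iv,
                                 cipher, cipher_len,
                                 plain, cipher_len + kCCBlockSizeAES128, &moved);
    memset(key, 0, sizeof(key));   /* don't leave the key lying around */
    if (st != kCCSuccess) {
        free(plain);
        return -1;
    }
    *plain_out = plain;
    *plain_len = moved;
    return 0;
}
```

The plaintext only ever exists in the process's memory, which is exactly the memory-dump attack surface described above.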

Remote (RDP) utility with mstscax.dll

I am looking for information on using mstscax.dll in VB. The goal is to create a utility that logs into a remote service in the same manner as remote desktop. However, my utility is not required to show the desktop.
I have a series of commands that I will start off with that will look for users, reset logins, shadow, and message. I have been using a batch file on my RDP to perform these functions, but we are already looking for more functionality and power than what the batch commands can offer.
I am googling 'mstscax.dll' but the results have been less than satisfactory although I continue to search. Does anyone have any good references? Is this even going to be possible?
If you are looking to list or perform operations on remote desktop sessions, you might find the Cassia library helpful. The library can list users logged on to a server, disconnect or logoff sessions, shadow sessions, and display message boxes in a session, among other things. (Note that the shadowing functionality requires a pre-release version of the library available on the project's build server -- use the artifacts link.)
I think you're supposed to use the msrdp.ocx control rather than that dll, though I've personally never used either so can't say for sure.
Edit: Add link
Here's a codeproject article about automating RDP:
http://www.codeproject.com/KB/cs/RemoteDesktop_CSharpNET.aspx