I'm writing a few config files and a directory structure from my Elixir/Erlang service, and I want specific permissions on both the directories and the files (rwxr-x--- (0750) and rw-r----- (0640), respectively). Basically, I do not want any "other" access on either.
For both consistency and performance reasons, I'd rather not do a write!/3 followed immediately by a chmod/2 every time. I thought I might need to use open/2 to get this kind of flexibility, but although the permission masks are documented in the module, they appear to be used only by chmod/2. I looked in the Erlang :file module to see whether this was one of those "use the Erlang module instead" cases, but did not find it there.
I've tried umask, which works fine when I'm running via mix from the command line, but not when deployed as a build product run from a systemd service. There I've tried setting both UMask=0027 and the equivalent through the environment, but it seemed to be ignored. I'd really rather set the permissions explicitly at create time than rely on a umask configured somewhere else to get the effect.
In Linux, file permissions are set either at create time, with open and O_CREAT, or afterwards using the chmod system call.
In Erlang, you have file:write_file_info to change the permissions, but when using the equivalent of O_CREAT (file:open), there's not a great deal of flexibility.
I did a quick search for the flag in the repository, and I think that the O_CREAT mode is fixed; you can see the lines here, where it's set to
#ifdef NO_UMASK
#define FILE_MODE 0644
#define DIR_MODE 0755
#else
#define FILE_MODE 0666
#define DIR_MODE 0777
#endif
However, umask is still applied when creating files.
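For example (a minimal sketch using GNU stat in a throwaway directory), a process umask of 027 turns those fixed 0666/0777 creation modes into exactly the 640/750 modes the question asks for:

```shell
#!/bin/sh
# Demonstrate umask 027 applied to the fixed 0666/0777 creation modes.
tmp=$(mktemp -d)
(
  umask 027
  touch "$tmp/f"          # created as 0666 & ~027
  mkdir "$tmp/d"          # created as 0777 & ~027
  stat -c '%a' "$tmp/f"   # 640
  stat -c '%a' "$tmp/d"   # 750
)
rm -rf "$tmp"
```

So as long as the service process itself carries the right umask, every file and directory it creates comes out with the desired modes without any follow-up chmod.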
That being said, if the file is opened so often that the extra permissions call is a concern, maybe it's worth keeping it open (and thus needing only a single chmod).
Or you could manually set the permissions on the top directory of the config tree: if "other" cannot read or traverse the top directory, it does not matter whether deeper files are readable.
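A minimal shell sketch of that idea (temporary paths stand in for the real config root): once the top directory denies "other" all access, the modes of anything beneath it are irrelevant to other users:

```shell
#!/bin/sh
# Lock down only the top-level directory; deeper entries can keep
# whatever modes the runtime gives them.
top=$(mktemp -d)              # stand-in for the real config root
chmod 750 "$top"              # rwxr-x---: "other" cannot enter or list
mkdir -p "$top/conf.d"
echo "key=value" > "$top/conf.d/app.conf"
stat -c '%a' "$top"           # 750
```

Because path resolution needs execute permission on every directory along the way, denying "other" the x bit at the top is enough to block access to everything below it.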
POSIX directory permissions include the "sticky" bit (S_ISVTX) which is described as limiting deletion or renaming to just the owner of a file, or to root. This is often used for directories such as /tmp and /var/tmp which may have permissions drwxrwxrwt to allow all users to create temporary files, but prevent other non-root users from deleting those files.
My question is about root's permission to modify files created by ordinary users within directories marked with the sticky bit.
Suppose an ordinary user creates a file in a sticky-bit-protected /var/tmp (which is on a local, non-NFS filesystem, with no SELinux restrictions):
echo "something" > /var/tmp/somefile
but then root tries to append to this file:
echo "else" >> /var/tmp/somefile
When I try this on some Linux systems (e.g. Debian-11, ArchLinux) this produces a bash: /var/tmp/somefile: Permission denied error. This seems an unexpected restriction on the powers of the superuser to change files in the local filesystem. Other flavours of Linux (e.g. Debian-10, Debian-9, Fedora-35) do not seem to have this restriction, despite no obvious differences in filesystem setup.
I've not been able to find any documentation suggesting that the sticky bit should prevent root from modifying such a file. For example, the POSIX documentation for sys/stat.h, which underpins chmod, says very little about behaviour other than the deletion of sticky-protected files.
Can anyone point me towards any official documentation of how the sticky bit should behave when the superuser tries to modify a file in a directory marked with the sticky bit, or what system settings influence this behaviour?
Answer found
The behavior you are seeing seems to depend on the fs.protected_regular Linux kernel parameter, introduced along with fs.protected_fifos by this commit, with the aim of fixing security vulnerabilities.
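You can check whether the protection is active on a given machine by reading the value straight from procfs (0 disables it; 1 and 2 are the hardened modes, with 2 also covering group-writable sticky directories):

```shell
#!/bin/sh
# 0 = protection off; 1/2 = hardened. Present on kernels >= 4.19.
cat /proc/sys/fs/protected_regular
```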
Solution:
sudo sysctl fs.protected_regular=0
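Note that a sysctl change made this way does not survive a reboot. To make it persistent, a drop-in under /etc/sysctl.d/ can be used (the filename below is arbitrary):

```
# /etc/sysctl.d/60-protected-regular.conf
fs.protected_regular = 0
```

Be aware that this re-opens the class of vulnerability the patch was meant to close, so it is best scoped to systems that genuinely need root to append to other users' files in sticky directories.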
Resources:
Since it was introduced as a hardening patch, the most detailed documentation is the kernel's own sysctl reference (Documentation/admin-guide/sysctl/fs.rst).
https://askubuntu.com/questions/1250974/user-root-cant-write-to-file-in-tmp-owned-by-someone-else-in-20-04-but-can-in/1251030#1251030
https://unix.stackexchange.com/questions/503111/group-permissions-for-root-not-working-in-tmp
I'm familiar with containers, but new to Singularity and I found myself fighting a broken Python installation in a Singularity container tonight. It turns out that this was because $HOME was being mounted into my container without my knowledge.
I guess that I've developed a liking for the idiom "Explicit is better than implicit" from Python. To me, automatically mounting specific directories is unexpected behavior.
Three questions:
Why does Singularity default to mounting $HOME, /tmp, /proc, etc.?
So that I can become more comfortable with Singularity, what are some use cases for this behavior?
I see the --no-home flag, but is there a flag to disable all of the default mounts without needing to change the default Singularity configuration?
It's a mixture of design, convenience and technical necessity.
The biggest reason is that, unless you use certain params that say otherwise, Singularity images are read-only filesystems. You need somewhere to write output and any temporary files that get created along the way. Maybe you know to mount in your output dir, but there are all sorts of files that get created / modified / deleted in the background that we don't ever think about. Implicit automounts give reasonable defaults that work in most situations.
Simplistic example: you're doing a big sort-and-filter operation on some data, but you're printing the results to the console, so you don't bother to mount in anything but the raw data. But even after some manipulation and filtering, the size of the data exceeds available memory, so sort falls back to using small files in /tmp that are deleted when the process finishes. And then it crashes, because you can't write to /tmp.
You can require the user to manually specify what to mount to /tmp on each run, or you can use a sane default like /tmp and also allow it to be overridden by the user (SINGULARITY_TMPDIR, -B $PWD/fake_tmp:/tmp, --contain/--containall). These are all also configurable, so admins can set sane defaults specific to the running environment.
There are also technical reasons for some of the mounts. e.g., /etc/passwd and /etc/group are needed to match permissions on the host OS. The docs on bind paths and mounts are actually pretty good and have more specifics on the whats and whys, and even the answer to your third question: --no-mount. The --contain/--containall flags will probably also be of interest. If you really want to deep dive, there are also the admin docs and the source code on github.
A simple but real singularity use case, with explanation:
singularity exec \
--cleanenv \
-H $PWD:/home \
-B /some/local/data:/data \
multiqc.sif \
multiqc -i $SAMPLE_ID /data
--cleanenv / -e: You've already experienced the fun of unexpected mounts; there are also unexpected environment variables! --cleanenv/-e tells Singularity not to persist the host execution environment in the container. You can still use, e.g., SINGULARITYENV_SOMEVAR=23 to have SOMEVAR=23 inside the container, as that is explicitly set.
-H $PWD:/home: This mounts the current directory into the container at /home and sets HOME=/home. While using --contain/--containall and explicit mounts is probably a better solution, I am lazy and this ensures several things:
the current directory is mounted into the container. The implicit mounting of the working directory is allowed to fail, and will do so quietly, if the base directory does not exist in the image. E.g., if you're running from /cluster/my-lab/some-project and there is no /cluster inside your image, it will not be mounted in. This is not an issue when using explicit binds directly (-B /cluster/my-lab/some-project) or when an explicit bind shares a path prefix (-B /cluster/data/experiment-123) with the current directory.
the command is executed from the context of the current directory. If $PWD fails to be mounted as described above, Singularity uses $HOME as the working directory instead. If both $PWD and $HOME fail to mount, / is used. This can cause problems if you're using relative paths and you aren't where you expected to be. Since it depends on the path on the host, it can be really annoying when trying to reproduce a problem locally.
the base path inside the container is always the same regardless of the host OS file structure. Consistency is good.
The rest is just the command that's being run, which in this case summarizes the logs from other programs that work with genetic data.
By default, BlueZ stores its persistent data in /var/lib/bluetooth. This includes controller settings and information about paired devices. However, I'm working in a system where the /var directory is unreliable, so I wonder if there is any way I can change this directory?
I have seen examples where it can be changed during installation, with the "--localstatedir" flag, but I'm looking for a solution that doesn't require reinstallation.
Without reinstalling, it's not possible: the path is configured at compile time, so recompilation and reinstallation are required. Alternatively, you could patch the source so that the STORAGEDIR macro is replaced by a path read from main.conf at runtime; with that change in place, you restart bluetoothd whenever you change the path and it takes effect.
I have a C++ command-line application that I have already compiled into an executable and added to my Xcode project. I have also added a "Copy Files" section to the Build Phases tab of the project properties and added my executable with the "Executables" destination. When I build my application, I see it in the test.app/Contents/MacOS folder when I View Package Contents on the test.app that is built.
I also have App Sandbox enabled on the Capabilities tab of the project (so that I can distribute my application through the Mac App Store).
How can I expose this command line executable that is bundled with my application to the user so that they can run it from the command line (terminal)? I have not been able to find anything on search engines or on StackOverflow about how to get this file (or a symlink to it) into the user's PATH. I tried using an NSTask to create a symlink, but that only works if I disable the App Sandbox (which makes sense). Has anyone done this before? How did you get it to work? Or can these executables only be executed by code within your application?
I don't see a good way to do this. First, a clarification: the PATH is a list of directories that contain executables, not a list of executables; there's no way to add a single executable to the PATH. Instead, what you'd need to do is either put your executable into one of the directories in the user's PATH, or add the directory your executable is in into the PATH.
On OS X, the default PATH is /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin. The first 4 directories shouldn't be modified from the system default, so only /usr/local/bin is a possibility. But creating it (it doesn't exist by default) would require admin (actually root) rights, which isn't allowed by App Store policies. So that's out.
That leaves modifying the user's PATH. The "right" way to do that system-wide is by placing a file in /etc/paths.d, which requires admin (/root) rights, so that's out too. Technically modifying the /etc/paths file would work, but that has the same permissions problem plus it's the wrong way to do customization.
The next possibility is to modify (/create) the user's shell initialization script(s). This'll work, but doing it at all right is going to be messy, because there are several shells the user might use, each with several different possible initialization scripts that the user might or might not have created...
Let's take a very simple case: a user who only ever uses bash, and who doesn't already have any initialization scripts. When a "login" instance of bash starts, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile (in that order), and runs the first one it finds. But your app doesn't know which shell the user uses, so you'd better create ~/.profile so zsh and ksh will use it as well. So your app creates ~/.profile and puts this in it:
PATH="$PATH:/Applications/MyApp.app/Contents/Helpers"
Great, right? Yup, great, until the user runs something else that wants to set their PATH; it creates ~/.bash_profile, and this overrides your setup. After that, your executable will be in the PATH of zsh and ksh, but not bash. Whee.
And then one day the user decides to use tcsh instead, and it (and csh) have a completely different but equally messy pile of possible init files...
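If you do go the ~/.profile route anyway, an idempotent append at least avoids stacking up duplicate entries every time the snippet runs (the Helpers path is the hypothetical one from above):

```shell
#!/bin/sh
# Append a directory to PATH only if it is not already present.
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;                # already present: do nothing
    *) PATH="$PATH:$1" ;;
  esac
}
add_to_path "/Applications/MyApp.app/Contents/Helpers"
add_to_path "/Applications/MyApp.app/Contents/Helpers"  # second call is a no-op
echo "$PATH"
```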
I'm having an issue with Linked Resources in Flash Builder. I work in a team environment where we use Linked Resources extensively. We just started developing ANEs and noticed that while linked resources are used in the libraryPathEntry, in buildTarget entries like anePathEntry and airCertificatePath the absolute path is stored. I tried editing the .actionScriptProperties file directly, modifying the build-target absolute paths to their linked-resource equivalents using the libraryPathEntry as a guide, but Flash Builder complained when loading the project.
Is there a way to get the buildTargets to respect linkedResources and not save the absolute path? I'm trying to avoid the draconian way where all developers must have the exact same directory structure.
Thanks!
Randy
My team had this exact problem, and all attempts to fix it with relative paths or workspace macros (e.g. ${PROJECT_LOC}) failed. It seems as if the team in charge of Flash Builder neglected to support relative paths in these particular dialogs, despite their being supported elsewhere.
Here is what we have done to fix this problem. I am assuming you are on a Mac/Linux or the like. If not, the concept here can still be applied.
Most of our projects already have a "set up" bash script that contributors run when they get code. Inside of that script, we simply set up a couple of symbolic links from the user specific absolute path, to a new absolute path with a "common" user. The script first creates the directory if it does not exist, and then creates the symlinks.
sudo mkdir -p /Users/common/<project>/
sudo ln -f -h -s ~/path/to/certificate/dir /Users/common/<project>/certificates
Obviously you can use whatever you like and whatever makes sense for the common path.
Now, in your .actionScriptProperties file you can change the locations pointed to by provisioningFile and airCertificatePath to this new common absolute path.
<buildTarget ... provisioningFile="/Users/common/<project>/certificates/provisionfile.mobileprovision" ... >
<airSettings airCertificatePath="/Users/common/<project>/certificates/cert.p12" ... >
We actually take this a step further (and I suspect you will need to also) and create common symlink paths for the ANE files themselves. This ends up changing the anePathEntry to the common path as well.
<anePathEntry path="/Users/common/<project>/anes/some.ane"/>
You will need to make sure that you either hand edit the .actionScriptProperties file directly, or type in the fully qualified symlink path into the dialogs directly. Any attempt at using the Finder dialog launched by Flash Builder to navigate to the files in the common location resulted in the symlinks being auto-resolved to their actual locations.
The script requires sudo, which as I'm sure you know, will require that the users of it know their root password. Maybe some more bash savvy folks can suggest a way around sudo if this is not an option for you.
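One sudo-free option on macOS is /Users/Shared, which is world-writable by default, so no root is needed to create the common prefix. A sketch (temp directories stand in for the real paths, and the portable ln -sfn replaces the macOS-only ln -f -h -s):

```shell
#!/bin/sh
# Sudo-free variant: create the common prefix under a world-writable
# location such as /Users/Shared instead of /Users/common.
shared=$(mktemp -d)   # stand-in for /Users/Shared/<project>
certs=$(mktemp -d)    # stand-in for ~/path/to/certificate/dir
ln -sfn "$certs" "$shared/certificates"
readlink "$shared/certificates"   # prints the per-user certificate dir
```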
This will work for android stuff as well I believe. I don't know if that matters to you or not.
Hope this helps!
It looks like this issue was called out in the Flash Builder 4.6 known issues:
http://helpx.adobe.com/flash-builder/kb/flash-builder-4-6-known.html
https://bugs.adobe.com/jira/browse/FB-32955
The bug is apparently fixed, but I haven't been able to check the new Flash Builder 4.7 beta yet:
http://blogs.adobe.com/flex/2012/08/flash-builder-4-7-beta-is-here.html