I frequently get the following Dropbox error. The command the error message proposes does fix it, but I'm trying to figure out what it does to my system, and whether there is a root cause at play.
Unable to monitor entire Dropbox folder hierarchy. Please run
echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf; sudo sysctl -p
and restart Dropbox to fix the problem.
Note: I strongly encourage you to actually DO the steps and not just read them if you want to learn about Linux!
If I type apropos inotify in a shell to see which manpages are about "inotify", I get these results:
$ apropos inotify
inotify (7) - monitoring filesystem events
inotify_add_watch (2) - add a watch to an initialized inotify instance
inotify_init (2) - initialize an inotify instance
inotify_init1 (2) - initialize an inotify instance
inotify_rm_watch (2) - remove an existing watch from an inotify instance
upstart-file-bridge (8) - Bridge between Upstart and inotify
apropos finds manpages. apropos is your friend. Remember apropos.
The first one looks promising, so let's try opening that with man inotify. You should now get the documentation of inotify. It says:
NAME
inotify - monitoring filesystem events
DESCRIPTION
The inotify API provides a mechanism for monitoring filesystem events.
Inotify can be used to monitor individual files, or to monitor directo‐
ries. When a directory is monitored, inotify will return events for
the directory itself, and for files inside the directory.
So now we've learned what inotify roughly does. Let's see if it also has something useful to
say about your error.
We can search in manpages by pressing /<term><enter>. So let's try /max_user_watches<enter>. That brings us to this section:
/proc/sys/fs/inotify/max_user_watches
This specifies an upper limit on the number of watches that can
be created per real user ID.
The /proc/sys/fs/inotify/max_user_watches file is the same as the fs.inotify.max_user_watches setting in /etc/sysctl.conf. They're just two different ways to access the same Linux kernel parameter.
You can press q to exit.
I can see what the current value is by using:
$ cat /proc/sys/fs/inotify/max_user_watches
524288
or:
$ sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
They both use the same underlying value in the Linux kernel.
Note that this value is actually more than five times larger than what Dropbox recommends! This is on my Ubuntu 15.10 system.
So now we have learned that:
inotify is a Linux facility for monitoring changes to files and directories.
We can set how many files or directories a user is allowed to watch simultaneously.
Dropbox gives the error "Unable to monitor entire Dropbox folder hierarchy".
From this information, it seems that Dropbox cannot watch as many files and directories as it needs to because fs.inotify.max_user_watches is too low.
What I recommend is:
Check /etc/sysctl.conf with any text editor as root. Make sure there are no fs.inotify.max_user_watches=100000 lines in there. If there are, remove them.
Reboot the system to restore the default value.
Check what the value of fs.inotify.max_user_watches is as described above.
Double it by running echo 'fs.inotify.max_user_watches=XXX' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p (a plain sudo echo ... >> /etc/sysctl.conf does not work, because the >> redirection is performed by your own shell, not by sudo). You don't need to reboot after this change.
Hope this error is now fixed. Time will tell. If not, try doubling the value again.
If that doesn't fix it, there may be another problem, or maybe you just need to increase it even more. It depends a bit on what the default value is for your system.
Note: Newer versions of systemd no longer load the /etc/sysctl.conf file; they only load files from the /etc/sysctl.d/ directory. Using a file in the /etc/sysctl.d directory should already be supported by most Linux distros, so I recommend you use that for future-proofing.
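If you want to go that route, a minimal sketch (the file name 90-inotify.conf is just an illustration; any name ending in .conf works):
echo 'fs.inotify.max_user_watches=XXX' | sudo tee /etc/sysctl.d/90-inotify.conf
sudo sysctl --system
The second command reloads all sysctl configuration files, including the new one.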
If you're wondering "why is there a limit in the first place?", consider what would happen if a program watched a million files. Would that still work? What about a billion? 10 billion? 10 trillion?
At some point, your system will run out of resources and crash. See here for some "fun" ways to do that ;-)
I have designed a GUI that I want to run as soon as I turn on my Raspberry Pi. It is currently set up to automatically log in as user on startup, but if that makes the process more difficult I can change that. My Raspi runs on Raspbian 10 (buster), which has made things difficult since I can only find tutorials for Raspbian 8 or so.
I have tried modifying the autostart folder, but it is not in the same location as in previous Raspbian versions and doesn't seem to work the way it used to. Tutorials say to create a .desktop file in /home/pi/.config/autostart, but I don't have a .config folder, or at least it's hidden. For me, autostart is in /etc/xdg/autostart, and when I try to create a new file there using nano in the terminal, I get the message [Directory '/etc/xdg/autostart' is not writable] and it doesn't save my file.
I have also tried calling my script in /etc/rc.local but it did nothing. Some have said it doesn't work for GUIs.
Here's what I type into terminal:
$ nano /etc/xdg/autostart/gui.desktop
and a new file pops up, but at the bottom I get the warning [Directory '/etc/xdg/autostart' is not writable]
How can I get my GUI script to run on startup with Raspbian 10 (buster)?
There are a number of issues here. First, when you are looking at tutorials, recognize that Linux distros are built in layers. For simplicity, let's say your "layer stack" looks like this: kernel, systemd, x11, xdg, lxde. The kernel boots, then starts systemd, which then starts x11 (and a lot of other stuff); x11 starts xdg (and some other stuff, I think); lxde is started by either x11 or xdg, I'm not sure which.
You want to add something to this process. You can do it at the kernel level (bad idea), at the systemd level (probably not right unless it's a daemon), at the x11 level (still probably bad, as you don't have a user session yet), or at the xdg or lxde level.
xdg is probably the right place, as it has everything you need (a GUI, a user session) while being common (xdg will still work if you switch window managers, probably).
With that out of the way, why isn't your solution of modifying xdg working? Because '/etc/xdg/autostart' is a system configuration directory: any changes made to it apply to all users. You may want this, but the system is trying to protect the other users on your system and only allows root to make changes that affect everyone. If you really want that, use "sudo" (documented elsewhere on Stack Exchange and the internet). If you want it just for your own user, use ~/.config/autostart (https://wiki.archlinux.org/index.php/XDG_Autostart); you might need to create that directory first with "mkdir -p ~/.config/autostart" and then create a .desktop file inside it with your editor of choice, as sketched below.
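A minimal sketch of such a .desktop file (the file name gui.desktop and the script path /home/pi/gui.py are placeholders; use your own):
mkdir -p ~/.config/autostart
cat > ~/.config/autostart/gui.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=My GUI
Exec=python3 /home/pi/gui.py
EOF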
Would it be better to have the python program run in a terminal window from startup? That way you would see what it is doing in case of errors.
If so, perhaps check this out https://stackoverflow.com/a/61730679/7575617
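For example (a sketch; lxterminal is assumed because it is the LXDE terminal that Raspbian ships, and the script path is again a placeholder), the Exec line of the .desktop file above could be changed to:
Exec=lxterminal -e python3 /home/pi/gui.py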
By the way, in the file manager, hit CTRL+H to toggle viewing hidden files and folders.
I am trying to run an application which uses pagemap in gem5 FS mode.
But I am not able to use pagemap in gem5. It throws the error below:
"assert(pagemap>=0) failed"
The line of code is:
int pagemap = open("/proc/self/pagemap", O_RDONLY);
assert(pagemap >= 0);
Also, if I try to run my application in the gem5 terminal with sudo, it throws the error:
sudo command not found
How can I use sudo in gem5?
These problems are not gem5 specific, but rather image / Linux specific, and would likely happen on any simulator or real hardware. So I recommend that you remove gem5 from the equation completely, and ask a Linux or image specific question next time, saying exactly what image you are using and which kernel configs, and providing a minimal C example that reproduces the problem: this will greatly improve the probability that you will get help.
I have just done open("/proc/self/pagemap", O_RDONLY) successfully with this program on this fs.py setup on aarch64; see also these comments.
If /proc/<pid>/pagemap is not present for any process, do the following:
ensure that procfs is mounted on /proc. This is normally done with an fstab entry of type:
proc /proc proc defaults 0 0
but your init script needs to use fstab as well.
Alternatively, you can mount proc manually with:
mount -t proc proc proc/
you will likely want to ensure that /sys and /dev are mounted as well.
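For example (a sketch, assuming the mount points /sys and /dev already exist in your image):
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev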
grep the kernel to see if there is some config controlling the file creation.
These kinds of things are often easy to find without knowing anything about the kernel.
If I do:
git grep '"pagemap'
to find the pagemap string, which is likely the creation point, on v4.18 this leads me to fs/proc/base.c, which contains:
#ifdef CONFIG_PROC_PAGE_MONITOR
REG("pagemap", S_IRUSR, proc_pagemap_operations),
#endif
so make sure CONFIG_PROC_PAGE_MONITOR is set.
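To check whether it is set (a sketch; /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC):
grep CONFIG_PROC_PAGE_MONITOR .config                    # in the kernel build tree
zcat /proc/config.gz | grep CONFIG_PROC_PAGE_MONITOR     # on the running system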
sudo: most embedded / simulator images don't have it; you just log in as root directly and can do anything by default without it. This can be seen from the conventional # in the prompt instead of $.
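A quick way to confirm this from inside the guest (a trivial check, nothing gem5-specific):
whoami    # prints "root" if you are already root
id -u     # prints 0 for root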
If I try to log in on Debian with XFCE, I get a black screen for a few seconds, then it flashes very briefly and puts me back at the login screen.
The strange thing is, if I go into a terminal using Ctrl + Alt + F1 (or any other F key), I can log in and get into the GUI using startx.
Everything works like usual.
I installed Debian the same way on 4 different machines but none of them had this error.
I used debian-8.2.0-i386-xfce.iso for installation with a USB stick.
Does anybody have an idea what could cause this behavior?
I had the same problem on Jessie 8.6 with kernel 4.7 and Cinnamon, and I did almost the same thing: I just changed the ownership of the /home/user/.Xauthority file and it also worked:
chown user:user ./.Xauthority
After some research, I found an entry at Debian User Forums where someone had almost the same issue, except that I could use startx and he couldn't. The problem was that some of the hidden files inside the user's home directory were owned by root. I still don't know why I could start the X server from the command line, but at least I can log in with the GUI again.
The solution
I went into the command line using CTRL + ALT + F1.
Then I logged in as root and did an ls inside the home directory of the affected user:
cd /home/username
ls -la
("-la" lists hidden files and shows the owner of each file)
Depending on how many files are owned by root, you can change the permissions of individual files, or be lazy like me and do:
chmod a+rwx *
(chmod changes the permissions for a user group)
"a" means ALL users (I have just one user on the machine)
"+" means to ADD rights
"rwx" means read, write and execute
and "*" means all files inside this directory
That means all users can now read, write (modify), and execute these files.
I know it's maybe not the cleanest solution, but it worked for me.
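A narrower alternative (a sketch; "username" is a placeholder for the affected user) is to give the files back to that user instead of opening up permissions for everyone:
chown -R username:username /home/username    # run as root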
I had this problem this morning and none of these fixes were working for me.
It turns out this was happening because my disk was full.
Deleting some large unused files fixed it after a restart.
This problem may also occur due to a corrupted xsession file; fix it by installing lxsession:
sudo apt-get install lxsession
I had similar issues
CTRL + ALT + F1 to log in via the CLI.
Then,
chown username:username .Xauthority
worked for me.
For me, the solution did not work even when I blindly gave every permission to every user. However, I found the problem in the .profile file in my home directory, where I had used some export commands and added entries to the PATH environment variable. Other files such as .bashrc, .Xauthority, or .xsession might also be the cause of the problem, so double-check those files. First back up the files, then remove all added lines and see the result.
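For example, a sketch of that back-up-then-test approach:
cp ~/.profile ~/.profile.bak        # keep a backup first
nano ~/.profile                     # remove the added export/PATH lines, save
# or temporarily move the whole file out of the way instead:
# mv ~/.profile ~/.profile.disabled
Then try logging in through the display manager again.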
First, I want to apologize for my poor English.
Is there any way to run bash scripts like configure on NTFS partitions?
Today I reinstalled my dual-boot system (Win7 & Mint 13), because my old system partitions were too big and I wasted too much space, so I decided to reformat the disk with two small system partitions and two bigger data partitions (40G [NTFS] for Win, 40G for Mint (35G [Ext4] + 5G swap), 2 * ~200G [NTFS]). OK, I guess that's enough for the preface.
So here comes my problem:
I cloned the wine git repo and stored it on one of my data partitions. Here comes my first problem: I couldn't run ./configure because there weren't any execute permissions on that file (I already solved setting the file permissions, with user mapping to use the NTFS ACLs). But after setting the execute permissions I'm still not able to run ./configure; I just get the error message: bash: ./configure: Permission denied (just for the record, yes, I also tried running it as root).
So, does anybody know how I can run a configure script on an NTFS partition?
NTFS doesn't support permissions in the same way as EXT and similar volumes do. The problem you're running into is that since these permissions are not stored on the disk, defaults are loaded at mount time for the entire volume and changes are silently ignored after that.
You should be able to mount it with execute permissions with the following:
mount [devicename] [directory] -o defaults,remount
You will need to be the superuser. You do NOT include the brackets around the filenames (though they will need to be in quotes if they contain spaces.)
You can figure out what the devicename and directory are by using:
mount -l
which will list all mounted devices and their mount points. You should not need to be the superuser to issue this command.
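Putting it together, a sketch (the device and mount point here are placeholders; use the ones mount -l showed you):
sudo mount /dev/sda5 /media/data -o defaults,remount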
On Fedora 17 I use the following commands to mount an NTFS volume with all executable permissions set correctly:
sudo mkdir /run/media/ohmyname/shared
sudo ntfsmount /dev/sda8 /run/media/ohmyname/shared
On Fedora 26, everything is as simple as it could be.
I mounted the Win 10 partition with write permissions using the following command:
sudo ntfsfix /dev/sda9
I have set up KVM and libvirt on one of the Dell poweredge1000m blades. I am using the following syntax to install a virtual machine from an existing image (executing as root).
virt-install --name=vm_test --ram=1024 --arch=i686 --vcpus=1 --os-type=linux --import --disk path=/root/shared.qcow2,bus=virtio,format=qcow2 --graphics vnc,port=5901,listen=0.0.0.0,password=newone --noautoconsole --description --autostart
I am getting the following error.
Starting install...
ERROR internal error process exited while connecting to monitor: char device redirected to /dev/pts/1
open /dev/kvm: Permission denied
failed to initialize KVM: Operation not permitted
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
virsh --connect qemu:///system start vm_test
otherwise, please restart your installation.
I have used exactly the same command on one of the other desktop hosts and it works there. I can install a VM from virt-manager using an ISO image, with virt-manager storing the disk image at the default location.
It seems like a file permissions error to me, as it is not working with the /vms directory but is working with another directory, /home/vm.
Thanks in advance for any help.
I got the same error message on a server which had been running libvirt for weeks.
Setting libvirt to run as root (as mentioned in the link) didn't work for me.
However, granting read & execute access to /var/lib/libvirt/images solved my problem.
chmod go+rx /var/lib/libvirt/images
chmod o-rwx /var/lib/libvirt/images/*
If you follow all the instructions on creating a VM using libvirt, you may still hit the error message above. The root cause can be AppArmor, which ships with recent Ubuntu distributions. The easiest way out is to remove AppArmor, if security is not a concern.
Ubuntu's official documentation gives plenty of advice on disabling AppArmor:
Disable AppArmor
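If you'd rather not remove it entirely, a sketch of disabling just the libvirt profile (this assumes a stock Ubuntu setup where the profile file is /etc/apparmor.d/usr.sbin.libvirtd):
sudo aa-status | grep libvirt                                       # see which AppArmor profiles are loaded
sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.libvirtd           # unload the profile from the kernel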
I found the solution to my problem; here it is.
The real reason was that /vms was an NFS mount, and its configuration (no_root_squash + rw) was such that it had to be accessed as root.
By default, libvirt runs a virtual machine with the user and group permissions of libvirt-qemu:kvm, which causes trouble here even if you run the command with sudo privileges. So we need to set the qemu process's user & group to root in /etc/libvirt/qemu.conf.
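A sketch of what that looks like (these keys exist commented out in qemu.conf; the restart command assumes a systemd-based host):
# in /etc/libvirt/qemu.conf
user = "root"
group = "root"
Then restart the daemon so it picks up the change:
sudo systemctl restart libvirtd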
Also, as others have pointed out, there can be multiple other reasons for this error, and it's sad that libvirt throws such a generic one.
The least frustrating approach is to give all permissions, disable SELinux, and make sure it runs. Then revoke the permissions one by one, testing that it still works at each step, until you finally understand which permissions were actually required.
This can happen if the modules were loaded too soon™ (the actual problem is not known to me, so please enhance this answer if you know it).
Just try to unload the modules and load them again. This did the trick for me:
rmmod kvm_intel # use kvm-amd if you use an amd processor.
rmmod kvm
modprobe kvm
modprobe kvm_intel # use kvm-amd if you use an amd processor.
I got this permission denied error on Arch. The problem turned out to be the access control list. Even though the Unix permissions showed group rw, getfacl showed group::---. This fixed it for me:
setfacl -m g::rw /dev/kvm
I ran into this same problem. After looking into it, I found it was a permissions problem. You can just run the command below to deal with it:
chown root:kvm /dev/kvm
and you don't need to reboot.