Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 2 years ago.
First, I want to apologize for my poor English.
Is there any way to run bash scripts like configure on NTFS partitions?
Today I reinstalled my dual-boot system (Win7 & Mint 13). My old system partitions were too big and I was wasting too much space, so I decided to reformat the disk with two small system partitions and two bigger data partitions: 40 GB [NTFS] for Windows, 40 GB for Mint (35 GB [ext4] + 5 GB swap), and two ~200 GB [NTFS] data partitions. I guess that's enough for the preface.
So here comes my problem:
I cloned the wine git repo and stored it on one of my data partitions. The first problem was that I couldn't run ./configure because the file had no execute permission (I already solved setting file permissions by configuring user mapping so the NTFS ACLs are used). But even after setting the execute permission, I still can't run ./configure; I just get the error message bash: ./configure: Permission denied (for the record, yes, I tried running it as root).
So, does anybody know how I can run a configure script on an NTFS partition?
NTFS doesn't support permissions in the same way as EXT and similar volumes do. The problem you're running into is that since these permissions are not stored on the disk, defaults are loaded at mount time for the entire volume and changes are silently ignored after that.
You should be able to mount it with execute permissions with the following:
mount [devicename] [directory] -o defaults,remount
You will need to be the superuser. Do NOT include the brackets around the device name and directory (though they will need to be in quotes if they contain spaces).
You can figure out what the devicename and directory are by using:
mount -l
which will list all mounted devices and their mount points. You should not need to be the superuser to issue this command.
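If you want the volume mounted with execute permission on every boot, the options can go into /etc/fstab instead. This is a sketch; the device path, mount point, and uid/gid are assumptions to adjust for your system:

```
# /etc/fstab entry (assumed device and mount point)
# exec              allow running binaries/scripts from the volume
# fmask/dmask=0022  files and directories end up rwxr-xr-x
# uid/gid=1000      own everything as the first regular user
/dev/sda5  /mnt/data  ntfs-3g  defaults,exec,uid=1000,gid=1000,fmask=0022,dmask=0022  0  0
```

After editing, unmount and re-mount the volume to apply the new options.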
On Fedora 17 I use following commands to mount NTFS volume with all executable permissions set correctly:
sudo mkdir /run/media/ohmyname/shared
sudo ntfsmount /dev/sda8 /run/media/ohmyname/shared
On Fedora 26, everything is as simple as it could be.
I made a Windows 10 partition mountable with write permissions using the following command:
sudo ntfsfix /dev/sda9
(ntfsfix clears the NTFS "dirty" flag that otherwise forces a read-only mount.)
I'm trying to run xubuntu-desktop on WSL as per the tutorial given by many sites. But I can't seem to connect to the display of VcXsrv and it always shows:
xfce4-session: Cannot open display: .
Type 'xfce4-session --help' for usage.
If I run startxfce4, it says:
/usr/bin/startxfce4: X server already running on display muhammadsalmanafzal:0.0
xrdb: Connection refused
xrdb: Can't open display 'muhammadsalmanafzal:0.0'
xfce4-session: Cannot open display: .
Type 'xfce4-session --help' for usage.
This happens even though the VcXsrv window opened by XLaunch is closed.
Can anybody help me look for the error? What am I doing wrong?
Also, when I first installed xubuntu-desktop, at the very end it said:
Errors were encountered while processing:
blueman
E: Sub-process /usr/bin/dpkg returned an error code (1)
Then I read somewhere to remove it, so I did, reinstalled xubuntu-desktop, and no error was given.
If you are running WSL 1, you need to add the following line to the .bashrc in your home directory:
export DISPLAY=:0.0
and start a new bash session.
However, if you are running WSL 2, you need to get the IPv4 address of your WSL network (by converting to WSL 2, it becomes a separate virtual network) by checking ipconfig in PowerShell, and then export DISPLAY with that address in .bashrc.
[EDIT]
I think that after the Windows 20H2 update, the above solution stopped working. So, as per the official recommendation on Ubuntu's site, you can add the following lines to your ~/.bashrc and restart your shell.
export DISPLAY=:0 # in WSL 1
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0 # in WSL 2
export LIBGL_ALWAYS_INDIRECT=1
A better solution is to run:
export DISPLAY=$(awk '/^nameserver /{print $2; exit}' /etc/resolv.conf):0
The WSL 2 address changes on every restart, so it cannot simply be hard-coded.
https://qiita.com/baibai25/items/5841b0592727893d960f
The following two approaches worked for me. I use WSL2.
Approach 1:
The title bar of VcXsrv says something like VcXsrv Server Display MACHINENAME:0.0.
Start xfce4 with this command: xfce4-session --display=MACHINENAME:0.0
Replace MACHINENAME with the name of your PC.
Reference: https://github.com/Microsoft/WSL/issues/1800#issuecomment-455791220
Approach 2:
Add the following line to .bashrc file.
export DISPLAY=MACHINENAME:0.0
Again, MACHINENAME should be the name of your PC.
Save the file and restart WSL2.
Now you can just use the command xfce4-session.
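The WSL 1 and WSL 2 cases above can be combined into a single ~/.bashrc fragment that picks DISPLAY automatically. This is a sketch; it assumes stock WSL 2 networking, where the nameserver in /etc/resolv.conf is the Windows host running the X server:

```shell
# Sketch for ~/.bashrc: choose DISPLAY based on the WSL version.
# Assumes stock WSL 2 networking, where /etc/resolv.conf's nameserver
# is the Windows host running the X server (VcXsrv).
if grep -qi 'microsoft.*wsl2\|wsl2.*microsoft' /proc/version 2>/dev/null; then
    export DISPLAY="$(awk '/^nameserver /{print $2; exit}' /etc/resolv.conf):0"
else
    export DISPLAY=:0                  # WSL 1 shares the Windows network stack
fi
export LIBGL_ALWAYS_INDIRECT=1         # route OpenGL through the X server
```

On anything other than WSL 2 (including WSL 1 and plain Linux) this falls back to :0.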
I frequently get the following Dropbox error. The error message's proposal fixes the error, but I'm trying to figure out what it's doing to my system, and perhaps if there is a root cause at play.
Unable to monitor entire Dropbox folder hierarchy. Please run
echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf; sudo sysctl -p
and restart Dropbox to fix the problem.
Note: I strongly encourage you to actually DO the steps and not just read them if you want to learn about Linux!
If I type apropos inotify in a shell to see which manpages are about "inotify", I get these results:
$ apropos inotify
inotify (7) - monitoring filesystem events
inotify_add_watch (2) - add a watch to an initialized inotify instance
inotify_init (2) - initialize an inotify instance
inotify_init1 (2) - initialize an inotify instance
inotify_rm_watch (2) - remove an existing watch from an inotify instance
upstart-file-bridge (8) - Bridge between Upstart and inotify
apropos finds manpages. apropos is your friend. Remember apropos.
The first one looks promising, so let's try opening it with man inotify. You should now get the documentation of inotify. It says:
NAME
inotify - monitoring filesystem events
DESCRIPTION
The inotify API provides a mechanism for monitoring filesystem events.
Inotify can be used to monitor individual files, or to monitor directo‐
ries. When a directory is monitored, inotify will return events for
the directory itself, and for files inside the directory.
So now we've learned what inotify roughly does. Let's see if it also has something useful to
say about your error.
We can search in manpages by pressing /<term><enter>. So let's try /max_user_watches<enter>. That brings us to this section:
/proc/sys/fs/inotify/max_user_watches
This specifies an upper limit on the number of watches that can
be created per real user ID.
The /proc/sys/fs/inotify/max_user_watches file is the same as the fs.inotify.max_user_watches setting in /etc/sysctl.conf. They're just two different ways to access the same Linux kernel parameter.
You can press q to exit.
I can see what the current value is by using:
$ cat /proc/sys/fs/inotify/max_user_watches
524288
or:
$ sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
They both use the same underlying value in the Linux kernel.
Note that this value is actually five times larger than what Dropbox recommends! This is on my Ubuntu 15.10 system.
So now we have learned that:
inotify is a Linux system to monitor changes to files and directories.
That we can set how many files or directories a user is allowed to watch simultaneously.
That Dropbox gives the error "Unable to monitor entire Dropbox folder hierarchy".
From this information, it seems that Dropbox is unable to watch enough files and directories for changes because fs.inotify.max_user_watches is too low.
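You can sanity-check this yourself: inotify needs roughly one watch per watched directory, so count the directories under the Dropbox folder and compare against the limit. A sketch, assuming ~/Dropbox is the sync location:

```shell
# One inotify watch is needed per watched directory, so count them
# under the assumed sync folder and compare against the per-user limit.
watches_needed=$(find ~/Dropbox -type d 2>/dev/null | wc -l)
limit=$(cat /proc/sys/fs/inotify/max_user_watches)
echo "directories: $watches_needed, limit: $limit"
```

If the first number approaches the second, you have found your culprit.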
What I recommend is:
Check the /etc/sysctl.conf with any text editor as root. Make sure there are no fs.inotify.max_user_watches=100000 lines here. If there are, remove them.
Reboot the system to restore the default value.
Check what the value of fs.inotify.max_user_watches is as described above.
Double that by running echo 'fs.inotify.max_user_watches=XXX' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p /etc/sysctl.conf. (Note that plain sudo echo ... >> /etc/sysctl.conf would fail, because the redirection is performed by your unprivileged shell, not by sudo.) You don't need to reboot after this change.
Hope this error is now fixed. Time will tell. If not, try doubling the value again.
If that doesn't fix it, there may be another problem, or maybe you just need to increase it even more. It depends a bit on what the default value is for your system.
Note: Newer versions of systemd no longer load the /etc/sysctl.conf file, it will only load files from the /etc/sysctl.d/ directory. Using a file in the /etc/sysctl.d directory should already be supported by most Linux distros, so I recommend you use that for future-proofing.
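A sketch of the sysctl.d approach. The file name 90-dropbox-inotify.conf and the value are assumptions; the target directory is a parameter so you can stage and inspect the file before installing it for real:

```shell
# Sketch: write the inotify limit as a sysctl drop-in file.
# The directory is a parameter so the file can be staged and inspected
# first; the file name 90-dropbox-inotify.conf is an assumption.
write_inotify_dropin() {
    dir=$1; value=$2
    printf 'fs.inotify.max_user_watches=%s\n' "$value" > "$dir/90-dropbox-inotify.conf"
}

# For real use (as root), then reload all sysctl configuration:
#   write_inotify_dropin /etc/sysctl.d 1048576
#   sysctl --system
```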
If you're wondering "why is there a limit in the first place?" then consider what would happen if a program would watch a million files. Would that still work? And what about a billion? 10 billion? 10 trillion? etc.
At some point, your system will run out of resources and crash. See here on some "fun" ways to do that ;-)
If I try to log in on a Debian system with XFCE, I get a black screen for a few seconds, then a very short flash, and I'm put back at the login screen.
The strange thing is, if I go to a terminal using Ctrl + Alt + F1 (or any other F key) I can log in and get into the GUI using startx.
Everything works like usual.
I installed Debian the same way on 4 different machines but none of them had this error.
I used debian-8.2.0-i386-xfce.iso for installation with a USB stick.
Does anybody have an idea what could cause this behavior?
I had the same problem using Jessie 8.6 with kernel 4.7 and Cinnamon, and I did almost the same: I just changed the ownership of the /home/user/.Xauthority file and it also worked:
chown user:user ./.Xauthority
After some research, I found an entry on the Debian User Forums where someone had almost the same issue, except that I could use startx and he couldn't. The problem was that some of the hidden files inside the user's home directory were owned by root. I still don't know why I could start the X server from the command line, but at least I can log in with the GUI again.
The solution
I went into the command line using CTRL + ALT+ F1
Then I logged in as root and ran ls inside the home directory of the affected user:
cd /home/username
ls -la
("-la" lists hidden files and shows the owner of each file.)
Depending on how many files are owned by root, you can change the rights of individual files, or be lazy like me and run:
chmod a+rwx *
(chmod changes file permissions.)
"a" means ALL users (I have just one user on the machine),
"+" means to ADD rights,
"rwx" means read, write and execute,
and * means all files inside this directory.
That means all users can now read, write (modify) and execute these files.
I know it's maybe not the cleanest solution, but it worked for me.
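A narrower alternative to the blanket chmod is to find exactly which files root owns and re-own only those. A sketch; "username" is a placeholder for the affected account:

```shell
# List everything under a directory that is owned by a given user.
# Usage: list_owned_by <directory> <owner>
list_owned_by() {
    find "$1" -user "$2" -ls
}

# Run as root against the affected account, e.g.:
#   list_owned_by /home/username root
# then re-own the offenders in one pass:
#   find /home/username -user root -exec chown username:username {} +
```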
I had this problem this morning and none of these fixes were working for me.
It turns out this was happening because my disk was full.
Deleting some large unused files fixed it after a restart.
This problem may also occur due to a corrupted X session file; fix it by installing lxsession:
sudo apt-get install lxsession
I had similar issues
CTRL+alt+f1 to login via CLI
Then,
chown username:username .Xauthority
worked for me.
For me, the solution did not work even after I blindly gave every permission to every user. However, I found the problem in .profile in my home directory, where I had used some export commands and additions to the PATH environment variable. Other files such as .bashrc, .Xauthority, or .xsession might also be the cause. Double-check those files: back them up first, then remove all added lines and see the result.
Directly on my Debian box, I can run the following command to show manually installed packages:
aptitude search '!~M ~i'
This works great. If I SSH in from a remote box, and run the command, I also get the same result.
However, when I run the command as a batch, it does not produce the same result.
ssh user@server aptitude search '!~M ~i'
Since the process takes a bit of time to run, I executed ps aux | grep aptitude while running both variants, and the command lines appear to be the same.
What am I doing wrong?
PS. I am aware that dpkg -L can produce this information, but this is just the smallest example of what is broken, I intend to use !~pstandard !~pimportant !~prequired to filter out base packages as well, which I don't believe dpkg can do (but if it can, a solution with dpkg is welcome.)
Using information from Bash - Escaping SSH commands, I was able to create a command that worked:
ssh user@server $(printf '%q ' aptitude search '!~M ~i')
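Why the plain command breaks: ssh joins its arguments with spaces and hands the result to the remote shell for a second round of parsing, so the local single quotes are already gone by then. The effect can be reproduced locally with bash -c (printf stands in for aptitude here):

```shell
# What the remote shell effectively receives and re-parses; the search
# pattern arrives as two unquoted words instead of one quoted argument:
bash -c 'printf "[%s]\n" aptitude search !~M ~i'
# [aptitude]
# [search]
# [!~M]
# [~i]   (and ~i could even be tilde-expanded if a user "i" existed)
```

printf %q re-escapes each word so it survives the second parse intact.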
If the target is a more recent Debian/Ubuntu, you can use:
ssh user@server apt-mark showmanual
I have set up KVM and libvirt on one of our Dell PowerEdge 1000m blades. I am using the following command to install a virtual machine from an existing image (executing as root):
virt-install --name=vm_test --ram=1024 --arch=i686 --vcpus=1 --os-type=linux --import --disk path=/root/shared.qcow2,bus=virtio,format=qcow2 --graphics vnc,port=5901,listen=0.0.0.0,password=newone --noautoconsole --description --autostart
I am getting the following error.
Starting install...
ERROR internal error process exited while connecting to monitor: char device redirected to /dev/pts/1
open /dev/kvm: Permission denied
failed to initialize KVM: Operation not permitted
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
virsh --connect qemu:///system start vm_test
otherwise, please restart your installation.
I have used exactly the same command with one of other desktop hosts and it works there. I can install a VM from virt-manager using an ISO image with virt-manager storing the disk image at default location.
It seems like a file permissions error to me, as it is not working with the /vms directory but is working with another /home/vm directory.
Thanks in advance for any help.
I got the same error message on a server, which has libvirt up for weeks.
Setting libvirt to run as root (as mentioned in the link) didn't work for me.
However, granting read & execute access to /var/lib/libvirt/images solved my problem.
chmod go+rx /var/lib/libvirt/images
chmod o-rwx /var/lib/libvirt/images/*
If you follow all the instructions for creating a VM using libvirt, you may still encounter the error message above. The root cause can be AppArmor, which is enabled on recent Ubuntu distributions. The easiest way is to remove AppArmor if security is not a concern.
The official Ubuntu documentation gives advice on disabling AppArmor:
Disable AppArmor
I found the solution to my problem; here it is.
The real reason was that /vms was an NFS mount whose configuration (no_root_squash + rw) required it to be accessed as root.
By default libvirt runs a virtual machine with the user and group permissions of libvirt-qemu:kvm which would create trouble even if you run it with sudo privileges. So we need to set qemu process's user & group to root in /etc/libvirt/qemu.conf.
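A sketch of the relevant /etc/libvirt/qemu.conf lines (restart libvirtd after editing; note that running QEMU as root weakens isolation between guests and host, so prefer fixing the NFS export or file ownership when you can):

```
# /etc/libvirt/qemu.conf
user = "root"
group = "root"
```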
Also, as others have pointed out, there can be multiple other reasons for this error, and it's unfortunate that libvirt throws such a generic one.
The least frustrating approach is to grant all permissions, disable SELinux, and confirm that it runs. Then revoke the permissions one by one, testing at each step, until you understand why the final set of permissions is required.
This can happen, if the modules were loaded too soon™ (the actual problem is not known to me, so please enhance this answer if you know it).
Just try to unload the modules and load them again. This did the trick for me:
rmmod kvm_intel # use kvm-amd if you use an amd processor.
rmmod kvm
modprobe kvm
modprobe kvm_intel # use kvm-amd if you use an amd processor.
I got this permission denied error on Arch. The problem turned out to be the access control list. Even though the Unix permissions showed group rw, getfacl showed group::---. This fixed it for me:
setfacl -m g::rw /dev/kvm
I ran into this same problem. After looking into it, I found it was a permission problem. You can just run the command below to deal with it:
chown root:kvm /dev/kvm
and you don't need to reboot.
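Whichever fix you applied, a quick way to verify the device's mode, its ACL, and your group membership. A sketch; /dev/kvm only exists when the KVM modules are loaded and hardware virtualization is enabled:

```shell
dev=/dev/kvm
if [ -e "$dev" ]; then
    ls -l "$dev"                                  # typically crw-rw---- root kvm
    # ACL entries can silently override the mode bits shown by ls:
    command -v getfacl >/dev/null && getfacl -p "$dev"
else
    echo "$dev missing: KVM modules not loaded or virtualization disabled"
fi
id -nG                                            # should include the kvm group
```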