I have a site that's been running for a while. All was going well. Until now. Dun duunnn dunnnnn.
I am unable to upload from an attachment field to a particular directory, but I can upload to its parent directory.
Desired directory to upload (does not work):
/sites/default/files/resources/case-studies
4 drwxrwxrwx 2 apache apache 4096 Jul 22 2013 case-studies
Uploading DOES work to the parent directory:
/sites/default/files/resources
4 drwxrwxrwx 10 apache apache 4096 Mar 18 10:15 resources
As far as I can tell they are identically permissioned. Is there something I am missing?
Thanks, hive mind!
steve
For reasons I haven't figured out, this works:
chmod -R a+w files    <----- yay!
This makes sense to me, but I really can't figure out why that would work and not:
chmod -R 777 files    <----- boo!
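Not a full explanation, but note that the symbolic a+w only adds the write bits and leaves everything else (execute, setgid/sticky, any ACL entries) alone, while the numeric 777 sets the permission bits outright; and if SELinux or POSIX ACLs are involved, mode bits alone won't tell the whole story. A hedged diagnostic sketch for comparing the two directories beyond plain ls -l (paths are the ones from the question):
ls -ldZ /sites/default/files/resources /sites/default/files/resources/case-studies   # SELinux context, if enabled
getfacl /sites/default/files/resources/case-studies                                  # POSIX ACLs, if any
lsattr -d /sites/default/files/resources/case-studies                                # filesystem attributes such as +i (immutable)
stat /sites/default/files/resources/case-studies                                     # full mode, owner, group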
Related
I have a Proxmox server, so it runs Debian, and I want to mount a remote directory from my Synology NAS to make backups.
I normally use SSH mounts without any problem.
But this time I'm getting an error I've never encountered: I can create files, but not delete them.
I find this very strange and I can't see where it's coming from.
root@proxmox:/mnt# sshfs user@192.168.0.1:home/data /mnt/dist-folder/ -o reconnect,ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,default_permissions,uid=0,gid=0,umask=007
root@proxmox:/mnt# cd dist-folder/
root@proxmox:/mnt/dist-folder# touch aa.txt
root@proxmox:/mnt/dist-folder# ls -la
total 12
drwxrwx--- 1 root root  114 Mar 13 09:53 .
drwxr-xr-x 7 root root 4096 Mar 13 09:37 ..
-rwxrwx--- 1 root root    0 Mar 13 09:53 aa.txt
root@proxmox:/mnt/dist-folder# rm aa.txt
rm: cannot remove 'aa.txt': Permission denied
With uid=0,gid=0 for the root user and group.
Thanks
It turns out this is a problem specific to Synology.
For the mount, the path absolutely has to start with
/homes/<user>/home/
So it becomes:
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/
And it works fine!
It's not the first time I've run into an odd configuration with this Synology tool... Argh.
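For anyone hitting the same thing, a quick hedged verification sketch (same options as the original mount, just with the corrected /homes/<user>/home/ path) to confirm that deletion now works:
sshfs user@192.168.0.1:/homes/proxmox/home/data /mnt/dist-folder/ -o reconnect,ServerAliveInterval=60,ServerAliveCountMax=30,allow_other,default_permissions,uid=0,gid=0,umask=007
touch /mnt/dist-folder/aa.txt && rm /mnt/dist-folder/aa.txt && echo "create + delete OK"
fusermount -u /mnt/dist-folder    # unmount when finished testing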
Somehow I've ended up in a situation where my Perl files, located on a Linux server, must be set to 775 for me to edit them via Sublime on my Windows laptop.
I can change the files to 755 and they'll run properly, but I can't edit them unless they're set to 775. When I try to save them I get a Permission Denied error.
Everything is owned by www-data
drwxr-xr-x 2 www-data www-data 4096 Jun 10 08:00 cgi-bin
The Perl file within the cgi-bin directory is as well:
-rwxr-xr-x 1 www-data www-data 960 Jun 10 01:22 perly.pl
When I log in via the Sublime editor, I log in as the original user I created my server with, "danny", who is a member of the www-data group.
Can anyone figure out where I'm making a mistake or a wrong assumption?
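A hedged sketch of what I'd check, assuming "danny" saves over SFTP as a member of the www-data group (the usernames are the ones from the question): with 755 the group only gets r-x, so a group member can't write; 775 adds the group write bit, which would explain the behavior.
id danny                        # confirm danny really is in the www-data group
ls -l perly.pl                  # -rwxr-xr-x: no group write bit
chmod g+w perly.pl              # same effect as 775 here: only the group write bit is added
# alternatively, keep 755 and make danny the owner instead:
# chown danny:www-data perly.pl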
I'm not sure why this is happening. It hasn't happened in the past. I've created several localhosts on my machine, and I follow this process the same way every time. This is what my www directory looks like in my terminal:
drwxr-xr-x. 7 me me 4.0k Feb 2 14:51 local.site.com
drwxr-xr-x. 2 root root 4.0k Mar 16:18 local.othersite.com
drwxrwxr-x. 6 me me 4.0k Oct 13:22 local.other-othersite.com
I have tried all the regular permission settings, like:
sudo chmod 777 local.othersite.com
and
sudo chmod 755 local.othersite.com
I've done this as me and as root. Why does it say that local.othersite.com's user is root when the others say the user is me? I obviously can't create or save any files in Sublime to work on this site. Maybe it's something really obvious that I'm just not getting, so if this seems like a dumb question, let me apologize in advance. I have read several tutorials on this particular issue, but I keep coming up short. Any help would definitely be appreciated.
So, I figured this out and it was a really simple fix. What I did was just change directory into my /var/www folder and run this command:
sudo chown -R me local.othersite.com
Very easy fix; I just figured I'd post this in case anyone else has trouble with it in the future!
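If the group should follow as well, here's a slightly fuller version of the same fix (using "me" as a stand-in for the actual username, as in the listing above), plus a quick check:
cd /var/www
sudo chown -R me:me local.othersite.com
ls -ld local.othersite.com      # should now show "me me" instead of "root root"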
I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
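Not a definitive fix, but one hedged variation worth trying first: unmount, then remount with an explicit umask so new files and directories come out writable for uid/gid 33 (www-data, as in the question), and make sure the key pair in /etc/passwd-s3fs (or ~/.passwd-s3fs) actually has write (s3:PutObject) rights on the bucket.
sudo umount /mnt/static.example.com
sudo s3fs static.example.com /mnt/static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33,umask=0002
sudo -u www-data touch /mnt/static.example.com/write-test    # hypothetical test file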
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem for mounting Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. Currently the project is in a beta state, but it's been running on several high-load file servers for quite some time.
We are looking for more people to join our project and help with testing. From our side, we offer quick bug fixes and will listen to your requests for new features.
Regarding your issue:
If you used RioFS, you could mount a bucket and have write access to it using the following command (assuming you have installed RioFS and exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to the project description for command-line arguments)
Please note that the project is still in development, so there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope this helps, and we look forward to seeing you join our community!
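A minimal end-to-end sketch of the above, with placeholder credentials and the bucket/mount point from the question:
export AWSACCESSKEYID=AKIA................          # placeholder access key ID
export AWSSECRETACCESSKEY=......................    # placeholder secret key
riofs -o allow_other http://s3.amazonaws.com static.example.com /mnt/static.example.com
fusermount -u /mnt/static.example.com               # unmount when done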
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
Try this method using S3Backer:
mountpoint/
    file     # (e.g., can be used as a virtual loopback)
    stats    # human-readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer
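Roughly, the s3backer pattern looks like the sketch below; the block size and size here are illustrative assumptions, so see the post above for a real walkthrough. s3backer exposes the bucket as one large "file", which you then format and loop-mount like a block device.
mkdir -p /mnt/s3b /mnt/static.example.com
s3backer --blockSize=128k --size=10g static.example.com /mnt/s3b     # sizes are illustrative
mkfs.ext4 -F /mnt/s3b/file                                           # format the virtual block device
mount -o loop /mnt/s3b/file /mnt/static.example.com                  # loopback-mount it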
I think I messed up the permissions on my server, which is running Confixx.
I mistakenly changed the owner and group recursively (-R) for /home, so everything got messed up.
I've already fixed most of it, but the directory "/home/www/confixx/" is not working.
Still getting:
Error 500 -
Premature end of script headers: index.php
The folder permissions are:
drwxr-xr-x 7 root root 4096 Apr 29 20:42 confixx
I've already done:
chmod -R 755 confixx/
chmod -R 644 confixx/*.php
But I'm still getting the error.
Is there any option in Confixx to restore the default permissions?
Or does someone know how to fix this problem?
Is the user for the confixx folder correct?
I couldn't find any docs about the default.
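"Premature end of script headers" with mode bits that look fine usually points at suexec/suPHP-style ownership checks rather than permissions, so a hedged next step is to compare against a sibling vhost directory that still works and mirror its owner and group (the web1:webgroup names below are placeholders, not Confixx defaults):
ls -l /home/www/                               # note the owner:group a working vhost folder uses
chown -R web1:webgroup /home/www/confixx/      # placeholder names; use what the working folder shows
tail -n 50 /var/log/apache2/suexec.log         # if suexec is in use, it logs the exact complaint here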