I am having an issue getting my S3 bucket to automatically mount after a restart. I am running an AWS EC2 c5d instance with Ubuntu 16.04. I am able to use s3fs to mount my S3 bucket manually using:
$ s3fs -o uid=1000,umask=077,gid=1000 s3drive ~/localdata
Afterwards, when I go into the folder I can see and change my S3 files. But when I try to set it up to mount automatically, I can't get it to work. I have tried adding the following to /etc/fstab:
s3drive /home/ubuntu/localdata fuse.s3fs _netdev,passwd_file=/home/ubuntu/.passwd-s3fs, uid=1000,umask=077,gid=1000 0 0
It processes, but when I go to the location and run $ ls -lah I see an odd entry in the permissions column (and I am denied permission to cd into it):
d????????? ? ? ? ? ? localdata
I get the same result when I start fresh and try adding to /etc/fstab:
s3fs#s3drive /home/ubuntu/localdata fuse _netdev,passwd_file=/home/ubuntu/.passwd-s3fs,uid=1000,umask=077,gid=1000 0 0
Lastly, I tried adding either of the following to /etc/rc.local just above the exit 0 line:
s3fs -o uid=1000,umask=077,gid=1000 s3drive ~/localdata
or
s3fs -o _netdev,uid=1000,umask=077,gid=1000 s3drive ~/localdata
When I reboot nothing seems to happen (i.e. no connection). But if I run it manually using:
$ sudo /etc/rc.local start
I get the same weird entry for my drive
d????????? ? ? ? ? ? localdata
Any ideas how to do this right, or what the ? ? ? permissions mean? I really hope this isn't a duplicate, but I searched the existing answers and tried things for the whole afternoon.
This looks like a permission problem.
Verify that the AWS keys in ~/.passwd-s3fs are correct, that the file is chmod 600, and that the IAM user has the correct permissions for that bucket.
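A quick way to check, using the paths from the question (on older s3fs builds the dbglevel option may not exist; plain -d gives similar debug output):
$ ls -l /home/ubuntu/.passwd-s3fs    # should show -rw------- (600)
$ chmod 600 /home/ubuntu/.passwd-s3fs
# mount in the foreground so credential/IAM errors are printed to the terminal:
$ s3fs s3drive /home/ubuntu/localdata -o passwd_file=/home/ubuntu/.passwd-s3fs,uid=1000,gid=1000,umask=077 -o dbglevel=info -f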
You probably need a newer version of s3fs:
https://github.com/s3fs-fuse/s3fs-fuse/issues/1018
Either upgrade your Ubuntu to 20.04,
or host a Docker container with Ubuntu 20.04 (or some other distro), map your local folder to a folder inside the container using volumes, and set up s3fs inside that container using fstab.
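Whichever route you take, it is worth double-checking the fstab line itself: the options must be a single comma-separated field with no embedded spaces (fstab splits fields on whitespace), and because the boot-time mount is performed by root you may also need allow_other (with user_allow_other uncommented in /etc/fuse.conf) for uid 1000 to reach the mount. A sketch using the bucket and paths from the question:
s3drive /home/ubuntu/localdata fuse.s3fs _netdev,allow_other,passwd_file=/home/ubuntu/.passwd-s3fs,uid=1000,umask=077,gid=1000 0 0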
gsutil -m rm gs://{our_bucket}/{dir}/{subdir}/*
...
Removing gs://our_bucket/dir/subdir/staging-000000000102.json...
Removing gs://our_bucket/dir/subdir/staging-000000000101.json...
CommandException: 103 files/objects could not be removed.
The command is able to find the directory with the 103 .json files, and "tries" removing them, per the Removing gs://... lines in the output. Why might we be receiving "CommandException: 103 files/objects could not be removed."?
This works on my local machine
This works in our docker container run locally
This does not work in our docker container on the GCP compute engine where we need it to be working.
Perhaps this is a permissions issue with the compute engine not having permission to remove files in our GCS?
Edit: We have a service account JSON in the /config folder of our Airflow project, and that service account is shared with an IAM user that has the Storage Admin permission. Perhaps having the JSON in the /config folder is not sufficient for granting permissions to the entire GCP compute engine? I am particularly confused because this server is able to query our BQ database and WRITE to GCS, but cannot delete from GCS...
The solution in this link - https://gist.github.com/ryderdamen/926518ddddd46dd4c8c2e4ef5167243d - was exactly what we needed:
Stop the instance
Edit the settings
Remove gsutil cache
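For the "remove gsutil cache" step, one way to do it is to clear gsutil's per-user state directory and re-run the command (~/.gsutil is the default location; adjust if you have configured a different state_dir):
$ rm -rf ~/.gsutil
$ gsutil -m rm gs://{our_bucket}/{dir}/{subdir}/*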
This might be a dumb question, but I checked everywhere and there's no direct answer to it.
I set up both SSH keys successfully and I can connect to my instance via the terminal, but when I run ls, it doesn't show any output. I am using iTerm2 with zsh on my Mac, but I don't think that is the issue.
Can anybody give me a hint? Thanks!
When you access a VM through SSH, your working directory is the home directory of the user specified with the SSH command, i.e. /home/username. In case you access as root, the working directory will be /root.
You can check it with the pwd command:
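For example (assuming you connected as a user called "username"):
$ pwd
/home/username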
If it is a brand-new machine, it is normal for the output of 'ls' to be empty: your home directory contains only files whose names start with a dot ('.'), and by Linux convention those are hidden, so 'ls' with no parameters does not show them.
You can try again with $ ls -al and you will be able to see hidden files and directories as well.
On the other hand, you can also create an empty file first and then run 'ls' again:
$ touch file
$ ls
In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
And by looking at the source code of e.g. the cpuguy83/nagios image, this can clearly be seen being done, as everything from the nagios to the apache config directories is made available as volumes.
However, looking at the same image, the apache service (and the cgi-scripts for nagios) runs as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal, and try to run it with docker run -d -p 8000:80 <image_hash>, however I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (while the apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (as I can't see the point of declaring config directories as VOLUMEs, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows the Docker best practices by adding multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the image at build time, just change the permissions of my_custom_config.cfg on your host machine and then build your image with docker build. The host machine's permission bits are copied into the container image.
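For example, on the host before building (the my-nagios tag is just an example name; 644 is one mode that works, since it leaves the file readable by the nagios user inside the container):
$ chmod 644 my_custom_config.cfg
$ docker build -t my-nagios .
$ docker run -d -p 8000:80 my-nagios
ADD preserves the file's mode bits (ownership becomes root:root), so a world-readable file on the host stays readable for the nagios user in the image.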
I have set up an NFS file share between two CentOS 6 64-bit machines. On the server, the folder being shared was originally owned by the root user. On the client it turned up as being owned by nfsnobody. When I tried to write to the folder from the client I got a permissions error, so I changed the folder ownership on the server to nfsnobody and chmod'd it to 777. However, still no joy - I continue to get a permissions error. Clearly, there is more to this. I would be much obliged to any Linux gurus out there (I personally wouldn't merit being called anything more than a newbie) who might be able to help fix this issue.
Edit - I should have mentioned that trying to write to the shared folder from the client actually manages to create a file entry. However, the file size is 0 and the permissions error is reported.
The issue here is to do with the entry in /etc/exports. It should read:
folder ip(rw,all_squash,sync,no_subtree_check)
I had missed the all_squash bit. That apart, make sure that the folder on the server is owned by nfsnobody. On my setup, both the client and server nfsnobody users ended up with a user id of 65534. However, it is well worth checking this (in /etc/passwd), or else...
Here are a couple of useful references
How to setup an NFS Server
NFS on CentOS
For the benefit of anyone looking to set up an NFS server, here is what worked for me on my CentOS 6 64-bit machines.
SERVER
yum install nfs-utils nfs-utils-lib - install NFS
rpm -q nfs-utils - check the install
/etc/init.d/rpcbind start
chkconfig --level 235 nfs on
/etc/init.d/nfs start
chkconfig --level 35 rpcbind on
With this done you should create the folder you want to share
mkdir folder
chown 65534:65534 folder
chmod 755 folder
Now define the folder to be shared/exported. Use your favorite text editor (vi or whatever) to open/create /etc/exports and add a line of the form:
folder clientIP(rw,all_squash,sync,no_subtree_check)
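After saving /etc/exports, re-export it so the change takes effect (this assumes the nfs service from the steps above is already running):
exportfs -ra - re-read /etc/exports and apply it
exportfs -v - verify what is actually being exported and with which options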
CLIENT
Install, check, bind and start as above
mount -t nfs serverIP:folder clientFolderLocation
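A quick sanity check before going further (serverIP, folder and clientFolderLocation are the placeholders used above):
mount -t nfs - should list serverIP:folder mounted on clientFolderLocation
touch clientFolderLocation/test.txt - with all_squash in place, the file should appear on the server owned by nfsnobody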
If all goes well you should now be able to write a little script on your client
<?php
$file = $_SERVER['DOCUMENT_ROOT']."/../nfsfolder/test.txt";
file_put_contents($file,'Hello world of NFS!');
?>
browse to it, and find that test.txt now exists on the server with the content "Hello world of NFS!". In this example I have placed my mounted drive one level above the document_root.
I just started using Apache, but when I try to run phpMyAdmin, I get this error message:
1 - Can't create/write to file '/var/folders/w1/5yx2p9mj7w9bm67gdwhqxwsr0000gn/T/#sql1ba_3_0.MYI' (Errcode: 13)
Another post on Stack Overflow suggested changing the permissions on the XAMPP file my.cnf with this command:
sudo chmod 600 my.cnf
I tried running the command in the Mac Terminal, but the result was "No such file or directory."
Does anyone know what I should try next?
This is a permission problem on your datadir (the directory where MySQL wants to write its files). Normally, when MySQL is installed, the correct permissions are set for the user who runs mysqld.
Are you sure that MySQL was installed correctly as part of XAMPP installation?
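One way to check (the datadir path below is XAMPP's usual default on macOS and is only an assumption; substitute whatever user the first command reports):
$ ps aux | grep mysqld - note which user mysqld is running as
$ ls -ld /Applications/XAMPP/xamppfiles/var/mysql - see who owns the datadir
$ sudo chown -R <mysql_user> /Applications/XAMPP/xamppfiles/var/mysql - hand it to that user
Also check that the directory MySQL uses for temporary files (SHOW VARIABLES LIKE 'tmpdir'; the /var/folders/.../T path in your error) is writable by that same user.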