I want to use DDEV as a local development environment. The setup was successful and the website (a WordPress site) is running.
Currently our team uses XAMPP, and to avoid downloading large files onto every local machine we create symbolic links (e.g. for the "uploads" folder in WordPress) whose target is a network drive, so everyone on the team has access to the same files.
Now I want to do the same with DDEV. In WSL I mounted the network drive and created a symbolic link. Inside the console I have full access to the mounted directory: I can create, edit and remove files.
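Roughly like this (the drive letter and paths are placeholders, not my exact setup):

# mount the Windows network drive into WSL
sudo mkdir -p /mnt/z
sudo mount -t drvfs Z: /mnt/z
# link the WordPress uploads folder to the mounted share
ln -s /mnt/z/wp-uploads ~/projects/my-site/wp-content/uploads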
But when I access a file in the browser, I get the following error message:
403 Forbidden. You don't have permission to access this resource.
The same error occurs when I try to upload a new file within WordPress.
Is there any way to give the web server permission to view and modify the files on a network drive?
The web server is Apache/2.4.38.
As @rfay mentioned, I had to add the network drive as a volume so it is accessible to the web container. To do this, I created a new docker-compose file within the .ddev directory (see also the docs: https://ddev.readthedocs.io/en/stable/users/extend/custom-compose-files/#docker-compose42yaml-examples).
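As a rough sketch, the extra compose file could look like this (the file name and both paths are assumptions, not the exact setup):

# .ddev/docker-compose.mounts.yaml (hypothetical file name)
services:
  web:
    volumes:
      # host path of the mounted network drive -> uploads folder inside the web container
      - "/mnt/z/wp-uploads:/var/www/html/wp-content/uploads"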
Additionally, the permissions on the network drive were incorrect.
Related
I've deployed a release on an Ubuntu server with Apache. When I access the deployed app, the browser console shows: Error while trying to use the following icon from the Manifest: https://mydomain/icons/Icon-192.png (Download error or resource isn't a valid image).
It seems like Flutter web is trying to download Icon-192.png, which resides in a subdirectory without permissions, giving me a Forbidden error.
I'd like to know the best approach to access contents (assets like the favicon, images, etc.) in the deployed directory and its subdirectories without making the directory public.
I have just purchased a dedicated server from a UK hosting company that uses cPanel, and I have root access.
I am using scp to copy a huge (> 2 TB) website from another hosting company (1&1 IONOS using Plesk, not that it should make any difference).
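Roughly like this (hostnames and paths are placeholders):

# run on the new server, logged in as root
scp -rp root@old-server.example.com:/var/www/vhosts/example.com/httpdocs/ /home/myaccount/public_html/my-copied-site/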
The files are copying over; using SSH I can use the "ls" command to list all the files that I've copied over.
However, when I use the File Manager option via the cPanel interface, I can see the first folder name in the left-hand pane (i.e. public_html/my-copied-site), but the right-hand window shows the directory as empty.
If I use the "ls" command, I can see the files & folders
if I try an access any of the files directly via a web browser then I get a 403 Forbidden message
What have I done wrong?
The answer to this problem is the ownership of the folders.
Using scp over SSH meant that I was logged in as "root", and therefore the owner of the folders was also "root".
Changing the owner of the folders (using the "chown" command) to the account's name resolved the problem.
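For example (the account name and path are placeholders for your own):

chown -R myaccount:myaccount /home/myaccount/public_html/my-copied-site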
Hope this helps someone out
I've launched WordPress on Google Compute Engine (via their automated launcher process). It installs quickly and easily, and by visiting the external IP displayed in my Compute Engine VM Instances dashboard, I am able to access the fresh installation of WordPress.
However, when I scp an existing WordPress installation oldWPsite into /var/www/ and then replace my html directory
mv html htmlFRESH
mv oldWPsite html
my site returns a 'failed to open' error. Directory permissions and user:group ownership are identical.
Moreover, when I return the directories to their original configuration
mv html oldWPsite
mv htmlFRESH html
Still, the error persists.
I am familiar with other hosting paradigms where I can easily switch between the publicly served files by simply modifying directory names. Is there something unique about Google Compute Engine? What is the best way to import existing sites, files, etc. into the Google Cloud environment?
Replicate
Install WordPress via Google Launcher on a micro-VM.
Visit the public IP of the VM instance.
SCP a fresh installation of WordPress to /var/www.
Replace the Google-installed html directory with the newly created and copied WordPress directory using mv commands.
Visit the public IP of the VM instance.
===
Referenced Questions:
after replacing /var/www/html directory, apache does not work anymore
permission for var/www/html directory - a2enmod command unrecognized on new G-compute VM
The imported .htaccess file had an https redirect, which caused the server to fail because https is not set up in a fresh launch of WordPress through GCE. Compounding the issue, the browser cache held onto that redirect when the previous site was moved back to the initial conditions.
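For reference, an imported WordPress .htaccess often carries a rewrite block along these lines (a generic sketch, not the actual file):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]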
As usual, the solution involved investigating user error.
I've just installed the concrete5 CMS by following the instructions on the website.
The folders application/files/, application/config/, packages/ and updates/ will need to be writable by the web server process. This can mean that the folders will need to be "world writable", depending on your hosting environment. If your server supports running as suexec/phpsuexec, the files should be owned by your user account, and set as 755 on all of them. That means that your web server process can do anything it likes to them, but nothing else can (although everyone can view them, which is expected). If this isn't possible, another good option is to set the apache user (either "apache" or "nobody") as having full rights to these files. If neither is possible, chmod 777 to files/ and all items within (e.g. chmod -R 777 files/*).
The packages folder has permissions 777 and the /root/tmp folder has permissions 755.
I've uploaded a new theme to /packages over FTP. When I try to install the new theme I see the following error:
An unexpected error occurred. fopen(/root/tmp/1419851019.zip) [function.fopen]: failed to open stream:
Permission denied
I have FTP access to the server and access to cPanel. How do I get this working without granting too many permissions, which would pose a security risk?
My install has the folders application/files, application/config, packages, and updates all set to 755 and it's working just fine.
You get that error because the system is trying to write to /root/tmp, which is apparently configured as the temp folder for your environment when your PHP request is handled.
Try adding the folder application/files/tmp to your file system (within your concrete5 installation), and then make sure that the user running PHP in your environment can write to that folder. As explained in concrete5's own documentation (that you linked originally), which user this is depends on your server.
Usually in shared hosting environments it's the same account you use to log in there through SSH or FTP. In that case, 755 permissions should be enough if your own user owns the tmp folder you just created.
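A sketch of what that could look like over SSH (the user and group names are placeholders for whatever runs PHP on your server):

# run from the root of the concrete5 installation
mkdir -p application/files/tmp
chown phpuser:phpgroup application/files/tmp
chmod 755 application/files/tmp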
My question is related to a TrueCrypt drive created on a server. I want to mount this drive on a few computers on the network with write access. In order to do so, I installed TrueCrypt on a network computer and mounted the drive.
Problem
It mounts the drive after asking for the password but triggers a write error. In other words, it is read-only.
What I have tried so far
I have looked at the documentation at truecrypt.com and it shows there are two methods of mounting:
TrueCrypt Mounted Drive (mounts the drive on a local computer with read-only access)
Unmounted Drive (the drive is mounted on the server and shared across the network)
What I want
Option 2 seems to solve the problem, except that it doesn't ask for a password. It is the same as any shared folder on the network, which makes it less secure. So is it possible to mount the drive on the network with write access, but only after authenticating with TrueCrypt credentials?
Any help will be greatly appreciated.
Based on what I have read (I haven't tried it myself), when you download the TrueCrypt file to your local machine, you should be able to mount it there and would be prompted for the password. Once mounted, you should be able to write or modify to your heart's content and then save to the encrypted volume on your local machine. You will not be able to save the changes into the original server-based volume, as that file is 'read only'. However, you should be able to save your modified volume to the server under a different file name.
What I did:
mounted the TrueCrypt drive and a TrueCrypt container with VeraCrypt
created a Windows (Samba) and Mac (AFP) share of the drive and container, with a password in the share settings (whatever software you use)
Mounting the container prevented it from being overwritten by someone else opening the container directly.