Backing Up To Google Drive with Duplicity Not Working

I'm trying to back up to Google Drive with Duplicity, and I can't seem to figure it out. I run this:
GOOGLE_DRIVE_SETTINGS=gdrive duplicity / pydrive+gdocs://*******[:*********]#other.host/server-backup
and I get this:
File "/usr/local/lib/python2.7/dist-packages/pydrive/auth.py", line 314, in LoadClientConfigFile
raise InvalidConfigError('Invalid client secrets file %s' % error)
InvalidConfigError: Invalid client secrets file ('Error opening file', 'client_secrets.json', 'No such file or directory', 2)
I saved my client ID and client secret in the gdrive file (for GOOGLE_DRIVE_SETTINGS), but no matter where I place it, I cannot figure out where the client_secrets.json file is supposed to go. I would appreciate any help in getting this working.

The manpage (http://duplicity.nongnu.org/duplicity.1.html#sect22) says to pass the settings file name in the GOOGLE_DRIVE_SETTINGS environment variable. Location-wise, I would guess it is resolved relative to the current user's home folder; alternatively, try setting an absolute path to the file.
Make sure that the file is formatted as the manpage states; a sketch follows.
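For reference, the settings file the manpage describes is PyDrive-style YAML; here is a rough sketch with placeholder values (the missing client_secrets.json error typically means PyDrive fell back to its default OAuth flow because this settings file was not found):

client_config_backend: settings
client_config:
  client_id: <your client ID>.apps.googleusercontent.com
  client_secret: <your client secret>
save_credentials: True
save_credentials_backend: file
save_credentials_file: /home/youruser/.duplicity/gdrive.cache
get_refresh_token: True

Then point the environment variable at it with an absolute path, for example:

export GOOGLE_DRIVE_SETTINGS=/home/youruser/.duplicity/gdrive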
..ede/duply.net

Related

acme.sh script failing with Verify error: Invalid response from https://example.com/.well-known/acme-challenge/etc. Please add '--debug' or '--log'

From time to time I run into this error when trying to get a Let's Encrypt certificate via the acme.sh script.
Sometimes it's the first time trying to get a Let's Encrypt certificate, and sometimes it worked previously but now suddenly doesn't work.
The error message is similar to:
domain.com:Verify error:Invalid response from https://example.com/.well-known/acme-challenge/1kSTnls6_vcku98gwLEUMQNnbl1cSY1pdBrPi7sJdos
Please add '--debug' or '--log' to check more details.
See: https://github.com/acmesh-official/acme.sh/wiki/How-to-debug-acme.sh
Adding the --debug option reveals some log entries similar to:
Changing owner/group of .well-known to username:nobody
chown: changing ownership of /home/path/to/example.com: Operation not permitted
What's the solution?
Hopefully this will save others some time googling, or poring over the documentation, or reading through the closed GitHub issues.
First thing to check: does the website folder have an .htaccess file in it?
(By "website folder" we mean where the actual website files are stored, such as /home/youruser/public_html/path_to_your_domain.com
(Note that dot files like .htaccess are hidden by default in CPANEL file manager, so you might need to use an FTP app to check - or enable showing hidden files in the CPANEL file manager (there is a Settings button at top right))
If so:
a) Rename the .htaccess file (to .xxxhtaccess or similar)
b) Re-run the acme.sh script (see the example commands below)
c) Once it succeeds, rename the .htaccess file back again
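For example, from a shell (the domain and webroot path are illustrative, and the command assumes acme.sh's webroot mode):

cd /home/youruser/public_html/path_to_your_domain.com
mv .htaccess .xxxhtaccess
acme.sh --issue -d your_domain.com -w /home/youruser/public_html/path_to_your_domain.com
mv .xxxhtaccess .htaccess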
Some References:
acme.sh GitHub Issues
acme.sh Documentation

Store Neomutt Mails on an External Disk

I have installed Neomutt on Arch Linux using Luke Smith's Mutt-Wizard, and it's working fine. I am storing all my emails in my laptop's local ~/.config/mutt/accounts folder, which is referenced in my .muttrc file.
But I have thousands of emails, so I want to change where the mail is stored; I intend to keep it on an external hard disk. But when I put the external disk's path in my .muttrc, Neomutt gives me this error:
Maildir error: cannot read UIDVALIDITY.
Error: channel joy_deep#gmx.com: near side box INBOX cannot be opened.
Is there any way to configure this?
I got it figured out. I copied the mw file to mymw and edited that bash script: for the maildir location, I put my Nextcloud folder. I made the same change in the .mbsyncrc file, and now it works; a rough sketch of the idea follows.
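In outline, the change amounts to pointing both configs at the new maildir path. The paths below are illustrative, not the exact ones used (any external disk or synced folder works the same way):

In ~/.mbsyncrc (isync), aim the local maildir store at the new location:

MaildirStore joy_deep-local
Path /mnt/external/mail/
Inbox /mnt/external/mail/INBOX
SubFolders Verbatim

And in .muttrc:

set folder = /mnt/external/mail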
Thanks.

Apache server not able to download a file with "copy" in the file name

In a web application, users can upload files to the server, to be downloaded and viewed later. For those future downloads, we have to keep the file name the same as it was at upload time.
When a user uploads a file whose name contains "copy", like Copy_template.xls, the Apache server throws a 403 error; but if we rename the file to remove "copy", the download works. How can this be fixed?

How to detect that a file is being uploaded over FTP

My application keeps watch on a set of folders where users can upload files. When a file upload is finished, I have to process the file, but I don't know how to detect that a file has not yet finished uploading.
Is there any way to detect whether a file has not yet been released by the FTP server?
There's no generic solution to this problem.
Some FTP servers lock the file being uploaded, preventing you from accessing it while the upload is still in progress. The IIS FTP server does that, for example; most other FTP servers do not. See my answer at Prevent file from being accessed as it's being uploaded.
There are some common workarounds to the problem (originally posted in SFTP file lock mechanism, but relevant for FTP too):
You can have the client upload a "done" file once the upload finishes. Make your automated system wait for the "done" file to appear.
You can have a dedicated "upload" folder and have the client (atomically) move the uploaded file to a "done" folder. Make your automated system look to the "done" folder only.
Have a file naming convention for files being uploaded (".filepart") and have the client (atomically) rename the file after upload to its final name. Make your automated system ignore the ".filepart" files.
See (my) article Locking files while uploading / Upload to temporary file name for an example of implementing this approach.
Also, some FTP servers have this functionality built-in. For example ProFTPD with its HiddenStores directive.
A gross hack is to periodically check the file attributes (size and time) and consider the upload finished if the attributes have not changed for some time interval (see the sketch just after this list).
You can also make use of the fact that some file formats have a clear end-of-file marker (like XML or ZIP), so you can tell that the file is still incomplete.
Some FTP servers allow you to configure a hook to be called when an upload is finished; you can make use of that. For example, ProFTPD has the mod_exec module (see the ExecOnCommand directive).
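A hedged sketch of that attribute-polling hack, using only Python's standard ftplib (the host, credentials, and file name are placeholders; note that SIZE is only reliable in binary mode, and not every server supports MDTM):

import ftplib
import time

def wait_until_stable(ftp, path, interval=10, checks=3):
    """Treat the upload as finished once size and mtime have stayed
    the same for `checks` consecutive polls, `interval` seconds apart."""
    ftp.voidcmd("TYPE I")  # switch to binary mode so SIZE works
    last, stable = None, 0
    while stable < checks:
        time.sleep(interval)
        size = ftp.size(path)                # FTP SIZE command
        mtime = ftp.sendcmd("MDTM " + path)  # FTP MDTM command
        current = (size, mtime)
        stable = stable + 1 if current == last else 0
        last = current

ftp = ftplib.FTP("ftp.example.com", "user", "password")
wait_until_stable(ftp, "incoming/upload.zip")
# attributes have been stable for a while; download the file here
ftp.quit()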
I use ftputil to implement this work-around:
connect to the FTP server
list all files in the directory
call stat() on each file
wait N seconds
for each file, call stat() again; if the result is different, skip this file, since it was modified during the wait
if the stat() result is unchanged, download the file (a sketch follows)
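A minimal sketch of those steps (the server details and the incoming folder name are placeholders; ftputil caches stat() results, hence the explicit invalidation):

import time
import ftputil

def snapshot(host, path):
    """Return the (size, mtime) pair used to detect changes."""
    st = host.stat(path)
    return (st.st_size, st.st_mtime)

with ftputil.FTPHost("ftp.example.com", "user", "password") as host:
    names = host.listdir("incoming")
    before = {name: snapshot(host, "incoming/" + name) for name in names}
    time.sleep(60)  # wait N seconds
    for name in names:
        path = "incoming/" + name
        # drop the cached stat() result so the second call hits the server
        host.stat_cache.invalidate(host.path.abspath(path))
        if snapshot(host, path) != before[name]:
            continue  # modified during the wait: skip it this round
        host.download(path, name)  # unchanged: assume the upload is done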
This whole FTP-fetching approach is old and obsolete technology. I hope that the customer will use a modern HTTP API next time :-)
If you are reading files with particular extensions, use WinSCP for the file transfer. It creates a temporary file with the .filepart extension and renames it to the actual file name once the file has fully transferred.
I hope it will help someone.
This is a classic problem with FTP transfers. The only mostly reliable method I've found is to send a file, then send a second short "marker" file just to tell the recipient the transfer of the first is complete. You can use a file naming convention and just check for existence of the second file.
You might get fancy and make the content of the second file a checksum of the first file; then you could verify the first file (a sketch follows). (You don't have the same problem with the second file because you just wait until the file size equals the checksum size.)
And of course this only works if you can get the sender to send a second file.
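A sketch of that checksum variant, shown with hypothetical file names (the sender uploads data.bin first and data.bin.md5 last; the receiver waits for the .md5 file, then verifies the payload against it):

import hashlib

def md5_of(path):
    """MD5 hex digest of a file, read in chunks to handle large files."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# sender side: after uploading data.bin, write and upload the marker file
with open("data.bin.md5", "w") as f:
    f.write(md5_of("data.bin"))

# receiver side: once data.bin.md5 appears, verify before processing
with open("data.bin.md5") as f:
    assert md5_of("data.bin") == f.read().strip()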

I keep getting file upload errors in my multisite Drupal installation when I try to add a photo for my user avatar

I keep getting the following error when I try to upload an image for my user profile:
The specified file temporary://picture-1-1366485906.jpg could not be copied, because the destination directory is not properly configured. This may be caused by a problem with file or directory permissions. More information is available in the system log
I have a multisite installation, meaning I have the following:
sites/domain1.com
sites/domain2.com
sites/domain3.com
It does not happen on the first domain, but it does on domains 2 and 3.
Go to the file system settings URL of each site and make sure the details there are saved properly.
Example:
http://domain2.com/admin/settings/file-system
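If the path setting looks right but the error persists, the per-site files directory may be missing or not writable by the web server. A hedged example of creating and permissioning it from a shell (the www-data user and the directory layout are assumptions; adjust to your server):

mkdir -p sites/domain2.com/files
chown -R www-data:www-data sites/domain2.com/files
chmod 775 sites/domain2.com/files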