Store Neomutt Mails on an External Disk - archlinux

I have installed Neomutt on Arch Linux using Luke Smith's Mutt-Wizard, and it's working fine. I am storing all my emails in ~/.config/mutt/accounts on my local laptop, which is the location mentioned in my .muttrc file.
But I have thousands of emails, so I want to change where the mails are stored; I intend to keep them on an external hard disk. But when I put the external disk's location in my .muttrc, Neomutt gives me an error:
Maildir error: cannot read UIDVALIDITY.
Error: channel joy_deep#gmx.com: near side box INBOX cannot be opened.
Is there any way to configure this?

I got it figured out. I copied the mw file to mymw and changed that bash script: for the maildir location, I put my Nextcloud folder. I made the same change in the .mbsyncrc file. Now it works.
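For anyone else trying this, the two places that have to agree are the maildir path in .mbsyncrc and the folder path mutt reads. A minimal sketch, assuming a mutt-wizard-style layout; the Nextcloud path and account name below are illustrative, not the exact generated config:

    # ~/.mbsyncrc -- point the local maildir store at the new location
    MaildirStore joy_deep#gmx.com-local
    SubFolders Verbatim
    Path ~/Nextcloud/mail/joy_deep#gmx.com/
    Inbox ~/Nextcloud/mail/joy_deep#gmx.com/INBOX

    # account muttrc -- mutt must read mail from the same path
    set folder = "~/Nextcloud/mail/joy_deep#gmx.com"

The UIDVALIDITY error above usually means mbsync's state files are missing or unreadable at the new location, so either let mbsync re-sync into an empty directory or move the old maildir with its hidden state files intact.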
Thanks.

Related

Smart local copy of a remote directory

I have a bunch of local copies of dev/production websites. Each copy contains a "files" directory, which holds files uploaded by site users. Currently I use rsync to synchronize the directories' contents from the remote servers (via ssh).
There are some annoyances:
I have to run rsync manually each time I want fresh files (this could be automated, of course, but as I have a lot of website copies it's not a good idea).
The rsync execution takes some time.
Disk space on my laptop is running out.
I think all of this could be solved by some kind of software that works like a proxy:
When I list files, it requests the file list from the remote server and caches the results for some (configurable) time.
When I request a file's contents for the first time, it retrieves the remote file and saves it locally.
When I update a file, it only gets updated locally.
When I save a new file in the "files" directory, it does not go to the remote server.
Of course, the logic of such software would be more complex, but I hope the idea is clear: don't waste disk space, download files on demand, and push no changes to the remote.
Is there any software that works like that?
Map a network drive with NFS or sshfs. Make local copies if you really need a file.
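For example, with sshfs (the host and paths here are illustrative):

    # Mount the remote "files" directory locally over SSH, read-only so
    # nothing done locally can change the server
    sshfs -o ro user@example.com:/var/www/site/files ~/mnt/site-files

    # ... browse, and copy out individual files as needed ...

    fusermount -u ~/mnt/site-files    # unmount when done

Files are fetched over the network as you access them, so nothing permanently occupies local disk space.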
I did not mention it in the question, but I needed this for working with Drupal. I have now found a Drupal-only solution: the Stage File Proxy module.
It does exactly what I need: downloads files from a remote server only when they are requested.

Replacing an empty Dropbox folder (fresh install) with a previous, up-to-date Dropbox folder. Is it possible?

I am about to install Mavericks, and before I do that I am going to reformat my MacBook Air. I use Dropbox and have about 15 GB of (small) files in it (mainly documents/ebooks).
My question is: is it possible to back up my Dropbox folder now, reformat my SSD, and install Dropbox again, after which I replace the new Dropbox folder with my backup, without getting Dropbox confused? (It might think these are new files, so Dropbox could upload them and/or download the same files again.)
Does anyone have any experience with this?
It's fine to do this - I have done it myself, though not on OS X.
The Dropbox client will index the files that it finds on your computer and compare them to the ones which are already in your account (on the server). I believe that it uses some kind of hash function to do this - the client creates a small hash value for each file and then this value is compared to the value on the server. If the value is the same then the client assumes that the file is the same and it does not need to be re-uploaded. However, if you have thousands of files, this can take some time.
Source: https://www.dropbox.com/help/1941/en - "The application will index the files and see that they are the same files in your account."
If you want to do it: when you install Dropbox again, sign in to your account, let it create the Dropbox folder, and then click "Pause Syncing" so that it doesn't start downloading everything. Then copy the backed-up Dropbox files into the new Dropbox folder and resume syncing.

0 filesize when using move_uploaded_file()

Permissions are in line (777, owned by NGINX, etc),
Folder is writable,
File is small,
Everything I've found on Google for the last 4 hours is already correct / not applicable,
The players are NGINX, PHP-FPM, and FastCGI.
I upload the file and use move_uploaded_file to move it to the uploads directory, and the file saves. However, upon inspection, the file is 0 KB and 0px by 0px. EMPTY.
I'm not finding this issue anywhere online.
Any thoughts?
If anyone comes across this and everything seems to be perfect: CHECK your disk space! In my case I was using a mounted drive, so I didn't get any of the typical low-space errors.
move_uploaded_file and copy do NOT report space issues. I had to use rename() to get any useful details.
Hope this helps some poor soul.
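A sketch of the check that would have caught this, assuming a typical upload handler (the "image" field name and target path are made up for illustration):

    <?php
    // Hypothetical upload handler; field name and target path are illustrative.
    $tmp    = $_FILES['image']['tmp_name'];
    $size   = $_FILES['image']['size'];
    $target = '/mnt/uploads/' . basename($_FILES['image']['name']);

    // A full target mount is exactly what produces silent 0-byte files:
    // compare the upload size against free space on the target filesystem.
    if (disk_free_space(dirname($target)) < $size) {
        die('Target mount is out of space.');
    }

    if (!move_uploaded_file($tmp, $target)) {
        // rename() emits a more descriptive warning (e.g. "No space left
        // on device") than move_uploaded_file() or copy() do.
        rename($tmp, $target);
    }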
Someone posted a similar issue to yours here - http://bytes.com/topic/php/answers/1002-move_uploaded_file-corrupts-some-files
It seems the issue resides in transferring a GIF from a Windows machine to Linux.
Try using the copy() function instead: copy([source], [destination]). If this works, it means you have a permission issue with the upload temp directory.
Have you checked the permissions on the upload temp directory? You can find the path in the upload_tmp_dir directive in your php.ini file.
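For reference, the directive looks like this (the path shown is only an example):

    ; php.ini -- where PHP stores uploads before the script moves them;
    ; the PHP-FPM user must be able to write here
    upload_tmp_dir = /var/tmp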

Uploading via SFTP over slow connection to temporary location then moving to real location

I have an issue where occasionally I need to work at Starbucks.
When I upload a PHP file, the connection is slow, so if a user tries to access the PHP file while I am uploading it, they will of course be issued a fatal error.
This is very inconvenient for my busy websites. Is there a way for a file to be uploaded to a temporary location and then moved by the server to the real location once the transfer is finished?
You can make WinSCP upload the file to a temporary file name and automatically rename it once the transfer completes.
In Preferences go to the Transfer > Endurance tab and select All Files in the Enable ... Transfer to temporary file name box.
For details refer to:
https://winscp.net/eng/docs/ui_pref_resume
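If you drive WinSCP from a script instead of the GUI, the same behaviour is controlled per transfer with the -resumesupport switch; the host and paths below are placeholders:

    winscp.com /command ^
        "open sftp://user@example.com/" ^
        "put -resumesupport=on index.php /var/www/html/" ^
        "exit"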
Why don't you just upload the file to a temporary folder on the server and then execute commands on the server to remove the old file and move the new one into place? The server-side move should be fast enough to eliminate any hiccups users would see, unless their timing was just right.
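A minimal sketch of that approach, assuming shell access (host and paths are illustrative). The temporary name is in the same directory because mv within one filesystem is an atomic rename, so visitors see either the complete old file or the complete new one:

    # Upload under a temporary name, then rename into place atomically
    scp index.php user@example.com:/var/www/html/index.php.tmp
    ssh user@example.com 'mv /var/www/html/index.php.tmp /var/www/html/index.php'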

Can't read or write to directory with CFFILE despite 777 permissions - coldfusion

This is installed on a Unix system I don't have direct access to, but can get insight on by sitting with a network team.
The problem is this: I have 3 folders I need read and write access to, but I only have access to 1 of them, and only for reading. This is via ColdFusion; I can get into the folders fine as the user they are assigned to (which is also the user the CF server runs as, the "www" user).
I CAN read and write to the temporary file directory, the place files are stored before they are moved to the destination directory (SERVER-INF/ etc etc etc), but that's not helpful. I have had the network people set the permissions for the other folders to the same thing, but with no results. The current settings of the folder I can access are rwxrws--- and the other folders are rwxrwxr-x, so I should have more permissions (the "s" is not a mistake in the first folder).
We have tried setting the other folders to 777 and did not even get read capability. Does the server need to be restarted on a Unix box after setting new permissions for ColdFusion to be able to get to them? I'm out of ideas right now; I'll take any new suggestions.
TL;DR
All using ColdFusion
temp directory - can read and write to
folder 1 - can read from (including subdirectories)
folder 2 - cannot read or write to (permission denied)
folder 3 - cannot read or write to (permission denied)
Goal: Get upload functionality working.
Edit: The server is using Apache.
Just a random guess... Have you checked that the paths you are trying to access are fully correct? They should be absolute for file operations, and the www user must have execute (x) permission on every directory along the path in order to enter them.
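Two quick ways to verify that from a shell, assuming the path and the www user (both illustrative here):

    # Show owner and permissions for every component along the path
    namei -l /opt/site/uploads

    # Or probe directly as the user ColdFusion runs as
    sudo -u www ls /opt/site/uploads
    sudo -u www touch /opt/site/uploads/.write-test && echo writable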
The problem ended up being that a restart was required after setting the new folder permissions. We didn't think this was necessary on a Unix box, but ColdFusion apparently disagreed. This worked.