I am looking for a program that will make scheduled backups to a specific FTP server/folder. I have tried:
Create Synchronicity
Great for local backup and has scheduling, but lacks FTP support. It cannot recognize a mapped FTP drive letter.
AceBackup
Has both scheduling and FTP backup, but the scheduling is not built in; it tries to use the Windows scheduler, and fails.
EaseUS Todo Backup
No FTP support.
Any suggestions?
Iperius Backup seems to do the trick.
Aaaand now I look like a sales agent.
In Nexus 3 the backup procedure has changed.
In Nexus 2 the recommendation was to run an OS scheduled task / cron job to rsync some directories to a backup location.
In Nexus 3 the recommended way seems to be to schedule the predefined Nexus task "Export configuration & metadata for backup", and then also create a cron job to back up what gets exported by this task.
Is it still possible in Nexus 3 to do an old-style backup? Shut down the server, back up certain directories, and then for a restore just put everything back? Will that work?
Or can this task be run from the command line?
The way this is done in Nexus 3 does not seem to be thought through very well. You need to do a lot more to do what could be done with a single cron job in Nexus 2:
Create a scheduled task to export data.
Create a cron job to backup exported data.
Make sure that the scheduled task runs and finishes before the cron job (see the cron sketch after the links below).
See for example https://help.sonatype.com/display/NXRM3/Restore+Exported+Databases
See also Nexus Repository 3 backup
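As a rough illustration, a single crontab entry on the Nexus host could pick up whatever the export task has written and copy it together with the blob stores. The paths, times and destination below are assumptions for the sketch, not values from the Sonatype documentation:

# Assumes the "Export configuration & metadata for backup" task is scheduled
# inside Nexus for 02:00 and writes to /nexus-data/backup (example paths).
0 3 * * * rsync -a /nexus-data/backup/ /mnt/backup/nexus/databases/ && rsync -a /nexus-data/blobs/ /mnt/backup/nexus/blobs/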
If you back up the entire data (sonatype-work) directory, this should work as you wish. However, since the data directory is large and has many moving parts, it is safer to use the task; otherwise you may copy things that are in motion, which can corrupt the copy and leave you with a backup that does not work. As far as I know, copying the work directory is only recommended for servers that are shut down, which isn't an option for many bigger companies.
Copying the entire folder did not work for me and resulted in OrientDB problems. Last year I started to create N3DR. Version 3.5.0 has just been released.
https://help.sonatype.com/plugins/servlet/mobile?contentId=5412146#content/view/5412146
In case the link goes bad, here is the content (from Oct 20, 2017):
Nexus Repository stores data in blob stores and keeps some metadata and configuration information separately in databases. You must back up the blob stores and metadata databases together. Your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Complete the steps below to perform a backup:
Blob Store Backup
You must back up the filesystem or object store containing the blobs separately from Nexus Repository.
For File blob stores, back up the directory storing the blobs.
For a typical configuration, this will be $data-dir/blobs.
For S3 blob stores, you can use bucket versioning as an alternative to backups. You can also mirror the bucket to another S3 bucket instead.
For cloud-based storage providers (S3, Azure, etc.), refer to their documentation about storage backup options.
Node ID Backup
Each Nexus Repository instance is associated with a distinct ID. You must back up this ID so that blob storage metrics (the size and count of blobs on disk) and Nexus Firewall reports will function in the event of a restore / moving Nexus Repository from one server to another. The files to back up to preserve the node ID are located in the following location (also see Directories):
$data-dir/keystores/node/
To use this backup, place these files in the same location before starting Nexus Repository.
Database Backup
The databases that you export have pointers to blob stores that contain components and assets potentially across multiple repositories. If you don’t back them up together, the component metadata can point to non-existent blob stores. So, your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Here’s a common scenario for backing up custom configurations in tandem with the database export task:
Configure the appropriate backup task to export databases:
Use the Admin - Export databases for backup task for OrientDB databases
Use the Admin - Backup H2 Database task for H2 databases (Pro)
Run the task to export the databases to the configured folder.
Back up custom configurations in your installation and data directories at the same time you run the export task.
Back up all blob stores.
Store all backed up configurations and exported data together.
Write access to databases is temporarily suspended until a backup is complete. It’s advised to schedule backup tasks during off-hours.
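To make the steps just quoted concrete, here is a minimal sketch (not from the Sonatype docs) of collecting one consistent backup set after the export task has finished. All paths, including the $data-dir location and the destination, are assumptions for illustration:

# Gather the exported databases, node ID, blob stores and configuration into one
# dated backup directory ($data-dir assumed to be /nexus-data; all paths are examples).
STAMP=$(date +%F)
DEST=/mnt/backup/nexus/$STAMP
mkdir -p "$DEST"
cp -r /nexus-data/backup         "$DEST/databases"   # folder the export task writes to
cp -r /nexus-data/keystores/node "$DEST/node-id"     # node ID files
cp -r /nexus-data/blobs          "$DEST/blobs"       # file blob stores
cp -r /nexus-data/etc            "$DEST/config"      # custom configuration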
I can't seem to restore my AppEngine backups to a new app as listed in the documentation.
We are using the cron backup as listed in the documentation.
I get through all the stages to launch the restore job successfully, but when it kicks off, all the shards fail with 503 errors.
I tried this with multiple backup files and the experience is the same.
Any advice?
(Java runtime)
I'm posting this hoping it will help someone, as there is a real lack of resources about this in Google's documentation and around the web in general.
While the App Engine documentation says this can be done, I actually found the piece of code that forbids it inside the datastore_admin app.
I managed to connect through the Python remote_api shell, read an entity from the backup and tried saving it to the datastore, but the datastore.Put(entity) operation yielded: "BadRequestError: app s~app_a cannot access app s~app_b's data", so it seems to be blocked at an even lower level.
In the end, I decided to restore only a specific namespace to the same app which was also a tedious task - but it did save the day.
I managed to pull my backup locally through gsutil, installed a Python remote_api version on my app, accessed the interactive shell and wrote this script:
https://gist.github.com/Shuky/ed8728f8eb6187475b9a
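For reference, the surrounding steps looked roughly like these two commands; the bucket name, SDK path and app ID are placeholders, not values from the original post:

gsutil -m cp -r gs://your-backup-bucket/datastore_backups ./backups   # pull the backup files locally
python "$GAE_SDK_DIR/remote_api_shell.py" -s your-app-id.appspot.com  # open the interactive remote_api shell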
Hope this helps.
Shuky
Does anyone know a SQL query that will purge a MediaWiki database of old revisions? My database has grown out of control, and I need to prune it to make it possible to download and manage.
I don't have shell access, so I need to do this with a SQL query.
I have tried the solution suggested here, but it doesn't work: http://www.mediawiki.org/wiki/Extension_talk:SpecialDeleteOldRevisions2#Deleting_only_archived_revisions
Thanks for reading :)
Nicholas
Like you, I don't have shell access to my MediaWiki, so I can't do a lot of things like maintenance.
Here is my solution: host your MediaWiki website on your computer just to do your maintenance tasks:
Back up your database
Back up your MediaWiki folder
Set up Apache (the web server) on your computer
Set up MySQL on your computer
Restore your MediaWiki database on your computer
Put your MediaWiki folder in the Apache root folder
Finally, run the maintenance task you want using the shell. I suggest the deleteOldRevisions script (see the example after this list).
After that, back up the folder and the database again and restore them on the remote host.
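On the local copy, that boils down to something like the following; the local path is only an example, and if I remember correctly the script only reports what it would remove unless you pass --delete:

cd /var/www/html/mediawiki                       # wherever you put your local MediaWiki folder
php maintenance/deleteOldRevisions.php --delete  # actually deletes the old revisions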
Use the Maintenance extension and run the relevant maintenance scripts with it. Direct database manipulation is pure madness, and using a local LAMP install as suggested by the other answer is quite cumbersome.
Shell access is really required to run a MediaWiki properly, but this is a common problem; please report your experience with the extension on its talk page, or file a bug if you find any.
Does anyone know of a script or program that can be used for backing up multiple websites?
Ideally, I would like to have it set up on the server where the backups will be stored.
I would like to be able to add a website's login info, have it connect and create a zip file or similar, which would then be sent back to the remote server to be saved as a backup, etc.
It would also need to be able to run as a cron job so it backs up at least daily.
I can find PC-to-server backup tools that are similar, but no server-to-server remote backup scripts, etc.
It would be heavily used, and it needs a GUI so less technical people can use it too.
Does anyone know of anything similar to what we need?
HTTrack website mirroring utility.
Wget and scripts
RSync and FTP login (or SFTP for security)
Git can be used for backup and has security features and networking ability.
7Zip can be called from the command line to create a zip file.
In any case you will need to implement either secure FTP (SSH secured) OR a password-secured upload form. If you feel clever you might use WebDAV.
Here's what I would do:
Put a backup generator script on each website (outputting a ZIP)
Protect its access with a .htpasswd file
On the backup server, set up a cron script that downloads all the backups and stores them (a rough sketch follows).
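A minimal sketch of that download step, assuming each site exposes its generated ZIP at a fixed URL behind the .htpasswd protection; the site names, URL path and credentials are placeholders:

#!/bin/sh
# Run daily from cron, e.g.: 0 4 * * * /usr/local/bin/pull-site-backups.sh
STAMP=$(date +%F)
for SITE in example-one.com example-two.com; do
  mkdir -p "/srv/backups/$SITE"
  wget --user=backup --password='secret' \
       -O "/srv/backups/$SITE/$STAMP.zip" \
       "https://$SITE/backup/site-backup.zip"
done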
I would like to make a complete backup of my whole Joomla 1.5 based site from time to time. How would this ideally be done? Are there any common pitfalls? Note that I only have FTP access to the hosting server. Is there a step-by-step tutorial somewhere? I am using the latest JoomGallery and Kunena 1.0.9 (Legacy mode).
Maybe there is a good way to automate this?
There are two parts of the backup you have to worry about: the database and the files.
The first part is the database. It can be backed up using something like phpMyAdmin. If you don't have this available on your server already, it's not too hard to upload and get it going yourself. From there, you can just Export the entire database to a gzip file.
The second part is the code and uploaded files. The code base shouldn't change too often, so you could probably just make one backup of this. There are a number of ways. The simplest is to just download the entire folder via FTP, though if you're on Linux, I'm sure someone will know a single command line to get all the changed files (rsync?).
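If FTP is all you have, one single-command option on a Linux machine is to let wget mirror the site over FTP; the host, credentials and path here are placeholders:

wget -m ftp://user:password@example.com/public_html/ -P ./joomla-backup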
The database is the main thing you have to worry about though: everything else should be able to be rebuilt just by reinstalling.
I think this is what you need: http://www.joomlapack.net/. I use it myself and it works like a charm, both for backups and for moving my Joomla installations from developer sites to the live site.
Get an FTP synchronisation tool and keep an up-to-date copy of your site locally. Then you could run the batch script
mysqldump -hhost -uuser -p%1 schema > C:\backup.sql
to create a backup of your mysql tables at various points in time.
Edit:
You would have to have MySQL Server installed on your local machine, with its bin directory in your PATH, in order to run the mysqldump command without much hassle. -p%1 takes the password provided on the command line, as you wouldn't want to store passwords in your batch script.
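For example, if you save that line as backup.bat (the name is just an example), you would call it with the password as the first argument:

backup.bat yourpassword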
If you only have FTP access you are in a bit of a bind, as besides all the files you'll also have to back up the database. Without access to the database, a full backup won't do you any good.
Whatever backup strategy you choose, be sure it can handle UTF-8 correctly. Joomla 1.5 stores all content as UTF-8, even when the database charset is set to 'iso-8859-1', so when the backup solution detects the database charset, characters like € or é will come out as mojibake such as â‚¬ or Ã© - not really what you want.
I absolutely endorse using Joomlapack - it works great. The optional remote tools allow you to initiate the backup from a Windows desktop machine - it performs the backup and downloads it. The remote tool has a scheduler, and you can also set it to back up and download a list of sites.
Joomlapack also provides a file "kickstart.php" which you copy to your empty server account along with the backup, which automates the restore procedure. You do have to create an empty database with PHPMyAdmin or similar, and you are given the opportunity to supply the database parameters (host, database, username, password) during the process.
One pitfall I did run into with this though is that some common components can have absolute URLs in their configuration - e.g. SOBI2, Virtuemart. It's then just a matter of finding the appropriate configuration file, editing it and re-uploading it.
Another problem was that one archive file (either ZIP or their JPA format) contained a filename with a "?" character in it (from a Linux server), and this caused a bit of trouble installing it locally on a Windows WAMP stack - the extraction of the ZIP file failed, and it stopped the process from completing cleanly.
I suggest using the automatic backup service at http://www.everlive.net
Update:
Ok, here is some more information. EverLive.net is a website where you can create a free account. Enter your website details and you are ready to take your backups with just one click. Restoring is also possible in the same way.
Further, you can use the automatic backup option to take backups at defined intervals. Other than that, you can use the website health check service to inform you if your website is not available.