Nexus 3 backup via command line? - backup

In Nexus 3 the backup procedure has changed.
In Nexus 2 the recommendation was to run an OS scheduled task / cron job to rsync certain directories to a backup location.
In Nexus 3 the recommended way seems to be to schedule the predefined Nexus task "Export configuration & metadata for backup", and then also create a cron job to back up what that task exports.
Is it still possible in Nexus 3 to do an old-style backup: shut down the server, back up certain directories, and for restore just put everything back? Will that work?
Or is there a command-line way to run this task?
The way this is done in Nexus 3 does not seem to be thought through very well. You need to do a lot more to achieve what a single cron job did in Nexus 2 (a rough sketch is below the links):
1. Create a scheduled task to export data.
2. Create a cron job to back up the exported data.
3. Make sure that the scheduled task runs and finishes before the cron job.
See for example https://help.sonatype.com/display/NXRM3/Restore+Exported+Databases
See also Nexus Repository 3 backup
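For illustration, the two-step workflow ends up looking roughly like this; the data directory, backup target and times are assumptions for a typical install, not anything Sonatype prescribes:

    # Step 1 is done in the Nexus UI: schedule the "Export configuration & metadata
    # for backup" task to write to e.g. /opt/sonatype-work/nexus3/backup at 02:00.
    # Step 2 is a crontab entry that copies the export plus the blob stores afterwards,
    # leaving enough slack for the export task to finish:
    0 3 * * * rsync -a /opt/sonatype-work/nexus3/backup /opt/sonatype-work/nexus3/blobs /mnt/backup/nexus3/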

If you back up the entire data (sonatype-work) directory, this should work as you wish. However, since the data directory is large and has many moving parts, it is safer to use the task; otherwise you may get copies of things that were in motion, which could be corrupt, and your backup would not work. As far as I know, copying the work directory is only recommended for servers that are down, which isn't an option for many bigger companies.
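A minimal sketch of that old-style cold backup, assuming a typical Linux install under /opt and a service unit called nexus (both assumptions):

    systemctl stop nexus          # or ./bin/nexus stop
    rsync -a /opt/sonatype-work/nexus3/ /mnt/backup/nexus3-$(date +%F)/
    systemctl start nexus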

Copying the entire folder did not work for me and resulted in OrientDB problems. Last year I started to create N3DR. Version 3.5.0 has just been released.

https://help.sonatype.com/plugins/servlet/mobile?contentId=5412146#content/view/5412146
In case the link goes bad, here is the content (from Oct 20, 2017):
Nexus Repository stores data in blob stores and keeps some metadata and configuration information separately in databases. You must back up the blob stores and metadata databases together, to a new location, in order to keep the data intact.
Complete the steps below to perform a backup:
Blob Store Backup
You must back up the filesystem or object store containing the blobs separately from Nexus Repository.
For File blob stores, back up the directory storing the blobs.
For a typical configuration, this will be $data-dir/blobs.
For S3 blob stores, you can use bucket versioning as an alternative to backups. You can also mirror the bucket to another S3 bucket instead.
For cloud-based storage providers (S3, Azure, etc.), refer to their documentation about storage backup options.
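For a File blob store that can be a plain copy of the blobs directory; for an S3 blob store, mirroring to a second bucket. The paths and bucket names below are placeholders, not values from the documentation:

    # File blob store: copy the default blobs directory
    rsync -a /opt/sonatype-work/nexus3/blobs/ /mnt/backup/nexus3/blobs/
    # S3 blob store: mirror the bucket to a second bucket
    aws s3 sync s3://nexus-blob-store s3://nexus-blob-store-backup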
Node ID Backup
Each Nexus Repository instance is associated with a distinct ID. You must back up this ID so that blob storage metrics (the size and count of blobs on disk) and Nexus Firewall reports will function in the event of a restore / moving Nexus Repository from one server to another. The files to back up to preserve the node ID are located in the following location (also see Directories):
$data-dir/keystores/node/
To use this backup, place these files in the same location before starting Nexus Repository.
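In practice that is just a copy of a few small files; the paths below assume the default data directory:

    # back up the node ID files
    cp -a /opt/sonatype-work/nexus3/keystores/node /mnt/backup/nexus3/keystores/
    # on restore, copy them back into $data-dir/keystores/node before starting Nexus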
Database Backup
The databases that you export have pointers to blob stores that contain components and assets potentially across multiple repositories. If you don’t back them up together, the component metadata can point to non-existent blob stores. So, your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Here’s a common scenario for backing up custom configurations in tandem with the database export task:
Configure the appropriate backup task to export databases:
Use the Admin - Export databases for backup task for OrientDB databases
Use the Admin - Backup H2 Database task for H2 databases (PRO)
Run the task to export the databases to the configured folder.
Back up custom configurations in your installation and data directories at the same time you run the export task.
Back up all blob stores.
Store all backed up configurations and exported data together.
Write access to databases is temporarily suspended until a backup is complete. It’s advised to schedule backup tasks during off-hours.
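Outside the quoted docs: this also covers the "run this task from the command line" part of the question on recent Nexus 3 versions, which expose a tasks REST API, so the export task can be triggered by ID from a script or cron job. The credentials, host and task ID below are placeholders:

    # list tasks to find the ID of the "Export configuration & metadata for backup" task
    curl -s -u admin:admin123 http://localhost:8081/service/rest/v1/tasks
    # trigger that task by ID, then copy the exported files once it has finished
    curl -u admin:admin123 -X POST http://localhost:8081/service/rest/v1/tasks/<task-id>/run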

Related

OperationalError: Attempt to Write A ReadOnly Database on Google Cloud Application

Recently, I have been trying to deploy an interactive Google App Engine app that writes to a SQLite database. It works fine when running the app locally, but when running it on the server I receive the error:
OperationalError: attempt to write a readonly database
I tried changing the permissions on my .db, .sql but no luck.
Any advice would be greatly appreciated.
You can try changing the permissions of the directory and checking that the .sqlite file exists and is writable.
But generally speaking it is not a good idea to rely on disk data when working on App Engine, as disk storage is ephemeral (unless you are using persistent disks on flex), and even then it's better to use a cloud database solution.
App Engine has a read-only file system, i.e. no files can be modified. It does, however, have a /tmp/ folder for storing temporary files, as the name suggests. It is actually backed by RAM, so it's not a good idea if the database is huge.
On app startup you can copy your original database file to the /tmp/ folder and use it from there afterwards.
This works. However, all changes to the database are lost when the app's nodes scale to 0. Each node of the app has its own copy of the database and the data is not shared between nodes. If you need the data to be shared between the app nodes, it's better to use Cloud SQL.
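The copy itself is nothing more than this, run once at application startup; the file name is an assumption:

    # copy the read-only, bundled database into the writable /tmp filesystem
    cp ./app.db /tmp/app.db
    # ...and point the application's SQLite connection at /tmp/app.db instead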

Restore database in mongodb replication

My requirement is to restore MongoDB databases from a dev box to a test server that has three nodes. I have set up replication between the three nodes for high availability.
I have created a backup on dev using mongodump, writing to a separate directory.
Now I need to restore all databases on the test server from the copied folder path.
I have looked at a number of articles on the internet but did not find an exact solution. I know the mongorestore command will work, but how do I restore from a different path?
I tried the following steps to achieve this:
1. Restore with replication / without replication.
2. Stop the mongodb service, copy the backup folder to the data directory, then start the service.
Mongo config information:
storage:
  dbPath: "E:/AO_DATA/mongodb/data"
systemLog:
  destination: file
  path: "E:/AO_DATA/mongodb/log/mongod.log"
net:
  port: 30198
replication:
  replSetName: "repl_01"
Please help me to implement it.
Try using Studio 3T. When you set up your replica set in this software, you will see your nodes and your collections. Right-click on the collection, then export & import in your desired format and location.
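If you would rather stay on the command line, mongorestore can read the copied dump folder directly when pointed at the replica set; the host names and dump path below are placeholders, while the port and replica set name come from the config above:

    mongorestore --host "repl_01/node1:30198,node2:30198,node3:30198" --dir "E:/backup/dump"

mongorestore writes through the primary, and replication then propagates the restored data to the secondaries.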

Distributing a hsqldb database for use

I have gone through the docs and I haven't been able to fully understand them.
My question :
For testing purposes we created a replica of a database in HSQLDB and want to use it as an in-process db for unit testing.
Is there a way that I can distribute the replica database so that people can connect to it? I have used the BACKUP command and have the tar file. But how do I open a connection to the db that uses this backed-up db... something along the lines of handing an .mdb file (in the case of Access) to another user and asking him/her to use that.
Regards,
Chetan
You need to expand the backed-up database, using standard gzip / tar tools, before you can connect to it.
The HSQLDB Jar can be used to extract the database files from the backup file. For example:
java -cp hsqldb.jar org.hsqldb.lib.tar.DbBackup --extract tardir/backup.tar dbdir
In the example, the first file path is the backup file, and the second one, dbdir, is the directory path where the database files are expanded.
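Once the files are expanded, the other user connects with a normal file-mode JDBC URL pointing at the database name inside that directory; "mydb" below is an assumption, use whatever the extracted *.properties / *.script files are called:

    jdbc:hsqldb:file:dbdir/mydb

For a quick check from the shell, the bundled SqlTool can open the same URL (assuming sqltool.jar is available):

    java -cp sqltool.jar org.hsqldb.cmdline.SqlTool --inlineRc url=jdbc:hsqldb:file:dbdir/mydb,user=SA,password=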

AWS EC2- Synching source code files with S3 - is it a proper approach?

On an app server where a few source files change frequently, is the following approach recommended?
Use a cron job with s3tools to sync the source files to a private S3 bucket (every 15 minutes, for example).
On server start-up, use a user data script to sync from the sources bucket and retrieve the latest sources.
Advantages:
1. No need to attach an EBS volume to the app server just to save a few files
2. Similar setup to all app servers
3. Sources automatically backed up.
4. As a byproduct, distributes code to multiple app servers automatically.
Disadvantages:
keeping source code on S3
other?
What do you think about this methodology? Is this the right way to use EC2 when source code changes frequently (a few times a day)? Please recommend the best approach for running EC2 instances where sources change often.
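For concreteness, the setup being asked about would look roughly like this; the bucket name, paths and schedule are made up for the example:

    # cron on the app server: push local sources to the private bucket every 15 minutes
    */15 * * * * s3cmd sync /var/www/app/ s3://my-private-source-bucket/app/
    # EC2 user data script: pull the latest sources on boot
    s3cmd sync s3://my-private-source-bucket/app/ /var/www/app/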
I think you're better off using a proper source code repository, like Subversion or Git, rather than storing the source files on S3. That way you can have a central location for the source files while avoiding the update consistency problems that kdgregory mentioned.
You can put the source repository on one of your own servers outside of EC2, or host it on an EC2 instance (make sure the repository files are on an EBS volume in the latter case).
If you're going to be running a large number of EC2 instances, then it will be less effort to have them sync themselves from a central location (ie, you sync to private bucket, app-servers sync from that bucket).
HOWEVER, recognize that updates to an S3 bucket are atomic only at the object level, and more importantly, are not guaranteed to be immediately consistent (although I recall seeing a recent note that the us-west endpoint does offer read-after-write consistency).
This means that your app-servers may load a set of new files that are internally inconsistent -- some will be old, some will be new. If this is a problem for you, then you should implement a scheme that uploads directly to the app-servers, and ensures changeset consistency (perhaps by uploading to a temporary directory that is then renamed).
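One way to get that changeset consistency with the S3 approach is to download into a staging directory and only swap it into place once the sync has finished; the paths are placeholders:

    # download the new changeset into a staging directory first
    s3cmd sync s3://my-private-source-bucket/app/ /srv/app-incoming/
    # then swap it into place in one step once the sync has completed
    mv /srv/app /srv/app-previous && mv /srv/app-incoming /srv/app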

How do I backup a nexus repository manager

The Nexus book (http://www.sonatype.com/books/nexus-book/reference/) does not seem to spend any time on how one should go about backing up a Nexus repository. If I am installing my snapshots and releases into this local repository, it seems that it would behoove me to back it up. However, I'm not really interested in backing up anything that can easily be downloaded from a remote repository.
Some google searches do not seem to reveal the canonical answer either, so perhaps for posterity it can be recorded here.
Thanks,
Nathan
When you install Nexus, you'll end up with two directories:
nexus-webapp-1.3.1.1/
sonatype-work/
We've separated the application from the data and configuration. The Nexus application is in nexus-webapp-1.3.1.1/ and the data and configuration is in sonatype-work/nexus. This was mainly done to facilitate easier upgrades, but it also has the side-effect of making it very easy to backup a Nexus installation.
The Simple Answer
Nexus doesn't store repositories in a database or do anything that would preclude a simple backup of the file system under sonatype-work/nexus. If you need to create a complete backup, just archive the contents of sonatype-work/nexus.
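A minimal sketch of such a full backup, assuming the install lives under /opt and the archive goes to /mnt/backup (both assumptions):

    tar czf /mnt/backup/nexus-work-$(date +%F).tar.gz -C /opt sonatype-work/nexus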
Better Answer
If you want a more intelligent approach to backing up a Nexus installation, you will certainly want to back up everything under sonatype-work/nexus/conf, sonatype-work/nexus/storage, and sonatype-work/nexus/template-store. If you want to back up the metadata and file attributes that Nexus keeps for proxy repositories, back up sonatype-work/nexus/proxy, although this isn't required, as the information about a proxy repository will be regenerated on demand as attributes are requested.
You don't need to back up sonatype-work/nexus/logs and you don't need to back up the Lucene indexes in sonatype-work/nexus/indexer.
Nexus Pro Answer
There is a Nexus Professional plugin which can automate the process of creating a backup of the Nexus configuration data. This plugin addresses the contents of the sonatype-work/nexus/conf directory. If you need to back up the sonatype-work/nexus/storage directory, you will need to configure some backup system to back up the contents of that filesystem. Once again, as with Nexus Open Source, there is currently no real benefit in backing up the contents of sonatype-work/nexus/indexer or sonatype-work/nexus/logs.
Excluding Storage for Remote Repositories
In your question you mention that you want to exclude the storage devoted to the local cache of remote repositories. If you are interested in doing this, you'll have to go a level of granularity further and exclude the directories under sonatype-work/nexus/storage that correspond to the remote (proxy) repositories.
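With rsync that exclusion might look like the following, where "central" stands in for whatever proxy repositories you have defined and the paths are assumptions:

    rsync -a --exclude 'storage/central/' /opt/sonatype-work/nexus/ /mnt/backup/nexus/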
Do you need to shut Nexus down for a backup?
Brian Fox told me no: the only real chance of file contention is the files in the indexer/ directory. You shouldn't have a problem backing up the sonatype-work filesystem with a running instance of Nexus.
BTW, thanks for the question, this answer will likely be incorporated into the next version of the Nexus book.
AFAIK Nexus (free version) does not have any backup features, but it should be as simple as knowing your company's groupId and grabbing it from the storage directories in Nexus.
But I would schedule a complete repository backup too; you never know when the remote repositories will be down, right when you need them the most.