Restore database in MongoDB replication

My requirement is to restore MongoDB databases from a dev box to a test server that has three nodes. I have set up replication between the three nodes for high availability.
I created a backup on dev using mongodump into a separate directory.
Now my requirement is to restore all databases on the test server from the copied folder path.
I have looked at a number of articles on the internet but did not find an exact solution for restoring the databases. I know the mongorestore command will work, but how do I restore from a different path?
I tried the following steps to achieve this:
1. Restore with replication / without replication.
2. Stop the mongodb service, copy the backup folder to the data directory, then start the service.
Mongo config information:
storage:
  dbPath: "E:/AO_DATA/mongodb/data"
systemLog:
  destination: file
  path: "E:/AO_DATA/mongodb/log/mongod.log"
net:
  port: 30198
replication:
  replSetName: "repl_01"
Please help me to implement it.

Try using Studio 3T. When you set up your replica set in this tool, you will see your nodes and your collections. Right-click on a collection, then export and import in your desired format and location.
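Alternatively, mongorestore can read the copied dump folder directly if you point it at the replica set; writes go through the primary and replicate to the other nodes. A minimal sketch, assuming the dump was copied to E:/AO_DATA/backup/dump and the hosts are node1/node2/node3 (both are placeholders; the port comes from the config above):

# restore every database found in the copied dump folder
mongorestore --host "repl_01/node1:30198,node2:30198,node3:30198" --dir "E:/AO_DATA/backup/dump"

# or restore a single database only (hypothetical name "mydb")
mongorestore --host "repl_01/node1:30198,node2:30198,node3:30198" --nsInclude "mydb.*" --dir "E:/AO_DATA/backup/dump"

If authentication is enabled, add the usual --username / --password / --authenticationDatabase options.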


Backup and restoration of Mautic

I am a beginner in IT and Web Development.
I would like to create a backup of the Mautic installation on hosting server "A" and restore it on another server "B". How do I do that?
If it’s possible to automate the backup and the restoration, please tell me how to proceed.
Do the following to back up and restore your Mautic:
On server A:
Zip your current Mautic directory, then download it.
Export your Mautic database.
On server B:
Create a database.
Import the database you exported.
Upload the Mautic files to the domain folder you want to use.
Change the DB connection settings in app/config/local.php.
That should fix the issue.
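A rough command-line version of the above, assuming a MySQL database and SSH access between the servers (database name, user and paths are all placeholders):

# on server A: archive the Mautic directory and export the database
zip -r mautic_files.zip /var/www/mautic
mysqldump -u mautic_user -p mautic_db > mautic_db.sql

# copy both files to server B (scp, SFTP, etc.), then on server B:
mysql -u mautic_user -p -e "CREATE DATABASE mautic_db"
mysql -u mautic_user -p mautic_db < mautic_db.sql
unzip mautic_files.zip -d /var/www/your-domain

# finally, update the DB settings in app/config/local.php on server B

To automate it, the export commands on server A can go into a small script run from cron.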

Proposal to Migrate OpenNebula Datastore from Local FS to NFS

I have an instance of OpenNebula with 2 nodes running KVM and a local file store. VM images are scp'd to each node, so there is no option of failover or live migration.
I would like to implement NFS shared storage and move the VMs from the local FS datastore to the NFS shared-storage datastore. OpenNebula supports migrating VMs between datastores, but only between datastores of the same type, i.e. 'ssh' to 'ssh' but not 'ssh' to 'shared'.
I am working on a method of achieving this, and would love some feedback as to why this is a good or a bad idea.
Thanks
OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type of datastore. I have been working on a method that works and want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the transfer manager driver ('TM_MAD') setting. This setting cannot be changed, either through Sunstone or through the CLI, so we need a method to do just that. This is what I did. I started with a fresh install of OpenNebula 5.4.13 in one VM and 2 VM nodes, all running Debian 9 inside VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process, so make sure you back up everything first!
Steps
To migrate to a different store, there are a few steps we need to do. They are as follows:
Set up the NFS share exports,
Move the VM images to the NFS share and mount the datastore,
Change the datastore types,
Configure the nodes for NFS share.
Setup NFS Server
The first thing we want to do is set up the NFS shares we want to use. I'm using a single share for the base datastore folder, but you could use separate shares for each datastore ID from different NFS servers.
On the NFS server, create the datastore folder, i.e. mkdir /share/one_datastore,
Add the datastore path to /etc/exports and export the new share: exportfs -rav (see the example entry below),
Confirm the share is available: showmount -e localhost
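The /etc/exports entry on the NFS server might look something like this (the subnet and the option list are assumptions; adjust them to your network and security needs):

/share/one_datastore 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)

After editing the file, exportfs -rav picks up the change, as in the step above.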
Prepare to Migrate
Before we modify the datastores there are a few things to do first:
Shut down any running VMs and undeploy them (a CLI sketch follows this list). This saves the machine state and copies the images back to the image store,
Stop the Sunstone and OpenNebula services: systemctl stop opennebula && systemctl stop opennebula-sunstone.
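If you prefer the CLI to Sunstone for the undeploy step, something like this should do it (the VM ID is a placeholder):

onevm list
onevm undeploy <vmid>

Wait for the VMs to reach the UNDEPLOYED state before stopping the services.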
Migrate Data
With shared storage all the nodes access the same VM disk images, so we copy the VM data to the NFS share ready for mounting.
From the Sunstone frontend server confirm the NFS shares showmount -e [nfs-server],
Create a temp folder to mount the share in mkdir /mnt/datastore,
Temporarily mount the NFS folder mount [nfs-server]:/share/one_datastore /mnt/datastore,
Move the datastore folders to the share mv /var/lib/one/datastores/* /mnt/datastore/
OpenNebula datastore folders now live on the NFS server: ls /mnt/datastore should list folders 0, 1 and 2,
Mount the NFS share to replace the OpenNebula datastore folder mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the folders are available ls /var/lib/one/datastores should list our 3 folders 0, 1 and 2,
Add the mount into /etc/fstab to persist the mount on boot.
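The fstab entry on the frontend might look like this (the options are an example; _netdev just makes sure the mount waits for the network):

[nfs-server]:/share/one_datastore  /var/lib/one/datastores  nfs  defaults,_netdev  0  0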
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore type from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same approach works for MySQL.
Open the OpenNebula database sqlite3 /var/lib/one/one.db,
View all tables with .tables. datastore_pool is the table we want to modify,
List all the records in the table select * from datastore_pool; will result in a screen-full of configuration data. Each record has an identifier oid which matches the datastore ID, like this (the first 0 is the datastore ID for the default SYSTEM database):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
Now to change the datastore type. Grab the data from the 3rd column, body
(you can run select body from datastore_pool where oid=0;), and copy it into your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
Now update the SYSTEM datastore record. Run the following command on the database, replacing [datastore-config] with the text block you just modified: update datastore_pool set body='[datastore-config]' where oid=0;,
Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or qcow2 driver. I used qcow2. So: select body from datastore_pool where oid=1; and then:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
Update the record: update datastore_pool set body='[datastore-config]' where oid=1;,
Update the FILES datastore (oid=2) by replacing <TM_MAD><![CDATA[ssh]]></TM_MAD> with <TM_MAD><![CDATA[shared]]></TM_MAD> and updating the record using the method above.
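If you would rather not copy and paste the XML by hand, the same edits can be made with SQLite's replace() function. A sketch, assuming the default datastore IDs 0 (system), 1 (image) and 2 (files); back up one.db first and keep OpenNebula stopped while editing:

sqlite3 /var/lib/one/one.db <<'SQL'
UPDATE datastore_pool SET body = replace(body, '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[shared]]></TM_MAD>') WHERE oid = 0;
UPDATE datastore_pool SET body = replace(body, '<SHARED><![CDATA[NO]]></SHARED>', '<SHARED><![CDATA[YES]]></SHARED>') WHERE oid = 0;
UPDATE datastore_pool SET body = replace(body, '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[qcow2]]></TM_MAD>') WHERE oid = 1;
UPDATE datastore_pool SET body = replace(body, '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[shared]]></TM_MAD>') WHERE oid = 2;
SQL

Run select body from datastore_pool; afterwards to confirm the records look the way you expect.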
Now that the datastores have been updated to use the shared driver, let's start Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check whether SHARED is enabled and that the correct driver shows, i.e. shared or qcow2.
DON'T DO ANYTHING YET! We still need to configure the nodes!
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data in the node datastores, so we can just mount the NFS share over the datastores folder. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned. Right, let's get on to it:
Check the contents of /var/lib/one/datastores. If you are mounting each datastore-ID folder to its own NFS share, you can do that instead of mounting the entire datastores folder; in that case just empty the 0, 1 and 2 folders. Otherwise remove all folders from the datastores folder,
If not already installed: apt-get install nfs-common,
Check for NFS shares: showmount -e [nfs-server],
Mount the nfs share to the datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the mount i.e. df,
Edit /etc/fstab, adding the mount so it's mounted on the next boot (see the frontend example above),
Restart the node to confirm the datastore NFS mount persists, and to give it a fresh start!
Repeat with all host nodes.
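Put together, the per-node setup looks roughly like this (same share and server placeholders as above; only wipe the datastores folder once you have confirmed it is empty or backed up):

apt-get install nfs-common
showmount -e [nfs-server]
rm -rf /var/lib/one/datastores/*
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores
echo '[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults,_netdev 0 0' >> /etc/fstab
reboot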
Test it Out
In Sunstone go to the Hosts tab and check they are up and running. Next grab a VM and deploy it. It should deploy without any issues and start booting.
Once it's up and running I like to keep pinging the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows) and then in Sunstone open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check the VM had migrated. After that it showed correctly, but I think that was a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one failed ping in the results.
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please also let me know.
OK, have fun with it. I'm off to try moving the shared storage over to some kind of storage cluster like Ceph or GlusterFS!

Nexus 3 backup via command line?

In Nexus 3 the backup procedure has changed.
In Nexus 2 the recommendation was to run an OS scheduled task / cron job to rsync some directories to a backup location.
In Nexus 3 the recommended way seems to be to schedule the predefined 'Export configuration & metadata for backup' task, and then also create a cron job to back up what that task exports.
Is it still possible in Nexus 3 to do an old-style backup? Shut down the server and back up certain directories, and then for a restore just put everything back? Will that work?
Or is there a command line way to run this task?
The way this is done in Nexus 3 does not seem very well thought through. You need to do a lot more than what could be done with a single cron job in Nexus 2:
Create a scheduled task to export data.
Create a cron job to back up the exported data.
Make sure the scheduled task runs and finishes before the cron job.
See for example https://help.sonatype.com/display/NXRM3/Restore+Exported+Databases
See also Nexus Repository 3 backup
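As for running the task from the command line: recent Nexus 3 versions expose a tasks REST API that a cron job can call to trigger the export before the rsync step. A sketch, with the URL, credentials and task ID all placeholders:

# find the ID of the export task
curl -u admin:yourpassword http://localhost:8081/service/rest/v1/tasks

# trigger it, then wait long enough for it to finish before copying the export folder
curl -u admin:yourpassword -X POST http://localhost:8081/service/rest/v1/tasks/<task-id>/run

Check the API documentation for your exact Nexus version, as the endpoints have changed over time.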
If you back up the entire data (sonatype-work) directory, this should work as you wish. However, since the data directory is large and has many moving parts, it is safer to use the task; otherwise you may catch files mid-write, which could corrupt the copy and leave your backup unusable. As far as I know, copying the work directory is only recommended for servers that are down, which isn't an option for many bigger companies.
Copying the entire folder did not work for me and resulted in OrientDB problems. Last year I started to create N3DR. Version 3.5.0 has just been released.
https://help.sonatype.com/plugins/servlet/mobile?contentId=5412146#content/view/5412146
In case the link goes bad, here is the content (from Oct 20, 2017):
Nexus Repository stores data in blob stores and keeps some metadata and configuration information separately in databases. You must back up the blob stores and metadata databases together. Your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Complete the steps below to perform a backup:
Blob Store Backup
You must back up the filesystem or object store containing the blobs separately from Nexus Repository.
For File blob stores, back up the directory storing the blobs.
For a typical configuration, this will be $data-dir/blobs.
For S3 blob stores, you can use bucket versioning as an alternative to backups. You can also mirror the bucket to another S3 bucket instead.
For cloud-based storage providers (S3, Azure, etc.), refer to their documentation about storage backup options.
Node ID Backup
Each Nexus Repository instance is associated with a distinct ID. You must back up this ID so that blob storage metrics (the size and count of blobs on disk) and Nexus Firewall reports will function in the event of a restore / moving Nexus Repository from one server to another. The files to back up to preserve the node ID are located in the following location (also see Directories):
$data-dir/keystores/node/
To use this backup, place these files in the same location before starting Nexus Repository.
Database Backup
The databases that you export have pointers to blob stores that contain components and assets potentially across multiple repositories. If you don’t back them up together, the component metadata can point to non-existent blob stores. So, your backup strategy should involve backing up both your databases and blob stores together to a new location in order to keep the data intact.
Here’s a common scenario for backing up custom configurations in tandem with the database export task:
Configure the appropriate backup task to export databases:
Use the Admin - Export databases for backup task for OrientDB databases
Use the Admin - Backup H2 Database task for H2 databases (PRO)
Run the task to export the databases to the configured folder.
Back up custom configurations in your installation and data directories at the same time you run the export task.
Back up all blob stores.
Store all backed up configurations and exported data together.
Write access to databases is temporarily suspended until a backup is complete. It’s advised to schedule backup tasks during off-hours.
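Putting the quoted steps together, a nightly cron job on the Nexus host might look roughly like this (every path here is an assumption, including the folder the export task writes to; schedule it so it runs after the export task has finished):

#!/bin/sh
DATA_DIR=/opt/sonatype-work/nexus3      # $data-dir for this install
BACKUP_DIR=/backup/nexus3               # wherever your backups live

rsync -a "$DATA_DIR/blobs/"           "$BACKUP_DIR/blobs/"
rsync -a "$DATA_DIR/keystores/node/"  "$BACKUP_DIR/keystores/node/"
rsync -a "$DATA_DIR/backup/"          "$BACKUP_DIR/db-export/"   # folder configured in the export task
rsync -a "$DATA_DIR/etc/"             "$BACKUP_DIR/etc/"         # custom configuration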

Plastic SCM migration & configuration guide? to migrate to new machine?

If a guide exists, please don't downvote me; my googling and searching did not come up with one. I looked around and could not find anything definitive.
Please link me to it, or share a definitive set of steps/guide for what to back up and restore, and how, for the migration.
If I understand properly, you need to migrate a Plastic SCM server to a new machine. For that purpose:
Install the latest Plastic SCM version on your new machine.
Migrate the databases: the goal is to move the databases to the new server location. You can directly copy the database files to the new location, or even run a replication operation between the two servers.
Copy the following configuration files in order to keep your server configuration parameters:
db.conf -> handles the database connection.
users.conf and groups.conf -> if you are using the User & Password authentication mode.
plasticd.lic -> The license file.
server.conf -> Your old server configuration parameters.
PS: Remember also to reconfigure your clients to point to the new server location. You will probably have workspaces with selectors pointing to the old server.
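As a rough illustration on Linux (treat every path as an assumption; the default server directory differs per platform, and the server should be stopped on both machines while copying):

# copy the configuration files listed above to the new server
scp /opt/plasticscm5/server/db.conf /opt/plasticscm5/server/users.conf \
    /opt/plasticscm5/server/groups.conf /opt/plasticscm5/server/plasticd.lic \
    /opt/plasticscm5/server/server.conf newserver:/opt/plasticscm5/server/

# then copy the database files themselves; where they live depends on the backend configured in db.conf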

Restore Redis dump to a different database

How can I dump a Redis instance that's running on database 0 and restore it on my local machine into a different database (8)?
I already secure copied the dump file:
scp hostname:/var/lib/redis/dump.rdb .
But if I change my local redis dump.rdb with this one, I'll get the data on database 0. How can I restore it to a specific database?
Firstly, note that the use of numbered/shared Redis databases is inadvisable. You really should consider using dedicated Redis servers with a single DB (0) on them (more info at https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances).
Redis does not offer a straightforward way to do this, but there are two basic ways one could go about it:
Pre-processing: modify the dump.rdb file to load into your database of choice. You could build a tool for that or perhaps use one of the existing ones. Jan-Erik has done an outstanding job of documenting the RDB v7 format at http://rdb.fnordig.de/file_format.html, so all you need to do is basically change the database selector byte.
Post-restore: use the MOVE command on the output of SCANning your restored database - this should be easily scriptable (see the sketch below).
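For the post-restore option, a minimal sketch with redis-cli, assuming the dump was restored into local DB 0 and the keys should end up in DB 8 (key names containing spaces or newlines would need more care):

redis-cli -n 0 --scan | while IFS= read -r key; do
  redis-cli -n 0 move "$key" 8
done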
I ended up creating a script in Ruby to dump and restore the keys I wanted. (Please note that this approach is slow; it takes around 1 minute for 200 keys.)
Get the keys to dump / restore
ssh hostname redis-cli --scan --pattern 'awesome_filter_pattern*'
Open an ssh connection to the production server
Dump the remote key
dump = ssh.exec!("redis-cli dump #{key}").chomp
Restore it on localhost
$redis.connection.restore(key, 0, dump)