How can I back up a Memgraph database? - windows-subsystem-for-linux

I'm running Memgraph inside Ubuntu on WSL and want to make a backup of my database, but I'm having trouble locating the database files.
I've found a question that addresses Memgraph Platform, but I need a solution for WSL.

While running, Memgraph generates several files in its data directory. This is the location where Memgraph saves all permanent data; the default data directory is /var/lib/memgraph.
To create a snapshot of the current database state, run the following query in mgconsole or Memgraph Lab:
CREATE SNAPSHOT;
A backup of a Memgraph instance would, in principle, consist of simply copying the data directory. This can't be done safely without extra steps, because durability files can be deleted while you copy them, for example when the number of snapshots exceeds the maximum allowed number.
To disable this behavior, you can use the following query in mgconsole or Memgraph Lab:
LOCK DATA DIRECTORY;
If you are using Linux to run Memgraph, here are the steps for copying files:
Start your Memgraph instance.
Open a new Linux terminal and check the location of the permanent data directory:
grep -A 1 'permanent data' /etc/memgraph/memgraph.conf
Copy a file from the snapshots directory to your backup folder, e.g.:
cp /var/lib/memgraph/snapshots/20220325125308366007_timestamp_3380 ~/backup/
To allow the deletion of the files again, run the following query in mgconsole or Memgraph Lab:
UNLOCK DATA DIRECTORY;
Memgraph will then delete the files whose deletion was deferred and will again allow deletion of files contained in the data directory.
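Putting the pieces together, here is a minimal sketch of a backup script. It assumes mgconsole accepts queries on standard input, the default data directory, and that the durability files live in the snapshots and wal subdirectories; adjust paths, connection details, and permissions (copying /var/lib/memgraph may require sudo) for your setup:

#!/bin/bash
# Minimal backup sketch: snapshot, lock, copy, unlock.
# Assumptions: mgconsole reads queries from stdin; default paths below;
# durability files sit in the snapshots and wal subdirectories.
set -euo pipefail

DATA_DIR=/var/lib/memgraph
BACKUP_DIR="$HOME/backup"

echo 'CREATE SNAPSHOT;' | mgconsole       # capture the current state
echo 'LOCK DATA DIRECTORY;' | mgconsole   # block deletions while we copy

# Unlock even if the copy fails part-way through.
trap "echo 'UNLOCK DATA DIRECTORY;' | mgconsole" EXIT

mkdir -p "$BACKUP_DIR"
cp -r "$DATA_DIR/snapshots" "$DATA_DIR/wal" "$BACKUP_DIR/"  # may need sudo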

Related

HBase: Retention policy for HBase Export

We are using HBase 1.2.3. I am trying to configure the HBase backup functionality (the Export functionality in version 1.2.3).
I am able to successfully export tables to S3, both full and incremental backups.
On S3, all the files go into the default root/base folder, and a mapping file (I'm not sure in which format) goes into the specified folder.
Two questions:
1. How can I set a retention policy to keep backups for x days? I wrote custom code to delete files/folders under a specific folder, but how do I determine which block files belong to which table, and whether they come from a full or an incremental backup? (See the sketch after these questions.)
2. Can we change the way HBase stores backup files? When we take a backup to the file system, it stores the backup files under the same folder. Can we achieve the same result on S3?
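On the retention side of question 1: assuming the exports land under a common S3 prefix, an S3 lifecycle rule can expire objects after x days without your code having to work out which block files belong to which table. A sketch with placeholder bucket and prefix names; note that a blanket rule will also expire incremental backups that a later restore chain might still need:

# Sketch: expire everything under a backup prefix after 30 days.
# Bucket and prefix are placeholders for wherever your exports land.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-hbase-exports",
      "Filter": { "Prefix": "hbase-exports/" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration file://lifecycle.json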

Where do I put .mdf and .ldf files to share an SQL script through git

I am attempting to share a file that builds and populates an SQL database through git, but it won't create the DB on my team members' machines because the .mdf and .ldf files are located on my machine. How can I rectify this?
If you want to share a SQL script, you don't have to share the database with it!
What is generally done (best practice) is to keep the script needed to create the database (and, if needed, populate it with static/test data) in git, and then have users run that script to build the database.
git is there to keep track of your source code and the changes made to it; you shouldn't put any generated files in it, and .mdf/.ldf files are typically part of what should not be in your git. For generated files within your folder, there are ways to configure git to ignore them.
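For example, a couple of .gitignore entries will keep SQL Server's data and log files from ever being committed; a minimal sketch, run at the repository root:

# Sketch: keep SQL Server data/log files out of version control.
cat >> .gitignore <<'EOF'
*.mdf
*.ldf
EOF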
The value of git is in recording differences between files; if all you want is to hand the files themselves over, git is definitely not the right tool. Put those files on a shared folder (NAS) or Dropbox, pass them on a USB key, or whatever works.
However, if you really want to do this (a bad idea), I guess you can add the files to your repository and either configure SQL Server to find them there or create a symbolic link.

Is it safe to delete the stage directory?

One of my servers is running out of capacity, mostly due to WebLogic's stage folder. I've been looking for information and it seems to be a temporary folder, but unlike in older versions, on WL11g this folder lives outside the tmp folder, so I'm not sure whether I can safely remove it.
The stage directory is where WebLogic copies all the applications it needs to deploy to the managed servers. WLS does not delete any files from this folder, so in the long run, if you have deployed many versions of your application, the folder can become rather large.
So yes, you can delete the contents of this folder. At restart, WLS will copy all the necessary files back into it (this could take some time).
Yes, you can delete the stage. All the necessary application details will be recreated in this directory, but in any case, if it is a large environment (many applications running), take a backup of the directory or rename it first, and then you can safely delete the stage.
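A sketch of that "back it up first" advice. The stage path is an assumption (it typically sits under the server directory of the domain), and the managed server should be stopped before clearing it:

#!/bin/bash
# Sketch: archive the stage directory, then clear it. Paths are assumptions.
set -euo pipefail

STAGE_DIR="$DOMAIN_HOME/servers/MyManagedServer/stage"  # adjust to your domain

# Archive first, in case something is still needed.
tar -czf "$HOME/stage-backup-$(date +%Y%m%d).tar.gz" -C "$STAGE_DIR" .

# The :? guard aborts if STAGE_DIR is unset instead of expanding to rm -rf /*.
rm -rf "${STAGE_DIR:?}"/*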

Backing up source files managed by source control software: TortoiseSVN

I am new to source control and I am confused by something I read on a webpage yesterday (I don't have the link). I have followed these instructions: "create folder structure", then "start Repo-browser", then copy the source files into the trunk folder.
However, when I navigate to the folder using Windows Explorer, I do not see this folder structure.
Therefore I am wondering: where are the files physically stored? The reason I ask is that I want to ensure that NetBackup (our corporate backup tool) backs up the correct directories.
To make sense of the repository structure you need to read the SVN documentation, but the preferred way to back up an SVN repository is with the command
svnadmin dump your_svn_repository_path > destination_filename_backup.svn
You could put this command in a scheduled task that runs some time before your corporate tool executes the full backup of your data, and include destination_filename_backup.svn in your backup job (a sketch follows below).
If you ever need to restore the backup (after recreating the repository), you could use the command
svnadmin load your_svn_repository_path < destination_filename_backup.svn
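A sketch of the scheduled-dump idea mentioned above. The repository path, destination, and schedule are placeholders; on Windows you would register the script with Task Scheduler rather than cron:

#!/bin/bash
# Sketch: dump the repository some time before the corporate backup window.
set -euo pipefail
REPO=/srv/svn/myrepo                    # placeholder repository path
DEST="/backups/myrepo-$(date +%F).svn"  # dated dump file
svnadmin dump "$REPO" > "$DEST"

Saved as, say, /usr/local/bin/svn-dump.sh, a crontab entry such as 0 1 * * * /usr/local/bin/svn-dump.sh produces the dump file before NetBackup runs.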

FTP Concurrency issues using Ipswitch WS-FTP Pro

I think we have a problem in our FTP scripts that pull files from a remote server to a local machine. I couldn't find an answer in Ipswitch's knowledge base or in the scripting documentation.
We are doing an MGET *.* and then an MDELETE *.* immediately after it. I think what is happening is that, while we are copying files from the server, additional files arrive in the same directory, and then the delete command deletes everything from the server. So we end up deleting files we never copied down.
Is there a straightforward way to delete only the files that were copied, or is it going to be some sort of hack job where we generate a dynamic delete script based on what we actually copied down?
Product-specific answers would be much appreciated!
Here were the options I came up with, and what I ended up doing:
1. Rename the extension on the server, copy the renamed files, and then delete the renamed files. This could not work because there is no FTP rename command that works with wildcards (the Windows rename command does, by the way).
2. Move the files to a subdirectory on the server, copy the files from that location, and then delete them from the remote location. This could not work because there is no FTP command to move files on the remote server.
3. Copy the files down in one script, and SHELL a batch file on the local side that dynamically builds a script to connect to the server and delete the files that were copied down. This is the solution I ended up using (a sketch follows below).
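The accepted approach above used a Windows batch file driving WS_FTP. As an illustration of the same idea on a Unix-like system using the stock command-line ftp client (host, credentials, and directories are placeholders), the delete script is generated from the files that actually arrived locally, so anything that landed on the server after the MGET is left alone:

#!/bin/bash
# Sketch: build a delete script from the files that were actually copied down.
set -euo pipefail

cd /local/incoming                 # where the MGET deposited the files

{
  echo "open ftp.example.com"      # placeholder host
  echo "user myuser mypassword"    # placeholder credentials
  echo "cd /remote/outgoing"
  for f in *; do
    echo "delete $f"               # delete only what exists locally
  done
  echo "bye"
} > /tmp/delete-downloaded.ftp     # written outside the globbed directory

ftp -n < /tmp/delete-downloaded.ftp   # -n suppresses auto-login so "user" works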