Nexus 3 Backups - Can I just back up /nexus-data?

I'm using the Docker container to run Nexus 3.
If I stop the container and then back up the volume directory for /nexus-data, will this result in a valid backup?
I just want to confirm.
Thanks!

Yes, that will work just fine; you can directly back up the entire work directory as long as Nexus is not running.
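For example, a minimal sketch of that workflow (the container name nexus and the archive name are assumptions):
docker stop nexus
docker run --rm --volumes-from nexus -v $(pwd):/backup ubuntu tar czf /backup/nexus-data.tar.gz /nexus-data
docker start nexus
The throwaway ubuntu container mounts the same volume as the stopped nexus container and writes the archive into your current directory on the host.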

Related

Nexus Repo - could not lock user prefs

I'm running Sonatype Nexus 3 inside a Docker container, with the following startup command:
docker run -d -p 80:8081 --ulimit nofile=65536:65536 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=6717m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs" sonatype/nexus3
After updating the Docker image version from 3.30.0 to 3.40.1, I keep getting the following warnings regarding user prefs.
2022-07-18 13:14:45,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-07-18 13:15:15,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
As you can see from the startup command, the user prefs directory is inside the Docker volume, at /nexus-data/javaprefs. I have tried looking for existing locks inside the directory but found none. I've also tried completely deleting the directory; the warning still came up, and the folder itself wasn't being recreated by Nexus.
I honestly don't even know if this is an important issue or not, since there is little to no documentation about the user preferences folder.
Even a way to turn off the warning log, which fires every 30s, would be useful.
----UPDATE----
I've tried doing a clean installation of Nexus through Docker, following the simple instructions in the GitHub sonatype/nexus3 Docker repository, and still see these warnings.
I even tried on a different OS (Windows instead of Linux, through Docker Desktop), and with and without a volume for /nexus-data.
At this point I believe it to be a bug in a newer Nexus version.
TL;DR: Adding -Djava.util.prefs.userRoot=/nexus-data/javaprefs should solve the problem, assuming the Nexus data directory is at /nexus-data/.
I had the same issue after upgrading from 3.38.1 to 3.42.0. After some investigation I found that the java.util.prefs.userRoot property indeed got lost somewhere between those versions. The default value in the vanilla Nexus 3.38.1 is /nexus-data/javaprefs.
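A minimal sketch of a startup command with the property hard-coded (the port mapping and heap settings are assumptions; adjust to your setup):
docker run -d -p 8081:8081 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -Djava.util.prefs.userRoot=/nexus-data/javaprefs" sonatype/nexus3
Note that ${NEXUS_DATA} in the original command sits inside double quotes and is therefore expanded by the host shell, not inside the container; if that variable is unset on the host, the property silently points at /javaprefs. Hard-coding the path avoids that.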

AWS Linux and Mercurial auto add to environment

I have a development server on an EC2 instance. Mercurial is also installed.
The environment is using an Apache server working from /var/www/html.
As this is a development environment, I want each commit to the repository to also be copied to the Apache folder (so we can make changes and see them on the fly instead of both committing and then copying them to the environment).
Is there a simple way to do this?
Thanks!
OK - got it.
You simply need to auto-update on a trigger:
Edit a config file (e.g. /etc/mercurial/hgrc) and add a hook:
[hooks]
changegroup = hg update
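If the Apache folder is a separate clone rather than the served repository itself, a hook along these lines could update it instead (a sketch; it assumes /var/www/html is a Mercurial clone of the central repo and that the hook goes in the central repo's .hg/hgrc):
[hooks]
changegroup = hg --cwd /var/www/html pull -u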
Cheers!

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

I've spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server. I have found lots of tutorials for doing this. However, the fact remains that I'm missing the .ui_info file on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from a desktop PC, but that does not help my CrashPlan Pro app connect to the CrashPlan server service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
CrashPlan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; with version 3.6.3_1, however, you should find the .ui_info file in
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want your Crashplan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
And restart CrashPlan while checking the log output with the tail -f command from above:
service crashplan restart
If you finally reach a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only server-side change necessary for the easiest method (without an SSH tunnel) is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
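If you prefer a one-liner, something like this should work (a sketch; it assumes the stock value is 127.0.0.1 and uses the BSD sed inside the jail, keeping a .bak copy):
sed -i.bak 's|<serviceHost>127.0.0.1</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml
service crashplan restart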
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
You'll have to do this every time you want to connect, because the token changes after every CrashPlan restart - or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
That's it - you can start CrashPlan on your remote machine and it will connect properly; no other changes are necessary. The latest CrashPlan (>4.4.1) will actually use the IP address from .ui_info.
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.
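For illustration, the edited line might look like this (JRE_DOWNLOAD_URL is a hypothetical placeholder; the actual variable/URL in install.sh will differ):
wget --no-check-certificate "${JRE_DOWNLOAD_URL}" -O jre.tar.gz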

"Official" docker backup strategy - what about consistency?

The suggested strategy for managing and backing up data in Docker looks something like this:
docker run --name mysqldata -v /var/lib/mysql busybox true
docker run --name mysql --volumes-from mysqldata mysql
docker run --volumes-from mysqldata -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /var/lib/mysql
However, when I back up running containers that way, I won't get a consistent backup, will I?
I'm aware of tools like mysqldump, but what if I need to back up, for example, a folder to which files are constantly added and removed?
The underlying problem you are facing, i.e. backing up changing files, is independent of Docker. Use a tool such as rsnapshot or dirvish to make backups into a volume, and then use the approach you mentioned above to move those backups somewhere safer, like Amazon S3 or Glacier, based on your reliability requirements.
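For the MySQL case specifically, a dump taken inside the running container is consistent by construction; a minimal sketch (the container name mysql and the MYSQL_ROOT_PASSWORD variable follow the official image's conventions, but your names may differ):
docker exec mysql sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > all-databases.sql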
Whether you mount volumes from another container or from the host VM using the -v switch, changes to the files are reflected in all containers (and the host VM) in more or less real time. (There is some delay because of the AUFS layer that Docker uses on top of the host filesystem, but it's not huge.) If the backup container were running perpetually, it could keep taking backups, and the files would always reflect the latest files seen by the mysql container.
Edit: For clarity.

Atlassian Bamboo: First plan with a simple job of downloading a local git repo

I just downloaded the free trial of the Bamboo continuous integration server and created a first plan that does nothing but download the source code from Git. I have a local Git repository on the Bamboo machine, so the Git URL points to a local path.
The problem is that when I run the job, it never finishes, even after waiting for an hour. These are the last lines of the activity log:
07-Apr-2011 20:03:23 Checking out revision f9dc82500914333ed4bbdae5ed038771fd658c3c.
07-Apr-2011 20:03:23 Creating local git repository in '/home/bob/bamboo-home/xml-data/build-dir/DEV-DEV-1/.git'.
From the shell I can go to the directory shown in the log and see that the source code was cloned correctly into the Bamboo working directory. But the job never finishes, and the log has no further updates from this point. I have to terminate the job manually. Any ideas? Am I missing something?
Just a guess, since the Bamboo instance we have at work pulls from AccuRev and not Git, and I've never run into this problem myself - but it may be hung because there isn't a builder defined for that plan. You might try defining a builder (even if it's one that you know will fail) just to see if it makes it to that next step.
I had a very similar problem.
It's not a very original solution, but I just uninstalled Bamboo and installed it again. Now it works.