Arkeia on SuSE Linux: restore via the command line

Could anyone here tell me why my Arkeia restore command does not work?
arkc -restore -start -D file=si105:/Fastora/Moeller from=si105:/Fastora to=localhost:/home/ws -drvname=LTO
I always get ERROR [200], which according to
arkc -usage -debug -error
means:
MLC_ERROR_EXECUTING_FUNCTION 200
I have only one LTO drive attached to my SCSI bus, and its name is LTO.
I have never done this before, but now - after 5 years - I need to restore a directory from my tapes.
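Before digging into Arkeia itself, it may be worth confirming that the OS still sees the drive and can talk to the tape. The sketch below uses standard Linux tools rather than Arkeia commands, and assumes the drive appears as /dev/nst0, which may differ on your system.
# List SCSI devices the kernel knows about; the LTO drive should appear here.
cat /proc/scsi/scsi
# If lsscsi is installed, it gives a friendlier view.
lsscsi
# Query the no-rewind tape device directly; /dev/nst0 is an assumption.
mt -f /dev/nst0 status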

How to use podman's ssh build flag?

I have been using the docker build --ssh flag to give builds access to my keys from ssh-agent.
When I try the same thing with podman it does not work. I am working on macOS Monterey 12.0.1. Intel chip. I have also reproduced this on Ubuntu and WSL2.
❯ podman --version
podman version 3.4.4
This is an example Dockerfile:
FROM python:3.10
RUN mkdir -p -m 0600 ~/.ssh \
&& ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git
When I run DOCKER_BUILDKIT=1 docker build --ssh default . it works, i.e. the build succeeds, the repo is cloned, and the SSH key is not baked into the image.
When I run podman build --ssh default . the build fails with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: error building at STEP "RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git": error while running runtime: exit status 128
I have just begun playing around with podman. Looking at the docs, that flag does appear to be supported. I have tried playing around with the format a little, specifying the id directly for example but no variation of specifying the flag or the mount has worked so far. Is there something about how podman works that I may be missing that explains this?
Adding this line as suggested in the comments:
RUN --mount=type=ssh ssh-add -l
Results in this error:
STEP 4/5: RUN --mount=type=ssh ssh-add -l
Could not open a connection to your authentication agent.
Error: error building at STEP "RUN --mount=type=ssh ssh-add -l": error while running runtime: exit status 2
Edit:
I believe this may have something to do with this issue in buildah. A fix has been merged but has not been released yet, as far as I can see.
The error while running runtime: exit status 2 does not appear to me to be necessarily related to SSH or --ssh for podman build. It's hard to say, really; I've successfully used --ssh the way you are trying to, with some minor differences that I can't relate to the error.
I am also not sure that running ssh-add as part of building the container is what you really meant to do. If you want it to talk to an agent, two environment variables must be exported in the environment where you run ssh-add; they define where to find the agent and are as follows:
SSH_AUTH_SOCK, specifying the path to a socket file that a program uses to communicate with the agent
SSH_AGENT_PID, specifying the PID of the agent
Again, without these two variables present among the exported environment variables, the agent is not discoverable and might as well not exist, so ssh-add will fail.
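For illustration, this is roughly how those two variables come into existence on the host (a sketch; the key path is just an example):
# Start an agent; this prints export statements for SSH_AUTH_SOCK and SSH_AGENT_PID.
eval "$(ssh-agent -s)"
# Confirm the variables are now exported.
echo "$SSH_AUTH_SOCK" "$SSH_AGENT_PID"
# Add a key (example path) to the agent and list what it holds.
ssh-add ~/.ssh/id_ed25519
ssh-add -l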
Since your agent is probably running in the same set of processes that your podman build belongs to, at a minimum the PID denoted by SSH_AGENT_PID should be valid in that namespace (meaning it is normally invalid in the set of processes that container building is isolated to, so defining the variable as part of building the container would be a mistake). It is a similar story with SSH_AUTH_SOCK: the path to the socket file created when the agent starts would not normally refer to a file that exists in the mount namespace of the container being built.
Now, you can run both the agent and ssh-add as part of building a container, but ssh-add reads keys from ~/.ssh and if you had key files there as part of the container image being built you wouldn't need --ssh in the first place, would you?
The value of --ssh lies in letting you transfer your authority to talk to remote services, defined through your keys on the host, to the otherwise very isolated container-building procedure, using nothing but an SSH agent designed for exactly this purpose. That removes the need for things like copying key files into the container. The keys should normally not be part of the built container anyway, especially if they were only needed during the build. The agent, on the other hand, runs on the host, securely encapsulates the keys you add to it, and since the host is where your keys live, that is where you are supposed to run ssh-add to add them to the agent.
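Putting the host side together (a sketch; it assumes a podman/buildah version that already includes the --ssh fix referenced above, and that the key with access to the private repo has been added to the agent):
# The agent must be running and holding the right key before the build starts.
ssh-add -l
# Forward the default agent socket into RUN --mount=type=ssh steps.
podman build --ssh default .
# Or point --ssh at the agent socket explicitly.
podman build --ssh default=$SSH_AUTH_SOCK .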

DSpace 7.1 AIP restore StackOverflowError

I am trying to migrate from DSpace 6.4 to 7.1. The new DSpace is installed on another machine (a virtual machine running CentOS 7 with 8 GB of RAM).
I have created a full-site AIP backup including user passwords (total size of the packages: 11 GB).
I've tried to do a full restore but always get the same error.
So I'm just trying to import only the first level, without children:
JAVA_OPTS="-Xmx2048m -Xss4m -Dfile.encoding=UTF-8" /dspace/bin/dspace packager -r -k -t AIP -e dinkwi.test@gmail.com -o skipIfParentMissing=true -i 123456789/0 /home/dimich/11111/repo.zip
It doesn't matter whether I use the -k or -f parameter, the output is always the same:
Ingesting package located at /home/dimich/11111/repo.zip
Exception: null
java.lang.StackOverflowError
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:788)
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:802)
.... (more than 1k lines)
at org.dspace.eperson.GroupServiceImpl.getChildren(GroupServiceImpl.java:802)
my dspace.log ended with
2021-12-20 11:05:28,141 INFO unknown unknown org.dspace.eperson.GroupServiceImpl @ dinkwi.test@gmail.com::update_group:group_id=9e6a2038-01d9-41ad-96b9-c6fb55b44381
2021-12-20 11:05:30,048 INFO unknown unknown org.dspace.eperson.GroupServiceImpl @ dinkwi.test@gmail.com::update_group:group_id=23aaa7e9-ca2d-4af5-af64-600f7126e2be
2021-12-20 11:05:30,800 INFO unknown unknown org.springframework.cache.ehcache.EhCacheManagerFactoryBean @ Shutting down EhCache CacheManager 'org.dspace.services'
So I just want to figure out the problem: is the stack too small, is there some bug in a user/group that leads to an infinite loop/recursion, or is it something else...
The main problem is that I'm good with PHP/MySQL and have no experience with Java/PostgreSQL or with how to debug this...
Any help would be appreciated.
P.S. After a failed restore I always run the command:
/dspace/bin/dspace cleanup -v
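Two things that might be worth checking (a sketch, not from the original thread): whether the group hierarchy contains a cycle, which would make getChildren() recurse forever, and, if the hierarchy is merely very deep, whether a larger thread stack gets past the error. The query assumes DSpace keeps group nesting in a group2group table with parent_id and child_id columns, and that the database and user are both named dspace; adjust to your setup.
# 1) Look for a group that is (directly or indirectly) its own ancestor.
psql -U dspace -d dspace <<'SQL'
WITH RECURSIVE chain(parent_id, child_id) AS (
    SELECT parent_id, child_id FROM group2group
  UNION
    SELECT c.parent_id, g.child_id
    FROM chain c JOIN group2group g ON g.parent_id = c.child_id
)
SELECT * FROM chain WHERE parent_id = child_id;
SQL
# 2) If no cycle shows up, retry the import with a bigger stack than -Xss4m.
JAVA_OPTS="-Xmx2048m -Xss64m -Dfile.encoding=UTF-8" /dspace/bin/dspace packager -r -k -t AIP \
  -e dinkwi.test@gmail.com -o skipIfParentMissing=true -i 123456789/0 /home/dimich/11111/repo.zip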

rsync to remote location exits with code 12

I am trying to rsync a local folder to a remote location. This is a command that I ran successfully a week ago, but now if I run:
rsync -vrtzu \
    --chown=user:webadm \
    --delete \
    --exclude-from=.rsyncignore \
    FOLDER/ \
    USER@REMOTE:/DESTINATION
Then I get the following error message:
zsh:1: no matches found: --usermap=*:USER
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.3]
make: *** [makefile:39: push] Error 12
The command is run from a makefile, hence the last line.
I am using a regular WSL2 Ubuntu shell, not zsh.
I am able to ssh into the remote location with USER@REMOTE.
I have also checked that both locations have rsync installed (same version).
Finally, there is plenty of disk space available on the remote location.
Any pointers? What should I be checking to improve my diagnostic?
Thanks in advance!
This can happen when the remote shell messes with the command. I'm not sure exactly why or what it does, but it modifies the escaping so that an argument becomes invalid.
In your case the remote shell chokes on --usermap=*:USER when rsync logs in.
The solution is to change the remote (zsh) shell to bash using the chsh command.
I'm pretty sure this is an rsync bug:
zsh:1: no matches found: --usermap=*:USER
It only happens when the remote machine's default shell is zsh.
It was fixed somewhere between rsync 3.2.3 (where it's broken) and 3.2.5 (where the bug is gone).
You can verify this by passing -vv to rsync. This prints as one of the first output lines which command invocation rsync is doing on the remote server via SSH.
On a broken version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=*:user" "--groupmap=*:webadm"
On a fixed version, it prints e.g.:
... ssh ... rsync --server -vvnlogDtpRe.LsfxCIvu "--usermap=\*:user" "--groupmap=\*:webadm"
As you can see, they inserted a \ so that the string is no longer interpreted as a glob by zsh.
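A quick way to confirm which case you are in (a sketch; replace USER@REMOTE with your actual host):
# Compare rsync versions on both ends; per the answer above the escaping fix arrived by 3.2.5.
rsync --version | head -1
ssh USER@REMOTE rsync --version | head -1
# Check which login shell the remote account uses.
ssh USER@REMOTE 'echo $SHELL'
# If it is zsh and upgrading rsync is not an option, switching the login shell to bash works around it.
ssh USER@REMOTE 'chsh -s /bin/bash'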

Ubuntu Server Backup and Restore via tar

I'm trying to learn how to back up and restore my Ubuntu Server via tar so I know that I have a safe system. After I untar and reboot, I have several issues, but they seem to be caused by a read-only file system. The source and destination server both run Ubuntu Server 18.04.5 LTS. The source server is a VPS with 6 GB RAM and 4 vCPUs. The destination server is a VM on my FreeNAS machine with 6 GB RAM and 2 vCPUs.
The primary applications that need to work are my Graylog server and Nagios server. I've mostly followed the instructions at Ubuntu.
First, my tar command is:
sudo tar -c --use-compress-program=pigz -f backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/usr --exclude=/sbin --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found --exclude=/home/*/.cache --exclude=/home/*/.gvfs --exclude=/home/*/.local/share/Trash --exclude=/var/log --exclude=/var/cache/apt/archives --exclude=/usr/src/linux-headers* --one-file-system /
I use pigz to utilize the VPS's 4 vCPUs so it takes less time. I transfer the archive to my VM, which has a fresh copy of Ubuntu Server 18.04.5, and untar with:
sudo tar -xvpzf backup.tar.gz -C / --numeric-owner
After I reboot, I get the following as soon as I boot:
Unable to setup logging. [Errno 30] Read-only file system: '/var/log/landscape/sysinfo.log'
run-parts: /etc/update-motd.d/50-landscape-sysinfo exited with return code 1
mktemp: failed to create file via template '/var/lib/update-notifier/tmp.XXXXXXXXXX': Read-only file system
run-parts: /etc/update-motd.d/95-hwe-eol exited with return code 1
/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-motd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system
I do see that some parts of the system work like the original source: my SSH port changed, the hostname changed, etc. But I get the errors above, and my Graylog and Nagios servers do not work.
So I'm wondering where I went wrong in my process; any help would be appreciated. The source is a live server with backups, so I'm safe there. I'm just making sure I have my ducks in a row for the future.
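One common cause of a root filesystem coming up read-only after this kind of restore (an assumption, not something confirmed in the question) is that the restored /etc/fstab still references the source VPS's disk UUIDs, which do not exist on the new VM, so the root is left mounted read-only. A quick check, as a sketch:
# See how root is currently mounted.
mount | grep ' / '
# Compare the UUIDs the new VM actually has with what the restored /etc/fstab expects.
sudo blkid
cat /etc/fstab
# Remount read-write temporarily so /etc/fstab can be corrected if the UUIDs differ.
sudo mount -o remount,rw /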

Setup Amazon S3 backup on QNAP using s3cmd

I own a QNAP TS-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work from the command line, including running the script file (s3-backup.sh):
#!/bin/bash
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log
(I also tried #!/bin/sh as the shebang, and running s3cmd via Python by prefixing the command with /usr/bin/python.)
If I run using the SSH command prompt, it seems to work perfectly.
The problem, though, is the cron job. I can confirm that the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created or modified.
This is my cronjob task:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of variations on the above, but couldn't figure out what was missing.
I feel like some dependency is missing when the script runs from crontab, compared to when I run it at the command prompt. But I don't know how to debug crontab.
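One way to see what is different about the cron environment (a sketch; the dump path is arbitrary) is to have cron write out its environment and compare it with an interactive SSH session. In cases like this the difference is often HOME or PATH, which is why s3cmd cannot find its default configuration file (~/.s3cfg):
# Temporary crontab entry: dump the environment cron runs with.
* * * * * env > /share/maintenance/log/cron-env.txt 2>&1
# Then, from an SSH session, compare it with the interactive environment.
env | sort > /tmp/shell-env.txt
sort /share/maintenance/log/cron-env.txt > /tmp/cron-env.txt
diff /tmp/shell-env.txt /tmp/cron-env.txt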
It turned out that the problem was that the s3cmd configuration file was not found when s3cmd was run from cron.
So the fix was simply to copy the .s3config file to a safe shared folder and then call s3cmd with the --config parameter followed by that file.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config \
    /share/maintenance/s3-backup/s3cmd.config --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1