Duplicity - can't restore single file - amazon-s3

I'm trying to restore a single file or directory with duplicity from Amazon S3, but I get the following error:
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Traceback (most recent call last):
File "/usr/bin/duplicity", line 1251, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1244, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1198, in main
restore(col_stats)
File "/usr/bin/duplicity", line 538, in restore
restore_get_patched_rop_iter(col_stats)):
File "/usr/bin/duplicity", line 560, in restore_get_patched_rop_iter
backup_chain = col_stats.get_backup_chain_at_time(time)
File "/usr/lib/python2.6/dist-packages/duplicity/collections.py", line 934, in get_backup_chain_at_time
raise CollectionsError("No backup chains found")
CollectionsError: No backup chains found
What am I doing wrong?
Here is how I do my backups:
export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
GPG_KEY=****
BACKUP_SIM_RUN=1
LOGFILE="/var/log/s3-backup.log"
DAILYLOGFILE="/var/log/s3-backup-daily.log"
# The source of your backup
SOURCE=/home/u54433
# The destination
DEST=s3+http://**********
trace () {
stamp=`date +%Y-%m-%d_%H:%M:%S`
echo "$stamp: $*" >> ${DAILYLOGFILE}
}
cat /dev/null > ${DAILYLOGFILE}
trace "removing old backups..."
duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "start backup files..."
duplicity --sign-key=${GPG_KEY} --exclude="**/logs" --s3-european-buckets --s3-use-new-style ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
cat "$DAILYLOGFILE" >> $LOGFILE
export PASSPHRASE=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=

Use the --s3-use-new-style option in all duplicity calls.
I had the same problem as you did. I added the missing option to the duplicity remove-older-than call and everything works great now.
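For reference, the restore call needs the same S3 flags. A minimal sketch of restoring a single path, assuming the ${DEST} URL from the script above (the path and target directory are illustrative):
# Restore one path (relative to the backup root, i.e. to ${SOURCE})
# into a local directory. The S3 flags must match the ones used for backup.
duplicity restore --file-to-restore path/inside/backup \
    --s3-european-buckets --s3-use-new-style \
    ${DEST} /tmp/restored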

It may be better to remove the S3 bucket from Amazon and recreate the full backup; this might resolve the issue.
Also, you can see the link below:
https://answers.launchpad.net/duplicity/+question/107074

For anyone coming back to this question looking for a definitive answer: shaikh-systems' link leads to the realization that there is some issue in Duplicity/AWS communication of IAM sub-account keys. To restore, I got it to work by using/exporting my primary account key/secret. I'm using duplicity 0.6.21.
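In other words, before restoring, export the primary account credentials in place of the IAM sub-account keys (placeholders below, not real values):
# Use the primary AWS account credentials for the restore,
# not the IAM sub-account keys used during backup.
export AWS_ACCESS_KEY_ID=<primary access key id>
export AWS_SECRET_ACCESS_KEY=<primary secret key>
duplicity restore --file-to-restore path/inside/backup ${DEST} /tmp/restored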

Related

Accessing Kaggle tools in VM by mounting key

I am trying to use the Kaggle command line tool and I am running into problems using it inside my own VM. I downloaded the API token from the site and placed it in ~/.kaggle/kaggle.json on Windows. My VM has Ubuntu installed, and in the Vagrantfile I have the following:
config.vm.synced_folder ENV['HOME'] + "/.kaggle", "/home/ubuntu/.kaggle", mount_options: ['dmode=700,fmode=700']
config.vm.provision "shell", inline: <<-SHELL
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle/kaggle.json'" >> /etc/profile.d/myvar.sh
SHELL
when running env command in the vm I see it is correct:
KAGGLE_CONFIG_DIR=/home/ubuntu/.kaggle/kaggle.json
However, when I try to use the kaggle command, for example kaggle -h, I get the following:
(main) vagrant@dev:/home/ubuntu/.kaggle$ ls
kaggle.json
(main) vagrant@dev:/home/ubuntu/.kaggle$ kaggle -h
Traceback (most recent call last):
File "/user/home/venvs/main/bin/kaggle", line 5, in <module>
from kaggle.cli import main
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/api/kaggle_api_extended.py", line 149, in authenticate
self.config_file, self.config_dir))
OSError: Could not find kaggle.json. Make sure it's located in /home/ubuntu/.kaggle/kaggle.json. Or use the environment method.
The paths are all correct and the file is right where the tool says it should be. Does anyone know what the issue could be? Is it because it is mounted?
Alright, I misread the instructions: "You can define a shell environment variable KAGGLE_CONFIG_DIR to change this location to $KAGGLE_CONFIG_DIR/kaggle.json"
So the env variable should be /home/ubuntu/.kaggle/ instead of /home/ubuntu/.kaggle/kaggle.json.
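So the provision line only needs to point at the directory rather than the file:
# KAGGLE_CONFIG_DIR must name the directory that contains kaggle.json,
# not the path of the file itself.
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle'" >> /etc/profile.d/myvar.sh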

s3->GCS access denied error

I'm trying to transfer files from S3 to GCS.
I don't own the S3 bucket and was provided with keys.
I edited my boto config and entered the access key ID and secret key, but my gsutil cp command returned an access denied error. I can browse/download these files with the various free S3 browser utilities out there.
Might the owner need to adjust something on their end?
gsutil cp -r s3://origin gs://destination
Copying s3://origin/17/_SUCCESS [Content-Type=binary/octet-stream]...
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/google-cloud-
sdk/platform/gsutil/gslib/daisy_chain_wrapper.py", line 196, in
PerformDownload
decryption_tuple=self.decryption_tuple)
File "/usr/lib/google-cloud-
sdk/platform/gsutil/gslib/cloud_api_delegator.py", line 276, in
GetObjectMedia
decryption_tuple=decryption_tuple)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py",
line 513, in GetObjectMedia
generation=generation)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py",
line 1476, in _TranslateExceptionAndRaise
raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
<RequestId>DD4EA91291B40907</RequestId>
In your SSH session, check which account is activated on the instance:
$ gcloud auth list
Then run gsutil with the top-level -D (or -DD) flag to debug why your copy is failing.
1) To copy from S3 to local disk
gsutil -D cp -r s3://secret-bucket/some_key/ /local/directory/my-s3-files/
2) To copy from local disk to GCS bucket
gsutil -D cp -r /local/directory/my-s3-files/ gs://secret-elsewhere/destination/
You can check the Storage Transfer Service article for how to transfer data into Cloud Storage.
First, determine whether the permission error originates in GCP or in S3. Also, you can have a look at these articles for more information:
https://github.com/GoogleCloudPlatform/gsutil/issues/487
https://cloud.google.com/storage/docs/gsutil/commands/cp
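To isolate whether the 403 comes from the S3 side, it can help to test the same credentials directly against the bucket with the AWS CLI (bucket and key names below are taken from the output above and may differ in your case):
# If listing works but downloading fails, the owner likely granted
# s3:ListBucket but not s3:GetObject on the bucket.
aws s3 ls s3://origin/
aws s3 cp s3://origin/17/_SUCCESS /tmp/_SUCCESS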

I get an Input/Output error when trying to install a VOLTTRON agent

If I SSH into a VOLTTRON instance, installing agents works. If I log out and log back in, installing results in the following error:
2016-09-13 11:46:24,409 () volttron.platform.vip.agent.subsystems.rpc ERROR: unhandled exception in JSON-RPC method 'install_agent':
Traceback (most recent call last):
File "/home/volttron/volttron/volttron/platform/vip/agent/subsystems/rpc.py", line 168, in method
return method(*args, **kwargs)
File "/home/volttron/volttron/volttron/platform/control.py", line 287, in install_agent
agent_uuid = self._aip.install_agent(path, vip_identity=vip_identity)
File "/home/volttron/volttron/volttron/platform/aip.py", line 296, in install_agent
unpack(agent_wheel, dest=agent_path)
File "/home/volttron/volttron/env/local/lib/python2.7/site-packages/wheel/tool/__init__.py", line 135, in unpack
sys.stderr.write("Unpacking to: %s\n" % (destination))
IOError: [Errno 5] Input/output error
When a background process is disowned and the SSH session is terminated, stderr and stdout are not redirected to /dev/null; if the process then tries to write to either of them, an IOError results.
In this case one of the third-party libraries that VOLTTRON uses when installing an agent tries to write to stderr (much to our chagrin). Even if the platform is run with the -l option it will still occasionally write to stderr. Unfortunately there is no reliable way for VOLTTRON to do the right thing with stderr in all cases, so we have to leave it up to the user to know when they need to redirect output to /dev/null.
To run in the background, use start-stop-daemon, which automatically redirects everything to /dev/null, or start the platform with this command:
volttron -vv -l volttron.log > /dev/null 2>&1 &
You can then safely disown the process and logout. Installations will still work.
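As a sketch, the full sequence before logging out looks like this:
# start the platform with all output safely redirected
volttron -vv -l volttron.log > /dev/null 2>&1 &
disown   # detach the job from this shell
exit     # the platform keeps running and installs still work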

unable to connect to remote host with rabbitmqadmin

I'm trying to connect to a remote rabbitmq host using the cli rabbitmqadmin.
The command I'm trying to execute is:
rabbitmqadmin --host=$RABBITMQ_HOST --port=443 --ssl --vhost=$RABBITMQ_VHOST --username=$RABBITMQ_USERNAME --password=$RABBITMQ_PASSWORD list queues
Before you ask: the environmental variables RABBITMQ_HOST, RABBITMQ_VHOST and so on are set... I double and triple checked this already.
The error I get back is:
Traceback (most recent call last):
File "/usr/local/sbin/rabbitmqadmin", line 1007, in <module>
main()
File "/usr/local/sbin/rabbitmqadmin", line 413, in main
method()
File "/usr/local/sbin/rabbitmqadmin", line 588, in invoke_list
format_list(self.get(uri), cols, obj_info, self.options)
File "/usr/local/sbin/rabbitmqadmin", line 436, in get
return self.http("GET", "%s/api%s" % (self.options.path_prefix, path), "")
File "/usr/local/sbin/rabbitmqadmin", line 475, in http
self.options.port)
File "/usr/local/sbin/rabbitmqadmin", line 451, in __initialize_https_connection
context = self.__initialize_tls_context())
File "/usr/local/sbin/rabbitmqadmin", line 467, in __initialize_tls_context
self.options.ssl_key_file)
TypeError: coercing to Unicode: need string or buffer, NoneType found
From the last line I assume it's a Python-related problem; my current Python version is 2.7.12. If I try to connect to the local instance of RabbitMQ with
rabbitmqadmin list queues
everything works fine. Any help is greatly appreciated, thanks :)
Shouldn't those env vars have a $ in front of them, and the params be passed without =?
rabbitmqadmin --host $RABBITMQ_HOST --port 443 --ssl --vhost $RABBITMQ_VHOST --username $RABBITMQ_USERNAME --password $RABBITMQ_PASSWORD list queues
Maybe the = doesn't matter, but I'm pretty sure you need $ in front of the env vars.
Validate that you are using the same rabbitmqadmin version as the version of your remote hosted broker. Using a mismatched rabbitmqadmin version will result in that error (for example, rabbitmqadmin 3.6.4 querying a 3.5.7 server).
Browse to http://server-name:15672/cli/ and download the correct tool from there.
https://github.com/rabbitmq/rabbitmq-management/issues/299
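A quick way to compare the two versions (host and credentials are the placeholders from the question):
# version of the local CLI tool
rabbitmqadmin --version
# version of the remote broker, via the management API
curl -u $RABBITMQ_USERNAME:$RABBITMQ_PASSWORD \
    "https://$RABBITMQ_HOST/api/overview" | grep -o '"rabbitmq_version":"[^"]*"'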

"Not a tty" error in Alpine-based duplicity image

This is my first ever question on Stack Overflow, so I hope it adheres to the community guidelines:
I've built a Docker image based on an already existing image; its sole purpose is to run duplicity in a container to back up files and folders to an Amazon S3 bucket in Europe.
Duplicity worked for a couple of days when run manually inside a container created from the image. Now I've moved on to running containers via unit files on the host with CoreOS, and things don't work anymore - but the command also won't work if I run it manually inside a duplicity container.
The run command:
docker run --rm --env-file=<my backup env file>.env --name=<container image> -v <cache container>:/home/duplicity/.cache/duplicity -v <docker volume with gpg keys>:/home/duplicity/.gnupg --volumes-from <docker container of interest> gymnae/duplicity
the env-file contains the following:
PASSPHRASE=<my super secret passphrase>
AWS_ACCESS_KEY_ID=<my aws access key id>
AWS_SECRET_ACCESS_KEY=<my aws access key>
SOURCE_PATH=<where does the data come from>
REMOTE_URL=s3://s3.eu-central-1.amazonaws.com/<my bucket>
PARAMS_CLEAN="--remove-older-than 3M --force --extra-clean"
ENCRYPT_KEY=<derived from the gpg key>
And the init.sh, which is called on docker run, looks like this:
#!/bin/sh
duplicity \
--verbosity 8 \
--s3-use-ia \
--s3-use-new-style \
--s3-use-server-side-encryption \
--s3-european-buckets \
--allow-source-mismatch \
--ssl-no-check-certificate \
--s3-unencrypted-connection \
--volsize 150 \
--gpg-options "--no-tty" \
--encrypt-key $ENCRYPT_KEY \
--sign-key $ENCRYPT_KEY \
$SOURCE_PATH \
$REMOTE_URL
I tried with -i, -it, -t and just -d - but the result is always the same:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
GPG error detail: Traceback (most recent call last):
File "/usr/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1380, in main
do_backup(action)
File "/usr/bin/duplicity", line 1508, in do_backup
incremental_backup(sig_chain)
File "/usr/bin/duplicity", line 662, in incremental_backup
globals.backend)
File "/usr/bin/duplicity", line 425, in write_multivol
at_end = gpg.GPGWriteFile(tarblock_iter, tdp.name, globals.gpg_profile, globals.volsize)
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 356, in GPGWriteFile
file.close()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 241, in close
self.gpg_failed()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 226, in gpg_failed
raise GPGError(msg)
GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
This Not a tty error while gpg tries to sign is weird.
It didn't seem to be a problem before - or I did some crazy typing on a late-night shift so that it worked once - but now it just doesn't want to work anymore.
For anyone who struggles with the same problem, I found the answer thanks to the developer of duply:
https://sourceforge.net/p/ftplicity/bugs/76/#74c5
In short, starting with gpg 2.1 you need to add GPG_OPTS='--pinentry-mode loopback' and add allow-loopback-pinentry to .gnupg/gpg-agent.conf.
This brought me much closer to a working setup.
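Applied to the setup above, that amounts to something like this (a sketch; it assumes the mounted .gnupg volume is writable):
# 1) let gpg-agent accept a passphrase without a terminal
echo "allow-loopback-pinentry" >> /home/duplicity/.gnupg/gpg-agent.conf
# 2) in init.sh, extend the gpg options so gpg uses loopback pinentry:
#    --gpg-options "--no-tty --pinentry-mode loopback" \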