s3->GCS access denied error - amazon-s3

I'm trying to transfer files from S3 to GCS.
I don't own the S3 bucket and was provided with keys.
I edited my .boto config and entered the access key ID and secret key, but my gsutil cp command returned an access denied error. I can browse/download these files with the various free S3 browser utilities out there.
Might the owner need to adjust something on their end?
gsutil cp -r s3://origin gs://destination
Copying s3://origin
17/_SUCCESS [Content-Type=binary/octet-stream]...
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/daisy_chain_wrapper.py", line 196, in PerformDownload
    decryption_tuple=self.decryption_tuple)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/cloud_api_delegator.py", line 276, in GetObjectMedia
    decryption_tuple=decryption_tuple)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 513, in GetObjectMedia
    generation=generation)
  File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/boto_translation.py", line 1476, in _TranslateExceptionAndRaise
    raise translated_exception
AccessDeniedException: AccessDeniedException: 403 AccessDenied
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
<RequestId>DD4EA91291B40907</RequestId>

In your SSH session, check which account is activated on the instance:
$ gcloud auth list
Then try running gsutil with the top-level -D flag (or -DD for even more detail) to debug why your copy is failing:
1) To copy from S3 to local disk
gsutil -D cp -r s3://secret-bucket/some_key/ /local/directory/my-s3-files/
2) To copy from local disk to GCS bucket
gsutil -D cp -r /local/directory/my-s3-files/ gs://secret-elsewhere/destination/
You can also check the Storage Transfer Service documentation for how to transfer data into Cloud Storage.
We need to determine whether the permission error comes from GCP or from S3. Also, you can have a look at these articles for more information:
https://github.com/GoogleCloudPlatform/gsutil/issues/487
https://cloud.google.com/storage/docs/gsutil/commands/cp
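Since the question mentions editing the boto config, it is also worth double-checking that the S3 keys landed in the right section of ~/.boto. A minimal sketch (the key names follow the standard boto config format; the placeholder values are the credentials you were given):
[Credentials]
aws_access_key_id = <access key id you were given>
aws_secret_access_key = <secret key you were given>
If gsutil can list the bucket with these in place but the copy still 403s (which matches the traceback: the listing started, then GetObjectMedia was denied), the bucket owner may indeed need to grant read access on the objects themselves, not just listing on the bucket.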

Related

Accessing Kaggle tools in VM by mounting key

I am trying to use the Kaggle command line tool and I am running into problems using it inside my own VM. I downloaded the API token from the site and placed it in ~/.kaggle/kaggle.json on Windows. My VM has Ubuntu installed, and in the Vagrantfile I have the following:
config.vm.synced_folder ENV['HOME'] + "/.kaggle", "/home/ubuntu/.kaggle", mount_options: ['dmode=700,fmode=700']
config.vm.provision "shell", inline: <<-SHELL
echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle/kaggle.json'" >> /etc/profile.d/myvar.sh
SHELL
When running the env command in the VM I see it is correct:
KAGGLE_CONFIG_DIR=/home/ubuntu/.kaggle/kaggle.json
However, when I try to use the kaggle command, for example kaggle -h, I get the following:
(main) vagrant@dev:/home/ubuntu/.kaggle$ ls
kaggle.json
(main) vagrant@dev:/home/ubuntu/.kaggle$ kaggle -h
Traceback (most recent call last):
File "/user/home/venvs/main/bin/kaggle", line 5, in <module>
from kaggle.cli import main
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/__init__.py", line 23, in <module>
api.authenticate()
File "/user/home/venvs/main/lib/python3.7/site-packages/kaggle/api/kaggle_api_extended.py", line 149, in authenticate
self.config_file, self.config_dir))
OSError: Could not find kaggle.json. Make sure it's located in /home/ubuntu/.kaggle/kaggle.json. Or use the environment method.
The paths are all correct and the file is exactly where the tool says it should be. Does anyone know what the issue could be? Is it because the directory is mounted?
Alright, I misread the instructions: "You can define a shell environment variable KAGGLE_CONFIG_DIR to change this location to $KAGGLE_CONFIG_DIR/kaggle.json"
So the env variable should be /home/ubuntu/.kaggle/ instead of /home/ubuntu/.kaggle/kaggle.json.
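Applied to the Vagrantfile from the question, the provision line becomes (same snippet, only the variable's value changes):
config.vm.provision "shell", inline: <<-SHELL
  echo "export KAGGLE_CONFIG_DIR='/home/ubuntu/.kaggle'" >> /etc/profile.d/myvar.sh
SHELL
After reloading the VM (or re-sourcing /etc/profile.d/myvar.sh), kaggle -h should find /home/ubuntu/.kaggle/kaggle.json.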

Cannot start GlassFish server via Terminal - No handler was ready to authenticate

I downloaded the zip file of GlassFish 4.1.1 and, after extracting it, used Terminal to start the server with the asadmin start-domain command. It gives me this error:
Traceback (most recent call last):
  File "/usr/local/bin/asadmin", line 260, in <module>
    autoscale = boto.connect_autoscale()
  File "/Library/Python/2.7/site-packages/boto/__init__.py", line 208, in connect_autoscale
    **kwargs)
  File "/Library/Python/2.7/site-packages/boto/ec2/autoscale/__init__.py", line 115, in __init__
    profile_name=profile_name)
  File "/Library/Python/2.7/site-packages/boto/connection.py", line 1100, in __init__
    provider=provider)
  File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
    host, config, self.provider, self._required_auth_capability())
  File "/Library/Python/2.7/site-packages/boto/auth.py", line 997, in get_auth_handler
    'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
I'm using macOS Sierra 10.12.2. Does anyone know how to fix this error?
The problem here is that you have the boto Python AWS command line utilities installed. One of those utilities is also called asadmin, and your shell thinks you mean that asadmin (the AWS autoscaling admin tool) rather than the GlassFish asadmin script.
After you extract GlassFish, you need to reference the asadmin file that comes with GlassFish, so start the domain as follows:
glassfish4/bin/asadmin start-domain
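To confirm which asadmin your shell resolves, check it with which; given the traceback, it should currently point at boto's script rather than GlassFish's:
$ which asadmin
/usr/local/bin/asadmin
Using the explicit glassfish4/bin/asadmin path (or putting glassfish4/bin earlier in your PATH) sidesteps the collision.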

"Not a tty" error in Alpine-based duplicity image

This is my first ever question at stackoverflow, so I hope it'll adhere to the community guidelines:
I've built a Docker image based on an already existing image; its sole purpose is to run duplicity in a container to back up files and folders to an Amazon S3 bucket in Europe.
Duplicity worked for a couple of days when run manually inside a container created from the image. Now I've moved on to running containers via unit files on the host with CoreOS and things don't work anymore - but the command also won't work if I run it manually inside a duplicity container.
The run command:
docker run --rm --env-file=<my backup env file>.env --name=<container image> -v <cache container>:/home/duplicity/.cache/duplicity -v <docker volume with gpg keys>:/home/duplicity/.gnupg --volumes-from <docker container of interest> gymnae/duplicity
the env-file contains the following:
PASSPHRASE=<my super secret passphrase>
AWS_ACCESS_KEY_ID=<my aws access key id>
AWS_SECRET_ACCESS_KEY=<my aws access key>
SOURCE_PATH=<where does the data come from>
REMOTE_URL=s3://s3.eu-central-1.amazonaws.com/<my bucket>
PARAMS_CLEAN="--remove-older-than 3M --force --extra-clean"
ENCRYPT_KEY=<derived from the gpg key>
And the init.sh, which is called on docker run, looks like this:
#!/bin/sh
duplicity \
--verbosity 8 \
--s3-use-ia \
--s3-use-new-style \
--s3-use-server-side-encryption \
--s3-european-buckets \
--allow-source-mismatch \
--ssl-no-check-certificate \
--s3-unencrypted-connection \
--volsize 150 \
--gpg-options "--no-tty" \
--encrypt-key $ENCRYPT_KEY \
--sign-key $ENCRYPT_KEY \
$SOURCE_PATH \
$REMOTE_URL
I tried with -i, -it, -t and just -d - but the result is always the same:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
GPG error detail: Traceback (most recent call last):
File "/usr/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1380, in main
do_backup(action)
File "/usr/bin/duplicity", line 1508, in do_backup
incremental_backup(sig_chain)
File "/usr/bin/duplicity", line 662, in incremental_backup
globals.backend)
File "/usr/bin/duplicity", line 425, in write_multivol
at_end = gpg.GPGWriteFile(tarblock_iter, tdp.name, globals.gpg_profile, globals.volsize)
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 356, in GPGWriteFile
file.close()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 241, in close
self.gpg_failed()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 226, in gpg_failed
raise GPGError(msg)
GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
This "Not a tty" error while gpg tries to sign is weird. It didn't seem to be a problem before - or I did some crazy typing on a late-night shift that made it work once - but now it just doesn't want to work anymore.
For anyone who struggles with the same problem, I found the answer thanks to the developer of duply:
https://sourceforge.net/p/ftplicity/bugs/76/#74c5
In short, starting with gpg 2.1 you need to add GPG_OPTS='--pinentry-mode loopback' and add allow-loopback-pinentry to .gnupg/gpg-agent.conf.
This brought me much closer to a working setup.
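Translated to the init.sh from the question, a minimal sketch (assuming gpg >= 2.1 inside the Alpine image; the .gnupg path matches the volume mount in the run command):
# allow the agent to accept loopback pinentry requests
echo "allow-loopback-pinentry" >> /home/duplicity/.gnupg/gpg-agent.conf
# and pass the loopback option alongside --no-tty in init.sh
duplicity \
    --gpg-options "--no-tty --pinentry-mode loopback" \
    ...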

Internal Server Error right after installation of OpenERP 7.0

I am new to OpenERP and I just installed OpenERP 7.0 on Ubuntu 12.04 using the All-In-One ".deb" file. But when I tried to open it, it gave me this error message:
Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
I checked the "openerp-server.log" file and it gave me this:
    self.gen.next()
  File "/usr/share/pyshared/openerp/addons/web/http.py", line 422, in session_context
    session_store.save(request.session)
  File "/usr/share/pyshared/werkzeug/contrib/sessions.py", line 237, in save
    dir=self.path)
  File "/usr/lib/python2.7/tempfile.py", line 300, in mkstemp
    return _mkstemp_inner(dir, prefix, suffix, flags)
  File "/usr/lib/python2.7/tempfile.py", line 235, in _mkstemp_inner
    fd = _os.open(file, flags, 0600)
OSError: [Errno 13] Permission non accordée: '/tmp/oe-sessions-openerp/tmpNUQsbf.__wz_sess'
(The French message means "Permission denied".) What is going wrong and how can I fix it?
Thanks!
It looks like a permission issue. You can check the permissions of your server, addons, and web directories and grant Read/Write/Create/Delete like this:
chmod -R 777 DIRPATH_OF_SERVER
chmod -R 777 DIRPATH_OF_ADDONS
chmod -R 777 DIRPATH_OF_WEB
After assigning all permissions, can you re-check it?
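Note that the traceback actually points at the session directory under /tmp, so a narrower fix may be to hand just that directory to the server's user instead of opening everything up (a sketch; "openerp" is the service user the All-In-One .deb normally runs as, so adjust if yours differs):
sudo chown -R openerp: /tmp/oe-sessions-openerp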

Duplicity - can't restore single file

I'm trying to restore a single file or directory with duplicity from Amazon S3, but I get errors:
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: none
Traceback (most recent call last):
File "/usr/bin/duplicity", line 1251, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1244, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1198, in main
restore(col_stats)
File "/usr/bin/duplicity", line 538, in restore
restore_get_patched_rop_iter(col_stats)):
File "/usr/bin/duplicity", line 560, in restore_get_patched_rop_iter
backup_chain = col_stats.get_backup_chain_at_time(time)
File "/usr/lib/python2.6/dist-packages/duplicity/collections.py", line 934, in get_backup_chain_at_time
raise CollectionsError("No backup chains found")
CollectionsError: No backup chains found
What am I doing wrong?
Here is how I do my backups:
export PASSPHRASE=****
export AWS_ACCESS_KEY_ID=****
export AWS_SECRET_ACCESS_KEY=****
GPG_KEY=****
BACKUP_SIM_RUN=1
LOGFILE="/var/log/s3-backup.log"
DAILYLOGFILE="/var/log/s3-backup-daily.log"
# The source of your backup
SOURCE=/home/u54433
# The destination
DEST=s3+http://**********
trace () {
    stamp=`date +%Y-%m-%d_%H:%M:%S`
    echo "$stamp: $*" >> ${DAILYLOGFILE}
}
cat /dev/null > ${DAILYLOGFILE}
trace "removing old backups..."
duplicity remove-older-than 2M --force --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1
trace "start backup files..."
duplicity --sign-key=${GPG_KEY} --exclude="**/logs" --s3-european-buckets --s3-use-new-style ${SOURCE} ${DEST} >> ${DAILYLOGFILE} 2>&1
cat "$DAILYLOGFILE" >> $LOGFILE
export PASSPHRASE=
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
Use the --s3-use-new-style option in all duplicity calls.
I had the same problem as you did. I added the missing option to duplicity remove-older-than and everything works great now; see the corrected line below.
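Applied to the script above, the cleanup call becomes (the same line with --s3-use-new-style added, plus --s3-european-buckets to match the backup call):
duplicity remove-older-than 2M --force --s3-use-new-style --s3-european-buckets --sign-key=${GPG_KEY} ${DEST} >> ${DAILYLOGFILE} 2>&1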
It is better to remove the S3 bucket from Amazon and recreate the full backup; this might resolve the issue.
Also, you can see the link below:
https://answers.launchpad.net/duplicity/+question/107074
For anyone coming back to this question looking for a definitive answer, @shaikh-systems' link leads to the realization that there is some issue in Duplicity/AWS communication of IAM sub-account keys. To restore, I got it to work by using/exporting my primary account key/secret. I'm using duplicity 0.6.21.
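In practice that means exporting the primary account's credentials before restoring, roughly like this sketch (the path inside the backup and the local target are placeholders; --file-to-restore is duplicity's flag for pulling out a single file):
export AWS_ACCESS_KEY_ID=<primary account key id>
export AWS_SECRET_ACCESS_KEY=<primary account secret>
duplicity restore --file-to-restore path/inside/backup ${DEST} /tmp/restored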