GCP Dataproc - Failed to construct kafka consumer, Failed to load SSL keystore dataproc.jks of type JKS - ssl

I'm trying to run a Structured Streaming program on GCP Dataproc that reads data from Kafka and prints it.
Access to Kafka uses SSL, and the truststore and keystore files are stored in a GCS bucket.
I'm using the Google Cloud Storage API to download the files from the bucket into the current working directory, and the truststore/keystore locations are then passed to the Kafka consumer/producer options.
However, I'm getting an error.
Command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.0 --region us-central1
The code is shown below:
spark = SparkSession.builder.appName('StructuredStreaming_VersaSase').getOrCreate()
kafkaBrokers='<broker-ip>:9094'
topic = "versa-sase"
security_protocol="SSL"
# Google Storage API to access the keys in the buckets
client = storage.Client()
bucket = client.get_bucket('ssl-certs-karan')
blob_ssl_truststore = bucket.get_blob('cap12.jks')
ssl_truststore_location = '{}/{}'.format(os.getcwd(), blob_ssl_truststore.name)
blob_ssl_truststore.download_to_filename(ssl_truststore_location)
ssl_truststore_password="<ssl_truststore_password>"
blob_ssl_keystore = bucket.get_blob('dataproc-versa-sase-p12-1.jks')
ssl_keystore_location = '{}/{}'.format(os.getcwd(), blob_ssl_keystore.name)
blob_ssl_keystore.download_to_filename(ssl_keystore_location)
ssl_keystore_password="<ssl_keystore_password>"
consumerGroupId = "versa-sase-grp"
checkpoint = "gs://ss-checkpoint/"
print(" SPARK.SPARKCONTEXT -> ", spark.sparkContext)
df = spark.read.format('kafka')\
.option("kafka.bootstrap.servers",kafkaBrokers)\
.option("kafka.security.protocol","SSL") \
.option("kafka.ssl.truststore.location",ssl_truststore_location) \
.option("kafka.ssl.truststore.password",ssl_truststore_password) \
.option("kafka.ssl.keystore.location", ssl_keystore_location)\
.option("kafka.ssl.keystore.password", ssl_keystore_password)\
.option("subscribe", topic) \
.option("kafka.group.id", consumerGroupId)\
.option("startingOffsets", "earliest") \
.load()
print(" df -> ", df)
query = df.selectExpr("CAST(value AS STRING)", "CAST(key AS STRING)", "topic", "timestamp") \
.write \
.format("console") \
.option("numRows",100)\
.option("checkpointLocation", checkpoint) \
.option("outputMode", "complete")\
.option("truncate", "false") \
.save("output")
Error:
Traceback (most recent call last):
File "/tmp/3e7304f8e27d4436a2f382280cebe7c5/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py", line 83, in <module>
query = df.selectExpr("CAST(value AS STRING)", "CAST(key AS STRING)", "topic", "timestamp") \
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.Client: IPC Client (1416219052) connection to dataproc-ss-poc-m/10.128.0.78:8030 from root sending #171 org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB.allocate
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.Client: IPC Client (1416219052) connection to dataproc-ss-poc-m/10.128.0.78:8030 from root got value #171
22/02/02 23:11:08 DEBUG org.apache.hadoop.ipc.ProtobufRpcEngine: Call: allocate took 2ms
py4j.protocol.Py4JJavaError: An error occurred while calling o84.save.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3) (dataproc-ss-poc-w-0.c.versa-kafka-poc.internal executor 1): org.apache.kafka.common.KafkaException: Failed to construct kafka consumer
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:823)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:665)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:613)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.createConsumer(KafkaDataConsumer.scala:124)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumer.<init>(KafkaDataConsumer.scala:61)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool$ObjectFactory.create(InternalKafkaConsumerPool.scala:206)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool$ObjectFactory.create(InternalKafkaConsumerPool.scala:201)
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:60)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1041)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:342)
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:265)
at org.apache.spark.sql.kafka010.consumer.InternalKafkaConsumerPool.borrowObject(InternalKafkaConsumerPool.scala:84)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.retrieveConsumer(KafkaDataConsumer.scala:573)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.getOrRetrieveConsumer(KafkaDataConsumer.scala:558)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.$anonfun$getAvailableOffsetRange$1(KafkaDataConsumer.scala:359)
at org.apache.spark.util.UninterruptibleThread.runUninterruptibly(UninterruptibleThread.scala:77)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.runUninterruptiblyIfPossible(KafkaDataConsumer.scala:618)
at org.apache.spark.sql.kafka010.consumer.KafkaDataConsumer.getAvailableOffsetRange(KafkaDataConsumer.scala:358)
at org.apache.spark.sql.kafka010.KafkaSourceRDD.resolveRange(KafkaSourceRDD.scala:123)
at org.apache.spark.sql.kafka010.KafkaSourceRDD.compute(KafkaSourceRDD.scala:75)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:373)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:337)
...
Caused by: org.apache.kafka.common.KafkaException: Failed to load SSL keystore /tmp/3e7304f8e27d4436a2f382280cebe7c5/dataproc-versa-sase-p12-1.jks of type JKS
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:377)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.<init>(DefaultSslEngineFactory.java:349)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.createKeystore(DefaultSslEngineFactory.java:299)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.configure(DefaultSslEngineFactory.java:161)
at org.apache.kafka.common.security.ssl.SslFactory.instantiateSslEngineFactory(SslFactory.java:138)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:95)
at org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:74)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:192)
at org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:81)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:105)
at org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:737)
... 53 more
Caused by: java.nio.file.NoSuchFileException: /tmp/3e7304f8e27d4436a2f382280cebe7c5/dataproc-versa-sase-p12-1.jks
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)
at java.nio.file.Files.newInputStream(Files.java:152)
at org.apache.kafka.common.security.ssl.DefaultSslEngineFactory$FileBasedStore.load(DefaultSslEngineFactory.java:370)
From my Mac, I'm using PKCS12 (.p12) files and am able to access the Kafka cluster in SSL mode. On Dataproc, however, it seems the expected file format is JKS.
Here is the command I used to convert the .p12 file to JKS format:
keytool -importkeystore -srckeystore dataproc-versa-sase.p12 -srcstoretype pkcs12 -srcalias versa-sase-user -destkeystore dataproc-versa-sase-p12-1.jks -deststoretype jks -deststorepass <password> -destalias versa-sase-user
What needs to be done to fix this?
It seems the JKS file is not accessible to the Spark program.
tia!

I would add the following options if you want to use JKS:
.option("kafka.ssl.keystore.type", "JKS")
.option("kafka.ssl.truststore.type", "JKS")
This will also work with PKCS12, by the way:
.option("kafka.ssl.keystore.type", "PKCS12")
.option("kafka.ssl.truststore.type", "PKCS12")
As someone mentioned earlier, you can check whether it is a JDK compatibility issue by doing something like:
keytool -v -list -storetype pkcs12 -keystore kafka-client-jdk8-truststore.p12
If the keystore contents are displayed, you are in the clear; if you receive a message saying the identifier can't be found, that indicates a mismatch between JDKs.
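For reference, here is a minimal sketch (not from the original answer) of how those type options slot into the reader from the question, assuming the same truststore/keystore variables the question defines and that the files are reachable on the executors:
# Sketch only: same reader as in the question, with the store types made explicit.
df = spark.read.format("kafka") \
    .option("kafka.bootstrap.servers", kafkaBrokers) \
    .option("kafka.security.protocol", "SSL") \
    .option("kafka.ssl.truststore.type", "JKS") \
    .option("kafka.ssl.truststore.location", ssl_truststore_location) \
    .option("kafka.ssl.truststore.password", ssl_truststore_password) \
    .option("kafka.ssl.keystore.type", "JKS") \
    .option("kafka.ssl.keystore.location", ssl_keystore_location) \
    .option("kafka.ssl.keystore.password", ssl_keystore_password) \
    .option("subscribe", topic) \
    .option("startingOffsets", "earliest") \
    .load()
Swap both type values to "PKCS12" if you keep the .p12 files instead of converting them to JKS.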

Per the note from @OneCricketer, I was able to get this working by passing the certs with --files <gs://cert1>,<gs://cert2>.
This also works when using cluster mode.
Cluster mode command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2,spark.submit.deployMode=cluster --region us-central1
Client mode command:
gcloud dataproc jobs submit pyspark /Users/karanalang/Documents/Technology/gcp/DataProc/StructuredStreaming_Kafka_GCP-Batch-feb2-v2.py --cluster dataproc-ss-poc --properties spark.jars.packages=org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.2 --region us-central1
Accessing the certs in the Driver:
# access using the cert name
ssl_truststore_location="ca.p12"
ssl_keystore_location="dataproc-versa-sase.p12"
df_stream = spark.readStream.format('kafka') \
.option("kafka.security.protocol", "SSL") \
.option("kafka.ssl.truststore.location", ssl_truststore_location) \
.option("kafka.ssl.truststore.password", ssl_truststore_password) \
.option("kafka.ssl.keystore.location", ssl_keystore_location) \
.option("kafka.ssl.keystore.password", ssl_keystore_password) \
.option("kafka.bootstrap.servers",kafkaBrokers)\
.option("subscribe", topic) \
.option("kafka.group.id", consumerGroupId)\
.option("startingOffsets", "earliest") \
.option("failOnDataLoss", "false") \
.option("maxOffsetsPerTrigger", 10) \
.load()
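To complete the picture, here is a minimal console-sink sketch for the stream above. It is not part of the original answer and assumes the same checkpoint = "gs://ss-checkpoint/" variable from the question:
# Sketch only: write the stream to the console, checkpointing to the GCS path above.
query = df_stream.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "topic", "timestamp") \
    .writeStream \
    .format("console") \
    .option("truncate", "false") \
    .option("checkpointLocation", checkpoint) \
    .outputMode("append") \
    .trigger(processingTime="30 seconds") \
    .start()
query.awaitTermination()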

Related

Python 3.7 can't access Google Secret Manager

I configured the required roles for Secret Manager, but when I try to access the secrets through Python 3.7 code I get a 403 access-denied error:
google.api_core.exceptions.PermissionDenied: 403 Permission 'secretmanager.secrets.list' denied for resource 'projects/projectid' (or it may not exist)
If I access them from the command line, it works:
gcloud secrets list
This is the Python code:
from google.cloud import secretmanager

# Build the resource name of the parent project.
parent = f"projects/projectid"
# Create the Secret Manager client.
client = secretmanager.SecretManagerServiceClient()
# List all secrets.
for secret in client.list_secrets(request={"parent": parent}):
    print("Found secret: {}".format(secret.name))
Your code works for me.
BILLING="..."
PROJECT="..."
ACCOUNT="..."
SECRET="test"
gcloud projects create ${PROJECT}
gcloud beta billing projects link ${PROJECT} \
--billing-account=${BILLING}
gcloud services enable secretmanager.googleapis.com \
--project=${PROJECT}
gcloud iam service-accounts create ${ACCOUNT} \
--project=${PROJECT}
EMAIL="${ACCOUNT}@${PROJECT}.iam.gserviceaccount.com"
gcloud iam service-accounts keys create ${PWD}/${ACCOUNT}.json \
--iam-account=${EMAIL}
# See note: the minimum role that includes the perm to list secrets
gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/secretmanager.viewer
echo "test" > test
gcloud secrets create ${SECRET} \
--data-file="test" \
--project=${PROJECT}
python3 -m venv venv
source venv/bin/activate
python3 -m pip install google-cloud-secret-manager
# Both required by the app
export PROJECT
export GOOGLE_APPLICATION_CREDENTIALS=${PWD}/${ACCOUNT}.json
python main.py
Yields:
Found secret: projects/12345678912/secrets/test
main.py:
from google.cloud import secretmanager
import os

project = os.getenv("PROJECT")
client = secretmanager.SecretManagerServiceClient()
parent = f"projects/{project}"
secrets = client.list_secrets(request={
    "parent": parent,
})
for secret in secrets:
    print("Found secret: {}".format(secret.name))
NOTE: roles/secretmanager.viewer is the minimal predefined role that includes the secretmanager.secrets.list permission needed to list secrets (link).
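If the role is already granted, it is also worth checking which identity the Python client actually runs as, since the client libraries use Application Default Credentials while gcloud uses your user account; a mismatch there produces exactly this 403. A hedged sketch, not part of the original answer:
# Sketch: print which credentials/project the Google client libraries will pick up.
import google.auth

credentials, project = google.auth.default()
print("ADC project:", project)
print("Credential type:", type(credentials).__name__)
# service_account_email exists only for service-account credentials, hence the fallback.
print("Identity:", getattr(credentials, "service_account_email", "user credentials"))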

In Google Cloud's "built-in image object detection algorithm" documentation, what is config.yaml?

In the official GCP documentation for the built-in image object detection classifier, Step 2 under "Submit a training job" says:
Submit the job:
gcloud ai-platform jobs submit training $JOB_ID \
--region=$REGION \
--config=config.yaml \
...
This is the first reference to "config.yaml" on this page.
Has anyone been able to implement this example?
Here is the code from the above documentation page in full, including a correction on line 2 (the original BUCKET_NAME started with gs://, so JOB_DIR began with gs://gs:// and threw an error):
PROJECT_ID="myapp"
# Original:
#BUCKET_NAME="gs://mybucket/"
# Correction:
BUCKET_NAME="mybucket"
REGION="us-central1"
gcloud config set project $PROJECT_ID
gcloud config set compute/region $REGION
# Set paths to the training and validation data.
TRAINING_DATA_PATH="gs://cloud-samples-data/ai-platform/built-in/image/coco/train*"
VALIDATION_DATA_PATH="gs://cloud-samples-data/ai-platform/built-in/image/coco/val*"
# Specify the Docker container for your built-in algorithm selection.
IMAGE_URI="gcr.io/cloud-ml-algos/image_object_detection:latest"
DATASET_NAME="coco"
ALGORITHM="object_detection"
MODEL_NAME="${DATASET_NAME}_${ALGORITHM}_model"
# Give a unique name to your training job.
DATE="$(date '+%Y%m%d_%H%M%S')"
JOB_ID="${MODEL_NAME}_${DATE}"
# Make sure you have access to this Cloud Storage bucket.
JOB_DIR="gs://${BUCKET_NAME}/algorithms_training/${MODEL_NAME}/${DATE}"
gcloud ai-platform jobs submit training $JOB_ID \
--region=$REGION \
--config=config.yaml \
--job-dir=$JOB_DIR \
-- \
--training_data_path=$TRAINING_DATA_PATH \
--validation_data_path=$VALIDATION_DATA_PATH \
--train_batch_size=64 \
--num_eval_images=500 \
--train_steps_per_eval=2000 \
--max_steps=15000 \
--num_classes=90 \
--warmup_steps=500 \
--initial_learning_rate=0.08 \
--fpn_type="nasfpn" \
--aug_scale_min=0.8 \
--aug_scale_max=1.2
gcloud ai-platform jobs describe $JOB_ID
gcloud ai-platform jobs stream-logs $JOB_ID
Running the above results in the following error:
ERROR: (gcloud.ai-platform.jobs.submit.training) Failed to load YAML from [config.yaml]: Unable to read file [config.yaml]: [Errno 2] No such file or directory: u'config.yaml'
Creating an empty config.yaml produces this error:
ERROR: gcloud crashed (AttributeError): 'NoneType' object has no attribute 'get'
From the gcloud documentation:
Path to the job configuration file. This file should be a YAML
document (JSON also accepted) containing a Job resource as defined in
the API (all fields are optional):
https://cloud.google.com/ml/reference/rest/v1/projects.jobs
I submitted feedback on this page a couple of weeks ago, but haven't heard back and it is still broken.
What content is required in config.yaml to make this work?
Any and all ideas/suggestions are welcome!
I managed to get it working by replacing this command line argument:
--config=config.yaml
With this one:
--master-image-uri $IMAGE_URI
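For completeness: per the gcloud documentation quoted in the question, config.yaml is just a Job resource serialized as YAML, and --master-image-uri appears to map to trainingInput.masterConfig.imageUri. Below is a rough sketch of the equivalent request through the Python API client; the job ID, bucket path, and machine type are placeholders reusing the question's values, and the exact field set is an assumption rather than verified documentation:
# Hypothetical sketch of the Job resource that config.yaml would otherwise describe.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")  # AI Platform Training and Prediction API
job = {
    "jobId": "coco_object_detection_model_manual_run",  # any unique job ID
    "trainingInput": {
        "region": "us-central1",
        "jobDir": "gs://mybucket/algorithms_training/coco_object_detection_model/manual_run",
        "scaleTier": "CUSTOM",
        "masterType": "n1-highcpu-16",
        "masterConfig": {
            # The field that --master-image-uri sets from the command line.
            "imageUri": "gcr.io/cloud-ml-algos/image_object_detection:latest",
        },
        "args": [
            "--training_data_path=gs://cloud-samples-data/ai-platform/built-in/image/coco/train*",
            "--validation_data_path=gs://cloud-samples-data/ai-platform/built-in/image/coco/val*",
            "--num_classes=90",
        ],
    },
}
response = ml.projects().jobs().create(parent="projects/myapp", body=job).execute()
print(response)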

zaproxy - API scan - missing c in config

I run the following command: docker run -v /etc/hosts:/etc/hosts -v $(pwd):/zap/wrk:rw -t owasp/zap2docker-weekly zap-api-scan.py -t api.yaml -f openapi -r zap_report.html -config replacer.full_list\(0\).description=auth1 -config replacer.full_list\(0\).enabled=true -config replacer.full_list\(0\).matchtype=REQ_HEADER -config replacer.full_list\(0\).matchstr=X-XXXXX-APIkey -config replacer.full_list\(0\).regex=false -config replacer.full_list\(0\).replacement=123456789
But got the error:
Traceback (most recent call last):
File "/zap/zap-api-scan.py", line 539, in
File "/zap/zap-api-scan.py", line 246, in main
with open(base_dir + config_file) as f:
IOError: [Errno 2] No such file or directory: '/zap/wrk/onfig'
How is this possible?
The problem is in how you pass parameters to the Python script. The script parses -config as -c onfig and then tries to read its configuration from a file named onfig. You should pass ZAP parameters using the following format: -z "-config aaa=bbb -config ccc=ddd"

Duplicity raises BackendException: ssh connection to my server:22 failed: not a valid OPENSSH private key file

Thanks to maybeg, I've managed to back up my data from home to an external (Amazon) server.
As I don't want to back up company data to Amazon, I tried an internal backup server instead.
I then used this command (I have my own key):
docker run -d --name volumerize \
  -v /MyFolder/Keys/:/MyFolder/Keys/ \
  -v jenkins_volume:/source:ro \
  -v backup_volume:/backup \
  -e 'VOLUMERIZE_SOURCE=/source' \
  -e "VOLUMERIZE_TARGET=scp://myuser@mybackupserver/home/myuser/" \
  -e 'VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /MyFolder/Keys/myuserkey"' \
  -e 'PASSPHRASE="mypassphrase"' blacklabelops/volumerize
When running the duplicity backup command, inside or outside the container, I get the following error:
/usr/lib/python2.7/site-packages/paramiko/ecdsakey.py:200: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
signature, ec.ECDSA(self.ecdsa_curve.hash_object())
BackendException: ssh connection to myuser@mybackupserver:22 failed: not a valid OPENSSH private key file
Strangely, the following runs properly both inside and outside the volumerize container:
ssh -i /MyFolder/Keys/myuserkey myuser@mybackupserver
key_load_public: invalid format
Enter passphrase for key '/MyFolder/Keys/myuser':
[myuser#mybackupserver ~]$
Editing the backup file, for example, gives me the following:
#!/bin/bash
set -o errexit
source /etc/volumerize/stopContainers
duplicity $@ --allow-source-mismatch --archive-dir=/volumerize-cache --ssh-options "-i /MyFolder/Keys/myuserkey" /source scp://myuser@mybackupserver/home/myuser/
source /etc/volumerize/startContainers
I've checked the environment variables inside the container; please find them below. (Note that PASSPHRASE has been added as an env variable, as found here.)
HOSTNAME=b68f0e1a2d45
TERM=xterm
BLACKLABELOPS_HOME=/var/blacklabelops
GOOGLE_DRIVE_CREDENTIAL_FILE=/credentials/googledrive.cred
VOLUMERIZE_HOME=/etc/volumerize
VOLUMERIZE_SOURCE=/source
DOCKERIZE_VERSION=v0.5.0
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/etc/volumerize
VOLUMERIZE_TARGET=scp://myuser@mybackupserver/home/myuser/
PWD=/etc/volumerize
VOLUMERIZE_DUPLICITY_OPTIONS=--ssh-options "-i /MyFolder/Keys/myuserkey"
VOLUMERIZE_CACHE=/volumerize-cache
GPG_TTY=/dev/console
SHLVL=1
HOME=/root
no_proxy=*.local, 169.254/16
GOOGLE_DRIVE_SETTINGS=/credentials/cred.file
PASSPHRASE="mypassphrase"
_=/usr/bin/env
Can someone point me in the right direction?
Regards,
pierre
Edit 1:
I compared both private key files (Amazon and company) using
openssl rsa -in yourkey.pem -check
and both say:
RSA key ok
writing RSA key
-----BEGIN RSA PRIVATE KEY-----
....
-----END RSA PRIVATE KEY-----
Edit 2:
1. Had a look, without any success, at duplicity-backendexception.
For information, the Paramiko version is 2.2.1.
The connection is successful using the following Python script:
import paramiko
import StringIO
f = open('/MyFolder/Keys/myuserkey','r')
s = f.read()
keyfile = StringIO.StringIO(s)
mykey = paramiko.RSAKey.from_private_key(keyfile,password='mypassphrase')
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('mybackupserver',username='myuser',pkey=mykey)
stdin, stdout, stderr = ssh.exec_command('uptime')
stdout.readlines()
[u' 12:35:27 up 3 days, 1:42, 0 users, load average: 1.59, 3.10, 3.00\n']
Try the pexpect+scp:// backend (more on the available SSH backends can be found in the duplicity manpage: http://duplicity.nongnu.org/duplicity.1.html).
It uses the command-line ssh binaries, so maybe the error is different or more detailed there.
Also, the error on
ssh -i /MyFolder/Keys/myuserkey myuser@mybackupserver
key_load_public: invalid format
does not seem normal. Try to provide the public key in the proper format, or not at all.
..ede/duply.net

"Not a tty" error in Alpine-based duplicity image

This is my first ever question on Stack Overflow, so I hope it adheres to the community guidelines.
I've built a Docker image, based on an existing image, whose sole purpose is to run duplicity in a container to back up files and folders to an Amazon S3 bucket in Europe.
Duplicity worked for a couple of days when run manually inside a container created from the image. Now that I've moved to running containers via unit files on the CoreOS host, things don't work anymore, and the command also fails if I run it manually inside a duplicity container.
The run command:
docker run --rm --env-file=<my backup env file>.env --name=<container image> -v <cache container>:/home/duplicity/.cache/duplicity -v <docker volume with gpg keys>:/home/duplicity/.gnupg --volumes-from <docker container of interest> gymnae/duplicity
The env file contains the following:
PASSPHRASE=<my super secret passphrase>
AWS_ACCESS_KEY_ID=<my aws access key id>
AWS_SECRET_ACCESS_KEY=<my aws access key>
SOURCE_PATH=<where does the data come from>
REMOTE_URL=s3://s3.eu-central-1.amazonaws.com/<my bucket>
PARAMS_CLEAN="--remove-older-than 3M --force --extra-clean"
ENCRYPT_KEY=<derived from the gpg key>
And the init.sh, which is called on docker run, looks like this:
#!/bin/sh
duplicity \
--verbosity 8 \
--s3-use-ia \
--s3-use-new-style \
--s3-use-server-side-encryption \
--s3-european-buckets \
--allow-source-mismatch \
--ssl-no-check-certificate \
--s3-unencrypted-connection \
--volsize 150 \
--gpg-options "--no-tty" \
--encrypt-key $ENCRYPT_KEY \
--sign-key $ENCRYPT_KEY \
$SOURCE_PATH \
$REMOTE_URL
I tried with -i, -it, -t, and just -d, but the result is always the same:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
GPG error detail: Traceback (most recent call last):
File "/usr/bin/duplicity", line 1532, in <module>
with_tempdir(main)
File "/usr/bin/duplicity", line 1526, in with_tempdir
fn()
File "/usr/bin/duplicity", line 1380, in main
do_backup(action)
File "/usr/bin/duplicity", line 1508, in do_backup
incremental_backup(sig_chain)
File "/usr/bin/duplicity", line 662, in incremental_backup
globals.backend)
File "/usr/bin/duplicity", line 425, in write_multivol
at_end = gpg.GPGWriteFile(tarblock_iter, tdp.name, globals.gpg_profile, globals.volsize)
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 356, in GPGWriteFile
file.close()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 241, in close
self.gpg_failed()
File "/usr/lib/python2.7/site-packages/duplicity/gpg.py", line 226, in gpg_failed
raise GPGError(msg)
GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: using "<supersecret>" as default secret key for signing
gpg: signing failed: Not a tty
gpg: [stdin]: sign+encrypt failed: Not a tty
===== End GnuPG log =====
This "Not a tty" error while gpg tries to sign is weird.
It didn't seem to be a problem before (or I did some crazy typing on a late-night shift that made it work once), but now it just doesn't want to work anymore.
For anyone who struggles with the same problem, I found the answer thanks to the developer of duply:
https://sourceforge.net/p/ftplicity/bugs/76/#74c5
In short, starting with gpg 2.1 you need to add GPG_OPTS='--pinentry-mode loopback' and add allow-loopback-pinentry to .gnupg/gpg-agent.conf.
This brought me much closer to a working setup.