I am trying to install a Let's Encrypt SSL certificate on Amazon Linux AMI:
NAME="Amazon Linux AMI"
VERSION="2016.09"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.09"
PRETTY_NAME="Amazon Linux AMI 2016.09"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.09:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
Amazon Linux AMI release 2016.09
I am following this blog post: https://medium.com/@mohan08p/install-and-renew-lets-encrypt-ssl-on-amazon-ami-6d3e0a61693
But at the final step I get an error:
{
  "type": "urn:acme:error:unauthorized",
  "detail": "Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details.",
  "status": 403
}
How should I proceed?
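The error means the client from that blog post speaks ACMEv1, which Let's Encrypt has retired; any ACMEv2-capable client resolves it. Below is a minimal sketch using certbot-auto, which at the time was the usual route on Amazon Linux AMI; the webroot path and domain are placeholders, and certbot-auto has since been deprecated in favor of distribution packages or clients such as acme.sh:
# Fetch the wrapper script and make it executable
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
# --debug is required because certbot-auto treats Amazon Linux AMI
# as an experimental platform
sudo ./certbot-auto certonly --webroot -w /var/www/html -d example.com --debug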
I'm using [this][1] library to generate SSL certificates. Its storage generates four files: certificate.pem, private_key.pem, chain.pem, and fullchain.pem.
I want to install this certificate on Acquia Cloud using their REST API's POST endpoint for installing SSL certificates. The payload looks like the following:
{
  "legacy": 0,
  "certificate": "pasted the content inside our certificate.pem",
  "private_key": "pasted the content inside private_key.pem",
  "ca_certificates": "pasted the content inside the fullchain.pem",
  "label": "My New Cert"
}
When I send the request, I receive an error telling me to contact the API owner's support. Searching through the server logs, I came across this:
Error response: 500 (Internal Server Error). Error message: Site certificate CA chain certificates are out of order..
What exactly does this error mean by "out of order"?
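For what it's worth, "out of order" generally means the chain you submitted is not sorted leaf-signer first: each certificate's issuer must be the subject of the certificate that follows it. A likely culprit here is pasting fullchain.pem (which repeats the leaf certificate at the top) instead of chain.pem (intermediates only) into ca_certificates. A quick way to inspect the order, assuming openssl is installed:
# Print subject and issuer of every certificate in the file, in order;
# each issuer should match the subject of the certificate that follows
openssl crl2pkcs7 -nocrl -certfile fullchain.pem | openssl pkcs7 -print_certs -noout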
Error -
*** Failed to verify remote log exists s3://airflow_test/airflow-logs/demo/task1/2022-05-13T18:20:45.561269+00:00/1.log.
An error occurred (403) when calling the HeadObject operation: Forbidden
Dockerfile -
FROM apache/airflow:2.2.3
COPY /airflow/requirements.txt /requirements.txt
RUN pip install --no-cache-dir -r /requirements.txt
RUN pip install apache-airflow[crypto,postgres,ssh,s3,log]
USER root
# Update the apt package index
RUN apt-get update
# Install software
RUN apt-get install -y git
USER airflow
Under the connection UI -
Connection Id * - aws_s3_log_storage
Connection Type * - S3
Host - <My company's internal link>. (ex - https://abcd.company.com)
Extra - {"aws_access_key_id": "key", "aws_secret_access_key": "key", "region_name": "us-east-1"}
Under values.yaml -
config:
  logging:
    remote_logging: 'True'
    remote_base_log_folder: 's3://airflow_test/airflow-logs'
    remote_log_conn_id: 'aws_s3_log_storage'
    logging_level: 'INFO'
    fab_logging_level: 'WARN'
    encrypt_s3_logs: 'False'
    host: '<My company's internal link>. (ex - https://abcd.company.com)'
    colored_console_log: 'False'
How did I create the bucket?
Installed awscli
Used the following commands -
1. aws configure
   AWS Access Key ID: <access key>
   AWS Secret Access Key: <secret key>
   Default region name: us-east-1
   Default output format:
2. aws s3 mb s3://airflow_test --endpoint-url=<My company's internal link>. (ex - https://abcd.company.com)
I have no clue how to resolve this error. I am very new to Airflow and Helm charts.
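One assumption worth checking, since the bucket sits behind a company-internal S3-compatible endpoint rather than AWS S3 itself: depending on your amazon provider version, the S3 hook reads a custom endpoint from the connection's Extra JSON ("host" in older releases, "endpoint_url" in newer ones) rather than from the form's Host field. A hedged sketch of recreating the connection that way via the CLI, with placeholder keys:
# "host" here is the assumed extra key for a custom S3 endpoint
airflow connections add aws_s3_log_storage \
    --conn-type aws \
    --conn-extra '{"aws_access_key_id": "key", "aws_secret_access_key": "key", "region_name": "us-east-1", "host": "https://abcd.company.com"}'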
I had the same error message as you. Your account or key might not have enough permissions to access the S3 bucket.
Please check that your role has the permissions below (a complete policy sketch follows the list).
"s3:PutObject*",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl",
"s3:GetObject*",
"s3:ListObject*",
"s3:ListBucket*",
"s3:PutBucket*",
"s3:GetBucket*",
"s3:DeleteObject
I am unable to connect to a Neptune instance that has IAM authentication enabled. I have followed the AWS documentation (correcting a few of my silly errors along the way) but without luck.
Whether I connect via my Java application using the SigV4Signer or via the Gremlin console, I get a 400 Bad Request WebSocket error:
o.a.t.g.d.Handler$GremlinResponseHandler : Could not process the response
io.netty.handler.codec.http.websocketx.WebSocketHandshakeException: Invalid handshake response getStatus: 400 Bad Request
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker13.verify(WebSocketClientHandshaker13.java:267)
at io.netty.handler.codec.http.websocketx.WebSocketClientHandshaker.finishHandshake(WebSocketClientHandshaker.java:302)
at org.apache.tinkerpop.gremlin.driver.handler.WebSocketClientHandler.channelRead0(WebSocketClientHandler.java:69)
When I run com.amazon.neptune.gremlin.driver.example.NeptuneGremlinSigV4Example (from my machine over port-forwarding AND from the EC2 jumphost) I get:
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
I am able to connect to my Neptune instance using the older, now-deprecated certificate mechanism. I am using an EC2 jumphost and port-forwarding.
I believe the SigV4 signing itself is working, as the Neptune audit logs show attempts to connect with my aws_access_key:
1584098990319, <jumphost_ip>:47390, <db_instance_ip>:8182, HTTP_GET, [unknown], [unknown], "HttpObjectAggregator$AggregatedFullHttpRequest(decodeResult: success, version: HTTP/1.1, content: CompositeByteBuf(ridx: 0, widx: 0, cap: 0, components=0)) GET /gremlin HTTP/1.1 upgrade: websocket connection: upgrade sec-websocket-key: g44zxck9hTI9cZrq05V19Q== sec-websocket-origin: http://localhost:8182 sec-websocket-version: 13 Host: localhost:8182 X-Amz-Date: 20200313T112950Z Authorization: AWS4-HMAC-SHA256 Credential=<my_access_key>/20200313/eu-west-2/neptune-db/aws4_request, SignedHeaders=host;sec-websocket-key;sec-websocket-origin;sec-websocket-version;upgrade;x-amz-date, Signature=<the_signature> content-length: 0", /gremlin
This is the policy that I created:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "neptune-db:*"
      ],
      "Resource": [
        "arn:aws:neptune-db:eu-west-2:<my_aws_account>:*/*"
      ]
    }
  ]
}
I have previously tried with a policy that references my cluster resource id.
I created a new API user with this policy attached as its only permission (I've tried this twice).
IAM shows me that the graph-user I created has never successfully signed in (duh).
It seems the issue is with the IAM setup somewhere along the line. Is it possible to get more information out of AWS about why the connection attempt is failing?
I am using the most recent release of Neptune and the 3.4.3 Gremlin Driver and console. I am using Java 8 when running the NeptuneGremlinSigV4Example and building the libraries to deploy to the console.
Thanks.
It appears from the audit log output that the SigV4 Signature that is being created is using localhost as the Host header. This is most likely due to the fact that you're using a proxy to connect to Neptune. By default, the NeptuneGremlinSigV4Example assumes that you're connecting directly to a Neptune endpoint and reuses the endpoint as the Host header in creating the Signature.
To get around this, you can use the following example code that overrides this process and allows you to use a proxy and still sign the request properly.
https://github.com/aws-samples/amazon-neptune-samples/tree/master/gremlin/gremlin-java-client-demo
I was able to get this to work using the following.
Create an SSH tunnel from your local workstation to your EC2 jumphost:
ssh -i <key-pem-file> -L 8182:<neptune-endpoint>:8182 ec2-user@<ec2-jumphost-hostname>
Set the following environment variables:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
export SERVICE_REGION=<region_id> (e.g. us-west-2)
Once the tunnel is up and your environment variables are set, use the following format with the Gremlin-Java-Client-Demo:
java -jar target/gremlin-java-client-demo.jar --nlb-endpoint localhost --lb-port 8182 --neptune-endpoint <neptune-endpoint> --port 8182 --enable-ssl --enable-iam-auth
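As a sanity check under the same setup, AWS's documentation also suggests awscurl (pip install awscurl) for testing IAM-authenticated Neptune endpoints. Run it from the jumphost directly against the Neptune endpoint so the signed Host header matches; the /status call below is plain HTTPS, so no WebSocket handshake is involved:
export AWS_ACCESS_KEY_ID=<access_key>
export AWS_SECRET_ACCESS_KEY=<secret_key>
# Query the instance status endpoint with a SigV4-signed request
awscurl --service neptune-db --region <region_id> https://<neptune-endpoint>:8182/status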
I want to be able to use the Minio client on Arch.
I use Arch with Minio version
$ minio version
Version: DEVELOPMENT.GOGET
Release-Tag: DEVELOPMENT.GOGET
Commit-ID: DEVELOPMENT.GOGET
and Minio client version
$ mcli version
Version: 2018-03-23T17:45:52Z
Release-tag: DEVELOPMENT.2018-03-23T17-45-52Z
Commit-id: fe82b0381c5ccf004515be3bfd9b6fa733890005
When I try to add the host config (using the endpoint, access key, and secret key printed when starting the Minio server), I get the following:
$ mcli config host add myminio <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY>
mcli: Configuration written to `/home/user/.mc/config.json`. Please update your access credentials.
mcli: Successfully created `/home/user/.mc/share`.
mcli: Initialized share uploads `/home/user/.mc/share/uploads.json` file.
mcli: Initialized share downloads `/home/user/.mc/share/downloads.json` file.
mcli: <ERROR> Unable to initialize new config from the provided credentials The request signature we calculated does not match the signature you provided. Check your key and signing method
On the server I get this printout:
ERRO[0529] {
  "method": "GET",
  "reqURI": "/probe-bucket-sign/?location=",
  "header": {
    "Authorization": ["AWS4-HMAC-SHA256 Credential=<Creadentisl>/20180323/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=<signature>"],
    "Host": ["<YOUR-S3-ENDPOINT>"],
    "User-Agent": ["Minio (linux; amd64) minio-go/5.0.1"],
    "X-Amz-Content-Sha256": ["<...>"],
    "X-Amz-Date": ["20180323T174913Z"]
  }
}
cause=Signature does not match source=[auth-handler.go:122:checkRequestAuthType()]
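A hedged suggestion, not something established in this thread: signature-mismatch errors often come down to the client and server disagreeing on the signature version, or to credentials picking up stray whitespace when pasted. You can pin the signature version explicitly when registering the host; the endpoint and keys below are placeholders:
# Re-add the host, forcing SigV4 to match the server's expectation
mcli config host add myminio http://localhost:9000 <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> --api S3v4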
I've been using CloudFront to terminate SSL for several websites, but I can't seem to get it to recognize my newly uploaded SSL certificate.
Here's what I've done so far:
Purchased a valid SSL certificate and uploaded it via the AWS CLI as follows:
$ aws iam upload-server-certificate \
--server-certificate-name www.codehappy.io \
--certificate-body file://www.codehappy.io.crt \
--private-key file://www.codehappy.io.key \
--certificate-chain file://www.codehappy.io.chain.crt \
--path /cloudfrount/codehappy-www/
For which I get the following output:
{
  "ServerCertificateMetadata": {
    "ServerCertificateId": "ASCAIKR2OSE6GX43URB3E",
    "ServerCertificateName": "www.codehappy.io",
    "Expiration": "2016-10-19T23:59:59Z",
    "Path": "/cloudfrount/codehappy-www/",
    "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
    "UploadDate": "2015-10-20T20:02:36.983Z"
  }
}
NOTE: I first ran aws configure and supplied my IAM user's credentials (this worked just fine).
Next, I ran the following command to view a list of all my existing SSL certificates on IAM:
$ aws iam list-server-certificates
{
  "ServerCertificateMetadataList": [
    {
      "ServerCertificateId": "ASCAIIMOAKWFL63EKHK4I",
      "ServerCertificateName": "www.ipify.org",
      "Expiration": "2016-05-25T23:59:59Z",
      "Path": "/cloudfront/ipify-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/ipify-www/www.ipify.org",
      "UploadDate": "2015-05-26T04:30:15Z"
    },
    {
      "ServerCertificateId": "ASCAJB4VOWIYAWN5UEQAM",
      "ServerCertificateName": "www.rdegges.com",
      "Expiration": "2016-05-28T23:59:59Z",
      "Path": "/cloudfront/rdegges-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfront/rdegges-www/www.rdegges.com",
      "UploadDate": "2015-05-29T00:11:23Z"
    },
    {
      "ServerCertificateId": "ASCAJCH7BQZU5SZZ52YEG",
      "ServerCertificateName": "www.codehappy.io",
      "Expiration": "2016-10-19T23:59:59Z",
      "Path": "/cloudfrount/codehappy-www/",
      "Arn": "arn:aws:iam::001177337028:server-certificate/cloudfrount/codehappy-www/www.codehappy.io",
      "UploadDate": "2015-10-20T20:09:22Z"
    }
  ]
}
NOTE: As you can see, I'm able to view all three of my SSL certificates, including my newly created one.
Next, I logged into the IAM UI to verify that my IAM user account has administrator access:
As you can see, my user is part of an 'Admins' group, which has unlimited admin access to AWS.
Finally, I logged into the CloudFront UI and attempted to select my new SSL certificate. Unfortunately, this is where things stop working =/ Only my other two SSL certs are listed:
Does anyone know what I need to do so I can use my new SSL certificate with Cloudfront?
Thanks so much!
Most likely, the issue is that the path is incorrect: it is not cloudfrount but cloudfront. CloudFront only picks up IAM server certificates whose path begins with /cloudfront/.
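IAM can move a certificate to a new path without re-uploading it, so here is a minimal sketch of the fix, reusing the certificate name from the question:
aws iam update-server-certificate \
    --server-certificate-name www.codehappy.io \
    --new-path /cloudfront/codehappy-www/
After the path change, the certificate should show up in CloudFront's certificate dropdown, though it can take a few minutes to propagate.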
I had a very similar issue, and the problem was directly related to my private key. Reissuing the certificate with an RSA 2048-bit key instead of an RSA 4096-bit one solved the issue for me. It could also be something other than key size, such as the formatting of your PEM blocks or an encrypted private key.
In short, ACM's import filter won't catch everything, nor will it verify that a certificate works across all AWS products, so double-check that your key settings are compatible with CloudFront when using external certificates. Here's a list of compatibility requirements for CloudFront; remember that compatibility can vary from product to product, so always double-check. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cnames-and-https-requirements.html
Had I simply read the docs first, as usual, I would have saved myself a headache. 4096-bit keys are perfectly fine for some ACM functionality; however, this does not include CloudFront.
Importing a certificate into AWS Certificate Manager (ACM): public key length must be 1024 or 2048 bits. The limit for a certificate that you use with CloudFront is 2048 bits, even though ACM supports larger keys.
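To check an existing certificate's key length, or to generate a fresh 2048-bit key and CSR for reissuing, openssl covers both; the file names below are placeholders:
# Print the public key size embedded in an issued certificate
openssl x509 -in www.codehappy.io.crt -noout -text | grep 'Public-Key'
# Generate a new 2048-bit private key plus a CSR to reissue against
openssl req -new -newkey rsa:2048 -nodes \
    -keyout www.codehappy.io.key -out www.codehappy.io.csr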