s3cmd not excluding a chosen prefix when restoring from Glacier

What am I doing?
I am trying to Glacier-restore all objects under the kevinturino.com prefix in my S3 bucket named simplewebsite, with the kevinturino.com/plugins prefix excluded.
What is happening?
The kevinturino.com/plugins prefix is not being excluded.
My command is:
s3cmd -d --force restore --recursive -D999 --exclude=*plugins* --exclude=plugins/ --exclude=*/plugins* --exclude=kevinturino.com/plugins/ --exclude=kevinturino.com/plugins/* s3://simplewebsite/kevinturino.com/
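One thing worth ruling out (an aside, not part of the original post): the --exclude globs are unquoted, so the shell may expand patterns like *plugins* itself if matching files exist in the current directory. Quoting the patterns guarantees s3cmd receives them verbatim:

s3cmd -d --force restore --recursive -D999 \
    --exclude='*plugins*' \
    --exclude='kevinturino.com/plugins/*' \
    s3://simplewebsite/kevinturino.com/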
The debug output shows the listing paginating straight through the excluded prefix (note the marker parameter):
DEBUG: Canonical Request:
GET
/
marker=kevinturino.com%2Fplugins%2Fnew%2Fsym%2Froot%2Fetc%2Falternatives%2Fri&prefix=kevinturino.com%2F
host:simplewebsite.s3.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20170925T030908Z
host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
----------------------
DEBUG: signature-v4 headers: {'x-amz-content-sha256': u'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': u'AWS4-HMAC-SHA256 Credential=obfuscated/20170925/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=obfuscated', 'x-amz-date': '20170925T030908Z'}
DEBUG: Processing request, please wait...
DEBUG: get_hostname(simplewebsite): simplewebsite.s3.amazonaws.com
DEBUG: ConnMan.get(): re-using connection: https://simplewebsite.s3.amazonaws.com#6
DEBUG: format_uri(): /?marker=kevinturino.com%2Fplugins%2Fnew%2Fsym%2Froot%2Fetc%2Falternatives%2Fri&prefix=kevinturino.com%2F
DEBUG: Sending request method_string='GET', uri=u'/?marker=kevinturino.com%2Fplugins%2Fnew%2Fsym%2Froot%2Fetc%2Falternatives%2Fri&prefix=kevinturino.com%2F', headers={'x-amz-content-sha256': u'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': u'AWS4-HMAC-SHA256 Credential=obfuscated/20170925/us-east-1/s3/aws4_request,SignedHeaders=host;x-amz-content-sha256;x-amz-date,Signature=obfuscated', 'x-amz-date': '20170925T030908Z'}, body=(0 bytes)
DEBUG: Response:
{'headers': {'content-type': 'application/xml',
'date': 'Mon, 25 Sep 2017 03:09:09 GMT',
'server': 'AmazonS3',
'transfer-encoding': 'chunked',
'x-amz-bucket-region': 'us-east-1',
'x-amz-id-2': 'obfuscated',
'x-amz-request-id': '500238C860B78674'},
'reason': 'OK',
'status': 200}
What do I expect?
All objects to be processed for restoration, and the kevinturino.com/plugins prefix to be excluded.
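Worth noting: the canonical request above carries only prefix and marker parameters, because S3's ListObjects API has no server-side exclude, so any exclusion has to be applied client-side by s3cmd after listing. As a workaround, here is a rough sketch (mine, not from the original post; the 7-day Bulk restore request is a placeholder) that lists keys with the AWS CLI, drops the plugins subtree, and issues the restores directly:

# List every key under the prefix, filter out the excluded subtree,
# then request a Glacier restore for each remaining key.
aws s3api list-objects-v2 --bucket simplewebsite --prefix kevinturino.com/ \
    --query 'Contents[].Key' --output text | tr '\t' '\n' |
grep -v '^kevinturino.com/plugins/' |
while read -r key; do
    aws s3api restore-object --bucket simplewebsite --key "$key" \
        --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Bulk"}}'
done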

Related

(LocalStack) Queue peeking API returns an S3 404 error

I am trying to use the SQS queue peeking API documented here (using both the path method and the query param method): https://docs.localstack.cloud/user-guide/aws/sqs/#peeking-into-queues
The response is an S3 error (S3 was not even enabled):
curl "http://localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/queue"
<?xml version='1.0' encoding='utf-8'?>
<ErrorResponse><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><Type>Sender</Type></Error><RequestId>W9WPTXP97BNLX1TFB2VU703TA8TPENLVJ3TBOQ4IS9DMWNJ4SR27</RequestId></ErrorResponse>
My docker compose environment variables:
environment:
  - AWS_DEFAULT_REGION=us-east-1
  - DEFAULT_REGION=us-east-1
  - EDGE_PORT=4566
  - SERVICES=sns, sqs
  - LS_LOG=trace
ports:
  - '4566:4566'
volumes:
Has anyone experienced this before? How should I fix it?
Thanks in advance!
Edit:
log from container:
GET localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/queue
2023-01-26T17:52:02.045 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetObject => 404 (NoSuchBucket); GetObjectRequest({'Bucket': '_aws', 'IfMatch': None, 'IfModifiedSince': None, 'IfNoneMatch': None, 'IfUnmodifiedSince': None, 'Key': 'sqs/messages', 'Range': None, 'ResponseCacheControl': None, 'ResponseContentDisposition': None, 'ResponseContentEncoding': None, 'ResponseContentLanguage': None, 'ResponseContentType': None, 'ResponseExpires': None, 'VersionId': None, 'SSECustomerAlgorithm': None, 'SSECustomerKey': None, 'SSECustomerKeyMD5': None, 'RequestPayer': None, 'PartNumber': None, 'ExpectedBucketOwner': None, 'ChecksumMode': None}, headers={'Host': 'localhost:4566', 'User-Agent': 'curl/7.77.0', 'Accept': '/', 'x-localstack-tgt-api': 's3', 'Authorization': 'AWS4-HMAC-SHA256 Credential=000000000000/20160623/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234', 'x-localstack-edge': 'http://localhost:4566', 'X-Forwarded-For': '127.0.0.1, localhost:4566', 'Connection': 'close'}); NoSuchBucket(The specified bucket does not exist, headers={'Content-Type': 'text/xml', 'Content-Length': '258', 'x-amz-request-id': 'Z45RC1D5WHI9WLFRZXV7ARWF3VRVL1V26XCUFDVV946B5XRMA1JN', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'})
Hi — Can you pull the latest LocalStack Docker image:
docker pull localstack/localstack:latest
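Optionally, verify the image version and which services are active before retrying (a hedged check: the /_localstack/health diagnostic endpoint exists on recent images, but older ones may not serve it):

curl -s http://localhost:4566/_localstack/health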
After that, please set your Compose configuration as:
version: "3.8"
services:
localstack:
container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
image: localstack/localstack:latest
ports:
- "127.0.0.1:4566:4566" # LocalStack Gateway
- "127.0.0.1:4510-4559:4510-4559" # external services port range
environment:
- DEBUG=${DEBUG-}
- LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
- DOCKER_HOST=unix:///var/run/docker.sock
- LOG_LOG=trace
volumes:
- "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
The SERVICES configuration has been deprecated. After starting the LocalStack container, you can now run the following to verify that the SQS developer endpoints are working:
$ awslocal sqs create-queue --queue-name my-queue
$ awslocal sqs send-message --queue-url http://localhost:4566/000000000000/my-queue --message-body test
$ curl "http://localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/my-queue"
This should work now!

Filter Splunk response using queries from an Ansible playbook

Currently we manually monitor Splunk dashboards during our deploys. We would like to automate this with an Ansible playbook containing the Splunk queries, to be run during deployment.
I can successfully connect to Splunk, but I cannot get the search query working.
####
# type: task
#
# vars:
# 5xxcheck_output(str,command): raw output from command
# 5xxcheck_response(str,command): raw output to json
#
# desc:
# uses splunk to get 5xxcheck
---
- name: Tasks to query splunk
  hosts: localhost
  connection: local
  tasks:
    - name: get search_id for 5xx check from splunk
      uri:
        url: https://<splunk_instance>/services/search/jobs
        follow_redirects: all
        method: POST
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body: "search host=tc1* ResponseCode=500 earliest=-15m"
        body_format: raw
        validate_certs: no
        status_code: 201
        return_content: true
      register: search_id

    - debug: msg="{{ search_id.status }}"

    - name: use the search_id to get the 5xx check results
      uri:
        url: https://<splunk_instance>/services/search/jobs/{{ search_id }}/results/
        method: GET
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body_format: raw
        return_content: true
      register: 5xxcheck_output
      until: 5xxcheck_output.status > 0 and 5xxcheck_output.status != 500

    - name: Put results into 5xxcheck_response
      set_fact:
        5xxcheck_response: "{{ 5xxcheck_output.json }}"

    - name: Print 5xxcheck_response if -v
      debug:
        var: 5xxcheck_response
        verbosity: 1
I would like to use the uri module to parameterize the Splunk search. I can execute the following two steps from the terminal to get the response.
Step 1: Get the SID (search ID)
curl -u user:pwd -k https://<splunk-instance>/services/search/jobs -d search="search host=t1* ResponseCode=200 earliest=-15m"
<?xml version="1.0" encoding="UTF-8"?>
<response>
<sid>1604947864.xxxxxx</sid>
</response>
Step 2: Use the SID to get the response
curl -u user:pwd -k https://<splunk-instance>/services/search/jobs/<SID>/results/ --get -d output_mode=raw
This is the playbook that ended up working for me:

---
- name: Tasks to query splunk
  hosts: localhost
  connection: local
  tasks:
    - name: get search_id for 5xx check from splunk
      uri:
        url: https://splunk_instance/services/search/jobs/
        follow_redirects: all
        method: POST
        user: xxxxx
        password: xxxxx
        force_basic_auth: yes
        body_format: form-urlencoded
        status_code: [200, 201, 202]
        body:
          - [ search, "search host=t1* ResponseCode=500 earliest=-15m" ]
          - [ output_mode, "json" ]
        validate_certs: no
        return_content: true
      register: search_id

    - debug: msg="{{ search_id }}"

Now I get a valid SID in the response when I run this playbook.
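A natural follow-up task, sketched here as an assumption rather than as part of the original answer: with output_mode=json the SID should be available as search_id.json.sid, the registered variable is renamed because Ansible variable names may not begin with a digit, and Splunk is assumed to return 204 while the search job is still running:

    - name: use the SID to fetch the 5xx check results
      uri:
        url: "https://splunk_instance/services/search/jobs/{{ search_id.json.sid }}/results?output_mode=json"
        method: GET
        user: xxxxx
        password: xxxxx
        force_basic_auth: yes
        validate_certs: no
        return_content: true
        status_code: [200, 204]   # 204 = job still running (assumed)
      register: check_5xx_output
      until: check_5xx_output.status == 200
      retries: 10
      delay: 5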

Lambda is returning headers in the body of the response

I am using Serverless to deploy my Express.js application to Lambda. The weird thing is that some APIs are returning headers in the body of the response, and I am not sure why this is happening. Here is my serverless YAML file:
org: test
app: test-api
# serverless.yml
service: test-api

package:
  exclude:
    #- node_modules/**
    - __tests__/**

provider:
  name: aws
  runtime: nodejs10.x
  region: us-east-1
  environment:
    SERVICE_NAME: ${self:service}

plugins:
  - serverless-domain-manager

custom:
  stage: ${opt:stage, dev}
  domains:
    prod: api.test.com
    dev: dev-api.test.com
  customDomain:
    basePath: "${self:provider.environment.SERVICE_NAME}"
    domainName: ${self:custom.domains.${self:custom.stage}}
    stage: "${self:custom.stage}"
    createRoute53Record: true

functions:
  test-api:
    handler: build/app.handler
    environment:
      stage: ${self:custom.stage}
    events:
      - http:
          path: v1/s
          method: GET
          cors: true
      - http:
          path: v1/sc
          method: GET
          cors: true
      - http:
          path: v1/s/{s}
          method: GET
          cors: true
      - http:
          path: v1/cs
          method: POST
          cors: true
      - http:
          path: v1/s
          method: POST
          cors: true
      - http:
          path: v1/s/{s}
          method: DELETE
          cors: true
      - http:
          path: v1/s/{s}
          method: PUT
          cors: true
Here is what the response looks like:
curl -d '{"c":"test"}' -H "Content-Type: application/json" -X POST https://dev-api.test.com/test-api/v1/cs
HTTP/1.1 200 Not Modified
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: sessionId
Vary: Origin
Content-Type: application/json; charset=utf-8
Content-Length: 55
ETag: W/"37-vmzwGqI9Wb8ACGS7qhhE3/JBqt4"
Date: Fri, 24 Apr 2020 12:30:41 GMT
Connection: keep-alive
{"rsp":{"msg":{"s":[],"c":{}},"err":null}
Any idea if it's the serverless YAML or some other configuration?
I was able to fix this: I contacted Serverless support, they said there was a bug, and it has since been fixed. Updating Serverless resolved it!
It looks like you have curl's -i option enabled, which prints the response headers before the body. Is curl aliased on your machine?
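A quick way to check for such an alias (a generic shell sketch):

type curl            # reveals whether curl is an alias, a function, or the real binary
alias | grep curl    # shows any alias that silently adds -i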

Traefik with systemd doesn't see Docker containers

I want to start Traefik through systemd, but I don't get the same results from systemd as from a manual start.
Here is an example of when I start traefik manually:
$ traefik --web \
--docker \
--docker.domain=docker
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 377
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:26:09 GMT
{
  "docker": {
    "backends": {
      "backend-rancher": {
        "loadBalancer": {
          "method": "wrr"
        },
        "servers": {
          "server-rancher": {
            "url": "http://172.17.0.2:8080",
            "weight": 0
          }
        }
      }
    },
    "frontends": {
      "frontend-Host-rancher-docker": {
        "backend": "backend-rancher",
        "basicAuth": [],
        "entryPoints": [
          "http"
        ],
        "passHostHeader": true,
        "priority": 0,
        "routes": {
          "route-frontend-Host-rancher-docker": {
            "rule": "Host:rancher.docker"
          }
        }
      }
    }
  }
}
And when I use systemd:
$ sudo systemctl status traefik
● traefik.service - Traefik reverse proxy
Loaded: loaded (/usr/lib/systemd/system/traefik.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-10-15 12:27:35 CEST; 4s ago
Main PID: 12643 (traefik)
Tasks: 9 (limit: 4915)
Memory: 14.6M
CPU: 256ms
CGroup: /system.slice/traefik.service
└─12643 /usr/bin/traefik --web --docker --docker.domain=docker
Oct 15 12:27:35 devbox systemd[1]: Started Traefik reverse proxy.
$ docker ps -q
164f73add870
$ # check traefik api
$ http http://localhost:8080/api/providers
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=UTF-8
Date: Sun, 15 Oct 2017 10:28:18 GMT
{}
Any idea why I don't see my Docker container?
By adding this to the unit, with my user/group, it works!
[Service]
User=...
Group=...
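For context, a sketch of a complete unit (mine, not from the original answer): the Docker provider reads /var/run/docker.sock, so the unit's user must have access to that socket; on a default install that means root or a member of the docker group. Here traefik is a hypothetical user that has been added to the docker group:

[Unit]
Description=Traefik reverse proxy
After=network.target docker.service

[Service]
User=traefik
Group=docker
ExecStart=/usr/bin/traefik --web --docker --docker.domain=docker

[Install]
WantedBy=multi-user.target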

Riak CS returns "s3.amazonaws.com" no matter what is in cs_root_host

cs_root_host is set correctly:
grep root_host /var/lib/riak-cs/generated.configs/app.2015.07.09.13.59.07.config
{cs_root_host,"s3.example.com"},
But when I upload file:
s3cmd put test.jpg s3://images --acl-public
I get in return:
Public URL of the object is: http://images.s3.amazonaws.com/test.jpg
Where is the issue?
Added:
Here is the output; everything looks fine except the last line.
(example.com is just a replacement for the real domain, which I don't want to make public.)
s3cmd -d -c .s3cfg put newfile.jpg s3://images --acl-public
DEBUG: ConfigParser: Reading file '.s3cfg'
DEBUG: ConfigParser: access_key->YD...17_chars...U
DEBUG: ConfigParser: bucket_location->RU
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->s3.example.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.s3.example.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->127.0.0.1
DEBUG: ConfigParser: proxy_port->80
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->kG...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->10
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'put' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: Unicodising 's3://images' using UTF-8
DEBUG: Command: put
INFO: Compiling list of local files...
DEBUG: DeUnicodising u'' using UTF-8
DEBUG: DeUnicodising u'newfile.jpg' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: Unicodising 'newfile.jpg' using UTF-8
INFO: Applying --exclude/--include
DEBUG: CHECK: newfile.jpg
DEBUG: PASS: newfile.jpg
INFO: Summary: 1 local files to upload
DEBUG: Content-Type set to 'image/jpeg'
DEBUG: String 'newfile.jpg' encoded to 'newfile.jpg'
DEBUG: SignHeaders: 'PUT\n\nimage/jpeg\n\nx-amz-acl:public-read\nx-amz-date:Fri, 10 Jul 2015 09:55:37 +0000\n/images/newfile.jpg'
DEBUG: CreateRequest: resource[uri]=/newfile.jpg
DEBUG: Unicodising 'newfile.jpg' using UTF-8
DEBUG: SignHeaders: 'PUT\n\nimage/jpeg\n\nx-amz-acl:public-read\nx-amz-date:Fri, 10 Jul 2015 09:55:37 +0000\n/images/newfile.jpg'
newfile.jpg -> s3://images/newfile.jpg [1 of 1]
DEBUG: get_hostname(images): images.s3.example.com
DEBUG: format_uri(): http://images.s3.example.com/newfile.jpg
32600 of 32600 100% in 0s 14.49 MB/sDEBUG: Response: {'status': 200, 'headers': {'content-length': '0', 'server': 'nginx', 'connection': 'keep-alive', 'etag': '"89e39f454c69a1ce1fadec3a222fc292"', 'date': 'Fri, 10 Jul 2015 09:55:37 GMT', 'content-type': 'text/plain'}, 'reason': 'OK', 'data': '', 'size': 32600}
32600 of 32600 100% in 0s 391.54 kB/s done
DEBUG: MD5 sums: computed=89e39f454c69a1ce1fadec3a222fc292, received="89e39f454c69a1ce1fadec3a222fc292"
Public URL of the object is: http://images.s3.amazonaws.com/newfile.jpg
This is not a Riak CS issue. s3cmd itself produces the public URL string
and prints it.
In my environment, with s3cmd built from the master branch at commit
7bdefc81823699069706ea3680bfa65ec8ad3db5 (just fetched today, 2015-07-14),
it shows the (seemingly) correct URL.
% ~/g/s3cmd/build/scripts-2.7/s3cmd -c .s3cfg.15018.alice put rebar.config -P s3://test/a
rebar.config -> s3://test/a [1 of 1]
2791 of 2791 100% in 0s 196.88 kB/s done
Public URL of the object is: http://test.s3.example.com/a
From the source code of s3cmd, it appears to use the host_bucket or host_base
configuration depending on the bucket name (and possibly other settings).
Some other details on my environment:
- s3cmd configuration: host_base = s3.example.com and host_bucket = %(bucket)s.s3.example.com
- Server: Riak CS, develop branch (commit 1f954aaae45429923f65fdad40c7916a55ab79f3)
- Riak CS configuration: cs_root_host = s3.example.com
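Given that answer, the plausible fix (a suggestion, not from the original thread) is to upgrade s3cmd to a build that includes the change and re-check the printed URL:

git clone https://github.com/s3tools/s3cmd.git
cd s3cmd && python setup.py install
s3cmd -c .s3cfg put test.jpg s3://images --acl-public
# expected: Public URL of the object is: http://images.s3.example.com/test.jpg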