(LocalStack) Queue peeking API returns S3 404 error - amazon-s3

I am trying to use the SQS queue peeking API documented here (using both the path method and the query param method): https://docs.localstack.cloud/user-guide/aws/sqs/#peeking-into-queues
The response is an S3 error, even though S3 was not enabled:
curl "http://localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/queue"
<?xml version='1.0' encoding='utf-8'?>
<ErrorResponse><Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><Type>Sender</Type></Error><RequestId>W9WPTXP97BNLX1TFB2VU703TA8TPENLVJ3TBOQ4IS9DMWNJ4SR27</RequestId></ErrorResponse>
My Docker Compose environment variables:
environment:
  - AWS_DEFAULT_REGION=us-east-1
  - DEFAULT_REGION=us-east-1
  - EDGE_PORT=4566
  - SERVICES=sns, sqs
  - LS_LOG=trace
ports:
  - '4566:4566'
volumes:
Has anyone experienced this before? How should I fix it?
Thanks in advance!
Edit:
Log from the container:
GET localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/queue
2023-01-26T17:52:02.045 INFO --- [ asgi_gw_0] localstack.request.aws : AWS s3.GetObject => 404 (NoSuchBucket); GetObjectRequest({'Bucket': '_aws', 'IfMatch': None, 'IfModifiedSince': None, 'IfNoneMatch': None, 'IfUnmodifiedSince': None, 'Key': 'sqs/messages', 'Range': None, 'ResponseCacheControl': None, 'ResponseContentDisposition': None, 'ResponseContentEncoding': None, 'ResponseContentLanguage': None, 'ResponseContentType': None, 'ResponseExpires': None, 'VersionId': None, 'SSECustomerAlgorithm': None, 'SSECustomerKey': None, 'SSECustomerKeyMD5': None, 'RequestPayer': None, 'PartNumber': None, 'ExpectedBucketOwner': None, 'ChecksumMode': None}, headers={'Host': 'localhost:4566', 'User-Agent': 'curl/7.77.0', 'Accept': '/', 'x-localstack-tgt-api': 's3', 'Authorization': 'AWS4-HMAC-SHA256 Credential=000000000000/20160623/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-date;x-amz-target, Signature=1234', 'x-localstack-edge': 'http://localhost:4566', 'X-Forwarded-For': '127.0.0.1, localhost:4566', 'Connection': 'close'}); NoSuchBucket(The specified bucket does not exist, headers={'Content-Type': 'text/xml', 'Content-Length': '258', 'x-amz-request-id': 'Z45RC1D5WHI9WLFRZXV7ARWF3VRVL1V26XCUFDVV946B5XRMA1JN', 'Access-Control-Allow-Origin': '*', 'Access-Control-Allow-Methods': 'HEAD,GET,PUT,POST,DELETE,OPTIONS,PATCH', 'Access-Control-Allow-Headers': 'authorization,cache-control,content-length,content-md5,content-type,etag,location,x-amz-acl,x-amz-content-sha256,x-amz-date,x-amz-request-id,x-amz-security-token,x-amz-tagging,x-amz-target,x-amz-user-agent,x-amz-version-id,x-amzn-requestid,x-localstack-target,amz-sdk-invocation-id,amz-sdk-request', 'Access-Control-Expose-Headers': 'etag,x-amz-version-id'})

Hi! The trace log shows the request being handled by the S3 provider (bucket '_aws', key 'sqs/messages'), which suggests your LocalStack image predates the SQS developer endpoints, so the /_aws/sqs/messages path falls through to S3's path-style routing. Can you pull the latest LocalStack Docker image:
docker pull localstack/localstack:latest
After that, please set your Compose configuration as follows:
version: "3.8"
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack_main}"
    image: localstack/localstack:latest
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
      - "127.0.0.1:4510-4559:4510-4559"  # external services port range
    environment:
      - DEBUG=${DEBUG-}
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-}
      - DOCKER_HOST=unix:///var/run/docker.sock
      - LS_LOG=trace
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
The SERVICES configuration has been deprecated. After starting the LocalStack container, you can now run the following to verify that the SQS developer endpoints are working:
$ awslocal sqs create-queue --queue-name my-queue
$ awslocal sqs send-message --queue-url http://localhost:4566/000000000000/my-queue --message-body test
$ curl "http://localhost:4566/_aws/sqs/messages?QueueUrl=http://queue.localhost.localstack.cloud:4566/000000000000/my-queue"
This should work correctly now!
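If you prefer to peek from code rather than curl, here is a minimal Python sketch (assuming the requests library and the my-queue queue created above; the endpoint is the same developer endpoint linked in the question):
# Peek at the queued messages via LocalStack's SQS developer endpoint without consuming them.
import requests

QUEUE_URL = "http://queue.localhost.localstack.cloud:4566/000000000000/my-queue"
resp = requests.get(
    "http://localhost:4566/_aws/sqs/messages",
    params={"QueueUrl": QUEUE_URL},
)
resp.raise_for_status()
print(resp.text)  # ReceiveMessage-style payload listing the messages still in the queue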

Related

"X-Cache: Miss from cloudfront" as a result of a call to AWS API Gateway

When I send a GET request to the AWS API Gateway URL "https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/blablabla.png" or the custom domain URL "devblablabla.bla.com" via the browser or Postman, I receive a 200 response with the "X-Cache: Miss from cloudfront" header:
[Screenshot: GET request to AWS API Gateway]
Do you have any idea how I can rewrite the serverless.yml file so that I receive a 200 response with an "X-Cache: Hit" header?
This is the configuration that I deploy:
# serverless.yml
service: s3-blablabla-service
provider:
  name: aws
  stage: dev
  region: us-east-1
  environment:
    SERVICE_NAME: ${self:service}
  apiGateway:
    binaryMediaTypes: "*/*"
plugins:
  - serverless-apigateway-service-proxy
  - serverless-domain-manager
  - serverless-finch
custom:
  c3launchBucketName: "blabla-pl-${self:provider.stage}"
  c3scormBucketName: "blabla-crs-${self:provider.stage}"
  domainName: "${self:provider.stage}blablabla.bla.com" # Change this to your domain.
  basePath: "" # This will be prefixed to all routes
  apiGatewayServiceProxies:
    - s3:
        path: /pl/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3launchBucketName}
          Ref: S3Bucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
    - s3:
        path: /crs/{myKey+} # use path param
        method: get
        action: GetObject
        bucket:
          # ${self:custom.c3scormBucketName}
          Ref: S3ScormBucket
        key:
          pathParam: myKey
        requestParameters:
          "integration.request.header.cache-control": "'public, max-age=31536000, immutable'"
  customDomain:
    domainName: ${self:custom.domainName}
    basePath: ${self:custom.basePath}
    stage: ${self:provider.stage}
    createRoute53Record: true
    autoDomain: true
  client:
    bucketName: ${self:custom.c3launchBucketName}
resources:
  Resources:
    S3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3launchBucketName}
    S3ScormBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:custom.c3scormBucketName}
After the deployment I receive this result:
endpoints:
GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/pl/{myKey+}
GET - https://blablabla.execute-api.us-east-1.amazonaws.com/dev/crs/{myKey+}
Service deployed to stack s3-blablabla-service-dev
Serverless Domain Manager:
Domain Name: devblablabla.bla.com
Target Domain: abrakadabra.cloudfront.net
Hosted Zone Id: BARBARBAR
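A quick way to see whether CloudFront served a given object from its edge cache is to request it twice and compare the X-Cache header (a minimal sketch assuming the requests library and the custom-domain URL from above; a second-request value of "Hit from cloudfront" means the object was cached):
# Request the same object twice and print CloudFront's cache verdict for each attempt.
import requests

url = "https://devblablabla.bla.com/crs/blablabla.png"
for attempt in (1, 2):
    resp = requests.get(url)
    print(attempt, resp.status_code, resp.headers.get("X-Cache"))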

How to serve pre-trained models like universal-sentence-encoder using Seldon Core

I am trying to use Seldon Core to serve pre-trained TensorFlow models.
What is the best way to serve pre-trained models? I tried deploying directly, but it did not work, so I downloaded the model, uploaded it into a bucket, and serve that model as below:
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
  name: embedding
  namespace: seldon
  labels:
    nodepool: general
spec:
  name: dan
  predictors:
    - graph:
        implementation: TENSORFLOW_SERVER
        modelUri: gs://tf_models_test/universal-sentence-encoder_4_dan
        serviceAccountName: poc-seldon-sa
        name: embedding
        endpoint:
          type: GRPC
          type: REST
      name: embedding
      replicas: 1
I see errors like the one below:
2022-10-18 03:50:28,294 - seldon_core.wrapper:handle_generic_exception:53 - ERROR: {'status': {'status': 1, 'info': "HTTPConnectionPool(host='0.0.0.0', port=2001): Max retries exceeded with url: /v1/models/embedding:predict (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fb038415bd0>: Failed to establish a new connection: [Errno 111] Connection refused'))", 'code': -1, 'reason': 'MICROSERVICE_INTERNAL_ERROR'}}
Is there any other way I should use to serve it?
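One thing worth checking first (an assumption based on the connection-refused error above, not something confirmed in the question): a refused connection on port 2001 usually means the TensorFlow Serving container never loaded a model, so it is worth verifying that the modelUri really contains a SavedModel layout (a numeric version folder with saved_model.pb and variables/). A minimal sketch using the google-cloud-storage client, with the bucket and prefix taken from the modelUri above:
# List what is actually stored under the modelUri used by the TENSORFLOW_SERVER graph.
from google.cloud import storage

client = storage.Client()
for blob in client.list_blobs("tf_models_test", prefix="universal-sentence-encoder_4_dan/"):
    print(blob.name)  # expect something like .../1/saved_model.pb and .../1/variables/...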

How to run a Lambda Docker with serverless offline

I would like to run serverless offline using a Lambda function that points to a Docker image.
When I try to run serverless offline, I am just receiving:
Offline [http for lambda] listening on http://localhost:3002
Function names exposed for local invocation by aws-sdk:
* hello-function: sample-app3-dev-hello-function
If I try to access http://localhost:3002/hello, a 404 error is returned.
serverless.yml
service: sample-app3
frameworkVersion: '3'
plugins:
  - serverless-offline
provider:
  name: aws
  ecr:
    images:
      sampleapp3image:
        path: ./app/
        platform: linux/amd64
functions:
  hello-function:
    image:
      name: sampleapp3image
    events:
      - httpApi:
          path: /hello
          method: GET
app/myfunction.py
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello World!'
    }
app/Dockerfile
FROM public.ecr.aws/lambda/python:3.9
COPY myfunction.py ./
CMD ["myfunction.lambda_handler"]
At the moment such functionality is not supported in the serverless-offline plugin. There's an open issue where the discussion started around supporting this use case: https://github.com/dherault/serverless-offline/issues/1324
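As a possible interim workaround (my suggestion, separate from serverless-offline): the public.ecr.aws/lambda/python:3.9 base image ships the Lambda Runtime Interface Emulator, so you can build the image, start it with docker run -p 9000:8080 <your-image>, and invoke the handler over HTTP. A minimal sketch of such an invocation, assuming the requests library:
# Invoke the locally running Lambda container through the Runtime Interface Emulator.
# Assumes the container was started with: docker run -p 9000:8080 <your-image>
import requests

resp = requests.post(
    "http://localhost:9000/2015-03-31/functions/function/invocations",
    json={},  # the event passed to lambda_handler
)
print(resp.json())  # expect {'statusCode': 200, 'body': 'Hello World!'}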

Cannot use Spring Cloud Config and Istio 1.1.1 together - cannot recover from HTTP 404 error when getting remote config

When I try to combine Spring Cloud Config with Istio 1.1.1, the following happens: when my app container (with the Istio Envoy sidecar auto-injected) starts, the Spring Cloud Config client tries to fetch its configuration (applicationContext.yaml) from the remote config server (which was started in advance and is healthy), but it fails with an HTTP 404 error. Even though I have configured the config client to retry, it keeps retrying and always gets HTTP 404 (I have confirmed the config server URL is correct from another container), and it never recovers. This happens only sometimes. I know that the Envoy sidecar and my app run in the same Kubernetes pod and that the app may start before Envoy, in which case there can be network errors, but as soon as Envoy is up everything should be OK. I really don't understand why my app cannot recover automatically. Here are my diagnostic steps:
1. Added a retry mechanism in my app (with the retry libraries included in the POM and the YAML below) - the retry works, but each retry fails with HTTP 404:
spring:
  cloud:
    config:
      fail-fast: true
      retry:
        initial-interval: 10000
        max-attempts: 100
2. Added a 'sleep xx' before my Java app starts in the Kubernetes deployment file - fewer HTTP 404 errors, but the problem is not eliminated:
command: ["/bin/sh","-c","sleep 20; java -jar -Xms512m -Xmx1024m app.jar"]
3. Collected the Istio Envoy access logs and compared the failing app's with a good app's - the good log has values for the upstream_cluster and upstream_host keys, while in the bad log those fields are empty and the response_flags value is NR (Envoy found no route for the request, presumably because the sidecar had not yet received its configuration).
The good access log:
{
"response_code": "200",
"user_agent": "Java/1.8.0_121",
"response_flags": "-",
"start_time": "2019-06-25T01:17:29.661Z",
"method": "2019-06-25T01:17:29.661Z",
"request_id": "d3d27512-161b-4303-bb48-05a6e19e05b7",
"upstream_host": "172.20.3.104:9083",
"x_forwarded_for": "-",
"requested_server_name": "-",
"bytes_received": "0",
"istio_policy_status": "-",
"bytes_sent": "1144",
"upstream_cluster": "outbound|9083||fota-spring-config.ns-fota.svc.cluster.local",
"downstream_remote_address": "172.20.2.115:45816",
"path": "/fota-spring-config/fota-task/dev/master",
"authority": "fota-spring-config.ns-fota.svc.cluster.local:9083",
"protocol": "HTTP/1.1",
"upstream_service_time": "289",
"upstream_local_address": "-",
"duration": "290",
"downstream_local_address": "172.21.1.152:9083"
}
the bad access log:
{
"upstream_cluster": "-",
"downstream_remote_address": "172.20.2.118:41980",
"path": "/fota-spring-config/fota-dmserver/dev/master",
"authority": "fota-spring-config.ns-fota.svc.cluster.local:9083",
"protocol": "HTTP/1.1",
"upstream_service_time": "-",
"upstream_local_address": "-",
"duration": "0",
"downstream_local_address": "172.21.1.152:9083",
"response_code": "404",
"user_agent": "Java/1.8.0_121",
"response_flags": "NR",
"start_time": "2019-06-25T01:21:24.197Z",
"method": "2019-06-25T01:21:24.197Z",
"request_id": "346716e4-1def-465f-b370-cb1e71e30d25",
"upstream_host": "-",
"x_forwarded_for": "-",
"requested_server_name": "-",
"bytes_received": "0",
"istio_policy_status": "-",
"bytes_sent": "0"
}
The Kubernetes deployment file is attached:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fota-car
spec:
  template:
    metadata:
      labels:
        app: fota-car
        version: v1
    spec:
      serviceAccountName: fota-serviceaccount
      imagePullSecrets:
        - name: uaes-docker2
      containers:
        - name: fota-car
          image: 192.168.119.22:18080/uaes-fota/fota-car:dev-release-1.0.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8085
          env:
            - name: SPRING_DATASOURCE_URL
              value: jdbc:mysql://mysql-ali-dev.ns-fota-ext-svc/fota-car?useUnicode=true&characterEncoding=utf-8&useSSL=false
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mysql-ali-dev-secret
                  key: username
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-ali-dev-secret
                  key: password
          command: ["/bin/sh","-c","java -jar -Xms512m -Xmx1024m app.jar"]
          readinessProbe:
            httpGet:
              path: /actuator/health
              port: 18085
            initialDelaySeconds: 60
            timeoutSeconds: 1
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: fota-car
  name: fota-car
spec:
  ports:
    - name: http
      port: 8085
  selector:
    app: fota-car

Serverless Framework: Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation

I am packaging and uploading a zipped artifact like this:
frameworkVersion: "=1.27.3"
service: recipes
provider:
  name: aws
  endpointType: REGIONAL
  runtime: python3.6
  stage: dev
  region: eu-central-1
  memorySize: 512
  deploymentBucket:
    name: dfki-meta
  versionFunctions: false
  stackTags:
    Project: DFKIAPP
  # Allows updates to all resources except deleting/replacing EC2 instances
  stackPolicy:
    - Effect: Allow
      Principal: "*"
      Action: "Update:*"
      Resource: "*"
    - Effect: Deny
      Principal: "*"
      Action:
        - Update: Replace
        - Update: Delete
      Resource: "*"
      Condition:
        StringEquals:
          ResourceType:
            - AWS::EC2::Instance
  # Access to RDS and S3 Bucket
  iamRoleStatements:
    - Effect: "Allow"
      Action: "s3:ListBucket"
      Resource: "*"
package:
  individually: true
functions:
  # get_recipes:
  #   handler: handler.get_recipes
  #   module: recipes_crud
  #   package:
  #     individually: true
  #   timeout: 30
  #   events:
  #     - http:
  #         path: recipes
  #         method: get
  #         request:
  #           parameters:
  #             querystring:
  #               persona: true
  get_recommendation:
    handler: handler.get_recommendation
    module: recipes_ml
    package:
      artifact: zipped_dir.zip
    timeout: 30
    events:
      - http:
          path: recipes/{id}
          method: get
          request:
            parameters:
              paths:
                id: true
              querystring:
                schaerfe_def: true
                saettig_def: true
                erfahrung_def: true
                schaerfe_wunsch: true
                saettig_wunsch: true
                erfahrung_wunsch: true
                gericht_wunsch: true
                stimmung_wunsch: true
I cannot understand this error - isn't 52.18 MB under 69905067 bytes?
(node:50928) ExperimentalWarning: The fs.promises API is experimental
Serverless: Packaging function: get_recommendation...
Serverless: Uploading function: get_recommendation (52.18 MB)...
Serverless Error ---------------------------------------
Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 10.1.0
Serverless Version: 1.27.3
The package size should be lower than 50 MB according to the docs:
https://docs.aws.amazon.com/lambda/latest/dg/limits.html
From this blog post:
The 20 MB addition presumably is there to account for request overhead involved with the AWS API (e.g. base64 encoding of the zip file data). So far the 50 MB limit holds true-ish. But, we're not defeated yet.
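That base64 overhead explains the exact number in the error: 69905067 bytes is precisely a 50 MiB zip after the 4/3 base64 expansion, as a quick arithmetic check shows:
# The UpdateFunctionCode request cap corresponds to a 50 MiB zip once base64-encoded.
limit_bytes = 69905067
max_zip_mib = limit_bytes * 3 / 4 / (1024 ** 2)
print(round(max_zip_mib, 2))  # -> 50.0, so the effective zip limit is roughly 50 MiB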
This seems to be an issue only when uploading an individual Lambda function with serverless; if you don't pass the --function parameter and deploy the full stack instead, it works fine - presumably because a full deploy uploads the artifact to the deployment bucket and CloudFormation pulls it from S3, rather than sending the zip directly in the UpdateFunctionCode request.