I have a Lambda function that is triggered when an image is uploaded to S3 and optimizes it.
Everything works when I upload a file from the console, but when I upload it from code using boto3 I receive an invalid key in the event.
Event received when the file is uploaded by the renderImage Lambda function:
{
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "eu-central-1",
"eventTime": "2022-08-08T20:32:14.880Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "AWS::*******************:renderImage"
},
"requestParameters": {
"sourceIPAddress": "*.*.*.*"
},
"responseElements": {
"x-amz-request-id": "***********",
"x-amz-id-2": "*****************************************************************************"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "s3ImageOptimizer-2386451296541264526234",
"bucket": {
"name": "myBucket",
"ownerIdentity": {
"principalId": "**************"
},
"arn": "arn:aws:s3:::myBucket"
},
"object": {
"key": "000-2022_08_08-08%3A32%3A04_PM.jpg", # <-- Invalid
"size": 558339,
"eTag": "e7e63f48Fd1849a53628255353f7027b",
"sequencer": "0062F172CED5CE84E2"
}
}
}
]
}
Event received when the file is uploaded from the console:
{
"Records": [
{
"eventVersion": "2.1",
"eventSource": "aws:s3",
"awsRegion": "eu-central-1",
"eventTime": "2022-08-08T20:33:43.584Z",
"eventName": "ObjectCreated:Put",
"userIdentity": {
"principalId": "AWS:*******************"
},
"requestParameters": {
"sourceIPAddress": "*.*.*.*"
},
"responseElements": {
"x-amz-request-id": "***********",
"x-amz-id-2": "*****************************************************************************"
},
"s3": {
"s3SchemaVersion": "1.0",
"configurationId": "s3ImageOptimizer-2386451296541264526234",
"bucket": {
"name": "myBucket",
"ownerIdentity": {
"principalId": "**************"
},
"arn": "arn:aws:s3:::myBucket"
},
"object": {
"key": "000-2022_08_08-08-32-04_PM.jpg", # <-- Correct
"size": 558339,
"eTag": "e7e63f48Fd1849a53628255353f7027b",
"sequencer": "0062F173278112713C"
}
}
}
]
}
The thing is that when I upload using the Lambda function, the key is invalid; it returns
000-2022_08_08-08%3A32%3A04_PM.jpg
instead of
000-2022_08_08-08-32-04_PM.jpg
It seems that two of the separators are being percent-encoded as %3A (the URL encoding of a colon).
Why is this happening?
Here is my serverless.yml definition:
service: s3-optimizer

package:
  individually: true

plugins:
  - serverless-iam-roles-per-function

provider:
  name: aws
  deploymentMethod: direct
  runtime: nodejs16.x
  region: eu-central-1
  stage: develop
  memorySize: 4096
  apiGateway:
    minimumCompressionSize: 1024
    shouldStartNameWithService: true
  httpApi:
    cors: true

functions:
  s3ImageOptimizer:
    handler: src/functions/s3/image-optimizer/handler.main
    timeout: 90
    iamRoleStatements:
      - Effect: Allow
        Action:
          - s3:GetObject
          - s3:PutObject
        Resource:
          - arn:aws:s3:::myBucket/*
    events:
      - s3:
          bucket: myBucket
          event: s3:ObjectCreated:*
          rules:
            - prefix: original/
            - suffix: .jpg
          existing: true
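For what it's worth, S3 event notifications deliver the object key URL-encoded, so if the underlying file name contains colons they arrive as %3A. A common workaround (a minimal sketch assuming the standard Node.js S3 event shape, not necessarily the root cause here) is to decode the key at the top of the handler:

// src/functions/s3/image-optimizer/handler.js - sketch only
module.exports.main = async (event) => {
  const rawKey = event.Records[0].s3.object.key;
  // S3 event keys are URL-encoded and use '+' for spaces, so decode before use
  const key = decodeURIComponent(rawKey.replace(/\+/g, ' '));
  // ...download the object with the decoded key and optimize it...
};

That said, I would still like to understand why the two events differ.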
This is my config for krakend.json
"async_agent": [
{
"name": "test-agent",
"connection": {
"max_retries": 10,
"backoff_strategy": "exponential-jitter"
},
"consumer": {
"topic": "krakend",
"workers": 5
},
"backend": [
{
"url_pattern": "/greeted",
"method": "POST",
"host": [ "http://127.0.0.1:2999" ],
"disable_host_sanitize": false
}
],
"extra_config": {
"async/amqp": {
"host": "amqp://guest:guest#localhost:5672/",
"name": "krakend",
"exchange": "ApiGatewayExchange",
"durable": true,
"delete": false,
"exclusive": false,
"no_wait": false,
"auto_ack": false
}
}
}
]
Messages are sent from service-a like so:
export class AppService {
  constructor(@Inject('GREETING_SERVICE') private client: ClientProxy) {}

  getHello(): ResponseDTO {
    const responseDTO: ResponseDTO = {
      action: 'Hello',
      service: 'from service A'
    };
    this.client.emit('', responseDTO);
    return responseDTO;
  }
}
And the GREETING_SERVICE config is imported like so:
imports: [
  ClientsModule.register([
    {
      name: 'GREETING_SERVICE',
      transport: Transport.RMQ,
      options: {
        urls: ['amqp://test:test@localhost:5672/'],
        queue: 'krakend'
      }
    }
  ])
],
Lastly, this is the endpoint in another service (let's call this service-c) that gets that message from the consumer:
@Post('greeted')
TestHello(@Body() data: any) {
  console.log(data)
  return data
}
The message is successfully consumed, as set by the async_agent in my krakend file, but it isn't posted as a body to that endpoint. When I console.log the data that is supposedly passed, it just prints {}.
Am I doing anything wrong here? Been scratching my head for hours.
The async part of your krakend.json configuration looks good to me, but I have a suspicion about the problem you might be having.
Most JavaScript frameworks today require specific headers to work their magic, like Content-Type or Accept. You have to take into account that KrakenD passes a very reduced set of headers to your NestJS application (Accept-Encoding and User-Agent, as far as I can remember).
I am unfamiliar with NestJS, but I would bet that you need to pass the Content-Type and you are good to go. Here's my suggested configuration:
"async_agent": [
{
"name": "test-agent",
"connection": {
"max_retries": 10,
"backoff_strategy": "exponential-jitter"
},
"consumer": {
"topic": "krakend",
"workers": 5
},
"backend": [
{
"url_pattern": "/greeted",
"method": "POST",
"host": [
"http://127.0.0.1:2999"
],
"disable_host_sanitize": false,
"extra_config": {
"modifier/martian": {
"header.Modifier": {
"scope": [
"request"
],
"name": "Content-Type",
"value": "application/json"
}
}
}
}
],
"extra_config": {
"async/amqp": {
"host": "amqp://guest:guest#localhost:5672/",
"name": "krakend",
"exchange": "ApiGatewayExchange",
"durable": true,
"delete": false,
"exclusive": false,
"no_wait": false,
"auto_ack": false
}
}
}
]
}
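If you want to confirm whether the missing header is the culprit, a quick check on the service-c side (a minimal sketch using standard NestJS decorators; the controller name is illustrative) is to log everything that actually arrives:

import { Body, Controller, Headers, Post } from '@nestjs/common';

@Controller()
export class GreetedController {
  @Post('greeted')
  testHello(@Body() data: any, @Headers() headers: Record<string, string>) {
    // Without Content-Type: application/json, Nest's body parser leaves the
    // body empty, which would explain the {} you are seeing.
    console.log(headers);
    console.log(data);
    return data;
  }
}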
Hope this helps
My Copy Activity is set up to use a REST GET API call as my source. I keep getting Error Code 2200: Invalid PaginationRule RuleKey=supportRFC5988.
I can call the GET REST URL using the Web Activity, but this isn't optimal, as I then have to pass the output to a stored procedure to load the data into the table. I would much rather use the Copy Activity.
Any ideas why I would get an Invalid PaginationRule error on a call?
I'm using a REST Linked Service with the following properties:
Name: Workday
Connect via integration runtime: link-unknown-self-hosted-ir
Base URL: https://wd2-impl-services1.workday.com/ccx/service
Authentication type: Basic
User name: Not telling
Azure Key Vault for password
Server Certificate Validation is enabled
Parameters: Name:format Type:String Default value:json
Datasource:
{
"name": "Workday_Test_REST_Report",
"properties": {
"linkedServiceName": {
"referenceName": "Workday",
"type": "LinkedServiceReference",
"parameters": {
"format": "json"
}
},
"folder": {
"name": "Workday"
},
"annotations": [],
"type": "RestResource",
"typeProperties": {
"relativeUrl": "/customreport2/company1/person%40company.com/HIDDEN_BI_RaaS_Test_Outbound"
},
"schema": []
}
}
Copy Activity
{
"name": "Copy Test Workday REST API output to a table",
"properties": {
"activities": [
{
"name": "Copy data1",
"type": "Copy",
"dependsOn": [],
"policy": {
"timeout": "7.00:00:00",
"retry": 0,
"retryIntervalInSeconds": 30,
"secureOutput": false,
"secureInput": false
},
"userProperties": [],
"typeProperties": {
"source": {
"type": "RestSource",
"httpRequestTimeout": "00:01:40",
"requestInterval": "00.00:00:00.010",
"requestMethod": "GET",
"paginationRules": {
"supportRFC5988": "true"
}
},
"sink": {
"type": "SqlMISink",
"tableOption": "autoCreate"
},
"enableStaging": false
},
"inputs": [
{
"referenceName": "Workday_Test_REST_Report",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "Destination_db",
"type": "DatasetReference",
"parameters": {
"schema": "ELT",
"tableName": "WorkdayTestReportData"
}
}
]
}
],
"folder": {
"name": "Workday"
},
"annotations": []
}
}
Well, after posting this, I noticed that in the Copy Activity code there is a nugget: "supportRFC5988": "true". I switched the true to false, and everything just worked for me. I don't see a way to change this in the Copy Activity GUI.
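For reference, this is the source block from the activity above with only that flag flipped to false:
"source": {
    "type": "RestSource",
    "httpRequestTimeout": "00:01:40",
    "requestInterval": "00.00:00:00.010",
    "requestMethod": "GET",
    "paginationRules": {
        "supportRFC5988": "false"
    }
}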
Editing the source code and setting this option to false helped!
Following is the config.json that I'm using
{
"agent": {
"metrics_collection_interval": 300,
"run_as_user": "root"
},
"metrics": {
"append_dimensions": {
"AutoScalingGroupName": "${aws:AutoScalingGroupName}",
"ImageId": "${aws:ImageId}",
"InstanceId": "${aws:InstanceId}",
"InstanceType": "${aws:InstanceType}"
},
"metrics_collected": {
"disk": {
"measurement": [
"used_percent"
],
"metrics_collection_interval": 300,
"resources": [
"/"
]
},
"mem": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 300
}
}
}
}
But using this configuration I am receiving many metrics that I don't need (sample screenshot below).
I just need the disk_used_percent metric for device: rootfs and path: /
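One tweak I am considering (a sketch of just the disk section, based on the documented disk options; drop_device removes the device dimension so fewer metric streams are emitted) is:

"disk": {
    "measurement": [
        "used_percent"
    ],
    "metrics_collection_interval": 300,
    "resources": [
        "/"
    ],
    "drop_device": true
}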
When trying to use IPFS from my localhost I am having trouble accessing the IPFS service. I tried setting my config to accept localhost and all the server settings I could find, but nothing seems to work.
The error:
Failed to load http://127.0.0.1:5001/api/v0/files/stat?arg=0x6db883c6f3b2824d26f3b2e9c30256b490d125b10a3942f49a1ac715dd2def89&stream-channels=true: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:63342' is therefore not allowed access. The response had HTTP status code 403. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
IPFS Config:
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Origin": [
"*"
]
}
},
"Addresses": {
"API": "/ip4/127.0.0.1/tcp/5001",
"Announce": [],
"Gateway": "/ip4/127.0.0.1/tcp/8080",
"NoAnnounce": [],
"Swarm": [
"/ip4/0.0.0.0/tcp/4001",
"/ip6/::/tcp/4001"
]
},
"Bootstrap": [
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
"/dnsaddr/bootstrap.libp2p.io/ipfs/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
"/ip4/104.131.131.82/tcp/4001/ipfs/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
"/ip4/104.236.179.241/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip4/128.199.219.111/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip4/104.236.76.40/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip4/178.62.158.247/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd",
"/ip6/2604:a880:1:20::203:d001/tcp/4001/ipfs/QmSoLPppuBtQSGwKDZT2M73ULpjvfd3aZ6ha4oFGL1KrGM",
"/ip6/2400:6180:0:d0::151:6001/tcp/4001/ipfs/QmSoLSafTMBsPKadTEgaXctDQVcqN88CNLHXMkTNwMKPnu",
"/ip6/2604:a880:800:10::4a:5001/tcp/4001/ipfs/QmSoLV4Bbm51jM9C4gDYZQ9Cy3U6aXMJDAbzgu2fzaDs64",
"/ip6/2a03:b0c0:0:1010::23:1001/tcp/4001/ipfs/QmSoLer265NRgSp2LA3dPaeykiS1J6DifTC88f5uVQKNAd"
],
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": true,
"Interval": 10
}
},
"Experimental": {
"FilestoreEnabled": false,
"Libp2pStreamMounting": false,
"ShardingEnabled": false
},
"Gateway": {
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"localhost:63342"
]
},
"PathPrefixes": [],
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "QmRgQdig4Z4QNEqs5kp45bmq6gTtWi2qpN2WFBX7hFsenm"
},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Reprovider": {
"Interval": "12h",
"Strategy": "all"
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {
"GracePeriod": "20s",
"HighWater": 900,
"LowWater": 600,
"Type": "basic"
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false
}
}
Ben, try replacing 127.0.0.1 with localhost. go-ipfs whitelists localhost only. Also check https://github.com/ipfs/js-ipfs-api/#cors
My answer might come very late; however, I am trying to solve some CORS issues with IPFS on my end, so I might have a solution for you.
Try running:
# please update origin according to your setup...
origin=http://localhost:63342
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080","http://localhost:3000", "http://127.0.0.1:48084", "https://gateway.ipfs.io", "https://webui.ipfs.io"]'
ipfs config API.HTTPHeaders.Access-Control-Allow-Origin
and restarting your ipfs daemon; that might fix it.
If the "fetch" button on the following linked page works, you are all set! https://gateway.ipfs.io/ipfs/QmXkhGQNruk3XcGsidCzQbcNQ5a8oHWneHZXkPvWB26RbP/
This command works for me:
ipfs config --json API.HTTPHeaders.Access-Control-Allow-Origin '["'"$origin"'", "http://127.0.0.1:8080", "http://localhost:3000"]'
You can allow requests from multiple origins.
Here's the CloudFormation template I wrote to create a simple S3 bucket. How do I specify the name of the bucket? Is this the right way?
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "Simple S3 Bucket",
"Parameters": {
"OwnerService": {
"Type": "String",
"Default": "CloudOps",
"Description": "Owner or service name. Used to identify the owner of the vpc stack"
},
"ProductCode": {
"Type": "String",
"Default": "cloudops",
"Description": "Lowercase version of the product code (i.e. jem). Used for tagging"
},
"StackEnvironment": {
"Type": "String",
"Default": "stage",
"Description": "Lowercase version of the environment name (i.e. stage). Used for tagging"
}
},
"Mappings": {
"RegionMap": {
"us-east-1": {
"ShortRegion": "ue1"
},
"us-west-1": {
"ShortRegion": "uw1"
},
"us-west-2": {
"ShortRegion": "uw2"
},
"eu-west-1": {
"ShortRegion": "ew1"
},
"ap-southeast-1": {
"ShortRegion": "as1"
},
"ap-northeast-1": {
"ShortRegion": "an1"
},
"ap-northeast-2": {
"ShortRegion": "an2"
}
}
},
"Resources": {
"JenkinsBuildBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
"AccessControl": "Private"
},
"DeletionPolicy": "Delete"
}
},
"Outputs": {
"DeploymentBucket": {
"Description": "Bucket Containing Chef files",
"Value": {
"Ref": "DeploymentBucket"
}
}
}
}
Here's a really simple CloudFormation template that creates an S3 bucket, including defining the bucket name.
AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: sample-bucket-0827-cc
You can also leave the "Properties: BucketName" lines off if you want AWS to name the bucket for you. Then it will look like $StackName-SampleBucket-$uniqueIdentifier.
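For example, the same template without an explicit name (a minimal sketch; AWS derives the physical name from the stack name and the logical resource id):

AWSTemplateFormatVersion: '2010-09-09'
Description: create a single S3 bucket, letting AWS name it
Resources:
  SampleBucket:
    Type: AWS::S3::Bucket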
Hope this helps.
Your code has the BucketName already specified:
"BucketName": {
"Fn::Join": [
"-",
[
{
"Ref": "ProductCode"
},
{
"Ref": "StackEnvironment"
},
"deployment",
{
"Fn::FindInMap": [
"RegionMap",
{
"Ref": "AWS::Region"
},
"ShortRegion"
]
}
]
]
},
The BucketName is a string, and since you are using Fn::Join, it will be composed of the values you are joining.
"The intrinsic function Fn::Join appends a set of values into a single value, separated by the specified delimiter. If a delimiter is the empty string, the set of values are concatenated with no delimiter."
Your bucket name, if you don't change the defaults, is:
cloudops-stage-deployment-yourAwsRegion
If you change the default parameters, then both cloudops and stage can be changed; deployment is hard-coded; yourAwsRegion will be pulled from where the stack is running and will be returned in short format via the Mapping.
To extend cloudquiz's answer, this is what it'd look like in YAML format:
Resources:
  SomeS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName:
        Fn::Join: ["-", ["yourbucketname", {'Fn::Sub': '${AWS::Region}'}, {'Fn::Sub': '${Stage}'}]]