Is there a way to connect AppSync directly to S3, to store a base64-encoded file?
I'm aware of the S3 PutObject API, but I don't understand how to achieve this using AppSync VTL.
Given the following chunk of my schema...
Mutation uploadFile(base64: String): Response
...and assuming I have an HTTP data source for connecting to my bucket through the S3 API... how can I put all the pieces together?
I've managed to get it working in the following way:
Schema:
type Mutation {
putFile(Key: ID!, base64: String!): File!
}
DataSource:
Type: HTTP
Endpoint: https://{bucket-name}.s3.amazonaws.com
Request mapping template:
#set( $content = $util.base64Decode($ctx.args.base64) )
{
"version": "2018-05-29",
"method": "PUT",
"params":{
"headers":{
"Content-Type":"application/json"
},
"body":$content
},
"resourcePath": "/$ctx.args.Key"
}
Response mapping template (simplified):
#set( $body = $util.parseJson($util.xml.toJsonString($ctx.result.body)))
#if ( $ctx.result.statusCode != 200 )
$util.error($body.Error.Message)
#end
$util.toJson({"Key":$ctx.args.Key})
Be aware that:
the request size limit of AppSync is 1 MB
you should probably handle the content type as an input argument (see the sketch below)
you also need to grant the created data source access to S3
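For illustration, a minimal client-side sketch of the content-type idea. Everything here is an assumption, not part of the solution above: it presumes the schema is extended to putFile(Key: ID!, base64: String!, contentType: String!), that the request mapping template reads $ctx.args.contentType into the Content-Type header, and that client is an AppSync/Apollo-compatible client:
import gql from 'graphql-tag';

const PUT_FILE = gql`
  mutation PutFile($Key: ID!, $base64: String!, $contentType: String!) {
    putFile(Key: $Key, base64: $base64, contentType: $contentType) {
      Key
    }
  }
`;

// base64Data must keep the whole request under AppSync's 1 MB limit.
client.mutate({
  mutation: PUT_FILE,
  variables: { Key: 'avatar.png', base64: base64Data, contentType: 'image/png' },
});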
I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
<RequestId>DKQ55DK3XJBYGKQ6</RequestId>
<HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
Type: AWS::Serverless::Function
Properties:
FunctionName: ExpressJSApp
Description: Main website request handler
CodeUri: ../lambda.zip
Handler: lambda.handler
[SNIP]
Policies:
- S3CrudPolicy:
BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');
router.get('/', (req, res) => {
const s3 = new AWS.S3({
region: 'eu-west-1',
signatureVersion: 'v4'
});
const params = {
'Bucket': 'my-bucket-name',
'Key': 'my-file-name'
};
s3.getSignedUrl('getObject', params, (error, url) => {
res.send(`<p>${url}</p>`)
});
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the signature v4 headers/form-data fields are in the correct order in your request.
I had the exact same issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security headers. In my case I was using the endpoint to create a presigned URL for POSTing a file to upload it. In that case, you need to make sure that the security-relevant fields in your form data appear in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL for uploading a file, it's important to place your file AFTER the security fields, as in the sketch below.
After that, the request works as expected.
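A minimal sketch of the idea in JavaScript, assuming presigned is the { url, fields } object returned by S3's createPresignedPost and fileBlob is the file to upload (both names are placeholders):
const form = new FormData();
// Append the security-relevant fields first, in the order listed above.
const ordered = ['key', 'x-amz-algorithm', 'x-amz-credential', 'x-amz-date',
                 'policy', 'x-amz-security-token', 'x-amz-signature'];
for (const name of ordered) {
  if (presigned.fields[name] !== undefined) {
    form.append(name, presigned.fields[name]);
  }
}
// The file must come AFTER the security fields.
form.append('file', fileBlob);
fetch(presigned.url, { method: 'POST', body: form })
  .then((resp) => console.log('upload status:', resp.status));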
I can't say for certain, but I'm guessing this may have something to do with you using the old SDK. Here it is with v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
'Bucket': 'my-bucket-name',
'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
// In v3, getSignedUrl returns a Promise instead of taking a callback.
getSignedUrl(client, command, { expiresIn: 3600 })
  .then((url) => res.send(`<p>${url}</p>`))
  .catch((error) => res.sendStatus(500));
I am trying to upload images to my S3 bucket when sending chat messages to my Aurora database, using AppSync with Lambda configured as its data source.
My resolver for the mutation is:
{
"version": "2017-02-28",
"operation": "Invoke",
"payload": {
"field": "createMessage",
"arguments": $utils.toJson($context.arguments)
}
}
The messages are being saved correctly in the database; however, the S3 image files are not being saved in my S3 bucket. I believe I have configured everything correctly except for the resolver, which I am not sure about.
Uploading files with AppSync when the data source is Lambda is basically the same as for every other data source; it does not depend on the resolvers.
Just make sure you have your credentials for complex objects set up (a JS example using the Amplify library for authorization):
import { Auth } from 'aws-amplify'
const client = new AWSAppSyncClient({
url: /*your endpoint*/,
region: /*your region*/,
complexObjectsCredentials: () => Auth.currentCredentials(),
})
You also need to provide an S3 complex object as an input type for your mutation:
input S3ObjectInput {
bucket: String!
key: String!
region: String!
localUri: String
mimeType: String
}
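For illustration, a hedged sketch of passing such an object from a JS client; the mutation name, bucket, and region here are hypothetical, and file is e.g. a File object from an <input type="file"> element:
import gql from 'graphql-tag';

client.mutate({
  mutation: gql`
    mutation CreateMessage($text: String!, $file: S3ObjectInput) {
      createMessage(text: $text, file: $file) { id }
    }
  `,
  variables: {
    text: 'hello',
    file: {
      bucket: 'my-uploads-bucket',   // hypothetical bucket
      key: `images/${file.name}`,
      region: 'us-east-1',
      localUri: file,                // the client uploads this to S3
      mimeType: file.type,
    },
  },
});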
Everything else will work just fine, even with a Lambda data source. Here you can find more information related to your question (in that example DynamoDB is used, but it is basically the same for Lambda): https://stackoverflow.com/a/50218870/9359164
I am following this tutorial in the AWS AppSync docs.
It states:
With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.
However, I cannot get my file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details; more specifically, it does not say that NewPostMutation.js needs to be changed.
I changed it in the following way:
import gql from 'graphql-tag';
export default gql`
mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput ) {
addPost(
author: $author
title: $title
url: $url
content: $content
file: $file
){
__typename
id
author
title
url
content
version
}
}
`
Yet, even after implementing these changes, the file does not get uploaded...
There are a few moving parts under the hood that you need to make sure are in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
public
private
}
input S3ObjectInput {
bucket: String!
region: String!
localUri: String
visibility: Visibility
key: String
mimeType: String
}
type S3Object {
bucket: String!
region: String!
key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
"version": "2017-02-28",
"operation": "PutItem",
"key": {
"id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id),
},
#set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
#set( $file = $ctx.args.input.file )
#set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )
"attributeValues": $util.toJson($attribs)
}
This is making the assumption that the S3 file object is a child field of a model attached to a DynamoDB data source. Note that the call to $utils.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back.
This is where a field-level resolver attached to a local data source becomes necessary. In essence, you need to create a local data source in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
"version": "2017-02-28",
"payload": {}
}
## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object - this way, when you do a query of the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 Object (either directly via S3 or using a CDN - that's really dependent on your configuration).
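For example, a minimal sketch of building a plain S3 URL from those properties (a real app might use a CDN domain or a presigned URL instead):
// Build a virtual-hosted-style S3 URL from the S3Object returned by the query.
// (Assumes the key needs no special URL-encoding.)
const fileUrl = ({ bucket, region, key }) =>
  `https://${bucket}.s3.${region}.amazonaws.com/${key}`;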
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client what AWS credentials to use to handle S3 uploads, e.g.:
const client = new AWSAppSyncClient({
url: AppSync.graphqlEndpoint,
region: AppSync.region,
auth: {
type: AUTH_TYPE.AWS_IAM,
credentials: () => Auth.currentCredentials()
},
complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.
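To round this out, a hedged sketch of querying the model back, assuming the same client/gql setup as above; getPost is a hypothetical query field, but the file field resolves to an S3Object thanks to the field-level resolver described earlier:
client.query({
  query: gql`
    query GetPost($id: ID!) {
      getPost(id: $id) {
        id
        file { bucket region key }
      }
    }
  `,
  variables: { id: postId },
}).then(({ data }) => {
  // data.getPost.file is now { bucket, region, key }, ready for URL building.
});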
Just to add to the discussion: for mobile clients, Amplify (or the AWS console, if generating from there) will encapsulate the mutation arguments into an input object. The clients won't auto-upload if that encapsulation exists. So you can modify the mutation call directly in the AWS console so that the file: S3ObjectInput argument is one of the top-level calling parameters. This was still the behavior the last time I tested (Dec 2018) following the docs.
You would change to this calling structure:
type Mutation {
createRoom(
id: ID!,
name: String!,
file: S3ObjectInput,
roomTourId: ID
): Room
}
Instead of autogenerated calls like:
type Mutation {
createRoom(input: CreateRoomInput!): Room
}
input CreateRoomInput {
id: ID
name: String!
file: S3ObjectInput
}
Once you make this change, both iOS and Android will happily upload your content if you do what @hatboyzero has outlined.
[Edit] I did a bit of research; supposedly this was fixed in 2.7.3: https://github.com/awslabs/aws-mobile-appsync-sdk-android/issues/11. They likely addressed iOS as well, but I didn't check.
I have a fake API for testing on the frontend side.
I have seen that an id is required to PUT or POST your data with the json-server package. My question is: can I use a different key instead of id? For example:
{
id: 1, ---> I want to change this to my custom id
name: 'Test'
}
Let's look at the CLI options of the json-server package:
$ json-server -h
...
--id, -i Set database id property (e.g. _id) [default: "id"]
...
Let's try starting json-server with a new id called 'customId' (for example):
json-server --id customId testDb.json
Structure of the testDb.json file:
$ cat testDb.json
{
"messages": [
{
"customId": 1,
"description": "somedescription",
"body": "sometext"
}
]
}
Make a simple POST request via the $.ajax function (or via Fiddler/Postman/etc.). The Content-Type of the request should be set to application/json; an explanation may be found on the project's GitHub page:
A POST, PUT or PATCH request should include a Content-Type: application/json header to use the JSON in the request body. Otherwise it will result in a 200 OK but without changes being made to the data.
So... make a request from the browser:
$.ajax({
  type: "POST",
  url: 'http://127.0.0.1:3000/messages/',
  contentType: 'application/json',  // required, per the note above
  data: JSON.stringify({body: 'body', description: 'description'}),
  success: resp => console.log(resp),
  dataType: 'json'
});
Go to testDb.json and see the results: a new chunk was added, and the id was automatically added under the desired name specified with the --id key of the console command.
{
"body": "body",
"description": "description",
"customId": 12
}
Voila!
I came up with using custom routes for cases where I need a custom id:
json-server --watch db.json --routes routes.json
routes.json:
{ "/customIDroute/:cusomID" : "/customIDroute?cusomID=:cusomID" }
If you start your server using a server.js file (read more about it in the docs), you can define custom ID routes in server.js like this
// server.js
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')
const middlewares = jsonServer.defaults()
server.use(middlewares)
// custom routes
server.use(jsonServer.rewriter({
"/route/:id": "/route?customId=:id"
}))
server.use(router)
server.listen(3000, () => {
console.log('JSON Server is running')
})
And you would start your server with command:
node server.js
Internally, getById from lodash-id is used.
If you use the file-based server version, the equivalent of the CLI flag --id, -i is:
router.db._.id = "customId";
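For placement, a minimal sketch assuming the same server.js structure shown above:
const jsonServer = require('json-server')
const server = jsonServer.create()
const router = jsonServer.router('db.json')

// Equivalent of `json-server --id customId`: set the id property on the
// underlying lodash-id instance before mounting the router.
router.db._.id = 'customId'

server.use(jsonServer.defaults())
server.use(router)
server.listen(3000)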
If you want to do this per resource, you can do it with a middleware like this (put it before the others):
server.use((req, res, next) => {
if (req.url.includes("/resourceName/")) {
router.db._.id = "code";
} else {
router.db._.id = "pk";
}
next();
});
I am storing JSON blobs on Azure, which I am accessing via XHR. While trying to load these blobs, I am getting this error:
XMLHttpRequest cannot load http://myazureaccount.blob.core.windows.net/myjsoncontainer/myblob.json?json. Origin http://localhost is not allowed by Access-Control-Allow-Origin.
Is there any way to set the Access-Control-Allow-Origin header of a blob returned by azure?
Windows Azure Storage added CORS support on November 26, 2013: Cross-Origin Resource Sharing (CORS) Support for the Windows Azure Storage Services. More details and C#/JavaScript samples - Windows Azure Storage: Introducing CORS.
The CORS options can be set on a storage account using the WindowsAzure.Storage client library version 3.0.1.0 or later, available from NuGet, using something similar to the following pseudocode:
var storageAccount = CloudStorageAccount.Parse(
"DefaultEndpointsProtocol=https;AccountName=ABC;AccountKey=XYZ");
var blobClient = storageAccount.CreateCloudBlobClient();
var serviceProperties = blobClient.GetServiceProperties();
serviceProperties.Cors.CorsRules.Clear();
serviceProperties.Cors.CorsRules.Add(new CorsRule() {
AllowedHeaders = { "..." },
AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Head,
AllowedOrigins = { "..." },
ExposedHeaders = { "..." },
MaxAgeInSeconds = 600
});
blobClient.SetServiceProperties(serviceProperties);
Not currently, but Scott Hanselman, Program Manager for Azure, confirmed on Feb 4th, 2013 that support for this is coming soon.
One helpful MSDN blog post; it might help you all.
The code I was missing was:
private static void ConfigureCors(ServiceProperties serviceProperties)
{
serviceProperties.Cors = new CorsProperties();
serviceProperties.Cors.CorsRules.Add(new CorsRule()
{
AllowedHeaders = new List<string>() { "*" },
AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Post,
AllowedOrigins = new List<string>() { "*" },
ExposedHeaders = new List<string>() { "*" },
MaxAgeInSeconds = 1800 // 30 minutes
});
}
It basically adds some rules to the SAS URL, and I am able to upload my files to the blob.
Nope, they still haven't added this. You can set up a proxy on an Amazon EC2 instance that fetches the objects from the Azure CDN, then returns the data with the Access-Control-Allow-Origin header, which allows you to make the requests through the proxy. You can also temporarily cache things on the proxy to help with speed/performance (this solution obviously takes a hit there), but it's still not ideal.
You might try using JSONP.
The idea is that you define a callback function on your site that will receive the JSON content, and your JSON document becomes a JavaScript file that invokes your callback with the desired data. [Thomas Conté, August 2011]
To do this, create a document that wraps your JSON content in a JavaScript function call:
{ "key": "value", ... }
becomes
myFunc({ "key": "value", ... });
Now you're not loading JSON but JavaScript, and script tags are not subject to the Same-Origin Policy. jQuery provides convenient methods for loading JSONP:
$.ajax({
url: 'http://myazureaccount.blob.core.windows.net/myjsoncontainer/myblob.jsonp?jsonp',
dataType: 'jsonp',
jsonpCallback: 'myFunc',
success: function (data) {
// 'data' now has your JSON object already parsed
// and converted to a JavaScript object.
}
});
While JSONP works, I wouldn't recommend it; read the first comment on this answer for specifics. I think the best way around this is to use CORS. Unfortunately, Azure doesn't support this. So if you can, I would change storage providers to one that does (Google Cloud Storage, for example).