I'm going crazy. I am trying to reproduce an example from the OpenLayers 2.10 Beginner's Guide: display features saved in a JSON file, add new features on the map, and save them back to the file.
var map;
function init(){
    map = new OpenLayers.Map('map');
    var options = {numZoomLevels: 3};
    var floorplan = new OpenLayers.Layer.Image(
        'Floorplan Map',
        'temp_photos/sample-floor-plan.jpg',
        new OpenLayers.Bounds(-180, -88.759, 180, 88.759),
        new OpenLayers.Size(580, 288),
        options
    );
    var roomPolygonLayer = new OpenLayers.Layer.Vector('Rooms', {
        protocol: new OpenLayers.Protocol.HTTP({
            url: "myFloorPlanData.json",
            format: new OpenLayers.Format.GeoJSON({})
        }),
        strategies: [new OpenLayers.Strategy.Fixed(), new OpenLayers.Strategy.Save()]
    });
    map.addLayers([floorplan, roomPolygonLayer]);
    map.zoomToMaxExtent();
    map.addControl(new OpenLayers.Control.EditingToolbar(roomPolygonLayer));
    map.layers[1].onFeatureInsert = function(feature){
        alert("feature id: " + feature.id);
        alert("feature geometry: " + feature.geometry);
    };
}
So far, the map is displayed and I can draw vectors on it; however, it refuses to display the two points I have in my JSON file, and it also won't save the new points I draw:
{
    "type": "FeatureCollection",
    "features": [
        {"type":"Feature","properties":{}, "geometry":{"type":"Point", "coordinates":[5, 63]}},
        {"type":"Feature","properties":{}, "geometry":{"type":"Point", "coordinates":[-48, 27]}}
    ]
}
The JSON file is in the same folder as my JSP file, and I am running the project on a server.
OpenLayers.Strategy.Save is not capable of modifying a JSON file directly. It only works through a web service that implements the WFS-T protocol.
You can install software that supports WFS-T, for example GeoServer, and then use OpenLayers.Strategy.Save in your OpenLayers application.
Another option is to create a custom web service that modifies the JSON file. You would then add some sort of Save button that calls your custom web service when clicked; a sketch of such a service follows below.
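To illustrate that second option, here is a minimal sketch of such a custom service. Flask is used purely for illustration, and the endpoint name and file path are assumptions rather than anything from the book:

import json
from flask import Flask, request

app = Flask(__name__)
DATA_FILE = "myFloorPlanData.json"  # the same file the vector layer reads

@app.route("/save-features", methods=["POST"])
def save_features():
    # Expect a GeoJSON FeatureCollection in the request body and
    # overwrite the data file with it.
    feature_collection = request.get_json(force=True)
    with open(DATA_FILE, "w") as f:
        json.dump(feature_collection, f)
    return {"status": "ok"}

On the client side, a Save button handler could serialize the layer with new OpenLayers.Format.GeoJSON().write(roomPolygonLayer.features) and POST the resulting string to this endpoint.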
I have a Chalice app that reads config data from a file in an S3 bucket. The file can change from time to time, and I want the app to immediately use the updated values, so I am using the on_s3_event decorator to reload the config file.
My code looks something like this (stripped way down for clarity):
CONFIG = {}

app = Chalice(app_name='foo')

@app.on_s3_event(bucket=S3_BUCKET, events=['s3:ObjectCreated:*'],
                 prefix='foo/')
def event_handler(event):
    _load_config()

def _load_config():
    # fetch json file from S3 bucket
    CONFIG['foo'] = some item from the json file...
    CONFIG['bar'] = some other item from the json file...

_load_config()

@app.route('/')
def home():
    # refer to CONFIG values here
My problem is that for a short while (maybe 5-10 minutes) after uploading a new version of the config file, the app still uses the old config values.
Am I doing this wrong? Should I not be depending on global state in a Lambda function at all?
So your design here is flawed.
When you create an S3 event in Chalice, it creates a separate Lambda function for that event. The CONFIG variable gets updated in the running instance of that Lambda function and in any new instances, but any other Lambdas in your Chalice app that are already running will just continue with their current settings until they are cleaned up and restarted.
If you cannot live with a config that is only changeable when you deploy your Lambda functions, you could use Redis or some other in-memory cache/DB, along the lines of the sketch below.
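A rough sketch of that approach, assuming an ElastiCache Redis endpoint reachable from all of the Lambda functions; the host name, cache key, and bucket name are placeholders, not part of the original app:

import json

import boto3
import redis
from chalice import Chalice

app = Chalice(app_name='foo')
s3 = boto3.client('s3')
cache = redis.Redis(host='my-config-cache.example.com', port=6379)

@app.on_s3_event(bucket='my-config-bucket', events=['s3:ObjectCreated:*'],
                 prefix='foo/')
def event_handler(event):
    # Push the fresh config into Redis so every Lambda instance sees the
    # update, not just the instance that handled this event.
    body = s3.get_object(Bucket=event.bucket, Key=event.key)['Body'].read()
    cache.set('app-config', body)

@app.route('/')
def home():
    # Read the config from the shared cache on each request instead of
    # relying on module-level global state inside a single instance.
    config = json.loads(cache.get('app-config') or '{}')
    return {'foo': config.get('foo')}

The trade-off is an extra network hop per request, which is usually acceptable for a small config blob.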
You should be using the .chalice/config.json file to store variables for your Chalice application. Those variables are exposed as environment variables and can be read with os.environ:
URL = os.environ['MYVAR']
Your config.json file might look like this:
{
    "version": "2.0",
    "app_name": "MyApp",
    "manage_iam_role": false,
    "iam_role_arn": "arn:aws:iam::************:role/Chalice",
    "lambda_timeout": 300,
    "stages": {
        "development": {
            "environment_variables": {
                "MYVAR": "foo"
            }
        },
        "production": {
            "environment_variables": {
                "MYVAR": "bar"
            }
        }
    },
    "lambda_memory_size": 2048
}
I am following this tutorial in the AWS AppSync docs.
It states:
With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.
However, I cannot get my file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details; specifically, it does not say that NewPostMutation.js needs to be changed.
I changed it in the following way:
import gql from 'graphql-tag';

export default gql`
mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput ) {
    addPost(
        author: $author
        title: $title
        url: $url
        content: $content
        file: $file
    ){
        __typename
        id
        author
        title
        url
        content
        version
    }
}
`
Yet even after implementing these changes, the file does not get uploaded...
There are a few moving parts under the hood that you need to have in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
    public
    private
}

input S3ObjectInput {
    bucket: String!
    region: String!
    localUri: String
    visibility: Visibility
    key: String
    mimeType: String
}

type S3Object {
    bucket: String!
    region: String!
    key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
    "version": "2017-02-28",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id),
    },
    #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
    #set( $file = $ctx.args.input.file )
    #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )
    "attributeValues": $util.toJson($attribs)
}
This is making the assumption that the S3 file object is a child field of a model attached to a DynamoDB datasource. Note that the call to $utils.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back.

This is where a field level resolver attached to a local datasource becomes necessary. In essence, you need to create a local datasource in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
    "version": "2017-02-28",
    "payload": {}
}

## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object - this way, when you do a query of the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 Object (either directly via S3 or using a CDN - that's really dependent on your configuration).
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client what AWS credentials to use to handle S3 uploads, e.g.:
const client = new AWSAppSyncClient({
    url: AppSync.graphqlEndpoint,
    region: AppSync.region,
    auth: {
        type: AUTH_TYPE.AWS_IAM,
        credentials: () => Auth.currentCredentials()
    },
    complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.
Just to add to the discussion: for mobile clients, Amplify (or the AWS console codegen) will encapsulate the mutation arguments into an input object, and the clients won't auto-upload the file if that encapsulation exists. So you can modify the mutation definition directly in the AWS console so that file: S3ObjectInput is one of the top-level calling parameters. This was still the behavior the last time I tested (Dec 2018) following the docs.
You would change to this calling structure:
type Mutation {
    createRoom(
        id: ID!,
        name: String!,
        file: S3ObjectInput,
        roomTourId: ID
    ): Room
}
Instead of autogenerated calls like:
type Mutation {
    createRoom(input: CreateRoomInput!): Room
}

input CreateRoomInput {
    id: ID
    name: String!
    file: S3ObjectInput
}
Once you make this change, both iOS and Android will happily upload your content if you do what @hatboyzero has outlined.
[Edit] I did a bit of research; supposedly this has been fixed in 2.7.3 (https://github.com/awslabs/aws-mobile-appsync-sdk-android/issues/11). They likely addressed iOS as well, but I didn't check.
I am trying to take a picture in my PhoneGap app and then use the FileTransfer plugin to upload it to my server. I am getting error code 1 but there is no other explanation - this is VERY frustrating. I have scoured every piece of documentation and blog known to man with no luck.
I am using a basic LAMP server and it continues to give me an http 500 code. I am 99.9% sure this error is specific to my server because I have tested this with a different web server of mine and the code works fine. Here is the response:
{"code":1,"source":"file:///storage/emulated/0/Android/data/io.cordova.xxappxx/cache/1477607161788.jpg","target":"https://server.com/php/uploadPhoto.php","http_status":500,"body":"\t","exception":"https://server.com/php/uploadPhoto.php"}
Below is my front-end js code:
function uploadPhoto(imageURI) {
    var options = new FileUploadOptions();
    options.fileKey = "file";
    options.fileName = imageURI.substr(imageURI.lastIndexOf('/') + 1);
    alert(options.fileName);
    options.mimeType = "image/jpeg";

    var params = {};
    params.value1 = sessionStorage.getItem("token");
    options.params = params;
    options.chunkedMode = false;
    options.headers = {Connection: "close"};

    var ft = new FileTransfer();
    ft.upload(imageURI, "https://servername.com/php/uploadPhoto.php", function(result){
        console.log(JSON.stringify(result));
    }, function(error){
        console.log(JSON.stringify(error));
    }, options, true);
}
And here is my back-end PHP code that is being called (uploadPhoto.php):
<?php
session_start();
header('Access-Control-Allow-Origin : *');
$new_image_name = "$userId.jpg";
move_uploaded_file($_FILES["file"]["tmp_name"], "/var/img/".$new_image_name);
?>
This ended up being an issue of image size. I was working on this project for a university and their servers have lots of security installed - one of these security configurations had a very small file upload size limit which was blocking the uploads. I discovered this by scanning some of the log files in the /var/log/ directory.
I am storing JSON blobs on Azure which I am accessing via XHR. While trying to load these blobs I am getting this error:
XMLHttpRequest cannot load http://myazureaccount.blob.core.windows.net/myjsoncontainer/myblob.json?json. Origin http://localhost is not allowed by Access-Control-Allow-Origin.
Is there any way to set the Access-Control-Allow-Origin header of a blob returned by azure?
Windows Azure Storage added CORS support on November 26, 2013: Cross-Origin Resource Sharing (CORS) Support for the Windows Azure Storage Services. More details and C#/JavaScript samples - Windows Azure Storage: Introducing CORS.
The CORS options can be set on a storage account using the WindowsAzure.Storage client library version 3.0.1.0 or later, available from NuGet, using something similar to the following pseudocode:
var storageAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=ABC;AccountKey=XYZ");
var blobClient = storageAccount.CreateCloudBlobClient();
var serviceProperties = blobClient.GetServiceProperties();

serviceProperties.Cors.CorsRules.Clear();
serviceProperties.Cors.CorsRules.Add(new CorsRule() {
    AllowedHeaders = { "..." },
    AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Head,
    AllowedOrigins = { "..." },
    ExposedHeaders = { "..." },
    MaxAgeInSeconds = 600
});

blobClient.SetServiceProperties(serviceProperties);
Not currently, but Scott Hanselman, Program Manager for Azure, confirmed on Feb 4th, 2013 that support for this is coming soon.
One of the MSDN blog posts was helpful here; it might help you all. The code I was missing was:
private static void ConfigureCors(ServiceProperties serviceProperties)
{
    serviceProperties.Cors = new CorsProperties();
    serviceProperties.Cors.CorsRules.Add(new CorsRule()
    {
        AllowedHeaders = new List<string>() { "*" },
        AllowedMethods = CorsHttpMethods.Put | CorsHttpMethods.Get | CorsHttpMethods.Head | CorsHttpMethods.Post,
        AllowedOrigins = new List<string>() { "*" },
        ExposedHeaders = new List<string>() { "*" },
        MaxAgeInSeconds = 1800 // 30 minutes
    });
}
It basically adds some CORS rules to the service properties, and I am then able to upload my files to the blob via a SAS URL.
Nope, they still haven't added this. You can set up a proxy on an Amazon EC2 instance that fetches the objects from the Azure CDN, then returns the data with the Access-Control-Allow-Origin header, which allows you to make the requests through your proxy (a sketch follows below). You can also temporarily cache things on the proxy to help with speed/performance (this solution obviously takes a hit there), but it's still not ideal.
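As a rough illustration of that proxy approach (Flask and the requests library are used here purely for illustration; the storage account URL and route are placeholders, and the caching mentioned above is left out):

import requests
from flask import Flask, Response

app = Flask(__name__)
AZURE_BASE = "http://myazureaccount.blob.core.windows.net"  # placeholder account

@app.route("/proxy/<path:blob_path>")
def proxy_blob(blob_path):
    # Fetch the blob server-side (no same-origin restriction applies here),
    # then return it to the browser with the CORS header added.
    upstream = requests.get("%s/%s" % (AZURE_BASE, blob_path))
    resp = Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type", "application/json"))
    resp.headers["Access-Control-Allow-Origin"] = "*"
    return resp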
You might try using JSONP.
The idea is that you define a callback function on your site that will receive the JSON content, and your JSON document becomes a JavaScript file that invokes your callback with the desired data. [Thomas Conté, August 2011]
To do this, create a document that wraps your JSON content in a JavaScript function call:
{ "key": "value", ... }
becomes
myFunc({ "key": "value", ... });
Now you're not loading JSON but JavaScript, and script tags are not subject to the same-origin policy. jQuery provides convenient methods for loading JSONP:
$.ajax({
    url: 'http://myazureaccount.blob.core.windows.net/myjsoncontainer/myblob.jsonp?jsonp',
    dataType: 'jsonp',
    jsonpCallback: 'myFunc',
    success: function (data) {
        // 'data' now has your JSON object already parsed
        // and converted to a JavaScript object.
    }
});
While JSONP works, I wouldn't recommend it. Read the first comment on this answer for specifics. I think the best way around this is to use CORS. Unfortunately, Azure doesn't support this. So if you can, I would change storage providers to one that does (Google Cloud Storage, for example).
Does anyone have an example of uploading a file to the server using ringojs?
There's a simple upload example in the demo app, but it stores uploads in memory, which is not a good idea for most apps. To save uploads to a temporary file, you'll currently have to do something like this (this is a modified version of the upload demo action):
var fu = require("ringo/webapp/fileupload");

function upload(req) {
    if (fu.isFileUpload(req.contentType)) {
        var params = {};
        fu.parseFileUpload(req, params, req.charset, fu.TempFileFactory);
        return {
            status: 200,
            headers: {"Content-Type": "text/plain"},
            body: [params.file.name, " saved to ", params.file.tempfile]
        };
    }
    return Response.skin(module.resolve('skins/upload.txt'), {
        title: "File Upload"
    });
}
Unfortunately, there was a bug with saving uploads to temp files that I just fixed, so you'll have to use a current git snapshot or patch file modules/ringo/webapp/fileupload.js manually:
http://github.com/ringo/ringojs/commit/1793a815a9ca3ffde4aa5a07c656456969b504f9
We also need some high level way of doing this for the next release (e.g. setting a req.uploadTempDir property). I'll open an issue for this.