I would like to use AWS S3 to store my app's users' files securely.
I am based in the EU (UK), so my bucket's region is EU (Ireland). Based on the Noterious example in the Backand docs, and the snippet provided by the Backand dashboard, this is my custom File Upload action:
function backandCallback(userInput, dbRow, parameters, userProfile) {
    var data = {
        "key": "<my AWS key ID>",
        "secret": "<my secret key>",
        "filename": parameters.filename,
        "filedata": parameters.filedata,
        "region": "Ireland",
        "bucket": "<my bucket name>"
    };
    var response = $http({
        method: "PUT",
        url: CONSTS.apiUrl + "/1/file/s3",
        data: data,
        headers: { "Authorization": userProfile.token }
    });
    return response;
}
When testing the action in the Backand dashboard, I get this error: 417 The remote server returned an error: (500) Internal Server Error.: An error occurred, please try again or contact the administrator. Error details: Maximum number of retry attempts reached : 3.
With an American bucket and region: "US Standard", it works without error. So, similarly to this answer, I think this is because the AWS endpoint isn't correctly set up.
I have tried region: "EU", region: "Ireland", region: "eu-west-1" and similar combinations.
So - Is there any way to configure Backand to use AWS endpoints other than US Standard? (I'd have thought that would have been the whole point of setting the region.)
We have checked this issue, and apparently there is a difference in the security method AWS uses between the east coast (N. Virginia) and newer regions like Ireland.
This issue is scheduled for one of the next releases, and I will update here when resolved.
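In the meantime, for anyone wiring up S3 uploads directly, note that S3 expects the region identifier (eu-west-1 for Ireland) rather than a display name such as "Ireland" or "EU". A minimal sketch in Python with boto3, illustrating the region and signature configuration the server side has to get right (illustrative only, not Backand's internals; bucket and key names are placeholders):

import boto3
from botocore.config import Config

# The client must be created with the bucket's region identifier,
# e.g. eu-west-1 for Ireland, not a display name like "Ireland".
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    config=Config(signature_version="s3v4"),  # Signature Version 4
)

with open("local-file.txt", "rb") as f:
    s3.put_object(Bucket="my-bucket", Key="user-files/local-file.txt", Body=f)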
Enabling the distributed Map state and exporting the state transition history to S3 triggers an exception with the following error message:
An error occurred while executing the state 'Map' (entered at the event id #12). Failed to write a test manifest into the specified output bucket. | Message from S3: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint
This is my ResultWriter object definition:
// other keys omitted...
"ResultWriter": {
    "Resource": "arn:aws:states:::s3:putObject",
    "Parameters": {
        "Bucket": "sds-qa-nv",
        "Prefix": "distributed_excecutions/"
    }
}
I enabled "Export Map state results to Amazon S3" to save state transitions to S3, and I'm expecting the results to be saved to S3 without failing.
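For reference, that particular message from S3 usually indicates a region mismatch: the request went to an endpoint in a different region than the bucket. A quick diagnostic sketch in Python with boto3, assuming the bucket name from the definition above, to confirm where the bucket actually lives:

import boto3

# The state machine and the export bucket should be in the same region.
s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="sds-qa-nv")
# S3 reports us-east-1 as a null LocationConstraint
print(location.get("LocationConstraint") or "us-east-1")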
I've successfully cloudformed a Cognito identity pool, but could not see how to add the custom mappings to the "Cognito" "Authentication Providers" in CloudFormation.
Inside the Cognito Authentication Provider on the console, there is a dropdown where I manually have to select "Use custom mappings" and then I can manually add the mappings to my custom user attributes. However, I need to be able to cloudform this and am struggling to find the correct place for it.
The user pool that goes along with this identity pool has "SupportedIdentityProviders" set to "COGNITO".
Update
I can get a list of identities by running ...
aws cognito-identity list-identities --max-results 2 --identity-pool-id xx-xxxx-x:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx
and this returns me
{
    "IdentityPoolId": "xx-xxxx-x:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
    "Identities": [
        {
            "IdentityId": "yy-yyyy-y:yyyyyyyy-yyyy-yyyy-yyyyyyyyyy",
            "Logins": [
                "cognito-idp.eu-west-2.amazonaws.com/eu-west-2_tFT6FBwIO"
            ],
            "CreationDate": "2021-11-15T12:38:48.249000+00:00",
            "LastModifiedDate": "2021-11-15T12:38:48.263000+00:00"
        }
    ]
}
Using the "Logins" information, I can now run...
aws cognito-identity get-principal-tag-attribute-map --identity-pool-id xx-xxxx-x:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx --identity-provider-name "cognito-idp.eu-west-2.amazonaws.com/eu-west-2_tFT6FBwIO"
which returns
{
    "IdentityPoolId": "xx-xxxx-x:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
    "IdentityProviderName": "cognito-idp.eu-west-2.amazonaws.com/eu-west-2_tFT6FBwIO",
    "UseDefaults": false,
    "PrincipalTags": {
        "attr_x": "custom:attr_x",
        "attr_y": "custom:attr_y",
        "attr_z": "custom:attr_z"
    }
}
However, I still don't know how to set up this mapping via CloudFormation...
Regards
Mark.
Setting PrincipalTag attribute mappings is not yet supported in CloudFormation but, according to the CloudFormation roadmap, will be supported soon.
In the meantime, you would have to create a CloudFormation Custom Resource or Resource Provider to achieve this.
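As a rough illustration, the Lambda behind such a Custom Resource only needs one API call. A minimal sketch in Python with boto3 (assuming the standard cfnresponse helper available to inline CloudFormation Lambdas; the property names are hypothetical and mirror the CLI output above):

import boto3
import cfnresponse

client = boto3.client("cognito-identity")

def handler(event, context):
    props = event["ResourceProperties"]
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Same call the CLI exposes as set-principal-tag-attribute-map
            client.set_principal_tag_attribute_map(
                IdentityPoolId=props["IdentityPoolId"],
                IdentityProviderName=props["IdentityProviderName"],
                UseDefaults=False,
                PrincipalTags=props["PrincipalTags"],  # e.g. {"attr_x": "custom:attr_x"}
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as err:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(err)})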
We want to programmatically order a VSI using a flavor (for example, the Balanced type); however, instead of using the standard os_code, we want the VSI to be created from a public image template (i.e. CentOS7-ChangeStable). From the following doc it seems to be possible:
http://softlayer-python.readthedocs.io/en/latest/_modules/SoftLayer/managers/vs.html
However, I tried and got the following error:
SoftLayer.exceptions.SoftLayerAPIError: SoftLayerAPIError(SoftLayer_Exception_InvalidValue): Invalid value provided for 'blockDevices'. Block devices may not be provided when using an image template.
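The Python call is roughly the following (a sketch; credentials come from the environment, and the values mirror the slcli command shown below):

import SoftLayer

client = SoftLayer.create_client_from_env()
vs = SoftLayer.VSManager(client)

# Combining flavor and image_id is what triggers the error above.
instance = vs.create_instance(
    hostname="testvsi",
    domain="vmonic.local",
    datacenter="tok02",
    flavor="BL2_4X8X100",
    image_id="1cc8be72-f230-4ab9-b4b2-329c3e747853",
    private=True,
    hourly=True,
)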
Using slcli is failing as well with a different error:
# slcli vs create --hostname testvsi --domain vmonic.local --flavor BL2_4X8X100 --image 1cc8be72-f230-4ab9-b4b2-329c3e747853 --datacenter tok02 --private
This action will incur charges on your account. Continue? [y/N]: y
SoftLayerAPIError(SoftLayer_Exception_Public): Order is missing the following category: Operating System.
Please advise whether using "image_id" with "flavor" is supported in the SL API / Python API. Thanks!
This is an issue with the API. The Python client uses the http://sldn.softlayer.com/reference/services/softlayer_virtual_guest/createObject method to create the VSI; using RESTful, the same request would be something like this:
POST: https://$USERNAME:$APIKEY@api.softlayer.com/rest/v3.1/SoftLayer_Virtual_Guest/createObject
Payload:
{
    "parameters": [{
        "datacenter": {
            "name": "tok02"
        },
        "domain": "softlayer.local",
        "hourlyBillingFlag": true,
        "blockDeviceTemplateGroup": {
            "globalIdentifier": "1cc8be72-f230-4ab9-b4b2-329c3e747853"
        },
        "hostname": "rcabflav",
        "privateNetworkOnlyFlag": true,
        "supplementalCreateObjectOptions": {
            "flavorKeyName": "BL2_4X8X100"
        }
    }]
}
and you will get the same error. I reported this error to SoftLayer; if you want, you can submit a ticket and report it as well.
I control SoftLayer resources (server, storage, etc.) via the Java API.
I am verifying an upgrade of EVault storage space (20 GB => 40 GB) via the API, but the API returns an error message
"error": "EVault service already exists for the requested location (Seoul 1).",
"code": "SoftLayer_Exception_Public"
from the POST event
URL (POST): https://IBMxxxx:xxxxx@api.softlayer.com/rest/v3/SoftLayer_Product_Order/verifyOrder.json
Here is the attached request body
{"parameters":
[
{"complexType":"SoftLayer_Container_Product_Order"
,"orderContainers":[
{"complexType":"SoftLayer_Container_Product_Order_Network_Storage_Backup_Evault_Vault"
,"location":"1555995"
,"packageId":0
,"quantity":1
,"virtualGuests":[
{"complexType":"SoftLayer_Virtual_Guest"
,"id":376047
}
],
"useHourlyPricing":false
,"prices":[
{"complexType":"SoftLayer_Product_Item_Price","id":66257}
]
}
]
}
]
}
What you are doing with that request is ordering new EVault storage; besides, the item price set is for a 60 GB EVault disk capacity, not 40 GB.
UPDATE
Retrieve item prices only for eVault storage capacities.
https://IBMxxxx:xxxxx@api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/0/getItemPrices?objectMask=mask[id,locationGroupId,item[id,keyName,description],pricingLocationGroup[locations[id,name,longName]]]&objectFilter={"itemPrices":{"item": {"keyName":{"operation":"*=EVAULT"}}}}
Currently, to perform an upgrade, you need to use the method SoftLayer_Network_Storage::upgradeVolumeCapacity; please see the following request:
Perform a capacity upgrade on a specific EVault storage:
(URL POST) https://IBMxxxx:xxxxx@api.softlayer.com/rest/v3/SoftLayer_Network_Storage/eVaultId/upgradeVolumeCapacity
with the following request BODY:
{
    "parameters": [
        559
    ]
}
Do not forget to replace eVaultId in the request with the id of your EVault storage; try this REST request to retrieve the specific EVault id:
Retrieve an account's associated EVault storage volumes:
https://IBMxxxx:xxxxx@api.softlayer.com/rest/v3/SoftLayer_Account/getEvaultNetworkStorage?objectMask=mask[id, serviceResourceName,guestId,billingItem[id,location]]
Once obtained, you may specify an upgrade item (e.g. "itemId": 559, which might be the itemId for a 40 GB EVault disk).
To retrieve the upgrade itemIds for the different upgrade capacities allowed, use the following request:
https://IBMxxxx:xxxxx@api.softlayer.com/rest/v3/SoftLayer_Network_Storage/eVaultId/getObject?objectMask=mask[id, billingItem[id, upgradeItems[prices]]]
(don't forget to change the eVaultId).
Review the upgradeItems property and choose the capacity required; use the id value for the capacity you need in the upgradeVolumeCapacity call.
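Putting it together, a small sketch of the upgrade call in Python with the requests library (the credentials, EVault id, and price id below are placeholders; replace them with the values retrieved above):

import requests

user, api_key = "IBMxxxx", "xxxxx"   # your SoftLayer API credentials
evault_id = 1234567                  # from getEvaultNetworkStorage
upgrade_price_id = 559               # from upgradeItems

resp = requests.post(
    "https://api.softlayer.com/rest/v3/SoftLayer_Network_Storage/"
    f"{evault_id}/upgradeVolumeCapacity",
    auth=(user, api_key),
    json={"parameters": [upgrade_price_id]},
)
print(resp.status_code, resp.text)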
For more information about eVaults, see below:
Sample code to handle the upgrade of EVault?
How to find location of an EVault using SoftLayer API?
Sample code for ordering an EVault backup in SoftLayer
I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
    $('#fine-uploader').fineUploaderS3({
        request: {
            endpoint: "http://mybucket.s3.amazonaws.com",
            accessKey: "changeme"
        },
        signature: {
            endpoint: "endpoint.php"
        },
        uploadSuccess: {
            endpoint: "success.html"
        },
        template: 'qq-template'
    });
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where am I generating a key-pair on the S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys, you should have created two distinct IAM users: one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform other, more restricted operations (such as deleting files), you should use your server user's keys.
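For what it's worth, the signature endpoint's core job is simply to sign the policy document with the client-side secret key. A short Python sketch of the same HMAC step the PHP example performs for version 2 signatures (illustrative, not a drop-in replacement for endpoint.php):

import base64
import hashlib
import hmac
import json

def sign_policy(policy: dict, client_secret_key: str) -> dict:
    # Base64-encode the policy document, then HMAC-SHA1 it with the
    # client-side IAM user's secret key (the "clientPrivateKey").
    encoded_policy = base64.b64encode(json.dumps(policy).encode())
    signature = base64.b64encode(
        hmac.new(client_secret_key.encode(), encoded_policy, hashlib.sha1).digest()
    )
    return {"policy": encoded_policy.decode(), "signature": signature.decode()}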