I am attempting to upload a file to S3 following the examples provided in your documentation and source files. Unfortunately, I'm receiving the following errors when attempting an upload:
[Fine Uploader 5.3.2] Invalid policy document or request headers!
[Fine Uploader 5.3.2] Policy signing failed. Invalid policy document or request headers!
I found a few posts on here with similar errors, but those solutions didn't help me.
Here is my jQuery:
<script>
$('#fine-uploader').fineUploaderS3({
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        accessKey: "changeme"
    },
    signature: {
        endpoint: "endpoint.php"
    },
    uploadSuccess: {
        endpoint: "success.html"
    },
    template: 'qq-template'
});
</script>
(Please note that I changed the keys/bucket names for security's sake.)
I used your endpoint-cors.php as a model and have included the portions that I modified here:
require 'assets/aws/aws-autoloader.php';
use Aws\S3\S3Client;
// These assume you have the associated AWS keys stored in
// the associated system environment variables
$clientPrivateKey = $_ENV['changeme'];
// These two keys are only needed if the delete file feature is enabled
// or if you are, for example, confirming the file size in a successEndpoint
// handler via S3's SDK, as we are doing in this example.
$serverPublicKey = $_ENV['AWS_SERVER_PUBLIC_KEY'];
$serverPrivateKey = $_ENV['AWS_SERVER_PRIVATE_KEY'];
// The following variables are used when validating the policy document
// sent by the uploader.
$expectedBucketName = $_ENV['mybucket'];
// $expectedMaxSize is the value you set for the sizeLimit property of the
// validation option. We assume it is `null` here. If you are performing
// validation, then change this to match the integer value you specified;
// otherwise your policy document will be invalid.
// http://docs.fineuploader.com/branch/develop/api/options.html#validation-option
$expectedMaxSize = (isset($_ENV['S3_MAX_FILE_SIZE']) ? $_ENV['S3_MAX_FILE_SIZE'] : null);
I also changed this:
// Only needed in cross-origin setups
function handleCorsRequest() {
    // If you are relying on CORS, you will need to adjust the allowed domain here.
    header('Access-Control-Allow-Origin: http://test.mydomain.com');
}
The POST seems to work:
POST http://test.mydomain.com/somepath/endpoint.php 200 OK
318ms
...but that's where the success ends.
I think part of the problem is that I'm not sure what to enter for "clientPrivateKey". Is that my "Secret Access Key" I set up with IAM?
And I'm definitely unclear on where I get the serverPublicKey and serverPrivateKey. Where do I generate a key pair in S3? I've combed through the docs, and perhaps I missed it.
Thank you in advance for your assistance!
First off, you are using endpoint-cors.php in a non-CORS environment. Communication between the browser and your endpoint appears to be same-origin, based on the URL of your signature endpoint. Switch to the endpoint.php example.
Regarding your questions about the keys: you should create two distinct IAM users, one for client-side operations (heavily restricted) and one for server-side operations (an admin user). For each user, you'll have an access key (public) and a secret key (private). You always supply Fine Uploader with your client-side public key, and use your client-side private key to sign requests server-side. To perform more privileged operations (such as deleting files), you should use your server user's keys.
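For example, the client side would end up looking something like this (a minimal sketch; the key value shown is a placeholder, and only the public access key ID ever appears in the browser):
$('#fine-uploader').fineUploaderS3({
    request: {
        endpoint: "http://mybucket.s3.amazonaws.com",
        // Access key ID (public) of the restricted client-side IAM user
        accessKey: "AKIA_CLIENT_ACCESS_KEY_ID"
    },
    signature: {
        // endpoint.php signs policy documents with that same user's
        // *secret* key, which stays on the server in an environment variable
        endpoint: "endpoint.php"
    }
});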
I have two flat Feed Groups: main, the primary news feed, and main_topics.
I can make a post to either one successfully.
But when I try to 'cc' the other using the to field, like to: ["main_topics:donuts"], I get:
code: 17
detail: "You do not have permission to do this, you got this error because there are no policies allowing this request on this application. Please consult the documentation https://getstream.io/docs/"
duration: "0.16ms"
exception: "NotAllowedException"
status_code: 403
Log:
The request didn't have the right permissions or autorization. Please check our docs about how to sign requests.
We're generating user tokens server-side, and the token works to read and write to both groups without to.
// on server
stream_client.user(user.user_id).create({
name: user.name,
username: user.username,
});
Post body:
actor: "SU:5f40650ad9b60a00370686d7"
attachments: {images: [], files: []}
foreign_id: "post:1598391531232"
object: "Newsfeed"
text: "Yum #donuts"
time: "2020-08-25T14:38:51.232"
to: ["main_topics:donuts", "main_topics:all"]
verb: "post"
The docs show an example with to: ['team:barcelona', 'match:1'], and say you need to create the feed groups in the panel, but mention nothing about setting up specific permissions to use this feature.
Any idea why this would happen? Note that I'm trying to create the new topics (donuts, all) which don't exist when this post is made. However, the docs don't specify that feeds need to be explicitly created first - maybe that's the missing piece?
If you haven't already tried creating the feed first, then try that. Beyond that, the default permissions restrict a user from posting on another user's feed. I think it's acceptable to do this if it's a notification feed, but not a user or timeline feed.
You can email Getstream support to change the default permissions, because these are not manageable from the dashboard.
Or you can make this call server-side, with admin permissions, as sketched below.
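A minimal sketch of the server-side route, assuming the getstream Node SDK (the key, secret, app id, and user object are placeholders; the feed names and activity fields are taken from your post):
const stream = require('getstream');

// A client initialized with the API *secret* acts with admin permissions,
// so user-level policies can't block the `to` targets.
const client = stream.connect(API_KEY, API_SECRET, APP_ID);

client.feed('main', user.user_id).addActivity({
    actor: 'SU:' + user.user_id,
    verb: 'post',
    object: 'Newsfeed',
    text: 'Yum #donuts',
    to: ['main_topics:donuts', 'main_topics:all']
});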
I am using ABCPdf 11 to convert HTML to PDF. The HTML page that needs to be converted requires a JWT token, so that token needs to be passed to ABCChrome so it can use it.
I have tried the following but the auth still fails:
doc.HtmlOptions.HttpAdditionalHeaders = $"Authorization: Bearer {accessToken}";
I followed example from here: https://www.websupergoo.com/helppdfnet/default.htm?page=source%2F5-abcpdf%2Fxhtmloptions%2F2-properties%2Fhttpadditionalheaders.htm
From the description in the above URL, I have also tried the below options:
doc.HtmlOptions.NoCookie = true;
doc.HtmlOptions.Media = MediaType.Screen;
After adding HttpAdditionalHeaders, I get a 401 HTTP status code back from the PDF library, which confirms the authorization failure:
var imageId = doc.AddImageUrl(model.Url);
var status = doc.HtmlOptions.ForChrome.GetHttpStatusCode(imageId);
The status here is 401 - unauthorized
The HttpAdditionalHeaders property is not currently supported by the ABCChrome Engine. The only HtmlOptions supported by ABCChrome are specified here.
There are a few things you could try:
Check whether the target server supports sending the web token via GET request parameters - I guess you've probably done this already :-)
Point the AddImageUrl request at an intermediary web server (even a local HttpServer) running a script which can fetch the page for you based on GET parameters.
If the service you are attempting to access accepts AJAX requests, you could try using JavaScript to inject the response into a page using XMLHttpRequest.setRequestHeader(), as in the sketch below. NB: if you use a local file (e.g. file://) for this you may come across some Chromium-enforced JavaScript security issues.
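For that third option, a loader page along these lines could fetch the protected HTML with the token and inject it before ABCPdf renders the page (a rough sketch; the URL and the accessToken variable are placeholders you'd supply yourself):
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/protected-page', true);
xhr.setRequestHeader('Authorization', 'Bearer ' + accessToken);
xhr.onload = function () {
    if (xhr.status === 200) {
        // Swap the loader page's content for the fetched HTML
        document.open();
        document.write(xhr.responseText);
        document.close();
    }
};
xhr.send();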
I do know that WebSupergoo offer free support for all their licenses, including trial licenses.
Good luck.
I emailed ABCPdf support, and unfortunately ABCChrome does not support the HttpAdditionalHeaders property, so the workaround is to download the HTML ourselves and convert that to PDF; see the example below:
var imageId = doc.AddImageHtml(html); // <- html downloaded from auth url
Also don't forget to add paging:
// add all pages to pdf
while (doc.Chainable(imageId))
{
doc.Page = doc.AddPage();
imageId = doc.AddImageToChain(imageId);
}
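// Flatten each page once all chained content has been added
// (as in the standard ABCPdf paging example)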
for (int i = 1; i <= doc.PageCount; i++)
{
doc.PageNumber = i;
doc.Flatten();
}
I developed a chrome extension using Rally's WSAPI v2.0, and it basically does the following things:
get user and project, and store them
get the current iteration every time
send a POST request to create a work item
For the THIRD step, I sometimes get the error ["Not authorized to perform action: Invalid key"], starting at the end of last month.
[updated] The error can be reproduced every time if I log in to the Rally website via SSO before using the extension to send requests via API key.
What's the best practice to send subsequent requests via apikey in my extension since I can't control end users' habits?
I did see some similar posts but none of them is helpful... and in case it helps:
I'm adding ZSESSIONID: apikey in my request header, instead of user/password, to authenticate, so I believe no security token is needed (https://comm.support.ca.com/kb/api-key-and-oauth-client-faq/kb000011568)
the URL starts with https://rally1.rallydev.com/slm/webservice/v2.0/
the issue is fixed after clearing cookies for https://rally1.rallydev.com/, but somehow it appears again some time later
I checked the cookies when the issue was reproduced, and found one named ZSESSIONID whose value had become something other than the API key. Not sure if that matters though...
code for request:
function initXHR(method, url, apikey, cbFunc) {
    let httpRequest = new XMLHttpRequest();
    ...
    httpRequest.open(method, url);
    httpRequest.setRequestHeader('Content-Type', 'application/json');
    httpRequest.setRequestHeader('Accept', 'application/json');
    httpRequest.setRequestHeader('ZSESSIONID', apikey);
    httpRequest.onreadystatechange = function() {
        ...
    };
    return httpRequest;
}
...
usReq = initXHR('POST', baseURL + 'hierarchicalrequirement/create', apikey, function() {...});
Anyone has any idea / suggestion? Thanks a million!
I've seen this error when the API key had both read-only and full-access grants configured. I would start by making sure your key only has the full-access grant.
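Separately, your cookie observation suggests the SSO login's ZSESSIONID cookie may be taking precedence over the header you set. If so (this is an assumption, not documented Rally behavior), keeping cookies off the request entirely is worth a try, e.g. with fetch instead of XMLHttpRequest:
fetch('https://rally1.rallydev.com/slm/webservice/v2.0/hierarchicalrequirement/create', {
    method: 'POST',
    credentials: 'omit', // never attach rally1.rallydev.com cookies, incl. a stale ZSESSIONID
    headers: {
        'Content-Type': 'application/json',
        'Accept': 'application/json',
        'ZSESSIONID': apikey // apikey and payload are your existing values
    },
    body: JSON.stringify(payload)
}).then(function (res) { return res.json(); });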
I wrote a request filter for geoIP localization. It works like this: I request an external service for the localization and then write the information into the JCR, into a dedicated workspace for caching/storage.
On the author instance this works, but on the public instance I constantly get an AccessDeniedException. I probably need to authenticate with the JCR, and I tried that too, using the credentials from the magnolia.properties file:
magnolia.connection.jcr.userId = username
magnolia.connection.jcr.password = password
And this code for authentication:
Session session = MgnlContext.getJCRSession(WORKSPACE_IP_ADDRESSES);
session.impersonate(new SimpleCredentials("username", "password".toCharArray()));
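// Note: impersonate() returns a *new* Session; the return value is discarded
// here, so this call does not change the authentication of `session`.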
I have this XML to bootstrap the filter, and a FilterOrderingTask configured as follows:
tasks.add(new FilterOrderingTask("geoIp", new String[] { "contentType", "login", "logout", "csrfSecurity",
"range", "cache", "virtualURI" }));
What am I missing?
What would be the proper way to write into the JCR in Magnolia on a public instance?
Yeah, that could not work :D
Is your filter configured in Magnolia's filter chain or directly in web.xml? It needs to live in the filter chain, and it needs to be configured somewhere down the chain after the security filters so that the user is already authenticated.
Then you can simply call MgnlContext.getJCRSession("workspace_name") to get access to the repo and do whatever you need.
HTH,
Jan
I'd like to ask a newbie question. By setting the API key in JavaScript, wouldn't anyone be able to read the source and use the key freely?
Edited:
Extracted from the jsFiddle code:
filepicker.setKey('8PbzrhP9Tr2r6wPlSqzS');
/* Unsecured */
/*
filepicker.pick(function(fpfile){
console.log(fpfile);
});
filepicker.read(fpfile, function(contents){
console.log(contents);
})
*/
var fpfile = {'url': 'https://www.filepicker.io/api/file/KW9EJhYtS6y48Whm2S6D'};
var policy = 'eyJoYW5kbGUiOiJLVzlFSmhZdFM2eTQ4V2htMlM2RCIsImV4cGlyeSI6MTUwODE0MTUwNH0=';
var signature = '4098f262b9dba23e4766ce127353aaf4f37fde0fd726d164d944e031fd862c18';
filepicker.read(fpfile, {policy: policy, signature:signature}, function(contents){
console.log(contents);
})
filepicker.pick({policy: policy, signature:signature}, function(fpfile){
console.log(fpfile);
});
How does it prevent anyone from using the key alone to upload or read/download files from my account?
Because browser-side JavaScript is always publicly readable, your API key will be visible to others. To increase the protection of your API key beyond simple URL/hostname checking, you can use the rich policy-based security API we provide: https://developers.filepicker.io/docs/security/
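For instance, a policy/signature pair like the one in your snippet is generated server-side, where the app secret stays hidden. A minimal Node sketch (APP_SECRET is a placeholder, the handle is from your example, and check the security docs for the exact Base64 variant and policy fields Filepicker expects):
const crypto = require('crypto');

// Policy: Base64-encoded JSON naming what the holder may do, and until when
const policy = Buffer.from(JSON.stringify({
    handle: 'KW9EJhYtS6y48Whm2S6D',               // restrict to this one file
    expiry: Math.floor(Date.now() / 1000) + 3600  // valid for one hour
})).toString('base64');

// Signature: HMAC-SHA256 of the policy, keyed with the app secret
const signature = crypto
    .createHmac('sha256', APP_SECRET)
    .update(policy)
    .digest('hex');

// Hand only `policy` and `signature` to the browser code.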
Java bytecode is easily reversible with a trivial amount of work. If you need to have a key in the app, pull it from a website over TLS and store it in RAM, or at the very least XOR it with a password.