Invalid according to policy: policy condition failed: ["starts-with", "$content-type", ""]

I am trying to upload a photo using Fine Uploader 3.8.2 on a Sony Xperia Tipo or HTC Evo 3D with Android 4.0 and am facing a strange issue. Uploading through the camera works, but uploading through the gallery does not, and gives me the error:
invalid according to policy: policy condition failed: ["starts-with", "$content-type", ""]
My uploader configuration (CoffeeScript) is:
$('#fineuploader-s3').fineUploaderS3({
  request: {
    endpoint: "http://mybucket.s3.amazonaws.com",
    accessKey: "MYACCESSKEY"
  },
  signature: {
    endpoint: "myendpoint"
  },
  objectProperties: {
    acl: 'public-read',
    key: =>
      uploaded_image_key = qq.getUniqueId()
      return "#{uploaded_image_key}.png"
  },
  iframeSupport: {
    localBlankPagePath: "/myiframe.html"
  },
  text: {
    uploadButton: '<div><i class="icon-upload"></i> Upload Image</div>'
  },
  uploadSuccess: {
    endpoint: null
  },
  template: 'mytemplate',
  camera: {
    ios: true
  },
  multiple: false,
  retry: {
    showButton: true
  },
  validation: {
    allowedExtensions: ["gif", "jpeg", "jpg", "png"],
    acceptFiles: "image/gif, image/jpeg, image/png"
  },
  chunking: {
    enabled: true
  },
  resume: {
    enabled: true
  }
}).on('complete', (event, id, fileName, responseJSON) =>
  if responseJSON.success
    $('#el').find('#thumb_pics').append("<img class='thumb' src='http://s3.amazonaws.com/mybucket/#{uploaded_image_key}.png' title='#{fileName}' />")
    $('#submit_feedpost').prop('disabled', false)
).on('error', (event, id, fileName, errorReason, xhr) =>
  $('#submit_feedpost').prop('disabled', false)
  alert(errorReason)
)
My policy is like this:
'{
  "expiration": "myexpirationdate",
  "conditions": [
    {"bucket": "mybucket"},
    ["starts-with", "$key", ""],
    {"acl": "public-read"},
    {"success_action_status": "200"},
    ["starts-with", "$Content-Type", ""],
    ["starts-with", "$x-amz-meta-qqfilename", ""]
  ]
}'
My CORS configuration is like this:
<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>GET</AllowedMethod>
  <AllowedMethod>POST</AllowedMethod>
  <AllowedMethod>PUT</AllowedMethod>
  <AllowedHeader>*</AllowedHeader>
</CORSRule>

Sorry to trouble you. I found the issue: I was signing my own policy rather than signing the policy document that the Fine Uploader library sends in the POST request to my signature endpoint. Thanks a lot anyway.
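For anyone hitting the same error, here is a minimal sketch of what that signature endpoint needs to do for simple (non-chunked) uploads, assuming a Node/Express server and AWS signature version 2; the endpoint path and secret variable are placeholders. The key point is to base64-encode and sign the exact policy document Fine Uploader POSTs to you, not a policy you build yourself.
// Hypothetical Express signature endpoint for Fine Uploader S3 (signature v2).
// It signs the policy document sent by the library instead of a local one.
const express = require('express');
const crypto = require('crypto');

const app = express();
// Fine Uploader POSTs the policy as a JSON body; adjust the parser's `type`
// option if your version sends a different Content-Type.
app.use(express.json({ type: '*/*' }));

const AWS_SECRET_KEY = process.env.AWS_SECRET_KEY; // placeholder

app.post('/myendpoint', (req, res) => {
  // Base64-encode the policy exactly as received...
  const policy = Buffer.from(JSON.stringify(req.body)).toString('base64');
  // ...and HMAC-SHA1 sign that base64 string with the AWS secret key.
  const signature = crypto
    .createHmac('sha1', AWS_SECRET_KEY)
    .update(policy)
    .digest('base64');
  res.json({ policy, signature });
});

app.listen(3000);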


How to generate client code with httpsnippet (npm)

I am not able to get headers in the output while generating the client code:
const snippet = new HTTPSnippet({
  method: 'GET',
  url: 'http://mockbin.com/request',
  headers: {
    'content-type': 'Application/json',
  }
});
const options = { indent: '\t' };
const output = snippet.convert('shell', 'curl', options);
console.log(output);
Output:
[Error [HARError]: validation failed] {
  errors: [
    {
      keyword: 'type',
      dataPath: '.headers',
      schemaPath: '#/properties/headers/type',
      params: { type: 'array' },
      message: 'should be array'
    }
  ]
}
Expected: the headers should be part of the generated curl command rather than this error.
The validation message says that headers should be an array.
From the HAR docs (referenced by httpsnippet):
<headers>
This object contains list of all headers (used in <request> and <response> objects).
"headers": [
{
"name": "Accept-Encoding",
"value": "gzip,deflate",
"comment": ""
},
{
"name": "Accept-Language",
"value": "en-us,en;q=0.5",
"comment": ""
}
]
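So the fix is to pass headers as an array of name/value objects rather than a plain object. A minimal sketch of the corrected call, keeping the same URL and options as above (the import shape may differ between httpsnippet versions):
// Older httpsnippet versions export the class directly; newer ones use a
// named export, e.g. `const { HTTPSnippet } = require('httpsnippet')`.
const HTTPSnippet = require('httpsnippet');

const snippet = new HTTPSnippet({
  method: 'GET',
  url: 'http://mockbin.com/request',
  // HAR expects an array of { name, value } objects, not a key/value map.
  headers: [
    { name: 'content-type', value: 'application/json' }
  ]
});

const output = snippet.convert('shell', 'curl', { indent: '\t' });
console.log(output);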

Which class in AWS CDK has the option to configure dynamic partitioning for a Kinesis delivery stream?

I'm using a Kinesis delivery stream to send a stream from EventBridge to an S3 bucket, but I can't seem to find which class has the option to configure dynamic partitioning.
This is my code for the delivery stream:
new CfnDeliveryStream(this, `Export-delivery-stream`, {
  s3DestinationConfiguration: {
    bucketArn: bucket.bucketArn,
    roleArn: kinesisFirehoseRole.roleArn,
    prefix: `test/!{timestamp:yyyy/MM/dd}/`
  }
});
I have been working on the same issue for a few days and have finally gotten something to work. Here is an example of how it can be implemented in CDK. In short, the partitioning has to be enabled, as you have done, but you also need to set the partition key and the .jq expression in the so-called processingConfiguration.
Our incoming JSON data looks something like this:
{
  "data": {
    "timestamp": 1633521266990,
    "defaultTopic": "Topic",
    "data": {
      "OUT1": "Inactive",
      "Current_mA": 3.92
    }
  }
}
The CDK code looks as follows:
const DeliveryStream = new CfnDeliveryStream(this, 'deliverystream', {
  deliveryStreamName: 'deliverystream',
  extendedS3DestinationConfiguration: {
    cloudWatchLoggingOptions: {
      enabled: true,
    },
    bucketArn: Bucket.bucketArn,
    roleArn: deliveryStreamRole.roleArn,
    prefix: 'defaultTopic=!{partitionKeyFromQuery:defaultTopic}/!{timestamp:yyyy/MM/dd}/',
    errorOutputPrefix: 'error/!{firehose:error-output-type}/',
    bufferingHints: {
      intervalInSeconds: 60,
    },
    dynamicPartitioningConfiguration: {
      enabled: true,
    },
    processingConfiguration: {
      enabled: true,
      processors: [
        {
          type: 'MetadataExtraction',
          parameters: [
            {
              parameterName: 'MetadataExtractionQuery',
              parameterValue: '{defaultTopic: .data.defaultTopic}',
            },
            {
              parameterName: 'JsonParsingEngine',
              parameterValue: 'JQ-1.6',
            },
          ],
        },
        {
          type: 'AppendDelimiterToRecord',
          parameters: [
            {
              parameterName: 'Delimiter',
              parameterValue: '\\n',
            },
          ],
        },
      ],
    },
  },
})

Issues with fetching data from RapidAPI using node-fetch

I've copied the demo code from their site, but I can't seem to crack it; I've never gotten this error before. I've tried both double-quoted and unquoted keys in the POST options and am still getting this error.
const res = await fetch("https://textanalysis-keyword-extraction-v1.p.rapidapi.com/keyword-extractor-text", {
  "method": "POST",
  "headers": {
    "content-type": "application/x-www-form-urlencoded",
    "x-rapidapi-key": "ksjadkasjkasjkfjhjafkakjkajkasjf", // hidden key
    "x-rapidapi-host": "textanalysis-keyword-extraction-v1.p.rapidapi.com"
  },
  "body": {
    "text": "Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document. Key phrases, key terms, key segments or just keywords are the terminology which is used for defining the terms that represent the most relevant information contained in the document. Although the terminology is different, function is the same",
    "wordnum": "5"
  }
})
.then(response => {
  console.log(response);
  console.log(response.headers);
})
.catch(err => {
  console.error(err);
});
Here is the error; apparently it has something to do with "[Symbol(Body internals)]" coming out as an empty object:
Response {
  size: 0,
  timeout: 0,
  [Symbol(Body internals)]: {
    body: PassThrough {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 2,
      _maxListeners: undefined,
      _writableState: [WritableState],
      allowHalfOpen: true,
      [Symbol(kCapture)]: false,
      [Symbol(kCallback)]: null
    },
    disturbed: false,
    error: null
  },
  [Symbol(Response internals)]: {
    url: 'https://textanalysis-keyword-extraction-v1.p.rapidapi.com/keyword-extractor-text',
    status: 400,
    statusText: 'Bad Request',
    headers: Headers { [Symbol(map)]: [Object: null prototype] }, // error code
    counter: 0
  }
}
Generally, a 400 code indicates malformed request syntax, invalid request message framing, or deceptive request routing.
But everything looks good in the code snippet that you have posted above. Try this:
fetch("https://textanalysis-keyword-extraction-v1.p.rapidapi.com/keyword-extractor-text", {
"method": "POST",
"headers": {
"content-type": "application/x-www-form-urlencoded",
"x-rapidapi-host": "textanalysis-keyword-extraction-v1.p.rapidapi.com",
"x-rapidapi-key": ""
},
"body": {
"wordnum": "5",
"text": ""
}
})
.then(response => {
console.log(response);
})
.catch(err => {
console.error(err);
});
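If that still comes back as a 400, one more thing worth checking (my assumption, not something from RapidAPI's docs): node-fetch does not form-encode a plain object passed as body, so the payload will not match the declared application/x-www-form-urlencoded content type. A sketch using URLSearchParams so the body really is form encoded:
const fetch = require('node-fetch');

// URLSearchParams produces an application/x-www-form-urlencoded body,
// and node-fetch sets the matching Content-Type header for it.
const body = new URLSearchParams({
  text: "Keyword extraction is tasked with the automatic identification of terms that best describe the subject of a document.",
  wordnum: "5"
});

fetch("https://textanalysis-keyword-extraction-v1.p.rapidapi.com/keyword-extractor-text", {
  method: "POST",
  headers: {
    "x-rapidapi-host": "textanalysis-keyword-extraction-v1.p.rapidapi.com",
    "x-rapidapi-key": "YOUR_RAPIDAPI_KEY" // placeholder
  },
  body
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(err => console.error(err));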
You can always get in touch with the support team at support@rapidapi.com.

AddInMemoryClients results in Unknown client or not enabled

I'm trying to get IdentityServer4 to work in an ASP.NET Core 3 application with an Angular 8 SPA using "oidc-client": "1.10.1".
If I add the following to my appsettings.json:
"IdentityServer": {
"Key": {
"Type": "File",
"FilePath": "acertificate.pfx",
"Password": "notmyrealpassword..orisit?"
},
"Clients": {
"dev-client": {
"Profile": "IdentityServerSPA",
}
}
}
Using this client:
{
  authority: 'https://localhost:5001/',
  client_id: 'dev-client',
  redirect_uri: 'http://localhost:4200/auth-callback',
  post_logout_redirect_uri: 'http://localhost:4200/',
  response_type: 'id_token token',
  scope: 'openid profile API',
  filterProtocolClaims: true,
  loadUserInfo: true
}
I get: Invalid redirect_uri: http://localhost:4200/auth-callback
Adding:
"dev-client": {
  "Profile": "IdentityServerSPA",
  "RedirectUris": [ "http://localhost:4200/auth-callback" ]
}
does nothing. If I add the Client config copied (almost) from the documentation:
"Clients": [
  {
    "Enabled": true,
    "ClientId": "dev-client",
    "ClientName": "Local Development",
    "AllowedGrantTypes": [ "implicit" ],
    "AllowedScopes": [ "openid", "profile", "API" ],
    "RedirectUris": [ "http://localhost:4200/auth-callback" ],
    "RequireConsent": false,
    "RequireClientSecret": false
  }
]
I get: System.InvalidOperationException: 'Type '' is not supported.' at startup
If I try to configure the client in code, and only keep the "Key" section in appsettings:
services
    .AddIdentityServer(options =>
    {
        options.Cors.CorsPolicyName = _CorsPolicyName;
    })
    .AddInMemoryClients(new IdentityServer4.Models.Client[] {
        new IdentityServer4.Models.Client
        {
            ClientId = "dev-client",
            ClientName = "JavaScript Client",
            ClientUri = "http://localhost:4200",
            AllowedGrantTypes = { IdentityModel.OidcConstants.GrantTypes.Implicit },
            AllowAccessTokensViaBrowser = true,
            RedirectUris = { "http://localhost:4200/auth-callback" },
            PostLogoutRedirectUris = { "http://localhost:4200" },
            AllowedCorsOrigins = { "http://localhost:4200" },
            AllowedScopes =
            {
                IdentityServer4.IdentityServerConstants.StandardScopes.OpenId,
                IdentityServer4.IdentityServerConstants.StandardScopes.Profile,
                IdentityServer4.IdentityServerConstants.StandardScopes.Email,
                "API"
            }
        }
    });
I get: Unknown client or not enabled: dev-client.
Someone help me keep my sanity and point out my, most likely obvious, error.
ASP.NET Identity overrides the documented method for IdentityServer Clients configuration, expecting a dictionary of well-known values. You can bypass this by creating a section that is not named Clients and reading from that section explicitly. Additionally, AddApiAuthorization exposes the Clients collection on the ApiAuthorizationOptions, which can be used to add other clients:
.AddApiAuthorization<...>(options =>
{
    options.Clients.AddRange(Configuration.GetSection("IdentityServer:OtherClients").Get<Client[]>());
});

Lambda - headObject fails (NotFound) though objectCreated is the event

In a Lambda function with the event s3:ObjectCreated:*, calling headObject on the created object returns a NotFound error.
module.exports.handler = async function(event, context, callback) {
  try {
    const Bucket = event.Records[0].s3.bucket.name;
    const Key = event.Records[0].s3.object.key;
    console.log('Bucket', Bucket);
    console.log('Key', Key);
    const objectHead = await s3.headObject({ Bucket, Key }).promise();
    console.log('Alas! I will never discover that the objectHead is:', objectHead);
    callback();
  } catch(err) {
    console.error('Error', err);
    callback(err);
  }
}
And this is the error I get:
{
  NotFound: null
  message: null,
  code: 'NotFound',
  region: null,
  time: 2018-02-19T11:06:35.894Z,
  requestId: 'XXXXXXXXXXX',
  extendedRequestId: 'XXX.....XXX',
  cfId: undefined,
  statusCode: 404,
  retryable: false,
  retryDelay: 77.24564264820208
}
I've noticed that it says region null in the error. I suspect this is irrelevant as I'm 99% sure I'm setting it correctly:
const s3 = new AWS.S3({
  region: 'us-east-1'
});
Here's the serverless.yml function declaration in case anybody's curious:
obj_head:
  handler: obj_head.handler
  events:
    - s3:
        bucket: ${self:provider.environment.BUCKET_NAME}
        event: s3:ObjectCreated:*
  role: arn:aws:iam::XXXXXXXXX:role/RoleWithAllS3PermissionsEver
And here is a sample of a received event:
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "2018-02-19T11:03:46.761Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "AWS:XXX"
      },
      "requestParameters": {
        "sourceIPAddress": "X.X.X.X"
      },
      "responseElements": {
        "x-amz-request-id": "X",
        "x-amz-id-2": "X/X/X"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "14122133-28e8-4cd9-907c-af328334c56b",
        "bucket": {
          "name": "BUCKET_NAME",
          "ownerIdentity": {
            "principalId": "X"
          },
          "arn": "arn:aws:s3:::BUCKET_NAME"
        },
        "object": {
          "key": "input.key",
          "size": X,
          "eTag": "X",
          "sequencer": "X"
        }
      }
    }
  ]
}
It's puzzling that the object head isn't found though the very event that triggered the function is the object's creation.
Am I doing something wrong?
Any thoughts on where to look?
The object keyname value is URL encoded, and this was causing the issue. This behaviour is documented here:
The s3 key provides information about the bucket and object involved in the event. Note that the object keyname value is URL encoded. For example "red flower.jpg" becomes "red+flower.jpg".
When dealing with filenames that contain Unicode characters, please see this answer from Alastair McCormack:
You need to convert the URL encoded Unicode string to a bytes str before un-urlparsing it and decoding as UTF-8.
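In other words, decoding the key before calling headObject resolves the NotFound. A minimal sketch of the handler above with just that change (everything else assumed to stay as posted):
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' });

module.exports.handler = async function(event) {
  const Bucket = event.Records[0].s3.bucket.name;
  // The event delivers the key URL encoded ("red flower.jpg" -> "red+flower.jpg"),
  // so turn '+' back into spaces and percent-decode before using it.
  const rawKey = event.Records[0].s3.object.key;
  const Key = decodeURIComponent(rawKey.replace(/\+/g, ' '));
  const objectHead = await s3.headObject({ Bucket, Key }).promise();
  console.log('objectHead:', objectHead);
};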