Fine uploader request not working from vb6 - amazon-s3

I am using the policy format below and building an application/json response containing the base64-encoded policy and the base64-encoded SHA signature, but I am getting
"Error attempting to parse signature response: SyntaxError: Unexpected token s"
Can you suggest where I am going wrong:
strToPolicy = "{
""expiration"": ""2015-01-01T12:00:00.000Z"",
""conditions"": [
{""bucket"": manishtests3.s3-website-ap-southeast-1.amazonaws.com },
{""acl"": ""public-read"" },
{""key"": my access key id},
{""x-amz-meta-qqfilename"": Search.png},
]
}"

Your policy appears to simply be malformed JSON... Take note of the formatting below:
{
  "conditions": [
    {
      "bucket": "manishtests3.s3-website-ap-southeast-1.amazonaws.com"
    },
    {
      "acl": "public-read"
    },
    {
      "key": "my access key id"
    },
    {
      "x-amz-meta-qqfilename": "Search.png"
    }
  ],
  "expiration": "2015-01-01T12:00:00.000Z"
}
For more information, there is also a specific example of the Amazon S3 policy format as used when uploading with Fine Uploader and Amazon S3.
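As an aside, the "Unexpected token s" error means the signature endpoint's response body itself was not valid JSON. Below is a rough sketch of the usual server-side flow, shown in Python and assuming the classic v2-style signing that Fine Uploader's S3 mode uses (HMAC-SHA1 over the base64 policy with the AWS secret key; the key value is a placeholder): serialize the well-formed policy, base64-encode it, sign it, and return only the JSON object the uploader expects.
import base64
import hashlib
import hmac
import json

AWS_SECRET_KEY = "YOUR_AWS_SECRET_KEY"  # placeholder

def sign_policy(policy_dict):
    # 1. Serialize the well-formed policy document and base64-encode it.
    policy_b64 = base64.b64encode(json.dumps(policy_dict).encode("utf-8"))
    # 2. Sign the base64 policy with HMAC-SHA1 using the AWS secret key.
    signature = base64.b64encode(
        hmac.new(AWS_SECRET_KEY.encode("utf-8"), policy_b64, hashlib.sha1).digest()
    ).decode("utf-8")
    # 3. Return nothing but this JSON object; any stray text in the response
    #    triggers "Error attempting to parse signature response".
    return json.dumps({"policy": policy_b64.decode("utf-8"), "signature": signature})
The same three steps apply in VB6; the key point is that the response the uploader parses must be pure JSON.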

Related

Converting AWS CloudWatch Metrics Insight query to CDK Metric

I am modifying the sample at https://github.com/cdk-patterns/serverless/tree/main/the-eventbridge-etl/typescript as I want to add a dashboard widget to my CloudFormation Stack that shows the Fargate vCPU usage. I have been able to upgrade the app to use CDK v2, and deployment/functionality has been confirmed. However, I cannot get the vCPU widget in the dashboard to show any data.
If I configure the widget manually, from within AWS CloudWatch's Source field, the query looks as follows:
{
  "metrics": [
    [ { "expression": "SELECT COUNT(ResourceCount) FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'", "label": "Query1", "id": "q1" } ],
    [ "AWS/Usage", "ResourceCount", "Service", "Fargate", "Type", "Resource", { "id": "m1" } ]
  ],
  "view": "timeSeries",
  "title": "ExtractECSJob",
  "region": "us-west-2",
  "timezone": "Local",
  "stat": "Sum",
  "period": 300
}
However, when I attempt to do this with CDK, using the following TypeScript code:
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJob",
  left: [
    new Metric({
      namespace: "AWS/Usage",
      metricName: "ResourceCount",
      statistic: "Sum",
      period: Duration.seconds(300),
      dimensionsMap: {
        "Service": "Fargate",
        "Type": "Resource",
        "Resource": "vCPU"
      }
    })
  ]
});
This does not translate to the above, and no information is shown in this widget. The new source looks as follows:
{
  "view": "timeSeries",
  "title": "ExtractECSJob",
  "region": "us-west-2",
  "metrics": [
    [ "AWS/Usage", "ResourceCount", "Resource", "vCPU", "Service", "Fargate", "Type", "Resource", { "stat": "Sum" } ]
  ],
  "period": 300
}
How do I map the above metrics source definition to the CDK source construct?
I tried using MathExpression but with the following:
let metrics = new MathExpression({
  expression: "SELECT COUNT('metricName') FROM SCHEMA('\"AWS/Usage\"', 'Class','Resource','Service','Type') WHERE Service = 'Fargate' AND Resource = 'vCPU'",
  usingMetrics: {}
})
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJob",
  left: [
    metrics
  ]
});
I get the warning during cdk diff:
[Warning at /EventbridgeEtlStack/EventBridgeETLDashboard] Math expression 'SELECT COUNT(metricName) FROM SCHEMA($namespace, Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'' references unknown identifiers: metricName, namespace, lass, esource, ervice, ype, ervice, argate, esource, vCPU. Please add them to the 'usingMetrics' map.
What should I put in the usingMetrics map? Any help is appreciated.
Thanks to AWS Support, I was able to get this fixed. The updated code looks like the following:
let metrics = new MathExpression({
  expression: "SELECT COUNT(ResourceCount) FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) WHERE Service = 'Fargate' AND Resource = 'vCPU'",
  usingMetrics: {},
  label: "Query1"
})
let metric2 = new Metric({
  namespace: "AWS/Usage",
  metricName: "ResourceCount",
  period: cdk.Duration.seconds(300),
  dimensionsMap: {
    "Service": "Fargate",
    "Type": "Resource",
  }
})
const extractECSWidget = new GraphWidget({
  title: "ExtractECSJobTest",
  left: [metrics, metric2],
  region: "us-west-2",
  statistic: "Sum",
  width: 24
});
dashboardStack.addWidgets(
  extractECSWidget
);
When running cdk deploy, I still get the same warning (about unknown identifiers being referenced) but the widget is functioning as expected.
This feature is not yet supported by CDK. I've opened the issue https://github.com/aws/aws-cdk/issues/22844
I faced the same issue while creating an alarm based on a query-backed metric. I found a workaround using the level 1 construct CfnAlarm.
Maybe the same kind of workaround exists for the widget.
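For the alarm case, here is a rough sketch of that CfnAlarm workaround, shown in Python CDK (the construct id, threshold, and comparison operator are placeholders; CfnAlarm's metrics property takes the Metrics Insights query directly as a MetricDataQueryProperty expression, and the same properties exist on the TypeScript CfnAlarm):
from aws_cdk import aws_cloudwatch as cloudwatch

# Inside a Stack (self is the Stack): the L1 CfnAlarm accepts the query as an
# expression, bypassing the L2 MathExpression validation that produces the
# "unknown identifiers" warning. All names and the threshold are placeholders.
cloudwatch.CfnAlarm(
    self, "FargateVCpuQueryAlarm",
    comparison_operator="GreaterThanThreshold",
    evaluation_periods=1,
    threshold=100,
    metrics=[
        cloudwatch.CfnAlarm.MetricDataQueryProperty(
            id="q1",
            label="Query1",
            expression=(
                "SELECT COUNT(ResourceCount) "
                "FROM SCHEMA(\"AWS/Usage\", Class,Resource,Service,Type) "
                "WHERE Service = 'Fargate' AND Resource = 'vCPU'"
            ),
            period=300,
            return_data=True,
        )
    ],
)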

Send email with several attachments

I send email without user authentication (an automated method).
{
"message":
{
"subject": "Envio de email Teste do MS Graph",
"body":
{
"contentType": "HTML",
"content": "Testando envio"
},
"toRecipients":
[
{
"emailAddress":
{
"address": "xxxxx#gmail.com"
}
}
],
"attachments":
[
{
"#odata.type": "#microsoft.graph.fileAttachment",
"name": "Teste de Anexo",
"contentLocation": "e:\ramos.xlsx",
"isInLine":"true"
}
]
}
}
How do I attach several files to the email?
I know there is an attachments parameter, but I was unable to use it to point to a physical file path, and I need to point to several files.
How do I transform the file into base64 using VBA? Is there an API inside Graph that does this?
Solved: the problem was that I needed to convert the file contents to base64.
As of now you cannot point to files on disk, since contentLocation is not supported according to the documentation. I am not from the VBA side, but you can try reading the file using this thread, and you can encode the data by following this thread; it concentrates on a text file, but you can give it a try with an Excel file.
You will need to convert the file to base64.
import base64

def encode_base64(attachment_path):
    # Read the file as raw bytes and return its base64 representation as text.
    with open(attachment_path, 'rb') as attachment_file:
        data = attachment_file.read()
    return base64.b64encode(data).decode('UTF-8')
Define a variable:
attachment = encode_base64('c:/your_folder/file.xlsx')
After that, you can pass it as an argument in the JSON:
"attachments": [
{
"#odata.type": "#microsoft.graph.fileAttachment",
"name": "HappyCodeAgosto.xlsx",
"contentType": "application/vnd.ms-excel",
"contentBytes": attachment
}
]
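To attach several files, the attachments array simply takes one fileAttachment object per file. Here is a rough Python sketch building on the encode_base64 helper above; the file paths, sender address, and access token are placeholders, and it assumes application (client credentials) permissions when posting to the standard /users/{id}/sendMail endpoint.
import base64
import os
import requests

ACCESS_TOKEN = "EwB...placeholder..."              # placeholder app-only token
SENDER = "no-reply@contoso.example"                # placeholder sending mailbox
FILE_PATHS = ["e:/ramos.xlsx", "e:/vendas.xlsx"]   # hypothetical files to attach

def encode_base64(attachment_path):
    # Same helper as above: raw bytes -> base64 text.
    with open(attachment_path, "rb") as f:
        return base64.b64encode(f.read()).decode("UTF-8")

# One fileAttachment entry per file is all that "several attachments" requires.
attachments = [
    {
        "@odata.type": "#microsoft.graph.fileAttachment",
        "name": os.path.basename(path),
        "contentType": "application/vnd.ms-excel",
        "contentBytes": encode_base64(path),
    }
    for path in FILE_PATHS
]

payload = {
    "message": {
        "subject": "Envio de email Teste do MS Graph",
        "body": {"contentType": "HTML", "content": "Testando envio"},
        "toRecipients": [{"emailAddress": {"address": "xxxxx@gmail.com"}}],
        "attachments": attachments,
    }
}

requests.post(
    f"https://graph.microsoft.com/v1.0/users/{SENDER}/sendMail",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
)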

GCP Bigquery: Can't query stackdriver access logs exported in cloudstorage because invalid json field "@type"

I store the access log of a pixel image in a Cloud Storage bucket dev-access-log-bucket using the standard "sink",
so the files look like this: requests/2019/05/08/15:00:00_15:59:59_S1.json
and one line looks like this (I formatted the JSON, but it's on one line normally):
{
"httpRequest": {
"cacheLookup": true,
"remoteIp": "93.24.25.190",
"requestMethod": "GET",
"requestSize": "224",
"requestUrl": "https://dev-snowplow.legalstart.fr/one_pixel_image.png?user_id=0&action=purchase&product_id=0&money=10",
"responseSize": "779",
"status": 200,
"userAgent": "python-requests/2.21.0"
},
"insertId": "w6wyz1g2jckjn6",
"jsonPayload": {
"#type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
"statusDetails": "response_sent_by_backend"
},
"logName": "projects/tracking-pixel-239909/logs/requests",
"receiveTimestamp": "2019-05-08T15:34:24.126095758Z",
"resource": {
"labels": {
"backend_service_name": "",
"forwarding_rule_name": "dev-yolaw-pixel-forwarding-rule",
"project_id": "tracking-pixel-239909",
"target_proxy_name": "dev-yolaw-pixel-proxy",
"url_map_name": "dev-urlmap",
"zone": "global"
},
"type": "http_load_balancer"
},
"severity": "INFO",
"spanId": "7d8823509c2dc94f",
"timestamp": "2019-05-08T15:34:23.140747307Z",
"trace": "projects/tracking-pixel-239909/traces/bb55577eedd5797db2867931f8de9162"
}
All of this, once again, is standard GCP; I did not customize anything here.
Now I want to run some queries on it from BigQuery, so I created a dataset and an external table configured like this:
External Data Configuration
Source URI(s) gs://dev-access-log-bucket/requests/*
Auto-detect schema true (note: I don't know why it shows true even though I've defined the schema manually)
Ignore unknown values true
Source format NEWLINE_DELIMITED_JSON
Max bad records 0
and the following manual schema:
timestamp DATETIME REQUIRED
httpRequest RECORD REQUIRED
httpRequest.requestUrl STRING REQUIRED
and when I run a query
SELECT
timestamp
FROM
`path.to.my.table`
LIMIT
1000
I got
Invalid field name "#type". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long.
How can I work around this without having to pre-process the logs to strip the "@type" field?

Dropbox API V2 list_file_members/batch empty results

I'm currently trying to work with the Dropbox list_file_members API endpoint, as it appears to be the only place to find out who owns a file
(see the following example result, taken from the documentation page):
{
"users": [
{
"access_type": {
".tag": "owner"
},
"user": {
"account_id": "dbid:AAH4f99T0taONIb-OurWxbNQ6ywGRopQngc",
"same_team": true,
"team_member_id": "dbmid:abcd1234"
},
"permissions": [],
"is_inherited": false
}
],
"groups":[...]
...
}
However, when I call the API on a single file, I get the following:
{
"users": [],
"groups": [
{
"access_type": {
".tag": "editor"
},
"permissions": [],
"is_inherited": true,
"group": {
"group_name": "Everyone at TEAM_NAME_HERE",
"group_id": "g:GROUP_ID_HERE",
"member_count": 6,
"group_management_type": {
".tag": "company_managed"
},
"group_type": {
".tag": "team"
},
"is_owner": false,
"same_team": true
}
}
],
"invitees": []
}
This result contains no owner information, so I'm assuming this is because everyone has the same access level?
The problem worsens when I try to call files in batches using the sharing_list_file_members/batch endpoint; I get the following result:
[
{
"file": "id:THIS_IS_MY_FILE_ID",
"result": {
".tag": "result",
"members": {
"users": [],
"groups": [],
"invitees": []
},
"member_count": 0
}
}
]
Obviously this is even less helpful. The result is the same whether I access the API via my own PHP or via the API explorer. Could anyone tell me where I'm going wrong, and why I get no results for users, or even groups, when querying in batches?
The /2/sharing/list_file_members endpoint is documented as:
Use to obtain the members who have been invited to a file, both inherited and uninherited members.
The /2/sharing/list_file_members/batch endpoint is documented as:
Get members of multiple files at once. The arguments to this route are more limited, and the limit on query result size per file is more strict. To customize the results more, use the individual file endpoint.
Inherited users are not included in the result, and permissions are not returned for this endpoint.
It sounds like the file for your example is in a team folder, and so the group listed for your non-batch example is the team group, i.e., an inherited group. The documentation indicates that this group isn't expected when using the batch endpoint.
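For reference, here is a minimal sketch with the official Dropbox Python SDK ("dropbox" on PyPI) showing the difference; the access token and file ID are placeholders. The single-file call includes inherited members via include_inherited, while the batch call does not return them.
import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN_HERE")  # placeholder token

# Single-file endpoint: inherited members (e.g. the team group) are included
# because include_inherited defaults to True.
members = dbx.sharing_list_file_members("id:THIS_IS_MY_FILE_ID", include_inherited=True)
print(len(members.users), len(members.groups), len(members.invitees))

# Batch endpoint: more limited, and inherited members are not returned, so a
# file shared only through a team folder can legitimately come back empty.
for entry in dbx.sharing_list_file_members_batch(["id:THIS_IS_MY_FILE_ID"]):
    print(entry.file, entry.result)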

Error loading file stored in Google Cloud Storage to Big Query

I have been trying to create a job to load a compressed JSON file from Google Cloud Storage into a Google BigQuery table. I have read/write access in both Google Cloud Storage and Google BigQuery. Also, the uploaded file belongs to the same project as the BigQuery table.
The problem happens when I access the resource behind the URL https://www.googleapis.com/upload/bigquery/v2/projects/NUMERIC_ID/jobs by means of a POST request. The content of the request to the aforementioned resource is as follows:
{
"kind" : "bigquery#job",
"projectId" : NUMERIC_ID,
"configuration": {
"load": {
"sourceUris": ["gs://bucket_name/document.json.gz"],
"schema": {
"fields": [
{
"name": "id",
"type": "INTEGER"
},
{
"name": "date",
"type": "TIMESTAMP"
},
{
"name": "user_agent",
"type": "STRING"
},
{
"name": "queried_key",
"type": "STRING"
},
{
"name": "user_country",
"type": "STRING"
},
{
"name": "duration",
"type": "INTEGER"
},
{
"name": "target",
"type": "STRING"
}
]
},
"destinationTable": {
"datasetId": "DATASET_NAME",
"projectId": NUMERIC_ID,
"tableId": "TABLE_ID"
}
}
}
}
However, the error doesn't make any sense and can also be found below:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "invalid",
"message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
}
],
"code": 400,
"message": "Job configuration must contain exactly one job-specific configuration object (e.g., query, load, extract, spreadsheetExtract), but there were 0: "
}
}
I know the problem lies neither in the project ID nor in the access token placed in the authorization header, because I have successfully created an empty table before. Also, I specify the Content-Type header as application/json, which I don't think is the issue here, because the body content is JSON encoded.
Thanks in advance
Your HTTP request is malformed -- BigQuery doesn't recognize this as a load job at all.
You need to look into the POST request, and check the body you send.
You need to ensure that all of the above (which seems correct) is the body of the POST call. The above JSON should be on a single line, and if you are manually creating the multipart message, make sure there is an extra newline between the headers and the body of each MIME part.
If you are using some sort of library, make sure the body is not expected in some other form, like resource, content, or body. I've seen libraries that use these differently.
Try out the BigQuery API explorer: https://developers.google.com/bigquery/docs/reference/v2/jobs/insert and ensure your request body matches the one made by the API.
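For comparison, here is a minimal sketch of the same load job sent to the plain (non-upload) jobs endpoint with Python and requests, where there is no multipart framing to get wrong; the project, dataset, table, and token values are placeholders taken from the question, and the schema is trimmed.
import requests

ACCESS_TOKEN = "ya29.placeholder"  # placeholder OAuth access token
PROJECT_ID = "NUMERIC_ID"          # placeholder project id

job = {
    "configuration": {
        "load": {
            "sourceUris": ["gs://bucket_name/document.json.gz"],
            "sourceFormat": "NEWLINE_DELIMITED_JSON",  # needed for JSON input (default is CSV)
            "schema": {"fields": [
                {"name": "id", "type": "INTEGER"},
                {"name": "date", "type": "TIMESTAMP"},
                # ... remaining fields as in the question ...
            ]},
            "destinationTable": {
                "projectId": PROJECT_ID,
                "datasetId": "DATASET_NAME",
                "tableId": "TABLE_ID",
            },
        }
    }
}

# Plain jobs.insert endpoint (no /upload/ path segment): the JSON body is sent
# as-is, so BigQuery sees configuration.load exactly as written above.
resp = requests.post(
    f"https://www.googleapis.com/bigquery/v2/projects/{PROJECT_ID}/jobs",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=job,
)
print(resp.status_code, resp.json())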