I'm trying to save an image from a web app as a JPEG in Azure Blob Storage using an Azure Logic App. My web app lets the user select an image from their device and then sends it as an imageDataUri to an HTTP-triggered Logic App that uses the Create Blob action.
Here is how the Image Data URI is created:
imageDataUri = $"data:{format};base64,{Convert.ToBase64String(memoryStream.ToArray())}";
The file saves successfully, but not as a valid image.
How do I work with imageDataUri in the Azure Logic App to save it as a properly formatted jpeg in blob storage?
Figured it out, Logic App code view below:
"Create_blob": {
"inputs": {
"body": "#dataUriToBinary(triggerBody()?['imagedatauri'])",
"headers": {
"Content-Type": "image/jpeg"
},
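To make the transformation concrete, here is a rough Python sketch of what the Logic Apps dataUriToBinary() expression does to the imageDataUri (the function name and sample bytes are mine, for illustration only):

```python
import base64

def data_uri_to_binary(data_uri: str) -> bytes:
    """Rough equivalent of the Logic Apps dataUriToBinary() function:
    strip the "data:<mime>;base64," prefix and decode the payload."""
    header, _, payload = data_uri.partition(",")
    if not header.startswith("data:") or not header.endswith(";base64"):
        raise ValueError("not a base64 data URI")
    return base64.b64decode(payload)

# A tiny fake "image" round-trips correctly:
raw = b"\xff\xd8\xff\xe0fake-jpeg-bytes"
uri = "data:image/jpeg;base64," + base64.b64encode(raw).decode()
assert data_uri_to_binary(uri) == raw
```

Writing the decoded bytes (rather than the data URI string) as the blob body is what makes the stored file a valid JPEG.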
Recently I saw that in an IoT Central app, if I create a new Device Template, we get an interface ID in a format like "dtmi:iosIotCentralApp:DeviceTestTemplate21i;1". If I use this as the cmid for the DeviceProvisioning function, I get the following error in the Azure Function: "Please follow the schema if you want to pass __iot:interfaces section under data. Format: '__iot:interfaces':{'CapabilityModelId': urn:companyname:template:templatename:version, 'CapabilityModel': 'The inline contents of interfaces and capability model.'}"
If I manually create the cmid in the format urn:companyname:template:templatename:version, the device is provisioned but is not assigned to the specific Device Template.
I am using the API below for registration in an Azure Function:
PUT https://global.azure-devices-provisioning.net/{idScope}/registrations/{registrationId}/register?api-version=2019-03-31
Are there any changes in the API for device provisioning, or am I missing something?
This question is related to my answer here.
The following is a DPS payload for passing the model id:
{
    "registrationId": "yourDeviceId",
    "payload": {
        "modelId": "yourDeviceTemplateModelId"
    }
}
See more details in the IoT Plug and Play device developer guide.
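Putting the pieces together, a minimal Python sketch of assembling the registration request for the endpoint shown in the question (the ID scope is a placeholder; this only builds the URL and body, it does not perform the PUT):

```python
import json

ID_SCOPE = "0ne000FFFFF"          # hypothetical placeholder value
REGISTRATION_ID = "yourDeviceId"
MODEL_ID = "dtmi:iosIotCentralApp:DeviceTestTemplate21i;1"

# Registration URL from the question (api-version 2019-03-31):
url = (f"https://global.azure-devices-provisioning.net/{ID_SCOPE}"
       f"/registrations/{REGISTRATION_ID}/register?api-version=2019-03-31")

# Body: the DTMI goes under payload.modelId, not as a urn-style cmid.
body = json.dumps({
    "registrationId": REGISTRATION_ID,
    "payload": {"modelId": MODEL_ID},
})

assert json.loads(body)["payload"]["modelId"].startswith("dtmi:")
```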
I am trying to render a project I created using the Video Indexer "Create Project" API call, but I am getting the following error:
{
"ErrorType": "USER_NOT_ALLOWED",
"Message": "Token is authorized to access only a video. Trace id: '6a0bd50f-d25e-405f-b853-86847c8a1bca"
}
I'm following these steps from the API documentation:

1. Create a project:

   https://api.videoindexer.ai/{location}/Accounts/{accountId}/Projects[?accessToken]

   This returns the new project information and a 200 OK status code.

2. Get the project access token by sending a GET request to:

   https://api.videoindexer.ai/Auth/{location}/Accounts/{accountId}/Projects/{projectId}/AccessToken[?allowEdit]

   An access token is returned successfully.

3. Send a POST request to render the video:

   https://api.videoindexer.ai/{location}/Accounts/{accountId}/Projects/{projectId}/render[?sendCompletionEmail][&accessToken]

For the accessToken parameter, I am passing the project access token in all cases.
However, the documentation for this API doesn't specify a schema for the request body, so when I send an empty body it returns the error:
{
"ErrorType": "USER_NOT_ALLOWED",
"Message": "Token is authorized to access only a video. Trace id: '6a0bd50f-d25e-405f-b853-86847c8a1bca"
}
I have also tried a different approach of calling the Project widget and using the "Render" button that the widget provides, but I can neither save nor render the videos that show up in the project.
My end goal is to be able to edit the videos and render the selected video ranges.
Any advice regarding this issue is welcome.
Rendering a project is an operation that requires access to other videos in your Video Indexer account (the videos that are included in the project).
Therefore, make sure you use an account access token (obtained with allowEdit=true) for step 3, or just use the same token you used to create the project from step 1.
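A small Python sketch of the fix: the render call uses an account-level access token obtained with allowEdit=true, rather than a project token (location, account ID, project ID, and the token value are placeholders; this only builds the URLs):

```python
# Use the ACCOUNT access token (allowEdit=true) for the render call,
# not the project-scoped token, since rendering touches other videos.
LOCATION, ACCOUNT_ID, PROJECT_ID = "trial", "your-account-id", "your-project-id"
BASE = "https://api.videoindexer.ai"

# 1) Account access token with edit rights:
account_token_url = (f"{BASE}/Auth/{LOCATION}/Accounts/{ACCOUNT_ID}"
                     f"/AccessToken?allowEdit=true")

# 2) Render the project, passing that account token:
account_token = "<token returned by the call above>"
render_url = (f"{BASE}/{LOCATION}/Accounts/{ACCOUNT_ID}"
              f"/Projects/{PROJECT_ID}/render?accessToken={account_token}")

assert "allowEdit=true" in account_token_url
```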
I am implementing a blogging platform where users have to upload images for each piece of content. I am wondering whether, during the upload, I can get a permanent URL for the image as a response from the AWS Amplify Storage.get function. Is this a good idea?
Storage.get('img-sample.png', { expires: 9999999 })
    .then(result => console.log(result))
    .catch(err => console.log(err));
I want the URL because I will be saving it in the database. I don't want to call the Storage.get API every time the post content is rendered, because a post may contain multiple images; I think that would make the application less reliable.
EDIT (in response to danielfranca's question): I think the application will be less reliable because, with multiple images in one piece of content, fetching each image one by one with the Storage.get API can hang up the page load for a single blog post.
I am learning Fauna DB, but it is not working from the terminal shell. It shows data in the web shell:
> Paginate(Indexes())
{
"data": [
Index("school"),
Index("Community")
]
}
But it does not show data in the terminal shell:
> Paginate(Indexes())
{ data: [] }
Each secret that you use is associated with a specific database.
The web shell can access any of your databases via your username+password login. The fauna-shell terminal app can only use secrets, which are typically stored in $HOME/.fauna-shell.
If you need to, create a new key in the Dashboard for the database you wish to access via fauna-shell (or application code); using the key's secret gives you access to that database. Make sure that you record the new secret someplace safe, such as in $HOME/.fauna-shell, because it is only provided once.
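For reference, the $HOME/.fauna-shell file is a simple INI-style config; a sketch of what an entry looks like (the endpoint name and secret below are placeholders, not real values):

```ini
default=cloud

[cloud]
domain=db.fauna.com
scheme=https
secret=fnAD_placeholder_secret_value
```

With a key created for the specific database (rather than the account-level cloud login), fauna-shell will see the same indexes the web shell does.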
Our requirement is to upload objects to Amazon S3 through a browser-based interface. For this we're utilizing the query string authentication mechanism (we don't have the end user's credentials during the upload process), and we're using ASP.NET to write the code. I'm running into issues when trying to do multipart uploads.
I'm using the AWS SDK for .NET (version 2.0.2.5) to create a query string with the following code:
GetPreSignedUrlRequest request = new GetPreSignedUrlRequest()
{
    BucketName = bucketName,
    Key = objectKey,
    Expires = expiryTime,
    Protocol = Amazon.S3.Protocol.HTTP,
    Verb = Amazon.S3.HttpVerb.PUT,
    ContentType = "application/octet-stream",
};
var url = AWSClientFactory.CreateAmazonS3Client(credentials, region).GetPreSignedURL(request);
This works great if I don't do a multipart upload. However, I'm not able to figure out how to do a multipart upload using a query string.
Problems that I'm running into are:
In order to do a multipart upload, I need to get an UploadId first. However, the request to get an UploadId is an HTTP POST, and GetPreSignedUrlRequest does not accept POST as an HTTP verb.
I finally ended up putting a bucket policy on the bucket in question (which does not belong to me) that grants my Amazon S3 account permission, and with that account I do an HTTP POST to get the UploadId.
Now, based on my understanding, the query string must contain the UploadId and PartNumber parameters when creating the query string authentication URL, but looking at the GetPreSignedUrlRequest properties, I can't figure out how to specify these parameters.
I am inclined to believe that the .NET SDK does not support this scenario and that I have to resort to the native REST API to create the query string. Is my understanding correct?
Any insights into this would be highly appreciated.
Happy New Year in advance.
You are correct in your belief. The AWS SDK for .NET does not currently support creating presigned URLs for multipart uploads.
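To illustrate what falling back to the REST API involves, here is a rough Python sketch of Signature Version 2 query-string signing for an UploadPart request (SigV2 was the query-string auth scheme of that era; bucket, key, credentials, and the upload ID below are placeholders, and the initiate/complete multipart calls are omitted):

```python
import base64, hashlib, hmac, time
from urllib.parse import quote

def presign_upload_part(access_key, secret_key, bucket, key,
                        upload_id, part_number, expires_in=3600):
    """Build a SigV2 presigned URL for UploadPart by hand; uploadId and
    partNumber are part of the canonicalized resource being signed."""
    expires = int(time.time()) + expires_in
    # Subresources appear in the canonicalized resource, sorted
    # lexicographically (partNumber before uploadId).
    resource = (f"/{bucket}/{key}"
                f"?partNumber={part_number}&uploadId={upload_id}")
    string_to_sign = f"PUT\n\n\n{expires}\n{resource}"
    sig = base64.b64encode(
        hmac.new(secret_key.encode(), string_to_sign.encode(),
                 hashlib.sha1).digest()).decode()
    return (f"https://{bucket}.s3.amazonaws.com/{key}"
            f"?partNumber={part_number}&uploadId={upload_id}"
            f"&AWSAccessKeyId={access_key}&Expires={expires}"
            f"&Signature={quote(sig, safe='')}")

url = presign_upload_part("AKIDEXAMPLE", "secret", "my-bucket", "big.bin",
                          "example-upload-id", 1)
assert "partNumber=1" in url and "uploadId=example-upload-id" in url
```

The browser can then PUT each part to such a URL; the server-side code still has to initiate the upload (to get the UploadId) and complete it with the collected part ETags.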