Upload .png file to Azure Blob storage using Kotlin

I got an assignment that requires uploading some .png files to Azure Blob storage using Kotlin. I already have the storage account set up and am able to upload files using Azure Storage Explorer, but I can't seem to find any examples of how this can be done with Kotlin.
I am relatively new to both Kotlin and Blob storage, so any help is appreciated.
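
One workable approach, sketched below, is the official Azure Storage Blob library for Java (com.azure:azure-storage-blob), which can be called from Kotlin as-is. The connection string, container name, and file paths are placeholders to replace with your own:

import com.azure.storage.blob.BlobServiceClientBuilder

fun main() {
    // Placeholder connection string; copy the real one from the storage
    // account's "Access keys" blade in the Azure portal.
    val connectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"

    val serviceClient = BlobServiceClientBuilder()
        .connectionString(connectionString)
        .buildClient()

    // Create the container if it does not exist yet ("images" is a placeholder name).
    val containerClient = serviceClient.getBlobContainerClient("images")
    containerClient.createIfNotExists()

    // Upload the local .png; the second argument overwrites an existing blob.
    val blobClient = containerClient.getBlobClient("picture.png")
    blobClient.uploadFromFile("/path/to/picture.png", true)
}

The same client also has upload(InputStream, long) overloads if the bytes are not coming from a file on disk.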

Related

How to upload a 9GB file with extension sql.gz to SQL BigQuery Sandbox

I want to upload a 9.6 GB file with the extension .sql.gz to my BigQuery Sandbox (free) account. I received a message that the file is too big and that I need to upload it from the cloud. When trying to upload it from the cloud, I am asked to create a bucket, and if I try to create a bucket, I get the message: "billing must be enabled". Is there any alternative, specifically for a .sql.gz file?
As of now, there is no alternative but to upload .gz files to a bucket in Cloud Storage and use the bq command-line tool to create a new table.
You may enable billing for your existing project to use Cloud Storage.
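If you do enable billing and stage the data in a bucket, the load can also be done programmatically instead of through the bq CLI. Below is a minimal Kotlin sketch using the google-cloud-bigquery client library; it assumes the dump has first been converted to a format BigQuery can load, such as gzipped CSV (a raw .sql.gz dump cannot be loaded directly), and the dataset, table, and bucket names are hypothetical:

import com.google.cloud.bigquery.BigQueryOptions
import com.google.cloud.bigquery.FormatOptions
import com.google.cloud.bigquery.JobInfo
import com.google.cloud.bigquery.LoadJobConfiguration
import com.google.cloud.bigquery.TableId

fun main() {
    // Uses Application Default Credentials for the billing-enabled project.
    val bigquery = BigQueryOptions.getDefaultInstance().service

    // Hypothetical destination table and source object; BigQuery reads
    // gzipped CSV straight out of Cloud Storage.
    val tableId = TableId.of("my_dataset", "my_table")
    val config = LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/data.csv.gz")
        .setFormatOptions(FormatOptions.csv())
        .build()

    // Start the load job and block until it completes.
    val job = bigquery.create(JobInfo.of(config))
    val completedJob = job.waitFor()
    println("Load finished with status: ${completedJob?.status}")
}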

Directly download from a link and upload file to GCS

Is there a way to download an MP4 file directly and store it in a Google Cloud Storage bucket? Our use case is to take a file URL, download the file, and upload it to the cloud. However, since the file size can be more than 1 GB, it is not feasible to download it to local storage first and then upload it to the cloud bucket. We are specifically looking at Google Cloud Storage for uploading files, and the solution should be specific to it.
We found some reference docs, but they do not look like feasible solutions, as they upload a file from local storage, not directly from a link.
https://googleapis.dev/ruby/google-cloud-storage/latest/Google/Cloud/Storage.html
https://www.mydatahack.com/uploading-and-downloading-files-in-s3-with-ruby/
Google Cloud Storage does not offer compute features. That means you cannot directly load an object into Cloud Storage from a URL. You must fetch the object and then upload it into Cloud Storage.
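The fetch-then-upload can at least be done as a single stream, so the file never needs to hit local disk. Here is a minimal Kotlin sketch using the google-cloud-storage Java client, assuming Application Default Credentials; the source URL, bucket, and object names are hypothetical:

import com.google.cloud.storage.BlobId
import com.google.cloud.storage.BlobInfo
import com.google.cloud.storage.StorageOptions
import java.net.URL

fun main() {
    // Hypothetical source link and destination object.
    val sourceUrl = URL("https://example.com/video.mp4")
    val blobId = BlobId.of("my-bucket", "videos/video.mp4")
    val blobInfo = BlobInfo.newBuilder(blobId).setContentType("video/mp4").build()

    // Uses Application Default Credentials.
    val storage = StorageOptions.getDefaultInstance().service

    // createFrom() reads the InputStream in chunks and uploads them as it
    // goes, so a 1 GB+ file is never written to local storage.
    sourceUrl.openStream().use { input ->
        storage.createFrom(blobInfo, input)
    }
}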

How to make OHIF look at S3 for loading studies

I have built an object storage plugin to store Orthanc data in an S3 bucket in legacy mode. I am now trying to eliminate Orthanc's local file storage and move it to S3 completely. I also have the OHIF viewer integrated, which is serving Orthanc data. How do I make it fetch from the S3 bucket? I have read that a DICOM JSON file can be used to do this, but I don't know how, because the JSON file has the URL of each instance in the S3 bucket. How do I generate this JSON file, if this is the way to do it?

Unable to upload VHD file to Azure Storage

My query is: how do we upload VHD files to Azure Storage? I used blob storage and selected page blob to upload the VHD file, but I am receiving this error:
RESPONSE Status: 400 Page blob is not supported for this account type.
Please advise. Thanks.
There are restrictions on page blobs: as the error says, they are not supported on every account type. Page blobs require a general-purpose storage account; blob-storage accounts, the kind that use access tiers, support only block and append blobs. Please refer to the official documentation, which has a clearer introduction to the access tiers:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers?tabs=azure-portal
You can set the access tier on the storage account in the Azure portal.

How to receive an uploaded file using node.js formidable library and save it to Amazon S3 using knox?

I would like to upload a form from a web page and directly save the file to S3 without first saving it to disk. This node.js app will be deployed to Heroku, where there is no local disk to save the file to.
The node-formidable library provides a great way to upload files and save them to disk. I am not sure how to stop formidable (or connect-form) from saving the file to disk first. The Knox library, on the other hand, provides a way to read a file from disk and save it on Amazon S3.
1) Is there a way to hook into formidable's events (on Data) to send the stream to Knox's events, so that I can directly save the uploaded file in my Amazon S3 bucket?
2) Are there any libraries or code snippets that can allow me to directly take the uploaded file and save it to Amazon S3 using node.js?
There is a similar question here but the answers there do not address NOT saving the file to disk.
It looks like there is no good way to do it. One reason might be that the node-formidable library always saves the uploaded file to disk; I could not find any option to do otherwise. The knox library then takes the saved file on disk and, using your Amazon S3 credentials, uploads it to Amazon.
Since I cannot save files locally on Heroku, I ended up using the Transloadit service. Though their authentication docs have a bit of a learning curve, I found the service useful.
For those who want to use Transloadit with node.js, the following code sample may help (the Transloadit page had only Ruby and PHP examples):
// Transloadit request signature: an HMAC-SHA1 of the request
// parameters, keyed with your account's auth secret, hex-encoded.
var crypto = require('crypto');

var signature = crypto.createHmac('sha1', 'auth secret')
    .update('some string')
    .digest('hex');

console.log(signature);
This is Andy, creator of AwsSum:
https://github.com/appsattic/node-awssum/
I just released v0.2.0 of this library. It uploads the files that were created by Express's bodyParser(), though as you say, this won't work on Heroku:
https://github.com/appsattic/connect-stream-s3
However, I shall be looking at adding the ability to stream from formidable directly to S3 in the next version (v0.3.0). For the moment, though, take a look and see if it can help. :)