I'm using an Azure SAS URL to upload a file to a blob storage:
var blockBlob = new Microsoft.WindowsAzure.Storage.Blob.CloudBlockBlob(new System.Uri(sasUrl));
blockBlob.UploadFromFile(filePath);
The file exists on my disk, and the URL should be correct since it is automatically retrieved from the Windows Store Ingestion API (and, if I slightly change one character in the URL's signature part, the upload fails with HTTP 403).
However, when checking
var blobs = blockBlob.Container.ListBlobs();
the result is Count = 0, so I'm wondering whether the upload was successful. Unfortunately, the UploadFromFile method (like the UploadFromStream method) returns void, so I'm not sure how to retrieve the upload's result.
If I try to connect to the SAS URL using Azure Storage Explorer, listing blob containers fails with the error "Authentication Error. Signature fields not well formed". I tried URL-escaping the URL's signature part, since that seems to be the reason for this error in some similar cases, but that doesn't solve the problem.
Is there any way to check the status of a blob upload? Does anybody have an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) cannot be connected to using Azure Storage Explorer?
Please examine the sp field of your SAS. It shows the permissions you are granted on the blob. For example, sp=rw means you can read the blob and write content to it using this SAS; sp=w means you can only write content to the blob using this SAS.
If you have the read permission, you can paste the SAS URL into the browser address bar. The browser will download or display the blob content for you.
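For example, a quick way to inspect the sp field in code, assuming sasUrl holds the full SAS URL and System.Web is referenced (just a small sketch, not part of the upload itself):
var sasQuery = System.Web.HttpUtility.ParseQueryString(new System.Uri(sasUrl).Query);
var permissions = sasQuery["sp"]; // e.g. "rw" = read + write, "w" = write only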
Is there any way to check the status of a blob upload?
If no exception is thrown from your code, the blob has been uploaded successfully; if the upload fails, an exception will be thrown.
try
{
blockBlob.UploadFromFile(filePath);
}
catch(Exception ex)
{
// upload failed
}
You can also confirm it using any web debugging proxy tool (e.g. Fiddler) to capture the response message from the storage server. A 201 Created status code will be returned if the blob has been uploaded successfully.
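If you'd rather check the status code from code instead of a proxy, a minimal sketch using an OperationContext (part of the same Microsoft.WindowsAzure.Storage SDK; reusing blockBlob and filePath from the question) could look like this:
var context = new Microsoft.WindowsAzure.Storage.OperationContext();
blockBlob.UploadFromFile(filePath, accessCondition: null, options: null, operationContext: context);
// LastResult describes the final request sent for this operation
if (context.LastResult.HttpStatusCode == 201)
{
    // 201 Created: the blob (or its block list) was stored successfully
}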
Has anybody an idea why an auto-generated URL (delivered by one of Microsoft's official APIs) can not be connected to using Azure Explorer?
Azure Storage Explorer only allows us to connect to a storage account using a SAS, or to attach a storage service (blob container, queue, or table) using a SAS. It doesn't allow us to connect to an individual blob item using a SAS.
In the case of a synchronous upload, besides the exception-based approach we can also cross-check blockBlob.Properties.Length. Before the file is uploaded its value is -1; after the upload completes, it becomes the size of the uploaded file.
So we can add a check on the blob length, which will tell us the state of the upload.
try
{
blockBlob.UploadFromFile(filePath);
if(blockBlob.Properties.Length >= 0)
{
// File uploaded successfully
// You can take any action.
}
}
catch(Exception ex)
{
// upload failed
}
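If the SAS also grants read permission (sp contains "r"), you can additionally confirm the blob on the service side after the upload; a small sketch reusing blockBlob from above:
// Refreshes the blob's properties from the service; requires read permission on the SAS
blockBlob.FetchAttributes();
Console.WriteLine("Blob size on the server: " + blockBlob.Properties.Length + " bytes");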
Related
I am using Azure.Storage.DataMovement to upload large files to a storage account in an ASP.NET Web Forms 4.5 application. I don't see a way to set the metadata of the uploaded blob without getting a blob reference first. What is the best way to do it using this class? The blob options expose a Metadata property, but TransferManager.UploadAsync(....) does not.
Any pointer is appreciated.
Regards
Sunny
Actually, the TransferManager.UploadAsync() method does not support setting metadata directly. You can go through the source code for more details.
Instead, we need to get a reference to the blob first -> then set the metadata -> then upload it via the TransferManager.UploadAsync() method. Like below:
//other code
//get a blob reference
CloudBlockBlob destBlob = blobContainer.GetBlockBlobReference("myblob.txt");
//set metadata here
destBlob.Metadata.Add("xxx", "yyy");
//other code
//then upload the file (await in an async method, or .Wait() otherwise);
//sourceFilePath stands for the local path of the file to upload
await TransferManager.UploadAsync(sourceFilePath, destBlob);
I'm uploading files to Azure Blob Storage using the azure-storage Java SDK version 8.6.5. If I upload a file from the web console, I see a Content-MD5 value.
But I do not see a Content-MD5 value when I upload using the following sample code:
BlobRequestOptions blobRequestOptions = new BlobRequestOptions();
blobRequestOptions.setStoreBlobContentMD5(true);
cloudBlockBlob.uploadBlock(blockId, inputstream, length, null, blobRequestOptions, null);
The file is split into multiple chunks and uploaded in multiple parallel threads, and finally the block list is committed as follows. The file upload is working fine.
cloudBlockBlob.commitBlockList(blockIds, null, blobRequestOptions, null);
Any pointers would be greatly appreciated, thanks!
Also, any ideas on the best way to check file integrity programmatically and to ensure the file is uploaded correctly if Content-MD5 is not available? Does Azure Blob Storage support anything for content verification?
If you want to get the Content-MD5 value after you have uploaded a file successfully, just try the code below:
cloudBlockBlob.getProperties().getContentMD5()
If you are still missing the content-MD5 value, this link could be helpful.
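If the property stays empty because the blocks are uploaded individually, one approach (a sketch based on the assumption that a Content-MD5 set on the blob properties is sent along with commitBlockList; localFile is a java.io.File for the file being uploaded) is to compute the MD5 yourself, store it on the blob, and compare it later:
// Compute the Base64-encoded MD5 of the whole local file
java.security.MessageDigest md = java.security.MessageDigest.getInstance("MD5");
try (java.io.InputStream in = new java.io.FileInputStream(localFile)) {
    byte[] buffer = new byte[8192];
    int read;
    while ((read = in.read(buffer)) != -1) {
        md.update(buffer, 0, read);
    }
}
String localMd5 = java.util.Base64.getEncoder().encodeToString(md.digest());

// Set it on the blob properties before committing the block list
cloudBlockBlob.getProperties().setContentMD5(localMd5);
cloudBlockBlob.commitBlockList(blockIds, null, blobRequestOptions, null);

// Later, refresh the attributes and compare with the locally computed value
cloudBlockBlob.downloadAttributes();
boolean intact = localMd5.equals(cloudBlockBlob.getProperties().getContentMD5());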
There is an upload object which is returned by AWS on file upload, and that upload object contains the bytes transferred so far.
How do I inject an object into the Play Framework session so that it can be retrieved in the next AJAX call to get the status of the file upload?
Is there a way to get the bytes transferred from the AWS API by providing the file access key or a unique file key in the next AJAX call after the file upload?
Thanks.
1) Play's session doesn't work this way: it's based on cookies, and there is no server-side storage out of the box (everything you set in a user's session ends up in a cookie), so you need to handle that yourself.
I would set a random UUID as the session ID, and use a backend store that keeps a data blob keyed on that ID.
2) Sure, but you need to handle that yourself. AWS's API is async, so you get a handle on upload, and you use that later on to see the status.
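To illustrate both points, a rough sketch assuming the AWS SDK for Java v1 TransferManager, with an in-memory map standing in for the backend store (uploads, bucketName, key and filePath are made-up names for this example):
// assumes com.amazonaws.services.s3.transfer.{TransferManager, TransferManagerBuilder, Upload}
ConcurrentHashMap<String, Upload> uploads = new ConcurrentHashMap<>();
TransferManager tm = TransferManagerBuilder.standard().build();

// on the upload request: start the transfer and remember the handle under a generated ID
String uploadId = UUID.randomUUID().toString();
uploads.put(uploadId, tm.upload(bucketName, key, new File(filePath)));
// return uploadId to the client, e.g. in the AJAX response or your own session store

// on a later AJAX call: look the handle up by its ID and report progress
Upload upload = uploads.get(uploadId);
long bytesSoFar = upload.getProgress().getBytesTransferred();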
Is there any other way to insert data into BigQuery via the API, apart from streaming data, i.e. Table.insertAll?
InsertAllResponse response = bigquery.insertAll(InsertAllRequest.newBuilder(tableId)
.addRow("rowId", rowContent)
.build());
As you can see in the docs, you also have 2 other possibilities:
Loading from Google Cloud Storage, Bigtable, or Datastore
Just run the jobs.insert method from the jobs resource and set the field configuration.load.sourceUri in the job metadata.
In the Python Client, this is done in the method LoadTableFromStorageJob.
You can therefore just send your files to GCS for instance and then have an API call to bring the files to BigQuery.
Media Upload
This is also a load job, but this time the HTTP request also carries the binary content of a file on your machine. So you can pretty much send any file that you have on your disk with this request (given the format is accepted by BQ).
In Python, this is done via the table resource's Table.upload_from_file method.
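For the first option, a rough sketch using the same Java client as in the question (the GCS path and format are placeholders, and the exact builder names can vary between client versions):
// Load a file that already sits in Google Cloud Storage into a BigQuery table
LoadJobConfiguration loadConfig =
    LoadJobConfiguration.newBuilder(tableId, "gs://my-bucket/my-data.json")
        .setFormatOptions(FormatOptions.json())
        .build();
Job loadJob = bigquery.create(JobInfo.of(loadConfig));
loadJob = loadJob.waitFor(); // blocks until the load job finishes
if (loadJob.getStatus().getError() != null) {
    // handle the load error here
}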
I want to use the BreezeJS API for storing data in local storage (IndexedDB or WebSQL) and I also want to sync the local data with SQL Server.
But I have failed to achieve this and am also not able to find a sample app of this type using BreezeJS, Knockout and the MVC Web API.
My requirements are:
1) If the internet is available, the data will come from SQL Server by using the MVC Web API.
2) If the internet is down, the application will retrieve data from cached local storage (IndexedDB or WebSQL).
3) As soon as the internet is back, the local data will sync to SQL Server.
Please let me know whether I can achieve these requirements by using the BreezeJS API or not.
If yes, please provide me some links and a sample.
If no, what else can we use to achieve this type of requirement?
Thanks.
Please help me to meet this requirement.
You can do this, but I would suggest simply using localStorage. Basically, every time you read from the server or save to the server, you export the entities and save that to local storage. Then, when you need to read in the data, if the server is unreachable, you read the data from localStorage and use importEntities to get it into the manager and then query locally.
function getData() {
var query = breeze.EntityQuery
.from("{YourAPI}");
manager.executeQuery(query).then(saveLocallyAndReturnPromise)
.fail(tryLocalRestoreAndReturnPromise);
// If query was successful remotely, then save the data in case connection
// is lost
function saveLocallyAndReturnPromise(data) {
// Should add error handling here. This code
// assumes this local processing will be successful.
var cacheData = manager.exportEntities()
window.localStorage.setItem('savedCache',cacheData);
// return queried data as a promise so that this detour is
// transparent to viewmodel
return Q(data);
}
function tryLocalRestoreAndReturnPromise(error) {
// Assume any error just means the server is inaccessible.
// Simplified for example, but more robust error handling is
// warranted
var cacheData = window.localStorage.getItem('savedCache');
// NOTE: should handle empty saved cache here by throwing error;
manager.importEntities(cacheData); // restore saved cache
var localQuery = query.using(breeze.FetchStrategy.FromLocalCache);
return manager.executeQuery(localQuery); // this is a promise
}
}
This is a code skeleton for simplicity. You should catch and handle errors, add an isConnected function to determine connectivity, etc.
If you are doing editing locally, there are a few more hoops to jump through. Every time you make a change to the cache, you will need to export either the whole cache or the changes (probably depending on the size of the cache). When there is a connection, you will need to test for local changes first and, if found, save them to the server before requerying the server. In addition, any schema changes made while offline complicate matters tremendously, so be aware of that.
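For example, a minimal sketch of that save-local-changes-first flow, reusing the manager from above (the 'savedChanges' key and the isConnected helper are assumptions of this example):
function saveOrQueueChanges() {
    if (!manager.hasChanges()) return Q();
    if (isConnected()) {
        // Online: push local edits to the server before any requery
        return manager.saveChanges();
    }
    // Offline: export only the pending changes and stash them locally
    var pendingChanges = manager.exportEntities(manager.getChanges());
    window.localStorage.setItem('savedChanges', pendingChanges);
    return Q();
}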
Hope this helps. A robust implementation is a bit more complex, but this should give you a starting point.