Hi, I am new to Azure. I am trying to upload a file to an Azure container using:
static void UploadBlobFromFile(Uri blobEndpoint, string accountName, string accountKey)
{
    // Create a service client for credentialed access to the Blob service.
    CloudBlobClient blobClient =
        new CloudBlobClient(blobEndpoint,
                            new StorageCredentials(accountName, accountKey));

    // Get a reference to a container, which may or may not exist.
    CloudBlobContainer container = blobClient.GetContainerReference("StackOverflowAnalysis");

    // Create a new container, if it does not exist
    //container.CreateIfNotExist();

    // Get a reference to a blob, which may or may not exist.
    CloudBlockBlob blob = container.GetBlockBlobReference("QueryResults.csv");

    // Upload content to the blob, which will create the blob if it does not already exist.
    using (var fileStream = System.IO.File.OpenRead(@"c:\users\hmohamed\Downloads\QueryResults.csv"))
    {
        blob.UploadFromStream(fileStream);
    }
}
I am getting the error "400 Bad Request". I am trying this in an MVC app; I have also tried it in a console application, where I got the error "the process cannot access the file because it is being used by another process". Responses to similar posts advise running the netstat command to fix the problem, but I do not know how to use it or what parameters to supply. Can someone please help?
All letters in a container name must be lowercase. So, please use "stackoverflowanalysis" as your container name.
For more information on naming, please refer to Naming and Referencing Containers, Blobs, and Metadata.
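For example, with the code above only the container name needs to change (a minimal sketch, assuming the 2.x+ storage client library where the create method is CreateIfNotExists):

CloudBlobContainer container = blobClient.GetContainerReference("stackoverflowanalysis");
container.CreateIfNotExists(); // no longer returns 400 once the name is valid

As for the "process cannot access the file" error in the console application: netstat will not help here, since it lists network connections, not file locks. If another program (Excel, for example) still has the CSV open, opening the file with explicit share flags often works:

// Hedged sketch: allow the read even while another process has the file open.
using (var fileStream = new System.IO.FileStream(
    @"c:\users\hmohamed\Downloads\QueryResults.csv",
    System.IO.FileMode.Open, System.IO.FileAccess.Read, System.IO.FileShare.ReadWrite))
{
    blob.UploadFromStream(fileStream);
}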
We are setting a key for the storage account and then using it to access the contents as below:
var storageCredentials = new StorageCredentials(mediaStorageAccountName, base64EncodedKey);
var storageAccount = new CloudStorageAccount(storageCredentials, true);
var connString = storageAccount.ToString(true);
Then we use the same "storageAccount" to create the blob client:
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
And to get the container:
var container = blobClient.GetContainerReference(ContainerName);
"storageAccount" Credential properties are "IsSAS" FALSE, "IsSharedKey" TRUE, "IsToken" FALSE and "KeyName" is NULL.
But, when Blob is being accessed with OpenReadAsync, its failing with following exception;
The remote server returned an error: (403) Forbidden.
   at Microsoft.WindowsAzure.Storage.Core.Executor.Executor.EndExecuteAsync[T](IAsyncResult result)
   at Microsoft.WindowsAzure.Storage.Blob.CloudBlob.EndExists(IAsyncResult asyncResult)
   at Microsoft.WindowsAzure.Storage.Core.Util.AsyncExtensions.<>c__DisplayClass2`1.b__0(IAsyncResult ar)
It is getting all the references to containers/blobs correctly (the names come back right), but any attempt to read, download, or upload then fails.
Also, constructing the "storageAccount" explicitly as follows gives the same exception:
CloudStorageAccount storageAccount = new CloudStorageAccount(
    new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials(storageAccountName, base64EncodedKey), true);
What is wrong here, and how can it be fixed?
Why is KeyName NULL? Is that causing this issue?
A 403 Forbidden exception is often caused by using the wrong access key, so double-check that first. (A null KeyName is expected when StorageCredentials is constructed from just the account name and key, so that by itself is not the problem.)
Since you are using Shared Key authorization, all authorized requests must include a Coordinated Universal Time (UTC) timestamp. You can specify the timestamp either in the x-ms-date header or in the standard HTTP/HTTPS Date header.
The storage services ensure that a request is no older than 15 minutes by the time it reaches the service. This guards against certain security attacks, including replay attacks. When this check fails, the server returns response code 403 (Forbidden).
So, review your server's date and time.
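A quick way to check for clock skew (a minimal sketch of my own, not from the SDK; "youraccount" is a placeholder) is to compare the local UTC clock with the Date header the storage endpoint returns:

// Requires: using System; using System.Net.Http; using System.Threading.Tasks;
static async Task CheckClockSkewAsync()
{
    using (var http = new HttpClient())
    {
        // Any response from the endpoint, even an error, carries the service clock in its Date header.
        HttpResponseMessage response = await http.GetAsync("https://youraccount.blob.core.windows.net/");
        DateTimeOffset serverTime = response.Headers.Date ?? DateTimeOffset.UtcNow;
        TimeSpan skew = DateTimeOffset.UtcNow - serverTime;
        // More than ~15 minutes of skew makes Shared Key requests fail with 403.
        Console.WriteLine($"Clock skew vs. storage service: {skew.TotalMinutes:F1} minutes");
    }
}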
I've tried to follow the instructions in this document: LINK
I used SAS authentication and added "x-ms-rename-source" to the request header, but I kept getting the error "403 - AuthorizationPermissionMismatch". I'm doing fine with all the other API methods, but this one seems really tricky. Has anyone had success renaming a file or directory with it?
Instead of using SAS authentication, I used authorization headers. You can check it here.
My request headers:
DateTime now = DateTime.UtcNow;
requestMessage.Headers.Add("x-ms-date", now.ToString("R", CultureInfo.InvariantCulture));
requestMessage.Headers.Add("x-ms-version", "2018-11-09");
// the source path you want to rename
requestMessage.Headers.Add("x-ms-rename-source", renameSourcePath);
// the rename operation only accepts shared key authorization via this header
requestMessage.Headers.Authorization = AzureStorageAuthenticationHelper.GetAuthorizationHeader(
    StorageGen2AccountName, StorageGen2AccountKey, now, requestMessage);
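For context, a hedged sketch of how that requestMessage might be constructed and sent (not verified against the docs; "myfilesystem" and "newname.txt" are placeholders): the rename is a PUT against the destination path, with the source carried in the x-ms-rename-source header.

// (inside an async method)
var requestMessage = new HttpRequestMessage(
    HttpMethod.Put,
    $"https://{StorageGen2AccountName}.dfs.core.windows.net/myfilesystem/newname.txt");
// ...add the four headers shown above, then send:
using (var client = new HttpClient())
{
    HttpResponseMessage response = await client.SendAsync(requestMessage);
    Console.WriteLine(response.StatusCode); // 201 Created on success
}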
You can also try renaming the file in Blob Storage by using the Storage Explorer tool.
Kindly let us know if the above helps or you need further assistance on this issue.
All examples I could find just pass the url to babylonjs and it does a GET internally.
I need to either add an authentication header to this request, or just do the loading myself and pass the byte array to babylonjs.
How is authentication supposed to work with babylonjs?
Hello, in this case you can just do the loading yourself, then create a Blob and use URL.createObjectURL to refer to it:
var blob = new Blob([data]);
var url = window.URL.createObjectURL(blob);
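For example, a hedged sketch using fetch with an authentication header (the URL, header, and token variable are placeholders, and the snippet assumes an async context):

const response = await fetch("https://example.com/scene.babylon", {
    headers: { "Authorization": "Bearer " + token } // token obtained elsewhere
});
const data = await response.arrayBuffer();
const url = URL.createObjectURL(new Blob([data]));
// Pass the extension explicitly, since a blob URL has no file extension
// for the loader to sniff the plugin from.
BABYLON.SceneLoader.Append("", url, scene, undefined, undefined, undefined, ".babylon");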
I am trying to use the Azure File Service preview as a mapped drive between my two instances of a cloud service and I came across the following blog post with some details:
http://blogs.msdn.com/b/windowsazurestorage/archive/2014/05/12/introducing-microsoft-azure-file-service.aspx?CommentPosted=true#commentmessage
I have signed up for the preview version of the storage account and created a new storage account and verified that the file endpoint is included. I then use the following code to attempt to create the share programmatically:
CloudStorageAccount account = CloudStorageAccount.Parse(
    System.Configuration.ConfigurationManager.AppSettings["SecondaryStorageConnectionString"].ToString());
CloudFileClient client = account.CreateCloudFileClient();
CloudFileShare share = client.GetShareReference("SCORM");
try
{
    share.CreateIfNotExistsAsync().Wait();
}
catch (AggregateException e)
{
    var test = e.Message;
    var test1 = e.InnerException.Message;
}
On the CreateIfNotExistsAsync().Wait() call I am getting an AggregateException, and when I look at the inner details it just says "The remote server returned an error: (400) Bad Request."
Your comment is the correct answer: share names must be all lowercase. Please refer to the Naming and Referencing Shares, Directories, Files, and Metadata article for more information.
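A minimal sketch of the fix (share names may contain only lowercase letters, numbers, and hyphens, and must be 3-63 characters long):

CloudFileShare share = client.GetShareReference("scorm"); // was "SCORM"
share.CreateIfNotExistsAsync().Wait();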
The scenario is as follows:
A WCF web service exists that outputs a valid GeoRSS feed. It lives on its own domain, since a number of different applications access it.
A web page (on a different site) has been created with an instance of a VEMap (Bing/Virtual Earth map object).
Now, VEMap can accept an input feed in this format via the following:
var layer = new VEShapeLayer();
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "someurl", layer);
map.ImportShapeLayerData(veLayerSpec, onComplete, true);
onComplete is a callback function I'm using to replace the default pin graphic with something custom.
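(For illustration, a hedged sketch of such a callback using the Virtual Earth 6.x shape API; "custom-pin.png" is a placeholder icon:)

function onComplete(layer) {
    // Swap the default pushpin icon on every imported shape.
    for (var i = 0; i < layer.GetShapeCount(); i++) {
        layer.GetShapeByIndex(i).SetCustomIcon("custom-pin.png");
    }
}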
The question is in regards to "someurl", which is a path to a local XML file containing the geographic information (GeoRSS simple format). I've realized this feed and the map must be hosted in the same domain, so I've created a generic handler that reads the remote feed and returns it in the same format.
var veLayerSpec = new VEShapeSourceSpecification(VEDataType.GeoRSS, "/somelocalhandler.ashx", layer);
When I do this, I get the VEMap error ("z is null"). This is the same error one would receive when trying to access a remote feed. When I copy the feed into a local XML file (i.e., "feed.xml"), there is no error.
The order of operations is currently: remote feed -> local handler -> VEMap import
If I'm over complicating this procedure, let me know! I'm a bit new to the Bing Maps API and might have missed something. Any assistance is appreciated.
The format I have above is actually very close to what I needed; a similar solution was found by Mike McDougall. Although I was already passing the RSS feed directly through the handler (writing the read stream directly), I just needed to specify the following from within the handler:
context.Response.ContentType = "text/xml";
context.Response.ContentEncoding = System.Text.Encoding.UTF8;
With the above fix, I'm able to have a remote GeoRSS feed successfully load a separately hosted Virtual Earth map instance.
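For completeness, a hedged sketch of such a pass-through handler (the remote feed URL is a placeholder):

public class FeedProxyHandler : System.Web.IHttpHandler
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        // The two headers that fixed the "z is null" error:
        context.Response.ContentType = "text/xml";
        context.Response.ContentEncoding = System.Text.Encoding.UTF8;

        using (var client = new System.Net.WebClient())
        {
            // Read the remote GeoRSS feed and write it straight through.
            context.Response.Write(client.DownloadString("http://example.com/georss-feed"));
        }
    }

    public bool IsReusable { get { return false; } }
}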